Changes in 5.15.33
Revert "swiotlb: rework "fix info leak with DMA_FROM_DEVICE""
USB: serial: pl2303: add IBM device IDs
dt-bindings: usb: hcd: correct usb-device path
USB: serial: pl2303: fix GS type detection
USB: serial: simple: add Nokia phone driver
mm: kfence: fix missing objcg housekeeping for SLAB
hv: utils: add PTP_1588_CLOCK to Kconfig to fix build
HID: logitech-dj: add new lightspeed receiver id
HID: Add support for open wheel and no attachment to T300
xfrm: fix tunnel model fragmentation behavior
ARM: mstar: Select HAVE_ARM_ARCH_TIMER
virtio_console: break out of buf poll on remove
vdpa/mlx5: should verify CTRL_VQ feature exists for MQ
tools/virtio: fix virtio_test execution
ethernet: sun: Free the coherent when failing in probing
gpio: Revert regression in sysfs-gpio (gpiolib.c)
spi: Fix invalid sgs value
net:mcf8390: Use platform_get_irq() to get the interrupt
Revert "gpio: Revert regression in sysfs-gpio (gpiolib.c)"
spi: Fix erroneous sgs value with min_t()
Input: zinitix - do not report shadow fingers
af_key: add __GFP_ZERO flag for compose_sadb_supported in function pfkey_register
net: dsa: microchip: add spi_device_id tables
selftests: vm: fix clang build error multiple output files
locking/lockdep: Avoid potential access of invalid memory in lock_class
drm/amdgpu: move PX checking into amdgpu_device_ip_early_init
drm/amdgpu: only check for _PR3 on dGPUs
iommu/iova: Improve 32-bit free space estimate
virtio-blk: Use blk_validate_block_size() to validate block size
tpm: fix reference counting for struct tpm_chip
usb: typec: tipd: Forward plug orientation to typec subsystem
USB: usb-storage: Fix use of bitfields for hardware data in ene_ub6250.c
xhci: fix garbage USBSTS being logged in some cases
xhci: fix runtime PM imbalance in USB2 resume
xhci: make xhci_handshake timeout for xhci_reset() adjustable
xhci: fix uninitialized string returned by xhci_decode_ctrl_ctx()
mei: me: disable driver on the ign firmware
mei: me: add Alder Lake N device id.
mei: avoid iterator usage outside of list_for_each_entry
bus: mhi: pci_generic: Add mru_default for Quectel EM1xx series
bus: mhi: Fix MHI DMA structure endianness
docs: sphinx/requirements: Limit jinja2<3.1
coresight: Fix TRCCONFIGR.QE sysfs interface
coresight: syscfg: Fix memleak on registration failure in cscfg_create_device
iio: afe: rescale: use s64 for temporary scale calculations
iio: inkern: apply consumer scale on IIO_VAL_INT cases
iio: inkern: apply consumer scale when no channel scale is available
iio: inkern: make a best effort on offset calculation
greybus: svc: fix an error handling bug in gb_svc_hello()
clk: rockchip: re-add rational best approximation algorithm to the fractional divider
clk: uniphier: Fix fixed-rate initialization
ptrace: Check PTRACE_O_SUSPEND_SECCOMP permission on PTRACE_SEIZE
cifs: fix handlecache and multiuser
cifs: we do not need a spinlock around the tree access during umount
KEYS: fix length validation in keyctl_pkey_params_get_2()
KEYS: asymmetric: enforce that sig algo matches key algo
KEYS: asymmetric: properly validate hash_algo and encoding
Documentation: add link to stable release candidate tree
Documentation: update stable tree link
firmware: stratix10-svc: add missing callback parameter on RSU
firmware: sysfb: fix platform-device leak in error path
HID: intel-ish-hid: Use dma_alloc_coherent for firmware update
SUNRPC: avoid race between mod_timer() and del_timer_sync()
NFS: NFSv2/v3 clients should never be setting NFS_CAP_XATTR
NFSD: prevent underflow in nfssvc_decode_writeargs()
NFSD: prevent integer overflow on 32 bit systems
f2fs: fix to unlock page correctly in error path of is_alive()
f2fs: quota: fix loop condition at f2fs_quota_sync()
f2fs: fix to do sanity check on .cp_pack_total_block_count
remoteproc: Fix count check in rproc_coredump_write()
mm/mlock: fix two bugs in user_shm_lock()
pinctrl: ingenic: Fix regmap on X series SoCs
pinctrl: samsung: drop pin banks references on error paths
net: bnxt_ptp: fix compilation error
spi: mxic: Fix the transmit path
mtd: rawnand: protect access to rawnand devices while in suspend
can: ems_usb: ems_usb_start_xmit(): fix double dev_kfree_skb() in error path
can: m_can: m_can_tx_handler(): fix use after free of skb
can: usb_8dev: usb_8dev_start_xmit(): fix double dev_kfree_skb() in error path
jffs2: fix use-after-free in jffs2_clear_xattr_subsystem
jffs2: fix memory leak in jffs2_do_mount_fs
jffs2: fix memory leak in jffs2_scan_medium
mm: fs: fix lru_cache_disabled race in bh_lru
mm/pages_alloc.c: don't create ZONE_MOVABLE beyond the end of a node
mm: invalidate hwpoison page cache page in fault path
mempolicy: mbind_range() set_policy() after vma_merge()
scsi: core: sd: Add silence_suspend flag to suppress some PM messages
scsi: ufs: Fix runtime PM messages never-ending cycle
scsi: scsi_transport_fc: Fix FPIN Link Integrity statistics counters
scsi: libsas: Fix sas_ata_qc_issue() handling of NCQ NON DATA commands
qed: display VF trust config
qed: validate and restrict untrusted VFs vlan promisc mode
riscv: dts: canaan: Fix SPI3 bus width
riscv: Fix fill_callchain return value
riscv: Increase stack size under KASAN
Revert "Input: clear BTN_RIGHT/MIDDLE on buttonpads"
cifs: prevent bad output lengths in smb2_ioctl_query_info()
cifs: fix NULL ptr dereference in smb2_ioctl_query_info()
ALSA: cs4236: fix an incorrect NULL check on list iterator
ALSA: hda: Avoid unsol event during RPM suspending
ALSA: pcm: Fix potential AB/BA lock with buffer_mutex and mmap_lock
ALSA: hda/realtek: Fix audio regression on Mi Notebook Pro 2020
rtc: mc146818-lib: fix locking in mc146818_set_time
rtc: pl031: fix rtc features null pointer dereference
ocfs2: fix crash when mount with quota enabled
drm/simpledrm: Add "panel orientation" property on non-upright mounted LCD panels
mm: madvise: skip unmapped vma holes passed to process_madvise
mm: madvise: return correct bytes advised with process_madvise
Revert "mm: madvise: skip unmapped vma holes passed to process_madvise"
mm,hwpoison: unmap poisoned page before invalidation
mm/kmemleak: reset tag when compare object pointer
dm stats: fix too short end duration_ns when using precise_timestamps
dm: fix use-after-free in dm_cleanup_zoned_dev()
dm: interlock pending dm_io and dm_wait_for_bios_completion
dm: fix double accounting of flush with data
dm integrity: set journal entry unused when shrinking device
tracing: Have trace event string test handle zero length strings
drbd: fix potential silent data corruption
powerpc/kvm: Fix kvm_use_magic_page
PCI: fu740: Force 2.5GT/s for initial device probe
arm64: signal: nofpsimd: Do not allocate fp/simd context when not available
arm64: Do not defer reserve_crashkernel() for platforms with no DMA memory zones
arm64: dts: qcom: sm8250: Fix MSI IRQ for PCIe1 and PCIe2
arm64: dts: ti: k3-am65: Fix gic-v3 compatible regs
arm64: dts: ti: k3-j721e: Fix gic-v3 compatible regs
arm64: dts: ti: k3-j7200: Fix gic-v3 compatible regs
arm64: dts: ti: k3-am64: Fix gic-v3 compatible regs
ASoC: SOF: Intel: Fix NULL ptr dereference when ENOMEM
Revert "ACPI: Pass the same capabilities to the _OSC regardless of the query flag"
ACPI: properties: Consistently return -ENOENT if there are no more references
coredump: Also dump first pages of non-executable ELF libraries
ext4: fix ext4_fc_stats trace point
ext4: fix fs corruption when tring to remove a non-empty directory with IO error
ext4: make mb_optimize_scan performance mount option work with extents
drivers: hamradio: 6pack: fix UAF bug caused by mod_timer()
samples/landlock: Fix path_list memory leak
landlock: Use square brackets around "landlock-ruleset"
mailbox: tegra-hsp: Flush whole channel
block: limit request dispatch loop duration
block: don't merge across cgroup boundaries if blkcg is enabled
drm/edid: check basic audio support on CEA extension block
fbdev: Hot-unplug firmware fb devices on forced removal
video: fbdev: sm712fb: Fix crash in smtcfb_read()
video: fbdev: atari: Atari 2 bpp (STe) palette bugfix
rfkill: make new event layout opt-in
ARM: dts: at91: sama7g5: Remove unused properties in i2c nodes
ARM: dts: at91: sama5d2: Fix PMERRLOC resource size
ARM: dts: exynos: fix UART3 pins configuration in Exynos5250
ARM: dts: exynos: add missing HDMI supplies on SMDK5250
ARM: dts: exynos: add missing HDMI supplies on SMDK5420
mgag200 fix memmapsl configuration in GCTL6 register
carl9170: fix missing bit-wise or operator for tx_params
pstore: Don't use semaphores in always-atomic-context code
thermal: int340x: Increase bitmap size
lib/raid6/test: fix multiple definition linking error
exec: Force single empty string when argv is empty
crypto: rsa-pkcs1pad - only allow with rsa
crypto: rsa-pkcs1pad - correctly get hash from source scatterlist
crypto: rsa-pkcs1pad - restore signature length check
crypto: rsa-pkcs1pad - fix buffer overread in pkcs1pad_verify_complete()
bcache: fixup multiple threads crash
PM: domains: Fix sleep-in-atomic bug caused by genpd_debug_remove()
DEC: Limit PMAX memory probing to R3k systems
media: gpio-ir-tx: fix transmit with long spaces on Orange Pi PC
media: venus: hfi_cmds: List HDR10 property as unsupported for v1 and v3
media: venus: venc: Fix h264 8x8 transform control
media: davinci: vpif: fix unbalanced runtime PM get
media: davinci: vpif: fix unbalanced runtime PM enable
btrfs: zoned: mark relocation as writing
btrfs: extend locking to all space_info members accesses
btrfs: verify the tranisd of the to-be-written dirty extent buffer
xtensa: define update_mmu_tlb function
xtensa: fix stop_machine_cpuslocked call in patch_text
xtensa: fix xtensa_wsr always writing 0
drm/syncobj: flatten dma_fence_chains on transfer
drm/nouveau/backlight: Fix LVDS backlight detection on some laptops
drm/nouveau/backlight: Just set all backlight types as RAW
drm/fb-helper: Mark screen buffers in system memory with FBINFO_VIRTFB
brcmfmac: firmware: Allocate space for default boardrev in nvram
brcmfmac: pcie: Release firmwares in the brcmf_pcie_setup error path
brcmfmac: pcie: Declare missing firmware files in pcie.c
brcmfmac: pcie: Replace brcmf_pcie_copy_mem_todev with memcpy_toio
brcmfmac: pcie: Fix crashes due to early IRQs
drm/i915/opregion: check port number bounds for SWSCI display power state
drm/i915/gem: add missing boundary check in vm_access
PCI: imx6: Allow to probe when dw_pcie_wait_for_link() fails
PCI: pciehp: Clear cmd_busy bit in polling mode
PCI: xgene: Revert "PCI: xgene: Fix IB window setup"
regulator: qcom_smd: fix for_each_child.cocci warnings
selinux: access superblock_security_struct in LSM blob way
selinux: check return value of sel_make_avc_files
crypto: ccp - Ensure psp_ret is always init'd in __sev_platform_init_locked()
hwrng: cavium - Check health status while reading random data
hwrng: cavium - HW_RANDOM_CAVIUM should depend on ARCH_THUNDER
crypto: sun8i-ss - really disable hash on A80
crypto: authenc - Fix sleep in atomic context in decrypt_tail
crypto: mxs-dcp - Fix scatterlist processing
selinux: Fix selinux_sb_mnt_opts_compat()
thermal: int340x: Check for NULL after calling kmemdup()
crypto: octeontx2 - remove CONFIG_DM_CRYPT check
spi: tegra114: Add missing IRQ check in tegra_spi_probe
spi: tegra210-quad: Fix missin IRQ check in tegra_qspi_probe
stack: Constrain and fix stack offset randomization with Clang builds
arm64/mm: avoid fixmap race condition when create pud mapping
blk-cgroup: set blkg iostat after percpu stat aggregation
selftests/x86: Add validity check and allow field splitting
selftests/sgx: Treat CC as one argument
crypto: rockchip - ECB does not need IV
audit: log AUDIT_TIME_* records only from rules
EVM: fix the evm= __setup handler return value
crypto: ccree - don't attempt 0 len DMA mappings
crypto: hisilicon/sec - fix the aead software fallback for engine
spi: pxa2xx-pci: Balance reference count for PCI DMA device
hwmon: (pmbus) Add mutex to regulator ops
hwmon: (sch56xx-common) Replace WDOG_ACTIVE with WDOG_HW_RUNNING
nvme: cleanup __nvme_check_ids
nvme: fix the check for duplicate unique identifiers
block: don't delete queue kobject before its children
PM: hibernate: fix __setup handler error handling
PM: suspend: fix return value of __setup handler
spi: spi-zynqmp-gqspi: Handle error for dma_set_mask
hwrng: atmel - disable trng on failure path
crypto: sun8i-ss - call finalize with bh disabled
crypto: sun8i-ce - call finalize with bh disabled
crypto: amlogic - call finalize with bh disabled
crypto: gemini - call finalize with bh disabled
crypto: vmx - add missing dependencies
clocksource/drivers/timer-ti-dm: Fix regression from errata i940 fix
clocksource/drivers/exynos_mct: Refactor resources allocation
clocksource/drivers/exynos_mct: Handle DTS with higher number of interrupts
clocksource/drivers/timer-microchip-pit64b: Use notrace
clocksource/drivers/timer-of: Check return value of of_iomap in timer_of_base_init()
arm64: prevent instrumentation of bp hardening callbacks
KEYS: trusted: Fix trusted key backends when building as module
KEYS: trusted: Avoid calling null function trusted_key_exit
ACPI: APEI: fix return value of __setup handlers
crypto: ccp - ccp_dmaengine_unregister release dma channels
crypto: ccree - Fix use after free in cc_cipher_exit()
hwrng: nomadik - Change clk_disable to clk_disable_unprepare
hwmon: (pmbus) Add Vin unit off handling
clocksource: acpi_pm: fix return value of __setup handler
io_uring: don't check unrelated req->open.how in accept request
io_uring: terminate manual loop iterator loop correctly for non-vecs
watch_queue: Fix NULL dereference in error cleanup
watch_queue: Actually free the watch
f2fs: fix to enable ATGC correctly via gc_idle sysfs interface
sched/debug: Remove mpol_get/put and task_lock/unlock from sched_show_numa
sched/core: Export pelt_thermal_tp
sched/uclamp: Fix iowait boost escaping uclamp restriction
rseq: Remove broken uapi field layout on 32-bit little endian
perf/core: Fix address filter parser for multiple filters
perf/x86/intel/pt: Fix address filter config for 32-bit kernel
sched/fair: Improve consistency of allowed NUMA balance calculations
f2fs: fix missing free nid in f2fs_handle_failed_inode
nfsd: more robust allocation failure handling in nfsd_file_cache_init
sched/cpuacct: Fix charge percpu cpuusage
sched/rt: Plug rt_mutex_setprio() vs push_rt_task() race
f2fs: fix to avoid potential deadlock
btrfs: fix unexpected error path when reflinking an inline extent
f2fs: fix compressed file start atomic write may cause data corruption
selftests, x86: fix how check_cc.sh is being invoked
drivers/base/memory: add memory block to memory group after registration succeeded
kunit: make kunit_test_timeout compatible with comment
pinctrl: samsung: Remove EINT handler for Exynos850 ALIVE and CMGP gpios
media: staging: media: zoran: fix usage of vb2_dma_contig_set_max_seg_size
media: camss: csid-170: fix non-10bit formats
media: camss: csid-170: don't enable unused irqs
media: camss: csid-170: set the right HALT_CMD when disabled
media: camss: vfe-170: fix "VFE halt timeout" error
media: staging: media: imx: imx7-mipi-csis: Make subdev name unique
media: v4l2-mem2mem: Apply DST_QUEUE_OFF_BASE on MMAP buffers across ioctls
media: mtk-vcodec: potential dereference of null pointer
media: imx: imx8mq-mipi-csi2: remove wrong irq config write operation
media: imx: imx8mq-mipi_csi2: fix system resume
media: bttv: fix WARNING regression on tunerless devices
media: atmel: atmel-sama7g5-isc: fix ispck leftover
ASoC: sh: rz-ssi: Drop calling rz_ssi_pio_recv() recursively
ASoC: codecs: Check for error pointer after calling devm_regmap_init_mmio
ASoC: xilinx: xlnx_formatter_pcm: Handle sysclk setting
ASoC: simple-card-utils: Set sysclk on all components
media: coda: Fix missing put_device() call in coda_get_vdoa_data
media: meson: vdec: potential dereference of null pointer
media: hantro: Fix overfill bottom register field name
media: ov6650: Fix set format try processing path
media: v4l: Avoid unaligned access warnings when printing 4cc modifiers
media: ov5648: Don't pack controls struct
media: aspeed: Correct value for h-total-pixels
video: fbdev: matroxfb: set maxvram of vbG200eW to the same as vbG200 to avoid black screen
video: fbdev: controlfb: Fix COMPILE_TEST build
video: fbdev: smscufx: Fix null-ptr-deref in ufx_usb_probe()
video: fbdev: atmel_lcdfb: fix an error code in atmel_lcdfb_probe()
video: fbdev: fbcvt.c: fix printing in fb_cvt_print_name()
ARM: dts: Fix OpenBMC flash layout label addresses
firmware: qcom: scm: Remove reassignment to desc following initializer
ARM: dts: qcom: ipq4019: fix sleep clock
soc: qcom: rpmpd: Check for null return of devm_kcalloc
soc: qcom: ocmem: Fix missing put_device() call in of_get_ocmem
soc: qcom: aoss: remove spurious IRQF_ONESHOT flags
arm64: dts: qcom: sdm845: fix microphone bias properties and values
arm64: dts: qcom: sm8250: fix PCIe bindings to follow schema
arm64: dts: broadcom: bcm4908: use proper TWD binding
arm64: dts: qcom: sm8150: Correct TCS configuration for apps rsc
arm64: dts: qcom: sm8350: Correct TCS configuration for apps rsc
firmware: ti_sci: Fix compilation failure when CONFIG_TI_SCI_PROTOCOL is not defined
soc: ti: wkup_m3_ipc: Fix IRQ check in wkup_m3_ipc_probe
ARM: dts: sun8i: v3s: Move the csi1 block to follow address order
vsprintf: Fix potential unaligned access
ARM: dts: imx: Add missing LVDS decoder on M53Menlo
media: mexon-ge2d: fixup frames size in registers
media: video/hdmi: handle short reads of hdmi info frame.
media: ti-vpe: cal: Fix a NULL pointer dereference in cal_ctx_v4l2_init_formats()
media: em28xx: initialize refcount before kref_get
media: usb: go7007: s2250-board: fix leak in probe()
media: cedrus: H265: Fix neighbour info buffer size
media: cedrus: h264: Fix neighbour info buffer size
ASoC: codecs: rx-macro: fix accessing compander for aux
ASoC: codecs: rx-macro: fix accessing array out of bounds for enum type
ASoC: codecs: va-macro: fix accessing array out of bounds for enum type
ASoC: codecs: wc938x: fix accessing array out of bounds for enum type
ASoC: codecs: wcd938x: fix kcontrol max values
ASoC: codecs: wcd934x: fix kcontrol max values
ASoC: codecs: wcd934x: fix return value of wcd934x_rx_hph_mode_put
media: v4l2-core: Initialize h264 scaling matrix
media: ov5640: Fix set format, v4l2_mbus_pixelcode not updated
selftests/lkdtm: Add UBSAN config
lib: uninline simple_strntoull() as well
vsprintf: Fix %pK with kptr_restrict == 0
uaccess: fix nios2 and microblaze get_user_8()
ASoC: rt5663: check the return value of devm_kzalloc() in rt5663_parse_dp()
soc: mediatek: pm-domains: Add wakeup capacity support in power domain
mmc: sdhci_am654: Fix the driver data of AM64 SoC
ASoC: ti: davinci-i2s: Add check for clk_enable()
ALSA: spi: Add check for clk_enable()
arm64: dts: ns2: Fix spi-cpol and spi-cpha property
arm64: dts: broadcom: Fix sata nodename
printk: fix return value of printk.devkmsg __setup handler
ASoC: mxs-saif: Handle errors for clk_enable
ASoC: atmel_ssc_dai: Handle errors for clk_enable
ASoC: dwc-i2s: Handle errors for clk_enable
ASoC: soc-compress: prevent the potentially use of null pointer
memory: emif: Add check for setup_interrupts
memory: emif: check the pointer temp in get_device_details()
ALSA: firewire-lib: fix uninitialized flag for AV/C deferred transaction
arm64: dts: rockchip: Fix SDIO regulator supply properties on rk3399-firefly
m68k: coldfire/device.c: only build for MCF_EDMA when h/w macros are defined
media: stk1160: If start stream fails, return buffers with VB2_BUF_STATE_QUEUED
media: vidtv: Check for null return of vzalloc
ASoC: atmel: Add missing of_node_put() in at91sam9g20ek_audio_probe
ASoC: wm8350: Handle error for wm8350_register_irq
ASoC: fsi: Add check for clk_enable
video: fbdev: omapfb: Add missing of_node_put() in dvic_probe_of
media: saa7134: fix incorrect use to determine if list is empty
ivtv: fix incorrect device_caps for ivtvfb
ASoC: atmel: Fix error handling in snd_proto_probe
ASoC: rockchip: i2s: Fix missing clk_disable_unprepare() in rockchip_i2s_probe
ASoC: SOF: Add missing of_node_put() in imx8m_probe
ASoC: mediatek: use of_device_get_match_data()
ASoC: mediatek: mt8192-mt6359: Fix error handling in mt8192_mt6359_dev_probe
ASoC: rk817: Fix missing clk_disable_unprepare() in rk817_platform_probe
ASoC: dmaengine: do not use a NULL prepare_slave_config() callback
ASoC: mxs: Fix error handling in mxs_sgtl5000_probe
ASoC: fsl_spdif: Disable TX clock when stop
ASoC: imx-es8328: Fix error return code in imx_es8328_probe()
ASoC: SOF: Intel: enable DMI L1 for playback streams
ASoC: msm8916-wcd-digital: Fix missing clk_disable_unprepare() in msm8916_wcd_digital_probe
mmc: davinci_mmc: Handle error for clk_enable
ASoC: atmel: Fix error handling in sam9x5_wm8731_driver_probe
ASoC: msm8916-wcd-analog: Fix error handling in pm8916_wcd_analog_spmi_probe
ASoC: codecs: wcd934x: Add missing of_node_put() in wcd934x_codec_parse_data
ASoC: amd: Fix reference to PCM buffer address
ARM: configs: multi_v5_defconfig: re-enable CONFIG_V4L_PLATFORM_DRIVERS
ARM: configs: multi_v5_defconfig: re-enable DRM_PANEL and FB_xxx
drm/meson: osd_afbcd: Add an exit callback to struct meson_afbcd_ops
drm/meson: Make use of the helper function devm_platform_ioremap_resourcexxx()
drm/meson: split out encoder from meson_dw_hdmi
drm/meson: Fix error handling when afbcd.ops->init fails
drm/bridge: Fix free wrong object in sii8620_init_rcp_input_dev
drm/bridge: Add missing pm_runtime_disable() in __dw_mipi_dsi_probe
drm/bridge: nwl-dsi: Fix PM disable depth imbalance in nwl_dsi_probe
drm: bridge: adv7511: Fix ADV7535 HPD enablement
ath10k: fix memory overwrite of the WoWLAN wakeup packet pattern
drm/v3d/v3d_drv: Check for error num after setting mask
drm/panfrost: Check for error num after setting mask
libbpf: Fix possible NULL pointer dereference when destroying skeleton
bpftool: Only set obj->skeleton on complete success
udmabuf: validate ubuf->pagecount
bpf: Fix UAF due to race between btf_try_get_module and load_module
drm/selftests/test-drm_dp_mst_helper: Fix memory leak in sideband_msg_req_encode_decode
selftests: bpf: Fix bind on used port
Bluetooth: btintel: Fix WBS setting for Intel legacy ROM products
Bluetooth: hci_serdev: call init_rwsem() before p->open()
mtd: onenand: Check for error irq
mtd: rawnand: gpmi: fix controller timings setting
drm/edid: Don't clear formats if using deep color
drm/edid: Split deep color modes between RGB and YUV444
ionic: fix type complaint in ionic_dev_cmd_clean()
ionic: start watchdog after all is setup
ionic: Don't send reset commands if FW isn't running
drm/nouveau/acr: Fix undefined behavior in nvkm_acr_hsfw_load_bl()
drm/amd/display: Fix a NULL pointer dereference in amdgpu_dm_connector_add_common_modes()
drm/amd/pm: return -ENOTSUPP if there is no get_dpm_ultimate_freq function
net: phy: at803x: move page selection fix to config_init
selftests/bpf: Normalize XDP section names in selftests
selftests/bpf/test_xdp_redirect_multi: use temp netns for testing
ath9k_htc: fix uninit value bugs
RDMA/core: Set MR type in ib_reg_user_mr
KVM: PPC: Fix vmx/vsx mixup in mmio emulation
selftests/net: timestamping: Fix bind_phc check
i40e: don't reserve excessive XDP_PACKET_HEADROOM on XSK Rx to skb
i40e: respect metadata on XSK Rx to skb
igc: don't reserve excessive XDP_PACKET_HEADROOM on XSK Rx to skb
ixgbe: pass bi->xdp to ixgbe_construct_skb_zc() directly
ixgbe: don't reserve excessive XDP_PACKET_HEADROOM on XSK Rx to skb
ixgbe: respect metadata on XSK Rx to skb
power: reset: gemini-poweroff: Fix IRQ check in gemini_poweroff_probe
ray_cs: Check ioremap return value
powerpc: dts: t1040rdb: fix ports names for Seville Ethernet switch
KVM: PPC: Book3S HV: Check return value of kvmppc_radix_init
powerpc/perf: Don't use perf_hw_context for trace IMC PMU
mt76: connac: fix sta_rec_wtbl tag len
mt76: mt7915: use proper aid value in mt7915_mcu_wtbl_generic_tlv in sta mode
mt76: mt7915: use proper aid value in mt7915_mcu_sta_basic_tlv
mt76: mt7921: fix a leftover race in runtime-pm
mt76: mt7615: fix a leftover race in runtime-pm
mt76: mt7603: check sta_rates pointer in mt7603_sta_rate_tbl_update
mt76: mt7615: check sta_rates pointer in mt7615_sta_rate_tbl_update
ptp: unregister virtual clocks when unregistering physical clock.
net: dsa: mv88e6xxx: Enable port policy support on 6097
mac80211: Remove a couple of obsolete TODO
mac80211: limit bandwidth in HE capabilities
scripts/dtc: Call pkg-config POSIXly correct
livepatch: Fix build failure on 32 bits processors
net: asix: add proper error handling of usb read errors
i2c: bcm2835: Use platform_get_irq() to get the interrupt
i2c: bcm2835: Fix the error handling in 'bcm2835_i2c_probe()'
mtd: mchp23k256: Add SPI ID table
mtd: mchp48l640: Add SPI ID table
igc: avoid kernel warning when changing RX ring parameters
igb: refactor XDP registration
PCI: aardvark: Fix reading MSI interrupt number
PCI: aardvark: Fix reading PCI_EXP_RTSTA_PME bit on emulated bridge
RDMA/rxe: Check the last packet by RXE_END_MASK
libbpf: Fix signedness bug in btf_dump_array_data()
cxl/core: Fix cxl_probe_component_regs() error message
cxl/regs: Fix size of CXL Capability Header Register
net:enetc: allocate CBD ring data memory using DMA coherent methods
libbpf: Fix compilation warning due to mismatched printf format
drm/bridge: dw-hdmi: use safe format when first in bridge chain
libbpf: Use dynamically allocated buffer when receiving netlink messages
power: supply: ab8500: Fix memory leak in ab8500_fg_sysfs_init
HID: i2c-hid: fix GET/SET_REPORT for unnumbered reports
iommu/ipmmu-vmsa: Check for error num after setting mask
drm/bridge: anx7625: Fix overflow issue on reading EDID
bpftool: Fix the error when lookup in no-btf maps
drm/amd/pm: enable pm sysfs write for one VF mode
drm/amd/display: Add affected crtcs to atomic state for dsc mst unplug
libbpf: Fix memleak in libbpf_netlink_recv()
IB/cma: Allow XRC INI QPs to set their local ACK timeout
dax: make sure inodes are flushed before destroy cache
selftests: mptcp: add csum mib check for mptcp_connect
iwlwifi: mvm: Don't call iwl_mvm_sta_from_mac80211() with NULL sta
iwlwifi: mvm: don't iterate unadded vifs when handling FW SMPS req
iwlwifi: mvm: align locking in D3 test debugfs
iwlwifi: yoyo: remove DBGI_SRAM address reset writing
iwlwifi: Fix -EIO error code that is never returned
iwlwifi: mvm: Fix an error code in iwl_mvm_up()
mtd: rawnand: pl353: Set the nand chip node as the flash node
drm/msm/dp: populate connector of struct dp_panel
drm/msm/dp: stop link training after link training 2 failed
drm/msm/dp: always add fail-safe mode into connector mode list
drm/msm/dsi: Use "ref" fw clock instead of global name for VCO parent
drm/msm/dsi/phy: fix 7nm v4.0 settings for C-PHY mode
drm/msm/dpu: add DSPP blocks teardown
drm/msm/dpu: fix dp audio condition
dm crypt: fix get_key_size compiler warning if !CONFIG_KEYS
vfio/pci: fix memory leak during D3hot to D0 transition
vfio/pci: wake-up devices around reset functions
scsi: fnic: Fix a tracing statement
scsi: pm8001: Fix command initialization in pm80XX_send_read_log()
scsi: pm8001: Fix command initialization in pm8001_chip_ssp_tm_req()
scsi: pm8001: Fix payload initialization in pm80xx_set_thermal_config()
scsi: pm8001: Fix le32 values handling in pm80xx_set_sas_protocol_timer_config()
scsi: pm8001: Fix payload initialization in pm80xx_encrypt_update()
scsi: pm8001: Fix le32 values handling in pm80xx_chip_ssp_io_req()
scsi: pm8001: Fix le32 values handling in pm80xx_chip_sata_req()
scsi: pm8001: Fix NCQ NON DATA command task initialization
scsi: pm8001: Fix NCQ NON DATA command completion handling
scsi: pm8001: Fix abort all task initialization
RDMA/mlx5: Fix the flow of a miss in the allocation of a cache ODP MR
drm/amd/display: Remove vupdate_int_entry definition
TOMOYO: fix __setup handlers return values
power: supply: sbs-charger: Don't cancel work that is not initialized
ext2: correct max file size computing
drm/tegra: Fix reference leak in tegra_dsi_ganged_probe
power: supply: bq24190_charger: Fix bq24190_vbus_is_enabled() wrong false return
scsi: hisi_sas: Change permission of parameter prot_mask
drm/bridge: cdns-dsi: Make sure to to create proper aliases for dt
bpf, arm64: Call build_prologue() first in first JIT pass
bpf, arm64: Feed byte-offset into bpf line info
xsk: Fix race at socket teardown
RDMA/irdma: Fix netdev notifications for vlan's
RDMA/irdma: Fix Passthrough mode in VM
RDMA/irdma: Remove incorrect masking of PD
gpu: host1x: Fix a memory leak in 'host1x_remove()'
libbpf: Skip forward declaration when counting duplicated type names
powerpc/mm/numa: skip NUMA_NO_NODE onlining in parse_numa_properties()
powerpc/Makefile: Don't pass -mcpu=powerpc64 when building 32-bit
KVM: x86: Fix emulation in writing cr8
KVM: x86/emulator: Defer not-present segment check in __load_segment_descriptor()
hv_balloon: rate-limit "Unhandled message" warning
i2c: xiic: Make bus names unique
power: supply: wm8350-power: Handle error for wm8350_register_irq
power: supply: wm8350-power: Add missing free in free_charger_irq
IB/hfi1: Allow larger MTU without AIP
RDMA/core: Fix ib_qp_usecnt_dec() called when error
PCI: Reduce warnings on possible RW1C corruption
net: axienet: fix RX ring refill allocation failure handling
drm/msm/a6xx: Fix missing ARRAY_SIZE() check
mips: DEC: honor CONFIG_MIPS_FP_SUPPORT=n
MIPS: Sanitise Cavium switch cases in TLB handler synthesizers
powerpc/sysdev: fix incorrect use to determine if list is empty
powerpc/64s: Don't use DSISR for SLB faults
mfd: mc13xxx: Add check for mc13xxx_irq_request
libbpf: Unmap rings when umem deleted
selftests/bpf: Make test_lwt_ip_encap more stable and faster
platform/x86: huawei-wmi: check the return value of device_create_file()
scsi: mpt3sas: Fix incorrect 4GB boundary check
powerpc: 8xx: fix a return value error in mpc8xx_pic_init
vxcan: enable local echo for sent CAN frames
ath10k: Fix error handling in ath10k_setup_msa_resources
mips: cdmm: Fix refcount leak in mips_cdmm_phys_base
MIPS: RB532: fix return value of __setup handler
MIPS: pgalloc: fix memory leak caused by pgd_free()
mtd: rawnand: atmel: fix refcount issue in atmel_nand_controller_init
power: ab8500_chargalg: Use CLOCK_MONOTONIC
RDMA/irdma: Prevent some integer underflows
Revert "RDMA/core: Fix ib_qp_usecnt_dec() called when error"
RDMA/mlx5: Fix memory leak in error flow for subscribe event routine
bpf, sockmap: Fix memleak in sk_psock_queue_msg
bpf, sockmap: Fix memleak in tcp_bpf_sendmsg while sk msg is full
bpf, sockmap: Fix more uncharged while msg has more_data
bpf, sockmap: Fix double uncharge the mem of sk_msg
samples/bpf, xdpsock: Fix race when running for fix duration of time
USB: storage: ums-realtek: fix error code in rts51x_read_mem()
drm/i915/display: Fix HPD short pulse handling for eDP
netfilter: flowtable: Fix QinQ and pppoe support for inet table
mt76: mt7921: fix mt7921_queues_acq implementation
can: isotp: sanitize CAN ID checks in isotp_bind()
can: isotp: return -EADDRNOTAVAIL when reading from unbound socket
can: isotp: support MSG_TRUNC flag when reading from socket
bareudp: use ipv6_mod_enabled to check if IPv6 enabled
ibmvnic: fix race between xmit and reset
af_unix: Fix some data-races around unix_sk(sk)->oob_skb.
selftests/bpf: Fix error reporting from sock_fields programs
Bluetooth: hci_uart: add missing NULL check in h5_enqueue
Bluetooth: call hci_le_conn_failed with hdev lock in hci_le_conn_failed
Bluetooth: btmtksdio: Fix kernel oops in btmtksdio_interrupt
ipv4: Fix route lookups when handling ICMP redirects and PMTU updates
af_netlink: Fix shift out of bounds in group mask calculation
i2c: meson: Fix wrong speed use from probe
netfilter: conntrack: Add and use nf_ct_set_auto_assign_helper_warned()
i2c: mux: demux-pinctrl: do not deactivate a master that is not active
powerpc/pseries: Fix use after free in remove_phb_dynamic()
selftests/bpf/test_lirc_mode2.sh: Exit with proper code
PCI: Avoid broken MSI on SB600 USB devices
net: bcmgenet: Use stronger register read/writes to assure ordering
tcp: ensure PMTU updates are processed during fastopen
openvswitch: always update flow key after nat
net: dsa: fix panic on shutdown if multi-chip tree failed to probe
tipc: fix the timer expires after interval 100ms
mfd: asic3: Add missing iounmap() on error asic3_mfd_probe
ice: fix 'scheduling while atomic' on aux critical err interrupt
ice: don't allow to run ice_send_event_to_aux() in atomic ctx
drivers: ethernet: cpsw: fix panic when interrupt coaleceing is set via ethtool
kernel/resource: fix kfree() of bootmem memory again
staging: r8188eu: convert DBG_88E_LEVEL call in hal/rtl8188e_hal_init.c
staging: r8188eu: release_firmware is not called if allocation fails
mxser: fix xmit_buf leak in activate when LSR == 0xff
fsi: scom: Fix error handling
fsi: scom: Remove retries in indirect scoms
pwm: lpc18xx-sct: Initialize driver data and hardware before pwmchip_add()
pps: clients: gpio: Propagate return value from pps_gpio_probe
fsi: Aspeed: Fix a potential double free
misc: alcor_pci: Fix an error handling path
cpufreq: qcom-cpufreq-nvmem: fix reading of PVS Valid fuse
soundwire: intel: fix wrong register name in intel_shim_wake
clk: qcom: ipq8074: fix PCI-E clock oops
dmaengine: idxd: check GENCAP config support for gencfg register
dmaengine: idxd: change bandwidth token to read buffers
dmaengine: idxd: restore traffic class defaults after wq reset
iio: mma8452: Fix probe failing when an i2c_device_id is used
serial: 8250_aspeed_vuart: add PORT_ASPEED_VUART port type
staging:iio:adc:ad7280a: Fix handing of device address bit reversing.
pinctrl: renesas: r8a77470: Reduce size for narrow VIN1 channel
pinctrl: renesas: checker: Fix miscalculation of number of states
clk: qcom: ipq8074: Use floor ops for SDCC1 clock
phy: dphy: Correct lpx parameter and its derivatives(ta_{get,go,sure})
phy: phy-brcm-usb: fixup BCM4908 support
serial: 8250_mid: Balance reference count for PCI DMA device
serial: 8250_lpss: Balance reference count for PCI DMA device
NFS: Use of mapping_set_error() results in spurious errors
serial: 8250: Fix race condition in RTS-after-send handling
iio: adc: Add check for devm_request_threaded_irq
habanalabs: Add check for pci_enable_device
NFS: Return valid errors from nfs2/3_decode_dirent()
staging: r8188eu: fix endless loop in recv_func
dma-debug: fix return value of __setup handlers
clk: imx7d: Remove audio_mclk_root_clk
clk: imx: off by one in imx_lpcg_parse_clks_from_dt()
clk: at91: sama7g5: fix parents of PDMCs' GCLK
clk: qcom: clk-rcg2: Update logic to calculate D value for RCG
clk: qcom: clk-rcg2: Update the frac table for pixel clock
dmaengine: hisi_dma: fix MSI allocate fail when reload hisi_dma
remoteproc: qcom: Fix missing of_node_put in adsp_alloc_memory_region
remoteproc: qcom_wcnss: Add missing of_node_put() in wcnss_alloc_memory_region
remoteproc: qcom_q6v5_mss: Fix some leaks in q6v5_alloc_memory_region
nvdimm/region: Fix default alignment for small regions
clk: actions: Terminate clk_div_table with sentinel element
clk: loongson1: Terminate clk_div_table with sentinel element
clk: hisilicon: Terminate clk_div_table with sentinel element
clk: clps711x: Terminate clk_div_table with sentinel element
clk: Fix clk_hw_get_clk() when dev is NULL
clk: tegra: tegra124-emc: Fix missing put_device() call in emc_ensure_emc_driver
mailbox: imx: fix crash in resume on i.mx8ulp
NFS: remove unneeded check in decode_devicenotify_args()
staging: mt7621-dts: fix LEDs and pinctrl on GB-PC1 devicetree
staging: mt7621-dts: fix formatting
staging: mt7621-dts: fix pinctrl properties for ethernet
staging: mt7621-dts: fix GB-PC2 devicetree
pinctrl: mediatek: Fix missing of_node_put() in mtk_pctrl_init
pinctrl: mediatek: paris: Fix PIN_CONFIG_BIAS_* readback
pinctrl: mediatek: paris: Fix "argument" argument type for mtk_pinconf_get()
pinctrl: mediatek: paris: Fix pingroup pin config state readback
pinctrl: mediatek: paris: Skip custom extra pin config dump for virtual GPIOs
pinctrl: microchip sgpio: use reset driver
pinctrl: microchip-sgpio: lock RMW access
pinctrl: nomadik: Add missing of_node_put() in nmk_pinctrl_probe
pinctrl/rockchip: Add missing of_node_put() in rockchip_pinctrl_probe
tty: hvc: fix return value of __setup handler
kgdboc: fix return value of __setup handler
serial: 8250: fix XOFF/XON sending when DMA is used
virt: acrn: obtain pa from VMA with PFNMAP flag
virt: acrn: fix a memory leak in acrn_dev_ioctl()
kgdbts: fix return value of __setup handler
firmware: google: Properly state IOMEM dependency
driver core: dd: fix return value of __setup handler
jfs: fix divide error in dbNextAG
netfilter: nf_conntrack_tcp: preserve liberal flag in tcp options
SUNRPC don't resend a task on an offlined transport
NFSv4.1: don't retry BIND_CONN_TO_SESSION on session error
kdb: Fix the putarea helper function
perf stat: Fix forked applications enablement of counters
clk: qcom: gcc-msm8994: Fix gpll4 width
vsock/virtio: initialize vdev->priv before using VQs
vsock/virtio: read the negotiated features before using VQs
vsock/virtio: enable VQs early on probe
clk: Initialize orphan req_rate
xen: fix is_xen_pmu()
net: enetc: report software timestamping via SO_TIMESTAMPING
net: hns3: fix bug when PF set the duplicate MAC address for VFs
net: hns3: fix port base vlan add fail when concurrent with reset
net: hns3: add vlan list lock to protect vlan list
net: hns3: format the output of the MAC address
net: hns3: refine the process when PF set VF VLAN
net: phy: broadcom: Fix brcm_fet_config_init()
selftests: test_vxlan_under_vrf: Fix broken test case
NFS: Don't loop forever in nfs_do_recoalesce()
net: hns3: clean residual vf config after disable sriov
net: sparx5: depends on PTP_1588_CLOCK_OPTIONAL
qlcnic: dcb: default to returning -EOPNOTSUPP
net/x25: Fix null-ptr-deref caused by x25_disconnect
net: sparx5: switchdev: fix possible NULL pointer dereference
octeontx2-af: initialize action variable
net: prefer nf_ct_put instead of nf_conntrack_put
net/sched: act_ct: fix ref leak when switching zones
NFSv4/pNFS: Fix another issue with a list iterator pointing to the head
net: dsa: bcm_sf2_cfp: fix an incorrect NULL check on list iterator
fs: fd tables have to be multiples of BITS_PER_LONG
lib/test: use after free in register_test_dev_kmod()
fs: fix fd table size alignment properly
LSM: general protection fault in legacy_parse_param
regulator: rpi-panel: Handle I2C errors/timing to the Atmel
crypto: hisilicon/qm - cleanup warning in qm_vf_read_qos
gcc-plugins/stackleak: Exactly match strings instead of prefixes
pinctrl: npcm: Fix broken references to chip->parent_device
rcu: Mark writes to the rcu_segcblist structure's ->flags field
block/bfq_wf2q: correct weight to ioprio
crypto: xts - Add softdep on ecb
crypto: hisilicon/sec - not need to enable sm4 extra mode at HW V3
block, bfq: don't move oom_bfqq
selinux: use correct type for context length
arm64: module: remove (NOLOAD) from linker script
selinux: allow FIOCLEX and FIONCLEX with policy capability
loop: use sysfs_emit() in the sysfs xxx show()
Fix incorrect type in assignment of ipv6 port for audit
irqchip/qcom-pdc: Fix broken locking
irqchip/nvic: Release nvic_base upon failure
fs/binfmt_elf: Fix AT_PHDR for unusual ELF files
bfq: fix use-after-free in bfq_dispatch_request
ACPICA: Avoid walking the ACPI Namespace if it is not there
lib/raid6/test/Makefile: Use $(pound) instead of \# for Make 4.3
Revert "Revert "block, bfq: honor already-setup queue merges""
ACPI/APEI: Limit printable size of BERT table data
PM: core: keep irq flags in device_pm_check_callbacks()
parisc: Fix handling off probe non-access faults
nvme-tcp: lockdep: annotate in-kernel sockets
spi: tegra20: Use of_device_get_match_data()
atomics: Fix atomic64_{read_acquire,set_release} fallbacks
locking/lockdep: Iterate lock_classes directly when reading lockdep files
ext4: correct cluster len and clusters changed accounting in ext4_mb_mark_bb
ext4: fix ext4_mb_mark_bb() with flex_bg with fast_commit
sched/tracing: Report TASK_RTLOCK_WAIT tasks as TASK_UNINTERRUPTIBLE
ext4: don't BUG if someone dirty pages without asking ext4 first
f2fs: fix to do sanity check on curseg->alloc_type
NFSD: Fix nfsd_breaker_owns_lease() return values
f2fs: don't get FREEZE lock in f2fs_evict_inode in frozen fs
btrfs: harden identification of a stale device
btrfs: make search_csum_tree return 0 if we get -EFBIG
f2fs: use spin_lock to avoid hang
f2fs: compress: fix to print raw data size in error path of lz4 decompression
Adjust cifssb maximum read size
ntfs: add sanity check on allocation size
media: staging: media: zoran: move videodev alloc
media: staging: media: zoran: calculate the right buffer number for zoran_reap_stat_com
media: staging: media: zoran: fix various V4L2 compliance errors
media: atmel: atmel-isc-base: report frame sizes as full supported range
media: ir_toy: free before error exiting
ASoC: sh: rz-ssi: Make the data structures available before registering the handlers
ASoC: SOF: Intel: match sdw version on link_slaves_found
media: imx-jpeg: Prevent decoding NV12M jpegs into single-planar buffers
media: iommu/mediatek-v1: Free the existed fwspec if the master dev already has
media: iommu/mediatek: Return ENODEV if the device is NULL
media: iommu/mediatek: Add device_link between the consumer and the larb devices
video: fbdev: nvidiafb: Use strscpy() to prevent buffer overflow
video: fbdev: w100fb: Reset global state
video: fbdev: cirrusfb: check pixclock to avoid divide by zero
video: fbdev: omapfb: acx565akm: replace snprintf with sysfs_emit
ARM: dts: qcom: fix gic_irq_domain_translate warnings for msm8960
ARM: dts: bcm2837: Add the missing L1/L2 cache information
ASoC: madera: Add dependencies on MFD
media: atomisp_gmin_platform: Add DMI quirk to not turn AXP ELDO2 regulator off on some boards
media: atomisp: fix dummy_ptr check to avoid duplicate active_bo
ARM: ftrace: avoid redundant loads or clobbering IP
ARM: dts: imx7: Use audio_mclk_post_div instead audio_mclk_root_clk
arm64: defconfig: build imx-sdma as a module
video: fbdev: omapfb: panel-dsi-cm: Use sysfs_emit() instead of snprintf()
video: fbdev: omapfb: panel-tpo-td043mtea1: Use sysfs_emit() instead of snprintf()
video: fbdev: udlfb: replace snprintf in show functions with sysfs_emit
ARM: dts: bcm2711: Add the missing L1/L2 cache information
ASoC: soc-core: skip zero num_dai component in searching dai name
media: imx-jpeg: fix a bug of accessing array out of bounds
media: cx88-mpeg: clear interrupt status register before streaming video
uaccess: fix type mismatch warnings from access_ok()
lib/test_lockup: fix kernel pointer check for separate address spaces
ARM: tegra: tamonten: Fix I2C3 pad setting
ARM: mmp: Fix failure to remove sram device
ASoC: amd: vg: fix for pm resume callback sequence
video: fbdev: sm712fb: Fix crash in smtcfb_write()
media: i2c: ov5648: Fix lockdep error
media: Revert "media: em28xx: add missing em28xx_close_extension"
media: hdpvr: initialize dev->worker at hdpvr_register_videodev
ASoC: Intel: sof_sdw: fix quirks for 2022 HP Spectre x360 13"
tracing: Have TRACE_DEFINE_ENUM affect trace event types as well
mmc: host: Return an error when ->enable_sdio_irq() ops is missing
media: atomisp: fix bad usage at error handling logic
ALSA: hda/realtek: Add alc256-samsung-headphone fixup
KVM: x86: Reinitialize context if host userspace toggles EFER.LME
KVM: x86/mmu: Move "invalid" check out of kvm_tdp_mmu_get_root()
KVM: x86/mmu: Zap _all_ roots when unmapping gfn range in TDP MMU
KVM: x86/mmu: Check for present SPTE when clearing dirty bit in TDP MMU
KVM: x86: hyper-v: Drop redundant 'ex' parameter from kvm_hv_send_ipi()
KVM: x86: hyper-v: Drop redundant 'ex' parameter from kvm_hv_flush_tlb()
KVM: x86: hyper-v: Fix the maximum number of sparse banks for XMM fast TLB flush hypercalls
KVM: x86: hyper-v: HVCALL_SEND_IPI_EX is an XMM fast hypercall
powerpc/kasan: Fix early region not updated correctly
powerpc/lib/sstep: Fix 'sthcx' instruction
powerpc/lib/sstep: Fix build errors with newer binutils
powerpc: Add set_memory_{p/np}() and remove set_memory_attr()
powerpc: Fix build errors with newer binutils
drm/dp: Fix off-by-one in register cache size
drm/i915: Treat SAGV block time 0 as SAGV disabled
drm/i915: Fix PSF GV point mask when SAGV is not possible
drm/i915: Reject unsupported TMDS rates on ICL+
scsi: qla2xxx: Refactor asynchronous command initialization
scsi: qla2xxx: Implement ref count for SRB
scsi: qla2xxx: Fix stuck session in gpdb
scsi: qla2xxx: Fix warning message due to adisc being flushed
scsi: qla2xxx: Fix scheduling while atomic
scsi: qla2xxx: Fix premature hw access after PCI error
scsi: qla2xxx: Fix wrong FDMI data for 64G adapter
scsi: qla2xxx: Fix warning for missing error code
scsi: qla2xxx: Fix device reconnect in loop topology
scsi: qla2xxx: edif: Fix clang warning
scsi: qla2xxx: Fix T10 PI tag escape and IP guard options for 28XX adapters
scsi: qla2xxx: Add devids and conditionals for 28xx
scsi: qla2xxx: Check for firmware dump already collected
scsi: qla2xxx: Suppress a kernel complaint in qla_create_qpair()
scsi: qla2xxx: Fix disk failure to rediscover
scsi: qla2xxx: Fix incorrect reporting of task management failure
scsi: qla2xxx: Fix hang due to session stuck
scsi: qla2xxx: Fix missed DMA unmap for NVMe ls requests
scsi: qla2xxx: Fix N2N inconsistent PLOGI
scsi: qla2xxx: Fix stuck session of PRLI reject
scsi: qla2xxx: Reduce false trigger to login
scsi: qla2xxx: Use correct feature type field during RFF_ID processing
platform: chrome: Split trace include file
KVM: x86: Check lapic_in_kernel() before attempting to set a SynIC irq
KVM: x86: Avoid theoretical NULL pointer dereference in kvm_irq_delivery_to_apic_fast()
KVM: x86: Forbid VMM to set SYNIC/STIMER MSRs when SynIC wasn't activated
KVM: Prevent module exit until all VMs are freed
KVM: x86: fix sending PV IPI
KVM: SVM: fix panic on out-of-bounds guest IRQ
ubifs: rename_whiteout: Fix double free for whiteout_ui->data
ubifs: Fix deadlock in concurrent rename whiteout and inode writeback
ubifs: Add missing iput if do_tmpfile() failed in rename whiteout
ubifs: Rename whiteout atomically
ubifs: Fix 'ui->dirty' race between do_tmpfile() and writeback work
ubifs: Rectify space amount budget for mkdir/tmpfile operations
ubifs: setflags: Make dirtied_ino_d 8 bytes aligned
ubifs: Fix read out-of-bounds in ubifs_wbuf_write_nolock()
ubifs: Fix to add refcount once page is set private
ubifs: rename_whiteout: correct old_dir size computing
nvme: allow duplicate NSIDs for private namespaces
nvme: fix the read-only state for zoned namespaces with unsupposed features
wireguard: queueing: use CFI-safe ptr_ring cleanup function
wireguard: socket: free skb in send6 when ipv6 is disabled
wireguard: socket: ignore v6 endpoints when ipv6 is disabled
XArray: Fix xas_create_range() when multi-order entry present
can: mcba_usb: mcba_usb_start_xmit(): fix double dev_kfree_skb in error path
can: mcba_usb: properly check endpoint type
can: mcp251xfd: mcp251xfd_register_get_dev_id(): fix return of error value
XArray: Update the LRU list in xas_split()
modpost: restore the warning message for missing symbol versions
rtc: check if __rtc_read_time was successful
gfs2: gfs2_setattr_size error path fix
gfs2: Make sure FITRIM minlen is rounded up to fs block size
net: hns3: fix the concurrency between functions reading debugfs
net: hns3: fix software vlan talbe of vlan 0 inconsistent with hardware
rxrpc: fix some null-ptr-deref bugs in server_key.c
rxrpc: Fix call timer start racing with call destruction
mailbox: imx: fix wakeup failure from freeze mode
crypto: arm/aes-neonbs-cbc - Select generic cbc and aes
watch_queue: Free the page array when watch_queue is dismantled
pinctrl: pinconf-generic: Print arguments for bias-pull-*
watchdog: rti-wdt: Add missing pm_runtime_disable() in probe function
net: sparx5: uses, depends on BRIDGE or !BRIDGE
pinctrl: nuvoton: npcm7xx: Rename DS() macro to DSTR()
pinctrl: nuvoton: npcm7xx: Use %zu printk format for ARRAY_SIZE()
ASoC: mediatek: mt6358: add missing EXPORT_SYMBOLs
ubi: Fix race condition between ctrl_cdev_ioctl and ubi_cdev_ioctl
ARM: iop32x: offset IRQ numbers by 1
block: Fix the maximum minor value is blk_alloc_ext_minor()
io_uring: fix memory leak of uid in files registration
riscv module: remove (NOLOAD)
ACPI: CPPC: Avoid out of bounds access when parsing _CPC data
vhost: handle error while adding split ranges to iotlb
spi: Fix Tegra QSPI example
platform/chrome: cros_ec_typec: Check for EC device
can: isotp: restore accidentally removed MSG_PEEK feature
proc: bootconfig: Add null pointer check
drm/connector: Fix typo in documentation
scsi: qla2xxx: Add qla2x00_async_done() for async routines
staging: mt7621-dts: fix pinctrl-0 items to be size-1 items on ethernet
arm64: mm: Drop 'const' from conditional arm64_dma_phys_limit definition
ASoC: soc-compress: Change the check for codec_dai
Reinstate some of "swiotlb: rework "fix info leak with DMA_FROM_DEVICE""
tracing: Have type enum modifications copy the strings
net: add skb_set_end_offset() helper
net: preserve skb_end_offset() in skb_unclone_keeptruesize()
mm/mmap: return 1 from stack_guard_gap __setup() handler
ARM: 9187/1: JIVE: fix return value of __setup handler
mm/memcontrol: return 1 from cgroup.memory __setup() handler
mm/usercopy: return 1 from hardened_usercopy __setup() handler
af_unix: Support POLLPRI for OOB.
bpf: Adjust BPF stack helper functions to accommodate skip > 0
bpf: Fix comment for helper bpf_current_task_under_cgroup()
mmc: rtsx: Use pm_runtime_{get,put}() to handle runtime PM
dt-bindings: mtd: nand-controller: Fix the reg property description
dt-bindings: mtd: nand-controller: Fix a comment in the examples
dt-bindings: spi: mxic: The interrupt property is not mandatory
dt-bindings: memory: mtk-smi: No need mediatek,larb-id for mt8167
dt-bindings: pinctrl: pinctrl-microchip-sgpio: Fix example
ubi: fastmap: Return error code if memory allocation fails in add_aeb()
ASoC: SOF: Intel: Fix build error without SND_SOC_SOF_PCI_DEV
ASoC: topology: Allow TLV control to be either read or write
perf vendor events: Update metrics for SkyLake Server
media: ov6650: Add try support to selection API operations
media: ov6650: Fix crop rectangle affected by set format
spi: mediatek: support tick_delay without enhance_timing
ARM: dts: spear1340: Update serial node properties
ARM: dts: spear13xx: Update SPI dma properties
arm64: dts: ls1043a: Update i2c dma properties
arm64: dts: ls1046a: Update i2c node dma properties
um: Fix uml_mconsole stop/go
docs: sysctl/kernel: add missing bit to panic_print
openvswitch: Fixed nd target mask field in the flow dump.
torture: Make torture.sh help message match reality
n64cart: convert bi_disk to bi_bdev->bd_disk fix build
mmc: rtsx: Let MMC core handle runtime PM
mmc: rtsx: Fix build errors/warnings for unused variable
KVM: x86/mmu: do compare-and-exchange of gPTE via the user address
iommu/dma: Skip extra sync during unmap w/swiotlb
iommu/dma: Fold _swiotlb helpers into callers
iommu/dma: Check CONFIG_SWIOTLB more broadly
swiotlb: Support aligned swiotlb buffers
iommu/dma: Account for min_align_mask w/swiotlb
coredump: Snapshot the vmas in do_coredump
coredump: Remove the WARN_ON in dump_vma_snapshot
coredump/elf: Pass coredump_params into fill_note_info
coredump: Use the vma snapshot in fill_files_note
PCI: xgene: Revert "PCI: xgene: Use inbound resources for setup"
Linux 5.15.33
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Id62bd8a22d0bfa7c2096539d253ffce804bed017
// SPDX-License-Identifier: GPL-2.0
/*
 * Real-Time Scheduling Class (mapped to the SCHED_FIFO and SCHED_RR
 * policies)
 */
#include "sched.h"

#include "pelt.h"

#include <trace/hooks/sched.h>

int sched_rr_timeslice = RR_TIMESLICE;
int sysctl_sched_rr_timeslice = (MSEC_PER_SEC / HZ) * RR_TIMESLICE;
/* More than 4 hours if BW_SHIFT equals 20. */
static const u64 max_rt_runtime = MAX_BW;

static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun);

struct rt_bandwidth def_rt_bandwidth;

static enum hrtimer_restart sched_rt_period_timer(struct hrtimer *timer)
{
	struct rt_bandwidth *rt_b =
		container_of(timer, struct rt_bandwidth, rt_period_timer);
	int idle = 0;
	int overrun;

	raw_spin_lock(&rt_b->rt_runtime_lock);
	for (;;) {
		overrun = hrtimer_forward_now(timer, rt_b->rt_period);
		if (!overrun)
			break;

		raw_spin_unlock(&rt_b->rt_runtime_lock);
		idle = do_sched_rt_period_timer(rt_b, overrun);
		raw_spin_lock(&rt_b->rt_runtime_lock);
	}
	if (idle)
		rt_b->rt_period_active = 0;
	raw_spin_unlock(&rt_b->rt_runtime_lock);

	return idle ? HRTIMER_NORESTART : HRTIMER_RESTART;
}

void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime)
{
	rt_b->rt_period = ns_to_ktime(period);
	rt_b->rt_runtime = runtime;

	raw_spin_lock_init(&rt_b->rt_runtime_lock);

	hrtimer_init(&rt_b->rt_period_timer, CLOCK_MONOTONIC,
		     HRTIMER_MODE_REL_HARD);
	rt_b->rt_period_timer.function = sched_rt_period_timer;
}

static inline void do_start_rt_bandwidth(struct rt_bandwidth *rt_b)
{
	raw_spin_lock(&rt_b->rt_runtime_lock);
	if (!rt_b->rt_period_active) {
		rt_b->rt_period_active = 1;
		/*
		 * SCHED_DEADLINE updates the bandwidth, as a run away
		 * RT task with a DL task could hog a CPU. But DL does
		 * not reset the period. If a deadline task was running
		 * without an RT task running, it can cause RT tasks to
		 * throttle when they start up. Kick the timer right away
		 * to update the period.
		 */
		hrtimer_forward_now(&rt_b->rt_period_timer, ns_to_ktime(0));
		hrtimer_start_expires(&rt_b->rt_period_timer,
				      HRTIMER_MODE_ABS_PINNED_HARD);
	}
	raw_spin_unlock(&rt_b->rt_runtime_lock);
}

static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
{
	if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
		return;

	do_start_rt_bandwidth(rt_b);
}

void init_rt_rq(struct rt_rq *rt_rq)
{
	struct rt_prio_array *array;
	int i;

	array = &rt_rq->active;
	for (i = 0; i < MAX_RT_PRIO; i++) {
		INIT_LIST_HEAD(array->queue + i);
		__clear_bit(i, array->bitmap);
	}
	/* delimiter for bitsearch: */
	__set_bit(MAX_RT_PRIO, array->bitmap);

#if defined CONFIG_SMP
	rt_rq->highest_prio.curr = MAX_RT_PRIO-1;
	rt_rq->highest_prio.next = MAX_RT_PRIO-1;
	rt_rq->rt_nr_migratory = 0;
	rt_rq->overloaded = 0;
	plist_head_init(&rt_rq->pushable_tasks);
#endif /* CONFIG_SMP */
	/* We start in dequeued state, because no RT tasks are queued */
|
|
rt_rq->rt_queued = 0;
|
|
|
|
rt_rq->rt_time = 0;
|
|
rt_rq->rt_throttled = 0;
|
|
rt_rq->rt_runtime = 0;
|
|
raw_spin_lock_init(&rt_rq->rt_runtime_lock);
|
|
}
|
|
|
|
#ifdef CONFIG_RT_GROUP_SCHED
|
|
static void destroy_rt_bandwidth(struct rt_bandwidth *rt_b)
|
|
{
|
|
hrtimer_cancel(&rt_b->rt_period_timer);
|
|
}
|
|
|
|
#define rt_entity_is_task(rt_se) (!(rt_se)->my_q)
|
|
|
|
static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
|
|
{
|
|
#ifdef CONFIG_SCHED_DEBUG
|
|
WARN_ON_ONCE(!rt_entity_is_task(rt_se));
|
|
#endif
|
|
return container_of(rt_se, struct task_struct, rt);
|
|
}
|
|
|
|
static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
|
|
{
|
|
return rt_rq->rq;
|
|
}
|
|
|
|
static inline struct rt_rq *rt_rq_of_se(struct sched_rt_entity *rt_se)
|
|
{
|
|
return rt_se->rt_rq;
|
|
}
|
|
|
|
static inline struct rq *rq_of_rt_se(struct sched_rt_entity *rt_se)
|
|
{
|
|
struct rt_rq *rt_rq = rt_se->rt_rq;
|
|
|
|
return rt_rq->rq;
|
|
}
|
|
|
|
void unregister_rt_sched_group(struct task_group *tg)
|
|
{
|
|
if (tg->rt_se)
|
|
destroy_rt_bandwidth(&tg->rt_bandwidth);
|
|
|
|
}
|
|
|
|
void free_rt_sched_group(struct task_group *tg)
|
|
{
|
|
int i;
|
|
|
|
for_each_possible_cpu(i) {
|
|
if (tg->rt_rq)
|
|
kfree(tg->rt_rq[i]);
|
|
if (tg->rt_se)
|
|
kfree(tg->rt_se[i]);
|
|
}
|
|
|
|
kfree(tg->rt_rq);
|
|
kfree(tg->rt_se);
|
|
}
|
|
|
|
void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
|
|
struct sched_rt_entity *rt_se, int cpu,
|
|
struct sched_rt_entity *parent)
|
|
{
|
|
struct rq *rq = cpu_rq(cpu);
|
|
|
|
rt_rq->highest_prio.curr = MAX_RT_PRIO-1;
|
|
rt_rq->rt_nr_boosted = 0;
|
|
rt_rq->rq = rq;
|
|
rt_rq->tg = tg;
|
|
|
|
tg->rt_rq[cpu] = rt_rq;
|
|
tg->rt_se[cpu] = rt_se;
|
|
|
|
if (!rt_se)
|
|
return;
|
|
|
|
if (!parent)
|
|
rt_se->rt_rq = &rq->rt;
|
|
else
|
|
rt_se->rt_rq = parent->my_q;
|
|
|
|
rt_se->my_q = rt_rq;
|
|
rt_se->parent = parent;
|
|
INIT_LIST_HEAD(&rt_se->run_list);
|
|
}
|
|
|
|
int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
|
|
{
|
|
struct rt_rq *rt_rq;
|
|
struct sched_rt_entity *rt_se;
|
|
int i;
|
|
|
|
tg->rt_rq = kcalloc(nr_cpu_ids, sizeof(rt_rq), GFP_KERNEL);
|
|
if (!tg->rt_rq)
|
|
goto err;
|
|
tg->rt_se = kcalloc(nr_cpu_ids, sizeof(rt_se), GFP_KERNEL);
|
|
if (!tg->rt_se)
|
|
goto err;
|
|
|
|
init_rt_bandwidth(&tg->rt_bandwidth,
|
|
ktime_to_ns(def_rt_bandwidth.rt_period), 0);
|
|
|
|
for_each_possible_cpu(i) {
|
|
rt_rq = kzalloc_node(sizeof(struct rt_rq),
|
|
GFP_KERNEL, cpu_to_node(i));
|
|
if (!rt_rq)
|
|
goto err;
|
|
|
|
rt_se = kzalloc_node(sizeof(struct sched_rt_entity),
|
|
GFP_KERNEL, cpu_to_node(i));
|
|
if (!rt_se)
|
|
goto err_free_rq;
|
|
|
|
init_rt_rq(rt_rq);
|
|
rt_rq->rt_runtime = tg->rt_bandwidth.rt_runtime;
|
|
init_tg_rt_entry(tg, rt_rq, rt_se, i, parent->rt_se[i]);
|
|
}
|
|
|
|
return 1;
|
|
|
|
err_free_rq:
|
|
kfree(rt_rq);
|
|
err:
|
|
return 0;
|
|
}
|
|
|
|
#else /* CONFIG_RT_GROUP_SCHED */
|
|
|
|
#define rt_entity_is_task(rt_se) (1)
|
|
|
|
static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
|
|
{
|
|
return container_of(rt_se, struct task_struct, rt);
|
|
}
|
|
|
|
static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
|
|
{
|
|
return container_of(rt_rq, struct rq, rt);
|
|
}
|
|
|
|
static inline struct rq *rq_of_rt_se(struct sched_rt_entity *rt_se)
|
|
{
|
|
struct task_struct *p = rt_task_of(rt_se);
|
|
|
|
return task_rq(p);
|
|
}
|
|
|
|
static inline struct rt_rq *rt_rq_of_se(struct sched_rt_entity *rt_se)
|
|
{
|
|
struct rq *rq = rq_of_rt_se(rt_se);
|
|
|
|
return &rq->rt;
|
|
}
|
|
|
|
void unregister_rt_sched_group(struct task_group *tg) { }
|
|
|
|
void free_rt_sched_group(struct task_group *tg) { }
|
|
|
|
int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
|
|
{
|
|
return 1;
|
|
}
|
|
#endif /* CONFIG_RT_GROUP_SCHED */
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
static void pull_rt_task(struct rq *this_rq);
|
|
|
|
static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
|
|
{
|
|
/* Try to pull RT tasks here if we lower this rq's prio */
|
|
return rq->online && rq->rt.highest_prio.curr > prev->prio;
|
|
}
|
|
|
|
static inline int rt_overloaded(struct rq *rq)
|
|
{
|
|
return atomic_read(&rq->rd->rto_count);
|
|
}
|
|
|
|
static inline void rt_set_overload(struct rq *rq)
|
|
{
|
|
if (!rq->online)
|
|
return;
|
|
|
|
cpumask_set_cpu(rq->cpu, rq->rd->rto_mask);
|
|
/*
|
|
* Make sure the mask is visible before we set
|
|
* the overload count. That is checked to determine
|
|
* if we should look at the mask. It would be a shame
|
|
* if we looked at the mask, but the mask was not
|
|
* updated yet.
|
|
*
|
|
* Matched by the barrier in pull_rt_task().
|
|
*/
|
|
smp_wmb();
|
|
atomic_inc(&rq->rd->rto_count);
|
|
}
|
|
|
|
static inline void rt_clear_overload(struct rq *rq)
|
|
{
|
|
if (!rq->online)
|
|
return;
|
|
|
|
/* the order here really doesn't matter */
|
|
atomic_dec(&rq->rd->rto_count);
|
|
cpumask_clear_cpu(rq->cpu, rq->rd->rto_mask);
|
|
}
|
|
|
|
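/*
 * An rt_rq is "overloaded" when it has more than one RT task queued and at
 * least one of them can migrate; keep rq->rd->rto_mask in sync with that.
 */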
static void update_rt_migration(struct rt_rq *rt_rq)
|
|
{
|
|
if (rt_rq->rt_nr_migratory && rt_rq->rt_nr_total > 1) {
|
|
if (!rt_rq->overloaded) {
|
|
rt_set_overload(rq_of_rt_rq(rt_rq));
|
|
rt_rq->overloaded = 1;
|
|
}
|
|
} else if (rt_rq->overloaded) {
|
|
rt_clear_overload(rq_of_rt_rq(rt_rq));
|
|
rt_rq->overloaded = 0;
|
|
}
|
|
}
|
|
|
|
static void inc_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
|
|
{
|
|
struct task_struct *p;
|
|
|
|
if (!rt_entity_is_task(rt_se))
|
|
return;
|
|
|
|
p = rt_task_of(rt_se);
|
|
rt_rq = &rq_of_rt_rq(rt_rq)->rt;
|
|
|
|
rt_rq->rt_nr_total++;
|
|
if (p->nr_cpus_allowed > 1)
|
|
rt_rq->rt_nr_migratory++;
|
|
|
|
update_rt_migration(rt_rq);
|
|
}
|
|
|
|
static void dec_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
|
|
{
|
|
struct task_struct *p;
|
|
|
|
if (!rt_entity_is_task(rt_se))
|
|
return;
|
|
|
|
p = rt_task_of(rt_se);
|
|
rt_rq = &rq_of_rt_rq(rt_rq)->rt;
|
|
|
|
rt_rq->rt_nr_total--;
|
|
if (p->nr_cpus_allowed > 1)
|
|
rt_rq->rt_nr_migratory--;
|
|
|
|
update_rt_migration(rt_rq);
|
|
}
|
|
|
|
static inline int has_pushable_tasks(struct rq *rq)
|
|
{
|
|
return !plist_head_empty(&rq->rt.pushable_tasks);
|
|
}
|
|
|
|
static DEFINE_PER_CPU(struct callback_head, rt_push_head);
|
|
static DEFINE_PER_CPU(struct callback_head, rt_pull_head);
|
|
|
|
static void push_rt_tasks(struct rq *);
|
|
static void pull_rt_task(struct rq *);
|
|
|
|
static inline void rt_queue_push_tasks(struct rq *rq)
|
|
{
|
|
if (!has_pushable_tasks(rq))
|
|
return;
|
|
|
|
queue_balance_callback(rq, &per_cpu(rt_push_head, rq->cpu), push_rt_tasks);
|
|
}
|
|
|
|
static inline void rt_queue_pull_task(struct rq *rq)
|
|
{
|
|
queue_balance_callback(rq, &per_cpu(rt_pull_head, rq->cpu), pull_rt_task);
|
|
}
|
|
|
|
static void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
|
|
{
|
|
plist_del(&p->pushable_tasks, &rq->rt.pushable_tasks);
|
|
plist_node_init(&p->pushable_tasks, p->prio);
|
|
plist_add(&p->pushable_tasks, &rq->rt.pushable_tasks);
|
|
|
|
/* Update the highest prio pushable task */
|
|
if (p->prio < rq->rt.highest_prio.next)
|
|
rq->rt.highest_prio.next = p->prio;
|
|
}
|
|
|
|
static void dequeue_pushable_task(struct rq *rq, struct task_struct *p)
|
|
{
|
|
plist_del(&p->pushable_tasks, &rq->rt.pushable_tasks);
|
|
|
|
/* Update the new highest prio pushable task */
|
|
if (has_pushable_tasks(rq)) {
|
|
p = plist_first_entry(&rq->rt.pushable_tasks,
|
|
struct task_struct, pushable_tasks);
|
|
rq->rt.highest_prio.next = p->prio;
|
|
} else {
|
|
rq->rt.highest_prio.next = MAX_RT_PRIO-1;
|
|
}
|
|
}
|
|
|
|
#else
|
|
|
|
static inline void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
|
|
{
|
|
}
|
|
|
|
static inline void dequeue_pushable_task(struct rq *rq, struct task_struct *p)
|
|
{
|
|
}
|
|
|
|
static inline
|
|
void inc_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
|
|
{
|
|
}
|
|
|
|
static inline
|
|
void dec_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
|
|
{
|
|
}
|
|
|
|
static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
|
|
{
|
|
return false;
|
|
}
|
|
|
|
static inline void pull_rt_task(struct rq *this_rq)
|
|
{
|
|
}
|
|
|
|
static inline void rt_queue_push_tasks(struct rq *rq)
|
|
{
|
|
}
|
|
#endif /* CONFIG_SMP */
|
|
|
|
static void enqueue_top_rt_rq(struct rt_rq *rt_rq);
|
|
static void dequeue_top_rt_rq(struct rt_rq *rt_rq);
|
|
|
|
static inline int on_rt_rq(struct sched_rt_entity *rt_se)
|
|
{
|
|
return rt_se->on_rq;
|
|
}
|
|
|
|
#ifdef CONFIG_UCLAMP_TASK
|
|
/*
 * Verify the fitness of task @p to run on @cpu taking into account the uclamp
 * settings.
 *
 * This check is only important for heterogeneous systems where the uclamp_min
 * value is higher than the capacity of @cpu. For non-heterogeneous systems this
 * function will always return true.
 *
 * The function will return true if the capacity of @cpu is >= the uclamp_min
 * value and false otherwise.
 *
 * Note that uclamp_min will be clamped to uclamp_max if uclamp_min
 * > uclamp_max.
 */
|
|
static inline bool rt_task_fits_capacity(struct task_struct *p, int cpu)
|
|
{
|
|
unsigned int min_cap;
|
|
unsigned int max_cap;
|
|
unsigned int cpu_cap;
|
|
|
|
/* Only heterogeneous systems can benefit from this check */
|
|
if (!static_branch_unlikely(&sched_asym_cpucapacity))
|
|
return true;
|
|
|
|
min_cap = uclamp_eff_value(p, UCLAMP_MIN);
|
|
max_cap = uclamp_eff_value(p, UCLAMP_MAX);
|
|
|
|
cpu_cap = capacity_orig_of(cpu);
|
|
|
|
return cpu_cap >= min(min_cap, max_cap);
|
|
}
|
|
#else
|
|
static inline bool rt_task_fits_capacity(struct task_struct *p, int cpu)
|
|
{
|
|
return true;
|
|
}
|
|
#endif
|
|
|
|
#ifdef CONFIG_RT_GROUP_SCHED
|
|
|
|
static inline u64 sched_rt_runtime(struct rt_rq *rt_rq)
|
|
{
|
|
if (!rt_rq->tg)
|
|
return RUNTIME_INF;
|
|
|
|
return rt_rq->rt_runtime;
|
|
}
|
|
|
|
static inline u64 sched_rt_period(struct rt_rq *rt_rq)
|
|
{
|
|
return ktime_to_ns(rt_rq->tg->rt_bandwidth.rt_period);
|
|
}
|
|
|
|
typedef struct task_group *rt_rq_iter_t;
|
|
|
|
static inline struct task_group *next_task_group(struct task_group *tg)
|
|
{
|
|
do {
|
|
tg = list_entry_rcu(tg->list.next,
|
|
typeof(struct task_group), list);
|
|
} while (&tg->list != &task_groups && task_group_is_autogroup(tg));
|
|
|
|
if (&tg->list == &task_groups)
|
|
tg = NULL;
|
|
|
|
return tg;
|
|
}
|
|
|
|
#define for_each_rt_rq(rt_rq, iter, rq) \
|
|
for (iter = container_of(&task_groups, typeof(*iter), list); \
|
|
(iter = next_task_group(iter)) && \
|
|
(rt_rq = iter->rt_rq[cpu_of(rq)]);)
|
|
|
|
#define for_each_sched_rt_entity(rt_se) \
|
|
for (; rt_se; rt_se = rt_se->parent)
|
|
|
|
static inline struct rt_rq *group_rt_rq(struct sched_rt_entity *rt_se)
|
|
{
|
|
return rt_se->my_q;
|
|
}
|
|
|
|
static void enqueue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags);
|
|
static void dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags);
|
|
|
|
static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
|
|
{
|
|
struct task_struct *curr = rq_of_rt_rq(rt_rq)->curr;
|
|
struct rq *rq = rq_of_rt_rq(rt_rq);
|
|
struct sched_rt_entity *rt_se;
|
|
|
|
int cpu = cpu_of(rq);
|
|
|
|
rt_se = rt_rq->tg->rt_se[cpu];
|
|
|
|
if (rt_rq->rt_nr_running) {
|
|
if (!rt_se)
|
|
enqueue_top_rt_rq(rt_rq);
|
|
else if (!on_rt_rq(rt_se))
|
|
enqueue_rt_entity(rt_se, 0);
|
|
|
|
if (rt_rq->highest_prio.curr < curr->prio)
|
|
resched_curr(rq);
|
|
}
|
|
}
|
|
|
|
static void sched_rt_rq_dequeue(struct rt_rq *rt_rq)
|
|
{
|
|
struct sched_rt_entity *rt_se;
|
|
int cpu = cpu_of(rq_of_rt_rq(rt_rq));
|
|
|
|
rt_se = rt_rq->tg->rt_se[cpu];
|
|
|
|
if (!rt_se) {
|
|
dequeue_top_rt_rq(rt_rq);
|
|
/* Kick cpufreq (see the comment in kernel/sched/sched.h). */
|
|
cpufreq_update_util(rq_of_rt_rq(rt_rq), 0);
|
|
}
|
|
else if (on_rt_rq(rt_se))
|
|
dequeue_rt_entity(rt_se, 0);
|
|
}
|
|
|
|
static inline int rt_rq_throttled(struct rt_rq *rt_rq)
|
|
{
|
|
return rt_rq->rt_throttled && !rt_rq->rt_nr_boosted;
|
|
}
|
|
|
|
static int rt_se_boosted(struct sched_rt_entity *rt_se)
|
|
{
|
|
struct rt_rq *rt_rq = group_rt_rq(rt_se);
|
|
struct task_struct *p;
|
|
|
|
if (rt_rq)
|
|
return !!rt_rq->rt_nr_boosted;
|
|
|
|
p = rt_task_of(rt_se);
|
|
return p->prio != p->normal_prio;
|
|
}
|
|
|
|
#ifdef CONFIG_SMP
|
|
static inline const struct cpumask *sched_rt_period_mask(void)
|
|
{
|
|
return this_rq()->rd->span;
|
|
}
|
|
#else
|
|
static inline const struct cpumask *sched_rt_period_mask(void)
|
|
{
|
|
return cpu_online_mask;
|
|
}
|
|
#endif
|
|
|
|
static inline
|
|
struct rt_rq *sched_rt_period_rt_rq(struct rt_bandwidth *rt_b, int cpu)
|
|
{
|
|
return container_of(rt_b, struct task_group, rt_bandwidth)->rt_rq[cpu];
|
|
}
|
|
|
|
static inline struct rt_bandwidth *sched_rt_bandwidth(struct rt_rq *rt_rq)
|
|
{
|
|
return &rt_rq->tg->rt_bandwidth;
|
|
}
|
|
|
|
#else /* !CONFIG_RT_GROUP_SCHED */
|
|
|
|
static inline u64 sched_rt_runtime(struct rt_rq *rt_rq)
|
|
{
|
|
return rt_rq->rt_runtime;
|
|
}
|
|
|
|
static inline u64 sched_rt_period(struct rt_rq *rt_rq)
|
|
{
|
|
return ktime_to_ns(def_rt_bandwidth.rt_period);
|
|
}
|
|
|
|
typedef struct rt_rq *rt_rq_iter_t;
|
|
|
|
#define for_each_rt_rq(rt_rq, iter, rq) \
|
|
for ((void) iter, rt_rq = &rq->rt; rt_rq; rt_rq = NULL)
|
|
|
|
#define for_each_sched_rt_entity(rt_se) \
|
|
for (; rt_se; rt_se = NULL)
|
|
|
|
static inline struct rt_rq *group_rt_rq(struct sched_rt_entity *rt_se)
|
|
{
|
|
return NULL;
|
|
}
|
|
|
|
static inline void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
|
|
{
|
|
struct rq *rq = rq_of_rt_rq(rt_rq);
|
|
|
|
if (!rt_rq->rt_nr_running)
|
|
return;
|
|
|
|
enqueue_top_rt_rq(rt_rq);
|
|
resched_curr(rq);
|
|
}
|
|
|
|
static inline void sched_rt_rq_dequeue(struct rt_rq *rt_rq)
|
|
{
|
|
dequeue_top_rt_rq(rt_rq);
|
|
}
|
|
|
|
static inline int rt_rq_throttled(struct rt_rq *rt_rq)
|
|
{
|
|
return rt_rq->rt_throttled;
|
|
}
|
|
|
|
static inline const struct cpumask *sched_rt_period_mask(void)
|
|
{
|
|
return cpu_online_mask;
|
|
}
|
|
|
|
static inline
|
|
struct rt_rq *sched_rt_period_rt_rq(struct rt_bandwidth *rt_b, int cpu)
|
|
{
|
|
return &cpu_rq(cpu)->rt;
|
|
}
|
|
|
|
static inline struct rt_bandwidth *sched_rt_bandwidth(struct rt_rq *rt_rq)
|
|
{
|
|
return &def_rt_bandwidth;
|
|
}
|
|
|
|
#endif /* CONFIG_RT_GROUP_SCHED */
|
|
|
|
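/*
 * Return true while this rt_rq's runtime is still being tracked: either the
 * period timer is active or the queue has not yet consumed its full runtime.
 */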
bool sched_rt_bandwidth_account(struct rt_rq *rt_rq)
|
|
{
|
|
struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
|
|
|
|
return (hrtimer_active(&rt_b->rt_period_timer) ||
|
|
rt_rq->rt_time < rt_b->rt_runtime);
|
|
}
|
|
|
|
#ifdef CONFIG_SMP
|
|
/*
|
|
* We ran out of runtime, see if we can borrow some from our neighbours.
|
|
*/
|
|
static void do_balance_runtime(struct rt_rq *rt_rq)
|
|
{
|
|
struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
|
|
struct root_domain *rd = rq_of_rt_rq(rt_rq)->rd;
|
|
int i, weight;
|
|
u64 rt_period;
|
|
|
|
weight = cpumask_weight(rd->span);
|
|
|
|
raw_spin_lock(&rt_b->rt_runtime_lock);
|
|
rt_period = ktime_to_ns(rt_b->rt_period);
|
|
for_each_cpu(i, rd->span) {
|
|
struct rt_rq *iter = sched_rt_period_rt_rq(rt_b, i);
|
|
s64 diff;
|
|
|
|
if (iter == rt_rq)
|
|
continue;
|
|
|
|
raw_spin_lock(&iter->rt_runtime_lock);
|
|
/*
|
|
* Either all rqs have inf runtime and there's nothing to steal
|
|
* or __disable_runtime() below sets a specific rq to inf to
|
|
		 * indicate it's been disabled and disallow stealing.
|
|
*/
|
|
if (iter->rt_runtime == RUNTIME_INF)
|
|
goto next;
|
|
|
|
/*
|
|
* From runqueues with spare time, take 1/n part of their
|
|
* spare time, but no more than our period.
|
|
*/
|
|
diff = iter->rt_runtime - iter->rt_time;
|
|
if (diff > 0) {
|
|
diff = div_u64((u64)diff, weight);
|
|
if (rt_rq->rt_runtime + diff > rt_period)
|
|
diff = rt_period - rt_rq->rt_runtime;
|
|
iter->rt_runtime -= diff;
|
|
rt_rq->rt_runtime += diff;
|
|
if (rt_rq->rt_runtime == rt_period) {
|
|
raw_spin_unlock(&iter->rt_runtime_lock);
|
|
break;
|
|
}
|
|
}
|
|
next:
|
|
raw_spin_unlock(&iter->rt_runtime_lock);
|
|
}
|
|
raw_spin_unlock(&rt_b->rt_runtime_lock);
|
|
}
|
|
|
|
/*
|
|
 * Ensure this RQ takes back all the runtime it lent to its neighbours.
|
|
*/
|
|
static void __disable_runtime(struct rq *rq)
|
|
{
|
|
struct root_domain *rd = rq->rd;
|
|
rt_rq_iter_t iter;
|
|
struct rt_rq *rt_rq;
|
|
|
|
if (unlikely(!scheduler_running))
|
|
return;
|
|
|
|
for_each_rt_rq(rt_rq, iter, rq) {
|
|
struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
|
|
s64 want;
|
|
int i;
|
|
|
|
raw_spin_lock(&rt_b->rt_runtime_lock);
|
|
raw_spin_lock(&rt_rq->rt_runtime_lock);
|
|
/*
|
|
* Either we're all inf and nobody needs to borrow, or we're
|
|
* already disabled and thus have nothing to do, or we have
|
|
* exactly the right amount of runtime to take out.
|
|
*/
|
|
if (rt_rq->rt_runtime == RUNTIME_INF ||
|
|
rt_rq->rt_runtime == rt_b->rt_runtime)
|
|
goto balanced;
|
|
raw_spin_unlock(&rt_rq->rt_runtime_lock);
|
|
|
|
/*
|
|
* Calculate the difference between what we started out with
|
|
		 * and what we currently have, that's the amount of runtime
		 * we lent out and now have to reclaim.
|
|
*/
|
|
want = rt_b->rt_runtime - rt_rq->rt_runtime;
|
|
|
|
/*
|
|
* Greedy reclaim, take back as much as we can.
|
|
*/
|
|
for_each_cpu(i, rd->span) {
|
|
struct rt_rq *iter = sched_rt_period_rt_rq(rt_b, i);
|
|
s64 diff;
|
|
|
|
/*
|
|
* Can't reclaim from ourselves or disabled runqueues.
|
|
*/
|
|
if (iter == rt_rq || iter->rt_runtime == RUNTIME_INF)
|
|
continue;
|
|
|
|
raw_spin_lock(&iter->rt_runtime_lock);
|
|
if (want > 0) {
|
|
diff = min_t(s64, iter->rt_runtime, want);
|
|
iter->rt_runtime -= diff;
|
|
want -= diff;
|
|
} else {
|
|
iter->rt_runtime -= want;
|
|
want -= want;
|
|
}
|
|
raw_spin_unlock(&iter->rt_runtime_lock);
|
|
|
|
if (!want)
|
|
break;
|
|
}
|
|
|
|
raw_spin_lock(&rt_rq->rt_runtime_lock);
|
|
/*
|
|
* We cannot be left wanting - that would mean some runtime
|
|
* leaked out of the system.
|
|
*/
|
|
BUG_ON(want);
|
|
balanced:
|
|
/*
|
|
* Disable all the borrow logic by pretending we have inf
|
|
* runtime - in which case borrowing doesn't make sense.
|
|
*/
|
|
rt_rq->rt_runtime = RUNTIME_INF;
|
|
rt_rq->rt_throttled = 0;
|
|
raw_spin_unlock(&rt_rq->rt_runtime_lock);
|
|
raw_spin_unlock(&rt_b->rt_runtime_lock);
|
|
|
|
/* Make rt_rq available for pick_next_task() */
|
|
sched_rt_rq_enqueue(rt_rq);
|
|
}
|
|
}
|
|
|
|
static void __enable_runtime(struct rq *rq)
|
|
{
|
|
rt_rq_iter_t iter;
|
|
struct rt_rq *rt_rq;
|
|
|
|
if (unlikely(!scheduler_running))
|
|
return;
|
|
|
|
/*
|
|
* Reset each runqueue's bandwidth settings
|
|
*/
|
|
for_each_rt_rq(rt_rq, iter, rq) {
|
|
struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
|
|
|
|
raw_spin_lock(&rt_b->rt_runtime_lock);
|
|
raw_spin_lock(&rt_rq->rt_runtime_lock);
|
|
rt_rq->rt_runtime = rt_b->rt_runtime;
|
|
rt_rq->rt_time = 0;
|
|
rt_rq->rt_throttled = 0;
|
|
raw_spin_unlock(&rt_rq->rt_runtime_lock);
|
|
raw_spin_unlock(&rt_b->rt_runtime_lock);
|
|
}
|
|
}
|
|
|
|
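/*
 * Called with rt_rq->rt_runtime_lock held; when RT_RUNTIME_SHARE is enabled
 * and we have overrun our runtime, drop the lock and try to borrow from the
 * other CPUs in the root domain.
 */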
static void balance_runtime(struct rt_rq *rt_rq)
|
|
{
|
|
if (!sched_feat(RT_RUNTIME_SHARE))
|
|
return;
|
|
|
|
if (rt_rq->rt_time > rt_rq->rt_runtime) {
|
|
raw_spin_unlock(&rt_rq->rt_runtime_lock);
|
|
do_balance_runtime(rt_rq);
|
|
raw_spin_lock(&rt_rq->rt_runtime_lock);
|
|
}
|
|
}
|
|
#else /* !CONFIG_SMP */
|
|
static inline void balance_runtime(struct rt_rq *rt_rq) {}
|
|
#endif /* CONFIG_SMP */
|
|
|
|
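/*
 * Periodic replenishment: for each rt_rq served by @rt_b, pay back up to
 * @overrun periods worth of runtime and unthrottle queues that fit again.
 * Returns 1 when everything is idle so the bandwidth timer can stop.
 */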
static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
|
|
{
|
|
int i, idle = 1, throttled = 0;
|
|
const struct cpumask *span;
|
|
|
|
span = sched_rt_period_mask();
|
|
#ifdef CONFIG_RT_GROUP_SCHED
|
|
/*
|
|
* FIXME: isolated CPUs should really leave the root task group,
|
|
* whether they are isolcpus or were isolated via cpusets, lest
|
|
* the timer run on a CPU which does not service all runqueues,
|
|
* potentially leaving other CPUs indefinitely throttled. If
|
|
* isolation is really required, the user will turn the throttle
|
|
* off to kill the perturbations it causes anyway. Meanwhile,
|
|
* this maintains functionality for boot and/or troubleshooting.
|
|
*/
|
|
if (rt_b == &root_task_group.rt_bandwidth)
|
|
span = cpu_online_mask;
|
|
#endif
|
|
for_each_cpu(i, span) {
|
|
int enqueue = 0;
|
|
struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
|
|
struct rq *rq = rq_of_rt_rq(rt_rq);
|
|
int skip;
|
|
|
|
/*
|
|
* When span == cpu_online_mask, taking each rq->lock
|
|
* can be time-consuming. Try to avoid it when possible.
|
|
*/
|
|
raw_spin_lock(&rt_rq->rt_runtime_lock);
|
|
if (!sched_feat(RT_RUNTIME_SHARE) && rt_rq->rt_runtime != RUNTIME_INF)
|
|
rt_rq->rt_runtime = rt_b->rt_runtime;
|
|
skip = !rt_rq->rt_time && !rt_rq->rt_nr_running;
|
|
raw_spin_unlock(&rt_rq->rt_runtime_lock);
|
|
if (skip)
|
|
continue;
|
|
|
|
raw_spin_rq_lock(rq);
|
|
update_rq_clock(rq);
|
|
|
|
if (rt_rq->rt_time) {
|
|
u64 runtime;
|
|
|
|
raw_spin_lock(&rt_rq->rt_runtime_lock);
|
|
if (rt_rq->rt_throttled)
|
|
balance_runtime(rt_rq);
|
|
runtime = rt_rq->rt_runtime;
|
|
rt_rq->rt_time -= min(rt_rq->rt_time, overrun*runtime);
|
|
if (rt_rq->rt_throttled && rt_rq->rt_time < runtime) {
|
|
rt_rq->rt_throttled = 0;
|
|
enqueue = 1;
|
|
|
|
/*
|
|
* When we're idle and a woken (rt) task is
|
|
				 * throttled, check_preempt_curr() will set
|
|
* skip_update and the time between the wakeup
|
|
* and this unthrottle will get accounted as
|
|
* 'runtime'.
|
|
*/
|
|
if (rt_rq->rt_nr_running && rq->curr == rq->idle)
|
|
rq_clock_cancel_skipupdate(rq);
|
|
}
|
|
if (rt_rq->rt_time || rt_rq->rt_nr_running)
|
|
idle = 0;
|
|
raw_spin_unlock(&rt_rq->rt_runtime_lock);
|
|
} else if (rt_rq->rt_nr_running) {
|
|
idle = 0;
|
|
if (!rt_rq_throttled(rt_rq))
|
|
enqueue = 1;
|
|
}
|
|
if (rt_rq->rt_throttled)
|
|
throttled = 1;
|
|
|
|
if (enqueue)
|
|
sched_rt_rq_enqueue(rt_rq);
|
|
raw_spin_rq_unlock(rq);
|
|
}
|
|
|
|
if (!throttled && (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF))
|
|
return 1;
|
|
|
|
return idle;
|
|
}
|
|
|
|
static inline int rt_se_prio(struct sched_rt_entity *rt_se)
|
|
{
|
|
#ifdef CONFIG_RT_GROUP_SCHED
|
|
struct rt_rq *rt_rq = group_rt_rq(rt_se);
|
|
|
|
if (rt_rq)
|
|
return rt_rq->highest_prio.curr;
|
|
#endif
|
|
|
|
return rt_task_of(rt_se)->prio;
|
|
}
|
|
|
|
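/*
 * Check whether this rt_rq has run through its allotted runtime for the
 * current period, throttling and dequeueing it if so. Returns 1 when the
 * queue ends up throttled, 0 otherwise.
 */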
static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq)
|
|
{
|
|
u64 runtime = sched_rt_runtime(rt_rq);
|
|
|
|
if (rt_rq->rt_throttled)
|
|
return rt_rq_throttled(rt_rq);
|
|
|
|
if (runtime >= sched_rt_period(rt_rq))
|
|
return 0;
|
|
|
|
balance_runtime(rt_rq);
|
|
runtime = sched_rt_runtime(rt_rq);
|
|
if (runtime == RUNTIME_INF)
|
|
return 0;
|
|
|
|
if (rt_rq->rt_time > runtime) {
|
|
struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
|
|
|
|
/*
|
|
* Don't actually throttle groups that have no runtime assigned
|
|
* but accrue some time due to boosting.
|
|
*/
|
|
if (likely(rt_b->rt_runtime)) {
|
|
rt_rq->rt_throttled = 1;
|
|
printk_deferred_once("sched: RT throttling activated\n");
|
|
|
|
trace_android_vh_dump_throttled_rt_tasks(
|
|
raw_smp_processor_id(),
|
|
rq_clock(rq_of_rt_rq(rt_rq)),
|
|
sched_rt_period(rt_rq),
|
|
runtime,
|
|
hrtimer_get_expires_ns(&rt_b->rt_period_timer));
|
|
} else {
|
|
/*
|
|
* In case we did anyway, make it go away,
|
|
* replenishment is a joke, since it will replenish us
|
|
* with exactly 0 ns.
|
|
*/
|
|
rt_rq->rt_time = 0;
|
|
}
|
|
|
|
if (rt_rq_throttled(rt_rq)) {
|
|
sched_rt_rq_dequeue(rt_rq);
|
|
return 1;
|
|
}
|
|
}
|
|
|
|
return 0;
|
|
}
|
|
|
|
/*
|
|
* Update the current task's runtime statistics. Skip current tasks that
|
|
* are not in our scheduling class.
|
|
*/
|
|
static void update_curr_rt(struct rq *rq)
|
|
{
|
|
struct task_struct *curr = rq->curr;
|
|
struct sched_rt_entity *rt_se = &curr->rt;
|
|
u64 delta_exec;
|
|
u64 now;
|
|
|
|
if (curr->sched_class != &rt_sched_class)
|
|
return;
|
|
|
|
now = rq_clock_task(rq);
|
|
delta_exec = now - curr->se.exec_start;
|
|
if (unlikely((s64)delta_exec <= 0))
|
|
return;
|
|
|
|
schedstat_set(curr->se.statistics.exec_max,
|
|
max(curr->se.statistics.exec_max, delta_exec));
|
|
|
|
curr->se.sum_exec_runtime += delta_exec;
|
|
account_group_exec_runtime(curr, delta_exec);
|
|
|
|
curr->se.exec_start = now;
|
|
cgroup_account_cputime(curr, delta_exec);
|
|
|
|
if (!rt_bandwidth_enabled())
|
|
return;
|
|
|
|
for_each_sched_rt_entity(rt_se) {
|
|
struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
|
|
int exceeded;
|
|
|
|
if (sched_rt_runtime(rt_rq) != RUNTIME_INF) {
|
|
raw_spin_lock(&rt_rq->rt_runtime_lock);
|
|
rt_rq->rt_time += delta_exec;
|
|
exceeded = sched_rt_runtime_exceeded(rt_rq);
|
|
if (exceeded)
|
|
resched_curr(rq);
|
|
raw_spin_unlock(&rt_rq->rt_runtime_lock);
|
|
if (exceeded)
|
|
do_start_rt_bandwidth(sched_rt_bandwidth(rt_rq));
|
|
}
|
|
}
|
|
}
|
|
|
|
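/*
 * Drop the root rt_rq's tasks from this rq's nr_running accounting, e.g.
 * when the whole RT class gets throttled on this CPU.
 */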
static void
|
|
dequeue_top_rt_rq(struct rt_rq *rt_rq)
|
|
{
|
|
struct rq *rq = rq_of_rt_rq(rt_rq);
|
|
|
|
BUG_ON(&rq->rt != rt_rq);
|
|
|
|
if (!rt_rq->rt_queued)
|
|
return;
|
|
|
|
BUG_ON(!rq->nr_running);
|
|
|
|
sub_nr_running(rq, rt_rq->rt_nr_running);
|
|
rt_rq->rt_queued = 0;
|
|
|
|
}
|
|
|
|
static void
|
|
enqueue_top_rt_rq(struct rt_rq *rt_rq)
|
|
{
|
|
struct rq *rq = rq_of_rt_rq(rt_rq);
|
|
|
|
BUG_ON(&rq->rt != rt_rq);
|
|
|
|
if (rt_rq->rt_queued)
|
|
return;
|
|
|
|
if (rt_rq_throttled(rt_rq))
|
|
return;
|
|
|
|
if (rt_rq->rt_nr_running) {
|
|
add_nr_running(rq, rt_rq->rt_nr_running);
|
|
rt_rq->rt_queued = 1;
|
|
}
|
|
|
|
/* Kick cpufreq (see the comment in kernel/sched/sched.h). */
|
|
cpufreq_update_util(rq, 0);
|
|
}
|
|
|
|
#if defined CONFIG_SMP
|
|
|
|
static void
|
|
inc_rt_prio_smp(struct rt_rq *rt_rq, int prio, int prev_prio)
|
|
{
|
|
struct rq *rq = rq_of_rt_rq(rt_rq);
|
|
|
|
#ifdef CONFIG_RT_GROUP_SCHED
|
|
/*
|
|
* Change rq's cpupri only if rt_rq is the top queue.
|
|
*/
|
|
if (&rq->rt != rt_rq)
|
|
return;
|
|
#endif
|
|
if (rq->online && prio < prev_prio)
|
|
cpupri_set(&rq->rd->cpupri, rq->cpu, prio);
|
|
}
|
|
|
|
static void
|
|
dec_rt_prio_smp(struct rt_rq *rt_rq, int prio, int prev_prio)
|
|
{
|
|
struct rq *rq = rq_of_rt_rq(rt_rq);
|
|
|
|
#ifdef CONFIG_RT_GROUP_SCHED
|
|
/*
|
|
* Change rq's cpupri only if rt_rq is the top queue.
|
|
*/
|
|
if (&rq->rt != rt_rq)
|
|
return;
|
|
#endif
|
|
if (rq->online && rt_rq->highest_prio.curr != prev_prio)
|
|
cpupri_set(&rq->rd->cpupri, rq->cpu, rt_rq->highest_prio.curr);
|
|
}
|
|
|
|
#else /* CONFIG_SMP */
|
|
|
|
static inline
|
|
void inc_rt_prio_smp(struct rt_rq *rt_rq, int prio, int prev_prio) {}
|
|
static inline
|
|
void dec_rt_prio_smp(struct rt_rq *rt_rq, int prio, int prev_prio) {}
|
|
|
|
#endif /* CONFIG_SMP */
|
|
|
|
#if defined CONFIG_SMP || defined CONFIG_RT_GROUP_SCHED
|
|
static void
|
|
inc_rt_prio(struct rt_rq *rt_rq, int prio)
|
|
{
|
|
int prev_prio = rt_rq->highest_prio.curr;
|
|
|
|
if (prio < prev_prio)
|
|
rt_rq->highest_prio.curr = prio;
|
|
|
|
inc_rt_prio_smp(rt_rq, prio, prev_prio);
|
|
}
|
|
|
|
static void
|
|
dec_rt_prio(struct rt_rq *rt_rq, int prio)
|
|
{
|
|
int prev_prio = rt_rq->highest_prio.curr;
|
|
|
|
if (rt_rq->rt_nr_running) {
|
|
|
|
WARN_ON(prio < prev_prio);
|
|
|
|
/*
|
|
* This may have been our highest task, and therefore
|
|
* we may have some recomputation to do
|
|
*/
|
|
if (prio == prev_prio) {
|
|
struct rt_prio_array *array = &rt_rq->active;
|
|
|
|
rt_rq->highest_prio.curr =
|
|
sched_find_first_bit(array->bitmap);
|
|
}
|
|
|
|
} else {
|
|
rt_rq->highest_prio.curr = MAX_RT_PRIO-1;
|
|
}
|
|
|
|
dec_rt_prio_smp(rt_rq, prio, prev_prio);
|
|
}
|
|
|
|
#else
|
|
|
|
static inline void inc_rt_prio(struct rt_rq *rt_rq, int prio) {}
|
|
static inline void dec_rt_prio(struct rt_rq *rt_rq, int prio) {}
|
|
|
|
#endif /* CONFIG_SMP || CONFIG_RT_GROUP_SCHED */
|
|
|
|
#ifdef CONFIG_RT_GROUP_SCHED
|
|
|
|
static void
|
|
inc_rt_group(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
|
|
{
|
|
if (rt_se_boosted(rt_se))
|
|
rt_rq->rt_nr_boosted++;
|
|
|
|
if (rt_rq->tg)
|
|
start_rt_bandwidth(&rt_rq->tg->rt_bandwidth);
|
|
}
|
|
|
|
static void
|
|
dec_rt_group(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
|
|
{
|
|
if (rt_se_boosted(rt_se))
|
|
rt_rq->rt_nr_boosted--;
|
|
|
|
WARN_ON(!rt_rq->rt_nr_running && rt_rq->rt_nr_boosted);
|
|
}
|
|
|
|
#else /* CONFIG_RT_GROUP_SCHED */
|
|
|
|
static void
|
|
inc_rt_group(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
|
|
{
|
|
start_rt_bandwidth(&def_rt_bandwidth);
|
|
}
|
|
|
|
static inline
|
|
void dec_rt_group(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq) {}
|
|
|
|
#endif /* CONFIG_RT_GROUP_SCHED */
|
|
|
|
static inline
|
|
unsigned int rt_se_nr_running(struct sched_rt_entity *rt_se)
|
|
{
|
|
struct rt_rq *group_rq = group_rt_rq(rt_se);
|
|
|
|
if (group_rq)
|
|
return group_rq->rt_nr_running;
|
|
else
|
|
return 1;
|
|
}
|
|
|
|
static inline
|
|
unsigned int rt_se_rr_nr_running(struct sched_rt_entity *rt_se)
|
|
{
|
|
struct rt_rq *group_rq = group_rt_rq(rt_se);
|
|
struct task_struct *tsk;
|
|
|
|
if (group_rq)
|
|
return group_rq->rr_nr_running;
|
|
|
|
tsk = rt_task_of(rt_se);
|
|
|
|
return (tsk->policy == SCHED_RR) ? 1 : 0;
|
|
}
|
|
|
|
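/*
 * Account a newly enqueued entity on @rt_rq: running counts, highest
 * priority, migratability and group bookkeeping. dec_rt_tasks() undoes this.
 */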
static inline
|
|
void inc_rt_tasks(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
|
|
{
|
|
int prio = rt_se_prio(rt_se);
|
|
|
|
WARN_ON(!rt_prio(prio));
|
|
rt_rq->rt_nr_running += rt_se_nr_running(rt_se);
|
|
rt_rq->rr_nr_running += rt_se_rr_nr_running(rt_se);
|
|
|
|
inc_rt_prio(rt_rq, prio);
|
|
inc_rt_migration(rt_se, rt_rq);
|
|
inc_rt_group(rt_se, rt_rq);
|
|
}
|
|
|
|
static inline
|
|
void dec_rt_tasks(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
|
|
{
|
|
WARN_ON(!rt_prio(rt_se_prio(rt_se)));
|
|
WARN_ON(!rt_rq->rt_nr_running);
|
|
rt_rq->rt_nr_running -= rt_se_nr_running(rt_se);
|
|
rt_rq->rr_nr_running -= rt_se_rr_nr_running(rt_se);
|
|
|
|
dec_rt_prio(rt_rq, rt_se_prio(rt_se));
|
|
dec_rt_migration(rt_se, rt_rq);
|
|
dec_rt_group(rt_se, rt_rq);
|
|
}
|
|
|
|
/*
|
|
* Change rt_se->run_list location unless SAVE && !MOVE
|
|
*
|
|
* assumes ENQUEUE/DEQUEUE flags match
|
|
*/
|
|
static inline bool move_entity(unsigned int flags)
|
|
{
|
|
if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) == DEQUEUE_SAVE)
|
|
return false;
|
|
|
|
return true;
|
|
}
|
|
|
|
static void __delist_rt_entity(struct sched_rt_entity *rt_se, struct rt_prio_array *array)
|
|
{
|
|
list_del_init(&rt_se->run_list);
|
|
|
|
if (list_empty(array->queue + rt_se_prio(rt_se)))
|
|
__clear_bit(rt_se_prio(rt_se), array->bitmap);
|
|
|
|
rt_se->on_list = 0;
|
|
}
|
|
|
|
static void __enqueue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
|
|
{
|
|
struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
|
|
struct rt_prio_array *array = &rt_rq->active;
|
|
struct rt_rq *group_rq = group_rt_rq(rt_se);
|
|
struct list_head *queue = array->queue + rt_se_prio(rt_se);
|
|
|
|
/*
|
|
	 * Don't enqueue the group if it's throttled, or when empty.
	 * The latter is a consequence of the former when a child group
	 * gets throttled and the current group doesn't have any other
|
|
* active members.
|
|
*/
|
|
if (group_rq && (rt_rq_throttled(group_rq) || !group_rq->rt_nr_running)) {
|
|
if (rt_se->on_list)
|
|
__delist_rt_entity(rt_se, array);
|
|
return;
|
|
}
|
|
|
|
if (move_entity(flags)) {
|
|
WARN_ON_ONCE(rt_se->on_list);
|
|
if (flags & ENQUEUE_HEAD)
|
|
list_add(&rt_se->run_list, queue);
|
|
else
|
|
list_add_tail(&rt_se->run_list, queue);
|
|
|
|
__set_bit(rt_se_prio(rt_se), array->bitmap);
|
|
rt_se->on_list = 1;
|
|
}
|
|
rt_se->on_rq = 1;
|
|
|
|
inc_rt_tasks(rt_se, rt_rq);
|
|
}
|
|
|
|
static void __dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
|
|
{
|
|
struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
|
|
struct rt_prio_array *array = &rt_rq->active;
|
|
|
|
if (move_entity(flags)) {
|
|
WARN_ON_ONCE(!rt_se->on_list);
|
|
__delist_rt_entity(rt_se, array);
|
|
}
|
|
rt_se->on_rq = 0;
|
|
|
|
dec_rt_tasks(rt_se, rt_rq);
|
|
}
|
|
|
|
/*
|
|
* Because the prio of an upper entry depends on the lower
|
|
 * entries, we must remove entries top-down.
|
|
*/
|
|
static void dequeue_rt_stack(struct sched_rt_entity *rt_se, unsigned int flags)
|
|
{
|
|
struct sched_rt_entity *back = NULL;
|
|
|
|
for_each_sched_rt_entity(rt_se) {
|
|
rt_se->back = back;
|
|
back = rt_se;
|
|
}
|
|
|
|
dequeue_top_rt_rq(rt_rq_of_se(back));
|
|
|
|
for (rt_se = back; rt_se; rt_se = rt_se->back) {
|
|
if (on_rt_rq(rt_se))
|
|
__dequeue_rt_entity(rt_se, flags);
|
|
}
|
|
}
|
|
|
|
static void enqueue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
|
|
{
|
|
struct rq *rq = rq_of_rt_se(rt_se);
|
|
|
|
dequeue_rt_stack(rt_se, flags);
|
|
for_each_sched_rt_entity(rt_se)
|
|
__enqueue_rt_entity(rt_se, flags);
|
|
enqueue_top_rt_rq(&rq->rt);
|
|
}
|
|
|
|
static void dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
|
|
{
|
|
struct rq *rq = rq_of_rt_se(rt_se);
|
|
|
|
dequeue_rt_stack(rt_se, flags);
|
|
|
|
for_each_sched_rt_entity(rt_se) {
|
|
struct rt_rq *rt_rq = group_rt_rq(rt_se);
|
|
|
|
if (rt_rq && rt_rq->rt_nr_running)
|
|
__enqueue_rt_entity(rt_se, flags);
|
|
}
|
|
enqueue_top_rt_rq(&rq->rt);
|
|
}
|
|
|
|
/*
|
|
* Adding/removing a task to/from a priority array:
|
|
*/
|
|
static void
|
|
enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)
|
|
{
|
|
struct sched_rt_entity *rt_se = &p->rt;
|
|
|
|
if (flags & ENQUEUE_WAKEUP)
|
|
rt_se->timeout = 0;
|
|
|
|
enqueue_rt_entity(rt_se, flags);
|
|
|
|
if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
|
|
enqueue_pushable_task(rq, p);
|
|
}
|
|
|
|
static void dequeue_task_rt(struct rq *rq, struct task_struct *p, int flags)
|
|
{
|
|
struct sched_rt_entity *rt_se = &p->rt;
|
|
|
|
update_curr_rt(rq);
|
|
dequeue_rt_entity(rt_se, flags);
|
|
|
|
dequeue_pushable_task(rq, p);
|
|
}
|
|
|
|
/*
|
|
* Put task to the head or the end of the run list without the overhead of
|
|
* dequeue followed by enqueue.
|
|
*/
|
|
static void
|
|
requeue_rt_entity(struct rt_rq *rt_rq, struct sched_rt_entity *rt_se, int head)
|
|
{
|
|
if (on_rt_rq(rt_se)) {
|
|
struct rt_prio_array *array = &rt_rq->active;
|
|
struct list_head *queue = array->queue + rt_se_prio(rt_se);
|
|
|
|
if (head)
|
|
list_move(&rt_se->run_list, queue);
|
|
else
|
|
list_move_tail(&rt_se->run_list, queue);
|
|
}
|
|
}
|
|
|
|
static void requeue_task_rt(struct rq *rq, struct task_struct *p, int head)
|
|
{
|
|
struct sched_rt_entity *rt_se = &p->rt;
|
|
struct rt_rq *rt_rq;
|
|
|
|
for_each_sched_rt_entity(rt_se) {
|
|
rt_rq = rt_rq_of_se(rt_se);
|
|
requeue_rt_entity(rt_rq, rt_se, head);
|
|
}
|
|
}
|
|
|
|
static void yield_task_rt(struct rq *rq)
|
|
{
|
|
requeue_task_rt(rq, rq->curr, 0);
|
|
}
|
|
|
|
#ifdef CONFIG_SMP
|
|
static int find_lowest_rq(struct task_struct *task);
|
|
|
|
#ifdef CONFIG_RT_SOFTINT_OPTIMIZATION
|
|
/*
|
|
* Return whether the task on the given cpu is currently non-preemptible
|
|
* while handling a potentially long softint, or if the task is likely
|
|
* to block preemptions soon because it is a ksoftirq thread that is
|
|
* handling slow softints.
|
|
*/
|
|
bool
|
|
task_may_not_preempt(struct task_struct *task, int cpu)
|
|
{
|
|
__u32 softirqs = per_cpu(active_softirqs, cpu) |
|
|
local_softirq_pending();
|
|
|
|
struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
|
|
return ((softirqs & LONG_SOFTIRQ_MASK) &&
|
|
(task == cpu_ksoftirqd ||
|
|
task_thread_info(task)->preempt_count & SOFTIRQ_MASK));
|
|
}
|
|
EXPORT_SYMBOL_GPL(task_may_not_preempt);
|
|
#endif /* CONFIG_RT_SOFTINT_OPTIMIZATION */
|
|
|
|
static int
|
|
select_task_rq_rt(struct task_struct *p, int cpu, int flags)
|
|
{
|
|
struct task_struct *curr;
|
|
struct rq *rq;
|
|
bool test;
|
|
int target_cpu = -1;
|
|
bool may_not_preempt;
|
|
|
|
trace_android_rvh_select_task_rq_rt(p, cpu, flags & 0xF,
|
|
flags, &target_cpu);
|
|
if (target_cpu >= 0)
|
|
return target_cpu;
|
|
|
|
/* For anything but wake ups, just return the task_cpu */
|
|
if (!(flags & (WF_TTWU | WF_FORK)))
|
|
goto out;
|
|
|
|
rq = cpu_rq(cpu);
|
|
|
|
rcu_read_lock();
|
|
curr = READ_ONCE(rq->curr); /* unlocked access */
|
|
|
|
/*
|
|
* If the current task on @p's runqueue is a softirq task,
|
|
* it may run without preemption for a time that is
|
|
* ill-suited for a waiting RT task. Therefore, try to
|
|
* wake this RT task on another runqueue.
|
|
*
|
|
* Also, if the current task on @p's runqueue is an RT task, then
|
|
* try to see if we can wake this RT task up on another
|
|
* runqueue. Otherwise simply start this RT task
|
|
* on its current runqueue.
|
|
*
|
|
* We want to avoid overloading runqueues. If the woken
|
|
* task is a higher priority, then it will stay on this CPU
|
|
* and the lower prio task should be moved to another CPU.
|
|
* Even though this will probably make the lower prio task
|
|
* lose its cache, we do not want to bounce a higher task
|
|
* around just because it gave up its CPU, perhaps for a
|
|
* lock?
|
|
*
|
|
* For equal prio tasks, we just let the scheduler sort it out.
|
|
*
|
|
* Otherwise, just let it ride on the affined RQ and the
|
|
* post-schedule router will push the preempted task away
|
|
*
|
|
* This test is optimistic, if we get it wrong the load-balancer
|
|
* will have to sort it out.
|
|
*
|
|
* We take into account the capacity of the CPU to ensure it fits the
|
|
* requirement of the task - which is only important on heterogeneous
|
|
* systems like big.LITTLE.
|
|
*/
|
|
may_not_preempt = task_may_not_preempt(curr, cpu);
|
|
test = (curr && (may_not_preempt ||
|
|
(unlikely(rt_task(curr)) &&
|
|
(curr->nr_cpus_allowed < 2 || curr->prio <= p->prio))));
|
|
|
|
if (test || !rt_task_fits_capacity(p, cpu)) {
|
|
int target = find_lowest_rq(p);
|
|
|
|
/*
|
|
* Bail out if we were forcing a migration to find a better
|
|
* fitting CPU but our search failed.
|
|
*/
|
|
if (!test && target != -1 && !rt_task_fits_capacity(p, target))
|
|
goto out_unlock;
|
|
|
|
/*
|
|
* If cpu is non-preemptible, prefer remote cpu
|
|
* even if it's running a higher-prio task.
|
|
* Otherwise: Don't bother moving it if the destination CPU is
|
|
* not running a lower priority task.
|
|
*/
|
|
if (target != -1 &&
|
|
(may_not_preempt ||
|
|
p->prio < cpu_rq(target)->rt.highest_prio.curr))
|
|
cpu = target;
|
|
}
|
|
|
|
out_unlock:
|
|
rcu_read_unlock();
|
|
|
|
out:
|
|
return cpu;
|
|
}
|
|
|
|
static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
|
|
{
|
|
/*
|
|
* Current can't be migrated, useless to reschedule,
|
|
* let's hope p can move out.
|
|
*/
|
|
if (rq->curr->nr_cpus_allowed == 1 ||
|
|
!cpupri_find(&rq->rd->cpupri, rq->curr, NULL))
|
|
return;
|
|
|
|
/*
|
|
* p is migratable, so let's not schedule it and
|
|
* see if it is pushed or pulled somewhere else.
|
|
*/
|
|
if (p->nr_cpus_allowed != 1 &&
|
|
cpupri_find(&rq->rd->cpupri, p, NULL))
|
|
return;
|
|
|
|
/*
|
|
* There appear to be other CPUs that can accept
|
|
	 * the current task but none can run 'p', so let's reschedule
|
|
* to try and push the current task away:
|
|
*/
|
|
requeue_task_rt(rq, p, 1);
|
|
resched_curr(rq);
|
|
}
|
|
|
|
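/*
 * The RT class' balance callback: before a new task is picked, try to pull
 * RT tasks from other CPUs if @p is going away and nothing queued here is of
 * equal or higher RT priority.
 */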
static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
|
|
{
|
|
if (!on_rt_rq(&p->rt) && need_pull_rt_task(rq, p)) {
|
|
int done = 0;
|
|
|
|
/*
|
|
* This is OK, because current is on_cpu, which avoids it being
|
|
* picked for load-balance and preemption/IRQs are still
|
|
* disabled avoiding further scheduler activity on it and we've
|
|
* not yet started the picking loop.
|
|
*/
|
|
rq_unpin_lock(rq, rf);
|
|
trace_android_rvh_sched_balance_rt(rq, p, &done);
|
|
if (!done)
|
|
pull_rt_task(rq);
|
|
rq_repin_lock(rq, rf);
|
|
}
|
|
|
|
return sched_stop_runnable(rq) || sched_dl_runnable(rq) || sched_rt_runnable(rq);
|
|
}
|
|
#endif /* CONFIG_SMP */
|
|
|
|
/*
|
|
* Preempt the current task with a newly woken task if needed:
|
|
*/
|
|
static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flags)
|
|
{
|
|
if (p->prio < rq->curr->prio) {
|
|
resched_curr(rq);
|
|
return;
|
|
}
|
|
|
|
#ifdef CONFIG_SMP
|
|
/*
|
|
* If:
|
|
*
|
|
* - the newly woken task is of equal priority to the current task
|
|
* - the newly woken task is non-migratable while current is migratable
|
|
* - current will be preempted on the next reschedule
|
|
*
|
|
* we should check to see if current can readily move to a different
|
|
* cpu. If so, we will reschedule to allow the push logic to try
|
|
* to move current somewhere else, making room for our non-migratable
|
|
* task.
|
|
*/
|
|
if (p->prio == rq->curr->prio && !test_tsk_need_resched(rq->curr))
|
|
check_preempt_equal_prio(rq, p);
|
|
#endif
|
|
}
|
|
|
|
static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool first)
|
|
{
|
|
p->se.exec_start = rq_clock_task(rq);
|
|
|
|
/* The running task is never eligible for pushing */
|
|
dequeue_pushable_task(rq, p);
|
|
|
|
if (!first)
|
|
return;
|
|
|
|
/*
|
|
* If prev task was rt, put_prev_task() has already updated the
|
|
	 * utilization. We only care about the case where we start to schedule
	 * an rt task.
|
|
*/
|
|
if (rq->curr->sched_class != &rt_sched_class)
|
|
update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
|
|
|
|
rt_queue_push_tasks(rq);
|
|
}
|
|
|
|
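/*
 * Return the first entity on the highest-priority non-empty queue of @rt_rq.
 */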
static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
|
|
struct rt_rq *rt_rq)
|
|
{
|
|
struct rt_prio_array *array = &rt_rq->active;
|
|
struct sched_rt_entity *next = NULL;
|
|
struct list_head *queue;
|
|
int idx;
|
|
|
|
idx = sched_find_first_bit(array->bitmap);
|
|
BUG_ON(idx >= MAX_RT_PRIO);
|
|
|
|
queue = array->queue + idx;
|
|
next = list_entry(queue->next, struct sched_rt_entity, run_list);
|
|
|
|
return next;
|
|
}
|
|
|
|
static struct task_struct *_pick_next_task_rt(struct rq *rq)
|
|
{
|
|
struct sched_rt_entity *rt_se;
|
|
struct rt_rq *rt_rq = &rq->rt;
|
|
|
|
do {
|
|
rt_se = pick_next_rt_entity(rq, rt_rq);
|
|
BUG_ON(!rt_se);
|
|
rt_rq = group_rt_rq(rt_se);
|
|
} while (rt_rq);
|
|
|
|
return rt_task_of(rt_se);
|
|
}
|
|
|
|
static struct task_struct *pick_task_rt(struct rq *rq)
|
|
{
|
|
struct task_struct *p;
|
|
|
|
if (!sched_rt_runnable(rq))
|
|
return NULL;
|
|
|
|
p = _pick_next_task_rt(rq);
|
|
|
|
return p;
|
|
}
|
|
|
|
static struct task_struct *pick_next_task_rt(struct rq *rq)
|
|
{
|
|
struct task_struct *p = pick_task_rt(rq);
|
|
|
|
if (p)
|
|
set_next_task_rt(rq, p, true);
|
|
|
|
return p;
|
|
}
|
|
|
|
static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
|
|
{
|
|
update_curr_rt(rq);
|
|
|
|
update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 1);
|
|
|
|
/*
|
|
* The previous task needs to be made eligible for pushing
|
|
* if it is still active
|
|
*/
|
|
if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
|
|
enqueue_pushable_task(rq, p);
|
|
}
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
/* Only try algorithms three times */
|
|
#define RT_MAX_TRIES 3
|
|
|
|
static int pick_rt_task(struct rq *rq, struct task_struct *p, int cpu)
|
|
{
|
|
if (!task_running(rq, p) &&
|
|
cpumask_test_cpu(cpu, &p->cpus_mask))
|
|
return 1;
|
|
|
|
return 0;
|
|
}
|
|
|
|
/*
|
|
* Return the highest pushable rq's task, which is suitable to be executed
|
|
* on the CPU, NULL otherwise
|
|
*/
|
|
struct task_struct *pick_highest_pushable_task(struct rq *rq, int cpu)
|
|
{
|
|
struct plist_head *head = &rq->rt.pushable_tasks;
|
|
struct task_struct *p;
|
|
|
|
if (!has_pushable_tasks(rq))
|
|
return NULL;
|
|
|
|
plist_for_each_entry(p, head, pushable_tasks) {
|
|
if (pick_rt_task(rq, p, cpu))
|
|
return p;
|
|
}
|
|
|
|
return NULL;
|
|
}
|
|
EXPORT_SYMBOL_GPL(pick_highest_pushable_task);
|
|
|
|
static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask);
|
|
|
|
static int find_lowest_rq(struct task_struct *task)
|
|
{
|
|
struct sched_domain *sd;
|
|
struct cpumask *lowest_mask = this_cpu_cpumask_var_ptr(local_cpu_mask);
|
|
int this_cpu = smp_processor_id();
|
|
int cpu = -1;
|
|
int ret;
|
|
|
|
/* Make sure the mask is initialized first */
|
|
if (unlikely(!lowest_mask))
|
|
return -1;
|
|
|
|
if (task->nr_cpus_allowed == 1)
|
|
return -1; /* No other targets possible */
|
|
|
|
/*
|
|
	 * If we're on an asym system, ensure we consider the different capacities
|
|
* of the CPUs when searching for the lowest_mask.
|
|
*/
|
|
if (static_branch_unlikely(&sched_asym_cpucapacity)) {
|
|
|
|
ret = cpupri_find_fitness(&task_rq(task)->rd->cpupri,
|
|
task, lowest_mask,
|
|
rt_task_fits_capacity);
|
|
} else {
|
|
|
|
ret = cpupri_find(&task_rq(task)->rd->cpupri,
|
|
task, lowest_mask);
|
|
}
|
|
|
|
trace_android_rvh_find_lowest_rq(task, lowest_mask, ret, &cpu);
|
|
if (cpu >= 0)
|
|
return cpu;
|
|
|
|
if (!ret)
|
|
return -1; /* No targets found */
|
|
|
|
cpu = task_cpu(task);
|
|
|
|
/*
|
|
* At this point we have built a mask of CPUs representing the
|
|
* lowest priority tasks in the system. Now we want to elect
|
|
* the best one based on our affinity and topology.
|
|
*
|
|
* We prioritize the last CPU that the task executed on since
|
|
* it is most likely cache-hot in that location.
|
|
*/
|
|
if (cpumask_test_cpu(cpu, lowest_mask))
|
|
return cpu;
|
|
|
|
/*
|
|
* Otherwise, we consult the sched_domains span maps to figure
|
|
* out which CPU is logically closest to our hot cache data.
|
|
*/
|
|
if (!cpumask_test_cpu(this_cpu, lowest_mask))
|
|
this_cpu = -1; /* Skip this_cpu opt if not among lowest */
|
|
|
|
rcu_read_lock();
|
|
for_each_domain(cpu, sd) {
|
|
if (sd->flags & SD_WAKE_AFFINE) {
|
|
int best_cpu;
|
|
|
|
/*
|
|
* "this_cpu" is cheaper to preempt than a
|
|
* remote processor.
|
|
*/
|
|
if (this_cpu != -1 &&
|
|
cpumask_test_cpu(this_cpu, sched_domain_span(sd))) {
|
|
rcu_read_unlock();
|
|
return this_cpu;
|
|
}
|
|
|
|
best_cpu = cpumask_any_and_distribute(lowest_mask,
|
|
sched_domain_span(sd));
|
|
if (best_cpu < nr_cpu_ids) {
|
|
rcu_read_unlock();
|
|
return best_cpu;
|
|
}
|
|
}
|
|
}
|
|
rcu_read_unlock();
|
|
|
|
/*
|
|
* And finally, if there were no matches within the domains
|
|
* just give the caller *something* to work with from the compatible
|
|
* locations.
|
|
*/
|
|
if (this_cpu != -1)
|
|
return this_cpu;
|
|
|
|
cpu = cpumask_any_distribute(lowest_mask);
|
|
if (cpu < nr_cpu_ids)
|
|
return cpu;
|
|
|
|
return -1;
|
|
}
|
|
|
|
/* Will lock the rq it finds */
|
|
static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
|
|
{
|
|
struct rq *lowest_rq = NULL;
|
|
int tries;
|
|
int cpu;
|
|
|
|
for (tries = 0; tries < RT_MAX_TRIES; tries++) {
|
|
cpu = find_lowest_rq(task);
|
|
|
|
if ((cpu == -1) || (cpu == rq->cpu))
|
|
break;
|
|
|
|
lowest_rq = cpu_rq(cpu);
|
|
|
|
if (lowest_rq->rt.highest_prio.curr <= task->prio) {
|
|
/*
|
|
* Target rq has tasks of equal or higher priority,
|
|
* retrying does not release any lock and is unlikely
|
|
* to yield a different result.
|
|
*/
|
|
lowest_rq = NULL;
|
|
break;
|
|
}
|
|
|
|
/* if the prio of this runqueue changed, try again */
|
|
if (double_lock_balance(rq, lowest_rq)) {
|
|
/*
|
|
* We had to unlock the run queue. In
|
|
			 * the meantime, the task could have
|
|
* migrated already or had its affinity changed.
|
|
* Also make sure that it wasn't scheduled on its rq.
|
|
*/
|
|
if (unlikely(task_rq(task) != rq ||
|
|
!cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
|
|
task_running(rq, task) ||
|
|
!rt_task(task) ||
|
|
!task_on_rq_queued(task))) {
|
|
|
|
double_unlock_balance(rq, lowest_rq);
|
|
lowest_rq = NULL;
|
|
break;
|
|
}
|
|
}
|
|
|
|
/* If this rq is still suitable use it. */
|
|
if (lowest_rq->rt.highest_prio.curr > task->prio)
|
|
break;
|
|
|
|
/* try again */
|
|
double_unlock_balance(rq, lowest_rq);
|
|
lowest_rq = NULL;
|
|
}
|
|
|
|
return lowest_rq;
|
|
}
|
|
|
|
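/*
 * Return the highest-priority task on this rq's pushable list, or NULL if
 * there is none; the BUG_ON()s spell out the invariants such a task obeys.
 */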
static struct task_struct *pick_next_pushable_task(struct rq *rq)
|
|
{
|
|
struct task_struct *p;
|
|
|
|
if (!has_pushable_tasks(rq))
|
|
return NULL;
|
|
|
|
p = plist_first_entry(&rq->rt.pushable_tasks,
|
|
struct task_struct, pushable_tasks);
|
|
|
|
BUG_ON(rq->cpu != task_cpu(p));
|
|
BUG_ON(task_current(rq, p));
|
|
BUG_ON(p->nr_cpus_allowed <= 1);
|
|
|
|
BUG_ON(!task_on_rq_queued(p));
|
|
BUG_ON(!rt_task(p));
|
|
|
|
return p;
|
|
}
|
|
|
|
/*
|
|
* If the current CPU has more than one RT task, see if the non
|
|
* running task can migrate over to a CPU that is running a task
|
|
* of lesser priority.
|
|
*/
|
|
static int push_rt_task(struct rq *rq, bool pull)
|
|
{
|
|
struct task_struct *next_task;
|
|
struct rq *lowest_rq;
|
|
int ret = 0;
|
|
|
|
if (!rq->rt.overloaded)
|
|
return 0;
|
|
|
|
next_task = pick_next_pushable_task(rq);
|
|
if (!next_task)
|
|
return 0;
|
|
|
|
retry:
|
|
/*
|
|
	 * It's possible that the next_task slipped in with a
	 * higher priority than current. If that's the case
|
|
* just reschedule current.
|
|
*/
|
|
if (unlikely(next_task->prio < rq->curr->prio)) {
|
|
resched_curr(rq);
|
|
return 0;
|
|
}
|
|
|
|
if (is_migration_disabled(next_task)) {
|
|
struct task_struct *push_task = NULL;
|
|
int cpu;
|
|
|
|
if (!pull || rq->push_busy)
|
|
return 0;
|
|
|
|
/*
|
|
* Invoking find_lowest_rq() on anything but an RT task doesn't
|
|
* make sense. Per the above priority check, curr has to
|
|
* be of higher priority than next_task, so no need to
|
|
* reschedule when bailing out.
|
|
*
|
|
* Note that the stoppers are masqueraded as SCHED_FIFO
|
|
* (cf. sched_set_stop_task()), so we can't rely on rt_task().
|
|
*/
|
|
if (rq->curr->sched_class != &rt_sched_class)
|
|
return 0;
|
|
|
|
cpu = find_lowest_rq(rq->curr);
|
|
if (cpu == -1 || cpu == rq->cpu)
|
|
return 0;
|
|
|
|
/*
|
|
		 * Given we found a CPU with lower priority than @next_task,
		 * it should be running. However, we cannot migrate it
		 * to this other CPU; instead, attempt to push the current
|
|
* running task on this CPU away.
|
|
*/
|
|
push_task = get_push_task(rq);
|
|
if (push_task) {
|
|
raw_spin_rq_unlock(rq);
|
|
stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
|
|
push_task, &rq->push_work);
|
|
raw_spin_rq_lock(rq);
|
|
}
|
|
|
|
return 0;
|
|
}
|
|
|
|
if (WARN_ON(next_task == rq->curr))
|
|
return 0;
|
|
|
|
/* We might release rq lock */
|
|
get_task_struct(next_task);
|
|
|
|
/* find_lock_lowest_rq locks the rq if found */
|
|
lowest_rq = find_lock_lowest_rq(next_task, rq);
|
|
if (!lowest_rq) {
|
|
struct task_struct *task;
|
|
/*
|
|
* find_lock_lowest_rq releases rq->lock
|
|
* so it is possible that next_task has migrated.
|
|
*
|
|
* We need to make sure that the task is still on the same
|
|
* run-queue and is also still the next task eligible for
|
|
* pushing.
|
|
*/
|
|
task = pick_next_pushable_task(rq);
|
|
if (task == next_task) {
|
|
/*
|
|
* The task hasn't migrated, and is still the next
|
|
* eligible task, but we failed to find a run-queue
|
|
* to push it to. Do not retry in this case, since
|
|
* other CPUs will pull from us when ready.
|
|
*/
|
|
goto out;
|
|
}
|
|
|
|
if (!task)
|
|
/* No more tasks, just exit */
|
|
goto out;
|
|
|
|
/*
|
|
* Something has shifted, try again.
|
|
*/
|
|
put_task_struct(next_task);
|
|
next_task = task;
|
|
goto retry;
|
|
}
|
|
|
|
deactivate_task(rq, next_task, 0);
|
|
set_task_cpu(next_task, lowest_rq->cpu);
|
|
activate_task(lowest_rq, next_task, 0);
|
|
resched_curr(lowest_rq);
|
|
ret = 1;
|
|
|
|
double_unlock_balance(rq, lowest_rq);
|
|
out:
|
|
put_task_struct(next_task);
|
|
|
|
return ret;
|
|
}
|
|
|
|
static void push_rt_tasks(struct rq *rq)
|
|
{
|
|
/* push_rt_task will return true if it moved an RT */
|
|
while (push_rt_task(rq, false))
|
|
;
|
|
}
|
|
|
|
#ifdef HAVE_RT_PUSH_IPI
|
|
|
|
/*
 * When a high priority task schedules out from a CPU and a lower priority
 * task is scheduled in, a check is made to see if there are any RT tasks
 * on other CPUs that are waiting to run because a higher priority RT task
 * is currently running on its CPU. In this case, the CPU with multiple RT
 * tasks queued on it (overloaded) needs to be notified that a CPU has opened
 * up that may be able to run one of its non-running queued RT tasks.
 *
 * All CPUs with overloaded RT tasks need to be notified, as there is currently
 * no way to know which of these CPUs has the highest priority task waiting
 * to run. Instead of trying to take a spinlock on each of these CPUs,
 * which has been shown to cause large latency when done on machines with many
 * CPUs, an IPI is sent to the CPUs to have them push off the overloaded
 * RT tasks waiting to run.
 *
 * Just sending an IPI to each of the CPUs is also an issue, as on large
 * count CPU machines this can cause an IPI storm on a CPU, especially
 * if it's the only CPU with multiple RT tasks queued, and a large number
 * of CPUs are scheduling a lower priority task at the same time.
 *
 * Each root domain has its own irq work function that can iterate over
 * all CPUs with RT overloaded tasks. Since all CPUs with overloaded RT
 * tasks must be checked whenever one or many CPUs are lowering
 * their priority, there's a single irq work iterator that will try to
 * push off RT tasks that are waiting to run.
 *
 * When a CPU schedules a lower priority task, it will kick off the
 * irq work iterator that will jump to each CPU with overloaded RT tasks.
 * As it only takes the first CPU that schedules a lower priority task
 * to start the process, the rto_start variable is incremented and if
 * the atomic result is one, then that CPU will try to take the rto_lock.
 * This prevents high contention on the lock as the process handles all
 * CPUs scheduling lower priority tasks.
 *
 * All CPUs that are scheduling a lower priority task will increment the
 * rto_loop_next variable. This will make sure that the irq work iterator
 * checks all RT overloaded CPUs whenever a CPU schedules a new lower
 * priority task, even if the iterator is in the middle of a scan. Incrementing
 * rto_loop_next will cause the iterator to perform another scan.
 */
|
|
static int rto_next_cpu(struct root_domain *rd)
|
|
{
|
|
int next;
|
|
int cpu;
|
|
|
|
/*
|
|
* When starting the IPI RT pushing, the rto_cpu is set to -1,
|
|
	 * rto_next_cpu() will simply return the first CPU found in
|
|
* the rto_mask.
|
|
*
|
|
* If rto_next_cpu() is called with rto_cpu is a valid CPU, it
|
|
* will return the next CPU found in the rto_mask.
|
|
*
|
|
* If there are no more CPUs left in the rto_mask, then a check is made
|
|
* against rto_loop and rto_loop_next. rto_loop is only updated with
|
|
* the rto_lock held, but any CPU may increment the rto_loop_next
|
|
* without any locking.
|
|
*/
|
|
for (;;) {
|
|
|
|
/* When rto_cpu is -1 this acts like cpumask_first() */
|
|
cpu = cpumask_next(rd->rto_cpu, rd->rto_mask);
|
|
|
|
		/* This can be any CPU in rd->rto_mask; the vendor hook below may update it (e.g. to skip a halted CPU) */
|
|
trace_android_rvh_rto_next_cpu(rd->rto_cpu, rd->rto_mask, &cpu);
|
|
|
|
rd->rto_cpu = cpu;
|
|
|
|
if (cpu < nr_cpu_ids)
|
|
return cpu;
|
|
|
|
rd->rto_cpu = -1;
|
|
|
|
/*
|
|
* ACQUIRE ensures we see the @rto_mask changes
|
|
* made prior to the @next value observed.
|
|
*
|
|
* Matches WMB in rt_set_overload().
|
|
*/
|
|
next = atomic_read_acquire(&rd->rto_loop_next);
|
|
|
|
if (rd->rto_loop == next)
|
|
break;
|
|
|
|
rd->rto_loop = next;
|
|
}
|
|
|
|
return -1;
|
|
}
|
|
|
|
static inline bool rto_start_trylock(atomic_t *v)
|
|
{
|
|
return !atomic_cmpxchg_acquire(v, 0, 1);
|
|
}
|
|
|
|
static inline void rto_start_unlock(atomic_t *v)
|
|
{
|
|
atomic_set_release(v, 0);
|
|
}
|
|
|
|
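/*
 * Start (or extend) the IPI push chain: bump rto_loop_next so a running
 * iterator rescans all overloaded CPUs, and kick the irq work on the first
 * overloaded CPU if no iterator is currently active.
 */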
static void tell_cpu_to_push(struct rq *rq)
|
|
{
|
|
int cpu = -1;
|
|
|
|
/* Keep the loop going if the IPI is currently active */
|
|
atomic_inc(&rq->rd->rto_loop_next);
|
|
|
|
/* Only one CPU can initiate a loop at a time */
|
|
if (!rto_start_trylock(&rq->rd->rto_loop_start))
|
|
return;
|
|
|
|
raw_spin_lock(&rq->rd->rto_lock);
|
|
|
|
/*
|
|
	 * The rto_cpu is updated under the lock; if it has a valid CPU
|
|
* then the IPI is still running and will continue due to the
|
|
* update to loop_next, and nothing needs to be done here.
|
|
* Otherwise it is finishing up and an ipi needs to be sent.
|
|
*/
|
|
if (rq->rd->rto_cpu < 0)
|
|
cpu = rto_next_cpu(rq->rd);
|
|
|
|
raw_spin_unlock(&rq->rd->rto_lock);
|
|
|
|
rto_start_unlock(&rq->rd->rto_loop_start);
|
|
|
|
if (cpu >= 0) {
|
|
/* Make sure the rd does not get freed while pushing */
|
|
sched_get_rd(rq->rd);
|
|
irq_work_queue_on(&rq->rd->rto_push_work, cpu);
|
|
}
|
|
}
|
|
|
|
/* Called from hardirq context */
|
|
void rto_push_irq_work_func(struct irq_work *work)
|
|
{
|
|
struct root_domain *rd =
|
|
container_of(work, struct root_domain, rto_push_work);
|
|
struct rq *rq;
|
|
int cpu;
|
|
|
|
rq = this_rq();
|
|
|
|
/*
|
|
* We do not need to grab the lock to check for has_pushable_tasks.
|
|
* When it gets updated, a check is made if a push is possible.
|
|
*/
|
|
if (has_pushable_tasks(rq)) {
|
|
raw_spin_rq_lock(rq);
|
|
while (push_rt_task(rq, true))
|
|
;
|
|
raw_spin_rq_unlock(rq);
|
|
}
|
|
|
|
raw_spin_lock(&rd->rto_lock);
|
|
|
|
/* Pass the IPI to the next rt overloaded queue */
|
|
cpu = rto_next_cpu(rd);
|
|
|
|
raw_spin_unlock(&rd->rto_lock);
|
|
|
|
if (cpu < 0) {
|
|
sched_put_rd(rd);
|
|
return;
|
|
}
|
|
|
|
/* Try the next RT overloaded CPU */
|
|
irq_work_queue_on(&rd->rto_push_work, cpu);
|
|
}
|
|
#endif /* HAVE_RT_PUSH_IPI */
|
|
|
|
static void pull_rt_task(struct rq *this_rq)
|
|
{
|
|
int this_cpu = this_rq->cpu, cpu;
|
|
bool resched = false;
|
|
struct task_struct *p, *push_task;
|
|
struct rq *src_rq;
|
|
int rt_overload_count = rt_overloaded(this_rq);
|
|
|
|
if (likely(!rt_overload_count))
|
|
return;
|
|
|
|
/*
|
|
	 * Match the barrier from rt_set_overload(); this guarantees that if we
|
|
* see overloaded we must also see the rto_mask bit.
|
|
*/
|
|
smp_rmb();
|
|
|
|
/* If we are the only overloaded CPU do nothing */
|
|
if (rt_overload_count == 1 &&
|
|
cpumask_test_cpu(this_rq->cpu, this_rq->rd->rto_mask))
|
|
return;
|
|
|
|
#ifdef HAVE_RT_PUSH_IPI
|
|
if (sched_feat(RT_PUSH_IPI)) {
|
|
tell_cpu_to_push(this_rq);
|
|
return;
|
|
}
|
|
#endif
|
|
|
|
for_each_cpu(cpu, this_rq->rd->rto_mask) {
|
|
if (this_cpu == cpu)
|
|
continue;
|
|
|
|
src_rq = cpu_rq(cpu);
|
|
|
|
/*
|
|
* Don't bother taking the src_rq->lock if the next highest
|
|
* task is known to be lower-priority than our current task.
|
|
* This may look racy, but if this value is about to go
|
|
* logically higher, the src_rq will push this task away.
|
|
		 * And if it's going logically lower, we do not care.
|
|
*/
|
|
if (src_rq->rt.highest_prio.next >=
|
|
this_rq->rt.highest_prio.curr)
|
|
continue;
|
|
|
|
/*
|
|
* We can potentially drop this_rq's lock in
|
|
* double_lock_balance, and another CPU could
|
|
* alter this_rq
|
|
*/
|
|
push_task = NULL;
|
|
double_lock_balance(this_rq, src_rq);
|
|
|
|
/*
|
|
* We can pull only a task, which is pushable
|
|
* on its rq, and no others.
|
|
*/
|
|
p = pick_highest_pushable_task(src_rq, this_cpu);
|
|
|
|
/*
|
|
* Do we have an RT task that preempts
|
|
* the to-be-scheduled task?
|
|
*/
|
|
if (p && (p->prio < this_rq->rt.highest_prio.curr)) {
|
|
WARN_ON(p == src_rq->curr);
|
|
WARN_ON(!task_on_rq_queued(p));
|
|
|
|
/*
|
|
* There's a chance that p is higher in priority
|
|
* than what's currently running on its CPU.
|
|
* This is just that p is waking up and hasn't
|
|
* had a chance to schedule. We only pull
|
|
* p if it is lower in priority than the
|
|
* current task on the run queue
|
|
*/
|
|
if (p->prio < src_rq->curr->prio)
|
|
goto skip;
|
|
|
|
if (is_migration_disabled(p)) {
|
|
push_task = get_push_task(src_rq);
|
|
} else {
|
|
deactivate_task(src_rq, p, 0);
|
|
set_task_cpu(p, this_cpu);
|
|
activate_task(this_rq, p, 0);
|
|
resched = true;
|
|
}
|
|
/*
|
|
* We continue with the search, just in
|
|
* case there's an even higher prio task
|
|
* in another runqueue. (low likelihood
|
|
* but possible)
|
|
*/
|
|
}
|
|
skip:
|
|
double_unlock_balance(this_rq, src_rq);
|
|
|
|
if (push_task) {
|
|
raw_spin_rq_unlock(this_rq);
|
|
stop_one_cpu_nowait(src_rq->cpu, push_cpu_stop,
|
|
push_task, &src_rq->push_work);
|
|
raw_spin_rq_lock(this_rq);
|
|
}
|
|
}
|
|
|
|
if (resched)
|
|
resched_curr(this_rq);
|
|
}

/*
 * If we are not running and we are not going to reschedule soon, we should
 * try to push tasks away now
 */
static void task_woken_rt(struct rq *rq, struct task_struct *p)
{
        bool need_to_push = !task_running(rq, p) &&
                            !test_tsk_need_resched(rq->curr) &&
                            p->nr_cpus_allowed > 1 &&
                            (dl_task(rq->curr) || rt_task(rq->curr)) &&
                            (rq->curr->nr_cpus_allowed < 2 ||
                             rq->curr->prio <= p->prio);

        if (need_to_push)
                push_rt_tasks(rq);
}

/* Assumes rq->lock is held */
static void rq_online_rt(struct rq *rq)
{
        if (rq->rt.overloaded)
                rt_set_overload(rq);

        __enable_runtime(rq);

        cpupri_set(&rq->rd->cpupri, rq->cpu, rq->rt.highest_prio.curr);
}

/* Assumes rq->lock is held */
static void rq_offline_rt(struct rq *rq)
{
        if (rq->rt.overloaded)
                rt_clear_overload(rq);

        __disable_runtime(rq);

        cpupri_set(&rq->rd->cpupri, rq->cpu, CPUPRI_INVALID);
}

/*
 * When switching from the rt queue, we bring ourselves to a position
 * that we might want to pull RT tasks from other runqueues.
 */
static void switched_from_rt(struct rq *rq, struct task_struct *p)
{
        /*
         * If there are other RT tasks then we will reschedule
         * and the scheduling of the other RT tasks will handle
         * the balancing. But if we are the last RT task
         * we may need to handle the pulling of RT tasks
         * now.
         */
        if (!task_on_rq_queued(p) || rq->rt.rt_nr_running)
                return;

        rt_queue_pull_task(rq);
}

void __init init_sched_rt_class(void)
{
        unsigned int i;

        for_each_possible_cpu(i) {
                zalloc_cpumask_var_node(&per_cpu(local_cpu_mask, i),
                                        GFP_KERNEL, cpu_to_node(i));
        }
}
#endif /* CONFIG_SMP */

/*
 * When switching a task to RT, we may overload the runqueue
 * with RT tasks. In this case we try to push them off to
 * other runqueues.
 */
static void switched_to_rt(struct rq *rq, struct task_struct *p)
{
        /*
         * If we are running, update the avg_rt tracking, as the running time
         * will from now on be accounted into the latter.
         */
        if (task_current(rq, p)) {
                update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
                return;
        }

        /*
         * If we are not running we may need to preempt the current
         * running task. If that current running task is also an RT task
         * then see if we can move to another run queue.
         */
        if (task_on_rq_queued(p)) {
#ifdef CONFIG_SMP
                if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
                        rt_queue_push_tasks(rq);
#endif /* CONFIG_SMP */
                if (p->prio < rq->curr->prio && cpu_online(cpu_of(rq)))
                        resched_curr(rq);
        }
}

/*
 * Priority of the task has changed. This may cause
 * us to initiate a push or pull.
 */
static void
prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
{
        if (!task_on_rq_queued(p))
                return;

        if (task_current(rq, p)) {
#ifdef CONFIG_SMP
                /*
                 * If our priority decreases while running, we
                 * may need to pull tasks to this runqueue.
                 */
                if (oldprio < p->prio)
                        rt_queue_pull_task(rq);

                /*
                 * If there's a higher priority task waiting to run
                 * then reschedule.
                 */
                if (p->prio > rq->rt.highest_prio.curr)
                        resched_curr(rq);
#else
                /* For UP simply resched on drop of prio */
                if (oldprio < p->prio)
                        resched_curr(rq);
#endif /* CONFIG_SMP */
        } else {
                /*
                 * This task is not running, but if it is
                 * greater than the current running task
                 * then reschedule.
                 */
                if (p->prio < rq->curr->prio)
                        resched_curr(rq);
        }
}

#ifdef CONFIG_POSIX_TIMERS
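/*
 * RLIMIT_RTTIME enforcement: count how long this task has been observed
 * running as RT and kick its posix cputimers once the soft limit is exceeded.
 */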
static void watchdog(struct rq *rq, struct task_struct *p)
{
        unsigned long soft, hard;

        /* max may change after cur was read, this will be fixed next tick */
        soft = task_rlimit(p, RLIMIT_RTTIME);
        hard = task_rlimit_max(p, RLIMIT_RTTIME);

        if (soft != RLIM_INFINITY) {
                unsigned long next;

                if (p->rt.watchdog_stamp != jiffies) {
                        p->rt.timeout++;
                        p->rt.watchdog_stamp = jiffies;
                }

                next = DIV_ROUND_UP(min(soft, hard), USEC_PER_SEC/HZ);
                if (p->rt.timeout > next) {
                        posix_cputimers_rt_watchdog(&p->posix_cputimers,
                                                    p->se.sum_exec_runtime);
                }
        }
}
#else
static inline void watchdog(struct rq *rq, struct task_struct *p) { }
#endif

/*
 * scheduler tick hitting a task of our scheduling class.
 *
 * NOTE: This function can be called remotely by the tick offload that
 * goes along full dynticks. Therefore no local assumption can be made
 * and everything must be accessed through the @rq and @curr passed in
 * parameters.
 */
static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
{
        struct sched_rt_entity *rt_se = &p->rt;

        update_curr_rt(rq);
        update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 1);

        watchdog(rq, p);

        /*
         * RR tasks need a special form of timeslice management.
         * FIFO tasks have no timeslices.
         */
        if (p->policy != SCHED_RR)
                return;

        if (--p->rt.time_slice)
                return;

        p->rt.time_slice = sched_rr_timeslice;

        /*
         * Requeue to the end of queue if we (and all of our ancestors) are not
         * the only element on the queue
         */
        for_each_sched_rt_entity(rt_se) {
                if (rt_se->run_list.prev != rt_se->run_list.next) {
                        requeue_task_rt(rq, p, 0);
                        resched_curr(rq);
                        return;
                }
        }
}

static unsigned int get_rr_interval_rt(struct rq *rq, struct task_struct *task)
{
        /*
         * Time slice is 0 for SCHED_FIFO tasks
         */
        if (task->policy == SCHED_RR)
                return sched_rr_timeslice;
        else
                return 0;
}

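/* The callbacks the core scheduler uses to drive the RT scheduling class. */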
DEFINE_SCHED_CLASS(rt) = {

        .enqueue_task = enqueue_task_rt,
        .dequeue_task = dequeue_task_rt,
        .yield_task = yield_task_rt,

        .check_preempt_curr = check_preempt_curr_rt,

        .pick_next_task = pick_next_task_rt,
        .put_prev_task = put_prev_task_rt,
        .set_next_task = set_next_task_rt,

#ifdef CONFIG_SMP
        .balance = balance_rt,
        .pick_task = pick_task_rt,
        .select_task_rq = select_task_rq_rt,
        .set_cpus_allowed = set_cpus_allowed_common,
        .rq_online = rq_online_rt,
        .rq_offline = rq_offline_rt,
        .task_woken = task_woken_rt,
        .switched_from = switched_from_rt,
        .find_lock_rq = find_lock_lowest_rq,
#endif

        .task_tick = task_tick_rt,

        .get_rr_interval = get_rr_interval_rt,

        .prio_changed = prio_changed_rt,
        .switched_to = switched_to_rt,

        .update_curr = update_curr_rt,

#ifdef CONFIG_UCLAMP_TASK
        .uclamp_enabled = 1,
#endif
};

#ifdef CONFIG_RT_GROUP_SCHED
/*
 * Ensure that the real time constraints are schedulable.
 */
static DEFINE_MUTEX(rt_constraints_mutex);

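/* Walk every task in the group's css and report whether any of them is RT. */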
static inline int tg_has_rt_tasks(struct task_group *tg)
{
        struct task_struct *task;
        struct css_task_iter it;
        int ret = 0;

        /*
         * Autogroups do not have RT tasks; see autogroup_create().
         */
        if (task_group_is_autogroup(tg))
                return 0;

        css_task_iter_start(&tg->css, 0, &it);
        while (!ret && (task = css_task_iter_next(&it)))
                ret |= rt_task(task);
        css_task_iter_end(&it);

        return ret;
}

struct rt_schedulable_data {
        struct task_group *tg;
        u64 rt_period;
        u64 rt_runtime;
};

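/*
 * Walk-callback for walk_tg_tree(): check that the proposed period/runtime
 * for d->tg keeps this group within the global limit and that its children
 * never claim more bandwidth than the group itself provides.
 */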
static int tg_rt_schedulable(struct task_group *tg, void *data)
{
        struct rt_schedulable_data *d = data;
        struct task_group *child;
        unsigned long total, sum = 0;
        u64 period, runtime;

        period = ktime_to_ns(tg->rt_bandwidth.rt_period);
        runtime = tg->rt_bandwidth.rt_runtime;

        if (tg == d->tg) {
                period = d->rt_period;
                runtime = d->rt_runtime;
        }

        /*
         * Cannot have more runtime than the period.
         */
        if (runtime > period && runtime != RUNTIME_INF)
                return -EINVAL;

        /*
         * Ensure we don't starve existing RT tasks if runtime turns zero.
         */
        if (rt_bandwidth_enabled() && !runtime &&
            tg->rt_bandwidth.rt_runtime && tg_has_rt_tasks(tg))
                return -EBUSY;

        total = to_ratio(period, runtime);

        /*
         * Nobody can have more than the global setting allows.
         */
        if (total > to_ratio(global_rt_period(), global_rt_runtime()))
                return -EINVAL;

        /*
         * The sum of our children's runtime should not exceed our own.
         */
        list_for_each_entry_rcu(child, &tg->children, siblings) {
                period = ktime_to_ns(child->rt_bandwidth.rt_period);
                runtime = child->rt_bandwidth.rt_runtime;

                if (child == d->tg) {
                        period = d->rt_period;
                        runtime = d->rt_runtime;
                }

                sum += to_ratio(period, runtime);
        }

        if (sum > total)
                return -EINVAL;

        return 0;
}

static int __rt_schedulable(struct task_group *tg, u64 period, u64 runtime)
{
        int ret;

        struct rt_schedulable_data data = {
                .tg = tg,
                .rt_period = period,
                .rt_runtime = runtime,
        };

        rcu_read_lock();
        ret = walk_tg_tree(tg_rt_schedulable, tg_nop, &data);
        rcu_read_unlock();

        return ret;
}

static int tg_set_rt_bandwidth(struct task_group *tg,
                u64 rt_period, u64 rt_runtime)
{
        int i, err = 0;

        /*
         * Disallowing the root group RT runtime is BAD, it would disallow the
         * kernel creating (and/or operating) RT threads.
         */
        if (tg == &root_task_group && rt_runtime == 0)
                return -EINVAL;

        /* No period doesn't make any sense. */
        if (rt_period == 0)
                return -EINVAL;

        /*
         * Bound quota to defend against overflow during bandwidth shift.
         */
        if (rt_runtime != RUNTIME_INF && rt_runtime > max_rt_runtime)
                return -EINVAL;

        mutex_lock(&rt_constraints_mutex);
        err = __rt_schedulable(tg, rt_period, rt_runtime);
        if (err)
                goto unlock;

        raw_spin_lock_irq(&tg->rt_bandwidth.rt_runtime_lock);
        tg->rt_bandwidth.rt_period = ns_to_ktime(rt_period);
        tg->rt_bandwidth.rt_runtime = rt_runtime;

        for_each_possible_cpu(i) {
                struct rt_rq *rt_rq = tg->rt_rq[i];

                raw_spin_lock(&rt_rq->rt_runtime_lock);
                rt_rq->rt_runtime = rt_runtime;
                raw_spin_unlock(&rt_rq->rt_runtime_lock);
        }
        raw_spin_unlock_irq(&tg->rt_bandwidth.rt_runtime_lock);
unlock:
        mutex_unlock(&rt_constraints_mutex);

        return err;
}

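/*
 * Used by the cpu cgroup's rt_runtime_us interface: values are given in
 * microseconds, and a negative value means unlimited runtime (RUNTIME_INF).
 */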
int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
{
        u64 rt_runtime, rt_period;

        rt_period = ktime_to_ns(tg->rt_bandwidth.rt_period);
        rt_runtime = (u64)rt_runtime_us * NSEC_PER_USEC;
        if (rt_runtime_us < 0)
                rt_runtime = RUNTIME_INF;
        else if ((u64)rt_runtime_us > U64_MAX / NSEC_PER_USEC)
                return -EINVAL;

        return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
}

long sched_group_rt_runtime(struct task_group *tg)
{
        u64 rt_runtime_us;

        if (tg->rt_bandwidth.rt_runtime == RUNTIME_INF)
                return -1;

        rt_runtime_us = tg->rt_bandwidth.rt_runtime;
        do_div(rt_runtime_us, NSEC_PER_USEC);
        return rt_runtime_us;
}

int sched_group_set_rt_period(struct task_group *tg, u64 rt_period_us)
{
        u64 rt_runtime, rt_period;

        if (rt_period_us > U64_MAX / NSEC_PER_USEC)
                return -EINVAL;

        rt_period = rt_period_us * NSEC_PER_USEC;
        rt_runtime = tg->rt_bandwidth.rt_runtime;

        return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
}

long sched_group_rt_period(struct task_group *tg)
{
        u64 rt_period_us;

        rt_period_us = ktime_to_ns(tg->rt_bandwidth.rt_period);
        do_div(rt_period_us, NSEC_PER_USEC);
        return rt_period_us;
}

static int sched_rt_global_constraints(void)
{
        int ret = 0;

        mutex_lock(&rt_constraints_mutex);
        ret = __rt_schedulable(NULL, 0, 0);
        mutex_unlock(&rt_constraints_mutex);

        return ret;
}

int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk)
{
        /* Don't accept realtime tasks when there is no way for them to run */
        if (rt_task(tsk) && tg->rt_bandwidth.rt_runtime == 0)
                return 0;

        return 1;
}

#else /* !CONFIG_RT_GROUP_SCHED */
static int sched_rt_global_constraints(void)
{
        unsigned long flags;
        int i;

        raw_spin_lock_irqsave(&def_rt_bandwidth.rt_runtime_lock, flags);
        for_each_possible_cpu(i) {
                struct rt_rq *rt_rq = &cpu_rq(i)->rt;

                raw_spin_lock(&rt_rq->rt_runtime_lock);
                rt_rq->rt_runtime = global_rt_runtime();
                raw_spin_unlock(&rt_rq->rt_runtime_lock);
        }
        raw_spin_unlock_irqrestore(&def_rt_bandwidth.rt_runtime_lock, flags);

        return 0;
}
#endif /* CONFIG_RT_GROUP_SCHED */

static int sched_rt_global_validate(void)
{
        if (sysctl_sched_rt_period <= 0)
                return -EINVAL;

        if ((sysctl_sched_rt_runtime != RUNTIME_INF) &&
                ((sysctl_sched_rt_runtime > sysctl_sched_rt_period) ||
                 ((u64)sysctl_sched_rt_runtime *
                        NSEC_PER_USEC > max_rt_runtime)))
                return -EINVAL;

        return 0;
}

static void sched_rt_do_global(void)
{
        unsigned long flags;

        raw_spin_lock_irqsave(&def_rt_bandwidth.rt_runtime_lock, flags);
        def_rt_bandwidth.rt_runtime = global_rt_runtime();
        def_rt_bandwidth.rt_period = ns_to_ktime(global_rt_period());
        raw_spin_unlock_irqrestore(&def_rt_bandwidth.rt_runtime_lock, flags);
}

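/*
 * sysctl handler for sched_rt_period_us / sched_rt_runtime_us: validate the
 * new values, propagate them to the RT and deadline bandwidth code, and roll
 * the sysctls back if anything fails.
 */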
int sched_rt_handler(struct ctl_table *table, int write, void *buffer,
                size_t *lenp, loff_t *ppos)
{
        int old_period, old_runtime;
        static DEFINE_MUTEX(mutex);
        int ret;

        mutex_lock(&mutex);
        old_period = sysctl_sched_rt_period;
        old_runtime = sysctl_sched_rt_runtime;

        ret = proc_dointvec(table, write, buffer, lenp, ppos);

        if (!ret && write) {
                ret = sched_rt_global_validate();
                if (ret)
                        goto undo;

                ret = sched_dl_global_validate();
                if (ret)
                        goto undo;

                ret = sched_rt_global_constraints();
                if (ret)
                        goto undo;

                sched_rt_do_global();
                sched_dl_do_global();
        }
        if (0) {
undo:
                sysctl_sched_rt_period = old_period;
                sysctl_sched_rt_runtime = old_runtime;
        }
        mutex_unlock(&mutex);

        return ret;
}

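/*
 * sysctl handler for the SCHED_RR timeslice: keep the internal value in
 * jiffies and treat a write of zero or a negative value as a reset to the
 * default RR_TIMESLICE.
 */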
int sched_rr_handler(struct ctl_table *table, int write, void *buffer,
                size_t *lenp, loff_t *ppos)
{
        int ret;
        static DEFINE_MUTEX(mutex);

        mutex_lock(&mutex);
        ret = proc_dointvec(table, write, buffer, lenp, ppos);
        /*
         * Make sure that internally we keep jiffies.
         * Also, writing zero resets the timeslice to default:
         */
        if (!ret && write) {
                sched_rr_timeslice =
                        sysctl_sched_rr_timeslice <= 0 ? RR_TIMESLICE :
                        msecs_to_jiffies(sysctl_sched_rr_timeslice);
        }
        mutex_unlock(&mutex);

        return ret;
}

#ifdef CONFIG_SCHED_DEBUG
void print_rt_stats(struct seq_file *m, int cpu)
{
        rt_rq_iter_t iter;
        struct rt_rq *rt_rq;

        rcu_read_lock();
        for_each_rt_rq(rt_rq, iter, cpu_rq(cpu))
                print_rt_rq(m, cpu, rt_rq);
        rcu_read_unlock();
}
#endif /* CONFIG_SCHED_DEBUG */