Changes in 5.15.89
netfilter: nft_payload: incorrect arithmetics when fetching VLAN header bits
ALSA: control-led: use strscpy in set_led_id()
ALSA: hda/realtek - Turn on power early
ALSA: hda/realtek: Enable mute/micmute LEDs on HP Spectre x360 13-aw0xxx
KVM: arm64: Fix S1PTW handling on RO memslots
KVM: arm64: nvhe: Fix build with profile optimization
selftests: kvm: Fix a compile error in selftests/kvm/rseq_test.c
efi: tpm: Avoid READ_ONCE() for accessing the event log
docs: Fix the docs build with Sphinx 6.0
net: stmmac: add aux timestamps fifo clearance wait
perf auxtrace: Fix address filter duplicate symbol selection
s390/kexec: fix ipl report address for kdump
ASoC: qcom: lpass-cpu: Fix fallback SD line index handling
s390/cpum_sf: add READ_ONCE() semantics to compare and swap loops
s390/percpu: add READ_ONCE() to arch_this_cpu_to_op_simple()
drm/virtio: Fix GEM handle creation UAF
drm/i915/gt: Reset twice
net/mlx5e: Set action fwd flag when parsing tc action goto
cifs: Fix uninitialized memory read for smb311 posix symlink create
platform/x86: dell-privacy: Only register SW_CAMERA_LENS_COVER if present
platform/surface: aggregator: Ignore command messages not intended for us
platform/x86: dell-privacy: Fix SW_CAMERA_LENS_COVER reporting
dt-bindings: msm: dsi-controller-main: Fix operating-points-v2 constraint
drm/msm/adreno: Make adreno quirks not overwrite each other
dt-bindings: msm: dsi-controller-main: Fix power-domain constraint
dt-bindings: msm: dsi-controller-main: Fix description of core clock
dt-bindings: msm: dsi-phy-28nm: Add missing qcom, dsi-phy-regulator-ldo-mode
platform/x86: ideapad-laptop: Add Legion 5 15ARH05 DMI id to set_fn_lock_led_list[]
drm/msm/dp: do not complete dp_aux_cmd_fifo_tx() if irq is not for aux transfer
dt-bindings: msm/dsi: Don't require vdds-supply on 10nm PHY
dt-bindings: msm/dsi: Don't require vcca-supply on 14nm PHY
platform/x86: sony-laptop: Don't turn off 0x153 keyboard backlight during probe
ixgbe: fix pci device refcount leak
ipv6: raw: Deduct extension header length in rawv6_push_pending_frames
bus: mhi: host: Fix race between channel preparation and M0 event
usb: ulpi: defer ulpi_register on ulpi_read_id timeout
iommu/iova: Fix alloc iova overflows issue
iommu/mediatek-v1: Fix an error handling path in mtk_iommu_v1_probe()
sched/core: Fix use-after-free bug in dup_user_cpus_ptr()
netfilter: ipset: Fix overflow before widen in the bitmap_ip_create() function.
powerpc/imc-pmu: Fix use of mutex in IRQs disabled section
x86/boot: Avoid using Intel mnemonics in AT&T syntax asm
EDAC/device: Fix period calculation in edac_device_reset_delay_period()
x86/resctrl: Fix task CLOSID/RMID update race
regulator: da9211: Use irq handler when ready
scsi: mpi3mr: Refer CONFIG_SCSI_MPI3MR in Makefile
scsi: ufs: Stop using the clock scaling lock in the error handler
scsi: ufs: core: WLUN suspend SSU/enter hibern8 fail recovery
ASoC: wm8904: fix wrong outputs volume after power reactivation
ALSA: usb-audio: Make sure to stop endpoints before closing EPs
ALSA: usb-audio: Relax hw constraints for implicit fb sync
tipc: fix unexpected link reset due to discovery messages
octeontx2-af: Fix LMAC config in cgx_lmac_rx_tx_enable
hvc/xen: lock console list traversal
nfc: pn533: Wait for out_urb's completion in pn533_usb_send_frame()
af_unix: selftest: Fix the size of the parameter to connect()
tools/nolibc: x86: Remove `r8`, `r9` and `r10` from the clobber list
tools/nolibc: x86-64: Use `mov $60,%eax` instead of `mov $60,%rax`
tools/nolibc: use pselect6 on RISCV
tools/nolibc/std: move the standard type definitions to std.h
tools/nolibc/types: split syscall-specific definitions into their own files
tools/nolibc/arch: split arch-specific code into individual files
tools/nolibc/arch: mark the _start symbol as weak
tools/nolibc: Remove .global _start from the entry point code
tools/nolibc: restore mips branch ordering in the _start block
tools/nolibc: fix the O_* fcntl/open macro definitions for riscv
net/sched: act_mpls: Fix warning during failed attribute validation
net/mlx5: Fix ptp max frequency adjustment range
net/mlx5e: Don't support encap rules with gbp option
perf build: Properly guard libbpf includes
igc: Fix PPS delta between two synchronized end-points
platform/surface: aggregator: Add missing call to ssam_request_sync_free()
mm: Always release pages to the buddy allocator in memblock_free_late().
Documentation: KVM: add API issues section
KVM: x86: Do not return host topology information from KVM_GET_SUPPORTED_CPUID
io_uring: lock overflowing for IOPOLL
arm64: atomics: format whitespace consistently
arm64: atomics: remove LL/SC trampolines
arm64: cmpxchg_double*: hazard against entire exchange variable
efi: fix NULL-deref in init error path
scsi: mpt3sas: Remove scsi_dma_map() error messages
io_uring/io-wq: free worker if task_work creation is canceled
io_uring/io-wq: only free worker if it was allocated for creation
block: handle bio_split_to_limits() NULL return
Revert "usb: ulpi: defer ulpi_register on ulpi_read_id timeout"
pinctrl: amd: Add dynamic debugging for active GPIOs
Linux 5.15.89
Change-Id: I66c4f269aa7751b2e4aac919f822dfdcb844a69d
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
[ Upstream commit 115d9d77bb0f9152c60b6e8646369fa7f6167593 ]
If CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, memblock_free_pages()
only releases pages to the buddy allocator if they are not in the
deferred range. This is correct for free pages (as defined by
for_each_free_mem_pfn_range_in_zone()) because free pages in the
deferred range will be initialized and released as part of the deferred
init process. memblock_free_pages() is called by memblock_free_late(),
which is used to free reserved ranges after memblock_free_all() has
run. All pages in reserved ranges have been initialized at that point,
and accordingly, those pages are not touched by the deferred init
process. This means that currently, if the pages that
memblock_free_late() intends to release are in the deferred range, they
will never be released to the buddy allocator. They will forever be
reserved.
In addition, memblock_free_pages() calls kmsan_memblock_free_pages(),
which is also correct for free pages but is not correct for reserved
pages. KMSAN metadata for reserved pages is initialized by
kmsan_init_shadow(), which runs shortly before memblock_free_all().
For both of these reasons, memblock_free_pages() should only be called
for free pages, and memblock_free_late() should call __free_pages_core()
directly instead.
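A simplified sketch of the fixed memblock_free_late() loop (debug and
kmemleak bookkeeping omitted; treat this as an illustration rather than
the exact upstream diff):

    void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
    {
            phys_addr_t cursor, end;

            cursor = PFN_UP(base);
            end = PFN_DOWN(base + size);

            for (; cursor < end; cursor++) {
                    /*
                     * Reserved pages are already initialized by this point,
                     * so hand them straight to the buddy allocator instead
                     * of going through memblock_free_pages(), which would
                     * skip pages in the deferred-init range.
                     */
                    __free_pages_core(pfn_to_page(cursor), 0);
                    totalram_pages_inc();
            }
    }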
One case where this issue can occur in the wild is EFI boot on
x86_64. The x86 EFI code reserves all EFI boot services memory ranges
via memblock_reserve() and frees them later via memblock_free_late()
(efi_reserve_boot_services() and efi_free_boot_services(),
respectively). If any of those ranges happens to fall within the
deferred init range, the pages will not be released and that memory will
be unavailable.
For example, on an Amazon EC2 t3.micro VM (1 GB) booting via EFI:
v6.2-rc2:
# grep -E 'Node|spanned|present|managed' /proc/zoneinfo
Node 0, zone DMA
spanned 4095
present 3999
managed 3840
Node 0, zone DMA32
spanned 246652
present 245868
managed 178867
v6.2-rc2 + patch:
# grep -E 'Node|spanned|present|managed' /proc/zoneinfo
Node 0, zone DMA
spanned 4095
present 3999
managed 3840
Node 0, zone DMA32
spanned 246652
present 245868
managed 222816 # +43,949 pages
Fixes: 3a80a7fa79 ("mm: meminit: initialise a subset of struct pages if CONFIG_DEFERRED_STRUCT_PAGE_INIT is set")
Signed-off-by: Aaron Thompson <dev@aaront.org>
Link: https://lore.kernel.org/r/01010185892de53e-e379acfb-7044-4b24-b30a-e2657c1ba989-000000@us-west-2.amazonses.com
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Changes in 5.15.26
mm/filemap: Fix handling of THPs in generic_file_buffered_read()
cgroup/cpuset: Fix a race between cpuset_attach() and cpu hotplug
cgroup-v1: Correct privileges check in release_agent writes
x86/ptrace: Fix xfpregs_set()'s incorrect xmm clearing
btrfs: tree-checker: check item_size for inode_item
btrfs: tree-checker: check item_size for dev_item
clk: jz4725b: fix mmc0 clock gating
io_uring: don't convert to jiffies for waiting on timeouts
io_uring: disallow modification of rsrc_data during quiesce
selinux: fix misuse of mutex_is_locked()
vhost/vsock: don't check owner in vhost_vsock_stop() while releasing
parisc/unaligned: Fix fldd and fstd unaligned handlers on 32-bit kernel
parisc/unaligned: Fix ldw() and stw() unalignment handlers
KVM: x86/mmu: make apf token non-zero to fix bug
drm/amd/display: Protect update_bw_bounding_box FPU code.
drm/amd/pm: fix some OEM SKU specific stability issues
drm/amd: Check if ASPM is enabled from PCIe subsystem
drm/amdgpu: disable MMHUB PG for Picasso
drm/amdgpu: do not enable asic reset for raven2
drm/i915: Widen the QGV point mask
drm/i915: Correctly populate use_sagv_wm for all pipes
drm/i915: Fix bw atomic check when switching between SAGV vs. no SAGV
sr9700: sanity check for packet length
USB: zaurus: support another broken Zaurus
CDC-NCM: avoid overflow in sanity checking
netfilter: xt_socket: fix a typo in socket_mt_destroy()
netfilter: xt_socket: missing ifdef CONFIG_IP6_NF_IPTABLES dependency
netfilter: nf_tables_offload: incorrect flow offload action array size
tee: export teedev_open() and teedev_close_context()
optee: use driver internal tee_context for some rpc
ping: remove pr_err from ping_lookup
Revert "i40e: Fix reset bw limit when DCB enabled with 1 TC"
gpu: host1x: Always return syncpoint value when waiting
perf evlist: Fix failed to use cpu list for uncore events
perf data: Fix double free in perf_session__delete()
mptcp: fix race in incoming ADD_ADDR option processing
mptcp: add mibs counter for ignored incoming options
selftests: mptcp: fix diag instability
selftests: mptcp: be more conservative with cookie MPJ limits
bnx2x: fix driver load from initrd
bnxt_en: Fix active FEC reporting to ethtool
bnxt_en: Fix offline ethtool selftest with RDMA enabled
bnxt_en: Fix incorrect multicast rx mask setting when not requested
hwmon: Handle failure to register sensor with thermal zone correctly
net/mlx5: Fix tc max supported prio for nic mode
ice: check the return of ice_ptp_gettimex64
ice: initialize local variable 'tlv'
net/mlx5: Update the list of the PCI supported devices
bpf: Fix crash due to incorrect copy_map_value
bpf: Do not try bpf_msg_push_data with len 0
selftests: bpf: Check bpf_msg_push_data return value
bpf: Fix a bpf_timer initialization issue
bpf: Add schedule points in batch ops
io_uring: add a schedule point in io_add_buffers()
net: __pskb_pull_tail() & pskb_carve_frag_list() drop_monitor friends
nvme: also mark passthrough-only namespaces ready in nvme_update_ns_info
tipc: Fix end of loop tests for list_for_each_entry()
gso: do not skip outer ip header in case of ipip and net_failover
net: mv643xx_eth: process retval from of_get_mac_address
openvswitch: Fix setting ipv6 fields causing hw csum failure
drm/edid: Always set RGB444
net/mlx5e: Fix wrong return value on ioctl EEPROM query failure
drm/vc4: crtc: Fix runtime_pm reference counting
drm/i915/dg2: Print PHY name properly on calibration error
net/sched: act_ct: Fix flow table lookup after ct clear or switching zones
net: ll_temac: check the return value of devm_kmalloc()
net: Force inlining of checksum functions in net/checksum.h
netfilter: nf_tables: unregister flowtable hooks on netns exit
nfp: flower: Fix a potential leak in nfp_tunnel_add_shared_mac()
net: mdio-ipq4019: add delay after clock enable
netfilter: nf_tables: fix memory leak during stateful obj update
net/smc: Use a mutex for locking "struct smc_pnettable"
surface: surface3_power: Fix battery readings on batteries without a serial number
udp_tunnel: Fix end of loop test in udp_tunnel_nic_unregister()
net/mlx5: DR, Cache STE shadow memory
ibmvnic: schedule failover only if vioctl fails
net/mlx5: DR, Don't allow match on IP w/o matching on full ethertype/ip_version
net/mlx5: Fix possible deadlock on rule deletion
net/mlx5: Fix wrong limitation of metadata match on ecpf
net/mlx5: DR, Fix the threshold that defines when pool sync is initiated
net/mlx5e: MPLSoUDP decap, fix check for unsupported matches
net/mlx5e: kTLS, Use CHECKSUM_UNNECESSARY for device-offloaded packets
net/mlx5: Update log_max_qp value to be 17 at most
spi: spi-zynq-qspi: Fix a NULL pointer dereference in zynq_qspi_exec_mem_op()
gpio: rockchip: Reset int_bothedge when changing trigger
regmap-irq: Update interrupt clear register for proper reset
net-timestamp: convert sk->sk_tskey to atomic_t
RDMA/rtrs-clt: Fix possible double free in error case
RDMA/rtrs-clt: Move free_permit from free_clt to rtrs_clt_close
bnxt_en: Increase firmware message response DMA wait time
configfs: fix a race in configfs_{,un}register_subsystem()
RDMA/ib_srp: Fix a deadlock
tracing: Dump stacktrace trigger to the corresponding instance
tracing: Have traceon and traceoff trigger honor the instance
iio:imu:adis16480: fix buffering for devices with no burst mode
iio: adc: men_z188_adc: Fix a resource leak in an error handling path
iio: adc: tsc2046: fix memory corruption by preventing array overflow
iio: adc: ad7124: fix mask used for setting AIN_BUFP & AIN_BUFM bits
iio: accel: fxls8962af: add padding to regmap for SPI
iio: imu: st_lsm6dsx: wait for settling time in st_lsm6dsx_read_oneshot
iio: Fix error handling for PM
sc16is7xx: Fix for incorrect data being transmitted
ata: pata_hpt37x: disable primary channel on HPT371
Revert "USB: serial: ch341: add new Product ID for CH341A"
usb: gadget: rndis: add spinlock for rndis response list
USB: gadget: validate endpoint index for xilinx udc
tracefs: Set the group ownership in apply_options() not parse_options()
USB: serial: option: add support for DW5829e
USB: serial: option: add Telit LE910R1 compositions
usb: dwc2: drd: fix soft connect when gadget is unconfigured
usb: dwc3: pci: Add "snps,dis_u2_susphy_quirk" for Intel Bay Trail
usb: dwc3: pci: Fix Bay Trail phy GPIO mappings
usb: dwc3: gadget: Let the interrupt handler disable bottom halves.
xhci: re-initialize the HC during resume if HCE was set
xhci: Prevent futile URB re-submissions due to incorrect return value.
nvmem: core: Fix a conflict between MTD and NVMEM on wp-gpios property
mtd: core: Fix a conflict between MTD and NVMEM on wp-gpios property
driver core: Free DMA range map when device is released
btrfs: prevent copying too big compressed lzo segment
RDMA/cma: Do not change route.addr.src_addr outside state checks
thermal: int340x: fix memory leak in int3400_notify()
staging: fbtft: fb_st7789v: reset display before initialization
tps6598x: clear int mask on probe failure
IB/qib: Fix duplicate sysfs directory name
riscv: fix nommu_k210_sdcard_defconfig
riscv: fix oops caused by irqsoff latency tracer
tty: n_gsm: fix encoding of control signal octet bit DV
tty: n_gsm: fix proper link termination after failed open
tty: n_gsm: fix NULL pointer access due to DLCI release
tty: n_gsm: fix wrong tty control line for flow control
tty: n_gsm: fix wrong modem processing in convergence layer type 2
tty: n_gsm: fix deadlock in gsmtty_open()
pinctrl: fix loop in k210_pinconf_get_drive()
pinctrl: k210: Fix bias-pull-up
gpio: tegra186: Fix chip_data type confusion
memblock: use kfree() to release kmalloced memblock regions
ice: Fix race conditions between virtchnl handling and VF ndo ops
ice: fix concurrent reset and removal of VFs
Linux 5.15.26
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ied0cc9bd48b7af71a064107676f37b0dd39ce3cf
commit c94afc46cae7ad41b2ad6a99368147879f4b0e56 upstream.
memblock.{reserved,memory}.regions may be allocated using kmalloc() in
memblock_double_array(). Use kfree() to release these kmalloced regions
indicated by memblock_{reserved,memory}_in_slab.
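For reference, a sketch of how memblock_discard() can pick the release
path for the reserved regions array (the memory array is handled the same
way); this is illustrative rather than the exact upstream diff:

    /* in memblock_discard() */
    if (memblock.reserved.regions != memblock_reserved_init_regions) {
            addr = __pa(memblock.reserved.regions);
            size = PAGE_ALIGN(sizeof(struct memblock_region) *
                              memblock.reserved.max);
            if (memblock_reserved_in_slab)
                    /* array was kmalloced in memblock_double_array() */
                    kfree(memblock.reserved.regions);
            else
                    /* array came from a memblock allocation */
                    memblock_free_late(addr, size);
    }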
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Fixes: 3010f87650 ("mm: discard memblock data later")
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
After switching the page size from 64KB to 4KB on several arm64 servers here,
kmemleak starts to run out of its early memory pool due to the huge number of
early_pgtable_alloc() calls:
kmemleak_alloc_phys()
memblock_alloc_range_nid()
memblock_phys_alloc_range()
early_pgtable_alloc()
init_pmd()
alloc_init_pud()
__create_pgd_mapping()
__map_memblock()
paging_init()
setup_arch()
start_kernel()
Increasing the default value of DEBUG_KMEMLEAK_MEM_POOL_SIZE by 4 times
won't be enough for a server with 200GB+ of memory. There is little value
in checking memory leaks for those early page tables, and those early
memory mappings should not reference other memory. Hence, there are no
kmemleak false positives, and we can safely skip tracking those early
allocations in kmemleak, as we did in commit fed84c7852
("mm/memblock.c: skip kmemleak for kasan_init()"), without needing to
introduce complications to automatically scale the value depending on the
runtime memory size, etc. After the patch, the default value of
DEBUG_KMEMLEAK_MEM_POOL_SIZE becomes sufficient again.
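One way to do this, sketched under the assumption that memblock grows a
dedicated MEMBLOCK_ALLOC_NOLEAKTRACE end-address hint (the name is used
here for illustration) that makes memblock_alloc_range_nid() skip the
kmemleak_alloc_phys() registration, just as the earlier kasan_init() case
did:

    /* arm64 early_pgtable_alloc(), sketch */
    static phys_addr_t __init early_pgtable_alloc(int shift)
    {
            phys_addr_t phys;

            /* ask memblock not to register this allocation with kmemleak */
            phys = memblock_phys_alloc_range(PAGE_SIZE, PAGE_SIZE, 0,
                                             MEMBLOCK_ALLOC_NOLEAKTRACE);
            if (!phys)
                    panic("Failed to allocate page table page\n");

            /* ... map and zero the page as before ... */
            return phys;
    }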
Signed-off-by: Qian Cai <quic_qiancai@quicinc.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/20211105150509.7826-1-quic_qiancai@quicinc.com
Signed-off-by: Will Deacon <will@kernel.org>
(cherry picked from commit c6975d7cab5b903aadbc0f78f9af4fae1bd23a50)
Change-Id: Ie2a33b4219185948cbbc599df76973d547c78dbb
Bug: 217222520
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
On architectures that support the preservation of memblock metadata
after __init, allow drivers to call memblock_free() to free a
reservation made by early arch code. This is a hack to support the
freeing of bootsplash reservations passed to Linux by the bootloader.
(This should be reworked in future versions of Android; do not
cherry-pick this patch forward.)
Bug: 139653858
Bug: 174620135
Change-Id: I32c0ee70c33c94deff70aa548896caa9978396fb
Signed-off-by: Alistair Delva <adelva@google.com>
(cherry picked from commit 2eeee9f41c)
Vladimir Zapolskiy reports:
Commit a7259df767 ("memblock: make memblock_find_in_range method
private") triggers a kernel panic while running kmemleak on OF platforms
with NOMAP regions:
Unable to handle kernel paging request at virtual address fff000021e00000
[...]
scan_block+0x64/0x170
scan_gray_list+0xe8/0x17c
kmemleak_scan+0x270/0x514
kmemleak_write+0x34c/0x4ac
The memory allocated from memblock is registered with kmemleak, but if
it is marked MEMBLOCK_NOMAP it won't have linear map entries so an
attempt to scan such areas will fault.
Ideally, memblock_mark_nomap() would inform kmemleak to ignore
MEMBLOCK_NOMAP memory, but it can be called before kmemleak interfaces
operating on physical addresses can use __va() conversion.
Make sure that functions that mark allocated memory as MEMBLOCK_NOMAP
take care of informing kmemleak to ignore such memory.
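An illustrative caller pattern (the surrounding code is hypothetical); the
point is simply that whoever allocates a region and marks it
MEMBLOCK_NOMAP also tells kmemleak to ignore it, since the region will
have no linear-map entries to scan:

    base = memblock_phys_alloc_range(size, align, start, end);
    if (base) {
            memblock_mark_nomap(base, size);
            /* no linear mapping here, so keep kmemleak away from it */
            kmemleak_ignore_phys(base);
    }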
Link: https://lore.kernel.org/all/8ade5174-b143-d621-8c8e-dc6a1898c6fb@linaro.org
Link: https://lore.kernel.org/all/c30ff0a2-d196-c50d-22f0-bd50696b1205@quicinc.com
Fixes: a7259df767 ("memblock: make memblock_find_in_range method private")
Reported-by: Vladimir Zapolskiy <vladimir.zapolskiy@linaro.org>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Vladimir Zapolskiy <vladimir.zapolskiy@linaro.org>
Tested-by: Qian Cai <quic_qiancai@quicinc.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vladimir Zapolskiy reports:
commit a7259df767 ("memblock: make memblock_find_in_range method private")
triggers a kernel panic while running kmemleak on OF platforms with NOMAP
regions:
Unable to handle kernel paging request at virtual address fff000021e00000
[...]
scan_block+0x64/0x170
scan_gray_list+0xe8/0x17c
kmemleak_scan+0x270/0x514
kmemleak_write+0x34c/0x4ac
Indeed, NOMAP regions don't have linear map entries so an attempt to scan
these areas would fault.
Prevent such faults by excluding NOMAP regions from kmemleak.
Link: https://lore.kernel.org/all/8ade5174-b143-d621-8c8e-dc6a1898c6fb@linaro.org
Fixes: a7259df767 ("memblock: make memblock_find_in_range method private")
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Tested-by: Vladimir Zapolskiy <vladimir.zapolskiy@linaro.org>
The boot-time allocation interface for memblock is a mess, with
'memblock_alloc()' returning a virtual pointer, but then you are
supposed to free it with 'memblock_free()' that takes a _physical_
address.
Not only is that all kinds of strange and illogical, but it actually
causes bugs, when people then use it like a normal allocation function,
and it fails spectacularly on a NULL pointer:
https://lore.kernel.org/all/20210912140820.GD25450@xsang-OptiPlex-9020/
or just random memory corruption if the debug checks don't catch it:
https://lore.kernel.org/all/61ab2d0c-3313-aaab-514c-e15b7aa054a0@suse.cz/
I really don't want to apply patches that treat the symptoms, when the
fundamental cause is this horribly confusing interface.
I started out looking at just automating a sane replacement sequence,
but because of this mix of virtual and physical addresses, and because
people have used the "__pa()" macro that can take either a regular
kernel pointer, or just the raw "unsigned long" address, it's all quite
messy.
So this just introduces a new saner interface for freeing a virtual
address that was allocated using 'memblock_alloc()', and that was kept
as a regular kernel pointer. And then it converts a couple of users
that are obvious and easy to test, including the 'xbc_nodes' case in
lib/bootconfig.c that caused problems.
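A sketch of the new helper (the upstream version may differ in detail):
it takes the regular kernel pointer returned by memblock_alloc() and does
the physical-address conversion internally, tolerating NULL:

    void __init_memblock memblock_free_ptr(void *ptr, size_t size)
    {
            if (ptr)
                    memblock_free(__pa(ptr), size);
    }

A caller such as lib/bootconfig.c can then free xbc_nodes with
memblock_free_ptr(xbc_nodes, size) instead of open-coding the __pa()
conversion.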
Reported-by: kernel test robot <oliver.sang@intel.com>
Fixes: 40caa127f3 ("init: bootconfig: Remove all bootconfig data when the init memory is removed")
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge misc updates from Andrew Morton:
"173 patches.
Subsystems affected by this series: ia64, ocfs2, block, and mm (debug,
pagecache, gup, swap, shmem, memcg, selftests, pagemap, mremap,
bootmem, sparsemem, vmalloc, kasan, pagealloc, memory-failure,
hugetlb, userfaultfd, vmscan, compaction, mempolicy, memblock,
oom-kill, migration, ksm, percpu, vmstat, and madvise)"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (173 commits)
mm/madvise: add MADV_WILLNEED to process_madvise()
mm/vmstat: remove unneeded return value
mm/vmstat: simplify the array size calculation
mm/vmstat: correct some wrong comments
mm/percpu,c: remove obsolete comments of pcpu_chunk_populated()
selftests: vm: add COW time test for KSM pages
selftests: vm: add KSM merging time test
mm: KSM: fix data type
selftests: vm: add KSM merging across nodes test
selftests: vm: add KSM zero page merging test
selftests: vm: add KSM unmerge test
selftests: vm: add KSM merge test
mm/migrate: correct kernel-doc notation
mm: wire up syscall process_mrelease
mm: introduce process_mrelease system call
memblock: make memblock_find_in_range method private
mm/mempolicy.c: use in_task() in mempolicy_slab_node()
mm/mempolicy: unify the create() func for bind/interleave/prefer-many policies
mm/mempolicy: advertise new MPOL_PREFERRED_MANY
mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY
...
There are a lot of uses of memblock_find_in_range() along with
memblock_reserve() from the times when the memblock allocation APIs did not
exist.
memblock_find_in_range() is the very core of memblock allocations, so any
future changes to its internal behaviour would mandate updates of all the
users outside memblock.
Replace the calls to memblock_find_in_range() with equivalent calls to
memblock_phys_alloc() and memblock_phys_alloc_range(), and make
memblock_find_in_range() a private method of memblock.
This simplifies the callers, ensures that (unlikely) errors in
memblock_reserve() are handled and improves maintainability of
memblock_find_in_range().
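The typical conversion pattern looks roughly like this (sketch):

    /* before: find a range, then reserve it (errors were often ignored) */
    addr = memblock_find_in_range(start, end, size, align);
    if (!addr)
            goto err;
    memblock_reserve(addr, size);

    /* after: a single call that both finds and reserves the range */
    addr = memblock_phys_alloc_range(size, align, start, end);
    if (!addr)
            goto err;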
Link: https://lkml.kernel.org/r/20210816122622.30279-1-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
Acked-by: Kirill A. Shutemov <kirill.shtuemov@linux.intel.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> [ACPI]
Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Acked-by: Nick Kossifidis <mick@ics.forth.gr> [riscv]
Tested-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The functions memblock_alloc_exact_nid_raw() and memblock_alloc_try_nid_raw()
are intended for early memory allocation without the overhead of zeroing the
allocated memory. Since these functions were used to allocate the memory
map, they ended up with the addition of a call to page_init_poison() that
poisoned the allocated memory when CONFIG_PAGE_POISONING was set.
Since the memory map is now allocated using a dedicated memmap_alloc()
function that takes care of the poisoning, remove page poisoning from the
memblock_alloc_*_raw() functions.
Link: https://lkml.kernel.org/r/20210714123739.16493-5-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Michal Simek <monstr@monstr.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
For memblock_cap_memory_range() to work properly, it should be called
after memory is detected and added to memblock with memblock_add() or
memblock_add_node(). If memblock_cap_memory_range() is called
before memory is registered, we may silently corrupt memory later
because the crash kernel will see all memory as available.
Print a warning and bail out if ordering is not satisfied.
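A sketch of the added check at the top of memblock_cap_memory_range()
(the warning text is illustrative):

    void __init memblock_cap_memory_range(phys_addr_t base, phys_addr_t size)
    {
            if (!size)
                    return;

            if (!memblock.memory.total_size) {
                    pr_warn("%s: No memory registered yet\n", __func__);
                    return;
            }

            /* ... isolate and remove ranges outside [base, base + size) ... */
    }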
Suggested-by: Mike Rapoport <rppt@kernel.org>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/aabc5bad008d49f07d542815c6c8d28ec90bb09e.1628672091.git.geert+renesas@glider.be
Commit b10d6bca87 ("arch, drivers: replace for_each_membock() with
for_each_mem_range()") didn't take into account that when the
movable_node parameter is present on the kernel command line,
for_each_mem_range() skips ranges marked with MEMBLOCK_HOTPLUG.
The page table setup code on POWER uses for_each_mem_range() to create
the linear mapping of the physical memory, and since the regions marked
as MEMBLOCK_HOTPLUG are skipped, they never make it to the linear map.
A later access to the memory in those ranges will fail:
BUG: Unable to handle kernel data access on write at 0xc000000400000000
Faulting instruction address: 0xc00000000008a3c0
Oops: Kernel access of bad area, sig: 11 [#1]
LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA pSeries
Modules linked in:
CPU: 0 PID: 53 Comm: kworker/u2:0 Not tainted 5.13.0 #7
NIP: c00000000008a3c0 LR: c0000000003c1ed8 CTR: 0000000000000040
REGS: c000000008a57770 TRAP: 0300 Not tainted (5.13.0)
MSR: 8000000002009033 <SF,VEC,EE,ME,IR,DR,RI,LE> CR: 84222202 XER: 20040000
CFAR: c0000000003c1ed4 DAR: c000000400000000 DSISR: 42000000 IRQMASK: 0
GPR00: c0000000003c1ed8 c000000008a57a10 c0000000019da700 c000000400000000
GPR04: 0000000000000280 0000000000000180 0000000000000400 0000000000000200
GPR08: 0000000000000100 0000000000000080 0000000000000040 0000000000000300
GPR12: 0000000000000380 c000000001bc0000 c0000000001660c8 c000000006337e00
GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
GPR20: 0000000040000000 0000000020000000 c000000001a81990 c000000008c30000
GPR24: c000000008c20000 c000000001a81998 000fffffffff0000 c000000001a819a0
GPR28: c000000001a81908 c00c000001000000 c000000008c40000 c000000008a64680
NIP clear_user_page+0x50/0x80
LR __handle_mm_fault+0xc88/0x1910
Call Trace:
__handle_mm_fault+0xc44/0x1910 (unreliable)
handle_mm_fault+0x130/0x2a0
__get_user_pages+0x248/0x610
__get_user_pages_remote+0x12c/0x3e0
get_arg_page+0x54/0xf0
copy_string_kernel+0x11c/0x210
kernel_execve+0x16c/0x220
call_usermodehelper_exec_async+0x1b0/0x2f0
ret_from_kernel_thread+0x5c/0x70
Instruction dump:
79280fa4 79271764 79261f24 794ae8e2 7ca94214 7d683a14 7c893a14 7d893050
7d4903a6 60000000 60000000 60000000 <7c001fec> 7c091fec 7c081fec 7c051fec
---[ end trace 490b8c67e6075e09 ]---
Making for_each_mem_range() include MEMBLOCK_HOTPLUG regions in the
traversal fixes this issue.
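The change essentially amounts to passing MEMBLOCK_HOTPLUG instead of
MEMBLOCK_NONE as the flags used by the generic iterator, roughly:

    /* include/linux/memblock.h, sketch of the change */
    #define for_each_mem_range(i, p_start, p_end) \
            __for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE, \
                                 MEMBLOCK_HOTPLUG, p_start, p_end, NULL)

With MEMBLOCK_HOTPLUG in the flags, __next_mem_range() no longer skips
hotpluggable regions, so they are included in the linear map created by
the POWER page table setup code.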
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1976100
Link: https://lkml.kernel.org/r/20210712071132.20902-1-rppt@kernel.org
Fixes: b10d6bca87 ("arch, drivers: replace for_each_membock() with for_each_mem_range()")
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Tested-by: Greg Kurz <groug@kaod.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: <stable@vger.kernel.org> [5.10+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull memblock updates from Mike Rapoport:
"Fix arm crashes caused by holes in the memory map.
The coordination between freeing of unused memory map, pfn_valid() and
core mm assumptions about validity of the memory map in various ranges
was not designed for complex layouts of the physical memory with a lot
of holes all over the place.
Kefeng Wang reported crashes in move_freepages() on a system with the
following memory layout [1]:
node 0: [mem 0x0000000080a00000-0x00000000855fffff]
node 0: [mem 0x0000000086a00000-0x0000000087dfffff]
node 0: [mem 0x000000008bd00000-0x000000008c4fffff]
node 0: [mem 0x000000008e300000-0x000000008ecfffff]
node 0: [mem 0x0000000090d00000-0x00000000bfffffff]
node 0: [mem 0x00000000cc000000-0x00000000dc9fffff]
node 0: [mem 0x00000000de700000-0x00000000de9fffff]
node 0: [mem 0x00000000e0800000-0x00000000e0bfffff]
node 0: [mem 0x00000000f4b00000-0x00000000f6ffffff]
node 0: [mem 0x00000000fda00000-0x00000000ffffefff]
These crashes can be mitigated by enabling CONFIG_HOLES_IN_ZONE on ARM
and essentially turning pfn_valid_within() into pfn_valid() instead of
having it hardwired to 1 on that architecture, but this would require
keeping CONFIG_HOLES_IN_ZONE solely for this purpose.
A cleaner approach is to update ARM's implementation of pfn_valid() to
take into account the rounding of the freed memory map to pageblock
boundaries and make sure it returns true for PFNs that have memory map
entries even if there is no physical memory backing those PFNs"
Link: https://lore.kernel.org/lkml/2a1592ad-bc9d-4664-fd19-f7448a37edc0@huawei.com [1]
* tag 'memblock-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
arm: extend pfn_valid to take into account freed memory map alignment
memblock: ensure there is no overflow in memblock_overlaps_region()
memblock: align freed memory map on pageblock boundaries with SPARSEMEM
memblock: free_unused_memmap: use pageblock units instead of MAX_ORDER
The struct pages representing a reserved memory region are initialized
using the reserve_bootmem_region() function. This function is called for each
reserved region just before the memory is freed from memblock to the buddy
page allocator.
The struct pages for MEMBLOCK_NOMAP regions are kept with the default
values set by the memory map initialization which makes it necessary to
have a special treatment for such pages in pfn_valid() and
pfn_valid_within().
Split out initialization of the reserved pages to a function with a
meaningful name and treat the MEMBLOCK_NOMAP regions the same way as the
reserved regions and mark struct pages for the NOMAP regions as
PageReserved.
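A sketch of the split-out helper described above (details may differ from
the exact upstream code):

    static void __init memmap_init_reserved_pages(void)
    {
            struct memblock_region *region;
            phys_addr_t start, end;
            u64 i;

            /* initialize struct pages for the reserved regions */
            for_each_reserved_mem_range(i, &start, &end)
                    reserve_bootmem_region(start, end);

            /* and treat struct pages for the NOMAP regions as PageReserved */
            for_each_mem_region(region) {
                    if (memblock_is_nomap(region)) {
                            start = region->base;
                            end = start + region->size;
                            reserve_bootmem_region(start, end);
                    }
            }
    }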
Link: https://lkml.kernel.org/r/20210511100550.28178-3-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
There may be an overflow in memblock_overlaps_region() if it is called with
base and size such that
base + size > PHYS_ADDR_MAX
Make sure that memblock_overlaps_region() caps the size to prevent such an
overflow, and remove the now-duplicated call to memblock_cap_size() from
memblock_is_region_reserved().
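A sketch of the change: cap the size right at the top of
memblock_overlaps_region() so that base + size cannot wrap:

    static unsigned long __init_memblock
    memblock_overlaps_region(struct memblock_type *type,
                             phys_addr_t base, phys_addr_t size)
    {
            unsigned long i;

            /* clamp size so that base + size cannot exceed PHYS_ADDR_MAX */
            memblock_cap_size(base, &size);

            for (i = 0; i < type->cnt; i++)
                    if (memblock_addrs_overlap(base, size,
                                               type->regions[i].base,
                                               type->regions[i].size))
                            break;
            return i < type->cnt;
    }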
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Tested-by: Tony Lindgren <tony@atomide.com>
When CONFIG_SPARSEMEM=y, the ranges of the memory map that are freed are not
aligned to pageblock boundaries, which breaks assumptions about the
homogeneity of the memory map throughout core mm code.
Make sure that the freed memory map is always aligned on pageblock
boundaries regardless of the memory model selection.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Tested-by: Tony Lindgren <tony@atomide.com>
The code that frees the unused memory map rounds the start and end of the
freed holes to MAX_ORDER_NR_PAGES to preserve continuity of the
memory map for MAX_ORDER regions.
Lots of core memory management functionality relies on homogeneity of the
memory map within each pageblock, whose size may differ from MAX_ORDER in
certain configurations.
Although currently, for the architectures that use free_unused_memmap(),
pageblock_order and MAX_ORDER are equivalent, it is cleaner to use a common
notation throughout mm code.
Replace MAX_ORDER_NR_PAGES with pageblock_nr_pages and update the comments
to make it more clear why the alignment to pageblock boundaries is
required.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Pull memblock update from Mike Rapoport:
"Remove return value of memblock_free_all()
memblock_free_all() returns the total count of freed pages and its
callers used this value to update totalram_pages. This update is now
anyway a part of memblock_free_all() and its callers no longer check
the return value, so make memblock_free_all() void"
* tag 'memblock-v5.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
mm: memblock: remove return value of memblock_free_all()
With kaslr the kernel image is placed at a random place, so starting the
bottom-up allocation with the kernel_end can result in an allocation
failure and a warning like this one:
hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
------------[ cut here ]------------
memblock: bottom-up allocation failed, memory hotremove may be affected
WARNING: CPU: 0 PID: 0 at mm/memblock.c:332 memblock_find_in_range_node+0x178/0x25a
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 5.10.0+ #1169
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-1.fc33 04/01/2014
RIP: 0010:memblock_find_in_range_node+0x178/0x25a
Code: e9 6d ff ff ff 48 85 c0 0f 85 da 00 00 00 80 3d 9b 35 df 00 00 75 15 48 c7 c7 c0 75 59 88 c6 05 8b 35 df 00 01 e8 25 8a fa ff <0f> 0b 48 c7 44 24 20 ff ff ff ff 44 89 e6 44 89 ea 48 c7 c1 70 5c
RSP: 0000:ffffffff88803d18 EFLAGS: 00010086 ORIG_RAX: 0000000000000000
RAX: 0000000000000000 RBX: 0000000240000000 RCX: 00000000ffffdfff
RDX: 00000000ffffdfff RSI: 00000000ffffffea RDI: 0000000000000046
RBP: 0000000100000000 R08: ffffffff88922788 R09: 0000000000009ffb
R10: 00000000ffffe000 R11: 3fffffffffffffff R12: 0000000000000000
R13: 0000000000000000 R14: 0000000080000000 R15: 00000001fb42c000
FS: 0000000000000000(0000) GS:ffffffff88f71000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffa080fb401000 CR3: 00000001fa80a000 CR4: 00000000000406b0
Call Trace:
memblock_alloc_range_nid+0x8d/0x11e
cma_declare_contiguous_nid+0x2c4/0x38c
hugetlb_cma_reserve+0xdc/0x128
flush_tlb_one_kernel+0xc/0x20
native_set_fixmap+0x82/0xd0
flat_get_apic_id+0x5/0x10
register_lapic_address+0x8e/0x97
setup_arch+0x8a5/0xc3f
start_kernel+0x66/0x547
load_ucode_bsp+0x4c/0xcd
secondary_startup_64_no_verify+0xb0/0xbb
random: get_random_bytes called from __warn+0xab/0x110 with crng_init=0
---[ end trace f151227d0b39be70 ]---
At the same time, the kernel image is protected with memblock_reserve(),
so we can just start searching at PAGE_SIZE. In this case the bottom-up
allocation has the same chance of success as a top-down allocation, so
there is no reason to fall back in the case of a failure. Altogether this
simplifies the logic.
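A sketch of the simplified memblock_find_in_range_node() (details such as
the KASAN limit handling are omitted):

    static phys_addr_t __init_memblock
    memblock_find_in_range_node(phys_addr_t size, phys_addr_t align,
                                phys_addr_t start, phys_addr_t end,
                                int nid, enum memblock_flags flags)
    {
            /* pump up @end */
            if (end == MEMBLOCK_ALLOC_ACCESSIBLE)
                    end = memblock.current_limit;

            /* avoid allocating the first page */
            start = max_t(phys_addr_t, start, PAGE_SIZE);
            end = max(start, end);

            /*
             * Bottom-up now simply searches from the low end; there is no
             * kernel_end special case and no fallback to top-down on failure.
             */
            if (memblock_bottom_up())
                    return __memblock_find_range_bottom_up(start, end, size,
                                                           align, nid, flags);
            else
                    return __memblock_find_range_top_down(start, end, size,
                                                          align, nid, flags);
    }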
Link: https://lkml.kernel.org/r/20201217201214.3414100-2-guro@fb.com
Fixes: 8fabc62323 ("powerpc: Ensure that swiotlb buffer is allocated from low memory")
Signed-off-by: Roman Gushchin <guro@fb.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Wonhyuk Yang <vvghjk1234@gmail.com>
Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The comment for the memblock_phys_alloc_try_nid() function has a typo:
NUMA is misspelled as MUMA. Correct this typo.
Signed-off-by: Levi Yun <ppbuk5246@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
No one checks the return value of memblock_free_all(), so
make its return type void.
memblock_free_all() is called from mem_init() on each
architecture, and the total count of freed pages is added
to the _totalram_pages variable by calling totalram_pages_add(),
so there is no need to return the total count of freed pages.
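After the change the function folds the freed page count into
totalram_pages itself; a sketch:

    void __init memblock_free_all(void)
    {
            unsigned long pages;

            free_unused_memmap();
            reset_all_zones_managed_pages();

            pages = free_low_memory_core_early();
            totalram_pages_add(pages);
    }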
Signed-off-by: Daeseok Youn <daeseok.youn@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Steps on the way to 5.11-rc1
Fixes merge conflicts with:
arch/arm64/include/asm/thread_info.h
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I45a74bc1e219850bdbc480e5a309dfc216a5c171
Pull memblock updates from Mike Rapoport:
"memblock debug enhancements.
Improve tracking of early memory allocations when memblock debug is
enabled:
- Add memblock_dbg() to memblock_phys_alloc_range() to get details
about its usage
- Make memblock allocator wrappers actually inline to track their
callers in memblock debug messages"
* tag 'memblock-v5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
mm: memblock: drop __init from memblock functions to make it inline
mm: memblock: add more debug logs
Export memblock_end_of_DRAM() so that modular drivers can know
the physical end address of the memory blocks that the system booted with.
Bug: 171499373
Change-Id: Ib0c90b736f16d1abbf47d4f3288443c35f9e17b4
Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
It is useful to know the exact caller of memblock_phys_alloc_range() to
track early memory reservations during development.
Currently, when memblock debugging is enabled, the allocations done with
memblock_phys_alloc_range() are only reported at memblock_reserve():
[ 0.000000] memblock_reserve: [0x000000023fc6b000-0x000000023fc6bfff] memblock_alloc_range_nid+0xc0/0x188
Add memblock_dbg() to memblock_phys_alloc_range() to get details about
its usage.
For example:
[ 0.000000] memblock_phys_alloc_range: 4096 bytes align=0x1000 from=0x0000000000000000 max_addr=0x0000000000000000 early_pgtable_alloc+0x24/0x178
[ 0.000000] memblock_reserve: [0x000000023fc6b000-0x000000023fc6bfff] memblock_alloc_range_nid+0xc0/0x188
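Roughly, the added debug line looks like this (the format string is
illustrative):

    phys_addr_t __init memblock_phys_alloc_range(phys_addr_t size,
                                                 phys_addr_t align,
                                                 phys_addr_t start,
                                                 phys_addr_t end)
    {
            memblock_dbg("%s: %llu bytes align=0x%llx from=%pa max_addr=%pa %pS\n",
                         __func__, (u64)size, (u64)align, &start, &end,
                         (void *)_RET_IP_);
            return memblock_alloc_range_nid(size, align, start, end,
                                            NUMA_NO_NODE, false);
    }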
Signed-off-by: Faiyaz Mohammed <faiyazm@codeaurora.org>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Changeset b3a7bb1851c8 ("docs: get rid of :c:type explicit declarations for structs")
removed several :c:type: markups, except for one.
Now, Sphinx 3.x complains about it:
.../Documentation/core-api/boot-time-mm:26: ../mm/memblock.c:51: WARNING: Unparseable C cross-reference: 'struct\nmemblock_type'
Invalid C declaration: Expected identifier in nested name, got keyword: struct [error at 6]
struct
memblock_type
------^
On Sphinx 3.x, the right markup is c:struct:`foo`.
So, let's remove the remaining :c:type: markup, relying on automarkup.py to convert it.
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
The :c:type:`foo` markup only works properly with structs before
Sphinx 3.x.
On Sphinx 3.x, structs should now be declared using the
.. c:struct directive and referenced via the :c:struct tag.
As we now have the automarkup.py script, which automatically
converts:
struct foo
into cross-references, let's get rid of the explicit markup, solving
several warnings when building the docs with Sphinx 3.x.
Reviewed-by: André Almeida <andrealmeid@collabora.com> # blk-mq.rst
Reviewed-by: Takashi Iwai <tiwai@suse.de> # sound
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>