This reverts commit 1243dc518c.
percpu_rwsem is an RCU-based lock. Under loaded conditions,
this lock requires each CPU to perform a context switch, causing
excessive delays with long-running tasks on those CPUs.
In the case of hotplug/pause, this can slow down the ability
to activate/deactivate a CPU.
Revert the change from cpuset_mutex to percpu_rwsem to
eliminate that overhead and return to a global lock.
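A minimal sketch of the shape of the revert (simplified; not the actual diff, and the lock user below is illustrative):
  #include <linux/mutex.h>

  static DEFINE_MUTEX(cpuset_mutex);

  static void modify_cpusets_example(void)
  {
          /*
           * Before: percpu_down_write(&cpuset_rwsem) forced every CPU
           * through a quiescent state for each writer.
           * After: a single global mutex, cheap to take even under load.
           */
          mutex_lock(&cpuset_mutex);
          /* ... update cpuset state ... */
          mutex_unlock(&cpuset_mutex);
  }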
Bug: 161210528
Change-Id: Id00dcaa6d601b561d1321d3e944b6c52e9663f1a
Signed-off-by: Stephen Dickey <dickey@codeaurora.org>
Incorporate a vendor hook in the resume_cpus() path
so that vendor-specific activities may take place.
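A sketch of how such a hook is typically wired up with the Android vendor-hook macros; the hook name, prototype, and probe below are assumptions, since the commit does not name them:
  /* include/trace/hooks/cpu.h (hypothetical name and prototype) */
  DECLARE_HOOK(android_vh_resume_cpus,
          TP_PROTO(struct cpumask *cpus, int *err),
          TP_ARGS(cpus, err));

  /* vendor module side (probe name hypothetical) */
  static void vendor_resume_cpus_probe(void *unused, struct cpumask *cpus, int *err)
  {
          /* vendor-specific work before the CPUs are resumed */
  }

  static int __init vendor_init(void)
  {
          return register_trace_android_vh_resume_cpus(vendor_resume_cpus_probe, NULL);
  }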
Bug: 161210528
Change-Id: I74d03247491b004e891dbcfe06a478d00a95ba9f
Signed-off-by: Stephen Dickey <dickey@codeaurora.org>
In the resume_cpus() path, the cpus cannot be taken
advantage of until the cpus write lock is acquired,
the cpus are activated, and the domains are rebuilt. This
can incur significant delay in the unpause operation.
Additionally, if scheduled through the kworker thread,
the wait time for rebuilding sched domains becomes
large on a busy system, which can prevent the kworker
from executing.
Activate the cpus and call cpuset_hotplug_workfn()
directly within resume_cpus(), prior to taking the cpus
write lock, thereby eliminating the delays associated
with scheduling this activity.
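A rough sketch of the new ordering (simplified, with assumed prototypes; not the actual diff):
  void resume_cpus_sketch(struct cpumask *cpus)
  {
          int cpu;

          /* Activate the CPUs and rebuild cpusets/sched domains right away... */
          for_each_cpu(cpu, cpus)
                  set_cpu_active(cpu, true);
          cpuset_hotplug_workfn(NULL);    /* called directly, not via kworker */

          /* ...and only then take the heavyweight cpus write lock. */
          cpus_write_lock();
          /* ... remaining per-CPU bring-up ... */
          cpus_write_unlock();
  }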
Bug: 161210528
Change-Id: Ie2521f28ed9078b22d421d792f08413016d4dd62
Signed-off-by: Stephen Dickey <dickey@codeaurora.org>
Signed-off-by: Todd Kjos <tkjos@google.com>
pause_cpus() intends to force CPUs to go idle as quickly as possible. Add a
migration step to drain the rq of any running tasks.
Two steps are actually needed. The first one, "lazy", runs before the
cpu_active_mask has been synced. The second one runs after. It is
possible for another CPU to observe an outdated version of that mask and
to enqueue a task on a rq that has just been marked inactive. The second
migration is there to catch any such spurious moves, while the first
one drains the rq as quickly as possible to let the CPU reach an idle
state.
Bug: 161210528
Change-Id: Ie26c2e4c42665dd61d41a899a84536e56bf2b887
Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
pause_cpus() intends to provide a way to force a CPU to go idle and to resume
as quickly as possible, with as little disruption as possible on the
system. This is a way of saving energy or meeting thermal constraints, for
which a full CPU hotunplug is too slow. A paused CPU is simply deactivated
from the scheduler's point of view. This corresponds to the first hotunplug
step.
Each pause operation still needs some heavy synchronization. Allowing
several CPUs to be paused in one go mitigates that issue.
Paused CPUs can be resumed with resume_cpus(), which also takes a cpumask
as an input.
A few limitations:
* It isn't possible to pause a CPU which is running a SCHED_DEADLINE task.
* A paused CPU will be removed from any cpuset it is part of. Resuming
the CPU won't put it back in the cpuset if using cgroup1.
Cgroup2 doesn't have this limitation.
* per-CPU kthreads are still allowed to run on a paused CPU.
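A usage sketch of the interface described above (the CPU numbers and error handling are illustrative):
  #include <linux/cpu.h>
  #include <linux/cpumask.h>

  static int pause_then_resume_example(void)
  {
          cpumask_t mask;
          int ret;

          cpumask_clear(&mask);
          cpumask_set_cpu(2, &mask);
          cpumask_set_cpu(3, &mask);

          ret = pause_cpus(&mask);        /* deactivate CPUs 2 and 3 */
          if (ret)
                  return ret;

          /* ... CPUs 2 and 3 are now idle from the scheduler's view ... */

          return resume_cpus(&mask);      /* reactivate them */
  }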
Bug: 161210528
Change-Id: I1f5cb28190f8ec979bb8640a89b022f2f7266bcf
Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
Signed-off-by: Todd Kjos <tkjos@google.com>
In the event of a partial _cpu_down, (i.e. _cpu_down(target) where
target > CPUHP_AP_OFFLINE), the cpu_online_mask won't be aligned with
cpu_active_mask. This is an issue when trying to offline the last CPU
from cpu_active_mask, while num_online_cpus() > 1.
Protect against this case by checking num_active_cpus() instead of
num_online_cpus().
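The shape of the guard (context simplified; the error code is illustrative):
  static int check_can_deactivate(void)
  {
          /* was num_online_cpus(); online and active masks can diverge
           * after a partial _cpu_down() */
          if (num_active_cpus() <= 1)
                  return -EBUSY;          /* never deactivate the last active CPU */
          return 0;
  }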
Bug: 161210528
Change-Id: Ibe7d9ef69e5f91e99be0d98076614a7654bda094
Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
In the event of a partial hotunplug, a stable state can occur where a CPU is
set in the online mask but cleared from the active mask. From the scheduler's
point of view, an online CPU should be part of the cpu_active_mask.
Bug: 161210528
Change-Id: I0d0aa6fca4c6dc145634c4aad6519045e0afc8e2
Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
Add support in update_max_interval() for an incomplete hotplug _cpu_down(),
where cpu_active_mask != cpu_online_mask. This situation can happen in the
event of a partial _cpu_down(), i.e. _cpu_down(target) where
target > CPUHP_AP_OFFLINE.
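A plausible form of the change, given the helper's upstream shape (a sketch, not the exact diff):
  static void update_max_interval(void)
  {
          /* use the active mask, which stays accurate across a partial
           * _cpu_down(), instead of the online mask */
          max_load_balance_interval = HZ * num_active_cpus() / 10;
  }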
Bug: 161210528
Change-Id: Ia422057c65f16dc9aa8f6d272098b2308b00f0ac
Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
In the event of a partial hotunplug, a stable state with a CPU set in the
online mask but cleared in the active mask can occur. This is problematic for
the window between the active mask being cleared and the sched domains being
rebuilt: RT could bounce a task back onto a CPU it had just been migrated off
during hotunplug. Introduce an intersection between lowest_mask and
cpu_active_mask to prevent such a situation.
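The core of the idea (sketch only):
  static void restrict_lowest_mask(struct cpumask *lowest_mask)
  {
          /* never offer an online-but-inactive CPU as an RT push/pull target */
          cpumask_and(lowest_mask, lowest_mask, cpu_active_mask);
  }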
Bug: 161210528
Change-Id: I4f8cb782c2ca560c297b7f4bdb2336918c83a5a1
Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
This new interface allows triggering a stopper on a given CPU and waiting
for the end of the work in a separate function, cpu_stop_work_wait().
It differs from stop_one_cpu_nowait() by allowing use of the
cpu_stop completion mechanism.
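A usage sketch; only cpu_stop_work_wait() is named by the commit, so the submit-side helper and its signature are assumptions:
  static void run_stopper_and_wait(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
  {
          struct cpu_stop_work work;

          /* queue fn() on the target CPU's stopper without blocking
           * (helper name assumed) */
          stop_one_cpu_async(cpu, fn, arg, &work);

          /* ... other preparation can happen here ... */

          cpu_stop_work_wait(&work);      /* block until the stopper has run */
  }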
Bug: 161210528
Change-Id: Ida51371e32897d008ece0639190fc21feabb0f28
Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
We have debug infrastructure built on top of preempt/irq disable/enable
events. This requires modifications to the kernel tracing code. Since
that is not feasible with GKI, we started by registering to the
existing preemptirq trace events. However, a wide variety of use cases
regressed in performance, because the rate of preemptirq events is
extremely high and generic trace events are slow.
Since GKI allows optimized trace events via restricted trace hooks,
add the same for the preemptirq events.
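A sketch of what the optimized hook can look like; the hook name and prototype below are assumptions:
  /* include/trace/hooks/preemptirq.h (names/prototypes assumed) */
  DECLARE_RESTRICTED_HOOK(android_rvh_preempt_disable,
          TP_PROTO(unsigned long ip, unsigned long parent_ip),
          TP_ARGS(ip, parent_ip), 1);
  /* the tracer then calls trace_android_rvh_preempt_disable(ip, parent_ip)
   * from the preempt-off path instead of a generic trace event */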
Bug: 174541725
Change-Id: Ic8d3cdd1c1aa6a9267d0b755694fedffa2ea8e36
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Pull tracing fixes from Steven Rostedt:
- Use correct timestamp variable for ring buffer write stamp update
- Fix up before stamp and write stamp when crossing ring buffer sub
buffers
- Keep a zero delta in ring buffer in slow path if cmpxchg fails
- Fix trace_printk static buffer for archs that care
- Fix ftrace record accounting for ftrace ops with trampolines
- Fix DYNAMIC_FTRACE_WITH_DIRECT_CALLS dependency
- Remove WARN_ON in hwlat tracer that triggers on something that is OK
- Make "my_tramp" trampoline in ftrace direct sample code global
- Fixes in the bootconfig tool for better alignment management
* tag 'trace-v5.10-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
ring-buffer: Always check to put back before stamp when crossing pages
ftrace: Fix DYNAMIC_FTRACE_WITH_DIRECT_CALLS dependency
ftrace: Fix updating FTRACE_FL_TRAMP
tracing: Fix alignment of static buffer
tracing: Remove WARN_ON in start_thread()
samples/ftrace: Mark my_tramp[12]? global
ring-buffer: Set the right timestamp in the slow path of __rb_reserve_next()
ring-buffer: Update write stamp with the correct ts
docs: bootconfig: Update file format on initrd image
tools/bootconfig: Align the bootconfig applied initrd image size to 4
tools/bootconfig: Fix to check the write failure correctly
tools/bootconfig: Fix errno reference after printf()
The current ring buffer logic checks to see if the updating of the event
buffer was interrupted, and if it is, it will try to fix up the before stamp
with the write stamp to make them equal again. This logic is flawed, because
if it is not interrupted, the two are guaranteed to be different, as the
current event just updated the before stamp before allocation. This
guarantees that the next event (this one or another interrupting one) will
think it interrupted the time updates of a previous event and inject an
absolute time stamp to compensate.
The correct logic is to always update the timestamps when traversing to a
new sub buffer.
Cc: stable@vger.kernel.org
Fixes: a389d86f7f ("ring-buffer: Have nested events still record running time stamp")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
On powerpc, kprobe-direct.tc triggered FTRACE_WARN_ON() in
ftrace_get_addr_new() followed by the below message:
Bad trampoline accounting at: 000000004222522f (wake_up_process+0xc/0x20) (f0000001)
The set of steps leading to this involved:
- modprobe ftrace-direct-too
- enable_probe
- modprobe ftrace-direct
- rmmod ftrace-direct <-- trigger
The problem turned out to be that we were not updating flags in the
ftrace record properly. From the above message about the trampoline
accounting being bad, it can be seen that the ftrace record still has
FTRACE_FL_TRAMP set even though the ftrace-direct module is going away. This
happens because we are checking if any ftrace_ops has the
FTRACE_FL_TRAMP flag set _before_ updating the filter hash.
The fix for this is to look for any _other_ ftrace_ops that also needs
FTRACE_FL_TRAMP.
Link: https://lkml.kernel.org/r/56c113aa9c3e10c19144a36d9684c7882bf09af5.1606412433.git.naveen.n.rao@linux.vnet.ibm.com
Cc: stable@vger.kernel.org
Fixes: a124692b69 ("ftrace: Enable trampoline when rec count returns back to one")
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
With the 5.9 kernel on ARM64, I found that ftrace_dump output was broken, but
there was no problem with the normal output "cat /sys/kernel/debug/tracing/trace".
On investigation, it seems that copying the data into a temporary buffer breaks
the alignment that binary printf expects if the static buffer is not 4-byte
aligned. IIUC, get_arg() in bstr_printf() expects that the args are already
correctly aligned to be decoded, and seq_buf_bprintf() says ``the arguments are
saved in a 32bit word array that is defined by the format string constraints``.
So if we don't keep the alignment when copying to the temporary buffer, the
output will be broken by shifting some bytes.
This patch fixes it.
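The shape of the fix (the buffer name is illustrative, not taken from the diff):
  /* force 4-byte alignment so bstr_printf()/get_arg() can decode the
   * copied arguments correctly */
  static char static_temp_buf[STATIC_TEMP_BUF_SIZE] __aligned(4);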
Link: https://lkml.kernel.org/r/20201125225654.1618966-1-minchan@kernel.org
Cc: <stable@vger.kernel.org>
Fixes: 8e99cf91b9 ("tracing: Do not allocate buffer in trace_find_next_entry() in atomic")
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
This patch reverts commit 978defee11 ("tracing: Do a WARN_ON()
if start_thread() in hwlat is called when thread exists").
The .start hook can legally be called several times if the corresponding
tracer is stopped.
screen window 1
[root@localhost ~]# echo 1 > /sys/kernel/tracing/events/kmem/kfree/enable
[root@localhost ~]# echo 1 > /sys/kernel/tracing/options/pause-on-trace
[root@localhost ~]# less -F /sys/kernel/tracing/trace
screen window 2
[root@localhost ~]# cat /sys/kernel/debug/tracing/tracing_on
0
[root@localhost ~]# echo hwlat > /sys/kernel/debug/tracing/current_tracer
[root@localhost ~]# echo 1 > /sys/kernel/debug/tracing/tracing_on
[root@localhost ~]# cat /sys/kernel/debug/tracing/tracing_on
0
[root@localhost ~]# echo 2 > /sys/kernel/debug/tracing/tracing_on
triggers warning in dmesg:
WARNING: CPU: 3 PID: 1403 at kernel/trace/trace_hwlat.c:371 hwlat_tracer_start+0xc9/0xd0
Link: https://lkml.kernel.org/r/bd4d3e70-400d-9c82-7b73-a2d695e86b58@virtuozzo.com
Cc: Ingo Molnar <mingo@redhat.com>
Cc: stable@vger.kernel.org
Fixes: 978defee11 ("tracing: Do a WARN_ON() if start_thread() in hwlat is called when thread exists")
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Some partners have value-adds based on aosp/540066, which cannot be
carried in ACK in its entirety as it no longer makes sense as-is (the
select_idle_capacity() rework upstream solved the issue differently).
It seems that those partners do not actually need the wake-wide tweaks;
they only need to access the wake_q length for wake-up balance. To
support this, add minimal tracking to the wake_q infrastructure in the
core kernel, but do that by adding a pointer to the wake_q_head to
task_struct directly to not litter all sched classes with an additional
sibling_count_hint argument to the select_task_rq callbacks.
Modules needing to access the wake_q length can do so by dereferencing
p->wake_q_head in the wake-up path when it is non-NULL.
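A sketch of the mechanism; p->wake_q_head is the pointer described above, while the count field and the accessor are illustrative:
  /* wake_q_head gains a length, and task_struct gains a pointer to the
   * head the task is currently queued on (NULL otherwise) */
  struct wake_q_head {
          struct wake_q_node *first;
          struct wake_q_node **lastp;
          int count;                      /* assumed: tracked queue length */
  };

  /* vendor module, e.g. in a select_task_rq vendor hook */
  static int wake_q_len_hint(struct task_struct *p)
  {
          struct wake_q_head *head = READ_ONCE(p->wake_q_head);

          return head ? head->count : 0;
  }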
Bug: 173981591
Signed-off-by: Quentin Perret <qperret@google.com>
Change-Id: I9a98167face92e70aba847d9f04d0c216065478c
Vendors might want to change a task's affinity settings when it is
moving from one cpuset into another. Add a vendor hook to give vendors
control to implement what they need.
Bug: 174125747
Change-Id: Icee0405be0bca432002dae4a26ebe945082ce052
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
Vendors might want to change a task's affinity settings when it is
moving from one cpuset into another. Add a vendor hook to give vendors
control to implement what they need in sched_setaffinity().
Bug: 174125747
Change-Id: Ie703448147377cd62e76a58b620a7ab849a04924
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
Pull locking fixes from Thomas Gleixner:
"Two more places which invoke tracing from RCU disabled regions in the
idle path.
Similar to the entry path the low level idle functions have to be
non-instrumentable"
* tag 'locking-urgent-2020-11-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
intel_idle: Fix intel_idle() vs tracing
sched/idle: Fix arch_cpu_idle() vs tracing
Pull printk fixes from Petr Mladek:
- do not lose trailing newline in pr_cont() calls
- two trivial fixes for a dead store and a config description
* tag 'printk-for-5.10-rc6-fixup' of git://git.kernel.org/pub/scm/linux/kernel/git/printk/linux:
printk: finalize records with trailing newlines
printk: remove unneeded dead-store assignment
init/Kconfig: Fix CPU number in LOG_CPU_MAX_BUF_SHIFT description
update_cpumask() had a special case for an empty buf which
did not update cpus_requested. This change reduces the
differences between the empty and non-empty codepaths (down to
the parsing only) to make them consistent.
Bug: 174125747
Bug: 120444281
Fixes: 4803def4e0b2 ("ANDROID: cpuset: Make cpusets restore on hotplug")
Test: check that writes to /dev/cpuset/background/tasks
Test: work as expected, e.g.:
Test: echo $$ > /dev/cpuset/background/tasks
Test: echo > /dev/cpuset/background/tasks
Signed-off-by: Roman Kiryanov <rkir@google.com>
Change-Id: I49d320ea046636ec38bd23f053317abc59f64f8e
[satyap@codeaurora.org: port to android-mainline kernel]
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
This deliberately changes the behavior of the per-cpuset
cpus file to not be affected by hotplug. When a cpu is offlined,
it will be removed from the cpuset/cpus file. When a cpu is onlined,
if the cpuset originally requested that cpu to be part of the cpuset,
that cpu will be restored to the cpuset. The cpus files still
have to be hierarchical, but the ranges no longer have to be drawn from
the currently online cpus, just the physically present cpus.
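Conceptually (a sketch; the field names follow the related change above and the exact mask plumbing differs in the real code):
  static void restore_requested_cpus(struct cpuset *cs)
  {
          /* re-derive the usable cpus from what userspace wrote
           * (cpus_requested), limited to what is online right now */
          cpumask_and(cs->cpus_allowed, cs->cpus_requested, cpu_online_mask);
  }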
Bug: 174125747
Bug: 120444281
Change-Id: I22cdf33e7d312117bcefba1aeb0125e1ada289a9
Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
[AmitP: Refactored original changes to align with upstream commit
201af4c0fa ("cgroup: move cgroup files under kernel/cgroup/")]
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
[satyap@codeaurora.org: port to android-mainline kernel]
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
Export some symbols so that vendor modules can reference them.
Bug: 170507310
Signed-off-by: Rick Yiu <rickyiu@google.com>
Change-Id: I1b3c3ea8d0a11c01a1ca9e124e93f85e52856dc4
(cherry picked from commit a61271a41a2c2825a51bd7655fc446e16d23f5f6)
Signed-off-by: Will McVicker <willmcvicker@google.com>
This adds the capability for vendors to decide whether a cpufreq update
is needed, e.g. for up/down rate limits.
Use a restricted hook since it can be called from the scheduler wakeup path.
Bug: 170511085
Signed-off-by: Wei Wang <wvw@google.com>
Change-Id: If9adea3a3e31efbf3858fbd009665a07dc70c638
(cherry picked from commit f9f3464532a045257f8138338b1beda86ef0a3be)
Signed-off-by: Will McVicker <willmcvicker@google.com>
Upstream moved sugov to the DEADLINE class, which has higher priority than RT,
so it can potentially block many RT use cases in Android.
Also, iowait currently doesn't distinguish background from foreground tasks,
and we have seen cases where the device runs at high frequency unnecessarily
when running some background I/O.
Bug: 171598214
Signed-off-by: Wei Wang <wvw@google.com>
Change-Id: I21e9bfe9ef75a4178279574389e417c3f38e65ac
(cherry picked from commit 03177ef82bd942a3f163e826063491bae6ff0ac9)
Signed-off-by: Will McVicker <willmcvicker@google.com>
trace_sched_blocked_trace in CFS is really useful for debugging via
trace because it tells where in the callstack the process was stuck.
For example,
<...>-6143 ( 6136) [005] d..2 50.278987: sched_blocked_reason: pid=6136 iowait=0 caller=SyS_mprotect+0x88/0x208
<...>-6136 ( 6136) [005] d..2 50.278990: sched_blocked_reason: pid=6142 iowait=0 caller=do_page_fault+0x1f4/0x3b0
<...>-6142 ( 6136) [006] d..2 50.278996: sched_blocked_reason: pid=6144 iowait=0 caller=SyS_prctl+0x52c/0xb58
<...>-6144 ( 6136) [006] d..2 50.279007: sched_blocked_reason: pid=6136 iowait=0 caller=vm_mmap_pgoff+0x74/0x104
However, sometimes it gives pointless information like this:
RenderThread-2322 ( 1805) [006] d.s3 50.319046: sched_blocked_reason: pid=6136 iowait=1 caller=__lock_page_killable+0x17c/0x220
logd.writer-594 ( 587) [002] d.s3 50.334011: sched_blocked_reason: pid=6126 iowait=1 caller=wait_on_page_bit+0x194/0x208
kworker/u16:13-333 ( 333) [007] d.s4 50.343161: sched_blocked_reason: pid=6136 iowait=1 caller=__lock_page_killable+0x17c/0x220
Entries such as wait_on_page_bit and __lock_page_killable are pointless because
they don't carry higher-level information to identify the callstack.
The reason is that page_lock and waitqueue are special synchronization methods,
unlike other normal locks (mutex, spinlock).
Let's mark them as "__sched" so that get_wchan(), which is used by
trace_sched_blocked_trace, can detect and skip them. This produces more
meaningful callstack functions like this:
<...>-2867 ( 1068) [002] d.h4 124.209701: sched_blocked_reason: pid=329 iowait=0 caller=worker_thread+0x378/0x470
<...>-2867 ( 1068) [002] d.s3 124.209763: sched_blocked_reason: pid=8454 iowait=1 caller=__filemap_fdatawait_range+0xa0/0x104
<...>-2867 ( 1068) [002] d.s4 124.209803: sched_blocked_reason: pid=869 iowait=0 caller=worker_thread+0x378/0x470
ScreenDecoratio-2364 ( 1867) [002] d.s3 124.209973: sched_blocked_reason: pid=8454 iowait=1 caller=f2fs_wait_on_page_writeback+0x84/0xcc
ScreenDecoratio-2364 ( 1867) [002] d.s4 124.209986: sched_blocked_reason: pid=869 iowait=0 caller=worker_thread+0x378/0x470
<...>-329 ( 329) [000] d..3 124.210435: sched_blocked_reason: pid=538 iowait=0 caller=worker_thread+0x378/0x470
kworker/u16:13-538 ( 538) [007] d..3 124.210450: sched_blocked_reason: pid=6 iowait=0 caller=worker_thread+0x378/0x470
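The shape of the change for one of the helpers named above (illustrative):
  /* was: void wait_on_page_bit(struct page *page, int bit_nr) */
  void __sched wait_on_page_bit(struct page *page, int bit_nr)
  {
          /* body unchanged; __sched places the function in .sched.text so
           * get_wchan() skips this frame and reports the caller instead */
  }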
Test: build pass and boot to home.
Bug: 144961676
Bug: 144713689
Bug: 172212772
Signed-off-by: Minchan Kim <minchan@google.com>
Signed-off-by: Jimmy Shiu <jimmyshiu@google.com>
Change-Id: I9c738802a16941ca767dcc37ae4463070b3fabf4
(cherry picked from commit 1e4de875d9e0cfaccf5131bcc709ae8646cdc168)
Signed-off-by: Will McVicker <willmcvicker@google.com>
Add hooks for vendor specific find_energy_efficient_cpu logic.
Bug: 170507310
Signed-off-by: Rick Yiu <rickyiu@google.com>
Change-Id: I064b501017e32d4f22f8128bed8bf3a1508ab15b
(cherry picked from commit 2f108e2ec6e89609cbae32c5d13d6ad9f2e858cb)
Signed-off-by: Will McVicker <willmcvicker@google.com>
The symbols below will be used by vendor modules to make better placement
decisions when the respective hooks are registered.
1. uclamp_eff_value
2. idle_cpu
Bug: 174219212
Change-Id: I2b41ce9a7c3fb67a8170c5c253985c722f06e85a
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
The android_rvh_sched_nohz_balancer_kick hook allows vendor modules to
select the busiest CPU in a group during load balance. When the
load balancer cannot pull tasks from this busiest CPU due to
affinity restrictions, the CPU is cleared from env->cpus. This must
be passed to the vendor module, otherwise we keep selecting the
exempted CPU as the busiest CPU.
Bug: 174338902
Change-Id: Iedaa389a51849da4c3e094d731fe5e39cd909d81
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
We call arch_cpu_idle() with RCU disabled, but then use
local_irq_{en,dis}able(), which invokes tracing, which relies on RCU.
Switch all arch_cpu_idle() implementations to use
raw_local_irq_{en,dis}able() and carefully manage the
lockdep,rcu,tracing state like we do in entry.
(XXX: we really should change arch_cpu_idle() to not return with
interrupts enabled)
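For the generic weak implementation, the change looks roughly like this (illustrative):
  void __weak arch_cpu_idle(void)
  {
          cpu_idle_force_poll = 1;
          raw_local_irq_enable();         /* was local_irq_enable(), which traces */
  }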
Reported-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/20201120114925.594122626@infradead.org
While bringing in a change from an older kernel, commit 3adfd8e344
("ANDROID: sched: avoid placing RT threads on cores handling softirqs")
missed adding the data type for the cpu variable. Fix it by adding the data type.
Bug: 168521633
Change-Id: I4cd3d0b68b5962004f295ce8d07546b2067bc728
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
The following restricted vendor hooks are added. The vendor hook
can selectively opt in for the default scheduler behavior by not
modifying the done argument.
- android_rvh_sched_newidle_balance: For newly idle load balance.
- android_rvh_sched_nohz_balancer_kick: For deciding if an idle
CPU is woken up to do nohz balance or not.
- android_rvh_find_busiest_queue: For selecting the busiest runqueue
among the CPUs in the busiest group selected in find_busiest_group.
- android_rvh_migrate_queued_task: Vendor implementations may require
both the source and destination CPUs' runqueue locks to be held while
calling set_task_cpu() during a task migration. Add a hook when
a queued task is migrated so that the vendor implementation can detach
the task and call set_task_cpu() with both runqueue locks held.
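A sketch of a vendor probe using the done argument; the probe signature (beyond done) and the vendor helpers are assumptions:
  static void vendor_newidle_balance(void *unused, struct rq *this_rq,
                                     struct rq_flags *rf, int *pulled_task,
                                     int *done)
  {
          if (!vendor_wants_to_balance(this_rq))
                  return;                 /* leave *done alone: default behavior */

          *pulled_task = vendor_do_balance(this_rq, rf);
          *done = 1;                      /* tell the scheduler to skip its path */
  }

  /* at module init */
  register_trace_android_rvh_sched_newidle_balance(vendor_newidle_balance, NULL);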
Bug: 173661641
Change-Id: I6a09226081061b6433e4231359be252a0f28f04b
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Some tasks, such as those related to audio, can be placed onto cores
which are too small to support them, leading to performance hits. Fix
this by having the sync wakeup path honor capacity.
Bug: 166278821
Signed-off-by: J. Avila <elavila@google.com>
Change-Id: I5f7ef330f952c95f9391eb733ad241345477c943
Pull scheduler fixes from Thomas Gleixner:
"A couple of scheduler fixes:
- Make the conditional update of the overutilized state work
correctly by caching the relevant flags state before overwriting
them and checking them afterwards.
- Fix a data race in the wakeup path which caused loadavg on ARM64
platforms to become a random number generator.
- Fix the ordering of the iowaiter accounting operations so it can't
be decremented before it is incremented.
- Fix a bug in the deadline scheduler vs. priority inheritance when a
non-deadline task A has inherited the parameters of a deadline task
B and then blocks on a non-deadline task C.
The second inheritance step used the static deadline parameters of
task A, which are usually 0, instead of further propagating task
B's parameters. The zero initialized parameters trigger a bug in
the deadline scheduler"
* tag 'sched-urgent-2020-11-22' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
sched/deadline: Fix priority inheritance with multiple scheduling classes
sched: Fix rq->nr_iowait ordering
sched: Fix data-race in wakeup
sched/fair: Fix overutilized update in enqueue_task_fair()
Pull locking fix from Thomas Gleixner:
"A single fix for lockdep which makes the recursion protection cover
graph lock/unlock"
* tag 'locking-urgent-2020-11-22' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
lockdep: Put graph lock/unlock under lock_recursion protection
Pull seccomp fixes from Kees Cook:
"This gets the seccomp selftests running again on powerpc and sh, and
fixes an audit reporting oversight noticed in both seccomp and ptrace.
- Fix typos in seccomp selftests on powerpc and sh (Kees Cook)
- Fix PF_SUPERPRIV audit marking in seccomp and ptrace (Mickaël
Salaün)"
* tag 'seccomp-v5.10-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
selftests/seccomp: sh: Fix register names
selftests/seccomp: powerpc: Fix typo in macro variable name
seccomp: Set PF_SUPERPRIV when checking capability
ptrace: Set PF_SUPERPRIV when checking capability