Merge tag 'sched-urgent-2021-06-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler fixes from Ingo Molnar:

 - Fix a small inconsistency (bug) in load tracking, caught by a new
   warning that several people reported.

 - Flip CONFIG_SCHED_CORE to default-disabled, and update the Kconfig
   help text.

* tag 'sched-urgent-2021-06-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/core: Disable CONFIG_SCHED_CORE by default
  sched/fair: Ensure _sum and _avg values stay consistent
diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
@@ -102,7 +102,6 @@ config PREEMPT_DYNAMIC

 config SCHED_CORE
 	bool "Core Scheduling for SMT"
-	default y
 	depends on SCHED_SMT
 	help
 	  This option permits Core Scheduling, a means of coordinated task
@@ -115,7 +114,8 @@ config SCHED_CORE
 	  - mitigation of some (not all) SMT side channels;
 	  - limiting SMT interference to improve determinism and/or performance.

-	  SCHED_CORE is default enabled when SCHED_SMT is enabled -- when
-	  unused there should be no impact on performance.
+	  SCHED_CORE is default disabled. When it is enabled and unused,
+	  which is the likely usage by Linux distributions, there should
+	  be no measurable impact on performance.
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
@@ -3685,15 +3685,15 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)

 		r = removed_load;
 		sub_positive(&sa->load_avg, r);
-		sub_positive(&sa->load_sum, r * divider);
+		sa->load_sum = sa->load_avg * divider;

 		r = removed_util;
 		sub_positive(&sa->util_avg, r);
-		sub_positive(&sa->util_sum, r * divider);
+		sa->util_sum = sa->util_avg * divider;

 		r = removed_runnable;
 		sub_positive(&sa->runnable_avg, r);
-		sub_positive(&sa->runnable_sum, r * divider);
+		sa->runnable_sum = sa->runnable_avg * divider;

 	/*
 	 * removed_runnable is the unweighted version of removed_load so we