BACKPORT: UPSTREAM: mm: fs: invalidate bh_lrus for only cold path
kernel test robot reported a regression in fio.write_iops [1] caused by [2]. Since lru_add_drain() is called frequently, invalidating bh_lrus there increases the bh_lrus cache miss ratio, which in the end requires more IO. This patch moves the bh_lrus invalidation from the hot paths (e.g., zap_page_range(), pagevec_release()) to the cold paths (i.e., lru_add_drain_all(), lru_cache_disable()).

"Xing, Zhengjun" confirmed:
  I tested the patch; the regression reduced to -2.9%.

[1] https://lore.kernel.org/lkml/20210520083144.GD14190@xsang-OptiPlex-9020/
[2] 8cc621d2f4 ("mm: fs: invalidate BH LRU during page migration")

Bug: 194673488
Link: https://lkml.kernel.org/r/20210907212347.1977686-1-minchan@kernel.org
(cherry picked from commit 243418e392)
[Chris: resolved conflicts due to Minchan's AOSP LRU commits]
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: kernel test robot <oliver.sang@intel.com>
Reviewed-by: Chris Goldsworthy <cgoldswo@codeaurora.org>
Tested-by: "Xing, Zhengjun" <zhengjun.xing@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Chris Goldsworthy <quic_cgoldswo@quicinc.com>
Change-Id: Icc5e456b058df516480b4378853464d6d7b43505
committed by Chris Goldsworthy
parent 49af2e35d5
commit a9ac6ae90e
@@ -1454,12 +1454,16 @@ void invalidate_bh_lrus(void)
 }
 EXPORT_SYMBOL_GPL(invalidate_bh_lrus);
 
-void invalidate_bh_lrus_cpu(int cpu)
+/*
+ * It's called from workqueue context so we need a bh_lru_lock to close
+ * the race with preemption/irq.
+ */
+void invalidate_bh_lrus_cpu(void)
 {
 	struct bh_lru *b;
 
 	bh_lru_lock();
-	b = per_cpu_ptr(&bh_lrus, cpu);
+	b = this_cpu_ptr(&bh_lrus);
 	__invalidate_bh_lrus(b);
 	bh_lru_unlock();
 }