BACKPORT: FROMLIST: mm: add pte_map_lock() and pte_spinlock()

pte_map_lock() and pte_spinlock() are used by fault handlers to ensure
the pte is mapped and locked before they commit the faulted page to the
mm's address space at the end of the fault.

The functions differ in their preconditions; pte_map_lock() expects
the pte to be unmapped prior to the call, while pte_spinlock() expects
it to be already mapped.

In the speculative fault case, the functions verify, after locking the pte,
that the mmap sequence count has not changed since the start of the fault,
and thus that no mmap lock writers have been running concurrently with
the fault. After that point the page table lock serializes any further
races with concurrent mmap lock writers.

If the mmap sequence count check fails, both functions return false,
leaving the pte unmapped and unlocked.

Signed-off-by: Michel Lespinasse <michel@lespinasse.org>
Link: https://lore.kernel.org/all/20220128131006.67712-18-michel@lespinasse.org/

Conflicts:
    include/linux/mm.h

1. Fixed the pte_map_lock() and pte_spinlock() macros so they do not
fail to build when CONFIG_SPECULATIVE_PAGE_FAULT=n

Bug: 161210518
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Change-Id: Ibd7ccc2ead4fdf29f28c7657b312b2f677ac8836
Author:    Michel Lespinasse
Date:      2022-01-24 17:43:55 -08:00
Committer: Todd Kjos
Parent:    6ab660d7cb
Commit:    6e6766ab76
2 changed files with 102 additions and 0 deletions

@@ -3324,5 +3324,41 @@ madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
}
#endif
#ifdef CONFIG_MMU
#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
bool __pte_map_lock(struct vm_fault *vmf);

static inline bool pte_map_lock(struct vm_fault *vmf)
{
	VM_BUG_ON(vmf->pte);
	return __pte_map_lock(vmf);
}

static inline bool pte_spinlock(struct vm_fault *vmf)
{
	VM_BUG_ON(!vmf->pte);
	return __pte_map_lock(vmf);
}

#else /* !CONFIG_SPECULATIVE_PAGE_FAULT */
#define pte_map_lock(___vmf)						\
({									\
	___vmf->pte = pte_offset_map_lock(___vmf->vma->vm_mm,		\
					  ___vmf->pmd, ___vmf->address,	\
					  &___vmf->ptl);		\
	true;								\
})
#define pte_spinlock(___vmf)						\
({									\
	___vmf->ptl = pte_lockptr(___vmf->vma->vm_mm, ___vmf->pmd);	\
	spin_lock(___vmf->ptl);						\
	true;								\
})
#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
#endif /* CONFIG_MMU */
#endif /* __KERNEL__ */
#endif /* _LINUX_MM_H */