mirror of https://github.com/AuxXxilium/linux_dsm_epyc7002.git
synced 2024-12-23 23:46:09 +07:00
1a5a9906d4
In some cases it may happen that pmd_none_or_clear_bad() is called with
the mmap_sem held in read mode. In those cases the huge page faults can
allocate hugepmds under pmd_none_or_clear_bad() and that can trigger a
false positive from pmd_bad() that will not like to see a pmd
materializing as trans huge.

It's not khugepaged causing the problem: khugepaged holds the mmap_sem
in write mode (and all those sites must hold the mmap_sem in read mode
to prevent pagetables from going away from under them; during code
review it seems vm86 mode on 32bit kernels requires that too, unless
it's restricted to 1 thread per process or UP builds). The race is only
with the huge pagefaults that can convert a pmd_none() into a
pmd_trans_huge().

Effectively all these pmd_none_or_clear_bad() sites running with
mmap_sem in read mode are somewhat speculative with respect to the page
faults, and the result is always undefined when they run simultaneously.
This is probably why it wasn't common to run into this. For example, if
the madvise(MADV_DONTNEED) runs zap_page_range() shortly before the page
fault, the hugepage will not be zapped; if the page fault runs first, it
will be zapped.

Altering pmd_bad() not to error out if it finds hugepmds won't be enough
to fix this, because zap_pmd_range would then proceed to call
zap_pte_range (which would be incorrect if the pmd had become a
pmd_trans_huge()).

The simplest way to fix this is to read the pmd into the local stack
(regardless of what we read; no actual CPU barriers are needed, only a
compiler barrier), and be sure it is not changing under the code that
computes its value. Even if the real pmd is changing under the value we
hold on the stack, we don't care. If we actually end up in zap_pte_range
it means the pmd was not none already and it was not huge, and it can't
become huge from under us (khugepaged locking explained above).

All we need is to enforce that there is no way anymore that, in a code
path like the one below, pmd_trans_huge can be false but
pmd_none_or_clear_bad can still run into a hugepmd. The overhead of a
barrier() is just a compiler tweak and should not be measurable (I only
added it for THP builds). I don't exclude that different compiler
versions may have prevented the race too by caching the value of *pmd on
the stack (that hasn't been verified, but it wouldn't be impossible
considering pmd_none_or_clear_bad, pmd_bad, pmd_trans_huge and pmd_none
are all inlines and there's no external function called in between
pmd_trans_huge and pmd_none_or_clear_bad):

		if (pmd_trans_huge(*pmd)) {
			if (next-addr != HPAGE_PMD_SIZE) {
				VM_BUG_ON(!rwsem_is_locked(&tlb->mm->mmap_sem));
				split_huge_page_pmd(vma->vm_mm, pmd);
			} else if (zap_huge_pmd(tlb, vma, pmd, addr))
				continue;
			/* fall through */
		}
		if (pmd_none_or_clear_bad(pmd))

Because this race condition could be exercised without special
privileges, it was reported as CVE-2012-1179.

The race was identified and fully explained by Ulrich, who debugged it.
I'm quoting his accurate explanation below, for reference.

====== start quote =======

mapcount 0 page_mapcount 1
kernel BUG at mm/huge_memory.c:1384!

At some point prior to the panic, a "bad pmd ..." message similar to the
following is logged on the console:

mm/memory.c:145: bad pmd ffff8800376e1f98(80000000314000e7).

The "bad pmd ..." message is logged by pmd_clear_bad() before it clears
the page's PMD table entry.
    143 void pmd_clear_bad(pmd_t *pmd)
    144 {
->  145         pmd_ERROR(*pmd);
    146         pmd_clear(pmd);
    147 }

After the PMD table entry has been cleared, there is an inconsistency
between the actual number of PMD table entries that are mapping the page
and the page's map count (_mapcount field in struct page). When the page
is subsequently reclaimed, __split_huge_page() detects this
inconsistency.

   1381         if (mapcount != page_mapcount(page))
   1382                 printk(KERN_ERR "mapcount %d page_mapcount %d\n",
   1383                        mapcount, page_mapcount(page));
-> 1384         BUG_ON(mapcount != page_mapcount(page));

The root cause of the problem is a race of two threads in a
multithreaded process. Thread B incurs a page fault on a virtual address
that has never been accessed (PMD entry is zero) while Thread A is
executing an madvise() system call on a virtual address within the same
2 MB (huge page) range.

           virtual address space
          .---------------------.
          |                     |
          |                     |
        .-|---------------------|
        | |                     |
        | |                     |<-- B(fault)
        | |                     |
  2 MB  | |/////////////////////|-.
  huge <  |/////////////////////|  > A(range)
  page  | |/////////////////////|-'
        | |                     |
        | |                     |
        '-|---------------------|
          |                     |
          |                     |
          '---------------------'

- Thread A is executing an madvise(..., MADV_DONTNEED) system call
  on the virtual address range "A(range)" shown in the picture.

sys_madvise
  // Acquire the semaphore in shared mode.
  down_read(&current->mm->mmap_sem)
  ...
  madvise_vma
    switch (behavior)
    case MADV_DONTNEED:
         madvise_dontneed
           zap_page_range
             unmap_vmas
               unmap_page_range
                 zap_pud_range
                   zap_pmd_range
                     //
                     // Assume that this huge page has never been accessed.
                     // I.e. content of the PMD entry is zero (not mapped).
                     //
                     if (pmd_trans_huge(*pmd)) {
                         // We don't get here due to the above assumption.
                     }
                     //
                     // Assume that Thread B incurred a page fault and
         .---------> // sneaks in here as shown below.
         |           //
         |           if (pmd_none_or_clear_bad(pmd))
         |               {
         |                 if (unlikely(pmd_bad(*pmd)))
         |                     pmd_clear_bad
         |                     {
         |                       pmd_ERROR
         |                         // Log "bad pmd ..." message here.
         |                       pmd_clear
         |                         // Clear the page's PMD entry.
         |                         // Thread B incremented the map count
         |                         // in page_add_new_anon_rmap(), but
         |                         // now the page is no longer mapped
         |                         // by a PMD entry (-> inconsistency).
         |                     }
         |               }
         |
         v

- Thread B is handling a page fault on virtual address "B(fault)" shown
  in the picture.

...
do_page_fault
  __do_page_fault
    // Acquire the semaphore in shared mode.
    down_read_trylock(&mm->mmap_sem)
    ...
    handle_mm_fault
      if (pmd_none(*pmd) && transparent_hugepage_enabled(vma))
          // We get here due to the above assumption (PMD entry is zero).
          do_huge_pmd_anonymous_page
            alloc_hugepage_vma
              // Allocate a new transparent huge page here.
            ...
            __do_huge_pmd_anonymous_page
              ...
              spin_lock(&mm->page_table_lock)
              ...
              page_add_new_anon_rmap
                // Here we increment the page's map count (starts at -1).
                atomic_set(&page->_mapcount, 0)
              set_pmd_at
                // Here we set the page's PMD entry which will be cleared
                // when Thread A calls pmd_clear_bad().
              ...
              spin_unlock(&mm->page_table_lock)

The mmap_sem does not prevent the race because both threads are
acquiring it in shared mode (down_read). Thread B holds the
page_table_lock while the page's map count and PMD table entry are
updated. However, Thread A does not synchronize on that lock.
====== end quote =======

[akpm@linux-foundation.org: checkpatch fixes]
Reported-by: Ulrich Obergfell <uobergfe@redhat.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Jones <davej@redhat.com>
Acked-by: Larry Woodman <lwoodman@redhat.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: <stable@vger.kernel.org>		[2.6.38+]
Cc: Mark Salter <msalter@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
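For reference, here is a minimal sketch of the helper this fix is built
around, pmd_none_or_trans_huge_or_clear_bad(), which the pagewalk code
below calls. It illustrates the approach described in the message above
(copy the pmd to the local stack and stabilize it with a compiler
barrier); the actual upstream implementation lives in
include/asm-generic/pgtable.h and may differ in detail.

static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
{
	/*
	 * Read the pmd once into a local variable; only a compiler
	 * barrier is needed so the value stops changing under the
	 * checks below, even if a huge page fault materializes a
	 * hugepmd concurrently.
	 */
	pmd_t pmdval = *pmd;
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	barrier();
#endif
	if (pmd_none(pmdval))
		return 1;
	if (unlikely(pmd_bad(pmdval))) {
		/* A trans huge pmd is not "bad": don't clear it. */
		if (!pmd_trans_huge(pmdval))
			pmd_clear_bad(pmd);
		return 1;	/* caller must not walk the pte level */
	}
	return 0;
}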
248 lines
5.8 KiB
C
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/sched.h>
#include <linux/hugetlb.h>

static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
			  struct mm_walk *walk)
{
	pte_t *pte;
	int err = 0;

	pte = pte_offset_map(pmd, addr);
	for (;;) {
		err = walk->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
		if (err)
			break;
		addr += PAGE_SIZE;
		if (addr == end)
			break;
		pte++;
	}

	pte_unmap(pte);
	return err;
}

static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
			  struct mm_walk *walk)
{
	pmd_t *pmd;
	unsigned long next;
	int err = 0;

	pmd = pmd_offset(pud, addr);
	do {
again:
		next = pmd_addr_end(addr, end);
		if (pmd_none(*pmd)) {
			if (walk->pte_hole)
				err = walk->pte_hole(addr, next, walk);
			if (err)
				break;
			continue;
		}
		/*
		 * This implies that each ->pmd_entry() handler
		 * needs to know about pmd_trans_huge() pmds
		 */
		if (walk->pmd_entry)
			err = walk->pmd_entry(pmd, addr, next, walk);
		if (err)
			break;

		/*
		 * Check this here so we only break down trans_huge
		 * pages when we _need_ to
		 */
		if (!walk->pte_entry)
			continue;

		split_huge_page_pmd(walk->mm, pmd);
		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
			goto again;
		err = walk_pte_range(pmd, addr, next, walk);
		if (err)
			break;
	} while (pmd++, addr = next, addr != end);

	return err;
}

static int walk_pud_range(pgd_t *pgd, unsigned long addr, unsigned long end,
			  struct mm_walk *walk)
{
	pud_t *pud;
	unsigned long next;
	int err = 0;

	pud = pud_offset(pgd, addr);
	do {
		next = pud_addr_end(addr, end);
		if (pud_none_or_clear_bad(pud)) {
			if (walk->pte_hole)
				err = walk->pte_hole(addr, next, walk);
			if (err)
				break;
			continue;
		}
		if (walk->pud_entry)
			err = walk->pud_entry(pud, addr, next, walk);
		if (!err && (walk->pmd_entry || walk->pte_entry))
			err = walk_pmd_range(pud, addr, next, walk);
		if (err)
			break;
	} while (pud++, addr = next, addr != end);

	return err;
}

#ifdef CONFIG_HUGETLB_PAGE
static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr,
				       unsigned long end)
{
	unsigned long boundary = (addr & huge_page_mask(h)) + huge_page_size(h);
	return boundary < end ? boundary : end;
}

static int walk_hugetlb_range(struct vm_area_struct *vma,
			      unsigned long addr, unsigned long end,
			      struct mm_walk *walk)
{
	struct hstate *h = hstate_vma(vma);
	unsigned long next;
	unsigned long hmask = huge_page_mask(h);
	pte_t *pte;
	int err = 0;

	do {
		next = hugetlb_entry_end(h, addr, end);
		pte = huge_pte_offset(walk->mm, addr & hmask);
		if (pte && walk->hugetlb_entry)
			err = walk->hugetlb_entry(pte, hmask, addr, next, walk);
		if (err)
			return err;
	} while (addr = next, addr != end);

	return 0;
}

static struct vm_area_struct* hugetlb_vma(unsigned long addr, struct mm_walk *walk)
{
	struct vm_area_struct *vma;

	/* We don't need vma lookup at all. */
	if (!walk->hugetlb_entry)
		return NULL;

	VM_BUG_ON(!rwsem_is_locked(&walk->mm->mmap_sem));
	vma = find_vma(walk->mm, addr);
	if (vma && vma->vm_start <= addr && is_vm_hugetlb_page(vma))
		return vma;

	return NULL;
}

#else /* CONFIG_HUGETLB_PAGE */
static struct vm_area_struct* hugetlb_vma(unsigned long addr, struct mm_walk *walk)
{
	return NULL;
}

static int walk_hugetlb_range(struct vm_area_struct *vma,
			      unsigned long addr, unsigned long end,
			      struct mm_walk *walk)
{
	return 0;
}

#endif /* CONFIG_HUGETLB_PAGE */


/**
 * walk_page_range - walk a memory map's page tables with a callback
 * @mm: memory map to walk
 * @addr: starting address
 * @end: ending address
 * @walk: set of callbacks to invoke for each level of the tree
 *
 * Recursively walk the page table for the memory area in a VMA,
 * calling supplied callbacks. Callbacks are called in-order (first
 * PGD, first PUD, first PMD, first PTE, second PTE... second PMD,
 * etc.). If lower-level callbacks are omitted, walking depth is reduced.
 *
 * Each callback receives an entry pointer and the start and end of the
 * associated range, and a copy of the original mm_walk for access to
 * the ->private or ->mm fields.
 *
 * Usually no locks are taken, but splitting transparent huge page may
 * take page table lock. And the bottom level iterator will map PTE
 * directories from highmem if necessary.
 *
 * If any callback returns a non-zero value, the walk is aborted and
 * the return value is propagated back to the caller. Otherwise 0 is returned.
 *
 * walk->mm->mmap_sem must be held for at least read if walk->hugetlb_entry
 * is !NULL.
 */
int walk_page_range(unsigned long addr, unsigned long end,
		    struct mm_walk *walk)
{
	pgd_t *pgd;
	unsigned long next;
	int err = 0;

	if (addr >= end)
		return err;

	if (!walk->mm)
		return -EINVAL;

	pgd = pgd_offset(walk->mm, addr);
	do {
		struct vm_area_struct *vma;

		next = pgd_addr_end(addr, end);

		/*
		 * handle hugetlb vma individually because pagetable walk for
		 * the hugetlb page is dependent on the architecture and
		 * we can't handled it in the same manner as non-huge pages.
		 */
		vma = hugetlb_vma(addr, walk);
		if (vma) {
			if (vma->vm_end < next)
				next = vma->vm_end;
			/*
			 * Hugepage is very tightly coupled with vma, so
			 * walk through hugetlb entries within a given vma.
			 */
			err = walk_hugetlb_range(vma, addr, next, walk);
			if (err)
				break;
			pgd = pgd_offset(walk->mm, next);
			continue;
		}

		if (pgd_none_or_clear_bad(pgd)) {
			if (walk->pte_hole)
				err = walk->pte_hole(addr, next, walk);
			if (err)
				break;
			pgd++;
			continue;
		}
		if (walk->pgd_entry)
			err = walk->pgd_entry(pgd, addr, next, walk);
		if (!err &&
		    (walk->pud_entry || walk->pmd_entry || walk->pte_entry))
			err = walk_pud_range(pgd, addr, next, walk);
		if (err)
			break;
		pgd++;
	} while (addr = next, addr != end);

	return err;
}
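For illustration only, here is a small, hypothetical caller of
walk_page_range() following the kernel-doc comment above. The callback,
the PTE-counting logic and count_present_ptes() are invented for this
sketch and are not part of pagewalk.c; only ->pte_entry is set, so the
walk descends to individual PTEs, and ->private carries the result back.

/* Hypothetical example: count present PTEs in [start, end). */
static int count_pte_entry(pte_t *pte, unsigned long addr,
			   unsigned long end, struct mm_walk *walk)
{
	unsigned long *count = walk->private;

	if (pte_present(*pte))
		(*count)++;
	return 0;		/* non-zero would abort the walk */
}

static unsigned long count_present_ptes(struct mm_struct *mm,
					unsigned long start, unsigned long end)
{
	unsigned long count = 0;
	struct mm_walk walk = {
		.pte_entry	= count_pte_entry,
		.mm		= mm,
		.private	= &count,
	};

	/* Hold mmap_sem at least for read across the walk. */
	down_read(&mm->mmap_sem);
	walk_page_range(start, end, &walk);
	up_read(&mm->mmap_sem);

	return count;
}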