commit b4955ce3dd (from the linux_dsm_epyc7002 mirror: https://github.com/AuxXxilium/linux_dsm_epyc7002.git)
It's common practice to msync a large address range regularly, in which often only a few ptes have actually been dirtied since the previous pass.

sync_pte_range then goes much faster if it tests whether pte is dirty before locating and accessing each struct page cacheline; and it is hardly slowed by ptep_clear_flush_dirty repeating that test in the opposite case, when every pte actually is dirty.

But beware, s390's pte_dirty always says false, since its dirty bit is kept in the storage key, located via the struct page address. So skip this optimization in its case: use a pte_maybe_dirty macro which just says true if page_test_and_clear_dirty is implemented.

Signed-off-by: Abhijit Karmarkar <abhijitk@veritas.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
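A minimal sketch of the idea in kernel-style C, paraphrased from the commit text rather than copied from the actual diff. The pte_maybe_dirty definition, the __HAVE_ARCH_PAGE_TEST_AND_CLEAR_DIRTY guard, and the helper sync_one_pte below are illustrative assumptions about how the described change fits together, not the verbatim kernel code:

```c
/*
 * Sketch only: paraphrases the optimization described in the commit
 * message above; not the verbatim kernel diff.
 *
 * On most architectures the dirty bit lives in the pte, so pte_dirty()
 * is a cheap, reliable test.  Where the dirty bit is kept in the
 * storage key instead (s390, which provides page_test_and_clear_dirty),
 * pte_dirty() always reports false, so the macro must say "maybe".
 */
#ifdef __HAVE_ARCH_PAGE_TEST_AND_CLEAR_DIRTY
#define pte_maybe_dirty(pte)	(1)		/* dirty bit lives in the storage key */
#else
#define pte_maybe_dirty(pte)	pte_dirty(pte)	/* dirty bit lives in the pte */
#endif

/* Simplified per-pte step, in the spirit of sync_pte_range() (hypothetical helper): */
static void sync_one_pte(struct vm_area_struct *vma, pte_t *pte,
			 unsigned long addr)
{
	struct page *page;

	if (!pte_present(*pte))
		return;
	/*
	 * The optimization: test the pte before touching the struct
	 * page cacheline at all.  In the common msync case most ptes
	 * are clean, so this skips the expensive part entirely.
	 */
	if (!pte_maybe_dirty(*pte))
		return;
	page = pfn_to_page(pte_pfn(*pte));
	/*
	 * ptep_clear_flush_dirty() repeats the dirty test internally,
	 * which costs little in the opposite case, when every pte
	 * really is dirty.
	 */
	if (ptep_clear_flush_dirty(vma, addr, pte) ||
	    page_test_and_clear_dirty(page))
		set_page_dirty(page);
}
```

The point of the macro, per the commit text, is that skipping clean ptes only helps when pte_dirty can be trusted; where it cannot (s390), every present pte is treated as possibly dirty and the per-page test does the work.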
Directory listing (mm/):

bootmem.c
fadvise.c
filemap.c
fremap.c
highmem.c
hugetlb.c
internal.h
madvise.c
Makefile
memory.c
mempolicy.c
mempool.c
mincore.c
mlock.c
mmap.c
mprotect.c
mremap.c
msync.c
nommu.c
oom_kill.c
page_alloc.c
page_io.c
page-writeback.c
pdflush.c
prio_tree.c
readahead.c
rmap.c
shmem.c
slab.c
swap_state.c
swap.c
swapfile.c
thrash.c
tiny-shmem.c
truncate.c
vmalloc.c
vmscan.c