linux_dsm_epyc7002/mm
Wu Fengguang c42843f2f0 writeback: introduce smoothed global dirty limit
The start of a heavyweight application (e.g. KVM) may instantly knock
down the value returned by determine_dirtyable_memory() if swap is
disabled or full. global_dirty_limits() and bdi_dirty_limit() will in
turn compute global/bdi dirty thresholds that are _much_ lower than the
current global/bdi dirty page counts.
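For illustration, the thresholds scale roughly linearly with the amount of
dirtyable memory, so halving the latter roughly halves the former. A minimal
stand-alone sketch of that relationship (assuming a vm.dirty_ratio style
percentage; the real global_dirty_limits() also accounts for dirty_bytes,
background limits and per-task adjustments, and the numbers below are
hypothetical):

#include <stdio.h>

/* Sketch only: threshold as a fixed percentage of dirtyable pages. */
static unsigned long dirty_thresh(unsigned long dirtyable_pages,
                                  unsigned int dirty_ratio)
{
        return dirtyable_pages * dirty_ratio / 100;
}

int main(void)
{
        /* hypothetical: ~4GB dirtyable before KVM starts, ~1GB after */
        printf("before: %lu pages\n", dirty_thresh(1048576, 20));
        printf("after:  %lu pages\n", dirty_thresh(262144, 20));
        return 0;
}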

balance_dirty_pages() will then heavily throttle all dirtiers including
the light ones, until the dirty pages drop below the new dirty thresholds.
During this _deep_ dirty-exceeded state, the system may appear rather
unresponsive to users.

About "deep" dirty-exceeded: task_dirty_limit() assigns 1/8 lower dirty
threshold to heavy dirtiers than light ones, and the dirty pages will
be throttled around the heavy dirtiers' dirty threshold and reasonably
below the light dirtiers' dirty threshold. In this state, only the heavy
dirtiers will be throttled and the dirty pages are carefully controlled
to not exceed the light dirtiers' dirty threshold. However if the
threshold itself suddenly drops below the number of dirty pages, the
light dirtiers will get heavily throttled.
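To put rough numbers on that 1/8 gap (a worked sketch only; the real
task_dirty_limit() scales the 1/8 term by each task's measured share of
recent dirtying rather than a plain 0..1 fraction, and the threshold value
below is hypothetical):

#include <stdio.h>

/* Sketch: per-task limit = global threshold minus up to 1/8 of it,
 * where 'frac' stands in for the task's share of recently dirtied pages
 * (0.0 = light dirtier, 1.0 = heavy dirtier).
 */
static unsigned long task_limit(unsigned long thresh, double frac)
{
        return thresh - (unsigned long)(thresh / 8 * frac);
}

int main(void)
{
        unsigned long thresh = 200000;  /* hypothetical global threshold, in pages */

        printf("light dirtier limit: %lu\n", task_limit(thresh, 0.0)); /* 200000 */
        printf("heavy dirtier limit: %lu\n", task_limit(thresh, 1.0)); /* 175000 */
        return 0;
}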

So introduce global_dirty_limit for tracking the global dirty threshold,
with two policies:

- follow downwards slowly
- follow upwards in one shot

global_dirty_limit can effectively mask out the impact of a sudden drop
in dirtyable memory, as sketched below. It will be used in the next patch
for two new types of dirty limits. Note that the new dirty limits will not
avoid throttling the light dirtiers, but they can limit their sleep time
to 200ms.
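A minimal sketch of those two policies (the function name, the 1/32 decay
step per update and the "never fall below the dirty page count" clamp are
assumptions of this sketch, not a literal copy of the patch; only
global_dirty_limit itself is named by the changelog):

/* Smoothed limit: jumps up to a rising threshold in one shot, but decays
 * towards a falling threshold slowly, and is kept above the current
 * number of dirty pages so the system never looks "deeply" exceeded.
 */
static unsigned long global_dirty_limit;

static void update_dirty_limit(unsigned long thresh, unsigned long dirty)
{
        unsigned long limit = global_dirty_limit;

        if (limit < thresh) {                   /* follow upwards in one shot */
                global_dirty_limit = thresh;
                return;
        }

        if (thresh < dirty)                     /* stay above the dirty pages */
                thresh = dirty;

        if (limit > thresh)                     /* follow downwards slowly */
                global_dirty_limit = limit - ((limit - thresh) >> 5);
}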

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
2011-07-09 22:09:02 -07:00
backing-dev.c writeback: show bdi write bandwidth in debugfs 2011-07-09 22:09:02 -07:00
bootmem.c
bounce.c
cleancache.c mm: cleancache core ops functions and config 2011-05-26 10:01:36 -06:00
compaction.c
debug-pagealloc.c
dmapool.c
fadvise.c
failslab.c
filemap_xip.c mm: Convert i_mmap_lock to a mutex 2011-05-25 08:39:18 -07:00
filemap.c writeback: split inode_wb_list_lock into bdi_writeback.list_lock 2011-06-08 08:25:21 +08:00
fremap.c mm: don't access vm_flags as 'int' 2011-05-26 09:20:31 -07:00
highmem.c
huge_memory.c mm: thp: optimize memcg charge in khugepaged 2011-05-25 08:39:21 -07:00
hugetlb.c mm: fix ENOSPC returned by handle_mm_fault() 2011-06-06 18:00:27 +09:00
hwpoison-inject.c
init-mm.c mm: convert mm->cpu_vm_cpumask into cpumask_var_t 2011-05-25 08:39:21 -07:00
internal.h mm: nommu: sort mm->mmap list properly 2011-05-25 08:39:05 -07:00
Kconfig mm: cleancache core ops functions and config 2011-05-26 10:01:36 -06:00
Kconfig.debug
kmemcheck.c
kmemleak-test.c
kmemleak.c kmemleak: Do not return a pointer to an object that kmemleak did not get 2011-05-19 17:35:28 +01:00
ksm.c oom: replace PF_OOM_ORIGIN with toggling oom_score_adj 2011-05-25 08:39:10 -07:00
maccess.c maccess,probe_kernel: Make write/read src const void * 2011-05-25 19:56:23 -04:00
madvise.c
Makefile mm: cleancache core ops functions and config 2011-05-26 10:01:36 -06:00
memblock.c
memcontrol.c memcg: add the pagefault count into memcg stats 2011-05-26 17:12:36 -07:00
memory_hotplug.c mm: remove dependency on CONFIG_FLATMEM from online_page() 2011-05-25 08:39:28 -07:00
memory-failure.c vmscan: change shrinker API by passing shrink_control struct 2011-05-25 08:39:26 -07:00
memory.c memcg: add the pagefault count into memcg stats 2011-05-26 17:12:36 -07:00
mempolicy.c mm: proc: move show_numa_map() to fs/proc/task_mmu.c 2011-05-25 08:39:34 -07:00
mempool.c
migrate.c mm: use refcounts for page_lock_anon_vma() 2011-05-25 08:39:19 -07:00
mincore.c
mlock.c mm: don't access vm_flags as 'int' 2011-05-26 09:20:31 -07:00
mm_init.c
mmap.c mm: don't access vm_flags as 'int' 2011-05-26 09:20:31 -07:00
mmu_context.c
mmu_notifier.c
mmzone.c
mprotect.c
mremap.c mm: Convert i_mmap_lock to a mutex 2011-05-25 08:39:18 -07:00
msync.c
nobootmem.c memblock/nobootmem: remove unneeded code from alloc_bootmem_node_high() 2011-05-25 08:39:31 -07:00
nommu.c nommu: add page alignment to mmap 2011-05-25 08:39:38 -07:00
oom_kill.c oom: replace PF_OOM_ORIGIN with toggling oom_score_adj 2011-05-25 08:39:10 -07:00
page_alloc.c Revert "mm: fail GFP_DMA allocations when ZONE_DMA is not configured" 2011-06-02 06:11:24 +09:00
page_cgroup.c memcg: move page-freeing code out of lock 2011-05-26 17:12:35 -07:00
page_io.c
page_isolation.c
page-writeback.c writeback: introduce smoothed global dirty limit 2011-07-09 22:09:02 -07:00
pagewalk.c
percpu-km.c
percpu-vm.c
percpu.c Merge branch 'for-2.6.40' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu 2011-05-24 11:53:42 -07:00
pgtable-generic.c
prio_tree.c sanitize <linux/prefetch.h> usage 2011-05-20 12:50:29 -07:00
quicklist.c
readahead.c readahead: readahead page allocations are OK to fail 2011-05-25 08:39:25 -07:00
rmap.c writeback: split inode_wb_list_lock into bdi_writeback.list_lock 2011-06-08 08:25:21 +08:00
shmem.c tmpfs: fix race between truncate and writepage 2011-05-28 16:09:26 -07:00
slab.c sanitize <linux/prefetch.h> usage 2011-05-20 12:50:29 -07:00
slob.c
slub.c slub: remove no-longer used 'unlock_out' label 2011-05-25 18:06:54 -07:00
sparse-vmemmap.c
sparse.c
swap_state.c
swap.c mm: batch activate_page() to reduce lock contention 2011-05-25 08:39:37 -07:00
swapfile.c oom: replace PF_OOM_ORIGIN with toggling oom_score_adj 2011-05-25 08:39:10 -07:00
thrash.c
truncate.c mm/fs: add hooks to support cleancache 2011-05-26 10:01:43 -06:00
util.c mm: nommu: sort mm->mmap list properly 2011-05-25 08:39:05 -07:00
vmalloc.c Merge branch 'upstream/tidy-xen-mmu-2.6.39' of git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen 2011-05-26 19:01:15 -07:00
vmscan.c memcg: rename mem_cgroup_zone_nr_pages() to mem_cgroup_zone_nr_lru_pages() 2011-05-26 17:12:35 -07:00
vmstat.c mm, mem-hotplug: update pcp->stat_threshold when memory hotplug occur 2011-05-25 08:39:09 -07:00