linux_dsm_epyc7002/kernel/sched
Peter Zijlstra 870a0bb5d6 sched/numa: Don't scale the imbalance
It's far too easy to get a ridiculously large imbalance pct when you
scale it like that. Use a fixed 125% for now.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-zsriaft1dv7hhboyrpvqjy6s@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-14 15:05:26 +02:00
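Per the file listing below, core.c is what this commit touches: the NUMA sched-domain setup there drops the scaled imbalance_pct in favour of a fixed 125. The following is a minimal user-space sketch of why a scaled value blows up on large machines; scaled_imbalance_pct() is a hypothetical stand-in for the pre-patch formula (an assumption for illustration), and only the fixed 125% figure is taken from the commit message.

/*
 * Illustrative sketch only, not the kernel change itself: compares a
 * hypothetical weight-scaled imbalance_pct against the fixed 125% the
 * commit switches to.  The scaling formula below is an assumption for
 * demonstration; only the 125% figure comes from the commit message.
 */
#include <stdio.h>

/* Hypothetical pre-patch style: threshold grows with the domain weight. */
static unsigned int scaled_imbalance_pct(unsigned int sd_weight)
{
        return 100 + 25 * (sd_weight / 2);      /* assumed formula, illustration only */
}

/* Post-patch behaviour per the commit message: a fixed 125%. */
static unsigned int fixed_imbalance_pct(void)
{
        return 125;
}

int main(void)
{
        /* Domain weights a NUMA level might span: 8, 64 and 512 CPUs. */
        const unsigned int weights[] = { 8, 64, 512 };
        size_t i;

        for (i = 0; i < sizeof(weights) / sizeof(weights[0]); i++)
                printf("sd_weight=%3u  scaled=%5u%%  fixed=%u%%\n",
                       weights[i],
                       scaled_imbalance_pct(weights[i]),
                       fixed_imbalance_pct());

        return 0;
}

Roughly speaking, imbalance_pct is the percentage threshold the load balancer compares group loads against before it moves tasks, so a value in the hundreds or thousands of percent means cross-node balancing almost never triggers; a fixed 125 keeps the usual 25% slack used by other domain levels.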
auto_group.c sched: Clean up parameter passing of proc_sched_autogroup_set_nice() 2012-03-02 12:23:49 +01:00
auto_group.h
clock.c
core.c sched/numa: Don't scale the imbalance 2012-05-14 15:05:26 +02:00
cpupri.c kernel-doc: fix kernel-doc warnings in sched 2012-01-23 08:44:54 -08:00
cpupri.h
debug.c sched: Change rq->nr_running to unsigned int 2012-05-09 15:00:49 +02:00
fair.c sched/fair: Revert sched-domain iteration breakage 2012-05-14 15:05:26 +02:00
features.h sched: Fix more load-balancing fallout 2012-04-26 12:54:52 +02:00
idle_task.c sched: Update documentation and comments 2012-05-07 15:04:18 +02:00
Makefile
rt.c Merge branch 'tip/sched/core' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace into sched/core 2012-04-14 15:12:04 +02:00
sched.h sched: Change rq->nr_running to unsigned int 2012-05-09 15:00:49 +02:00
stats.c sched: Remove sched_switch 2012-01-27 13:28:53 +01:00
stats.h
stop_task.c