22e4ebb975
Implement MEMBARRIER_CMD_PRIVATE_EXPEDITED with IPIs, using a cpumask built from all runqueues for which the current thread's mm is the same as that of the thread calling sys_membarrier. It executes faster than the non-expedited variant (no blocking). It also works on NOHZ_FULL configurations.

Scheduler-wise, it requires a memory barrier before and after context switching between processes (which have different mm). The memory barrier before context switch is already present. For the barrier after context switch:

* Our TSO archs can do RELEASE without being a full barrier. Look at x86 spin_unlock() being a regular STORE for example. But for those archs, all atomics imply smp_mb, and all of them have atomic ops in switch_mm() for mm_cpumask(); on x86 the CR3 load acts as a full barrier.
* Of all weakly ordered machines, only ARM64 and PPC can do RELEASE; the rest do smp_mb(), so there the spin_unlock() is a full barrier and we're good.
* ARM64 has a very heavy barrier in switch_to(), which suffices.
* PPC just removed its barrier from switch_to(), but appears to be talking about adding something to switch_mm(). So add a smp_mb__after_unlock_lock() for now, until this is settled on the PPC side.

Changes since v3:
- Properly document the memory barriers provided by each architecture.

Changes since v2:
- Address comments from Peter Zijlstra.
- Add smp_mb__after_unlock_lock() after finish_lock_switch() in finish_task_switch(), to provide the memory barrier we need after storing to rq->curr. This is much simpler than the previous approach relying on atomic_dec_and_test() in mmdrop(), which actually added a memory barrier in the common case of switching between userspace processes.
- Return -EINVAL when MEMBARRIER_CMD_SHARED is used on a nohz_full kernel, rather than having the whole membarrier system call return -ENOSYS. Indeed, CMD_PRIVATE_EXPEDITED is compatible with nohz_full. Adapt the CMD_QUERY mask accordingly.

Changes since v1:
- Move the membarrier code under kernel/sched/ because it uses the scheduler runqueue.
- Only add the barrier when we switch from a kernel thread. The case where we switch from a user-space thread is already handled by the atomic_dec_and_test() in mmdrop().
- Add a comment to mmdrop() documenting the requirement on the implicit memory barrier.

CC: Peter Zijlstra <peterz@infradead.org>
CC: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
CC: Boqun Feng <boqun.feng@gmail.com>
CC: Andrew Hunter <ahh@google.com>
CC: Maged Michael <maged.michael@gmail.com>
CC: gromer@google.com
CC: Avi Kivity <avi@scylladb.com>
CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: Paul Mackerras <paulus@samba.org>
CC: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Dave Watson <davejwatson@fb.com>
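For reference, below is a minimal userspace sketch (not part of this commit) of how the new command is invoked. It assumes a kernel built with CONFIG_MEMBARRIER=y and headers providing <linux/membarrier.h>; since glibc ships no dedicated membarrier(2) wrapper, the raw syscall is used. Later kernels additionally require the process to register first via MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED; that step is omitted here because it postdates this change.

/*
 * Hypothetical usage sketch: issue an expedited memory barrier on every
 * CPU currently running a thread of the calling process.
 */
#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

static int membarrier(int cmd, int flags)
{
	/* No glibc wrapper exists; invoke the raw system call. */
	return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
	int cmds;

	/* MEMBARRIER_CMD_QUERY returns a bitmask of supported commands. */
	cmds = membarrier(MEMBARRIER_CMD_QUERY, 0);
	if (cmds < 0 || !(cmds & MEMBARRIER_CMD_PRIVATE_EXPEDITED)) {
		fprintf(stderr, "MEMBARRIER_CMD_PRIVATE_EXPEDITED unsupported\n");
		return 1;
	}

	/*
	 * IPI the CPUs whose current task shares this mm; each of them
	 * executes a full memory barrier before the call returns.
	 */
	if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0) < 0) {
		perror("membarrier");
		return 1;
	}
	return 0;
}

Unlike MEMBARRIER_CMD_SHARED, the expedited variant never blocks: it only IPIs CPUs running the caller's mm, which is also why it remains usable on nohz_full configurations.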
Makefile
ifdef CONFIG_FUNCTION_TRACER
CFLAGS_REMOVE_clock.o = $(CC_FLAGS_FTRACE)
endif

# These files are disabled because they produce non-interesting flaky coverage
# that is not a function of syscall inputs. E.g. involuntary context switches.
KCOV_INSTRUMENT := n

ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y)
# According to Alan Modra <alan@linuxcare.com.au>, the -fno-omit-frame-pointer is
# needed for x86 only. Why this used to be enabled for all architectures is beyond
# me. I suspect most platforms don't need this, but until we know that for sure
# I turn this off for IA-64 only. Andreas Schwab says it's also needed on m68k
# to get a correct value for the wait-channel (WCHAN in ps). --davidm
CFLAGS_core.o := $(PROFILING) -fno-omit-frame-pointer
endif

obj-y += core.o loadavg.o clock.o cputime.o
obj-y += idle_task.o fair.o rt.o deadline.o
obj-y += wait.o wait_bit.o swait.o completion.o idle.o
obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o stop_task.o
obj-$(CONFIG_SCHED_AUTOGROUP) += autogroup.o
obj-$(CONFIG_SCHEDSTATS) += stats.o
obj-$(CONFIG_SCHED_DEBUG) += debug.o
obj-$(CONFIG_CGROUP_CPUACCT) += cpuacct.o
obj-$(CONFIG_CPU_FREQ) += cpufreq.o
obj-$(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) += cpufreq_schedutil.o
obj-$(CONFIG_MEMBARRIER) += membarrier.o