linux_dsm_epyc7002/arch/arm/include
Will Deacon 509eb76ebf ARM: 7747/1: pcpu: ensure __my_cpu_offset cannot be re-ordered across barrier()
__my_cpu_offset is non-volatile, since we want its value to be cached
when we access several per-cpu variables in a row with preemption
disabled. This means that we rely on preempt_{en,dis}able to hazard
with the operation via the barrier() macro, so that we can't end up
migrating CPUs without reloading the per-cpu offset.
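For context, a minimal sketch of the pre-patch accessor (the function shape and the TPIDRPRW read via mrc are reconstructed from the patch description, not copied from this tree, so details may differ):

	static inline unsigned long __my_cpu_offset(void)
	{
		unsigned long off;

		/*
		 * Read TPIDRPRW. Deliberately non-volatile so that GCC may
		 * cache the result across a run of per-cpu accesses; the
		 * "memory" clobber is intended to hazard against barrier().
		 */
		asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : : "memory");

		return off;
	}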

Unfortunately, GCC doesn't treat a "memory" clobber on a non-volatile
asm block as a side-effect, and will happily re-order it before other
memory clobbers (including those in preempt_disable()) and cache the
value. This has been observed to break the cmpxchg logic in the slub
allocator, leading to livelock in kmem_cache_alloc in mainline kernels.

This patch adds a dummy memory input operand to __my_cpu_offset,
forcing it to be ordered with respect to the barrier() macro.
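A sketch of the fixed accessor, assuming the dummy operand is a fake load through the stack pointer (the "Q" constraint and sp binding here are an illustration of the idea, not necessarily the exact operand chosen in the tree):

	static inline unsigned long __my_cpu_offset(void)
	{
		unsigned long off;
		register unsigned long *sp asm ("sp");

		/*
		 * Read TPIDRPRW. The dummy "Q" input (a fake stack read)
		 * gives the asm a memory dependency, so it cannot be
		 * reordered across barrier(), while the result may still
		 * be cached in between barriers.
		 */
		asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : "Q" (*sp));

		return off;
	}

Because the operand is an input rather than a volatile marker, GCC remains free to reuse the cached offset between barriers, preserving the original optimisation.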

Cc: <stable@vger.kernel.org>
Cc: Rob Herring <rob.herring@calxeda.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-06-05 23:35:56 +01:00
asm ARM: 7747/1: pcpu: ensure __my_cpu_offset cannot be re-ordered across barrier() 2013-06-05 23:35:56 +01:00
debug ARM: ux500: Fix incorrect DEBUG UART virtual addresses 2013-05-14 10:26:45 +02:00
uapi/asm Merge branch 'kvm-arm-cleanup' from git://github.com/columbia/linux-kvm-arm.git 2013-04-25 18:23:48 +03:00