linux_dsm_epyc7002/arch/powerpc/lib
Mike Rapoport 7e1c4e2792 memblock: stop using implicit alignment to SMP_CACHE_BYTES
When memblock allocation APIs are called with align = 0, the alignment
is implicitly set to SMP_CACHE_BYTES.
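
For illustration, the fallback this change removes from the memblock
internals amounts to the following (a paraphrased sketch, not the exact
mm/memblock.c code):

    /* paraphrased sketch of the implicit fallback, not the exact code */
    if (!align)
            align = SMP_CACHE_BYTES;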

The implicit alignment is applied deep in the memblock allocator and can
come as a surprise.  Such an alignment is not wrong even when relied
upon unintentionally, but it is better to be explicit for the sake of
clarity and the principle of least surprise.

Replace all such uses of memblock APIs with the 'align' parameter
explicitly set to SMP_CACHE_BYTES and stop implicit alignment assignment
in the memblock internal allocation functions.

Where the memblock APIs are used via helper functions, e.g.
iommu_arena_new_node() on Alpha, the helper functions were detected with
Coccinelle's help and then manually examined and updated where
appropriate, as sketched below.
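
For illustration only (a hypothetical wrapper, not the actual Alpha
code): such a helper forwards its align argument to memblock, so callers
passing 0 relied on the implicit SMP_CACHE_BYTES fallback and had to be
fixed by hand rather than by the semantic patch:

    /* Hypothetical helper for illustration; not iommu_arena_new_node().
     * The align value is simply forwarded to memblock_alloc_node(). */
    static void * __init table_alloc_node(phys_addr_t size,
                                          phys_addr_t align, int nid)
    {
            return memblock_alloc_node(size, align, nid);
    }

    /* call sites now state the alignment explicitly: */
    tbl = table_alloc_node(size, SMP_CACHE_BYTES, nid);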

The direct memblock API users were updated using the semantic patch below:

@@
expression size, min_addr, max_addr, nid;
@@
(
|
- memblock_alloc_try_nid_raw(size, 0, min_addr, max_addr, nid)
+ memblock_alloc_try_nid_raw(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
|
- memblock_alloc_try_nid_nopanic(size, 0, min_addr, max_addr, nid)
+ memblock_alloc_try_nid_nopanic(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
|
- memblock_alloc_try_nid(size, 0, min_addr, max_addr, nid)
+ memblock_alloc_try_nid(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
|
- memblock_alloc(size, 0)
+ memblock_alloc(size, SMP_CACHE_BYTES)
|
- memblock_alloc_raw(size, 0)
+ memblock_alloc_raw(size, SMP_CACHE_BYTES)
|
- memblock_alloc_from(size, 0, min_addr)
+ memblock_alloc_from(size, SMP_CACHE_BYTES, min_addr)
|
- memblock_alloc_nopanic(size, 0)
+ memblock_alloc_nopanic(size, SMP_CACHE_BYTES)
|
- memblock_alloc_low(size, 0)
+ memblock_alloc_low(size, SMP_CACHE_BYTES)
|
- memblock_alloc_low_nopanic(size, 0)
+ memblock_alloc_low_nopanic(size, SMP_CACHE_BYTES)
|
- memblock_alloc_from_nopanic(size, 0, min_addr)
+ memblock_alloc_from_nopanic(size, SMP_CACHE_BYTES, min_addr)
|
- memblock_alloc_node(size, 0, nid)
+ memblock_alloc_node(size, SMP_CACHE_BYTES, nid)
)

[mhocko@suse.com: changelog update]
[akpm@linux-foundation.org: coding-style fixes]
[rppt@linux.ibm.com: fix missed uses of implicit alignment]
  Link: http://lkml.kernel.org/r/20181016133656.GA10925@rapoport-lnx
Link: http://lkml.kernel.org/r/1538687224-17535-1-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Suggested-by: Michal Hocko <mhocko@suse.com>
Acked-by: Paul Burton <paul.burton@mips.com>	[MIPS]
Acked-by: Michael Ellerman <mpe@ellerman.id.au>	[powerpc]
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Richard Weinberger <richard@nod.at>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-10-31 08:54:16 -07:00
alloc.c memblock: stop using implicit alignment to SMP_CACHE_BYTES 2018-10-31 08:54:16 -07:00
checksum_32.S powerpc: Implement csum_ipv6_magic in assembly 2018-06-04 00:39:19 +10:00
checksum_64.S powerpc: fix csum_ipv6_magic() on little endian platforms 2018-09-20 21:12:28 +10:00
checksum_wrappers.c Replace <asm/uaccess.h> with <linux/uaccess.h> globally 2016-12-24 11:46:01 -08:00
code-patching.c powerpc: handover page flags with a pgprot_t parameter 2018-10-14 18:04:09 +11:00
copy_32.S powerpc/lib: Use patch_site to patch copy_32 functions once cache is enabled 2018-08-10 22:12:35 +10:00
copypage_64.S powerpc: clean inclusions of asm/feature-fixups.h 2018-07-30 22:48:17 +10:00
copypage_power7.S powerpc/64: enhance memcmp() with VMX instruction for long bytes comparision 2018-07-24 22:03:21 +10:00
copyuser_64.S powerpc/64: Copy as much as possible in __copy_tofrom_user 2018-08-08 00:32:36 +10:00
copyuser_power7.S selftests/powerpc/64: Test all paths through copy routines 2018-08-08 00:32:35 +10:00
crtsavres.S powerpc/64: Do not create new section for save/restore functions 2017-05-30 14:59:51 +10:00
div64.S powerpc: Fix a corner case in __div64_32 2005-10-20 09:37:02 +10:00
error-inject.c powerpc: Add support for function error injection 2018-10-20 13:26:43 +11:00
feature-fixups-test.S powerpc: move ASM_CONST and stringify_in_c() into asm-const.h 2018-07-30 22:48:16 +10:00
feature-fixups.c powerpc/fsl: Add barrier_nospec implementation for NXP PowerPC Book3E 2018-08-08 00:32:24 +10:00
hweight_64.S powerpc: clean inclusions of asm/feature-fixups.h 2018-07-30 22:48:17 +10:00
ldstfp.S powerpc: move ASM_CONST and stringify_in_c() into asm-const.h 2018-07-30 22:48:16 +10:00
locks.c powerpc: clean the inclusion of stringify.h 2018-07-30 22:48:17 +10:00
Makefile powerpc: Add support for function error injection 2018-10-20 13:26:43 +11:00
mem_64.S powerpc/64: Remove static branch hints from memset() 2018-09-17 21:17:25 +10:00
memcmp_32.S powerpc/lib: optimise PPC32 memcmp 2018-06-04 00:39:21 +10:00
memcmp_64.S powerpc/64: add 32 bytes prechecking before using VMX optimization on memcmp() 2018-07-24 22:03:21 +10:00
memcpy_64.S selftests/powerpc/64: Test all paths through copy routines 2018-08-08 00:32:35 +10:00
memcpy_power7.S selftests/powerpc/64: Test all paths through copy routines 2018-08-08 00:32:35 +10:00
pmem.c powerpc/lib: Implement UACCESS_FLUSHCACHE API 2017-11-13 08:00:31 +11:00
quad.S powerpc: Handle most loads and stores in instruction emulation code 2017-09-01 16:39:48 +10:00
rheap.c treewide: kmalloc() -> kmalloc_array() 2018-06-12 16:19:22 -07:00
sstep.c powerpc/sstep: Fix kernel crash if VSX is not present 2018-06-04 00:39:08 +10:00
string_32.S powerpc/lib: optimise 32 bits __clear_user() 2018-06-04 00:39:21 +10:00
string_64.S powerpc: Fix invalid use of register expressions 2017-08-10 22:29:41 +10:00
string.S powerpc/lib: optimise PPC32 memcmp 2018-06-04 00:39:21 +10:00
strlen_32.S powerpc/lib: Implement strlen() in assembly for PPC32 2018-08-07 21:49:30 +10:00
test_emulate_step.c powerpc/sstep: Fix emulate_step test if VSX not present 2018-06-04 00:39:14 +10:00
vmx-helper.c powerpc/64: enhance memcmp() with VMX instruction for long bytes comparision 2018-07-24 22:03:21 +10:00
xor_vmx_glue.c powerpc/altivec: Add missing prototypes for altivec 2018-05-25 12:04:38 +10:00
xor_vmx.c powerpc/lib/xor_vmx: Ensure no altivec code executes before enable_kernel_altivec() 2017-06-02 20:17:52 +10:00
xor_vmx.h License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00