linux_dsm_epyc7002/arch/mips/vdso/vdso.h
Arnd Bergmann ee38d94a0a page flags: prioritize kasan bits over last-cpupid
ARM64 randconfig builds regularly run into a build error, especially
when NUMA_BALANCING and SPARSEMEM are enabled but not SPARSEMEM_VMEMMAP:

  #error "KASAN: not enough bits in page flags for tag"

The last-cpupid bits are already conditional on the available space, so
the result of the calculation is somewhat arbitrary, depending on
whether those bits had already been left out or not.

Adding the kasan tag bits before last-cpupid makes it much more likely to
end up with a successful build here, and should be reliable for
randconfig at least, as long as that does not randomize NR_CPUS or
NODES_SHIFT but uses the defaults.

To keep the modified check from triggering in the x86 vdso32 code,
where all the constants are wrong (it is built with -m32), enclose all
the definitions in an #ifdef.
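
As a rough illustration of the ordering described above, here is a minimal
standalone sketch; it is not the kernel's include/linux/page-flags-layout.h,
all widths are invented, and the macro names only loosely mirror the real
ones. The point it demonstrates is reserving the KASAN tag bits up front and
only then letting last_cpupid claim whatever room remains:

/* Standalone sketch only; the widths below are invented for illustration. */
#include <stdio.h>

#define BITS_PER_LONG           64
#define NR_PAGEFLAGS            24      /* invented */
#define SECTIONS_WIDTH          18      /* invented */
#define NODES_WIDTH             6       /* invented */
#define ZONES_WIDTH             2       /* invented */
#define KASAN_TAG_WIDTH         8       /* reserved unconditionally */
#define LAST_CPUPID_SHIFT       14      /* desired width, may be dropped */

/* KASAN tag bits are counted first, so last_cpupid only fits if room is left. */
#if SECTIONS_WIDTH + ZONES_WIDTH + NODES_WIDTH + KASAN_TAG_WIDTH + \
        LAST_CPUPID_SHIFT <= BITS_PER_LONG - NR_PAGEFLAGS
#define LAST_CPUPID_WIDTH       LAST_CPUPID_SHIFT
#else
#define LAST_CPUPID_WIDTH       0       /* fall back: store last_cpupid elsewhere */
#endif

/* Only the mandatory fields can still overflow; that is a hard error. */
#if SECTIONS_WIDTH + ZONES_WIDTH + NODES_WIDTH + KASAN_TAG_WIDTH + \
        LAST_CPUPID_WIDTH > BITS_PER_LONG - NR_PAGEFLAGS
#error "not enough bits in page flags"
#endif

int main(void)
{
        printf("last_cpupid gets %d bit(s) in page->flags\n", LAST_CPUPID_WIDTH);
        return 0;
}

With the invented numbers above, last_cpupid is dropped (width 0) while the
KASAN tag bits always fit; counting last_cpupid first instead would trip the
error whenever the optional bits happened to be included, which is the
arbitrariness the patch removes.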

[arnd@arndb.de: build fix]
  Link: http://lkml.kernel.org/r/CAK8P3a3Mno1SWTcuAOT0Wa9VS15pdU6EfnkxLbDpyS55yO04+g@mail.gmail.com
Link: http://lkml.kernel.org/r/20190722115520.3743282-1-arnd@arndb.de
Link: https://lore.kernel.org/lkml/20190618095347.3850490-1-arnd@arndb.de/
Fixes: 2813b9c029 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-08-03 07:02:01 -07:00


/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * Copyright (C) 2015 Imagination Technologies
 * Author: Alex Smith <alex.smith@imgtec.com>
 */

#include <asm/sgidefs.h>

#if _MIPS_SIM != _MIPS_SIM_ABI64 && defined(CONFIG_64BIT)

/* Building 32-bit VDSO for the 64-bit kernel. Fake a 32-bit Kconfig. */
#define BUILD_VDSO32_64
#undef CONFIG_64BIT
#define CONFIG_32BIT 1

#ifndef __ASSEMBLY__
#include <asm-generic/atomic64.h>
#endif

#endif

#ifndef __ASSEMBLY__

#include <asm/asm.h>
#include <asm/page.h>
#include <asm/vdso.h>

static inline unsigned long get_vdso_base(void)
{
        unsigned long addr;

        /*
         * We can't use cpu_has_mips_r6 since it needs the cpu_data[]
         * kernel symbol.
         */
#ifdef CONFIG_CPU_MIPSR6
        /*
         * lapc <symbol> is an alias to addiupc reg, <symbol> - .
         *
         * We can't use addiupc because there is no label-label
         * support for the addiupc reloc
         */
        __asm__("lapc %0, _start \n"
                : "=r" (addr) : :);
#else
        /*
         * Get the base load address of the VDSO. We have to avoid generating
         * relocations and references to the GOT because ld.so does not perform
         * relocations on the VDSO. We use the current offset from the VDSO base
         * and perform a PC-relative branch which gives the absolute address in
         * ra, and take the difference. The assembler chokes on
         * "li %0, _start - .", so embed the offset as a word and branch over
         * it.
         */
        __asm__(
        " .set push \n"
        " .set noreorder \n"
        " bal 1f \n"
        " nop \n"
        " .word _start - . \n"
        "1: lw %0, 0($31) \n"
        " " STR(PTR_ADDU) " %0, $31, %0 \n"
        " .set pop \n"
        : "=r" (addr)
        :
        : "$31");
#endif /* CONFIG_CPU_MIPSR6 */

        return addr;
}

static inline const union mips_vdso_data *get_vdso_data(void)
{
        return (const union mips_vdso_data *)(get_vdso_base() - PAGE_SIZE);
}

#ifdef CONFIG_CLKSRC_MIPS_GIC
static inline void __iomem *get_gic(const union mips_vdso_data *data)
{
        return (void __iomem *)data - PAGE_SIZE;
}
#endif /* CONFIG_CLKSRC_MIPS_GIC */

#endif /* __ASSEMBLY__ */
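
A side note on the layout that get_vdso_data() and get_gic() above imply: the
VDSO data page sits one PAGE_SIZE below the image base, and the optional GIC
user page one PAGE_SIZE below that. The following is a tiny standalone
illustration of the same arithmetic, using invented stand-in types outside of
the kernel build:

#include <stdio.h>

#define PAGE_SIZE 4096UL                        /* invented page size */

union fake_vdso_data { unsigned long dummy; };  /* stand-in for union mips_vdso_data */

/* Pretend mapping: [GIC user page][data page][VDSO image page]. */
static unsigned char mapping[3 * PAGE_SIZE];

int main(void)
{
        unsigned long base = (unsigned long)&mapping[2 * PAGE_SIZE];   /* base of the image  */
        union fake_vdso_data *data =
                (union fake_vdso_data *)(base - PAGE_SIZE);             /* like get_vdso_data() */
        void *gic = (unsigned char *)data - PAGE_SIZE;                  /* like get_gic()       */

        printf("image %#lx, data %#lx, gic %#lx\n",
               base, (unsigned long)data, (unsigned long)gic);
        return 0;
}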