linux_dsm_epyc7002/arch/arm64/mm
Mark Rutland 62a679cb28 arm64: simplify ptrauth initialization
Currently __cpu_setup conditionally initializes the address
authentication keys and enables them in SCTLR_EL1, doing so differently
for the primary CPU and secondary CPUs, and skipping this work for CPUs
returning from an idle state. For the latter case, cpu_do_resume
restores the keys and SCTLR_EL1 value after the MMU has been enabled.
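
For reference, that restore amounts to writing the saved key material back into the APIAKey{Lo,Hi}_EL1 registers once the MMU is up. A minimal C sketch of the idea, assuming the v5.7-era __ptrauth_key_install() and system_supports_address_auth() helpers (the wrapper name here is hypothetical; the real cpu_do_resume path is assembly):

  #include <asm/pointer_auth.h>

  /* Sketch only: reinstall a saved kernel key pair after MMU enable. */
  static void ptrauth_restore_sketch(struct ptrauth_keys_kernel *keys)
  {
      if (system_supports_address_auth())
          __ptrauth_key_install(APIA, keys->apia); /* APIAKeyLo/Hi_EL1 */
  }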

This flow is rather difficult to follow, so instead let's move the
primary and secondary CPU initialization into their respective boot
paths. By following the example of cpu_do_resume and doing so once the
MMU is enabled, we can always initialize the keys from the values in
thread_struct, and avoid the machinery necessary to pass the keys in
secondary_data or open-coding initialization for the boot CPU.
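
As an illustration of the resulting shape, a minimal C sketch assuming the v5.7-era helpers (get_random_bytes(), __ptrauth_key_install(), and thread_struct's keys_kernel field are the stock ones; the wrapper name is hypothetical, and the in-tree code splits generation and install across separate helpers):

  #include <linux/random.h>
  #include <asm/pointer_auth.h>

  /*
   * Sketch only: seed a task's kernel ptrauth key and install it once
   * the MMU is on, mirroring what cpu_do_resume already does.
   */
  static void ptrauth_thread_init_sketch(struct task_struct *tsk)
  {
      struct ptrauth_keys_kernel *keys = &tsk->thread.keys_kernel;

      if (!system_supports_address_auth())
          return;
      get_random_bytes(&keys->apia, sizeof(keys->apia));
      __ptrauth_key_install(APIA, keys->apia);
  }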

This means we perform an additional read-modify-write (RMW) of SCTLR_EL1,
but we already do this in the cpu_do_resume path, and for other features
in cpufeature.c, so this isn't a major concern in a bringup path. Note
that even with the enable bits clear, the key registers are accessible.
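
The RMW in question is small; a sketch using the stock sysreg_clear_set() helper and the SCTLR_ELx_EN* bit definitions (the function name is hypothetical; the in-tree equivalent lives in the ptrauth enable path):

  #include <asm/cpufeature.h>
  #include <asm/sysreg.h>

  /*
   * Sketch only: set the four ptrauth enable bits in SCTLR_EL1. Safe to
   * do late, since the key registers are writable while these bits are
   * still clear.
   */
  static inline void ptrauth_enable_sketch(void)
  {
      if (!system_supports_address_auth())
          return;
      sysreg_clear_set(sctlr_el1, 0,
                       SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
                       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
      isb();
  }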

As this now renders the argument to __cpu_setup redundant, let's also
remove it entirely. Future extensions can follow a similar approach to
initialize values that differ between primary and secondary CPUs.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200423101606.37601-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 11:23:21 +01:00
cache.S arm64: mm: Use modern annotations for assembly functions 2020-01-08 12:23:38 +00:00
context.c arm64 updates for 5.7: 2020-03-31 10:05:01 -07:00
copypage.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234 2019-06-19 17:09:07 +02:00
dma-mapping.c dma-mapping: drop the dev argument to arch_sync_dma_for_* 2019-11-20 20:31:38 +01:00
dump.c x86: mm: avoid allocating struct mm_struct on the stack 2020-02-04 03:05:25 +00:00
extable.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
fault.c mm/vma: introduce VM_ACCESS_FLAGS 2020-04-10 15:36:21 -07:00
flush.c mm: introduce page_size() 2019-09-24 15:54:08 -07:00
hugetlbpage.c arm64/hugetlb: Use macros for contiguous huge page sizes 2019-06-03 16:58:37 +01:00
init.c mm: hugetlb: optionally allocate gigantic hugepages using cma 2020-04-10 15:36:21 -07:00
ioremap.c arm64: remove __iounmap 2019-09-04 13:12:26 +01:00
kasan_init.c arm64: memory: rename VA_START to PAGE_END 2019-08-14 17:06:58 +01:00
Makefile arm64: mm: convert mm/dump.c to use walk_page_range() 2020-02-04 03:05:25 +00:00
mmap.c arm64, mm: move generic mmap layout functions to mm 2019-09-24 15:54:11 -07:00
mmu.c mm/memory_hotplug: add pgprot_t to mhp_params 2020-04-10 15:36:21 -07:00
numa.c arm64: Replace strncmp with str_has_prefix 2019-08-05 11:06:34 +01:00
pageattr.c mm: change_memory_common: add spaces for * operator 2020-01-08 17:30:33 +00:00
pgd.c mm: consolidate pgtable_cache_init() and pgd_cache_init() 2019-09-24 15:54:09 -07:00
physaddr.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
proc.S arm64: simplify ptrauth initialization 2020-04-28 11:23:21 +01:00
ptdump_debugfs.c arm64/mm: Hold memory hotplug lock while walking for kernel page table dump 2020-03-04 15:35:22 +00:00