arm64 updates for 5.8

- Branch Target Identification (BTI)
 	* Support for ARMv8.5-BTI in both user- and kernel-space. This
 	  allows branch targets to limit the types of branch from which
 	  they can be called and additionally prevents branching to
 	  arbitrary code, although kernel support requires a very recent
 	  toolchain.
 
 	* Function annotation via SYM_FUNC_START() so that assembly
 	  functions are wrapped with the relevant "landing pad"
 	  instructions.
 
 	* BPF and vDSO updates to use the new instructions.
 
 	* Addition of a new HWCAP and exposure of BTI capability to
 	  userspace via ID register emulation, along with ELF loader
 	  support for the BTI feature in .note.gnu.property.
 
 	* Non-critical fixes to CFI unwind annotations in the sigreturn
 	  trampoline.
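
      As a rough illustration of the userspace side (not code from this
      series), a JIT that wants guarded code pages checks the new HWCAP and
      maps with the arm64-specific PROT_BTI flag; the fallback constants
      below are the arm64 UAPI values, assumed in case older headers lack
      them:

          #include <stdio.h>
          #include <sys/auxv.h>
          #include <sys/mman.h>
          #include <asm/hwcap.h>

          #ifndef HWCAP2_BTI
          #define HWCAP2_BTI   (1 << 17)   /* arm64 AT_HWCAP2 bit (assumed) */
          #endif
          #ifndef PROT_BTI
          #define PROT_BTI     0x10        /* arm64-specific mmap/mprotect flag */
          #endif

          int main(void)
          {
                  if (!(getauxval(AT_HWCAP2) & HWCAP2_BTI)) {
                          puts("BTI not available on this CPU/kernel");
                          return 0;
                  }

                  /*
                   * Static/dynamic executables get BTI from the ELF loader via
                   * .note.gnu.property; JIT-style code must ask for guarded
                   * pages explicitly.
                   */
                  void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                  if (p == MAP_FAILED)
                          return 1;

                  /* ... emit code whose branch targets start with "bti c" ... */

                  mprotect(p, 4096, PROT_READ | PROT_EXEC | PROT_BTI);
                  return 0;
          }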
 
 - Shadow Call Stack (SCS)
 	* Support for Clang's Shadow Call Stack feature, which reserves
 	  platform register x18 to point at a separate stack for each
 	  task that holds only return addresses. This protects function
 	  return control flow from buffer overruns on the main stack.
 
 	* Save/restore of x18 across problematic boundaries (user-mode,
 	  hypervisor, EFI, suspend, etc.).
 
 	* Core support for SCS, should other architectures want to use it
 	  too.
 
 	* SCS overflow checking on context-switch as part of the existing
 	  stack limit check if CONFIG_SCHED_STACK_END_CHECK=y.
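
      Conceptually, the overflow check keeps a magic canary in the last slot
      of each task's shadow stack and verifies it alongside the normal
      end-of-stack check; a simplified sketch (names and values are
      illustrative, not the kernel's):

          #define SCS_SIZE        1024UL          /* illustrative size */
          #define SCS_END_MAGIC   0x5f6dead0UL    /* illustrative poison value */

          static inline unsigned long *scs_magic(void *scs_base)
          {
                  /* The last unsigned long of the shadow stack holds the canary. */
                  return (unsigned long *)((char *)scs_base + SCS_SIZE) - 1;
          }

          static inline int scs_corrupted(void *scs_base)
          {
                  /* Anything other than the canary means stored return
                   * addresses ran past the end of the shadow stack. */
                  return *scs_magic(scs_base) != SCS_END_MAGIC;
          }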
 
 - CPU feature detection
 	* Removal of numerous "SANITY CHECK" errors when running on a
 	  system with mismatched AArch32 support at EL1. This is primarily
 	  a concern for KVM, which now disables support for 32-bit guests
 	  on such a system.
 
 	* Addition of new ID registers and fields as the architecture has
 	  been extended.
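
      These fields become visible to userspace through the kernel's existing
      MRS emulation of the ID register space; a minimal sketch reading the
      newly exposed BT field (ID_AA64PFR1_EL1, bits [3:0]) from EL0:

          #include <stdio.h>

          int main(void)
          {
                  unsigned long pfr1;

                  /* Trapped by the CPU and emulated by the kernel from its
                   * sanitised copy of the register. */
                  asm("mrs %0, id_aa64pfr1_el1" : "=r" (pfr1));

                  printf("ID_AA64PFR1_EL1.BT = %lu\n", pfr1 & 0xf);
                  return 0;
          }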
 
 - Perf and PMU drivers
 	* Minor fixes and cleanups to system PMU drivers.
 
 - Hardware errata
 	* Unify KVM workarounds for VHE and nVHE configurations.
 
 	* Sort vendor errata entries in Kconfig.
 
 - Secure Monitor Call Calling Convention (SMCCC)
 	* Update to the latest specification from Arm (v1.2).
 
 	* Allow PSCI code to query the SMCCC version.
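
      A caller that wants the discovered version can still issue the
      standard SMCCC_VERSION call over a v1.1+ conduit; a hedged sketch
      built on existing kernel interfaces (the wrapper function itself is
      illustrative only):

          #include <linux/arm-smccc.h>
          #include <linux/printk.h>

          static void report_smccc_version(void)
          {
                  struct arm_smccc_res res;

                  arm_smccc_1_1_invoke(ARM_SMCCC_VERSION_FUNC_ID, &res);
                  if ((long)res.a0 < 0)   /* negative return: not implemented */
                          return;

                  /* Major version in bits [31:16], minor in bits [15:0]. */
                  pr_info("SMCCC v%lu.%lu\n", res.a0 >> 16, res.a0 & 0xffff);
          }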
 
 - Software Delegated Exception Interface (SDEI)
 	* Unexport a bunch of unused symbols.
 
 	* Minor fixes to handling of firmware data.
 
 - Pointer authentication
 	* Add support for dumping the kernel PAC mask in vmcoreinfo so
 	  that the stack can be unwound by tools such as kdump.
 
 	* Simplification of key initialisation during CPU bringup.
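
      Tools consuming the new KERNELPACMASK vmcoreinfo field strip the PAC
      bits before symbolising saved return addresses; because kernel virtual
      addresses have their top bits set, that reduces to OR-ing the mask
      back in (a minimal sketch, not taken from any particular tool):

          #include <stdint.h>

          static uint64_t strip_kernel_pac(uint64_t addr, uint64_t kernel_pac_mask)
          {
                  /* Restore the VA bits that the PAC occupied. */
                  return addr | kernel_pac_mask;
          }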
 
 - BPF backend
 	* Improve immediate generation for logical and add/sub
 	  instructions.
 
 - vDSO
 	* Minor fixes to the linker flags for consistency with other
 	  architectures and support for LLVM's unwinder.
 
 	* Clean up logic to initialise and map the vDSO into userspace.
 
 - ACPI
 	* Workaround for an ambiguity in the IORT specification relating
 	  to the "num_ids" field.
 
 	* Support _DMA method for all named components rather than only
 	  PCIe root complexes.
 
 	* Other minor IORT-related fixes.
 
 - Miscellaneous
 	* Initialise debug traps early for KGDB and fix a KDB
 	  cache-flushing deadlock.
 
 	* Minor tweaks to early boot state (documentation update, set
 	  TEXT_OFFSET to 0x0, increase alignment of PE/COFF sections).
 
 	* Refactoring and cleanup
 -----BEGIN PGP SIGNATURE-----
 
 iQFEBAABCgAuFiEEPxTL6PPUbjXGY88ct6xw3ITBYzQFAl7U9csQHHdpbGxAa2Vy
 bmVsLm9yZwAKCRC3rHDchMFjNLBHCACs/YU4SM7Om5f+7QnxIKao5DBr2CnGGvdC
 yTfDghFDTLQVv3MufLlfno3yBe5G8sQpcZfcc+hewfcGoMzVZXu8s7LzH6VSn9T9
 jmT3KjDMrg0RjSHzyumJp2McyelTk0a4FiKArSIIKsJSXUyb1uPSgm7SvKVDwEwU
 JGDzL9IGilmq59GiXfDzGhTZgmC37QdwRoRxDuqtqWQe5CHoRXYexg87HwBKOQxx
 HgU9L7ehri4MRZfpyjaDrr6quJo3TVnAAKXNBh3mZAskVS9ZrfKpEH0kYWYuqybv
 znKyHRecl/rrGePV8RTMtrwnSdU26zMXE/omsVVauDfG9hqzqm+Q
 =w3qi
 -----END PGP SIGNATURE-----

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Will Deacon:
 "A sizeable pile of arm64 updates for 5.8.

  Summary below, but the two big features are support for Branch Target
  Identification and Clang's Shadow Call Stack. The latter is currently
  arm64-only, but the high-level parts are all in core code so it could
  easily be adopted by other architectures pending toolchain support.

  Branch Target Identification (BTI):

   - Support for ARMv8.5-BTI in both user- and kernel-space. This allows
     branch targets to limit the types of branch from which they can be
     called and additionally prevents branching to arbitrary code,
     although kernel support requires a very recent toolchain.

   - Function annotation via SYM_FUNC_START() so that assembly functions
     are wrapped with the relevant "landing pad" instructions.

   - BPF and vDSO updates to use the new instructions.

   - Addition of a new HWCAP and exposure of BTI capability to userspace
     via ID register emulation, along with ELF loader support for the
     BTI feature in .note.gnu.property.

   - Non-critical fixes to CFI unwind annotations in the sigreturn
     trampoline.

  Shadow Call Stack (SCS):

   - Support for Clang's Shadow Call Stack feature, which reserves
     platform register x18 to point at a separate stack for each task
     that holds only return addresses. This protects function return
     control flow from buffer overruns on the main stack.

   - Save/restore of x18 across problematic boundaries (user-mode,
     hypervisor, EFI, suspend, etc.).

   - Core support for SCS, should other architectures want to use it
     too.

   - SCS overflow checking on context-switch as part of the existing
     stack limit check if CONFIG_SCHED_STACK_END_CHECK=y.

  CPU feature detection:

   - Removal of numerous "SANITY CHECK" errors when running on a system
     with mismatched AArch32 support at EL1. This is primarily a concern
     for KVM, which now disables support for 32-bit guests on such a
     system.

   - Addition of new ID registers and fields as the architecture has
     been extended.

  Perf and PMU drivers:

   - Minor fixes and cleanups to system PMU drivers.

  Hardware errata:

   - Unify KVM workarounds for VHE and nVHE configurations.

   - Sort vendor errata entries in Kconfig.

  Secure Monitor Call Calling Convention (SMCCC):

   - Update to the latest specification from Arm (v1.2).

   - Allow PSCI code to query the SMCCC version.

  Software Delegated Exception Interface (SDEI):

   - Unexport a bunch of unused symbols.

   - Minor fixes to handling of firmware data.

  Pointer authentication:

   - Add support for dumping the kernel PAC mask in vmcoreinfo so that
     the stack can be unwound by tools such as kdump.

   - Simplification of key initialisation during CPU bringup.

  BPF backend:

   - Improve immediate generation for logical and add/sub instructions.

  vDSO:

   - Minor fixes to the linker flags for consistency with other
     architectures and support for LLVM's unwinder.

   - Clean up logic to initialise and map the vDSO into userspace.

  ACPI:

   - Workaround for an ambiguity in the IORT specification relating to
     the "num_ids" field.

   - Support _DMA method for all named components rather than only PCIe
     root complexes.

   - Other minor IORT-related fixes.

  Miscellaneous:

   - Initialise debug traps early for KGDB and fix a KDB cache-flushing
     deadlock.

   - Minor tweaks to early boot state (documentation update, set
     TEXT_OFFSET to 0x0, increase alignment of PE/COFF sections).

   - Refactoring and cleanup"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (148 commits)
  KVM: arm64: Move __load_guest_stage2 to kvm_mmu.h
  KVM: arm64: Check advertised Stage-2 page size capability
  arm64/cpufeature: Add get_arm64_ftr_reg_nowarn()
  ACPI/IORT: Remove the unused __get_pci_rid()
  arm64/cpuinfo: Add ID_MMFR4_EL1 into the cpuinfo_arm64 context
  arm64/cpufeature: Add remaining feature bits in ID_AA64PFR1 register
  arm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register
  arm64/cpufeature: Add remaining feature bits in ID_AA64ISAR0 register
  arm64/cpufeature: Add remaining feature bits in ID_MMFR4 register
  arm64/cpufeature: Add remaining feature bits in ID_PFR0 register
  arm64/cpufeature: Introduce ID_MMFR5 CPU register
  arm64/cpufeature: Introduce ID_DFR1 CPU register
  arm64/cpufeature: Introduce ID_PFR2 CPU register
  arm64/cpufeature: Make doublelock a signed feature in ID_AA64DFR0
  arm64/cpufeature: Drop TraceFilt feature exposure from ID_DFR0 register
  arm64/cpufeature: Add explicit ftr_id_isar0[] for ID_ISAR0 register
  arm64: mm: Add asid_gen_match() helper
  firmware: smccc: Fix missing prototype warning for arm_smccc_version_init
  arm64: vdso: Fix CFI directives in sigreturn trampoline
  arm64: vdso: Don't prefix sigreturn trampoline with a BTI C instruction
  ...
commit 533b220f7b
Author: Linus Torvalds
Date:   2020-06-01 15:18:27 -07:00
 159 files changed, 2561 insertions(+), 978 deletions(-)

View File

@ -393,6 +393,12 @@ KERNELOFFSET
The kernel randomization offset. Used to compute the page offset. If The kernel randomization offset. Used to compute the page offset. If
KASLR is disabled, this value is zero. KASLR is disabled, this value is zero.
KERNELPACMASK
-------------
The mask to extract the Pointer Authentication Code from a kernel virtual
address.
arm arm
=== ===

View File

@ -173,7 +173,8 @@ Before jumping into the kernel, the following conditions must be met:
- Caches, MMUs - Caches, MMUs
The MMU must be off. The MMU must be off.
Instruction cache may be on or off. The instruction cache may be on or off, and must not hold any stale
entries corresponding to the loaded kernel image.
The address range corresponding to the loaded kernel image must be The address range corresponding to the loaded kernel image must be
cleaned to the PoC. In the presence of a system cache or other cleaned to the PoC. In the presence of a system cache or other
coherent masters with caches enabled, this will typically require coherent masters with caches enabled, this will typically require

View File

@ -176,6 +176,8 @@ infrastructure:
+------------------------------+---------+---------+ +------------------------------+---------+---------+
| SSBS | [7-4] | y | | SSBS | [7-4] | y |
+------------------------------+---------+---------+ +------------------------------+---------+---------+
| BT | [3-0] | y |
+------------------------------+---------+---------+
4) MIDR_EL1 - Main ID Register 4) MIDR_EL1 - Main ID Register

View File

@ -236,6 +236,11 @@ HWCAP2_RNG
Functionality implied by ID_AA64ISAR0_EL1.RNDR == 0b0001. Functionality implied by ID_AA64ISAR0_EL1.RNDR == 0b0001.
HWCAP2_BTI
Functionality implied by ID_AA64PFR0_EL1.BT == 0b0001.
4. Unused AT_HWCAP bits 4. Unused AT_HWCAP bits
----------------------- -----------------------

View File

@ -64,6 +64,10 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+ +----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A53 | #843419 | ARM64_ERRATUM_843419 | | ARM | Cortex-A53 | #843419 | ARM64_ERRATUM_843419 |
+----------------+-----------------+-----------------+-----------------------------+ +----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A55 | #1024718 | ARM64_ERRATUM_1024718 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A55 | #1530923 | ARM64_ERRATUM_1530923 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A57 | #832075 | ARM64_ERRATUM_832075 | | ARM | Cortex-A57 | #832075 | ARM64_ERRATUM_832075 |
+----------------+-----------------+-----------------+-----------------------------+ +----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A57 | #852523 | N/A | | ARM | Cortex-A57 | #852523 | N/A |
@ -78,8 +82,6 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+ +----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A73 | #858921 | ARM64_ERRATUM_858921 | | ARM | Cortex-A73 | #858921 | ARM64_ERRATUM_858921 |
+----------------+-----------------+-----------------+-----------------------------+ +----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A55 | #1024718 | ARM64_ERRATUM_1024718 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A76 | #1188873,1418040| ARM64_ERRATUM_1418040 | | ARM | Cortex-A76 | #1188873,1418040| ARM64_ERRATUM_1418040 |
+----------------+-----------------+-----------------+-----------------------------+ +----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A76 | #1165522 | ARM64_ERRATUM_1165522 | | ARM | Cortex-A76 | #1165522 | ARM64_ERRATUM_1165522 |
@ -88,8 +90,6 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+ +----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A76 | #1463225 | ARM64_ERRATUM_1463225 | | ARM | Cortex-A76 | #1463225 | ARM64_ERRATUM_1463225 |
+----------------+-----------------+-----------------+-----------------------------+ +----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A55 | #1530923 | ARM64_ERRATUM_1530923 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Neoverse-N1 | #1188873,1418040| ARM64_ERRATUM_1418040 | | ARM | Neoverse-N1 | #1188873,1418040| ARM64_ERRATUM_1418040 |
+----------------+-----------------+-----------------+-----------------------------+ +----------------+-----------------+-----------------+-----------------------------+
| ARM | Neoverse-N1 | #1349291 | N/A | | ARM | Neoverse-N1 | #1349291 | N/A |

View File

@ -543,6 +543,7 @@ encoded manner. The codes are the following:
hg huge page advise flag hg huge page advise flag
nh no huge page advise flag nh no huge page advise flag
mg mergable advise flag mg mergable advise flag
bt - arm64 BTI guarded page
== ======================================= == =======================================
Note that there is no guarantee that every flag and associated mnemonic will Note that there is no guarantee that every flag and associated mnemonic will

View File

@ -15518,6 +15518,15 @@ M: Nicolas Pitre <nico@fluxnic.net>
S: Odd Fixes S: Odd Fixes
F: drivers/net/ethernet/smsc/smc91x.* F: drivers/net/ethernet/smsc/smc91x.*
SECURE MONITOR CALL(SMC) CALLING CONVENTION (SMCCC)
M: Mark Rutland <mark.rutland@arm.com>
M: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
M: Sudeep Holla <sudeep.holla@arm.com>
L: linux-arm-kernel@lists.infradead.org
S: Maintained
F: drivers/firmware/smccc/
F: include/linux/arm-smccc.h
SMIA AND SMIA++ IMAGE SENSOR DRIVER SMIA AND SMIA++ IMAGE SENSOR DRIVER
M: Sakari Ailus <sakari.ailus@linux.intel.com> M: Sakari Ailus <sakari.ailus@linux.intel.com>
L: linux-media@vger.kernel.org L: linux-media@vger.kernel.org

View File

@ -862,6 +862,12 @@ ifdef CONFIG_LIVEPATCH
KBUILD_CFLAGS += $(call cc-option, -flive-patching=inline-clone) KBUILD_CFLAGS += $(call cc-option, -flive-patching=inline-clone)
endif endif
ifdef CONFIG_SHADOW_CALL_STACK
CC_FLAGS_SCS := -fsanitize=shadow-call-stack
KBUILD_CFLAGS += $(CC_FLAGS_SCS)
export CC_FLAGS_SCS
endif
# arch Makefile may override CC so keep this after arch Makefile is included # arch Makefile may override CC so keep this after arch Makefile is included
NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include) NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)

View File

@ -533,6 +533,31 @@ config STACKPROTECTOR_STRONG
about 20% of all kernel functions, which increases the kernel code about 20% of all kernel functions, which increases the kernel code
size by about 2%. size by about 2%.
config ARCH_SUPPORTS_SHADOW_CALL_STACK
bool
help
An architecture should select this if it supports Clang's Shadow
Call Stack and implements runtime support for shadow stack
switching.
config SHADOW_CALL_STACK
bool "Clang Shadow Call Stack"
depends on CC_IS_CLANG && ARCH_SUPPORTS_SHADOW_CALL_STACK
depends on DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER
help
This option enables Clang's Shadow Call Stack, which uses a
shadow stack to protect function return addresses from being
overwritten by an attacker. More information can be found in
Clang's documentation:
https://clang.llvm.org/docs/ShadowCallStack.html
Note that security guarantees in the kernel differ from the
ones documented for user space. The kernel must store addresses
of shadow stacks in memory, which means an attacker capable of
reading and writing arbitrary memory may be able to locate them
and hijack control flow by modifying the stacks.
config HAVE_ARCH_WITHIN_STACK_FRAMES config HAVE_ARCH_WITHIN_STACK_FRAMES
bool bool
help help

View File

@ -9,6 +9,7 @@ config ARM64
select ACPI_MCFG if (ACPI && PCI) select ACPI_MCFG if (ACPI && PCI)
select ACPI_SPCR_TABLE if ACPI select ACPI_SPCR_TABLE if ACPI
select ACPI_PPTT if ACPI select ACPI_PPTT if ACPI
select ARCH_BINFMT_ELF_STATE
select ARCH_HAS_DEBUG_VIRTUAL select ARCH_HAS_DEBUG_VIRTUAL
select ARCH_HAS_DEVMEM_IS_ALLOWED select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_DMA_PREP_COHERENT select ARCH_HAS_DMA_PREP_COHERENT
@ -33,6 +34,7 @@ config ARM64
select ARCH_HAS_SYSCALL_WRAPPER select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_TEARDOWN_DMA_OPS if IOMMU_SUPPORT select ARCH_HAS_TEARDOWN_DMA_OPS if IOMMU_SUPPORT
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAVE_ELF_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_INLINE_READ_LOCK if !PREEMPTION select ARCH_INLINE_READ_LOCK if !PREEMPTION
select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
@ -62,9 +64,12 @@ config ARM64
select ARCH_INLINE_SPIN_UNLOCK_IRQRESTORE if !PREEMPTION select ARCH_INLINE_SPIN_UNLOCK_IRQRESTORE if !PREEMPTION
select ARCH_KEEP_MEMBLOCK select ARCH_KEEP_MEMBLOCK
select ARCH_USE_CMPXCHG_LOCKREF select ARCH_USE_CMPXCHG_LOCKREF
select ARCH_USE_GNU_PROPERTY
select ARCH_USE_QUEUED_RWLOCKS select ARCH_USE_QUEUED_RWLOCKS
select ARCH_USE_QUEUED_SPINLOCKS select ARCH_USE_QUEUED_SPINLOCKS
select ARCH_USE_SYM_ANNOTATIONS
select ARCH_SUPPORTS_MEMORY_FAILURE select ARCH_SUPPORTS_MEMORY_FAILURE
select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK
select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && (GCC_VERSION >= 50000 || CC_IS_CLANG) select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && (GCC_VERSION >= 50000 || CC_IS_CLANG)
select ARCH_SUPPORTS_NUMA_BALANCING select ARCH_SUPPORTS_NUMA_BALANCING
@ -525,13 +530,13 @@ config ARM64_ERRATUM_1418040
If unsure, say Y. If unsure, say Y.
config ARM64_WORKAROUND_SPECULATIVE_AT_VHE config ARM64_WORKAROUND_SPECULATIVE_AT
bool bool
config ARM64_ERRATUM_1165522 config ARM64_ERRATUM_1165522
bool "Cortex-A76: Speculative AT instruction using out-of-context translation regime could cause subsequent request to generate an incorrect translation" bool "Cortex-A76: 1165522: Speculative AT instruction using out-of-context translation regime could cause subsequent request to generate an incorrect translation"
default y default y
select ARM64_WORKAROUND_SPECULATIVE_AT_VHE select ARM64_WORKAROUND_SPECULATIVE_AT
help help
This option adds a workaround for ARM Cortex-A76 erratum 1165522. This option adds a workaround for ARM Cortex-A76 erratum 1165522.
@ -541,10 +546,23 @@ config ARM64_ERRATUM_1165522
If unsure, say Y. If unsure, say Y.
config ARM64_ERRATUM_1530923 config ARM64_ERRATUM_1319367
bool "Cortex-A55: Speculative AT instruction using out-of-context translation regime could cause subsequent request to generate an incorrect translation" bool "Cortex-A57/A72: 1319537: Speculative AT instruction using out-of-context translation regime could cause subsequent request to generate an incorrect translation"
default y default y
select ARM64_WORKAROUND_SPECULATIVE_AT_VHE select ARM64_WORKAROUND_SPECULATIVE_AT
help
This option adds work arounds for ARM Cortex-A57 erratum 1319537
and A72 erratum 1319367
Cortex-A57 and A72 cores could end-up with corrupted TLBs by
speculating an AT instruction during a guest context switch.
If unsure, say Y.
config ARM64_ERRATUM_1530923
bool "Cortex-A55: 1530923: Speculative AT instruction using out-of-context translation regime could cause subsequent request to generate an incorrect translation"
default y
select ARM64_WORKAROUND_SPECULATIVE_AT
help help
This option adds a workaround for ARM Cortex-A55 erratum 1530923. This option adds a workaround for ARM Cortex-A55 erratum 1530923.
@ -554,6 +572,9 @@ config ARM64_ERRATUM_1530923
If unsure, say Y. If unsure, say Y.
config ARM64_WORKAROUND_REPEAT_TLBI
bool
config ARM64_ERRATUM_1286807 config ARM64_ERRATUM_1286807
bool "Cortex-A76: Modification of the translation table for a virtual address might lead to read-after-read ordering violation" bool "Cortex-A76: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
default y default y
@ -570,22 +591,6 @@ config ARM64_ERRATUM_1286807
invalidated has been observed by other observers. The invalidated has been observed by other observers. The
workaround repeats the TLBI+DSB operation. workaround repeats the TLBI+DSB operation.
config ARM64_WORKAROUND_SPECULATIVE_AT_NVHE
bool
config ARM64_ERRATUM_1319367
bool "Cortex-A57/A72: Speculative AT instruction using out-of-context translation regime could cause subsequent request to generate an incorrect translation"
default y
select ARM64_WORKAROUND_SPECULATIVE_AT_NVHE
help
This option adds work arounds for ARM Cortex-A57 erratum 1319537
and A72 erratum 1319367
Cortex-A57 and A72 cores could end-up with corrupted TLBs by
speculating an AT instruction during a guest context switch.
If unsure, say Y.
config ARM64_ERRATUM_1463225 config ARM64_ERRATUM_1463225
bool "Cortex-A76: Software Step might prevent interrupt recognition" bool "Cortex-A76: Software Step might prevent interrupt recognition"
default y default y
@ -695,6 +700,35 @@ config CAVIUM_TX2_ERRATUM_219
If unsure, say Y. If unsure, say Y.
config FUJITSU_ERRATUM_010001
bool "Fujitsu-A64FX erratum E#010001: Undefined fault may occur wrongly"
default y
help
This option adds a workaround for Fujitsu-A64FX erratum E#010001.
On some variants of the Fujitsu-A64FX cores ver(1.0, 1.1), memory
accesses may cause undefined fault (Data abort, DFSC=0b111111).
This fault occurs under a specific hardware condition when a
load/store instruction performs an address translation using:
case-1 TTBR0_EL1 with TCR_EL1.NFD0 == 1.
case-2 TTBR0_EL2 with TCR_EL2.NFD0 == 1.
case-3 TTBR1_EL1 with TCR_EL1.NFD1 == 1.
case-4 TTBR1_EL2 with TCR_EL2.NFD1 == 1.
The workaround is to ensure these bits are clear in TCR_ELx.
The workaround only affects the Fujitsu-A64FX.
If unsure, say Y.
config HISILICON_ERRATUM_161600802
bool "Hip07 161600802: Erroneous redistributor VLPI base"
default y
help
The HiSilicon Hip07 SoC uses the wrong redistributor base
when issued ITS commands such as VMOVP and VMAPP, and requires
a 128kB offset to be applied to the target address in this commands.
If unsure, say Y.
config QCOM_FALKOR_ERRATUM_1003 config QCOM_FALKOR_ERRATUM_1003
bool "Falkor E1003: Incorrect translation due to ASID change" bool "Falkor E1003: Incorrect translation due to ASID change"
default y default y
@ -706,9 +740,6 @@ config QCOM_FALKOR_ERRATUM_1003
is unchanged. Work around the erratum by invalidating the walk cache is unchanged. Work around the erratum by invalidating the walk cache
entries for the trampoline before entering the kernel proper. entries for the trampoline before entering the kernel proper.
config ARM64_WORKAROUND_REPEAT_TLBI
bool
config QCOM_FALKOR_ERRATUM_1009 config QCOM_FALKOR_ERRATUM_1009
bool "Falkor E1009: Prematurely complete a DSB after a TLBI" bool "Falkor E1009: Prematurely complete a DSB after a TLBI"
default y default y
@ -730,25 +761,6 @@ config QCOM_QDF2400_ERRATUM_0065
If unsure, say Y. If unsure, say Y.
config SOCIONEXT_SYNQUACER_PREITS
bool "Socionext Synquacer: Workaround for GICv3 pre-ITS"
default y
help
Socionext Synquacer SoCs implement a separate h/w block to generate
MSI doorbell writes with non-zero values for the device ID.
If unsure, say Y.
config HISILICON_ERRATUM_161600802
bool "Hip07 161600802: Erroneous redistributor VLPI base"
default y
help
The HiSilicon Hip07 SoC uses the wrong redistributor base
when issued ITS commands such as VMOVP and VMAPP, and requires
a 128kB offset to be applied to the target address in this commands.
If unsure, say Y.
config QCOM_FALKOR_ERRATUM_E1041 config QCOM_FALKOR_ERRATUM_E1041
bool "Falkor E1041: Speculative instruction fetches might cause errant memory access" bool "Falkor E1041: Speculative instruction fetches might cause errant memory access"
default y default y
@ -759,22 +771,12 @@ config QCOM_FALKOR_ERRATUM_E1041
If unsure, say Y. If unsure, say Y.
config FUJITSU_ERRATUM_010001 config SOCIONEXT_SYNQUACER_PREITS
bool "Fujitsu-A64FX erratum E#010001: Undefined fault may occur wrongly" bool "Socionext Synquacer: Workaround for GICv3 pre-ITS"
default y default y
help help
This option adds a workaround for Fujitsu-A64FX erratum E#010001. Socionext Synquacer SoCs implement a separate h/w block to generate
On some variants of the Fujitsu-A64FX cores ver(1.0, 1.1), memory MSI doorbell writes with non-zero values for the device ID.
accesses may cause undefined fault (Data abort, DFSC=0b111111).
This fault occurs under a specific hardware condition when a
load/store instruction performs an address translation using:
case-1 TTBR0_EL1 with TCR_EL1.NFD0 == 1.
case-2 TTBR0_EL2 with TCR_EL2.NFD0 == 1.
case-3 TTBR1_EL1 with TCR_EL1.NFD1 == 1.
case-4 TTBR1_EL2 with TCR_EL2.NFD1 == 1.
The workaround is to ensure these bits are clear in TCR_ELx.
The workaround only affects the Fujitsu-A64FX.
If unsure, say Y. If unsure, say Y.
@ -1026,6 +1028,10 @@ config ARCH_HAS_CACHE_LINE_SIZE
config ARCH_ENABLE_SPLIT_PMD_PTLOCK config ARCH_ENABLE_SPLIT_PMD_PTLOCK
def_bool y if PGTABLE_LEVELS > 2 def_bool y if PGTABLE_LEVELS > 2
# Supported by clang >= 7.0
config CC_HAVE_SHADOW_CALL_STACK
def_bool $(cc-option, -fsanitize=shadow-call-stack -ffixed-x18)
config SECCOMP config SECCOMP
bool "Enable seccomp to safely compute untrusted bytecode" bool "Enable seccomp to safely compute untrusted bytecode"
---help--- ---help---
@ -1585,6 +1591,48 @@ endmenu
menu "ARMv8.5 architectural features" menu "ARMv8.5 architectural features"
config ARM64_BTI
bool "Branch Target Identification support"
default y
help
Branch Target Identification (part of the ARMv8.5 Extensions)
provides a mechanism to limit the set of locations to which computed
branch instructions such as BR or BLR can jump.
To make use of BTI on CPUs that support it, say Y.
BTI is intended to provide complementary protection to other control
flow integrity protection mechanisms, such as the Pointer
authentication mechanism provided as part of the ARMv8.3 Extensions.
For this reason, it does not make sense to enable this option without
also enabling support for pointer authentication. Thus, when
enabling this option you should also select ARM64_PTR_AUTH=y.
Userspace binaries must also be specifically compiled to make use of
this mechanism. If you say N here or the hardware does not support
BTI, such binaries can still run, but you get no additional
enforcement of branch destinations.
config ARM64_BTI_KERNEL
bool "Use Branch Target Identification for kernel"
default y
depends on ARM64_BTI
depends on ARM64_PTR_AUTH
depends on CC_HAS_BRANCH_PROT_PAC_RET_BTI
# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697
depends on !CC_IS_GCC || GCC_VERSION >= 100100
depends on !(CC_IS_CLANG && GCOV_KERNEL)
depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)
help
Build the kernel with Branch Target Identification annotations
and enable enforcement of this for kernel code. When this option
is enabled and the system supports BTI all kernel code including
modular code must have BTI enabled.
config CC_HAS_BRANCH_PROT_PAC_RET_BTI
# GCC 9 or later, clang 8 or later
def_bool $(cc-option,-mbranch-protection=pac-ret+leaf+bti)
config ARM64_E0PD config ARM64_E0PD
bool "Enable support for E0PD" bool "Enable support for E0PD"
default y default y

View File

@ -12,7 +12,6 @@
LDFLAGS_vmlinux :=--no-undefined -X LDFLAGS_vmlinux :=--no-undefined -X
CPPFLAGS_vmlinux.lds = -DTEXT_OFFSET=$(TEXT_OFFSET) CPPFLAGS_vmlinux.lds = -DTEXT_OFFSET=$(TEXT_OFFSET)
GZFLAGS :=-9
ifeq ($(CONFIG_RELOCATABLE), y) ifeq ($(CONFIG_RELOCATABLE), y)
# Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour # Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour
@ -71,7 +70,14 @@ branch-prot-flags-y += $(call cc-option,-mbranch-protection=none)
ifeq ($(CONFIG_ARM64_PTR_AUTH),y) ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
# We enable additional protection for leaf functions as there is some
# narrow potential for ROP protection benefits and no substantial
# performance impact has been observed.
ifeq ($(CONFIG_ARM64_BTI_KERNEL),y)
branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET_BTI) := -mbranch-protection=pac-ret+leaf+bti
else
branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
endif
# -march=armv8.3-a enables the non-nops instructions for PAC, to avoid the # -march=armv8.3-a enables the non-nops instructions for PAC, to avoid the
# compiler to generate them and consequently to break the single image contract # compiler to generate them and consequently to break the single image contract
# we pass it only to the assembler. This option is utilized only in case of non # we pass it only to the assembler. This option is utilized only in case of non
@ -81,6 +87,10 @@ endif
KBUILD_CFLAGS += $(branch-prot-flags-y) KBUILD_CFLAGS += $(branch-prot-flags-y)
ifeq ($(CONFIG_SHADOW_CALL_STACK), y)
KBUILD_CFLAGS += -ffixed-x18
endif
ifeq ($(CONFIG_CPU_BIG_ENDIAN), y) ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
KBUILD_CPPFLAGS += -mbig-endian KBUILD_CPPFLAGS += -mbig-endian
CHECKFLAGS += -D__AARCH64EB__ CHECKFLAGS += -D__AARCH64EB__
@ -118,7 +128,7 @@ TEXT_OFFSET := $(shell awk "BEGIN {srand(); printf \"0x%06x\n\", \
int(2 * 1024 * 1024 / (2 ^ $(CONFIG_ARM64_PAGE_SHIFT)) * \ int(2 * 1024 * 1024 / (2 ^ $(CONFIG_ARM64_PAGE_SHIFT)) * \
rand()) * (2 ^ $(CONFIG_ARM64_PAGE_SHIFT))}") rand()) * (2 ^ $(CONFIG_ARM64_PAGE_SHIFT))}")
else else
TEXT_OFFSET := 0x00080000 TEXT_OFFSET := 0x0
endif endif
ifeq ($(CONFIG_KASAN_SW_TAGS), y) ifeq ($(CONFIG_KASAN_SW_TAGS), y)
@ -131,7 +141,7 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT) KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT) KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
export TEXT_OFFSET GZFLAGS export TEXT_OFFSET
core-y += arch/arm64/ core-y += arch/arm64/
libs-y := arch/arm64/lib/ $(libs-y) libs-y := arch/arm64/lib/ $(libs-y)

View File

@ -39,25 +39,58 @@ alternative_if ARM64_HAS_GENERIC_AUTH
alternative_else_nop_endif alternative_else_nop_endif
.endm .endm
.macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3 .macro __ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3
alternative_if ARM64_HAS_ADDRESS_AUTH
mov \tmp1, #THREAD_KEYS_KERNEL mov \tmp1, #THREAD_KEYS_KERNEL
add \tmp1, \tsk, \tmp1 add \tmp1, \tsk, \tmp1
ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA] ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
msr_s SYS_APIAKEYLO_EL1, \tmp2 msr_s SYS_APIAKEYLO_EL1, \tmp2
msr_s SYS_APIAKEYHI_EL1, \tmp3 msr_s SYS_APIAKEYHI_EL1, \tmp3
.if \sync == 1 .endm
isb
.endif .macro ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3
alternative_if ARM64_HAS_ADDRESS_AUTH
__ptrauth_keys_install_kernel_nosync \tsk, \tmp1, \tmp2, \tmp3
alternative_else_nop_endif alternative_else_nop_endif
.endm .endm
.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
alternative_if ARM64_HAS_ADDRESS_AUTH
__ptrauth_keys_install_kernel_nosync \tsk, \tmp1, \tmp2, \tmp3
isb
alternative_else_nop_endif
.endm
.macro __ptrauth_keys_init_cpu tsk, tmp1, tmp2, tmp3
mrs \tmp1, id_aa64isar1_el1
ubfx \tmp1, \tmp1, #ID_AA64ISAR1_APA_SHIFT, #8
cbz \tmp1, .Lno_addr_auth\@
mov_q \tmp1, (SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
SCTLR_ELx_ENDA | SCTLR_ELx_ENDB)
mrs \tmp2, sctlr_el1
orr \tmp2, \tmp2, \tmp1
msr sctlr_el1, \tmp2
__ptrauth_keys_install_kernel_nosync \tsk, \tmp1, \tmp2, \tmp3
isb
.Lno_addr_auth\@:
.endm
.macro ptrauth_keys_init_cpu tsk, tmp1, tmp2, tmp3
alternative_if_not ARM64_HAS_ADDRESS_AUTH
b .Lno_addr_auth\@
alternative_else_nop_endif
__ptrauth_keys_init_cpu \tsk, \tmp1, \tmp2, \tmp3
.Lno_addr_auth\@:
.endm
#else /* CONFIG_ARM64_PTR_AUTH */ #else /* CONFIG_ARM64_PTR_AUTH */
.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3 .macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
.endm .endm
.macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3 .macro ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3
.endm
.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
.endm .endm
#endif /* CONFIG_ARM64_PTR_AUTH */ #endif /* CONFIG_ARM64_PTR_AUTH */

View File

@ -736,4 +736,54 @@ USER(\label, ic ivau, \tmp2) // invalidate I line PoU
.Lyield_out_\@ : .Lyield_out_\@ :
.endm .endm
/*
* This macro emits a program property note section identifying
* architecture features which require special handling, mainly for
* use in assembly files included in the VDSO.
*/
#define NT_GNU_PROPERTY_TYPE_0 5
#define GNU_PROPERTY_AARCH64_FEATURE_1_AND 0xc0000000
#define GNU_PROPERTY_AARCH64_FEATURE_1_BTI (1U << 0)
#define GNU_PROPERTY_AARCH64_FEATURE_1_PAC (1U << 1)
#ifdef CONFIG_ARM64_BTI_KERNEL
#define GNU_PROPERTY_AARCH64_FEATURE_1_DEFAULT \
((GNU_PROPERTY_AARCH64_FEATURE_1_BTI | \
GNU_PROPERTY_AARCH64_FEATURE_1_PAC))
#endif
#ifdef GNU_PROPERTY_AARCH64_FEATURE_1_DEFAULT
.macro emit_aarch64_feature_1_and, feat=GNU_PROPERTY_AARCH64_FEATURE_1_DEFAULT
.pushsection .note.gnu.property, "a"
.align 3
.long 2f - 1f
.long 6f - 3f
.long NT_GNU_PROPERTY_TYPE_0
1: .string "GNU"
2:
.align 3
3: .long GNU_PROPERTY_AARCH64_FEATURE_1_AND
.long 5f - 4f
4:
/*
* This is described with an array of char in the Linux API
* spec but the text and all other usage (including binutils,
* clang and GCC) treat this as a 32 bit value so no swizzling
* is required for big endian.
*/
.long \feat
5:
.align 3
6:
.popsection
.endm
#else
.macro emit_aarch64_feature_1_and, feat=0
.endm
#endif /* GNU_PROPERTY_AARCH64_FEATURE_1_DEFAULT */
#endif /* __ASM_ASSEMBLER_H */ #endif /* __ASM_ASSEMBLER_H */

View File

@ -79,7 +79,7 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
* IPI all online CPUs so that they undergo a context synchronization * IPI all online CPUs so that they undergo a context synchronization
* event and are forced to refetch the new instructions. * event and are forced to refetch the new instructions.
*/ */
#ifdef CONFIG_KGDB
/* /*
* KGDB performs cache maintenance with interrupts disabled, so we * KGDB performs cache maintenance with interrupts disabled, so we
* will deadlock trying to IPI the secondary CPUs. In theory, we can * will deadlock trying to IPI the secondary CPUs. In theory, we can
@ -89,9 +89,9 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
* the patching operation, so we don't need extra IPIs here anyway. * the patching operation, so we don't need extra IPIs here anyway.
* In which case, add a KGDB-specific bodge and return early. * In which case, add a KGDB-specific bodge and return early.
*/ */
if (kgdb_connected && irqs_disabled()) if (in_dbg_master())
return; return;
#endif
kick_all_cpus_sync(); kick_all_cpus_sync();
} }

View File

@ -2,8 +2,6 @@
#ifndef __ASM_COMPILER_H #ifndef __ASM_COMPILER_H
#define __ASM_COMPILER_H #define __ASM_COMPILER_H
#if defined(CONFIG_ARM64_PTR_AUTH)
/* /*
* The EL0/EL1 pointer bits used by a pointer authentication code. * The EL0/EL1 pointer bits used by a pointer authentication code.
* This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply. * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
@ -19,6 +17,4 @@
#define __builtin_return_address(val) \ #define __builtin_return_address(val) \
(void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val))) (void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
#endif /* CONFIG_ARM64_PTR_AUTH */
#endif /* __ASM_COMPILER_H */ #endif /* __ASM_COMPILER_H */

View File

@ -33,6 +33,7 @@ struct cpuinfo_arm64 {
u64 reg_id_aa64zfr0; u64 reg_id_aa64zfr0;
u32 reg_id_dfr0; u32 reg_id_dfr0;
u32 reg_id_dfr1;
u32 reg_id_isar0; u32 reg_id_isar0;
u32 reg_id_isar1; u32 reg_id_isar1;
u32 reg_id_isar2; u32 reg_id_isar2;
@ -44,8 +45,11 @@ struct cpuinfo_arm64 {
u32 reg_id_mmfr1; u32 reg_id_mmfr1;
u32 reg_id_mmfr2; u32 reg_id_mmfr2;
u32 reg_id_mmfr3; u32 reg_id_mmfr3;
u32 reg_id_mmfr4;
u32 reg_id_mmfr5;
u32 reg_id_pfr0; u32 reg_id_pfr0;
u32 reg_id_pfr1; u32 reg_id_pfr1;
u32 reg_id_pfr2;
u32 reg_mvfr0; u32 reg_mvfr0;
u32 reg_mvfr1; u32 reg_mvfr1;

View File

@ -44,7 +44,7 @@
#define ARM64_SSBS 34 #define ARM64_SSBS 34
#define ARM64_WORKAROUND_1418040 35 #define ARM64_WORKAROUND_1418040 35
#define ARM64_HAS_SB 36 #define ARM64_HAS_SB 36
#define ARM64_WORKAROUND_SPECULATIVE_AT_VHE 37 #define ARM64_WORKAROUND_SPECULATIVE_AT 37
#define ARM64_HAS_ADDRESS_AUTH_ARCH 38 #define ARM64_HAS_ADDRESS_AUTH_ARCH 38
#define ARM64_HAS_ADDRESS_AUTH_IMP_DEF 39 #define ARM64_HAS_ADDRESS_AUTH_IMP_DEF 39
#define ARM64_HAS_GENERIC_AUTH_ARCH 40 #define ARM64_HAS_GENERIC_AUTH_ARCH 40
@ -55,13 +55,14 @@
#define ARM64_WORKAROUND_CAVIUM_TX2_219_TVM 45 #define ARM64_WORKAROUND_CAVIUM_TX2_219_TVM 45
#define ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM 46 #define ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM 46
#define ARM64_WORKAROUND_1542419 47 #define ARM64_WORKAROUND_1542419 47
#define ARM64_WORKAROUND_SPECULATIVE_AT_NVHE 48 #define ARM64_HAS_E0PD 48
#define ARM64_HAS_E0PD 49 #define ARM64_HAS_RNG 49
#define ARM64_HAS_RNG 50 #define ARM64_HAS_AMU_EXTN 50
#define ARM64_HAS_AMU_EXTN 51 #define ARM64_HAS_ADDRESS_AUTH 51
#define ARM64_HAS_ADDRESS_AUTH 52 #define ARM64_HAS_GENERIC_AUTH 52
#define ARM64_HAS_GENERIC_AUTH 53 #define ARM64_HAS_32BIT_EL1 53
#define ARM64_BTI 54
#define ARM64_NCAPS 54 #define ARM64_NCAPS 55
#endif /* __ASM_CPUCAPS_H */ #endif /* __ASM_CPUCAPS_H */

View File

@ -551,6 +551,13 @@ static inline bool id_aa64mmfr0_mixed_endian_el0(u64 mmfr0)
cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_BIGENDEL0_SHIFT) == 0x1; cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_BIGENDEL0_SHIFT) == 0x1;
} }
static inline bool id_aa64pfr0_32bit_el1(u64 pfr0)
{
u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_SHIFT);
return val == ID_AA64PFR0_EL1_32BIT_64BIT;
}
static inline bool id_aa64pfr0_32bit_el0(u64 pfr0) static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
{ {
u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL0_SHIFT); u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL0_SHIFT);
@ -680,6 +687,11 @@ static inline bool system_has_prio_mask_debugging(void)
system_uses_irq_prio_masking(); system_uses_irq_prio_masking();
} }
static inline bool system_supports_bti(void)
{
return IS_ENABLED(CONFIG_ARM64_BTI) && cpus_have_const_cap(ARM64_BTI);
}
#define ARM64_BP_HARDEN_UNKNOWN -1 #define ARM64_BP_HARDEN_UNKNOWN -1
#define ARM64_BP_HARDEN_WA_NEEDED 0 #define ARM64_BP_HARDEN_WA_NEEDED 0
#define ARM64_BP_HARDEN_NOT_REQUIRED 1 #define ARM64_BP_HARDEN_NOT_REQUIRED 1
@ -745,6 +757,24 @@ static inline bool cpu_has_hw_af(void)
extern bool cpu_has_amu_feat(int cpu); extern bool cpu_has_amu_feat(int cpu);
#endif #endif
static inline unsigned int get_vmid_bits(u64 mmfr1)
{
int vmid_bits;
vmid_bits = cpuid_feature_extract_unsigned_field(mmfr1,
ID_AA64MMFR1_VMIDBITS_SHIFT);
if (vmid_bits == ID_AA64MMFR1_VMIDBITS_16)
return 16;
/*
* Return the default here even if any reserved
* value is fetched from the system register.
*/
return 8;
}
u32 get_kvm_ipa_limit(void);
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
#endif #endif

View File

@ -125,5 +125,7 @@ static inline int reinstall_suspended_bps(struct pt_regs *regs)
int aarch32_break_handler(struct pt_regs *regs); int aarch32_break_handler(struct pt_regs *regs);
void debug_traps_init(void);
#endif /* __ASSEMBLY */ #endif /* __ASSEMBLY */
#endif /* __ASM_DEBUG_MONITORS_H */ #endif /* __ASM_DEBUG_MONITORS_H */

View File

@ -114,7 +114,11 @@
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <uapi/linux/elf.h>
#include <linux/bug.h> #include <linux/bug.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/types.h>
#include <asm/processor.h> /* for signal_minsigstksz, used by ARCH_DLINFO */ #include <asm/processor.h> /* for signal_minsigstksz, used by ARCH_DLINFO */
typedef unsigned long elf_greg_t; typedef unsigned long elf_greg_t;
@ -224,6 +228,52 @@ extern int aarch32_setup_additional_pages(struct linux_binprm *bprm,
#endif /* CONFIG_COMPAT */ #endif /* CONFIG_COMPAT */
struct arch_elf_state {
int flags;
};
#define ARM64_ELF_BTI (1 << 0)
#define INIT_ARCH_ELF_STATE { \
.flags = 0, \
}
static inline int arch_parse_elf_property(u32 type, const void *data,
size_t datasz, bool compat,
struct arch_elf_state *arch)
{
/* No known properties for AArch32 yet */
if (IS_ENABLED(CONFIG_COMPAT) && compat)
return 0;
if (type == GNU_PROPERTY_AARCH64_FEATURE_1_AND) {
const u32 *p = data;
if (datasz != sizeof(*p))
return -ENOEXEC;
if (system_supports_bti() &&
(*p & GNU_PROPERTY_AARCH64_FEATURE_1_BTI))
arch->flags |= ARM64_ELF_BTI;
}
return 0;
}
static inline int arch_elf_pt_proc(void *ehdr, void *phdr,
struct file *f, bool is_interp,
struct arch_elf_state *state)
{
return 0;
}
static inline int arch_check_elf(void *ehdr, bool has_interp,
void *interp_ehdr,
struct arch_elf_state *state)
{
return 0;
}
#endif /* !__ASSEMBLY__ */ #endif /* !__ASSEMBLY__ */
#endif #endif

View File

@ -22,7 +22,7 @@
#define ESR_ELx_EC_PAC (0x09) /* EL2 and above */ #define ESR_ELx_EC_PAC (0x09) /* EL2 and above */
/* Unallocated EC: 0x0A - 0x0B */ /* Unallocated EC: 0x0A - 0x0B */
#define ESR_ELx_EC_CP14_64 (0x0C) #define ESR_ELx_EC_CP14_64 (0x0C)
/* Unallocated EC: 0x0d */ #define ESR_ELx_EC_BTI (0x0D)
#define ESR_ELx_EC_ILL (0x0E) #define ESR_ELx_EC_ILL (0x0E)
/* Unallocated EC: 0x0F - 0x10 */ /* Unallocated EC: 0x0F - 0x10 */
#define ESR_ELx_EC_SVC32 (0x11) #define ESR_ELx_EC_SVC32 (0x11)

View File

@ -34,6 +34,7 @@ static inline u32 disr_to_esr(u64 disr)
asmlinkage void enter_from_user_mode(void); asmlinkage void enter_from_user_mode(void);
void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs); void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
void do_undefinstr(struct pt_regs *regs); void do_undefinstr(struct pt_regs *regs);
void do_bti(struct pt_regs *regs);
asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr); asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr);
void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr, void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr,
struct pt_regs *regs); struct pt_regs *regs);

View File

@ -94,6 +94,7 @@
#define KERNEL_HWCAP_BF16 __khwcap2_feature(BF16) #define KERNEL_HWCAP_BF16 __khwcap2_feature(BF16)
#define KERNEL_HWCAP_DGH __khwcap2_feature(DGH) #define KERNEL_HWCAP_DGH __khwcap2_feature(DGH)
#define KERNEL_HWCAP_RNG __khwcap2_feature(RNG) #define KERNEL_HWCAP_RNG __khwcap2_feature(RNG)
#define KERNEL_HWCAP_BTI __khwcap2_feature(BTI)
/* /*
* This yields a mask that user programs can use to figure out what * This yields a mask that user programs can use to figure out what

View File

@ -39,13 +39,37 @@ enum aarch64_insn_encoding_class {
* system instructions */ * system instructions */
}; };
enum aarch64_insn_hint_op { enum aarch64_insn_hint_cr_op {
AARCH64_INSN_HINT_NOP = 0x0 << 5, AARCH64_INSN_HINT_NOP = 0x0 << 5,
AARCH64_INSN_HINT_YIELD = 0x1 << 5, AARCH64_INSN_HINT_YIELD = 0x1 << 5,
AARCH64_INSN_HINT_WFE = 0x2 << 5, AARCH64_INSN_HINT_WFE = 0x2 << 5,
AARCH64_INSN_HINT_WFI = 0x3 << 5, AARCH64_INSN_HINT_WFI = 0x3 << 5,
AARCH64_INSN_HINT_SEV = 0x4 << 5, AARCH64_INSN_HINT_SEV = 0x4 << 5,
AARCH64_INSN_HINT_SEVL = 0x5 << 5, AARCH64_INSN_HINT_SEVL = 0x5 << 5,
AARCH64_INSN_HINT_XPACLRI = 0x07 << 5,
AARCH64_INSN_HINT_PACIA_1716 = 0x08 << 5,
AARCH64_INSN_HINT_PACIB_1716 = 0x0A << 5,
AARCH64_INSN_HINT_AUTIA_1716 = 0x0C << 5,
AARCH64_INSN_HINT_AUTIB_1716 = 0x0E << 5,
AARCH64_INSN_HINT_PACIAZ = 0x18 << 5,
AARCH64_INSN_HINT_PACIASP = 0x19 << 5,
AARCH64_INSN_HINT_PACIBZ = 0x1A << 5,
AARCH64_INSN_HINT_PACIBSP = 0x1B << 5,
AARCH64_INSN_HINT_AUTIAZ = 0x1C << 5,
AARCH64_INSN_HINT_AUTIASP = 0x1D << 5,
AARCH64_INSN_HINT_AUTIBZ = 0x1E << 5,
AARCH64_INSN_HINT_AUTIBSP = 0x1F << 5,
AARCH64_INSN_HINT_ESB = 0x10 << 5,
AARCH64_INSN_HINT_PSB = 0x11 << 5,
AARCH64_INSN_HINT_TSB = 0x12 << 5,
AARCH64_INSN_HINT_CSDB = 0x14 << 5,
AARCH64_INSN_HINT_BTI = 0x20 << 5,
AARCH64_INSN_HINT_BTIC = 0x22 << 5,
AARCH64_INSN_HINT_BTIJ = 0x24 << 5,
AARCH64_INSN_HINT_BTIJC = 0x26 << 5,
}; };
enum aarch64_insn_imm_type { enum aarch64_insn_imm_type {
@ -344,7 +368,7 @@ __AARCH64_INSN_FUNCS(msr_reg, 0xFFF00000, 0xD5100000)
#undef __AARCH64_INSN_FUNCS #undef __AARCH64_INSN_FUNCS
bool aarch64_insn_is_nop(u32 insn); bool aarch64_insn_is_steppable_hint(u32 insn);
bool aarch64_insn_is_branch_imm(u32 insn); bool aarch64_insn_is_branch_imm(u32 insn);
static inline bool aarch64_insn_is_adr_adrp(u32 insn) static inline bool aarch64_insn_is_adr_adrp(u32 insn)
@ -370,7 +394,7 @@ u32 aarch64_insn_gen_comp_branch_imm(unsigned long pc, unsigned long addr,
enum aarch64_insn_branch_type type); enum aarch64_insn_branch_type type);
u32 aarch64_insn_gen_cond_branch_imm(unsigned long pc, unsigned long addr, u32 aarch64_insn_gen_cond_branch_imm(unsigned long pc, unsigned long addr,
enum aarch64_insn_condition cond); enum aarch64_insn_condition cond);
u32 aarch64_insn_gen_hint(enum aarch64_insn_hint_op op); u32 aarch64_insn_gen_hint(enum aarch64_insn_hint_cr_op op);
u32 aarch64_insn_gen_nop(void); u32 aarch64_insn_gen_nop(void);
u32 aarch64_insn_gen_branch_reg(enum aarch64_insn_register reg, u32 aarch64_insn_gen_branch_reg(enum aarch64_insn_register reg,
enum aarch64_insn_branch_type type); enum aarch64_insn_branch_type type);

View File

@ -507,10 +507,12 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
static __always_inline void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr) static __always_inline void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr)
{ {
if (vcpu_mode_is_32bit(vcpu)) if (vcpu_mode_is_32bit(vcpu)) {
kvm_skip_instr32(vcpu, is_wide_instr); kvm_skip_instr32(vcpu, is_wide_instr);
else } else {
*vcpu_pc(vcpu) += 4; *vcpu_pc(vcpu) += 4;
*vcpu_cpsr(vcpu) &= ~PSR_BTYPE_MASK;
}
/* advance the singlestep state machine */ /* advance the singlestep state machine */
*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS; *vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;

View File

@ -573,10 +573,6 @@ static inline bool kvm_arch_requires_vhe(void)
if (system_supports_sve()) if (system_supports_sve())
return true; return true;
/* Some implementations have defects that confine them to VHE */
if (cpus_have_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE))
return true;
return false; return false;
} }
@ -670,7 +666,7 @@ static inline int kvm_arm_have_ssbd(void)
void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu); void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu);
void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu); void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu);
void kvm_set_ipa_limit(void); int kvm_set_ipa_limit(void);
#define __KVM_HAVE_ARCH_VM_ALLOC #define __KVM_HAVE_ARCH_VM_ALLOC
struct kvm *kvm_arch_alloc_vm(void); struct kvm *kvm_arch_alloc_vm(void);

View File

@ -10,10 +10,9 @@
#include <linux/compiler.h> #include <linux/compiler.h>
#include <linux/kvm_host.h> #include <linux/kvm_host.h>
#include <asm/alternative.h> #include <asm/alternative.h>
#include <asm/kvm_mmu.h>
#include <asm/sysreg.h> #include <asm/sysreg.h>
#define __hyp_text __section(.hyp.text) notrace #define __hyp_text __section(.hyp.text) notrace __noscs
#define read_sysreg_elx(r,nvh,vh) \ #define read_sysreg_elx(r,nvh,vh) \
({ \ ({ \
@ -88,22 +87,5 @@ void deactivate_traps_vhe_put(void);
u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt); u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
void __noreturn __hyp_do_panic(unsigned long, ...); void __noreturn __hyp_do_panic(unsigned long, ...);
/*
* Must be called from hyp code running at EL2 with an updated VTTBR
* and interrupts disabled.
*/
static __always_inline void __hyp_text __load_guest_stage2(struct kvm *kvm)
{
write_sysreg(kvm->arch.vtcr, vtcr_el2);
write_sysreg(kvm_get_vttbr(kvm), vttbr_el2);
/*
* ARM errata 1165522 and 1530923 require the actual execution of the
* above before we can switch to the EL1/EL0 translation regime used by
* the guest.
*/
asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT_VHE));
}
#endif /* __ARM64_KVM_HYP_H__ */ #endif /* __ARM64_KVM_HYP_H__ */

View File

@ -416,7 +416,7 @@ static inline unsigned int kvm_get_vmid_bits(void)
{ {
int reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); int reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
return (cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR1_VMIDBITS_SHIFT) == 2) ? 16 : 8; return get_vmid_bits(reg);
} }
/* /*
@ -604,5 +604,22 @@ static __always_inline u64 kvm_get_vttbr(struct kvm *kvm)
return kvm_phys_to_vttbr(baddr) | vmid_field | cnp; return kvm_phys_to_vttbr(baddr) | vmid_field | cnp;
} }
/*
* Must be called from hyp code running at EL2 with an updated VTTBR
* and interrupts disabled.
*/
static __always_inline void __load_guest_stage2(struct kvm *kvm)
{
write_sysreg(kvm->arch.vtcr, vtcr_el2);
write_sysreg(kvm_get_vttbr(kvm), vttbr_el2);
/*
* ARM errata 1165522 and 1530923 require the actual execution of the
* above before we can switch to the EL1/EL0 translation regime used by
* the guest.
*/
asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
}
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
#endif /* __ARM64_KVM_MMU_H__ */ #endif /* __ARM64_KVM_MMU_H__ */

View File

@ -4,6 +4,52 @@
#define __ALIGN .align 2 #define __ALIGN .align 2
#define __ALIGN_STR ".align 2" #define __ALIGN_STR ".align 2"
#if defined(CONFIG_ARM64_BTI_KERNEL) && defined(__aarch64__)
/*
* Since current versions of gas reject the BTI instruction unless we
* set the architecture version to v8.5 we use the hint instruction
* instead.
*/
#define BTI_C hint 34 ;
#define BTI_J hint 36 ;
/*
* When using in-kernel BTI we need to ensure that PCS-conformant assembly
* functions have suitable annotations. Override SYM_FUNC_START to insert
* a BTI landing pad at the start of everything.
*/
#define SYM_FUNC_START(name) \
SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN) \
BTI_C
#define SYM_FUNC_START_NOALIGN(name) \
SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE) \
BTI_C
#define SYM_FUNC_START_LOCAL(name) \
SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN) \
BTI_C
#define SYM_FUNC_START_LOCAL_NOALIGN(name) \
SYM_START(name, SYM_L_LOCAL, SYM_A_NONE) \
BTI_C
#define SYM_FUNC_START_WEAK(name) \
SYM_START(name, SYM_L_WEAK, SYM_A_ALIGN) \
BTI_C
#define SYM_FUNC_START_WEAK_NOALIGN(name) \
SYM_START(name, SYM_L_WEAK, SYM_A_NONE) \
BTI_C
#define SYM_INNER_LABEL(name, linkage) \
.type name SYM_T_NONE ASM_NL \
SYM_ENTRY(name, linkage, SYM_A_NONE) \
BTI_J
#endif
/* /*
* Annotate a function as position independent, i.e., safe to be called before * Annotate a function as position independent, i.e., safe to be called before
* the kernel virtual mapping is activated. * the kernel virtual mapping is activated.

View File

@ -0,0 +1,37 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_MMAN_H__
#define __ASM_MMAN_H__
#include <linux/compiler.h>
#include <linux/types.h>
#include <uapi/asm/mman.h>
static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
unsigned long pkey __always_unused)
{
if (system_supports_bti() && (prot & PROT_BTI))
return VM_ARM64_BTI;
return 0;
}
#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
{
return (vm_flags & VM_ARM64_BTI) ? __pgprot(PTE_GP) : __pgprot(0);
}
#define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
static inline bool arch_validate_prot(unsigned long prot,
unsigned long addr __always_unused)
{
unsigned long supported = PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM;
if (system_supports_bti())
supported |= PROT_BTI;
return (prot & ~supported) == 0;
}
#define arch_validate_prot(prot, addr) arch_validate_prot(prot, addr)
#endif /* ! __ASM_MMAN_H__ */

View File

@ -151,6 +151,7 @@
#define PTE_SHARED (_AT(pteval_t, 3) << 8) /* SH[1:0], inner shareable */ #define PTE_SHARED (_AT(pteval_t, 3) << 8) /* SH[1:0], inner shareable */
#define PTE_AF (_AT(pteval_t, 1) << 10) /* Access Flag */ #define PTE_AF (_AT(pteval_t, 1) << 10) /* Access Flag */
#define PTE_NG (_AT(pteval_t, 1) << 11) /* nG */ #define PTE_NG (_AT(pteval_t, 1) << 11) /* nG */
#define PTE_GP (_AT(pteval_t, 1) << 50) /* BTI guarded */
#define PTE_DBM (_AT(pteval_t, 1) << 51) /* Dirty Bit Management */ #define PTE_DBM (_AT(pteval_t, 1) << 51) /* Dirty Bit Management */
#define PTE_CONT (_AT(pteval_t, 1) << 52) /* Contiguous range */ #define PTE_CONT (_AT(pteval_t, 1) << 52) /* Contiguous range */
#define PTE_PXN (_AT(pteval_t, 1) << 53) /* Privileged XN */ #define PTE_PXN (_AT(pteval_t, 1) << 53) /* Privileged XN */
@ -190,7 +191,6 @@
* Memory Attribute override for Stage-2 (MemAttr[3:0]) * Memory Attribute override for Stage-2 (MemAttr[3:0])
*/ */
#define PTE_S2_MEMATTR(t) (_AT(pteval_t, (t)) << 2) #define PTE_S2_MEMATTR(t) (_AT(pteval_t, (t)) << 2)
#define PTE_S2_MEMATTR_MASK (_AT(pteval_t, 0xf) << 2)
/* /*
* EL2/HYP PTE/PMD definitions * EL2/HYP PTE/PMD definitions

View File

@ -21,6 +21,7 @@
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <asm/cpufeature.h>
#include <asm/pgtable-types.h> #include <asm/pgtable-types.h>
extern bool arm64_use_ng_mappings; extern bool arm64_use_ng_mappings;
@ -31,6 +32,16 @@ extern bool arm64_use_ng_mappings;
#define PTE_MAYBE_NG (arm64_use_ng_mappings ? PTE_NG : 0) #define PTE_MAYBE_NG (arm64_use_ng_mappings ? PTE_NG : 0)
#define PMD_MAYBE_NG (arm64_use_ng_mappings ? PMD_SECT_NG : 0) #define PMD_MAYBE_NG (arm64_use_ng_mappings ? PMD_SECT_NG : 0)
/*
* If we have userspace only BTI we don't want to mark kernel pages
* guarded even if the system does support BTI.
*/
#ifdef CONFIG_ARM64_BTI_KERNEL
#define PTE_MAYBE_GP (system_supports_bti() ? PTE_GP : 0)
#else
#define PTE_MAYBE_GP 0
#endif
#define PROT_DEFAULT (_PROT_DEFAULT | PTE_MAYBE_NG) #define PROT_DEFAULT (_PROT_DEFAULT | PTE_MAYBE_NG)
#define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_MAYBE_NG) #define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_MAYBE_NG)

View File

@@ -457,6 +457,7 @@ extern pgd_t init_pg_dir[PTRS_PER_PGD];
 extern pgd_t init_pg_end[];
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
+extern pgd_t idmap_pg_end[];
 extern pgd_t tramp_pg_dir[PTRS_PER_PGD];
 
 extern void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd);
@@ -508,7 +509,7 @@ static inline void pte_unmap(pte_t *pte) { }
 #define pte_set_fixmap_offset(pmd, addr) pte_set_fixmap(pte_offset_phys(pmd, addr))
 #define pte_clear_fixmap() clear_fixmap(FIX_PTE)
 
-#define pmd_page(pmd) pfn_to_page(__phys_to_pfn(__pmd_to_phys(pmd)))
+#define pmd_page(pmd) phys_to_page(__pmd_to_phys(pmd))
 
 /* use ONLY for statically allocated translation tables */
 #define pte_offset_kimg(dir,addr) ((pte_t *)__phys_to_kimg(pte_offset_phys((dir), (addr))))
@@ -566,7 +567,7 @@ static inline phys_addr_t pud_page_paddr(pud_t pud)
 #define pmd_set_fixmap_offset(pud, addr) pmd_set_fixmap(pmd_offset_phys(pud, addr))
 #define pmd_clear_fixmap() clear_fixmap(FIX_PMD)
 
-#define pud_page(pud) pfn_to_page(__phys_to_pfn(__pud_to_phys(pud)))
+#define pud_page(pud) phys_to_page(__pud_to_phys(pud))
 
 /* use ONLY for statically allocated translation tables */
 #define pmd_offset_kimg(dir,addr) ((pmd_t *)__phys_to_kimg(pmd_offset_phys((dir), (addr))))
@@ -624,7 +625,7 @@ static inline phys_addr_t pgd_page_paddr(pgd_t pgd)
 #define pud_set_fixmap_offset(pgd, addr) pud_set_fixmap(pud_offset_phys(pgd, addr))
 #define pud_clear_fixmap() clear_fixmap(FIX_PUD)
 
-#define pgd_page(pgd) pfn_to_page(__phys_to_pfn(__pgd_to_phys(pgd)))
+#define pgd_page(pgd) phys_to_page(__pgd_to_phys(pgd))
 
 /* use ONLY for statically allocated translation tables */
 #define pud_offset_kimg(dir,addr) ((pud_t *)__phys_to_kimg(pud_offset_phys((dir), (addr))))
@@ -660,7 +661,7 @@ static inline phys_addr_t pgd_page_paddr(pgd_t pgd)
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
 const pteval_t mask = PTE_USER | PTE_PXN | PTE_UXN | PTE_RDONLY |
-PTE_PROT_NONE | PTE_VALID | PTE_WRITE;
+PTE_PROT_NONE | PTE_VALID | PTE_WRITE | PTE_GP;
 /* preserve the hardware dirty information */
 if (pte_hw_dirty(pte))
 pte = pte_mkdirty(pte);


@@ -35,6 +35,7 @@
 #define GIC_PRIO_PSR_I_SET (1 << 4)
 
 /* Additional SPSR bits not exposed in the UABI */
 #define PSR_IL_BIT (1 << 20)
 
 /* AArch32-specific ptrace requests */


@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_SCS_H
+#define _ASM_SCS_H
+
+#ifdef __ASSEMBLY__
+
+#include <asm/asm-offsets.h>
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+scs_sp .req x18
+
+.macro scs_load tsk, tmp
+ldr scs_sp, [\tsk, #TSK_TI_SCS_SP]
+.endm
+
+.macro scs_save tsk, tmp
+str scs_sp, [\tsk, #TSK_TI_SCS_SP]
+.endm
+#else
+.macro scs_load tsk, tmp
+.endm
+
+.macro scs_save tsk, tmp
+.endm
+#endif /* CONFIG_SHADOW_CALL_STACK */
+
+#endif /* __ASSEMBLY __ */
+
+#endif /* _ASM_SCS_H */


@@ -23,14 +23,6 @@
 #define CPU_STUCK_REASON_52_BIT_VA (UL(1) << CPU_STUCK_REASON_SHIFT)
 #define CPU_STUCK_REASON_NO_GRAN (UL(2) << CPU_STUCK_REASON_SHIFT)
 
-/* Possible options for __cpu_setup */
-/* Option to setup primary cpu */
-#define ARM64_CPU_BOOT_PRIMARY (1)
-/* Option to setup secondary cpus */
-#define ARM64_CPU_BOOT_SECONDARY (2)
-/* Option to setup cpus for different cpu run time services */
-#define ARM64_CPU_RUNTIME (3)
-
 #ifndef __ASSEMBLY__
 
 #include <asm/percpu.h>
@@ -96,9 +88,6 @@ asmlinkage void secondary_start_kernel(void);
 struct secondary_data {
 void *stack;
 struct task_struct *task;
-#ifdef CONFIG_ARM64_PTR_AUTH
-struct ptrauth_keys_kernel ptrauth_key;
-#endif
 long status;
 };


@@ -68,12 +68,10 @@ extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk);
 DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);
 
-static inline bool on_irq_stack(unsigned long sp,
+static inline bool on_stack(unsigned long sp, unsigned long low,
+unsigned long high, enum stack_type type,
 struct stack_info *info)
 {
-unsigned long low = (unsigned long)raw_cpu_read(irq_stack_ptr);
-unsigned long high = low + IRQ_STACK_SIZE;
-
 if (!low)
 return false;
@@ -83,12 +81,20 @@
 if (info) {
 info->low = low;
 info->high = high;
-info->type = STACK_TYPE_IRQ;
+info->type = type;
 }
 
 return true;
 }
 
+static inline bool on_irq_stack(unsigned long sp,
+struct stack_info *info)
+{
+unsigned long low = (unsigned long)raw_cpu_read(irq_stack_ptr);
+unsigned long high = low + IRQ_STACK_SIZE;
+
+return on_stack(sp, low, high, STACK_TYPE_IRQ, info);
+}
+
 static inline bool on_task_stack(const struct task_struct *tsk,
 unsigned long sp,
 struct stack_info *info)
@@ -96,16 +102,7 @@ static inline bool on_task_stack(const struct task_struct *tsk,
 unsigned long low = (unsigned long)task_stack_page(tsk);
 unsigned long high = low + THREAD_SIZE;
 
-if (sp < low || sp >= high)
-return false;
-
-if (info) {
-info->low = low;
-info->high = high;
-info->type = STACK_TYPE_TASK;
-}
-
-return true;
+return on_stack(sp, low, high, STACK_TYPE_TASK, info);
 }
 
 #ifdef CONFIG_VMAP_STACK
@@ -117,16 +114,7 @@ static inline bool on_overflow_stack(unsigned long sp,
 unsigned long low = (unsigned long)raw_cpu_ptr(overflow_stack);
 unsigned long high = low + OVERFLOW_STACK_SIZE;
 
-if (sp < low || sp >= high)
-return false;
-
-if (info) {
-info->low = low;
-info->high = high;
-info->type = STACK_TYPE_OVERFLOW;
-}
-
-return true;
+return on_stack(sp, low, high, STACK_TYPE_OVERFLOW, info);
 }
 #else
 static inline bool on_overflow_stack(unsigned long sp,
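
Taken together, these hunks reduce every per-stack predicate to a bounds computation plus one call into the shared range check. As a purely illustrative sketch of how a further stack class could plug into the same helper (the STACK_TYPE_FOO constant, foo_stack_base symbol and FOO_STACK_SIZE macro are the editor's hypothetical names, not part of this series):

	/* Hypothetical example: reusing the common on_stack() helper. */
	static inline bool on_foo_stack(unsigned long sp, struct stack_info *info)
	{
		unsigned long low = (unsigned long)foo_stack_base;	/* assumed base symbol */
		unsigned long high = low + FOO_STACK_SIZE;		/* assumed size macro */

		return on_stack(sp, low, high, STACK_TYPE_FOO, info);
	}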


@@ -2,7 +2,7 @@
 #ifndef __ASM_SUSPEND_H
 #define __ASM_SUSPEND_H
 
-#define NR_CTX_REGS 12
+#define NR_CTX_REGS 13
 #define NR_CALLEE_SAVED_REGS 12
 
 /*


@@ -105,6 +105,10 @@
 #define SYS_DC_CSW sys_insn(1, 0, 7, 10, 2)
 #define SYS_DC_CISW sys_insn(1, 0, 7, 14, 2)
 
+/*
+ * System registers, organised loosely by encoding but grouped together
+ * where the architected name contains an index. e.g. ID_MMFR<n>_EL1.
+ */
 #define SYS_OSDTRRX_EL1 sys_reg(2, 0, 0, 0, 2)
 #define SYS_MDCCINT_EL1 sys_reg(2, 0, 0, 2, 0)
 #define SYS_MDSCR_EL1 sys_reg(2, 0, 0, 2, 2)
@@ -134,12 +138,16 @@
 #define SYS_ID_PFR0_EL1 sys_reg(3, 0, 0, 1, 0)
 #define SYS_ID_PFR1_EL1 sys_reg(3, 0, 0, 1, 1)
+#define SYS_ID_PFR2_EL1 sys_reg(3, 0, 0, 3, 4)
 #define SYS_ID_DFR0_EL1 sys_reg(3, 0, 0, 1, 2)
+#define SYS_ID_DFR1_EL1 sys_reg(3, 0, 0, 3, 5)
 #define SYS_ID_AFR0_EL1 sys_reg(3, 0, 0, 1, 3)
 #define SYS_ID_MMFR0_EL1 sys_reg(3, 0, 0, 1, 4)
 #define SYS_ID_MMFR1_EL1 sys_reg(3, 0, 0, 1, 5)
 #define SYS_ID_MMFR2_EL1 sys_reg(3, 0, 0, 1, 6)
 #define SYS_ID_MMFR3_EL1 sys_reg(3, 0, 0, 1, 7)
+#define SYS_ID_MMFR4_EL1 sys_reg(3, 0, 0, 2, 6)
+#define SYS_ID_MMFR5_EL1 sys_reg(3, 0, 0, 3, 6)
 
 #define SYS_ID_ISAR0_EL1 sys_reg(3, 0, 0, 2, 0)
 #define SYS_ID_ISAR1_EL1 sys_reg(3, 0, 0, 2, 1)
@@ -147,7 +155,6 @@
 #define SYS_ID_ISAR3_EL1 sys_reg(3, 0, 0, 2, 3)
 #define SYS_ID_ISAR4_EL1 sys_reg(3, 0, 0, 2, 4)
 #define SYS_ID_ISAR5_EL1 sys_reg(3, 0, 0, 2, 5)
-#define SYS_ID_MMFR4_EL1 sys_reg(3, 0, 0, 2, 6)
 #define SYS_ID_ISAR6_EL1 sys_reg(3, 0, 0, 2, 7)
 
 #define SYS_MVFR0_EL1 sys_reg(3, 0, 0, 3, 0)
@@ -552,6 +559,8 @@
 #endif
 
 /* SCTLR_EL1 specific flags. */
+#define SCTLR_EL1_BT1 (BIT(36))
+#define SCTLR_EL1_BT0 (BIT(35))
 #define SCTLR_EL1_UCI (BIT(26))
 #define SCTLR_EL1_E0E (BIT(24))
 #define SCTLR_EL1_SPAN (BIT(23))
@@ -594,6 +603,7 @@
 /* id_aa64isar0 */
 #define ID_AA64ISAR0_RNDR_SHIFT 60
+#define ID_AA64ISAR0_TLB_SHIFT 56
 #define ID_AA64ISAR0_TS_SHIFT 52
 #define ID_AA64ISAR0_FHM_SHIFT 48
 #define ID_AA64ISAR0_DP_SHIFT 44
@@ -637,6 +647,8 @@
 #define ID_AA64PFR0_CSV2_SHIFT 56
 #define ID_AA64PFR0_DIT_SHIFT 48
 #define ID_AA64PFR0_AMU_SHIFT 44
+#define ID_AA64PFR0_MPAM_SHIFT 40
+#define ID_AA64PFR0_SEL2_SHIFT 36
 #define ID_AA64PFR0_SVE_SHIFT 32
 #define ID_AA64PFR0_RAS_SHIFT 28
 #define ID_AA64PFR0_GIC_SHIFT 24
@@ -655,15 +667,21 @@
 #define ID_AA64PFR0_ASIMD_NI 0xf
 #define ID_AA64PFR0_ASIMD_SUPPORTED 0x0
 #define ID_AA64PFR0_EL1_64BIT_ONLY 0x1
+#define ID_AA64PFR0_EL1_32BIT_64BIT 0x2
 #define ID_AA64PFR0_EL0_64BIT_ONLY 0x1
 #define ID_AA64PFR0_EL0_32BIT_64BIT 0x2
 
 /* id_aa64pfr1 */
+#define ID_AA64PFR1_MPAMFRAC_SHIFT 16
+#define ID_AA64PFR1_RASFRAC_SHIFT 12
+#define ID_AA64PFR1_MTE_SHIFT 8
 #define ID_AA64PFR1_SSBS_SHIFT 4
+#define ID_AA64PFR1_BT_SHIFT 0
 
 #define ID_AA64PFR1_SSBS_PSTATE_NI 0
 #define ID_AA64PFR1_SSBS_PSTATE_ONLY 1
 #define ID_AA64PFR1_SSBS_PSTATE_INSNS 2
+#define ID_AA64PFR1_BT_BTI 0x1
 
 /* id_aa64zfr0 */
 #define ID_AA64ZFR0_F64MM_SHIFT 56
@@ -688,6 +706,9 @@
 #define ID_AA64ZFR0_SVEVER_SVE2 0x1
 
 /* id_aa64mmfr0 */
+#define ID_AA64MMFR0_TGRAN4_2_SHIFT 40
+#define ID_AA64MMFR0_TGRAN64_2_SHIFT 36
+#define ID_AA64MMFR0_TGRAN16_2_SHIFT 32
 #define ID_AA64MMFR0_TGRAN4_SHIFT 28
 #define ID_AA64MMFR0_TGRAN64_SHIFT 24
 #define ID_AA64MMFR0_TGRAN16_SHIFT 20
@@ -752,6 +773,25 @@
 #define ID_DFR0_PERFMON_8_1 0x4
 
+#define ID_ISAR4_SWP_FRAC_SHIFT 28
+#define ID_ISAR4_PSR_M_SHIFT 24
+#define ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT 20
+#define ID_ISAR4_BARRIER_SHIFT 16
+#define ID_ISAR4_SMC_SHIFT 12
+#define ID_ISAR4_WRITEBACK_SHIFT 8
+#define ID_ISAR4_WITHSHIFTS_SHIFT 4
+#define ID_ISAR4_UNPRIV_SHIFT 0
+
+#define ID_DFR1_MTPMU_SHIFT 0
+
+#define ID_ISAR0_DIVIDE_SHIFT 24
+#define ID_ISAR0_DEBUG_SHIFT 20
+#define ID_ISAR0_COPROC_SHIFT 16
+#define ID_ISAR0_CMPBRANCH_SHIFT 12
+#define ID_ISAR0_BITFIELD_SHIFT 8
+#define ID_ISAR0_BITCOUNT_SHIFT 4
+#define ID_ISAR0_SWAP_SHIFT 0
+
 #define ID_ISAR5_RDM_SHIFT 24
 #define ID_ISAR5_CRC32_SHIFT 16
 #define ID_ISAR5_SHA2_SHIFT 12
@@ -767,6 +807,22 @@
 #define ID_ISAR6_DP_SHIFT 4
 #define ID_ISAR6_JSCVT_SHIFT 0
 
+#define ID_MMFR4_EVT_SHIFT 28
+#define ID_MMFR4_CCIDX_SHIFT 24
+#define ID_MMFR4_LSM_SHIFT 20
+#define ID_MMFR4_HPDS_SHIFT 16
+#define ID_MMFR4_CNP_SHIFT 12
+#define ID_MMFR4_XNX_SHIFT 8
+#define ID_MMFR4_SPECSEI_SHIFT 0
+
+#define ID_MMFR5_ETS_SHIFT 0
+
+#define ID_PFR0_DIT_SHIFT 24
+#define ID_PFR0_CSV2_SHIFT 16
+
+#define ID_PFR2_SSBS_SHIFT 4
+#define ID_PFR2_CSV3_SHIFT 0
+
 #define MVFR0_FPROUND_SHIFT 28
 #define MVFR0_FPSHVEC_SHIFT 24
 #define MVFR0_FPSQRT_SHIFT 20
@@ -785,17 +841,14 @@
 #define MVFR1_FPDNAN_SHIFT 4
 #define MVFR1_FPFTZ_SHIFT 0
 
-#define ID_AA64MMFR0_TGRAN4_SHIFT 28
-#define ID_AA64MMFR0_TGRAN64_SHIFT 24
-#define ID_AA64MMFR0_TGRAN16_SHIFT 20
-
-#define ID_AA64MMFR0_TGRAN4_NI 0xf
-#define ID_AA64MMFR0_TGRAN4_SUPPORTED 0x0
-#define ID_AA64MMFR0_TGRAN64_NI 0xf
-#define ID_AA64MMFR0_TGRAN64_SUPPORTED 0x0
-#define ID_AA64MMFR0_TGRAN16_NI 0x0
-#define ID_AA64MMFR0_TGRAN16_SUPPORTED 0x1
+#define ID_PFR1_GIC_SHIFT 28
+#define ID_PFR1_VIRT_FRAC_SHIFT 24
+#define ID_PFR1_SEC_FRAC_SHIFT 20
+#define ID_PFR1_GENTIMER_SHIFT 16
+#define ID_PFR1_VIRTUALIZATION_SHIFT 12
+#define ID_PFR1_MPROGMOD_SHIFT 8
+#define ID_PFR1_SECURITY_SHIFT 4
+#define ID_PFR1_PROGMOD_SHIFT 0
 
 #if defined(CONFIG_ARM64_4K_PAGES)
 #define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN4_SHIFT


@@ -41,6 +41,10 @@ struct thread_info {
 #endif
 } preempt;
 };
+#ifdef CONFIG_SHADOW_CALL_STACK
+void *scs_base;
+void *scs_sp;
+#endif
 };
 
 #define thread_saved_pc(tsk) \
@@ -100,11 +104,20 @@ void arch_release_task_struct(struct task_struct *tsk);
 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
 _TIF_SYSCALL_EMU)
 
+#ifdef CONFIG_SHADOW_CALL_STACK
+#define INIT_SCS \
+.scs_base = init_shadow_call_stack, \
+.scs_sp = init_shadow_call_stack,
+#else
+#define INIT_SCS
+#endif
+
 #define INIT_THREAD_INFO(tsk) \
 { \
 .flags = _TIF_FOREIGN_FPSTATE, \
 .preempt_count = INIT_PREEMPT_COUNT, \
 .addr_limit = KERNEL_DS, \
+INIT_SCS \
 }
 
 #endif /* __ASM_THREAD_INFO_H */


@@ -73,5 +73,6 @@
 #define HWCAP2_BF16 (1 << 14)
 #define HWCAP2_DGH (1 << 15)
 #define HWCAP2_RNG (1 << 16)
+#define HWCAP2_BTI (1 << 17)
 
 #endif /* _UAPI__ASM_HWCAP_H */
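
For completeness, this is how userspace is expected to discover the new hwcap at run time. The snippet below is an editor's sketch rather than part of the series; the fallback #define only exists in case the toolchain's asm/hwcap.h predates this change.

	#include <stdio.h>
	#include <sys/auxv.h>

	#ifndef HWCAP2_BTI
	#define HWCAP2_BTI (1 << 17)	/* value from the hunk above, for older headers */
	#endif

	int main(void)
	{
		unsigned long hwcap2 = getauxval(AT_HWCAP2);

		printf("BTI: %s\n", (hwcap2 & HWCAP2_BTI) ? "supported" : "not supported");
		return 0;
	}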


@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI__ASM_MMAN_H
+#define _UAPI__ASM_MMAN_H
+
+#include <asm-generic/mman.h>
+
+#define PROT_BTI 0x10 /* BTI guarded page */
+
+#endif /* ! _UAPI__ASM_MMAN_H */
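
As a usage illustration only (not part of the patch): a JIT that already emits BTI landing pads in its generated code would ask for the new protection bit when it makes the buffer executable. The helper name and error-handling policy below are the editor's assumptions; on kernels or CPUs without BTI support the mprotect() call is expected to fail (typically with EINVAL), so a real JIT would fall back to plain PROT_READ | PROT_EXEC.

	#include <stddef.h>
	#include <string.h>
	#include <sys/mman.h>

	#ifndef PROT_BTI
	#define PROT_BTI 0x10	/* from the uapi header above, for older toolchains */
	#endif

	/*
	 * Sketch: copy generated code into an existing read/write mapping,
	 * then flip it to executable and BTI-guarded before running it.
	 * Cache maintenance and full error handling are omitted.
	 */
	static void *publish_jit_code(void *buf, size_t buflen,
				      const void *code, size_t codelen)
	{
		memcpy(buf, code, codelen);
		if (mprotect(buf, buflen, PROT_READ | PROT_EXEC | PROT_BTI) != 0)
			return NULL;	/* no BTI: caller may retry without PROT_BTI */
		return buf;
	}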


@@ -46,6 +46,7 @@
 #define PSR_I_BIT 0x00000080
 #define PSR_A_BIT 0x00000100
 #define PSR_D_BIT 0x00000200
+#define PSR_BTYPE_MASK 0x00000c00
 #define PSR_SSBS_BIT 0x00001000
 #define PSR_PAN_BIT 0x00400000
 #define PSR_UAO_BIT 0x00800000
@@ -55,6 +56,8 @@
 #define PSR_Z_BIT 0x40000000
 #define PSR_N_BIT 0x80000000
 
+#define PSR_BTYPE_SHIFT 10
+
 /*
  * Groups of PSR bits
  */
@@ -63,6 +66,12 @@
 #define PSR_x 0x0000ff00 /* Extension */
 #define PSR_c 0x000000ff /* Control */
 
+/* Convenience names for the values of PSTATE.BTYPE */
+#define PSR_BTYPE_NONE (0b00 << PSR_BTYPE_SHIFT)
+#define PSR_BTYPE_JC (0b01 << PSR_BTYPE_SHIFT)
+#define PSR_BTYPE_C (0b10 << PSR_BTYPE_SHIFT)
+#define PSR_BTYPE_J (0b11 << PSR_BTYPE_SHIFT)
+
 /* syscall emulation path in ptrace */
 #define PTRACE_SYSEMU 31
 #define PTRACE_SYSEMU_SINGLESTEP 32
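
To make the new fields concrete: given a saved pstate value (for example obtained via ptrace() or from a signal frame's ucontext), PSTATE.BTYPE is just the two bits defined above. A small decoding sketch follows; the constants are copied from the hunk above, while the helper itself is the editor's, not kernel code.

	#define PSR_BTYPE_SHIFT 10
	#define PSR_BTYPE_MASK 0x00000c00

	/* Map PSTATE.BTYPE to the names used in the definitions above. */
	static const char *btype_name(unsigned long long pstate)
	{
		switch ((pstate & PSR_BTYPE_MASK) >> PSR_BTYPE_SHIFT) {
		case 0: return "NONE";	/* PSR_BTYPE_NONE */
		case 1: return "JC";	/* PSR_BTYPE_JC */
		case 2: return "C";	/* PSR_BTYPE_C */
		case 3: return "J";	/* PSR_BTYPE_J */
		}
		return "?";
	}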


@@ -63,6 +63,7 @@ obj-$(CONFIG_CRASH_CORE) += crash_core.o
 obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o
 obj-$(CONFIG_ARM64_SSBD) += ssbd.o
 obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o
+obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o
 
 obj-y += vdso/ probes/
 obj-$(CONFIG_COMPAT_VDSO) += vdso32/


@@ -33,6 +33,10 @@ int main(void)
 DEFINE(TSK_TI_ADDR_LIMIT, offsetof(struct task_struct, thread_info.addr_limit));
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 DEFINE(TSK_TI_TTBR0, offsetof(struct task_struct, thread_info.ttbr0));
+#endif
+#ifdef CONFIG_SHADOW_CALL_STACK
+DEFINE(TSK_TI_SCS_BASE, offsetof(struct task_struct, thread_info.scs_base));
+DEFINE(TSK_TI_SCS_SP, offsetof(struct task_struct, thread_info.scs_sp));
 #endif
 DEFINE(TSK_STACK, offsetof(struct task_struct, stack));
 #ifdef CONFIG_STACKPROTECTOR
@@ -92,9 +96,6 @@ int main(void)
 BLANK();
 DEFINE(CPU_BOOT_STACK, offsetof(struct secondary_data, stack));
 DEFINE(CPU_BOOT_TASK, offsetof(struct secondary_data, task));
-#ifdef CONFIG_ARM64_PTR_AUTH
-DEFINE(CPU_BOOT_PTRAUTH_KEY, offsetof(struct secondary_data, ptrauth_key));
-#endif
 BLANK();
 #ifdef CONFIG_KVM_ARM_HOST
 DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt));


@@ -29,7 +29,7 @@
  * branch to what would be the reset vector. It must be executed with the
  * flat identity mapping.
  */
-ENTRY(__cpu_soft_restart)
+SYM_CODE_START(__cpu_soft_restart)
 /* Clear sctlr_el1 flags. */
 mrs x12, sctlr_el1
 mov_q x13, SCTLR_ELx_FLAGS
@@ -47,6 +47,6 @@ ENTRY(__cpu_soft_restart)
 mov x1, x3 // arg1
 mov x2, x4 // arg2
 br x8
-ENDPROC(__cpu_soft_restart)
+SYM_CODE_END(__cpu_soft_restart)
 
 .popsection


@@ -635,7 +635,7 @@ has_neoverse_n1_erratum_1542419(const struct arm64_cpu_capabilities *entry,
 return is_midr_in_range(midr, &range) && has_dic;
 }
 
-#if defined(CONFIG_HARDEN_EL2_VECTORS) || defined(CONFIG_ARM64_ERRATUM_1319367)
+#if defined(CONFIG_HARDEN_EL2_VECTORS)
 
 static const struct midr_range ca57_a72[] = {
 MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
@@ -757,12 +757,16 @@ static const struct arm64_cpu_capabilities erratum_843419_list[] = {
 };
 #endif
 
-#ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_AT_VHE
-static const struct midr_range erratum_speculative_at_vhe_list[] = {
+#ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_AT
+static const struct midr_range erratum_speculative_at_list[] = {
 #ifdef CONFIG_ARM64_ERRATUM_1165522
 /* Cortex A76 r0p0 to r2p0 */
 MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 2, 0),
 #endif
+#ifdef CONFIG_ARM64_ERRATUM_1319367
+MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+#endif
 #ifdef CONFIG_ARM64_ERRATUM_1530923
 /* Cortex A55 r0p0 to r2p0 */
 MIDR_RANGE(MIDR_CORTEX_A55, 0, 0, 2, 0),
@@ -774,7 +778,7 @@ static const struct midr_range erratum_speculative_at_vhe_list[] = {
 const struct arm64_cpu_capabilities arm64_errata[] = {
 #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
 {
-.desc = "ARM errata 826319, 827319, 824069, 819472",
+.desc = "ARM errata 826319, 827319, 824069, or 819472",
 .capability = ARM64_WORKAROUND_CLEAN_CACHE,
 ERRATA_MIDR_RANGE_LIST(workaround_clean_cache),
 .cpu_enable = cpu_enable_cache_maint_trap,
@@ -856,7 +860,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 #endif
 #ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
 {
-.desc = "Qualcomm erratum 1009, ARM erratum 1286807",
+.desc = "Qualcomm erratum 1009, or ARM erratum 1286807",
 .capability = ARM64_WORKAROUND_REPEAT_TLBI,
 .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 .matches = cpucap_multi_entry_cap_matches,
@@ -897,11 +901,11 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 ERRATA_MIDR_RANGE_LIST(erratum_1418040_list),
 },
 #endif
-#ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_AT_VHE
+#ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_AT
 {
-.desc = "ARM errata 1165522, 1530923",
-.capability = ARM64_WORKAROUND_SPECULATIVE_AT_VHE,
-ERRATA_MIDR_RANGE_LIST(erratum_speculative_at_vhe_list),
+.desc = "ARM errata 1165522, 1319367, or 1530923",
+.capability = ARM64_WORKAROUND_SPECULATIVE_AT,
+ERRATA_MIDR_RANGE_LIST(erratum_speculative_at_list),
 },
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_1463225
@@ -934,13 +938,6 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 .matches = has_neoverse_n1_erratum_1542419,
 .cpu_enable = cpu_enable_trap_ctr_access,
 },
-#endif
-#ifdef CONFIG_ARM64_ERRATUM_1319367
-{
-.desc = "ARM erratum 1319367",
-.capability = ARM64_WORKAROUND_SPECULATIVE_AT_NVHE,
-ERRATA_MIDR_RANGE_LIST(ca57_a72),
-},
 #endif
 {
 }


@@ -3,6 +3,61 @@
  * Contains CPU feature definitions
  *
  * Copyright (C) 2015 ARM Ltd.
+ *
+ * A note for the weary kernel hacker: the code here is confusing and hard to
+ * follow! That's partly because it's solving a nasty problem, but also because
+ * there's a little bit of over-abstraction that tends to obscure what's going
+ * on behind a maze of helper functions and macros.
+ *
+ * The basic problem is that hardware folks have started gluing together CPUs
+ * with distinct architectural features; in some cases even creating SoCs where
+ * user-visible instructions are available only on a subset of the available
+ * cores. We try to address this by snapshotting the feature registers of the
+ * boot CPU and comparing these with the feature registers of each secondary
+ * CPU when bringing them up. If there is a mismatch, then we update the
+ * snapshot state to indicate the lowest-common denominator of the feature,
+ * known as the "safe" value. This snapshot state can be queried to view the
+ * "sanitised" value of a feature register.
+ *
+ * The sanitised register values are used to decide which capabilities we
+ * have in the system. These may be in the form of traditional "hwcaps"
+ * advertised to userspace or internal "cpucaps" which are used to configure
+ * things like alternative patching and static keys. While a feature mismatch
+ * may result in a TAINT_CPU_OUT_OF_SPEC kernel taint, a capability mismatch
+ * may prevent a CPU from being onlined at all.
+ *
+ * Some implementation details worth remembering:
+ *
+ * - Mismatched features are *always* sanitised to a "safe" value, which
+ * usually indicates that the feature is not supported.
+ *
+ * - A mismatched feature marked with FTR_STRICT will cause a "SANITY CHECK"
+ * warning when onlining an offending CPU and the kernel will be tainted
+ * with TAINT_CPU_OUT_OF_SPEC.
+ *
+ * - Features marked as FTR_VISIBLE have their sanitised value visible to
+ * userspace. FTR_VISIBLE features in registers that are only visible
+ * to EL0 by trapping *must* have a corresponding HWCAP so that late
+ * onlining of CPUs cannot lead to features disappearing at runtime.
+ *
+ * - A "feature" is typically a 4-bit register field. A "capability" is the
+ * high-level description derived from the sanitised field value.
+ *
+ * - Read the Arm ARM (DDI 0487F.a) section D13.1.3 ("Principles of the ID
+ * scheme for fields in ID registers") to understand when feature fields
+ * may be signed or unsigned (FTR_SIGNED and FTR_UNSIGNED accordingly).
+ *
+ * - KVM exposes its own view of the feature registers to guest operating
+ * systems regardless of FTR_VISIBLE. This is typically driven from the
+ * sanitised register values to allow virtual CPUs to be migrated between
+ * arbitrary physical CPUs, but some features not present on the host are
+ * also advertised and emulated. Look at sys_reg_descs[] for the gory
+ * details.
+ *
+ * - If the arm64_ftr_bits[] for a register has a missing field, then this
+ * field is treated as STRICT RES0, including for read_sanitised_ftr_reg().
+ * This is stronger than FTR_HIDDEN and can be used to hide features from
+ * KVM guests.
  */
 
 #define pr_fmt(fmt) "CPU features: " fmt
@@ -124,6 +179,7 @@ static bool __system_matches_cap(unsigned int n);
  */
 static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
 ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RNDR_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TLB_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TS_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
@@ -166,22 +222,27 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_DIT_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_AMU_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_MPAM_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SEL2_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
 FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
 S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
 S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
-/* Linux doesn't care about the EL3 */
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
-ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
-ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
-ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
 ARM64_FTR_END,
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MPAMFRAC_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_RASFRAC_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI),
+ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_BTI),
+FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_BT_SHIFT, 4, 0),
 ARM64_FTR_END,
 };
@@ -208,6 +269,24 @@ static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
+/*
+ * Page size not being supported at Stage-2 is not fatal. You
+ * just give up KVM if PAGE_SIZE isn't supported there. Go fix
+ * your favourite nesting hypervisor.
+ *
+ * There is a small corner case where the hypervisor explicitly
+ * advertises a given granule size at Stage-2 (value 2) on some
+ * vCPUs, and uses the fallback to Stage-1 (value 0) for other
+ * vCPUs. Although this is not forbidden by the architecture, it
+ * indicates that the hypervisor is being silly (or buggy).
+ *
+ * We make no effort to cope with this and pretend that if these
+ * fields are inconsistent across vCPUs, then it isn't worth
+ * trying to bring KVM up.
+ */
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN4_2_SHIFT, 4, 1),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN64_2_SHIFT, 4, 1),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN16_2_SHIFT, 4, 1),
 /*
  * We already refuse to boot CPUs that don't support our configured
  * page size, so we can only detect mismatches for a page size other
@@ -247,7 +326,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
-ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LSM_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CNP_SHIFT, 4, 0),
@@ -289,7 +368,7 @@ static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
-ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 36, 28, 0),
+S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 36, 4, 0),
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_PMSVER_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_CTX_CMPS_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_WRPS_SHIFT, 4, 0),
@@ -316,6 +395,16 @@ static const struct arm64_ftr_bits ftr_dczid[] = {
 ARM64_FTR_END,
 };
 
+static const struct arm64_ftr_bits ftr_id_isar0[] = {
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DIVIDE_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DEBUG_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_COPROC_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_CMPBRANCH_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_BITFIELD_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_BITCOUNT_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_SWAP_SHIFT, 4, 0),
+ARM64_FTR_END,
+};
+
 static const struct arm64_ftr_bits ftr_id_isar5[] = {
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
@@ -328,7 +417,37 @@ static const struct arm64_ftr_bits ftr_id_isar5[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_mmfr4[] = {
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_EVT_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_CCIDX_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_LSM_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_HPDS_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_CNP_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_XNX_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0), /* ac2 */
+/*
+ * SpecSEI = 1 indicates that the PE might generate an SError on an
+ * external abort on speculative read. It is safe to assume that an
+ * SError might be generated than it will not be. Hence it has been
+ * classified as FTR_HIGHER_SAFE.
+ */
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_MMFR4_SPECSEI_SHIFT, 4, 0),
+ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_isar4[] = {
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SWP_FRAC_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_PSR_M_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_BARRIER_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SMC_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_WRITEBACK_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_WITHSHIFTS_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_UNPRIV_SHIFT, 4, 0),
+ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_mmfr5[] = {
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR5_ETS_SHIFT, 4, 0),
 ARM64_FTR_END,
 };
@@ -344,6 +463,8 @@ static const struct arm64_ftr_bits ftr_id_isar6[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_pfr0[] = {
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_DIT_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR0_CSV2_SHIFT, 4, 0),
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 12, 4, 0), /* State3 */
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 8, 4, 0), /* State2 */
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0), /* State1 */
@@ -351,8 +472,26 @@ static const struct arm64_ftr_bits ftr_id_pfr0[] = {
 ARM64_FTR_END,
 };
 
+static const struct arm64_ftr_bits ftr_id_pfr1[] = {
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_GIC_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_VIRT_FRAC_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_SEC_FRAC_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_GENTIMER_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_VIRTUALIZATION_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_MPROGMOD_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_SECURITY_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_PROGMOD_SHIFT, 4, 0),
+ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_pfr2[] = {
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR2_SSBS_SHIFT, 4, 0),
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_CSV3_SHIFT, 4, 0),
+ARM64_FTR_END,
+};
+
 static const struct arm64_ftr_bits ftr_id_dfr0[] = {
+ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 28, 4, 0), /* [31:28] TraceFilt */
 S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 24, 4, 0xf), /* PerfMon */
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 20, 4, 0),
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 0),
@@ -363,6 +502,11 @@ static const struct arm64_ftr_bits ftr_id_dfr0[] = {
 ARM64_FTR_END,
 };
 
+static const struct arm64_ftr_bits ftr_id_dfr1[] = {
+S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR1_MTPMU_SHIFT, 4, 0),
+ARM64_FTR_END,
+};
+
 static const struct arm64_ftr_bits ftr_zcr[] = {
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
 ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0), /* LEN */
@@ -373,7 +517,7 @@ static const struct arm64_ftr_bits ftr_zcr[] = {
  * Common ftr bits for a 32bit register with all hidden, strict
  * attributes, with 4bit feature fields and a default safe value of
  * 0. Covers the following 32bit registers:
- * id_isar[0-4], id_mmfr[1-3], id_pfr1, mvfr[0-1]
+ * id_isar[1-4], id_mmfr[1-3], id_pfr1, mvfr[0-1]
  */
 static const struct arm64_ftr_bits ftr_generic_32bits[] = {
 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 28, 4, 0),
@@ -411,7 +555,7 @@ static const struct __ftr_reg_entry {
 /* Op1 = 0, CRn = 0, CRm = 1 */
 ARM64_FTR_REG(SYS_ID_PFR0_EL1, ftr_id_pfr0),
-ARM64_FTR_REG(SYS_ID_PFR1_EL1, ftr_generic_32bits),
+ARM64_FTR_REG(SYS_ID_PFR1_EL1, ftr_id_pfr1),
 ARM64_FTR_REG(SYS_ID_DFR0_EL1, ftr_id_dfr0),
 ARM64_FTR_REG(SYS_ID_MMFR0_EL1, ftr_id_mmfr0),
 ARM64_FTR_REG(SYS_ID_MMFR1_EL1, ftr_generic_32bits),
@@ -419,11 +563,11 @@ static const struct __ftr_reg_entry {
 ARM64_FTR_REG(SYS_ID_MMFR3_EL1, ftr_generic_32bits),
 
 /* Op1 = 0, CRn = 0, CRm = 2 */
-ARM64_FTR_REG(SYS_ID_ISAR0_EL1, ftr_generic_32bits),
+ARM64_FTR_REG(SYS_ID_ISAR0_EL1, ftr_id_isar0),
 ARM64_FTR_REG(SYS_ID_ISAR1_EL1, ftr_generic_32bits),
 ARM64_FTR_REG(SYS_ID_ISAR2_EL1, ftr_generic_32bits),
 ARM64_FTR_REG(SYS_ID_ISAR3_EL1, ftr_generic_32bits),
-ARM64_FTR_REG(SYS_ID_ISAR4_EL1, ftr_generic_32bits),
+ARM64_FTR_REG(SYS_ID_ISAR4_EL1, ftr_id_isar4),
 ARM64_FTR_REG(SYS_ID_ISAR5_EL1, ftr_id_isar5),
 ARM64_FTR_REG(SYS_ID_MMFR4_EL1, ftr_id_mmfr4),
 ARM64_FTR_REG(SYS_ID_ISAR6_EL1, ftr_id_isar6),
@@ -432,6 +576,9 @@ static const struct __ftr_reg_entry {
 ARM64_FTR_REG(SYS_MVFR0_EL1, ftr_generic_32bits),
 ARM64_FTR_REG(SYS_MVFR1_EL1, ftr_generic_32bits),
 ARM64_FTR_REG(SYS_MVFR2_EL1, ftr_mvfr2),
+ARM64_FTR_REG(SYS_ID_PFR2_EL1, ftr_id_pfr2),
+ARM64_FTR_REG(SYS_ID_DFR1_EL1, ftr_id_dfr1),
+ARM64_FTR_REG(SYS_ID_MMFR5_EL1, ftr_id_mmfr5),
 
 /* Op1 = 0, CRn = 0, CRm = 4 */
 ARM64_FTR_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0),
@@ -468,16 +615,16 @@ static int search_cmp_ftr_reg(const void *id, const void *regp)
 }
 
 /*
- * get_arm64_ftr_reg - Lookup a feature register entry using its
- * sys_reg() encoding. With the array arm64_ftr_regs sorted in the
- * ascending order of sys_id , we use binary search to find a matching
+ * get_arm64_ftr_reg_nowarn - Looks up a feature register entry using
+ * its sys_reg() encoding. With the array arm64_ftr_regs sorted in the
+ * ascending order of sys_id, we use binary search to find a matching
  * entry.
  *
  * returns - Upon success, matching ftr_reg entry for id.
  * - NULL on failure. It is upto the caller to decide
  * the impact of a failure.
  */
-static struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id)
+static struct arm64_ftr_reg *get_arm64_ftr_reg_nowarn(u32 sys_id)
 {
 const struct __ftr_reg_entry *ret;
@@ -491,6 +638,27 @@ static struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id)
 return NULL;
 }
 
+/*
+ * get_arm64_ftr_reg - Looks up a feature register entry using
+ * its sys_reg() encoding. This calls get_arm64_ftr_reg_nowarn().
+ *
+ * returns - Upon success, matching ftr_reg entry for id.
+ * - NULL on failure but with an WARN_ON().
+ */
+static struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id)
+{
+struct arm64_ftr_reg *reg;
+
+reg = get_arm64_ftr_reg_nowarn(sys_id);
+
+/*
+ * Requesting a non-existent register search is an error. Warn
+ * and let the caller handle it.
+ */
+WARN_ON(!reg);
+return reg;
+}
+
 static u64 arm64_ftr_set_value(const struct arm64_ftr_bits *ftrp, s64 reg,
 s64 ftr_val)
 {
@@ -552,7 +720,8 @@ static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new)
 const struct arm64_ftr_bits *ftrp;
 struct arm64_ftr_reg *reg = get_arm64_ftr_reg(sys_reg);
 
-BUG_ON(!reg);
+if (!reg)
+return;
 
 for (ftrp = reg->ftr_bits; ftrp->width; ftrp++) {
 u64 ftr_mask = arm64_ftr_mask(ftrp);
@@ -625,6 +794,7 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
 init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
+init_cpu_ftr_reg(SYS_ID_DFR1_EL1, info->reg_id_dfr1);
 init_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0);
 init_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1);
 init_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2);
@@ -636,8 +806,11 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1);
 init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2);
 init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3);
+init_cpu_ftr_reg(SYS_ID_MMFR4_EL1, info->reg_id_mmfr4);
+init_cpu_ftr_reg(SYS_ID_MMFR5_EL1, info->reg_id_mmfr5);
 init_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0);
 init_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1);
+init_cpu_ftr_reg(SYS_ID_PFR2_EL1, info->reg_id_pfr2);
 init_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0);
 init_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1);
 init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
@@ -682,7 +855,9 @@ static int check_update_ftr_reg(u32 sys_id, int cpu, u64 val, u64 boot)
 {
 struct arm64_ftr_reg *regp = get_arm64_ftr_reg(sys_id);
 
-BUG_ON(!regp);
+if (!regp)
+return 0;
+
 update_cpu_ftr_reg(regp, val);
 if ((boot & regp->strict_mask) == (val & regp->strict_mask))
 return 0;
@@ -691,6 +866,104 @@ static int check_update_ftr_reg(u32 sys_id, int cpu, u64 val, u64 boot)
 return 1;
 }
 
+static void relax_cpu_ftr_reg(u32 sys_id, int field)
+{
+const struct arm64_ftr_bits *ftrp;
+struct arm64_ftr_reg *regp = get_arm64_ftr_reg(sys_id);
+
+if (!regp)
+return;
+
+for (ftrp = regp->ftr_bits; ftrp->width; ftrp++) {
+if (ftrp->shift == field) {
+regp->strict_mask &= ~arm64_ftr_mask(ftrp);
+break;
+}
+}
+
+/* Bogus field? */
+WARN_ON(!ftrp->width);
+}
+
+static int update_32bit_cpu_features(int cpu, struct cpuinfo_arm64 *info,
+struct cpuinfo_arm64 *boot)
+{
+int taint = 0;
+u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+
+/*
+ * If we don't have AArch32 at all then skip the checks entirely
+ * as the register values may be UNKNOWN and we're not going to be
+ * using them for anything.
+ */
+if (!id_aa64pfr0_32bit_el0(pfr0))
+return taint;
+
+/*
+ * If we don't have AArch32 at EL1, then relax the strictness of
+ * EL1-dependent register fields to avoid spurious sanity check fails.
+ */
+if (!id_aa64pfr0_32bit_el1(pfr0)) {
+relax_cpu_ftr_reg(SYS_ID_ISAR4_EL1, ID_ISAR4_SMC_SHIFT);
+relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_VIRT_FRAC_SHIFT);
+relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_SEC_FRAC_SHIFT);
+relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_VIRTUALIZATION_SHIFT);
+relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_SECURITY_SHIFT);
+relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_PROGMOD_SHIFT);
+}
+
+taint |= check_update_ftr_reg(SYS_ID_DFR0_EL1, cpu,
+info->reg_id_dfr0, boot->reg_id_dfr0);
+taint |= check_update_ftr_reg(SYS_ID_DFR1_EL1, cpu,
+info->reg_id_dfr1, boot->reg_id_dfr1);
+taint |= check_update_ftr_reg(SYS_ID_ISAR0_EL1, cpu,
+info->reg_id_isar0, boot->reg_id_isar0);
+taint |= check_update_ftr_reg(SYS_ID_ISAR1_EL1, cpu,
+info->reg_id_isar1, boot->reg_id_isar1);
+taint |= check_update_ftr_reg(SYS_ID_ISAR2_EL1, cpu,
+info->reg_id_isar2, boot->reg_id_isar2);
+taint |= check_update_ftr_reg(SYS_ID_ISAR3_EL1, cpu,
+info->reg_id_isar3, boot->reg_id_isar3);
+taint |= check_update_ftr_reg(SYS_ID_ISAR4_EL1, cpu,
+info->reg_id_isar4, boot->reg_id_isar4);
+taint |= check_update_ftr_reg(SYS_ID_ISAR5_EL1, cpu,
+info->reg_id_isar5, boot->reg_id_isar5);
+taint |= check_update_ftr_reg(SYS_ID_ISAR6_EL1, cpu,
+info->reg_id_isar6, boot->reg_id_isar6);
+
+/*
+ * Regardless of the value of the AuxReg field, the AIFSR, ADFSR, and
+ * ACTLR formats could differ across CPUs and therefore would have to
+ * be trapped for virtualization anyway.
+ */
+taint |= check_update_ftr_reg(SYS_ID_MMFR0_EL1, cpu,
+info->reg_id_mmfr0, boot->reg_id_mmfr0);
+taint |= check_update_ftr_reg(SYS_ID_MMFR1_EL1, cpu,
+info->reg_id_mmfr1, boot->reg_id_mmfr1);
+taint |= check_update_ftr_reg(SYS_ID_MMFR2_EL1, cpu,
+info->reg_id_mmfr2, boot->reg_id_mmfr2);
+taint |= check_update_ftr_reg(SYS_ID_MMFR3_EL1, cpu,
+info->reg_id_mmfr3, boot->reg_id_mmfr3);
+taint |= check_update_ftr_reg(SYS_ID_MMFR4_EL1, cpu,
+info->reg_id_mmfr4, boot->reg_id_mmfr4);
+taint |= check_update_ftr_reg(SYS_ID_MMFR5_EL1, cpu,
+info->reg_id_mmfr5, boot->reg_id_mmfr5);
+taint |= check_update_ftr_reg(SYS_ID_PFR0_EL1, cpu,
+info->reg_id_pfr0, boot->reg_id_pfr0);
+taint |= check_update_ftr_reg(SYS_ID_PFR1_EL1, cpu,
+info->reg_id_pfr1, boot->reg_id_pfr1);
+taint |= check_update_ftr_reg(SYS_ID_PFR2_EL1, cpu,
+info->reg_id_pfr2, boot->reg_id_pfr2);
+taint |= check_update_ftr_reg(SYS_MVFR0_EL1, cpu,
+info->reg_mvfr0, boot->reg_mvfr0);
+taint |= check_update_ftr_reg(SYS_MVFR1_EL1, cpu,
+info->reg_mvfr1, boot->reg_mvfr1);
+taint |= check_update_ftr_reg(SYS_MVFR2_EL1, cpu,
+info->reg_mvfr2, boot->reg_mvfr2);
+
+return taint;
+}
+
 /*
  * Update system wide CPU feature registers with the values from a
  * non-boot CPU. Also performs SANITY checks to make sure that there
@@ -753,9 +1026,6 @@ void update_cpu_features(int cpu,
 taint |= check_update_ftr_reg(SYS_ID_AA64MMFR2_EL1, cpu,
 info->reg_id_aa64mmfr2, boot->reg_id_aa64mmfr2);
 
-/*
- * EL3 is not our concern.
- */
 taint |= check_update_ftr_reg(SYS_ID_AA64PFR0_EL1, cpu,
 info->reg_id_aa64pfr0, boot->reg_id_aa64pfr0);
 taint |= check_update_ftr_reg(SYS_ID_AA64PFR1_EL1, cpu,
@@ -764,55 +1034,6 @@ void update_cpu_features(int cpu,
 taint |= check_update_ftr_reg(SYS_ID_AA64ZFR0_EL1, cpu,
 info->reg_id_aa64zfr0, boot->reg_id_aa64zfr0);
 
-/*
- * If we have AArch32, we care about 32-bit features for compat.
- * If the system doesn't support AArch32, don't update them.
- */
-if (id_aa64pfr0_32bit_el0(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1)) &&
-id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
-
-taint |= check_update_ftr_reg(SYS_ID_DFR0_EL1, cpu,
-info->reg_id_dfr0, boot->reg_id_dfr0);
-taint |= check_update_ftr_reg(SYS_ID_ISAR0_EL1, cpu,
-info->reg_id_isar0, boot->reg_id_isar0);
-taint |= check_update_ftr_reg(SYS_ID_ISAR1_EL1, cpu,
-info->reg_id_isar1, boot->reg_id_isar1);
-taint |= check_update_ftr_reg(SYS_ID_ISAR2_EL1, cpu,
-info->reg_id_isar2, boot->reg_id_isar2);
-taint |= check_update_ftr_reg(SYS_ID_ISAR3_EL1, cpu,
-info->reg_id_isar3, boot->reg_id_isar3);
-taint |= check_update_ftr_reg(SYS_ID_ISAR4_EL1, cpu,
-info->reg_id_isar4, boot->reg_id_isar4);
-taint |= check_update_ftr_reg(SYS_ID_ISAR5_EL1, cpu,
-info->reg_id_isar5, boot->reg_id_isar5);
-taint |= check_update_ftr_reg(SYS_ID_ISAR6_EL1, cpu,
-info->reg_id_isar6, boot->reg_id_isar6);
-
-/*
- * Regardless of the value of the AuxReg field, the AIFSR, ADFSR, and
- * ACTLR formats could differ across CPUs and therefore would have to
- * be trapped for virtualization anyway.
- */
-taint |= check_update_ftr_reg(SYS_ID_MMFR0_EL1, cpu,
-info->reg_id_mmfr0, boot->reg_id_mmfr0);
-taint |= check_update_ftr_reg(SYS_ID_MMFR1_EL1, cpu,
-info->reg_id_mmfr1, boot->reg_id_mmfr1);
-taint |= check_update_ftr_reg(SYS_ID_MMFR2_EL1, cpu,
-info->reg_id_mmfr2, boot->reg_id_mmfr2);
-taint |= check_update_ftr_reg(SYS_ID_MMFR3_EL1, cpu,
-info->reg_id_mmfr3, boot->reg_id_mmfr3);
-taint |= check_update_ftr_reg(SYS_ID_PFR0_EL1, cpu,
-info->reg_id_pfr0, boot->reg_id_pfr0);
-taint |= check_update_ftr_reg(SYS_ID_PFR1_EL1, cpu,
-info->reg_id_pfr1, boot->reg_id_pfr1);
-taint |= check_update_ftr_reg(SYS_MVFR0_EL1, cpu,
-info->reg_mvfr0, boot->reg_mvfr0);
-taint |= check_update_ftr_reg(SYS_MVFR1_EL1, cpu,
-info->reg_mvfr1, boot->reg_mvfr1);
-taint |= check_update_ftr_reg(SYS_MVFR2_EL1, cpu,
-info->reg_mvfr2, boot->reg_mvfr2);
-}
-
 if (id_aa64pfr0_sve(info->reg_id_aa64pfr0)) {
 taint |= check_update_ftr_reg(SYS_ZCR_EL1, cpu,
 info->reg_zcr, boot->reg_zcr);
@@ -823,6 +1044,12 @@ void update_cpu_features(int cpu,
 sve_update_vq_map();
 }
 
+/*
+ * This relies on a sanitised view of the AArch64 ID registers
+ * (e.g. SYS_ID_AA64PFR0_EL1), so we call it last.
+ */
+taint |= update_32bit_cpu_features(cpu, info, boot);
+
 /*
  * Mismatched CPU features are a recipe for disaster. Don't even
  * pretend to support them.
@@ -837,8 +1064,8 @@ u64 read_sanitised_ftr_reg(u32 id)
 {
 struct arm64_ftr_reg *regp = get_arm64_ftr_reg(id);
 
-/* We shouldn't get a request for an unsupported register */
-BUG_ON(!regp);
+if (!regp)
+return 0;
 
 return regp->sys_val;
 }
@ -854,11 +1081,15 @@ static u64 __read_sysreg_by_encoding(u32 sys_id)
switch (sys_id) { switch (sys_id) {
read_sysreg_case(SYS_ID_PFR0_EL1); read_sysreg_case(SYS_ID_PFR0_EL1);
read_sysreg_case(SYS_ID_PFR1_EL1); read_sysreg_case(SYS_ID_PFR1_EL1);
read_sysreg_case(SYS_ID_PFR2_EL1);
read_sysreg_case(SYS_ID_DFR0_EL1); read_sysreg_case(SYS_ID_DFR0_EL1);
read_sysreg_case(SYS_ID_DFR1_EL1);
read_sysreg_case(SYS_ID_MMFR0_EL1); read_sysreg_case(SYS_ID_MMFR0_EL1);
read_sysreg_case(SYS_ID_MMFR1_EL1); read_sysreg_case(SYS_ID_MMFR1_EL1);
read_sysreg_case(SYS_ID_MMFR2_EL1); read_sysreg_case(SYS_ID_MMFR2_EL1);
read_sysreg_case(SYS_ID_MMFR3_EL1); read_sysreg_case(SYS_ID_MMFR3_EL1);
read_sysreg_case(SYS_ID_MMFR4_EL1);
read_sysreg_case(SYS_ID_MMFR5_EL1);
read_sysreg_case(SYS_ID_ISAR0_EL1); read_sysreg_case(SYS_ID_ISAR0_EL1);
read_sysreg_case(SYS_ID_ISAR1_EL1); read_sysreg_case(SYS_ID_ISAR1_EL1);
read_sysreg_case(SYS_ID_ISAR2_EL1); read_sysreg_case(SYS_ID_ISAR2_EL1);
@ -1409,6 +1640,21 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
} }
#endif #endif
#ifdef CONFIG_ARM64_BTI
static void bti_enable(const struct arm64_cpu_capabilities *__unused)
{
/*
* Use of X16/X17 for tail-calls and trampolines that jump to
* function entry points using BR is a requirement for
* marking binaries with GNU_PROPERTY_AARCH64_FEATURE_1_BTI.
* So, be strict and forbid other BRs using other registers to
* jump onto a PACIxSP instruction:
*/
sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_BT0 | SCTLR_EL1_BT1);
isb();
}
#endif /* CONFIG_ARM64_BTI */
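
For context: the user-visible counterpart of this enable hook is the new BTI HWCAP wired up further down in this series. A minimal userspace sketch of detecting it (assuming the usual getauxval() interface; HWCAP2_BTI comes from the arm64 uapi headers and is guarded here in case the toolchain headers predate it):

	#include <stdio.h>
	#include <sys/auxv.h>

	#ifndef HWCAP2_BTI
	#define HWCAP2_BTI	(1UL << 17)	/* arm64 uapi value */
	#endif

	int main(void)
	{
		if (getauxval(AT_HWCAP2) & HWCAP2_BTI)
			printf("BTI supported: executable mappings may use PROT_BTI\n");
		else
			printf("BTI not supported\n");
		return 0;
	}
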
/* Internal helper functions to match cpu capability type */ /* Internal helper functions to match cpu capability type */
static bool static bool
cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap) cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
@ -1511,6 +1757,18 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64PFR0_EL0_SHIFT, .field_pos = ID_AA64PFR0_EL0_SHIFT,
.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT, .min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
}, },
#ifdef CONFIG_KVM
{
.desc = "32-bit EL1 Support",
.capability = ARM64_HAS_32BIT_EL1,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64PFR0_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64PFR0_EL1_SHIFT,
.min_field_value = ID_AA64PFR0_EL1_32BIT_64BIT,
},
#endif
{ {
.desc = "Kernel page table isolation (KPTI)", .desc = "Kernel page table isolation (KPTI)",
.capability = ARM64_UNMAP_KERNEL_AT_EL0, .capability = ARM64_UNMAP_KERNEL_AT_EL0,
@ -1778,6 +2036,23 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sign = FTR_UNSIGNED, .sign = FTR_UNSIGNED,
.min_field_value = 1, .min_field_value = 1,
}, },
#endif
#ifdef CONFIG_ARM64_BTI
{
.desc = "Branch Target Identification",
.capability = ARM64_BTI,
#ifdef CONFIG_ARM64_BTI_KERNEL
.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
#else
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
#endif
.matches = has_cpuid_feature,
.cpu_enable = bti_enable,
.sys_reg = SYS_ID_AA64PFR1_EL1,
.field_pos = ID_AA64PFR1_BT_SHIFT,
.min_field_value = ID_AA64PFR1_BT_BTI,
.sign = FTR_UNSIGNED,
},
#endif #endif
{}, {},
}; };
@ -1888,6 +2163,9 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_F64MM_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_F64MM, CAP_HWCAP, KERNEL_HWCAP_SVEF64MM), HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_F64MM_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_F64MM, CAP_HWCAP, KERNEL_HWCAP_SVEF64MM),
#endif #endif
HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, KERNEL_HWCAP_SSBS), HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, KERNEL_HWCAP_SSBS),
#ifdef CONFIG_ARM64_BTI
HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_BT_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_BT_BTI, CAP_HWCAP, KERNEL_HWCAP_BTI),
#endif
#ifdef CONFIG_ARM64_PTR_AUTH #ifdef CONFIG_ARM64_PTR_AUTH
HWCAP_MULTI_CAP(ptr_auth_hwcap_addr_matches, CAP_HWCAP, KERNEL_HWCAP_PACA), HWCAP_MULTI_CAP(ptr_auth_hwcap_addr_matches, CAP_HWCAP, KERNEL_HWCAP_PACA),
HWCAP_MULTI_CAP(ptr_auth_hwcap_gen_matches, CAP_HWCAP, KERNEL_HWCAP_PACG), HWCAP_MULTI_CAP(ptr_auth_hwcap_gen_matches, CAP_HWCAP, KERNEL_HWCAP_PACG),
@ -2181,6 +2459,36 @@ static void verify_sve_features(void)
/* Add checks on other ZCR bits here if necessary */ /* Add checks on other ZCR bits here if necessary */
} }
static void verify_hyp_capabilities(void)
{
u64 safe_mmfr1, mmfr0, mmfr1;
int parange, ipa_max;
unsigned int safe_vmid_bits, vmid_bits;
if (!IS_ENABLED(CONFIG_KVM) || !IS_ENABLED(CONFIG_KVM_ARM_HOST))
return;
safe_mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
/* Verify VMID bits */
safe_vmid_bits = get_vmid_bits(safe_mmfr1);
vmid_bits = get_vmid_bits(mmfr1);
if (vmid_bits < safe_vmid_bits) {
pr_crit("CPU%d: VMID width mismatch\n", smp_processor_id());
cpu_die_early();
}
/* Verify IPA range */
parange = cpuid_feature_extract_unsigned_field(mmfr0,
ID_AA64MMFR0_PARANGE_SHIFT);
ipa_max = id_aa64mmfr0_parange_to_phys_shift(parange);
if (ipa_max < get_kvm_ipa_limit()) {
pr_crit("CPU%d: IPA range mismatch\n", smp_processor_id());
cpu_die_early();
}
}
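
As a worked example of the IPA check above (PARange encoding as defined by the Arm ARM; CPU values here are illustrative):

	/*
	 * id_aa64mmfr0_parange_to_phys_shift() maps ID_AA64MMFR0_EL1.PARange
	 * to a physical address width:
	 *   0b0000 -> 32   0b0001 -> 36   0b0010 -> 40   0b0011 -> 42
	 *   0b0100 -> 44   0b0101 -> 48   0b0110 -> 52
	 *
	 * If the boot CPU reported 0b0101 (48 bits) and KVM sized its IPA
	 * space accordingly, a late CPU reporting 0b0011 (42 bits) yields
	 * ipa_max < get_kvm_ipa_limit() and is parked via cpu_die_early().
	 */
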
/* /*
* Run through the enabled system capabilities and enable() it on this CPU. * Run through the enabled system capabilities and enable() it on this CPU.
@ -2206,6 +2514,9 @@ static void verify_local_cpu_capabilities(void)
if (system_supports_sve()) if (system_supports_sve())
verify_sve_features(); verify_sve_features();
if (is_hyp_mode_available())
verify_hyp_capabilities();
} }
void check_local_cpu_capabilities(void) void check_local_cpu_capabilities(void)
@ -2394,7 +2705,7 @@ static int emulate_sys_reg(u32 id, u64 *valp)
if (sys_reg_CRm(id) == 0) if (sys_reg_CRm(id) == 0)
return emulate_id_reg(id, valp); return emulate_id_reg(id, valp);
regp = get_arm64_ftr_reg(id); regp = get_arm64_ftr_reg_nowarn(id);
if (regp) if (regp)
*valp = arm64_ftr_reg_user_value(regp); *valp = arm64_ftr_reg_user_value(regp);
else else

@ -92,6 +92,7 @@ static const char *const hwcap_str[] = {
"bf16", "bf16",
"dgh", "dgh",
"rng", "rng",
"bti",
NULL NULL
}; };
@ -311,6 +312,8 @@ static int __init cpuinfo_regs_init(void)
} }
return 0; return 0;
} }
device_initcall(cpuinfo_regs_init);
static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info) static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info)
{ {
unsigned int cpu = smp_processor_id(); unsigned int cpu = smp_processor_id();
@ -362,6 +365,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
/* Update the 32bit ID registers only if AArch32 is implemented */ /* Update the 32bit ID registers only if AArch32 is implemented */
if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) { if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1); info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1);
info->reg_id_dfr1 = read_cpuid(ID_DFR1_EL1);
info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1); info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1);
info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1); info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1);
info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1); info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1);
@ -373,8 +377,11 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1); info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1);
info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1); info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1);
info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1); info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1);
info->reg_id_mmfr4 = read_cpuid(ID_MMFR4_EL1);
info->reg_id_mmfr5 = read_cpuid(ID_MMFR5_EL1);
info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1); info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1);
info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1); info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1);
info->reg_id_pfr2 = read_cpuid(ID_PFR2_EL1);
info->reg_mvfr0 = read_cpuid(MVFR0_EL1); info->reg_mvfr0 = read_cpuid(MVFR0_EL1);
info->reg_mvfr1 = read_cpuid(MVFR1_EL1); info->reg_mvfr1 = read_cpuid(MVFR1_EL1);
@ -403,5 +410,3 @@ void __init cpuinfo_store_boot_cpu(void)
boot_cpu_data = *info; boot_cpu_data = *info;
init_cpu_features(&boot_cpu_data); init_cpu_features(&boot_cpu_data);
} }
device_initcall(cpuinfo_regs_init);

@ -5,6 +5,7 @@
*/ */
#include <linux/crash_core.h> #include <linux/crash_core.h>
#include <asm/cpufeature.h>
#include <asm/memory.h> #include <asm/memory.h>
void arch_crash_save_vmcoreinfo(void) void arch_crash_save_vmcoreinfo(void)
@ -16,4 +17,7 @@ void arch_crash_save_vmcoreinfo(void)
vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n", vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
PHYS_OFFSET); PHYS_OFFSET);
vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset()); vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
vmcoreinfo_append_str("NUMBER(KERNELPACMASK)=0x%llx\n",
system_supports_address_auth() ?
ptrauth_kernel_pac_mask() : 0);
} }
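
A rough sketch of how a kdump/crash-style tool might consume the new KERNELPACMASK export (assumptions: kernel return addresses live in the upper, TTBR1 half of the VA space, so the PAC bits can simply be forced back to all-ones; the helper name is made up for illustration):

	/* userspace tool code, not part of the kernel */
	static unsigned long long strip_kernel_pac(unsigned long long lr,
						   unsigned long long pac_mask)
	{
		/* pac_mask is NUMBER(KERNELPACMASK); 0 means no ptrauth */
		return pac_mask ? (lr | pac_mask) : lr;
	}
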

@ -376,15 +376,13 @@ int aarch32_break_handler(struct pt_regs *regs)
} }
NOKPROBE_SYMBOL(aarch32_break_handler); NOKPROBE_SYMBOL(aarch32_break_handler);
static int __init debug_traps_init(void) void __init debug_traps_init(void)
{ {
hook_debug_fault_code(DBG_ESR_EVT_HWSS, single_step_handler, SIGTRAP, hook_debug_fault_code(DBG_ESR_EVT_HWSS, single_step_handler, SIGTRAP,
TRAP_TRACE, "single-step handler"); TRAP_TRACE, "single-step handler");
hook_debug_fault_code(DBG_ESR_EVT_BRK, brk_handler, SIGTRAP, hook_debug_fault_code(DBG_ESR_EVT_BRK, brk_handler, SIGTRAP,
TRAP_BRKPT, "ptrace BRK handler"); TRAP_BRKPT, "ptrace BRK handler");
return 0;
} }
arch_initcall(debug_traps_init);
/* Re-enable single step for syscall restarting. */ /* Re-enable single step for syscall restarting. */
void user_rewind_single_step(struct task_struct *task) void user_rewind_single_step(struct task_struct *task)

@ -19,7 +19,7 @@ SYM_CODE_START(efi_enter_kernel)
* point stored in x0. Save those values in registers which are * point stored in x0. Save those values in registers which are
* callee preserved. * callee preserved.
*/ */
ldr w2, =stext_offset ldr w2, =primary_entry_offset
add x19, x0, x2 // relocated Image entrypoint add x19, x0, x2 // relocated Image entrypoint
mov x20, x1 // DTB address mov x20, x1 // DTB address

@ -32,7 +32,7 @@ optional_header:
extra_header_fields: extra_header_fields:
.quad 0 // ImageBase .quad 0 // ImageBase
.long SZ_4K // SectionAlignment .long SEGMENT_ALIGN // SectionAlignment
.long PECOFF_FILE_ALIGNMENT // FileAlignment .long PECOFF_FILE_ALIGNMENT // FileAlignment
.short 0 // MajorOperatingSystemVersion .short 0 // MajorOperatingSystemVersion
.short 0 // MinorOperatingSystemVersion .short 0 // MinorOperatingSystemVersion

@ -5,7 +5,7 @@
#include <linux/linkage.h> #include <linux/linkage.h>
ENTRY(__efi_rt_asm_wrapper) SYM_FUNC_START(__efi_rt_asm_wrapper)
stp x29, x30, [sp, #-32]! stp x29, x30, [sp, #-32]!
mov x29, sp mov x29, sp
@ -34,5 +34,14 @@ ENTRY(__efi_rt_asm_wrapper)
ldp x29, x30, [sp], #32 ldp x29, x30, [sp], #32
b.ne 0f b.ne 0f
ret ret
0: b efi_handle_corrupted_x18 // tail call 0:
ENDPROC(__efi_rt_asm_wrapper) /*
* With CONFIG_SHADOW_CALL_STACK, the kernel uses x18 to store a
* shadow stack pointer, which we need to restore before returning to
* potentially instrumented code. This is safe because the wrapper is
* called with preemption disabled and a separate shadow stack is used
* for interrupts.
*/
mov x18, x2
b efi_handle_corrupted_x18 // tail call
SYM_FUNC_END(__efi_rt_asm_wrapper)

@ -94,7 +94,7 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
break; break;
default: default:
el1_inv(regs, esr); el1_inv(regs, esr);
}; }
} }
NOKPROBE_SYMBOL(el1_sync_handler); NOKPROBE_SYMBOL(el1_sync_handler);
@ -188,6 +188,14 @@ static void notrace el0_undef(struct pt_regs *regs)
} }
NOKPROBE_SYMBOL(el0_undef); NOKPROBE_SYMBOL(el0_undef);
static void notrace el0_bti(struct pt_regs *regs)
{
user_exit_irqoff();
local_daif_restore(DAIF_PROCCTX);
do_bti(regs);
}
NOKPROBE_SYMBOL(el0_bti);
static void notrace el0_inv(struct pt_regs *regs, unsigned long esr) static void notrace el0_inv(struct pt_regs *regs, unsigned long esr)
{ {
user_exit_irqoff(); user_exit_irqoff();
@ -255,6 +263,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
case ESR_ELx_EC_UNKNOWN: case ESR_ELx_EC_UNKNOWN:
el0_undef(regs); el0_undef(regs);
break; break;
case ESR_ELx_EC_BTI:
el0_bti(regs);
break;
case ESR_ELx_EC_BREAKPT_LOW: case ESR_ELx_EC_BREAKPT_LOW:
case ESR_ELx_EC_SOFTSTP_LOW: case ESR_ELx_EC_SOFTSTP_LOW:
case ESR_ELx_EC_WATCHPT_LOW: case ESR_ELx_EC_WATCHPT_LOW:

@ -16,34 +16,34 @@
* *
* x0 - pointer to struct fpsimd_state * x0 - pointer to struct fpsimd_state
*/ */
ENTRY(fpsimd_save_state) SYM_FUNC_START(fpsimd_save_state)
fpsimd_save x0, 8 fpsimd_save x0, 8
ret ret
ENDPROC(fpsimd_save_state) SYM_FUNC_END(fpsimd_save_state)
/* /*
* Load the FP registers. * Load the FP registers.
* *
* x0 - pointer to struct fpsimd_state * x0 - pointer to struct fpsimd_state
*/ */
ENTRY(fpsimd_load_state) SYM_FUNC_START(fpsimd_load_state)
fpsimd_restore x0, 8 fpsimd_restore x0, 8
ret ret
ENDPROC(fpsimd_load_state) SYM_FUNC_END(fpsimd_load_state)
#ifdef CONFIG_ARM64_SVE #ifdef CONFIG_ARM64_SVE
ENTRY(sve_save_state) SYM_FUNC_START(sve_save_state)
sve_save 0, x1, 2 sve_save 0, x1, 2
ret ret
ENDPROC(sve_save_state) SYM_FUNC_END(sve_save_state)
ENTRY(sve_load_state) SYM_FUNC_START(sve_load_state)
sve_load 0, x1, x2, 3, x4 sve_load 0, x1, x2, 3, x4
ret ret
ENDPROC(sve_load_state) SYM_FUNC_END(sve_load_state)
ENTRY(sve_get_vl) SYM_FUNC_START(sve_get_vl)
_sve_rdvl 0, 1 _sve_rdvl 0, 1
ret ret
ENDPROC(sve_get_vl) SYM_FUNC_END(sve_get_vl)
#endif /* CONFIG_ARM64_SVE */ #endif /* CONFIG_ARM64_SVE */

@ -23,8 +23,9 @@
* *
* ... where <entry> is either ftrace_caller or ftrace_regs_caller. * ... where <entry> is either ftrace_caller or ftrace_regs_caller.
* *
* Each instrumented function follows the AAPCS, so here x0-x8 and x19-x30 are * Each instrumented function follows the AAPCS, so here x0-x8 and x18-x30 are
* live, and x9-x18 are safe to clobber. * live (x18 holds the Shadow Call Stack pointer), and x9-x17 are safe to
* clobber.
* *
* We save the callsite's context into a pt_regs before invoking any ftrace * We save the callsite's context into a pt_regs before invoking any ftrace
* callbacks. So that we can get a sensible backtrace, we create a stack record * callbacks. So that we can get a sensible backtrace, we create a stack record

@ -23,6 +23,7 @@
#include <asm/mmu.h> #include <asm/mmu.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/scs.h>
#include <asm/thread_info.h> #include <asm/thread_info.h>
#include <asm/asm-uaccess.h> #include <asm/asm-uaccess.h>
#include <asm/unistd.h> #include <asm/unistd.h>
@ -178,7 +179,9 @@ alternative_cb_end
apply_ssbd 1, x22, x23 apply_ssbd 1, x22, x23
ptrauth_keys_install_kernel tsk, 1, x20, x22, x23 ptrauth_keys_install_kernel tsk, x20, x22, x23
scs_load tsk, x20
.else .else
add x21, sp, #S_FRAME_SIZE add x21, sp, #S_FRAME_SIZE
get_current_task tsk get_current_task tsk
@ -343,6 +346,8 @@ alternative_else_nop_endif
msr cntkctl_el1, x1 msr cntkctl_el1, x1
4: 4:
#endif #endif
scs_save tsk, x0
/* No kernel C function calls after this as user keys are set. */ /* No kernel C function calls after this as user keys are set. */
ptrauth_keys_install_user tsk, x0, x1, x2 ptrauth_keys_install_user tsk, x0, x1, x2
@ -388,6 +393,9 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
.macro irq_stack_entry .macro irq_stack_entry
mov x19, sp // preserve the original sp mov x19, sp // preserve the original sp
#ifdef CONFIG_SHADOW_CALL_STACK
mov x24, scs_sp // preserve the original shadow stack
#endif
/* /*
* Compare sp with the base of the task stack. * Compare sp with the base of the task stack.
@ -405,15 +413,25 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
/* switch to the irq stack */ /* switch to the irq stack */
mov sp, x26 mov sp, x26
#ifdef CONFIG_SHADOW_CALL_STACK
/* also switch to the irq shadow stack */
adr_this_cpu scs_sp, irq_shadow_call_stack, x26
#endif
9998: 9998:
.endm .endm
/* /*
* x19 should be preserved between irq_stack_entry and * The callee-saved regs (x19-x29) should be preserved between
* irq_stack_exit. * irq_stack_entry and irq_stack_exit, but note that kernel_entry
* uses x20-x23 to store data for later use.
*/ */
.macro irq_stack_exit .macro irq_stack_exit
mov sp, x19 mov sp, x19
#ifdef CONFIG_SHADOW_CALL_STACK
mov scs_sp, x24
#endif
.endm .endm
/* GPRs used by entry code */ /* GPRs used by entry code */
@ -727,21 +745,10 @@ el0_error_naked:
b ret_to_user b ret_to_user
SYM_CODE_END(el0_error) SYM_CODE_END(el0_error)
/*
* Ok, we need to do extra processing, enter the slow path.
*/
work_pending:
mov x0, sp // 'regs'
bl do_notify_resume
#ifdef CONFIG_TRACE_IRQFLAGS
bl trace_hardirqs_on // enabled while in userspace
#endif
ldr x1, [tsk, #TSK_TI_FLAGS] // re-check for single-step
b finish_ret_to_user
/* /*
* "slow" syscall return path. * "slow" syscall return path.
*/ */
ret_to_user: SYM_CODE_START_LOCAL(ret_to_user)
disable_daif disable_daif
gic_prio_kentry_setup tmp=x3 gic_prio_kentry_setup tmp=x3
ldr x1, [tsk, #TSK_TI_FLAGS] ldr x1, [tsk, #TSK_TI_FLAGS]
@ -753,7 +760,19 @@ finish_ret_to_user:
bl stackleak_erase bl stackleak_erase
#endif #endif
kernel_exit 0 kernel_exit 0
ENDPROC(ret_to_user)
/*
* Ok, we need to do extra processing, enter the slow path.
*/
work_pending:
mov x0, sp // 'regs'
bl do_notify_resume
#ifdef CONFIG_TRACE_IRQFLAGS
bl trace_hardirqs_on // enabled while in userspace
#endif
ldr x1, [tsk, #TSK_TI_FLAGS] // re-check for single-step
b finish_ret_to_user
SYM_CODE_END(ret_to_user)
.popsection // .entry.text .popsection // .entry.text
@ -900,7 +919,9 @@ SYM_FUNC_START(cpu_switch_to)
ldr lr, [x8] ldr lr, [x8]
mov sp, x9 mov sp, x9
msr sp_el0, x1 msr sp_el0, x1
ptrauth_keys_install_kernel x1, 1, x8, x9, x10 ptrauth_keys_install_kernel x1, x8, x9, x10
scs_save x0, x8
scs_load x1, x8
ret ret
SYM_FUNC_END(cpu_switch_to) SYM_FUNC_END(cpu_switch_to)
NOKPROBE(cpu_switch_to) NOKPROBE(cpu_switch_to)
@ -1029,13 +1050,16 @@ SYM_CODE_START(__sdei_asm_handler)
mov x19, x1 mov x19, x1
#if defined(CONFIG_VMAP_STACK) || defined(CONFIG_SHADOW_CALL_STACK)
ldrb w4, [x19, #SDEI_EVENT_PRIORITY]
#endif
#ifdef CONFIG_VMAP_STACK #ifdef CONFIG_VMAP_STACK
/* /*
* entry.S may have been using sp as a scratch register, find whether * entry.S may have been using sp as a scratch register, find whether
* this is a normal or critical event and switch to the appropriate * this is a normal or critical event and switch to the appropriate
* stack for this CPU. * stack for this CPU.
*/ */
ldrb w4, [x19, #SDEI_EVENT_PRIORITY]
cbnz w4, 1f cbnz w4, 1f
ldr_this_cpu dst=x5, sym=sdei_stack_normal_ptr, tmp=x6 ldr_this_cpu dst=x5, sym=sdei_stack_normal_ptr, tmp=x6
b 2f b 2f
@ -1045,6 +1069,15 @@ SYM_CODE_START(__sdei_asm_handler)
mov sp, x5 mov sp, x5
#endif #endif
#ifdef CONFIG_SHADOW_CALL_STACK
/* Use a separate shadow call stack for normal and critical events */
cbnz w4, 3f
adr_this_cpu dst=scs_sp, sym=sdei_shadow_call_stack_normal, tmp=x6
b 4f
3: adr_this_cpu dst=scs_sp, sym=sdei_shadow_call_stack_critical, tmp=x6
4:
#endif
/* /*
* We may have interrupted userspace, or a guest, or exit-from or * We may have interrupted userspace, or a guest, or exit-from or
* return-to either of these. We can't trust sp_el0, restore it. * return-to either of these. We can't trust sp_el0, restore it.

@ -13,6 +13,7 @@
#include <linux/init.h> #include <linux/init.h>
#include <linux/irqchip/arm-gic-v3.h> #include <linux/irqchip/arm-gic-v3.h>
#include <asm/asm_pointer_auth.h>
#include <asm/assembler.h> #include <asm/assembler.h>
#include <asm/boot.h> #include <asm/boot.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
@ -27,6 +28,7 @@
#include <asm/pgtable-hwdef.h> #include <asm/pgtable-hwdef.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/page.h> #include <asm/page.h>
#include <asm/scs.h>
#include <asm/smp.h> #include <asm/smp.h>
#include <asm/sysreg.h> #include <asm/sysreg.h>
#include <asm/thread_info.h> #include <asm/thread_info.h>
@ -70,9 +72,9 @@ _head:
* its opcode forms the magic "MZ" signature required by UEFI. * its opcode forms the magic "MZ" signature required by UEFI.
*/ */
add x13, x18, #0x16 add x13, x18, #0x16
b stext b primary_entry
#else #else
b stext // branch to kernel start, magic b primary_entry // branch to kernel start, magic
.long 0 // reserved .long 0 // reserved
#endif #endif
le64sym _kernel_offset_le // Image load offset from start of RAM, little-endian le64sym _kernel_offset_le // Image load offset from start of RAM, little-endian
@ -98,14 +100,13 @@ pe_header:
* primary lowlevel boot path: * primary lowlevel boot path:
* *
* Register Scope Purpose * Register Scope Purpose
* x21 stext() .. start_kernel() FDT pointer passed at boot in x0 * x21 primary_entry() .. start_kernel() FDT pointer passed at boot in x0
* x23 stext() .. start_kernel() physical misalignment/KASLR offset * x23 primary_entry() .. start_kernel() physical misalignment/KASLR offset
* x28 __create_page_tables() callee preserved temp register * x28 __create_page_tables() callee preserved temp register
* x19/x20 __primary_switch() callee preserved temp registers * x19/x20 __primary_switch() callee preserved temp registers
* x24 __primary_switch() .. relocate_kernel() * x24 __primary_switch() .. relocate_kernel() current RELR displacement
* current RELR displacement
*/ */
SYM_CODE_START(stext) SYM_CODE_START(primary_entry)
bl preserve_boot_args bl preserve_boot_args
bl el2_setup // Drop to EL1, w0=cpu_boot_mode bl el2_setup // Drop to EL1, w0=cpu_boot_mode
adrp x23, __PHYS_OFFSET adrp x23, __PHYS_OFFSET
@ -118,10 +119,9 @@ SYM_CODE_START(stext)
* On return, the CPU will be ready for the MMU to be turned on and * On return, the CPU will be ready for the MMU to be turned on and
* the TCR will have been set. * the TCR will have been set.
*/ */
mov x0, #ARM64_CPU_BOOT_PRIMARY
bl __cpu_setup // initialise processor bl __cpu_setup // initialise processor
b __primary_switch b __primary_switch
SYM_CODE_END(stext) SYM_CODE_END(primary_entry)
/* /*
* Preserve the arguments passed by the bootloader in x0 .. x3 * Preserve the arguments passed by the bootloader in x0 .. x3
@ -394,13 +394,19 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
/* /*
* Since the page tables have been populated with non-cacheable * Since the page tables have been populated with non-cacheable
* accesses (MMU disabled), invalidate the idmap and swapper page * accesses (MMU disabled), invalidate those tables again to
* tables again to remove any speculatively loaded cache lines. * remove any speculatively loaded cache lines.
*/ */
dmb sy
adrp x0, idmap_pg_dir adrp x0, idmap_pg_dir
adrp x1, idmap_pg_end
sub x1, x1, x0
bl __inval_dcache_area
adrp x0, init_pg_dir
adrp x1, init_pg_end adrp x1, init_pg_end
sub x1, x1, x0 sub x1, x1, x0
dmb sy
bl __inval_dcache_area bl __inval_dcache_area
ret x28 ret x28
@ -417,6 +423,10 @@ SYM_FUNC_START_LOCAL(__primary_switched)
adr_l x5, init_task adr_l x5, init_task
msr sp_el0, x5 // Save thread_info msr sp_el0, x5 // Save thread_info
#ifdef CONFIG_ARM64_PTR_AUTH
__ptrauth_keys_init_cpu x5, x6, x7, x8
#endif
adr_l x8, vectors // load VBAR_EL1 with virtual adr_l x8, vectors // load VBAR_EL1 with virtual
msr vbar_el1, x8 // vector table address msr vbar_el1, x8 // vector table address
isb isb
@ -424,6 +434,10 @@ SYM_FUNC_START_LOCAL(__primary_switched)
stp xzr, x30, [sp, #-16]! stp xzr, x30, [sp, #-16]!
mov x29, sp mov x29, sp
#ifdef CONFIG_SHADOW_CALL_STACK
adr_l scs_sp, init_shadow_call_stack // Set shadow call stack
#endif
str_l x21, __fdt_pointer, x5 // Save FDT pointer str_l x21, __fdt_pointer, x5 // Save FDT pointer
ldr_l x4, kimage_vaddr // Save the offset between ldr_l x4, kimage_vaddr // Save the offset between
@ -717,7 +731,6 @@ SYM_FUNC_START_LOCAL(secondary_startup)
* Common entry point for secondary CPUs. * Common entry point for secondary CPUs.
*/ */
bl __cpu_secondary_check52bitva bl __cpu_secondary_check52bitva
mov x0, #ARM64_CPU_BOOT_SECONDARY
bl __cpu_setup // initialise processor bl __cpu_setup // initialise processor
adrp x1, swapper_pg_dir adrp x1, swapper_pg_dir
bl __enable_mmu bl __enable_mmu
@ -737,8 +750,14 @@ SYM_FUNC_START_LOCAL(__secondary_switched)
ldr x2, [x0, #CPU_BOOT_TASK] ldr x2, [x0, #CPU_BOOT_TASK]
cbz x2, __secondary_too_slow cbz x2, __secondary_too_slow
msr sp_el0, x2 msr sp_el0, x2
scs_load x2, x3
mov x29, #0 mov x29, #0
mov x30, #0 mov x30, #0
#ifdef CONFIG_ARM64_PTR_AUTH
ptrauth_keys_init_cpu x2, x3, x4, x5
#endif
b secondary_start_kernel b secondary_start_kernel
SYM_FUNC_END(__secondary_switched) SYM_FUNC_END(__secondary_switched)

@ -65,7 +65,7 @@
* x5: physical address of a zero page that remains zero after resume * x5: physical address of a zero page that remains zero after resume
*/ */
.pushsection ".hibernate_exit.text", "ax" .pushsection ".hibernate_exit.text", "ax"
ENTRY(swsusp_arch_suspend_exit) SYM_CODE_START(swsusp_arch_suspend_exit)
/* /*
* We execute from ttbr0, change ttbr1 to our copied linear map tables * We execute from ttbr0, change ttbr1 to our copied linear map tables
* with a break-before-make via the zero page * with a break-before-make via the zero page
@ -110,7 +110,7 @@ ENTRY(swsusp_arch_suspend_exit)
cbz x24, 3f /* Do we need to re-initialise EL2? */ cbz x24, 3f /* Do we need to re-initialise EL2? */
hvc #0 hvc #0
3: ret 3: ret
ENDPROC(swsusp_arch_suspend_exit) SYM_CODE_END(swsusp_arch_suspend_exit)
/* /*
* Restore the hyp stub. * Restore the hyp stub.
@ -119,15 +119,15 @@ ENDPROC(swsusp_arch_suspend_exit)
* *
* x24: The physical address of __hyp_stub_vectors * x24: The physical address of __hyp_stub_vectors
*/ */
el1_sync: SYM_CODE_START_LOCAL(el1_sync)
msr vbar_el2, x24 msr vbar_el2, x24
eret eret
ENDPROC(el1_sync) SYM_CODE_END(el1_sync)
.macro invalid_vector label .macro invalid_vector label
\label: SYM_CODE_START_LOCAL(\label)
b \label b \label
ENDPROC(\label) SYM_CODE_END(\label)
.endm .endm
invalid_vector el2_sync_invalid invalid_vector el2_sync_invalid
@ -141,7 +141,7 @@ ENDPROC(\label)
/* el2 vectors - switch el2 here while we restore the memory image. */ /* el2 vectors - switch el2 here while we restore the memory image. */
.align 11 .align 11
ENTRY(hibernate_el2_vectors) SYM_CODE_START(hibernate_el2_vectors)
ventry el2_sync_invalid // Synchronous EL2t ventry el2_sync_invalid // Synchronous EL2t
ventry el2_irq_invalid // IRQ EL2t ventry el2_irq_invalid // IRQ EL2t
ventry el2_fiq_invalid // FIQ EL2t ventry el2_fiq_invalid // FIQ EL2t
@ -161,6 +161,6 @@ ENTRY(hibernate_el2_vectors)
ventry el1_irq_invalid // IRQ 32-bit EL1 ventry el1_irq_invalid // IRQ 32-bit EL1
ventry el1_fiq_invalid // FIQ 32-bit EL1 ventry el1_fiq_invalid // FIQ 32-bit EL1
ventry el1_error_invalid // Error 32-bit EL1 ventry el1_error_invalid // Error 32-bit EL1
END(hibernate_el2_vectors) SYM_CODE_END(hibernate_el2_vectors)
.popsection .popsection

@ -21,7 +21,7 @@
.align 11 .align 11
ENTRY(__hyp_stub_vectors) SYM_CODE_START(__hyp_stub_vectors)
ventry el2_sync_invalid // Synchronous EL2t ventry el2_sync_invalid // Synchronous EL2t
ventry el2_irq_invalid // IRQ EL2t ventry el2_irq_invalid // IRQ EL2t
ventry el2_fiq_invalid // FIQ EL2t ventry el2_fiq_invalid // FIQ EL2t
@ -41,11 +41,11 @@ ENTRY(__hyp_stub_vectors)
ventry el1_irq_invalid // IRQ 32-bit EL1 ventry el1_irq_invalid // IRQ 32-bit EL1
ventry el1_fiq_invalid // FIQ 32-bit EL1 ventry el1_fiq_invalid // FIQ 32-bit EL1
ventry el1_error_invalid // Error 32-bit EL1 ventry el1_error_invalid // Error 32-bit EL1
ENDPROC(__hyp_stub_vectors) SYM_CODE_END(__hyp_stub_vectors)
.align 11 .align 11
el1_sync: SYM_CODE_START_LOCAL(el1_sync)
cmp x0, #HVC_SET_VECTORS cmp x0, #HVC_SET_VECTORS
b.ne 2f b.ne 2f
msr vbar_el2, x1 msr vbar_el2, x1
@ -68,12 +68,12 @@ el1_sync:
9: mov x0, xzr 9: mov x0, xzr
eret eret
ENDPROC(el1_sync) SYM_CODE_END(el1_sync)
.macro invalid_vector label .macro invalid_vector label
\label: SYM_CODE_START_LOCAL(\label)
b \label b \label
ENDPROC(\label) SYM_CODE_END(\label)
.endm .endm
invalid_vector el2_sync_invalid invalid_vector el2_sync_invalid
@ -106,15 +106,15 @@ ENDPROC(\label)
* initialisation entry point. * initialisation entry point.
*/ */
ENTRY(__hyp_set_vectors) SYM_FUNC_START(__hyp_set_vectors)
mov x1, x0 mov x1, x0
mov x0, #HVC_SET_VECTORS mov x0, #HVC_SET_VECTORS
hvc #0 hvc #0
ret ret
ENDPROC(__hyp_set_vectors) SYM_FUNC_END(__hyp_set_vectors)
ENTRY(__hyp_reset_vectors) SYM_FUNC_START(__hyp_reset_vectors)
mov x0, #HVC_RESET_VECTORS mov x0, #HVC_RESET_VECTORS
hvc #0 hvc #0
ret ret
ENDPROC(__hyp_reset_vectors) SYM_FUNC_END(__hyp_reset_vectors)

@ -13,7 +13,7 @@
#ifdef CONFIG_EFI #ifdef CONFIG_EFI
__efistub_kernel_size = _edata - _text; __efistub_kernel_size = _edata - _text;
__efistub_stext_offset = stext - _text; __efistub_primary_entry_offset = primary_entry - _text;
/* /*

@ -51,21 +51,33 @@ enum aarch64_insn_encoding_class __kprobes aarch64_get_insn_class(u32 insn)
return aarch64_insn_encoding_class[(insn >> 25) & 0xf]; return aarch64_insn_encoding_class[(insn >> 25) & 0xf];
} }
/* NOP is an alias of HINT */ bool __kprobes aarch64_insn_is_steppable_hint(u32 insn)
bool __kprobes aarch64_insn_is_nop(u32 insn)
{ {
if (!aarch64_insn_is_hint(insn)) if (!aarch64_insn_is_hint(insn))
return false; return false;
switch (insn & 0xFE0) { switch (insn & 0xFE0) {
case AARCH64_INSN_HINT_YIELD: case AARCH64_INSN_HINT_XPACLRI:
case AARCH64_INSN_HINT_WFE: case AARCH64_INSN_HINT_PACIA_1716:
case AARCH64_INSN_HINT_WFI: case AARCH64_INSN_HINT_PACIB_1716:
case AARCH64_INSN_HINT_SEV: case AARCH64_INSN_HINT_AUTIA_1716:
case AARCH64_INSN_HINT_SEVL: case AARCH64_INSN_HINT_AUTIB_1716:
return false; case AARCH64_INSN_HINT_PACIAZ:
default: case AARCH64_INSN_HINT_PACIASP:
case AARCH64_INSN_HINT_PACIBZ:
case AARCH64_INSN_HINT_PACIBSP:
case AARCH64_INSN_HINT_AUTIAZ:
case AARCH64_INSN_HINT_AUTIASP:
case AARCH64_INSN_HINT_AUTIBZ:
case AARCH64_INSN_HINT_AUTIBSP:
case AARCH64_INSN_HINT_BTI:
case AARCH64_INSN_HINT_BTIC:
case AARCH64_INSN_HINT_BTIJ:
case AARCH64_INSN_HINT_BTIJC:
case AARCH64_INSN_HINT_NOP:
return true; return true;
default:
return false;
} }
} }
@ -574,7 +586,7 @@ u32 aarch64_insn_gen_cond_branch_imm(unsigned long pc, unsigned long addr,
offset >> 2); offset >> 2);
} }
u32 __kprobes aarch64_insn_gen_hint(enum aarch64_insn_hint_op op) u32 __kprobes aarch64_insn_gen_hint(enum aarch64_insn_hint_cr_op op)
{ {
return aarch64_insn_get_hint_value() | op; return aarch64_insn_get_hint_value() | op;
} }
@ -1535,16 +1547,10 @@ static u32 aarch64_encode_immediate(u64 imm,
u32 insn) u32 insn)
{ {
unsigned int immr, imms, n, ones, ror, esz, tmp; unsigned int immr, imms, n, ones, ror, esz, tmp;
u64 mask = ~0UL; u64 mask;
/* Can't encode full zeroes or full ones */
if (!imm || !~imm)
return AARCH64_BREAK_FAULT;
switch (variant) { switch (variant) {
case AARCH64_INSN_VARIANT_32BIT: case AARCH64_INSN_VARIANT_32BIT:
if (upper_32_bits(imm))
return AARCH64_BREAK_FAULT;
esz = 32; esz = 32;
break; break;
case AARCH64_INSN_VARIANT_64BIT: case AARCH64_INSN_VARIANT_64BIT:
@ -1556,6 +1562,12 @@ static u32 aarch64_encode_immediate(u64 imm,
return AARCH64_BREAK_FAULT; return AARCH64_BREAK_FAULT;
} }
mask = GENMASK(esz - 1, 0);
/* Can't encode full zeroes, full ones, or value wider than the mask */
if (!imm || imm == mask || imm & ~mask)
return AARCH64_BREAK_FAULT;
/* /*
* Inverse of Replicate(). Try to spot a repeating pattern * Inverse of Replicate(). Try to spot a repeating pattern
* with a pow2 stride. * with a pow2 stride.
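
For reference, the new mask computation folds the old upper_32_bits() test into a single range check. Assuming the usual GENMASK() semantics, the 32-bit case now amounts to:

	u64 mask = GENMASK(31, 0);	/* esz = 32 -> 0x00000000ffffffff */

	/* imm == 0, imm == mask, or imm & ~mask  =>  AARCH64_BREAK_FAULT,
	 * rejected before the repeating-pattern search runs */
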

@ -138,12 +138,12 @@ static int setup_dtb(struct kimage *image,
/* add rng-seed */ /* add rng-seed */
if (rng_is_initialized()) { if (rng_is_initialized()) {
u8 rng_seed[RNG_SEED_SIZE]; void *rng_seed;
get_random_bytes(rng_seed, RNG_SEED_SIZE); ret = fdt_setprop_placeholder(dtb, off, FDT_PROP_RNG_SEED,
ret = fdt_setprop(dtb, off, FDT_PROP_RNG_SEED, rng_seed, RNG_SEED_SIZE, &rng_seed);
RNG_SEED_SIZE);
if (ret) if (ret)
goto out; goto out;
get_random_bytes(rng_seed, RNG_SEED_SIZE);
} else { } else {
pr_notice("RNG is not initialised: omitting \"%s\" property\n", pr_notice("RNG is not initialised: omitting \"%s\" property\n",
FDT_PROP_RNG_SEED); FDT_PROP_RNG_SEED);
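
The fdt_setprop_placeholder() variant avoids staging the seed in an on-stack buffer: libfdt reserves space for the property and returns a pointer to it, which is then filled in place. The resulting pattern, shown straight-line for clarity:

	void *rng_seed;

	ret = fdt_setprop_placeholder(dtb, off, FDT_PROP_RNG_SEED,
				      RNG_SEED_SIZE, &rng_seed);
	if (!ret)
		get_random_bytes(rng_seed, RNG_SEED_SIZE);
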
@ -284,7 +284,7 @@ int load_other_segments(struct kimage *image,
image->arch.elf_headers_sz = headers_sz; image->arch.elf_headers_sz = headers_sz;
pr_debug("Loaded elf core header at 0x%lx bufsz=0x%lx memsz=0x%lx\n", pr_debug("Loaded elf core header at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
image->arch.elf_headers_mem, headers_sz, headers_sz); image->arch.elf_headers_mem, kbuf.bufsz, kbuf.memsz);
} }
/* load initrd */ /* load initrd */
@ -305,7 +305,7 @@ int load_other_segments(struct kimage *image,
initrd_load_addr = kbuf.mem; initrd_load_addr = kbuf.mem;
pr_debug("Loaded initrd at 0x%lx bufsz=0x%lx memsz=0x%lx\n", pr_debug("Loaded initrd at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
initrd_load_addr, initrd_len, initrd_len); initrd_load_addr, kbuf.bufsz, kbuf.memsz);
} }
/* load dtb */ /* load dtb */
@ -332,7 +332,7 @@ int load_other_segments(struct kimage *image,
image->arch.dtb_mem = kbuf.mem; image->arch.dtb_mem = kbuf.mem;
pr_debug("Loaded dtb at 0x%lx bufsz=0x%lx memsz=0x%lx\n", pr_debug("Loaded dtb at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
kbuf.mem, dtb_len, dtb_len); kbuf.mem, kbuf.bufsz, kbuf.memsz);
return 0; return 0;

@ -120,7 +120,7 @@ static bool has_pv_steal_clock(void)
struct arm_smccc_res res; struct arm_smccc_res res;
/* To detect the presence of PV time support we require SMCCC 1.1+ */ /* To detect the presence of PV time support we require SMCCC 1.1+ */
if (psci_ops.smccc_version < SMCCC_VERSION_1_1) if (arm_smccc_1_1_get_conduit() == SMCCC_CONDUIT_NONE)
return false; return false;
arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID, arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,

@ -46,7 +46,7 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
* except for the NOP case. * except for the NOP case.
*/ */
if (aarch64_insn_is_hint(insn)) if (aarch64_insn_is_hint(insn))
return aarch64_insn_is_nop(insn); return aarch64_insn_is_steppable_hint(insn);
return true; return true;
} }

@ -61,7 +61,7 @@
ldp x28, x29, [sp, #S_X28] ldp x28, x29, [sp, #S_X28]
.endm .endm
ENTRY(kretprobe_trampoline) SYM_CODE_START(kretprobe_trampoline)
sub sp, sp, #S_FRAME_SIZE sub sp, sp, #S_FRAME_SIZE
save_all_base_regs save_all_base_regs
@ -79,4 +79,4 @@ ENTRY(kretprobe_trampoline)
add sp, sp, #S_FRAME_SIZE add sp, sp, #S_FRAME_SIZE
ret ret
ENDPROC(kretprobe_trampoline) SYM_CODE_END(kretprobe_trampoline)

@ -11,6 +11,7 @@
#include <linux/compat.h> #include <linux/compat.h>
#include <linux/efi.h> #include <linux/efi.h>
#include <linux/elf.h>
#include <linux/export.h> #include <linux/export.h>
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/sched/debug.h> #include <linux/sched/debug.h>
@ -18,6 +19,7 @@
#include <linux/sched/task_stack.h> #include <linux/sched/task_stack.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/lockdep.h> #include <linux/lockdep.h>
#include <linux/mman.h>
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/stddef.h> #include <linux/stddef.h>
#include <linux/sysctl.h> #include <linux/sysctl.h>
@ -209,6 +211,15 @@ void machine_restart(char *cmd)
while (1); while (1);
} }
#define bstr(suffix, str) [PSR_BTYPE_ ## suffix >> PSR_BTYPE_SHIFT] = str
static const char *const btypes[] = {
bstr(NONE, "--"),
bstr( JC, "jc"),
bstr( C, "-c"),
bstr( J , "j-")
};
#undef bstr
static void print_pstate(struct pt_regs *regs) static void print_pstate(struct pt_regs *regs)
{ {
u64 pstate = regs->pstate; u64 pstate = regs->pstate;
@ -227,7 +238,10 @@ static void print_pstate(struct pt_regs *regs)
pstate & PSR_AA32_I_BIT ? 'I' : 'i', pstate & PSR_AA32_I_BIT ? 'I' : 'i',
pstate & PSR_AA32_F_BIT ? 'F' : 'f'); pstate & PSR_AA32_F_BIT ? 'F' : 'f');
} else { } else {
printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN %cUAO)\n", const char *btype_str = btypes[(pstate & PSR_BTYPE_MASK) >>
PSR_BTYPE_SHIFT];
printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN %cUAO BTYPE=%s)\n",
pstate, pstate,
pstate & PSR_N_BIT ? 'N' : 'n', pstate & PSR_N_BIT ? 'N' : 'n',
pstate & PSR_Z_BIT ? 'Z' : 'z', pstate & PSR_Z_BIT ? 'Z' : 'z',
@ -238,7 +252,8 @@ static void print_pstate(struct pt_regs *regs)
pstate & PSR_I_BIT ? 'I' : 'i', pstate & PSR_I_BIT ? 'I' : 'i',
pstate & PSR_F_BIT ? 'F' : 'f', pstate & PSR_F_BIT ? 'F' : 'f',
pstate & PSR_PAN_BIT ? '+' : '-', pstate & PSR_PAN_BIT ? '+' : '-',
pstate & PSR_UAO_BIT ? '+' : '-'); pstate & PSR_UAO_BIT ? '+' : '-',
btype_str);
} }
} }
@ -655,3 +670,25 @@ asmlinkage void __sched arm64_preempt_schedule_irq(void)
if (system_capabilities_finalized()) if (system_capabilities_finalized())
preempt_schedule_irq(); preempt_schedule_irq();
} }
#ifdef CONFIG_BINFMT_ELF
int arch_elf_adjust_prot(int prot, const struct arch_elf_state *state,
bool has_interp, bool is_interp)
{
/*
* For dynamically linked executables the interpreter is
* responsible for setting PROT_BTI on everything except
* itself.
*/
if (is_interp != has_interp)
return prot;
if (!(state->flags & ARM64_ELF_BTI))
return prot;
if (prot & PROT_EXEC)
prot |= PROT_BTI;
return prot;
}
#endif
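
A concrete illustration of the hook's effect, following the logic above (values hypothetical):

	/*
	 * Static (interpreter-less) BTI-marked executable, so
	 * has_interp == is_interp and ARM64_ELF_BTI is set:
	 *
	 *   arch_elf_adjust_prot(PROT_READ | PROT_EXEC, state, false, false)
	 *     -> PROT_READ | PROT_EXEC | PROT_BTI
	 *
	 * For a dynamically linked executable only the interpreter's own
	 * segments are adjusted here; the interpreter is expected to apply
	 * PROT_BTI to everything else via mprotect().
	 */
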

@ -1875,7 +1875,7 @@ void syscall_trace_exit(struct pt_regs *regs)
*/ */
#define SPSR_EL1_AARCH64_RES0_BITS \ #define SPSR_EL1_AARCH64_RES0_BITS \
(GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \ (GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10) | GENMASK_ULL(5, 5)) GENMASK_ULL(20, 13) | GENMASK_ULL(5, 5))
#define SPSR_EL1_AARCH32_RES0_BITS \ #define SPSR_EL1_AARCH32_RES0_BITS \
(GENMASK_ULL(63, 32) | GENMASK_ULL(22, 22) | GENMASK_ULL(20, 20)) (GENMASK_ULL(63, 32) | GENMASK_ULL(22, 22) | GENMASK_ULL(20, 20))

@ -5,81 +5,81 @@
#include <linux/linkage.h> #include <linux/linkage.h>
ENTRY(absolute_data64) SYM_FUNC_START(absolute_data64)
ldr x0, 0f ldr x0, 0f
ret ret
0: .quad sym64_abs 0: .quad sym64_abs
ENDPROC(absolute_data64) SYM_FUNC_END(absolute_data64)
ENTRY(absolute_data32) SYM_FUNC_START(absolute_data32)
ldr w0, 0f ldr w0, 0f
ret ret
0: .long sym32_abs 0: .long sym32_abs
ENDPROC(absolute_data32) SYM_FUNC_END(absolute_data32)
ENTRY(absolute_data16) SYM_FUNC_START(absolute_data16)
adr x0, 0f adr x0, 0f
ldrh w0, [x0] ldrh w0, [x0]
ret ret
0: .short sym16_abs, 0 0: .short sym16_abs, 0
ENDPROC(absolute_data16) SYM_FUNC_END(absolute_data16)
ENTRY(signed_movw) SYM_FUNC_START(signed_movw)
movz x0, #:abs_g2_s:sym64_abs movz x0, #:abs_g2_s:sym64_abs
movk x0, #:abs_g1_nc:sym64_abs movk x0, #:abs_g1_nc:sym64_abs
movk x0, #:abs_g0_nc:sym64_abs movk x0, #:abs_g0_nc:sym64_abs
ret ret
ENDPROC(signed_movw) SYM_FUNC_END(signed_movw)
ENTRY(unsigned_movw) SYM_FUNC_START(unsigned_movw)
movz x0, #:abs_g3:sym64_abs movz x0, #:abs_g3:sym64_abs
movk x0, #:abs_g2_nc:sym64_abs movk x0, #:abs_g2_nc:sym64_abs
movk x0, #:abs_g1_nc:sym64_abs movk x0, #:abs_g1_nc:sym64_abs
movk x0, #:abs_g0_nc:sym64_abs movk x0, #:abs_g0_nc:sym64_abs
ret ret
ENDPROC(unsigned_movw) SYM_FUNC_END(unsigned_movw)
.align 12 .align 12
.space 0xff8 .space 0xff8
ENTRY(relative_adrp) SYM_FUNC_START(relative_adrp)
adrp x0, sym64_rel adrp x0, sym64_rel
add x0, x0, #:lo12:sym64_rel add x0, x0, #:lo12:sym64_rel
ret ret
ENDPROC(relative_adrp) SYM_FUNC_END(relative_adrp)
.align 12 .align 12
.space 0xffc .space 0xffc
ENTRY(relative_adrp_far) SYM_FUNC_START(relative_adrp_far)
adrp x0, memstart_addr adrp x0, memstart_addr
add x0, x0, #:lo12:memstart_addr add x0, x0, #:lo12:memstart_addr
ret ret
ENDPROC(relative_adrp_far) SYM_FUNC_END(relative_adrp_far)
ENTRY(relative_adr) SYM_FUNC_START(relative_adr)
adr x0, sym64_rel adr x0, sym64_rel
ret ret
ENDPROC(relative_adr) SYM_FUNC_END(relative_adr)
ENTRY(relative_data64) SYM_FUNC_START(relative_data64)
adr x1, 0f adr x1, 0f
ldr x0, [x1] ldr x0, [x1]
add x0, x0, x1 add x0, x0, x1
ret ret
0: .quad sym64_rel - . 0: .quad sym64_rel - .
ENDPROC(relative_data64) SYM_FUNC_END(relative_data64)
ENTRY(relative_data32) SYM_FUNC_START(relative_data32)
adr x1, 0f adr x1, 0f
ldr w0, [x1] ldr w0, [x1]
add x0, x0, x1 add x0, x0, x1
ret ret
0: .long sym64_rel - . 0: .long sym64_rel - .
ENDPROC(relative_data32) SYM_FUNC_END(relative_data32)
ENTRY(relative_data16) SYM_FUNC_START(relative_data16)
adr x1, 0f adr x1, 0f
ldrsh w0, [x1] ldrsh w0, [x1]
add x0, x0, x1 add x0, x0, x1
ret ret
0: .short sym64_rel - ., 0 0: .short sym64_rel - ., 0
ENDPROC(relative_data16) SYM_FUNC_END(relative_data16)

@ -26,7 +26,7 @@
* control_code_page, a special page which has been set up to be preserved * control_code_page, a special page which has been set up to be preserved
* during the copy operation. * during the copy operation.
*/ */
ENTRY(arm64_relocate_new_kernel) SYM_CODE_START(arm64_relocate_new_kernel)
/* Setup the list loop variables. */ /* Setup the list loop variables. */
mov x18, x2 /* x18 = dtb address */ mov x18, x2 /* x18 = dtb address */
@ -111,7 +111,7 @@ ENTRY(arm64_relocate_new_kernel)
mov x3, xzr mov x3, xzr
br x17 br x17
ENDPROC(arm64_relocate_new_kernel) SYM_CODE_END(arm64_relocate_new_kernel)
.align 3 /* To keep the 64-bit values below naturally aligned. */ .align 3 /* To keep the 64-bit values below naturally aligned. */

arch/arm64/kernel/scs.c (new file, 16 lines added)

@ -0,0 +1,16 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Shadow Call Stack support.
*
* Copyright (C) 2019 Google LLC
*/
#include <linux/percpu.h>
#include <linux/scs.h>
DEFINE_SCS(irq_shadow_call_stack);
#ifdef CONFIG_ARM_SDE_INTERFACE
DEFINE_SCS(sdei_shadow_call_stack_normal);
DEFINE_SCS(sdei_shadow_call_stack_critical);
#endif

@ -95,19 +95,7 @@ static bool on_sdei_normal_stack(unsigned long sp, struct stack_info *info)
unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_normal_ptr); unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_normal_ptr);
unsigned long high = low + SDEI_STACK_SIZE; unsigned long high = low + SDEI_STACK_SIZE;
if (!low) return on_stack(sp, low, high, STACK_TYPE_SDEI_NORMAL, info);
return false;
if (sp < low || sp >= high)
return false;
if (info) {
info->low = low;
info->high = high;
info->type = STACK_TYPE_SDEI_NORMAL;
}
return true;
} }
static bool on_sdei_critical_stack(unsigned long sp, struct stack_info *info) static bool on_sdei_critical_stack(unsigned long sp, struct stack_info *info)
@ -115,19 +103,7 @@ static bool on_sdei_critical_stack(unsigned long sp, struct stack_info *info)
unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_critical_ptr); unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_critical_ptr);
unsigned long high = low + SDEI_STACK_SIZE; unsigned long high = low + SDEI_STACK_SIZE;
if (!low) return on_stack(sp, low, high, STACK_TYPE_SDEI_CRITICAL, info);
return false;
if (sp < low || sp >= high)
return false;
if (info) {
info->low = low;
info->high = high;
info->type = STACK_TYPE_SDEI_CRITICAL;
}
return true;
} }
bool _on_sdei_stack(unsigned long sp, struct stack_info *info) bool _on_sdei_stack(unsigned long sp, struct stack_info *info)

@ -732,6 +732,22 @@ static void setup_return(struct pt_regs *regs, struct k_sigaction *ka,
regs->regs[29] = (unsigned long)&user->next_frame->fp; regs->regs[29] = (unsigned long)&user->next_frame->fp;
regs->pc = (unsigned long)ka->sa.sa_handler; regs->pc = (unsigned long)ka->sa.sa_handler;
/*
* Signal delivery is a (wacky) indirect function call in
* userspace, so simulate the same setting of BTYPE as a BLR
* <register containing the signal handler entry point>.
* Signal delivery to a location in a PROT_BTI guarded page
* that is not a function entry point will now trigger a
* SIGILL in userspace.
*
* If the signal handler entry point is not in a PROT_BTI
* guarded page, this is harmless.
*/
if (system_supports_bti()) {
regs->pstate &= ~PSR_BTYPE_MASK;
regs->pstate |= PSR_BTYPE_C;
}
if (ka->sa.sa_flags & SA_RESTORER) if (ka->sa.sa_flags & SA_RESTORER)
sigtramp = ka->sa.sa_restorer; sigtramp = ka->sa.sa_restorer;
else else

@ -62,7 +62,7 @@
* *
* x0 = struct sleep_stack_data area * x0 = struct sleep_stack_data area
*/ */
ENTRY(__cpu_suspend_enter) SYM_FUNC_START(__cpu_suspend_enter)
stp x29, lr, [x0, #SLEEP_STACK_DATA_CALLEE_REGS] stp x29, lr, [x0, #SLEEP_STACK_DATA_CALLEE_REGS]
stp x19, x20, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+16] stp x19, x20, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+16]
stp x21, x22, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+32] stp x21, x22, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+32]
@ -95,23 +95,22 @@ ENTRY(__cpu_suspend_enter)
ldp x29, lr, [sp], #16 ldp x29, lr, [sp], #16
mov x0, #1 mov x0, #1
ret ret
ENDPROC(__cpu_suspend_enter) SYM_FUNC_END(__cpu_suspend_enter)
.pushsection ".idmap.text", "awx" .pushsection ".idmap.text", "awx"
ENTRY(cpu_resume) SYM_CODE_START(cpu_resume)
bl el2_setup // if in EL2 drop to EL1 cleanly bl el2_setup // if in EL2 drop to EL1 cleanly
mov x0, #ARM64_CPU_RUNTIME
bl __cpu_setup bl __cpu_setup
/* enable the MMU early - so we can access sleep_save_stash by va */ /* enable the MMU early - so we can access sleep_save_stash by va */
adrp x1, swapper_pg_dir adrp x1, swapper_pg_dir
bl __enable_mmu bl __enable_mmu
ldr x8, =_cpu_resume ldr x8, =_cpu_resume
br x8 br x8
ENDPROC(cpu_resume) SYM_CODE_END(cpu_resume)
.ltorg .ltorg
.popsection .popsection
ENTRY(_cpu_resume) SYM_FUNC_START(_cpu_resume)
mrs x1, mpidr_el1 mrs x1, mpidr_el1
adr_l x8, mpidr_hash // x8 = struct mpidr_hash virt address adr_l x8, mpidr_hash // x8 = struct mpidr_hash virt address
@ -147,4 +146,4 @@ ENTRY(_cpu_resume)
ldp x29, lr, [x29] ldp x29, lr, [x29]
mov x0, #0 mov x0, #0
ret ret
ENDPROC(_cpu_resume) SYM_FUNC_END(_cpu_resume)

@ -30,9 +30,9 @@
* unsigned long a6, unsigned long a7, struct arm_smccc_res *res, * unsigned long a6, unsigned long a7, struct arm_smccc_res *res,
* struct arm_smccc_quirk *quirk) * struct arm_smccc_quirk *quirk)
*/ */
ENTRY(__arm_smccc_smc) SYM_FUNC_START(__arm_smccc_smc)
SMCCC smc SMCCC smc
ENDPROC(__arm_smccc_smc) SYM_FUNC_END(__arm_smccc_smc)
EXPORT_SYMBOL(__arm_smccc_smc) EXPORT_SYMBOL(__arm_smccc_smc)
/* /*
@ -41,7 +41,7 @@ EXPORT_SYMBOL(__arm_smccc_smc)
* unsigned long a6, unsigned long a7, struct arm_smccc_res *res, * unsigned long a6, unsigned long a7, struct arm_smccc_res *res,
* struct arm_smccc_quirk *quirk) * struct arm_smccc_quirk *quirk)
*/ */
ENTRY(__arm_smccc_hvc) SYM_FUNC_START(__arm_smccc_hvc)
SMCCC hvc SMCCC hvc
ENDPROC(__arm_smccc_hvc) SYM_FUNC_END(__arm_smccc_hvc)
EXPORT_SYMBOL(__arm_smccc_hvc) EXPORT_SYMBOL(__arm_smccc_hvc)

@ -65,7 +65,7 @@ EXPORT_PER_CPU_SYMBOL(cpu_number);
*/ */
struct secondary_data secondary_data; struct secondary_data secondary_data;
/* Number of CPUs which aren't online, but looping in kernel text. */ /* Number of CPUs which aren't online, but looping in kernel text. */
int cpus_stuck_in_kernel; static int cpus_stuck_in_kernel;
enum ipi_msg_type { enum ipi_msg_type {
IPI_RESCHEDULE, IPI_RESCHEDULE,
@ -114,10 +114,6 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
*/ */
secondary_data.task = idle; secondary_data.task = idle;
secondary_data.stack = task_stack_page(idle) + THREAD_SIZE; secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
#if defined(CONFIG_ARM64_PTR_AUTH)
secondary_data.ptrauth_key.apia.lo = idle->thread.keys_kernel.apia.lo;
secondary_data.ptrauth_key.apia.hi = idle->thread.keys_kernel.apia.hi;
#endif
update_cpu_boot_status(CPU_MMU_OFF); update_cpu_boot_status(CPU_MMU_OFF);
__flush_dcache_area(&secondary_data, sizeof(secondary_data)); __flush_dcache_area(&secondary_data, sizeof(secondary_data));
@ -140,10 +136,6 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
pr_crit("CPU%u: failed to come online\n", cpu); pr_crit("CPU%u: failed to come online\n", cpu);
secondary_data.task = NULL; secondary_data.task = NULL;
secondary_data.stack = NULL; secondary_data.stack = NULL;
#if defined(CONFIG_ARM64_PTR_AUTH)
secondary_data.ptrauth_key.apia.lo = 0;
secondary_data.ptrauth_key.apia.hi = 0;
#endif
__flush_dcache_area(&secondary_data, sizeof(secondary_data)); __flush_dcache_area(&secondary_data, sizeof(secondary_data));
status = READ_ONCE(secondary_data.status); status = READ_ONCE(secondary_data.status);
if (status == CPU_MMU_OFF) if (status == CPU_MMU_OFF)

@ -98,6 +98,24 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
regs->orig_x0 = regs->regs[0]; regs->orig_x0 = regs->regs[0];
regs->syscallno = scno; regs->syscallno = scno;
/*
* BTI note:
* The architecture does not guarantee that SPSR.BTYPE is zero
* on taking an SVC, so we could return to userspace with a
* non-zero BTYPE after the syscall.
*
* This shouldn't matter except when userspace is explicitly
* doing something stupid, such as setting PROT_BTI on a page
* that lacks conforming BTI/PACIxSP instructions, falling
* through from one executable page to another with differing
* PROT_BTI, or messing with BTYPE via ptrace: in such cases,
* userspace should not be surprised if a SIGILL occurs on
* syscall return.
*
* So, don't touch regs->pstate & PSR_BTYPE_MASK here.
* (Similarly for HVC and SMC elsewhere.)
*/
cortex_a76_erratum_1463225_svc_handler(); cortex_a76_erratum_1463225_svc_handler();
local_daif_restore(DAIF_PROCCTX); local_daif_restore(DAIF_PROCCTX);
user_exit(); user_exit();

@ -272,6 +272,61 @@ void arm64_notify_die(const char *str, struct pt_regs *regs,
} }
} }
#ifdef CONFIG_COMPAT
#define PSTATE_IT_1_0_SHIFT 25
#define PSTATE_IT_1_0_MASK (0x3 << PSTATE_IT_1_0_SHIFT)
#define PSTATE_IT_7_2_SHIFT 10
#define PSTATE_IT_7_2_MASK (0x3f << PSTATE_IT_7_2_SHIFT)
static u32 compat_get_it_state(struct pt_regs *regs)
{
u32 it, pstate = regs->pstate;
it = (pstate & PSTATE_IT_1_0_MASK) >> PSTATE_IT_1_0_SHIFT;
it |= ((pstate & PSTATE_IT_7_2_MASK) >> PSTATE_IT_7_2_SHIFT) << 2;
return it;
}
static void compat_set_it_state(struct pt_regs *regs, u32 it)
{
u32 pstate_it;
pstate_it = (it << PSTATE_IT_1_0_SHIFT) & PSTATE_IT_1_0_MASK;
pstate_it |= ((it >> 2) << PSTATE_IT_7_2_SHIFT) & PSTATE_IT_7_2_MASK;
regs->pstate &= ~PSR_AA32_IT_MASK;
regs->pstate |= pstate_it;
}
static void advance_itstate(struct pt_regs *regs)
{
u32 it;
/* ARM mode */
if (!(regs->pstate & PSR_AA32_T_BIT) ||
!(regs->pstate & PSR_AA32_IT_MASK))
return;
it = compat_get_it_state(regs);
/*
* If this is the last instruction of the block, wipe the IT
* state. Otherwise advance it.
*/
if (!(it & 7))
it = 0;
else
it = (it & 0xe0) | ((it << 1) & 0x1f);
compat_set_it_state(regs, it);
}
#else
static void advance_itstate(struct pt_regs *regs)
{
}
#endif
void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size) void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
{ {
regs->pc += size; regs->pc += size;
@ -282,6 +337,11 @@ void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
*/ */
if (user_mode(regs)) if (user_mode(regs))
user_fastforward_single_step(current); user_fastforward_single_step(current);
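	/*
	 * Skipping the instruction is treated like executing it, so clear
	 * any pending BTYPE check as the hardware would have done; compat
	 * tasks have no BTYPE and advance their Thumb IT state instead.
	 */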
if (compat_user_mode(regs))
advance_itstate(regs);
else
regs->pstate &= ~PSR_BTYPE_MASK;
} }
static LIST_HEAD(undef_hook); static LIST_HEAD(undef_hook);
@ -411,6 +471,13 @@ void do_undefinstr(struct pt_regs *regs)
} }
NOKPROBE_SYMBOL(do_undefinstr); NOKPROBE_SYMBOL(do_undefinstr);
void do_bti(struct pt_regs *regs)
{
BUG_ON(!user_mode(regs));
force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
}
NOKPROBE_SYMBOL(do_bti);
#define __user_cache_maint(insn, address, res) \ #define __user_cache_maint(insn, address, res) \
if (address >= user_addr_max()) { \ if (address >= user_addr_max()) { \
res = -EFAULT; \ res = -EFAULT; \
@ -566,34 +633,7 @@ static const struct sys64_hook sys64_hooks[] = {
{}, {},
}; };
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
#define PSTATE_IT_1_0_SHIFT 25
#define PSTATE_IT_1_0_MASK (0x3 << PSTATE_IT_1_0_SHIFT)
#define PSTATE_IT_7_2_SHIFT 10
#define PSTATE_IT_7_2_MASK (0x3f << PSTATE_IT_7_2_SHIFT)
static u32 compat_get_it_state(struct pt_regs *regs)
{
u32 it, pstate = regs->pstate;
it = (pstate & PSTATE_IT_1_0_MASK) >> PSTATE_IT_1_0_SHIFT;
it |= ((pstate & PSTATE_IT_7_2_MASK) >> PSTATE_IT_7_2_SHIFT) << 2;
return it;
}
static void compat_set_it_state(struct pt_regs *regs, u32 it)
{
u32 pstate_it;
pstate_it = (it << PSTATE_IT_1_0_SHIFT) & PSTATE_IT_1_0_MASK;
pstate_it |= ((it >> 2) << PSTATE_IT_7_2_SHIFT) & PSTATE_IT_7_2_MASK;
regs->pstate &= ~PSR_AA32_IT_MASK;
regs->pstate |= pstate_it;
}
static bool cp15_cond_valid(unsigned int esr, struct pt_regs *regs) static bool cp15_cond_valid(unsigned int esr, struct pt_regs *regs)
{ {
int cond; int cond;
@ -614,42 +654,12 @@ static bool cp15_cond_valid(unsigned int esr, struct pt_regs *regs)
return aarch32_opcode_cond_checks[cond](regs->pstate); return aarch32_opcode_cond_checks[cond](regs->pstate);
} }
static void advance_itstate(struct pt_regs *regs)
{
u32 it;
/* ARM mode */
if (!(regs->pstate & PSR_AA32_T_BIT) ||
!(regs->pstate & PSR_AA32_IT_MASK))
return;
it = compat_get_it_state(regs);
/*
* If this is the last instruction of the block, wipe the IT
* state. Otherwise advance it.
*/
if (!(it & 7))
it = 0;
else
it = (it & 0xe0) | ((it << 1) & 0x1f);
compat_set_it_state(regs, it);
}
static void arm64_compat_skip_faulting_instruction(struct pt_regs *regs,
unsigned int sz)
{
advance_itstate(regs);
arm64_skip_faulting_instruction(regs, sz);
}
static void compat_cntfrq_read_handler(unsigned int esr, struct pt_regs *regs) static void compat_cntfrq_read_handler(unsigned int esr, struct pt_regs *regs)
{ {
int reg = (esr & ESR_ELx_CP15_32_ISS_RT_MASK) >> ESR_ELx_CP15_32_ISS_RT_SHIFT; int reg = (esr & ESR_ELx_CP15_32_ISS_RT_MASK) >> ESR_ELx_CP15_32_ISS_RT_SHIFT;
pt_regs_write_reg(regs, reg, arch_timer_get_rate()); pt_regs_write_reg(regs, reg, arch_timer_get_rate());
arm64_compat_skip_faulting_instruction(regs, 4); arm64_skip_faulting_instruction(regs, 4);
} }
static const struct sys64_hook cp15_32_hooks[] = { static const struct sys64_hook cp15_32_hooks[] = {
@ -669,7 +679,7 @@ static void compat_cntvct_read_handler(unsigned int esr, struct pt_regs *regs)
pt_regs_write_reg(regs, rt, lower_32_bits(val)); pt_regs_write_reg(regs, rt, lower_32_bits(val));
pt_regs_write_reg(regs, rt2, upper_32_bits(val)); pt_regs_write_reg(regs, rt2, upper_32_bits(val));
arm64_compat_skip_faulting_instruction(regs, 4); arm64_skip_faulting_instruction(regs, 4);
} }
static const struct sys64_hook cp15_64_hooks[] = { static const struct sys64_hook cp15_64_hooks[] = {
@ -690,7 +700,7 @@ void do_cp15instr(unsigned int esr, struct pt_regs *regs)
* There is no T16 variant of a CP access, so we * There is no T16 variant of a CP access, so we
* always advance PC by 4 bytes. * always advance PC by 4 bytes.
*/ */
arm64_compat_skip_faulting_instruction(regs, 4); arm64_skip_faulting_instruction(regs, 4);
return; return;
} }
@ -753,6 +763,7 @@ static const char *esr_class_str[] = {
[ESR_ELx_EC_CP10_ID] = "CP10 MRC/VMRS", [ESR_ELx_EC_CP10_ID] = "CP10 MRC/VMRS",
[ESR_ELx_EC_PAC] = "PAC", [ESR_ELx_EC_PAC] = "PAC",
[ESR_ELx_EC_CP14_64] = "CP14 MCRR/MRRC", [ESR_ELx_EC_CP14_64] = "CP14 MCRR/MRRC",
[ESR_ELx_EC_BTI] = "BTI",
[ESR_ELx_EC_ILL] = "PSTATE.IL", [ESR_ELx_EC_ILL] = "PSTATE.IL",
[ESR_ELx_EC_SVC32] = "SVC (AArch32)", [ESR_ELx_EC_SVC32] = "SVC (AArch32)",
[ESR_ELx_EC_HVC32] = "HVC (AArch32)", [ESR_ELx_EC_HVC32] = "HVC (AArch32)",
@ -1043,11 +1054,11 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
return bug_handler(regs, esr) != DBG_HOOK_HANDLED; return bug_handler(regs, esr) != DBG_HOOK_HANDLED;
} }
/* This registration must happen early, before debug_traps_init(). */
void __init trap_init(void) void __init trap_init(void)
{ {
register_kernel_break_hook(&bug_break_hook); register_kernel_break_hook(&bug_break_hook);
#ifdef CONFIG_KASAN_SW_TAGS #ifdef CONFIG_KASAN_SW_TAGS
register_kernel_break_hook(&kasan_break_hook); register_kernel_break_hook(&kasan_break_hook);
#endif #endif
debug_traps_init();
} }

@ -33,20 +33,14 @@ extern char vdso_start[], vdso_end[];
extern char vdso32_start[], vdso32_end[]; extern char vdso32_start[], vdso32_end[];
#endif /* CONFIG_COMPAT_VDSO */ #endif /* CONFIG_COMPAT_VDSO */
/* vdso_lookup arch_index */ enum vdso_abi {
enum arch_vdso_type { VDSO_ABI_AA64,
ARM64_VDSO = 0,
#ifdef CONFIG_COMPAT_VDSO #ifdef CONFIG_COMPAT_VDSO
ARM64_VDSO32 = 1, VDSO_ABI_AA32,
#endif /* CONFIG_COMPAT_VDSO */ #endif /* CONFIG_COMPAT_VDSO */
}; };
#ifdef CONFIG_COMPAT_VDSO
#define VDSO_TYPES (ARM64_VDSO32 + 1)
#else
#define VDSO_TYPES (ARM64_VDSO + 1)
#endif /* CONFIG_COMPAT_VDSO */
struct __vdso_abi { struct vdso_abi_info {
const char *name; const char *name;
const char *vdso_code_start; const char *vdso_code_start;
const char *vdso_code_end; const char *vdso_code_end;
@ -57,14 +51,14 @@ struct __vdso_abi {
struct vm_special_mapping *cm; struct vm_special_mapping *cm;
}; };
static struct __vdso_abi vdso_lookup[VDSO_TYPES] __ro_after_init = { static struct vdso_abi_info vdso_info[] __ro_after_init = {
{ [VDSO_ABI_AA64] = {
.name = "vdso", .name = "vdso",
.vdso_code_start = vdso_start, .vdso_code_start = vdso_start,
.vdso_code_end = vdso_end, .vdso_code_end = vdso_end,
}, },
#ifdef CONFIG_COMPAT_VDSO #ifdef CONFIG_COMPAT_VDSO
{ [VDSO_ABI_AA32] = {
.name = "vdso32", .name = "vdso32",
.vdso_code_start = vdso32_start, .vdso_code_start = vdso32_start,
.vdso_code_end = vdso32_end, .vdso_code_end = vdso32_end,
@@ -81,13 +75,13 @@ static union {
 } vdso_data_store __page_aligned_data;
 struct vdso_data *vdso_data = vdso_data_store.data;
-static int __vdso_remap(enum arch_vdso_type arch_index,
+static int __vdso_remap(enum vdso_abi abi,
 			const struct vm_special_mapping *sm,
 			struct vm_area_struct *new_vma)
 {
 	unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
-	unsigned long vdso_size = vdso_lookup[arch_index].vdso_code_end -
-				  vdso_lookup[arch_index].vdso_code_start;
+	unsigned long vdso_size = vdso_info[abi].vdso_code_end -
+				  vdso_info[abi].vdso_code_start;
 	if (vdso_size != new_size)
 		return -EINVAL;
@@ -97,24 +91,24 @@ static int __vdso_remap(enum arch_vdso_type arch_index,
 	return 0;
 }
-static int __vdso_init(enum arch_vdso_type arch_index)
+static int __vdso_init(enum vdso_abi abi)
 {
 	int i;
 	struct page **vdso_pagelist;
 	unsigned long pfn;
-	if (memcmp(vdso_lookup[arch_index].vdso_code_start, "\177ELF", 4)) {
+	if (memcmp(vdso_info[abi].vdso_code_start, "\177ELF", 4)) {
 		pr_err("vDSO is not a valid ELF object!\n");
 		return -EINVAL;
 	}
-	vdso_lookup[arch_index].vdso_pages = (
-		vdso_lookup[arch_index].vdso_code_end -
-		vdso_lookup[arch_index].vdso_code_start) >>
+	vdso_info[abi].vdso_pages = (
+		vdso_info[abi].vdso_code_end -
+		vdso_info[abi].vdso_code_start) >>
 		PAGE_SHIFT;
 	/* Allocate the vDSO pagelist, plus a page for the data. */
-	vdso_pagelist = kcalloc(vdso_lookup[arch_index].vdso_pages + 1,
+	vdso_pagelist = kcalloc(vdso_info[abi].vdso_pages + 1,
 				sizeof(struct page *),
 				GFP_KERNEL);
 	if (vdso_pagelist == NULL)
@@ -125,26 +119,27 @@ static int __vdso_init(enum arch_vdso_type arch_index)
 	/* Grab the vDSO code pages. */
-	pfn = sym_to_pfn(vdso_lookup[arch_index].vdso_code_start);
-	for (i = 0; i < vdso_lookup[arch_index].vdso_pages; i++)
+	pfn = sym_to_pfn(vdso_info[abi].vdso_code_start);
+	for (i = 0; i < vdso_info[abi].vdso_pages; i++)
 		vdso_pagelist[i + 1] = pfn_to_page(pfn + i);
-	vdso_lookup[arch_index].dm->pages = &vdso_pagelist[0];
-	vdso_lookup[arch_index].cm->pages = &vdso_pagelist[1];
+	vdso_info[abi].dm->pages = &vdso_pagelist[0];
+	vdso_info[abi].cm->pages = &vdso_pagelist[1];
 	return 0;
 }
-static int __setup_additional_pages(enum arch_vdso_type arch_index,
+static int __setup_additional_pages(enum vdso_abi abi,
 				    struct mm_struct *mm,
 				    struct linux_binprm *bprm,
 				    int uses_interp)
 {
 	unsigned long vdso_base, vdso_text_len, vdso_mapping_len;
+	unsigned long gp_flags = 0;
 	void *ret;
-	vdso_text_len = vdso_lookup[arch_index].vdso_pages << PAGE_SHIFT;
+	vdso_text_len = vdso_info[abi].vdso_pages << PAGE_SHIFT;
 	/* Be sure to map the data page */
 	vdso_mapping_len = vdso_text_len + PAGE_SIZE;
@@ -156,16 +151,19 @@ static int __setup_additional_pages(enum arch_vdso_type arch_index,
 	ret = _install_special_mapping(mm, vdso_base, PAGE_SIZE,
 				       VM_READ|VM_MAYREAD,
-				       vdso_lookup[arch_index].dm);
+				       vdso_info[abi].dm);
 	if (IS_ERR(ret))
 		goto up_fail;
+	if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) && system_supports_bti())
+		gp_flags = VM_ARM64_BTI;
 	vdso_base += PAGE_SIZE;
 	mm->context.vdso = (void *)vdso_base;
 	ret = _install_special_mapping(mm, vdso_base, vdso_text_len,
-				       VM_READ|VM_EXEC|
+				       VM_READ|VM_EXEC|gp_flags|
 				       VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
-				       vdso_lookup[arch_index].cm);
+				       vdso_info[abi].cm);
 	if (IS_ERR(ret))
 		goto up_fail;
@@ -184,46 +182,42 @@ static int __setup_additional_pages(enum arch_vdso_type arch_index,
 static int aarch32_vdso_mremap(const struct vm_special_mapping *sm,
 		struct vm_area_struct *new_vma)
 {
-	return __vdso_remap(ARM64_VDSO32, sm, new_vma);
+	return __vdso_remap(VDSO_ABI_AA32, sm, new_vma);
 }
 #endif /* CONFIG_COMPAT_VDSO */
-/*
- * aarch32_vdso_pages:
- * 0 - kuser helpers
- * 1 - sigreturn code
- * or (CONFIG_COMPAT_VDSO):
- * 0 - kuser helpers
- * 1 - vdso data
- * 2 - vdso code
- */
-#define C_VECTORS	0
+enum aarch32_map {
+	AA32_MAP_VECTORS, /* kuser helpers */
 #ifdef CONFIG_COMPAT_VDSO
-#define C_VVAR		1
-#define C_VDSO		2
-#define C_PAGES		(C_VDSO + 1)
+	AA32_MAP_VVAR,
+	AA32_MAP_VDSO,
 #else
-#define C_SIGPAGE	1
-#define C_PAGES		(C_SIGPAGE + 1)
-#endif /* CONFIG_COMPAT_VDSO */
-static struct page *aarch32_vdso_pages[C_PAGES] __ro_after_init;
-static struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
-	{
+	AA32_MAP_SIGPAGE
+#endif
+};
+static struct page *aarch32_vectors_page __ro_after_init;
+#ifndef CONFIG_COMPAT_VDSO
+static struct page *aarch32_sig_page __ro_after_init;
+#endif
+static struct vm_special_mapping aarch32_vdso_maps[] = {
+	[AA32_MAP_VECTORS] = {
 		.name	= "[vectors]", /* ABI */
-		.pages	= &aarch32_vdso_pages[C_VECTORS],
+		.pages	= &aarch32_vectors_page,
 	},
 #ifdef CONFIG_COMPAT_VDSO
-	{
+	[AA32_MAP_VVAR] = {
 		.name = "[vvar]",
 	},
-	{
+	[AA32_MAP_VDSO] = {
 		.name = "[vdso]",
 		.mremap = aarch32_vdso_mremap,
 	},
 #else
-	{
+	[AA32_MAP_SIGPAGE] = {
 		.name	= "[sigpage]", /* ABI */
-		.pages	= &aarch32_vdso_pages[C_SIGPAGE],
+		.pages	= &aarch32_sig_page,
 	},
 #endif /* CONFIG_COMPAT_VDSO */
 };
@@ -243,8 +237,8 @@ static int aarch32_alloc_kuser_vdso_page(void)
 	memcpy((void *)(vdso_page + 0x1000 - kuser_sz), __kuser_helper_start,
 	       kuser_sz);
-	aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_page);
-	flush_dcache_page(aarch32_vdso_pages[C_VECTORS]);
+	aarch32_vectors_page = virt_to_page(vdso_page);
+	flush_dcache_page(aarch32_vectors_page);
 	return 0;
 }
@@ -253,10 +247,10 @@ static int __aarch32_alloc_vdso_pages(void)
 {
 	int ret;
-	vdso_lookup[ARM64_VDSO32].dm = &aarch32_vdso_spec[C_VVAR];
-	vdso_lookup[ARM64_VDSO32].cm = &aarch32_vdso_spec[C_VDSO];
-	ret = __vdso_init(ARM64_VDSO32);
+	vdso_info[VDSO_ABI_AA32].dm = &aarch32_vdso_maps[AA32_MAP_VVAR];
+	vdso_info[VDSO_ABI_AA32].cm = &aarch32_vdso_maps[AA32_MAP_VDSO];
+	ret = __vdso_init(VDSO_ABI_AA32);
 	if (ret)
 		return ret;
@@ -275,8 +269,8 @@ static int __aarch32_alloc_vdso_pages(void)
 		return -ENOMEM;
 	memcpy((void *)sigpage, __aarch32_sigret_code_start, sigret_sz);
-	aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(sigpage);
-	flush_dcache_page(aarch32_vdso_pages[C_SIGPAGE]);
+	aarch32_sig_page = virt_to_page(sigpage);
+	flush_dcache_page(aarch32_sig_page);
 	ret = aarch32_alloc_kuser_vdso_page();
 	if (ret)
@@ -306,7 +300,7 @@ static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
 	ret = _install_special_mapping(mm, AARCH32_VECTORS_BASE, PAGE_SIZE,
 				       VM_READ | VM_EXEC |
 				       VM_MAYREAD | VM_MAYEXEC,
-				       &aarch32_vdso_spec[C_VECTORS]);
+				       &aarch32_vdso_maps[AA32_MAP_VECTORS]);
 	return PTR_ERR_OR_ZERO(ret);
 }
@@ -330,7 +324,7 @@ static int aarch32_sigreturn_setup(struct mm_struct *mm)
 	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
 				       VM_READ | VM_EXEC | VM_MAYREAD |
 				       VM_MAYWRITE | VM_MAYEXEC,
-				       &aarch32_vdso_spec[C_SIGPAGE]);
+				       &aarch32_vdso_maps[AA32_MAP_SIGPAGE]);
 	if (IS_ERR(ret))
 		goto out;
@@ -354,7 +348,7 @@ int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 		goto out;
 #ifdef CONFIG_COMPAT_VDSO
-	ret = __setup_additional_pages(ARM64_VDSO32,
+	ret = __setup_additional_pages(VDSO_ABI_AA32,
 				       mm,
 				       bprm,
 				       uses_interp);
@@ -371,22 +365,19 @@ int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 static int vdso_mremap(const struct vm_special_mapping *sm,
 		struct vm_area_struct *new_vma)
 {
-	return __vdso_remap(ARM64_VDSO, sm, new_vma);
+	return __vdso_remap(VDSO_ABI_AA64, sm, new_vma);
 }
-/*
- * aarch64_vdso_pages:
- * 0 - vvar
- * 1 - vdso
- */
-#define A_VVAR		0
-#define A_VDSO		1
-#define A_PAGES		(A_VDSO + 1)
-static struct vm_special_mapping vdso_spec[A_PAGES] __ro_after_init = {
-	{
+enum aarch64_map {
+	AA64_MAP_VVAR,
+	AA64_MAP_VDSO,
+};
+static struct vm_special_mapping aarch64_vdso_maps[] __ro_after_init = {
+	[AA64_MAP_VVAR] = {
 		.name	= "[vvar]",
 	},
-	{
+	[AA64_MAP_VDSO] = {
 		.name	= "[vdso]",
 		.mremap = vdso_mremap,
 	},
@@ -394,10 +385,10 @@ static struct vm_special_mapping vdso_spec[A_PAGES] __ro_after_init = {
 static int __init vdso_init(void)
 {
-	vdso_lookup[ARM64_VDSO].dm = &vdso_spec[A_VVAR];
-	vdso_lookup[ARM64_VDSO].cm = &vdso_spec[A_VDSO];
-	return __vdso_init(ARM64_VDSO);
+	vdso_info[VDSO_ABI_AA64].dm = &aarch64_vdso_maps[AA64_MAP_VVAR];
+	vdso_info[VDSO_ABI_AA64].cm = &aarch64_vdso_maps[AA64_MAP_VDSO];
+	return __vdso_init(VDSO_ABI_AA64);
 }
 arch_initcall(vdso_init);
@@ -410,7 +401,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
 	if (down_write_killable(&mm->mmap_sem))
 		return -EINTR;
-	ret = __setup_additional_pages(ARM64_VDSO,
+	ret = __setup_additional_pages(VDSO_ABI_AA64,
 				       mm,
 				       bprm,
 				       uses_interp);
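
A side note on the pattern used by the vDSO rework above: the old code indexed vdso_lookup[] with bare #define constants and kept a separate VDSO_TYPES element count in sync by hand, while the new vdso_info[] table is indexed and sized by an enum via C99 designated initializers. A minimal userspace sketch of the same idea (all names below are illustrative, not the kernel's):

#include <stdio.h>

/* Illustrative stand-ins for the kernel's enum vdso_abi / struct vdso_abi_info. */
enum demo_abi {
	DEMO_ABI_AA64,
	DEMO_ABI_AA32,
};

struct demo_abi_info {
	const char *name;
};

/*
 * Designated initializers keep each entry tied to its enum label, and the
 * array length follows from the initializer list itself, so there is no
 * separate "number of entries" macro to keep in sync.
 */
static struct demo_abi_info demo_info[] = {
	[DEMO_ABI_AA64] = { .name = "vdso"   },
	[DEMO_ABI_AA32] = { .name = "vdso32" },
};

int main(void)
{
	printf("%s\n", demo_info[DEMO_ABI_AA32].name);	/* prints "vdso32" */
	return 0;
}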


@@ -17,15 +17,19 @@ obj-vdso := vgettimeofday.o note.o sigreturn.o
 targets := $(obj-vdso) vdso.so vdso.so.dbg
 obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+btildflags-$(CONFIG_ARM64_BTI_KERNEL) += -z force-bti
+# -Bsymbolic has been added for consistency with arm, the compat vDSO and
+# potential future proofing if we end up with internal calls to the exported
+# routines, as x86 does (see 6f121e548f83 ("x86, vdso: Reimplement vdso.so
+# preparation in build-time C")).
 ldflags-y := -shared -nostdlib -soname=linux-vdso.so.1 --hash-style=sysv \
-	     --build-id -n -T
+	     -Bsymbolic --eh-frame-hdr --build-id -n $(btildflags-y) -T
 ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
 ccflags-y += -DDISABLE_BRANCH_PROFILING
-VDSO_LDFLAGS := -Bsymbolic
-CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os
+CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS)
 KBUILD_CFLAGS += $(DISABLE_LTO)
 KASAN_SANITIZE := n
 UBSAN_SANITIZE := n


@@ -12,9 +12,12 @@
 #include <linux/version.h>
 #include <linux/elfnote.h>
 #include <linux/build-salt.h>
+#include <asm/assembler.h>
 ELFNOTE_START(Linux, 0, "a")
 	.long LINUX_VERSION_CODE
 ELFNOTE_END
 BUILD_SALT
+emit_aarch64_feature_1_and


@@ -1,7 +1,11 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
  * Sigreturn trampoline for returning from a signal when the SA_RESTORER
- * flag is not set.
+ * flag is not set. It serves primarily as a hall of shame for crappy
+ * unwinders and features an exciting but mysterious NOP instruction.
+ *
+ * It's also fragile as hell, so please think twice before changing anything
+ * in here.
  *
  * Copyright (C) 2012 ARM Limited
  *
@@ -9,18 +13,54 @@
  */
 #include <linux/linkage.h>
+#include <asm/assembler.h>
 #include <asm/unistd.h>
 	.text
-	nop
-SYM_FUNC_START(__kernel_rt_sigreturn)
+/* Ensure that the mysterious NOP can be associated with a function. */
 	.cfi_startproc
+
+/*
+ * .cfi_signal_frame causes the corresponding Frame Description Entry in the
+ * .eh_frame section to be annotated as a signal frame. This allows DWARF
+ * unwinders (e.g. libstdc++) to implement _Unwind_GetIPInfo(), which permits
+ * unwinding out of the signal trampoline without the need for the mysterious
+ * NOP.
+ */
 	.cfi_signal_frame
-	.cfi_def_cfa	x29, 0
-	.cfi_offset	x29, 0 * 8
-	.cfi_offset	x30, 1 * 8
+
+/*
+ * Tell the unwinder where to locate the frame record linking back to the
+ * interrupted context. We don't provide unwind info for registers other
+ * than the frame pointer and the link register here; in practice, this
+ * is sufficient for unwinding in C/C++ based runtimes and the values in
+ * the sigcontext may have been modified by this point anyway. Debuggers
+ * already have baked-in strategies for attempting to unwind out of signals.
+ */
+	.cfi_def_cfa	x29, 0
+	.cfi_offset	x29, 0 * 8
+	.cfi_offset	x30, 1 * 8
+
+/*
+ * This mysterious NOP is required for some unwinders (e.g. libc++) that
+ * unconditionally subtract one from the result of _Unwind_GetIP() in order to
+ * identify the calling function.
+ * Hack borrowed from arch/powerpc/kernel/vdso64/sigtramp.S.
+ */
+	nop	// Mysterious NOP
+
+/*
+ * GDB relies on being able to identify the sigreturn instruction sequence to
+ * unwind from signal handlers. We cannot, therefore, use SYM_FUNC_START()
+ * here, as it will emit a BTI C instruction and break the unwinder. Thankfully,
+ * this function is only ever called from a RET and so omitting the landing pad
+ * is perfectly fine.
+ */
+SYM_CODE_START(__kernel_rt_sigreturn)
 	mov	x8, #__NR_rt_sigreturn
 	svc	#0
 	.cfi_endproc
-SYM_FUNC_END(__kernel_rt_sigreturn)
+SYM_CODE_END(__kernel_rt_sigreturn)
+
+emit_aarch64_feature_1_and
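
The "mysterious NOP" comment above reflects a general unwinder convention: a captured return address points at the instruction after the call, so generic unwinders subtract one before deciding which function (and which unwind entry) a frame belongs to. A small userspace illustration of that convention, using glibc's backtrace() purely for demonstration (nothing below is kernel code):

#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

static void __attribute__((noinline)) show_return_addresses(void)
{
	void *pcs[8];
	int i, n;
	char **syms;

	/* Each captured address is a frame's return address, i.e. it points
	 * just *after* the corresponding call instruction. */
	n = backtrace(pcs, 8);
	syms = backtrace_symbols(pcs, n);

	for (i = 0; i < n; i++)
		printf("frame %d: %p  %s\n", i, pcs[i], syms ? syms[i] : "?");
	free(syms);
}

int main(void)
{
	show_return_addresses();	/* the caller's recorded PC lands past this call */
	return 0;
}

The address recorded for main()'s frame lands just beyond the call site, which is why tools back it up by one before attributing it; the trampoline's NOP gives that backed-up address something to land on that is still covered by unwind information.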


@@ -8,6 +8,7 @@
 #include <linux/init.h>
 #include <linux/linkage.h>
 #include <linux/const.h>
+#include <asm/assembler.h>
 #include <asm/page.h>
 	.globl vdso_start, vdso_end
@@ -19,3 +20,5 @@ vdso_start:
 vdso_end:
 	.previous
+
+emit_aarch64_feature_1_and


@@ -3,6 +3,9 @@
  * This file provides both A32 and T32 versions, in accordance with the
  * arm sigreturn code.
  *
+ * Please read the comments in arch/arm64/kernel/vdso/sigreturn.S to
+ * understand some of the craziness in here.
+ *
  * Copyright (C) 2018 ARM Limited
  */
@@ -17,39 +20,39 @@
 	.save {r0-r15}
 	.pad #COMPAT_SIGFRAME_REGS_OFFSET
 	nop
-SYM_FUNC_START(__kernel_sigreturn_arm)
+SYM_CODE_START(__kernel_sigreturn_arm)
 	mov r7, #__NR_compat_sigreturn
 	svc #0
 	.fnend
-SYM_FUNC_END(__kernel_sigreturn_arm)
+SYM_CODE_END(__kernel_sigreturn_arm)
 	.fnstart
 	.save {r0-r15}
 	.pad #COMPAT_RT_SIGFRAME_REGS_OFFSET
 	nop
-SYM_FUNC_START(__kernel_rt_sigreturn_arm)
+SYM_CODE_START(__kernel_rt_sigreturn_arm)
 	mov r7, #__NR_compat_rt_sigreturn
 	svc #0
 	.fnend
-SYM_FUNC_END(__kernel_rt_sigreturn_arm)
+SYM_CODE_END(__kernel_rt_sigreturn_arm)
 	.thumb
 	.fnstart
 	.save {r0-r15}
 	.pad #COMPAT_SIGFRAME_REGS_OFFSET
 	nop
-SYM_FUNC_START(__kernel_sigreturn_thumb)
+SYM_CODE_START(__kernel_sigreturn_thumb)
 	mov r7, #__NR_compat_sigreturn
 	svc #0
 	.fnend
-SYM_FUNC_END(__kernel_sigreturn_thumb)
+SYM_CODE_END(__kernel_sigreturn_thumb)
 	.fnstart
 	.save {r0-r15}
 	.pad #COMPAT_RT_SIGFRAME_REGS_OFFSET
 	nop
-SYM_FUNC_START(__kernel_rt_sigreturn_thumb)
+SYM_CODE_START(__kernel_rt_sigreturn_thumb)
 	mov r7, #__NR_compat_rt_sigreturn
 	svc #0
 	.fnend
-SYM_FUNC_END(__kernel_rt_sigreturn_thumb)
+SYM_CODE_END(__kernel_rt_sigreturn_thumb)


@@ -17,10 +17,6 @@
 #include "image.h"
-/* .exit.text needed in case of alternative patching */
-#define ARM_EXIT_KEEP(x)	x
-#define ARM_EXIT_DISCARD(x)
 OUTPUT_ARCH(aarch64)
 ENTRY(_text)
@@ -72,8 +68,8 @@ jiffies = jiffies_64;
 /*
  * The size of the PE/COFF section that covers the kernel image, which
- * runs from stext to _edata, must be a round multiple of the PE/COFF
- * FileAlignment, which we set to its minimum value of 0x200. 'stext'
+ * runs from _stext to _edata, must be a round multiple of the PE/COFF
+ * FileAlignment, which we set to its minimum value of 0x200. '_stext'
  * itself is 4 KB aligned, so padding out _edata to a 0x200 aligned
  * boundary should be sufficient.
  */
@@ -95,8 +91,6 @@ SECTIONS
 	 * order of matching.
 	 */
 	/DISCARD/ : {
-		ARM_EXIT_DISCARD(EXIT_TEXT)
-		ARM_EXIT_DISCARD(EXIT_DATA)
 		EXIT_CALL
 		*(.discard)
 		*(.discard.*)
@@ -139,6 +133,7 @@ SECTIONS
 	idmap_pg_dir = .;
 	. += IDMAP_DIR_SIZE;
+	idmap_pg_end = .;
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	tramp_pg_dir = .;
@@ -161,7 +156,7 @@ SECTIONS
 	__exittext_begin = .;
 	.exit.text : {
-		ARM_EXIT_KEEP(EXIT_TEXT)
+		EXIT_TEXT
 	}
 	__exittext_end = .;
@@ -175,7 +170,7 @@ SECTIONS
 		*(.altinstr_replacement)
 	}
-	. = ALIGN(PAGE_SIZE);
+	. = ALIGN(SEGMENT_ALIGN);
 	__inittext_end = .;
 	__initdata_begin = .;
@@ -188,7 +183,7 @@ SECTIONS
 		*(.init.rodata.* .init.bss)	/* from the EFI stub */
 	}
 	.exit.data : {
-		ARM_EXIT_KEEP(EXIT_DATA)
+		EXIT_DATA
 	}
 	PERCPU_SECTION(L1_CACHE_BYTES)
@@ -246,6 +241,7 @@ SECTIONS
 	. += INIT_DIR_SIZE;
 	init_pg_end = .;
+	. = ALIGN(SEGMENT_ALIGN);
 	__pecoff_data_size = ABSOLUTE(. - __initdata_begin);
 	_end = .;


@@ -138,7 +138,7 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)
 	write_sysreg(val, cptr_el2);
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
 		isb();
@@ -181,7 +181,7 @@ static void deactivate_traps_vhe(void)
 	 * above before we can switch to the EL2/EL0 translation regime used by
 	 * the host.
 	 */
-	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT_VHE));
+	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 	write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
 	write_sysreg(vectors, vbar_el1);
@@ -192,7 +192,7 @@ static void __hyp_text __deactivate_traps_nvhe(void)
 {
 	u64 mdcr_el2 = read_sysreg(mdcr_el2);
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		u64 val;
 		/*


@@ -107,7 +107,8 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 	write_sysreg(ctxt->sys_regs[MPIDR_EL1],	vmpidr_el2);
 	write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1);
-	if (!cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (has_vhe() ||
+	    !cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR);
 		write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR);
 	} else	if (!ctxt->__hyp_running_vcpu) {
@@ -138,7 +139,8 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 	write_sysreg(ctxt->sys_regs[PAR_EL1],	par_el1);
 	write_sysreg(ctxt->sys_regs[TPIDR_EL1],	tpidr_el1);
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE) &&
+	if (!has_vhe() &&
+	    cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT) &&
 	    ctxt->__hyp_running_vcpu) {
 		/*
 		 * Must only be done for host registers, hence the context


@@ -23,7 +23,7 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
 	local_irq_save(cxt->flags);
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		/*
 		 * For CPUs that are affected by ARM errata 1165522 or 1530923,
 		 * we cannot trust stage-1 to be in a correct state at that
@@ -63,7 +63,7 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
 static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm,
 						  struct tlb_inv_context *cxt)
 {
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		u64 val;
 		/*
@@ -79,8 +79,9 @@ static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm,
 		isb();
 	}
+	/* __load_guest_stage2() includes an ISB for the workaround. */
 	__load_guest_stage2(kvm);
-	isb();
+	asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm,
@@ -103,7 +104,7 @@ static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm,
 	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
 	isb();
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		/* Restore the registers to what they were */
 		write_sysreg_el1(cxt->tcr, SYS_TCR);
 		write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
@@ -117,7 +118,7 @@ static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm,
 {
 	write_sysreg(0, vttbr_el2);
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		/* Ensure write of the host VMID */
 		isb();
 		/* Restore the host's TCR_EL1 */


@@ -46,14 +46,6 @@ static const struct kvm_regs default_regs_reset32 = {
 			PSR_AA32_I_BIT | PSR_AA32_F_BIT),
 };
-static bool cpu_has_32bit_el1(void)
-{
-	u64 pfr0;
-
-	pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
-	return !!(pfr0 & 0x20);
-}
 /**
  * kvm_arch_vm_ioctl_check_extension
  *
@@ -66,7 +58,7 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	switch (ext) {
 	case KVM_CAP_ARM_EL1_32BIT:
-		r = cpu_has_32bit_el1();
+		r = cpus_have_const_cap(ARM64_HAS_32BIT_EL1);
 		break;
 	case KVM_CAP_GUEST_DEBUG_HW_BPS:
 		r = get_num_brps();
@@ -288,7 +280,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	switch (vcpu->arch.target) {
 	default:
 		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
-			if (!cpu_has_32bit_el1())
+			if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1))
 				goto out;
 			cpu_reset = &default_regs_reset32;
 		} else {
@@ -340,11 +332,50 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	return ret;
 }
-void kvm_set_ipa_limit(void)
+u32 get_kvm_ipa_limit(void)
 {
-	unsigned int ipa_max, pa_max, va_max, parange;
+	return kvm_ipa_limit;
+}
+
+int kvm_set_ipa_limit(void)
+{
+	unsigned int ipa_max, pa_max, va_max, parange, tgran_2;
+	u64 mmfr0;
+
+	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+	parange = cpuid_feature_extract_unsigned_field(mmfr0,
+				ID_AA64MMFR0_PARANGE_SHIFT);
+
+	/*
+	 * Check with ARMv8.5-GTG that our PAGE_SIZE is supported at
+	 * Stage-2. If not, things will stop very quickly.
+	 */
+	switch (PAGE_SIZE) {
+	default:
+	case SZ_4K:
+		tgran_2 = ID_AA64MMFR0_TGRAN4_2_SHIFT;
+		break;
+	case SZ_16K:
+		tgran_2 = ID_AA64MMFR0_TGRAN16_2_SHIFT;
+		break;
+	case SZ_64K:
+		tgran_2 = ID_AA64MMFR0_TGRAN64_2_SHIFT;
+		break;
+	}
+
+	switch (cpuid_feature_extract_unsigned_field(mmfr0, tgran_2)) {
+	default:
+	case 1:
+		kvm_err("PAGE_SIZE not supported at Stage-2, giving up\n");
+		return -EINVAL;
+	case 0:
+		kvm_debug("PAGE_SIZE supported at Stage-2 (default)\n");
+		break;
+	case 2:
+		kvm_debug("PAGE_SIZE supported at Stage-2 (advertised)\n");
+		break;
+	}
-	parange = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1) & 0x7;
 	pa_max = id_aa64mmfr0_parange_to_phys_shift(parange);
 	/* Clamp the IPA limit to the PA size supported by the kernel */
@@ -378,6 +409,8 @@ void kvm_set_ipa_limit(void)
 		 "KVM IPA limit (%d bit) is smaller than default size\n", ipa_max);
 	kvm_ipa_limit = ipa_max;
 	kvm_info("IPA Size Limit: %dbits\n", kvm_ipa_limit);
+
+	return 0;
 }
 /*
@@ -390,7 +423,7 @@ void kvm_set_ipa_limit(void)
  */
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 {
-	u64 vtcr = VTCR_EL2_FLAGS;
+	u64 vtcr = VTCR_EL2_FLAGS, mmfr0;
 	u32 parange, phys_shift;
 	u8 lvls;
@@ -406,7 +439,9 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 		phys_shift = KVM_PHYS_SHIFT;
 	}
-	parange = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1) & 7;
+	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+	parange = cpuid_feature_extract_unsigned_field(mmfr0,
+				ID_AA64MMFR0_PARANGE_SHIFT);
 	if (parange > ID_AA64MMFR0_PARANGE_MAX)
 		parange = ID_AA64MMFR0_PARANGE_MAX;
 	vtcr |= parange << VTCR_EL2_PS_SHIFT;
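
The move from a raw "& 7" mask to cpuid_feature_extract_unsigned_field() above reads the PARange and TGranX_2 fields out of ID_AA64MMFR0_EL1 by field position instead of a hard-coded mask. A hedged userspace sketch of that kind of 4-bit field extraction (the shift constants below are illustrative stand-ins, not taken from the kernel headers):

#include <stdint.h>
#include <stdio.h>

/*
 * Toy model of cpuid_feature_extract_unsigned_field(): arm64 ID registers
 * are a series of 4-bit fields, so a field at 'shift' is (reg >> shift) & 0xf.
 */
#define FIELD_WIDTH	4
#define PARANGE_SHIFT	0	/* PARange lives in bits [3:0]  */
#define TGRAN4_2_SHIFT	40	/* TGran4_2 lives in bits [43:40] */

static unsigned int extract_unsigned_field(uint64_t reg, unsigned int shift)
{
	return (unsigned int)((reg >> shift) & ((1u << FIELD_WIDTH) - 1));
}

int main(void)
{
	/* Hypothetical ID_AA64MMFR0_EL1 value: PARange = 5 (48-bit PA),
	 * TGran4_2 = 2 (4K granule explicitly advertised at stage-2). */
	uint64_t mmfr0 = (5ULL << PARANGE_SHIFT) | (2ULL << TGRAN4_2_SHIFT);

	printf("PARange  = %u\n", extract_unsigned_field(mmfr0, PARANGE_SHIFT));
	printf("TGran4_2 = %u\n", extract_unsigned_field(mmfr0, TGRAN4_2_SHIFT));
	return 0;
}

The extracted TGran4_2 value is what the new switch statement in kvm_set_ipa_limit() keys off: 1 means the granule is unusable at stage-2, while 0 and 2 mean it is supported by default or advertised explicitly.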


@@ -1456,9 +1456,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	ID_SANITISED(MVFR1_EL1),
 	ID_SANITISED(MVFR2_EL1),
 	ID_UNALLOCATED(3,3),
-	ID_UNALLOCATED(3,4),
-	ID_UNALLOCATED(3,5),
-	ID_UNALLOCATED(3,6),
+	ID_SANITISED(ID_PFR2_EL1),
+	ID_HIDDEN(ID_DFR1_EL1),
+	ID_SANITISED(ID_MMFR5_EL1),
 	ID_UNALLOCATED(3,7),
 	/* AArch64 ID registers */


@@ -20,36 +20,36 @@
  * x0 - bytes not copied
  */
-	.macro ldrb1 ptr, regB, val
-	uao_user_alternative 9998f, ldrb, ldtrb, \ptr, \regB, \val
+	.macro ldrb1 reg, ptr, val
+	uao_user_alternative 9998f, ldrb, ldtrb, \reg, \ptr, \val
 	.endm
-	.macro strb1 ptr, regB, val
-	strb \ptr, [\regB], \val
+	.macro strb1 reg, ptr, val
+	strb \reg, [\ptr], \val
 	.endm
-	.macro ldrh1 ptr, regB, val
-	uao_user_alternative 9998f, ldrh, ldtrh, \ptr, \regB, \val
+	.macro ldrh1 reg, ptr, val
+	uao_user_alternative 9998f, ldrh, ldtrh, \reg, \ptr, \val
 	.endm
-	.macro strh1 ptr, regB, val
-	strh \ptr, [\regB], \val
+	.macro strh1 reg, ptr, val
+	strh \reg, [\ptr], \val
 	.endm
-	.macro ldr1 ptr, regB, val
-	uao_user_alternative 9998f, ldr, ldtr, \ptr, \regB, \val
+	.macro ldr1 reg, ptr, val
+	uao_user_alternative 9998f, ldr, ldtr, \reg, \ptr, \val
 	.endm
-	.macro str1 ptr, regB, val
-	str \ptr, [\regB], \val
+	.macro str1 reg, ptr, val
+	str \reg, [\ptr], \val
 	.endm
-	.macro ldp1 ptr, regB, regC, val
-	uao_ldp 9998f, \ptr, \regB, \regC, \val
+	.macro ldp1 reg1, reg2, ptr, val
+	uao_ldp 9998f, \reg1, \reg2, \ptr, \val
 	.endm
-	.macro stp1 ptr, regB, regC, val
-	stp \ptr, \regB, [\regC], \val
+	.macro stp1 reg1, reg2, ptr, val
+	stp \reg1, \reg2, [\ptr], \val
 	.endm
 end	.req	x5


@@ -21,36 +21,36 @@
  * Returns:
  * x0 - bytes not copied
  */
-	.macro ldrb1 ptr, regB, val
-	uao_user_alternative 9998f, ldrb, ldtrb, \ptr, \regB, \val
+	.macro ldrb1 reg, ptr, val
+	uao_user_alternative 9998f, ldrb, ldtrb, \reg, \ptr, \val
 	.endm
-	.macro strb1 ptr, regB, val
-	uao_user_alternative 9998f, strb, sttrb, \ptr, \regB, \val
+	.macro strb1 reg, ptr, val
+	uao_user_alternative 9998f, strb, sttrb, \reg, \ptr, \val
 	.endm
-	.macro ldrh1 ptr, regB, val
-	uao_user_alternative 9998f, ldrh, ldtrh, \ptr, \regB, \val
+	.macro ldrh1 reg, ptr, val
+	uao_user_alternative 9998f, ldrh, ldtrh, \reg, \ptr, \val
 	.endm
-	.macro strh1 ptr, regB, val
-	uao_user_alternative 9998f, strh, sttrh, \ptr, \regB, \val
+	.macro strh1 reg, ptr, val
+	uao_user_alternative 9998f, strh, sttrh, \reg, \ptr, \val
 	.endm
-	.macro ldr1 ptr, regB, val
-	uao_user_alternative 9998f, ldr, ldtr, \ptr, \regB, \val
+	.macro ldr1 reg, ptr, val
+	uao_user_alternative 9998f, ldr, ldtr, \reg, \ptr, \val
 	.endm
-	.macro str1 ptr, regB, val
-	uao_user_alternative 9998f, str, sttr, \ptr, \regB, \val
+	.macro str1 reg, ptr, val
+	uao_user_alternative 9998f, str, sttr, \reg, \ptr, \val
 	.endm
-	.macro ldp1 ptr, regB, regC, val
-	uao_ldp 9998f, \ptr, \regB, \regC, \val
+	.macro ldp1 reg1, reg2, ptr, val
+	uao_ldp 9998f, \reg1, \reg2, \ptr, \val
 	.endm
-	.macro stp1 ptr, regB, regC, val
-	uao_stp 9998f, \ptr, \regB, \regC, \val
+	.macro stp1 reg1, reg2, ptr, val
+	uao_stp 9998f, \reg1, \reg2, \ptr, \val
 	.endm
 end	.req	x5


@@ -19,36 +19,36 @@
  * Returns:
  * x0 - bytes not copied
  */
-	.macro ldrb1 ptr, regB, val
-	ldrb \ptr, [\regB], \val
+	.macro ldrb1 reg, ptr, val
+	ldrb \reg, [\ptr], \val
 	.endm
-	.macro strb1 ptr, regB, val
-	uao_user_alternative 9998f, strb, sttrb, \ptr, \regB, \val
+	.macro strb1 reg, ptr, val
+	uao_user_alternative 9998f, strb, sttrb, \reg, \ptr, \val
 	.endm
-	.macro ldrh1 ptr, regB, val
-	ldrh \ptr, [\regB], \val
+	.macro ldrh1 reg, ptr, val
+	ldrh \reg, [\ptr], \val
 	.endm
-	.macro strh1 ptr, regB, val
-	uao_user_alternative 9998f, strh, sttrh, \ptr, \regB, \val
+	.macro strh1 reg, ptr, val
+	uao_user_alternative 9998f, strh, sttrh, \reg, \ptr, \val
 	.endm
-	.macro ldr1 ptr, regB, val
-	ldr \ptr, [\regB], \val
+	.macro ldr1 reg, ptr, val
+	ldr \reg, [\ptr], \val
 	.endm
-	.macro str1 ptr, regB, val
-	uao_user_alternative 9998f, str, sttr, \ptr, \regB, \val
+	.macro str1 reg, ptr, val
+	uao_user_alternative 9998f, str, sttr, \reg, \ptr, \val
 	.endm
-	.macro ldp1 ptr, regB, regC, val
-	ldp \ptr, \regB, [\regC], \val
+	.macro ldp1 reg1, reg2, ptr, val
+	ldp \reg1, \reg2, [\ptr], \val
 	.endm
-	.macro stp1 ptr, regB, regC, val
-	uao_stp 9998f, \ptr, \regB, \regC, \val
+	.macro stp1 reg1, reg2, ptr, val
+	uao_stp 9998f, \reg1, \reg2, \ptr, \val
 	.endm
 end	.req	x5


@@ -9,7 +9,7 @@
 #include <asm/alternative.h>
 #include <asm/assembler.h>
-	.cpu		generic+crc
+	.arch		armv8-a+crc
 	.macro		__crc32, c
 	cmp		x2, #16


@@ -24,36 +24,36 @@
  * Returns:
  * x0 - dest
  */
-	.macro ldrb1 ptr, regB, val
-	ldrb \ptr, [\regB], \val
+	.macro ldrb1 reg, ptr, val
+	ldrb \reg, [\ptr], \val
 	.endm
-	.macro strb1 ptr, regB, val
-	strb \ptr, [\regB], \val
+	.macro strb1 reg, ptr, val
+	strb \reg, [\ptr], \val
 	.endm
-	.macro ldrh1 ptr, regB, val
-	ldrh \ptr, [\regB], \val
+	.macro ldrh1 reg, ptr, val
+	ldrh \reg, [\ptr], \val
 	.endm
-	.macro strh1 ptr, regB, val
-	strh \ptr, [\regB], \val
+	.macro strh1 reg, ptr, val
+	strh \reg, [\ptr], \val
 	.endm
-	.macro ldr1 ptr, regB, val
-	ldr \ptr, [\regB], \val
+	.macro ldr1 reg, ptr, val
+	ldr \reg, [\ptr], \val
 	.endm
-	.macro str1 ptr, regB, val
-	str \ptr, [\regB], \val
+	.macro str1 reg, ptr, val
+	str \reg, [\ptr], \val
 	.endm
-	.macro ldp1 ptr, regB, regC, val
-	ldp \ptr, \regB, [\regC], \val
+	.macro ldp1 reg1, reg2, ptr, val
+	ldp \reg1, \reg2, [\ptr], \val
 	.endm
-	.macro stp1 ptr, regB, regC, val
-	stp \ptr, \regB, [\regC], \val
+	.macro stp1 reg1, reg2, ptr, val
+	stp \reg1, \reg2, [\ptr], \val
 	.endm
 	.weak memcpy


@@ -92,6 +92,9 @@ static void set_reserved_asid_bits(void)
 	bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
 }
+#define asid_gen_match(asid) \
+	(!(((asid) ^ atomic64_read(&asid_generation)) >> asid_bits))
 static void flush_context(void)
 {
 	int i;
@@ -220,8 +223,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	 * because atomic RmWs are totally ordered for a given location.
 	 */
 	old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
-	if (old_active_asid &&
-	    !((asid ^ atomic64_read(&asid_generation)) >> asid_bits) &&
+	if (old_active_asid && asid_gen_match(asid) &&
 	    atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
 				     old_active_asid, asid))
 		goto switch_mm_fastpath;
@@ -229,7 +231,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
-	if ((asid ^ atomic64_read(&asid_generation)) >> asid_bits) {
+	if (!asid_gen_match(asid)) {
 		asid = new_context(mm);
 		atomic64_set(&mm->context.id, asid);
 	}
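
The new asid_gen_match() helper above names a check that was previously open-coded twice: the 64-bit context id keeps the hardware ASID in its low asid_bits and a rollover generation count above that, so XORing with the current generation and shifting the ASID bits away leaves zero exactly when the generations match. A small standalone sketch of the same arithmetic (the width and values are made up for illustration):

#include <stdint.h>
#include <stdio.h>

/* Toy re-statement of the asid_gen_match() idea, without the kernel's
 * atomics: low ASID_BITS hold the hardware ASID, the bits above hold the
 * generation. */
#define ASID_BITS	16

static uint64_t asid_generation;	/* always a multiple of 1 << ASID_BITS */

static int asid_gen_match(uint64_t asid)
{
	return !((asid ^ asid_generation) >> ASID_BITS);
}

int main(void)
{
	asid_generation = 3ULL << ASID_BITS;			/* generation 3 */

	uint64_t current_gen = asid_generation | 42;		/* gen 3, ASID 42 */
	uint64_t stale_gen   = (2ULL << ASID_BITS) | 42;	/* gen 2, ASID 42 */

	printf("current generation: %d\n", asid_gen_match(current_gen));	/* 1 */
	printf("stale generation:   %d\n", asid_gen_match(stale_gen));		/* 0 */
	return 0;
}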


@@ -145,6 +145,11 @@ static const struct prot_bits pte_bits[] = {
 		.val	= PTE_UXN,
 		.set	= "UXN",
 		.clear	= " ",
+	}, {
+		.mask	= PTE_GP,
+		.val	= PTE_GP,
+		.set	= "GP",
+		.clear	= " ",
 	}, {
 		.mask	= PTE_ATTRINDX_MASK,
 		.val	= PTE_ATTRINDX(MT_DEVICE_nGnRnE),


@@ -272,7 +272,7 @@ int pfn_valid(unsigned long pfn)
 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
 		return 0;
-	if (!valid_section(__nr_to_section(pfn_to_section_nr(pfn))))
+	if (!valid_section(__pfn_to_section(pfn)))
 		return 0;
 #endif
 	return memblock_is_map_memory(addr);


@@ -609,6 +609,22 @@ static int __init map_entry_trampoline(void)
 core_initcall(map_entry_trampoline);
 #endif
+/*
+ * Open coded check for BTI, only for use to determine configuration
+ * for early mappings for before the cpufeature code has run.
+ */
+static bool arm64_early_this_cpu_has_bti(void)
+{
+	u64 pfr1;
+
+	if (!IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
+		return false;
+
+	pfr1 = read_sysreg_s(SYS_ID_AA64PFR1_EL1);
+	return cpuid_feature_extract_unsigned_field(pfr1,
+						    ID_AA64PFR1_BT_SHIFT);
+}
 /*
  * Create fine-grained mappings for the kernel.
  */
@@ -624,6 +640,14 @@ static void __init map_kernel(pgd_t *pgdp)
 	 */
 	pgprot_t text_prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
+	/*
+	 * If we have a CPU that supports BTI and a kernel built for
+	 * BTI then mark the kernel executable text as guarded pages
+	 * now so we don't have to rewrite the page tables later.
+	 */
+	if (arm64_early_this_cpu_has_bti())
+		text_prot = __pgprot_modify(text_prot, PTE_GP, PTE_GP);
 	/*
 	 * Only rodata will be remapped with different permissions later on,
 	 * all other segments are allowed to use contiguous mappings.
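
The __pgprot_modify(text_prot, PTE_GP, PTE_GP) call in the hunk above follows the usual clear-then-set shape for attribute updates: the bits named by the mask are cleared, then the requested bits are ORed back in, which with mask == bits simply sets the Guarded Page attribute. A toy, userspace-only sketch of that shape (the bit positions are invented for the example and do not match the real arm64 PTE layout):

#include <stdint.h>
#include <stdio.h>

#define DEMO_PTE_RDONLY	(1u << 0)
#define DEMO_PTE_UXN	(1u << 1)
#define DEMO_PTE_GP	(1u << 2)	/* stand-in for the Guarded Page bit */

/* New value is (old & ~mask) | bits, the pattern behind __pgprot_modify(). */
static uint32_t prot_modify(uint32_t prot, uint32_t mask, uint32_t bits)
{
	return (prot & ~mask) | bits;
}

int main(void)
{
	uint32_t text_prot = DEMO_PTE_RDONLY | DEMO_PTE_UXN;

	/* Mark the mapping as guarded while leaving the other attributes
	 * alone, the same shape as __pgprot_modify(text_prot, PTE_GP, PTE_GP). */
	text_prot = prot_modify(text_prot, DEMO_PTE_GP, DEMO_PTE_GP);

	printf("GP set: %s\n", (text_prot & DEMO_PTE_GP) ? "yes" : "no");
	return 0;
}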
