Commit Graph

2622 Commits

Author SHA1 Message Date
Nobuhiro Iwamatsu
ba051f097f arm64/kernel: Fix return value when cpu_online() fails in __cpu_up()
If boot_secondary() succeeded but cpu_online() failed in __cpu_up(),
-EIO used to be returned; since commit d22b115cbf ("arm64/kernel:
Simplify __cpu_up() by bailing out early"), 0 is returned instead.
As a result, bringup_wait_for_ap() causes the primary core to wait for a
long time, which may cause boot failure.
This commit restores the -EIO return code under the same conditions.
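
A condensed sketch of the restored error path (illustrative; the timeout
value and surrounding code are simplified, not the verbatim kernel diff):

```
	/* If the secondary never makes it to cpu_online(), report -EIO
	 * so bringup_wait_for_ap() fails promptly instead of waiting. */
	wait_for_completion_timeout(&cpu_running, msecs_to_jiffies(5000));
	if (cpu_online(cpu))
		return 0;

	pr_crit("CPU%u: failed to come online\n", cpu);
	return -EIO;
```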

Fixes: d22b115cbf ("arm64/kernel: Simplify __cpu_up() by bailing out early")
Signed-off-by: Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp>
Tested-by: Yuji Ishikawa <yuji2.ishikawa@toshiba.co.jp>
Acked-by: Will Deacon <will@kernel.org>
Cc: Gavin Shan <gshan@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200527233457.2531118-1-nobuhiro1.iwamatsu@toshiba.co.jp
[catalin.marinas@arm.com: return -EIO at the end of the function]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-05-28 12:04:55 +01:00
Anshuman Khandual
3577dd37c7 arm64/cpufeature: Add get_arm64_ftr_reg_nowarn()
There is no way to proceed when a requested register cannot be found in
arm64_ftr_reg[], and requesting a non-present register would be an error
as well. Hence let's just WARN_ON() when the search fails in
get_arm64_ftr_reg(), rather than checking the return value and doing a
BUG_ON() in some individual callers. But there are also call sites that
don't error out when the register search fails. Add a new helper,
get_arm64_ftr_reg_nowarn(), for such cases.
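
A hedged sketch of the resulting pair of helpers; ftr_reg_bsearch()
below is an illustrative stand-in for the actual search over
arm64_ftr_reg[]:

```
/* Hedged sketch, not verbatim kernel code. */
static struct arm64_ftr_reg *get_arm64_ftr_reg_nowarn(u32 sys_id)
{
	return ftr_reg_bsearch(sys_id);	/* may legitimately be NULL */
}

static struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id)
{
	struct arm64_ftr_reg *reg = get_arm64_ftr_reg_nowarn(sys_id);

	WARN_ON(!reg);			/* a failed lookup is a bug here */
	return reg;
}
```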

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/r/1590573876-19120-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-27 12:46:07 +01:00
Ingo Molnar
d1343da330 More EFI changes for v5.8:

Merge tag 'efi-changes-for-v5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi into efi/core

More EFI changes for v5.8:

 - Rename pr_efi/pr_efi_err to efi_info/efi_err, and use them consistently
 - Simplify and unify initrd loading
 - Parse the builtin command line on x86 (if provided)
 - Implement printk() support, including support for wide character strings
 - Some fixes for issues introduced by the first batch of v5.8 changes
 - Fix a missing prototypes warning
 - Simplify GDT handling in early mixed mode thunking code
 - Some other minor fixes and cleanups

Conflicts:
	drivers/firmware/efi/libstub/efistub.h

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-05-25 15:11:14 +02:00
David S. Miller
13209a8f73 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
The MSCC bug fix in 'net' had to be slightly adjusted because the
register accesses are done slightly differently in net-next.

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-24 13:47:27 -07:00
Ard Biesheuvel
6e99d3213b efi/libstub: Add missing prototype for PE/COFF entry point
Fix a missing prototype warning by adding a forward declaration
for the PE/COFF entrypoint, and while at it, align the function
name between the x86 and ARM versions of the stub.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-05-23 11:01:57 +02:00
Anshuman Khandual
858b8a8039 arm64/cpuinfo: Add ID_MMFR4_EL1 into the cpuinfo_arm64 context
ID_MMFR4_EL1 has been missing from the CPU context (i.e. cpuinfo_arm64).
This just adds the register along with other required changes.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-18-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 15:47:12 +01:00
Anshuman Khandual
14e270fa5c arm64/cpufeature: Add remaining feature bits in ID_AA64PFR1 register
Enable the remaining feature bits in the ID_AA64PFR1 register as per the
ARM DDI 0487F.a specification.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-12-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 15:47:12 +01:00
Anshuman Khandual
011e5f5bf5 arm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register
Enable the MPAM and SEL2 feature bits in the ID_AA64PFR0 register as per
the ARM DDI 0487F.a specification.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-11-git-send-email-anshuman.khandual@arm.com
[will: Make SEL2 a NONSTRICT feature per Suzuki]
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 15:47:11 +01:00
Anshuman Khandual
7cd51a5a84 arm64/cpufeature: Add remaining feature bits in ID_AA64ISAR0 register
Enable the TLB feature bit in the ID_AA64ISAR0 register as per the ARM
DDI 0487F.a specification.
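
For illustration, enabling such a bit amounts to adding an entry to the
register's feature table. A hedged example for the TLB field follows;
the exact flags used upstream may differ:

```
/* Illustrative table entry, not necessarily the upstream flags. */
static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
	/* ... existing fields ... */
	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE,
		       ID_AA64ISAR0_TLB_SHIFT, 4, 0),
	ARM64_FTR_END,
};
```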

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-10-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 15:47:11 +01:00
Anshuman Khandual
fcd6535322 arm64/cpufeature: Add remaining feature bits in ID_MMFR4 register
Enable all remaining feature bits, namely EVT, CCIDX, LSM, HPDS, CnP,
XNX and SpecSEI, in the ID_MMFR4 register per ARM DDI 0487F.a.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-9-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 15:47:11 +01:00
Anshuman Khandual
0ae43a99fe arm64/cpufeature: Add remaining feature bits in ID_PFR0 register
Enable the DIT and CSV2 feature bits in the ID_PFR0 register as per the
ARM DDI 0487F.a specification. Except for RAS and AMU, all other feature
bits are now enabled.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-8-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 15:47:11 +01:00
Anshuman Khandual
152accf847 arm64/cpufeature: Introduce ID_MMFR5 CPU register
This adds the basic building blocks required for the ID_MMFR5 CPU
register, which provides information about the implemented memory model
and memory management support in AArch32 state. This is added per the
ARM DDI 0487F.a specification.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-7-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 15:47:11 +01:00
Anshuman Khandual
dd35ec0704 arm64/cpufeature: Introduce ID_DFR1 CPU register
This adds the basic building blocks required for the ID_DFR1 CPU
register, which provides top-level information about the debug system in
AArch32 state. We hide the register from KVM guests, as we don't emulate
the 'MTPMU' feature.

This is added per the ARM DDI 0487F.a specification.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Suggested-by: Will Deacon <will@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-6-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 15:47:11 +01:00
Anshuman Khandual
16824085a7 arm64/cpufeature: Introduce ID_PFR2 CPU register
This adds the basic building blocks required for the ID_PFR2 CPU
register, which provides information about the AArch32 programmers'
model and must be interpreted along with the ID_PFR0 and ID_PFR1 CPU
registers. This is added per the ARM DDI 0487F.a specification.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-5-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 15:47:11 +01:00
Anshuman Khandual
e965bcb062 arm64/cpufeature: Make doublelock a signed feature in ID_AA64DFR0
The Double Lock feature can have the following possible values:

0b0000 - Double Lock implemented
0b1111 - Double Lock not implemented

But in case of a conflict, the safe value should be 0b1111. Hence this
must be a signed feature instead. Also change FTR_EXACT to
FTR_LOWER_SAFE. While here, fix the erroneous bit width value from 28
to 4.
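
A small standalone demonstration of why the signed treatment is safe,
mirroring what a signed 4-bit field extraction does (illustrative, not
the kernel's code):

```
#include <stdint.h>
#include <stdio.h>

/* Sign-extend a 4-bit field: 0b1111 becomes -1, 0b0000 stays 0. */
static int64_t extract_signed_field(uint64_t reg, int shift, int width)
{
	return ((int64_t)(reg << (64 - shift - width))) >> (64 - width);
}

int main(void)
{
	/* "Not implemented" (-1) is lower than "implemented" (0), so
	 * FTR_LOWER_SAFE picks 0b1111 on a conflict. */
	printf("%lld\n", (long long)extract_signed_field(0xf, 0, 4)); /* -1 */
	printf("%lld\n", (long long)extract_signed_field(0x0, 0, 4)); /*  0 */
	return 0;
}
```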

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-4-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 15:47:11 +01:00
Anshuman Khandual
1ed1b90a05 arm64/cpufeature: Drop TraceFilt feature exposure from ID_DFR0 register
The ID_DFR0-based TraceFilt feature should not be exposed to guests.
Hence let's drop it.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 15:47:11 +01:00
Anshuman Khandual
2a5bc6c47b arm64/cpufeature: Add explicit ftr_id_isar0[] for ID_ISAR0 register
ID_ISAR0[31..28] bits are RES0 in ARMv8 and Reserved/UNK in ARMv7.
Currently these bits get exposed through generic_id_ftr32[], which is
not desirable. Hence define an explicit ftr_id_isar0[] array for the
ID_ISAR0 register, where those bits can be hidden.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 15:47:11 +01:00
Will Deacon
a4eb355a3f arm64: vdso: Fix CFI directives in sigreturn trampoline
Daniel reports that the .cfi_startproc is misplaced for the sigreturn
trampoline, which causes LLVM's unwinder to misbehave:

  | I run into this with LLVM’s unwinder.
  | This combination was always broken.

This prompted Dave to question our use of CFI directives more generally,
and I ended up going down a rabbit hole trying to figure out how this
very poorly documented stuff gets used.

Move the CFI directives so that the "mysterious NOP" is included in
the .cfi_{start,end}proc block and add a bunch of comments so that I
can save myself another headache in future.

Cc: Tamas Zsoldos <tamas.zsoldos@arm.com>
Reported-by: Dave Martin <dave.martin@arm.com>
Reported-by: Daniel Kiss <daniel.kiss@arm.com>
Tested-by: Daniel Kiss <daniel.kiss@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 12:59:26 +01:00
Will Deacon
9a96428557 arm64: vdso: Don't prefix sigreturn trampoline with a BTI C instruction
For better or worse, GDB relies on the exact instruction sequence in the
VDSO sigreturn trampoline in order to unwind from signals correctly.
Commit c91db232da ("arm64: vdso: Convert to modern assembler annotations")
unfortunately added a BTI C instruction to the start of __kernel_rt_sigreturn,
which breaks this check. Thankfully, it's also not required, since the
trampoline is called from a RET instruction when returning from the
signal handler.

Remove the unnecessary BTI C instruction from __kernel_rt_sigreturn,
and do the same for the 32-bit VDSO as well for good measure.

Cc: Daniel Kiss <daniel.kiss@arm.com>
Cc: Tamas Zsoldos <tamas.zsoldos@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Fixes: c91db232da ("arm64: vdso: Convert to modern assembler annotations")
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-21 12:57:18 +01:00
Sudeep Holla
ad5a57dfe4 firmware: smccc: Drop smccc_version enum and use ARM_SMCCC_VERSION_1_x instead
Instead of maintaining two sets of enums/macros for tracking the SMCCC
version, drop the smccc_version enum and use ARM_SMCCC_VERSION_1_x
directly.

This is in preparation to drop smccc_version here and move it separately
under drivers/firmware/smccc.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Etienne Carriere <etienne.carriere@st.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Etienne Carriere <etienne.carriere@st.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200518091222.27467-5-sudeep.holla@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-20 19:10:37 +01:00
Anshuman Khandual
f73531f025 arm64/cpufeature: Drop open encodings while extracting parange
Currently there are multiple open-coded instances of the parange feature
width mask while fetching its value. Even the width mask value (0x7)
itself is not accurate: it should be (0xf), per
ID_AA64MMFR0_EL1.PARange[3:0] as in the ARM ARM (0487F.a). Replace them
with cpuid_feature_extract_unsigned_field(), which extracts a given
standard feature field (4 bits wide, i.e. 0xf mask).
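
A standalone illustration of the mask difference, mirroring
cpuid_feature_extract_unsigned_field() semantics (the register value is
an arbitrary example):

```
#include <stdint.h>
#include <stdio.h>

/* Standard ID register fields are 4 bits wide, so the mask is 0xf. */
static uint64_t extract_unsigned_field(uint64_t reg, int shift)
{
	return (reg >> shift) & 0xf;
}

int main(void)
{
	uint64_t mmfr0 = 0xd;	/* example field value: 0b1101 */

	/* The open-coded 0x7 mask would silently truncate this. */
	printf("0x7 mask: %llu\n", (unsigned long long)(mmfr0 & 0x7));
	printf("0xf mask: %llu\n",
	       (unsigned long long)extract_unsigned_field(mmfr0, 0));
	return 0;
}
```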

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/1589360614-1164-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-20 18:40:13 +01:00
Anshuman Khandual
c73433fc63 arm64/cpufeature: Validate hypervisor capabilities during CPU hotplug
This validates hypervisor capabilities, such as VMID width and IPA
range, for any hot-plugged CPU against the system's finalized values.
KVM's view of the IPA space is used when deciding whether a given CPU
may come up. While here, it factors out get_vmid_bits() for general use.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-kernel@vger.kernel.org

Suggested-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/1589248647-22925-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-20 15:59:23 +01:00
Keno Fischer
1cf6022bd9 arm64: Fix PTRACE_SYSEMU semantics
Quoth the man page:
```
       If the tracee was restarted by PTRACE_SYSCALL or PTRACE_SYSEMU, the
       tracee enters syscall-enter-stop just prior to entering any system
       call (which will not be executed if the restart was using
       PTRACE_SYSEMU, regardless of any change made to registers at this
       point or how the tracee is restarted after this stop).
```

The parenthetical comment is currently true on x86 and powerpc,
but not on arm64. arm64 re-checks the _TIF_SYSCALL_EMU
flag after the syscall entry ptrace stop. However, at this point,
it reflects which method was used to re-start the syscall
at the entry stop, rather than the method that was used to reach it.
Fix that by recording the original flag before performing the ptrace
stop, bringing the behavior in line with documentation and x86/powerpc.
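
A condensed sketch of the idea; report_syscall_entry() is an
illustrative stand-in, not the exact arm64 entry code:

```
	/*
	 * Latch the flags *before* the syscall-enter stop, so the
	 * decision to skip the syscall reflects how the tracee reached
	 * the stop, not how it is restarted afterwards.
	 */
	unsigned long flags = current_thread_info()->flags;

	if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
		report_syscall_entry(regs);	/* tracee stops here */
		if (flags & _TIF_SYSCALL_EMU)	/* original flag */
			return NO_SYSCALL;	/* never execute it */
	}
```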

Fixes: f086f67485 ("arm64: ptrace: add support for syscall emulation")
Cc: <stable@vger.kernel.org> # 5.3.x-
Signed-off-by: Keno Fischer <keno@juliacomputing.com>
Acked-by: Will Deacon <will@kernel.org>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Bin Lu <Bin.Lu@arm.com>
[catalin.marinas@arm.com: moved 'flags' bit masking]
[catalin.marinas@arm.com: changed 'flags' type to unsigned long]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-05-20 12:02:42 +01:00
James Morse
8fcc4ae6fa arm64: acpi: Make apei_claim_sea() synchronise with APEI's irq work
APEI is unable to do all of its error handling work in NMI context, so
it defers non-fatal work onto the irq_work queue. arch_irq_work_raise()
sends an IPI to the calling CPU, but this is not guaranteed to be taken
before returning to user-space.

Unless the exception interrupted a context with IRQs masked,
irq_work_run() can run immediately. Otherwise, return -EINPROGRESS to
indicate that ghes_notify_sea() found some work to do, but it hasn't
finished yet.

With this, apei_claim_sea() returning '0' means this external abort was
also a notification of a firmware-first RAS error, and that APEI has
processed the CPER records.
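
From a caller's perspective, the resulting contract can be summarised
like this (illustrative sketch, not verbatim kernel code):

```
	int ret = apei_claim_sea(regs);

	if (ret == 0) {
		/* Firmware-first RAS error; CPER records processed. */
		return;
	} else if (ret == -EINPROGRESS) {
		/* ghes_notify_sea() queued irq_work that hasn't run yet. */
	} else {
		/* Not a firmware-first notification; default handling. */
	}
```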

Signed-off-by: James Morse <james.morse@arm.com>
Tested-by: Tyler Baicar <baicar@os.amperecomputing.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2020-05-19 19:51:11 +02:00
Peter Zijlstra
69ea03b56e hardirq/nmi: Allow nested nmi_enter()
Since there are already a number of sites (ARM64, PowerPC) that effectively
nest nmi_enter(), make the primitive support this before adding even more.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lkml.kernel.org/r/20200505134100.864179229@linutronix.de
2020-05-19 15:51:17 +02:00
Yunfeng Ye
bd4298c72b arm64: stacktrace: Factor out some common code into on_stack()
There is some common code for stack checking, so factor it out into the
function on_stack().

No functional change.

Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
Link: https://lore.kernel.org/r/07b3b0e6-3f58-4fed-07ea-7d17b7508948@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-18 18:04:22 +01:00
Douglas Anderson
b322c65f8c arm64: Call debug_traps_init() from trap_init() to help early kgdb
A new kgdb feature will soon land (kgdb_earlycon) that lets us run
kgdb much earlier.  In order for everything to work properly it's
important that the break hook is setup by the time we process
"kgdbwait".

Right now the break hook is setup in debug_traps_init() and that's
called from arch_initcall().  That's a bit too late since
kgdb_earlycon really needs things to be setup by the time the system
calls dbg_late_init().

We could fix this by adding call_break_hook() into early_brk64() and
that works fine.  However, it's a little ugly.  Instead, let's just
add a call to debug_traps_init() straight from trap_init().  There's
already a documented dependency between trap_init() and
debug_traps_init() and this makes the dependency more obvious rather
than just relying on a comment.

NOTE: this solution isn't early enough to let us select the
"ARCH_HAS_EARLY_DEBUG" KConfig option that is introduced by the
kgdb_earlycon patch series.  That would only be set if we could do
breakpoints when early params are parsed.  This patch only enables
"late early" breakpoints, AKA breakpoints when dbg_late_init() is
called.  It's expected that this should be fine for most people.

It should also be noted that if you crash you can still end up in kgdb
earlier than debug_traps_init().  Since you don't need breakpoints to
debug a crash that's fine.

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200513160501.1.I0b5edf030cc6ebef6ab4829f8867cdaea42485d8@changeid
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-18 17:51:20 +01:00
Will Deacon
258c3d628f arm64: entry-ftrace.S: Update comment to indicate that x18 is live
The Shadow Call Stack pointer is held in x18, so update the ftrace
entry comment to indicate that it cannot be safely clobbered.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-18 17:47:50 +01:00
Will Deacon
871e100e43 scs: Move DEFINE_SCS macro into core code
Defining static shadow call stacks is not architecture-specific, so move
the DEFINE_SCS() macro into the core header file.

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-18 17:47:48 +01:00
Will Deacon
88485be531 scs: Move scs_overflow_check() out of architecture code
There is nothing architecture-specific about scs_overflow_check() as
it's just a trivial wrapper around scs_corrupted().

For parity with task_stack_end_corrupted(), rename scs_corrupted() to
task_scs_end_corrupted() and call it from schedule_debug() when
CONFIG_SCHED_STACK_END_CHECK is enabled, which better reflects its
purpose as a debug feature to catch inadvertent overflow of the SCS.
Finally, remove the unused scs_overflow_check() function entirely.

This has absolutely no impact on architectures that do not support SCS
(currently arm64 only).

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-18 17:47:40 +01:00
Will Deacon
711e8b0de0 arm64: scs: Use 'scs_sp' register alias for x18
x18 holds the SCS stack pointer value, so introduce a register alias to
make this easier to read in assembly code.

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-18 17:47:37 +01:00
Will Deacon
51189c7a7e arm64: scs: Store absolute SCS stack pointer value in thread_info
Storing the SCS information in thread_info as a {base,offset} pair
introduces an additional load instruction on the ret-to-user path,
since the SCS stack pointer in x18 has to be converted back to an offset
by subtracting the base.

Replace the offset with the absolute SCS stack pointer value instead
and avoid the redundant load.
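
A hedged sketch of the before/after thread_info layout (field names
condensed for illustration):

```
/* Before: x18 had to be rebuilt from a base/offset pair. */
struct thread_info_old {
	void		*scs_base;
	unsigned long	scs_offset;	/* x18 = scs_base + scs_offset */
};

/* After: the absolute value is stored, so x18 is loaded directly. */
struct thread_info_new {
	void		*scs_base;	/* still used for alloc/free */
	void		*scs_sp;	/* saved/restored as-is */
};
```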

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-18 17:47:22 +01:00
Will Deacon
d82755b2e7 KVM: arm64: Kill off CONFIG_KVM_ARM_HOST
CONFIG_KVM_ARM_HOST is just a proxy for CONFIG_KVM, so remove it in favour
of the latter.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20200505154520.194120-2-tabba@google.com
2020-05-16 15:04:18 +01:00
David S. Miller
da07f52d3c Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Move the bpf verifier trace check into the new switch statement in
HEAD.

Resolve the overlapping changes in hinic, where bug fixes overlap
the addition of VF support.

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-15 13:48:59 -07:00
Sami Tolvanen
439dc2a117 arm64: scs: Add shadow stacks for SDEI
This change adds per-CPU shadow call stacks for the SDEI handler.
Similarly to how the kernel stacks are handled, we add separate shadow
stacks for normal and critical events.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15 16:35:50 +01:00
Sami Tolvanen
5287569a79 arm64: Implement Shadow Call Stack
This change implements shadow stack switching, initial SCS set-up,
and interrupt shadow stacks for arm64.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15 16:35:50 +01:00
Sami Tolvanen
cde5dec89e arm64: vdso: Disable Shadow Call Stack
Shadow stacks are only available in the kernel, so disable SCS
instrumentation for the vDSO.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15 16:35:50 +01:00
Sami Tolvanen
e73f02c6eb arm64: efi: Restore register x18 if it was corrupted
If we detect a corrupted x18, restore the register before jumping back
to potentially SCS instrumented code. This is safe, because the wrapper
is called with preemption disabled and a separate shadow stack is used
for interrupt handling.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15 16:35:50 +01:00
Geert Uytterhoeven
357dd8a2af arm64: cpufeature: Add "or" to mitigations for multiple errata
Several actions are not mitigations for a single erratum, but for
multiple errata.  However, printing a line like

    CPU features: detected: ARM errata 1165522, 1530923

may give the false impression that all of the listed errata have been
detected.  This can confuse the user, who may think his Cortex-A55 is
suddenly affected by a Cortex-A76 erratum.

Add "or" to all descriptions of mitigations for multiple errata, to
make it clear that only one or more of the errata printed are
applicable, and not necessarily all of them.
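
For example, a capability covering several errata now carries a
description like the following (hedged sketch of one entry, not the
full structure):

```
/* Illustrative entry; only the wording of .desc changes. */
static const struct arm64_cpu_capabilities erratum_entry = {
	.desc		= "ARM errata 1165522, 1319367, or 1530923",
	.capability	= ARM64_WORKAROUND_SPECULATIVE_AT,
};
```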

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20200512145255.5520-1-geert+renesas@glider.be
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-12 18:37:13 +01:00
Amit Daniel Kachhap
203b1152d1 arm64/crash_core: Export KERNELPACMASK in vmcoreinfo
Recently the arm64 Linux kernel added support for the Armv8.3-A Pointer
Authentication feature. If this feature is enabled in the kernel and the
hardware supports address authentication, then return addresses are
signed and stored on the stack to prevent ROP-style attacks. The kdump
tool will now dump the kernel with signed LR values on the stack.

Any user analysis tool for this kernel dump may need the kernel PAC mask
information in vmcoreinfo to generate the correct return address for
stacktrace purposes as well as to resolve symbol names.

This patch is similar to commit ec6e822d1a ("arm64: expose user PAC
bit positions via ptrace"), which exposes PAC mask information via
ptrace interfaces.

The config guard ARM64_PTR_AUTH is removed from asm/compiler.h so macros
like ptrauth_kernel_pac_mask can be used unguarded. This config
protection is confusing, as the pointer authentication feature may be
missing at runtime even though this config is present.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/1589202116-18265-1-git-send-email-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-11 14:29:10 +01:00
Luke Nelson
579d1b3faa arm64: insn: Fix two bugs in encoding 32-bit logical immediates
This patch fixes two issues present in the current function for encoding
arm64 logical immediates when using the 32-bit variants of instructions.

First, the code does not correctly reject an all-ones 32-bit immediate,
and returns an undefined instruction encoding.

Second, the code incorrectly rejects some 32-bit immediates that are
actually encodable as logical immediates. The root cause is that the code
uses a default mask of 64-bit all-ones, even for 32-bit immediates.
This causes an issue later on when the default mask is used to fill the
top bits of the immediate with ones, shown here:

  /*
   * Pattern: 0..01..10..01..1
   *
   * Fill the unused top bits with ones, and check if
   * the result is a valid immediate (all ones with a
   * contiguous ranges of zeroes).
   */
  imm |= ~mask;
  if (!range_of_ones(~imm))
          return AARCH64_BREAK_FAULT;

To see the problem, consider an immediate of the form 0..01..10..01..1,
where the upper 32 bits are zero, such as 0x80000001. The code checks
if ~(imm | ~mask) contains a range of ones: the incorrect mask yields
1..10..01..10..0, which fails the check; the correct mask yields
0..01..10..0, which succeeds.

The fix for both issues is to generate a correct mask based on the
instruction immediate size, and use the mask to check for all-ones,
all-zeroes, and values wider than the mask.

Currently, arch/arm64/kvm/va_layout.c is the only user of this function,
which uses 64-bit immediates and therefore won't trigger these bugs.

We tested the new code against llvm-mc with all 1,302 encodable 32-bit
logical immediates and all 5,334 encodable 64-bit logical immediates.
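
A standalone illustration of the corrected mask handling; the helper
below is a simplified stand-in for the kernel's checks, not the full
encoder:

```
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Size the all-ones mask to the immediate width instead of assuming
 * 64 bits, then reject all-zeroes, all-ones, and anything wider than
 * the mask.
 */
static bool logical_imm_rejected(uint64_t imm, unsigned int esize)
{
	uint64_t mask = (esize == 64) ? ~0ULL : (1ULL << esize) - 1;

	return imm == 0 || imm == mask || (imm & ~mask) != 0;
}

int main(void)
{
	/* All-ones 32-bit immediate: rejected with the 32-bit mask. */
	printf("%d\n", logical_imm_rejected(0xffffffffULL, 32)); /* 1 */
	/* 0x80000001 is a valid 0..01..10..01..1 pattern: accepted. */
	printf("%d\n", logical_imm_rejected(0x80000001ULL, 32)); /* 0 */
	return 0;
}
```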

Fixes: ef3935eeeb ("arm64: insn: Add encoder for bitwise operations using literals")
Suggested-by: Will Deacon <will@kernel.org>
Co-developed-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20200508181547.24783-2-luke.r.nels@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-11 12:21:39 +01:00
Christoph Hellwig
d51c214541 arm64: fix the flush_icache_range arguments in machine_kexec
The second argument is the end "pointer", not the length.

Fixes: d28f6df130 ("arm64/kexec: Add core kexec support")
Cc: <stable@vger.kernel.org> # 4.8.x-
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-05-11 12:02:14 +01:00
Mark Brown
bf740a905f arm64: vdso: Map the vDSO text with guarded pages when built for BTI
The kernel is responsible for mapping the vDSO into userspace processes,
including mapping the text section as executable. Handle the mapping of
the vDSO for BTI similarly, mapping the text section as guarded pages so
the BTI annotations in the vDSO become effective when they are present.

This will mean that we can have BTI active for the vDSO in processes that
do not otherwise support BTI. This should not be an issue for any expected
use of the vDSO.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-12-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 17:53:20 +01:00
Mark Brown
5e02a1887f arm64: vdso: Force the vDSO to be linked as BTI when built for BTI
When the kernel and hence vDSO are built with BTI enabled force the linker
to link the vDSO as BTI. This will cause the linker to warn if any of the
input files do not have the BTI annotation, ensuring that we don't silently
fail to provide a vDSO that is built and annotated for BTI when the
kernel is being built with BTI.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-11-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 17:53:20 +01:00
Mark Brown
a6aadc2827 arm64: vdso: Annotate for BTI
Generate BTI annotations for all assembly files included in the 64 bit
vDSO.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-10-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 17:53:20 +01:00
Mark Brown
c8027285e3 arm64: Set GP bit in kernel page tables to enable BTI for the kernel
Now that the kernel is built with BTI annotations enable the feature by
setting the GP bit in the stage 1 translation tables.  This is done
based on the features supported by the boot CPU so that we do not need
to rewrite the translation tables.

In order to avoid potential issues on big.LITTLE systems when there are
a mix of BTI and non-BTI capable CPUs in the system when we have enabled
kernel mode BTI we change BTI to be a _STRICT_BOOT_CPU_FEATURE when we
have kernel BTI.  This will prevent any CPUs that don't support BTI
being started if the boot CPU supports BTI rather than simply not using
BTI as we do when supporting BTI only in userspace.  The main concern is
the possibility of BTYPE being preserved by a CPU that does not
implement BTI when a thread is migrated to it resulting in an incorrect
state which could generate an exception when the thread migrates back to
a CPU that does support BTI.  If we encounter practical systems which
mix BTI and non-BTI CPUs we will need to revisit this implementation.

Since we currently do not generate landing pads in the BPF JIT we only
map the base kernel text in this way.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-5-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 17:53:20 +01:00
Vincenzo Frascino
7e9f5e6629 arm64: vdso: Add --eh-frame-hdr to ldflags
LLVM's unwinder depends on the .eh_frame_hdr being present for
unwinding. However, when compiling Linux with GCC, the section
is not present in the vdso library object, and when compiling
with Clang, it is present but has zero length.

With GCC the problem was not spotted because libgcc unwinder does
not require the .eh_frame_hdr section to be present.

Add --eh-frame-hdr to ldflags to correctly generate and populate
the section for both GCC and LLVM.

Fixes: 28b1a824a4 ("arm64: vdso: Substitute gettimeofday() with C implementation")
Reported-by: Tamas Zsoldos <tamas.zsoldos@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Tamas Zsoldos <tamas.zsoldos@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200507104049.47834-1-vincenzo.frascino@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 14:40:12 +01:00
David S. Miller
3793faad7b Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Conflicts were all overlapping changes.

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 22:10:13 -07:00
Will Deacon
e5159827e0 Merge branches 'for-next/asm' and 'for-next/insn' into for-next/bti
Merge in dependencies for in-kernel Branch Target Identification support.

* for-next/asm:
  arm64: Disable old style assembly annotations
  arm64: kernel: Convert to modern annotations for assembly functions
  arm64: entry: Refactor and modernise annotation for ret_to_user
  x86/asm: Provide a Kconfig symbol for disabling old assembly annotations
  x86/32: Remove CONFIG_DOUBLEFAULT

* for-next/insn:
  arm64: insn: Report PAC and BTI instructions as skippable
  arm64: insn: Don't assume unrecognized HINTs are skippable
  arm64: insn: Provide a better name for aarch64_insn_is_nop()
  arm64: insn: Add constants for new HINT instruction decode
2020-05-05 15:19:09 +01:00
Will Deacon
80e4e56132 Merge branch 'for-next/bti-user' into for-next/bti
Merge in user support for Branch Target Identification, which narrowly
missed the cut for 5.7 after a late ABI concern.

* for-next/bti-user:
  arm64: bti: Document behaviour for dynamically linked binaries
  arm64: elf: Fix allnoconfig kernel build with !ARCH_USE_GNU_PROPERTY
  arm64: BTI: Add Kconfig entry for userspace BTI
  mm: smaps: Report arm64 guarded pages in smaps
  arm64: mm: Display guarded pages in ptdump
  KVM: arm64: BTI: Reset BTYPE when skipping emulated instructions
  arm64: BTI: Reset BTYPE when skipping emulated instructions
  arm64: traps: Shuffle code to eliminate forward declarations
  arm64: unify native/compat instruction skipping
  arm64: BTI: Decode BYTPE bits when printing PSTATE
  arm64: elf: Enable BTI at exec based on ELF program properties
  elf: Allow arch to tweak initial mmap prot flags
  arm64: Basic Branch Target Identification support
  ELF: Add ELF program property parsing support
  ELF: UAPI and Kconfig additions for ELF program properties
2020-05-05 15:15:58 +01:00
Will Deacon
433022b58e arm64: cpufeature: Extend comment to describe absence of field info
When a feature register field is omitted from the description of the
register, the corresponding bits are treated as STRICT RES0, including
for KVM guests. This is subtly different to declaring the field as
HIDDEN/STRICT/EXACT/0, so update the comment to call this out.

Signed-off-by: Will Deacon <will@kernel.org>
2020-05-05 11:45:21 +01:00
Anshuman Khandual
da7bad98ee arm64/cpuinfo: Move device_initcall() near cpuinfo_regs_init()
This moves device_initcall() near cpuinfo_regs_init(), making the
calling sequence clear. Besides, it is standard practice to have
device_initcall() (or any __initcall, for that matter) just after the
function it actually calls.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/1588595377-4503-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 18:08:40 +01:00
Mark Brown
47d67e4d19 arm64: insn: Report PAC and BTI instructions as skippable
The PAC and BTI instructions can be safely skipped so report them as
such, allowing them to be probed.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200504131326.18290-5-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 16:06:29 +01:00
Mark Brown
c71052cc9e arm64: insn: Don't assume unrecognized HINTs are skippable
Currently the kernel assumes that any HINT which it does not explicitly
recognise is skippable.  This is not robust as new instructions may be
added which need special handling, and in any case software should only
be using explicit NOP instructions for deliberate NOPs.

This has the effect of rendering PAC and BTI instructions unprobeable
which means that probes can't be inserted on the first instruction of
functions built with those features.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200504131326.18290-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 16:06:29 +01:00
Mark Brown
07dcd9677c arm64: insn: Provide a better name for aarch64_insn_is_nop()
The current aarch64_insn_is_nop() has exactly one caller which uses it
solely to identify if the instruction is a HINT that can safely be stepped,
requiring us to list things that aren't NOPs and make things more confusing
than they need to be. Rename the function to reflect the actual usage and
make things more clear.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200504131326.18290-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 16:06:29 +01:00
Mark Brown
bd507ca277 arm64: insn: Add constants for new HINT instruction decode
Add constants for decoding newer instructions defined in the HINT space.
Since we are now decoding both the op2 and CRm fields, rename the enum
as well; this is compatible with what the existing users are doing.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200504131326.18290-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 16:06:29 +01:00
Andrew Scull
02ab1f5018 arm64: Unify WORKAROUND_SPECULATIVE_AT_{NVHE,VHE}
Errata 1165522, 1319367 and 1530923 each allow TLB entries to be
allocated as a result of a speculative AT instruction. In order to
avoid mandating VHE on certain affected CPUs, apply the workaround to
both the nVHE and the VHE case for all affected CPUs.

Signed-off-by: Andrew Scull <ascull@google.com>
Acked-by: Will Deacon <will@kernel.org>
CC: Marc Zyngier <maz@kernel.org>
CC: James Morse <james.morse@arm.com>
CC: Suzuki K Poulose <suzuki.poulose@arm.com>
CC: Will Deacon <will@kernel.org>
CC: Steven Price <steven.price@arm.com>
Link: https://lore.kernel.org/r/20200504094858.108917-1-ascull@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 16:05:47 +01:00
Mark Brown
0343a7e463 arm64: kernel: Convert to modern annotations for assembly functions
In an effort to clarify and simplify the annotation of assembly
functions in the kernel, new macros have been introduced. These replace
ENTRY and ENDPROC, and also add a new annotation for static functions,
which previously had no ENTRY equivalent. Update the annotations in the
core kernel code to the new macros.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200501115430.37315-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 12:46:03 +01:00
Mark Brown
06607c7e93 arm64: entry: Refactor and modernise annotation for ret_to_user
As part of an effort to clarify and clean up the assembler annotations,
new macros have been introduced which annotate the start and end of
blocks of code in assembler files. Currently ret_to_user has an
out-of-line slow path, work_pending, placed above the main function,
which makes annotating the start and end of these blocks of code
awkward.

Since work_pending is only referenced from within ret_to_user try to make
things a bit clearer by moving it after the current ret_to_user and then
marking both ret_to_user and work_pending as part of a single ret_to_user
code block.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200501115430.37315-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04 12:46:03 +01:00
Rob Herring
0b104773b4 PCI: Constify struct pci_ecam_ops
struct pci_ecam_ops is typically DT match table data which is defined to
be const. It's also best practice for ops structs to be const. Ideally,
we'd make struct pci_ops const as well, but that becomes pretty
invasive, so for now we just cast it where needed.

Link: https://lore.kernel.org/r/20200409234923.21598-2-robh@kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Andrew Murray <amurray@thegoodpenguin.co.uk>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Len Brown <lenb@kernel.org>
Cc: Jonathan Chocron <jonnyc@amazon.com>
Cc: Zhou Wang <wangzhou1@hisilicon.com>
Cc: Robert Richter <rrichter@marvell.com>
Cc: Toan Le <toan@os.amperecomputing.com>
Cc: Marc Gonzalez <marc.w.gonzalez@free.fr>
Cc: Mans Rullgard <mans@mansr.com>
Cc: linux-acpi@vger.kernel.org
2020-05-01 16:28:59 +01:00
Łukasz Stelmach
51075e0cb7 arm64: kexec_file: print appropriate variable
The value of kbuf->memsz may differ from kbuf->bufsz after calling
kexec_add_buffer(). Hence both values should be logged.

Fixes: 52b2a8af74 ("arm64: kexec_file: load initrd and device-tree")
Fixes: 3751e728ce ("arm64: kexec_file: add crash dump support")
Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: James Morse <james.morse@arm.com>
Cc: Bhupesh Sharma <bhsharma@redhat.com>
Link: https://lore.kernel.org/r/20200430163142.27282-2-l.stelmach@samsung.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-30 21:31:22 +01:00
Vincenzo Frascino
1578e5d031 arm64: vdso: Add -fasynchronous-unwind-tables to cflags
On arm64 Linux, GCC uses -fasynchronous-unwind-tables -funwind-tables
by default since gcc-8, so the de facto platform ABI is now to allow
unwinding from async signal handlers.

However on bare metal targets (aarch64-none-elf), and on old gcc,
async and sync unwind tables are not enabled by default to avoid
runtime memory costs.

This means if linux is built with a baremetal toolchain the vdso.so
may not have unwind tables which breaks the gcc platform ABI guarantee
in userspace.

Add -fasynchronous-unwind-tables explicitly to the vgettimeofday.o
cflags to address the ABI change.

Fixes: 28b1a824a4 ("arm64: vdso: Substitute gettimeofday() with C implementation")
Cc: Will Deacon <will@kernel.org>
Reported-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-04-30 18:35:32 +01:00
Mark Rutland
1d09094aa6 arm64: vdso: use consistent 'map' nomenclature
The current code doesn't use a consistent naming scheme for structures,
enums, or variables, making it harder than necessary to determine the
relationship between these.

Let's make this easier by consistently using 'map' nomenclature for
mappings created in userspace, minimizing redundant comments, and
using designated array initializers to tie indices to their respective
elements.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200428164921.41641-5-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-29 10:52:53 +01:00
Mark Rutland
d3418f3839 arm64: vdso: use consistent 'abi' nomenclature
The current code doesn't use a consistent naming scheme for structures,
enums, or variables, making it harder than necessary to determine the
relationship between these.

Let's make this easier by consistently using 'vdso_abi' nomenclature.
The 'vdso_lookup' array is renamed to 'vdso_info' to describe what it
contains rather than how it is consumed.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200428164921.41641-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-29 10:52:53 +01:00
Mark Rutland
3ee16ff343 arm64: vdso: simplify arch_vdso_type ifdeffery
Currently we have some ifdeffery to determine the number of elements in
enum arch_vdso_type as VDSO_TYPES, rather than the usual pattern of
having the enum define this:

| enum foo_type {
|         FOO_TYPE_A,
|         FOO_TYPE_B,
| #ifdef CONFIG_C
|         FOO_TYPE_C,
| #endif
|         NR_FOO_TYPES
| }

... however, given we only use this number to size the vdso_lookup[]
array, this is redundant anyway as the compiler can automatically size
the array to fit all defined elements.

So let's remove the VDSO_TYPES to simplify the code.

At the same time, let's use designated initializers for the array
elements so that these are guaranteed to be at the expected indices,
regardless of how we modify the structure. For clarity, the redundant
explicit initialization of the enum elements is dropped.
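
A standalone illustration of the pattern, reusing the vdso_abi/vdso_info
naming from above (simplified):

```
#include <stdio.h>

enum vdso_abi {
	VDSO_ABI_AA64,
	VDSO_ABI_AA32,
};

/*
 * Designated initializers guarantee each entry lands at its enum
 * index, and the compiler sizes the array itself: no NR_* terminator
 * is needed.
 */
static const char *vdso_info[] = {
	[VDSO_ABI_AA64] = "vdso",
	[VDSO_ABI_AA32] = "vdso32",
};

int main(void)
{
	printf("%zu entries\n", sizeof(vdso_info) / sizeof(vdso_info[0]));
	return 0;
}
```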

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200428164921.41641-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-29 10:52:53 +01:00
Mark Rutland
74fc72e77d arm64: vdso: remove aarch32_vdso_pages[]
The aarch32_vdso_pages[] array is unnecessarily confusing. We only ever
use the C_VECTORS and C_SIGPAGE slots, and the other slots are unused
despite having corresponding mappings (sharing pages with the AArch64
vDSO).

Let's make this clearer by using separate variables for the vectors page
and the sigreturn page. A subsequent patch will clean up the C_* naming
and conflation of pages with mappings.

Note that since both the vectors page and the sig page are single
pages, and each mapping is a single page long, their pages arrays do
not need to be NULL-terminated (and this was not the case with the
existing code for the sig page, as it was the last entry in the
aarch32_vdso_pages array).

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200428164921.41641-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-29 10:52:53 +01:00
Daniel Borkmann
0b54142e4b Merge branch 'work.sysctl' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull in Christoph Hellwig's series that changes the sysctl's ->proc_handler
methods to take kernel pointers instead. It gets rid of the set_fs address
space overrides used by BPF. As per discussion, pull in the feature branch
into bpf-next as it relates to BPF sysctl progs.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200427071508.GV23230@ZenIV.linux.org.uk/T/
2020-04-28 21:23:38 +02:00
Ard Biesheuvel
76085aff29 efi/libstub/arm64: align PE/COFF sections to segment alignment
The arm64 kernel's segment alignment is fixed at 64 KB for any page
size, and relocatable kernels are able to fix up any misalignment of
the kernel image with respect to the 2 MB section alignment that is
mandated by the arm64 boot protocol.

Let's increase the PE/COFF section alignment to the same value, so that
kernels loaded by the UEFI PE/COFF loader are guaranteed to end up at
an address that doesn't require any relocation to be done if the
kernel is relocatable.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20200413155521.24698-6-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 17:36:52 +01:00
Vincenzo Frascino
86b8783701 arm64: vdso: Add '-Bsymbolic' to ldflags
Commit 28b1a824a4 ("arm64: vdso: Substitute gettimeofday() with C
implementation") introduced an unused 'VDSO_LDFLAGS' variable to the
vdso Makefile, suggesting that we should be passing '-Bsymbolic' to the
linker, as we do when linking the compat vDSO.

Although it's not strictly necessary to pass this flag, it would be
required if we were to add any internal references to the exported
symbols. It's also consistent with how we link the compat vDSO, so,
since there's no real downside to passing it, add '-Bsymbolic' to the
ldflags for the native vDSO.

Fixes: 28b1a824a4 ("arm64: vdso: Substitute gettimeofday() with C implementation")
Reported-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20200428150854.33130-1-vincenzo.frascino@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 17:17:09 +01:00
Gavin Shan
9d2d75ede5 arm64/kernel: Fix range on invalidating dcache for boot page tables
Prior to commit 8eb7e28d4c ("arm64/mm: move runtime pgds to
rodata"), idmap_pgd_dir, tramp_pg_dir, reserved_ttbr0, swapper_pg_dir,
and init_pg_dir were contiguous at the end of the kernel image. The
maintenance at the end of __create_page_tables assumed these were
contiguous, and affected everything from the start of idmap_pg_dir
to the end of init_pg_dir.

That commit moved all but init_pg_dir into the .rodata section, with
other data placed between idmap_pg_dir and init_pg_dir, but did not
update the maintenance. Hence the maintenance is performed on much
more data than necessary (but as the bootloader previously made this
clean to the PoC there is no functional problem).

As we only alter idmap_pg_dir and init_pg_dir, we only need to perform
maintenance for these. As the other dirs are in .rodata, the bootloader
will have initialised them as expected and cleaned them to the PoC. The
kernel will initialize them as necessary after enabling the MMU.

This patch reworks the maintenance to only cover the idmap_pg_dir and
init_pg_dir to avoid this unnecessary work.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200427235700.112220-1-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 14:48:03 +01:00
Will Deacon
a2a6996352 arm64: cpufeature: Add an overview comment for the cpufeature framework
Now that Suzuki isn't within throwing distance, I thought I'd better add
a rough overview comment to cpufeature.c so that it doesn't take me days
to remember how it works next time.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20200421142922.18950-9-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 14:23:45 +01:00
Will Deacon
98448cdfe7 arm64: cpufeature: Relax checks for AArch32 support at EL[0-2]
We don't need to be quite as strict about mismatched AArch32 support,
which is good because the friendly hardware folks have been busy
mismatching this to their hearts' content.

  * We don't care about EL2 or EL3 (there are silly comments concerning
    the latter, so remove those)

  * EL1 support is gated by the ARM64_HAS_32BIT_EL1 capability and handled
    gracefully when a mismatch occurs

  * EL0 support is gated by the ARM64_HAS_32BIT_EL0 capability and handled
    gracefully when a mismatch occurs

Relax the AArch32 checks to FTR_NONSTRICT.

Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20200421142922.18950-8-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 14:23:42 +01:00
Will Deacon
eab2f92607 arm64: cpufeature: Relax AArch32 system checks if EL1 is 64-bit only
If AArch32 is not supported at EL1, the AArch32 feature register fields
no longer advertise support for some system features:

  * ISAR4.SMC
  * PFR1.{Virt_frac, Sec_frac, Virtualization, Security, ProgMod}

In which case, we don't need to emit "SANITY CHECK" failures for all of
them.

Add logic to relax the strictness of individual feature register fields
at runtime and use this for the fields above if 32-bit EL1 is not
supported.
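
A hedged sketch of such a helper (the function name and field walk are
assumptions, not copied from the patch):

  static void relax_cpu_ftr_reg(u32 sys_id, int field)
  {
          struct arm64_ftr_reg *reg = get_arm64_ftr_reg(sys_id);
          struct arm64_ftr_bits *ftrp;

          /*
           * Flip the matching field to non-strict so a mismatch no
           * longer emits a "SANITY CHECK" warning or taints the kernel.
           */
          for (ftrp = (struct arm64_ftr_bits *)reg->ftr_bits; ftrp->width; ftrp++)
                  if (ftrp->shift == field)
                          ftrp->strict = FTR_NONSTRICT;
  }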

Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20200421142922.18950-7-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 14:23:37 +01:00
Will Deacon
1efcfe79d9 arm64: cpufeature: Factor out checking of AArch32 features
update_cpu_features() is pretty large, so split out the checking of the
AArch32 features into a separate function and call it after checking the
AArch64 features.
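
A hedged sketch of the resulting shape (helper signature assumed; only
two of the checked registers shown):

  static int update_32bit_cpu_features(int cpu, struct cpuinfo_arm64 *info,
                                       struct cpuinfo_arm64 *boot)
  {
          int taint = 0;

          /* AArch32 ID register checks, formerly inline in update_cpu_features() */
          taint |= check_update_ftr_reg(SYS_ID_ISAR4_EL1, cpu,
                                        info->reg_id_isar4, boot->reg_id_isar4);
          taint |= check_update_ftr_reg(SYS_ID_PFR1_EL1, cpu,
                                        info->reg_id_pfr1, boot->reg_id_pfr1);
          /* ... remaining AArch32 feature registers elided ... */

          return taint;
  }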

Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20200421142922.18950-6-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 14:23:34 +01:00
Will Deacon
13dc4d8361 arm64: cpufeature: Remove redundant call to id_aa64pfr0_32bit_el0()
There's no need to call id_aa64pfr0_32bit_el0() twice because the
sanitised value of ID_AA64PFR0_EL1 has already been updated for the CPU
being onlined.

Remove the redundant function call.

Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20200421142922.18950-5-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 14:23:30 +01:00
Will Deacon
540f76d12c arm64: cpufeature: Add CPU capability for AArch32 EL1 support
Although we emit a "SANITY CHECK" warning and taint the kernel if we
detect a CPU mismatch for AArch32 support at EL1, we still online the
CPU with disastrous consequences for any running 32-bit VMs.

Introduce a capability for AArch32 support at EL1 so that late onlining
of incompatible CPUs is forbidden.
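
A hedged sketch of what such an entry in the cpufeature capability
table looks like (field values assumed, not copied from the patch):

  {
          .desc = "32-bit EL1 Support",
          .capability = ARM64_HAS_32BIT_EL1,
          .type = ARM64_CPUCAP_SYSTEM_FEATURE,
          .matches = has_cpuid_feature,
          .sys_reg = SYS_ID_AA64PFR0_EL1,
          .sign = FTR_UNSIGNED,
          .field_pos = ID_AA64PFR0_EL1_SHIFT,
          .min_field_value = ID_AA64PFR0_EL1_32BIT_64BIT,
  },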

Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20200421142922.18950-4-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 14:23:23 +01:00
Will Deacon
0113340e6e arm64: cpufeature: Spell out register fields for ID_ISAR4 and ID_PFR1
In preparation for runtime updates to the strictness of some AArch32
features, spell out the register fields for ID_ISAR4 and ID_PFR1 to make
things clearer to read. Note that this isn't functionally necessary, as
the feature arrays themselves are not modified dynamically and remain
'const'.

Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20200421142922.18950-3-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 14:23:18 +01:00
Sai Prakash Ranjan
9d3f888135 arm64: cpufeature: Relax check for IESB support
We don't care if IESB is supported or not as we always set
SCTLR_ELx.IESB and, if it works, that's really great.

Relax the ID_AA64MMFR2.IESB cpufeature check so that we don't warn and
taint if it's mismatched.
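
The change itself is essentially one word in the ftr_id_aa64mmfr2[]
table (a hedged sketch; neighbouring entries elided):

  /* FTR_STRICT becomes FTR_NONSTRICT for the IESB field */
  ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
                 ID_AA64MMFR2_IESB_SHIFT, 4, 0),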

[will: rewrote commit message]

Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20200421142922.18950-2-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 14:23:11 +01:00
Zou Wei
2eaf63ba84 arm64: smp: Make cpus_stuck_in_kernel static
Fix the following sparse warning:

arch/arm64/kernel/smp.c:68:5: warning: symbol 'cpus_stuck_in_kernel'
was not declared. Should it be static?

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Zou Wei <zou_wei@huawei.com>
Link: https://lore.kernel.org/r/1587623606-96698-1-git-send-email-zou_wei@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 14:09:57 +01:00
Jason Yan
0dd2334fd5 arm64: entry: remove unneeded semicolon in el1_sync_handler()
Fix the following coccicheck warning:

arch/arm64/kernel/entry-common.c:97:2-3: Unneeded semicolon

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200418081909.41471-1-yanaijie@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 14:08:57 +01:00
Ard Biesheuvel
1eae811da6 arm64/kernel: vmlinux.lds: drop redundant discard/keep macros
ARM_EXIT_KEEP and ARM_EXIT_DISCARD are always defined in the same way,
so we don't really need them in the first place.
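
For reference, a hedged recollection of the (trivial) definitions in
arm64's vmlinux.lds.S that this drops:

  /* exit text/data is always discarded from the arm64 image */
  #define ARM_EXIT_KEEP(x)
  #define ARM_EXIT_DISCARD(x)     x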

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200416132730.25290-1-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 14:07:05 +01:00
George Spelvin
99ee28d996 arm64: kexec_file: Avoid temp buffer for RNG seed
After using get_random_bytes(), you want to wipe the buffer so the
seed remains secret.

In this case, we can eliminate the temporary buffer entirely.
fdt_setprop_placeholder() returns a pointer to the property value
buffer, allowing us to put the random data directly in there without
using a temporary buffer at all. Faster and less stack, all in one.
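
A hedged sketch of the pattern (the fdt/node handles, property name and
RNG_SEED_SIZE are assumptions):

  void *prop;
  int ret;

  /* Reserve space for the property, then fill it in place. */
  ret = fdt_setprop_placeholder(fdt, node, "rng-seed",
                                RNG_SEED_SIZE, &prop);
  if (!ret)
          get_random_bytes(prop, RNG_SEED_SIZE);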

Signed-off-by: George Spelvin <lkml@sdf.org>
Acked-by: Will Deacon <will@kernel.org>
Cc: Hsin-Yi Wang <hsinyi@chromium.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20200330173801.GA9199@SDF.ORG
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 13:55:51 +01:00
Ard Biesheuvel
348a625dee arm64: rename stext to primary_entry
For historical reasons, the primary entry routine living somewhere in
the inittext section is called stext(), which is confusing, given that
there is also a section marker called _stext which lives at a fixed
offset in the image (either 64 or 4096 bytes, depending on whether
CONFIG_EFI is enabled).

Let's rename stext to primary_entry(), which is a better description
and reflects the secondary_entry() routine that already exists for
SMP boot.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200326171423.3080-1-ardb@kernel.org
Reviewed-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 13:55:16 +01:00
Mark Rutland
62a679cb28 arm64: simplify ptrauth initialization
Currently __cpu_setup conditionally initializes the address
authentication keys and enables them in SCTLR_EL1, doing so differently
for the primary CPU and secondary CPUs, and skipping this work for CPUs
returning from an idle state. For the latter case, cpu_do_resume
restores the keys and SCTLR_EL1 value after the MMU has been enabled.

This flow is rather difficult to follow, so instead let's move the
primary and secondary CPU initialization into their respective boot
paths. By following the example of cpu_do_resume and doing so once the
MMU is enabled, we can always initialize the keys from the values in
thread_struct, and avoid the machinery necessary to pass the keys in
secondary_data or open-coding initialization for the boot CPU.

This means we perform an additional RMW of SCTLR_EL1, but we already do
this in the cpu_do_resume path, and for other features in cpufeature.c,
so this isn't a major concern in a bringup path. Note that even while
the enable bits are clear, the key registers are accessible.

As this now renders the argument to __cpu_setup redundant, let's also
remove that entirely. Future extensions can follow a similar approach to
initialize values that differ for primary/secondary CPUs.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200423101606.37601-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 11:23:21 +01:00
Mark Rutland
d0055da526 arm64: remove ptrauth_keys_install_kernel sync arg
The 'sync' argument to the ptrauth_keys_install_kernel macro is somewhat
opaque at callsites, so instead let's have regular and _nosync variants
of the macro to make this a little more obvious.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200423101606.37601-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28 11:23:21 +01:00
Christoph Hellwig
32927393dc sysctl: pass kernel pointers to ->proc_handler
Instead of having all the sysctl handlers deal with user pointers, which
is rather hairy in terms of the BPF interaction, copy the input to and
from userspace in common code.  This also means that the strings are
always NUL-terminated by the common code, making the API a little bit
safer.

As most handlers just pass the data through to one of the common
handlers, a lot of the changes are mechanical.
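
As a concrete illustration, the prototype of a common handler changes
like so (a sketch, not the full diff):

  /* before: each handler copied to/from a user pointer itself */
  int proc_dointvec(struct ctl_table *table, int write,
                    void __user *buffer, size_t *lenp, loff_t *ppos);

  /* after: common code does the copying; handlers see kernel memory */
  int proc_dointvec(struct ctl_table *table, int write,
                    void *buffer, size_t *lenp, loff_t *ppos);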

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-04-27 02:07:40 -04:00
Mark Rutland
9cc3d0c691 arm64: vdso: don't free unallocated pages
The aarch32_vdso_pages[] array never has entries allocated in the C_VVAR
or C_VDSO slots, and as the array is zero-initialized, these contain
NULL.

However, in __aarch32_alloc_vdso_pages(), when
aarch32_alloc_kuser_vdso_page() fails we attempt to free a page whose
struct page pointer is NULL, which is obviously nonsensical.

This patch removes the erroneous page freeing.

Fixes: 7c1deeeb01 ("arm64: compat: VDSO setup for compat layer")
Cc: <stable@vger.kernel.org> # 5.3.x-
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-04-15 11:13:16 +01:00
Linus Torvalds
87ebc45d2d arm64 fixes:
- Ensure that the compiler and linker versions are aligned so that ld
   doesn't complain about not understanding a .note.gnu.property section
   (emitted when pointer authentication is enabled).
 
 - Force -mbranch-protection=none when the feature is not enabled, in
   case a compiler may choose a different default value.
 
 - Remove CONFIG_DEBUG_ALIGN_RODATA. It was never in defconfig and rarely
   enabled.
 
 - Fix the instruction-checking mask for 16-bit Thumb instructions in
   the emulation of the SETEND instruction (it could match the bottom
   half of a 32-bit Thumb-2 instruction).
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEE5RElWfyWxS+3PLO2a9axLQDIXvEFAl6PUYAACgkQa9axLQDI
 XvH83g/7B5v0RFqjqVW4/cQKoN1rii7qSA8pBfNgGiCMJKtoGvliAlp3xWEtlW0h
 nYJ4gCvey946r5kvZrjdBXC/Ulo2CcGYtX0n8d+8IB6wXAnGcQ0DUBUFZ4+fAU9Z
 F7+R7its24dma9R1wIFHFmQUdlO+EgQTfQFvhQKYMSNVaFQF73Sp/vk3oKhJ2E0x
 QevgDBQSmmcX3DFxhUW7BdcdboBgtTDUGdhcImdorgp7QmI1r40espJKX4VMKvmb
 pfzwg+i7KM6N1RDhRfA2oFMegXwI3rvM3XesqYaua8+xWD5vJuIQfq+ysEq9F9x/
 Hnu+W9nbcN8RKQ9JToiqkE7ifuOBTvaIJaqsgIXYSqtYjatuPAh85MkrorHi9Ji2
 9i7fc0GMTgtgYDo/93++l8SmmRJMX+h+9KtGtxx39+UqGjToJMCnPGjwBSwe4wdK
 lKOAgj488HHsNwTlrRUnq1hXjNjd1w+ON7JM2L3IyRNX/eWN60VxwzwHkZMByCOj
 jlcY4ISWquigW4w9Sp4nxEhLF9dWT1+OrE33Xh3CUxPU94jSEvgcDHcxuGeGOlrA
 QjN1B2APZFox8XbOsLgeG2kKe5C3Fui90SEn0GyA0ncVLsXDI78VnVJR9uz5+6Pd
 ALVQKkJxswhSDPQFlH+7CmQAcr8jWyLEEvyXXaZsoJmewzCpEPM=
 =pHRG
 -----END PGP SIGNATURE-----

Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 fixes from Catalin Marinas:

 - Ensure that the compiler and linker versions are aligned so that ld
   doesn't complain about not understanding a .note.gnu.property section
   (emitted when pointer authentication is enabled).

 - Force -mbranch-protection=none when the feature is not enabled, in
   case a compiler may choose a different default value.

 - Remove CONFIG_DEBUG_ALIGN_RODATA. It was never in defconfig and
   rarely enabled.

 - Fix the instruction-checking mask for 16-bit Thumb instructions in
   the emulation of the SETEND instruction (it could match the bottom
   half of a 32-bit Thumb-2 instruction).

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: armv8_deprecated: Fix undef_hook mask for thumb setend
  arm64: remove CONFIG_DEBUG_ALIGN_RODATA feature
  arm64: Always force a branch protection mode when the compiler has one
  arm64: Kconfig: ptrauth: Add binutils version check to fix mismatch
  init/kconfig: Add LD_VERSION Kconfig
2020-04-09 11:04:16 -07:00
Fredrik Strupe
fc2266011a arm64: armv8_deprecated: Fix undef_hook mask for thumb setend
For thumb instructions, call_undef_hook() in traps.c first reads a u16,
and if the u16 indicates a T32 instruction (u16 >= 0xe800), a second
u16 is read, which then makes up the lower half-word of a T32
instruction. For T16 instructions, the second u16 is not read, which
makes the resulting u32 opcode always have the upper half set to 0.

However, having the upper half of instr_mask in the undef_hook set to 0
masks out the upper half of all thumb instructions - both T16 and T32.
This results in trapped T32 instructions with the lower half-word equal
to the T16 encoding of setend (b650) being matched, even though the upper
half-word is not 0000 and thus indicates a T32 opcode.

An example of such a T32 instruction is eaa0b650, which should raise a
SIGILL since T32 instructions with an eaa prefix are unallocated as per
the Arm ARM, but instead works as a SETEND because the second half-word
is set to b650.

This patch fixes the issue by extending instr_mask to cover the upper
half of the u32, which will still match T16 instructions (where the
upper half is 0), but not T32 instructions.
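
A hedged sketch of the thumb entry in the setend undef_hook table (the
mask and value come from the commit text; other fields and the A32
entry are elided):

  {
          .instr_mask     = 0xfffffff7,   /* was 0x0000fff7 */
          .instr_val      = 0x0000b650,   /* T16 SETEND, E bit masked out */
  },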

Fixes: 2d888f48e0 ("arm64: Emulate SETEND for AArch32 tasks")
Cc: <stable@vger.kernel.org> # 4.0.x-
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Fredrik Strupe <fredrik@strupe.net>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-04-08 18:19:20 +01:00
Linus Torvalds
ff2ae607c6 SPDX patches for 5.7-rc1.
Here are 3 SPDX patches for 5.7-rc1.
 
 One fixes up the SPDX tag for a single driver, while the other two go
 through the tree and add SPDX tags for all of the .gitignore files as
 needed.
 
 Nothing too complex, but you will get a merge conflict with your current
 tree that should be trivial to handle (one file modified by two things,
 one file deleted).
 
 All 3 of these have been in linux-next for a while, with no reported
 issues other than the merge conflict.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 
 iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCXodg5A8cZ3JlZ0Brcm9h
 aC5jb20ACgkQMUfUDdst+ykySQCgy9YDrkz7nWq6v3Gohl6+lW/L+rMAnRM4uTZm
 m5AuCzO3Azt9KBi7NL+L
 =2Lm5
 -----END PGP SIGNATURE-----

Merge tag 'spdx-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx

Pull SPDX updates from Greg KH:
 "Here are three SPDX patches for 5.7-rc1.

  One fixes up the SPDX tag for a single driver, while the other two go
  through the tree and add SPDX tags for all of the .gitignore files as
  needed.

  Nothing too complex, but you will get a merge conflict with your
  current tree that should be trivial to handle (one file modified by
  two things, one file deleted).

  All three of these have been in linux-next for a while, with no
  reported issues other than the merge conflict"

* tag 'spdx-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx:
  ASoC: MT6660: make spdxcheck.py happy
  .gitignore: add SPDX License Identifier
  .gitignore: remove too obvious comments
2020-04-03 13:12:26 -07:00
Linus Torvalds
3cd86a58f7 arm64 updates for 5.7:
- In-kernel Pointer Authentication support (previously only offered to
   user space).
 
 - ARM Activity Monitors (AMU) extension support allowing better CPU
   utilisation numbers for the scheduler (frequency invariance).
 
 - Memory hot-remove support for arm64.
 
 - Lots of asm annotations (SYM_*) in preparation for the in-kernel
   Branch Target Identification (BTI) support.
 
 - arm64 perf updates: ARMv8.5-PMU 64-bit counters, refactoring the PMU
   init callbacks, support for new DT compatibles.
 
 - IPv6 header checksum optimisation.
 
 - Fixes: SDEI (software delegated exception interface) double-lock on
   hibernate with shared events.
 
 - Minor clean-ups and refactoring: cpu_ops accessor, cpu_do_switch_mm()
   converted to C, cpufeature finalisation helper.
 
 - sys_mremap() comment explaining the asymmetric address untagging
   behaviour.
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEE5RElWfyWxS+3PLO2a9axLQDIXvEFAl6DVyIACgkQa9axLQDI
 XvHkqRAAiZA2EYKiQL4M1DJ1cNTADjT7xKX9+UtYBXj7GMVhgVWdunpHVE6qtfgk
 cT6avmKrS/6PDqizJgr+Z1yX8x3Kvs57G4BvmIUKIw97mkdewvFQ9JKv6VA1vb86
 7Qrl1WzqsGg5Kj9uUfI4h+ZoT1H4C/9PQeFxJwgZRtF9DxRh8O7VeZI+JCu8Aub2
 lIkjI8rh+EpTsGT9h/PMGWUcawnKQloZ1/F+GfMAuYBvIv2RNN2xVreJtTmm4NyJ
 VcpL0KCNyAI2lGdaJg5nBLRDyGuXDm5i+PLsCSXMquI4fie00txXeD8sjbeuO0ks
 YTJ0EhmUUhbSE17go+SxYiEFE0v09i+lD5ud+B4Vmojp0KTczTta9VSgURlbb2/9
 n9biq5G3PPDNIrZqiTT2Tf4AMz1350nkbzL2gzKecM5aIzR/u3y5yII5CgfZtFnj
 7bGbyFpFpcqI7UaISPsNCxmknbTt/7ff0WM3+7SbecxI3AD2mnxsOdN9JTLyhDp+
 owjyiaWxl5zMWF9DhplLG/9BKpNWSxh3skazdOdELd8GTq2MbJlXrVG2XgXTAOh3
 y1s6RQrfw8zXh8TSqdmmzauComXIRWTum/sbVB3U8Z3AUsIeq/NTSbN5X9JyIbOP
 HOabhlVhhkI6omN1grqPX4jwUiZLZoNfn7Ez4q71549KVK/uBtA=
 =LJVX
 -----END PGP SIGNATURE-----

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Catalin Marinas:
 "The bulk is in-kernel pointer authentication, activity monitors and
  lots of asm symbol annotations. I also queued the sys_mremap() patch
  commenting the asymmetry in the address untagging.

  Summary:

   - In-kernel Pointer Authentication support (previously only offered
     to user space).

   - ARM Activity Monitors (AMU) extension support allowing better CPU
     utilisation numbers for the scheduler (frequency invariance).

   - Memory hot-remove support for arm64.

   - Lots of asm annotations (SYM_*) in preparation for the in-kernel
     Branch Target Identification (BTI) support.

   - arm64 perf updates: ARMv8.5-PMU 64-bit counters, refactoring the
     PMU init callbacks, support for new DT compatibles.

   - IPv6 header checksum optimisation.

   - Fixes: SDEI (software delegated exception interface) double-lock on
     hibernate with shared events.

   - Minor clean-ups and refactoring: cpu_ops accessor,
     cpu_do_switch_mm() converted to C, cpufeature finalisation helper.

   - sys_mremap() comment explaining the asymmetric address untagging
     behaviour"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (81 commits)
  mm/mremap: Add comment explaining the untagging behaviour of mremap()
  arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
  arm64: Introduce get_cpu_ops() helper function
  arm64: Rename cpu_read_ops() to init_cpu_ops()
  arm64: Declare ACPI parking protocol CPU operation if needed
  arm64: move kimage_vaddr to .rodata
  arm64: use mov_q instead of literal ldr
  arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
  lkdtm: arm64: test kernel pointer authentication
  arm64: compile the kernel with ptrauth return address signing
  kconfig: Add support for 'as-option'
  arm64: suspend: restore the kernel ptrauth keys
  arm64: __show_regs: strip PAC from lr in printk
  arm64: unwind: strip PAC from kernel addresses
  arm64: mask PAC bits of __builtin_return_address
  arm64: initialize ptrauth keys for kernel booting task
  arm64: initialize and switch ptrauth kernel keys
  arm64: enable ptrauth earlier
  arm64: cpufeature: handle conflicts based on capability
  arm64: cpufeature: Move cpu capability helpers inside C file
  ...
2020-03-31 10:05:01 -07:00
Linus Torvalds
dbb381b619 timekeeping and timer updates:
Core:
 
   - Consolidation of the vDSO build infrastructure to address the
     difficulties of cross-builds for ARM64 compat vDSO libraries by
     restricting the exposure of header content to the vDSO build.
 
     This is achieved by splitting out header content into separate
     headers, which contain only the minimal information necessary to
     build the vDSO. These new headers are included from the kernel
     headers and the vDSO-specific files.
 
   - Enhancements to the generic vDSO library allowing more fine-grained
     control over the compiled-in code, further reducing
     architecture-specific storage and preparing for the adoption of the
     generic library by PPC.
 
   - Cleanup and consolidation of the exit related code in posix CPU timers.
 
   - Small cleanups and enhancements here and there
 
  Drivers:
 
   - The obligatory new drivers: Ingenic JZ47xx and X1000 TCU support
 
   - Correct the clock rate of PIT64b global clock
 
   - setup_irq() cleanup
 
   - Preparation for PWM and suspend support for the TI DM timer
 
   - Expand the fttmr010 driver to support ast2600 systems
 
   - The usual small fixes, enhancements and cleanups all over the place
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEQp8+kY+LLUocC4bMphj1TA10mKEFAl6B+QETHHRnbHhAbGlu
 dXRyb25peC5kZQAKCRCmGPVMDXSYofJ5D/94s5fpaqiuNcaAsLq2D3DRIrTnqxx7
 yEeAOPcbYV1bM1SgY/M83L5yGc2S8ny787e26abwRTCZhZV3eAmRTphIFFIZR0Xk
 xS+i67odscbdJTRtztKj3uQ9rFxefszRuphyaa89pwSY9nnyMWLcahGSQOGs0LJK
 hvmgwPjyM1drNfPxgPiaFg7vDr2XxNATpQr/FBt+BhelvVan8TlAfrkcNPiLr++Y
 Axz925FP7jMaRRbZ1acji34gLiIAZk0jLCUdbix7YkPrqDB4GfO+v8Vez+fGClbJ
 uDOYeR4r1+Be/BtSJtJ2tHqtsKCcAL6agtaE2+epZq5HbzaZFRvBFaxgFNF8WVcn
 3FFibdEMdsRNfZTUVp5wwgOLN0UIqE/7LifE12oLEL2oFB5H2PiNEUw3E02XHO11
 rL3zgHhB6Ke1sXKPCjSGdmIQLbxZmV5kOlQFy7XuSeo5fmRapVzKNffnKcftIliF
 1HNtZbgdA+3tdxMFCqoo1QX+kotl9kgpslmdZ0qHAbaRb3xqLoSskbqEjFRMuSCC
 8bjJrwboD9T5GPfwodSCgqs/58CaSDuqPFbIjCay+p90Fcg6wWAkZtyG04ZLdPRc
 GgNNdN4gjTD9bnrRi8cH47z1g8OO4vt4K4SEbmjo8IlDW+9jYMxuwgR88CMeDXd7
 hu7aKsr2I2q/WQ==
 =5o9G
 -----END PGP SIGNATURE-----

Merge tag 'timers-core-2020-03-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull timekeeping and timer updates from Thomas Gleixner:
 "Core:

   - Consolidation of the vDSO build infrastructure to address the
     difficulties of cross-builds for ARM64 compat vDSO libraries by
     restricting the exposure of header content to the vDSO build.

      This is achieved by splitting out header content into separate
      headers, which contain only the minimal information necessary to
      build the vDSO. These new headers are included from the kernel
      headers and the vDSO-specific files.

    - Enhancements to the generic vDSO library allowing more fine-grained
      control over the compiled-in code, further reducing
      architecture-specific storage and preparing for the adoption of the
      generic library by PPC.

   - Cleanup and consolidation of the exit related code in posix CPU
     timers.

   - Small cleanups and enhancements here and there

  Drivers:

   - The obligatory new drivers: Ingenic JZ47xx and X1000 TCU support

   - Correct the clock rate of PIT64b global clock

   - setup_irq() cleanup

   - Preparation for PWM and suspend support for the TI DM timer

   - Expand the fttmr010 driver to support ast2600 systems

   - The usual small fixes, enhancements and cleanups all over the
     place"

* tag 'timers-core-2020-03-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (80 commits)
  Revert "clocksource/drivers/timer-probe: Avoid creating dead devices"
  vdso: Fix clocksource.h macro detection
  um: Fix header inclusion
  arm64: vdso32: Enable Clang Compilation
  lib/vdso: Enable common headers
  arm: vdso: Enable arm to use common headers
  x86/vdso: Enable x86 to use common headers
  mips: vdso: Enable mips to use common headers
  arm64: vdso32: Include common headers in the vdso library
  arm64: vdso: Include common headers in the vdso library
  arm64: Introduce asm/vdso/processor.h
  arm64: vdso32: Code clean up
  linux/elfnote.h: Replace elf.h with UAPI equivalent
  scripts: Fix the inclusion order in modpost
  common: Introduce processor.h
  linux/ktime.h: Extract common header for vDSO
  linux/jiffies.h: Extract common header for vDSO
  linux/time64.h: Extract common header for vDSO
  linux/time32.h: Extract common header for vDSO
  linux/time.h: Extract common header for vDSO
  ...
2020-03-30 18:51:47 -07:00
Linus Torvalds
992a1a3b45 CPU (hotplug) updates:
- Support for locked CSD objects in smp_call_function_single_async(),
     which allows us to simplify callsites in the scheduler core and MIPS
 
   - Treewide consolidation of CPU hotplug functions, which ensures
     consistency between the sysfs interface and kernel state. The
     low-level functions cpu_up/down() are now confined to the core code
     and no longer accessible from random code.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEQp8+kY+LLUocC4bMphj1TA10mKEFAl6B9VQTHHRnbHhAbGlu
 dXRyb25peC5kZQAKCRCmGPVMDXSYodCyD/0WFYAe7LkOfNjkbLa0IeuyLjF9rnCi
 ilcSXMLpaVwwoQvm7MopwkXUDdmEIyeJ0B641j3mC3AKCRap4+O36H2IEg2byrj7
 twOvQNCfxpVVmCCD11FTH9aQa74LEB6AikTgjevhrRWj6eHsal7c2Ak26AzCgrt+
 0eEkOAOWJbLAlbIiPdHlCZ3TMldcs3gg+lRSYd5QCGQVkZFnwpXzyOvpyJEUGGbb
 R/JuvwJoLhRMiYAJDILoQQQg/J07ODuivse/R8PWaH2djkn+2NyRGrD794PhyyOg
 QoTU0ZrYD3Z48ACXv+N3jLM7wXMcFzjYtr1vW1E3O/YGA7GVIC6XHGbMQ7tEihY0
 ajtwq8DcnpKtuouviYnf7NuKgqdmJXkaZjz3Gms6n8nLXqqSVwuQELWV2CXkxNe6
 9kgnnKK+xXMOGI4TUhN8bejvkXqRCmKMeQJcWyf+7RA9UIhAJw5o7WGo8gXfQWUx
 tazCqDy/inYjqGxckW615fhi2zHfemlYTbSzIGOuMB1TEPKFcrgYAii/VMsYHQVZ
 5amkYUXGQ5brlCOzOn38lzp5OkALBnFzD7xgvOcQgWT3ynVpdqADfBytXiEEHh4J
 KSkSgSSRcS58397nIxnDcJgJouHLvAWYyPZ4UC6mfynuQIic31qMHGVqwdbEKMY3
 4M5dGgqIfOBgYw==
 =jwCg
 -----END PGP SIGNATURE-----

Merge tag 'smp-core-2020-03-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull core SMP updates from Thomas Gleixner:
 "CPU (hotplug) updates:

   - Support for locked CSD objects in smp_call_function_single_async(),
     which allows us to simplify callsites in the scheduler core and MIPS

   - Treewide consolidation of CPU hotplug functions, which ensures
     consistency between the sysfs interface and kernel state. The
     low-level functions cpu_up/down() are now confined to the core code
     and no longer accessible from random code"

* tag 'smp-core-2020-03-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (22 commits)
  cpu/hotplug: Ignore pm_wakeup_pending() for disable_nonboot_cpus()
  cpu/hotplug: Hide cpu_up/down()
  cpu/hotplug: Move bringup of secondary CPUs out of smp_init()
  torture: Replace cpu_up/down() with add/remove_cpu()
  firmware: psci: Replace cpu_up/down() with add/remove_cpu()
  xen/cpuhotplug: Replace cpu_up/down() with device_online/offline()
  parisc: Replace cpu_up/down() with add/remove_cpu()
  sparc: Replace cpu_up/down() with add/remove_cpu()
  powerpc: Replace cpu_up/down() with add/remove_cpu()
  x86/smp: Replace cpu_up/down() with add/remove_cpu()
  arm64: hibernate: Use bringup_hibernate_cpu()
  cpu/hotplug: Provide bringup_hibernate_cpu()
  arm64: Use reboot_cpu instead of hardcoding it to 0
  arm64: Don't use disable_nonboot_cpus()
  ARM: Use reboot_cpu instead of hardcoding it to 0
  ARM: Don't use disable_nonboot_cpus()
  ia64: Replace cpu_down() with smp_shutdown_nonboot_cpus()
  cpu/hotplug: Create a new function to shutdown nonboot cpus
  cpu/hotplug: Add new {add,remove}_cpu() functions
  sched/core: Remove rq.hrtick_csd_pending
  ...
2020-03-30 18:06:39 -07:00
Linus Torvalds
a776c270a0 Merge branch 'efi-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull EFI updates from Ingo Molnar:
 "The EFI changes in this cycle are much larger than usual, for two
  (positive) reasons:

   - The GRUB project is showing signs of life again, resulting in the
     introduction of the generic Linux/UEFI boot protocol, instead of
     x86 specific hacks which are increasingly difficult to maintain.
     There's hope that all future extensions will now go through that
     boot protocol.

   - Preparatory work for RISC-V EFI support.

  The main changes are:

   - Boot time GDT handling changes

   - Simplify handling of EFI properties table on arm64

   - Generic EFI stub cleanups, to improve command line handling, file
     I/O, memory allocation, etc.

   - Introduce a generic initrd loading method based on calling back
     into the firmware, instead of relying on the x86 EFI handover
     protocol or device tree.

   - Introduce a mixed mode boot method that does not rely on the x86
     EFI handover protocol either, and could potentially be adopted by
     other architectures (if another one ever surfaces where one
     execution mode is a superset of another)

   - Clean up the contents of 'struct efi', and move out everything that
     doesn't need to be stored there.

   - Incorporate support for UEFI spec v2.8A changes that permit
     firmware implementations to return EFI_UNSUPPORTED from UEFI
     runtime services at OS runtime, and expose a mask of which ones are
     supported or unsupported via a configuration table.

   - Partial fix for the lack of by-VA cache maintenance in the
     decompressor on 32-bit ARM.

   - Changes to load device firmware from EFI boot service memory
     regions

   - Various documentation updates and minor code cleanups and fixes"

* 'efi-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (114 commits)
  efi/libstub/arm: Fix spurious message that an initrd was loaded
  efi/libstub/arm64: Avoid image_base value from efi_loaded_image
  partitions/efi: Fix partition name parsing in GUID partition entry
  efi/x86: Fix cast of image argument
  efi/libstub/x86: Use ULONG_MAX as upper bound for all allocations
  efi: Fix a mistype in comments mentioning efivar_entry_iter_begin()
  efi/libstub: Avoid linking libstub/lib-ksyms.o into vmlinux
  efi/x86: Preserve %ebx correctly in efi_set_virtual_address_map()
  efi/x86: Ignore the memory attributes table on i386
  efi/x86: Don't relocate the kernel unless necessary
  efi/x86: Remove extra headroom for setup block
  efi/x86: Add kernel preferred address to PE header
  efi/x86: Decompress at start of PE image load address
  x86/boot/compressed/32: Save the output address instead of recalculating it
  efi/libstub/x86: Deal with exit() boot service returning
  x86/boot: Use unsigned comparison for addresses
  efi/x86: Avoid using code32_start
  efi/x86: Make efi32_pe_entry() more readable
  efi/x86: Respect 32-bit ABI in efi32_pe_entry()
  efi/x86: Annotate the LOADED_IMAGE_PROTOCOL_GUID with SYM_DATA
  ...
2020-03-30 16:13:08 -07:00
Qais Yousef
e646ac5bb8 arm64: hibernate: Use bringup_hibernate_cpu()
Use bringup_hibernate_cpu() instead of open coding it.
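
A hedged sketch of the resulting call site (sleep_cpu and the
surrounding error handling are assumptions):

  /* Ensure the CPU we hibernated on is back online before resuming. */
  ret = bringup_hibernate_cpu(sleep_cpu);
  if (ret)
          return ret;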

[ tglx: Split out the core change ]

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/20200323135110.30522-9-qais.yousef@arm.com
2020-03-25 12:59:34 +01:00
Qais Yousef
5efbe6a6e1 arm64: Use reboot_cpu instead of hardcoding it to 0
Use the `reboot_cpu` variable instead of hardcoding 0 as the reboot cpu in
machine_shutdown().

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/20200323135110.30522-8-qais.yousef@arm.com
2020-03-25 12:59:33 +01:00
Qais Yousef
d66b16f5df arm64: Don't use disable_nonboot_cpus()
disable_nonboot_cpus() is not safe to use when doing machine_shutdown(),
because it relies on freeze_secondary_cpus(), which in turn is
a suspend/resume-related freeze and could abort if the logic detects any
pending activities that would prevent finishing the offlining process.

Besides, disable_nonboot_cpus() depends on CONFIG_PM_SLEEP_SMP, which is
an orthogonal config to rely on to ensure this function works correctly.
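
A hedged sketch of where this ends up, using the helper introduced by
the smp-core series merged above:

  void machine_shutdown(void)
  {
          /* Offline everything except the reboot CPU, without the PM freezer. */
          smp_shutdown_nonboot_cpus(reboot_cpu);
  }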

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/20200323135110.30522-7-qais.yousef@arm.com
2020-03-25 12:59:33 +01:00
Catalin Marinas
44ca0e00b6 Merge branch 'for-next/kernel-ptrauth' into for-next/core
* for-next/kernel-ptrauth:
  : Return address signing - in-kernel support
  arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
  lkdtm: arm64: test kernel pointer authentication
  arm64: compile the kernel with ptrauth return address signing
  kconfig: Add support for 'as-option'
  arm64: suspend: restore the kernel ptrauth keys
  arm64: __show_regs: strip PAC from lr in printk
  arm64: unwind: strip PAC from kernel addresses
  arm64: mask PAC bits of __builtin_return_address
  arm64: initialize ptrauth keys for kernel booting task
  arm64: initialize and switch ptrauth kernel keys
  arm64: enable ptrauth earlier
  arm64: cpufeature: handle conflicts based on capability
  arm64: cpufeature: Move cpu capability helpers inside C file
  arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  arm64: install user ptrauth keys at kernel exit time
  arm64: rename ptrauth key structures to be user-specific
  arm64: cpufeature: add pointer auth meta-capabilities
  arm64: cpufeature: Fix meta-capability cpufeature check
2020-03-25 11:11:08 +00:00
Catalin Marinas
806dc825f0 Merge branch 'for-next/asm-cleanups' into for-next/core
* for-next/asm-cleanups:
  : Various asm clean-ups (alignment, mov_q vs ldr, .idmap)
  arm64: move kimage_vaddr to .rodata
  arm64: use mov_q instead of literal ldr
2020-03-25 11:10:51 +00:00
Catalin Marinas
0829a07695 Merge branch 'for-next/asm-annotations' into for-next/core
* for-next/asm-annotations:
  : Modernise arm64 assembly annotations
  arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
  arm64: Mark call_smc_arch_workaround_1 as __maybe_unused
  arm64: entry-ftrace.S: Fix missing argument for CONFIG_FUNCTION_GRAPH_TRACER=y
  arm64: vdso32: Convert to modern assembler annotations
  arm64: vdso: Convert to modern assembler annotations
  arm64: sdei: Annotate SDEI entry points using new style annotations
  arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations
  arm64: kvm: Modernize annotation for __bp_harden_hyp_vecs
  arm64: kvm: Annotate assembly using modern annotations
  arm64: kernel: Convert to modern annotations for assembly data
  arm64: head: Annotate stext and preserve_boot_args as code
  arm64: head.S: Convert to modern annotations for assembly functions
  arm64: ftrace: Modernise annotation of return_to_handler
  arm64: ftrace: Correct annotation of ftrace_caller assembly
  arm64: entry-ftrace.S: Convert to modern annotations for assembly functions
  arm64: entry: Additional annotation conversions for entry.S
  arm64: entry: Annotate ret_from_fork as code
  arm64: entry: Annotate vector table and handlers as code
  arm64: crypto: Modernize names for AES function macros
  arm64: crypto: Modernize some extra assembly annotations
2020-03-25 11:10:46 +00:00