Now that the arm64 bitops are inlines built atop the regular atomics,
we don't need to export anything.
Remove the redundant exports.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The "L" AArch64 machine constraint, which we use for the "old" value in
an LL/SC cmpxchg(), generates an immediate that is suitable for a 64-bit
logical instruction. However, for cmpxchg() operations on types smaller
than 64 bits, this constraint can result in an invalid instruction, such
as EOR W1, W1, #0xffffffff, which is correctly rejected by GAS.
Whilst we could special-case the constraint based on the cmpxchg size,
it's far easier to change the constraint to "K" and put up with using
a register for large 64-bit immediates. For out-of-line LL/SC atomics,
this is all moot anyway.
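As a rough illustration (a minimal sketch, not the kernel's actual cmpxchg
macros; the function name is made up), the "old" value feeds the EOR used
for the comparison, and the constraint decides whether it may be emitted as
an immediate:

  static inline unsigned int cmpxchg32_sketch(unsigned int *ptr,
                                              unsigned int old, unsigned int new)
  {
          unsigned int oldval;
          unsigned long tmp;

          asm volatile(
          "1:     ldxr    %w0, %2\n"
          "       eor     %w1, %w0, %w3\n"
          "       cbnz    %w1, 2f\n"
          "       stxr    %w1, %w4, %2\n"
          "       cbnz    %w1, 1b\n"
          "2:"
          : "=&r" (oldval), "=&r" (tmp), "+Q" (*ptr)
          : "Kr" (old), "r" (new)   /* "Lr" here is what broke sub-64-bit sizes */
          : "memory");

          return oldval;
  }

With "L", a constant old value such as 0xffffffff can be emitted as a
logical immediate that is only valid for X registers, producing the
rejected EOR W1, W1, #0xffffffff; "K" restricts constants to those
encodable in a 32-bit logical instruction and otherwise falls back to the
register alternative.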
Reported-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Our percpu code is a bit of an inconsistent mess:
* It rolls its own xchg(), but reuses cmpxchg_local()
* It uses various different flavours of preempt_{enable,disable}()
* It returns values even for the non-returning RmW operations
* It makes no use of LSE atomics outside of the cmpxchg() ops
* There are individual macros for different sizes of access, but these
are all funneled through a switch statement rather than dispatched
directly to the relevant case
This patch rewrites the per-cpu operations to address these shortcomings.
Whilst the new code is a lot cleaner, the big advantage is that we can
use the non-returning ST- atomic instructions when we have LSE.
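A minimal sketch of the non-returning case (not the kernel's actual per-cpu
macros, and ignoring the preemption handling and per-size dispatch the real
code does):

  static inline void __percpu_add64_sketch(unsigned long *ptr, unsigned long val)
  {
          /* LSE STADD: no result register and no LL/SC retry loop needed. */
          asm volatile("stadd %[val], %[ptr]"
                       : [ptr] "+Q" (*ptr)
                       : [val] "r" (val)
                       : "memory");
  }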
Signed-off-by: Will Deacon <will.deacon@arm.com>
The CAS instructions implicitly access only the relevant bits of the "old"
argument, so there is no need for explicit masking via type-casting as
there is in the LL/SC implementation.
Move the casting into the LL/SC code and remove it altogether for the LSE
implementation.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Our atomic instructions (either LSE atomics or LDXR/STXR sequences)
natively support byte, half-word, word and double-word memory accesses
so there is no need to mask the data register prior to being stored.
Signed-off-by: Will Deacon <will.deacon@arm.com>
The asm-generic/preempt.h implementation doesn't make use of the
PREEMPT_NEED_RESCHED flag, since this can interact badly with load/store
architectures which rely on the preempt_count word being unchanged across
an interrupt.
However, since we're a 64-bit architecture and the preempt count is
only 32 bits wide, we can simply pack it next to the resched flag and
load the whole thing in one go, so that a dec-and-test operation doesn't
need to load twice.
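A minimal sketch of the packing idea (little-endian layout shown; the field
and type names are assumptions, not the kernel's definitions):

  #include <stdint.h>
  #include <stdbool.h>

  struct ti_preempt_sketch {
          union {
                  uint64_t preempt_count;         /* count + flag, read as one word */
                  struct {
                          uint32_t count;         /* the real preempt count */
                          uint32_t need_resched;  /* 0 => reschedule pending */
                  };
          };
  };

  static bool preempt_dec_and_test_sketch(struct ti_preempt_sketch *ti)
  {
          ti->count--;
          /*
           * One 64-bit load observes both halves: the whole word is zero
           * only when the count has hit zero *and* need_resched is pending
           * (encoded as 0), so no second load is required.
           */
          return ti->preempt_count == 0;
  }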
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Add an hstate for each supported hugepage size using an arch initcall.
* no hugepage parameters
Without hugepage parameters, only a default hugepage size is
available for dynamic allocation. This differs from, for example,
x86_64 and sparc64, where all supported hugepage sizes are available.
* only default_hugepagesz= is specified, set to something other than HPAGE_SIZE
Despite default_hugepagesz= being set to a valid hugepage size, it is
treated as unsupported and reverted to HPAGE_SIZE. Such behaviour also
differs from x86_64 and sparc64.
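A sketch of the approach (the helper name and the exact set of sizes are
assumptions):

  #include <linux/hugetlb.h>
  #include <linux/init.h>
  #include <linux/log2.h>

  static void __init add_huge_page_size(unsigned long size)
  {
          if (size_to_hstate(size))
                  return;         /* already registered */

          hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT);
  }

  static int __init hugetlbpage_init(void)
  {
  #ifdef CONFIG_ARM64_4K_PAGES
          add_huge_page_size(PUD_SIZE);                   /* e.g. 1G */
  #endif
          add_huge_page_size(PMD_SIZE * CONT_PMDS);       /* contiguous PMDs */
          add_huge_page_size(PMD_SIZE);                   /* e.g. 2M */
          add_huge_page_size(PAGE_SIZE * CONT_PTES);      /* contiguous PTEs */

          return 0;
  }
  arch_initcall(hugetlbpage_init);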
Acked-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Tom Saeger <tom.saeger@oracle.com>
Signed-off-by: Dmitry Klochkov <dmitry.klochkov@oracle.com>
Signed-off-by: Allen Pais <allen.pais@oracle.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This is a NEON acceleration method that can improve
performance by approximately 20%. I got the following
data from CentOS 7.5 on Huawei's HISI1616 chip:
[ 93.837726] xor: measuring software checksum speed
[ 93.874039] 8regs : 7123.200 MB/sec
[ 93.914038] 32regs : 7180.300 MB/sec
[ 93.954043] arm64_neon: 9856.000 MB/sec
[ 93.954047] xor: using function: arm64_neon (9856.000 MB/sec)
I believe this code can bring some optimization to
all arm64 platforms. Thanks to Ard Biesheuvel for his suggestions.
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In a way similar to ARM commit 09096f6a0e ("ARM: 7822/1: add workaround
for ambiguous C99 stdint.h types"), this patch redefines the macros that
are used in stdint.h so its definitions of uint64_t and int64_t are
compatible with those of the kernel.
This patch comes from: https://patchwork.kernel.org/patch/3540001/
Written by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
We mark this file as a private header, so we don't have to override asm/types.h.
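A sketch of the workaround (the exact macro set is an assumption, modelled
on the ARM patch referenced above): nudge the compiler's builtin 64-bit
type macros to the kernel's (unsigned) long long before stdint.h uses them.

  #ifdef __INT64_TYPE__
  #undef __INT64_TYPE__
  #define __INT64_TYPE__          long long
  #endif

  #ifdef __UINT64_TYPE__
  #undef __UINT64_TYPE__
  #define __UINT64_TYPE__         unsigned long long
  #endif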
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The comment about SYS_MEMBARRIER_SYNC_CORE relying on ERET being
context-synchronizing is confusing and misplaced with kpti. Given that
this is already documented under Documentation/ (see arch-support.txt
for membarrier), remove the comment altogether.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Some CPUs can speculate past an ERET instruction and potentially perform
speculative accesses to memory before processing the exception return.
Since the register state is often controlled by a lower privilege level
at the point of an ERET, this could potentially be used as part of a
side-channel attack.
This patch emits an SB sequence after each ERET so that speculation is
held up on exception return.
Signed-off-by: Will Deacon <will.deacon@arm.com>
We currently use a DSB; ISB sequence to inhibit speculation in set_fs().
Whilst this works for current CPUs, future CPUs may implement a new SB
barrier instruction which acts as an architected speculation barrier.
On CPUs that support it, patch in an SB; NOP sequence over the DSB; ISB
sequence and advertise the presence of the new instruction to userspace.
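A sketch of the patching (the macro and capability names are assumptions;
the real code may encode SB with .inst so that older assemblers still build
it):

  #include <asm/alternative.h>
  #include <asm/cpucaps.h>

  /* Both sequences are two instructions, so they can be patched in place. */
  #define spec_bar()      asm volatile(ALTERNATIVE("dsb nsh\n\tisb\n",    \
                                                   "sb\n\tnop\n",         \
                                                   ARM64_HAS_SB))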
Signed-off-by: Will Deacon <will.deacon@arm.com>
We use a stop_machine() call for each available capability to
enable it on all the CPUs available at boot time. Instead,
batch the cpu_enable callbacks into a single stop_machine()
call to save some time.
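A sketch of the batching (function and array names assumed; cpu_hwcaps_ptrs
is the per-capability lookup array described elsewhere in this series):

  #include <linux/stop_machine.h>

  static int cpu_enable_all_caps(void *unused)
  {
          int i;

          for (i = 0; i < ARM64_NCAPS; i++) {
                  const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[i];

                  if (cap && cpus_have_cap(cap->capability) && cap->cpu_enable)
                          cap->cpu_enable(cap);
          }

          return 0;
  }

  static void __init enable_all_cpu_capabilities_sketch(void)
  {
          /* One system-wide stop instead of one stop per capability. */
          stop_machine(cpu_enable_all_caps, NULL, cpu_online_mask);
  }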
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Use the sorted list of capability entries for the detection and
verification.
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Make use of the sorted capability list to access the capability
entry in this_cpu_has_cap() to avoid iterating over the two
tables.
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We maintain two separate tables of capabilities, errata and features,
which decide the system capabilities. We iterate over each of these
tables for various operations (e.g, detection, verification etc.).
We do not have a way to map a system "capability" to its entry
(i.e., cap -> struct arm64_cpu_capabilities), which is needed for
this_cpu_has_cap(). So we iterate over the tables one by one to
find the entry and then perform the operation. This also prevents
us from optimizing the way we "enable" the capabilities on the
CPUs, where we currently issue a stop_machine() for each available
capability.
One solution is to merge the two tables into a single table,
sorted by capability. But this has the following disadvantages:
- We lose the "classification" of an erratum vs. a feature.
- It is quite easy to make a mistake when adding an entry,
unless we sort the table at runtime.
So instead, we maintain a separate array of pointers to the capability
entries, sorted by capability number and initialized at boot time.
The only restriction is that there can be only one entry per capability.
While at it, remove the duplicate declaration of arm64_errata table.
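A sketch of the lookup array (array and helper names assumed):

  static const struct arm64_cpu_capabilities *cpu_hwcaps_ptrs[ARM64_NCAPS];

  /* Called at boot for both the errata and the feature tables. */
  static void __init init_cpucap_indirect_list(const struct arm64_cpu_capabilities *caps)
  {
          for (; caps->matches; caps++) {
                  if (WARN(caps->capability >= ARM64_NCAPS,
                           "Invalid capability %d\n", caps->capability))
                          continue;
                  /* Only one entry is allowed per capability. */
                  if (WARN(cpu_hwcaps_ptrs[caps->capability],
                           "Duplicate entry for capability %d\n",
                           caps->capability))
                          continue;
                  cpu_hwcaps_ptrs[caps->capability] = caps;
          }
  }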
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Remove duplicate entries for Qualcomm erratum 1003. Since the entries
are not purely based on generic MIDR checks, use the multi_cap_entry
type to merge the entries.
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge duplicate entries for a single capability using the midr
range list for Cavium errata 30115 and 27456.
Cc: Andrew Pinski <apinski@cavium.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We have two entries for the ARM64_WORKAROUND_CLEAN_CACHE capability:
1) ARM errata 826319, 827319, 824069, 819472 on A53 r0p[012]
2) ARM erratum 819472 on A53 r0p[01]
Both have the same workaround. Merge these entries to avoid
duplicate entries for a single capability. Add a new Kconfig
entry to control the "capability" entry, making it easier
to handle combinations of the CONFIGs.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
readelf complains about the section layout of vmlinux when building
with CONFIG_RELOCATABLE=y (for KASLR):
readelf: Warning: [21]: Link field (0) should index a symtab section.
readelf: Warning: [21]: Info field (0) should index a relocatable section.
Also, it seems that our use of '-pie -shared' is contradictory, and
thus ambiguous. In general, the way KASLR is wired up at the moment
is highly tailored to how ld.bfd happens to implement (and conflate)
PIE executables and shared libraries, so given the current effort to
support other toolchains, let's fix some of these issues as well.
- Drop the -pie linker argument and just leave -shared. In ld.bfd,
the differences between them are unclear (except for the ELF type
of the produced image [0]) but lld chokes on seeing both at the
same time.
- Rename the .rela output section to .rela.dyn, as is customary for
shared libraries and PIE executables, so that it is not misidentified
by readelf as a static relocation section (producing the warnings
above).
- Pass the -z notext and -z norelro options to explicitly instruct the
linker to permit text relocations, and to omit the RELRO program
header (which requires a certain section layout that we don't adhere
to in the kernel). These are the defaults for current versions of
ld.bfd.
- Discard the .eh_frame and .gnu.hash sections to prevent them from being
emitted between .head.text and .text, which would screw up the section layout.
These changes only affect the ELF image; the resulting binary image is
identical.
[0] b9dce7f1ba ("arm64: kernel: force ET_DYN ELF type for ...")
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Peter Smith <peter.smith@linaro.org>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Improve the performance of the crc32() asm routines by getting rid of
most of the branches and small sized loads on the common path.
Instead, use a branchless code path involving overlapping 16 byte
loads to process the first (length % 32) bytes, and process the
remainder using a loop that processes 32 bytes at a time.
Tested using the following test program:
  #include <stdlib.h>

  extern unsigned int crc32_le(unsigned int, char const *, int);

  int main(void)
  {
          static const char buf[4096];

          srand(20181126);

          for (int i = 0; i < 100 * 1000 * 1000; i++)
                  crc32_le(0, buf, rand() % 1024);

          return 0;
  }
On Cortex-A53 and Cortex-A57, the performance regresses but only very
slightly. On Cortex-A72 however, the performance improves from
$ time ./crc32
real 0m10.149s
user 0m10.149s
sys 0m0.000s
to
$ time ./crc32
real 0m7.915s
user 0m7.915s
sys 0m0.000s
Cc: Rui Sun <sunrui26@huawei.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The core ftrace hooks take the instrumented PC in x0, but for some
reason arm64's prepare_ftrace_return() takes this in x1.
For consistency, let's flip the argument order and always pass the
instrumented PC in x0.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Torsten Duwe <duwe@suse.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The save_return_regs and restore_return_regs macros are only used by
return_to_handler, and having them defined out-of-line only serves to
obscure the logic.
Before we complicate this code further, let's clean it up and fold the logic
directly into return_to_handler, saving a few lines of macro boilerplate in the
process. At the same time, a missing trailing space is added to the
comments, fixing a code style violation.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Torsten Duwe <duwe@suse.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The core ftrace code requires that when it is handed the PC of an
instrumented function, this PC is the address of the instrumented
instruction. This is necessary so that the core ftrace code can identify
the specific instrumentation site. Since the instrumentation is a BL, the
address of the instrumented instruction is LR - 4 at entry to
the ftrace code.
This fixup is applied in the mcount_get_pc and mcount_get_pc0 helpers,
which acquire the PC of the instrumented function.
The mcount_get_lr helper is used to acquire the LR of the instrumented
function, whose value does not require this adjustment, and cannot be
adjusted to anything meaningful. No adjustment of this value is made on
other architectures, including arm. However, arm64 adjusts this value by
4.
This patch brings arm64 in line with other architectures and removes the
adjustment of the LR value.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Torsten Duwe <duwe@suse.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The core ftrace code has an optional sanity check on the frame pointer
passed by ftrace_graph_caller and return_to_handler. This is cheap,
useful, and enabled unconditionally on x86, sparc, and riscv.
Let's do the same on arm64, so that we can catch any problems early.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Torsten Duwe <duwe@suse.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The global exports of ftrace_call and ftrace_graph_call are somewhat
painful to read. Let's use the generic GLOBAL() macro to ameliorate
matters.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Torsten Duwe <duwe@suse.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 1212f7a16a ("scripts/kallsyms: filter arm64's __efistub_
symbols") updated the kallsyms code to filter out symbols with
the __efistub_ prefix explicitly, so we no longer require the
hack in our linker script to emit them as absolute symbols.
Cc: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
As of commit 6460d32014 ("arm64: io: Ensure calls to delay routines
are ordered against prior readX()"), MMIO reads smaller than 64 bits
fail to compile under clang because we end up mixing 32-bit and 64-bit
register operands for the same data processing instruction:
./include/asm-generic/io.h:695:9: warning: value size does not match register size specified by the constraint and modifier [-Wasm-operand-widths]
return readb(addr);
^
./arch/arm64/include/asm/io.h:147:58: note: expanded from macro 'readb'
^
./include/asm-generic/io.h:695:9: note: use constraint modifier "w"
./arch/arm64/include/asm/io.h:147:50: note: expanded from macro 'readb'
^
./arch/arm64/include/asm/io.h:118:24: note: expanded from macro '__iormb'
asm volatile("eor %0, %1, %1\n" \
^
Fix the build by casting the macro argument to 'unsigned long' when used
as an input to the inline asm.
Reported-by: Nick Desaulniers <nick.desaulniers@gmail.com>
Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In order to reduce the possibility of soft lock-ups, we bound the
maximum number of TLBI operations performed by a single call to
flush_tlb_range() to an arbitrary constant of 1024.
Whilst this does the job of avoiding lock-ups, we can actually be a bit
smarter by defining this as PTRS_PER_PTE. Due to the structure of our
page tables, using PTRS_PER_PTE means that an outer loop calling
flush_tlb_range() for entire table entries will end up performing just a
single TLBI operation for each entry. As an example, mremap()ing a 1GB
range mapped using 4k pages now requires only 512 TLBI operations when
moving the page tables as opposed to 262144 operations (512*512) when
using the current threshold of 1024.
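A sketch of the new bound (macro name assumed):

  /* With 4k pages, PTRS_PER_PTE == 512, so a whole last-level table's
   * worth of mappings is the largest range we flush entry by entry. */
  #define MAX_TLBI_OPS    PTRS_PER_PTE

  static inline void flush_tlb_range_sketch(struct vm_area_struct *vma,
                                            unsigned long start, unsigned long end)
  {
          if ((end - start) >> PAGE_SHIFT > MAX_TLBI_OPS) {
                  flush_tlb_mm(vma->vm_mm);       /* too big: flush the whole ASID */
                  return;
          }
          /* ... otherwise, one TLBI per page in [start, end) ... */
  }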
Cc: Joel Fernandes <joel@joelfernandes.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Now that we have switched entirely to the small code model and
reduced the extended KASLR range to 4 GB, we can be sure that the
targets of relative branches that are out of range are in range
for an ADRP/ADD pair, which is one instruction shorter than our
current MOVN/MOVK/MOVK sequence, and is more idiomatic and so
more likely to be implemented efficiently by micro-architectures.
So switch over the ordinary PLT code and the special handling of
the Cortex-A53 ADRP erratum, as well as the ftrace trampoline
handling.
Reviewed-by: Torsten Duwe <duwe@lst.de>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: Added a couple of comments in the plt equality check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Add support for emitting ADR and ADRP instructions so we can switch
over our PLT generation code in a subsequent patch.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
__install_bp_hardening_cb() is called via stop_machine() as part
of the cpu_enable callback. To force each CPU to take its turn
when allocating slots, they take a spinlock.
With the RT patches applied, the spinlock becomes a mutex,
and we get warnings about sleeping while in stop_machine():
| [ 0.319176] CPU features: detected: RAS Extension Support
| [ 0.319950] BUG: scheduling while atomic: migration/3/36/0x00000002
| [ 0.319955] Modules linked in:
| [ 0.319958] Preemption disabled at:
| [ 0.319969] [<ffff000008181ae4>] cpu_stopper_thread+0x7c/0x108
| [ 0.319973] CPU: 3 PID: 36 Comm: migration/3 Not tainted 4.19.1-rt3-00250-g330fc2c2a880 #2
| [ 0.319975] Hardware name: linux,dummy-virt (DT)
| [ 0.319976] Call trace:
| [ 0.319981] dump_backtrace+0x0/0x148
| [ 0.319983] show_stack+0x14/0x20
| [ 0.319987] dump_stack+0x80/0xa4
| [ 0.319989] __schedule_bug+0x94/0xb0
| [ 0.319991] __schedule+0x510/0x560
| [ 0.319992] schedule+0x38/0xe8
| [ 0.319994] rt_spin_lock_slowlock_locked+0xf0/0x278
| [ 0.319996] rt_spin_lock_slowlock+0x5c/0x90
| [ 0.319998] rt_spin_lock+0x54/0x58
| [ 0.320000] enable_smccc_arch_workaround_1+0xdc/0x260
| [ 0.320001] __enable_cpu_capability+0x10/0x20
| [ 0.320003] multi_cpu_stop+0x84/0x108
| [ 0.320004] cpu_stopper_thread+0x84/0x108
| [ 0.320008] smpboot_thread_fn+0x1e8/0x2b0
| [ 0.320009] kthread+0x124/0x128
| [ 0.320010] ret_from_fork+0x10/0x18
Switch this to a raw spinlock, as we know this is only called with
IRQs masked.
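A sketch of the change (the lock name is assumed):

  #include <linux/spinlock.h>

  /* A raw spinlock stays a true spinning lock under PREEMPT_RT, so it is
   * safe to take from the stop_machine() callback with IRQs masked. */
  static DEFINE_RAW_SPINLOCK(bp_lock);

  static void install_bp_hardening_cb_sketch(void)
  {
          raw_spin_lock(&bp_lock);
          /* ... find or allocate a hardening-vector slot for this CPU ... */
          raw_spin_unlock(&bp_lock);
  }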
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The BAD_MADT_GICC_ENTRY check is a little too strict because
it rejects MADT entries that don't match the currently known
lengths. We should remove this restriction to avoid problems
if the table length changes. Future code which might depend on
additional fields should be written to validate those fields
before using them, rather than trying to globally check
known MADT version lengths.
Link: https://lkml.kernel.org/r/20181012192937.3819951-1-jeremy.linton@arm.com
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
[lorenzo.pieralisi@arm.com: added MADT macro comments]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Al Stone <ahs3@redhat.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
A relatively standard idiom for ensuring that a pair of MMIO writes to a
device arrive at that device with a specified minimum delay between them
is as follows:
  writel_relaxed(42, dev_base + CTL1);
  readl(dev_base + CTL1);
  udelay(10);
  writel_relaxed(42, dev_base + CTL2);
the intention being that the read-back from the device will push the
prior write to CTL1, and the udelay will hold up the write to CTL2 until
at least 10us have elapsed.
Unfortunately, on arm64 where the underlying delay loop is implemented
as a read of the architected counter, the CPU does not guarantee
ordering from the readl() to the delay loop and therefore the delay loop
could in theory be speculated and not provide the desired interval
between the two writes.
Fix this in a similar manner to PowerPC by introducing a dummy control
dependency on the output of readX() which, combined with the ISB in the
read of the architected counter, guarantees that a subsequent delay loop
can not be executed until the readX() has returned its result.
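A sketch of the arm64 side (the shape of the macro is inferred from the
description above; the cast to unsigned long matches the clang fix earlier
in this series):

  /*
   * Dummy control dependency on the value returned by the MMIO read: the
   * EOR result is always zero, but the CBNZ cannot be resolved until the
   * read data is available, and the ISB in the counter read used by the
   * delay loop then orders the delay after the read.
   */
  #define __iormb_sketch(v)                                               \
  ({                                                                      \
          unsigned long tmp;                                              \
                                                                          \
          rmb();                                                          \
                                                                          \
          asm volatile("eor       %0, %1, %1\n"                           \
                       "cbnz      %0, ."                                  \
                       : "=r" (tmp) : "r" ((unsigned long)(v))            \
                       : "memory");                                       \
  })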
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When transitioning a PTE from young to old as part of page aging, we
can avoid waiting for the TLB invalidation to complete and therefore
drop the subsequent DSB instruction. Whilst this opens up a race with
page reclaim, where a PTE in active use via a stale, young TLB entry
does not update the underlying descriptor, the worst thing that happens
is that the page is reclaimed and then immediately faulted back in.
Given that we have a DSB in our context-switch path, the window for a
spurious reclaim is fairly limited, and eliding the barrier is claimed to
boost NVMe/SSD accesses by over 10% on some platforms.
A similar optimisation was made for x86 in commit b13b1d2d86 ("x86/mm:
In the PTE swapout page reclaim case clear the accessed bit instead of
flushing the TLB").
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Signed-off-by: Ashish Mhetre <amhetre@nvidia.com>
[will: rewrote patch]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Instead of saving a pointer to the .plt and .init.plt sections to apply
plt-based relocations, save and use their section indices instead.
The mod->arch.{core,init}.plt pointers were problematic for livepatch
because they pointed within temporary section headers (provided by the
module loader via info->sechdrs) that would be freed after module load.
Since livepatch modules may need to apply relocations post-module-load
(for example, to patch a module that is loaded later), using section
indices to offset into the section headers (instead of accessing them
through a saved pointer) allows livepatch modules on arm64 to pass in
their own copy of the section headers to apply_relocate_add() to apply
delayed relocations.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Jessica Yu <jeyu@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On arm64, we use block mappings and contiguous hints to map the linear
region, to minimize the TLB footprint. However, this means that the
entire region is mapped using read/write permissions, which we cannot
modify at page granularity without having to take intrusive measures to
prevent TLB conflicts.
This means the linear aliases of pages belonging to read-only mappings
(executable or otherwise) in the vmalloc region are also mapped read/write,
and could potentially be abused to modify things like module code, bpf JIT
code or other read-only data.
So let's fix this, by extending the set_memory_ro/rw routines to take
the linear alias into account. The consequence of enabling this is
that we can no longer use block mappings or contiguous hints, so in
cases where the TLB footprint of the linear region is a bottleneck,
performance may be affected.
Therefore, allow this feature to be enabled or disabled at runtime by
setting rodata=full (or 'on' to disable just this enhancement, or 'off'
to disable read-only mappings for code and r/o data entirely) on the
kernel command line. Also, allow the default value to be set via a
Kconfig option.
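A sketch of the command-line handling (variable names assumed; the default
for the "full" behaviour can come from the Kconfig option):

  #include <linux/init.h>
  #include <linux/string.h>

  static bool rodata_enabled = true;
  static bool rodata_full;

  static int __init parse_rodata(char *arg)
  {
          int ret = strtobool(arg, &rodata_enabled);

          if (ret == 0) {
                  rodata_full = false;            /* plain rodata=on / rodata=off */
                  return 0;
          }

          /* Additionally accept rodata=full. */
          if (strcmp(arg, "full"))
                  return -EINVAL;

          rodata_enabled = true;
          rodata_full = true;                     /* map linear aliases read-only too */
          return 0;
  }
  early_param("rodata", parse_rodata);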
Tested-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Call vm_unmap_aliases() every time we apply any changes to permission
attributes of mappings in the vmalloc region. This avoids any potential
issues resulting from lingering writable or executable aliases of
mappings that should be read-only or non-executable, respectively.
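A sketch of where the call lands (function names follow arm64's pageattr
code, but the details are assumed):

  static int change_memory_common(unsigned long addr, int numpages,
                                  pgprot_t set_mask, pgprot_t clear_mask)
  {
          /* ... validate that the range really is a vmalloc mapping ... */

          /*
           * Get rid of potentially aliasing lazily unmapped vm areas that
           * may have permissions set to a superset of the ones we are
           * setting here.
           */
          vm_unmap_aliases();

          return __change_memory_common(addr, numpages * PAGE_SIZE,
                                        set_mask, clear_mask);
  }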
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The new EFI memory reservation feature we introduced to allow memory
reservations to persist across kexec may trigger an unbounded number
of calls to memblock_reserve(). The memblock subsystem can deal with
this fine, but not before memblock resizing is enabled, which we can
only do after paging_init(), when the memory into which we reallocate
the array is actually mapped.
So break out the memreserve table processing into a separate routine
and call it after paging_init() on arm64. On ARM, we cannot currently
fix this because of the maintainer's limited review bandwidth, so
instead disable the persistent EFI memreserve support entirely on ARM
so that it can be fixed later.
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20181114175544.12860-5-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Catalin Marinas:
- Fix occasional page fault during boot due to memblock resizing before
the linear map is up.
- Define NET_IP_ALIGN to 0 to improve the DMA performance on some
platforms.
- lib/raid6 test build fix.
- .mailmap update for Punit Agrawal
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: memblock: don't permit memblock resizing until linear mapping is up
arm64: mm: define NET_IP_ALIGN to 0
lib/raid6: Fix arm64 test build
mailmap: Update email for Punit Agrawal
Bhupesh reports that having numerous memblock reservations at early
boot may result in the following crash:
Unable to handle kernel paging request at virtual address ffff80003ffe0000
...
Call trace:
__memcpy+0x110/0x180
memblock_add_range+0x134/0x2e8
memblock_reserve+0x70/0xb8
memblock_alloc_base_nid+0x6c/0x88
__memblock_alloc_base+0x3c/0x4c
memblock_alloc_base+0x28/0x4c
memblock_alloc+0x2c/0x38
early_pgtable_alloc+0x20/0xb0
paging_init+0x28/0x7f8
This is caused by the fact that we permit memblock resizing before the
linear mapping is up, and so the memblock.reserved array is moved
into memory that is not mapped yet.
So let's ensure that this crash can no longer occur, by deferring the
call to memblock_allow_resize() until after the linear mapping has been
created.
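A sketch of the fix (call sites follow arm64's init code, but the details
are assumed):

  /* setup_arch():
   *      arm64_memblock_init();  <- memblock_allow_resize() no longer here
   *      paging_init();          <- linear mapping created
   *      bootmem_init();
   */
  void __init bootmem_init(void)
  {
          /* Safe now: a reallocated memblock array will be mapped. */
          memblock_allow_resize();

          /* ... existing zone/sparsemem initialisation ... */
  }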
Reported-by: Bhupesh Sharma <bhsharma@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
On arm64, there is no need to add 2 bytes of padding to the start of
each network buffer just to make the IP header appear 32-bit aligned.
Since this might actually adversely affect DMA performance on some
platforms, let's override NET_IP_ALIGN to 0 to get rid of this
padding.
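A sketch of the override (header location assumed): the generic default of
2 is replaced with 0, keeping receive buffers DMA-friendly at the cost of
an unaligned (but cheap on arm64) IP header load.

  /* arch/arm64/include/asm/processor.h */
  #define NET_IP_ALIGN    0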
Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Tested-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'stratix10_dts_fix_for_v4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/dinguyen/linux into fixes
ARM: dts: stratix10: fix multicast filtering
On Stratix 10, the EMAC has 256 hash buckets for multicast filtering. This
needs to be specified in DTS, otherwise the stmmac driver defaults to 64
buckets and initializes the filter incorrectly. As a result, e.g. valid
IPv6 multicast traffic ends up being dropped.
* tag 'stratix10_dts_fix_for_v4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/dinguyen/linux:
arm64: dts: stratix10: fix multicast filtering
Signed-off-by: Olof Johansson <olof@lixom.net>
On Stratix 10, the EMAC has 256 hash buckets for multicast filtering. This
needs to be specified in DTS, otherwise the stmmac driver defaults to 64
buckets and initializes the filter incorrectly. As a result, e.g. valid
IPv6 multicast traffic ends up being dropped.
Fixes: 78cd6a9d8e ("arm64: dts: Add base stratix 10 dtsi")
Cc: stable@vger.kernel.org
Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com>
Signed-off-by: Dinh Nguyen <dinguyen@kernel.org>
The "official" Condor boards have always been wired to mount NFS via
GEther, not EtherAVB -- the boards resoldered for EtherAVB were local
to Cogent Embedded, so we've been having an unpleasant situation where
a "normal" Condor board still can't mount NFS (unless an EtherAVB PHY
extension board is plugged in). Switch from EtherAVB to GEther at last!
Fixes: 8091788f3d ("arm64: dts: renesas: condor: add EtherAVB support")
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
hscif2 has 4 dmas, but only 2 dma-names.
This patch adds the missing dma-names.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Fixes: e0f0bda793 ("arm64: dts: renesas: r8a7795: sort subnodes of the soc node")
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Merge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Olof Johansson:
"A few fixes who have come in near or during the merge window:
- Removal of a VLA usage in Marvell mpp platform code
- Enable some IPMI options for ARM64 servers by default, helps
testing
- Enable PREEMPT on 32-bit ARMv7 defconfig
- Minor fix for stm32 DT (removal of an unused DMA property)
- Bugfix for TI OMAP1-based ams-delta (-EINVAL -> IRQ_NOTCONNECTED)"
* tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: dts: stm32: update HASH1 dmas property on stm32mp157c
ARM: orion: avoid VLA in orion_mpp_conf
ARM: defconfig: Update multi_v7 to use PREEMPT
arm64: defconfig: Enable some IPMI configs
soc: ti: QMSS: Fix usage of irq_set_affinity_hint
ARM: OMAP1: ams-delta: Fix impossible .irq < 0
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull more arm64 updates from Catalin Marinas:
- fix W+X page (mark RO) allocated by the arm64 kprobes code
- Makefile fix for .i files in out of tree modules
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: kprobe: make page to RO mode when allocate it
arm64: kdump: fix small typo
arm64: makefile fix build of .i file in external module case
The arm64 port now runs on servers which use IPMI. This patch enables the
relevant core configs to save having to enable them manually when testing
mainline.
Signed-off-by: John Garry <john.garry@huawei.com>
[olof: Switched to =m instead of =y]
Signed-off-by: Olof Johansson <olof@lixom.net>
__swiotlb_get_sgtable_page and __swiotlb_mmap_pfn are not only misnamed
but also only used if CONFIG_IOMMU_DMA is set. Just add a simple ifdef
for now, given that we plan to remove them entirely for the next merge
window.
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>