Use the much more reader-friendly ACCESS_ONCE() instead of the cast to volatile.
This is purely a stylistic change.
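For reference, a minimal sketch of the difference, assuming the ACCESS_ONCE() definition from the kernel headers of this era (flag is a placeholder variable):

#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

/* Before: open-coded volatile cast */
while (*(volatile int *)&flag == 0)
	cpu_relax();

/* After: the equivalent, more readable form */
while (ACCESS_ONCE(flag) == 0)
	cpu_relax();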
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-arch@vger.kernel.org
Link: http://lkml.kernel.org/r/1411482607-20948-1-git-send-email-bobby.prani@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This patch extends the start and end addresses of the initrd so that they
are page aligned, allowing us to free all of its memory, including any
partially-used head or tail page. If the start or end address of the
initrd is not page aligned, that page cannot be freed by the
free_initrd_mem() function.
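A minimal sketch of the alignment, assuming the usual round_down()/round_up() helpers and the phys_initrd_{start,size} bookkeeping variables:

phys_addr_t aligned_start = round_down(phys_initrd_start, PAGE_SIZE);

/* Grow the region outward to page boundaries so free_initrd_mem()
 * can later free every page the initrd touches. */
phys_initrd_size += phys_initrd_start - aligned_start;    /* absorb head */
phys_initrd_start = aligned_start;
phys_initrd_size = round_up(phys_initrd_size, PAGE_SIZE); /* absorb tail */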
Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch changes the __init_end address to a
page-aligned address, so that free_initmem() can
free the whole .init section. If the end address
is not page aligned, it is rounded down to a page
boundary, and the unaligned tail page is never
freed.
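To illustrate why, a hedged sketch of the rounding that the freeing code performs (names follow the generic free_reserved_area() convention):

unsigned long begin = PAGE_ALIGN((unsigned long)__init_begin); /* rounds up   */
unsigned long end = (unsigned long)__init_end & PAGE_MASK;     /* rounds down */

/* Only whole pages in [begin, end) are freed: if __init_end is not
 * page aligned, the tail of .init stays allocated forever. */
free_reserved_area((void *)begin, (void *)end, 0, ".init");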
Signed-off-by: wang <yalin.wang2010@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Change the type of the physical address from unsigned long to phys_addr_t
to make valid_phys_addr_range() more readable.
Signed-off-by: Min-Hua Chen <orca.chen@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch enables Thunder SoCs in the arm64 defconfig. This is
especially useful for adding Thunder platforms to automated builds based
on the arm64 defconfig.
Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
This introduces ARCH_THUNDER to enable SoC-specific drivers and dtb
files.
Signed-off-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Add initial device tree nodes for Cavium Thunder SoCs with support for
48 cores and GICv3. The dtsi file requires further changes, especially
for PCI, the GICv3 ITS and the SMMU. These changes will be added later
together with the device drivers.
Signed-off-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
The definition of "comma" already exists in scripts/Kbuild.include.
We should not define it a second time.
Signed-off-by: Masahiro Yamada <yamada.m@jp.panasonic.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Michal Marek <mmarek@suse.cz>
This patch replaces the static assignment of ~0 to dma_handle with
DMA_ERROR_CODE to be consistent with other platforms.
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Use the generic PCI domain and OF functions to provide support for PCI
on arm64.
[bhelgaas: Change comments to use generic PCI, not just PCIe. Nothing at
this level is PCIe-specific.]
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
The arch_timer_evtstrm_enable hooks in arm and arm64 are substantially
similar, the only difference being a CONFIG_COMPAT-conditional section
which is relevant only for arm64. Copy the arm64 version to the
driver, removing the arch-specific hooks.
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
The only difference between arm and arm64's implementations of
arch_counter_set_user_access is that 32-bit ARM does not enable user
access to the virtual counter. We want to enable this access for the
32-bit ARM VDSO, so copy the arm64 version to the driver itself, and
remove the arch-specific implementations.
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
This includes a bunch of changes:
- Support read-only memory slots on arm/arm64
- Various changes to fix Sparse warnings
- Correctly detect write vs. read Stage-2 faults
- Various VGIC cleanups and fixes
- Dynamic VGIC data structure sizing
- Fix SGI set_clear_pend offset bug
- Fix VTTBR_BADDR Mask
- Correctly report the FSC on Stage-2 faults
Merge tag 'kvm-arm-for-3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm-next
Changes for KVM for arm/arm64 for 3.18
This includes a bunch of changes:
- Support read-only memory slots on arm/arm64
- Various changes to fix Sparse warnings
- Correctly detect write vs. read Stage-2 faults
- Various VGIC cleanups and fixes
- Dynamic VGIC data structure sizing
- Fix SGI set_clear_pend offset bug
- Fix VTTBR_BADDR Mask
- Correctly report the FSC on Stage-2 faults
Conflicts:
virt/kvm/eventfd.c
[duplicate, different patch where the kvm-arm version broke x86.
The kvm tree instead has the right one]
When we catch something that's not a permission fault or a translation
fault, we log the unsupported FSC in the kernel log, but we were masking
off the bottom bits of the FSC which was not very helpful.
Also correctly report the FSC for data and instruction faults rather
than telling people it was a DFCS, which doesn't exist in the ARM ARM.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits
and not all the bits in the PA range. This is clearly a bug that
manifests itself on systems that allocate memory in the higher address
space range.
[ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT
instead of a hard-coded value and to move the alignment check of the
allocation to mmu.c. Also added a comment explaining why we hardcode
the IPA range and changed the stage-2 pgd allocation to be based on
the 40 bit IPA range instead of the maximum possible 48 bit PA range.
- Christoffer ]
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joel Schopp <joel.schopp@amd.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Implementing a restart handler in a module doesn't make sense, as there
would be no guarantee that the module is loaded when a restart is needed.
Unexport arm_pm_restart to ensure that no one gets the idea to do it
anyway.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Heiko Stuebner <heiko@sntech.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonas Jensen <jonas.jensen@gmail.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
The kernel core now supports a restart handler call chain to restart the
system. Call it if arm_pm_restart is not set.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Heiko Stuebner <heiko@sntech.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonas Jensen <jonas.jensen@gmail.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Some of the KGDB macros used for generating the BRK instructions had the
wrong spelling for DBG and KGDB abbreviations.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Following a recent series of enhancements to the insn code the ARMv8
allnoconfig build has been generating a large number of warnings in the
form of:
arch/arm64/kernel/insn.c:689:8: warning: 'insn' may be used uninitialized in this function [-Wmaybe-uninitialized]
This is because BUG() and related macros can be compiled out so we get
execution paths which normally result in a panic compiling out to noops
instead.
I wasn't able to immediately identify a sensible return value to use in
these cases so just return AARCH64_BREAK_FAULT - this is all "should
never happen" code so hopefully it never has a practical impact.
Signed-off-by: Mark Brown <broonie@kernel.org>
[catalin.marinas@arm.com: AARCH64_BREAK_FAULT definition contributed by Daniel Borkmann]
[catalin.marinas@arm.com: replace return 0 with AARCH64_BREAK_FAULT]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This will be used to let the guest run while the APIC access page is
not pinned. Because subsequent patches will fill in the function
for x86, place the (still empty) x86 implementation in the x86.c file
instead of adding an inline function in kvm_host.h.
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
1. We were calling clear_flush_young_notify in unmap_one, but we are
within an mmu notifier invalidate range scope. The spte exists no more
(due to range_start) and the accessed bit info has already been
propagated (due to kvm_pfn_set_accessed). Simply call
clear_flush_young.
2. We clear_flush_young on a primary MMU PMD, but this may be mapped
as a collection of PTEs by the secondary MMU (e.g. during log-dirty).
This required expanding the interface of the clear_flush_young mmu
notifier to take an address range, so a lot of code has been trivially
touched (see the sketch after this list).
3. In the absence of shadow_accessed_mask (e.g. EPT A bit), we emulate
the access bit by blowing the spte. This requires proper synchronizing
with MMU notifier consumers, like every other removal of spte's does.
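A sketch of the widened hook from point 2, assuming the interface now takes a start/end range rather than a single address:

struct mmu_notifier_ops {
	/* ... */
	int (*clear_flush_young)(struct mmu_notifier *mn,
				 struct mm_struct *mm,
				 unsigned long start,
				 unsigned long end);
	/* ... */
};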
Signed-off-by: Andres Lagar-Cavilla <andreslc@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The arm64 tree added calls to audit_syscall_entry() and rightly included
the syscall number. The interface has since been changed to not need
the syscall number. As such, arm64 should no longer pass that value.
Signed-off-by: Eric Paris <eparis@redhat.com>
This patch adds auditing functions on entry to or exit from
every system call invocation.
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When returning from a debug exception taken from EL1, we unmask debug
exceptions after handling the exception. This is crucial for debug
exceptions taken from EL0, so that any kernel work on the ret_to_user
path can be debugged by kgdb.
However, when returning back to EL1 the only thing left to do is to
restore the original register state before the exception return. If
single-step has been enabled by the debug exception handler, we will
get stuck in an infinite debug exception loop, since we will take the
step exception as soon as we unmask debug exceptions.
This patch avoids unmasking debug exceptions on the debug exception
return path when the exception was taken from EL1.
Fixes: 2a2830703a (arm64: debug: avoid accessing mdscr_el1 on fault paths where possible)
Cc: <stable@vger.kernel.org> #3.16+
Reported-by: David Long <dave.long@linaro.org>
Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit 6ecba8eb51 (arm64: Use bus notifiers to set per-device coherent
DMA ops) introduced bus notifiers to set the coherent dma ops based on
the 'dma-coherent' DT property. Since the generic of_dma_configure()
handles this property for platform and AMBA devices, replace the
notifiers with set_arch_dma_coherent_ops().
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
SMBIOS is important for server hardware vendors. It implements a spec for
providing descriptive information about the platform. Things like serial
numbers, physical layout of the ports, build configuration data, and the like.
This has been tested by dmidecode and lshw tools.
This patch adds the call to dmi_scan_machine() to arm64_enter_virtual_mode(),
as that is the point where the EFI Configuration Tables are registered as
being available. It needs to be in an early_initcall anyway as dmi_id_init(),
which is an arch_initcall itself, depends on dmi_scan_machine() having been
called already.
Signed-off-by: Yi Li <yi.li@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Pull x86 fixes from Ingo Molnar:
"Misc fixes:
EFI fixes, a build fix, a page table dumping (debug) fix and a clang
build fix"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
efi/arm64: Fix fdt-related memory reservation
x86/mm: Apply the section attribute to the variable, not its type
x86/efi: Fixup GOT in all boot code paths
x86/efi: Only load initrd above 4g on second try
x86-64, ptdump: Mark espfix area only if existent
x86, irq: Fix build error caused by 9eabc99a63
The aarch64_insn_gen_branch_imm() function takes an enum as the last
argument rather than a bool. It happens to work because
AARCH64_INSN_BRANCH_LINK matches 'true' but better to use the actual
type.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In order to make the number of interrupts configurable, use the new
fancy device management API to add KVM_DEV_ARM_VGIC_GRP_NR_IRQS as
a VGIC configurable attribute.
Userspace can now specify the exact size of the GIC (by increments
of 32 interrupts).
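A hedged userspace sketch of the attribute in use; vgic_fd and the choice of 128 interrupts are illustrative:

__u32 nr_irqs = 128;	/* must be a multiple of 32 */
struct kvm_device_attr attr = {
	.group = KVM_DEV_ARM_VGIC_GRP_NR_IRQS,
	.attr = 0,
	.addr = (__u64)(unsigned long)&nr_irqs,
};

if (ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr))
	perror("KVM_SET_DEVICE_ATTR");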
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Initialize max_mapnr using the set_max_mapnr() helper function instead
of assigning it directly. Also, do not add PHYS_PFN_OFFSET to max_pfn,
since it already contains it.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The kernel wants to enable reporting of asynchronous interrupts (i.e.
System Errors) as early as possible. But if this happens too early then
any pending System Error on initial entry into the kernel may never be
reported where a user can see it. This situation will occur if the kernel
is configured with CONFIG_PANIC_ON_OOPS set and (default or command line)
enabled, in which case the kernel will panic as intended, however the
associated logging messages indicating this failure condition will remain
only in the kernel ring buffer and never be flushed out to the (not yet
configured) console. Therefore, this patch moves the enabling of
asynchronous interrupts during early setup to as early as is reasonable,
but after parsing any earlycon parameters and setting up earlycon.
Signed-off-by: Jon Masters <jcm@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Remove '#' from immediate parameter in AARCH64 inline assembly in mmu.
This code now works with both gcc and clang.
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ARM64 irq work self-IPI support depends on __smp_cross_call pointing to
some relevant IRQ controller operations. This information should be
available after the call to init_IRQ().
Let's implement arch_irq_work_has_interrupt() accordingly.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
The nohz full code needs irq work to trigger its own interrupt so that
the subsystem can work even when the tick is stopped.
Let's introduce arch_irq_work_has_interrupt(), which archs can override
to declare their support for this ability.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Just a couple of stragglers here:
- Fix an issue migrating interrupts on CPU hotplug
- Fix a potential information leak of TLS registers across an exec
(Nathan has sent a corresponding patch for arch/arm/ to rmk)
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon:
"Just a couple of stragglers here:
- fix an issue migrating interrupts on CPU hotplug
- fix a potential information leak of TLS registers across an exec
(Nathan has sent a corresponding patch for arch/arm/ to rmk)"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: flush TLS registers during exec
arm64: use irq_set_affinity with force=false when migrating irqs
The start address needs to be actually updated after it
is detected to be unaligned. Adjust it and the end address
properly.
Reported-by: Zi Shen Lim <zlim.lnx@gmail.com>
Reviewed-by: Zi Shen Lim <zlim.lnx@gmail.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
On ARM64, when the BPF JIT compiler fills the JIT image body with
opcodes during translation of eBPF into ARM64 opcodes, we may fail
for several reasons during that phase: one being that we jump to
the notyet label for not yet supported eBPF instructions such as
BPF_ST. In that case we only free offsets, but not the actual
allocated target image where opcodes are being stored. Fix it by
calling module_free() on dismantle time in case of errors.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* cpuidle:
arm64: add PSCI CPU_SUSPEND based cpu_suspend support
arm64: kernel: introduce cpu_init_idle CPU operation
arm64: kernel: refactor the CPU suspend API for retention states
Documentation: arm: define DT idle states bindings
This patch implements the cpu_suspend cpu operations method through
the PSCI CPU SUSPEND API. The PSCI implementation translates the idle state
index passed by the cpu_suspend core call into a valid PSCI state according to
the PSCI states initialized at boot through the cpu_init_idle() CPU
operations hook.
The PSCI CPU suspend operation hook checks if the PSCI state is a
standby state. If it is, it calls the PSCI suspend implementation
straight away, without saving any context. If the state is a power
down state, the kernel calls the __cpu_suspend API (which saves the CPU
context) and passes the PSCI suspend finisher as a parameter, so that PSCI
can be called by the __cpu_suspend implementation after saving and flushing
the context, as the last function before power down.
For power down states, entry point is set to cpu_resume physical address,
that represents the default kernel execution address following a CPU reset.
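A condensed sketch of the resulting hook, with error handling trimmed; the exact naming in the arm64 PSCI back-end is assumed:

static int cpu_psci_cpu_suspend(unsigned long index)
{
	struct psci_power_state *state = __get_cpu_var(psci_power_state);

	/* Standby state: no context is lost, call PSCI directly. */
	if (state[index - 1].type == PSCI_POWER_STATE_TYPE_STANDBY)
		return psci_ops.cpu_suspend(state[index - 1], 0);

	/* Power down state: save context; the finisher runs last. */
	return __cpu_suspend(index, psci_suspend_finisher);
}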
Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The CPUidle subsystem on ARM64 machines requires the idle states
implementation back-end to initialize idle state parameters upon
boot. This patch adds a hook in the CPU operations structure that
should be initialized by the CPU operations back-end in order to
provide a function that initializes cpu idle states.
This patch also adds the infrastructure required by the arm64 kernel
to export the CPU-operations-based initialization interface, so that
drivers (i.e. CPUidle) can use it when they are initialized at probe
time.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
CPU suspend is the standard kernel interface to be used to enter
low-power states on ARM64 systems. The current cpu_suspend
implementation assumes by default that all low-power states lose the
CPU context, so the CPU registers must be saved and cleaned to DRAM
upon state entry. Furthermore, the current cpu_suspend() implementation
assumes that if the CPU suspend back-end method returns when called,
this has to be considered an error regardless of the return code (which
can indicate success), since the CPU was not expected to return from a
code path different from the cpu_resume path - e.g. returning from the
reset vector.
All in all, this means that the current API does not cope well with
low-power states that preserve the CPU context when entered (i.e.
retention states): the context is saved for nothing on entry into those
states, and a successful state entry can return as a normal function
return, which the current CPU suspend implementation considers an
error.
This patch refactors the cpu_suspend() API so that it can be split in
two separate functionalities. The arm64 cpu_suspend API just provides
a wrapper around CPU suspend operation hook. A new function is
introduced (for architecture code use only) for states that require
context saving upon entry:
__cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
__cpu_suspend() saves the context on function entry and calls the
so-called suspend finisher (i.e. fn) to complete the suspend operation.
The finisher is not expected to return, unless it fails, in which case
the error is propagated back to the __cpu_suspend caller.
The API refactoring results in the following pseudo code call sequence for a
suspending CPU, when triggered from a kernel subsystem:
/*
* int cpu_suspend(unsigned long idx)
* @idx: idle state index
*/
{
-> cpu_suspend(idx)
|---> CPU operations suspend hook called, if present
|--> if (retention_state)
|--> direct suspend back-end call (eg PSCI suspend)
else
|--> __cpu_suspend(idx, &back_end_finisher);
}
By refactoring the cpu_suspend API this way, the CPU operations back-end
has a chance to detect whether idle states require state saving or not
and can call the required suspend operations accordingly either through
simple function call or indirectly through __cpu_suspend() which carries out
state saving and suspend finisher dispatching to complete idle state entry.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Nathan reports that we leak TLS information from the parent context
during an exec, as we don't clear the TLS registers when flushing the
thread state.
This patch updates the flushing code so that we:
(1) Unconditionally zero the tpidr_el0 register (since this is fully
context switched for native tasks and zeroed for compat tasks)
(2) Zero the tp_value state in thread_info before clearing the
tpidrro_el0 register for compat tasks (since this is only writable
by the set_tls compat syscall and therefore not fully switched).
A missing compiler barrier is also added to the compat set_tls syscall.
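A sketch of the resulting flush logic, assuming it lives in a tls_thread_flush() helper:

static void tls_thread_flush(void)
{
	asm ("msr tpidr_el0, xzr");		/* (1) unconditional zeroing */

	if (is_compat_task()) {
		current->thread.tp_value = 0;	/* (2) clear saved state... */
		barrier();			/* ...strictly before... */
		asm ("msr tpidrro_el0, xzr");	/* ...zeroing the register */
	}
}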
Cc: <stable@vger.kernel.org>
Acked-by: Nathan Lynch <Nathan_Lynch@mentor.com>
Reported-by: Nathan Lynch <Nathan_Lynch@mentor.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ISS encoding for an exception from a Data Abort has a WnR
bit[6] that indicates whether the Data Abort was caused by a
read or a write instruction. While there are several fields
in the encoding that are only valid if the ISV bit[24] is set,
WnR is not one of them, so we can read it unconditionally.
Instead of fixing both implementations of kvm_is_write_fault()
in place, reimplement it just once using kvm_vcpu_dabt_iswrite(),
which already does the right thing with respect to the WnR bit.
Also fix up the callers to pass 'vcpu'
Acked-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Commit 86c8b27a01:
"arm64: ignore DT memreserve entries when booting in UEFI mode
prevents early_init_fdt_scan_reserved_mem() from being called for
arm64 kernels booting via UEFI. This was done because the kernel
will use the UEFI memory map to determine reserved memory regions.
That approach has problems in that early_init_fdt_scan_reserved_mem()
also reserves the FDT itself and any node-specific reserved memory.
By chance of some kernel configs, the FDT may be overwritten before
it can be unflattened and the kernel will fail to boot. More subtle
problems will result if the FDT has node specific reserved memory
which is not really reserved.
This patch has the UEFI stub remove the memory reserve map entries
from the FDT as it does with the memory nodes. This allows
early_init_fdt_scan_reserved_mem() to be called unconditionally
so that the other needed reservations are made.
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Raise the current maximum limit to 64. This is needed for Cavium's
Thunder systems that will have at least 48 cores per die.
The change keeps the current memory footprint of the cpu mask structures.
It does not break existing code. Setting the maximum to 64 cpus still
boots systems with fewer cpus.
Mark's Juno happily booted with a NR_CPUS=64 kernel.
Tested on our Thunder system with 48 cores. We could see interrupts to
all cores.
Cc: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The JIT compiler emits A64 instructions. It supports eBPF only.
Legacy BPF is supported thanks to conversion by BPF core.
JIT is enabled in the same way as for other architectures:
echo 1 > /proc/sys/net/core/bpf_jit_enable
Or for additional compiler output:
echo 2 > /proc/sys/net/core/bpf_jit_enable
See Documentation/networking/filter.txt for more information.
The implementation passes all 57 tests in lib/test_bpf.c
on ARMv8 Foundation Model :) Also tested by Will on Juno platform.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate logical (shifted register)
instructions.
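For flavour, the rough shape of the prototype this patch adds to insn.h (the argument names here are assumptions):

u32 aarch64_insn_gen_logical_shifted_reg(enum aarch64_insn_register dst,
					 enum aarch64_insn_register src,
					 enum aarch64_insn_register reg,
					 int shift,
					 enum aarch64_insn_variant variant,
					 enum aarch64_insn_logic_type type);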
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate data-processing (3 source) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate data-processing (2 source) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate data-processing (1 source) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate add/subtract (shifted register)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate move wide (immediate) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate bitfield instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate add/subtract (immediate) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate load/store pair instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate load/store (register offset)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate conditional branch (immediate)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate unconditional branch (register)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate compare & branch (immediate)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The global register current_stack_pointer holds the current stack pointer.
This change supports being able to compile the kernel with both gcc and clang.
Author: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
To support both Clang and GCC, use the global stack register variable vs
a local register variable.
Author: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Use the global current_stack_pointer to get the value of the stack pointer.
This change supports being able to compile the kernel with both gcc and clang.
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Use the global current_stack_pointer to get the value of the stack pointer.
This change supports being able to compile the kernel with both gcc and clang.
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Use the global current_stack_pointer to get the value of the stack pointer.
This change supports being able to compile the kernel with both gcc and clang.
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Define a global named register for current_stack_pointer. The use of this new
variable guarantees that both gcc and clang can access this register in C code.
Signed-off-by: Behan Webster <behanw@converseincode.com>
Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de>
Reviewed-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In a similar fashion to other architectures, add the infrastructure
and Kconfig to enable DEBUG_SET_MODULE_RONX support. When
enabled, module ranges will be marked read-only/no-execute as
appropriate.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[will: fixed off-by-one in module end check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
It's useful to be able to change individual bits in ptes at times.
Introduce functions for this and update existing pte_mk* functions
to use these primitives.
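A minimal sketch of the primitives and one converted user, assuming PTE_WRITE-style pgprot constants:

static inline pte_t set_pte_bit(pte_t pte, pgprot_t prot)
{
	pte_val(pte) |= pgprot_val(prot);
	return pte;
}

static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot)
{
	pte_val(pte) &= ~pgprot_val(prot);
	return pte;
}

static inline pte_t pte_wrprotect(pte_t pte)
{
	return clear_pte_bit(pte, __pgprot(PTE_WRITE));
}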
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[will: added missing inline keyword for new header functions]
Signed-off-by: Will Deacon <will.deacon@arm.com>
The current soft_restart() and setup_restart implementations incorrectly
assume that the compiler will not spill/fill values to/from the stack.
However this assumption seems to be wrong, as revealed by the disassembly
of the currently existing code (v3.16) built with Linaro GCC 4.9-2014.05.
ffffffc000085224 <soft_restart>:
ffffffc000085224: a9be7bfd stp x29, x30, [sp,#-32]!
ffffffc000085228: 910003fd mov x29, sp
ffffffc00008522c: f9000fa0 str x0, [x29,#24]
ffffffc000085230: 94003d21 bl ffffffc0000946b4 <setup_mm_for_reboot>
ffffffc000085234: 94003b33 bl ffffffc000093f00 <flush_cache_all>
ffffffc000085238: 94003dfa bl ffffffc000094a20 <cpu_cache_off>
ffffffc00008523c: 94003b31 bl ffffffc000093f00 <flush_cache_all>
ffffffc000085240: b0003321 adrp x1, ffffffc0006ea000 <reset_devices>
ffffffc000085244: f9400fa0 ldr x0, [x29,#24] ----> spilled addr
ffffffc000085248: f942fc22 ldr x2, [x1,#1528] ----> global memstart_addr
ffffffc00008524c: f0000061 adrp x1, ffffffc000094000 <__inval_cache_range+0x40>
ffffffc000085250: 91290021 add x1, x1, #0xa40
ffffffc000085254: 8b010041 add x1, x2, x1
ffffffc000085258: d2c00802 mov x2, #0x4000000000 // #274877906944
ffffffc00008525c: 8b020021 add x1, x1, x2
ffffffc000085260: d63f0020 blr x1
...
Here the compiler generates memory accesses after the cache is disabled,
loading stale values for the spilled value and global variable. As we cannot
control when the compiler will access memory we must rewrite the
functions in assembly to stash values we need in registers prior to
disabling the cache, avoiding the use of memory.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If we cannot relocate the kernel Image to its preferred offset of base of DRAM
plus TEXT_OFFSET, instead relocate it to the lowest available 2 MB boundary plus
TEXT_OFFSET. We may lose a bit of memory at the low end, but we can still
proceed normally otherwise.
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The static memory footprint of a kernel Image at boot is larger than the
Image file itself. Things like .bss data and initial page tables are allocated
statically but populated dynamically so their content is not contained in the
Image file.
However, if EFI (or GRUB) has loaded the Image at precisely the desired offset
of base of DRAM + TEXT_OFFSET, the Image will be booted in place, and we have
to make sure that the allocation done by the PE/COFF loader is large enough.
Fix this by growing the PE/COFF .text section to cover the entire static
memory footprint. The part of the section that is not covered by the payload
will be zero initialised by the PE/COFF loader.
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In certain cases the cpu-release-addr of a CPU may not fall in the
linear mapping (e.g. when the kernel is loaded above this address due to
the presence of other images in memory). This is problematic for the
spin-table code as it assumes that it can trivially convert a
cpu-release-addr to a valid VA in the linear map.
This patch modifies the spin-table code to use a temporary cached
mapping to write to a given cpu-release-addr, enabling us to support
addresses regardless of whether they are covered by the linear mapping.
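A trimmed sketch of the prepare path after this change, with error handling elided:

static int smp_spin_table_cpu_prepare(unsigned int cpu)
{
	__le64 __iomem *release_addr;

	/* Temporary cached mapping: valid whether or not the address
	 * is covered by the linear map. */
	release_addr = ioremap_cache(cpu_release_addr[cpu],
				     sizeof(*release_addr));

	writeq_relaxed(__pa(secondary_holding_pen), release_addr);
	__flush_dcache_area((__force void *)release_addr,
			    sizeof(*release_addr));
	sev();				/* kick the waiting secondaries */

	iounmap(release_addr);
	return 0;
}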
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[ardb: added (__force void *) cast]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
VIPT caches are non-aliasing if the index is derived from address bits that
are always equal between VA and PA. Classifying these as aliasing results in
unnecessary flushing which may hurt performance.
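The classification rule, sketched with the I-cache helpers introduced by the companion patch below:

/* If one way (sets * line size) fits in a page, the index bits are
 * drawn from VA bits that always equal the PA bits, so the VIPT
 * I-cache cannot alias. */
static inline bool vipt_icache_is_aliasing(void)
{
	return icache_get_numsets() * icache_get_linesize() > PAGE_SIZE;
}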
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This adds helper functions and #defines to <asm/cachetype.h> to read the
line size and the number of sets from the level 1 instruction cache.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm fixes from Paolo Bonzini:
"A smattering of bug fixes across most architectures"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
powerpc/kvm/cma: Fix panic introduces by signed shift operation
KVM: s390/mm: Fix guest storage key corruption in ptep_set_access_flags
KVM: s390/mm: Fix storage key corruption during swapping
arm/arm64: KVM: Complete WFI/WFE instructions
ARM/ARM64: KVM: Nuke Hyp-mode tlbs before enabling MMU
KVM: s390/mm: try a cow on read only pages for key ops
KVM: s390: Fix user triggerable bug in dead code
The arm64 interrupt migration code on cpu offline calls
irqchip.irq_set_affinity() with the argument force=true. Originally
this argument had no effect because it was not used by any interrupt
chip driver and there was no semantics defined.
This changed with commit 01f8fa4f01 ("genirq: Allow forcing cpu
affinity of interrupts") which made the force argument useful to route
interrupts to not yet online cpus without checking the target cpu
against the cpu online mask. The following commit ffde1de640
("irqchip: gic: Support forced affinity setting") implemented this for
the GIC interrupt controller.
As a consequence, the cpu offline irq migration fails if CPU0 is
offlined, because CPU0 is still set in the affinity mask and the
validation against the cpu online mask is skipped due to the force
argument being true. The following first_cpu(mask) selection then
always selects CPU0 as the target.
Commit 601c942176d8("arm64: use cpu_online_mask when using forced
irq_set_affinity") intended to fix the above mentioned issue but
introduced another issue where affinity can be migrated to a wrong
CPU due to unconditional copy of cpu_online_mask.
As was done for arm, solve the issue by calling irq_set_affinity()
with force=false from the CPU offline irq migration code, so the GIC
driver validates the affinity mask against the CPU online mask and
therefore removes CPU0 from the possible target candidates. Also revert
the changes done in commit 601c942176 as they are no longer needed.
Tested on Juno platform.
Fixes: 601c942176d8 ("arm64: use cpu_online_mask when using forced
irq_set_affinity")
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.10.x
Signed-off-by: Will Deacon <will.deacon@arm.com>
All the arm64 irqchip drivers have been converted to handle_domain_irq,
making it possible to remove the handle_IRQ stub entirely.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lkml.kernel.org/r/1409047421-27649-26-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
In order to limit code duplication, convert the architecture specific
handle_IRQ to use the generic __handle_domain_irq function.
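After the conversion, the arm64 handle_IRQ reduces to something like:

void handle_IRQ(unsigned int irq, struct pt_regs *regs)
{
	__handle_domain_irq(NULL, irq, false, regs);
}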
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lkml.kernel.org/r/1409047421-27649-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
It turns out that vendors are relying on the format of /proc/cpuinfo,
and we've even spotted out-of-tree hacks attempting to make it look
identical to the format used by arch/arm/. That means we can't afford to
churn this interface in mainline, so revert the recent reformatting of
the file for arm64 pending discussions on the list to find out what
people actually want.
This reverts commit d7a49086f2.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
arm64 now defers reloading FPSIMD state, but this optimization also
introduces a bug after a cpu resumes from low power mode.
After the cpu has been powered off, software needs to set the cpu's
fpsimd_last_state to NULL to force a reload of the FPSIMD state for
the thread. Otherwise there is a chance that the task's
fpsimd_state.cpu field contains the id of the current cpu while the
cpu's fpsimd_last_state per-cpu variable still points to the task's
fpsimd_state, so the kernel will skip reloading the context on return
to userland.
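A sketch of the fix in the FPSIMD CPU PM notifier, assuming that is where the per-cpu tracking is invalidated:

case CPU_PM_EXIT:
	if (current->mm)
		set_thread_flag(TIF_FOREIGN_FPSTATE);
	/* Invalidate the tracking so the FPSIMD state is reloaded
	 * after the CPU comes back from power off. */
	this_cpu_write(fpsimd_last_state, NULL);
	break;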
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Leo Yan <leoy@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The KSTK_ESP macro is used to determine the user stack pointer for a
given task. In particular, this is used to report the '[stack]' VMA
in /proc/self/maps, which is used by Android to determine the stack
location for children of the main thread.
This patch fixes the macro to use user_stack_pointer instead of directly
returning sp. This means that we report w13 instead of sp, since the
former is used as the stack pointer when executing in AArch32 state.
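The fixed macro presumably reads:

#define KSTK_ESP(tsk)	user_stack_pointer(task_pt_regs(tsk))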
Cc: <stable@vger.kernel.org>
Reported-by: Serban Constantinescu <Serban.Constantinescu@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 5f888a1d33 (ARM64: perf: support dwarf unwinding in compat mode)
changes user_stack_pointer() to return the compat SP for 32-bit tasks
but without brackets around the whole definition, with possible issues
on the call sites (noticed with a subsequent fix for KSTK_ESP).
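With the brackets, the whole definition expands safely at any call site; a sketch of the fixed form:

#define user_stack_pointer(regs) \
	(!compat_user_mode(regs) ? (regs)->sp : (regs)->compat_sp)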
Fixes: 5f888a1d33 (ARM64: perf: support dwarf unwinding in compat mode)
Reported-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In the beginning was on_each_cpu(), which required an unused argument to
kvm_arch_ops.hardware_{en,dis}able, but this was soon forgotten.
Remove unnecessary arguments that stem from this.
Signed-off-by: Radim KrÄmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Using static inline is going to save a few bytes and cycles.
For example, on powerpc the difference is 700 B after stripping
(5 kB before).
This patch also deals with two overlooked empty functions:
kvm_arch_flush_shadow was not removed from arch/mips/kvm/mips.c
2df72e9bc KVM: split kvm_arch_flush_shadow
and kvm_arch_sched_in never made it into arch/ia64/kvm/kvm-ia64.c.
e790d9ef6 KVM: add kvm_arch_sched_in
Signed-off-by: Radim KrÄmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Opaque KVM structs are useful for prototypes in asm/kvm_host.h, to avoid
"'struct foo' declared inside parameter list" warnings (and consequent
breakage due to conflicting types).
Move them from individual files to a generic place in linux/kvm_types.h.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This adds random number generator dts node to APM X-Gene platform.
Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The architecture specifies that when the processor wakes up from a WFE
or WFI instruction, the instruction is considered complete; however, we
currently return to EL1 (or EL0) at the WFI/WFE instruction itself.
While most guests may not be affected by this, because their local
exception handler performs an exception return with the event bit set
or with an interrupt pending, some guests like UEFI will get wedged due
to this little mishap.
Simply skip the instruction when we have completed the emulation.
Cc: <stable@vger.kernel.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
X-Gene u-boot runs in EL2 mode with the MMU enabled, hence we might
have stale EL2 TLB entries when we enable the EL2 MMU on each host CPU.
This can happen on any ARM/ARM64 board running a bootloader in
Hyp-mode (or EL2-mode) with the MMU enabled.
This patch ensures that we flush all Hyp-mode (or EL2-mode) TLBs
on each host CPU before enabling the Hyp-mode (or EL2-mode) MMU.
Cc: <stable@vger.kernel.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
The current perf_regs code relies on sp and pc sitting just off the end
of the pt_regs->regs array. This is ugly and fragile, so this patch
checks for these registers explicitly and returns the appropriate field.
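A sketch of the explicit handling, with a defensive bounds check:

u64 perf_reg_value(struct pt_regs *regs, int idx)
{
	if (WARN_ON_ONCE((u32)idx >= PERF_REG_ARM64_MAX))
		return 0;

	if (idx == PERF_REG_ARM64_SP)
		return regs->sp;

	if (idx == PERF_REG_ARM64_PC)
		return regs->pc;

	return regs->regs[idx];
}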
Acked-by: Jean Pihet <jean.pihet@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
copy_{to,from}_user return the number of bytes remaining on failure, not
an error code.
This patch returns -EFAULT when the copy operation didn't complete,
rather than expose the number of bytes not copied directly to userspace.
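The corrected idiom, sketched with placeholder uaddr/val/size:

if (copy_to_user(uaddr, &val, size))
	return -EFAULT;	/* not the residual byte count */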
Signed-off-by: Will Deacon <will.deacon@arm.com>
I'm not sure what I was on when I wrote this, but when iterating over
the hardware watchpoint array (hbp_watch_array), our index is off by
ARM_MAX_BRP, so we walk off the end of our thread_struct...
... except, a dodgy condition in the loop means that it never executes
at all (bp cannot be NULL).
This patch fixes the code so that we remove the bp check and use the
correct index for accessing the watchpoint structures.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We currently return the number of bytes not copied if set_timer_reg
fails, which is almost certainly not what userspace would like.
This patch returns -EFAULT instead.
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
is_valid_cache returns true if the specified cache is valid.
Unfortunately, if the parameter passed is out of range, we return
-ENOENT, which ends up as true, leading to potential hilarity.
This patch returns false on the failure path instead.
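A sketch of the corrected failure path, assuming the CSSELR_MAX bound used by the cache register emulation:

static bool is_valid_cache(u32 val)
{
	if (val >= CSSELR_MAX)
		return false;	/* was -ENOENT, which is truthy */

	/* ... remaining level/type checks unchanged ... */
}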
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Running sparse results in a bunch of noisy address space mismatches
thanks to the broken __percpu annotation on kvm_get_running_vcpus.
This function returns a pcpu pointer to a pointer, not a pointer to a
pcpu pointer. This patch fixes the annotation, which kills the warnings
from sparse.
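The shape of the fix, assuming the annotation simply moves to qualify the inner pointer:

/* before: struct kvm_vcpu __percpu **kvm_get_running_vcpus(void); */
struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);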
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Sparse kicks up about a type mismatch for kvm_target_cpu:
arch/arm64/kvm/guest.c:271:25: error: symbol 'kvm_target_cpu' redeclared with different type (originally declared at ./arch/arm64/include/asm/kvm_host.h:45) - different modifiers
so fix this by adding the missing const attribute to the function
declaration.
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
When userspace loads code and data in a read-only memory regions, KVM
needs to be able to handle this on arm and arm64. Specifically this is
used when running code directly from a read-only flash device; the
common scenario is a UEFI blob loaded with the -bios option in QEMU.
Note that the MMIO exit on writes to a read-only memory is ABI and can
be used to emulate block-erase style flash devices.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Remove an unused local variable from head.S. It seems this was never
used, even in the initial commit
9703d9d7f7 (arm64: Kernel booting and
initialisation), and is a leftover from a previous implementation
of __calc_phys_offset.
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>