According to the ARM ARM, the behaviour is UNPREDICTABLE if the PC read
from the exception return stack is not halfword aligned. See the
pseudocode for ExceptionReturn() and PopStack().
The signal handler's address has bit 0 set, and setup_return()
writes this directly to regs->ARM_pc. Current hardware happens to
discard this bit, but QEMU's emulation doesn't, and this makes processes
crash. Mask out bit 0 before the exception return in order to get
predictable behaviour.
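As a minimal illustration (plain C, not the kernel's actual assembly;
the helper name is made up), the fix amounts to clearing the Thumb
state bit before the value reaches the PC:

  #include <stdint.h>

  /* Clear bit 0 (the Thumb state bit) of a saved return address so that
   * the value restored to the PC on exception return is halfword aligned. */
  static inline uint32_t mask_pc_thumb_bit(uint32_t saved_pc)
  {
          return saved_pc & ~(uint32_t)1;
  }
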
Fixes: 19c4d593f0 ("ARM: ARMv7-M: Add support for exception handling")
Cc: stable@kernel.org
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
asm/assembler.h is a better place for this macro since it is used by
asm files outside arch/arm/kernel/
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Arun KS <getarunks@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ad65782fba (context_tracking: Optimize main APIs off case
with static key) converted the context tracking main APIs to inline
functions and left the ARM asm callers behind.
This can easily be fixed by making ARM call the post-static-keys
context tracking functions. We just need to replicate the static key
checks there. We'll remove these later once ARM supports the context
tracking static keys.
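A rough C sketch of that shape, with made-up names standing in for the
context tracking static key test and the real tracking work (an
assumption about structure, not kernel code):

  #include <stdbool.h>

  /* Stand-ins: in the kernel these correspond to the context tracking
   * static key test and the out-of-line user enter/exit work. */
  static bool tracking_enabled(void) { return false; }
  static void do_user_enter(void)    { /* real context tracking work */ }

  /* Function reachable from the ARM assembly callers: it replicates the
   * static key check itself so the disabled case stays cheap. */
  void ct_user_enter_from_asm(void)
  {
          if (!tracking_enabled())
                  return;
          do_user_enter();
  }
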
Reported-by: Guenter Roeck <linux@roeck-us.net>
Reported-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Tested-by: Kevin Hilman <khilman@linaro.org>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Anil Kumar <anilk4.v@gmail.com>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Benoit Cousson <b-cousson@ti.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Kevin Hilman <khilman@linaro.org>
Pull ARM-v7M support from Uwe Kleine-König:
"All but the last patch were in next since next-20130418 without issues.
The last patch fixes a problem in combination with
8164f7a (ARM: 7680/1: Detect support for SDIV/UDIV from ISAR0 register)
which triggers a WARN_ON without an implemented read_cpuid_ext.
The branch merges fine into v3.10-rc1 and I'd be happy if you pulled it
for 3.11-rc1. The only missing piece to be able to run a Cortex-M3 is
the irqchip driver that will go in via Thomas Gleixner and platform
specific stuff."
This patch implements exception handling for the ARMv7-M
architecture (which is quite different from the A or R profiles).
It is based on work done earlier by Catalin for 2.6.33 but was nearly
completely rewritten to use a pt_regs layout compatible with the A
profile.
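For orientation, the frame that ARMv7-M hardware stacks automatically on
exception entry looks like the sketch below (illustrative C, not kernel
code); the entry code saves the remaining registers by hand so that the
resulting pt_regs matches the A-profile layout:

  #include <stdint.h>

  /* Registers pushed automatically by ARMv7-M hardware on exception
   * entry, lowest address first; r4-r11 are not stacked by hardware and
   * must be saved by software to complete an A-profile-style pt_regs. */
  struct v7m_hw_stacked_frame {
          uint32_t r0, r1, r2, r3;
          uint32_t r12;
          uint32_t lr;    /* lr of the interrupted context */
          uint32_t pc;    /* return address */
          uint32_t xpsr;  /* program status */
  };
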
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Jonathan Austin <jonathan.austin@arm.com>
Tested-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
commit 91d1aa43 (context_tracking: New context tracking susbsystem)
generalized parts of the RCU userspace extended quiescent state into
the context tracking subsystem. Context tracking is then used
to implement adaptive tickless (a.k.a. extended nohz).
To support the new context tracking subsystem on ARM, the user/kernel
boundary transitions need to be instrumented.
For exceptions and IRQs in usermode, the existing usr_entry macro is
used to instrument the user->kernel transition. For the return to
usermode path, the ret_to_user* path is instrumented. Using the
usr_entry macro, this covers interrupts in userspace, data abort and
prefetch abort exceptions in userspace as well as undefined exceptions
in userspace (which is where FP emulation and VFP are handled).
For syscalls, the slow return path is covered by instrumenting the
ret_to_user path. In addition, the syscall entry point is
instrumented which covers the user->kernel transition for both fast
and slow syscalls, and an additional instrumentation point is added
for the fast syscall return path (ret_fast_syscall).
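The placement can be pictured with this C-style sketch; user_exit() and
user_enter() are the generic context tracking hooks, while the
surrounding names are illustrative stand-ins for the assembly paths
listed above:

  struct pt_regs;

  void user_exit(void);   /* context tracking: user -> kernel */
  void user_enter(void);  /* context tracking: kernel -> user */
  void handle_exception_or_syscall(struct pt_regs *regs);

  void entry_from_usermode(struct pt_regs *regs)
  {
          user_exit();                        /* usr_entry / syscall entry */
          handle_exception_or_syscall(regs);
          user_enter();                       /* ret_to_user / ret_fast_syscall */
  }
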
Cc: Mats Liljegren <mats.liljegren@enea.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
All svc exit paths need IRQs off. Rather than placing this before
every user of svc_exit, combine it into this macro.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The IRQ tracing exit path is much the same between all SVC mode
exits, so move this into the svc_exit macro. Use a macro parameter
to identify the IRQ case, which is the only different case there is.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The implementation of svc_exit didn't take into account any stack hole
created by svc_entry, as happens with the undef handler when kprobes are
configured. The fix is to read the saved value of SP rather than trying
to calculate it.
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Avoid enabling interrupts in the abort handler assembly code when the
parent context had interrupts enabled, and move this into the
breakpoint/page/alignment fault handlers instead.
This gets rid of some special-casing for the breakpoint fault handlers
from the low level abort handler path.
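In C, the pattern that moves into the fault handlers looks roughly like
this (interrupts_enabled() and local_irq_enable() are the kernel
helpers; the handler body shown is only a sketch):

  #include <linux/irqflags.h>
  #include <asm/ptrace.h>

  static int do_page_fault(unsigned long addr, unsigned int fsr,
                           struct pt_regs *regs)
  {
          /* Only re-enable IRQs if the interrupted context had them
           * enabled; the low-level abort assembly no longer does this. */
          if (interrupts_enabled(regs))
                  local_irq_enable();

          /* ... usual fault handling ... */
          return 0;
  }
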
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If CONFIG_CPU_V6 is enabled, then the kernel must support ARMv6 CPUs
which don't have the V6K extensions implemented. Always use the
dummy store-exclusive method to ensure that the exclusive monitors are
cleared.
If CONFIG_CPU_V6 is not set, but CONFIG_CPU_32v6K is enabled, then we
have the K extensions available on all CPUs we're building support for,
so we can use the new clear-exclusive instruction.
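A sketch of that selection, written here as a C helper with inline
assembly (the kernel expresses it as an assembly macro on the exception
return path; the function name and scratch word are illustrative):

  static inline void clear_exclusive_monitor(void)
  {
  #if defined(CONFIG_CPU_V6)
          /* Plain ARMv6 may lack CLREX: a dummy store-exclusive to a
           * scratch word clears the local monitor instead. */
          unsigned long status, scratch;

          asm volatile("strex %0, %1, [%2]"
                       : "=&r" (status)
                       : "r" (0), "r" (&scratch)
                       : "memory");
  #elif defined(CONFIG_CPU_32v6K)
          /* Every CPU being built for has the K extensions: use CLREX. */
          asm volatile("clrex" ::: "memory");
  #endif
  }
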
Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On ARM, debug exceptions occur in the form of data or prefetch aborts.
Unlike ordinary aborts, debug exceptions require access to per-CPU banked
registers and data structures which are not saved in the low-level exception
code. For kernels built with CONFIG_PREEMPT, there is an unlikely scenario
where the debug handler ends up running on a different CPU from the one
that originally signalled the event, resulting in random data being read
from the wrong registers.
This patch adds a debug_entry macro to the low-level exception handling
code which checks whether the taken exception is a debug exception. If
it is, the preempt count for the faulting process is incremented. After
the debug handler has finished, the count is decremented.
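Conceptually the change brackets the handler like this (a C sketch; the
real change is an assembly macro that adjusts the preempt count
directly, and handle_debug_exception() is a made-up stand-in):

  #include <linux/preempt.h>

  struct pt_regs;
  void handle_debug_exception(struct pt_regs *regs);

  static void debug_exception(struct pt_regs *regs)
  {
          preempt_disable();              /* pin the task to this CPU */
          handle_debug_exception(regs);   /* uses this CPU's banked debug regs */
          preempt_enable_no_resched();    /* drop the count again afterwards */
  }
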
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
ARMv5T and earlier require that an ldm {}^ instruction is not followed
by an instruction that accesses banked registers. This patch restores
the nop that was lost in commit b86040a59f.
Signed-off-by: Anders Grafström <grfstrm@users.sourceforge.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The 32-bit wide variant of "mov pc, reg" in Thumb-2 is unpredictable,
causing improper handling of undefined instructions not caught by
the kernel. This patch adds a movw_pc macro for such situations
(currently only used in call_fpe).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 200b812d00 "Clear the exclusive monitor when returning from an
exception" broke the vast majority of ARM systems in the wild, which are
still pre-ARMv6. The kernel crashes on the first occurrence of an
exception because the actual return instruction was removed for them.
Let's add it back.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The patch adds a CLREX or dummy STREX to the exception return path. This
is needed because several atomic/locking operations use a pair of
LDREX/STREXEQ and the EQ condition may not always be satisfied. This
would leave the exclusive monitor status set and may cause problems with
atomic/locking operations in the interrupted code.
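The kind of sequence in question looks like the cmpxchg-style sketch
below: when the TEQ fails, the STREXEQ is skipped, so the monitor set up
by the LDREX is left in the exclusive state (illustrative inline
assembly following the usual ARM pattern, not copied from the kernel):

  static inline unsigned long cmpxchg32_sketch(unsigned long *ptr,
                                               unsigned long old,
                                               unsigned long newval)
  {
          unsigned long oldval, res;

          do {
                  asm volatile(
                  "       ldrex   %1, [%2]\n"     /* sets the exclusive monitor */
                  "       mov     %0, #0\n"
                  "       teq     %1, %3\n"
                  "       strexeq %0, %4, [%2]\n" /* skipped when values differ */
                  : "=&r" (res), "=&r" (oldval)
                  : "r" (ptr), "Ir" (old), "r" (newval)
                  : "memory", "cc");
          } while (res);

          return oldval;
  }
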
With this patch, the atomic_set() operation can be a simple STR
instruction (on SMP systems, the global exclusive monitor is cleared by
STR anyway). Clearing the exclusive monitor during context switch is no
longer needed, as this is now handled by the exception return path.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Jamie Lokier <jamie@shareable.org>
5d25ac038a broke VFP builds due to
enable_irq not being defined as an assembly macro. Move it to
assembler.h so everyone can use it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
The ARM EABI says that the stack pointer has to be 64-bit aligned when
calling C functions, for reasons already mentioned in patch #3101.
We therefore must verify and adjust sp accordingly when taking an
exception from kernel mode, since sp might not necessarily be 64-bit
aligned if the exception occurs in the middle of a kernel function.
If the exception occurs while in user mode then no sp fixup is needed, as
long as sizeof(struct pt_regs) as well as any additional syscall data
stack space remain multiples of 8.
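Conceptually, the kernel-mode fixup amounts to the following
(illustrative C; the real adjustment happens in the svc_entry assembly,
which also records the adjustment so it can be undone on exit):

  #include <stdint.h>

  /* Round the kernel-mode stack pointer down to an 8-byte boundary and
   * report the adjustment (0 or 4 on a word-aligned stack) so the exit
   * path can restore the original value. */
  static inline uint32_t align_sp_for_eabi(uint32_t sp, uint32_t *adjust)
  {
          *adjust = sp & 7;
          return sp & ~(uint32_t)7;
  }
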
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The current vector entry system does not allow for SMP. In
order to work around this, we need to eliminate our reliance
on the fixed save areas, which breaks the way we enable
alignment traps. This patch makes the alignment trap enable
code independent of the way we handle the save areas.
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
SVC_MODE reflects the MODE_SVC definition in asm/ptrace.h. Use
the asm/ptrace.h definition instead, and remove SVC_MODE.
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!