Commit 377747c406 ("ARM: entry: allow ARM-private syscalls to be
restarted") reworked the low-level syscall dispatcher to allow
restarting of ARM-private syscalls. Unfortunately, this relocated the
label used to dispatch a private syscall from the trace path, so that
the invocation would be bypassed altogether!
This causes applications to fail under strace as soon as they rely on
a private syscall (e.g. set_tls):
set_tls(0xb6fad4c0, 0xb6fadb98, 0xb6fb1050, 0xb6fad4c0, 0xb6fb1050)
= -1 ENOSYS (Function not implemented)
This patch fixes the label so that we correctly dispatch private
syscalls from the trace path.
Reported-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
System calls will only be restarted after signal handling if they (a)
return an error code indicating that a restart is required and (b) have
`why' set to a non-zero value, to indicate that the signal interrupted
them.
This patch leaves `why' set to a non-zero value for ARM-private syscalls,
and only zeroes it for syscalls that are not implemented.
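For illustration, a minimal C sketch of the two conditions described above;
the helper, its arguments and the spelled-out errno values are stand-ins for
what the assembly return path actually checks:

#include <stdbool.h>

#define ERESTARTSYS            512
#define ERESTARTNOINTR         513
#define ERESTARTNOHAND         514
#define ERESTART_RESTARTBLOCK  516

/* Restart only if (a) the return value asks for it and (b) `why' is
 * non-zero, i.e. the syscall really was interrupted by the signal. */
static bool syscall_wants_restart(long retval, unsigned long why)
{
        bool restart_errno = (retval == -ERESTARTSYS ||
                              retval == -ERESTARTNOINTR ||
                              retval == -ERESTARTNOHAND ||
                              retval == -ERESTART_RESTARTBLOCK);

        return restart_errno && why != 0;
}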
Signed-off-by: Will Deacon <will.deacon@arm.com>
Running an OABI_COMPAT kernel on an SMP platform can lead to fun and
games with page aging.
If one CPU issues a swi instruction immediately before another CPU
decides to mkold the page containing the swi instruction, then we will
fault attempting to load the instruction during the vector_swi handler
in order to retrieve its immediate field. Since this fault is not
currently dealt with by our exception tables, this results in a panic:
Unable to handle kernel paging request at virtual address 4020841c
pgd = c490c000
[4020841c] *pgd=84451831, *pte=bf05859d, *ppte=00000000
Internal error: Oops: 17 [#1] PREEMPT SMP ARM
Modules linked in: hid_sony(O)
CPU: 1 Tainted: G W O (3.4.0-perf-gf496dca-01162-gcbcc62b #1)
PC is at vector_swi+0x28/0x88
LR is at 0x40208420
This patch wraps all of the swi instruction loads with the USER macro
and provides a shared exception table entry which simply rewinds the
saved user PC and returns from the system call (without setting tbl, so
there are no worries with tracing or syscall restarting). Returning to
userspace will re-enter the page fault handler, from where we will
probably send SIGSEGV to the current task.
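As an aside, a self-contained C model of the fixup behaviour, with
hypothetical helpers and types (the real change is the USER()-annotated
load in vector_swi plus a shared __ex_table fixup, not C code):

struct fake_regs {
        unsigned long pc;               /* saved user PC */
};

/* stand-in for the user-space load that may fault */
static int load_user_insn(unsigned long addr, unsigned long *insn)
{
        (void)addr;
        *insn = 0;
        return -1;                      /* simulate the page having been aged out */
}

static void vector_swi_model(struct fake_regs *regs)
{
        unsigned long insn;

        if (load_user_insn(regs->pc - 4, &insn)) {
                /* fixup path: rewind the saved PC and return without
                 * dispatching, so userspace re-executes the swi and the
                 * ordinary page fault handling takes it from there */
                regs->pc -= 4;
                return;
        }

        /* ... extract the immediate field and dispatch the OABI syscall ... */
}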
Reported-by: Wang, Yalin <yalin.wang@sonymobile.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull ARM-v7M support from Uwe Kleine-König:
"All but the last patch were in next since next-20130418 without issues.
The last patch fixes a problem in combination with
8164f7a (ARM: 7680/1: Detect support for SDIV/UDIV from ISAR0 register)
which triggers a WARN_ON without an implemented read_cpuid_ext.
The branch merges fine into v3.10-rc1 and I'd be happy if you pulled it
for 3.11-rc1. The only missing piece to be able to run a Cortex-M3 is
the irqchip driver that will go in via Thomas Gleixner and platform
specific stuff."
This patch implements the exception handling for the ARMv7-M
architecture (pretty different from the A or R profiles).
It is based on work done earlier by Catalin for 2.6.33 but was nearly
completely rewritten to use a pt_regs layout compatible with the A
profile.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Jonathan Austin <jonathan.austin@arm.com>
Tested-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
commit 91d1aa43 (context_tracking: New context tracking susbsystem)
generalized parts of the RCU userspace extended quiescent state into
the context tracking subsystem. Context tracking is then used
to implement adaptive tickless (a.k.a. extended nohz).
To support the new context tracking subsystem on ARM, the user/kernel
boundary transitions need to be instrumented.
For exceptions and IRQs in usermode, the existing usr_entry macro is
used to instrument the user->kernel transition. For the return to
usermode path, the ret_to_user* path is instrumented. Using the
usr_entry macro, this covers interrupts in userspace, data abort and
prefetch abort exceptions in userspace as well as undefined exceptions
in userspace (which is where FP emulation and VFP are handled.)
For syscalls, the slow return path is covered by instrumenting the
ret_to_user path. In addition, the syscall entry point is
instrumented which covers the user->kernel transition for both fast
and slow syscalls, and an additional instrumentation point is added
for the fast syscall return path (ret_fast_syscall).
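A rough C sketch of where the hooks conceptually sit; the names below are
hypothetical models of the user_exit()/user_enter() calls made from the
assembly paths listed above:

/* user -> kernel boundary: usr_entry and the syscall entry point */
static void ct_user_exit_model(void)  { /* context tracking: leave user mode */ }

/* kernel -> user boundary: ret_to_user and ret_fast_syscall */
static void ct_user_enter_model(void) { /* context tracking: re-enter user mode */ }

static void syscall_path_model(void)
{
        ct_user_exit_model();           /* entering the kernel from userspace */
        /* ... handle the syscall, exception or interrupt ... */
        ct_user_enter_model();          /* returning to userspace */
}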
Cc: Mats Liljegren <mats.liljegren@enea.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The contents of the asm_trace_hardirqs_on macro are already conditional
on CONFIG_TRACE_IRQFLAGS. There's little point in making the use of the
macro conditional as well. Get rid of these ifdefs to make
the code easier to read.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add unwind annotations to the ftrace assembly code so that the function
tracer's stacktracing options (func_stack_trace, etc.) work when
CONFIG_ARM_UNWIND is enabled.
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull big execve/kernel_thread/fork unification series from Al Viro:
"All architectures are converted to new model. Quite a bit of that
stuff is actually shared with architecture trees; in such cases it's
literally shared branch pulled by both, not a cherry-pick.
A lot of ugliness and black magic is gone (-3KLoC total in this one):
- kernel_thread()/kernel_execve()/sys_execve() redesign.
We don't do syscalls from kernel anymore for either kernel_thread()
or kernel_execve():
kernel_thread() is essentially clone(2) with a callback run before we
return to userland; the callbacks either never return or do a
successful do_execve() before returning.
kernel_execve() is a wrapper for do_execve() - it doesn't need to
do transition to user mode anymore.
As a result kernel_thread() and kernel_execve() are
arch-independent now - they live in kernel/fork.c and fs/exec.c
respectively. sys_execve() is also in fs/exec.c and it's completely
architecture-independent.
- daemonize() is gone, along with its parts in fs/*.c
- struct pt_regs * is no longer passed to do_fork/copy_process/
copy_thread/do_execve/search_binary_handler/->load_binary/do_coredump.
- sys_fork()/sys_vfork()/sys_clone() unified; some architectures
still need wrappers (ones with callee-saved registers not saved in
pt_regs on syscall entry), but the main part of those suckers is in
kernel/fork.c now."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal: (113 commits)
do_coredump(): get rid of pt_regs argument
print_fatal_signal(): get rid of pt_regs argument
ptrace_signal(): get rid of unused arguments
get rid of ptrace_signal_deliver() arguments
new helper: signal_pt_regs()
unify default ptrace_signal_deliver
flagday: kill pt_regs argument of do_fork()
death to idle_regs()
don't pass regs to copy_process()
flagday: don't pass regs to copy_thread()
bfin: switch to generic vfork, get rid of pointless wrappers
xtensa: switch to generic clone()
openrisc: switch to use of generic fork and clone
unicore32: switch to generic clone(2)
score: switch to generic fork/vfork/clone
c6x: sanitize copy_thread(), get rid of clone(2) wrapper, switch to generic clone()
take sys_fork/sys_vfork/sys_clone prototypes to linux/syscalls.h
mn10300: switch to generic fork/vfork/clone
h8300: switch to generic fork/vfork/clone
tile: switch to generic clone()
...
Conflicts:
arch/microblaze/include/asm/Kbuild
syscall_trace_exit is currently doing things back-to-front: invoking
the audit hook *after* signalling the debugger, which presents an
opportunity for the registers to be re-written by userspace in order to
bypass auditing constraints.
This patch fixes the ordering by moving the audit code first and the
tracehook code last. On the face of it, it looks like
current_thread_info()->syscall may be incorrect for the sys_exit
tracepoint, but that's actually not an issue because it will have been
set during syscall entry and cannot have changed since then.
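A hedged C sketch of the corrected ordering, with hypothetical helper names
standing in for the audit, tracepoint and tracehook calls:

static void audit_exit_model(long retval)  { (void)retval; }
static void trace_sys_exit_model(int scno) { (void)scno;   }
static void tracehook_exit_model(void)     { /* debugger may rewrite regs here */ }

static void syscall_trace_exit_model(int scno, long retval)
{
        /* audit first, so the registers it sees can no longer be
         * rewritten from userspace via the debugger */
        audit_exit_model(retval);

        /* sys_exit tracepoint: scno was recorded at entry and cannot
         * have changed since then */
        trace_sys_exit_model(scno);

        /* signal the debugger last */
        tracehook_exit_model();
}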
Reported-by: Andrew Gabbasov <Andrew_Gabbasov@mentor.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On tracehook-friendly platforms, a system call number of -1 falls
through without running much code or taking much action.
ARM is different. This adds a short-circuit check in the trace path to
avoid any additional work, as suggested by Russell King, to make sure
that ARM behaves the same way as other platforms.
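Conceptually (a hypothetical C rendering of the added check, not the actual
trace-path assembly):

/* a syscall number of -1 does no further work and is never dispatched */
static int should_invoke_syscall(long scno)
{
        return scno != -1;
}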
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Drewry <wad@chromium.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There is very little difference in the TIF_SECCOMP and TIF_SYSCALL_WORK
path in entry-common.S, so merge TIF_SECCOMP into TIF_SYSCALL_WORK and
move seccomp into the syscall_trace_enter() handler.
Some of the tracehook logic is expanded into the callers to make this code
more readable. Since tracehook needs to modify registers, that portion is
best left in its own function instead of being copy/pasted into the callers.
Additionally, the return value for secure_computing() is now checked
and a -1 value will result in the system call being skipped.
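A hedged sketch of the resulting entry hook, using hypothetical stand-ins
for secure_computing() and the tracehook call:

static int  secure_computing_model(int scno)    { (void)scno; return 0; }
static void tracehook_report_entry_model(void)  { /* ptrace stop; may rewrite regs */ }

/* returns the syscall number to run, or -1 to skip the call entirely */
static int syscall_trace_enter_model(int scno)
{
        /* seccomp first; a -1 result means the system call is skipped */
        if (secure_computing_model(scno) == -1)
                return -1;

        tracehook_report_entry_model();
        return scno;
}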
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Drewry <wad@chromium.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Daniel Mack reports an oops at boot with the latest kernels:
Internal error: Oops - undefined instruction: 0 [#1] SMP THUMB2
Modules linked in:
CPU: 0 Not tainted (3.6.0-11057-g584df1d #145)
PC is at cpsw_probe+0x45a/0x9ac
LR is at trace_hardirqs_on_caller+0x8f/0xfc
pc : [<c03493de>] lr : [<c005e81f>] psr: 60000113
sp : cf055fb0 ip : 00000000 fp : 00000000
r10: 00000000 r9 : 00000000 r8 : 00000000
r7 : 00000000 r6 : 00000000 r5 : c0344555 r4 : 00000000
r3 : cf057a40 r2 : 00000000 r1 : 00000001 r0 : 00000000
Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user
Control: 50c5387d Table: 8f3f4019 DAC: 00000015
Process init (pid: 1, stack limit = 0xcf054240)
Stack: (0xcf055fb0 to 0xcf056000)
5fa0: 00000001 00000000 00000000 00000000
5fc0: cf055fb0 c000d1a8 00000000 00000000 00000000 00000000 00000000 00000000
5fe0: 00000000 be9b3f10 00000000 b6f6add0 00000010 00000000 aaaabfaf a8babbaa
The analysis of this is as follows. In init/main.c, we issue:
kernel_thread(kernel_init, NULL, CLONE_FS | CLONE_SIGHAND);
This creates a new thread, which falls through to the ret_from_fork
assembly, with r4 set to NULL and r5 set to kernel_init. You can see
this in your oops dump register set - r5 is 0xc0344555, which is the
address of kernel_init plus 1 which marks the function as Thumb code.
Now, let's look at this code a little closer - this is what the
disassembly looks like:
c000d180 <ret_from_fork>:
c000d180: f03a fe08 bl c0047d94 <schedule_tail>
c000d184: 2d00 cmp r5, #0
c000d186: bf1e ittt ne
c000d188: 4620 movne r0, r4
c000d18a: 46fe movne lr, pc <-- XXXXXXX
c000d18c: 46af movne pc, r5
c000d18e: 46e9 mov r9, sp
c000d190: ea4f 3959 mov.w r9, r9, lsr #13
c000d194: ea4f 3949 mov.w r9, r9, lsl #13
c000d198: e7c8 b.n c000d12c <ret_to_user>
c000d19a: bf00 nop
c000d19c: f3af 8000 nop.w
This code was introduced in 9fff2fa0db (arm: switch to saner
kernel_execve() semantics). I have marked one instruction, and it's
the significant one - I'll come back to that later.
Eventually, having had a successful call to kernel_execve(), kernel_init()
returns zero.
In returning, it uses the value in 'lr' which was set by the instruction
I marked above. Unfortunately, this causes lr to contain 0xc000d18e -
an even address. This switches the ISA to ARM on return but with a
non-word-aligned PC value.
So, what do we end up executing? Well, not the instructions above - yes
the opcodes, but they don't mean the same thing in ARM mode. In ARM mode,
it looks like this instead:
c000d18c: 46e946af strbtmi r4, [r9], pc, lsr #13
c000d190: 3959ea4f ldmdbcc r9, {r0, r1, r2, r3, r6, r9, fp, sp, lr, pc}^
c000d194: 3949ea4f stmdbcc r9, {r0, r1, r2, r3, r6, r9, fp, sp, lr, pc}^
c000d198: bf00e7c8 svclt 0x0000e7c8
c000d19c: 8000f3af andhi pc, r0, pc, lsr #7
c000d1a0: e88db092 stm sp, {r1, r4, r7, ip, sp, pc}
c000d1a4: 46e81fff ; <UNDEFINED> instruction: 0x46e81fff
c000d1a8: 8a00f3ef bhi 0xc004a16c
c000d1ac: 0a0cf08a beq 0xc03493dc
I have included more above, because it's relevant. The PSR flags which
we can see in the oops dump are nZCv, so Z and C are set.
Of the ARM instructions above, only two are executed. The first is
c000d1a0, which has no writeback and writes below the current stack
pointer (that data is lost when we take the next exception). The
other instruction which is executed is c000d1ac, which takes us to...
0xc03493dc. However, remember that bit 1 of the PC got set. So that
makes the PC value 0xc03493de.
And that value is the value we find in the oops dump for PC. What is
the instruction here when interpreted in ARM mode?
0: f71e150c ; <UNDEFINED> instruction: 0xf71e150c
and there we have our undefined instruction (remember that the 'never'
condition code, 0xf, has been deprecated and no longer means "never
execute", as it is now used to encode additional instructions).
This path also nicely explains the state of the stack we see in the oops
dump too.
The above is a consistent and sane story for how we got to the oops
dump, which all stems from the instruction at 0xc000d18a being wrong.
Reported-by: Daniel Mack <zonque@gmail.com>
Tested-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull third pile of kernel_execve() patches from Al Viro:
"The last bits of infrastructure for kernel_thread() et.al., with
alpha/arm/x86 use of those. Plus sanitizing the asm glue and
do_notify_resume() on alpha, fixing the "disabled irq while running
task_work stuff" breakage there.
At that point the rest of kernel_thread/kernel_execve/sys_execve work
can be done independently for different architectures. The only
pending bits that do depend on having all architectures converted are
restricted to fs/* and kernel/* - that'll obviously have to wait for
the next cycle.
I thought we'd have to wait for all of them done before we start
eliminating the longjump-style insanity in kernel_execve(), but it
turned out there's a very simple way to do that without flagday-style
changes."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal:
alpha: switch to saner kernel_execve() semantics
arm: switch to saner kernel_execve() semantics
x86, um: convert to saner kernel_execve() semantics
infrastructure for saner ret_from_kernel_thread semantics
make sure that kernel_thread() callbacks call do_exit() themselves
make sure that we always have a return path from kernel_execve()
ppc: eeh_event should just use kthread_run()
don't bother with kernel_thread/kernel_execve for launching linuxrc
alpha: get rid of switch_stack argument of do_work_pending()
alpha: don't bother passing switch_stack separately from regs
alpha: take SIGPENDING/NOTIFY_RESUME loop into signal.c
alpha: simplify TIF_NEED_RESCHED handling
Pull generic execve() changes from Al Viro:
"This introduces the generic kernel_thread() and kernel_execve()
functions, and switches x86, arm, alpha, um and s390 over to them."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal: (26 commits)
s390: convert to generic kernel_execve()
s390: switch to generic kernel_thread()
s390: fold kernel_thread_helper() into ret_from_fork()
s390: fold execve_tail() into start_thread(), convert to generic sys_execve()
um: switch to generic kernel_thread()
x86, um/x86: switch to generic sys_execve and kernel_execve
x86: split ret_from_fork
alpha: introduce ret_from_kernel_execve(), switch to generic kernel_execve()
alpha: switch to generic kernel_thread()
alpha: switch to generic sys_execve()
arm: get rid of execve wrapper, switch to generic execve() implementation
arm: optimized current_pt_regs()
arm: introduce ret_from_kernel_execve(), switch to generic kernel_execve()
arm: split ret_from_fork, simplify kernel_thread() [based on patch by rmk]
generic sys_execve()
generic kernel_execve()
new helper: current_pt_regs()
preparation for generic kernel_thread()
um: kill thread->forking
um: let signal_delivered() do SIGTRAP on singlestepping into handler
...
As specified by ftrace-design.txt, TIF_SYSCALL_TRACEPOINT was
added, as well as NR_syscalls in asm/unistd.h. Additionally,
__sys_trace was modified to call trace_sys_enter and
trace_sys_exit when appropriate.
Tests #2 - #4 of "perf test" now complete successfully.
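Roughly, the new behaviour corresponds to the following hypothetical C
fragment (the real hooks are invoked from the __sys_trace assembly path):

#define TIF_SYSCALL_TRACEPOINT_MODEL (1u << 10)        /* illustrative bit only */

static void trace_sys_enter_model(int scno) { (void)scno; }
static void trace_sys_exit_model(long ret)  { (void)ret;  }

static void sys_trace_model(unsigned int thread_flags, int scno)
{
        long ret = 0;

        if (thread_flags & TIF_SYSCALL_TRACEPOINT_MODEL)
                trace_sys_enter_model(scno);

        /* ... the system call itself runs here, producing ret ... */

        if (thread_flags & TIF_SYSCALL_TRACEPOINT_MODEL)
                trace_sys_exit_model(ret);
}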
Signed-off-by: Steven Walter <stevenrwalter@gmail.com>
Signed-off-by: Wade Farnsworth <wade_farnsworth@mentor.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Prior to syscall invocation, __sys_trace only reloads r0-r3 from the
kernel stack, preventing the debugger from updating arguments 5-7 when
signalled via ptrace.
This patch updates the code to reload r0-r6, updating arguments 5 and 6
on the stack (argument 7 is only used by OABI indirect syscalls and
can remain in a register).
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Just let do_work_pending() return 1 on normal local restarts and
-1 on those that had been caused by ERESTART_RESTARTBLOCK (and 0
is still "all done, sod off to userland now"). And let the asm
glue flip scno to the restart_syscall(2) one if it got a negative
value from us...
[will: resolved conflicts with audit fixes]
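A compact C model of that contract; the names are hypothetical and the
"flip scno" step really happens in the assembly glue:

/* Return values of the modelled do_work_pending():
 *   0  -> all done, return to userland
 *   1  -> normal local restart of the interrupted syscall
 *  -1  -> restart caused by ERESTART_RESTARTBLOCK               */
static int do_work_pending_model(int needs_restart, int restartblock)
{
        if (!needs_restart)
                return 0;
        return restartblock ? -1 : 1;
}

/* the asm glue switches to restart_syscall(2) on a negative result */
static void ret_glue_model(int ret, int *scno, int nr_restart_syscall)
{
        if (ret < 0)
                *scno = nr_restart_syscall;
}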
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The syscall_trace function on ARM takes a `why' parameter to indicate
whether we are entering or exiting a system call. This can be confusing for
people looking at the code since (a) it conflicts with the why register
alias in the entry assembly code and (b) it is not immediately clear
what it represents.
This patch splits up the syscall_trace function into separate wrappers
for syscall entry and exit, allowing the low-level syscall handling
code to branch to the appropriate function.
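Sketched in C, the split looks roughly like this (hypothetical signatures;
the point is simply that the `why' argument disappears from the interface):

/* before: one function taking a confusing `why' flag */
static void syscall_trace_old_model(int why, int scno) { (void)why; (void)scno; }

/* after: two explicit wrappers the low-level code can branch to */
static void syscall_trace_enter_wrapper_model(int scno) { syscall_trace_old_model(0, scno); }
static void syscall_trace_exit_wrapper_model(int scno)  { syscall_trace_old_model(1, scno); }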
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ret_from_fork is set up for a freshly spawned child task via copy_thread,
called from copy_process. The latter function clears TIF_SYSCALL_TRACE
and also resets the child task's audit_context to NULL, meaning that
there is little point invoking the system call tracing routines.
Furthermore, getting hold of the syscall number is a complete pain and
it looks like the current code doesn't even bother.
This patch removes the syscall tracing checks from ret_from_fork.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Checking in do_signal() is pointless - if we get there with !user_mode(regs)
(and we might), we'll end up looping indefinitely. Check in work_pending
instead and break out of the loop if so.
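A minimal model of the relocated check, assuming a hypothetical
do_work_pending() helper:

static int do_work_pending_model(int user_mode, unsigned int thread_flags)
{
        do {
                if (!user_mode)
                        return 0;       /* kernel-mode return: nothing to do */
                /* ... deliver signals, reschedule, etc. ... */
                thread_flags = 0;       /* assume the work is now done */
        } while (thread_flags);

        return 0;
}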
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
This patch removes support for ARMv3 CPUs, which haven't worked properly
for quite some time (see the FIXME comment in arch/arm/mm/fault.c). The
only V3 parts left are the cache model for ARMv3, which is needed for some
odd reason by ARM740T CPUs, and being able to build with -march=armv3,
which is required for the RiscPC platform due to its bus structure.
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Only 3 platforms need arch_ret_to_user macro, so add ARCH_HAS_RET_TO_USER
kconfig option and make iop13xx, iop32x and iop33x select it.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
The dynamic ftrace ops startup test currently fails on Thumb-2 kernels:
Testing tracer function: PASSED
Testing dynamic ftrace: PASSED
Testing dynamic ftrace ops #1: (0 0 0 0 0) FAILED!
This is because while the addresses in the mcount records do not have
the zero bit set, the IP reported by the mcount call does have it set
(because it is copied from the LR). This mismatch causes the ops
filtering in ftrace_ops_list_func() to not call the relevant tracers.
Fix this by clearing the zero bit before adjusting the LR for the mcount
instruction size. Also, combine the mov+sub into a single sub
instruction.
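In C terms, the fix amounts to something like this (MCOUNT_INSN_SIZE_MODEL
is a stand-in for the real constant):

#define MCOUNT_INSN_SIZE_MODEL 4               /* illustrative value only */

static unsigned long adjust_mcount_ip(unsigned long lr)
{
        unsigned long ip = lr & ~1UL;          /* clear the Thumb (zero) bit first */

        return ip - MCOUNT_INSN_SIZE_MODEL;    /* then rewind to the call site;
                                                  the asm does both in one sub */
}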
Acked-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch provides functionality to audit system call events on the
ARM platform. The implementation was based on the structure of the
MIPS platform and information in this
(http://lists.fedoraproject.org/pipermail/arm/2009-October/000382.html)
mailing list thread. The required audit_syscall_exit and
audit_syscall_entry checks were added to ptrace using the standard
registers for system call values (r0 through r3). A thread information
flag was added for auditing (TIF_SYSCALL_AUDIT) and a meta-flag was
added (_TIF_SYSCALL_WORK) to simplify modifications to the syscall
entry/exit. Now, if either the TRACE flag is set or the AUDIT flag is
set, the syscall_trace function will be executed. The proper changes
were made to Kconfig to allow CONFIG_AUDITSYSCALL to be enabled.
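A hedged sketch of the flag handling (bit positions are illustrative, not
the actual asm/thread_info.h values):

#define TIF_SYSCALL_TRACE_MODEL  (1u << 8)
#define TIF_SYSCALL_AUDIT_MODEL  (1u << 9)
#define _TIF_SYSCALL_WORK_MODEL  (TIF_SYSCALL_TRACE_MODEL | TIF_SYSCALL_AUDIT_MODEL)

static void syscall_trace_model(void) { /* tracing and/or audit hooks run here */ }

static void syscall_entry_model(unsigned int thread_flags)
{
        /* either the TRACE flag or the AUDIT flag sends us down the
         * slow path that calls syscall_trace() */
        if (thread_flags & _TIF_SYSCALL_WORK_MODEL)
                syscall_trace_model();
}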
Due to platform availability limitations, this patch was only tested
on the Android platform running the modified "android-goldfish-2.6.29"
kernel. A test compile was performed using Code Sourcery's
cross-compilation toolset and the current linux-3.0 stable kernel. The
changes compile without error. I'm hoping, due to the simple modifications,
the patch is "obviously correct".
Signed-off-by: Nathaniel Husted <nhusted@gmail.com>
Signed-off-by: Eric Paris <eparis@redhat.com>
This patch fixes the lockdep warning of "unannotated irqs-off" [1].
After entering __irq_usr, the ARM core disables interrupts automatically,
but __irq_usr does not annotate the irq disable, so lockdep may emit this
warning if it gets a chance to check the flags in the irq handler.
This patch adds trace_hardirqs_off in __irq_usr before entering irq_handler
to handle the irq, and also calls ret_to_user_from_irq to avoid calling
disable_irq again.
This is also a fix for the irqsoff tracer.
[1], lockdep warning log of "unannotated irqs-off"
[ 13.804687] ------------[ cut here ]------------
[ 13.809570] WARNING: at kernel/lockdep.c:3335 check_flags+0x78/0x1d0()
[ 13.816467] Modules linked in:
[ 13.819732] Backtrace:
[ 13.822357] [<c01cb42c>] (dump_backtrace+0x0/0x100) from [<c06abb14>] (dump_stack+0x20/0x24)
[ 13.831268] r6:c07d8c2c r5:00000d07 r4:00000000 r3:00000000
[ 13.837280] [<c06abaf4>] (dump_stack+0x0/0x24) from [<c01ffc04>] (warn_slowpath_common+0x5c/0x74)
[ 13.846649] [<c01ffba8>] (warn_slowpath_common+0x0/0x74) from [<c01ffc48>] (warn_slowpath_null+0x2c/0x34)
[ 13.856781] r8:00000000 r7:00000000 r6:c18b8194 r5:60000093 r4:ef182000
[ 13.863708] r3:00000009
[ 13.866485] [<c01ffc1c>] (warn_slowpath_null+0x0/0x34) from [<c0237d84>] (check_flags+0x78/0x1d0)
[ 13.875823] [<c0237d0c>] (check_flags+0x0/0x1d0) from [<c023afc8>] (lock_acquire+0x4c/0x150)
[ 13.884704] [<c023af7c>] (lock_acquire+0x0/0x150) from [<c06af638>] (_raw_spin_lock+0x4c/0x84)
[ 13.893798] [<c06af5ec>] (_raw_spin_lock+0x0/0x84) from [<c01f9a44>] (sched_ttwu_pending+0x58/0x8c)
[ 13.903320] r6:ef92d040 r5:00000003 r4:c18b8180
[ 13.908233] [<c01f99ec>] (sched_ttwu_pending+0x0/0x8c) from [<c01f9a90>] (scheduler_ipi+0x18/0x1c)
[ 13.917663] r6:ef183fb0 r5:00000003 r4:00000000 r3:00000001
[ 13.923645] [<c01f9a78>] (scheduler_ipi+0x0/0x1c) from [<c01bc458>] (do_IPI+0x9c/0xfc)
[ 13.932006] [<c01bc3bc>] (do_IPI+0x0/0xfc) from [<c06b0888>] (__irq_usr+0x48/0xe0)
[ 13.939971] Exception stack(0xef183fb0 to 0xef183ff8)
[ 13.945281] 3fa0: ffffffc3 0001500c 00000001 0001500c
[ 13.953948] 3fc0: 00000050 400b45f0 400d9000 00000000 00000001 400d9600 6474e552 bea05b3c
[ 13.962585] 3fe0: 400d96c0 bea059c0 400b6574 400b65d8 20000010 ffffffff
[ 13.969573] r6:00000403 r5:fa240100 r4:ffffffff r3:20000010
[ 13.975585] ---[ end trace efc4896ab0fb62cb ]---
[ 13.980468] possible reason: unannotated irqs-off.
[ 13.985534] irq event stamp: 1610
[ 13.989044] hardirqs last enabled at (1610): [<c01c703c>] no_work_pending+0x8/0x2c
[ 13.997131] hardirqs last disabled at (1609): [<c01c7024>] ret_slow_syscall+0xc/0x1c
[ 14.005371] softirqs last enabled at (0): [<c01fe5e4>] copy_process+0x2cc/0xa24
[ 14.013183] softirqs last disabled at (0): [< (null)>] (null)
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If the irqsoff tracer is in use, stop tracing the interrupt disable
interval when returning to userspace. Tracing userspace execution time
as interrupts disabled time is not helpful for kernel performance
analysis purposes. Only do so if the irqsoff tracer is enabled, to
avoid overhead for lockdep, which doesn't care.
Signed-off-by: Todd Poynor <toddpoynor@google.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Tim Bird <tim.bird@am.sony.com>
[rabin@rab.in: rebase on top of latest code,
keep code in ftrace.c instead of separate file,
check for ftrace_graph_entry also]
Signed-off-by: Rabin Vincent <rabin@rab.in>
Use assembler macros to avoid copy/pasting code between the
implementations of the two variants of the mcount call.
Signed-off-by: Rabin Vincent <rabin@rab.in>
* master.kernel.org:/home/rmk/linux-2.6-arm: (28 commits)
ARM: 6411/1: vexpress: set RAM latencies to 1 cycle for PL310 on ct-ca9x4 tile
ARM: 6409/1: davinci: map sram using MT_MEMORY_NONCACHED instead of MT_DEVICE
ARM: 6408/1: omap: Map only available sram memory
ARM: 6407/1: mmu: Setup MT_MEMORY and MT_MEMORY_NONCACHED L1 entries
ARM: pxa: remove pr_<level> uses of KERN_<level>
ARM: pxa168fb: clear enable bit when not active
ARM: pxa: fix cpu_is_pxa*() not expanding to zero when not configured
ARM: pxa168: fix corrected reset vector
ARM: pxa: Use PIO for PI2C communication on Palm27x
ARM: pxa: Fix Vpac270 gpio_power for MMC
ARM: 6401/1: plug a race in the alignment trap handler
ARM: 6406/1: at91sam9g45: fix i2c bus speed
leds: leds-ns2: fix locking
ARM: dove: fix __io() definition to use bus based offset
dmaengine: fix interrupt clearing for mv_xor
ARM: kirkwood: Unbreak PCIe I/O port
ARM: Fix build error when using KCONFIG_CONFIG
ARM: 6383/1: Implement phys_mem_access_prot() to avoid attributes aliasing
ARM: 6400/1: at91: fix arch_gettimeoffset fallout
ARM: 6398/1: add proc info for ARM11MPCore/Cortex-A9 from ARM
...
If a signal hits us outside of a syscall and another gets delivered
when we are in sigreturn (e.g. because it had been in sa_mask for
the first one and got sent to us while we'd been in the first handler),
we have a chance of returning from the second handler to a location one
insn prior to where we ought to return. If r0 happens to contain -513
(-ERESTARTNOINTR), sigreturn will get confused into doing the restart
syscall song and dance.
Incredible joy to debug, since it manifests as random, infrequent and
very hard to reproduce double execution of instructions in userland
code...
The fix is simple - mark it "don't bother with restarts" in the wrappers,
i.e. set r8 to 0 in the sys_sigreturn and sys_rt_sigreturn wrappers,
suppressing the syscall restart handling on return from these guys.
They can't legitimately return a restart-worthy error anyway.
Testcase:
#include <unistd.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/time.h>
#include <errno.h>
void f(int n)
{
        __asm__ __volatile__(
                "ldr r0, [%0]\n"
                "b 1f\n"
                "b 2f\n"
                "1:b .\n"
                "2:\n" : : "r"(&n));
}

void handler1(int sig) { }
void handler2(int sig) { raise(1); }
void handler3(int sig) { exit(0); }

int main(void)
{
        struct sigaction s = {.sa_handler = handler2};
        struct itimerval t1 = { .it_value = {1} };
        struct itimerval t2 = { .it_value = {2} };

        signal(1, handler1);
        sigemptyset(&s.sa_mask);
        sigaddset(&s.sa_mask, 1);
        sigaction(SIGALRM, &s, NULL);
        signal(SIGVTALRM, handler3);
        setitimer(ITIMER_REAL, &t1, NULL);
        setitimer(ITIMER_VIRTUAL, &t2, NULL);
        f(-513); /* -ERESTARTNOINTR */
        write(1, "buggered\n", 9);
        return 1;
}
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Al Viro reports that calling "sys_sigsuspend(-ERESTARTNOHAND, 0, 0)"
with two signals coming and being handled in kernel space results
in the syscall restart being done twice.
Avoid this by clearing the 'why' flag when we call the signal handling
code to prevent further syscall restarts after the first.
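In effect (a hypothetical C rendering of the assembly change):

static void do_signal_stub(unsigned long why) { (void)why; }

static void work_pending_model(unsigned long *why)
{
        do_signal_stub(*why);   /* may set up a restart for the first signal */
        *why = 0;               /* further signals must not restart it again */
}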
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>