/*
 * Linux Magic System Request Key Hacks
 *
 * (c) 1997 Martin Mares <mj@atrey.karlin.mff.cuni.cz>
 * based on ideas by Pavel Machek <pavel@atrey.karlin.mff.cuni.cz>
 *
 * (c) 2000 Crutcher Dunnavant <crutcher+kernel@datastacks.com>
 * overhauled to use key registration
 * based upon discussions in irc://irc.openprojects.net/#kernelnewbies
 *
 * Copyright (c) 2010 Dmitry Torokhov
 * Input handler conversion
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/sched/signal.h>
#include <linux/sched/rt.h>
#include <linux/sched/debug.h>
#include <linux/sched/task.h>
#include <linux/interrupt.h>
#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/mount.h>
#include <linux/kdev_t.h>
#include <linux/major.h>
#include <linux/reboot.h>
#include <linux/sysrq.h>
#include <linux/kbd_kern.h>
#include <linux/proc_fs.h>
#include <linux/nmi.h>
#include <linux/quotaops.h>
#include <linux/perf_event.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/suspend.h>
#include <linux/writeback.h>
#include <linux/swap.h>
#include <linux/spinlock.h>
#include <linux/vt_kern.h>
#include <linux/workqueue.h>
#include <linux/hrtimer.h>
#include <linux/oom.h>
#include <linux/slab.h>
#include <linux/input.h>
#include <linux/uaccess.h>
#include <linux/moduleparam.h>
#include <linux/jiffies.h>
#include <linux/syscalls.h>
#include <linux/of.h>
#include <linux/rcupdate.h>

#include <asm/ptrace.h>
#include <asm/irq_regs.h>

/* Whether we react on sysrq keys or just ignore them */
static int __read_mostly sysrq_enabled = CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE;
static bool __read_mostly sysrq_always_enabled;

static bool sysrq_on(void)
{
	return sysrq_enabled || sysrq_always_enabled;
}

/*
 * A value of 1 means 'all', other nonzero values are an op mask:
 */
static bool sysrq_on_mask(int mask)
{
	return sysrq_always_enabled ||
	       sysrq_enabled == 1 ||
	       (sysrq_enabled & mask);
}

static int __init sysrq_always_enabled_setup(char *str)
{
	sysrq_always_enabled = true;
	pr_info("sysrq always enabled.\n");

	return 1;
}

__setup("sysrq_always_enabled", sysrq_always_enabled_setup);

static void sysrq_handle_loglevel(int key)
{
	int i;

	i = key - '0';
	console_loglevel = CONSOLE_LOGLEVEL_DEFAULT;
	pr_info("Loglevel set to %d\n", i);
	console_loglevel = i;
}
static struct sysrq_key_op sysrq_loglevel_op = {
	.handler	= sysrq_handle_loglevel,
	.help_msg	= "loglevel(0-9)",
	.action_msg	= "Changing Loglevel",
	.enable_mask	= SYSRQ_ENABLE_LOG,
};

#ifdef CONFIG_VT
static void sysrq_handle_SAK(int key)
{
	struct work_struct *SAK_work = &vc_cons[fg_console].SAK_work;

	schedule_work(SAK_work);
}
static struct sysrq_key_op sysrq_SAK_op = {
	.handler	= sysrq_handle_SAK,
	.help_msg	= "sak(k)",
	.action_msg	= "SAK",
	.enable_mask	= SYSRQ_ENABLE_KEYBOARD,
};
#else
#define sysrq_SAK_op (*(struct sysrq_key_op *)NULL)
#endif

#ifdef CONFIG_VT
static void sysrq_handle_unraw(int key)
{
	vt_reset_unicode(fg_console);
}

static struct sysrq_key_op sysrq_unraw_op = {
	.handler	= sysrq_handle_unraw,
	.help_msg	= "unraw(r)",
	.action_msg	= "Keyboard mode set to system default",
	.enable_mask	= SYSRQ_ENABLE_KEYBOARD,
};
#else
#define sysrq_unraw_op (*(struct sysrq_key_op *)NULL)
#endif /* CONFIG_VT */

static void sysrq_handle_crash(int key)
{
	char *killer = NULL;

	/* we need to release the RCU read lock here,
	 * otherwise we get an annoying
	 * 'BUG: sleeping function called from invalid context'
	 * complaint from the kernel before the panic.
	 */
	rcu_read_unlock();
	panic_on_oops = 1;	/* force panic */
	wmb();
	*killer = 1;
}
static struct sysrq_key_op sysrq_crash_op = {
	.handler	= sysrq_handle_crash,
	.help_msg	= "crash(c)",
	.action_msg	= "Trigger a crash",
	.enable_mask	= SYSRQ_ENABLE_DUMP,
};

static void sysrq_handle_reboot(int key)
{
	lockdep_off();
	local_irq_enable();
	emergency_restart();
}

static struct sysrq_key_op sysrq_reboot_op = {
	.handler	= sysrq_handle_reboot,
	.help_msg	= "reboot(b)",
	.action_msg	= "Resetting",
	.enable_mask	= SYSRQ_ENABLE_BOOT,
};

static void sysrq_handle_sync(int key)
{
	emergency_sync();
}

static struct sysrq_key_op sysrq_sync_op = {
	.handler	= sysrq_handle_sync,
	.help_msg	= "sync(s)",
	.action_msg	= "Emergency Sync",
	.enable_mask	= SYSRQ_ENABLE_SYNC,
};

static void sysrq_handle_show_timers(int key)
{
	sysrq_timer_list_show();
}

static struct sysrq_key_op sysrq_show_timers_op = {
	.handler	= sysrq_handle_show_timers,
	.help_msg	= "show-all-timers(q)",
	.action_msg	= "Show clockevent devices & pending hrtimers (no others)",
};

static void sysrq_handle_mountro(int key)
{
	emergency_remount();
}

static struct sysrq_key_op sysrq_mountro_op = {
	.handler	= sysrq_handle_mountro,
	.help_msg	= "unmount(u)",
	.action_msg	= "Emergency Remount R/O",
	.enable_mask	= SYSRQ_ENABLE_REMOUNT,
};

#ifdef CONFIG_LOCKDEP
static void sysrq_handle_showlocks(int key)
{
	debug_show_all_locks();
}

static struct sysrq_key_op sysrq_showlocks_op = {
	.handler	= sysrq_handle_showlocks,
	.help_msg	= "show-all-locks(d)",
	.action_msg	= "Show Locks Held",
};
#else
#define sysrq_showlocks_op (*(struct sysrq_key_op *)NULL)
#endif

#ifdef CONFIG_SMP
static DEFINE_SPINLOCK(show_lock);

static void showacpu(void *dummy)
{
	unsigned long flags;

	/* Idle CPUs have no interesting backtrace. */
	if (idle_cpu(smp_processor_id()))
		return;

	spin_lock_irqsave(&show_lock, flags);
	pr_info("CPU%d:\n", smp_processor_id());
	show_stack(NULL, NULL);
	spin_unlock_irqrestore(&show_lock, flags);
}

static void sysrq_showregs_othercpus(struct work_struct *dummy)
{
	smp_call_function(showacpu, NULL, 0);
}

static DECLARE_WORK(sysrq_showallcpus, sysrq_showregs_othercpus);

static void sysrq_handle_showallcpus(int key)
{
	/*
	 * Fall back to the workqueue based printing if the
	 * backtrace printing did not succeed or the
	 * architecture has no support for it:
	 */
	if (!trigger_all_cpu_backtrace()) {
		struct pt_regs *regs = get_irq_regs();

		if (regs) {
			pr_info("CPU%d:\n", smp_processor_id());
			show_regs(regs);
		}
		schedule_work(&sysrq_showallcpus);
	}
}

static struct sysrq_key_op sysrq_showallcpus_op = {
	.handler	= sysrq_handle_showallcpus,
	.help_msg	= "show-backtrace-all-active-cpus(l)",
	.action_msg	= "Show backtrace of all active CPUs",
	.enable_mask	= SYSRQ_ENABLE_DUMP,
};
#endif

static void sysrq_handle_showregs(int key)
{
	struct pt_regs *regs = get_irq_regs();

	if (regs)
		show_regs(regs);
	perf_event_print_debug();
}

static struct sysrq_key_op sysrq_showregs_op = {
	.handler	= sysrq_handle_showregs,
	.help_msg	= "show-registers(p)",
	.action_msg	= "Show Regs",
	.enable_mask	= SYSRQ_ENABLE_DUMP,
};

static void sysrq_handle_showstate(int key)
{
	show_state();
	show_workqueue_state();
}

static struct sysrq_key_op sysrq_showstate_op = {
	.handler	= sysrq_handle_showstate,
	.help_msg	= "show-task-states(t)",
	.action_msg	= "Show State",
	.enable_mask	= SYSRQ_ENABLE_DUMP,
};

static void sysrq_handle_showstate_blocked(int key)
{
	show_state_filter(TASK_UNINTERRUPTIBLE);
}

static struct sysrq_key_op sysrq_showstate_blocked_op = {
	.handler	= sysrq_handle_showstate_blocked,
	.help_msg	= "show-blocked-tasks(w)",
	.action_msg	= "Show Blocked State",
	.enable_mask	= SYSRQ_ENABLE_DUMP,
};

#ifdef CONFIG_TRACING
#include <linux/ftrace.h>

static void sysrq_ftrace_dump(int key)
{
	ftrace_dump(DUMP_ALL);
}

static struct sysrq_key_op sysrq_ftrace_dump_op = {
	.handler	= sysrq_ftrace_dump,
	.help_msg	= "dump-ftrace-buffer(z)",
	.action_msg	= "Dump ftrace buffer",
	.enable_mask	= SYSRQ_ENABLE_DUMP,
};
#else
#define sysrq_ftrace_dump_op (*(struct sysrq_key_op *)NULL)
#endif

static void sysrq_handle_showmem(int key)
{
	show_mem(0, NULL);
}

static struct sysrq_key_op sysrq_showmem_op = {
	.handler	= sysrq_handle_showmem,
	.help_msg	= "show-memory-usage(m)",
	.action_msg	= "Show Memory",
	.enable_mask	= SYSRQ_ENABLE_DUMP,
};

/*
 * Signal sysrq helper function.  Sends a signal to all user processes.
 */
static void send_sig_all(int sig)
{
	struct task_struct *p;

	read_lock(&tasklist_lock);
	for_each_process(p) {
		if (p->flags & PF_KTHREAD)
			continue;
		if (is_global_init(p))
			continue;

		do_send_sig_info(sig, SEND_SIG_FORCED, p, true);
	}
	read_unlock(&tasklist_lock);
}

static void sysrq_handle_term(int key)
{
	send_sig_all(SIGTERM);
	console_loglevel = CONSOLE_LOGLEVEL_DEBUG;
}

static struct sysrq_key_op sysrq_term_op = {
	.handler	= sysrq_handle_term,
	.help_msg	= "terminate-all-tasks(e)",
	.action_msg	= "Terminate All Tasks",
	.enable_mask	= SYSRQ_ENABLE_SIGNAL,
};

static void moom_callback(struct work_struct *ignored)
{
	const gfp_t gfp_mask = GFP_KERNEL;
	struct oom_control oc = {
		.zonelist = node_zonelist(first_memory_node, gfp_mask),
		.nodemask = NULL,
		.memcg = NULL,
		.gfp_mask = gfp_mask,
		.order = -1,
	};

	mutex_lock(&oom_lock);
	if (!out_of_memory(&oc))
		pr_info("OOM request ignored because killer is disabled\n");
	mutex_unlock(&oom_lock);
}

static DECLARE_WORK(moom_work, moom_callback);

static void sysrq_handle_moom(int key)
{
	schedule_work(&moom_work);
}

static struct sysrq_key_op sysrq_moom_op = {
	.handler	= sysrq_handle_moom,
	.help_msg	= "memory-full-oom-kill(f)",
	.action_msg	= "Manual OOM execution",
	.enable_mask	= SYSRQ_ENABLE_SIGNAL,
};

#ifdef CONFIG_BLOCK
static void sysrq_handle_thaw(int key)
{
	emergency_thaw_all();
}

static struct sysrq_key_op sysrq_thaw_op = {
	.handler	= sysrq_handle_thaw,
	.help_msg	= "thaw-filesystems(j)",
	.action_msg	= "Emergency Thaw of all frozen filesystems",
	.enable_mask	= SYSRQ_ENABLE_SIGNAL,
};
#endif

static void sysrq_handle_kill(int key)
{
	send_sig_all(SIGKILL);
	console_loglevel = CONSOLE_LOGLEVEL_DEBUG;
}

static struct sysrq_key_op sysrq_kill_op = {
	.handler	= sysrq_handle_kill,
	.help_msg	= "kill-all-tasks(i)",
	.action_msg	= "Kill All Tasks",
	.enable_mask	= SYSRQ_ENABLE_SIGNAL,
};

static void sysrq_handle_unrt(int key)
{
	normalize_rt_tasks();
}

static struct sysrq_key_op sysrq_unrt_op = {
	.handler	= sysrq_handle_unrt,
	.help_msg	= "nice-all-RT-tasks(n)",
	.action_msg	= "Nice All RT Tasks",
	.enable_mask	= SYSRQ_ENABLE_RTNICE,
};

/* Key Operations table and lock */
static DEFINE_SPINLOCK(sysrq_key_table_lock);

static struct sysrq_key_op *sysrq_key_table[36] = {
	&sysrq_loglevel_op,		/* 0 */
	&sysrq_loglevel_op,		/* 1 */
	&sysrq_loglevel_op,		/* 2 */
	&sysrq_loglevel_op,		/* 3 */
	&sysrq_loglevel_op,		/* 4 */
	&sysrq_loglevel_op,		/* 5 */
	&sysrq_loglevel_op,		/* 6 */
	&sysrq_loglevel_op,		/* 7 */
	&sysrq_loglevel_op,		/* 8 */
	&sysrq_loglevel_op,		/* 9 */

	/*
	 * a: Don't use for system provided sysrqs, it is handled specially on
	 * sparc and will never arrive.
	 */
	NULL,				/* a */
	&sysrq_reboot_op,		/* b */
	&sysrq_crash_op,		/* c */
	&sysrq_showlocks_op,		/* d */
	&sysrq_term_op,			/* e */
	&sysrq_moom_op,			/* f */
	/* g: May be registered for the kernel debugger */
	NULL,				/* g */
	NULL,				/* h - reserved for help */
	&sysrq_kill_op,			/* i */
#ifdef CONFIG_BLOCK
	&sysrq_thaw_op,			/* j */
#else
	NULL,				/* j */
#endif
	&sysrq_SAK_op,			/* k */
#ifdef CONFIG_SMP
	&sysrq_showallcpus_op,		/* l */
#else
	NULL,				/* l */
#endif
	&sysrq_showmem_op,		/* m */
	&sysrq_unrt_op,			/* n */
	/* o: This will often be registered as 'Off' at init time */
	NULL,				/* o */
	&sysrq_showregs_op,		/* p */
	&sysrq_show_timers_op,		/* q */
	&sysrq_unraw_op,		/* r */
	&sysrq_sync_op,			/* s */
	&sysrq_showstate_op,		/* t */
	&sysrq_mountro_op,		/* u */
	/* v: May be registered for frame buffer console restore */
	NULL,				/* v */
	&sysrq_showstate_blocked_op,	/* w */
	/* x: May be registered on mips for TLB dump */
	/* x: May be registered on ppc/powerpc for xmon */
	/* x: May be registered on sparc64 for global PMU dump */
	NULL,				/* x */
	/* y: May be registered on sparc64 for global register dump */
	NULL,				/* y */
	&sysrq_ftrace_dump_op,		/* z */
};

/* key2index calculation, -1 on invalid index */
static int sysrq_key_table_key2index(int key)
{
	int retval;

	if ((key >= '0') && (key <= '9'))
		retval = key - '0';
	else if ((key >= 'a') && (key <= 'z'))
		retval = key + 10 - 'a';
	else
		retval = -1;
	return retval;
}

/*
 * get and put functions for the table, exposed to modules.
 */
struct sysrq_key_op *__sysrq_get_key_op(int key)
{
	struct sysrq_key_op *op_p = NULL;
	int i;

	i = sysrq_key_table_key2index(key);
	if (i != -1)
		op_p = sysrq_key_table[i];

	return op_p;
}

static void __sysrq_put_key_op(int key, struct sysrq_key_op *op_p)
{
	int i = sysrq_key_table_key2index(key);

	if (i != -1)
		sysrq_key_table[i] = op_p;
}

void __handle_sysrq(int key, bool check_mask)
{
	struct sysrq_key_op *op_p;
	int orig_log_level;
	int i;

	rcu_sysrq_start();
	rcu_read_lock();
	/*
	 * Raise the apparent loglevel to maximum so that the sysrq header
	 * is shown to provide the user with positive feedback.  We do not
	 * simply emit this at KERN_EMERG as that would change message
	 * routing in the consumers of /proc/kmsg.
	 */
	orig_log_level = console_loglevel;
	console_loglevel = CONSOLE_LOGLEVEL_DEFAULT;
	pr_info("SysRq : ");

	op_p = __sysrq_get_key_op(key);
	if (op_p) {
		/*
		 * Should we check for enabled operations (/proc/sysrq-trigger
		 * should not) and is the invoked operation enabled?
		 */
		if (!check_mask || sysrq_on_mask(op_p->enable_mask)) {
			pr_cont("%s\n", op_p->action_msg);
			console_loglevel = orig_log_level;
			op_p->handler(key);
		} else {
			pr_cont("This sysrq operation is disabled.\n");
		}
	} else {
		pr_cont("HELP : ");
		/* Only print the help msg once per handler */
		for (i = 0; i < ARRAY_SIZE(sysrq_key_table); i++) {
			if (sysrq_key_table[i]) {
				int j;

				for (j = 0; sysrq_key_table[i] !=
						sysrq_key_table[j]; j++)
					;
				if (j != i)
					continue;
				pr_cont("%s ", sysrq_key_table[i]->help_msg);
			}
		}
		pr_cont("\n");
		console_loglevel = orig_log_level;
	}
	rcu_read_unlock();
	rcu_sysrq_end();
}

void handle_sysrq(int key)
{
	if (sysrq_on())
		__handle_sysrq(key, true);
}
EXPORT_SYMBOL(handle_sysrq);

#ifdef CONFIG_INPUT
static int sysrq_reset_downtime_ms;

/* Simple translation table for the SysRq keys */
static const unsigned char sysrq_xlate[KEY_CNT] =
	"\000\0331234567890-=\177\t"			/* 0x00 - 0x0f */
	"qwertyuiop[]\r\000as"				/* 0x10 - 0x1f */
	"dfghjkl;'`\000\\zxcv"				/* 0x20 - 0x2f */
	"bnm,./\000*\000 \000\201\202\203\204\205"	/* 0x30 - 0x3f */
	"\206\207\210\211\212\000\000789-456+1"		/* 0x40 - 0x4f */
	"230\177\000\000\213\214\000\000\000\000\000\000\000\000\000\000" /* 0x50 - 0x5f */
	"\r\000/";					/* 0x60 - 0x6f */

struct sysrq_state {
	struct input_handle handle;
	struct work_struct reinject_work;
	unsigned long key_down[BITS_TO_LONGS(KEY_CNT)];
	unsigned int alt;
	unsigned int alt_use;
	bool active;
	bool need_reinject;
	bool reinjecting;

	/* reset sequence handling */
	bool reset_canceled;
	bool reset_requested;
	unsigned long reset_keybit[BITS_TO_LONGS(KEY_CNT)];
	int reset_seq_len;
	int reset_seq_cnt;
	int reset_seq_version;
	struct timer_list keyreset_timer;
};

#define SYSRQ_KEY_RESET_MAX	20 /* Should be plenty */
static unsigned short sysrq_reset_seq[SYSRQ_KEY_RESET_MAX];
static unsigned int sysrq_reset_seq_len;
static unsigned int sysrq_reset_seq_version = 1;

static void sysrq_parse_reset_sequence(struct sysrq_state *state)
{
	int i;
	unsigned short key;

	state->reset_seq_cnt = 0;

	for (i = 0; i < sysrq_reset_seq_len; i++) {
		key = sysrq_reset_seq[i];

		if (key == KEY_RESERVED || key > KEY_MAX)
			break;

		__set_bit(key, state->reset_keybit);
		state->reset_seq_len++;

		if (test_bit(key, state->key_down))
			state->reset_seq_cnt++;
	}

	/* Disable reset until the old keys are released */
	state->reset_canceled = state->reset_seq_cnt != 0;

	state->reset_seq_version = sysrq_reset_seq_version;
}

static void sysrq_do_reset(unsigned long _state)
{
	struct sysrq_state *state = (struct sysrq_state *) _state;

	state->reset_requested = true;

	sys_sync();
	kernel_restart(NULL);
}

static void sysrq_handle_reset_request(struct sysrq_state *state)
{
	if (state->reset_requested)
		__handle_sysrq(sysrq_xlate[KEY_B], false);

	if (sysrq_reset_downtime_ms)
		mod_timer(&state->keyreset_timer,
			jiffies + msecs_to_jiffies(sysrq_reset_downtime_ms));
	else
		sysrq_do_reset((unsigned long)state);
}

static void sysrq_detect_reset_sequence(struct sysrq_state *state,
					unsigned int code, int value)
{
	if (!test_bit(code, state->reset_keybit)) {
		/*
		 * Pressing any key _not_ in the reset sequence cancels
		 * the reset sequence.  Also cancel the timer in case
		 * additional keys were pressed after a reset has been
		 * requested.
		 */
		if (value && state->reset_seq_cnt) {
			state->reset_canceled = true;
			del_timer(&state->keyreset_timer);
		}
	} else if (value == 0) {
		/*
		 * Key release - all keys in the reset sequence need
		 * to be pressed and held for the reset timeout
		 * to hold.
		 */
		del_timer(&state->keyreset_timer);

		if (--state->reset_seq_cnt == 0)
			state->reset_canceled = false;
	} else if (value == 1) {
		/* key press, not autorepeat */
		if (++state->reset_seq_cnt == state->reset_seq_len &&
		    !state->reset_canceled) {
			sysrq_handle_reset_request(state);
		}
	}
}

#ifdef CONFIG_OF
static void sysrq_of_get_keyreset_config(void)
{
	u32 key;
	struct device_node *np;
	struct property *prop;
	const __be32 *p;

	np = of_find_node_by_path("/chosen/linux,sysrq-reset-seq");
	if (!np) {
		pr_debug("No sysrq node found");
		return;
	}

	/* Reset in case a __weak definition was present */
	sysrq_reset_seq_len = 0;

	of_property_for_each_u32(np, "keyset", prop, p, key) {
		if (key == KEY_RESERVED || key > KEY_MAX ||
		    sysrq_reset_seq_len == SYSRQ_KEY_RESET_MAX)
			break;

		sysrq_reset_seq[sysrq_reset_seq_len++] = (unsigned short)key;
	}

	/* Get reset timeout if any. */
	of_property_read_u32(np, "timeout-ms", &sysrq_reset_downtime_ms);
}
#else
static void sysrq_of_get_keyreset_config(void)
{
}
#endif

static void sysrq_reinject_alt_sysrq(struct work_struct *work)
{
	struct sysrq_state *sysrq =
			container_of(work, struct sysrq_state, reinject_work);
	struct input_handle *handle = &sysrq->handle;
	unsigned int alt_code = sysrq->alt_use;

	if (sysrq->need_reinject) {
		/* we do not want the assignment to be reordered */
		sysrq->reinjecting = true;
		mb();

		/* Simulate press and release of Alt + SysRq */
		input_inject_event(handle, EV_KEY, alt_code, 1);
		input_inject_event(handle, EV_KEY, KEY_SYSRQ, 1);
		input_inject_event(handle, EV_SYN, SYN_REPORT, 1);

		input_inject_event(handle, EV_KEY, KEY_SYSRQ, 0);
		input_inject_event(handle, EV_KEY, alt_code, 0);
		input_inject_event(handle, EV_SYN, SYN_REPORT, 1);

		mb();
		sysrq->reinjecting = false;
	}
}

static bool sysrq_handle_keypress(struct sysrq_state *sysrq,
				  unsigned int code, int value)
{
	bool was_active = sysrq->active;
	bool suppress;

	switch (code) {

	case KEY_LEFTALT:
	case KEY_RIGHTALT:
		if (!value) {
			/* One of ALTs is being released */
			if (sysrq->active && code == sysrq->alt_use)
				sysrq->active = false;

			sysrq->alt = KEY_RESERVED;

		} else if (value != 2) {
			sysrq->alt = code;
			sysrq->need_reinject = false;
		}
		break;

	case KEY_SYSRQ:
		if (value == 1 && sysrq->alt != KEY_RESERVED) {
			sysrq->active = true;
			sysrq->alt_use = sysrq->alt;
			/*
			 * If nothing else will be pressed we'll need
			 * to re-inject the Alt-SysRq keystroke.
			 */
			sysrq->need_reinject = true;
		}

		/*
		 * Pretend that sysrq was never pressed at all. This
		 * is needed to properly handle KGDB which will try
		 * to release all keys after exiting the debugger. If
		 * we do not clear the key bit, KGDB will end up
		 * sending release events for Alt and SysRq,
		 * potentially triggering the print screen function.
		 */
		if (sysrq->active)
			clear_bit(KEY_SYSRQ, sysrq->handle.dev->key);

		break;

	default:
		if (sysrq->active && value && value != 2) {
			sysrq->need_reinject = false;
			__handle_sysrq(sysrq_xlate[code], true);
		}
		break;
	}

	suppress = sysrq->active;

	if (!sysrq->active) {

		/*
		 * See if the reset sequence has changed since the last time.
		 */
		if (sysrq->reset_seq_version != sysrq_reset_seq_version)
			sysrq_parse_reset_sequence(sysrq);

		/*
		 * If we are not suppressing key presses keep track of
		 * keyboard state so we can release keys that have been
		 * pressed before entering SysRq mode.
		 */
		if (value)
			set_bit(code, sysrq->key_down);
		else
			clear_bit(code, sysrq->key_down);

		if (was_active)
			schedule_work(&sysrq->reinject_work);

		/* Check for reset sequence */
		sysrq_detect_reset_sequence(sysrq, code, value);

	} else if (value == 0 && test_and_clear_bit(code, sysrq->key_down)) {
		/*
		 * Pass on release events for keys that were pressed before
		 * entering SysRq mode.
		 */
		suppress = false;
	}

	return suppress;
}

static bool sysrq_filter(struct input_handle *handle,
			 unsigned int type, unsigned int code, int value)
{
	struct sysrq_state *sysrq = handle->private;
	bool suppress;

	/*
	 * Do not filter anything if we are in the process of re-injecting
	 * the Alt+SysRq combination.
	 */
	if (sysrq->reinjecting)
		return false;

	switch (type) {

	case EV_SYN:
		suppress = false;
		break;

	case EV_KEY:
		suppress = sysrq_handle_keypress(sysrq, code, value);
		break;

	default:
		suppress = sysrq->active;
		break;
	}

	return suppress;
}

static int sysrq_connect(struct input_handler *handler,
			 struct input_dev *dev,
			 const struct input_device_id *id)
{
	struct sysrq_state *sysrq;
	int error;

	sysrq = kzalloc(sizeof(struct sysrq_state), GFP_KERNEL);
	if (!sysrq)
		return -ENOMEM;

	INIT_WORK(&sysrq->reinject_work, sysrq_reinject_alt_sysrq);

	sysrq->handle.dev = dev;
	sysrq->handle.handler = handler;
	sysrq->handle.name = "sysrq";
	sysrq->handle.private = sysrq;
	setup_timer(&sysrq->keyreset_timer,
		    sysrq_do_reset, (unsigned long)sysrq);

	error = input_register_handle(&sysrq->handle);
	if (error) {
		pr_err("Failed to register input sysrq handler, error %d\n",
			error);
		goto err_free;
	}

	error = input_open_device(&sysrq->handle);
	if (error) {
		pr_err("Failed to open input device, error %d\n", error);
		goto err_unregister;
	}

	return 0;

 err_unregister:
	input_unregister_handle(&sysrq->handle);
 err_free:
	kfree(sysrq);
	return error;
}

static void sysrq_disconnect(struct input_handle *handle)
{
	struct sysrq_state *sysrq = handle->private;

	input_close_device(handle);
	cancel_work_sync(&sysrq->reinject_work);
	del_timer_sync(&sysrq->keyreset_timer);
	input_unregister_handle(handle);
	kfree(sysrq);
}

/*
 * We are matching on KEY_LEFTALT instead of KEY_SYSRQ because not all
 * keyboards have the SysRq key predefined and so the user may add it to
 * the keymap later, but we expect all such keyboards to have left alt.
 */
static const struct input_device_id sysrq_ids[] = {
	{
		.flags = INPUT_DEVICE_ID_MATCH_EVBIT |
				INPUT_DEVICE_ID_MATCH_KEYBIT,
		.evbit = { [BIT_WORD(EV_KEY)] = BIT_MASK(EV_KEY) },
		.keybit = { [BIT_WORD(KEY_LEFTALT)] = BIT_MASK(KEY_LEFTALT) },
	},
	{ },
};

static struct input_handler sysrq_handler = {
	.filter		= sysrq_filter,
	.connect	= sysrq_connect,
	.disconnect	= sysrq_disconnect,
	.name		= "sysrq",
	.id_table	= sysrq_ids,
};

static bool sysrq_handler_registered;

static inline void sysrq_register_handler(void)
{
	int error;

	sysrq_of_get_keyreset_config();

	error = input_register_handler(&sysrq_handler);
	if (error)
		pr_err("Failed to register input handler, error %d", error);
	else
		sysrq_handler_registered = true;
}

static inline void sysrq_unregister_handler(void)
{
	if (sysrq_handler_registered) {
		input_unregister_handler(&sysrq_handler);
		sysrq_handler_registered = false;
	}
}

static int sysrq_reset_seq_param_set(const char *buffer,
				     const struct kernel_param *kp)
{
	unsigned long val;
	int error;

	error = kstrtoul(buffer, 0, &val);
	if (error < 0)
		return error;

	if (val > KEY_MAX)
		return -EINVAL;

	*((unsigned short *)kp->arg) = val;
	sysrq_reset_seq_version++;

	return 0;
}

static const struct kernel_param_ops param_ops_sysrq_reset_seq = {
	.get	= param_get_ushort,
	.set	= sysrq_reset_seq_param_set,
};

#define param_check_sysrq_reset_seq(name, p)	\
	__param_check(name, p, unsigned short)

/*
 * not really modular, but the easiest way to keep compat with existing
 * bootargs behaviour is to continue using module_param here.
 */
module_param_array_named(reset_seq, sysrq_reset_seq, sysrq_reset_seq,
			 &sysrq_reset_seq_len, 0644);

module_param_named(sysrq_downtime_ms, sysrq_reset_downtime_ms, int, 0644);

#else

static inline void sysrq_register_handler(void)
{
}

static inline void sysrq_unregister_handler(void)
{
}

#endif /* CONFIG_INPUT */

int sysrq_toggle_support(int enable_mask)
{
	bool was_enabled = sysrq_on();

	sysrq_enabled = enable_mask;

	if (was_enabled != sysrq_on()) {
		if (sysrq_on())
			sysrq_register_handler();
		else
			sysrq_unregister_handler();
	}

	return 0;
}

static int __sysrq_swap_key_ops(int key, struct sysrq_key_op *insert_op_p,
				struct sysrq_key_op *remove_op_p)
{
	int retval;

	spin_lock(&sysrq_key_table_lock);
	if (__sysrq_get_key_op(key) == remove_op_p) {
		__sysrq_put_key_op(key, insert_op_p);
		retval = 0;
	} else {
		retval = -1;
	}
	spin_unlock(&sysrq_key_table_lock);

	/*
	 * A concurrent __handle_sysrq either got the old op or the new op.
	 * Wait for it to go away before returning, so the code for an old
	 * op is not freed (eg. on module unload) while it is in use.
	 */
	synchronize_rcu();

	return retval;
}

int register_sysrq_key(int key, struct sysrq_key_op *op_p)
{
	return __sysrq_swap_key_ops(key, op_p, NULL);
}
EXPORT_SYMBOL(register_sysrq_key);

int unregister_sysrq_key(int key, struct sysrq_key_op *op_p)
{
	return __sysrq_swap_key_ops(key, NULL, op_p);
}
EXPORT_SYMBOL(unregister_sysrq_key);

#ifdef CONFIG_PROC_FS
/*
 * writing 'C' to /proc/sysrq-trigger is like sysrq-C
 */
static ssize_t write_sysrq_trigger(struct file *file, const char __user *buf,
				   size_t count, loff_t *ppos)
{
	if (count) {
		char c;

		if (get_user(c, buf))
			return -EFAULT;
		__handle_sysrq(c, false);
	}

	return count;
}

static const struct file_operations proc_sysrq_trigger_operations = {
	.write		= write_sysrq_trigger,
	.llseek		= noop_llseek,
};

static void sysrq_init_procfs(void)
{
	if (!proc_create("sysrq-trigger", S_IWUSR, NULL,
			 &proc_sysrq_trigger_operations))
		pr_err("Failed to register proc interface\n");
}

#else

static inline void sysrq_init_procfs(void)
{
}

#endif /* CONFIG_PROC_FS */

static int __init sysrq_init(void)
{
	sysrq_init_procfs();

	if (sysrq_on())
		sysrq_register_handler();

	return 0;
}
device_initcall(sysrq_init);