/*
 * Infrastructure for profiling code inserted by 'gcc -pg'.
 *
 * Copyright (C) 2007-2008 Steven Rostedt <srostedt@redhat.com>
 * Copyright (C) 2004-2008 Ingo Molnar <mingo@redhat.com>
 *
 * Originally ported from the -rt patch by:
 *   Copyright (C) 2007 Arnaldo Carvalho de Melo <acme@redhat.com>
 *
 * Based on code in the latency_tracer, that is:
 *
 * Copyright (C) 2004-2006 Ingo Molnar
 * Copyright (C) 2004 William Lee Irwin III
 */

ftrace: dynamic enabling/disabling of function calls

This patch adds a feature to dynamically replace the ftrace calls with jmps,
allowing a kernel with ftrace configured to run as fast as it would without it.

The way this works: on boot-up (if ftrace is enabled), an ftrace function is
registered to record the instruction pointer of all places that call the
function. Later, if there is still code to patch, a kthread is awakened (rate
limited to at most once a second) that performs a stop_machine and replaces
all the code that was called with a jmp over the call to ftrace. It only
replaces what was found the previous time. Typically the system reaches
equilibrium quickly after boot-up and no further code patching is needed.

e.g.

  call ftrace  /* 5 bytes */

is replaced with

  jmp 3f       /* jmp is 2 bytes and we jump 3 forward */
3:

When we want to enable ftrace for function tracing, the IP recording is
removed and stop_machine is called again to replace all the recorded
locations with the call to ftrace. When tracing is disabled, the code is
patched back to the jmp.

Allocation is done by the kthread. If the ftrace recording function is called
and no record slots are available, that call is simply skipped. Once a second
a new page (if needed) is allocated for recording new ftrace function calls.
A large batch is allocated at boot-up to cover most of the calls.

Because we do this via stop_machine, we don't have to worry about another CPU
executing an ftrace call as we modify it. But we do need to worry about NMIs,
so all functions that might be called from an NMI must be annotated with
notrace_nmi. When this code is configured in, the NMI code will not call
notrace.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
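
To make the call-site patching concrete, here is a minimal user-space sketch of
the byte-level view described above (illustrative only; x86 is assumed, the
kernel patches live kernel text under stop_machine rather than building arrays,
and the three leftover bytes are shown as NOPs purely because they are never
executed):

	#include <stdio.h>

	/* A 5-byte mcount call site: opcode 0xe8 plus a 32-bit relative offset. */
	static const unsigned char call_ftrace[5] = { 0xe8, 0x00, 0x00, 0x00, 0x00 };

	/*
	 * The replacement: a 2-byte short jump (0xeb, displacement 3) that hops
	 * over the remaining 3 bytes of the old call, keeping the site 5 bytes.
	 */
	static const unsigned char jmp_3f[5] = { 0xeb, 0x03, 0x90, 0x90, 0x90 };

	int main(void)
	{
		printf("call site:    %zu bytes (opcode 0x%02x)\n",
		       sizeof(call_ftrace), (unsigned)call_ftrace[0]);
		printf("patched site: %zu bytes, short jmp skips %d bytes\n",
		       sizeof(jmp_3f), jmp_3f[1]);
		return 0;
	}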

#include <linux/stop_machine.h>
#include <linux/clocksource.h>
#include <linux/kallsyms.h>
#include <linux/seq_file.h>
#include <linux/suspend.h>
#include <linux/debugfs.h>
#include <linux/hardirq.h>
#include <linux/kthread.h>
#include <linux/uaccess.h>
#include <linux/ftrace.h>
#include <linux/sysctl.h>

include cleanup: Update gfp.h and slab.h includes to prepare for breaking
implicit slab.h inclusion from percpu.h

percpu.h is included by sched.h and module.h and thus ends up being included
when building most .c files. percpu.h includes slab.h which in turn includes
gfp.h, making everything defined by the two files universally available and
complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this
change by updating users of gfp and slab facilities to include those headers
directly instead of assuming availability. As this conversion needs to touch
a large number of source files, the following script was used as the basis of
the conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that only the
  necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is
  used, slab.h.

* When the script inserts a new include, it looks at the include blocks and
  tries to put the new include such that its order conforms to its
  surroundings. It is put in the include block which contains core kernel
  includes, in the same order that the rest are ordered - alphabetical,
  Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be
  any matching order.

* If the script can't find a place to put a new include (mostly because the
  file doesn't have a fitting include block), it prints out an error message
  indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over
   4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000
   slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some
   needed manual addition, while adding it to the implementation .h or
   embedding .c file was more appropriate for others. This step added
   inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2
   to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed, e.g.
   lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring
   slab.h to be added manually.

5. The script was run on all .h files, but without automatically editing
   them, as sprinkling gfp.h and slab.h inclusions around .h files could
   easily lead to inclusion dependency hell. Most gfp.h inclusion directives
   were ignored as stuff from gfp.h was usually widely available and often
   used in preprocessor macros. Each slab.h inclusion directive was examined
   and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were
   fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed
   build env didn't work with gcov compiles) and a few more options had to be
   turned off depending on archs to make things build (like ipr on powerpc/64,
   which failed due to a missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as a
   separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from tests on step 6, I'm
fairly confident about the coverage of this conversion patch. If there is a
breakage, it's likely to be something in one of the arch headers which should
be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

#include <linux/slab.h>
#include <linux/ctype.h>
#include <linux/list.h>

ftrace: trace different functions with a different tracer

Impact: new feature

Currently, the function tracer only gives you the ability to hook a tracer to
all functions being traced. The dynamic function trace allows you to pick and
choose which of those functions will be traced, but all functions being
traced will call all tracers that registered with the function tracer.

This patch adds a new feature that allows a tracer to hook to specific
functions, even when all functions are being traced. It allows for different
functions to call different tracer hooks.

The way this is accomplished is by a special function that will hook into the
function tracer and will set up a hash table knowing which tracer hook to
call with which function. This is the most general and easiest method to
accomplish this. Later, an arch may choose to supply its own method of
changing the mcount call of a function to call a different tracer. But that
will be an exercise for the future.

To register a function:

	struct ftrace_hook_ops {
		void	(*func)(unsigned long ip,
				unsigned long parent_ip,
				void **data);
		int	(*callback)(unsigned long ip, void **data);
		void	(*free)(void **data);
	};

	int register_ftrace_function_hook(char *glob, struct ftrace_hook_ops *ops,
					  void *data);

glob is a simple glob to search for the functions to hook.
ops is a pointer to the operations (listed below).
data is the default data to be passed to the hook functions when traced.

ops:

 func is the hook function to call when the functions are traced.
 callback is a callback function that is called when setting up the hash.
   That is, if the tracer needs to do something special for each function
   being traced and wants to give each function its own data, the address of
   the entry data is passed to this callback so that it may update the entry
   to whatever it would like.
 free is a callback for when the entry is freed. In case the tracer allocated
   any data, it is given the chance to free it.

To unregister we have three functions:

	void
	unregister_ftrace_function_hook(char *glob, struct ftrace_hook_ops *ops,
					void *data)

This will unregister all hooks that match glob, point to ops, and have data
matching data. (Note: if glob is NULL, blank or '*', all functions will be
tested.)

	void
	unregister_ftrace_function_hook_func(char *glob,
					     struct ftrace_hook_ops *ops)

This will unregister all functions matching glob that have an entry pointing
to ops.

	void unregister_ftrace_function_hook_all(char *glob)

This simply unregisters all funcs.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
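
A hypothetical sketch of how a tracer might use this hook API (the my_hook_*
names, the "vfs_*" glob and the assumption that the callback returns 0 on
success are made up for illustration; only the types and functions named in
the changelog above are taken as given):

	/* Count hits per hooked function, using per-entry data. */
	static void my_hook_func(unsigned long ip, unsigned long parent_ip,
				 void **data)
	{
		unsigned long *hits = *data;

		(*hits)++;
	}

	static int my_hook_setup(unsigned long ip, void **data)
	{
		/* Give every matched function its own counter. */
		/* assumes a sleepable setup context; illustrative only */
		*data = kzalloc(sizeof(unsigned long), GFP_KERNEL);
		return *data ? 0 : -ENOMEM;
	}

	static void my_hook_free(void **data)
	{
		kfree(*data);
	}

	static struct ftrace_hook_ops my_hook_ops = {
		.func		= my_hook_func,
		.callback	= my_hook_setup,
		.free		= my_hook_free,
	};

	static int my_hook_register(void)
	{
		/* Hook every traced function whose name starts with "vfs_". */
		return register_ftrace_function_hook("vfs_*", &my_hook_ops, NULL);
	}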

#include <linux/hash.h>
#include <linux/rcupdate.h>

#include <trace/events/sched.h>

#include <asm/setup.h>

#include "trace_output.h"
#include "trace_stat.h"

#define FTRACE_WARN_ON(cond)			\
	({					\
		int ___r = cond;		\
		if (WARN_ON(___r))		\
			ftrace_kill();		\
		___r;				\
	})

#define FTRACE_WARN_ON_ONCE(cond)		\
	({					\
		int ___r = cond;		\
		if (WARN_ON_ONCE(___r))		\
			ftrace_kill();		\
		___r;				\
	})

/* hash bits for specific function selection */
#define FTRACE_HASH_BITS 7
#define FTRACE_FUNC_HASHSIZE (1 << FTRACE_HASH_BITS)
#define FTRACE_HASH_DEFAULT_BITS 10
#define FTRACE_HASH_MAX_BITS 12

/* ftrace_enabled is a method to turn ftrace on or off */
int ftrace_enabled __read_mostly;
static int last_ftrace_enabled;

/* Quick disabling of function tracer. */
int function_trace_stop;

/* List for set_ftrace_pid's pids. */
LIST_HEAD(ftrace_pids);
struct ftrace_pid {
	struct list_head list;
	struct pid *pid;
};

/*
 * ftrace_disabled is set when an anomaly is discovered.
 * ftrace_disabled is much stronger than ftrace_enabled.
 */
static int ftrace_disabled __read_mostly;

static DEFINE_MUTEX(ftrace_lock);

static struct ftrace_ops ftrace_list_end __read_mostly = {
	.func		= ftrace_stub,
};

static struct ftrace_ops *ftrace_global_list __read_mostly = &ftrace_list_end;
static struct ftrace_ops *ftrace_ops_list __read_mostly = &ftrace_list_end;
ftrace_func_t ftrace_trace_function __read_mostly = ftrace_stub;
ftrace_func_t __ftrace_trace_function __read_mostly = ftrace_stub;
ftrace_func_t ftrace_pid_function __read_mostly = ftrace_stub;
static struct ftrace_ops global_ops;

static void
ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip);

ftrace: Add internal recursive checks

Witold reported a reboot caused by the selftests of the dynamic function
tracer. He sent me a config and I used ktest to do a config_bisect on it (as
my config did not cause the crash). It pointed out that the problem config
was CONFIG_PROVE_RCU.

What happened was that if multiple callbacks are attached to the function
tracer, we iterate a list of callbacks. Because the list is managed by
synchronize_sched() and preempt_disable(), the access to the pointers uses
rcu_dereference_raw().

When PROVE_RCU is enabled, rcu_dereference_raw() calls some debugging
functions, which happen to be traced. The tracing of the debug function would
then call rcu_dereference_raw(), which would then call the debug function and
then... well, you get the idea.

I first wrote two different patches to solve this bug.

1) add a __rcu_dereference_raw() that would not do any checks.
2) add notrace to the offending debug functions.

Both of these patches worked.

Talking with Paul McKenney on IRC, he suggested adding recursion detection
instead. This seemed to be a better solution, so I decided to implement it.
As the task_struct already has a trace_recursion field to detect recursion in
the ring buffer, and that only allows a very small number, I decided to use
that same variable to add flags that can detect recursion inside the
infrastructure of the function tracer.

I plan to change it so that the task struct bit can be checked in mcount, but
as that requires changes to all archs, I will hold that off to the next merge
window.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1306348063.1465.116.camel@gandalf.stny.rr.com
Reported-by: Witold Baryluk <baryluk@smp.if.uj.edu.pl>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

/*
 * Traverse the ftrace_global_list, invoking all entries.  The reason that we
 * can use rcu_dereference_raw() is that elements removed from this list
 * are simply leaked, so there is no need to interact with a grace-period
 * mechanism.  The rcu_dereference_raw() calls are needed to handle
 * concurrent insertions into the ftrace_global_list.
 *
 * Silly Alpha and silly pointer-speculation compiler optimizations!
 */
static void ftrace_global_list_func(unsigned long ip,
				    unsigned long parent_ip)
{
	struct ftrace_ops *op;

	if (unlikely(trace_recursion_test(TRACE_GLOBAL_BIT)))
		return;

	trace_recursion_set(TRACE_GLOBAL_BIT);
	op = rcu_dereference_raw(ftrace_global_list); /*see above*/
	while (op != &ftrace_list_end) {
		op->func(ip, parent_ip);
		op = rcu_dereference_raw(op->next); /*see above*/
	}
	trace_recursion_clear(TRACE_GLOBAL_BIT);
}
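
The set/test/clear pattern in ftrace_global_list_func() above is the recursion
guard the changelog describes. A stand-alone user-space analogue of the same
idea (illustrative only; the kernel keeps the flag in bits of
current->trace_recursion rather than in a thread-local variable):

	#include <stdio.h>

	static __thread int in_tracer;	/* stand-in for a per-task recursion bit */

	static void tracer_body(void)
	{
		/* Anything here that re-enters trace_hit() is silently ignored. */
		printf("tracing\n");
	}

	static void trace_hit(void)
	{
		if (in_tracer)		/* already inside the tracer: bail out */
			return;

		in_tracer = 1;
		tracer_body();
		in_tracer = 0;
	}

	int main(void)
	{
		trace_hit();
		return 0;
	}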

static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip)
{
	if (!test_tsk_trace_trace(current))
		return;

	ftrace_pid_function(ip, parent_ip);
}

static void set_ftrace_pid_function(ftrace_func_t func)
{
	/* do not set ftrace_pid_function to itself! */
	if (func != ftrace_pid_func)
		ftrace_pid_function = func;
}

/**
 * clear_ftrace_function - reset the ftrace function
 *
 * This NULLs the ftrace function and in essence stops
 * tracing.  There may be lag
 */
void clear_ftrace_function(void)
{
	ftrace_trace_function = ftrace_stub;
	__ftrace_trace_function = ftrace_stub;
	ftrace_pid_function = ftrace_stub;
}

#ifndef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
/*
 * For those archs that do not test function_trace_stop in their
 * mcount call site, we need to do it from C.
 */
static void ftrace_test_stop_func(unsigned long ip, unsigned long parent_ip)
{
	if (function_trace_stop)
		return;

	__ftrace_trace_function(ip, parent_ip);
}
#endif

static void update_global_ops(void)
{
	ftrace_func_t func;

	/*
	 * If there's only one function registered, then call that
	 * function directly. Otherwise, we need to iterate over the
	 * registered callers.
	 */
	if (ftrace_global_list == &ftrace_list_end ||
	    ftrace_global_list->next == &ftrace_list_end)
		func = ftrace_global_list->func;
	else
		func = ftrace_global_list_func;

	/* If we filter on pids, update to use the pid function */
	if (!list_empty(&ftrace_pids)) {
		set_ftrace_pid_function(func);
		func = ftrace_pid_func;
	}

	global_ops.func = func;
}

static void update_ftrace_function(void)
{
	ftrace_func_t func;

	update_global_ops();

	/*
	 * If we are at the end of the list and this ops is
	 * not dynamic, then have the mcount trampoline call
	 * the function directly
	 */
	if (ftrace_ops_list == &ftrace_list_end ||
	    (ftrace_ops_list->next == &ftrace_list_end &&
	     !(ftrace_ops_list->flags & FTRACE_OPS_FL_DYNAMIC)))
		func = ftrace_ops_list->func;
	else
		func = ftrace_ops_list_func;

#ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
	ftrace_trace_function = func;
#else
	__ftrace_trace_function = func;
	ftrace_trace_function = ftrace_test_stop_func;
#endif
}

static void add_ftrace_ops(struct ftrace_ops **list, struct ftrace_ops *ops)
{
	ops->next = *list;
	/*
	 * We are entering ops into the list but another
	 * CPU might be walking that list. We need to make sure
	 * the ops->next pointer is valid before another CPU sees
	 * the ops pointer included into the list.
	 */
	rcu_assign_pointer(*list, ops);
}

static int remove_ftrace_ops(struct ftrace_ops **list, struct ftrace_ops *ops)
{
	struct ftrace_ops **p;

	/*
	 * If we are removing the last function, then simply point
	 * to the ftrace_stub.
	 */
	if (*list == ops && ops->next == &ftrace_list_end) {
		*list = &ftrace_list_end;
		return 0;
	}

	for (p = list; *p != &ftrace_list_end; p = &(*p)->next)
		if (*p == ops)
			break;

	if (*p != ops)
		return -1;

	*p = (*p)->next;
	return 0;
}

static int __register_ftrace_function(struct ftrace_ops *ops)
{
	if (ftrace_disabled)
		return -ENODEV;

	if (FTRACE_WARN_ON(ops == &global_ops))
		return -EINVAL;

	if (WARN_ON(ops->flags & FTRACE_OPS_FL_ENABLED))
		return -EBUSY;

	if (!core_kernel_data((unsigned long)ops))
		ops->flags |= FTRACE_OPS_FL_DYNAMIC;

	if (ops->flags & FTRACE_OPS_FL_GLOBAL) {
		int first = ftrace_global_list == &ftrace_list_end;
		add_ftrace_ops(&ftrace_global_list, ops);
		ops->flags |= FTRACE_OPS_FL_ENABLED;
		if (first)
			add_ftrace_ops(&ftrace_ops_list, &global_ops);
	} else
		add_ftrace_ops(&ftrace_ops_list, ops);

	if (ftrace_enabled)
		update_ftrace_function();

	return 0;
}

static int __unregister_ftrace_function(struct ftrace_ops *ops)
{
	int ret;

	if (ftrace_disabled)
		return -ENODEV;

	if (WARN_ON(!(ops->flags & FTRACE_OPS_FL_ENABLED)))
		return -EBUSY;

	if (FTRACE_WARN_ON(ops == &global_ops))
		return -EINVAL;

	if (ops->flags & FTRACE_OPS_FL_GLOBAL) {
		ret = remove_ftrace_ops(&ftrace_global_list, ops);
		if (!ret && ftrace_global_list == &ftrace_list_end)
			ret = remove_ftrace_ops(&ftrace_ops_list, &global_ops);
		if (!ret)
			ops->flags &= ~FTRACE_OPS_FL_ENABLED;
	} else
		ret = remove_ftrace_ops(&ftrace_ops_list, ops);

	if (ret < 0)
		return ret;

	if (ftrace_enabled)
		update_ftrace_function();

	/*
	 * Dynamic ops may be freed, we must make sure that all
	 * callers are done before leaving this function.
	 */
	if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
		synchronize_sched();

	return 0;
}
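
For context, this is roughly how a kernel-side user drives the pair above
through the public register_ftrace_function()/unregister_ftrace_function()
wrappers (a sketch only; the callback name and body are made up, and a real
callback must be cheap and safe to run from almost any context):

	/* Hypothetical callback: runs on every traced function entry. */
	static void my_callback(unsigned long ip, unsigned long parent_ip)
	{
		/* keep this minimal; it runs extremely often */
	}

	static struct ftrace_ops my_ops = {
		.func = my_callback,
	};

	static int my_tracer_start(void)
	{
		return register_ftrace_function(&my_ops);
	}

	static void my_tracer_stop(void)
	{
		unregister_ftrace_function(&my_ops);
	}

Because an ops structure like this usually lives in module memory, the
core_kernel_data() check above marks it FTRACE_OPS_FL_DYNAMIC, which is why
__unregister_ftrace_function() finishes with synchronize_sched() before the
caller may free it.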

static void ftrace_update_pid_func(void)
{
	/* Only do something if we are tracing something */
	if (ftrace_trace_function == ftrace_stub)
		return;

	update_ftrace_function();
}

#ifdef CONFIG_FUNCTION_PROFILER
struct ftrace_profile {
	struct hlist_node		node;
	unsigned long			ip;
	unsigned long			counter;
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	unsigned long long		time;
	unsigned long long		time_squared;
#endif
};

struct ftrace_profile_page {
	struct ftrace_profile_page	*next;
	unsigned long			index;
	struct ftrace_profile		records[];
};

struct ftrace_profile_stat {
	atomic_t			disabled;
	struct hlist_head		*hash;
	struct ftrace_profile_page	*pages;
	struct ftrace_profile_page	*start;
	struct tracer_stat		stat;
};

#define PROFILE_RECORDS_SIZE						\
	(PAGE_SIZE - offsetof(struct ftrace_profile_page, records))

#define PROFILES_PER_PAGE					\
	(PROFILE_RECORDS_SIZE / sizeof(struct ftrace_profile))
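
To put rough numbers on those macros (a back-of-the-envelope sketch; the sizes
below are assumptions for a 64-bit build with 4 KB pages and the graph-tracer
fields included, not values taken from this file):

	#include <stdio.h>

	/* Assumed: hlist_node 16 + ip 8 + counter 8 + time 8 + time_squared 8. */
	#define ASSUMED_RECORD_SIZE	48
	/* Assumed: next pointer 8 + index 8 before the flexible records[]. */
	#define ASSUMED_PAGE_HEADER	16
	#define ASSUMED_PAGE_SIZE	4096

	int main(void)
	{
		int per_page = (ASSUMED_PAGE_SIZE - ASSUMED_PAGE_HEADER) /
			       ASSUMED_RECORD_SIZE;

		printf("profile records per page: %d\n", per_page);	/* 85 */
		/* the ~20K-function fallback used later in ftrace_profile_pages_init() */
		printf("pages for 20000 functions: %d\n",
		       (20000 + per_page - 1) / per_page);		/* 236 */
		return 0;
	}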

static int ftrace_profile_bits __read_mostly;
static int ftrace_profile_enabled __read_mostly;

/* ftrace_profile_lock - synchronize the enable and disable of the profiler */
static DEFINE_MUTEX(ftrace_profile_lock);

static DEFINE_PER_CPU(struct ftrace_profile_stat, ftrace_profile_stats);

#define FTRACE_PROFILE_HASH_SIZE 1024 /* must be power of 2 */

static void *
function_stat_next(void *v, int idx)
{
	struct ftrace_profile *rec = v;
	struct ftrace_profile_page *pg;

	pg = (struct ftrace_profile_page *)((unsigned long)rec & PAGE_MASK);

 again:
	if (idx != 0)
		rec++;

	if ((void *)rec >= (void *)&pg->records[pg->index]) {
		pg = pg->next;
		if (!pg)
			return NULL;
		rec = &pg->records[0];
		if (!rec->counter)
			goto again;
	}

	return rec;
}

static void *function_stat_start(struct tracer_stat *trace)
{
	struct ftrace_profile_stat *stat =
		container_of(trace, struct ftrace_profile_stat, stat);

	if (!stat || !stat->start)
		return NULL;

	return function_stat_next(&stat->start->records[0], 0);
}

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/* function graph compares on total time */
static int function_stat_cmp(void *p1, void *p2)
{
	struct ftrace_profile *a = p1;
	struct ftrace_profile *b = p2;

	if (a->time < b->time)
		return -1;
	if (a->time > b->time)
		return 1;
	else
		return 0;
}
#else
/* not function graph: compare against hits */
static int function_stat_cmp(void *p1, void *p2)
{
	struct ftrace_profile *a = p1;
	struct ftrace_profile *b = p2;

	if (a->counter < b->counter)
		return -1;
	if (a->counter > b->counter)
		return 1;
	else
		return 0;
}
#endif

static int function_stat_headers(struct seq_file *m)
{
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	seq_printf(m, "  Function                               "
		   "Hit    Time            Avg             s^2\n"
		      "  --------                               "
		   "---    ----            ---             ---\n");
#else
	seq_printf(m, "  Function                               Hit\n"
		   "  --------                               ---\n");
#endif
	return 0;
}

static int function_stat_show(struct seq_file *m, void *v)
{
	struct ftrace_profile *rec = v;
	char str[KSYM_SYMBOL_LEN];
	int ret = 0;
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	static struct trace_seq s;
	unsigned long long avg;
	unsigned long long stddev;
#endif
	mutex_lock(&ftrace_profile_lock);

	/* we raced with function_profile_reset() */
	if (unlikely(rec->counter == 0)) {
		ret = -EBUSY;
		goto out;
	}

	kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);
	seq_printf(m, "  %-30.30s  %10lu", str, rec->counter);

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	seq_printf(m, "    ");
	avg = rec->time;
	do_div(avg, rec->counter);

	/* Sample standard deviation (s^2) */
	if (rec->counter <= 1)
		stddev = 0;
	else {
		stddev = rec->time_squared - rec->counter * avg * avg;
		/*
		 * Divide only 1000 for ns^2 -> us^2 conversion.
		 * trace_print_graph_duration will divide 1000 again.
		 */
		do_div(stddev, (rec->counter - 1) * 1000);
	}

	trace_seq_init(&s);
	trace_print_graph_duration(rec->time, &s);
	trace_seq_puts(&s, "    ");
	trace_print_graph_duration(avg, &s);
	trace_seq_puts(&s, "    ");
	trace_print_graph_duration(stddev, &s);
	trace_print_seq(m, &s);
#endif
	seq_putc(m, '\n');
out:
	mutex_unlock(&ftrace_profile_lock);

	return ret;
}
|
|
|
|
|
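/*
 * Note on the statistics above: rec->time and rec->time_squared are
 * running sums of the per-call durations t and t^2, so with
 * n = rec->counter and avg = time / n, the value shown in the "s^2"
 * column is computed as
 *
 *	s^2 = (sum(t^2) - n * avg^2) / (n - 1)
 *
 * which is the usual single-pass form of sum((t - avg)^2) / (n - 1).
 */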
static void ftrace_profile_reset(struct ftrace_profile_stat *stat)
{
	struct ftrace_profile_page *pg;

	pg = stat->pages = stat->start;

	while (pg) {
		memset(pg->records, 0, PROFILE_RECORDS_SIZE);
		pg->index = 0;
		pg = pg->next;
	}

	memset(stat->hash, 0,
	       FTRACE_PROFILE_HASH_SIZE * sizeof(struct hlist_head));
}
int ftrace_profile_pages_init(struct ftrace_profile_stat *stat)
{
	struct ftrace_profile_page *pg;
	int functions;
	int pages;
	int i;

	/* If we already allocated, do nothing */
	if (stat->pages)
		return 0;

	stat->pages = (void *)get_zeroed_page(GFP_KERNEL);
	if (!stat->pages)
		return -ENOMEM;

#ifdef CONFIG_DYNAMIC_FTRACE
	functions = ftrace_update_tot_cnt;
#else
	/*
	 * We do not know the number of functions that exist because
	 * dynamic tracing is what counts them. With past experience
	 * we have around 20K functions. That should be more than enough.
	 * It is highly unlikely we will execute every function in
	 * the kernel.
	 */
	functions = 20000;
#endif

	pg = stat->start = stat->pages;

	pages = DIV_ROUND_UP(functions, PROFILES_PER_PAGE);

	for (i = 0; i < pages; i++) {
		pg->next = (void *)get_zeroed_page(GFP_KERNEL);
		if (!pg->next)
			goto out_free;
		pg = pg->next;
	}

	return 0;

 out_free:
	pg = stat->start;
	while (pg) {
		unsigned long tmp = (unsigned long)pg;

		pg = pg->next;
		free_page(tmp);
	}

	free_page((unsigned long)stat->pages);
	stat->pages = NULL;
	stat->start = NULL;

	return -ENOMEM;
}
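/*
 * Rough sizing example (illustrative figures only): with the 20000
 * function estimate above, a 4096-byte PAGE_SIZE and a per-record
 * struct ftrace_profile of a few dozen bytes, PROFILES_PER_PAGE works
 * out to roughly a hundred records, so the loop above preallocates on
 * the order of a couple hundred pages per CPU up front instead of
 * allocating in the tracing hot path.
 */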
static int ftrace_profile_init_cpu(int cpu)
{
	struct ftrace_profile_stat *stat;
	int size;

	stat = &per_cpu(ftrace_profile_stats, cpu);

	if (stat->hash) {
		/* If the profile is already created, simply reset it */
		ftrace_profile_reset(stat);
		return 0;
	}

	/*
	 * We are profiling all functions, but usually only a few thousand
	 * functions are hit. We'll make a hash of 1024 items.
	 */
	size = FTRACE_PROFILE_HASH_SIZE;

	stat->hash = kzalloc(sizeof(struct hlist_head) * size, GFP_KERNEL);

	if (!stat->hash)
		return -ENOMEM;

	if (!ftrace_profile_bits) {
		size--;

		for (; size; size >>= 1)
			ftrace_profile_bits++;
	}

	/* Preallocate the function profiling pages */
	if (ftrace_profile_pages_init(stat) < 0) {
		kfree(stat->hash);
		stat->hash = NULL;
		return -ENOMEM;
	}

	return 0;
}
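/*
 * The ftrace_profile_bits computation above turns the hash size into a
 * bit count for hash_long(): with a hash of 1024 items, size-- gives
 * 1023 and the shift loop runs ten times, so ftrace_profile_bits ends
 * up as 10 (2^10 == 1024 buckets).
 */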
static int ftrace_profile_init(void)
{
	int cpu;
	int ret = 0;

	for_each_online_cpu(cpu) {
		ret = ftrace_profile_init_cpu(cpu);
		if (ret)
			break;
	}

	return ret;
}
/* interrupts must be disabled */
static struct ftrace_profile *
ftrace_find_profiled_func(struct ftrace_profile_stat *stat, unsigned long ip)
{
	struct ftrace_profile *rec;
	struct hlist_head *hhd;
	struct hlist_node *n;
	unsigned long key;

	key = hash_long(ip, ftrace_profile_bits);
	hhd = &stat->hash[key];

	if (hlist_empty(hhd))
		return NULL;

	hlist_for_each_entry_rcu(rec, n, hhd, node) {
		if (rec->ip == ip)
			return rec;
	}

	return NULL;
}
static void ftrace_add_profile(struct ftrace_profile_stat *stat,
			       struct ftrace_profile *rec)
{
	unsigned long key;

	key = hash_long(rec->ip, ftrace_profile_bits);
	hlist_add_head_rcu(&rec->node, &stat->hash[key]);
}
/*
 * The memory is already allocated, this simply finds a new record to use.
 */
static struct ftrace_profile *
ftrace_profile_alloc(struct ftrace_profile_stat *stat, unsigned long ip)
{
	struct ftrace_profile *rec = NULL;

	/* prevent recursion (from NMIs) */
	if (atomic_inc_return(&stat->disabled) != 1)
		goto out;

	/*
	 * Try to find the function again since an NMI
	 * could have added it
	 */
	rec = ftrace_find_profiled_func(stat, ip);
	if (rec)
		goto out;

	if (stat->pages->index == PROFILES_PER_PAGE) {
		if (!stat->pages->next)
			goto out;
		stat->pages = stat->pages->next;
	}

	rec = &stat->pages->records[stat->pages->index++];
	rec->ip = ip;
	ftrace_add_profile(stat, rec);

 out:
	atomic_dec(&stat->disabled);

	return rec;
}
static void
function_profile_call(unsigned long ip, unsigned long parent_ip)
{
	struct ftrace_profile_stat *stat;
	struct ftrace_profile *rec;
	unsigned long flags;

	if (!ftrace_profile_enabled)
		return;

	local_irq_save(flags);

	stat = &__get_cpu_var(ftrace_profile_stats);
	if (!stat->hash || !ftrace_profile_enabled)
		goto out;

	rec = ftrace_find_profiled_func(stat, ip);
	if (!rec) {
		rec = ftrace_profile_alloc(stat, ip);
		if (!rec)
			goto out;
	}

	rec->counter++;
 out:
	local_irq_restore(flags);
}
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
static int profile_graph_entry(struct ftrace_graph_ent *trace)
{
	function_profile_call(trace->func, 0);
	return 1;
}

static void profile_graph_return(struct ftrace_graph_ret *trace)
{
	struct ftrace_profile_stat *stat;
	unsigned long long calltime;
	struct ftrace_profile *rec;
	unsigned long flags;

	local_irq_save(flags);
	stat = &__get_cpu_var(ftrace_profile_stats);
	if (!stat->hash || !ftrace_profile_enabled)
		goto out;

	/* If the calltime was zero'd ignore it */
	if (!trace->calltime)
		goto out;

	calltime = trace->rettime - trace->calltime;

	if (!(trace_flags & TRACE_ITER_GRAPH_TIME)) {
		int index;

		index = trace->depth;

		/* Append this call time to the parent time to subtract */
		if (index)
			current->ret_stack[index - 1].subtime += calltime;

		if (current->ret_stack[index].subtime < calltime)
			calltime -= current->ret_stack[index].subtime;
		else
			calltime = 0;
	}

	rec = ftrace_find_profiled_func(stat, trace->func);
	if (rec) {
		rec->time += calltime;
		rec->time_squared += calltime * calltime;
	}

 out:
	local_irq_restore(flags);
}
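/*
 * Illustration of the subtime handling above: when the "graph time"
 * trace option is unset, only the time spent in the function itself is
 * charged to it.  If a traced function runs for 10us total but its
 * traced children account for 4us, those children have added 4us to
 * ret_stack[index].subtime, so this function records 10 - 4 = 6us.
 * Its own total duration is likewise appended to the parent's subtime
 * one frame down the return stack.
 */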
static int register_ftrace_profiler(void)
{
	return register_ftrace_graph(&profile_graph_return,
				     &profile_graph_entry);
}

static void unregister_ftrace_profiler(void)
{
	unregister_ftrace_graph();
}
#else
static struct ftrace_ops ftrace_profile_ops __read_mostly = {
	.func		= function_profile_call,
};

static int register_ftrace_profiler(void)
{
	return register_ftrace_function(&ftrace_profile_ops);
}

static void unregister_ftrace_profiler(void)
{
	unregister_ftrace_function(&ftrace_profile_ops);
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
static ssize_t
ftrace_profile_write(struct file *filp, const char __user *ubuf,
		     size_t cnt, loff_t *ppos)
{
	unsigned long val;
	int ret;

	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
	if (ret)
		return ret;

	val = !!val;

	mutex_lock(&ftrace_profile_lock);
	if (ftrace_profile_enabled ^ val) {
		if (val) {
			ret = ftrace_profile_init();
			if (ret < 0) {
				cnt = ret;
				goto out;
			}

			ret = register_ftrace_profiler();
			if (ret < 0) {
				cnt = ret;
				goto out;
			}
			ftrace_profile_enabled = 1;
		} else {
			ftrace_profile_enabled = 0;
			/*
			 * unregister_ftrace_profiler calls stop_machine
			 * so this acts like a synchronize_sched().
			 */
			unregister_ftrace_profiler();
		}
	}
 out:
	mutex_unlock(&ftrace_profile_lock);

	*ppos += cnt;

	return cnt;
}
static ssize_t
ftrace_profile_read(struct file *filp, char __user *ubuf,
		    size_t cnt, loff_t *ppos)
{
	char buf[64];		/* big enough to hold a number */
	int r;

	r = sprintf(buf, "%u\n", ftrace_profile_enabled);
	return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
}
static const struct file_operations ftrace_profile_fops = {
	.open		= tracing_open_generic,
	.read		= ftrace_profile_read,
	.write		= ftrace_profile_write,
	.llseek		= default_llseek,
};
/* used to initialize the real stat files */
static struct tracer_stat function_stats __initdata = {
	.name		= "functions",
	.stat_start	= function_stat_start,
	.stat_next	= function_stat_next,
	.stat_cmp	= function_stat_cmp,
	.stat_headers	= function_stat_headers,
	.stat_show	= function_stat_show
};
static __init void ftrace_profile_debugfs(struct dentry *d_tracer)
{
	struct ftrace_profile_stat *stat;
	struct dentry *entry;
	char *name;
	int ret;
	int cpu;

	for_each_possible_cpu(cpu) {
		stat = &per_cpu(ftrace_profile_stats, cpu);

		/* allocate enough for function name + cpu number */
		name = kmalloc(32, GFP_KERNEL);
		if (!name) {
			/*
			 * The files created are permanent, if something happens
			 * we still do not free memory.
			 */
			WARN(1,
			     "Could not allocate stat file for cpu %d\n",
			     cpu);
			return;
		}
		stat->stat = function_stats;
		snprintf(name, 32, "function%d", cpu);
		stat->stat.name = name;
		ret = register_stat_tracer(&stat->stat);
		if (ret) {
			WARN(1,
			     "Could not register function stat for cpu %d\n",
			     cpu);
			kfree(name);
			return;
		}
	}

	entry = debugfs_create_file("function_profile_enabled", 0644,
				    d_tracer, NULL, &ftrace_profile_fops);
	if (!entry)
		pr_warning("Could not create debugfs "
			   "'function_profile_enabled' entry\n");
}
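/*
 * Usage note: with debugfs mounted (conventionally at /sys/kernel/debug),
 * the "function_profile_enabled" file created above lives in the tracing
 * directory and accepts "0" or "1", for example
 *
 *	echo 1 > /sys/kernel/debug/tracing/function_profile_enabled
 *
 * while the per-cpu "function<N>" entries registered above with the stat
 * tracer expose the collected hit counts and timings.
 */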
#else /* CONFIG_FUNCTION_PROFILER */
static __init void ftrace_profile_debugfs(struct dentry *d_tracer)
{
}
#endif /* CONFIG_FUNCTION_PROFILER */
static struct pid * const ftrace_swapper_pid = &init_struct_pid;

#ifdef CONFIG_DYNAMIC_FTRACE

#ifndef CONFIG_FTRACE_MCOUNT_RECORD
# error Dynamic ftrace depends on MCOUNT_RECORD
#endif

static struct hlist_head ftrace_func_hash[FTRACE_FUNC_HASHSIZE] __read_mostly;

struct ftrace_func_probe {
	struct hlist_node	node;
	struct ftrace_probe_ops	*ops;
	unsigned long		flags;
	unsigned long		ip;
	void			*data;
	struct rcu_head		rcu;
};

enum {
	FTRACE_ENABLE_CALLS		= (1 << 0),
	FTRACE_DISABLE_CALLS		= (1 << 1),
	FTRACE_UPDATE_TRACE_FUNC	= (1 << 2),
	FTRACE_START_FUNC_RET		= (1 << 3),
	FTRACE_STOP_FUNC_RET		= (1 << 4),
};

struct ftrace_func_entry {
	struct hlist_node hlist;
	unsigned long ip;
};

struct ftrace_hash {
	unsigned long		size_bits;
	struct hlist_head	*buckets;
	unsigned long		count;
	struct rcu_head		rcu;
};

/*
 * We make these constant because no one should touch them,
 * but they are used as the default "empty hash", to avoid allocating
 * it all the time. These are in a read only section such that if
 * anyone does try to modify it, it will cause an exception.
 */
static const struct hlist_head empty_buckets[1];
static const struct ftrace_hash empty_hash = {
	.buckets = (struct hlist_head *)empty_buckets,
};
#define EMPTY_HASH	((struct ftrace_hash *)&empty_hash)

static struct ftrace_ops global_ops = {
	.func			= ftrace_stub,
	.notrace_hash		= EMPTY_HASH,
	.filter_hash		= EMPTY_HASH,
};

static struct dyn_ftrace *ftrace_new_addrs;

static DEFINE_MUTEX(ftrace_regex_lock);

struct ftrace_page {
	struct ftrace_page	*next;
	int			index;
	struct dyn_ftrace	records[];
};

#define ENTRIES_PER_PAGE \
  ((PAGE_SIZE - sizeof(struct ftrace_page)) / sizeof(struct dyn_ftrace))

/* estimate from running different kernels */
#define NR_TO_INIT		10000

static struct ftrace_page	*ftrace_pages_start;
static struct ftrace_page	*ftrace_pages;

static struct dyn_ftrace *ftrace_free_records;
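/*
 * Sizing sketch (illustrative numbers only): with a 4096-byte PAGE_SIZE
 * and a struct dyn_ftrace in the low tens of bytes, ENTRIES_PER_PAGE
 * comes out to well over a hundred records per page, so the NR_TO_INIT
 * estimate of 10000 call sites needs fewer than a hundred pages at boot.
 */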
static struct ftrace_func_entry *
ftrace_lookup_ip(struct ftrace_hash *hash, unsigned long ip)
{
	unsigned long key;
	struct ftrace_func_entry *entry;
	struct hlist_head *hhd;
	struct hlist_node *n;

	if (!hash->count)
		return NULL;

	if (hash->size_bits > 0)
		key = hash_long(ip, hash->size_bits);
	else
		key = 0;

	hhd = &hash->buckets[key];

	hlist_for_each_entry_rcu(entry, n, hhd, hlist) {
		if (entry->ip == ip)
			return entry;
	}
	return NULL;
}
static void __add_hash_entry(struct ftrace_hash *hash,
			     struct ftrace_func_entry *entry)
{
	struct hlist_head *hhd;
	unsigned long key;

	if (hash->size_bits)
		key = hash_long(entry->ip, hash->size_bits);
	else
		key = 0;

	hhd = &hash->buckets[key];
	hlist_add_head(&entry->hlist, hhd);
	hash->count++;
}

static int add_hash_entry(struct ftrace_hash *hash, unsigned long ip)
{
	struct ftrace_func_entry *entry;

	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
	if (!entry)
		return -ENOMEM;

	entry->ip = ip;
	__add_hash_entry(hash, entry);

	return 0;
}
static void
free_hash_entry(struct ftrace_hash *hash,
		struct ftrace_func_entry *entry)
{
	hlist_del(&entry->hlist);
	kfree(entry);
	hash->count--;
}

static void
remove_hash_entry(struct ftrace_hash *hash,
		  struct ftrace_func_entry *entry)
{
	hlist_del(&entry->hlist);
	hash->count--;
}

static void ftrace_hash_clear(struct ftrace_hash *hash)
{
	struct hlist_head *hhd;
	struct hlist_node *tp, *tn;
	struct ftrace_func_entry *entry;
	int size = 1 << hash->size_bits;
	int i;

	if (!hash->count)
		return;

	for (i = 0; i < size; i++) {
		hhd = &hash->buckets[i];
		hlist_for_each_entry_safe(entry, tp, tn, hhd, hlist)
			free_hash_entry(hash, entry);
	}
	FTRACE_WARN_ON(hash->count);
}
static void free_ftrace_hash(struct ftrace_hash *hash)
{
	if (!hash || hash == EMPTY_HASH)
		return;
	ftrace_hash_clear(hash);
	kfree(hash->buckets);
	kfree(hash);
}

static void __free_ftrace_hash_rcu(struct rcu_head *rcu)
{
	struct ftrace_hash *hash;

	hash = container_of(rcu, struct ftrace_hash, rcu);
	free_ftrace_hash(hash);
}

static void free_ftrace_hash_rcu(struct ftrace_hash *hash)
{
	if (!hash || hash == EMPTY_HASH)
		return;
	call_rcu_sched(&hash->rcu, __free_ftrace_hash_rcu);
}

static struct ftrace_hash *alloc_ftrace_hash(int size_bits)
{
	struct ftrace_hash *hash;
	int size;

	hash = kzalloc(sizeof(*hash), GFP_KERNEL);
	if (!hash)
		return NULL;

	size = 1 << size_bits;
	hash->buckets = kzalloc(sizeof(*hash->buckets) * size, GFP_KERNEL);

	if (!hash->buckets) {
		kfree(hash);
		return NULL;
	}

	hash->size_bits = size_bits;

	return hash;
}
static struct ftrace_hash *
alloc_and_copy_ftrace_hash(int size_bits, struct ftrace_hash *hash)
{
	struct ftrace_func_entry *entry;
	struct ftrace_hash *new_hash;
	struct hlist_node *tp;
	int size;
	int ret;
	int i;

	new_hash = alloc_ftrace_hash(size_bits);
	if (!new_hash)
		return NULL;

	/* Empty hash? */
	if (!hash || !hash->count)
		return new_hash;

	size = 1 << hash->size_bits;
	for (i = 0; i < size; i++) {
		hlist_for_each_entry(entry, tp, &hash->buckets[i], hlist) {
			ret = add_hash_entry(new_hash, entry->ip);
			if (ret < 0)
				goto free_hash;
		}
	}

	FTRACE_WARN_ON(new_hash->count != hash->count);

	return new_hash;

 free_hash:
	free_ftrace_hash(new_hash);
	return NULL;
}
static int
ftrace_hash_move(struct ftrace_hash **dst, struct ftrace_hash *src)
{
	struct ftrace_func_entry *entry;
	struct hlist_node *tp, *tn;
	struct hlist_head *hhd;
	struct ftrace_hash *old_hash;
	struct ftrace_hash *new_hash;
	unsigned long key;
	int size = src->count;
	int bits = 0;
	int i;

	/*
	 * If the new source is empty, just free dst and assign it
	 * the empty_hash.
	 */
	if (!src->count) {
		free_ftrace_hash_rcu(*dst);
		rcu_assign_pointer(*dst, EMPTY_HASH);
		return 0;
	}

	/*
	 * Make the hash size about 1/2 the # found
	 */
	for (size /= 2; size; size >>= 1)
		bits++;

	/* Don't allocate too much */
	if (bits > FTRACE_HASH_MAX_BITS)
		bits = FTRACE_HASH_MAX_BITS;

	new_hash = alloc_ftrace_hash(bits);
	if (!new_hash)
		return -ENOMEM;

	size = 1 << src->size_bits;
	for (i = 0; i < size; i++) {
		hhd = &src->buckets[i];
		hlist_for_each_entry_safe(entry, tp, tn, hhd, hlist) {
			if (bits > 0)
				key = hash_long(entry->ip, bits);
			else
				key = 0;
			remove_hash_entry(src, entry);
			__add_hash_entry(new_hash, entry);
		}
	}

	old_hash = *dst;
	rcu_assign_pointer(*dst, new_hash);
	free_ftrace_hash_rcu(old_hash);

	return 0;
}
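/*
 * Worked example for the sizing above: if src->count is 100, the loop
 * starts from size = 50 and shifts it down to zero in six steps, so
 * bits becomes 6 and the new hash gets 2^6 = 64 buckets, roughly one
 * bucket for every two entries as the comment intends.
 */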
/*
 * Test the hashes for this ops to see if we want to call
 * the ops->func or not.
 *
 * It's a match if the ip is in the ops->filter_hash or
 * the filter_hash does not exist or is empty,
 *  AND
 * the ip is not in the ops->notrace_hash.
 *
 * This needs to be called with preemption disabled as
 * the hashes are freed with call_rcu_sched().
 */
static int
ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip)
{
	struct ftrace_hash *filter_hash;
	struct ftrace_hash *notrace_hash;
	int ret;

	filter_hash = rcu_dereference_raw(ops->filter_hash);
	notrace_hash = rcu_dereference_raw(ops->notrace_hash);

	if ((!filter_hash || !filter_hash->count ||
	     ftrace_lookup_ip(filter_hash, ip)) &&
	    (!notrace_hash || !notrace_hash->count ||
	     !ftrace_lookup_ip(notrace_hash, ip)))
		ret = 1;
	else
		ret = 0;

	return ret;
}
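/*
 * A couple of concrete cases for the test above: with an empty
 * filter_hash and an empty notrace_hash every ip matches; with one
 * function in the filter_hash only that ip matches; and adding the
 * same ip to the notrace_hash makes it fail the test even though it is
 * also in the filter_hash.
 */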
/*
 * This is a double for. Do not use 'break' to break out of the loop,
 * you must use a goto.
 */
#define do_for_each_ftrace_rec(pg, rec)					\
	for (pg = ftrace_pages_start; pg; pg = pg->next) {		\
		int _____i;						\
		for (_____i = 0; _____i < pg->index; _____i++) {	\
			rec = &pg->records[_____i];

#define while_for_each_ftrace_rec()		\
		}				\
	}
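/*
 * Typical usage sketch (hypothetical walker, not a function in this
 * file): the macros expand to two nested for loops, so a plain 'break'
 * would only leave the inner loop, hence the goto rule above.
 *
 *	do_for_each_ftrace_rec(pg, rec) {
 *		if (rec->ip == target)
 *			goto found;
 *	} while_for_each_ftrace_rec();
 *	return NULL;
 * found:
 *	return rec;
 */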
static void __ftrace_hash_rec_update(struct ftrace_ops *ops,
				     int filter_hash,
				     bool inc)
{
	struct ftrace_hash *hash;
	struct ftrace_hash *other_hash;
	struct ftrace_page *pg;
	struct dyn_ftrace *rec;
	int count = 0;
	int all = 0;

	/* Only update if the ops has been registered */
	if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
		return;

	/*
	 * In the filter_hash case:
	 *   If the count is zero, we update all records.
	 *   Otherwise we just update the items in the hash.
	 *
	 * In the notrace_hash case:
	 *   We enable the update in the hash.
	 *   As disabling notrace means enabling the tracing,
	 *   and enabling notrace means disabling, the inc variable
	 *   is inverted.
	 */
	if (filter_hash) {
		hash = ops->filter_hash;
		other_hash = ops->notrace_hash;
		if (!hash || !hash->count)
			all = 1;
	} else {
		inc = !inc;
		hash = ops->notrace_hash;
		other_hash = ops->filter_hash;
		/*
		 * If the notrace hash has no items,
		 * then there's nothing to do.
		 */
		if (hash && !hash->count)
			return;
	}

	do_for_each_ftrace_rec(pg, rec) {
		int in_other_hash = 0;
		int in_hash = 0;
		int match = 0;

		if (all) {
			/*
			 * Only the filter_hash affects all records.
			 * Update if the record is not in the notrace hash.
			 */
			if (!other_hash || !ftrace_lookup_ip(other_hash, rec->ip))
				match = 1;
		} else {
			in_hash = hash && !!ftrace_lookup_ip(hash, rec->ip);
			in_other_hash = other_hash && !!ftrace_lookup_ip(other_hash, rec->ip);

			/*
			 * For the filter hash, a record only counts if it
			 * is in the filter hash and not in the notrace
			 * hash.  For the notrace hash, a record only
			 * counts if the function is actually being traced,
			 * that is, it is also in the filter hash or the
			 * filter hash is empty.
			 */
			if (filter_hash && in_hash && !in_other_hash)
				match = 1;
			else if (!filter_hash && in_hash &&
				 (in_other_hash || !other_hash->count))
				match = 1;
		}
		if (!match)
			continue;

		if (inc) {
			rec->flags++;
			if (FTRACE_WARN_ON((rec->flags & ~FTRACE_FL_MASK) == FTRACE_REF_MAX))
				return;
		} else {
			if (FTRACE_WARN_ON((rec->flags & ~FTRACE_FL_MASK) == 0))
				return;
			rec->flags--;
		}
		count++;
		/* Shortcut, if we handled all records, we are done. */
		if (!all && count == hash->count)
			return;
	} while_for_each_ftrace_rec();
}
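/*
 * Note on the rec->flags arithmetic above: the bits outside
 * FTRACE_FL_MASK serve as a reference count of how many registered ops
 * want this record traced, which is why the update increments or
 * decrements rec->flags directly and warns when the count would wrap
 * past FTRACE_REF_MAX or drop below zero.
 */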
static void ftrace_hash_rec_disable(struct ftrace_ops *ops,
				    int filter_hash)
{
	__ftrace_hash_rec_update(ops, filter_hash, 0);
}

static void ftrace_hash_rec_enable(struct ftrace_ops *ops,
				   int filter_hash)
{
	__ftrace_hash_rec_update(ops, filter_hash, 1);
}
static void ftrace_free_rec(struct dyn_ftrace *rec)
{
	rec->freelist = ftrace_free_records;
	ftrace_free_records = rec;
	rec->flags |= FTRACE_FL_FREE;
}

static struct dyn_ftrace *ftrace_alloc_dyn_node(unsigned long ip)
{
	struct dyn_ftrace *rec;

	/* First check for freed records */
	if (ftrace_free_records) {
		rec = ftrace_free_records;

		if (unlikely(!(rec->flags & FTRACE_FL_FREE))) {
			FTRACE_WARN_ON_ONCE(1);
			ftrace_free_records = NULL;
			return NULL;
		}

		ftrace_free_records = rec->freelist;
		memset(rec, 0, sizeof(*rec));
		return rec;
	}

	if (ftrace_pages->index == ENTRIES_PER_PAGE) {
		if (!ftrace_pages->next) {
			/* allocate another page */
			ftrace_pages->next =
				(void *)get_zeroed_page(GFP_KERNEL);
			if (!ftrace_pages->next)
				return NULL;
		}
		ftrace_pages = ftrace_pages->next;
	}

	return &ftrace_pages->records[ftrace_pages->index++];
}
static struct dyn_ftrace *
ftrace_record_ip(unsigned long ip)
{
	struct dyn_ftrace *rec;

	if (ftrace_disabled)
		return NULL;

	rec = ftrace_alloc_dyn_node(ip);
	if (!rec)
		return NULL;

	rec->ip = ip;
	rec->newlist = ftrace_new_addrs;
	ftrace_new_addrs = rec;

	return rec;
}
static void print_ip_ins(const char *fmt, unsigned char *p)
{
	int i;

	printk(KERN_CONT "%s", fmt);

	for (i = 0; i < MCOUNT_INSN_SIZE; i++)
		printk(KERN_CONT "%s%02x", i ? ":" : "", p[i]);
}

static void ftrace_bug(int failed, unsigned long ip)
{
	switch (failed) {
	case -EFAULT:
		FTRACE_WARN_ON_ONCE(1);
		pr_info("ftrace faulted on modifying ");
		print_ip_sym(ip);
		break;
	case -EINVAL:
		FTRACE_WARN_ON_ONCE(1);
		pr_info("ftrace failed to modify ");
		print_ip_sym(ip);
		print_ip_ins(" actual: ", (unsigned char *)ip);
		printk(KERN_CONT "\n");
		break;
	case -EPERM:
		FTRACE_WARN_ON_ONCE(1);
		pr_info("ftrace faulted on writing ");
		print_ip_sym(ip);
		break;
	default:
		FTRACE_WARN_ON_ONCE(1);
		pr_info("ftrace faulted on unknown error ");
		print_ip_sym(ip);
	}
}
/* Return 1 if the address range is reserved for ftrace */
int ftrace_text_reserved(void *start, void *end)
{
	struct dyn_ftrace *rec;
	struct ftrace_page *pg;

	do_for_each_ftrace_rec(pg, rec) {
		if (rec->ip <= (unsigned long)end &&
		    rec->ip + MCOUNT_INSN_SIZE > (unsigned long)start)
			return 1;
	} while_for_each_ftrace_rec();
	return 0;
}
static int
__ftrace_replace_code(struct dyn_ftrace *rec, int enable)
{
	unsigned long ftrace_addr;
	unsigned long flag = 0UL;

	ftrace_addr = (unsigned long)FTRACE_ADDR;

	/*
	 * If we are enabling tracing:
	 *
	 *   If the record has a ref count, then we need to enable it
	 *   because someone is using it.
	 *
	 *   Otherwise we make sure it's disabled.
	 *
	 * If we are disabling tracing, then disable all records that
	 * are enabled.
	 */
	if (enable && (rec->flags & ~FTRACE_FL_MASK))
		flag = FTRACE_FL_ENABLED;

	/* If the state of this record hasn't changed, then do nothing */
	if ((rec->flags & FTRACE_FL_ENABLED) == flag)
		return 0;

	if (flag) {
		rec->flags |= FTRACE_FL_ENABLED;
		return ftrace_make_call(rec, ftrace_addr);
	}

	rec->flags &= ~FTRACE_FL_ENABLED;
	return ftrace_make_nop(NULL, rec, ftrace_addr);
}
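/*
 * Example of the state transitions above: when a record's ref count is
 * non-zero and tracing is being enabled, flag becomes FTRACE_FL_ENABLED
 * and the call site is patched into a call (ftrace_make_call); when
 * tracing is being disabled, or the ref count has dropped to zero, an
 * enabled record is patched back to a nop (ftrace_make_nop).  Records
 * already in the requested state are skipped.
 */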
static void ftrace_replace_code(int enable)
{
	struct dyn_ftrace *rec;
	struct ftrace_page *pg;
	int failed;

	if (unlikely(ftrace_disabled))
		return;

	do_for_each_ftrace_rec(pg, rec) {
		/* Skip over free records */
		if (rec->flags & FTRACE_FL_FREE)
			continue;

		failed = __ftrace_replace_code(rec, enable);
		if (failed) {
			ftrace_bug(failed, rec->ip);
			/* Stop processing */
			return;
		}
	} while_for_each_ftrace_rec();
}
static int
ftrace_code_disable(struct module *mod, struct dyn_ftrace *rec)
{
	unsigned long ip;
	int ret;

	ip = rec->ip;

	if (unlikely(ftrace_disabled))
		return 0;

	ret = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
	if (ret) {
		ftrace_bug(ret, ip);
		return 0;
	}
	return 1;
}
/*
 * archs can override this function if they must do something
 * before the modifying code is performed.
 */
int __weak ftrace_arch_code_modify_prepare(void)
{
	return 0;
}

/*
 * archs can override this function if they must do something
 * after the modifying code is performed.
 */
int __weak ftrace_arch_code_modify_post_process(void)
{
	return 0;
}
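/*
 * The two __weak hooks above are called around the code patching; an
 * architecture can override them, for instance to make kernel text
 * writable before patching and read-only again afterwards.
 */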
static int __ftrace_modify_code(void *data)
{
	int *command = data;

	if (*command & FTRACE_ENABLE_CALLS)
		ftrace_replace_code(1);
	else if (*command & FTRACE_DISABLE_CALLS)
		ftrace_replace_code(0);

	if (*command & FTRACE_UPDATE_TRACE_FUNC)
		ftrace_update_ftrace_func(ftrace_trace_function);

	if (*command & FTRACE_START_FUNC_RET)
		ftrace_enable_ftrace_graph_caller();
	else if (*command & FTRACE_STOP_FUNC_RET)
		ftrace_disable_ftrace_graph_caller();

	return 0;
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
of that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-13 02:20:42 +07:00
|
|
|
}
|
|
|
|
|
2008-05-13 02:20:51 +07:00
|
|
|
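/*
 * Wrap a code update: run the arch prepare hook, patch the kernel text
 * via stop_machine(), then run the arch post-process hook.
 */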
static void ftrace_run_update_code(int command)
{
	int ret;

	ret = ftrace_arch_code_modify_prepare();
	FTRACE_WARN_ON(ret);
	if (ret)
		return;

	stop_machine(__ftrace_modify_code, &command, NULL);

	ret = ftrace_arch_code_modify_post_process();
	FTRACE_WARN_ON(ret);
}

static ftrace_func_t saved_ftrace_func;
static int ftrace_start_up;
static int global_start_up;

static void ftrace_startup_enable(int command)
{
	if (saved_ftrace_func != ftrace_trace_function) {
		saved_ftrace_func = ftrace_trace_function;
		command |= FTRACE_UPDATE_TRACE_FUNC;
	}

	if (!command || !ftrace_enabled)
		return;

	ftrace_run_update_code(command);
}

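/*
 * ftrace_start_up counts how many users want function tracing active.
 * Ops marked FTRACE_OPS_FL_GLOBAL share global_ops and its filter
 * hashes; those users are tracked separately by global_start_up.
 */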
static int ftrace_startup(struct ftrace_ops *ops, int command)
{
	bool hash_enable = true;

	if (unlikely(ftrace_disabled))
		return -ENODEV;

	ftrace_start_up++;
	command |= FTRACE_ENABLE_CALLS;

	/* ops marked global share the filter hashes */
	if (ops->flags & FTRACE_OPS_FL_GLOBAL) {
		ops = &global_ops;
		/* Don't update hash if global is already set */
		if (global_start_up)
			hash_enable = false;
		global_start_up++;
	}

	ops->flags |= FTRACE_OPS_FL_ENABLED;
	if (hash_enable)
		ftrace_hash_rec_enable(ops, 1);

	ftrace_startup_enable(command);

	return 0;
}

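/*
 * Mirror of ftrace_startup(): drop a user and, once the last user is
 * gone, convert all call sites back to NOPs.
 */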
static void ftrace_shutdown(struct ftrace_ops *ops, int command)
{
	bool hash_disable = true;

	if (unlikely(ftrace_disabled))
		return;

	ftrace_start_up--;
	/*
	 * Just warn in case of unbalance, no need to kill ftrace, it's not
	 * critical but the ftrace_call callers may never be nopped again
	 * after further ftrace uses.
	 */
	WARN_ON_ONCE(ftrace_start_up < 0);

	if (ops->flags & FTRACE_OPS_FL_GLOBAL) {
		ops = &global_ops;
		global_start_up--;
		WARN_ON_ONCE(global_start_up < 0);
		/* Don't update hash if global still has users */
		if (global_start_up) {
			WARN_ON_ONCE(!ftrace_start_up);
			hash_disable = false;
		}
	}

	if (hash_disable)
		ftrace_hash_rec_disable(ops, 1);

	if (ops != &global_ops || !global_start_up)
		ops->flags &= ~FTRACE_OPS_FL_ENABLED;

	if (!ftrace_start_up)
		command |= FTRACE_DISABLE_CALLS;

	if (saved_ftrace_func != ftrace_trace_function) {
		saved_ftrace_func = ftrace_trace_function;
		command |= FTRACE_UPDATE_TRACE_FUNC;
	}

	if (!command || !ftrace_enabled)
		return;

	ftrace_run_update_code(command);
}

static void ftrace_startup_sysctl(void)
{
	if (unlikely(ftrace_disabled))
		return;

	/* Force update next time */
	saved_ftrace_func = NULL;
	/* ftrace_start_up is true if we want ftrace running */
	if (ftrace_start_up)
		ftrace_run_update_code(FTRACE_ENABLE_CALLS);
}

static void ftrace_shutdown_sysctl(void)
{
	if (unlikely(ftrace_disabled))
		return;

	/* ftrace_start_up is true if ftrace is running */
	if (ftrace_start_up)
		ftrace_run_update_code(FTRACE_DISABLE_CALLS);
}

static cycle_t ftrace_update_time;
static unsigned long ftrace_update_cnt;
unsigned long ftrace_update_tot_cnt;

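/*
 * Convert every newly discovered mcount call site (queued on
 * ftrace_new_addrs) into a NOP, and account how long the pass took.
 */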
static int ftrace_update_code(struct module *mod)
{
	struct dyn_ftrace *p;
	cycle_t start, stop;

	start = ftrace_now(raw_smp_processor_id());
	ftrace_update_cnt = 0;

	while (ftrace_new_addrs) {

		/* If something went wrong, bail without enabling anything */
		if (unlikely(ftrace_disabled))
			return -1;

		p = ftrace_new_addrs;
		ftrace_new_addrs = p->newlist;
		p->flags = 0L;

		/*
		 * Do the initial record conversion from mcount jump
		 * to the NOP instructions.
		 */
		if (!ftrace_code_disable(mod, p)) {
			ftrace_free_rec(p);
			/* Game over */
			break;
		}

		ftrace_update_cnt++;

		/*
		 * If the tracing is enabled, go ahead and enable the record.
		 *
		 * The reason not to enable the record immediately is the
		 * inherent check of ftrace_make_nop/ftrace_make_call for
		 * correct previous instructions. Doing the NOP conversion
		 * first puts the module into the correct state, thus
		 * passing the ftrace_make_call check.
		 */
		if (ftrace_start_up) {
			int failed = __ftrace_replace_code(p, 1);
			if (failed) {
				ftrace_bug(failed, p->ip);
				ftrace_free_rec(p);
			}
		}
	}

	stop = ftrace_now(raw_smp_processor_id());
	ftrace_update_time = stop - start;
	ftrace_update_tot_cnt += ftrace_update_cnt;

	return 0;
}

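/*
 * Allocate the initial set of pages that hold the dyn_ftrace records
 * for the call sites discovered at boot.
 */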
static int __init ftrace_dyn_table_alloc(unsigned long num_to_init)
{
	struct ftrace_page *pg;
	int cnt;
	int i;

	/* allocate a few pages */
	ftrace_pages_start = (void *)get_zeroed_page(GFP_KERNEL);
	if (!ftrace_pages_start)
		return -1;

	/*
	 * Allocate a few more pages.
	 *
	 * TODO: have some parser search vmlinux before
	 *   final linking to find all calls to ftrace.
	 *   Then we can:
	 *    a) know how many pages to allocate.
	 *     and/or
	 *    b) set up the table then.
	 *
	 *  The dynamic code is still necessary for
	 *  modules.
	 */

	pg = ftrace_pages = ftrace_pages_start;

	cnt = num_to_init / ENTRIES_PER_PAGE;
	pr_info("ftrace: allocating %ld entries in %d pages\n",
		num_to_init, cnt + 1);

	for (i = 0; i < cnt; i++) {
		pg->next = (void *)get_zeroed_page(GFP_KERNEL);

		/* If we fail, we'll try later anyway */
		if (!pg->next)
			break;

		pg = pg->next;
	}

	return 0;
}

enum {
	FTRACE_ITER_FILTER	= (1 << 0),
	FTRACE_ITER_NOTRACE	= (1 << 1),
	FTRACE_ITER_PRINTALL	= (1 << 2),
	FTRACE_ITER_HASH	= (1 << 3),
	FTRACE_ITER_ENABLED	= (1 << 4),
};

#define FTRACE_BUFF_MAX (KSYM_SYMBOL_LEN+4) /* room for wildcards */

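/* State carried by the seq_file iterators behind the ftrace debugfs files. */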
struct ftrace_iterator {
	loff_t				pos;
	loff_t				func_pos;
	struct ftrace_page		*pg;
	struct dyn_ftrace		*func;
	struct ftrace_func_probe	*probe;
	struct trace_parser		parser;
	struct ftrace_hash		*hash;
	struct ftrace_ops		*ops;
	int				hidx;
	int				idx;
	unsigned			flags;
};

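/*
 * The t_hash_* helpers iterate over ftrace_func_hash, which holds the
 * registered function probes; these entries are listed after the plain
 * function records in the set_ftrace_filter output.
 */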
static void *
t_hash_next(struct seq_file *m, loff_t *pos)
{
	struct ftrace_iterator *iter = m->private;
	struct hlist_node *hnd = NULL;
	struct hlist_head *hhd;

	(*pos)++;
	iter->pos = *pos;

	if (iter->probe)
		hnd = &iter->probe->node;
 retry:
	if (iter->hidx >= FTRACE_FUNC_HASHSIZE)
		return NULL;

	hhd = &ftrace_func_hash[iter->hidx];

	if (hlist_empty(hhd)) {
		iter->hidx++;
		hnd = NULL;
		goto retry;
	}

	if (!hnd)
		hnd = hhd->first;
	else {
		hnd = hnd->next;
		if (!hnd) {
			iter->hidx++;
			goto retry;
		}
	}

	if (WARN_ON_ONCE(!hnd))
		return NULL;

	iter->probe = hlist_entry(hnd, struct ftrace_func_probe, node);

	return iter;
}

static void *t_hash_start(struct seq_file *m, loff_t *pos)
{
	struct ftrace_iterator *iter = m->private;
	void *p = NULL;
	loff_t l;

	if (iter->func_pos > *pos)
		return NULL;

	iter->hidx = 0;
	for (l = 0; l <= (*pos - iter->func_pos); ) {
		p = t_hash_next(m, &l);
		if (!p)
			break;
	}
	if (!p)
		return NULL;

	/* Only set this if we have an item */
	iter->flags |= FTRACE_ITER_HASH;

	return iter;
}

static int
t_hash_show(struct seq_file *m, struct ftrace_iterator *iter)
{
	struct ftrace_func_probe *rec;

	rec = iter->probe;
	if (WARN_ON_ONCE(!rec))
		return -EIO;

	if (rec->ops->print)
		return rec->ops->print(m, rec->ip, rec->ops, rec->data);

	seq_printf(m, "%ps:%ps", (void *)rec->ip, (void *)rec->ops->func);

	if (rec->data)
		seq_printf(m, ":%p", rec->data);
	seq_putc(m, '\n');

	return 0;
}

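/*
 * Advance to the next dyn_ftrace record that matches the iterator's
 * flags (filter, notrace, enabled); fall through to the probe hash
 * once the function records are exhausted.
 */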
static void *
t_next(struct seq_file *m, void *v, loff_t *pos)
{
	struct ftrace_iterator *iter = m->private;
	struct ftrace_ops *ops = &global_ops;
	struct dyn_ftrace *rec = NULL;

	if (unlikely(ftrace_disabled))
		return NULL;

	if (iter->flags & FTRACE_ITER_HASH)
		return t_hash_next(m, pos);

	(*pos)++;
	iter->pos = iter->func_pos = *pos;

	if (iter->flags & FTRACE_ITER_PRINTALL)
		return t_hash_start(m, pos);

 retry:
	if (iter->idx >= iter->pg->index) {
		if (iter->pg->next) {
			iter->pg = iter->pg->next;
			iter->idx = 0;
			goto retry;
		}
	} else {
		rec = &iter->pg->records[iter->idx++];
		if ((rec->flags & FTRACE_FL_FREE) ||

		    ((iter->flags & FTRACE_ITER_FILTER) &&
		     !(ftrace_lookup_ip(ops->filter_hash, rec->ip))) ||

		    ((iter->flags & FTRACE_ITER_NOTRACE) &&
		     !ftrace_lookup_ip(ops->notrace_hash, rec->ip)) ||

		    ((iter->flags & FTRACE_ITER_ENABLED) &&
		     !(rec->flags & ~FTRACE_FL_MASK))) {

			rec = NULL;
			goto retry;
		}
	}

	if (!rec)
		return t_hash_start(m, pos);

	iter->func = rec;

	return iter;
}

static void reset_iter_read(struct ftrace_iterator *iter)
{
	iter->pos = 0;
	iter->func_pos = 0;
	/* Clear both flags so the iterator restarts from the function list */
	iter->flags &= ~(FTRACE_ITER_PRINTALL | FTRACE_ITER_HASH);
}

static void *t_start(struct seq_file *m, loff_t *pos)
{
	struct ftrace_iterator *iter = m->private;
	struct ftrace_ops *ops = &global_ops;
	void *p = NULL;
	loff_t l;

	mutex_lock(&ftrace_lock);

	if (unlikely(ftrace_disabled))
		return NULL;

	/*
	 * If an lseek was done, then reset and start from beginning.
	 */
	if (*pos < iter->pos)
		reset_iter_read(iter);

	/*
	 * For set_ftrace_filter reading, if we have the filter
	 * off, we can short cut and just print out that all
	 * functions are enabled.
	 */
	if (iter->flags & FTRACE_ITER_FILTER && !ops->filter_hash->count) {
		if (*pos > 0)
			return t_hash_start(m, pos);
		iter->flags |= FTRACE_ITER_PRINTALL;
		/* reset in case of seek/pread */
		iter->flags &= ~FTRACE_ITER_HASH;
		return iter;
	}

	if (iter->flags & FTRACE_ITER_HASH)
		return t_hash_start(m, pos);

	/*
	 * Unfortunately, we need to restart at ftrace_pages_start
	 * every time we let go of the ftrace_mutex. This is because
	 * those pointers can change without the lock.
	 */
	iter->pg = ftrace_pages_start;
	iter->idx = 0;
	for (l = 0; l <= *pos; ) {
		p = t_next(m, p, &l);
		if (!p)
			break;
	}

	if (!p) {
		if (iter->flags & FTRACE_ITER_FILTER)
			return t_hash_start(m, pos);

		return NULL;
	}

	return iter;
}

static void t_stop(struct seq_file *m, void *p)
{
	mutex_unlock(&ftrace_lock);
}

static int t_show(struct seq_file *m, void *v)
{
	struct ftrace_iterator *iter = m->private;
	struct dyn_ftrace *rec;

	if (iter->flags & FTRACE_ITER_HASH)
		return t_hash_show(m, iter);

	if (iter->flags & FTRACE_ITER_PRINTALL) {
		seq_printf(m, "#### all functions enabled ####\n");
		return 0;
	}

	rec = iter->func;

	if (!rec)
		return 0;

	seq_printf(m, "%ps", (void *)rec->ip);
	if (iter->flags & FTRACE_ITER_ENABLED)
		seq_printf(m, " (%ld)",
			   rec->flags & ~FTRACE_FL_MASK);
	seq_printf(m, "\n");

	return 0;
}

static const struct seq_operations show_ftrace_seq_ops = {
	.start = t_start,
	.next = t_next,
	.stop = t_stop,
	.show = t_show,
};

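/* available_filter_functions: list every function ftrace has recorded. */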
static int
ftrace_avail_open(struct inode *inode, struct file *file)
{
	struct ftrace_iterator *iter;
	int ret;

	if (unlikely(ftrace_disabled))
		return -ENODEV;

	iter = kzalloc(sizeof(*iter), GFP_KERNEL);
	if (!iter)
		return -ENOMEM;

	iter->pg = ftrace_pages_start;

	ret = seq_open(file, &show_ftrace_seq_ops);
	if (!ret) {
		struct seq_file *m = file->private_data;

		m->private = iter;
	} else {
		kfree(iter);
	}

	return ret;
}

static int
ftrace_enabled_open(struct inode *inode, struct file *file)
{
	struct ftrace_iterator *iter;
	int ret;

	if (unlikely(ftrace_disabled))
		return -ENODEV;

	iter = kzalloc(sizeof(*iter), GFP_KERNEL);
	if (!iter)
		return -ENOMEM;

	iter->pg = ftrace_pages_start;
	iter->flags = FTRACE_ITER_ENABLED;

	ret = seq_open(file, &show_ftrace_seq_ops);
	if (!ret) {
		struct seq_file *m = file->private_data;

		m->private = iter;
	} else {
		kfree(iter);
	}

	return ret;
}

static void ftrace_filter_reset(struct ftrace_hash *hash)
{
	mutex_lock(&ftrace_lock);
	ftrace_hash_clear(hash);
	mutex_unlock(&ftrace_lock);
}

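/*
 * Common open path for set_ftrace_filter and set_ftrace_notrace: set up
 * an iterator and, for writers, a private copy of the ops' hash that is
 * committed back when the file is released.
 */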
static int
|
2011-05-02 23:29:25 +07:00
|
|
|
ftrace_regex_open(struct ftrace_ops *ops, int flag,
|
2011-04-30 07:59:51 +07:00
|
|
|
struct inode *inode, struct file *file)
|
2008-05-13 02:20:43 +07:00
|
|
|
{
|
|
|
|
struct ftrace_iterator *iter;
|
2011-05-02 23:29:25 +07:00
|
|
|
struct ftrace_hash *hash;
|
2008-05-13 02:20:43 +07:00
|
|
|
int ret = 0;
|
|
|
|
|
2008-05-13 02:20:48 +07:00
|
|
|
if (unlikely(ftrace_disabled))
|
|
|
|
return -ENODEV;
|
|
|
|
|
2008-05-13 02:20:43 +07:00
|
|
|
iter = kzalloc(sizeof(*iter), GFP_KERNEL);
|
|
|
|
if (!iter)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
2009-09-11 22:29:29 +07:00
|
|
|
if (trace_parser_get_init(&iter->parser, FTRACE_BUFF_MAX)) {
|
|
|
|
kfree(iter);
|
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
|
2011-05-02 23:29:25 +07:00
|
|
|
if (flag & FTRACE_ITER_NOTRACE)
|
|
|
|
hash = ops->notrace_hash;
|
|
|
|
else
|
|
|
|
hash = ops->filter_hash;
|
|
|
|
|
2011-05-03 04:34:47 +07:00
|
|
|
iter->ops = ops;
|
|
|
|
iter->flags = flag;
|
|
|
|
|
|
|
|
if (file->f_mode & FMODE_WRITE) {
|
|
|
|
mutex_lock(&ftrace_lock);
|
|
|
|
iter->hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, hash);
|
|
|
|
mutex_unlock(&ftrace_lock);
|
|
|
|
|
|
|
|
if (!iter->hash) {
|
|
|
|
trace_parser_put(&iter->parser);
|
|
|
|
kfree(iter);
|
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
}
|
2011-04-30 07:59:51 +07:00
|
|
|
|
2008-05-22 22:46:33 +07:00
|
|
|
mutex_lock(&ftrace_regex_lock);
|
2011-05-03 04:34:47 +07:00
|
|
|
|
2008-05-13 02:20:43 +07:00
|
|
|
if ((file->f_mode & FMODE_WRITE) &&
|
2009-07-23 10:29:30 +07:00
|
|
|
(file->f_flags & O_TRUNC))
|
2011-05-03 04:34:47 +07:00
|
|
|
ftrace_filter_reset(iter->hash);
|
2008-05-13 02:20:43 +07:00
|
|
|
|
|
|
|
if (file->f_mode & FMODE_READ) {
|
|
|
|
iter->pg = ftrace_pages_start;
|
|
|
|
|
|
|
|
ret = seq_open(file, &show_ftrace_seq_ops);
|
|
|
|
if (!ret) {
|
|
|
|
struct seq_file *m = file->private_data;
|
|
|
|
m->private = iter;
|
2009-09-22 12:54:28 +07:00
|
|
|
} else {
|
2011-05-03 04:34:47 +07:00
|
|
|
/* Failed */
|
|
|
|
free_ftrace_hash(iter->hash);
|
2009-09-22 12:54:28 +07:00
|
|
|
trace_parser_put(&iter->parser);
|
2008-05-13 02:20:43 +07:00
|
|
|
kfree(iter);
|
2009-09-22 12:54:28 +07:00
|
|
|
}
|
2008-05-13 02:20:43 +07:00
|
|
|
} else
|
|
|
|
file->private_data = iter;
|
2008-05-22 22:46:33 +07:00
|
|
|
mutex_unlock(&ftrace_regex_lock);
|
2008-05-13 02:20:43 +07:00
|
|
|
|
|
|
|
return ret;
|
|
|
|
}

static int
ftrace_filter_open(struct inode *inode, struct file *file)
{
	return ftrace_regex_open(&global_ops, FTRACE_ITER_FILTER,
				 inode, file);
}

static int
ftrace_notrace_open(struct inode *inode, struct file *file)
{
	return ftrace_regex_open(&global_ops, FTRACE_ITER_NOTRACE,
				 inode, file);
}

static loff_t
ftrace_regex_lseek(struct file *file, loff_t offset, int origin)
{
	loff_t ret;

	if (file->f_mode & FMODE_READ)
		ret = seq_lseek(file, offset, origin);
	else
		file->f_pos = ret = 1;

	return ret;
}

static int ftrace_match(char *str, char *regex, int len, int type)
{
	int matched = 0;
	int slen;

	switch (type) {
	case MATCH_FULL:
		if (strcmp(str, regex) == 0)
			matched = 1;
		break;
	case MATCH_FRONT_ONLY:
		if (strncmp(str, regex, len) == 0)
			matched = 1;
		break;
	case MATCH_MIDDLE_ONLY:
		if (strstr(str, regex))
			matched = 1;
		break;
	case MATCH_END_ONLY:
		slen = strlen(str);
		if (slen >= len && memcmp(str + slen - len, regex, len) == 0)
			matched = 1;
		break;
	}

	return matched;
}

static int
enter_record(struct ftrace_hash *hash, struct dyn_ftrace *rec, int not)
{
	struct ftrace_func_entry *entry;
	int ret = 0;

	entry = ftrace_lookup_ip(hash, rec->ip);
	if (not) {
		/* Do nothing if it doesn't exist */
		if (!entry)
			return 0;

		free_hash_entry(hash, entry);
	} else {
		/* Do nothing if it exists */
		if (entry)
			return 0;

		ret = add_hash_entry(hash, rec->ip);
	}
	return ret;
}

static int
ftrace_match_record(struct dyn_ftrace *rec, char *mod,
		    char *regex, int len, int type)
{
	char str[KSYM_SYMBOL_LEN];
	char *modname;

	kallsyms_lookup(rec->ip, NULL, NULL, &modname, str);

	if (mod) {
		/* module lookup requires matching the module */
		if (!modname || strcmp(modname, mod))
			return 0;

		/* blank search means to match all funcs in the mod */
		if (!len)
			return 1;
	}

	return ftrace_match(str, regex, len, type);
}

static int
match_records(struct ftrace_hash *hash, char *buff,
	      int len, char *mod, int not)
{
	unsigned search_len = 0;
	struct ftrace_page *pg;
	struct dyn_ftrace *rec;
	int type = MATCH_FULL;
	char *search = buff;
	int found = 0;
	int ret;

	if (len) {
		type = filter_parse_regex(buff, len, &search, &not);
		search_len = strlen(search);
	}

	mutex_lock(&ftrace_lock);

	if (unlikely(ftrace_disabled))
		goto out_unlock;

	do_for_each_ftrace_rec(pg, rec) {

		if (ftrace_match_record(rec, mod, search, search_len, type)) {
			ret = enter_record(hash, rec, not);
			if (ret < 0) {
				found = ret;
				goto out_unlock;
			}
			found = 1;
		}
	} while_for_each_ftrace_rec();
 out_unlock:
	mutex_unlock(&ftrace_lock);

	return found;
}

static int
ftrace_match_records(struct ftrace_hash *hash, char *buff, int len)
{
	return match_records(hash, buff, len, NULL, 0);
}

static int
ftrace_match_module_records(struct ftrace_hash *hash, char *buff, char *mod)
{
	int not = 0;

	/* blank or '*' mean the same */
	if (strcmp(buff, "*") == 0)
		buff[0] = 0;

	/* handle the case of 'don't filter this module' */
	if (strcmp(buff, "!") == 0 || strcmp(buff, "!*") == 0) {
		buff[0] = 0;
		not = 1;
	}

	return match_records(hash, buff, strlen(buff), mod, not);
}

/*
 * We register the module command as a template to show others how
 * to register a command as well.
 */

static int
ftrace_mod_callback(char *func, char *cmd, char *param, int enable)
{
	struct ftrace_ops *ops = &global_ops;
	struct ftrace_hash *hash;
	char *mod;
	int ret = -EINVAL;

	/*
	 * cmd == 'mod' because we only registered this func
	 * for the 'mod' ftrace_func_command.
	 * But if you register one func with multiple commands,
	 * you can tell which command was used by the cmd
	 * parameter.
	 */

	/* we must have a module name */
	if (!param)
		return ret;

	mod = strsep(&param, ":");
	if (!strlen(mod))
		return ret;

	if (enable)
		hash = ops->filter_hash;
	else
		hash = ops->notrace_hash;

	ret = ftrace_match_module_records(hash, func, mod);
	if (!ret)
		ret = -EINVAL;
	if (ret < 0)
		return ret;

	return 0;
}

static struct ftrace_func_command ftrace_mod_cmd = {
	.name			= "mod",
	.func			= ftrace_mod_callback,
};

static int __init ftrace_mod_cmd_init(void)
{
	return register_ftrace_command(&ftrace_mod_cmd);
}
device_initcall(ftrace_mod_cmd_init);
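
The mod command registered above doubles as a template for adding new set_ftrace_filter commands. A minimal sketch follows; it is illustrative only and not part of ftrace.c. The command name "foo", the callback ftrace_foo_callback() and its pr_info() body are made up, while struct ftrace_func_command, the callback signature and register_ftrace_command() are exactly the interfaces used by ftrace_mod_cmd above. Under those assumptions, writing "<function>:foo:<param>" to set_ftrace_filter (enable=1) or set_ftrace_notrace (enable=0) would dispatch to this callback, just as "<function>:mod:<module>" reaches ftrace_mod_callback().

/*
 * Hypothetical example command (not part of ftrace.c). Writing
 * "do_page_fault:foo:bar" to set_ftrace_filter would call this with
 * func="do_page_fault", cmd="foo", param="bar", enable=1.
 */
static int ftrace_foo_callback(char *func, char *cmd, char *param, int enable)
{
	if (!param)
		return -EINVAL;

	pr_info("foo command: func=%s cmd=%s param=%s enable=%d\n",
		func, cmd, param, enable);
	return 0;
}

static struct ftrace_func_command ftrace_foo_cmd = {
	.name			= "foo",
	.func			= ftrace_foo_callback,
};

static int __init ftrace_foo_cmd_init(void)
{
	/* Same registration pattern as ftrace_mod_cmd above. */
	return register_ftrace_command(&ftrace_foo_cmd);
}
device_initcall(ftrace_foo_cmd_init);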

ftrace: trace different functions with a different tracer

Impact: new feature

Currently, the function tracer only gives you the ability to hook
a tracer to all functions being traced. Dynamic function tracing
allows you to pick and choose which of those functions will be
traced, but all functions being traced will call all tracers that
registered with the function tracer.

This patch adds a new feature that allows a tracer to hook to specific
functions, even when all functions are being traced. It allows
different functions to call different tracer hooks.

The way this is accomplished is by a special function that hooks
into the function tracer and sets up a hash table knowing which
tracer hook to call for which function. This is the most general
and easiest method to accomplish this. Later, an arch may choose
to supply its own method of changing the mcount call of a function
to call a different tracer. But that will be an exercise for the
future.

To register a function:

 struct ftrace_hook_ops {
	void (*func)(unsigned long ip,
		     unsigned long parent_ip,
		     void **data);
	int (*callback)(unsigned long ip, void **data);
	void (*free)(void **data);
 };

 int register_ftrace_function_hook(char *glob, struct ftrace_hook_ops *ops,
				   void *data);

glob is a simple glob to search for the functions to hook.
ops is a pointer to the operations (listed below).
data is the default data to be passed to the hook functions when traced.

ops:
 func is the hook function to call when the functions are traced.
 callback is a callback function that is called when setting up the hash;
   if the tracer needs to do something special for each function being
   traced and wants to give each function its own data, the address of
   the entry data is passed to this callback so that it may update the
   entry to whatever it would like.
 free is a callback for when the entry is freed. In case the tracer
   allocated any data, it is given the chance to free it.

To unregister we have three functions:

 void
 unregister_ftrace_function_hook(char *glob, struct ftrace_hook_ops *ops,
				 void *data)

This will unregister all hooks that match glob, point to ops, and
have their data matching data. (Note: if glob is NULL, blank or '*',
all functions will be tested.)

 void
 unregister_ftrace_function_hook_func(char *glob,
				      struct ftrace_hook_ops *ops)

This will unregister all functions matching glob that have an entry
pointing to ops.

 void unregister_ftrace_function_hook_all(char *glob)

This simply unregisters all funcs.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
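
The interface described in the changelog above was later renamed from "hook" to "probe": in the code that follows it appears as struct ftrace_probe_ops, register_ftrace_function_probe() and trace_probe_ops. A minimal usage sketch is given below; it is illustrative and not part of ftrace.c. The names prefixed my_ and the "schedule*" glob are made up, the per-entry counter mimics the traceon/traceoff probes in trace_functions.c, and the unregister variant plus the return-value comment follow the upstream implementations of unregister_ftrace_function_probe_func() and register_ftrace_function_probe(), which are assumed here rather than shown in this excerpt.

/*
 * Hypothetical usage sketch (not part of ftrace.c): count how often any
 * function matching "schedule*" is hit, then tear the probes down.
 */
static void my_count_probe(unsigned long ip, unsigned long parent_ip,
			   void **data)
{
	/* @data is the address of this entry's data slot; use it as a counter */
	long *count = (long *)data;

	(*count)++;
}

static struct ftrace_probe_ops my_count_probe_ops = {
	.func	= my_count_probe,
	/* .callback and .free left NULL: no per-entry allocation to manage */
};

static int my_count_probe_start(void)
{
	/* Assumed to return the number of functions probed, or -errno. */
	int ret = register_ftrace_function_probe("schedule*",
						 &my_count_probe_ops, NULL);

	return ret < 0 ? ret : 0;
}

static void my_count_probe_stop(void)
{
	/* Drop every entry matching the glob that points at this ops. */
	unregister_ftrace_function_probe_func("schedule*", &my_count_probe_ops);
}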

static void
function_trace_probe_call(unsigned long ip, unsigned long parent_ip)
{
	struct ftrace_func_probe *entry;
	struct hlist_head *hhd;
	struct hlist_node *n;
	unsigned long key;

	key = hash_long(ip, FTRACE_HASH_BITS);

	hhd = &ftrace_func_hash[key];

	if (hlist_empty(hhd))
		return;

	/*
	 * Disable preemption for these calls to prevent a RCU grace
	 * period. This syncs the hash iteration and freeing of items
	 * on the hash. rcu_read_lock is too dangerous here.
	 */
	preempt_disable_notrace();
	hlist_for_each_entry_rcu(entry, n, hhd, node) {
		if (entry->ip == ip)
			entry->ops->func(ip, parent_ip, &entry->data);
	}
	preempt_enable_notrace();
}

static struct ftrace_ops trace_probe_ops __read_mostly =
{
	.func		= function_trace_probe_call,
};

static int ftrace_probe_registered;

static void __enable_ftrace_function_probe(void)
{
	int ret;
	int i;

	if (ftrace_probe_registered)
		return;

	for (i = 0; i < FTRACE_FUNC_HASHSIZE; i++) {
		struct hlist_head *hhd = &ftrace_func_hash[i];
		if (hhd->first)
			break;
	}
	/* Nothing registered? */
	if (i == FTRACE_FUNC_HASHSIZE)
		return;

	ret = __register_ftrace_function(&trace_probe_ops);
	if (!ret)
		ret = ftrace_startup(&trace_probe_ops, 0);

	ftrace_probe_registered = 1;
}

static void __disable_ftrace_function_probe(void)
{
	int ret;
	int i;

	if (!ftrace_probe_registered)
		return;

	for (i = 0; i < FTRACE_FUNC_HASHSIZE; i++) {
		struct hlist_head *hhd = &ftrace_func_hash[i];
		if (hhd->first)
			return;
	}

	/* no more funcs left */
	ret = __unregister_ftrace_function(&trace_probe_ops);
	if (!ret)
		ftrace_shutdown(&trace_probe_ops, 0);

	ftrace_probe_registered = 0;
}

static void ftrace_free_entry_rcu(struct rcu_head *rhp)
{
	struct ftrace_func_probe *entry =
		container_of(rhp, struct ftrace_func_probe, rcu);

	if (entry->ops->free)
		entry->ops->free(&entry->data);
	kfree(entry);
}

int
register_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
void *data)
|
|
|
|
{
|
2009-02-18 00:32:04 +07:00
|
|
|
struct ftrace_func_probe *entry;
|
2009-02-15 03:29:06 +07:00
|
|
|
struct ftrace_page *pg;
|
|
|
|
struct dyn_ftrace *rec;
|
|
|
|
int type, len, not;
|
2009-02-17 23:20:26 +07:00
|
|
|
unsigned long key;
|
2009-02-15 03:29:06 +07:00
|
|
|
int count = 0;
|
|
|
|
char *search;
|
|
|
|
|
2009-09-25 02:31:51 +07:00
|
|
|
type = filter_parse_regex(glob, strlen(glob), &search, ¬);
|
2009-02-15 03:29:06 +07:00
|
|
|
len = strlen(search);
|
|
|
|
|
2009-02-18 00:32:04 +07:00
|
|
|
/* we do not support '!' for function probes */
|
2009-02-15 03:29:06 +07:00
|
|
|
if (WARN_ON(not))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
mutex_lock(&ftrace_lock);
|
|
|
|
|
2011-04-22 10:16:46 +07:00
|
|
|
if (unlikely(ftrace_disabled))
|
|
|
|
goto out_unlock;
|
2009-02-15 03:29:06 +07:00
|
|
|
|
2011-04-22 10:16:46 +07:00
|
|
|
do_for_each_ftrace_rec(pg, rec) {
|
2009-02-15 03:29:06 +07:00
|
|
|
|
2011-04-29 07:32:08 +07:00
|
|
|
if (!ftrace_match_record(rec, NULL, search, len, type))
|
2009-02-15 03:29:06 +07:00
|
|
|
continue;
|
|
|
|
|
|
|
|
entry = kmalloc(sizeof(*entry), GFP_KERNEL);
|
|
|
|
if (!entry) {
|
2009-02-18 00:32:04 +07:00
|
|
|
/* If we did not process any, then return error */
|
2009-02-15 03:29:06 +07:00
|
|
|
if (!count)
|
|
|
|
count = -ENOMEM;
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
|
|
|
|
|
|
|
count++;
|
|
|
|
|
|
|
|
entry->data = data;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The caller might want to do something special
|
|
|
|
* for each function we find. We call the callback
|
|
|
|
* to give the caller an opportunity to do so.
|
|
|
|
*/
|
|
|
|
if (ops->callback) {
|
|
|
|
if (ops->callback(rec->ip, &entry->data) < 0) {
|
|
|
|
/* caller does not like this func */
|
|
|
|
kfree(entry);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
entry->ops = ops;
|
|
|
|
entry->ip = rec->ip;
|
|
|
|
|
|
|
|
key = hash_long(entry->ip, FTRACE_HASH_BITS);
|
|
|
|
hlist_add_head_rcu(&entry->node, &ftrace_func_hash[key]);
|
|
|
|
|
|
|
|
} while_for_each_ftrace_rec();
|
2009-02-18 00:32:04 +07:00
|
|
|
__enable_ftrace_function_probe();
|
2009-02-15 03:29:06 +07:00
|
|
|
|
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&ftrace_lock);
|
|
|
|
|
|
|
|
return count;
|
|
|
|
}
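A caller-side sketch (again hypothetical, reusing my_counter_ops from the earlier sketch) of how the return value of register_ftrace_function_probe() is typically handled: a negative value means nothing was hooked (-EINVAL for a '!' glob, -ENOMEM if no entry could be allocated), otherwise it is the number of functions that matched and were hooked.

static int __init my_counter_init(void)
{
	int hooked;

	/* "vfs_*" is an illustrative glob only */
	hooked = register_ftrace_function_probe("vfs_*", &my_counter_ops, NULL);
	if (hooked < 0)
		return hooked;

	pr_info("my_counter: hooked %d functions\n", hooked);
	return 0;
}

static void __exit my_counter_exit(void)
{
	/*
	 * Match on glob + ops only: the setup callback replaced the per-entry
	 * data, so the data-matching variant would not find these entries.
	 * unregister_ftrace_function_probe_func() is assumed here to be the
	 * renamed counterpart of unregister_ftrace_function_hook_func() from
	 * the changelog.
	 */
	unregister_ftrace_function_probe_func("vfs_*", &my_counter_ops);
}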
|
|
|
|
|
|
|
|
enum {
|
2009-02-18 00:32:04 +07:00
|
|
|
PROBE_TEST_FUNC = 1,
|
|
|
|
PROBE_TEST_DATA = 2
|
2009-02-15 03:29:06 +07:00
|
|
|
};
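These flags choose which fields the scan in __unregister_ftrace_function_probe() compares for each hash entry; the glob, when given, is always matched against the symbol name. As a rough guide (the first form appears verbatim below; the other two are how the *_func and *_all variants from the changelog are presumably implemented):

	__unregister_ftrace_function_probe(glob, ops, data,
					   PROBE_TEST_FUNC | PROBE_TEST_DATA);	/* glob + ops + data */
	__unregister_ftrace_function_probe(glob, ops, NULL, PROBE_TEST_FUNC);	/* glob + ops */
	__unregister_ftrace_function_probe(glob, NULL, NULL, 0);		/* glob only */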
|
|
|
|
|
|
|
|
static void
|
2009-02-18 00:32:04 +07:00
|
|
|
__unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
|
2009-02-15 03:29:06 +07:00
|
|
|
void *data, int flags)
|
|
|
|
{
|
2009-02-18 00:32:04 +07:00
|
|
|
struct ftrace_func_probe *entry;
|
2009-02-15 03:29:06 +07:00
|
|
|
struct hlist_node *n, *tmp;
|
|
|
|
char str[KSYM_SYMBOL_LEN];
|
|
|
|
int type = MATCH_FULL;
|
|
|
|
int i, len = 0;
|
|
|
|
char *search;
|
|
|
|
|
2009-09-15 17:06:30 +07:00
|
|
|
if (glob && (strcmp(glob, "*") == 0 || !strlen(glob)))
|
2009-02-15 03:29:06 +07:00
|
|
|
glob = NULL;
|
2009-09-15 17:06:30 +07:00
|
|
|
else if (glob) {
|
2009-02-15 03:29:06 +07:00
|
|
|
int not;
|
|
|
|
|
2009-09-25 02:31:51 +07:00
|
|
|
type = filter_parse_regex(glob, strlen(glob), &search, ¬);
|
2009-02-15 03:29:06 +07:00
|
|
|
len = strlen(search);
|
|
|
|
|
2009-02-18 00:32:04 +07:00
|
|
|
/* we do not support '!' for function probes */
|
2009-02-15 03:29:06 +07:00
|
|
|
if (WARN_ON(not))
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
mutex_lock(&ftrace_lock);
|
|
|
|
for (i = 0; i < FTRACE_FUNC_HASHSIZE; i++) {
|
|
|
|
struct hlist_head *hhd = &ftrace_func_hash[i];
|
|
|
|
|
|
|
|
hlist_for_each_entry_safe(entry, n, tmp, hhd, node) {
|
|
|
|
|
|
|
|
/* break up if statements for readability */
|
2009-02-18 00:32:04 +07:00
|
|
|
if ((flags & PROBE_TEST_FUNC) && entry->ops != ops)
|
2009-02-15 03:29:06 +07:00
|
|
|
continue;
|
|
|
|
|
2009-02-18 00:32:04 +07:00
|
|
|
if ((flags & PROBE_TEST_DATA) && entry->data != data)
|
2009-02-15 03:29:06 +07:00
|
|
|
continue;
|
|
|
|
|
|
|
|
/* do this last, since it is the most expensive */
|
|
|
|
if (glob) {
|
|
|
|
kallsyms_lookup(entry->ip, NULL, NULL,
|
|
|
|
NULL, str);
|
|
|
|
if (!ftrace_match(str, glob, len, type))
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
hlist_del(&entry->node);
|
|
|
|
call_rcu(&entry->rcu, ftrace_free_entry_rcu);
|
|
|
|
}
|
|
|
|
}
|
2009-02-18 00:32:04 +07:00
|
|
|
__disable_ftrace_function_probe();
|
2009-02-15 03:29:06 +07:00
|
|
|
mutex_unlock(&ftrace_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2009-02-18 00:32:04 +07:00
|
|
|
unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
|
2009-02-15 03:29:06 +07:00
|
|
|
void *data)
|
|
|
|
{
|
2009-02-18 00:32:04 +07:00
|
|
|
__unregister_ftrace_function_probe(glob, ops, data,
|
|
|
|
PROBE_TEST_FUNC | PROBE_TEST_DATA);
|
ftrace: trace different functions with a different tracer
2009-02-15 03:29:06 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2009-02-18 00:32:04 +07:00
|
|
|
unregister_ftrace_function_probe_func(char *glob, struct ftrace_probe_ops *ops)
|
ftrace: trace different functions with a different tracer
2009-02-15 03:29:06 +07:00
|
|
|
{
|
2009-02-18 00:32:04 +07:00
|
|
|
__unregister_ftrace_function_probe(glob, ops, NULL, PROBE_TEST_FUNC);
|
ftrace: trace different functions with a different tracer
2009-02-15 03:29:06 +07:00
|
|
|
}
|
|
|
|
|
2009-02-18 00:32:04 +07:00
|
|
|
void unregister_ftrace_function_probe_all(char *glob)
|
ftrace: trace different functions with a different tracer
2009-02-15 03:29:06 +07:00
|
|
|
{
|
2009-02-18 00:32:04 +07:00
|
|
|
__unregister_ftrace_function_probe(glob, NULL, NULL, 0);
|
ftrace: trace different functions with a different tracer
2009-02-15 03:29:06 +07:00
|
|
|
}
|
|
|
|
|
2009-02-14 12:40:25 +07:00
|
|
|
static LIST_HEAD(ftrace_commands);
|
|
|
|
static DEFINE_MUTEX(ftrace_cmd_mutex);
|
|
|
|
|
|
|
|
int register_ftrace_command(struct ftrace_func_command *cmd)
|
|
|
|
{
|
|
|
|
struct ftrace_func_command *p;
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
mutex_lock(&ftrace_cmd_mutex);
|
|
|
|
list_for_each_entry(p, &ftrace_commands, list) {
|
|
|
|
if (strcmp(cmd->name, p->name) == 0) {
|
|
|
|
ret = -EBUSY;
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
list_add(&cmd->list, &ftrace_commands);
|
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&ftrace_cmd_mutex);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
int unregister_ftrace_command(struct ftrace_func_command *cmd)
|
|
|
|
{
|
|
|
|
struct ftrace_func_command *p, *n;
|
|
|
|
int ret = -ENODEV;
|
|
|
|
|
|
|
|
mutex_lock(&ftrace_cmd_mutex);
|
|
|
|
list_for_each_entry_safe(p, n, &ftrace_commands, list) {
|
|
|
|
if (strcmp(cmd->name, p->name) == 0) {
|
|
|
|
ret = 0;
|
|
|
|
list_del_init(&p->list);
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&ftrace_cmd_mutex);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2011-05-03 04:34:47 +07:00
|
|
|
static int ftrace_process_regex(struct ftrace_hash *hash,
|
|
|
|
char *buff, int len, int enable)
|
2009-02-14 05:08:48 +07:00
|
|
|
{
|
2009-02-14 12:40:25 +07:00
|
|
|
char *func, *command, *next = buff;
|
2009-02-17 23:20:26 +07:00
|
|
|
struct ftrace_func_command *p;
|
2011-06-01 18:18:47 +07:00
|
|
|
int ret = -EINVAL;
|
2009-02-14 05:08:48 +07:00
|
|
|
|
|
|
|
func = strsep(&next, ":");
|
|
|
|
|
|
|
|
if (!next) {
|
2011-04-30 07:59:51 +07:00
|
|
|
ret = ftrace_match_records(hash, func, len);
|
2011-04-30 02:12:32 +07:00
|
|
|
if (!ret)
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
|
|
|
return 0;
|
2009-02-14 05:08:48 +07:00
|
|
|
}
|
|
|
|
|
2009-02-14 12:40:25 +07:00
|
|
|
/* command found */
|
2009-02-14 05:08:48 +07:00
|
|
|
|
|
|
|
command = strsep(&next, ":");
|
|
|
|
|
2009-02-14 12:40:25 +07:00
|
|
|
mutex_lock(&ftrace_cmd_mutex);
|
|
|
|
list_for_each_entry(p, &ftrace_commands, list) {
|
|
|
|
if (strcmp(p->name, command) == 0) {
|
|
|
|
ret = p->func(func, command, next, enable);
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
2009-02-14 05:08:48 +07:00
|
|
|
}
|
2009-02-14 12:40:25 +07:00
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&ftrace_cmd_mutex);
|
2009-02-14 05:08:48 +07:00
|
|
|
|
2009-02-14 12:40:25 +07:00
|
|
|
return ret;
|
2009-02-14 05:08:48 +07:00
|
|
|
}
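For illustration only (a minimal sketch, not part of this file): a tracer can plug its own command into the dispatch above, so that writing "pattern:mycmd:arg" into set_ftrace_filter ends up calling it. The command name "mycmd" and the my_cmd_* identifiers are assumptions; the callback signature follows the p->func() call site above.

#include <linux/ftrace.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int my_cmd_func(char *func, char *cmd, char *param, int enable)
{
	/*
	 * func:   the glob written before the first ':'
	 * cmd:    "mycmd"
	 * param:  whatever followed the second ':' (may be NULL)
	 * enable: 1 for set_ftrace_filter, 0 for set_ftrace_notrace
	 */
	pr_info("ftrace command %s on %s (param %s)\n",
		cmd, func, param ? param : "none");
	return 0;
}

static struct ftrace_func_command my_cmd = {
	.name	= "mycmd",
	.func	= my_cmd_func,
};

static int __init my_cmd_init(void)
{
	return register_ftrace_command(&my_cmd);
}
device_initcall(my_cmd_init);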
|
|
|
|
|
2008-05-13 02:20:51 +07:00
|
|
|
static ssize_t
|
2008-05-22 22:46:33 +07:00
|
|
|
ftrace_regex_write(struct file *file, const char __user *ubuf,
|
|
|
|
size_t cnt, loff_t *ppos, int enable)
|
2008-05-13 02:20:43 +07:00
|
|
|
{
|
|
|
|
struct ftrace_iterator *iter;
|
2009-09-11 22:29:29 +07:00
|
|
|
struct trace_parser *parser;
|
|
|
|
ssize_t ret, read;
|
2008-05-13 02:20:43 +07:00
|
|
|
|
2009-09-22 12:52:20 +07:00
|
|
|
if (!cnt)
|
2008-05-13 02:20:43 +07:00
|
|
|
return 0;
|
|
|
|
|
2008-05-22 22:46:33 +07:00
|
|
|
mutex_lock(&ftrace_regex_lock);
|
2008-05-13 02:20:43 +07:00
|
|
|
|
2011-04-22 10:16:46 +07:00
|
|
|
ret = -ENODEV;
|
|
|
|
if (unlikely(ftrace_disabled))
|
|
|
|
goto out_unlock;
|
|
|
|
|
2008-05-13 02:20:43 +07:00
|
|
|
if (file->f_mode & FMODE_READ) {
|
|
|
|
struct seq_file *m = file->private_data;
|
|
|
|
iter = m->private;
|
|
|
|
} else
|
|
|
|
iter = file->private_data;
|
|
|
|
|
2009-09-11 22:29:29 +07:00
|
|
|
parser = &iter->parser;
|
|
|
|
read = trace_get_user(parser, ubuf, cnt, ppos);
|
2008-05-13 02:20:43 +07:00
|
|
|
|
2009-09-22 12:52:20 +07:00
|
|
|
if (read >= 0 && trace_parser_loaded(parser) &&
|
2009-09-11 22:29:29 +07:00
|
|
|
!trace_parser_cont(parser)) {
|
2011-05-03 04:34:47 +07:00
|
|
|
ret = ftrace_process_regex(iter->hash, parser->buffer,
|
2009-09-11 22:29:29 +07:00
|
|
|
parser->idx, enable);
|
2009-12-08 10:15:30 +07:00
|
|
|
trace_parser_clear(parser);
|
2008-05-13 02:20:43 +07:00
|
|
|
if (ret)
|
2009-11-03 07:55:38 +07:00
|
|
|
goto out_unlock;
|
2009-08-11 22:29:04 +07:00
|
|
|
}
|
2008-05-13 02:20:43 +07:00
|
|
|
|
|
|
|
ret = read;
|
2009-11-03 07:55:38 +07:00
|
|
|
out_unlock:
|
2009-09-11 22:29:29 +07:00
|
|
|
mutex_unlock(&ftrace_regex_lock);
|
2009-11-03 07:55:38 +07:00
|
|
|
|
2008-05-13 02:20:43 +07:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2008-05-22 22:46:33 +07:00
|
|
|
static ssize_t
|
|
|
|
ftrace_filter_write(struct file *file, const char __user *ubuf,
|
|
|
|
size_t cnt, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return ftrace_regex_write(file, ubuf, cnt, ppos, 1);
|
|
|
|
}
|
|
|
|
|
|
|
|
static ssize_t
|
|
|
|
ftrace_notrace_write(struct file *file, const char __user *ubuf,
|
|
|
|
size_t cnt, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return ftrace_regex_write(file, ubuf, cnt, ppos, 0);
|
|
|
|
}
|
|
|
|
|
2011-05-03 04:34:47 +07:00
|
|
|
static int
|
2011-05-02 23:29:25 +07:00
|
|
|
ftrace_set_regex(struct ftrace_ops *ops, unsigned char *buf, int len,
|
|
|
|
int reset, int enable)
|
2008-05-22 22:46:33 +07:00
|
|
|
{
|
2011-05-03 04:34:47 +07:00
|
|
|
struct ftrace_hash **orig_hash;
|
2011-05-02 23:29:25 +07:00
|
|
|
struct ftrace_hash *hash;
|
2011-05-03 04:34:47 +07:00
|
|
|
int ret;
|
2011-05-02 23:29:25 +07:00
|
|
|
|
2011-05-06 09:54:01 +07:00
|
|
|
/* All global ops use the global ops filters */
|
|
|
|
if (ops->flags & FTRACE_OPS_FL_GLOBAL)
|
|
|
|
ops = &global_ops;
|
|
|
|
|
2008-05-22 22:46:33 +07:00
|
|
|
if (unlikely(ftrace_disabled))
|
2011-05-03 04:34:47 +07:00
|
|
|
return -ENODEV;
|
2008-05-22 22:46:33 +07:00
|
|
|
|
2011-05-02 23:29:25 +07:00
|
|
|
if (enable)
|
2011-05-03 04:34:47 +07:00
|
|
|
orig_hash = &ops->filter_hash;
|
2011-05-02 23:29:25 +07:00
|
|
|
else
|
2011-05-03 04:34:47 +07:00
|
|
|
orig_hash = &ops->notrace_hash;
|
|
|
|
|
|
|
|
hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, *orig_hash);
|
|
|
|
if (!hash)
|
|
|
|
return -ENOMEM;
|
2011-05-02 23:29:25 +07:00
|
|
|
|
2008-05-22 22:46:33 +07:00
|
|
|
mutex_lock(&ftrace_regex_lock);
|
|
|
|
if (reset)
|
2011-04-30 07:59:51 +07:00
|
|
|
ftrace_filter_reset(hash);
|
2008-05-22 22:46:33 +07:00
|
|
|
if (buf)
|
2011-04-30 07:59:51 +07:00
|
|
|
ftrace_match_records(hash, buf, len);
|
2011-05-03 04:34:47 +07:00
|
|
|
|
|
|
|
mutex_lock(&ftrace_lock);
|
|
|
|
ret = ftrace_hash_move(orig_hash, hash);
|
|
|
|
mutex_unlock(&ftrace_lock);
|
|
|
|
|
2008-05-22 22:46:33 +07:00
|
|
|
mutex_unlock(&ftrace_regex_lock);
|
2011-05-03 04:34:47 +07:00
|
|
|
|
|
|
|
free_ftrace_hash(hash);
|
|
|
|
return ret;
|
2008-05-22 22:46:33 +07:00
|
|
|
}
|
|
|
|
|
2008-05-13 02:20:45 +07:00
|
|
|
/**
|
|
|
|
* ftrace_set_filter - set a function to filter on in ftrace
|
2011-05-06 09:54:01 +07:00
|
|
|
* @ops - the ops to set the filter with
|
|
|
|
* @buf - the string that holds the function filter text.
|
|
|
|
* @len - the length of the string.
|
|
|
|
* @reset - non zero to reset all filters before applying this filter.
|
|
|
|
*
|
|
|
|
* Filters denote which functions should be enabled when tracing is enabled.
|
|
|
|
* If @buf is NULL and reset is set, all functions will be enabled for tracing.
|
|
|
|
*/
|
|
|
|
void ftrace_set_filter(struct ftrace_ops *ops, unsigned char *buf,
|
|
|
|
int len, int reset)
|
|
|
|
{
|
|
|
|
ftrace_set_regex(ops, buf, len, reset, 1);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(ftrace_set_filter);
|
|
|
|
|
|
|
|
/**
|
|
|
|
* ftrace_set_notrace - set a function to not trace in ftrace
|
|
|
|
* @ops - the ops to set the notrace filter with
|
|
|
|
* @buf - the string that holds the function notrace text.
|
|
|
|
* @len - the length of the string.
|
|
|
|
* @reset - non zero to reset all filters before applying this filter.
|
|
|
|
*
|
|
|
|
* Notrace Filters denote which functions should not be enabled when tracing
|
|
|
|
* is enabled. If @buf is NULL and reset is set, all functions will be enabled
|
|
|
|
* for tracing.
|
|
|
|
*/
|
|
|
|
void ftrace_set_notrace(struct ftrace_ops *ops, unsigned char *buf,
|
|
|
|
int len, int reset)
|
|
|
|
{
|
|
|
|
ftrace_set_regex(ops, buf, len, reset, 0);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(ftrace_set_notrace);
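A hedged usage sketch of the per-ops filter API exported above (not code from this file; my_ops, my_trace_call and the "vfs_*" pattern are illustrative assumptions):

#include <linux/ftrace.h>
#include <linux/init.h>

static void my_trace_call(unsigned long ip, unsigned long parent_ip)
{
	/* runs for every traced function that passes my_ops's filter hash */
}

static struct ftrace_ops my_ops = {
	.func = my_trace_call,
};

static unsigned char my_pattern[] = "vfs_*";

static int __init my_tracer_init(void)
{
	/* limit my_ops to functions matching "vfs_*", resetting old filters */
	ftrace_set_filter(&my_ops, my_pattern, sizeof(my_pattern) - 1, 1);
	return register_ftrace_function(&my_ops);
}
device_initcall(my_tracer_init);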
|
|
|
|
/**
|
|
|
|
* ftrace_set_global_filter - set a function to filter on with the global tracers
|
|
|
|
|
2008-05-13 02:20:45 +07:00
|
|
|
* @buf - the string that holds the function filter text.
|
|
|
|
* @len - the length of the string.
|
|
|
|
* @reset - non zero to reset all filters before applying this filter.
|
|
|
|
*
|
|
|
|
* Filters denote which functions should be enabled when tracing is enabled.
|
|
|
|
* If @buf is NULL and reset is set, all functions will be enabled for tracing.
|
|
|
|
*/
|
2011-05-06 09:54:01 +07:00
|
|
|
void ftrace_set_global_filter(unsigned char *buf, int len, int reset)
|
2008-05-13 02:20:45 +07:00
|
|
|
{
|
2011-05-02 23:29:25 +07:00
|
|
|
ftrace_set_regex(&global_ops, buf, len, reset, 1);
|
2008-05-22 22:46:33 +07:00
|
|
|
}
|
2011-05-06 09:54:01 +07:00
|
|
|
EXPORT_SYMBOL_GPL(ftrace_set_global_filter);
|
2008-05-13 02:20:48 +07:00
|
|
|
|
2008-05-22 22:46:33 +07:00
|
|
|
/**
|
|
|
|
* ftrace_set_global_notrace - set a function to not trace with the global tracers
|
2011-05-06 09:54:01 +07:00
|
|
|
|
2008-05-22 22:46:33 +07:00
|
|
|
* @buf - the string that holds the function notrace text.
|
|
|
|
* @len - the length of the string.
|
|
|
|
* @reset - non zero to reset all filters before applying this filter.
|
|
|
|
*
|
|
|
|
* Notrace Filters denote which functions should not be enabled when tracing
|
|
|
|
* is enabled. If @buf is NULL and reset is set, all functions will be enabled
|
|
|
|
* for tracing.
|
|
|
|
*/
|
2011-05-06 09:54:01 +07:00
|
|
|
void ftrace_set_global_notrace(unsigned char *buf, int len, int reset)
|
2008-05-22 22:46:33 +07:00
|
|
|
{
|
2011-05-02 23:29:25 +07:00
|
|
|
ftrace_set_regex(&global_ops, buf, len, reset, 0);
|
2008-05-13 02:20:45 +07:00
|
|
|
}
|
2011-05-06 09:54:01 +07:00
|
|
|
EXPORT_SYMBOL_GPL(ftrace_set_global_notrace);
|
2008-05-13 02:20:45 +07:00
|
|
|
|
2009-05-29 00:37:24 +07:00
|
|
|
/*
|
|
|
|
* command line interface to allow users to set filters on boot up.
|
|
|
|
*/
|
|
|
|
#define FTRACE_FILTER_SIZE COMMAND_LINE_SIZE
|
|
|
|
static char ftrace_notrace_buf[FTRACE_FILTER_SIZE] __initdata;
|
|
|
|
static char ftrace_filter_buf[FTRACE_FILTER_SIZE] __initdata;
|
|
|
|
|
|
|
|
static int __init set_ftrace_notrace(char *str)
|
|
|
|
{
|
|
|
|
strncpy(ftrace_notrace_buf, str, FTRACE_FILTER_SIZE);
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
__setup("ftrace_notrace=", set_ftrace_notrace);
|
|
|
|
|
|
|
|
static int __init set_ftrace_filter(char *str)
|
|
|
|
{
|
|
|
|
strncpy(ftrace_filter_buf, str, FTRACE_FILTER_SIZE);
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
__setup("ftrace_filter=", set_ftrace_filter);
|
|
|
|
|
2009-10-13 03:17:21 +07:00
|
|
|
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
2009-11-05 10:16:17 +07:00
|
|
|
static char ftrace_graph_buf[FTRACE_FILTER_SIZE] __initdata;
|
2010-03-06 08:02:19 +07:00
|
|
|
static int ftrace_set_func(unsigned long *array, int *idx, char *buffer);
|
|
|
|
|
2009-10-13 03:17:21 +07:00
|
|
|
static int __init set_graph_function(char *str)
|
|
|
|
{
|
2009-10-15 01:43:39 +07:00
|
|
|
strlcpy(ftrace_graph_buf, str, FTRACE_FILTER_SIZE);
|
2009-10-13 03:17:21 +07:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
__setup("ftrace_graph_filter=", set_graph_function);
|
|
|
|
|
|
|
|
static void __init set_ftrace_early_graph(char *buf)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
char *func;
|
|
|
|
|
|
|
|
while (buf) {
|
|
|
|
func = strsep(&buf, ",");
|
|
|
|
/* we allow only one expression at a time */
|
|
|
|
ret = ftrace_set_func(ftrace_graph_funcs, &ftrace_graph_count,
|
|
|
|
func);
|
|
|
|
if (ret)
|
|
|
|
printk(KERN_DEBUG "ftrace: function %s not "
|
|
|
|
"traceable\n", func);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
|
|
|
|
|
2011-05-02 23:29:25 +07:00
|
|
|
static void __init
|
|
|
|
set_ftrace_early_filter(struct ftrace_ops *ops, char *buf, int enable)
|
2009-05-29 00:37:24 +07:00
|
|
|
{
|
|
|
|
char *func;
|
|
|
|
|
|
|
|
while (buf) {
|
|
|
|
func = strsep(&buf, ",");
|
2011-05-02 23:29:25 +07:00
|
|
|
ftrace_set_regex(ops, func, strlen(func), 0, enable);
|
2009-05-29 00:37:24 +07:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void __init set_ftrace_early_filters(void)
|
|
|
|
{
|
|
|
|
if (ftrace_filter_buf[0])
|
2011-05-02 23:29:25 +07:00
|
|
|
set_ftrace_early_filter(&global_ops, ftrace_filter_buf, 1);
|
2009-05-29 00:37:24 +07:00
|
|
|
if (ftrace_notrace_buf[0])
|
2011-05-02 23:29:25 +07:00
|
|
|
set_ftrace_early_filter(&global_ops, ftrace_notrace_buf, 0);
|
2009-10-13 03:17:21 +07:00
|
|
|
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
|
|
|
if (ftrace_graph_buf[0])
|
|
|
|
set_ftrace_early_graph(ftrace_graph_buf);
|
|
|
|
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
|
2009-05-29 00:37:24 +07:00
|
|
|
}
|
|
|
|
|
2008-05-13 02:20:51 +07:00
|
|
|
static int
|
2011-04-30 07:59:51 +07:00
|
|
|
ftrace_regex_release(struct inode *inode, struct file *file)
|
2008-05-13 02:20:43 +07:00
|
|
|
{
|
|
|
|
struct seq_file *m = (struct seq_file *)file->private_data;
|
|
|
|
struct ftrace_iterator *iter;
|
2011-05-03 04:34:47 +07:00
|
|
|
struct ftrace_hash **orig_hash;
|
2009-09-11 22:29:29 +07:00
|
|
|
struct trace_parser *parser;
|
2011-05-04 00:25:24 +07:00
|
|
|
int filter_hash;
|
2011-05-03 04:34:47 +07:00
|
|
|
int ret;
|
2008-05-13 02:20:43 +07:00
|
|
|
|
2008-05-22 22:46:33 +07:00
|
|
|
mutex_lock(&ftrace_regex_lock);
|
2008-05-13 02:20:43 +07:00
|
|
|
if (file->f_mode & FMODE_READ) {
|
|
|
|
iter = m->private;
|
|
|
|
|
|
|
|
seq_release(inode, file);
|
|
|
|
} else
|
|
|
|
iter = file->private_data;
|
|
|
|
|
2009-09-11 22:29:29 +07:00
|
|
|
parser = &iter->parser;
|
|
|
|
if (trace_parser_loaded(parser)) {
|
|
|
|
parser->buffer[parser->idx] = 0;
|
2011-04-30 07:59:51 +07:00
|
|
|
ftrace_match_records(iter->hash, parser->buffer, parser->idx);
|
2008-05-13 02:20:43 +07:00
|
|
|
}
|
|
|
|
|
2009-09-11 22:29:29 +07:00
|
|
|
trace_parser_put(parser);
|
|
|
|
|
2011-04-30 09:35:33 +07:00
|
|
|
if (file->f_mode & FMODE_WRITE) {
|
2011-05-04 00:25:24 +07:00
|
|
|
filter_hash = !!(iter->flags & FTRACE_ITER_FILTER);
|
|
|
|
|
|
|
|
if (filter_hash)
|
2011-05-03 04:34:47 +07:00
|
|
|
orig_hash = &iter->ops->filter_hash;
|
2011-05-04 00:25:24 +07:00
|
|
|
else
|
|
|
|
orig_hash = &iter->ops->notrace_hash;
|
2011-05-03 04:34:47 +07:00
|
|
|
|
2011-04-30 09:35:33 +07:00
|
|
|
mutex_lock(&ftrace_lock);
|
2011-05-04 00:25:24 +07:00
|
|
|
/*
|
|
|
|
* Remove the current set, update the hash and add
|
|
|
|
* them back.
|
|
|
|
*/
|
|
|
|
ftrace_hash_rec_disable(iter->ops, filter_hash);
|
2011-05-03 04:34:47 +07:00
|
|
|
ret = ftrace_hash_move(orig_hash, iter->hash);
|
2011-05-04 00:25:24 +07:00
|
|
|
if (!ret) {
|
|
|
|
ftrace_hash_rec_enable(iter->ops, filter_hash);
|
|
|
|
if (iter->ops->flags & FTRACE_OPS_FL_ENABLED
|
|
|
|
&& ftrace_enabled)
|
|
|
|
ftrace_run_update_code(FTRACE_ENABLE_CALLS);
|
|
|
|
}
|
2011-04-30 09:35:33 +07:00
|
|
|
mutex_unlock(&ftrace_lock);
|
|
|
|
}
|
2011-05-03 04:34:47 +07:00
|
|
|
free_ftrace_hash(iter->hash);
|
|
|
|
kfree(iter);
|
2011-04-30 09:35:33 +07:00
|
|
|
|
2008-05-22 22:46:33 +07:00
|
|
|
mutex_unlock(&ftrace_regex_lock);
|
2008-05-13 02:20:43 +07:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-03-06 09:44:55 +07:00
|
|
|
static const struct file_operations ftrace_avail_fops = {
|
2008-05-13 02:20:43 +07:00
|
|
|
.open = ftrace_avail_open,
|
|
|
|
.read = seq_read,
|
|
|
|
.llseek = seq_lseek,
|
2009-08-17 15:54:03 +07:00
|
|
|
.release = seq_release_private,
|
2008-05-13 02:20:43 +07:00
|
|
|
};
|
|
|
|
|
2011-05-04 01:39:21 +07:00
|
|
|
static const struct file_operations ftrace_enabled_fops = {
|
|
|
|
.open = ftrace_enabled_open,
|
|
|
|
.read = seq_read,
|
|
|
|
.llseek = seq_lseek,
|
|
|
|
.release = seq_release_private,
|
|
|
|
};
|
|
|
|
|
2009-03-06 09:44:55 +07:00
|
|
|
static const struct file_operations ftrace_filter_fops = {
|
2008-05-13 02:20:43 +07:00
|
|
|
.open = ftrace_filter_open,
|
2009-03-13 16:47:23 +07:00
|
|
|
.read = seq_read,
|
2008-05-13 02:20:43 +07:00
|
|
|
.write = ftrace_filter_write,
|
2010-09-10 22:47:43 +07:00
|
|
|
.llseek = ftrace_regex_lseek,
|
2011-04-30 07:59:51 +07:00
|
|
|
.release = ftrace_regex_release,
|
2008-05-13 02:20:43 +07:00
|
|
|
};
|
|
|
|
|
2009-03-06 09:44:55 +07:00
|
|
|
static const struct file_operations ftrace_notrace_fops = {
|
2008-05-22 22:46:33 +07:00
|
|
|
.open = ftrace_notrace_open,
|
2009-03-13 16:47:23 +07:00
|
|
|
.read = seq_read,
|
2008-05-22 22:46:33 +07:00
|
|
|
.write = ftrace_notrace_write,
|
|
|
|
.llseek = ftrace_regex_lseek,
|
2011-04-30 07:59:51 +07:00
|
|
|
.release = ftrace_regex_release,
|
2008-05-22 22:46:33 +07:00
|
|
|
};
|
|
|
|
|
2008-12-04 03:36:57 +07:00
|
|
|
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
|
|
|
|
|
|
|
static DEFINE_MUTEX(graph_lock);
|
|
|
|
|
|
|
|
int ftrace_graph_count;
|
2010-02-10 14:43:04 +07:00
|
|
|
int ftrace_graph_filter_enabled;
|
2008-12-04 03:36:57 +07:00
|
|
|
unsigned long ftrace_graph_funcs[FTRACE_GRAPH_MAX_FUNCS] __read_mostly;
|
|
|
|
|
|
|
|
static void *
|
2009-06-24 08:54:00 +07:00
|
|
|
__g_next(struct seq_file *m, loff_t *pos)
|
2008-12-04 03:36:57 +07:00
|
|
|
{
|
2009-06-24 08:54:00 +07:00
|
|
|
if (*pos >= ftrace_graph_count)
|
2008-12-04 03:36:57 +07:00
|
|
|
return NULL;
|
2009-09-18 13:06:28 +07:00
|
|
|
return &ftrace_graph_funcs[*pos];
|
2009-06-24 08:54:00 +07:00
|
|
|
}
|
2008-12-04 03:36:57 +07:00
|
|
|
|
2009-06-24 08:54:00 +07:00
|
|
|
static void *
|
|
|
|
g_next(struct seq_file *m, void *v, loff_t *pos)
|
|
|
|
{
|
|
|
|
(*pos)++;
|
|
|
|
return __g_next(m, pos);
|
2008-12-04 03:36:57 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
static void *g_start(struct seq_file *m, loff_t *pos)
|
|
|
|
{
|
|
|
|
mutex_lock(&graph_lock);
|
|
|
|
|
2009-02-20 03:13:12 +07:00
|
|
|
/* Nothing, tell g_show to print all functions are enabled */
|
2010-02-10 14:43:04 +07:00
|
|
|
if (!ftrace_graph_filter_enabled && !*pos)
|
2009-02-20 03:13:12 +07:00
|
|
|
return (void *)1;
|
|
|
|
|
2009-06-24 08:54:00 +07:00
|
|
|
return __g_next(m, pos);
|
2008-12-04 03:36:57 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
static void g_stop(struct seq_file *m, void *p)
|
|
|
|
{
|
|
|
|
mutex_unlock(&graph_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int g_show(struct seq_file *m, void *v)
|
|
|
|
{
|
|
|
|
unsigned long *ptr = v;
|
|
|
|
|
|
|
|
if (!ptr)
|
|
|
|
return 0;
|
|
|
|
|
2009-02-20 03:13:12 +07:00
|
|
|
if (ptr == (unsigned long *)1) {
|
|
|
|
seq_printf(m, "#### all functions enabled ####\n");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-09-17 11:05:58 +07:00
|
|
|
seq_printf(m, "%ps\n", (void *)*ptr);
|
2008-12-04 03:36:57 +07:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-09-23 06:43:43 +07:00
|
|
|
static const struct seq_operations ftrace_graph_seq_ops = {
|
2008-12-04 03:36:57 +07:00
|
|
|
.start = g_start,
|
|
|
|
.next = g_next,
|
|
|
|
.stop = g_stop,
|
|
|
|
.show = g_show,
|
|
|
|
};
|
|
|
|
|
|
|
|
static int
|
|
|
|
ftrace_graph_open(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
if (unlikely(ftrace_disabled))
|
|
|
|
return -ENODEV;
|
|
|
|
|
|
|
|
mutex_lock(&graph_lock);
|
|
|
|
if ((file->f_mode & FMODE_WRITE) &&
|
2009-07-23 10:29:30 +07:00
|
|
|
(file->f_flags & O_TRUNC)) {
|
2010-02-10 14:43:04 +07:00
|
|
|
ftrace_graph_filter_enabled = 0;
|
2008-12-04 03:36:57 +07:00
|
|
|
ftrace_graph_count = 0;
|
|
|
|
memset(ftrace_graph_funcs, 0, sizeof(ftrace_graph_funcs));
|
|
|
|
}
|
2009-09-18 13:06:28 +07:00
|
|
|
mutex_unlock(&graph_lock);
|
2008-12-04 03:36:57 +07:00
|
|
|
|
2009-09-18 13:06:28 +07:00
|
|
|
if (file->f_mode & FMODE_READ)
|
2008-12-04 03:36:57 +07:00
|
|
|
ret = seq_open(file, &ftrace_graph_seq_ops);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2009-07-23 10:29:11 +07:00
|
|
|
static int
|
|
|
|
ftrace_graph_release(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
if (file->f_mode & FMODE_READ)
|
|
|
|
seq_release(inode, file);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2008-12-04 03:36:57 +07:00
|
|
|
static int
|
2009-02-20 03:13:12 +07:00
|
|
|
ftrace_set_func(unsigned long *array, int *idx, char *buffer)
|
2008-12-04 03:36:57 +07:00
|
|
|
{
|
|
|
|
struct dyn_ftrace *rec;
|
|
|
|
struct ftrace_page *pg;
|
2009-02-20 03:13:12 +07:00
|
|
|
int search_len;
|
2010-02-10 14:43:04 +07:00
|
|
|
int fail = 1;
|
2009-02-20 03:13:12 +07:00
|
|
|
int type, not;
|
|
|
|
char *search;
|
|
|
|
bool exists;
|
|
|
|
int i;
|
2008-12-04 03:36:57 +07:00
|
|
|
|
2009-02-20 03:13:12 +07:00
|
|
|
/* decode regex */
|
2009-09-25 02:31:51 +07:00
|
|
|
type = filter_parse_regex(buffer, strlen(buffer), &search, ¬);
|
2010-02-10 14:43:04 +07:00
|
|
|
if (!not && *idx >= FTRACE_GRAPH_MAX_FUNCS)
|
|
|
|
return -EBUSY;
|
2009-02-20 03:13:12 +07:00
|
|
|
|
|
|
|
search_len = strlen(search);
|
|
|
|
|
2009-02-14 13:15:39 +07:00
|
|
|
mutex_lock(&ftrace_lock);
|
2011-04-22 10:16:46 +07:00
|
|
|
|
|
|
|
if (unlikely(ftrace_disabled)) {
|
|
|
|
mutex_unlock(&ftrace_lock);
|
|
|
|
return -ENODEV;
|
|
|
|
}
|
|
|
|
|
2009-02-14 00:43:56 +07:00
|
|
|
do_for_each_ftrace_rec(pg, rec) {
|
|
|
|
|
2011-04-22 10:16:46 +07:00
|
|
|
if (rec->flags & FTRACE_FL_FREE)
|
2009-02-14 00:43:56 +07:00
|
|
|
continue;
|
|
|
|
|
2011-04-29 07:32:08 +07:00
|
|
|
if (ftrace_match_record(rec, NULL, search, search_len, type)) {
|
2010-02-10 14:43:04 +07:00
|
|
|
/* if it is in the array */
|
2009-02-20 03:13:12 +07:00
|
|
|
exists = false;
|
2010-02-10 14:43:04 +07:00
|
|
|
for (i = 0; i < *idx; i++) {
|
2009-02-20 03:13:12 +07:00
|
|
|
if (array[i] == rec->ip) {
|
|
|
|
exists = true;
|
2009-02-14 00:43:56 +07:00
|
|
|
break;
|
|
|
|
}
|
2010-02-10 14:43:04 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
if (!not) {
|
|
|
|
fail = 0;
|
|
|
|
if (!exists) {
|
|
|
|
array[(*idx)++] = rec->ip;
|
|
|
|
if (*idx >= FTRACE_GRAPH_MAX_FUNCS)
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
if (exists) {
|
|
|
|
array[i] = array[--(*idx)];
|
|
|
|
array[*idx] = 0;
|
|
|
|
fail = 0;
|
|
|
|
}
|
|
|
|
}
|
2008-12-04 03:36:57 +07:00
|
|
|
}
|
2009-02-14 00:43:56 +07:00
|
|
|
} while_for_each_ftrace_rec();
|
2010-02-10 14:43:04 +07:00
|
|
|
out:
|
2009-02-14 13:15:39 +07:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-12-04 03:36:57 +07:00
|
|
|
|
2010-02-10 14:43:04 +07:00
|
|
|
if (fail)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
ftrace_graph_filter_enabled = 1;
|
|
|
|
return 0;
|
2008-12-04 03:36:57 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
static ssize_t
|
|
|
|
ftrace_graph_write(struct file *file, const char __user *ubuf,
|
|
|
|
size_t cnt, loff_t *ppos)
|
|
|
|
{
|
2009-09-11 22:29:29 +07:00
|
|
|
struct trace_parser parser;
|
2009-09-22 12:52:20 +07:00
|
|
|
ssize_t read, ret;
|
2008-12-04 03:36:57 +07:00
|
|
|
|
2010-02-10 14:43:04 +07:00
|
|
|
if (!cnt)
|
2008-12-04 03:36:57 +07:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
mutex_lock(&graph_lock);
|
|
|
|
|
2009-09-11 22:29:29 +07:00
|
|
|
if (trace_parser_get_init(&parser, FTRACE_BUFF_MAX)) {
|
|
|
|
ret = -ENOMEM;
|
2009-09-22 12:52:57 +07:00
|
|
|
goto out_unlock;
|
2008-12-04 03:36:57 +07:00
|
|
|
}
|
|
|
|
|
2009-09-11 22:29:29 +07:00
|
|
|
read = trace_get_user(&parser, ubuf, cnt, ppos);
|
2008-12-04 03:36:57 +07:00
|
|
|
|
2009-09-22 12:52:20 +07:00
|
|
|
if (read >= 0 && trace_parser_loaded((&parser))) {
|
2009-09-11 22:29:29 +07:00
|
|
|
parser.buffer[parser.idx] = 0;
|
|
|
|
|
|
|
|
/* we allow only one expression at a time */
|
2009-09-18 13:06:28 +07:00
|
|
|
ret = ftrace_set_func(ftrace_graph_funcs, &ftrace_graph_count,
|
2009-09-11 22:29:29 +07:00
|
|
|
parser.buffer);
|
2008-12-04 03:36:57 +07:00
|
|
|
if (ret)
|
2009-09-22 12:52:57 +07:00
|
|
|
goto out_free;
|
2008-12-04 03:36:57 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
ret = read;
|
2009-09-22 12:52:57 +07:00
|
|
|
|
|
|
|
out_free:
|
2009-09-11 22:29:29 +07:00
|
|
|
trace_parser_put(&parser);
|
2009-09-22 12:52:57 +07:00
|
|
|
out_unlock:
|
2008-12-04 03:36:57 +07:00
|
|
|
mutex_unlock(&graph_lock);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static const struct file_operations ftrace_graph_fops = {
|
2009-07-23 10:29:11 +07:00
|
|
|
.open = ftrace_graph_open,
|
|
|
|
.read = seq_read,
|
|
|
|
.write = ftrace_graph_write,
|
|
|
|
.release = ftrace_graph_release,
|
llseek: automatically add .llseek fop
All file_operations should get a .llseek operation so we can make
nonseekable_open the default for future file operations without a
.llseek pointer.
The three cases that we can automatically detect are no_llseek, seq_lseek
and default_llseek. For cases where we can automatically prove that
the file offset is always ignored, we use noop_llseek, which maintains
the current behavior of not returning an error from a seek.
New drivers should normally not use noop_llseek but instead use no_llseek
and call nonseekable_open at open time. Existing drivers can be converted
to do the same when the maintainer knows for certain that no user code
relies on calling seek on the device file.
The generated code is often incorrectly indented and right now contains
comments that clarify for each added line why a specific variant was
chosen. In the version that gets submitted upstream, the comments will
be gone and I will manually fix the indentation, because there does not
seem to be a way to do that using coccinelle.
Some amount of new code is currently sitting in linux-next that should get
the same modifications, which I will do at the end of the merge window.
Many thanks to Julia Lawall for helping me learn to write a semantic
patch that does all this.
===== begin semantic patch =====
// This adds an llseek= method to all file operations,
// as a preparation for making no_llseek the default.
//
// The rules are
// - use no_llseek explicitly if we do nonseekable_open
// - use seq_lseek for sequential files
// - use default_llseek if we know we access f_pos
// - use noop_llseek if we know we don't access f_pos,
// but we still want to allow users to call lseek
//
@ open1 exists @
identifier nested_open;
@@
nested_open(...)
{
<+...
nonseekable_open(...)
...+>
}
@ open exists@
identifier open_f;
identifier i, f;
identifier open1.nested_open;
@@
int open_f(struct inode *i, struct file *f)
{
<+...
(
nonseekable_open(...)
|
nested_open(...)
)
...+>
}
@ read disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}
@ read_no_fpos disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
... when != off
}
@ write @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}
@ write_no_fpos @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
... when != off
}
@ fops0 @
identifier fops;
@@
struct file_operations fops = {
...
};
@ has_llseek depends on fops0 @
identifier fops0.fops;
identifier llseek_f;
@@
struct file_operations fops = {
...
.llseek = llseek_f,
...
};
@ has_read depends on fops0 @
identifier fops0.fops;
identifier read_f;
@@
struct file_operations fops = {
...
.read = read_f,
...
};
@ has_write depends on fops0 @
identifier fops0.fops;
identifier write_f;
@@
struct file_operations fops = {
...
.write = write_f,
...
};
@ has_open depends on fops0 @
identifier fops0.fops;
identifier open_f;
@@
struct file_operations fops = {
...
.open = open_f,
...
};
// use no_llseek if we call nonseekable_open
////////////////////////////////////////////
@ nonseekable1 depends on !has_llseek && has_open @
identifier fops0.fops;
identifier nso ~= "nonseekable_open";
@@
struct file_operations fops = {
... .open = nso, ...
+.llseek = no_llseek, /* nonseekable */
};
@ nonseekable2 depends on !has_llseek @
identifier fops0.fops;
identifier open.open_f;
@@
struct file_operations fops = {
... .open = open_f, ...
+.llseek = no_llseek, /* open uses nonseekable */
};
// use seq_lseek for sequential files
/////////////////////////////////////
@ seq depends on !has_llseek @
identifier fops0.fops;
identifier sr ~= "seq_read";
@@
struct file_operations fops = {
... .read = sr, ...
+.llseek = seq_lseek, /* we have seq_read */
};
// use default_llseek if there is a readdir
///////////////////////////////////////////
@ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier readdir_e;
@@
// any other fop is used that changes pos
struct file_operations fops = {
... .readdir = readdir_e, ...
+.llseek = default_llseek, /* readdir is present */
};
// use default_llseek if at least one of read/write touches f_pos
/////////////////////////////////////////////////////////////////
@ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read.read_f;
@@
// read fops use offset
struct file_operations fops = {
... .read = read_f, ...
+.llseek = default_llseek, /* read accesses f_pos */
};
@ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write.write_f;
@@
// write fops use offset
struct file_operations fops = {
... .write = write_f, ...
+ .llseek = default_llseek, /* write accesses f_pos */
};
// Use noop_llseek if neither read nor write accesses f_pos
///////////////////////////////////////////////////////////
@ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
identifier write_no_fpos.write_f;
@@
// write fops use offset
struct file_operations fops = {
...
.write = write_f,
.read = read_f,
...
+.llseek = noop_llseek, /* read and write both use no f_pos */
};
@ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write_no_fpos.write_f;
@@
struct file_operations fops = {
... .write = write_f, ...
+.llseek = noop_llseek, /* write uses no f_pos */
};
@ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
@@
struct file_operations fops = {
... .read = read_f, ...
+.llseek = noop_llseek, /* read uses no f_pos */
};
@ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
@@
struct file_operations fops = {
...
+.llseek = noop_llseek, /* no read or write fn */
};
===== End semantic patch =====
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Julia Lawall <julia@diku.dk>
Cc: Christoph Hellwig <hch@infradead.org>
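To make the change concrete (a hypothetical before/after, not a hunk from this patch; foo_fops and foo_open are made-up names), a seq_file-based read-only file_operations would be rewritten like this:

/* before */
static const struct file_operations foo_fops = {
	.open    = foo_open,
	.read    = seq_read,
	.release = seq_release,
};

/* after: the 'seq' rule matched on .read = seq_read */
static const struct file_operations foo_fops = {
	.open    = foo_open,
	.read    = seq_read,
	.release = seq_release,
	.llseek  = seq_lseek, /* we have seq_read */
};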
2010-08-15 23:52:59 +07:00
|
|
|
.llseek = seq_lseek,
|
2008-12-04 03:36:57 +07:00
|
|
|
};
|
|
|
|
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
|
|
|
|
|
2008-11-26 12:16:23 +07:00
|
|
|
static __init int ftrace_init_dyn_debugfs(struct dentry *d_tracer)
|
2008-05-13 02:20:43 +07:00
|
|
|
{
|
|
|
|
|
2009-03-27 06:25:38 +07:00
|
|
|
trace_create_file("available_filter_functions", 0444,
|
|
|
|
d_tracer, NULL, &ftrace_avail_fops);
|
2008-05-13 02:20:43 +07:00
|
|
|
|
2011-05-04 01:39:21 +07:00
|
|
|
trace_create_file("enabled_functions", 0444,
|
|
|
|
d_tracer, NULL, &ftrace_enabled_fops);
|
|
|
|
|
2009-03-27 06:25:38 +07:00
|
|
|
trace_create_file("set_ftrace_filter", 0644, d_tracer,
|
|
|
|
NULL, &ftrace_filter_fops);
|
2008-05-22 22:46:33 +07:00
|
|
|
|
2009-03-27 06:25:38 +07:00
|
|
|
trace_create_file("set_ftrace_notrace", 0644, d_tracer,
|
2008-05-22 22:46:33 +07:00
|
|
|
NULL, &ftrace_notrace_fops);
|
ftrace: user update and disable dynamic ftrace daemon
In dynamic ftrace, the mcount function starts off pointing to a stub
function that just returns.
On start up, the call to the stub is modified to point to a "record_ip"
function. The job of the record_ip function is to add the function to
a pre-allocated hash list. If the function is already there, it simply is
ignored, otherwise it is added to the list.
Later, a ftraced daemon wakes up and calls kstop_machine if any functions
have been recorded, and changes the calls to the recorded functions to
a simple nop. If no functions were recorded, the daemon goes back to sleep.
The daemon wakes up once a second to see if it needs to update any newly
recorded functions into nops. Usually it does not, but if a lot of code
has been executed for the first time in the kernel, the ftraced daemon
will call kstop_machine to update those into nops.
The problem currently is that there's no way to stop the daemon from doing
this, and it can cause unneeded latencies (800us which for some is bothersome).
This patch adds a new file /debugfs/tracing/ftraced_enabled. Reading this
file returns "enabled\n" when the daemon is active and "disabled\n" when
the daemon is not running. To disable the daemon, the user can echo "0" or
"disable" into this file, and "1" or "enable" to re-enable the daemon.
Since the daemon is used to convert the functions into nops to increase
the performance of the system, I also added that anytime something is
written into the ftraced_enabled file, kstop_machine will run if there
are new functions that have been detected that need to be converted.
This way the user can disable the daemon but still be able to control the
conversion of the mcount calls to nops by simply,
"echo 0 > /debugfs/tracing/ftraced_enabled"
when they need to do more conversions.
To see the number of converted functions:
"cat /debugfs/tracing/dyn_ftrace_total_info"
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-05-28 07:48:37 +07:00
|
|
|
|
2008-12-04 03:36:57 +07:00
|
|
|
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
2009-03-27 06:25:38 +07:00
|
|
|
trace_create_file("set_graph_function", 0444, d_tracer,
|
2008-12-04 03:36:57 +07:00
|
|
|
NULL,
|
|
|
|
&ftrace_graph_fops);
|
|
|
|
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
|
|
|
|
|
2008-05-13 02:20:43 +07:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-10-14 03:33:53 +07:00
|
|
|
static int ftrace_process_locs(struct module *mod,
|
2008-11-15 07:21:19 +07:00
|
|
|
unsigned long *start,
|
2008-08-15 02:45:08 +07:00
|
|
|
unsigned long *end)
|
|
|
|
{
|
|
|
|
unsigned long *p;
|
|
|
|
unsigned long addr;
|
2011-06-07 20:26:46 +07:00
|
|
|
unsigned long flags;
|
2008-08-15 02:45:08 +07:00
|
|
|
|
2009-02-14 13:42:44 +07:00
|
|
|
mutex_lock(&ftrace_lock);
|
2008-08-15 02:45:08 +07:00
|
|
|
p = start;
|
|
|
|
while (p < end) {
|
|
|
|
addr = ftrace_call_adjust(*p++);
|
2008-11-15 07:21:19 +07:00
|
|
|
/*
|
|
|
|
* Some architecture linkers will pad between
|
|
|
|
* the different mcount_loc sections of different
|
|
|
|
* object files to satisfy alignments.
|
|
|
|
* Skip any NULL pointers.
|
|
|
|
*/
|
|
|
|
if (!addr)
|
|
|
|
continue;
|
2008-08-15 02:45:08 +07:00
|
|
|
ftrace_record_ip(addr);
|
|
|
|
}
|
|
|
|
|
2011-06-07 20:26:46 +07:00
|
|
|
/*
|
|
|
|
* Disable interrupts to prevent interrupts from executing
|
|
|
|
* code that is being modified.
|
|
|
|
*/
|
|
|
|
local_irq_save(flags);
|
2008-11-15 07:21:19 +07:00
|
|
|
ftrace_update_code(mod);
|
2011-06-07 20:26:46 +07:00
|
|
|
local_irq_restore(flags);
|
2009-02-14 13:42:44 +07:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-08-15 02:45:08 +07:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-04-16 00:24:06 +07:00
|
|
|
#ifdef CONFIG_MODULES
|
2009-10-08 00:00:35 +07:00
|
|
|
void ftrace_release_mod(struct module *mod)
|
2009-04-16 00:24:06 +07:00
|
|
|
{
|
|
|
|
struct dyn_ftrace *rec;
|
|
|
|
struct ftrace_page *pg;
|
|
|
|
|
2011-04-22 10:16:46 +07:00
|
|
|
mutex_lock(&ftrace_lock);
|
|
|
|
|
2009-10-08 00:00:35 +07:00
|
|
|
if (ftrace_disabled)
|
2011-04-22 10:16:46 +07:00
|
|
|
goto out_unlock;
|
2009-04-16 00:24:06 +07:00
|
|
|
|
|
|
|
do_for_each_ftrace_rec(pg, rec) {
|
2009-10-08 00:00:35 +07:00
|
|
|
if (within_module_core(rec->ip, mod)) {
|
2009-04-16 00:24:06 +07:00
|
|
|
/*
|
|
|
|
* rec->ip is changed in ftrace_free_rec()
|
|
|
|
* It should not be between s and e if the record was freed.
|
|
|
|
*/
|
|
|
|
FTRACE_WARN_ON(rec->flags & FTRACE_FL_FREE);
|
|
|
|
ftrace_free_rec(rec);
|
|
|
|
}
|
|
|
|
} while_for_each_ftrace_rec();
|
2011-04-22 10:16:46 +07:00
|
|
|
out_unlock:
|
2009-04-16 00:24:06 +07:00
|
|
|
mutex_unlock(&ftrace_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void ftrace_init_module(struct module *mod,
|
|
|
|
unsigned long *start, unsigned long *end)
|
2008-08-15 02:45:09 +07:00
|
|
|
{
|
2008-08-16 08:40:04 +07:00
|
|
|
if (ftrace_disabled || start == end)
|
2008-08-15 09:47:19 +07:00
|
|
|
return;
|
2009-10-14 03:33:53 +07:00
|
|
|
ftrace_process_locs(mod, start, end);
|
2008-08-15 02:45:09 +07:00
|
|
|
}
|
|
|
|
|
2009-04-16 00:24:06 +07:00
|
|
|
static int ftrace_module_notify(struct notifier_block *self,
|
|
|
|
unsigned long val, void *data)
|
|
|
|
{
|
|
|
|
struct module *mod = data;
|
|
|
|
|
|
|
|
switch (val) {
|
|
|
|
case MODULE_STATE_COMING:
|
|
|
|
ftrace_init_module(mod, mod->ftrace_callsites,
|
|
|
|
mod->ftrace_callsites +
|
|
|
|
mod->num_ftrace_callsites);
|
|
|
|
break;
|
|
|
|
case MODULE_STATE_GOING:
|
2009-10-08 00:00:35 +07:00
|
|
|
ftrace_release_mod(mod);
|
2009-04-16 00:24:06 +07:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
static int ftrace_module_notify(struct notifier_block *self,
|
|
|
|
unsigned long val, void *data)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
#endif /* CONFIG_MODULES */
|
|
|
|
|
|
|
|
struct notifier_block ftrace_module_nb = {
|
|
|
|
.notifier_call = ftrace_module_notify,
|
|
|
|
.priority = 0,
|
|
|
|
};
|
|
|
|
|
2008-08-15 02:45:08 +07:00
|
|
|
extern unsigned long __start_mcount_loc[];
|
|
|
|
extern unsigned long __stop_mcount_loc[];
|
|
|
|
|
|
|
|
void __init ftrace_init(void)
|
|
|
|
{
|
|
|
|
unsigned long count, addr, flags;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
/* Keep the ftrace pointer to the stub */
|
|
|
|
addr = (unsigned long)ftrace_stub;
|
|
|
|
|
|
|
|
local_irq_save(flags);
|
|
|
|
ftrace_dyn_arch_init(&addr);
|
|
|
|
local_irq_restore(flags);
|
|
|
|
|
|
|
|
/* ftrace_dyn_arch_init places the return code in addr */
|
|
|
|
if (addr)
|
|
|
|
goto failed;
|
|
|
|
|
|
|
|
count = __stop_mcount_loc - __start_mcount_loc;
|
|
|
|
|
|
|
|
ret = ftrace_dyn_table_alloc(count);
|
|
|
|
if (ret)
|
|
|
|
goto failed;
|
|
|
|
|
|
|
|
last_ftrace_enabled = ftrace_enabled = 1;
|
|
|
|
|
2009-10-14 03:33:53 +07:00
|
|
|
ret = ftrace_process_locs(NULL,
|
2008-11-15 07:21:19 +07:00
|
|
|
__start_mcount_loc,
|
2008-08-15 02:45:08 +07:00
|
|
|
__stop_mcount_loc);
|
|
|
|
|
2009-04-16 00:24:06 +07:00
|
|
|
ret = register_module_notifier(&ftrace_module_nb);
|
2009-05-17 14:31:38 +07:00
|
|
|
if (ret)
|
2009-04-16 00:24:06 +07:00
|
|
|
pr_warning("Failed to register trace ftrace module notifier\n");
|
|
|
|
|
2009-05-29 00:37:24 +07:00
|
|
|
set_ftrace_early_filters();
|
|
|
|
|
2008-08-15 02:45:08 +07:00
|
|
|
return;
|
|
|
|
failed:
|
|
|
|
ftrace_disabled = 1;
|
|
|
|
}
|
|
|
|
|
ftrace: dynamic enabling/disabling of function calls
2008-05-13 02:20:42 +07:00
|
|
|
#else
|
2008-10-29 02:17:38 +07:00
|
|
|
|
2011-05-04 09:49:52 +07:00
|
|
|
static struct ftrace_ops global_ops = {
|
2011-05-04 08:55:54 +07:00
|
|
|
.func = ftrace_stub,
|
|
|
|
};
|
|
|
|
|
2008-10-29 02:17:38 +07:00
|
|
|
static int __init ftrace_nodyn_init(void)
|
|
|
|
{
|
|
|
|
ftrace_enabled = 1;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
device_initcall(ftrace_nodyn_init);
|
|
|
|
|
2008-11-26 12:16:23 +07:00
|
|
|
static inline int ftrace_init_dyn_debugfs(struct dentry *d_tracer) { return 0; }
|
|
|
|
static inline void ftrace_startup_enable(int command) { }
|
2008-11-26 12:16:24 +07:00
|
|
|
/* Keep as macros so we do not need to define the commands */
|
2011-05-24 02:33:49 +07:00
|
|
|
# define ftrace_startup(ops, command) \
|
|
|
|
({ \
|
|
|
|
(ops)->flags |= FTRACE_OPS_FL_ENABLED; \
|
|
|
|
0; \
|
|
|
|
})
|
2011-05-04 08:55:54 +07:00
|
|
|
# define ftrace_shutdown(ops, command) do { } while (0)
|
2008-05-13 02:20:45 +07:00
|
|
|
# define ftrace_startup_sysctl() do { } while (0)
|
|
|
|
# define ftrace_shutdown_sysctl() do { } while (0)
|
2011-05-04 20:27:52 +07:00
|
|
|
|
|
|
|
static inline int
|
|
|
|
ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip)
|
|
|
|
{
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
of that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-13 02:20:42 +07:00
|
|
|
#endif /* CONFIG_DYNAMIC_FTRACE */
|
|
|
|
|
2011-05-04 20:27:52 +07:00
|
|
|
static void
|
|
|
|
ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip)
|
|
|
|
{
|
2011-05-06 08:14:55 +07:00
|
|
|
struct ftrace_ops *op;
|
2011-05-04 20:27:52 +07:00
|
|
|
|
ftrace: Add internal recursive checks
Witold reported a reboot caused by the selftests of the dynamic function
tracer. He sent me a config and I used ktest to do a config_bisect on it
(as my config did not cause the crash). It pointed out that the problem
config was CONFIG_PROVE_RCU.
What happened was that if multiple callbacks are attached to the
function tracer, we iterate a list of callbacks. Because the list is
managed by synchronize_sched() and preempt_disable, the access to the
pointers uses rcu_dereference_raw().
When PROVE_RCU is enabled, the rcu_dereference_raw() calls some
debugging functions, which happen to be traced. The tracing of the debug
function would then call rcu_dereference_raw() which would then call the
debug function and then... well you get the idea.
I first wrote two different patches to solve this bug.
1) add a __rcu_dereference_raw() that would not do any checks.
2) add notrace to the offending debug functions.
Both of these patches worked.
Talking with Paul McKenney on IRC, he suggested to add recursion
detection instead. This seemed to be a better solution, so I decided to
implement it. As the task_struct already has a trace_recursion to detect
recursion in the ring buffer, and that has a very small number it
allows, I decided to use that same variable to add flags that can detect
the recursion inside the infrastructure of the function tracer.
I plan to change it so that the task struct bit can be checked in
mcount, but as that requires changes to all archs, I will hold that off
to the next merge window.
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1306348063.1465.116.camel@gandalf.stny.rr.com
Reported-by: Witold Baryluk <baryluk@smp.if.uj.edu.pl>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-26 01:27:43 +07:00
|
|
|
if (unlikely(trace_recursion_test(TRACE_INTERNAL_BIT)))
|
|
|
|
return;
|
|
|
|
|
|
|
|
trace_recursion_set(TRACE_INTERNAL_BIT);
|
2011-05-06 08:14:55 +07:00
|
|
|
/*
|
|
|
|
* Some of the ops may be dynamically allocated,
|
|
|
|
* they must be freed after a synchronize_sched().
|
|
|
|
*/
|
|
|
|
preempt_disable_notrace();
|
|
|
|
op = rcu_dereference_raw(ftrace_ops_list);
|
2011-05-04 20:27:52 +07:00
|
|
|
while (op != &ftrace_list_end) {
|
|
|
|
if (ftrace_ops_test(op, ip))
|
|
|
|
op->func(ip, parent_ip);
|
|
|
|
op = rcu_dereference_raw(op->next);
|
|
|
|
};
|
2011-05-06 08:14:55 +07:00
|
|
|
preempt_enable_notrace();
|
ftrace: Add internal recursive checks
Witold reported a reboot caused by the selftests of the dynamic function
tracer. He sent me a config and I used ktest to do a config_bisect on it
(as my config did not cause the crash). It pointed out that the problem
config was CONFIG_PROVE_RCU.
What happened was that if multiple callbacks are attached to the
function tracer, we iterate a list of callbacks. Because the list is
managed by synchronize_sched() and preempt_disable, the access to the
pointers uses rcu_dereference_raw().
When PROVE_RCU is enabled, the rcu_dereference_raw() calls some
debugging functions, which happen to be traced. The tracing of the debug
function would then call rcu_dereference_raw() which would then call the
debug function and then... well you get the idea.
I first wrote two different patches to solve this bug.
1) add a __rcu_dereference_raw() that would not do any checks.
2) add notrace to the offending debug functions.
Both of these patches worked.
Talking with Paul McKenney on IRC, he suggested to add recursion
detection instead. This seemed to be a better solution, so I decided to
implement it. As the task_struct already has a trace_recursion to detect
recursion in the ring buffer, and that has a very small number it
allows, I decided to use that same variable to add flags that can detect
the recursion inside the infrastructure of the function tracer.
I plan to change it so that the task struct bit can be checked in
mcount, but as that requires changes to all archs, I will hold that off
to the next merge window.
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1306348063.1465.116.camel@gandalf.stny.rr.com
Reported-by: Witold Baryluk <baryluk@smp.if.uj.edu.pl>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-26 01:27:43 +07:00
|
|
|
trace_recursion_clear(TRACE_INTERNAL_BIT);
|
2011-05-04 20:27:52 +07:00
|
|
|
}
|
|
|
|
|
2008-12-04 12:26:41 +07:00
|
|
|
static void clear_ftrace_swapper(void)
|
2008-12-04 12:26:40 +07:00
|
|
|
{
|
|
|
|
struct task_struct *p;
|
2008-12-04 12:26:41 +07:00
|
|
|
int cpu;
|
2008-12-04 12:26:40 +07:00
|
|
|
|
2008-12-04 12:26:41 +07:00
|
|
|
get_online_cpus();
|
|
|
|
for_each_online_cpu(cpu) {
|
|
|
|
p = idle_task(cpu);
|
2008-12-04 12:26:40 +07:00
|
|
|
clear_tsk_trace_trace(p);
|
2008-12-04 12:26:41 +07:00
|
|
|
}
|
|
|
|
put_online_cpus();
|
|
|
|
}
|
2008-12-04 12:26:40 +07:00
|
|
|
|
2008-12-04 12:26:41 +07:00
|
|
|
static void set_ftrace_swapper(void)
|
|
|
|
{
|
|
|
|
struct task_struct *p;
|
|
|
|
int cpu;
|
|
|
|
|
|
|
|
get_online_cpus();
|
|
|
|
for_each_online_cpu(cpu) {
|
|
|
|
p = idle_task(cpu);
|
|
|
|
set_tsk_trace_trace(p);
|
|
|
|
}
|
|
|
|
put_online_cpus();
|
2008-12-04 12:26:40 +07:00
|
|
|
}
|
|
|
|
|
2008-12-04 12:26:41 +07:00
|
|
|
static void clear_ftrace_pid(struct pid *pid)
|
|
|
|
{
|
|
|
|
struct task_struct *p;
|
|
|
|
|
2009-02-04 02:39:04 +07:00
|
|
|
rcu_read_lock();
|
2008-12-04 12:26:41 +07:00
|
|
|
do_each_pid_task(pid, PIDTYPE_PID, p) {
|
|
|
|
clear_tsk_trace_trace(p);
|
|
|
|
} while_each_pid_task(pid, PIDTYPE_PID, p);
|
2009-02-04 02:39:04 +07:00
|
|
|
rcu_read_unlock();
|
|
|
|
|
2008-12-04 12:26:41 +07:00
|
|
|
put_pid(pid);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void set_ftrace_pid(struct pid *pid)
|
2008-12-04 12:26:40 +07:00
|
|
|
{
|
|
|
|
struct task_struct *p;
|
|
|
|
|
2009-02-04 02:39:04 +07:00
|
|
|
rcu_read_lock();
|
2008-12-04 12:26:40 +07:00
|
|
|
do_each_pid_task(pid, PIDTYPE_PID, p) {
|
|
|
|
set_tsk_trace_trace(p);
|
|
|
|
} while_each_pid_task(pid, PIDTYPE_PID, p);
|
2009-02-04 02:39:04 +07:00
|
|
|
rcu_read_unlock();
|
2008-12-04 12:26:40 +07:00
|
|
|
}
|
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
static void clear_ftrace_pid_task(struct pid *pid)
|
2008-12-04 12:26:41 +07:00
|
|
|
{
|
2009-10-14 03:33:52 +07:00
|
|
|
if (pid == ftrace_swapper_pid)
|
2008-12-04 12:26:41 +07:00
|
|
|
clear_ftrace_swapper();
|
|
|
|
else
|
2009-10-14 03:33:52 +07:00
|
|
|
clear_ftrace_pid(pid);
|
2008-12-04 12:26:41 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
static void set_ftrace_pid_task(struct pid *pid)
|
|
|
|
{
|
|
|
|
if (pid == ftrace_swapper_pid)
|
|
|
|
set_ftrace_swapper();
|
|
|
|
else
|
|
|
|
set_ftrace_pid(pid);
|
|
|
|
}
|
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
static int ftrace_pid_add(int p)
|
2008-11-26 12:16:23 +07:00
|
|
|
{
|
2008-12-04 12:26:40 +07:00
|
|
|
struct pid *pid;
|
2009-10-14 03:33:52 +07:00
|
|
|
struct ftrace_pid *fpid;
|
|
|
|
int ret = -EINVAL;
|
2008-11-26 12:16:23 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
mutex_lock(&ftrace_lock);
|
2008-11-26 12:16:23 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
if (!p)
|
|
|
|
pid = ftrace_swapper_pid;
|
|
|
|
else
|
|
|
|
pid = find_get_pid(p);
|
2008-11-26 12:16:23 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
if (!pid)
|
|
|
|
goto out;
|
2008-11-26 12:16:23 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
ret = 0;
|
2008-11-26 12:16:23 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
list_for_each_entry(fpid, &ftrace_pids, list)
|
|
|
|
if (fpid->pid == pid)
|
|
|
|
goto out_put;
|
2008-12-04 12:26:40 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
ret = -ENOMEM;
|
2008-11-26 12:16:23 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
fpid = kmalloc(sizeof(*fpid), GFP_KERNEL);
|
|
|
|
if (!fpid)
|
|
|
|
goto out_put;
|
2008-11-26 12:16:23 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
list_add(&fpid->list, &ftrace_pids);
|
|
|
|
fpid->pid = pid;
|
2008-12-04 03:36:58 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
set_ftrace_pid_task(pid);
|
2008-12-04 12:26:40 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
ftrace_update_pid_func();
|
|
|
|
ftrace_startup_enable(0);
|
|
|
|
|
|
|
|
mutex_unlock(&ftrace_lock);
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
out_put:
|
|
|
|
if (pid != ftrace_swapper_pid)
|
|
|
|
put_pid(pid);
|
2008-12-04 12:26:40 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
out:
|
|
|
|
mutex_unlock(&ftrace_lock);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void ftrace_pid_reset(void)
|
|
|
|
{
|
|
|
|
struct ftrace_pid *fpid, *safe;
|
2008-12-04 12:26:40 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
mutex_lock(&ftrace_lock);
|
|
|
|
list_for_each_entry_safe(fpid, safe, &ftrace_pids, list) {
|
|
|
|
struct pid *pid = fpid->pid;
|
|
|
|
|
|
|
|
clear_ftrace_pid_task(pid);
|
|
|
|
|
|
|
|
list_del(&fpid->list);
|
|
|
|
kfree(fpid);
|
2008-11-26 12:16:23 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
ftrace_update_pid_func();
|
|
|
|
ftrace_startup_enable(0);
|
|
|
|
|
2009-02-14 13:42:44 +07:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2009-10-14 03:33:52 +07:00
|
|
|
}
|
2008-11-26 12:16:23 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
static void *fpid_start(struct seq_file *m, loff_t *pos)
|
|
|
|
{
|
|
|
|
mutex_lock(&ftrace_lock);
|
|
|
|
|
|
|
|
if (list_empty(&ftrace_pids) && (!*pos))
|
|
|
|
return (void *) 1;
|
|
|
|
|
|
|
|
return seq_list_start(&ftrace_pids, *pos);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void *fpid_next(struct seq_file *m, void *v, loff_t *pos)
|
|
|
|
{
|
|
|
|
if (v == (void *)1)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
return seq_list_next(v, &ftrace_pids, pos);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void fpid_stop(struct seq_file *m, void *p)
|
|
|
|
{
|
|
|
|
mutex_unlock(&ftrace_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int fpid_show(struct seq_file *m, void *v)
|
|
|
|
{
|
|
|
|
const struct ftrace_pid *fpid = list_entry(v, struct ftrace_pid, list);
|
|
|
|
|
|
|
|
if (v == (void *)1) {
|
|
|
|
seq_printf(m, "no pid\n");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (fpid->pid == ftrace_swapper_pid)
|
|
|
|
seq_printf(m, "swapper tasks\n");
|
|
|
|
else
|
|
|
|
seq_printf(m, "%u\n", pid_vnr(fpid->pid));
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static const struct seq_operations ftrace_pid_sops = {
|
|
|
|
.start = fpid_start,
|
|
|
|
.next = fpid_next,
|
|
|
|
.stop = fpid_stop,
|
|
|
|
.show = fpid_show,
|
|
|
|
};
|
|
|
|
|
|
|
|
static int
|
|
|
|
ftrace_pid_open(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
if ((file->f_mode & FMODE_WRITE) &&
|
|
|
|
(file->f_flags & O_TRUNC))
|
|
|
|
ftrace_pid_reset();
|
|
|
|
|
|
|
|
if (file->f_mode & FMODE_READ)
|
|
|
|
ret = seq_open(file, &ftrace_pid_sops);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2008-11-26 12:16:23 +07:00
|
|
|
static ssize_t
|
|
|
|
ftrace_pid_write(struct file *filp, const char __user *ubuf,
|
|
|
|
size_t cnt, loff_t *ppos)
|
|
|
|
{
|
2009-11-23 17:03:28 +07:00
|
|
|
char buf[64], *tmp;
|
2008-11-26 12:16:23 +07:00
|
|
|
long val;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (cnt >= sizeof(buf))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
if (copy_from_user(&buf, ubuf, cnt))
|
|
|
|
return -EFAULT;
|
|
|
|
|
|
|
|
buf[cnt] = 0;
|
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
/*
|
|
|
|
* Allow "echo > set_ftrace_pid" or "echo -n '' > set_ftrace_pid"
|
|
|
|
* to clean the filter quietly.
|
|
|
|
*/
|
2009-11-23 17:03:28 +07:00
|
|
|
tmp = strstrip(buf);
|
|
|
|
if (strlen(tmp) == 0)
|
2009-10-14 03:33:52 +07:00
|
|
|
return 1;
|
|
|
|
|
2009-11-23 17:03:28 +07:00
|
|
|
ret = strict_strtol(tmp, 10, &val);
|
2008-11-26 12:16:23 +07:00
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
ret = ftrace_pid_add(val);
|
2008-11-26 12:16:23 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
return ret ? ret : cnt;
|
|
|
|
}
|
2008-11-26 12:16:23 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
static int
|
|
|
|
ftrace_pid_release(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
if (file->f_mode & FMODE_READ)
|
|
|
|
seq_release(inode, file);
|
2008-11-26 12:16:23 +07:00
|
|
|
|
2009-10-14 03:33:52 +07:00
|
|
|
return 0;
|
2008-11-26 12:16:23 +07:00
|
|
|
}
|
|
|
|
|
2009-03-06 09:44:55 +07:00
|
|
|
static const struct file_operations ftrace_pid_fops = {
|
2009-10-14 03:33:52 +07:00
|
|
|
.open = ftrace_pid_open,
|
|
|
|
.write = ftrace_pid_write,
|
|
|
|
.read = seq_read,
|
|
|
|
.llseek = seq_lseek,
|
|
|
|
.release = ftrace_pid_release,
|
2008-11-26 12:16:23 +07:00
|
|
|
};
|
|
|
|
|
|
|
|
static __init int ftrace_init_debugfs(void)
|
|
|
|
{
|
|
|
|
struct dentry *d_tracer;
|
|
|
|
|
|
|
|
d_tracer = tracing_init_dentry();
|
|
|
|
if (!d_tracer)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
ftrace_init_dyn_debugfs(d_tracer);
|
|
|
|
|
2009-03-27 06:25:38 +07:00
|
|
|
trace_create_file("set_ftrace_pid", 0644, d_tracer,
|
|
|
|
NULL, &ftrace_pid_fops);
|
2009-03-24 04:12:36 +07:00
|
|
|
|
|
|
|
ftrace_profile_debugfs(d_tracer);
|
|
|
|
|
2008-11-26 12:16:23 +07:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
fs_initcall(ftrace_init_debugfs);
|
|
|
|
|
2008-07-11 07:58:15 +07:00
|
|
|
/**
|
2008-10-23 20:33:02 +07:00
|
|
|
* ftrace_kill - kill ftrace
|
2008-07-11 07:58:15 +07:00
|
|
|
*
|
|
|
|
* This function should be used by panic code. It stops ftrace
|
|
|
|
* but in a not so nice way. If you need to simply kill ftrace
|
|
|
|
* from a non-atomic section, use ftrace_kill.
|
|
|
|
*/
|
2008-10-23 20:33:02 +07:00
|
|
|
void ftrace_kill(void)
|
2008-07-11 07:58:15 +07:00
|
|
|
{
|
|
|
|
ftrace_disabled = 1;
|
|
|
|
ftrace_enabled = 0;
|
|
|
|
clear_ftrace_function();
|
|
|
|
}
|
|
|
|
|
2008-05-13 02:20:42 +07:00
|
|
|
/**
|
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
of that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-13 02:20:42 +07:00
|
|
|
* register_ftrace_function - register a function for profiling
|
|
|
|
* @ops - ops structure that holds the function for profiling.
|
2008-05-13 02:20:42 +07:00
|
|
|
*
|
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
of that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-13 02:20:42 +07:00
|
|
|
* Register a function to be called by all functions in the
|
|
|
|
* kernel.
|
|
|
|
*
|
|
|
|
* Note: @ops->func and all the functions it calls must be labeled
|
|
|
|
* with "notrace", otherwise it will go into a
|
|
|
|
* recursive loop.
|
2008-05-13 02:20:42 +07:00
|
|
|
*/
|
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
of that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-13 02:20:42 +07:00
|
|
|
int register_ftrace_function(struct ftrace_ops *ops)
|
2008-05-13 02:20:42 +07:00
|
|
|
{
|
2011-04-22 10:16:46 +07:00
|
|
|
int ret = -1;
|
2008-05-13 02:20:48 +07:00
|
|
|
|
2009-02-14 13:42:44 +07:00
|
|
|
mutex_lock(&ftrace_lock);
|
2008-11-16 12:02:06 +07:00
|
|
|
|
2011-04-22 10:16:46 +07:00
|
|
|
if (unlikely(ftrace_disabled))
|
|
|
|
goto out_unlock;
|
|
|
|
|
2008-05-13 02:20:43 +07:00
|
|
|
ret = __register_ftrace_function(ops);
|
2011-05-04 20:27:52 +07:00
|
|
|
if (!ret)
|
2011-05-24 02:24:25 +07:00
|
|
|
ret = ftrace_startup(ops, 0);
|
2011-05-04 20:27:52 +07:00
|
|
|
|
2008-05-13 02:20:43 +07:00
|
|
|
|
2011-04-22 10:16:46 +07:00
|
|
|
out_unlock:
|
2009-02-14 13:42:44 +07:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-05-13 02:20:43 +07:00
|
|
|
return ret;
|
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
of that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-13 02:20:42 +07:00
|
|
|
}
|
2011-05-06 08:14:55 +07:00
|
|
|
EXPORT_SYMBOL_GPL(register_ftrace_function);
|
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
of that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-13 02:20:42 +07:00
|
|
|
|
|
|
|
/**
|
2009-01-13 05:35:50 +07:00
|
|
|
* unregister_ftrace_function - unregister a function for profiling.
|
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
of that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-13 02:20:42 +07:00
|
|
|
* @ops - ops structure that holds the function to unregister
|
|
|
|
*
|
|
|
|
* Unregister a function that was added to be called by ftrace profiling.
|
|
|
|
*/
|
|
|
|
int unregister_ftrace_function(struct ftrace_ops *ops)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
2009-02-14 13:42:44 +07:00
|
|
|
mutex_lock(&ftrace_lock);
|
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
of that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-13 02:20:42 +07:00
|
|
|
ret = __unregister_ftrace_function(ops);
|
2011-05-04 20:27:52 +07:00
|
|
|
if (!ret)
|
|
|
|
ftrace_shutdown(ops, 0);
|
2009-02-14 13:42:44 +07:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-05-13 02:20:43 +07:00
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
2011-05-06 08:14:55 +07:00
|
|
|
EXPORT_SYMBOL_GPL(unregister_ftrace_function);
|
2008-05-13 02:20:43 +07:00
|
|
|
|
2008-05-13 02:20:51 +07:00
|
|
|
int
|
2008-05-13 02:20:43 +07:00
|
|
|
ftrace_enable_sysctl(struct ctl_table *table, int write,
|
2009-09-24 05:57:19 +07:00
|
|
|
void __user *buffer, size_t *lenp,
|
2008-05-13 02:20:43 +07:00
|
|
|
loff_t *ppos)
|
|
|
|
{
|
2011-04-22 10:16:46 +07:00
|
|
|
int ret = -ENODEV;
|
2008-05-13 02:20:48 +07:00
|
|
|
|
2009-02-14 13:42:44 +07:00
|
|
|
mutex_lock(&ftrace_lock);
|
2008-05-13 02:20:43 +07:00
|
|
|
|
2011-04-22 10:16:46 +07:00
|
|
|
if (unlikely(ftrace_disabled))
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
ret = proc_dointvec(table, write, buffer, lenp, ppos);
|
2008-05-13 02:20:43 +07:00
|
|
|
|
2009-06-26 15:55:51 +07:00
|
|
|
if (ret || !write || (last_ftrace_enabled == !!ftrace_enabled))
|
2008-05-13 02:20:43 +07:00
|
|
|
goto out;
|
|
|
|
|
2009-06-26 15:55:51 +07:00
|
|
|
last_ftrace_enabled = !!ftrace_enabled;
|
2008-05-13 02:20:43 +07:00
|
|
|
|
|
|
|
if (ftrace_enabled) {
|
|
|
|
|
|
|
|
ftrace_startup_sysctl();
|
|
|
|
|
|
|
|
/* we are starting ftrace again */
|
2011-05-04 20:27:52 +07:00
|
|
|
if (ftrace_ops_list != &ftrace_list_end) {
|
|
|
|
if (ftrace_ops_list->next == &ftrace_list_end)
|
|
|
|
ftrace_trace_function = ftrace_ops_list->func;
|
2008-05-13 02:20:43 +07:00
|
|
|
else
|
2011-05-04 20:27:52 +07:00
|
|
|
ftrace_trace_function = ftrace_ops_list_func;
|
2008-05-13 02:20:43 +07:00
|
|
|
}
|
|
|
|
|
|
|
|
} else {
|
|
|
|
/* stopping ftrace calls (just send to ftrace_stub) */
|
|
|
|
ftrace_trace_function = ftrace_stub;
|
|
|
|
|
|
|
|
ftrace_shutdown_sysctl();
|
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
2009-02-14 13:42:44 +07:00
|
|
|
mutex_unlock(&ftrace_lock);
|
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
of that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-13 02:20:42 +07:00
|
|
|
return ret;
|
2008-05-13 02:20:42 +07:00
|
|
|
}
|
2008-10-24 17:47:10 +07:00
|
|
|
|
2008-11-26 03:07:04 +07:00
|
|
|
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
2008-11-16 12:02:06 +07:00
|
|
|
|
2009-04-04 02:24:12 +07:00
|
|
|
static int ftrace_graph_active;
|
2009-01-15 04:33:27 +07:00
|
|
|
static struct notifier_block ftrace_suspend_notifier;
|
2008-11-16 12:02:06 +07:00
|
|
|
|
2008-12-03 11:50:05 +07:00
|
|
|
int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2008-11-26 06:57:25 +07:00
|
|
|
/* The callbacks that hook a function */
|
|
|
|
trace_func_graph_ret_t ftrace_graph_return =
|
|
|
|
(trace_func_graph_ret_t)ftrace_stub;
|
2008-12-03 11:50:05 +07:00
|
|
|
trace_func_graph_ent_t ftrace_graph_entry = ftrace_graph_entry_stub;
|
2008-11-23 12:22:56 +07:00
|
|
|
|
|
|
|
/* Try to assign a return stack array on FTRACE_RETSTACK_ALLOC_SIZE tasks. */
|
|
|
|
static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
int ret = 0;
|
|
|
|
unsigned long flags;
|
|
|
|
int start = 0, end = FTRACE_RETSTACK_ALLOC_SIZE;
|
|
|
|
struct task_struct *g, *t;
|
|
|
|
|
|
|
|
for (i = 0; i < FTRACE_RETSTACK_ALLOC_SIZE; i++) {
|
|
|
|
ret_stack_list[i] = kmalloc(FTRACE_RETFUNC_DEPTH
|
|
|
|
* sizeof(struct ftrace_ret_stack),
|
|
|
|
GFP_KERNEL);
|
|
|
|
if (!ret_stack_list[i]) {
|
|
|
|
start = 0;
|
|
|
|
end = i;
|
|
|
|
ret = -ENOMEM;
|
|
|
|
goto free;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
read_lock_irqsave(&tasklist_lock, flags);
|
|
|
|
do_each_thread(g, t) {
|
|
|
|
if (start == end) {
|
|
|
|
ret = -EAGAIN;
|
|
|
|
goto unlock;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (t->ret_stack == NULL) {
|
2008-12-06 09:43:41 +07:00
|
|
|
atomic_set(&t->tracing_graph_pause, 0);
|
2008-11-23 12:22:56 +07:00
|
|
|
atomic_set(&t->trace_overrun, 0);
|
2009-06-03 01:01:19 +07:00
|
|
|
t->curr_ret_stack = -1;
|
|
|
|
/* Make sure the tasks see the -1 first: */
|
|
|
|
smp_wmb();
|
|
|
|
t->ret_stack = ret_stack_list[start++];
|
2008-11-23 12:22:56 +07:00
|
|
|
}
|
|
|
|
} while_each_thread(g, t);
|
|
|
|
|
|
|
|
unlock:
|
|
|
|
read_unlock_irqrestore(&tasklist_lock, flags);
|
|
|
|
free:
|
|
|
|
for (i = start; i < end; i++)
|
|
|
|
kfree(ret_stack_list[i]);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2009-03-24 12:10:15 +07:00
|
|
|
static void
|
tracing: Let tracepoints have data passed to tracepoint callbacks
This patch adds data to be passed to tracepoint callbacks.
The created functions from DECLARE_TRACE() now need a mandatory data
parameter. For example:
DECLARE_TRACE(mytracepoint, int value, value)
Will create the register function:
int register_trace_mytracepoint((void(*)(void *data, int value))probe,
void *data);
As the first argument, all callbacks (probes) must take a (void *data)
parameter. So a callback for the above tracepoint will look like:
void myprobe(void *data, int value)
{
}
The callback may choose to ignore the data parameter.
This change allows callbacks to register a private data pointer along
with the function probe.
void mycallback(void *data, int value);
register_trace_mytracepoint(mycallback, mydata);
Then the mycallback() will receive the "mydata" as the first parameter
before the args.
A more detailed example:
DECLARE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
/* In the C file */
DEFINE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
[...]
trace_mytracepoint(status);
/* In a file registering this tracepoint */
int my_callback(void *data, int status)
{
struct my_struct my_data = data;
[...]
}
[...]
my_data = kmalloc(sizeof(*my_data), GFP_KERNEL);
init_my_data(my_data);
register_trace_mytracepoint(my_callback, my_data);
The same callback can also be registered to the same tracepoint as long
as the data registered is different. Note, the data must also be used
to unregister the callback:
unregister_trace_mytracepoint(my_callback, my_data);
Because of the data parameter, tracepoints declared this way can not have
no args. That is:
DECLARE_TRACE(mytracepoint, TP_PROTO(void), TP_ARGS());
will cause an error.
If no arguments are needed, a new macro can be used instead:
DECLARE_TRACE_NOARGS(mytracepoint);
Since there are no arguments, the proto and args fields are left out.
This is part of a series to make the tracepoint footprint smaller:
text data bss dec hex filename
4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
4914025 1088868 861512 6864405 68be15 vmlinux.class
4918492 1084612 861512 6864616 68bee8 vmlinux.tracepoint
Again, this patch also increases the size of the kernel, but
lays the ground work for decreasing it.
v5: Fixed net/core/drop_monitor.c to handle these updates.
v4: Moved the DECLARE_TRACE() DECLARE_TRACE_NOARGS out of the
#ifdef CONFIG_TRACE_POINTS, since the two are the same in both
cases. The __DECLARE_TRACE() is what changes.
Thanks to Frederic Weisbecker for pointing this out.
v3: Made all register_* functions require data to be passed and
all callbacks to take a void * parameter as its first argument.
This makes the calling functions comply with C standards.
Also added more comments to the modifications of DECLARE_TRACE().
v2: Made the DECLARE_TRACE() have the ability to pass arguments
and added a new DECLARE_TRACE_NOARGS() for tracepoints that
do not need any arguments.
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-04-21 04:04:50 +07:00
|
|
|
ftrace_graph_probe_sched_switch(void *ignore,
|
|
|
|
struct task_struct *prev, struct task_struct *next)
|
2009-03-24 12:10:15 +07:00
|
|
|
{
|
|
|
|
unsigned long long timestamp;
|
|
|
|
int index;
|
|
|
|
|
2009-03-24 22:06:24 +07:00
|
|
|
/*
|
|
|
|
* Does the user want to count the time a function was asleep.
|
|
|
|
* If so, do not update the time stamps.
|
|
|
|
*/
|
|
|
|
if (trace_flags & TRACE_ITER_SLEEP_TIME)
|
|
|
|
return;
|
|
|
|
|
2009-03-24 12:10:15 +07:00
|
|
|
timestamp = trace_clock_local();
|
|
|
|
|
|
|
|
prev->ftrace_timestamp = timestamp;
|
|
|
|
|
|
|
|
/* only process tasks that we timestamped */
|
|
|
|
if (!next->ftrace_timestamp)
|
|
|
|
return;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Update all the counters in next to make up for the
|
|
|
|
* time next was sleeping.
|
|
|
|
*/
|
|
|
|
timestamp -= next->ftrace_timestamp;
|
|
|
|
|
|
|
|
for (index = next->curr_ret_stack; index >= 0; index--)
|
|
|
|
next->ret_stack[index].calltime += timestamp;
|
|
|
|
}
|
|
|
|
|
2008-11-23 12:22:56 +07:00
|
|
|
/* Allocate a return stack for each task */
|
2008-11-26 03:07:04 +07:00
|
|
|
static int start_graph_tracing(void)
|
2008-11-23 12:22:56 +07:00
|
|
|
{
|
|
|
|
struct ftrace_ret_stack **ret_stack_list;
|
2009-02-18 00:35:34 +07:00
|
|
|
int ret, cpu;
|
2008-11-23 12:22:56 +07:00
|
|
|
|
|
|
|
ret_stack_list = kmalloc(FTRACE_RETSTACK_ALLOC_SIZE *
|
|
|
|
sizeof(struct ftrace_ret_stack *),
|
|
|
|
GFP_KERNEL);
|
|
|
|
|
|
|
|
if (!ret_stack_list)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
2009-02-18 00:35:34 +07:00
|
|
|
/* The cpu_boot init_task->ret_stack will never be freed */
|
2009-06-02 23:03:19 +07:00
|
|
|
for_each_online_cpu(cpu) {
|
|
|
|
if (!idle_task(cpu)->ret_stack)
|
ftrace: Fix memory leak with function graph and cpu hotplug
When the fuction graph tracer starts, it needs to make a special
stack for each task to save the real return values of the tasks.
All running tasks have this stack created, as well as any new
tasks.
On CPU hot plug, the new idle task will allocate a stack as well
when init_idle() is called. The problem is that cpu hotplug does
not create a new idle_task. Instead it uses the idle task that
existed when the cpu went down.
ftrace_graph_init_task() will add a new ret_stack to the task
that is given to it. Because a clone will make the task
have a stack of its parent it does not check if the task's
ret_stack is already NULL or not. When the CPU hotplug code
starts a CPU up again, it will allocate a new stack even
though one already existed for it.
The solution is to treat the idle_task specially. In fact, the
function_graph code already does, just not at init_idle().
Instead of using the ftrace_graph_init_task() for the idle task,
which that function expects the task to be a clone, have a
separate ftrace_graph_init_idle_task(). Also, we will create a
per_cpu ret_stack that is used by the idle task. When we call
ftrace_graph_init_idle_task() it will check if the idle task's
ret_stack is NULL, if it is, then it will assign it the per_cpu
ret_stack.
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stable Tree <stable@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-02-11 09:26:13 +07:00
|
|
|
ftrace_graph_init_idle_task(idle_task(cpu), cpu);
|
2009-06-02 23:03:19 +07:00
|
|
|
}
|
2009-02-18 00:35:34 +07:00
|
|
|
|
2008-11-23 12:22:56 +07:00
|
|
|
do {
|
|
|
|
ret = alloc_retstack_tasklist(ret_stack_list);
|
|
|
|
} while (ret == -EAGAIN);
|
|
|
|
|
2009-03-24 12:10:15 +07:00
|
|
|
if (!ret) {
|
tracing: Let tracepoints have data passed to tracepoint callbacks
This patch adds data to be passed to tracepoint callbacks.
The created functions from DECLARE_TRACE() now need a mandatory data
parameter. For example:
DECLARE_TRACE(mytracepoint, int value, value)
Will create the register function:
int register_trace_mytracepoint((void(*)(void *data, int value))probe,
void *data);
As the first argument, all callbacks (probes) must take a (void *data)
parameter. So a callback for the above tracepoint will look like:
void myprobe(void *data, int value)
{
}
The callback may choose to ignore the data parameter.
This change allows callbacks to register a private data pointer along
with the function probe.
void mycallback(void *data, int value);
register_trace_mytracepoint(mycallback, mydata);
Then the mycallback() will receive the "mydata" as the first parameter
before the args.
A more detailed example:
DECLARE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
/* In the C file */
DEFINE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
[...]
trace_mytracepoint(status);
/* In a file registering this tracepoint */
int my_callback(void *data, int status)
{
struct my_struct my_data = data;
[...]
}
[...]
my_data = kmalloc(sizeof(*my_data), GFP_KERNEL);
init_my_data(my_data);
register_trace_mytracepoint(my_callback, my_data);
The same callback can also be registered to the same tracepoint as long
as the data registered is different. Note, the data must also be used
to unregister the callback:
unregister_trace_mytracepoint(my_callback, my_data);
Because of the data parameter, tracepoints declared this way can not have
no args. That is:
DECLARE_TRACE(mytracepoint, TP_PROTO(void), TP_ARGS());
will cause an error.
If no arguments are needed, a new macro can be used instead:
DECLARE_TRACE_NOARGS(mytracepoint);
Since there are no arguments, the proto and args fields are left out.
This is part of a series to make the tracepoint footprint smaller:
text data bss dec hex filename
4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
4914025 1088868 861512 6864405 68be15 vmlinux.class
4918492 1084612 861512 6864616 68bee8 vmlinux.tracepoint
Again, this patch also increases the size of the kernel, but
lays the ground work for decreasing it.
v5: Fixed net/core/drop_monitor.c to handle these updates.
v4: Moved the DECLARE_TRACE() DECLARE_TRACE_NOARGS out of the
#ifdef CONFIG_TRACE_POINTS, since the two are the same in both
cases. The __DECLARE_TRACE() is what changes.
Thanks to Frederic Weisbecker for pointing this out.
v3: Made all register_* functions require data to be passed and
all callbacks to take a void * parameter as its first argument.
This makes the calling functions comply with C standards.
Also added more comments to the modifications of DECLARE_TRACE().
v2: Made the DECLARE_TRACE() have the ability to pass arguments
and added a new DECLARE_TRACE_NOARGS() for tracepoints that
do not need any arguments.
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-04-21 04:04:50 +07:00
|
|
|
ret = register_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
|
2009-03-24 12:10:15 +07:00
|
|
|
if (ret)
|
|
|
|
pr_info("ftrace_graph: Couldn't activate tracepoint"
|
|
|
|
" probe to kernel_sched_switch\n");
|
|
|
|
}
|
|
|
|
|
2008-11-23 12:22:56 +07:00
|
|
|
kfree(ret_stack_list);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2009-01-15 04:33:27 +07:00
|
|
|
/*
|
|
|
|
* Hibernation protection.
|
|
|
|
* The state of the current task is too much unstable during
|
|
|
|
* suspend/restore to disk. We want to protect against that.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
ftrace_suspend_notifier_call(struct notifier_block *bl, unsigned long state,
|
|
|
|
void *unused)
|
|
|
|
{
|
|
|
|
switch (state) {
|
|
|
|
case PM_HIBERNATION_PREPARE:
|
|
|
|
pause_graph_tracing();
|
|
|
|
break;
|
|
|
|
|
|
|
|
case PM_POST_HIBERNATION:
|
|
|
|
unpause_graph_tracing();
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
return NOTIFY_DONE;
|
|
|
|
}
|
|
|
|
|
2008-11-26 06:57:25 +07:00
|
|
|
int register_ftrace_graph(trace_func_graph_ret_t retfunc,
|
|
|
|
trace_func_graph_ent_t entryfunc)
|
2008-11-11 13:14:25 +07:00
|
|
|
{
|
2008-11-16 12:02:06 +07:00
|
|
|
int ret = 0;
|
|
|
|
|
2009-02-14 13:42:44 +07:00
|
|
|
mutex_lock(&ftrace_lock);
|
2008-11-16 12:02:06 +07:00
|
|
|
|
2009-03-24 11:18:31 +07:00
|
|
|
/* we currently allow only one tracer registered at a time */
|
2009-04-04 02:24:12 +07:00
|
|
|
if (ftrace_graph_active) {
|
2009-03-24 11:18:31 +07:00
|
|
|
ret = -EBUSY;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2009-01-15 04:33:27 +07:00
|
|
|
ftrace_suspend_notifier.notifier_call = ftrace_suspend_notifier_call;
|
|
|
|
register_pm_notifier(&ftrace_suspend_notifier);
|
|
|
|
|
2009-04-04 02:24:12 +07:00
|
|
|
ftrace_graph_active++;
|
2008-11-26 03:07:04 +07:00
|
|
|
ret = start_graph_tracing();
|
2008-11-23 12:22:56 +07:00
|
|
|
if (ret) {
|
2009-04-04 02:24:12 +07:00
|
|
|
ftrace_graph_active--;
|
2008-11-23 12:22:56 +07:00
|
|
|
goto out;
|
|
|
|
}
|
2008-11-26 12:16:25 +07:00
|
|
|
|
2008-11-26 06:57:25 +07:00
|
|
|
ftrace_graph_return = retfunc;
|
|
|
|
ftrace_graph_entry = entryfunc;
|
2008-11-26 12:16:25 +07:00
|
|
|
|
2011-05-24 02:24:25 +07:00
|
|
|
ret = ftrace_startup(&global_ops, FTRACE_START_FUNC_RET);
|
2008-11-16 12:02:06 +07:00
|
|
|
|
|
|
|
out:
|
2009-02-14 13:42:44 +07:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-11-16 12:02:06 +07:00
|
|
|
return ret;
|
2008-11-11 13:14:25 +07:00
|
|
|
}
|
|
|
|
|
2008-11-26 03:07:04 +07:00
|
|
|
void unregister_ftrace_graph(void)
|
2008-11-11 13:14:25 +07:00
|
|
|
{
|
2009-02-14 13:42:44 +07:00
|
|
|
mutex_lock(&ftrace_lock);
|
2008-11-16 12:02:06 +07:00
|
|
|
|
2009-04-04 02:24:12 +07:00
|
|
|
if (unlikely(!ftrace_graph_active))
|
2009-03-30 22:11:28 +07:00
|
|
|
goto out;
|
|
|
|
|
2009-04-04 02:24:12 +07:00
|
|
|
ftrace_graph_active--;
|
2008-11-26 06:57:25 +07:00
|
|
|
ftrace_graph_return = (trace_func_graph_ret_t)ftrace_stub;
|
2008-12-03 11:50:05 +07:00
|
|
|
ftrace_graph_entry = ftrace_graph_entry_stub;
|
2011-05-04 08:55:54 +07:00
|
|
|
ftrace_shutdown(&global_ops, FTRACE_STOP_FUNC_RET);
|
2009-01-15 04:33:27 +07:00
|
|
|
unregister_pm_notifier(&ftrace_suspend_notifier);
|
tracing: Let tracepoints have data passed to tracepoint callbacks
This patch adds data to be passed to tracepoint callbacks.
The created functions from DECLARE_TRACE() now need a mandatory data
parameter. For example:
DECLARE_TRACE(mytracepoint, int value, value)
Will create the register function:
int register_trace_mytracepoint((void(*)(void *data, int value))probe,
void *data);
As the first argument, all callbacks (probes) must take a (void *data)
parameter. So a callback for the above tracepoint will look like:
void myprobe(void *data, int value)
{
}
The callback may choose to ignore the data parameter.
This change allows callbacks to register a private data pointer along
with the function probe.
void mycallback(void *data, int value);
register_trace_mytracepoint(mycallback, mydata);
Then the mycallback() will receive the "mydata" as the first parameter
before the args.
A more detailed example:
DECLARE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
/* In the C file */
DEFINE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
[...]
trace_mytracepoint(status);
/* In a file registering this tracepoint */
int my_callback(void *data, int status)
{
struct my_struct my_data = data;
[...]
}
[...]
my_data = kmalloc(sizeof(*my_data), GFP_KERNEL);
init_my_data(my_data);
register_trace_mytracepoint(my_callback, my_data);
The same callback can also be registered to the same tracepoint as long
as the data registered is different. Note, the data must also be used
to unregister the callback:
unregister_trace_mytracepoint(my_callback, my_data);
Because of the data parameter, tracepoints declared this way can not have
no args. That is:
DECLARE_TRACE(mytracepoint, TP_PROTO(void), TP_ARGS());
will cause an error.
If no arguments are needed, a new macro can be used instead:
DECLARE_TRACE_NOARGS(mytracepoint);
Since there are no arguments, the proto and args fields are left out.
This is part of a series to make the tracepoint footprint smaller:
text data bss dec hex filename
4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
4914025 1088868 861512 6864405 68be15 vmlinux.class
4918492 1084612 861512 6864616 68bee8 vmlinux.tracepoint
Again, this patch also increases the size of the kernel, but
lays the ground work for decreasing it.
v5: Fixed net/core/drop_monitor.c to handle these updates.
v4: Moved the DECLARE_TRACE() DECLARE_TRACE_NOARGS out of the
#ifdef CONFIG_TRACE_POINTS, since the two are the same in both
cases. The __DECLARE_TRACE() is what changes.
Thanks to Frederic Weisbecker for pointing this out.
v3: Made all register_* functions require data to be passed and
all callbacks to take a void * parameter as its first argument.
This makes the calling functions comply with C standards.
Also added more comments to the modifications of DECLARE_TRACE().
v2: Made the DECLARE_TRACE() have the ability to pass arguments
and added a new DECLARE_TRACE_NOARGS() for tracepoints that
do not need any arguments.
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-04-21 04:04:50 +07:00
|
|
|
unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
|
2008-11-16 12:02:06 +07:00
|
|
|
|
2009-03-30 22:11:28 +07:00
|
|
|
out:
|
2009-02-14 13:42:44 +07:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-11-11 13:14:25 +07:00
|
|
|
}
|
2008-11-23 12:22:56 +07:00
|
|
|
|
ftrace: Fix memory leak with function graph and cpu hotplug
When the fuction graph tracer starts, it needs to make a special
stack for each task to save the real return values of the tasks.
All running tasks have this stack created, as well as any new
tasks.
On CPU hot plug, the new idle task will allocate a stack as well
when init_idle() is called. The problem is that cpu hotplug does
not create a new idle_task. Instead it uses the idle task that
existed when the cpu went down.
ftrace_graph_init_task() will add a new ret_stack to the task
that is given to it. Because a clone will make the task
have a stack of its parent it does not check if the task's
ret_stack is already NULL or not. When the CPU hotplug code
starts a CPU up again, it will allocate a new stack even
though one already existed for it.
The solution is to treat the idle_task specially. In fact, the
function_graph code already does, just not at init_idle().
Instead of using the ftrace_graph_init_task() for the idle task,
which that function expects the task to be a clone, have a
separate ftrace_graph_init_idle_task(). Also, we will create a
per_cpu ret_stack that is used by the idle task. When we call
ftrace_graph_init_idle_task() it will check if the idle task's
ret_stack is NULL, if it is, then it will assign it the per_cpu
ret_stack.
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stable Tree <stable@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-02-11 09:26:13 +07:00
|
|
|
static DEFINE_PER_CPU(struct ftrace_ret_stack *, idle_ret_stack);
|
|
|
|
|
|
|
|
static void
|
|
|
|
graph_init_task(struct task_struct *t, struct ftrace_ret_stack *ret_stack)
|
|
|
|
{
|
|
|
|
atomic_set(&t->tracing_graph_pause, 0);
|
|
|
|
atomic_set(&t->trace_overrun, 0);
|
|
|
|
t->ftrace_timestamp = 0;
|
2011-03-31 08:57:33 +07:00
|
|
|
/* make curr_ret_stack visible before we add the ret_stack */
|
2011-02-11 09:26:13 +07:00
|
|
|
smp_wmb();
|
|
|
|
t->ret_stack = ret_stack;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Allocate a return stack for the idle task. May be the first
|
|
|
|
* time through, or it may be done by CPU hotplug online.
|
|
|
|
*/
|
|
|
|
void ftrace_graph_init_idle_task(struct task_struct *t, int cpu)
|
|
|
|
{
|
|
|
|
t->curr_ret_stack = -1;
|
|
|
|
/*
|
|
|
|
* The idle task has no parent, it either has its own
|
|
|
|
* stack or no stack at all.
|
|
|
|
*/
|
|
|
|
if (t->ret_stack)
|
|
|
|
WARN_ON(t->ret_stack != per_cpu(idle_ret_stack, cpu));
|
|
|
|
|
|
|
|
if (ftrace_graph_active) {
|
|
|
|
struct ftrace_ret_stack *ret_stack;
|
|
|
|
|
|
|
|
ret_stack = per_cpu(idle_ret_stack, cpu);
|
|
|
|
if (!ret_stack) {
|
|
|
|
ret_stack = kmalloc(FTRACE_RETFUNC_DEPTH
|
|
|
|
* sizeof(struct ftrace_ret_stack),
|
|
|
|
GFP_KERNEL);
|
|
|
|
if (!ret_stack)
|
|
|
|
return;
|
|
|
|
per_cpu(idle_ret_stack, cpu) = ret_stack;
|
|
|
|
}
|
|
|
|
graph_init_task(t, ret_stack);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2008-11-23 12:22:56 +07:00
|
|
|
/* Allocate a return stack for newly created task */
|
2008-11-26 03:07:04 +07:00
|
|
|
void ftrace_graph_init_task(struct task_struct *t)
|
2008-11-23 12:22:56 +07:00
|
|
|
{
|
2009-06-03 03:51:55 +07:00
|
|
|
/* Make sure we do not use the parent ret_stack */
|
|
|
|
t->ret_stack = NULL;
|
2010-03-13 07:41:23 +07:00
|
|
|
t->curr_ret_stack = -1;
|
2009-06-03 03:51:55 +07:00
|
|
|
|
2009-04-04 02:24:12 +07:00
|
|
|
if (ftrace_graph_active) {
|
2009-06-02 23:26:07 +07:00
|
|
|
struct ftrace_ret_stack *ret_stack;
|
|
|
|
|
|
|
|
ret_stack = kmalloc(FTRACE_RETFUNC_DEPTH
|
2008-11-23 12:22:56 +07:00
|
|
|
* sizeof(struct ftrace_ret_stack),
|
|
|
|
GFP_KERNEL);
|
2009-06-02 23:26:07 +07:00
|
|
|
if (!ret_stack)
|
2008-11-23 12:22:56 +07:00
|
|
|
return;
|
2011-02-11 09:26:13 +07:00
|
|
|
graph_init_task(t, ret_stack);
|
2009-06-03 03:51:55 +07:00
|
|
|
}
|
2008-11-23 12:22:56 +07:00
|
|
|
}
|
|
|
|
|
2008-11-26 03:07:04 +07:00
|
|
|
void ftrace_graph_exit_task(struct task_struct *t)
|
2008-11-23 12:22:56 +07:00
|
|
|
{
|
2008-11-23 23:33:12 +07:00
|
|
|
struct ftrace_ret_stack *ret_stack = t->ret_stack;
|
|
|
|
|
2008-11-23 12:22:56 +07:00
|
|
|
t->ret_stack = NULL;
|
2008-11-23 23:33:12 +07:00
|
|
|
/* NULL must become visible to IRQs before we free it: */
|
|
|
|
barrier();
|
|
|
|
|
|
|
|
kfree(ret_stack);
|
2008-11-23 12:22:56 +07:00
|
|
|
}
|
2008-12-03 11:50:02 +07:00
|
|
|
|
|
|
|
void ftrace_graph_stop(void)
|
|
|
|
{
|
|
|
|
ftrace_stop();
|
|
|
|
}
|
2008-11-11 13:14:25 +07:00
|
|
|
#endif
|