/*
 * builtin-record.c
 *
 * Builtin record command: Record the profile of a workload
 * (or a CPU, or a PID) into the perf.data output file - for
 * later analysis via perf report.
 */
#define _FILE_OFFSET_BITS 64

#include "builtin.h"

#include "perf.h"

#include "util/build-id.h"
#include "util/util.h"
#include "util/parse-options.h"
#include "util/parse-events.h"

#include "util/header.h"
#include "util/event.h"
#include "util/evsel.h"
#include "util/debug.h"
#include "util/session.h"
#include "util/symbol.h"
#include "util/cpumap.h"

#include <unistd.h>
#include <sched.h>
#include <sys/mman.h>

#define FD(e, x, y) (*(int *)xyarray__entry(e->fd, x, y))

enum write_mode_t {
        WRITE_FORCE,
        WRITE_APPEND
};

static u64 user_interval = ULLONG_MAX;
static u64 default_interval = 0;
static u64 sample_type;

static struct cpu_map *cpus;
static unsigned int page_size;
static unsigned int mmap_pages = 128;
static unsigned int user_freq = UINT_MAX;
static int freq = 1000;
static int output;
static int pipe_output = 0;
static const char *output_name = "perf.data";
static int group = 0;
static int realtime_prio = 0;
static bool nodelay = false;
static bool raw_samples = false;
static bool sample_id_all_avail = true;
static bool system_wide = false;
static pid_t target_pid = -1;
static pid_t target_tid = -1;
static struct thread_map *threads;
static pid_t child_pid = -1;
static bool no_inherit = false;
static enum write_mode_t write_mode = WRITE_FORCE;
static bool call_graph = false;
static bool inherit_stat = false;
static bool no_samples = false;
static bool sample_address = false;
static bool sample_time = false;
static bool no_buildid = false;
static bool no_buildid_cache = false;

static long samples = 0;
static u64 bytes_written = 0;

static struct pollfd *event_array;

static int nr_poll = 0;
static int nr_cpu = 0;

static int file_new = 1;
static off_t post_processing_offset;

static struct perf_session *session;
static const char *cpu_list;

struct mmap_data {
        void *base;
        unsigned int mask;
        unsigned int prev;
};

static struct mmap_data mmap_array[MAX_NR_CPUS];

static unsigned long mmap_read_head(struct mmap_data *md)
{
        struct perf_event_mmap_page *pc = md->base;
        long head;

        head = pc->data_head;
        rmb();

        return head;
}

static void mmap_write_tail(struct mmap_data *md, unsigned long tail)
{
        struct perf_event_mmap_page *pc = md->base;

        /*
         * ensure all reads are done before we write the tail out.
         */
        /* mb(); */
        pc->data_tail = tail;
}

static void advance_output(size_t size)
{
        bytes_written += size;
}

static void write_output(void *buf, size_t size)
{
        while (size) {
                int ret = write(output, buf, size);

                if (ret < 0)
                        die("failed to write");

                size -= ret;
                buf += ret;

                bytes_written += ret;
        }
}

static int process_synthesized_event(event_t *event,
                                     struct sample_data *sample __used,
                                     struct perf_session *self __used)
{
        write_output(event, event->header.size);
        return 0;
}

static void mmap_read(struct mmap_data *md)
{
        unsigned int head = mmap_read_head(md);
        unsigned int old = md->prev;
        unsigned char *data = md->base + page_size;
        unsigned long size;
        void *buf;
        int diff;

        /*
         * If we're further behind than half the buffer, there's a chance
         * the writer will bite our tail and mess up the samples under us.
         *
         * If we somehow ended up ahead of the head, we got messed up.
         *
         * In either case, truncate and restart at head.
         */
        diff = head - old;
        if (diff < 0) {
                fprintf(stderr, "WARNING: failed to keep up with mmap data\n");
                /*
                 * head points to a known good entry, start there.
                 */
                old = head;
        }

        if (old != head)
                samples++;

        size = head - old;

        if ((old & md->mask) + size != (head & md->mask)) {
                buf = &data[old & md->mask];
                size = md->mask + 1 - (old & md->mask);
                old += size;

                write_output(buf, size);
        }

        buf = &data[old & md->mask];
        size = head - old;
        old += size;

        write_output(buf, size);

        md->prev = old;
        mmap_write_tail(md, old);
}

static volatile int done = 0;
static volatile int signr = -1;

static void sig_handler(int sig)
{
        done = 1;
        signr = sig;
}

static void sig_atexit(void)
{
        if (child_pid > 0)
                kill(child_pid, SIGTERM);

        if (signr == -1 || signr == SIGUSR1)
                return;

        signal(signr, SIG_DFL);
        kill(getpid(), signr);
}

static int group_fd;

static struct perf_header_attr *get_header_attr(struct perf_event_attr *a, int nr)
{
        struct perf_header_attr *h_attr;

        if (nr < session->header.attrs) {
                h_attr = session->header.attr[nr];
        } else {
                h_attr = perf_header_attr__new(a);
                if (h_attr != NULL)
                        if (perf_header__add_attr(&session->header, h_attr) < 0) {
                                perf_header_attr__delete(h_attr);
                                h_attr = NULL;
                        }
        }

        return h_attr;
}

static void create_counter(struct perf_evsel *evsel, int cpu)
{
        char *filter = evsel->filter;
        struct perf_event_attr *attr = &evsel->attr;
        struct perf_header_attr *h_attr;
        int track = !evsel->idx; /* only the first counter needs these */
        int thread_index;
        int ret;
        struct {
                u64 count;
                u64 time_enabled;
                u64 time_running;
                u64 id;
        } read_data;
        /*
         * Check if parse_single_tracepoint_event has already asked for
         * PERF_SAMPLE_TIME.
         *
         * XXX this is kludgy but short term fix for problems introduced by
         * eac23d1c that broke 'perf script' by having different sample_types
         * when using multiple tracepoint events when we use a perf binary
         * that tries to use sample_id_all on an older kernel.
         *
         * We need to move counter creation to perf_session, support
         * different sample_types, etc.
         */
        bool time_needed = attr->sample_type & PERF_SAMPLE_TIME;

        attr->read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
                            PERF_FORMAT_TOTAL_TIME_RUNNING |
                            PERF_FORMAT_ID;

        attr->sample_type |= PERF_SAMPLE_IP | PERF_SAMPLE_TID;

        if (nr_counters > 1)
                attr->sample_type |= PERF_SAMPLE_ID;

        /*
         * We default some events to a 1 default interval. But keep
         * it a weak assumption overridable by the user.
         */
        if (!attr->sample_period || (user_freq != UINT_MAX &&
                                     user_interval != ULLONG_MAX)) {
                if (freq) {
                        attr->sample_type |= PERF_SAMPLE_PERIOD;
                        attr->freq = 1;
                        attr->sample_freq = freq;
                } else {
                        attr->sample_period = default_interval;
                }
        }

        if (no_samples)
                attr->sample_freq = 0;

        if (inherit_stat)
                attr->inherit_stat = 1;

        if (sample_address) {
                attr->sample_type |= PERF_SAMPLE_ADDR;
                attr->mmap_data = track;
        }

        if (call_graph)
                attr->sample_type |= PERF_SAMPLE_CALLCHAIN;

        if (system_wide)
                attr->sample_type |= PERF_SAMPLE_CPU;

        if (sample_id_all_avail &&
            (sample_time || system_wide || !no_inherit || cpu_list))
                attr->sample_type |= PERF_SAMPLE_TIME;

        if (raw_samples) {
                attr->sample_type |= PERF_SAMPLE_TIME;
                attr->sample_type |= PERF_SAMPLE_RAW;
                attr->sample_type |= PERF_SAMPLE_CPU;
        }
|
	if (nodelay) {
		attr->watermark = 0;
		attr->wakeup_events = 1;
	}

	attr->mmap		= track;
	attr->comm		= track;
	attr->inherit		= !no_inherit;
	if (target_pid == -1 && target_tid == -1 && !system_wide) {
		attr->disabled = 1;
		attr->enable_on_exec = 1;
	}
retry_sample_id:
	attr->sample_id_all = sample_id_all_avail ? 1 : 0;

	for (thread_index = 0; thread_index < threads->nr; thread_index++) {
try_again:
		FD(evsel, nr_cpu, thread_index) = sys_perf_event_open(attr, threads->map[thread_index], cpu, group_fd, 0);

		if (FD(evsel, nr_cpu, thread_index) < 0) {
			int err = errno;

			if (err == EPERM || err == EACCES)
				die("Permission error - are you root?\n"
					"\t Consider tweaking"
					" /proc/sys/kernel/perf_event_paranoid.\n");
			else if (err == ENODEV && cpu_list) {
				die("No such device - did you specify"
					" an out-of-range profile CPU?\n");
			} else if (err == EINVAL && sample_id_all_avail) {
				/*
				 * Old kernel, no attr->sample_id_type_all field
				 */
				sample_id_all_avail = false;
				if (!sample_time && !raw_samples && !time_needed)
					attr->sample_type &= ~PERF_SAMPLE_TIME;

				goto retry_sample_id;
			}

			/*
			 * If it's cycles then fall back to hrtimer
			 * based cpu-clock-tick sw counter, which
			 * is always available even if no PMU support:
			 */
			if (attr->type == PERF_TYPE_HARDWARE
					&& attr->config == PERF_COUNT_HW_CPU_CYCLES) {

				if (verbose)
					warning(" ... trying to fall back to cpu-clock-ticks\n");
				attr->type = PERF_TYPE_SOFTWARE;
				attr->config = PERF_COUNT_SW_CPU_CLOCK;
				goto try_again;
			}
			printf("\n");
			error("sys_perf_event_open() syscall returned with %d (%s).  /bin/dmesg may provide additional information.\n",
			      FD(evsel, nr_cpu, thread_index), strerror(err));

#if defined(__i386__) || defined(__x86_64__)
			if (attr->type == PERF_TYPE_HARDWARE && err == EOPNOTSUPP)
				die("No hardware sampling interrupt available."
				    " No APIC? If so then you can boot the kernel"
				    " with the \"lapic\" boot parameter to"
				    " force-enable it.\n");
#endif

			die("No CONFIG_PERF_EVENTS=y kernel support configured?\n");
			exit(-1);
		}

		h_attr = get_header_attr(attr, evsel->idx);
		if (h_attr == NULL)
			die("nomem\n");

		if (!file_new) {
			if (memcmp(&h_attr->attr, attr, sizeof(*attr))) {
				fprintf(stderr, "incompatible append\n");
				exit(-1);
			}
		}

		if (read(FD(evsel, nr_cpu, thread_index), &read_data, sizeof(read_data)) == -1) {
			perror("Unable to read perf file descriptor");
			exit(-1);
		}

		if (perf_header_attr__add_id(h_attr, read_data.id) < 0) {
			pr_warning("Not enough memory to add id\n");
			exit(-1);
		}

		assert(FD(evsel, nr_cpu, thread_index) >= 0);
		fcntl(FD(evsel, nr_cpu, thread_index), F_SETFL, O_NONBLOCK);

		/*
		 * First counter acts as the group leader:
		 */
		if (group && group_fd == -1)
			group_fd = FD(evsel, nr_cpu, thread_index);

		if (evsel->idx || thread_index) {
			struct perf_evsel *first;
			first = list_entry(evsel_list.next, struct perf_evsel, node);
			ret = ioctl(FD(evsel, nr_cpu, thread_index),
				    PERF_EVENT_IOC_SET_OUTPUT,
				    FD(first, nr_cpu, 0));
			if (ret) {
				error("failed to set output: %d (%s)\n", errno,
						strerror(errno));
				exit(-1);
			}
		} else {
			mmap_array[nr_cpu].prev = 0;
			mmap_array[nr_cpu].mask = mmap_pages*page_size - 1;
			mmap_array[nr_cpu].base = mmap(NULL, (mmap_pages+1)*page_size,
				PROT_READ | PROT_WRITE, MAP_SHARED, FD(evsel, nr_cpu, thread_index), 0);
			if (mmap_array[nr_cpu].base == MAP_FAILED) {
				error("failed to mmap with %d (%s)\n", errno, strerror(errno));
				exit(-1);
			}

			event_array[nr_poll].fd = FD(evsel, nr_cpu, thread_index);
			event_array[nr_poll].events = POLLIN;
			nr_poll++;
		}

		if (filter != NULL) {
			ret = ioctl(FD(evsel, nr_cpu, thread_index),
				    PERF_EVENT_IOC_SET_FILTER, filter);
			if (ret) {
				error("failed to set filter with %d (%s)\n", errno,
						strerror(errno));
				exit(-1);
			}
		}
	}

	if (!sample_type)
		sample_type = attr->sample_type;
}

static void open_counters(int cpu)
{
	struct perf_evsel *pos;

	group_fd = -1;

	list_for_each_entry(pos, &evsel_list, node)
		create_counter(pos, cpu);

	nr_cpu++;
}

static int process_buildids(void)
{
	u64 size = lseek(output, 0, SEEK_CUR);

	if (size == 0)
		return 0;

	session->fd = output;
	return __perf_session__process_events(session, post_processing_offset,
					      size - post_processing_offset,
					      size, &build_id__mark_dso_hit_ops);
}

static void atexit_header(void)
{
	if (!pipe_output) {
		session->header.data_size += bytes_written;

		if (!no_buildid)
			process_buildids();
		perf_header__write(&session->header, output, true);
		perf_session__delete(session);
		perf_evsel_list__delete();
		symbol__exit();
	}
}

static void event__synthesize_guest_os(struct machine *machine, void *data)
{
	int err;
	struct perf_session *psession = data;

	if (machine__is_host(machine))
		return;

	/*
	 * As for guest kernel when processing subcommand record&report,
	 * we arrange module mmap prior to guest kernel mmap and trigger
	 * a preload dso because default guest module symbols are loaded
	 * from guest kallsyms instead of /lib/modules/XXX/XXX. This
	 * method is used to avoid symbol missing when the first addr is
	 * in module instead of in guest kernel.
	 */
	err = event__synthesize_modules(process_synthesized_event,
					psession, machine);
	if (err < 0)
		pr_err("Couldn't record guest kernel [%d]'s reference"
		       " relocation symbol.\n", machine->pid);

	/*
	 * We use _stext for guest kernel because guest kernel's /proc/kallsyms
	 * have no _text sometimes.
	 */
	err = event__synthesize_kernel_mmap(process_synthesized_event,
					    psession, machine, "_text");
	if (err < 0)
		err = event__synthesize_kernel_mmap(process_synthesized_event,
						    psession, machine, "_stext");
	if (err < 0)
		pr_err("Couldn't record guest kernel [%d]'s reference"
		       " relocation symbol.\n", machine->pid);
}

static struct perf_event_header finished_round_event = {
	.size = sizeof(struct perf_event_header),
	.type = PERF_RECORD_FINISHED_ROUND,
};

static void mmap_read_all(void)
{
	int i;

	for (i = 0; i < nr_cpu; i++) {
		if (mmap_array[i].base)
			mmap_read(&mmap_array[i]);
	}

	if (perf_header__has_feat(&session->header, HEADER_TRACE_INFO))
		write_output(&finished_round_event, sizeof(finished_round_event));
}

static int __cmd_record(int argc, const char **argv)
{
	int i;
	struct stat st;
	int flags;
	int err;
	unsigned long waking = 0;
	int child_ready_pipe[2], go_pipe[2];
	const bool forks = argc > 0;
	char buf;
	struct machine *machine;

	page_size = sysconf(_SC_PAGE_SIZE);

	atexit(sig_atexit);
	signal(SIGCHLD, sig_handler);
	signal(SIGINT, sig_handler);
	signal(SIGUSR1, sig_handler);

	if (forks && (pipe(child_ready_pipe) < 0 || pipe(go_pipe) < 0)) {
		perror("failed to create pipes");
		exit(-1);
	}

	if (!strcmp(output_name, "-"))
		pipe_output = 1;
	else if (!stat(output_name, &st) && st.st_size) {
		if (write_mode == WRITE_FORCE) {
			char oldname[PATH_MAX];
			snprintf(oldname, sizeof(oldname), "%s.old",
				 output_name);
			unlink(oldname);
			rename(output_name, oldname);
		}
	} else if (write_mode == WRITE_APPEND) {
		write_mode = WRITE_FORCE;
	}

	flags = O_CREAT|O_RDWR;
	if (write_mode == WRITE_APPEND)
		file_new = 0;
	else
		flags |= O_TRUNC;

	if (pipe_output)
		output = STDOUT_FILENO;
	else
		output = open(output_name, flags, S_IRUSR | S_IWUSR);
	if (output < 0) {
		perror("failed to create output file");
		exit(-1);
	}

	session = perf_session__new(output_name, O_WRONLY,
				    write_mode == WRITE_FORCE, false, NULL);
	if (session == NULL) {
		pr_err("Not enough memory for reading perf file header\n");
		return -1;
	}

	if (!no_buildid)
		perf_header__set_feat(&session->header, HEADER_BUILD_ID);

	if (!file_new) {
		err = perf_header__read(session, output);
		if (err < 0)
			goto out_delete_session;
	}

	if (have_tracepoints(&evsel_list))
		perf_header__set_feat(&session->header, HEADER_TRACE_INFO);

	/*
	 * perf_session__delete(session) will be called at atexit_header()
	 */
	atexit(atexit_header);

	if (forks) {
		child_pid = fork();
		if (child_pid < 0) {
			perror("failed to fork");
			exit(-1);
		}

		if (!child_pid) {
			if (pipe_output)
				dup2(2, 1);
			close(child_ready_pipe[0]);
			close(go_pipe[1]);
			fcntl(go_pipe[0], F_SETFD, FD_CLOEXEC);

			/*
			 * Do a dummy execvp to get the PLT entry resolved,
			 * so we avoid the resolver overhead on the real
			 * execvp call.
			 */
			execvp("", (char **)argv);

			/*
			 * Tell the parent we're ready to go
			 */
			close(child_ready_pipe[1]);

			/*
			 * Wait until the parent tells us to go.
			 */
			if (read(go_pipe[0], &buf, 1) == -1)
				perror("unable to read pipe");

			execvp(argv[0], (char **)argv);

			perror(argv[0]);
			kill(getppid(), SIGUSR1);
			exit(-1);
		}

		if (!system_wide && target_tid == -1 && target_pid == -1)
			threads->map[0] = child_pid;

		close(child_ready_pipe[1]);
		close(go_pipe[0]);
		/*
		 * wait for child to settle
		 */
		if (read(child_ready_pipe[0], &buf, 1) == -1) {
			perror("unable to read pipe");
			exit(-1);
		}
		close(child_ready_pipe[0]);
	}

	if (!system_wide && no_inherit && !cpu_list) {
		open_counters(-1);
	} else {
		for (i = 0; i < cpus->nr; i++)
			open_counters(cpus->map[i]);
	}

	perf_session__set_sample_type(session, sample_type);

	if (pipe_output) {
		err = perf_header__write_pipe(output);
		if (err < 0)
			return err;
	} else if (file_new) {
		err = perf_header__write(&session->header, output, false);
		if (err < 0)
			return err;
	}

	post_processing_offset = lseek(output, 0, SEEK_CUR);

	perf_session__set_sample_id_all(session, sample_id_all_avail);

	if (pipe_output) {
		err = event__synthesize_attrs(&session->header,
					      process_synthesized_event,
					      session);
		if (err < 0) {
			pr_err("Couldn't synthesize attrs.\n");
			return err;
		}

		err = event__synthesize_event_types(process_synthesized_event,
						    session);
		if (err < 0) {
			pr_err("Couldn't synthesize event_types.\n");
			return err;
		}

		if (have_tracepoints(&evsel_list)) {
			/*
			 * FIXME err <= 0 here actually means that
			 * there were no tracepoints so its not really
			 * an error, just that we don't need to
			 * synthesize anything. We really have to
			 * return this more properly and also
			 * propagate errors that now are calling die()
			 */
			err = event__synthesize_tracing_data(output, &evsel_list,
							     process_synthesized_event,
							     session);
			if (err <= 0) {
				pr_err("Couldn't record tracing data.\n");
				return err;
			}
			advance_output(err);
		}
	}

	machine = perf_session__find_host_machine(session);
	if (!machine) {
		pr_err("Couldn't find native kernel information.\n");
		return -1;
	}

	err = event__synthesize_kernel_mmap(process_synthesized_event,
					    session, machine, "_text");
	if (err < 0)
		err = event__synthesize_kernel_mmap(process_synthesized_event,
						    session, machine, "_stext");
	if (err < 0)
		pr_err("Couldn't record kernel reference relocation symbol\n"
		       "Symbol resolution may be skewed if relocation was used (e.g. kexec).\n"
		       "Check /proc/kallsyms permission or run as root.\n");

	err = event__synthesize_modules(process_synthesized_event,
					session, machine);
	if (err < 0)
		pr_err("Couldn't record kernel module information.\n"
		       "Symbol resolution may be skewed if relocation was used (e.g. kexec).\n"
		       "Check /proc/modules permission or run as root.\n");

	if (perf_guest)
		perf_session__process_machines(session, event__synthesize_guest_os);

	if (!system_wide)
		event__synthesize_thread(target_tid, process_synthesized_event,
					 session);
	else
		event__synthesize_threads(process_synthesized_event, session);

	if (realtime_prio) {
		struct sched_param param;

		param.sched_priority = realtime_prio;
		if (sched_setscheduler(0, SCHED_FIFO, &param)) {
			pr_err("Could not set realtime priority.\n");
			exit(-1);
		}
	}

	/*
	 * Let the child rip
	 */
	if (forks)
		close(go_pipe[1]);

	for (;;) {
		int hits = samples;
		int thread;

		mmap_read_all();

		if (hits == samples) {
			if (done)
				break;
			err = poll(event_array, nr_poll, -1);
			waking++;
		}

		if (done) {
			for (i = 0; i < nr_cpu; i++) {
				struct perf_evsel *pos;

				list_for_each_entry(pos, &evsel_list, node) {
					for (thread = 0;
						thread < threads->nr;
						thread++)
						ioctl(FD(pos, i, thread),
							PERF_EVENT_IOC_DISABLE);
				}
			}
		}
	}

	if (quiet || signr == SIGUSR1)
		return 0;

	fprintf(stderr, "[ perf record: Woken up %ld times to write data ]\n", waking);

	/*
	 * Approximate RIP event size: 24 bytes.
	 */
	fprintf(stderr,
		"[ perf record: Captured and wrote %.3f MB %s (~%lld samples) ]\n",
		(double)bytes_written / 1024.0 / 1024.0,
		output_name,
		bytes_written / 24);

	return 0;

out_delete_session:
	perf_session__delete(session);
	return err;
}

static const char * const record_usage[] = {
	"perf record [<options>] [<command>]",
	"perf record [<options>] -- <command> [<options>]",
	NULL
};

static bool force, append_file;

const struct option record_options[] = {
	OPT_CALLBACK('e', "event", NULL, "event",
		     "event selector. use 'perf list' to list available events",
		     parse_events),
	OPT_CALLBACK(0, "filter", NULL, "filter",
		     "event filter", parse_filter),
	OPT_INTEGER('p', "pid", &target_pid,
		    "record events on existing process id"),
	OPT_INTEGER('t', "tid", &target_tid,
		    "record events on existing thread id"),
	OPT_INTEGER('r', "realtime", &realtime_prio,
		    "collect data with this RT SCHED_FIFO priority"),
	OPT_BOOLEAN('D', "no-delay", &nodelay,
		    "collect data without buffering"),
	OPT_BOOLEAN('R', "raw-samples", &raw_samples,
		    "collect raw sample records from all opened counters"),
	OPT_BOOLEAN('a', "all-cpus", &system_wide,
		    "system-wide collection from all CPUs"),
	OPT_BOOLEAN('A', "append", &append_file,
		    "append to the output file to do incremental profiling"),
	OPT_STRING('C', "cpu", &cpu_list, "cpu",
		   "list of cpus to monitor"),
	OPT_BOOLEAN('f', "force", &force,
		    "overwrite existing data file (deprecated)"),
	OPT_U64('c', "count", &user_interval, "event period to sample"),
	OPT_STRING('o', "output", &output_name, "file",
		   "output file name"),
	OPT_BOOLEAN('i', "no-inherit", &no_inherit,
		    "child tasks do not inherit counters"),
	OPT_UINTEGER('F', "freq", &user_freq, "profile at this frequency"),
	OPT_UINTEGER('m', "mmap-pages", &mmap_pages, "number of mmap data pages"),
	OPT_BOOLEAN('g', "call-graph", &call_graph,
		    "do call-graph (stack chain/backtrace) recording"),
	OPT_INCR('v', "verbose", &verbose,
		 "be more verbose (show counter open errors, etc)"),
	OPT_BOOLEAN('q', "quiet", &quiet, "don't print any message"),
	OPT_BOOLEAN('s', "stat", &inherit_stat,
		    "per thread counts"),
	OPT_BOOLEAN('d', "data", &sample_address,
		    "Sample addresses"),
	OPT_BOOLEAN('T', "timestamp", &sample_time, "Sample timestamps"),
	OPT_BOOLEAN('n', "no-samples", &no_samples,
		    "don't sample"),
	OPT_BOOLEAN('N', "no-buildid-cache", &no_buildid_cache,
		    "do not update the buildid cache"),
	OPT_BOOLEAN('B', "no-buildid", &no_buildid,
		    "do not collect buildids in perf.data"),
	OPT_END()
};

int cmd_record(int argc, const char **argv, const char *prefix __used)
{
	int err = -ENOMEM;
	struct perf_evsel *pos;

	argc = parse_options(argc, argv, record_options, record_usage,
			    PARSE_OPT_STOP_AT_NON_OPTION);
	if (!argc && target_pid == -1 && target_tid == -1 &&
	    !system_wide && !cpu_list)
		usage_with_options(record_usage, record_options);

	if (force && append_file) {
		fprintf(stderr, "Can't overwrite and append at the same time."
				" You need to choose between -f and -A");
		usage_with_options(record_usage, record_options);
	} else if (append_file) {
		write_mode = WRITE_APPEND;
	} else {
		write_mode = WRITE_FORCE;
	}

	symbol__init();

	if (no_buildid_cache || no_buildid)
		disable_buildid_cache();

	if (list_empty(&evsel_list) && perf_evsel_list__create_default() < 0) {
		pr_err("Not enough memory for event selector list\n");
		goto out_symbol_exit;
	}

	if (target_pid != -1)
		target_tid = target_pid;

	threads = thread_map__new(target_pid, target_tid);
	if (threads == NULL) {
		pr_err("Problems finding threads of monitor\n");
		usage_with_options(record_usage, record_options);
	}

	cpus = cpu_map__new(cpu_list);
	if (cpus == NULL) {
		perror("failed to parse CPUs map");
		return -1;
	}

	list_for_each_entry(pos, &evsel_list, node) {
		if (perf_evsel__alloc_fd(pos, cpus->nr, threads->nr) < 0)
			goto out_free_fd;
	}
	event_array = malloc((sizeof(struct pollfd) * MAX_NR_CPUS *
			      MAX_COUNTERS * threads->nr));
	if (!event_array)
		goto out_free_fd;

	if (user_interval != ULLONG_MAX)
		default_interval = user_interval;
	if (user_freq != UINT_MAX)
		freq = user_freq;

	/*
	 * User specified count overrides default frequency.
	 */
	if (default_interval)
		freq = 0;
	else if (freq) {
		default_interval = freq;
	} else {
		fprintf(stderr, "frequency and count are zero, aborting\n");
		err = -EINVAL;
		goto out_free_event_array;
	}

	err = __cmd_record(argc, argv);

out_free_event_array:
	free(event_array);
out_free_fd:
	thread_map__delete(threads);
	threads = NULL;
out_symbol_exit:
	symbol__exit();
	return err;
}