User visible:
- Allow disabling/enabling events dynamically in 'perf top':
a 'perf top' session can instantly become a 'perf report'
one, i.e. going from dynamic analysis to a static one, and
back; to toggle the modes, just press CTRL+z. (Arnaldo Carvalho de Melo)
- Greatly speed up 'perf probe --list' by caching debuginfo
(Masami Hiramatsu)
- Fix 'perf trace' race condition at the end of started
workloads (Sukadev Bhattiprolu)
- Fix a problem when opening old perf.data with different
byte order (Wang Nan)
Infrastructure:
- Ignore .config-detected in .gitignore (Wang Nan)
- Move libtraceevent dynamic list to a separate LDFLAGS
variable (Wang Nan)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIcBAABAgAGBQJVgeMMAAoJENZQFvNTUqpAN6oP/jnkFswb0ul0ZhznxDrHfLwy
+mjItjuDpWBS2haRF7IPaBkMu+NwKbTIT45r2P+EcV8PvVbevxjaTmtzMgnbFxNE
al80A+y3HtFPbRj7Am5yaYnd5Z/BOVLfvqk41vX+GNBr9esqgrNNFwTbVatnn1fM
RF2fbr83VcQJO6CysT4EwxHS2dsFfG3R1R3VsO5aVrsGNjWoBhB+5ljRr4KHYUBN
CmEUut+J7vELZvmoJfiJzpfsesRydZegAHJc0MLPADvgGFn/bZbyZ95YfvFJcPq9
Tf/GPp4J2J0iJccNdV2RhBZNCUlDvz4PPdxZsAhaSdA/4Z0uwzcTLPA7GNlQZ0I5
h+tqBJTSM91m+6N2FRzZAEJpEnAdHONljNJeAl3XRR6/W1dh9HxxzWf5ECvu1Lpf
az4z3Wf5tnoNwRBP2wxV8vau/r0IgRDclw8TY8bRgdeMNSBxcG0i5G80uSFQRV1m
KoHGUA1ykjX/9NVAr7giOJ/B8UR2yK2TPWEzQH781RVKZZlH4mnXgNt0OqmXonuU
WKZSLKbGg8wCKRVbmf/Q/pf15jYLQnKKqm39YjTadkW+u4GeW9C89YyTaQeahy1o
efIVm4ecS4MOtqaOsedYqMvgf0Gtv1Ud+2hT0qH01jz1IsUCHgtPCkvjAysnOBK8
51AoegTS5VWzX3xa9Hy+
=hhW4
-----END PGP SIGNATURE-----
Merge tag 'perf-core-for-mingo-2' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
User visible changes:
- List perf probes to stdout. (Masami Hiramatsu)
- Return error when none of the requested probes were
installed. (Masami Hiramatsu)
- Cut off the gcc optimization postfixes from
function name in 'perf probe'. (Masami Hiramatsu)
- Allow disabling/enabling events dynamically in 'perf top':
a 'perf top' session can instantly become a 'perf report'
one, i.e. going from dynamic analysis to a static one, and
back; to toggle the modes, just press CTRL+z. (Arnaldo Carvalho de Melo)
- Greatly speed up 'perf probe --list' by caching debuginfo.
(Masami Hiramatsu)
- Fix 'perf trace' race condition at the end of started
workloads. (Sukadev Bhattiprolu)
- Fix a problem when opening old perf.data with different
byte order. (Wang Nan)
Infrastructure changes:
- Replace map->referenced & maps->removed_maps with
map->refcnt. (Arnaldo Carvalho de Melo)
- Introduce the xyarray__reset() function. (Jiri Olsa)
- Add thread_map__(alloc|realloc)() helpers. (Jiri Olsa)
- Move perf_evsel__(alloc|free|reset)_counts into stat object. (Jiri Olsa)
- Introduce perf_counts__(new|delete|reset)() functions. (Jiri Olsa)
- Ignore .config-detected in .gitignore. (Wang Nan)
- Move libtraceevent dynamic list to a separate LDFLAGS
variable. (Wang Nan)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Now it is possible to press CTRL+z at any time to disable the
events being monitored, essentially turning 'top' into 'report';
pressing CTRL+z again re-enables the events, returning to
the 'top' behaviour, i.e. dynamic updates + decaying of older samples.
One may want, for instance, to play with:
-d, --delay <n> number of seconds to delay between refreshes
and:
-z, --zero zero history across updates
Plus CTRL+z to see only the events since last zeroing, etc.
Suggested-by: Ingo Molnar <mingo@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-zq7tnh5462blt2yda0bcxh5b@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
For an upcoming feature in 'perf top' we will have a hotkey to
enable/disable events, so remember whether the events in the list are
enabled or disabled and allow toggling this state using a new
method.
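As a rough, standalone sketch of the idea (the names below are made up for the demo, not the actual evlist API):

#include <stdbool.h>
#include <stdio.h>

struct evlist_demo {
        bool enabled;   /* remembered enable state of the events in the list */
};

static void evlist_demo__toggle_enable(struct evlist_demo *e)
{
        /* what a hotkey such as CTRL+z would end up calling */
        e->enabled = !e->enabled;
}

int main(void)
{
        struct evlist_demo e = { .enabled = true };

        evlist_demo__toggle_enable(&e);
        printf("%s\n", e.enabled ? "counting ('top' mode)" : "frozen ('report' mode)");
        return 0;
}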
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-64c4jvdl5feg2zhimxvokqka@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
I get the following crash on multiple systems and across several releases
(at least since v3.18).
Core was generated by `/tmp/perf trace sleep 0.2 '.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 perf_mmap__read_head (mm=0x3fff9bf30070) at util/evlist.h:195
195 u64 head = ACCESS_ONCE(pc->data_head);
(gdb) bt
#0 perf_mmap__read_head (mm=0x3fff9bf30070) at util/evlist.h:195
#1 perf_evlist__mmap_read (evlist=0x10027f11910, idx=<optimized out>)
at util/evlist.c:637
#2 0x000000001003ce4c in trace__run (argv=<optimized out>,
argc=<optimized out>, trace=0x3fffd7b28288) at builtin-trace.c:2259
#3 cmd_trace (argc=<optimized out>, argv=<optimized out>,
prefix=<optimized out>) at builtin-trace.c:2799
#4 0x00000000100657b8 in run_builtin (p=0x10176798 <commands+480>, argc=3,
argv=0x3fffd7b2b550) at perf.c:370
#5 0x00000000100063e8 in handle_internal_command (argv=0x3fffd7b2b550, argc=3)
at perf.c:429
#6 run_argv (argv=0x3fffd7b2af70, argcp=0x3fffd7b2af7c) at perf.c:473
#7 main (argc=3, argv=0x3fffd7b2b550) at perf.c:588
The problem seems to be a race condition when the application has just
exited. Some/all fds associated with the perf events (tracepoints) go
into a POLLHUP/POLLERR state and the mmap regions associated with those
events are unmapped (in perf_evlist__filter_pollfd()).
But we go back and do a perf_evlist__mmap_read(), which assumes that the
mmaps are still valid, and we hit the crash.
If the mapping for an event is released, its refcnt is 0 (and ->base
is NULL), so ensure we have a non-zero refcount before accessing the map.
Note that perf-record has similar logic, but unlike perf-trace,
record__mmap_read_all() checks evlist->mmap[i].base before accessing
the map.
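As a rough, standalone illustration of the guard this fix adds before reading a ring-buffer mapping (the struct and helper below are made up for the demo; the real code uses atomic_read() on the map's refcnt):

#include <stdio.h>

struct mmap_page_demo {
        unsigned long data_head;
};

struct perf_mmap_demo {
        int refcnt;                     /* drops to 0 once the map is released */
        struct mmap_page_demo *base;    /* NULL after the region is unmapped */
};

static unsigned long read_head(struct perf_mmap_demo *md)
{
        if (md->refcnt == 0)            /* map already gone: nothing to read */
                return 0;
        return md->base->data_head;
}

int main(void)
{
        struct mmap_page_demo pg = { .data_head = 42 };
        struct perf_mmap_demo live = { .refcnt = 1, .base = &pg };
        struct perf_mmap_demo dead = { .refcnt = 0, .base = NULL };

        printf("live head: %lu\n", read_head(&live));   /* 42 */
        printf("dead head: %lu\n", read_head(&dead));   /* 0, no crash */
        return 0;
}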
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Li Zhang <zhlcindy@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20150612060003.GA19913@us.ibm.com
[ Fixed it up to use atomic_read() ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Speed up the "perf probe --list" by caching the last used debuginfo.
perf probe --list always open and load debuginfo for each entry of probe
list. This takes very a long time.
E.g. with vfs_* events (total 96 probes)
[root@localhost perf]# time ./perf probe -l &> /dev/null
real 0m25.376s
user 0m24.381s
sys 0m1.012s
To solve this issue, this adds a debuginfo_cache to cache the
last used debuginfo in memory.
With this fix, 'perf probe --list' is significantly
faster.
[root@localhost perf]# time ./perf probe -l &> /dev/null
real 0m0.161s
user 0m0.136s
sys 0m0.025s
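A rough, standalone sketch of the caching idea; the names below are made up for the demo (the real cache holds a struct debuginfo and its DWARF handles):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct debuginfo_demo {
        char module[64];        /* stand-in for the loaded DWARF handles */
};

static struct debuginfo_demo *cache;
static char cache_module[64];

static struct debuginfo_demo *debuginfo_cache__open(const char *module)
{
        if (cache && strcmp(cache_module, module) == 0)
                return cache;                   /* hit: reuse the last debuginfo */

        free(cache);                            /* miss: drop the old entry... */
        cache = calloc(1, sizeof(*cache));      /* ...and (re)load, the expensive part */
        if (!cache)
                return NULL;
        snprintf(cache->module, sizeof(cache->module), "%s", module);
        snprintf(cache_module, sizeof(cache_module), "%s", module);
        return cache;
}

int main(void)
{
        int i;

        for (i = 0; i < 96; i++)                 /* e.g. 96 vfs_* probes... */
                debuginfo_cache__open("kernel"); /* ...load debuginfo only once */
        free(cache);
        return 0;
}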
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naohiro Aota <naota@elisp.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20150617145854.19715.15314.stgit@localhost.localdomain
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
When the last part of the converted events is blacklisted or out of
.text, those are skipped and perf probe doesn't show usage examples.
This fixes it to show the example even if the last part of the event
list is skipped.
E.g. without this patch, events are added, but the output ends abruptly:
# perf probe vfs_*
vfs_caches_init_early is out of .text, skip it.
vfs_caches_init is out of .text, skip it.
Added new events:
probe:vfs_fallocate (on vfs_*)
probe:vfs_open (on vfs_*)
...
probe:vfs_dentry_acceptable (on vfs_*)
probe:vfs_load_quota_inode (on vfs_*)
#
With this fix:
# perf probe vfs_*
vfs_caches_init_early is out of .text, skip it.
vfs_caches_init is out of .text, skip it.
Added new events:
probe:vfs_fallocate (on vfs_*)
...
probe:vfs_load_quota_inode (on vfs_*)
You can now use it in all perf tools, such as:
perf record -e probe:vfs_load_quota_inode -aR sleep 1
Note that this can be reproduced ONLY IF vfs_caches_init* is the
last part of the matched symbol list. I've checked that this happens on
a "3.19.0-generic #18-Ubuntu" kernel binary.
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naohiro Aota <naota@elisp.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20150616115057.19906.5502.stgit@localhost.localdomain
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Commit e3d09ec812 ("tools lib traceevent:
Export dynamic symbols used by traceevent plugins") adds the libtraceevent
dynamic list directly into LDFLAGS, which makes all targets depend on
that list through LDFLAGS.
This is not good since some targets, like libgtk.so, don't use plugins
at all, but require the existence of that list because of the linker
options.
This patch isolates the -Xlinker option into LIBTRACEEVENT_DYNAMIC_LIST_LDFLAGS
and makes only perf and perf.so use it.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: He Kuang <hekuang@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1434552389-89144-1-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The following error occurs when trying to use 'perf report' on x86_64 to
cross-analyze a perf.data file generated by an old perf on a big-endian
machine:
# perf report
*** Error in `/home/w00229757/perf': free(): invalid next size (fast): 0x00000000032c99f0 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x6eeef)[0x7ff6ff7e2eef]
/lib64/libc.so.6(+0x78cae)[0x7ff6ff7eccae]
/lib64/libc.so.6(+0x79987)[0x7ff6ff7ed987]
/path/to/perf[0x4ac734]
/path/to/perf[0x4ac829]
/path/to/perf(perf_header__process_sections+0x129)[0x4ad2c9]
/path/to/perf(perf_session__read_header+0x2e1)[0x4ad9e1]
/path/to/perf(perf_session__new+0x168)[0x4bd458]
/path/to/perf(cmd_report+0xfa0)[0x43eb70]
/path/to/perf[0x47adc3]
/path/to/perf(main+0x5f6)[0x42fd06]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7ff6ff795bd5]
/path/to/perf[0x42fe35]
======= Memory map: ========
[SNIP]
The bug is in perf_event__attr_swap(). It swaps all fields in 'struct
perf_event_attr' without checking whether the swapped fields exist or
not. In addition, read_event_desc() allocates memory for the attr
according to the size read from perf.data.
Therefore, if the perf.data was collected by an old perf (without
aux_watermark, for example), perf_event__attr_swap() destroys malloc's
metadata when swapping attr->aux_watermark.
This patch introduces boundary checking in perf_event__attr_swap(). It
adds the macros bswap_field_64 and bswap_field_32 to
perf_event__attr_swap() to make it swap only the fields that actually exist.
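A standalone sketch of the boundary-checked swap idea; the struct and macro below are simplified stand-ins, not the actual perf_event_attr code:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <byteswap.h>

/* simplified stand-in for an older, shorter perf_event_attr layout */
struct attr_demo {
        uint32_t type;
        uint32_t size;
        uint64_t config;
};

/* only swap the field if the attr written to perf.data is big enough to contain it */
#define bswap_field(attr, f, sz, bswap_fn)                                      \
        do {                                                                    \
                if ((sz) >= offsetof(struct attr_demo, f) + sizeof((attr)->f))  \
                        (attr)->f = bswap_fn((attr)->f);                        \
        } while (0)

int main(void)
{
        struct attr_demo a = { .type = 1, .size = sizeof(a), .config = 2 };
        uint32_t on_disk_size = offsetof(struct attr_demo, config); /* old file lacks 'config' */

        bswap_field(&a, type, on_disk_size, bswap_32);   /* present: swapped */
        bswap_field(&a, config, on_disk_size, bswap_64); /* absent in old layout: skipped */

        printf("type=%u config=%llu\n", a.type, (unsigned long long)a.config);
        return 0;
}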
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1434534999-85347-1-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Commit fcfd6611fb ("tools build: Add
detected config support") dynamically creates .config-detected. Add it
to .gitignore.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Zefan Li <lizefan@huawei.com>
Link: http://lkml.kernel.org/r/1434542358-5430-1-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Fix perf probe to return an error if no probe is added due to the given
probe point being on the blacklist.
To fix this problem, this moves the blacklist checking to right after
finding symbols/probe-points and marks them as skipped.
If all the symbols are skipped, "perf probe" returns an error as it
fails to find the corresponding probe address.
E.g. currently if a blacklisted probe is given:
# perf probe do_trap && echo 'succeed'
Added new event:
Warning: Skipped probing on blacklisted function: sync_regs
succeed
No! It must fail! With this patch, it correctly fails:
# perf probe do_trap && echo 'succeed'
do_trap is blacklisted function, skip it.
Probe point 'do_trap' not found.
Error: Failed to add events.
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naohiro Aota <naota@elisp.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20150616115055.19906.31359.stgit@localhost.localdomain
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
When libunwind is on, there is a compile error:
util/unwind-libunwind.c:363:21: error: 'dso' undeclared (first use in this function)
dso__data_put_fd(dso);
This patch fixes it.
Signed-off-by: Hou Pengyang <houpengyang@huawei.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Fixes: 4bb11d012a ("perf tools: Add dso__data_get/put_fd()")
Link: http://lkml.kernel.org/r/1434453395-10560-1-git-send-email-houpengyang@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
In order to have the 'struct thread_map' allocation in a single place
so it can be changed easily in a following patch.
Using alloc|realloc for the static helpers, because thread_map__new is
already used in the public interface.
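A standalone sketch of the alloc/realloc helper pattern for a struct with a flexible array member; the demo type below is simplified, not the real 'struct thread_map':

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

struct thread_map_demo {
        int nr;
        pid_t map[];            /* flexible array of thread ids */
};

static struct thread_map_demo *thread_map_demo__alloc(int nr)
{
        struct thread_map_demo *m = malloc(sizeof(*m) + sizeof(pid_t) * nr);

        if (m)
                m->nr = nr;
        return m;
}

static struct thread_map_demo *thread_map_demo__realloc(struct thread_map_demo *m, int nr)
{
        struct thread_map_demo *n = realloc(m, sizeof(*n) + sizeof(pid_t) * nr);

        if (n)
                n->nr = nr;
        return n;
}

int main(void)
{
        struct thread_map_demo *m = thread_map_demo__alloc(4);

        if (!m)
                return 1;
        m = thread_map_demo__realloc(m, 8);
        printf("%d threads\n", m ? m->nr : 0);
        free(m);
        return 0;
}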
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1434269985-521-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
To zero all the xyarray contents. It will be used in the following patches.
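A standalone sketch of what such a reset amounts to for a flat, row-major array; the layout below is simplified, not the real xyarray:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct xyarray_demo {
        size_t row_size;        /* bytes per row */
        size_t rows;
        char contents[];        /* rows * row_size bytes */
};

static void xyarray_demo__reset(struct xyarray_demo *xy)
{
        memset(xy->contents, 0, xy->rows * xy->row_size);
}

int main(void)
{
        size_t rows = 4, cols = 8, row_size = cols * sizeof(int);
        struct xyarray_demo *xy = calloc(1, sizeof(*xy) + rows * row_size);

        if (!xy)
                return 1;
        xy->row_size = row_size;
        xy->rows = rows;
        ((int *)xy->contents)[0] = 42;
        xyarray_demo__reset(xy);
        printf("%d\n", ((int *)xy->contents)[0]);       /* 0 after the reset */
        free(xy);
        return 0;
}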
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1434269985-521-2-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Since commit 5e17b28f1e ("perf probe: Add --quiet option to
suppress output result message") replaced printf with pr_info,
'perf probe -l' outputs its result to stderr. However, that is not
what the commit intended.
E.g.:
# perf probe -l > /dev/null
probe:vfs_read (on vfs_read@ksrc/linux-3/fs/read_write.c)
With this fix:
# perf probe -l > list
# cat list
probe:vfs_read (on vfs_read@ksrc/linux-3/fs/read_write.c)
Of course, --quiet(-q) still works on --add/--del.
# perf probe -q vfs_write
# perf probe -l
probe:vfs_read (on vfs_read@ksrc/linux-3/fs/read_write.c)
probe:vfs_write (on vfs_write@ksrc/linux-3/fs/read_write.c)
-----
Reported-by: Naohiro Aota <naota@elisp.net>
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naohiro Aota <naota@elisp.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20150613013116.24402.2923.stgit@localhost.localdomain
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
'make build-test' finds an error: make_python_perf_so fails due to the
missing libtraceevent-dynamic-list file:
'.../python2' util/setup.py \
--quiet build_ext; \
mkdir -p python && \
cp python_ext_build/lib/perf.so python/
/path/to/ld: cannot open linker script file /path/to/kernel/tools/lib/traceevent/libtraceevent-dynamic-list: No such file or directory
collect2: error: ld returned 1 exit status
error: command 'x86_64-linux-gcc' failed with exit status 1
cp: cannot stat 'python_ext_build/lib/perf.so': No such file or directory
make[3]: *** [python/perf.so] Error 1
make[2]: *** [python/perf.so] Error 2
test: test -f ./python/perf.so
make[1]: *** [make_python_perf_so] Error 1
make: *** [build-test] Error 2
make: Leaving directory `/path/to/kernel/tools/perf'
This is caused by commit e3d09ec812
("tools lib traceevent: Export dynamic symbols used by traceevent
plugins"), which adds the list file to LDFLAGS but forgot to add it to
the dependency list of python/perf.so.
This patch fixes that.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: He Kuang <hekuang@huawei.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Link: http://lkml.kernel.org/r/1434079031-123162-1-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Use just reference counts, so that when no more hist_entry instances
reference a map and the thread instance goes away when processing a
PERF_RECORD_EXIT, we can delete the maps.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-oym7lfhcc7ss6xpz44h7nbxs@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cut off the postfixes which gcc adds for optimized routines from the
event name automatically generated from the symbol name, since
*probe-events doesn't accept them. Those symbols will be used if we
don't use debuginfo to find the target functions.
E.g. without this fix;
-----
# perf probe -va alloc_buf.isra.23
probe-definition(0): alloc_buf.isra.23
symbol:alloc_buf.isra.23 file:(null) line:0 offset:0 return:0 lazy:(null)
[...]
Opening /sys/kernel/debug/tracing/kprobe_events write=1
Added new event:
Writing event: p:probe/alloc_buf.isra.23 _text+4869328
Failed to write event: Invalid argument
Error: Failed to add events. Reason: Invalid argument (Code: -22)
-----
With this fix;
-----
perf probe -va alloc_buf.isra.23
probe-definition(0): alloc_buf.isra.23
symbol:alloc_buf.isra.23 file:(null) line:0 offset:0 return:0 lazy:(null)
[...]
Opening /sys/kernel/debug/tracing/kprobe_events write=1
Added new event:
Writing event: p:probe/alloc_buf _text+4869328
probe:alloc_buf (on alloc_buf.isra.23)
You can now use it in all perf tools, such as:
perf record -e probe:alloc_buf -aR sleep 1
-----
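A standalone sketch of the suffix stripping shown above; the helper name is made up for the demo:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* strip anything from the first '.' onwards, e.g. ".isra.23" or ".constprop.5" */
static char *event_name_from_symbol(const char *sym)
{
        const char *dot = strchr(sym, '.');
        size_t len = dot ? (size_t)(dot - sym) : strlen(sym);
        char *name = malloc(len + 1);

        if (name) {
                memcpy(name, sym, len);
                name[len] = '\0';
        }
        return name;
}

int main(void)
{
        char *name = event_name_from_symbol("alloc_buf.isra.23");

        if (!name)
                return 1;
        printf("%s\n", name);   /* prints "alloc_buf" */
        free(name);
        return 0;
}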
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naohiro Aota <naota@elisp.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20150612050820.20548.41625.stgit@localhost.localdomain
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Building perf out of the kernel tree is currently broken because the
MANIFEST file refers to kernel files that have been removed. With this
patch, 'make perf-targz-src-pkg' succeeds, as does building perf using
the generated tarfile.
Signed-off-by: David Ahern <david.ahern@oracle.com>
Link: http://lkml.kernel.org/r/1433526173-172332-1-git-send-email-david.ahern@oracle.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The 32-bit arch build fails like this:
CC /opt/h00206996/output/perf/arm32/builtin-record.o
util/session.c: In function ‘perf_session__warn_about_errors’:
util/session.c:1304:9: error: format ‘%lu’ expects argument of type ‘long unsigned int’,
but argument 2 has type ‘long long unsigned int’ [-Werror=format=]
builtin-report.c: In function ‘perf_evlist__tty_browse_hists’:
builtin-report.c:323:2: error: format ‘%lu’ expects argument of type ‘long unsigned int’,
but argument 3 has type ‘u64’ [-Werror=format=]
Replace the %lu format strings in the warning messages with PRIu64 for
the u64 'total_lost_samples' to fix this problem.
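For reference, a minimal standalone example of the portable format specifier:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
        uint64_t total_lost_samples = 1234;

        /* PRIu64 expands to the right length modifier on both 32-bit and 64-bit targets */
        printf("lost %" PRIu64 " samples\n", total_lost_samples);
        return 0;
}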
Signed-off-by: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1434026664-71642-1-git-send-email-hekuang@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf stat ignores unsupported events and continues to count supported
ones. But if the unsupported event is a group leader, the perf tool will
crash. After applying this patch, an unsupported group leader will
error out immediately.
Without this patch:
$ perf stat -x, -e '{node-prefetch-refs,cycles}' -- sleep 1
perf: util/evsel.c:1009: get_group_fd: Assertion `!(fd == -1)' failed.
Aborted (core dumped)
With this patch:
$ perf stat -x, -e '{node-prefetch-refs,cycles}' -- sleep 1
Error:
The node-prefetch-refs event is not supported.
Committer note: Here I got a different output, but no core dump:
[acme@zoo linux]$ perf stat -x, -e '{node-prefetch-refs,cycles}' -- sleep 1
Error:
The sys_perf_event_open() syscall returned with 22 (Invalid argument)
for event (node-prefetch-refs).
/bin/dmesg may provide additional information.
No CONFIG_PERF_EVENTS=y kernel support configured?
Signed-off-by: Kan Liang <kan.liang@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Link: http://lkml.kernel.org/r/1434004360-8570-1-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Because there's too many options and I cannot read, I frequently get
confused between -c and -P, and try to do things like:
perf record -P 50000 -- foo
Which does not work; try and make the option description slightly longer
and hopefully less confusing.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20150610144850.GP19282@twins.programming.kicks-ass.net
[ Do those changes on the man page as well ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Use unique temporary files when copying to the buildid dir to prevent races
in case multiple instances are trying to copy the same file. This is done by
- creating a template in the form <path>/.<filename>.XXXXXX where the suffix
is used by mkstemp() to create a unique file
- changing the file mode
- copying the content
- if successful, linking the temp file to the target file
- unlinking the temp file
At this point the only file left at the target path should be the desired
one, either created by us or by another instance if we raced. This should also
prevent not-yet-fully-copied files from being visible to other perf
instances that could try to parse them.
On top of that, slow_copyfile no longer needs to deal with the file mode when
creating the file, since the temporary file is already created and its mode is set.
Successfully tested by myself by running perf record, perf archive and reading
the data on another system, and by running perf buildid-cache on the perf
binary itself. I also reverted the fix from 0635b0f to expose the
previously fixed race with EEXIST, and the recreator test passed successfully.
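A rough, standalone sketch of the copy-via-unique-temp-file scheme described above; paths, helper names and error handling are simplified for the demo:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/stat.h>

static int copy_file_atomically(const char *src, const char *dst, mode_t mode)
{
        char tmp[4096], buf[8192];
        int in = -1, out, err = -1;
        ssize_t n;

        snprintf(tmp, sizeof(tmp), "%s.XXXXXX", dst);
        out = mkstemp(tmp);             /* unique temp file next to the target */
        if (out < 0)
                return -1;
        fchmod(out, mode);              /* set the final mode up front */

        in = open(src, O_RDONLY);
        if (in < 0)
                goto out;
        while ((n = read(in, buf, sizeof(buf))) > 0)
                if (write(out, buf, n) != n)
                        goto out;
        if (n < 0)
                goto out;

        err = link(tmp, dst);           /* publish atomically; losing a race is fine */
        if (err && errno == EEXIST)
                err = 0;
out:
        if (in >= 0)
                close(in);
        close(out);
        unlink(tmp);                    /* the temp file never survives */
        return err;
}

int main(int argc, char **argv)
{
        if (argc != 3)
                return 1;
        return copy_file_atomically(argv[1], argv[2], 0644) ? 1 : 0;
}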
Signed-off-by: Milos Vyletel <milos@redhat.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/1433775018-19868-1-git-send-email-milos@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This has a different model than the 'thread' and 'map' struct lifetimes:
there is not a definitive "don't use this DSO anymore" event, i.e. we may
get many 'struct map' instances holding references to the
'/usr/lib64/libc-2.20.so' DSO, and then at some point a DSO may have no
references at all, but we still don't want to release its resources straight
away, because "soon" we may get a new 'struct map' that needs it and we want
to reuse its symtab or other resources.
So we need some way to garbage collect it when crossing some memory
usage threshold, which is left for another patch; for now it is
sufficient to release it when calling dsos__exit(), i.e. when deleting
the whole list as part of deleting the 'struct machine' containing it,
which will leave only referenced objects being used.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/n/tip-majzgz07cm90t2tejrjy4clf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
To allow concurrent access, next step: refcount struct dso instances, so
that we can ditch unused ones when the last map pointing to them goes
away.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/n/tip-yk1k08etpd2aoe3tnrf0oizn@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Calling the function 'machine__new_module' implies a new 'module' will
be allocated, when in fact what is returned is a 'struct map' instance,
which will not necessarily be newly instantiated, as if one already exists
with the given module name, it will be returned instead.
So be consistent with other "find and if not there, create" like
functions, like machine__findnew_thread, machine__findnew_dso, etc, and
rename it to machine__findnew_module_map(), that in turn will call
machine__findnew_module_dso().
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/n/tip-acv830vd3hwww2ih5vjtbmu3@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The size of perf.data is not updated in no-buildid mode, which gives a
wrong output result.
Before this patch:
$ perf.perf record -B -e syscalls:sys_enter_open uname
Linux
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.000 MB perf.data ]
After this patch:
$ perf.perf record -B -e syscalls:sys_enter_open uname
Linux
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.001 MB perf.data ]
Signed-off-by: He Kuang <hekuang@huawei.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1432819050-30511-1-git-send-email-hekuang@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The libtraceevent-dynamic-list file is used to export symbols used by
traceevent plugins.
Signed-off-by: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1432819735-35040-2-git-send-email-hekuang@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Traceevent plugins need dynamic symbols exported from libtraceevent.a,
otherwise a dlopen error will occur during plugin loading.
This patch uses a dynamic list file to export to the perf executable's
dynamic symbol table the symbols that will be used by plugins.
The problem is covered up if feature-libpython is enabled, because
PYTHON_EMBED_LDOPTS contains '-Xlinker --export-dynamic', which adds all
symbols to the dynamic symbol table. So we should reproduce the problem
by setting NO_LIBPYTHON=1.
Before this patch:
(Prepare plugins)
$ ls /root/.traceevent/plugins/
plugin_sched_switch.so
plugin_function.so
...
$ perf record -e 'ftrace:function' ls
$ perf script
Warning: could not load plugin '/mnt/data/root/.traceevent/plugins/plugin_sched_switch.so'
/root/.traceevent/plugins/plugin_sched_switch.so: undefined symbol: pevent_unregister_event_handler
Warning: could not load plugin '/root/.traceevent/plugins/plugin_function.so'
/root/.traceevent/plugins/plugin_function.so: undefined symbol: warning
...
:1049 1049 [000] 9666.754487: ftrace:function: ffffffff8118bc50 <-- ffffffff8118c5b3
:1049 1049 [000] 9666.754487: ftrace:function: ffffffff818e2440 <-- ffffffff8118bc75
:1049 1049 [000] 9666.754487: ftrace:function: ffffffff8106eee0 <-- ffffffff811212e2
After this patch:
$ perf record -e 'ftrace:function' ls
$ perf script
:1049 1049 [000] 9666.754487: ftrace:function: __set_task_comm
:1049 1049 [000] 9666.754487: ftrace:function: _raw_spin_lock
:1049 1049 [000] 9666.754487: ftrace:function: task_tgid_nr_ns
...
Signed-off-by: He Kuang <hekuang@huawei.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1432819735-35040-1-git-send-email-hekuang@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Separate the shadow counters code into its own object as a cleanup, but
mainly for upcoming changes, so it can be used from the script command
context.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1433341559-31848-10-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
As preparation for moving shadow counters code into its own object.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1433341559-31848-9-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
As preparation for moving shadow counters code into its own object.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1433341559-31848-8-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Move shadow counters display code into separate function as preparation
for moving it into its own object.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1433341559-31848-7-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Move shadow counters reset code into separate function
as preparation for moving it into its own object.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1433341559-31848-6-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
It's no longer needed, because we use nameid to recognize transaction
events.
Keeping it only in stat code to initialize transaction events.
I.e. struct perf_stat::id, accessible via evsel->priv, will only be set
for transaction-related events.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1433341559-31848-5-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
We can use the already existing parse_events interface.
Both transaction_attrs and transaction_limited_attrs are changed to be
single strings.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1433341559-31848-4-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Use perf_stat::id to check for transaction events, instead of the
current position-based way.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1433341559-31848-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
We need a fast way to identify an evsel as a transaction event for the
shadow counter computations. Currently we are using a position (in
evlist) based way.
Add 'id' to 'struct perf_stat' so it can carry the transaction event ID
and we can use it for the shadow counter computations.
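A standalone sketch of the idea, with illustrative enum and struct names rather than the real perf_stat/evsel definitions:

#include <stdio.h>

enum perf_stat_demo_id {
        PERF_STAT_DEMO_ID__NONE = 0,
        PERF_STAT_DEMO_ID__CYCLES_IN_TX,
        PERF_STAT_DEMO_ID__TRANSACTION_START,
};

struct perf_stat_demo {
        enum perf_stat_demo_id id;      /* which transaction counter this is, if any */
};

struct evsel_demo {
        const char *name;
        struct perf_stat_demo stat;     /* plays the role of evsel->priv */
};

static int is_transaction_event(const struct evsel_demo *evsel)
{
        /* a simple id compare instead of "where does this sit in the evlist?" */
        return evsel->stat.id != PERF_STAT_DEMO_ID__NONE;
}

int main(void)
{
        struct evsel_demo cycles = { "cycles",   { PERF_STAT_DEMO_ID__NONE } };
        struct evsel_demo tx     = { "cycles-t", { PERF_STAT_DEMO_ID__CYCLES_IN_TX } };

        printf("%s: %d\n", cycles.name, is_transaction_event(&cycles));
        printf("%s: %d\n", tx.name, is_transaction_event(&tx));
        return 0;
}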
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/20150604135055.GB23625@krava.redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
PEBSv3, as present on Skylake, fixed the long-standing issue of the
status bits. They now really reflect the events that generated the
record.
Tested-by: Andi Kleen <ak@linux.intel.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This patch modifies the perf tool to handle the new RECORD type,
PERF_RECORD_LOST_SAMPLES.
The number of lost-sample events is stored in
.nr_events[PERF_RECORD_LOST_SAMPLES]. The exact number of samples
which the kernel dropped is stored in total_lost_samples.
When the percentage of dropped samples is greater than 5%, a warning
is printed.
Here are some examples:
Eg 1, Recording different frequently-occurring events is safe with the
patch. Only a very low drop rate is associated with such actions.
$ perf record -e '{cycles:p,instructions:p}' -c 20003 --no-time ~/tchain ~/tchain
$ perf report -D | tail
SAMPLE events: 120243
MMAP2 events: 5
LOST_SAMPLES events: 24
FINISHED_ROUND events: 15
cycles:p stats:
TOTAL events: 59348
SAMPLE events: 59348
instructions:p stats:
TOTAL events: 60895
SAMPLE events: 60895
$ perf report --stdio --group
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 24
#
# Samples: 120K of event 'anon group { cycles:p, instructions:p }'
# Event count (approx.): 24048600000
#
# Overhead Command Shared Object Symbol
# ................ ........... ................
..................................
#
99.74% 99.86% tchain_edit tchain_edit [.] f3
0.09% 0.02% tchain_edit tchain_edit [.] f2
0.04% 0.00% tchain_edit [kernel.vmlinux] [k] ixgbe_read_reg
Eg 2, Recording the same thing multiple times can lead to high drop
rate, but it is not a useful configuration.
$ perf record -e '{cycles:p,cycles:p}' -c 20003 --no-time ~/tchain
Warning: Processed 600592 samples and lost 99.73% samples!
[perf record: Woken up 148 times to write data]
[perf record: Captured and wrote 36.922 MB perf.data (1206322 samples)]
[perf record: Woken up 1 times to write data]
[perf record: Captured and wrote 0.121 MB perf.data (1629 samples)]
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285195-14269-9-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
After enlarging the PEBS interrupt threshold, there may be some mixed up
PEBS samples which are discarded by the kernel.
This patch makes the kernel emit a PERF_RECORD_LOST_SAMPLES record with
the number of possible discarded records when it is impossible to demux
the samples.
It makes sure the user is not left in the dark about such discards.
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285195-14269-8-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Currently the PEBS buffer size is 4k, so it can only hold about 21
PEBS records. This patch enlarges the PEBS buffer size to 64k
(the same as the BTS buffer).
64k memory can hold about 330 PEBS records. This will significantly
reduce the number of PMIs when batched PEBS interrupts are enabled.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1430940834-8964-7-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Flush the PEBS buffer during context switches if PEBS interrupt threshold
is larger than one. This allows perf to supply TID for sample outputs.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1430940834-8964-6-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
PEBS always had the capability to log samples to its buffers without
an interrupt. Traditionally perf has not used this but always set the
PEBS threshold to one.
For frequently occurring events (like cycles or branches or load/store)
this in turn requires using a relatively high sampling period to avoid
overloading the system, by only processing PMIs. This in turn increases
sampling error.
For the common cases we still need to use the PMI because the PEBS
hardware has various limitations. The biggest one is that it can not
supply a callgraph. It also requires setting a fixed period, as the
hardware does not support adaptive period. Another issue is that it
cannot supply a time stamp and some other options. To supply a TID it
requires flushing on context switch. It can however supply the IP, the
load/store address, TSX information, registers, and some other things.
So we can make PEBS work for some specific cases, basically as long as
you can do without a callgraph and can set the period you can use this
new PEBS mode.
The main benefit is the ability to support much lower sampling period
(down to -c 1000) without extensive overhead.
One use case is, for example, to increase the resolution of the c2c tool.
Another is double checking when you suspect the standard sampling has
too much sampling error.
Some numbers on the overhead, using cycle soak, comparing the elapsed
time from "kernbench -M -H" between plain (threshold set to one) and
multi (large threshold).
The test command for plain:
"perf record --time -e cycles:p -c $period -- kernbench -M -H"
The test command for multi:
"perf record --no-time -e cycles:p -c $period -- kernbench -M -H"
( The only difference of test command between multi and plain is time
stamp options. Since time stamp is not supported by large PEBS
threshold, it can be used as a flag to indicate if large threshold is
enabled during the test. )
period plain(Sec) multi(Sec) Delta
10003 32.7 16.5 16.2
20003 30.2 16.2 14.0
40003 18.6 14.1 4.5
80003 16.8 14.6 2.2
100003 16.9 14.1 2.8
800003 15.4 15.7 -0.3
1000003 15.3 15.2 0.2
2000003 15.3 15.1 0.1
With periods below 100003, plain (threshold one) causes much more
overhead. With a 10003 sampling period, the elapsed time for multi is
even 2X faster than plain.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1430940834-8964-5-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
When the PEBS interrupt threshold is larger than one record and the
machine supports multiple PEBS events, the records of these events are
mixed up and we need to demultiplex them.
Demuxing the records is hard because the hardware is deficient. The
hardware has two issues that, when combined, create impossible
scenarios to demux.
The first issue is that the 'status' field of the PEBS record is a copy
of the GLOBAL_STATUS MSR at PEBS assist time. To see why this is a
problem let us first describe the regular PEBS cycle:
A) the CTRn value reaches 0:
- the corresponding bit in GLOBAL_STATUS gets set
- we start arming the hardware assist
< some unspecified amount of time later -- this could cover multiple
events of interest >
B) the hardware assist is armed, any next event will trigger it
C) a matching event happens:
- the hardware assist triggers and generates a PEBS record
this includes a copy of GLOBAL_STATUS at this moment
- if we auto-reload we (re)set CTRn
- we clear the relevant bit in GLOBAL_STATUS
Now consider the following chain of events:
A0, B0, A1, C0
The event generated for counter 0 will include a status with counter 1
set, even though it's not at all related to the record. A similar thing
can happen with a !PEBS event if it just happens to overflow at the
right moment.
The second issue is that the hardware will only emit one record for two
or more counters if the event that triggers the assist is 'close'. The
'close' can be several cycles. In some cases even the complete assist,
if the event is something that doesn't need retirement.
For instance, consider this chain of events:
A0, B0, A1, B1, C01
Where C01 is an event that triggers both hardware assists, we will
generate but a single record, but again with both counters listed in the
status field.
This time the record pertains to both events.
Note that these two cases are different but indistinguishable with the
data as generated. Therefore demuxing records with multiple PEBS bits
(we can safely ignore status bits for !PEBS counters) is impossible.
Furthermore we cannot emit the record to both events because that might
cause a data leak -- the events might not have the same privileges -- so
what this patch does is discard such events.
The assumption/hope is that such discards will be rare.
Here are some possible ways you may get a high discard rate.
- when you count the same thing multiple times. But it is not a useful
configuration.
- you can be unfortunate if you measure with a userspace only PEBS
event along with either a kernel or unrestricted PEBS event. Imagine
the event triggering and setting the overflow flag right before
entering the kernel. Then all kernel side events will end up with
multiple bits set.
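A standalone sketch of the demux rule described above; the mask values are illustrative:

#include <stdio.h>
#include <stdint.h>

static int pebs_bits_set(uint64_t record_status, uint64_t pebs_enabled)
{
        /* count only the status bits that belong to PEBS-enabled counters */
        return __builtin_popcountll(record_status & pebs_enabled);
}

int main(void)
{
        uint64_t pebs_enabled = 0x3;    /* say counters 0 and 1 use PEBS */

        printf("%d\n", pebs_bits_set(0x1, pebs_enabled)); /* 1: attribute to counter 0  */
        printf("%d\n", pebs_bits_set(0x3, pebs_enabled)); /* 2: ambiguous, discard      */
        printf("%d\n", pebs_bits_set(0x5, pebs_enabled)); /* 1: bit 2 is !PEBS, ignored */
        return 0;
}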
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Kan Liang <kan.liang@intel.com>
[ Changelog improvements. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1430940834-8964-4-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>