Mirror of https://github.com/AuxXxilium/linux_dsm_epyc7002.git, synced 2024-12-23 09:48:39 +07:00, at cf4aae1a05 (10973 commits)
Author | SHA1 | Message | Date
---|---|---|---
Masami Hiramatsu
|
66f69b2197 |
perf probe: Support DW_AT_const_value constant value
Support DW_AT_const_value for variable assignment instead of location. Note that this requires ftrace supporting immediate value. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406476012.24476.16096289871757175775.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
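A minimal sketch of what reading DW_AT_const_value looks like with elfutils libdw (the DWARF library perf probe builds on); the helper name and surrounding code are illustrative, not the actual perf probe implementation:

```c
/* Sketch only: a variable assigned a compile-time constant carries
 * DW_AT_const_value instead of a DW_AT_location expression. */
#include <elfutils/libdw.h>
#include <dwarf.h>
#include <stdbool.h>

/* Returns true and fills *val if the variable DIE is a constant (hypothetical helper). */
static bool die_get_const_value(Dwarf_Die *var_die, Dwarf_Sword *val)
{
	Dwarf_Attribute attr;

	if (dwarf_attr(var_die, DW_AT_const_value, &attr) == NULL)
		return false;	/* no constant: fall back to DW_AT_location handling */

	return dwarf_formsdata(&attr, val) == 0;
}
```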
Masami Hiramatsu
|
72363540c0 |
perf probe: Support multiprobe event
Support multiprobe event if the event is based on function and lines and kernel supports it. In this case, perf probe creates the first probe with an event, and tries to append following probes on that event, since those probes must be on the same source code line. Before this patch; # perf probe -a vfs_read:18 Added new events: probe:vfs_read_L18 (on vfs_read:18) probe:vfs_read_L18_1 (on vfs_read:18) You can now use it in all perf tools, such as: perf record -e probe:vfs_read_L18_1 -aR sleep 1 # After this patch (on multiprobe supported kernel) # perf probe -a vfs_read:18 Added new events: probe:vfs_read_L18 (on vfs_read:18) probe:vfs_read_L18 (on vfs_read:18) You can now use it in all perf tools, such as: perf record -e probe:vfs_read_L18 -aR sleep 1 # Committer testing: On a kernel that doesn't support multiprobe events, after this patch: # uname -a Linux quaco 5.3.8-200.fc30.x86_64 #1 SMP Tue Oct 29 14:46:22 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux # grep append /sys/kernel/debug/tracing/README be modified by appending '.descending' or '.ascending' to a can be modified by appending any of the following modifiers # # perf probe -a vfs_read:18 Added new events: probe:vfs_read_L18 (on vfs_read:18) probe:vfs_read_L18_1 (on vfs_read:18) You can now use it in all perf tools, such as: perf record -e probe:vfs_read_L18_1 -aR sleep 1 # perf probe -l probe:vfs_read_L18 (on vfs_read:18@fs/read_write.c) probe:vfs_read_L18_1 (on vfs_read:18@fs/read_write.c) # Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406475010.24476.586290752591512351.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Masami Hiramatsu
|
15354d5469 |
perf probe: Generate event name with line number
Generate event name from function name with line number as <function>_L<line_number>. Note that this is only for a new event which is defined by the line number of a function (except for line 0). If there is another event on the same line, you have to use the "-f" option. In that case, the new event gets a "_1" suffix. e.g. # perf probe -a kernel_read:2 Added new event: probe:kernel_read_L2 (on kernel_read:2) You can now use it in all perf tools, such as: perf record -e probe:kernel_read_L2 -aR sleep 1 But if we omit the line number or use line 0, it will have no suffix. # perf probe -a kernel_read:0 Added new event: probe:kernel_read (on kernel_read) You can now use it in all perf tools, such as: perf record -e probe:kernel_read -aR sleep 1 probe:kernel_read (on kernel_read@linux-5.0.0/fs/read_write.c) probe:kernel_read_L2 (on kernel_read:2@linux-5.0.0/fs/read_write.c) Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406474026.24476.2828897745502059569.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
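A tiny illustrative helper (not the actual perf probe code) for the naming convention described above, <function> for line 0 and <function>_L<line> otherwise; collision suffixes such as "_1" are handled separately by the tool:

```c
/* Illustrative only: build an event name following the convention above. */
#include <stdio.h>

static void probe_event_name(char *buf, size_t len, const char *func, int line)
{
	if (line > 0)
		snprintf(buf, len, "%s_L%d", func, line);	/* e.g. kernel_read_L2 */
	else
		snprintf(buf, len, "%s", func);			/* line 0: plain function name */
}
```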
Masami Hiramatsu
|
499144c83d |
perf probe: Do not show non-representative lines in 'perf probe -L'
Since perf probe -L shows non-representative lines, it can mislead users about where they can put probes. This prevents such non-representative lines from being shown, so that users can understand which lines they can probe. # perf probe -L kernel_read <kernel_read@/build/linux-pvZVvI/linux-5.0.0/fs/read_write.c:0> 0 ssize_t kernel_read(struct file *file, void *buf, size_t count, loff_t *pos) { 2 mm_segment_t old_fs; ssize_t result; old_fs = get_fs(); 6 set_fs(get_ds()); /* The cast to a user pointer is valid due to the set_fs() */ 8 result = vfs_read(file, (void __user *)buf, count, pos); 9 set_fs(old_fs); 10 return result; } EXPORT_SYMBOL(kernel_read); Committer testing: Before: # perf probe -L kernel_read <kernel_read@/usr/src/debug/kernel-5.3.fc30/linux-5.3.8-200.fc30.x86_64/fs/read_write.c:0> 0 ssize_t kernel_read(struct file *file, void *buf, size_t count, loff_t *pos) 1 { 2 mm_segment_t old_fs; 3 ssize_t result; 5 old_fs = get_fs(); 6 set_fs(KERNEL_DS); /* The cast to a user pointer is valid due to the set_fs() */ 8 result = vfs_read(file, (void __user *)buf, count, pos); 9 set_fs(old_fs); 10 return result; } EXPORT_SYMBOL(kernel_read); # See the 1, 3, 5 lines? They shouldn't be there; after this patch: # perf probe -L kernel_read <kernel_read@/usr/src/debug/kernel-5.3.fc30/linux-5.3.8-200.fc30.x86_64/fs/read_write.c:0> 0 ssize_t kernel_read(struct file *file, void *buf, size_t count, loff_t *pos) { 2 mm_segment_t old_fs; ssize_t result; old_fs = get_fs(); 6 set_fs(KERNEL_DS); /* The cast to a user pointer is valid due to the set_fs() */ 8 result = vfs_read(file, (void __user *)buf, count, pos); 9 set_fs(old_fs); 10 return result; } EXPORT_SYMBOL(kernel_read); # Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406473064.24476.2913278267727587314.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Masami Hiramatsu
|
1ae5d88a4e |
perf probe: Verify that the given line is a representative line
Verify that the user-given probe line is a representative line (one which doesn't share its address with other lines, or which is the least line number among the lines sharing the same address), and if not, show what the representative line is. Without this fix, the user can put a probe on a line which is not a representative line. But since it is not a representative line, perf probe -l shows the representative line number instead of the user-given line number. e.g. (put kernel_read:3, but listed as kernel_read:2) # perf probe -a kernel_read:3 Added new event: probe:kernel_read (on kernel_read:3) You can now use it in all perf tools, such as: perf record -e probe:kernel_read -aR sleep 1 # perf probe -l probe:kernel_read (on kernel_read:2@linux-5.0.0/fs/read_write.c) With this fix, perf probe doesn't allow the user to put a probe on a non-representative line, and tells what the representative line is. # perf probe -a kernel_read:3 This line is sharing the address with other lines. Please try to probe at kernel_read:2 instead. Error: Failed to add events. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406472071.24476.14915451439785001021.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Masami Hiramatsu
|
57f95bf5f8 |
perf probe: Show correct statement line number by perf probe -l
dwarf_getsrc_die() can return a line which is neither a statement nor the least line number among the lines sharing the same address. This can lead to perf probe --list showing an incorrect line number for the probed address. To fix this, introduce cu_getsrc_die(), which returns only a statement line with the least line number (we call it the representative line for an address), and use it in cu_find_lineinfo(). Also, if the given address is the entry address of a real function, cu_find_lineinfo() returns the function's declared line number instead of the start line number of the function body. For example, without this change perf probe -l shows an incorrect line, as below. # perf probe -a kernel_read:2 Added new event: probe:kernel_read (on kernel_read:2) You can now use it in all perf tools, such as: perf record -e probe:kernel_read -aR sleep 1 # perf probe -l probe:kernel_read (on kernel_read:1@linux-5.0.0/fs/read_write.c) With this fix, it shows the correct line number, as below: # perf probe -l probe:kernel_read (on kernel_read:2@linux-5.0.0/fs/read_write.c) Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406471067.24476.17463149618465494448.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
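A sketch, using elfutils libdw line-table calls, of the selection rule the commit describes for a representative line: among the line records sharing an address, consider only statement lines and take the smallest line number. The function here is illustrative, not the new cu_getsrc_die() itself:

```c
#include <elfutils/libdw.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical helper: find the representative line for 'addr' in a CU. */
static int cu_representative_line(Dwarf_Die *cu_die, Dwarf_Addr addr, int *lineno)
{
	Dwarf_Lines *lines;
	size_t nlines, i;
	int best = -1;

	if (dwarf_getsrclines(cu_die, &lines, &nlines) != 0)
		return -1;

	for (i = 0; i < nlines; i++) {
		Dwarf_Line *line = dwarf_onesrcline(lines, i);
		Dwarf_Addr laddr;
		bool is_stmt;
		int lno;

		if (!line || dwarf_lineaddr(line, &laddr) || laddr != addr)
			continue;			/* different address */
		if (dwarf_linebeginstatement(line, &is_stmt) || !is_stmt)
			continue;			/* skip non-statement records */
		if (dwarf_lineno(line, &lno))
			continue;
		if (best < 0 || lno < best)
			best = lno;			/* keep the least line number */
	}

	if (best < 0)
		return -1;
	*lineno = best;
	return 0;
}
```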
Adrian Hunter
|
1e5f015442 |
x86/insn: perf tools: Add some instructions to the new instructions test
Add to the "x86 instruction decoder - new instructions" test the following instructions: cldemote tpause umonitor umwait movdiri movdir64b enqcmd enqcmds encls enclu enclv pconfig wbnoinvd. For information about the instructions, refer to the Intel SDM May 2019 (325462-070US) and the Intel Architecture Instruction Set Extensions May 2019 (319433-037). Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Andi Kleen <ak@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86@kernel.org Link: http://lore.kernel.org/lkml/20191115135447.6519-2-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
7624e69465 |
perf map: Move seldom used ->flags field to second cacheline
So we start with: $ pahole -C map ~/bin/perf struct map { union { struct rb_node rb_node __attribute__((__aligned__(8))); /* 0 24 */ struct list_head node; /* 0 16 */ } __attribute__((__aligned__(8))); /* 0 24 */ u64 start; /* 24 8 */ u64 end; /* 32 8 */ _Bool erange_warned:1; /* 40: 0 1 */ _Bool priv:1; /* 40: 1 1 */ /* XXX 6 bits hole, try to pack */ /* XXX 3 bytes hole, try to pack */ u32 prot; /* 44 4 */ u32 flags; /* 48 4 */ /* XXX 4 bytes hole, try to pack */ u64 pgoff; /* 56 8 */ /* --- cacheline 1 boundary (64 bytes) --- */ u64 reloc; /* 64 8 */ u32 maj; /* 72 4 */ u32 min; /* 76 4 */ u64 ino; /* 80 8 */ u64 ino_generation; /* 88 8 */ u64 (*map_ip)(struct map *, u64); /* 96 8 */ u64 (*unmap_ip)(struct map *, u64); /* 104 8 */ struct dso * dso; /* 112 8 */ refcount_t refcnt; /* 120 4 */ /* size: 128, cachelines: 2, members: 17 */ /* sum members: 116, holes: 2, sum holes: 7 */ /* sum bitfield members: 2 bits, bit holes: 1, sum bit holes: 6 bits */ /* padding: 4 */ /* forced alignments: 1 */ } __attribute__((__aligned__(8))); $ Since 'flags' is seldom used when printing details about the map or with the "cacheline" sort order, we can move it to the second cacheline, which will allow combining it with 'refcnt', which is only four bytes: $ pahole -C map ~/bin/perf struct map { union { struct rb_node rb_node __attribute__((__aligned__(8))); /* 0 24 */ struct list_head node; /* 0 16 */ } __attribute__((__aligned__(8))); /* 0 24 */ u64 start; /* 24 8 */ u64 end; /* 32 8 */ _Bool erange_warned:1; /* 40: 0 1 */ _Bool priv:1; /* 40: 1 1 */ /* XXX 6 bits hole, try to pack */ /* XXX 3 bytes hole, try to pack */ u32 prot; /* 44 4 */ u64 pgoff; /* 48 8 */ u64 reloc; /* 56 8 */ /* --- cacheline 1 boundary (64 bytes) --- */ u32 maj; /* 64 4 */ u32 min; /* 68 4 */ u64 ino; /* 72 8 */ u64 ino_generation; /* 80 8 */ u64 (*map_ip)(struct map *, u64); /* 88 8 */ u64 (*unmap_ip)(struct map *, u64); /* 96 8 */ struct dso * dso; /* 104 8 */ refcount_t refcnt; /* 112 4 */ u32 flags; /* 116 4 */ /* size: 120, cachelines: 2, members: 17 */ /* sum members: 116, holes: 1, sum holes: 3 */ /* sum bitfield members: 2 bits, bit holes: 1, sum bit holes: 6 bits */ /* forced alignments: 1 */ /* last cacheline: 56 bytes */ } __attribute__((__aligned__(8))); $ Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-2cdw3zlw1mkamaf7nqtdlxfi@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
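A toy example (not 'struct map' itself) of why the move pays off: pairing two 4-byte members lets them share one 8-byte slot instead of each dragging 4 bytes of padding next to 8-byte members:

```c
/* Illustration only: on a typical LP64 ABI, a lone u32 before a u64 forces
 * a 4-byte hole, while pairing two u32 members removes it. */
#include <stdint.h>
#include <stdio.h>

struct before {
	uint64_t start;
	uint32_t flags;		/* followed by a 4-byte hole ...   */
	uint64_t pgoff;		/* ... so this starts at offset 16 */
	uint32_t refcnt;	/* plus 4 bytes of tail padding    */
};				/* typically 32 bytes              */

struct after {
	uint64_t start;
	uint64_t pgoff;
	uint32_t refcnt;	/* refcnt and flags now share ...  */
	uint32_t flags;		/* ... one 8-byte slot, no holes   */
};				/* typically 24 bytes              */

int main(void)
{
	printf("before: %zu bytes, after: %zu bytes\n",
	       sizeof(struct before), sizeof(struct after));
	return 0;
}
```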
Arnaldo Carvalho de Melo
|
dbc984c961 |
perf map: Use bitmap for booleans
The map->priv and map->erange_warned are seldom used, the first only in tests/vmlinux-kallsyms.c, the latter only when hist_entry__inc_addr_samples() returns -ERANGE in 'perf top', which are really rare occasions, so make them bool bitfields. This will open up space for other members on the first cacheline. $ pahole -C map ~/bin/perf struct map { union { struct rb_node rb_node __attribute__((__aligned__(8))); /* 0 24 */ struct list_head node; /* 0 16 */ } __attribute__((__aligned__(8))); /* 0 24 */ u64 start; /* 24 8 */ u64 end; /* 32 8 */ _Bool erange_warned:1; /* 40: 0 1 */ _Bool priv:1; /* 40: 1 1 */ /* XXX 6 bits hole, try to pack */ /* XXX 3 bytes hole, try to pack */ u32 prot; /* 44 4 */ u32 flags; /* 48 4 */ /* XXX 4 bytes hole, try to pack */ u64 pgoff; /* 56 8 */ /* --- cacheline 1 boundary (64 bytes) --- */ u64 reloc; /* 64 8 */ u32 maj; /* 72 4 */ u32 min; /* 76 4 */ u64 ino; /* 80 8 */ u64 ino_generation; /* 88 8 */ u64 (*map_ip)(struct map *, u64); /* 96 8 */ u64 (*unmap_ip)(struct map *, u64); /* 104 8 */ struct dso * dso; /* 112 8 */ refcount_t refcnt; /* 120 4 */ /* size: 128, cachelines: 2, members: 17 */ /* sum members: 116, holes: 2, sum holes: 7 */ /* sum bitfield members: 2 bits, bit holes: 1, sum bit holes: 6 bits */ /* padding: 4 */ /* forced alignments: 1 */ } __attribute__((__aligned__(8))); $ Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-g5545pcq4ff0wr17tfb1piqt@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
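A toy comparison (again not the real struct) of plain bools versus 1-bit bitfields; the bitfield form packs both flags into a single byte, which is what frees room on the first cacheline of 'struct map':

```c
#include <stdbool.h>
#include <stdio.h>

struct two_bools {
	bool erange_warned;	/* one byte each ...           */
	bool priv;		/* ... two bytes for two flags */
};

struct two_bits {
	bool erange_warned:1;	/* both flags share ...        */
	bool priv:1;		/* ... a single byte           */
};

int main(void)
{
	printf("plain bools: %zu byte(s), bitfields: %zu byte(s)\n",
	       sizeof(struct two_bools), sizeof(struct two_bits));
	return 0;
}
```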
Adrian Hunter
|
aceb98261e |
perf callchain: Fix segfault in thread__resolve_callchain_sample()
Do not dereference 'chain' when it is NULL.
$ perf record -e intel_pt//u -e branch-misses:u uname
$ perf report --itrace=l --branch-history
perf: Segmentation fault
Fixes:
|
||
Arnaldo Carvalho de Melo
|
a7c2b572e2 |
perf map_groups: Auto sort maps by name, if needed
There are still lots of lookups by name, even if just when loading vmlinux, so until that code is studied to figure out if it's possible to do away with those lookups by name, provide a way to sort the maps using libc's qsort/bsearch. Doing it at the first lookup defers the sorting a bit and, as the code stands now, it is never done for user maps, just for the kernel ones. # perf probe -l # perf probe -x ~/bin/perf -L __map_groups__find_by_name <__map_groups__find_by_name@/home/acme/git/perf/tools/perf/util/symbol.c:0> 0 static struct map *__map_groups__find_by_name(struct map_groups *mg, const char *name) 1 { struct map **mapp; 4 if (mg->maps_by_name == NULL && 5 map__groups__sort_by_name_from_rbtree(mg)) 6 return NULL; 8 mapp = bsearch(name, mg->maps_by_name, mg->nr_maps, sizeof(*mapp), map__strcmp_name); 9 if (mapp) 10 return *mapp; 11 return NULL; 12 } struct map *map_groups__find_by_name(struct map_groups *mg, const char *name) { # perf probe -x ~/bin/perf 'found=__map_groups__find_by_name:10 name:string' Added new event: probe_perf:found (on __map_groups__find_by_name:10 in /home/acme/bin/perf with name:string) You can now use it in all perf tools, such as: perf record -e probe_perf:found -aR sleep 1 # # perf probe -x ~/bin/perf -L map_groups__find_by_name <map_groups__find_by_name@/home/acme/git/perf/tools/perf/util/symbol.c:0> 0 struct map *map_groups__find_by_name(struct map_groups *mg, const char *name) 1 { 2 struct maps *maps = &mg->maps; struct map *map; 5 down_read(&maps->lock); 7 if (mg->last_search_by_name && strcmp(mg->last_search_by_name->dso->short_name, name) == 0) { 8 map = mg->last_search_by_name; 9 goto out_unlock; } /* * If we have mg->maps_by_name, then the name isn't in the rbtree, * as mg->maps_by_name mirrors the rbtree when lookups by name are * made. */ 16 map = __map_groups__find_by_name(mg, name); 17 if (map || mg->maps_by_name != NULL) 18 goto out_unlock; /* Fallback to traversing the rbtree... 
*/ 21 maps__for_each_entry(maps, map) 22 if (strcmp(map->dso->short_name, name) == 0) { 23 mg->last_search_by_name = map; 24 goto out_unlock; } 27 map = NULL; out_unlock: 30 up_read(&maps->lock); 31 return map; 32 } int dso__load_vmlinux(struct dso *dso, struct map *map, const char *vmlinux, bool vmlinux_allocated) # perf probe -x ~/bin/perf 'fallback=map_groups__find_by_name:21 name:string' Added new events: probe_perf:fallback (on map_groups__find_by_name:21 in /home/acme/bin/perf with name:string) probe_perf:fallback_1 (on map_groups__find_by_name:21 in /home/acme/bin/perf with name:string) You can now use it in all perf tools, such as: perf record -e probe_perf:fallback_1 -aR sleep 1 # # perf probe -l probe_perf:fallback (on map_groups__find_by_name:21@util/symbol.c in /home/acme/bin/perf with name_string) probe_perf:fallback_1 (on map_groups__find_by_name:21@util/symbol.c in /home/acme/bin/perf with name_string) probe_perf:found (on __map_groups__find_by_name:10@util/symbol.c in /home/acme/bin/perf with name_string) # # perf stat -e probe_perf:* Now run 'perf top' in another term and then, after a while, stop 'perf stat': Furthermore, if we ask for interval printing, we can see that that is done just at the start of the workload: # perf stat -I1000 -e probe_perf:* # time counts unit events 1.000319513 0 probe_perf:found 1.000319513 0 probe_perf:fallback_1 1.000319513 0 probe_perf:fallback 2.001868092 23,251 probe_perf:found 2.001868092 0 probe_perf:fallback_1 2.001868092 0 probe_perf:fallback 3.002901597 0 probe_perf:found 3.002901597 0 probe_perf:fallback_1 3.002901597 0 probe_perf:fallback 4.003358591 0 probe_perf:found 4.003358591 0 probe_perf:fallback_1 4.003358591 0 probe_perf:fallback ^C # Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-c5lmbyr14x448rcfii7y6t3k@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
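A self-contained sketch of the lazy sort-on-first-lookup scheme with libc's qsort()/bsearch(); the entry array and names below are made up for illustration and are not perf's map_groups code:

```c
/* Keep an unsorted array, qsort() it on the first name lookup, then
 * answer lookups with bsearch(). */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct entry { const char *name; };

static struct entry entries[] = {
	{ "vmlinux" }, { "libc.so.6" }, { "[ext4]" }, { "perf" },
};
static const size_t nr_entries = sizeof(entries) / sizeof(entries[0]);
static bool sorted_by_name;

static int entry_cmp(const void *a, const void *b)
{
	return strcmp(((const struct entry *)a)->name,
		      ((const struct entry *)b)->name);
}

static int key_cmp(const void *key, const void *member)
{
	return strcmp(key, ((const struct entry *)member)->name);
}

static struct entry *find_by_name(const char *name)
{
	if (!sorted_by_name) {		/* defer sorting until the first lookup */
		qsort(entries, nr_entries, sizeof(entries[0]), entry_cmp);
		sorted_by_name = true;
	}
	return bsearch(name, entries, nr_entries, sizeof(entries[0]), key_cmp);
}

int main(void)
{
	struct entry *e = find_by_name("[ext4]");

	printf("%s\n", e ? e->name : "not found");
	return 0;
}
```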
Arnaldo Carvalho de Melo
|
a94ab91a54 |
perf machine: No need to check if kernel module maps pre-exist
We're only populating maps for kernel modules either from perf.data file PERF_RECORD_MMAP records or when parsing /proc/modules, so there is no need to first check if we already have those module maps in the list, as that would mean the kernel has duplicate entries. So ditch one use of looking up maps by name. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-gnzjg2hhuz6jnrw91m35059y@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
6e0a9b3dfa |
perf record: No need to process the synthesized MMAP events twice
At the end of a 'perf record' session, by default, we'll process all samples and populate the threads, maps, etc so as to find out which of the DSOs got samples, to reduce the size of the build-id table we'll add to the perf.data headers. But we don't need to process the PERF_RECORD_MMAP events synthesized for the kernel modules, as we have those already via perf_session__create_kernel_maps(), so add mmap/mmap2 handlers that first look at event->header.misc to see if the event is for a user map, bailing out if not. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-mofoxvcx2dryppcw3o689jdd@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
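A minimal sketch of the early bail-out described above; only the cpumode constants come from the UAPI linux/perf_event.h header, while the helper itself is illustrative rather than perf's actual mmap/mmap2 callback:

```c
#include <linux/perf_event.h>
#include <stdbool.h>

/* Hypothetical helper: true only for user-space mmap records. */
static bool event_is_user_mmap(const struct perf_event_header *header)
{
	/* PERF_RECORD_MISC_CPUMODE_MASK selects the cpumode bits of ->misc;
	 * user-space maps carry PERF_RECORD_MISC_USER there, so kernel and
	 * module maps can be skipped early. */
	return (header->misc & PERF_RECORD_MISC_CPUMODE_MASK) ==
	       PERF_RECORD_MISC_USER;
}
```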
Arnaldo Carvalho de Melo
|
f068435d9b |
perf map: No need to adjust the long name of modules
At some point in the past we needed to make sure we would get the long
name of modules and not just what we get from /proc/modules, but that
need, as described in the cset that introduced the adjustment function:
Fixes:
|
||
Arnaldo Carvalho de Melo
|
1ae14516cb |
perf map_groups: Add a front end cache for map lookups by name
Lets see if it helps: First look at the probeable lines for the function that does lookups by name in a map_groups struct: # perf probe -x ~/bin/perf -L map_groups__find_by_name <map_groups__find_by_name@/home/acme/git/perf/tools/perf/util/symbol.c:0> 0 struct map *map_groups__find_by_name(struct map_groups *mg, const char *name) 1 { 2 struct maps *maps = &mg->maps; struct map *map; 5 down_read(&maps->lock); 7 if (mg->last_search_by_name && strcmp(mg->last_search_by_name->dso->short_name, name) == 0) { 8 map = mg->last_search_by_name; 9 goto out_unlock; } 12 maps__for_each_entry(maps, map) 13 if (strcmp(map->dso->short_name, name) == 0) { 14 mg->last_search_by_name = map; 15 goto out_unlock; } 18 map = NULL; out_unlock: 21 up_read(&maps->lock); 22 return map; 23 } int dso__load_vmlinux(struct dso *dso, struct map *map, const char *vmlinux, bool vmlinux_allocated) # Now add a probe to the place where we reuse the last search: # perf probe -x ~/bin/perf map_groups__find_by_name:8 Added new event: probe_perf:map_groups__find_by_name (on map_groups__find_by_name:8 in /home/acme/bin/perf) You can now use it in all perf tools, such as: perf record -e probe_perf:map_groups__find_by_name -aR sleep 1 # Now lets do a system wide 'perf stat' counting those events: # perf stat -e probe_perf:* Leave it running and lets do a 'perf top', then, after a while, stop the 'perf stat': # perf stat -e probe_perf:* ^C Performance counter stats for 'system wide': 3,603 probe_perf:map_groups__find_by_name 44.565253139 seconds time elapsed # yeah, good to have. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-tcz37g3nxv3tvxw3q90vga3p@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
c5c584d2db |
perf maps: Do not use an rbtree to sort by map name
This is only used for the kernel maps, so shave 24 bytes out of 'struct map' and just traverse the existing per-ip rbtree to look for maps by name, using a front end cache to reuse the last search if it's for the same name. After this, 'struct map' is down to just two cachelines: $ pahole -C map ~/bin/perf struct map { union { struct rb_node rb_node __attribute__((__aligned__(8))); /* 0 24 */ struct list_head node; /* 0 16 */ } __attribute__((__aligned__(8))); /* 0 24 */ u64 start; /* 24 8 */ u64 end; /* 32 8 */ _Bool erange_warned; /* 40 1 */ /* XXX 3 bytes hole, try to pack */ u32 priv; /* 44 4 */ u32 prot; /* 48 4 */ u32 flags; /* 52 4 */ u64 pgoff; /* 56 8 */ /* --- cacheline 1 boundary (64 bytes) --- */ u64 reloc; /* 64 8 */ u32 maj; /* 72 4 */ u32 min; /* 76 4 */ u64 ino; /* 80 8 */ u64 ino_generation; /* 88 8 */ u64 (*map_ip)(struct map *, u64); /* 96 8 */ u64 (*unmap_ip)(struct map *, u64); /* 104 8 */ struct dso * dso; /* 112 8 */ refcount_t refcnt; /* 120 4 */ /* size: 128, cachelines: 2, members: 17 */ /* sum members: 121, holes: 1, sum holes: 3 */ /* padding: 4 */ /* forced alignments: 1 */ } __attribute__((__aligned__(8))); $ Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-bvr8fqfgzxtgnhnwt5sssx5g@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
bcb8af5c46 |
perf maps: Purge the entries from maps->names in __maps__purge()
No need to iterate via the ->names rbtree, as all the entries there are in maps->entries as well; reuse __maps__purge() for that. Doing it this way we can kill maps__for_each_entry_by_name(), maps__for_each_entry_by_name_safe() and maps__{first,next}_by_name(). Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-ps0nrio8pydyo23rr2s696ue@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Adrian Hunter
|
af833988c0 |
perf scripts python: exported-sql-viewer.py: Fix use of TRUE with SQLite
Prior to version 3.23 SQLite does not support TRUE or FALSE, so always
use 1 and 0 for SQLite.
Fixes:
|
||
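A small standalone illustration of the portability rule, using the SQLite C API (the fixed script itself is Python): comparing against 1 works on every SQLite version, while the TRUE keyword only parses on 3.23 and later:

```c
#include <sqlite3.h>
#include <stdio.h>

int main(void)
{
	sqlite3 *db;
	int rc;

	if (sqlite3_open(":memory:", &db) != SQLITE_OK)
		return 1;

	sqlite3_exec(db, "CREATE TABLE samples (in_kernel INTEGER);"
			 "INSERT INTO samples VALUES (1), (0);", NULL, NULL, NULL);

	/* Portable: integer literal instead of the TRUE keyword. */
	rc = sqlite3_exec(db, "SELECT count(*) FROM samples WHERE in_kernel = 1;",
			  NULL, NULL, NULL);
	printf("integer literal: %s\n", rc == SQLITE_OK ? "ok" : sqlite3_errmsg(db));

	/* The same filter written as "= TRUE" fails to parse before SQLite 3.23. */
	rc = sqlite3_exec(db, "SELECT count(*) FROM samples WHERE in_kernel = TRUE;",
			  NULL, NULL, NULL);
	printf("TRUE keyword:    %s\n", rc == SQLITE_OK ? "ok" : sqlite3_errmsg(db));

	sqlite3_close(db);
	return 0;
}
```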
James Clark
|
da3ef7f6cd |
perf vendor events power9: Fix commas so PMU event files are valid JSON
No functional change. Remove extra commas in the power9 JSON files so that the files can be parsed and validated by other utilities such as Python that fail to parse invalid JSON. Before: $ diffstat -l -p1 /wb/1.patch | while read filename ; do echo $filename ; cat $filename | json_verify ; done tools/perf/pmu-events/arch/powerpc/power9/cache.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x300 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power9/floating-point.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x141 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power9/frontend.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x250 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power9/marked.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x301 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power9/memory.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x300 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power9/other.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x308 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power9/pipeline.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x4D0 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power9/pmc.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x200 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power9/translation.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x1E" (right here) ------^ JSON is invalid $ After: $ diffstat -l -p1 /wb/1.patch | while read filename ; do echo $filename ; cat $filename | json_verify ; done tools/perf/pmu-events/arch/powerpc/power9/cache.json JSON is valid tools/perf/pmu-events/arch/powerpc/power9/floating-point.json JSON is valid tools/perf/pmu-events/arch/powerpc/power9/frontend.json JSON is valid tools/perf/pmu-events/arch/powerpc/power9/marked.json JSON is valid tools/perf/pmu-events/arch/powerpc/power9/memory.json JSON is valid tools/perf/pmu-events/arch/powerpc/power9/other.json JSON is valid tools/perf/pmu-events/arch/powerpc/power9/pipeline.json JSON is valid tools/perf/pmu-events/arch/powerpc/power9/pmc.json JSON is valid tools/perf/pmu-events/arch/powerpc/power9/translation.json JSON is valid $ Signed-off-by: James Clark <james.clark@arm.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kevin Mooney <kevin.mooney@arm.com> Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Cc: Mamatha Inamdar <mamatha4@linux.vnet.ibm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: nd@arm.com Link: http://lore.kernel.org/lkml/20191112160342.26470-3-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
James Clark
|
835e5bd909 |
perf vendor events power8: Fix commas so PMU event files are valid JSON
No functional change. Remove extra commas in the power8 JSON files so that the files can be parsed and validated by other utilities such as Python that fail to parse invalid JSON. Committer testing: Before: $ diffstat -l -p1 /wb/1.patch | while read filename ; do echo $filename ; cat $filename | json_verify ; done tools/perf/pmu-events/arch/powerpc/power8/cache.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x4c0 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power8/floating-point.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x200 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power8/frontend.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x250 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power8/marked.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x351 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power8/memory.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x100 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power8/other.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x1f0 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power8/pipeline.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x100 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power8/pmc.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x200 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/powerpc/power8/translation.json parse error: invalid object key (must be a string) [ {, "EventCode": "0x4c0 (right here) ------^ JSON is invalid $ After: $ diffstat -l -p1 /wb/1.patch | while read filename ; do echo $filename ; cat $filename | json_verify ; done tools/perf/pmu-events/arch/powerpc/power8/cache.json JSON is valid tools/perf/pmu-events/arch/powerpc/power8/floating-point.json JSON is valid tools/perf/pmu-events/arch/powerpc/power8/frontend.json JSON is valid tools/perf/pmu-events/arch/powerpc/power8/marked.json JSON is valid tools/perf/pmu-events/arch/powerpc/power8/memory.json JSON is valid tools/perf/pmu-events/arch/powerpc/power8/other.json JSON is valid tools/perf/pmu-events/arch/powerpc/power8/pipeline.json JSON is valid tools/perf/pmu-events/arch/powerpc/power8/pmc.json JSON is valid tools/perf/pmu-events/arch/powerpc/power8/translation.json JSON is valid $ Signed-off-by: James Clark <james.clark@arm.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kevin Mooney <kevin.mooney@arm.com> Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Cc: Mamatha Inamdar <mamatha4@linux.vnet.ibm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: nd@arm.com Link: http://lore.kernel.org/lkml/20191112160342.26470-2-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
James Clark
|
a44e4f3ab1 |
perf vendor events arm64: Fix commas so PMU event files are valid JSON
No functional change. Add and remove extra commas in the arm64 JSON files so that the files can be parsed and validated by other utilities such as Python that fail to parse invalid JSON. Committer testing: Before: $ diffstat -l -p1 /wb/1.patch | while read filename ; do echo $filename ; cat $filename | json_verify ; done tools/perf/pmu-events/arch/arm64/ampere/emag/branch.json parse error: invalid object key (must be a string) [ { "ArchStdEvent" (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/ampere/emag/bus.json parse error: invalid object key (must be a string) [ { "ArchStdEvent" (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/ampere/emag/cache.json parse error: invalid object key (must be a string) [ { "ArchStdEvent" (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/ampere/emag/clock.json parse error: unallowed token at this point in JSON text [ { "PublicDescrip (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/ampere/emag/exception.json parse error: invalid object key (must be a string) [ { "ArchStdEvent" (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/ampere/emag/instruction.json parse error: invalid object key (must be a string) [ { "ArchStdEvent" (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/ampere/emag/intrinsic.json parse error: invalid object key (must be a string) [ { "ArchStdEvent" (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/ampere/emag/memory.json parse error: invalid object key (must be a string) [ { "ArchStdEvent" (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/ampere/emag/pipeline.json parse error: unallowed token at this point in JSON text [ { "PublicDescrip (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/arm/cortex-a53/branch.json parse error: invalid object key (must be a string) [ { "ArchStdEvent": "BR (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/arm/cortex-a53/bus.json parse error: invalid object key (must be a string) [ { "ArchStdEvent": (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/arm/cortex-a53/other.json parse error: invalid object key (must be a string) [ { "ArchStdEvent": (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/arm/cortex-a57-a72/core-imp-def.json parse error: invalid object key (must be a string) [ { "ArchStdEvent" (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/armv8-recommended.json parse error: after array element, I expect ',' or ']' [ { "PublicDescrip (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/cavium/thunderx2/core-imp-def.json parse error: invalid object key (must be a string) [ { "ArchStdEvent" (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/hisilicon/hip08/core-imp-def.json parse error: invalid object key (must be a string) [ { "ArchStdEvent" (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json parse error: invalid object key (must be a string) [ { "EventCode": "0x00 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-hha.json parse error: invalid object key (must be a string) [ { "EventCode": "0x00 (right here) ------^ JSON is invalid tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-l3c.json parse error: invalid object key (must be a string) [ { "EventCode": "0x00 (right here) ------^ JSON is invalid $ After: $ 
diffstat -l -p1 /wb/1.patch | while read filename ; do echo $filename ; cat $filename | json_verify ; done tools/perf/pmu-events/arch/arm64/ampere/emag/branch.json JSON is valid tools/perf/pmu-events/arch/arm64/ampere/emag/bus.json JSON is valid tools/perf/pmu-events/arch/arm64/ampere/emag/cache.json JSON is valid tools/perf/pmu-events/arch/arm64/ampere/emag/clock.json JSON is valid tools/perf/pmu-events/arch/arm64/ampere/emag/exception.json JSON is valid tools/perf/pmu-events/arch/arm64/ampere/emag/instruction.json JSON is valid tools/perf/pmu-events/arch/arm64/ampere/emag/intrinsic.json JSON is valid tools/perf/pmu-events/arch/arm64/ampere/emag/memory.json JSON is valid tools/perf/pmu-events/arch/arm64/ampere/emag/pipeline.json JSON is valid tools/perf/pmu-events/arch/arm64/arm/cortex-a53/branch.json JSON is valid tools/perf/pmu-events/arch/arm64/arm/cortex-a53/bus.json JSON is valid tools/perf/pmu-events/arch/arm64/arm/cortex-a53/other.json JSON is valid tools/perf/pmu-events/arch/arm64/arm/cortex-a57-a72/core-imp-def.json JSON is valid tools/perf/pmu-events/arch/arm64/armv8-recommended.json JSON is valid tools/perf/pmu-events/arch/arm64/cavium/thunderx2/core-imp-def.json JSON is valid tools/perf/pmu-events/arch/arm64/hisilicon/hip08/core-imp-def.json JSON is valid tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json JSON is valid tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-hha.json JSON is valid tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-l3c.json JSON is valid $ Signed-off-by: James Clark <james.clark@arm.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: John Garry <john.garry@huawei.com> Cc: Kevin Mooney <kevin.mooney@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: nd@arm.com Link: http://lore.kernel.org/lkml/20191112160342.26470-1-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Ian Rogers
|
e1e9b78d39 |
perf parse: Use YYABORT to clear stack after failure, plugging leaks
Using return rather than YYABORT means that the stack isn't cleared up following a failure. The change to YYABORT means the return value is 1 rather than -1, but the callers just check for a result of 0 (success). Add missing free of a list when an error occurs in event_pmu. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lore.kernel.org/lkml/20191109075840.181231-1-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Ravi Bangoria
|
ccd26741f5 |
perf tool: Provide an option to print perf_event_open args and return value
Perf record with verbose=2 already prints this information along with a whole lot of other traces, which requires a lot of scrolling. Introduce an option to print only perf_event_open() arguments and return value. Sample o/p: $ perf --debug perf-event-open=1 record -- ls > /dev/null ------------------------------------------------------------ perf_event_attr: size 112 { sample_period, sample_freq } 4000 sample_type IP|TID|TIME|PERIOD read_format ID disabled 1 inherit 1 exclude_kernel 1 mmap 1 comm 1 freq 1 enable_on_exec 1 task 1 precise_ip 3 sample_id_all 1 exclude_guest 1 mmap2 1 comm_exec 1 ksymbol 1 bpf_event 1 ------------------------------------------------------------ sys_perf_event_open: pid 4308 cpu 0 group_fd -1 flags 0x8 = 4 sys_perf_event_open: pid 4308 cpu 1 group_fd -1 flags 0x8 = 5 sys_perf_event_open: pid 4308 cpu 2 group_fd -1 flags 0x8 = 6 sys_perf_event_open: pid 4308 cpu 3 group_fd -1 flags 0x8 = 8 sys_perf_event_open: pid 4308 cpu 4 group_fd -1 flags 0x8 = 9 sys_perf_event_open: pid 4308 cpu 5 group_fd -1 flags 0x8 = 10 sys_perf_event_open: pid 4308 cpu 6 group_fd -1 flags 0x8 = 11 sys_perf_event_open: pid 4308 cpu 7 group_fd -1 flags 0x8 = 12 ------------------------------------------------------------ perf_event_attr: type 1 size 112 config 0x9 watermark 1 sample_id_all 1 bpf_event 1 { wakeup_events, wakeup_watermark } 1 ------------------------------------------------------------ sys_perf_event_open: pid -1 cpu 0 group_fd -1 flags 0x8 sys_perf_event_open failed, error -13 [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.002 MB perf.data (9 samples) ] Committer notes: Just like the 'verbose' variable, this new 'debug_peo_args' needs to be added to util/python.c, since we don't link the debug.o file in the python binding, which ended up making 'perf test python' fail with: # perf test -v python 18: 'import perf' in python : --- start --- test child forked, pid 19237 Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: /tmp/build/perf/python/perf.so: undefined symbol: debug_peo_args test child finished with -1 ---- end ---- 'import perf' in python: FAILED! # After adding that new variable to util/python.c: # perf test -v python 18: 'import perf' in python : --- start --- test child forked, pid 22364 test child finished with 0 ---- end ---- 'import perf' in python: Ok # Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: http://lore.kernel.org/lkml/20191108094128.28769-1-ravi.bangoria@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
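For reference, a minimal example of the syscall whose arguments the new debug option prints, adapted from the well-known perf_event_open() usage pattern; the counter choice and flags here are arbitrary:

```c
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

/* glibc has no wrapper, so the raw syscall is used. */
static int perf_event_open(struct perf_event_attr *attr, pid_t pid, int cpu,
			   int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.disabled = 1;
	attr.exclude_kernel = 1;

	/* pid/cpu/group_fd/flags are the arguments the debug output shows. */
	fd = perf_event_open(&attr, 0 /* self */, -1 /* any cpu */,
			     -1 /* no group */, 0);
	if (fd < 0)
		perror("perf_event_open");
	else
		close(fd);
	return 0;
}
```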
Arnaldo Carvalho de Melo
|
7b018e2987 |
perf map: Remove ->groups from 'struct map'
With this, 'struct map' is down to 152 bytes, using just 24 bytes of its third cacheline: $ pahole -C map ~/bin/perf <SNIP> /* --- cacheline 2 boundary (128 bytes) --- */ u64 (*unmap_ip)(struct map *, u64); /* 128 8 */ struct dso * dso; /* 136 8 */ refcount_t refcnt; /* 144 4 */ /* size: 152, cachelines: 3, members: 18 */ /* sum members: 145, holes: 1, sum holes: 3 */ /* padding: 4 */ /* forced alignments: 2 */ /* last cacheline: 24 bytes */ } __attribute__((__aligned__(8))); $ We can probably move map->map/unmap_ip() to 'struct map_groups'; that will shave 16 more bytes, getting this almost to two cachelines. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-ymlv3nzpofv2fugnjnizkrwy@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
3f662fc08d |
perf map: Combine maps__fixup_overlappings with its only use
In the process we can kill some of the struct map->groups usage, trying to get rid of these per-'struct map' fields that get in the way of sharing a map across parent/child processes. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-e50eqtqw3za24vmbjnqmmcs6@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
94e44b9ca5 |
perf annotate: Stop using map->groups, use map_symbol->mg instead
These were the last uses of map->groups, next cset will nuke it. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-n3g0foos7l7uxq9nar0zo0vj@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
08f6680e62 |
perf tools: Add a 'struct map_groups' pointer to 'struct map_symbol'
And fill it whenever we set up a 'struct map_symbol'; now we need to use it, in the next cset. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-fzwfcnddenz1o7uj1fzw3g46@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
93fcce96c7 |
perf symbols: Use kmaps(map)->machine when we know it's a kernel map
And then stop using map->groups to achieve that. To test that that branch is being taken, probe the function that is only called from there and then run something like 'perf top' in another xterm: # perf probe -x ~/bin/perf machine__map_x86_64_entry_trampolines Added new event: probe_perf:machine__map_x86_64_entry_trampolines (on machine__map_x86_64_entry_trampolines in /home/acme/bin/perf) You can now use it in all perf tools, such as: perf record -e probe_perf:machine__map_x86_64_entry_trampolines -aR sleep 1 # perf trace -e probe_perf:* 0.000 bash/10614 probe_perf:machine__map_x86_64_entry_trampolines(__probe_ip: 5224944) ^C# Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-lgrrzdxo2p9liq2keivcg887@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
d46a4cdf49 |
perf tools: Make 'struct addr_map_symbol' contain 'struct map_symbol'
So that we pass that substructure around and with it consolidate lots of functions that received a (map, symbol) pair and can now receive just a 'struct map_symbol' pointer. This further paves the way to add 'struct map_groups' to 'struct map_symbol' so that we have all we need for annotation and can ditch 'struct map'->groups, i.e. have the map_groups pointer in a more central place, avoiding the pointer in 'struct map', which has tons of instances. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-fs90ttd9q12l7989fo7pw81q@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
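A simplified sketch of the struct composition this series builds up (members abbreviated, not an exact copy of the perf headers): 'struct addr_map_symbol' embeds a 'struct map_symbol', and a map_groups pointer can later live there instead of in every 'struct map':

```c
#include <stdint.h>

struct map;
struct symbol;
struct map_groups;

struct map_symbol {
	struct map_groups *mg;	/* added by the follow-up cset          */
	struct map *map;
	struct symbol *sym;
};

struct addr_map_symbol {
	struct map_symbol ms;	/* was: separate ->map and ->sym fields */
	uint64_t addr;
	uint64_t al_addr;
	uint64_t phys_addr;
};

/* Callers then pass '&ams->ms' instead of an (ams->map, ams->sym) pair. */
```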
Arnaldo Carvalho de Melo
|
5f0fef8ac3 |
perf callchain: Use 'struct map_symbol' in 'struct callchain_cursor_node'
To ease passing around map+symbol, just like done for other parts of the tree recently. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
c1529738f5 |
perf unwind: Use 'struct map_symbol' in 'struct unwind_entry'
To help in passing that info around to callchain routines that, for the same reason, are moving to use 'struct map_symbol'. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-epsiibeprpxa8qpwji47uskc@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
2975489458 |
perf annotate: Pass a 'map_symbol' in places receiving a pair of 'map' and 'symbol' pointers
We are already passing things like: symbol__annotate(ms->sym, ms->map, ...) So shorten the signature of such functions to receive the 'map_symbol' pointer. This also paves the way to having the 'struct map_groups' pointer in the 'struct map_symbol' so that we can get rid of 'struct map'->groups. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-23yx8v1t41nzpkpi7rdrozww@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
d3a022cbdc |
perf tools: Add map_groups to 'struct addr_location'
From there we can get al->mg->machine, so replace that field with the more useful 'struct map_groups' that for now we're obtaining from al->map->groups, and that is one thing getting in the way of maps being fully shareable. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-4qdducrm32tgrjupcp0kjh1e@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
9d355b381b |
perf map_groups: Pass the object to map_groups__find_ams()
We were just passing a map to look for and reuse its map->groups member, but the idea is that this is going away, as a map can be in multiple rb_trees when being reused via a map_node, so do as all the other map_groups methods and pass as its first arg the object being operated on. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-nmi2pbggqloogwl6vxrvex5a@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
f2baa060cd |
perf symbols: Stop using map->groups, we can use kmaps instead
To test that that function is being called I just added a probe on that place, enabled it via 'perf trace' asking for at most 16 levels of backtraces, system wide, and then ran 'perf top' on another xterm, voilà: # perf probe -x ~/bin/perf dso__process_kernel_symbol Added new event: probe_perf:dso__process_kernel_symbol (on dso__process_kernel_symbol in /home/acme/bin/perf) You can now use it in all perf tools, such as: perf record -e probe_perf:dso__process_kernel_symbol -aR sleep 1 # perf trace -e probe_perf:dso__process_kernel_symbol/max-stack=16/ --max-events=2 # perf trace -e probe_perf:dso__process_kernel_symbol/max-stack=16/ --max-events=2 0.000 :17345/17345 probe_perf:dso__process_kernel_symbol(__probe_ip: 5680224) dso__process_kernel_symbol (/home/acme/bin/perf) dso__load_vmlinux (/home/acme/bin/perf) dso__load_vmlinux_path (/home/acme/bin/perf) dso__load (/home/acme/bin/perf) map__load (/home/acme/bin/perf) thread__find_map (/home/acme/bin/perf) machine__resolve (/home/acme/bin/perf) deliver_event (/home/acme/bin/perf) __ordered_events__flush.part.0 (/home/acme/bin/perf) process_thread (/home/acme/bin/perf) start_thread (/usr/lib64/libpthread-2.29.so) 0.064 :17345/17345 probe_perf:dso__process_kernel_symbol(__probe_ip: 5680224) dso__process_kernel_symbol (/home/acme/bin/perf) dso__load_vmlinux (/home/acme/bin/perf) dso__load_vmlinux_path (/home/acme/bin/perf) dso__load (/home/acme/bin/perf) map__load (/home/acme/bin/perf) thread__find_map (/home/acme/bin/perf) machine__resolve (/home/acme/bin/perf) deliver_event (/home/acme/bin/perf) __ordered_events__flush.part.0 (/home/acme/bin/perf) process_thread (/home/acme/bin/perf) start_thread (/usr/lib64/libpthread-2.29.so) # # perf stat -e probe_perf:dso__process_kernel_symbol ^C Performance counter stats for 'system wide': 107,308 probe_perf:dso__process_kernel_symbol 8.215399813 seconds time elapsed # Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-5fy66x5hr5ct9pmw84jkiwvm@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
de90d513b2 |
perf map: Use map->dso->kernel + map__kmaps() in map__kmaps()
It's equivalent to using map->groups to obtain the machine struct. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-bdbazuj4ggrmzxdviaqdrdwh@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Ingo Molnar
|
56b2147f34 |
perf/core improvements and fixes:
perf report: Jin Yao: - Introduce --total-cycles, for basic block profiling, further using data obtained from LBR; an example should suffice: # perf record -b ^C[ perf record: Woken up 595 times to write data ] [ perf record: Captured and wrote 156.672 MB perf.data (196873 samples) ] # perf evlist -v cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CPU|PERIOD|BRANCH_STACK, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, branch_sample_type: ANY # perf report --total-cycles --stdio # To display the perf.data header info, please use --header/--header-only options. # # Total Lost Samples: 0 # # Samples: 6M of event 'cycles' # Event count (approx.): 6299936 # # Sampled Sampled Avg Avg # Cycles% Cycles Cycles% Cycles [Program Block Range] Shared Object # ....... ...... ....... ..... .................................... ................ # 2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux] 0.72% 544.5K 0.03% 230 [entry_64.S:657 -> entry_64.S:662] [kernel.vmlinux] 0.56% 541.8K 0.09% 672 [compiler.h:199 -> common.c:300] [kernel.vmlinux] 0.39% 293.2K 0.01% 104 [list_debug.c:43 -> list_debug.c:61] [kernel.vmlinux] 0.36% 278.6K 0.03% 272 [entry_64.S:1289 -> entry_64.S:1308] [kernel.vmlinux] perf record: Adrian Hunter: - Allow storing perf.data in a directory together with a copy of /proc/kcore. Jiwei Sun: - Add support for limiting perf output file size, i.e.: # perf record --all-cpus -F 10000 --max-size=4M sleep 10h [ perf record: perf size limit reached (4097 KB), stopping session ] [ perf record: Woken up 6 times to write data ] [ perf record: Captured and wrote 4.048 MB perf.data (54094 samples) ] Terminated # ls -lah perf.data -rw-------. 1 root root 4.1M Nov 7 15:27 perf.data # perf stat: Jiri Olsa: - Add --per-node aggregation support: In live mode: # perf stat -a -I 1000 -e cycles --per-node # time node cpus counts unit events 1.000542550 N0 20 6,202,097 cycles 1.000542550 N1 20 639,559 cycles 2.002040063 N0 20 7,412,495 cycles 2.002040063 N1 20 2,185,577 cycles 3.003451699 N0 20 6,508,917 cycles 3.003451699 N1 20 765,607 cycles ... Or in the record/report stat session: # perf stat record -a -I 1000 -e cycles # time counts unit events 1.000536937 10,008,468 cycles 2.002090152 9,578,539 cycles 3.003625233 7,647,869 cycles 4.005135036 7,032,086 cycles ^C 4.340902364 3,923,893 cycles # perf stat report --per-node # time node cpus counts unit events 1.000536937 N0 20 9,355,086 cycles 1.000536937 N1 20 653,382 cycles 2.002090152 N0 20 7,712,838 cycles 2.002090152 N1 20 1,865,701 cycles ... 
perf probe: Masami Hiramatsu: Various fixes related to recent additions to the DWARF format: - Fix to find range-only function instance - Walk function lines in lexical blocks - Fix to show function entry line as probe-able - Fix wrong address verification - Fix to probe a function which has no entry pc - Fix to probe an inline function which has no entry pc - Fix to list probe event with correct line number - Fix to show inlined function callsite without entry_pc - Fix to show ranges of variables in functions without entry_pc - Return a better scope DIE if there is no best scope - Skip end-of-sequence and non statement lines - Filter out instances except for inlined subroutine and subprogram - Fix to show calling lines of inlined functions - Skip overlapped location on searching variables perf inject: Adrian Hunter: - Do not strip evsels with --strip, as they are needed for create_gcov (see the autofdo example in tools/perf/Documentation/intel-pt.txt). Intel PT: Adrian Hunter: - Intel PT uses an auxtrace_cache to store the results of code-walking, to avoid repeated decoding. Add an auxtrace_cache__remove to handle text poke events. core: Andi Kleen: - Always preserve errno while cleaning up perf_event_open failures. llvm: Arnaldo Carvalho de Melo: - No need to tell that the request for saving a .o file for BPF events, as expressed in ~/.perfconfig was satisfied, make that a debug message. perf vendor events: Intel: Haiyan Song: - Update CascadelakeX events to v1.05. - Update all the Intel JSON metrics from TMAM 3.6. Treewide: Ian Rogers: - Improve error paths, plugging leaks found using LLVM tools such as libFuzzer. jevents: Yunfeng Ye: - Fix resource leak in process_mapfile() and main() perf kvm: Igor Lubashev: - Use evlist layer api when possible. libsubcmd: James Clark: - Move EXTRA_FLAGS to the end to allow overriding existing flags. - Use -O0 with DEBUG=1 perf diff: Jin Yao: - Don't use hack to skip column length calculation CoreSight ETM: Leo yan: - Fix definition of macro TO_CS_QUEUE_NR ARM64: John Garry: - Do not try to include libelf header files when its feature detection failed, fixing the cross build for ARM64. perf tests: Leo Yan: - Fix out of bounds memory access in the backward ring buffer test. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Merge tag 'perf-core-for-mingo-5.5-20191107' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> |
||
Ingo Molnar
|
1ca7feb590 |
Linux 5.4-rc7
Merge tag 'v5.4-rc7' into perf/core, to pick up fixes Signed-off-by: Ingo Molnar <mingo@kernel.org> |
||
Linus Torvalds
|
b584a17628 |
Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf tooling fixes from Thomas Gleixner: - Fix the time sorting algorithm which was broken due to truncation of big numbers - Fix the python script generator failure caused by a broken tracepoint array iterator * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: perf tools: Fix time sorting perf tools: Remove unused trace_find_next_event() perf scripting engines: Iterate on tep event arrays directly |
||
Jin Yao
|
7fa46cbf20 |
perf report: Sort by sampled cycles percent per block for tui
Previous patch has implemented a new option "--total-cycles". But only stdio mode is supported. This patch supports the tui mode and support '--percent-limit'. For example, perf record -b ./div perf report --total-cycles --percent-limit 1 # Samples: 2753248 of event 'cycles' Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object 26.04% 2.8M 0.40% 18 [div.c:42 -> div.c:39] div 15.17% 1.2M 0.16% 7 [random_r.c:357 -> random_r.c:380] libc-2.27.so 5.11% 402.0K 0.04% 2 [div.c:27 -> div.c:28] div 4.87% 381.6K 0.04% 2 [random.c:288 -> random.c:291] libc-2.27.so 4.53% 381.0K 0.04% 2 [div.c:40 -> div.c:40] div 3.85% 300.9K 0.02% 1 [div.c:22 -> div.c:25] div 3.08% 241.1K 0.02% 1 [rand.c:26 -> rand.c:27] libc-2.27.so 3.06% 240.0K 0.02% 1 [random.c:291 -> random.c:291] libc-2.27.so 2.78% 215.7K 0.02% 1 [random.c:298 -> random.c:298] libc-2.27.so 2.52% 198.3K 0.02% 1 [random.c:293 -> random.c:293] libc-2.27.so 2.36% 184.8K 0.02% 1 [rand.c:28 -> rand.c:28] libc-2.27.so 2.33% 180.5K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so 2.28% 176.7K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so 2.20% 168.8K 0.02% 1 [rand@plt+0 -> rand@plt+0] div 1.98% 158.2K 0.02% 1 [random_r.c:388 -> random_r.c:388] libc-2.27.so 1.57% 123.3K 0.02% 1 [div.c:42 -> div.c:44] div 1.44% 116.0K 0.42% 19 [random_r.c:357 -> random_r.c:394] libc-2.27.so -------------------------------------------------- v7: --- 1. Since we have used use_browser in report__browse_block_hists to support stdio mode, now we also add supporting for tui. 2. Move block tui browser code from ui/browsers/hists.c to block-info.c. v6: --- Create report__tui_browse_block_hists in block-info.c (codes are moved from builtin-report.c). v5: --- Fix a crash issue when running perf report without '--total-cycles'. The issue is because the internal flag is renamed from 'total_cycles' to 'total_cycles_mode' in previous patch but this patch still uses 'total_cycles' to check if the '--total-cycles' option is enabled, which causes the code to be inconsistent. v4: --- Since the block collection is moved out of printing in previous patch, this patch is updated accordingly for tui supporting. v3: --- Minor change since the function name is changed: block_total_cycles_percent -> block_info__total_cycles_percent Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20191107074719.26139-8-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Jin Yao
|
0b49f83657 |
perf report: Support --percent-limit for --total-cycles
We have already supported the '--total-cycles' option in previous patch. It's also useful to show entries only above a threshold percent. This patch enables '--percent-limit' for not showing entries under that percent. For example: perf report --total-cycles --stdio --percent-limit 1 # To display the perf.data header info, please use --header/--header-only options. # # # Total Lost Samples: 0 # # Samples: 2M of event 'cycles' # Event count (approx.): 2753248 # # Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object # ............... .............. ........... .......... ................................................................. .................... # 26.04% 2.8M 0.40% 18 [div.c:42 -> div.c:39] div 15.17% 1.2M 0.16% 7 [random_r.c:357 -> random_r.c:380] libc-2.27.so 5.11% 402.0K 0.04% 2 [div.c:27 -> div.c:28] div 4.87% 381.6K 0.04% 2 [random.c:288 -> random.c:291] libc-2.27.so 4.53% 381.0K 0.04% 2 [div.c:40 -> div.c:40] div 3.85% 300.9K 0.02% 1 [div.c:22 -> div.c:25] div 3.08% 241.1K 0.02% 1 [rand.c:26 -> rand.c:27] libc-2.27.so 3.06% 240.0K 0.02% 1 [random.c:291 -> random.c:291] libc-2.27.so 2.78% 215.7K 0.02% 1 [random.c:298 -> random.c:298] libc-2.27.so 2.52% 198.3K 0.02% 1 [random.c:293 -> random.c:293] libc-2.27.so 2.36% 184.8K 0.02% 1 [rand.c:28 -> rand.c:28] libc-2.27.so 2.33% 180.5K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so 2.28% 176.7K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so 2.20% 168.8K 0.02% 1 [rand@plt+0 -> rand@plt+0] div 1.98% 158.2K 0.02% 1 [random_r.c:388 -> random_r.c:388] libc-2.27.so 1.57% 123.3K 0.02% 1 [div.c:42 -> div.c:44] div 1.44% 116.0K 0.42% 19 [random_r.c:357 -> random_r.c:394] libc-2.27.so Committer testing: From second exapmple onwards slightly edited for brevity: # perf report --total-cycles --percent-limit 2 --stdio # To display the perf.data header info, please use --header/--header-only options. # # # Total Lost Samples: 0 # # Samples: 6M of event 'cycles' # Event count (approx.): 6299936 # # Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object # ............... .............. ........... .......... ...................................................................... .................... # 2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux] # # (Tip: Create an archive with symtabs to analyse on other machine: perf archive) # # perf report --total-cycles --percent-limit 1 --stdio # Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object 2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux] 1.75% 1.3M 8.34% 65.5K [memset-vec-unaligned-erms.S:147 -> memset-vec-unaligned-erms.S:151] libc-2.29.so # # perf report --total-cycles --percent-limit 0.7 --stdio # Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object 2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux] 1.75% 1.3M 8.34% 65.5K [memset-vec-unaligned-erms.S:147 -> memset-vec-unaligned-erms.S:151] libc-2.29.so 0.72% 544.5K 0.03% 230 [entry_64.S:657 -> entry_64.S:662] [kernel.vmlinux] # ------------------------------------------- It only shows the entries which 'Sampled Cycles%' > 1%. v7: --- No functional change. Only fix the conflict issue because previous patches are changed. v6: --- No functional change. Only fix the conflict issue because previous patches are changed. v5: --- No functional change. Only fix the conflict issue because previous patches are changed. 
v4: --- No functional change. Only fix the build issue because previous patches are changed. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20191107074719.26139-7-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Jin Yao
|
6f7164fa23 |
perf report: Sort by sampled cycles percent per block for stdio
It would be useful to support sorting for all blocks by the sampled cycles percent per block. This is useful to concentrate on the globally hottest blocks. This patch implements a new option "--total-cycles" which sorts all blocks by 'Sampled Cycles%'. The 'Sampled Cycles%' is the percent: percent = block sampled cycles aggregation / total sampled cycles Note that, this patch only supports "--stdio" mode. For example, # perf record -b ./div # perf report --total-cycles --stdio # To display the perf.data header info, please use --header/--header-only options. # # Total Lost Samples: 0 # # Samples: 2M of event 'cycles' # Event count (approx.): 2753248 # # Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object # ............... .............. ........... .......... ................................................ ................. # 26.04% 2.8M 0.40% 18 [div.c:42 -> div.c:39] div 15.17% 1.2M 0.16% 7 [random_r.c:357 -> random_r.c:380] libc-2.27.so 5.11% 402.0K 0.04% 2 [div.c:27 -> div.c:28] div 4.87% 381.6K 0.04% 2 [random.c:288 -> random.c:291] libc-2.27.so 4.53% 381.0K 0.04% 2 [div.c:40 -> div.c:40] div 3.85% 300.9K 0.02% 1 [div.c:22 -> div.c:25] div 3.08% 241.1K 0.02% 1 [rand.c:26 -> rand.c:27] libc-2.27.so 3.06% 240.0K 0.02% 1 [random.c:291 -> random.c:291] libc-2.27.so 2.78% 215.7K 0.02% 1 [random.c:298 -> random.c:298] libc-2.27.so 2.52% 198.3K 0.02% 1 [random.c:293 -> random.c:293] libc-2.27.so 2.36% 184.8K 0.02% 1 [rand.c:28 -> rand.c:28] libc-2.27.so 2.33% 180.5K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so 2.28% 176.7K 0.02% 1 [random.c:295 -> random.c:295] libc-2.27.so 2.20% 168.8K 0.02% 1 [rand@plt+0 -> rand@plt+0] div 1.98% 158.2K 0.02% 1 [random_r.c:388 -> random_r.c:388] libc-2.27.so 1.57% 123.3K 0.02% 1 [div.c:42 -> div.c:44] div 1.44% 116.0K 0.42% 19 [random_r.c:357 -> random_r.c:394] libc-2.27.so 0.25% 182.5K 0.02% 1 [random_r.c:388 -> random_r.c:391] libc-2.27.so 0.00% 48 1.07% 48 [x86_pmu_enable+284 -> x86_pmu_enable+298] [kernel.kallsyms] 0.00% 74 1.64% 74 [vm_mmap_pgoff+0 -> vm_mmap_pgoff+92] [kernel.kallsyms] 0.00% 73 1.62% 73 [vm_mmap+0 -> vm_mmap+48] [kernel.kallsyms] 0.00% 63 0.69% 31 [up_write+0 -> up_write+34] [kernel.kallsyms] 0.00% 13 0.29% 13 [setup_arg_pages+396 -> setup_arg_pages+413] [kernel.kallsyms] 0.00% 3 0.07% 3 [setup_arg_pages+418 -> setup_arg_pages+450] [kernel.kallsyms] 0.00% 616 6.84% 308 [security_mmap_file+0 -> security_mmap_file+72] [kernel.kallsyms] 0.00% 23 0.51% 23 [security_mmap_file+77 -> security_mmap_file+87] [kernel.kallsyms] 0.00% 4 0.02% 1 [sched_clock+0 -> sched_clock+4] [kernel.kallsyms] 0.00% 4 0.02% 1 [sched_clock+9 -> sched_clock+12] [kernel.kallsyms] 0.00% 1 0.02% 1 [rcu_nmi_exit+0 -> rcu_nmi_exit+9] [kernel.kallsyms] Committer testing: This should provide material for hours of endless joy, both from looking for suspicious things in the implementation of this patch, such as the top one: # Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object 2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux] As well from things that look legit: # Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object 0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux] :-) Very short system wide taken branches session: # perf record -h -b Usage: perf record [<options>] [<command>] or: perf record [<options>] -- <command> [<options>] -b, --branch-any sample any taken branches # # perf record -b ^C[ perf 
record: Woken up 595 times to write data ] [ perf record: Captured and wrote 156.672 MB perf.data (196873 samples) ] # # perf evlist -v cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CPU|PERIOD|BRANCH_STACK, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, branch_sample_type: ANY # # perf report --total-cycles --stdio # To display the perf.data header info, please use --header/--header-only options. # # Total Lost Samples: 0 # # Samples: 6M of event 'cycles' # Event count (approx.): 6299936 # # Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object # ............... .............. ........... .......... ...................................................................... .................... # 2.17% 1.7M 0.08% 607 [compiler.h:199 -> common.c:221] [kernel.vmlinux] 1.75% 1.3M 8.34% 65.5K [memset-vec-unaligned-erms.S:147 -> memset-vec-unaligned-erms.S:151] libc-2.29.so 0.72% 544.5K 0.03% 230 [entry_64.S:657 -> entry_64.S:662] [kernel.vmlinux] 0.56% 541.8K 0.09% 672 [compiler.h:199 -> common.c:300] [kernel.vmlinux] 0.39% 293.2K 0.01% 104 [list_debug.c:43 -> list_debug.c:61] [kernel.vmlinux] 0.36% 278.6K 0.03% 272 [entry_64.S:1289 -> entry_64.S:1308] [kernel.vmlinux] 0.30% 260.8K 0.07% 564 [clear_page_64.S:47 -> clear_page_64.S:50] [kernel.vmlinux] 0.28% 215.3K 0.05% 369 [traps.c:623 -> traps.c:628] [kernel.vmlinux] 0.23% 178.1K 0.04% 278 [entry_64.S:271 -> entry_64.S:275] [kernel.vmlinux] 0.20% 152.6K 0.09% 706 [paravirt.c:177 -> paravirt.c:179] [kernel.vmlinux] 0.20% 155.8K 0.05% 373 [entry_64.S:153 -> entry_64.S:175] [kernel.vmlinux] 0.18% 136.6K 0.03% 222 [msr.h:105 -> msr.h:166] [kernel.vmlinux] 0.16% 123.0K 0.60% 4.7K [nospec-branch.h:265 -> nospec-branch.h:278] [kernel.vmlinux] 0.16% 118.3K 0.01% 44 [entry_64.S:632 -> entry_64.S:657] [kernel.vmlinux] 0.14% 104.5K 0.00% 28 [rwsem.c:1541 -> rwsem.c:1544] [kernel.vmlinux] 0.13% 99.2K 0.01% 53 [spinlock.c:150 -> spinlock.c:152] [kernel.vmlinux] 0.13% 95.5K 0.00% 35 [swap.c:456 -> swap.c:471] [kernel.vmlinux] 0.12% 96.2K 0.05% 407 [copy_user_64.S:175 -> copy_user_64.S:209] [kernel.vmlinux] 0.11% 85.9K 0.00% 31 [swap.c:400 -> page-flags.h:188] [kernel.vmlinux] 0.10% 73.0K 0.01% 52 [paravirt.h:763 -> list.h:131] [kernel.vmlinux] 0.07% 56.2K 0.03% 214 [filemap.c:1524 -> filemap.c:1557] [kernel.vmlinux] 0.07% 54.2K 0.02% 145 [memory.c:1032 -> memory.c:1049] [kernel.vmlinux] 0.07% 50.3K 0.00% 39 [mmzone.c:49 -> mmzone.c:69] [kernel.vmlinux] 0.06% 48.3K 0.01% 40 [paravirt.h:768 -> page_alloc.c:3304] [kernel.vmlinux] 0.06% 46.7K 0.02% 155 [memory.c:1032 -> memory.c:1056] [kernel.vmlinux] 0.06% 46.9K 0.01% 103 [swap.c:867 -> swap.c:902] [kernel.vmlinux] 0.06% 47.8K 0.00% 34 [entry_64.S:1201 -> entry_64.S:1202] [kernel.vmlinux] ----------------------------------------------------------- v7: --- Use use_browser in report__browse_block_hists for supporting stdio and potential tui mode. v6: --- Create report__browse_block_hists in block-info.c (codes are moved from builtin-report.c). It's called from perf_evlist__tty_browse_hists. v5: --- 1. Move all block functions to block-info.c 2. Move the code of setting ms in block hist_entry to other patch. v4: --- 1. Use new option '--total-cycles' to replace '-s total_cycles' in v3. 2. Move block info collection out of block info printing. v3: --- 1. 
Use common function block_info__process_sym to process the blocks per symbol. 2. Remove the nasty hack for skipping calculation of column length 3. Some minor cleanup Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20191107074719.26139-6-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
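To make the sort key concrete, here is a minimal sketch of the ratio being sorted on. It is only an illustration under assumed types, not the perf implementation (the changelogs above mention a real helper along the lines of block_info__total_cycles_percent()):

```c
#include <stdio.h>

typedef unsigned long long u64;

/*
 * Minimal sketch of the sort key described above: a block's share of all
 * sampled cycles, i.e. block cycles / total sampled cycles as a percentage.
 */
static double block_cycles_percent(u64 block_cycles, u64 total_cycles)
{
	return total_cycles ? 100.0 * (double)block_cycles / (double)total_cycles
			    : 0.0;
}

int main(void)
{
	/* e.g. a block that accumulated 1.7M out of ~78.4M sampled cycles */
	printf("Sampled Cycles%%: %.2f\n",
	       block_cycles_percent(1700000ULL, 78400000ULL));
	return 0;
}
```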
||
Jin Yao
|
b65a7d372b |
perf hist: Support block formats with compare/sort/display
This patch provides helper routines to support new columns for block info output. The new columns are: Sampled Cycles% Sampled Cycles Avg Cycles% Avg Cycles [Program Block Range] Shared Object v5: --- 1. Move more block related functions from builtin-report.c to block-info.c 2. Set ms (map+sym) in block hist_entry. Because this info is needed for reporting the block range (i.e. source line) Committer notes: Remove unused set_fmt() function, some build were not completing with: util/block-info.c:396:20: error: unused function 'set_fmt' [-Werror,-Wunused-function] static inline void set_fmt(struct block_fmt *block_fmt, ^ 1 error generated. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20191107074719.26139-5-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Jin Yao
|
7841f40aed |
perf hist: Count the total cycles of all samples
We can get the per sample cycles by hist__account_cycles(). It's also useful to know the total cycles of all samples in order to get the cycles coverage for a single program block in further. For example: coverage = per block sampled cycles / total sampled cycles This patch creates a new argument 'total_cycles' in hist__account_cycles(), which will be added with the cycles of each sample. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20191107074719.26139-4-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
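As a rough illustration of the new argument (hypothetical types, not the actual hist__account_cycles() signature), the accounting boils down to adding each sample's cycles to its block and to a running total in the same pass:

```c
typedef unsigned long long u64;

/* Hypothetical per-block bucket, only to illustrate the accounting. */
struct block_stats {
	u64 cycles;
};

/*
 * Sketch: the sample's LBR cycles go into the block and into the running
 * total that later becomes the denominator of the coverage percentage.
 */
static void account_cycles(struct block_stats *block, u64 sample_cycles,
			   u64 *total_cycles)
{
	block->cycles += sample_cycles;
	if (total_cycles)
		*total_cycles += sample_cycles;
}
```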
||
Jin Yao
|
6041441870 |
perf block: Cleanup and refactor block info functions
We have already implemented some block-info related functions. Now it's time to do some cleanup, refactoring and move the functions and structures to new block-info.h/block-info.c. v4: --- Move code for skipping column length calculation to patch: 'perf diff: Don't use hack to skip column length calculation' v3: --- 1. Rename the patch title 2. Rename from block.h/block.c to block-info.h/block-info.c 3. Move more common part to block-info, such as block_info__process_sym. 4. Remove the nasty hack for skipping calculation of column length Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20191107074719.26139-3-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Jin Yao
|
0bdf181fe0 |
perf diff: Don't use hack to skip column length calculation
Previously we use a nasty hack to skip the hists__calc_col_len for block since this function is not very suitable for block column length calculation. This patch removes the hack code and add a check at the entry of hists__calc_col_len to skip for block case. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20191107074719.26139-2-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Leo Yan
|
af8490eb2b |
perf tests: Fix out of bounds memory access
The test case 'Read backward ring buffer' failed on 32-bit architectures,
as found by LKFT perf testing. The test failed on the arm32 x15
device, qemu_arm32 and qemu_i386, with intermittent failures on i386;
the failure log is as below:
50: Read backward ring buffer :
--- start ---
test child forked, pid 510
Using CPUID GenuineIntel-6-9E-9
mmap size 1052672B
mmap size 8192B
Finished reading overwrite ring buffer: rewind
free(): invalid next size (fast)
test child interrupted
---- end ----
Read backward ring buffer: FAILED!
The log hints at a memory usage problem, which is why free() reports the
error 'invalid next size' and the test exits directly. Finally, this
issue was root caused to an out of bounds memory access on the data array
'evsel->id'.
The backward ring buffer test invokes do_test() twice. 'evsel->id' is
allocated at the first call with the flow:
test__backward_ring_buffer()
`-> do_test()
`-> evlist__mmap()
`-> evlist__mmap_ex()
`-> perf_evsel__alloc_id()
So 'evsel->id' is allocated with one item, and it will be used in
function perf_evlist__id_add():
evsel->id[0] = id
evsel->ids = 1
At the second call to do_test(), it skips initializing 'evsel->id'
and reuses the array allocated in the first call. But
'evsel->ids' still contains the stale value. Thus:
evsel->id[1] = id -> out of bounds access
evsel->ids = 2
To fix this issue, use the evlist__open() and evlist__close() pair of
functions to prepare and clean up the evlist context, so that 'evsel->id'
and 'evsel->ids' are initialized properly on each invocation of do_test(),
avoiding the out of bounds memory access.
Fixes:
|
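The failure mode is easier to see in isolation. The sketch below uses hypothetical types (it is not the perf test code) to show how a stale element count turns a reused one-slot array into an out of bounds write, and why resetting the state between runs fixes it:

```c
#include <stdlib.h>

typedef unsigned long long u64;

/* Hypothetical stand-in for the evsel id bookkeeping. */
struct evsel_ids {
	u64 *id;	/* allocated with room for exactly one id */
	int ids;	/* how many ids are stored */
};

/* Mirrors the pattern described above: no bounds check, just append. */
static void ids__add(struct evsel_ids *e, u64 id)
{
	e->id[e->ids++] = id;	/* second run: e->ids is stale, writes id[1] */
}

/* What re-opening the evlist between runs effectively guarantees. */
static int ids__reset(struct evsel_ids *e)
{
	free(e->id);
	e->id = calloc(1, sizeof(*e->id));
	e->ids = 0;
	return e->id ? 0 : -1;
}
```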
||
Jiwei Sun
|
6d57581659 |
perf record: Add support for limit perf output file size
The patch adds a new option to limit the output file size, then based on it, we can create a wrapper of the perf command that uses the option to avoid exhausting the disk space by the unconscious user. In order to make the perf.data parsable, we just limit the sample data size, since the perf.data consists of many headers and sample data and other data, the actual size of the recorded file will bigger than the setting value. Testing it: # ./perf record -a -g --max-size=10M Couldn't synthesize bpf events. [ perf record: perf size limit reached (10249 KB), stopping session ] [ perf record: Woken up 32 times to write data ] [ perf record: Captured and wrote 10.133 MB perf.data (71964 samples) ] # ls -lh perf.data -rw------- 1 root root 11M Oct 22 14:32 perf.data # ./perf record -a -g --max-size=10K [ perf record: perf size limit reached (10 KB), stopping session ] Couldn't synthesize bpf events. [ perf record: Woken up 0 times to write data ] [ perf record: Captured and wrote 1.546 MB perf.data (69 samples) ] # ls -l perf.data -rw------- 1 root root 1626952 Oct 22 14:36 perf.data Committer notes: Fixed the build in multiple distros by using PRIu64 to print u64 struct members, fixing this: builtin-record.c: In function 'record__write': builtin-record.c:150:5: error: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'u64' [-Werror=format=] rec->bytes_written >> 10); ^ CC /tmp/build/pe Signed-off-by: Jiwei Sun <jiwei.sun@windriver.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Richard Danter <richard.danter@windriver.com> Link: http://lore.kernel.org/lkml/20191022080901.3841-1-jiwei.sun@windriver.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
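A sketch of the size cap described above, with hypothetical fields (not the actual builtin-record code): only written sample bytes are counted, which is why the final file ends up somewhat larger than the limit once headers and other metadata are added.

```c
#include <stdbool.h>

typedef unsigned long long u64;

struct record_state {
	u64 bytes_written;	/* sample data written so far */
	u64 max_size;		/* --max-size in bytes, 0 = unlimited */
};

static bool output_max_size_exceeded(const struct record_state *rec)
{
	return rec->max_size && rec->bytes_written >= rec->max_size;
}

/* Called after every write; the caller stops the session when it returns true. */
static bool record_account_write(struct record_state *rec, u64 size)
{
	rec->bytes_written += size;
	return output_max_size_exceeded(rec);
}
```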
||
Masami Hiramatsu
|
dee36a2abb |
perf probe: Skip overlapped location on searching variables
Since the debuginfo__find_probes() callback function can be called with a location which was already passed, the callback function must filter out such overlapped locations. add_probe_trace_event() has already done it by commit |
||
Masami Hiramatsu
|
86c0bf8539 |
perf probe: Fix to show calling lines of inlined functions
Fix to show calling lines of inlined functions (where an inline function
is called).
die_walk_lines() filtered out the lines inside inlined functions based
on the address. However, this also filtered out the lines which call
those inlined functions from the target function.
To solve this issue, check the call_file and call_line attributes and do
not filter out a line if they match its line information.
Without this fix, perf probe -L doesn't show some lines correctly
(the lines from 17 onwards are not shown):
# perf probe -L vfs_read
<vfs_read@/home/mhiramat/ksrc/linux/fs/read_write.c:0>
0 ssize_t vfs_read(struct file *file, char __user *buf, size_t count, loff_t *pos)
1 {
2 ssize_t ret;
4 if (!(file->f_mode & FMODE_READ))
return -EBADF;
6 if (!(file->f_mode & FMODE_CAN_READ))
return -EINVAL;
8 if (unlikely(!access_ok(buf, count)))
return -EFAULT;
11 ret = rw_verify_area(READ, file, pos, count);
12 if (!ret) {
13 if (count > MAX_RW_COUNT)
count = MAX_RW_COUNT;
15 ret = __vfs_read(file, buf, count, pos);
16 if (ret > 0) {
fsnotify_access(file);
add_rchar(current, ret);
}
With this fix:
# perf probe -L vfs_read
<vfs_read@/home/mhiramat/ksrc/linux/fs/read_write.c:0>
0 ssize_t vfs_read(struct file *file, char __user *buf, size_t count, loff_t *pos)
1 {
2 ssize_t ret;
4 if (!(file->f_mode & FMODE_READ))
return -EBADF;
6 if (!(file->f_mode & FMODE_CAN_READ))
return -EINVAL;
8 if (unlikely(!access_ok(buf, count)))
return -EFAULT;
11 ret = rw_verify_area(READ, file, pos, count);
12 if (!ret) {
13 if (count > MAX_RW_COUNT)
count = MAX_RW_COUNT;
15 ret = __vfs_read(file, buf, count, pos);
16 if (ret > 0) {
17 fsnotify_access(file);
18 add_rchar(current, ret);
}
20 inc_syscr(current);
}
Fixes:
|
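A minimal libdw-based sketch of the extra check (an illustration, not the perf code; only the call_line part is shown): a line that falls inside an inlined instance is kept when it matches that instance's DW_AT_call_line, because it is then the call site line in the caller.

```c
#include <dwarf.h>
#include <elfutils/libdw.h>
#include <stdbool.h>

/*
 * Sketch only: decide whether 'lineno' is the call site line of the given
 * inlined instance by comparing it with the instance's DW_AT_call_line.
 */
static bool die_line_is_callsite(Dwarf_Die *inst_die, int lineno)
{
	Dwarf_Attribute attr;
	Dwarf_Word call_line;

	if (dwarf_tag(inst_die) != DW_TAG_inlined_subroutine)
		return false;
	if (!dwarf_attr(inst_die, DW_AT_call_line, &attr))
		return false;
	if (dwarf_formudata(&attr, &call_line) != 0)
		return false;

	return (int)call_line == lineno;	/* keep it: call site of the inlinee */
}
```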
||
Masami Hiramatsu
|
da6cb952a8 |
perf probe: Filter out instances except for inlined subroutine and subprogram
Filter out instances except for inlined_subroutine and subprogram DIE in
die_walk_instances() and die_is_func_instance().
This fixes an issue where perf probe sets some probes on the calling address
instead of on the target function itself.
When perf probe walks the instances of an abstract origin (a kind of
function prototype of an inlined function), die_walk_instances() can also
pass a GNU_call_site (a GNU extension for call sites) to the callback. Since
it is not an inlined instance of the target function, we have to filter it
out when searching for a probe point.
Without this patch, perf probe sets probes on call site addresses too. This
can happen on some functions which are marked "inlined", but have an actual
symbol. (I'm not sure why GCC marks them "inlined"):
# perf probe -D vfs_read
p:probe/vfs_read _text+2500017
p:probe/vfs_read_1 _text+2499468
p:probe/vfs_read_2 _text+2499563
p:probe/vfs_read_3 _text+2498876
p:probe/vfs_read_4 _text+2498512
p:probe/vfs_read_5 _text+2498627
With this patch:
Slightly different results, similar though:
# perf probe -D vfs_read
p:probe/vfs_read _text+2498512
Committer testing:
# uname -a
Linux quaco 5.3.8-200.fc30.x86_64 #1 SMP Tue Oct 29 14:46:22 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Before:
# perf probe -D vfs_read
p:probe/vfs_read _text+3131557
p:probe/vfs_read_1 _text+3130975
p:probe/vfs_read_2 _text+3131047
p:probe/vfs_read_3 _text+3130380
p:probe/vfs_read_4 _text+3130000
# uname -a
Linux quaco 5.3.8-200.fc30.x86_64 #1 SMP Tue Oct 29 14:46:22 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
#
After:
# perf probe -D vfs_read
p:probe/vfs_read _text+3130000
#
Fixes:
|
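The filtering idea itself is small; a hedged sketch (not the exact die_walk_instances() change) is:

```c
#include <dwarf.h>
#include <elfutils/libdw.h>
#include <stdbool.h>

/*
 * Sketch: only concrete function instances count. A DW_TAG_GNU_call_site
 * DIE that merely references the abstract origin is rejected, so probes
 * are not placed on the caller's call site address.
 */
static bool die_is_function_instance(Dwarf_Die *die)
{
	int tag = dwarf_tag(die);

	return tag == DW_TAG_subprogram || tag == DW_TAG_inlined_subroutine;
}
```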
||
Masami Hiramatsu
|
f4d99bdfd1 |
perf probe: Skip end-of-sequence and non statement lines
Skip end-of-sequence and non-statement lines while walking through the
lines list.
The "end-of-sequence" line information means:
"the current address is that of the first byte after the
end of a sequence of target machine instructions."
(DWARF version 4 spec 6.2.2)
This actually means out of scope and we can not probe on it.
On the other hand, the statement lines (is_stmt) means:
"the current instruction is a recommended breakpoint location.
A recommended breakpoint location is intended to “represent”
a line, a statement and/or a semantically distinct subpart
of a statement."
(DWARF version 4 spec 6.2.2)
So, non-statement line info also should be skipped.
These can reduce unneeded probe points and also avoid an error.
E.g. without this patch:
# perf probe -a "clear_tasks_mm_cpumask:1"
Added new events:
probe:clear_tasks_mm_cpumask (on clear_tasks_mm_cpumask:1)
probe:clear_tasks_mm_cpumask_1 (on clear_tasks_mm_cpumask:1)
probe:clear_tasks_mm_cpumask_2 (on clear_tasks_mm_cpumask:1)
probe:clear_tasks_mm_cpumask_3 (on clear_tasks_mm_cpumask:1)
probe:clear_tasks_mm_cpumask_4 (on clear_tasks_mm_cpumask:1)
You can now use it in all perf tools, such as:
perf record -e probe:clear_tasks_mm_cpumask_4 -aR sleep 1
#
This puts 5 probes on one line, but actually it's not an inlined function.
This is because there are many non-statement instructions at the
function prologue.
With this patch:
# perf probe -a "clear_tasks_mm_cpumask:1"
Added new event:
probe:clear_tasks_mm_cpumask (on clear_tasks_mm_cpumask:1)
You can now use it in all perf tools, such as:
perf record -e probe:clear_tasks_mm_cpumask -aR sleep 1
#
Now perf-probe skips unneeded addresses.
Committer testing:
Slightly different results, but similar:
Before:
# uname -a
Linux quaco 5.3.8-200.fc30.x86_64 #1 SMP Tue Oct 29 14:46:22 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
#
# perf probe -a "clear_tasks_mm_cpumask:1"
Added new events:
probe:clear_tasks_mm_cpumask (on clear_tasks_mm_cpumask:1)
probe:clear_tasks_mm_cpumask_1 (on clear_tasks_mm_cpumask:1)
probe:clear_tasks_mm_cpumask_2 (on clear_tasks_mm_cpumask:1)
You can now use it in all perf tools, such as:
perf record -e probe:clear_tasks_mm_cpumask_2 -aR sleep 1
#
After:
# perf probe -a "clear_tasks_mm_cpumask:1"
Added new event:
probe:clear_tasks_mm_cpumask (on clear_tasks_mm_cpumask:1)
You can now use it in all perf tools, such as:
perf record -e probe:clear_tasks_mm_cpumask -aR sleep 1
# perf probe -l
probe:clear_tasks_mm_cpumask (on clear_tasks_mm_cpumask@kernel/cpu.c)
#
Fixes:
|
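In libdw terms the check looks roughly like the sketch below (an illustration, not the perf patch itself):

```c
#include <elfutils/libdw.h>
#include <stdbool.h>

/*
 * A line is only a probe candidate if it is a statement line (is_stmt)
 * and not an end-of-sequence marker, per DWARF 4, section 6.2.2.
 */
static bool line_is_probeable(Dwarf_Line *line)
{
	bool end_seq = false, is_stmt = false;

	if (dwarf_lineendsequence(line, &end_seq) != 0 ||
	    dwarf_linebeginstatement(line, &is_stmt) != 0)
		return false;	/* malformed line info: skip it */

	return !end_seq && is_stmt;
}
```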
||
Masami Hiramatsu
|
c701636aee |
perf probe: Return a better scope DIE if there is no best scope
Make find_best_scope() returns innermost DIE at given address if there is no best matched scope DIE. Since Gcc sometimes generates intuitively strange line info which is out of inlined function address range, we need this fixup. Without this, sometimes perf probe failed to probe on a line inside an inlined function: # perf probe -D ksys_open:3 Failed to find scope of probe point. Error: Failed to add events. With this fix, 'perf probe' can probe it: # perf probe -D ksys_open:3 p:probe/ksys_open _text+25707308 p:probe/ksys_open_1 _text+25710596 p:probe/ksys_open_2 _text+25711114 p:probe/ksys_open_3 _text+25711343 p:probe/ksys_open_4 _text+25714058 p:probe/ksys_open_5 _text+2819653 p:probe/ksys_open_6 _text+2819701 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157291300887.19771.14936015360963292236.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Ian Rogers
|
5c65b1c084 |
perf annotate: Fix heap overflow
Fix expand_tabs that copies the source line's '\0' and then appends another '\0' at a potentially out of bounds address. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Link: http://lore.kernel.org/lkml/20191026035644.217548-1-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
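A simplified sketch of the safe pattern (fixed-width padding instead of real tab stops, and not the actual expand_tabs() code): size the destination for the expanded text plus exactly one terminating NUL and write it once, inside the allocation.

```c
#include <stdlib.h>

/* Illustration only: expand each tab to 'tab_width' spaces. */
static char *expand_tabs_sketch(const char *src, int tab_width)
{
	size_t len = 0, i;
	char *dst, *p;

	for (i = 0; src[i]; i++)
		len += (src[i] == '\t') ? (size_t)tab_width : 1;

	dst = malloc(len + 1);		/* room for exactly one trailing '\0' */
	if (!dst)
		return NULL;

	for (p = dst, i = 0; src[i]; i++) {
		if (src[i] == '\t') {
			for (int n = 0; n < tab_width; n++)
				*p++ = ' ';
		} else {
			*p++ = src[i];
		}
	}
	*p = '\0';			/* written once, never past dst + len */

	return dst;
}
```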
||
Arnaldo Carvalho de Melo
|
93730f85eb |
perf machine: Add kernel_dso() method
To reduce boilerplate in some places. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-9s1bgoxxhlnu037e1nqx0tw3@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
b0c76fc4cf |
perf symbols: Remove needless checks for map->groups->machine
It's sufficient to check if map->groups is NULL before using it to get the ->machine value. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-utiepyiv8b1tf8f79ok9d6j8@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Ian Rogers
|
1dc925568f |
perf parse: Add a deep delete for parse event terms
Add a parse_events_term deep delete function so that owned strings and arrays are freed. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Cc: Yonghong Song <yhs@fb.com> Cc: bpf@vger.kernel.org Cc: clang-built-linux@googlegroups.com Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20191030223448.12930-10-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
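The shape of a deep delete, using a hypothetical term layout (the real struct parse_events_term differs), is roughly:

```c
#include <stdlib.h>

/* Hypothetical term layout, only to illustrate what "deep" means here. */
struct term {
	struct term *next;
	char *config;		/* owned string */
	char *val_str;		/* owned string, NULL for numeric terms */
};

/* Free every term in the list together with the strings it owns. */
static void terms__deep_delete(struct term *terms)
{
	while (terms) {
		struct term *next = terms->next;

		free(terms->config);
		free(terms->val_str);
		free(terms);
		terms = next;
	}
}
```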
||
Ian Rogers
|
38f2c4226e |
perf parse: If pmu configuration fails free terms
Avoid a memory leak when the configuration fails. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Cc: Yonghong Song <yhs@fb.com> Cc: bpf@vger.kernel.org Cc: clang-built-linux@googlegroups.com Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20191030223448.12930-9-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Ian Rogers
|
cabbf26821 |
perf parse: Before yyabort-ing free components
Yyabort doesn't destruct inputs and so this must be done manually before using yyabort. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Cc: Yonghong Song <yhs@fb.com> Cc: bpf@vger.kernel.org Cc: clang-built-linux@googlegroups.com Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20191030223448.12930-8-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Ian Rogers
|
f2a8ecd8b1 |
perf parse: Add destructors for parse event terms
If parsing fails then destructors are ran to clean the up the stack. Rename the head union member to make the term and evlist use cases more distinct, this simplifies matching the correct destructor. Committer notes: Jiri: "Nice did not know about this.. looks like it's been in bison for some time, right?" Ian: "Looks like it wasn't in Bison 1 but in Bison 2, we're at Bison 3 and Bison 2 is > 14 years old: https://web.archive.org/web/20050924004158/http://www.gnu.org/software/bison/manual/html_mono/bison.html#Destructor-Decl" Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Cc: Yonghong Song <yhs@fb.com> Cc: bpf@vger.kernel.org Cc: clang-built-linux@googlegroups.com Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20191030223448.12930-7-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Ian Rogers
|
b6645a7235 |
perf parse: Ensure config and str in terms are unique
Make it easier to release memory associated with parse event terms by duplicating the string for the config name and ensuring the val string is a duplicate. Currently the parser may memory leak terms and this is addressed in a later patch. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Cc: Yonghong Song <yhs@fb.com> Cc: bpf@vger.kernel.org Cc: clang-built-linux@googlegroups.com Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20191030223448.12930-6-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Ian Rogers
|
448d732cef |
perf parse: Add parse events handle error
Parse event error handling may overwrite one error string with another creating memory leaks. Introduce a helper routine that warns about multiple error messages as well as avoiding the memory leak. A reproduction of this problem can be seen with: perf stat -e c/c/ After this change this produces: WARNING: multiple event parsing errors event syntax error: 'c/c/' \___ unknown term valid terms: event,filter_rem,filter_opc0,edge,filter_isoc,filter_tid,filter_loc,filter_nc,inv,umask,filter_opc1,tid_en,thresh,filter_all_op,filter_not_nm,filter_state,filter_nm,config,config1,config2,name,period,percore Run 'perf list' for a list of valid events Usage: perf stat [<options>] [<command>] -e, --event <event> event selector. use 'perf list' to list available events Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Cc: Yonghong Song <yhs@fb.com> Cc: bpf@vger.kernel.org Cc: clang-built-linux@googlegroups.com Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20191030223448.12930-2-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
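A sketch of such a helper, with a hypothetical error holder (not the perf API): warn when a message is already stored and free it before taking the new one, so nothing is leaked.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical error holder, for illustration only. */
struct parse_error {
	int idx;	/* position of the error in the event string */
	char *str;	/* owned error message */
};

/*
 * Record an error; if one is already stored, warn about it and free the
 * old string instead of silently overwriting (and leaking) it.
 */
static void parse_error__handle(struct parse_error *err, int idx, char *msg)
{
	if (err->str) {
		fprintf(stderr, "WARNING: multiple event parsing errors\n");
		free(err->str);
	}
	err->idx = idx;
	err->str = msg;		/* takes ownership of msg */
}
```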
||
Adrian Hunter
|
ef5502a1d9 |
perf inject: Make --strip keep evsels
create_gcov (refer to the autofdo example in tools/perf/Documentation/intel-pt.txt) now needs the evsels to read the perf.data file. So don't strip them. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: http://lore.kernel.org/lkml/20191105100057.21465-1-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
John Garry
|
71f699078b |
perf tools: Fix cross compile for ARM64
Currently when cross compiling perf tool for ARM64 on my x86 machine I get this error: arch/arm64/util/sym-handling.c:9:10: fatal error: gelf.h: No such file or directory #include <gelf.h> For the build, libelf is reported off: Auto-detecting system features: ... ... libelf: [ OFF ] Indeed, test-libelf is not built successfully: more ./build/feature/test-libelf.make.output test-libelf.c:2:10: fatal error: libelf.h: No such file or directory #include <libelf.h> ^~~~~~~~~~ compilation terminated. I have no such problems natively compiling on ARM64, and I did not previously have this issue for cross compiling. Fix by relocating the gelf.h include. Signed-off-by: John Garry <john.garry@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Link: http://lore.kernel.org/lkml/1573045254-39833-1-git-send-email-john.garry@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Jiri Olsa
|
86895b480a |
perf stat: Add --per-node aggregation support
Adding new --per-node option to aggregate counts per NUMA nodes for system-wide mode measurements. You can specify --per-node in live mode: # perf stat -a -I 1000 -e cycles --per-node # time node cpus counts unit events 1.000542550 N0 20 6,202,097 cycles 1.000542550 N1 20 639,559 cycles 2.002040063 N0 20 7,412,495 cycles 2.002040063 N1 20 2,185,577 cycles 3.003451699 N0 20 6,508,917 cycles 3.003451699 N1 20 765,607 cycles ... Or in the record/report stat session: # perf stat record -a -I 1000 -e cycles # time counts unit events 1.000536937 10,008,468 cycles 2.002090152 9,578,539 cycles 3.003625233 7,647,869 cycles 4.005135036 7,032,086 cycles ^C 4.340902364 3,923,893 cycles # perf stat report --per-node # time node cpus counts unit events 1.000536937 N0 20 9,355,086 cycles 1.000536937 N1 20 653,382 cycles 2.002090152 N0 20 7,712,838 cycles 2.002090152 N1 20 1,865,701 cycles 3.003625233 N0 20 6,604,441 cycles 3.003625233 N1 20 1,043,428 cycles 4.005135036 N0 20 6,350,522 cycles 4.005135036 N1 20 681,564 cycles 4.340902364 N0 20 3,403,188 cycles 4.340902364 N1 20 520,705 cycles Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Joe Mario <jmario@redhat.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20190904073415.723-4-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Jiri Olsa
|
389799a7a1 |
perf env: Add perf_env__numa_node()
To speed up cpu to node lookups, add perf_env__numa_node(), which creates a cpu array on the first lookup that holds the numa node for each stored cpu. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Joe Mario <jmario@redhat.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20190904073415.723-3-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
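A sketch of the lazy lookup table, with a hypothetical env layout and a stubbed topology source (not the perf_env API):

```c
#include <stdlib.h>

/* Hypothetical: in perf this information comes from the recorded topology. */
static int node_of_cpu(int cpu)
{
	return cpu < 20 ? 0 : 1;	/* stub: first 20 cpus on node 0 */
}

struct env {
	int nr_cpus;
	int *numa_node;		/* built lazily: numa_node[cpu] -> node */
};

static int env__numa_node(struct env *env, int cpu)
{
	if (!env->numa_node) {
		env->numa_node = malloc(env->nr_cpus * sizeof(int));
		if (!env->numa_node)
			return -1;
		for (int c = 0; c < env->nr_cpus; c++)
			env->numa_node[c] = node_of_cpu(c);
	}

	return (cpu >= 0 && cpu < env->nr_cpus) ? env->numa_node[cpu] : -1;
}
```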
||
Haiyan Song
|
61ec07f591 |
perf vendor events intel: Update all the Intel JSON metrics from TMAM 3.6.
New Metrics: - DSB_Switches: fraction of cycles CPU was stalled due to switches from DSB to MITE pipeline [all] - L2_Evictions_{Silent|NonSilent}_PKI: L2 {silent|non silent} evictions rate per Kilo instruction [SKX+] - IpFarBranch - Instructions per Far Branch Other Enhancements & fixes: - KBLR/CFL & CLX move to separate columns (no column sharing via if #model) - Re-organized/renamed Metric Group Signed-off-by: Haiyan Song <haiyanx.song@intel.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Link: http://lore.kernel.org/lkml/20191030082308.10919-1-haiyanx.song@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Haiyan Song
|
7fcf1b89c8 |
perf vendor events intel: Update CascadelakeX events to v1.05
Update CascadelakeX events to v1.05. Other changes: remove duplicated events and events without a description. Signed-off-by: Haiyan Song <haiyanx.song@intel.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Link: http://lore.kernel.org/lkml/20191030082308.10919-1-haiyanx.song@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Ian Rogers
|
8e8714c3d1 |
perf tools: Splice events onto evlist even on error
If event parsing fails the event list is leaked, instead splice the list onto the out result and let the caller cleanup. An example input for parse_events found by libFuzzer that reproduces this memory leak is 'm{'. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Cc: Yonghong Song <yhs@fb.com> Cc: bpf@vger.kernel.org Cc: clang-built-linux@googlegroups.com Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20191025180827.191916-5-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
50481461cf |
perf map_groups: Introduce for_each_entry() and for_each_entry_safe() iterators
To reduce boilerplate, provide a more compact form to iterate over the maps in a map_group. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-gc3go6fmdn30twusg91t2q56@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
8efc4f0568 |
perf maps: Add for_each_entry()/_safe() iterators
To reduce boilerplate, provide a more compact form using an idiom present in other trees of data structures. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-59gmq4kg1r68ou1wknyjl78x@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Arnaldo Carvalho de Melo
|
20419d3a5b |
perf map: Allow map__next() to receive a NULL arg
Just like free(), return NULL in that case, will simplify the for_each_entry_safe() iterators. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-pbde2ucn49khnrebclys9pny@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
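Taken together, the three patches above boil down to the idiom sketched below with toy types (not the perf structures): a NULL-tolerant map__next() is what keeps the _safe variant simple.

```c
#include <stdio.h>

/* Toy stand-ins for perf's map/maps types, only to show the iterator idiom. */
struct map { int start; struct map *next; };
struct maps { struct map *first; };

static struct map *maps__first(struct maps *maps) { return maps->first; }
/* Accepting NULL here is what makes the _safe iterator below simple. */
static struct map *map__next(struct map *map) { return map ? map->next : NULL; }

#define maps__for_each_entry(maps, map)					\
	for (map = maps__first(maps); map; map = map__next(map))

#define maps__for_each_entry_safe(maps, map, next)			\
	for (map = maps__first(maps), next = map__next(map);		\
	     map; map = next, next = map__next(next))

int main(void)
{
	struct map c = { 30, NULL }, b = { 20, &c }, a = { 10, &b };
	struct maps maps = { &a };
	struct map *map, *next;

	maps__for_each_entry(&maps, map)
		printf("map starting at %d\n", map->start);

	maps__for_each_entry_safe(&maps, map, next)
		; /* safe to unlink 'map' here, 'next' is already saved */

	return 0;
}
```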
||
Arnaldo Carvalho de Melo
|
ee2555b612 |
perf map: Check if the map still has some refcounts on exit
We were checking just if it was still on some rb tree, but that is not the only way this map can still have references; map->refcnt is there exactly for this, so use it. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-hany65tbeavsax7n3xvwl9pc@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Adrian Hunter
|
b86a9d918a |
perf dso: Add dso__data_write_cache_addr()
Add functions to write into the dso file data cache, but not change the file itself. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: x86@kernel.org Link: http://lore.kernel.org/lkml/20191025130000.13032-4-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Adrian Hunter
|
366df72657 |
perf dso: Refactor dso_cache__read()
Refactor dso_cache__read() to separate populating the cache from copying data from it. This is preparation for adding a cache "write" that will update the data in the cache. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: x86@kernel.org Link: http://lore.kernel.org/lkml/20191025130000.13032-3-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Adrian Hunter
|
fd62c1097a |
perf auxtrace: Add auxtrace_cache__remove()
Add auxtrace_cache__remove(). Intel PT uses an auxtrace_cache to store the results of code-walking, so that the same block of instructions does not have to be decoded repeatedly. However, when there are text poke events, the associated cache entries need to be removed. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: x86@kernel.org Link: http://lore.kernel.org/lkml/20191025130000.13032-6-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
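A sketch of what removal from such a cache looks like, using a hypothetical open-hash layout (not the auxtrace_cache implementation):

```c
#include <stdlib.h>

/* Hypothetical open-hash cache entry, keyed by code address. */
struct entry {
	struct entry *next;
	unsigned long long key;
};

struct cache {
	struct entry **buckets;
	unsigned int nbuckets;
};

/*
 * Sketch of a cache remove: when a text poke invalidates decoded
 * instructions at 'key', drop the corresponding entry so the next lookup
 * re-decodes the (modified) code.
 */
static void cache__remove(struct cache *c, unsigned long long key)
{
	struct entry **pp = &c->buckets[key % c->nbuckets];

	while (*pp) {
		if ((*pp)->key == key) {
			struct entry *victim = *pp;

			*pp = victim->next;	/* unlink */
			free(victim);
			return;
		}
		pp = &(*pp)->next;
	}
}
```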
||
Masami Hiramatsu
|
af04dd2f8e |
perf probe: Fix to show ranges of variables in functions without entry_pc
Fix to show ranges of variables (--range and --vars option) in functions
whose DIE has only ranges but no entry_pc attribute.
Without this fix:
# perf probe --range -V clear_tasks_mm_cpumask
Available variables at clear_tasks_mm_cpumask
@<clear_tasks_mm_cpumask+0>
(No matched variables)
With this fix:
# perf probe --range -V clear_tasks_mm_cpumask
Available variables at clear_tasks_mm_cpumask
@<clear_tasks_mm_cpumask+0>
[VAL] int cpu @<clear_tasks_mm_cpumask+[0-35,317-317,2052-2059]>
Committer testing:
Before:
[root@quaco ~]# perf probe --range -V clear_tasks_mm_cpumask
Available variables at clear_tasks_mm_cpumask
@<clear_tasks_mm_cpumask+0>
(No matched variables)
[root@quaco ~]#
After:
[root@quaco ~]# perf probe --range -V clear_tasks_mm_cpumask
Available variables at clear_tasks_mm_cpumask
@<clear_tasks_mm_cpumask+0>
[VAL] int cpu @<clear_tasks_mm_cpumask+[0-23,23-105,105-106,106-106,1843-1850,1850-1862]>
[root@quaco ~]#
Using it:
[root@quaco ~]# perf probe clear_tasks_mm_cpumask cpu
Added new event:
probe:clear_tasks_mm_cpumask (on clear_tasks_mm_cpumask with cpu)
You can now use it in all perf tools, such as:
perf record -e probe:clear_tasks_mm_cpumask -aR sleep 1
[root@quaco ~]# perf probe -l
probe:clear_tasks_mm_cpumask (on clear_tasks_mm_cpumask@kernel/cpu.c with cpu)
[root@quaco ~]#
[root@quaco ~]# perf trace -e probe:*cpumask
^C[root@quaco ~]#
Fixes:
|
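For reference, a hedged libdw fragment (not perf's code; it assumes a variable DIE obtained elsewhere) showing where those "+[start-end,...]" ranges come from: the location list attached to DW_AT_location.
    #include <dwarf.h>
    #include <elfutils/libdw.h>
    #include <stdio.h>

    /* Print the address ranges over which a variable's location is valid.
     * perf additionally subtracts the function entry address to produce the
     * "+[start-end,...]" offsets shown above. */
    static void print_var_ranges(Dwarf_Die *var_die)
    {
        Dwarf_Attribute attr;
        Dwarf_Addr base, start, end;
        Dwarf_Op *expr;
        size_t exprlen;
        ptrdiff_t off = 0;

        if (!dwarf_attr(var_die, DW_AT_location, &attr))
            return;

        while ((off = dwarf_getlocations(&attr, off, &base, &start, &end,
                                         &expr, &exprlen)) > 0)
            printf("valid in [%#llx, %#llx)\n",
                   (unsigned long long)start, (unsigned long long)end);
    }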
||
Masami Hiramatsu
|
18e21eb671 |
perf probe: Fix to show inlined function callsite without entry_pc
Fix 'perf probe --line' option to show inlined function callsite lines
even if the function DIE has only ranges.
Without this:
# perf probe -L amd_put_event_constraints
...
2 {
3 if (amd_has_nb(cpuc) && amd_is_nb_event(&event->hw))
__amd_put_nb_event_constraints(cpuc, event);
5 }
With this patch:
# perf probe -L amd_put_event_constraints
...
2 {
3 if (amd_has_nb(cpuc) && amd_is_nb_event(&event->hw))
4 __amd_put_nb_event_constraints(cpuc, event);
5 }
Committer testing:
Before:
[root@quaco ~]# perf probe -L amd_put_event_constraints
<amd_put_event_constraints@/usr/src/debug/kernel-5.2.fc30/linux-5.2.18-200.fc30.x86_64/arch/x86/events/amd/core.c:0>
0 static void amd_put_event_constraints(struct cpu_hw_events *cpuc,
struct perf_event *event)
2 {
3 if (amd_has_nb(cpuc) && amd_is_nb_event(&event->hw))
__amd_put_nb_event_constraints(cpuc, event);
5 }
PMU_FORMAT_ATTR(event, "config:0-7,32-35");
PMU_FORMAT_ATTR(umask, "config:8-15" );
[root@quaco ~]#
After:
[root@quaco ~]# perf probe -L amd_put_event_constraints
<amd_put_event_constraints@/usr/src/debug/kernel-5.2.fc30/linux-5.2.18-200.fc30.x86_64/arch/x86/events/amd/core.c:0>
0 static void amd_put_event_constraints(struct cpu_hw_events *cpuc,
struct perf_event *event)
2 {
3 if (amd_has_nb(cpuc) && amd_is_nb_event(&event->hw))
4 __amd_put_nb_event_constraints(cpuc, event);
5 }
PMU_FORMAT_ATTR(event, "config:0-7,32-35");
PMU_FORMAT_ATTR(umask, "config:8-15" );
[root@quaco ~]# perf probe amd_put_event_constraints:4
Added new event:
probe:amd_put_event_constraints (on amd_put_event_constraints:4)
You can now use it in all perf tools, such as:
perf record -e probe:amd_put_event_constraints -aR sleep 1
[root@quaco ~]#
[root@quaco ~]# perf probe -l
probe:amd_put_event_constraints (on amd_put_event_constraints:4@arch/x86/events/amd/core.c)
probe:clear_tasks_mm_cpumask (on clear_tasks_mm_cpumask@kernel/cpu.c)
[root@quaco ~]#
Using it:
[root@quaco ~]# perf trace -e probe:*
^C[root@quaco ~]#
Ok, Intel system here... :-)
Fixes:
|
||
Masami Hiramatsu
|
3895534dd7 |
perf probe: Fix to list probe event with correct line number
Since debuginfo__find_probe_point() uses dwarf_entrypc() for finding the
entry address of the function on which a probe is set, it will fail when
the function DIE has only a ranges attribute.
To fix this issue, use die_entrypc() instead of dwarf_entrypc().
Without this fix, perf probe -l shows incorrect offset:
# perf probe -l
probe:clear_tasks_mm_cpumask (on clear_tasks_mm_cpumask+18446744071579263632@work/linux/linux/kernel/cpu.c)
probe:clear_tasks_mm_cpumask_1 (on clear_tasks_mm_cpumask+18446744071579263752@work/linux/linux/kernel/cpu.c)
With this:
# perf probe -l
probe:clear_tasks_mm_cpumask (on clear_tasks_mm_cpumask@work/linux/linux/kernel/cpu.c)
probe:clear_tasks_mm_cpumask_1 (on clear_tasks_mm_cpumask:21@work/linux/linux/kernel/cpu.c)
Committer testing:
Before:
[root@quaco ~]# perf probe -l
probe:clear_tasks_mm_cpumask (on clear_tasks_mm_cpumask+18446744071579765152@kernel/cpu.c)
[root@quaco ~]#
After:
[root@quaco ~]# perf probe -l
probe:clear_tasks_mm_cpumask (on clear_tasks_mm_cpumask@kernel/cpu.c)
[root@quaco ~]#
Fixes:
|
||
Masami Hiramatsu
|
eb6933b29d |
perf probe: Fix to probe an inline function which has no entry pc
Fix perf probe to probe an inline function which has no entry pc
or low pc but only has a ranges attribute.
This seems to be a very rare case, but I could find a few examples. As
with probe_point_search_cb(), use die_entrypc() to get the
entry address in probe_point_inline_cb() too.
Without this patch:
# perf probe -D __amd_put_nb_event_constraints
Failed to get entry address of __amd_put_nb_event_constraints.
Probe point '__amd_put_nb_event_constraints' not found.
Error: Failed to add events.
With this patch:
# perf probe -D __amd_put_nb_event_constraints
p:probe/__amd_put_nb_event_constraints amd_put_event_constraints+43
Committer testing:
Before:
[root@quaco ~]# perf probe -D __amd_put_nb_event_constraints
Failed to get entry address of __amd_put_nb_event_constraints.
Probe point '__amd_put_nb_event_constraints' not found.
Error: Failed to add events.
[root@quaco ~]#
After:
[root@quaco ~]# perf probe -D __amd_put_nb_event_constraints
p:probe/__amd_put_nb_event_constraints _text+33789
[root@quaco ~]#
Fixes:
|
||
Masami Hiramatsu
|
5d16dbcc31 |
perf probe: Fix to probe a function which has no entry pc
Fix 'perf probe' to probe a function which has no entry pc or low pc but
only has a ranges attribute.
probe_point_search_cb() uses dwarf_entrypc() to get the probe address,
but that doesn't work for a function DIE which has only a ranges
attribute. Use die_entrypc() instead.
Without this fix:
# perf probe -k ../build-x86_64/vmlinux -D clear_tasks_mm_cpumask:0
Probe point 'clear_tasks_mm_cpumask' not found.
Error: Failed to add events.
With this:
# perf probe -k ../build-x86_64/vmlinux -D clear_tasks_mm_cpumask:0
p:probe/clear_tasks_mm_cpumask clear_tasks_mm_cpumask+0
Committer testing:
Before:
[root@quaco ~]# perf probe clear_tasks_mm_cpumask:0
Probe point 'clear_tasks_mm_cpumask' not found.
Error: Failed to add events.
[root@quaco ~]#
After:
[root@quaco ~]# perf probe clear_tasks_mm_cpumask:0
Added new event:
probe:clear_tasks_mm_cpumask (on clear_tasks_mm_cpumask)
You can now use it in all perf tools, such as:
perf record -e probe:clear_tasks_mm_cpumask -aR sleep 1
[root@quaco ~]#
Using it with 'perf trace':
[root@quaco ~]# perf trace -e probe:clear_tasks_mm_cpumask
Doesn't seem to be used in x86_64:
$ find . -name "*.c" | xargs grep clear_tasks_mm_cpumask
./kernel/cpu.c: * clear_tasks_mm_cpumask - Safely clear tasks' mm_cpumask for a CPU
./kernel/cpu.c:void clear_tasks_mm_cpumask(int cpu)
./arch/xtensa/kernel/smp.c: clear_tasks_mm_cpumask(cpu);
./arch/csky/kernel/smp.c: clear_tasks_mm_cpumask(cpu);
./arch/sh/kernel/smp.c: clear_tasks_mm_cpumask(cpu);
./arch/arm/kernel/smp.c: clear_tasks_mm_cpumask(cpu);
./arch/powerpc/mm/nohash/mmu_context.c: clear_tasks_mm_cpumask(cpu);
$ find . -name "*.h" | xargs grep clear_tasks_mm_cpumask
./include/linux/cpu.h:void clear_tasks_mm_cpumask(int cpu);
$ find . -name "*.S" | xargs grep clear_tasks_mm_cpumask
$
Fixes:
|
||
Masami Hiramatsu
|
07d3698578 |
perf probe: Fix wrong address verification
Since there are some DIEs which have only ranges instead of the
combination of entrypc/highpc, address verification must use
dwarf_haspc() instead of dwarf_entrypc()/dwarf_highpc() (see the sketch
after this entry).
Also, a ranges-only DIE may have part of its code in a different section
(e.g. unlikely code will be in .text.unlikely as a "FUNC.cold" symbol). In
that case, we can not use dwarf_entrypc() or die_entrypc(), because the
offset from the original DIE can be a negative value.
Instead, simply get the symbol and offset from the symtab.
Without this patch:
# perf probe -D clear_tasks_mm_cpumask:1
Failed to get entry address of clear_tasks_mm_cpumask
Error: Failed to add events.
And with this patch:
# perf probe -D clear_tasks_mm_cpumask:1
p:probe/clear_tasks_mm_cpumask clear_tasks_mm_cpumask+0
p:probe/clear_tasks_mm_cpumask_1 clear_tasks_mm_cpumask+5
p:probe/clear_tasks_mm_cpumask_2 clear_tasks_mm_cpumask+8
p:probe/clear_tasks_mm_cpumask_3 clear_tasks_mm_cpumask+16
p:probe/clear_tasks_mm_cpumask_4 clear_tasks_mm_cpumask+82
Committer testing:
I managed to reproduce the above:
[root@quaco ~]# perf probe -D clear_tasks_mm_cpumask:1
p:probe/clear_tasks_mm_cpumask _text+919968
p:probe/clear_tasks_mm_cpumask_1 _text+919973
p:probe/clear_tasks_mm_cpumask_2 _text+919976
[root@quaco ~]#
But then when trying to actually put the probe in place, it fails if I
use :0 as the offset:
[root@quaco ~]# perf probe -L clear_tasks_mm_cpumask | head -5
<clear_tasks_mm_cpumask@/usr/src/debug/kernel-5.2.fc30/linux-5.2.18-200.fc30.x86_64/kernel/cpu.c:0>
0 void clear_tasks_mm_cpumask(int cpu)
1 {
2 struct task_struct *p;
[root@quaco ~]# perf probe clear_tasks_mm_cpumask:0
Probe point 'clear_tasks_mm_cpumask' not found.
Error: Failed to add events.
[root@quaco ~]#
The next patch is needed to fix this case.
Fixes:
|
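The verification itself boils down to one call; a hedged libdw fragment (not the perf code; it assumes a function DIE obtained elsewhere):
    #include <dwarf.h>
    #include <elfutils/libdw.h>
    #include <stdbool.h>

    /* dwarf_entrypc()/dwarf_highpc() fail on a ranges-only DIE, while
     * dwarf_haspc() also walks DW_AT_ranges, so it answers "does this DIE
     * cover this address" for both kinds of DIE. */
    static bool addr_in_die(Dwarf_Die *fn_die, Dwarf_Addr addr)
    {
        return dwarf_haspc(fn_die, addr) > 0;
    }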
||
Yunfeng Ye
|
1785fbb738 |
perf jevents: Fix resource leak in process_mapfile() and main()
There are memory leaks and file descriptor resource leaks in process_mapfile() and main(). Fix this by adding free(), fclose() and free_arch_std_events() on the error paths. Fixes: |
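The usual shape of such a fix, as a generic sketch (not the jevents code): route every failure through cleanup labels that release whatever was acquired before that point.
    #include <stdio.h>
    #include <stdlib.h>

    static int process_file(const char *path)
    {
        char *line = malloc(256);
        FILE *fp;
        int ret = -1;

        if (!line)
            return -1;

        fp = fopen(path, "r");
        if (!fp)
            goto out_free;                    /* nothing to fclose() yet */

        if (!fgets(line, 256, fp))
            goto out_close;

        printf("first line: %s", line);
        ret = 0;

    out_close:
        fclose(fp);                           /* no descriptor leak on errors */
    out_free:
        free(line);                           /* no memory leak either */
        return ret;
    }

    int main(int argc, char **argv)
    {
        return argc > 1 ? process_file(argv[1]) : 0;
    }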
||
Masami Hiramatsu
|
91e2f539ee |
perf probe: Fix to show function entry line as probe-able
Fix die_walk_lines() to list the function entry line correctly. Since
dwarf_entrypc() does not return the entry pc if the DIE has only a
ranges attribute, __die_walk_funclines() fails to list the declaration
line (entry line) in that case.
To solve this issue, introduce die_entrypc(), which correctly returns
the entry PC (the first address range) even if the DIE has only a
ranges attribute (see the sketch after this entry). With this fix,
die_walk_lines() shows that the function entry line can be probed correctly.
Fixes:
|
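A hedged fragment of the fallback being described (not the actual die_entrypc(); it assumes a DIE obtained from libdw elsewhere): if dwarf_entrypc() cannot produce an address, take the start of the first DW_AT_ranges entry.
    #include <dwarf.h>
    #include <elfutils/libdw.h>

    /* Entry address of a DIE that may carry only DW_AT_ranges.
     * Returns 0 on success, -1 on failure. */
    static int die_entry_addr(Dwarf_Die *die, Dwarf_Addr *addr)
    {
        Dwarf_Addr base, start, end;

        if (dwarf_entrypc(die, addr) == 0)
            return 0;                             /* entry_pc/low_pc present */

        if (dwarf_ranges(die, 0, &base, &start, &end) > 0) {
            *addr = start;                        /* first address range */
            return 0;
        }
        return -1;
    }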
||
Masami Hiramatsu
|
acb6a7047a |
perf probe: Walk function lines in lexical blocks
Since some inlined functions are in lexical blocks of a given function, we
have to recursively walk through the DIE tree (see the sketch after this
entry). Without this fix, 'perf probe -L' can miss inlined functions which
are in a lexical block (like the if (..) { func() } case).
However, to walk the lines in a given function, we do not
need to follow the children DIEs of inlined functions, because those do
not have any lines in the specified function.
We need to walk the whole tree only when we walk all lines in a given
file, because an inlined function can include another inlined function
in the same file.
Fixes:
|
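A generic fragment of that walk (not the perf walker; it assumes a function DIE from libdw): recurse into lexical blocks, where inlined calls can hide, but stop at the inlined subroutines themselves.
    #include <dwarf.h>
    #include <elfutils/libdw.h>
    #include <stdio.h>

    static void walk_inlined_calls(Dwarf_Die *parent)
    {
        Dwarf_Die child;

        if (dwarf_child(parent, &child) != 0)
            return;

        do {
            switch (dwarf_tag(&child)) {
            case DW_TAG_inlined_subroutine: {
                const char *name = dwarf_diename(&child);

                printf("inlined call: %s\n", name ? name : "<unknown>");
                break;                        /* its lines belong elsewhere */
            }
            case DW_TAG_lexical_block:
                walk_inlined_calls(&child);   /* inlined calls can hide here */
                break;
            default:
                break;
            }
        } while (dwarf_siblingof(&child, &child) == 0);
    }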
||
Masami Hiramatsu
|
b77afa1f81 |
perf probe: Fix to find range-only function instance
Fix die_is_func_instance() to find a range-only function instance.
In some cases, a function instance can be made without any low PC or
entry PC, but only with address ranges, by optimization (e.g. cold text
partially placed in the ".text.unlikely" section). To find such a function
instance, we have to check the ranges attribute too (see the sketch after
this entry).
Fixes:
|
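The essence of that check, as a hedged fragment (hypothetical name, not die_is_func_instance() itself): an instance is anything that carries an address, whether via DW_AT_low_pc/DW_AT_entry_pc or only via DW_AT_ranges.
    #include <dwarf.h>
    #include <elfutils/libdw.h>
    #include <stdbool.h>

    /* Does this DIE describe an instantiated (addressable) function? */
    static bool die_has_code(Dwarf_Die *die)
    {
        Dwarf_Attribute attr;

        return dwarf_attr(die, DW_AT_low_pc, &attr) != NULL ||
               dwarf_attr(die, DW_AT_entry_pc, &attr) != NULL ||
               dwarf_attr(die, DW_AT_ranges, &attr) != NULL;
    }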
||
Igor Lubashev
|
4bfbcf3ee1 |
perf kvm: Use evlist layer api when possible
No need for layer violations when a proper evlist api is available. Signed-off-by: Igor Lubashev <ilubashe@akamai.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/1571795693-23558-4-git-send-email-ilubashe@akamai.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Leo Yan
|
b7dc21f546 |
perf tests: Fix a typo
Correct typo in comment: s/suck/stuck. Signed-off-by: Leo Yan <leo.yan@linaro.org> Reported-by: Will Deacon <will@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20191023083324.12093-1-leo.yan@linaro.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Ian Rogers
|
826100a7ce |
perf tools: Avoid a malloc() for array events
Use realloc() rather than malloc()+memcpy() to possibly avoid a memory allocation when appending array elements. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Cc: Yonghong Song <yhs@fb.com> Cc: bpf@vger.kernel.org Cc: clang-built-linux@googlegroups.com Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20191023005337.196160-6-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
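The pattern, as a generic sketch (not the parse-events code): realloc() can often grow the buffer in place, so appending does not always need a fresh allocation plus a copy.
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Append 'n' ints to 'arr' of length '*len'; returns the (possibly moved)
     * buffer, or NULL on failure with the original buffer left intact. */
    static int *append_ints(int *arr, size_t *len, const int *extra, size_t n)
    {
        int *tmp = realloc(arr, (*len + n) * sizeof(*arr));

        if (!tmp)
            return NULL;
        memcpy(tmp + *len, extra, n * sizeof(*extra));
        *len += n;
        return tmp;
    }

    int main(void)
    {
        int more[] = { 3, 4, 5 };
        size_t len = 0;
        int *arr = append_ints(NULL, &len, more, 3);  /* realloc(NULL, ..) acts as malloc */

        if (arr) {
            printf("%d %d %d\n", arr[0], arr[1], arr[2]);
            free(arr);
        }
        return 0;
    }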
||
Ian Rogers
|
a26e47162d |
perf tools: Move ALLOC_LIST into a function
Having a YYABORT in a macro makes it hard to free memory for components of a rule. Separate the logic out. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Cc: Yonghong Song <yhs@fb.com> Cc: bpf@vger.kernel.org Cc: clang-built-linux@googlegroups.com Cc: netdev@vger.kernel.org Link: http://lore.kernel.org/lkml/20191023005337.196160-5-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
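The shape of that change, sketched with hypothetical names (not the parse-events grammar): a macro that aborts internally never lets the rule free what it already owns, while a helper that returns NULL leaves that decision to the caller.
    #include <stdlib.h>

    struct list_head { struct list_head *next, *prev; };

    /*
     * Before, the grammar used something like
     *     #define ALLOC_LIST(list) do { (list) = malloc(...); if (!(list)) YYABORT; } while (0)
     * which bails out from inside the macro, so the rule never gets a chance
     * to free its other components. A helper that just returns NULL hands
     * that decision back to the rule.
     */
    static struct list_head *alloc_list(void)
    {
        struct list_head *list = malloc(sizeof(*list));

        if (list)
            list->next = list->prev = list;   /* empty circular list */
        return list;
    }

    static int rule_action(char *owned_string)
    {
        struct list_head *list = alloc_list();

        if (!list) {
            free(owned_string);   /* the rule can now release what it owns */
            return -1;            /* in the real grammar: YYABORT */
        }
        /* ... build on 'list' ... */
        free(list);
        free(owned_string);
        return 0;
    }

    int main(void)
    {
        return rule_action(malloc(16));
    }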
||
Andi Kleen
|
2ccfb8bc21 |
perf evsel: Avoid close(-1)
In some weak fallback cases close can be called a lot with -1. Check for this case and avoid calling close then. This is mainly to shut up valgrind which complains about this case. Signed-off-by: Andi Kleen <ak@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lore.kernel.org/lkml/20191020175202.32456-3-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
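The guard itself is tiny; a generic sketch (not the evsel code):
    #include <unistd.h>

    /* Only close descriptors that were actually opened; close(-1) is pointless
     * and noisy under valgrind. */
    static void close_if_open(int *fd)
    {
        if (*fd >= 0) {
            close(*fd);
            *fd = -1;    /* so a second call is harmless */
        }
    }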
||
Andi Kleen
|
796c01a4bf |
perf evsel: Always preserve errno while cleaning up perf_event_open failures
In some cases when perf_event_open fails, it may do some closes to clean up. In special cases these closes can fail too, which overwrites the errno of the perf_event_open, which is then incorrectly reported. Save/restore errno around closes. Signed-off-by: Andi Kleen <ak@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lore.kernel.org/lkml/20191020175202.32456-2-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
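The save/restore, as a generic sketch (not the evsel code): keep the errno of the original failure across the cleanup close().
    #include <errno.h>
    #include <unistd.h>

    /* Close a cleanup fd without letting a failing close() clobber the errno
     * left behind by the original failure (e.g. the perf_event_open() call). */
    static void close_preserving_errno(int fd)
    {
        int saved = errno;

        if (fd >= 0)
            close(fd);
        errno = saved;
    }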
||
Leo Yan
|
9d604aad4b |
perf cs-etm: Fix definition of macro TO_CS_QUEUE_NR
The TO_CS_QUEUE_NR macro definition has a typo: it uses 'trace_id_chan' as its parameter, which doesn't match its body, which uses 'trace_chan_id'. So rename the parameter to 'trace_chan_id'. It is only by luck that the function cs_etm__setup_queue() has a local variable 'trace_chan_id': even though the macro parameter was wrongly named, the body picked up that local variable rather than the macro's parameter 'trace_id_chan', so the compiler didn't complain before. After renaming the parameter, compilation fails because cs_etm__setup_queue() has no variable 'trace_id_chan'. This patch passes the variable 'trace_chan_id' to the macro, which fixes the compile error. Signed-off-by: Leo Yan <leo.yan@linaro.org> Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Suzuki Poulouse <suzuki.poulose@arm.com> Cc: coresight ml <coresight@lists.linaro.org> Cc: linux-arm-kernel@lists.infradead.org Link: http://lore.kernel.org/lkml/20191021074808.25795-1-leo.yan@linaro.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
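A self-contained illustration of the hazard (hypothetical macro and variable names, not the cs-etm code): when a macro body names something that is not its parameter, the body silently binds to whatever is in scope at the expansion site, and the mis-named argument is never expanded at all.
    #include <stdio.h>

    /* Buggy shape: parameter is 'idx_val', body says 'val_idx'. */
    #define SLOT_NR_BUGGY(base, idx_val)  ((base) * 16 + val_idx)

    /* Fixed shape: parameter name and body agree. */
    #define SLOT_NR(base, val_idx)        ((base) * 16 + (val_idx))

    int main(void)
    {
        int val_idx = 3;

        /* Both print 35 here only because a local 'val_idx' happens to exist;
         * rename that local and the buggy macro no longer compiles. */
        printf("%d %d\n", SLOT_NR_BUGGY(2, val_idx), SLOT_NR(2, val_idx));
        return 0;
    }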
||
Arnaldo Carvalho de Melo
|
a33d261198 |
perf llvm: Make .o saving a debug message, not an info one
It's a bit annoying to have that message; better make it a debug one. I.e. now this message will only appear when using '-v': [root@quaco tracebuffer]# trace -e bristot.c LLVM: dumping bristot.o ^C[root@quaco tracebuffer]# Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Daniel Bristot de Oliveira <bristot@redhat.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Luis Cláudio Gonçalves <lclaudio@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-o7jd4i7s66kosec5torubqps@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Adrian Hunter
|
eeb399b531 |
perf record: Put a copy of kcore into the perf.data directory
Add a new 'perf record' option '--kcore' which will put a copy of /proc/kcore, kallsyms and modules into a perf.data directory. Note, that without the --kcore option, output goes to a file as previously. The tools' -o and -i options work with either a file name or directory name. Example: $ sudo perf record --kcore uname $ sudo tree perf.data perf.data ├── kcore_dir │ ├── kallsyms │ ├── kcore │ └── modules └── data $ sudo perf script -v build id event received for vmlinux: 1eaa285996affce2d74d8e66dcea09a80c9941de build id event received for [vdso]: 8bbaf5dc62a9b644b4d4e4539737e104e4a84541 Samples for 'cycles' event do not have CPU attribute set. Skipping 'cpu' field. Using CPUID GenuineIntel-6-8E-A Using perf.data/kcore_dir/kcore for kernel data Using perf.data/kcore_dir/kallsyms for symbols perf 19058 506778.423729: 1 cycles: ffffffffa2caa548 native_write_msr+0x8 (vmlinux) perf 19058 506778.423733: 1 cycles: ffffffffa2caa548 native_write_msr+0x8 (vmlinux) perf 19058 506778.423734: 7 cycles: ffffffffa2caa548 native_write_msr+0x8 (vmlinux) perf 19058 506778.423736: 117 cycles: ffffffffa2caa54a native_write_msr+0xa (vmlinux) perf 19058 506778.423738: 2092 cycles: ffffffffa2c9b7b0 native_apic_msr_write+0x0 (vmlinux) perf 19058 506778.423740: 37380 cycles: ffffffffa2f121d0 perf_event_addr_filters_exec+0x0 (vmlinux) uname 19058 506778.423751: 582673 cycles: ffffffffa303a407 propagate_protected_usage+0x147 (vmlinux) uname 19058 506778.423892: 2241841 cycles: ffffffffa2cae0c9 unwind_next_frame.part.5+0x79 (vmlinux) uname 19058 506778.424430: 2457397 cycles: ffffffffa3019232 check_memory_region+0x52 (vmlinux) Committer testing: # rm -rf perf.data* # perf record sleep 1 [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.024 MB perf.data (7 samples) ] # ls -l perf.data -rw-------. 1 root root 34772 Oct 21 11:08 perf.data # perf record --kcore uname Linux [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.024 MB perf.data (7 samples) ] ls[root@quaco ~]# ls -lad perf.data* drwx------. 3 root root 4096 Oct 21 11:08 perf.data -rw-------. 1 root root 34772 Oct 21 11:08 perf.data.old # perf evlist -v cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|PERIOD, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, enable_on_exec: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1 # perf evlist -v -i perf.data/data cycles: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|PERIOD, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, enable_on_exec: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1 # Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: http://lore.kernel.org/lkml/20191004083121.12182-6-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Adrian Hunter
|
46e201efa1 |
perf data: Support single perf.data file directory
Support directory output that contains a regular perf.data file, named "data". By default the directory is named perf.data i.e. perf.data └── data Most of the infrastructure to support a directory is already there. This patch makes the changes needed to support the format above. Presently there is no 'perf record' option to output a directory. This is preparation for adding support for putting a copy of /proc/kcore in the directory. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Link: http://lore.kernel.org/lkml/20191004083121.12182-5-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Jiri Olsa
|
01e97a59ea |
perf session: Fix indent in perf_session__new()
Fix up indentation. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Link: http://lore.kernel.org/lkml/20191007112027.GD6919@krava Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Adrian Hunter
|
9b70b9db4e |
perf data: Rename directory "header" file to "data"
In preparation to support a single file directory format, rename "header" to "data" because "header" is a mis-leading name when there is only 1 file. Note, in the multi-file case, the "header" file also contains data. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Link: http://lore.kernel.org/lkml/20191004083121.12182-4-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Adrian Hunter
|
3dedec4f5c |
perf data: Move perf_dir_version into data.h
perf_dir_version belongs to struct perf_data which is declared in data.h. To allow its use in inline perf_data functions, move perf_dir_version to data.h Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Link: http://lore.kernel.org/lkml/20191004083121.12182-3-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |
||
Adrian Hunter
|
490e6db09a |
perf data: Correctly identify directory data files
In order to rename the "header" file to "data" without conflicting, correctly identify the non-header files as starting with "data." Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Link: http://lore.kernel.org/lkml/20191004083121.12182-2-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> |