linux_dsm_epyc7002/tools/perf/util/hist.c

// SPDX-License-Identifier: GPL-2.0
#include "callchain.h"
#include "util.h"
#include "build-id.h"
#include "hist.h"
#include "map.h"
#include "session.h"
#include "namespaces.h"
#include "sort.h"
#include "units.h"
#include "evlist.h"
#include "evsel.h"
#include "annotate.h"
#include "srcline.h"
#include "symbol.h"
#include "thread.h"
#include "ui/progress.h"
#include <errno.h>
#include <math.h>
#include <inttypes.h>
#include <sys/param.h>
static bool hists__filter_entry_by_dso(struct hists *hists,
struct hist_entry *he);
static bool hists__filter_entry_by_thread(struct hists *hists,
struct hist_entry *he);
static bool hists__filter_entry_by_symbol(struct hists *hists,
struct hist_entry *he);
static bool hists__filter_entry_by_socket(struct hists *hists,
struct hist_entry *he);
u16 hists__col_len(struct hists *hists, enum hist_column col)
{
return hists->col_len[col];
}
void hists__set_col_len(struct hists *hists, enum hist_column col, u16 len)
{
hists->col_len[col] = len;
}
bool hists__new_col_len(struct hists *hists, enum hist_column col, u16 len)
{
if (len > hists__col_len(hists, col)) {
hists__set_col_len(hists, col, len);
return true;
}
return false;
}
void hists__reset_col_len(struct hists *hists)
{
enum hist_column col;
for (col = 0; col < HISTC_NR_COLS; ++col)
hists__set_col_len(hists, col, 0);
}
static void hists__set_unres_dso_col_len(struct hists *hists, int dso)
{
const unsigned int unresolved_col_width = BITS_PER_LONG / 4;
if (hists__col_len(hists, dso) < unresolved_col_width &&
!symbol_conf.col_width_list_str && !symbol_conf.field_sep &&
!symbol_conf.dso_list)
hists__set_col_len(hists, dso, unresolved_col_width);
}
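/*
 * Grow the per-column display widths so that everything this entry
 * will print (symbol, dso, branch/memory info, srcline, ...) fits.
 * Widths only ever grow here; hists__reset_col_len() starts a fresh
 * measurement.
 */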
void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
{
const unsigned int unresolved_col_width = BITS_PER_LONG / 4;
int symlen;
u16 len;
/*
* +4 accounts for '[x] ' priv level info
* +2 accounts for 0x prefix on raw addresses
* +3 accounts for ' y ' symtab origin info
*/
if (h->ms.sym) {
symlen = h->ms.sym->namelen + 4;
if (verbose > 0)
symlen += BITS_PER_LONG / 4 + 2 + 3;
hists__new_col_len(hists, HISTC_SYMBOL, symlen);
} else {
symlen = unresolved_col_width + 4 + 2;
hists__new_col_len(hists, HISTC_SYMBOL, symlen);
hists__set_unres_dso_col_len(hists, HISTC_DSO);
}
len = thread__comm_len(h->thread);
if (hists__new_col_len(hists, HISTC_COMM, len))
hists__set_col_len(hists, HISTC_THREAD, len + 8);
if (h->ms.map) {
len = dso__name_len(h->ms.map->dso);
hists__new_col_len(hists, HISTC_DSO, len);
}
if (h->parent)
hists__new_col_len(hists, HISTC_PARENT, h->parent->namelen);
if (h->branch_info) {
if (h->branch_info->from.sym) {
symlen = (int)h->branch_info->from.sym->namelen + 4;
if (verbose > 0)
symlen += BITS_PER_LONG / 4 + 2 + 3;
hists__new_col_len(hists, HISTC_SYMBOL_FROM, symlen);
symlen = dso__name_len(h->branch_info->from.map->dso);
hists__new_col_len(hists, HISTC_DSO_FROM, symlen);
} else {
symlen = unresolved_col_width + 4 + 2;
hists__new_col_len(hists, HISTC_SYMBOL_FROM, symlen);
hists__set_unres_dso_col_len(hists, HISTC_DSO_FROM);
}
if (h->branch_info->to.sym) {
symlen = (int)h->branch_info->to.sym->namelen + 4;
if (verbose > 0)
symlen += BITS_PER_LONG / 4 + 2 + 3;
hists__new_col_len(hists, HISTC_SYMBOL_TO, symlen);
symlen = dso__name_len(h->branch_info->to.map->dso);
hists__new_col_len(hists, HISTC_DSO_TO, symlen);
} else {
symlen = unresolved_col_width + 4 + 2;
hists__new_col_len(hists, HISTC_SYMBOL_TO, symlen);
hists__set_unres_dso_col_len(hists, HISTC_DSO_TO);
}
if (h->branch_info->srcline_from)
hists__new_col_len(hists, HISTC_SRCLINE_FROM,
strlen(h->branch_info->srcline_from));
if (h->branch_info->srcline_to)
hists__new_col_len(hists, HISTC_SRCLINE_TO,
strlen(h->branch_info->srcline_to));
}
if (h->mem_info) {
if (h->mem_info->daddr.sym) {
symlen = (int)h->mem_info->daddr.sym->namelen + 4
+ unresolved_col_width + 2;
hists__new_col_len(hists, HISTC_MEM_DADDR_SYMBOL,
symlen);
hists__new_col_len(hists, HISTC_MEM_DCACHELINE,
symlen + 1);
} else {
symlen = unresolved_col_width + 4 + 2;
hists__new_col_len(hists, HISTC_MEM_DADDR_SYMBOL,
symlen);
hists__new_col_len(hists, HISTC_MEM_DCACHELINE,
symlen);
}
if (h->mem_info->iaddr.sym) {
symlen = (int)h->mem_info->iaddr.sym->namelen + 4
+ unresolved_col_width + 2;
hists__new_col_len(hists, HISTC_MEM_IADDR_SYMBOL,
symlen);
} else {
symlen = unresolved_col_width + 4 + 2;
hists__new_col_len(hists, HISTC_MEM_IADDR_SYMBOL,
symlen);
}
if (h->mem_info->daddr.map) {
symlen = dso__name_len(h->mem_info->daddr.map->dso);
hists__new_col_len(hists, HISTC_MEM_DADDR_DSO,
symlen);
} else {
symlen = unresolved_col_width + 4 + 2;
hists__set_unres_dso_col_len(hists, HISTC_MEM_DADDR_DSO);
}
hists__new_col_len(hists, HISTC_MEM_PHYS_DADDR,
unresolved_col_width + 4 + 2);
} else {
symlen = unresolved_col_width + 4 + 2;
hists__new_col_len(hists, HISTC_MEM_DADDR_SYMBOL, symlen);
hists__new_col_len(hists, HISTC_MEM_IADDR_SYMBOL, symlen);
hists__set_unres_dso_col_len(hists, HISTC_MEM_DADDR_DSO);
}
hists__new_col_len(hists, HISTC_CGROUP_ID, 20);
hists__new_col_len(hists, HISTC_CPU, 3);
hists__new_col_len(hists, HISTC_SOCKET, 6);
hists__new_col_len(hists, HISTC_MEM_LOCKED, 6);
hists__new_col_len(hists, HISTC_MEM_TLB, 22);
hists__new_col_len(hists, HISTC_MEM_SNOOP, 12);
hists__new_col_len(hists, HISTC_MEM_LVL, 21 + 3);
hists__new_col_len(hists, HISTC_LOCAL_WEIGHT, 12);
hists__new_col_len(hists, HISTC_GLOBAL_WEIGHT, 12);
if (h->srcline) {
len = MAX(strlen(h->srcline), strlen(sort_srcline.se_header));
hists__new_col_len(hists, HISTC_SRCLINE, len);
}
if (h->srcfile)
hists__new_col_len(hists, HISTC_SRCFILE, strlen(h->srcfile));
if (h->transaction)
hists__new_col_len(hists, HISTC_TRANSACTION,
hist_entry__transaction_len());
if (h->trace_output)
hists__new_col_len(hists, HISTC_TRACE, strlen(h->trace_output));
}
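/*
 * Re-measure the column widths from scratch, looking at no more than
 * the first max_rows entries of the sorted output and skipping
 * filtered ones.
 */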
void hists__output_recalc_col_len(struct hists *hists, int max_rows)
{
struct rb_node *next = rb_first_cached(&hists->entries);
struct hist_entry *n;
int row = 0;
hists__reset_col_len(hists);
while (next && row++ < max_rows) {
n = rb_entry(next, struct hist_entry, rb_node);
if (!n->filtered)
hists__calc_col_len(hists, n);
next = rb_next(&n->rb_node);
}
}
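/*
 * Account the period to the bucket matching the sample's cpumode:
 * kernel, user, guest kernel or guest user.
 */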
static void he_stat__add_cpumode_period(struct he_stat *he_stat,
unsigned int cpumode, u64 period)
{
switch (cpumode) {
case PERF_RECORD_MISC_KERNEL:
he_stat->period_sys += period;
break;
case PERF_RECORD_MISC_USER:
he_stat->period_us += period;
break;
case PERF_RECORD_MISC_GUEST_KERNEL:
he_stat->period_guest_sys += period;
break;
case PERF_RECORD_MISC_GUEST_USER:
he_stat->period_guest_us += period;
break;
default:
break;
}
}
static void he_stat__add_period(struct he_stat *he_stat, u64 period,
u64 weight)
{
he_stat->period += period;
he_stat->weight += weight;
he_stat->nr_events += 1;
}
static void he_stat__add_stat(struct he_stat *dest, struct he_stat *src)
{
dest->period += src->period;
dest->period_sys += src->period_sys;
dest->period_us += src->period_us;
dest->period_guest_sys += src->period_guest_sys;
dest->period_guest_us += src->period_guest_us;
dest->nr_events += src->nr_events;
dest->weight += src->weight;
}
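/*
 * Age an entry's stats for live-updating output (e.g. 'perf top'):
 * each pass keeps 7/8 of the accumulated period and event count.
 */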
static void he_stat__decay(struct he_stat *he_stat)
{
he_stat->period = (he_stat->period * 7) / 8;
he_stat->nr_events = (he_stat->nr_events * 7) / 8;
/* XXX need decay for weight too? */
}
static void hists__delete_entry(struct hists *hists, struct hist_entry *he);
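/*
 * Decay an entry and, in hierarchy mode, its children (deleting any
 * child that decays to nothing).  Returns true when this entry's own
 * period has reached zero, so the caller can delete it too.
 */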
static bool hists__decay_entry(struct hists *hists, struct hist_entry *he)
{
u64 prev_period = he->stat.period;
u64 diff;
if (prev_period == 0)
return true;
he_stat__decay(&he->stat);
if (symbol_conf.cumulate_callchain)
he_stat__decay(he->stat_acc);
decay_callchain(he->callchain);
diff = prev_period - he->stat.period;
if (!he->depth) {
hists->stats.total_period -= diff;
if (!he->filtered)
hists->stats.total_non_filtered_period -= diff;
}
if (!he->leaf) {
struct hist_entry *child;
struct rb_node *node = rb_first_cached(&he->hroot_out);
while (node) {
child = rb_entry(node, struct hist_entry, rb_node);
node = rb_next(node);
if (hists__decay_entry(hists, child))
hists__delete_entry(hists, child);
}
}
return he->stat.period == 0;
}
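/*
 * Unlink the entry from both the input and output rbtrees, fix up the
 * entry counters and free it.
 */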
static void hists__delete_entry(struct hists *hists, struct hist_entry *he)
{
struct rb_root_cached *root_in;
struct rb_root_cached *root_out;
if (he->parent_he) {
root_in = &he->parent_he->hroot_in;
root_out = &he->parent_he->hroot_out;
} else {
if (hists__has(hists, need_collapse))
root_in = &hists->entries_collapsed;
else
root_in = hists->entries_in;
root_out = &hists->entries;
}
rb_erase_cached(&he->rb_node_in, root_in);
rb_erase_cached(&he->rb_node, root_out);
--hists->nr_entries;
if (!he->filtered)
--hists->nr_non_filtered_entries;
hist_entry__delete(he);
}
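/*
 * Decay all entries; an entry is deleted when its period decays to
 * zero or when it matches the zap_user/zap_kernel filters.  Used by
 * live-updating tools such as 'perf top'.
 */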
void hists__decay_entries(struct hists *hists, bool zap_user, bool zap_kernel)
{
struct rb_node *next = rb_first_cached(&hists->entries);
struct hist_entry *n;
while (next) {
n = rb_entry(next, struct hist_entry, rb_node);
next = rb_next(&n->rb_node);
if (((zap_user && n->level == '.') ||
(zap_kernel && n->level != '.') ||
hists__decay_entry(hists, n))) {
hists__delete_entry(hists, n);
}
}
}
void hists__delete_entries(struct hists *hists)
{
struct rb_node *next = rb_first_cached(&hists->entries);
struct hist_entry *n;
while (next) {
n = rb_entry(next, struct hist_entry, rb_node);
next = rb_next(&n->rb_node);
hists__delete_entry(hists, n);
}
}
/*
* histogram, sorted on item, collects periods
*/
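/*
 * Initialize a new entry from the template: take references on the
 * maps and thread it points to, and deep-copy the per-sample data
 * (stat_acc, branch_info, raw_data, srcline) that would otherwise be
 * freed from under us.
 */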
static int hist_entry__init(struct hist_entry *he,
struct hist_entry *template,
bool sample_self,
size_t callchain_size)
{
*he = *template;
he->callchain_size = callchain_size;
if (symbol_conf.cumulate_callchain) {
he->stat_acc = malloc(sizeof(he->stat));
if (he->stat_acc == NULL)
return -ENOMEM;
memcpy(he->stat_acc, &he->stat, sizeof(he->stat));
if (!sample_self)
memset(&he->stat, 0, sizeof(he->stat));
}
map__get(he->ms.map);
if (he->branch_info) {
/*
* This branch info (or part of it) was allocated by
* sample__resolve_bstack() and will be freed after
* adding new entries, so we need to save a copy.
*/
he->branch_info = malloc(sizeof(*he->branch_info));
if (he->branch_info == NULL)
goto err;
memcpy(he->branch_info, template->branch_info,
sizeof(*he->branch_info));
map__get(he->branch_info->from.map);
map__get(he->branch_info->to.map);
}
if (he->mem_info) {
map__get(he->mem_info->iaddr.map);
map__get(he->mem_info->daddr.map);
}
if (hist_entry__has_callchains(he) && symbol_conf.use_callchain)
callchain_init(he->callchain);
if (he->raw_data) {
he->raw_data = memdup(he->raw_data, he->raw_size);
if (he->raw_data == NULL)
goto err_infos;
}
if (he->srcline) {
he->srcline = strdup(he->srcline);
if (he->srcline == NULL)
goto err_rawdata;
}
INIT_LIST_HEAD(&he->pairs.node);
thread__get(he->thread);
he->hroot_in = RB_ROOT_CACHED;
he->hroot_out = RB_ROOT_CACHED;
if (!symbol_conf.report_hierarchy)
he->leaf = true;
return 0;
err_rawdata:
free(he->raw_data);
err_infos:
if (he->branch_info) {
map__put(he->branch_info->from.map);
map__put(he->branch_info->to.map);
free(he->branch_info);
}
if (he->mem_info) {
map__put(he->mem_info->iaddr.map);
map__put(he->mem_info->daddr.map);
}
err:
map__zput(he->ms.map);
free(he->stat_acc);
return -ENOMEM;
}
static void *hist_entry__zalloc(size_t size)
{
return zalloc(size + sizeof(struct hist_entry));
}
static void hist_entry__free(void *ptr)
{
free(ptr);
}
static struct hist_entry_ops default_ops = {
.new = hist_entry__zalloc,
.free = hist_entry__free,
};
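/*
 * Allocate an entry via the entry ops (zalloc/free by default),
 * reserving room for the callchain root when callchains are in use,
 * then initialize it from the template.
 */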
static struct hist_entry *hist_entry__new(struct hist_entry *template,
bool sample_self)
{
struct hist_entry_ops *ops = template->ops;
size_t callchain_size = 0;
struct hist_entry *he;
int err = 0;
if (!ops)
ops = template->ops = &default_ops;
if (symbol_conf.use_callchain)
callchain_size = sizeof(struct callchain_root);
he = ops->new(callchain_size);
if (he) {
err = hist_entry__init(he, template, sample_self, callchain_size);
if (err) {
ops->free(he);
he = NULL;
}
}
return he;
}
static u8 symbol__parent_filter(const struct symbol *parent)
{
if (symbol_conf.exclude_other && parent == NULL)
return 1 << HIST_FILTER__PARENT;
return 0;
}
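/*
 * Track the total period of samples that carry callchains (plus a
 * non-filtered variant), kept separately from total_period.
 */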
static void hist_entry__add_callchain_period(struct hist_entry *he, u64 period)
{
if (!hist_entry__has_callchains(he) || !symbol_conf.use_callchain)
return;
he->hists->callchain_period += period;
if (!he->filtered)
he->hists->callchain_non_filtered_period += period;
}
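/*
 * Look the template entry up in the input tree.  On a match its
 * period/weight is aggregated into the existing entry, otherwise a new
 * entry is created from the template and inserted.
 */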
static struct hist_entry *hists__findnew_entry(struct hists *hists,
struct hist_entry *entry,
struct addr_location *al,
bool sample_self)
{
struct rb_node **p;
struct rb_node *parent = NULL;
struct hist_entry *he;
int64_t cmp;
u64 period = entry->stat.period;
u64 weight = entry->stat.weight;
bool leftmost = true;
p = &hists->entries_in->rb_root.rb_node;
while (*p != NULL) {
parent = *p;
he = rb_entry(parent, struct hist_entry, rb_node_in);
/*
* Make sure that it receives arguments in the same order as
* hist_entry__collapse() so that we can use an appropriate
* function when searching an entry regardless of which sort
* keys were used.
*/
cmp = hist_entry__cmp(he, entry);
if (!cmp) {
if (sample_self) {
he_stat__add_period(&he->stat, period, weight);
hist_entry__add_callchain_period(he, period);
}
if (symbol_conf.cumulate_callchain)
he_stat__add_period(he->stat_acc, period, weight);
/*
* This mem info was allocated from sample__resolve_mem
* and will not be used anymore.
*/
mem_info__zput(entry->mem_info);
/* If the map of an existing hist_entry has
* become out-of-date due to an exec() or
* similar, update it. Otherwise we will
* mis-adjust symbol addresses when computing
* the history counter to increment.
*/
if (he->ms.map != entry->ms.map) {
map__put(he->ms.map);
he->ms.map = map__get(entry->ms.map);
}
goto out;
}
if (cmp < 0)
p = &(*p)->rb_left;
else {
p = &(*p)->rb_right;
leftmost = false;
}
}
he = hist_entry__new(entry, sample_self);
if (!he)
return NULL;
if (sample_self)
hist_entry__add_callchain_period(he, period);
hists->nr_entries++;
rb_link_node(&he->rb_node_in, parent, p);
rb_insert_color_cached(&he->rb_node_in, hists->entries_in, leftmost);
out:
if (sample_self)
he_stat__add_cpumode_period(&he->stat, al->cpumode, period);
if (symbol_conf.cumulate_callchain)
he_stat__add_cpumode_period(he->stat_acc, al->cpumode, period);
return he;
}
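/*
 * Build an on-stack template entry from the resolved addr_location and
 * the sample, then find or create the matching hist_entry.
 */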
static struct hist_entry*
__hists__add_entry(struct hists *hists,
struct addr_location *al,
struct symbol *sym_parent,
struct branch_info *bi,
struct mem_info *mi,
struct perf_sample *sample,
bool sample_self,
struct hist_entry_ops *ops)
{
struct namespaces *ns = thread__namespaces(al->thread);
struct hist_entry entry = {
.thread = al->thread,
.comm = thread__comm(al->thread),
.cgroup_id = {
.dev = ns ? ns->link_info[CGROUP_NS_INDEX].dev : 0,
.ino = ns ? ns->link_info[CGROUP_NS_INDEX].ino : 0,
},
.ms = {
.map = al->map,
.sym = al->sym,
},
.srcline = (char *) al->srcline,
.socket = al->socket,
.cpu = al->cpu,
.cpumode = al->cpumode,
.ip = al->addr,
.level = al->level,
.stat = {
.nr_events = 1,
.period = sample->period,
.weight = sample->weight,
},
.parent = sym_parent,
.filtered = symbol__parent_filter(sym_parent) | al->filtered,
.hists = hists,
.branch_info = bi,
.mem_info = mi,
.transaction = sample->transaction,
.raw_data = sample->raw_data,
.raw_size = sample->raw_size,
.ops = ops,
}, *he = hists__findnew_entry(hists, &entry, al, sample_self);
if (!hists->has_callchains && he && he->callchain_size != 0)
hists->has_callchains = true;
return he;
}
struct hist_entry *hists__add_entry(struct hists *hists,
struct addr_location *al,
struct symbol *sym_parent,
struct branch_info *bi,
struct mem_info *mi,
struct perf_sample *sample,
bool sample_self)
{
return __hists__add_entry(hists, al, sym_parent, bi, mi,
sample, sample_self, NULL);
}
struct hist_entry *hists__add_entry_ops(struct hists *hists,
struct hist_entry_ops *ops,
struct addr_location *al,
struct symbol *sym_parent,
struct branch_info *bi,
struct mem_info *mi,
struct perf_sample *sample,
bool sample_self)
{
return __hists__add_entry(hists, al, sym_parent, bi, mi,
sample, sample_self, ops);
}
static int
iter_next_nop_entry(struct hist_entry_iter *iter __maybe_unused,
struct addr_location *al __maybe_unused)
{
return 0;
}
static int
iter_add_next_nop_entry(struct hist_entry_iter *iter __maybe_unused,
struct addr_location *al __maybe_unused)
{
return 0;
}
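/*
 * Memory access iterator: resolve mem_info for the sample, add one hist
 * entry using the access weight as its cost (period), and append the
 * callchain when finishing.
 */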
static int
iter_prepare_mem_entry(struct hist_entry_iter *iter, struct addr_location *al)
{
struct perf_sample *sample = iter->sample;
struct mem_info *mi;
mi = sample__resolve_mem(sample, al);
if (mi == NULL)
return -ENOMEM;
iter->priv = mi;
return 0;
}
static int
iter_add_single_mem_entry(struct hist_entry_iter *iter, struct addr_location *al)
{
u64 cost;
struct mem_info *mi = iter->priv;
struct hists *hists = evsel__hists(iter->evsel);
struct perf_sample *sample = iter->sample;
struct hist_entry *he;
if (mi == NULL)
return -EINVAL;
cost = sample->weight;
if (!cost)
cost = 1;
/*
* must pass period=weight in order to get the correct
* sorting from hists__collapse_resort() which is solely
* based on periods. We want sorting to be done on nr_events * weight
* and this is indirectly achieved by passing period=weight here
* and by the he_stat__add_period() function.
*/
sample->period = cost;
he = hists__add_entry(hists, al, iter->parent, NULL, mi,
sample, true);
if (!he)
return -ENOMEM;
iter->he = he;
return 0;
}
static int
iter_finish_mem_entry(struct hist_entry_iter *iter,
struct addr_location *al __maybe_unused)
{
struct perf_evsel *evsel = iter->evsel;
struct hists *hists = evsel__hists(evsel);
struct hist_entry *he = iter->he;
int err = -EINVAL;
if (he == NULL)
goto out;
hists__inc_nr_samples(hists, he->filtered);
err = hist_entry__append_callchain(he, iter->sample);
out:
/*
* We don't need to free iter->priv (mem_info) here since the mem info
* was either already freed in hists__findnew_entry() or passed to a
* new hist entry by hist_entry__new().
*/
iter->priv = NULL;
iter->he = NULL;
return err;
}
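/*
 * Branch stack iterator: resolve the branch stack once, then add one hist
 * entry per from/to branch pair, using a pseudo period of 1 so the report
 * shows percentages of branches rather than of sampled events.
 */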
static int
iter_prepare_branch_entry(struct hist_entry_iter *iter, struct addr_location *al)
{
struct branch_info *bi;
struct perf_sample *sample = iter->sample;
bi = sample__resolve_bstack(sample, al);
if (!bi)
return -ENOMEM;
iter->curr = 0;
iter->total = sample->branch_stack->nr;
iter->priv = bi;
return 0;
}
static int
iter_add_single_branch_entry(struct hist_entry_iter *iter __maybe_unused,
struct addr_location *al __maybe_unused)
{
return 0;
}
static int
iter_next_branch_entry(struct hist_entry_iter *iter, struct addr_location *al)
{
struct branch_info *bi = iter->priv;
int i = iter->curr;
if (bi == NULL)
return 0;
if (iter->curr >= iter->total)
return 0;
al->map = bi[i].to.map;
al->sym = bi[i].to.sym;
al->addr = bi[i].to.addr;
return 1;
}
static int
iter_add_next_branch_entry(struct hist_entry_iter *iter, struct addr_location *al)
{
struct branch_info *bi;
struct perf_evsel *evsel = iter->evsel;
struct hists *hists = evsel__hists(evsel);
struct perf_sample *sample = iter->sample;
struct hist_entry *he = NULL;
int i = iter->curr;
int err = 0;
bi = iter->priv;
if (iter->hide_unresolved && !(bi[i].from.sym && bi[i].to.sym))
goto out;
/*
* The report shows the percentage of total branches captured
* and not events sampled. Thus we use a pseudo period of 1.
*/
sample->period = 1;
sample->weight = bi->flags.cycles ? bi->flags.cycles : 1;
he = hists__add_entry(hists, al, iter->parent, &bi[i], NULL,
sample, true);
if (he == NULL)
return -ENOMEM;
hists__inc_nr_samples(hists, he->filtered);
out:
iter->he = he;
iter->curr++;
return err;
}
static int
iter_finish_branch_entry(struct hist_entry_iter *iter,
struct addr_location *al __maybe_unused)
{
zfree(&iter->priv);
iter->he = NULL;
return iter->curr >= iter->total ? 0 : -1;
}
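/* Normal sample iterator: one hist entry per sample plus its callchain. */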
static int
iter_prepare_normal_entry(struct hist_entry_iter *iter __maybe_unused,
struct addr_location *al __maybe_unused)
{
return 0;
}
static int
iter_add_single_normal_entry(struct hist_entry_iter *iter, struct addr_location *al)
{
struct perf_evsel *evsel = iter->evsel;
struct perf_sample *sample = iter->sample;
struct hist_entry *he;
he = hists__add_entry(evsel__hists(evsel), al, iter->parent, NULL, NULL,
sample, true);
if (he == NULL)
return -ENOMEM;
iter->he = he;
return 0;
}
static int
iter_finish_normal_entry(struct hist_entry_iter *iter,
struct addr_location *al __maybe_unused)
{
struct hist_entry *he = iter->he;
struct perf_evsel *evsel = iter->evsel;
struct perf_sample *sample = iter->sample;
if (he == NULL)
return 0;
iter->he = NULL;
hists__inc_nr_samples(evsel__hists(evsel), he->filtered);
return hist_entry__append_callchain(he, sample);
}
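/*
 * Cumulative ("children") iterator: add an entry for the sample itself and
 * one per callchain node so callers accumulate the periods of their callees.
 * A cache of already added entries guards against counting cycles or
 * recursion more than once.
 */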
static int
iter_prepare_cumulative_entry(struct hist_entry_iter *iter,
struct addr_location *al __maybe_unused)
{
struct hist_entry **he_cache;
callchain_cursor_commit(&callchain_cursor);
/*
* This is for detecting cycles or recursion so that they're
* cumulated only once, to prevent entries from exceeding 100%
* overhead.
*/
he_cache = malloc(sizeof(*he_cache) * (callchain_cursor.nr + 1));
if (he_cache == NULL)
return -ENOMEM;
iter->priv = he_cache;
iter->curr = 0;
return 0;
}
static int
iter_add_single_cumulative_entry(struct hist_entry_iter *iter,
struct addr_location *al)
{
struct perf_evsel *evsel = iter->evsel;
struct hists *hists = evsel__hists(evsel);
struct perf_sample *sample = iter->sample;
struct hist_entry **he_cache = iter->priv;
struct hist_entry *he;
int err = 0;
he = hists__add_entry(hists, al, iter->parent, NULL, NULL,
sample, true);
if (he == NULL)
return -ENOMEM;
iter->he = he;
he_cache[iter->curr++] = he;
hist_entry__append_callchain(he, sample);
/*
* We need to re-initialize the cursor since callchain_append()
* advanced the cursor to the end.
*/
callchain_cursor_commit(&callchain_cursor);
hists__inc_nr_samples(hists, he->filtered);
return err;
}
static int
iter_next_cumulative_entry(struct hist_entry_iter *iter,
struct addr_location *al)
{
struct callchain_cursor_node *node;
node = callchain_cursor_current(&callchain_cursor);
if (node == NULL)
return 0;
return fill_callchain_info(al, node, iter->hide_unresolved);
}
static int
iter_add_next_cumulative_entry(struct hist_entry_iter *iter,
struct addr_location *al)
{
struct perf_evsel *evsel = iter->evsel;
struct perf_sample *sample = iter->sample;
struct hist_entry **he_cache = iter->priv;
struct hist_entry *he;
struct hist_entry he_tmp = {
.hists = evsel__hists(evsel),
.cpu = al->cpu,
.thread = al->thread,
.comm = thread__comm(al->thread),
.ip = al->addr,
.ms = {
.map = al->map,
.sym = al->sym,
},
.srcline = (char *) al->srcline,
.parent = iter->parent,
.raw_data = sample->raw_data,
.raw_size = sample->raw_size,
};
int i;
struct callchain_cursor cursor;
callchain_cursor_snapshot(&cursor, &callchain_cursor);
callchain_cursor_advance(&callchain_cursor);
/*
* Check if there are duplicate entries in the callchain.
* It's possible that it has cycles or recursive calls.
*/
for (i = 0; i < iter->curr; i++) {
if (hist_entry__cmp(he_cache[i], &he_tmp) == 0) {
/* to avoid calling callback function */
iter->he = NULL;
return 0;
}
}
he = hists__add_entry(evsel__hists(evsel), al, iter->parent, NULL, NULL,
sample, false);
if (he == NULL)
return -ENOMEM;
iter->he = he;
he_cache[iter->curr++] = he;
if (hist_entry__has_callchains(he) && symbol_conf.use_callchain)
callchain_append(he->callchain, &cursor, sample->period);
return 0;
}
static int
iter_finish_cumulative_entry(struct hist_entry_iter *iter,
struct addr_location *al __maybe_unused)
{
zfree(&iter->priv);
iter->he = NULL;
return 0;
}
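/* Callback tables selecting the iteration mode for hist_entry_iter__add(). */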
const struct hist_iter_ops hist_iter_mem = {
.prepare_entry = iter_prepare_mem_entry,
.add_single_entry = iter_add_single_mem_entry,
.next_entry = iter_next_nop_entry,
.add_next_entry = iter_add_next_nop_entry,
.finish_entry = iter_finish_mem_entry,
};
const struct hist_iter_ops hist_iter_branch = {
.prepare_entry = iter_prepare_branch_entry,
.add_single_entry = iter_add_single_branch_entry,
.next_entry = iter_next_branch_entry,
.add_next_entry = iter_add_next_branch_entry,
.finish_entry = iter_finish_branch_entry,
};
const struct hist_iter_ops hist_iter_normal = {
.prepare_entry = iter_prepare_normal_entry,
.add_single_entry = iter_add_single_normal_entry,
.next_entry = iter_next_nop_entry,
.add_next_entry = iter_add_next_nop_entry,
.finish_entry = iter_finish_normal_entry,
};
const struct hist_iter_ops hist_iter_cumulative = {
.prepare_entry = iter_prepare_cumulative_entry,
.add_single_entry = iter_add_single_cumulative_entry,
.next_entry = iter_next_cumulative_entry,
.add_next_entry = iter_add_next_cumulative_entry,
.finish_entry = iter_finish_cumulative_entry,
};
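/*
 * Drive one sample through the chosen iterator ops: resolve the callchain,
 * prepare, add the single entry, then keep adding follow-up entries while
 * next_entry() returns non-zero, invoking add_entry_cb (when set) for each
 * entry that was added.
 */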
int hist_entry_iter__add(struct hist_entry_iter *iter, struct addr_location *al,
int max_stack_depth, void *arg)
{
int err, err2;
struct map *alm = NULL;
if (al)
alm = map__get(al->map);
err = sample__resolve_callchain(iter->sample, &callchain_cursor, &iter->parent,
iter->evsel, al, max_stack_depth);
if (err)
return err;
err = iter->ops->prepare_entry(iter, al);
if (err)
goto out;
err = iter->ops->add_single_entry(iter, al);
if (err)
goto out;
if (iter->he && iter->add_entry_cb) {
err = iter->add_entry_cb(iter, al, true, arg);
if (err)
goto out;
}
while (iter->ops->next_entry(iter, al)) {
err = iter->ops->add_next_entry(iter, al);
if (err)
break;
if (iter->he && iter->add_entry_cb) {
err = iter->add_entry_cb(iter, al, false, arg);
if (err)
goto out;
}
}
out:
err2 = iter->ops->finish_entry(iter, al);
if (!err)
err = err2;
map__put(alm);
return err;
}
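/*
 * Compare two entries using the configured sort key list: hist_entry__cmp()
 * uses each format's ->cmp() callback, hist_entry__collapse() its
 * ->collapse() callback; dynamic entries not defined for this hists are
 * skipped.
 */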
int64_t
hist_entry__cmp(struct hist_entry *left, struct hist_entry *right)
{
struct hists *hists = left->hists;
struct perf_hpp_fmt *fmt;
int64_t cmp = 0;
hists__for_each_sort_list(hists, fmt) {
if (perf_hpp__is_dynamic_entry(fmt) &&
!perf_hpp__defined_dynamic_entry(fmt, hists))
continue;
cmp = fmt->cmp(fmt, left, right);
if (cmp)
break;
}
return cmp;
}
int64_t
hist_entry__collapse(struct hist_entry *left, struct hist_entry *right)
{
struct hists *hists = left->hists;
struct perf_hpp_fmt *fmt;
int64_t cmp = 0;
hists__for_each_sort_list(hists, fmt) {
if (perf_hpp__is_dynamic_entry(fmt) &&
!perf_hpp__defined_dynamic_entry(fmt, hists))
continue;
cmp = fmt->collapse(fmt, left, right);
if (cmp)
break;
}
return cmp;
}
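/*
 * Drop all references held by a hist_entry (thread, maps, branch/mem info,
 * srclines, callchain, ...) and hand it back to its hist_entry_ops.
 */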
void hist_entry__delete(struct hist_entry *he)
{
struct hist_entry_ops *ops = he->ops;
thread__zput(he->thread);
map__zput(he->ms.map);
if (he->branch_info) {
map__zput(he->branch_info->from.map);
map__zput(he->branch_info->to.map);
free_srcline(he->branch_info->srcline_from);
free_srcline(he->branch_info->srcline_to);
zfree(&he->branch_info);
}
if (he->mem_info) {
map__zput(he->mem_info->iaddr.map);
map__zput(he->mem_info->daddr.map);
mem_info__zput(he->mem_info);
}
zfree(&he->stat_acc);
free_srcline(he->srcline);
if (he->srcfile && he->srcfile[0])
free(he->srcfile);
free_callchain(he->callchain);
free(he->trace_output);
free(he->raw_data);
ops->free(he);
}
/*
* If this is not the last column, then we need to pad it according to the
* pre-calculated max length for this column, otherwise don't bother adding
* spaces because that would break viewing this with, for instance, 'less',
* which would show tons of trailing spaces when a long C++ demangled method
* name is sampled.
*/
int hist_entry__snprintf_alignment(struct hist_entry *he, struct perf_hpp *hpp,
struct perf_hpp_fmt *fmt, int printed)
{
if (!list_is_last(&fmt->list, &he->hists->hpp_list->fields)) {
const int width = fmt->width(fmt, hpp, he->hists);
if (printed < width) {
advance_hpp(hpp, printed);
printed = scnprintf(hpp->buf, hpp->size, "%-*s", width - printed, " ");
}
}
return printed;
}
/*
* collapse the histogram
*/
static void hists__apply_filters(struct hists *hists, struct hist_entry *he);
static void hists__remove_entry_filter(struct hists *hists, struct hist_entry *he,
enum hist_filter type);
typedef bool (*fmt_chk_fn)(struct perf_hpp_fmt *fmt);
static bool check_thread_entry(struct perf_hpp_fmt *fmt)
{
return perf_hpp__is_thread_entry(fmt) || perf_hpp__is_comm_entry(fmt);
}
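/*
 * Hierarchy mode: a filter of a given type only applies at the level
 * whose sort keys contain the matching column. Such a level clears the
 * marker on its parents when it is not filtered; other levels simply
 * inherit their parent's filter bit.
 */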
static void hist_entry__check_and_remove_filter(struct hist_entry *he,
enum hist_filter type,
fmt_chk_fn check)
{
struct perf_hpp_fmt *fmt;
bool type_match = false;
struct hist_entry *parent = he->parent_he;
switch (type) {
case HIST_FILTER__THREAD:
if (symbol_conf.comm_list == NULL &&
symbol_conf.pid_list == NULL &&
symbol_conf.tid_list == NULL)
return;
break;
case HIST_FILTER__DSO:
if (symbol_conf.dso_list == NULL)
return;
break;
case HIST_FILTER__SYMBOL:
if (symbol_conf.sym_list == NULL)
return;
break;
case HIST_FILTER__PARENT:
case HIST_FILTER__GUEST:
case HIST_FILTER__HOST:
case HIST_FILTER__SOCKET:
case HIST_FILTER__C2C:
default:
return;
}
/* if it's filtered by its own fmt, it has to have filter bits */
perf_hpp_list__for_each_format(he->hpp_list, fmt) {
if (check(fmt)) {
type_match = true;
break;
}
}
if (type_match) {
/*
* If the filter is for the current level entry, propagate
* the filter marker to parents. The marker bit was
* already set by default so it only needs to be cleared
* for non-filtered entries.
*/
if (!(he->filtered & (1 << type))) {
while (parent) {
parent->filtered &= ~(1 << type);
parent = parent->parent_he;
}
}
} else {
/*
* If the current entry doesn't have matching formats, set
* the filter marker for upper level entries. It will be
* cleared if any of its lower level entries is not filtered.
*
* For lower-level entries, it inherits the parent's
* filter bit so that lower level entries of a
* non-filtered entry won't set the filter marker.
*/
if (parent == NULL)
he->filtered |= (1 << type);
else
he->filtered |= (parent->filtered & (1 << type));
}
}
static void hist_entry__apply_hierarchy_filters(struct hist_entry *he)
{
hist_entry__check_and_remove_filter(he, HIST_FILTER__THREAD,
check_thread_entry);
hist_entry__check_and_remove_filter(he, HIST_FILTER__DSO,
perf_hpp__is_dso_entry);
hist_entry__check_and_remove_filter(he, HIST_FILTER__SYMBOL,
perf_hpp__is_sym_entry);
hists__apply_filters(he->hists, he);
}
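/*
 * Insert (or merge into) an entry at a single hierarchy level, comparing
 * with the sort keys of that level only. Ownership of trace_output,
 * srcline and srcfile stays with whichever copy's format list actually
 * uses the field.
 */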
static struct hist_entry *hierarchy_insert_entry(struct hists *hists,
struct rb_root_cached *root,
struct hist_entry *he,
struct hist_entry *parent_he,
struct perf_hpp_list *hpp_list)
{
struct rb_node **p = &root->rb_root.rb_node;
struct rb_node *parent = NULL;
struct hist_entry *iter, *new;
struct perf_hpp_fmt *fmt;
int64_t cmp;
bool leftmost = true;
while (*p != NULL) {
parent = *p;
iter = rb_entry(parent, struct hist_entry, rb_node_in);
cmp = 0;
perf_hpp_list__for_each_sort_list(hpp_list, fmt) {
cmp = fmt->collapse(fmt, iter, he);
if (cmp)
break;
}
if (!cmp) {
he_stat__add_stat(&iter->stat, &he->stat);
return iter;
}
if (cmp < 0)
p = &parent->rb_left;
else {
p = &parent->rb_right;
leftmost = false;
}
}
new = hist_entry__new(he, true);
if (new == NULL)
return NULL;
hists->nr_entries++;
/* save related format list for output */
new->hpp_list = hpp_list;
new->parent_he = parent_he;
hist_entry__apply_hierarchy_filters(new);
/* some fields are now passed to 'new' */
perf_hpp_list__for_each_sort_list(hpp_list, fmt) {
if (perf_hpp__is_trace_entry(fmt) || perf_hpp__is_dynamic_entry(fmt))
he->trace_output = NULL;
else
new->trace_output = NULL;
if (perf_hpp__is_srcline_entry(fmt))
he->srcline = NULL;
else
new->srcline = NULL;
if (perf_hpp__is_srcfile_entry(fmt))
he->srcfile = NULL;
else
new->srcfile = NULL;
}
rb_link_node(&new->rb_node_in, parent, p);
rb_insert_color_cached(&new->rb_node_in, root, leftmost);
return new;
}
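/*
 * Insert a collapsed entry into the full hierarchy: one copy per format
 * level, linked through parent_he, with the callchain merged into the
 * leaf. The original entry is freed afterwards.
 */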
static int hists__hierarchy_insert_entry(struct hists *hists,
struct rb_root_cached *root,
struct hist_entry *he)
{
struct perf_hpp_list_node *node;
struct hist_entry *new_he = NULL;
struct hist_entry *parent = NULL;
int depth = 0;
int ret = 0;
list_for_each_entry(node, &hists->hpp_formats, list) {
/* skip period (overhead) and elided columns */
if (node->level == 0 || node->skip)
continue;
/* insert copy of 'he' for each fmt into the hierarchy */
new_he = hierarchy_insert_entry(hists, root, he, parent, &node->hpp);
if (new_he == NULL) {
ret = -1;
break;
}
root = &new_he->hroot_in;
new_he->depth = depth++;
parent = new_he;
}
if (new_he) {
new_he->leaf = true;
if (hist_entry__has_callchains(new_he) &&
symbol_conf.use_callchain) {
callchain_cursor_reset(&callchain_cursor);
if (callchain_merge(&callchain_cursor,
new_he->callchain,
he->callchain) < 0)
ret = -1;
}
}
/* 'he' is no longer used */
hist_entry__delete(he);
/* return 0 (or -1) since it already applied filters */
return ret;
}
static int hists__collapse_insert_entry(struct hists *hists,
struct rb_root_cached *root,
struct hist_entry *he)
{
struct rb_node **p = &root->rb_root.rb_node;
struct rb_node *parent = NULL;
struct hist_entry *iter;
int64_t cmp;
bool leftmost = true;
if (symbol_conf.report_hierarchy)
return hists__hierarchy_insert_entry(hists, root, he);
while (*p != NULL) {
parent = *p;
iter = rb_entry(parent, struct hist_entry, rb_node_in);
cmp = hist_entry__collapse(iter, he);
if (!cmp) {
int ret = 0;
he_stat__add_stat(&iter->stat, &he->stat);
if (symbol_conf.cumulate_callchain)
he_stat__add_stat(iter->stat_acc, he->stat_acc);
if (hist_entry__has_callchains(he) && symbol_conf.use_callchain) {
callchain_cursor_reset(&callchain_cursor);
if (callchain_merge(&callchain_cursor,
iter->callchain,
he->callchain) < 0)
ret = -1;
}
hist_entry__delete(he);
return ret;
}
if (cmp < 0)
p = &(*p)->rb_left;
else {
p = &(*p)->rb_right;
leftmost = false;
}
}
hists->nr_entries++;
rb_link_node(&he->rb_node_in, parent, p);
rb_insert_color_cached(&he->rb_node_in, root, leftmost);
return 1;
}
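/*
 * Swap the double-buffered input trees: new entries keep going into one
 * element of entries_in_array while the previously filled one is handed
 * back for collapsing, all under hists->lock.
 */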
struct rb_root_cached *hists__get_rotate_entries_in(struct hists *hists)
{
struct rb_root_cached *root;
pthread_mutex_lock(&hists->lock);
root = hists->entries_in;
if (++hists->entries_in > &hists->entries_in_array[1])
hists->entries_in = &hists->entries_in_array[0];
pthread_mutex_unlock(&hists->lock);
return root;
}
static void hists__apply_filters(struct hists *hists, struct hist_entry *he)
{
hists__filter_entry_by_dso(hists, he);
hists__filter_entry_by_thread(hists, he);
hists__filter_entry_by_symbol(hists, he);
hists__filter_entry_by_socket(hists, he);
}
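/*
 * Collapse stage: move entries from the input tree into
 * hists->entries_collapsed, merging those that compare equal under the
 * collapse keys. Entries that were not merged into an existing node get
 * the currently active filters applied.
 */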
int hists__collapse_resort(struct hists *hists, struct ui_progress *prog)
{
struct rb_root_cached *root;
struct rb_node *next;
struct hist_entry *n;
int ret;
if (!hists__has(hists, need_collapse))
return 0;
hists->nr_entries = 0;
root = hists__get_rotate_entries_in(hists);
next = rb_first_cached(root);
while (next) {
if (session_done())
break;
n = rb_entry(next, struct hist_entry, rb_node_in);
next = rb_next(&n->rb_node_in);
rb_erase_cached(&n->rb_node_in, root);
ret = hists__collapse_insert_entry(hists, &hists->entries_collapsed, n);
if (ret < 0)
return -1;
if (ret) {
/*
* If it wasn't combined with one of the entries already
* collapsed, we need to apply the filters that may have
* been set by, say, the hist_browser.
*/
hists__apply_filters(hists, n);
}
if (prog)
ui_progress__update(prog, 1);
}
return 0;
}
static int hist_entry__sort(struct hist_entry *a, struct hist_entry *b)
{
struct hists *hists = a->hists;
struct perf_hpp_fmt *fmt;
int64_t cmp = 0;
hists__for_each_sort_list(hists, fmt) {
if (perf_hpp__should_skip(fmt, a->hists))
continue;
cmp = fmt->sort(fmt, a, b);
if (cmp)
break;
}
return cmp;
}
static void hists__reset_filter_stats(struct hists *hists)
{
hists->nr_non_filtered_entries = 0;
hists->stats.total_non_filtered_period = 0;
}
void hists__reset_stats(struct hists *hists)
{
hists->nr_entries = 0;
hists->stats.total_period = 0;
hists__reset_filter_stats(hists);
}
static void hists__inc_filter_stats(struct hists *hists, struct hist_entry *h)
{
hists->nr_non_filtered_entries++;
hists->stats.total_non_filtered_period += h->stat.period;
}
void hists__inc_stats(struct hists *hists, struct hist_entry *h)
{
if (!h->filtered)
hists__inc_filter_stats(hists, h);
hists->nr_entries++;
hists->stats.total_period += h->stat.period;
}
static void hierarchy_recalc_total_periods(struct hists *hists)
{
struct rb_node *node;
struct hist_entry *he;
node = rb_first_cached(&hists->entries);
hists->stats.total_period = 0;
hists->stats.total_non_filtered_period = 0;
/*
* recalculate total period using top-level entries only
* since lower level entries only see non-filtered entries
* but upper level entries have the sum of both filtered
* and non-filtered entries.
*/
while (node) {
he = rb_entry(node, struct hist_entry, rb_node);
node = rb_next(node);
hists->stats.total_period += he->stat.period;
if (!he->filtered)
hists->stats.total_non_filtered_period += he->stat.period;
}
}
static void hierarchy_insert_output_entry(struct rb_root_cached *root,
struct hist_entry *he)
{
struct rb_node **p = &root->rb_root.rb_node;
struct rb_node *parent = NULL;
struct hist_entry *iter;
struct perf_hpp_fmt *fmt;
bool leftmost = true;
while (*p != NULL) {
parent = *p;
iter = rb_entry(parent, struct hist_entry, rb_node);
if (hist_entry__sort(he, iter) > 0)
p = &parent->rb_left;
else {
p = &parent->rb_right;
leftmost = false;
}
}
rb_link_node(&he->rb_node, parent, p);
rb_insert_color_cached(&he->rb_node, root, leftmost);
/* update column width of dynamic entry */
perf_hpp_list__for_each_sort_list(he->hpp_list, fmt) {
if (perf_hpp__is_dynamic_entry(fmt))
fmt->sort(fmt, he, NULL);
}
}
static void hists__hierarchy_output_resort(struct hists *hists,
struct ui_progress *prog,
struct rb_root_cached *root_in,
struct rb_root_cached *root_out,
u64 min_callchain_hits,
bool use_callchain)
{
struct rb_node *node;
struct hist_entry *he;
*root_out = RB_ROOT_CACHED;
node = rb_first_cached(root_in);
while (node) {
he = rb_entry(node, struct hist_entry, rb_node_in);
node = rb_next(node);
hierarchy_insert_output_entry(root_out, he);
if (prog)
ui_progress__update(prog, 1);
hists->nr_entries++;
if (!he->filtered) {
hists->nr_non_filtered_entries++;
hists__calc_col_len(hists, he);
}
if (!he->leaf) {
hists__hierarchy_output_resort(hists, prog,
&he->hroot_in,
&he->hroot_out,
min_callchain_hits,
use_callchain);
continue;
}
if (!use_callchain)
continue;
if (callchain_param.mode == CHAIN_GRAPH_REL) {
u64 total = he->stat.period;
if (symbol_conf.cumulate_callchain)
total = he->stat_acc->period;
min_callchain_hits = total * (callchain_param.min_percent / 100);
}
callchain_param.sort(&he->sorted_chain, he->callchain,
min_callchain_hits, &callchain_param);
}
}
static void __hists__insert_output_entry(struct rb_root_cached *entries,
struct hist_entry *he,
u64 min_callchain_hits,
bool use_callchain)
{
struct rb_node **p = &entries->rb_root.rb_node;
struct rb_node *parent = NULL;
struct hist_entry *iter;
struct perf_hpp_fmt *fmt;
bool leftmost = true;
if (use_callchain) {
if (callchain_param.mode == CHAIN_GRAPH_REL) {
u64 total = he->stat.period;
if (symbol_conf.cumulate_callchain)
total = he->stat_acc->period;
min_callchain_hits = total * (callchain_param.min_percent / 100);
}
callchain_param.sort(&he->sorted_chain, he->callchain,
min_callchain_hits, &callchain_param);
}
while (*p != NULL) {
parent = *p;
iter = rb_entry(parent, struct hist_entry, rb_node);
if (hist_entry__sort(he, iter) > 0)
p = &(*p)->rb_left;
else {
p = &(*p)->rb_right;
leftmost = false;
}
}
rb_link_node(&he->rb_node, parent, p);
rb_insert_color_cached(&he->rb_node, entries, leftmost);
perf_hpp_list__for_each_sort_list(&perf_hpp_list, fmt) {
if (perf_hpp__is_dynamic_entry(fmt) &&
perf_hpp__defined_dynamic_entry(fmt, he->hists))
fmt->sort(fmt, he, NULL); /* update column width */
}
}
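/*
 * Output stage: sort the collapsed entries with the output sort keys into
 * hists->entries, recomputing stats and column widths and, when callchains
 * are used, sorting each entry's callchain against the min percent
 * threshold.
 */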
static void output_resort(struct hists *hists, struct ui_progress *prog,
bool use_callchain, hists__resort_cb_t cb,
void *cb_arg)
{
struct rb_root_cached *root;
struct rb_node *next;
struct hist_entry *n;
u64 callchain_total;
u64 min_callchain_hits;
callchain_total = hists->callchain_period;
if (symbol_conf.filter_relative)
callchain_total = hists->callchain_non_filtered_period;
min_callchain_hits = callchain_total * (callchain_param.min_percent / 100);
hists__reset_stats(hists);
hists__reset_col_len(hists);
if (symbol_conf.report_hierarchy) {
hists__hierarchy_output_resort(hists, prog,
&hists->entries_collapsed,
&hists->entries,
min_callchain_hits,
use_callchain);
hierarchy_recalc_total_periods(hists);
return;
}
if (hists__has(hists, need_collapse))
root = &hists->entries_collapsed;
else
root = hists->entries_in;
next = rb_first_cached(root);
hists->entries = RB_ROOT_CACHED;
while (next) {
n = rb_entry(next, struct hist_entry, rb_node_in);
next = rb_next(&n->rb_node_in);
if (cb && cb(n, cb_arg))
continue;
__hists__insert_output_entry(&hists->entries, n, min_callchain_hits, use_callchain);
hists__inc_stats(hists, n);
if (!n->filtered)
hists__calc_col_len(hists, n);
if (prog)
ui_progress__update(prog, 1);
}
}
void perf_evsel__output_resort_cb(struct perf_evsel *evsel, struct ui_progress *prog,
hists__resort_cb_t cb, void *cb_arg)
{
bool use_callchain;
if (evsel && symbol_conf.use_callchain && !symbol_conf.show_ref_callgraph)
use_callchain = evsel__has_callchain(evsel);
else
use_callchain = symbol_conf.use_callchain;
use_callchain |= symbol_conf.show_branchflag_count;
output_resort(evsel__hists(evsel), prog, use_callchain, cb, cb_arg);
}
void perf_evsel__output_resort(struct perf_evsel *evsel, struct ui_progress *prog)
{
return perf_evsel__output_resort_cb(evsel, prog, NULL, NULL);
}
void hists__output_resort(struct hists *hists, struct ui_progress *prog)
{
output_resort(hists, prog, symbol_conf.use_callchain, NULL, NULL);
}
void hists__output_resort_cb(struct hists *hists, struct ui_progress *prog,
hists__resort_cb_t cb)
{
output_resort(hists, prog, symbol_conf.use_callchain, cb, NULL);
}
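/*
 * Helpers for walking the hierarchy output in display order, descending
 * into an entry's children only when it is unfolded (or when forced to by
 * the move direction).
 */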
static bool can_goto_child(struct hist_entry *he, enum hierarchy_move_dir hmd)
{
if (he->leaf || hmd == HMD_FORCE_SIBLING)
return false;
if (he->unfolded || hmd == HMD_FORCE_CHILD)
return true;
return false;
}
struct rb_node *rb_hierarchy_last(struct rb_node *node)
{
struct hist_entry *he = rb_entry(node, struct hist_entry, rb_node);
while (can_goto_child(he, HMD_NORMAL)) {
node = rb_last(&he->hroot_out.rb_root);
he = rb_entry(node, struct hist_entry, rb_node);
}
return node;
}
struct rb_node *__rb_hierarchy_next(struct rb_node *node, enum hierarchy_move_dir hmd)
{
struct hist_entry *he = rb_entry(node, struct hist_entry, rb_node);
if (can_goto_child(he, hmd))
node = rb_first_cached(&he->hroot_out);
else
node = rb_next(node);
while (node == NULL) {
he = he->parent_he;
if (he == NULL)
break;
node = rb_next(&he->rb_node);
}
return node;
}
struct rb_node *rb_hierarchy_prev(struct rb_node *node)
{
struct hist_entry *he = rb_entry(node, struct hist_entry, rb_node);
node = rb_prev(node);
if (node)
return rb_hierarchy_last(node);
he = he->parent_he;
if (he == NULL)
return NULL;
return &he->rb_node;
}
bool hist_entry__has_hierarchy_children(struct hist_entry *he, float limit)
{
struct rb_node *node;
struct hist_entry *child;
float percent;
if (he->leaf)
return false;
node = rb_first_cached(&he->hroot_out);
child = rb_entry(node, struct hist_entry, rb_node);
while (node && child->filtered) {
node = rb_next(node);
child = rb_entry(node, struct hist_entry, rb_node);
}
if (node)
percent = hist_entry__get_percent_limit(child);
else
percent = 0;
return node && percent >= limit;
}
static void hists__remove_entry_filter(struct hists *hists, struct hist_entry *h,
enum hist_filter filter)
{
h->filtered &= ~(1 << filter);
if (symbol_conf.report_hierarchy) {
struct hist_entry *parent = h->parent_he;
while (parent) {
he_stat__add_stat(&parent->stat, &h->stat);
parent->filtered &= ~(1 << filter);
if (parent->filtered)
goto next;
/* force fold unfiltered entry for simplicity */
parent->unfolded = false;
parent->has_no_entry = false;
parent->row_offset = 0;
parent->nr_rows = 0;
next:
parent = parent->parent_he;
}
}
if (h->filtered)
return;
/* force fold unfiltered entry for simplicity */
h->unfolded = false;
h->has_no_entry = false;
h->row_offset = 0;
perf hists browser: Fix UI bug after zoom into thread/dso/symbol When zoom into thread/dso/symbol, the fold/unfold stat is cleared in hists__filter_by_thread/dso/symbol(), but h->nr_rows is not cleared. So if we toggle fold stat on the unfold entires, nr_entries got a wrong value. This bug can be reproduced as follows: $ perf record -g -e syscalls:sys_enter_open ls $ perf report Children Self Command Shared Object Symbol ================================================================ + 50.00% 0.00% ls ld64.so [.] _dl_get_ready_to_run - 50.00% 0.00% ls ld64.so [.] _dl_load_shared_library _dl_load_shared_library <= [Zoom into thread/dso] _dl_get_ready_to_run _start ... In the new thread hists, all entries reset to fold, if we unfold the same entry as we previously unfolded, nr_entries got wrong value, and we can't move down cursor to bottom row. Thread: ls Children Self Command Shared Object Symbol ================================================================ + 50.00% 0.00% ls ld64.so [.] _dl_get_ready_to_run - 50.00% 0.00% ls ld64.so [.] _dl_load_shared_library _dl_load_shared_library _dl_get_ready_to_run <= [cursor may stop here, can't move down] _start ... This patch clear h->nr_rows to fix this bug. Signed-off-by: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1426077363-855-2-git-send-email-hekuang@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-03-11 19:36:03 +07:00
h->nr_rows = 0;
hists->stats.nr_non_filtered_samples += h->stat.nr_events;
hists__inc_filter_stats(hists, h);
hists__calc_col_len(hists, h);
}
static bool hists__filter_entry_by_dso(struct hists *hists,
struct hist_entry *he)
{
if (hists->dso_filter != NULL &&
(he->ms.map == NULL || he->ms.map->dso != hists->dso_filter)) {
he->filtered |= (1 << HIST_FILTER__DSO);
return true;
}
return false;
}
static bool hists__filter_entry_by_thread(struct hists *hists,
struct hist_entry *he)
{
if (hists->thread_filter != NULL &&
he->thread != hists->thread_filter) {
he->filtered |= (1 << HIST_FILTER__THREAD);
return true;
}
return false;
}
static bool hists__filter_entry_by_symbol(struct hists *hists,
struct hist_entry *he)
{
if (hists->symbol_filter_str != NULL &&
(!he->ms.sym || strstr(he->ms.sym->name,
hists->symbol_filter_str) == NULL)) {
he->filtered |= (1 << HIST_FILTER__SYMBOL);
return true;
}
return false;
}
static bool hists__filter_entry_by_socket(struct hists *hists,
struct hist_entry *he)
{
if ((hists->socket_filter > -1) &&
(he->socket != hists->socket_filter)) {
he->filtered |= (1 << HIST_FILTER__SOCKET);
return true;
}
return false;
}
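/*
 * Re-apply one filter type over the flat output tree: reset the
 * non-filtered stats, then clear the filter bit and re-account every
 * entry that the predicate does not filter out.
 */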
typedef bool (*filter_fn_t)(struct hists *hists, struct hist_entry *he);
static void hists__filter_by_type(struct hists *hists, int type, filter_fn_t filter)
{
struct rb_node *nd;
hists->stats.nr_non_filtered_samples = 0;
hists__reset_filter_stats(hists);
hists__reset_col_len(hists);
for (nd = rb_first_cached(&hists->entries); nd; nd = rb_next(nd)) {
struct hist_entry *h = rb_entry(nd, struct hist_entry, rb_node);
if (filter(hists, h))
continue;
hists__remove_entry_filter(hists, h, type);
}
}
static void resort_filtered_entry(struct rb_root_cached *root,
struct hist_entry *he)
{
struct rb_node **p = &root->rb_root.rb_node;
struct rb_node *parent = NULL;
struct hist_entry *iter;
struct rb_root_cached new_root = RB_ROOT_CACHED;
struct rb_node *nd;
bool leftmost = true;
while (*p != NULL) {
parent = *p;
iter = rb_entry(parent, struct hist_entry, rb_node);
if (hist_entry__sort(he, iter) > 0)
p = &(*p)->rb_left;
else {
p = &(*p)->rb_right;
leftmost = false;
}
}
rb_link_node(&he->rb_node, parent, p);
rb_insert_color_cached(&he->rb_node, root, leftmost);
if (he->leaf || he->filtered)
return;
nd = rb_first_cached(&he->hroot_out);
while (nd) {
struct hist_entry *h = rb_entry(nd, struct hist_entry, rb_node);
nd = rb_next(nd);
rb_erase_cached(&h->rb_node, &he->hroot_out);
resort_filtered_entry(&new_root, h);
}
he->hroot_out = new_root;
}
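/*
 * Hierarchy-mode filtering: zero out levels that do not carry the
 * filtered dimension, mark entries that are filtered out, re-account the
 * ones that pass, then recompute totals and resort the output tree.
 */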
static void hists__filter_hierarchy(struct hists *hists, int type, const void *arg)
{
struct rb_node *nd;
struct rb_root_cached new_root = RB_ROOT_CACHED;
hists->stats.nr_non_filtered_samples = 0;
hists__reset_filter_stats(hists);
hists__reset_col_len(hists);
nd = rb_first_cached(&hists->entries);
while (nd) {
struct hist_entry *h = rb_entry(nd, struct hist_entry, rb_node);
int ret;
ret = hist_entry__filter(h, type, arg);
/*
* case 1. non-matching type
* zero out the period, set filter marker and move to child
*/
if (ret < 0) {
memset(&h->stat, 0, sizeof(h->stat));
h->filtered |= (1 << type);
nd = __rb_hierarchy_next(&h->rb_node, HMD_FORCE_CHILD);
}
/*
* case 2. matched type (filter out)
* set filter marker and move to next
*/
else if (ret == 1) {
h->filtered |= (1 << type);
nd = __rb_hierarchy_next(&h->rb_node, HMD_FORCE_SIBLING);
}
/*
* case 3. ok (not filtered)
* add period to hists and parents, erase the filter marker
* and move to next sibling
*/
else {
hists__remove_entry_filter(hists, h, type);
nd = __rb_hierarchy_next(&h->rb_node, HMD_FORCE_SIBLING);
}
}
hierarchy_recalc_total_periods(hists);
/*
 * resort the output after applying a new filter since a filter at a lower
 * hierarchy level can change periods at an upper level.
*/
nd = rb_first_cached(&hists->entries);
while (nd) {
struct hist_entry *h = rb_entry(nd, struct hist_entry, rb_node);
nd = rb_next(nd);
rb_erase_cached(&h->rb_node, &hists->entries);
resort_filtered_entry(&new_root, h);
}
hists->entries = new_root;
}
void hists__filter_by_thread(struct hists *hists)
{
if (symbol_conf.report_hierarchy)
hists__filter_hierarchy(hists, HIST_FILTER__THREAD,
hists->thread_filter);
else
hists__filter_by_type(hists, HIST_FILTER__THREAD,
hists__filter_entry_by_thread);
}
void hists__filter_by_dso(struct hists *hists)
{
if (symbol_conf.report_hierarchy)
hists__filter_hierarchy(hists, HIST_FILTER__DSO,
hists->dso_filter);
else
hists__filter_by_type(hists, HIST_FILTER__DSO,
hists__filter_entry_by_dso);
}
void hists__filter_by_symbol(struct hists *hists)
{
if (symbol_conf.report_hierarchy)
hists__filter_hierarchy(hists, HIST_FILTER__SYMBOL,
hists->symbol_filter_str);
else
hists__filter_by_type(hists, HIST_FILTER__SYMBOL,
hists__filter_entry_by_symbol);
}
void hists__filter_by_socket(struct hists *hists)
{
if (symbol_conf.report_hierarchy)
hists__filter_hierarchy(hists, HIST_FILTER__SOCKET,
&hists->socket_filter);
else
hists__filter_by_type(hists, HIST_FILTER__SOCKET,
hists__filter_entry_by_socket);
}
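/* nr_events[0] doubles as the total count across all PERF_RECORD_* types. */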
void events_stats__inc(struct events_stats *stats, u32 type)
{
++stats->nr_events[0];
++stats->nr_events[type];
}
void hists__inc_nr_events(struct hists *hists, u32 type)
{
events_stats__inc(&hists->stats, type);
}
void hists__inc_nr_samples(struct hists *hists, bool filtered)
{
events_stats__inc(&hists->stats, PERF_RECORD_SAMPLE);
if (!filtered)
hists->stats.nr_non_filtered_samples++;
}
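/*
 * Find an entry in @hists matching @pair, or insert a zero-stat dummy
 * mirroring it; used by hists__link() so the leader always has a
 * counterpart entry.
 */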
static struct hist_entry *hists__add_dummy_entry(struct hists *hists,
struct hist_entry *pair)
{
struct rb_root_cached *root;
struct rb_node **p;
struct rb_node *parent = NULL;
struct hist_entry *he;
int64_t cmp;
bool leftmost = true;
if (hists__has(hists, need_collapse))
root = &hists->entries_collapsed;
else
root = hists->entries_in;
p = &root->rb_root.rb_node;
while (*p != NULL) {
parent = *p;
he = rb_entry(parent, struct hist_entry, rb_node_in);
cmp = hist_entry__collapse(he, pair);
if (!cmp)
goto out;
if (cmp < 0)
p = &(*p)->rb_left;
else {
p = &(*p)->rb_right;
leftmost = false;
}
}
he = hist_entry__new(pair, true);
if (he) {
memset(&he->stat, 0, sizeof(he->stat));
he->hists = hists;
if (symbol_conf.cumulate_callchain)
memset(he->stat_acc, 0, sizeof(he->stat));
rb_link_node(&he->rb_node_in, parent, p);
rb_insert_color_cached(&he->rb_node_in, root, leftmost);
hists__inc_stats(hists, he);
he->dummy = true;
}
out:
return he;
}
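/*
 * Hierarchy-mode counterpart of hists__add_dummy_entry(): find or create
 * a zero-stat dummy under @root matching @pair on its level's sort keys.
 */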
static struct hist_entry *add_dummy_hierarchy_entry(struct hists *hists,
struct rb_root_cached *root,
struct hist_entry *pair)
{
struct rb_node **p;
struct rb_node *parent = NULL;
struct hist_entry *he;
struct perf_hpp_fmt *fmt;
bool leftmost = true;
p = &root->rb_root.rb_node;
while (*p != NULL) {
int64_t cmp = 0;
parent = *p;
he = rb_entry(parent, struct hist_entry, rb_node_in);
perf_hpp_list__for_each_sort_list(he->hpp_list, fmt) {
cmp = fmt->collapse(fmt, he, pair);
if (cmp)
break;
}
if (!cmp)
goto out;
if (cmp < 0)
p = &parent->rb_left;
else {
p = &parent->rb_right;
leftmost = false;
}
}
he = hist_entry__new(pair, true);
if (he) {
rb_link_node(&he->rb_node_in, parent, p);
rb_insert_color_cached(&he->rb_node_in, root, leftmost);
he->dummy = true;
he->hists = hists;
memset(&he->stat, 0, sizeof(he->stat));
hists__inc_stats(hists, he);
}
out:
return he;
}
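/* Find an entry in @hists whose collapse keys match @he; NULL if none. */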
static struct hist_entry *hists__find_entry(struct hists *hists,
struct hist_entry *he)
{
struct rb_node *n;
if (hists__has(hists, need_collapse))
n = hists->entries_collapsed.rb_root.rb_node;
else
n = hists->entries_in->rb_root.rb_node;
while (n) {
struct hist_entry *iter = rb_entry(n, struct hist_entry, rb_node_in);
int64_t cmp = hist_entry__collapse(iter, he);
if (cmp < 0)
n = n->rb_left;
else if (cmp > 0)
n = n->rb_right;
else
return iter;
}
return NULL;
}
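/*
 * Hierarchy-mode lookup: find the entry under @root matching @he on its
 * level's collapse keys; NULL if none.
 */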
static struct hist_entry *hists__find_hierarchy_entry(struct rb_root_cached *root,
struct hist_entry *he)
{
struct rb_node *n = root->rb_root.rb_node;
while (n) {
struct hist_entry *iter;
struct perf_hpp_fmt *fmt;
int64_t cmp = 0;
iter = rb_entry(n, struct hist_entry, rb_node_in);
perf_hpp_list__for_each_sort_list(he->hpp_list, fmt) {
cmp = fmt->collapse(fmt, iter, he);
if (cmp)
break;
}
if (cmp < 0)
n = n->rb_left;
else if (cmp > 0)
n = n->rb_right;
else
return iter;
}
return NULL;
}
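/*
 * Recursively pair entries of the leader hierarchy with their matching
 * counterparts in the other hierarchy, level by level.
 */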
static void hists__match_hierarchy(struct rb_root_cached *leader_root,
struct rb_root_cached *other_root)
{
struct rb_node *nd;
struct hist_entry *pos, *pair;
for (nd = rb_first_cached(leader_root); nd; nd = rb_next(nd)) {
pos = rb_entry(nd, struct hist_entry, rb_node_in);
pair = hists__find_hierarchy_entry(other_root, pos);
if (pair) {
hist_entry__add_pair(pair, pos);
hists__match_hierarchy(&pos->hroot_in, &pair->hroot_in);
}
}
}
/*
* Look for pairs to link to the leader buckets (hist_entries):
*/
void hists__match(struct hists *leader, struct hists *other)
{
struct rb_root_cached *root;
struct rb_node *nd;
struct hist_entry *pos, *pair;
if (symbol_conf.report_hierarchy) {
/* hierarchy report always collapses entries */
return hists__match_hierarchy(&leader->entries_collapsed,
&other->entries_collapsed);
}
if (hists__has(leader, need_collapse))
root = &leader->entries_collapsed;
else
root = leader->entries_in;
for (nd = rb_first_cached(root); nd; nd = rb_next(nd)) {
pos = rb_entry(nd, struct hist_entry, rb_node_in);
pair = hists__find_entry(other, pos);
if (pair)
hist_entry__add_pair(pair, pos);
}
}
static int hists__link_hierarchy(struct hists *leader_hists,
struct hist_entry *parent,
struct rb_root_cached *leader_root,
struct rb_root_cached *other_root)
{
struct rb_node *nd;
struct hist_entry *pos, *leader;
for (nd = rb_first_cached(other_root); nd; nd = rb_next(nd)) {
pos = rb_entry(nd, struct hist_entry, rb_node_in);
if (hist_entry__has_pairs(pos)) {
bool found = false;
list_for_each_entry(leader, &pos->pairs.head, pairs.node) {
if (leader->hists == leader_hists) {
found = true;
break;
}
}
if (!found)
return -1;
} else {
leader = add_dummy_hierarchy_entry(leader_hists,
leader_root, pos);
if (leader == NULL)
return -1;
/* the parent must come from the leader hierarchy, not from pos */
leader->parent_he = parent;
hist_entry__add_pair(pos, leader);
}
if (!pos->leaf) {
if (hists__link_hierarchy(leader_hists, leader,
&leader->hroot_in,
&pos->hroot_in) < 0)
return -1;
}
}
return 0;
}
/*
* Look for entries in the other hists that are not present in the leader, if
* we find them, just add a dummy entry on the leader hists, with period=0,
* nr_events=0, to serve as the list header.
*/
int hists__link(struct hists *leader, struct hists *other)
{
struct rb_root_cached *root;
struct rb_node *nd;
struct hist_entry *pos, *pair;
if (symbol_conf.report_hierarchy) {
/* hierarchy report always collapses entries */
return hists__link_hierarchy(leader, NULL,
&leader->entries_collapsed,
&other->entries_collapsed);
}
if (hists__has(other, need_collapse))
root = &other->entries_collapsed;
else
root = other->entries_in;
for (nd = rb_first_cached(root); nd; nd = rb_next(nd)) {
pos = rb_entry(nd, struct hist_entry, rb_node_in);
if (!hist_entry__has_pairs(pos)) {
pair = hists__add_dummy_entry(leader, pos);
if (pair == NULL)
return -1;
hist_entry__add_pair(pos, pair);
}
}
return 0;
}
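/*
 * Accumulate per-branch cycle counts (e.g. LBR cycles) onto the symbols
 * they were sampled in; the branch stack is walked in reverse because
 * perf records it newest-first.
 */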
void hist__account_cycles(struct branch_stack *bs, struct addr_location *al,
struct perf_sample *sample, bool nonany_branch_mode)
{
struct branch_info *bi;
/* If we have branch cycles always annotate them. */
if (bs && bs->nr && bs->entries[0].flags.cycles) {
int i;
bi = sample__resolve_bstack(sample, al);
if (bi) {
struct addr_map_symbol *prev = NULL;
/*
* Ignore errors, still want to process the
* other entries.
*
 * For non-standard branch modes always
* force no IPC (prev == NULL)
*
* Note that perf stores branches reversed from
* program order!
*/
for (i = bs->nr - 1; i >= 0; i--) {
addr_map_symbol__account_cycles(&bi[i].from,
nonany_branch_mode ? NULL : prev,
bi[i].flags.cycles);
prev = &bi[i].to;
}
free(bi);
}
}
}
size_t perf_evlist__fprintf_nr_events(struct perf_evlist *evlist, FILE *fp)
{
struct perf_evsel *pos;
size_t ret = 0;
evlist__for_each_entry(evlist, pos) {
ret += fprintf(fp, "%s stats:\n", perf_evsel__name(pos));
ret += events_stats__fprintf(&evsel__hists(pos)->stats, fp);
}
return ret;
}
u64 hists__total_period(struct hists *hists)
{
return symbol_conf.filter_relative ? hists->stats.total_non_filtered_period :
hists->stats.total_period;
}
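/*
 * Format the title line used by report/top: sample and event counts for
 * the evsel (or event group) plus any active uid/thread/dso/socket filter.
 */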
int __hists__scnprintf_title(struct hists *hists, char *bf, size_t size, bool show_freq)
{
char unit;
int printed;
const struct dso *dso = hists->dso_filter;
const struct thread *thread = hists->thread_filter;
int socket_id = hists->socket_filter;
unsigned long nr_samples = hists->stats.nr_events[PERF_RECORD_SAMPLE];
u64 nr_events = hists->stats.total_period;
struct perf_evsel *evsel = hists_to_evsel(hists);
const char *ev_name = perf_evsel__name(evsel);
char buf[512], sample_freq_str[64] = "";
size_t buflen = sizeof(buf);
char ref[30] = " show reference callgraph, ";
bool enable_ref = false;
if (symbol_conf.filter_relative) {
nr_samples = hists->stats.nr_non_filtered_samples;
nr_events = hists->stats.total_non_filtered_period;
}
if (perf_evsel__is_group_event(evsel)) {
struct perf_evsel *pos;
perf_evsel__group_desc(evsel, buf, buflen);
ev_name = buf;
for_each_group_member(pos, evsel) {
struct hists *pos_hists = evsel__hists(pos);
if (symbol_conf.filter_relative) {
nr_samples += pos_hists->stats.nr_non_filtered_samples;
nr_events += pos_hists->stats.total_non_filtered_period;
} else {
nr_samples += pos_hists->stats.nr_events[PERF_RECORD_SAMPLE];
nr_events += pos_hists->stats.total_period;
}
}
}
if (symbol_conf.show_ref_callgraph &&
strstr(ev_name, "call-graph=no"))
enable_ref = true;
if (show_freq)
scnprintf(sample_freq_str, sizeof(sample_freq_str), " %d Hz,", evsel->attr.sample_freq);
nr_samples = convert_unit(nr_samples, &unit);
printed = scnprintf(bf, size,
"Samples: %lu%c of event%s '%s',%s%sEvent count (approx.): %" PRIu64,
nr_samples, unit, evsel->nr_members > 1 ? "s" : "",
ev_name, sample_freq_str, enable_ref ? ref : " ", nr_events);
if (hists->uid_filter_str)
printed += snprintf(bf + printed, size - printed,
", UID: %s", hists->uid_filter_str);
if (thread) {
if (hists__has(hists, thread)) {
printed += scnprintf(bf + printed, size - printed,
", Thread: %s(%d)",
(thread->comm_set ? thread__comm_str(thread) : ""),
thread->tid);
} else {
printed += scnprintf(bf + printed, size - printed,
", Thread: %s",
(thread->comm_set ? thread__comm_str(thread) : ""));
}
}
if (dso)
printed += scnprintf(bf + printed, size - printed,
", DSO: %s", dso->short_name);
if (socket_id > -1)
printed += scnprintf(bf + printed, size - printed,
", Processor Socket: %d", socket_id);
return printed;
}
int parse_filter_percentage(const struct option *opt __maybe_unused,
const char *arg, int unset __maybe_unused)
{
if (!strcmp(arg, "relative"))
symbol_conf.filter_relative = true;
else if (!strcmp(arg, "absolute"))
symbol_conf.filter_relative = false;
else {
pr_debug("Invalid percentage: %s\n", arg);
return -1;
}
return 0;
}
int perf_hist_config(const char *var, const char *value)
{
if (!strcmp(var, "hist.percentage"))
return parse_filter_percentage(NULL, value, 0);
return 0;
}
int __hists__init(struct hists *hists, struct perf_hpp_list *hpp_list)
{
memset(hists, 0, sizeof(*hists));
hists->entries_in_array[0] = hists->entries_in_array[1] = RB_ROOT_CACHED;
hists->entries_in = &hists->entries_in_array[0];
hists->entries_collapsed = RB_ROOT_CACHED;
hists->entries = RB_ROOT_CACHED;
pthread_mutex_init(&hists->lock, NULL);
hists->socket_filter = -1;
hists->hpp_list = hpp_list;
INIT_LIST_HEAD(&hists->hpp_formats);
return 0;
}
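/* Free any hist_entries still linked in @root (input/collapsed trees). */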
static void hists__delete_remaining_entries(struct rb_root_cached *root)
{
struct rb_node *node;
struct hist_entry *he;
while (!RB_EMPTY_ROOT(&root->rb_root)) {
node = rb_first_cached(root);
rb_erase_cached(node, root);
he = rb_entry(node, struct hist_entry, rb_node_in);
hist_entry__delete(he);
}
}
static void hists__delete_all_entries(struct hists *hists)
{
hists__delete_entries(hists);
hists__delete_remaining_entries(&hists->entries_in_array[0]);
hists__delete_remaining_entries(&hists->entries_in_array[1]);
hists__delete_remaining_entries(&hists->entries_collapsed);
}
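/*
 * Per-evsel teardown: free all hist entries and the dynamically added
 * hpp format lists.
 */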
static void hists_evsel__exit(struct perf_evsel *evsel)
{
struct hists *hists = evsel__hists(evsel);
struct perf_hpp_fmt *fmt, *pos;
struct perf_hpp_list_node *node, *tmp;
hists__delete_all_entries(hists);
list_for_each_entry_safe(node, tmp, &hists->hpp_formats, list) {
perf_hpp_list__for_each_format_safe(&node->hpp, fmt, pos) {
list_del(&fmt->list);
free(fmt);
}
list_del(&node->list);
free(node);
}
}
static int hists_evsel__init(struct perf_evsel *evsel)
{
struct hists *hists = evsel__hists(evsel);
__hists__init(hists, &perf_hpp_list);
return 0;
}
/*
 * Register the per-evsel hists init/exit handlers; hists_evsel__exit()
 * above frees the hist_entries stored in the rbtrees.
*/
int hists__init(void)
{
int err = perf_evsel__object_config(sizeof(struct hists_evsel),
hists_evsel__init,
hists_evsel__exit);
if (err)
fputs("FATAL ERROR: Couldn't setup hists class\n", stderr);
return err;
}
void perf_hpp_list__init(struct perf_hpp_list *list)
{
INIT_LIST_HEAD(&list->fields);
INIT_LIST_HEAD(&list->sorts);
}