alistair23-linux/tools/perf/ui/browsers/hists.c

// SPDX-License-Identifier: GPL-2.0
#include <dirent.h>
#include <errno.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <linux/rbtree.h>
#include <linux/string.h>
#include <sys/ttydefaults.h>
#include <linux/time64.h>
#include <linux/zalloc.h>
#include "../../util/debug.h"
#include "../../util/dso.h"
#include "../../util/callchain.h"
#include "../../util/evsel.h"
#include "../../util/evlist.h"
#include "../../util/hist.h"
#include "../../util/map.h"
#include "../../util/symbol.h"
#include "../../util/pstack.h"
#include "../../util/sort.h"
#include "../../util/top.h"
#include "../../util/thread.h"
#include "../../arch/common.h"
#include "../../perf.h"
#include "../browsers/hists.h"
#include "../helpline.h"
#include "../util.h"
#include "../ui.h"
#include "map.h"
#include "annotate.h"
#include "srcline.h"
#include "string2.h"
#include "units.h"
#include "time-utils.h"
#include <linux/ctype.h>
extern void hist_browser__init_hpp(void);
static int hists_browser__scnprintf_title(struct hist_browser *browser, char *bf, size_t size);
static void hist_browser__update_nr_entries(struct hist_browser *hb);
static struct rb_node *hists__filter_entries(struct rb_node *nd,
float min_pcnt);
static bool hist_browser__has_filter(struct hist_browser *hb)
{
return hists__has_filter(hb->hists) || hb->min_pcnt || symbol_conf.has_filter || hb->c2c_filter;
}
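/*
* Sum the callchain rows of all currently unfolded leaf entries that
* survive the min_pcnt filter, so nr_callchain_rows can be rebuilt
* after the fold state is recomputed (e.g. after zooming out and back
* in).
*/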
static int hist_browser__get_folding(struct hist_browser *browser)
{
struct rb_node *nd;
struct hists *hists = browser->hists;
int unfolded_rows = 0;
for (nd = rb_first_cached(&hists->entries);
(nd = hists__filter_entries(nd, browser->min_pcnt)) != NULL;
nd = rb_hierarchy_next(nd)) {
struct hist_entry *he =
rb_entry(nd, struct hist_entry, rb_node);
if (he->leaf && he->unfolded)
unfolded_rows += he->nr_rows;
}
return unfolded_rows;
}
static void hist_browser__set_title_space(struct hist_browser *hb)
{
struct ui_browser *browser = &hb->b;
struct hists *hists = hb->hists;
struct perf_hpp_list *hpp_list = hists->hpp_list;
browser->extra_title_lines = hb->show_headers ? hpp_list->nr_header_lines : 0;
}
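/*
* Total number of rows the browser has to navigate: hierarchy entries,
* filtered entries or all hist entries depending on the current mode,
* plus the callchain rows of every unfolded entry.
*/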
static u32 hist_browser__nr_entries(struct hist_browser *hb)
{
u32 nr_entries;
if (symbol_conf.report_hierarchy)
nr_entries = hb->nr_hierarchy_entries;
else if (hist_browser__has_filter(hb))
nr_entries = hb->nr_non_filtered_entries;
else
nr_entries = hb->hists->nr_entries;
hb->nr_callchain_rows = hist_browser__get_folding(hb);
return nr_entries + hb->nr_callchain_rows;
}
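/*
* Recompute the usable row count when the column headers are toggled
* and, if the cursor would now fall below the smaller visible area,
* pull it back up.
*/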
static void hist_browser__update_rows(struct hist_browser *hb)
{
struct ui_browser *browser = &hb->b;
struct hists *hists = hb->hists;
struct perf_hpp_list *hpp_list = hists->hpp_list;
u16 index_row;
if (!hb->show_headers) {
browser->rows += browser->extra_title_lines;
browser->extra_title_lines = 0;
return;
}
browser->extra_title_lines = hpp_list->nr_header_lines;
browser->rows -= browser->extra_title_lines;
/*
* Verify if we were at the last line and that line isn't
* visible because we now show the header line(s).
*/
index_row = browser->index - browser->top_idx;
if (index_row >= browser->rows)
browser->index -= index_row - browser->rows + 1;
}
static void hist_browser__refresh_dimensions(struct ui_browser *browser)
{
struct hist_browser *hb = container_of(browser, struct hist_browser, b);
/* 3 == +/- toggle symbol before actual hist_entry rendering */
browser->width = 3 + (hists__sort_list_width(hb->hists) + sizeof("[k]"));
/*
* FIXME: Just keeping existing behaviour, but this really should be
* before updating browser->width, as it will invalidate the
* calculation above. Fix this and the fallout in another
* changeset.
*/
ui_browser__refresh_dimensions(browser);
}
static void hist_browser__reset(struct hist_browser *browser)
{
/*
* The hists__remove_entry_filter() already folds non-filtered
* entries so we can assume it has 0 callchain rows.
*/
browser->nr_callchain_rows = 0;
hist_browser__update_nr_entries(browser);
browser->b.nr_entries = hist_browser__nr_entries(browser);
hist_browser__refresh_dimensions(&browser->b);
ui_browser__reset_index(&browser->b);
}
static char tree__folded_sign(bool unfolded)
{
return unfolded ? '-' : '+';
}
static char hist_entry__folded(const struct hist_entry *he)
{
return he->has_children ? tree__folded_sign(he->unfolded) : ' ';
}
static char callchain_list__folded(const struct callchain_list *cl)
{
return cl->has_children ? tree__folded_sign(cl->unfolded) : ' ';
}
static void callchain_list__set_folding(struct callchain_list *cl, bool unfold)
{
cl->unfolded = unfold ? cl->has_children : false;
}
static int callchain_node__count_rows_rb_tree(struct callchain_node *node)
{
int n = 0;
struct rb_node *nd;
for (nd = rb_first(&node->rb_root); nd; nd = rb_next(nd)) {
struct callchain_node *child = rb_entry(nd, struct callchain_node, rb_node);
struct callchain_list *chain;
char folded_sign = ' '; /* No children */
list_for_each_entry(chain, &child->val, list) {
++n;
/* We need this because we may not have children */
folded_sign = callchain_list__folded(chain);
if (folded_sign == '+')
break;
}
if (folded_sign == '-') /* Have children and they're unfolded */
n += callchain_node__count_rows_rb_tree(child);
}
return n;
}
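/*
* In flat mode a node shows its inherited parent_val entries followed
* by its own val entries; a node folded at the top ('+') contributes a
* single row.
*/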
static int callchain_node__count_flat_rows(struct callchain_node *node)
{
struct callchain_list *chain;
char folded_sign = 0;
int n = 0;
list_for_each_entry(chain, &node->parent_val, list) {
if (!folded_sign) {
/* only check first chain list entry */
folded_sign = callchain_list__folded(chain);
if (folded_sign == '+')
return 1;
}
n++;
}
list_for_each_entry(chain, &node->val, list) {
if (!folded_sign) {
/* node->parent_val list might be empty */
folded_sign = callchain_list__folded(chain);
if (folded_sign == '+')
return 1;
}
n++;
}
return n;
}
static int callchain_node__count_folded_rows(struct callchain_node *node __maybe_unused)
{
return 1;
}
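/*
* Rows contributed by one callchain node, dispatching on the configured
* callchain mode: flat and folded modes have their own counting rules,
* while the default graph mode counts the val list and recurses into
* children when the last entry is unfolded.
*/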
static int callchain_node__count_rows(struct callchain_node *node)
{
struct callchain_list *chain;
bool unfolded = false;
int n = 0;
if (callchain_param.mode == CHAIN_FLAT)
return callchain_node__count_flat_rows(node);
else if (callchain_param.mode == CHAIN_FOLDED)
return callchain_node__count_folded_rows(node);
list_for_each_entry(chain, &node->val, list) {
++n;
unfolded = chain->unfolded;
}
if (unfolded)
n += callchain_node__count_rows_rb_tree(node);
return n;
}
static int callchain__count_rows(struct rb_root *chain)
{
struct rb_node *nd;
int n = 0;
for (nd = rb_first(chain); nd; nd = rb_next(nd)) {
struct callchain_node *node = rb_entry(nd, struct callchain_node, rb_node);
n += callchain_node__count_rows(node);
}
return n;
}
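/*
* Rows occupied by an unfolded hierarchy entry: callchain rows for leaf
* entries, otherwise its visible (non-filtered, above min_pcnt)
* children, optionally recursing into the ones that are unfolded
* themselves.
*/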
static int hierarchy_count_rows(struct hist_browser *hb, struct hist_entry *he,
bool include_children)
{
int count = 0;
struct rb_node *node;
struct hist_entry *child;
if (he->leaf)
return callchain__count_rows(&he->sorted_chain);
if (he->has_no_entry)
return 1;
node = rb_first_cached(&he->hroot_out);
while (node) {
float percent;
child = rb_entry(node, struct hist_entry, rb_node);
percent = hist_entry__get_percent_limit(child);
if (!child->filtered && percent >= hb->min_pcnt) {
count++;
if (include_children && child->unfolded)
count += hierarchy_count_rows(hb, child, true);
}
node = rb_next(node);
}
return count;
}
static bool hist_entry__toggle_fold(struct hist_entry *he)
{
if (!he)
return false;
if (!he->has_children)
return false;
he->unfolded = !he->unfolded;
return true;
}
static bool callchain_list__toggle_fold(struct callchain_list *cl)
{
if (!cl)
return false;
if (!cl->has_children)
return false;
cl->unfolded = !cl->unfolded;
return true;
}
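/*
* Pre-compute the has_children flag of every callchain_list entry; it
* drives the '+'/'-' fold sign shown in front of each callchain line.
*/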
static void callchain_node__init_have_children_rb_tree(struct callchain_node *node)
{
struct rb_node *nd = rb_first(&node->rb_root);
for (nd = rb_first(&node->rb_root); nd; nd = rb_next(nd)) {
struct callchain_node *child = rb_entry(nd, struct callchain_node, rb_node);
struct callchain_list *chain;
bool first = true;
list_for_each_entry(chain, &child->val, list) {
if (first) {
first = false;
chain->has_children = chain->list.next != &child->val ||
!RB_EMPTY_ROOT(&child->rb_root);
} else
chain->has_children = chain->list.next == &child->val &&
!RB_EMPTY_ROOT(&child->rb_root);
}
callchain_node__init_have_children_rb_tree(child);
}
}
static void callchain_node__init_have_children(struct callchain_node *node,
bool has_sibling)
{
struct callchain_list *chain;
chain = list_entry(node->val.next, struct callchain_list, list);
chain->has_children = has_sibling;
if (!list_empty(&node->val)) {
chain = list_entry(node->val.prev, struct callchain_list, list);
chain->has_children = !RB_EMPTY_ROOT(&node->rb_root);
}
callchain_node__init_have_children_rb_tree(node);
}
static void callchain__init_have_children(struct rb_root *root)
{
struct rb_node *nd = rb_first(root);
bool has_sibling = nd && rb_next(nd);
for (nd = rb_first(root); nd; nd = rb_next(nd)) {
struct callchain_node *node = rb_entry(nd, struct callchain_node, rb_node);
callchain_node__init_have_children(node, has_sibling);
if (callchain_param.mode == CHAIN_FLAT ||
callchain_param.mode == CHAIN_FOLDED)
callchain_node__make_parent_list(node);
}
}
static void hist_entry__init_have_children(struct hist_entry *he)
{
if (he->init_have_children)
return;
if (he->leaf) {
he->has_children = !RB_EMPTY_ROOT(&he->sorted_chain);
callchain__init_have_children(&he->sorted_chain);
} else {
he->has_children = !RB_EMPTY_ROOT(&he->hroot_out.rb_root);
}
he->init_have_children = true;
}
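/*
* Toggle the fold state of the current selection (a hist entry or a
* callchain line) and fix up the row accounting: callchain rows for
* leaf entries, hierarchy rows otherwise, plus the grand children rows
* when the hierarchy view is active.
*/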
static bool hist_browser__toggle_fold(struct hist_browser *browser)
{
struct hist_entry *he = browser->he_selection;
struct map_symbol *ms = browser->selection;
struct callchain_list *cl = container_of(ms, struct callchain_list, ms);
bool has_children;
if (!he || !ms)
return false;
if (ms == &he->ms)
has_children = hist_entry__toggle_fold(he);
else
has_children = callchain_list__toggle_fold(cl);
if (has_children) {
int child_rows = 0;
hist_entry__init_have_children(he);
browser->b.nr_entries -= he->nr_rows;
if (he->leaf)
browser->nr_callchain_rows -= he->nr_rows;
else
browser->nr_hierarchy_entries -= he->nr_rows;
if (symbol_conf.report_hierarchy)
child_rows = hierarchy_count_rows(browser, he, true);
if (he->unfolded) {
if (he->leaf)
he->nr_rows = callchain__count_rows(
&he->sorted_chain);
else
he->nr_rows = hierarchy_count_rows(browser, he, false);
/* account grand children */
if (symbol_conf.report_hierarchy)
browser->b.nr_entries += child_rows - he->nr_rows;
if (!he->leaf && he->nr_rows == 0) {
he->has_no_entry = true;
he->nr_rows = 1;
}
} else {
if (symbol_conf.report_hierarchy)
browser->b.nr_entries -= child_rows - he->nr_rows;
if (he->has_no_entry)
he->has_no_entry = false;
he->nr_rows = 0;
}
browser->b.nr_entries += he->nr_rows;
if (he->leaf)
browser->nr_callchain_rows += he->nr_rows;
else
browser->nr_hierarchy_entries += he->nr_rows;
return true;
}
/* If it doesn't have children, no toggling performed */
return false;
}
static int callchain_node__set_folding_rb_tree(struct callchain_node *node, bool unfold)
{
int n = 0;
struct rb_node *nd;
for (nd = rb_first(&node->rb_root); nd; nd = rb_next(nd)) {
struct callchain_node *child = rb_entry(nd, struct callchain_node, rb_node);
struct callchain_list *chain;
bool has_children = false;
list_for_each_entry(chain, &child->val, list) {
++n;
callchain_list__set_folding(chain, unfold);
has_children = chain->has_children;
}
if (has_children)
n += callchain_node__set_folding_rb_tree(child, unfold);
}
return n;
}
static int callchain_node__set_folding(struct callchain_node *node, bool unfold)
{
struct callchain_list *chain;
bool has_children = false;
int n = 0;
list_for_each_entry(chain, &node->val, list) {
++n;
callchain_list__set_folding(chain, unfold);
has_children = chain->has_children;
}
if (has_children)
n += callchain_node__set_folding_rb_tree(node, unfold);
return n;
}
static int callchain__set_folding(struct rb_root *chain, bool unfold)
{
struct rb_node *nd;
int n = 0;
for (nd = rb_first(chain); nd; nd = rb_next(nd)) {
struct callchain_node *node = rb_entry(nd, struct callchain_node, rb_node);
n += callchain_node__set_folding(node, unfold);
}
return n;
}
static int hierarchy_set_folding(struct hist_browser *hb, struct hist_entry *he,
bool unfold __maybe_unused)
{
float percent;
struct rb_node *nd;
struct hist_entry *child;
int n = 0;
for (nd = rb_first_cached(&he->hroot_out); nd; nd = rb_next(nd)) {
child = rb_entry(nd, struct hist_entry, rb_node);
percent = hist_entry__get_percent_limit(child);
if (!child->filtered && percent >= hb->min_pcnt)
n++;
}
return n;
}
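/*
* Apply an explicit fold/unfold state to a single entry and record how
* many extra rows it now contributes.
*/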
static void __hist_entry__set_folding(struct hist_entry *he,
struct hist_browser *hb, bool unfold)
{
hist_entry__init_have_children(he);
he->unfolded = unfold ? he->has_children : false;
if (he->has_children) {
int n;
if (he->leaf)
n = callchain__set_folding(&he->sorted_chain, unfold);
else
n = hierarchy_set_folding(hb, he, unfold);
he->nr_rows = unfold ? n : 0;
} else
he->nr_rows = 0;
}
static void hist_entry__set_folding(struct hist_entry *he,
struct hist_browser *browser, bool unfold)
{
double percent;
percent = hist_entry__get_percent_limit(he);
if (he->filtered || percent < browser->min_pcnt)
return;
__hist_entry__set_folding(he, browser, unfold);
if (!he->depth || unfold)
browser->nr_hierarchy_entries++;
if (he->leaf)
browser->nr_callchain_rows += he->nr_rows;
else if (unfold && !hist_entry__has_hierarchy_children(he, browser->min_pcnt)) {
browser->nr_hierarchy_entries++;
he->has_no_entry = true;
he->nr_rows = 1;
} else
he->has_no_entry = false;
}
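/*
* Used by the 'E' (expand all) and 'C' (collapse all) keys: walk every
* entry, forcing descent into children, and apply the same fold state
* to all of them.
*/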
static void
__hist_browser__set_folding(struct hist_browser *browser, bool unfold)
{
struct rb_node *nd;
struct hist_entry *he;
nd = rb_first_cached(&browser->hists->entries);
while (nd) {
he = rb_entry(nd, struct hist_entry, rb_node);
/* set folding state even if it's currently folded */
nd = __rb_hierarchy_next(nd, HMD_FORCE_CHILD);
hist_entry__set_folding(he, browser, unfold);
}
}
static void hist_browser__set_folding(struct hist_browser *browser, bool unfold)
{
browser->nr_hierarchy_entries = 0;
browser->nr_callchain_rows = 0;
__hist_browser__set_folding(browser, unfold);
browser->b.nr_entries = hist_browser__nr_entries(browser);
/* Go to the start, we may be way after valid entries after a collapse */
ui_browser__reset_index(&browser->b);
}
static void hist_browser__set_folding_selected(struct hist_browser *browser, bool unfold)
{
if (!browser->he_selection)
return;
hist_entry__set_folding(browser->he_selection, browser, unfold);
browser->b.nr_entries = hist_browser__nr_entries(browser);
}
static void ui_browser__warn_lost_events(struct ui_browser *browser)
{
ui_browser__warning(browser, 4,
"Events are being lost, check IO/CPU overload!\n\n"
"You may want to run 'perf' using a RT scheduler policy:\n\n"
" perf top -r 80\n\n"
"Or reduce the sampling frequency.");
}
static int hist_browser__title(struct hist_browser *browser, char *bf, size_t size)
{
return browser->title ? browser->title(browser, bf, size) : 0;
}
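/*
* Main interaction loop: refresh the title and entry counts on timer
* ticks, warn when new events have been lost, handle the folding and
* header keys locally, and hand any other key back to the caller.
*/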
int hist_browser__run(struct hist_browser *browser, const char *help,
bool warn_lost_event)
{
int key;
char title[160];
struct hist_browser_timer *hbt = browser->hbt;
int delay_secs = hbt ? hbt->refresh : 0;
browser->b.entries = &browser->hists->entries;
browser->b.nr_entries = hist_browser__nr_entries(browser);
hist_browser__title(browser, title, sizeof(title));
if (ui_browser__show(&browser->b, title, "%s", help) < 0)
return -1;
while (1) {
key = ui_browser__run(&browser->b, delay_secs);
switch (key) {
case K_TIMER: {
u64 nr_entries;
WARN_ON_ONCE(!hbt);
if (hbt)
hbt->timer(hbt->arg);
if (hist_browser__has_filter(browser) ||
symbol_conf.report_hierarchy)
hist_browser__update_nr_entries(browser);
nr_entries = hist_browser__nr_entries(browser);
ui_browser__update_nr_entries(&browser->b, nr_entries);
if (warn_lost_event &&
(browser->hists->stats.nr_lost_warned !=
browser->hists->stats.nr_events[PERF_RECORD_LOST])) {
browser->hists->stats.nr_lost_warned =
browser->hists->stats.nr_events[PERF_RECORD_LOST];
ui_browser__warn_lost_events(&browser->b);
}
hist_browser__title(browser, title, sizeof(title));
ui_browser__show_title(&browser->b, title);
continue;
}
case 'D': { /* Debug */
static int seq;
struct hist_entry *h = rb_entry(browser->b.top,
struct hist_entry, rb_node);
ui_helpline__pop();
ui_helpline__fpush("%d: nr_ent=(%d,%d), etl: %d, rows=%d, idx=%d, fve: idx=%d, row_off=%d, nrows=%d",
seq++, browser->b.nr_entries,
browser->hists->nr_entries,
browser->b.extra_title_lines,
browser->b.rows,
browser->b.index,
browser->b.top_idx,
h->row_offset, h->nr_rows);
}
break;
case 'C':
/* Collapse the whole world. */
hist_browser__set_folding(browser, false);
break;
case 'c':
/* Collapse the selected entry. */
hist_browser__set_folding_selected(browser, false);
break;
case 'E':
/* Expand the whole world. */
hist_browser__set_folding(browser, true);
break;
case 'e':
/* Expand the selected entry. */
hist_browser__set_folding_selected(browser, true);
break;
case 'H':
browser->show_headers = !browser->show_headers;
hist_browser__update_rows(browser);
break;
case K_ENTER:
if (hist_browser__toggle_fold(browser))
break;
/* fall thru */
default:
goto out;
}
}
out:
ui_browser__hide(&browser->b);
return key;
}
struct callchain_print_arg {
/* for hists browser */
off_t row_offset;
bool is_current_entry;
/* for file dump */
FILE *fp;
int printed;
};
typedef void (*print_callchain_entry_fn)(struct hist_browser *browser,
struct callchain_list *chain,
const char *str, int offset,
unsigned short row,
struct callchain_print_arg *arg);
static void hist_browser__show_callchain_entry(struct hist_browser *browser,
struct callchain_list *chain,
const char *str, int offset,
unsigned short row,
struct callchain_print_arg *arg)
{
int color, width;
char folded_sign = callchain_list__folded(chain);
bool show_annotated = browser->show_dso && chain->ms.sym && symbol__annotation(chain->ms.sym)->src;
color = HE_COLORSET_NORMAL;
width = browser->b.width - (offset + 2);
if (ui_browser__is_current_entry(&browser->b, row)) {
browser->selection = &chain->ms;
color = HE_COLORSET_SELECTED;
arg->is_current_entry = true;
}
ui_browser__set_color(&browser->b, color);
ui_browser__gotorc(&browser->b, row, 0);
ui_browser__write_nstring(&browser->b, " ", offset);
ui_browser__printf(&browser->b, "%c", folded_sign);
ui_browser__write_graph(&browser->b, show_annotated ? SLSMG_RARROW_CHAR : ' ');
ui_browser__write_nstring(&browser->b, str, width);
}
static void hist_browser__fprintf_callchain_entry(struct hist_browser *b __maybe_unused,
struct callchain_list *chain,
const char *str, int offset,
unsigned short row __maybe_unused,
struct callchain_print_arg *arg)
{
char folded_sign = callchain_list__folded(chain);
arg->printed += fprintf(arg->fp, "%*s%c %s\n", offset, " ",
folded_sign, str);
}
typedef bool (*check_output_full_fn)(struct hist_browser *browser,
unsigned short row);
static bool hist_browser__check_output_full(struct hist_browser *browser,
unsigned short row)
{
return browser->b.rows == row;
}
static bool hist_browser__check_dump_full(struct hist_browser *browser __maybe_unused,
unsigned short row __maybe_unused)
{
return false;
}
#define LEVEL_OFFSET_STEP 3
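/*
* Render a single callchain line, optionally prefixed with its percent
* value and branch flag counts, honouring the pending row_offset used
* to skip lines scrolled off the top.
*/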
static int hist_browser__show_callchain_list(struct hist_browser *browser,
struct callchain_node *node,
struct callchain_list *chain,
unsigned short row, u64 total,
bool need_percent, int offset,
print_callchain_entry_fn print,
struct callchain_print_arg *arg)
{
char bf[1024], *alloc_str;
char buf[64], *alloc_str2;
const char *str;
int ret = 1;
if (arg->row_offset != 0) {
arg->row_offset--;
return 0;
}
alloc_str = NULL;
alloc_str2 = NULL;
str = callchain_list__sym_name(chain, bf, sizeof(bf),
browser->show_dso);
if (symbol_conf.show_branchflag_count) {
callchain_list_counts__printf_value(chain, NULL,
buf, sizeof(buf));
if (asprintf(&alloc_str2, "%s%s", str, buf) < 0)
str = "Not enough memory!";
else
str = alloc_str2;
}
if (need_percent) {
callchain_node__scnprintf_value(node, buf, sizeof(buf),
total);
if (asprintf(&alloc_str, "%s %s", buf, str) < 0)
str = "Not enough memory!";
else
str = alloc_str;
}
print(browser, chain, str, offset, row, arg);
free(alloc_str);
free(alloc_str2);
return ret;
}
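/*
* Decide whether a percent value must be shown for this level: it can
* only be omitted when there is exactly one child whose cumulated hits
* equal the parent's total, i.e. when the percentage is provably the
* same as the parent's.
*/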
static bool check_percent_display(struct rb_node *node, u64 parent_total)
{
struct callchain_node *child;
if (node == NULL)
return false;
if (rb_next(node))
return true;
child = rb_entry(node, struct callchain_node, rb_node);
return callchain_cumul_hits(child) != parent_total;
}
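/*
* Flat callchain rendering: print the inherited parent_val entries and
* then the node's own entries at a single level, stopping early when
* the output is full or a folded ('+') node is reached.
*/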
static int hist_browser__show_callchain_flat(struct hist_browser *browser,
struct rb_root *root,
unsigned short row, u64 total,
u64 parent_total,
print_callchain_entry_fn print,
struct callchain_print_arg *arg,
check_output_full_fn is_output_full)
{
struct rb_node *node;
int first_row = row, offset = LEVEL_OFFSET_STEP;
bool need_percent;
node = rb_first(root);
need_percent = check_percent_display(node, parent_total);
while (node) {
struct callchain_node *child = rb_entry(node, struct callchain_node, rb_node);
struct rb_node *next = rb_next(node);
struct callchain_list *chain;
char folded_sign = ' ';
int first = true;
int extra_offset = 0;
list_for_each_entry(chain, &child->parent_val, list) {
bool was_first = first;
if (first)
first = false;
else if (need_percent)
extra_offset = LEVEL_OFFSET_STEP;
folded_sign = callchain_list__folded(chain);
row += hist_browser__show_callchain_list(browser, child,
chain, row, total,
was_first && need_percent,
offset + extra_offset,
print, arg);
if (is_output_full(browser, row))
goto out;
if (folded_sign == '+')
goto next;
}
list_for_each_entry(chain, &child->val, list) {
bool was_first = first;
if (first)
first = false;
else if (need_percent)
extra_offset = LEVEL_OFFSET_STEP;
folded_sign = callchain_list__folded(chain);
row += hist_browser__show_callchain_list(browser, child,
chain, row, total,
was_first && need_percent,
offset + extra_offset,
print, arg);
if (is_output_full(browser, row))
goto out;
if (folded_sign == '+')
break;
}
next:
if (is_output_full(browser, row))
break;
node = next;
}
out:
return row - first_row;
}
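/*
* Folded mode joins the symbol names of a chain into one string,
* separated by symbol_conf.field_sep (';' by default).
*/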
static char *hist_browser__folded_callchain_str(struct hist_browser *browser,
struct callchain_list *chain,
char *value_str, char *old_str)
{
char bf[1024];
const char *str;
char *new;
str = callchain_list__sym_name(chain, bf, sizeof(bf),
browser->show_dso);
if (old_str) {
if (asprintf(&new, "%s%s%s", old_str,
symbol_conf.field_sep ?: ";", str) < 0)
new = NULL;
} else {
if (value_str) {
if (asprintf(&new, "%s %s", value_str, str) < 0)
new = NULL;
} else {
if (asprintf(&new, "%s", str) < 0)
new = NULL;
}
}
return new;
}
static int hist_browser__show_callchain_folded(struct hist_browser *browser,
struct rb_root *root,
unsigned short row, u64 total,
u64 parent_total,
print_callchain_entry_fn print,
struct callchain_print_arg *arg,
check_output_full_fn is_output_full)
{
struct rb_node *node;
int first_row = row, offset = LEVEL_OFFSET_STEP;
bool need_percent;
node = rb_first(root);
need_percent = check_percent_display(node, parent_total);
while (node) {
struct callchain_node *child = rb_entry(node, struct callchain_node, rb_node);
struct rb_node *next = rb_next(node);
struct callchain_list *chain, *first_chain = NULL;
int first = true;
char *value_str = NULL, *value_str_alloc = NULL;
char *chain_str = NULL, *chain_str_alloc = NULL;
if (arg->row_offset != 0) {
arg->row_offset--;
goto next;
}
if (need_percent) {
char buf[64];
callchain_node__scnprintf_value(child, buf, sizeof(buf), total);
if (asprintf(&value_str, "%s", buf) < 0) {
value_str = (char *)"<...>";
goto do_print;
}
value_str_alloc = value_str;
}
list_for_each_entry(chain, &child->parent_val, list) {
chain_str = hist_browser__folded_callchain_str(browser,
chain, value_str, chain_str);
if (first) {
first = false;
first_chain = chain;
}
if (chain_str == NULL) {
chain_str = (char *)"Not enough memory!";
goto do_print;
}
chain_str_alloc = chain_str;
}
list_for_each_entry(chain, &child->val, list) {
chain_str = hist_browser__folded_callchain_str(browser,
chain, value_str, chain_str);
if (first) {
first = false;
first_chain = chain;
}
if (chain_str == NULL) {
chain_str = (char *)"Not enough memory!";
goto do_print;
}
chain_str_alloc = chain_str;
}
do_print:
print(browser, first_chain, chain_str, offset, row++, arg);
free(value_str_alloc);
free(chain_str_alloc);
next:
if (is_output_full(browser, row))
break;
node = next;
}
return row - first_row;
}
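/*
 * Graph mode output: print the entries of each node and recurse into its
 * children while the node is unfolded ('-').  With CHAIN_GRAPH_REL the
 * percentages are computed relative to the parent's total instead of the
 * hist entry total.
 */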
static int hist_browser__show_callchain_graph(struct hist_browser *browser,
struct rb_root *root, int level,
unsigned short row, u64 total,
u64 parent_total,
print_callchain_entry_fn print,
struct callchain_print_arg *arg,
check_output_full_fn is_output_full)
{
struct rb_node *node;
int first_row = row, offset = level * LEVEL_OFFSET_STEP;
bool need_percent;
u64 percent_total = total;
if (callchain_param.mode == CHAIN_GRAPH_REL)
percent_total = parent_total;
node = rb_first(root);
need_percent = check_percent_display(node, parent_total);
while (node) {
struct callchain_node *child = rb_entry(node, struct callchain_node, rb_node);
struct rb_node *next = rb_next(node);
struct callchain_list *chain;
char folded_sign = ' ';
int first = true;
int extra_offset = 0;
list_for_each_entry(chain, &child->val, list) {
bool was_first = first;
if (first)
first = false;
else if (need_percent)
extra_offset = LEVEL_OFFSET_STEP;
folded_sign = callchain_list__folded(chain);
row += hist_browser__show_callchain_list(browser, child,
chain, row, percent_total,
was_first && need_percent,
offset + extra_offset,
print, arg);
if (is_output_full(browser, row))
goto out;
if (folded_sign == '+')
break;
}
if (folded_sign == '-') {
const int new_level = level + (extra_offset ? 2 : 1);
row += hist_browser__show_callchain_graph(browser, &child->rb_root,
new_level, row, total,
child->children_hit,
print, arg, is_output_full);
}
if (is_output_full(browser, row))
break;
node = next;
}
out:
return row - first_row;
}
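/*
 * Entry point for callchain rendering: pick the flat, folded or graph printer
 * according to callchain_param.mode.  parent_total is the period of the hist
 * entry itself (the accumulated period when symbol_conf.cumulate_callchain is
 * set).
 */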
static int hist_browser__show_callchain(struct hist_browser *browser,
struct hist_entry *entry, int level,
unsigned short row,
print_callchain_entry_fn print,
struct callchain_print_arg *arg,
check_output_full_fn is_output_full)
{
u64 total = hists__total_period(entry->hists);
u64 parent_total;
int printed;
if (symbol_conf.cumulate_callchain)
parent_total = entry->stat_acc->period;
else
parent_total = entry->stat.period;
if (callchain_param.mode == CHAIN_FLAT) {
printed = hist_browser__show_callchain_flat(browser,
&entry->sorted_chain, row,
total, parent_total, print, arg,
is_output_full);
} else if (callchain_param.mode == CHAIN_FOLDED) {
printed = hist_browser__show_callchain_folded(browser,
&entry->sorted_chain, row,
total, parent_total, print, arg,
is_output_full);
} else {
printed = hist_browser__show_callchain_graph(browser,
&entry->sorted_chain, level, row,
total, parent_total, print, arg,
is_output_full);
}
if (arg->is_current_entry)
browser->he_selection = entry;
return printed;
}
struct hpp_arg {
struct ui_browser *b;
char folded_sign;
bool current_entry;
};
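/*
 * printf callback used by the hpp__fmt() helpers below: pops the field width
 * and the percentage from the va_list, picks the color for that percentage
 * and writes the formatted field through the ui_browser.
 */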
int __hpp__slsmg_color_printf(struct perf_hpp *hpp, const char *fmt, ...)
{
struct hpp_arg *arg = hpp->ptr;
int ret, len;
va_list args;
double percent;
va_start(args, fmt);
len = va_arg(args, int);
percent = va_arg(args, double);
va_end(args);
ui_browser__set_percent_color(arg->b, percent, arg->current_entry);
ret = scnprintf(hpp->buf, hpp->size, fmt, len, percent);
ui_browser__printf(arg->b, "%s", hpp->buf);
return ret;
}
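/*
 * Generate the ->color() methods for the overhead columns: the plain variants
 * read he->stat._field, the _acc variant reads the accumulated he->stat_acc
 * and prints "N/A" when callchain cumulation is disabled.
 */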
#define __HPP_COLOR_PERCENT_FN(_type, _field) \
static u64 __hpp_get_##_field(struct hist_entry *he) \
{ \
return he->stat._field; \
} \
\
static int \
hist_browser__hpp_color_##_type(struct perf_hpp_fmt *fmt, \
struct perf_hpp *hpp, \
struct hist_entry *he) \
{ \
return hpp__fmt(fmt, hpp, he, __hpp_get_##_field, " %*.2f%%", \
__hpp__slsmg_color_printf, true); \
}
#define __HPP_COLOR_ACC_PERCENT_FN(_type, _field) \
static u64 __hpp_get_acc_##_field(struct hist_entry *he) \
{ \
return he->stat_acc->_field; \
} \
\
static int \
hist_browser__hpp_color_##_type(struct perf_hpp_fmt *fmt, \
struct perf_hpp *hpp, \
struct hist_entry *he) \
{ \
if (!symbol_conf.cumulate_callchain) { \
struct hpp_arg *arg = hpp->ptr; \
int len = fmt->user_len ?: fmt->len; \
int ret = scnprintf(hpp->buf, hpp->size, \
"%*s", len, "N/A"); \
ui_browser__printf(arg->b, "%s", hpp->buf); \
\
return ret; \
} \
return hpp__fmt(fmt, hpp, he, __hpp_get_acc_##_field, \
" %*.2f%%", __hpp__slsmg_color_printf, true); \
}
__HPP_COLOR_PERCENT_FN(overhead, period)
__HPP_COLOR_PERCENT_FN(overhead_sys, period_sys)
__HPP_COLOR_PERCENT_FN(overhead_us, period_us)
__HPP_COLOR_PERCENT_FN(overhead_guest_sys, period_guest_sys)
__HPP_COLOR_PERCENT_FN(overhead_guest_us, period_guest_us)
__HPP_COLOR_ACC_PERCENT_FN(overhead_acc, period)
#undef __HPP_COLOR_PERCENT_FN
#undef __HPP_COLOR_ACC_PERCENT_FN
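/* Hook the TUI color methods above into the common hpp format table. */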
void hist_browser__init_hpp(void)
{
perf_hpp__format[PERF_HPP__OVERHEAD].color =
hist_browser__hpp_color_overhead;
perf_hpp__format[PERF_HPP__OVERHEAD_SYS].color =
hist_browser__hpp_color_overhead_sys;
perf_hpp__format[PERF_HPP__OVERHEAD_US].color =
hist_browser__hpp_color_overhead_us;
perf_hpp__format[PERF_HPP__OVERHEAD_GUEST_SYS].color =
hist_browser__hpp_color_overhead_guest_sys;
perf_hpp__format[PERF_HPP__OVERHEAD_GUEST_US].color =
hist_browser__hpp_color_overhead_guest_us;
perf_hpp__format[PERF_HPP__OVERHEAD_ACC].color =
hist_browser__hpp_color_overhead_acc;
res_sample_init();
}
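/*
 * Print one hist_entry row in the flat (non-hierarchy) view: the fold sign
 * when callchains are in use, then every configured column, and finally the
 * expanded callchain rows when the entry is unfolded.  Returns the number of
 * rows printed.
 */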
static int hist_browser__show_entry(struct hist_browser *browser,
struct hist_entry *entry,
unsigned short row)
{
int printed = 0;
int width = browser->b.width;
char folded_sign = ' ';
bool current_entry = ui_browser__is_current_entry(&browser->b, row);
bool use_callchain = hist_entry__has_callchains(entry) && symbol_conf.use_callchain;
off_t row_offset = entry->row_offset;
bool first = true;
struct perf_hpp_fmt *fmt;
if (current_entry) {
browser->he_selection = entry;
browser->selection = &entry->ms;
}
if (use_callchain) {
hist_entry__init_have_children(entry);
folded_sign = hist_entry__folded(entry);
}
if (row_offset == 0) {
struct hpp_arg arg = {
.b = &browser->b,
.folded_sign = folded_sign,
.current_entry = current_entry,
};
int column = 0;
ui_browser__gotorc(&browser->b, row, 0);
hists__for_each_format(browser->hists, fmt) {
char s[2048];
struct perf_hpp hpp = {
.buf = s,
.size = sizeof(s),
.ptr = &arg,
};
if (perf_hpp__should_skip(fmt, entry->hists) ||
column++ < browser->b.horiz_scroll)
continue;
if (current_entry && browser->b.navkeypressed) {
ui_browser__set_color(&browser->b,
HE_COLORSET_SELECTED);
} else {
ui_browser__set_color(&browser->b,
HE_COLORSET_NORMAL);
}
if (first) {
if (use_callchain) {
ui_browser__printf(&browser->b, "%c ", folded_sign);
width -= 2;
}
first = false;
} else {
ui_browser__printf(&browser->b, " ");
width -= 2;
}
if (fmt->color) {
int ret = fmt->color(fmt, &hpp, entry);
hist_entry__snprintf_alignment(entry, &hpp, fmt, ret);
/*
* fmt->color() already used ui_browser to
* print the non alignment bits, skip it (+ret):
*/
ui_browser__printf(&browser->b, "%s", s + ret);
} else {
hist_entry__snprintf_alignment(entry, &hpp, fmt, fmt->entry(fmt, &hpp, entry));
ui_browser__printf(&browser->b, "%s", s);
}
width -= hpp.buf - s;
}
/* The scroll bar isn't being used */
if (!browser->b.navkeypressed)
width += 1;
ui_browser__write_nstring(&browser->b, "", width);
++row;
++printed;
} else
--row_offset;
if (folded_sign == '-' && row != browser->b.rows) {
struct callchain_print_arg arg = {
.row_offset = row_offset,
.is_current_entry = current_entry,
};
printed += hist_browser__show_callchain(browser,
entry, 1, row,
hist_browser__show_callchain_entry,
&arg,
hist_browser__check_output_full);
}
return printed;
}
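/*
 * Print one hist_entry row in --hierarchy mode: the overhead columns from the
 * first hpp_list_node, then the entry's own sort key column(s) at its
 * indentation level, followed by the callchain of an unfolded leaf entry.
 * Returns the number of rows printed.
 */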
static int hist_browser__show_hierarchy_entry(struct hist_browser *browser,
struct hist_entry *entry,
unsigned short row,
int level)
{
int printed = 0;
int width = browser->b.width;
char folded_sign = ' ';
bool current_entry = ui_browser__is_current_entry(&browser->b, row);
off_t row_offset = entry->row_offset;
bool first = true;
struct perf_hpp_fmt *fmt;
struct perf_hpp_list_node *fmt_node;
struct hpp_arg arg = {
.b = &browser->b,
.current_entry = current_entry,
};
int column = 0;
int hierarchy_indent = (entry->hists->nr_hpp_node - 2) * HIERARCHY_INDENT;
if (current_entry) {
browser->he_selection = entry;
browser->selection = &entry->ms;
}
hist_entry__init_have_children(entry);
folded_sign = hist_entry__folded(entry);
arg.folded_sign = folded_sign;
if (entry->leaf && row_offset) {
row_offset--;
goto show_callchain;
}
ui_browser__gotorc(&browser->b, row, 0);
if (current_entry && browser->b.navkeypressed)
ui_browser__set_color(&browser->b, HE_COLORSET_SELECTED);
else
ui_browser__set_color(&browser->b, HE_COLORSET_NORMAL);
ui_browser__write_nstring(&browser->b, "", level * HIERARCHY_INDENT);
width -= level * HIERARCHY_INDENT;
/* the first hpp_list_node is for overhead columns */
fmt_node = list_first_entry(&entry->hists->hpp_formats,
struct perf_hpp_list_node, list);
perf_hpp_list__for_each_format(&fmt_node->hpp, fmt) {
char s[2048];
struct perf_hpp hpp = {
.buf = s,
.size = sizeof(s),
.ptr = &arg,
};
if (perf_hpp__should_skip(fmt, entry->hists) ||
column++ < browser->b.horiz_scroll)
continue;
if (current_entry && browser->b.navkeypressed) {
ui_browser__set_color(&browser->b,
HE_COLORSET_SELECTED);
} else {
ui_browser__set_color(&browser->b,
HE_COLORSET_NORMAL);
}
if (first) {
ui_browser__printf(&browser->b, "%c ", folded_sign);
width -= 2;
first = false;
} else {
ui_browser__printf(&browser->b, " ");
width -= 2;
}
if (fmt->color) {
int ret = fmt->color(fmt, &hpp, entry);
hist_entry__snprintf_alignment(entry, &hpp, fmt, ret);
/*
* fmt->color() already used ui_browser to
* print the non alignment bits, skip it (+ret):
*/
ui_browser__printf(&browser->b, "%s", s + ret);
} else {
int ret = fmt->entry(fmt, &hpp, entry);
hist_entry__snprintf_alignment(entry, &hpp, fmt, ret);
ui_browser__printf(&browser->b, "%s", s);
}
width -= hpp.buf - s;
}
if (!first) {
ui_browser__write_nstring(&browser->b, "", hierarchy_indent);
width -= hierarchy_indent;
}
if (column >= browser->b.horiz_scroll) {
char s[2048];
struct perf_hpp hpp = {
.buf = s,
.size = sizeof(s),
.ptr = &arg,
};
if (current_entry && browser->b.navkeypressed) {
ui_browser__set_color(&browser->b,
HE_COLORSET_SELECTED);
} else {
ui_browser__set_color(&browser->b,
HE_COLORSET_NORMAL);
}
perf_hpp_list__for_each_format(entry->hpp_list, fmt) {
if (first) {
ui_browser__printf(&browser->b, "%c ", folded_sign);
first = false;
} else {
ui_browser__write_nstring(&browser->b, "", 2);
}
width -= 2;
/*
* No need to call hist_entry__snprintf_alignment()
* since this fmt is always the last column in the
* hierarchy mode.
*/
if (fmt->color) {
width -= fmt->color(fmt, &hpp, entry);
} else {
int i = 0;
width -= fmt->entry(fmt, &hpp, entry);
ui_browser__printf(&browser->b, "%s", skip_spaces(s));
while (isspace(s[i++]))
width++;
}
}
}
/* The scroll bar isn't being used */
if (!browser->b.navkeypressed)
width += 1;
ui_browser__write_nstring(&browser->b, "", width);
++row;
++printed;
show_callchain:
if (entry->leaf && folded_sign == '-' && row != browser->b.rows) {
struct callchain_print_arg carg = {
.row_offset = row_offset,
};
printed += hist_browser__show_callchain(browser, entry,
level + 1, row,
hist_browser__show_callchain_entry, &carg,
hist_browser__check_output_full);
}
return printed;
}
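/*
 * Print the "no entry >= x.xx%" placeholder row shown in hierarchy mode when
 * an unfolded entry has no children above the minimum percent limit.
 */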
static int hist_browser__show_no_entry(struct hist_browser *browser,
unsigned short row, int level)
{
int width = browser->b.width;
bool current_entry = ui_browser__is_current_entry(&browser->b, row);
bool first = true;
int column = 0;
int ret;
struct perf_hpp_fmt *fmt;
struct perf_hpp_list_node *fmt_node;
int indent = browser->hists->nr_hpp_node - 2;
if (current_entry) {
browser->he_selection = NULL;
browser->selection = NULL;
}
ui_browser__gotorc(&browser->b, row, 0);
if (current_entry && browser->b.navkeypressed)
ui_browser__set_color(&browser->b, HE_COLORSET_SELECTED);
else
ui_browser__set_color(&browser->b, HE_COLORSET_NORMAL);
ui_browser__write_nstring(&browser->b, "", level * HIERARCHY_INDENT);
width -= level * HIERARCHY_INDENT;
/* the first hpp_list_node is for overhead columns */
fmt_node = list_first_entry(&browser->hists->hpp_formats,
struct perf_hpp_list_node, list);
perf_hpp_list__for_each_format(&fmt_node->hpp, fmt) {
if (perf_hpp__should_skip(fmt, browser->hists) ||
column++ < browser->b.horiz_scroll)
continue;
ret = fmt->width(fmt, NULL, browser->hists);
if (first) {
/* for folded sign */
first = false;
ret++;
} else {
/* space between columns */
ret += 2;
}
ui_browser__write_nstring(&browser->b, "", ret);
width -= ret;
}
ui_browser__write_nstring(&browser->b, "", indent * HIERARCHY_INDENT);
width -= indent * HIERARCHY_INDENT;
if (column >= browser->b.horiz_scroll) {
char buf[32];
ret = snprintf(buf, sizeof(buf), "no entry >= %.2f%%", browser->min_pcnt);
ui_browser__printf(&browser->b, " %s", buf);
width -= ret + 2;
}
/* The scroll bar isn't being used */
if (!browser->b.navkeypressed)
width += 1;
ui_browser__write_nstring(&browser->b, "", width);
return 1;
}
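/* Advance the hpp buffer and tell the caller when it has been exhausted. */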
static int advance_hpp_check(struct perf_hpp *hpp, int inc)
{
advance_hpp(hpp, inc);
return hpp->size <= 0;
}
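/* Format one header line of the normal (non-hierarchy) view into buf. */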
static int
hists_browser__scnprintf_headers(struct hist_browser *browser, char *buf,
size_t size, int line)
{
struct hists *hists = browser->hists;
struct perf_hpp dummy_hpp = {
.buf = buf,
.size = size,
};
struct perf_hpp_fmt *fmt;
size_t ret = 0;
int column = 0;
int span = 0;
if (hists__has_callchains(hists) && symbol_conf.use_callchain) {
ret = scnprintf(buf, size, " ");
if (advance_hpp_check(&dummy_hpp, ret))
return ret;
}
hists__for_each_format(browser->hists, fmt) {
if (perf_hpp__should_skip(fmt, hists) || column++ < browser->b.horiz_scroll)
continue;
ret = fmt->header(fmt, &dummy_hpp, hists, line, &span);
if (advance_hpp_check(&dummy_hpp, ret))
break;
if (span)
continue;
ret = scnprintf(dummy_hpp.buf, dummy_hpp.size, " ");
if (advance_hpp_check(&dummy_hpp, ret))
break;
}
return ret;
}
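/*
 * Format the single header line of the hierarchy view: the overhead columns
 * first, then the sort keys of each level separated by " / " (keys within one
 * level are joined with "+").
 */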
static int hists_browser__scnprintf_hierarchy_headers(struct hist_browser *browser, char *buf, size_t size)
{
struct hists *hists = browser->hists;
struct perf_hpp dummy_hpp = {
.buf = buf,
.size = size,
};
struct perf_hpp_fmt *fmt;
struct perf_hpp_list_node *fmt_node;
size_t ret = 0;
int column = 0;
int indent = hists->nr_hpp_node - 2;
bool first_node, first_col;
ret = scnprintf(buf, size, " ");
if (advance_hpp_check(&dummy_hpp, ret))
return ret;
first_node = true;
/* the first hpp_list_node is for overhead columns */
fmt_node = list_first_entry(&hists->hpp_formats,
struct perf_hpp_list_node, list);
perf_hpp_list__for_each_format(&fmt_node->hpp, fmt) {
if (column++ < browser->b.horiz_scroll)
continue;
ret = fmt->header(fmt, &dummy_hpp, hists, 0, NULL);
if (advance_hpp_check(&dummy_hpp, ret))
break;
ret = scnprintf(dummy_hpp.buf, dummy_hpp.size, " ");
if (advance_hpp_check(&dummy_hpp, ret))
break;
first_node = false;
}
if (!first_node) {
ret = scnprintf(dummy_hpp.buf, dummy_hpp.size, "%*s",
indent * HIERARCHY_INDENT, "");
if (advance_hpp_check(&dummy_hpp, ret))
return ret;
}
first_node = true;
list_for_each_entry_continue(fmt_node, &hists->hpp_formats, list) {
if (!first_node) {
ret = scnprintf(dummy_hpp.buf, dummy_hpp.size, " / ");
if (advance_hpp_check(&dummy_hpp, ret))
break;
}
first_node = false;
first_col = true;
perf_hpp_list__for_each_format(&fmt_node->hpp, fmt) {
char *start;
if (perf_hpp__should_skip(fmt, hists))
continue;
if (!first_col) {
ret = scnprintf(dummy_hpp.buf, dummy_hpp.size, "+");
if (advance_hpp_check(&dummy_hpp, ret))
break;
}
first_col = false;
ret = fmt->header(fmt, &dummy_hpp, hists, 0, NULL);
dummy_hpp.buf[ret] = '\0';
start = strim(dummy_hpp.buf);
ret = strlen(start);
if (start != dummy_hpp.buf)
memmove(dummy_hpp.buf, start, ret + 1);
if (advance_hpp_check(&dummy_hpp, ret))
break;
}
}
return ret;
}
static void hists_browser__hierarchy_headers(struct hist_browser *browser)
{
char headers[1024];
hists_browser__scnprintf_hierarchy_headers(browser, headers,
sizeof(headers));
ui_browser__gotorc(&browser->b, 0, 0);
ui_browser__set_color(&browser->b, HE_COLORSET_ROOT);
ui_browser__write_nstring(&browser->b, headers, browser->b.width + 1);
}
static void hists_browser__headers(struct hist_browser *browser)
{
struct hists *hists = browser->hists;
struct perf_hpp_list *hpp_list = hists->hpp_list;
int line;
for (line = 0; line < hpp_list->nr_header_lines; line++) {
char headers[1024];
hists_browser__scnprintf_headers(browser, headers,
sizeof(headers), line);
ui_browser__gotorc_title(&browser->b, line, 0);
ui_browser__set_color(&browser->b, HE_COLORSET_ROOT);
ui_browser__write_nstring(&browser->b, headers, browser->b.width + 1);
}
}
static void hist_browser__show_headers(struct hist_browser *browser)
{
if (symbol_conf.report_hierarchy)
hists_browser__hierarchy_headers(browser);
else
hists_browser__headers(browser);
}
static void ui_browser__hists_init_top(struct ui_browser *browser)
{
if (browser->top == NULL) {
struct hist_browser *hb;
hb = container_of(browser, struct hist_browser, b);
browser->top = rb_first_cached(&hb->hists->entries);
}
}
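/*
 * ui_browser->refresh() implementation: draw the headers (when enabled) and
 * then every visible hist_entry starting from browser->top, honouring the
 * minimum percent filter.  Returns the number of rows written so the generic
 * browser can clear whatever is left of the screen.
 */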
static unsigned int hist_browser__refresh(struct ui_browser *browser)
{
unsigned row = 0;
struct rb_node *nd;
struct hist_browser *hb = container_of(browser, struct hist_browser, b);
if (hb->show_headers)
hist_browser__show_headers(hb);
ui_browser__hists_init_top(browser);
hb->he_selection = NULL;
hb->selection = NULL;
for (nd = browser->top; nd; nd = rb_hierarchy_next(nd)) {
struct hist_entry *h = rb_entry(nd, struct hist_entry, rb_node);
float percent;
if (h->filtered) {
/* let it move to sibling */
h->unfolded = false;
continue;
}
percent = hist_entry__get_percent_limit(h);
if (percent < hb->min_pcnt)
continue;
if (symbol_conf.report_hierarchy) {
row += hist_browser__show_hierarchy_entry(hb, h, row,
h->depth);
if (row == browser->rows)
break;
if (h->has_no_entry) {
hist_browser__show_no_entry(hb, row, h->depth + 1);
row++;
}
} else {
row += hist_browser__show_entry(hb, h, row);
}
if (row == browser->rows)
break;
}
return row;
}
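/*
 * Starting at nd, find the next entry that is neither filtered out nor below
 * the minimum percent limit, walking forward through the (possibly
 * hierarchical) rbtree.
 */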
static struct rb_node *hists__filter_entries(struct rb_node *nd,
float min_pcnt)
{
while (nd != NULL) {
struct hist_entry *h = rb_entry(nd, struct hist_entry, rb_node);
float percent = hist_entry__get_percent_limit(h);
if (!h->filtered && percent >= min_pcnt)
return nd;
/*
 * If it's filtered, all of its children were filtered too,
 * so move on to the sibling node.
*/
if (rb_next(nd))
nd = rb_next(nd);
else
nd = rb_hierarchy_next(nd);
}
return NULL;
}
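/* Same as hists__filter_entries(), but walking backwards. */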
static struct rb_node *hists__filter_prev_entries(struct rb_node *nd,
float min_pcnt)
{
while (nd != NULL) {
struct hist_entry *h = rb_entry(nd, struct hist_entry, rb_node);
float percent = hist_entry__get_percent_limit(h);
if (!h->filtered && percent >= min_pcnt)
return nd;
nd = rb_hierarchy_prev(nd);
}
return NULL;
}
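/*
 * ui_browser->seek() implementation: move browser->top by offset visible
 * rows, counting the callchain rows of unfolded entries (tracked via
 * h->row_offset/h->nr_rows) so that scrolling works inside expanded
 * callchains too.
 */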
static void ui_browser__hists_seek(struct ui_browser *browser,
off_t offset, int whence)
{
struct hist_entry *h;
struct rb_node *nd;
bool first = true;
struct hist_browser *hb;
hb = container_of(browser, struct hist_browser, b);
if (browser->nr_entries == 0)
return;
ui_browser__hists_init_top(browser);
switch (whence) {
case SEEK_SET:
nd = hists__filter_entries(rb_first(browser->entries),
hb->min_pcnt);
break;
case SEEK_CUR:
nd = browser->top;
goto do_offset;
case SEEK_END:
nd = rb_hierarchy_last(rb_last(browser->entries));
nd = hists__filter_prev_entries(nd, hb->min_pcnt);
first = false;
break;
default:
return;
}
/*
 * A move not relative to the first visible entry invalidates its
* row_offset:
*/
h = rb_entry(browser->top, struct hist_entry, rb_node);
h->row_offset = 0;
/*
 * Here we have to check if nd is expanded (+); if it is we can't go to
 * the next top level hist_entry, instead we must compute an offset of
* what _not_ to show and not change the first visible entry.
*
* This offset increments when we are going from top to bottom and
* decreases when we're going from bottom to top.
*
* As we don't have backpointers to the top level in the callchains
* structure, we need to always print the whole hist_entry callchain,
* skipping the first ones that are before the first visible entry
* and stop when we printed enough lines to fill the screen.
*/
do_offset:
if (!nd)
return;
if (offset > 0) {
do {
h = rb_entry(nd, struct hist_entry, rb_node);
if (h->unfolded && h->leaf) {
u16 remaining = h->nr_rows - h->row_offset;
if (offset > remaining) {
offset -= remaining;
h->row_offset = 0;
} else {
h->row_offset += offset;
offset = 0;
browser->top = nd;
break;
}
}
nd = hists__filter_entries(rb_hierarchy_next(nd),
hb->min_pcnt);
if (nd == NULL)
break;
--offset;
browser->top = nd;
} while (offset != 0);
} else if (offset < 0) {
while (1) {
h = rb_entry(nd, struct hist_entry, rb_node);
if (h->unfolded && h->leaf) {
if (first) {
if (-offset > h->row_offset) {
offset += h->row_offset;
h->row_offset = 0;
} else {
h->row_offset += offset;
offset = 0;
browser->top = nd;
break;
}
} else {
if (-offset > h->nr_rows) {
offset += h->nr_rows;
h->row_offset = 0;
} else {
h->row_offset = h->nr_rows + offset;
offset = 0;
browser->top = nd;
break;
}
}
}
nd = hists__filter_prev_entries(rb_hierarchy_prev(nd),
hb->min_pcnt);
if (nd == NULL)
break;
++offset;
browser->top = nd;
if (offset == 0) {
/*
 * Last unfiltered hist_entry: if it is unfolded, its
 * row_offset should point at its last entry.
*/
h = rb_entry(nd, struct hist_entry, rb_node);
if (h->unfolded && h->leaf)
h->row_offset = h->nr_rows;
break;
}
first = false;
}
} else {
browser->top = nd;
h = rb_entry(nd, struct hist_entry, rb_node);
h->row_offset = 0;
}
}
static int hist_browser__fprintf_callchain(struct hist_browser *browser,
struct hist_entry *he, FILE *fp,
int level)
{
struct callchain_print_arg arg = {
.fp = fp,
};
hist_browser__show_callchain(browser, he, level, 0,
hist_browser__fprintf_callchain_entry, &arg,
hist_browser__check_dump_full);
return arg.printed;
}
static int hist_browser__fprintf_entry(struct hist_browser *browser,
struct hist_entry *he, FILE *fp)
{
char s[8192];
int printed = 0;
char folded_sign = ' ';
struct perf_hpp hpp = {
.buf = s,
.size = sizeof(s),
};
struct perf_hpp_fmt *fmt;
bool first = true;
int ret;
if (hist_entry__has_callchains(he) && symbol_conf.use_callchain) {
folded_sign = hist_entry__folded(he);
printed += fprintf(fp, "%c ", folded_sign);
}
hists__for_each_format(browser->hists, fmt) {
if (perf_hpp__should_skip(fmt, he->hists))
continue;
if (!first) {
ret = scnprintf(hpp.buf, hpp.size, " ");
advance_hpp(&hpp, ret);
} else
first = false;
ret = fmt->entry(fmt, &hpp, he);
ret = hist_entry__snprintf_alignment(he, &hpp, fmt, ret);
advance_hpp(&hpp, ret);
}
printed += fprintf(fp, "%s\n", s);
if (folded_sign == '-')
printed += hist_browser__fprintf_callchain(browser, he, fp, 1);
return printed;
}
static int hist_browser__fprintf_hierarchy_entry(struct hist_browser *browser,
struct hist_entry *he,
FILE *fp, int level)
{
char s[8192];
int printed = 0;
char folded_sign = ' ';
struct perf_hpp hpp = {
.buf = s,
.size = sizeof(s),
};
struct perf_hpp_fmt *fmt;
struct perf_hpp_list_node *fmt_node;
bool first = true;
int ret;
int hierarchy_indent = (he->hists->nr_hpp_node - 2) * HIERARCHY_INDENT;
printed = fprintf(fp, "%*s", level * HIERARCHY_INDENT, "");
folded_sign = hist_entry__folded(he);
printed += fprintf(fp, "%c", folded_sign);
/* the first hpp_list_node is for overhead columns */
fmt_node = list_first_entry(&he->hists->hpp_formats,
struct perf_hpp_list_node, list);
perf_hpp_list__for_each_format(&fmt_node->hpp, fmt) {
if (!first) {
ret = scnprintf(hpp.buf, hpp.size, " ");
advance_hpp(&hpp, ret);
} else
first = false;
ret = fmt->entry(fmt, &hpp, he);
advance_hpp(&hpp, ret);
}
ret = scnprintf(hpp.buf, hpp.size, "%*s", hierarchy_indent, "");
advance_hpp(&hpp, ret);
perf_hpp_list__for_each_format(he->hpp_list, fmt) {
ret = scnprintf(hpp.buf, hpp.size, " ");
advance_hpp(&hpp, ret);
ret = fmt->entry(fmt, &hpp, he);
advance_hpp(&hpp, ret);
}
strim(s);
printed += fprintf(fp, "%s\n", s);
if (he->leaf && folded_sign == '-') {
printed += hist_browser__fprintf_callchain(browser, he, fp,
he->depth + 1);
}
return printed;
}
static int hist_browser__fprintf(struct hist_browser *browser, FILE *fp)
{
struct rb_node *nd = hists__filter_entries(rb_first(browser->b.entries),
browser->min_pcnt);
int printed = 0;
while (nd) {
struct hist_entry *h = rb_entry(nd, struct hist_entry, rb_node);
if (symbol_conf.report_hierarchy) {
printed += hist_browser__fprintf_hierarchy_entry(browser,
h, fp,
h->depth);
} else {
printed += hist_browser__fprintf_entry(browser, h, fp);
}
nd = hists__filter_entries(rb_hierarchy_next(nd),
browser->min_pcnt);
}
return printed;
}
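/*
 * Write a plain text dump of the browser contents to the next free
 * perf.hist.N file in the current directory.
 */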
static int hist_browser__dump(struct hist_browser *browser)
{
char filename[64];
FILE *fp;
while (1) {
scnprintf(filename, sizeof(filename), "perf.hist.%d", browser->print_seq);
if (access(filename, F_OK))
break;
/*
* XXX: Just an arbitrary lazy upper limit
*/
if (++browser->print_seq == 8192) {
ui_helpline__fpush("Too many perf.hist.N files, nothing written!");
return -1;
}
}
fp = fopen(filename, "w");
if (fp == NULL) {
char bf[64];
const char *err = str_error_r(errno, bf, sizeof(bf));
ui_helpline__fpush("Couldn't write to %s: %s", filename, err);
return -1;
}
++browser->print_seq;
hist_browser__fprintf(browser, fp);
fclose(fp);
ui_helpline__fpush("%s written!", filename);
return 0;
}
void hist_browser__init(struct hist_browser *browser,
struct hists *hists)
{
struct perf_hpp_fmt *fmt;
browser->hists = hists;
browser->b.refresh = hist_browser__refresh;
browser->b.refresh_dimensions = hist_browser__refresh_dimensions;
browser->b.seek = ui_browser__hists_seek;
browser->b.use_navkeypressed = true;
browser->show_headers = symbol_conf.show_hist_headers;
hist_browser__set_title_space(browser);
if (symbol_conf.report_hierarchy) {
struct perf_hpp_list_node *fmt_node;
/* count overhead columns (in the first node) */
fmt_node = list_first_entry(&hists->hpp_formats,
struct perf_hpp_list_node, list);
perf_hpp_list__for_each_format(&fmt_node->hpp, fmt)
++browser->b.columns;
/* add a single column for whole hierarchy sort keys */
++browser->b.columns;
} else {
hists__for_each_format(hists, fmt)
++browser->b.columns;
}
hists__reset_column_width(hists);
}
struct hist_browser *hist_browser__new(struct hists *hists)
{
struct hist_browser *browser = zalloc(sizeof(*browser));
if (browser)
hist_browser__init(browser, hists);
return browser;
}
static struct hist_browser *
perf_evsel_browser__new(struct evsel *evsel,
struct hist_browser_timer *hbt,
struct perf_env *env,
struct annotation_options *annotation_opts)
{
struct hist_browser *browser = hist_browser__new(evsel__hists(evsel));
if (browser) {
browser->hbt = hbt;
browser->env = env;
browser->title = hists_browser__scnprintf_title;
browser->annotation_opts = annotation_opts;
}
return browser;
}
void hist_browser__delete(struct hist_browser *browser)
{
free(browser);
}
static struct hist_entry *hist_browser__selected_entry(struct hist_browser *browser)
{
return browser->he_selection;
}
static struct thread *hist_browser__selected_thread(struct hist_browser *browser)
{
return browser->he_selection->thread;
}
/* Check whether the browser is for 'top' or 'report' */
static inline bool is_report_browser(void *timer)
{
return timer == NULL;
}
static int hists_browser__scnprintf_title(struct hist_browser *browser, char *bf, size_t size)
{
struct hist_browser_timer *hbt = browser->hbt;
int printed = __hists__scnprintf_title(browser->hists, bf, size, !is_report_browser(hbt));
if (!is_report_browser(hbt)) {
struct perf_top *top = hbt->arg;
printed += scnprintf(bf + printed, size - printed,
" lost: %" PRIu64 "/%" PRIu64,
top->lost, top->lost_total);
printed += scnprintf(bf + printed, size - printed,
" drop: %" PRIu64 "/%" PRIu64,
top->drop, top->drop_total);
if (top->zero)
printed += scnprintf(bf + printed, size - printed, " [z]");
perf_top__reset_sample_counters(top);
}
return printed;
}
static inline void free_popup_options(char **options, int n)
{
int i;
for (i = 0; i < n; ++i)
zfree(&options[i]);
}
/*
 * Only runtime switching of the perf data file makes "input_name" point
 * to a malloc'ed buffer, so the "is_input_name_malloced" flag tells us
 * whether we need to free() the current "input_name" during the switch.
*/
static bool is_input_name_malloced = false;
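/*
 * Scan $PWD for up to 32 perf data files (recognized by their magic number),
 * offer them in a popup menu and, on selection, point input_name at the
 * chosen file so the caller can switch to it.
 */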
static int switch_data_file(void)
{
char *pwd, *options[32], *abs_path[32], *tmp;
DIR *pwd_dir;
int nr_options = 0, choice = -1, ret = -1;
struct dirent *dent;
pwd = getenv("PWD");
if (!pwd)
return ret;
pwd_dir = opendir(pwd);
if (!pwd_dir)
return ret;
memset(options, 0, sizeof(options));
memset(abs_path, 0, sizeof(abs_path));
while ((dent = readdir(pwd_dir))) {
char path[PATH_MAX];
u64 magic;
char *name = dent->d_name;
FILE *file;
if (!(dent->d_type == DT_REG))
continue;
snprintf(path, sizeof(path), "%s/%s", pwd, name);
file = fopen(path, "r");
if (!file)
continue;
if (fread(&magic, 1, 8, file) < 8)
goto close_file_and_continue;
if (is_perf_magic(magic)) {
options[nr_options] = strdup(name);
if (!options[nr_options])
goto close_file_and_continue;
abs_path[nr_options] = strdup(path);
if (!abs_path[nr_options]) {
zfree(&options[nr_options]);
ui__warning("Can't search all data files due to memory shortage.\n");
fclose(file);
break;
}
nr_options++;
}
close_file_and_continue:
fclose(file);
if (nr_options >= 32) {
ui__warning("Too many perf data files in PWD!\n"
"Only the first 32 files will be listed.\n");
break;
}
}
closedir(pwd_dir);
if (nr_options) {
choice = ui__popup_menu(nr_options, options);
if (choice < nr_options && choice >= 0) {
tmp = strdup(abs_path[choice]);
if (tmp) {
if (is_input_name_malloced)
free((void *)input_name);
input_name = tmp;
is_input_name_malloced = true;
ret = 0;
} else
ui__warning("Data switch failed due to memory shortage!\n");
}
}
free_popup_options(options, nr_options);
free_popup_options(abs_path, nr_options);
return ret;
}
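/*
 * One entry of the hists browser popup menu: the add_*_opt() helpers below
 * fill in the context and the ->fn() callback that is run when the user picks
 * that entry.
 */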
struct popup_action {
unsigned long time;
struct thread *thread;
struct map_symbol ms;
int socket;
struct evsel *evsel;
enum rstype rstype;
int (*fn)(struct hist_browser *browser, struct popup_action *act);
};
static int
do_annotate(struct hist_browser *browser, struct popup_action *act)
{
struct evsel *evsel;
struct annotation *notes;
struct hist_entry *he;
int err;
if (!browser->annotation_opts->objdump_path &&
perf_env__lookup_objdump(browser->env, &browser->annotation_opts->objdump_path))
return 0;
notes = symbol__annotation(act->ms.sym);
if (!notes->src)
return 0;
evsel = hists_to_evsel(browser->hists);
err = map_symbol__tui_annotate(&act->ms, evsel, browser->hbt,
browser->annotation_opts);
he = hist_browser__selected_entry(browser);
/*
 * Offer the option to annotate the other branch source or target
 * (if they exist) when returning from annotate.
*/
if ((err == 'q' || err == CTRL('c')) && he->branch_info)
return 1;
ui_browser__update_nr_entries(&browser->b, browser->hists->nr_entries);
if (err)
ui_browser__handle_resize(&browser->b);
return 0;
}
static int
add_annotate_opt(struct hist_browser *browser __maybe_unused,
struct popup_action *act, char **optstr,
struct map *map, struct symbol *sym)
{
if (sym == NULL || map->dso->annotate_warned)
return 0;
if (asprintf(optstr, "Annotate %s", sym->name) < 0)
return 0;
act->ms.map = map;
act->ms.sym = sym;
act->fn = do_annotate;
return 1;
}
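/*
 * Toggle the thread (or comm) filter for the current hists: entering the
 * zoom pushes the filter onto the browser's pstack so that ESC/left can
 * undo it later, leaving the zoom pops it and clears the helpline.
 */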
static int
do_zoom_thread(struct hist_browser *browser, struct popup_action *act)
{
struct thread *thread = act->thread;
if ((!hists__has(browser->hists, thread) &&
!hists__has(browser->hists, comm)) || thread == NULL)
return 0;
if (browser->hists->thread_filter) {
pstack__remove(browser->pstack, &browser->hists->thread_filter);
perf_hpp__set_elide(HISTC_THREAD, false);
thread__zput(browser->hists->thread_filter);
ui_helpline__pop();
} else {
if (hists__has(browser->hists, thread)) {
ui_helpline__fpush("To zoom out press ESC or ENTER + \"Zoom out of %s(%d) thread\"",
thread->comm_set ? thread__comm_str(thread) : "",
thread->tid);
} else {
ui_helpline__fpush("To zoom out press ESC or ENTER + \"Zoom out of %s thread\"",
thread->comm_set ? thread__comm_str(thread) : "");
}
browser->hists->thread_filter = thread__get(thread);
perf_hpp__set_elide(HISTC_THREAD, false);
pstack__push(browser->pstack, &browser->hists->thread_filter);
}
hists__filter_by_thread(browser->hists);
hist_browser__reset(browser);
return 0;
}
static int
add_thread_opt(struct hist_browser *browser, struct popup_action *act,
char **optstr, struct thread *thread)
{
int ret;
if ((!hists__has(browser->hists, thread) &&
!hists__has(browser->hists, comm)) || thread == NULL)
return 0;
if (hists__has(browser->hists, thread)) {
ret = asprintf(optstr, "Zoom %s %s(%d) thread",
browser->hists->thread_filter ? "out of" : "into",
thread->comm_set ? thread__comm_str(thread) : "",
thread->tid);
} else {
ret = asprintf(optstr, "Zoom %s %s thread",
browser->hists->thread_filter ? "out of" : "into",
thread->comm_set ? thread__comm_str(thread) : "");
}
if (ret < 0)
return 0;
act->thread = thread;
act->fn = do_zoom_thread;
return 1;
}
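/* Same as do_zoom_thread(), but filtering by the DSO of the selected map. */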
static int
do_zoom_dso(struct hist_browser *browser, struct popup_action *act)
{
struct map *map = act->ms.map;
if (!hists__has(browser->hists, dso) || map == NULL)
return 0;
if (browser->hists->dso_filter) {
pstack__remove(browser->pstack, &browser->hists->dso_filter);
perf_hpp__set_elide(HISTC_DSO, false);
browser->hists->dso_filter = NULL;
ui_helpline__pop();
} else {
ui_helpline__fpush("To zoom out press ESC or ENTER + \"Zoom out of %s DSO\"",
__map__is_kernel(map) ? "the Kernel" : map->dso->short_name);
browser->hists->dso_filter = map->dso;
perf_hpp__set_elide(HISTC_DSO, true);
pstack__push(browser->pstack, &browser->hists->dso_filter);
}
hists__filter_by_dso(browser->hists);
hist_browser__reset(browser);
return 0;
}
static int
add_dso_opt(struct hist_browser *browser, struct popup_action *act,
char **optstr, struct map *map)
{
if (!hists__has(browser->hists, dso) || map == NULL)
return 0;
if (asprintf(optstr, "Zoom %s %s DSO",
browser->hists->dso_filter ? "out of" : "into",
__map__is_kernel(map) ? "the Kernel" : map->dso->short_name) < 0)
return 0;
act->ms.map = map;
act->fn = do_zoom_dso;
return 1;
}
static int
do_browse_map(struct hist_browser *browser __maybe_unused,
struct popup_action *act)
{
map__browse(act->ms.map);
return 0;
}
static int
add_map_opt(struct hist_browser *browser,
struct popup_action *act, char **optstr, struct map *map)
{
if (!hists__has(browser->hists, dso) || map == NULL)
return 0;
if (asprintf(optstr, "Browse map details") < 0)
return 0;
act->ms.map = map;
act->fn = do_browse_map;
return 1;
}
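/*
 * Build the option string for 'perf script' from the current selection and
 * hand it to script_browse(): -c <comm> for a thread, -S <symbol> for a
 * symbol, plus an optional --time range when a time slice was requested.
 * The result looks roughly like (illustrative values):
 *
 *	" -c firefox  --time 1234567.123456,1234567.223456"
 */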
static int
do_run_script(struct hist_browser *browser __maybe_unused,
struct popup_action *act)
{
char *script_opt;
int len;
int n = 0;
len = 100;
if (act->thread)
len += strlen(thread__comm_str(act->thread));
else if (act->ms.sym)
len += strlen(act->ms.sym->name);
script_opt = malloc(len);
if (!script_opt)
return -1;
script_opt[0] = 0;
if (act->thread) {
n = scnprintf(script_opt, len, " -c %s ",
thread__comm_str(act->thread));
} else if (act->ms.sym) {
n = scnprintf(script_opt, len, " -S %s ",
act->ms.sym->name);
}
if (act->time) {
char start[32], end[32];
unsigned long starttime = act->time;
unsigned long endtime = act->time + symbol_conf.time_quantum;
if (starttime == endtime) { /* Display 1ms as fallback */
starttime -= 1*NSEC_PER_MSEC;
endtime += 1*NSEC_PER_MSEC;
}
timestamp__scnprintf_usec(starttime, start, sizeof start);
timestamp__scnprintf_usec(endtime, end, sizeof end);
n += snprintf(script_opt + n, len - n, " --time %s,%s", start, end);
}
script_browse(script_opt, act->evsel);
free(script_opt);
return 0;
}
static int
do_res_sample_script(struct hist_browser *browser,
struct popup_action *act)
{
struct hist_entry *he;
he = hist_browser__selected_entry(browser);
res_sample_browse(he->res_samples, he->num_res, act->evsel, act->rstype);
return 0;
}
static int
add_script_opt_2(struct hist_browser *browser __maybe_unused,
struct popup_action *act, char **optstr,
struct thread *thread, struct symbol *sym,
struct evsel *evsel, const char *tstr)
{
if (thread) {
if (asprintf(optstr, "Run scripts for samples of thread [%s]%s",
thread__comm_str(thread), tstr) < 0)
return 0;
} else if (sym) {
if (asprintf(optstr, "Run scripts for samples of symbol [%s]%s",
sym->name, tstr) < 0)
return 0;
} else {
if (asprintf(optstr, "Run scripts for all samples%s", tstr) < 0)
return 0;
}
act->thread = thread;
act->ms.sym = sym;
act->evsel = evsel;
act->fn = do_run_script;
return 1;
}
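/*
 * Add "Run scripts ..." entries to the popup menu. When sorting by time a
 * second, time-bounded variant is added as well, so the caller must leave
 * room for two consecutive options/actions slots.
 */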
static int
add_script_opt(struct hist_browser *browser,
struct popup_action *act, char **optstr,
struct thread *thread, struct symbol *sym,
struct evsel *evsel)
{
int n, j;
struct hist_entry *he;
n = add_script_opt_2(browser, act, optstr, thread, sym, evsel, "");
he = hist_browser__selected_entry(browser);
if (sort_order && strstr(sort_order, "time")) {
char tstr[128];
optstr++;
act++;
j = sprintf(tstr, " in ");
j += timestamp__scnprintf_usec(he->time, tstr + j,
sizeof tstr - j);
j += sprintf(tstr + j, "-");
timestamp__scnprintf_usec(he->time + symbol_conf.time_quantum,
tstr + j, sizeof tstr - j);
n += add_script_opt_2(browser, act, optstr, thread, sym,
evsel, tstr);
act->time = he->time;
}
return n;
}
static int
add_res_sample_opt(struct hist_browser *browser __maybe_unused,
struct popup_action *act, char **optstr,
struct res_sample *res_sample,
struct evsel *evsel,
enum rstype type)
{
if (!res_sample)
return 0;
if (asprintf(optstr, "Show context for individual samples %s",
type == A_ASM ? "with assembler" :
type == A_SOURCE ? "with source" : "") < 0)
return 0;
act->fn = do_res_sample_script;
act->evsel = evsel;
act->rstype = type;
return 1;
}
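/*
 * Let the user switch to another perf data file found in $PWD. On success
 * K_SWITCH_INPUT_DATA is propagated so the browsing loops exit and the
 * calling tool can load the newly selected file.
 */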
static int
do_switch_data(struct hist_browser *browser __maybe_unused,
struct popup_action *act __maybe_unused)
{
if (switch_data_file()) {
ui__warning("Won't switch the data files due to\n"
"no valid data file get selected!\n");
return 0;
}
return K_SWITCH_INPUT_DATA;
}
static int
add_switch_opt(struct hist_browser *browser,
struct popup_action *act, char **optstr)
{
if (!is_report_browser(browser->hbt))
return 0;
if (asprintf(optstr, "Switch to another data file in PWD") < 0)
return 0;
act->fn = do_switch_data;
return 1;
}
static int
do_exit_browser(struct hist_browser *browser __maybe_unused,
struct popup_action *act __maybe_unused)
{
return 0;
}
static int
add_exit_opt(struct hist_browser *browser __maybe_unused,
struct popup_action *act, char **optstr)
{
if (asprintf(optstr, "Exit") < 0)
return 0;
act->fn = do_exit_browser;
return 1;
}
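/* Same as do_zoom_thread(), but filtering by processor socket. */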
static int
do_zoom_socket(struct hist_browser *browser, struct popup_action *act)
{
if (!hists__has(browser->hists, socket) || act->socket < 0)
return 0;
if (browser->hists->socket_filter > -1) {
pstack__remove(browser->pstack, &browser->hists->socket_filter);
browser->hists->socket_filter = -1;
perf_hpp__set_elide(HISTC_SOCKET, false);
} else {
browser->hists->socket_filter = act->socket;
perf_hpp__set_elide(HISTC_SOCKET, true);
pstack__push(browser->pstack, &browser->hists->socket_filter);
}
hists__filter_by_socket(browser->hists);
hist_browser__reset(browser);
return 0;
}
static int
add_socket_opt(struct hist_browser *browser, struct popup_action *act,
char **optstr, int socket_id)
{
if (!hists__has(browser->hists, socket) || socket_id < 0)
return 0;
if (asprintf(optstr, "Zoom %s Processor Socket %d",
(browser->hists->socket_filter > -1) ? "out of" : "into",
socket_id) < 0)
return 0;
act->socket = socket_id;
act->fn = do_zoom_socket;
return 1;
}
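/*
 * Recount the entries that pass the min_pcnt filter and update the browser's
 * entry counters; the cheap path reuses the hists counter when no percent
 * limit or hierarchy mode is in effect.
 */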
static void hist_browser__update_nr_entries(struct hist_browser *hb)
{
u64 nr_entries = 0;
struct rb_node *nd = rb_first_cached(&hb->hists->entries);
if (hb->min_pcnt == 0 && !symbol_conf.report_hierarchy) {
hb->nr_non_filtered_entries = hb->hists->nr_non_filtered_entries;
return;
}
while ((nd = hists__filter_entries(nd, hb->min_pcnt)) != NULL) {
nr_entries++;
nd = rb_hierarchy_next(nd);
}
hb->nr_non_filtered_entries = nr_entries;
hb->nr_hierarchy_entries = nr_entries;
}
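/*
 * Apply a new percent limit ('L' hotkey): update min_pcnt, re-sort the
 * callchains of the remaining entries against the new threshold and reset
 * their folding state.
 */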
static void hist_browser__update_percent_limit(struct hist_browser *hb,
double percent)
{
struct hist_entry *he;
struct rb_node *nd = rb_first_cached(&hb->hists->entries);
u64 total = hists__total_period(hb->hists);
u64 min_callchain_hits = total * (percent / 100);
hb->min_pcnt = callchain_param.min_percent = percent;
while ((nd = hists__filter_entries(nd, hb->min_pcnt)) != NULL) {
he = rb_entry(nd, struct hist_entry, rb_node);
if (he->has_no_entry) {
he->has_no_entry = false;
he->nr_rows = 0;
}
if (!he->leaf || !hist_entry__has_callchains(he) || !symbol_conf.use_callchain)
goto next;
if (callchain_param.mode == CHAIN_GRAPH_REL) {
total = he->stat.period;
if (symbol_conf.cumulate_callchain)
total = he->stat_acc->period;
min_callchain_hits = total * (percent / 100);
}
callchain_param.sort(&he->sorted_chain, he->callchain,
min_callchain_hits, &callchain_param);
next:
nd = __rb_hierarchy_next(nd, HMD_FORCE_CHILD);
/* force to re-evaluate folding state of callchains */
he->init_have_children = false;
hist_entry__set_folding(he, hb, false);
}
}
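/*
 * Main interaction loop for a single evsel's hists: runs the hist browser,
 * handles the hotkeys documented in report_help/top_help below and builds
 * the context menu (annotate, zoom, scripts, data file switch, ...) on
 * ENTER/'m'.
 */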
static int perf_evsel__hists_browse(struct evsel *evsel, int nr_events,
const char *helpline,
bool left_exits,
struct hist_browser_timer *hbt,
float min_pcnt,
struct perf_env *env,
bool warn_lost_event,
struct annotation_options *annotation_opts)
{
struct hists *hists = evsel__hists(evsel);
struct hist_browser *browser = perf_evsel_browser__new(evsel, hbt, env, annotation_opts);
struct branch_info *bi = NULL;
#define MAX_OPTIONS 16
char *options[MAX_OPTIONS];
struct popup_action actions[MAX_OPTIONS];
int nr_options = 0;
int key = -1;
char buf[64];
int delay_secs = hbt ? hbt->refresh : 0;
#define HIST_BROWSER_HELP_COMMON \
"h/?/F1 Show this window\n" \
"UP/DOWN/PGUP\n" \
"PGDN/SPACE Navigate\n" \
"q/ESC/CTRL+C Exit browser or go back to previous screen\n\n" \
"For multiple event sessions:\n\n" \
"TAB/UNTAB Switch events\n\n" \
"For symbolic views (--sort has sym):\n\n" \
"ENTER Zoom into DSO/Threads & Annotate current symbol\n" \
"ESC Zoom out\n" \
"a Annotate current symbol\n" \
"C Collapse all callchains\n" \
"d Zoom into current DSO\n" \
"E Expand all callchains\n" \
"F Toggle percentage of filtered entries\n" \
"H Display column headers\n" \
"L Change percent limit\n" \
"m Display context menu\n" \
"S Zoom into current Processor Socket\n" \
/* help messages are sorted by lexical order of the hotkey */
static const char report_help[] = HIST_BROWSER_HELP_COMMON
"i Show header information\n"
"P Print histograms to perf.hist.N\n"
"r Run available scripts\n"
"s Switch to another data file in PWD\n"
"t Zoom into current Thread\n"
"V Verbose (DSO names in callchains, etc)\n"
"/ Filter symbol by name";
static const char top_help[] = HIST_BROWSER_HELP_COMMON
"P Print histograms to perf.hist.N\n"
"t Zoom into current Thread\n"
"V Verbose (DSO names in callchains, etc)\n"
"z Toggle zeroing of samples\n"
"f Enable/Disable events\n"
"/ Filter symbol by name";
if (browser == NULL)
return -1;
/* reset abort key so that it can get Ctrl-C as a key */
SLang_reset_tty();
SLang_init_tty(0, 0, 0);
if (min_pcnt)
browser->min_pcnt = min_pcnt;
hist_browser__update_nr_entries(browser);
browser->pstack = pstack__new(3);
if (browser->pstack == NULL)
goto out;
ui_helpline__push(helpline);
memset(options, 0, sizeof(options));
memset(actions, 0, sizeof(actions));
if (symbol_conf.col_width_list_str)
perf_hpp__set_user_width(symbol_conf.col_width_list_str);
if (!is_report_browser(hbt))
browser->b.no_samples_msg = "Collecting samples...";
while (1) {
struct thread *thread = NULL;
struct map *map = NULL;
int choice = 0;
		int socket_id = -1;
nr_options = 0;
key = hist_browser__run(browser, helpline,
warn_lost_event);
if (browser->he_selection != NULL) {
thread = hist_browser__selected_thread(browser);
map = browser->selection->map;
			socket_id = browser->he_selection->socket;
}
switch (key) {
case K_TAB:
case K_UNTAB:
if (nr_events == 1)
continue;
/*
* Exit the browser, let hists__browser_tree
* go to the next or previous
*/
goto out_free_stack;
case 'a':
if (!hists__has(hists, sym)) {
ui_browser__warning(&browser->b, delay_secs * 2,
"Annotation is only available for symbolic views, "
"include \"sym*\" in --sort to use it.");
continue;
}
if (browser->selection == NULL ||
browser->selection->sym == NULL ||
browser->selection->map->dso->annotate_warned)
continue;
actions->ms.map = browser->selection->map;
actions->ms.sym = browser->selection->sym;
do_annotate(browser, actions);
continue;
case 'P':
hist_browser__dump(browser);
continue;
case 'd':
actions->ms.map = map;
do_zoom_dso(browser, actions);
continue;
case 'V':
verbose = (verbose + 1) % 4;
browser->show_dso = verbose > 0;
ui_helpline__fpush("Verbosity level set to %d\n",
verbose);
continue;
case 't':
actions->thread = thread;
do_zoom_thread(browser, actions);
continue;
case 'S':
			actions->socket = socket_id;
do_zoom_socket(browser, actions);
continue;
case '/':
if (ui_browser__input_window("Symbol to show",
"Please enter the name of symbol you want to see.\n"
"To remove the filter later, press / + ENTER.",
buf, "ENTER: OK, ESC: Cancel",
delay_secs * 2) == K_ENTER) {
hists->symbol_filter_str = *buf ? buf : NULL;
hists__filter_by_symbol(hists);
hist_browser__reset(browser);
}
continue;
case 'r':
if (is_report_browser(hbt)) {
actions->thread = NULL;
actions->ms.sym = NULL;
do_run_script(browser, actions);
}
continue;
case 's':
if (is_report_browser(hbt)) {
key = do_switch_data(browser, actions);
if (key == K_SWITCH_INPUT_DATA)
goto out_free_stack;
}
continue;
case 'i':
/* env->arch is NULL for live-mode (i.e. perf top) */
if (env->arch)
tui__header_window(env);
continue;
case 'F':
symbol_conf.filter_relative ^= 1;
continue;
case 'z':
if (!is_report_browser(hbt)) {
struct perf_top *top = hbt->arg;
top->zero = !top->zero;
}
continue;
case 'L':
if (ui_browser__input_window("Percent Limit",
"Please enter the value you want to hide entries under that percent.",
buf, "ENTER: OK, ESC: Cancel",
delay_secs * 2) == K_ENTER) {
char *end;
double new_percent = strtod(buf, &end);
if (new_percent < 0 || new_percent > 100) {
ui_browser__warning(&browser->b, delay_secs * 2,
"Invalid percent: %.2f", new_percent);
continue;
}
hist_browser__update_percent_limit(browser, new_percent);
hist_browser__reset(browser);
}
continue;
case K_F1:
case 'h':
case '?':
ui_browser__help_window(&browser->b,
is_report_browser(hbt) ? report_help : top_help);
continue;
case K_ENTER:
case K_RIGHT:
case 'm':
/* menu */
break;
case K_ESC:
case K_LEFT: {
const void *top;
if (pstack__empty(browser->pstack)) {
/*
* Go back to the perf_evsel_menu__run or other user
*/
if (left_exits)
goto out_free_stack;
if (key == K_ESC &&
ui_browser__dialog_yesno(&browser->b,
"Do you really want to exit?"))
goto out_free_stack;
continue;
}
top = pstack__peek(browser->pstack);
if (top == &browser->hists->dso_filter) {
/*
* No need to set actions->dso here since
* it's just to remove the current filter.
* Ditto for thread below.
*/
do_zoom_dso(browser, actions);
} else if (top == &browser->hists->thread_filter) {
do_zoom_thread(browser, actions);
} else if (top == &browser->hists->socket_filter) {
do_zoom_socket(browser, actions);
}
continue;
}
case 'q':
case CTRL('c'):
goto out_free_stack;
case 'f':
if (!is_report_browser(hbt)) {
struct perf_top *top = hbt->arg;
perf_evlist__toggle_enable(top->evlist);
/*
* No need to refresh, resort/decay histogram
* entries if we are not collecting samples:
*/
if (top->evlist->enabled) {
helpline = "Press 'f' to disable the events or 'h' to see other hotkeys";
hbt->refresh = delay_secs;
} else {
helpline = "Press 'f' again to re-enable the events";
hbt->refresh = 0;
}
continue;
}
/* Fall thru */
default:
helpline = "Press '?' for help on key bindings";
continue;
}
if (!hists__has(hists, sym) || browser->selection == NULL)
goto skip_annotation;
if (sort__mode == SORT_MODE__BRANCH) {
if (browser->he_selection)
bi = browser->he_selection->branch_info;
if (bi == NULL)
goto skip_annotation;
nr_options += add_annotate_opt(browser,
&actions[nr_options],
&options[nr_options],
bi->from.map,
bi->from.sym);
if (bi->to.sym != bi->from.sym)
nr_options += add_annotate_opt(browser,
&actions[nr_options],
&options[nr_options],
bi->to.map,
bi->to.sym);
} else {
nr_options += add_annotate_opt(browser,
&actions[nr_options],
&options[nr_options],
browser->selection->map,
browser->selection->sym);
}
skip_annotation:
nr_options += add_thread_opt(browser, &actions[nr_options],
&options[nr_options], thread);
nr_options += add_dso_opt(browser, &actions[nr_options],
&options[nr_options], map);
nr_options += add_map_opt(browser, &actions[nr_options],
&options[nr_options],
browser->selection ?
browser->selection->map : NULL);
nr_options += add_socket_opt(browser, &actions[nr_options],
&options[nr_options],
					     socket_id);
/* perf script support */
if (!is_report_browser(hbt))
goto skip_scripting;
if (browser->he_selection) {
if (hists__has(hists, thread) && thread) {
nr_options += add_script_opt(browser,
&actions[nr_options],
&options[nr_options],
thread, NULL, evsel);
}
/*
* Note that browser->selection != NULL
* when browser->he_selection is not NULL,
* so we don't need to check browser->selection
* before fetching browser->selection->sym like what
* we do before fetching browser->selection->map.
*
* See hist_browser__show_entry.
*/
if (hists__has(hists, sym) && browser->selection->sym) {
nr_options += add_script_opt(browser,
&actions[nr_options],
&options[nr_options],
NULL, browser->selection->sym,
evsel);
}
}
nr_options += add_script_opt(browser, &actions[nr_options],
&options[nr_options], NULL, NULL, evsel);
nr_options += add_res_sample_opt(browser, &actions[nr_options],
&options[nr_options],
hist_browser__selected_entry(browser)->res_samples,
evsel, A_NORMAL);
nr_options += add_res_sample_opt(browser, &actions[nr_options],
&options[nr_options],
hist_browser__selected_entry(browser)->res_samples,
evsel, A_ASM);
nr_options += add_res_sample_opt(browser, &actions[nr_options],
&options[nr_options],
hist_browser__selected_entry(browser)->res_samples,
evsel, A_SOURCE);
nr_options += add_switch_opt(browser, &actions[nr_options],
&options[nr_options]);
skip_scripting:
nr_options += add_exit_opt(browser, &actions[nr_options],
&options[nr_options]);
do {
struct popup_action *act;
choice = ui__popup_menu(nr_options, options);
if (choice == -1 || choice >= nr_options)
break;
act = &actions[choice];
key = act->fn(browser, act);
} while (key == 1);
if (key == K_SWITCH_INPUT_DATA)
break;
}
out_free_stack:
pstack__delete(browser->pstack);
out:
hist_browser__delete(browser);
free_popup_options(options, MAX_OPTIONS);
return key;
}
struct evsel_menu {
struct ui_browser b;
struct evsel *selection;
struct annotation_options *annotation_opts;
bool lost_events, lost_events_warned;
float min_pcnt;
struct perf_env *env;
};
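/*
 * Render one line of the "Available samples" event menu: sample count
 * (summed over group members for group events), event name and, when
 * applicable, a highlighted note about lost chunks.
 */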
static void perf_evsel_menu__write(struct ui_browser *browser,
void *entry, int row)
{
struct evsel_menu *menu = container_of(browser,
struct evsel_menu, b);
struct evsel *evsel = list_entry(entry, struct evsel, core.node);
struct hists *hists = evsel__hists(evsel);
bool current_entry = ui_browser__is_current_entry(browser, row);
unsigned long nr_events = hists->stats.nr_events[PERF_RECORD_SAMPLE];
const char *ev_name = perf_evsel__name(evsel);
char bf[256], unit;
const char *warn = " ";
size_t printed;
ui_browser__set_color(browser, current_entry ? HE_COLORSET_SELECTED :
HE_COLORSET_NORMAL);
if (perf_evsel__is_group_event(evsel)) {
struct evsel *pos;
ev_name = perf_evsel__group_name(evsel);
for_each_group_member(pos, evsel) {
struct hists *pos_hists = evsel__hists(pos);
nr_events += pos_hists->stats.nr_events[PERF_RECORD_SAMPLE];
}
}
nr_events = convert_unit(nr_events, &unit);
printed = scnprintf(bf, sizeof(bf), "%lu%c%s%s", nr_events,
unit, unit == ' ' ? "" : " ", ev_name);
ui_browser__printf(browser, "%s", bf);
nr_events = hists->stats.nr_events[PERF_RECORD_LOST];
if (nr_events != 0) {
menu->lost_events = true;
if (!current_entry)
ui_browser__set_color(browser, HE_COLORSET_TOP);
nr_events = convert_unit(nr_events, &unit);
printed += scnprintf(bf, sizeof(bf), ": %ld%c%schunks LOST!",
nr_events, unit, unit == ' ' ? "" : " ");
warn = bf;
}
ui_browser__write_nstring(browser, warn, browser->width - printed);
if (current_entry)
menu->selection = evsel;
}
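/*
 * Event selection menu used when more than one event was sampled: ENTER
 * drills down into that event's hists browser, TAB/UNTAB cycle through the
 * events without going back to the menu.
 */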
static int perf_evsel_menu__run(struct evsel_menu *menu,
int nr_events, const char *help,
struct hist_browser_timer *hbt,
bool warn_lost_event)
{
struct evlist *evlist = menu->b.priv;
struct evsel *pos;
const char *title = "Available samples";
int delay_secs = hbt ? hbt->refresh : 0;
int key;
if (ui_browser__show(&menu->b, title,
"ESC: exit, ENTER|->: Browse histograms") < 0)
return -1;
while (1) {
key = ui_browser__run(&menu->b, delay_secs);
switch (key) {
case K_TIMER:
if (hbt)
hbt->timer(hbt->arg);
if (!menu->lost_events_warned &&
menu->lost_events &&
warn_lost_event) {
ui_browser__warn_lost_events(&menu->b);
menu->lost_events_warned = true;
}
continue;
case K_RIGHT:
case K_ENTER:
if (!menu->selection)
continue;
pos = menu->selection;
browse_hists:
perf_evlist__set_selected(evlist, pos);
/*
* Give the calling tool a chance to populate the non
* default evsel resorted hists tree.
*/
if (hbt)
hbt->timer(hbt->arg);
key = perf_evsel__hists_browse(pos, nr_events, help,
true, hbt,
menu->min_pcnt,
menu->env,
warn_lost_event,
menu->annotation_opts);
ui_browser__show_title(&menu->b, title);
switch (key) {
case K_TAB:
if (pos->core.node.next == &evlist->core.entries)
pos = perf_evlist__first(evlist);
else
pos = perf_evsel__next(pos);
goto browse_hists;
case K_UNTAB:
if (pos->core.node.prev == &evlist->core.entries)
pos = perf_evlist__last(evlist);
else
pos = perf_evsel__prev(pos);
goto browse_hists;
case K_SWITCH_INPUT_DATA:
case 'q':
case CTRL('c'):
goto out;
case K_ESC:
default:
continue;
}
case K_LEFT:
continue;
case K_ESC:
if (!ui_browser__dialog_yesno(&menu->b,
"Do you really want to exit?"))
continue;
/* Fall thru */
case 'q':
case CTRL('c'):
goto out;
default:
continue;
}
}
out:
ui_browser__hide(&menu->b);
return key;
}
static bool filter_group_entries(struct ui_browser *browser __maybe_unused,
void *entry)
{
struct evsel *evsel = list_entry(entry, struct evsel, core.node);
if (symbol_conf.event_group && !perf_evsel__is_group_leader(evsel))
return true;
return false;
}
static int __perf_evlist__tui_browse_hists(struct evlist *evlist,
int nr_entries, const char *help,
struct hist_browser_timer *hbt,
float min_pcnt,
struct perf_env *env,
bool warn_lost_event,
struct annotation_options *annotation_opts)
{
struct evsel *pos;
struct evsel_menu menu = {
.b = {
.entries = &evlist->core.entries,
.refresh = ui_browser__list_head_refresh,
.seek = ui_browser__list_head_seek,
.write = perf_evsel_menu__write,
.filter = filter_group_entries,
.nr_entries = nr_entries,
.priv = evlist,
},
.min_pcnt = min_pcnt,
.env = env,
.annotation_opts = annotation_opts,
};
ui_helpline__push("Press ESC to exit");
evlist__for_each_entry(evlist, pos) {
const char *ev_name = perf_evsel__name(pos);
size_t line_len = strlen(ev_name) + 7;
if (menu.b.width < line_len)
menu.b.width = line_len;
}
return perf_evsel_menu__run(&menu, nr_entries, help,
hbt, warn_lost_event);
}
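/*
 * TUI entry point used by tools such as 'perf report' and 'perf top': with a
 * single event (or a single group leader when event groups are displayed) go
 * straight to the hists browser, otherwise present the event menu first.
 *
 * Illustrative call, argument names are hypothetical:
 *
 *	key = perf_evlist__tui_browse_hists(evlist, help, hbt, min_pcnt,
 *					    &session->header.env, true,
 *					    &annotation_opts);
 */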
int perf_evlist__tui_browse_hists(struct evlist *evlist, const char *help,
struct hist_browser_timer *hbt,
float min_pcnt,
struct perf_env *env,
bool warn_lost_event,
struct annotation_options *annotation_opts)
{
int nr_entries = evlist->core.nr_entries;
single_entry:
if (nr_entries == 1) {
struct evsel *first = perf_evlist__first(evlist);
return perf_evsel__hists_browse(first, nr_entries, help,
false, hbt, min_pcnt,
env, warn_lost_event,
annotation_opts);
}
if (symbol_conf.event_group) {
struct evsel *pos;
nr_entries = 0;
evlist__for_each_entry(evlist, pos) {
if (perf_evsel__is_group_leader(pos))
nr_entries++;
}
if (nr_entries == 1)
goto single_entry;
}
return __perf_evlist__tui_browse_hists(evlist, nr_entries, help,
hbt, min_pcnt, env,
warn_lost_event,
annotation_opts);
}