Commit graph

Author SHA1 Message Date
austin_zhang@linux.intel.com f7e7ee3675 perf record: Fix existing process callgraph symbol
When running 'perf record -g' on an existing process, even with
debuginfo packages installed, 'perf report' still cannot resolve
symbols.

To reproduce:

 perf record -g -p `pidof xxx` -f
 perf report

    68.26%    :1181           b74870f2  [.] 0x000000b74870f2
              |
              |--32.09%-- 0xb73b5b44
              |          0xb7487102
              |          0xb748a4e2
              |          0xb748633d
              |          0xb73b41cd
              |          0xb73b4467
              |          0xb747d531

The reason is that, for an existing process, the pid used in
__cmd_record() is 0 rather than the existing process id.

Signed-off-by: Austin Zhang <austin_zhang@linux.intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <4710.10.255.24.35.1265389362.squirrel@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-08 16:55:52 +01:00
Masami Hiramatsu 076dc4a65a x86/alternatives: Fix build warning
Fixes these warnings:

 arch/x86/kernel/alternative.c: In function 'alternatives_text_reserved':
 arch/x86/kernel/alternative.c:402: warning: comparison of distinct pointer types lacks a cast
 arch/x86/kernel/alternative.c:402: warning: comparison of distinct pointer types lacks a cast
 arch/x86/kernel/alternative.c:405: warning: comparison of distinct pointer types lacks a cast
 arch/x86/kernel/alternative.c:405: warning: comparison of distinct pointer types lacks a cast

Caused by:

  2cfa197: ftrace/alternatives: Introducing *_text_reserved functions

Changes in v2:
  - Use local variables to compare, instead of type casts.
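
As an illustrative sketch of the warning and of the local-variable fix
(the struct and function below are made up for illustration; this is
not the actual alternative.c code):

  /* Comparing pointers of different types triggers "comparison of
   * distinct pointer types lacks a cast"; local variables of a single
   * type avoid both the warning and the casts. */
  struct text_range {
          unsigned char *begin, *end;
  };

  static int range_overlaps(const struct text_range *r, void *start, void *end)
  {
          unsigned char *s = start;       /* instead of casting at every */
          unsigned char *e = end;         /* comparison site             */

          return r->begin <= e && r->end >= s;
  }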

Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: systemtap <systemtap@sources.redhat.com>
Cc: DLE <dle-develop@lists.sourceforge.net>
LKML-Reference: <20100205171647.15750.37221.stgit@dhcp-100-2-132.bos.redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-07 18:08:24 +01:00
Arnaldo Carvalho de Melo 5f48536436 perf top: Use address pattern in lookup_sym_source
Because we may have aliases, like __GI___strcoll_l in
/lib64/libc-2.10.2.so, which appears in objdump as:

$ objdump --start-address=0x0000003715a86420 \
           --stop-address=0x0000003715a872dc -dS /lib64/libc-2.10.2.so

0000003715a86420 <__strcoll_l>:
  3715a86420:	55                   	push   %rbp
  3715a86421:	48 89 e5             	mov    %rsp,%rbp
  3715a86424:	41 57                	push   %r15
[root@doppio linux-2.6-tip]#

So look for the address exactly at the start of the line instead, so
that annotation can work in these cases.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Kirill Smelkov <kirr@landau.phys.spbu.ru>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1265550376-12665-2-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-07 17:30:21 +01:00
Kirill Smelkov ee11b90b12 perf top: Fix annotate for userspace
First, for programs and prelinked libraries, the annotate code was
fooled by objdump output IPs (src->eip in the code) being wrongly
converted to absolute IPs. In such cases no conversion was needed,
but in

   src->eip = strtoull(src->line, NULL, 16);
   src->eip = map->unmap_ip(map, src->eip); // = eip + map->start - map->pgoff

we were reading an absolute address from objdump (e.g. 8048604) and
then almost doubling it, because eip and map->start are of similar
magnitude for small programs.

Needless to say, later on in record_precise_ip() there was no
matching with the real runtime IPs.

And second, as with `perf annotate`, the problem with non-prelinked
*.so was that we were doing the rip -> objdump address conversion
wrong.

Also, because unlike `perf annotate`, the `perf top` code does
annotation based on absolute IPs for performance reasons(*), a new
helper for mapping objdump addresses to IPs is introduced.

(*) we get samples info in absolute IPs, and since we do lots of
    hit-testing on absolute IPs at runtime in record_precise_ip(), it's
    better to convert objdump addresses to IPs once and do no conversion
    at runtime.

I also had to fix how objdump output is parsed: it used a hardcoded
8/16-character format, which was inappropriate for ET_DYN DSOs with
small addresses like '4ac'.

Also note that not all objdump output lines have associated IPs;
e.g. look at the source lines here:

    000004ac <my_strlen>:
    extern "C"
    int my_strlen(const char *s)
     4ac:   55                      push   %ebp
     4ad:   89 e5                   mov    %esp,%ebp
     4af:   83 ec 10                sub    $0x10,%esp
    {
        int len = 0;
     4b2:   c7 45 fc 00 00 00 00    movl   $0x0,-0x4(%ebp)
     4b9:   eb 08                   jmp    4c3 <my_strlen+0x17>

        while (*s) {
            ++len;
     4bb:   83 45 fc 01             addl   $0x1,-0x4(%ebp)
            ++s;
     4bf:   83 45 08 01             addl   $0x1,0x8(%ebp)

So we mark them with eip=0 and ignore such lines in the annotate
lookup code.
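
A minimal standalone sketch of that parsing rule (not the actual perf
parser code):

  #include <stdlib.h>

  /* Lines that do not begin with a "<hex-address>:" prefix are pure
   * source lines; return eip = 0 so the annotate lookup skips them. */
  static unsigned long long line_eip(const char *line)
  {
          char *end;
          unsigned long long eip = strtoull(line, &end, 16);

          if (end == line || *end != ':')
                  return 0;
          return eip;
  }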

Signed-off-by: Kirill Smelkov <kirr@landau.phys.spbu.ru>
[ Note: one hunk of this patch was applied by Mike in 57d8188 ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <1265550376-12665-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-07 17:30:20 +01:00
Masami Hiramatsu 5ecaafdbf4 kprobes: Add mcount to the kprobes blacklist
Since the mcount function can be called from everywhere, it should
be blacklisted. Moreover, "mcount" is a special symbol name, so it is
better to put it in the generic blacklist.
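
Schematically (a made-up table and helper for illustration, not the
kernel's actual blacklist structures):

  #include <string.h>

  /* "mcount" now sits in the generic, name-based blacklist. */
  static const char *const blacklist[] = { "mcount", /* ... */ NULL };

  static int is_blacklisted(const char *sym)
  {
          int i;

          for (i = 0; blacklist[i]; i++)
                  if (!strcmp(sym, blacklist[i]))
                          return 1;
          return 0;
  }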

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: systemtap <systemtap@sources.redhat.com>
Cc: DLE <dle-develop@lists.sourceforge.net>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <20100205062433.3745.36726.stgit@dhcp-100-2-132.bos.redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-05 08:13:57 +01:00
Ingo Molnar 2161db9693 perf tools: Fix session init on non-modular kernels
perf top and perf record refuse to initialize on non-modular
kernels:

 $ perf top -v
  map_groups__set_modules_path_dir: cannot open /lib/modules/2.6.33-rc6-tip-00586-g398dde3-dirty/

Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1265223128-11786-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 10:22:01 +01:00
Xiao Guangrong f887f3019e perf tools: Clean up O_LARGEFILE et al usage
Setting _FILE_OFFSET_BITS and also using O_LARGEFILE, lseek64, etc.
is redundant. Thanks to H. Peter Anvin for pointing it out.

So, this patch removes O_LARGEFILE, lseek64, etc.

Suggested-by: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <4B6A8972.3070605@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 10:03:03 +01:00
Stephane Eranian 447a194b39 perf_events, x86: Fix bug in hw_perf_enable()
We cannot assume that because hwc->idx == assign[i], we can avoid
reprogramming the counter in hw_perf_enable().

The event may have been scheduled out and another event may have been
programmed into this counter. Thus, we need a more robust way of
verifying if the counter still contains config/data related to an event.

This patch adds a generation number to each counter on each cpu.
Using this mechanism we can verify reliably whether the content of a
counter corresponds to an event.
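
A rough sketch of the mechanism (the types and field names below are
illustrative, not the actual x86 perf_event code):

  #define NUM_COUNTERS 64

  struct cpu_hw_sketch {
          unsigned long tags[NUM_COUNTERS];   /* bumped on every assignment */
  };

  struct hw_event_sketch {
          int           idx;      /* counter the event was last put on  */
          unsigned long tag;      /* generation observed at that moment */
  };

  /* Only skip reprogramming when both the slot and its generation match. */
  static int still_programmed(const struct cpu_hw_sketch *cpuc,
                              const struct hw_event_sketch *hwc, int new_idx)
  {
          return hwc->idx == new_idx && hwc->tag == cpuc->tags[new_idx];
  }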

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <4b66dc67.0b38560a.1635.ffffae18@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:59:50 +01:00
Peter Zijlstra fce877e3a4 bitops: Ensure the compile time HWEIGHT is only used for such
Avoid accidental misuse by making such uses fail to compile.
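
The gist of the trick, as a sketch (BUILD_BUG_ON_ZERO is shown expanded
so the example stands alone):

  /* A non-constant argument yields a negative-width bitfield, so the
   * build fails; a constant argument folds to a plain popcount. */
  #define BUILD_BUG_ON_ZERO(e)    (sizeof(struct { int:(-!!(e)); }))

  #define COMPILE_HWEIGHT8(w)                                     \
          (BUILD_BUG_ON_ZERO(!__builtin_constant_p(w)) +          \
           !!((w) & 0x01) + !!((w) & 0x02) + !!((w) & 0x04) +     \
           !!((w) & 0x08) + !!((w) & 0x10) + !!((w) & 0x20) +     \
           !!((w) & 0x40) + !!((w) & 0x80))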

Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:59:50 +01:00
Peter Zijlstra 8c48e44419 perf_events, x86: Implement intel core solo/duo support
Implement support for Intel Core Solo/Duo, a.k.a. Intel
Architectural Performance Monitoring Version 1.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:59:49 +01:00
Peter Zijlstra 9717e6cd3d perf_events: Optimize perf_event_task_tick()
Pretty much all of the calls do perf_disable()/perf_enable() cycles;
pull that out to cut back on hardware programming.
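
Schematically (the step names below are hypothetical; only
perf_disable()/perf_enable() are real):

  /* One disable/enable pair brackets the whole tick body, so the PMU
   * is reprogrammed once per tick rather than once per step. */
  extern void perf_disable(void);
  extern void perf_enable(void);

  static void adjust_sample_freqs(void) { /* hypothetical step */ }
  static void rotate_contexts(void)     { /* hypothetical step */ }

  static void task_tick_sketch(void)
  {
          perf_disable();
          adjust_sample_freqs();
          rotate_contexts();
          perf_enable();
  }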

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:59:49 +01:00
Masami Hiramatsu f24bb999d2 ftrace: Remove record freezing
Remove record freezing. Because kprobes never puts a probe on
ftrace's mcount call anymore, ftrace no longer needs to check whether
a kprobe is on it.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: systemtap <systemtap@sources.redhat.com>
Cc: DLE <dle-develop@lists.sourceforge.net>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: przemyslaw@pawelczyk.it
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20100202214925.4694.73469.stgit@dhcp-100-2-132.bos.redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:36:19 +01:00
Masami Hiramatsu 4554dbcb85 kprobes: Check probe address is reserved
Check whether the address of a new probe is already reserved by
ftrace or alternatives (on x86) when registering it. If it is
reserved, return an error and do not register the probe.
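
Conceptually, the registration path gains a check of this shape (a
sketch: the declarations assume the start/end-style signatures of the
companion *_text_reserved patch, and the wrapper is illustrative):

  extern int ftrace_text_reserved(void *start, void *end);
  extern int alternatives_text_reserved(void *start, void *end);

  /* Refuse an address that sits inside text reserved by ftrace or,
   * on x86, by the SMP alternatives code. */
  static int probe_address_reserved(void *addr)
  {
          return ftrace_text_reserved(addr, addr) ||
                 alternatives_text_reserved(addr, addr);
  }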

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: systemtap <systemtap@sources.redhat.com>
Cc: DLE <dle-develop@lists.sourceforge.net>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: przemyslaw@pawelczyk.it
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
Cc: Jason Baron <jbaron@redhat.com>
LKML-Reference: <20100202214918.4694.94179.stgit@dhcp-100-2-132.bos.redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:36:19 +01:00
Masami Hiramatsu 2cfa19780d ftrace/alternatives: Introducing *_text_reserved functions
Introduce *_text_reserved functions for checking whether a text
address range is partially reserved. This patch provides checking
routines for the x86 SMP alternatives and dynamic ftrace. Since both
subsystems modify fixed pieces of kernel text, they should reserve
and protect those ranges from other dynamic text modifiers, like
kprobes.

This can also be extended when other subsystems that modify fixed
pieces of kernel text are introduced; dynamic text modifiers should
avoid those ranges.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: systemtap <systemtap@sources.redhat.com>
Cc: DLE <dle-develop@lists.sourceforge.net>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: przemyslaw@pawelczyk.it
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
Cc: Jason Baron <jbaron@redhat.com>
LKML-Reference: <20100202214911.4694.16587.stgit@dhcp-100-2-132.bos.redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:36:19 +01:00
Masami Hiramatsu 615d0ebbc7 kprobes: Disable booster when CONFIG_PREEMPT=y
Disable the kprobe booster when CONFIG_PREEMPT=y for now, because it
can't be ensured that all kernel threads preempted on a kprobe's
boosted slot have run out of the slot, even when using
freeze_processes().

The booster on preemptive kernels can be re-enabled once
synchronize_tasks() or something like it is introduced.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: systemtap <systemtap@sources.redhat.com>
Cc: DLE <dle-develop@lists.sourceforge.net>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <20100202214904.4694.24330.stgit@dhcp-100-2-132.bos.redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:36:18 +01:00
Mike Galbraith 57d818895f perf annotate: Fix perf top module symbol annotation
Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Kirill Smelkov <kirr@landau.phys.spbu.ru>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1265265106.6364.5.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:33:28 +01:00
Kirill Smelkov 6cff0e8dba perf top: Teach it to autolocate vmlinux
By relying on logic in dso__load_kernel_sym(), we can
automatically load vmlinux.

The only thing that needs to be adjusted is how the --sym-annotate
option is handled: we can no longer rely on vmlinux being loaded
until a full successful pass of dso__load_vmlinux(), and that's not
the case if we do the sym_filter_entry setup in symbol_filter().

So move this step right after event__process_sample() where we
know the whole dso__load_kernel_sym() pass is done.

By the way, though conceptually similar, `perf top` still can't
annotate userspace - see the next patches for fixes.

Signed-off-by: Kirill Smelkov <kirr@landau.phys.spbu.ru>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <1265223128-11786-9-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:33:28 +01:00
Kirill Smelkov 7a2b620986 perf annotate: Fix it for non-prelinked *.so
The problem was that we were incorrectly calculating the objdump
addresses for sym->start and sym->end. Look:

For simple ET_DYN type DSO (*.so) with one function, objdump -dS
output is something like this:

    000004ac <my_strlen>:
    int my_strlen(const char *s)
     4ac:   55                      push   %ebp
     4ad:   89 e5                   mov    %esp,%ebp
     4af:   83 ec 10                sub    $0x10,%esp
    {

i.e. we have relative-to-dso-mapping IPs (=RIP) there.

For ET_EXEC type and probably for prelinked libs as well (sorry
can't test - I don't use prelink) objdump outputs absolute IPs,
e.g.

    08048604 <zz_strlen>:
    extern "C"
    int zz_strlen(const char *s)
     8048604:       55                      push   %ebp
     8048605:       89 e5                   mov    %esp,%ebp
     8048607:       83 ec 10                sub    $0x10,%esp
    {

So, if sym->start is always relative to the dso mapping(*), we'll
have to unmap it for ET_EXEC-like cases, and leave it as is for
ET_DYN cases.

(*) and it is - we've explicitly made it relative. Look for the
    adjust_symbols handling in dso__load_sym().

Previously we were always unmapping sym->start, so for ET_DYN DSOs
the resulting addresses were wrong and the objdump output was empty.

The end result was that perf annotate output for symbols from
non-prelinked *.so always showed only 0.00%, which is wrong.

To fix it, let's introduce a helper for converting a rip to an
objdump address, and also document what map_ip() and unmap_ip() do --
I had to study the sources for several hours to understand them.
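
A simplified standalone sketch of that conversion (the struct and
field names are illustrative; see the adjust_symbols handling in
dso__load_sym() for how the flag is really derived):

  typedef unsigned long long u64;

  struct dso_map_sketch {
          u64 start;          /* runtime mapping address                  */
          u64 pgoff;          /* file offset backing the mapping          */
          int adjusted_syms;  /* ET_EXEC-like: symbols were made relative */
  };

  /* map-relative rip (as in sym->start) -> address as objdump prints it */
  static u64 rip_to_objdump(const struct dso_map_sketch *m, u64 rip)
  {
          /* ET_EXEC/prelinked: objdump prints absolute addresses, so unmap;
           * ET_DYN: objdump already prints map-relative addresses. */
          return m->adjusted_syms ? rip + m->start - m->pgoff : rip;
  }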

Signed-off-by: Kirill Smelkov <kirr@landau.phys.spbu.ru>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <1265223128-11786-8-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:33:27 +01:00
Arnaldo Carvalho de Melo 29a9f66d70 perf tools: Adjust some verbosity levels
So as not to pollute 'perf annotate' debugging sessions too much.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1265223128-11786-7-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:33:27 +01:00
Arnaldo Carvalho de Melo 6122e4e4f5 perf record: Stop intercepting events, use postprocessing to get build-ids
We want to stream events as fast as possible to perf.data, and in
the future we also want to have splice working, at which point no
interception will be possible.

By using build_id__mark_dso_hit_ops to create the list of DSOs that
back MMAPs, we also optimize disk usage in the build-id cache by only
caching DSOs that had hits.

Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1265223128-11786-6-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:33:27 +01:00
Arnaldo Carvalho de Melo 7b2567c1f5 perf build-id: Move the routine to find DSOs with hits to the lib
This is because 'perf record' will have to find the build-ids after
we stop recording, so as to further reduce the impact on the workload
while we do the measurement.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1265223128-11786-5-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:33:26 +01:00
Arnaldo Carvalho de Melo 8ad94c6052 perf probe: Don't use a perf_session instance just to resolve symbols
With the recent modifications done to untie the session and
symbol layers, 'perf probe' now can use just the symbols layer.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:33:26 +01:00
Arnaldo Carvalho de Melo 8d92c02ab0 perf symbols: Ditch vdso global variable
We can check using strcmp: most DSOs don't start with '[', so the
test is cheap enough, and we had to test it there anyway since, when
reading perf.data files, we weren't calling the routine that created
this global variable and thus weren't setting it as "loaded", which
was causing a bogus:

  Failed to open [vdso], continuing without symbols

message as the first line of 'perf report'.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1265223128-11786-3-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:33:26 +01:00
Arnaldo Carvalho de Melo 6275ce2d5f perf symbols: Fixup vsyscall maps
While debugging a problem reported by Pekka Enberg (by printing the
IP and all the maps for a thread when we don't find a map for an IP),
I noticed that dso__load_sym needs to fix up these extra maps it
creates to hold symbols in ELF sections other than the main kernel
one.

Now we're back showing things like:

[root@doppio linux-2.6-tip]# perf report | grep vsyscall
     0.02%             mutt  [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
     0.01%            named  [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
     0.01%   NetworkManager  [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
     0.01%         gconfd-2  [kernel.kallsyms].vsyscall_0   [.] vgettimeofday
     0.01%  hald-addon-rfki  [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
     0.00%      dbus-daemon  [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
[root@doppio linux-2.6-tip]#

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1265223128-11786-2-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:33:25 +01:00
Arnaldo Carvalho de Melo 9de89fe7c5 perf symbols: Remove perf_session usage in symbols layer
I noticed while writing the first test in 'perf regtest' that just
to test the symbol handling routines one needs to create a perf
session, which is a layer centered on a perf.data file, events, etc.,
so I untied these layers.

This reduces the complexity for users, as the number of parameters to
most of the symbol and session APIs is now smaller, while not adding
more state to all the map instances: they only carry the data needed
to split the kernel maps (kallsyms and ELF symtab sections) and to do
vmlinux relocation on the main kernel map.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1265223128-11786-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-04 09:33:24 +01:00
Xiao Guangrong b8f46c5a34 perf tools: Use O_LARGEFILE to open perf data file
Open the perf data file with the O_LARGEFILE flag, since its size is
easily larger than 2G.

For example:

 # rm -rf perf.data
 # ./perf kmem record sleep 300

 [ perf record: Woken up 0 times to write data ]
 [ perf record: Captured and wrote 3142.147 MB perf.data
 (~137282513 samples) ]

 # ll -h perf.data
 -rw------- 1 root root 3.1G .....

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <4B68F32A.9040203@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-02-03 09:03:59 +01:00
Ingo Molnar 59f411b62c perf lock: Clean up various details
Fix up a few small stylistic details:

 - use consistent vertical spacing/alignment
 - remove line80 artifacts
 - group some global variables better
 - remove dead code

Plus rename 'prof' to 'report' to make it more in line with other
tools, and remove the line/file keying as we really want to use
IPs like the other tools do.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1264851813-8413-12-git-send-email-mitake@dcl.info.waseda.ac.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-31 09:08:27 +01:00
Hitoshi Mitake 9b5e350c7a perf lock: Introduce new tool "perf lock", for analyzing lock statistics
Adding new subcommand "perf lock" to perf.

I have a lot of remaining ToDos, but for now perf lock can
already provide minimal functionality for analyzing lock
statistics.

Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1264851813-8413-12-git-send-email-mitake@dcl.info.waseda.ac.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-31 09:08:26 +01:00
Hitoshi Mitake c965be10ca perf lock: Enhance information of lock trace events
Add wait time and lock identification details.

Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1264851813-8413-11-git-send-email-mitake@dcl.info.waseda.ac.jp>
[ removed the file/line bits as we can do that better via IPs ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-31 09:08:23 +01:00
Hitoshi Mitake 18e97e06b5 perf: Add util/include/linuxhash.h to include hash.h of kernel
linux/hash.h, the kernel's hash header, is also useful for perf.

util/include/linuxhash.h includes linux/hash.h, so we can use
hash facilities (e.g. hash_long()) in perf now.
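
For example, hash_long() can now be used to bucket addresses (an
illustrative sketch; the table-size constant and the include path are
assumptions):

  #include "util/include/linuxhash.h"   /* wraps the kernel's linux/hash.h */

  #define TABLE_BITS 12                 /* hypothetical: 2^12 buckets */

  static unsigned int addr_hash(unsigned long addr)
  {
          return hash_long(addr, TABLE_BITS);  /* 0 .. (1 << TABLE_BITS) - 1 */
  }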

Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1264851813-8413-3-git-send-email-mitake@dcl.info.waseda.ac.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-31 08:27:53 +01:00
Hitoshi Mitake 86d8d29634 perf tools: Add __data_loc support
This patch is required to test the next patch for perf lock.

In commit 064739bc4b, support for the "__data_loc" format modifier
was added.

But when I wanted to parse the format of lock_acquired (or some other
event), raw_field_ptr() did not return the correct pointer.

So I modified raw_field_ptr() as in this patch, and now
raw_field_ptr() works well.
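
For reference, a "__data_loc" field stores a 32-bit value that packs
(length << 16) | offset into the raw record; resolving it looks
roughly like this sketch (a hypothetical helper, not raw_field_ptr()
itself):

  /* Turn a __data_loc value into a pointer inside the raw event record. */
  static void *data_loc_ptr(void *record, unsigned int loc)
  {
          unsigned int offset = loc & 0xffff;     /* where the data starts */
          /* unsigned int len  = loc >> 16;          how many bytes it has */
          return (char *)record + offset;
  }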

Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Steven Rostedt <srostedt@redhat.com>
LKML-Reference: <1264851813-8413-2-git-send-email-mitake@dcl.info.waseda.ac.jp>
[ v3: fixed minor stylistic detail ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-31 08:27:52 +01:00
Hitoshi Mitake a8e6f734ce Revert "perf record: Intercept all events"
This reverts commit f5a2c3dce0.

This patch is required for making "perf lock rec" work. Commit
f5a2c3dce0 changed write_event() in builtin-record.c, and the changed
write_event() sometimes doesn't stop with perf lock rec.

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
[ that commit also causes perf record to not be Ctrl-C-able,
  and it's conceptually wrong to parse the data at record time
  (unconditionally - even when not needed), as we eventually
  want to be able to do zero-copy recording, at least for
  non-archive recordings.  ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-31 08:27:52 +01:00
John Kacur 6a1b751fb8 perf: Ignore perf-archive temp file
Tell git to ignore perf-archive.

Signed-off-by: John Kacur <jkacur@redhat.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1264633557-17597-6-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 10:37:33 +01:00
Thiago Farina 4c574159d0 tools/perf/perf.c: Clean up trivial style issues
Checked with:
./../scripts/checkpatch.pl --terse --file perf.c

 perf.c: 51: ERROR: open brace '{' following function declarations go on the next line
 perf.c: 73: ERROR: "foo*** bar" should be "foo ***bar"
 perf.c:112: ERROR: space prohibited before that close parenthesis ')'
 perf.c:127: ERROR: space prohibited before that close parenthesis ')'
 perf.c:171: ERROR: "foo** bar" should be "foo **bar"
 perf.c:213: ERROR: "(foo*)" should be "(foo *)"
 perf.c:216: ERROR: "(foo*)" should be "(foo *)"
 perf.c:217: ERROR: space required before that '*' (ctx:OxV)
 perf.c:452: ERROR: do not initialise statics to 0 or NULL
 perf.c:453: ERROR: do not initialise statics to 0 or NULL

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
LKML-Reference: <1264633557-17597-7-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 10:36:35 +01:00
Ingo Molnar ae7f6711d6 Merge branch 'perf/urgent' into perf/core
Merge reason: We want to queue up a dependent patch. Also update to
              later -rc's.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 10:36:22 +01:00
Arnaldo Carvalho de Melo 64abebf731 perf session: Create kernel maps in the constructor
This removes one extra step needed in the tools that need this, and
fixes a bug in 'perf probe' where this was not being done.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1264633557-17597-4-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:20:58 +01:00
Arnaldo Carvalho de Melo fd1d908c54 perf symbols: Split helpers used when creating kernel dso object
To make it clear and allow for direct usage by, for instance,
regression test suites.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1264633557-17597-3-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:20:58 +01:00
Arnaldo Carvalho de Melo a19afe4641 perf symbols: Factor out dso__load_vmlinux_path()
So that we can call it directly from regression tests, and also to
reduce the size of dso__load_kernel_sym(), making it clearer.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1264633557-17597-2-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:20:57 +01:00
Arnaldo Carvalho de Melo 72b8fa1730 perf top: Exit if specified --vmlinux can't be used
As we do lazy loading of symtabs, we will only know whether the
specified vmlinux file is invalid when we actually have a hit in
kernel space and then try to load it. So if we get kernel hits and
there are _no_ symbols in the DSO backing the kernel map, bail out.

Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1264633557-17597-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:20:57 +01:00
Peter Zijlstra 75c9f3284a perf_events: Fix sample_period transfer on inherit
One problem with frequency driven counters is that we cannot predict
the rate at which they trigger, therefore we have to start them at
period=1; this causes a ramp-up effect. However, if we fail to
propagate the stable state on fork, each new child will have to ramp
up again. This can lead to significant artifacts in sample data.
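
The idea, sketched with simplified types (this is not the actual
kernel inherit code):

  struct hw_state_sketch {
          int                freq;            /* frequency-driven counter    */
          unsigned long long sample_period;   /* current, possibly ramped-up */
  };

  /* Start the child at the parent's last stable period instead of
   * ramping up again from period = 1. */
  static void inherit_period(const struct hw_state_sketch *parent,
                             struct hw_state_sketch *child)
  {
          if (parent->freq)
                  child->sample_period = parent->sample_period;
  }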

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: eranian@google.com
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1264752266.4283.2121.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:15:26 +01:00
Peter Zijlstra 18c01f8abf perf_events, x86: Remove spurious counter reset from x86_pmu_enable()
At enable time the counter might still have a ->idx pointing to
a previously occupied location that might now be taken by
another event. Resetting the counter at that location with data
from this event will destroy the other counter's count.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100127221122.261477183@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:01:49 +01:00
Peter Zijlstra 452a339a97 perf_events, x86: Implement Intel Westmere support
The new Intel documentation includes Westmere arch specific
event maps that are significantly different from the Nehalem
ones. Add support for this generation.

Found the CPUID model numbers on Wikipedia.

Also amend some Nehalem constraints, spotted while looking for the
differences between Nehalem and Westmere.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100127221122.151865645@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:01:48 +01:00
Peter Zijlstra 1a6e21f791 perf_events, x86: Clean up hw_perf_*_all() implementation
Put the recursion avoidance code in the generic hook instead of
replicating it in each implementation.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100127221122.057507285@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:01:47 +01:00
Peter Zijlstra ed8777fc13 perf_events, x86: Fix event constraint masks
Since constraints are specified on the event number, not on the
event number and unit mask, shorten the constraint masks so that
we'll actually match something.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100127221121.967610372@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:01:46 +01:00
Peter Zijlstra 2e8418736d perf_event: x86: Deduplicate the disable code
Share the meat of the x86_pmu_disable() code with hw_perf_enable().

Also remove the barrier() from that code, since I could not convince
myself we actually need it.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:01:45 +01:00
Ingo Molnar 184f412c33 perf, x86: Clean up event constraints code a bit
 - Remove stray debug code
 - Improve ugly macros a bit
 - Remove some whitespace damage
 - (Also fix up some accumulated damage in perf_event.h)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Stephane Eranian <eranian@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
2010-01-29 09:01:44 +01:00
Peter Zijlstra 6c9687abeb perf_event: x86: Optimize x86_pmu_disable()
x86_pmu_disable() removes the event from the cpuc->event_list[];
however, since an event can only be on that list once, stop looking
after we find it.
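
In sketch form (simplified to a plain array; the point is only the
early break):

  /* The event can be on the list at most once, so stop scanning as
   * soon as it has been found and removed. */
  static void list_remove_once(void *list[], int *n, void *event)
  {
          int i;

          for (i = 0; i < *n; i++) {
                  if (list[i] != event)
                          continue;
                  for (; i < *n - 1; i++)         /* close the gap */
                          list[i] = list[i + 1];
                  --*n;
                  break;                          /* no need to keep looking */
          }
  }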

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:01:43 +01:00
Peter Zijlstra c933c1a603 perf_event: x86: Optimize the fast path a little more
Remove num from the fast path and save a few ops.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155536.056430539@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:01:42 +01:00
Peter Zijlstra 272d30be62 perf_event: x86: Optimize constraint weight computation
Add a weight member to the constraint structure and avoid recomputing the
weight at runtime.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.963944926@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:01:41 +01:00
Peter Zijlstra 63b146490b perf_event: x86: Optimize the constraint searching bits
Instead of copying bitmasks around, pass pointers to the constraint
structure.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.887853503@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-29 09:01:40 +01:00