
Merge tag 'oprofile-removal-5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/linux

Pull oprofile and dcookies removal from Viresh Kumar:
 "Remove oprofile and dcookies support

  The 'oprofile' user-space tools don't use the kernel OPROFILE support
  any more, and haven't in a long time. User-space has been converted to
  the perf interfaces.

  The dcookies stuff is only used by the oprofile code. Now that
  oprofile's support is getting removed from the kernel, there is no
  need for dcookies as well.

  Remove kernel's old oprofile and dcookies support"

* tag 'oprofile-removal-5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/linux:
  fs: Remove dcookies support
  drivers: Remove CONFIG_OPROFILE support
  arch: xtensa: Remove CONFIG_OPROFILE support
  arch: x86: Remove CONFIG_OPROFILE support
  arch: sparc: Remove CONFIG_OPROFILE support
  arch: sh: Remove CONFIG_OPROFILE support
  arch: s390: Remove CONFIG_OPROFILE support
  arch: powerpc: Remove oprofile
  arch: powerpc: Stop building and using oprofile
  arch: parisc: Remove CONFIG_OPROFILE support
  arch: mips: Remove CONFIG_OPROFILE support
  arch: microblaze: Remove CONFIG_OPROFILE support
  arch: ia64: Remove rest of perfmon support
  arch: ia64: Remove CONFIG_OPROFILE support
  arch: hexagon: Don't select HAVE_OPROFILE
  arch: arc: Remove CONFIG_OPROFILE support
  arch: arm: Remove CONFIG_OPROFILE support
  arch: alpha: Remove CONFIG_OPROFILE support
master
Linus Torvalds 2021-02-21 10:40:34 -08:00
commit 24880bef41
199 changed files with 8 additions and 15569 deletions
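
[Editor's aside, for context; not part of the commit. "The perf interfaces" above means the perf_event_open(2) system call, which the perf tool and modern oprofile user space (operf) are built on. A minimal counting sketch in the spirit of the well-known man-page example — illustrative only, with error handling trimmed:]

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;
	long long count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.disabled = 1;
	attr.exclude_kernel = 1;

	/* Count CPU cycles for the calling thread on any CPU. */
	fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0)
		return 1;

	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	/* ... run the workload being measured ... */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	read(fd, &count, sizeof(count));
	printf("cycles: %lld\n", count);
	close(fd);
	return 0;
}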

View File

@ -8,8 +8,7 @@ Although RCU is usually used to protect read-mostly data structures,
it is possible to use RCU to provide dynamic non-maskable interrupt
handlers, as well as dynamic irq handlers. This document describes
how to do this, drawing loosely from Zwane Mwaikambo's NMI-timer
work in "arch/x86/oprofile/nmi_timer_int.c" and in
"arch/x86/kernel/traps.c".
work in "arch/x86/kernel/traps.c".
The relevant pieces of code are listed below, each followed by a
brief explanation::
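
[Editor's sketch of the pattern that document describes, with hypothetical names; this is an illustration of the idea, not the listing from the .rst file itself. The handler pointer is published with rcu_assign_pointer() and fetched with rcu_dereference(), so handlers can be swapped while NMIs may be in flight:]

#include <linux/rcupdate.h>
#include <asm/ptrace.h>

static void dummy_nmi_callback(struct pt_regs *regs)
{
	/* Default: ignore the NMI. */
}

static void (*nmi_callback)(struct pt_regs *) = dummy_nmi_callback;

void set_nmi_callback(void (*callback)(struct pt_regs *))
{
	/* Publish the new handler; concurrent NMIs see either the old
	   or the new pointer, never a torn value. */
	rcu_assign_pointer(nmi_callback, callback);
}

void do_nmi(struct pt_regs *regs)
{
	void (*fn)(struct pt_regs *);

	/* An NMI handler cannot be preempted, so it acts as an implicit
	   RCU read-side critical section for this style of use. */
	fn = rcu_dereference(nmi_callback);
	fn(regs);
}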

View File

@ -3458,20 +3458,6 @@
For example, to override I2C bus2:
omap_mux=i2c2_scl.i2c2_scl=0x100,i2c2_sda.i2c2_sda=0x100
oprofile.timer= [HW]
Use timer interrupt instead of performance counters
oprofile.cpu_type= Force an oprofile cpu type
This might be useful if you have an older oprofile
userland or if you want common events.
Format: { arch_perfmon }
arch_perfmon: [X86] Force use of architectural
perfmon on Intel CPUs instead of the
CPU specific event set.
timer: [X86] Force use of architectural NMI
timer mode (see also oprofile.timer
for generic hr timer mode)
oops=panic Always panic on oopses. Default is to just kill the
process, but there is a small probability of
deadlocking the machine.

View File

@ -1317,7 +1317,6 @@ When kbuild executes, the following steps are followed (roughly):
libs-y += arch/sparc/lib/
drivers-$(CONFIG_PM) += arch/sparc/power/
drivers-$(CONFIG_OPROFILE) += arch/sparc/oprofile/
7.5 Architecture-specific boot images
-------------------------------------

View File

@ -135,7 +135,6 @@ FW_HEADER_MAGIC 0x65726F66 fw_header ``drivers/atm/fo
SLOT_MAGIC 0x67267321 slot ``drivers/hotplug/cpqphp.h``
SLOT_MAGIC 0x67267322 slot ``drivers/hotplug/acpiphp.h``
LO_MAGIC 0x68797548 nbd_device ``include/linux/nbd.h``
OPROFILE_MAGIC 0x6f70726f super_block ``drivers/oprofile/oprofilefs.h``
M3_STATE_MAGIC 0x734d724d m3_state ``sound/oss/maestro3.c``
VMALLOC_MAGIC 0x87654320 snd_alloc_track ``sound/core/memory.c``
KMALLOC_MAGIC 0x87654321 snd_alloc_track ``sound/core/memory.c``

View File

@ -141,7 +141,6 @@ FW_HEADER_MAGIC 0x65726F66 fw_header ``drivers/atm/fo
SLOT_MAGIC 0x67267321 slot ``drivers/hotplug/cpqphp.h``
SLOT_MAGIC 0x67267322 slot ``drivers/hotplug/acpiphp.h``
LO_MAGIC 0x68797548 nbd_device ``include/linux/nbd.h``
OPROFILE_MAGIC 0x6f70726f super_block ``drivers/oprofile/oprofilefs.h``
M3_STATE_MAGIC 0x734d724d m3_state ``sound/oss/maestro3.c``
VMALLOC_MAGIC 0x87654320 snd_alloc_track ``sound/core/memory.c``
KMALLOC_MAGIC 0x87654321 snd_alloc_track ``sound/core/memory.c``

View File

@ -124,7 +124,6 @@ FW_HEADER_MAGIC 0x65726F66 fw_header ``drivers/atm/fo
SLOT_MAGIC 0x67267321 slot ``drivers/hotplug/cpqphp.h``
SLOT_MAGIC 0x67267322 slot ``drivers/hotplug/acpiphp.h``
LO_MAGIC 0x68797548 nbd_device ``include/linux/nbd.h``
OPROFILE_MAGIC 0x6f70726f super_block ``drivers/oprofile/oprofilefs.h``
M3_STATE_MAGIC 0x734d724d m3_state ``sound/oss/maestro3.c``
VMALLOC_MAGIC 0x87654320 snd_alloc_track ``sound/core/memory.c``
KMALLOC_MAGIC 0x87654321 snd_alloc_track ``sound/core/memory.c``

View File

@ -1413,7 +1413,6 @@ F: arch/arm*/include/asm/hw_breakpoint.h
F: arch/arm*/include/asm/perf_event.h
F: arch/arm*/kernel/hw_breakpoint.c
F: arch/arm*/kernel/perf_*
F: arch/arm/oprofile/common.c
F: drivers/perf/
F: include/linux/perf/arm_pmu.h
@ -4060,7 +4059,6 @@ W: http://www.ibm.com/developerworks/power/cell/
F: arch/powerpc/include/asm/cell*.h
F: arch/powerpc/include/asm/spu*.h
F: arch/powerpc/include/uapi/asm/spu*.h
F: arch/powerpc/oprofile/*cell*
F: arch/powerpc/platforms/cell/
CELLWISE CW2015 BATTERY DRIVER
@ -13291,15 +13289,6 @@ S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound.git
F: sound/drivers/opl4/
OPROFILE
M: Robert Richter <rric@kernel.org>
L: oprofile-list@lists.sf.net
S: Maintained
F: arch/*/include/asm/oprofile*.h
F: arch/*/oprofile/
F: drivers/oprofile/
F: include/linux/oprofile.h
ORACLE CLUSTER FILESYSTEM 2 (OCFS2)
M: Mark Fasheh <mark@fasheh.com>
M: Joel Becker <jlbec@evilplan.org>

View File

@ -33,38 +33,6 @@ config HOTPLUG_SMT
config GENERIC_ENTRY
bool
config OPROFILE
tristate "OProfile system profiling"
depends on PROFILING
depends on HAVE_OPROFILE
select RING_BUFFER
select RING_BUFFER_ALLOW_SWAP
help
OProfile is a profiling system capable of profiling the
whole system, including the kernel, kernel modules, libraries,
and applications.
If unsure, say N.
config OPROFILE_EVENT_MULTIPLEX
bool "OProfile multiplexing support (EXPERIMENTAL)"
default n
depends on OPROFILE && X86
help
The number of hardware counters is limited. The multiplexing
feature enables OProfile to gather more events than the hardware
provides counters for. This is realized by switching between
events at a user-specified time interval.
If unsure, say N.
config HAVE_OPROFILE
bool
config OPROFILE_NMI_TIMER
def_bool y
depends on PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !PPC64
config KPROBES
bool "Kprobes"
depends on MODULES

View File

@ -14,7 +14,6 @@ config ALPHA
select HAVE_AOUT
select HAVE_ASM_MODVERSIONS
select HAVE_IDE
select HAVE_OPROFILE
select HAVE_PCSPKR_PLATFORM
select HAVE_PERF_EVENTS
select NEED_DMA_MAP_STATE

View File

@ -40,7 +40,6 @@ head-y := arch/alpha/kernel/head.o
core-y += arch/alpha/kernel/ arch/alpha/mm/
core-$(CONFIG_MATHEMU) += arch/alpha/math-emu/
drivers-$(CONFIG_OPROFILE) += arch/alpha/oprofile/
libs-y += arch/alpha/lib/
# export what is needed by arch/alpha/boot/Makefile

View File

@ -1,20 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
ccflags-y := -Werror -Wno-sign-compare
obj-$(CONFIG_OPROFILE) += oprofile.o
DRIVER_OBJS = $(addprefix ../../../drivers/oprofile/, \
oprof.o cpu_buffer.o buffer_sync.o \
event_buffer.o oprofile_files.o \
oprofilefs.o oprofile_stats.o \
timer_int.o )
oprofile-y := $(DRIVER_OBJS) common.o
oprofile-$(CONFIG_ALPHA_GENERIC) += op_model_ev4.o \
op_model_ev5.o \
op_model_ev6.o \
op_model_ev67.o
oprofile-$(CONFIG_ALPHA_EV4) += op_model_ev4.o
oprofile-$(CONFIG_ALPHA_EV5) += op_model_ev5.o
oprofile-$(CONFIG_ALPHA_EV6) += op_model_ev6.o \
op_model_ev67.o

View File

@ -1,189 +0,0 @@
/**
* @file arch/alpha/oprofile/common.c
*
* @remark Copyright 2002 OProfile authors
* @remark Read the file COPYING
*
* @author Richard Henderson <rth@twiddle.net>
*/
#include <linux/oprofile.h>
#include <linux/init.h>
#include <linux/smp.h>
#include <linux/errno.h>
#include <asm/ptrace.h>
#include <asm/special_insns.h>
#include "op_impl.h"
extern struct op_axp_model op_model_ev4 __attribute__((weak));
extern struct op_axp_model op_model_ev5 __attribute__((weak));
extern struct op_axp_model op_model_pca56 __attribute__((weak));
extern struct op_axp_model op_model_ev6 __attribute__((weak));
extern struct op_axp_model op_model_ev67 __attribute__((weak));
static struct op_axp_model *model;
extern void (*perf_irq)(unsigned long, struct pt_regs *);
static void (*save_perf_irq)(unsigned long, struct pt_regs *);
static struct op_counter_config ctr[20];
static struct op_system_config sys;
static struct op_register_config reg;
/* Called from do_entInt to handle the performance monitor interrupt. */
static void
op_handle_interrupt(unsigned long which, struct pt_regs *regs)
{
model->handle_interrupt(which, regs, ctr);
/* If the user has selected an interrupt frequency that is
not exactly the width of the counter, write a new value
into the counter such that it'll overflow after N more
events. */
if ((reg.need_reset >> which) & 1)
model->reset_ctr(&reg, which);
}
static int
op_axp_setup(void)
{
unsigned long i, e;
/* Install our interrupt handler into the existing hook. */
save_perf_irq = perf_irq;
perf_irq = op_handle_interrupt;
/* Compute the mask of enabled counters. */
for (i = e = 0; i < model->num_counters; ++i)
if (ctr[i].enabled)
e |= 1 << i;
reg.enable = e;
/* Pre-compute the values to stuff in the hardware registers. */
model->reg_setup(&reg, ctr, &sys);
/* Configure the registers on all cpus. */
smp_call_function(model->cpu_setup, &reg, 1);
model->cpu_setup(&reg);
return 0;
}
static void
op_axp_shutdown(void)
{
/* Remove our interrupt handler. We may be removing this module. */
perf_irq = save_perf_irq;
}
static void
op_axp_cpu_start(void *dummy)
{
wrperfmon(1, reg.enable);
}
static int
op_axp_start(void)
{
smp_call_function(op_axp_cpu_start, NULL, 1);
op_axp_cpu_start(NULL);
return 0;
}
static inline void
op_axp_cpu_stop(void *dummy)
{
/* Disable performance monitoring for all counters. */
wrperfmon(0, -1);
}
static void
op_axp_stop(void)
{
smp_call_function(op_axp_cpu_stop, NULL, 1);
op_axp_cpu_stop(NULL);
}
static int
op_axp_create_files(struct dentry *root)
{
int i;
for (i = 0; i < model->num_counters; ++i) {
struct dentry *dir;
char buf[4];
snprintf(buf, sizeof buf, "%d", i);
dir = oprofilefs_mkdir(root, buf);
oprofilefs_create_ulong(dir, "enabled", &ctr[i].enabled);
oprofilefs_create_ulong(dir, "event", &ctr[i].event);
oprofilefs_create_ulong(dir, "count", &ctr[i].count);
/* Dummies. */
oprofilefs_create_ulong(dir, "kernel", &ctr[i].kernel);
oprofilefs_create_ulong(dir, "user", &ctr[i].user);
oprofilefs_create_ulong(dir, "unit_mask", &ctr[i].unit_mask);
}
if (model->can_set_proc_mode) {
oprofilefs_create_ulong(root, "enable_pal",
&sys.enable_pal);
oprofilefs_create_ulong(root, "enable_kernel",
&sys.enable_kernel);
oprofilefs_create_ulong(root, "enable_user",
&sys.enable_user);
}
return 0;
}
int __init
oprofile_arch_init(struct oprofile_operations *ops)
{
struct op_axp_model *lmodel = NULL;
switch (implver()) {
case IMPLVER_EV4:
lmodel = &op_model_ev4;
break;
case IMPLVER_EV5:
/* 21164PC has a slightly different set of events.
Recognize the chip by the presence of the MAX insns. */
if (!amask(AMASK_MAX))
lmodel = &op_model_pca56;
else
lmodel = &op_model_ev5;
break;
case IMPLVER_EV6:
/* 21264A supports ProfileMe.
Recognize the chip by the presence of the CIX insns. */
if (!amask(AMASK_CIX))
lmodel = &op_model_ev67;
else
lmodel = &op_model_ev6;
break;
}
if (!lmodel)
return -ENODEV;
model = lmodel;
ops->create_files = op_axp_create_files;
ops->setup = op_axp_setup;
ops->shutdown = op_axp_shutdown;
ops->start = op_axp_start;
ops->stop = op_axp_stop;
ops->cpu_type = lmodel->cpu_type;
printk(KERN_INFO "oprofile: using %s performance monitoring.\n",
lmodel->cpu_type);
return 0;
}
void
oprofile_arch_exit(void)
{
}

View File

@ -1,55 +0,0 @@
/**
* @file arch/alpha/oprofile/op_impl.h
*
* @remark Copyright 2002 OProfile authors
* @remark Read the file COPYING
*
* @author Richard Henderson <rth@twiddle.net>
*/
#ifndef OP_IMPL_H
#define OP_IMPL_H 1
/* Per-counter configuration as set via oprofilefs. */
struct op_counter_config {
unsigned long enabled;
unsigned long event;
unsigned long count;
/* Dummies because I am too lazy to hack the userspace tools. */
unsigned long kernel;
unsigned long user;
unsigned long unit_mask;
};
/* System-wide configuration as set via oprofilefs. */
struct op_system_config {
unsigned long enable_pal;
unsigned long enable_kernel;
unsigned long enable_user;
};
/* Cached values for the various performance monitoring registers. */
struct op_register_config {
unsigned long enable;
unsigned long mux_select;
unsigned long proc_mode;
unsigned long freq;
unsigned long reset_values;
unsigned long need_reset;
};
/* Per-architecture configuration and hooks. */
struct op_axp_model {
void (*reg_setup) (struct op_register_config *,
struct op_counter_config *,
struct op_system_config *);
void (*cpu_setup) (void *);
void (*reset_ctr) (struct op_register_config *, unsigned long);
void (*handle_interrupt) (unsigned long, struct pt_regs *,
struct op_counter_config *);
char *cpu_type;
unsigned char num_counters;
unsigned char can_set_proc_mode;
};
#endif

View File

@ -1,114 +0,0 @@
/**
* @file arch/alpha/oprofile/op_model_ev4.c
*
* @remark Copyright 2002 OProfile authors
* @remark Read the file COPYING
*
* @author Richard Henderson <rth@twiddle.net>
*/
#include <linux/oprofile.h>
#include <linux/smp.h>
#include <asm/ptrace.h>
#include "op_impl.h"
/* Compute all of the registers in preparation for enabling profiling. */
static void
ev4_reg_setup(struct op_register_config *reg,
struct op_counter_config *ctr,
struct op_system_config *sys)
{
unsigned long ctl = 0, count, hilo;
/* Select desired events. We've mapped the event numbers
such that they fit directly into the event selection fields.
Note that there is no "off" setting. In both cases we select
the EXTERNAL event source, hoping that it'll be the lowest
frequency, and set the frequency counter to LOW. The interrupts
for these "disabled" counter overflows are ignored by the
interrupt handler.
This is most irritating, because the hardware *can* enable and
disable the interrupts for these counters independently, but the
wrperfmon interface doesn't allow it. */
ctl |= (ctr[0].enabled ? ctr[0].event << 8 : 14 << 8);
ctl |= (ctr[1].enabled ? (ctr[1].event - 16) << 32 : 7ul << 32);
/* EV4 can not read or write its counter registers. The only
thing one can do at all is see if you overflow and get an
interrupt. We can set the width of the counters, to some
extent. Take the interrupt count selected by the user,
map it onto one of the possible values, and write it back. */
count = ctr[0].count;
if (count <= 4096)
count = 4096, hilo = 1;
else
count = 65536, hilo = 0;
ctr[0].count = count;
ctl |= (ctr[0].enabled && hilo) << 3;
count = ctr[1].count;
if (count <= 256)
count = 256, hilo = 1;
else
count = 4096, hilo = 0;
ctr[1].count = count;
ctl |= (ctr[1].enabled && hilo);
reg->mux_select = ctl;
/* Select performance monitoring options. */
/* ??? Need to come up with some mechanism to trace only
selected processes. EV4 does not have a mechanism to
select kernel or user mode only. For now, enable always. */
reg->proc_mode = 0;
/* Frequency is folded into mux_select for EV4. */
reg->freq = 0;
/* See above regarding no writes. */
reg->reset_values = 0;
reg->need_reset = 0;
}
/* Program all of the registers in preparation for enabling profiling. */
static void
ev4_cpu_setup(void *x)
{
struct op_register_config *reg = x;
wrperfmon(2, reg->mux_select);
wrperfmon(3, reg->proc_mode);
}
static void
ev4_handle_interrupt(unsigned long which, struct pt_regs *regs,
struct op_counter_config *ctr)
{
/* EV4 can't properly disable counters individually.
Discard "disabled" events now. */
if (!ctr[which].enabled)
return;
/* Record the sample. */
oprofile_add_sample(regs, which);
}
struct op_axp_model op_model_ev4 = {
.reg_setup = ev4_reg_setup,
.cpu_setup = ev4_cpu_setup,
.reset_ctr = NULL,
.handle_interrupt = ev4_handle_interrupt,
.cpu_type = "alpha/ev4",
.num_counters = 2,
.can_set_proc_mode = 0,
};

View File

@ -1,209 +0,0 @@
/**
* @file arch/alpha/oprofile/op_model_ev5.c
*
* @remark Copyright 2002 OProfile authors
* @remark Read the file COPYING
*
* @author Richard Henderson <rth@twiddle.net>
*/
#include <linux/oprofile.h>
#include <linux/smp.h>
#include <asm/ptrace.h>
#include "op_impl.h"
/* Compute all of the registers in preparation for enabling profiling.
The 21164 (EV5) and 21164PC (PCA56) vary in the bit placement and
meaning of the "CBOX" events. Given that we don't care about meaning
at this point, arrange for the difference in bit placement to be
handled by common code. */
static void
common_reg_setup(struct op_register_config *reg,
struct op_counter_config *ctr,
struct op_system_config *sys,
int cbox1_ofs, int cbox2_ofs)
{
int i, ctl, reset, need_reset;
/* Select desired events. The event numbers are selected such
that they map directly into the event selection fields:
PCSEL0: 0, 1
PCSEL1: 24-39
CBOX1: 40-47
PCSEL2: 48-63
CBOX2: 64-71
There are two special cases, in that CYCLES can be measured
on PCSEL[02], and SCACHE_WRITE can be measured on CBOX[12].
These event numbers are canonicalized to their first appearance. */
ctl = 0;
for (i = 0; i < 3; ++i) {
unsigned long event = ctr[i].event;
if (!ctr[i].enabled)
continue;
/* Remap the duplicate events, as described above. */
if (i == 2) {
if (event == 0)
event = 12+48;
else if (event == 2+41)
event = 4+65;
}
/* Convert the event numbers onto mux_select bit mask. */
if (event < 2)
ctl |= event << 31;
else if (event < 24)
/* error */;
else if (event < 40)
ctl |= (event - 24) << 4;
else if (event < 48)
ctl |= (event - 40) << cbox1_ofs | 15 << 4;
else if (event < 64)
ctl |= event - 48;
else if (event < 72)
ctl |= (event - 64) << cbox2_ofs | 15;
}
reg->mux_select = ctl;
/* Select processor mode. */
/* ??? Need to come up with some mechanism to trace only selected
processes. For now select from pal, kernel and user mode. */
ctl = 0;
ctl |= !sys->enable_pal << 9;
ctl |= !sys->enable_kernel << 8;
ctl |= !sys->enable_user << 30;
reg->proc_mode = ctl;
/* Select interrupt frequencies. Take the interrupt count selected
by the user, and map it onto one of the possible counter widths.
If the user value is in between, compute a value to which the
counter is reset at each interrupt. */
ctl = reset = need_reset = 0;
for (i = 0; i < 3; ++i) {
unsigned long max, hilo, count = ctr[i].count;
if (!ctr[i].enabled)
continue;
if (count <= 256)
count = 256, hilo = 3, max = 256;
else {
max = (i == 2 ? 16384 : 65536);
hilo = 2;
if (count > max)
count = max;
}
ctr[i].count = count;
ctl |= hilo << (8 - i*2);
reset |= (max - count) << (48 - 16*i);
if (count != max)
need_reset |= 1 << i;
}
reg->freq = ctl;
reg->reset_values = reset;
reg->need_reset = need_reset;
}
static void
ev5_reg_setup(struct op_register_config *reg,
struct op_counter_config *ctr,
struct op_system_config *sys)
{
common_reg_setup(reg, ctr, sys, 19, 22);
}
static void
pca56_reg_setup(struct op_register_config *reg,
struct op_counter_config *ctr,
struct op_system_config *sys)
{
common_reg_setup(reg, ctr, sys, 8, 11);
}
/* Program all of the registers in preparation for enabling profiling. */
static void
ev5_cpu_setup (void *x)
{
struct op_register_config *reg = x;
wrperfmon(2, reg->mux_select);
wrperfmon(3, reg->proc_mode);
wrperfmon(4, reg->freq);
wrperfmon(6, reg->reset_values);
}
/* CTR is a counter for which the user has requested an interrupt count
in between one of the widths selectable in hardware. Reset the count
for CTR to the value stored in REG->RESET_VALUES.
For EV5, this means disabling profiling, reading the current values,
masking in the value for the desired register, writing, then turning
profiling back on.
This can be streamlined if profiling is only enabled for user mode.
In that case we know that the counters are not currently incrementing
(due to being in kernel mode). */
static void
ev5_reset_ctr(struct op_register_config *reg, unsigned long ctr)
{
unsigned long values, mask, not_pk, reset_values;
mask = (ctr == 0 ? 0xfffful << 48
: ctr == 1 ? 0xfffful << 32
: 0x3fff << 16);
not_pk = 1 << 9 | 1 << 8;
reset_values = reg->reset_values;
if ((reg->proc_mode & not_pk) == not_pk) {
values = wrperfmon(5, 0);
values = (reset_values & mask) | (values & ~mask & -2);
wrperfmon(6, values);
} else {
wrperfmon(0, -1);
values = wrperfmon(5, 0);
values = (reset_values & mask) | (values & ~mask & -2);
wrperfmon(6, values);
wrperfmon(1, reg->enable);
}
}
static void
ev5_handle_interrupt(unsigned long which, struct pt_regs *regs,
struct op_counter_config *ctr)
{
/* Record the sample. */
oprofile_add_sample(regs, which);
}
struct op_axp_model op_model_ev5 = {
.reg_setup = ev5_reg_setup,
.cpu_setup = ev5_cpu_setup,
.reset_ctr = ev5_reset_ctr,
.handle_interrupt = ev5_handle_interrupt,
.cpu_type = "alpha/ev5",
.num_counters = 3,
.can_set_proc_mode = 1,
};
struct op_axp_model op_model_pca56 = {
.reg_setup = pca56_reg_setup,
.cpu_setup = ev5_cpu_setup,
.reset_ctr = ev5_reset_ctr,
.handle_interrupt = ev5_handle_interrupt,
.cpu_type = "alpha/pca56",
.num_counters = 3,
.can_set_proc_mode = 1,
};

View File

@ -1,101 +0,0 @@
/**
* @file arch/alpha/oprofile/op_model_ev6.c
*
* @remark Copyright 2002 OProfile authors
* @remark Read the file COPYING
*
* @author Richard Henderson <rth@twiddle.net>
*/
#include <linux/oprofile.h>
#include <linux/smp.h>
#include <asm/ptrace.h>
#include "op_impl.h"
/* Compute all of the registers in preparation for enabling profiling. */
static void
ev6_reg_setup(struct op_register_config *reg,
struct op_counter_config *ctr,
struct op_system_config *sys)
{
unsigned long ctl, reset, need_reset, i;
/* Select desired events. We've mapped the event numbers
such that they fit directly into the event selection fields. */
ctl = 0;
if (ctr[0].enabled && ctr[0].event)
ctl |= (ctr[0].event & 1) << 4;
if (ctr[1].enabled)
ctl |= (ctr[1].event - 2) & 15;
reg->mux_select = ctl;
/* Select logging options. */
/* ??? Need to come up with some mechanism to trace only
selected processes. EV6 does not have a mechanism to
select kernel or user mode only. For now, enable always. */
reg->proc_mode = 0;
/* EV6 cannot change the width of the counters as with the
other implementations. But fortunately, we can write to
the counters and set the value such that it will overflow
at the right time. */
reset = need_reset = 0;
for (i = 0; i < 2; ++i) {
unsigned long count = ctr[i].count;
if (!ctr[i].enabled)
continue;
if (count > 0x100000)
count = 0x100000;
ctr[i].count = count;
reset |= (0x100000 - count) << (i ? 6 : 28);
if (count != 0x100000)
need_reset |= 1 << i;
}
reg->reset_values = reset;
reg->need_reset = need_reset;
}
/* Program all of the registers in preparation for enabling profiling. */
static void
ev6_cpu_setup (void *x)
{
struct op_register_config *reg = x;
wrperfmon(2, reg->mux_select);
wrperfmon(3, reg->proc_mode);
wrperfmon(6, reg->reset_values | 3);
}
/* CTR is a counter for which the user has requested an interrupt count
in between one of the widths selectable in hardware. Reset the count
for CTR to the value stored in REG->RESET_VALUES. */
static void
ev6_reset_ctr(struct op_register_config *reg, unsigned long ctr)
{
wrperfmon(6, reg->reset_values | (1 << ctr));
}
static void
ev6_handle_interrupt(unsigned long which, struct pt_regs *regs,
struct op_counter_config *ctr)
{
/* Record the sample. */
oprofile_add_sample(regs, which);
}
struct op_axp_model op_model_ev6 = {
.reg_setup = ev6_reg_setup,
.cpu_setup = ev6_cpu_setup,
.reset_ctr = ev6_reset_ctr,
.handle_interrupt = ev6_handle_interrupt,
.cpu_type = "alpha/ev6",
.num_counters = 2,
.can_set_proc_mode = 0,
};

View File

@ -1,261 +0,0 @@
/**
* @file arch/alpha/oprofile/op_model_ev67.c
*
* @remark Copyright 2002 OProfile authors
* @remark Read the file COPYING
*
* @author Richard Henderson <rth@twiddle.net>
* @author Falk Hueffner <falk@debian.org>
*/
#include <linux/oprofile.h>
#include <linux/smp.h>
#include <asm/ptrace.h>
#include "op_impl.h"
/* Compute all of the registers in preparation for enabling profiling. */
static void
ev67_reg_setup(struct op_register_config *reg,
struct op_counter_config *ctr,
struct op_system_config *sys)
{
unsigned long ctl, reset, need_reset, i;
/* Select desired events. */
ctl = 1UL << 4; /* Enable ProfileMe mode. */
/* The event numbers are chosen so we can use them directly if
PCTR1 is enabled. */
if (ctr[1].enabled) {
ctl |= (ctr[1].event & 3) << 2;
} else {
if (ctr[0].event == 0) /* cycles */
ctl |= 1UL << 2;
}
reg->mux_select = ctl;
/* Select logging options. */
/* ??? Need to come up with some mechanism to trace only
selected processes. EV67 does not have a mechanism to
select kernel or user mode only. For now, enable always. */
reg->proc_mode = 0;
/* EV67 cannot change the width of the counters as with the
other implementations. But fortunately, we can write to
the counters and set the value such that it will overflow
at the right time. */
reset = need_reset = 0;
for (i = 0; i < 2; ++i) {
unsigned long count = ctr[i].count;
if (!ctr[i].enabled)
continue;
if (count > 0x100000)
count = 0x100000;
ctr[i].count = count;
reset |= (0x100000 - count) << (i ? 6 : 28);
if (count != 0x100000)
need_reset |= 1 << i;
}
reg->reset_values = reset;
reg->need_reset = need_reset;
}
/* Program all of the registers in preparation for enabling profiling. */
static void
ev67_cpu_setup (void *x)
{
struct op_register_config *reg = x;
wrperfmon(2, reg->mux_select);
wrperfmon(3, reg->proc_mode);
wrperfmon(6, reg->reset_values | 3);
}
/* CTR is a counter for which the user has requested an interrupt count
in between one of the widths selectable in hardware. Reset the count
for CTR to the value stored in REG->RESET_VALUES. */
static void
ev67_reset_ctr(struct op_register_config *reg, unsigned long ctr)
{
wrperfmon(6, reg->reset_values | (1 << ctr));
}
/* ProfileMe conditions which will show up as counters. We can also
detect the following, but it seems unlikely that anybody is
interested in counting them:
* Reset
* MT_FPCR (write to floating point control register)
* Arithmetic trap
* Dstream Fault
* Machine Check (ECC fault, etc.)
* OPCDEC (illegal opcode)
* Floating point disabled
* Differentiate between DTB single/double misses and 3 or 4 level
page tables
* Istream access violation
* Interrupt
* Icache Parity Error.
* Instruction killed (nop, trapb)
Unfortunately, there seems to be no way to detect Dcache and Bcache
misses; the latter could be approximated by making the counter
count Bcache misses, but that is not precise.
We model this as 20 counters:
* PCTR0
* PCTR1
* 9 ProfileMe events, induced by PCTR0
* 9 ProfileMe events, induced by PCTR1
*/
enum profileme_counters {
PM_STALLED, /* Stalled for at least one cycle
between the fetch and map stages */
PM_TAKEN, /* Conditional branch taken */
PM_MISPREDICT, /* Branch caused mispredict trap */
PM_ITB_MISS, /* ITB miss */
PM_DTB_MISS, /* DTB miss */
PM_REPLAY, /* Replay trap */
PM_LOAD_STORE, /* Load-store order trap */
PM_ICACHE_MISS, /* Icache miss */
PM_UNALIGNED, /* Unaligned Load/Store */
PM_NUM_COUNTERS
};
static inline void
op_add_pm(unsigned long pc, int kern, unsigned long counter,
struct op_counter_config *ctr, unsigned long event)
{
unsigned long fake_counter = 2 + event;
if (counter == 1)
fake_counter += PM_NUM_COUNTERS;
if (ctr[fake_counter].enabled)
oprofile_add_pc(pc, kern, fake_counter);
}
static void
ev67_handle_interrupt(unsigned long which, struct pt_regs *regs,
struct op_counter_config *ctr)
{
unsigned long pmpc, pctr_ctl;
int kern = !user_mode(regs);
int mispredict = 0;
union {
unsigned long v;
struct {
unsigned reserved: 30; /* 0-29 */
unsigned overcount: 3; /* 30-32 */
unsigned icache_miss: 1; /* 33 */
unsigned trap_type: 4; /* 34-37 */
unsigned load_store: 1; /* 38 */
unsigned trap: 1; /* 39 */
unsigned mispredict: 1; /* 40 */
} fields;
} i_stat;
enum trap_types {
TRAP_REPLAY,
TRAP_INVALID0,
TRAP_DTB_DOUBLE_MISS_3,
TRAP_DTB_DOUBLE_MISS_4,
TRAP_FP_DISABLED,
TRAP_UNALIGNED,
TRAP_DTB_SINGLE_MISS,
TRAP_DSTREAM_FAULT,
TRAP_OPCDEC,
TRAP_INVALID1,
TRAP_MACHINE_CHECK,
TRAP_INVALID2,
TRAP_ARITHMETIC,
TRAP_INVALID3,
TRAP_MT_FPCR,
TRAP_RESET
};
pmpc = wrperfmon(9, 0);
/* ??? Don't know how to handle physical-mode PALcode address. */
if (pmpc & 1)
return;
pmpc &= ~2; /* clear reserved bit */
i_stat.v = wrperfmon(8, 0);
if (i_stat.fields.trap) {
switch (i_stat.fields.trap_type) {
case TRAP_INVALID1:
case TRAP_INVALID2:
case TRAP_INVALID3:
/* Pipeline redirection occurred. PMPC points
to PALcode. Recognize ITB miss by PALcode
offset address, and get actual PC from
EXC_ADDR. */
oprofile_add_pc(regs->pc, kern, which);
if ((pmpc & ((1 << 15) - 1)) == 581)
op_add_pm(regs->pc, kern, which,
ctr, PM_ITB_MISS);
/* Most other bit and counter values will be
those for the first instruction in the
fault handler, so we're done. */
return;
case TRAP_REPLAY:
op_add_pm(pmpc, kern, which, ctr,
(i_stat.fields.load_store
? PM_LOAD_STORE : PM_REPLAY));
break;
case TRAP_DTB_DOUBLE_MISS_3:
case TRAP_DTB_DOUBLE_MISS_4:
case TRAP_DTB_SINGLE_MISS:
op_add_pm(pmpc, kern, which, ctr, PM_DTB_MISS);
break;
case TRAP_UNALIGNED:
op_add_pm(pmpc, kern, which, ctr, PM_UNALIGNED);
break;
case TRAP_INVALID0:
case TRAP_FP_DISABLED:
case TRAP_DSTREAM_FAULT:
case TRAP_OPCDEC:
case TRAP_MACHINE_CHECK:
case TRAP_ARITHMETIC:
case TRAP_MT_FPCR:
case TRAP_RESET:
break;
}
/* ??? JSR/JMP/RET/COR or HW_JSR/HW_JMP/HW_RET/HW_COR
mispredicts do not set this bit but can be
recognized by the presence of one of these
instructions at the PMPC location with bit 39
set. */
if (i_stat.fields.mispredict) {
mispredict = 1;
op_add_pm(pmpc, kern, which, ctr, PM_MISPREDICT);
}
}
oprofile_add_pc(pmpc, kern, which);
pctr_ctl = wrperfmon(5, 0);
if (pctr_ctl & (1UL << 27))
op_add_pm(pmpc, kern, which, ctr, PM_STALLED);
/* Unfortunately, TAK is undefined on mispredicted branches.
??? It is also undefined for non-cbranch insns, should
check that. */
if (!mispredict && pctr_ctl & (1UL << 0))
op_add_pm(pmpc, kern, which, ctr, PM_TAKEN);
}
struct op_axp_model op_model_ev67 = {
.reg_setup = ev67_reg_setup,
.cpu_setup = ev67_cpu_setup,
.reset_ctr = ev67_reset_ctr,
.handle_interrupt = ev67_handle_interrupt,
.cpu_type = "alpha/ev67",
.num_counters = 20,
.can_set_proc_mode = 0,
};

View File

@ -37,7 +37,6 @@ config ARC
select HAVE_KPROBES
select HAVE_KRETPROBES
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_OPROFILE
select HAVE_PERF_EVENTS
select HANDLE_DOMAIN_IRQ
select IRQ_DOMAIN

View File

@ -96,8 +96,6 @@ core-$(CONFIG_ARC_PLAT_TB10X) += arch/arc/plat-tb10x/
core-$(CONFIG_ARC_PLAT_AXS10X) += arch/arc/plat-axs10x/
core-$(CONFIG_ARC_SOC_HSDK) += arch/arc/plat-hsdk/
drivers-$(CONFIG_OPROFILE) += arch/arc/oprofile/
libs-y += arch/arc/lib/ $(LIBGCC)
boot := arch/arc/boot

View File

@ -1,10 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_OPROFILE) += oprofile.o
DRIVER_OBJS = $(addprefix ../../../drivers/oprofile/, \
oprof.o cpu_buffer.o buffer_sync.o \
event_buffer.o oprofile_files.o \
oprofilefs.o oprofile_stats.o \
timer_int.o )
oprofile-y := $(DRIVER_OBJS) common.o

View File

@ -1,23 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* Based on orig code from @author John Levon <levon@movementarian.org>
*/
#include <linux/oprofile.h>
#include <linux/perf_event.h>
int __init oprofile_arch_init(struct oprofile_operations *ops)
{
/*
* A failure here forces the oprofile core to switch to timer-based PC
* sampling, which will happen if, say, perf is not enabled/available
*/
return oprofile_perf_init(ops);
}
void oprofile_arch_exit(void)
{
oprofile_perf_exit();
}

View File

@ -102,7 +102,6 @@ config ARM
select HAVE_KRETPROBES if HAVE_KPROBES
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI
select HAVE_OPROFILE if HAVE_PERF_EVENTS
select HAVE_OPTPROBES if !THUMB2_KERNEL
select HAVE_PERF_EVENTS
select HAVE_PERF_REGS

View File

@ -260,8 +260,6 @@ core-y += $(machdirs) $(platdirs)
core- += $(patsubst %,arch/arm/mach-%/, $(machine-))
core- += $(patsubst %,arch/arm/plat-%/, $(plat-))
drivers-$(CONFIG_OPROFILE) += arch/arm/oprofile/
libs-y := arch/arm/lib/ $(libs-y)
# Default target when executing plain make

View File

@ -21,7 +21,6 @@ CONFIG_KALLSYMS_ALL=y
CONFIG_EMBEDDED=y
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_JUMP_LABEL=y
CONFIG_CC_STACKPROTECTOR_REGULAR=y
CONFIG_MODULES=y

View File

@ -11,7 +11,6 @@ CONFIG_BLK_DEV_INITRD=y
# CONFIG_PERF_EVENTS is not set
CONFIG_SLAB=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y

View File

@ -5,7 +5,6 @@ CONFIG_SYSFS_DEPRECATED_V2=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_EXPERT=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y

View File

@ -27,7 +27,6 @@ CONFIG_AEABI=y
CONFIG_ZBOOT_ROM_TEXT=0x0
CONFIG_ZBOOT_ROM_BSS=0x0
CONFIG_PM_DEBUG=y
CONFIG_OPROFILE=y
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y

View File

@ -16,7 +16,6 @@ CONFIG_KALLSYMS_ALL=y
# CONFIG_BASE_FULL is not set
CONFIG_EMBEDDED=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y

View File

@ -67,7 +67,6 @@ CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_IDLE=y
CONFIG_ARM_KIRKWOOD_CPUIDLE=y
CONFIG_OPROFILE=y
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y

View File

@ -5,7 +5,6 @@ CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_SLUB_DEBUG is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y

View File

@ -4,7 +4,6 @@ CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_LOG_BUF_SHIFT=19
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y

View File

@ -13,7 +13,6 @@ CONFIG_EXPERT=y
# CONFIG_VM_EVENT_COUNTERS is not set
CONFIG_SLOB=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y

View File

@ -62,7 +62,6 @@ CONFIG_CRYPTO_AES_ARM=m
CONFIG_CRYPTO_AES_ARM_BS=m
CONFIG_CRYPTO_GHASH_ARM_CE=m
CONFIG_CRYPTO_CHACHA20_NEON=m
CONFIG_OPROFILE=y
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y

View File

@ -5,7 +5,6 @@ CONFIG_LOG_BUF_SHIFT=14
CONFIG_EXPERT=y
# CONFIG_SLUB_DEBUG is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y

View File

@ -13,7 +13,6 @@ CONFIG_KALLSYMS_ALL=y
CONFIG_EMBEDDED=y
CONFIG_SLOB=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y

View File

@ -10,7 +10,6 @@ CONFIG_EMBEDDED=y
# CONFIG_SLUB_DEBUG is not set
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y

View File

@ -18,7 +18,6 @@ CONFIG_ZBOOT_ROM_TEXT=0x0
CONFIG_ZBOOT_ROM_BSS=0x0
CONFIG_VFP=y
CONFIG_NEON=y
CONFIG_OPROFILE=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set

View File

@ -5,7 +5,6 @@ CONFIG_SYSFS_DEPRECATED_V2=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_EXPERT=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y

View File

@ -11,7 +11,6 @@ CONFIG_CPUSETS=y
# CONFIG_NET_NS is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set

View File

@ -1,14 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_OPROFILE) += oprofile.o
DRIVER_OBJS = $(addprefix ../../../drivers/oprofile/, \
oprof.o cpu_buffer.o buffer_sync.o \
event_buffer.o oprofile_files.o \
oprofilefs.o oprofile_stats.o \
timer_int.o )
ifeq ($(CONFIG_HW_PERF_EVENTS),y)
DRIVER_OBJS += $(addprefix ../../../drivers/oprofile/, oprofile_perf.o)
endif
oprofile-y := $(DRIVER_OBJS) common.o

View File

@ -1,132 +0,0 @@
/**
* @file common.c
*
* @remark Copyright 2004 Oprofile Authors
* @remark Copyright 2010 ARM Ltd.
* @remark Read the file COPYING
*
* @author Zwane Mwaikambo
* @author Will Deacon [move to perf]
*/
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/mutex.h>
#include <linux/oprofile.h>
#include <linux/perf_event.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <asm/stacktrace.h>
#include <linux/uaccess.h>
#include <asm/perf_event.h>
#include <asm/ptrace.h>
#ifdef CONFIG_HW_PERF_EVENTS
/*
* OProfile has a curious naming scheme for the ARM PMUs, but they are
* part of the user ABI so we need to map from the perf PMU name for
* supported PMUs.
*/
static struct op_perf_name {
char *perf_name;
char *op_name;
} op_perf_name_map[] = {
{ "armv5_xscale1", "arm/xscale1" },
{ "armv5_xscale2", "arm/xscale2" },
{ "armv6_1136", "arm/armv6" },
{ "armv6_1156", "arm/armv6" },
{ "armv6_1176", "arm/armv6" },
{ "armv6_11mpcore", "arm/mpcore" },
{ "armv7_cortex_a8", "arm/armv7" },
{ "armv7_cortex_a9", "arm/armv7-ca9" },
};
char *op_name_from_perf_id(void)
{
int i;
struct op_perf_name names;
const char *perf_name = perf_pmu_name();
for (i = 0; i < ARRAY_SIZE(op_perf_name_map); ++i) {
names = op_perf_name_map[i];
if (!strcmp(names.perf_name, perf_name))
return names.op_name;
}
return NULL;
}
#endif
static int report_trace(struct stackframe *frame, void *d)
{
unsigned int *depth = d;
if (*depth) {
oprofile_add_trace(frame->pc);
(*depth)--;
}
return *depth == 0;
}
/*
* The registers we're interested in are at the end of the variable
* length saved register structure. The fp points at the end of this
* structure so the address of this struct is:
* (struct frame_tail *)(xxx->fp)-1
*/
struct frame_tail {
struct frame_tail *fp;
unsigned long sp;
unsigned long lr;
} __attribute__((packed));
static struct frame_tail* user_backtrace(struct frame_tail *tail)
{
struct frame_tail buftail[2];
/* Also check accessibility of one struct frame_tail beyond */
if (!access_ok(tail, sizeof(buftail)))
return NULL;
if (__copy_from_user_inatomic(buftail, tail, sizeof(buftail)))
return NULL;
oprofile_add_trace(buftail[0].lr);
/* frame pointers should strictly progress back up the stack
* (towards higher addresses) */
if (tail + 1 >= buftail[0].fp)
return NULL;
return buftail[0].fp-1;
}
static void arm_backtrace(struct pt_regs * const regs, unsigned int depth)
{
struct frame_tail *tail = ((struct frame_tail *) regs->ARM_fp) - 1;
if (!user_mode(regs)) {
struct stackframe frame;
arm_get_current_stackframe(regs, &frame);
walk_stackframe(&frame, report_trace, &depth);
return;
}
while (depth-- && tail && !((unsigned long) tail & 3))
tail = user_backtrace(tail);
}
int __init oprofile_arch_init(struct oprofile_operations *ops)
{
/* provide backtrace support also in timer mode: */
ops->backtrace = arm_backtrace;
return oprofile_perf_init(ops);
}
void oprofile_arch_exit(void)
{
oprofile_perf_exit();
}

View File

@ -7,7 +7,6 @@ config HEXAGON
select ARCH_32BIT_OFF_T
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_NO_PREEMPT
select HAVE_OPROFILE
# Other pending projects/to-do items.
# select HAVE_REGS_AND_STACK_ACCESS_API
# select HAVE_HW_BREAKPOINT if PERF_EVENTS

View File

@ -24,7 +24,6 @@ config IA64
select HAVE_UNSTABLE_SCHED_CLOCK
select HAVE_EXIT_THREAD
select HAVE_IDE
select HAVE_OPROFILE
select HAVE_KPROBES
select HAVE_KRETPROBES
select HAVE_FTRACE_MCOUNT_RECORD

View File

@ -52,7 +52,6 @@ core-y += arch/ia64/kernel/ arch/ia64/mm/
core-$(CONFIG_IA64_SGI_UV) += arch/ia64/uv/
drivers-y += arch/ia64/pci/ arch/ia64/hp/common/
drivers-$(CONFIG_OPROFILE) += arch/ia64/oprofile/
PHONY += compressed check

View File

@ -2,7 +2,6 @@ CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_LOG_BUF_SHIFT=16
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_PARTITION_ADVANCED=y

View File

@ -69,7 +69,6 @@ extern int ia64_last_device_vector;
#define IA64_NUM_DEVICE_VECTORS (IA64_LAST_DEVICE_VECTOR - IA64_FIRST_DEVICE_VECTOR + 1)
#define IA64_MCA_RENDEZ_VECTOR 0xe8 /* MCA rendez interrupt */
#define IA64_PERFMON_VECTOR 0xee /* performance monitor interrupt vector */
#define IA64_TIMER_VECTOR 0xef /* use highest-prio group 15 interrupt for timer */
#define IA64_MCA_WAKEUP_VECTOR 0xf0 /* MCA wakeup (must be >MCA_RENDEZ_VECTOR) */
#define IA64_IPI_LOCAL_TLB_FLUSH 0xfc /* SMP flush local TLB */

View File

@ -1,111 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2001-2003 Hewlett-Packard Co
* Stephane Eranian <eranian@hpl.hp.com>
*/
#ifndef _ASM_IA64_PERFMON_H
#define _ASM_IA64_PERFMON_H
#include <uapi/asm/perfmon.h>
extern long perfmonctl(int fd, int cmd, void *arg, int narg);
typedef struct {
void (*handler)(int irq, void *arg, struct pt_regs *regs);
} pfm_intr_handler_desc_t;
extern void pfm_save_regs (struct task_struct *);
extern void pfm_load_regs (struct task_struct *);
extern void pfm_exit_thread(struct task_struct *);
extern int pfm_use_debug_registers(struct task_struct *);
extern int pfm_release_debug_registers(struct task_struct *);
extern void pfm_syst_wide_update_task(struct task_struct *, unsigned long info, int is_ctxswin);
extern void pfm_inherit(struct task_struct *task, struct pt_regs *regs);
extern void pfm_init_percpu(void);
extern void pfm_handle_work(void);
extern int pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *h);
extern int pfm_remove_alt_pmu_interrupt(pfm_intr_handler_desc_t *h);
/*
* Reset PMD register flags
*/
#define PFM_PMD_SHORT_RESET 0
#define PFM_PMD_LONG_RESET 1
typedef union {
unsigned int val;
struct {
unsigned int notify_user:1; /* notify user program of overflow */
unsigned int reset_ovfl_pmds:1; /* reset overflowed PMDs */
unsigned int block_task:1; /* block monitored task on kernel exit */
unsigned int mask_monitoring:1; /* mask monitors via PMCx.plm */
unsigned int reserved:28; /* for future use */
} bits;
} pfm_ovfl_ctrl_t;
typedef struct {
unsigned char ovfl_pmd; /* index of overflowed PMD */
unsigned char ovfl_notify; /* =1 if monitor requested overflow notification */
unsigned short active_set; /* event set active at the time of the overflow */
pfm_ovfl_ctrl_t ovfl_ctrl; /* return: perfmon controls to set by handler */
unsigned long pmd_last_reset; /* last reset value of the PMD */
unsigned long smpl_pmds[4]; /* bitmask of other PMD of interest on overflow */
unsigned long smpl_pmds_values[PMU_MAX_PMDS]; /* values for the other PMDs of interest */
unsigned long pmd_value; /* current 64-bit value of the PMD */
unsigned long pmd_eventid; /* eventid associated with PMD */
} pfm_ovfl_arg_t;
typedef struct {
char *fmt_name;
pfm_uuid_t fmt_uuid;
size_t fmt_arg_size;
unsigned long fmt_flags;
int (*fmt_validate)(struct task_struct *task, unsigned int flags, int cpu, void *arg);
int (*fmt_getsize)(struct task_struct *task, unsigned int flags, int cpu, void *arg, unsigned long *size);
int (*fmt_init)(struct task_struct *task, void *buf, unsigned int flags, int cpu, void *arg);
int (*fmt_handler)(struct task_struct *task, void *buf, pfm_ovfl_arg_t *arg, struct pt_regs *regs, unsigned long stamp);
int (*fmt_restart)(struct task_struct *task, pfm_ovfl_ctrl_t *ctrl, void *buf, struct pt_regs *regs);
int (*fmt_restart_active)(struct task_struct *task, pfm_ovfl_ctrl_t *ctrl, void *buf, struct pt_regs *regs);
int (*fmt_exit)(struct task_struct *task, void *buf, struct pt_regs *regs);
struct list_head fmt_list;
} pfm_buffer_fmt_t;
extern int pfm_register_buffer_fmt(pfm_buffer_fmt_t *fmt);
extern int pfm_unregister_buffer_fmt(pfm_uuid_t uuid);
/*
* perfmon interface exported to modules
*/
extern int pfm_mod_read_pmds(struct task_struct *, void *req, unsigned int nreq, struct pt_regs *regs);
extern int pfm_mod_write_pmcs(struct task_struct *, void *req, unsigned int nreq, struct pt_regs *regs);
extern int pfm_mod_write_ibrs(struct task_struct *task, void *req, unsigned int nreq, struct pt_regs *regs);
extern int pfm_mod_write_dbrs(struct task_struct *task, void *req, unsigned int nreq, struct pt_regs *regs);
/*
* describe the content of the local_cpu_data->pfm_syst_info field
*/
#define PFM_CPUINFO_SYST_WIDE 0x1 /* if set a system wide session exists */
#define PFM_CPUINFO_DCR_PP 0x2 /* if set the system wide session has started */
#define PFM_CPUINFO_EXCL_IDLE 0x4 /* the system wide session excludes the idle task */
/*
* sysctl control structure. visible to sampling formats
*/
typedef struct {
int debug; /* turn on/off debugging via syslog */
int debug_ovfl; /* turn on/off debug printk in overflow handler */
int fastctxsw; /* turn on/off fast (insecure) ctxsw */
int expert_mode; /* turn on/off value checking */
} pfm_sysctl_t;
extern pfm_sysctl_t pfm_sysctl;
#endif /* _ASM_IA64_PERFMON_H */

View File

@ -1,178 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* Copyright (C) 2001-2003 Hewlett-Packard Co
* Stephane Eranian <eranian@hpl.hp.com>
*/
#ifndef _UAPI_ASM_IA64_PERFMON_H
#define _UAPI_ASM_IA64_PERFMON_H
/*
* perfmon commands supported on all CPU models
*/
#define PFM_WRITE_PMCS 0x01
#define PFM_WRITE_PMDS 0x02
#define PFM_READ_PMDS 0x03
#define PFM_STOP 0x04
#define PFM_START 0x05
#define PFM_ENABLE 0x06 /* obsolete */
#define PFM_DISABLE 0x07 /* obsolete */
#define PFM_CREATE_CONTEXT 0x08
#define PFM_DESTROY_CONTEXT 0x09 /* obsolete use close() */
#define PFM_RESTART 0x0a
#define PFM_PROTECT_CONTEXT 0x0b /* obsolete */
#define PFM_GET_FEATURES 0x0c
#define PFM_DEBUG 0x0d
#define PFM_UNPROTECT_CONTEXT 0x0e /* obsolete */
#define PFM_GET_PMC_RESET_VAL 0x0f
#define PFM_LOAD_CONTEXT 0x10
#define PFM_UNLOAD_CONTEXT 0x11
/*
* PMU model specific commands (may not be supported on all PMU models)
*/
#define PFM_WRITE_IBRS 0x20
#define PFM_WRITE_DBRS 0x21
/*
* context flags
*/
#define PFM_FL_NOTIFY_BLOCK 0x01 /* block task on user level notifications */
#define PFM_FL_SYSTEM_WIDE 0x02 /* create a system wide context */
#define PFM_FL_OVFL_NO_MSG 0x80 /* do not post overflow/end messages for notification */
/*
* event set flags
*/
#define PFM_SETFL_EXCL_IDLE 0x01 /* exclude idle task (syswide only) XXX: DO NOT USE YET */
/*
* PMC flags
*/
#define PFM_REGFL_OVFL_NOTIFY 0x1 /* send notification on overflow */
#define PFM_REGFL_RANDOM 0x2 /* randomize sampling interval */
/*
* PMD/PMC/IBR/DBR return flags (ignored on input)
*
* Those flags are used on output and must be checked in case EAGAIN is returned
* by any of the calls using a pfarg_reg_t or pfarg_dbreg_t structure.
*/
#define PFM_REG_RETFL_NOTAVAIL (1UL<<31) /* set if register is implemented but not available */
#define PFM_REG_RETFL_EINVAL (1UL<<30) /* set if register entry is invalid */
#define PFM_REG_RETFL_MASK (PFM_REG_RETFL_NOTAVAIL|PFM_REG_RETFL_EINVAL)
#define PFM_REG_HAS_ERROR(flag) (((flag) & PFM_REG_RETFL_MASK) != 0)
typedef unsigned char pfm_uuid_t[16]; /* custom sampling buffer identifier type */
/*
* Request structure used to define a context
*/
typedef struct {
pfm_uuid_t ctx_smpl_buf_id; /* which buffer format to use (if needed) */
unsigned long ctx_flags; /* noblock/block */
unsigned short ctx_nextra_sets; /* number of extra event sets (you always get 1) */
unsigned short ctx_reserved1; /* for future use */
int ctx_fd; /* return arg: unique identification for context */
void *ctx_smpl_vaddr; /* return arg: virtual address of sampling buffer, if used */
unsigned long ctx_reserved2[11];/* for future use */
} pfarg_context_t;
/*
* Request structure used to write/read a PMC or PMD
*/
typedef struct {
unsigned int reg_num; /* which register */
unsigned short reg_set; /* event set for this register */
unsigned short reg_reserved1; /* for future use */
unsigned long reg_value; /* initial pmc/pmd value */
unsigned long reg_flags; /* input: pmc/pmd flags, return: reg error */
unsigned long reg_long_reset; /* reset after buffer overflow notification */
unsigned long reg_short_reset; /* reset after counter overflow */
unsigned long reg_reset_pmds[4]; /* which other counters to reset on overflow */
unsigned long reg_random_seed; /* seed value when randomization is used */
unsigned long reg_random_mask; /* bitmask used to limit random value */
unsigned long reg_last_reset_val;/* return: PMD last reset value */
unsigned long reg_smpl_pmds[4]; /* which pmds are accessed when PMC overflows */
unsigned long reg_smpl_eventid; /* opaque sampling event identifier */
unsigned long reg_reserved2[3]; /* for future use */
} pfarg_reg_t;
typedef struct {
unsigned int dbreg_num; /* which debug register */
unsigned short dbreg_set; /* event set for this register */
unsigned short dbreg_reserved1; /* for future use */
unsigned long dbreg_value; /* value for debug register */
unsigned long dbreg_flags; /* return: dbreg error */
unsigned long dbreg_reserved2[1]; /* for future use */
} pfarg_dbreg_t;
typedef struct {
unsigned int ft_version; /* perfmon: major [16-31], minor [0-15] */
unsigned int ft_reserved; /* reserved for future use */
unsigned long reserved[4]; /* for future use */
} pfarg_features_t;
typedef struct {
pid_t load_pid; /* process to load the context into */
unsigned short load_set; /* first event set to load */
unsigned short load_reserved1; /* for future use */
unsigned long load_reserved2[3]; /* for future use */
} pfarg_load_t;
typedef struct {
int msg_type; /* generic message header */
int msg_ctx_fd; /* generic message header */
unsigned long msg_ovfl_pmds[4]; /* which PMDs overflowed */
unsigned short msg_active_set; /* active set at the time of overflow */
unsigned short msg_reserved1; /* for future use */
unsigned int msg_reserved2; /* for future use */
unsigned long msg_tstamp; /* for perf tuning/debug */
} pfm_ovfl_msg_t;
typedef struct {
int msg_type; /* generic message header */
int msg_ctx_fd; /* generic message header */
unsigned long msg_tstamp; /* for perf tuning */
} pfm_end_msg_t;
typedef struct {
int msg_type; /* type of the message */
int msg_ctx_fd; /* unique identifier for the context */
unsigned long msg_tstamp; /* for perf tuning */
} pfm_gen_msg_t;
#define PFM_MSG_OVFL 1 /* an overflow happened */
#define PFM_MSG_END 2 /* task to which context was attached ended */
typedef union {
pfm_ovfl_msg_t pfm_ovfl_msg;
pfm_end_msg_t pfm_end_msg;
pfm_gen_msg_t pfm_gen_msg;
} pfm_msg_t;
/*
* Define the version numbers for both perfmon as a whole and the sampling buffer format.
*/
#define PFM_VERSION_MAJ 2U
#define PFM_VERSION_MIN 0U
#define PFM_VERSION (((PFM_VERSION_MAJ&0xffff)<<16)|(PFM_VERSION_MIN & 0xffff))
#define PFM_VERSION_MAJOR(x) (((x)>>16) & 0xffff)
#define PFM_VERSION_MINOR(x) ((x) & 0xffff)
/*
* miscellaneous architected definitions
*/
#define PMU_FIRST_COUNTER 4 /* first counting monitor (PMC/PMD) */
#define PMU_MAX_PMCS 256 /* maximum architected number of PMC registers */
#define PMU_MAX_PMDS 256 /* maximum architected number of PMD registers */
#endif /* _UAPI_ASM_IA64_PERFMON_H */

View File

@ -1,84 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* Copyright (C) 2002-2003 Hewlett-Packard Co
* Stephane Eranian <eranian@hpl.hp.com>
*
* This file implements the default sampling buffer format
* for Linux/ia64 perfmon subsystem.
*/
#ifndef __PERFMON_DEFAULT_SMPL_H__
#define __PERFMON_DEFAULT_SMPL_H__ 1
#define PFM_DEFAULT_SMPL_UUID { \
0x4d, 0x72, 0xbe, 0xc0, 0x06, 0x64, 0x41, 0x43, 0x82, 0xb4, 0xd3, 0xfd, 0x27, 0x24, 0x3c, 0x97}
/*
* format specific parameters (passed at context creation)
*/
typedef struct {
unsigned long buf_size; /* size of the buffer in bytes */
unsigned int flags; /* buffer specific flags */
unsigned int res1; /* for future use */
unsigned long reserved[2]; /* for future use */
} pfm_default_smpl_arg_t;
/*
* combined context+format specific structure. Can be passed
* to PFM_CONTEXT_CREATE
*/
typedef struct {
pfarg_context_t ctx_arg;
pfm_default_smpl_arg_t buf_arg;
} pfm_default_smpl_ctx_arg_t;
/*
* This header is at the beginning of the sampling buffer returned to the user.
* It is directly followed by the first record.
*/
typedef struct {
unsigned long hdr_count; /* how many valid entries */
unsigned long hdr_cur_offs; /* current offset from top of buffer */
unsigned long hdr_reserved2; /* reserved for future use */
unsigned long hdr_overflows; /* how many times the buffer overflowed */
unsigned long hdr_buf_size; /* how many bytes in the buffer */
unsigned int hdr_version; /* contains perfmon version (smpl format diffs) */
unsigned int hdr_reserved1; /* for future use */
unsigned long hdr_reserved[10]; /* for future use */
} pfm_default_smpl_hdr_t;
/*
* Entry header in the sampling buffer. The header is directly followed
* by the values of the PMD registers of interest saved in increasing
* index order: PMD4, PMD5, and so on. How many PMDs are present depends
* on how the session was programmed.
*
* In the case where multiple counters overflow at the same time, multiple
* entries are written consecutively.
*
* last_reset_value member indicates the initial value of the overflowed PMD.
*/
typedef struct {
int pid; /* thread id (for NPTL, this is gettid()) */
unsigned char reserved1[3]; /* reserved for future use */
unsigned char ovfl_pmd; /* index of overflowed PMD */
unsigned long last_reset_val; /* initial value of overflowed PMD */
unsigned long ip; /* where the overflow interrupt happened */
unsigned long tstamp; /* ar.itc when entering perfmon intr. handler */
unsigned short cpu; /* cpu on which the overflow occurred */
unsigned short set; /* event set active when overflow occurred */
int tgid; /* thread group id (for NPTL, this is getpid()) */
} pfm_default_smpl_entry_t;
#define PFM_DEFAULT_MAX_PMDS 64 /* how many PMDs the data structures support (one bit per PMD in an unsigned long) */
#define PFM_DEFAULT_MAX_ENTRY_SIZE (sizeof(pfm_default_smpl_entry_t)+(sizeof(unsigned long)*PFM_DEFAULT_MAX_PMDS))
#define PFM_DEFAULT_SMPL_MIN_BUF_SIZE (sizeof(pfm_default_smpl_hdr_t)+PFM_DEFAULT_MAX_ENTRY_SIZE)
#define PFM_DEFAULT_SMPL_VERSION_MAJ 2U
#define PFM_DEFAULT_SMPL_VERSION_MIN 0U
#define PFM_DEFAULT_SMPL_VERSION (((PFM_DEFAULT_SMPL_VERSION_MAJ&0xffff)<<16)|(PFM_DEFAULT_SMPL_VERSION_MIN & 0xffff))
#endif /* __PERFMON_DEFAULT_SMPL_H__ */

View File

@ -648,46 +648,6 @@ static int version_info(struct seq_file *m)
return 0;
}
static int perfmon_info(struct seq_file *m)
{
u64 pm_buffer[16];
pal_perf_mon_info_u_t pm_info;
if (ia64_pal_perf_mon_info(pm_buffer, &pm_info) != 0)
return 0;
seq_printf(m,
"PMC/PMD pairs : %d\n"
"Counter width : %d bits\n"
"Cycle event number : %d\n"
"Retired event number : %d\n"
"Implemented PMC : ",
pm_info.pal_perf_mon_info_s.generic,
pm_info.pal_perf_mon_info_s.width,
pm_info.pal_perf_mon_info_s.cycles,
pm_info.pal_perf_mon_info_s.retired);
bitregister_process(m, pm_buffer, 256);
seq_puts(m, "\nImplemented PMD : ");
bitregister_process(m, pm_buffer+4, 256);
seq_puts(m, "\nCycles count capable : ");
bitregister_process(m, pm_buffer+8, 256);
seq_puts(m, "\nRetired bundles count capable : ");
#ifdef CONFIG_ITANIUM
/*
* PAL_PERF_MON_INFO reports that only PMC4 can be used to count CPU_CYCLES
* which is wrong, both PMC4 and PMD5 support it.
*/
if (pm_buffer[12] == 0x10)
pm_buffer[12]=0x30;
#endif
bitregister_process(m, pm_buffer+12, 256);
seq_putc(m, '\n');
return 0;
}
static int frequency_info(struct seq_file *m)
{
struct pal_freq_ratio proc, itc, bus;
@ -816,7 +776,6 @@ static const palinfo_entry_t palinfo_entries[]={
{ "power_info", power_info, },
{ "register_info", register_info, },
{ "processor_info", processor_info, },
{ "perfmon_info", perfmon_info, },
{ "frequency_info", frequency_info, },
{ "bus_info", bus_info },
{ "tr_info", tr_info, }

View File

@ -1,297 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2002-2003 Hewlett-Packard Co
* Stephane Eranian <eranian@hpl.hp.com>
*
* This file implements the default sampling buffer format
* for the Linux/ia64 perfmon-2 subsystem.
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/module.h>
#include <linux/init.h>
#include <asm/delay.h>
#include <linux/smp.h>
#include <asm/perfmon.h>
#include <asm/perfmon_default_smpl.h>
MODULE_AUTHOR("Stephane Eranian <eranian@hpl.hp.com>");
MODULE_DESCRIPTION("perfmon default sampling format");
MODULE_LICENSE("GPL");
#define DEFAULT_DEBUG 1
#ifdef DEFAULT_DEBUG
#define DPRINT(a) \
do { \
if (unlikely(pfm_sysctl.debug >0)) { printk("%s.%d: CPU%d ", __func__, __LINE__, smp_processor_id()); printk a; } \
} while (0)
#define DPRINT_ovfl(a) \
do { \
if (unlikely(pfm_sysctl.debug > 0 && pfm_sysctl.debug_ovfl >0)) { printk("%s.%d: CPU%d ", __func__, __LINE__, smp_processor_id()); printk a; } \
} while (0)
#else
#define DPRINT(a)
#define DPRINT_ovfl(a)
#endif
static int
default_validate(struct task_struct *task, unsigned int flags, int cpu, void *data)
{
pfm_default_smpl_arg_t *arg = (pfm_default_smpl_arg_t*)data;
int ret = 0;
if (data == NULL) {
DPRINT(("[%d] no argument passed\n", task_pid_nr(task)));
return -EINVAL;
}
DPRINT(("[%d] validate flags=0x%x CPU%d\n", task_pid_nr(task), flags, cpu));
/*
* must hold at least the buffer header + one minimally sized entry
*/
if (arg->buf_size < PFM_DEFAULT_SMPL_MIN_BUF_SIZE) return -EINVAL;
DPRINT(("buf_size=%lu\n", arg->buf_size));
return ret;
}
static int
default_get_size(struct task_struct *task, unsigned int flags, int cpu, void *data, unsigned long *size)
{
pfm_default_smpl_arg_t *arg = (pfm_default_smpl_arg_t *)data;
/*
* size has been validated in default_validate
*/
*size = arg->buf_size;
return 0;
}
static int
default_init(struct task_struct *task, void *buf, unsigned int flags, int cpu, void *data)
{
pfm_default_smpl_hdr_t *hdr;
pfm_default_smpl_arg_t *arg = (pfm_default_smpl_arg_t *)data;
hdr = (pfm_default_smpl_hdr_t *)buf;
hdr->hdr_version = PFM_DEFAULT_SMPL_VERSION;
hdr->hdr_buf_size = arg->buf_size;
hdr->hdr_cur_offs = sizeof(*hdr);
hdr->hdr_overflows = 0UL;
hdr->hdr_count = 0UL;
DPRINT(("[%d] buffer=%p buf_size=%lu hdr_size=%lu hdr_version=%u cur_offs=%lu\n",
task_pid_nr(task),
buf,
hdr->hdr_buf_size,
sizeof(*hdr),
hdr->hdr_version,
hdr->hdr_cur_offs));
return 0;
}
static int
default_handler(struct task_struct *task, void *buf, pfm_ovfl_arg_t *arg, struct pt_regs *regs, unsigned long stamp)
{
pfm_default_smpl_hdr_t *hdr;
pfm_default_smpl_entry_t *ent;
void *cur, *last;
unsigned long *e, entry_size;
unsigned int npmds, i;
unsigned char ovfl_pmd;
unsigned char ovfl_notify;
if (unlikely(buf == NULL || arg == NULL|| regs == NULL || task == NULL)) {
DPRINT(("[%d] invalid arguments buf=%p arg=%p\n", task->pid, buf, arg));
return -EINVAL;
}
hdr = (pfm_default_smpl_hdr_t *)buf;
cur = buf+hdr->hdr_cur_offs;
last = buf+hdr->hdr_buf_size;
ovfl_pmd = arg->ovfl_pmd;
ovfl_notify = arg->ovfl_notify;
/*
* precheck for sanity
*/
if ((last - cur) < PFM_DEFAULT_MAX_ENTRY_SIZE) goto full;
npmds = hweight64(arg->smpl_pmds[0]);
ent = (pfm_default_smpl_entry_t *)cur;
prefetch(arg->smpl_pmds_values);
entry_size = sizeof(*ent) + (npmds << 3);
/* position for first pmd */
e = (unsigned long *)(ent+1);
hdr->hdr_count++;
DPRINT_ovfl(("[%d] count=%lu cur=%p last=%p free_bytes=%lu ovfl_pmd=%d ovfl_notify=%d npmds=%u\n",
task->pid,
hdr->hdr_count,
cur, last,
last-cur,
ovfl_pmd,
ovfl_notify, npmds));
/*
* current = task running at the time of the overflow.
*
* per-task mode:
* - this is usually the task being monitored.
* Under certain conditions, it might be a different task
*
* system-wide:
* - this is not necessarily the task controlling the session
*/
ent->pid = current->pid;
ent->ovfl_pmd = ovfl_pmd;
ent->last_reset_val = arg->pmd_last_reset; //pmd[0].reg_last_reset_val;
/*
* where the fault happened (includes slot number)
*/
ent->ip = regs->cr_iip | ((regs->cr_ipsr >> 41) & 0x3);
ent->tstamp = stamp;
ent->cpu = smp_processor_id();
ent->set = arg->active_set;
ent->tgid = current->tgid;
/*
* selectively store PMDs in increasing index number
*/
if (npmds) {
unsigned long *val = arg->smpl_pmds_values;
for(i=0; i < npmds; i++) {
*e++ = *val++;
}
}
/*
* update position for next entry
*/
hdr->hdr_cur_offs += entry_size;
cur += entry_size;
/*
* post check to avoid losing the last sample
*/
if ((last - cur) < PFM_DEFAULT_MAX_ENTRY_SIZE) goto full;
/*
* keep same ovfl_pmds, ovfl_notify
*/
arg->ovfl_ctrl.bits.notify_user = 0;
arg->ovfl_ctrl.bits.block_task = 0;
arg->ovfl_ctrl.bits.mask_monitoring = 0;
arg->ovfl_ctrl.bits.reset_ovfl_pmds = 1; /* reset before returning from interrupt handler */
return 0;
full:
DPRINT_ovfl(("sampling buffer full free=%lu, count=%lu, ovfl_notify=%d\n", last-cur, hdr->hdr_count, ovfl_notify));
/*
* increment the number of buffer overflows.
* important to detect duplicate sets of samples.
*/
hdr->hdr_overflows++;
/*
* if no notification requested, then we saturate the buffer
*/
if (ovfl_notify == 0) {
arg->ovfl_ctrl.bits.notify_user = 0;
arg->ovfl_ctrl.bits.block_task = 0;
arg->ovfl_ctrl.bits.mask_monitoring = 1;
arg->ovfl_ctrl.bits.reset_ovfl_pmds = 0;
} else {
arg->ovfl_ctrl.bits.notify_user = 1;
arg->ovfl_ctrl.bits.block_task = 1; /* ignored for non-blocking context */
arg->ovfl_ctrl.bits.mask_monitoring = 1;
arg->ovfl_ctrl.bits.reset_ovfl_pmds = 0; /* no reset now */
}
return -1; /* we are full, sorry */
}
static int
default_restart(struct task_struct *task, pfm_ovfl_ctrl_t *ctrl, void *buf, struct pt_regs *regs)
{
pfm_default_smpl_hdr_t *hdr;
hdr = (pfm_default_smpl_hdr_t *)buf;
hdr->hdr_count = 0UL;
hdr->hdr_cur_offs = sizeof(*hdr);
ctrl->bits.mask_monitoring = 0;
ctrl->bits.reset_ovfl_pmds = 1; /* uses long-reset values */
return 0;
}
static int
default_exit(struct task_struct *task, void *buf, struct pt_regs *regs)
{
DPRINT(("[%d] exit(%p)\n", task_pid_nr(task), buf));
return 0;
}
static pfm_buffer_fmt_t default_fmt={
.fmt_name = "default_format",
.fmt_uuid = PFM_DEFAULT_SMPL_UUID,
.fmt_arg_size = sizeof(pfm_default_smpl_arg_t),
.fmt_validate = default_validate,
.fmt_getsize = default_get_size,
.fmt_init = default_init,
.fmt_handler = default_handler,
.fmt_restart = default_restart,
.fmt_restart_active = default_restart,
.fmt_exit = default_exit,
};
static int __init
pfm_default_smpl_init_module(void)
{
int ret;
ret = pfm_register_buffer_fmt(&default_fmt);
if (ret == 0) {
printk("perfmon_default_smpl: %s v%u.%u registered\n",
default_fmt.fmt_name,
PFM_DEFAULT_SMPL_VERSION_MAJ,
PFM_DEFAULT_SMPL_VERSION_MIN);
} else {
printk("perfmon_default_smpl: %s cannot register ret=%d\n",
default_fmt.fmt_name,
ret);
}
return ret;
}
static void __exit
pfm_default_smpl_cleanup_module(void)
{
int ret;
ret = pfm_unregister_buffer_fmt(default_fmt.fmt_uuid);
printk("perfmon_default_smpl: unregister %s=%d\n", default_fmt.fmt_name, ret);
}
module_init(pfm_default_smpl_init_module);
module_exit(pfm_default_smpl_cleanup_module);
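
One detail of default_handler() worth spelling out is the ip encoding: ia64 instruction bundles are 16 bytes, so cr_iip has zero low bits, and the handler borrows bits 0-1 to store the slot number taken from psr.ri (bits 41-42 of cr_ipsr). A consumer can split the field back apart; a sketch, assuming only the encoding shown above:

/* Hypothetical decoder for the ip field written by default_handler(). */
static inline void decode_sample_ip(unsigned long ip,
				    unsigned long *bundle, unsigned int *slot)
{
	*slot   = ip & 0x3;	/* instruction slot 0, 1 or 2 within the bundle */
	*bundle = ip & ~0x3UL;	/* 16-byte aligned bundle address */
}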


@@ -1,46 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* This file contains the generic PMU register description tables
* and pmc checker used by perfmon.c.
*
* Copyright (C) 2002-2003 Hewlett Packard Co
* Stephane Eranian <eranian@hpl.hp.com>
*/
static pfm_reg_desc_t pfm_gen_pmc_desc[PMU_MAX_PMCS]={
/* pmc0 */ { PFM_REG_CONTROL , 0, 0x1UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc1 */ { PFM_REG_CONTROL , 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc2 */ { PFM_REG_CONTROL , 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc3 */ { PFM_REG_CONTROL , 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc4 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {RDEP(4),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc5 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {RDEP(5),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc6 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {RDEP(6),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc7 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {RDEP(7),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
{ PFM_REG_END , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}}, /* end marker */
};
static pfm_reg_desc_t pfm_gen_pmd_desc[PMU_MAX_PMDS]={
/* pmd0 */ { PFM_REG_NOTIMPL , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}},
/* pmd1 */ { PFM_REG_NOTIMPL , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}},
/* pmd2 */ { PFM_REG_NOTIMPL , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}},
/* pmd3 */ { PFM_REG_NOTIMPL , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}},
/* pmd4 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {RDEP(4),0UL, 0UL, 0UL}},
/* pmd5 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {RDEP(5),0UL, 0UL, 0UL}},
/* pmd6 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {RDEP(6),0UL, 0UL, 0UL}},
/* pmd7 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {RDEP(7),0UL, 0UL, 0UL}},
{ PFM_REG_END , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}}, /* end marker */
};
/*
* impl_pmcs, impl_pmds are computed at runtime to minimize errors!
*/
static pmu_config_t pmu_conf_gen={
.pmu_name = "Generic",
.pmu_family = 0xff, /* any */
.ovfl_val = (1UL << 32) - 1,
.num_ibrs = 0, /* does not use */
.num_dbrs = 0, /* does not use */
.pmd_desc = pfm_gen_pmd_desc,
.pmc_desc = pfm_gen_pmc_desc
};
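
The RDEP() masks in these tables describe which registers depend on which: a PMC's dependency mask names the data registers it feeds, and a PMD's mask points back at its control register. RDEP(x) is defined in perfmon.c (not shown in this diff) as a single bit, so dependency sets are plain ORs; the macro body below is an assumption based on that definition:

#define RDEP(x)	(1UL << (x))

/* a control register feeding PMD4 and PMD5 would carry: */
unsigned long dep_pmd = RDEP(4) | RDEP(5);	/* == 0x30 */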


@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* This file contains the Itanium PMU register description tables
* and pmc checker used by perfmon.c.
* and pmc checker.
*
* Copyright (C) 2002-2003 Hewlett Packard Co
* Stephane Eranian <eranian@hpl.hp.com>


@@ -1,188 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* This file contains the McKinley PMU register description tables
* and pmc checker used by perfmon.c.
*
* Copyright (C) 2002-2003 Hewlett Packard Co
* Stephane Eranian <eranian@hpl.hp.com>
*/
static int pfm_mck_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnum, unsigned long *val, struct pt_regs *regs);
static pfm_reg_desc_t pfm_mck_pmc_desc[PMU_MAX_PMCS]={
/* pmc0 */ { PFM_REG_CONTROL , 0, 0x1UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc1 */ { PFM_REG_CONTROL , 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc2 */ { PFM_REG_CONTROL , 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc3 */ { PFM_REG_CONTROL , 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc4 */ { PFM_REG_COUNTING, 6, 0x0000000000800000UL, 0xfffff7fUL, NULL, pfm_mck_pmc_check, {RDEP(4),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc5 */ { PFM_REG_COUNTING, 6, 0x0UL, 0xfffff7fUL, NULL, pfm_mck_pmc_check, {RDEP(5),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc6 */ { PFM_REG_COUNTING, 6, 0x0UL, 0xfffff7fUL, NULL, pfm_mck_pmc_check, {RDEP(6),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc7 */ { PFM_REG_COUNTING, 6, 0x0UL, 0xfffff7fUL, NULL, pfm_mck_pmc_check, {RDEP(7),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc8 */ { PFM_REG_CONFIG , 0, 0xffffffff3fffffffUL, 0xffffffff3ffffffbUL, NULL, pfm_mck_pmc_check, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc9 */ { PFM_REG_CONFIG , 0, 0xffffffff3ffffffcUL, 0xffffffff3ffffffbUL, NULL, pfm_mck_pmc_check, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc10 */ { PFM_REG_MONITOR , 4, 0x0UL, 0xffffUL, NULL, pfm_mck_pmc_check, {RDEP(0)|RDEP(1),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc11 */ { PFM_REG_MONITOR , 6, 0x0UL, 0x30f01cf, NULL, pfm_mck_pmc_check, {RDEP(2)|RDEP(3)|RDEP(17),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc12 */ { PFM_REG_MONITOR , 6, 0x0UL, 0xffffUL, NULL, pfm_mck_pmc_check, {RDEP(8)|RDEP(9)|RDEP(10)|RDEP(11)|RDEP(12)|RDEP(13)|RDEP(14)|RDEP(15)|RDEP(16),0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc13 */ { PFM_REG_CONFIG , 0, 0x00002078fefefefeUL, 0x1e00018181818UL, NULL, pfm_mck_pmc_check, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc14 */ { PFM_REG_CONFIG , 0, 0x0db60db60db60db6UL, 0x2492UL, NULL, pfm_mck_pmc_check, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
/* pmc15 */ { PFM_REG_CONFIG , 0, 0x00000000fffffff0UL, 0xfUL, NULL, pfm_mck_pmc_check, {0UL,0UL, 0UL, 0UL}, {0UL,0UL, 0UL, 0UL}},
{ PFM_REG_END , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}}, /* end marker */
};
static pfm_reg_desc_t pfm_mck_pmd_desc[PMU_MAX_PMDS]={
/* pmd0 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(1),0UL, 0UL, 0UL}, {RDEP(10),0UL, 0UL, 0UL}},
/* pmd1 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(0),0UL, 0UL, 0UL}, {RDEP(10),0UL, 0UL, 0UL}},
/* pmd2 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(3)|RDEP(17),0UL, 0UL, 0UL}, {RDEP(11),0UL, 0UL, 0UL}},
/* pmd3 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(2)|RDEP(17),0UL, 0UL, 0UL}, {RDEP(11),0UL, 0UL, 0UL}},
/* pmd4 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {RDEP(4),0UL, 0UL, 0UL}},
/* pmd5 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {RDEP(5),0UL, 0UL, 0UL}},
/* pmd6 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {RDEP(6),0UL, 0UL, 0UL}},
/* pmd7 */ { PFM_REG_COUNTING, 0, 0x0UL, -1UL, NULL, NULL, {0UL,0UL, 0UL, 0UL}, {RDEP(7),0UL, 0UL, 0UL}},
/* pmd8 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(9)|RDEP(10)|RDEP(11)|RDEP(12)|RDEP(13)|RDEP(14)|RDEP(15)|RDEP(16),0UL, 0UL, 0UL}, {RDEP(12),0UL, 0UL, 0UL}},
/* pmd9 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(8)|RDEP(10)|RDEP(11)|RDEP(12)|RDEP(13)|RDEP(14)|RDEP(15)|RDEP(16),0UL, 0UL, 0UL}, {RDEP(12),0UL, 0UL, 0UL}},
/* pmd10 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(8)|RDEP(9)|RDEP(11)|RDEP(12)|RDEP(13)|RDEP(14)|RDEP(15)|RDEP(16),0UL, 0UL, 0UL}, {RDEP(12),0UL, 0UL, 0UL}},
/* pmd11 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(8)|RDEP(9)|RDEP(10)|RDEP(12)|RDEP(13)|RDEP(14)|RDEP(15)|RDEP(16),0UL, 0UL, 0UL}, {RDEP(12),0UL, 0UL, 0UL}},
/* pmd12 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(8)|RDEP(9)|RDEP(10)|RDEP(11)|RDEP(13)|RDEP(14)|RDEP(15)|RDEP(16),0UL, 0UL, 0UL}, {RDEP(12),0UL, 0UL, 0UL}},
/* pmd13 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(8)|RDEP(9)|RDEP(10)|RDEP(11)|RDEP(12)|RDEP(14)|RDEP(15)|RDEP(16),0UL, 0UL, 0UL}, {RDEP(12),0UL, 0UL, 0UL}},
/* pmd14 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(8)|RDEP(9)|RDEP(10)|RDEP(11)|RDEP(12)|RDEP(13)|RDEP(15)|RDEP(16),0UL, 0UL, 0UL}, {RDEP(12),0UL, 0UL, 0UL}},
/* pmd15 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(8)|RDEP(9)|RDEP(10)|RDEP(11)|RDEP(12)|RDEP(13)|RDEP(14)|RDEP(16),0UL, 0UL, 0UL}, {RDEP(12),0UL, 0UL, 0UL}},
/* pmd16 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(8)|RDEP(9)|RDEP(10)|RDEP(11)|RDEP(12)|RDEP(13)|RDEP(14)|RDEP(15),0UL, 0UL, 0UL}, {RDEP(12),0UL, 0UL, 0UL}},
/* pmd17 */ { PFM_REG_BUFFER , 0, 0x0UL, -1UL, NULL, NULL, {RDEP(2)|RDEP(3),0UL, 0UL, 0UL}, {RDEP(11),0UL, 0UL, 0UL}},
{ PFM_REG_END , 0, 0x0UL, -1UL, NULL, NULL, {0,}, {0,}}, /* end marker */
};
/*
* PMC reserved fields must have their power-up values preserved
*/
static int
pfm_mck_reserved(unsigned int cnum, unsigned long *val, struct pt_regs *regs)
{
unsigned long tmp1, tmp2, ival = *val;
/* remove reserved areas from user value */
tmp1 = ival & PMC_RSVD_MASK(cnum);
/* get reserved fields values */
tmp2 = PMC_DFL_VAL(cnum) & ~PMC_RSVD_MASK(cnum);
*val = tmp1 | tmp2;
DPRINT(("pmc[%d]=0x%lx, mask=0x%lx, reset=0x%lx, val=0x%lx\n",
cnum, ival, PMC_RSVD_MASK(cnum), PMC_DFL_VAL(cnum), *val));
return 0;
}
/*
* task can be NULL if the context is unloaded
*/
static int
pfm_mck_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnum, unsigned long *val, struct pt_regs *regs)
{
int ret = 0, check_case1 = 0;
unsigned long val8 = 0, val14 = 0, val13 = 0;
int is_loaded;
/* first preserve the reserved fields */
pfm_mck_reserved(cnum, val, regs);
/* sanity check */
if (ctx == NULL) return -EINVAL;
is_loaded = ctx->ctx_state == PFM_CTX_LOADED || ctx->ctx_state == PFM_CTX_MASKED;
/*
* we must clear the debug registers if pmc13 has a value which enables
* memory pipeline event constraints. In this case we need to clear the
* debug registers if they have not yet been accessed, which is required
* to avoid picking up stale state.
* PMC13 is "active" if:
* one of the pmc13.cfg_dbrpXX fields is different from 0x3
* AND
* the corresponding pmc13.ena_dbrpXX bit is set.
*/
DPRINT(("cnum=%u val=0x%lx, using_dbreg=%d loaded=%d\n", cnum, *val, ctx->ctx_fl_using_dbreg, is_loaded));
if (cnum == 13 && is_loaded
&& (*val & 0x1e00000000000UL) && (*val & 0x18181818UL) != 0x18181818UL && ctx->ctx_fl_using_dbreg == 0) {
DPRINT(("pmc[%d]=0x%lx has active pmc13 settings, clearing dbr\n", cnum, *val));
/* don't mix debug with perfmon */
if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL;
/*
* a count of 0 will mark the debug registers as in use and also
* ensure that they are properly cleared.
*/
ret = pfm_write_ibr_dbr(PFM_DATA_RR, ctx, NULL, 0, regs);
if (ret) return ret;
}
/*
* we must clear the (instruction) debug registers if any pmc14.ibrpX bit is enabled
* before they have been used (fl_using_dbreg==0), to avoid picking up stale information.
*/
if (cnum == 14 && is_loaded && ((*val & 0x2222UL) != 0x2222UL) && ctx->ctx_fl_using_dbreg == 0) {
DPRINT(("pmc[%d]=0x%lx has active pmc14 settings, clearing ibr\n", cnum, *val));
/* don't mix debug with perfmon */
if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL;
/*
* a count of 0 will mark the debug registers as in use and also
* ensure that they are properly cleared.
*/
ret = pfm_write_ibr_dbr(PFM_CODE_RR, ctx, NULL, 0, regs);
if (ret) return ret;
}
switch(cnum) {
case 4: *val |= 1UL << 23; /* force power enable bit */
break;
case 8: val8 = *val;
val13 = ctx->ctx_pmcs[13];
val14 = ctx->ctx_pmcs[14];
check_case1 = 1;
break;
case 13: val8 = ctx->ctx_pmcs[8];
val13 = *val;
val14 = ctx->ctx_pmcs[14];
check_case1 = 1;
break;
case 14: val8 = ctx->ctx_pmcs[8];
val13 = ctx->ctx_pmcs[13];
val14 = *val;
check_case1 = 1;
break;
}
/* check for an illegal configuration which can produce inconsistencies
* when tagging i-side events in the L1D and L2 caches
*/
if (check_case1) {
ret = ((val13 >> 45) & 0xf) == 0
&& ((val8 & 0x1) == 0)
&& ((((val14>>1) & 0x3) == 0x2 || ((val14>>1) & 0x3) == 0x0)
||(((val14>>4) & 0x3) == 0x2 || ((val14>>4) & 0x3) == 0x0));
if (ret) DPRINT((KERN_DEBUG "perfmon: failure check_case1\n"));
}
return ret ? -EINVAL : 0;
}
/*
* impl_pmcs, impl_pmds are computed at runtime to minimize errors!
*/
static pmu_config_t pmu_conf_mck={
.pmu_name = "Itanium 2",
.pmu_family = 0x1f,
.flags = PFM_PMU_IRQ_RESEND,
.ovfl_val = (1UL << 47) - 1,
.pmd_desc = pfm_mck_pmd_desc,
.pmc_desc = pfm_mck_pmc_desc,
.num_ibrs = 8,
.num_dbrs = 8,
.use_rr_dbregs = 1 /* debug registers are used for range restrictions */
};
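
The check_case1 expression in pfm_mck_pmc_check() is easier to audit unpacked. A hypothetical rewrite with the same shifts and masks (readability only, no behavioral change intended):

static int mck_check_case1(unsigned long val8, unsigned long val13,
			   unsigned long val14)
{
	unsigned int f13 = (val13 >> 45) & 0xf;	/* pmc13 bits 45..48 */
	unsigned int a   = (val14 >> 1) & 0x3;	/* pmc14 bits 1..2 */
	unsigned int b   = (val14 >> 4) & 0x3;	/* pmc14 bits 4..5 */

	return f13 == 0 && (val8 & 0x1) == 0 &&
	       (a == 0x2 || a == 0x0 || b == 0x2 || b == 0x0);
}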


@@ -1,270 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* This file contains the Montecito PMU register description tables
* and pmc checker used by perfmon.c.
*
* Copyright (c) 2005-2006 Hewlett-Packard Development Company, L.P.
* Contributed by Stephane Eranian <eranian@hpl.hp.com>
*/
static int pfm_mont_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnum, unsigned long *val, struct pt_regs *regs);
#define RDEP_MONT_ETB (RDEP(38)|RDEP(39)|RDEP(48)|RDEP(49)|RDEP(50)|RDEP(51)|RDEP(52)|RDEP(53)|RDEP(54)|\
RDEP(55)|RDEP(56)|RDEP(57)|RDEP(58)|RDEP(59)|RDEP(60)|RDEP(61)|RDEP(62)|RDEP(63))
#define RDEP_MONT_DEAR (RDEP(32)|RDEP(33)|RDEP(36))
#define RDEP_MONT_IEAR (RDEP(34)|RDEP(35))
static pfm_reg_desc_t pfm_mont_pmc_desc[PMU_MAX_PMCS]={
/* pmc0 */ { PFM_REG_CONTROL , 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc1 */ { PFM_REG_CONTROL , 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc2 */ { PFM_REG_CONTROL , 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc3 */ { PFM_REG_CONTROL , 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc4 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(4),0, 0, 0}, {0,0, 0, 0}},
/* pmc5 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(5),0, 0, 0}, {0,0, 0, 0}},
/* pmc6 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(6),0, 0, 0}, {0,0, 0, 0}},
/* pmc7 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(7),0, 0, 0}, {0,0, 0, 0}},
/* pmc8 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(8),0, 0, 0}, {0,0, 0, 0}},
/* pmc9 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(9),0, 0, 0}, {0,0, 0, 0}},
/* pmc10 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(10),0, 0, 0}, {0,0, 0, 0}},
/* pmc11 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(11),0, 0, 0}, {0,0, 0, 0}},
/* pmc12 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(12),0, 0, 0}, {0,0, 0, 0}},
/* pmc13 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(13),0, 0, 0}, {0,0, 0, 0}},
/* pmc14 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(14),0, 0, 0}, {0,0, 0, 0}},
/* pmc15 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(15),0, 0, 0}, {0,0, 0, 0}},
/* pmc16 */ { PFM_REG_NOTIMPL, },
/* pmc17 */ { PFM_REG_NOTIMPL, },
/* pmc18 */ { PFM_REG_NOTIMPL, },
/* pmc19 */ { PFM_REG_NOTIMPL, },
/* pmc20 */ { PFM_REG_NOTIMPL, },
/* pmc21 */ { PFM_REG_NOTIMPL, },
/* pmc22 */ { PFM_REG_NOTIMPL, },
/* pmc23 */ { PFM_REG_NOTIMPL, },
/* pmc24 */ { PFM_REG_NOTIMPL, },
/* pmc25 */ { PFM_REG_NOTIMPL, },
/* pmc26 */ { PFM_REG_NOTIMPL, },
/* pmc27 */ { PFM_REG_NOTIMPL, },
/* pmc28 */ { PFM_REG_NOTIMPL, },
/* pmc29 */ { PFM_REG_NOTIMPL, },
/* pmc30 */ { PFM_REG_NOTIMPL, },
/* pmc31 */ { PFM_REG_NOTIMPL, },
/* pmc32 */ { PFM_REG_CONFIG, 0, 0x30f01ffffffffffUL, 0x30f01ffffffffffUL, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc33 */ { PFM_REG_CONFIG, 0, 0x0, 0x1ffffffffffUL, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc34 */ { PFM_REG_CONFIG, 0, 0xf01ffffffffffUL, 0xf01ffffffffffUL, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc35 */ { PFM_REG_CONFIG, 0, 0x0, 0x1ffffffffffUL, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc36 */ { PFM_REG_CONFIG, 0, 0xfffffff0, 0xf, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc37 */ { PFM_REG_MONITOR, 4, 0x0, 0x3fff, NULL, pfm_mont_pmc_check, {RDEP_MONT_IEAR, 0, 0, 0}, {0, 0, 0, 0}},
/* pmc38 */ { PFM_REG_CONFIG, 0, 0xdb6, 0x2492, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc39 */ { PFM_REG_MONITOR, 6, 0x0, 0xffcf, NULL, pfm_mont_pmc_check, {RDEP_MONT_ETB,0, 0, 0}, {0,0, 0, 0}},
/* pmc40 */ { PFM_REG_MONITOR, 6, 0x2000000, 0xf01cf, NULL, pfm_mont_pmc_check, {RDEP_MONT_DEAR,0, 0, 0}, {0,0, 0, 0}},
/* pmc41 */ { PFM_REG_CONFIG, 0, 0x00002078fefefefeUL, 0x1e00018181818UL, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc42 */ { PFM_REG_MONITOR, 6, 0x0, 0x7ff4f, NULL, pfm_mont_pmc_check, {RDEP_MONT_ETB,0, 0, 0}, {0,0, 0, 0}},
{ PFM_REG_END , 0, 0x0, -1, NULL, NULL, {0,}, {0,}}, /* end marker */
};
static pfm_reg_desc_t pfm_mont_pmd_desc[PMU_MAX_PMDS]={
/* pmd0 */ { PFM_REG_NOTIMPL, },
/* pmd1 */ { PFM_REG_NOTIMPL, },
/* pmd2 */ { PFM_REG_NOTIMPL, },
/* pmd3 */ { PFM_REG_NOTIMPL, },
/* pmd4 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(4),0, 0, 0}},
/* pmd5 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(5),0, 0, 0}},
/* pmd6 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(6),0, 0, 0}},
/* pmd7 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(7),0, 0, 0}},
/* pmd8 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(8),0, 0, 0}},
/* pmd9 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(9),0, 0, 0}},
/* pmd10 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(10),0, 0, 0}},
/* pmd11 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(11),0, 0, 0}},
/* pmd12 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(12),0, 0, 0}},
/* pmd13 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(13),0, 0, 0}},
/* pmd14 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(14),0, 0, 0}},
/* pmd15 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(15),0, 0, 0}},
/* pmd16 */ { PFM_REG_NOTIMPL, },
/* pmd17 */ { PFM_REG_NOTIMPL, },
/* pmd18 */ { PFM_REG_NOTIMPL, },
/* pmd19 */ { PFM_REG_NOTIMPL, },
/* pmd20 */ { PFM_REG_NOTIMPL, },
/* pmd21 */ { PFM_REG_NOTIMPL, },
/* pmd22 */ { PFM_REG_NOTIMPL, },
/* pmd23 */ { PFM_REG_NOTIMPL, },
/* pmd24 */ { PFM_REG_NOTIMPL, },
/* pmd25 */ { PFM_REG_NOTIMPL, },
/* pmd26 */ { PFM_REG_NOTIMPL, },
/* pmd27 */ { PFM_REG_NOTIMPL, },
/* pmd28 */ { PFM_REG_NOTIMPL, },
/* pmd29 */ { PFM_REG_NOTIMPL, },
/* pmd30 */ { PFM_REG_NOTIMPL, },
/* pmd31 */ { PFM_REG_NOTIMPL, },
/* pmd32 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP(33)|RDEP(36),0, 0, 0}, {RDEP(40),0, 0, 0}},
/* pmd33 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP(32)|RDEP(36),0, 0, 0}, {RDEP(40),0, 0, 0}},
/* pmd34 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP(35),0, 0, 0}, {RDEP(37),0, 0, 0}},
/* pmd35 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP(34),0, 0, 0}, {RDEP(37),0, 0, 0}},
/* pmd36 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP(32)|RDEP(33),0, 0, 0}, {RDEP(40),0, 0, 0}},
/* pmd37 */ { PFM_REG_NOTIMPL, },
/* pmd38 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd39 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd40 */ { PFM_REG_NOTIMPL, },
/* pmd41 */ { PFM_REG_NOTIMPL, },
/* pmd42 */ { PFM_REG_NOTIMPL, },
/* pmd43 */ { PFM_REG_NOTIMPL, },
/* pmd44 */ { PFM_REG_NOTIMPL, },
/* pmd45 */ { PFM_REG_NOTIMPL, },
/* pmd46 */ { PFM_REG_NOTIMPL, },
/* pmd47 */ { PFM_REG_NOTIMPL, },
/* pmd48 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd49 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd50 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd51 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd52 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd53 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd54 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd55 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd56 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd57 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd58 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd59 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd60 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd61 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd62 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd63 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
{ PFM_REG_END , 0, 0x0, -1, NULL, NULL, {0,}, {0,}}, /* end marker */
};
/*
* PMC reserved fields must have their power-up values preserved
*/
static int
pfm_mont_reserved(unsigned int cnum, unsigned long *val, struct pt_regs *regs)
{
unsigned long tmp1, tmp2, ival = *val;
/* remove reserved areas from user value */
tmp1 = ival & PMC_RSVD_MASK(cnum);
/* get reserved fields values */
tmp2 = PMC_DFL_VAL(cnum) & ~PMC_RSVD_MASK(cnum);
*val = tmp1 | tmp2;
DPRINT(("pmc[%d]=0x%lx, mask=0x%lx, reset=0x%lx, val=0x%lx\n",
cnum, ival, PMC_RSVD_MASK(cnum), PMC_DFL_VAL(cnum), *val));
return 0;
}
/*
* task can be NULL if the context is unloaded
*/
static int
pfm_mont_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnum, unsigned long *val, struct pt_regs *regs)
{
int ret = 0;
unsigned long val32 = 0, val38 = 0, val41 = 0;
unsigned long tmpval;
int check_case1 = 0;
int is_loaded;
/* first preserve the reserved fields */
pfm_mont_reserved(cnum, val, regs);
tmpval = *val;
/* sanity check */
if (ctx == NULL) return -EINVAL;
is_loaded = ctx->ctx_state == PFM_CTX_LOADED || ctx->ctx_state == PFM_CTX_MASKED;
/*
* we must clear the debug registers if pmc41 has a value which enables
* memory pipeline event constraints. In this case we need to clear the
* debug registers if they have not yet been accessed, which is required
* to avoid picking up stale state.
* PMC41 is "active" if:
* one of the pmc41.cfg_dtagXX fields is different from 0x3
* AND
* the corresponding pmc41.en_dbrpXX bit is set.
* AND
* ctx_fl_using_dbreg == 0 (i.e., dbr not yet used)
*/
DPRINT(("cnum=%u val=0x%lx, using_dbreg=%d loaded=%d\n", cnum, tmpval, ctx->ctx_fl_using_dbreg, is_loaded));
if (cnum == 41 && is_loaded
&& (tmpval & 0x1e00000000000UL) && (tmpval & 0x18181818UL) != 0x18181818UL && ctx->ctx_fl_using_dbreg == 0) {
DPRINT(("pmc[%d]=0x%lx has active pmc41 settings, clearing dbr\n", cnum, tmpval));
/* don't mix debug with perfmon */
if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL;
/*
* a count of 0 will mark the debug registers as in use and also
* ensure that they are properly cleared.
*/
ret = pfm_write_ibr_dbr(PFM_DATA_RR, ctx, NULL, 0, regs);
if (ret) return ret;
}
/*
* we must clear the (instruction) debug registers if:
* pmc38.ig_ibrpX is 0 (enabled)
* AND
* ctx_fl_using_dbreg == 0 (i.e., dbr not yet used)
*/
if (cnum == 38 && is_loaded && ((tmpval & 0x492UL) != 0x492UL) && ctx->ctx_fl_using_dbreg == 0) {
DPRINT(("pmc38=0x%lx has active pmc38 settings, clearing ibr\n", tmpval));
/* don't mix debug with perfmon */
if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL;
/*
* a count of 0 will mark the debug registers as in use and also
* ensure that they are properly cleared.
*/
ret = pfm_write_ibr_dbr(PFM_CODE_RR, ctx, NULL, 0, regs);
if (ret) return ret;
}
switch(cnum) {
case 32: val32 = *val;
val38 = ctx->ctx_pmcs[38];
val41 = ctx->ctx_pmcs[41];
check_case1 = 1;
break;
case 38: val38 = *val;
val32 = ctx->ctx_pmcs[32];
val41 = ctx->ctx_pmcs[41];
check_case1 = 1;
break;
case 41: val41 = *val;
val32 = ctx->ctx_pmcs[32];
val38 = ctx->ctx_pmcs[38];
check_case1 = 1;
break;
}
/* check for an illegal configuration which can produce inconsistencies
* when tagging i-side events in the L1D and L2 caches
*/
if (check_case1) {
ret = (((val41 >> 45) & 0xf) == 0 && ((val32>>57) & 0x1) == 0)
&& ((((val38>>1) & 0x3) == 0x2 || ((val38>>1) & 0x3) == 0)
|| (((val38>>4) & 0x3) == 0x2 || ((val38>>4) & 0x3) == 0));
if (ret) {
DPRINT(("invalid config pmc38=0x%lx pmc41=0x%lx pmc32=0x%lx\n", val38, val41, val32));
return -EINVAL;
}
}
*val = tmpval;
return 0;
}
/*
* impl_pmcs, impl_pmds are computed at runtime to minimize errors!
*/
static pmu_config_t pmu_conf_mont={
.pmu_name = "Montecito",
.pmu_family = 0x20,
.flags = PFM_PMU_IRQ_RESEND,
.ovfl_val = (1UL << 47) - 1,
.pmd_desc = pfm_mont_pmd_desc,
.pmc_desc = pfm_mont_pmc_desc,
.num_ibrs = 8,
.num_dbrs = 8,
.use_rr_dbregs = 1 /* debug registers are used for range restrictions */
};


@@ -1,10 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_OPROFILE) += oprofile.o
DRIVER_OBJS := $(addprefix ../../../drivers/oprofile/, \
oprof.o cpu_buffer.o buffer_sync.o \
event_buffer.o oprofile_files.o \
oprofilefs.o oprofile_stats.o \
timer_int.o )
oprofile-y := $(DRIVER_OBJS) init.o backtrace.o


@@ -1,131 +0,0 @@
/**
* @file backtrace.c
*
* @remark Copyright 2004 Silicon Graphics Inc. All Rights Reserved.
* @remark Read the file COPYING
*
* @author Greg Banks <gnb@melbourne.sgi.com>
* @author Keith Owens <kaos@melbourne.sgi.com>
* Based on work done for the ia64 port of the SGI kernprof patch, which is
* Copyright (c) 2003-2004 Silicon Graphics Inc. All Rights Reserved.
*/
#include <linux/oprofile.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <asm/ptrace.h>
/*
* For IA64 we need to perform a complex little dance to get both
* the struct pt_regs and a synthetic struct switch_stack in place
* to allow the unwind code to work. This dance requires the code using
* the unwinder to be called from a function invoked by unw_init_running().
* There we only get a single void* data pointer, so use this struct
* to hold all the data we need during the unwind.
*/
typedef struct
{
unsigned int depth;
struct pt_regs *regs;
struct unw_frame_info frame;
unsigned long *prev_pfs_loc; /* state for WAR for old spinlock ool code */
} ia64_backtrace_t;
/* Returns non-zero if the PC is in the Interrupt Vector Table */
static __inline__ int in_ivt_code(unsigned long pc)
{
extern char ia64_ivt[];
return (pc >= (u_long)ia64_ivt && pc < (u_long)ia64_ivt+32768);
}
/*
* Unwind to next stack frame.
*/
static __inline__ int next_frame(ia64_backtrace_t *bt)
{
/*
* Avoid unsightly console message from unw_unwind() when attempting
* to unwind through the Interrupt Vector Table which has no unwind
* information.
*/
if (in_ivt_code(bt->frame.ip))
return 0;
/*
* WAR for spinlock contention from leaf functions. ia64_spinlock_contention_pre3_4
* has ar.pfs == r0. Leaf functions do not modify ar.pfs so ar.pfs remains
* as 0, stopping the backtrace. Record the previous ar.pfs when the current
* IP is in ia64_spinlock_contention_pre3_4 then unwind, if pfs_loc has not changed
* after unwind then use pt_regs.ar_pfs which is where the real ar.pfs is for
* leaf functions.
*/
if (bt->prev_pfs_loc && bt->regs && bt->frame.pfs_loc == bt->prev_pfs_loc)
bt->frame.pfs_loc = &bt->regs->ar_pfs;
bt->prev_pfs_loc = NULL;
return unw_unwind(&bt->frame) == 0;
}
static void do_ia64_backtrace(struct unw_frame_info *info, void *vdata)
{
ia64_backtrace_t *bt = vdata;
struct switch_stack *sw;
int count = 0;
u_long pc, sp;
sw = (struct switch_stack *)(info+1);
/* padding from unw_init_running */
sw = (struct switch_stack *)(((unsigned long)sw + 15) & ~15);
unw_init_frame_info(&bt->frame, current, sw);
/* skip over interrupt frame and oprofile calls */
do {
unw_get_sp(&bt->frame, &sp);
if (sp >= (u_long)bt->regs)
break;
if (!next_frame(bt))
return;
} while (count++ < 200);
/* finally, grab the actual sample */
while (bt->depth-- && next_frame(bt)) {
unw_get_ip(&bt->frame, &pc);
oprofile_add_trace(pc);
if (unw_is_intr_frame(&bt->frame)) {
/*
* Interrupt received on kernel stack; this can
* happen when timer interrupt fires while processing
* a softirq from the tail end of a hardware interrupt
* which interrupted a system call. Don't laugh, it
* happens! Splice the backtrace into two parts to
* avoid spurious cycles in the gprof output.
*/
/* TODO: split rather than drop the 2nd half */
break;
}
}
}
void
ia64_backtrace(struct pt_regs * const regs, unsigned int depth)
{
ia64_backtrace_t bt;
unsigned long flags;
/*
* On IA64 there is little hope of getting backtraces from
* user space programs -- the problems of getting the unwind
* information from arbitrary user programs are extreme.
*/
if (user_mode(regs))
return;
bt.depth = depth;
bt.regs = regs;
bt.prev_pfs_loc = NULL;
local_irq_save(flags);
unw_init_running(do_ia64_backtrace, &bt);
local_irq_restore(flags);
}
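
The switch_stack fix-up in do_ia64_backtrace() uses the standard round-up-to-alignment idiom, (addr + align-1) & ~(align-1) with a 16-byte alignment. Spelled out, with a worked example:

/* Round an address up to the next 16-byte boundary, as done for the
 * switch_stack pointer above. */
static inline unsigned long align16(unsigned long addr)
{
	return (addr + 15) & ~15UL;
}
/* align16(0x1001) == 0x1010; an already aligned 0x1010 stays 0x1010 */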


@@ -1,28 +0,0 @@
/**
* @file init.c
*
* @remark Copyright 2002 OProfile authors
* @remark Read the file COPYING
*
* @author John Levon <levon@movementarian.org>
*/
#include <linux/kernel.h>
#include <linux/oprofile.h>
#include <linux/init.h>
#include <linux/errno.h>
extern int perfmon_init(struct oprofile_operations *ops);
extern void perfmon_exit(void);
extern void ia64_backtrace(struct pt_regs * const regs, unsigned int depth);
int __init oprofile_arch_init(struct oprofile_operations *ops)
{
ops->backtrace = ia64_backtrace;
return -ENODEV;
}
void oprofile_arch_exit(void)
{
}


@@ -30,7 +30,6 @@ config MICROBLAZE
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
select HAVE_OPROFILE
select HAVE_PCI
select IRQ_DOMAIN
select XILINX_INTC


@@ -54,8 +54,6 @@ core-y += arch/microblaze/kernel/
core-y += arch/microblaze/mm/
core-$(CONFIG_PCI) += arch/microblaze/pci/
drivers-$(CONFIG_OPROFILE) += arch/microblaze/oprofile/
boot := arch/microblaze/boot
# Are we making a simpleImage.<boardname> target? If so, crack out the boardname


@@ -1,14 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
#
# arch/microblaze/oprofile/Makefile
#
obj-$(CONFIG_OPROFILE) += oprofile.o
DRIVER_OBJS := $(addprefix ../../../drivers/oprofile/, \
oprof.o cpu_buffer.o buffer_sync.o \
event_buffer.o oprofile_files.o \
oprofilefs.o oprofile_stats.o \
timer_int.o )
oprofile-y := $(DRIVER_OBJS) microblaze_oprofile.o


@@ -1,22 +0,0 @@
/*
* Microblaze oprofile code
*
* Copyright (C) 2009 Michal Simek <monstr@monstr.eu>
* Copyright (C) 2009 PetaLogix
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/oprofile.h>
#include <linux/init.h>
int __init oprofile_arch_init(struct oprofile_operations *ops)
{
return -1;
}
void oprofile_arch_exit(void)
{
}


@@ -74,7 +74,6 @@ config MIPS
select HAVE_LD_DEAD_CODE_DATA_ELIMINATION
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI
select HAVE_OPROFILE
select HAVE_PERF_EVENTS
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_RSEQ
@@ -2845,7 +2844,7 @@ config NODES_SHIFT
config HW_PERF_EVENTS
bool "Enable hardware performance counter support for perf events"
depends on PERF_EVENTS && !OPROFILE && (CPU_MIPS32 || CPU_MIPS64 || CPU_R10000 || CPU_SB1 || CPU_CAVIUM_OCTEON || CPU_XLP || CPU_LOONGSON64)
depends on PERF_EVENTS && (CPU_MIPS32 || CPU_MIPS64 || CPU_R10000 || CPU_SB1 || CPU_CAVIUM_OCTEON || CPU_XLP || CPU_LOONGSON64)
default y
help
Enable hardware performance counter support for perf events. If


@@ -316,7 +316,6 @@ libs-$(CONFIG_MIPS_FP_SUPPORT) += arch/mips/math-emu/
core-y += arch/mips/
drivers-y += arch/mips/crypto/
drivers-$(CONFIG_OPROFILE) += arch/mips/oprofile/
# suspend and hibernation support
drivers-$(CONFIG_PM) += arch/mips/power/


@@ -22,7 +22,6 @@ CONFIG_MIPS32_N32=y
# CONFIG_SUSPEND is not set
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION="/dev/sda3"
CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y


@@ -14,7 +14,6 @@ CONFIG_SGI_IP32=y
CONFIG_PCI=y
CONFIG_MIPS32_O32=y
CONFIG_MIPS32_N32=y
CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_PARTITION_ADVANCED=y


@@ -21,7 +21,6 @@ CONFIG_MIPS32_O32=y
CONFIG_MIPS32_N32=y
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION="/dev/hda3"
CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y


@@ -17,7 +17,6 @@ CONFIG_PCCARD=m
CONFIG_YENTA=m
CONFIG_PD6729=m
CONFIG_I82092=m
CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y


@@ -30,7 +30,6 @@ CONFIG_PM=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE=y
CONFIG_CPUFREQ_DT=y
CONFIG_OPROFILE=y
CONFIG_JUMP_LABEL=y
# CONFIG_STACKPROTECTOR is not set
# CONFIG_BLK_DEV_BSG is not set


@@ -56,15 +56,6 @@ extern int mach_i8259_irq(void);
(*(volatile u32 *)((char *)CKSEG1ADDR(LOONGSON_REG_BASE) + (x)))
#define LOONGSON_IRQ_BASE 32
#define LOONGSON2_PERFCNT_IRQ (MIPS_CPU_IRQ_BASE + 6) /* cpu perf counter */
#include <linux/interrupt.h>
static inline void do_perfcnt_IRQ(void)
{
#if IS_ENABLED(CONFIG_OPROFILE)
do_IRQ(LOONGSON2_PERFCNT_IRQ);
#endif
}
#define LOONGSON_FLASH_BASE 0x1c000000
#define LOONGSON_FLASH_SIZE 0x02000000 /* 32M */


@@ -26,7 +26,7 @@ asmlinkage void mach_irq_dispatch(unsigned int pending)
if (pending & CAUSEF_IP7)
do_IRQ(MIPS_CPU_IRQ_BASE + 7);
else if (pending & CAUSEF_IP6) /* perf counter overflow */
do_perfcnt_IRQ();
return;
else if (pending & CAUSEF_IP5)
i8259_irqdispatch();
else if (pending & CAUSEF_IP2)


@@ -75,7 +75,6 @@ void mach_irq_dispatch(unsigned int pending)
if (pending & CAUSEF_IP7)
do_IRQ(LOONGSON_TIMER_IRQ);
else if (pending & CAUSEF_IP6) { /* North Bridge, Perf counter */
do_perfcnt_IRQ();
bonito_irqdispatch();
} else if (pending & CAUSEF_IP3) /* CPU UART */
do_IRQ(LOONGSON_UART_IRQ);


@@ -1,18 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_OPROFILE) += oprofile.o
DRIVER_OBJS = $(addprefix ../../../drivers/oprofile/, \
oprof.o cpu_buffer.o buffer_sync.o \
event_buffer.o oprofile_files.o \
oprofilefs.o oprofile_stats.o \
timer_int.o )
oprofile-y := $(DRIVER_OBJS) common.o backtrace.o
oprofile-$(CONFIG_CPU_MIPS32) += op_model_mipsxx.o
oprofile-$(CONFIG_CPU_MIPS64) += op_model_mipsxx.o
oprofile-$(CONFIG_CPU_R10000) += op_model_mipsxx.o
oprofile-$(CONFIG_CPU_SB1) += op_model_mipsxx.o
oprofile-$(CONFIG_CPU_XLR) += op_model_mipsxx.o
oprofile-$(CONFIG_CPU_LOONGSON2EF) += op_model_loongson2.o
oprofile-$(CONFIG_CPU_LOONGSON64) += op_model_loongson3.o


@@ -1,177 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/oprofile.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/uaccess.h>
#include <asm/ptrace.h>
#include <asm/stacktrace.h>
#include <linux/stacktrace.h>
#include <linux/kernel.h>
#include <asm/sections.h>
#include <asm/inst.h>
struct stackframe {
unsigned long sp;
unsigned long pc;
unsigned long ra;
};
static inline int get_mem(unsigned long addr, unsigned long *result)
{
unsigned long *address = (unsigned long *) addr;
if (!access_ok(address, sizeof(unsigned long)))
return -1;
if (__copy_from_user_inatomic(result, address, sizeof(unsigned long)))
return -3;
return 0;
}
/*
* These two instruction helpers were taken from process.c
*/
static inline int is_ra_save_ins(union mips_instruction *ip)
{
/* sw / sd $ra, offset($sp) */
return (ip->i_format.opcode == sw_op || ip->i_format.opcode == sd_op)
&& ip->i_format.rs == 29 && ip->i_format.rt == 31;
}
static inline int is_sp_move_ins(union mips_instruction *ip)
{
/* addiu/daddiu sp,sp,-imm */
if (ip->i_format.rs != 29 || ip->i_format.rt != 29)
return 0;
if (ip->i_format.opcode == addiu_op || ip->i_format.opcode == daddiu_op)
return 1;
return 0;
}
/*
* Looks for specific instructions that mark the end of a function.
* This usually means we ran into the code area of the previous function.
*/
static inline int is_end_of_function_marker(union mips_instruction *ip)
{
/* jr ra */
if (ip->r_format.func == jr_op && ip->r_format.rs == 31)
return 1;
/* lui gp */
if (ip->i_format.opcode == lui_op && ip->i_format.rt == 28)
return 1;
return 0;
}
/*
* TODO for userspace stack unwinding:
* - handle cases where the stack is adjusted inside a function
* (generally doesn't happen)
* - find optimal value for max_instr_check
* - try to find a better way to handle leaf functions
*/
static inline int unwind_user_frame(struct stackframe *old_frame,
const unsigned int max_instr_check)
{
struct stackframe new_frame = *old_frame;
off_t ra_offset = 0;
size_t stack_size = 0;
unsigned long addr;
if (old_frame->pc == 0 || old_frame->sp == 0 || old_frame->ra == 0)
return -9;
for (addr = new_frame.pc; (addr + max_instr_check > new_frame.pc)
&& (!ra_offset || !stack_size); --addr) {
union mips_instruction ip;
if (get_mem(addr, (unsigned long *) &ip))
return -11;
if (is_sp_move_ins(&ip)) {
int stack_adjustment = ip.i_format.simmediate;
if (stack_adjustment > 0)
/* This marks the end of the previous function,
which means we overran. */
break;
stack_size = (unsigned long) stack_adjustment;
} else if (is_ra_save_ins(&ip)) {
int ra_slot = ip.i_format.simmediate;
if (ra_slot < 0)
/* This shouldn't happen. */
break;
ra_offset = ra_slot;
} else if (is_end_of_function_marker(&ip))
break;
}
if (!ra_offset || !stack_size)
goto done;
if (ra_offset) {
new_frame.ra = old_frame->sp + ra_offset;
if (get_mem(new_frame.ra, &(new_frame.ra)))
return -13;
}
if (stack_size) {
new_frame.sp = old_frame->sp + stack_size;
if (get_mem(new_frame.sp, &(new_frame.sp)))
return -14;
}
if (new_frame.sp > old_frame->sp)
return -2;
done:
new_frame.pc = old_frame->ra;
*old_frame = new_frame;
return 0;
}
static inline void do_user_backtrace(unsigned long low_addr,
struct stackframe *frame,
unsigned int depth)
{
const unsigned int max_instr_check = 512;
const unsigned long high_addr = low_addr + THREAD_SIZE;
while (depth-- && !unwind_user_frame(frame, max_instr_check)) {
oprofile_add_trace(frame->ra);
if (frame->sp < low_addr || frame->sp > high_addr)
break;
}
}
#ifndef CONFIG_KALLSYMS
static inline void do_kernel_backtrace(unsigned long low_addr,
struct stackframe *frame,
unsigned int depth) { }
#else
static inline void do_kernel_backtrace(unsigned long low_addr,
struct stackframe *frame,
unsigned int depth)
{
while (depth-- && frame->pc) {
frame->pc = unwind_stack_by_address(low_addr,
&(frame->sp),
frame->pc,
&(frame->ra));
oprofile_add_trace(frame->ra);
}
}
#endif
void notrace op_mips_backtrace(struct pt_regs *const regs, unsigned int depth)
{
struct stackframe frame = { .sp = regs->regs[29],
.pc = regs->cp0_epc,
.ra = regs->regs[31] };
const int userspace = user_mode(regs);
const unsigned long low_addr = ALIGN(frame.sp, THREAD_SIZE);
if (userspace)
do_user_backtrace(low_addr, &frame, depth);
else
do_kernel_backtrace(low_addr, &frame, depth);
}
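
The instruction matchers in this unwinder lean on fixed MIPS ABI register numbers: 29 is $sp and 31 is $ra, so is_ra_save_ins() recognizes the prologue store of the return address and is_sp_move_ins() the stack adjustment. Decoding a typical prologue store by hand (a worked example, not code from the file):

unsigned int insn = 0xafbf001c;		/* sw $ra, 28($sp) */
unsigned int opcode = insn >> 26;	/* 0x2b, i.e. sw_op */
unsigned int rs = (insn >> 21) & 0x1f;	/* 29, i.e. $sp */
unsigned int rt = (insn >> 16) & 0x1f;	/* 31, i.e. $ra */
int imm = (short)(insn & 0xffff);	/* 28, the save slot offset */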


@@ -1,147 +0,0 @@
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 2004, 2005 Ralf Baechle
* Copyright (C) 2005 MIPS Technologies, Inc.
*/
#include <linux/compiler.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/oprofile.h>
#include <linux/smp.h>
#include <asm/cpu-info.h>
#include <asm/cpu-type.h>
#include "op_impl.h"
extern struct op_mips_model op_model_mipsxx_ops __weak;
extern struct op_mips_model op_model_loongson2_ops __weak;
extern struct op_mips_model op_model_loongson3_ops __weak;
static struct op_mips_model *model;
static struct op_counter_config ctr[20];
static int op_mips_setup(void)
{
/* Pre-compute the values to stuff in the hardware registers. */
model->reg_setup(ctr);
/* Configure the registers on all cpus. */
on_each_cpu(model->cpu_setup, NULL, 1);
return 0;
}
static int op_mips_create_files(struct dentry *root)
{
int i;
for (i = 0; i < model->num_counters; ++i) {
struct dentry *dir;
char buf[4];
snprintf(buf, sizeof buf, "%d", i);
dir = oprofilefs_mkdir(root, buf);
oprofilefs_create_ulong(dir, "enabled", &ctr[i].enabled);
oprofilefs_create_ulong(dir, "event", &ctr[i].event);
oprofilefs_create_ulong(dir, "count", &ctr[i].count);
oprofilefs_create_ulong(dir, "kernel", &ctr[i].kernel);
oprofilefs_create_ulong(dir, "user", &ctr[i].user);
oprofilefs_create_ulong(dir, "exl", &ctr[i].exl);
/* Dummy. */
oprofilefs_create_ulong(dir, "unit_mask", &ctr[i].unit_mask);
}
return 0;
}
static int op_mips_start(void)
{
on_each_cpu(model->cpu_start, NULL, 1);
return 0;
}
static void op_mips_stop(void)
{
/* Disable performance monitoring for all counters. */
on_each_cpu(model->cpu_stop, NULL, 1);
}
int __init oprofile_arch_init(struct oprofile_operations *ops)
{
struct op_mips_model *lmodel = NULL;
int res;
switch (boot_cpu_type()) {
case CPU_5KC:
case CPU_M14KC:
case CPU_M14KEC:
case CPU_20KC:
case CPU_24K:
case CPU_25KF:
case CPU_34K:
case CPU_1004K:
case CPU_74K:
case CPU_1074K:
case CPU_INTERAPTIV:
case CPU_PROAPTIV:
case CPU_P5600:
case CPU_I6400:
case CPU_M5150:
case CPU_LOONGSON32:
case CPU_SB1:
case CPU_SB1A:
case CPU_R10000:
case CPU_R12000:
case CPU_R14000:
case CPU_R16000:
case CPU_XLR:
lmodel = &op_model_mipsxx_ops;
break;
case CPU_LOONGSON2EF:
lmodel = &op_model_loongson2_ops;
break;
case CPU_LOONGSON64:
lmodel = &op_model_loongson3_ops;
break;
}
/*
* Always set the backtrace. This allows unsupported CPU types to still
* use timer-based oprofile.
*/
ops->backtrace = op_mips_backtrace;
if (!lmodel)
return -ENODEV;
res = lmodel->init();
if (res)
return res;
model = lmodel;
ops->create_files = op_mips_create_files;
ops->setup = op_mips_setup;
//ops->shutdown = op_mips_shutdown;
ops->start = op_mips_start;
ops->stop = op_mips_stop;
ops->cpu_type = lmodel->cpu_type;
printk(KERN_INFO "oprofile: using %s performance monitoring.\n",
lmodel->cpu_type);
return 0;
}
void oprofile_arch_exit(void)
{
if (model)
model->exit();
}


@@ -1,41 +0,0 @@
/**
* @file arch/mips/oprofile/op_impl.h
*
* @remark Copyright 2002 OProfile authors
* @remark Read the file COPYING
*
* @author Richard Henderson <rth@twiddle.net>
*/
#ifndef OP_IMPL_H
#define OP_IMPL_H 1
extern int (*perf_irq)(void);
/* Per-counter configuration as set via oprofilefs. */
struct op_counter_config {
unsigned long enabled;
unsigned long event;
unsigned long count;
/* Dummies because I am too lazy to hack the userspace tools. */
unsigned long kernel;
unsigned long user;
unsigned long exl;
unsigned long unit_mask;
};
/* Per-architecture configure and hooks. */
struct op_mips_model {
void (*reg_setup) (struct op_counter_config *);
void (*cpu_setup) (void *dummy);
int (*init)(void);
void (*exit)(void);
void (*cpu_start)(void *args);
void (*cpu_stop)(void *args);
char *cpu_type;
unsigned char num_counters;
};
void op_mips_backtrace(struct pt_regs * const regs, unsigned int depth);
#endif


@@ -1,161 +0,0 @@
/*
* Loongson2 performance counter driver for oprofile
*
* Copyright (C) 2009 Lemote Inc.
* Author: Yanhua <yanh@lemote.com>
* Author: Wu Zhangjin <wuzhangjin@gmail.com>
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/init.h>
#include <linux/oprofile.h>
#include <linux/interrupt.h>
#include <loongson.h> /* LOONGSON2_PERFCNT_IRQ */
#include "op_impl.h"
#define LOONGSON2_CPU_TYPE "mips/loongson2"
#define LOONGSON2_PERFCNT_OVERFLOW (1ULL << 31)
#define LOONGSON2_PERFCTRL_EXL (1UL << 0)
#define LOONGSON2_PERFCTRL_KERNEL (1UL << 1)
#define LOONGSON2_PERFCTRL_SUPERVISOR (1UL << 2)
#define LOONGSON2_PERFCTRL_USER (1UL << 3)
#define LOONGSON2_PERFCTRL_ENABLE (1UL << 4)
#define LOONGSON2_PERFCTRL_EVENT(idx, event) \
(((event) & 0x0f) << ((idx) ? 9 : 5))
#define read_c0_perfctrl() __read_64bit_c0_register($24, 0)
#define write_c0_perfctrl(val) __write_64bit_c0_register($24, 0, val)
#define read_c0_perfcnt() __read_64bit_c0_register($25, 0)
#define write_c0_perfcnt(val) __write_64bit_c0_register($25, 0, val)
static struct loongson2_register_config {
unsigned int ctrl;
unsigned long long reset_counter1;
unsigned long long reset_counter2;
int cnt1_enabled, cnt2_enabled;
} reg;
static char *oprofid = "LoongsonPerf";
static irqreturn_t loongson2_perfcount_handler(int irq, void *dev_id);
static void reset_counters(void *arg)
{
write_c0_perfctrl(0);
write_c0_perfcnt(0);
}
static void loongson2_reg_setup(struct op_counter_config *cfg)
{
unsigned int ctrl = 0;
reg.reset_counter1 = 0;
reg.reset_counter2 = 0;
/*
* Compute the performance counter ctrl word.
* For now, count kernel and user mode.
*/
if (cfg[0].enabled) {
ctrl |= LOONGSON2_PERFCTRL_EVENT(0, cfg[0].event);
reg.reset_counter1 = 0x80000000ULL - cfg[0].count;
}
if (cfg[1].enabled) {
ctrl |= LOONGSON2_PERFCTRL_EVENT(1, cfg[1].event);
reg.reset_counter2 = 0x80000000ULL - cfg[1].count;
}
if (cfg[0].enabled || cfg[1].enabled) {
ctrl |= LOONGSON2_PERFCTRL_EXL | LOONGSON2_PERFCTRL_ENABLE;
if (cfg[0].kernel || cfg[1].kernel)
ctrl |= LOONGSON2_PERFCTRL_KERNEL;
if (cfg[0].user || cfg[1].user)
ctrl |= LOONGSON2_PERFCTRL_USER;
}
reg.ctrl = ctrl;
reg.cnt1_enabled = cfg[0].enabled;
reg.cnt2_enabled = cfg[1].enabled;
}
static void loongson2_cpu_setup(void *args)
{
write_c0_perfcnt((reg.reset_counter2 << 32) | reg.reset_counter1);
}
static void loongson2_cpu_start(void *args)
{
/* Start all counters on current CPU */
if (reg.cnt1_enabled || reg.cnt2_enabled)
write_c0_perfctrl(reg.ctrl);
}
static void loongson2_cpu_stop(void *args)
{
/* Stop all counters on current CPU */
write_c0_perfctrl(0);
memset(&reg, 0, sizeof(reg));
}
static irqreturn_t loongson2_perfcount_handler(int irq, void *dev_id)
{
uint64_t counter, counter1, counter2;
struct pt_regs *regs = get_irq_regs();
int enabled;
/* Check whether the irq belongs to me */
enabled = read_c0_perfctrl() & LOONGSON2_PERFCTRL_ENABLE;
if (!enabled)
return IRQ_NONE;
enabled = reg.cnt1_enabled | reg.cnt2_enabled;
if (!enabled)
return IRQ_NONE;
counter = read_c0_perfcnt();
counter1 = counter & 0xffffffff;
counter2 = counter >> 32;
if (counter1 & LOONGSON2_PERFCNT_OVERFLOW) {
if (reg.cnt1_enabled)
oprofile_add_sample(regs, 0);
counter1 = reg.reset_counter1;
}
if (counter2 & LOONGSON2_PERFCNT_OVERFLOW) {
if (reg.cnt2_enabled)
oprofile_add_sample(regs, 1);
counter2 = reg.reset_counter2;
}
write_c0_perfcnt((counter2 << 32) | counter1);
return IRQ_HANDLED;
}
static int __init loongson2_init(void)
{
return request_irq(LOONGSON2_PERFCNT_IRQ, loongson2_perfcount_handler,
IRQF_SHARED, "Perfcounter", oprofid);
}
static void loongson2_exit(void)
{
reset_counters(NULL);
free_irq(LOONGSON2_PERFCNT_IRQ, oprofid);
}
struct op_mips_model op_model_loongson2_ops = {
.reg_setup = loongson2_reg_setup,
.cpu_setup = loongson2_cpu_setup,
.init = loongson2_init,
.exit = loongson2_exit,
.cpu_start = loongson2_cpu_start,
.cpu_stop = loongson2_cpu_stop,
.cpu_type = LOONGSON2_CPU_TYPE,
.num_counters = 2
};
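
The reset_counter values computed in loongson2_reg_setup() are the usual overflow-preload trick: the counter counts up and flags overflow at bit 31, so seeding it with 2^31 - count raises the interrupt after exactly count events. For example:

/* Sketch of the preload arithmetic used above. */
unsigned long long preload(unsigned long count)
{
	return 0x80000000ULL - count;	/* count = 0x186a0 (100000) -> 0x7ffe7960 */
}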


@@ -1,213 +0,0 @@
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
*/
#include <linux/init.h>
#include <linux/cpu.h>
#include <linux/smp.h>
#include <linux/proc_fs.h>
#include <linux/oprofile.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>
#include <linux/uaccess.h>
#include <irq.h>
#include <loongson.h>
#include "op_impl.h"
#define LOONGSON3_PERFCNT_OVERFLOW (1ULL << 63)
#define LOONGSON3_PERFCTRL_EXL (1UL << 0)
#define LOONGSON3_PERFCTRL_KERNEL (1UL << 1)
#define LOONGSON3_PERFCTRL_SUPERVISOR (1UL << 2)
#define LOONGSON3_PERFCTRL_USER (1UL << 3)
#define LOONGSON3_PERFCTRL_ENABLE (1UL << 4)
#define LOONGSON3_PERFCTRL_W (1UL << 30)
#define LOONGSON3_PERFCTRL_M (1UL << 31)
#define LOONGSON3_PERFCTRL_EVENT(idx, event) \
(((event) & (idx ? 0x0f : 0x3f)) << 5)
/* Loongson-3 PerfCount performance counter1 register */
#define read_c0_perflo1() __read_64bit_c0_register($25, 0)
#define write_c0_perflo1(val) __write_64bit_c0_register($25, 0, val)
#define read_c0_perfhi1() __read_64bit_c0_register($25, 1)
#define write_c0_perfhi1(val) __write_64bit_c0_register($25, 1, val)
/* Loongson-3 PerfCount performance counter2 register */
#define read_c0_perflo2() __read_64bit_c0_register($25, 2)
#define write_c0_perflo2(val) __write_64bit_c0_register($25, 2, val)
#define read_c0_perfhi2() __read_64bit_c0_register($25, 3)
#define write_c0_perfhi2(val) __write_64bit_c0_register($25, 3, val)
static int (*save_perf_irq)(void);
static struct loongson3_register_config {
unsigned int control1;
unsigned int control2;
unsigned long long reset_counter1;
unsigned long long reset_counter2;
int ctr1_enable, ctr2_enable;
} reg;
static void reset_counters(void *arg)
{
write_c0_perfhi1(0);
write_c0_perfhi2(0);
write_c0_perflo1(0xc0000000);
write_c0_perflo2(0x40000000);
}
/* Compute all of the registers in preparation for enabling profiling. */
static void loongson3_reg_setup(struct op_counter_config *ctr)
{
unsigned int control1 = 0;
unsigned int control2 = 0;
reg.reset_counter1 = 0;
reg.reset_counter2 = 0;
/* Compute the performance counter control word. */
/* For now count kernel and user mode */
if (ctr[0].enabled) {
control1 |= LOONGSON3_PERFCTRL_EVENT(0, ctr[0].event) |
LOONGSON3_PERFCTRL_ENABLE;
if (ctr[0].kernel)
control1 |= LOONGSON3_PERFCTRL_KERNEL;
if (ctr[0].user)
control1 |= LOONGSON3_PERFCTRL_USER;
reg.reset_counter1 = 0x8000000000000000ULL - ctr[0].count;
}
if (ctr[1].enabled) {
control2 |= LOONGSON3_PERFCTRL_EVENT(1, ctr[1].event) |
LOONGSON3_PERFCTRL_ENABLE;
if (ctr[1].kernel)
control2 |= LOONGSON3_PERFCTRL_KERNEL;
if (ctr[1].user)
control2 |= LOONGSON3_PERFCTRL_USER;
reg.reset_counter2 = 0x8000000000000000ULL - ctr[1].count;
}
if (ctr[0].enabled)
control1 |= LOONGSON3_PERFCTRL_EXL;
if (ctr[1].enabled)
control2 |= LOONGSON3_PERFCTRL_EXL;
reg.control1 = control1;
reg.control2 = control2;
reg.ctr1_enable = ctr[0].enabled;
reg.ctr2_enable = ctr[1].enabled;
}
/* Program all of the registers in preparation for enabling profiling. */
static void loongson3_cpu_setup(void *args)
{
uint64_t perfcount1, perfcount2;
perfcount1 = reg.reset_counter1;
perfcount2 = reg.reset_counter2;
write_c0_perfhi1(perfcount1);
write_c0_perfhi2(perfcount2);
}
static void loongson3_cpu_start(void *args)
{
/* Start all counters on current CPU */
reg.control1 |= (LOONGSON3_PERFCTRL_W|LOONGSON3_PERFCTRL_M);
reg.control2 |= (LOONGSON3_PERFCTRL_W|LOONGSON3_PERFCTRL_M);
if (reg.ctr1_enable)
write_c0_perflo1(reg.control1);
if (reg.ctr2_enable)
write_c0_perflo2(reg.control2);
}
static void loongson3_cpu_stop(void *args)
{
/* Stop all counters on current CPU */
write_c0_perflo1(0xc0000000);
write_c0_perflo2(0x40000000);
memset(&reg, 0, sizeof(reg));
}
static int loongson3_perfcount_handler(void)
{
unsigned long flags;
uint64_t counter1, counter2;
uint32_t cause, handled = IRQ_NONE;
struct pt_regs *regs = get_irq_regs();
cause = read_c0_cause();
if (!(cause & CAUSEF_PCI))
return handled;
counter1 = read_c0_perfhi1();
counter2 = read_c0_perfhi2();
local_irq_save(flags);
if (counter1 & LOONGSON3_PERFCNT_OVERFLOW) {
if (reg.ctr1_enable)
oprofile_add_sample(regs, 0);
counter1 = reg.reset_counter1;
}
if (counter2 & LOONGSON3_PERFCNT_OVERFLOW) {
if (reg.ctr2_enable)
oprofile_add_sample(regs, 1);
counter2 = reg.reset_counter2;
}
local_irq_restore(flags);
write_c0_perfhi1(counter1);
write_c0_perfhi2(counter2);
/*
 * The counter overflow shares its interrupt line with the CP0 timer;
 * if the timer (CAUSEF_TI) is also pending, report IRQ_NONE so the
 * shared timer handler still gets a chance to run.
 */
if (!(cause & CAUSEF_TI))
handled = IRQ_HANDLED;
return handled;
}
static int loongson3_starting_cpu(unsigned int cpu)
{
write_c0_perflo1(reg.control1);
write_c0_perflo2(reg.control2);
return 0;
}
static int loongson3_dying_cpu(unsigned int cpu)
{
write_c0_perflo1(0xc0000000);
write_c0_perflo2(0x40000000);
return 0;
}
static int __init loongson3_init(void)
{
on_each_cpu(reset_counters, NULL, 1);
cpuhp_setup_state_nocalls(CPUHP_AP_MIPS_OP_LOONGSON3_STARTING,
"mips/oprofile/loongson3:starting",
loongson3_starting_cpu, loongson3_dying_cpu);
save_perf_irq = perf_irq;
perf_irq = loongson3_perfcount_handler;
return 0;
}
static void loongson3_exit(void)
{
on_each_cpu(reset_counters, NULL, 1);
cpuhp_remove_state_nocalls(CPUHP_AP_MIPS_OP_LOONGSON3_STARTING);
perf_irq = save_perf_irq;
}
struct op_mips_model op_model_loongson3_ops = {
.reg_setup = loongson3_reg_setup,
.cpu_setup = loongson3_cpu_setup,
.init = loongson3_init,
.exit = loongson3_exit,
.cpu_start = loongson3_cpu_start,
.cpu_stop = loongson3_cpu_stop,
.cpu_type = "mips/loongson3",
.num_counters = 2
};
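
For comparison, counting hardware events from user space via perf_event_open(2) needs none of this per-model glue. A minimal sketch along the lines of the man-page example (illustrative only; not part of this diff):

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>
    #include <stdint.h>

    static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                int cpu, int group_fd, unsigned long flags)
    {
            return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
    }

    int main(void)
    {
            struct perf_event_attr attr;
            uint64_t count;
            int fd;

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_HARDWARE;
            attr.config = PERF_COUNT_HW_INSTRUCTIONS;
            attr.disabled = 1;
            attr.exclude_kernel = 1;

            fd = perf_event_open(&attr, 0, -1, -1, 0);  /* this thread, any CPU */
            if (fd < 0) { perror("perf_event_open"); return 1; }

            ioctl(fd, PERF_EVENT_IOC_RESET, 0);
            ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
            for (volatile int i = 0; i < 1000000; i++) ;  /* workload */
            ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

            if (read(fd, &count, sizeof(count)) == sizeof(count))
                    printf("instructions: %llu\n", (unsigned long long)count);
            close(fd);
            return 0;
    }

Sampling, which oprofile_add_sample() provided here, comes from the same syscall with attr.sample_period set and an mmap'd ring buffer.
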

View File

@ -1,479 +0,0 @@
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 2004, 05, 06 by Ralf Baechle
* Copyright (C) 2005 by MIPS Technologies, Inc.
*/
#include <linux/cpumask.h>
#include <linux/oprofile.h>
#include <linux/interrupt.h>
#include <linux/smp.h>
#include <asm/irq_regs.h>
#include <asm/time.h>
#include "op_impl.h"
#define M_PERFCTL_EVENT(event) (((event) << MIPS_PERFCTRL_EVENT_S) & \
MIPS_PERFCTRL_EVENT)
#define M_PERFCTL_VPEID(vpe) ((vpe) << MIPS_PERFCTRL_VPEID_S)
#define M_COUNTER_OVERFLOW (1UL << 31)
static int (*save_perf_irq)(void);
static int perfcount_irq;
/*
* XLR has only one set of counters per core. Designate the
* first hardware thread in the core for setup and init.
* Skip CPUs with non-zero hardware thread id (4 hwt per core)
*/
#if defined(CONFIG_CPU_XLR) && defined(CONFIG_SMP)
#define oprofile_skip_cpu(c) ((cpu_logical_map(c) & 0x3) != 0)
#else
#define oprofile_skip_cpu(c) 0
#endif
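
On XLR the four hardware threads of a core share one counter block, so only thread 0 of each core programs it. A small model of the skip test (cpu_logical_map() stubbed as the identity mapping, which is an assumption for the demo):

    #include <stdio.h>

    /* Stand-in for the kernel's cpu_logical_map(); identity assumed. */
    static int cpu_logical_map(int c) { return c; }

    #define oprofile_skip_cpu(c) ((cpu_logical_map(c) & 0x3) != 0)

    int main(void)
    {
            for (int c = 0; c < 8; c++)  /* two XLR cores, 4 hw threads each */
                    printf("cpu %d: %s\n", c,
                           oprofile_skip_cpu(c) ? "skip" : "setup/init");
            return 0;
    }
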
#ifdef CONFIG_MIPS_MT_SMP
#define WHAT (MIPS_PERFCTRL_MT_EN_VPE | \
M_PERFCTL_VPEID(cpu_vpe_id(&current_cpu_data)))
#define vpe_id() (cpu_has_mipsmt_pertccounters ? \
0 : cpu_vpe_id(&current_cpu_data))
/*
 * The number of bits to shift to convert between counters per core and
 * counters per VPE. There is no reasonable interface at the moment to
 * obtain the number of VPEs used by Linux, and on the 34K this number
 * is fixed to two anyway, so we hardcode a few things here for now.
 * Done this way, an oprofile VSMP kernel will also run correctly on a
 * lesser core such as a 24K, or with maxcpus=1.
 */
static inline unsigned int vpe_shift(void)
{
if (num_possible_cpus() > 1)
return 1;
return 0;
}
#else
#define WHAT 0
#define vpe_id() 0
static inline unsigned int vpe_shift(void)
{
return 0;
}
#endif
static inline unsigned int counters_total_to_per_cpu(unsigned int counters)
{
return counters >> vpe_shift();
}
static inline unsigned int counters_per_cpu_to_total(unsigned int counters)
{
return counters << vpe_shift();
}
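
With per-TC counters unavailable, the two VPEs of a 34K split the four core counters evenly, and the shift converts between the two views. Worked numbers (vpe_shift() hardcoded to the SMP case for the demo):

    #include <stdio.h>

    static unsigned int vpe_shift_smp(void) { return 1; }  /* >1 possible CPU */

    int main(void)
    {
            unsigned int total = 4;                           /* per core */
            unsigned int per_cpu = total >> vpe_shift_smp();  /* 2 per VPE */

            printf("per VPE: %u, back to total: %u\n",
                   per_cpu, per_cpu << vpe_shift_smp());
            return 0;
    }
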
#define __define_perf_accessors(r, n, np) \
\
static inline unsigned int r_c0_ ## r ## n(void) \
{ \
unsigned int cpu = vpe_id(); \
\
switch (cpu) { \
case 0: \
return read_c0_ ## r ## n(); \
case 1: \
return read_c0_ ## r ## np(); \
default: \
BUG(); \
} \
return 0; \
} \
\
static inline void w_c0_ ## r ## n(unsigned int value) \
{ \
unsigned int cpu = vpe_id(); \
\
switch (cpu) { \
case 0: \
write_c0_ ## r ## n(value); \
return; \
case 1: \
write_c0_ ## r ## np(value); \
return; \
default: \
BUG(); \
} \
return; \
} \

__define_perf_accessors(perfcntr, 0, 2)
__define_perf_accessors(perfcntr, 1, 3)
__define_perf_accessors(perfcntr, 2, 0)
__define_perf_accessors(perfcntr, 3, 1)
__define_perf_accessors(perfctrl, 0, 2)
__define_perf_accessors(perfctrl, 1, 3)
__define_perf_accessors(perfctrl, 2, 0)
__define_perf_accessors(perfctrl, 3, 1)
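
Each __define_perf_accessors() invocation pastes a read/write pair that selects the physical register by VPE: VPE 0 uses its own index n, VPE 1 the partner index np two away. A runnable sketch of what the first invocation expands to, with vpe_id() and the CP0 accessors stubbed out for the host:

    #include <stdio.h>

    /* Host-side stubs for the kernel's CP0 accessors and vpe_id(). */
    static unsigned int regs[4] = { 111, 222, 333, 444 };
    static unsigned int current_vpe;
    static unsigned int vpe_id(void) { return current_vpe; }
    static unsigned int read_c0_perfcntr0(void) { return regs[0]; }
    static unsigned int read_c0_perfcntr2(void) { return regs[2]; }

    /* Roughly what __define_perf_accessors(perfcntr, 0, 2) pastes: */
    static unsigned int r_c0_perfcntr0(void)
    {
            switch (vpe_id()) {
            case 0: return read_c0_perfcntr0();  /* VPE 0 -> physical 0 */
            case 1: return read_c0_perfcntr2();  /* VPE 1 -> physical 2 */
            }
            return 0;
    }

    int main(void)
    {
            current_vpe = 0;
            printf("VPE0 reads %u\n", r_c0_perfcntr0());  /* 111 */
            current_vpe = 1;
            printf("VPE1 reads %u\n", r_c0_perfcntr0());  /* 333 */
            return 0;
    }
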
struct op_mips_model op_model_mipsxx_ops;
static struct mipsxx_register_config {
unsigned int control[4];
unsigned int counter[4];
} reg;
/* Compute all of the registers in preparation for enabling profiling. */
static void mipsxx_reg_setup(struct op_counter_config *ctr)
{
unsigned int counters = op_model_mipsxx_ops.num_counters;
int i;
/* Compute the performance counter control word. */
for (i = 0; i < counters; i++) {
reg.control[i] = 0;
reg.counter[i] = 0;
if (!ctr[i].enabled)
continue;
reg.control[i] = M_PERFCTL_EVENT(ctr[i].event) |
MIPS_PERFCTRL_IE;
if (ctr[i].kernel)
reg.control[i] |= MIPS_PERFCTRL_K;
if (ctr[i].user)
reg.control[i] |= MIPS_PERFCTRL_U;
if (ctr[i].exl)
reg.control[i] |= MIPS_PERFCTRL_EXL;
if (boot_cpu_type() == CPU_XLR)
reg.control[i] |= XLR_PERFCTRL_ALLTHREADS;
reg.counter[i] = 0x80000000 - ctr[i].count;
}
}
/* Program all of the registers in preparation for enabling profiling. */
static void mipsxx_cpu_setup(void *args)
{
unsigned int counters = op_model_mipsxx_ops.num_counters;
if (oprofile_skip_cpu(smp_processor_id()))
return;
switch (counters) {
case 4:
w_c0_perfctrl3(0);
w_c0_perfcntr3(reg.counter[3]);
fallthrough;
case 3:
w_c0_perfctrl2(0);
w_c0_perfcntr2(reg.counter[2]);
fallthrough;
case 2:
w_c0_perfctrl1(0);
w_c0_perfcntr1(reg.counter[1]);
fallthrough;
case 1:
w_c0_perfctrl0(0);
w_c0_perfcntr0(reg.counter[0]);
}
}
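
mipsxx_cpu_setup(), mipsxx_cpu_start() and mipsxx_cpu_stop() all use the same descending switch: each case deliberately falls through, so handling N counters executes the arms for N-1 down to 0 in one pass. A standalone demo of the idiom:

    #include <stdio.h>

    static void setup(unsigned int counters)
    {
            switch (counters) {
            case 4:
                    printf("program counter 3\n");
                    /* fall through */
            case 3:
                    printf("program counter 2\n");
                    /* fall through */
            case 2:
                    printf("program counter 1\n");
                    /* fall through */
            case 1:
                    printf("program counter 0\n");
            }
    }

    int main(void)
    {
            setup(2);   /* programs counters 1 and 0 only */
            return 0;
    }
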
/* Start all counters on current CPU */
static void mipsxx_cpu_start(void *args)
{
unsigned int counters = op_model_mipsxx_ops.num_counters;
if (oprofile_skip_cpu(smp_processor_id()))
return;
switch (counters) {
case 4:
w_c0_perfctrl3(WHAT | reg.control[3]);
fallthrough;
case 3:
w_c0_perfctrl2(WHAT | reg.control[2]);
fallthrough;
case 2:
w_c0_perfctrl1(WHAT | reg.control[1]);
fallthrough;
case 1:
w_c0_perfctrl0(WHAT | reg.control[0]);
}
}
/* Stop all counters on current CPU */
static void mipsxx_cpu_stop(void *args)
{
unsigned int counters = op_model_mipsxx_ops.num_counters;
if (oprofile_skip_cpu(smp_processor_id()))
return;
switch (counters) {
case 4:
w_c0_perfctrl3(0);
fallthrough;
case 3:
w_c0_perfctrl2(0);
fallthrough;
case 2:
w_c0_perfctrl1(0);
fallthrough;
case 1:
w_c0_perfctrl0(0);
}
}
static int mipsxx_perfcount_handler(void)
{
unsigned int counters = op_model_mipsxx_ops.num_counters;
unsigned int control;
unsigned int counter;
int handled = IRQ_NONE;
if (cpu_has_mips_r2 && !(read_c0_cause() & CAUSEF_PCI))
return handled;
switch (counters) {
#define HANDLE_COUNTER(n) \
case n + 1: \
control = r_c0_perfctrl ## n(); \
counter = r_c0_perfcntr ## n(); \
if ((control & MIPS_PERFCTRL_IE) && \
(counter & M_COUNTER_OVERFLOW)) { \
oprofile_add_sample(get_irq_regs(), n); \
w_c0_perfcntr ## n(reg.counter[n]); \
handled = IRQ_HANDLED; \
}
HANDLE_COUNTER(3)
fallthrough;
HANDLE_COUNTER(2)
fallthrough;
HANDLE_COUNTER(1)
fallthrough;
HANDLE_COUNTER(0)
}
return handled;
}
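
HANDLE_COUNTER(n) expands to a `case n + 1:` body that falls through to the next lower case, so a CPU reporting N counters checks counters N-1 down to 0. A host-side model of the expansion (the IE bit position is assumed for the demo; the real mask is MIPS_PERFCTRL_IE):

    #include <stdio.h>
    #include <stdint.h>

    #define M_COUNTER_OVERFLOW (1UL << 31)
    #define IE (1u << 4)        /* interrupt-enable bit; position assumed */

    static uint32_t ctrl[4], cntr[4];

    /* Host-side analogue of what HANDLE_COUNTER(n) pastes into the switch. */
    #define HANDLE_COUNTER(n)                                             \
            case n + 1:                                                   \
                    if ((ctrl[n] & IE) && (cntr[n] & M_COUNTER_OVERFLOW)) \
                            printf("sample from counter %d\n", n);        \
                    /* fall through */

    static void handler(int counters)
    {
            switch (counters) {
            HANDLE_COUNTER(3)
            HANDLE_COUNTER(2)
            HANDLE_COUNTER(1)
            HANDLE_COUNTER(0)
            }
    }

    int main(void)
    {
            ctrl[1] = IE;
            cntr[1] = 0x80000000;   /* counter 1 overflowed */
            handler(4);             /* falls through cases 4..1, samples #1 */
            return 0;
    }
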
static inline int __n_counters(void)
{
if (!cpu_has_perf)
return 0;
if (!(read_c0_perfctrl0() & MIPS_PERFCTRL_M))
return 1;
if (!(read_c0_perfctrl1() & MIPS_PERFCTRL_M))
return 2;
if (!(read_c0_perfctrl2() & MIPS_PERFCTRL_M))
return 3;
return 4;
}
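
The control registers self-describe the counter count: each MIPS_PERFCTRL_M bit says one more counter follows, so the probe walks the chain until M is clear. A host-side model, with the perfctrl0..3 reads replaced by an array:

    #include <stdio.h>
    #include <stdint.h>

    #define M (1u << 31)   /* "another counter follows" (MIPS_PERFCTRL_M) */

    static int n_counters_model(const uint32_t ctrl[4])
    {
            for (int i = 0; i < 4; i++)
                    if (!(ctrl[i] & M))
                            return i + 1;
            return 4;
    }

    int main(void)
    {
            uint32_t two[4]  = { M, 0, 0, 0 };   /* ctrl0 chains to ctrl1 */
            uint32_t four[4] = { M, M, M, 0 };

            printf("%d %d\n", n_counters_model(two), n_counters_model(four));
            return 0;
    }
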
static inline int n_counters(void)
{
int counters;
switch (current_cpu_type()) {
case CPU_R10000:
counters = 2;
break;
case CPU_R12000:
case CPU_R14000:
case CPU_R16000:
counters = 4;
break;
default:
counters = __n_counters();
}
return counters;
}
static void reset_counters(void *arg)
{
int counters = (int)(long)arg;
switch (counters) {
case 4:
w_c0_perfctrl3(0);
w_c0_perfcntr3(0);
fallthrough;
case 3:
w_c0_perfctrl2(0);
w_c0_perfcntr2(0);
fallthrough;
case 2:
w_c0_perfctrl1(0);
w_c0_perfcntr1(0);
fallthrough;
case 1:
w_c0_perfctrl0(0);
w_c0_perfcntr0(0);
}
}
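
on_each_cpu() passes only a void *, so the counter count makes the trip as an integer cast through long and back rather than through a pointer to a real object. The round trip in isolation:

    #include <stdio.h>

    static void reset_counters_demo(void *arg)
    {
            int counters = (int)(long)arg;   /* unpack the integer */

            printf("resetting %d counters on this CPU\n", counters);
    }

    int main(void)
    {
            int counters = 4;

            /* Stand-in for on_each_cpu(reset_counters, (void *)(long)counters, 1) */
            reset_counters_demo((void *)(long)counters);
            return 0;
    }
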
static irqreturn_t mipsxx_perfcount_int(int irq, void *dev_id)
{
return mipsxx_perfcount_handler();
}
static int __init mipsxx_init(void)
{
int counters;
counters = n_counters();
if (counters == 0) {
printk(KERN_ERR "Oprofile: CPU has no performance counters\n");
return -ENODEV;
}
#ifdef CONFIG_MIPS_MT_SMP
if (!cpu_has_mipsmt_pertccounters)
counters = counters_total_to_per_cpu(counters);
#endif
on_each_cpu(reset_counters, (void *)(long)counters, 1);
op_model_mipsxx_ops.num_counters = counters;
switch (current_cpu_type()) {
case CPU_M14KC:
op_model_mipsxx_ops.cpu_type = "mips/M14Kc";
break;
case CPU_M14KEC:
op_model_mipsxx_ops.cpu_type = "mips/M14KEc";
break;
case CPU_20KC:
op_model_mipsxx_ops.cpu_type = "mips/20K";
break;
case CPU_24K:
op_model_mipsxx_ops.cpu_type = "mips/24K";
break;
case CPU_25KF:
op_model_mipsxx_ops.cpu_type = "mips/25K";
break;
case CPU_1004K:
case CPU_34K:
op_model_mipsxx_ops.cpu_type = "mips/34K";
break;
case CPU_1074K:
case CPU_74K:
op_model_mipsxx_ops.cpu_type = "mips/74K";
break;
case CPU_INTERAPTIV:
op_model_mipsxx_ops.cpu_type = "mips/interAptiv";
break;
case CPU_PROAPTIV:
op_model_mipsxx_ops.cpu_type = "mips/proAptiv";
break;
case CPU_P5600:
op_model_mipsxx_ops.cpu_type = "mips/P5600";
break;
case CPU_I6400:
op_model_mipsxx_ops.cpu_type = "mips/I6400";
break;
case CPU_M5150:
op_model_mipsxx_ops.cpu_type = "mips/M5150";
break;
case CPU_5KC:
op_model_mipsxx_ops.cpu_type = "mips/5K";
break;
case CPU_R10000:
if ((current_cpu_data.processor_id & 0xff) == 0x20)
op_model_mipsxx_ops.cpu_type = "mips/r10000-v2.x";
else
op_model_mipsxx_ops.cpu_type = "mips/r10000";
break;
case CPU_R12000:
case CPU_R14000:
op_model_mipsxx_ops.cpu_type = "mips/r12000";
break;
case CPU_R16000:
op_model_mipsxx_ops.cpu_type = "mips/r16000";
break;
case CPU_SB1:
case CPU_SB1A:
op_model_mipsxx_ops.cpu_type = "mips/sb1";
break;
case CPU_LOONGSON32:
op_model_mipsxx_ops.cpu_type = "mips/loongson1";
break;
case CPU_XLR:
op_model_mipsxx_ops.cpu_type = "mips/xlr";
break;
default:
printk(KERN_ERR "Profiling unsupported for this CPU\n");
return -ENODEV;
}
save_perf_irq = perf_irq;
perf_irq = mipsxx_perfcount_handler;
if (get_c0_perfcount_int)
perfcount_irq = get_c0_perfcount_int();
else if (cp0_perfcount_irq >= 0)
perfcount_irq = MIPS_CPU_IRQ_BASE + cp0_perfcount_irq;
else
perfcount_irq = -1;
/*
 * IRQF_SHARED requires a unique, non-NULL dev_id cookie; save_perf_irq
 * doubles as that token here, and mipsxx_exit() hands the same value
 * back to free_irq().
 */
if (perfcount_irq >= 0)
return request_irq(perfcount_irq, mipsxx_perfcount_int,
IRQF_PERCPU | IRQF_NOBALANCING |
IRQF_NO_THREAD | IRQF_NO_SUSPEND |
IRQF_SHARED,
"Perfcounter", save_perf_irq);
return 0;
}
static void mipsxx_exit(void)
{
int counters = op_model_mipsxx_ops.num_counters;
if (perfcount_irq >= 0)
free_irq(perfcount_irq, save_perf_irq);
counters = counters_per_cpu_to_total(counters);
on_each_cpu(reset_counters, (void *)(long)counters, 1);
perf_irq = save_perf_irq;
}
struct op_mips_model op_model_mipsxx_ops = {
.reg_setup = mipsxx_reg_setup,
.cpu_setup = mipsxx_cpu_setup,
.init = mipsxx_init,
.exit = mipsxx_exit,
.cpu_start = mipsxx_cpu_start,
.cpu_stop = mipsxx_cpu_stop,
};

View File

@ -4,7 +4,6 @@ config PARISC
select ARCH_32BIT_OFF_T if !64BIT
select ARCH_MIGHT_HAVE_PC_PARPORT
select HAVE_IDE
select HAVE_OPROFILE
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_SYSCALL_TRACEPOINTS

View File

@ -116,8 +116,6 @@ kernel-y := mm/ kernel/ math-emu/
core-y += $(addprefix arch/parisc/, $(kernel-y))
libs-y += arch/parisc/lib/ $(LIBGCC)
drivers-$(CONFIG_OPROFILE) += arch/parisc/oprofile/
boot := arch/parisc/boot
PALO := $(shell if (which palo 2>&1); then : ; \

View File

@ -1,10 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_OPROFILE) += oprofile.o
DRIVER_OBJS = $(addprefix ../../../drivers/oprofile/, \
oprof.o cpu_buffer.o buffer_sync.o \
event_buffer.o oprofile_files.o \
oprofilefs.o oprofile_stats.o \
timer_int.o )
oprofile-y := $(DRIVER_OBJS) init.o

View File

@ -1,23 +0,0 @@
/**
* @file init.c
*
* @remark Copyright 2002 OProfile authors
* @remark Read the file COPYING
*
* @author John Levon <levon@movementarian.org>
*/
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/oprofile.h>
int __init oprofile_arch_init(struct oprofile_operations *ops)
{
return -ENODEV;
}
void oprofile_arch_exit(void)
{
}

View File

@ -226,7 +226,6 @@ config PPC
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI if PERF_EVENTS || (PPC64 && PPC_BOOK3S)
select HAVE_HARDLOCKUP_DETECTOR_ARCH if (PPC64 && PPC_BOOK3S)
select HAVE_OPROFILE
select HAVE_OPTPROBES if PPC64
select HAVE_PERF_EVENTS
select HAVE_PERF_EVENTS_NMI if PPC64

View File

@ -276,8 +276,6 @@ head-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE) += arch/powerpc/kernel/prom_init.o
# See arch/powerpc/Kbuild for content of core part of the kernel
core-y += arch/powerpc/
drivers-$(CONFIG_OPROFILE) += arch/powerpc/oprofile/
# Default to zImage, override when needed
all: zImage

View File

@ -8,7 +8,6 @@ CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_SLUB_CPU_PARTIAL is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set

View File

@ -6,7 +6,6 @@ CONFIG_LOG_BUF_SHIFT=14
CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set

View File

@ -17,7 +17,6 @@ CONFIG_KALLSYMS_ALL=y
CONFIG_BPF_SYSCALL=y
CONFIG_EMBEDDED=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set

View File

@ -7,7 +7,6 @@ CONFIG_BLK_DEV_INITRD=y
CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set

View File

@ -14,7 +14,6 @@ CONFIG_CPUSETS=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_PARTITION_ADVANCED=y

View File

@ -12,7 +12,6 @@ CONFIG_CGROUPS=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y

View File

@ -9,7 +9,6 @@ CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y

View File

@ -7,7 +7,6 @@ CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set

View File

@ -10,7 +10,6 @@ CONFIG_LOG_BUF_SHIFT=14
CONFIG_BLK_DEV_INITRD=y
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y

View File

@ -30,7 +30,6 @@ CONFIG_BLK_DEV_INITRD=y
CONFIG_BPF_SYSCALL=y
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
CONFIG_MODULES=y

View File

@ -62,7 +62,6 @@ CONFIG_VIRTUALIZATION=y
CONFIG_KVM_BOOK3S_64=m
CONFIG_KVM_BOOK3S_64_HV=m
CONFIG_VHOST_NET=m
CONFIG_OPROFILE=m
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
CONFIG_MODULES=y

View File

@ -14,7 +14,6 @@ CONFIG_CPUSETS=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y

View File

@ -19,7 +19,6 @@ CONFIG_USER_NS=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y

View File

@ -13,7 +13,6 @@ CONFIG_EMBEDDED=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_PPC_POWERNV is not set

View File

@ -29,7 +29,6 @@ CONFIG_BLK_DEV_INITRD=y
CONFIG_BPF_SYSCALL=y
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
CONFIG_MODULES=y

Some files were not shown because too many files have changed in this diff.