perf_counter: Fix throttling lock-up

The throttling logic is broken: we can lock up when the hw sampling
interval is too small.

Make the throttling code more robust: disable counters even
if we already disabled them.

( Also clean up whitespace damage I noticed while reading
  various pieces of code related to throttling. )

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Author: Ingo Molnar
Date:   2009-06-03 22:19:36 +02:00
Parent: 233f0b95ca
Commit: 128f048f0f
2 changed files with 15 additions and 6 deletions


@@ -91,7 +91,7 @@ static u64 intel_pmu_raw_event(u64 event)
 #define CORE_EVNTSEL_INV_MASK		0x00800000ULL
 #define CORE_EVNTSEL_COUNTER_MASK	0xFF000000ULL
 
-#define CORE_EVNTSEL_MASK 		\
+#define CORE_EVNTSEL_MASK		\
 	(CORE_EVNTSEL_EVENT_MASK |	\
 	 CORE_EVNTSEL_UNIT_MASK  |	\
 	 CORE_EVNTSEL_EDGE_MASK  |	\


@@ -2822,11 +2822,20 @@ int perf_counter_overflow(struct perf_counter *counter,
 
 	if (!throttle) {
 		counter->hw.interrupts++;
-	} else if (counter->hw.interrupts != MAX_INTERRUPTS) {
-		counter->hw.interrupts++;
-		if (HZ*counter->hw.interrupts > (u64)sysctl_perf_counter_limit) {
-			counter->hw.interrupts = MAX_INTERRUPTS;
-			perf_log_throttle(counter, 0);
-			ret = 1;
+	} else {
+		if (counter->hw.interrupts != MAX_INTERRUPTS) {
+			counter->hw.interrupts++;
+			if (HZ*counter->hw.interrupts > (u64)sysctl_perf_counter_limit) {
+				counter->hw.interrupts = MAX_INTERRUPTS;
+				perf_log_throttle(counter, 0);
+				ret = 1;
+			}
+		} else {
+			/*
+			 * Keep re-disabling counters even though on the previous
+			 * pass we disabled it - just in case we raced with a
+			 * sched-in and the counter got enabled again:
+			 */
+			ret = 1;
 		}
 	}
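The control flow of the fix can be modeled in user space as a small sketch. This is not the kernel code itself: `struct hw_counter`, `overflow_throttle()` and the `HZ`/`LIMIT` constants are hypothetical stand-ins for `struct hw_perf_counter`, `perf_counter_overflow()` and `sysctl_perf_counter_limit`, chosen only to show why the new `else` branch keeps returning "disable" even after the counter was already throttled:

```c
#include <assert.h>

/* Hypothetical stand-ins for the kernel's HZ and sysctl_perf_counter_limit. */
#define MAX_INTERRUPTS	(~0UL)
#define HZ		1000UL
#define LIMIT		100000UL

struct hw_counter {
	unsigned long interrupts;
};

/* Returns 1 when the caller must (re-)disable the counter (the 'ret'
 * variable in perf_counter_overflow()). */
static int overflow_throttle(struct hw_counter *hwc, int throttle)
{
	int ret = 0;

	if (!throttle) {
		hwc->interrupts++;
	} else {
		if (hwc->interrupts != MAX_INTERRUPTS) {
			hwc->interrupts++;
			if (HZ * hwc->interrupts > LIMIT) {
				/* Crossed the rate limit: throttle now. */
				hwc->interrupts = MAX_INTERRUPTS;
				ret = 1;
			}
		} else {
			/* Already throttled on a previous pass: keep asking
			 * for a disable, in case a sched-in re-enabled the
			 * counter behind our back. */
			ret = 1;
		}
	}
	return ret;
}
```

The old code fell through this `else if` with `ret == 0` once `interrupts` had reached `MAX_INTERRUPTS`, so a counter re-enabled by a racing sched-in was never disabled again; the restructured version makes the disable request idempotent.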