perf/x86/intel: Make WARN()ings consistent

The intel_commit_scheduling() callback is pointlessly different from
the start and stop scheduling callbacks.

Furthermore, the constraint should never be NULL, so remove that test.

Even though we'll never get called when !is_ht_workaround_enabled()
(because we NULL the callbacks), put that test in anyway.

Collapse the (pointless) WARN_ON_ONCE() and bail on !cpuc->excl_cntrs --
this is doubly pointless, because it's the same condition as
is_ht_workaround_enabled(), which was already pointless because the
whole method won't ever be called in that case.

Furthermore, make all the !excl_cntrs tests WARN_ON_ONCE(); they're all
pointless because, as above, either the functions
({get,put}_excl_constraint) are already predicated on it existing or
the is_ht_workaround_enabled() check is the same test.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra 2015-05-21 10:57:28 +02:00 committed by Ingo Molnar
parent aaf932e816
commit 17186ccda3
1 changed file with 8 additions and 12 deletions
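
For reference, the pattern the patch converges on works because WARN_ON_ONCE(cond)
evaluates to the truth value of cond (see include/asm-generic/bug.h), so the one-time
warning and the early bail-out collapse into a single statement. Below is a minimal
userspace sketch of that shape; the WARN_ON_ONCE() stub and the commit_sketch()
function are illustrative stand-ins, not the kernel code itself.

/* Userspace sketch of the WARN_ON_ONCE()-in-condition idiom; the stub
 * macro below only mimics the kernel behaviour of warning once and
 * evaluating to the condition it was given. */
#include <stdbool.h>
#include <stdio.h>

#define WARN_ON_ONCE(cond) ({					\
	static bool __warned;					\
	bool __c = (cond);					\
	if (__c && !__warned) {					\
		__warned = true;				\
		fprintf(stderr, "WARNING: %s\n", #cond);	\
	}							\
	__c;							\
})

struct intel_excl_cntrs;	/* opaque here; the real layout lives in the perf code */

static void commit_sketch(struct intel_excl_cntrs *excl_cntrs)
{
	/* Old shape: warn, then test the same condition again to bail:
	 *	WARN_ON_ONCE(!excl_cntrs);
	 *	if (!excl_cntrs)
	 *		return;
	 */

	/* New shape: one test, one early return. */
	if (WARN_ON_ONCE(!excl_cntrs))
		return;

	/* ... proceed knowing excl_cntrs is non-NULL ... */
}

int main(void)
{
	commit_sketch(NULL);	/* triggers the one-time warning and bails */
	commit_sketch(NULL);	/* warning already emitted; still bails */
	return 0;
}

Calling commit_sketch(NULL) twice warns exactly once and bails both times, which is
the behaviour the consolidated checks in the diff below preserve.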

@@ -1915,7 +1915,7 @@ intel_start_scheduling(struct cpu_hw_events *cpuc)
 	/*
 	 * no exclusion needed
 	 */
-	if (!excl_cntrs)
+	if (WARN_ON_ONCE(!excl_cntrs))
 		return;
 
 	xl = &excl_cntrs->states[tid];
@@ -1949,7 +1949,7 @@ intel_stop_scheduling(struct cpu_hw_events *cpuc)
 	/*
 	 * no exclusion needed
 	 */
-	if (!excl_cntrs)
+	if (WARN_ON_ONCE(!excl_cntrs))
 		return;
 
 	xl = &excl_cntrs->states[tid];
@@ -1985,7 +1985,7 @@ intel_get_excl_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
 	/*
 	 * no exclusion needed
 	 */
-	if (!excl_cntrs)
+	if (WARN_ON_ONCE(!excl_cntrs))
 		return c;
 
 	/*
@@ -2126,9 +2126,7 @@ static void intel_put_excl_constraints(struct cpu_hw_events *cpuc,
 	if (cpuc->is_fake)
 		return;
 
-	WARN_ON_ONCE(!excl_cntrs);
-
-	if (!excl_cntrs)
+	if (WARN_ON_ONCE(!excl_cntrs))
 		return;
 
 	xl = &excl_cntrs->states[tid];
@@ -2193,17 +2191,15 @@ static void intel_commit_scheduling(struct cpu_hw_events *cpuc, int idx, int cnt
 	struct intel_excl_states *xl;
 	int tid = cpuc->excl_thread_id;
 
-	if (cpuc->is_fake || !c)
+	if (cpuc->is_fake || !is_ht_workaround_enabled())
 		return;
 
+	if (WARN_ON_ONCE(!excl_cntrs))
+		return;
+
 	if (!(c->flags & PERF_X86_EVENT_DYNAMIC))
 		return;
 
-	WARN_ON_ONCE(!excl_cntrs);
-
-	if (!excl_cntrs)
-		return;
-
 	xl = &excl_cntrs->states[tid];
 
 	lockdep_assert_held(&excl_cntrs->lock);