
sched/core: More notrace annotations

preempt_schedule_common() is marked notrace, but it does not use
_notrace() preempt_count functions and __schedule() is also not marked
notrace, which means that it's perfectly possible to end up in the
tracer from preempt_schedule_common().

Steve says:

  | Yep, there's some history to this. This was originally the issue that
  | caused function tracing to go into infinite recursion. But now we have
  | preempt_schedule_notrace(), which is used by the function tracer, and
  | that function must not be traced till preemption is disabled.
  |
  | Now if function tracing is running and we take an interrupt when
  | NEED_RESCHED is set, it calls
  |
  |   preempt_schedule_common() (not traced)
  |
  | But then that calls preempt_disable() (traced)
  |
  | function tracer calls preempt_disable_notrace() followed by
  | preempt_enable_notrace() which will see NEED_RESCHED set, and it will
  | call preempt_schedule_notrace(), which stops the recursion, but
  | still calls __schedule() here, and that means when we return, we call
  | the __schedule() from preempt_schedule_common().
  |
  | That said, I prefer this patch. Preemption is disabled before calling
  | __schedule(), and we get rid of a one round recursion with the
  | scheduler.
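
The extra scheduling round Steve describes can be modelled outside the
kernel. The sketch below is plain userspace C, not kernel code: the
preempt_count, the need_resched flag, the tracer hook and all function
bodies are simplified stand-ins assumed purely for illustration; only
the call pattern mirrors the kernel. With traced preempt ops in
preempt_schedule_common(), the tracer's notrace enable path schedules
once and the common path schedules again; the _notrace ops avoid the
extra round.

/*
 * Userspace sketch of the recursion described above -- NOT kernel code.
 * All names are local stand-ins chosen to mirror the kernel ones.
 */
#include <stdbool.h>
#include <stdio.h>

static int preempt_count;
static bool need_resched;
static int schedule_calls;

static void __schedule(void)                    /* stand-in for __schedule() */
{
	schedule_calls++;
	need_resched = false;                   /* one pass satisfies the request */
}

static void preempt_schedule_notrace(void)      /* used by the tracer itself */
{
	preempt_count++;
	__schedule();
	preempt_count--;
}

static void tracer(void)                        /* models the function-trace hook */
{
	preempt_count++;                        /* preempt_disable_notrace() */
	/* ... record the traced call ... */
	preempt_count--;                        /* preempt_enable_notrace() */
	if (!preempt_count && need_resched)     /* sees NEED_RESCHED set */
		preempt_schedule_notrace();
}

/* traced preempt ops: entering them bounces through the tracer */
static void preempt_disable(void)                   { tracer(); preempt_count++; }
static void sched_preempt_enable_no_resched(void)   { tracer(); preempt_count--; }

/* _notrace variants: same counting, no tracer hook */
static void preempt_disable_notrace(void)           { preempt_count++; }
static void preempt_enable_no_resched_notrace(void) { preempt_count--; }

static void preempt_schedule_common(bool use_notrace)
{
	do {
		if (use_notrace)
			preempt_disable_notrace();
		else
			preempt_disable();      /* traced: tracer may schedule first */
		__schedule();
		if (use_notrace)
			preempt_enable_no_resched_notrace();
		else
			sched_preempt_enable_no_resched();
	} while (need_resched);
}

int main(void)
{
	need_resched = true;
	preempt_schedule_common(false);
	printf("traced ops:  __schedule() ran %d times\n", schedule_calls); /* 2 */

	schedule_calls = 0;
	need_resched = true;
	preempt_schedule_common(true);
	printf("notrace ops: __schedule() ran %d times\n", schedule_calls); /* 1 */
	return 0;
}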

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra 2015-09-28 18:52:36 +02:00 committed by Ingo Molnar
parent e61bf1e43b
commit 499d79559f
1 changed file with 3 additions and 3 deletions


@@ -3057,7 +3057,7 @@ again:
  *
  * WARNING: must be called with preemption disabled!
  */
-static void __sched __schedule(bool preempt)
+static void __sched notrace __schedule(bool preempt)
 {
 	struct task_struct *prev, *next;
 	unsigned long *switch_count;
@@ -3203,9 +3203,9 @@ void __sched schedule_preempt_disabled(void)
 static void __sched notrace preempt_schedule_common(void)
 {
 	do {
-		preempt_disable();
+		preempt_disable_notrace();
 		__schedule(true);
-		sched_preempt_enable_no_resched();
+		preempt_enable_no_resched_notrace();
 
 		/*
 		 * Check again in case we missed a preemption opportunity