alistair23-linux/kernel/trace
Steven Rostedt (Red Hat) 12cce594fa ftrace/x86: Allow !CONFIG_PREEMPT dynamic ops to use allocated trampolines
When the static ftrace_ops (like the function tracer) enables tracing
and it is the only callback referencing a function, a trampoline is
dynamically allocated for that function which calls the callback
directly, instead of going through a loop function that iterates over
all the registered ftrace ops (the path taken when more than one ops
is registered).
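
To make the two paths concrete, here is a minimal userspace sketch
(illustrative names only; the kernel's real types and list handling
differ): with several callbacks registered, every traced call site
funnels through a loop over the ops list, while a sole callback can be
invoked directly, which is the role the allocated trampoline plays.

    #include <stdio.h>

    struct ops {
        void (*func)(unsigned long ip);
        struct ops *next;
    };

    static struct ops *ops_list;

    /* Loop path: walk every registered ops for this call site. */
    static void ops_list_func(unsigned long ip)
    {
        struct ops *op;

        for (op = ops_list; op; op = op->next)
            op->func(ip);
    }

    static void my_callback(unsigned long ip)
    {
        printf("traced %#lx\n", ip);
    }

    static struct ops my_ops = { .func = my_callback };

    int main(void)
    {
        ops_list = &my_ops;

        ops_list_func(0x1234);  /* what the generic loop does */
        my_ops.func(0x1234);    /* what a direct-call trampoline amounts to */
        return 0;
    }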

But dynamically allocated ftrace_ops may be freed, and on a
CONFIG_PREEMPT kernel there is no way to know when it is safe to free
the trampoline. If a task was preempted while executing on the
trampoline, there is currently no way to know when it will be off that
trampoline.
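
The hazard can be pictured with a timeline (an illustrative sketch,
not an actual kernel trace):

    CPU 0 (task A)                      CPU 1 (unregister path)
    --------------                      -----------------------
    call traced_func
      -> jumps to trampoline
      ... preempted while on it ...     ftrace_shutdown(ops)
                                        frees the trampoline  <-- unsafe
    task A resumes executing on
    the freed trampoline (use-after-free)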

But this is not true for !CONFIG_PREEMPT. There, the current method of
calling schedule_on_each_cpu() will force tasks off the trampoline,
because they cannot schedule while on it (kernel preemption is not
configured). That means it is safe to free a dynamically allocated
ftrace ops trampoline when CONFIG_PREEMPT is not configured.
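
In code, the idea amounts to roughly the following sketch (simplified;
the actual patch spreads this across ftrace_shutdown() and the
arch_ftrace_trampoline_free() it introduces, and gates trampoline
allocation rather than the free on the preemption config):

    #ifndef CONFIG_PREEMPT
        if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
            /*
             * Without kernel preemption a task cannot schedule while
             * on the trampoline, so after every CPU has scheduled,
             * nothing can still be executing on it.
             */
            schedule_on_each_cpu(ftrace_sync);
            arch_ftrace_trampoline_free(ops);
        }
    #endif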

Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Borislav Petkov <bp@suse.de>
Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Tested-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-11-11 12:41:52 -05:00
blktrace.c
ftrace.c ftrace/x86: Allow !CONFIG_PREEMPT dynamic ops to use allocated trampolines 2014-11-11 12:41:52 -05:00
Kconfig tracing: Remove function_trace_stop and HAVE_FUNCTION_TRACE_MCOUNT_TEST 2014-07-18 13:58:12 -04:00
Makefile
power-traces.c
ring_buffer.c ring-buffer: Fix infinite spin in reading buffer 2014-10-02 16:51:18 -04:00
ring_buffer_benchmark.c sched, cleanup, treewide: Remove set_current_state(TASK_RUNNING) after schedule() 2014-09-19 12:35:17 +02:00
rpm-traces.c
trace.c Merge branch 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip 2014-08-05 17:46:42 -07:00
trace.h tracing: let user specify tracing_thresh after selecting function_graph 2014-07-18 15:48:52 -04:00
trace_benchmark.c
trace_benchmark.h
trace_branch.c
trace_clock.c tracing: Fix wraparound problems in "uptime" trace clock 2014-07-21 09:56:12 -04:00
trace_entries.h
trace_event_perf.c perf: Check permission only for parent tracepoint event 2014-07-28 10:01:38 +02:00
trace_events.c tracing: Robustify wait loop 2014-10-08 19:51:01 -04:00
trace_events_filter.c tracing: Kill "filter_string" arg of replace_preds() 2014-07-16 14:58:53 -04:00
trace_events_filter_test.h
trace_events_trigger.c
trace_export.c
trace_functions.c
trace_functions_graph.c tracing: Convert local function_graph functions to static 2014-07-18 21:16:06 -04:00
trace_irqsoff.c
trace_kdb.c
trace_kprobe.c
trace_mmiotrace.c
trace_nop.c
trace_output.c tracing: Add trace_seq_buffer_ptr() helper function 2014-07-01 07:13:39 -04:00
trace_output.h tracing: Make trace_seq_putmem_hex() more robust 2014-07-01 07:13:37 -04:00
trace_printk.c
trace_probe.c
trace_probe.h
trace_sched_switch.c
trace_sched_wakeup.c
trace_selftest.c Seems that Peter Zijlstra added a new check that is making old 2014-10-12 07:28:55 -04:00
trace_selftest_dynamic.c
trace_seq.c tracing: Remove trace_seq_reserve() 2014-07-01 07:13:37 -04:00
trace_stack.c sched: Add helper for task stack page overrun checking 2014-09-19 12:35:23 +02:00
trace_stat.c
trace_stat.h
trace_syscalls.c kernel: trace_syscalls: Replace rcu_assign_pointer() with RCU_INIT_POINTER() 2014-09-10 10:48:47 -04:00
trace_uprobe.c tracing/uprobes: Kill the dead TRACE_EVENT_FL_USE_CALL_FILTER logic 2014-07-16 14:25:19 -04:00