Merge tag 'trace-v5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing fix from Steven Rostedt:
 "This fixes a small race between allocating a snapshot buffer and
  setting the snapshot trigger.

  On a slow machine, the trigger can occur before the snapshot is
  allocated causing a warning to be displayed in the ring buffer, and no
  snapshot triggering. Reversing the allocation and the enabling of the
  trigger fixes the problem"

* tag 'trace-v5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  tracing: Fix the race between registering 'snapshot' event trigger and triggering 'snapshot' operation
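For illustration, here is a minimal sketch of the racy ordering that this pull removes, reconstructed from the removed lines in the diff below; the comments marking the race window are editorial annotations, not part of the kernel source.

/* Old ordering (pre-fix): the trigger is registered, and therefore armed,
 * before the snapshot buffer exists. A triggering event that arrives in
 * the window between the two calls finds no buffer, emits a warning into
 * the ring buffer, and takes no snapshot.
 */
static int
register_snapshot_trigger(char *glob, struct event_trigger_ops *ops,
                          struct event_trigger_data *data,
                          struct trace_event_file *file)
{
        int ret = register_trigger(glob, ops, data, file);

        /* <-- race window: the trigger is live, but the snapshot buffer
         *     for this trace instance may not be allocated yet. */
        if (ret > 0 && tracing_alloc_snapshot_instance(file->tr) != 0) {
                unregister_trigger(glob, ops, data, file);
                ret = 0;
        }

        return ret;
}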
commit 4ede125902
Author: Linus Torvalds
Date:   2020-04-16 10:14:22 -07:00

@@ -1088,14 +1088,10 @@ register_snapshot_trigger(char *glob, struct event_trigger_ops *ops,
                           struct event_trigger_data *data,
                           struct trace_event_file *file)
 {
-       int ret = register_trigger(glob, ops, data, file);
-
-       if (ret > 0 && tracing_alloc_snapshot_instance(file->tr) != 0) {
-               unregister_trigger(glob, ops, data, file);
-               ret = 0;
-       }
+       if (tracing_alloc_snapshot_instance(file->tr) != 0)
+               return 0;
 
-       return ret;
+       return register_trigger(glob, ops, data, file);
 }
 
 static int
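
Put back together, the function after this change reads roughly as below, assembled from the hunk above; the explanatory comments are annotations, not part of the upstream source.

static int
register_snapshot_trigger(char *glob, struct event_trigger_ops *ops,
                          struct event_trigger_data *data,
                          struct trace_event_file *file)
{
        /* Allocate the per-instance snapshot buffer first ... */
        if (tracing_alloc_snapshot_instance(file->tr) != 0)
                return 0;

        /* ... and only then register (arm) the trigger, so it can never
         * fire before the buffer it snapshots into exists. */
        return register_trigger(glob, ops, data, file);
}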