remarkable-linux/kernel/bpf
Alexei Starovoitov 15a07b3381 bpf: add lookup/update support for per-cpu hash and array maps
The functions bpf_map_lookup_elem(map, key, value) and
bpf_map_update_elem(map, key, value, flags) need to get/set
values from all CPUs for per-cpu hash and array maps,
so that user space can aggregate/update them as necessary.

Example of single counter aggregation in user space:
  unsigned int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
  long values[nr_cpus];
  long value = 0;
  unsigned int i;

  bpf_lookup_elem(fd, key, values);
  for (i = 0; i < nr_cpus; i++)
    value += values[i];

User space must provide a buffer of round_up(value_size, 8) * nr_cpus
bytes to get/set values, since the kernel copies each per-cpu value
as 'long's to try to copy the counters atomically.
This is best-effort, since bpf programs and user space race
to access the same memory.
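
As a sketch of the matching update path, the helper below writes the same
starting value into every CPU's slot of one element, sized per the
round_up(value_size, 8) * nr_cpus rule above. It is illustrative only and
assumes a BPF_MAP_TYPE_PERCPU_ARRAY created with value_size == sizeof(long);
the helper name bpf_update_all_cpus is hypothetical, and the raw bpf(2)
syscall is used instead of the bpf_lookup_elem()-style wrapper shown above.

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/bpf.h>

  /* Illustrative helper (not part of this patch): set one per-cpu array
   * element to the same value on every CPU via the raw bpf(2) syscall.
   */
  static int bpf_update_all_cpus(int map_fd, uint32_t key, long start_value)
  {
    unsigned int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
    /* round_up(value_size, 8): each per-cpu slot is padded to 8 bytes */
    size_t slot = (sizeof(long) + 7) / 8 * 8;
    long *values = calloc(nr_cpus, slot);
    union bpf_attr attr;
    unsigned int i;
    int ret;

    if (!values)
      return -1;

    /* fill every CPU's slot; one update call writes them all at once */
    for (i = 0; i < nr_cpus; i++)
      *(long *)((char *)values + i * slot) = start_value;

    memset(&attr, 0, sizeof(attr));
    attr.map_fd = map_fd;
    attr.key = (uint64_t)(unsigned long)&key;
    attr.value = (uint64_t)(unsigned long)values;
    attr.flags = BPF_ANY;

    ret = syscall(__NR_bpf, BPF_MAP_UPDATE_ELEM, &attr, sizeof(attr));
    free(values);
    return ret;
  }

A single BPF_MAP_UPDATE_ELEM call with such a buffer replaces the value for
all CPUs, which is why the whole buffer must be populated before the syscall.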

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-06 03:34:36 -05:00
Makefile      bpf: add support for persistent maps/progs (2015-11-02 22:48:39 -05:00)
arraymap.c    bpf: add lookup/update support for per-cpu hash and array maps (2016-02-06 03:34:36 -05:00)
core.c        bpf: move clearing of A/X into classic to eBPF migration prologue (2015-12-18 16:04:51 -05:00)
hashtab.c     bpf: add lookup/update support for per-cpu hash and array maps (2016-02-06 03:34:36 -05:00)
helpers.c     bpf: split state from prandom_u32() and consolidate {c, e}BPF prngs (2015-10-08 05:26:39 -07:00)
inode.c       bpf, inode: allow for rename and link ops (2015-12-12 18:44:23 -05:00)
syscall.c     bpf: add lookup/update support for per-cpu hash and array maps (2016-02-06 03:34:36 -05:00)
verifier.c    net: bpf: reject invalid shifts (2016-01-12 17:06:53 -05:00)