
Merge branch 'bpftool-probes'

Quentin Monnet says:

====================
Hi,
This set adds a new command to bpftool in order to dump a list of
eBPF-related parameters for the system (or for a specific network
device) to the console. Once again, this is based on a suggestion from
Daniel.

At this time, output includes:

    - Availability of bpf() system call
    - Availability of bpf() system call for unprivileged users
    - JIT status (enabled or not, with or without debugging traces)
    - JIT hardening status
    - JIT kallsyms exports status
    - Global memory limit for JIT compiler for unprivileged users
    - Status of kernel compilation options related to BPF features
    - Availability of known eBPF program types
    - Availability of known eBPF map types
    - Availability of known eBPF helper functions

There are three different ways to dump this information at this time:

    - Plain output dumps probe results in plain text. It is the most
      flexible option for providing descriptive output to the user, but
      it should not be relied upon for parsing.
    - JSON output is supported.
    - A third mode, available through the "macros" keyword appended to the
      command line, dumps some of those parameters (not all) as a series of
      "#define" directives that can be included in a C header file, for
      example (see the illustration right after this list).
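
For illustration only (the macro names below follow the formats printed by
the new feature.c; the header file name and the FOO_ prefix are made up for
the example), the "macros" output could be consumed from C code as follows:

    /* Generated with: bpftool feature probe macros prefix FOO_ > bpf_features.h */
    #include "bpf_features.h"

    #ifdef FOO_HAVE_BPF_SYSCALL
    /* bpf() syscall is available */
    #endif

    #if FOO_HAVE_PROG_TYPE_HELPER(xdp, bpf_redirect)
    /* bpf_redirect() can be used from XDP programs */
    #endif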

Probes for supported program and map types, and supported helpers, are
directly added to libbpf, so that other applications (or selftests) can
reuse them as necessary.
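
As a minimal sketch of such reuse (not taken from the selftests; the include
paths and the particular checks are illustrative), an application could call
the new probe functions directly:

    #include <stdio.h>
    #include <linux/bpf.h>
    #include "libbpf.h"    /* tools/lib/bpf; may be <bpf/libbpf.h> once installed */

    int main(void)
    {
        /* Second argument is an ifindex: 0 probes the kernel itself,
         * a non-zero value probes hardware offload on that netdevice.
         */
        if (!bpf_probe_prog_type(BPF_PROG_TYPE_XDP, 0))
            fprintf(stderr, "XDP programs are not supported\n");
        if (!bpf_probe_map_type(BPF_MAP_TYPE_DEVMAP, 0))
            fprintf(stderr, "devmap is not supported\n");
        if (!bpf_probe_helper(BPF_FUNC_redirect_map, BPF_PROG_TYPE_XDP, 0))
            fprintf(stderr, "bpf_redirect_map() is not usable from XDP\n");
        return 0;
    }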

If the user does not have root privileges (or more precisely, the
CAP_SYS_ADMIN capability), detection will be erroneous for most
parameters. Therefore, forbid non-root users from running the command.

v5:
- Move exported symbols to a new LIBBPF_0.0.2 section in libbpf.map
  (patches 4 to 6).
- Minor fixes on patches 3 and 4.

v4:
- Probe bpf_jit_limit parameter (patch 2).
- Probe some additional kernel config options (patch 3).
- Minor fixes on patch 6.

v3:
- Do not probe kernel version in bpftool (just retrieve it to probe support
  for kprobes in libbpf).
- Change the way results for helper support are displayed: now one list of
  compatible helpers for each program type (and C-style output gets a
  HAVE_PROG_TYPE_HELPER(prog_type, helper) macro to help with tests). See
  patches 6 and 7.
- Address other comments from feedback from v2 (please refer to individual
  patches' history).

v2 (please also refer to individual patches' history):
- Move probes for prog/map types, helpers, from bpftool to libbpf.
- Move C-style output as a separate patch, and restrict it to a subset of
  collected information (bpf() availability, prog/map types, helpers).
- Now probe helpers with all supported program types, and display a list of
  compatible program types (as supported on the system) for each helper.
- NOT addressed: grouping compilation options for kernel into subsections
  (patch 3) (I don't see an easy way of grouping them at the moment, please
  see also the discussion on v1 thread).
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov 2019-01-22 22:15:41 -08:00
commit cbeaad9028
16 changed files with 1147 additions and 3 deletions

View File

@@ -142,5 +142,6 @@ SEE ALSO
**bpftool**\ (8),
**bpftool-prog**\ (8),
**bpftool-map**\ (8),
**bpftool-feature**\ (8),
**bpftool-net**\ (8),
**bpftool-perf**\ (8)

View File

@@ -0,0 +1,85 @@
===============
bpftool-feature
===============
-------------------------------------------------------------------------------
tool for inspection of eBPF-related parameters for Linux kernel or net device
-------------------------------------------------------------------------------
:Manual section: 8
SYNOPSIS
========
**bpftool** [*OPTIONS*] **feature** *COMMAND*
*OPTIONS* := { { **-j** | **--json** } [{ **-p** | **--pretty** }] }
*COMMANDS* := { **probe** | **help** }
FEATURE COMMANDS
================
| **bpftool** **feature probe** [*COMPONENT*] [**macros** [**prefix** *PREFIX*]]
| **bpftool** **feature help**
|
| *COMPONENT* := { **kernel** | **dev** *NAME* }
DESCRIPTION
===========
**bpftool feature probe** [**kernel**] [**macros** [**prefix** *PREFIX*]]
Probe the running kernel and dump a number of eBPF-related
parameters, such as availability of the **bpf()** system call,
JIT status, eBPF program types availability, eBPF helper
functions availability, and more.
If the **macros** keyword (but not the **-j** option) is
passed, a subset of the output is dumped as a list of
**#define** macros that are ready to be included in a C
header file, for example. If, additionally, **prefix** is
used to define a *PREFIX*, the provided string will be used
as a prefix to the names of the macros: this can be used to
avoid conflicts on macro names when including the output of
this command as a header file.
Keyword **kernel** can be omitted. If no probe target is
specified, probing the kernel is the default behaviour.
Note that when probed, some eBPF helpers (e.g.
**bpf_trace_printk**\ () or **bpf_probe_write_user**\ ()) may
print warnings to kernel logs.
**bpftool feature probe dev** *NAME* [**macros** [**prefix** *PREFIX*]]
Probe network device for supported eBPF features and dump
results to the console.
The two keywords **macros** and **prefix** have the same
role as when probing the kernel.
**bpftool feature help**
Print short help message.
OPTIONS
=======
-h, --help
Print short generic help message (similar to **bpftool help**).
-v, --version
Print version number (similar to **bpftool version**).
-j, --json
Generate JSON output. For commands that cannot produce JSON, this
option has no effect.
-p, --pretty
Generate human-readable JSON output. Implies **-j**.
SEE ALSO
========
**bpf**\ (2),
**bpf-helpers**\ (7),
**bpftool**\ (8),
**bpftool-prog**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
**bpftool-net**\ (8),
**bpftool-perf**\ (8)

View File

@@ -256,5 +256,6 @@ SEE ALSO
**bpftool**\ (8),
**bpftool-prog**\ (8),
**bpftool-cgroup**\ (8),
**bpftool-feature**\ (8),
**bpftool-net**\ (8),
**bpftool-perf**\ (8)

View File

@@ -142,4 +142,5 @@ SEE ALSO
**bpftool-prog**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
**bpftool-feature**\ (8),
**bpftool-perf**\ (8)

View File

@@ -84,4 +84,5 @@ SEE ALSO
**bpftool-prog**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
**bpftool-feature**\ (8),
**bpftool-net**\ (8)

View File

@@ -258,5 +258,6 @@ SEE ALSO
**bpftool**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
**bpftool-feature**\ (8),
**bpftool-net**\ (8),
**bpftool-perf**\ (8)

View File

@@ -72,5 +72,6 @@ SEE ALSO
**bpftool-prog**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
**bpftool-feature**\ (8),
**bpftool-net**\ (8),
**bpftool-perf**\ (8)

View File

@@ -679,6 +679,25 @@ _bpftool()
;;
esac
;;
feature)
case $command in
probe)
[[ $prev == "dev" ]] && _sysfs_get_netdevs && return 0
[[ $prev == "prefix" ]] && return 0
if _bpftool_search_list 'macros'; then
COMPREPLY+=( $( compgen -W 'prefix' -- "$cur" ) )
else
COMPREPLY+=( $( compgen -W 'macros' -- "$cur" ) )
fi
_bpftool_one_of_list 'kernel dev'
return 0
;;
*)
[[ $prev == $object ]] && \
COMPREPLY=( $( compgen -W 'help probe' -- "$cur" ) )
;;
esac
;;
esac
} &&
complete -F _bpftool bpftool

View File

@@ -0,0 +1,764 @@
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
/* Copyright (c) 2019 Netronome Systems, Inc. */
#include <ctype.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/utsname.h>
#include <sys/vfs.h>
#include <linux/filter.h>
#include <linux/limits.h>
#include <bpf.h>
#include <libbpf.h>
#include "main.h"
#ifndef PROC_SUPER_MAGIC
# define PROC_SUPER_MAGIC 0x9fa0
#endif
enum probe_component {
COMPONENT_UNSPEC,
COMPONENT_KERNEL,
COMPONENT_DEVICE,
};
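/* Build a table mapping each BPF_FUNC_* id to its "bpf_<name>" string, using the kernel's __BPF_FUNC_MAPPER X-macro */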
#define BPF_HELPER_MAKE_ENTRY(name) [BPF_FUNC_ ## name] = "bpf_" # name
static const char * const helper_name[] = {
__BPF_FUNC_MAPPER(BPF_HELPER_MAKE_ENTRY)
};
#undef BPF_HELPER_MAKE_ENTRY
/* Miscellaneous utility functions */
static bool check_procfs(void)
{
struct statfs st_fs;
if (statfs("/proc", &st_fs) < 0)
return false;
if ((unsigned long)st_fs.f_type != PROC_SUPER_MAGIC)
return false;
return true;
}
static void uppercase(char *str, size_t len)
{
size_t i;
for (i = 0; i < len && str[i] != '\0'; i++)
str[i] = toupper(str[i]);
}
/* Printing utility functions */
static void
print_bool_feature(const char *feat_name, const char *plain_name,
const char *define_name, bool res, const char *define_prefix)
{
if (json_output)
jsonw_bool_field(json_wtr, feat_name, res);
else if (define_prefix)
printf("#define %s%sHAVE_%s\n", define_prefix,
res ? "" : "NO_", define_name);
else
printf("%s is %savailable\n", plain_name, res ? "" : "NOT ");
}
static void print_kernel_option(const char *name, const char *value)
{
char *endptr;
int res;
/* No support for C-style output */
if (json_output) {
if (!value) {
jsonw_null_field(json_wtr, name);
return;
}
errno = 0;
res = strtol(value, &endptr, 0);
if (!errno && *endptr == '\n')
jsonw_int_field(json_wtr, name, res);
else
jsonw_string_field(json_wtr, name, value);
} else {
if (value)
printf("%s is set to %s\n", name, value);
else
printf("%s is not set\n", name);
}
}
static void
print_start_section(const char *json_title, const char *plain_title,
const char *define_comment, const char *define_prefix)
{
if (json_output) {
jsonw_name(json_wtr, json_title);
jsonw_start_object(json_wtr);
} else if (define_prefix) {
printf("%s\n", define_comment);
} else {
printf("%s\n", plain_title);
}
}
static void
print_end_then_start_section(const char *json_title, const char *plain_title,
const char *define_comment,
const char *define_prefix)
{
if (json_output)
jsonw_end_object(json_wtr);
else
printf("\n");
print_start_section(json_title, plain_title, define_comment,
define_prefix);
}
/* Probing functions */
static int read_procfs(const char *path)
{
char *endptr, *line = NULL;
size_t len = 0;
FILE *fd;
int res;
fd = fopen(path, "r");
if (!fd)
return -1;
res = getline(&line, &len, fd);
fclose(fd);
if (res < 0)
return -1;
errno = 0;
res = strtol(line, &endptr, 10);
if (errno || *line == '\0' || *endptr != '\n')
res = -1;
free(line);
return res;
}
static void probe_unprivileged_disabled(void)
{
int res;
/* No support for C-style output */
res = read_procfs("/proc/sys/kernel/unprivileged_bpf_disabled");
if (json_output) {
jsonw_int_field(json_wtr, "unprivileged_bpf_disabled", res);
} else {
switch (res) {
case 0:
printf("bpf() syscall for unprivileged users is enabled\n");
break;
case 1:
printf("bpf() syscall restricted to privileged users\n");
break;
case -1:
printf("Unable to retrieve required privileges for bpf() syscall\n");
break;
default:
printf("bpf() syscall restriction has unknown value %d\n", res);
}
}
}
static void probe_jit_enable(void)
{
int res;
/* No support for C-style output */
res = read_procfs("/proc/sys/net/core/bpf_jit_enable");
if (json_output) {
jsonw_int_field(json_wtr, "bpf_jit_enable", res);
} else {
switch (res) {
case 0:
printf("JIT compiler is disabled\n");
break;
case 1:
printf("JIT compiler is enabled\n");
break;
case 2:
printf("JIT compiler is enabled with debugging traces in kernel logs\n");
break;
case -1:
printf("Unable to retrieve JIT-compiler status\n");
break;
default:
printf("JIT-compiler status has unknown value %d\n",
res);
}
}
}
static void probe_jit_harden(void)
{
int res;
/* No support for C-style output */
res = read_procfs("/proc/sys/net/core/bpf_jit_harden");
if (json_output) {
jsonw_int_field(json_wtr, "bpf_jit_harden", res);
} else {
switch (res) {
case 0:
printf("JIT compiler hardening is disabled\n");
break;
case 1:
printf("JIT compiler hardening is enabled for unprivileged users\n");
break;
case 2:
printf("JIT compiler hardening is enabled for all users\n");
break;
case -1:
printf("Unable to retrieve JIT hardening status\n");
break;
default:
printf("JIT hardening status has unknown value %d\n",
res);
}
}
}
static void probe_jit_kallsyms(void)
{
int res;
/* No support for C-style output */
res = read_procfs("/proc/sys/net/core/bpf_jit_kallsyms");
if (json_output) {
jsonw_int_field(json_wtr, "bpf_jit_kallsyms", res);
} else {
switch (res) {
case 0:
printf("JIT compiler kallsyms exports are disabled\n");
break;
case 1:
printf("JIT compiler kallsyms exports are enabled for root\n");
break;
case -1:
printf("Unable to retrieve JIT kallsyms export status\n");
break;
default:
printf("JIT kallsyms exports status has unknown value %d\n", res);
}
}
}
static void probe_jit_limit(void)
{
int res;
/* No support for C-style output */
res = read_procfs("/proc/sys/net/core/bpf_jit_limit");
if (json_output) {
jsonw_int_field(json_wtr, "bpf_jit_limit", res);
} else {
switch (res) {
case -1:
printf("Unable to retrieve global memory limit for JIT compiler for unprivileged users\n");
break;
default:
printf("Global memory limit for JIT compiler for unprivileged users is %d bytes\n", res);
}
}
}
static char *get_kernel_config_option(FILE *fd, const char *option)
{
size_t line_n = 0, optlen = strlen(option);
char *res, *strval, *line = NULL;
ssize_t n;
rewind(fd);
while ((n = getline(&line, &line_n, fd)) > 0) {
if (strncmp(line, option, optlen))
continue;
/* Check we have at least '=', value, and '\n' */
if (strlen(line) < optlen + 3)
continue;
if (*(line + optlen) != '=')
continue;
/* Trim ending '\n' */
line[strlen(line) - 1] = '\0';
/* Copy and return config option value */
strval = line + optlen + 1;
res = strdup(strval);
free(line);
return res;
}
free(line);
return NULL;
}
static void probe_kernel_image_config(void)
{
static const char * const options[] = {
/* Enable BPF */
"CONFIG_BPF",
/* Enable bpf() syscall */
"CONFIG_BPF_SYSCALL",
/* Does selected architecture support eBPF JIT compiler */
"CONFIG_HAVE_EBPF_JIT",
/* Compile eBPF JIT compiler */
"CONFIG_BPF_JIT",
/* Avoid compiling eBPF interpreter (use JIT only) */
"CONFIG_BPF_JIT_ALWAYS_ON",
/* cgroups */
"CONFIG_CGROUPS",
/* BPF programs attached to cgroups */
"CONFIG_CGROUP_BPF",
/* bpf_get_cgroup_classid() helper */
"CONFIG_CGROUP_NET_CLASSID",
/* bpf_skb_{,ancestor_}cgroup_id() helpers */
"CONFIG_SOCK_CGROUP_DATA",
/* Tracing: attach BPF to kprobes, tracepoints, etc. */
"CONFIG_BPF_EVENTS",
/* Kprobes */
"CONFIG_KPROBE_EVENTS",
/* Uprobes */
"CONFIG_UPROBE_EVENTS",
/* Tracepoints */
"CONFIG_TRACING",
/* Syscall tracepoints */
"CONFIG_FTRACE_SYSCALLS",
/* bpf_override_return() helper support for selected arch */
"CONFIG_FUNCTION_ERROR_INJECTION",
/* bpf_override_return() helper */
"CONFIG_BPF_KPROBE_OVERRIDE",
/* Network */
"CONFIG_NET",
/* AF_XDP sockets */
"CONFIG_XDP_SOCKETS",
/* BPF_PROG_TYPE_LWT_* and related helpers */
"CONFIG_LWTUNNEL_BPF",
/* BPF_PROG_TYPE_SCHED_ACT, TC (traffic control) actions */
"CONFIG_NET_ACT_BPF",
/* BPF_PROG_TYPE_SCHED_CLS, TC filters */
"CONFIG_NET_CLS_BPF",
/* TC clsact qdisc */
"CONFIG_NET_CLS_ACT",
/* Ingress filtering with TC */
"CONFIG_NET_SCH_INGRESS",
/* bpf_skb_get_xfrm_state() helper */
"CONFIG_XFRM",
/* bpf_get_route_realm() helper */
"CONFIG_IP_ROUTE_CLASSID",
/* BPF_PROG_TYPE_LWT_SEG6_LOCAL and related helpers */
"CONFIG_IPV6_SEG6_BPF",
/* BPF_PROG_TYPE_LIRC_MODE2 and related helpers */
"CONFIG_BPF_LIRC_MODE2",
/* BPF stream parser and BPF socket maps */
"CONFIG_BPF_STREAM_PARSER",
/* xt_bpf module for passing BPF programs to netfilter */
"CONFIG_NETFILTER_XT_MATCH_BPF",
/* bpfilter back-end for iptables */
"CONFIG_BPFILTER",
/* bpfilter module with "user mode helper" */
"CONFIG_BPFILTER_UMH",
/* test_bpf module for BPF tests */
"CONFIG_TEST_BPF",
};
char *value, *buf = NULL;
struct utsname utsn;
char path[PATH_MAX];
size_t i, n;
ssize_t ret;
FILE *fd;
if (uname(&utsn))
goto no_config;
snprintf(path, sizeof(path), "/boot/config-%s", utsn.release);
fd = fopen(path, "r");
if (!fd && errno == ENOENT) {
/* Some distributions put the config file at /proc/config, give
* it a try.
* Sometimes it is also at /proc/config.gz but we do not try
* this one for now, it would require linking against libz.
*/
fd = fopen("/proc/config", "r");
}
if (!fd) {
p_info("skipping kernel config, can't open file: %s",
strerror(errno));
goto no_config;
}
/* Sanity checks */
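/* Kernel config files start with a lone "#" line followed by the "Automatically generated" banner: read two lines so the check below sees the banner */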
ret = getline(&buf, &n, fd);
ret = getline(&buf, &n, fd);
if (!buf || !ret) {
p_info("skipping kernel config, can't read from file: %s",
strerror(errno));
free(buf);
goto no_config;
}
if (strcmp(buf, "# Automatically generated file; DO NOT EDIT.\n")) {
p_info("skipping kernel config, can't find correct file");
free(buf);
goto no_config;
}
free(buf);
for (i = 0; i < ARRAY_SIZE(options); i++) {
value = get_kernel_config_option(fd, options[i]);
print_kernel_option(options[i], value);
free(value);
}
fclose(fd);
return;
no_config:
for (i = 0; i < ARRAY_SIZE(options); i++)
print_kernel_option(options[i], NULL);
}
static bool probe_bpf_syscall(const char *define_prefix)
{
bool res;
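/* Loading a program with invalid type BPF_PROG_TYPE_UNSPEC is expected to fail, but it fails with ENOSYS only if the bpf() syscall itself is missing */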
bpf_load_program(BPF_PROG_TYPE_UNSPEC, NULL, 0, NULL, 0, NULL, 0);
res = (errno != ENOSYS);
print_bool_feature("have_bpf_syscall",
"bpf() syscall",
"BPF_SYSCALL",
res, define_prefix);
return res;
}
static void
probe_prog_type(enum bpf_prog_type prog_type, bool *supported_types,
const char *define_prefix, __u32 ifindex)
{
char feat_name[128], plain_desc[128], define_name[128];
const char *plain_comment = "eBPF program_type ";
size_t maxlen;
bool res;
if (ifindex)
/* Only test offload-able program types */
switch (prog_type) {
case BPF_PROG_TYPE_SCHED_CLS:
case BPF_PROG_TYPE_XDP:
break;
default:
return;
}
res = bpf_probe_prog_type(prog_type, ifindex);
supported_types[prog_type] |= res;
maxlen = sizeof(plain_desc) - strlen(plain_comment) - 1;
if (strlen(prog_type_name[prog_type]) > maxlen) {
p_info("program type name too long");
return;
}
sprintf(feat_name, "have_%s_prog_type", prog_type_name[prog_type]);
sprintf(define_name, "%s_prog_type", prog_type_name[prog_type]);
uppercase(define_name, sizeof(define_name));
sprintf(plain_desc, "%s%s", plain_comment, prog_type_name[prog_type]);
print_bool_feature(feat_name, plain_desc, define_name, res,
define_prefix);
}
static void
probe_map_type(enum bpf_map_type map_type, const char *define_prefix,
__u32 ifindex)
{
char feat_name[128], plain_desc[128], define_name[128];
const char *plain_comment = "eBPF map_type ";
size_t maxlen;
bool res;
res = bpf_probe_map_type(map_type, ifindex);
maxlen = sizeof(plain_desc) - strlen(plain_comment) - 1;
if (strlen(map_type_name[map_type]) > maxlen) {
p_info("map type name too long");
return;
}
sprintf(feat_name, "have_%s_map_type", map_type_name[map_type]);
sprintf(define_name, "%s_map_type", map_type_name[map_type]);
uppercase(define_name, sizeof(define_name));
sprintf(plain_desc, "%s%s", plain_comment, map_type_name[map_type]);
print_bool_feature(feat_name, plain_desc, define_name, res,
define_prefix);
}
static void
probe_helpers_for_progtype(enum bpf_prog_type prog_type, bool supported_type,
const char *define_prefix, __u32 ifindex)
{
const char *ptype_name = prog_type_name[prog_type];
char feat_name[128];
unsigned int id;
bool res;
if (ifindex)
/* Only test helpers for offload-able program types */
switch (prog_type) {
case BPF_PROG_TYPE_SCHED_CLS:
case BPF_PROG_TYPE_XDP:
break;
default:
return;
}
if (json_output) {
sprintf(feat_name, "%s_available_helpers", ptype_name);
jsonw_name(json_wtr, feat_name);
jsonw_start_array(json_wtr);
} else if (!define_prefix) {
printf("eBPF helpers supported for program type %s:",
ptype_name);
}
for (id = 1; id < ARRAY_SIZE(helper_name); id++) {
if (!supported_type)
res = false;
else
res = bpf_probe_helper(id, prog_type, ifindex);
if (json_output) {
if (res)
jsonw_string(json_wtr, helper_name[id]);
} else if (define_prefix) {
printf("#define %sBPF__PROG_TYPE_%s__HELPER_%s %s\n",
define_prefix, ptype_name, helper_name[id],
res ? "1" : "0");
} else {
if (res)
printf("\n\t- %s", helper_name[id]);
}
}
if (json_output)
jsonw_end_array(json_wtr);
else if (!define_prefix)
printf("\n");
}
static int do_probe(int argc, char **argv)
{
enum probe_component target = COMPONENT_UNSPEC;
const char *define_prefix = NULL;
bool supported_types[128] = {};
__u32 ifindex = 0;
unsigned int i;
char *ifname;
/* Detection assumes user has sufficient privileges (CAP_SYS_ADMIN).
* Let's approximate, and restrict usage to root user only.
*/
if (geteuid()) {
p_err("please run this command as root user");
return -1;
}
set_max_rlimit();
while (argc) {
if (is_prefix(*argv, "kernel")) {
if (target != COMPONENT_UNSPEC) {
p_err("component to probe already specified");
return -1;
}
target = COMPONENT_KERNEL;
NEXT_ARG();
} else if (is_prefix(*argv, "dev")) {
NEXT_ARG();
if (target != COMPONENT_UNSPEC || ifindex) {
p_err("component to probe already specified");
return -1;
}
if (!REQ_ARGS(1))
return -1;
target = COMPONENT_DEVICE;
ifname = GET_ARG();
ifindex = if_nametoindex(ifname);
if (!ifindex) {
p_err("unrecognized netdevice '%s': %s", ifname,
strerror(errno));
return -1;
}
} else if (is_prefix(*argv, "macros") && !define_prefix) {
define_prefix = "";
NEXT_ARG();
} else if (is_prefix(*argv, "prefix")) {
if (!define_prefix) {
p_err("'prefix' argument can only be use after 'macros'");
return -1;
}
if (strcmp(define_prefix, "")) {
p_err("'prefix' already defined");
return -1;
}
NEXT_ARG();
if (!REQ_ARGS(1))
return -1;
define_prefix = GET_ARG();
} else {
p_err("expected no more arguments, 'kernel', 'dev', 'macros' or 'prefix', got: '%s'?",
*argv);
return -1;
}
}
if (json_output) {
define_prefix = NULL;
jsonw_start_object(json_wtr);
}
switch (target) {
case COMPONENT_KERNEL:
case COMPONENT_UNSPEC:
if (define_prefix)
break;
print_start_section("system_config",
"Scanning system configuration...",
NULL, /* define_comment never used here */
NULL); /* define_prefix always NULL here */
if (check_procfs()) {
probe_unprivileged_disabled();
probe_jit_enable();
probe_jit_harden();
probe_jit_kallsyms();
probe_jit_limit();
} else {
p_info("/* procfs not mounted, skipping related probes */");
}
probe_kernel_image_config();
if (json_output)
jsonw_end_object(json_wtr);
else
printf("\n");
break;
default:
break;
}
print_start_section("syscall_config",
"Scanning system call availability...",
"/*** System call availability ***/",
define_prefix);
if (!probe_bpf_syscall(define_prefix))
/* bpf() syscall unavailable, don't probe other BPF features */
goto exit_close_json;
print_end_then_start_section("program_types",
"Scanning eBPF program types...",
"/*** eBPF program types ***/",
define_prefix);
for (i = BPF_PROG_TYPE_UNSPEC + 1; i < ARRAY_SIZE(prog_type_name); i++)
probe_prog_type(i, supported_types, define_prefix, ifindex);
print_end_then_start_section("map_types",
"Scanning eBPF map types...",
"/*** eBPF map types ***/",
define_prefix);
for (i = BPF_MAP_TYPE_UNSPEC + 1; i < map_type_name_size; i++)
probe_map_type(i, define_prefix, ifindex);
print_end_then_start_section("helpers",
"Scanning eBPF helper functions...",
"/*** eBPF helper functions ***/",
define_prefix);
if (define_prefix)
printf("/*\n"
" * Use %sHAVE_PROG_TYPE_HELPER(prog_type_name, helper_name)\n"
" * to determine if <helper_name> is available for <prog_type_name>,\n"
" * e.g.\n"
" * #if %sHAVE_PROG_TYPE_HELPER(xdp, bpf_redirect)\n"
" * // do stuff with this helper\n"
" * #elif\n"
" * // use a workaround\n"
" * #endif\n"
" */\n"
"#define %sHAVE_PROG_TYPE_HELPER(prog_type, helper) \\\n"
" %sBPF__PROG_TYPE_ ## prog_type ## __HELPER_ ## helper\n",
define_prefix, define_prefix, define_prefix,
define_prefix);
for (i = BPF_PROG_TYPE_UNSPEC + 1; i < ARRAY_SIZE(prog_type_name); i++)
probe_helpers_for_progtype(i, supported_types[i],
define_prefix, ifindex);
exit_close_json:
if (json_output) {
/* End current "section" of probes */
jsonw_end_object(json_wtr);
/* End root object */
jsonw_end_object(json_wtr);
}
return 0;
}
static int do_help(int argc, char **argv)
{
if (json_output) {
jsonw_null(json_wtr);
return 0;
}
fprintf(stderr,
"Usage: %s %s probe [COMPONENT] [macros [prefix PREFIX]]\n"
" %s %s help\n"
"\n"
" COMPONENT := { kernel | dev NAME }\n"
"",
bin_name, argv[-2], bin_name, argv[-2]);
return 0;
}
static const struct cmd cmds[] = {
{ "help", do_help },
{ "probe", do_probe },
{ 0 }
};
int do_feature(int argc, char **argv)
{
return cmd_select(cmds, argc, argv, do_help);
}

View File

@@ -56,7 +56,7 @@ static int do_help(int argc, char **argv)
" %s batch file FILE\n"
" %s version\n"
"\n"
" OBJECT := { prog | map | cgroup | perf | net }\n"
" OBJECT := { prog | map | cgroup | perf | net | feature }\n"
" " HELP_SPEC_OPTIONS "\n"
"",
bin_name, bin_name, bin_name);
@@ -187,6 +187,7 @@ static const struct cmd cmds[] = {
{ "cgroup", do_cgroup },
{ "perf", do_perf },
{ "net", do_net },
{ "feature", do_feature },
{ "version", do_version },
{ 0 }
};

View File

@@ -75,6 +75,9 @@ static const char * const prog_type_name[] = {
[BPF_PROG_TYPE_FLOW_DISSECTOR] = "flow_dissector",
};
extern const char * const map_type_name[];
extern const size_t map_type_name_size;
enum bpf_obj_type {
BPF_OBJ_UNKNOWN,
BPF_OBJ_PROG,
@@ -145,6 +148,7 @@ int do_cgroup(int argc, char **arg);
int do_perf(int argc, char **arg);
int do_net(int argc, char **arg);
int do_tracelog(int argc, char **arg);
int do_feature(int argc, char **argv);
int parse_u32_arg(int *argc, char ***argv, __u32 *val, const char *what);
int prog_parse_fd(int *argc, char ***argv);

View File

@@ -21,7 +21,7 @@
#include "json_writer.h"
#include "main.h"
static const char * const map_type_name[] = {
const char * const map_type_name[] = {
[BPF_MAP_TYPE_UNSPEC] = "unspec",
[BPF_MAP_TYPE_HASH] = "hash",
[BPF_MAP_TYPE_ARRAY] = "array",
@@ -48,6 +48,8 @@ static const char * const map_type_name[] = {
[BPF_MAP_TYPE_STACK] = "stack",
};
const size_t map_type_name_size = ARRAY_SIZE(map_type_name);
static bool map_is_per_cpu(__u32 type)
{
return type == BPF_MAP_TYPE_PERCPU_HASH ||

View File

@@ -1 +1 @@
libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o netlink.o bpf_prog_linfo.o
libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o netlink.o bpf_prog_linfo.o libbpf_probes.o

View File

@@ -355,6 +355,20 @@ LIBBPF_API const struct bpf_line_info *
bpf_prog_linfo__lfind(const struct bpf_prog_linfo *prog_linfo,
__u32 insn_off, __u32 nr_skip);
/*
* Probe for supported system features
*
* Note that running many of these probes in a short amount of time can cause
* the kernel to reach the maximal size of lockable memory allowed for the
* user, causing subsequent probes to fail. In this case, the caller may want
* to adjust that limit with setrlimit().
*/
LIBBPF_API bool bpf_probe_prog_type(enum bpf_prog_type prog_type,
__u32 ifindex);
LIBBPF_API bool bpf_probe_map_type(enum bpf_map_type map_type, __u32 ifindex);
LIBBPF_API bool bpf_probe_helper(enum bpf_func_id id,
enum bpf_prog_type prog_type, __u32 ifindex);
#ifdef __cplusplus
} /* extern "C" */
#endif
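
A caller concerned by the note above can raise the locked-memory limit before
a probing loop; a rough sketch (RLIMIT_MEMLOCK is the limit that BPF
allocations are charged against; this mirrors bpftool, which calls
set_max_rlimit() before probing):

    #include <sys/resource.h>

    /* Best-effort: raise the locked-memory limit so that many probe loads
     * in a row do not start failing once the limit is reached.
     */
    static void bump_memlock_limit(void)
    {
        struct rlimit rlim = { RLIM_INFINITY, RLIM_INFINITY };

        setrlimit(RLIMIT_MEMLOCK, &rlim);
    }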

View File

@@ -124,3 +124,10 @@ LIBBPF_0.0.1 {
local:
*;
};
LIBBPF_0.0.2 {
global:
bpf_probe_helper;
bpf_probe_map_type;
bpf_probe_prog_type;
} LIBBPF_0.0.1;

View File

@@ -0,0 +1,242 @@
// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
/* Copyright (c) 2019 Netronome Systems, Inc. */
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/utsname.h>
#include <linux/filter.h>
#include <linux/kernel.h>
#include "bpf.h"
#include "libbpf.h"
static bool grep(const char *buffer, const char *pattern)
{
return !!strstr(buffer, pattern);
}
static int get_vendor_id(int ifindex)
{
char ifname[IF_NAMESIZE], path[64], buf[8];
ssize_t len;
int fd;
if (!if_indextoname(ifindex, ifname))
return -1;
snprintf(path, sizeof(path), "/sys/class/net/%s/device/vendor", ifname);
fd = open(path, O_RDONLY);
if (fd < 0)
return -1;
len = read(fd, buf, sizeof(buf));
close(fd);
if (len < 0)
return -1;
if (len >= (ssize_t)sizeof(buf))
return -1;
buf[len] = '\0';
return strtol(buf, NULL, 0);
}
static int get_kernel_version(void)
{
int version, subversion, patchlevel;
struct utsname utsn;
/* Return 0 on failure, and attempt to probe with empty kversion */
if (uname(&utsn))
return 0;
if (sscanf(utsn.release, "%d.%d.%d",
&version, &subversion, &patchlevel) != 3)
return 0;
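/* Pack the version in the same format as the KERNEL_VERSION() macro */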
return (version << 16) + (subversion << 8) + patchlevel;
}
static void
probe_load(enum bpf_prog_type prog_type, const struct bpf_insn *insns,
size_t insns_cnt, char *buf, size_t buf_len, __u32 ifindex)
{
struct bpf_load_program_attr xattr = {};
int fd;
switch (prog_type) {
case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
xattr.expected_attach_type = BPF_CGROUP_INET4_CONNECT;
break;
case BPF_PROG_TYPE_KPROBE:
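/* Kernels may refuse to load kprobe programs unless the running kernel version is passed */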
xattr.kern_version = get_kernel_version();
break;
case BPF_PROG_TYPE_UNSPEC:
case BPF_PROG_TYPE_SOCKET_FILTER:
case BPF_PROG_TYPE_SCHED_CLS:
case BPF_PROG_TYPE_SCHED_ACT:
case BPF_PROG_TYPE_TRACEPOINT:
case BPF_PROG_TYPE_XDP:
case BPF_PROG_TYPE_PERF_EVENT:
case BPF_PROG_TYPE_CGROUP_SKB:
case BPF_PROG_TYPE_CGROUP_SOCK:
case BPF_PROG_TYPE_LWT_IN:
case BPF_PROG_TYPE_LWT_OUT:
case BPF_PROG_TYPE_LWT_XMIT:
case BPF_PROG_TYPE_SOCK_OPS:
case BPF_PROG_TYPE_SK_SKB:
case BPF_PROG_TYPE_CGROUP_DEVICE:
case BPF_PROG_TYPE_SK_MSG:
case BPF_PROG_TYPE_RAW_TRACEPOINT:
case BPF_PROG_TYPE_LWT_SEG6LOCAL:
case BPF_PROG_TYPE_LIRC_MODE2:
case BPF_PROG_TYPE_SK_REUSEPORT:
case BPF_PROG_TYPE_FLOW_DISSECTOR:
default:
break;
}
xattr.prog_type = prog_type;
xattr.insns = insns;
xattr.insns_cnt = insns_cnt;
xattr.license = "GPL";
xattr.prog_ifindex = ifindex;
fd = bpf_load_program_xattr(&xattr, buf, buf_len);
if (fd >= 0)
close(fd);
}
bool bpf_probe_prog_type(enum bpf_prog_type prog_type, __u32 ifindex)
{
struct bpf_insn insns[2] = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN()
};
if (ifindex && prog_type == BPF_PROG_TYPE_SCHED_CLS)
/* nfp returns -EINVAL on exit(0) with TC offload */
insns[0].imm = 2;
errno = 0;
probe_load(prog_type, insns, ARRAY_SIZE(insns), NULL, 0, ifindex);
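/* A failure other than EINVAL or EOPNOTSUPP still means the kernel knows this program type */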
return errno != EINVAL && errno != EOPNOTSUPP;
}
bool bpf_probe_map_type(enum bpf_map_type map_type, __u32 ifindex)
{
int key_size, value_size, max_entries, map_flags;
struct bpf_create_map_attr attr = {};
int fd = -1, fd_inner;
key_size = sizeof(__u32);
value_size = sizeof(__u32);
max_entries = 1;
map_flags = 0;
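/* Some map types require specific key/value sizes, flags or max_entries; override the generic values set above for those */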
switch (map_type) {
case BPF_MAP_TYPE_STACK_TRACE:
value_size = sizeof(__u64);
break;
case BPF_MAP_TYPE_LPM_TRIE:
key_size = sizeof(__u64);
value_size = sizeof(__u64);
map_flags = BPF_F_NO_PREALLOC;
break;
case BPF_MAP_TYPE_CGROUP_STORAGE:
case BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE:
key_size = sizeof(struct bpf_cgroup_storage_key);
value_size = sizeof(__u64);
max_entries = 0;
break;
case BPF_MAP_TYPE_QUEUE:
case BPF_MAP_TYPE_STACK:
key_size = 0;
break;
case BPF_MAP_TYPE_UNSPEC:
case BPF_MAP_TYPE_HASH:
case BPF_MAP_TYPE_ARRAY:
case BPF_MAP_TYPE_PROG_ARRAY:
case BPF_MAP_TYPE_PERF_EVENT_ARRAY:
case BPF_MAP_TYPE_PERCPU_HASH:
case BPF_MAP_TYPE_PERCPU_ARRAY:
case BPF_MAP_TYPE_CGROUP_ARRAY:
case BPF_MAP_TYPE_LRU_HASH:
case BPF_MAP_TYPE_LRU_PERCPU_HASH:
case BPF_MAP_TYPE_ARRAY_OF_MAPS:
case BPF_MAP_TYPE_HASH_OF_MAPS:
case BPF_MAP_TYPE_DEVMAP:
case BPF_MAP_TYPE_SOCKMAP:
case BPF_MAP_TYPE_CPUMAP:
case BPF_MAP_TYPE_XSKMAP:
case BPF_MAP_TYPE_SOCKHASH:
case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY:
default:
break;
}
if (map_type == BPF_MAP_TYPE_ARRAY_OF_MAPS ||
map_type == BPF_MAP_TYPE_HASH_OF_MAPS) {
/* TODO: probe for device, once libbpf has a function to create
* map-in-map for offload
*/
if (ifindex)
return false;
fd_inner = bpf_create_map(BPF_MAP_TYPE_HASH,
sizeof(__u32), sizeof(__u32), 1, 0);
if (fd_inner < 0)
return false;
fd = bpf_create_map_in_map(map_type, NULL, sizeof(__u32),
fd_inner, 1, 0);
close(fd_inner);
} else {
/* Note: No other restriction on map type probes for offload */
attr.map_type = map_type;
attr.key_size = key_size;
attr.value_size = value_size;
attr.max_entries = max_entries;
attr.map_flags = map_flags;
attr.map_ifindex = ifindex;
fd = bpf_create_map_xattr(&attr);
}
if (fd >= 0)
close(fd);
return fd >= 0;
}
bool bpf_probe_helper(enum bpf_func_id id, enum bpf_prog_type prog_type,
__u32 ifindex)
{
struct bpf_insn insns[2] = {
BPF_EMIT_CALL(id),
BPF_EXIT_INSN()
};
char buf[4096] = {};
bool res;
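/* Load a minimal program calling the helper; buf receives the verifier log, which names unknown or invalid helper calls */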
probe_load(prog_type, insns, ARRAY_SIZE(insns), buf, sizeof(buf),
ifindex);
res = !grep(buf, "invalid func ") && !grep(buf, "unknown func ");
if (ifindex) {
switch (get_vendor_id(ifindex)) {
case 0x19ee: /* Netronome specific */
res = res && !grep(buf, "not supported by FW") &&
!grep(buf, "unsupported function id");
break;
default:
break;
}
}
return res;
}