linux_dsm_epyc7002/tools/bpf/bpftool/prog.c


// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
/* Copyright (C) 2017-2018 Netronome Systems, Inc. */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <linux/err.h>
#include <linux/perf_event.h>
#include <linux/sizes.h>
#include <bpf/bpf.h>
#include <bpf/btf.h>
#include <bpf/libbpf.h>
#include "cfg.h"
#include "main.h"
#include "xlated_dumper.h"

enum dump_mode {
	DUMP_JITED,
	DUMP_XLATED,
};

static const char * const attach_type_strings[] = {
	[BPF_SK_SKB_STREAM_PARSER] = "stream_parser",
	[BPF_SK_SKB_STREAM_VERDICT] = "stream_verdict",
	[BPF_SK_MSG_VERDICT] = "msg_verdict",
	[BPF_FLOW_DISSECTOR] = "flow_dissector",
	[__MAX_BPF_ATTACH_TYPE] = NULL,
};
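
/*
 * Map an (optionally abbreviated) attach type keyword to its enum value.
 * Returns __MAX_BPF_ATTACH_TYPE if no entry in attach_type_strings matches.
 */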
static enum bpf_attach_type parse_attach_type(const char *str)
{
	enum bpf_attach_type type;

	for (type = 0; type < __MAX_BPF_ATTACH_TYPE; type++) {
		if (attach_type_strings[type] &&
		    is_prefix(str, attach_type_strings[type]))
			return type;
	}

	return __MAX_BPF_ATTACH_TYPE;
}
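
/*
 * Convert a program load time, expressed in nanoseconds since boot, into a
 * wall-clock timestamp string. Falls back to printing the raw second count
 * if the clocks cannot be read or localtime_r() fails.
 */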
static void print_boot_time(__u64 nsecs, char *buf, unsigned int size)
{
	struct timespec real_time_ts, boot_time_ts;
	time_t wallclock_secs;
	struct tm load_tm;

	buf[--size] = '\0';

	if (clock_gettime(CLOCK_REALTIME, &real_time_ts) ||
	    clock_gettime(CLOCK_BOOTTIME, &boot_time_ts)) {
		perror("Can't read clocks");
		snprintf(buf, size, "%llu", nsecs / 1000000000);
		return;
	}

	wallclock_secs = (real_time_ts.tv_sec - boot_time_ts.tv_sec) +
		(real_time_ts.tv_nsec - boot_time_ts.tv_nsec + nsecs) /
		1000000000;

	if (!localtime_r(&wallclock_secs, &load_tm)) {
		snprintf(buf, size, "%llu", nsecs / 1000000000);
		return;
	}

	if (json_output)
		strftime(buf, size, "%s", &load_tm);
	else
		strftime(buf, size, "%FT%T%z", &load_tm);
}
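
/*
 * Walk all loaded programs and collect a file descriptor for every program
 * whose tag (tag == true) or name (tag == false) matches @nametag, growing
 * *fds as needed. Returns the number of fds collected, or -1 on error.
 */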
static int prog_fd_by_nametag(void *nametag, int **fds, bool tag)
{
	unsigned int id = 0;
	int fd, nb_fds = 0;
	void *tmp;
	int err;

	while (true) {
		struct bpf_prog_info info = {};
		__u32 len = sizeof(info);

		err = bpf_prog_get_next_id(id, &id);
		if (err) {
			if (errno != ENOENT) {
				p_err("%s", strerror(errno));
				goto err_close_fds;
			}
			return nb_fds;
		}

		fd = bpf_prog_get_fd_by_id(id);
		if (fd < 0) {
			p_err("can't get prog by id (%u): %s",
			      id, strerror(errno));
			goto err_close_fds;
		}

		err = bpf_obj_get_info_by_fd(fd, &info, &len);
		if (err) {
			p_err("can't get prog info (%u): %s",
			      id, strerror(errno));
			goto err_close_fd;
		}

		if ((tag && memcmp(nametag, info.tag, BPF_TAG_SIZE)) ||
		    (!tag && strncmp(nametag, info.name, BPF_OBJ_NAME_LEN))) {
			close(fd);
			continue;
		}

		if (nb_fds > 0) {
			tmp = realloc(*fds, (nb_fds + 1) * sizeof(int));
			if (!tmp) {
				p_err("failed to realloc");
				goto err_close_fd;
			}
			*fds = tmp;
		}
		(*fds)[nb_fds++] = fd;
	}

err_close_fd:
	close(fd);
err_close_fds:
	while (--nb_fds >= 0)
		close((*fds)[nb_fds]);
	return -1;
}
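
/*
 * Parse a program handle ("id", "tag", "name" or "pinned") from the command
 * line and fill *fds with the matching file descriptor(s). Returns the number
 * of fds stored, or -1 on error.
 */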
static int prog_parse_fds(int *argc, char ***argv, int **fds)
{
	if (is_prefix(**argv, "id")) {
		unsigned int id;
		char *endptr;

		NEXT_ARGP();

		id = strtoul(**argv, &endptr, 0);
		if (*endptr) {
			p_err("can't parse %s as ID", **argv);
			return -1;
		}
		NEXT_ARGP();

		(*fds)[0] = bpf_prog_get_fd_by_id(id);
		if ((*fds)[0] < 0) {
			p_err("get by id (%u): %s", id, strerror(errno));
			return -1;
		}
		return 1;
	} else if (is_prefix(**argv, "tag")) {
		unsigned char tag[BPF_TAG_SIZE];

		NEXT_ARGP();

		if (sscanf(**argv, BPF_TAG_FMT, tag, tag + 1, tag + 2,
			   tag + 3, tag + 4, tag + 5, tag + 6, tag + 7)
		    != BPF_TAG_SIZE) {
			p_err("can't parse tag");
			return -1;
		}
		NEXT_ARGP();

		return prog_fd_by_nametag(tag, fds, true);
	} else if (is_prefix(**argv, "name")) {
		char *name;

		NEXT_ARGP();

		name = **argv;
		if (strlen(name) > BPF_OBJ_NAME_LEN - 1) {
			p_err("can't parse name");
			return -1;
		}
		NEXT_ARGP();

		return prog_fd_by_nametag(name, fds, false);
	} else if (is_prefix(**argv, "pinned")) {
		char *path;

		NEXT_ARGP();

		path = **argv;
		NEXT_ARGP();

		(*fds)[0] = open_obj_pinned_any(path, BPF_OBJ_PROG);
		if ((*fds)[0] < 0)
			return -1;
		return 1;
	}

	p_err("expected 'id', 'tag', 'name' or 'pinned', got: '%s'?", **argv);
	return -1;
}
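
/*
 * Like prog_parse_fds(), but expects the handle to match exactly one program.
 * Returns its file descriptor, or -1 if there is no match or several matches.
 */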
int prog_parse_fd(int *argc, char ***argv)
{
	int *fds = NULL;
	int nb_fds, fd;

	fds = malloc(sizeof(int));
	if (!fds) {
		p_err("mem alloc failed");
		return -1;
	}
	nb_fds = prog_parse_fds(argc, argv, &fds);
	if (nb_fds != 1) {
		if (nb_fds > 1) {
			p_err("several programs match this handle");
			while (nb_fds--)
				close(fds[nb_fds]);
		}
		fd = -1;
		goto exit_free;
	}

	fd = fds[0];
exit_free:
	free(fds);
	return fd;
}
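
/* Print the IDs of the maps used by the program referenced by @fd. */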
static void show_prog_maps(int fd, __u32 num_maps)
{
	struct bpf_prog_info info = {};
	__u32 len = sizeof(info);
	__u32 map_ids[num_maps];
	unsigned int i;
	int err;

	info.nr_map_ids = num_maps;
	info.map_ids = ptr_to_u64(map_ids);

	err = bpf_obj_get_info_by_fd(fd, &info, &len);
	if (err || !info.nr_map_ids)
		return;

	if (json_output) {
		jsonw_name(json_wtr, "map_ids");
		jsonw_start_array(json_wtr);
		for (i = 0; i < info.nr_map_ids; i++)
			jsonw_uint(json_wtr, map_ids[i]);
		jsonw_end_array(json_wtr);
	} else {
		printf(" map_ids ");
		for (i = 0; i < info.nr_map_ids; i++)
			printf("%u%s", map_ids[i],
			       i == info.nr_map_ids - 1 ? "" : ",");
	}
}

static void print_prog_header_json(struct bpf_prog_info *info)
{
	jsonw_uint_field(json_wtr, "id", info->id);
	if (info->type < ARRAY_SIZE(prog_type_name))
		jsonw_string_field(json_wtr, "type",
				   prog_type_name[info->type]);
	else
		jsonw_uint_field(json_wtr, "type", info->type);

	if (*info->name)
		jsonw_string_field(json_wtr, "name", info->name);

	jsonw_name(json_wtr, "tag");
	jsonw_printf(json_wtr, "\"" BPF_TAG_FMT "\"",
		     info->tag[0], info->tag[1], info->tag[2], info->tag[3],
		     info->tag[4], info->tag[5], info->tag[6], info->tag[7]);

	jsonw_bool_field(json_wtr, "gpl_compatible", info->gpl_compatible);
	if (info->run_time_ns) {
		jsonw_uint_field(json_wtr, "run_time_ns", info->run_time_ns);
		jsonw_uint_field(json_wtr, "run_cnt", info->run_cnt);
	}
}

static void print_prog_json(struct bpf_prog_info *info, int fd)
{
	char *memlock;

	jsonw_start_object(json_wtr);
	print_prog_header_json(info);
	print_dev_json(info->ifindex, info->netns_dev, info->netns_ino);

	if (info->load_time) {
		char buf[32];

		print_boot_time(info->load_time, buf, sizeof(buf));

		/* Piggy back on load_time, since 0 uid is a valid one */
		jsonw_name(json_wtr, "loaded_at");
		jsonw_printf(json_wtr, "%s", buf);
		jsonw_uint_field(json_wtr, "uid", info->created_by_uid);
	}

	jsonw_uint_field(json_wtr, "bytes_xlated", info->xlated_prog_len);

	if (info->jited_prog_len) {
		jsonw_bool_field(json_wtr, "jited", true);
		jsonw_uint_field(json_wtr, "bytes_jited", info->jited_prog_len);
	} else {
		jsonw_bool_field(json_wtr, "jited", false);
	}

	memlock = get_fdinfo(fd, "memlock");
	if (memlock)
		jsonw_int_field(json_wtr, "bytes_memlock", atoi(memlock));
	free(memlock);

	if (info->nr_map_ids)
		show_prog_maps(fd, info->nr_map_ids);

	if (info->btf_id)
		jsonw_int_field(json_wtr, "btf_id", info->btf_id);

	if (!hash_empty(prog_table.table)) {
		struct pinned_obj *obj;

		jsonw_name(json_wtr, "pinned");
		jsonw_start_array(json_wtr);
		hash_for_each_possible(prog_table.table, obj, hash, info->id) {
			if (obj->id == info->id)
				jsonw_string(json_wtr, obj->path);
		}
		jsonw_end_array(json_wtr);
	}

	jsonw_end_object(json_wtr);
}

static void print_prog_header_plain(struct bpf_prog_info *info)
{
	printf("%u: ", info->id);
	if (info->type < ARRAY_SIZE(prog_type_name))
		printf("%s ", prog_type_name[info->type]);
	else
		printf("type %u ", info->type);

	if (*info->name)
		printf("name %s ", info->name);

	printf("tag ");
	fprint_hex(stdout, info->tag, BPF_TAG_SIZE, "");
	print_dev_plain(info->ifindex, info->netns_dev, info->netns_ino);
	printf("%s", info->gpl_compatible ? " gpl" : "");
	if (info->run_time_ns)
		printf(" run_time_ns %lld run_cnt %lld",
		       info->run_time_ns, info->run_cnt);
	printf("\n");
}

static void print_prog_plain(struct bpf_prog_info *info, int fd)
{
	char *memlock;

	print_prog_header_plain(info);

	if (info->load_time) {
		char buf[32];

		print_boot_time(info->load_time, buf, sizeof(buf));

		/* Piggy back on load_time, since 0 uid is a valid one */
		printf("\tloaded_at %s uid %u\n", buf, info->created_by_uid);
	}

	printf("\txlated %uB", info->xlated_prog_len);

	if (info->jited_prog_len)
		printf(" jited %uB", info->jited_prog_len);
	else
		printf(" not jited");

	memlock = get_fdinfo(fd, "memlock");
	if (memlock)
		printf(" memlock %sB", memlock);
	free(memlock);

	if (info->nr_map_ids)
		show_prog_maps(fd, info->nr_map_ids);

	if (!hash_empty(prog_table.table)) {
		struct pinned_obj *obj;

		hash_for_each_possible(prog_table.table, obj, hash, info->id) {
			if (obj->id == info->id)
				printf("\n\tpinned %s", obj->path);
		}
	}

	if (info->btf_id)
		printf("\n\tbtf_id %d", info->btf_id);

	printf("\n");
}
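
/* Fetch bpf_prog_info for @fd and print it in JSON or plain format. */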
static int show_prog(int fd)
{
	struct bpf_prog_info info = {};
	__u32 len = sizeof(info);
	int err;

	err = bpf_obj_get_info_by_fd(fd, &info, &len);
	if (err) {
		p_err("can't get prog info: %s", strerror(errno));
		return -1;
	}

	if (json_output)
		print_prog_json(&info, fd);
	else
		print_prog_plain(&info, fd);

	return 0;
}
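
/*
 * Show only the program(s) selected by an explicit id/tag/name/pinned handle.
 * When several programs match and JSON output is requested, they are wrapped
 * in a root array.
 */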
static int do_show_subset(int argc, char **argv)
{
	int *fds = NULL;
	int nb_fds, i;
	int err = -1;

	fds = malloc(sizeof(int));
	if (!fds) {
		p_err("mem alloc failed");
		return -1;
	}
	nb_fds = prog_parse_fds(&argc, &argv, &fds);
	if (nb_fds < 1)
		goto exit_free;

	if (json_output && nb_fds > 1)
		jsonw_start_array(json_wtr);	/* root array */
	for (i = 0; i < nb_fds; i++) {
		err = show_prog(fds[i]);
		if (err) {
			for (; i < nb_fds; i++)
				close(fds[i]);
			break;
		}
		close(fds[i]);
	}
	if (json_output && nb_fds > 1)
		jsonw_end_array(json_wtr);	/* root array */

exit_free:
	free(fds);
	return err;
}
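
/*
 * Handler for "bpftool prog show"/"list": with a handle (two arguments),
 * delegate to do_show_subset(); otherwise iterate over all loaded programs
 * and show each of them.
 */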
static int do_show(int argc, char **argv)
{
	__u32 id = 0;
	int err;
	int fd;

	if (show_pinned)
		build_pinned_obj_table(&prog_table, BPF_OBJ_PROG);

	if (argc == 2)
		return do_show_subset(argc, argv);

	if (argc)
		return BAD_ARG();

	if (json_output)
		jsonw_start_array(json_wtr);
	while (true) {
		err = bpf_prog_get_next_id(id, &id);
		if (err) {
			if (errno == ENOENT) {
				err = 0;
				break;
			}
			p_err("can't get next program: %s%s", strerror(errno),
			      errno == EINVAL ? " -- kernel too old?" : "");
			err = -1;
			break;
		}

		fd = bpf_prog_get_fd_by_id(id);
		if (fd < 0) {
			if (errno == ENOENT)
				continue;
			p_err("can't get prog by id (%u): %s",
			      id, strerror(errno));
			err = -1;
			break;
		}

		err = show_prog(fd);
		close(fd);
		if (err)
			break;
	}

	if (json_output)
		jsonw_end_array(json_wtr);

	return err;
}
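
/*
 * Common helper for "prog dump jited|xlated": picks the JITed or translated
 * instruction image from @info and dumps it according to the output options
 * (output file, raw opcodes, control-flow graph visualization, line info).
 */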
static int
prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
char *filepath, bool opcodes, bool visual, bool linum)
{
bpf: libbpf: bpftool: Print bpf_line_info during prog dump This patch adds print bpf_line_info function in 'prog dump jitted' and 'prog dump xlated': [root@arch-fb-vm1 bpf]# ~/devshare/fb-kernel/linux/tools/bpf/bpftool/bpftool prog dump jited pinned /sys/fs/bpf/test_btf_haskv [...] int test_long_fname_2(struct dummy_tracepoint_args * arg): bpf_prog_44a040bf25481309_test_long_fname_2: ; static int test_long_fname_2(struct dummy_tracepoint_args *arg) 0: push %rbp 1: mov %rsp,%rbp 4: sub $0x30,%rsp b: sub $0x28,%rbp f: mov %rbx,0x0(%rbp) 13: mov %r13,0x8(%rbp) 17: mov %r14,0x10(%rbp) 1b: mov %r15,0x18(%rbp) 1f: xor %eax,%eax 21: mov %rax,0x20(%rbp) 25: xor %esi,%esi ; int key = 0; 27: mov %esi,-0x4(%rbp) ; if (!arg->sock) 2a: mov 0x8(%rdi),%rdi ; if (!arg->sock) 2e: cmp $0x0,%rdi 32: je 0x0000000000000070 34: mov %rbp,%rsi ; counts = bpf_map_lookup_elem(&btf_map, &key); 37: add $0xfffffffffffffffc,%rsi 3b: movabs $0xffff8881139d7480,%rdi 45: add $0x110,%rdi 4c: mov 0x0(%rsi),%eax 4f: cmp $0x4,%rax 53: jae 0x000000000000005e 55: shl $0x3,%rax 59: add %rdi,%rax 5c: jmp 0x0000000000000060 5e: xor %eax,%eax ; if (!counts) 60: cmp $0x0,%rax 64: je 0x0000000000000070 ; counts->v6++; 66: mov 0x4(%rax),%edi 69: add $0x1,%rdi 6d: mov %edi,0x4(%rax) 70: mov 0x0(%rbp),%rbx 74: mov 0x8(%rbp),%r13 78: mov 0x10(%rbp),%r14 7c: mov 0x18(%rbp),%r15 80: add $0x28,%rbp 84: leaveq 85: retq [...] With linum: [root@arch-fb-vm1 bpf]# ~/devshare/fb-kernel/linux/tools/bpf/bpftool/bpftool prog dump jited pinned /sys/fs/bpf/test_btf_haskv linum int _dummy_tracepoint(struct dummy_tracepoint_args * arg): bpf_prog_b07ccb89267cf242__dummy_tracepoint: ; return test_long_fname_1(arg); [file:/data/users/kafai/fb-kernel/linux/tools/testing/selftests/bpf/test_btf_haskv.c line_num:54 line_col:9] 0: push %rbp 1: mov %rsp,%rbp 4: sub $0x28,%rsp b: sub $0x28,%rbp f: mov %rbx,0x0(%rbp) 13: mov %r13,0x8(%rbp) 17: mov %r14,0x10(%rbp) 1b: mov %r15,0x18(%rbp) 1f: xor %eax,%eax 21: mov %rax,0x20(%rbp) 25: callq 0x000000000000851e ; return test_long_fname_1(arg); [file:/data/users/kafai/fb-kernel/linux/tools/testing/selftests/bpf/test_btf_haskv.c line_num:54 line_col:2] 2a: xor %eax,%eax 2c: mov 0x0(%rbp),%rbx 30: mov 0x8(%rbp),%r13 34: mov 0x10(%rbp),%r14 38: mov 0x18(%rbp),%r15 3c: add $0x28,%rbp 40: leaveq 41: retq [...] Signed-off-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-08 07:42:32 +07:00
struct bpf_prog_linfo *prog_linfo = NULL;
const char *disasm_opt = NULL;
bpf: allow for correlation of maps and helpers in dump Currently a dump of an xlated prog (post verifier stage) doesn't correlate used helpers as well as maps. The prog info lists involved map ids, however there's no correlation of where in the program they are used as of today. Likewise, bpftool does not correlate helper calls with the target functions. The latter can be done w/o any kernel changes through kallsyms, and also has the advantage that this works with inlined helpers and BPF calls. Example, via interpreter: # tc filter show dev foo ingress filter protocol all pref 49152 bpf chain 0 filter protocol all pref 49152 bpf chain 0 handle 0x1 foo.o:[ingress] \ direct-action not_in_hw id 1 tag c74773051b364165 <-- prog id:1 * Output before patch (calls/maps remain unclear): # bpftool prog dump xlated id 1 <-- dump prog id:1 0: (b7) r1 = 2 1: (63) *(u32 *)(r10 -4) = r1 2: (bf) r2 = r10 3: (07) r2 += -4 4: (18) r1 = 0xffff95c47a8d4800 6: (85) call unknown#73040 7: (15) if r0 == 0x0 goto pc+18 8: (bf) r2 = r10 9: (07) r2 += -4 10: (bf) r1 = r0 11: (85) call unknown#73040 12: (15) if r0 == 0x0 goto pc+23 [...] * Output after patch: # bpftool prog dump xlated id 1 0: (b7) r1 = 2 1: (63) *(u32 *)(r10 -4) = r1 2: (bf) r2 = r10 3: (07) r2 += -4 4: (18) r1 = map[id:2] <-- map id:2 6: (85) call bpf_map_lookup_elem#73424 <-- helper call 7: (15) if r0 == 0x0 goto pc+18 8: (bf) r2 = r10 9: (07) r2 += -4 10: (bf) r1 = r0 11: (85) call bpf_map_lookup_elem#73424 12: (15) if r0 == 0x0 goto pc+23 [...] # bpftool map show id 2 <-- show/dump/etc map id:2 2: hash_of_maps flags 0x0 key 4B value 4B max_entries 3 memlock 4096B Example, JITed, same prog: # tc filter show dev foo ingress filter protocol all pref 49152 bpf chain 0 filter protocol all pref 49152 bpf chain 0 handle 0x1 foo.o:[ingress] \ direct-action not_in_hw id 3 tag c74773051b364165 jited # bpftool prog show id 3 3: sched_cls tag c74773051b364165 loaded_at Dec 19/13:48 uid 0 xlated 384B jited 257B memlock 4096B map_ids 2 # bpftool prog dump xlated id 3 0: (b7) r1 = 2 1: (63) *(u32 *)(r10 -4) = r1 2: (bf) r2 = r10 3: (07) r2 += -4 4: (18) r1 = map[id:2] <-- map id:2 6: (85) call __htab_map_lookup_elem#77408 <-+ inlined rewrite 7: (15) if r0 == 0x0 goto pc+2 | 8: (07) r0 += 56 | 9: (79) r0 = *(u64 *)(r0 +0) <-+ 10: (15) if r0 == 0x0 goto pc+24 11: (bf) r2 = r10 12: (07) r2 += -4 [...] Example, same prog, but kallsyms disabled (in that case we are also not allowed to pass any relative offsets, etc, so prog becomes pointer sanitized on dump): # sysctl kernel.kptr_restrict=2 kernel.kptr_restrict = 2 # bpftool prog dump xlated id 3 0: (b7) r1 = 2 1: (63) *(u32 *)(r10 -4) = r1 2: (bf) r2 = r10 3: (07) r2 += -4 4: (18) r1 = map[id:2] 6: (85) call bpf_unspec#0 7: (15) if r0 == 0x0 goto pc+2 [...] Example, BPF calls via interpreter: # bpftool prog dump xlated id 1 0: (85) call pc+2#__bpf_prog_run_args32 1: (b7) r0 = 1 2: (95) exit 3: (b7) r0 = 2 4: (95) exit Example, BPF calls via JIT: # sysctl net.core.bpf_jit_enable=1 net.core.bpf_jit_enable = 1 # sysctl net.core.bpf_jit_kallsyms=1 net.core.bpf_jit_kallsyms = 1 # bpftool prog dump xlated id 1 0: (85) call pc+2#bpf_prog_3b185187f1855c4c_F 1: (b7) r0 = 1 2: (95) exit 3: (b7) r0 = 2 4: (95) exit And finally, an example for tail calls that is now working as well wrt correlation: # bpftool prog dump xlated id 2 [...] 
10: (b7) r2 = 8 11: (85) call bpf_trace_printk#-41312 12: (bf) r1 = r6 13: (18) r2 = map[id:1] 15: (b7) r3 = 0 16: (85) call bpf_tail_call#12 17: (b7) r1 = 42 18: (6b) *(u16 *)(r6 +46) = r1 19: (b7) r0 = 0 20: (95) exit # bpftool map show id 1 1: prog_array flags 0x0 key 4B value 4B max_entries 1 memlock 4096B Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2017-12-20 19:42:57 +07:00
struct dump_data dd = {};
bpftool: use bpf_program__get_prog_info_linear() in prog.c:do_dump() This patches uses bpf_program__get_prog_info_linear() to simplify the logic in prog.c do_dump(). Committer testing: Before: # bpftool prog dump xlated id 208 > /tmp/dump.xlated.before # bpftool prog dump jited id 208 > /tmp/dump.jited.before # bpftool map dump id 107 > /tmp/map.dump.before After: # ~acme/git/perf/tools/bpf/bpftool/bpftool map dump id 107 > /tmp/map.dump.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump xlated id 208 > /tmp/dump.xlated.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump jited id 208 > /tmp/dump.jited.after # diff -u /tmp/dump.xlated.before /tmp/dump.xlated.after # diff -u /tmp/dump.jited.before /tmp/dump.jited.after # diff -u /tmp/map.dump.before /tmp/map.dump.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump xlated id 208 0: (bf) r6 = r1 1: (85) call bpf_get_current_pid_tgid#80800 2: (63) *(u32 *)(r10 -328) = r0 3: (bf) r2 = r10 4: (07) r2 += -328 5: (18) r1 = map[id:107] 7: (85) call __htab_map_lookup_elem#85680 8: (15) if r0 == 0x0 goto pc+1 9: (07) r0 += 56 10: (b7) r7 = 0 11: (55) if r0 != 0x0 goto pc+52 12: (bf) r1 = r10 13: (07) r1 += -328 14: (b7) r2 = 64 15: (bf) r3 = r6 16: (85) call bpf_probe_read#-46848 17: (bf) r2 = r10 18: (07) r2 += -320 19: (18) r1 = map[id:106] 21: (07) r1 += 208 22: (61) r0 = *(u32 *)(r2 +0) 23: (35) if r0 >= 0x200 goto pc+3 24: (67) r0 <<= 3 25: (0f) r0 += r1 26: (05) goto pc+1 27: (b7) r0 = 0 28: (15) if r0 == 0x0 goto pc+35 29: (71) r1 = *(u8 *)(r0 +0) 30: (15) if r1 == 0x0 goto pc+33 31: (b7) r5 = 64 32: (79) r1 = *(u64 *)(r10 -320) 33: (15) if r1 == 0x2 goto pc+2 34: (15) if r1 == 0x101 goto pc+3 35: (55) if r1 != 0x15 goto pc+19 36: (79) r3 = *(u64 *)(r6 +16) 37: (05) goto pc+1 38: (79) r3 = *(u64 *)(r6 +24) 39: (15) if r3 == 0x0 goto pc+15 40: (b7) r1 = 0 41: (63) *(u32 *)(r10 -260) = r1 42: (bf) r1 = r10 43: (07) r1 += -256 44: (b7) r2 = 256 45: (85) call bpf_probe_read_str#-46704 46: (b7) r5 = 328 47: (63) *(u32 *)(r10 -264) = r0 48: (bf) r1 = r0 49: (67) r1 <<= 32 50: (77) r1 >>= 32 51: (25) if r1 > 0xff goto pc+3 52: (07) r0 += 72 53: (57) r0 &= 255 54: (bf) r5 = r0 55: (bf) r4 = r10 56: (07) r4 += -328 57: (bf) r1 = r6 58: (18) r2 = map[id:105] 60: (18) r3 = 0xffffffff 62: (85) call bpf_perf_event_output_tp#-45104 63: (bf) r7 = r0 64: (bf) r0 = r7 65: (95) exit # Signed-off-by: Song Liu <songliubraving@fb.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: kernel-team@fb.com Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stanislav Fomichev <sdf@google.com> Link: http://lkml.kernel.org/r/20190312053051.2690567-4-songliubraving@fb.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-03-12 12:30:39 +07:00
void *func_info = NULL;
tools/bpf: bpftool: add support for func types This patch added support to print function signature if btf func_info is available. Note that ksym now uses function name instead of prog_name as prog_name has a limit of 16 bytes including ending '\0'. The following is a sample output for selftests test_btf with file test_btf_haskv.o for translated insns and jited insns respectively. $ bpftool prog dump xlated id 1 int _dummy_tracepoint(struct dummy_tracepoint_args * arg): 0: (85) call pc+2#bpf_prog_2dcecc18072623fc_test_long_fname_1 1: (b7) r0 = 0 2: (95) exit int test_long_fname_1(struct dummy_tracepoint_args * arg): 3: (85) call pc+1#bpf_prog_89d64e4abf0f0126_test_long_fname_2 4: (95) exit int test_long_fname_2(struct dummy_tracepoint_args * arg): 5: (b7) r2 = 0 6: (63) *(u32 *)(r10 -4) = r2 7: (79) r1 = *(u64 *)(r1 +8) ... 22: (07) r1 += 1 23: (63) *(u32 *)(r0 +4) = r1 24: (95) exit $ bpftool prog dump jited id 1 int _dummy_tracepoint(struct dummy_tracepoint_args * arg): bpf_prog_b07ccb89267cf242__dummy_tracepoint: 0: push %rbp 1: mov %rsp,%rbp ...... 3c: add $0x28,%rbp 40: leaveq 41: retq int test_long_fname_1(struct dummy_tracepoint_args * arg): bpf_prog_2dcecc18072623fc_test_long_fname_1: 0: push %rbp 1: mov %rsp,%rbp ...... 3a: add $0x28,%rbp 3e: leaveq 3f: retq int test_long_fname_2(struct dummy_tracepoint_args * arg): bpf_prog_89d64e4abf0f0126_test_long_fname_2: 0: push %rbp 1: mov %rsp,%rbp ...... 80: add $0x28,%rbp 84: leaveq 85: retq Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-20 06:29:21 +07:00
struct btf *btf = NULL;
char func_sig[1024];
unsigned char *buf;
bpftool: use bpf_program__get_prog_info_linear() in prog.c:do_dump() This patches uses bpf_program__get_prog_info_linear() to simplify the logic in prog.c do_dump(). Committer testing: Before: # bpftool prog dump xlated id 208 > /tmp/dump.xlated.before # bpftool prog dump jited id 208 > /tmp/dump.jited.before # bpftool map dump id 107 > /tmp/map.dump.before After: # ~acme/git/perf/tools/bpf/bpftool/bpftool map dump id 107 > /tmp/map.dump.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump xlated id 208 > /tmp/dump.xlated.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump jited id 208 > /tmp/dump.jited.after # diff -u /tmp/dump.xlated.before /tmp/dump.xlated.after # diff -u /tmp/dump.jited.before /tmp/dump.jited.after # diff -u /tmp/map.dump.before /tmp/map.dump.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump xlated id 208 0: (bf) r6 = r1 1: (85) call bpf_get_current_pid_tgid#80800 2: (63) *(u32 *)(r10 -328) = r0 3: (bf) r2 = r10 4: (07) r2 += -328 5: (18) r1 = map[id:107] 7: (85) call __htab_map_lookup_elem#85680 8: (15) if r0 == 0x0 goto pc+1 9: (07) r0 += 56 10: (b7) r7 = 0 11: (55) if r0 != 0x0 goto pc+52 12: (bf) r1 = r10 13: (07) r1 += -328 14: (b7) r2 = 64 15: (bf) r3 = r6 16: (85) call bpf_probe_read#-46848 17: (bf) r2 = r10 18: (07) r2 += -320 19: (18) r1 = map[id:106] 21: (07) r1 += 208 22: (61) r0 = *(u32 *)(r2 +0) 23: (35) if r0 >= 0x200 goto pc+3 24: (67) r0 <<= 3 25: (0f) r0 += r1 26: (05) goto pc+1 27: (b7) r0 = 0 28: (15) if r0 == 0x0 goto pc+35 29: (71) r1 = *(u8 *)(r0 +0) 30: (15) if r1 == 0x0 goto pc+33 31: (b7) r5 = 64 32: (79) r1 = *(u64 *)(r10 -320) 33: (15) if r1 == 0x2 goto pc+2 34: (15) if r1 == 0x101 goto pc+3 35: (55) if r1 != 0x15 goto pc+19 36: (79) r3 = *(u64 *)(r6 +16) 37: (05) goto pc+1 38: (79) r3 = *(u64 *)(r6 +24) 39: (15) if r3 == 0x0 goto pc+15 40: (b7) r1 = 0 41: (63) *(u32 *)(r10 -260) = r1 42: (bf) r1 = r10 43: (07) r1 += -256 44: (b7) r2 = 256 45: (85) call bpf_probe_read_str#-46704 46: (b7) r5 = 328 47: (63) *(u32 *)(r10 -264) = r0 48: (bf) r1 = r0 49: (67) r1 <<= 32 50: (77) r1 >>= 32 51: (25) if r1 > 0xff goto pc+3 52: (07) r0 += 72 53: (57) r0 &= 255 54: (bf) r5 = r0 55: (bf) r4 = r10 56: (07) r4 += -328 57: (bf) r1 = r6 58: (18) r2 = map[id:105] 60: (18) r3 = 0xffffffff 62: (85) call bpf_perf_event_output_tp#-45104 63: (bf) r7 = r0 64: (bf) r0 = r7 65: (95) exit # Signed-off-by: Song Liu <songliubraving@fb.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: kernel-team@fb.com Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stanislav Fomichev <sdf@google.com> Link: http://lkml.kernel.org/r/20190312053051.2690567-4-songliubraving@fb.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-03-12 12:30:39 +07:00
__u32 member_len;
ssize_t n;
int fd;
bpftool: use bpf_program__get_prog_info_linear() in prog.c:do_dump() This patches uses bpf_program__get_prog_info_linear() to simplify the logic in prog.c do_dump(). Committer testing: Before: # bpftool prog dump xlated id 208 > /tmp/dump.xlated.before # bpftool prog dump jited id 208 > /tmp/dump.jited.before # bpftool map dump id 107 > /tmp/map.dump.before After: # ~acme/git/perf/tools/bpf/bpftool/bpftool map dump id 107 > /tmp/map.dump.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump xlated id 208 > /tmp/dump.xlated.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump jited id 208 > /tmp/dump.jited.after # diff -u /tmp/dump.xlated.before /tmp/dump.xlated.after # diff -u /tmp/dump.jited.before /tmp/dump.jited.after # diff -u /tmp/map.dump.before /tmp/map.dump.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump xlated id 208 0: (bf) r6 = r1 1: (85) call bpf_get_current_pid_tgid#80800 2: (63) *(u32 *)(r10 -328) = r0 3: (bf) r2 = r10 4: (07) r2 += -328 5: (18) r1 = map[id:107] 7: (85) call __htab_map_lookup_elem#85680 8: (15) if r0 == 0x0 goto pc+1 9: (07) r0 += 56 10: (b7) r7 = 0 11: (55) if r0 != 0x0 goto pc+52 12: (bf) r1 = r10 13: (07) r1 += -328 14: (b7) r2 = 64 15: (bf) r3 = r6 16: (85) call bpf_probe_read#-46848 17: (bf) r2 = r10 18: (07) r2 += -320 19: (18) r1 = map[id:106] 21: (07) r1 += 208 22: (61) r0 = *(u32 *)(r2 +0) 23: (35) if r0 >= 0x200 goto pc+3 24: (67) r0 <<= 3 25: (0f) r0 += r1 26: (05) goto pc+1 27: (b7) r0 = 0 28: (15) if r0 == 0x0 goto pc+35 29: (71) r1 = *(u8 *)(r0 +0) 30: (15) if r1 == 0x0 goto pc+33 31: (b7) r5 = 64 32: (79) r1 = *(u64 *)(r10 -320) 33: (15) if r1 == 0x2 goto pc+2 34: (15) if r1 == 0x101 goto pc+3 35: (55) if r1 != 0x15 goto pc+19 36: (79) r3 = *(u64 *)(r6 +16) 37: (05) goto pc+1 38: (79) r3 = *(u64 *)(r6 +24) 39: (15) if r3 == 0x0 goto pc+15 40: (b7) r1 = 0 41: (63) *(u32 *)(r10 -260) = r1 42: (bf) r1 = r10 43: (07) r1 += -256 44: (b7) r2 = 256 45: (85) call bpf_probe_read_str#-46704 46: (b7) r5 = 328 47: (63) *(u32 *)(r10 -264) = r0 48: (bf) r1 = r0 49: (67) r1 <<= 32 50: (77) r1 >>= 32 51: (25) if r1 > 0xff goto pc+3 52: (07) r0 += 72 53: (57) r0 &= 255 54: (bf) r5 = r0 55: (bf) r4 = r10 56: (07) r4 += -328 57: (bf) r1 = r6 58: (18) r2 = map[id:105] 60: (18) r3 = 0xffffffff 62: (85) call bpf_perf_event_output_tp#-45104 63: (bf) r7 = r0 64: (bf) r0 = r7 65: (95) exit # Signed-off-by: Song Liu <songliubraving@fb.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: kernel-team@fb.com Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stanislav Fomichev <sdf@google.com> Link: http://lkml.kernel.org/r/20190312053051.2690567-4-songliubraving@fb.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-03-12 12:30:39 +07:00
if (mode == DUMP_JITED) {
if (info->jited_prog_len == 0 || !info->jited_prog_insns) {
bpftool: use bpf_program__get_prog_info_linear() in prog.c:do_dump() This patches uses bpf_program__get_prog_info_linear() to simplify the logic in prog.c do_dump(). Committer testing: Before: # bpftool prog dump xlated id 208 > /tmp/dump.xlated.before # bpftool prog dump jited id 208 > /tmp/dump.jited.before # bpftool map dump id 107 > /tmp/map.dump.before After: # ~acme/git/perf/tools/bpf/bpftool/bpftool map dump id 107 > /tmp/map.dump.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump xlated id 208 > /tmp/dump.xlated.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump jited id 208 > /tmp/dump.jited.after # diff -u /tmp/dump.xlated.before /tmp/dump.xlated.after # diff -u /tmp/dump.jited.before /tmp/dump.jited.after # diff -u /tmp/map.dump.before /tmp/map.dump.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump xlated id 208 0: (bf) r6 = r1 1: (85) call bpf_get_current_pid_tgid#80800 2: (63) *(u32 *)(r10 -328) = r0 3: (bf) r2 = r10 4: (07) r2 += -328 5: (18) r1 = map[id:107] 7: (85) call __htab_map_lookup_elem#85680 8: (15) if r0 == 0x0 goto pc+1 9: (07) r0 += 56 10: (b7) r7 = 0 11: (55) if r0 != 0x0 goto pc+52 12: (bf) r1 = r10 13: (07) r1 += -328 14: (b7) r2 = 64 15: (bf) r3 = r6 16: (85) call bpf_probe_read#-46848 17: (bf) r2 = r10 18: (07) r2 += -320 19: (18) r1 = map[id:106] 21: (07) r1 += 208 22: (61) r0 = *(u32 *)(r2 +0) 23: (35) if r0 >= 0x200 goto pc+3 24: (67) r0 <<= 3 25: (0f) r0 += r1 26: (05) goto pc+1 27: (b7) r0 = 0 28: (15) if r0 == 0x0 goto pc+35 29: (71) r1 = *(u8 *)(r0 +0) 30: (15) if r1 == 0x0 goto pc+33 31: (b7) r5 = 64 32: (79) r1 = *(u64 *)(r10 -320) 33: (15) if r1 == 0x2 goto pc+2 34: (15) if r1 == 0x101 goto pc+3 35: (55) if r1 != 0x15 goto pc+19 36: (79) r3 = *(u64 *)(r6 +16) 37: (05) goto pc+1 38: (79) r3 = *(u64 *)(r6 +24) 39: (15) if r3 == 0x0 goto pc+15 40: (b7) r1 = 0 41: (63) *(u32 *)(r10 -260) = r1 42: (bf) r1 = r10 43: (07) r1 += -256 44: (b7) r2 = 256 45: (85) call bpf_probe_read_str#-46704 46: (b7) r5 = 328 47: (63) *(u32 *)(r10 -264) = r0 48: (bf) r1 = r0 49: (67) r1 <<= 32 50: (77) r1 >>= 32 51: (25) if r1 > 0xff goto pc+3 52: (07) r0 += 72 53: (57) r0 &= 255 54: (bf) r5 = r0 55: (bf) r4 = r10 56: (07) r4 += -328 57: (bf) r1 = r6 58: (18) r2 = map[id:105] 60: (18) r3 = 0xffffffff 62: (85) call bpf_perf_event_output_tp#-45104 63: (bf) r7 = r0 64: (bf) r0 = r7 65: (95) exit # Signed-off-by: Song Liu <songliubraving@fb.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: kernel-team@fb.com Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stanislav Fomichev <sdf@google.com> Link: http://lkml.kernel.org/r/20190312053051.2690567-4-songliubraving@fb.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-03-12 12:30:39 +07:00
p_info("no instructions returned");
return -1;
tools: bpftool: add delimiters to multi-function JITed dumps This splits up the contiguous JITed dump obtained via the bpf system call into more relatable chunks for each function in the program. If the kernel symbols corresponding to these are known, they are printed in the header for each JIT image dump otherwise the masked start address is printed. Before applying this patch: # bpftool prog dump jited id 1 0: push %rbp 1: mov %rsp,%rbp ... 70: leaveq 71: retq 72: push %rbp 73: mov %rsp,%rbp ... dd: leaveq de: retq # bpftool -p prog dump jited id 1 [{ "pc": "0x0", "operation": "push", "operands": ["%rbp" ] },{ ... },{ "pc": "0x71", "operation": "retq", "operands": [null ] },{ "pc": "0x72", "operation": "push", "operands": ["%rbp" ] },{ ... },{ "pc": "0xde", "operation": "retq", "operands": [null ] } ] After applying this patch: # echo 0 > /proc/sys/net/core/bpf_jit_kallsyms # bpftool prog dump jited id 1 0xffffffffc02c7000: 0: push %rbp 1: mov %rsp,%rbp ... 70: leaveq 71: retq 0xffffffffc02cf000: 0: push %rbp 1: mov %rsp,%rbp ... 6b: leaveq 6c: retq # bpftool -p prog dump jited id 1 [{ "name": "0xffffffffc02c7000", "insns": [{ "pc": "0x0", "operation": "push", "operands": ["%rbp" ] },{ ... },{ "pc": "0x71", "operation": "retq", "operands": [null ] } ] },{ "name": "0xffffffffc02cf000", "insns": [{ "pc": "0x0", "operation": "push", "operands": ["%rbp" ] },{ ... },{ "pc": "0x6c", "operation": "retq", "operands": [null ] } ] } ] # echo 1 > /proc/sys/net/core/bpf_jit_kallsyms # bpftool prog dump jited id 1 bpf_prog_b811aab41a39ad3d_foo: 0: push %rbp 1: mov %rsp,%rbp ... 70: leaveq 71: retq bpf_prog_cf418ac8b67bebd9_F: 0: push %rbp 1: mov %rsp,%rbp ... 6b: leaveq 6c: retq # bpftool -p prog dump jited id 1 [{ "name": "bpf_prog_b811aab41a39ad3d_foo", "insns": [{ "pc": "0x0", "operation": "push", "operands": ["%rbp" ] },{ ... },{ "pc": "0x71", "operation": "retq", "operands": [null ] } ] },{ "name": "bpf_prog_cf418ac8b67bebd9_F", "insns": [{ "pc": "0x0", "operation": "push", "operands": ["%rbp" ] },{ ... },{ "pc": "0x6c", "operation": "retq", "operands": [null ] } ] } ] Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-05-24 13:56:54 +07:00
}
bpftool: use bpf_program__get_prog_info_linear() in prog.c:do_dump() This patches uses bpf_program__get_prog_info_linear() to simplify the logic in prog.c do_dump(). Committer testing: Before: # bpftool prog dump xlated id 208 > /tmp/dump.xlated.before # bpftool prog dump jited id 208 > /tmp/dump.jited.before # bpftool map dump id 107 > /tmp/map.dump.before After: # ~acme/git/perf/tools/bpf/bpftool/bpftool map dump id 107 > /tmp/map.dump.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump xlated id 208 > /tmp/dump.xlated.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump jited id 208 > /tmp/dump.jited.after # diff -u /tmp/dump.xlated.before /tmp/dump.xlated.after # diff -u /tmp/dump.jited.before /tmp/dump.jited.after # diff -u /tmp/map.dump.before /tmp/map.dump.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump xlated id 208 0: (bf) r6 = r1 1: (85) call bpf_get_current_pid_tgid#80800 2: (63) *(u32 *)(r10 -328) = r0 3: (bf) r2 = r10 4: (07) r2 += -328 5: (18) r1 = map[id:107] 7: (85) call __htab_map_lookup_elem#85680 8: (15) if r0 == 0x0 goto pc+1 9: (07) r0 += 56 10: (b7) r7 = 0 11: (55) if r0 != 0x0 goto pc+52 12: (bf) r1 = r10 13: (07) r1 += -328 14: (b7) r2 = 64 15: (bf) r3 = r6 16: (85) call bpf_probe_read#-46848 17: (bf) r2 = r10 18: (07) r2 += -320 19: (18) r1 = map[id:106] 21: (07) r1 += 208 22: (61) r0 = *(u32 *)(r2 +0) 23: (35) if r0 >= 0x200 goto pc+3 24: (67) r0 <<= 3 25: (0f) r0 += r1 26: (05) goto pc+1 27: (b7) r0 = 0 28: (15) if r0 == 0x0 goto pc+35 29: (71) r1 = *(u8 *)(r0 +0) 30: (15) if r1 == 0x0 goto pc+33 31: (b7) r5 = 64 32: (79) r1 = *(u64 *)(r10 -320) 33: (15) if r1 == 0x2 goto pc+2 34: (15) if r1 == 0x101 goto pc+3 35: (55) if r1 != 0x15 goto pc+19 36: (79) r3 = *(u64 *)(r6 +16) 37: (05) goto pc+1 38: (79) r3 = *(u64 *)(r6 +24) 39: (15) if r3 == 0x0 goto pc+15 40: (b7) r1 = 0 41: (63) *(u32 *)(r10 -260) = r1 42: (bf) r1 = r10 43: (07) r1 += -256 44: (b7) r2 = 256 45: (85) call bpf_probe_read_str#-46704 46: (b7) r5 = 328 47: (63) *(u32 *)(r10 -264) = r0 48: (bf) r1 = r0 49: (67) r1 <<= 32 50: (77) r1 >>= 32 51: (25) if r1 > 0xff goto pc+3 52: (07) r0 += 72 53: (57) r0 &= 255 54: (bf) r5 = r0 55: (bf) r4 = r10 56: (07) r4 += -328 57: (bf) r1 = r6 58: (18) r2 = map[id:105] 60: (18) r3 = 0xffffffff 62: (85) call bpf_perf_event_output_tp#-45104 63: (bf) r7 = r0 64: (bf) r0 = r7 65: (95) exit # Signed-off-by: Song Liu <songliubraving@fb.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: kernel-team@fb.com Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stanislav Fomichev <sdf@google.com> Link: http://lkml.kernel.org/r/20190312053051.2690567-4-songliubraving@fb.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-03-12 12:30:39 +07:00
buf = (unsigned char *)(info->jited_prog_insns);
member_len = info->jited_prog_len;
} else { /* DUMP_XLATED */
if (info->xlated_prog_len == 0 || !info->xlated_prog_insns) {
bpftool: use bpf_program__get_prog_info_linear() in prog.c:do_dump() This patches uses bpf_program__get_prog_info_linear() to simplify the logic in prog.c do_dump(). Committer testing: Before: # bpftool prog dump xlated id 208 > /tmp/dump.xlated.before # bpftool prog dump jited id 208 > /tmp/dump.jited.before # bpftool map dump id 107 > /tmp/map.dump.before After: # ~acme/git/perf/tools/bpf/bpftool/bpftool map dump id 107 > /tmp/map.dump.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump xlated id 208 > /tmp/dump.xlated.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump jited id 208 > /tmp/dump.jited.after # diff -u /tmp/dump.xlated.before /tmp/dump.xlated.after # diff -u /tmp/dump.jited.before /tmp/dump.jited.after # diff -u /tmp/map.dump.before /tmp/map.dump.after # ~acme/git/perf/tools/bpf/bpftool/bpftool prog dump xlated id 208 0: (bf) r6 = r1 1: (85) call bpf_get_current_pid_tgid#80800 2: (63) *(u32 *)(r10 -328) = r0 3: (bf) r2 = r10 4: (07) r2 += -328 5: (18) r1 = map[id:107] 7: (85) call __htab_map_lookup_elem#85680 8: (15) if r0 == 0x0 goto pc+1 9: (07) r0 += 56 10: (b7) r7 = 0 11: (55) if r0 != 0x0 goto pc+52 12: (bf) r1 = r10 13: (07) r1 += -328 14: (b7) r2 = 64 15: (bf) r3 = r6 16: (85) call bpf_probe_read#-46848 17: (bf) r2 = r10 18: (07) r2 += -320 19: (18) r1 = map[id:106] 21: (07) r1 += 208 22: (61) r0 = *(u32 *)(r2 +0) 23: (35) if r0 >= 0x200 goto pc+3 24: (67) r0 <<= 3 25: (0f) r0 += r1 26: (05) goto pc+1 27: (b7) r0 = 0 28: (15) if r0 == 0x0 goto pc+35 29: (71) r1 = *(u8 *)(r0 +0) 30: (15) if r1 == 0x0 goto pc+33 31: (b7) r5 = 64 32: (79) r1 = *(u64 *)(r10 -320) 33: (15) if r1 == 0x2 goto pc+2 34: (15) if r1 == 0x101 goto pc+3 35: (55) if r1 != 0x15 goto pc+19 36: (79) r3 = *(u64 *)(r6 +16) 37: (05) goto pc+1 38: (79) r3 = *(u64 *)(r6 +24) 39: (15) if r3 == 0x0 goto pc+15 40: (b7) r1 = 0 41: (63) *(u32 *)(r10 -260) = r1 42: (bf) r1 = r10 43: (07) r1 += -256 44: (b7) r2 = 256 45: (85) call bpf_probe_read_str#-46704 46: (b7) r5 = 328 47: (63) *(u32 *)(r10 -264) = r0 48: (bf) r1 = r0 49: (67) r1 <<= 32 50: (77) r1 >>= 32 51: (25) if r1 > 0xff goto pc+3 52: (07) r0 += 72 53: (57) r0 &= 255 54: (bf) r5 = r0 55: (bf) r4 = r10 56: (07) r4 += -328 57: (bf) r1 = r6 58: (18) r2 = map[id:105] 60: (18) r3 = 0xffffffff 62: (85) call bpf_perf_event_output_tp#-45104 63: (bf) r7 = r0 64: (bf) r0 = r7 65: (95) exit # Signed-off-by: Song Liu <songliubraving@fb.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: kernel-team@fb.com Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stanislav Fomichev <sdf@google.com> Link: http://lkml.kernel.org/r/20190312053051.2690567-4-songliubraving@fb.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-03-12 12:30:39 +07:00
p_err("error retrieving insn dump: kernel.kptr_restrict set?");
return -1;
tools/bpf: bpftool: add support for func types This patch added support to print function signature if btf func_info is available. Note that ksym now uses function name instead of prog_name as prog_name has a limit of 16 bytes including ending '\0'. The following is a sample output for selftests test_btf with file test_btf_haskv.o for translated insns and jited insns respectively. $ bpftool prog dump xlated id 1 int _dummy_tracepoint(struct dummy_tracepoint_args * arg): 0: (85) call pc+2#bpf_prog_2dcecc18072623fc_test_long_fname_1 1: (b7) r0 = 0 2: (95) exit int test_long_fname_1(struct dummy_tracepoint_args * arg): 3: (85) call pc+1#bpf_prog_89d64e4abf0f0126_test_long_fname_2 4: (95) exit int test_long_fname_2(struct dummy_tracepoint_args * arg): 5: (b7) r2 = 0 6: (63) *(u32 *)(r10 -4) = r2 7: (79) r1 = *(u64 *)(r1 +8) ... 22: (07) r1 += 1 23: (63) *(u32 *)(r0 +4) = r1 24: (95) exit $ bpftool prog dump jited id 1 int _dummy_tracepoint(struct dummy_tracepoint_args * arg): bpf_prog_b07ccb89267cf242__dummy_tracepoint: 0: push %rbp 1: mov %rsp,%rbp ...... 3c: add $0x28,%rbp 40: leaveq 41: retq int test_long_fname_1(struct dummy_tracepoint_args * arg): bpf_prog_2dcecc18072623fc_test_long_fname_1: 0: push %rbp 1: mov %rsp,%rbp ...... 3a: add $0x28,%rbp 3e: leaveq 3f: retq int test_long_fname_2(struct dummy_tracepoint_args * arg): bpf_prog_89d64e4abf0f0126_test_long_fname_2: 0: push %rbp 1: mov %rsp,%rbp ...... 80: add $0x28,%rbp 84: leaveq 85: retq Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-20 06:29:21 +07:00
}
buf = (unsigned char *)info->xlated_prog_insns;
member_len = info->xlated_prog_len;
}
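/* the program carries BTF: load it by ID, it is needed to annotate the dump */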
if (info->btf_id && btf__get_from_id(info->btf_id, &btf)) {
p_err("failed to get btf");
return -1;
}
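/* raw BTF func_info records, used below to print function signatures */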
func_info = (void *)info->func_info;
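/* build the line-info table; failure is not fatal, the dump simply loses source lines */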
if (info->nr_line_info) {
prog_linfo = bpf_prog_linfo__new(info);
if (!prog_linfo)
p_info("error in processing bpf_line_info. continue without it.");
}
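/* with an output file, write the raw image/instructions instead of disassembling */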
if (filepath) {
fd = open(filepath, O_WRONLY | O_CREAT | O_TRUNC, 0600);
if (fd < 0) {
p_err("can't open file %s: %s", filepath,
strerror(errno));
return -1;
}
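/* write the whole member (JITed image or xlated insns) in one go */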
n = write(fd, buf, member_len);
close(fd);
if (n != member_len) {
p_err("error writing output file: %s",
n < 0 ? strerror(errno) : "short write");
return -1;
}
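/* the binary went to the file; in JSON mode emit a bare null as the result */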
if (json_output)
jsonw_null(json_wtr);
} else if (mode == DUMP_JITED) {
const char *name = NULL;
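/* device-bound (offloaded) program: resolve the bfd name and disassembler options for the target device */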
if (info->ifindex) {
name = ifindex_to_bfd_params(info->ifindex,
info->netns_dev,
info->netns_ino,
&disasm_opt);
if (!name)
return -1;
}
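/* multi-function program: dump each JITed sub-function separately */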
if (info->nr_jited_func_lens && info->jited_func_lens) {
struct kernel_sym *sym = NULL;
struct bpf_func_info *record;
char sym_name[SYM_MAX_NAME];
unsigned char *img = buf;
__u64 *ksyms = NULL;
__u32 *lens;
__u32 i;
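/* load kallsyms so each function start address can be resolved to a symbol name */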
if (info->nr_jited_ksyms) {
kernel_syms_load(&dd);
ksyms = (__u64 *) info->jited_ksyms;
}
if (json_output)
jsonw_start_array(json_wtr);
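/* walk the sub-functions; lens[i] is the JITed length of function i */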
lens = (__u32 *) info->jited_func_lens;
for (i = 0; i < info->nr_jited_func_lens; i++) {
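/* pick a printable name: the resolved ksym, the raw address, or "unknown" */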
if (ksyms) {
sym = kernel_syms_search(&dd, ksyms[i]);
if (sym)
sprintf(sym_name, "%s", sym->name);
else
sprintf(sym_name, "0x%016llx", ksyms[i]);
} else {
strcpy(sym_name, "unknown");
}
if (func_info) {
record = func_info + i * info->func_info_rec_size;
btf_dumper_type_only(btf, record->type_id,
func_sig,
sizeof(func_sig));
}
if (json_output) {
jsonw_start_object(json_wtr);
if (func_info && func_sig[0] != '\0') {
jsonw_name(json_wtr, "proto");
jsonw_string(json_wtr, func_sig);
}
jsonw_name(json_wtr, "name");
jsonw_string(json_wtr, sym_name);
jsonw_name(json_wtr, "insns");
} else {
if (func_info && func_sig[0] != '\0')
printf("%s:\n", func_sig);
printf("%s:\n", sym_name);
}
disasm_print_insn(img, lens[i], opcodes,
name, disasm_opt, btf,
prog_linfo, ksyms[i], i,
linum);
img += lens[i];
if (json_output)
jsonw_end_object(json_wtr);
else
printf("\n");
}
if (json_output)
jsonw_end_array(json_wtr);
} else {
disasm_print_insn(buf, member_len, opcodes, name,
disasm_opt, btf, NULL, 0, 0, false);
}
} else if (visual) {
if (json_output)
jsonw_null(json_wtr);
else
dump_xlated_cfg(buf, member_len);
} else {
kernel_syms_load(&dd);
dd.nr_jited_ksyms = info->nr_jited_ksyms;
dd.jited_ksyms = (__u64 *) info->jited_ksyms;
dd.btf = btf;
dd.func_info = func_info;
dd.finfo_rec_size = info->func_info_rec_size;
dd.prog_linfo = prog_linfo;
if (json_output)
dump_xlated_json(&dd, buf, member_len, opcodes,
linum);
else
dump_xlated_plain(&dd, buf, member_len, opcodes,
linum);
kernel_syms_destroy(&dd);
}
return 0;
}
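
/*
 * Implements "bpftool prog dump { xlated | jited }". Parses the dump mode
 * and at most one modifier among "file FILE", "opcodes", "visual" and
 * "linum", resolves the matching program file descriptors, retrieves the
 * needed bpf_prog_info arrays for each program and hands them to
 * prog_dump(). Example: "bpftool prog dump xlated id 10 opcodes".
 */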
static int do_dump(int argc, char **argv)
{
struct bpf_prog_info_linear *info_linear;
char *filepath = NULL;
bool opcodes = false;
bool visual = false;
enum dump_mode mode;
bool linum = false;
int *fds = NULL;
int nb_fds, i = 0;
int err = -1;
__u64 arrays;
if (is_prefix(*argv, "jited")) {
if (disasm_init())
return -1;
mode = DUMP_JITED;
} else if (is_prefix(*argv, "xlated")) {
mode = DUMP_XLATED;
} else {
p_err("expected 'xlated' or 'jited', got: %s", *argv);
return -1;
}
NEXT_ARG();
if (argc < 2)
usage();
fds = malloc(sizeof(int));
if (!fds) {
p_err("mem alloc failed");
return -1;
}
nb_fds = prog_parse_fds(&argc, &argv, &fds);
if (nb_fds < 1)
goto exit_free;
if (is_prefix(*argv, "file")) {
NEXT_ARG();
if (!argc) {
p_err("expected file path");
goto exit_close;
}
if (nb_fds > 1) {
p_err("several programs matched");
goto exit_close;
}
filepath = *argv;
NEXT_ARG();
} else if (is_prefix(*argv, "opcodes")) {
opcodes = true;
NEXT_ARG();
} else if (is_prefix(*argv, "visual")) {
if (nb_fds > 1) {
p_err("several programs matched");
goto exit_close;
}
visual = true;
NEXT_ARG();
} else if (is_prefix(*argv, "linum")) {
linum = true;
NEXT_ARG();
}
if (argc) {
usage();
goto exit_close;
}
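/*
 * Request only the bpf_prog_info arrays needed for this dump: the
 * instruction array matching the mode, plus JIT ksyms, per-function
 * lengths and BTF func/line info used to annotate the output.
 */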
if (mode == DUMP_JITED)
arrays = 1UL << BPF_PROG_INFO_JITED_INSNS;
else
arrays = 1UL << BPF_PROG_INFO_XLATED_INSNS;
arrays |= 1UL << BPF_PROG_INFO_JITED_KSYMS;
arrays |= 1UL << BPF_PROG_INFO_JITED_FUNC_LENS;
arrays |= 1UL << BPF_PROG_INFO_FUNC_INFO;
arrays |= 1UL << BPF_PROG_INFO_LINE_INFO;
arrays |= 1UL << BPF_PROG_INFO_JITED_LINE_INFO;
if (json_output && nb_fds > 1)
jsonw_start_array(json_wtr); /* root array */
for (i = 0; i < nb_fds; i++) {
info_linear = bpf_program__get_prog_info_linear(fds[i], arrays);
if (IS_ERR_OR_NULL(info_linear)) {
p_err("can't get prog info: %s", strerror(errno));
break;
}
if (json_output && nb_fds > 1) {
jsonw_start_object(json_wtr); /* prog object */
print_prog_header_json(&info_linear->info);
jsonw_name(json_wtr, "insns");
} else if (nb_fds > 1) {
print_prog_header_plain(&info_linear->info);
}
err = prog_dump(&info_linear->info, mode, filepath, opcodes,
visual, linum);
if (json_output && nb_fds > 1)
jsonw_end_object(json_wtr); /* prog object */
else if (i != nb_fds - 1 && nb_fds > 1)
printf("\n");
free(info_linear);
if (err)
break;
close(fds[i]);
}
if (json_output && nb_fds > 1)
jsonw_end_array(json_wtr); /* root array */
exit_close:
for (; i < nb_fds; i++)
close(fds[i]);
exit_free:
free(fds);
return err;
}
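
/*
 * Implements "bpftool prog pin": pin the referenced program to a path in
 * the BPF filesystem, delegating the common logic to do_pin_any().
 */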
static int do_pin(int argc, char **argv)
{
int err;
err = do_pin_any(argc, argv, prog_parse_fd);
if (!err && json_output)
jsonw_null(json_wtr);
return err;
}
struct map_replace {
int idx;
int fd;
char *name;
};
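
/* qsort() comparator ordering map replacement requests by map index. */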
static int map_replace_compar(const void *p1, const void *p2)
{
const struct map_replace *a = p1, *b = p2;
return a->idx - b->idx;
}
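
/*
 * Parse the "PROG ATTACH_TYPE [MAP]" arguments shared by "prog attach"
 * and "prog detach". BPF_FLOW_DISSECTOR needs no map, so *mapfd is set
 * to -1 for that attach type.
 */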
static int parse_attach_detach_args(int argc, char **argv, int *progfd,
enum bpf_attach_type *attach_type,
int *mapfd)
{
if (!REQ_ARGS(3))
return -EINVAL;
*progfd = prog_parse_fd(&argc, &argv);
if (*progfd < 0)
return *progfd;
*attach_type = parse_attach_type(*argv);
if (*attach_type == __MAX_BPF_ATTACH_TYPE) {
p_err("invalid attach/detach type");
return -EINVAL;
}
if (*attach_type == BPF_FLOW_DISSECTOR) {
*mapfd = -1;
return 0;
}
NEXT_ARG();
if (!REQ_ARGS(2))
return -EINVAL;
*mapfd = map_parse_fd(&argc, &argv);
if (*mapfd < 0)
return *mapfd;
return 0;
}
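/* Handler for "bpftool prog attach": attach the program to a map (e.g. a
 * sockmap) or to the flow dissector hook via bpf_prog_attach().
 */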
static int do_attach(int argc, char **argv)
{
enum bpf_attach_type attach_type;
int err, progfd;
int mapfd;
err = parse_attach_detach_args(argc, argv,
&progfd, &attach_type, &mapfd);
if (err)
return err;
err = bpf_prog_attach(progfd, mapfd, attach_type, 0);
if (err) {
p_err("failed prog attach to map");
return -EINVAL;
}
if (json_output)
jsonw_null(json_wtr);
return 0;
}
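/* Handler for "bpftool prog detach": mirror of do_attach(), using
 * bpf_prog_detach2() so that the program fd is taken into account.
 */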
static int do_detach(int argc, char **argv)
{
enum bpf_attach_type attach_type;
int err, progfd;
int mapfd;
err = parse_attach_detach_args(argc, argv,
&progfd, &attach_type, &mapfd);
if (err)
return err;
err = bpf_prog_detach2(progfd, mapfd, attach_type);
if (err) {
p_err("failed prog detach from map");
return -EINVAL;
}
if (json_output)
jsonw_null(json_wtr);
return 0;
}
static int check_single_stdin(char *file_data_in, char *file_ctx_in)
{
if (file_data_in && file_ctx_in &&
!strcmp(file_data_in, "-") && !strcmp(file_ctx_in, "-")) {
p_err("cannot use standard input for both data_in and ctx_in");
return -1;
}
return 0;
}
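/* Read data_in/ctx_in for "prog run" from a file or from standard input
 * ("-") into a heap buffer grown in 256-byte blocks (doubling its size when
 * full), up to UINT32_MAX bytes.
 */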
static int get_run_data(const char *fname, void **data_ptr, unsigned int *size)
{
size_t block_size = 256;
size_t buf_size = block_size;
size_t nb_read = 0;
void *tmp;
FILE *f;
if (!fname) {
*data_ptr = NULL;
*size = 0;
return 0;
}
if (!strcmp(fname, "-"))
f = stdin;
else
f = fopen(fname, "r");
if (!f) {
p_err("failed to open %s: %s", fname, strerror(errno));
return -1;
}
*data_ptr = malloc(block_size);
if (!*data_ptr) {
p_err("failed to allocate memory for data_in/ctx_in: %s",
strerror(errno));
goto err_fclose;
}
while ((nb_read += fread(*data_ptr + nb_read, 1, block_size, f))) {
if (feof(f))
break;
if (ferror(f)) {
p_err("failed to read data_in/ctx_in from %s: %s",
fname, strerror(errno));
goto err_free;
}
if (nb_read > buf_size - block_size) {
if (buf_size == UINT32_MAX) {
p_err("data_in/ctx_in is too long (max: %d)",
UINT32_MAX);
goto err_free;
}
/* No space for fread()-ing next chunk; realloc() */
buf_size *= 2;
tmp = realloc(*data_ptr, buf_size);
if (!tmp) {
p_err("failed to reallocate data_in/ctx_in: %s",
strerror(errno));
goto err_free;
}
*data_ptr = tmp;
}
}
if (f != stdin)
fclose(f);
*size = nb_read;
return 0;
err_free:
free(*data_ptr);
*data_ptr = NULL;
err_fclose:
if (f != stdin)
fclose(f);
return -1;
}
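/* Print a buffer as a hex dump: 16 bytes per row, with the row offset on the
 * left and the ASCII rendering (non-printable bytes as '.') on the right.
 */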
static void hex_print(void *data, unsigned int size, FILE *f)
{
size_t i, j;
char c;
for (i = 0; i < size; i += 16) {
/* Row offset */
fprintf(f, "%07zx\t", i);
/* Hexadecimal values */
for (j = i; j < i + 16 && j < size; j++)
fprintf(f, "%02x%s", *(uint8_t *)(data + j),
j % 2 ? " " : "");
for (; j < i + 16; j++)
fprintf(f, " %s", j % 2 ? " " : "");
/* ASCII values (if relevant), '.' otherwise */
fprintf(f, "| ");
for (j = i; j < i + 16 && j < size; j++) {
c = *(char *)(data + j);
if (c < ' ' || c > '~')
c = '.';
fprintf(f, "%c%s", c, j == i + 7 ? " " : "");
}
fprintf(f, "\n");
}
}
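/* Write output data/ctx from a test run either to stdout ("-", as JSON or as
 * a hex dump) or as raw bytes to the given file.
 */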
static int
print_run_output(void *data, unsigned int size, const char *fname,
const char *json_key)
{
size_t nb_written;
FILE *f;
if (!fname)
return 0;
if (!strcmp(fname, "-")) {
f = stdout;
if (json_output) {
jsonw_name(json_wtr, json_key);
print_data_json(data, size);
} else {
hex_print(data, size, f);
}
return 0;
}
f = fopen(fname, "w");
if (!f) {
p_err("failed to open %s: %s", fname, strerror(errno));
return -1;
}
nb_written = fwrite(data, 1, size, f);
fclose(f);
if (nb_written != size) {
p_err("failed to write output data/ctx: %s", strerror(errno));
return -1;
}
return 0;
}
static int alloc_run_data(void **data_ptr, unsigned int size_out)
{
*data_ptr = calloc(size_out, 1);
if (!*data_ptr) {
p_err("failed to allocate memory for output data/ctx: %s",
strerror(errno));
return -1;
}
return 0;
}
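/* Handler for "bpftool prog run": parse data/ctx input and output files,
 * output sizes and repeat count, execute the program through
 * bpf_prog_test_run_xattr() (BPF_PROG_TEST_RUN) and print the resulting
 * data/ctx, return value and (average) duration.
 */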
static int do_run(int argc, char **argv)
{
char *data_fname_in = NULL, *data_fname_out = NULL;
char *ctx_fname_in = NULL, *ctx_fname_out = NULL;
struct bpf_prog_test_run_attr test_attr = {0};
const unsigned int default_size = SZ_32K;
void *data_in = NULL, *data_out = NULL;
void *ctx_in = NULL, *ctx_out = NULL;
unsigned int repeat = 1;
int fd, err;
if (!REQ_ARGS(4))
return -1;
fd = prog_parse_fd(&argc, &argv);
if (fd < 0)
return -1;
while (argc) {
if (detect_common_prefix(*argv, "data_in", "data_out",
"data_size_out", NULL))
return -1;
if (detect_common_prefix(*argv, "ctx_in", "ctx_out",
"ctx_size_out", NULL))
return -1;
if (is_prefix(*argv, "data_in")) {
NEXT_ARG();
if (!REQ_ARGS(1))
return -1;
data_fname_in = GET_ARG();
if (check_single_stdin(data_fname_in, ctx_fname_in))
return -1;
} else if (is_prefix(*argv, "data_out")) {
NEXT_ARG();
if (!REQ_ARGS(1))
return -1;
data_fname_out = GET_ARG();
} else if (is_prefix(*argv, "data_size_out")) {
char *endptr;
NEXT_ARG();
if (!REQ_ARGS(1))
return -1;
test_attr.data_size_out = strtoul(*argv, &endptr, 0);
if (*endptr) {
p_err("can't parse %s as output data size",
*argv);
return -1;
}
NEXT_ARG();
} else if (is_prefix(*argv, "ctx_in")) {
NEXT_ARG();
if (!REQ_ARGS(1))
return -1;
ctx_fname_in = GET_ARG();
if (check_single_stdin(data_fname_in, ctx_fname_in))
return -1;
} else if (is_prefix(*argv, "ctx_out")) {
NEXT_ARG();
if (!REQ_ARGS(1))
return -1;
ctx_fname_out = GET_ARG();
} else if (is_prefix(*argv, "ctx_size_out")) {
char *endptr;
NEXT_ARG();
if (!REQ_ARGS(1))
return -1;
test_attr.ctx_size_out = strtoul(*argv, &endptr, 0);
if (*endptr) {
p_err("can't parse %s as output context size",
*argv);
return -1;
}
NEXT_ARG();
} else if (is_prefix(*argv, "repeat")) {
char *endptr;
NEXT_ARG();
if (!REQ_ARGS(1))
return -1;
repeat = strtoul(*argv, &endptr, 0);
if (*endptr) {
p_err("can't parse %s as repeat number",
*argv);
return -1;
}
NEXT_ARG();
} else {
p_err("expected no more arguments, 'data_in', 'data_out', 'data_size_out', 'ctx_in', 'ctx_out', 'ctx_size_out' or 'repeat', got: '%s'?",
*argv);
return -1;
}
}
err = get_run_data(data_fname_in, &data_in, &test_attr.data_size_in);
if (err)
return -1;
if (data_in) {
if (!test_attr.data_size_out)
test_attr.data_size_out = default_size;
err = alloc_run_data(&data_out, test_attr.data_size_out);
if (err)
goto free_data_in;
}
err = get_run_data(ctx_fname_in, &ctx_in, &test_attr.ctx_size_in);
if (err)
goto free_data_out;
if (ctx_in) {
if (!test_attr.ctx_size_out)
test_attr.ctx_size_out = default_size;
err = alloc_run_data(&ctx_out, test_attr.ctx_size_out);
if (err)
goto free_ctx_in;
}
test_attr.prog_fd = fd;
test_attr.repeat = repeat;
test_attr.data_in = data_in;
test_attr.data_out = data_out;
test_attr.ctx_in = ctx_in;
test_attr.ctx_out = ctx_out;
err = bpf_prog_test_run_xattr(&test_attr);
if (err) {
p_err("failed to run program: %s", strerror(errno));
goto free_ctx_out;
}
err = 0;
if (json_output)
jsonw_start_object(json_wtr); /* root */
/* Do not exit on errors occurring when printing output data/context,
* we still want to print return value and duration for program run.
*/
if (test_attr.data_size_out)
err += print_run_output(test_attr.data_out,
test_attr.data_size_out,
data_fname_out, "data_out");
if (test_attr.ctx_size_out)
err += print_run_output(test_attr.ctx_out,
test_attr.ctx_size_out,
ctx_fname_out, "ctx_out");
if (json_output) {
jsonw_uint_field(json_wtr, "retval", test_attr.retval);
jsonw_uint_field(json_wtr, "duration", test_attr.duration);
jsonw_end_object(json_wtr); /* root */
} else {
fprintf(stdout, "Return value: %u, duration%s: %uns\n",
test_attr.retval,
repeat > 1 ? " (average)" : "", test_attr.duration);
}
free_ctx_out:
free(ctx_out);
free_ctx_in:
free(ctx_in);
free_data_out:
free(data_out);
free_data_in:
free(data_in);
return err;
}
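/* Wrapper around libbpf_prog_type_by_name(): on failure, run it a second
 * time with debug-level libbpf prints enabled so that users get a hint about
 * the unrecognized section name even without the -d option.
 */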
static int
get_prog_type_by_name(const char *name, enum bpf_prog_type *prog_type,
enum bpf_attach_type *expected_attach_type)
{
libbpf_print_fn_t print_backup;
int ret;
ret = libbpf_prog_type_by_name(name, prog_type, expected_attach_type);
if (!ret)
return ret;
/* libbpf_prog_type_by_name() failed, let's re-run with debug level */
print_backup = libbpf_set_print(print_all_levels);
ret = libbpf_prog_type_by_name(name, prog_type, expected_attach_type);
libbpf_set_print(print_backup);
return ret;
}
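/* Common implementation for "prog load" and "prog loadall". Parses the
 * optional "type", "map { idx | name }", "dev" and "pinmaps" keywords, opens
 * the object file, overrides program types/attach types and offload ifindex,
 * reuses the requested map fds, loads the object and pins either the first
 * program only or all programs (and, with "pinmaps", all maps).
 */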
static int load_with_options(int argc, char **argv, bool first_prog_only)
{
enum bpf_prog_type common_prog_type = BPF_PROG_TYPE_UNSPEC;
DECLARE_LIBBPF_OPTS(bpf_object_open_opts, open_opts,
.relaxed_maps = relaxed_maps,
);
struct bpf_object_load_attr load_attr = { 0 };
enum bpf_attach_type expected_attach_type;
struct map_replace *map_replace = NULL;
struct bpf_program *prog = NULL, *pos;
unsigned int old_map_fds = 0;
const char *pinmaps = NULL;
struct bpf_object *obj;
struct bpf_map *map;
const char *pinfile;
unsigned int i, j;
__u32 ifindex = 0;
const char *file;
int idx, err;
if (!REQ_ARGS(2))
return -1;
file = GET_ARG();
pinfile = GET_ARG();
while (argc) {
if (is_prefix(*argv, "type")) {
char *type;
NEXT_ARG();
if (common_prog_type != BPF_PROG_TYPE_UNSPEC) {
p_err("program type already specified");
goto err_free_reuse_maps;
}
if (!REQ_ARGS(1))
goto err_free_reuse_maps;
/* Put a '/' at the end of type to appease libbpf */
type = malloc(strlen(*argv) + 2);
if (!type) {
p_err("mem alloc failed");
goto err_free_reuse_maps;
}
*type = 0;
strcat(type, *argv);
strcat(type, "/");
err = get_prog_type_by_name(type, &common_prog_type,
&expected_attach_type);
free(type);
if (err < 0)
goto err_free_reuse_maps;
NEXT_ARG();
} else if (is_prefix(*argv, "map")) {
void *new_map_replace;
char *endptr, *name;
int fd;
NEXT_ARG();
if (!REQ_ARGS(4))
goto err_free_reuse_maps;
if (is_prefix(*argv, "idx")) {
NEXT_ARG();
idx = strtoul(*argv, &endptr, 0);
if (*endptr) {
p_err("can't parse %s as IDX", *argv);
goto err_free_reuse_maps;
}
name = NULL;
} else if (is_prefix(*argv, "name")) {
NEXT_ARG();
name = *argv;
idx = -1;
} else {
p_err("expected 'idx' or 'name', got: '%s'?",
*argv);
goto err_free_reuse_maps;
}
NEXT_ARG();
fd = map_parse_fd(&argc, &argv);
if (fd < 0)
goto err_free_reuse_maps;
new_map_replace = reallocarray(map_replace,
old_map_fds + 1,
sizeof(*map_replace));
if (!new_map_replace) {
p_err("mem alloc failed");
goto err_free_reuse_maps;
}
map_replace = new_map_replace;
map_replace[old_map_fds].idx = idx;
map_replace[old_map_fds].name = name;
map_replace[old_map_fds].fd = fd;
old_map_fds++;
} else if (is_prefix(*argv, "dev")) {
NEXT_ARG();
if (ifindex) {
p_err("offload device already specified");
goto err_free_reuse_maps;
}
if (!REQ_ARGS(1))
goto err_free_reuse_maps;
ifindex = if_nametoindex(*argv);
if (!ifindex) {
p_err("unrecognized netdevice '%s': %s",
*argv, strerror(errno));
goto err_free_reuse_maps;
}
NEXT_ARG();
} else if (is_prefix(*argv, "pinmaps")) {
NEXT_ARG();
if (!REQ_ARGS(1))
goto err_free_reuse_maps;
pinmaps = GET_ARG();
} else {
p_err("expected no more arguments, 'type', 'map' or 'dev', got: '%s'?",
*argv);
goto err_free_reuse_maps;
}
}
set_max_rlimit();
obj = bpf_object__open_file(file, &open_opts);
if (IS_ERR_OR_NULL(obj)) {
p_err("failed to open object file");
goto err_free_reuse_maps;
}
bpf_object__for_each_program(pos, obj) {
enum bpf_prog_type prog_type = common_prog_type;
if (prog_type == BPF_PROG_TYPE_UNSPEC) {
const char *sec_name = bpf_program__title(pos, false);
err = get_prog_type_by_name(sec_name, &prog_type,
&expected_attach_type);
if (err < 0)
goto err_close_obj;
}
bpf_program__set_ifindex(pos, ifindex);
bpf_program__set_type(pos, prog_type);
bpf_program__set_expected_attach_type(pos, expected_attach_type);
}
qsort(map_replace, old_map_fds, sizeof(*map_replace),
map_replace_compar);
/* After the sort maps by name will be first on the list, because they
* have idx == -1. Resolve them.
*/
j = 0;
while (j < old_map_fds && map_replace[j].name) {
i = 0;
bpf_object__for_each_map(map, obj) {
if (!strcmp(bpf_map__name(map), map_replace[j].name)) {
map_replace[j].idx = i;
break;
}
i++;
}
if (map_replace[j].idx == -1) {
p_err("unable to find map '%s'", map_replace[j].name);
goto err_close_obj;
}
j++;
}
/* Resort if any names were resolved */
if (j)
qsort(map_replace, old_map_fds, sizeof(*map_replace),
map_replace_compar);
/* Set ifindex and name reuse */
j = 0;
idx = 0;
bpf_object__for_each_map(map, obj) {
if (!bpf_map__is_offload_neutral(map))
bpf_map__set_ifindex(map, ifindex);
if (j < old_map_fds && idx == map_replace[j].idx) {
err = bpf_map__reuse_fd(map, map_replace[j++].fd);
if (err) {
p_err("unable to set up map reuse: %d", err);
goto err_close_obj;
}
/* Next reuse wants to apply to the same map */
if (j < old_map_fds && map_replace[j].idx == idx) {
p_err("replacement for map idx %d specified more than once",
idx);
goto err_close_obj;
}
}
idx++;
}
if (j < old_map_fds) {
p_err("map idx '%d' not used", map_replace[j].idx);
goto err_close_obj;
}
load_attr.obj = obj;
if (verifier_logs)
/* log_level1 + log_level2 + stats, but not stable UAPI */
load_attr.log_level = 1 + 2 + 4;
err = bpf_object__load_xattr(&load_attr);
if (err) {
p_err("failed to load object file");
goto err_close_obj;
}
err = mount_bpffs_for_pin(pinfile);
if (err)
goto err_close_obj;
if (first_prog_only) {
prog = bpf_program__next(NULL, obj);
if (!prog) {
p_err("object file doesn't contain any bpf program");
goto err_close_obj;
}
err = bpf_obj_pin(bpf_program__fd(prog), pinfile);
if (err) {
p_err("failed to pin program %s",
bpf_program__title(prog, false));
goto err_close_obj;
}
} else {
err = bpf_object__pin_programs(obj, pinfile);
if (err) {
p_err("failed to pin all programs");
goto err_close_obj;
}
}
if (pinmaps) {
err = bpf_object__pin_maps(obj, pinmaps);
if (err) {
p_err("failed to pin all maps");
goto err_unpin;
}
}
if (json_output)
jsonw_null(json_wtr);
bpf_object__close(obj);
for (i = 0; i < old_map_fds; i++)
close(map_replace[i].fd);
free(map_replace);
return 0;
err_unpin:
if (first_prog_only)
unlink(pinfile);
else
bpf_object__unpin_programs(obj, pinfile);
err_close_obj:
bpf_object__close(obj);
err_free_reuse_maps:
for (i = 0; i < old_map_fds; i++)
close(map_replace[i].fd);
free(map_replace);
return -1;
}
static int do_load(int argc, char **argv)
{
return load_with_options(argc, argv, true);
}
static int do_loadall(int argc, char **argv)
{
return load_with_options(argc, argv, false);
}
bpftool: Introduce "prog profile" command With fentry/fexit programs, it is possible to profile BPF program with hardware counters. Introduce bpftool "prog profile", which measures key metrics of a BPF program. bpftool prog profile command creates per-cpu perf events. Then it attaches fentry/fexit programs to the target BPF program. The fentry program saves perf event value to a map. The fexit program reads the perf event again, and calculates the difference, which is the instructions/cycles used by the target program. Example input and output: ./bpftool prog profile id 337 duration 3 cycles instructions llc_misses 4228 run_cnt 3403698 cycles (84.08%) 3525294 instructions # 1.04 insn per cycle (84.05%) 13 llc_misses # 3.69 LLC misses per million isns (83.50%) This command measures cycles and instructions for BPF program with id 337 for 3 seconds. The program has triggered 4228 times. The rest of the output is similar to perf-stat. In this example, the counters were only counting ~84% of the time because of time multiplexing of perf counters. Note that, this approach measures cycles and instructions in very small increments. So the fentry/fexit programs introduce noticeable errors to the measurement results. The fentry/fexit programs are generated with BPF skeletons. Therefore, we build bpftool twice. The first time _bpftool is built without skeletons. Then, _bpftool is used to generate the skeletons. The second time, bpftool is built with skeletons. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200309173218.2739965-2-songliubraving@fb.com
2020-03-10 00:32:15 +07:00
#ifdef BPFTOOL_WITHOUT_SKELETONS
static int do_profile(int argc, char **argv)
{
p_err("bpftool prog profile command is not supported. Please build bpftool with clang >= 10.0.0");
bpftool: Introduce "prog profile" command With fentry/fexit programs, it is possible to profile BPF program with hardware counters. Introduce bpftool "prog profile", which measures key metrics of a BPF program. bpftool prog profile command creates per-cpu perf events. Then it attaches fentry/fexit programs to the target BPF program. The fentry program saves perf event value to a map. The fexit program reads the perf event again, and calculates the difference, which is the instructions/cycles used by the target program. Example input and output: ./bpftool prog profile id 337 duration 3 cycles instructions llc_misses 4228 run_cnt 3403698 cycles (84.08%) 3525294 instructions # 1.04 insn per cycle (84.05%) 13 llc_misses # 3.69 LLC misses per million isns (83.50%) This command measures cycles and instructions for BPF program with id 337 for 3 seconds. The program has triggered 4228 times. The rest of the output is similar to perf-stat. In this example, the counters were only counting ~84% of the time because of time multiplexing of perf counters. Note that, this approach measures cycles and instructions in very small increments. So the fentry/fexit programs introduce noticeable errors to the measurement results. The fentry/fexit programs are generated with BPF skeletons. Therefore, we build bpftool twice. The first time _bpftool is built without skeletons. Then, _bpftool is used to generate the skeletons. The second time, bpftool is built with skeletons. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200309173218.2739965-2-songliubraving@fb.com
2020-03-10 00:32:15 +07:00
return 0;
}
#else /* BPFTOOL_WITHOUT_SKELETONS */
#include "profiler.skel.h"
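/* Metrics that "prog profile" can collect, one perf event per metric and per
 * CPU. ratio_metric/ratio_desc/ratio_mul describe an optional derived ratio
 * printed next to the raw counter (e.g. instructions per cycle), computed
 * against the metric at index ratio_metric - 1.
 */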
struct profile_metric {
const char *name;
struct bpf_perf_event_value val;
struct perf_event_attr attr;
bool selected;
/* calculate ratios like instructions per cycle */
const int ratio_metric; /* 0 for N/A, 1 for index 0 (cycles) */
const char *ratio_desc;
const float ratio_mul;
} metrics[] = {
{
.name = "cycles",
.attr = {
.type = PERF_TYPE_HARDWARE,
.config = PERF_COUNT_HW_CPU_CYCLES,
.exclude_user = 1,
},
},
{
.name = "instructions",
.attr = {
.type = PERF_TYPE_HARDWARE,
.config = PERF_COUNT_HW_INSTRUCTIONS,
.exclude_user = 1,
},
.ratio_metric = 1,
.ratio_desc = "insns per cycle",
.ratio_mul = 1.0,
},
{
.name = "l1d_loads",
.attr = {
.type = PERF_TYPE_HW_CACHE,
.config =
PERF_COUNT_HW_CACHE_L1D |
(PERF_COUNT_HW_CACHE_OP_READ << 8) |
(PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16),
.exclude_user = 1,
},
},
{
.name = "llc_misses",
.attr = {
.type = PERF_TYPE_HW_CACHE,
.config =
PERF_COUNT_HW_CACHE_LL |
(PERF_COUNT_HW_CACHE_OP_READ << 8) |
(PERF_COUNT_HW_CACHE_RESULT_MISS << 16),
.exclude_user = 1
},
.ratio_metric = 2,
.ratio_desc = "LLC misses per million insns",
.ratio_mul = 1e6,
},
};
static __u64 profile_total_count;
#define MAX_NUM_PROFILE_METRICS 4
static int profile_parse_metrics(int argc, char **argv)
{
unsigned int metric_cnt;
int selected_cnt = 0;
unsigned int i;
metric_cnt = sizeof(metrics) / sizeof(struct profile_metric);
while (argc > 0) {
for (i = 0; i < metric_cnt; i++) {
if (is_prefix(argv[0], metrics[i].name)) {
if (!metrics[i].selected)
selected_cnt++;
metrics[i].selected = true;
break;
}
}
if (i == metric_cnt) {
p_err("unknown metric %s", argv[0]);
return -1;
}
NEXT_ARG();
}
if (selected_cnt > MAX_NUM_PROFILE_METRICS) {
p_err("too many (%d) metrics, please specify no more than %d metrics at at time",
selected_cnt, MAX_NUM_PROFILE_METRICS);
return -1;
}
return selected_cnt;
}
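/* Collect results from the profiler skeleton: sum the per-CPU run counts
 * into profile_total_count and accumulate the per-CPU perf readings of each
 * selected metric into metrics[].val.
 */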
static void profile_read_values(struct profiler_bpf *obj)
{
__u32 m, cpu, num_cpu = obj->rodata->num_cpu;
int reading_map_fd, count_map_fd;
__u64 counts[num_cpu];
__u32 key = 0;
int err;
reading_map_fd = bpf_map__fd(obj->maps.accum_readings);
count_map_fd = bpf_map__fd(obj->maps.counts);
if (reading_map_fd < 0 || count_map_fd < 0) {
p_err("failed to get fd for map");
return;
}
err = bpf_map_lookup_elem(count_map_fd, &key, counts);
if (err) {
p_err("failed to read count_map: %s", strerror(errno));
return;
}
profile_total_count = 0;
for (cpu = 0; cpu < num_cpu; cpu++)
profile_total_count += counts[cpu];
for (m = 0; m < ARRAY_SIZE(metrics); m++) {
struct bpf_perf_event_value values[num_cpu];
if (!metrics[m].selected)
continue;
err = bpf_map_lookup_elem(reading_map_fd, &key, values);
if (err) {
p_err("failed to read reading_map: %s",
strerror(errno));
return;
}
for (cpu = 0; cpu < num_cpu; cpu++) {
metrics[m].val.counter += values[cpu].counter;
metrics[m].val.enabled += values[cpu].enabled;
metrics[m].val.running += values[cpu].running;
}
key++;
}
}
static void profile_print_readings_json(void)
{
__u32 m;
jsonw_start_array(json_wtr);
for (m = 0; m < ARRAY_SIZE(metrics); m++) {
if (!metrics[m].selected)
continue;
jsonw_start_object(json_wtr);
jsonw_string_field(json_wtr, "metric", metrics[m].name);
jsonw_lluint_field(json_wtr, "run_cnt", profile_total_count);
jsonw_lluint_field(json_wtr, "value", metrics[m].val.counter);
jsonw_lluint_field(json_wtr, "enabled", metrics[m].val.enabled);
jsonw_lluint_field(json_wtr, "running", metrics[m].val.running);
jsonw_end_object(json_wtr);
}
jsonw_end_array(json_wtr);
}
static void profile_print_readings_plain(void)
{
__u32 m;
printf("\n%18llu %-20s\n", profile_total_count, "run_cnt");
for (m = 0; m < ARRAY_SIZE(metrics); m++) {
struct bpf_perf_event_value *val = &metrics[m].val;
int r;
if (!metrics[m].selected)
continue;
printf("%18llu %-20s", val->counter, metrics[m].name);
r = metrics[m].ratio_metric - 1;
if (r >= 0 && metrics[r].selected &&
metrics[r].val.counter > 0) {
printf("# %8.2f %-30s",
val->counter * metrics[m].ratio_mul /
metrics[r].val.counter,
metrics[m].ratio_desc);
} else {
printf("%-41s", "");
}
if (val->enabled > val->running)
printf("(%4.2f%%)",
val->running * 100.0 / val->enabled);
printf("\n");
}
}
static void profile_print_readings(void)
{
if (json_output)
profile_print_readings_json();
else
profile_print_readings_plain();
}
static char *profile_target_name(int tgt_fd)
{
struct bpf_prog_info_linear *info_linear;
struct bpf_func_info *func_info;
const struct btf_type *t;
char *name = NULL;
struct btf *btf;
info_linear = bpf_program__get_prog_info_linear(
tgt_fd, 1UL << BPF_PROG_INFO_FUNC_INFO);
if (IS_ERR_OR_NULL(info_linear)) {
p_err("failed to get info_linear for prog FD %d", tgt_fd);
return NULL;
}
if (info_linear->info.btf_id == 0 ||
btf__get_from_id(info_linear->info.btf_id, &btf)) {
p_err("prog FD %d doesn't have valid btf", tgt_fd);
goto out;
}
func_info = (struct bpf_func_info *)(info_linear->info.func_info);
t = btf__type_by_id(btf, func_info[0].type_id);
if (!t) {
p_err("btf %d doesn't have type %d",
info_linear->info.btf_id, func_info[0].type_id);
goto out;
}
name = strdup(btf__name_by_offset(btf, t->name_off));
out:
free(info_linear);
return name;
}
static struct profiler_bpf *profile_obj;
static int profile_tgt_fd = -1;
static char *profile_tgt_name;
static int *profile_perf_events;
static int profile_perf_event_cnt;
static void profile_close_perf_events(struct profiler_bpf *obj)
{
int i;
for (i = profile_perf_event_cnt - 1; i >= 0; i--)
close(profile_perf_events[i]);
free(profile_perf_events);
profile_perf_event_cnt = 0;
}
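/* Open one perf event per selected metric and per CPU, store its fd in the
 * skeleton's "events" map (indexed by profile_perf_event_cnt) and enable it.
 */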
static int profile_open_perf_events(struct profiler_bpf *obj)
{
unsigned int cpu, m;
int map_fd, pmu_fd;
profile_perf_events = calloc(
sizeof(int), obj->rodata->num_cpu * obj->rodata->num_metric);
if (!profile_perf_events) {
p_err("failed to allocate memory for perf_event array: %s",
strerror(errno));
return -1;
}
map_fd = bpf_map__fd(obj->maps.events);
if (map_fd < 0) {
p_err("failed to get fd for events map");
return -1;
}
for (m = 0; m < ARRAY_SIZE(metrics); m++) {
if (!metrics[m].selected)
continue;
for (cpu = 0; cpu < obj->rodata->num_cpu; cpu++) {
pmu_fd = syscall(__NR_perf_event_open, &metrics[m].attr,
-1/*pid*/, cpu, -1/*group_fd*/, 0);
if (pmu_fd < 0 ||
bpf_map_update_elem(map_fd, &profile_perf_event_cnt,
&pmu_fd, BPF_ANY) ||
ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0)) {
p_err("failed to create event %s on cpu %d",
metrics[m].name, cpu);
return -1;
}
profile_perf_events[profile_perf_event_cnt++] = pmu_fd;
}
}
return 0;
}
static void profile_print_and_cleanup(void)
{
profile_close_perf_events(profile_obj);
profile_read_values(profile_obj);
profile_print_readings();
profiler_bpf__destroy(profile_obj);
close(profile_tgt_fd);
free(profile_tgt_name);
}
static void int_exit(int signo)
{
profile_print_and_cleanup();
exit(0);
}
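/* Handler for "bpftool prog profile": parse the target program and optional
 * duration, size the profiler skeleton's maps for num_metric x num_cpu, set
 * the fentry/fexit attach target to the profiled program, load and attach
 * the skeleton, then sleep for the requested duration (or until SIGINT)
 * before printing the readings.
 */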
static int do_profile(int argc, char **argv)
{
int num_metric, num_cpu, err = -1;
struct bpf_program *prog;
unsigned long duration;
char *endptr;
/* we at least need two args for the prog and one metric */
if (!REQ_ARGS(3))
return -EINVAL;
/* parse target fd */
profile_tgt_fd = prog_parse_fd(&argc, &argv);
if (profile_tgt_fd < 0) {
p_err("failed to parse fd");
return -1;
}
/* parse profiling optional duration */
if (argc > 2 && is_prefix(argv[0], "duration")) {
NEXT_ARG();
duration = strtoul(*argv, &endptr, 0);
if (*endptr)
usage();
NEXT_ARG();
} else {
duration = UINT_MAX;
}
num_metric = profile_parse_metrics(argc, argv);
if (num_metric <= 0)
goto out;
num_cpu = libbpf_num_possible_cpus();
if (num_cpu <= 0) {
p_err("failed to identify number of CPUs");
goto out;
}
profile_obj = profiler_bpf__open();
if (!profile_obj) {
p_err("failed to open and/or load BPF object");
goto out;
}
profile_obj->rodata->num_cpu = num_cpu;
profile_obj->rodata->num_metric = num_metric;
/* adjust map sizes */
bpf_map__resize(profile_obj->maps.events, num_metric * num_cpu);
bpf_map__resize(profile_obj->maps.fentry_readings, num_metric);
bpf_map__resize(profile_obj->maps.accum_readings, num_metric);
bpf_map__resize(profile_obj->maps.counts, 1);
/* change target name */
profile_tgt_name = profile_target_name(profile_tgt_fd);
if (!profile_tgt_name)
goto out;
bpf_object__for_each_program(prog, profile_obj->obj) {
err = bpf_program__set_attach_target(prog, profile_tgt_fd,
profile_tgt_name);
if (err) {
p_err("failed to set attach target\n");
goto out;
}
}
set_max_rlimit();
err = profiler_bpf__load(profile_obj);
if (err) {
p_err("failed to load profile_obj");
goto out;
}
err = profile_open_perf_events(profile_obj);
if (err)
goto out;
err = profiler_bpf__attach(profile_obj);
if (err) {
p_err("failed to attach profile_obj");
goto out;
}
signal(SIGINT, int_exit);
sleep(duration);
profile_print_and_cleanup();
return 0;
out:
profile_close_perf_events(profile_obj);
if (profile_obj)
profiler_bpf__destroy(profile_obj);
close(profile_tgt_fd);
free(profile_tgt_name);
return err;
}
#endif /* BPFTOOL_WITHOUT_SKELETONS */
static int do_help(int argc, char **argv)
{
if (json_output) {
jsonw_null(json_wtr);
return 0;
}
fprintf(stderr,
"Usage: %1$s %2$s { show | list } [PROG]\n"
" %1$s %2$s dump xlated PROG [{ file FILE | opcodes | visual | linum }]\n"
" %1$s %2$s dump jited PROG [{ file FILE | opcodes | linum }]\n"
" %1$s %2$s pin PROG FILE\n"
" %1$s %2$s { load | loadall } OBJ PATH \\\n"
" [type TYPE] [dev NAME] \\\n"
" [map { idx IDX | name NAME } MAP]\\\n"
" [pinmaps MAP_DIR]\n"
" %1$s %2$s attach PROG ATTACH_TYPE [MAP]\n"
" %1$s %2$s detach PROG ATTACH_TYPE [MAP]\n"
" %1$s %2$s run PROG \\\n"
" data_in FILE \\\n"
" [data_out FILE [data_size_out L]] \\\n"
" [ctx_in FILE [ctx_out FILE [ctx_size_out M]]] \\\n"
" [repeat N]\n"
" %1$s %2$s profile PROG [duration DURATION] METRICs\n"
" %1$s %2$s tracelog\n"
" %1$s %2$s help\n"
"\n"
" " HELP_SPEC_MAP "\n"
" " HELP_SPEC_PROGRAM "\n"
" TYPE := { socket | kprobe | kretprobe | classifier | action |\n"
" tracepoint | raw_tracepoint | xdp | perf_event | cgroup/skb |\n"
" cgroup/sock | cgroup/dev | lwt_in | lwt_out | lwt_xmit |\n"
" lwt_seg6local | sockops | sk_skb | sk_msg | lirc_mode2 |\n"
" sk_reuseport | flow_dissector | cgroup/sysctl |\n"
" cgroup/bind4 | cgroup/bind6 | cgroup/post_bind4 |\n"
" cgroup/post_bind6 | cgroup/connect4 | cgroup/connect6 |\n"
" cgroup/getpeername4 | cgroup/getpeername6 |\n"
" cgroup/getsockname4 | cgroup/getsockname6 | cgroup/sendmsg4 |\n"
" cgroup/sendmsg6 | cgroup/recvmsg4 | cgroup/recvmsg6 |\n"
" cgroup/getsockopt | cgroup/setsockopt |\n"
" struct_ops | fentry | fexit | freplace }\n"
" ATTACH_TYPE := { msg_verdict | stream_verdict | stream_parser |\n"
" flow_dissector }\n"
bpftool: Introduce "prog profile" command With fentry/fexit programs, it is possible to profile BPF program with hardware counters. Introduce bpftool "prog profile", which measures key metrics of a BPF program. bpftool prog profile command creates per-cpu perf events. Then it attaches fentry/fexit programs to the target BPF program. The fentry program saves perf event value to a map. The fexit program reads the perf event again, and calculates the difference, which is the instructions/cycles used by the target program. Example input and output: ./bpftool prog profile id 337 duration 3 cycles instructions llc_misses 4228 run_cnt 3403698 cycles (84.08%) 3525294 instructions # 1.04 insn per cycle (84.05%) 13 llc_misses # 3.69 LLC misses per million isns (83.50%) This command measures cycles and instructions for BPF program with id 337 for 3 seconds. The program has triggered 4228 times. The rest of the output is similar to perf-stat. In this example, the counters were only counting ~84% of the time because of time multiplexing of perf counters. Note that, this approach measures cycles and instructions in very small increments. So the fentry/fexit programs introduce noticeable errors to the measurement results. The fentry/fexit programs are generated with BPF skeletons. Therefore, we build bpftool twice. The first time _bpftool is built without skeletons. Then, _bpftool is used to generate the skeletons. The second time, bpftool is built with skeletons. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200309173218.2739965-2-songliubraving@fb.com
2020-03-10 00:32:15 +07:00
" METRIC := { cycles | instructions | l1d_loads | llc_misses }\n"
" " HELP_SPEC_OPTIONS "\n"
"",
bin_name, argv[-2]);
return 0;
}
static const struct cmd cmds[] = {
{ "show", do_show },
{ "list", do_show },
{ "help", do_help },
{ "dump", do_dump },
{ "pin", do_pin },
{ "load", do_load },
{ "loadall", do_loadall },
{ "attach", do_attach },
{ "detach", do_detach },
{ "tracelog", do_tracelog },
{ "run", do_run },
bpftool: Introduce "prog profile" command With fentry/fexit programs, it is possible to profile BPF program with hardware counters. Introduce bpftool "prog profile", which measures key metrics of a BPF program. bpftool prog profile command creates per-cpu perf events. Then it attaches fentry/fexit programs to the target BPF program. The fentry program saves perf event value to a map. The fexit program reads the perf event again, and calculates the difference, which is the instructions/cycles used by the target program. Example input and output: ./bpftool prog profile id 337 duration 3 cycles instructions llc_misses 4228 run_cnt 3403698 cycles (84.08%) 3525294 instructions # 1.04 insn per cycle (84.05%) 13 llc_misses # 3.69 LLC misses per million isns (83.50%) This command measures cycles and instructions for BPF program with id 337 for 3 seconds. The program has triggered 4228 times. The rest of the output is similar to perf-stat. In this example, the counters were only counting ~84% of the time because of time multiplexing of perf counters. Note that, this approach measures cycles and instructions in very small increments. So the fentry/fexit programs introduce noticeable errors to the measurement results. The fentry/fexit programs are generated with BPF skeletons. Therefore, we build bpftool twice. The first time _bpftool is built without skeletons. Then, _bpftool is used to generate the skeletons. The second time, bpftool is built with skeletons. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200309173218.2739965-2-songliubraving@fb.com
2020-03-10 00:32:15 +07:00
{ "profile", do_profile },
{ 0 }
};
int do_prog(int argc, char **argv)
{
return cmd_select(cmds, argc, argv, do_help);
}