
Commit 7c2e988

Author: Alexei Starovoitov

bpf: fix x64 JIT code generation for jmp to 1st insn
Introduction of bounded loops exposed an old bug in the x64 JIT. The JIT
maintains an array of offsets to the end of all instructions to compute
jmp offsets:
  addrs[0] - offset of the end of the 1st insn (that includes prologue).
  addrs[1] - offset of the end of the 2nd insn.
The JIT didn't keep the offset of the beginning of the 1st insn, since
classic BPF didn't have backward jumps and valid extended BPF couldn't
have a branch to the 1st insn, because it didn't allow loops. With
bounded loops it's possible to construct a valid program that jumps
backwards to the 1st insn.

Fix the JIT by computing:
  addrs[0] - offset of the end of the prologue == start of the 1st insn.
  addrs[1] - offset of the end of the 1st insn.

v1->v2:
- Yonghong noticed a bug in jit linfo. Fix it by passing 'addrs + 1' to
  bpf_prog_fill_jited_linfo(), since it expects the insn_to_jit_off
  array to hold offsets to the last byte of each instruction.

Reported-by: [email protected]
Fixes: 2589726 ("bpf: introduce bounded loops")
Fixes: 0a14842 ("net: filter: Just In Time compiler for x86-64")
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
1 parent 3415ec6 commit 7c2e988

File tree

1 file changed: +5 -4 lines changed


arch/x86/net/bpf_jit_comp.c

Lines changed: 5 additions & 4 deletions
@@ -390,8 +390,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 
 	emit_prologue(&prog, bpf_prog->aux->stack_depth,
 		      bpf_prog_was_classic(bpf_prog));
+	addrs[0] = prog - temp;
 
-	for (i = 0; i < insn_cnt; i++, insn++) {
+	for (i = 1; i <= insn_cnt; i++, insn++) {
 		const s32 imm32 = insn->imm;
 		u32 dst_reg = insn->dst_reg;
 		u32 src_reg = insn->src_reg;
@@ -1105,7 +1106,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 		extra_pass = true;
 		goto skip_init_addrs;
 	}
-	addrs = kmalloc_array(prog->len, sizeof(*addrs), GFP_KERNEL);
+	addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
 	if (!addrs) {
 		prog = orig_prog;
 		goto out_addrs;
@@ -1115,7 +1116,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	 * Before first pass, make a rough estimation of addrs[]
 	 * each BPF instruction is translated to less than 64 bytes
 	 */
-	for (proglen = 0, i = 0; i < prog->len; i++) {
+	for (proglen = 0, i = 0; i <= prog->len; i++) {
 		proglen += 64;
 		addrs[i] = proglen;
 	}
@@ -1180,7 +1181,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 
 	if (!image || !prog->is_func || extra_pass) {
 		if (image)
-			bpf_prog_fill_jited_linfo(prog, addrs);
+			bpf_prog_fill_jited_linfo(prog, addrs + 1);
 out_addrs:
 	kfree(addrs);
 	kfree(jit_data);
