  1. Sep 12, 2017
    • bpf: add " ll" in the LD_IMM64 asmstring · be9c0034
      Yonghong Song authored

      This partially reverts the previous fix in commit f5858045aa0b
      ("bpf: proper print imm64 expression in inst printer").
      
      In that commit, the original "ll" suffix was removed from the
      LD_IMM64 asmstring. In the custom print method, the "ll" suffix
      is printed only if the rhs is an immediate. For example,
      "r2 = 5ll" => "r2 = 5ll", and "r3 = varll" => "r3 = var".
      
      This has an issue for the assembler, though. Since the assembler
      relies on the asmstring to do pattern matching, it is not able to
      distinguish between "mov r2, 5" and "ld_imm64 r2, 5", since both
      asmstrings are "r2 = 5". In such cases, the assembler uses a
      64-bit load for all "r = <val>" asm instructions.
      
      This patch adds back the " ll" suffix for ld_imm64, with one
      additional space for the "#reg = #global_var" case.
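
      A minimal sketch of the resulting assembler syntax, using the
      instruction forms quoted above (the annotations are illustrative,
      not lifted from the backend's .td file):

          r2 = 5          (mov, 32-bit immediate)
          r2 = 5 ll       (ld_imm64, 64-bit immediate load)
          r3 = var ll     (ld_imm64 of a global variable)

      With the " ll" suffix restored, the mov and ld_imm64 forms no
      longer share the asmstring "r2 = 5", so the assembler can match
      each form unambiguously.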
      
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      llvm-svn: 312978
  2. Apr 14, 2017
    • [bpf] Fix memory offset check for loads and stores · 56db1451
      Alexei Starovoitov authored

      If the offset cannot fit into the instruction, an addition to the
      pointer is emitted before the actual access. However, BPF offsets
      are 16-bit, while LLVM, for the purpose of this check, considered
      them to be 32-bit long.
      
      This causes the following program:
      
      int bpf_prog1(void *ign)
      {
              volatile unsigned long t = 0x8983984739ull;
              return *(unsigned long *)((0xffffffff8fff0002ull) + t);
      }
      
      to generate the following (wrong) code:
      
      0: 18 01 00 00 39 47 98 83 00 00 00 00 89 00 00 00  r1 = 590618314553ll
      2: 7b 1a f8 ff 00 00 00 00 *(u64 *)(r10 - 8) = r1
      3: 79 a1 f8 ff 00 00 00 00 r1 = *(u64 *)(r10 - 8)
      4: 79 10 02 00 00 00 00 00 r0 = *(u64 *)(r1 + 2)
      5: 95 00 00 00 00 00 00 00 exit
      
      Fix it by changing the offset check to 16-bit.
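
      As a sketch of why the 32-bit check passes while the encoding is
      still wrong, consider the folded constant from the program above.
      fits_signed below is an illustrative stand-in for the kind of
      signed-range check the backend performs, not LLVM's actual API:

          #include <stdint.h>
          #include <stdio.h>

          /* Illustrative signed-range check; the name is hypothetical. */
          static int fits_signed(int64_t v, unsigned bits)
          {
                  return v >= -(INT64_C(1) << (bits - 1)) &&
                         v <   (INT64_C(1) << (bits - 1));
          }

          int main(void)
          {
                  /* The constant pointer from the example, viewed as a
                   * signed displacement: -1895825406. */
                  int64_t off = (int64_t)0xffffffff8fff0002ull;

                  printf("%d\n", fits_signed(off, 32)); /* 1: old check passes */
                  printf("%d\n", fits_signed(off, 16)); /* 0: fixed check rejects */

                  /* Only 16 bits are encoded, so the emitted displacement
                   * is the truncated value: the bogus "+ 2" in insn 4. */
                  printf("%d\n", (int16_t)off);         /* prints 2 */
                  return 0;
          }

      With the 16-bit check, such an offset is rejected and the backend
      emits the pointer addition instead of a truncated displacement.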
      
      Patch by Nadav Amit <nadav.amit@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Differential Revision: https://reviews.llvm.org/D32055
      
      llvm-svn: 300269