  1. Mar 24, 2011
  2. Mar 23, 2011
  3. Mar 22, 2011
  4. Mar 21, 2011
  5. Mar 20, 2011
  6. Mar 19, 2011
    • Revert r127953, "SimplifyCFG has stopped duplicating returns into predecessors to canonicalize IR", it broke a lot of things. · 327cd36f
      Daniel Dunbar authored
      
      llvm-svn: 127954
    • SimplifyCFG has stopped duplicating returns into predecessors to canonicalize IR to have a single return block (at least it is getting there) for optimizations. · 824a7113
      Evan Cheng authored
      This is generally a good thing, but it would prevent some tail call optimizations.
      One specific case is code like this:
      int f1(void);
      int f2(void);
      int f3(void);
      int f4(void);
      int f5(void);
      int f6(void);
      int foo(int x) {
        switch(x) {
        case 1: return f1();
        case 2: return f2();
        case 3: return f3();
        case 4: return f4();
        case 5: return f5();
        case 6: return f6();
        }
      }
      
      =>
      LBB0_2:                                 ## %sw.bb
        callq   _f1
        popq    %rbp
        ret
      LBB0_3:                                 ## %sw.bb1
        callq   _f2
        popq    %rbp
        ret
      LBB0_4:                                 ## %sw.bb3
        callq   _f3
        popq    %rbp
        ret
      
      This patch teaches CodeGenPrepare to duplicate returns when the return value
      is a phi whose incoming values are produced by tail calls, each followed by
      an unconditional branch:
      
      sw.bb7:                                           ; preds = %entry
        %call8 = tail call i32 @f5() nounwind
        br label %return
      sw.bb9:                                           ; preds = %entry
        %call10 = tail call i32 @f6() nounwind
        br label %return
      return:
        %retval.0 = phi i32 [ %call10, %sw.bb9 ], [ %call8, %sw.bb7 ], ... [ 0, %entry ]
        ret i32 %retval.0
      
      This allows codegen to generate better code like this:
      
      LBB0_2:                                 ## %sw.bb
              jmp     _f1                     ## TAILCALL
      LBB0_3:                                 ## %sw.bb1
              jmp     _f2                     ## TAILCALL
      LBB0_4:                                 ## %sw.bb3
              jmp     _f3                     ## TAILCALL
      
      rdar://9147433
      
      llvm-svn: 127953
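
      As an editorial aside, here is a minimal C++ sketch of the eligibility check described
      above, not the actual CodeGenPrepare change: it accepts a return block whose returned
      value is a phi fed only by tail calls that immediately precede an unconditional branch
      into that block. The helper name mayDuplicateRetForTailCalls is made up, the LLVM API
      used is a current one rather than the 2011 one, and the real patch also tolerates other
      incoming values (such as the constant 0 from %entry in the IR above).

      #include "llvm/IR/BasicBlock.h"
      #include "llvm/IR/Instructions.h"
      #include "llvm/Support/Casting.h"

      using namespace llvm;

      // Return true if the 'ret' of a phi in RetBB could be duplicated into each
      // predecessor, letting the tail calls there become real tail calls (jmp).
      static bool mayDuplicateRetForTailCalls(BasicBlock &RetBB) {
        auto *Ret = dyn_cast<ReturnInst>(RetBB.getTerminator());
        if (!Ret || !Ret->getReturnValue())
          return false;

        auto *PN = dyn_cast<PHINode>(Ret->getReturnValue());
        if (!PN || PN->getParent() != &RetBB)
          return false;

        for (unsigned I = 0, E = PN->getNumIncomingValues(); I != E; ++I) {
          auto *CI = dyn_cast<CallInst>(PN->getIncomingValue(I));
          if (!CI || !CI->isTailCall())
            return false;                 // every incoming value must be a tail call
          BasicBlock *Pred = PN->getIncomingBlock(I);
          auto *BI = dyn_cast<BranchInst>(Pred->getTerminator());
          if (!BI || !BI->isUnconditional() || BI->getSuccessor(0) != &RetBB)
            return false;                 // predecessor must branch straight to the ret block
          if (CI->getParent() != Pred || CI->getNextNode() != BI)
            return false;                 // the call must be immediately followed by that branch
        }
        return true;                      // safe to duplicate the ret into each predecessor
      }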
    • If an AllocaInst referred to by a DbgDeclareInst is used by a LoadInst, then the LoadInst should also get a corresponding llvm.dbg.value intrinsic. · 2c7ee270
      Devang Patel authored
      
      llvm-svn: 127924
    • Remove dead code. · 3ac171d4
      Devang Patel authored
      llvm-svn: 127923
    • Devang Patel · c1431e6e
  7. Mar 18, 2011
  8. Mar 17, 2011
  9. Mar 16, 2011
  10. Mar 15, 2011
  11. Mar 14, 2011
  12. Mar 13, 2011
    • Add a comment as follows: · b7538c71
      Jin-Gu Kang authored
      When a load and a store reference the same memory location, the memory location
      is represented by a getelementptr with exactly two uses (the load and the store), and
      the getelementptr's base is an alloca with a single use, then the instructions from the
      alloca through the store can be removed.
      (This pattern is generated when a bitfield is accessed.)
      For example,
      %u = alloca %struct.test, align 4               ; [#uses=1]
      %0 = getelementptr inbounds %struct.test* %u, i32 0, i32 0;[#uses=2]
      %1 = load i8* %0, align 4                       ; [#uses=1]
      %2 = and i8 %1, -16                             ; [#uses=1]
      %3 = or i8 %2, 5                                ; [#uses=1]
      store i8 %3, i8* %0, align 4
      
      llvm-svn: 127565
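
      As an editorial aside, a plausible C source (also valid C++) that produces the pattern in
      the IR above is sketched below. The struct layout is an assumption chosen so that the
      first bitfield occupies the low four bits of an i8 storage unit and the struct is 4-byte
      aligned; a front end then lowers the assignment as load i8, and i8 -16 (clear the low
      nibble), or i8 5, store i8, and since the local is otherwise dead, the alloca-to-store
      sequence is removable exactly as the comment describes.

      struct test {
        unsigned a : 4;   // the assignment below touches only this low nibble
        unsigned b : 4;
        int pad;          // assumed field giving the struct 4-byte alignment
      };

      void set_bitfield(void) {
        struct test u;    // local is never read afterwards
        u.a = 5;          // becomes: load i8, and i8 -16, or i8 5, store i8
      }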
  13. Mar 12, 2011