  Apr 13, 2007
    • Now that codegen prepare isn't defeating me, I can finally fix what I set out to do! :) · efd3051d
      Chris Lattner authored
      
      This fixes a problem where LSR would insert a bunch of code into each MBB
      that uses a particular subexpression (e.g. IV+base+C).  The problem is that
      this code cannot be CSE'd back together if inserted into different blocks.
      
      This patch changes LSR to attempt to insert a single copy of this code and
      share it, allowing codegenprepare to duplicate the code if it can be sunk
      into various addressing modes.  On CodeGen/ARM/lsr-code-insertion.ll,
      for example, this gives us code like:
      
              add r8, r0, r5
              str r6, [r8, #+4]
      ...
              ble LBB1_4      @cond_next
      LBB1_3: @cond_true
              str r10, [r8, #+4]
      LBB1_4: @cond_next
      ...
      LBB1_5: @cond_true55
              ldr r6, LCPI1_1
              str r6, [r8, #+4]
      
      instead of:
      
              add r10, r0, r6
              str r8, [r10, #+4]
      ...
              ble LBB1_4      @cond_next
      LBB1_3: @cond_true
              add r8, r0, r6
              str r10, [r8, #+4]
      LBB1_4: @cond_next
      ...
      LBB1_5: @cond_true55
              add r8, r0, r6
              ldr r10, LCPI1_1
              str r10, [r8, #+4]
      
      Besides being smaller and more efficient, this makes it immediately
      obvious that it is profitable to predicate LBB1_3 now :)
      
      llvm-svn: 35972
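      To make the scenario concrete, the following C sketch is a hypothetical
      illustration (assumed for this summary, not the actual
      lsr-code-insertion.ll test case) of the kind of loop involved: several
      conditional blocks store through the same IV+base+C address, which LSR
      can now materialize once and share, with codegenprepare folding it back
      into each store's addressing mode where that is profitable.

              /* Hypothetical sketch, not the real test case: three stores in
               * different blocks all address base[i + 1], i.e. the same
               * IV+base+C subexpression.  With this patch LSR emits that
               * address computation once (the single "add r8, r0, r5" above)
               * instead of re-emitting it in every block that uses it. */
              void f(int *base, int n, int a, int b, int c) {
                for (int i = 0; i < n; ++i) {
                  base[i + 1] = a;     /* unconditional store (cond_next)   */
                  if (i & 1)
                    base[i + 1] = b;   /* cond_true-style block             */
                  if (i > c)
                    base[i + 1] = c;   /* cond_true55-style block           */
                }
              }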
    • Completely rewrite addressing-mode related sinking of code. · feee64e9
      Chris Lattner authored

      In particular, this fixes problems where codegenprepare would sink expressions into load/stores
      that are not valid, and fixes cases where it would miss important valid ones.
      
      This fixes several serious codesize and perf issues, particularly on targets
      with complex addressing modes like arm and x86.  For example, now we compile
      CodeGen/X86/isel-sink.ll to:
      
      _test:
              movl 8(%esp), %eax
              movl 4(%esp), %ecx
              cmpl $1233, %eax
              ja LBB1_2       #F
      LBB1_1: #T
              movl $4, (%ecx,%eax,4)
              movl $141, %eax
              ret
      LBB1_2: #F
              movl (%ecx,%eax,4), %eax
              ret
      
      instead of:
      
      _test:
              movl 8(%esp), %eax
              leal (,%eax,4), %ecx
              addl 4(%esp), %ecx
              cmpl $1233, %eax
              ja LBB1_2       #F
      LBB1_1: #T
              movl $4, (%ecx)
              movl $141, %eax
              ret
      LBB1_2: #F
              movl (%ecx), %eax
              ret
      
      llvm-svn: 35970
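      As a hypothetical illustration (reconstructed from the assembly above and
      assumed, not the literal contents of isel-sink.ll), the C below shows the
      pattern being optimized: one address, &P[X], is used by both the store in
      the taken branch and the load in the fall-through, so sinking the address
      computation into each user lets the (%ecx,%eax,4) addressing mode absorb
      it instead of paying for an eager lea/add before the branch.

              /* Hypothetical reconstruction from the generated code above. */
              int test(int *P, unsigned X) {
                if (X < 1234) {        /* matches the "cmpl $1233; ja" guard */
                  P[X] = 4;            /* store folds into (%ecx,%eax,4)     */
                  return 141;
                }
                return P[X];           /* load folds into (%ecx,%eax,4)      */
              }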
      Remove use of SlowOperationInformer. · 38705d54
      Devang Patel authored
      llvm-svn: 35967
      Undo previous check-in. · b730fe57
      Devang Patel authored
      llvm-svn: 35966
      Hello uses LLVMSupport.a (SlowOperationInformer) · f929b861
      Devang Patel authored
      llvm-svn: 35965