  1. May 19, 2007
    • Handle negative strides much more optimally. · e8bd53c3
      Chris Lattner authored
      This compiles X86/lsr-negative-stride.ll into:
      
      _t:
              movl 8(%esp), %ecx
              movl 4(%esp), %eax
              cmpl %ecx, %eax
              je LBB1_3       #bb17
      LBB1_1: #bb
              cmpl %ecx, %eax
              jg LBB1_4       #cond_true
      LBB1_2: #cond_false
              subl %eax, %ecx
              cmpl %ecx, %eax
              jne LBB1_1      #bb
      LBB1_3: #bb17
              ret
      LBB1_4: #cond_true
              subl %ecx, %eax
              cmpl %ecx, %eax
              jne LBB1_1      #bb
              jmp LBB1_3      #bb17
      
      instead of:
      
      _t:
              subl $4, %esp
              movl %esi, (%esp)
              movl 12(%esp), %ecx
              movl 8(%esp), %eax
              cmpl %ecx, %eax
              je LBB1_4       #bb17
      LBB1_1: #bb.outer
              movl %ecx, %edx
              negl %edx
      LBB1_2: #bb
              cmpl %ecx, %eax
              jle LBB1_5      #cond_false
      LBB1_3: #cond_true
              addl %edx, %eax
              cmpl %ecx, %eax
              jne LBB1_2      #bb
      LBB1_4: #bb17
              movl (%esp), %esi
              addl $4, %esp
              ret
      LBB1_5: #cond_false
              movl %ecx, %edx
              subl %eax, %edx
              movl %eax, %esi
              addl %esi, %esi
              cmpl %ecx, %esi
              je LBB1_4       #bb17
      LBB1_6: #cond_false.bb.outer_crit_edge
              movl %edx, %ecx
              jmp LBB1_1      #bb.outer
      
      llvm-svn: 37252
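
      For context, here is a hedged sketch of the kind of source such a test
      exercises (an assumption for illustration; the actual
      X86/lsr-negative-stride.ll may differ): a subtract-until-equal loop in
      which each arm steps one value downward by the other, i.e. an induction
      variable with a negative stride.

          int t(int a, int b) {
            // Loop until the two values meet; each arm subtracts one value
            // from the other, so the updated variable advances by a negative,
            // loop-variant stride.
            while (a != b) {
              if (a > b)
                a -= b;
              else
                b -= a;
            }
            return a;
          }

      Note how the improved output above keeps the whole loop in the two
      incoming registers, with no frame setup, no %esi spill, and no
      separately negated copy of the stride (the negl in the old code).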
  2. Apr 13, 2007
    • Now that codegen prepare isn't defeating me, I can finally fix what I set out to do! :) · efd3051d
      Chris Lattner authored
      
      This fixes a problem where LSR would insert a separate copy of a
      particular subexpression (e.g. IV+base+C) into each MBB that uses it.
      Once that code has been inserted into different blocks, it cannot be
      CSE'd back together.
      
      This patch changes LSR to attempt to insert a single copy of this code and
      share it, allowing codegenprepare to duplicate the code if it can be sunk
      into various addressing modes.  On CodeGen/ARM/lsr-code-insertion.ll,
      for example, this gives us code like:
      
              add r8, r0, r5
              str r6, [r8, #+4]
      ..
              ble LBB1_4      @cond_next
      LBB1_3: @cond_true
              str r10, [r8, #+4]
      LBB1_4: @cond_next
      ...
      LBB1_5: @cond_true55
              ldr r6, LCPI1_1
              str r6, [r8, #+4]
      
      instead of:
      
              add r10, r0, r6
              str r8, [r10, #+4]
      ...
              ble LBB1_4      @cond_next
      LBB1_3: @cond_true
              add r8, r0, r6
              str r10, [r8, #+4]
      LBB1_4: @cond_next
      ...
      LBB1_5: @cond_true55
              add r8, r0, r6
              ldr r10, LCPI1_1
              str r10, [r8, #+4]
      
      Besides being smaller and more efficient, this makes it immediately
      obvious that it is profitable to predicate LBB1_3 now :)
      
      llvm-svn: 35972
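
      For context, here is a hedged C-level sketch of the shape of code being
      described (an assumption for illustration; the actual
      CodeGen/ARM/lsr-code-insertion.ll may differ): several conditional
      blocks inside a loop body store through the same strength-reduced
      address, the IV+base+C subexpression that LSR now materializes once and
      shares.

          void update(int *base, int n, int a, int b, int special) {
            for (int i = 0; i < n; ++i) {
              int *slot = &base[4 * i + 1];  // the shared IV+base+C address
              *slot = a;                     // unconditional store
              if (a > b)
                *slot = b;                   // cond_true: reuses the address
              if (i == special)
                *slot = 0x12345678;          // cond_true55: constant store
            }
          }

      Because only one copy of the address computation exists, codegen prepare
      can still duplicate it and sink copies into reg+offset addressing modes
      where that is profitable.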
  3. Jan 08, 2007
    • For PR1097: · bf96e02a
      Reid Spencer authored
      Enable complex addressing modes on 64-bit platforms involving two
      induction variables by keeping the size and scale in 64 bits rather
      than 32.
      Patch by Dan Gohman.
      
      llvm-svn: 33011
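
      For context, here is a hedged sketch of the situation this enables (an
      illustrative assumption only, not the PR1097 test case): on a 64-bit
      target an address built from two induction variables can involve a
      stride or scale that does not fit in 32 bits, so tracking those fields
      in only 32 bits discards the addressing mode.

          #include <cstdint>

          int64_t sum(const int64_t *base, int64_t n, int64_t m) {
            int64_t s = 0;
            for (int64_t i = 0; i < n; ++i)        // first induction variable
              for (int64_t j = 0; j < m; ++j)      // second induction variable
                s += base[i * 0x200000000LL + j];  // scale wider than 32 bits
            return s;
          }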
  4. Dec 31, 2006
    • For PR950: · c635f47d
      Reid Spencer authored
      This patch replaces signed integer types with signless ones:
      1. [US]Byte -> Int8
      2. [U]Short -> Int16
      3. [U]Int   -> Int32
      4. [U]Long  -> Int64.
      5. Removal of isSigned, isUnsigned, getSignedVersion, getUnsignedVersion
         and other methods related to signedness. In a few places this warranted
         identifying the signedness information from other sources.
      
      llvm-svn: 32785
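
      For context, here is a hedged sketch of what the rename means for code
      built on the C++ API (the helper below is an assumption for
      illustration, not taken from the patch): the signed/unsigned type pairs
      collapse into single signless types, and signedness must now be
      recovered from the operations that use a value.

          #include "llvm/Type.h"
          using namespace llvm;

          // Hypothetical helper: before this patch a front-end picked either
          // the signed or the unsigned 32-bit type; afterwards both requests
          // resolve to the one signless 32-bit type.
          const Type *get32BitIntType(bool /*wasSigned*/) {
            return Type::Int32Ty;  // formerly Type::IntTy or Type::UIntTy
          }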
  5. Dec 23, 2006
    • For PR950: · 266e42b3
      Reid Spencer authored
      This patch removes the SetCC instructions and replaces them with the
      ICmp and FCmp instructions. The SetCondInst class has been removed and
      replaced with ICmpInst and FCmpInst.
      
      llvm-svn: 32751
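
      For context, here is a hedged sketch of building a comparison after this
      change (the helper and insertion point are assumptions for illustration;
      the constructor form is the LLVM 2.x-era one): the condition now lives
      in a predicate carried by ICmpInst/FCmpInst rather than in a distinct
      SetCC opcode.

          #include "llvm/Instructions.h"
          using namespace llvm;

          // Hypothetical helper: emit a signed "less than" comparison.
          Value *emitSignedLess(Value *LHS, Value *RHS,
                                Instruction *InsertBefore) {
            // Old form removed by this patch, roughly:
            //   new SetCondInst(Instruction::SetLT, LHS, RHS, "cmp", InsertBefore);
            return new ICmpInst(ICmpInst::ICMP_SLT, LHS, RHS, "cmp", InsertBefore);
          }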