  4. Jun 19, 2007
      Rename ScalarEvolution::deleteInstructionFromRecords to · 32f53bbd
      Dan Gohman authored
      deleteValueFromRecords and loosen the types to allow it to accept
      Value* instead of just Instruction*, since this is what
      ScalarEvolution uses internally anyway. This allows more flexibility
      for future uses.
      
      llvm-svn: 37657
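The type loosening above can be sketched in miniature. The snippet below is a hypothetical toy (ScalarEvolutionLike and its members are invented for illustration, not LLVM's actual API): because the records are keyed by the base type, the deletion hook can take Value* directly and handle instructions and non-instruction values alike.

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy stand-ins for LLVM's IR hierarchy: Instruction is-a Value.
struct Value { std::string name; };
struct Instruction : Value {};

// Hypothetical miniature of the pattern in the commit: the analysis keys
// its internal records by Value*, so the deletion entry point gains nothing
// from insisting on the narrower Instruction* type.
struct ScalarEvolutionLike {
  std::map<Value *, int> records;  // keyed by Value*, as in the real analysis

  // Loosened signature: any Value* is accepted, not just Instruction*.
  void deleteValueFromRecords(Value *V) { records.erase(V); }
};
```

With the old Instruction*-only signature, a caller holding a plain Value* would have needed a cast (or could not call at all); with the loosened one, both kinds of key can be scrubbed from the records uniformly.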
  5. Jun 15, 2007
      Add a SCEV class and supporting code for sign-extend expressions. · cb9e09ad
      Dan Gohman authored
      This created an ambiguity for expandInTy to decide when to use
      sign-extension or zero-extension, but it turns out that most of its callers
      don't actually need a type conversion, now that LLVM types don't have
      explicit signedness. Drop expandInTy in favor of plain expand, and change
      the few places that actually need a type conversion to do it themselves.
      
      llvm-svn: 37591
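The ambiguity the commit resolves can be seen in a small C++ sketch (the helper names are invented for illustration): once types carry no signedness, widening an 8-bit value to 32 bits has two legitimate expansions, so an expander asked to produce "this expression in that type" cannot pick one on the caller's behalf.

```cpp
#include <cassert>
#include <cstdint>

// Two valid ways to widen the same 8 bits to 32 bits. With signless types,
// nothing about the value itself says which one is meant -- the caller has
// to choose, which is why expandInTy was dropped in favor of plain expand.
int32_t signExtend8(uint8_t bits) {
  return static_cast<int8_t>(bits);   // sext: replicate the top bit
}
int32_t zeroExtend8(uint8_t bits) {
  return static_cast<int32_t>(bits);  // zext: fill with zeros
}
```

For the bit pattern 0xFF the two expansions disagree (-1 vs 255), which is exactly the decision the few callers that truly need a conversion now make themselves.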
  8. May 19, 2007
      Handle negative strides much more optimally. This compiles X86/lsr-negative-stride.ll · e8bd53c3
      Chris Lattner authored
      into:
      
      _t:
              movl 8(%esp), %ecx
              movl 4(%esp), %eax
              cmpl %ecx, %eax
              je LBB1_3       #bb17
      LBB1_1: #bb
              cmpl %ecx, %eax
              jg LBB1_4       #cond_true
      LBB1_2: #cond_false
              subl %eax, %ecx
              cmpl %ecx, %eax
              jne LBB1_1      #bb
      LBB1_3: #bb17
              ret
      LBB1_4: #cond_true
              subl %ecx, %eax
              cmpl %ecx, %eax
              jne LBB1_1      #bb
              jmp LBB1_3      #bb17
      
      instead of:
      
      _t:
              subl $4, %esp
              movl %esi, (%esp)
              movl 12(%esp), %ecx
              movl 8(%esp), %eax
              cmpl %ecx, %eax
              je LBB1_4       #bb17
      LBB1_1: #bb.outer
              movl %ecx, %edx
              negl %edx
      LBB1_2: #bb
              cmpl %ecx, %eax
              jle LBB1_5      #cond_false
      LBB1_3: #cond_true
              addl %edx, %eax
              cmpl %ecx, %eax
              jne LBB1_2      #bb
      LBB1_4: #bb17
              movl (%esp), %esi
              addl $4, %esp
              ret
      LBB1_5: #cond_false
              movl %ecx, %edx
              subl %eax, %edx
              movl %eax, %esi
              addl %esi, %esi
              cmpl %ecx, %esi
              je LBB1_4       #bb17
      LBB1_6: #cond_false.bb.outer_crit_edge
              movl %edx, %ecx
              jmp LBB1_1      #bb.outer
      
      llvm-svn: 37252
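The control flow in the improved assembly (bb, cond_true, cond_false, bb17, with a subl on each path) matches a subtraction-style loop that steps each variable downward, i.e. a negative stride on both. A hypothetical C++ reconstruction of that kind of loop follows; the actual source of X86/lsr-negative-stride.ll may differ in detail:

```cpp
#include <cassert>

// Plausible shape of the test case: a subtraction-based GCD loop.
// Each arm decrements one variable by the other -- the negative strides
// that the commit teaches LSR to handle without the extra negl/copies
// seen in the "instead of" assembly.
int t(int a, int b) {
  while (a != b) {   // bb: cmpl %ecx, %eax / jne
    if (a > b)       // jg cond_true
      a -= b;        // cond_true: subl %ecx, %eax
    else
      b -= a;        // cond_false: subl %eax, %ecx
  }
  return a;          // bb17: ret
}
```

In the old code LSR materialized the negated stride up front (negl %edx in bb.outer) and kept it live across the loop; the new code subtracts directly, shrinking both the instruction count and register pressure.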
  16. Apr 13, 2007
      Now that codegen prepare isn't defeating me, I can finally fix what I set · efd3051d
      Chris Lattner authored
      out to do! :)
      
      This fixes a problem where LSR would insert a bunch of code into each MBB
      that uses a particular subexpression (e.g. IV+base+C).  The problem is that
      this code cannot be CSE'd back together if inserted into different blocks.
      
      This patch changes LSR to attempt to insert a single copy of this code and
      share it, allowing codegenprepare to duplicate the code if it can be sunk
      into various addressing modes.  On CodeGen/ARM/lsr-code-insertion.ll,
      for example, this gives us code like:
      
              add r8, r0, r5
              str r6, [r8, #+4]
      ..
              ble LBB1_4      @cond_next
      LBB1_3: @cond_true
              str r10, [r8, #+4]
      LBB1_4: @cond_next
      ...
      LBB1_5: @cond_true55
              ldr r6, LCPI1_1
              str r6, [r8, #+4]
      
      instead of:
      
              add r10, r0, r6
              str r8, [r10, #+4]
      ...
              ble LBB1_4      @cond_next
      LBB1_3: @cond_true
              add r8, r0, r6
              str r10, [r8, #+4]
      LBB1_4: @cond_next
      ...
      LBB1_5: @cond_true55
              add r8, r0, r6
              ldr r10, LCPI1_1
              str r10, [r8, #+4]
      
      Besides being smaller and more efficient, this makes it immediately
      obvious that it is profitable to predicate LBB1_3 now :)
      
      llvm-svn: 35972
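The insertion strategy can be illustrated with a small C++ analogy (the function names are invented, and this is source-level arithmetic standing in for LSR's IR rewriting): recomputing the subexpression at every use is like evaluating base + iv + c separately in each branch, while the patched LSR materializes one shared copy that codegenprepare is then free to duplicate or sink into addressing modes where that pays off.

```cpp
#include <cassert>

// Before the patch (analogy): each using "block" rebuilds IV+base+C on its
// own, and once the copies live in different blocks they cannot be CSE'd
// back together.
int perUseCopies(int base, int iv, int c, bool takeTrue) {
  if (takeTrue)
    return (base + iv + c) * 2;  // block 1 recomputes the sum
  return (base + iv + c) * 3;    // block 2 recomputes it again
}

// After the patch (analogy): one shared copy feeds every user.
int sharedCopy(int base, int iv, int c, bool takeTrue) {
  int addr = base + iv + c;      // single insertion point for the subexpression
  return takeTrue ? addr * 2 : addr * 3;
}
```

Both forms compute the same results; the difference is where the common subexpression lives, which is what makes the shared form smaller and, as the commit notes, exposes the profitability of predicating LBB1_3.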