  1. Oct 26, 2007
    • Loosen up iv reuse to allow reuse of the same stride but a larger type when truncating from the larger type to the smaller type is free. · 7f3d0247
      Evan Cheng authored
      e.g., it turns this loop:
      LBB1_1: # entry.bb_crit_edge
              xorl    %ecx, %ecx
              xorw    %dx, %dx
              movw    %dx, %si
      LBB1_2: # bb
              movl    L_X$non_lazy_ptr, %edi
              movw    %si, (%edi)
              movl    L_Y$non_lazy_ptr, %edi
              movw    %dx, (%edi)
        addw    $4, %dx
        incw    %si
        incl    %ecx
        cmpl    %eax, %ecx
        jne     LBB1_2  # bb
      into
      
      LBB1_1: # entry.bb_crit_edge
              xorl    %ecx, %ecx
              xorw    %dx, %dx
      LBB1_2: # bb
              movl    L_X$non_lazy_ptr, %esi
              movw    %cx, (%esi)
              movl    L_Y$non_lazy_ptr, %esi
              movw    %dx, (%esi)
              addw    $4, %dx
        incl    %ecx
              cmpl    %eax, %ecx
              jne     LBB1_2  # bb
      
      llvm-svn: 43375
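The loops above can be reproduced from source of roughly this shape (a hypothetical C reconstruction; the function and variable names are illustrative). Two narrow 16-bit induction variables run alongside the wide 32-bit trip counter, and after this change the stride-1 16-bit counter can reuse the 32-bit one, because truncating i32 to i16 is free on x86:

```c
#include <stdint.h>

int16_t X, Y;

/* Hypothetical loop of the kind the commit targets: three induction
 * variables with the same stride 1, two of them narrower than the
 * loop counter, plus one stride-4 IV. */
void store_loop(int32_t n) {
    int16_t s = 0;                        /* narrow IV, stride 1 */
    int16_t d = 0;                        /* narrow IV, stride 4 */
    for (int32_t i = 0; i < n; ++i) {     /* wide IV, stride 1   */
        X = s;                            /* movw ..., (L_X...)  */
        Y = d;                            /* movw ..., (L_Y...)  */
        d += 4;
        ++s;   /* after the change: s == (int16_t)i, so this IV
                * and its incw/xorw setup can be eliminated      */
    }
}
```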
  2. Oct 19, 2007
    • Add support for byval functions whose arguments are not 32-bit aligned. · 846c19dd
      Rafael Espindola authored
      To do this it is necessary to add an "always inline" argument to the
      memcpy node. For completeness I have also added this argument to
      memmove and memset.  I have also added getMem* functions, because the
      extra argument makes it cumbersome to use getNode and because I get
      confused by it :-)
      
      llvm-svn: 43172
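For background, byval means the callee receives its own stack copy of the aggregate, which the backend lowers to a memcpy; this commit lets x86 emit that copy even when the argument's alignment is below 32 bits. A minimal sketch of that semantics (the packed struct and names are illustrative, not from the commit):

```c
/* A packed struct has alignment 1, i.e. below the 4-byte (32-bit)
 * alignment the x86 byval lowering previously assumed. */
struct __attribute__((packed)) Pair {
    char tag;
    int  v;
};

/* The callee mutates its own copy and the caller's object is
 * untouched -- exactly the copy the byval memcpy provides. */
int bump(struct Pair p) {
    p.v += 1;
    return p.v;
}
```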
  3. Sep 23, 2007
    • Fix PR 1681. When X86 target uses +sse -sse2, · e36c4002
      Dale Johannesen authored
      keep f32 in SSE registers and f64 in x87.  This
      is effectively a new codegen mode.
      Change addLegalFPImmediate to permit float and
      double variants to do different things.
      Adjust callers.
      
      llvm-svn: 42246
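For illustration, under this mode (reachable via -msse -mno-sse2) two otherwise identical functions end up in different register files: the float version can stay in SSE registers, while the double version must go through the x87 stack, since SSE1 has no double-precision instructions. The function names here are mine; the C source itself is unaffected by the commit:

```c
/* With +sse -sse2, f32 arithmetic like mul_add_f() can use SSE
 * (mulss/addss), while f64 arithmetic like mul_add_d() falls back
 * to the x87 stack (fmul/fadd). */
float  mul_add_f(float a, float b)   { return a * b + 1.0f; }
double mul_add_d(double a, double b) { return a * b + 1.0;  }
```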
  4. Sep 15, 2007
    • Remove the assumption that FP's are either float or · 98d3a08d
      Dale Johannesen authored
      double from some of the many places in the optimizers
      it appears, and do something reasonable with x86
      long double.
      Make APInt::dump() public, remove newline, use it to
      dump ConstantSDNode's.
      Allow APFloats in FoldingSet.
      Expand X86 backend handling of long doubles (conversions
      to/from int, mostly).
      
      llvm-svn: 41967
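The type in question is the x87 80-bit extended format, which is strictly wider than double and therefore cannot be modeled by either of the two types the optimizers previously assumed. A small check of that premise, using standard <float.h> macros (a sketch; the helper name is mine):

```c
#include <float.h>

/* On x86, long double is the 80-bit x87 extended type: a 64-bit
 * significand versus double's 53 bits. The commit stops the
 * optimizers from assuming every FP value is a float or a double. */
int long_double_is_at_least_double(void) {
    return LDBL_MANT_DIG >= DBL_MANT_DIG   /* 64 vs 53 on x86 */
        && sizeof(long double) >= sizeof(double);
}
```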
  5. Sep 06, 2007
    • Next round of APFloat changes. · bed9dc42
      Dale Johannesen authored
      Use APFloat in UpgradeParser and AsmParser.
      Change all references to ConstantFP to use the
      APFloat interface rather than double.  Remove
      the ConstantFP double interfaces.
      Use APFloat functions for constant folding arithmetic
      and comparisons.
      (There are still way too many places APFloat is
      just a wrapper around host float/double, but we're
      getting there.)
      
      llvm-svn: 41747
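One motivation for folding through APFloat rather than the host's double: when the target type is wider than double (e.g. x86 long double), folding via double silently drops precision. A sketch of that premise (the helper name is mine; this is not APFloat itself):

```c
#include <float.h>

/* Folding 1.0L/3.0L through the host's double loses the low bits of
 * the wider result; APFloat folds in the target's own precision. */
int folding_via_double_loses_bits(void) {
    long double exact = 1.0L / 3.0L;            /* target precision */
    long double via_d = (double)(1.0L / 3.0L);  /* folded as double */
    /* Where long double is wider than double, the two must differ. */
    return (exact != via_d) || (LDBL_MANT_DIG == DBL_MANT_DIG);
}
```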