  1. Nov 11, 2008
  2. Oct 24, 2008
  3. Oct 17, 2008
  4. Aug 26, 2008
  5. Jul 17, 2008
  6. Jun 24, 2008
  7. May 06, 2008
  8. Apr 21, 2008
  9. Mar 28, 2008
  10. Mar 23, 2008
  11. Mar 19, 2008
  12. Mar 14, 2008
  13. Mar 08, 2008
  14. Mar 06, 2008
  15. Mar 02, 2008
  16. Feb 28, 2008
  17. Feb 27, 2008
  18. Feb 21, 2008
  19. Feb 18, 2008
  20. Feb 17, 2008
  21. Feb 16, 2008
  22. Feb 14, 2008
  23. Jan 11, 2008
    • add a note, remove a done deed. · ff5998e6
      Chris Lattner authored
      llvm-svn: 45869
    • Improve tail call optimized call's argument lowering. Before this · 6cf72fbb
      Arnold Schwaighofer authored
      commit, all arguments were moved to the stack slot where they would
      reside on a normal function call before being lowered to the tail call
      stack slot. This was done to prevent arguments from overwriting each
      other. Now only arguments sourced from a FORMAL_ARGUMENTS node or a
      CopyFromReg node with a virtual register (which could also be a
      caller's argument) are lowered indirectly.
      
      llvm-svn: 45867
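      The "arguments overwriting each other" hazard in the commit above is easiest
      to see at the source level. With tail call optimization on x86, the callee's
      outgoing arguments are written into the caller's own incoming-argument stack
      slots, so an outgoing value that is itself one of the caller's incoming
      arguments (a FORMAL_ARGUMENTS value) can be clobbered if its slot is stored
      to before it has been read. The following is a hypothetical C++ illustration
      of that situation, not code from the commit; the function names are made up
      and it assumes the call is actually lowered as a tail call.

        // Hypothetical example: a tail call that permutes the caller's own
        // incoming arguments. With enough arguments to spill to the stack,
        // the outgoing values for callee() target the same stack slots that
        // still hold caller()'s incoming values, so a naive in-place store
        // of one argument could overwrite another that has not been read yet.
        // Per the commit, only such values (from FORMAL_ARGUMENTS or from
        // CopyFromReg of a virtual register) are lowered indirectly instead
        // of being stored straight to their final tail-call slots.
        long callee(long a, long b, long c, long d, long e, long f, long g);

        long caller(long a, long b, long c, long d, long e, long f, long g) {
            return callee(g, f, e, d, c, b, a);  // tail call with permuted args
        }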
  24. Jan 09, 2008
  25. Jan 07, 2008
  26. Dec 29, 2007
  27. Dec 28, 2007
  28. Dec 24, 2007
  29. Dec 18, 2007
  30. Dec 05, 2007
  31. Nov 24, 2007
  32. Nov 02, 2007
  33. Oct 28, 2007
  34. Oct 26, 2007
    • Loosen up iv reuse to allow reuse of the same stride but a larger type when... · 7f3d0247
      Evan Cheng authored
      Loosen up iv reuse to allow reuse of the same stride but a larger type when truncating from the larger type to the smaller type is free.
      e.g.
      Turns this loop:
      LBB1_1: # entry.bb_crit_edge
              xorl    %ecx, %ecx
              xorw    %dx, %dx
              movw    %dx, %si
      LBB1_2: # bb
              movl    L_X$non_lazy_ptr, %edi
              movw    %si, (%edi)
              movl    L_Y$non_lazy_ptr, %edi
              movw    %dx, (%edi)
        addw    $4, %dx
        incw    %si
        incl    %ecx
        cmpl    %eax, %ecx
        jne     LBB1_2  # bb

      into
      
      LBB1_1: # entry.bb_crit_edge
              xorl    %ecx, %ecx
              xorw    %dx, %dx
      LBB1_2: # bb
              movl    L_X$non_lazy_ptr, %esi
              movw    %cx, (%esi)
              movl    L_Y$non_lazy_ptr, %esi
              movw    %dx, (%esi)
              addw    $4, %dx
        incl    %ecx
              cmpl    %eax, %ecx
              jne     LBB1_2  # bb
      
      llvm-svn: 43375
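      For reference, a hypothetical C++ loop (not taken from the commit) that would
      exhibit the pattern shown above: two i16 stores to the globals X and Y, one
      driven by a stride-1 counter and one by a stride-4 counter. The function and
      parameter names are made up; X and Y correspond to the L_X$non_lazy_ptr and
      L_Y$non_lazy_ptr references in the assembly.

        extern short X, Y;

        // Hypothetical source for the loop shown above.
        void f(int n) {
            for (int i = 0; i < n; ++i) {
                X = (short)i;        // before: separate 16-bit IV in %si (incw)
                Y = (short)(4 * i);  // 16-bit IV with stride 4 in %dx (addw $4)
            }
        }

      After the change, the store to X reuses the 32-bit loop counter in %ecx
      directly, since truncating i32 to i16 is free on x86, and the extra incw %si
      induction variable disappears, as the second block shows.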