  1. Nov 09, 2007
  2. Nov 07, 2007
  3. Nov 06, 2007
  4. Nov 05, 2007
    • Use movups to spill / restore SSE registers on targets where stack alignment is · 9337929a
      Evan Cheng authored
      less than 16. This is a temporary solution until dynamic stack alignment is
      implemented.
      
      llvm-svn: 43703
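      Illustration (not part of the commit; exact codegen depends on the compiler and target): a C fragment with more live __m128 values than x86-32 has XMM registers can force the allocator to spill one. When the target's stack is aligned to less than 16 bytes, the spill slot may be unaligned, so the spill/restore must use movups (unaligned move) rather than movaps, which faults on unaligned addresses.

        /* Hypothetical register-pressure example: nine live __m128 values
           exceed the eight XMM registers available on x86-32, so one is
           likely spilled to a stack slot.  Whether a spill actually happens
           depends on the compiler. */
        #include <xmmintrin.h>

        __m128 pressure(const float *p) {
            __m128 a = _mm_loadu_ps(p +  0), b = _mm_loadu_ps(p +  4);
            __m128 c = _mm_loadu_ps(p +  8), d = _mm_loadu_ps(p + 12);
            __m128 e = _mm_loadu_ps(p + 16), f = _mm_loadu_ps(p + 20);
            __m128 g = _mm_loadu_ps(p + 24), h = _mm_loadu_ps(p + 28);
            __m128 i = _mm_loadu_ps(p + 32);   /* ninth live value */
            return _mm_add_ps(_mm_mul_ps(a, b),
                   _mm_add_ps(_mm_mul_ps(c, d),
                   _mm_add_ps(_mm_mul_ps(e, f),
                   _mm_add_ps(_mm_mul_ps(g, h), i))));
        }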
    • Eliminate the remaining uses of getTypeSize. This · 283207a7
      Duncan Sands authored
      should only affect x86 when using long double.  Now
      12/16 bytes are output for long double globals (the
      exact amount depends on the alignment).  This brings
      globals in line with the rest of LLVM: the space
      reserved for an object is now always the ABI size.
      One tricky point is that only 10 bytes should be
      output for long double if it is a field in a packed
      struct, which is the reason for the additional
      argument to EmitGlobalConstant.
      
      llvm-svn: 43688
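      Illustration (not from the commit; exact sizes depend on the target data layout): the ABI size of long double differs from the 10 bytes of x87 data it actually contains, and the ABI size is what is now reserved for globals.

        /* Hypothetical check of long double's ABI size.  On typical 32-bit
           x86 targets sizeof(long double) is 12 (4-byte alignment); with
           16-byte alignment, as on x86-64, it is 16.  Only 10 of those
           bytes hold x87 data, which is why a field of a packed LLVM
           struct gets just 10 bytes. */
        #include <stdio.h>

        long double g = 1.0L;   /* space reserved for this global = ABI size */

        int main(void) {
            printf("sizeof(long double) = %zu\n", sizeof(long double));
            return 0;
        }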
  5. Nov 04, 2007
  6. Nov 02, 2007
  7. Nov 01, 2007
  8. Oct 31, 2007
  9. Oct 30, 2007
  10. Oct 29, 2007
  11. Oct 28, 2007
  12. Oct 26, 2007
    • Fix off-by-one stack offset computations (DWARF information) for callee-saved · d07d6a41
      Anton Korobeynikov authored
      registers in the case when the frame pointer was eliminated. This should fix
      miscellaneous random EH-related crashes when code is compiled with -fomit-frame-pointer.
      Thanks Duncan for nailing this bug!
      
      llvm-svn: 43381
    • Loosen up iv reuse to allow reuse of the same stride but a larger type when... · 7f3d0247
      Evan Cheng authored
      Loosen up iv reuse to allow reuse of the same stride but a larger type when truncating from the larger type to the smaller type is free.
      e.g.
      Turns this loop:
      LBB1_1: # entry.bb_crit_edge
              xorl    %ecx, %ecx
              xorw    %dx, %dx
              movw    %dx, %si
      LBB1_2: # bb
              movl    L_X$non_lazy_ptr, %edi
              movw    %si, (%edi)
              movl    L_Y$non_lazy_ptr, %edi
              movw    %dx, (%edi)
        addw    $4, %dx
        incw    %si
        incl    %ecx
        cmpl    %eax, %ecx
        jne     LBB1_2  # bb

      into
      
      LBB1_1: # entry.bb_crit_edge
              xorl    %ecx, %ecx
              xorw    %dx, %dx
      LBB1_2: # bb
              movl    L_X$non_lazy_ptr, %esi
              movw    %cx, (%esi)
              movl    L_Y$non_lazy_ptr, %esi
              movw    %dx, (%esi)
              addw    $4, %dx
        incl    %ecx
              cmpl    %eax, %ecx
              jne     LBB1_2  # bb
      
      llvm-svn: 43375
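      A reconstructed guess (not from the commit) at the kind of source loop involved: the 16-bit values stored to X and Y were each kept in their own induction variable; since truncating the 32-bit loop counter to 16 bits is free on x86, the stride-1 16-bit IV can now simply reuse the counter.

        /* Hypothetical source shape for the loop above. */
        extern short X, Y;

        void store_loop(int n) {
            int i;
            for (i = 0; i < n; ++i) {
                X = (short)i;        /* before: its own 16-bit IV (%si); after: %cx, the truncated counter */
                Y = (short)(4 * i);  /* stride-4 16-bit IV (%dx) in both versions */
            }
        }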
  13. Oct 22, 2007
  14. Oct 21, 2007
  15. Oct 20, 2007
  16. Oct 19, 2007
    • Local spiller optimization: · 35ff7937
      Evan Cheng authored
      Turn a store folding instruction into a load folding instruction. e.g.
           xorl  %edi, %eax
           movl  %eax, -32(%ebp)
           movl  -36(%ebp), %eax
           orl   %eax, -32(%ebp)
      =>
           xorl  %edi, %eax
           orl   -36(%ebp), %eax
           mov   %eax, -32(%ebp)
      This enables the unfolding optimization for a subsequent instruction which will
      also eliminate the newly introduced store instruction.
      
      llvm-svn: 43192
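      Roughly, in source terms (an illustration, not from the commit): with 'a' spilled to -32(%ebp) and 'b' living at -36(%ebp), the code computes a ^= x and then a |= b; instead of storing the xor result and folding the store into the or, the or now folds the load of b and a single store writes the final value.

        /* Hypothetical source shape; the interesting part is the spill-slot
           traffic the spiller generates for it under register pressure. */
        int combine(int x, int b, int edi_val) {
            int a = x ^ edi_val;   /* xorl %edi, %eax */
            a |= b;                /* before: movl %eax,-32(%ebp); movl -36(%ebp),%eax; orl %eax,-32(%ebp)
                                      after:  orl -36(%ebp),%eax; mov %eax,-32(%ebp) */
            return a;
        }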
    • Add support for byval functions whose arguments are not 32-bit aligned. · 846c19dd
      Rafael Espindola authored
      To do this it is necessary to add an "always inline" argument to the
      memcpy node. For completeness I have also added this node to memmove
      and memset.  I have also added getMem* functions, because the extra
      argument makes it cumbersome to use getNode and because I get confused
      by it :-)
      
      llvm-svn: 43172
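      Illustration (not from the commit): a by-value struct argument whose size is 6 bytes and whose alignment is 1, so it is neither 32-bit sized nor 32-bit aligned. Lowering the call copies the argument onto the stack with a memcpy, which is the node that now carries the "always inline" flag.

        /* Hypothetical C example of a byval argument with size 6 and
           alignment 1; the call site must copy it onto the stack. */
        struct __attribute__((packed)) odd {
            char bytes[6];
        };

        int first_plus_last(struct odd s) {       /* passed byval on x86 */
            return s.bytes[0] + s.bytes[5];
        }

        int call_it(const struct odd *p) {
            return first_plus_last(*p);           /* copy emitted here */
        }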
    • - Added getOpcodeAfterMemoryUnfold(). It doesn't unfold an instruction, but... · 463e2ab0
      Evan Cheng authored
      - Added getOpcodeAfterMemoryUnfold(). It doesn't unfold an instruction; it only returns the opcode the instruction would have after unfolding.
      - Fix some copy+paste bugs.
      
      llvm-svn: 43153
  17. Oct 18, 2007
  18. Oct 17, 2007
  19. Oct 16, 2007
  20. Oct 15, 2007
  21. Oct 14, 2007