  Feb 07, 2008
      Fix an x86-64 codegen deficiency: allow GV + offset when using the RIP addressing mode. · a20a7736
      Evan Cheng authored
      Before:
      _main:
              subq    $8, %rsp
              leaq    _X(%rip), %rax
              movsd   8(%rax), %xmm1
              movss   _X(%rip), %xmm0
              call    _t
              xorl    %ecx, %ecx
              movl    %ecx, %eax
              addq    $8, %rsp
              ret
      Now:
      _main:
              subq    $8, %rsp
              movsd   _X+8(%rip), %xmm1
              movss   _X(%rip), %xmm0
              call    _t
              xorl    %ecx, %ecx
              movl    %ecx, %eax
              addq    $8, %rsp
              ret
      
      Note that there is another idiotic codegen issue that needs to be fixed ASAP:
      %eax is zeroed through %ecx instead of with a single xorl %eax, %eax:
      xorl    %ecx, %ecx
      movl    %ecx, %eax
      
      llvm-svn: 46850
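
      For context, here is a minimal C sketch of the kind of source that produces this
      pattern. The names X and t are taken from the assembly above, but the exact layout
      of X is an assumption chosen so that its second field sits at offset 8. With the
      GV + offset form allowed, X.d is loaded with a single movsd _X+8(%rip) instead of
      first materializing the address of _X in a register.

          /* Hypothetical source; X and t are assumed to be defined elsewhere. */
          struct S {
              float  f;   /* loaded by movss _X(%rip), %xmm0   */
              double d;   /* loaded by movsd _X+8(%rip), %xmm1 */
          };

          extern struct S X;
          extern void t(float f, double d);

          int main(void) {
              t(X.f, X.d);   /* X.d needs the GV + offset address _X+8 */
              return 0;      /* main returns 0, hence the xorl/movl pair above */
          }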
  Jan 23, 2008
      The last pieces needed for loading arbitrary precision integers. · 95d46ef8
      Duncan Sands authored
      This won't actually work (and most of the code is dead) unless the new
      legalization machinery is turned on. While there, I rationalized the
      handling of i1 and removed some bogus (and unused) sextload patterns.
      For i1, this could result in microscopically better code for some
      architectures (not X86). It might also result in worse code if annotating
      with AssertZExt nodes turns out to be more harmful than helpful.
      
      llvm-svn: 46280
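
      As a rough C-level illustration of the i1 point (this example is mine, not part of
      the commit): when a _Bool is loaded from memory, annotating the load with AssertZExt
      tells later passes that the loaded byte is already 0 or 1, which can let some targets
      drop an explicit masking instruction; per the commit, X86 sees no difference.

          #include <stdbool.h>

          /* The i1 load of *flag is the kind of load that gets annotated with
           * AssertZExt during legalization; the select below can then rely on
           * the loaded byte being exactly 0 or 1. */
          int pick(const bool *flag) {
              return *flag ? 10 : 20;
          }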
  Jan 17, 2008
      This commit changes: · 1ea55cf8
      Chris Lattner authored
      1. Legalize now always promotes truncstore of i1 to i8. 
      2. Remove patterns and gunk related to truncstore i1 from targets.
      3. Rename the StoreXAction stuff to TruncStoreAction in TLI.
      4. Make the TLI TruncStoreAction table a 2d table to handle from/to conversions.
      5. Mark a wide variety of invalid truncstores as such in various targets, e.g.
         X86 currently doesn't support truncstore of any of its integer types.
      6. Add legalize support for truncstores with invalid value input types.
      7. Add a dag combine transform to turn store(truncate) into truncstore when
         safe.
      
      The latter allows us to compile CodeGen/X86/storetrunc-fp.ll to:
      
      _foo:
      	fldt	20(%esp)
      	fldt	4(%esp)
      	faddp	%st(1)
      	movl	36(%esp), %eax
      	fstps	(%eax)
      	ret
      
      instead of:
      
      _foo:
      	subl	$4, %esp
      	fldt	24(%esp)
      	fldt	8(%esp)
      	faddp	%st(1)
      	fstps	(%esp)
      	movl	40(%esp), %eax
      	movss	(%esp), %xmm0
      	movss	%xmm0, (%eax)
      	addl	$4, %esp
      	ret
      
      llvm-svn: 46140
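
      For reference, a plausible C equivalent of the storetrunc-fp.ll test (a reconstruction
      from the assembly above, not the actual test source): the float store of a long double
      sum is the store(truncate) that the new dag combine turns into a single truncating fstps.

          /* Hypothetical source: an x87 long double addition whose result is
           * truncated and stored through a float pointer, i.e.
           * store(fptrunc(fadd a, b)) becomes one truncating store. */
          void foo(long double a, long double b, float *out) {
              *out = (float)(a + b);
          }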
  Oct 19, 2007
      Local spiller optimization: · 35ff7937
      Evan Cheng authored
      Turn a store-folding instruction into a load-folding instruction, e.g.:
           xorl  %edi, %eax
           movl  %eax, -32(%ebp)
           movl  -36(%ebp), %eax
           orl   %eax, -32(%ebp)
      =>
           xorl  %edi, %eax
           orl   -36(%ebp), %eax
           mov   %eax, -32(%ebp)
      This enables the unfolding optimization for a subsequent instruction, which
      will also eliminate the newly introduced store instruction.
      
      llvm-svn: 43192
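
      Read as C over the two stack slots (the slot names and 32-bit types below are
      illustrative, not from the commit), both sequences compute the same value; the
      rewritten form keeps the intermediate in a register and issues a single store at
      the end, which is what makes the later unfolding of that store possible.

          #include <stdint.h>

          /* Before: the xor result is spilled, then orl operates on the spill slot. */
          uint32_t before(uint32_t eax, uint32_t edi, uint32_t slot_m36) {
              uint32_t slot_m32 = eax ^ edi;   /* xorl %edi, %eax ; movl %eax, -32(%ebp)     */
              slot_m32 |= slot_m36;            /* movl -36(%ebp), %eax ; orl %eax, -32(%ebp) */
              return slot_m32;
          }

          /* After: the load of -36(%ebp) is folded into the orl and the result is
           * stored to -32(%ebp) once at the end. */
          uint32_t after(uint32_t eax, uint32_t edi, uint32_t slot_m36) {
              uint32_t reg = eax ^ edi;        /* xorl %edi, %eax      */
              reg |= slot_m36;                 /* orl  -36(%ebp), %eax */
              return reg;                      /* movl %eax, -32(%ebp) */
          }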