  1. Mar 10, 2008
  2. Mar 04, 2008
  3. Feb 27, 2008
    • Compile x86-64-and-mask.ll into: · 3c7d3d57
      Chris Lattner authored
      _test:
      	movl	%edi, %eax
      	ret
      
      instead of:
      
      _test:
              movl    $4294967295, %ecx
              movq    %rdi, %rax
              andq    %rcx, %rax
              ret
      
      It would be great to write this as a Pat pattern that used subregs 
      instead of a 'pseudo' instruction, but I don't know how to do that
      in td files.
      
      llvm-svn: 47658
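The equivalence behind this transformation can be sketched in Python (a toy model, not LLVM code; the input values are illustrative): on x86-64, writing a 32-bit register implicitly zeroes the upper 32 bits, so `movl %edi, %eax` already computes `rdi & 4294967295` and the explicit `andq` sequence is redundant.

```python
MASK32 = 4294967295  # 0xFFFFFFFF, the constant from the old three-instruction sequence

def and_mask(rdi: int) -> int:
    """Old code: materialize the mask, then 'andq %rcx, %rax'."""
    return rdi & MASK32

def movl(rdi: int) -> int:
    """New code: 'movl %edi, %eax' -- a 32-bit register write
    zero-extends into the full 64-bit register."""
    return rdi & 0xFFFFFFFF  # models the implicit zero-extension

for value in (0, 1, 0xDEADBEEF, 0x123456789ABCDEF0, 2**64 - 1):
    assert and_mask(value) == movl(value)
print("single movl matches the and-with-4294967295 sequence")
```

The mask is exactly the low 32 bits, which is why a plain register-to-register `movl` can replace the three-instruction sequence.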
  4. Feb 12, 2008
  5. Feb 07, 2008
    • Fix a x86-64 codegen deficiency. Allow gv + offset when using rip addressing mode. · a20a7736
      Evan Cheng authored
      Before:
      _main:
              subq    $8, %rsp
              leaq    _X(%rip), %rax
              movsd   8(%rax), %xmm1
              movss   _X(%rip), %xmm0
              call    _t
              xorl    %ecx, %ecx
              movl    %ecx, %eax
              addq    $8, %rsp
              ret
      Now:
      _main:
              subq    $8, %rsp
              movsd   _X+8(%rip), %xmm1
              movss   _X(%rip), %xmm0
              call    _t
              xorl    %ecx, %ecx
              movl    %ecx, %eax
              addq    $8, %rsp
              ret
      
      Notice there is another idiotic codegen issue that needs to be fixed asap:
      xorl    %ecx, %ecx
      movl    %ecx, %eax
      
      llvm-svn: 46850
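What the fix folds can be sketched in Python (a toy address model, not LLVM code; the load address of `_X` is a made-up value): instead of materializing the address of `_X` with `leaq` and then loading at displacement 8, the global-plus-offset is folded into a single RIP-relative operand `_X+8(%rip)`.

```python
X_ADDR = 0x100002000  # hypothetical load address of the global _X

def before(x_addr: int) -> int:
    """leaq _X(%rip), %rax ; movsd 8(%rax), %xmm1"""
    rax = x_addr       # leaq materializes the address of _X
    return rax + 8     # movsd then applies its own displacement

def after(x_addr: int) -> int:
    """movsd _X+8(%rip), %xmm1 -- the offset is folded
    into the RIP-relative displacement, saving the leaq."""
    return x_addr + 8

assert before(X_ADDR) == after(X_ADDR)
print("folded address:", hex(after(X_ADDR)))  # → folded address: 0x100002008
```

The same effective address is computed either way; the folded form simply drops one instruction and frees `%rax`.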
  6. Feb 03, 2008
  7. Jan 29, 2008
    • Work in progress. This patch *fixes* x86-64 calls which are modelled as... · 084a1cdc
      Evan Cheng authored
      Work in progress. This patch *fixes* x86-64 calls which are modelled as StructRet but really should be return in registers, e.g. _Complex long double, some 128-bit aggregates. This is a short term solution that is necessary only because llvm, for now, cannot model i128 nor call's with multiple results.
      Status: This only works for direct calls, and only the caller side is done. Disabled for now.
      
      llvm-svn: 46527
  8. Jan 23, 2008
    • The last pieces needed for loading arbitrary · 95d46ef8
      Duncan Sands authored
      precision integers.  This won't actually work
      (and most of the code is dead) unless the new
      legalization machinery is turned on.  While
      there, I rationalized the handling of i1, and
      removed some bogus (and unused) sextload patterns.
      For i1, this could result in microscopically
      better code for some architectures (not X86).
      It might also result in worse code if annotating
      with AssertZExt nodes turns out to be more harmful
      than helpful.
      
      llvm-svn: 46280
  9. Jan 11, 2008
  10. Jan 10, 2008
  11. Jan 07, 2008
  12. Dec 29, 2007
  13. Dec 18, 2007
  14. Dec 14, 2007
  15. Dec 13, 2007
  16. Nov 12, 2007
  17. Oct 19, 2007
    • Local spiller optimization: · 35ff7937
      Evan Cheng authored
      Turn a store folding instruction into a load folding instruction. e.g.
           xorl  %edi, %eax
           movl  %eax, -32(%ebp)
           movl  -36(%ebp), %eax
           orl   %eax, -32(%ebp)
      =>
           xorl  %edi, %eax
           orl   -36(%ebp), %eax
           mov   %eax, -32(%ebp)
      This enables the unfolding optimization for a subsequent instruction which will
      also eliminate the newly introduced store instruction.
      
      llvm-svn: 43192
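The spiller rewrite can be checked with a small simulation in Python (a toy model of the two instruction sequences, not the spiller itself; the register values and the use of a dict for stack slots are illustrative). Both sequences leave the stack slot `-32(%ebp)` in the same state, and in the rewritten form `%eax` ends up holding the stored value, which is what makes the subsequent unfolding possible.

```python
def original(eax: int, edi: int, mem: dict) -> tuple:
    """xorl %edi,%eax ; movl %eax,-32 ; movl -36,%eax ; orl %eax,-32"""
    mem = dict(mem)
    eax ^= edi            # xorl %edi, %eax
    mem[-32] = eax        # movl %eax, -32(%ebp)
    eax = mem[-36]        # movl -36(%ebp), %eax
    mem[-32] |= eax       # orl  %eax, -32(%ebp)   (store-folded or)
    return eax, mem

def rewritten(eax: int, edi: int, mem: dict) -> tuple:
    """xorl %edi,%eax ; orl -36,%eax ; mov %eax,-32"""
    mem = dict(mem)
    eax ^= edi            # xorl %edi, %eax
    eax |= mem[-36]       # orl  -36(%ebp), %eax   (load-folded or)
    mem[-32] = eax        # mov  %eax, -32(%ebp)
    return eax, mem

mem0 = {-36: 0b1010}
_, mem_orig = original(eax=0b0110, edi=0b0011, mem=mem0)
eax_new, mem_new = rewritten(eax=0b0110, edi=0b0011, mem=mem0)

assert mem_orig == mem_new           # memory state is identical
assert eax_new == mem_new[-32]       # %eax now holds the stored value
print("stack slot -32(%ebp):", bin(mem_new[-32]))  # → 0b1111
```

Note that final `%eax` differs between the two forms (the original leaves the reload of `-36(%ebp)` in it), so the rewrite relies on a later use being able to take the value from the new `%eax` instead of reloading `-32(%ebp)`.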
  18. Oct 12, 2007
  19. Oct 11, 2007
  20. Oct 06, 2007
  21. Oct 05, 2007
  22. Sep 29, 2007
  23. Sep 27, 2007
  24. Sep 26, 2007
  25. Sep 25, 2007
  26. Sep 17, 2007