  4. Nov 13, 2004
      Hack around stupidity in GCC, fixing Burg with the CBE and · 073f6ca3
      Chris Lattner authored
      CBackend/2004-11-13-FunctionPointerCast.llx
      
      llvm-svn: 17710
      · 049d33a7
      Chris Lattner authored
      shld is a very high-latency operation. Instead of emitting it for shifts of
      two or three, open-code the equivalent operation, which is faster on Athlon
      and P4 (by a substantial margin).
      
      For example, instead of compiling this:
      
      long long X2(long long Y) { return Y << 2; }
      
      to:
      
      X2:
              movl 4(%esp), %eax
              movl 8(%esp), %edx
              shldl $2, %eax, %edx
              shll $2, %eax
              ret
      
      Compile it to:
      
      X2:
              movl 4(%esp), %eax
              movl 8(%esp), %ecx
              movl %eax, %edx
              shrl $30, %edx
              leal (%edx,%ecx,4), %edx
              shll $2, %eax
              ret
      
      Likewise, for << 3, compile to:
      
      X3:
              movl 4(%esp), %eax
              movl 8(%esp), %ecx
              movl %eax, %edx
              shrl $29, %edx
              leal (%edx,%ecx,8), %edx
              shll $3, %eax
              ret
      
      This matches icc, except that icc open-codes the shifts as adds on the P4.
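
      The open-coded sequence above can be checked against a plain 64-bit shift.
      This C sketch (the helper name is mine, not from the commit) models the
      shrl/leal/shll split on 32-bit halves for k = 2 or 3:

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Hypothetical model of the open-coded 64-bit left shift by k,
         using only 32-bit halves, mirroring the asm sequence above. */
      static uint64_t shl_open_coded(uint64_t y, unsigned k) {
          uint32_t lo = (uint32_t)y;
          uint32_t hi = (uint32_t)(y >> 32);
          /* shrl $(32-k): the k bits that spill out of the low word */
          uint32_t spill = lo >> (32 - k);
          /* leal (%edx,%ecx,2^k): hi*2^k plus the spilled bits; the low k
             bits of hi<<k are zero, so the add is equivalent to an or */
          uint32_t new_hi = spill + (hi << k);
          uint32_t new_lo = lo << k;   /* shll $k */
          return ((uint64_t)new_hi << 32) | new_lo;
      }

      int main(void) {
          uint64_t y = 0xDEADBEEFCAFEBABEull;
          assert(shl_open_coded(y, 2) == y << 2);
          assert(shl_open_coded(y, 3) == y << 3);
          return 0;
      }
      ```

      Because the spilled bits and hi<<k never overlap, the add and the
      implicit or of a real double shift produce the same value.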
      
      llvm-svn: 17707
      Add missing check · ef6bd92a
      Chris Lattner authored
      llvm-svn: 17706
      Compile: · 8d521bb1
      Chris Lattner authored
      long long X3_2(long long Y) { return Y+Y; }
      int X(int Y) { return Y+Y; }
      
      into:
      
      X3_2:
              movl 4(%esp), %eax
              movl 8(%esp), %edx
              addl %eax, %eax
              adcl %edx, %edx
              ret
      X:
              movl 4(%esp), %eax
              addl %eax, %eax
              ret
      
      instead of:
      
      X3_2:
              movl 4(%esp), %eax
              movl 8(%esp), %edx
              shldl $1, %eax, %edx
              shll $1, %eax
              ret
      
      X:
              movl 4(%esp), %eax
              shll $1, %eax
              ret
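
      The addl/adcl pair above is the 32-bit decomposition of a 64-bit
      doubling. A sketch in C (helper name is hypothetical) of how the carry
      propagates from the low word to the high word:

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Hypothetical model: 64-bit doubling via a 32-bit add plus
         add-with-carry, as in the addl/adcl pair above. */
      static uint64_t double_addc(uint64_t y) {
          uint32_t lo = (uint32_t)y;
          uint32_t hi = (uint32_t)(y >> 32);
          uint32_t new_lo = lo + lo;         /* addl %eax, %eax (sets CF) */
          uint32_t carry = new_lo < lo;      /* recover CF: unsigned wrap test */
          uint32_t new_hi = hi + hi + carry; /* adcl %edx, %edx folds CF in */
          return ((uint64_t)new_hi << 32) | new_lo;
      }

      int main(void) {
          assert(double_addc(0xFFFFFFFFull) == 0x1FFFFFFFEull);
          assert(double_addc(0xDEADBEEFCAFEBABEull) == 2 * 0xDEADBEEFCAFEBABEull);
          return 0;
      }
      ```

      The add/adc form avoids shld entirely and, unlike the shift form,
      needs no immediate operands.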
      
      llvm-svn: 17705