- Feb 04, 2009
Chris Lattner authored
llvm-svn: 63752
Chris Lattner authored
SSE disabled. llvm-svn: 63751
- Jan 28, 2009
Evan Cheng authored
The memory alignment requirement on some of the mov{h|l}p{d|s} patterns is 16 bytes. That is overly strict: these instructions read / write f64 memory locations with no alignment requirement. llvm-svn: 63195
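A minimal C sketch of the point, assuming the usual SSE2 intrinsics (the function name and the idea of loading through an arbitrary double* are illustrative, not taken from the commit): movlpd/movhpd each touch a single f64 in memory, so no 16-byte vector alignment is needed.

    #include <emmintrin.h>

    /* Build a <2 x double> from two possibly-unaligned f64 slots.        */
    /* Each load reads one 8-byte double; neither requires 16-byte        */
    /* alignment, which is why the old pattern predicate was too strict.  */
    __m128d load_two_doubles(const double *p) {
        __m128d v = _mm_setzero_pd();
        v = _mm_loadl_pd(v, p);      /* movlpd: low half  <- p[0] */
        v = _mm_loadh_pd(v, p + 1);  /* movhpd: high half <- p[1] */
        return v;
    }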
- Sep 20, 2008
Chris Lattner authored
llvm-svn: 56391
- Aug 19, 2008
Chris Lattner authored
llvm-svn: 54964
- Jun 25, 2008
Evan Cheng authored
shift.
- Add a readme entry for a missing vector_shuffle optimization that results in awful codegen.
llvm-svn: 52740
- May 24, 2008
Evan Cheng authored
llvm-svn: 51526
- May 23, 2008
Evan Cheng authored
llvm-svn: 51501
Dan Gohman authored
llvm-svn: 51491
Evan Cheng authored
llvm-svn: 51487
Chris Lattner authored
instruction for doing this? llvm-svn: 51473
- May 13, 2008
Chris Lattner authored
llvm-svn: 51062
Chris Lattner authored
llvm-svn: 51060
Evan Cheng authored
Instead of a vector load, shuffle, and then extracting an element, load the element from the address with an offset:

    pshufd $1, (%rdi), %xmm0
    movd %xmm0, %eax
=>
    movl 4(%rdi), %eax

llvm-svn: 51026
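A hedged C reconstruction of the pattern being simplified (the function name and the use of intrinsics are illustrative assumptions, not from the commit): the vector load + pshufd + movd sequence extracts the second 32-bit lane, which is just a 4-byte scalar load at offset 4.

    #include <emmintrin.h>

    int extract_second_lane(const __m128i *p) {
        __m128i v = _mm_loadu_si128(p);       /* vector load              */
        __m128i s = _mm_shuffle_epi32(v, 1);  /* pshufd $1: lane 1 -> 0   */
        return _mm_cvtsi128_si32(s);          /* movd %xmm0, %eax         */
        /* Preferable codegen: movl 4(%rdi), %eax                         */
    }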
Evan Cheng authored
llvm-svn: 51019
Evan Cheng authored
Xform bitconvert(build_pair(load a, load b)) to a single load if the load locations are at the right offset from each other. llvm-svn: 51008
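A hedged source-level sketch of that DAG pattern (function and variable names are illustrative assumptions): two adjacent 32-bit loads paired into a 64-bit value and bitcast to f64, which the xform can turn into a single 64-bit load.

    #include <stdint.h>
    #include <string.h>

    double pair_loads_to_f64(const uint32_t *p) {
        /* build_pair(load p[0], load p[1]): two loads at adjacent offsets */
        uint64_t bits = ((uint64_t)p[1] << 32) | p[0];
        double d;
        memcpy(&d, &bits, sizeof d);  /* bitconvert i64 -> f64 */
        return d;                     /* ideally one 64-bit load */
    }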
- May 11, 2008
Anton Korobeynikov authored
llvm-svn: 50959
- Apr 10, 2008
Chris Lattner authored
llvm-svn: 49466
Chris Lattner authored
llvm-svn: 49465
- Mar 09, 2008
Chris Lattner authored
into a vector of zeros or undef, and when the top part is obviously zero, we can just use movd + shuffle. This allows us to compile vec_set-B.ll into:

_test3:
    movl $1234567, %eax
    andl 4(%esp), %eax
    movd %eax, %xmm0
    ret

instead of:

_test3:
    subl $28, %esp
    movl $1234567, %eax
    andl 32(%esp), %eax
    movl %eax, (%esp)
    movl $0, 4(%esp)
    movq (%esp), %xmm0
    addl $28, %esp
    ret

llvm-svn: 48090
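A speculative C sketch of what a test like vec_set-B.ll's _test3 might look like at source level (this is an assumption about the test, not its actual contents): an integer masked with a constant and inserted into an otherwise-zero vector, which should lower to andl + movd as in the first sequence above.

    #include <emmintrin.h>

    /* Mask x and place it in lane 0 of a zero vector; the upper lanes    */
    /* are known zero, so movd alone suffices -- no store/reload needed.  */
    __m128i test3(int x) {
        return _mm_cvtsi32_si128(x & 1234567);
    }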
Chris Lattner authored
llvm-svn: 48064
Chris Lattner authored
#include <xmmintrin.h>
__m128i doload64(short x) { return _mm_set_epi16(0,0,0,0,0,0,0,1); }

into:

    movl $1, %eax
    movd %eax, %xmm0
    ret

instead of a constant pool load. llvm-svn: 48063
- Mar 08, 2008
Chris Lattner authored
llvm-svn: 48055
Chris Lattner authored
llvm-svn: 48054
- Mar 05, 2008
Chris Lattner authored
llvm-svn: 47948
Chris Lattner authored
llvm-svn: 47939
- Mar 02, 2008
Chris Lattner authored
llvm-svn: 47828
- Feb 14, 2008
Chris Lattner authored
llvm-svn: 47109
- Feb 13, 2008
Nate Begeman authored
llvm-svn: 47051
- Feb 11, 2008
Nate Begeman authored
Add some notes to the README. llvm-svn: 46949
- Jan 27, 2008
Chris Lattner authored
llvm-svn: 46413
- Jan 26, 2008
Chris Lattner authored
llvm-svn: 46405
- Dec 29, 2007
Chris Lattner authored
eliminating the llvm.x86.sse2.loadl.pd intrinsic?), one shuffle optzn may be done (if shufps is better than pinsw, Evan, please review), and we already know about LICM of simple instructions. llvm-svn: 45407
- Dec 21, 2007
Evan Cheng authored
llvm-svn: 45280
- Oct 29, 2007
Chris Lattner authored
llvm-svn: 43444
- Oct 02, 2007
Bill Wendling authored
llvm-svn: 42549
Bill Wendling authored
llvm-svn: 42548
Bill Wendling authored
llvm-svn: 42542
- Sep 26, 2007
Chris Lattner authored
llvm-svn: 42345
- Aug 24, 2007
Chris Lattner authored
llvm-svn: 41359