- Sep 29, 2006
Chris Lattner authored
llvm-svn: 30663
- Sep 21, 2006
Anton Korobeynikov authored
llvm-svn: 30549
- Sep 11, 2006
Evan Cheng authored
…operand of a conditional branch to allow load folding into CMP / TEST instructions. llvm-svn: 30241
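For illustration only (not part of the commit; the function name is hypothetical), the kind of compare this fold targets, sketched in C:

    int is_zero(const int *p) {
        /* The load of *p can be folded into the compare itself,
           e.g. cmpl $0, (%eax), instead of a separate mov + cmp. */
        return *p == 0;
    }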
- Sep 08, 2006
Evan Cheng authored
llvm-svn: 30177
- Aug 01, 2006
Chris Lattner authored
The CFE refers to all single-register constraints (like "A") by their 16-bit name, even though the 8 or 32-bit version of the register may be needed. The X86 backend should realize what is going on and redecode the name back to its proper form. llvm-svn: 29420
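A hypothetical C-level illustration of the situation (constraint and names chosen for exposition, not taken from the commit): the operand below is 32-bit, so the backend must widen the front-end's 16-bit register name ("{ax}") to EAX.

    int get_one(void) {
        int r;
        __asm__("movl $1, %0" : "=a"(r));   /* "a" family: AL/AX/EAX */
        return r;
    }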
- Jul 11, 2006
Chris Lattner authored
…CodeGen/X86/2006-07-10-InlineAsmAConstraint.ll llvm-svn: 29101
- Jul 07, 2006
Evan Cheng authored
…(load x+8), (load x+12), <0, 1, 2, 3> to a single 128-bit load (aligned and unaligned). e.g.

    __m128 test(float a, float b, float c, float d) {
        return _mm_set_ps(d, c, b, a);
    }

    _test:
        movups 4(%esp), %xmm0
        ret

llvm-svn: 29042
- Jun 24, 2006
Evan Cheng authored
Simplify X86CompilationCallback: always align to 16-byte boundary; don't save EAX/EDX if unnecessary. llvm-svn: 28910
- May 25, 2006
Evan Cheng authored
…the copyto/fromregs instead of making the X86ISD::CALL selection code create them. llvm-svn: 28463
- May 24, 2006
Chris Lattner authored
…by Anton Korobeynikov! This is a step towards closing PR786. llvm-svn: 28447
- May 23, 2006
Evan Cheng authored
FORMAL_ARGUMENTS nodes include a token operand. llvm-svn: 28439
Chris Lattner authored
…return argument pops the hidden struct pointer if present, not the caller. For example, in this testcase:

    struct X { int D, E, F, G; };
    struct X bar() {
      struct X a;
      a.D = 0; a.E = 1; a.F = 2; a.G = 3;
      return a;
    }
    void foo(struct X *P) { *P = bar(); }

We used to emit:

    _foo:
        subl $28, %esp
        movl 32(%esp), %eax
        movl %eax, (%esp)
        call _bar
        addl $28, %esp
        ret
    _bar:
        movl 4(%esp), %eax
        movl $0, (%eax)
        movl $1, 4(%eax)
        movl $2, 8(%eax)
        movl $3, 12(%eax)
        ret

This is correct on Linux/X86 but not Darwin/X86. With this patch, we now emit:

    _foo:
        subl $28, %esp
        movl 32(%esp), %eax
        movl %eax, (%esp)
        call _bar
***     addl $24, %esp
        ret
    _bar:
        movl 4(%esp), %eax
        movl $0, (%eax)
        movl $1, 4(%eax)
        movl $2, 8(%eax)
        movl $3, 12(%eax)
***     ret $4

For the record, GCC emits (which is functionally equivalent to our new code):

    _bar:
        movl 4(%esp), %eax
        movl $3, 12(%eax)
        movl $2, 8(%eax)
        movl $1, 4(%eax)
        movl $0, (%eax)
        ret $4
    _foo:
        pushl %esi
        subl $40, %esp
        movl 48(%esp), %esi
        leal 16(%esp), %eax
        movl %eax, (%esp)
        call _bar
        subl $4, %esp
        movl 16(%esp), %eax
        movl %eax, (%esi)
        movl 20(%esp), %eax
        movl %eax, 4(%esi)
        movl 24(%esp), %eax
        movl %eax, 8(%esi)
        movl 28(%esp), %eax
        movl %eax, 12(%esi)
        addl $40, %esp
        popl %esi
        ret

This fixes SingleSource/Benchmarks/CoyoteBench/fftbench with LLC and the JIT, and fixes the X86-backend portion of PR729. The CBE still needs to be updated. llvm-svn: 28438
- May 17, 2006
Evan Cheng authored
llvm-svn: 28357
- Apr 27, 2006
Evan Cheng authored
- Fixed vararg support. llvm-svn: 27985
- Apr 26, 2006
Evan Cheng authored
llvm-svn: 27975
- Apr 25, 2006
Evan Cheng authored
llvm-svn: 27972
- Apr 21, 2006
Evan Cheng authored
…scalar value. e.g.

    _mm_set_epi32(0, a, 0, 0);
    ==>
        movd 4(%esp), %xmm0
        pshufd $69, %xmm0, %xmm0

    _mm_set_epi8(0, 0, 0, 0, 0, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    ==>
        movzbw 4(%esp), %ax
        movzwl %ax, %eax
        pxor %xmm0, %xmm0
        pinsrw $5, %eax, %xmm0

llvm-svn: 27923
- Apr 20, 2006
Evan Cheng authored
…to a vector shuffle.
- VECTOR_SHUFFLE lowering change in preparation for more efficient codegen of vector shuffle with zero (or any splat) vector.
llvm-svn: 27875
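For illustration only (hypothetical function name): a shuffle against the zero vector, the pattern this lowering change targets. It keeps lane 0 of v and zeros the rest.

    #include <xmmintrin.h>

    __m128 keep_lane0(__m128 v) {
        /* result = (v0, 0, 0, 0); selectable as a single movss */
        return _mm_move_ss(_mm_setzero_ps(), v);
    }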
- Apr 19, 2006
Evan Cheng authored
llvm-svn: 27840
- Apr 14, 2006
Evan Cheng authored
llvm-svn: 27711
- Apr 11, 2006
Evan Cheng authored
llvm-svn: 27575
- Apr 07, 2006
Evan Cheng authored
- Normalize shuffle nodes so that the result vector's lower-half elements come from the first vector and the rest come from the second vector (except for the exceptions :-).
- Other minor fixes.
llvm-svn: 27474
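A minimal sketch, not LLVM's actual code, of a related normalization (names hypothetical; undef mask entries ignored for brevity): if every mask index selects from the second operand, swap the operands and reindex the mask into the first.

    #include <stddef.h>

    void commute_shuffle(int *mask, size_t n, const float **v1, const float **v2) {
        size_t i;
        for (i = 0; i < n; i++)
            if ((size_t)mask[i] < n)
                return;                 /* some element already uses v1 */
        for (i = 0; i < n; i++)
            mask[i] -= (int)n;          /* reindex into the swapped-in first operand */
        const float *t = *v1; *v1 = *v2; *v2 = t;
    }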
- Apr 06, 2006
Evan Cheng authored
llvm-svn: 27444
- Apr 05, 2006
Evan Cheng authored
…vector_shuffle v1, v1, <0, 4, 1, 5, 2, 6, 3, 7>

This is turned into

    vector_shuffle v1, <undef>, <0, 0, 1, 1, 2, 2, 3, 3>

by the DAG combiner. It would match a {p}unpckl on x86. llvm-svn: 27437
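For illustration (hypothetical function, assuming eight 16-bit lanes): the same self-interleave expressed with intrinsics, which should match a single punpcklwd.

    #include <emmintrin.h>

    __m128i dup_low_lanes(__m128i v) {
        /* lanes become v0,v0,v1,v1,v2,v2,v3,v3: mask <0,0,1,1,2,2,3,3> */
        return _mm_unpacklo_epi16(v, v);
    }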
- Mar 31, 2006
Evan Cheng authored
…INSERT_VECTOR_ELT to insert a 16-bit value in a 128-bit vector. llvm-svn: 27314
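A hedged example of such an insert via the SSE2 intrinsic; the function name and lane are arbitrary.

    #include <emmintrin.h>

    __m128i set_lane5(__m128i v, short x) {
        return _mm_insert_epi16(v, x, 5);   /* pinsrw $5, ... */
    }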
Evan Cheng authored
…from a 128-bit vector. llvm-svn: 27304
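A hedged example of extracting a 16-bit element from a 128-bit vector; name and lane arbitrary.

    #include <emmintrin.h>

    int get_lane5(__m128i v) {
        return _mm_extract_epi16(v, 5);     /* pextrw $5, ... */
    }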
- Mar 30, 2006
Evan Cheng authored
- Added SSE2 128-bit integer pack with signed saturation ops.
- Added pshufhw and pshuflw ops.
llvm-svn: 27252
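Illustrative uses of the ops named above, as reachable through SSE2 intrinsics; function names are hypothetical.

    #include <emmintrin.h>

    __m128i pack_signed(__m128i a, __m128i b) {
        return _mm_packs_epi16(a, b);                            /* packsswb */
    }
    __m128i swap_high_pairs(__m128i v) {
        return _mm_shufflehi_epi16(v, _MM_SHUFFLE(2, 3, 0, 1));  /* pshufhw */
    }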
- Mar 28, 2006
Evan Cheng authored
* Bug fixes. llvm-svn: 27218
Evan Cheng authored
- Some misc. bug fixes.
- Use MOVHPDrm to load from m64 into the upper half of an XMM register.
llvm-svn: 27210
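A hedged sketch of such a load via the corresponding intrinsic; the function name is hypothetical.

    #include <emmintrin.h>

    __m128d load_high_half(__m128d v, const double *p) {
        return _mm_loadh_pd(v, p);          /* movhpd (mem), %xmm */
    }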
Evan Cheng authored
…intrinsics as such. llvm-svn: 27200
- Mar 26, 2006
Evan Cheng authored
llvm-svn: 27150
- Mar 25, 2006
Evan Cheng authored
…series of unpack and interleave ops. llvm-svn: 27119
Evan Cheng authored
llvm-svn: 27091
- Mar 24, 2006
Evan Cheng authored
llvm-svn: 27056
Evan Cheng authored
llvm-svn: 27040
Evan Cheng authored
llvm-svn: 27024
- Mar 22, 2006
Evan Cheng authored
…64-bit vector shuffle. llvm-svn: 26964
Evan Cheng authored
…splat and PSHUFD cases.
- Clean up shuffle / splat matching code.
llvm-svn: 26954
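An illustrative splat (hypothetical name) of the kind a single PSHUFD covers: broadcast lane 0 to all four 32-bit lanes.

    #include <emmintrin.h>

    __m128i splat_lane0(__m128i v) {
        return _mm_shuffle_epi32(v, _MM_SHUFFLE(0, 0, 0, 0));    /* pshufd $0 */
    }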
Evan Cheng authored
…PSHUFD. We can make permute entries that point to undef point to anything we want.
- Change some names to appease Chris.
llvm-svn: 26951
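A minimal sketch, not LLVM's actual code, of the freedom described above: when packing a PSHUFD immediate (two bits per destination lane), undef mask entries (-1 here) may be pointed anywhere; 0 is as good as any.

    unsigned pshufd_immediate(const int mask[4]) {
        unsigned imm = 0;
        for (int i = 0; i < 4; i++) {
            int m = (mask[i] < 0) ? 0 : mask[i];    /* undef: pick anything */
            imm |= (unsigned)(m & 3) << (2 * i);
        }
        return imm;
    }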
Evan Cheng authored
llvm-svn: 26940