Mar 30, 2006

- Evan Cheng (llvm-svn: 27251)

Mar 29, 2006

- Evan Cheng: vector_shuffle undef. (llvm-svn: 27250)
- Evan Cheng (llvm-svn: 27249)
- Evan Cheng: integer vector logical operations would match andp{s|d} instead of pand. (llvm-svn: 27248)
- Evan Cheng (llvm-svn: 27247)
- Evan Cheng: Whenever possible use ops of the right packed types for vector shuffles / splats. (llvm-svn: 27246)
- Evan Cheng (llvm-svn: 27245)
- Evan Cheng: Other shuffle related fixes. (llvm-svn: 27244)
- Chris Lattner (llvm-svn: 27243)
- Chris Lattner (llvm-svn: 27242)
- Chris Lattner: Handle constantpacked vectors with constantexpr elements. This fixes CodeGen/Generic/vector-constantexpr.ll. (llvm-svn: 27241)
- Evan Cheng: The source operands' types are v4sf, with the upper bits passed through. Added matching code for these. (llvm-svn: 27240)
- Evan Cheng (llvm-svn: 27239)
- Evan Cheng: mismatch against the enum table. This is a part of Sabre's master plan to drive me nuts with subtle bugs that happen to only affect the x86 backend. :-) (llvm-svn: 27237)
- Chris Lattner: sure to build it as SHUFFLE(X, undef, mask), not SHUFFLE(X, X, mask). The latter is not canonical form, and prevents the PPC splat pattern from matching. For a particular splat, we go from generating this:

        li r10, lo16(LCPI1_0)
        lis r11, ha16(LCPI1_0)
        lvx v3, r11, r10
        vperm v3, v2, v2, v3

  to generating:

        vspltw v3, v2, 3

  (llvm-svn: 27236)
- Chris Lattner (llvm-svn: 27235)

Mar 28, 2006

- Chris Lattner (llvm-svn: 27234)
- Chris Lattner: vector_shuffle node. For this:

        void test(__m128 *res, __m128 *A, __m128 *B) {
          *res = _mm_unpacklo_ps(*A, *B);
        }

  we now produce this code:

        _test:
                movl 8(%esp), %eax
                movaps (%eax), %xmm0
                movl 12(%esp), %eax
                unpcklps (%eax), %xmm0
                movl 4(%esp), %eax
                movaps %xmm0, (%eax)
                ret

  instead of this:

        _test:
                subl $76, %esp
                movl 88(%esp), %eax
                movaps (%eax), %xmm0
                movaps %xmm0, (%esp)
                movaps %xmm0, 32(%esp)
                movss 4(%esp), %xmm0
                movss 32(%esp), %xmm1
                unpcklps %xmm0, %xmm1
                movl 84(%esp), %eax
                movaps (%eax), %xmm0
                movaps %xmm0, 16(%esp)
                movaps %xmm0, 48(%esp)
                movss 20(%esp), %xmm0
                movss 48(%esp), %xmm2
                unpcklps %xmm0, %xmm2
                unpcklps %xmm1, %xmm2
                movl 80(%esp), %eax
                movaps %xmm2, (%eax)
                addl $76, %esp
                ret

  GCC produces this (with -fomit-frame-pointer):

        _test:
                subl $12, %esp
                movl 20(%esp), %eax
                movaps (%eax), %xmm0
                movl 24(%esp), %eax
                unpcklps (%eax), %xmm0
                movl 16(%esp), %eax
                movaps %xmm0, (%eax)
                addl $12, %esp
                ret

  (llvm-svn: 27233)
- Chris Lattner (llvm-svn: 27232)
- Chris Lattner (llvm-svn: 27231)
- Chris Lattner (llvm-svn: 27230)
- Chris Lattner (llvm-svn: 27229)
- Chris Lattner (llvm-svn: 27228)
- Chris Lattner (llvm-svn: 27227)
- Jim Laskey (llvm-svn: 27226)
- Jim Laskey (llvm-svn: 27225)
- Jim Laskey (llvm-svn: 27224)
- Jim Laskey (llvm-svn: 27223)
- Evan Cheng (llvm-svn: 27222)
- Evan Cheng (llvm-svn: 27221)
- Evan Cheng (llvm-svn: 27220)
- Evan Cheng (llvm-svn: 27219)
- Evan Cheng: Bug fixes. (llvm-svn: 27218)
- Evan Cheng (llvm-svn: 27217)
- Nate Begeman (llvm-svn: 27216)
- Nate Begeman (llvm-svn: 27215)
- Jeff Cohen (llvm-svn: 27214)
- Chris Lattner (llvm-svn: 27213)
- Evan Cheng (llvm-svn: 27212)
- Evan Cheng (llvm-svn: 27211)