- Apr 02, 2006
  - Chris Lattner authored
    llvm-svn: 27337
  - Chris Lattner authored
    "vspltisb v0, 8" instead of a constant pool load.
    llvm-svn: 27335
  - Chris Lattner authored
    llvm-svn: 27331
- Apr 01, 2006
  - Chris Lattner authored
    llvm-svn: 27324
  - Chris Lattner authored
    llvm-svn: 27322
  - Evan Cheng authored
    llvm-svn: 27321
  - Chris Lattner authored
    No functionality change.
    llvm-svn: 27320
  - Evan Cheng authored
    alignment of a packed type. This is obviously wrong. Added a workaround that returns the size of the packed type as its alignment. The correct fix would be to return a target-dependent alignment value provided via TargetLowering (or some other interface). (See the sketch below.)
    llvm-svn: 27319
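
The r27319 entry above describes returning a packed type's size as its alignment. A minimal stand-alone sketch of that idea follows; the function name and parameters are hypothetical, not the actual LLVM code, and the real fix would query a target-specific value through TargetLowering, as the entry notes.

    // Hypothetical sketch of the r27319 workaround (not the actual LLVM code):
    // when asked for the alignment of a packed (vector) type, fall back to the
    // type's size instead of asserting.
    unsigned getPackedTypeAlignment(unsigned NumElements, unsigned ElementBits) {
      // e.g. a 4 x float vector: 4 * 32 bits = 128 bits = 16 bytes
      unsigned SizeInBytes = (NumElements * ElementBits) / 8;
      return SizeInBytes; // alignment == size, pending a TargetLowering hook
    }

For a v4f32 this yields getPackedTypeAlignment(4, 32) == 16, which happens to suit 128-bit AltiVec and SSE vectors; the entry still calls it a workaround because the answer should really come from the target.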
- Mar 31, 2006
  - Chris Lattner authored
    llvm-svn: 27315
  - Evan Cheng authored
    INSERT_VECTOR_ELT to insert a 16-bit value in a 128-bit vector.
    llvm-svn: 27314
  - Evan Cheng authored
    llvm-svn: 27310
  - Chris Lattner authored
    llvm-svn: 27308
  - Chris Lattner authored
    llvm-svn: 27307
  - Chris Lattner authored
    llvm-svn: 27306
  - Chris Lattner authored
    llvm-svn: 27305
  - Evan Cheng authored
    from a 128-bit vector.
    llvm-svn: 27304
  - Evan Cheng authored
    llvm-svn: 27303
  - Chris Lattner authored
    llvm-svn: 27302
  - Chris Lattner authored
    llvm-svn: 27291
  - Chris Lattner authored
    identical instructions into a single instruction. For example, for:

        void test(vector float *x, vector float *y, int *P) {
          int v = vec_any_out(*x, *y);
          *x = (vector float)vec_cmpb(*x, *y);
          *P = v;
        }

    we now generate:

        _test:
            mfspr r2, 256
            oris r6, r2, 49152
            mtspr 256, r6
            lvx v0, 0, r4
            lvx v1, 0, r3
            vcmpbfp. v0, v1, v0
            mfcr r4, 2
            stvx v0, 0, r3
            rlwinm r3, r4, 27, 31, 31
            xori r3, r3, 1
            stw r3, 0(r5)
            mtspr 256, r2
            blr

    instead of:

        _test:
            mfspr r2, 256
            oris r6, r2, 57344
            mtspr 256, r6
            lvx v0, 0, r4
            lvx v1, 0, r3
            vcmpbfp. v2, v1, v0
            mfcr r4, 2
        *** vcmpbfp v0, v1, v0
            rlwinm r4, r4, 27, 31, 31
            stvx v0, 0, r3
            xori r3, r4, 1
            stw r3, 0(r5)
            mtspr 256, r2
            blr

    Testcase here: CodeGen/PowerPC/vcmp-fold.ll
    llvm-svn: 27290
  - Chris Lattner authored
    llvm-svn: 27288
  - Chris Lattner authored
    llvm-svn: 27287
  - Chris Lattner authored
    predicates to VCMPo nodes.
    llvm-svn: 27285
  - Chris Lattner authored
    llvm-svn: 27284
  - Chris Lattner authored
    llvm-svn: 27277
  - Chris Lattner authored
    llvm-svn: 27276
  - Evan Cheng authored
    llvm-svn: 27275
  - Chris Lattner authored
    unpromoted element type.
    llvm-svn: 27273
  - Evan Cheng authored
    llvm-svn: 27272
  - Evan Cheng authored
    llvm-svn: 27271
  - Chris Lattner authored
    llvm-svn: 27270
  - Chris Lattner authored
    llvm-svn: 27268
  - Chris Lattner authored
    directly correspond to intrinsics.
    llvm-svn: 27266
  - Chris Lattner authored
    llvm-svn: 27265
- Mar 30, 2006
  - Evan Cheng authored
    Use pshufd, pshufhw, and pshuflw to shuffle v4f32 if shufps doesn't match. Use shufps to shuffle v4f32 if pshufd, pshufhw, and pshuflw don't match. (See the shufps sketch after this section.)
    llvm-svn: 27259
  - Evan Cheng authored
    llvm-svn: 27257
  - Evan Cheng authored
    llvm-svn: 27256
  - Evan Cheng authored
    llvm-svn: 27255
  - Evan Cheng authored
    For example, packsswb actually creates a v16i8 from a pair of v8i16, but the intrinsic specification forces the output type to match the operands. (See the packsswb illustration after this section.)
    llvm-svn: 27254
  - Evan Cheng authored
    - Added SSE2 128-bit integer pack with signed saturation ops.
    - Added pshufhw and pshuflw ops.
    llvm-svn: 27252
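
To make the phrase "if shufps doesn't match" in the r27259 entry concrete: shufps fills its low two result lanes from its first operand and its high two from its second, so only 4-lane masks of that shape encode as a single shufps. The helper below is a hypothetical sketch of that constraint, not LLVM's actual matcher.

    // Hypothetical sketch (not LLVM's actual predicate). Mask entries 0-3 pick
    // lanes of operand A, entries 4-7 pick lanes of operand B. A single SHUFPS
    // can only produce results whose low two lanes come from A and whose high
    // two lanes come from B.
    bool maskFitsShufps(const int Mask[4]) {
      return Mask[0] >= 0 && Mask[0] < 4 &&   // result lane 0 from A
             Mask[1] >= 0 && Mask[1] < 4 &&   // result lane 1 from A
             Mask[2] >= 4 && Mask[2] < 8 &&   // result lane 2 from B
             Mask[3] >= 4 && Mask[3] < 8;     // result lane 3 from B
    }

When a mask fails this test, the integer shuffles pshufd, pshufhw, and pshuflw can sometimes express it instead, which is the fallback the entry describes (and vice versa when the integer shuffles don't match).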
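
The r27254 entry describes packsswb as logically turning two v8i16 inputs into one v16i8 result. The same narrowing can be seen at the C intrinsic level, where every SSE2 vector is the generic __m128i type and the element-type change is therefore invisible to the type system; the snippet below is only an illustration of that point and assumes an SSE2-capable compiler.

    #include <emmintrin.h>

    // packsswb: each signed 16-bit lane of a and b is clamped to [-128, 127]
    // and narrowed to 8 bits, so two vectors of eight 16-bit lanes become one
    // vector of sixteen 8-bit lanes. The intrinsic's types hide this: the
    // operands and the result are all just __m128i.
    __m128i pack_signed_saturate(__m128i a, __m128i b) {
      return _mm_packs_epi16(a, b);
    }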