- Apr 20, 2006
-
Chris Lattner authored
llvm-svn: 27885
-
Andrew Lenharth authored
llvm-svn: 27881
-
Andrew Lenharth authored
can be converted to losslessly, we can continue the conversion to a direct call.
llvm-svn: 27880
-
Evan Cheng authored
to a vector shuffle.
- VECTOR_SHUFFLE lowering change in preparation for more efficient codegen of vector shuffle with zero (or any splat) vector.
llvm-svn: 27875
-
Evan Cheng authored
DAG combiner can turn a VAND V, <-1, 0, -1, -1>, i.e. vector clear elements, into a vector shuffle with a zero vector. It only does so when TLI tells it the xform is profitable.
llvm-svn: 27874
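As a rough illustration of the equivalence this combine exploits (a sketch in present-day LLVM IR syntax; the combine itself rewrites SelectionDAG nodes, and the function names here are made up):

    ; ANDing with <-1, 0, -1, -1> clears element 1 of the vector.
    define <4 x i32> @clear_element1_and(<4 x i32> %v) {
      %r = and <4 x i32> %v, <i32 -1, i32 0, i32 -1, i32 -1>
      ret <4 x i32> %r
    }

    ; The same result expressed as a shuffle with a zero vector: lane 1 is
    ; taken from the zero operand (element 5 of the concatenated <v, zero> pair).
    define <4 x i32> @clear_element1_shuffle(<4 x i32> %v) {
      %r = shufflevector <4 x i32> %v, <4 x i32> zeroinitializer, <4 x i32> <i32 0, i32 5, i32 2, i32 3>
      ret <4 x i32> %r
    }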
-
Chris Lattner authored
CodeGen/PowerPC/2006-04-19-vmaddfp-crash.ll
llvm-svn: 27868
-
Chris Lattner authored
llvm-svn: 27863
-
Evan Cheng authored
but i64 is not. If possible, change an i64 op to an f64 (e.g. load, constant) and then cast it back.
llvm-svn: 27849
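A conceptual sketch of the trick in present-day LLVM IR syntax (the change itself operates on SelectionDAG nodes during legalization, and the function name is made up): on a 32-bit x86 target with SSE2, f64 is a legal register type while i64 is not, so a 64-bit integer load can be performed as an f64 load whose bits are then reinterpreted.

    define i64 @load_i64_via_f64(ptr %p) {
      ; Perform the 64-bit load as a double (legal in an XMM register),
      ; then reinterpret the bits as an integer.
      %as.f64 = load double, ptr %p
      %as.i64 = bitcast double %as.f64 to i64
      ret i64 %as.i64
    }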
-
Evan Cheng authored
llvm-svn: 27847
-
Chris Lattner authored
llvm-svn: 27846
-
Evan Cheng authored
instructions.
- Fixed a commute vector_shuff bug.
llvm-svn: 27845
-
- Apr 19, 2006
-
Evan Cheng authored
llvm-svn: 27844
-
Evan Cheng authored
llvm-svn: 27843
-
Evan Cheng authored
- Added more movhlps and movlhps patterns.
llvm-svn: 27842
-
Evan Cheng authored
llvm-svn: 27840
-
Evan Cheng authored
llvm-svn: 27836
-
Evan Cheng authored
- Increase cost (complexity) of patterns which match mov{h|l}ps ops. These are preferred over shufps in most cases.
llvm-svn: 27835
-
Evan Cheng authored
llvm-svn: 27834
-
Chris Lattner authored
llvm-svn: 27832
-
Andrew Lenharth authored
llvm-svn: 27831
-
Andrew Lenharth authored
llvm-svn: 27830
-
Andrew Lenharth authored
llvm-svn: 27829
-
Chris Lattner authored
llvm-svn: 27828
-
Chris Lattner authored
llvm-svn: 27827
-
Andrew Lenharth authored
llvm-svn: 27821
-
Andrew Lenharth authored
llvm-svn: 27819
-
- Apr 18, 2006
-
Evan Cheng authored
- PINSRWrmi encoding bug.
llvm-svn: 27818
-
Evan Cheng authored
llvm-svn: 27817
-
Evan Cheng authored
llvm-svn: 27816
-
Evan Cheng authored
llvm-svn: 27815
-
Evan Cheng authored
llvm-svn: 27814
-
Evan Cheng authored
llvm-svn: 27813
-
Andrew Lenharth authored
llvm-svn: 27812
-
Andrew Lenharth authored
llvm-svn: 27811
-
Chris Lattner authored
llvm-svn: 27810
-
Chris Lattner authored
llvm-svn: 27809
-
Chris Lattner authored
void foo2(vector float *A, vector float *B) {
  vector float C = (vector float)vec_cmpeq(*A, *B);
  if (!vec_any_eq(*A, *B))
    *B = (vector float){0,0,0,0};
  *A = C;
}
llvm-svn: 27808
-
Evan Cheng authored
llvm-svn: 27807
-
Chris Lattner authored
llvm-svn: 27806
-
Chris Lattner authored
If an altivec predicate compare is used immediately by a branch, don't use a (serializing) MFCR instruction to read the CR6 register, which requires a compare to get it back to CR's. Instead, just branch on CR6 directly. :)

For example, for:

void foo2(vector float *A, vector float *B) {
  if (!vec_any_eq(*A, *B))
    *B = (vector float){0,0,0,0};
}

We now generate:

_foo2:
        mfspr r2, 256
        oris r5, r2, 12288
        mtspr 256, r5
        lvx v2, 0, r4
        lvx v3, 0, r3
        vcmpeqfp. v2, v3, v2
        bne cr6, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
        vxor v2, v2, v2
        stvx v2, 0, r4
        mtspr 256, r2
        blr
LBB1_2: ; UnifiedReturnBlock
        mtspr 256, r2
        blr

instead of:

_foo2:
        mfspr r2, 256
        oris r5, r2, 12288
        mtspr 256, r5
        lvx v2, 0, r4
        lvx v3, 0, r3
        vcmpeqfp. v2, v3, v2
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        cmpwi cr0, r3, 0
        beq cr0, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
        vxor v2, v2, v2
        stvx v2, 0, r4
        mtspr 256, r2
        blr
LBB1_2: ; UnifiedReturnBlock
        mtspr 256, r2
        blr

This implements CodeGen/PowerPC/vec_br_cmp.ll.
llvm-svn: 27804
-