- Apr 17, 2006
-
-
Chris Lattner authored
llvm-svn: 27775
-
Chris Lattner authored
llvm-svn: 27774
-
Evan Cheng authored
llvm-svn: 27773
-
Chris Lattner authored
the vrsave register for the caller. This allows us to codegen a function as:

_test_rol:
        mfspr r2, 256
        mr r3, r2
        mtspr 256, r3
        vspltisw v2, -12
        vrlw v2, v2, v2
        mtspr 256, r2
        blr

instead of:

_test_rol:
        mfspr r2, 256
        oris r3, r2, 40960
        mtspr 256, r3
        vspltisw v0, -12
        vrlw v2, v0, v0
        mtspr 256, r2
        blr

llvm-svn: 27772
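For readers decoding the old prologue above: oris r3, r2, 40960 ORs the immediate into the upper halfword, i.e. r3 = r2 | 0xA0000000, which under the standard AltiVec VRSAVE convention (most-significant bit tracks v0, the next bit v1, and so on; this convention is not stated in the commit) marks v0 and v2 as live. The new prologue no longer ORs in any extra bits. A minimal C++ check of that decoding, illustrative only and not LLVM code:

        // "oris r3, r2, 40960" computes r3 = r2 | (40960 << 16).
        // VRSAVE convention assumed here: MSB tracks v0, next bit v1, ...
        #include <cstdint>
        #include <cstdio>

        constexpr std::uint32_t vrsaveBit(unsigned VReg) { return 0x80000000u >> VReg; }

        int main() {
          std::printf("%08x\n", 40960u << 16);                // a0000000: what the oris sets
          std::printf("%08x\n", vrsaveBit(0) | vrsaveBit(2)); // a0000000: the bits for v0 and v2
        }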
-
Chris Lattner authored
        vspltisw v2, -12
        vrlw v2, v2, v2

instead of:

        vspltisw v0, -12
        vrlw v2, v0, v0

when a function is returning a value.

llvm-svn: 27771
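The two-instruction sequence above is a constant materialization: vspltisw v2, -12 puts 0xFFFFFFF4 in every word of v2, and vrlw rotates each word left by the low five bits of the corresponding element of the second source, here -12 & 31 = 20, leaving 0xFF4FFFFF in every lane without any memory access. A quick C++ check of that arithmetic, illustrative and not from the commit:

        #include <cstdint>
        #include <cstdio>

        // Rotate a 32-bit word left by N (0 < N < 32), as vrlw does per lane.
        constexpr std::uint32_t rotl32(std::uint32_t V, unsigned N) {
          return (V << N) | (V >> (32 - N));
        }

        int main() {
          std::uint32_t Lane = static_cast<std::uint32_t>(-12);       // 0xfffffff4, from vspltisw
          std::printf("%08x\n", rotl32(Lane, (unsigned)(-12) & 31));  // ff4fffff in every lane
        }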
-
Chris Lattner authored
llvm-svn: 27770
-
Chris Lattner authored
llvm-svn: 27769
-
Evan Cheng authored
llvm-svn: 27768
-
Chris Lattner authored
llvm-svn: 27767
-
Chris Lattner authored
llvm-svn: 27766
-
Chris Lattner authored
and a shuffle. For this:

void %test2(<4 x float>* %F, float %f) {
        %tmp = load <4 x float>* %F             ; <<4 x float>> [#uses=2]
        %tmp3 = add <4 x float> %tmp, %tmp      ; <<4 x float>> [#uses=1]
        %tmp2 = insertelement <4 x float> %tmp3, float %f, uint 2       ; <<4 x float>> [#uses=2]
        %tmp6 = add <4 x float> %tmp2, %tmp2    ; <<4 x float>> [#uses=1]
        store <4 x float> %tmp6, <4 x float>* %F
        ret void
}

we now get this on X86 (which will get better):

_test2:
        movl 4(%esp), %eax
        movaps (%eax), %xmm0
        addps %xmm0, %xmm0
        movaps %xmm0, %xmm1
        shufps $3, %xmm1, %xmm1
        movaps %xmm0, %xmm2
        shufps $1, %xmm2, %xmm2
        unpcklps %xmm1, %xmm2
        movss 8(%esp), %xmm1
        unpcklps %xmm1, %xmm0
        unpcklps %xmm2, %xmm0
        addps %xmm0, %xmm0
        movaps %xmm0, (%eax)
        ret

instead of:

_test2:
        subl $28, %esp
        movl 32(%esp), %eax
        movaps (%eax), %xmm0
        addps %xmm0, %xmm0
        movaps %xmm0, (%esp)
        movss 36(%esp), %xmm0
        movss %xmm0, 8(%esp)
        movaps (%esp), %xmm0
        addps %xmm0, %xmm0
        movaps %xmm0, (%eax)
        addl $28, %esp
        ret

llvm-svn: 27765
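For readers more comfortable with C than with LLVM IR, the function above is roughly the following hypothetical restatement using GCC/Clang vector extensions (the names just mirror the IR and are not from the original test). The interesting operation is the insertelement into lane 2, which the improved codegen performs with register shuffles instead of spilling the vector to the stack, overwriting one lane, and reloading it.

        // Hypothetical C++ equivalent of %test2 above (GCC/Clang vector
        // extensions); nothing here is taken from the original test file.
        typedef float v4sf __attribute__((vector_size(16)));

        void test2(v4sf *F, float f) {
          v4sf t = *F + *F;   // %tmp3 = add <4 x float> %tmp, %tmp
          t[2] = f;           // %tmp2 = insertelement <4 x float> %tmp3, float %f, uint 2
          *F = t + t;         // %tmp6 = add <4 x float> %tmp2, %tmp2 ; store
        }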
-
Chris Lattner authored
being a bit more clever, add support for odd splats from -31 to -17. llvm-svn: 27764
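vspltisw takes a 5-bit signed immediate, so a single instruction only covers splats of -16..15. An odd value v in [-31,-17] still fits in two splats and an add because v = (v + 16) + (-16) with both addends in the immediate range; presumably this, or an equivalent identity, is the trick being referred to. A quick check, illustrative only:

        #include <cassert>

        int main() {
          // Every odd v in [-31,-17] decomposes into two vspltisw-sized addends.
          for (int V = -31; V <= -17; V += 2) {
            int A = V + 16, B = -16;
            assert(A >= -16 && A <= 15);
            assert(A + B == V);
          }
        }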
-
Evan Cheng authored
llvm-svn: 27763
-
Evan Cheng authored
llvm-svn: 27762
-
Jeff Cohen authored
llvm-svn: 27761
-
Chris Lattner authored
This implements vec_constants.ll:test_vsldoi and test_rol llvm-svn: 27760
-
Chris Lattner authored
llvm-svn: 27759
-
Chris Lattner authored
llvm-svn: 27758
-
Evan Cheng authored
llvm-svn: 27755
-
Chris Lattner authored
new patterns. llvm-svn: 27754
-
Chris Lattner authored
llvm-svn: 27753
-
Chris Lattner authored
PowerPC/vec_constants.ll:test_29. llvm-svn: 27752
-
Chris Lattner authored
llvm-svn: 27751
-
Chris Lattner authored
Efficiently codegen even splats in the range [-32,30]. This allows us to codegen <30,30,30,30> as:

        vspltisw v0, 15
        vadduwm v2, v0, v0

instead of as a constant pool load.

llvm-svn: 27750
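The bound follows from the size of the immediate: any even v in [-32,30] halves into vspltisw's [-16,15] range, so splat(v) can be formed as a vadduwm of two identical vspltisw results, exactly as in the <30,30,30,30> example above, and 30 = 15 + 15 is the largest even value reachable this way. A range check, illustrative only:

        #include <cassert>

        int main() {
          for (int V = -32; V <= 30; V += 2) {
            assert(V / 2 >= -16 && V / 2 <= 15);  // fits vspltisw's immediate
            assert(V / 2 + V / 2 == V);           // exact, since V is even
          }
        }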
-
Chris Lattner authored
llvm-svn: 27749
-
Chris Lattner authored
if it can be implemented in 3 or fewer discrete altivec instructions, codegen it as such. This implements Regression/CodeGen/PowerPC/vec_perf_shuffle.ll llvm-svn: 27748
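A sketch of the usual "perfect shuffle" table approach, stated as an assumption about the general technique with hypothetical names (the commit itself only states the 3-instruction threshold): each lane of a 4-lane shuffle mask picks one of 8 input lanes or is undef, giving 9^4 = 6561 possible masks, so the cheapest AltiVec sequence for every mask can be precomputed offline and looked up during selection, falling back to vperm only when the recorded cost exceeds 3.

        #include <cstdint>
        #include <vector>

        // Hypothetical entry layout: cost in the top bits, recipe in the rest.
        unsigned shuffleCost(const std::vector<std::uint32_t> &Table, const int Mask[4]) {
          unsigned Index = 0;
          for (int i = 0; i < 4; ++i)        // Mask[i] in [0,8], 8 meaning "undef"
            Index = Index * 9 + static_cast<unsigned>(Mask[i]);
          return Table[Index] >> 30;         // table has 9*9*9*9 = 6561 entries
        }

        int main() {
          std::vector<std::uint32_t> Table(6561, 0);  // placeholder table
          int Mask[4] = {0, 4, 1, 5};                 // e.g. an unpack-low pattern
          return shuffleCost(Table, Mask) <= 3 ? 0 : 1;
        }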
-
Chris Lattner authored
and shouldn't be lowered to vperm. llvm-svn: 27747
-
Chris Lattner authored
llvm-svn: 27746
-
Chris Lattner authored
llvm-svn: 27745
-
Chris Lattner authored
llvm-svn: 27744
-
Chris Lattner authored
llvm-svn: 27743
-
Chris Lattner authored
llvm-svn: 27742
-
Chris Lattner authored
llvm-svn: 27741
-
Chris Lattner authored
llvm-svn: 27740
-
Chris Lattner authored
of various 4-element vectors. llvm-svn: 27739
-
Chris Lattner authored
llvm-svn: 27738
-
Chris Lattner authored
llvm-svn: 27737
-
Chris Lattner authored
Altivec vectors. llvm-svn: 27736
-
- Apr 16, 2006
-
-
Evan Cheng authored
llvm-svn: 27735
-
Evan Cheng authored
llvm-svn: 27734
-