- Jun 20, 2006
  - Chris Lattner authored (llvm-svn: 28880)
  - Chris Lattner authored: …removed, tblgen produces identical output to with them in. (llvm-svn: 28867)
- Jun 16, 2006
  - Chris Lattner authored (llvm-svn: 28840)
  - Chris Lattner authored: …now compile:

        static unsigned long X;
        void test1() { X = 0; }

    into:

        _test1:
            lis r2, ha16(_X)
            li r3, 0
            stw r3, lo16(_X)(r2)
            blr

    Totally amazing :) (llvm-svn: 28839)
  - Chris Lattner authored (llvm-svn: 28838)
- Jun 10, 2006
  - Chris Lattner authored: …as using incoming argument registers, so the local allocator would clobber them between their set and use. To fix this, we give the call instructions a variable number of uses in the CALL MachineInstr itself, so live variables understands the live ranges of these register arguments. (llvm-svn: 28744)
- Jun 06, 2006
  - Chris Lattner authored (llvm-svn: 28696)
- May 17, 2006
  - Chris Lattner authored: …enough to be autogenerated. (llvm-svn: 28354)
  - Chris Lattner authored: …the copyto/fromregs instead of making the PPCISD::CALL selection code create them. This vastly simplifies the selection code, and moves the ABI handling parts into one place. (llvm-svn: 28346)
- Apr 22, 2006
  - Nate Begeman authored: …x86 and ppc for 100% dense switch statements when relocations are non-PIC. This support will be extended and enhanced in the coming days to support PIC, and less dense forms of jump tables. (llvm-svn: 27947)
- Apr 18, 2006
  - Chris Lattner authored (llvm-svn: 27810)
  - Chris Lattner authored: If an altivec predicate compare is used immediately by a branch, don't use a (serializing) MFCR instruction to read the CR6 register, which requires a compare to get it back to CR's. Instead, just branch on CR6 directly. :)

    For example, for:

        void foo2(vector float *A, vector float *B) {
          if (!vec_any_eq(*A, *B))
            *B = (vector float){0,0,0,0};
        }

    We now generate:

        _foo2:
            mfspr r2, 256
            oris r5, r2, 12288
            mtspr 256, r5
            lvx v2, 0, r4
            lvx v3, 0, r3
            vcmpeqfp. v2, v3, v2
            bne cr6, LBB1_2 ; UnifiedReturnBlock
        LBB1_1: ; cond_true
            vxor v2, v2, v2
            stvx v2, 0, r4
            mtspr 256, r2
            blr
        LBB1_2: ; UnifiedReturnBlock
            mtspr 256, r2
            blr

    instead of:

        _foo2:
            mfspr r2, 256
            oris r5, r2, 12288
            mtspr 256, r5
            lvx v2, 0, r4
            lvx v3, 0, r3
            vcmpeqfp. v2, v3, v2
            mfcr r3, 2
            rlwinm r3, r3, 27, 31, 31
            cmpwi cr0, r3, 0
            beq cr0, LBB1_2 ; UnifiedReturnBlock
        LBB1_1: ; cond_true
            vxor v2, v2, v2
            stvx v2, 0, r4
            mtspr 256, r2
            blr
        LBB1_2: ; UnifiedReturnBlock
            mtspr 256, r2
            blr

    This implements CodeGen/PowerPC/vec_br_cmp.ll. (llvm-svn: 27804)
- Apr 09, 2006
  - Chris Lattner authored (llvm-svn: 27543)
- Mar 31, 2006
  - Chris Lattner authored: …predicates to VCMPo nodes. (llvm-svn: 27285)
- Mar 28, 2006
  - Chris Lattner authored: …same thing and we have a dag node for the former. (llvm-svn: 27205)
- Mar 26, 2006
  - Chris Lattner authored (llvm-svn: 27151)
- Mar 25, 2006
  - Chris Lattner authored: Add a bunch of patterns for different datatypes, e.g. bit_convert, undef and zero vector support. (llvm-svn: 27117)
  - Chris Lattner authored (llvm-svn: 27116)
  - Chris Lattner authored (llvm-svn: 27112)
  - Chris Lattner authored: …

        <int -1, int -1, int -1, int -1>

    and

        <int 65537, int 65537, int 65537, int 65537>

    Using things like:

        vspltisb v0, -1

    and:

        vspltish v0, 1

    instead of using constant pool loads. This implements CodeGen/PowerPC/vec_splat.ll:splat_imm_i{32|16}. (llvm-svn: 27106)
- Mar 24, 2006
  - Chris Lattner authored (llvm-svn: 27069)
  - Chris Lattner authored: …Regression/CodeGen/PowerPC/vec_zero.ll (llvm-svn: 27059)
  - Chris Lattner authored (llvm-svn: 27049)
- Mar 23, 2006
  - Chris Lattner authored (llvm-svn: 26995)
- Mar 22, 2006
  - Chris Lattner authored: …

        _foo2:
            extsw r2, r3
            std r2, -8(r1)
            lfd f0, -8(r1)
            fcfid f0, f0
            frsp f1, f0
            blr

    instead of this:

        _foo2:
            lis r2, ha16(LCPI2_0)
            lis r4, 17200
            xoris r3, r3, 32768
            stw r3, -4(r1)
            stw r4, -8(r1)
            lfs f0, lo16(LCPI2_0)(r2)
            lfd f1, -8(r1)
            fsub f0, f1, f0
            frsp f1, f0
            blr

    This speeds up Misc/pi from 2.44s->2.09s with LLC and from 3.01->2.18s with llcbeta (16.7% and 38.1% respectively). (llvm-svn: 26943)
  - Chris Lattner authored (llvm-svn: 26935)
- Mar 21, 2006
  - Chris Lattner authored (llvm-svn: 26913)
- Mar 20, 2006
  - Chris Lattner authored: …figuring these out! :) (llvm-svn: 26904)
  - Chris Lattner authored (llvm-svn: 26901)
  - Evan Cheng authored (llvm-svn: 26900)
  - Chris Lattner authored: …constant pool load. This generates significantly nicer code for splats. When tblgen gets bugfixed, we can remove the custom selection code. (llvm-svn: 26898)
  - Chris Lattner authored: …instructions (llvm-svn: 26894)
  - Chris Lattner authored (llvm-svn: 26889)
  - Chris Lattner authored (llvm-svn: 26888)
  - Chris Lattner authored: …TODO: leave specific ones as VECTOR_SHUFFLE's and turn them into specialized operations like vsplt* (llvm-svn: 26887)
  - Chris Lattner authored (llvm-svn: 26883)
- Mar 19, 2006
  - Chris Lattner authored (llvm-svn: 26868)
  - Chris Lattner authored (llvm-svn: 26863)
  - Chris Lattner authored (llvm-svn: 26857)
  - Chris Lattner authored (llvm-svn: 26853)