- Dec 08, 2005
-
-
Chris Lattner authored
1. Only forward subst offsets into loads and stores, not into arbitrary things, where it will likely become a load.
2. If the source is a cast from pointer, forward subst the cast as well, allowing us to fold the cast away (improving cases when the cast is from an alloca or global).
This hasn't been fully tested, but does appear to further reduce register pressure and improve code. Let's let the testers grind on it a bit. :)
llvm-svn: 24640
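A minimal sketch of the kind of input this targets (the function and names here are hypothetical, not from the commit): both the pointer cast and the constant offset feed a store, so after forward substitution the whole address computation can be folded into the store:

    void %example() {
            %buf = alloca [4 x int]
            %p = cast [4 x int]* %buf to int*        ; cast from an alloca
            %q = getelementptr int* %p, int 2        ; constant offset
            store int 7, int* %q                     ; both fold into this address
            ret void
    }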
-
Chris Lattner authored
llvm-svn: 24639
-
Evan Cheng authored
llvm-svn: 24638
-
Evan Cheng authored
llvm-svn: 24637
-
Evan Cheng authored
* Renamed MatchingNodes to RootNodes. llvm-svn: 24636
-
Evan Cheng authored
false if the match is not profitable, e.g. leal 1(%eax), %eax.
* Added patterns for X86 integer loads and LEA32.
llvm-svn: 24635
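As a hedged illustration (function name hypothetical): an add of 1 can match either an LEA or a plain increment, and when the source and destination registers coincide, as in leal 1(%eax), %eax, the LEA buys nothing, so the matching routine should report the match as unprofitable:

    int %inc(int %x) {
            %y = add int %x, 1       ; matches LEA, but inc/add is the better pick here
            ret int %y
    }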
-
Evan Cheng authored
matching code that is not currently auto-generated by tblgen, e.g. X86 addressing mode. Selection routines for complex patterns can return multiple operands, e.g. X86 addressing mode returns 4. llvm-svn: 24634
-
- Dec 07, 2005
-
-
Nate Begeman authored
type when the target did not support them. Also teach Legalize how to expand ConstantVecs. This allows us to generate

    _test:
            lwz r2, 12(r3)
            lwz r4, 8(r3)
            lwz r5, 4(r3)
            lwz r6, 0(r3)
            addi r2, r2, 4
            addi r4, r4, 3
            addi r5, r5, 2
            addi r6, r6, 1
            stw r2, 12(r3)
            stw r4, 8(r3)
            stw r5, 4(r3)
            stw r6, 0(r3)
            blr

For:

    void %test(%v4i *%P) {
            %T = load %v4i* %P
            %S = add %v4i %T, <int 1, int 2, int 3, int 4>
            store %v4i %S, %v4i * %P
            ret void
    }

On PowerPC.
llvm-svn: 24633
-
Chris Lattner authored
if the target supports the resultant sextinreg llvm-svn: 24632
-
Chris Lattner authored
llvm-svn: 24631
-
Chris Lattner authored
when the types match up. This allows the X86 backend to compile:

    sbyte %toggle_value(sbyte* %tmp.1) {
            %tmp.2 = load sbyte* %tmp.1
            ret sbyte %tmp.2
    }

to this:

    _toggle_value:
            mov %EAX, DWORD PTR [%ESP + 4]
            movsx %EAX, BYTE PTR [%EAX]
            ret

instead of this:

    _toggle_value:
            mov %EAX, DWORD PTR [%ESP + 4]
            movsx %EAX, BYTE PTR [%EAX]
            movsx %EAX, %AL
            ret

noticed in Shootout/objinst.
-Chris
llvm-svn: 24630
-
Chris Lattner authored
llvm-svn: 24629
-
Andrew Lenharth authored
llvm-svn: 24628
-
- Dec 06, 2005
-
-
Chris Lattner authored
llvm-svn: 24627
-
Andrew Lenharth authored
This solves the problem of the CBE renaming symbols that start with '.' while the assembly side still tries to reference them by their old names. Should be safe until we hit a language front end that lets you specify such a name. llvm-svn: 24626
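For context, a hedged example of the kind of symbol in question (the name is hypothetical): an internal global whose name begins with '.', which the CBE would rename while hand-written assembly kept using the original:

    %.str_1 = internal constant [5 x sbyte] c"abcd\00"    ; emitted symbol starts with '.'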
-
Andrew Lenharth authored
more decent branches for FP. I might have to make some intermediate nodes to actually be able to use the DAG for FPcmp.
llvm-svn: 24625
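A minimal, assumed example of the FP comparison shape this refers to (names hypothetical); it is the lowering of compare-plus-branch on doubles that the intermediate nodes would help with:

    bool %fpless(double %a, double %b) {
            %c = setlt double %a, %b    ; FP compare whose result feeds a branch
            ret bool %c
    }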
-
Andrew Lenharth authored
llvm-svn: 24624
-
Sumant Kowshik authored
llvm-svn: 24623
-
Sumant Kowshik authored
llvm-svn: 24621
-
Sumant Kowshik authored
llvm-svn: 24620
-
Chris Lattner authored
PR662. Thanks to Markus for providing me with a ton of files to reproduce the problem! llvm-svn: 24619
-
Chris Lattner authored
llvm-svn: 24618
-
Chris Lattner authored
Patch by Saem Ghani, thanks! llvm-svn: 24617
-
Nate Begeman authored
constant nodes with vector types. Also teach the asm printer how to print ConstantPacked constant pool entries. This allows us to generate altivec code such as the following, which adds a vector constant to a packed float.

    LCPI1_0: <4 x float> < float 0.0e+0, float 0.0e+0, float 0.0e+0, float 1.0e+0 >
            .space 4
            .space 4
            .space 4
            .long 1065353216 ; float 1
            .text
            .align 4
            .globl _foo
    _foo:
            lis r2, ha16(LCPI1_0)
            la r2, lo16(LCPI1_0)(r2)
            li r4, 0
            lvx v0, r4, r2
            lvx v1, r4, r3
            vaddfp v0, v1, v0
            stvx v0, r4, r3
            blr

For the llvm code:

    void %foo(<4 x float> * %a) {
    entry:
            %tmp1 = load <4 x float> * %a;
            %tmp2 = add <4 x float> %tmp1, < float 0.0, float 0.0, float 0.0, float 1.0 >
            store <4 x float> %tmp2, <4 x float> *%a
            ret void
    }

llvm-svn: 24616
-
Chris Lattner authored
amount handling that PPC provides. These are generated by the lowering code and prevent the dag combiner from assuming (rightfully) that the shifts only look at 5 bits. This fixes a miscompilation of crafty with the new front-end.
llvm-svn: 24615
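For context, a hedged sketch of the kind of input that exercises this (names hypothetical, and the connection to the crafty bug is an assumption): a 64-bit shift on 32-bit PPC is expanded into 32-bit pieces whose shift amount can be 32-63, which only the 6-bit PPC shift behavior handles correctly:

    ulong %lshr64(ulong %x, ubyte %amt) {
            %r = shr ulong %x, ubyte %amt    ; amount may be >= 32 after expansion
            ret ulong %r
    }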
-
Andrew Lenharth authored
llvm-svn: 24614
-
Andrew Lenharth authored
llvm-svn: 24613
-
Andrew Lenharth authored
llvm-svn: 24612
-
Evan Cheng authored
llvm-svn: 24611
-
Evan Cheng authored
* Fixed a bug related to hasCtrlDep property use. llvm-svn: 24610
-
- Dec 05, 2005
-
-
Andrew Lenharth authored
llvm-svn: 24609
-
Chris Lattner authored
know that small negative values fit into the immediate field of addressing modes. llvm-svn: 24608
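A hedged example of what this enables (function name hypothetical): a load at a small negative offset can now use the signed displacement field directly, e.g. something like lwz r3, -4(r3) on PPC, instead of materializing the offset in a register:

    int %load_prev(int* %p) {
            %q = getelementptr int* %p, int -1    ; small negative offset
            %v = load int* %q
            ret int %v
    }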
-
Andrew Lenharth authored
llvm-svn: 24607
-
Chris Lattner authored
PPC and other targets). In particular, consider code like this:

    struct Vector3 { double x, y, z; };
    struct Matrix3 { Vector3 a, b, c; };
    double dot(Vector3 &a, Vector3 &b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }
    Vector3 mul(Vector3 &a, Matrix3 &b) {
        Vector3 r;
        r.x = dot( a, b.a );
        r.y = dot( a, b.b );
        r.z = dot( a, b.c );
        return r;
    }
    void transform(Matrix3 &m, Vector3 *x, int n) {
        for (int i = 0; i < n; i++)
            x[i] = mul( x[i], m );
    }

we compile transform to a loop with all of the GEP instructions for indexing into 'm' pulled out of the loop (9 of them). Because isel occurs a bb at a time we are unable to fold the constant index into the loads in the loop, leading to PPC code that looks like this:

    LBB3_1: ; no_exit.preheader
            li r2, 0
            addi r6, r3, 64    ;; 9 values live across the loop body!
            addi r7, r3, 56
            addi r8, r3, 48
            addi r9, r3, 40
            addi r10, r3, 32
            addi r11, r3, 24
            addi r12, r3, 16
            addi r30, r3, 8
    LBB3_2: ; no_exit
            lfd f0, 0(r30)
            lfd f1, 8(r4)
            fmul f0, f1, f0
            lfd f2, 0(r3)      ;; no constant indices folded into the loads!
            lfd f3, 0(r4)
            lfd f4, 0(r10)
            lfd f5, 0(r6)
            lfd f6, 0(r7)
            lfd f7, 0(r8)
            lfd f8, 0(r9)
            lfd f9, 0(r11)
            lfd f10, 0(r12)
            lfd f11, 16(r4)
            fmadd f0, f3, f2, f0
            fmul f2, f1, f4
            fmadd f0, f11, f10, f0
            fmadd f2, f3, f9, f2
            fmul f1, f1, f6
            stfd f0, 0(r4)
            fmadd f0, f11, f8, f2
            fmadd f1, f3, f7, f1
            stfd f0, 8(r4)
            fmadd f0, f11, f5, f1
            addi r29, r4, 24
            stfd f0, 16(r4)
            addi r2, r2, 1
            cmpw cr0, r2, r5
            or r4, r29, r29
            bne cr0, LBB3_2 ; no_exit

uh, yuck. With this patch, we now sink the constant offsets into the loop, producing this code:

    LBB3_1: ; no_exit.preheader
            li r2, 0
    LBB3_2: ; no_exit
            lfd f0, 8(r3)
            lfd f1, 8(r4)
            fmul f0, f1, f0
            lfd f2, 0(r3)
            lfd f3, 0(r4)
            lfd f4, 32(r3)     ;; much nicer.
            lfd f5, 64(r3)
            lfd f6, 56(r3)
            lfd f7, 48(r3)
            lfd f8, 40(r3)
            lfd f9, 24(r3)
            lfd f10, 16(r3)
            lfd f11, 16(r4)
            fmadd f0, f3, f2, f0
            fmul f2, f1, f4
            fmadd f0, f11, f10, f0
            fmadd f2, f3, f9, f2
            fmul f1, f1, f6
            stfd f0, 0(r4)
            fmadd f0, f11, f8, f2
            fmadd f1, f3, f7, f1
            stfd f0, 8(r4)
            fmadd f0, f11, f5, f1
            addi r6, r4, 24
            stfd f0, 16(r4)
            addi r2, r2, 1
            cmpw cr0, r2, r5
            or r4, r6, r6
            bne cr0, LBB3_2 ; no_exit

This is much nicer as it reduces register pressure in the loop a lot. On X86, this takes the function from having 9 spilled registers to 2. This should help some spec programs on X86 (gzip?). This is currently only enabled with -enable-gep-isel-opt to allow perf testing tonight.
llvm-svn: 24606
-
Chris Lattner authored
internal linkage. Patch provided by Evan Jones, thanks! llvm-svn: 24604
-
Chris Lattner authored
llvm-svn: 24603
-
Chris Lattner authored
llvm-svn: 24602
-
Chris Lattner authored
llvm-svn: 24601
-
Chris Lattner authored
llvm-svn: 24600
-
Chris Lattner authored
1. Remove redundant type casts now that PR673 is implemented.
2. Implement the OUT*ir instructions correctly. The port number really *is* a 16-bit value, but the patterns should only match if the number is 0-255. Update the patterns so they now match.
3. Fix patterns for shifts to reflect that the shift amount is always an i8, not an i16 as they were believed to be before. This previous fib stopped working when we started knowing that CL has type i8.
4. Change use of i16i8imm in SH*ri patterns to all be imm.
llvm-svn: 24599
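Illustrating point 3 with a hedged sketch (function name hypothetical): in the LLVM IR of this era the shift amount is always a ubyte, which is why the selected amount register (CL) is known to have type i8:

    int %shl_var(int %x, ubyte %amt) {
            %r = shl int %x, ubyte %amt    ; amount is ubyte, i.e. i8 after selection
            ret int %r
    }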
-