- Dec 15, 2005
Chris Lattner authored
if after legalize. This fixes IA64 failures. llvm-svn: 24725
- Dec 14, 2005
Chris Lattner authored
load. This reduces the number of worklist iterations and avoids missing optimizations that depend on folding things into sext_inreg nodes (which aren't supported by all targets). Tested by Regression/CodeGen/X86/extend.ll:test2 llvm-svn: 24712
Chris Lattner authored
Allow (zext (truncate)) to apply after legalize if the target supports AND (which all do). This compiles:

short %foo() {
        %tmp.0 = load ubyte* %X                 ; <ubyte> [#uses=1]
        %tmp.3 = cast ubyte %tmp.0 to short     ; <short> [#uses=1]
        ret short %tmp.3
}

to:

_foo:
        movzbl _X, %eax
        ret

instead of:

_foo:
        movzbl _X, %eax
        movzbl %al, %eax
        ret

thanks to Evan for pointing this out. llvm-svn: 24709
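In DAG-combiner terms, the transformation is roughly the following. This is only a sketch, written against the modern SelectionDAG API for readability (the 2005 API differed), and foldZextOfTruncate is an invented name:

#include "llvm/CodeGen/SelectionDAG.h"
#include "llvm/CodeGen/TargetLowering.h"
using namespace llvm;

// (zext (truncate x)) -> (and x, mask) when x already has the zext's
// result type; usable even after legalize because every target has AND.
static SDValue foldZextOfTruncate(SelectionDAG &DAG, SDNode *N,
                                  const TargetLowering &TLI) {
  SDValue Trunc = N->getOperand(0);
  EVT VT = N->getValueType(0);
  if (Trunc.getOpcode() == ISD::TRUNCATE &&
      Trunc.getOperand(0).getValueType() == VT &&
      TLI.isOperationLegal(ISD::AND, VT)) {
    // The mask keeps only the bits that survive the truncate.
    unsigned NarrowBits = Trunc.getValueType().getSizeInBits();
    APInt Mask = APInt::getLowBitsSet(VT.getSizeInBits(), NarrowBits);
    SDLoc DL(N);
    return DAG.getNode(ISD::AND, DL, VT, Trunc.getOperand(0),
                       DAG.getConstant(Mask, DL, VT));
  }
  return SDValue(); // no fold applies
}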
Chris Lattner authored
llvm-svn: 24706
Evan Cheng authored
llvm-svn: 24702
- Dec 13, 2005
Chris Lattner authored
llvm-svn: 24697
Chris Lattner authored
for emitting the ctor/dtor list for common targets. llvm-svn: 24694
Nate Begeman authored
ConstantVec legalizing code, which would return constantpool nodes that were not of the target's pointer type. llvm-svn: 24691
- Dec 12, 2005
Chris Lattner authored
llvm-svn: 24678
Chris Lattner authored
llvm-svn: 24677
- Dec 11, 2005
Chris Lattner authored
llvm-svn: 24663
- Dec 10, 2005
Nate Begeman authored
them in the PPC backend, to simplify some logic out of Select and SelectAddr. llvm-svn: 24657
Evan Cheng authored
llvm-svn: 24653
- Dec 09, 2005
Chris Lattner authored
llvm-svn: 24646
Chris Lattner authored
out to me. llvm-svn: 24644
- Dec 08, 2005
Chris Lattner authored
1. Only forward subst offsets into loads and stores, not into arbitrary things, where it will likely become a load.
2. If the source is a cast from pointer, forward subst the cast as well, allowing us to fold the cast away (improving cases when the cast is from an alloca or global).

This hasn't been fully tested, but does appear to further reduce register pressure and improve code. Let's let the testers grind on it a bit. :) llvm-svn: 24640
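Restated as code, the two rules look roughly like this; a standalone sketch using today's LLVM IR classes (the helper names are invented, and the actual change lived in the instruction selector):

#include "llvm/IR/Instructions.h"
using namespace llvm;

// Rule 1: fold the (base + constant offset) computation only into users
// that consume it as a memory address, i.e. loads and stores.
static bool okToForwardSubstInto(const Instruction *User) {
  return isa<LoadInst>(User) || isa<StoreInst>(User);
}

// Rule 2: if the address source is a cast from another pointer, substitute
// through the cast so the cast itself can be folded away.
static Value *lookThroughPointerCast(Value *V) {
  if (auto *CI = dyn_cast<CastInst>(V))
    if (CI->getSrcTy()->isPointerTy())
      return CI->getOperand(0);
  return V;
}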
- Dec 07, 2005
Nate Begeman authored
type when the target did not support them. Also teach Legalize how to expand ConstantVecs. This allows us to generate:

_test:
        lwz r2, 12(r3)
        lwz r4, 8(r3)
        lwz r5, 4(r3)
        lwz r6, 0(r3)
        addi r2, r2, 4
        addi r4, r4, 3
        addi r5, r5, 2
        addi r6, r6, 1
        stw r2, 12(r3)
        stw r4, 8(r3)
        stw r5, 4(r3)
        stw r6, 0(r3)
        blr

For:

void %test(%v4i *%P) {
        %T = load %v4i* %P
        %S = add %v4i %T, <int 1, int 2, int 3, int 4>
        store %v4i %S, %v4i * %P
        ret void
}

On PowerPC. llvm-svn: 24633
Chris Lattner authored
if the target supports the resultant sextinreg llvm-svn: 24632
Chris Lattner authored
when the types match up. This allows the X86 backend to compile:

sbyte %toggle_value(sbyte* %tmp.1) {
        %tmp.2 = load sbyte* %tmp.1
        ret sbyte %tmp.2
}

to this:

_toggle_value:
        mov %EAX, DWORD PTR [%ESP + 4]
        movsx %EAX, BYTE PTR [%EAX]
        ret

instead of this:

_toggle_value:
        mov %EAX, DWORD PTR [%ESP + 4]
        movsx %EAX, BYTE PTR [%EAX]
        movsx %EAX, %AL
        ret

noticed in Shootout/objinst. -Chris llvm-svn: 24630
- Dec 06, 2005
Nate Begeman authored
constant nodes with vector types. Also teach the asm printer how to print ConstantPacked constant pool entries. This allows us to generate altivec code such as the following, which adds a vector constant to a packed float:

LCPI1_0: <4 x float> < float 0.0e+0, float 0.0e+0, float 0.0e+0, float 1.0e+0 >
        .space 4
        .space 4
        .space 4
        .long 1065353216        ; float 1
        .text
        .align 4
        .globl _foo
_foo:
        lis r2, ha16(LCPI1_0)
        la r2, lo16(LCPI1_0)(r2)
        li r4, 0
        lvx v0, r4, r2
        lvx v1, r4, r3
        vaddfp v0, v1, v0
        stvx v0, r4, r3
        blr

For the llvm code:

void %foo(<4 x float> * %a) {
entry:
        %tmp1 = load <4 x float> * %a;
        %tmp2 = add <4 x float> %tmp1, < float 0.0, float 0.0, float 0.0, float 1.0 >
        store <4 x float> %tmp2, <4 x float> *%a
        ret void
}

llvm-svn: 24616
- Dec 05, 2005
Chris Lattner authored
PPC and other targets). In particular, consider code like this:

struct Vector3 { double x, y, z; };
struct Matrix3 { Vector3 a, b, c; };
double dot(Vector3 &a, Vector3 &b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
}
Vector3 mul(Vector3 &a, Matrix3 &b) {
        Vector3 r;
        r.x = dot( a, b.a );
        r.y = dot( a, b.b );
        r.z = dot( a, b.c );
        return r;
}
void transform(Matrix3 &m, Vector3 *x, int n) {
        for (int i = 0; i < n; i++)
                x[i] = mul( x[i], m );
}

we compile transform to a loop with all of the GEP instructions for indexing into 'm' pulled out of the loop (9 of them). Because isel occurs a bb at a time we are unable to fold the constant index into the loads in the loop, leading to PPC code that looks like this:

LBB3_1: ; no_exit.preheader
        li r2, 0
        addi r6, r3, 64         ;; 9 values live across the loop body!
        addi r7, r3, 56
        addi r8, r3, 48
        addi r9, r3, 40
        addi r10, r3, 32
        addi r11, r3, 24
        addi r12, r3, 16
        addi r30, r3, 8
LBB3_2: ; no_exit
        lfd f0, 0(r30)
        lfd f1, 8(r4)
        fmul f0, f1, f0
        lfd f2, 0(r3)           ;; no constant indices folded into the loads!
        lfd f3, 0(r4)
        lfd f4, 0(r10)
        lfd f5, 0(r6)
        lfd f6, 0(r7)
        lfd f7, 0(r8)
        lfd f8, 0(r9)
        lfd f9, 0(r11)
        lfd f10, 0(r12)
        lfd f11, 16(r4)
        fmadd f0, f3, f2, f0
        fmul f2, f1, f4
        fmadd f0, f11, f10, f0
        fmadd f2, f3, f9, f2
        fmul f1, f1, f6
        stfd f0, 0(r4)
        fmadd f0, f11, f8, f2
        fmadd f1, f3, f7, f1
        stfd f0, 8(r4)
        fmadd f0, f11, f5, f1
        addi r29, r4, 24
        stfd f0, 16(r4)
        addi r2, r2, 1
        cmpw cr0, r2, r5
        or r4, r29, r29
        bne cr0, LBB3_2         ; no_exit

uh, yuck. With this patch, we now sink the constant offsets into the loop, producing this code:

LBB3_1: ; no_exit.preheader
        li r2, 0
LBB3_2: ; no_exit
        lfd f0, 8(r3)
        lfd f1, 8(r4)
        fmul f0, f1, f0
        lfd f2, 0(r3)
        lfd f3, 0(r4)
        lfd f4, 32(r3)          ;; much nicer.
        lfd f5, 64(r3)
        lfd f6, 56(r3)
        lfd f7, 48(r3)
        lfd f8, 40(r3)
        lfd f9, 24(r3)
        lfd f10, 16(r3)
        lfd f11, 16(r4)
        fmadd f0, f3, f2, f0
        fmul f2, f1, f4
        fmadd f0, f11, f10, f0
        fmadd f2, f3, f9, f2
        fmul f1, f1, f6
        stfd f0, 0(r4)
        fmadd f0, f11, f8, f2
        fmadd f1, f3, f7, f1
        stfd f0, 8(r4)
        fmadd f0, f11, f5, f1
        addi r6, r4, 24
        stfd f0, 16(r4)
        addi r2, r2, 1
        cmpw cr0, r2, r5
        or r4, r6, r6
        bne cr0, LBB3_2         ; no_exit

This is much nicer as it reduces register pressure in the loop a lot. On X86, this takes the function from having 9 spilled registers to 2. This should help some spec programs on X86 (gzip?) This is currently only enabled with -enable-gep-isel-opt to allow perf testing tonight. llvm-svn: 24606
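The mechanism, reduced to a sketch: duplicate a constant-index GEP next to each out-of-block load or store that uses it, so the offset becomes part of the addressing mode instead of staying live across the loop. The helper name below is invented and the modern IR API is used for readability; the actual 2005 change lived inside selection, gated by -enable-gep-isel-opt:

#include "llvm/ADT/STLExtras.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

static void sinkGEPIntoUserBlocks(GetElementPtrInst *GEP) {
  if (!GEP->hasAllConstantIndices())
    return;
  for (Use &U : make_early_inc_range(GEP->uses())) {
    auto *UserInst = cast<Instruction>(U.getUser());
    if (UserInst->getParent() == GEP->getParent())
      continue; // same block: isel can already fold the offset
    if (!isa<LoadInst>(UserInst) && !isa<StoreInst>(UserInst))
      continue; // only sink into memory operations
    // Clone the cheap address computation next to its user so the constant
    // offset folds into the load/store addressing mode there.
    Instruction *Clone = GEP->clone();
    Clone->insertBefore(UserInst);
    U.set(Clone);
  }
}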
- Dec 03, 2005
Chris Lattner authored
llvm-svn: 24583
- Dec 02, 2005
Andrew Lenharth authored
llvm-svn: 24574
Andrew Lenharth authored
llvm-svn: 24573
Chris Lattner authored
should come from the arbitrary ops map. This fixes Regression/CodeGen/PowerPC/2005-12-01-Crash.ll llvm-svn: 24571
- Dec 01, 2005
Chris Lattner authored
llvm-svn: 24568
Chris Lattner authored
selecting a node and use a mix of getTargetNode() and SelectNodeTo. Because SelectNodeTo didn't check the CSE maps for a preexisting node and didn't insert its result into the CSE maps, we would sometimes miss a CSE opportunity. This is extremely rare, but worth fixing for completeness. llvm-svn: 24565
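The discipline being restored is the standard check-the-CSE-map-first pattern, sketched below in plain C++ with hypothetical stand-in types (not LLVM's real SelectionDAG classes):

#include <map>
#include <utility>
#include <vector>

struct Node {
  int Opcode;
  std::vector<Node *> Ops;
};
using NodeKey = std::pair<int, std::vector<Node *>>;

static std::map<NodeKey, Node *> CSEMap;

// Look for an identical preexisting node before creating a new one, and
// record every new node so later queries can find it.
static Node *getOrCreateNode(int Opcode, std::vector<Node *> Ops) {
  NodeKey Key{Opcode, Ops};
  auto It = CSEMap.find(Key);
  if (It != CSEMap.end())
    return It->second; // reuse: the opportunity SelectNodeTo used to miss
  Node *N = new Node{Opcode, std::move(Ops)};
  CSEMap.emplace(std::move(Key), N);
  return N;
}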
Nate Begeman authored
work. This change has no effect on generated code. llvm-svn: 24563
- Nov 30, 2005
Chris Lattner authored
llvm-svn: 24548
Chris Lattner authored
replaceAllUses'ing. llvm-svn: 24539
Andrew Lenharth authored
llvm-svn: 24537
Nate Begeman authored
changes allow us to generate the following code:

_foo:
        li r2, 0
        lvx v0, r2, r3
        vaddfp v0, v0, v0
        stvx v0, r2, r3
        blr

for this llvm:

void %foo(<4 x float>* %a) {
entry:
        %tmp1 = load <4 x float>* %a
        %tmp2 = add <4 x float> %tmp1, %tmp1
        store <4 x float> %tmp2, <4 x float>* %a
        ret void
}

llvm-svn: 24534
Andrew Lenharth authored
llvm-svn: 24531
Reid Spencer authored
file to become corrupted due to interactions between mmap'd memory segments and file descriptors closing. The problem is completely avoided by using a third temporary file. Patch provided by Evan Jones llvm-svn: 24527
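The underlying pattern: write the new bytes to a separate file, then rename it over the original, so no write ever goes through a file that may still be mmap'd or have open descriptors. A minimal sketch in standard C++17 (not bugpoint's actual code; the path handling is hypothetical):

#include <filesystem>
#include <fstream>
#include <string>

static void safeRewrite(const std::filesystem::path &Target,
                        const std::string &NewContents) {
  // Produce the new contents in a third, unrelated file first...
  std::filesystem::path Tmp = Target;
  Tmp += ".tmp";
  {
    std::ofstream Out(Tmp, std::ios::binary | std::ios::trunc);
    Out << NewContents;
  } // the stream is closed (and flushed) before the rename
  // ...then move it over the original in a single step.
  std::filesystem::rename(Tmp, Target);
}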
Evan Cheng authored
GlobalValue * and index pair. Update getGlobalAddress() for symmetry. llvm-svn: 24524
Evan Cheng authored
llvm-svn: 24523
- Nov 29, 2005
Chris Lattner authored
contributed by Daniel Berlin, with a few cleanups here and there by me. llvm-svn: 24515
Nate Begeman authored
and make a few changes to the legalization machinery to support more than 16 types. llvm-svn: 24511
- Nov 22, 2005
Nate Begeman authored
vector operations (load, add, sub, mul). This allows us to codegen:

void %foo(<4 x float> * %a) {
entry:
        %tmp1 = load <4 x float> * %a;
        %tmp2 = add <4 x float> %tmp1, %tmp1
        store <4 x float> %tmp2, <4 x float> *%a
        ret void
}

on ppc as:

_foo:
        lfs f0, 12(r3)
        lfs f1, 8(r3)
        lfs f2, 4(r3)
        lfs f3, 0(r3)
        fadds f0, f0, f0
        fadds f1, f1, f1
        fadds f2, f2, f2
        fadds f3, f3, f3
        stfs f0, 12(r3)
        stfs f1, 8(r3)
        stfs f2, 4(r3)
        stfs f3, 0(r3)
        blr

llvm-svn: 24484
Nate Begeman authored
generates it. Make MVT::Vector expand-only, and remove the code in Legalize that attempts to legalize it. The plan for supporting N x Type is to continually expand it in ExpandOp until it gets down to 2 x Type, where it will be scalarized into a pair of scalars. llvm-svn: 24482
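The expansion plan can be sketched as a simple recursion; the Val type below is a hypothetical stand-in for a legalizer value, and the sketch assumes element counts are powers of two:

#include <utility>

struct Val {
  unsigned NumElts; // number of vector elements
};

static std::pair<Val, Val> splitInHalf(const Val &V) {
  return {Val{V.NumElts / 2}, Val{V.NumElts / 2}};
}

// Keep splitting N x Type into two N/2 x Type halves; at 2 x Type, stop
// and scalarize into a pair of scalars.
static void expandVector(const Val &V) {
  if (V.NumElts == 2)
    return; // scalarize here: two scalar values replace the vector
  auto Halves = splitInHalf(V);
  expandVector(Halves.first);
  expandVector(Halves.second);
}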