- Mar 15, 2005
Chris Lattner authored
using Function::arg_{iterator|begin|end}. Likewise Module::g* -> Module::global_*. This patch was contributed by Gabor Greif, thanks!
llvm-svn: 20597
- Mar 10, 2005
Chris Lattner authored
llvm-svn: 20555
Chris Lattner authored
because we were checking the wrong thing. Thanks to Andrew for pointing this out!
llvm-svn: 20554
Chris Lattner authored
numbering values in live ranges for physical registers. The Alpha backend currently generates code that looks like this:

    vreg = preg
    ...
    preg = vreg
    use preg
    ...
    preg = vreg
    use preg
    etc.

Because vreg contains the value of preg coming in, each of the copies back into preg contains that initial value as well. In the case of the Alpha, this allows this testcase:

    void "foo"(int %blah) {
        store int 5, int* %MyVar
        store int 12, int* %MyVar2
        ret void
    }

to compile to:

    foo:
        ldgp $29, 0($27)
        ldiq $0,5
        stl $0,MyVar
        ldiq $0,12
        stl $0,MyVar2
        ret $31,($26),1

instead of:

    foo:
        ldgp $29, 0($27)
        bis $29,$29,$0
        ldiq $1,5
        bis $0,$0,$29
        stl $1,MyVar
        ldiq $1,12
        bis $0,$0,$29
        stl $1,MyVar2
        ret $31,($26),1

This does not seem to have any noticeable effect on X86 code. This fixes PR535.
llvm-svn: 20536
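Loosely, the reason those bis copies can disappear: once the values in a live range are numbered, a copy whose source and destination already hold the same value number is a no-op. A minimal sketch of that check, using hypothetical toy data structures (not LLVM's live-interval code):

    #include <cstdio>
    #include <map>
    #include <vector>

    struct Copy { int dst, src; }; // register ids

    int main() {
        // Register id -> value number it currently holds. Registers 1 (vreg)
        // and 2 (preg) both hold value #7 after the initial "vreg = preg".
        std::map<int, int> ValueNum = {{1, 7}, {2, 7}};

        // The copies the Alpha backend was emitting: "preg = vreg", repeatedly.
        std::vector<Copy> Copies = {{2, 1}, {2, 1}};

        for (const Copy &C : Copies) {
            if (ValueNum[C.dst] == ValueNum[C.src])
                std::printf("copy r%d = r%d is redundant; coalesce it\n", C.dst, C.src);
            else
                ValueNum[C.dst] = ValueNum[C.src]; // a real copy propagates the value
        }
        return 0;
    }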
- Mar 09, 2005
Chris Lattner authored
This allows the Alpha backend to compile:

    bool %test(uint %P) {
        %c = seteq uint %P, 0
        ret bool %c
    }

into:

    test:
        ldgp $29, 0($27)
        ZAP $16,240,$0
        CMPEQ $0,0,$0
        AND $0,1,$0
        ret $31,($26),1

instead of:

    test:
        ldgp $29, 0($27)
        ZAP $16,240,$0
        ldiq $1,0
        ZAP $1,240,$1
        CMPEQ $0,$1,$0
        AND $0,1,$0
        ret $31,($26),1

... and fixes PR534.
llvm-svn: 20534
- Mar 01, 2005
Alkis Evlogimenos authored
llvm-svn: 20382
- Feb 28, 2005
Chris Lattner authored
llvm-svn: 20375
- Feb 22, 2005
Chris Lattner authored
Changing 'op' here caused us to not enter the store into a map, causing reemission of the code!! In practice, a simple loop like this:

    no_exit:        ; preds = %no_exit, %entry
        %indvar = phi uint [ %indvar.next, %no_exit ], [ 0, %entry ]    ; <uint> [#uses=3]
        %tmp.4 = getelementptr "complex long double"* %P, uint %indvar, uint 0    ; <double*> [#uses=1]
        store double 0.000000e+00, double* %tmp.4
        %indvar.next = add uint %indvar, 1    ; <uint> [#uses=2]
        %exitcond = seteq uint %indvar.next, %N    ; <bool> [#uses=1]
        br bool %exitcond, label %return, label %no_exit

was being code gen'd to:

    .LBBtest_1:     # no_exit
        movl %edx, %esi
        shll $4, %esi
        movl $0, 4(%eax,%esi)
        movl $0, (%eax,%esi)
        incl %edx
        movl $0, (%eax,%esi)
        movl $0, 4(%eax,%esi)
        cmpl %ecx, %edx
        jne .LBBtest_1  # no_exit

Note that we are doing 4 32-bit stores instead of 2. Now we generate:

    .LBBtest_1:     # no_exit
        movl %edx, %esi
        incl %esi
        shll $4, %edx
        movl $0, (%eax,%edx)
        movl $0, 4(%eax,%edx)
        cmpl %ecx, %esi
        movl %esi, %edx
        jne .LBBtest_1  # no_exit

This is much happier, though it would be even better if the increment of ESI was scheduled after the compare :-/
llvm-svn: 20265
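The failure mode is the classic emit-once map bug. A minimal sketch under hypothetical names (this is not the actual instruction selector code): if an emitted node is never entered into the map, every later lookup misses and the store is emitted a second time, exactly like the doubled stores above.

    #include <cstdio>
    #include <map>
    #include <string>

    static std::map<std::string, int> Emitted; // node -> emitted instruction id
    static int NextId = 0;

    int emitOnce(const std::string &Node) {
        auto It = Emitted.find(Node);
        if (It != Emitted.end())
            return It->second;    // already emitted: reuse, don't re-emit
        int Id = NextId++;
        std::printf("emitting '%s' as #%d\n", Node.c_str(), Id);
        Emitted[Node] = Id;       // dropping this insertion is the bug class
        return Id;
    }

    int main() {
        emitOnce("store double 0.0, double* %tmp.4");
        emitOnce("store double 0.0, double* %tmp.4"); // hits the map: one emission
        return 0;
    }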
- Feb 17, 2005
Misha Brukman authored
llvm-svn: 20231
Chris Lattner authored
for 0.0 and -0.0.
llvm-svn: 20230
Chris Lattner authored
Don't sink argument loads into loops or other bad places. This disables folding of argument loads with instructions that are not in the entry block.
llvm-svn: 20228
- Feb 14, 2005
Chris Lattner authored
prints getelementptr (int* %A, int -1) as "(A) - 4" instead of "(A) + 18446744073709551612", which makes the assembler much happier. This fixes test/Regression/CodeGen/X86/2005-02-14-IllegalAssembler.ll, and Benchmarks/Prolangs-C/cdecl with LLC on X86.
llvm-svn: 20183
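The huge constant is just two's complement: -4 reinterpreted as an unsigned 64-bit value is 2^64 - 4 = 18446744073709551612. A minimal sketch of the signed printing, with a hypothetical helper (not the actual AsmPrinter code):

    #include <cstdint>
    #include <cstdio>

    // Print the byte offset of a getelementptr index as a signed quantity.
    void printGEPOffset(const char *Base, int64_t Index, int64_t EltSize) {
        int64_t Off = Index * EltSize; // -1 * sizeof(int) == -4
        if (Off < 0)
            std::printf("(%s) - %lld\n", Base, (long long)-Off);
        else
            std::printf("(%s) + %lld\n", Base, (long long)Off);
    }

    int main() {
        printGEPOffset("A", -1, 4); // "(A) - 4", not "+ 18446744073709551612"
        return 0;
    }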
- Feb 04, 2005
Chris Lattner authored
targets.
llvm-svn: 20030
Andrew Lenharth authored
llvm-svn: 20026
- Feb 02, 2005
Chris Lattner authored
llvm-svn: 19986
- Feb 01, 2005
Chris Lattner authored
llvm-svn: 19969
- Jan 30, 2005
Chris Lattner authored
llvm-svn: 19930
- Jan 29, 2005
Chris Lattner authored
llvm-svn: 19924
- Jan 28, 2005
Chris Lattner authored
llvm-svn: 19880
Chris Lattner authored
truncated, e.g. (truncate:i8 something:i16) on a 32 or 64-bit RISC.
llvm-svn: 19879
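For illustration only (an assumption about the lowering, not this commit's code): on such a machine an integer truncate is essentially free, because the narrow value already sits in the low bits of a full-width register and a mask recovers it exactly when needed.

    #include <cstdint>
    #include <cstdio>

    // X holds an i16 in a 32-bit register; (truncate:i8 something:i16)
    // reduces to keeping the low 8 bits.
    uint32_t truncI16ToI8(uint32_t X) {
        return X & 0xFF;
    }

    int main() {
        std::printf("0x%x\n", truncI16ToI8(0x1234)); // prints 0x34
        return 0;
    }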
Chris Lattner authored
llvm-svn: 19878
Chris Lattner authored
legalized, and actually return the correct result when we legalize the chain first.
llvm-svn: 19866
- Jan 24, 2005
Chris Lattner authored
llvm-svn: 19797
Chris Lattner authored
registers. This information is computed directly by the register allocator now.
llvm-svn: 19795
- Jan 23, 2005
Chris Lattner authored
llvm-svn: 19793
Chris Lattner authored
llvm-svn: 19792
Chris Lattner authored
llvm-svn: 19791
Chris Lattner authored
llvm-svn: 19789
Chris Lattner authored
llvm-svn: 19787
Chris Lattner authored
The first half of correct chain insertion for libcalls. This is not enough to fix Fhourstones yet, though.
llvm-svn: 19781
Chris Lattner authored
the new TLI that is available. Implement support for handling out-of-range shifts. This allows us to compile this code (a 64-bit rotate):

    unsigned long long f3(unsigned long long x) {
        return (x << 32) | (x >> (64-32));
    }

into this:

    f3:
        mov %EDX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%ESP + 8]
        ret

GCC produces this:

    $ gcc t.c -masm=intel -O3 -S -o - -fomit-frame-pointer
    ..
    f3:
        push %ebx
        mov %ebx, DWORD PTR [%esp+12]
        mov %ecx, DWORD PTR [%esp+8]
        mov %eax, %ebx
        mov %edx, %ecx
        pop %ebx
        ret

The Simple ISEL produces (eww gross):

    f3:
        sub %ESP, 4
        mov DWORD PTR [%ESP], %ESI
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %ECX, DWORD PTR [%ESP + 12]
        mov %EAX, 0
        mov %ESI, 0
        or %EAX, %ECX
        or %EDX, %ESI
        mov %ESI, DWORD PTR [%ESP]
        add %ESP, 4
        ret

llvm-svn: 19780
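The shift pair is the standard rotate idiom, which is why the 32-bit case reduces to loading the two halves into the result registers swapped. A sketch of the generalized, fully defined form (hypothetical helper name):

    #include <cstdint>
    #include <cstdio>

    // Masking keeps both shift amounts in [0, 63], so this is also
    // well defined for N == 0, unlike the raw (64 - N) shift.
    uint64_t rotl64(uint64_t X, unsigned N) {
        N &= 63;
        return (X << N) | (X >> ((64 - N) & 63));
    }

    int main() {
        // Rotating by 32 swaps the 32-bit halves, matching f3 above.
        uint64_t X = 0x0000000100000002ULL;
        std::printf("0x%llx\n", (unsigned long long)rotl64(X, 32)); // 0x200000001
        return 0;
    }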
Chris Lattner authored
llvm-svn: 19779
Chris Lattner authored
llvm-svn: 19763
- Jan 22, 2005
Chris Lattner authored
This fixes the return-address-not-being-saved problem in the Alpha backend.
llvm-svn: 19741
Chris Lattner authored
llvm-svn: 19739
Chris Lattner authored
llvm-svn: 19738
Chris Lattner authored
llvm-svn: 19737
Chris Lattner authored
llvm-svn: 19736
Chris Lattner authored
llvm-svn: 19735
- Jan 21, 2005
Chris Lattner authored
llvm-svn: 19727