- Feb 03, 2006
-
-
Chris Lattner authored
llvm-svn: 25918
-
Chris Lattner authored
this code:

        store [stack slot #0], R10
             = add R14, [stack slot #0]

The spiller didn't know that the store made the value of [stack slot #0] available in R10 *IF* the store came from a copy instruction with the store folded into it.

This patch teaches VirtRegMap to look at these stores and recognize the values they make available. In one case Evan provided, this code:

        divsd %XMM0, %XMM1
        movsd %XMM1, QWORD PTR [%ESP + 40]
   1)   movsd QWORD PTR [%ESP + 48], %XMM1
   2)   movsd %XMM1, QWORD PTR [%ESP + 48]
        addsd %XMM1, %XMM0
   3)   movsd QWORD PTR [%ESP + 48], %XMM1
        movsd QWORD PTR [%ESP + 4], %XMM0

turns into:

        divsd %XMM0, %XMM1
        movsd %XMM1, QWORD PTR [%ESP + 40]
        addsd %XMM1, %XMM0
   3)   movsd QWORD PTR [%ESP + 48], %XMM1
        movsd QWORD PTR [%ESP + 4], %XMM0

In this case, instruction #2 was removed because of the value made available by #1, and inst #1 was later deleted because it is now never used before the stack slot is redefined by #3.

This occurs here and there in a lot of code with high spilling: on PPC most of the removed loads/stores are LSU-reject-causing loads, which is nice. On X86, things are much better (because it spills more), where we nuke about 1% of the instructions from SMG2000 and several hundred from eon.

More improvements to come...

llvm-svn: 25917
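The bookkeeping described above can be pictured with a small standalone sketch: when a copy is folded into a store, remember which register still holds the stack slot's value, rewrite later reloads of that slot to use that register, and mark the store dead if the slot is redefined before any remaining load reads it. The types and helper names below are illustrative only (this is not the real VirtRegMap/spiller API), and the sketch ignores register clobbers for brevity.

    #include <cstdio>
    #include <map>
    #include <set>

    using Reg  = int;   // stand-in for a physical register number
    using Slot = int;   // stand-in for a stack slot / frame index

    struct AvailabilityMap {
      std::map<Slot, Reg> SlotToReg; // register known to hold each slot's value
      std::set<Slot> StoreRead;      // slots whose last store has been read from memory

      // A store of R into S (e.g. a copy folded into a store) makes the slot's
      // value available in R.
      void noteStore(Slot S, Reg R) { SlotToReg[S] = R; StoreRead.erase(S); }

      // A reload of S can be rewritten to use the register instead; the rewritten
      // instruction no longer reads memory, so this does NOT keep the store alive.
      bool tryReuseForReload(Slot S, Reg &R) const {
        auto It = SlotToReg.find(S);
        if (It == SlotToReg.end()) return false;
        R = It->second;
        return true;
      }

      // A load we could not rewrite really reads the slot and keeps the store alive.
      void noteRealLoad(Slot S) { if (SlotToReg.count(S)) StoreRead.insert(S); }

      // When S is redefined, the previous store was dead if nothing read it.
      bool noteRedefinition(Slot S) {
        bool PrevStoreDead = SlotToReg.count(S) && !StoreRead.count(S);
        SlotToReg.erase(S);
        StoreRead.erase(S);
        return PrevStoreDead;
      }
    };

    int main() {
      AvailabilityMap AM;
      AM.noteStore(48, 1);                 // 1) movsd [ESP+48], XMM1
      Reg R;
      if (AM.tryReuseForReload(48, R))     // 2) reload of [ESP+48] rewritten to use XMM1
        std::printf("reload of slot 48 replaced by use of reg %d\n", R);
      if (AM.noteRedefinition(48))         // 3) slot redefined before any real load
        std::printf("store 1) to slot 48 is dead and can be deleted\n");
    }

Run on the example above, the sketch reports exactly the two deletions the patch performs: reload 2) is replaced, and store 1) becomes dead once 3) redefines the slot.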
-
- Feb 02, 2006
-
-
Nate Begeman authored
llvm-svn: 25916
-
Chris Lattner authored
llvm-svn: 25915
-
Chris Lattner authored
llvm-svn: 25914
-
Chris Lattner authored
Move isLoadFromStackSlot/isStoreToStackSlot from MRegisterInfo to TargetInstrInfo, a far more logical place. Other methods should also be moved if anyone is interested. :)

llvm-svn: 25913
-
Chris Lattner authored
a far more logical place. Other methods should also be moved if anyone is interested. :)

llvm-svn: 25912
-
Chris Lattner authored
llvm-svn: 25911
-
Chris Lattner authored
llvm-svn: 25910
-
Chris Lattner authored
llvm-svn: 25909
-
Chris Lattner authored
llvm-svn: 25908
-
Chris Lattner authored
llvm-svn: 25907
-
Chris Lattner authored
llvm-svn: 25906
-
Chris Lattner authored
llvm-svn: 25905
-
Chris Lattner authored
llvm-svn: 25903
-
Nate Begeman authored
llvm-svn: 25902
-
Chris Lattner authored
and instruction. This allows us to compile stuff like this:

bool %X(int %X) {
        %Y = add int %X, 14
        %Z = setne int %Y, 12345
        ret bool %Z
}

to this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        ret

instead of this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

This occurs quite a bit with the X86 backend. For example, 25 times in lambda, 30 times in 177.mesa, 14 times in galgel, 70 times in fma3d, 25 times in vpr, several hundred times in gcc, ~45 times in crafty, ~60 times in parser, ~140 times in eon, 110 times in perlbmk, 55 on gap, 16 times on bzip2, 14 times on twolf, and 1-2 times in many other SPEC2K programs.

llvm-svn: 25901
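The reason the and can go away, in miniature: after setne + movzbl the value is known to have every bit above bit 0 clear, and an AND with 1 can only clear bits that are already zero, so it is a no-op. A toy known-zero-bits check of that argument (the helper names here are made up; the backend derives this information from its own analysis of the DAG):

    #include <cstdint>
    #include <cstdio>

    // setne produces 0 or 1; zero-extending it (movzbl) guarantees bits [31:1] are 0.
    uint32_t knownZeroBitsOfZextSetcc() { return 0xFFFFFFFEu; }

    // AND with `mask` only clears the bits in ~mask; if all of those bits are
    // already known to be zero, the AND cannot change the value.
    bool andIsRedundant(uint32_t mask, uint32_t knownZero) {
      return (~mask & ~knownZero) == 0;
    }

    int main() {
      std::printf("andl $1 redundant after setne+movzbl: %s\n",
                  andIsRedundant(1u, knownZeroBitsOfZextSetcc()) ? "yes" : "no");
    }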
-
Chris Lattner authored
llvm-svn: 25900
-
Chris Lattner authored
llvm-svn: 25899
-
Chris Lattner authored
(C1-X) == C2 --> X == C1-C2
(X+C1) == C2 --> X == C2-C1

This allows us to compile this:

bool %X(int %X) {
        %Y = add int %X, 14
        %Z = setne int %Y, 12345
        ret bool %Z
}

into this:

_X:
        cmpl $12331, 4(%esp)
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

not this:

_X:
        movl $14, %eax
        addl 4(%esp), %eax
        cmpl $12345, %eax
        setne %al
        movzbl %al, %eax
        andl $1, %eax
        ret

Testcase here: Regression/CodeGen/X86/compare-add.ll

nukage of the and coming up next.

llvm-svn: 25898
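The fold itself is just invertible modular arithmetic: adding a constant is a bijection on machine integers, so X + C1 == C2 holds exactly when X == C2 - C1 (and C1 - X == C2 exactly when X == C1 - C2), which is why the folded code above compares against 12345 - 14 = 12331. A throwaway check of the identity on a few values, not compiler code:

    #include <cstdint>
    #include <cstdio>

    int main() {
      const uint32_t C1 = 14, C2 = 12345;
      // Unsigned arithmetic so wrap-around is well defined; the equivalence holds
      // for every X because adding C1 is invertible modulo 2^32.
      for (uint32_t X : {0u, 12331u, 12345u, 0xFFFFFFF0u}) {
        bool original = (X + C1) == C2;   // (X + C1) == C2
        bool folded   = X == (C2 - C1);   // X == C2 - C1
        std::printf("X=%u  original=%d folded=%d\n", (unsigned)X, original, folded);
      }
    }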
-
Chris Lattner authored
llvm-svn: 25897
-
Evan Cheng authored
llvm-svn: 25896
-
Chris Lattner authored
llvm-svn: 25895
-
Evan Cheng authored
llvm-svn: 25894
-
Chris Lattner authored
%C = call int asm "xyz $0, $1, $2, $3", "=r,r,r,0"(int %A, int %B, int 4)

and get:

        xyz r2, r3, r4, r2

note that the r2's are pinned together. Yaay for 2-address instructions.

llvm-svn: 25893
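The "0" at the end of the constraint string is a matching constraint: that input must live in whatever register the allocator picks for output operand 0, which is how a 2-address instruction is modelled and why both r2's above are pinned together. The same idea exists at the C/C++ level in GCC/Clang extended inline asm; a rough x86-only analogue (using real add instructions, since "xyz" is a made-up opcode):

    #include <cstdio>

    // C++ with GNU extended asm; assumes an x86 target and GCC or Clang.
    int xyz_like(int a, int b) {
      int c = 4;                        // third input, tied to the output below
      asm("addl %1, %0\n\t"             // stand-in for the "xyz" instruction
          "addl %2, %0"
          : "=r"(c)                     // operand 0: the output register
          : "r"(a), "r"(b), "0"(c));    // "0" pins this input to operand 0's register
      return c;                         // computes a + b + 4
    }

    int main() { std::printf("%d\n", xyz_like(3, 5)); }   // prints 12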
-
Chris Lattner authored
llvm-svn: 25892
-
Chris Lattner authored
llvm-svn: 25891
-
Chris Lattner authored
llvm-svn: 25890
-
Evan Cheng authored
llvm-svn: 25889
-
Evan Cheng authored
llvm-svn: 25888
-
Evan Cheng authored
llvm-svn: 25887
-
- Feb 01, 2006
-
-
Chris Lattner authored
substituted operands. For this testcase:

int %test(int %A, int %B) {
        %C = call int asm "xyz $0, $1, $2", "=r,r,r"(int %A, int %B)
        ret int %C
}

we now emit:

_test:
        or r2, r3, r3
        or r3, r4, r4
        xyz r2, r2, r3     ;; look here
        or r3, r2, r2
        blr

... note the substituted operands. :)

llvm-svn: 25886
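Behind the "substituted operands" is plain placeholder replacement: the INLINEASM string keeps its $0, $1, ... markers, and the asm printer swaps each one for the register assigned to the corresponding operand. A standalone illustration of that step only (not the actual PPC asm printer code, which also has to handle multi-digit and modified operands):

    #include <cctype>
    #include <cstdio>
    #include <string>
    #include <vector>

    // Replace single-digit $N placeholders with the register assigned to operand N.
    std::string printInlineAsm(const std::string &Template,
                               const std::vector<std::string> &Regs) {
      std::string Out;
      for (size_t i = 0; i < Template.size(); ++i) {
        if (Template[i] == '$' && i + 1 < Template.size() &&
            std::isdigit((unsigned char)Template[i + 1]))
          Out += Regs[Template[++i] - '0'];
        else
          Out += Template[i];
      }
      return Out;
    }

    int main() {
      // $0 (the result) and $1 were both assigned r2; $2 got r3 -- matching the
      // "xyz r2, r2, r3" line in the output above.
      std::printf("%s\n",
                  printInlineAsm("xyz $0, $1, $2", {"r2", "r2", "r3"}).c_str());
    }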
-
Chris Lattner authored
llvm-svn: 25885
-
Chris Lattner authored
llvm-svn: 25884
-
Chris Lattner authored
llvm-svn: 25883
-
Andrew Lenharth authored
llvm-svn: 25882
-
Andrew Lenharth authored
llvm-svn: 25881
-
Chris Lattner authored
llvm-svn: 25880
-
Nate Begeman authored
llvm-svn: 25879
-
Chris Lattner authored
int %test(int %A, int %B) {
        %C = call int asm "xyz $0, $1, $2", "=r,r,r"(int %A, int %B)
        ret int %C
}

into:

(0x8906130, LLVM BB @0x8902220):
        %r2 = OR4 %r3, %r3
        %r3 = OR4 %r4, %r4
        INLINEASM <es:xyz $0, $1, $2>, %r2<def>, %r2, %r3
        %r3 = OR4 %r2, %r2
        BLR

which asmprints as:

_test:
        or r2, r3, r3
        or r3, r4, r4
        xyz $0, $1, $2     ;; need to print the operands now :)
        or r3, r2, r2
        blr

llvm-svn: 25878
-