- Feb 25, 2004
- Brian Gaeke authored
  llvm-svn: 11828
- Brian Gaeke authored
  llvm-svn: 11827
- Brian Gaeke authored
  llvm-svn: 11826
- Chris Lattner authored
  llvm-svn: 11824
- John Criswell authored
  llvm-svn: 11823
- Chris Lattner authored
  a bunch of stuff! :)
  llvm-svn: 11822
- Chris Lattner authored
  llvm-svn: 11821
- Chris Lattner authored
  scaled indexes. This allows us to compile GEPs like this:

      int* %test([10 x { int, { int } }]* %X, int %Idx) {
        %Idx = cast int %Idx to long
        %X = getelementptr [10 x { int, { int } }]* %X, long 0, long %Idx, ubyte 1, ubyte 0
        ret int* %X
      }

  into a single address computation:

      test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        lea %EAX, DWORD PTR [%EAX + 8*%ECX + 4]
        ret

  Before, it generated:

      test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        shl %ECX, 3
        add %EAX, %ECX
        lea %EAX, DWORD PTR [%EAX + 4]
        ret

  This is useful for things like int/float/double arrays, as the indexing can be folded into the loads and stores, reducing register pressure and decreasing the pressure on the decode unit. With these changes, I expect our performance on 256.bzip2 and gzip to improve a lot. On bzip2, for example, we go from this:

      10665 asm-printer           - Number of machine instrs printed
         40 ra-local              - Number of loads/stores folded into instructions
       1708 ra-local              - Number of loads added
       1532 ra-local              - Number of stores added
       1354 twoaddressinstruction - Number of instructions added
       1354 twoaddressinstruction - Number of two-address instructions
       2794 x86-peephole          - Number of peephole optimizations performed

  to this:

       9873 asm-printer           - Number of machine instrs printed
         41 ra-local              - Number of loads/stores folded into instructions
       1710 ra-local              - Number of loads added
       1521 ra-local              - Number of stores added
        789 twoaddressinstruction - Number of instructions added
        789 twoaddressinstruction - Number of two-address instructions
       2142 x86-peephole          - Number of peephole optimizations performed

  ... and these types of instructions are often in tight loops. Linear scan is also helped, but not as much. It goes from:

       8787 asm-printer           - Number of machine instrs printed
       2389 liveintervals         - Number of identity moves eliminated after coalescing
       2288 liveintervals         - Number of interval joins performed
       3522 liveintervals         - Number of intervals after coalescing
       5810 liveintervals         - Number of original intervals
        700 spiller               - Number of loads added
        487 spiller               - Number of stores added
        303 spiller               - Number of register spills
       1354 twoaddressinstruction - Number of instructions added
       1354 twoaddressinstruction - Number of two-address instructions
        363 x86-peephole          - Number of peephole optimizations performed

  to:

       7982 asm-printer           - Number of machine instrs printed
       1759 liveintervals         - Number of identity moves eliminated after coalescing
       1658 liveintervals         - Number of interval joins performed
       3282 liveintervals         - Number of intervals after coalescing
       4940 liveintervals         - Number of original intervals
        635 spiller               - Number of loads added
        452 spiller               - Number of stores added
        288 spiller               - Number of register spills
        789 twoaddressinstruction - Number of instructions added
        789 twoaddressinstruction - Number of two-address instructions
        258 x86-peephole          - Number of peephole optimizations performed

  Though I'm not complaining about the drop in the number of intervals. :)
  llvm-svn: 11820
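  The folded form is just the standard base + scale*index + displacement addressing arithmetic. Here is a minimal C++ sketch of the same computation, assuming 4-byte ints and no padding; the struct and function names are hypothetical, chosen only to mirror the GEP above:

      #include <cstdint>
      #include <cstdio>

      struct Inner { int x; };          // the nested { int }
      struct Elem  { int a; Inner b; }; // one element of [10 x { int, { int } }]; sizeof == 8 assumed

      int* test(Elem (&X)[10], int Idx) {
          return &X[Idx].b.x;           // the address the GEP computes
      }

      int main() {
          Elem X[10] = {};
          int Idx = 3;
          // The single LEA encodes exactly base + 8*Idx + 4.
          uintptr_t addr = reinterpret_cast<uintptr_t>(&X)
                         + 8 * static_cast<uintptr_t>(Idx) + 4;
          std::printf("%d\n", addr == reinterpret_cast<uintptr_t>(test(X, Idx))); // prints 1
      }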
- Chris Lattner authored
  to do analysis. *** FOLD getelementptr instructions into loads and stores when possible, making use of some of the crazy X86 addressing modes. For example, the following C++ program fragment:

      struct complex {
        double re, im;
        complex(double r, double i) : re(r), im(i) {}
      };
      inline complex operator+(const complex& a, const complex& b) {
        return complex(a.re+b.re, a.im+b.im);
      }
      complex addone(const complex& arg) {
        return arg + complex(1, 0);
      }

  used to be compiled to:

      _Z6addoneRK7complex:
            mov %EAX, DWORD PTR [%ESP + 4]
            mov %ECX, DWORD PTR [%ESP + 8]
      ***   mov %EDX, %ECX
            fld QWORD PTR [%EDX]
            fld1
            faddp %ST(1)
      ***   add %ECX, 8
            fld QWORD PTR [%ECX]
            fldz
            faddp %ST(1)
      ***   mov %ECX, %EAX
            fxch %ST(1)
            fstp QWORD PTR [%ECX]
      ***   add %EAX, 8
            fstp QWORD PTR [%EAX]
            ret

  Now it is compiled to:

      _Z6addoneRK7complex:
            mov %EAX, DWORD PTR [%ESP + 4]
            mov %ECX, DWORD PTR [%ESP + 8]
            fld QWORD PTR [%ECX]
            fld1
            faddp %ST(1)
            fld QWORD PTR [%ECX + 8]
            fldz
            faddp %ST(1)
            fxch %ST(1)
            fstp QWORD PTR [%EAX]
            fstp QWORD PTR [%EAX + 8]
            ret

  (The lines marked *** are the ones eliminated.) Other programs should see similar improvements across the board. Note that in addition to reducing instruction count, this also reduces register pressure a lot, always a good thing on X86. :)
  llvm-svn: 11819
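  For reference, the folded operands [%ECX] and [%ECX + 8] correspond to the field offsets of the struct. A quick sanity check, as a sketch assuming 8-byte doubles and no padding (the constructor is dropped so offsetof is well-defined):

      #include <cstddef>
      #include <cstdio>

      struct complex { double re, im; };  // constructor omitted for offsetof

      int main() {
          // re at offset 0 -> QWORD PTR [%ECX]; im at offset 8 -> QWORD PTR [%ECX + 8]
          std::printf("re: %zu, im: %zu\n",
                      offsetof(complex, re), offsetof(complex, im));
      }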
- Chris Lattner authored
  llvm-svn: 11818
- Chris Lattner authored
  into a single LEA instruction. This should improve the code generated for things like X->A.B.C[12].D. The bigger benefit is still coming, though. Note that this uses an LEA instruction instead of an add, giving the register allocator more freedom. We should probably never generate ADDri32s.
  llvm-svn: 11817
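  To see why a single LEA suffices for X->A.B.C[12].D: every step of the path contributes a compile-time constant, so the whole chain collapses to base plus one displacement. A sketch with made-up nesting (all type and field names here are hypothetical):

      #include <cstddef>
      #include <cstdio>

      struct D3 { int pad; int D; };
      struct C2 { D3 C[16]; };
      struct B1 { int pad; C2 B; };
      struct A0 { int pad[2]; B1 A; };

      int main() {
          // Every term is a constant, so the sum is one LEA displacement.
          std::size_t off = offsetof(A0, A) + offsetof(B1, B)
                          + 12 * sizeof(D3) + offsetof(D3, D);
          std::printf("total constant offset: %zu\n", off);
      }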
- Chris Lattner authored
  an intermediate register.
  llvm-svn: 11816
- Brian Gaeke authored
  so that we always get the inline function instead. Remember, kids, like it says in the GCC manual, "An Inline Function is As Fast As a Macro."
  llvm-svn: 11815
- Feb 24, 2004
- Brian Gaeke authored
  llvm-svn: 11814
- Chris Lattner authored
  llvm-svn: 11813
- Chris Lattner authored
  llvm-svn: 11811
- Chris Lattner authored
  Also fix a problem where we didn't check whether a node pointer was null. Though fclose(null) doesn't make a lot of sense, 300.twolf does it.
  llvm-svn: 11810
- John Criswell authored
  llvm-svn: 11809
- Brian Gaeke authored
  llvm-svn: 11804
- Chris Lattner authored
  longer was getting this #include, it always fell back on the less precise floating point initializer values, causing some testsuite failures.
  llvm-svn: 11803
- Chris Lattner authored
  llvm-svn: 11801
- John Criswell authored
  llvm-svn: 11800
- Chris Lattner authored
  llvm-svn: 11799
- Alkis Evlogimenos authored
  allocator. The implementation is completely rewritten and now employs several optimizations that were not exercised before. For example, on 164.gzip we now have 997 loads and 699 stores, versus the 1221 loads and 880 stores we had before.
  llvm-svn: 11798
- Chris Lattner authored
  This case occurs many times in various benchmarks, especially when combined with the previous patch. This allows it to get stuff like:

      if (X == 4 || X == 3)
        if (X == 5 || X == 8)

  and:

      switch (X) {
        case 4: case 5: case 6:
          if (X == 4 || X == 5)

  llvm-svn: 11797
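  The payoff in the switch case comes from value-range knowledge: inside the case 4/5/6 block, X is known to be one of those three values, so the nested comparison can be simplified or resolved outright. A hand-written C++ illustration of the idea (not actual LLVM output):

      int f(int X) {
          switch (X) {
          case 4: case 5: case 6:
              // X is in {4, 5, 6} here, so (X == 4 || X == 5) is just (X != 6).
              if (X == 4 || X == 5)
                  return 1;
              return 2;
          default:
              return 0;
          }
      }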
- Chris Lattner authored
  merged.
  llvm-svn: 11796
- Alkis Evlogimenos authored
  register mapping or a stack slot mapping.
  llvm-svn: 11795
- Chris Lattner authored
  llvm-svn: 11794
- Chris Lattner authored
  llvm-svn: 11793
- Chris Lattner authored
  This turns code like this:

      if (X == 4 | X == 7)

  and

      if (X != 4 & X != 7)

  into switch instructions.
  llvm-svn: 11792
- Chris Lattner authored
  if (X == 4 || X == 7) and if (X != 4 && X != 7) into switch instructions.
  llvm-svn: 11791
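  For a concrete picture of the transformation, here is a hand-written before/after pair in C++; it is illustrative only, since the actual rewrite happens on LLVM code rather than source:

      int before(int X) {
          if (X == 4 || X == 7)   // two compares, two branches
              return 1;
          return 0;
      }

      int after(int X) {          // the shape the optimizer effectively builds
          switch (X) {
          case 4:
          case 7:
              return 1;
          default:
              return 0;
          }
      }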
- Chris Lattner authored
  llvm-svn: 11790
- Chris Lattner authored
  llvm-svn: 11789
- Chris Lattner authored
  llvm-svn: 11788
- Chris Lattner authored
  llvm-svn: 11787
- Chris Lattner authored
  llvm-svn: 11786
- Chris Lattner authored
  remove our dependency on boost! Thanks to Reid Spencer for making this possible!
  llvm-svn: 11785
- Chris Lattner authored
  template. Thanks go out to Reid Spencer for skillfully extracting this from boost!
  llvm-svn: 11784
- Chris Lattner authored
  llvm-svn: 11783
- Alkis Evlogimenos authored
  llvm-svn: 11782