- Feb 08, 2007
Evan Cheng authored
The code sequence before the spiller is something like:

    = tMOVrr
    %reg1117 = tMOVrr
    %reg1078 = tLSLri %reg1117, 2

Then it starts spilling:

    %r0 = tRestore <fi#5>, 0
    %r1 = tRestore <fi#7>, 0
    %r1 = tMOVrr %r1<kill>
    tSpill %r1, <fi#5>, 0
    %reg1078 = tLSLri %reg1117, 2

It restores the value while processing the first tMOVrr. At this point, the spiller remembers fi#5 is available in %r0. Next it processes the second move. It restores the source before the move and spills the result afterwards. The move becomes a noop and is deleted. However, a spill has been inserted, and that should invalidate reuse of %r0 for fi#5 and add reuse of %r1 for fi#5. Therefore, %reg1117 (which is also assigned fi#5) should get %r1, not %r0.

llvm-svn: 34039
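To make the bookkeeping concrete, here is a minimal sketch of a stack-slot reuse map and the invalidation an inserted spill must perform. The types and names are hypothetical, not the actual LLVM spiller structures:

    // Sketch only: maps each frame index to the physreg currently holding it.
    #include <map>

    struct SpillReuseMap {
      std::map<int, unsigned> SlotToReg; // frame index -> physical register

      // A value was reloaded from Slot into Reg: remember it for reuse.
      void addAvailable(int Slot, unsigned Reg) { SlotToReg[Slot] = Reg; }

      // A spill of Reg to Slot was inserted: whatever register was
      // previously remembered for Slot is stale; Reg is the new home.
      void noteSpill(int Slot, unsigned Reg) {
        SlotToReg.erase(Slot); // invalidate reuse of the old register
        SlotToReg[Slot] = Reg; // e.g. fi#5 now lives in %r1, not %r0
      }
    };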
Bill Wendling authored
do some common stuff, then on our own add an object file writer (by calling a concrete function), and then do some finishing stuff, if need be. llvm-svn: 34032
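The pipeline shape described here is the classic template-method pattern. A minimal sketch, with hypothetical class and hook names rather than LLVM's actual interface:

    // Common steps live in the base class; each target plugs in its own
    // object file writer by overriding one hook. Names are illustrative.
    class FileEmitter {
    public:
      void emitFile() {
        doCommonSetup();    // the common stuff shared by all targets
        addObjectWriter();  // target-specific: calls a concrete writer
        doFinishingWork();  // the finishing stuff, if need be
      }
      virtual ~FileEmitter() = default;

    protected:
      virtual void addObjectWriter() = 0; // supplied by each target
      void doCommonSetup() {}
      void doFinishingWork() {}
    };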
Bill Wendling authored
llvm-svn: 34031
Bill Wendling authored
llvm-svn: 34027
- Feb 07, 2007
Nate Begeman authored
based on the alignment of the symbol and the target data's preferred align for that type. Also, rename some arguments for consistency. llvm-svn: 33984
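One plausible reading of that policy, sketched below; the max rule and the function name are assumptions, not taken from the commit:

    #include <algorithm>

    // Hypothetical sketch: honor an explicit symbol alignment, but never
    // go below the target data's preferred alignment for the type.
    unsigned pickAlignment(unsigned symbolAlign, unsigned preferredTypeAlign) {
      return std::max(symbolAlign, preferredTypeAlign);
    }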
- Feb 06, 2007
Chris Lattner authored
1. Memset takes an i32 for the value to set, not i8. This was causing GCC to ICE all over the place (PR1183).
2. memcpy/memmove were not properly zext/trunc'ing the size in some cases.

llvm-svn: 33970
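In C terms, the two points look like the sketch below. This is illustrative only; the commit itself fixes the SelectionDAG lowering, not user code:

    #include <cstring>
    #include <cstdint>

    // 1. memset's fill-value parameter is an int (i32), not a char (i8),
    //    so an i8 value must be extended before the call.
    // 2. The length must be zext/trunc'd to the width memset expects.
    void lowerMemset(void *dst, std::uint8_t value, std::uint64_t bytes) {
      int v = static_cast<int>(value);                  // i8 -> i32
      std::size_t n = static_cast<std::size_t>(bytes);  // size to size_t
      std::memset(dst, v, n);
    }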
Chris Lattner authored
llvm-svn: 33957
Chris Lattner authored
llvm-svn: 33946
- Feb 05, 2007
Chris Lattner authored
llvm-svn: 33924
Anton Korobeynikov authored
llvm-svn: 33888
- Feb 04, 2007
Chris Lattner authored
speeds up the isel pass from 2.5570s to 2.4722s on kc++ (3.4%). llvm-svn: 33879
Chris Lattner authored
their operands with the node itself. This reduces malloc traffic for operand lists, and cuts isel time on kc++ from 2.6164s to 2.5570s, about 2.3%. llvm-svn: 33878
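The co-allocation trick can be sketched as follows; this is a simplified, hypothetical layout, not the real SDNode:

    #include <cstdlib>
    #include <new>

    // Sketch: place the operand array in the same malloc block as the
    // node, so the operand list needs no separate allocation (or free).
    struct Node {
      unsigned NumOperands = 0;
      Node **Operands = nullptr; // points just past the node itself

      static Node *create(unsigned numOps) {
        void *mem = std::malloc(sizeof(Node) + numOps * sizeof(Node *));
        Node *n = new (mem) Node();
        n->NumOperands = numOps;
        n->Operands = reinterpret_cast<Node **>(n + 1);
        return n;
      }
      static void destroy(Node *n) { std::free(n); } // one free, too
    };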
Chris Lattner authored
llvm-svn: 33876
Chris Lattner authored
llvm-svn: 33875
Chris Lattner authored
no behavior or performance change here. llvm-svn: 33869
Chris Lattner authored
llvm-svn: 33868
Chris Lattner authored
llvm-svn: 33867
Chris Lattner authored
llvm-svn: 33866
Chris Lattner authored
llvm-svn: 33863
Chris Lattner authored
llvm-svn: 33862
Chris Lattner authored
llvm-svn: 33861
Chris Lattner authored
aren't worth it. llvm-svn: 33860
Chris Lattner authored
time as a whole on kc++ by 11%. llvm-svn: 33857
Chris Lattner authored
up isel on kimwitu by 0.7%. llvm-svn: 33853
Chris Lattner authored
llvm-svn: 33852
Chris Lattner authored
the users set (most nodes have 1 or 2 users). This speeds up the isel pass 3.2% on kimwitu. llvm-svn: 33849
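Since most nodes have only one or two users, a set with a small inline buffer avoids heap allocation in the common case. A simplified sketch of the idea, not LLVM's actual container:

    #include <algorithm>
    #include <vector>

    // The first N users live inline; only nodes with many users spill
    // into the heap-allocated overflow vector.
    template <typename T, unsigned N = 2>
    class SmallUserSet {
      T Inline[N] = {};
      unsigned InlineSize = 0;
      std::vector<T> Overflow;

    public:
      bool contains(const T &V) const {
        return std::find(Inline, Inline + InlineSize, V) != Inline + InlineSize ||
               std::find(Overflow.begin(), Overflow.end(), V) != Overflow.end();
      }
      void insert(const T &V) {
        if (contains(V)) return;
        if (InlineSize < N) Inline[InlineSize++] = V;
        else Overflow.push_back(V);
      }
    };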
- Feb 03, 2007
Bill Wendling authored
llvm-svn: 33816
Chris Lattner authored
speeds up isel as a whole time by 2.6%. llvm-svn: 33810
Chris Lattner authored
isel as a whole by 3.3%. llvm-svn: 33809
Evan Cheng authored
llvm-svn: 33806
- Feb 02, 2007
Reid Spencer authored
llvm-svn: 33784
Reid Spencer authored
This feature is needed in order to support shifts of more than 255 bits on large integer types. This changes the syntax for llvm assembly to make shl, ashr and lshr instructions look like a binary operator:

    shl i32 %X, 1

instead of:

    shl i32 %X, i8 1

Additionally, this should help a few passes perform additional optimizations.

llvm-svn: 33776
- Feb 01, 2007
Jim Laskey authored
llvm-svn: 33758
Jim Laskey authored
llvm-svn: 33757
Jim Laskey authored
llvm-svn: 33755
Anton Korobeynikov authored
affected part is codegen of "memmove" inside the x86 backend. This fixes PR1144. llvm-svn: 33752
Chris Lattner authored
llvm-svn: 33749
Chris Lattner authored
llvm-svn: 33745
Chris Lattner authored
llvm-svn: 33736
- Jan 31, 2007
Evan Cheng authored
result of the comparison libcall against zero. llvm-svn: 33701
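For context: on targets without FP hardware, an FP comparison is lowered to a runtime library call that returns an integer, and the branch then tests that integer against zero. A hedged illustration using the conventional libgcc routine; the truncated message does not say which libcall this commit touches:

    // __ltdf2 is the conventional libgcc soft-float comparison routine;
    // it returns a value < 0 iff a < b (when neither operand is NaN).
    extern "C" int __ltdf2(double a, double b);

    bool lessThan(double a, double b) {
      return __ltdf2(a, b) < 0; // compare the libcall result against zero
    }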