- Oct 10, 2005
  - Chris Lattner authored
    llvm-svn: 23678
  - Chris Lattner authored
    llvm-svn: 23677
- Oct 09, 2005
  - Andrew Lenharth authored
    This seems useful from the original patch that added the function. If there is a reason it is not useful on a RISC-type target, let me know and I will pull it out.
    llvm-svn: 23676
  - Chris Lattner authored
    llvm-svn: 23674
  - Chris Lattner authored
    llvm-svn: 23673
  - Chris Lattner authored
    IV strides depended on the pointer order of the strides in memory. Non-determinism is bad.
    llvm-svn: 23672
  - Chris Lattner authored
    creating a new vreg and inserting a copy: just use the input vreg directly. This speeds up the compile (e.g. about 5% on mesa with a debug build of llc) by not adding a bunch of copies and vregs to be coalesced away. On mesa, for example, this reduces the number of intervals from 168601 to 129040 going into the coalescer.
    llvm-svn: 23671
  - Chris Lattner authored
    the 177.mesa failure from last night, and fixes the CodeGen/PowerPC/2005-10-08-ArithmeticRotate.ll regression test I added. If this code cannot be fixed, it should be removed for good, but I'll leave it to Nate to decide its fate.
    llvm-svn: 23670
  - Chris Lattner authored
    llvm-svn: 23669
- Oct 08, 2005
  - Nate Begeman authored
    merge, and using subtarget info for ptr size.
    llvm-svn: 23668
  - Chris Lattner authored
    llvm-svn: 23667
  - Nate Begeman authored
    llvm-svn: 23666
  - Nate Begeman authored
    llvm-svn: 23665
  - Chris Lattner authored
    is faster and uses less stack space. This reduces our stack requirement enough to compile sixtrack, and though it's a hack, should be enough until we switch to iterative isel.
    llvm-svn: 23664
- Oct 07, 2005
  - Chris Lattner authored
    llvm-svn: 23663
  - Chris Lattner authored
    C-X's
    llvm-svn: 23662
  - Chris Lattner authored
    llvm-svn: 23661
  - Chris Lattner authored
    llvm-svn: 23660
  - Chris Lattner authored
    implements CodeGen/PowerPC/div-2.ll
    llvm-svn: 23659
  - Chris Lattner authored
    llvm-svn: 23658
  - Jeff Cohen authored
    llvm-svn: 23657
  - Jeff Cohen authored
    llvm-svn: 23656
  - Chris Lattner authored
    this out to me
    llvm-svn: 23655
  - Chris Lattner authored
    classes on PPC. We were emitting fmr instructions to do fp extensions, which weren't getting coalesced. This fixes Regression/CodeGen/PowerPC/fpcopy.ll
    llvm-svn: 23654
  - Chris Lattner authored
    llvm-svn: 23653
  - Chris Lattner authored
    llvm-svn: 23652
  - Chris Lattner authored
    llvm-svn: 23651
- Oct 06, 2005
  - Chris Lattner authored
    llvm-svn: 23650
  - Chris Lattner authored
    llvm-svn: 23649
  - Chris Lattner authored
    llvm-svn: 23648
  - Chris Lattner authored
    helps but not enough. Start pulling cases out of PPC32DAGToDAGISel::Select. With GCC 4, this function required 8512 bytes of stack space for each invocation (GCC 3 required less than 700 bytes). Pulling this first function out gets us down to 8224. More to come :(
    llvm-svn: 23647
  - Chris Lattner authored
    llvm-svn: 23646
  - Chris Lattner authored
    previous copy elisions and we discover we need to reload a register, make sure to use the regclass of the original register for the reload, not the class of the current register. This avoids using 16-bit loads to reload 32-bit values.
    llvm-svn: 23645
  - Andrew Lenharth authored
    llvm-svn: 23644
  - Andrew Lenharth authored
    llvm-svn: 23643
  - Chris Lattner authored
    llvm-svn: 23642
- Oct 05, 2005
  - Nate Begeman authored
    llvm-svn: 23641
  - Nate Begeman authored
    llvm-svn: 23640
  - Nate Begeman authored
    llvm-svn: 23639
  - Chris Lattner authored
        store r12 -> [ss#2]
        R3 = load [ss#1]
        use R3
        R3 = load [ss#2]
        R4 = load [ss#1]
    and turn it into this code:
        store R12 -> [ss#2]
        R3 = load [ss#1]
        use R3
        R3 = R12
        R4 = R3    <- oops!
    The problem was that promoting R3 = load [ss#2] to a copy missed the fact that the instruction invalidated R3 at that point.
    llvm-svn: 23638