- Oct 12, 2005
-
Jim Laskey authored
llvm-svn: 23700
-
- Oct 11, 2005
-
Chris Lattner authored
llvm-svn: 23694
-
Chris Lattner authored
llvm-svn: 23693
-
Chris Lattner authored
llvm-svn: 23692
-
Chris Lattner authored
location, replace them with a new store of the last value. This occurs in the same neighborhood in 197.parser, speeding it up about 1.5% llvm-svn: 23691
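The rewrite described above (two stores to the same location with nothing in between, so only the last value matters) can be sketched as a toy pass over a flat instruction list. The tuple encoding and function name here are illustrative only, not LLVM's actual representation:

```python
# Toy dead-store elimination: when two consecutive stores hit the same
# address with no intervening read of it, only the last store is kept.
# Instructions are (op, addr[, value]) tuples; purely illustrative.

def eliminate_dead_stores(insts):
    out = []
    for inst in insts:
        op, addr = inst[0], inst[1]
        if op == "store" and out and out[-1][0] == "store" and out[-1][1] == addr:
            out.pop()  # the previous store to this address is dead
        out.append(inst)
    return out

prog = [("store", "p", 1), ("store", "p", 2), ("load", "p"), ("store", "q", 3)]
print(eliminate_dead_stores(prog))
# -> [('store', 'p', 2), ('load', 'p'), ('store', 'q', 3)]
```

Note the store to `q` is untouched because it is the only store to that address.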
-
Chris Lattner authored
multiple results. Use this support to implement trivial store->load forwarding, implementing CodeGen/PowerPC/store-load-fwd.ll. Though this is the simplest case and can be extended in the future, it is still useful. For example, it speeds up 197.parser by 6.2% by avoiding an LSU reject in xalloc:

        stw r6, lo16(l5_end_of_array)(r2)
        addi r2, r5, -4
        stwx r5, r4, r2
-       lwzx r5, r4, r2
-       rlwinm r5, r5, 0, 0, 30
        stwx r5, r4, r2
        lwz r2, -4(r4)
        ori r2, r2, 1

llvm-svn: 23690
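The forwarding itself is easy to sketch outside of SelectionDAG: a load from an address whose most recent visible write is a store can simply reuse the stored value, which is roughly what makes the marked load above unnecessary. A minimal toy with made-up tuple instructions (not the real DAG combiner API):

```python
# Toy store->load forwarding: a load from an address we just stored to
# is replaced by a copy of the stored value. Illustrative only; the
# real transformation runs on SelectionDAG nodes via CombineTo.

def forward_stores(insts):
    last_store = {}  # addr -> value most recently stored to it
    out = []
    for inst in insts:
        if inst[0] == "store":
            _, addr, val = inst
            last_store[addr] = val
            out.append(inst)
        else:  # ("load", dst, addr)
            _, dst, addr = inst
            if addr in last_store:
                out.append(("copy", dst, last_store[addr]))  # forwarded
            else:
                out.append(inst)
    return out

prog = [("store", "a", "r5"), ("load", "r6", "a"), ("load", "r7", "b")]
print(forward_stores(prog))
# -> [('store', 'a', 'r5'), ('copy', 'r6', 'r5'), ('load', 'r7', 'b')]
```

A real implementation must also prove no intervening store can alias the address; this toy assumes distinct address names never alias.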
-
- Oct 10, 2005
-
Nate Begeman authored
sext_inreg into zext_inreg based on the signbit (fires a lot), srem into urem, etc. llvm-svn: 23688
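The srem-to-urem rewrite is valid because, whenever both operands' sign bits are known zero, signed and unsigned remainder agree. A brute-force check of that claim over 8-bit values (illustrative Python, not LLVM code):

```python
# If the sign bits of both operands are known zero, truncated signed
# remainder (srem) equals unsigned remainder (urem), so the cheaper
# unsigned form can be substituted. Exhaustive 8-bit demonstration.

BITS = 8

def srem(a, b):
    q = int(a / b)  # C-style division truncates toward zero
    return a - q * b

def urem(a, b):
    return a % b  # operands are non-negative here

# Every pair with bit 7 (the sign bit) clear gives identical results.
for a in range(0, 1 << (BITS - 1)):
    for b in range(1, 1 << (BITS - 1)):
        assert srem(a, b) == urem(a, b)
print("srem == urem whenever both sign bits are known zero")
```

The sext_inreg-to-zext_inreg case is the same idea: if the bit being sign-extended is known zero, sign- and zero-extension produce identical bits.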
-
Chris Lattner authored
llvm-svn: 23686
-
Chris Lattner authored
llvm-svn: 23685
-
Chris Lattner authored
llvm-svn: 23684
-
Chris Lattner authored
removal of a bunch of ad-hoc and crufty code from SelectionDAG.cpp. llvm-svn: 23682
-
Chris Lattner authored
llvm-svn: 23679
-
Chris Lattner authored
llvm-svn: 23678
-
- Oct 09, 2005
-
Chris Lattner authored
creating a new vreg and inserting a copy: just use the input vreg directly. This speeds up the compile (e.g. about 5% on mesa with a debug build of llc) by not adding a bunch of copies and vregs to be coalesced away. On mesa, for example, this reduces the number of intervals from 168601 to 129040 going into the coalescer. llvm-svn: 23671
-
- Oct 08, 2005
-
Nate Begeman authored
llvm-svn: 23665
-
- Oct 07, 2005
-
Chris Lattner authored
llvm-svn: 23663
-
Chris Lattner authored
C-X's llvm-svn: 23662
-
Chris Lattner authored
llvm-svn: 23660
-
Chris Lattner authored
implements CodeGen/PowerPC/div-2.ll llvm-svn: 23659
-
- Oct 06, 2005
-
Chris Lattner authored
llvm-svn: 23646
-
Chris Lattner authored
previous copy elisions and we discover we need to reload a register, make sure to use the regclass of the original register for the reload, not the class of the current register. This avoids using 16-bit loads to reload 32-bit values. llvm-svn: 23645
-
Chris Lattner authored
llvm-svn: 23642
-
- Oct 05, 2005
-
Nate Begeman authored
llvm-svn: 23641
-
Nate Begeman authored
llvm-svn: 23640
-
Nate Begeman authored
llvm-svn: 23639
-
Chris Lattner authored
        store r12 -> [ss#2]
        R3 = load [ss#1]
        use R3
        R3 = load [ss#2]
        R4 = load [ss#1]

and turn it into this code:

        store R12 -> [ss#2]
        R3 = load [ss#1]
        use R3
        R3 = R12
        R4 = R3    <- oops!

The problem was that promoting R3 = load [ss#2] to a copy missed the fact that the instruction invalidated R3 at that point. llvm-svn: 23638
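The bookkeeping behind this fix can be sketched as a small reuse map from stack slots to the registers known to hold their values; the missing step was invalidating every entry whose register the promoted copy redefines. The encoding and names below are invented for illustration, not the real local spiller code:

```python
# Toy model of the spiller's reuse map: slot -> register holding its
# value, so a reload can be promoted to a register copy. Key point:
# any load (or promoted copy) REDEFINES its destination register, so
# slots cached in that register must be dropped first -- skipping that
# step is exactly the bug described above.

def promote_reloads(insts):
    slot_in_reg = {}  # stack slot -> register currently holding its value
    out = []
    for op, dst, src in insts:
        if op == "store":  # store register dst into slot src
            slot_in_reg[src] = dst
            out.append((op, dst, src))
        else:              # load slot src into register dst
            if src in slot_in_reg and slot_in_reg[src] != dst:
                out.append(("copy", dst, slot_in_reg[src]))
            else:
                out.append((op, dst, src))
            # dst was just redefined: drop stale cache entries.
            for slot in [s for s, r in slot_in_reg.items() if r == dst]:
                del slot_in_reg[slot]
            slot_in_reg[src] = dst
    return out

prog = [("store", "R12", "ss2"),
        ("load", "R3", "ss1"),
        ("load", "R3", "ss2"),   # becomes: copy R3 <- R12
        ("load", "R4", "ss1")]   # must remain a load, not R4 = R3
print(promote_reloads(prog))
# -> [('store', 'R12', 'ss2'), ('load', 'R3', 'ss1'),
#     ('copy', 'R3', 'R12'), ('load', 'R4', 'ss1')]
```

Without the invalidation loop, ss#1 would still appear cached in R3 after R3 is overwritten, reproducing the "R4 = R3" miscompile.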
-
Chris Lattner authored
with the dag combiner. This speeds up espresso by 8%, reaching performance parity with the dag-combiner-disabled llc. llvm-svn: 23636
-
Chris Lattner authored
llvm-svn: 23635
-
Chris Lattner authored
dead node elim and dag combiner passes where the root is potentially updated. This fixes a fixme in the dag combiner. llvm-svn: 23634
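The iteration scheme amounts to driving the two passes to a fixpoint, re-reading the root after each round because either pass may replace it. A toy version with string-rewriting stand-ins for the two passes (every name here is invented for illustration):

```python
# Fixpoint iteration between two cleanup passes, in miniature: keep
# running both until a full round changes nothing. The "root" is just
# a string here; in SelectionDAG it is the DAG root node, which must
# be re-fetched because a pass can update it.

def drop_dead(s):   # stand-in for dead node elimination
    return s.replace("dead", "")

def combine(s):     # stand-in for the dag combiner
    return s.replace("aa", "a")

def run_to_fixpoint(root):
    while True:
        new_root = combine(drop_dead(root))
        if new_root == root:
            return root  # neither pass changed anything: fixpoint
        root = new_root

print(run_to_fixpoint("adeadaaxaa"))
# -> axa
```

Stopping only when a whole round is a no-op is what guarantees neither pass leaves work behind for the other.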
-
Chris Lattner authored
that testcase still does not pass with the dag combiner. This is because not all forms of br* are folded yet. Also, when we combine a node into another one, delete the node immediately instead of waiting for the node to potentially come up in the future. llvm-svn: 23632
-
Chris Lattner authored
the second phase of dag combining llvm-svn: 23631
-
Chris Lattner authored
llvm-svn: 23630
-
- Oct 04, 2005
-
Jim Laskey authored
llvm-svn: 23622
-
Nate Begeman authored
Since calls return more than one value, don't bail if one of their uses happens to be a node that's not an MVT::Other when following the chain from CALLSEQ_START to CALLSEQ_END. Once we've found a CALLSEQ_START, we can just return; there's no need to tail-recurse further up the graph. Most importantly, just because something only has one use doesn't mean we should use its one use to follow from start to end. This faulty logic caused us to follow a chain of one-use FP operations back to a much earlier call, putting a cycle in the graph from a later start to an earlier end. This is a better fix than reverting to the workaround committed earlier today. llvm-svn: 23620
-
Nate Begeman authored
Neither of us has yet figured out why this code is necessary, but stuff breaks if it's not there. Still tracking this down... llvm-svn: 23617
-
- Oct 03, 2005
-
Jim Laskey authored
llvm-svn: 23610
-
Chris Lattner authored
llvm-svn: 23609
-
Chris Lattner authored
llvm-svn: 23606
-
- Oct 02, 2005
-
Chris Lattner authored
large basic blocks because it was purely recursive. This switches it to an iterative/recursive hybrid. llvm-svn: 23596
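The failure mode is generic: recursing once per instruction overflows the call stack on huge blocks. The actual change is an iterative/recursive hybrid; this toy shows only why the purely recursive form fails and what the iterative half looks like (all names invented for illustration):

```python
# Purely recursive traversal dies on long chains; a loop (or explicit
# worklist) handles any length. Toy linked nodes standing in for a
# long basic block.

import sys

class Node:
    def __init__(self, nxt=None):
        self.next = nxt

def length_recursive(n):
    return 0 if n is None else 1 + length_recursive(n.next)

def length_iterative(n):
    total = 0
    while n is not None:  # the recursion, rewritten as a loop
        total += 1
        n = n.next
    return total

size = 10 * sys.getrecursionlimit()  # far deeper than the stack allows
chain = None
for _ in range(size):
    chain = Node(chain)

print(length_iterative(chain) == size)  # True
try:
    length_recursive(chain)
except RecursionError:
    print("recursive version blew the stack")
```

The hybrid approach keeps recursion for the common shallow case and falls back to an explicit worklist when depth grows, paying the bookkeeping cost only when needed.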
-
Chris Lattner authored
llvm-svn: 23595
-