- Jul 23, 2004
-
Chris Lattner authored
* Inline some functions
* Eliminate some comparisons from the release build
This is good for another 0.3 seconds on gcc. llvm-svn: 15144
-
Chris Lattner authored
want to insert a new range into the middle of the vector, then delete ranges one at a time next to the inserted one as they are merged. Instead, if the inserted interval overlaps, just start merging. The only time we insert into the middle of the vector is when we don't overlap at all. Also delete blocks of live ranges if we overlap with many of them. This patch speeds up joining by .7 seconds on a large testcase, but more importantly gets all of the range adding code into addRangeFrom. llvm-svn: 15141
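A minimal, self-contained sketch of the merge-in-place idea described above (names, the half-open [start, end) convention, and addRange are assumptions for illustration, not the actual addRangeFrom code):

```cpp
#include <algorithm>
#include <vector>

struct LiveRange {
  unsigned start, end;   // hypothetical half-open range [start, end)
};

// Add [S, E) to a sorted, non-overlapping vector of ranges.
void addRange(std::vector<LiveRange> &Ranges, unsigned S, unsigned E) {
  // Find the first range that ends at or after S.
  auto It = std::find_if(Ranges.begin(), Ranges.end(),
                         [S](const LiveRange &R) { return R.end >= S; });

  if (It == Ranges.end() || E < It->start) {
    // No overlap at all: the only case that inserts into the middle.
    Ranges.insert(It, LiveRange{S, E});
    return;
  }

  // Overlap: grow the existing range in place instead of inserting a
  // new one and then deleting its neighbours one at a time.
  It->start = std::min(It->start, S);
  It->end   = std::max(It->end, E);

  // Swallow every following range the grown range now reaches, and
  // erase them as one block rather than one by one.
  auto Next = It + 1;
  auto Last = Next;
  while (Last != Ranges.end() && Last->start <= It->end) {
    It->end = std::max(It->end, Last->end);
    ++Last;
  }
  Ranges.erase(Next, Last);
}
```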
-
Chris Lattner authored
comparisons, reducing linscan by another .1 seconds :) llvm-svn: 15139
-
Chris Lattner authored
llvm-svn: 15138
-
Chris Lattner authored
llvm-svn: 15137
-
Chris Lattner authored
a very modest speedup of .3 seconds compiling 176.gcc (out of 20s). llvm-svn: 15136
-
Chris Lattner authored
llvm-svn: 15135
-
Chris Lattner authored
will soon be renamed) into their own file. The new file should not emit DEBUG output or have other side effects. The LiveInterval class also now doesn't know whether it's working on registers or some other thing. In the future we will want to use the LiveInterval class and friends to do stack packing. In addition to a code simplification, this will allow us to do it more easily. llvm-svn: 15134
-
Chris Lattner authored
Use an explicit LiveRange class to represent ranges instead of an std::pair. This is a minor cleanup, but is really intended to make a future patch simpler and less invasive. Alkis, could you please take a look at LiveInterval::liveAt? I suspect that you can add an operator<(unsigned) to LiveRange, allowing us to speed up the upper_bound call by quite a bit (this would also apply to other callers of upper/lower_bound). I would do it myself, but I still don't understand that crazy liveAt function, despite the comment. :) Basically I would like to see this:
  LiveRange dummy(index, index+1);
  Ranges::const_iterator r = std::upper_bound(ranges.begin(), ranges.end(), dummy);
Turn into:
  Ranges::const_iterator r = std::upper_bound(ranges.begin(), ranges.end(), index);
llvm-svn: 15130
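A rough illustration of what the requested change could look like (a free-standing sketch with assumed field names, not the LLVM sources): std::upper_bound compares the searched value against each element, so a mixed-type operator< taking the index on the left lets the dummy LiveRange disappear.

```cpp
#include <algorithm>
#include <vector>

struct LiveRange {
  unsigned start, end;   // assumed half-open [start, end)
};

// upper_bound(value, element) needs "value < element".
inline bool operator<(unsigned Idx, const LiveRange &LR) {
  return Idx < LR.start;
}

bool liveAt(const std::vector<LiveRange> &Ranges, unsigned Idx) {
  // First range whose start is strictly greater than Idx...
  auto R = std::upper_bound(Ranges.begin(), Ranges.end(), Idx);
  // ...so the only range that could contain Idx is the one before it.
  if (R == Ranges.begin())
    return false;
  --R;
  return Idx >= R->start && Idx < R->end;
}
```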
-
Chris Lattner authored
the live intervals for some registers. llvm-svn: 15125
-
Chris Lattner authored
interfere. Because these intervals have a single definition, and one of them is a copy instruction, they are always safe to merge even if their lifetimes interfere. This slightly reduces the amount of spill code, for example on 252.eon, from:
  12837 spiller       - Number of loads added
   7604 spiller       - Number of stores added
   5842 spiller       - Number of register spills
  18155 liveintervals - Number of identity moves eliminated after coalescing
to:
  12754 spiller       - Number of loads added
   7585 spiller       - Number of stores added
   5803 spiller       - Number of register spills
  18262 liveintervals - Number of identity moves eliminated after coalescing
The much much bigger win would be to merge intervals with multiple definitions (aka phi nodes) but this is not that day. llvm-svn: 15124
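For illustration only, the safety condition described above could be expressed roughly like this (types and field names are assumptions, not taken from the coalescer):

```cpp
struct IntervalInfo {
  unsigned NumDefs;   // how many definitions the virtual register has
  bool DefIsCopy;     // whether its single def is the copy being coalesced
};

bool safeToJoinDespiteInterference(const IntervalInfo &A,
                                   const IntervalInfo &B) {
  // With a single def apiece, and one of those defs being the copy,
  // both intervals carry the same value, so an overlap cannot conflict.
  return A.NumDefs == 1 && B.NumDefs == 1 &&
         (A.DefIsCopy || B.DefIsCopy);
}
```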
-
Chris Lattner authored
llvm-svn: 15118
-
- Jul 22, 2004
-
Chris Lattner authored
llvm-svn: 15115
-
Chris Lattner authored
llvm-svn: 15114
-
Chris Lattner authored
llvm-svn: 15111
-
Alkis Evlogimenos authored
llvm-svn: 15108
-
Misha Brukman authored
llvm-svn: 15107
-
Alkis Evlogimenos authored
intervals need not be sorted anymore. Removing this redundant step improves LiveIntervals running time by 5% on 176.gcc. llvm-svn: 15106
-
Alkis Evlogimenos authored
llvm-svn: 15105
-
Alkis Evlogimenos authored
compilation of gcc:
* Use vectors instead of lists for the interval sets
* Use a heap for the unhandled set to keep intervals always sorted and make insertions back to the heap very fast (compared to scanning a list)
llvm-svn: 15103
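A small sketch of the data-structure change described above (assumed types, not the allocator's real code): keeping the unhandled set in a min-heap keyed on each interval's start point makes "take the next interval to allocate" and "push a spilled interval back" logarithmic instead of a linear list scan.

```cpp
#include <queue>
#include <vector>

struct LiveInterval {
  unsigned reg;
  unsigned start, end;
};

struct StartsLater {
  bool operator()(const LiveInterval *A, const LiveInterval *B) const {
    return A->start > B->start;   // greater-than comparator => min-heap on start
  }
};

using UnhandledSet =
    std::priority_queue<LiveInterval *, std::vector<LiveInterval *>,
                        StartsLater>;

// Pop the interval that begins earliest; caller must check !Unhandled.empty().
LiveInterval *nextToAllocate(UnhandledSet &Unhandled) {
  LiveInterval *LI = Unhandled.top();
  Unhandled.pop();
  return LI;
}
```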
-
Chris Lattner authored
llvm-svn: 15098
-
Alkis Evlogimenos authored
the end will reduce erase() runtimes. llvm-svn: 15093
-
Chris Lattner authored
fortunately, they are easy to handle if we know about them. This patch fixes some serious pessimization of code produced by the linscan register allocator. llvm-svn: 15092
-
Chris Lattner authored
llvm-svn: 15091
-
- Jul 21, 2004
-
Brian Gaeke authored
llvm-svn: 15089
-
Alkis Evlogimenos authored
llvm-svn: 15078
-
Alkis Evlogimenos authored
llvm-svn: 15073
-
Alkis Evlogimenos authored
compile time for 176.gcc from 5.6 secs to 4.7 secs. llvm-svn: 15072
-
Alkis Evlogimenos authored
llvm-svn: 15069
-
Alkis Evlogimenos authored
llvm-svn: 15068
-
Alkis Evlogimenos authored
llvm-svn: 15067
-
- Jul 20, 2004
-
Alkis Evlogimenos authored
stack slots. This is in preparation for the iterative linear scan. llvm-svn: 15032
-
Alkis Evlogimenos authored
llvm-svn: 15031
-
Alkis Evlogimenos authored
llvm-svn: 15011
-
- Jul 19, 2004
-
Chris Lattner authored
llvm-svn: 15005
-
Chris Lattner authored
is a simple change, but seems to improve code a little. For example, on 256.bzip2, we went from 75.0s -> 73.33s (2% speedup). llvm-svn: 15004
-
Chris Lattner authored
llvm-svn: 15003
-
Chris Lattner authored
ask instructions for their parent. llvm-svn: 14998
-
Chris Lattner authored
llvm-svn: 14997
-
Chris Lattner authored
llvm-svn: 14996
-