- Jul 23, 2004
-
-
Chris Lattner authored
will soon be renamed) into their own file. The new file should not emit DEBUG output or have other side effects. The LiveInterval class also now doesn't know whether it's working on registers or some other thing. In the future we will want to use the LiveInterval class and friends to do stack packing. In addition to being a code simplification, this will allow us to do it more easily. llvm-svn: 15134
-
Misha Brukman authored
* Function pointers implemented correctly using appropriate stubs. Contributed by Nate Begeman. llvm-svn: 15133
-
John Criswell authored
standards. This is in hopes of fixing configuration problems on Windows Services for Unix. llvm-svn: 15132
-
Chris Lattner authored
Use an explicit LiveRange class to represent ranges instead of an std::pair. This is a minor cleanup, but is really intended to make a future patch simpler and less invasive. Alkis, could you please take a look at LiveInterval::liveAt? I suspect that you can add an operator<(unsigned) to LiveRange, allowing us to speed up the upper_bound call by quite a bit (this would also apply to other callers of upper/lower_bound). I would do it myself, but I still don't understand that crazy liveAt function, despite the comment. :) Basically I would like to see this:

    LiveRange dummy(index, index+1);
    Ranges::const_iterator r = std::upper_bound(ranges.begin(), ranges.end(), dummy);

turn into:

    Ranges::const_iterator r = std::upper_bound(ranges.begin(), ranges.end(), index);

llvm-svn: 15130
-
Chris Lattner authored
llvm-svn: 15129
-
Chris Lattner authored
the live intervals for some registers. llvm-svn: 15125
-
Chris Lattner authored
interfere. Because these intervals have a single definition, and one of them is a copy instruction, they are always safe to merge even if their lifetimes interfere. This slightly reduces the amount of spill code, for example on 252.eon, from:

    12837 spiller       - Number of loads added
     7604 spiller       - Number of stores added
     5842 spiller       - Number of register spills
    18155 liveintervals - Number of identity moves eliminated after coalescing

to:

    12754 spiller       - Number of loads added
     7585 spiller       - Number of stores added
     5803 spiller       - Number of register spills
    18262 liveintervals - Number of identity moves eliminated after coalescing

The much much bigger win would be to merge intervals with multiple definitions (aka phi nodes) but this is not that day. llvm-svn: 15124
-
Misha Brukman authored
* Print out another '\n' after printing out program execution status
* Make sure code wraps at 80 cols

llvm-svn: 15123
-
Misha Brukman authored
llvm-svn: 15122
-
Misha Brukman authored
* Fix indentation back to 2 spaces llvm-svn: 15121
-
Misha Brukman authored
* Convert tabs to spaces llvm-svn: 15120
-
Misha Brukman authored
* Fix spacing llvm-svn: 15119
-
Chris Lattner authored
llvm-svn: 15118
-
Misha Brukman authored
llvm-svn: 15117
-
John Criswell authored
llvm-svn: 15116
-
- Jul 22, 2004
-
-
Chris Lattner authored
llvm-svn: 15115
-
Chris Lattner authored
llvm-svn: 15114
-
Chris Lattner authored
it can be resurrected from CVS. llvm-svn: 15113
-
Chris Lattner authored
again in the future, it can be resurrected out of CVS. llvm-svn: 15112
-
Chris Lattner authored
llvm-svn: 15111
-
Misha Brukman authored
* Don't allow negative immediates to users of unsigned immediates
* Fix long compares
* Support <const int>, op as a potential immediate candidate
* Fix sign extension of short and byte loads
* Fix and improve integer casts
* Fix passing of doubles as vararg functions

Patch contributed by Nate Begeman. llvm-svn: 15109
-
Alkis Evlogimenos authored
llvm-svn: 15108
-
Misha Brukman authored
llvm-svn: 15107
-
Alkis Evlogimenos authored
intervals need not be sorted anymore. Removing this redundant step improves LiveIntervals running time by 5% on 176.gcc. llvm-svn: 15106
-
Alkis Evlogimenos authored
llvm-svn: 15105
-
Chris Lattner authored
Add new DSE pass. Add a temporary option to disable it in case we need it. This is going in after the July 22 nightly tester run, so we'll wait until the 23rd to see it :) llvm-svn: 15104
-
Alkis Evlogimenos authored
compilation of gcc:

* Use vectors instead of lists for the intervals sets
* Use a heap for the unhandled set to keep intervals always sorted and make insertions back to the heap very fast (compared to scanning a list)

llvm-svn: 15103
-
Chris Lattner authored
llvm-svn: 15102
-
Chris Lattner authored
can be improved in many ways. But: stop laughing, even with -basicaa it deletes 15% of the stores in 252.eon :) llvm-svn: 15101
-
Chris Lattner authored
llvm-svn: 15100
-
Chris Lattner authored
llvm-svn: 15099
-
Chris Lattner authored
llvm-svn: 15098
-
Chris Lattner authored
to the field being updated. Patch contributed by Tobias Nurmiranta. llvm-svn: 15097
-
Chris Lattner authored
llvm-svn: 15096
-
Chris Lattner authored
Tobias Nurmiranta. llvm-svn: 15095
-
Chris Lattner authored
Patch contributed by Tobias Nurmiranta. llvm-svn: 15094
-
Alkis Evlogimenos authored
the end will reduce erase() runtimes. llvm-svn: 15093
-
Chris Lattner authored
fortunately, they are easy to handle if we know about them. This patch fixes some serious pessimization of code produced by the linscan register allocator. llvm-svn: 15092
-
Chris Lattner authored
llvm-svn: 15091
-
- Jul 21, 2004
-
-
Chris Lattner authored
    mov %EDI, 12
    add %EDI, %ECX
    mov %ECX, 12
    add %ECX, %EDX
    mov %EDX, 12
    add %EDX, %ESI

instead (really!) generate this:

    add %ECX, 12
    add %EDX, 12
    add %ESI, 12

llvm-svn: 15090
-