- Mar 16, 2011
-
Cameron Zwarich authored
rather than an int. Thankfully, this only causes LLVM to miss optimizations, not generate incorrect code. This just fixes the zext at the return. We still insert an i32 ZextAssert when reading a function's arguments, but it is followed by a truncate and another i8 ZextAssert, so it is not optimized. llvm-svn: 127766
-
Cameron Zwarich authored
llvm-svn: 127764
-
Daniel Dunbar authored
plus the test where it used to break.", which broke Clang self-host of a Debug+Asserts compiler on OS X. llvm-svn: 127763
-
Renato Golin authored
llvm-svn: 127757
-
- Mar 15, 2011
-
Jakob Stoklund Olesen authored
After live range splitting, an original value may be available in multiple registers. By tracing back through the registers containing the same value, we can find the best place to insert a spill, determine whether the value has already been spilled, or discover a reaching def that may be rematerialized. This is only the analysis part; the information is not used for anything yet. llvm-svn: 127698
-
Jakob Stoklund Olesen authored
llvm-svn: 127697
-
Evan Cheng authored
v2 = bitcast v1
...
v3 = bitcast v2
...
= v3
=>
v2 = bitcast v1
...
= v1
if v1 and v3 are of the same register class. Bitcasts between i32 and fp (and others) are often not nops since they are in different register classes. These bitcast instructions are often left behind because they are in different basic blocks and cannot be eliminated by DAG combine. rdar://9104514 llvm-svn: 127668
-
Evan Cheng authored
zext(undef) = 0, because the top bits will be zero. llvm-svn: 127649
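A minimal worked illustration of that reasoning, with hypothetical values (this is not the folding code itself):

```cpp
#include <cassert>
#include <cstdint>

int main() {
  // Whatever byte value an i8 undef might take, zero-extension forces
  // the top 24 bits to zero, so the widened result is never a fully
  // unconstrained i32: folding zext(undef) to 0 is a safe refinement,
  // while folding it to undef would not be.
  uint8_t anyUndefByte = 0xAB;      // stand-in for one possible undef value
  uint32_t widened = anyUndefByte;  // models zext i8 -> i32
  assert((widened & 0xFFFFFF00u) == 0);
  return 0;
}
```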
-
Bill Wendling authored
and then go kablooie. The problem was that it tracked the PHI nodes anew on each call into this function. But it didn't need to. And because the recursion didn't know that a PHINode had been visited before, it would go ahead and call itself. There is a testcase, but unfortunately it's too big to add. This problem will go away with the EH rewrite. <rdar://problem/8856298> llvm-svn: 127640
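A minimal sketch of the pattern behind such a fix, with hypothetical types standing in for the real IR code: the visited set must persist across the whole traversal rather than being rebuilt per call, or cyclic PHI graphs recurse without bound.

```cpp
#include <unordered_set>
#include <vector>

// Hypothetical stand-in for the real IR type.
struct PhiNode {
  std::vector<PhiNode *> incoming;  // nodes feeding this PHI
};

// The visited set lives for the whole traversal instead of being
// rebuilt on every call, so a cycle of PHI nodes is visited once
// rather than recursed into forever.
void trace(PhiNode *node, std::unordered_set<PhiNode *> &visited) {
  if (!visited.insert(node).second)
    return;  // already seen: cut the cycle instead of recursing
  for (PhiNode *in : node->incoming)
    trace(in, visited);
}
```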
-
- Mar 14, 2011
-
Jakob Stoklund Olesen authored
Use the opportunity to get rid of the trailing underscore variable names. llvm-svn: 127618
-
Jakob Stoklund Olesen authored
Remove the unused reserved_ bit vector, no functional change intended. This doesn't break 'svn blame'; this file really is all my fault. llvm-svn: 127607
-
Evan Cheng authored
llvm-svn: 127600
-
Evan Cheng authored
llvm-svn: 127598
-
- Mar 13, 2011
-
Jakob Stoklund Olesen authored
Use the virtual register number as a cache tag instead. Virtual register numbers are not reused. llvm-svn: 127561
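A sketch of the cache-tag idea under assumed names (CacheEntry and compute are illustrative, not the allocator's actual cache): because virtual register numbers are never reused, the stored number reliably identifies stale entries.

```cpp
// Hypothetical tag-checked cache entry; assumes valid virtual register
// numbers are nonzero, so a zero tag means "empty".
struct CacheEntry {
  unsigned TagVirtReg = 0;  // virtual register the entry was computed for
  int Cached = 0;

  int compute(unsigned VirtReg) { return static_cast<int>(VirtReg); }  // stand-in

  int get(unsigned VirtReg) {
    if (TagVirtReg != VirtReg) {  // numbers are never reused, so a
      Cached = compute(VirtReg);  // mismatch always means a stale entry
      TagVirtReg = VirtReg;
    }
    return Cached;
  }
};
```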
-
Jakob Stoklund Olesen authored
This allows the allocator to free any resources used by the virtual register, including physical register assignments. llvm-svn: 127560
-
- Mar 12, 2011
-
Duncan Sands authored
llvm-gcc-i386-linux-selfhost and llvm-x86_64-linux-checks buildbots. The original log entry: Remove optimization emitting a reference instead of a label difference, since it can create more relocations. Removed the isBaseAddressKnownZero method, because it is no longer used. llvm-svn: 127540
-
Jakob Stoklund Olesen authored
llvm-svn: 127530
-
Jakob Stoklund Olesen authored
Live range splitting can create a number of small live ranges containing only a single real use. Spill these small live ranges along with the large range they are connected to with copies. This enables memory operand folding and maximizes the spill to fill distance. Work in progress with known bugs. llvm-svn: 127529
-
Jakob Stoklund Olesen authored
There are too many compatibility problems with using mixed types in std::upper_bound, and I don't want to spend 110 lines of boilerplate setting up a call to a 10-line function. Binary search is not /that/ hard to implement correctly. I tried terminating the binary search with a linear search, but that actually made the algorithm slower, contrary to my expectation. Most live intervals have fewer than 4 segments. The early test against endIndex() does pay off, and this version is 25% faster than plain std::upper_bound(). llvm-svn: 127522
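The approach reads roughly like the following sketch; Segment and the surrounding helper are hypothetical stand-ins, not the actual LiveInterval code.

```cpp
#include <vector>

struct Segment { unsigned start, end; };  // hypothetical stand-in

// Plain binary search for the first segment whose end index is greater
// than idx, guarded by the cheap early test against the last end index.
const Segment *findSegmentContaining(const std::vector<Segment> &segs,
                                     unsigned idx) {
  if (segs.empty() || idx >= segs.back().end)
    return nullptr;  // early test: no segment can end after idx
  size_t lo = 0, hi = segs.size();
  while (lo < hi) {
    size_t mid = lo + (hi - lo) / 2;
    if (segs[mid].end <= idx)
      lo = mid + 1;
    else
      hi = mid;
  }
  // segs[lo] is the first segment ending after idx; check containment.
  return segs[lo].start <= idx ? &segs[lo] : nullptr;
}
```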
-
- Mar 11, 2011
-
Cameron Zwarich authored
protector insertion not working correctly with unreachable code. Since that revision was rolled out, this test doesn't actually fail before this fix. llvm-svn: 127497
-
Owen Anderson authored
llvm-svn: 127496
-
Jan Sjödin authored
Remove optimization emitting a reference instead of a label difference, since it can create more relocations. Removed the isBaseAddressKnownZero method, because it is no longer used. llvm-svn: 127478
-
Andrew Trick authored
Replace -dag-chain-limit flag with constant. It has survived a release cycle without being touched, so no longer needs to pollute the hidden-help text. llvm-svn: 127468
-
John Wiegley authored
The existing CompEnd predicate does not define a strict weak order as required by the C++03 standard; therefore, its use as a predicate to std::upper_bound is invalid. For a discussion of this issue, see http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#270. This patch replaces the asymmetrical comparison with an iterator adaptor that achieves the same effect while being strictly standard-conforming, by ensuring an apples-to-apples comparison. llvm-svn: 127462
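A sketch of such an adaptor under assumed types (Segment and SegmentEndIterator are illustrative, not the actual patch): by presenting each segment as its end index, std::upper_bound ends up comparing unsigned against unsigned, which trivially defines a strict weak order.

```cpp
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <vector>

struct Segment { unsigned start, end; };  // hypothetical stand-in

// Forward-iterator adaptor that dereferences to each segment's end
// index, so the comparison inside std::upper_bound is value-to-value.
class SegmentEndIterator {
  std::vector<Segment>::const_iterator It;

public:
  using iterator_category = std::forward_iterator_tag;
  using value_type = unsigned;
  using difference_type = std::ptrdiff_t;
  using pointer = const unsigned *;
  using reference = unsigned;

  explicit SegmentEndIterator(std::vector<Segment>::const_iterator I) : It(I) {}
  unsigned operator*() const { return It->end; }
  SegmentEndIterator &operator++() { ++It; return *this; }
  SegmentEndIterator operator++(int) { auto T = *this; ++It; return T; }
  bool operator==(const SegmentEndIterator &O) const { return It == O.It; }
  bool operator!=(const SegmentEndIterator &O) const { return It != O.It; }
  std::vector<Segment>::const_iterator base() const { return It; }
};

// Usage: first segment whose end index is strictly greater than Idx.
// auto I = std::upper_bound(SegmentEndIterator(Segs.begin()),
//                           SegmentEndIterator(Segs.end()), Idx).base();
```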
-
Evan Cheng authored
Avoid replacing the value of a directly stored load with the stored value if the load is indexed. rdar://9117613. llvm-svn: 127440
-
- Mar 10, 2011
-
Cameron Zwarich authored
llvm-svn: 127398
-
Jakob Stoklund Olesen authored
This makes it possible to register delegates and get callbacks when the spiller edits live ranges. llvm-svn: 127389
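A minimal sketch of the delegate mechanism, with hypothetical names (the real interface belongs to LiveRangeEdit): listeners register once and are called back on each edit.

```cpp
#include <vector>

// Hypothetical delegate interface.
struct Delegate {
  virtual ~Delegate() = default;
  virtual void rangeEdited(unsigned virtReg) = 0;
};

struct Editor {
  std::vector<Delegate *> delegates;
  void registerDelegate(Delegate *d) { delegates.push_back(d); }
  void editRange(unsigned virtReg) {
    // ... perform the edit on the live range ...
    for (Delegate *d : delegates)
      d->rangeEdited(virtReg);  // callback keeps listeners' data in sync
  }
};
```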
-
Jakob Stoklund Olesen authored
llvm-svn: 127388
-
Evan Cheng authored
llvm-svn: 127380
-
Evan Cheng authored
llvm-svn: 127376
-
- Mar 09, 2011
-
Evan Cheng authored
flexible. If it returns a register class that's different from the input, then that's the register class used for cross-register class copies. If it returns a register class that's the same as the input, then no cross-register class copies are needed (normal copies would do). If it returns null, then it's not at all possible to copy registers of the specified register class. llvm-svn: 127368
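The three-way contract can be sketched from the caller's side, with hypothetical stand-in types (this is not the actual hook signature):

```cpp
// Hypothetical stand-ins for the hook and its caller.
struct RegClass { bool copyable; const RegClass *copyVia; };

const RegClass *getCrossCopyClass(const RegClass *rc) {
  if (!rc->copyable)
    return nullptr;                       // copying this class is impossible
  return rc->copyVia ? rc->copyVia : rc;  // route via another class, or itself
}

void emitCopy(const RegClass *rc) {
  if (const RegClass *cc = getCrossCopyClass(rc)) {
    if (cc == rc) {
      // Same class returned: a normal copy suffices.
    } else {
      // Different class returned: emit a cross-register class copy via cc.
    }
  } else {
    // Null: registers of this class cannot be copied at all.
  }
}
```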
-
Jakob Stoklund Olesen authored
The damage done by physreg coalescing only depends on the number of instructions the extended physreg live range covers. This fixes PR9438. The heuristic is still luck-based, and physreg coalescing really should be disabled completely. We need a register allocator with better hinting support before that is possible. Convert a test to FileCheck and force spilling by inserting an extra call. The previous spilling behavior was dependent on misguided physreg coalescing decisions. llvm-svn: 127351
-
Andrew Trick authored
This helps cases like 2008-07-19-movups-spills.ll, but doesn't have an obvious impact on benchmarks. llvm-svn: 127347
-
Benjamin Kramer authored
llvm-svn: 127335
-
Benjamin Kramer authored
llvm-svn: 127331
-
Matt Beaumont-Gay authored
llvm-svn: 127311
-
Jakob Stoklund Olesen authored
This will be used for keeping register allocator data structures up to date while LiveRangeEdit is trimming live intervals. llvm-svn: 127300
-
Jakob Stoklund Olesen authored
llvm-svn: 127295
-
- Mar 08, 2011
-
Jakob Stoklund Olesen authored
LiveRangeEdit::eliminateDeadDefs() will eventually be used by coalescing, splitting, and spilling for dead code elimination. It can delete chains of dead instructions as long as there are no dependency loops. llvm-svn: 127287
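A worklist sketch of the chained deletion, with hypothetical types (the real code operates on machine instructions and live ranges): deleting one dead def can make its operands dead in turn, while a dependency loop keeps every member's use count above zero, so the loop's members are never enqueued.

```cpp
#include <vector>

// Hypothetical stand-in for a machine instruction.
struct Instr {
  std::vector<Instr *> operands;  // instructions this one reads
  unsigned uses = 0;              // how many instructions read this one
};

// Drain the worklist, enqueueing operands that newly become dead.
void eliminateDeadDefs(std::vector<Instr *> worklist) {
  while (!worklist.empty()) {
    Instr *inst = worklist.back();
    worklist.pop_back();
    if (inst->uses != 0)
      continue;  // still live: skip
    for (Instr *op : inst->operands)
      if (--op->uses == 0)
        worklist.push_back(op);  // operand just became dead
    // ... erase inst from the function here ...
  }
}
```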
-
Jakob Stoklund Olesen authored
Patch by Olaf Krzikalla! llvm-svn: 127264