- Aug 06, 2011
-
Jakob Stoklund Olesen authored
No functional change. llvm-svn: 136994
-
Jakob Stoklund Olesen authored
These functions are no longer used, and they are easily replaced with a loop calling shouldSplitSingleBlock and splitSingleBlock. llvm-svn: 136993
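For readers following along, here is a rough sketch of the replacement loop the message refers to, assuming the SplitAnalysis/SplitEditor interfaces of that era; `SA`, `SE`, and the `SingleInstrs` flag are stand-ins for the surrounding pass's state, not taken from this commit:

```cpp
// Sketch only: visit each block with uses and split it in place instead of
// first collecting candidate blocks in a SmallPtrSet.
ArrayRef<SplitAnalysis::BlockInfo> UseBlocks = SA->getUseBlocks();
for (unsigned i = 0, e = UseBlocks.size(); i != e; ++i) {
  const SplitAnalysis::BlockInfo &BI = UseBlocks[i];
  if (SA->shouldSplitSingleBlock(BI, /*SingleInstrs=*/false))
    SE->splitSingleBlock(BI);
}
```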
-
Jakob Stoklund Olesen authored
Drop the use of SplitAnalysis::getMultiUseBlocks, there is no need to go through a SmallPtrSet any more. llvm-svn: 136992
-
Jakob Stoklund Olesen authored
Normally, we don't create a live range for a single instruction in a basic block; the spiller does that anyway. However, when splitting a live range that belongs to a proper register sub-class, inserting these extra COPY instructions completely removes the constraints from the remainder interval, and it may be allocated from the larger super-class. The spiller will mop up these small live ranges if we end up spilling anyway. It calls them snippets. llvm-svn: 136989
-
- Aug 05, 2011
-
Jakob Stoklund Olesen authored
Some instructions require restricted register classes, but most of the time that doesn't affect register allocation. For example, some instructions don't work with the stack pointer, but that is a reserved register anyway. Sometimes it matters, though: GR32_ABCD only has 4 allocatable registers. For such a proper sub-class, the register allocator should try to enable register class inflation, since that makes more registers available for allocation. Make sure only legal super-classes are considered. For example, tGPR is not a proper sub-class in Thumb mode, but in ARM mode it is. llvm-svn: 136981
-
Jakob Stoklund Olesen authored
The old code would look at kills and defs in one pass over the instruction operands, causing problems with this code: %R0<def>, %CPSR<def,dead> = tLSLri %R5<kill>, 2, pred:14, pred:%noreg %R0<def>, %CPSR<def,dead> = tADDrr %R4<kill>, %R0<kill>, pred:14, pred:%noreg The last instruction kills and redefines %R0, so it is still live after the instruction. This caused a register scavenger crash when compiling 483.xalancbmk for armv6. I am not including a test case because it requires too much bad luck to expose this old bug. First you need to convince the register allocator to use %R0 twice on the tADDrr instruction, then you have to convince BranchFolding to do something that causes it to run the register scavenger on the bad block. <rdar://problem/9898200> llvm-svn: 136973
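To make the ordering issue concrete, here is a small self-contained toy (not the actual scavenger code) showing why kills should be processed before defs: with kills applied first, a register that is both killed and redefined by the same instruction correctly stays live.

```cpp
#include <cstdio>
#include <set>
#include <vector>

// Toy model of a machine operand: a register plus def/kill flags.
struct Operand {
  unsigned Reg;
  bool IsDef;
  bool IsKill;
};

// Process all kills, then all defs. A single pass in operand order would
// insert the def of R0 and then erase it again on the kill, wrongly marking
// R0 dead after "%R0<def> = tADDrr %R4<kill>, %R0<kill>".
void updateLiveness(const std::vector<Operand> &MI, std::set<unsigned> &Live) {
  for (const Operand &MO : MI)   // pass 1: kills (uses)
    if (!MO.IsDef && MO.IsKill)
      Live.erase(MO.Reg);
  for (const Operand &MO : MI)   // pass 2: defs
    if (MO.IsDef)
      Live.insert(MO.Reg);
}

int main() {
  const unsigned R0 = 0, R4 = 4;
  std::set<unsigned> Live = {R0, R4};
  // Operands of "%R0<def> = tADDrr %R4<kill>, %R0<kill>".
  std::vector<Operand> tADDrr = {{R0, true, false}, {R4, false, true}, {R0, false, true}};
  updateLiveness(tADDrr, Live);
  std::printf("R0 live after tADDrr: %s\n", Live.count(R0) ? "yes" : "no"); // prints "yes"
  return 0;
}
```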
-
Chandler Carruth authored
inlined variable, based on the discussion in PR10542. This explodes the runtime of several passes down the pipeline due to a large number of "copies" remaining live across a large function. This only shows up with both debug and opt, but when it does, it creates a many-minute compile when self-hosting LLVM+Clang. There are several other cases that show these types of regressions. All of this is tracked in PR10542, and progress is being made on fixing the issue. Once it's addressed, the patch can be re-instated, but until then this restores the performance for self-hosting and other opt+debug builds. Devang, let me know if this causes any trouble, or impedes fixing it in any way, and thanks for working on this! llvm-svn: 136953
-
- Aug 04, 2011
-
Jakob Stoklund Olesen authored
Patch by Ivan Krasin! llvm-svn: 136921
-
Devang Patel authored
llvm-svn: 136916
-
Devang Patel authored
llvm-svn: 136915
-
Devang Patel authored
llvm-svn: 136901
-
Jakob Stoklund Olesen authored
It is possible to have multiple DBG_VALUEs for the same variable: 32L TEST32rr %vreg0<kill>, %vreg0, %EFLAGS<imp-def>; GR32:%vreg0 DBG_VALUE 2, 0, !"i" DBG_VALUE %noreg, %0, !"i" When that happens, keep the last one instead of the first. llvm-svn: 136842
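A toy illustration of the "keep the last one" rule (not the actual LiveDebugVariables code): if locations are keyed by variable and slot index, recording the later DBG_VALUE simply overwrites the earlier one.

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <utility>

int main() {
  // Hypothetical stand-in: the location recorded for a (variable, slot index)
  // pair; a later DBG_VALUE for the same pair overwrites the earlier one.
  std::map<std::pair<std::string, unsigned>, std::string> LocAt;

  LocAt[{"i", 32}] = "2";        // DBG_VALUE 2, 0, !"i"
  LocAt[{"i", 32}] = "%noreg";   // DBG_VALUE %noreg, ..., !"i" -- last one wins

  std::printf("i @32 -> %s\n", LocAt[{"i", 32}].c_str());  // prints "%noreg"
  return 0;
}
```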
-
Jakob Stoklund Olesen authored
This helps generate better code in functions with high register pressure. The previous version of compact region splitting caused regressions because the regions were a bit too large. A stronger negative bias applied in r136832 fixed this problem. llvm-svn: 136836
-
Devang Patel authored
Do not drop undef debug values. These are used as range termination markers by the live debug variable pass. llvm-svn: 136834
-
Jakob Stoklund Olesen authored
Apply twice the negative bias on transparent blocks when computing the compact regions. This excludes loop backedges from the region when only one of the loop blocks uses the register. Previously, we would include the backedge in the region if the loop preheader and the loop latch both used the register, but the loop header didn't. When both the header and latch blocks use the register, we still keep it live on the backedge. llvm-svn: 136832
-
Chandler Carruth authored
lib/CodeGen/RegAllocGreedy.cpp:1176:18: warning: unused variable 'B' [-Wunused-variable] if (unsigned B = Cand.getBundles(BundleCand, BestCand)) { ^ lib/CodeGen/RegAllocGreedy.cpp:1188:18: warning: unused variable 'B' [-Wunused-variable] if (unsigned B = Cand.getBundles(BundleCand, 0)) { ^ llvm-svn: 136831
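For context, two common ways to silence this kind of -Wunused-variable warning when only the truth value of the call matters (a fragment reusing the names from the warning above, not necessarily what this commit did):

```cpp
// Variant 1: drop the named variable and test the call directly.
if (Cand.getBundles(BundleCand, BestCand)) {
  // ...
}

// Variant 2: keep the name for readability but mark it as deliberately used.
if (unsigned B = Cand.getBundles(BundleCand, BestCand)) {
  (void)B;
  // ...
}
```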
-
Jakub Staszak authored
llvm-svn: 136828
-
Jakub Staszak authored
llvm-svn: 136826
-
- Aug 03, 2011
-
Jakub Staszak authored
llvm-svn: 136816
-
Eli Friedman authored
New approach to r136737: insert the necessary fences for atomic ops in platform-independent code, since a bunch of platforms (ARM, Mips, PPC, Alpha are the relevant targets here) need to do essentially the same thing. I think this completes the basic CodeGen for atomicrmw and cmpxchg. llvm-svn: 136813
-
Bob Wilson authored
llvm-svn: 136802
-
Devang Patel authored
llvm-svn: 136759
-
Jakob Stoklund Olesen authored
llvm-svn: 136742
-
Jakob Stoklund Olesen authored
This information is not used for anything yet. llvm-svn: 136741
-
Jakob Stoklund Olesen authored
With a 'FirstDef' field right there, it is very confusing that FirstUse refers to an instruction that may be a def. llvm-svn: 136739
-
Jakob Stoklund Olesen authored
This is either an invalid SlotIndex, or valno->def for the first value defined inside the block. PHI values are not counted as defined inside the block. The FirstDef field will be used when estimating the cost of spilling around a block. llvm-svn: 136736
-
Jakob Stoklund Olesen authored
llvm-svn: 136735
-
- Aug 02, 2011
-
Jakob Stoklund Olesen authored
The PrefBoth constraint is used for blocks that ideally want a live-in value both on the stack and in a register. This would be used by a block that has a use before interference forces a spill. Secondly, add the ChangesValue flag to BlockConstraint. This tells SpillPlacement if a live-in value on the stack can be reused as a live-out stack value for free. If the block redefines the virtual register, a spill would be required for that. This extra information will be used by SpillPlacement to more accurately calculate spill costs when a value can exist both on the stack and in a register. The simplest example is a basic block that reads the virtual register, but doesn't change its value. Spilling around such a block requires a reload, but no spill in the block. The spiller already knows this, but the spill placer doesn't. That can sometimes lead to suboptimal regions. llvm-svn: 136731
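A rough sketch of what the extended per-block record might look like; the PrefBoth and ChangesValue names come from the message, while the remaining field and enumerator names are my best recollection of SpillPlacement.h and may not match the tree exactly:

```cpp
// Sketch of a spill-placement block constraint.
struct BlockConstraint {
  unsigned Number;         // basic block number
  enum BorderConstraint {
    DontCare,              // the block doesn't care about the value
    PrefReg,               // the block prefers the value in a register
    PrefSpill,             // the block prefers the value on the stack
    PrefBoth,              // the block wants the value both in a register and on the stack
    MustSpill              // the block requires the value on the stack
  } Entry, Exit;           // constraints on the live-in and live-out value
  bool ChangesValue;       // the block redefines the register, so a live-in
                           // stack value cannot be reused as the live-out
                           // stack value for free
};
```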
-
Eli Friedman authored
The testcase looks extremely fragile, so I'm adding an assertion which should catch any cases like this. llvm-svn: 136711
-
- Aug 01, 2011
-
Jay Foad authored
llvm-svn: 136609
-
- Jul 31, 2011
-
Bill Wendling authored
This adds the 'resume' instruction class, IR parsing, and bitcode reading and writing. The 'resume' instruction resumes propagation of an existing (in-flight) exception whose unwinding was interrupted with a 'landingpad' instruction (to be added later). llvm-svn: 136589
-
Jakob Stoklund Olesen authored
llvm-svn: 136584
-
- Jul 30, 2011
-
Jakob Stoklund Olesen authored
While this generally helped x86-64, there were some large regressions for i386. llvm-svn: 136571
-
Bill Wendling authored
r136339, r136341, r136369, r136387, r136392, r136396, r136429, r136430, r136444, r136445, r136446, r136253 pending review. llvm-svn: 136556
-
Jakob Stoklund Olesen authored
The ARM target depends on CPSR liveness being tracked after register allocation. llvm-svn: 136548
-
Jakob Stoklund Olesen authored
This includes registers like EFLAGS and ST0-ST7. We don't check for liveness issues in the verifier and scavenger because registers will never be allocated from these classes. While in SSA form, we do care about the liveness of unallocatable unreserved registers. Liveness of EFLAGS and ST0 needs to be correct for MachineDCE and MachineSinking. llvm-svn: 136541
-
Jakob Stoklund Olesen authored
llvm-svn: 136535
-
Jakob Stoklund Olesen authored
This flag is true from isel to register allocation when the machine function is required to be in SSA form. The TwoAddressInstructionPass and PHIElimination passes clear the flag. The SSA flag will be used by the machine code verifier to check for SSA form, and eventually an assertion can enforce it in +Asserts builds. This will catch the common target error of creating machine code with multiple defs of a virtual register. llvm-svn: 136532
-
Jakub Staszak authored
llvm-svn: 136529
-
Jakob Stoklund Olesen authored
This helps generate better code in functions with high register pressure. llvm-svn: 136528
-