- Jan 13, 2012
Andrew Trick authored
llvm-svn: 148105
-
- Jan 12, 2012
Jakob Stoklund Olesen authored
llvm-svn: 148031
-
Jakob Stoklund Olesen authored
llvm-svn: 147979
-
- Nov 13, 2011
Jakob Stoklund Olesen authored
Nobody cared; StackSlotColoring scans the instructions to find used stack slots. llvm-svn: 144485
-
- Nov 01, 2011
Jakob Stoklund Olesen authored
No test case, spotted by inspection. llvm-svn: 143407
-
- Sep 14, 2011
Jakob Stoklund Olesen authored
The LRE_DidCloneVirtReg callback may be called with virtual registers that RAGreedy doesn't even know about yet. In that case, there are no data structures to update. llvm-svn: 139702
-
- Sep 12, 2011
Jakob Stoklund Olesen authored
It has been enabled by default for a while; it was only there to allow performance comparisons. llvm-svn: 139501
-
Jakob Stoklund Olesen authored
SplitKit always computes a complement live range to cover the places where the original live range was live, but no explicit region has been allocated. Currently, the complement live range is created to be as small as possible - it never overlaps any of the regions. This minimizes register pressure, but if the complement is going to be spilled anyway, that is not very important. The spiller will eliminate redundant spills, and hoist others by making the spill slot live range overlap some of the regions created by splitting. Stack slots are cheap. This patch adds the interface to enable spill modes in SplitKit. In spill mode, SplitKit will assume that the complement is going to spill, so it will allow it to overlap regions in order to avoid back-copies. By doing some of the spiller's work early, the complement live range becomes simpler. In some cases, it can become much simpler because no extra PHI-defs are required. This will speed up both splitting and spilling. This is only the interface to enable spill modes, no implementation yet. llvm-svn: 139500
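For context, here is a minimal self-contained sketch of what such a spill-mode interface could look like. The enumerator names are modeled on the description above, so treat them as illustrative rather than as the exact LLVM interface:

    // Illustrative sketch only, not necessarily LLVM's actual interface.
    enum ComplementSpillMode {
      SM_Partition, // Default: the complement never overlaps the split
                    // regions, minimizing register pressure.
      SM_Size,      // Assume the complement will spill; let it overlap
                    // regions to avoid back-copies and extra PHI-defs.
      SM_Speed      // Like SM_Size, but keep the back-copies that are
                    // cheaper than reloads on hot paths.
    };

    class SplitEditor {
      ComplementSpillMode SpillMode = SM_Partition;
    public:
      // The caller picks the mode up front, before any regions are formed.
      void reset(ComplementSpillMode SM = SM_Partition) { SpillMode = SM; }
    };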
-
- Aug 19, 2011
Benjamin Kramer authored
llvm-svn: 138025
-
- Aug 09, 2011
Jakob Stoklund Olesen authored
A public interface is no longer needed since RegisterCoalescer is not an analysis any more. llvm-svn: 137082
-
- Aug 06, 2011
Jakob Stoklund Olesen authored
llvm-svn: 137023
-
Jakob Stoklund Olesen authored
All new local ranges are marked as RS_New now, so there is no need to attempt splitting of RS_Spill ranges any more. llvm-svn: 137002
-
Jakob Stoklund Olesen authored
The local ranges created get to stay in the RS_New stage, just like for local and region splitting. This gives tryLocalSplit a bit more freedom the first time it sees one of these new local ranges. llvm-svn: 137001
-
Jakob Stoklund Olesen authored
llvm-svn: 136996
-
Jakob Stoklund Olesen authored
No functional change. llvm-svn: 136994
-
Jakob Stoklund Olesen authored
Drop the use of SplitAnalysis::getMultiUseBlocks, there is no need to go through a SmallPtrSet any more. llvm-svn: 136992
-
Jakob Stoklund Olesen authored
Normally, we don't create a live range for a single instruction in a basic block; the spiller does that anyway. However, when splitting a live range that belongs to a proper register sub-class, inserting these extra COPY instructions completely removes the constraints from the remainder interval, and it may be allocated from the larger super-class. The spiller will mop up these small live ranges if we end up spilling anyway. It calls them snippets. llvm-svn: 136989
-
- Aug 04, 2011
Jakob Stoklund Olesen authored
This helps generate better code in functions with high register pressure. The previous version of compact region splitting caused regressions because the regions were a bit too large. A stronger negative bias applied in r136832 fixed this problem. llvm-svn: 136836
-
Jakob Stoklund Olesen authored
Apply twice the negative bias on transparent blocks when computing the compact regions. This excludes loop backedges from the region when only one of the loop blocks uses the register. Previously, we would include the backedge in the region if the loop preheader and the loop latch both used the register, but the loop header didn't. When both the header and latch blocks use the register, we still keep it live on the backedge. llvm-svn: 136832
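As a rough illustration of the bias change (every name below is invented for the sketch; the real logic lives in RAGreedy's spill placement setup):

    // Illustrative model: each edge bundle accumulates a bias. Positive
    // bias pulls the bundle into the region, negative bias pushes it out.
    struct BundleBias { double Bias = 0; };

    void addBlock(BundleBias &In, BundleBias &Out, double Freq, bool UsesReg) {
      if (UsesReg) {
        In.Bias += Freq;  // blocks with uses attract the region
        Out.Bias += Freq;
      } else {
        // r136832: double the negative bias on transparent blocks, so a
        // transparent loop header outweighs uses in the preheader and
        // latch, keeping the backedge out of the compact region.
        In.Bias -= 2 * Freq;
        Out.Bias -= 2 * Freq;
      }
    }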
-
Chandler Carruth authored
lib/CodeGen/RegAllocGreedy.cpp:1176:18: warning: unused variable 'B' [-Wunused-variable]
  if (unsigned B = Cand.getBundles(BundleCand, BestCand)) {
lib/CodeGen/RegAllocGreedy.cpp:1188:18: warning: unused variable 'B' [-Wunused-variable]
  if (unsigned B = Cand.getBundles(BundleCand, 0)) {
llvm-svn: 136831
-
- Aug 03, 2011
Jakob Stoklund Olesen authored
llvm-svn: 136742
-
Jakob Stoklund Olesen authored
This information is not used for anything yet. llvm-svn: 136741
-
Jakob Stoklund Olesen authored
With a 'FirstDef' field right there, it is very confusing that FirstUse refers to an instruction that may be a def. llvm-svn: 136739
-
- Jul 31, 2011
Jakob Stoklund Olesen authored
llvm-svn: 136584
-
- Jul 30, 2011
Jakob Stoklund Olesen authored
While this generally helped x86-64, there were some large regressions for i386. llvm-svn: 136571
-
Jakob Stoklund Olesen authored
This helps generate better code in functions with high register pressure. llvm-svn: 136528
-
- Jul 28, 2011
Jakob Stoklund Olesen authored
There are two conflicting strategies in play:
- Under high register pressure, we want to assign large live ranges first. Smaller live ranges are easier to place afterwards.
- Live range splitting is guided by interference, so splitting should be deferred until interference is as realistic as possible.
With the recent changes to the live range stages, and with compact regions enabled, it is less traumatic to split a live range too early. If some of the split products were too big, they can often be split again. By reversing the RS_Split order, we get this queue order:
1. Normal live ranges, large to small.
2. RS_Split live ranges, large to small.
The large-to-small order improves RAGreedy's puzzle solving skills under high register pressure. It may cause a bit more iterated splitting, but we handle that better now. With this change, -compact-regions is mostly an improvement on SPEC. llvm-svn: 136388
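A small sketch of a queue priority that produces this order; an illustrative model, not RAGreedy's exact formula:

    #include <cstdint>

    enum LiveRangeStage { RS_New, RS_Assign, RS_Split, RS_Split2, RS_Spill, RS_Done };

    // Higher priority is dequeued first: normal ranges come before
    // RS_Split ranges, and within each group larger ranges come first.
    uint64_t queuePriority(LiveRangeStage Stage, uint64_t SizeInSlotIndexes) {
      uint64_t Prio = SizeInSlotIndexes;   // large-to-small within a group
      if (Stage != RS_Split)
        Prio |= uint64_t(1) << 62;         // whole group ahead of RS_Split
      return Prio;
    }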
-
- Jul 27, 2011
Jakob Stoklund Olesen authored
When splitting global live ranges, it is now possible to split for multiple destination intervals at once. Previously, we only had the main and stack intervals. Each edge bundle is assigned to a split candidate, and splitAroundRegion will insert copies between the candidate intervals and the stack interval as needed. The multi-way splitting is used to split around compact regions when enabled with -compact-regions. The best candidate register still gets all the bundles it wants, but everything outside the main interval is first split around compact regions before we create single-block intervals. Compact region splitting still causes some regressions, so it is not enabled by default. llvm-svn: 136186
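Illustratively, the multi-way split can be modeled as a table mapping every edge bundle to one split candidate; where two neighbouring bundles map to different candidates, splitAroundRegion has to insert a copy (names below are descriptive, not LLVM's):

    #include <cstddef>
    #include <vector>

    struct SplitCandidate {
      unsigned PhysReg; // 0 can stand for the stack/compact candidate
    };

    // Record which candidate's interval covers each live bundle. Copies
    // are later inserted on edges where the candidate index changes.
    void assignBundles(std::vector<unsigned> &BundleCand, // bundle -> candidate
                       const std::vector<bool> &LiveBundles, unsigned Cand) {
      for (std::size_t B = 0; B != LiveBundles.size(); ++B)
        if (LiveBundles[B])
          BundleCand[B] = Cand;
    }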
-
- Jul 26, 2011
Jakob Stoklund Olesen authored
When dead code elimination deletes a PHI value, the virtual register may split into multiple connected components. In that case, revert each component to the RS_Assign stage. The new components are guaranteed to be smaller (the original value numbers are distributed among the components), so this will always be making progress. The components are now allowed to evict other live ranges or be split again. llvm-svn: 136034
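The components come from a connectivity check over the value numbers, which can be pictured as a union-find where values joined by a PHI share a set; delete the PHI and the sets fall apart. A self-contained sketch of that idea (not LLVM's ConnectedVNInfoEqClasses):

    #include <numeric>
    #include <vector>

    // Union-find over value numbers: values connected by copies or PHIs
    // are joined; each surviving root becomes one new virtual register,
    // and every new register re-enters the queue at RS_Assign.
    struct UnionFind {
      std::vector<unsigned> Parent;
      explicit UnionFind(unsigned N) : Parent(N) {
        std::iota(Parent.begin(), Parent.end(), 0u);
      }
      unsigned find(unsigned X) {
        while (Parent[X] != X)
          X = Parent[X] = Parent[Parent[X]]; // path halving
        return X;
      }
      void join(unsigned A, unsigned B) { Parent[find(A)] = find(B); }
    };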
-
- Jul 25, 2011
Jakob Stoklund Olesen authored
This mechanism already exists, but the RS_Split2 stage makes it clearer. When live range splitting creates ranges that may not be making progress, they are marked RS_Split2 instead of RS_New. These ranges may be split again, but only in a way that can be proven to make progress. For local ranges, that means they must be split into ranges used by strictly fewer instructions. For global ranges, region splitting is bypassed and the RS_Split2 ranges go straight to per-block splitting. llvm-svn: 135912
-
Jakob Stoklund Olesen authored
The stage is used to control where a live range is going, not where it is coming from. Live ranges created by splitting will usually be marked RS_New, but some are marked RS_Spill to avoid wasting time trying to split them again. The old RS_Global and RS_Local stages are merged - they are really the same thing for local and global live ranges. llvm-svn: 135911
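Summarizing this entry and the RS_Split2 one above, the stages can be read as an ordered list describing where a live range goes next. An illustrative reconstruction (the exact enumerators in RegAllocGreedy.cpp changed over time):

    enum LiveRangeStage {
      RS_New,    // Never queued before; all options are open.
      RS_Assign, // Queued for assignment and eviction only.
      RS_Split,  // May be split by region, block, or local splitting.
      RS_Split2, // May only be split in ways that provably make progress.
      RS_Spill,  // Don't waste time splitting again; go to the spiller.
      RS_Done    // Assigned or spilled; never seen by the allocator again.
    };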
-
- Jul 23, 2011
Jakob Stoklund Olesen authored
This method computes the edge bundles that should be live when splitting around a compact region. This is independent of interference. The function returns false if the live range was already a compact region, or the compact region doesn't have any live bundles - it would be the same as splitting around basic blocks. Compact regions are computed using the normal spill placement code. We pretend there is interference in all live-through blocks that don't use the live range. This removes all edges from the Hopfield network used for spill placement, so it converges instantly. llvm-svn: 135847
-
Jakob Stoklund Olesen authored
A split candidate can have a null PhysReg which means that it doesn't map to a real interference pattern. Instead, pretend that all through blocks have interference. This makes it possible to generate compact regions where the live range doesn't go through blocks that don't use it. The live range will still be live between directly connected blocks with uses. Splitting around a compact region tends to produce a live range with a high spill weight, so it may evict a less dense live range. llvm-svn: 135845
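A sketch tying these two entries together: the compact region falls out of the ordinary spill placement machinery by pretending that every live-through block without a use is fully interfered (illustrative model, not LLVM's code):

    #include <cstddef>
    #include <vector>

    struct BlockInfo { bool LiveThrough; bool HasUse; };

    // Mark the blocks that should repel the region. Feeding this as
    // "interference" to spill placement removes all transparent blocks
    // from the Hopfield network, so it converges instantly.
    void addCompactConstraints(const std::vector<BlockInfo> &Blocks,
                               std::vector<bool> &PretendInterference) {
      PretendInterference.assign(Blocks.size(), false);
      for (std::size_t I = 0; I != Blocks.size(); ++I)
        PretendInterference[I] = Blocks[I].LiveThrough && !Blocks[I].HasUse;
    }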
-
- Jul 18, 2011
Frits van Bommel authored
Migrate LLVM and Clang to use the new makeArrayRef(...) functions where previously explicit non-default constructors were used. Mostly mechanical with some manual reformatting. llvm-svn: 135390
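For example, the migration replaces explicit ArrayRef constructor calls with the deduced form; a representative before/after (call sites vary):

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/ADT/SmallVector.h"
    #include <cstddef>

    void example(const llvm::SmallVectorImpl<int> &V, const int *P, std::size_t N) {
      llvm::ArrayRef<int> A(P, N);                      // old: explicit constructor
      llvm::ArrayRef<int> B = llvm::makeArrayRef(P, N); // new: element type deduced
      llvm::ArrayRef<int> C = llvm::makeArrayRef(V);    // overloads for containers
      (void)A; (void)B; (void)C;
    }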
-
- Jul 16, 2011
Jakub Staszak authored
llvm-svn: 135354
-
- Jul 15, 2011
Jakob Stoklund Olesen authored
This gets rid of some of the gory splitting details in RAGreedy and makes them available to future SplitKit clients. Slightly generalize the functionality to support multi-way splitting. Specifically, SplitEditor::splitLiveThroughBlock() supports switching between different register intervals in a block. llvm-svn: 135307
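The generalized entry point looks roughly like this; SlotIndex is stubbed so the sketch is self-contained, and the signature is modeled on the description above (the real declaration is in SplitKit.h):

    struct SlotIndex {}; // stub; LLVM's SlotIndex identifies a point in the code

    class SplitEditor {
    public:
      // Split a live-through block: the value enters the block in interval
      // IntvIn and leaves it in IntvOut (0 means the complement interval).
      // LeaveBefore/EnterAfter bound where switching copies may be placed.
      void splitLiveThroughBlock(unsigned MBBNum,
                                 unsigned IntvIn, SlotIndex LeaveBefore,
                                 unsigned IntvOut, SlotIndex EnterAfter);
    };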
-
- Jul 14, 2011
Jakob Stoklund Olesen authored
Original commit message: Count references to interference cache entries. Each InterferenceCache::Cursor instance references a cache entry. A non-zero reference count guarantees that the entry won't be reused for a new register. This makes it possible to have multiple live cursors examining interference for different physregs. The total number of live cursors into a cache must be kept below InterferenceCache::getMaxCursors(). Code generation should be unaffected by this change, and it doesn't seem to affect the cache replacement strategy either. llvm-svn: 135130
-
Jakob Stoklund Olesen authored
llvm-svn: 135122
-
Jakob Stoklund Olesen authored
Each InterferenceCache::Cursor instance references a cache entry. A non-zero reference count guarantees that the entry won't be reused for a new register. This makes it possible to have multiple live cursors examining interference for different physregs. The total number of live cursors into a cache must be kept below InterferenceCache::getMaxCursors(). Code generation should be unaffected by this change, and it doesn't seem to affect the cache replacement strategy either. llvm-svn: 135121
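The counting scheme itself fits in a few lines: every cursor pins one cache entry, and an entry may only be recycled for a new register once unpinned. A self-contained illustration, not LLVM's actual InterferenceCache:

    #include <cassert>

    struct Entry {
      unsigned PhysReg = 0;
      unsigned RefCount = 0; // number of live cursors pointing here
    };

    class Cursor {
      Entry *E = nullptr;
      void setEntry(Entry *NewE) {
        if (E) --E->RefCount; // release the old entry
        E = NewE;
        if (E) ++E->RefCount; // pin the new one
      }
    public:
      explicit Cursor(Entry *Ent = nullptr) { setEntry(Ent); }
      Cursor(const Cursor &O) { setEntry(O.E); }
      Cursor &operator=(const Cursor &O) { setEntry(O.E); return *this; }
      ~Cursor() { setEntry(nullptr); }
    };

    // Recycling an entry for another physreg must respect the pin.
    Entry *recycle(Entry *Ent, unsigned NewReg) {
      assert(Ent->RefCount == 0 && "entry still referenced by a cursor");
      Ent->PhysReg = NewReg;
      return Ent;
    }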
-
Jakob Stoklund Olesen authored
The cache entry referenced by the best split candidate could become clobbered by an unsuccessful candidate. The correct fix here is to use reference counts on the cache entries. Coming up. llvm-svn: 135113
-