- Sep 15, 2011
-
-
Jakob Stoklund Olesen authored
Blocks with multiple PHI successors only need to go on the worklist once. Use a SmallPtrSet to track the live-out blocks that have already been handled. This is a lot faster than the two live range checks we would otherwise do. Also stop recomputing hasPHIKill flags. Like RenumberValues(), it is conservatively correct to leave them in, and they are not used for anything important. llvm-svn: 139792
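A minimal C++ sketch of the dedup idea above, using a stand-in Block type and std::unordered_set in place of LLVM's MachineBasicBlock and SmallPtrSet:

```cpp
#include <unordered_set>
#include <vector>

struct Block; // stand-in for a machine basic block

void processLiveOutBlocks(const std::vector<Block *> &PhiPreds) {
  std::unordered_set<Block *> Handled; // the real code uses llvm::SmallPtrSet
  std::vector<Block *> Worklist;

  for (Block *B : PhiPreds)
    if (Handled.insert(B).second)      // insert() reports whether B was new
      Worklist.push_back(B);           // so each block is enqueued at most once

  while (!Worklist.empty()) {
    Block *B = Worklist.back();
    Worklist.pop_back();
    // ... extend the live range out of B; this now runs once per block ...
    (void)B;
  }
}
```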
-
Jakob Stoklund Olesen authored
It does, after all. RemoveCopyByCommutingDef rewrites the uses of one particular value number in A. It doesn't know how to rewrite phi uses, so there can't be any. llvm-svn: 139787
-
Jakob Stoklund Olesen authored
There is only one legitimate use remaining, in addIntervalsForSpills(). All other calls to hasPHIKill() are only used to update PHIKill flags. The addIntervalsForSpills() function is part of the old spilling framework, only used by linearscan. llvm-svn: 139783
-
Jakob Stoklund Olesen authored
Instead, let HasOtherReachingDefs() test for defs in B that overlap any phi-defs in A as well. This test is slightly different, but almost identical. A perfectly precise test would only check those phi-defs in A that are reachable from AValNo. llvm-svn: 139782
-
Jakob Stoklund Olesen authored
The source live range is recomputed using shrinkToUses() which does handle phis correctly. The hasPHIKill() condition was relevant in the old days when ReMaterializeTrivialDef() tried to recompute the live range itself. The shrinkToUses() function will mark the original def as dead when no more uses and phi kills remain. It is then removed by runOnMachineFunction(). llvm-svn: 139781
-
Jakob Stoklund Olesen authored
It is conservatively correct to keep the hasPHIKill flags, even after deleting PHI-defs. The calculation can be very expensive after taildup has created a quadratic number of indirectbr edges in the CFG, and the hasPHIKill flag isn't used for anything after RenumberValues(). llvm-svn: 139780
-
Andrew Trick authored
An improper SlotIndex->VNInfo lookup was leading to unsafe copy removal. Fixes PR10920, a 401.bzip2 miscompile with no IV rewrite. llvm-svn: 139765
-
Devang Patel authored
llvm-svn: 139751
-
- Sep 14, 2011
-
-
Jakob Stoklund Olesen authored
The LRE_DidCloneVirtReg callback may be called with virtual registers that RAGreedy doesn't even know about yet. In that case, there are no data structures to update. llvm-svn: 139702
-
Jakob Stoklund Olesen authored
When a back-copy is hoisted to the nearest common dominator, keep looking up the dominator tree for a less loopy dominator, and place the back-copy there instead. Don't do this when a single existing back-copy dominates all the others. Assume the client knows what he is doing, and keep the dominating back-copy. This prevents us from hoisting back-copies into loops in most cases. If a value is defined in a loop with multiple exits, we may still hoist back-copies into that loop. That is the speed/size tradeoff. llvm-svn: 139698
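A hypothetical sketch of the hoisting heuristic described above: walk up from the nearest common dominator and keep the least loopy block seen. The Block fields here are illustrative stand-ins for LLVM's dominator-tree and loop-info queries, not the actual SplitKit code:

```cpp
struct Block {
  Block *IDom = nullptr;  // immediate dominator; null at the function entry
  unsigned LoopDepth = 0; // nesting depth reported by loop analysis
};

// Starting at the nearest common dominator of the back-copies, keep walking up
// the dominator tree and remember the shallowest (least loopy) block seen.
// A real implementation would also stop once it passes the value's definition.
Block *chooseHoistTarget(Block *NearestCommonDom) {
  Block *Best = NearestCommonDom;
  for (Block *B = NearestCommonDom; B; B = B->IDom)
    if (B->LoopDepth < Best->LoopDepth)
      Best = B; // a dominator outside an enclosing loop beats a loopier one
  return Best;
}
```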
-
Nadav Rotem authored
llvm-svn: 139692
-
Jakob Stoklund Olesen authored
When a ParentVNI maps to multiple defs in a new interval, its live range may still be derived directly from RegAssign by transferValues(). On the other hand, when instructions have been rematerialized or hoisted, it may be necessary to completely recompute live ranges using LiveRangeCalc::extend() to all uses. Use a bit in the value map to indicate that a live range must be recomputed. Rename markComplexMapped() to forceRecompute(). This fixes some live range verification errors when -split-spill-mode=size hoists back-copies by recomputing source ranges when RegAssign kills can't be moved. llvm-svn: 139660
-
Jakob Stoklund Olesen authored
Whenever the complement interval is defined by multiple copies of the same value, hoist those back-copies to the nearest common dominator. This ensures that at most one copy is inserted per value in the complement interval, and no phi-defs are needed. llvm-svn: 139651
-
Eli Friedman authored
llvm-svn: 139649
-
- Sep 13, 2011
-
-
Eli Friedman authored
llvm-svn: 139641
-
Nadav Rotem authored
llvm-svn: 139633
-
Nadav Rotem authored
xor/and/or (For example SSE2). llvm-svn: 139623
-
Devang Patel authored
llvm-svn: 139616
-
Jakob Stoklund Olesen authored
This function is used to flag values where the complement interval may overlap other intervals. Call it from overlapIntv, and use the flag to fully recompute those live ranges in transferValues(). llvm-svn: 139612
-
Jakob Stoklund Olesen authored
llvm-svn: 139608
-
Jakob Stoklund Olesen authored
Three out of four clients prefer this interface which is consistent with extendIntervalEndTo() and LiveRangeCalc::extend(). llvm-svn: 139604
-
Jakob Stoklund Olesen authored
The complement interval may overlap the other intervals created, so use a separate LiveRangeCalc instance to compute its live range. A LiveRangeCalc instance can only be shared among non-overlapping intervals. llvm-svn: 139603
-
NAKAMURA Takumi authored
llvm-svn: 139581
-
Jakob Stoklund Olesen authored
SplitKit will soon need two copies of these data structures, and the algorithms will also be useful when LiveIntervalAnalysis becomes independent of LiveVariables. llvm-svn: 139572
-
- Sep 12, 2011
-
-
Bill Wendling authored
Splitting a landing pad takes considerable care because of PHIs and other nasties. The problem is that the jump table needs to jump to the landing pad block. However, the landing pad block can be jumped to only by an invoke instruction. So we clone the landingpad instruction into its own basic block and have the invoke jump to it. The landingpad instruction's basic block's successor is now the target for the jump table. But because of PHI nodes, we need to create another basic block for the jump table to jump to. This is definitely a hack, because the values for the PHI nodes may not be defined on the edge from the jump table. But that's okay, because the jump table is simply a construct to mimic what is happening in the CFG. So the values are mysteriously there, even though there is no value for the PHI from the jump table's edge (hence calling this a hack). llvm-svn: 139545
-
Jakob Stoklund Olesen authored
It has been enabled by default for a while, it was only there to allow performance comparisons. llvm-svn: 139501
-
Jakob Stoklund Olesen authored
SplitKit always computes a complement live range to cover the places where the original live range was live, but no explicit region has been allocated. Currently, the complement live range is created to be as small as possible - it never overlaps any of the regions. This minimizes register pressure, but if the complement is going to be spilled anyway, that is not very important. The spiller will eliminate redundant spills, and hoist others by making the spill slot live range overlap some of the regions created by splitting. Stack slots are cheap. This patch adds the interface to enable spill modes in SplitKit. In spill mode, SplitKit will assume that the complement is going to spill, so it will allow it to overlap regions in order to avoid back-copies. By doing some of the spiller's work early, the complement live range becomes simpler. In some cases, it can become much simpler because no extra PHI-defs are required. This will speed up both splitting and spilling. This is only the interface to enable spill modes, no implementation yet. llvm-svn: 139500
-
Jakob Stoklund Olesen authored
llvm-svn: 139498
-
- Sep 10, 2011
-
-
Richard Trieu authored
assert("error"); to: assert(0 && "error"); llvm-svn: 139449
-
Chris Lattner authored
llvm-svn: 139419
-
- Sep 09, 2011
-
-
Eli Friedman authored
Make the SelectionDAG verify that all the operands of BUILD_VECTOR have the same type. Teach DAGCombiner::visitINSERT_VECTOR_ELT not to make invalid BUILD_VECTORs. Fixes PR10897. llvm-svn: 139407
-
Jakob Stoklund Olesen authored
In some cases such as interpreters using indirectbr, the CFG can be very complicated, and live range splitting may be forced to insert a large number of phi-defs. When that happens, traceSiblingValue can spend a lot of time zipping around in the CFG looking for defs and reloads. This patch causes more information to be cached in SibValues, and the cached values are used to terminate searches early. This speeds up spilling by 20x in one interpreter test case. For more typical code, this is just a 10% speedup of spilling. The previous version had bugs that caused miscompilations. They have been fixed. llvm-svn: 139378
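An illustrative C++ sketch, not the LLVM code, of the caching idea: remember the answer computed for a block so a later backward trace that reaches it can stop immediately instead of re-walking the CFG:

```cpp
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct Block {
  std::vector<Block *> Preds; // CFG predecessors
  bool HasDef = false;        // does this block define or reload the value?
};

// Backward search from Start for a def. The answer for each starting block is
// remembered in Cache, so a later trace that reaches such a block stops there
// instead of re-walking a large, indirectbr-heavy CFG.
bool traceValue(Block *Start, std::unordered_map<Block *, bool> &Cache) {
  if (auto It = Cache.find(Start); It != Cache.end())
    return It->second;                 // answered by an earlier trace

  std::unordered_set<Block *> Visited; // per-query guard against CFG cycles
  std::vector<Block *> Worklist{Start};
  bool Found = false;
  while (!Worklist.empty() && !Found) {
    Block *B = Worklist.back();
    Worklist.pop_back();
    if (!Visited.insert(B).second)
      continue;                        // already examined in this trace
    if (auto It = Cache.find(B); It != Cache.end()) {
      Found = It->second;              // cached block: stop searching here
      continue;
    }
    if (B->HasDef) {
      Found = true;
      continue;
    }
    for (Block *P : B->Preds)
      Worklist.push_back(P);
  }
  return Cache[Start] = Found;         // remember the result for later traces
}
```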
-
Devang Patel authored
Directly point debug info to the stack slot of the argument, instead of trying to keep track of the vreg into which the argument is copied. The LiveDebugVariables pass can keep track of the variable's ranges. llvm-svn: 139330
-
- Sep 07, 2011
-
-
Jakob Stoklund Olesen authored
It broke the self host and clang-x86_64-darwin10-RA. llvm-svn: 139259
-
Jakob Stoklund Olesen authored
In some cases such as interpreters using indirectbr, the CFG can be very complicated, and live range splitting may be forced to insert a large number of phi-defs. When that happens, traceSiblingValue can spend a lot of time zipping around in the CFG looking for defs and reloads. This patch causes more information to be cached in SibValues, and the cached values are used to terminate searches early. This speeds up spilling by 20x in one interpreter test case. For more typical code, this is just a 10% speedup of spilling. llvm-svn: 139247
-
James Molloy authored
Refactor instprinter and mcdisassembler to take a SubtargetInfo. Add -mattr= handling to llvm-mc. Reviewed by Owen Anderson. llvm-svn: 139237
-
Eli Friedman authored
Relax the MemOperands on atomics a bit. Fixes -verify-machineinstrs failures for atomic load/store on ARM. (The fix for the related failures on x86 is going to be nastier because we actually need Acquire memoperands attached to the atomic load instrs, etc.) llvm-svn: 139221
-
Devang Patel authored
While sinking machine instructions, also sink matching DBG_VALUEs; otherwise the live debug variable pass will drop DBG_VALUEs on the floor. llvm-svn: 139208
-
- Sep 06, 2011
-
-
Duncan Sands authored
Add codegen support for vector selects (in the IR, a select with a vector condition); such selects become VSELECT codegen nodes. This patch also removes VSETCC codegen nodes, unifying them with SETCC nodes (codegen was actually often using SETCC for vector SETCC already). This ensures that various DAG combiner optimizations kick in for vector comparisons. Passes dragonegg bootstrap with no testsuite regressions (nightly testsuite as well as "make check-all"). Patch mostly by Nadav Rotem. llvm-svn: 139159
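For reference, a scalar C++ model of the element-wise semantics a VSELECT node represents (each lane of the condition vector picks the corresponding lane of one input); this is only an illustration, not how the DAG node is implemented:

```cpp
#include <array>

// Lane-wise select: Result[i] comes from A[i] where Cond[i] is true, else B[i].
std::array<int, 4> vectorSelect(const std::array<bool, 4> &Cond,
                                const std::array<int, 4> &A,
                                const std::array<int, 4> &B) {
  std::array<int, 4> Result{};
  for (int i = 0; i < 4; ++i)
    Result[i] = Cond[i] ? A[i] : B[i];
  return Result;
}
```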
-
Duncan Sands authored
Split the init.trampoline intrinsic, which currently combines the init.trampoline and adjust.trampoline intrinsics, into two intrinsics like in GCC. While having one combined intrinsic is tempting, it is not natural because typically the trampoline initialization needs to be done in one function, and the result of adjust.trampoline is needed in a different (nested) function. To get around this, llvm-gcc hacks the nested function lowering code to insert an additional parent variable holding the adjust.trampoline result that can be accessed from the child function. Dragonegg doesn't have the luxury of tweaking GCC code, so it stored the result of adjust.trampoline in the memory GCC set aside for the trampoline itself (this is always available in the child function), and set up some new memory (using an alloca) to hold the trampoline. Unfortunately this breaks Go, which allocates trampoline memory on the heap and wants to use it even after the parent has exited (!). Rather than doing even more hacks to get Go working, it seemed best to just use two intrinsics like in GCC. Patch mostly by Sanjoy Das. llvm-svn: 139140
-