- Jun 21, 2012
-
-
Jakob Stoklund Olesen authored
I don't think anyone has been using this functionality for a while, and it is getting in the way of refactoring now. llvm-svn: 158876
-
Jakob Stoklund Olesen authored
Register allocators depend on it being permanently enabled now. llvm-svn: 158873
-
Jakob Stoklund Olesen authored
Deterministically enumerate the virtual registers instead. llvm-svn: 158872
-
Jakob Stoklund Olesen authored
They are living in LiveRegMatrix now. llvm-svn: 158868
-
Jakob Stoklund Olesen authored
Stop depending on the LiveIntervalUnions in RegAllocBase; they are about to be removed. The changes mostly consist of replacing register alias iterators with regunit iterators and querying LiveRegMatrix instead of RegAllocBase. InterferenceCache is converted to work with per-regunit LiveIntervalUnions, and it checks fixed regunit interference separately, using the fixed live intervals provided by LiveIntervalAnalysis. The local splitting helper calcGapWeights() also considers fixed regunit interference, which is now kept on the side. llvm-svn: 158867
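A minimal standalone sketch of the per-regunit interference query described above. The types and the regunit table here are simplified stand-ins, not the real LiveRegMatrix, InterferenceCache, or LiveIntervalAnalysis classes.

```cpp
// Per-regunit interference check: walk a candidate physreg's regunits instead
// of its aliases, and test the per-unit union of virtual ranges and the fixed
// ranges separately. All names are illustrative.
#include <map>
#include <vector>

struct Segment { unsigned Start, End; };            // half-open [Start, End)

static bool overlaps(const Segment &A, const Segment &B) {
  return A.Start < B.End && B.Start < A.End;
}

struct RegUnitInfo {
  std::vector<Segment> VirtUnion;  // union of assigned virtual live ranges
  std::vector<Segment> Fixed;      // fixed interference (reserved/physreg defs)
};

// Hypothetical table: physical register -> the register units it covers.
using RegUnitMap = std::map<unsigned, std::vector<unsigned>>;

bool hasInterference(const std::vector<Segment> &VirtReg, unsigned PhysReg,
                     const RegUnitMap &Units,
                     const std::vector<RegUnitInfo> &UnitInfo) {
  for (unsigned Unit : Units.at(PhysReg)) {
    const RegUnitInfo &UI = UnitInfo[Unit];
    for (const Segment &S : VirtReg) {
      for (const Segment &U : UI.VirtUnion)
        if (overlaps(S, U)) return true;
      for (const Segment &F : UI.Fixed)
        if (overlaps(S, F)) return true;
    }
  }
  return false;
}
```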
-
Jakob Stoklund Olesen authored
Stop using the LiveIntervalUnions provided by RegAllocBase; they will be removed soon. llvm-svn: 158866
-
Jakob Stoklund Olesen authored
Soon we won't need to compute live intervals for physical registers. llvm-svn: 158865
-
Jakob Stoklund Olesen authored
Filter out physreg candidates with regunit interference. Also compute regmask interference more efficiently. llvm-svn: 158864
-
- Jun 20, 2012
-
-
Jakob Stoklund Olesen authored
That iterator walks a DenseMap keyed by pointers, so the iteration order is nondeterministic. I would like to replace the DenseMap with an IndexedMap, which doesn't allow iteration. llvm-svn: 158856
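A small illustrative sketch of the problem and the fix, using standard containers rather than LLVM's DenseMap/IndexedMap: iterating a hash map keyed by pointers visits entries in an order that depends on allocation addresses, while an index-keyed table can be walked 0..N-1 deterministically.

```cpp
#include <cstdio>
#include <unordered_map>
#include <vector>

struct LiveIntervalStub { unsigned VirtRegNo; };

int main() {
  std::vector<LiveIntervalStub> Intervals = {{0}, {1}, {2}, {3}};

  // Pointer-keyed map: iteration order tracks pointer values, so it can change
  // from run to run and between hosts.
  std::unordered_map<const LiveIntervalStub *, int> ByPointer;
  for (const auto &LI : Intervals)
    ByPointer[&LI] = 0;
  for (const auto &Entry : ByPointer)
    std::printf("nondeterministic visit: vreg %u\n", Entry.first->VirtRegNo);

  // Index-keyed table ("IndexedMap"-style): enumerate virtual registers by
  // number instead, which is fully deterministic.
  std::vector<int> ByIndex(Intervals.size(), 0);
  for (unsigned I = 0; I != ByIndex.size(); ++I)
    std::printf("deterministic visit: vreg %u\n", I);
  return 0;
}
```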
-
Pete Cooper authored
Add users of a MERGE_VALUE node to the worklist so they are processed again when the node is removed. Sorry, no test case; found it by inspection of the code. llvm-svn: 158839
-
Jakob Stoklund Olesen authored
Regunit live ranges are computed on demand, so when mi-sched calls handleMove, some regunits may not have live ranges yet. That makes updating them easier: Just skip the non-existing ranges. They will be computed correctly from the rescheduled machine code when they are needed. llvm-svn: 158831
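A minimal sketch of the lazy-update pattern described above: regunit ranges are created on demand, so the move-update simply skips units that have no range yet. The container and update routine are stand-ins, not the real LiveIntervals API.

```cpp
#include <map>
#include <vector>

struct LiveRangeStub { /* segments elided */ };

// unit -> range, filled in lazily as ranges are first requested
using RegUnitRanges = std::map<unsigned, LiveRangeStub>;

void updateRegUnitsForMove(const std::vector<unsigned> &UnitsOfMovedReg,
                           RegUnitRanges &Ranges) {
  for (unsigned Unit : UnitsOfMovedReg) {
    auto It = Ranges.find(Unit);
    if (It == Ranges.end())
      continue; // No range computed yet; it will be built correctly later.
    // ... otherwise repair It->second's segments around the moved instruction ...
  }
}
```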
-
Jakob Stoklund Olesen authored
llvm-svn: 158827
-
Hal Finkel authored
The test case for this will come with the PPC indexed preinc loads commit. llvm-svn: 158822
-
Aaron Ballman authored
llvm-svn: 158820
-
Chandler Carruth authored
I'll admit I'm not entirely satisfied with this change, but it seemed the cleanest option. Other suggestions are quite welcome. The issue is that the traits specializations have static methods which return the typedef'ed PHI_iterator type. In both the IR and MI layers this is typedef'ed to a custom iterator class defined in an anonymous namespace, giving the types and the functions returning them internal linkage. However, the traits specialization is defined in the 'llvm' namespace (where it has to be, since the specialized template lives there) and is in turn used in the templated implementation of the SSAUpdater. This led to the linkage conflict that Clang now warns about. The simplest solution to me was just to define the PHI_iterator as a nested class inside the trait specialization. That way it still doesn't get scoped widely, it can't be accidentally reused somewhere, etc. This is a little gross just because nested class definitions are a little gross, but the alternatives seem more ad-hoc. llvm-svn: 158799
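A standalone sketch of the pattern described above, with invented names rather than the real SSAUpdaterImpl traits: nesting the iterator inside the specialization keeps it narrowly scoped without giving it internal linkage.

```cpp
#include <vector>

namespace lib {

template <typename BlockT> struct UpdaterTraits; // primary template, in 'lib'

struct IRBlock { std::vector<IRBlock *> Preds; };

template <> struct UpdaterTraits<IRBlock> {
  // Before: PHI_iterator lived in an anonymous namespace in a .cpp file,
  // giving it internal linkage even though templates in 'lib' returned it.
  // After: it is a nested class of the specialization itself.
  class PHI_iterator {
    std::vector<IRBlock *>::iterator I;
  public:
    explicit PHI_iterator(std::vector<IRBlock *>::iterator It) : I(It) {}
    PHI_iterator &operator++() { ++I; return *this; }
    IRBlock *operator*() const { return *I; }
    bool operator!=(const PHI_iterator &O) const { return I != O.I; }
  };

  static PHI_iterator begin(IRBlock &B) { return PHI_iterator(B.Preds.begin()); }
  static PHI_iterator end(IRBlock &B) { return PHI_iterator(B.Preds.end()); }
};

// A templated algorithm in the same namespace can now use the iterator type
// without mixing internal- and external-linkage entities.
template <typename BlockT> unsigned countPreds(BlockT &B) {
  unsigned N = 0;
  for (auto I = UpdaterTraits<BlockT>::begin(B),
            E = UpdaterTraits<BlockT>::end(B);
       I != E; ++I)
    ++N;
  return N;
}

} // namespace lib
```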
-
Andrew Trick authored
-stable-loops enables a new algorithm for generating the Loop forest. It differs from the original algorithm in a few respects:
- Not determined by use-list order.
- Initially guarantees RPO order of blocks and subloops.
- Linear in the number of CFG edges.
- Nonrecursive.
I didn't want to change the LoopInfo API yet, so the block lists are still inclusive. This seems strange to me, and it means that building LoopInfo is not strictly linear, but it may not be a problem in practice. At least the block lists start out in RPO order now. In the future we may add an attribute or wrapper analysis that allows other passes to assume RPO order. The primary motivation of this work was not to optimize LoopInfo, but to allow reproducing performance issues by decomposing the compilation stages. I'm often unable to do this with the current LoopInfo, because the loop tree order determines Loop pass order. Serializing the IR tends to invert the order, which reverses the optimization order. This makes it nearly impossible to debug interdependent loop optimizations such as LSR. I also believe this will provide more stable performance results across time. llvm-svn: 158790
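A small, nonrecursive reverse-post-order (RPO) walk over a toy CFG, sketched only to illustrate the ordering property mentioned above. This is not the actual -stable-loops algorithm, just the RPO building block it relies on, with stand-in types.

```cpp
#include <algorithm>
#include <set>
#include <utility>
#include <vector>

struct Block { std::vector<Block *> Succs; };

std::vector<Block *> reversePostOrder(Block *Entry) {
  std::vector<Block *> PostOrder;
  std::vector<std::pair<Block *, unsigned>> Stack; // (block, next successor index)
  std::set<Block *> Visited;

  if (Visited.insert(Entry).second)
    Stack.push_back({Entry, 0});
  while (!Stack.empty()) {
    auto &Top = Stack.back();
    if (Top.second < Top.first->Succs.size()) {
      Block *S = Top.first->Succs[Top.second++];
      if (Visited.insert(S).second)  // descend into unvisited successors
        Stack.push_back({S, 0});
    } else {
      PostOrder.push_back(Top.first); // all successors done: emit in post-order
      Stack.pop_back();
    }
  }
  std::reverse(PostOrder.begin(), PostOrder.end()); // post-order -> RPO
  return PostOrder;
}
```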
-
Andrew Trick authored
The implementation only needs to be included from LoopInfo.cpp and MachineLoopInfo.cpp; clients of the interface should include only the interface. This makes the interface more readable and speeds up rebuilds after the implementation is modified. llvm-svn: 158787
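A sketch of the interface/implementation header split described above, using invented file and class names rather than the real LoopInfo headers. The file boundaries are shown as comments so the snippet compiles as a single translation unit.

```cpp
// ---- LoopForest.h (the interface; this is all clients include) ----
template <typename BlockT> class LoopForestBase {
public:
  void analyze(BlockT *Entry); // declared here, defined in the -Impl header
};

// ---- LoopForestImpl.h (the heavy templated implementation) ----
template <typename BlockT>
void LoopForestBase<BlockT>::analyze(BlockT *Entry) {
  (void)Entry; // ... the expensive templated implementation lives here ...
}

// ---- LoopForest.cpp (one of the few TUs that include the -Impl header) ----
// #include "LoopForest.h"
// #include "LoopForestImpl.h"
struct IRBasicBlock {};
template class LoopForestBase<IRBasicBlock>; // explicit instantiation

// ---- Client.cpp ----
// #include "LoopForest.h"   // clients never pay for LoopForestImpl.h
```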
-
Jakob Stoklund Olesen authored
When LiveIntervals is tracking fixed interference in regunits, make sure to update those intervals as well. Currently guarded by -live-regunits. llvm-svn: 158766
-
Chad Rosier authored
llvm-svn: 158762
-
Chad Rosier authored
ensureAlignment() in MachineFunction). Also, drop setMaxAlignment() in favor of this new function. This creates a main entry point to setting MaxAlignment, which will be helpful for future work. No functionality change intended. llvm-svn: 158758
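A hedged sketch of an "ensure"-style alignment setter: it only ever raises the recorded maximum, so scattered callers cannot accidentally lower it. This models the idea, not the exact MachineFunction implementation.

```cpp
#include <algorithm>
#include <cassert>

class FunctionInfoStub {
  unsigned MaxAlignment = 1; // in bytes, power of two

public:
  // Central entry point: raise MaxAlignment if A is larger, never lower it.
  void ensureAlignment(unsigned A) {
    assert(A != 0 && (A & (A - 1)) == 0 && "alignment must be a power of two");
    MaxAlignment = std::max(MaxAlignment, A);
  }

  unsigned getMaxAlignment() const { return MaxAlignment; }
};
```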
-
Lang Hames authored
This patch adds DAG combines to form FMAs from pairs of FADD + FMUL or FSUB + FMUL. The combines are performed when: (a) either the AllowExcessFPPrecision option (-enable-excess-fp-precision for llc) or the UnsafeFPMath option (-enable-unsafe-fp-math) is set, and (b) TargetLoweringInfo::isFMAFasterThanMulAndAdd(VT) is true for the type of the FADD/FSUB, and (c) the FMUL has only one user (the FADD/FSUB). If your target has fast FMA instructions, you can make use of these combines by overriding TargetLoweringInfo::isFMAFasterThanMulAndAdd(VT) to return true for types supported by your FMA instruction and adding patterns to match ISD::FMA to your FMA instructions. llvm-svn: 158757
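A toy sketch of the FADD case on a tiny expression model: fadd(fmul(a, b), c) becomes fma(a, b, c) only when relaxed FP is allowed, the target reports fast FMA, and the fmul has a single use. The node and target types are invented for illustration; none of this is the real SelectionDAG API.

```cpp
#include <memory>

enum class Op { Const, FMul, FAdd, FMA };

struct Node {
  Op Kind;
  std::shared_ptr<Node> A, B, C; // operands (C used only by FMA)
  unsigned NumUses = 0;
};

struct Target {
  bool UnsafeFPMath = false;
  bool ExcessFPPrecision = false;
  bool fmaFasterThanMulAndAdd() const { return true; } // per-type in reality
};

// Returns a rewritten node if the combine fires, or the original otherwise.
std::shared_ptr<Node> combineFAdd(std::shared_ptr<Node> N, const Target &T) {
  if (N->Kind != Op::FAdd)
    return N;
  bool RelaxedFP = T.UnsafeFPMath || T.ExcessFPPrecision;      // condition (a)
  if (!RelaxedFP || !T.fmaFasterThanMulAndAdd())               // condition (b)
    return N;
  std::shared_ptr<Node> Mul =
      (N->A && N->A->Kind == Op::FMul) ? N->A
      : (N->B && N->B->Kind == Op::FMul) ? N->B : nullptr;
  if (!Mul || Mul->NumUses != 1)                               // condition (c)
    return N;
  std::shared_ptr<Node> Addend = (Mul == N->A) ? N->B : N->A;
  auto FMA = std::make_shared<Node>();
  FMA->Kind = Op::FMA;
  FMA->A = Mul->A;
  FMA->B = Mul->B;
  FMA->C = Addend;
  return FMA;
}
```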
-
Jakob Stoklund Olesen authored
llvm-svn: 158755
-
- Jun 19, 2012
-
-
Jakob Stoklund Olesen authored
The PPC::EXTSW instruction preserves the low 32 bits of its input, just like some of the x86 instructions. Use it to reduce register pressure when the low 32 bits have multiple uses. This requires a small change to PeepholeOptimizer since EXTSW takes a 64-bit input register. This is related to PR5997. llvm-svn: 158743
-
Jakob Stoklund Olesen authored
No functional change. llvm-svn: 158742
-
Rafael Espindola authored
TargetLoweringObjectFileELF. Use this to support it on X86. Unlike ARM, on X86 it is not easy to find out if .init_array should be used or not, so the decision is made via TargetOptions and defaults to off. Add a command line option to llc that enables it. llvm-svn: 158692
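A minimal sketch of the decision described above: the section used for static constructors is chosen from an option that defaults to off and is flipped by the llc flag mentioned above. The option struct here is illustrative, not the real TargetOptions.

```cpp
#include <string>

struct ToolOptionsStub {
  bool UseInitArray = false; // defaults to off; enabled by a command-line flag
};

std::string staticCtorSection(const ToolOptionsStub &Opts) {
  return Opts.UseInitArray ? ".init_array" : ".ctors";
}
```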
-
- Jun 18, 2012
-
-
Hal Finkel authored
This patch changes the type used to hold the FU bitset from unsigned to uint64_t. This will be needed for some upcoming PowerPC itineraries. llvm-svn: 158679
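A tiny sketch of why the wider type matters: a 32-bit unsigned bitset can only name 32 functional units, while a uint64_t bitset allows up to 64, which some itineraries need.

```cpp
#include <cstdint>

using FuncUnits = uint64_t; // was 'unsigned', i.e. at most 32 units

constexpr FuncUnits unitBit(unsigned UnitIdx) {
  return FuncUnits(1) << UnitIdx; // valid for UnitIdx in [0, 64)
}

static_assert(unitBit(40) != 0, "units beyond index 31 are now representable");
```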
-
- Jun 16, 2012
-
-
Benjamin Kramer authored
llvm-svn: 158608
-
Jakob Stoklund Olesen authored
We now have a proper machine code verifier pass between register allocation and rewriting. llvm-svn: 158577
-
Jakob Stoklund Olesen authored
llvm-svn: 158575
-
Jakob Stoklund Olesen authored
Calling checkRegMaskInterference(VirtReg) checks if VirtReg crosses any regmask operands, regardless of the registers they clobber. llvm-svn: 158563
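A standalone sketch of the "no register argument" form of the query described above: does the virtual register's live range cross any regmask slot at all, regardless of which physical registers that mask clobbers? The containers are stand-ins, not the real LiveIntervals data structures.

```cpp
#include <algorithm>
#include <vector>

struct Segment { unsigned Start, End; };            // half-open [Start, End)

bool crossesAnyRegMask(const std::vector<Segment> &VirtRegLiveRange,
                       const std::vector<unsigned> &RegMaskSlots) {
  for (const Segment &S : VirtRegLiveRange) {
    // RegMaskSlots is kept sorted; find the first slot >= S.Start.
    auto It = std::lower_bound(RegMaskSlots.begin(), RegMaskSlots.end(), S.Start);
    if (It != RegMaskSlots.end() && *It < S.End)
      return true; // some regmask operand falls inside a live segment
  }
  return false;
}
```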
-
- Jun 15, 2012
-
-
Bill Wendling authored
llvm-svn: 158535
-
Jakob Stoklund Olesen authored
We only do very limited physreg coalescing now, but we still merge virtual registers into reserved registers. llvm-svn: 158526
-
- Jun 14, 2012
-
-
Akira Hatanaka authored
the last instruction of a basic block. llvm-svn: 158468
-
Lang Hames authored
llvm-svn: 158467
-
Andrew Trick authored
llvm-svn: 158461
-
- Jun 13, 2012
-
-
Andrew Trick authored
For store->load dependencies that may alias, we should always use TrueMemOrderLatency, which may eventually become a subtarget hook. In effect, we should guarantee at least TrueMemOrderLatency on at least one DAG path from a store to a may-alias load. This should fix the standard mode as well as -enable-aa-sched-mi. llvm-svn: 158380
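A sketch of the guarantee described above on a toy dependence list: every store -> may-alias load edge carries at least TrueMemOrderLatency. The types are stand-ins, not the scheduling DAG builder's real interface.

```cpp
#include <algorithm>
#include <vector>

struct Dep { unsigned SrcStore, DstLoad, Latency; };

constexpr unsigned TrueMemOrderLatency = 2; // could become a subtarget hook

void addStoreToLoadDep(std::vector<Dep> &Deps, unsigned StoreIdx,
                       unsigned LoadIdx, bool MayAlias, unsigned BaseLatency) {
  if (!MayAlias)
    return; // provably disjoint accesses need no memory-order edge here
  Deps.push_back({StoreIdx, LoadIdx,
                  std::max(BaseLatency, TrueMemOrderLatency)});
}
```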
-
Andrew Trick authored
llvm-svn: 158379
-
- Jun 12, 2012
-
-
Andrew Trick authored
llvm-svn: 158340
-
Andrew Trick authored
llvm-svn: 158339
-
- Jun 09, 2012
-
-
Benjamin Kramer authored
llvm-svn: 158265
-