- Mar 26, 2013
-
Jakob Stoklund Olesen authored
All Intel CPUs since Yonah look a lot alike, at least at the granularity of the scheduling models. We can add more accurate models for processors that aren't Sandy Bridge if required. Haswell will probably need its own. The Atom processor and anything based on NetBurst are completely different. So are the non-Intel chips. llvm-svn: 178080
-
David Blaikie authored
This will be used to factor out some uses of magic number operand offsets inside Clang where these fields were updated in an effort to resolve forward declarations/circular references. llvm-svn: 178078
-
Hal Finkel authored
As suggested by Bill Schmidt (in reviewing r178067), use the real register number bit lengths (which is self-documenting, and prevents using illegal numbers), and set only the relevant bits in HWEncoding (which defaults to 0). No functionality change intended. llvm-svn: 178077
-
Hal Finkel authored
As pointed out by Richard Sandiford, my recent updates to the register scavenger broke targets that use custom spilling (because the new code assumed that if there were no valid spill slots, then spilling would be impossible). I don't have a test case, but it should be possible to create one for Thumb 1, Mips 16, etc. llvm-svn: 178073
-
Hal Finkel authored
As pointed out by Jakob, we don't need to maintain a separate register-numbering table. Instead we should let TableGen generate the table for us from the information (already present) in PPCRegisterInfo.td. TRI->getEncodingValue is now used to access register-encoding values. No functionality change intended. llvm-svn: 178067
-
NAKAMURA Takumi authored
llvm-svn: 178065
-
Hal Finkel authored
Now that the register scavenger can support multiple spill slots, and PEI can use virtual-register-based scavenging for multiple simultaneous registers, we can use a virtual register for the transfer register in the CR spilling code. This should eliminate the last place (outside of the prologue/epilogue) where we depend on the unconditional availability of the r0 register. We will soon be able to allocate it (in a somewhat restricted sense) as a GPR. llvm-svn: 178060
-
Hal Finkel authored
PPC's use of PEI's virtual-register-based scavenging functionality had redefined the virtual registers (it was non-SSA). Now that PEI supports dealing with instructions with multiple virtual registers, this can be cleaned up to use multiple virtual registers and keep SSA form. No functionality change intended. llvm-svn: 178059
-
Hal Finkel authored
The previous algorithm could not deal properly with scavenging multiple virtual registers because it kept only one live virtual -> physical mapping (and iterated through operands in order). Now we don't maintain a current mapping, but rather use replaceRegWith to completely remove the virtual register as soon as the mapping is established. In order to allow the register scavenger to return a physical register killed by an instruction for definition by that same instruction, we now call RS->forward(I) prior to eliminating virtual registers defined in I. This requires a minor update to forward to ignore virtual registers. These new features will be tested in forthcoming commits. llvm-svn: 178058
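A minimal sketch of the rewriting loop described above, with simplified control flow and illustrative variable names (not the actual PrologEpilogInserter code): the scavenger is advanced past the instruction first, and replaceRegWith erases each virtual register everywhere at once instead of carrying a live virtual -> physical mapping.

```cpp
// Illustrative sketch only; assumes MBB, RS, MRI are in scope as in PEI.
for (MachineBasicBlock::iterator I = MBB.begin(), E = MBB.end(); I != E; ++I) {
  // Advance first: a physical register killed by I may then be scavenged
  // and handed back for a definition in the same instruction.
  RS->forward(I);
  for (unsigned Op = 0, NumOps = I->getNumOperands(); Op != NumOps; ++Op) {
    MachineOperand &MO = I->getOperand(Op);
    if (!MO.isReg() || !TargetRegisterInfo::isVirtualRegister(MO.getReg()))
      continue;
    unsigned VReg = MO.getReg();
    unsigned PReg = RS->scavengeRegister(MRI.getRegClass(VReg), I, /*SPAdj=*/0);
    // Rewrite every occurrence of VReg immediately; no per-register live
    // mapping needs to be maintained across operands or instructions.
    MRI.replaceRegWith(VReg, PReg);
  }
}
```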
-
Jakob Stoklund Olesen authored
Now all x86 instructions that have itinerary classes also have SchedRW lists. This is required before the new scheduling models can be used. There are still unannotated instructions remaining, but they don't have itinerary classes either. llvm-svn: 178051
-
Jakob Stoklund Olesen authored
This only covers the instructions that were given itinerary classes for the Atom model. llvm-svn: 178050
-
Jakob Stoklund Olesen authored
This could definitely be more granular. I am not sure if it makes a difference. llvm-svn: 178049
-
Jakob Stoklund Olesen authored
llvm-svn: 178048
-
Arnold Schwaighofer authored
This is a compile time optimization. Before the patch we would do two traversals on each call to aliasGEP - one with a set size parameter and one with UnknownSize. We can do better by first checking the result of the alias query with UnknownSize. Only if this one returns MayAlias do we query a second time using size and type. This recovers an approximately 7% compile time regression on spec/ammp. radar://12349960 llvm-svn: 178045
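A hedged sketch of the reordered query, simplified to the core idea (the real aliasGEP in BasicAliasAnalysis handles more cases and uses different surrounding logic): the cheap UnknownSize query runs first, and the size- and type-aware query only runs when the first answer is MayAlias.

```cpp
// Sketch only: the UnknownSize query comes first and usually decides.
AliasAnalysis::AliasResult BaseAlias =
    aliasCheck(UnderlyingV1, AliasAnalysis::UnknownSize, 0,
               UnderlyingV2, AliasAnalysis::UnknownSize, 0);
if (BaseAlias != AliasAnalysis::MayAlias)
  return BaseAlias;  // Decided cheaply; the second traversal never runs.
// Only the ambiguous case pays for the more precise sized/typed query.
return aliasCheck(UnderlyingV1, V1Size, V1TBAAInfo,
                  UnderlyingV2, V2Size, V2TBAAInfo);
```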
-
Michael Liao authored
- Add 'PRFCHW' feature defined in AVX2 ISA extension llvm-svn: 178040
-
Jyotsna Verma authored
llvm-svn: 178032
-
Ulrich Weigand authored
The OptimizeIntToFloatBitCast converts shift-truncate sequences into extractelement operations. The computation of the element index to be used in the resulting operation is currently only correct for little-endian targets. This commit fixes the element index computation to be correct for big-endian targets as well. If the target byte order is unknown, the optimization cannot be performed at all. llvm-svn: 178031
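A small worked sketch of the corrected index computation (the helper name and parameters are illustrative, not the InstCombine code): on a little-endian target the element index is simply the shift amount divided by the element width; a big-endian target needs the index mirrored. For example, extracting bits 32..63 of an i64 viewed as <2 x i32> is element 1 on little-endian but element 0 on big-endian.

```cpp
// Illustrative helper: which vector element a (trunc (lshr X, ShiftAmt))
// reads when X is reinterpreted as NumElts elements of EltWidth bits each.
unsigned computeExtractIndex(unsigned ShiftAmt, unsigned EltWidth,
                             unsigned NumElts, bool IsBigEndian) {
  unsigned Elt = ShiftAmt / EltWidth; // correct as-is for little-endian
  if (IsBigEndian)
    Elt = NumElts - 1 - Elt;          // element 0 holds the high-order bits
  return Elt;
}
```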
-
Jyotsna Verma authored
llvm-svn: 178030
-
Arnold Schwaighofer authored
This reverts commit r177968. It is causing failures in a local build bot. "fatal error: error in backend: Expected a variant SchedClass" Original commit message: Move the CortexA9 resources into the CortexA9 SchedModel namespace. Define resource mappings under the CortexA9 SchedModel. Define resources and mappings for the SwiftModel. llvm-svn: 178028
-
Benjamin Kramer authored
llvm-svn: 178025
-
Christian Konig authored
Not only fold immediates, but avoid unnecessary copies as well. Signed-off-by: Christian König <christian.koenig@amd.com> llvm-svn: 178024
-
Christian Konig authored
Prevent loading M0 multiple times. Signed-off-by: Christian König <christian.koenig@amd.com> llvm-svn: 178023
-
Christian Konig authored
Just define the address as unknown instead of VReg_32. Signed-off-by: Christian König <christian.koenig@amd.com> llvm-svn: 178022
-
Christian Konig authored
Signed-off-by: Christian König <christian.koenig@amd.com> llvm-svn: 178021
-
Christian Konig authored
They read from constant register space anyway. v2: fix lit tests Signed-off-by: Christian König <christian.koenig@amd.com> llvm-svn: 178020
-
Christian Konig authored
Just enable WQM when we see an LDS interpolation instruction. Signed-off-by: Christian König <christian.koenig@amd.com> llvm-svn: 178019
-
Christian Konig authored
Restore the EXEC mask early, otherwise a copy might end up not being executed. Candidate for the mesa stable branch. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Michel Dänzer <michel.daenzer@amd.com> Tested-by: Michel Dänzer <michel.daenzer@amd.com> llvm-svn: 178018
-
Joe Abbey authored
If PC or SP is the destination, the disassembler erroneously rejected the instruction as an invalid encoding, even though the manual says both are fine. This patch addresses the failure to decode encoding T4 of LDR (A8.8.62), a post-indexed load where the offset 0xc is applied to SP after the load occurs. llvm-svn: 178017
-
Alexey Samsonov authored
[ASan] Change the ABI of __asan_before_dynamic_init function: now it takes pointer to private string with module name. This string serves as a unique module ID in ASan runtime. LLVM part llvm-svn: 178013
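A hedged sketch of the shape of the new interface and how instrumented module constructors conceptually use it (the wrapper function and file-name literal are purely illustrative; see the compiler-rt ASan runtime for the authoritative declarations):

```cpp
// Declarations matching the commit description: the runtime hook now takes
// a pointer to a private per-module string, which doubles as the module ID.
extern "C" void __asan_before_dynamic_init(const char *module_name);
extern "C" void __asan_after_dynamic_init();

static void run_dynamic_initializer() {
  __asan_before_dynamic_init("my_translation_unit.cpp"); // illustrative name
  // ... original dynamic initializer body ...
  __asan_after_dynamic_init();
}
```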
-
Ulrich Weigand authored
There remain a number of patterns that cannot (and should not) be handled by the asm parser, in particular all the Pseudo patterns. This commit marks those patterns as isCodeGenOnly. No change in generated code. llvm-svn: 178008
-
Ulrich Weigand authored
MCTargetDesc/PPCMCCodeEmitter.cpp currently has code like: if (isSVR4ABI() && is64BitMode()) Fixups.push_back(MCFixup::Create(0, MO.getExpr(), (MCFixupKind)PPC::fixup_ppc_toc16)); else Fixups.push_back(MCFixup::Create(0, MO.getExpr(), (MCFixupKind)PPC::fixup_ppc_lo16)); This is a problem for the asm parser, since it requires knowledge of the ABI / 64-bit mode to be set up. However, more fundamentally, at this point we shouldn't make such distinctions anyway; in an assembler file, it always ought to be possible to e.g. generate TOC relocations even when the main ABI is one that doesn't use TOC. Fortunately, this is actually completely unnecessary; that code was added to decide whether to generate TOC relocations, but that information is in fact already encoded in the VariantKind of the underlying symbol. This commit therefore merges those fixup types into one, and then decides which relocation to use based on the VariantKind. No changes in generated code. llvm-svn: 178007
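A hedged sketch of the direction of the change (the enum and relocation names are illustrative, not necessarily the exact identifiers in the tree): the emitter uses a single fixup kind, and the relocation is later chosen from the symbol reference's VariantKind rather than from isSVR4ABI()/is64BitMode().

```cpp
// Sketch only: relocation selection keyed off the expression, not the ABI.
static unsigned selectPPCReloc(const MCSymbolRefExpr &SRE) {
  switch (SRE.getKind()) {
  case MCSymbolRefExpr::VK_PPC_TOC_ENTRY:  // written as "sym@toc" in assembly
    return ELF::R_PPC64_TOC16;
  default:                                 // plain low-16 reference
    return ELF::R_PPC_ADDR16_LO;
  }
}
```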
-
Ulrich Weigand authored
As part of the sequence generated to implement long double -> int conversions, we need to perform an FADD in round-to-zero mode. This is problematic since the FPSCR is not at all modeled at the SelectionDAG level, and thus there is a risk of getting floating point instructions generated out of sequence with the instructions to modify FPSCR. The current code handles this by somewhat "special" patterns that in part have dummy operands, and/or duplicate existing instructions, making them awkward to handle in the asm parser. This commit changes this by leaving the "FADD in round-to-zero mode" as an atomic operation on the SelectionDAG level, and only splitting it up into real instructions at the MI level (via custom inserter). Since at *this* level the FPSCR *is* modeled (via the "RM" hard register), much of the "special" stuff can just go away, and the resulting patterns can be used by the asm parser. No significant change in generated code expected. llvm-svn: 178006
-
Ulrich Weigand authored
The LDrs pattern is a duplicate of LD, except that it accepts memory addresses where the displacement is a symbolLo64. An operand type "memrs" is defined for just that purpose. However, this wouldn't be necessary if the default "memrix" operand type were to simply accept 64-bit symbolic addresses directly. The only problem with that is that it uses "symbolLo", which is hardcoded to 32-bit. To fix this, this commit changes "memri" and "memrix" to use new operand types for the memory displacement, which allow iPTR instead of i32. This will also make address parsing easier to implement in the asm parser. No change in generated code. llvm-svn: 178005
-
Ulrich Weigand authored
The ADDI/ADDI8 patterns are currently duplicated into ADDIL/ADDI8L, which describe the same instruction, except that they accept a symbolLo[64] operand instead of a s16imm[64] operand. This duplication confuses the asm parser, and it is actually not needed, since symbolLo[64] already accepts immediate operands anyway. So this commit removes the duplicate patterns. No change in generated code. llvm-svn: 178004
-
Ulrich Weigand authored
This commit changes the ISEL patterns to use a CCBITRC operand instead of a "pred" operand. This matches the actual instruction text more directly, and simplifies use of ISEL with the asm parser. In addition, this change allows some simplification of handling the "pred" operand, as this is now only used by BCC. No change in generated code. llvm-svn: 178003
-
Ulrich Weigand authored
The BLR pattern cannot be recognized by the asm parser in its current form. This complexity is due to an apparent attempt to enable conditional BLR variants. However, none of those can ever be generated by current code; the pattern is only ever created using the default "pred" operand. To simplify the pattern and allow it to be recognized by the parser, this commit removes those attempts at conditional BLR support. When we later come back to actually add real conditional BLR, this should probably be done via a fully generic conditional branch pattern. No change in generated code. llvm-svn: 178002
-
Ulrich Weigand authored
In PPCInstr64Bit.td, some branch patterns appear in a different sequence than the corresponding 32-bit patterns in PPCInstrInfo.td. To simplify future changes that affect both files, this commit moves those patterns to rearrange them into a similar sequence. No effect on generated code. llvm-svn: 178001
-
Christian Konig authored
Use a MapVector on types where the iteration order matters. Otherwise we don't always produce deterministic output. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Michel Dänzer <michel.daenzer@amd.com> llvm-svn: 177999
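A minimal illustration (not the SI backend code itself) of why the container choice matters: llvm::MapVector iterates in insertion order, so any output produced while walking the map is stable across runs, unlike iteration over a hash-ordered DenseMap.

```cpp
#include "llvm/ADT/MapVector.h"
#include <cstdio>

int main() {
  llvm::MapVector<unsigned, const char *> M;
  M[3] = "first";
  M[1] = "second";
  // Always prints 3 before 1: iteration follows insertion order, not the
  // hash order a DenseMap would impose.
  for (llvm::MapVector<unsigned, const char *>::iterator I = M.begin(),
                                                          E = M.end();
       I != E; ++I)
    std::printf("%u -> %s\n", I->first, I->second);
  return 0;
}
```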
-
Andrew Trick authored
Fixes PR15570: SEGV: SCEV back-edge info invalid after dead code removal. Indvars creates a SCEV expression for the loop's back edge taken count, then determines that the comparison is always true and removes it. When loop-unroll asks for the expression, it contains a NULL SCEVUnknown (as a CallbackVH). forgetMemoizedResults should invalidate the loop back edges expression. llvm-svn: 177986
-
Chandler Carruth authored
its own library. These functions are bridging between the bitcode reader and the ll parser which are in different libraries. Previously we didn't have any good library to do this, and instead played fast and loose with a "header only" set of interfaces in the Support library. This really doesn't work well as evidenced by the recent attempt to add timing logic to these routines. As part of this, make them normal functions rather than weird inline functions, and sink the implementation into the library. Also clean up the header to be nice and minimal. This requires updating lots of build system dependencies to specify that the IRReader library is needed, and several source files to not implicitly rely upon the header file to transitively include all manner of other headers. If you are using IRReader.h, this commit will break you (the header moved) and you'll need to also update your library usage to include 'irreader'. I will commit the corresponding change to Clang momentarily. llvm-svn: 177971
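A hedged usage sketch of what callers look like after the split (the helper function is illustrative): the header now lives under llvm/IRReader/, and the 'irreader' component has to be linked in, e.g. via `llvm-config --libs irreader` or the equivalent build-system dependency.

```cpp
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"   // previously llvm/Support/IRReader.h
#include "llvm/Support/SourceMgr.h"

// ParseIRFile dispatches to the bitcode reader or the .ll parser as needed.
llvm::Module *loadModule(const char *Path, llvm::LLVMContext &Context) {
  llvm::SMDiagnostic Err;
  return llvm::ParseIRFile(Path, Err, Context);
}
```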
-