- Oct 26, 2016
-
Dean Michael Berris authored
Usage: llvm-xray extract <object file> [-o <filename or '-'>]

The tool gets the XRay instrumentation map from an object file and turns it into YAML. We first support ELF64 sleds on x86_64 binaries, with provision for supporting other platforms and formats later. This is the first of a multi-part change to fully implement the `llvm-xray` tool. We also define a subcommand registration and dispatch mechanism to be used by further subcommand implementations for llvm-xray.

Differential Revision: https://reviews.llvm.org/D21987
llvm-svn: 285165
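For context, here is a minimal sketch of how a subcommand-based tool can register and dispatch subcommands with LLVM's command-line library. The log entry does not show llvm-xray's actual registration code, so the use of cl::SubCommand and all names below are illustrative assumptions:

    // subcommand_sketch.cpp -- hypothetical illustration, not llvm-xray's code.
    #include "llvm/Support/CommandLine.h"
    using namespace llvm;

    // Each subcommand owns its own set of options.
    static cl::SubCommand Extract("extract", "Extract instrumentation maps");
    static cl::opt<std::string> ExtractInput(cl::Positional,
                                             cl::desc("<object file>"),
                                             cl::Required, cl::sub(Extract));
    static cl::opt<std::string> ExtractOutput("o",
                                              cl::desc("output file or '-'"),
                                              cl::init("-"), cl::sub(Extract));

    int main(int argc, char **argv) {
      cl::ParseCommandLineOptions(argc, argv, "XRay tools\n");
      if (Extract) {
        // Dispatch: read the instrumentation map from ExtractInput and
        // write it out as YAML to ExtractOutput.
      }
      return 0;
    }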
-
Dean Michael Berris authored
Reverts r285155 -- misconfigured tests. llvm-svn: 285156
-
Dean Michael Berris authored
Usage: llvm-xray extract <object file> [-o <filename or '-'>]

The tool gets the XRay instrumentation map from an object file and turns it into YAML. We first support ELF64 sleds on x86_64 binaries, with provision for supporting other platforms and formats later. This is the first of a multi-part change to fully implement the `llvm-xray` tool. We also define a subcommand registration and dispatch mechanism to be used by further subcommand implementations for llvm-xray.

llvm-svn: 285155
-
Rui Ueyama authored
Not all echo commands support "-e". On the other hand, the printf command is specified by POSIX, so it's more portable than "echo -e". llvm-svn: 285151
-
James Y Knight authored
On SparcV8, it was previously the case that a variable-sized alloca might overlap by 4-bytes the last fixed stack variable, effectively because 92 (the number of bytes reserved for the register spill area) != 96 (the offset added to SP for where to start a DYNAMIC_STACKALLOC). It's not as simple as changing 96 to 92, because variables that should be 8-byte aligned would then be misaligned. For now, simply increase the allocation size by 8 bytes for each dynamic allocation -- wastes space, but at least doesn't overlap. As the large comment says, doing this more efficiently will require larger changes in llvm. Also adds some test cases showing that we continue to not support dynamic stack allocation and over-alignment in the same function. llvm-svn: 285131
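As a hedged illustration of the scenario being fixed (the function and values are invented), a function mixing an 8-byte-aligned fixed local with a variable-sized allocation is the kind of code where the dynamic area could previously overlap the last fixed stack slot:

    // Hypothetical reproducer sketch: fixed_slot lives in the fixed stack
    // area; buf is carved out at runtime via DYNAMIC_STACKALLOC. Writes to
    // buf must not clobber fixed_slot.
    #include <alloca.h>
    #include <string.h>

    long long f(unsigned n) {
      long long fixed_slot = 42;      // 8-byte-aligned fixed stack object
      char *buf = (char *)alloca(n);  // variable-sized stack allocation
      memset(buf, 0, n);
      return fixed_slot + buf[0];
    }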
-
Bob Haarman authored
Summary: Fixes PR28281. MSVC lists indirect virtual base classes in the field list of a class, using LF_IVBCLASS records. This change makes LLVM emit such records when processing DW_TAG_inheritance tags with the DIFlagVirtual and (newly introduced) DIFlagIndirect tags.

Reviewers: rnk, ruiu, zturner
Differential Revision: https://reviews.llvm.org/D25578
llvm-svn: 285130
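For reference, a minimal C++ example of an indirect virtual base: A is a direct virtual base of B but only an indirect virtual base of C, so MSVC describes A with an LF_IVBCLASS record in C's field list.

    // A is a *direct* virtual base of B (LF_VBCLASS in B's field list),
    // but an *indirect* virtual base of C (LF_IVBCLASS in C's field list).
    struct A { int a; };
    struct B : virtual A { int b; };
    struct C : B { int c; };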
-
Simon Pilgrim authored
[DAGCombiner] Enable (urem x, (shl pow2, y)) -> (and x, (add (shl pow2, y), -1)) combine for splatted vectors llvm-svn: 285129
-
- Oct 25, 2016
-
Rong Xu authored
Summary: Select instruction annotation in IR PGO uses the edge count to infer the branch count. It's currently placed in setInstrumentedCounts(), where not all the BB counts have been computed. This leads to wrong branch weights. Move the annotation after all BB counts are populated.

Reviewers: davidxl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D25961
llvm-svn: 285128
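For context, a conditional expression like the one below (a hypothetical example) commonly lowers to an IR select instruction, which instrumented PGO then annotates with branch weights inferred from edge counts:

    // PGO infers how often the condition was true vs. false and attaches
    // branch weights to the select this ternary typically lowers to.
    int clamp_to_limit(int x, int limit) {
      return x > limit ? limit : x;  // -> select i1 %cmp, i32 %limit, i32 %x
    }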
-
Simon Pilgrim authored
llvm-svn: 285124
-
Simon Pilgrim authored
SelectionDAG::SignBitIsZero (via SelectionDAG::computeKnownBits) has supported vectors since rL280927 llvm-svn: 285123
-
Simon Pilgrim authored
llvm-svn: 285121
-
Simon Pilgrim authored
llvm-svn: 285119
-
Simon Pilgrim authored
SelectionDAG::SignBitIsZero (via SelectionDAG::computeKnownBits) has supported vectors since rL280927 llvm-svn: 285118
-
Guozhi Wei authored
The original patch of the A->B->A BitCast optimization was reverted by r274094 because it may cause an infinite loop inside the compiler (https://llvm.org/bugs/show_bug.cgi?id=27996). The problem is with the following code:

    xB  = load (type B)
    xA  = load (type A)
    yA  = (A)xB          ; B -> A
    zAn = PHI[yA, xA]    ; PHI
    zBn = (B)zAn         ; A -> B
    store zAn
    store zBn

optimizeBitCastFromPhi generates "zBn = (B)zAn" (an A -> B cast) and expects it to be combined with the following store instruction into another "store zAn". Unfortunately, before combineStoreToValueType is called on the store instruction, optimizeBitCastFromPhi is called on the new BitCast again, and this pattern repeats indefinitely. optimizeBitCastFromPhi only generates BitCasts for load/store instructions; only a BitCast before a store can cause the re-execution of optimizeBitCastFromPhi, and such a BitCast can easily be handled by InstCombineLoadStoreAlloca.cpp. So the solution to the problem is: if all users of a CI are store instructions, we should not run optimizeBitCastFromPhi on it. Then optimizeBitCastFromPhi will not be called on the new BitCast instructions.

Differential Revision: https://reviews.llvm.org/D23896
llvm-svn: 285116
-
Simon Pilgrim authored
llvm-svn: 285112
-
Robert Lougher authored
This reverts r285093, as it caused unexpected buildbot failures on clang-ppc64le-linux, clang-ppc64be-linux, clang-ppc64be-linux-multistage and clang-ppc64be-linux-lnt. Failing test ubsan/TestCases/TypeCheck/vptr.cpp. llvm-svn: 285110
-
Sanjay Patel authored
Fixes the FIXMEs in D25952 and rL285075. Patch by bryant! Differential Revision: https://reviews.llvm.org/D25955 llvm-svn: 285108
-
Evandro Menezes authored
Add an option to allow easier experimentation by target maintainers with the minimum number of entries to create jump tables. Also clarify the name of the other existing option governing the creation of jump tables. Differential revision: https://reviews.llvm.org/D25883 llvm-svn: 285104
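As an illustration, a dense switch like the hypothetical one below is a jump-table candidate; the new option lets a target maintainer experiment with how many such cases are required before a table is emitted (the log entry does not spell out the option names, so none are assumed here):

    // With only a few cases, a compare chain may beat a jump table on
    // modern cores; the minimum-entries threshold decides which lowering
    // this switch receives.
    int dispatch(int op) {
      switch (op) {
      case 0: return 10;
      case 1: return 20;
      case 2: return 30;
      case 3: return 40;
      default: return -1;
      }
    }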
-
Evandro Menezes authored
When there's a tie between partitionings of jump tables, consider also partitionings that result in no jump tables, but in one or a few individual cases instead. The motivation is that many contemporary processors typically perform case switches fairly quickly. Differential revision: https://reviews.llvm.org/D25212 llvm-svn: 285099
-
Matthew Simpson authored
When we predicate an instruction (div, rem, store) we place the instruction in its own basic block within the vectorized loop. If a predicated instruction has scalar operands, it's possible to recursively sink these scalar expressions into the predicated block so that they might avoid execution. This patch sinks as much scalar computation as possible into predicated blocks. We previously were able to sink such operands only if they were extractelement instructions. Differential Revision: https://reviews.llvm.org/D25632 llvm-svn: 285097
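A hedged C++ example of the kind of source loop this affects: the division must be predicated, and the scalar expressions feeding it can now be sunk into the predicated block so they only execute when the predicate holds:

    // The division below is predicated to avoid dividing by zero on
    // masked-off lanes; scalar operands feeding it can be sunk into the
    // predicated block instead of running unconditionally.
    void scale(unsigned *a, const unsigned *b, int n) {
      for (int i = 0; i < n; ++i)
        if (b[i] != 0)
          a[i] = a[i] / b[i];
    }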
-
Sanjay Patel authored
Patch by bryant! Differential Revision: https://reviews.llvm.org/D25952 llvm-svn: 285095
-
Michael Ilseman authored
This adds a new function to DebugInfo.cpp that takes an llvm::Module as input and removes all debug info metadata that is not directly needed for line tables, thus effectively stripping all type and variable information from the module. The primary motivation for this feature was the bitcode workflow (cf. http://lists.llvm.org/pipermail/llvm-dev/2016-June/100643.html for more background). This is not wired up yet, but will be in subsequent patches. For testing, the new functionality is exposed to opt with a -strip-nonlinetable-debuginfo option. The secondary use case (and one that works right now!) is as a reduction pass in bugpoint. I added two new bugpoint options (-disable-strip-debuginfo and -disable-strip-debug-types) to control the new features. By default it will first attempt to remove all debug information, then only the type info, and then proceed to hack at any remaining MDNodes. Thanks to Adrian Prantl for stewarding this patch! llvm-svn: 285094
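A sketch of driving the new functionality programmatically; the entry-point name below is an assumption inferred from the -strip-nonlinetable-debuginfo opt option named above, not something this log entry confirms:

    // strip_sketch.cpp -- hypothetical driver; stripNonLineTableDebugInfo
    // is an assumed name for the new DebugInfo.cpp entry point.
    #include "llvm/IR/DebugInfo.h"
    #include "llvm/IR/Module.h"

    void reduceDebugInfo(llvm::Module &M) {
      // Drops type and variable debug info metadata, keeping only what
      // line tables need.
      llvm::stripNonLineTableDebugInfo(M);
    }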
-
Robert Lougher authored
The branch folding pass tail-merges blocks into a common tail. However, the tail retains the debug information from one of the original inputs to the merge (chosen randomly). This is a problem for sample-based PGO, as hits on the common tail will be attributed to whichever block was chosen, irrespective of which path was actually taken to the common tail. This patch fixes the issue by nulling the debug location for the common tail. Differential Revision: https://reviews.llvm.org/D25742 llvm-svn: 285093
-
Vedant Kumar authored
Differential Revision: https://reviews.llvm.org/D25086 llvm-svn: 285088
-
Dan Gohman authored
call_indirect, grow_memory, and current_memory now have immediate operands in the 0xd binary encoding. llvm-svn: 285085
-
Andrea Di Biagio authored
When indvars widened an induction variable, the debug location for the loop increment computation was incorrectly set equal to the debug loc of the loop latch terminator. This patch fixes the issue by propagating the correct location from the original loop increment instruction to the new widened increment. Differential Revision: https://reviews.llvm.org/D25872 llvm-svn: 285083
-
Geoff Berry authored
Now that MemorySSA keeps track of whether MemoryUses are optimized, use getClobberingMemoryAccess() to check MemoryUse memory dependencies since it should no longer be so expensive. This is a follow-up change to https://reviews.llvm.org/D25881 llvm-svn: 285080
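For context, a hedged sketch of what querying a MemoryUse's dependency through the walker looks like; the include path follows the current tree layout and the helper name is invented, so treat the details as assumptions:

    // memssa_sketch.cpp -- illustrative fragment, not the actual change.
    #include "llvm/Analysis/MemorySSA.h"
    using namespace llvm;

    // Ask MemorySSA for the true (optimized) clobber of a MemoryUse rather
    // than settling for its defining access.
    MemoryAccess *clobberOf(MemorySSA &MSSA, MemoryUse *MU) {
      return MSSA.getWalker()->getClobberingMemoryAccess(MU);
    }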
-
Ulrich Weigand authored
It is not safe to use LOAD ON CONDITION to implement access to a memory location marked "volatile", since the architecture leaves it unspecified whether or not an access happens if the condition is false. The current code already appears to care about that:

    def LOC : CondUnaryRSY<"loc", 0xEBF2, nonvolatile_load, GR32, 4>;

Unfortunately, that "nonvolatile_load" operator is simply ignored by the CondUnaryRSY class, and there was no test to catch it.

llvm-svn: 285077
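A minimal example (hypothetical names) of why this matters: with a conditional read of a volatile location, the program must perform the access exactly when the condition is true, which LOAD ON CONDITION does not guarantee:

    // The read of *p is an observable side effect: it must happen if and
    // only if c is true, so it cannot be implemented with a conditional
    // load whose behavior on a false condition is unspecified.
    int read_if(bool c, volatile int *p, int fallback) {
      return c ? *p : fallback;
    }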
-
Sanjay Patel authored
llvm-svn: 285075
-
Simon Pilgrim authored
We already have (V)PMOVZX* combining support, this is the beginning of handling (V)PMOVSX* similarly - other combines in combineVSZext can be generalized in future patches. This unearthed an interesting bug in that we were generating illegal build vectors on 32-bit targets - it was proving difficult to create a test for it from PMOVZX, but it fired immediately with PMOVSX. I've created a more general form of the existing getConstVector to handle these cases - ideally this should be handled in non-target-specific code but I couldn't find an equivalent. Differential Revision: https://reviews.llvm.org/D25874 llvm-svn: 285072
-
Sanjay Patel authored
Accidentally put in the hoped-for checks ahead of the transform! llvm-svn: 285070
-
Sanjay Patel authored
llvm-svn: 285069
-
Zvi Rackover authored
Summary: Do *not* perform combines such as:

    vector_shuffle<4,1,2,3>(build_vector(Ud, C0, C1, C2), scalar_to_vector(X))
      -> build_vector(X, C0, C1, C2)

Keeping the shuffle allows lowering the constant build_vector to a materialized constant vector (such as a vector load from the constant pool or some other idiom).

Reviewers: delena, igorb, spatel, mkuper, andreadb, RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D25524
llvm-svn: 285063
-
Craig Topper authored
[AVX-512] Add support for creating SIGN_EXTEND_VECTOR_INREG and ZERO_EXTEND_VECTOR_INREG for 512-bit vectors to support vpmovzxbq and vpmovsxbq.

Summary: The one tricky thing about this is that the sign/zero_extend_inreg uses v64i8 as an input type, which isn't legal without BWI support, though the vpmovsxbq and vpmovzxbq instructions themselves don't require BWI. To support this we need to add custom lowering for ZERO_EXTEND_VECTOR_INREG with v64i8 input. This can mostly reuse the existing sign extend code with a couple of checks for sign extend vs. zero extend added.

Reviewers: delena, RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D25594
llvm-svn: 285053
-
Sanjay Patel authored
llvm-svn: 285046
-
Sanjay Patel authored
llvm-svn: 285045
-
Vedant Kumar authored
When we load coverage data from multiple objects, we don't have a way to attribute a source object to a function record. Printing out the object filename next to the source filename is already not very useful: soon, it'll actually become misleading. Stop printing out the filename now. llvm-svn: 285043
-
Sanjay Patel authored
llvm-svn: 285036
-
- Oct 24, 2016
-
Eli Friedman authored
There are two fixes here: one, AnalyzeUsesOfPointer can't return false until it has checked all the uses of the pointer. Two, if a global uses another global, we have to assume the address of the first global escapes. Fixes https://llvm.org/bugs/show_bug.cgi?id=30707. Differential Revision: https://reviews.llvm.org/D25798 llvm-svn: 285034
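For reference, a minimal example (hypothetical names) of the second case: a global whose address is captured by another global, which GlobalsAA must therefore treat as escaping.

    // The address of 'target' escapes because it is stored in another
    // global, so alias analysis must be conservative about 'target'.
    int target;
    int *alias_of_target = &target;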
-
Simon Pilgrim authored
Use isConstOrConstSplat helper. Also use APInt instead of getZExtValue directly to avoid out of range issues. llvm-svn: 285033
-