- Feb 12, 2019
-
-
Max Kazantsev authored
llvm-svn: 353799
-
Eli Friedman authored
The code checked that the first root was an appropriate distance from the base value, but skipped checking the other roots. This could lead to rerolling a loop that can't be legally rerolled (at least, not without rewriting the loop in a non-trivial way). Differential Revision: https://reviews.llvm.org/D56812 llvm-svn: 353779
-
- Feb 11, 2019
-
-
Michael Kruse authored
Loop::setAlreadyUnrolled() and LoopVectorizeHints::setLoopAlreadyUnrolled() both add loop metadata that stops the same loop from being transformed multiple times. This patch merges both implementations. In doing so we fix 3 potential issues: * setLoopAlreadyUnrolled() kept the llvm.loop.vectorize/interleave.* metadata even though it will not be used anymore. This already caused problems such as http://llvm.org/PR40546. Change the behavior to that of setAlreadyUnrolled, which deletes this loop metadata. * setAlreadyUnrolled() used to create a new LoopID by calling MDNode::get with nullptr as the first operand, then replacing it with the returned reference using replaceOperandWith. It is possible that MDNode::get would instead return an existing node (due to de-duplication) that then gets modified. To avoid this, use a fresh TempMDNode that does not get uniqued with anything else before replacing it with replaceOperandWith. * LoopVectorizeHints::matchesHintMetadataName() only compares the suffix of the attribute to set the new value for. That is, when called with "enable", it would erase attributes such as "llvm.loop.unroll.enable", "llvm.loop.vectorize.enable" and "llvm.loop.distribute.enable" instead of only the one to replace. Fortunately, the function was only called with "isvectorized". Differential Revision: https://reviews.llvm.org/D57566 llvm-svn: 353738
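For reference, a minimal sketch of the TempMDNode pattern described above (illustrative only, not the merged implementation; the "llvm.loop.unroll.disable" operand is just an example property):

```cpp
#include "llvm/ADT/None.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"

using namespace llvm;

// Build a fresh LoopID whose first operand is a self-reference, without the
// risk that MDNode::get returns (and then mutates) a pre-existing uniqued node.
static MDNode *makeUnrollDisableLoopID(LLVMContext &Ctx) {
  // Temporary node used only as a placeholder for the self-reference.
  TempMDNode TempNode = MDNode::getTemporary(Ctx, None);

  SmallVector<Metadata *, 2> Args;
  Args.push_back(TempNode.get());
  Metadata *DisableUnroll = MDString::get(Ctx, "llvm.loop.unroll.disable");
  Args.push_back(MDNode::get(Ctx, DisableUnroll));

  MDNode *LoopID = MDNode::get(Ctx, Args);
  // Now point the first operand at the node itself.
  LoopID->replaceOperandWith(0, LoopID);
  return LoopID;
}
```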
-
Sanjay Patel authored
This bug seems to be harmless in release builds, but will cause an error in UBSAN builds or an assertion failure in debug builds. When it gets to this opcode comparison, it assumes both of the operands are BinaryOperators, but the prior m_LogicalShift will also match a ConstantExpr. The cast<BinaryOperator> will assert in a debug build, or reading an invalid value for BinaryOp from memory with ((BinaryOperator*)constantExpr)->getOpcode() will cause an error in a UBSAN build. The test I added will fail without this change in debug/UBSAN builds, but not in release. Patch by: @AndrewScheidecker (Andrew Scheidecker) Differential Revision: https://reviews.llvm.org/D58049 llvm-svn: 353736
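A minimal sketch of the kind of guard this implies (hypothetical helper, not the committed code): confirm the matched value really is a BinaryOperator before using cast<>, since a ConstantExpr shift also satisfies m_LogicalShift:

```cpp
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/PatternMatch.h"

using namespace llvm;
using namespace llvm::PatternMatch;

// Return the logical-shift opcode of V only when V is a genuine
// BinaryOperator; cast<BinaryOperator> on a ConstantExpr shift would be
// undefined behavior even though the matcher accepted it.
static bool getLogicalShiftOpcode(Value *V, unsigned &Opcode) {
  if (!match(V, m_LogicalShift(m_Value(), m_Value())))
    return false;
  auto *BO = dyn_cast<BinaryOperator>(V);
  if (!BO)
    return false; // e.g. a ConstantExpr shift: bail out instead of cast<>.
  Opcode = BO->getOpcode();
  return true;
}
```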
-
Alina Sbirlea authored
Summary: If there is no clobbering access for a store inside the loop, that store can only be hoisted if there are no interfering loads. A more general verification is introduced here: there must be no loads that are not optimized to an access outside the loop. Addresses PR40586. Reviewers: george.burgess.iv Subscribers: sanjoy, jlebar, Prazek, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D57967 llvm-svn: 353734
-
Chandler Carruth authored
llvm-svn: 353667
-
Chandler Carruth authored
new one. llvm-svn: 353665
-
Chandler Carruth authored
`CallBase`. Users have been updated. Out-of-tree usages can be updated the same way: pass `cast<CallBase>(CS.getInstruction())`. llvm-svn: 353661
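For out-of-tree code, the described update looks roughly like this (illustrative sketch; `takesCallBase` stands in for whichever API now takes a `CallBase`):

```cpp
#include "llvm/IR/CallSite.h"
#include "llvm/IR/InstrTypes.h"

using namespace llvm;

// An API that now takes CallBase & where it previously took a CallSite.
void takesCallBase(CallBase &CB);

void forwardCallSite(CallSite CS) {
  // Unwrap the CallSite and cast its underlying instruction to CallBase.
  takesCallBase(*cast<CallBase>(CS.getInstruction()));
}
```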
-
Chandler Carruth authored
`CallBase` class rather than `CallSite` wrappers. I pushed this change down through most of the statepoint infrastructure, completely removing the use of CallSite where I could reasonably do so. I ended up making a couple of cut-points: generic call handling (instcombine, TLI, SDAG). As soon as it hit truly generic handling with users outside the immediate code, I simply transitioned into or out of a `CallSite` to make this a reasonable sized chunk. Differential Revision: https://reviews.llvm.org/D56122 llvm-svn: 353660
-
- Feb 10, 2019
-
-
Fangrui Song authored
isInstructionTriviallyDead also performs the use_empty() check. llvm-svn: 353637
-
Craig Topper authored
llvm-svn: 353630
-
- Feb 09, 2019
-
-
Fangrui Song authored
cxxDtorIsEmpty checks callers recursively to determine if the __cxa_atexit-registered function is empty, and eliminates the __cxa_atexit call accordingly. This recursive check is unnecessary, as redundant instructions and function calls can be removed by early-cse and the inliner. In addition, cxxDtorIsEmpty does not mark visited functions, so it may visit a function an exponential number of times (multiplication principle). llvm-svn: 353603
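An illustrative (assumed) C++ source pattern that this optimization targets: the destructor registered through __cxa_atexit is empty, so the registration is dead and can be dropped, while any deeper simplification is left to early-cse and the inliner:

```cpp
// The front end emits a global initializer that registers ~Logger() with
// __cxa_atexit for the static object below. Because the destructor body is
// empty, that registration does nothing useful and can be removed.
struct Logger {
  ~Logger() {}   // empty, user-declared destructor
};

static Logger TheLogger;  // triggers a __cxa_atexit(&~Logger, ...) call

int main() { return 0; }
```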
-
Gabor Buella authored
In some specific cases of a bitcast A->B->A with intervening PHI nodes, the InstCombiner::optimizeBitCastFromPhi transformation creates extra PHI nodes that are merely copies of already created PHIs, in other words, they are redundant. These extra PHI nodes can lead to extra move instructions being generated after the DeSSA transformation. This happens when several conditions are met: - SROA kicks in and creates a new alloca; - there is a simple assignment L = R, which falls under the 'canonicalize loads' done by combineLoadToOperationType (this transformation is on by default). Exactly this transformation is the reason the bitcasts are generated; - the alloca is then used in an A->B->A + PHI chain; - the loop is unrolled. As a result, optimizeBitCastFromPhi creates as many PHI nodes for each new SROA alloca as the loop unrolling factor. All but one of these new PHI nodes are actually redundant and should not be created. Moreover, the idea of optimizeBitCastFromPhi is to get rid of the cast (when possible), but that doesn't happen under these conditions. The proposed fix is to do the cast replacement for the whole calculated/accumulated PHI closure, not only for the one cast that is the argument to optimizeBitCastFromPhi. This helps accomplish several things: 1) extra PHI nodes are avoided, since all casts that may trigger the optimizeBitCastFromPhi transformation are replaced, 2) bitcasts are replaced, and 3) more opportunities are created to remove dead code that appears after the replacement. A new test case shows that it's possible to get rid of all bitcasts completely and get quite a good code reduction. Author: Igor Tsimbalist <igor.v.tsimbalist@intel.com> Reviewed By: Carrot Differential Revision: https://reviews.llvm.org/D57053 llvm-svn: 353595
-
Sergey Dmitriev authored
Reviewers: vsk, fhahn, davidxl Reviewed By: vsk Subscribers: hiraditya, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D57957 llvm-svn: 353582
-
- Feb 08, 2019
-
-
Craig Topper authored
This patch accompanies the RFC posted here: http://lists.llvm.org/pipermail/llvm-dev/2018-October/127239.html This patch adds a new CallBr IR instruction to support asm-goto inline assembly, as supported by gcc and used by the Linux kernel. This instruction is both a call instruction and a terminator instruction with multiple successors. Only inline assembly usage is supported today. This also adds a new INLINEASM_BR opcode to SelectionDAG and MachineIR to represent an INLINEASM block that is also considered a terminator instruction. There will likely be more bug fixes and optimizations to follow this, but we felt it had reached a point where we would like to switch to an incremental development model. Patch by Craig Topper, Alexander Ivchenko, Mikhail Dvoretckii Differential Revision: https://reviews.llvm.org/D53765 llvm-svn: 353563
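For context, the gcc-style construct that CallBr models looks roughly like the following sketch (constraint and assembly details are illustrative, not taken from the commit):

```cpp
// 'asm goto' may fall through or transfer control to the C++ label 'failed'.
// In LLVM IR this now lowers to a 'callbr' instruction: a call that is also a
// terminator with multiple successors.
bool checked_op(int x) {
  asm goto("testl %0, %0\n\t"
           "jz %l1"                       // %l1 refers to the only goto label
           : /* asm goto permits no output operands */
           : "r"(x)
           : "cc"
           : failed);
  return true;
failed:
  return false;
}
```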
-
Vedant Kumar authored
When CodeExtractor saves the result of an InvokeInst at the first insertion point of the 'normal destination' basic block, this block can be omitted from the outlined region, so the store is placed outside of the function. The suggested solution is to process the saving of outputs after creating exit stubs for the new function; the stores are then placed in those blocks before the return. Patch by Sergei Kachkov! Fixes llvm.org/PR40455. Differential Revision: https://reviews.llvm.org/D57919 llvm-svn: 353562
-
Reid Kleckner authored
Summary: The motivating use case is eliminating duplicate profile data registered for the same inline function in two object files. Before this change, users would observe multiple symbol definition errors with VC link, but links with LLD would succeed. Users (Mozilla) have reported that PGO works well with clang-cl and LLD, but when using LLD without this static registration, we would get into a "relocation against a discarded section" situation. I'm not sure what happens in that situation, but I suspect that duplicate, unused profile information was retained. If so, this change will reduce the size of such binaries with LLD. Now, Windows uses static registration and is in line with all the other platforms. Reviewers: davidxl, wmi, inglorion, void, calixte Subscribers: mgorny, krytarowski, eraman, fedor.sergeev, hiraditya, #sanitizers, dmajor, llvm-commits Tags: #sanitizers, #llvm Differential Revision: https://reviews.llvm.org/D57929 llvm-svn: 353547
-
Teresa Johnson authored
Summary: ArgumentPromotion had code to specifically move the dbg metadata over to the new function, but other metadata such as the function_entry_count !prof metadata was not. Replace code that moved dbg metadata with a call to copyMetadata. The old metadata is automatically removed when the old Function is removed. Reviewers: davidxl Subscribers: llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D57846 llvm-svn: 353537
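The change described above amounts to roughly the following sketch (assuming OldF is the original function and NewF is its promoted clone):

```cpp
#include "llvm/IR/Function.h"

// Instead of hand-copying only the !dbg attachment, copy every metadata
// attachment from the old function to its clone; this also carries over the
// function_entry_count !prof metadata.
static void transferMetadata(llvm::Function *OldF, llvm::Function *NewF) {
  NewF->copyMetadata(OldF, /*Offset=*/0);
}
```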
-
Carlos Alberto Enciso authored
Check that when SimplifyCFG is flattening a 'br', all of the debug intrinsic instructions are removed, including any dbg.label referencing a label associated with the basic blocks being removed. Differential Revision: https://reviews.llvm.org/D57444 llvm-svn: 353511
-
Max Kazantsev authored
The `insert/deleteEdge` methods in DTU can make updates incorrectly in some cases (see https://bugs.llvm.org/show_bug.cgi?id=40528), and it is recommended to use the `applyUpdates` method instead when a mass CFG update is needed. Differential Revision: https://reviews.llvm.org/D57316 Reviewed By: kuhar llvm-svn: 353502
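A minimal sketch of the recommended batched usage (blocks A, B, C, D are placeholders):

```cpp
#include "llvm/Analysis/DomTreeUpdater.h"
#include "llvm/IR/Dominators.h"

using namespace llvm;

// Queue all CFG changes and hand them to the updater in one batch instead of
// calling insertEdge/deleteEdge once per edge.
void updateDomTree(DomTreeUpdater &DTU, BasicBlock *A, BasicBlock *B,
                   BasicBlock *C, BasicBlock *D) {
  DTU.applyUpdates({{DominatorTree::Insert, A, B},
                    {DominatorTree::Delete, C, D}});
}
```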
-
Sergey Dmitriev authored
Summary: Assumption cache's self-updating mechanism does not correctly handle the case when blocks are extracted from the function by the CodeExtractor. As a result function's assumption cache may have stale references to the llvm.assume calls that were moved to the outlined function. This patch fixes this problem by removing extracted llvm.assume calls from the function’s assumption cache. Reviewers: hfinkel, vsk, fhahn, davidxl, sanjoy Reviewed By: hfinkel, vsk Subscribers: llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D57215 llvm-svn: 353500
-
- Feb 07, 2019
-
-
Quentin Colombet authored
This commit teaches InstCombine how to replace an atomicrmw operation with a simple `load atomic`. For a given `atomicrmw <op>`, this is possible when: 1. The ordering of that operation is compatible with a load (i.e., anything that doesn't have release semantics). 2. <op> does not modify the value being stored. Differential Revision: https://reviews.llvm.org/D57854 llvm-svn: 353471
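A sketch of condition 2 (a hypothetical helper, not necessarily the committed code): the operation must leave the value in memory unchanged:

```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Does this atomicrmw leave the value in memory unchanged? If so (and its
// ordering is no stronger than a load allows), it can become a 'load atomic'.
static bool leavesValueUnchanged(AtomicRMWInst &RMWI) {
  auto *C = dyn_cast<ConstantInt>(RMWI.getValOperand());
  if (!C)
    return false;
  switch (RMWI.getOperation()) {
  case AtomicRMWInst::Add:   // x + 0
  case AtomicRMWInst::Sub:   // x - 0
  case AtomicRMWInst::Or:    // x | 0
  case AtomicRMWInst::Xor:   // x ^ 0
    return C->isZero();
  case AtomicRMWInst::And:   // x & -1
    return C->isMinusOne();
  default:
    return false;
  }
}
```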
-
Florian Hahn authored
llvm-svn: 353469
-
Sanjay Patel authored
This fixes a class of bugs introduced by D44367, which transforms various cases of icmp (bitcast ([su]itofp X)), Y to icmp X, Y. If the bitcast is between vector types with a different number of elements, the current code will produce bad IR along the lines of: icmp <N x i32> ..., <M x i32> <...>. This patch suppresses the transform if the bitcast changes the number of vector elements. Patch by: @AndrewScheidecker (Andrew Scheidecker) Differential Revision: https://reviews.llvm.org/D57871 llvm-svn: 353467
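A sketch of the suppression described above (assumed helper name, not the exact committed code):

```cpp
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Only allow folding icmp (bitcast ([su]itofp X)), Y into icmp X, Y when the
// bitcast does not change the number of vector elements; otherwise the fold
// would produce an icmp whose operands have mismatched vector widths.
static bool bitcastPreservesElementCount(BitCastInst &BC) {
  auto *SrcVTy = dyn_cast<VectorType>(BC.getSrcTy());
  auto *DstVTy = dyn_cast<VectorType>(BC.getDestTy());
  if (!SrcVTy || !DstVTy)
    return true; // not a vector-to-vector bitcast; no element-count mismatch
  return SrcVTy->getNumElements() == DstVTy->getNumElements();
}
```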
-
Sanjay Patel authored
llvm-svn: 353462
-
Florian Hahn authored
As discussed in D57382, interleaving should be avoided if computeMaxVF returns None, same as we currently do for vectorization. Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=6477 Reviewers: Ayal, dcaballe, hsaito, mkuper, rengolin Reviewed By: Ayal Differential Revision: https://reviews.llvm.org/D57837 llvm-svn: 353461
-
Reid Kleckner authored
llvm-svn: 353439
-
Teresa Johnson authored
Summary: When compiling with profile data, ensure the split cold function gets cold function_entry_count metadata (just use 0 since it should be cold). Otherwise with function sections it will not be placed in the unlikely text section with other cold code. Reviewers: vsk Subscribers: sebpop, hiraditya, davidxl, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D57900 llvm-svn: 353434
-
Sam Parker authored
Modify GenerateConstantOffsetsImpl to create offsets that can be used by indexed addressing modes. If formulae can be generated which result in the constant offset being the same size as the recurrence, we can generate a pre-indexed access. This allows the pointer to be updated via the single pre-indexed access so that (hopefully) no add/subs are required to update it for the next iteration. For small cores, this can significantly improve the performance of DSP-like loops. Differential Revision: https://reviews.llvm.org/D55373 llvm-svn: 353403
-
- Feb 06, 2019
-
-
Alina Sbirlea authored
Summary: Experimentally we found that promotion to scalars carries less benefits than sinking and hoisting in LICM. When using MemorySSA, we build an AliasSetTracker on demand in order to reuse the current infrastructure. We only build it if less than AccessCapForMSSAPromotion exist in the loop, a cap that is by default set to 250. This value ensures there are no runtime regressions, and there are small compile time gains for pathological cases. A much lower value (20) was found to yield a single regression in the llvm-test-suite and much higher benefits for compile times. Conservatively we set the current cap to a high value, but we will explore lowering it when MemorySSA is enabled by default. Reviewers: sanjoy, chandlerc Subscribers: nemanjai, jlebar, Prazek, george.burgess.iv, jfb, jsji, llvm-commits Differential Revision: https://reviews.llvm.org/D56625 llvm-svn: 353339
-
Sanjay Patel authored
We should canonicalize to one of these forms, and compare-with-zero could be more conducive to follow-on transforms. This also leads to generally better codegen as shown in PR40611: https://bugs.llvm.org/show_bug.cgi?id=40611 llvm-svn: 353313
-
Max Kazantsev authored
llvm-svn: 353290
-
Max Kazantsev authored
llvm-svn: 353277
-
Max Kazantsev authored
llvm-svn: 353276
-
Max Kazantsev authored
llvm-svn: 353275
-
Max Kazantsev authored
llvm-svn: 353274
-
Max Kazantsev authored
llvm-svn: 353273
-
Teresa Johnson authored
Summary: Follow up to D57082 which moved splitting earlier in the pipeline, in order to perform it before inlining. However, it was moved too early, before the IR is annotated with instrumented PGO data. This caused the splitting to incorrectly determine cold functions. Move it to just after PGO annotation (still before inlining), in both pass managers. Reviewers: vsk, hiraditya, sebpop Subscribers: mehdi_amini, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D57805 llvm-svn: 353270
-
Richard Trieu authored
DomTreeUpdater depends on headers from Analysis, but is in IR. This is a layering violation since Analysis depends on IR. Relocate this code from IR to Analysis to fix the layering violation. llvm-svn: 353265
-
Vedant Kumar authored
Resumes that are not reachable from a cleanup landing pad are considered to be unreachable. It’s not safe to split them out. rdar://47808235 llvm-svn: 353242
-