  1. Oct 05, 2012
    • Teach the new SROA a new trick. Now we zap any memcpy or memmoves which · e5b7a2cc
      Chandler Carruth authored
      are in fact identity operations. We detect these and kill their
      partitions so that even splitting is unaffected by them. This is
      particularly important because Clang relies on emitting identity memcpy
      operations for struct copies, and these fold away to constants very
      often after inlining.
      
      Fixes the last big performance FIXME I have on my plate.
      
      llvm-svn: 165285
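      For illustration, a minimal C++ sketch of the pattern this commit targets
      (hypothetical source, not from the commit): Clang lowers the struct
      assignment to a memcpy, and after inlining both operands can resolve to
      the same object, leaving exactly the identity copy that is now zapped.

        struct Point { double x, y, z; };

        // Clang emits a memcpy-style copy for this struct assignment.
        static inline void assign(Point &dst, const Point &src) { dst = src; }

        void example(Point &p) {
          // After inlining, source and destination are the same object, so the
          // copy becomes an identity memcpy that SROA can simply delete.
          assign(p, p);
        }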
    • Lift the speculation visitor above all the helpers that are targeted at · 90c4a3ae
      Chandler Carruth authored
      the rewrite visitor to make the fact that the speculation is completely
      independent a bit more clear.
      
      I promise that this is just a cut/paste of the one visitor and adding
      the anonymous namespace wrappings. The diff may look completely
      preposterous; it does in git for some reason.
      
      llvm-svn: 165284
  2. Oct 04, 2012
    • This patch corrects commit 165126 by using an integer bit width instead of · 0d67f510
      Preston Gurd authored
      a pointer to a type, in order to remove the uses of getGlobalContext().
      
      Patch by Tyler Nowicki.
      
      llvm-svn: 165255
    • Add a comment to the commit r165187. · e076cac0
      Jakub Staszak authored
      llvm-svn: 165238
    • In my recent change to avoid use of underaligned memory I didn't notice that · a6d20010
      Duncan Sands authored
      cpyDest can be mutated in some cases, which would then cause a crash later if
      indeed the memory was underaligned.  This brought down several buildbots, so
      I guess the underaligned case is much more common than I thought!
      
      llvm-svn: 165228
    • Fix PR13969, a mini-phase-ordering issue with the new SROA pass. · ac8317fd
      Chandler Carruth authored
      Currently, we re-visit allocas when something changes about the way they
      might be *split* to allow better scalarization to take place. However,
      we weren't handling the case when the *promotion* is what would change
      the behavior of SROA. When an address derived from an alloca is stored
      into another alloca, we consider the first to have escaped. If the
      second is ever promoted to an SSA value, we will suddenly be able to run
      the SROA pass on the first alloca.
      
      This patch adds explicit support for this form of iteration. When we
      detect a store of a pointer derived from an alloca, we flag the
      underlying alloca for reprocessing after promotion. The logic works hard
      to only do this when there is definitely going to be promotion and it
      might remove impediments to the analysis of the alloca.
      
      Thanks to Nick for the great test case and Benjamin for some sanity
      check review.
      
      llvm-svn: 165223
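      A rough source-level sketch of the phase-ordering pattern (hypothetical
      code, not the test case from the commit): the address of one local is
      stored into a second local, so the first appears to escape until the
      second is promoted to an SSA value.

        int example(int v) {
          int data[2] = {v, v + 1}; // first alloca
          int *slot = data;         // second alloca, holds an address derived from 'data'
          // While 'slot' lives in memory, 'data' looks escaped. Once the store
          // into 'slot' is promoted away, 'data' becomes an SROA candidate
          // again, which is why the pass now re-queues it after promotion.
          return slot[0] + slot[1];
        }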
    • The memcpy optimizer was happily doing call slot forwarding when the new memory · c6ada69a
      Duncan Sands authored
      was less aligned than the old.  In the testcase this results in an overaligned
      memset: the memset alignment was correct for the original memory but is too much
      for the new memory.  Fix this by either increasing the alignment of the new
      memory or bailing out if that isn't possible.  Should fix the gcc-4.7 self-host
      buildbot failure.
      
      llvm-svn: 165220
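      A hedged sketch of the guard this fix implies (invented names, not the
      actual MemCpyOpt code): forwarding a write into the copy's destination is
      only sound if the destination is at least as aligned as the forwarded
      write assumes, or if its alignment can be raised first.

        #include <cstdint>

        bool canForwardCallSlot(uint64_t requiredAlign, uint64_t destAlign,
                                bool canRaiseDestAlign) {
          if (destAlign >= requiredAlign)
            return true;              // destination already aligned enough
          return canRaiseDestAlign;   // otherwise raise its alignment, or bail out
        }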
    • Teach the integer-promotion rewrite strategy to be endianness aware. · 43c8b46d
      Chandler Carruth authored
      Sorry for this being broken so long. =/
      
      As part of this, switch all of the existing tests to be Little Endian,
      which is the behavior I was asserting in them anyways! Add in a new
      big-endian test that checks the interesting behavior there.
      
      Another part of this is to tighten the rules about when we perform the
      full-integer promotion. This logic now rejects cases where the fully
      promoted integer has a non-multiple-of-8 bitwidth, or cases where the
      loads or stores touch bits which are in the allocated space of the
      alloca but are not loaded or stored when accessing the integer. Sadly,
      these aren't really observable today as the rest of the pass will
      already ensure the invariants hold. However, the latter situation is
      likely to become a potential concern in the future.
      
      Thanks to Benjamin and Duncan for early review of this patch. I'm still
      looking into whether there are further endianness issues, please let me
      know if anyone sees BE failures persisting past this.
      
      llvm-svn: 165219
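      A minimal sketch of the endianness distinction (invented helper, assuming
      a 32-bit promoted integer): the byte stored at a given in-memory offset
      maps to different bits of the integer depending on byte order, so the
      extract and insert shifts must differ.

        #include <cstdint>

        uint8_t byteAtOffset(uint32_t value, unsigned off, bool bigEndian) {
          unsigned shift = bigEndian ? (3 - off) * 8 : off * 8;
          return static_cast<uint8_t>(value >> shift);
        }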
    • Use method to query for attributes. · e8619aa1
      Bill Wendling authored
      llvm-svn: 165209
    • Fix PR13967. · f8a81295
      Jakub Staszak authored
      llvm-svn: 165187
  3. Oct 03, 2012
    • Fix an issue where we failed to adjust the alignment constraint on · 08e5f49f
      Chandler Carruth authored
      a memcpy to reflect that '0' has a different meaning when applied to
      a load or store. Now we correctly use underaligned loads and stores for
      the test case added.
      
      llvm-svn: 165101
    • Try to use a better set of abstractions for computing the alignment · 4b2b38d3
      Chandler Carruth authored
      necessary during rewriting. As part of this, fix a real think-o here
      where we might have left off an alignment specification when the address
      is in fact underaligned. I haven't come up with any way to trigger this,
      as there is always some other factor that reduces the alignment, but it
      certainly might have been an observable bug in some way I can't think
      of. This also slightly changes the strategy for placing explicit
      alignments on loads and stores to only do so when the alignment does not
      match that required by the ABI. This causes a few redundant alignments
      to go away from test cases.
      
      I've also added a couple of tests that really push on the alignment that
      we end up with on loads and stores. More to come here as I try to fix an
      underlying bug I have conjectured and produced test cases for, although
      it's not clear if this bug is the one currently hitting dragonegg's
      gcc47 bootstrap.
      
      llvm-svn: 165100
    • Switch the SetVector::remove_if implementation to use partition which · 3f57b829
      Chandler Carruth authored
      preserves the values of the relocated entries, unlike remove_if. This
      allows walking them and erasing them.
      
      Also flesh out the predicate we are using for this to support the
      various constraints actually imposed on a UnaryPredicate -- without this
      we can't compose it with std::not1.
      
      Thanks to Sean Silva for the review here and noticing the issue with
      std::remove_if.
      
      llvm-svn: 165073
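      A simplified stand-in (not LLVM's SetVector) showing why partition works
      here where remove_if does not, with std::not_fn standing in for the
      std::not1 composition mentioned above:

        #include <algorithm>
        #include <functional>
        #include <set>
        #include <vector>

        template <typename T, typename Pred>
        void removeIf(std::vector<T> &Vec, std::set<T> &Set, Pred P) {
          // std::partition keeps the removed elements' values intact in the
          // tail (std::remove_if would leave them unspecified), so they can
          // still be erased from the uniquing set. Note it does not preserve
          // the relative order of the kept elements; stable_partition would.
          auto Tail = std::partition(Vec.begin(), Vec.end(), std::not_fn(P));
          for (auto I = Tail; I != Vec.end(); ++I)
            Set.erase(*I);
          Vec.erase(Tail, Vec.end());
        }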
    • Teach the new SROA to handle cases where an alloca that has already been · b09f0a3c
      Chandler Carruth authored
      scheduled for processing on the worklist eventually gets deleted while
      we are processing another alloca, fixing the original test case in
      PR13990.
      
      To facilitate this, add a remove_if helper to the SetVector abstraction.
      It's not easy to use the standard abstractions for this because of the
      specifics of SetVector's types and implementation.
      
      Finally, a nice small test case is included. Thanks to Benjamin for the
      fantastic reduced test case here! All I had to do was delete some empty
      basic blocks!
      
      llvm-svn: 165065
  4. Oct 02, 2012
    • Fix another crasher in SROA, reported by Joel. · 6c3890b6
      Chandler Carruth authored
      We require that the indices into the use lists are stable in order to
      build fast lookup tables to locate a particular partition use from an
      operand of a PHI or select. This is (obviously in hindsight)
      incompatible with erasing elements from the array. Really, we don't want
      to erase anyways. It is expensive, and a rare operation. Instead, simply
      weaken the contract of the PartitionUse structure to allow null Use
      pointers to represent dead uses. Now we can clear out the pointer to
      mark things as dead, and all it requires is adding some 'continue'
      checks to the various loops.
      
      I'm still reducing a test case for this, as the test case I have is
      huge. I think this one I can get a nice test case for though, as it was
      much more deterministic.
      
      llvm-svn: 165032
    • Fix a silly coding error on my part. The whole point of the speculator · 3903e052
      Chandler Carruth authored
      being separate was that it can grow the use list. As a consequence, we
      can't use the iterator-pair interface; we need an index-based interface.
      Expose such an interface from the AllocaPartitioning, and use it in the
      speculator.
      
      This should at least fix a use-after-free bug found by Duncan, and may
      fix some of the other crashers.
      
      I don't have a nice deterministic test case yet, but if I get a good
      one, I'll add it.
      
      llvm-svn: 165027
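      A generic sketch (not the AllocaPartitioning API) of why growing the use
      list forces an index-based interface: appending can reallocate and
      invalidate iterators, while indices stay valid.

        #include <cstddef>
        #include <vector>

        void processAndGrow(std::vector<int> &Uses) {
          // Capture the original size so newly appended entries are not revisited.
          for (std::size_t i = 0, e = Uses.size(); i != e; ++i) {
            if (Uses[i] % 2 == 0)
              Uses.push_back(Uses[i] + 1); // safe: we hold an index, not an iterator
          }
        }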
  5. Oct 01, 2012
    • Make this plural. Spotted by Duncan in review (and a very old typo, this · d71ef3a0
      Chandler Carruth authored
      is the second time I've moved this comment around...)
      
      llvm-svn: 164939
    • Prune some unnecessary includes. · d325f802
      Chandler Carruth authored
      llvm-svn: 164938
    • Fix several issues with alignment. We weren't always accounting for type · 176ca71a
      Chandler Carruth authored
      alignment requirements of the new alloca. As one consequence which was
      reported as a bug by Duncan, we overaligned memcpy calls to ranges of
      allocas after they were rewritten to types with lower alignment
      requirements. Other consequences are possible, but I don't have any test
      cases for them.
      
      llvm-svn: 164937
    • Factor the PHI and select speculation into a separate rewriter. This · 82a57543
      Chandler Carruth authored
      could probably be factored still further to hoist this logic into
      a generic helper, but currently I don't have particularly clean ideas
      about how to handle that.
      
      This at least allows us to drop custom load rewriting from the
      speculation logic, which in turn allows the existing load rewriting
      logic to fire. In theory, this could enable vector promotion or other
      tricks after speculation occurs, but I've not dug into such issues. This
      is primarily just cleaning up the factoring of the code and the
      resulting logic.
      
      llvm-svn: 164933
    • Refactor the PartitionUse structure to actually use the Use* instead of · 54e8f0b4
      Chandler Carruth authored
      a pair of instructions, one for the used pointer and the second for the
      user. This simplifies the representation and also makes it more dense.
      
      This was noticed because of the miscompile in PR13926. In that case, we
      were running up against a fundamental "bad idea" in the speculation of
      PHI and select instructions: the speculation and rewriting are
      interleaved, which requires phi speculation to also perform load
      rewriting! This is bad, and causes us to miss opportunities to do (for
      example) vector rewriting only exposed after PHI speculation, etc etc.
      It also, in the old system, required us to insert *new* load uses into
      the current partition's use list, which would then be ignored during
      rewriting because we had already extracted an end iterator for the use
      list. The appending behavior (and many of the other oddities) stems from
      the strange de-duplication strategy in the PartitionUse builder.
      Amusingly, all this went without notice for so long because it could
      only be triggered by having *different* GEPs into the same partition of
      the same alloca, where both different GEPs were operands of a single
      PHI, and where the GEP which was not encountered first also had multiple
      uses within that same PHI node... Hence the insane steps required to
      reproduce.
      
      So, step one in fixing this fundamental bad idea is to make the
      PartitionUse actually contain a Use*, and to make the builder do proper
      deduplication instead of funky de-duplication. This is enough to remove
      the appending behavior, and fix the miscompile in PR13926, but there is
      more work to be done here. Subsequent commits will lift the speculation
      into its own visitor. It'll be a useful step toward potentially
      extracting all of the speculation logic into a generic utility
      transform.
      
      The existing PHI test case for repeated operands has been made more
      extreme to catch even these issues. This test case, run through the old
      pass, will exactly reproduce the miscompile from PR13926. ;] We were so
      close here!
      
      llvm-svn: 164925
  6. Sep 29, 2012
    • Fix a somewhat surprising miscompile where code relying on an ABI · 903790ef
      Chandler Carruth authored
      alignment could lose it due to the alloca type moving down to a much
      smaller alignment guarantee.
      
      Now SROA will actively compute a proper alignment, factoring the target
      data, any explicit alignment, and the offset within the struct. This
      will in some cases lower the alignment requirements, but when we lower
      them below those of the type, we drop the alignment entirely to give
      freedom to the code generator to align it however is convenient.
      
      Thanks to Duncan for the lovely test case that pinned this down. =]
      
      llvm-svn: 164891
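      A rough sketch of the computation described above (invented helper, not
      the pass's code): the alignment guaranteed at a byte offset into the
      alloca is bounded by the base alignment and by the largest power of two
      dividing the offset.

        #include <cstdint>

        uint64_t alignAtOffset(uint64_t baseAlign, uint64_t offset) {
          if (offset == 0)
            return baseAlign;
          // Largest power of two dividing the offset (its lowest set bit).
          uint64_t offsetAlign = offset & (~offset + 1);
          return baseAlign < offsetAlign ? baseAlign : offsetAlign;
        }
        // Per the commit, when this result drops below the type's ABI alignment,
        // the explicit alignment is dropped entirely and codegen picks one.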
    • Do not delete BBs if their addresses are taken. rdar://12396696 · 64a223ae
      Evan Cheng authored
      llvm-svn: 164866
  7. Sep 28, 2012
  8. Sep 26, 2012
    • Remove the `hasFnAttr' method from Function. · 863bab68
      Bill Wendling authored
      The hasFnAttr method has been replaced by querying the Attributes explicitly. No
      intended functionality change.
      
      llvm-svn: 164725
    • Analogous fix to memset and memcpy rewriting. Don't have a test case · 208124f5
      Chandler Carruth authored
      contrived for these yet, as I spotted them by inspection and the test
      cases are a bit more tricky to phrase.
      
      llvm-svn: 164691
    • When rewriting the pointer operand to a load or store which has · 3e4273dd
      Chandler Carruth authored
      alignment guarantees attached, re-compute the alignment so that we
      consider offsets which impact alignment.
      
      llvm-svn: 164690
    • Teach all of the loads, stores, memsets and memcpys created by the · 871ba724
      Chandler Carruth authored
      rewriter in SROA to carry a proper alignment. This involves
      interrogating various sources of alignment, etc. This is a more complete
      and principled fix to PR13920 as well as related bugs pointed out by Eli
      in review and by inspection in the area.
      
      Also by inspection fix the integer and vector promotion paths to create
      aligned loads and stores. I still need to work up test cases for
      these... Sorry for the delay, they were found purely by inspection.
      
      llvm-svn: 164689
    • Revert the business end of r164636 and try again. I'll come in again. ;] · 4bd8f66e
      Chandler Carruth authored
      This should really, really fix PR13916. For real this time. The
      underlying bug is... a bit more subtle than I had imagined.
      
      The setup is a code pattern that leads to an @llvm.memcpy call with two
      equal pointers to an alloca in the source and dest. Now, not any pattern
      will do. The alloca needs to be formed just so, and both pointers should
      be wrapped in different bitcasts etc. When this precise pattern hits,
      a funny sequence of events transpires. First, we correctly detect the
      potential for overlap, and correctly optimize the memcpy. The first
      time. However, we do simplify the set of users of the alloca, and that
      causes us to run the alloca back through the SROA pass in case there are
      knock-on simplifications. At this point, a curious thing has happened.
      If we happen to have an i8 alloca, we have direct i8 pointer values. So
      we don't bother creating a cast, we rewrite the arguments to the memcpy
      to directly refer to the alloca.
      
      Now, in an unrelated area of the pass, we have clever logic which
      ensures that when visiting each User of a particular pointer derived
      from an alloca, we only visit that User once, and directly inspect all
      of its operands which refer to that particular pointer value. However,
      the mechanism used to detect memcpy's with the potential to overlap
      relied upon getting visited once per *Use*, not once per *User*. This is
      always true *unless* the same exact value is both source and dest. It
      turns out that almost nothing actually produces that pattern though.
      
      We can hand craft test cases that more directly test this behavior of
      course, and those are included. Also, note that there is a significant
      missed optimization here -- we prove in many cases that there is
      a non-volatile memcpy call with identical source and dest addresses. We
      shouldn't prevent splitting the alloca in that case, and in fact we
      should just remove such memcpy calls eagerly. I'll address that in
      a subsequent commit.
      
      llvm-svn: 164669
    • Don't drop the alignment on a memcpy intrinsic when producing a store. This is · d9f79106
      Nick Lewycky authored
      only a missed optimization opportunity if the store is over-aligned, but a
      miscompile if the store's new type has a higher natural alignment than the
      memcpy did. Fixes PR13920!
      
      llvm-svn: 164641
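      The hazard at the source level, sketched with hypothetical code: the
      buffer below may be only 1-byte aligned, so a store produced from the
      memcpy must carry the memcpy's alignment rather than the store type's
      natural alignment.

        #include <cstring>

        void writeUnaligned(unsigned char *buf, double v) {
          std::memcpy(buf, &v, sizeof v);          // correct for any alignment of 'buf'
          // *reinterpret_cast<double *>(buf) = v; // NOT equivalent: claims 8-byte alignment
        }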
  9. Sep 25, 2012
  10. Sep 24, 2012
    • Address one of the original FIXMEs for the new SROA pass by implementing · 92924fd2
      Chandler Carruth authored
      integer promotion analogous to vector promotion. When there is an
      integer alloca being accessed both as its integer type and as a narrower
      integer type, promote the narrower access to "insert" and "extract" the
      smaller integer from the larger one, and make the integer alloca
      a candidate for promotion.
      
      In the new formulation, we don't care about target legal integer or use
      thresholds to control things. Instead, we only perform this promotion to
      an integer type which the frontend has already emitted a load or store
      for. This bounds the scope and prevents optimization passes from
      coalescing larger and larger entities into a single integer.
      
      llvm-svn: 164479
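      A sketch of the "insert" and "extract" rewrites in plain C++ (invented
      helpers, little-endian layout assumed): the narrower access becomes
      shift-and-mask arithmetic on the wider integer.

        #include <cstdint>

        uint8_t extractByte(uint32_t whole, unsigned byteOffset) {
          return static_cast<uint8_t>(whole >> (byteOffset * 8));
        }

        uint32_t insertByte(uint32_t whole, uint8_t part, unsigned byteOffset) {
          uint32_t mask = UINT32_C(0xff) << (byteOffset * 8);
          return (whole & ~mask) | (uint32_t(part) << (byteOffset * 8));
        }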
  11. Sep 23, 2012
    • Switch to a signed representation for the dynamic offsets while walking · e7a1ba5e
      Chandler Carruth authored
      across the uses of the alloca. It's entirely possible for negative
      numbers to come up here, and in some rare cases simply doing the 2's
      complement arithmetic isn't the correct decision. Notably, we can't zext
      the index of the GEP. The definition of GEP is that these offsets are
      sign extended or truncated to the size of the pointer, and then wrapping
      2's complement arithmetic is used.
      
      This patch fixes an issue that comes up with *no* input from the
      buildbots or bootstrap afaict. The only place where it manifested,
      disturbingly, is Clang's own regression test suite. A reduced and
      targeted collection of tests are added to cope with this. Note that I've
      tried to pin down the potential cases of overflow, but may have missed
      some cases. I've tried to add a few cases to test this, but it's hard
      because LLVM has quite limited support for >64-bit constructs.
      
      llvm-svn: 164475
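      A small worked example (hypothetical helper) of why the offsets must be
      treated as signed: a 32-bit GEP index of -1 must sign extend to -1 at
      pointer width, not zero extend to 4294967295.

        #include <cstdint>

        int64_t extendIndex(int32_t index) {
          int64_t correct = static_cast<int64_t>(index);             // sext: -1 -> -1
          // int64_t wrong = static_cast<int64_t>(uint32_t(index));  // zext: -1 -> 4294967295
          return correct;
        }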