  1. Oct 01, 2012
      Fix more misspellings found by Duncan during review. · 9866b97f
      Chandler Carruth authored
      llvm-svn: 164940
      Fix several issues with alignment. We weren't always accounting for type · 176ca71a
      Chandler Carruth authored
      alignment requirements of the new alloca. As one consequence which was
      reported as a bug by Duncan, we overaligned memcpy calls to ranges of
      allocas after they were rewritten to types with lower alignment
requirements. Other consequences are possible, but I don't have any test
      cases for them.
      
      llvm-svn: 164937
      Refactor the PartitionUse structure to actually use the Use* instead of · 54e8f0b4
      Chandler Carruth authored
      a pair of instructions, one for the used pointer and the second for the
      user. This simplifies the representation and also makes it more dense.
      
      This was noticed because of the miscompile in PR13926. In that case, we
      were running up against a fundamental "bad idea" in the speculation of
      PHI and select instructions: the speculation and rewriting are
      interleaved, which requires phi speculation to also perform load
      rewriting! This is bad, and causes us to miss opportunities to do (for
      example) vector rewriting only exposed after PHI speculation, etc etc.
      It also, in the old system, required us to insert *new* load uses into
      the current partition's use list, which would then be ignored during
      rewriting because we had already extracted an end iterator for the use
      list. The appending behavior (and much of the other oddities) stem from
      the strange de-duplication strategy in the PartitionUse builder.
      Amusingly, all this went without notice for so long because it could
      only be triggered by having *different* GEPs into the same partition of
      the same alloca, where both different GEPs were operands of a single
      PHI, and where the GEP which was not encountered first also had multiple
      uses within that same PHI node... Hence the insane steps required to
      reproduce.
      
      So, step one in fixing this fundamental bad idea is to make the
      PartitionUse actually contain a Use*, and to make the builder do proper
      deduplication instead of funky de-duplication. This is enough to remove
      the appending behavior, and fix the miscompile in PR13926, but there is
      more work to be done here. Subsequent commits will lift the speculation
      into its own visitor. It'll be a useful step toward potentially
      extracting all of the speculation logic into a generic utility
      transform.
      
      The existing PHI test case for repeated operands has been made more
      extreme to catch even these issues. This test case, run through the old
      pass, will exactly reproduce the miscompile from PR13926. ;] We were so
      close here!
      
      llvm-svn: 164925
  2. Sep 29, 2012
      Fix a somewhat surprising miscompile where code relying on an ABI · 903790ef
      Chandler Carruth authored
      alignment could lose it due to the alloca type moving down to a much
      smaller alignment guarantee.
      
      Now SROA will actively compute a proper alignment, factoring the target
      data, any explicit alignment, and the offset within the struct. This
      will in some cases lower the alignment requirements, but when we lower
      them below those of the type, we drop the alignment entirely to give
      freedom to the code generator to align it however is convenient.
      
      Thanks to Duncan for the lovely test case that pinned this down. =]
      
      llvm-svn: 164891
  3. Sep 26, 2012
      When rewriting the pointer operand to a load or store which has · 3e4273dd
      Chandler Carruth authored
      alignment guarantees attached, re-compute the alignment so that we
      consider offsets which impact alignment.
      
      llvm-svn: 164690
      Teach all of the loads, stores, memsets and memcpys created by the · 871ba724
      Chandler Carruth authored
      rewriter in SROA to carry a proper alignment. This involves
      interrogating various sources of alignment, etc. This is a more complete
      and principled fix to PR13920 as well as related bugs pointed out by Eli
      in review and by inspection in the area.
      
      Also by inspection fix the integer and vector promotion paths to create
      aligned loads and stores. I still need to work up test cases for
      these... Sorry for the delay, they were found purely by inspection.
      
      llvm-svn: 164689
      Revert the business end of r164636 and try again. I'll come in again. ;] · 4bd8f66e
      Chandler Carruth authored
      This should really, really fix PR13916. For real this time. The
      underlying bug is... a bit more subtle than I had imagined.
      
      The setup is a code pattern that leads to an @llvm.memcpy call with two
      equal pointers to an alloca in the source and dest. Now, not any pattern
      will do. The alloca needs to be formed just so, and both pointers should
      be wrapped in different bitcasts etc. When this precise pattern hits,
      a funny sequence of events transpires. First, we correctly detect the
      potential for overlap, and correctly optimize the memcpy. The first
      time. However, we do simplify the set of users of the alloca, and that
      causes us to run the alloca back through the SROA pass in case there are
      knock-on simplifications. At this point, a curious thing has happened.
      If we happen to have an i8 alloca, we have direct i8 pointer values. So
      we don't bother creating a cast, we rewrite the arguments to the memcpy
to directly refer to the alloca.
      
      Now, in an unrelated area of the pass, we have clever logic which
      ensures that when visiting each User of a particular pointer derived
      from an alloca, we only visit that User once, and directly inspect all
      of its operands which refer to that particular pointer value. However,
the mechanism used to detect memcpys with the potential to overlap
      relied upon getting visited once per *Use*, not once per *User*. This is
      always true *unless* the same exact value is both source and dest. It
      turns out that almost nothing actually produces that pattern though.
      
      We can hand craft test cases that more directly test this behavior of
      course, and those are included. Also, note that there is a significant
      missed optimization here -- we prove in many cases that there is
      a non-volatile memcpy call with identical source and dest addresses. We
      shouldn't prevent splitting the alloca in that case, and in fact we
      should just remove such memcpy calls eagerly. I'll address that in
      a subsequent commit.
      
      llvm-svn: 164669
      Don't drop the alignment on a memcpy intrinsic when producing a store. This is · d9f79106
      Nick Lewycky authored
      only a missed optimization opportunity if the store is over-aligned, but a
      miscompile if the store's new type has a higher natural alignment than the
      memcpy did. Fixes PR13920!
      
      llvm-svn: 164641
  4. Sep 25, 2012
  5. Sep 24, 2012
      Address one of the original FIXMEs for the new SROA pass by implementing · 92924fd2
      Chandler Carruth authored
      integer promotion analogous to vector promotion. When there is an
      integer alloca being accessed both as its integer type and as a narrower
      integer type, promote the narrower access to "insert" and "extract" the
      smaller integer from the larger one, and make the integer alloca
      a candidate for promotion.
      
      In the new formulation, we don't care about target legal integer or use
      thresholds to control things. Instead, we only perform this promotion to
      an integer type which the frontend has already emitted a load or store
      for. This bounds the scope and prevents optimization passes from
      coalescing larger and larger entities into a single integer.
      
      llvm-svn: 164479
  6. Sep 23, 2012
      Switch to a signed representation for the dynamic offsets while walking · e7a1ba5e
      Chandler Carruth authored
      across the uses of the alloca. It's entirely possible for negative
      numbers to come up here, and in some rare cases simply doing the 2's
      complement arithmetic isn't the correct decision. Notably, we can't zext
      the index of the GEP. The definition of GEP is that these offsets are
      sign extended or truncated to the size of the pointer, and then wrapping
2's complement arithmetic is used.
      
      This patch fixes an issue that comes up with *no* input from the
buildbots or bootstrap, as far as I can tell. The only place where it
manifested, disturbingly, is Clang's own regression test suite. A reduced
and targeted collection of tests is added to cope with this. Note that
I've tried to pin down the potential cases of overflow, but may have
missed some cases. I've tried to add a few cases to test this, but it's
hard because LLVM has quite limited support for >64-bit constructs.
      
      llvm-svn: 164475
  7. Sep 22, 2012
      Fix a case where the new SROA pass failed to zap dead operands to · 225d4bdb
      Chandler Carruth authored
      selects with a constant condition. This resulted in the operands
      remaining live through the SROA rewriter. Most of the time, this just
      caused some dead allocas to persist and get zapped by later passes, but
      in one case found by Joerg, it caused a crash when we tried to *promote*
      the alloca despite it having this dead use. We already have the
      mechanisms in place to handle this, just wire select up to them.
      
      llvm-svn: 164427
  8. Sep 19, 2012
      Fix the last crasher I've gotten a reproduction for in SROA. This one · 3f882d4c
      Chandler Carruth authored
came from the dragonegg build bots when we turned on the full version of
the pass. I've included a much reduced test case for this pesky bug, despite
      bugpoint's uncooperative behavior.
      
      Also, I audited all the similar code I could find and didn't spot any
      other cases where this mistake cropped up.
      
      llvm-svn: 164178
  9. Sep 18, 2012
      Fix getCommonType in a different way from the way I fixed it when · d356fd02
      Chandler Carruth authored
      working on FCA splitting. Instead of refusing to form a common type when
      there are uses of a subsection of the alloca as well as a use of the
      entire alloca, just skip the subsection uses and continue looking for
      a whole-alloca use with a type that we can use.
      
      This produces slightly prettier IR I think, and also fixes the other
      failure in the test.
      
      llvm-svn: 164146
      XFAIL SROA test until Chandler can get to it. · d4d37db0
      Benjamin Kramer authored
      llvm-svn: 164128
      Fix a warning in release builds and a test case I forgot to update with · a34f3567
      Chandler Carruth authored
      a fix to getCommonType in the previous patch.
      
      llvm-svn: 164120
      Add a major missing piece to the new SROA pass: aggressive splitting of · 42cb9cb1
      Chandler Carruth authored
      FCAs. This is essential in order to promote allocas that are used in
      struct returns by frontends like Clang. The FCA load would block the
rest of the pass from firing, resulting in significant regressions with
      the bullet benchmark in the nightly test suite.
      
      Thanks to Duncan for repeated discussions about how best to do this, and
      to both him and Benjamin for review.
      
This issue appears to have blocked many places where the pass tries to
fire, and so I expect somewhat different results with this fix added.
      
      As with the last big patch, I'm including a change to enable the SROA by
      default *temporarily*. Ben is going to remove this as soon as the LNT
      bots pick up the patch. I'm just trying to get a round of LNT numbers
      from the stable machines in the lab.
      
      NOTE: Four clang tests are expected to fail in the brief window where
      this is enabled. Sorry for the noise!
      
      llvm-svn: 164119
  10. Sep 15, 2012
      Port the SSAUpdater-based promotion logic from the old SROA pass to the · 70b44c5c
      Chandler Carruth authored
      new one, and add support for running the new pass in that mode and in
      that slot of the pass manager. With this the new pass can completely
      replace the old one within the pipeline.
      
      The strategy for enabling or disabling the SSAUpdater logic is to do it
      by making the requirement of the domtree analysis optional. By default,
      it is required and we get the standard mem2reg approach. This is usually
      the desired strategy when run in stand-alone situations. Within the
      CGSCC pass manager, we disable requiring of the domtree analysis and
consequently trigger fallback to the SSAUpdater promotion.
      
      In theory this would allow the pass to re-use a domtree if one happened
      to be available even when run in a mode that doesn't require it. In
      practice, it lets us have a single pass rather than two which was
      simpler for me to wrap my head around.
      
      There is a hidden flag to force the use of the SSAUpdater code path for
      the purpose of testing. The primary testing strategy is just to run the
      existing tests through that path. One notable difference is that it has
      custom code to handle lifetime markers, and one of the tests has been
      enhanced to exercise that code.
      
      This has survived a bootstrap and the test suite without serious
      correctness issues, however my run of the test suite produced *very*
      alarming performance numbers. I don't entirely understand or trust them
      though, so more investigation is on-going.
      
      To aid my understanding of the performance impact of the new SROA now
      that it runs throughout the optimization pipeline, I'm enabling it by
      default in this commit, and will disable it again once the LNT bots have
      picked up one iteration with it. I want to get those bots (which are
      much more stable) to evaluate the impact of the change before I jump to
      any conclusions.
      
      NOTE: Several Clang tests will fail because they run -O3 and check the
      result's order of output. They'll go back to passing once I disable it
      again.
      
      llvm-svn: 163965
  11. Sep 14, 2012
      Introduce a new SROA implementation. · 1b398ae0
      Chandler Carruth authored
      This is essentially a ground up re-think of the SROA pass in LLVM. It
      was initially inspired by a few problems with the existing pass:
      - It is subject to the bane of my existence in optimizations: arbitrary
        thresholds.
      - It is overly conservative about which constructs can be split and
        promoted.
      - The vector value replacement aspect is separated from the splitting
        logic, missing many opportunities where splitting and vector value
        formation can work together.
      - The splitting is entirely based around the underlying type of the
        alloca, despite this type often having little to do with the reality
of how that memory is used. This is especially prevalent with unions
        and base classes where we tail-pack derived members.
      - When splitting fails (often due to the thresholds), the vector value
        replacement (again because it is separate) can kick in for
        preposterous cases where we simply should have split the value. This
        results in forming i1024 and i2048 integer "bit vectors" that
tremendously slow down subsequent IR optimizations (due to large
        APInts) and impede the backend's lowering.
      
      The new design takes an approach that fundamentally is not susceptible
to many of these problems. It is the result of a discussion between
myself and Duncan Sands over IRC about how to preemptively avoid these
      types of problems and how to do SROA in a more principled way. Since
      then, it has evolved and grown, but this remains an important aspect: it
      fixes real world problems with the SROA process today.
      
      First, the transform of SROA actually has little to do with replacement.
      It has more to do with splitting. The goal is to take an aggregate
      alloca and form a composition of scalar allocas which can replace it and
      will be most suitable to the eventual replacement by scalar SSA values.
      The actual replacement is performed by mem2reg (and in the future
      SSAUpdater).
      
      The splitting is divided into four phases. The first phase is an
      analysis of the uses of the alloca. This phase recursively walks uses,
building up a dense data structure representing the ranges of the
      alloca's memory actually used and checking for uses which inhibit any
      aspects of the transform such as the escape of a pointer.
      
      Once we have a mapping of the ranges of the alloca used by individual
      operations, we compute a partitioning of the used ranges. Some uses are
      inherently splittable (such as memcpy and memset), while scalar uses are
      not splittable. The goal is to build a partitioning that has the minimum
      number of splits while placing each unsplittable use in its own
      partition. Overlapping unsplittable uses belong to the same partition.
      This is the target split of the aggregate alloca, and it maximizes the
      number of scalar accesses which become accesses to their own alloca and
      candidates for promotion.
      
      Third, we re-walk the uses of the alloca and assign each specific memory
      access to all the partitions touched so that we have dense use-lists for
      each partition.
      
      Finally, we build a new, smaller alloca for each partition and rewrite
      each use of that partition to use the new alloca. During this phase the
      pass will also work very hard to transform uses of an alloca into a form
      suitable for promotion, including forming vector operations, speculating
loads through PHI nodes and selects, etc.
      
      After splitting is complete, each newly refined alloca that is
      a candidate for promotion to a scalar SSA value is run through mem2reg.
      
      There are lots of reasonably detailed comments in the source code about
      the design and algorithms, and I'm going to be trying to improve them in
      subsequent commits to ensure this is well documented, as the new pass is
      in many ways more complex than the old one.
      
Some of this is still a WIP, but the current state is reasonably stable.
      It has passed bootstrap, the nightly test suite, and Duncan has run it
      successfully through the ACATS and DragonEgg test suites. That said, it
      remains behind a default-off flag until the last few pieces are in
      place, and full testing can be done.
      
      Specific areas I'm looking at next:
      - Improved comments and some code cleanup from reviews.
      - SSAUpdater and enabling this pass inside the CGSCC pass manager.
- Some data structure tuning and compile-time measurements.
      - More aggressive FCA splitting and vector formation.
      
      Many thanks to Duncan Sands for the thorough final review, as well as
      Benjamin Kramer for lots of review during the process of writing this
      pass, and Daniel Berlin for reviewing the data structures and algorithms
      and general theory of the pass. Also, several other people on IRC, over
      lunch tables, etc for lots of feedback and advice.
      
      llvm-svn: 163883