  1. Sep 26, 2012
    • Teach all of the loads, stores, memsets and memcpys created by the · 871ba724
      Chandler Carruth authored
      rewriter in SROA to carry a proper alignment. This involves
      interrogating various sources of alignment, etc. This is a more complete
      and principled fix to PR13920 as well as related bugs pointed out by Eli
      in review and by inspection in the area.
      
      Also by inspection fix the integer and vector promotion paths to create
      aligned loads and stores. I still need to work up test cases for
      these... Sorry for the delay, they were found purely by inspection.
      
      llvm-svn: 164689
      871ba724
    • Fix tests that didn't test anything. · 205d70ed
      Benjamin Kramer authored
      llvm-svn: 164686
      205d70ed
    • SimplifyCFG: Make the switch-to-lookup table transformation store the · 39583b88
      Hans Wennborg authored
      tables in bitmaps when they fit in a target-legal register.
      
      This saves some space, and it also allows for building tables that would
      otherwise be deemed too sparse.
      
      One interesting case that this hits is example 7 from
      http://blog.regehr.org/archives/320. We currently generate good code
      for this when lowering the switch to the selection DAG: we build a
      bitmask to decide whether to jump to one block or the other. My patch
      will result in the same bitmask, but it removes the need for the jump,
      as the return value can just be retrieved from the mask.
      
      llvm-svn: 164684
      39583b88
    • Revert the business end of r164636 and try again. I'll come in again. ;] · 4bd8f66e
      Chandler Carruth authored
      This should really, really fix PR13916. For real this time. The
      underlying bug is... a bit more subtle than I had imagined.
      
      The setup is a code pattern that leads to an @llvm.memcpy call with two
      equal pointers to an alloca in the source and dest. Now, not any pattern
      will do. The alloca needs to be formed just so, and both pointers should
      be wrapped in different bitcasts etc. When this precise pattern hits,
      a funny sequence of events transpires. First, we correctly detect the
      potential for overlap, and correctly optimize the memcpy. The first
      time. However, we do simplify the set of users of the alloca, and that
      causes us to run the alloca back through the SROA pass in case there are
      knock-on simplifications. At this point, a curious thing has happened.
      If we happen to have an i8 alloca, we have direct i8 pointer values. So
      we don't bother creating a cast; we rewrite the arguments of the memcpy
      to refer directly to the alloca.
      
      Now, in an unrelated area of the pass, we have clever logic which
      ensures that when visiting each User of a particular pointer derived
      from an alloca, we only visit that User once, and directly inspect all
      of its operands which refer to that particular pointer value. However,
      the mechanism used to detect memcpys with the potential to overlap
      relied upon getting visited once per *Use*, not once per *User*. This is
      always true *unless* the same exact value is both source and dest. It
      turns out that almost nothing actually produces that pattern though.
      
      We can hand craft test cases that more directly test this behavior of
      course, and those are included. Also, note that there is a significant
      missed optimization here -- we prove in many cases that there is
      a non-volatile memcpy call with identical source and dest addresses. We
      shouldn't prevent splitting the alloca in that case, and in fact we
      should just remove such memcpy calls eagerly. I'll address that in
      a subsequent commit.
      
      llvm-svn: 164669
      4bd8f66e
    • Don't drop the alignment on a memcpy intrinsic when producing a store. This is · d9f79106
      Nick Lewycky authored
      only a missed optimization opportunity if the store is over-aligned, but a
      miscompile if the store's new type has a higher natural alignment than the
      memcpy did. Fixes PR13920!
      
      llvm-svn: 164641
      d9f79106
  2. Sep 25, 2012
  3. Sep 24, 2012
    • Add missing : in CHECK line. · 6fb4bd77
      Richard Osborne authored
      llvm-svn: 164540
      6fb4bd77
    • Add missing check for presence of target data. · 2fd29bfb
      Richard Osborne authored
      This avoids a crash in visitAllocaInst when target data isn't available.
      
      llvm-svn: 164539
      2fd29bfb
    • Address one of the original FIXMEs for the new SROA pass by implementing · 92924fd2
      Chandler Carruth authored
      integer promotion analogous to vector promotion. When there is an
      integer alloca being accessed both as its integer type and as a narrower
      integer type, promote the narrower access to "insert" and "extract" the
      smaller integer from the larger one, and make the integer alloca
      a candidate for promotion.
      
      In the new formulation, we don't care about target legal integer or use
      thresholds to control things. Instead, we only perform this promotion to
      an integer type which the frontend has already emitted a load or store
      for. This bounds the scope and prevents optimization passes from
      coalescing larger and larger entities into a single integer.
      
      llvm-svn: 164479
      92924fd2
  4. Sep 23, 2012
    • Switch to a signed representation for the dynamic offsets while walking · e7a1ba5e
      Chandler Carruth authored
      across the uses of the alloca. It's entirely possible for negative
      numbers to come up here, and in some rare cases simply doing the 2's
      complement arithmetic isn't the correct decision. Notably, we can't zext
      the index of the GEP. The definition of GEP is that these offsets are
      sign extended or truncated to the size of the pointer, and then wrapping
      2's complement arithmetic used.
      
      This patch fixes an issue that comes up with *no* input from the
      buildbots or bootstrap afaict. The only place where it manifested,
      disturbingly, is Clang's own regression test suite. A reduced and
      targeted collection of tests are added to cope with this. Note that I've
      tried to pin down the potential cases of overflow, but may have missed
      some cases. I've tried to add a few cases to test this, but it's hard
      because LLVM has quite limited support for >64-bit constructs.
      
      llvm-svn: 164475
      e7a1ba5e
  5. Sep 22, 2012
    • Fix a case where the new SROA pass failed to zap dead operands to · 225d4bdb
      Chandler Carruth authored
      selects with a constant condition. This resulted in the operands
      remaining live through the SROA rewriter. Most of the time, this just
      caused some dead allocas to persist and get zapped by later passes, but
      in one case found by Joerg, it caused a crash when we tried to *promote*
      the alloca despite it having this dead use. We already have the
      mechanisms in place to handle this, just wire select up to them.
      
      llvm-svn: 164427
      225d4bdb
  6. Sep 21, 2012
  7. Sep 19, 2012
    • SimplifyCFG: Don't generate invalid code for switch used to initialize · f744fa91
      Hans Wennborg authored
      two variables where the first variable is returned and the second
      ignored.
      
      I don't think this occurs in practice (other passes should have cleaned
      up the unused phi node), but it should still be handled correctly.
      
      Also make the logic for determining if we should return early less
      sketchy.
      
      llvm-svn: 164225
      f744fa91
    • Move load_to_switch.ll to test/CodeGen/SPARC/ · ff9b5a84
      Hans Wennborg authored
      Because the test invokes llc -march=sparc, it needs to be in a directory
      which is only run when the sparc target is built.
      
      llvm-svn: 164211
      ff9b5a84
    • rename test · 0b661191
      Nadav Rotem authored
      llvm-svn: 164210
      0b661191
    • Prevent inlining of callees which allocate lots of memory into a recursive caller. · 4eb3d4b2
      Nadav Rotem authored
      Example:
      
      void foo() {
        ...
        foo();   // I'm recursive!
        bar();
      }
      
      void bar() {
        int a[1000];  // large stack size
      }
      
      rdar://10853263
      
      llvm-svn: 164207
      4eb3d4b2
    • CodeGenPrep: turn lookup tables into switches for some targets. · 02fbc716
      Hans Wennborg authored
      This is a follow-up from r163302, which added a transformation to
      SimplifyCFG that turns some switches into loads from lookup tables.
      
      It was pointed out that some targets, such as GPUs and deeply embedded
      targets, might not find this appropriate, but SimplifyCFG doesn't have
      enough information about the target to decide this.
      
      This patch adds the reverse transformation to CodeGenPrep: it turns
      loads from lookup tables back into switches for targets where we do not
      build jump tables (assuming these are also the targets where lookup
      tables are inappropriate).
      
      Hopefully we will eventually get to have target information in
      SimplifyCFG, and then this CodeGenPrep transformation can be removed.
      
      llvm-svn: 164206
      02fbc716
    • Fix the last crasher I've gotten a reproduction for in SROA. This one · 3f882d4c
      Chandler Carruth authored
      from the dragonegg build bots when we turned on the full version of the
      pass. Included a much reduced test case for this pesky bug, despite
      bugpoint's uncooperative behavior.
      
      Also, I audited all the similar code I could find and didn't spot any
      other cases where this mistake cropped up.
      
      llvm-svn: 164178
      3f882d4c
  8. Sep 18, 2012
  9. Sep 17, 2012
  10. Sep 15, 2012
    • Port the SSAUpdater-based promotion logic from the old SROA pass to the · 70b44c5c
      Chandler Carruth authored
      new one, and add support for running the new pass in that mode and in
      that slot of the pass manager. With this the new pass can completely
      replace the old one within the pipeline.
      
      The strategy for enabling or disabling the SSAUpdater logic is to do it
      by making the requirement of the domtree analysis optional. By default,
      it is required and we get the standard mem2reg approach. This is usually
      the desired strategy when run in stand-alone situations. Within the
      CGSCC pass manager, we disable requiring of the domtree analysis and
      consequentially trigger fallback to the SSAUpdater promotion.
      
      In theory this would allow the pass to re-use a domtree if one happened
      to be available even when run in a mode that doesn't require it. In
      practice, it lets us have a single pass rather than two which was
      simpler for me to wrap my head around.
      
      There is a hidden flag to force the use of the SSAUpdater code path for
      the purpose of testing. The primary testing strategy is just to run the
      existing tests through that path. One notable difference is that it has
      custom code to handle lifetime markers, and one of the tests has been
      enhanced to exercise that code.
      
      This has survived a bootstrap and the test suite without serious
      correctness issues, however my run of the test suite produced *very*
      alarming performance numbers. I don't entirely understand or trust them
      though, so more investigation is on-going.
      
      To aid my understanding of the performance impact of the new SROA now
      that it runs throughout the optimization pipeline, I'm enabling it by
      default in this commit, and will disable it again once the LNT bots have
      picked up one iteration with it. I want to get those bots (which are
      much more stable) to evaluate the impact of the change before I jump to
      any conclusions.
      
      NOTE: Several Clang tests will fail because they run -O3 and check the
      result's order of output. They'll go back to passing once I disable it
      again.
      
      llvm-svn: 163965
      70b44c5c
    • PGO: preserve branch-weight metadata when simplifying two branches with a common · bfb9d435
      Manman Ren authored
      destination.
      
      Updated previous implementation to fix a case not covered:
      // PBI: br i1 %x, TrueDest, BB
      // BI:  br i1 %y, TrueDest, FalseDest
      The other case was handled correctly.
      // PBI: br i1 %x, BB, FalseDest
      // BI:  br i1 %y, TrueDest, FalseDest
      
      Also tried to use 64-bit arithmetic instead of APInt with scale to simplify the
      computation. Let me know if you have other opinions about this.
      
      llvm-svn: 163954
      bfb9d435
  11. Sep 14, 2012
    • PGO: preserve branch-weight metadata when simplifying a switch with a single · 8691e522
      Manman Ren authored
      case to a conditional branch and when removing dead cases.
      
      llvm-svn: 163942
      8691e522
    • Review feedback from Duncan Sands. Alphabetize includes and simplify · af2808cb
      Alex Rosenberg authored
      lit config.
      
      llvm-svn: 163928
      af2808cb
    • PGO: preserve branch-weight metadata when merging two switches where · d81b8e88
      Manman Ren authored
      the default target of the first switch is not the basic block the second switch
      is in (PredDefault != BB).
      
      llvm-svn: 163916
      d81b8e88
    • Introduce a new SROA implementation. · 1b398ae0
      Chandler Carruth authored
      This is essentially a ground up re-think of the SROA pass in LLVM. It
      was initially inspired by a few problems with the existing pass:
      - It is subject to the bane of my existence in optimizations: arbitrary
        thresholds.
      - It is overly conservative about which constructs can be split and
        promoted.
      - The vector value replacement aspect is separated from the splitting
        logic, missing many opportunities where splitting and vector value
        formation can work together.
      - The splitting is entirely based around the underlying type of the
        alloca, despite this type often having little to do with the reality
        of how that memory is used. This is especially prevalent with unions
        and base classes where we tail-pack derived members.
      - When splitting fails (often due to the thresholds), the vector value
        replacement (again because it is separate) can kick in for
        preposterous cases where we simply should have split the value. This
        results in forming i1024 and i2048 integer "bit vectors" that
        tremendously slow down subsequent IR optimizations (due to large
        APInts) and impede the backend's lowering.
      
      The new design takes an approach that fundamentally is not susceptible
      to many of these problems. It is the result of a discussion between
      myself and Duncan Sands over IRC about how to preemptively avoid these
      types of problems and how to do SROA in a more principled way. Since
      then, it has evolved and grown, but this remains an important aspect: it
      fixes real world problems with the SROA process today.
      
      First, the transform of SROA actually has little to do with replacement.
      It has more to do with splitting. The goal is to take an aggregate
      alloca and form a composition of scalar allocas which can replace it and
      will be most suitable to the eventual replacement by scalar SSA values.
      The actual replacement is performed by mem2reg (and in the future
      SSAUpdater).
      
      The splitting is divided into four phases. The first phase is an
      analysis of the uses of the alloca. This phase recursively walks uses,
      building up a dense datastructure representing the ranges of the
      alloca's memory actually used and checking for uses which inhibit any
      aspects of the transform such as the escape of a pointer.
      
      Once we have a mapping of the ranges of the alloca used by individual
      operations, we compute a partitioning of the used ranges. Some uses are
      inherently splittable (such as memcpy and memset), while scalar uses are
      not splittable. The goal is to build a partitioning that has the minimum
      number of splits while placing each unsplittable use in its own
      partition. Overlapping unsplittable uses belong to the same partition.
      This is the target split of the aggregate alloca, and it maximizes the
      number of scalar accesses which become accesses to their own alloca and
      candidates for promotion.
      
      Third, we re-walk the uses of the alloca and assign each specific memory
      access to all the partitions touched so that we have dense use-lists for
      each partition.
      
      Finally, we build a new, smaller alloca for each partition and rewrite
      each use of that partition to use the new alloca. During this phase the
      pass will also work very hard to transform uses of an alloca into a form
      suitable for promotion, including forming vector operations, speculating
      loads through PHI nodes and selects, etc.
      
      After splitting is complete, each newly refined alloca that is
      a candidate for promotion to a scalar SSA value is run through mem2reg.
      
      There are lots of reasonably detailed comments in the source code about
      the design and algorithms, and I'm going to be trying to improve them in
      subsequent commits to ensure this is well documented, as the new pass is
      in many ways more complex than the old one.
      
      Some of this is still a WIP, but the current state is reasonably stable.
      It has passed bootstrap, the nightly test suite, and Duncan has run it
      successfully through the ACATS and DragonEgg test suites. That said, it
      remains behind a default-off flag until the last few pieces are in
      place, and full testing can be done.
      
      Specific areas I'm looking at next:
      - Improved comments and some code cleanup from reviews.
      - SSAUpdater and enabling this pass inside the CGSCC pass manager.
      - Some datastructure tuning and compile-time measurements.
      - More aggressive FCA splitting and vector formation.
      
      Many thanks to Duncan Sands for the thorough final review, as well as
      Benjamin Kramer for lots of review during the process of writing this
      pass, and Daniel Berlin for reviewing the data structures and algorithms
      and general theory of the pass. Also, several other people on IRC, over
      lunch tables, etc for lots of feedback and advice.
      
      llvm-svn: 163883
      1b398ae0