  1. Sep 29, 2012
    • Fix a somewhat surprising miscompile where code relying on an ABI · 903790ef
      Chandler Carruth authored
      alignment could lose it due to the alloca type moving down to a much
      smaller alignment guarantee.
      
      Now SROA will actively compute a proper alignment, factoring in the
      target data, any explicit alignment, and the offset within the struct.
      This will in some cases lower the alignment requirements, but when we
      lower them below those of the type, we drop the alignment entirely to
      give the code generator the freedom to align it however it finds
      convenient.
      
      Thanks to Duncan for the lovely test case that pinned this down. =]
      
      llvm-svn: 164891
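      A hedged C sketch of the situation described above (the names and the
      32-byte figure are hypothetical, not the commit's test case): the local
      is explicitly over-aligned, so if SROA rewrites it down to a plain int
      alloca carrying only int's ABI alignment, the consumer's assumption is
      silently broken.

        #include <assert.h>
        #include <stdint.h>

        /* Relies on the original, stronger alignment of the object. */
        static void consume(const int *p) {
          assert(((uintptr_t)p % 32) == 0);
        }

        int main(void) {
          _Alignas(32) struct { int x; } s = { 42 };
          consume(&s.x);
          return 0;
        }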
    • Add test case for r164850. · 9e99f0c4
      Evan Cheng authored
      llvm-svn: 164867
  2. Sep 28, 2012
  3. Sep 27, 2012
  4. Sep 26, 2012
    • Address Duncan's comments on r164684: · cd3a11f7
      Hans Wennborg authored
      - Put statistics in alphabetical order
      - Don't use getZextValue when building TableInt, just use APInts
      - Introduce Create{Z,S}ExtOrTrunc in IRBuilder.
      
      llvm-svn: 164696
    • When rewriting the pointer operand to a load or store which has · 3e4273dd
      Chandler Carruth authored
      alignment guarantees attached, re-compute the alignment so that we
      consider offsets which impact alignment.
      
      llvm-svn: 164690
    • Teach all of the loads, stores, memsets and memcpys created by the · 871ba724
      Chandler Carruth authored
      rewriter in SROA to carry a proper alignment. This involves
      interrogating various sources of alignment, etc. This is a more complete
      and principled fix to PR13920 as well as related bugs pointed out by Eli
      in review and by inspection in the area.
      
      Also by inspection fix the integer and vector promotion paths to create
      aligned loads and stores. I still need to work up test cases for
      these... Sorry for the delay, they were found purely by inspection.
      
      llvm-svn: 164689
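      A hedged illustration of the offset adjustment mentioned above
      (hypothetical types and names): the alloca itself is 16-byte aligned,
      but the field at offset 4 can only be guaranteed 4-byte alignment, so
      any load or store the rewriter creates for it must carry 4, not 16.

        #include <stdint.h>

        struct Wide {
          int32_t a;   /* offset 0: inherits the full 16-byte alignment */
          int32_t b;   /* offset 4: only 4-byte alignment is guaranteed */
        };

        int32_t second_field(void) {
          _Alignas(16) struct Wide w = { 1, 2 };
          return w.b;
        }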
    • Fix tests that didn't test anything. · 205d70ed
      Benjamin Kramer authored
      llvm-svn: 164686
    • SimplifyCFG: Make the switch-to-lookup table transformation store the · 39583b88
      Hans Wennborg authored
      tables in bitmaps when they fit in a target-legal register.
      
      This saves some space, and it also allows for building tables that would
      otherwise be deemed too sparse.
      
      One interesting case that this hits is example 7 from
      http://blog.regehr.org/archives/320. We currently generate good code
      for this when lowering the switch to the selection DAG: we build a
      bitmask to decide whether to jump to one block or the other. My patch
      will result in the same bitmask, but it removes the need for the jump,
      as the return value can just be retrieved from the mask.
      
      llvm-svn: 164684
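      A hedged sketch in the spirit of that example (hypothetical code, not
      the blog post's): every case of the switch yields a single bit, so the
      whole lookup table fits in one register-sized bitmap.

        /* Switch form: returns 1 for inputs 0, 2 and 4, otherwise 0. */
        int in_set_switch(unsigned c) {
          switch (c) {
          case 0: case 2: case 4:
            return 1;
          default:
            return 0;
          }
        }

        /* Roughly the bitmap form: bits 0, 2 and 4 of 0x15 are set. */
        int in_set_bitmap(unsigned c) {
          return c < 5 && ((0x15u >> c) & 1);
        }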
    • Revert the business end of r164636 and try again. I'll come in again. ;] · 4bd8f66e
      Chandler Carruth authored
      This should really, really fix PR13916. For real this time. The
      underlying bug is... a bit more subtle than I had imagined.
      
      The setup is a code pattern that leads to an @llvm.memcpy call with two
      equal pointers to an alloca in the source and dest. Now, not any pattern
      will do. The alloca needs to be formed just so, and both pointers should
      be wrapped in different bitcasts etc. When this precise pattern hits,
      a funny sequence of events transpires. First, we correctly detect the
      potential for overlap, and correctly optimize the memcpy. The first
      time. However, we do simplify the set of users of the alloca, and that
      causes us to run the alloca back through the SROA pass in case there are
      knock-on simplifications. At this point, a curious thing has happened.
      If we happen to have an i8 alloca, we have direct i8 pointer values. So
      we don't bother creating a cast; we rewrite the arguments to the memcpy
      to refer directly to the alloca.
      
      Now, in an unrelated area of the pass, we have clever logic which
      ensures that when visiting each User of a particular pointer derived
      from an alloca, we only visit that User once, and directly inspect all
      of its operands which refer to that particular pointer value. However,
      the mechanism used to detect memcpy's with the potential to overlap
      relied upon getting visited once per *Use*, not once per *User*. This is
      always true *unless* the same exact value is both source and dest. It
      turns out that almost nothing actually produces that pattern though.
      
      We can hand craft test cases that more directly test this behavior of
      course, and those are included. Also, note that there is a significant
      missed optimization here -- we prove in many cases that there is
      a non-volatile memcpy call with identical source and dest addresses. We
      shouldn't prevent splitting the alloca in that case, and in fact we
      should just remove such memcpy calls eagerly. I'll address that in
      a subsequent commit.
      
      llvm-svn: 164669
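      A hand-crafted C illustration of the troublesome pattern (hypothetical,
      not the committed test cases): both memcpy arguments reach the very
      same alloca through different pointers, so logic that visits each User
      once rather than once per Use observes the self-copy only a single time.

        #include <string.h>

        int same_src_and_dest(void) {
          char buf = 7;          /* the i8 alloca                    */
          char *dst = &buf;      /* one path to the alloca           */
          char *src = &buf;      /* a second path to the same memory */
          memcpy(dst, src, 1);   /* identical source and dest        */
          return buf;
        }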
    • Don't drop the alignment on a memcpy intrinsic when producing a store. This is · d9f79106
      Nick Lewycky authored
      only a missed optimization opportunity if the store is over-aligned, but a
      miscompile if the store's new type has a higher natural alignment than the
      memcpy did. Fixes PR13920!
      
      llvm-svn: 164641
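      A hedged example of the hazard (hypothetical; uses the GCC/Clang packed
      attribute): the destination is only 1-byte aligned, so if the memcpy is
      rewritten as a plain store of a double, that store must keep the
      memcpy's alignment rather than assume the type's natural 8-byte
      alignment.

        #include <string.h>

        struct __attribute__((packed)) Packed {
          char tag;
          double d;    /* only 1-byte aligned inside the packed struct */
        };

        void copy_in(struct Packed *p, double v) {
          memcpy(&p->d, &v, sizeof v);   /* destination alignment is 1 */
        }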
  5. Sep 25, 2012
  6. Sep 24, 2012
    • Add missing : in CHECK line. · 6fb4bd77
      Richard Osborne authored
      llvm-svn: 164540
    • Add missing check for presence of target data. · 2fd29bfb
      Richard Osborne authored
      This avoids a crash in visitAllocaInst when target data isn't available.
      
      llvm-svn: 164539
    • Address one of the original FIXMEs for the new SROA pass by implementing · 92924fd2
      Chandler Carruth authored
      integer promotion analogous to vector promotion. When there is an
      integer alloca being accessed both as its integer type and as a narrower
      integer type, promote the narrower access to "insert" and "extract" the
      smaller integer from the larger one, and make the integer alloca
      a candidate for promotion.
      
      In the new formulation, we don't care about target-legal integer types
      or use thresholds to control things. Instead, we only perform this
      promotion to
      an integer type which the frontend has already emitted a load or store
      for. This bounds the scope and prevents optimization passes from
      coalescing larger and larger entities into a single integer.
      
      llvm-svn: 164479
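      A hedged sketch of the access pattern this enables (hypothetical
      names): a 32-bit local is also read through a 16-bit view; SROA can now
      model the narrow load as an extract of bits from the wide value (and a
      narrow store as an insert), keeping the whole alloca a promotion
      candidate.

        #include <stdint.h>
        #include <string.h>

        uint16_t low_half(uint32_t v) {
          uint32_t slot = v;               /* the i32 alloca              */
          uint16_t lo;
          memcpy(&lo, &slot, sizeof lo);   /* i16 view of the slot; the
                                              low half on little-endian   */
          return lo;
        }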
  7. Sep 23, 2012
    • Switch to a signed representation for the dynamic offsets while walking · e7a1ba5e
      Chandler Carruth authored
      across the uses of the alloca. It's entirely possible for negative
      numbers to come up here, and in some rare cases simply doing the 2's
      complement arithmetic isn't the correct decision. Notably, we can't zext
      the index of the GEP. The definition of GEP is that these offsets are
      sign-extended or truncated to the size of the pointer, and then wrapping
      2's complement arithmetic is used.
      
      This patch fixes an issue that comes up with *no* input from the
      buildbots or bootstrap afaict. The only place where it manifested,
      disturbingly, is Clang's own regression test suite. A reduced and
      targeted collection of tests is added to cope with this. Note that I've
      tried to pin down the potential cases of overflow, but may have missed
      some. I've tried to add a few cases to test this, but it's hard
      because LLVM has quite limited support for >64-bit constructs.
      
      llvm-svn: 164475
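      A small hedged illustration of why the sign matters here (hypothetical
      code): with i == 0 the element offset below is -1, and GEP semantics
      require that index to be sign-extended to pointer width; zero-extending
      a 32-bit -1 would instead produce an enormous positive offset.

        int read_prev(const int *p, int i) {
          return p[i - 1];   /* negative offset when i == 0 */
        }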
  8. Sep 22, 2012
    • Fix a case where the new SROA pass failed to zap dead operands to · 225d4bdb
      Chandler Carruth authored
      selects with a constant condition. This resulted in the operands
      remaining live through the SROA rewriter. Most of the time, this just
      caused some dead allocas to persist and get zapped by later passes, but
      in one case found by Joerg, it caused a crash when we tried to *promote*
      the alloca despite it having this dead use. We already have the
      mechanisms in place to handle this, just wire select up to them.
      
      llvm-svn: 164427
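      A hedged sketch of the shape involved (hypothetical code): once the
      condition folds to a constant, the untaken operand of the select is
      dead, and SROA must drop it rather than treat it as a live use of the
      alloca.

        int pick(void) {
          int live = 1, dead = 2;
          int *p = (0 ? &dead : &live);   /* constant condition: &dead is
                                             a dead operand              */
          return *p;
        }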
  9. Sep 21, 2012
  10. Sep 19, 2012
    • SimplifyCFG: Don't generate invalid code for switch used to initialize · f744fa91
      Hans Wennborg authored
      two variables where the first variable is returned and the second
      ignored.
      
      I don't think this occurs in practice (other passes should have cleaned
      up the unused phi node), but it should still be handled correctly.
      
      Also make the logic for determining if we should return early less
      sketchy.
      
      llvm-svn: 164225
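      A hedged reconstruction of the shape described above (hypothetical
      code): the switch defines two values, but only the first reaches the
      return; the phi node for the second is unused and must not derail the
      lookup-table transformation.

        int first_of_two(int x) {
          int a, b;
          switch (x) {
          case 0:  a = 10; b = 100; break;
          case 1:  a = 20; b = 200; break;
          default: a = 30; b = 300; break;
          }
          (void)b;   /* the second value is ignored */
          return a;
        }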
    • Move load_to_switch.ll to test/CodeGen/SPARC/ · ff9b5a84
      Hans Wennborg authored
      Because the test invokes llc -march=sparc, it needs to be in a directory
      which is only run when the sparc target is built.
      
      llvm-svn: 164211
    • rename test · 0b661191
      Nadav Rotem authored
      llvm-svn: 164210
    • Prevent inlining of callees which allocate lots of memory into a recursive caller. · 4eb3d4b2
      Nadav Rotem authored
      Example:
      
      void foo() {
        ...
        foo();   // I'm recursive!
        bar();
      }

      void bar() {
        int a[1000];  // large stack size
      }
      
      rdar://10853263
      
      llvm-svn: 164207
    • CodeGenPrep: turn lookup tables into switches for some targets. · 02fbc716
      Hans Wennborg authored
      This is a follow-up from r163302, which added a transformation to
      SimplifyCFG that turns some switches into loads from lookup tables.
      
      It was pointed out that some targets, such as GPUs and deeply embedded
      targets, might not find this appropriate, but SimplifyCFG doesn't have
      enough information about the target to decide this.
      
      This patch adds the reverse transformation to CodeGenPrep: it turns
      loads from lookup tables back into switches for targets where we do not
      build jump tables (assuming these are also the targets where lookup
      tables are inappropriate).
      
      Hopefully we will eventually get to have target information in
      SimplifyCFG, and then this CodeGenPrep transformation can be removed.
      
      llvm-svn: 164206
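      A hedged sketch of the round trip (hypothetical code): SimplifyCFG may
      turn the dense switch below into a load from a constant table, and on
      targets that do not build jump tables this CodeGenPrep change converts
      such a load back into the switch.

        static const int kDays[4] = { 31, 28, 31, 30 };

        int days_switch(unsigned month) {
          switch (month) {
          case 0: return 31;
          case 1: return 28;
          case 2: return 31;
          case 3: return 30;
          default: return 0;
          }
        }

        /* Roughly the lookup-table form the switch becomes. */
        int days_table(unsigned month) {
          return month < 4 ? kDays[month] : 0;
        }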
    • Fix the last crasher I've gotten a reproduction for in SROA. This one · 3f882d4c
      Chandler Carruth authored
      from the dragonegg build bots when we turned on the full version of the
      pass. Included a much reduced test case for this pesky bug, despite
      bugpoint's uncooperative behavior.
      
      Also, I audited all the similar code I could find and didn't spot any
      other cases where this mistake cropped up.
      
      llvm-svn: 164178