  1. Mar 04, 2014
  2. Mar 03, 2014
      [C++11] Use std::tie to simplify compare operators. · b2f034b8
      Benjamin Kramer authored
      No functionality change.
      
      llvm-svn: 202751
      [C++11] Remove a leftover std::function instance. · 9c794c7a
      Benjamin Kramer authored
      It's not needed anymore.
      
      llvm-svn: 202748
      [C++11] Remove the completely unnecessary requirement on SetVector's · d031fe9f
      Chandler Carruth authored
      remove_if that its predicate is adaptable. We don't actually need this,
      we can write a generic adapter for any predicate.
      
      This lets us remove some very wrong std::function usages. We should
      never be using std::function for predicates to algorithms. This incurs
      an *indirect* call overhead for every evaluation of the predicate, and
      makes it very hard to inline through.
      
      llvm-svn: 202742
      [C++11] Add a basic block range view for RegionInfo · 4abf9d3a
      Tobias Grosser authored
      This also switches the users in LLVM to ensure this functionality is tested.
      
      llvm-svn: 202705
      [C++11] Add two range adaptor views to User: operands and · 1583e99c
      Chandler Carruth authored
      operand_values. The first provides a range view over operand Use
      objects, and the second provides a range view over the Value*s being
      used by those operands.
      
      The naming is "STL-style" rather than "LLVM-style" because we have
      historically named iterator methods STL-style, and range methods seem to
      have far more in common with their iterator counterparts than with
      "normal" APIs. Feel free to bikeshed on this one if you want, I'm happy
      to change these around if people feel strongly.
      
      I've switched code in SROA and LCG to exercise these mostly to ensure
      they work correctly -- we don't really have an easy way to unittest this
      and they're trivial.
      
      llvm-svn: 202687
  3. Mar 02, 2014
  4. Mar 01, 2014
  5. Feb 26, 2014
      Fix PR18165: LSR must avoid scaling factors that exceed the limit on truncated use. · 429e9edd
      Andrew Trick authored
      Patch by Michael Zolotukhin!
      
      llvm-svn: 202273
      [SROA] Use the correct index integer size in GEPs through non-default · dfb2efd0
      Chandler Carruth authored
      address spaces.
      
This isn't really a correctness issue (the values are truncated) but it's
much cleaner.
      
      Patch by Matt Arsenault!
      
      llvm-svn: 202252
      [SROA] Teach SROA how to handle pointers from address spaces other than · 286d87ed
      Chandler Carruth authored
      the default.
      
      Based on the patch by Matt Arsenault, D1764!
      
      I switched one place to use the more direct pointer type to compute the
      desired address space, and I reworked the memcpy rewriting section to
      reflect significant refactorings that this patch helped inspire.
      
      Thanks to several of the folks who helped review and improve the patch
      as well.
      
      llvm-svn: 202247
[SROA] Split the alignment computation completely for the memcpy rewriting · aa72b93a
      Chandler Carruth authored
      to work independently for the slice side and the other side.
      
      This allows us to only compute the minimum of the two when we actually
      rewrite to a memcpy that needs to take the minimum, and preserve higher
      alignment for one side or the other when rewriting to loads and stores.
      
      This fix was inspired by seeing the result of some refactoring that
      makes addrspace handling better.
      
      llvm-svn: 202242
      [SROA] The original refactoring inspired by the addrspace patch in · 181ed05b
      Chandler Carruth authored
      D1764, which in turn set off the other refactorings to make
      'getSliceAlign()' a sensible thing.
      
      There are two possible inputs to the required alignment of a memory
      transfer intrinsic: the alignment constraints of the source and the
      destination. If we are *only* introducing a (potentially new) offset
      onto one side of the transfer, we don't need to consider the alignment
      constraints of the other side. Use this to simplify the logic feeding
      into alignment computation for unsplit transfers.
      
      Also, hoist the clamp of the magical zero alignment for these intrinsics
      to the more customary one alignment early. This lets several other
      conditions melt away.
      
      No functionality changed. There is a further improvement this exposes
      which *will* change functionality, but that's arriving in a separate
      patch.
      
      llvm-svn: 202232
      [SROA] Yet another slight refactoring that simplifies an API in the · 47954c80
      Chandler Carruth authored
      rewriting logic: don't pass custom offsets for the adjusted pointer to
      the new alloca.
      
      We always passed NewBeginOffset here. Sometimes we spelled it
BeginOffset, but only when they were in fact equal. What's worse, the API
      is set up so that you can't reasonably call it with anything else -- it
      assumes that you're passing it an offset relative to the *original*
      alloca that happens to fall within the new one. That's the whole point
      of NewBeginOffset, it's the clamped beginning offset.
      
      No functionality changed.
      
      llvm-svn: 202231
      [SROA] Simplify the computing of alignment: we only ever need the · 2659e503
      Chandler Carruth authored
      alignment of the slice being rewritten, not any arbitrary offset.
      
      Every caller is really just trying to compute the alignment for the
      whole slice, never for some arbitrary alignment. They are also just
      passing a type when they have one to see if we can skip an explicit
      alignment in the IR by using the type's alignment. This makes for a much
      simpler interface.
      
      Another refactoring inspired by the addrspace patch for SROA, although
      only loosely related.
      
      llvm-svn: 202230
      [SROA] Use NewOffsetBegin in the unsplit case for memset merely for · 735d5bee
      Chandler Carruth authored
      consistency with memcpy rewriting, and fix a latent bug in the alignment
      management for memset.
      
The alignment issue is that getAdjustedAllocaPtr is computing the
*relative* offset into the new alloca, but the alignment wasn't being set
from the relative offset; it was using the absolute offset, which is
into the old alloca.
      
I don't think it's possible to write a test case that actually reaches
      this code where the resulting alignment would be observably different,
      but the intent was clearly to use the relative offset within the new
      alloca.
      
      llvm-svn: 202229
      [SROA] Use the members for New{Begin,End}Offset in the rewrite helpers · ea27cf08
      Chandler Carruth authored
      rather than passing them as arguments.
      
      While I generally prefer actual arguments, in this case the readability
      loss is substantial. By using members we avoid repeatedly calculating
      the offsets, and once we're using members it is useful to ensure that
      those names *always* refer to the original-alloca-relative new offset
      for a rewritten slice.
      
      No functionality changed. Follow-up refactoring, all toward getting the
      address space patch merged.
      
      llvm-svn: 202228
      [SROA] Compute the New{Begin,End}Offset values once for each alloca · c46b6eb3
      Chandler Carruth authored
      slice being rewritten.
      
      We had the same code scattered across most of the visits. Instead,
      compute the new offsets and the slice size once when we start to visit
      a particular slice, and use the member variables from then on. This
      reduces quite a bit of code duplication.
      
      No functionality changed. Refactoring inspired to make it easier to
      apply the address space patch to SROA.
      
      llvm-svn: 202227
      [SROA] Fix PR18615 with some long overdue simplifications to the bounds · 6aedc106
      Chandler Carruth authored
      checking in SROA.
      
      The primary change is to just rely on uge for checking that the offset
      is within the allocation size. This removes the explicit checks against
      isNegative which were terribly error prone (including the reversed logic
      that led to PR18615) and prevented us from supporting stack allocations
      larger than half the address space.... Ok, so maybe the latter isn't
      *common* but it's a silly restriction to have.
      
      Also, we used to try to support a PHI node which loaded from before the
      start of the allocation if any of the loaded bytes were within the
      allocation. This doesn't make any sense, we have never really supported
      loading or storing *before* the allocation starts. The simplified logic
      just doesn't care.
      
      We continue to allow loading past the end of the allocation in part to
      support cases where there is a PHI and some loads are larger than others
      and the larger ones reach past the end of the allocation. We could solve
      this a different and more conservative way, but I'm still somewhat
      paranoid about this.
      
      llvm-svn: 202224
  6. Feb 25, 2014
      [reassociate] Switch two std::sort calls into std::stable_sort calls as · 7b8e1124
      Chandler Carruth authored
their inputs come from std::stable_sort and their comparators are not total orders.
      
      I'm not a huge fan of this, but the really bad std::stable_sort is right
      at the beginning of Reassociate. After we commit to stable-sort based
      consistent respect of source order, the downstream sorts shouldn't undo
      that unless they have a total order or they are used in an
order-insensitive way. Neither appears to be true for these cases.

I don't have particularly good test cases, but this jumped out by
      inspection when looking for output instability in this pass due to
      changes in the ordering of std::sort.
      
      llvm-svn: 202196
      [SROA] Add an off-by-default *strict* inbounds check to SROA. I had SROA · 3b79b2ab
      Chandler Carruth authored
      implemented this way a long time ago and due to the overwhelming bugs
      that surfaced, moved to a much more relaxed variant. Richard Smith would
      like to understand the magnitude of this problem and it seems fairly
      harmless to keep some flag-controlled logic to get the extremely strict
      behavior here. I'll remove it if it doesn't prove useful.
      
      llvm-svn: 202193
      Make DataLayout a plain object, not a pass. · 93512512
      Rafael Espindola authored
Instead, have a DataLayoutPass that holds one. This will allow parts of LLVM
that don't handle passes to also use DataLayout.
      
      llvm-svn: 202168
      Factor out calls to AA.getDataLayout(). · 6d6e87be
      Rafael Espindola authored
      llvm-svn: 202157
      [SROA] Use the original load name with the SROA-prefixed IRB rather than · 25adb7b0
      Chandler Carruth authored
      just "load". This helps avoid pointless de-duping with order-sensitive
      numbers as we already have unique names from the original load. It also
      makes the resulting IR quite a bit easier to read.
      
      llvm-svn: 202140
      [SROA] Thread the ability to add a pointer-specific name prefix through · cb93cd2d
      Chandler Carruth authored
      the pointer adjustment code. This is the primary code path that creates
      totally new instructions in SROA and being able to lump them based on
      the pointer value's name for which they were created causes
      *significantly* fewer name collisions and general noise in the debug
      output. This is particularly significant because it is making it much
      harder to track down instability in the output of SROA, as name
      de-duplication is a totally harmless form of instability that gets in
      the way of seeing real problems.
      
      The new fancy naming scheme tries to dig out the root "pre-SROA" name
      for pointer values and associate that all the way through the pointer
      formation instructions. Digging out the root is important to prevent the
      multiple iterative rounds of SROA from just layering too much cruft on
      top of cruft here. We already track the layers of SROAs iteration in the
      alloca name prefix. We don't need to duplicate it here.
      
      Should have no functionality change, and shouldn't have any really
      measurable impact on NDEBUG builds, as most of the complex logic is
      debug-only.
      
      llvm-svn: 202139
      [SROA] Rather than copying the logic for building a name prefix into the · 51175533
      Chandler Carruth authored
      PHI-pointer builder, just copy the builder and clobber the obvious
      fields.
      
      llvm-svn: 202136
      [SROA] Simplify some of the logic to dig out the old pointer value by · 8183a50f
      Chandler Carruth authored
      using OldPtr more heavily. Lots of this code was written before the
      rewriter had an OldPtr member setup ahead of time. There are already
      asserts in place that should ensure this doesn't change any
      functionality.
      
      llvm-svn: 202135
      [SROA] Adjust to new clang-format style. · 7625c54e
      Chandler Carruth authored
      llvm-svn: 202134
      [SROA] Fix a *glaring* bug in r202091: you have to actually *write* · a8c4cc68
      Chandler Carruth authored
      the break statement, not just think it to yourself....
      
      No idea how this worked at all, much less survived most bots, my
      bootstrap, and some bot bootstraps!
      
      The Polly one didn't survive, and this was filed as PR18959. I don't
      have a reduced test case and honestly I'm not seeing the need. What we
      probably need here are better asserts / debug-build behavior in
      SmallPtrSet so that this madness doesn't make it so far.
      
      llvm-svn: 202129
      Silence GCC warning · 26af6f7f
      Alexey Samsonov authored
      llvm-svn: 202119
      Fix typos · 70b36995
      Alp Toker authored
      llvm-svn: 202107
      [SROA] Add a debugging tool which shuffles the slices sequence prior to · 83cee772
      Chandler Carruth authored
sorting it. This helps uncover latent reliance on the original ordering,
which isn't guaranteed to be preserved by std::sort (but often is), and
which is based on the use-def chain orderings, which also aren't
(technically) guaranteed.
      
      Only available in C++11 debug builds, and behind a flag to prevent noise
      at the moment, but this is generally useful so figured I'd put it in the
      tree rather than keeping it out-of-tree.
      
      llvm-svn: 202106
      [SROA] Use a more direct way of determining whether we are processing · bb2a9324
      Chandler Carruth authored
      the destination operand or source operand of a memmove.
      
      It so happens that it was impossible for SROA to try to rewrite
      self-memmove where the operands are *identical*, because either such
a thing is volatile (and we don't rewrite) or it is non-volatile, and we
      don't even register it as a use of the alloca.
      
      However, making the 'IsDest' test *rely* on this subtle fact is... Very
      confusing for the reader. We should use the direct and readily available
      test of the Use* which gives us concrete information about which operand
      is being rewritten.
      
      No functionality changed, I hope! ;]
      
      llvm-svn: 202103
      [SROA] Fix another instability in SROA with respect to the slice · 3bf18ed5
      Chandler Carruth authored
      ordering.
      
      The fundamental problem that we're hitting here is that the use-def
      chain ordering is *itself* not a stable thing to be relying on in the
      rewriting for SROA. Further, we use a non-stable sort over the slices to
      arrange them based on the section of the alloca they're operating on.
      With a debugging STL implementation (or different implementations in
      stage2 and stage3) this can cause stage2 != stage3.
      
      The specific aspect of this problem fixed in this commit deals with the
      rewriting and load-speculation around PHIs and Selects. This, like many
      other aspects of the use-rewriting in SROA, is really part of the
      "strong SSA-formation" that is doen by SROA where it works very hard to
      canonicalize loads and stores in *just* the right way to satisfy the
      needs of mem2reg[1]. When we have a select (or a PHI) with 2 uses of the
      same alloca, we test that loads downstream of the select are
      speculatable around it twice. If only one of the operands to the select
      needs to be rewritten, then if we get lucky we rewrite that one first
      and the select is immediately speculatable. This can cause the order of
      operand visitation, and thus the order of slices to be rewritten, to
      change an alloca from promotable to non-promotable and vice versa.
      
      The fix is to defer all of the speculation until *after* the rewrite
      phase is done. Once we've rewritten everything, we can accurately test
      for whether speculation will work (once, instead of twice!) and the
      order ceases to matter.
      
      This also happens to simplify the other subtlety of speculation -- we
      need to *not* speculate anything unless the result of speculating will
      make the alloca fully promotable by mem2reg. I had a previous attempt at
      simplifying this, but it was still pretty horrible.
      
      There is actually already a *really* nice test case for this in
      basictest.ll, but on multiple STL implementations and inputs, we just
      got "lucky". Fortunately, the test case is very small and we can
      essentially build it in exactly the opposite way to get reasonable
      coverage in both directions even from normal STL implementations.
      
      llvm-svn: 202092
      Make some DataLayout pointers const. · aeff8a9c
      Rafael Espindola authored
      No functionality change. Just reduces the noise of an upcoming patch.
      
      llvm-svn: 202087
  7. Feb 22, 2014