  1. Mar 09, 2014
    • [C++11] Add range based accessors for the Use-Def chain of a Value. · cdf47884
      Chandler Carruth authored
      This requires a number of steps.
      1) Move value_use_iterator into the Value class as an implementation
         detail
      2) Change it to actually be a *Use* iterator rather than a *User*
         iterator.
      3) Add an adaptor which is a User iterator that always looks through the
         Use to the User.
      4) Wrap these in Value::use_iterator and Value::user_iterator typedefs.
      5) Add the range adaptors as Value::uses() and Value::users().
      6) Update *all* of the callers to correctly distinguish between whether
         they wanted a use_iterator (and to explicitly dig out the User when
         needed), or a user_iterator which makes the Use itself totally
         opaque.
      
      Because #6 requires churning essentially everything that walked the
      Use-Def chains, I went ahead and added all of the range adaptors and
      switched them to range-based loops where appropriate. Also because the
      renaming requires at least churning every line of code, it didn't make
      any sense to split these up into multiple commits -- all of which would
      touch all of the same lines of code.
      
      The result is still not quite optimal. The Value::use_iterator is a nice
      regular iterator, but Value::user_iterator is an iterator over User*s
      rather than over the User objects themselves. As a consequence, it fits
      a bit awkwardly into the range-based world and it has the weird
      extra-dereferencing 'operator->' that so many of our iterators have.
      I think this could be fixed by providing something which transforms
      a range of T&s into a range of T*s, but that *can* be separated into
      another patch, and it isn't yet 100% clear whether this is the right
      move.
      
      However, this change gets us most of the benefit and cleans up
      a substantial amount of code around Use and User. =]
      
      llvm-svn: 203364
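      
      As a sketch of the resulting API (assuming the LLVM headers of this
      era; the enclosing function is illustrative), the two views answer
      different questions about the same Use-Def chain:
      
        #include <cassert>
        #include "llvm/IR/Use.h"
        #include "llvm/IR/User.h"
        #include "llvm/IR/Value.h"
        using namespace llvm;
      
        unsigned countUsers(Value &V) {
          unsigned NumUsers = 0;
          // user_iterator view: the Use is looked through; each element
          // is the using object itself (an Instruction, Constant, ...).
          for (User *U : V.users()) {
            (void)U;
            ++NumUsers;
          }
          // use_iterator view: the Use is explicit; dig out the User and
          // the operand slot only when the edge itself matters.
          for (Use &U : V.uses())
            assert(U.getUser()->getOperand(U.getOperandNo()) == &V);
          return NumUsers;
        }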
  2. Mar 03, 2014
    • [C++11] Use std::tie to simplify compare operators. · b2f034b8
      Benjamin Kramer authored
      No functionality change.
      
      llvm-svn: 202751
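      
      The pattern, sketched on a hypothetical struct (the commit applies
      it to existing LLVM comparators):
      
        #include <string>
        #include <tuple>
      
        struct SymbolEntry {
          int SectionID;
          unsigned long Offset;
          std::string Name;
        };
      
        // std::tie builds a tuple of references, and std::tuple's
        // lexicographic operator< replaces a hand-written if/else chain.
        bool operator<(const SymbolEntry &L, const SymbolEntry &R) {
          return std::tie(L.SectionID, L.Offset, L.Name) <
                 std::tie(R.SectionID, R.Offset, R.Name);
        }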
    • [C++11] Remove a leftover std::function instance. · 9c794c7a
      Benjamin Kramer authored
      It's not needed anymore.
      
      llvm-svn: 202748
    • [C++11] Remove the completely unnecessary requirement on SetVector's · d031fe9f
      Chandler Carruth authored
      remove_if that its predicate is adaptable. We don't actually need this,
      we can write a generic adapter for any predicate.
      
      This lets us remove some very wrong std::function usages. We should
      never be using std::function for predicates to algorithms. This incurs
      an *indirect* call overhead for every evaluation of the predicate, and
      makes it very hard to inline through.
      
      llvm-svn: 202742
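      
      A minimal sketch of the shape of such an adapter (names are
      illustrative, not SetVector's actual internals):
      
        // Wrapping the caller's predicate in a small class template,
        // rather than a std::function, keeps the predicate's concrete
        // type visible so the compiler can inline it per evaluation.
        template <typename UnaryPredicate> class PredicateAdapter {
          UnaryPredicate P;
      
        public:
          explicit PredicateAdapter(UnaryPredicate P) : P(P) {}
          template <typename ArgT> bool operator()(const ArgT &Arg) {
            return P(Arg);
          }
        };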
    • [C++11] Add a basic block range view for RegionInfo · 4abf9d3a
      Tobias Grosser authored
      This also switches the users in LLVM to ensure this functionality is tested.
      
      llvm-svn: 202705
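      
      Sketch of the new view in use (assuming the RegionInfo headers of
      this revision; the function is illustrative):
      
        #include "llvm/Analysis/RegionInfo.h"
        using namespace llvm;
      
        void visitRegionBlocks(Region &R) {
          // blocks() yields each BasicBlock* in the region.
          for (BasicBlock *BB : R.blocks())
            (void)BB; // per-block analysis would go here
        }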
    • [C++11] Add two range adaptor views to User: operands and · 1583e99c
      Chandler Carruth authored
      operand_values. The first provides a range view over operand Use
      objects, and the second provides a range view over the Value*s being
      used by those operands.
      
      The naming is "STL-style" rather than "LLVM-style" because we have
      historically named iterator methods STL-style, and range methods seem to
      have far more in common with their iterator counterparts than with
      "normal" APIs. Feel free to bikeshed on this one if you want, I'm happy
      to change these around if people feel strongly.
      
      I've switched code in SROA and LCG to exercise these mostly to ensure
      they work correctly -- we don't really have an easy way to unittest this
      and they're trivial.
      
      llvm-svn: 202687
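      
      Sketch of the two views on a User such as an Instruction (the
      enclosing function is illustrative):
      
        #include "llvm/IR/Use.h"
        #include "llvm/IR/User.h"
        #include "llvm/IR/Value.h"
        using namespace llvm;
      
        void walkOperands(User &U) {
          // operands(): the Use objects, when slot-level detail matters.
          for (Use &O : U.operands())
            (void)O.getOperandNo();
          // operand_values(): just the Value*s being used.
          for (Value *V : U.operand_values())
            (void)V->getType();
        }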
  3. Feb 26, 2014
    • Fix PR18165: LSR must avoid scaling factors that exceed the limit on truncated use. · 429e9edd
      Andrew Trick authored
      Patch by Michael Zolotukhin!
      
      llvm-svn: 202273
    • [SROA] Use the correct index integer size in GEPs through non-default · dfb2efd0
      Chandler Carruth authored
      address spaces.
      
      This isn't really a correctness issue (the values are truncated), but it's
      much cleaner.
      
      Patch by Matt Arsenault!
      
      llvm-svn: 202252
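      
      The underlying rule, as a hedged sketch (getIntPtrType is the real
      DataLayout API; the wrapper is illustrative):
      
        #include "llvm/IR/DataLayout.h"
        #include "llvm/IR/Type.h"
        using namespace llvm;
      
        // A GEP's index width should match the pointer's own address
        // space, which may differ from address space 0's pointer width.
        Type *gepIndexType(const DataLayout &DL, Type *PtrTy) {
          return DL.getIntPtrType(PtrTy);
        }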
    • [SROA] Teach SROA how to handle pointers from address spaces other than · 286d87ed
      Chandler Carruth authored
      the default.
      
      Based on the patch by Matt Arsenault, D1764!
      
      I switched one place to use the more direct pointer type to compute the
      desired address space, and I reworked the memcpy rewriting section to
      reflect significant refactorings that this patch helped inspire.
      
      Thanks to several of the folks who helped review and improve the patch
      as well.
      
      llvm-svn: 202247
    • [SROA] Split the alignment computation completely for the memcpy rewriting · aa72b93a
      Chandler Carruth authored
      to work independently for the slice side and the other side.
      
      This allows us to only compute the minimum of the two when we actually
      rewrite to a memcpy that needs to take the minimum, and preserve higher
      alignment for one side or the other when rewriting to loads and stores.
      
      This fix was inspired by seeing the result of some refactoring that
      makes addrspace handling better.
      
      llvm-svn: 202242
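      
      The resulting rule, as an illustrative helper (not SROA's actual
      code):
      
        #include <algorithm>
      
        // Only a rewritten memcpy must honor both sides; a rewrite to
        // loads and stores keeps each side's own (possibly higher)
        // alignment.
        unsigned rewrittenMemCpyAlign(unsigned SliceSideAlign,
                                      unsigned OtherSideAlign) {
          return std::min(SliceSideAlign, OtherSideAlign);
        }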
    • [SROA] The original refactoring inspired by the addrspace patch in · 181ed05b
      Chandler Carruth authored
      D1764, which in turn set off the other refactorings to make
      'getSliceAlign()' a sensible thing.
      
      There are two possible inputs to the required alignment of a memory
      transfer intrinsic: the alignment constraints of the source and the
      destination. If we are *only* introducing a (potentially new) offset
      onto one side of the transfer, we don't need to consider the alignment
      constraints of the other side. Use this to simplify the logic feeding
      into alignment computation for unsplit transfers.
      
      Also, hoist the clamp of the magical zero alignment for these intrinsics
      to the more customary one alignment early. This lets several other
      conditions melt away.
      
      No functionality changed. There is a further improvement this exposes
      which *will* change functionality, but that's arriving in a separate
      patch.
      
      llvm-svn: 202232
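      
      The clamp itself, as an illustrative one-liner (not the actual SROA
      helper):
      
        // Memory intrinsics encode "unknown" alignment as 0; normalizing
        // to the customary 1 up front lets later conditions treat
        // alignment uniformly.
        unsigned clampAlign(unsigned IntrinsicAlign) {
          return IntrinsicAlign ? IntrinsicAlign : 1;
        }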
    • [SROA] Yet another slight refactoring that simplifies an API in the · 47954c80
      Chandler Carruth authored
      rewriting logic: don't pass custom offsets for the adjusted pointer to
      the new alloca.
      
      We always passed NewBeginOffset here. Sometimes we spelled it
      BeginOffset, but only when they were in fact equal. What's worse, the API
      is set up so that you can't reasonably call it with anything else -- it
      assumes that you're passing it an offset relative to the *original*
      alloca that happens to fall within the new one. That's the whole point
      of NewBeginOffset, it's the clamped beginning offset.
      
      No functionality changed.
      
      llvm-svn: 202231
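      
      The invariant, sketched with names from the message (the real
      members live in SROA's rewriter):
      
        #include <algorithm>
        #include <cstdint>
      
        // NewBeginOffset is the slice's begin offset clamped to the
        // start of the new alloca.
        uint64_t newBeginOffset(uint64_t BeginOffset,
                                uint64_t NewAllocaBeginOffset) {
          return std::max(BeginOffset, NewAllocaBeginOffset);
        }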
    • [SROA] Simplify the computing of alignment: we only ever need the · 2659e503
      Chandler Carruth authored
      alignment of the slice being rewritten, not any arbitrary offset.
      
      Every caller is really just trying to compute the alignment for the
      whole slice, never for some arbitrary offset. They are also just
      passing a type when they have one to see if we can skip an explicit
      alignment in the IR by using the type's alignment. This makes for a much
      simpler interface.
      
      Another refactoring inspired by the addrspace patch for SROA, although
      only loosely related.
      
      llvm-svn: 202230
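      
      A hedged sketch of that contract (illustrative, not the actual
      getSliceAlign signature):
      
        #include "llvm/IR/DataLayout.h"
        #include "llvm/IR/Type.h"
        using namespace llvm;
      
        // Returning 0 means "use the type's natural alignment", letting
        // the rewritten IR omit an explicit alignment.
        unsigned sliceAlign(unsigned Computed, Type *Ty,
                            const DataLayout &DL) {
          if (Ty && Computed == DL.getABITypeAlignment(Ty))
            return 0;
          return Computed;
        }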
    • [SROA] Use NewBeginOffset in the unsplit case for memset merely for · 735d5bee
      Chandler Carruth authored
      consistency with memcpy rewriting, and fix a latent bug in the alignment
      management for memset.
      
      The alignment issue is that getAdjustedAllocaPtr computes the
      *relative* offset into the new alloca, but the alignment wasn't being
      derived from that relative offset; it was using the absolute offset
      into the old alloca.
      
      I don't think it's possible to write a test case that actually reaches
      this code where the resulting alignment would be observably different,
      but the intent was clearly to use the relative offset within the new
      alloca.
      
      llvm-svn: 202229
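      
      The intended computation, sketched with illustrative names
      (MinAlign is the real llvm::MinAlign):
      
        #include "llvm/Support/MathExtras.h"
        #include <cstdint>
        using namespace llvm;
      
        // The access alignment must be derived from the offset relative
        // to the *new* alloca, not the absolute offset into the old one.
        uint64_t accessAlign(uint64_t NewAllocaAlign,
                             uint64_t NewBeginOffset,
                             uint64_t NewAllocaBeginOffset) {
          return MinAlign(NewAllocaAlign,
                          NewBeginOffset - NewAllocaBeginOffset);
        }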
    • [SROA] Use the members for New{Begin,End}Offset in the rewrite helpers · ea27cf08
      Chandler Carruth authored
      rather than passing them as arguments.
      
      While I generally prefer actual arguments, in this case the readability
      loss is substantial. By using members we avoid repeatedly calculating
      the offsets, and once we're using members it is useful to ensure that
      those names *always* refer to the original-alloca-relative new offset
      for a rewritten slice.
      
      No functionality changed. Follow-up refactoring, all toward getting the
      address space patch merged.
      
      llvm-svn: 202228
    • [SROA] Compute the New{Begin,End}Offset values once for each alloca · c46b6eb3
      Chandler Carruth authored
      slice being rewritten.
      
      We had the same code scattered across most of the visits. Instead,
      compute the new offsets and the slice size once when we start to visit
      a particular slice, and use the member variables from then on. This
      reduces quite a bit of code duplication.
      
      No functionality changed. Refactoring inspired to make it easier to
      apply the address space patch to SROA.
      
      llvm-svn: 202227
    • [SROA] Fix PR18615 with some long overdue simplifications to the bounds · 6aedc106
      Chandler Carruth authored
      checking in SROA.
      
      The primary change is to just rely on uge for checking that the offset
      is within the allocation size. This removes the explicit checks against
      isNegative which were terribly error prone (including the reversed logic
      that led to PR18615) and prevented us from supporting stack allocations
      larger than half the address space.... Ok, so maybe the latter isn't
      *common* but it's a silly restriction to have.
      
      Also, we used to try to support a PHI node which loaded from before the
      start of the allocation if any of the loaded bytes were within the
      allocation. This doesn't make any sense, we have never really supported
      loading or storing *before* the allocation starts. The simplified logic
      just doesn't care.
      
      We continue to allow loading past the end of the allocation in part to
      support cases where there is a PHI and some loads are larger than others
      and the larger ones reach past the end of the allocation. We could solve
      this a different and more conservative way, but I'm still somewhat
      paranoid about this.
      
      llvm-svn: 202224
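      
      The simplified check, as an illustrative helper:
      
        #include <cstdint>
      
        // With an unsigned (uge-style) comparison, a "negative" relative
        // offset wraps to a huge value and fails the same test, so no
        // separate isNegative check is needed, and allocas larger than
        // half the address space just work.
        bool startsInsideAlloca(uint64_t RelativeOffset,
                                uint64_t AllocSize) {
          return RelativeOffset < AllocSize;
        }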
  4. Feb 25, 2014
    • [reassociate] Switch two std::sort calls into std::stable_sort calls as · 7b8e1124
      Chandler Carruth authored
      their inputs come from std::stable_sort and they are not total orders.
      
      I'm not a huge fan of this, but the really bad std::stable_sort is right
      at the beginning of Reassociate. After we commit to stable-sort based
      consistent respect of source order, the downstream sorts shouldn't undo
      that unless they have a total order or they are used in an
      order-insensitive way. Neither appears to be true for these cases.
      I don't have particularly good test cases, but this jumped out by
      inspection when looking for output instability in this pass due to
      changes in the ordering of std::sort.
      
      llvm-svn: 202196
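      
      The distinction, sketched on an illustrative entry type:
      
        #include <algorithm>
        #include <vector>
      
        struct RankedEntry { unsigned Rank; /* ... */ };
      
        // Comparing only by Rank is not a total order: equal-ranked
        // entries are interchangeable to std::sort, which may permute
        // them unpredictably. std::stable_sort preserves their incoming
        // (source) order instead.
        void sortByRank(std::vector<RankedEntry> &Ops) {
          std::stable_sort(Ops.begin(), Ops.end(),
                           [](const RankedEntry &L, const RankedEntry &R) {
                             return L.Rank < R.Rank;
                           });
        }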
    • [SROA] Add an off-by-default *strict* inbounds check to SROA. I had SROA · 3b79b2ab
      Chandler Carruth authored
      implemented this way a long time ago and due to the overwhelming bugs
      that surfaced, moved to a much more relaxed variant. Richard Smith would
      like to understand the magnitude of this problem and it seems fairly
      harmless to keep some flag-controlled logic to get the extremely strict
      behavior here. I'll remove it if it doesn't prove useful.
      
      llvm-svn: 202193
    • Make DataLayout a plain object, not a pass. · 93512512
      Rafael Espindola authored
      Instead, have a DataLayoutPass that holds one. This will allow parts of LLVM
      that don't handle passes to also use DataLayout.
      
      llvm-svn: 202168
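      
      A sketch of the resulting usage (assuming the post-change API; the
      layout string is just an example):
      
        #include "llvm/IR/DataLayout.h"
        using namespace llvm;
      
        // DataLayout is now a plain value constructible from a layout
        // string, usable without any pass manager; DataLayoutPass simply
        // holds one for pass-based clients.
        DataLayout makeDataLayout() {
          return DataLayout("e-m:e-i64:64-n8:16:32:64-S128");
        }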