  1. Oct 27, 2020
      [mlir][Pattern] Add a new FrozenRewritePatternList class · 3fffffa8
      River Riddle authored
      This class represents a rewrite pattern list that has been frozen, and is thus immutable. It replaces the uses of OwningRewritePatternList in pattern-driver-related APIs, such as dialect conversion. When PDL becomes more prevalent, this API will allow for optimizing a set of patterns once, rather than on each run of a pass.
      
      Differential Revision: https://reviews.llvm.org/D89104
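The freeze-once idea can be sketched in plain C++. This is a simplified stand-in, not MLIR's actual API: `Pattern`, `OwningPatternList`, and `FrozenPatternList` below are illustrative names, and the "optimization" done at freeze time is just a benefit sort.

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Illustrative stand-ins, not MLIR's actual classes.
struct Pattern {
  std::string name;
  int benefit; // higher benefit = tried first
};

// Mutable list the user populates.
class OwningPatternList {
public:
  void insert(Pattern p) { patterns.push_back(std::move(p)); }
  std::vector<Pattern> take() && { return std::move(patterns); }

private:
  std::vector<Pattern> patterns;
};

// Immutable snapshot: the pattern set is "optimized" (here: sorted by
// benefit) exactly once, at freeze time, instead of once per pass run.
class FrozenPatternList {
public:
  explicit FrozenPatternList(OwningPatternList &&list)
      : patterns(std::move(list).take()) {
    std::stable_sort(patterns.begin(), patterns.end(),
                     [](const Pattern &a, const Pattern &b) {
                       return a.benefit > b.benefit;
                     });
  }
  const std::vector<Pattern> &get() const { return patterns; }

private:
  std::vector<Pattern> patterns; // never mutated after construction
};

// Demo: freezing sorts once; every later driver run reuses the result.
std::vector<std::string> demoFreezeOrder() {
  OwningPatternList list;
  list.insert({"low-benefit", 1});
  list.insert({"high-benefit", 10});
  FrozenPatternList frozen(std::move(list));
  std::vector<std::string> names;
  for (const Pattern &p : frozen.get())
    names.push_back(p.name);
  return names;
}
```

A driver holding a `FrozenPatternList` can then be re-run many times without repeating the freeze-time work.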
      [mlir][NFC] Move around the code related to PatternRewriting to improve layering · b6eb26fd
      River Riddle authored
      There are several pieces of pattern rewriting infra in IR/ that really shouldn't be there. This revision moves those pieces to a better location so that they are easier to evolve in the future (e.g. with PDL). More concretely, this revision does the following:
      
      * Create a Transforms/GreedyPatternRewriteDriver.h and move the apply*andFold methods there.
      The definitions for these methods are already in Transforms/ so it doesn't make sense for the declarations to be in IR.
      
      * Create a new lib/Rewrite library and move PatternApplicator there.
      This new library will be focused on applying rewrites, and will also include compiling rewrites with PDL.
      
      Differential Revision: https://reviews.llvm.org/D89103
      [mlir][Linalg] Miscellaneous enhancements to cover more fusion cases. · 78f37b74
      MaheshRavishankar authored
      Adds support for
      - Dropping unit-dimension loops for indexed_generic ops.
      - Folding consecutive collapsing (or expanding) reshapes when the result
        (or src) is a scalar.
      - Fixes to indexed_generic -> generic fusion when zero-dim tensors are
        involved.
      
      Differential Revision: https://reviews.llvm.org/D90118
  2. Oct 26, 2020
      [mlir][Linalg] Add basic support for TileAndFuse on Linalg on tensors. · 37e0fdd0
      Nicolas Vasilache authored
      This revision allows fusing the producer of input tensors into the consumer under a tiling transformation (which produces subtensors).
      Many pieces are still missing (e.g. support for init_tensors, better refactoring of the LinalgStructuredOp interface, merging implementations and reusing code), but this still allows getting started.
      
      The greedy pass itself is just for testing purposes and will be extracted in a separate test pass.
      
      Differential revision: https://reviews.llvm.org/D89491
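The effect of fusing a producer under tiling can be illustrated with a tiny C++ model (hypothetical element-wise producer `y[i] = 2 * x[i]`, not the actual Linalg code): instead of materializing the producer's full result tensor, only the subtensor that the tiled consumer reads is computed.

```cpp
#include <vector>

// Hypothetical model: the producer computes y[i] = 2 * x[i]. Under
// tiling, the consumer reads only the half-open range
// [offset, offset + size), so the fused producer is evaluated on just
// that slice instead of the whole tensor.
std::vector<int> fusedProducerTile(const std::vector<int> &x, int offset,
                                   int size) {
  std::vector<int> tile(size);
  for (int i = 0; i < size; ++i)
    tile[i] = 2 * x[offset + i]; // producer applied tile-locally
  return tile;
}
```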
  5. Oct 12, 2020
      [mlir][Linalg] Add named Linalg ops on tensor to buffer support. · 422aaf31
      Nicolas Vasilache authored
      This revision introduces support for buffer allocation for any named linalg op.
      To avoid template instantiations for many ops, a new ConversionPattern is created that captures the LinalgOp interface.
      
      Some APIs are updated to remain consistent with MLIR style:
      `OwningRewritePatternList * -> OwningRewritePatternList &`
      `BufferAssignmentTypeConverter * -> BufferAssignmentTypeConverter &`
      
      Differential revision: https://reviews.llvm.org/D89226
      [mlir] Move Linalg tensors-to-buffers tests to Linalg tests. · b98e5e0f
      Alexander Belyaev authored
      The buffer placement preparation tests in
      test/Transforms/buffer-placement-preparation* are using Linalg as a test
      dialect, which leads to confusion and copy-pasting: Linalg is being
      extended, and when TensorsToBuffers.cpp is changed, TestBufferPlacement is
      only sometimes kept in sync, which should not be the case.
      
      This allowed a bug to go unnoticed, because the tests were in a different directory and the patterns were slightly off.
      
      Differential Revision: https://reviews.llvm.org/D89209
  11. Oct 02, 2020
      [mlir] Add a subtensor operation · e3de249a
      Nicolas Vasilache authored
      This revision introduces a `subtensor` op, which is the counterpart of `subview` for a tensor operand. This also refactors the relevant pieces to allow reusing the `subview` implementation where appropriate.
      
      This operation will be used to implement tiling for Linalg on tensors.
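For the 1-D case, the extraction semantics of such an op (mirroring `subview`'s offset/size/stride parameters) can be modeled in a few lines of C++; this is an illustrative sketch, not the op's specification.

```cpp
#include <vector>

// 1-D model of an offset/size/stride extraction:
// result[i] = src[offset + i * stride], for i in [0, size).
std::vector<int> extractSlice(const std::vector<int> &src, int offset,
                              int size, int stride) {
  std::vector<int> result;
  result.reserve(size);
  for (int i = 0; i < size; ++i)
    result.push_back(src[offset + i * stride]);
  return result;
}
```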
  13. Sep 30, 2020
      [mlir][Linalg] Add pattern to tile and fuse Linalg operations on buffers. · c694588f
      MaheshRavishankar authored
      The pattern is structured similarly to other patterns like
      LinalgTilingPattern. The fusion pattern takes options that allow fusing
      with the producers of multiple operands at once.
      - The pattern fuses only at the level that is known to be legal, i.e.
        if a reduction loop in the consumer is tiled, then fusion should
        happen "before" this loop. Some refactoring of the fusion code is
        needed to fuse only where it is legal.
      - Since fusion on buffers uses the LinalgDependenceGraph, which is not
        mutable in place, the fusion pattern keeps the original operations in
        the IR but tags them with a marker that can later be used to find
        them.
      
      This change also fixes an issue with tiling and
      distribution/interchange where a tile size of 0 for a loop was not
      accounted for.
      
      Differential Revision: https://reviews.llvm.org/D88435
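The tile-size-0 convention behind that fix can be sketched as follows (an illustrative helper, not the actual MLIR code): a tile size of 0 means "do not tile this loop", so interchange and distribution helpers must only consider the loops that were actually tiled.

```cpp
#include <vector>

// Tile size 0 conventionally means "leave this loop untiled"; helpers
// that interchange or distribute tiled loops must skip those dimensions.
std::vector<int> tiledLoopDims(const std::vector<int> &tileSizes) {
  std::vector<int> dims;
  for (int i = 0, e = static_cast<int>(tileSizes.size()); i < e; ++i)
    if (tileSizes[i] > 0)
      dims.push_back(i); // only dimensions with a non-zero tile size
  return dims;
}
```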
      [mlir][Linalg] Generalize the logic to compute reassociation maps while folding tensor_reshape op. · 892fdc92
      Mahesh Ravishankar authored
      
      While folding reshapes that introduce unit-extent dims, the logic to
      compute the reassociation maps can be generalized to handle some
      corner cases, for example, when the folded shape still has unit-extent
      dims but corresponds to folded unit-extent dims of the expanded shape.
      
      Differential Revision: https://reviews.llvm.org/D88521
      [mlir][Linalg] Tile sizes for Conv ops vectorization added as pass arguments · 0b17d475
      Jakub Lichman authored
      The current setup for conv op vectorization does not let the user specify
      tile sizes or the dimensions to vectorize. This commit changes that by
      adding tile sizes as pass arguments. Every dimension with a corresponding
      tile size > 1 is automatically vectorized.
      
      Differential Revision: https://reviews.llvm.org/D88533
  17. Sep 17, 2020
      [mlir][Linalg] Convolution tiling added to ConvOp vectorization pass · 347d59b1
      Jakub Lichman authored
      ConvOp vectorization currently supports only convolutions of static shapes
      with dimensions of size either 3 (vectorized) or 1 (not vectorized), as the
      underlying vectors have to be of static shape as well. This commit adds
      support for convolutions of any size, as well as dynamic shapes, by
      leveraging the existing matmul infrastructure to tile both the input and
      the kernel to sizes accepted by the previous version of ConvOp vectorization.
      In the future this pass can be extended to take a "tiling mask" as user
      input, which will enable vectorization of user-specified dimensions.
      
      Differential Revision: https://reviews.llvm.org/D87676
  22. Sep 03, 2020
      [mlir][Linalg] Wrong tile size for convolutions fixed · 8d35080e
      Jakub Lichman authored
      Sizes of tiles (subviews) are bigger by 1 than they should be. Consider a
      1D convolution without batches or channels, and let m iterate over the
      output and n over the kernel; the input is then accessed at m + n. During
      tiling, subview sizes for convolutions are computed by composing the
      requested tile size and the kernel size with the above expression, so for
      a tile size of 2 the subview size becomes 2 + size(n), which is bigger by
      one than it should be, since the kernel is moved only once. The underlying
      problem is that the range is not turned into a closed interval before the
      composition. This commit fixes the problem by first turning ranges into
      closed intervals (subtracting 1) and turning them back into half-open
      intervals (adding 1) after the composition.
      
      Differential Revision: https://reviews.llvm.org/D86638
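The off-by-one can be reproduced numerically. For tile size t and kernel size k, the accesses m + n with m in [0, t) and n in [0, k) reach at most (t - 1) + (k - 1), so the required subview size is t + k - 1, not t + k; converting to closed intervals before composing yields exactly that.

```cpp
// Naive composition of the half-open ranges [0, t) and [0, k) for the
// access m + n: yields t + k, one more element than is ever touched.
int naiveSubviewSize(int tileSize, int kernelSize) {
  return tileSize + kernelSize;
}

// Fix: subtract 1 to get closed intervals, compose, then add 1 to go
// back to a half-open size.
int fixedSubviewSize(int tileSize, int kernelSize) {
  int tileMax = tileSize - 1;           // closed: [0, tileSize - 1]
  int kernelMax = kernelSize - 1;       // closed: [0, kernelSize - 1]
  int lastAccess = tileMax + kernelMax; // max value of m + n
  return lastAccess + 1;                // back to half-open: the size
}
```

For the commit's example (tile size 2), the fixed size is 2 + size(n) - 1, one less than the naive result.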
  23. Sep 02, 2020
      [mlir] Extend BufferAssignmentTypeConverter with result conversion callbacks · 39cf83cc
      Ehsan Toosi authored
      In this PR, users of BufferPlacement can configure the
      BufferAssignmentTypeConverter. These new configurations give the user more
      freedom in converting function signatures, and in return and call
      operation conversions.
      
      These are the new features:
          - Accepting callback functions for decomposing types (i.e. 1-to-N type
          conversion, such as unpacking tuple types).
          - Defining ResultConversionKind to specify whether a function result
          with a certain type should be appended to the function argument list
          or kept as a function result. (Usage:
          converter.setResultConversionKind<MemRefType>(AppendToArgumentList))
          - Accepting callback functions for composing or decomposing values
          (i.e. N-to-1 and 1-to-N value conversion).
      
      Differential Revision: https://reviews.llvm.org/D85133
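The 1-to-N decomposition callback can be sketched with a toy converter in C++. The names and string-based "types" below are illustrative only, not the BufferAssignmentTypeConverter API.

```cpp
#include <functional>
#include <string>
#include <vector>

// Toy converter: a user-registered callback maps one "type" to N types
// (e.g. unpacking a tuple); unregistered types pass through 1-to-1.
using DecomposeFn =
    std::function<std::vector<std::string>(const std::string &)>;

struct ToyTypeConverter {
  DecomposeFn decompose;

  std::vector<std::string> convert(const std::vector<std::string> &types) {
    std::vector<std::string> out;
    for (const std::string &t : types) {
      std::vector<std::string> parts =
          decompose ? decompose(t) : std::vector<std::string>{t};
      out.insert(out.end(), parts.begin(), parts.end());
    }
    return out;
  }
};

// Demo: a tuple "type" decomposes into its two element types, while
// other types are forwarded unchanged.
std::vector<std::string> demoDecompose() {
  ToyTypeConverter conv;
  conv.decompose = [](const std::string &t) -> std::vector<std::string> {
    if (t == "tuple<i32, f32>")
      return {"i32", "f32"}; // 1 -> N: unpack the tuple
    return {t};              // 1 -> 1 passthrough
  };
  return conv.convert({"tuple<i32, f32>", "i64"});
}
```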