  1. Oct 27, 2020
    • [mlir][Pattern] Add a new FrozenRewritePatternList class · 3fffffa8
      River Riddle authored
      This class represents a rewrite pattern list that has been frozen and is thus immutable. It replaces the uses of OwningRewritePatternList in pattern-driver-related APIs, such as dialect conversion. When PDL becomes more prevalent, this API will allow for optimizing a set of patterns once, without needing to do so per run of a pass.
      
      Differential Revision: https://reviews.llvm.org/D89104
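The one-time freezing idea described above can be sketched in Python (a hypothetical analogy, not the MLIR C++ API; all class and function names here are invented for illustration):

```python
# Hypothetical sketch of a "frozen" pattern list: a mutable set of rewrite
# patterns is frozen once (sorted by benefit, made immutable), so drivers
# that apply the patterns need not redo that preparation on every run.

class RewritePattern:
    def __init__(self, name, benefit, apply_fn):
        self.name = name
        self.benefit = benefit    # higher-benefit patterns are tried first
        self.apply = apply_fn     # returns a rewritten value, or None on failure

class FrozenPatternList:
    """Immutable, pre-sorted view of a pattern list (one-time cost)."""
    def __init__(self, patterns):
        # The one-time optimization: order patterns by decreasing benefit.
        self._patterns = tuple(sorted(patterns, key=lambda p: -p.benefit))

    def apply_first_match(self, value):
        for p in self._patterns:
            result = p.apply(value)
            if result is not None:
                return result
        return value  # no pattern matched

patterns = [
    RewritePattern("increment-odd", benefit=1,
                   apply_fn=lambda v: v + 1 if v % 2 == 1 else None),
    RewritePattern("halve-even", benefit=2,
                   apply_fn=lambda v: v // 2 if v % 2 == 0 else None),
]
frozen = FrozenPatternList(patterns)
```

Once frozen, the same list can be handed to any number of driver runs without re-sorting.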
    • [mlir][NFC] Move around the code related to PatternRewriting to improve layering · b6eb26fd
      River Riddle authored
      There are several pieces of pattern rewriting infra in IR/ that really shouldn't be there. This revision moves those pieces to a better location such that they are easier to evolve in the future (e.g. with PDL). More concretely, this revision does the following:
      
      * Create a Transforms/GreedyPatternRewriteDriver.h and move the apply*andFold methods there.
      The definitions for these methods are already in Transforms/ so it doesn't make sense for the declarations to be in IR.
      
      * Create a new lib/Rewrite library and move PatternApplicator there.
      This new library will be focused on applying rewrites, and will also include compiling rewrites with PDL.
      
      Differential Revision: https://reviews.llvm.org/D89103
    • [mlir][Linalg] Miscellaneous enhancements to cover more fusion cases. · 78f37b74
      MaheshRavishankar authored
      Adds support for
      - Dropping unit dimension loops for indexed_generic ops.
      - Folding consecutive collapsing (or expanding) reshapes when the result
        (or source) is a scalar.
      - Fixes to indexed_generic -> generic fusion when zero-dim tensors are
        involved.
      
      Differential Revision: https://reviews.llvm.org/D90118
  2. Oct 26, 2020
    • [mlir][Linalg] Add basic support for TileAndFuse on Linalg on tensors. · 37e0fdd0
      Nicolas Vasilache authored
      This revision allows fusing the producer of input tensors into the consumer under a tiling transformation (which produces subtensors).
      Many pieces are still missing (e.g. support for init_tensors, better refactoring of the LinalgStructuredOp interface support, merging implementations and reusing code), but this still allows getting started.
      
      The greedy pass itself is just for testing purposes and will be extracted in a separate test pass.
      
      Differential revision: https://reviews.llvm.org/D89491
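The tile-and-fuse idea above can be illustrated with a hypothetical Python sketch (not MLIR; all names are invented): instead of materializing the producer's full result, each consumer tile recomputes just the producer slice, the "subtensor", that it needs:

```python
# Illustrative tile-and-fuse on tensors. The producer is fused into the
# consumer's tile loop: only the subtensor feeding the current tile is
# computed, rather than the entire producer result up front.

def producer(x):
    # A simple elementwise producer (stands in for any Linalg producer).
    return [v * 2 for v in x]

def consumer_unfused(x):
    # Reference semantics: full producer result, then the consumer op.
    return [v + 1 for v in producer(x)]

def consumer_fused_tiled(x, tile):
    out = []
    for i in range(0, len(x), tile):
        sub = x[i:i + tile]                    # subtensor of the input
        prod_tile = producer(sub)              # producer fused into the tile loop
        out.extend(v + 1 for v in prod_tile)   # consumer applied per tile
    return out
```

The fused, tiled computation produces the same result as the unfused one; only the schedule (and the amount of intermediate data alive at once) changes.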
  3. Oct 20, 2020
  4. Oct 14, 2020
  5. Oct 13, 2020
  6. Oct 12, 2020
  7. Oct 10, 2020
  8. Oct 09, 2020
  9. Oct 08, 2020
  10. Oct 07, 2020
  11. Oct 06, 2020
  12. Oct 05, 2020
  13. Oct 02, 2020
    • [mlir] Add a subtensor operation · e3de249a
      Nicolas Vasilache authored
      This revision introduces a `subtensor` op, which is the counterpart of `subview` for a tensor operand. This also refactors the relevant pieces to allow reusing the `subview` implementation where appropriate.
      
      This operation will be used to implement tiling for Linalg on tensors.
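What such a value-semantics slice computes can be shown with a small hypothetical Python sketch (the function name and signature are invented for illustration, not the actual op definition):

```python
# Illustrative semantics of a 1-D "subtensor" extraction: given an offset,
# size and stride, produce a fresh tensor (value semantics), by contrast
# with `subview`, which yields a view into an existing buffer.

def subtensor(t, offset, size, stride=1):
    # The result is a new list; the source tensor is left untouched.
    return [t[offset + i * stride] for i in range(size)]
```

A tiling transformation on tensors can then express "the part of the operand this tile reads" as such an extraction.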
  14. Oct 01, 2020
  15. Sep 30, 2020
    • [mlir][Linalg] Add pattern to tile and fuse Linalg operations on buffers. · c694588f
      MaheshRavishankar authored
      The pattern is structured similarly to other patterns like
      LinalgTilingPattern. The fusion pattern takes options that allow
      fusing with the producers of multiple operands at once.
      - The pattern fuses only at the level that is known to be legal, i.e.,
        if a reduction loop in the consumer is tiled, then fusion should
        happen "before" this loop. Some refactoring of the fusion code is
        needed to fuse only where it is legal.
      - Since fusion on buffers uses the LinalgDependenceGraph, which is
        not mutable in place, the fusion pattern keeps the original
        operations in the IR but tags them with a marker that can later be
        used to find the original operations.
      
      This change also fixes an issue with tiling and
      distribution/interchange where a tile size of 0 for a loop was not
      accounted for.
      
      Differential Revision: https://reviews.llvm.org/D88435
    • [mlir][Linalg] Generalize the logic to compute reassociation maps · 892fdc92
      Mahesh Ravishankar authored
      while folding tensor_reshape op.
      
      While folding reshapes that introduce unit-extent dims, the logic to
      compute the reassociation maps can be generalized to handle some
      corner cases, for example, when the folded shape still has unit-extent
      dims that correspond to folded unit-extent dims of the expanded shape.
      
      Differential Revision: https://reviews.llvm.org/D88521
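A minimal Python sketch of what a reassociation map is (illustrative only, not the MLIR implementation): each result dimension of a collapsing reshape is associated with the contiguous group of source dimensions that fold into it, with unit-extent dims absorbed into the current group:

```python
# Hypothetical computation of a reassociation map for a collapsing
# reshape. Each destination dim d is matched with the run of source dims
# whose sizes multiply to d; unit-extent source dims (size 1) are folded
# into whichever group is currently open.

def reassociation_map(src_shape, dst_shape):
    groups, i = [], 0
    for d in dst_shape:
        group, prod = [], 1
        while i < len(src_shape) and (prod < d or src_shape[i] == 1):
            group.append(i)
            prod *= src_shape[i]
            i += 1
        assert prod == d, "shapes are not reassociable"
        groups.append(group)
    assert i == len(src_shape), "leftover source dims"
    return groups
```

For example, collapsing `2x1x3` to `2x3` groups the unit dim with the leading dim, and collapsing `1x4x1x5` to `4x5` absorbs both unit dims into the first group.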
    • [mlir][Linalg] Tile sizes for Conv ops vectorization added as pass arguments · 0b17d475
      Jakub Lichman authored
      The current setup for conv op vectorization does not enable the user to
      specify tile sizes or the dimensions to vectorize. This commit changes
      that by adding tile sizes as pass arguments. Every dimension with a
      corresponding tile size > 1 is automatically vectorized.
      
      Differential Revision: https://reviews.llvm.org/D88533
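The selection rule stated above ("every dimension with tile size > 1 is vectorized") is simple enough to sketch directly (a hypothetical Python helper, not part of the pass):

```python
# Illustrative version of the rule: given per-dimension tile sizes passed
# as pass arguments, a dimension is marked for vectorization exactly when
# its tile size is greater than 1.

def dims_to_vectorize(tile_sizes):
    return [d for d, ts in enumerate(tile_sizes) if ts > 1]
```

So tile sizes `[1, 3, 3, 1]` would select dimensions 1 and 2 for vectorization and leave dimensions 0 and 3 scalar.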
  16. Sep 29, 2020
  17. Sep 23, 2020
  18. Sep 22, 2020
  19. Sep 18, 2020
    • [mlir][Linalg] Evolve named ops to use assembly form and support linalg on tensors. · 93fd30ba
      Nicolas Vasilache authored
      This revision allows representing a reduction at the level of linalg on tensors for named ops. When a structured op has a reduction and returns tensor(s), new conventions are added and documented.
      
      As an illustration, the syntax for a `linalg.matmul` writing into a buffer is:
      
      ```
        linalg.matmul ins(%a, %b : memref<?x?xf32>, tensor<?x?xf32>)
                     outs(%c : memref<?x?xf32>)
      ```
      
      whereas the syntax for a `linalg.matmul` returning a new tensor is:
      
      ```
        %d = linalg.matmul ins(%a, %b : tensor<?x?xf32>, memref<?x?xf32>)
                          init(%c : memref<?x?xf32>)
                            -> tensor<?x?xf32>
      ```
      
      Other parts of linalg will be extended accordingly to allow mixed buffer/tensor semantics in the presence of reductions.
  20. Sep 17, 2020
    • [mlir][Linalg] Convolution tiling added to ConvOp vectorization pass · 347d59b1
      Jakub Lichman authored
      ConvOp vectorization currently supports only convolutions of static
      shapes with dimensions of size either 3 (vectorized) or 1 (not
      vectorized), as the underlying vectors have to be of static shape as
      well. This commit adds support for convolutions of any size, as well
      as dynamic shapes, by leveraging the existing matmul infrastructure to
      tile both the input and the kernel to sizes accepted by the previous
      version of ConvOp vectorization. In the future this pass can be
      extended to take a "tiling mask" as user input, which will enable
      vectorization of user-specified dimensions.
      
      Differential Revision: https://reviews.llvm.org/D87676
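The tiling idea above can be illustrated with a hypothetical 1-D sketch in Python (not the actual pass): an arbitrarily sized convolution is computed over fixed-size output tiles, so the inner computation always sees statically sized work, mirroring the size-3 restriction the static vectorizer imposes:

```python
# Illustrative 1-D convolution computed tile by tile. TILE = 3 mirrors
# the fixed tile size the static vectorizer accepts; the outer loop
# handles inputs of any (including dynamic) length.

TILE = 3

def conv1d_tiled(inp, ker):
    n = len(inp) - len(ker) + 1          # number of output elements
    out = [0] * n
    for base in range(0, n, TILE):
        # Each iteration computes one fixed-size tile of outputs
        # (the last tile may be partial at the boundary).
        for j in range(base, min(base + TILE, n)):
            out[j] = sum(inp[j + k] * ker[k] for k in range(len(ker)))
    return out
```

Tiling thus bridges the gap between dynamically shaped inputs and a vectorizer that only handles fixed static sizes.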