  1. Mar 22, 2021
    • [mlir][Linalg] Fix linalg on tensor fusion · bcd6424f
      Nicolas Vasilache authored
      - Drop unnecessary occurrences of rewriter.eraseOp: dead linalg ops on tensors should be cleaned up by DCE.
      - Reimplement the part of Linalg-on-tensors fusion that constructs the body and block arguments: the previous implementation had too much magic. Instead, this spells out all cases explicitly and asserts / introduces TODOs for incorrect cases.
      
      As a consequence, we can use the default traversal order for this pattern.
      
      Differential Revision: https://reviews.llvm.org/D99070
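      To illustrate the first point, here is a minimal C++ sketch of a fusion-style rewrite pattern written in that spirit; the pattern name, structure, and header layout are illustrative assumptions, not the code from this revision:

      ```
      #include "mlir/Dialect/Linalg/IR/Linalg.h"
      #include "mlir/IR/PatternMatch.h"

      // Illustrative only: fuse a producer into `consumer`, replace the consumer,
      // and do NOT call rewriter.eraseOp on the producer. Linalg ops on tensors
      // are side-effect free, so once the producer's results are unused the
      // greedy driver's DCE erases it.
      struct FuseIntoConsumerSketch
          : public mlir::OpRewritePattern<mlir::linalg::GenericOp> {
        using OpRewritePattern::OpRewritePattern;

        mlir::LogicalResult
        matchAndRewrite(mlir::linalg::GenericOp consumer,
                        mlir::PatternRewriter &rewriter) const override {
          // ... locate a fusable producer and build the fused op `fusedOp` ...
          // rewriter.replaceOp(consumer, fusedOp->getResults());
          // No rewriter.eraseOp(producer) here.
          return mlir::failure(); // placeholder in this sketch
        }
      };
      ```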
    • [mlir] Add an option to still use bottom-up traversal · c691b968
      Adrian Kuegel authored
      GreedyPatternRewriteDriver was changed from bottom-up traversal to top-down traversal. Not all passes work yet with the new traversal order. To give some time for fixing them, add an option to switch back to bottom-up traversal. Use this option in FusionOfTensorOpsPass, which fails otherwise.
      
      Differential Revision: https://reviews.llvm.org/D99059
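      For reference, a minimal sketch of how a pass can request the bottom-up order, written against the GreedyRewriteConfig API in upstream MLIR; the exact knob this revision introduced may have had a different form (for example, a plain bool parameter):

      ```
      #include "mlir/Transforms/GreedyPatternRewriteDriver.h"

      // Run the greedy driver with bottom-up traversal explicitly requested
      // (illustrative; API details may have shifted since this patch).
      mlir::LogicalResult
      runBottomUp(mlir::Operation *root,
                  const mlir::FrozenRewritePatternSet &patterns) {
        mlir::GreedyRewriteConfig config;
        config.useTopDownTraversal = false; // opt back into bottom-up order
        return mlir::applyPatternsAndFoldGreedily(root, patterns, config);
      }
      ```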
  2. Mar 15, 2021
    • [MLIR] Create memref dialect and move dialect-specific ops from std. · e2310704
      Julian Gross authored
      Create the memref dialect and move dialect-specific ops
      from std dialect to this dialect.
      
      Moved ops:
      AllocOp -> MemRef_AllocOp
      AllocaOp -> MemRef_AllocaOp
      AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
      DeallocOp -> MemRef_DeallocOp
      DimOp -> MemRef_DimOp
      MemRefCastOp -> MemRef_CastOp
      MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
      GetGlobalMemRefOp -> MemRef_GetGlobalOp
      GlobalMemRefOp -> MemRef_GlobalOp
      LoadOp -> MemRef_LoadOp
      PrefetchOp -> MemRef_PrefetchOp
      ReshapeOp -> MemRef_ReshapeOp
      StoreOp -> MemRef_StoreOp
      SubViewOp -> MemRef_SubViewOp
      TransposeOp -> MemRef_TransposeOp
      TensorLoadOp -> MemRef_TensorLoadOp
      TensorStoreOp -> MemRef_TensorStoreOp
      TensorToMemRefOp -> MemRef_BufferCastOp
      ViewOp -> MemRef_ViewOp
      
      The roadmap to split the memref dialect from std is discussed here:
      https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667
      
      Differential Revision: https://reviews.llvm.org/D98041
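      After the split, C++ code that previously built the std forms of these ops creates them through the MemRef dialect classes instead. A minimal sketch of the renamed alloc/dealloc pair, assuming the MemRef dialect is loaded and using an illustrative shape:

      ```
      #include "mlir/Dialect/MemRef/IR/MemRef.h"
      #include "mlir/IR/Builders.h"

      // Build a memref.alloc / memref.dealloc pair via the moved op classes
      // (formerly std's AllocOp / DeallocOp).
      void buildAllocDeallocPair(mlir::OpBuilder &b, mlir::Location loc) {
        auto bufferType = mlir::MemRefType::get({8, 64}, b.getF32Type());
        mlir::Value buffer = b.create<mlir::memref::AllocOp>(loc, bufferType);
        b.create<mlir::memref::DeallocOp>(loc, buffer);
      }
      ```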
  3. Feb 28, 2021
    • [mlir][sparse] fixed inaccuracy in maintaining universal index · 6afaea66
      Aart Bik authored
      The universal index was maintained whenever dense indices were still
      in place and lattice points followed. However, it should only
      be kept if one of those following lattice points actually
      consumes the universal index. This change also fixes an
      inaccuracy caused by a missing broadcast around a vector invariant.
      
      Reviewed By: bixia
      
      Differential Revision: https://reviews.llvm.org/D97594
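      A schematic restatement of the corrected rule, with hypothetical helper names rather than the actual sparse-compiler code: the universal index survives only if at least one of the lattice points that follow consumes it.

      ```
      #include "llvm/ADT/ArrayRef.h"

      // Hypothetical predicate: does lattice point `p` consume the universal index?
      bool consumesUniversalIndex(unsigned p);

      // Keep the universal index only if some following lattice point uses it;
      // dense indices still being in place is not sufficient on its own.
      bool keepUniversalIndex(llvm::ArrayRef<unsigned> followingLatticePoints) {
        for (unsigned p : followingLatticePoints)
          if (consumesUniversalIndex(p))
            return true;
        return false;
      }
      ```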
    • [mlir][linalg] Add symbolic type conversion to linalg named ops. · 2ceedc3a
      Stella Laurenzo authored
      This enables the following kind of construct in the DSL: it generates a named op that is polymorphic over the numeric type variables `T1`, `T2`, and `U`, inserting the correct arithmetic casts at construction time:
      
      ```
      @tc_def_op
      def polymorphic_matmul(A=TensorDef(T1, S.M, S.K),
                             B=TensorDef(T2, S.K, S.N),
                             C=TensorDef(U, S.M, S.N, output=True)):
        implements(ContractionOpInterface)
        C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
      ```
      
      Presently, this only supports type variables that are bound to the element type of one of the arguments, although a further extension that allows binding a type variable to an attribute would add expressiveness and may be useful for some formulations. This is left to a future patch. In addition, this patch does not yet materialize the verifier support that ensures types are bound correctly (for such simple examples, failing to do so will yield IR that fails verification; it just won't yet fail with a precise error).
      
      Note that the full grid of extension/truncation/int<->float conversions is supported, but many of them are lossy, so higher-level code needs to be mindful of numerics (that is not the job of this level).
      
      As-is, this should be sufficient for most integer matmul scenarios we work with in typical quantization schemes.
      
      Differential Revision: https://reviews.llvm.org/D97603