  1. Mar 21, 2021
  2. Mar 19, 2021
  3. Mar 18, 2021
  4. Mar 15, 2021
    • [MLIR] Create memref dialect and move dialect-specific ops from std. · e2310704
      Julian Gross authored
      Create the memref dialect and move dialect-specific ops
      from the std dialect to this new dialect.
      
      Moved ops:
      AllocOp -> MemRef_AllocOp
      AllocaOp -> MemRef_AllocaOp
      AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
      DeallocOp -> MemRef_DeallocOp
      DimOp -> MemRef_DimOp
      MemRefCastOp -> MemRef_CastOp
      MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
      GetGlobalMemRefOp -> MemRef_GetGlobalOp
      GlobalMemRefOp -> MemRef_GlobalOp
      LoadOp -> MemRef_LoadOp
      PrefetchOp -> MemRef_PrefetchOp
      ReshapeOp -> MemRef_ReshapeOp
      StoreOp -> MemRef_StoreOp
      SubViewOp -> MemRef_SubViewOp
      TransposeOp -> MemRef_TransposeOp
      TensorLoadOp -> MemRef_TensorLoadOp
      TensorStoreOp -> MemRef_TensorStoreOp
      TensorToMemRefOp -> MemRef_BufferCastOp
      ViewOp -> MemRef_ViewOp
      
      The roadmap to split the memref dialect from std is discussed here:
      https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667
      
      Differential Revision: https://reviews.llvm.org/D98041
  5. Mar 13, 2021
  6. Mar 10, 2021
  7. Mar 09, 2021
  8. Mar 05, 2021
  9. Mar 04, 2021
  10. Mar 03, 2021
  11. Mar 02, 2021
  12. Feb 28, 2021
    • [mlir][sparse] fixed inaccuracy in maintaining universal index · 6afaea66
      Aart Bik authored
      The universal index was previously maintained whenever dense indices
      were still in place and lattice points followed. However, it should
      only be kept if one of those following lattice points actually
      consumes the universal index. This change also fixes an inaccuracy
      caused by a missing broadcast around a vector invariant.
      
      Reviewed By: bixia
      
      Differential Revision: https://reviews.llvm.org/D97594
    • [mlir][linalg] Add symbolic type conversion to linalg named ops. · 2ceedc3a
      Stella Laurenzo authored
      This enables the following kind of construct in the DSL: a named op that is polymorphic over the numeric type variables `T1`, `T2`, and `U`, generating the correct arithmetic casts at construction time:
      
      ```python
      @tc_def_op
      def polymorphic_matmul(A=TensorDef(T1, S.M, S.K),
                             B=TensorDef(T2, S.K, S.N),
                             C=TensorDef(U, S.M, S.N, output=True)):
        implements(ContractionOpInterface)
        C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
      ```
      
      Presently, this only supports type variables that are bound to the element type of one of the arguments, although a further extension that allows binding a type variable to an attribute would add expressiveness and may be useful for some formulations; that is left to a future patch. In addition, this patch does not yet materialize the verifier support that ensures types are bound correctly (for simple examples like this, failing to do so yields IR that fails verification; it just won't yet fail with a precise error).
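To make the computed semantics concrete, here is a plain-Python sketch (hypothetical, not the MLIR implementation; the `cast` argument stands in for the generated arithmetic casts to the output element type `U`):

```python
def polymorphic_matmul(A, B, cast):
    # Computes C[m, n] += cast(A[m, k]) * cast(B[k, n]), with both
    # operands cast to the output element type before accumulating.
    M, K, N = len(A), len(A[0]), len(B[0])
    C = [[0] * N for _ in range(M)]
    for m in range(M):
        for n in range(N):
            for k in range(K):
                C[m][n] += cast(A[m][k]) * cast(B[k][n])
    return C

# e.g. narrow integer inputs accumulated in a wider integer type:
print(polymorphic_matmul([[1, 2]], [[3], [4]], int))  # [[11]]
```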
      
      Note that the full grid of extension/truncation/int<->float conversions is supported, but many of them are lossy, and higher-level code needs to be mindful of numerics (that is not the job of this level).
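As an illustration of that lossiness (a stdlib-only Python sketch, not tied to the generated IR), integer narrowing and rounding through an f32 both discard information:

```python
import struct

def trunc_i8(x):
    # Two's-complement truncation to 8 bits, as an i32 -> i8 cast would do.
    x &= 0xFF
    return x - 256 if x >= 128 else x

def roundtrip_f32(x):
    # Round-trip through an IEEE-754 binary32 to observe rounding.
    return struct.unpack('f', struct.pack('f', x))[0]

print(trunc_i8(300))                          # 44 — 300 wraps mod 256
print(roundtrip_f32(2.0**24 + 1) == 2.0**24)  # True — 2**24 + 1 is not representable in f32
```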
      
      As-is, this should be sufficient for most integer matmul scenarios we work with in typical quantization schemes.
      
      Differential Revision: https://reviews.llvm.org/D97603
  13. Feb 26, 2021
  14. Feb 25, 2021
  15. Feb 24, 2021
  16. Feb 23, 2021
    • [mlir][sparse] incorporate vector index into address computation · 17fa9198
      Aart Bik authored
      When computing a dense address, a vectorized index must be accounted
      for properly. This bug previously went undetected because most cases
      yield 0 * prev + i, which folds away the scalar part. Now all cases
      are handled correctly.
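A small Python sketch (hypothetical names, for illustration only) of why the bug was masked: the linearized address is `prev * stride + i`, and whenever the scalar prefix `prev` is 0 a dropped scalar term is invisible:

```python
def dense_address(prev, stride, idx_vec):
    # Linearized dense address: the scalar prefix (prev * stride) must be
    # broadcast across the whole vector of innermost indices.
    return [prev * stride + i for i in idx_vec]

idx = [0, 1, 2, 3]                # a vectorized innermost index
print(dense_address(0, 16, idx))  # [0, 1, 2, 3] — scalar part is 0, so dropping it goes unnoticed
print(dense_address(2, 16, idx))  # [32, 33, 34, 35] — dropping the scalar part here would be wrong
```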
      
      Reviewed By: bixia
      
      Differential Revision: https://reviews.llvm.org/D97317
    • [mlir][Linalg] Retire hoistViewAllocOps. · 8cf14b8d
      Nicolas Vasilache authored
      This transformation was only used for quick experimentation and is not general enough.
      Retire it.
      
      Differential Revision: https://reviews.llvm.org/D97266
    • [MLIR][LinAlg] Start detensoring implementation. · 67e0d58d
      KareemErgawy-TomTom authored
      This commit is the first baby step towards detensoring in
      linalg-on-tensors.
      
      Detensoring is the process through which a tensor value is converted to
      one or more primitive values. During this process, operations with
      such detensored operands are also converted to an equivalent form that
      works on primitives.
      
      The detensoring process is driven by linalg-on-tensors ops. In particular,
      a linalg-on-tensors op is checked to see whether *all* of its operands can
      be detensored. If so, those operands are converted to their primitive
      counterparts and the linalg op is replaced by an equivalent op that takes
      those new primitive values as operands.
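That check-then-rewrite flow can be sketched in plain Python (hypothetical names; single-element lists stand in for single-element tensors, while the real pass rewrites MLIR IR):

```python
def can_detensor(operands):
    # All tensor operands must reduce to a single primitive element.
    return all(len(t) == 1 for t in operands)

def apply_op(scalar_op, operands):
    if can_detensor(operands):
        # Detensor: extract primitives, run the scalar op, re-wrap the result.
        return [scalar_op(*(t[0] for t in operands))]
    # Otherwise keep the tensor (elementwise) form.
    return [scalar_op(*vals) for vals in zip(*operands)]

print(apply_op(lambda a, b: a + b, [[3], [4]]))        # [7] — computed on primitives
print(apply_op(lambda a, b: a + b, [[1, 2], [3, 4]]))  # [4, 6] — stays in tensor form
```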
      
      This works towards handling github/google/iree#1159.
      
      Reviewed By: nicolasvasilache
      
      Differential Revision: https://reviews.llvm.org/D96271
    • [sparse][mlir] simplify lattice optimization logic · 0df59f23
      Aart Bik authored
      Simplifies the way lattices are optimized, with fewer but more
      powerful rules. This also fixes an inaccuracy where too many
      lattices resulted (expecting a non-existent universal index).
      It also puts no-side-effects on all proper getters and unifies the
      order of bufferization flags in the integration tests (for future,
      more complex use cases).
      
      Reviewed By: bixia
      
      Differential Revision: https://reviews.llvm.org/D97134
  17. Feb 22, 2021
  18. Feb 21, 2021
    • Implement simple type polymorphism for linalg named ops. · 6c9541d4
      Stella Laurenzo authored
      * It was decided that this was the end of the line for the existing custom tc parser/generator, and this is the first step to replacing it with a declarative format that maps well to mathy source languages.
      * One such source language is implemented here: https://github.com/stellaraccident/mlir-linalgpy/blob/main/samples/mm.py
        * In fact, this is the exact source of the declarative `polymorphic_matmul` in this change.
        * I am working separately to clean this Python implementation up and add it to MLIR (probably as `mlir.tools.linalg_opgen` or equivalent). The scope of the Python side is greater than just generating named ops: the ops are callable and directly emit `linalg.generic` ops fully dynamically, and this is intended to be a feature for frontends like npcomp to define custom linear algebra ops at runtime.
      * There is more work required to handle full type polymorphism, especially with respect to integer formulations, since they require more specificity wrt types.
      * Followups to this change will bring the new generator to feature parity with the current one and then delete it. Roughly, this involves adding support for interface declarations and attribute symbol bindings.
      
      Differential Revision: https://reviews.llvm.org/D97135
  19. Feb 19, 2021
  20. Feb 18, 2021