  1. Apr 05, 2021
  2. Apr 02, 2021
    • [mlir][sparse] support for very narrow index and pointer types · a0c5b7e3
      Aart Bik authored
      Rationale:
      Small indices and values, when allowed by the required range of the
      input tensors, can reduce the memory footprint of sparse tensors
      even further. Note, however, that we must be careful to zero-extend
      such values (sparse tensors never use negative indices), whereas
      LLVM treats the index type as signed in most memory operations
      (such as scatter and gather). This CL dots all the i's in this regard.
      
      Reviewed By: bixia
      
      Differential Revision: https://reviews.llvm.org/D99777
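
      As a rough illustration of the zero-extension point above (not part of the
      commit, and with hypothetical names), a minimal C++ sketch of why a narrow
      stored index must be widened with a zero-extension rather than a
      sign-extension before it is used for addressing:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical gather over a dense vector using 8-bit stored indices.
// Sparse tensor indices are never negative, so widening must zero-extend:
// sign-extending an index such as 200 (0xC8) would produce a negative
// 64-bit offset and an out-of-bounds access.
double gatherSum(const std::vector<double> &dense,
                 const std::vector<std::uint8_t> &indices) {
  double sum = 0.0;
  for (std::uint8_t idx : indices) {
    std::uint64_t wide = idx;      // zero-extension: correct
    // Buggy alternative (sign-extension): 200 would become -56.
    //   std::int64_t wrong = static_cast<std::int8_t>(idx);
    sum += dense[wide];
  }
  return sum;
}

int main() {
  std::vector<double> dense(256, 1.0);
  std::vector<std::uint8_t> indices = {0, 100, 200, 255};
  std::printf("%f\n", gatherSum(dense, indices)); // 4.000000
  return 0;
}
```
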
  3. Mar 29, 2021
  4. Mar 24, 2021
  5. Mar 23, 2021
  6. Mar 22, 2021
    • [mlir][Linalg] Fix linalg on tensor fusion · bcd6424f
      Nicolas Vasilache authored
      - Drop unnecessary occurrences of rewriter.eraseOp: dead linalg ops on tensors should be cleaned up by DCE.
      - Reimplement the part of Linalg-on-tensors fusion that constructs the body and block arguments: the previous implementation had too much magic. Instead, this spells out all cases explicitly and asserts / introduces TODOs for incorrect cases.
      
      As a consequence, we can use the default traversal order for this pattern.
      
      Differential Revision: https://reviews.llvm.org/D99070
    • [mlir] Add an option to still use bottom-up traversal · c691b968
      Adrian Kuegel authored
      GreedyPatternRewriteDriver was changed from bottom-up traversal to top-down traversal. Not all passes work yet with the new traversal order. To give some time for fixing them, add an option to switch back to bottom-up traversal. Use this option in FusionOfTensorOpsPass, which fails otherwise.
      
      Differential Revision: https://reviews.llvm.org/D99059
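
      Purely as a conceptual C++ sketch (not MLIR's implementation; Op,
      applyPatterns, and the flag name are hypothetical), the option boils down
      to the order in which the rewrite worklist is seeded:

```cpp
#include <algorithm>
#include <cstdio>
#include <deque>
#include <string>
#include <vector>

struct Op { std::string name; };

// Hypothetical driver: seed a worklist with all ops in program order
// (top-down) or in reverse program order (bottom-up), then process it.
void applyPatterns(std::vector<Op> &ops, bool useBottomUpTraversal) {
  std::deque<const Op *> worklist;
  for (const Op &op : ops)
    worklist.push_back(&op);
  if (useBottomUpTraversal)
    std::reverse(worklist.begin(), worklist.end());
  while (!worklist.empty()) {
    const Op *op = worklist.front();
    worklist.pop_front();
    std::printf("visiting %s\n", op->name.c_str()); // patterns would run here
  }
}

int main() {
  std::vector<Op> ops = {{"producer"}, {"elementwise"}, {"consumer"}};
  applyPatterns(ops, /*useBottomUpTraversal=*/true);
  return 0;
}
```
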
  7. Mar 21, 2021
  8. Mar 19, 2021
  9. Mar 18, 2021
  10. Mar 15, 2021
    • [MLIR] Create memref dialect and move dialect-specific ops from std. · e2310704
      Julian Gross authored
      Create the memref dialect and move dialect-specific ops
      from the std dialect into it.
      
      Moved ops:
      AllocOp -> MemRef_AllocOp
      AllocaOp -> MemRef_AllocaOp
      AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
      DeallocOp -> MemRef_DeallocOp
      DimOp -> MemRef_DimOp
      MemRefCastOp -> MemRef_CastOp
      MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
      GetGlobalMemRefOp -> MemRef_GetGlobalOp
      GlobalMemRefOp -> MemRef_GlobalOp
      LoadOp -> MemRef_LoadOp
      PrefetchOp -> MemRef_PrefetchOp
      ReshapeOp -> MemRef_ReshapeOp
      StoreOp -> MemRef_StoreOp
      SubViewOp -> MemRef_SubViewOp
      TransposeOp -> MemRef_TransposeOp
      TensorLoadOp -> MemRef_TensorLoadOp
      TensorStoreOp -> MemRef_TensorStoreOp
      TensorToMemRefOp -> MemRef_BufferCastOp
      ViewOp -> MemRef_ViewOp
      
      The roadmap to split the memref dialect from std is discussed here:
      https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667
      
      Differential Revision: https://reviews.llvm.org/D98041
  11. Mar 13, 2021
  12. Mar 10, 2021
  13. Mar 09, 2021
  14. Mar 05, 2021
  15. Mar 04, 2021
    • [mlir][sparse] fix bug in reduction chain · 553cb6d4
      Aart Bik authored
      Found through exhaustive testing: it is possible for a while loop
      to appear in between chainable for loops. As long as we do not
      scalarize reductions in while loops, this means we need to
      terminate the chain at the while loop. This change also refactors
      the reduction code into more readable helper methods.
      
      Reviewed By: bixia
      
      Differential Revision: https://reviews.llvm.org/D97886
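
      As a hedged C++ analogy for the chain-termination rule above (hypothetical
      names, not generated code): a reduction can stay scalarized across
      consecutive for loops, but once a while loop intervenes the running value
      is stored before it and reloaded after it:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: the reduction stays in the scalar 'sum' across a
// chain of for loops, but the chain is terminated at a while loop, where the
// running value is stored to a memory-backed slot and reloaded afterwards
// (since reductions are not scalarized inside while loops).
double reductionAroundWhile(const std::vector<double> &a,
                            const std::vector<double> &b) {
  double sum = 0.0;          // scalarized reduction, live across for loops
  for (double x : a)
    sum += x;                // for loop: chain continues
  double slot = sum;         // while loop ahead: terminate chain, store
  std::size_t i = 0;
  while (i < b.size()) {     // while loop updates the slot directly
    slot += b[i];
    ++i;
  }
  sum = slot;                // reload the reduction value after the while
  return sum;
}

int main() {
  std::vector<double> a = {1.0, 2.0}, b = {3.0, 4.0};
  return reductionAroundWhile(a, b) == 10.0 ? 0 : 1;
}
```
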
  16. Mar 03, 2021
  17. Mar 02, 2021
  18. Feb 28, 2021
  19. Feb 26, 2021
  20. Feb 23, 2021
    • [mlir][sparse] incorporate vector index into address computation · 17fa9198
      Aart Bik authored
      When computing a dense address, a vectorized index must be accounted
      for properly. This bug previously went undetected because in most cases
      the expression is 0 * prev + i, which folds away the scalar part.
      Now it works in all cases.
      
      Reviewed By: bixia
      
      Differential Revision: https://reviews.llvm.org/D97317
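
      To make the addressing concrete, a small C++ sketch (an analogy with
      hypothetical names, not the generated code): the dense address is built as
      prev * dimSize + i per dimension, and when i is a vector the scalar part
      prev * dimSize must be added to every lane; with prev == 0 the buggy and
      correct forms coincide, which is why the bug was hidden:

```cpp
#include <array>
#include <cstddef>
#include <cstdio>

// Hypothetical dense addressing: addr = prev * dimSize + i, one dimension at
// a time. When the innermost index is a vector of lanes, the scalar part
// prev * dimSize must still be added to every lane.
template <std::size_t N>
std::array<std::size_t, N> denseAddresses(std::size_t prev, std::size_t dimSize,
                                          const std::array<std::size_t, N> &iVec) {
  std::array<std::size_t, N> addr{};
  for (std::size_t lane = 0; lane < N; ++lane)
    addr[lane] = prev * dimSize + iVec[lane]; // scalar part added per lane
  return addr;
}

int main() {
  std::array<std::size_t, 4> lanes = {0, 1, 2, 3};
  // With prev == 0 the correct result equals the buggy "iVec only" result,
  // which is how the missing scalar part went unnoticed.
  auto row0 = denseAddresses<4>(0, 8, lanes); // {0, 1, 2, 3}
  auto row2 = denseAddresses<4>(2, 8, lanes); // {16, 17, 18, 19}
  std::printf("%zu %zu\n", row0[0], row2[0]);
  return 0;
}
```
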
    • [mlir][Linalg] Retire hoistViewAllocOps. · 8cf14b8d
      Nicolas Vasilache authored
      This transformation was only used for quick experimentation and is not general enough.
      Retire it.
      
      Differential Revision: https://reviews.llvm.org/D97266
    • [MLIR][LinAlg] Start detensoring implementation. · 67e0d58d
      KareemErgawy-TomTom authored
      This commit is the first baby step towards detensoring in
      linalg-on-tensors.
      
      Detensoring is the process through which a tensor value is converted to
      one or more primitive values. During this process, operations with
      such detensored operands are also converted to an equivalent form that
      works on primitives.

      The detensoring process is driven by linalg-on-tensors ops. In particular, a
      linalg-on-tensors op is checked to see whether *all* its operands can be
      detensored. If so, those operands are converted to their primitive
      counterparts and the linalg op is replaced by an equivalent op that takes
      those new primitive values as operands.
      
      This works towards handling github/google/iree#1159.
      
      Reviewed By: nicolasvasilache
      
      Differential Revision: https://reviews.llvm.org/D96271
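
      As a rough C++ analogy for detensoring (hypothetical names; the real pass
      rewrites linalg-on-tensors IR): an op whose operands are single-element
      tensors is rewritten to the same computation on the underlying primitive
      values:

```cpp
#include <cstdio>

// A single-element "tensor": one value plus tensor plumbing.
struct ScalarTensor {
  double value;
};

// Tensor form: an element-wise op applied to single-element tensors.
ScalarTensor addTensors(const ScalarTensor &a, const ScalarTensor &b) {
  return ScalarTensor{a.value + b.value};
}

// Detensored form: once every operand is known to be a single element,
// the op is rewritten to work directly on the primitive values.
double addDetensored(double a, double b) { return a + b; }

int main() {
  ScalarTensor x{1.5}, y{2.5};
  // Before detensoring: wrap, compute on tensors, unwrap.
  double viaTensors = addTensors(x, y).value;
  // After detensoring: operate on the primitives directly.
  double viaScalars = addDetensored(x.value, y.value);
  std::printf("%f %f\n", viaTensors, viaScalars);
  return 0;
}
```
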