  1. Mar 21, 2021
  2. Mar 19, 2021
  3. Mar 18, 2021
  4. Mar 15, 2021
    • [MLIR] Create memref dialect and move dialect-specific ops from std. · e2310704
      Julian Gross authored
      Create the memref dialect and move dialect-specific ops from the std
      dialect into the new dialect.
      
      Moved ops:
      AllocOp -> MemRef_AllocOp
      AllocaOp -> MemRef_AllocaOp
      AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
      DeallocOp -> MemRef_DeallocOp
      DimOp -> MemRef_DimOp
      MemRefCastOp -> MemRef_CastOp
      MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
      GetGlobalMemRefOp -> MemRef_GetGlobalOp
      GlobalMemRefOp -> MemRef_GlobalOp
      LoadOp -> MemRef_LoadOp
      PrefetchOp -> MemRef_PrefetchOp
      ReshapeOp -> MemRef_ReshapeOp
      StoreOp -> MemRef_StoreOp
      SubViewOp -> MemRef_SubViewOp
      TransposeOp -> MemRef_TransposeOp
      TensorLoadOp -> MemRef_TensorLoadOp
      TensorStoreOp -> MemRef_TensorStoreOp
      TensorToMemRefOp -> MemRef_BufferCastOp
      ViewOp -> MemRef_ViewOp
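
      For illustration, a minimal before/after sketch of the renaming in IR
      (the shapes and value names below are made up for this example and are
      not taken from the patch; %i and %j are assumed to be index values):

        // Before, using the std dialect:
        %m = alloc() : memref<8x64xf32>
        %v = load %m[%i, %j] : memref<8x64xf32>
        store %v, %m[%i, %j] : memref<8x64xf32>
        dealloc %m : memref<8x64xf32>

        // After, using the new memref dialect:
        %m = memref.alloc() : memref<8x64xf32>
        %v = memref.load %m[%i, %j] : memref<8x64xf32>
        memref.store %v, %m[%i, %j] : memref<8x64xf32>
        memref.dealloc %m : memref<8x64xf32>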
      
      The roadmap to split the memref dialect from std is discussed here:
      https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667
      
      Differential Revision: https://reviews.llvm.org/D98041
  5. Mar 13, 2021
  6. Mar 10, 2021
  7. Mar 09, 2021
  8. Mar 05, 2021
  9. Mar 04, 2021
    • [mlir][sparse] fix bug in reduction chain · 553cb6d4
      Aart Bik authored
      Exhaustive testing revealed that a while loop can appear in between
      chainable for loops. As long as we do not scalarize reductions in
      while loops, the reduction chain must therefore be terminated at the
      while loop. This change also refactors the reduction code into more
      readable helper methods.
      
      Reviewed By: bixia
      
      Differential Revision: https://reviews.llvm.org/D97886
  10. Mar 03, 2021
  11. Mar 02, 2021
  12. Feb 28, 2021
  13. Feb 26, 2021
  14. Feb 23, 2021
    • [mlir][sparse] incorporate vector index into address computation · 17fa9198
      Aart Bik authored
      When computing a dense address, a vectorized index must be accounted
      for properly. The bug previously went undetected because most cases
      produce 0 * prev + i, which folds away the scalar part. The address
      computation now works correctly in all cases.
      
      Reviewed By: bixia
      
      Differential Revision: https://reviews.llvm.org/D97317
    • [mlir][Linalg] Retire hoistViewAllocOps. · 8cf14b8d
      Nicolas Vasilache authored
      This transformation was only used for quick experimentation and is not general enough.
      Retire it.
      
      Differential Revision: https://reviews.llvm.org/D97266
    • [MLIR][LinAlg] Start detensoring implementation. · 67e0d58d
      KareemErgawy-TomTom authored
      This commit is the first baby step towards detensoring in
      linalg-on-tensors.
      
      Detensoring is the process through which a tensor value is converted
      to one or potentially more primitive value(s). During this process,
      operations with such detensored operands are also converted to an
      equivalent form that works on primitives.

      The detensoring process is driven by linalg-on-tensors ops. In
      particular, a linalg-on-tensors op is checked to see whether *all* of
      its operands can be detensored. If so, those operands are converted to
      their primitive counterparts and the linalg op is replaced by an
      equivalent op that takes those new primitive values as operands.
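
      As a purely hypothetical sketch (the map, shapes, and value names are
      invented for illustration and do not come from the patch), detensoring
      a 0-d linalg-on-tensors op could look roughly like this:

        #map = affine_map<() -> ()>

        // Before: a linalg-on-tensors op whose operands are all 0-d tensors.
        %res = linalg.generic {indexing_maps = [#map, #map, #map], iterator_types = []}
            ins(%a, %b : tensor<f32>, tensor<f32>) outs(%out : tensor<f32>) {
          ^bb0(%x: f32, %y: f32, %o: f32):
            %s = addf %x, %y : f32
            linalg.yield %s : f32
        } -> tensor<f32>

        // After: the operands are detensored and the op is replaced by an
        // equivalent op on the primitives (re-wrapping the result into a
        // tensor is only needed for consumers that were not detensored).
        %pa = tensor.extract %a[] : tensor<f32>
        %pb = tensor.extract %b[] : tensor<f32>
        %ps = addf %pa, %pb : f32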
      
      This works towards handling github/google/iree#1159.
      
      Reviewed By: nicolasvasilache
      
      Differential Revision: https://reviews.llvm.org/D96271
    • [sparse][mlir] simplify lattice optimization logic · 0df59f23
      Aart Bik authored
      Simplifies the way lattices are optimized with fewer, but more
      powerful, rules. This also fixes an inaccuracy where too many lattices
      resulted (expecting a non-existing universal index). Also marks all
      proper getters as side-effect free and unifies the order of the
      bufferization flags in the integration tests (in preparation for
      future, more complex use cases).
      
      Reviewed By: bixia
      
      Differential Revision: https://reviews.llvm.org/D97134
  15. Feb 19, 2021
  16. Feb 18, 2021
  17. Feb 16, 2021
    • [mlir][Linalg] Generalize vector::transfer hoisting on tensors. · 21debeae
      Nicolas Vasilache authored
      This revision adds support for hoisting "subtensor + vector.transfer_read" / "subtensor_insert + vector.transfer_write" pairs across scf.for.
      The unit of hoisting becomes a HoistableRead / HoistableWrite struct, which contains a "vector.transfer_read + optional subtensor" / "vector.transfer_write + optional subtensor_insert" pair.
      scf::ForOp canonicalization patterns are applied greedily after each successful application of the transformation, to clean up the IR eagerly and potentially expose more transformation opportunities.
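
      As a minimal sketch of the kind of pattern this targets (made-up shapes
      and names, not taken from the revision): the subtensor /
      vector.transfer_read pair below depends only on loop-invariant values,
      so the read can be hoisted above the scf.for and the matching
      transfer_write / subtensor_insert sunk below it, with the vector
      carried as an iter_arg instead.

        %res = scf.for %i = %c0 to %c128 step %c16
            iter_args(%t = %init) -> (tensor<128x128xf32>) {
          %st = subtensor %t[0, 0] [16, 16] [1, 1]
              : tensor<128x128xf32> to tensor<16x16xf32>
          %v  = vector.transfer_read %st[%c0, %c0], %pad
              : tensor<16x16xf32>, vector<16x16xf32>
          %v2 = addf %v, %v : vector<16x16xf32>   // some computation on %v
          %w  = vector.transfer_write %v2, %st[%c0, %c0]
              : vector<16x16xf32>, tensor<16x16xf32>
          %t2 = subtensor_insert %w into %t[0, 0] [16, 16] [1, 1]
              : tensor<16x16xf32> into tensor<128x128xf32>
          scf.yield %t2 : tensor<128x128xf32>
        }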
      
      Differential revision: https://reviews.llvm.org/D96731
    • [mlir] Drop reliance of SliceAnalysis on specific ops. · d01ea0ed
      Nicolas Vasilache authored
      SliceAnalysis was originally developed in the context of affine.for within mlfunc.
      It predates the notion of a region.
      This revision updates it to no longer hardcode specific ops such as scf::ForOp.
      When rooted at an op, the behavior of the slice computation changes as it recurses into the op's regions. It therefore no longer supports gathering all values that transitively depend on a loop induction variable.
      Additional variants rooted at a Value are added to also support the existing behavior.
      
      Differential revision: https://reviews.llvm.org/D96702
  18. Feb 14, 2021
  19. Feb 12, 2021
  20. Feb 11, 2021
  21. Feb 10, 2021