  3. Feb 04, 2021
    • [mlir][Linalg] Introduce a ContractionOpInterface · e4a503a2
      Nicolas Vasilache authored
      This revision takes advantage of recent extensions to vectorization to refactor contraction detection into a bona fide Linalg interface.
      The mlir-linalg-ods-gen parser is extended to support adding such interfaces.
      The detection that originally enabled vectorization is refactored to serve both as a test on a generic LinalgOp and as a verifier for ops that declare conformance to the interface.
      
      This is plugged through the Linalg transforms and strategies, but it quickly becomes evident that the complexity and rigidity of the C++ class-based templating does not pay for itself.
      Therefore, this revision changes the API for vectorization patterns to get rid of templates as much as possible.
      Variadic templates are relegated to the internals of LinalgTransformationFilter, away from the user-facing APIs.
      
      It is expected that other patterns/transformations will follow the same path and drop as much C++ templating as possible from their class definitions.
      
      Differential revision: https://reviews.llvm.org/D95973
    • [mlir][Linalg] Drop SliceOp · f4ac9f03
      Nicolas Vasilache authored
      This op is subsumed by rank-reducing SubViewOp and has become useless.
      
      Differential revision: https://reviews.llvm.org/D95317
    • [mlir][Linalg] Generalize the definition of a Linalg contraction. · f245b7ad
      Nicolas Vasilache authored
      This revision defines a Linalg contraction in general terms:
      
        1. Has 2 input and 1 output shapes.
        2. Has at least one reduction dimension.
        3. Has only projected permutation indexing maps.
        4. Its body computes `u5(u1(c) + u2(u3(a) * u4(b)))` on some field
          (AddOpType, MulOpType), where u1, u2, u3, u4 and u5 represent scalar unary
          operations that may change the type (e.g. for mixed-precision).
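      The generalized contraction body above can be sketched in scalar Python terms (a minimal illustration, not MLIR; the `u*` names mirror the commit message, and the int8-widening stand-in for matmul_i8_i8_i32 is an assumption):

      ```python
      def contraction_body(a, b, c, u1=lambda x: x, u2=lambda x: x,
                           u3=lambda x: x, u4=lambda x: x, u5=lambda x: x):
          # Generalized Linalg contraction body: u5(u1(c) + u2(u3(a) * u4(b))).
          # u1..u5 are scalar unary ops that may change the type; by default
          # they are all the identity.
          return u5(u1(c) + u2(u3(a) * u4(b)))

      def sext_i8_to_i32(x):
          # Hypothetical Python stand-in for sign extension from i8 to i32,
          # as in a mixed-precision matmul_i8_i8_i32.
          return int(x)

      # i8 * i8 accumulated into i32: u3 and u4 widen, the rest are identity.
      acc = contraction_body(a=3, b=-2, c=10,
                             u3=sext_i8_to_i32, u4=sext_i8_to_i32)
      # acc == 10 + (3 * -2) == 4
      ```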
      
      As a consequence, when vectorization of such an op occurs, the only special
      behavior is that the (unique) MulOpType is vectorized into a
      `vector.contract`. All other ops are handled in a generic fashion.
      
      In the future, we may wish to allow more input arguments and elementwise and
      constant operations that do not involve the reduction dimension(s).
      
      A test is added to demonstrate the proper vectorization of matmul_i8_i8_i32.
      
      Differential revision: https://reviews.llvm.org/D95939
  4. Feb 02, 2021
    • [mlir][Linalg] Refactor Linalg vectorization for better reuse and extensibility. · 0a2a260a
      Nicolas Vasilache authored
      This revision unifies Linalg vectorization and paves the way for vectorization of Linalg ops with mixed-precision operations.
      The new algorithm traverses the ops in the linalg block in order and avoids recursion.
      It uses a BlockAndValueMapping to keep track of vectorized operations.
      
      The revision makes the following modifications but is otherwise NFC:
      1. vector.transfer_read ops are created eagerly and may appear in a different order than in the original IR.
      2. A more progressive vectorization to vector.contract results in only the multiply operation being converted to `vector.contract %a, %b, %zero`, where `%zero` is a
      constant of the proper type. Later vector canonicalizations are assumed to rewrite `vector.contract %a, %b, %zero` + add into a proper accumulate form.
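      The in-order, non-recursive traversal with a value mapping can be sketched in Python (a loose analogy, not the actual C++ implementation; `Op` and `vectorize_op` are hypothetical stand-ins, and a plain dict models BlockAndValueMapping):

      ```python
      from dataclasses import dataclass, field

      @dataclass
      class Op:
          # Hypothetical stand-in for an operation in the linalg block.
          name: str
          operands: list = field(default_factory=list)

      def vectorize_op(op, vec_operands):
          # Stand-in for per-op vectorization (e.g. the mul becoming a
          # vector.contract with a zero accumulator).
          return Op("vector." + op.name, vec_operands)

      def vectorize_block(ops):
          bvm = {}  # plays the role of BlockAndValueMapping
          vectorized = []
          for op in ops:  # in-order traversal, no recursion
              # Operands produced earlier in the block are replaced by their
              # already-vectorized counterparts; others pass through unchanged.
              vec_operands = [bvm.get(id(d), d) for d in op.operands]
              new_op = vectorize_op(op, vec_operands)
              bvm[id(op)] = new_op  # record the scalar -> vector mapping
              vectorized.append(new_op)
          return vectorized
      ```

      Because the block is walked in order, every operand is guaranteed to have been mapped (or left as a block argument) before it is used.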
      
      Differential revision: https://reviews.llvm.org/D95797
  5. Feb 01, 2021
    • [mlir][Linalg] Replace SimplePad with PadTensor in hoist-padding · b3f611bf
      Hanhan Wang authored
      This is the last revision in the migration from SimplePadOp to PadTensorOp; SimplePadOp is
      removed in this patch. SliceAnalysis is updated a bit because PadTensorOp, unlike
      SimplePadOp, takes a region; this case is not covered by LinalgOp because PadTensorOp is
      not a structured op.
      
      Also, remove a duplicated comment from the cpp file that is already present in the
      header file, and update the pseudo-mlir in the comment.
      
      This is the same as D95615 but fixes one dependency in CMakeLists.txt.
      
      Different from D95671, the fix was applied to the run target.
      
      Reviewed By: mravishankar
      
      Differential Revision: https://reviews.llvm.org/D95785
    • Revert "[mlir][Linalg] Replace SimplePad with PadTensor in hoist-padding" · 2790cbed
      Tres Popp authored
      This reverts commit d9b953d8.
      
      This commit resulted in build bot failures and the author is away from a
      computer, so I am reverting on their behalf until they have a chance to
      look into this.
    • [mlir][Linalg] Replace SimplePad with PadTensor in hoist-padding · d9b953d8
      Hanhan Wang authored
      This is the last revision in the migration from SimplePadOp to PadTensorOp; SimplePadOp is
      removed in this patch. SliceAnalysis is updated a bit because PadTensorOp, unlike
      SimplePadOp, takes a region; this case is not covered by LinalgOp because PadTensorOp is
      not a structured op.
      
      Also, remove a duplicated comment from the cpp file that is already present in the
      header file, and update the pseudo-mlir in the comment.
      
      Reviewed By: nicolasvasilache
      
      Differential Revision: https://reviews.llvm.org/D95671
  14. Jan 13, 2021
    • [mlir][sparse] add vectorization strategies to sparse compiler · f4f158b2
      Aart Bik authored
      Similar to the parallelization strategies, the vectorization strategies
      provide control over which loops should be vectorized. Unlike the parallel
      strategies, only innermost loops are considered (including reductions), with
      the option of vectorizing dense loops only, or both dense and sparse loops.
      
      The vectorized loops are always controlled by a vector mask to avoid
      overrunning the iterations, but subsequent vector operation folding removes
      redundant masks and replaces the operations with more efficient counterparts.
      Similarly, we will rely on subsequent loop optimizations to further optimize
      masking, e.g. by using an unconditional full vector loop and a scalar cleanup loop.
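      The masked vector loop described above can be sketched in plain Python (a simplified model; lane-wise lists stand in for hardware vectors, and `masked_vector_sum` is a hypothetical name):

      ```python
      def masked_vector_sum(x, vl=4):
          # Sum x using fixed-width "vectors" of vl lanes. A per-iteration
          # mask disables the lanes past the trip count, so the loop never
          # overruns the iterations and no scalar cleanup loop is needed.
          total = 0
          for i in range(0, len(x), vl):
              mask = [i + lane < len(x) for lane in range(vl)]
              lanes = [x[i + lane] if mask[lane] else 0 for lane in range(vl)]
              total += sum(lanes)
          return total
      ```

      For example, with 10 elements and vl = 4, the last iteration runs with mask [True, True, False, False]; later folding would remove masks that are provably all-true.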
      
      The current strategy already demonstrates a nice interaction between the
      sparse compiler and all prior optimizations that went into the vector dialect.
      
      Ongoing discussion at:
      https://llvm.discourse.group/t/mlir-support-for-sparse-tensors/2020/10
      
      Reviewed By: penpornk
      
      Differential Revision: https://reviews.llvm.org/D94551