  Jan 13, 2021
    • [mlir][sparse] add vectorization strategies to sparse compiler · f4f158b2
      Aart Bik authored
      Similar to the parallelization strategies, the vectorization strategies
      provide control over which loops should be vectorized. Unlike the parallel
      strategies, only innermost loops are considered (including reductions),
      with control over whether to vectorize dense loops only or both dense and
      sparse loops.
      
      The vectorized loops are always controlled by a vector mask to avoid
      overrunning the iterations, but subsequent vector operation folding removes
      redundant masks and replaces the operations with more efficient counterparts.
      Similarly, we will rely on subsequent loop optimizations to further optimize
      masking, e.g. using an unconditional full vector loop and scalar cleanup loop.
      
      The current strategy already demonstrates a nice interaction between the
      sparse compiler and all prior optimizations that went into the vector dialect.
      
      Ongoing discussion at:
      https://llvm.discourse.group/t/mlir-support-for-sparse-tensors/2020/10
      
      Reviewed By: penpornk
      
      Differential Revision: https://reviews.llvm.org/D94551
  Dec 18, 2020
    • [mlir][sparse] scalarize reductions in for-loops during sparse codegen · 14da25b4
      Aart Bik authored
      Reductions in innermost loops become harder for the backend to disambiguate
      after bufferization into memrefs, resulting in less efficient load-update-store
      cycles. By scalarizing innermost reductions, the backend is more likely to
      assign a register to perform the reduction (this also prepares for
      vectorization). Even though we could scalarize reductions for outer loops
      and while-loops as well, scalarization is currently done only for chains of
      innermost for-loops, where it matters most, to avoid complicating codegen
      unnecessarily (viz. adding lots of yield instructions).
      
      This CL also refactors condition simplification into the merger class,
      where it belongs, so that conditions are simplified only once per loop
      nest rather than repeatedly, as was previously done. This CL also fixes
      a few minor bugs, some layout issues, and comments.
      
      Reviewed By: penpornk
      
      Differential Revision: https://reviews.llvm.org/D93143
    • [mlir] Move `std.tensor_cast` -> `tensor.cast`. · 129d6e55
      Sean Silva authored
      This is almost entirely mechanical.
      
      Differential Revision: https://reviews.llvm.org/D93357
  Dec 17, 2020
    • [mlir][Linalg] Define a linalg.init_tensor operation. · 118a7156
      MaheshRavishankar authored
      This operation is used to materialize a tensor of a particular
      shape. The shape could be specified as a mix of static and dynamic
      values.
      
      This operation is intended to serve as the `init` tensor for Linalg
      structured operations on tensors where the bounds of the computation
      depend on the shape of the output of the linalg operation. The result
      of this operation is used as the `init` tensor of such Linalg
      operations. Two things to note:
      
      1) The values in the materialized tensor are not used. Any operation
         for which this is an init tensor is expected to overwrite the
         entire tensor.
      2) The tensor is materialized only for the shape of the output, so
         that the loop bounds depend only on operands of the structured
         operation.
      
      Based on (1) and (2), it is assumed that these operations eventually go
      away, since they are only used in `dim` operations that can be
      canonicalized to make this operation dead. Such canonicalizations are
      added here too.
      
      Differential Revision: https://reviews.llvm.org/D93374
    • [mlir][IR][NFC] Move context/location parameters of builtin Type::get methods to the start of the parameter list · 1b97cdf8
      River Riddle authored
      
      This better matches the rest of the infrastructure, is much simpler, and makes it easier to move these types to being declaratively specified.
      
      Differential Revision: https://reviews.llvm.org/D93432