  1. Dec 18, 2020
    • [mlir][sparse] scalarize reductions in for-loops during sparse codegen · 14da25b4
      Aart Bik authored
      Reductions in innermost loops become harder for the backend to disambiguate
      after bufferization into memrefs, resulting in less efficient load-update-store
      cycles. Scalarizing innermost reductions makes the backend more likely to assign
      a register to the reduction (and also prepares vectorization). Even though
      reductions could be scalarized for outer loops and while-loops as well,
      scalarization is currently only done for chains of innermost for-loops, where
      it matters most, to avoid complicating codegen unnecessarily (viz. adding lots
      of yield instructions).
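
      A minimal sketch of the kind of rewrite this describes (not taken from the
      patch; value names and the std-dialect op spellings of the period are
      assumptions), where the reduction becomes a loop-carried scalar instead of
      a load-update-store through a memref:

        // Before: the reduction goes through memory on every iteration.
        scf.for %i = %lo to %hi step %c1 {
          %t = load %xm[] : memref<f32>
          %a = load %am[%i] : memref<?xf32>
          %s = addf %t, %a : f32
          store %s, %xm[] : memref<f32>
        }

        // After: the reduction is scalarized into an iter_args value that is
        // yielded each iteration and stored back once after the loop.
        %x0 = load %xm[] : memref<f32>
        %red = scf.for %i = %lo to %hi step %c1 iter_args(%t = %x0) -> (f32) {
          %a = load %am[%i] : memref<?xf32>
          %s = addf %t, %a : f32
          scf.yield %s : f32
        }
        store %red, %xm[] : memref<f32>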
      
      This CL also refactors condition simplification into the merger class,
      where it belongs, so that conditions are simplified only once per loop
      nest rather than repeatedly, as was previously the case. It also fixes a
      few minor bugs and cleans up some layout issues and comments.
      
      Reviewed By: penpornk
      
      Differential Revision: https://reviews.llvm.org/D93143
    • [mlir] Move `std.tensor_cast` -> `tensor.cast`. · 129d6e55
      Sean Silva authored
      This is almost entirely mechanical.
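
      For reference, a hypothetical use of the op under its new name (the
      semantics are unchanged by the move):

        %1 = tensor.cast %0 : tensor<4x?xf32> to tensor<?x?xf32>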
      
      Differential Revision: https://reviews.llvm.org/D93357
  2. Dec 17, 2020
    • [mlir][Linalg] Define a linalg.init_tensor operation. · 118a7156
      MaheshRavishankar authored
      This operation is used to materialize a tensor of a particular
      shape. The shape can be specified as a mix of static and dynamic
      values.

      The purpose of this operation is to serve as the `init` tensor for
      Linalg structured operations on tensors where the bounds of the
      computation depend on the shape of the output of the linalg operation.
      The result of this operation is used as the `init` tensor of such
      Linalg operations. Note that:
      
      1) The values in the materialized tensor are not used. Any operation
         for which this is an init tensor is expected to overwrite the entire
         tensor.
      2) The tensor is materialized only to convey the shape of the output and
         to make the loop bounds depend only on operands of the structured
         operation.
      
      Based on (1) and (2), it is expected that these operations will
      eventually go away, since they are only used in `dim` operations that
      can be canonicalized to make this operation dead. Such canonicalizations
      are added here too.
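
      An illustrative sketch of the op and the `dim` canonicalization described
      above (values and types are hypothetical, not taken from the patch):

        // Materialize an init tensor with one dynamic and one static size.
        %init = linalg.init_tensor [%d, 42] : tensor<?x42xf32>

        // A dim of the result folds to the corresponding size (%d here),
        // which can leave the init_tensor itself dead.
        %c0 = constant 0 : index
        %s0 = dim %init, %c0 : tensor<?x42xf32>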
      
      Differential Revision: https://reviews.llvm.org/D93374
    • [mlir][IR][NFC] Move context/location parameters of builtin Type::get methods to the start of the parameter list · 1b97cdf8
      River Riddle authored
      
      This better matches the rest of the infrastructure, is much simpler, and makes it easier to move these types to being declaratively specified.
      
      Differential Revision: https://reviews.llvm.org/D93432
  3. Dec 07, 2020
    • [mlir][sparse] hoist loop invariant tensor loads in sparse compiler · 74cd9e58
      Aart Bik authored
      After bufferization, the backend has much more trouble hoisting loop-invariant
      loads out of the loops generated by the sparse compiler. Therefore, this hoisting
      is done during sparse code generation. Note that we don't bother hoisting derived
      invariant expressions on SSA values, since the backend does that very well.
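
      A hypothetical illustration (names and std-dialect op spellings assumed):
      a load whose indices do not depend on the inner loop is emitted once,
      before the loop, rather than on every iteration.

        // Before: the load of %bm[%j] is invariant in the %i loop.
        scf.for %i = %lo to %hi step %c1 {
          %b = load %bm[%j] : memref<?xf32>
          %a = load %am[%i] : memref<?xf32>
          %p = mulf %a, %b : f32
          store %p, %cm[%i] : memref<?xf32>
        }

        // After: the invariant load is hoisted during code generation.
        %b = load %bm[%j] : memref<?xf32>
        scf.for %i = %lo to %hi step %c1 {
          %a = load %am[%i] : memref<?xf32>
          %p = mulf %a, %b : f32
          store %p, %cm[%i] : memref<?xf32>
        }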
      
      Still TBD: scalarize reductions to avoid load-add-store cycles
      
      Reviewed By: penpornk
      
      Differential Revision: https://reviews.llvm.org/D92534
  4. Nov 25, 2020
    • [mlir][sparse] add parallelization strategies to sparse compiler · 5c4e397e
      Aart Bik authored
      This CL adds the ability to request different parallelization strategies
      for the generated code. Every "parallel" loop is a candidate, and is
      converted to a parallel op if it is an actual for-loop (not a while-loop)
      and the strategy allows dense/sparse outer/inner parallelization.
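
      An illustrative sketch (assumed IR, not from the patch): when the
      strategy allows it, such a loop is emitted as an scf.parallel op instead
      of a sequential scf.for.

        // Sequential form of a "parallel" loop over a dense dimension ...
        scf.for %i = %lo to %hi step %c1 {
          %a = load %am[%i] : memref<?xf32>
          %x = mulf %a, %s : f32
          store %x, %cm[%i] : memref<?xf32>
        }

        // ... and its parallel form under a permitting strategy.
        scf.parallel (%i) = (%lo) to (%hi) step (%c1) {
          %a = load %am[%i] : memref<?xf32>
          %x = mulf %a, %s : f32
          store %x, %cm[%i] : memref<?xf32>
        }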
      
      This will connect directly with the work of @ezhulenev on parallel loops.
      
      Still TBD: vectorization strategy
      
      Reviewed By: penpornk
      
      Differential Revision: https://reviews.llvm.org/D91978