- Dec 21, 2020
-
-
nicolasvasilache authored
This revision drops init_tensor arguments from Linalg on tensors and instead makes output buffers and output tensors uniform and consistent. This significantly simplifies the usage of Linalg on tensors and is a stepping stone for its evolution towards a mixed tensor and shape abstraction discussed in https://llvm.discourse.group/t/linalg-and-shapes/2421/19. Differential Revision: https://reviews.llvm.org/D93469
-
Thomas Raoux authored
Transfer ops can now work on both buffers and tensors. Lowering of the tensor case is not supported yet. Differential Revision: https://reviews.llvm.org/D93500
-
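As an illustrative sketch (operand names hypothetical), the same transfer form now applies to a memref and to a tensor; on tensors, a write produces a new tensor value:

```mlir
// Hypothetical sketch: transfer_read over a buffer and over a tensor.
%v0 = vector.transfer_read %buf[%c0], %pad : memref<?xf32>, vector<4xf32>
%v1 = vector.transfer_read %t[%c0], %pad : tensor<?xf32>, vector<4xf32>
// On tensors, transfer_write returns the updated tensor instead of mutating.
%t2 = vector.transfer_write %v1, %t[%c0] : vector<4xf32>, tensor<?xf32>
```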
- Dec 18, 2020
-
-
Aart Bik authored
Reductions in innermost loops become harder for the backend to disambiguate after bufferization into memrefs, resulting in less efficient load-update-store cycles. By scalarizing innermost reductions, the backend is more likely to assign a register to perform the reduction (this also prepares vectorization). Even though we could scalarize reductions for more outer loops and while-loops as well, scalarization is currently only done for chains of innermost for-loops, where it matters most, to avoid complicating codegen unnecessarily (viz. adding lots of yield instructions). This CL also refactors condition simplification into the merger class, where it belongs, so that conditions are simplified only once per loop nest and not repeatedly as was previously done. This CL also fixes a few minor bugs, some layout issues, and comments. Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D93143
-
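A minimal sketch of the idea (names hypothetical): rather than a load-update-store round-trip through a memref on every iteration, the reduction value is carried as an scf.for iteration argument and stored once after the loop:

```mlir
// Sketch: the accumulator lives in SSA via iter_args, so the backend
// can keep it in a register instead of reloading from memory.
%sum = scf.for %i = %c0 to %n step %c1 iter_args(%acc = %init) -> (f32) {
  %v = load %a[%i] : memref<?xf32>
  %s = addf %acc, %v : f32
  scf.yield %s : f32
}
// Single store of the final reduction value.
store %sum, %x[] : memref<f32>
```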
Sean Silva authored
This is almost entirely mechanical. Differential Revision: https://reviews.llvm.org/D93357
-
- Dec 17, 2020
-
-
MaheshRavishankar authored
This operation is used to materialize a tensor of a particular shape. The shape can be specified as a mix of static and dynamic values. This operation serves as an `init` tensor for Linalg structured operations on tensors where the bounds of the computation depend on the shape of the output of the linalg operation. The result of this operation is used as the `init` tensor of such Linalg operations. Note that: 1) the values in the materialized tensor are not used; any operation to which this is an init tensor is expected to overwrite the entire tensor; 2) the tensor is materialized only for its shape, so that the loop bounds depend only on operands of the structured operation. Based on (1) and (2), it is assumed that these operations eventually go away, since they are only used in `dim` operations that can be canonicalized to make this operation dead. Such canonicalizations are added here too. Differential Revision: https://reviews.llvm.org/D93374
-
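A sketch of how such an op might be used (shapes and names hypothetical), mixing one dynamic and one static size:

```mlir
// Materialize an init tensor; only its shape matters, not its values.
%init = linalg.init_tensor [%d0, 42] : tensor<?x42xf32>
// dim on the result can be canonicalized away, making the op dead.
%d = dim %init, %c0 : tensor<?x42xf32>
```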
River Riddle authored
[mlir][IR][NFC] Move context/location parameters of builtin Type::get methods to the start of the parameter list This better matches the rest of the infrastructure, is much simpler, and makes it easier to move these types to being declaratively specified. Differential Revision: https://reviews.llvm.org/D93432
-
- Dec 15, 2020
-
-
Tres Popp authored
This is useful for scalar code that uses for/while loops. This has also been confirmed to work for representing std.pow as an scf.for loop on GPUs. Differential Revision: https://reviews.llvm.org/D93308
-
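For instance, x^n might be expressed as an scf.for loop carrying the partial product (a hedged sketch, operand names hypothetical):

```mlir
// Computes %base raised to the power %n by repeated multiplication,
// carrying the accumulator as an iteration argument.
%pow = scf.for %i = %c0 to %n step %c1 iter_args(%acc = %cst1) -> (f32) {
  %next = mulf %acc, %base : f32
  scf.yield %next : f32
}
```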
- Dec 14, 2020
-
-
Thomas Raoux authored
Fix a bug that caused the wrong vector size to be picked for broadcasting when the source vectors have different ranks. Differential Revision: https://reviews.llvm.org/D93118
-
- Dec 13, 2020
-
-
Christian Sigg authored
This is a preparation step to remove those methods from OpState. Reviewed By: mehdi_amini Differential Revision: https://reviews.llvm.org/D93098
-
- Dec 09, 2020
-
-
Tres Popp authored
This is to prevent assertion failures on scf.if and shape.assuming operations, where there is currently not enough information to handle any aliasing information. Differential Revision: https://reviews.llvm.org/D92963
-
Christian Sigg authored
[mlir] Use mlir::OpState::operator->() to get to methods of mlir::Operation. This is a preparation step to remove the corresponding methods from OpState. Reviewed By: silvas, rriddle Differential Revision: https://reviews.llvm.org/D92878
-
- Dec 07, 2020
-
-
Aart Bik authored
After bufferization, the backend has much more trouble hoisting loop invariant loads from the loops generated by the sparse compiler. Therefore, this is done during sparse code generation. Note that we don't bother hoisting derived invariant expressions on SSA values, since the backend does that very well. Still TBD: scalarize reductions to avoid load-add-store cycles Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D92534
-
- Dec 04, 2020
-
-
Nicolas Vasilache authored
-
Nicolas Vasilache authored
Let tiling to scf.for actually use the distribution method. For now only Cyclic is supported. Differential Revision: https://reviews.llvm.org/D92653
-
Hanhan Wang authored
In the past, the reshape op could be folded only if the indexing map was a permutation in the consumer's usage. We can relax the condition to projected permutation. This patch still limits fusion for scalar cases. The scalar case is a corner case, because we need to decide where to put the extra dims. Reviewed By: mravishankar Differential Revision: https://reviews.llvm.org/D92466
-
River Riddle authored
This is part of a larger refactoring that better congregates the builtin structures under the BuiltinDialect. This also removes the problematic "standard" naming that clashes with the "standard" dialect, which is not defined within IR/. A temporary forward is placed in StandardTypes.h to allow time for downstream users to replace references. Differential Revision: https://reviews.llvm.org/D92435
-
Thomas Raoux authored
Add support for vectorization for linalg.generic representing element-wise ops. Those are converted to transfer_read + vector ops + transfer_write. Also re-organize the vectorization tests to be together. Implementation derived from the work of @burmako, @agrue and @fedelebron. Differential Revision: https://reviews.llvm.org/D92540
-
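A sketch of the kind of op this handles (maps and shapes hypothetical): an elementwise linalg.generic such as the one below is rewritten into a transfer_read of each input, the corresponding vector arithmetic, and a transfer_write of the result:

```mlir
// Elementwise addition expressed as a linalg.generic with identity maps.
#id = affine_map<(d0) -> (d0)>
linalg.generic {indexing_maps = [#id, #id, #id],
                iterator_types = ["parallel"]}
    ins(%A, %B : memref<8xf32>, memref<8xf32>)
    outs(%C : memref<8xf32>) {
^bb0(%a: f32, %b: f32, %c: f32):
  %0 = addf %a, %b : f32
  linalg.yield %0 : f32
}
```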
- Dec 02, 2020
-
-
Christian Sigg authored
Given that OpState already implicitly converts to Operation*, this seems reasonable. The alternative would be to add more functions to OpState which forward to Operation. Reviewed By: rriddle, ftynse Differential Revision: https://reviews.llvm.org/D92266
-
- Nov 26, 2020
-
-
Aart Bik authored
This change gives sparse compiler clients more control over selecting individual types for the pointers and indices in the sparse storage schemes. Narrower widths obviously result in smaller memory footprints, but the chosen range must still suffice for the maximum number of entries or index value. Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D92126
-
Sean Silva authored
It still had the old name from before ElementwiseMappable was added.
-
- Nov 25, 2020
-
-
Aart Bik authored
This CL adds the ability to request different parallelization strategies for the generated code. Every "parallel" loop is a candidate, and is converted to a parallel op if it is an actual for-loop (not a while) and the strategy allows dense/sparse outer/inner parallelization. This will connect directly with the work of @ezhulenev on parallel loops. Still TBD: vectorization strategy Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D91978
-
- Nov 24, 2020
-
-
Aart Bik authored
Generalizes invariant handling to anything defined outside the Linalg op (parameters and SSA computations). Fixes bug that was using parameter number as tensor number. Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D91985
-
Nicolas Vasilache authored
Print part of an op of the form: ``` <optional-offset-prefix>`[` offset-list `]` <optional-size-prefix>`[` size-list `]` <optional-stride-prefix>`[` stride-list `]` ``` Also address some leftover nits. Differential revision: https://reviews.llvm.org/D92031
-
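In concrete terms (operands and layout map hypothetical), this is the `[offsets] [sizes] [strides]` form used by ops such as subview:

```mlir
// Hypothetical strided layout for the result view.
#map = affine_map<(d0, d1)[s0] -> (d0 * 16 + d1 + s0)>
// offsets [%i, 0], sizes [4, %w], strides [1, 1].
%sv = subview %src[%i, 0] [4, %w] [1, 1]
    : memref<8x16xf32> to memref<4x?xf32, #map>
```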
Alexander Belyaev authored
Differential Revision: https://reviews.llvm.org/D92014
-
- Nov 23, 2020
-
-
Nicolas Vasilache authored
-
MaheshRavishankar authored
Exposing some utility functions from Linalg to allow for promotion of fused views outside of the core tile+fuse logic. This is an alternative to patch D91322, which adds the promotion logic to the tileAndFuse method. The downside with that approach is that it is not easily customizable based on needs. Differential Revision: https://reviews.llvm.org/D91503
-
MaheshRavishankar authored
Enhance the tile+fuse logic to allow fusing a sequence of operations. Make sure the value used to obtain the tile shape is a SubViewOp/SubTensorOp. The current logic used to get the loop bounds depends on the use of the `getOrCreateRange` method on `SubViewOp` and `SubTensorOp`. Make sure that the value/dim used to compute the range comes from such ops. This fix is a reasonable WAR, but a better fix would be to make `getOrCreateRange` a method of `ViewInterface`. Differential Revision: https://reviews.llvm.org/D90991
-
Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D91956
-
Nicolas Vasilache authored
This revision refactors code used in various Linalg transformations and makes it a first-class citizen of the LinalgStructureOpInterface. This is in preparation for allowing more advanced Linalg behavior but is otherwise NFC. Differential revision: https://reviews.llvm.org/D91863
-
- Nov 21, 2020
-
-
Aart Bik authored
Adds tests for full sum reduction (tensors summed up into scalars) and the well-known sampled-dense-dense-matrix-product. Refines the optimizations rules slightly to handle the summation better. Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D91818
-
- Nov 20, 2020
-
-
Thomas Raoux authored
Add transformation to be able to forward transfer_write into transfer_read operation and to be able to remove dead transfer_write when a transfer_write is overwritten before being read. Differential Revision: https://reviews.llvm.org/D91321
-
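A sketch of both patterns (names hypothetical):

```mlir
// Forwarding: a transfer_read that exactly matches a preceding
// transfer_write can be replaced by the written value %v.
vector.transfer_write %v, %buf[%c0] : vector<4xf32>, memref<?xf32>
%r = vector.transfer_read %buf[%c0], %pad : memref<?xf32>, vector<4xf32>

// Dead-write elimination: a transfer_write that is overwritten
// before being read can be removed.
vector.transfer_write %v0, %buf[%c0] : vector<4xf32>, memref<?xf32>
vector.transfer_write %v1, %buf[%c0] : vector<4xf32>, memref<?xf32>
```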
Mikhail Goncharov authored
This reverts commit f8284d21. Revert "[mlir][Linalg] NFC: Expose some utility functions used for promotion." This reverts commit 0c59f515. Revert "Remove unused isZero function" This reverts commit 0f9f0a40. Change f8284d21 led to multiple failures in IREE compilation.
-
Geoffrey Martin-Noble authored
Unused since https://reviews.llvm.org/D91503 and triggering -Wunused-function Reviewed By: rriddle Differential Revision: https://reviews.llvm.org/D91838
-
MaheshRavishankar authored
Exposing some utility functions from Linalg to allow for promotion of fused views outside of the core tile+fuse logic. This is an alternative to patch D91322, which adds the promotion logic to the tileAndFuse method. The downside with that approach is that it is not easily customizable based on needs. Differential Revision: https://reviews.llvm.org/D91503
-
MaheshRavishankar authored
Enhance the tile+fuse logic to allow fusing a sequence of operations. Differential Revision: https://reviews.llvm.org/D90991
-
MaheshRavishankar authored
Differential Revision: https://reviews.llvm.org/D91749
-
- Nov 19, 2020
-
-
River Riddle authored
* Move ops to a BuiltinOps.h
* Add file comments
-
Lei Zhang authored
This commit starts a new pass and patterns for converting Linalg named ops to generic ops. This enables us to leverage the flexibility of generic ops during transformations. Right now only linalg.conv is supported; others will be added when useful. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D91357
-
Aart Bik authored
Rationale: Make sure preconditions are already tested during verification. Currently, the only way a sparse rewriting rule can fail is if (1) the linalg op does not have sparse annotations, or (2) a yet-to-be-handled operation is encountered inside the op. Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D91748
-
- Nov 18, 2020
-
-
MaheshRavishankar authored
Differential Revision: https://reviews.llvm.org/D91502
-