- Jan 22, 2021
MaheshRavishankar authored
Fusion by expansion of generic/indexed_generic operations with tensor_reshape operations is disabled when the reshape only adds/removes unit dimensions, since such fusion merely introduces unit-trip-count loops. Differential Revision: https://reviews.llvm.org/D94626
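For illustration, a minimal sketch of the kind of reshape this covers (the shapes and reassociation maps below are assumed, not taken from the patch):
```mlir
// A reshape that only adds a unit dimension: dim 1 of a 2-D tensor is
// expanded into (?, 1). Fusing this by expansion would only create a
// loop with a trip count of one.
%0 = linalg.tensor_reshape %arg0 [affine_map<(d0, d1, d2) -> (d0)>,
                                  affine_map<(d0, d1, d2) -> (d1, d2)>]
    : tensor<?x?xf32> into tensor<?x?x1xf32>
```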
MaheshRavishankar authored
Differential Revision: https://reviews.llvm.org/D93086
MaheshRavishankar authored
Allow representing dependence from producer result to consumer. With Linalg on tensors, the dependence between operations can be from the result of the producer to the consumer. This change just does an NFC refactoring of LinalgDependenceGraphElem to allow representing both OpResult and OpOperand*. Differential Revision: https://reviews.llvm.org/D95208
Hanhan Wang authored
`linalg.pad_tensor` is an operation that pads the `source` tensor with given `low` and `high` padding config.

Example 1:
```mlir
%pad_value = ... : f32
%1 = linalg.pad_tensor %0 low[1, 2] high[2, 3] {
  ^bb0(%arg0 : index, %arg1 : index):
    linalg.yield %pad_value : f32
} : tensor<?x?xf32> to tensor<?x?xf32>
```

Example 2:
```mlir
%pad_value = ... : f32
%1 = linalg.pad_tensor %arg0 low[2, %arg1, 3, 3] high[3, 3, %arg1, 2] {
  ^bb0(%arg2: index, %arg3: index, %arg4: index, %arg5: index):
    linalg.yield %pad_value : f32
} : tensor<1x2x2x?xf32> to tensor<6x?x?x?xf32>
```

Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D93704
- Jan 20, 2021
Nicolas Vasilache authored
This may simplify the composition of patterns but is otherwise NFC.
Nicolas Vasilache authored
Aart Bik authored
Use cases with 16- or even 8-bit pointer/index structures have been identified. Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D95015
- Jan 19, 2021
Mehdi Amini authored
- Jan 16, 2021
Thomas Raoux authored
Thomas Raoux authored
This allows using this helper outside of the Linalg canonicalization. Differential Revision: https://reviews.llvm.org/D94826
- Jan 15, 2021
MaheshRavishankar authored
The operation is an identity only if the values yielded by the operation are the arguments of its basic block. Add this missing check. Differential Revision: https://reviews.llvm.org/D94819
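A minimal sketch of the case the missing check guards against; the maps, types, and body are assumed for illustration:
```mlir
#map = affine_map<(d0) -> (d0)>
// Not an identity: the yielded value is a constant captured from
// outside, not the block argument tied to the input, so replacing
// %0 with %arg0 would be incorrect.
%cst = constant 1.0 : f32
%0 = linalg.generic {indexing_maps = [#map, #map],
                     iterator_types = ["parallel"]}
    ins(%arg0 : tensor<?xf32>) outs(%arg1 : tensor<?xf32>) {
^bb0(%in: f32, %out: f32):
  linalg.yield %cst : f32
} -> tensor<?xf32>
```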
Aart Bik authored
This is a very minor improvement during iteration graph construction. If the first attempt considering the dimension order of all tensors fails, a second attempt is made using the constraints of sparse tensors only. Dense tensors prefer dimension order (locality) but provide random access if needed, enabling the compilation of more sparse kernels. Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D94709
MaheshRavishankar authored
With the recent changes to Linalg on tensors semantics, the tiling transformation works out-of-the-box for generic operations. Add a test to verify that, and some minor refactoring. Differential Revision: https://reviews.llvm.org/D93077
MaheshRavishankar authored
Add a canonicalization to replace the use of the result of a linalg operation on tensors in a dim operation with one of the operands of the linalg operation instead. This allows the linalg op itself to be deleted when all its non-dim uses are removed (say through tiling, etc.). Differential Revision: https://reviews.llvm.org/D93076
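A sketch of the rewrite; the maps, types, and body below are assumed for illustration:
```mlir
#map = affine_map<(d0, d1) -> (d0, d1)>
%c0 = constant 0 : index

// Before: the dim operation keeps the linalg op alive.
%0 = linalg.generic {indexing_maps = [#map, #map],
                     iterator_types = ["parallel", "parallel"]}
    ins(%arg0 : tensor<?x?xf32>) outs(%arg1 : tensor<?x?xf32>) {
^bb0(%in: f32, %out: f32):
  %1 = addf %in, %in : f32
  linalg.yield %1 : f32
} -> tensor<?x?xf32>
%d = dim %0, %c0 : tensor<?x?xf32>

// After: the shape is read off the output operand, so %0 can be
// deleted once its remaining uses go away.
%d2 = dim %arg1, %c0 : tensor<?x?xf32>
```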
- Jan 14, 2021
MaheshRavishankar authored
linalg.generic/indexed_generic operations on tensors whose body just yields the (non-induction-variable) arguments of the operation can be canonicalized by replacing uses of the result with the corresponding arguments. Differential Revision: https://reviews.llvm.org/D94581
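A minimal sketch of an op this canonicalization folds away (maps and types assumed):
```mlir
#map = affine_map<(d0) -> (d0)>
// The body just yields the argument tied to the input, so all uses
// of %0 can be replaced with %arg0 and the op becomes dead.
%0 = linalg.generic {indexing_maps = [#map, #map],
                     iterator_types = ["parallel"]}
    ins(%arg0 : tensor<?xf32>) outs(%arg1 : tensor<?xf32>) {
^bb0(%in: f32, %out: f32):
  linalg.yield %in : f32
} -> tensor<?xf32>
```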
- Jan 13, 2021
Aart Bik authored
Similar to the parallelization strategies, the vectorization strategies provide control over which loops should be vectorized. Unlike the parallel strategies, only innermost loops are considered, but including reductions, with control over vectorizing dense loops only or dense and sparse loops. The vectorized loops are always controlled by a vector mask to avoid overrunning the iterations, but subsequent vector operation folding removes redundant masks and replaces the operations with more efficient counterparts. Similarly, we will rely on subsequent loop optimizations to further optimize masking, e.g. using an unconditional full vector loop and a scalar cleanup loop. The current strategy already demonstrates a nice interaction between the sparse compiler and all prior optimizations that went into the vector dialect. Ongoing discussion at: https://llvm.discourse.group/t/mlir-support-for-sparse-tensors/2020/10 Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D94551
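A rough sketch of the masked-loop shape described above; the loop structure, vector width, and values (%n and %mem stand in for function arguments) are assumptions, not taken from the patch:
```mlir
// An innermost sum reduction vectorized with a mask so the final
// partial iteration does not overrun %n; redundant masks are expected
// to be folded away by later vector optimizations.
%c0 = constant 0 : index
%c16 = constant 16 : index
%f0 = constant 0.0 : f32
%zero = constant dense<0.0> : vector<16xf32>
%red = scf.for %i = %c0 to %n step %c16
    iter_args(%acc = %zero) -> (vector<16xf32>) {
  // Mask off the lanes that would run past %n on the last iteration.
  %rem = subi %n, %i : index
  %mask = vector.create_mask %rem : vector<16xi1>
  %vals = vector.transfer_read %mem[%i], %f0 : memref<?xf32>, vector<16xf32>
  %masked = select %mask, %vals, %zero : vector<16xi1>, vector<16xf32>
  %new = addf %acc, %masked : vector<16xf32>
  scf.yield %new : vector<16xf32>
}
%sum = vector.reduction "add", %red : vector<16xf32> into f32
```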
David Blaikie authored
- Jan 12, 2021
Nicolas Vasilache authored
This revision uniformizes fusion APIs to allow passing OpOperand and OpResult, and adds a finer level of control over fusion. Differential Revision: https://reviews.llvm.org/D94493
Rob Suderman authored
The getDynOperands behavior is commonly needed in a number of passes. Refactored it into a helper function to avoid code duplication. Differential Revision: https://reviews.llvm.org/D94340
- Jan 11, 2021
MaheshRavishankar authored
When fusing tensor_reshape ops with generic/indexed_generic ops, new linalg.init_tensor operations were created for the `outs` of the fused op. While technically correct, it is better to just reshape the original `outs` operands and rely on the init_tensor -> tensor_reshape canonicalization to achieve the same effect. Differential Revision: https://reviews.llvm.org/D93774
MaheshRavishankar authored
Reshaping an init_tensor can be folded into an init_tensor op of the final type. Differential Revision: https://reviews.llvm.org/D93773
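A sketch of the fold, with the shapes and reassociation maps assumed for illustration:
```mlir
// Before: a reshape of a freshly materialized init tensor.
%0 = linalg.init_tensor [2, 3, 4] : tensor<2x3x4xf32>
%1 = linalg.tensor_reshape %0 [affine_map<(d0, d1, d2) -> (d0, d1)>,
                               affine_map<(d0, d1, d2) -> (d2)>]
    : tensor<2x3x4xf32> into tensor<6x4xf32>

// After the fold, %1 is rewritten to materialize the final type directly:
%2 = linalg.init_tensor [6, 4] : tensor<6x4xf32>
```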
Lei Zhang authored
Linalg ops are perfect loop nests. When materializing the concrete loop nest, the default order specified by the Linalg op's iterators may not be the best for further CodeGen: targets frequently need to plan the loop order in order to gain better data access. And different targets can have different preferences. So there should exist a way to control the order. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D91795
- Jan 08, 2021
MaheshRavishankar authored
Change the implementation of the fusion of a LinalgOp with a TensorReshapeOp by expansion to be more modular and easier to follow. Differential Revision: https://reviews.llvm.org/D93748
MaheshRavishankar authored
The existing verification of reshape ops in linalg (linalg.reshape and linalg.tensor_reshape) allows specification of illegal ops, where:
- A dynamic dimension is expanded into multiple dynamic dimensions. This is ill-specified.
- A static dimension is expanded into a dynamic dimension, or vice versa.
- The product of extents of the static dimensions in the expanded type doesn't match the static dimension of the collapsed type.
Make all of these illegal. This also implies that some pessimization in canonicalization due to incomplete semantics of the operation can be dropped. Differential Revision: https://reviews.llvm.org/D93724
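For illustration, reshapes of the first and third kind that the verifier now rejects (shapes assumed):
```mlir
// Illegal: a single dynamic dimension expanded into two dynamic
// dimensions is ill-specified.
%0 = linalg.tensor_reshape %arg0 [affine_map<(d0, d1) -> (d0, d1)>]
    : tensor<?xf32> into tensor<?x?xf32>

// Illegal: the product of the expanded static extents (4 x 3 = 12)
// does not match the collapsed static dimension (10).
%1 = linalg.tensor_reshape %arg1 [affine_map<(d0, d1) -> (d0, d1)>]
    : tensor<10xf32> into tensor<4x3xf32>
```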
- Jan 07, 2021
Kazuaki Ishizaki authored
Fix typos under the include and lib directories. Reviewed By: antiagainst Differential Revision: https://reviews.llvm.org/D94220
- Jan 06, 2021
Thomas Raoux authored
Add the same hoisting transformation that exists for transfer ops on buffers for transfer ops on tensors. The logic is significantly different, so this is done as a separate transformation, and it is expected that the user will know which transformation to use based on the flow. Differential Revision: https://reviews.llvm.org/D94115
Aart Bik authored
Nicolas changed the tensor abstraction so that every output has its own shape definition. This simplifies the "inference" that was used in the sparse compiler. Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D94119
- Jan 05, 2021
Alexander Belyaev authored
Differential Revision: https://reviews.llvm.org/D94079
- Dec 29, 2020
Thomas Raoux authored
Support vectorization of linalg ops using tensor inputs/outputs. Differential Revision: https://reviews.llvm.org/D93890
- Dec 21, 2020
Aart Bik authored
Fixes a merge conflict with previous two CLs. Reviewed By: mravishankar Differential Revision: https://reviews.llvm.org/D93664
nicolasvasilache authored
This revision drops init_tensor arguments from Linalg on tensors and instead uniformizes the output buffers and output tensors to be consistent. This significantly simplifies the usage of Linalg on tensors and is a stepping stone for its evolution towards a mixed tensor and shape abstraction discussed in https://llvm.discourse.group/t/linalg-and-shapes/2421/19. Differential Revision: https://reviews.llvm.org/D93469
Thomas Raoux authored
Transfer ops can now work on both buffers and tensors. Right now, lowering of the tensor case is not supported yet. Differential Revision: https://reviews.llvm.org/D93500
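A sketch of the tensor forms (types assumed); note that on tensors, transfer_write returns a new tensor value instead of mutating memory:
```mlir
%c0 = constant 0 : index
%f0 = constant 0.0 : f32
// Read a vector out of a tensor value.
%v = vector.transfer_read %t[%c0, %c0], %f0 : tensor<4x8xf32>, vector<4x8xf32>
// Writing into a tensor produces a new tensor result.
%t2 = vector.transfer_write %v, %t[%c0, %c0] : vector<4x8xf32>, tensor<4x8xf32>
```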
- Dec 18, 2020
Aart Bik authored
Reductions in innermost loops become harder for the backend to disambiguate after bufferization into memrefs, resulting in less efficient load-update-store cycles. By scalarizing innermost reductions, the backend is more likely to assign a register to perform the reduction (this also prepares vectorization). Even though we could scalarize reductions for more outer loops and while-loops as well, currently scalarization is only done for chains of innermost for-loops, where it matters most, to avoid complicating codegen unnecessarily (viz. adding lots of yield instructions). This CL also refactors condition simplification into the merger class, where it belongs, so that conditions are simplified only once per loop nest and not repeatedly as was currently done. This CL also fixes a few minor bugs, some layout issues, and comments. Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D93143
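A sketch of the rewrite (the loop, %n, %x, and %sum are assumed function arguments): the reduction value moves from a memref load-update-store cycle into a loop-carried scalar:
```mlir
%c0 = constant 0 : index
%c1 = constant 1 : index

// Before: every iteration loads, updates, and stores the running sum.
scf.for %i = %c0 to %n step %c1 {
  %t = load %x[%i] : memref<?xf32>
  %s = load %sum[] : memref<f32>
  %u = addf %s, %t : f32
  store %u, %sum[] : memref<f32>
}

// After: the running sum is carried in iter_args, so the backend can
// keep it in a register for the whole loop.
%init = load %sum[] : memref<f32>
%res = scf.for %i = %c0 to %n step %c1 iter_args(%acc = %init) -> (f32) {
  %t = load %x[%i] : memref<?xf32>
  %u = addf %acc, %t : f32
  scf.yield %u : f32
}
store %res, %sum[] : memref<f32>
```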
Sean Silva authored
This is almost entirely mechanical. Differential Revision: https://reviews.llvm.org/D93357
- Dec 17, 2020
MaheshRavishankar authored
This operation is used to materialize a tensor of a particular shape. The shape can be specified as a mix of static and dynamic values. This operation is meant to be an `init` tensor for Linalg structured operations on tensors, where the bounds of the computation depend on the shape of the output of the linalg operation. The result of this operation is used as the `init` tensor of such Linalg operations. To note:
1) The values in the materialized tensor are not used. Any operation to which this is an init tensor is expected to overwrite the entire tensor.
2) The tensor is materialized only for the shape of the output and to make the loop bounds depend only on operands of the structured operation.
Based on (1) and (2) it is assumed that these operations eventually go away, since they are only used in `dim` operations that can be canonicalized to make this operation dead. Such canonicalizations are added here too. Differential Revision: https://reviews.llvm.org/D93374
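A minimal sketch of typical usage (names and shapes assumed): the sizes come from the operands, so the loop bounds never depend on the op's own result:
```mlir
// Materialize an init tensor whose dynamic size matches an operand.
%c0 = constant 0 : index
%d = dim %arg0, %c0 : tensor<?x8xf32>
%init = linalg.init_tensor [%d, 8] : tensor<?x8xf32>
// %init is then used as the `outs` of a structured op on tensors.
```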
River Riddle authored
[mlir][IR][NFC] Move context/location parameters of builtin Type::get methods to the start of the parameter list. This better matches the rest of the infrastructure, is much simpler, and makes it easier to move these types to being declaratively specified. Differential Revision: https://reviews.llvm.org/D93432
- Dec 15, 2020
Tres Popp authored
This is useful for scalar code that uses for/while loops. This has also been confirmed to work for representing std.pow as an scf.for loop on GPUs. Differential Revision: https://reviews.llvm.org/D93308
- Dec 14, 2020
Thomas Raoux authored
Fix a bug that caused picking the wrong vector size to broadcast to when the source vectors have different ranks. Differential Revision: https://reviews.llvm.org/D93118
- Dec 13, 2020
Christian Sigg authored
This is a preparation step to remove those methods from OpState. Reviewed By: mehdi_amini Differential Revision: https://reviews.llvm.org/D93098
- Dec 09, 2020
Tres Popp authored
This is to prevent assertion failures on scf.if and shape.assuming operations where there is currently not enough information to handle any aliasing information. Differential Revision: https://reviews.llvm.org/D92963