- May 21, 2021
Nicolas Vasilache authored
Drop the remaining EDSC subdirectories and update all uses. Differential Revision: https://reviews.llvm.org/D102911
- May 20, 2021
Nicolas Vasilache authored
Drop the Affine dialect EDSC subdirectory and update all uses. Differential Revision: https://reviews.llvm.org/D102878
Nicolas Vasilache authored
Drop the MemRef dialect EDSC subdirectory and update all uses. Differential Revision: https://reviews.llvm.org/D102868
Nicolas Vasilache authored
Drop the Linalg dialect EDSC subdirectory and update all uses. Differential Revision: https://reviews.llvm.org/D102848
Nicolas Vasilache authored
Use createOrFold builders instead; they make more static information available. Differential Revision: https://reviews.llvm.org/D102832
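As a hedged illustration of the createOrFold point above (the AddIOp, header path, and function name below are illustrative assumptions, not part of this change):

```cpp
// Minimal sketch: create<> vs. createOrFold<> on a single-result op.
// `AddIOp` and the header below are illustrative assumptions.
#include "mlir/Dialect/StandardOps/IR/Ops.h"
#include "mlir/IR/Builders.h"

void buildSum(mlir::OpBuilder &b, mlir::Location loc, mlir::Value lhs,
              mlir::Value rhs) {
  // Always materializes a new AddIOp, even if the result is statically known.
  mlir::Value sum = b.create<mlir::AddIOp>(loc, lhs, rhs);

  // Tries the op's folder first; may return an already existing value
  // (e.g. a constant), exposing more static information to later patterns.
  mlir::Value foldedSum = b.createOrFold<mlir::AddIOp>(loc, lhs, rhs);
  (void)sum;
  (void)foldedSum;
}
```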
- May 19, 2021
Nicolas Vasilache authored
Drop the SCF dialect EDSC subdirectory and update all uses. Differential Revision: https://reviews.llvm.org/D102780
Nicolas Vasilache authored
Drop the vector dialect EDSC subdirectory and update all uses.
MaheshRavishankar authored
LinalgOps that are all parallel do not use the value of the `outs` tensor; the semantics are that the `outs` tensor is fully overwritten. Using anything other than `init_tensor` can add false dependencies between operations when the use is just for the shape of the tensor. Adding a canonicalization to always use `init_tensor` in such cases breaks this dependence. Differential Revision: https://reviews.llvm.org/D102561
- May 17, 2021
Tobias Gysi authored
Replace the templated linalgLowerOpToLoops method with three specialized methods: linalgOpToLoops, linalgOpToParallelLoops, and linalgOpToAffineLoops. Differential Revision: https://reviews.llvm.org/D102324
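A rough call sketch follows; only the three function names come from the commit itself, while the return type and parameters shown here are assumptions:

```cpp
// Hedged sketch: lowering a LinalgOp to plain scf.for loops via one of the
// specialized entry points. The signature used below is an assumption;
// linalgOpToParallelLoops / linalgOpToAffineLoops would follow the same shape.
#include "mlir/Dialect/Linalg/Transforms/Transforms.h"
#include "mlir/IR/PatternMatch.h"

mlir::LogicalResult lowerToLoops(mlir::PatternRewriter &rewriter,
                                 mlir::linalg::LinalgOp op) {
  if (!mlir::linalg::linalgOpToLoops(rewriter, op))
    return mlir::failure();
  return mlir::success();
}
```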
Adrian Kuegel authored
Add support for complex constants to MLIR core. Differential Revision: https://reviews.llvm.org/D101908
- May 15, 2021
Nicolas Vasilache authored
[mlir][Linalg] NFC - More gracefully degrade lookup into failure during comprehensive bufferization (4/n). Differential Revision: https://reviews.llvm.org/D102420
- May 14, 2021
Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D102417
Rahul Joshi authored
Differential Revision: https://reviews.llvm.org/D102458
Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D102395
Nicolas Vasilache authored
This is the first step towards upstreaming comprehensive bufferization, following the Discourse post: https://llvm.discourse.group/t/rfc-linalg-on-tensors-update-and-comprehensive-bufferization-rfc/3373/6. This first commit introduces a basic pass for bufferizing within function boundaries, assuming that the inplaceable function boundaries have been marked as such. Differential Revision: https://reviews.llvm.org/D101693
- May 13, 2021
Sean Silva authored
This covers the extremely common case of replacing all uses of a Value with a new op that is itself a user of the original Value. This should also be a little bit more efficient than the `SmallPtrSet<Operation *, 1>{op}` idiom that was being used before. Differential Revision: https://reviews.llvm.org/D102373
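A hedged sketch of the use case, assuming the new helper is a Value::replaceAllUsesExcept overload taking the single excepted operation; the NegFOp wrapper is purely illustrative:

```cpp
// Hedged sketch: wrap `orig` in a new op and redirect all other uses of
// `orig` to the new result, excluding the new op itself from the replacement.
#include "mlir/Dialect/StandardOps/IR/Ops.h"
#include "mlir/IR/Builders.h"

void wrapInNegation(mlir::OpBuilder &b, mlir::Location loc, mlir::Value orig) {
  auto neg = b.create<mlir::NegFOp>(loc, orig.getType(), orig);
  // Previously one would write:
  //   orig.replaceAllUsesExcept(neg, SmallPtrSet<Operation *, 1>{neg});
  // The single-user overload below is assumed from this revision.
  orig.replaceAllUsesExcept(neg.getResult(), neg.getOperation());
}
```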
Tobias Gysi authored
Follow-up change after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612). Differential Revision: https://reviews.llvm.org/D102163
Tobias Gysi authored
Follow-up change after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612). Differential Revision: https://reviews.llvm.org/D102235
- May 12, 2021
Tobias Gysi authored
Follow-up change after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612). Differential Revision: https://reviews.llvm.org/D102245
Tobias Gysi authored
Follow-up change after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612). Differential Revision: https://reviews.llvm.org/D102308
- May 11, 2021
Tobias Gysi authored
Follow-up change after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612). Differential Revision: https://reviews.llvm.org/D102187
Tobias Gysi authored
Follow-up change after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612). Differential Revision: https://reviews.llvm.org/D102174
Tobias Gysi authored
Follow-up change after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612). Differential Revision: https://reviews.llvm.org/D102176
Aart Bik authored
All glue and clutter in the linalg ops has been replaced by proper sparse tensor type encoding. This code is no longer needed. Thanks to ntv@ for giving us a temporary home in linalg. So long, and thanks for all the fish. Reviewed By: bixia. Differential Revision: https://reviews.llvm.org/D102098
- May 08, 2021
River Riddle authored
The current design uses a unique entry for each argument/result attribute, with the name of the entry being something like "arg0". This provides a somewhat sparse design, but ends up being much more expensive (from a runtime perspective) in practice. The design requires building a string every time we look up the dictionary for a specific arg/result, and also requires N attribute lookups when collecting all of the arg/result attribute dictionaries.

This revision restructures the design to instead have an ArrayAttr that contains all of the attribute dictionaries for arguments and another for results. This design reduces the number of attribute name lookups to 1 and allows for O(1) lookup of individual element dictionaries. The major downside is that we can end up with larger memory usage, as the ArrayAttr contains an entry for each element even if that element has no attributes. If the memory usage becomes too problematic, we can experiment with a more sparse structure that still provides many of the wins in this revision.

This dropped the compilation time of a somewhat large TensorFlow model from ~650 seconds to ~400 seconds. Differential Revision: https://reviews.llvm.org/D102035
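To make the trade-off concrete, here is a rough sketch using core attribute APIs; the attribute keys ("arg0"/"arg_attrs") and the access paths are illustrative assumptions, not the actual FunctionLike helpers:

```cpp
// Sketch only: contrasts a per-argument named entry ("arg0", "arg1", ...)
// with a single ArrayAttr holding one DictionaryAttr per argument.
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Twine.h"
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/Operation.h"

mlir::Attribute lookupArgAttrOld(mlir::Operation *func, unsigned index,
                                 llvm::StringRef name) {
  // Old design: build the key string, then a dictionary lookup per argument.
  auto dict = func->getAttr(("arg" + llvm::Twine(index)).str())
                  .dyn_cast_or_null<mlir::DictionaryAttr>();
  return dict ? dict.get(name) : mlir::Attribute();
}

mlir::Attribute lookupArgAttrNew(mlir::Operation *func, unsigned index,
                                 llvm::StringRef name) {
  // New design: one attribute-name lookup, then O(1) indexing into the array.
  auto allArgAttrs =
      func->getAttr("arg_attrs").dyn_cast_or_null<mlir::ArrayAttr>();
  if (!allArgAttrs || index >= allArgAttrs.size())
    return mlir::Attribute();
  return allArgAttrs[index].cast<mlir::DictionaryAttr>().get(name);
}
```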
- May 07, 2021
Alexander Belyaev authored
Differential Revision: https://reviews.llvm.org/D102089
Tobias Gysi authored
Remove the builder signature taking a signed dimension identifier. Reviewed By: ergawy. Differential Revision: https://reviews.llvm.org/D102055
MaheshRavishankar authored
The pattern to convert subtensor ops to their rank-reduced versions (by dropping unit-dims in the result) can also convert to a zero-rank tensor. Handle that case. This also fixes an OOB access bug in the existing pattern for such cases. Differential Revision: https://reviews.llvm.org/D101949
- May 06, 2021
thomasraoux authored
This exposes a lambda control instead of just a boolean to control unit-dimension folding, giving the user more control to pick a good heuristic. Folding reshapes helps fusion opportunities but may generate sub-optimal generic ops. Differential Revision: https://reviews.llvm.org/D101917
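As a hedged illustration of the boolean-to-callback change (the type alias and heuristic below are hypothetical names, not the actual Linalg API):

```cpp
// Hypothetical sketch: a user-supplied callback decides per reshape whether
// to fold it, instead of a single global boolean. Names are illustrative only.
#include <functional>
#include "mlir/IR/Operation.h"

using ControlFoldingReshapesFn = std::function<bool(mlir::Operation *reshapeOp)>;

// One possible heuristic: only fold reshapes whose (single) result has a
// single use, to limit the risk of creating sub-optimal generic ops.
ControlFoldingReshapesFn onlyFoldSingleUseReshapes =
    [](mlir::Operation *reshapeOp) {
      return reshapeOp->getNumResults() == 1 &&
             reshapeOp->getResult(0).hasOneUse();
    };
```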
MaheshRavishankar authored
Fix a minor bug that led to the element type of the output being modified when folding reshapes with a generic op. Differential Revision: https://reviews.llvm.org/D101942
- May 05, 2021
Tobias Gysi authored
The old index op handling let the new index operations point back to the producer block. As a result, after fusion some index operations in the fused block had back references to the old producer block, resulting in illegal IR. The patch now relies on a block and value mapping to avoid such back references. Differential Revision: https://reviews.llvm.org/D101887
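A minimal sketch of the remapping idea, assuming an OpBuilder already positioned in the fused block; the helper and variable names are illustrative:

```cpp
// Sketch only: clone a producer op into the fused block while remapping the
// values it uses, so the clone does not refer back to the old producer block.
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/STLExtras.h"
#include "mlir/IR/BlockAndValueMapping.h"
#include "mlir/IR/Builders.h"

mlir::Operation *cloneIntoFusedBlock(mlir::OpBuilder &b,
                                     mlir::Operation *producerOp,
                                     llvm::ArrayRef<mlir::Value> oldValues,
                                     llvm::ArrayRef<mlir::Value> newValues) {
  mlir::BlockAndValueMapping mapping;
  // Map each old value (e.g. a block argument or index result of the
  // producer block) to its replacement in the fused block.
  for (auto it : llvm::zip(oldValues, newValues))
    mapping.map(std::get<0>(it), std::get<1>(it));
  // clone() rewrites operands through the mapping as it copies the op.
  return b.clone(*producerOp, mapping);
}
```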
Alexander Belyaev authored
Differential Revision: https://reviews.llvm.org/D101861
Aart Bik authored
This revision migrates more code from Linalg into the new permanent home of SparseTensor. It replaces the test passes with proper compiler passes. NOTE: the actual removal of the last glue and clutter in Linalg will follow. Reviewed By: bixia. Differential Revision: https://reviews.llvm.org/D101811
- May 04, 2021
Tobias Gysi authored
Ensure the index operations are lowered on all linalg loop lowering paths. Differential Revision: https://reviews.llvm.org/D101827
Eugene Zhulenev authored
This fixes a performance regression in vec-mat vectorization. Reviewed By: asaadaldien. Differential Revision: https://reviews.llvm.org/D101795
- May 03, 2021
MaheshRavishankar authored
Given the source and destination shapes, if they are static, or if the expanded/collapsed dimensions are unit-extent, it is possible to compute the reassociation maps that can be used to reshape one type into another. Add a utility method to return the reassociation maps when possible. This utility function can be used to fuse a sequence of reshape ops, given the type of the source of the producer and the final result type. This pattern supersedes a more constrained folding pattern added to the DropUnitDims pass. Differential Revision: https://reviews.llvm.org/D101343
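A hedged usage sketch follows; the helper name getReassociationIndicesForReshape, its header location, and its return type are assumptions used only to illustrate the reassociation concept:

```cpp
// Sketch only: compute the reassociation between tensor<20xf32> and
// tensor<1x4x5xf32>. The expected result groups all three expanded dims
// with the single source dim, i.e. [[0, 1, 2]].
#include "mlir/Dialect/Linalg/Utils/Utils.h" // assumed header for the helper
#include "mlir/IR/BuiltinTypes.h"

void reassociationExample(mlir::MLIRContext *ctx) {
  auto f32 = mlir::FloatType::getF32(ctx);
  auto srcType = mlir::RankedTensorType::get({20}, f32);
  auto dstType = mlir::RankedTensorType::get({1, 4, 5}, f32);
  // Assumed helper: returns llvm::None when no reassociation can be computed.
  if (auto reassociation =
          mlir::getReassociationIndicesForReshape(srcType, dstType)) {
    // reassociation == [[0, 1, 2]]: all expanded dims map to source dim 0.
    (void)reassociation;
  }
}
```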
MaheshRavishankar authored
Convert subtensor and subtensor_insert operations to use their rank-reduced versions to drop unit dimensions. Differential Revision: https://reviews.llvm.org/D101495
thomasraoux authored
The current implementation had a bug: it relied on the target vector dimension sizes to calculate where to insert broadcasts. If several dimensions have the same size, we may insert the broadcast on the wrong dimension. The correct broadcast cannot be inferred from the types of the source and destination vectors. Instead, when we want to extend transfer ops, we calculate an "inverse" map of the projected permutation and insert broadcasts in place of the projected dimensions. Differential Revision: https://reviews.llvm.org/D101738
Frederik Gossen authored
Differential Revision: https://reviews.llvm.org/D101771
Frederik Gossen authored
Add dedicated pass `convert-linalg-tiled-loops-to-scf` to lower `linalg.tiled_loop`s. Differential Revision: https://reviews.llvm.org/D101768