- May 13, 2021
  - Tobias Gysi authored: after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612). Differential Revision: https://reviews.llvm.org/D102235
  - Matthias Springer authored: These two patterns allow for more efficient codegen in VectorToSCF. Differential Revision: https://reviews.llvm.org/D102222
  - Matthias Springer authored: Such ops are no-ops and are folded to their respective `source`/`vector` operand. Differential Revision: https://reviews.llvm.org/D101879
  - Matthias Springer authored: Broadcast dimensions of a vector transfer op have no corresponding dimension in the mask vector. E.g., a 2-D TransferReadOp, where one dimension is a broadcast, can have a 1-D `mask` attribute. This commit also adds a few additional transfer op integration tests for various combinations of broadcasts, masking, dim transposes, etc. Differential Revision: https://reviews.llvm.org/D101745
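To illustrate the shape relationship described above, a hypothetical sketch (the shapes, value names, and permutation map are invented for illustration, not taken from the patch):

```mlir
// A 2-D transfer_read whose result dim 0 is broadcast (the "0" result
// in the permutation map): the memref provides only one dimension, so
// the mask is 1-D even though the result vector is 2-D.
%r = vector.transfer_read %mem[%i], %pad, %mask
    {permutation_map = affine_map<(d0) -> (0, d0)>}
    : memref<?xf32>, vector<4x8xf32>
// %mask has type vector<8xi1>: one bit per non-broadcast element.
```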
  - Matthias Springer authored: This reverts commit c9087788; an old version of the commit was pushed accidentally.
  - Matthias Springer authored: Broadcast dimensions of a vector transfer op have no corresponding dimension in the mask vector. E.g., a 2-D TransferReadOp, where one dimension is a broadcast, can have a 1-D `mask` attribute. This commit also adds a few additional transfer op integration tests for various combinations of broadcasts, masking, dim transposes, etc. Differential Revision: https://reviews.llvm.org/D101745
- May 12, 2021
  - Rob Suderman authored: The rank-0 case causes a crash during the linalg reshape operation. Differential Revision: https://reviews.llvm.org/D102282
  - Inho Seo authored: The current static checker for linalg does not handle decreasing index cases well. This updates the static bound checker for linalg to cover them. Reviewed By: hanchung. Differential Revision: https://reviews.llvm.org/D102302
  - Aart Bik authored: Reviewed By: bixia. Differential Revision: https://reviews.llvm.org/D102285
  - Valentin Clement authored: Add a conversion pass to convert higher-level types before translation. This conversion extracts meaningful information and packs it into a struct that the translation (D101504) will be able to understand. Reviewed By: ftynse. Differential Revision: https://reviews.llvm.org/D102170
  - Tobias Gysi authored: after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612). Differential Revision: https://reviews.llvm.org/D102245
  - Tobias Gysi authored: after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612). Differential Revision: https://reviews.llvm.org/D102308
  - Dumitru Potop authored: First step in adding alignment as an attribute to MLIR global definitions. Alignment can be specified for global objects in LLVM IR. It can also be specified as a named attribute in the LLVMIR dialect of MLIR. However, this attribute has no standing and is discarded during translation from MLIR to LLVM IR. This patch does two things. First, it adds the attribute to the syntax of the llvm.mlir.global operation, and by doing this it also adds accessors and verifications. The syntax is "align=XX" (with XX being an integer), placed right after the value of the operation. Second, it allows transforming this operation to and from LLVM IR. It is checked whether the value is an integer power of 2. Reviewed By: ftynse, mehdi_amini. Differential Revision: https://reviews.llvm.org/D101492
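A minimal sketch of the syntax described above (the global's name and value are hypothetical, and the exact printed form may differ from the landed revision):

```mlir
// "align=8" placed right after the value, as described in the commit;
// the alignment must be an integer power of 2.
llvm.mlir.global internal @aligned_global(42 : i64) align=8 : i64
```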
- May 11, 2021
  - Benjamin Kramer authored: This is actually necessary for correctness: memref.reinterpret_cast does not verify whether the output shape matches the static sizes. Differential Revision: https://reviews.llvm.org/D102232
  - Uday Bondhugula authored: Switch the llvm.noalias attribute from a boolean attribute to a unit attribute. Differential Revision: https://reviews.llvm.org/D102225
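A before/after sketch of the attribute change (the function and argument names are hypothetical):

```mlir
// Before: llvm.noalias as a boolean attribute.
llvm.func @before(%arg0: !llvm.ptr<f32> {llvm.noalias = true})

// After: llvm.noalias as a unit attribute; its presence alone
// carries the information.
llvm.func @after(%arg0: !llvm.ptr<f32> {llvm.noalias})
```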
  - Tres Popp authored: VectorTransfer split previously only split read xfer ops. This adds the same logic to write ops. The resulting code involves 2 conditionals for write ops while read ops only needed 1, but the created ops are built upon the same patterns, so pattern matching/expectations are all consistent except with regard to the if/else ops. Differential Revision: https://reviews.llvm.org/D102157
  - Tobias Gysi authored: after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612). Differential Revision: https://reviews.llvm.org/D102187
  - Tobias Gysi authored: after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612). Differential Revision: https://reviews.llvm.org/D102174
  - Tobias Gysi authored: after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612). Differential Revision: https://reviews.llvm.org/D102176
  - Aart Bik authored: All glue and clutter in the linalg ops has been replaced by proper sparse tensor type encoding. This code is no longer needed. Thanks to ntv@ for giving us a temporary home in linalg. So long, and thanks for all the fish. Reviewed By: bixia. Differential Revision: https://reviews.llvm.org/D102098
  - Benjamin Kramer authored: This trivially folds into a constant when all operands are constant. Differential Revision: https://reviews.llvm.org/D102199
- May 10, 2021
  - Aart Bik authored: A very elaborate, but also very fun revision because all puzzle pieces are finally "falling in place":
    1. replaces linalg annotations + flags with proper sparse tensor types
    2. adds rigorous verification of the sparse tensor type and sparse primitives
    3. removes glue and clutter on opaque pointers in favor of sparse tensor types
    4. migrates all tests to use sparse tensor types
    NOTE: the next CL will remove *all* obsoleted sparse code in Linalg. Reviewed By: bixia. Differential Revision: https://reviews.llvm.org/D102095
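A sketch of what such a sparse tensor type looks like (the encoding fields and names here are illustrative, not copied from the revision):

```mlir
// A hypothetical CSR-style encoding: the attribute on the tensor type
// replaces the old linalg annotations and flags.
#CSR = #sparse_tensor.encoding<{
  dimLevelType = [ "dense", "compressed" ]
}>

// The sparsity is now carried directly by the tensor type.
func private @kernel(tensor<32x64xf64, #CSR>)
```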
  - Lei Zhang authored: According to the API contract, LinalgLoopDistributionOptions expects to work on parallel iterators. When getting processor information, only loop ranges for parallel dimensions should be fed in. But right now after generating scf.for loop nests, we feed in *all* loops, including the ones materialized for reduction iterators. This can cause unexpected distribution of reduction dimensions. This commit fixes it. Reviewed By: mravishankar. Differential Revision: https://reviews.llvm.org/D102079
  - Julian Gross authored: In the buffer deallocation pass, unranked memref types are not properly supported. After investigating this issue, it turns out that the Clone and Dealloc operations do not support unranked memref types in the current implementation. This patch adds the missing feature and enables the transformation of any memref type. It fixes https://bugs.llvm.org/show_bug.cgi?id=48385. Differential Revision: https://reviews.llvm.org/D101760
  - Frederik Gossen authored: As a canonicalization, infer the resulting shape rank if possible. Differential Revision: https://reviews.llvm.org/D102068
- May 08, 2021
  - River Riddle authored: The current design uses a unique entry for each argument/result attribute, with the name of the entry being something like "arg0". This provides for a somewhat sparse design, but ends up being much more expensive (from a runtime perspective) in practice. The design requires building a string every time we look up the dictionary for a specific arg/result, and also requires N attribute lookups when collecting all of the arg/result attribute dictionaries. This revision restructures the design to instead have an ArrayAttr that contains all of the attribute dictionaries for arguments and another for results. This design reduces the number of attribute name lookups to 1, and allows for O(1) lookup for individual element dictionaries. The major downside is that we can end up with larger memory usage, as the ArrayAttr contains an entry for each element even if that element has no attributes. If the memory usage becomes too problematic, we can experiment with a more sparse structure that still provides a lot of the wins in this revision. This dropped the compilation time of a somewhat large TensorFlow model from ~650 seconds to ~400 seconds. Differential Revision: https://reviews.llvm.org/D102035
  - thomasraoux authored: The previous change caused another warning in some build configurations: "default label in switch which covers all enumeration values"
- May 07, 2021
  - thomasraoux authored
  - thomasraoux authored: Differential Revision: https://reviews.llvm.org/D102091
  - Alexander Belyaev authored: Differential Revision: https://reviews.llvm.org/D102088
  - Alexander Belyaev authored: Differential Revision: https://reviews.llvm.org/D102089
  - thomasraoux authored: Differential Revision: https://reviews.llvm.org/D102034
  - Tobias Gysi authored: Remove the builder signature taking a signed dimension identifier. Reviewed By: ergawy. Differential Revision: https://reviews.llvm.org/D102055
  - Tobias Gysi authored: Replace all `linalg.indexed_generic` ops by `linalg.generic` ops that access the iteration indices using the `linalg.index` op. Differential Revision: https://reviews.llvm.org/D101612
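A sketch of the rewrite (tensor shapes and names are hypothetical): indices that `linalg.indexed_generic` received as leading block arguments are instead queried with `linalg.index` inside a plain `linalg.generic`:

```mlir
#map = affine_map<(d0) -> (d0)>
%res = linalg.generic {indexing_maps = [#map, #map],
                       iterator_types = ["parallel"]}
    ins(%in : tensor<8xf32>) outs(%init : tensor<8xf32>) {
^bb0(%a: f32, %b: f32):
  // Previously a leading block argument of indexed_generic.
  %i = linalg.index 0 : index
  linalg.yield %a : f32
} -> tensor<8xf32>
```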
  - MaheshRavishankar authored: The pattern to convert subtensor ops to their rank-reduced versions (by dropping unit dims in the result) can also convert to a zero-rank tensor. Handle that case. This also fixes an OOB access bug in the existing pattern for such cases. Differential Revision: https://reviews.llvm.org/D101949
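A hypothetical example of the zero-rank case handled above: dropping all unit dims of the result yields a rank-0 tensor:

```mlir
// All result dimensions are unit dims, so the rank-reduced form of
// this subtensor is a rank-0 tensor.
%s = subtensor %t[2, 3] [1, 1] [1, 1] : tensor<8x16xf32> to tensor<f32>
```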
- May 06, 2021
  - Lei Zhang authored: Reviewed By: rriddle. Differential Revision: https://reviews.llvm.org/D102009
  - thomasraoux authored
  - thomasraoux authored: This exposes a lambda control instead of just a boolean to control unit-dimension folding, which gives the user more control to pick a good heuristic. Folding reshapes helps fusion opportunities but may generate sub-optimal generic ops. Differential Revision: https://reviews.llvm.org/D101917
  - thomasraoux authored
  - thomasraoux authored: Differential Revision: https://reviews.llvm.org/D101955