- Mar 21, 2021
-
-
Chris Lattner authored
This updates the codebase to pass the context when creating an instance of OwningRewritePatternList, and starts removing extraneous MLIRContext parameters. There are many many more to be removed. Differential Revision: https://reviews.llvm.org/D99028
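As a rough sketch of what the change means for pattern-population code (the pattern class and function names below are hypothetical, not part of the patch):
```
#include "mlir/IR/PatternMatch.h"
using namespace mlir;

// Hypothetical pattern, used only to illustrate registration.
struct ExamplePattern : public RewritePattern {
  ExamplePattern(MLIRContext *ctx)
      : RewritePattern("test.example", /*benefit=*/1, ctx) {}
  LogicalResult matchAndRewrite(Operation *op,
                                PatternRewriter &rewriter) const override {
    return failure();
  }
};

void populateExamplePatterns(MLIRContext *ctx) {
  // The context is now supplied when the list is created ...
  OwningRewritePatternList patterns(ctx);
  // ... so the extra MLIRContext arguments threaded through insert calls
  // are being phased out.
  patterns.insert<ExamplePattern>(ctx);
}
```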
-
- Mar 19, 2021
-
-
Benjamin Kramer authored
Transforms.cpp:586:16: error: unused variable 'v' [-Werror,-Wunused-variable]
  for (Value v : operands)
             ^
-
Nicolas Vasilache authored
-
Alexander Belyaev authored
https://llvm.discourse.group/t/rfc-add-linalg-tileop/2833 Differential Revision: https://reviews.llvm.org/D98900
-
- Mar 18, 2021
-
-
Mehdi Amini authored
This reverts commit 32a744ab. CI is broken:
test/Dialect/Linalg/bufferize.mlir:274:12: error: CHECK: expected string not found in input
// CHECK: %[[MEMREF:.*]] = tensor_to_memref %[[IN]] : memref<?xf32>
           ^
-
Eugene Zhulenev authored
`BufferizeAnyLinalgOp` fails because `FillOp` is not a `LinalgGenericOp`, so it fails while reading the operand sizes attribute. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D98671
-
thomasraoux authored
This propagates the affine map to transfer_read op in case it is not a minor identity map. Differential Revision: https://reviews.llvm.org/D98523
-
Alexander Belyaev authored
Also use `ArrayAttr` to pass iterator types to the TiledLoopOp builder. Differential Revision: https://reviews.llvm.org/D98871
- Mar 15, 2021
-
-
Julian Gross authored
Create the memref dialect and move dialect-specific ops from the std dialect to this dialect.
Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
DeallocOp -> MemRef_DeallocOp
DimOp -> MemRef_DimOp
MemRefCastOp -> MemRef_CastOp
MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
LoadOp -> MemRef_LoadOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
SubViewOp -> MemRef_SubViewOp
TransposeOp -> MemRef_TransposeOp
TensorLoadOp -> MemRef_TensorLoadOp
TensorStoreOp -> MemRef_TensorStoreOp
TensorToMemRefOp -> MemRef_BufferCastOp
ViewOp -> MemRef_ViewOp
The roadmap to split the memref dialect from std is discussed here: https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667
Differential Revision: https://reviews.llvm.org/D98041
-
- Mar 13, 2021
-
-
Aart Bik authored
This is a temporary work-around to get our all-annotations-all-flags stress testing effort to run clean. In the long run, we want to provide efficient implementations of strided loads and stores, though. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D98563
-
- Mar 10, 2021
-
-
Inho Seo authored
Moved the getStaticLoopRanges and getStaticShape methods to LinalgInterfaces.td to add static shape verification. The intent is to use these methods in LinalgInterfaces.cpp for additional static shape verification that matches the shaped operands against the loops on linalg ops. If I used the existing methods, I would face a circular-dependency linking issue. Now we can use them as methods of LinalgOp. Reviewed By: hanchung Differential Revision: https://reviews.llvm.org/D98163
-
- Mar 09, 2021
-
-
Tobias Gysi authored
Return the vectorization results using a vector passed by reference instead of returning them embedded in a structure. Differential Revision: https://reviews.llvm.org/D98182
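A small illustration of the signature style being adopted (hypothetical names and plain standard types instead of the MLIR ones):
```
#include <vector>

// Before: the results were embedded in a returned structure.
struct OldVectorizationResult {
  std::vector<int> vectorizedValues;
};
OldVectorizationResult vectorizeOld() { return {{1, 2, 3}}; }

// After: the results are written into a vector passed by reference,
// and the return value only signals success or failure.
bool vectorizeNew(std::vector<int> &vectorizedValues) {
  vectorizedValues = {1, 2, 3};
  return true;
}
```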
-
- Mar 05, 2021
-
-
Aart Bik authored
Reduction updates should be masked, just like the loads and stores. Note that, alternatively, we could use the fact that masked values are zero for += updates and mask invariants to get this working, but that would not work for *= updates. Masking the update itself is cleanest. This change also replaces the constant mask with a broadcast of "true", since this constant folds much better for various folding patterns. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D98000
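A small standalone sketch (plain C++, not the generated vector IR) of why zeroing masked-off lanes is fine for a += reduction but not for a *= reduction, which is why the update itself is masked:
```
#include <cstdio>

int main() {
  float lanes[4] = {1.f, 2.f, 3.f, 4.f};
  bool mask[4] = {true, true, false, false}; // only the first two lanes are valid

  float sum = 0.f, prod = 1.f;
  for (int i = 0; i < 4; ++i) {
    float v = mask[i] ? lanes[i] : 0.f; // masked-off lanes forced to zero
    sum += v;   // fine: adding zero is a no-op
    prod *= v;  // wrong: multiplying by zero destroys the result
  }
  printf("zero-masked: sum=%g prod=%g (prod should be 2)\n", sum, prod);

  // Masking the update itself keeps both reductions correct.
  float prod2 = 1.f;
  for (int i = 0; i < 4; ++i)
    if (mask[i])
      prod2 *= lanes[i];
  printf("masked update: prod=%g\n", prod2);
  return 0;
}
```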
-
Nicolas Vasilache authored
-
- Mar 04, 2021
-
-
Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D97939
-
Aart Bik authored
Found with exhaustive testing: it is possible for a while loop to appear in between chainable for loops. As long as we don't scalarize reductions in while loops, this means we need to terminate the chain at the while loop. This also refactors the reduction code into more readable helper methods. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97886
-
- Mar 03, 2021
-
-
MaheshRavishankar authored
The SubTensorInsertOp has a requirement that the dest type and result type match. Just folding the tensor.cast operation violates this and creates verification errors during canonicalization. Also fix other canonicalization methods that weren't inserting casts properly. Differential Revision: https://reviews.llvm.org/D97800
-
Aart Bik authored
Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97795
-
- Mar 02, 2021
-
-
Frederik Gossen authored
Some elementwise operations are not scalarizable, vectorizable, or tensorizable. Split the `ElementwiseMappable` trait into the following, more precise traits:
- `Elementwise`
- `Scalarizable`
- `Vectorizable`
- `Tensorizable`
This allows for reuse of `Elementwise` in dialects like HLO. Differential Revision: https://reviews.llvm.org/D97674
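As a hedged sketch of how the finer-grained traits can be queried from C++ (assuming they are exposed under `mlir::OpTrait`, as the split suggests; the helper function is hypothetical):
```
#include "mlir/IR/OpDefinition.h"
#include "mlir/IR/Operation.h"

// An op can be mapped to a vectorized form only if it is both elementwise
// and vectorizable; after the split these are separate traits.
bool isVectorizableElementwise(mlir::Operation *op) {
  return op->hasTrait<mlir::OpTrait::Elementwise>() &&
         op->hasTrait<mlir::OpTrait::Vectorizable>();
}
```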
-
KareemErgawy-TomTom authored
This patch continues the detensorizing implementation by detensoring internal control flow in functions. In order to detensorize functions, all of the non-entry blocks' arguments are detensored, and branches between such blocks are properly updated to reflect the detensored types as well. The function's entry block (signature) is left intact. This continues work towards handling github/google/iree#1159. Reviewed By: silvas Differential Revision: https://reviews.llvm.org/D97148
-
Stella Laurenzo authored
Differential Revision: https://reviews.llvm.org/D97602
-
- Feb 28, 2021
-
-
Aart Bik authored
The universal index was maintained if dense indices were still in place and lattice points followed. However, it should only be kept if any of those following lattice points actually consumes the universal index. This change also fixes an inaccuracy with a missing broadcast around a vector invariant. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97594
-
Stella Laurenzo authored
This enables this kind of construct in the DSL to generate a named op that is polymorphic over numeric type variables `T` and `U`, generating the correct arithmetic casts at construction time:
```
@tc_def_op
def polymorphic_matmul(A=TensorDef(T1, S.M, S.K),
                       B=TensorDef(T2, S.K, S.N),
                       C=TensorDef(U, S.M, S.N, output=True)):
  implements(ContractionOpInterface)
  C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
```
Presently, this only supports type variables that are bound to the element type of one of the arguments, although a further extension that allows binding a type variable to an attribute would allow some more expressiveness and may be useful for some formulations. This is left to a future patch. In addition, this patch does not yet materialize the verifier support which ensures that types are bound correctly (for such simple examples, failing to do so will yield IR that fails verification, it just won't yet fail with a precise error). Note that the full grid of extension/truncation/int<->float conversions is supported, but many of them are lossy, and higher-level code needs to be mindful of numerics (it is not the job of this level). As-is, this should be sufficient for most integer matmul scenarios we work with in typical quantization schemes. Differential Revision: https://reviews.llvm.org/D97603
-
- Feb 26, 2021
-
-
Aart Bik authored
Similar to mask-load/store and compress/expand, the gather and scatter operations now allow for higher-dimension uses. Note that to support the mixed-type index, the new syntax is: vector.gather %base [%i,%j] [%kvector] .... The first client of this generalization is the sparse compiler, which needs to define scatters and gathers on dense operands of higher dimensions too. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97422
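A conceptual, scalar C++ sketch of the generalized gather semantics (mirroring the `%base [%i,%j] [%kvector]` form; this is not the MLIR op or its lowering, and all names are illustrative):
```
#include <array>
#include <cstdio>

int main() {
  // Dense 2-D base memory.
  float base[3][8] = {};
  base[1][5] = 42.f;

  int i = 1;                                  // scalar leading index into the base
  std::array<int, 4> kVector = {5, 0, 5, 2};  // vector of innermost offsets
  std::array<bool, 4> mask = {true, true, false, true};
  std::array<float, 4> passThru = {-1.f, -1.f, -1.f, -1.f};

  // Gather: each active lane reads base[i][kVector[l]]; inactive lanes
  // take the pass-through value instead.
  std::array<float, 4> result;
  for (int l = 0; l < 4; ++l)
    result[l] = mask[l] ? base[i][kVector[l]] : passThru[l];

  for (float v : result)
    printf("%g ", v);
  printf("\n");
  return 0;
}
```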
-
Christian Sigg authored
Fix call sites. The method will be removed in two weeks. Reviewed By: rriddle Differential Revision: https://reviews.llvm.org/D97530
-
- Feb 25, 2021
-
-
Christian Sigg authored
Fix call sites. The method will be removed in two weeks. Reviewed By: rriddle Differential Revision: https://reviews.llvm.org/D97464
-
- Feb 24, 2021
-
-
Alexander Belyaev authored
https://llvm.discourse.group/t/rfc-add-linalg-tileop/2833 Differential Revision: https://reviews.llvm.org/D97372
-
Alexander Belyaev authored
The test did not check whether the operations can be parsed again after printing them once. Differential Revision: https://reviews.llvm.org/D97368
-
- Feb 23, 2021
-
-
Aart Bik authored
When computing a dense address, a vectorized index must be accounted for properly. This bug was formerly undetected because we get 0 * prev + i in most cases, which folds away the scalar part. Now it works for all cases. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97317
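A conceptual sketch (plain C++, not the generated IR) of the dense address computation with a vectorized innermost index; the scalar prefix must be broadcast across the lanes, and with `prev == 0` the bug was invisible because the `0 * size` term folds away:
```
#include <array>
#include <cstdio>

int main() {
  const int size = 8;                       // size of the innermost dimension
  int prev = 3;                             // scalar address contribution of outer loops
  std::array<int, 4> iVec = {0, 1, 2, 3};   // vectorized innermost index

  // addr = prev * size + i, with the scalar part broadcast across all lanes.
  std::array<int, 4> addr;
  for (int l = 0; l < 4; ++l)
    addr[l] = prev * size + iVec[l];

  for (int a : addr)
    printf("%d ", a);                       // prints: 24 25 26 27
  printf("\n");
  return 0;
}
```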
-
Nicolas Vasilache authored
This transformation was only used for quick experimentation and is not general enough. Retire it. Differential Revision: https://reviews.llvm.org/D97266
-
KareemErgawy-TomTom authored
This commit is the first baby step towards detensoring in linalg-on-tensors. Detensoring is the process through which a tensor value is converted to one or potentially more primitive value(s). During this process, operations with such detensored operands are also converted to an equivalent form that works on primitives. The detensoring process is driven by linalg-on-tensor ops. In particular, a linalg-on-tensor op is checked to see whether *all* its operands can be detensored. If so, those operands are converted to their primitive counterparts and the linalg op is replaced by an equivalent op that takes those new primitive values as operands. This works towards handling github/google/iree#1159. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D96271
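A conceptual, self-contained sketch of the idea (plain C++ stand-ins, not the actual rewrite): a single-element tensor and the op on it are replaced by the primitive value and a primitive op:
```
#include <cstdio>

// Stand-in for a 0-d tensor<f32> value.
struct Tensor0D { float value; };

// "Tensorized" form: an elementwise add expressed on 0-d tensors.
Tensor0D addTensors(Tensor0D a, Tensor0D b) { return {a.value + b.value}; }

// Detensored form: the same computation on primitive floats.
float addScalars(float a, float b) { return a + b; }

int main() {
  Tensor0D a{1.f}, b{2.f};
  printf("tensorized: %g\n", addTensors(a, b).value);
  printf("detensored: %g\n", addScalars(a.value, b.value));
  return 0;
}
```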
-
Aart Bik authored
Simplifies the way lattices are optimized with fewer, but more powerful, rules. This also fixes an inaccuracy where too many lattices resulted (expecting a non-existing universal index). Also puts no-side-effects on all proper getters and unifies the bufferization flags order in integration tests (for future, more complex use cases). Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97134
-
- Feb 22, 2021
-
-
Geoffrey Martin-Noble authored
Followup to https://reviews.llvm.org/D97006 which broke the shared libs build because of a missing dependency. Differential Revision: https://reviews.llvm.org/D97213
-
- Feb 21, 2021
-
-
Stella Laurenzo authored
* It was decided that this was the end of the line for the existing custom tc parser/generator, and this is the first step to replacing it with a declarative format that maps well to mathy source languages.
* One such source language is implemented here: https://github.com/stellaraccident/mlir-linalgpy/blob/main/samples/mm.py
* In fact, this is the exact source of the declarative `polymorphic_matmul` in this change.
* I am working separately to clean this python implementation up and add it to MLIR (probably as `mlir.tools.linalg_opgen` or equiv). The scope of the python side is greater than just generating named ops: the ops are callable and directly emit `linalg.generic` ops fully dynamically, and this is intended to be a feature for frontends like npcomp to define custom linear algebra ops at runtime.
* There is more work required to handle full type polymorphism, especially with respect to integer formulations, since they require more specificity wrt types.
* Followups to this change will bring the new generator to feature parity with the current one and delete the current. Roughly, this involves adding support for interface declarations and attribute symbol bindings.
Differential Revision: https://reviews.llvm.org/D97135
-
- Feb 19, 2021
-
-
Geoffrey Martin-Noble authored
These are unused since https://reviews.llvm.org/rG81264dfbe80df08668a325a61613b64243b99c01 Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D97014
-
Nicolas Vasilache authored
-
- Feb 18, 2021
-
-
Alexander Belyaev authored
`subtensor_insert` was used instead of `linalg.subtensor_yield` to make this PR smaller. Verification will be added in a follow-up PR. Differential Revision: https://reviews.llvm.org/D96943
-
Alexander Belyaev authored
This commit introduced a cyclic dependency: the MemRef dialect depends on Standard because it used ConstantIndexOp, and Std depends on the MemRef dialect in its EDSC/Intrinsics.h. Working on a fix. This reverts commit 8aa6c376.
-
Julian Gross authored
Create the memref dialect and move several dialect-specific ops without dependencies on other ops from the std dialect to this dialect.
Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
DeallocOp -> MemRef_DeallocOp
MemRefCastOp -> MemRef_CastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
TransposeOp -> MemRef_TransposeOp
ViewOp -> MemRef_ViewOp
The roadmap to split the memref dialect from std is discussed here: https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667
Differential Revision: https://reviews.llvm.org/D96425
-