- Mar 24, 2021
-
-
Lei Zhang authored
Init tensor operands also have indexing maps and generally follow the same constraints we expect for non-init-tensor operands. Differential Revision: https://reviews.llvm.org/D99115
-
Lei Zhang authored
This commit exposes an option on the pattern FoldWithProducerReshapeOpByExpansion to allow folding unit-dim reshapes. This gives callers more fine-grained control. Differential Revision: https://reviews.llvm.org/D99114
-
Lei Zhang authored
Until now, Linalg fusion only allowed fusing producers whose operands all have permutation indexing maps. This makes it easier to deduce the subtensor/subview, but it is an unnecessary constraint: on the tiling side we already have more advanced logic to deduce the subranges even when an operand does not have a permutation indexing map, e.g., the input operand of convolution ops. This patch reuses the tiling-side logic to deduce subranges for fusion, which enables fusing convolutions with their consumer ops when possible. Along the way, we now generate proper affine.min ops to guard against size boundaries when we cannot be certain accesses won't be out of bounds. Differential Revision: https://reviews.llvm.org/D99014
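For reference, the boundary guards mentioned above take the usual tiling form. A hand-written sketch (not code from the patch; names and constants are illustrative), clamping a tile size of 4 against an upper bound of 16:

```mlir
// %iv is the current loop induction variable; the tile extent is
// min(4, 16 - %iv), so the last (partial) tile stays in bounds.
%size = affine.min affine_map<(d0) -> (4, -d0 + 16)>(%iv)
```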
-
Lei Zhang authored
This is a preparation step to reuse makeTiledShapes in tensor fusion. Along the way, did some lightweight cleanups. Differential Revision: https://reviews.llvm.org/D99013
-
Tobias Gysi authored
All Linalg operations that have a region builder shall call it during op creation; calling it during vectorization is obsolete. Differential Revision: https://reviews.llvm.org/D99168
-
Nicolas Vasilache authored
Fix the BlockAndValueMapping update that was missing entries for scf.for op's blockIterArgs. Skip cloning subtensors of the padded tensor as the logic for these is separate. Add a filter to drop side-effecting ops. Tests are beefed up to verify the IR is sound in all hoisting configurations for 2-level 3-D tiled matmul. Differential Revision: https://reviews.llvm.org/D99255
-
- Mar 23, 2021
-
-
River Riddle authored
[mlir][Pattern] Add better support for using interfaces/traits to match root operations in rewrite patterns. To match an interface or trait, users currently have to use the `MatchAny` tag. This tag can be quite problematic for compile time for things like the canonicalizer, as the `MatchAny` patterns may get applied to *every* operation. This revision adds better support by bucketing interface/trait patterns based on which registered operations have them registered. This means that moving forward we will only attempt to match these patterns to operations that have the interface or trait registered. To simplify defining patterns that match traits and interfaces, two new utility classes have been added: OpTraitRewritePattern and OpInterfaceRewritePattern. Differential Revision: https://reviews.llvm.org/D98986
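A rough sketch of how the new interface-based helper is used (a hand-written illustration, not code from the revision; the pattern name and rewrite body are placeholders):

```cpp
// Match any op implementing ViewLikeOpInterface without listing concrete
// ops. With the new bucketing, this pattern is only tried on registered
// operations that actually implement the interface.
struct SimplifyViewLike
    : public OpInterfaceRewritePattern<ViewLikeOpInterface> {
  using OpInterfaceRewritePattern<ViewLikeOpInterface>::OpInterfaceRewritePattern;

  LogicalResult matchAndRewrite(ViewLikeOpInterface op,
                                PatternRewriter &rewriter) const override {
    // ... rewrite logic for any view-like op ...
    return failure();
  }
};
```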
-
Alex Zinenko authored
-
Nicolas Vasilache authored
This revision introduces proper backward slice computation during the hoisting of PadTensorOp. This allows hoisting padding even across multiple levels of tiling. Such hoisting requires the proper handling of loop bounds that may depend on enclosing loop variables. Differential Revision: https://reviews.llvm.org/D98965
-
Chris Lattner authored
This nicely aligns the naming with RewritePatternSet. This type isn't as widely used, but we keep a using declaration in place to help with downstream consumption of this change. Differential Revision: https://reviews.llvm.org/D99131
-
Chris Lattner authored
[PatternMatch] Big mechanical rename OwningRewritePatternList -> RewritePatternSet and insert -> add. NFC. This doesn't change APIs; it just cleans up the many in-tree uses of these names to use the new preferred names. We'll keep the old names around for a couple of weeks to help transitions. Differential Revision: https://reviews.llvm.org/D99127
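An illustrative before/after at a typical call site (a hand-written sketch; `MyPattern` is a placeholder):

```cpp
// Before the rename:
OwningRewritePatternList patterns(context);
patterns.insert<MyPattern>(context);

// After the rename: same behavior, new preferred names.
RewritePatternSet patterns(context);
patterns.add<MyPattern>(context);
```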
-
- Mar 22, 2021
-
-
Nicolas Vasilache authored
- Drop unnecessary occurrences of rewriter.eraseOp: dead Linalg ops on tensors should be cleaned up by DCE.
- Reimplement the part of Linalg fusion that constructs the body and block arguments: the previous implementation had too much magic. Instead, this spells out all cases explicitly and asserts / introduces TODOs for incorrect cases.

As a consequence, we can use the default traversal order for this pattern. Differential Revision: https://reviews.llvm.org/D99070
-
Adrian Kuegel authored
GreedyPatternRewriteDriver was changed from bottom-up traversal to top-down traversal. Not all passes work yet with this traversal order. To allow some time for fixing them, add an option to switch back to bottom-up traversal. Use this option in FusionOfTensorOpsPass, which fails otherwise. Differential Revision: https://reviews.llvm.org/D99059
-
- Mar 21, 2021
-
-
Chris Lattner authored
This updates the codebase to pass the context when creating an instance of OwningRewritePatternList, and starts removing extraneous MLIRContext parameters. There are many, many more to be removed. Differential Revision: https://reviews.llvm.org/D99028
-
- Mar 19, 2021
-
-
Benjamin Kramer authored
```
Transforms.cpp:586:16: error: unused variable 'v' [-Werror,-Wunused-variable]
  for (Value v : operands)
             ^
```
-
Nicolas Vasilache authored
-
Alexander Belyaev authored
https://llvm.discourse.group/t/rfc-add-linalg-tileop/2833 Differential Revision: https://reviews.llvm.org/D98900
-
- Mar 18, 2021
-
-
Mehdi Amini authored
This reverts commit 32a744ab. CI is broken:

```
test/Dialect/Linalg/bufferize.mlir:274:12: error: CHECK: expected string not found in input
// CHECK: %[[MEMREF:.*]] = tensor_to_memref %[[IN]] : memref<?xf32>
           ^
```
-
Eugene Zhulenev authored
`BufferizeAnyLinalgOp` fails because `FillOp` is not a `LinalgGenericOp`, and the pattern fails while reading the operand sizes attribute. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D98671
-
thomasraoux authored
This propagates the affine map to the transfer_read op in case it is not a minor identity map. Differential Revision: https://reviews.llvm.org/D98523
-
Alexander Belyaev authored
Also use `ArrayAttr` to pass iterator types to the TiledLoopOp builder. Differential Revision: https://reviews.llvm.org/D98871
- Mar 15, 2021
-
-
Julian Gross authored
Create the memref dialect and move dialect-specific ops from the std dialect to this dialect. Moved ops:

- AllocOp -> MemRef_AllocOp
- AllocaOp -> MemRef_AllocaOp
- AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
- DeallocOp -> MemRef_DeallocOp
- DimOp -> MemRef_DimOp
- MemRefCastOp -> MemRef_CastOp
- MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
- GetGlobalMemRefOp -> MemRef_GetGlobalOp
- GlobalMemRefOp -> MemRef_GlobalOp
- LoadOp -> MemRef_LoadOp
- PrefetchOp -> MemRef_PrefetchOp
- ReshapeOp -> MemRef_ReshapeOp
- StoreOp -> MemRef_StoreOp
- SubViewOp -> MemRef_SubViewOp
- TransposeOp -> MemRef_TransposeOp
- TensorLoadOp -> MemRef_TensorLoadOp
- TensorStoreOp -> MemRef_TensorStoreOp
- TensorToMemRefOp -> MemRef_BufferCastOp
- ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here: https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667 Differential Revision: https://reviews.llvm.org/D98041
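As a rough illustration of the resulting IR change (a hand-written sketch, not taken from the patch; shapes are illustrative), a standard-dialect allocation and load now live in the memref dialect:

```mlir
// Before: ops spelled in the std dialect.
%0 = alloc() : memref<16xf32>
%1 = load %0[%c0] : memref<16xf32>

// After: the same ops, spelled in the new memref dialect.
%0 = memref.alloc() : memref<16xf32>
%1 = memref.load %0[%c0] : memref<16xf32>
```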
-
- Mar 13, 2021
-
-
Aart Bik authored
This is a temporary work-around to get our all-annotations-all-flags stress-testing effort to run clean. In the long run, we want to provide efficient implementations of strided loads and stores, though. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D98563
-
- Mar 10, 2021
-
-
Inho Seo authored
Moved the getStaticLoopRanges and getStaticShape methods to LinalgInterfaces.td to add static shape verification. The goal is to use these methods in LinalgInterfaces.cpp for additional static shape verification that matches the shaped operands against the loops of Linalg ops. Using the existing methods would have created a circular dependency at link time; now we can use them as methods of LinalgOp. Reviewed By: hanchung Differential Revision: https://reviews.llvm.org/D98163
-
- Mar 09, 2021
-
-
Tobias Gysi authored
Return the vectorization results using a vector passed by reference instead of returning them embedded in a structure. Differential Revision: https://reviews.llvm.org/D98182
-
- Mar 05, 2021
-
-
Aart Bik authored
Reduction updates should be masked, just like the loads and stores. Note that, alternatively, we could use the fact that masked values are zero for += updates and mask the invariants to get this working, but that would not work for *= updates. Masking the update itself is cleanest. This change also replaces the constant mask with a broadcast of "true", since this constant folds much better for various folding patterns. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D98000
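A hand-written sketch of the mask change described above (not code from the patch; names and the vector shape are illustrative):

```mlir
// Before: an explicit constant mask.
%mask = vector.constant_mask [16] : vector<16xi1>

// After: a broadcast of "true", which constant-folds much better.
%true = constant true
%mask = vector.broadcast %true : i1 to vector<16xi1>
```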
-
Nicolas Vasilache authored
-
- Mar 04, 2021
-
-
Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D97939
-
Aart Bik authored
Found with exhaustive testing: it is possible that a while loop appears in between chainable for loops. As long as we don't scalarize reductions in while loops, this means we need to terminate the chain at the while loop. This also refactors the reduction code into more readable helper methods. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97886
-
- Mar 03, 2021
-
-
MaheshRavishankar authored
The SubTensorInsertOp has a requirement that the dest type and result type match. Just folding the tensor.cast operation violates this and creates verification errors during canonicalization. Also fix other canonicalization methods that weren't inserting casts properly. Differential Revision: https://reviews.llvm.org/D97800
-
Aart Bik authored
Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97795
-
- Mar 02, 2021
-
-
Frederik Gossen authored
Some elementwise operations are not scalarizable, vectorizable, or tensorizable. Split the `ElementwiseMappable` trait into the following, more precise traits:

- `Elementwise`
- `Scalarizable`
- `Vectorizable`
- `Tensorizable`

This allows for reuse of `Elementwise` in dialects like HLO. Differential Revision: https://reviews.llvm.org/D97674
-
KareemErgawy-TomTom authored
This patch continues the detensorizing implementation by detensoring internal control flow in functions. In order to detensorize functions, all non-entry block arguments are detensored, and branches between such blocks are updated to reflect the detensored types as well. The function entry block (signature) is left intact. This continues work towards handling github/google/iree#1159. Reviewed By: silvas Differential Revision: https://reviews.llvm.org/D97148
-
Stella Laurenzo authored
Differential Revision: https://reviews.llvm.org/D97602
-
- Feb 28, 2021
-
-
Aart Bik authored
The universal index was maintained if dense indices were still in place and lattice points followed. However, it should only be kept if any of those following lattice points actually consumes the universal index. This change also fixes an inaccuracy with a missing broadcast around a vector invariant. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97594
-
Stella Laurenzo authored
This enables this kind of construct in the DSL to generate a named op that is polymorphic over numeric type variables `T` and `U`, generating the correct arithmetic casts at construction time:

```python
@tc_def_op
def polymorphic_matmul(A=TensorDef(T1, S.M, S.K),
                       B=TensorDef(T2, S.K, S.N),
                       C=TensorDef(U, S.M, S.N, output=True)):
  implements(ContractionOpInterface)
  C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
```

Presently, this only supports type variables that are bound to the element type of one of the arguments, although a further extension that allows binding a type variable to an attribute would allow more expressiveness and may be useful for some formulations. This is left to a future patch. In addition, this patch does not yet materialize the verifier support that ensures types are bound correctly (for such simple examples, failing to do so will yield IR that fails verification; it just won't yet fail with a precise error). Note that the full grid of extension/truncation/int<->float conversions is supported, but many of them are lossy, and higher-level code needs to be mindful of numerics (it is not the job of this level). As-is, this should be sufficient for most integer matmul scenarios we work with in typical quantization schemes. Differential Revision: https://reviews.llvm.org/D97603
-
- Feb 26, 2021
-
-
Aart Bik authored
Similar to mask-load/store and compress/expand, the gather and scatter operations now allow higher-dimensional uses. Note that, to support the mixed-type index, the new syntax is `vector.gather %base [%i,%j] [%kvector] ...`. The first client of this generalization is the sparse compiler, which needs to define scatters and gathers on dense operands of higher dimensions too. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97422
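A hand-written sketch of a 2-D gather in the generalized form (operand names, shapes, and types are illustrative, not taken from the patch; the mask and pass-through operands follow the existing gather convention):

```mlir
// Gather 16 elements from a 2-D memref starting at indices [%i, %j],
// at offsets given by %kvector, under %mask, taking %pass_thru for
// masked-off lanes.
%g = vector.gather %base[%i, %j] [%kvector], %mask, %pass_thru
    : memref<16x16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32>
      into vector<16xf32>
```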
-
Christian Sigg authored
Fix call sites. The method will be removed in two weeks. Reviewed By: rriddle Differential Revision: https://reviews.llvm.org/D97530
-
- Feb 25, 2021
-
-
Christian Sigg authored
Fix call sites. The method will be removed in two weeks. Reviewed By: rriddle Differential Revision: https://reviews.llvm.org/D97464
-