- Apr 21, 2021
-
-
Ahmed Taei authored
This will prevent fusion that spans all dims and generates a (d0, d1, ...) -> () reshape, which isn't legal. Differential Revision: https://reviews.llvm.org/D100805
-
thomasraoux authored
Break up the dependency between SCF ops and the substituteMin helper and make a more generic version of AffineMinSCFCanonicalization. This reduces the dependencies between linalg and SCF and will allow the logic to be used with other kinds of ops (like ID ops). Differential Revision: https://reviews.llvm.org/D100321
-
Tobias Gysi authored
Instead of always running the region builder, check if the generalized op has a region attached. If yes, inline the existing region instead of calling the region builder. This change circumvents a problem with named operations that have a region builder taking captures and the generalization pass not knowing about these captures. Differential Revision: https://reviews.llvm.org/D100880
-
- Apr 20, 2021
-
-
Tobias Gysi authored
The patch extends the vectorization pass to lower linalg index operations to vector code. It allocates constant 1d vectors that enumerate the indices along the iteration dimensions and broadcasts/transposes these 1d vectors to the iteration space. Differential Revision: https://reviews.llvm.org/D100373
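For illustration, a minimal sketch (function name, shapes, and values are made up for the example) of the kind of constant/broadcast pair the pass is described as producing for the second dimension of a 4x8 iteration space, assuming 2021-era std and vector dialect syntax:

```mlir
func @index_vector_dim1() -> vector<4x8xindex> {
  // Enumerate the indices along iteration dimension 1 in a constant 1-d vector...
  %enum = constant dense<[0, 1, 2, 3, 4, 5, 6, 7]> : vector<8xindex>
  // ...then broadcast it to the full 4x8 iteration space.
  %idx1 = vector.broadcast %enum : vector<8xindex> to vector<4x8xindex>
  return %idx1 : vector<4x8xindex>
}
```

For an index along dimension 0, the enumerated vector would additionally need to be moved into the leading position, which is why the pass also transposes these 1d vectors.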
-
KareemErgawy-TomTom authored
This patch extends the control-flow cost-model for detensoring by implementing a forward-looking pass on block arguments that should be detensored. This makes sure that if a (to-be-detensored) block argument "escapes" its block through the terminator, then the successor arguments are also detensored. Reviewed By: silvas Differential Revision: https://reviews.llvm.org/D100457
-
Tobias Gysi authored
The patch replaces the index operations in the body of fused producers and linearizes the indices after expansion. Differential Revision: https://reviews.llvm.org/D100479
-
Tobias Gysi authored
Update the dimensions of the index operations to account for dropped dimensions and replace the index operations of dropped dimensions by zero. Differential Revision: https://reviews.llvm.org/D100395
-
- Apr 19, 2021
-
-
Tobias Gysi authored
Instead of interchanging loops during the loop lowering, this pass performs the interchange by permuting the indexing maps. It also updates the iterator types and the index accesses in the body of the operation. Differential Revision: https://reviews.llvm.org/D100627
-
- Apr 16, 2021
-
-
Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D100603
-
Ahmed Taei authored
Without this, tile-and-pad will never terminate if padding fails. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D97720
-
River Riddle authored
Differential Revision: https://reviews.llvm.org/D100436
-
- Apr 15, 2021
-
-
Aart Bik authored
Rationale: Now that vector<?xindex> is allowed, the restriction on vectorization of index types in the sparse compiler can be removed. Also needs generalization of scatter/gather index types. Reviewed By: gysit Differential Revision: https://reviews.llvm.org/D100522
-
- Apr 14, 2021
-
-
Tobias Gysi authored
The patch updates the linalg fusion pass to add the tile offsets to the indices. Differential Revision: https://reviews.llvm.org/D100456
-
- Apr 13, 2021
-
-
Tobias Gysi authored
The patch updates the tiling pass to add the tile offsets to the indices returned by the linalg operations. Differential Revision: https://reviews.llvm.org/D100379
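Schematically (this is an excerpt rather than a complete module, and %iv0 is assumed to be the induction variable of the enclosing tile loop), the rewrite amounts to shifting the tile-local index by the tile offset, e.g. with a plain index addition:

```mlir
// Inside the body of the tiled operation: the index is local to the current
// tile, so the tile offset carried by the loop induction variable is added
// back to recover the index in the original iteration space.
%local  = linalg.index 0 : index
%global = addi %iv0, %local : index
```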
-
Tobias Gysi authored
The patch extends the linalg to loop lowering pass to replace all linalg index operations by the induction variables of the generated loop nests. Differential Revision: https://reviews.llvm.org/D100364
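A hedged sketch of what the lowering produces, assuming a bufferized op with a 4x8 iteration space (names and bounds are illustrative): the loop nest below carries the induction variables that replace `linalg.index 0` and `linalg.index 1`.

```mlir
func @lowered_to_loops() {
  %c0 = constant 0 : index
  %c1 = constant 1 : index
  %c4 = constant 4 : index
  %c8 = constant 8 : index
  scf.for %i = %c0 to %c4 step %c1 {
    scf.for %j = %c0 to %c8 step %c1 {
      // %i and %j now play the role of linalg.index 0 and linalg.index 1.
      %sum = addi %i, %j : index
      // ... rest of the original linalg body, storing into the output buffer ...
    }
  }
  return
}
```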
-
KareemErgawy-TomTom authored
This patch introduces the necessary infrastructure changes to implement cost-modelling for detensoring. In particular, it introduces the following changes: an extension to the dialect conversion framework to selectively convert a subset of non-entry BB arguments; an extension to the branch conversion pattern to selectively convert a subset of a branch's operands; an interface for detensoring cost-modelling; and two simple implementations of two different cost models. This sets the stage to explore cost-modelling for detensoring in an easier way. We still need to come up with better cost models. Reviewed By: silvas Differential Revision: https://reviews.llvm.org/D99945
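A minimal before/after sketch of what detensoring a non-entry block argument looks like (the function names are made up and the boundary rematerialization details are elided), assuming 0-d tensors of i32:

```mlir
// Before: the successor operand and block argument carry a 0-d tensor.
func @before(%t: tensor<i32>) -> i32 {
  br ^bb1(%t : tensor<i32>)
^bb1(%arg: tensor<i32>):
  %e = tensor.extract %arg[] : tensor<i32>
  return %e : i32
}

// After detensoring: the branch forwards the extracted scalar and the
// block argument becomes a plain i32.
func @after(%t: tensor<i32>) -> i32 {
  %e = tensor.extract %t[] : tensor<i32>
  br ^bb1(%e : i32)
^bb1(%arg: i32):
  return %arg : i32
}
```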
-
- Apr 12, 2021
-
-
MaheshRavishankar authored
Fusing a constant with a linalg.generic operation can result in the fused operation being illegal since the loop bound computation fails. Avoid such fusions. Differential Revision: https://reviews.llvm.org/D100272
-
Tobias Gysi authored
The `linalg.index` operation provides access to the iteration indices of the immediately enclosing linalg operation. It takes a dimension `dim` attribute and returns the iteration index in the given dimension. Having `linalg.index` allows us to unify `linalg.generic` and `linalg.indexed_generic` and also enables index access in named operations. Differential Revision: https://reviews.llvm.org/D100292
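A small sketch of how the op can be used inside a `linalg.generic` body (the shapes and the computation are invented for the example; the `addi` syntax assumes the 2021-era std dialect):

```mlir
#map = affine_map<(d0, d1) -> (d0, d1)>
func @fill_with_index_sums(%init: tensor<4x8xindex>) -> tensor<4x8xindex> {
  %0 = linalg.generic {indexing_maps = [#map],
                       iterator_types = ["parallel", "parallel"]}
      outs(%init : tensor<4x8xindex>) {
  ^bb0(%out: index):
    %i = linalg.index 0 : index   // iteration index along dimension 0
    %j = linalg.index 1 : index   // iteration index along dimension 1
    %sum = addi %i, %j : index
    linalg.yield %sum : index
  } -> tensor<4x8xindex>
  return %0 : tensor<4x8xindex>
}
```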
-
- Apr 09, 2021
-
-
MaheshRavishankar authored
A recent change enabled dropping unit-trip loops of "reduction" iterator type as well. This is fine as long as there is at least one other "reduction" iterator in the operation. Without this, the initialized value (the value of `out`) is not read, which leads to a correctness issue. Also fix a bug in the `fill` -> `tensor_reshape` folding: the `out` operand of the `fill` needs to be reshaped to get the `out` operand of the generated `fill` operation. Differential Revision: https://reviews.llvm.org/D100145
-
- Apr 07, 2021
-
-
Aart Bik authored
Some sparse matrices operate on integral values (in contrast with the common f32 and f64 values). This CL expands the compiler and runtime support to deal with several common type combinations. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D99999
-
- Apr 06, 2021
-
-
Nicolas Vasilache authored
Linalg fusion on tensors makes different assumptions on the operand side than on the region bbArg side. Relax the behavior on the operand/indexing map side so that we better support output operands that may also be read from. Differential revision: https://reviews.llvm.org/D99499
-
MaheshRavishankar authored
Right now, elementwise operation fusion in Linalg fuses everything it can. Without some checks, this can run up against resource limits of the target hardware. This patch adds a callback function that clients can use to implement a cost function. When two elementwise operations are deemed structurally fusable, the callback can be used to control whether the fusion applies. Differential Revision: https://reviews.llvm.org/D99820
-
- Apr 05, 2021
-
-
MaheshRavishankar authored
The moved `populate` methods are only relevant to Linalg operations, so they are better off in the `linalg` namespace. Also rename `populateLinalgTensorOpsFusionPatterns` to `populateElementwiseOpsFusionPatterns`. This makes the scope of these patterns explicit and disambiguates them from fusion on tensors using tile + fuse. Differential Revision: https://reviews.llvm.org/D99819
-
- Apr 02, 2021
-
-
Aart Bik authored
Rationale: Small indices and values, when allowed by the required range of the input tensors, can reduce the memory footprint of sparse tensors even more. Note, however, that we must be careful to zero-extend the values (since sparse tensors never use negatives for indexing), even though LLVM treats the index type as signed in most memory operations (like scatter and gather). This CL dots all the i's in this regard. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D99777
-
- Mar 29, 2021
-
-
MaheshRavishankar authored
Subtensor operations that take a slice out of a tensor that is unit-extent along a dimension can be rewritten to drop that dimension. Differential Revision: https://reviews.llvm.org/D99226
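For example (a sketch with invented names and shapes), a rank-reducing rewrite of a unit-extent slice might look like:

```mlir
// The source is unit-extent along dimension 0, so the slice can drop that
// dimension directly in the result type instead of keeping a leading 1.
func @drop_unit_dim(%t: tensor<1x?xf32>, %o: index, %n: index) -> tensor<?xf32> {
  %0 = subtensor %t[0, %o] [1, %n] [1, 1] : tensor<1x?xf32> to tensor<?xf32>
  return %0 : tensor<?xf32>
}
```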
-
MaheshRavishankar authored
Drop usage of `emitRemark` and use `notifyMatchFailure` instead to avoid unnecessary spew during compilation. Differential Revision: https://reviews.llvm.org/D99485
-
- Mar 24, 2021
-
-
Lei Zhang authored
Init tensor operands also have indexing maps and generally follow the same constraints we expect for non-init-tensor operands. Differential Revision: https://reviews.llvm.org/D99115
-
Lei Zhang authored
This commit exposes an option on the pattern FoldWithProducerReshapeOpByExpansion to allow folding unit dim reshapes. This gives callers more fine-grained control. Differential Revision: https://reviews.llvm.org/D99114
-
Lei Zhang authored
Until now, Linalg fusion only allowed fusing producers whose operands all have permutation indexing maps. That makes it easier to deduce the subtensor/subview, but it is an unnecessary constraint: in tiling we have more advanced logic to deduce the subranges even when an operand does not have a permutation indexing map, e.g., the input operand of convolution ops. This patch uses the logic on the tiling side to deduce subranges for fusion. This enables fusing convolution with its consumer ops when possible. Along the way, we now generate proper affine.min ops to guard against size boundaries when we cannot be certain they won't be out of bounds. Differential Revision: https://reviews.llvm.org/D99014
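The boundary guard mentioned above is the usual affine.min clamp; a minimal sketch (the tile size 4 and the names are illustrative) is:

```mlir
// The effective size is min(tile size, remaining size), so the deduced
// subrange never reads past the end of the operand.
func @guarded_tile_size(%iv: index, %ub: index) -> index {
  %size = affine.min affine_map<(d0)[s0] -> (4, s0 - d0)>(%iv)[%ub]
  return %size : index
}
```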
-
Lei Zhang authored
This is a preparation step to reuse makeTiledShapes in tensor fusion. Along the way, did some lightweight cleanups. Differential Revision: https://reviews.llvm.org/D99013
-
Tobias Gysi authored
All linalg operations having a region builder shall call it during op creation. Calling it during vectorization is obsolete. Differential Revision: https://reviews.llvm.org/D99168
-
Nicolas Vasilache authored
Fix the BlockAndValueMapping update that was missing entries for scf.for op's blockIterArgs. Skip cloning subtensors of the padded tensor as the logic for these is separate. Add a filter to drop side-effecting ops. Tests are beefed up to verify the IR is sound in all hoisting configurations for 2-level 3-D tiled matmul. Differential Revision: https://reviews.llvm.org/D99255
-
- Mar 23, 2021
-
-
River Riddle authored
[mlir][Pattern] Add better support for using interfaces/traits to match root operations in rewrite patterns. To match an interface or trait, users currently have to use the `MatchAny` tag. This tag can be quite problematic for compile time for things like the canonicalizer, as the `MatchAny` patterns may get applied to *every* operation. This revision adds better support by bucketing interface/trait patterns based on which registered operations have them registered. This means that moving forward we will only attempt to match these patterns to operations that have this interface registered. To simplify defining patterns that match traits and interfaces, two new utility classes have been added: OpTraitRewritePattern and OpInterfaceRewritePattern. Differential Revision: https://reviews.llvm.org/D98986
-
Alex Zinenko authored
-
Nicolas Vasilache authored
This revision introduces proper backward slice computation during the hoisting of PadTensorOp. This allows hoisting padding even across multiple levels of tiling. Such hoisting requires the proper handling of loop bounds that may depend on enclosing loop variables. Differential revision: https://reviews.llvm.org/D98965
-
Chris Lattner authored
This nicely aligns the naming with RewritePatternSet. This type isn't as widely used, but we keep a using declaration in place to help with downstream consumption of this change. Differential Revision: https://reviews.llvm.org/D99131
-
Chris Lattner authored
[PatternMatch] Big mechanical rename OwningRewritePatternList -> RewritePatternSet and insert -> add. NFC. This doesn't change APIs; it just cleans up the many in-tree uses of these names to use the new preferred names. We'll keep the old names around for a couple of weeks to help transitions. Differential Revision: https://reviews.llvm.org/D99127
-
- Mar 22, 2021
-
-
Nicolas Vasilache authored
Drop unnecessary occurrences of rewriter.eraseOp: dead linalg ops on tensors should be cleaned up by DCE. Reimplement the part of Linalg fusion that constructs the body and block arguments: the previous implementation had too much magic; instead, this spells out all cases explicitly and asserts / introduces TODOs for incorrect cases. As a consequence, we can use the default traversal order for this pattern. Differential Revision: https://reviews.llvm.org/D99070
-
Adrian Kuegel authored
GreedyPatternRewriteDriver was changed from bottom-up traversal to top-down traversal. Not all passes work yet with that change in traversal order. To give some time for fixing, add an option that allows switching back to bottom-up traversal. Use this option in FusionOfTensorOpsPass, which fails otherwise. Differential Revision: https://reviews.llvm.org/D99059
-
- Mar 21, 2021
-
-
Chris Lattner authored
This updates the codebase to pass the context when creating an instance of OwningRewritePatternList, and starts removing extraneous MLIRContext parameters. There are many many more to be removed. Differential Revision: https://reviews.llvm.org/D99028
-