- Nov 04, 2020
mikeurbach authored
Previously, they were only defined for `FuncOp`. To support this, `FunctionLike` needs a way to get an updated type from the concrete operation. This adds a new hook for that purpose, called `getTypeWithoutArgsAndResults`. For now, `FunctionLike` continues to assume the type is `FunctionType`, and concrete operations that use another type can hide the `getType`, `setType`, and `getTypeWithoutArgsAndResults` methods. Reviewed By: rriddle Differential Revision: https://reviews.llvm.org/D90363
-
- Nov 03, 2020
Thomas Raoux authored
Differential Revision: https://reviews.llvm.org/D90474
-
MaheshRavishankar authored
Reviewed By: ThomasRaoux Differential Revision: https://reviews.llvm.org/D90582
-
- Oct 30, 2020
Sean Silva authored
The bufferization patterns are moved to the .cpp file, which is preferred in the codebase when it makes sense. The LinalgToStandard patterns are kept in a header because they are expected to be used individually. However, they are moved to LinalgToStandard.h, which is the file corresponding to where they are defined. This also removes TensorCastOpConverter, which is now handled by populateStdBufferizePatterns. Eventually, the constant op lowering will be handled as well, but there are currently holdups on moving it (see https://reviews.llvm.org/D89916). Differential Revision: https://reviews.llvm.org/D90254
-
Mehdi Amini authored
It is semantically equivalent, but the intent was really lost there. This also fixes a warning/error from MSVC; see PR48013.
-
- Oct 29, 2020
Nicolas Vasilache authored
Linalg "tile-and-fuse" is currently exposed as a Linalg pass "-linalg-fusion" but only the mechanics of the transformation are currently relevant. Instead turn it into a "-test-linalg-greedy-fusion" pass which performs canonicalizations to enable more fusions to compose. This allows dropping the OperationFolder which is not meant to be used with the pattern rewrite infrastructure. Differential Revision: https://reviews.llvm.org/D90394
-
River Riddle authored
Oftentimes, the legality of inlining can change depending on whether the callable is going to be inlined in place or cloned. For example, some operations are not allowed to be duplicated and can only be inlined if the original callable will cease to exist afterwards. The new `wouldBeCloned` flag allows dialects to hook into this when determining legality. Differential Revision: https://reviews.llvm.org/D90360
-
- Oct 28, 2020
Kazuaki Ishizaki authored
Fix typos in comments and documents. Reviewed By: jpienaar Differential Revision: https://reviews.llvm.org/D90089
-
MaheshRavishankar authored
This patch adds support for fusing linalg.indexed_generic op with linalg.tensor_reshape op by expansion, i.e.
- linalg.indexed_generic op -> linalg.tensor_reshape op when the latter is expanding.
- linalg.tensor_reshape op -> linalg.indexed_generic op when the former is folding.
Differential Revision: https://reviews.llvm.org/D90082
-
- Oct 27, 2020
River Riddle authored
This class represents a rewrite pattern list that has been frozen, and is thus immutable. This replaces the uses of OwningRewritePatternList in pattern-driver-related APIs, such as dialect conversion. When PDL becomes more prevalent, this API will allow for optimizing a set of patterns once, without the need to do so per run of a pass. Differential Revision: https://reviews.llvm.org/D89104
-
River Riddle authored
There are several pieces of pattern rewriting infra in IR/ that really shouldn't be there. This revision moves those pieces to a better location such that they are easier to evolve in the future (e.g. with PDL). More concretely, this revision does the following:
* Create a Transforms/GreedyPatternRewriteDriver.h and move the apply*andFold methods there. The definitions for these methods are already in Transforms/, so it doesn't make sense for the declarations to be in IR.
* Create a new lib/Rewrite library and move PatternApplicator there. This new library will be focused on applying rewrites, and will also include compiling rewrites with PDL.
Differential Revision: https://reviews.llvm.org/D89103
-
MaheshRavishankar authored
Adds support for:
- Dropping unit-dimension loops for indexed_generic ops.
- Folding consecutive folding (or expanding) reshapes when the result (or src) is a scalar.
- Fixes to indexed_generic -> generic fusion when zero-dim tensors are involved.
Differential Revision: https://reviews.llvm.org/D90118
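As an illustration (not taken from the patch; shapes are hypothetical, and the circa-2020 syntax is approximate), dropping a unit dimension on tensors is expressed with a collapsing reshape whose single reassociation group covers both source dims:

```mlir
// Schematic: collapse the leading unit dimension of a 1x? tensor.
%0 = linalg.tensor_reshape %arg0 [affine_map<(d0, d1) -> (d0, d1)>]
    : tensor<1x?xf32> into tensor<?xf32>
```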
-
- Oct 26, 2020
Nicolas Vasilache authored
This revision allows the fusion of the producer of input tensors into the consumer under a tiling transformation (which produces subtensors). Many pieces are still missing (e.g. support for init_tensors, a better refactoring of the LinalgStructuredOp interface support, merging implementations to reuse code), but this still allows getting started. The greedy pass itself is just for testing purposes and will be extracted into a separate test pass. Differential revision: https://reviews.llvm.org/D89491
-
- Oct 20, 2020
Federico Lebrón authored
Differential Revision: https://reviews.llvm.org/D89825
-
- Oct 14, 2020
MaheshRavishankar authored
The current fusion on tensors fuses reshape ops with generic ops by linearizing the indexing maps of the fused tensor in the generic op. This has some limitations:
- It only works for static shapes.
- The resulting indexing map has a linearization that would potentially prevent fusion later on (e.g. tile + fuse).
Instead, try to fuse the reshape consumer (producer) with the generic op producer (consumer) by expanding the dimensionality of the generic op when the reshape is expanding (folding). Since this approach conflicts with the linearization approach, the expansion method is used instead. This also refactors the fusion on tensors into a collection of patterns. Differential Revision: https://reviews.llvm.org/D89002
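Schematically (shapes, maps, and syntax are illustrative sketches, not taken from the patch), an expanding reshape of a generic op's result can be fused by expanding the generic op itself:

```mlir
// Before fusion: a generic op followed by an expanding reshape. The
// reassociation maps are written on the expanded (3-D) space, grouping
// (d0, d1) and (d2).
%0 = linalg.generic ... ins(%in : tensor<?x?xf32>) ... -> tensor<?x?xf32>
%1 = linalg.tensor_reshape %0
    [affine_map<(d0, d1, d2) -> (d0, d1)>, affine_map<(d0, d1, d2) -> (d2)>]
    : tensor<?x?xf32> into tensor<?x?x?xf32>
// After fusion: the generic op operates directly at the expanded
// dimensionality, producing tensor<?x?x?xf32> with no reshape in between.
```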
-
Sean Silva authored
Part of the refactor discussed in: https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17 Differential Revision: https://reviews.llvm.org/D89271
-
Sean Silva authored
Part of the refactor discussed in: https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17 Differential Revision: https://reviews.llvm.org/D89261
-
Nicolas Vasilache authored
-
Nicolas Vasilache authored
This revision adds a programmable codegen strategy from linalg based on staged rewrite patterns. Testing is exercised on a simple linalg.matmul op. Differential Revision: https://reviews.llvm.org/D89374
-
- Oct 13, 2020
Alberto Magni authored
Update linalg-to-loops lowering for pooling operations to perform padding of the input when specified by the corresponding attribute. Reviewed By: hanchung Differential Revision: https://reviews.llvm.org/D88911
-
Nicolas Vasilache authored
TensorConstantOp bufferization currently uses the vector dialect to store constant data into memory. Due to natural vector size and alignment properties, this is problematic with n>1-D vectors whose most minor dimension is not naturally aligned. Instead, this revision linearizes the constant and introduces a linalg.reshape to go back to the desired shape. Still, this is to be considered a workaround, and a better longer-term solution will probably involve `llvm.global`. Differential Revision: https://reviews.llvm.org/D89311
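As a sketch of the workaround (shapes and syntax are hypothetical illustrations, not from the patch), the constant is stored as a 1-D buffer and the original shape is recovered with a reshape:

```mlir
// Schematic: instead of a vector-backed 2-D store, the 2x3 constant data
// is linearized into a 6-element buffer ...
%flat = ... : memref<6xf32>   // holds the linearized constant data
// ... and a metadata-only reshape restores the desired 2-D shape.
%buf = linalg.reshape %flat [affine_map<(d0, d1) -> (d0, d1)>]
    : memref<6xf32> into memref<2x3xf32>
```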
-
- Oct 12, 2020
Nicolas Vasilache authored
This revision reduces the number of places that specific information needs to be modified when adding new named Linalg ops. Differential Revision: https://reviews.llvm.org/D89223
-
Nicolas Vasilache authored
This revision introduces support for buffer allocation for any named linalg op. To avoid template instantiating many ops, a new ConversionPattern is created to capture the LinalgOp interface. Some APIs are updated to remain consistent with MLIR style: `OwningRewritePatternList * -> OwningRewritePatternList &` `BufferAssignmentTypeConverter * -> BufferAssignmentTypeConverter &` Differential revision: https://reviews.llvm.org/D89226
-
Alexander Belyaev authored
The buffer placement preparation tests in test/Transforms/buffer-placement-preparation* are using Linalg as a test dialect, which leads to confusion and "copy-pasta": Linalg is being extended now, and when TensorsToBuffers.cpp is changed, TestBufferPlacement is only sometimes kept in sync, which should not be the case. This has led to an unnoticed bug, because the tests were in a different directory and the patterns were slightly off. Differential Revision: https://reviews.llvm.org/D89209
-
- Oct 10, 2020
Sean Silva authored
Context: https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/14 Differential Revision: https://reviews.llvm.org/D89174
-
- Oct 09, 2020
Nicolas Vasilache authored
This revision belongs to a series of patches that reduce reliance of Linalg transformations on templated rewrite and conversion patterns. Instead, this uses a MatchAnyTag pattern for the vast majority of cases and dispatches internally. Differential revision: https://reviews.llvm.org/D89133
-
- Oct 08, 2020
MaheshRavishankar authored
The methods allow checking:
- whether an operation has dependencies,
- whether there is a dependence from one operation to another.
Differential Revision: https://reviews.llvm.org/D88993
-
Nicolas Vasilache authored
This revision also inserts an end-to-end test that lowers tensors to buffers all the way to executable code on CPU. Differential revision: https://reviews.llvm.org/D88998
-
Alexander Belyaev authored
The simplest case is when the indexing maps are DimIds in every component. This covers cwise ops. Also: * Expose populateConvertLinalgOnTensorsToBuffersPatterns in Transforms.h * Expose emitLoopRanges in Transforms.h Differential Revision: https://reviews.llvm.org/D88781
-
- Oct 07, 2020
Ahmed S. Taei authored
Differential Revision: https://reviews.llvm.org/D88869
-
- Oct 06, 2020
Nicolas Vasilache authored
This revision implements tiling on tensors as described in: https://llvm.discourse.group/t/an-update-on-linalg-on-tensors/1878/4 Differential revision: https://reviews.llvm.org/D88733
-
Nicolas Vasilache authored
This revision adds init_tensors support to buffer allocation for Linalg on tensors. Currently makes the assumption that the init_tensors fold onto the first output tensors. This assumption is not currently enforced or cast in stone and requires experimenting with tiling linalg on tensors for ops **without reductions**. Still this allows progress towards the end-to-end goal.
-
Nicolas Vasilache authored
A verification check on the number of indexing maps seems to have been dropped inadvertently. Also update the relevant roundtrip tests.
-
- Oct 05, 2020
Nicolas Vasilache authored
This canonicalization is the counterpart of MemRefCastOp -> LinalgOp but on tensors. This is needed to properly canonicalize post linalg tiling on tensors. Differential Revision: https://reviews.llvm.org/D88729
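Schematically (types are hypothetical and the syntax approximate), a tensor_cast that discards static shape information can be folded into the consuming linalg op, mirroring the existing MemRefCastOp canonicalization:

```mlir
// Before: a cast to a less static type feeds the linalg op.
%0 = tensor_cast %arg : tensor<4x?xf32> to tensor<?x?xf32>
%1 = linalg.generic ... ins(%0 : tensor<?x?xf32>) ...
// After canonicalization: the generic op consumes %arg directly with the
// more static tensor<4x?xf32> type, and the cast disappears.
```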
-
Benjamin Kramer authored
While affine maps are part of the builtin memref type, there is very limited support for manipulating them in the standard dialect. Add transpose to the set of ops to complement the existing view/subview ops. This is a metadata transformation that encodes the transpose into the strides of a memref. I'm planning to use this when lowering operations on strided memrefs, using the transpose to remove the stride without adding a dependency on linalg dialect. Differential Revision: https://reviews.llvm.org/D88651
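A sketch of the op (syntax approximate to the period, strides hypothetical): the transpose only permutes the strides encoded in the result memref's affine layout map; no data moves:

```mlir
// Transpose a dynamically-strided 2-D memref by swapping its strides.
%1 = transpose %0 (i, j) -> (j, i)
    : memref<?x?xf32> to memref<?x?xf32, affine_map<(d0, d1)[s0] -> (d1 * s0 + d0)>>
```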
-
- Oct 02, 2020
Nicolas Vasilache authored
This revision introduces a `subtensor` op, which is the counterpart of `subview` for a tensor operand. This also refactors the relevant pieces to allow reusing the `subview` implementation where appropriate. This operation will be used to implement tiling for Linalg on tensors.
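For illustration (sizes hypothetical, syntax approximate), `subtensor` mirrors `subview`'s offset/size/stride form but yields a value-semantic tensor rather than a memref view:

```mlir
// Extract a 2x2 tile starting at (0, 1) with unit strides.
%tile = subtensor %t[0, 1] [2, 2] [1, 1]
    : tensor<4x4xf32> to tensor<2x2xf32>
```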
-
- Oct 01, 2020
MaheshRavishankar authored
Differential Revision: https://reviews.llvm.org/D88633
-
Geoffrey Martin-Noble authored
Dialects include more than just ops, so this suffix is outdated. Follows discussion in https://llvm.discourse.group/t/rfc-canonical-file-paths-to-dialects/621 Reviewed By: stellaraccident Differential Revision: https://reviews.llvm.org/D88530
-
- Sep 30, 2020
MaheshRavishankar authored
The pattern is structured similarly to other patterns like LinalgTilingPattern. The fusion pattern takes options that allow fusing with the producers of multiple operands at once.
- The pattern fuses only at the level that is known to be legal, i.e. if a reduction loop in the consumer is tiled, then fusion should happen "before" this loop. Some refactoring of the fusion code is needed to fuse only where it is legal.
- Since the fusion on buffers uses the LinalgDependenceGraph, which is not mutable in place, the fusion pattern keeps the original operations in the IR but tags them with a marker that can later be used to find them.
This change also fixes an issue with tiling and distribution/interchange where a tile size of 0 for a loop was not accounted for. Differential Revision: https://reviews.llvm.org/D88435
-
Mahesh Ravishankar authored
While folding reshapes that introduce unit extent dims, the logic to compute the reassociation maps can be generalized to handle some corner cases, for example, when the folded shape still has unit-extent dims but corresponds to folded unit-extent dims of the expanded shape. Differential Revision: https://reviews.llvm.org/D88521
-