- Nov 04, 2020
-
mikeurbach authored
Previously, they were only defined for `FuncOp`. To support this, `FunctionLike` needs a way to get an updated type from the concrete operation. This adds a new hook for that purpose, called `getTypeWithoutArgsAndResults`. For now, `FunctionLike` continues to assume the type is `FunctionType`, and concrete operations that use another type can hide the `getType`, `setType`, and `getTypeWithoutArgsAndResults` methods.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D90363
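The hook's contract can be modeled outside MLIR. Below is a minimal Python sketch of the idea only; the names and the `FunctionType` stand-in are illustrative, not MLIR's actual C++ API: compute an updated function type with the given argument and result indices erased.

```python
from dataclasses import dataclass

# Illustrative model only: the real hook is a C++ method on the concrete
# op; here a function type is just an (inputs, results) pair.
@dataclass(frozen=True)
class FunctionType:
    inputs: tuple
    results: tuple

def type_without_args_and_results(ty, arg_indices, result_indices):
    """Return an updated type with the given argument/result indices erased."""
    arg_drop, res_drop = set(arg_indices), set(result_indices)
    return FunctionType(
        inputs=tuple(t for i, t in enumerate(ty.inputs) if i not in arg_drop),
        results=tuple(t for i, t in enumerate(ty.results) if i not in res_drop),
    )

ty = FunctionType(inputs=("i32", "f32", "i64"), results=("f32",))
assert type_without_args_and_results(ty, [1], [0]) == \
    FunctionType(inputs=("i32", "i64"), results=())
```

An op whose function type is not a `FunctionType` would shadow this computation with its own, which is exactly what hiding `getType`, `setType`, and `getTypeWithoutArgsAndResults` on the concrete operation enables.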
-
- Nov 03, 2020
-
Thomas Raoux authored
Differential Revision: https://reviews.llvm.org/D90474
-
- Oct 30, 2020
-
Sean Silva authored
The bufferization patterns are moved to the .cpp file, which is preferred in the codebase when it makes sense. The LinalgToStandard patterns are kept in a header because they are expected to be used individually. However, they are moved to LinalgToStandard.h, which is the file corresponding to where they are defined. This also removes TensorCastOpConverter, which is handled by populateStdBufferizePatterns now. Eventually, the constant op lowering will be handled as well, but there are currently holdups on moving it (see https://reviews.llvm.org/D89916).

Differential Revision: https://reviews.llvm.org/D90254
-
- Oct 29, 2020
-
Nicolas Vasilache authored
Linalg "tile-and-fuse" is currently exposed as a Linalg pass "-linalg-fusion" but only the mechanics of the transformation are currently relevant. Instead turn it into a "-test-linalg-greedy-fusion" pass which performs canonicalizations to enable more fusions to compose. This allows dropping the OperationFolder which is not meant to be used with the pattern rewrite infrastructure. Differential Revision: https://reviews.llvm.org/D90394
-
- Oct 28, 2020
-
Kazuaki Ishizaki authored
Fix typos in comments and documents.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D90089
-
MaheshRavishankar authored
This patch adds support for fusing linalg.indexed_generic op with linalg.tensor_reshape op by expansion, i.e.:
- linalg.indexed_generic op -> linalg.tensor_reshape op when the latter is expanding.
- linalg.tensor_reshape op -> linalg.indexed_generic op when the former is folding.

Differential Revision: https://reviews.llvm.org/D90082
-
- Oct 27, 2020
-
River Riddle authored
This class represents a rewrite pattern list that has been frozen, and thus immutable. This replaces the uses of OwningRewritePatternList in pattern driver related API, such as dialect conversion. When PDL becomes more prevalent, this API will allow for optimizing a set of patterns once without the need to do this per run of a pass. Differential Revision: https://reviews.llvm.org/D89104
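The intent (pay the pattern-preparation cost once at freeze time rather than on every driver run) can be modeled in a few lines of Python. This is an illustrative sketch, not MLIR's actual API; sorting by benefit merely stands in for the expensive one-time compilation step the message alludes to for PDL.

```python
# Illustrative sketch, not MLIR's actual API: freezing performs the
# one-time preparation (here, sorting by benefit stands in for compiling
# PDL patterns); every later driver run reuses the prepared list.
class FrozenPatterns:
    def __init__(self, patterns):
        self._patterns = tuple(sorted(patterns, key=lambda p: -p["benefit"]))

    @property
    def patterns(self):
        # Immutable view: the frozen list is never re-prepared per run.
        return self._patterns

frozen = FrozenPatterns([{"name": "a", "benefit": 1}, {"name": "b", "benefit": 5}])
assert [p["name"] for p in frozen.patterns] == ["b", "a"]
```

Driver APIs that take the frozen form can then be called repeatedly without redoing the preparation, which is the optimization the revision sets up.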
-
River Riddle authored
There are several pieces of pattern rewriting infra in IR/ that really shouldn't be there. This revision moves those pieces to a better location such that they are easier to evolve in the future (e.g. with PDL). More concretely, this revision does the following:
* Create a Transforms/GreedyPatternRewriteDriver.h and move the apply*andFold methods there. The definitions for these methods are already in Transforms/, so it doesn't make sense for the declarations to be in IR.
* Create a new lib/Rewrite library and move PatternApplicator there. This new library will be focused on applying rewrites, and will also include compiling rewrites with PDL.

Differential Revision: https://reviews.llvm.org/D89103
-
MaheshRavishankar authored
Adds support for:
- Dropping unit dimension loops for indexed_generic ops.
- Folding consecutive folding (or expanding) reshapes when the result (or src) is a scalar.
- Fixes to indexed_generic -> generic fusion when zero-dim tensors are involved.

Differential Revision: https://reviews.llvm.org/D90118
-
- Oct 26, 2020
-
Nicolas Vasilache authored
This revision allows the fusion of the producer of input tensors into the consumer under a tiling transformation (which produces subtensors). Many pieces are still missing (e.g. support for init_tensors, better refactoring of the LinalgStructuredOp interface support, merging implementations and reusing code), but this still allows getting started. The greedy pass itself is just for testing purposes and will be extracted into a separate test pass.

Differential Revision: https://reviews.llvm.org/D89491
-
- Oct 14, 2020
-
MaheshRavishankar authored
The current fusion on tensors fuses reshape ops with generic ops by linearizing the indexing maps of the fused tensor in the generic op. This has some limitations:
- It only works for static shapes.
- The resulting indexing map has a linearization that could potentially prevent fusion later on (for ex. tile + fuse).

Instead, try to fuse the reshape consumer (producer) with the generic op producer (consumer) by expanding the dimensionality of the generic op when the reshape is expanding (folding). This approach conflicts with the linearization approach, so the expansion method is used instead of the linearization method. This also includes further refactoring that changes the fusion on tensors to be a collection of patterns.

Differential Revision: https://reviews.llvm.org/D89002
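A rough sketch of the expansion direction in Python (a hypothetical helper, not the actual Linalg implementation): the generic op's iteration space is expanded according to the reshape's reassociation, rather than its indexing maps being linearized, which is why dynamic sizes can pass through.

```python
# Hypothetical helper, not the actual Linalg implementation: expand a
# generic op's iteration-space shape through a reshape's reassociation.
# Original dim i maps to the group reassociation[i] of expanded dims.
def expand_iteration_space(shape, reassociation, expanded_shape):
    out = []
    for dim, group in zip(shape, reassociation):
        if len(group) == 1:
            out.append(dim)  # dimension untouched by the reshape
        else:
            out.extend(expanded_shape[g] for g in group)
    return out

# A 2-D space (6, 4) whose second dim the reshape expands into (2, 2)
# becomes the 3-D space (6, 2, 2); no linearized indexing map needed.
assert expand_iteration_space([6, 4], [[0], [1, 2]], [6, 2, 2]) == [6, 2, 2]
```

Because each original dimension is replaced by a group of dimensions rather than folded into a linear expression, later transformations like tile + fuse still see plain dimension identifiers.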
-
Sean Silva authored
Part of the refactor discussed in: https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17 Differential Revision: https://reviews.llvm.org/D89271
-
Sean Silva authored
Part of the refactor discussed in: https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17 Differential Revision: https://reviews.llvm.org/D89261
-
Nicolas Vasilache authored
-
Nicolas Vasilache authored
This revision adds a programmable codegen strategy from linalg based on staged rewrite patterns. Testing is exercised on a simple linalg.matmul op. Differential Revision: https://reviews.llvm.org/D89374
-
- Oct 13, 2020
-
Alberto Magni authored
Update linalg-to-loops lowering for pooling operations to perform padding of the input when specified by the corresponding attribute. Reviewed By: hanchung Differential Revision: https://reviews.llvm.org/D88911
-
Nicolas Vasilache authored
TensorConstantOp bufferization currently uses the vector dialect to store constant data into memory. Due to natural vector size and alignment properties, this is problematic with n>1-D vectors whose most minor dimension is not naturally aligned. Instead, this revision linearizes the constant and introduces a linalg.reshape to go back to the desired shape. Still, this is to be considered a workaround, and a better longer-term solution will probably involve `llvm.global`.

Differential Revision: https://reviews.llvm.org/D89311
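The workaround can be illustrated with a small Python model (hypothetical helpers, not the actual lowering): the constant is materialized linearized, and a reshape restores the original shape, so no n-D vector with a misaligned minor dimension is ever stored.

```python
# Illustrative model of the workaround: instead of storing the constant
# as an n-D vector (whose most minor dimension may not be naturally
# aligned), materialize it linearized (1-D) and reshape back afterwards.
def linearize(rows):
    return [x for row in rows for x in row]

def delinearize(flat, num_cols):
    return [flat[i:i + num_cols] for i in range(0, len(flat), num_cols)]

cst = [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]   # the 2x3 constant
flat = linearize(cst)                       # what gets stored in memory
restored = delinearize(flat, 3)             # the linalg.reshape back to 2x3
assert restored == cst
```

The round trip loses nothing, which is what makes the linearize-then-reshape sequence a safe stand-in until an `llvm.global`-based solution lands.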
-
- Oct 12, 2020
-
Nicolas Vasilache authored
This revision introduces support for buffer allocation for any named linalg op. To avoid template-instantiating many ops, a new ConversionPattern is created to capture the LinalgOp interface. Some APIs are updated to remain consistent with MLIR style:
- `OwningRewritePatternList * -> OwningRewritePatternList &`
- `BufferAssignmentTypeConverter * -> BufferAssignmentTypeConverter &`

Differential Revision: https://reviews.llvm.org/D89226
-
Alexander Belyaev authored
The buffer placement preparation tests in test/Transforms/buffer-placement-preparation* are using Linalg as a test dialect, which leads to confusion and "copy-pasta": Linalg is being extended now, and when TensorsToBuffers.cpp is changed, TestBufferPlacement is only sometimes kept in sync, which should not be the case. This has led to an unnoticed bug, because the tests were in a different directory and the patterns were slightly off.

Differential Revision: https://reviews.llvm.org/D89209
-
- Oct 10, 2020
-
Sean Silva authored
Context: https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/14 Differential Revision: https://reviews.llvm.org/D89174
-
- Oct 09, 2020
-
Nicolas Vasilache authored
This revision belongs to a series of patches that reduce reliance of Linalg transformations on templated rewrite and conversion patterns. Instead, this uses a MatchAnyTag pattern for the vast majority of cases and dispatches internally. Differential revision: https://reviews.llvm.org/D89133
-
- Oct 08, 2020
-
Nicolas Vasilache authored
This revision also inserts an end-to-end test that lowers tensors to buffers all the way to executable code on CPU. Differential revision: https://reviews.llvm.org/D88998
-
Alexander Belyaev authored
The simplest case is when the indexing maps are DimIds in every component. This covers cwise ops. Also:
* Expose populateConvertLinalgOnTensorsToBuffersPatterns in Transforms.h
* Expose emitLoopRanges in Transforms.h

Differential Revision: https://reviews.llvm.org/D88781
-
- Oct 07, 2020
-
Ahmed S. Taei authored
Differential Revision: https://reviews.llvm.org/D88869
-
- Oct 06, 2020
-
Nicolas Vasilache authored
This revision implements tiling on tensors as described in: https://llvm.discourse.group/t/an-update-on-linalg-on-tensors/1878/4 Differential revision: https://reviews.llvm.org/D88733
-
Nicolas Vasilache authored
This revision adds init_tensors support to buffer allocation for Linalg on tensors. Currently makes the assumption that the init_tensors fold onto the first output tensors. This assumption is not currently enforced or cast in stone and requires experimenting with tiling linalg on tensors for ops **without reductions**. Still this allows progress towards the end-to-end goal.
-
- Oct 02, 2020
-
Nicolas Vasilache authored
This revision introduces a `subtensor` op, which is the counterpart of `subview` for a tensor operand. This also refactors the relevant pieces to allow reusing the `subview` implementation where appropriate. This operation will be used to implement tiling for Linalg on tensors.
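The intended semantics can be approximated in plain Python (a hedged analogy, not the op's definition): offsets, sizes, and strides select a rectangular slice of an immutable tensor value, producing a new value rather than a view, which is the key difference from `subview` on memrefs.

```python
# Hedged analogy, not the op's definition: offsets, sizes, and strides
# select a rectangular slice of an immutable tensor value, producing a
# new value rather than an aliasing view (unlike subview on memrefs).
def subtensor(t, offsets, sizes, strides):
    (ro, co), (rs, cs), (rstr, cstr) = offsets, sizes, strides
    return [[t[ro + i * rstr][co + j * cstr] for j in range(cs)]
            for i in range(rs)]

t = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 tensor, 0..15
assert subtensor(t, offsets=(1, 0), sizes=(2, 2), strides=(1, 2)) == \
    [[4, 6], [8, 10]]
```

Tiling for Linalg on tensors then produces such slices per tile, mirroring how tiling on buffers produces subviews.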
-
- Oct 01, 2020
-
MaheshRavishankar authored
Differential Revision: https://reviews.llvm.org/D88633
-
Geoffrey Martin-Noble authored
Dialects include more than just ops, so this suffix is outdated. Follows discussion in https://llvm.discourse.group/t/rfc-canonical-file-paths-to-dialects/621 Reviewed By: stellaraccident Differential Revision: https://reviews.llvm.org/D88530
-
- Sep 30, 2020
-
MaheshRavishankar authored
The pattern is structured similarly to other patterns like LinalgTilingPattern. The fusion pattern takes options that allow you to fuse with producers of multiple operands at once.
- The pattern fuses only at the level that is known to be legal, i.e. if a reduction loop in the consumer is tiled, then fusion should happen "before" this loop. Some refactoring of the fusion code is needed to fuse only where it is legal.
- Since the fusion on buffers uses the LinalgDependenceGraph, which is not mutable in place, the fusion pattern keeps the original operations in the IR, but they are tagged with a marker that can later be used to find the original operations.

This change also fixes an issue with tiling and distribution/interchange where, if the tile size of a loop was 0, it wasn't accounted for.

Differential Revision: https://reviews.llvm.org/D88435
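One plausible reading of the tile-size-0 corner case can be sketched as follows (a hypothetical helper, not the actual implementation): a tile size of 0 means the loop is not tiled, so interchange and distribution vectors must index only the loops that were actually tiled.

```python
# Hypothetical helper, not the actual implementation: a tile size of 0
# means "do not tile this loop", so the interchange vector permutes only
# the loops that were actually tiled.
def tiled_loop_order(tile_sizes, interchange):
    tiled = [i for i, size in enumerate(tile_sizes) if size != 0]
    return [tiled[i] for i in interchange]

# Loops 0 and 2 are tiled; loop 1 (tile size 0) is skipped entirely,
# so the interchange [1, 0] swaps loops 2 and 0, not 1 and 0.
assert tiled_loop_order([8, 0, 16], [1, 0]) == [2, 0]
```

Before the fix, treating the size-0 loop as tiled would shift the permutation onto the wrong loops.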
-
Mahesh Ravishankar authored
While folding reshapes that introduce unit extent dims, the logic to compute the reassociation maps can be generalized to handle some corner cases, for example, when the folded shape still has unit-extent dims but corresponds to folded unit-extent dims of the expanded shape.

Differential Revision: https://reviews.llvm.org/D88521
-
Jakub Lichman authored
The current setup for conv op vectorization does not enable the user to specify tile sizes or the dimensions to vectorize. In this commit we change that by adding tile sizes as pass arguments. Every dimension with a corresponding tile size > 1 is automatically vectorized.

Differential Revision: https://reviews.llvm.org/D88533
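The selection rule can be sketched in one line (illustrative only, not the pass's code):

```python
# Illustrative only: with tile sizes passed as pass arguments, every
# dimension whose tile size is greater than 1 is picked for
# vectorization; size-1 dimensions are left scalar.
def dims_to_vectorize(tile_sizes):
    return [i for i, size in enumerate(tile_sizes) if size > 1]

# e.g. tile sizes (1, 3, 3, 1) vectorize only the two middle dimensions.
assert dims_to_vectorize([1, 3, 3, 1]) == [1, 2]
```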
-
- Sep 29, 2020
-
Nicolas Vasilache authored
Manually-defined named ops do not currently support `init_tensors` or return values and may never support them. Add extra interface to the StructuredOpInterface so that we can still write op-agnostic transformations based on StructuredOpInterface. This is an NFC extension in preparation for tiling on tensors. Differential Revision: https://reviews.llvm.org/D88481
-
Nicolas Vasilache authored
This revision changes the signatures of the helper functions that Linalg uses to create loops so that they can also take iterArgs. iterArgs are asserted empty to ensure no functional change. This is a mechanical change in preparation for tiling linalg on tensors, to avoid polluting that implementation with an NFC change.

Differential Revision: https://reviews.llvm.org/D88480
-
- Sep 23, 2020
-
MaheshRavishankar authored
A sequence of two reshapes such that one of them is just adding unit extent dims can be folded to a single reshape. Differential Revision: https://reviews.llvm.org/D88057
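The folding is behavior-preserving because unit-extent dims carry no data; a small Python model (not the actual pattern) makes that concrete:

```python
# Small model, not the actual pattern: a "reshape" over a row-major
# element list only changes the bookkeeping shape, so inserting and then
# removing unit-extent dims is a no-op on the data.
def reshape_flat(flat, shape):
    n = 1
    for d in shape:
        n *= d
    assert n == len(flat), "reshape must preserve the element count"
    return {"shape": tuple(shape), "elements": tuple(flat)}

x = list(range(12))
# A reshape adding unit dims followed by a reshape collapsing them...
two_step = reshape_flat(reshape_flat(x, [3, 1, 4, 1])["elements"], [12])
# ...yields the same result as the single folded reshape.
one_step = reshape_flat(x, [12])
assert two_step == one_step
```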
-
- Sep 22, 2020
-
Nicolas Vasilache authored
This revision allows representing a reduction at the level of linalg on tensors for generic ops by uniformizing with the named ops approach.
-
- Sep 17, 2020
-
Jakub Lichman authored
ConvOp vectorization currently supports only convolutions of static shapes with dimensions of size either 3 (vectorized) or 1 (not vectorized), as the underlying vectors have to be of static shape as well. In this commit we add support for convolutions of any size, as well as dynamic shapes, by leveraging the existing matmul infrastructure to tile both the input and the kernel to sizes accepted by the previous version of ConvOp vectorization. In the future this pass can be extended to take a "tiling mask" as user input, which will enable vectorization of user-specified dimensions.

Differential Revision: https://reviews.llvm.org/D87676
-
- Sep 11, 2020
-
MaheshRavishankar authored
The LinalgTilingPattern class, derived from the base, deletes the original operation. This precludes the use case where more transformations are necessary on the original operation after tiling. In such cases the pattern can derive from LinalgBaseTilingPattern instead of LinalgTilingPattern.

Differential Revision: https://reviews.llvm.org/D87308
-
- Sep 10, 2020
-
Eugene Burmako authored
This patch adds a new named structured op to accompany linalg.matmul and linalg.matvec. We needed it for our codegen, so I figured it would be useful to add it to Linalg. Reviewed By: nicolasvasilache, mravishankar Differential Revision: https://reviews.llvm.org/D87292
-
Jakub Lichman authored
This commit addresses comments that were requested on D86619 after it was landed. Differential Revision: https://reviews.llvm.org/D87354
-