- Oct 27, 2020
River Riddle authored
This class represents a rewrite pattern list that has been frozen, and is thus immutable. This replaces the uses of OwningRewritePatternList in pattern-driver-related APIs, such as dialect conversion. When PDL becomes more prevalent, this API will allow for optimizing a set of patterns once, without the need to redo this on every run of a pass. Differential Revision: https://reviews.llvm.org/D89104
River Riddle authored
There are several pieces of pattern rewriting infra in IR/ that really shouldn't be there. This revision moves those pieces to a better location such that they are easier to evolve in the future (e.g. with PDL). More concretely, this revision does the following:
* Create a Transforms/GreedyPatternRewriteDriver.h and move the apply*AndFold methods there. The definitions for these methods are already in Transforms/, so it doesn't make sense for the declarations to be in IR.
* Create a new lib/Rewrite library and move PatternApplicator there. This new library will be focused on applying rewrites, and will also include compiling rewrites with PDL.
Differential Revision: https://reviews.llvm.org/D89103
MaheshRavishankar authored
Adds support for:
- Dropping unit dimension loops for indexed_generic ops.
- Folding consecutive folding (or expanding) reshapes when the result (or source) is a scalar.
- Fixes to indexed_generic -> generic fusion when zero-dim tensors are involved.
Differential Revision: https://reviews.llvm.org/D90118
- Oct 26, 2020
Nicolas Vasilache authored
This revision allows fusing the producer of input tensors into the consumer under a tiling transformation (which produces subtensors). Many pieces are still missing (e.g. support for init_tensors, better refactoring of the LinalgStructuredOp interface support, merging implementations and reusing code), but this still allows getting started. The greedy pass itself is just for testing purposes and will be extracted into a separate test pass. Differential revision: https://reviews.llvm.org/D89491
- Oct 20, 2020
Federico Lebrón authored
Differential Revision: https://reviews.llvm.org/D89825
- Oct 14, 2020
MaheshRavishankar authored
The current fusion on tensors fuses reshape ops with generic ops by linearizing the indexing maps of the fused tensor in the generic op. This has some limitations:
- It only works for static shapes.
- The resulting indexing map has a linearization that could potentially prevent fusion later on (e.g. tile + fuse).
Instead, try to fuse the reshape consumer (producer) with the generic op producer (consumer) by expanding the dimensionality of the generic op when the reshape is expanding (folding). Since this approach conflicts with the linearization approach, the expansion method is used instead of the linearization method. This change also includes further refactoring so that fusion on tensors becomes a collection of patterns. Differential Revision: https://reviews.llvm.org/D89002
Sean Silva authored
Part of the refactor discussed in: https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17 Differential Revision: https://reviews.llvm.org/D89271
Sean Silva authored
Part of the refactor discussed in: https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17 Differential Revision: https://reviews.llvm.org/D89261
Nicolas Vasilache authored
Nicolas Vasilache authored
This revision adds a programmable codegen strategy from linalg based on staged rewrite patterns. Testing is exercised on a simple linalg.matmul op. Differential Revision: https://reviews.llvm.org/D89374
- Oct 13, 2020
Alberto Magni authored
Update linalg-to-loops lowering for pooling operations to perform padding of the input when specified by the corresponding attribute. Reviewed By: hanchung Differential Revision: https://reviews.llvm.org/D88911
Nicolas Vasilache authored
TensorConstantOp bufferization currently uses the vector dialect to store constant data into memory. Due to natural vector size and alignment properties, this is problematic with n>1-D vectors whose most minor dimension is not naturally aligned. Instead, this revision linearizes the constant and introduces a linalg.reshape to go back to the desired shape. This is still to be considered a workaround, and a better longer-term solution will probably involve `llvm.global`. Differential Revision: https://reviews.llvm.org/D89311
- Oct 12, 2020
Nicolas Vasilache authored
This revision reduces the number of places that specific information needs to be modified when adding new named Linalg ops. Differential Revision: https://reviews.llvm.org/D89223
Nicolas Vasilache authored
This revision introduces support for buffer allocation for any named linalg op. To avoid template instantiations for many ops, a new ConversionPattern is created to capture the LinalgOp interface. Some APIs are updated to remain consistent with MLIR style:
`OwningRewritePatternList * -> OwningRewritePatternList &`
`BufferAssignmentTypeConverter * -> BufferAssignmentTypeConverter &`
Differential revision: https://reviews.llvm.org/D89226
Alexander Belyaev authored
The buffer placement preparation tests in test/Transforms/buffer-placement-preparation* are using Linalg as a test dialect, which leads to confusion and "copy-pasta": Linalg is being extended, and when TensorsToBuffers.cpp is changed, TestBufferPlacement is only sometimes kept in sync, which should not be the case. This led to an unnoticed bug, because the tests were in a different directory and the patterns were slightly off. Differential Revision: https://reviews.llvm.org/D89209
- Oct 10, 2020
Sean Silva authored
Context: https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/14 Differential Revision: https://reviews.llvm.org/D89174
- Oct 09, 2020
Nicolas Vasilache authored
This revision belongs to a series of patches that reduce reliance of Linalg transformations on templated rewrite and conversion patterns. Instead, this uses a MatchAnyTag pattern for the vast majority of cases and dispatches internally. Differential revision: https://reviews.llvm.org/D89133
- Oct 08, 2020
MaheshRavishankar authored
The methods allow checking:
- whether an operation has dependencies,
- whether there is a dependence from one operation to another.
Differential Revision: https://reviews.llvm.org/D88993
Nicolas Vasilache authored
This revision also inserts an end-to-end test that lowers tensors to buffers all the way to executable code on CPU. Differential revision: https://reviews.llvm.org/D88998
Alexander Belyaev authored
The simplest case is when the indexing maps are DimIds in every component. This covers cwise ops. Also:
* Expose populateConvertLinalgOnTensorsToBuffersPatterns in Transforms.h
* Expose emitLoopRanges in Transforms.h
Differential Revision: https://reviews.llvm.org/D88781
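As a rough sketch of this simplest case (op names, shapes, and values are hypothetical, and the exact linalg.generic syntax of this period may differ slightly), an elementwise add on tensors where every indexing map is the identity:
```
#id = affine_map<(d0, d1) -> (d0, d1)>
// Every indexing map is built purely from DimIds, so each loop maps 1-1
// onto a tensor dimension: the simplest case for tensors-to-buffers.
%sum = linalg.generic {indexing_maps = [#id, #id, #id],
                       iterator_types = ["parallel", "parallel"]}
    ins(%lhs, %rhs : tensor<4x8xf32>, tensor<4x8xf32>) {
^bb0(%l: f32, %r: f32):
  %s = addf %l, %r : f32
  linalg.yield %s : f32
} -> tensor<4x8xf32>
```
Converting such an op to buffers then amounts to allocating a memref for the result and rewriting the generic op to write into it.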
- Oct 07, 2020
Ahmed S. Taei authored
Differential Revision: https://reviews.llvm.org/D88869
- Oct 06, 2020
Nicolas Vasilache authored
This revision implements tiling on tensors as described in: https://llvm.discourse.group/t/an-update-on-linalg-on-tensors/1878/4 Differential revision: https://reviews.llvm.org/D88733
Nicolas Vasilache authored
This revision adds init_tensors support to buffer allocation for Linalg on tensors. It currently makes the assumption that the init_tensors fold onto the first output tensors. This assumption is not currently enforced or cast in stone, and requires experimenting with tiling linalg on tensors for ops **without reductions**. Still, this allows progress towards the end-to-end goal.
Nicolas Vasilache authored
A verification check on the number of indexing maps seems to have been dropped inadvertently. Also update the relevant roundtrip tests.
- Oct 05, 2020
Nicolas Vasilache authored
This canonicalization is the counterpart of MemRefCastOp -> LinalgOp, but on tensors. It is needed to properly canonicalize the IR after tiling Linalg on tensors. Differential Revision: https://reviews.llvm.org/D88729
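For intuition, a hypothetical before/after sketch (operand names and shapes invented):
```
// Before: the cast erases static shape information on the way into the op.
%0 = tensor_cast %a : tensor<4x16xf32> to tensor<?x?xf32>
%1 = linalg.matmul ins(%0, %b : tensor<?x?xf32>, tensor<?x?xf32>)
                  init(%c : tensor<?x?xf32>) -> tensor<?x?xf32>
// After canonicalization: the matmul consumes %a directly, regaining the
// static 4x16 shape on that operand, and the tensor_cast becomes dead.
```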
Benjamin Kramer authored
While affine maps are part of the builtin memref type, there is very limited support for manipulating them in the standard dialect. Add transpose to the set of ops to complement the existing view/subview ops. This is a metadata transformation that encodes the transpose into the strides of a memref. I'm planning to use this when lowering operations on strided memrefs, using the transpose to remove the stride without adding a dependency on the linalg dialect. Differential Revision: https://reviews.llvm.org/D88651
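For illustration, a sketch of such a metadata-only transpose (value names hypothetical; see the revision for the authoritative syntax):
```
// Swap the two dimensions of a dynamic memref. Only the strided layout
// in the result type changes; no data is moved.
%t = transpose %m (i, j) -> (j, i)
    : memref<?x?xf32> to memref<?x?xf32, affine_map<(d0, d1)[s0] -> (d1 * s0 + d0)>>
```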
- Oct 02, 2020
Nicolas Vasilache authored
This revision introduces a `subtensor` op, which is the counterpart of `subview` for a tensor operand. This also refactors the relevant pieces to allow reusing the `subview` implementation where appropriate. This operation will be used to implement tiling for Linalg on tensors.
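A rough sketch of the intended usage (offsets, sizes, and value names hypothetical), mirroring `subview`'s offset/size/stride syntax:
```
// Extract a dynamically-sized subtensor with unit strides: the tensor
// counterpart of taking a subview of a memref.
%st = subtensor %t[%i, %j] [%m, %n] [1, 1]
    : tensor<8x16xf32> to tensor<?x?xf32>
```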
- Oct 01, 2020
MaheshRavishankar authored
Differential Revision: https://reviews.llvm.org/D88633
Geoffrey Martin-Noble authored
Dialects include more than just ops, so this suffix is outdated. Follows discussion in https://llvm.discourse.group/t/rfc-canonical-file-paths-to-dialects/621 Reviewed By: stellaraccident Differential Revision: https://reviews.llvm.org/D88530
- Sep 30, 2020
MaheshRavishankar authored
The pattern is structured similarly to other patterns like LinalgTilingPattern. The fusion pattern takes options that allow fusing with the producers of multiple operands at once.
- The pattern fuses only at the level that is known to be legal, i.e. if a reduction loop in the consumer is tiled, then fusion should happen "before" this loop. Some refactoring of the fusion code is needed to fuse only where it is legal.
- Since fusion on buffers uses the LinalgDependenceGraph, which is not mutable in place, the fusion pattern keeps the original operations in the IR but tags them with a marker that can later be used to find the original operations.
This change also fixes an issue with tiling and distribution/interchange where a tile size of 0 for a loop was not accounted for. Differential Revision: https://reviews.llvm.org/D88435
Mahesh Ravishankar authored
While folding tensor_reshape ops that introduce unit extent dims, the logic to compute the reassociation maps can be generalized to handle some corner cases, for example, when the folded shape still has unit-extent dims that correspond to folded unit extent dims of the expanded shape. Differential Revision: https://reviews.llvm.org/D88521
Jakub Lichman authored
The current setup for conv op vectorization does not let the user specify tile sizes or the dimensions to vectorize. This commit changes that by adding tile sizes as pass arguments. Every dimension with a corresponding tile size > 1 is automatically vectorized. Differential Revision: https://reviews.llvm.org/D88533
- Sep 29, 2020
Nicolas Vasilache authored
Manually-defined named ops do not currently support `init_tensors` or return values and may never support them. Add an extra interface to StructuredOpInterface so that we can still write op-agnostic transformations based on it. This is an NFC extension in preparation for tiling on tensors. Differential Revision: https://reviews.llvm.org/D88481
Nicolas Vasilache authored
This revision changes the signatures of the helper functions that Linalg uses to create loops so that they can also take iterArgs. iterArgs are asserted empty to ensure no functional change. This is a mechanical change in preparation for tiling linalg on tensors, to avoid polluting that implementation with an NFC change. Differential Revision: https://reviews.llvm.org/D88480
- Sep 23, 2020
Rahul Joshi authored
- Use TypeRange instead of ArrayRef<Type> where possible.
- Change some of the custom builders to also use TypeRange.
Differential Revision: https://reviews.llvm.org/D87944
MaheshRavishankar authored
A sequence of two reshapes, where one of them only adds unit extent dims, can be folded into a single reshape. Differential Revision: https://reviews.llvm.org/D88057
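A sketch of the idea (shapes and reassociation maps hypothetical), using the affine-map reassociation form that tensor_reshape took at the time:
```
// %0 only adds a leading unit dim; %1 then collapses everything.
%0 = linalg.tensor_reshape %t
    [affine_map<(d0, d1, d2) -> (d0, d1)>, affine_map<(d0, d1, d2) -> (d2)>]
    : tensor<4x8xf32> into tensor<1x4x8xf32>
%1 = linalg.tensor_reshape %0
    [affine_map<(d0, d1, d2) -> (d0, d1, d2)>]
    : tensor<1x4x8xf32> into tensor<32xf32>
// The pair folds into a single reshape:
%r = linalg.tensor_reshape %t
    [affine_map<(d0, d1) -> (d0, d1)>]
    : tensor<4x8xf32> into tensor<32xf32>
```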
- Sep 22, 2020
Frederik Gossen authored
The assertion falsely expected only ranked memrefs. Now both ranked and unranked memrefs are allowed. Differential Revision: https://reviews.llvm.org/D88080
Nicolas Vasilache authored
This revision allows representing a reduction at the level of linalg on tensors for generic ops by uniformizing with the named ops approach.
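As a hypothetical sketch (names and shapes invented, following the `init` convention documented for named ops in the Sep 18 entry below), a row-sum reduction at the tensor level could look like:
```
#map_in  = affine_map<(d0, d1) -> (d0, d1)>
#map_out = affine_map<(d0, d1) -> (d0)>
// d1 is a reduction dimension; the result starts from the init tensor %acc,
// which the region receives as its last argument.
%res = linalg.generic {indexing_maps = [#map_in, #map_out],
                       iterator_types = ["parallel", "reduction"]}
    ins(%in : tensor<4x8xf32>)
    init(%acc : tensor<4xf32>) {
^bb0(%x: f32, %a: f32):
  %s = addf %x, %a : f32
  linalg.yield %s : f32
} -> tensor<4xf32>
```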
- Sep 18, 2020
Nicolas Vasilache authored
This revision allows representing a reduction at the level of linalg on tensors for named ops. When a structured op has a reduction and returns tensor(s), new conventions are added and documented. As an illustration, the syntax for a `linalg.matmul` writing into a buffer is:
```
linalg.matmul ins(%a, %b : memref<?x?xf32>, tensor<?x?xf32>)
              outs(%c : memref<?x?xf32>)
```
whereas the syntax for a `linalg.matmul` returning a new tensor is:
```
%d = linalg.matmul ins(%a, %b : tensor<?x?xf32>, memref<?x?xf32>)
                  init(%c : memref<?x?xf32>) -> tensor<?x?xf32>
```
Other parts of linalg will be extended accordingly to allow mixed buffer/tensor semantics in the presence of reductions.
- Sep 17, 2020
Jakub Lichman authored
ConvOp vectorization currently supports only convolutions of static shapes with dimensions of size either 3 (vectorized) or 1 (not vectorized), as the underlying vectors have to be of static shape as well. This commit adds support for convolutions of any size, as well as dynamic shapes, by leveraging the existing matmul infrastructure to tile both the input and the kernel to sizes accepted by the previous version of ConvOp vectorization. In the future this pass can be extended to take a "tiling mask" as user input, which will enable vectorization of user-specified dimensions. Differential Revision: https://reviews.llvm.org/D87676