- Feb 08, 2021
Tres Popp authored
Now the context is the first, rather than the last, input. This better matches the rest of the infrastructure and makes it easier to move these types to being declaratively specified.

Differential Revision: https://reviews.llvm.org/D96111
-
- Feb 05, 2021
Nicolas Vasilache authored
-
Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D96116
-
Nicolas Vasilache authored
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D96094
-
River Riddle authored
This makes ignoring a result explicit for the user and helps prevent accidental errors from dropped results. Marking LogicalResult as `nodiscard` was always the intention from the beginning, but got lost along the way.

Differential Revision: https://reviews.llvm.org/D95841
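The pattern described here can be sketched outside of MLIR with C++17's `[[nodiscard]]` attribute; the type and helper names below are illustrative stand-ins, not MLIR's actual definitions.

```cpp
#include <cassert>

// Minimal sketch of a LogicalResult-style type marked [[nodiscard]]:
// callers must consume the result or discard it explicitly with (void).
struct [[nodiscard]] LogicalResult {
  bool isSuccess;
  static LogicalResult success() { return {true}; }
  static LogicalResult failure() { return {false}; }
};

inline bool succeeded(LogicalResult r) { return r.isSuccess; }
inline bool failed(LogicalResult r) { return !r.isSuccess; }

// With [[nodiscard]], calling this and silently ignoring the result
// triggers a compiler warning, surfacing dropped results at build time.
inline LogicalResult verifySomething(bool ok) {
  return ok ? LogicalResult::success() : LogicalResult::failure();
}
```

Wrapping a call in `(void)verifySomething(...)` is then the explicit opt-out the commit message refers to.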
-
- Feb 04, 2021
Mehdi Amini authored
-
Nicolas Vasilache authored
This revision takes advantage of recent extensions to vectorization to refactor contraction detection into a bona fide Linalg interface. The mlir-linalg-ods-gen parser is extended to support adding such interfaces. The detection that originally enabled vectorization is refactored to serve both as a test on a generic LinalgOp and as a verifier for ops that declare conformance to the interface.

This is plugged through Linalg transforms and strategies, but it quickly becomes evident that the complexity and rigidity of the C++ class-based templating does not pay for itself. Therefore, this revision changes the API for vectorization patterns to get rid of templates as much as possible. Variadic templates are relegated to the internals of LinalgTransformationFilter as much as possible and kept away from the user-facing APIs. Other patterns / transformations are expected to follow the same path and drop as much C++ templating as possible from their class definitions.

Differential Revision: https://reviews.llvm.org/D95973
-
Nicolas Vasilache authored
This op is subsumed by rank-reducing SubViewOp and has become useless.

Differential Revision: https://reviews.llvm.org/D95317
-
Nicolas Vasilache authored
This revision defines a Linalg contraction in general terms:
1. Has 2 input and 1 output shapes.
2. Has at least one reduction dimension.
3. Has only projected permutation indexing maps.
4. Its body computes `u5(u1(c) + u2(u3(a) * u4(b)))` on some field (AddOpType, MulOpType), where u1, u2, u3, u4 and u5 represent scalar unary operations that may change the type (e.g. for mixed-precision).

As a consequence, when vectorization of such an op occurs, the only special behavior is that the (unique) MulOpType is vectorized into a `vector.contract`. All other ops are handled in a generic fashion.

In the future, we may wish to allow more input arguments and elementwise and constant operations that do not involve the reduction dimension(s).

A test is added to demonstrate the proper vectorization of matmul_i8_i8_i32.

Differential Revision: https://reviews.llvm.org/D95939
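The mixed-precision body in point 4 can be illustrated with a small standalone sketch for the i8 x i8 -> i32 case exercised by matmul_i8_i8_i32; the function name is hypothetical, not part of Linalg.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the contraction body u5(u1(c) + u2(u3(a) * u4(b))) for the
// mixed-precision i8 x i8 -> i32 case: u3 and u4 are sign-extensions
// (type-changing unary ops), while u1, u2 and u5 are the identity.
inline int32_t contractionBody_i8_i8_i32(int8_t a, int8_t b, int32_t c) {
  int32_t ea = static_cast<int32_t>(a); // u3: sext i8 -> i32
  int32_t eb = static_cast<int32_t>(b); // u4: sext i8 -> i32
  return c + ea * eb;                   // u1, u2, u5: identity here
}
```

In the vectorized form, only the multiply becomes a `vector.contract`; the extensions remain ordinary elementwise ops.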
-
Nicolas Vasilache authored
This separation improves the layering and paves the way for more interfaces coming up in the future.

Differential Revision: https://reviews.llvm.org/D95941
-
- Feb 02, 2021
Benjamin Kramer authored
-
Nicolas Vasilache authored
This revision unifies Linalg vectorization and paves the way for vectorization of Linalg ops with mixed-precision operations. The new algorithm traverses the ops in the linalg block in order and avoids recursion. It uses a BlockAndValueMapping to keep track of vectorized operations.

The revision makes the following modifications but is otherwise NFC:
1. vector.transfer_read ops are created eagerly and may appear in a different order than the original order.
2. A more progressive vectorization to vector.contract results in only the multiply operation being converted to `vector.contract %a, %b, %zero`, where `%zero` is a constant of the proper type. Later vector canonicalizations are assumed to rewrite `vector.contract %a, %b, %zero` plus an add into a proper accumulate form.

Differential Revision: https://reviews.llvm.org/D95797
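A scalar model may help illustrate the assumed canonicalization: contracting into a zero accumulator and then adding the original accumulator is equivalent to contracting directly into that accumulator. The function names below are illustrative, not MLIR APIs.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Scalar model of a 1-D vector.contract %a, %b, %acc (dot-product form):
// result = acc + sum over i of a[i] * b[i].
inline float contract(const std::vector<float> &a,
                      const std::vector<float> &b, float acc) {
  for (size_t i = 0; i < a.size(); ++i)
    acc += a[i] * b[i];
  return acc;
}

// The progressive vectorization first emits a contract with a zero
// accumulator plus a separate add of the original accumulator; a later
// canonicalization is expected to fold the pair into contract(a, b, acc).
inline float progressiveForm(const std::vector<float> &a,
                             const std::vector<float> &b, float acc) {
  return contract(a, b, /*zero=*/0.0f) + acc;
}
```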
-
MaheshRavishankar authored
Add printer and parser hooks for a custom directive that allows parsing and printing of idioms representing a list of values, each of which is either an integer or an SSA value. For example, in `subview %source[%offset_0, 1] [4, %size_1] [%stride_0, 3]` each list (representing offsets, sizes and strides) is a mix of statically known integer values and dynamically computed SSA values. Since this idiom is used in many places, adding a custom directive to parse/print it allows using the assembly format on operations that use it.

Differential Revision: https://reviews.llvm.org/D95773
-
- Feb 01, 2021
Hanhan Wang authored
This is the last revision of migrating SimplePadOp uses to PadTensorOp; SimplePadOp is removed in this patch. SliceAnalysis is updated a bit because PadTensorOp takes a region, unlike SimplePadOp; this is not covered by LinalgOp because it is not a structured op. Also, remove a duplicated comment from the cpp file, which is already present in a header file, and update the pseudo-mlir in the comment.

This is the same as D95615 but fixes one dep in CMakeLists.txt. Unlike D95671, the fix is applied to the run target.

Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D95785
-
Hanhan Wang authored
This is the last revision of migrating SimplePadOp uses to PadTensorOp; SimplePadOp is removed in this patch. SliceAnalysis is updated a bit because PadTensorOp takes a region, unlike SimplePadOp; this is not covered by LinalgOp because it is not a structured op. Also, remove a duplicated comment from the cpp file, which is already present in a header file, and update the pseudo-mlir in the comment.

Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D95671
-
- Jan 28, 2021
MaheshRavishankar authored
Differential Revision: https://reviews.llvm.org/D94531
-
Hanhan Wang authored
This reverts commit 1e790b74.

Differential Revision: https://reviews.llvm.org/D95636
-
Hanhan Wang authored
This is the last revision of migrating SimplePadOp uses to PadTensorOp; SimplePadOp is removed in this patch. SliceAnalysis is updated a bit because PadTensorOp takes a region, unlike SimplePadOp; this is not covered by LinalgOp because it is not a structured op. Also, remove a duplicated comment from the cpp file, which is already present in a header file, and update the pseudo-mlir in the comment.

Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D95615
-
Hanhan Wang authored
This revision creates a build method for PadTensorOp that can be mapped to SimplePadOp. The verifier is updated to accept a static custom result type, which has the same semantics as SimplePadOp.

Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D95555
-
Nicolas Vasilache authored
This revision adds a layer of SFINAE to the composable codegen strategy so it does not have to require statically defined ops but can instead also be used with OpInterfaces, Operation* and an op name string.

A linalg.matmul_i8_i8_i32 is added to the .tc spec to demonstrate how all this works end to end.

Differential Revision: https://reviews.llvm.org/D95600
-
Nicolas Vasilache authored
This revision improves the usability of the codegen strategy by adding a few flags that make it easier to control from the CLI. Usage of ModuleOp is replaced by FuncOp, as ModuleOp created issues in multi-threaded mode.

A simple benchmarking capability is added for linalg.matmul as well as linalg.matmul_column_major; the latter op is also added to Linalg. Now-obsolete Linalg integration tests, which also took too long, are deleted.

Correctness checks are still missing at this point.

Differential Revision: https://reviews.llvm.org/D95531
-
- Jan 27, 2021
Nicolas Vasilache authored
OffsetSizeAndStrideOpInterface ops now have the ability to specify only a leading subset of offset, size and stride operands/attributes. The size of that leading subset must be limited by the corresponding entry in `getArrayAttrMaxRanks` to avoid overflows. Missing trailing dimensions are assumed to span the whole range (i.e. [0 .. dim)).

This brings more natural semantics to slice-like ops on top of subview and simplifies removing all uses of SliceOp in dependent projects.

Differential Revision: https://reviews.llvm.org/D95441
-
MaheshRavishankar authored
Differential Revision: https://reviews.llvm.org/D95305
-
- Jan 26, 2021
Alex Zinenko authored
-
- Jan 25, 2021
Nicolas Vasilache authored
This revision starts evolving the APIs for manipulating ops with offsets, sizes and strides towards a ValueOrAttr abstraction that is already used in folding under the name OpFoldResult. The objective, in the future, is to allow such manipulations all the way down to the level of ODS, to avoid all the genuflections involved in distinguishing between values and attributes for generic constant foldings. Once this evolution is accepted, the next step will be a mechanical OpFoldResult -> ValueOrAttr renaming.

Differential Revision: https://reviews.llvm.org/D95310
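The value-or-attribute idea can be sketched with `std::variant` (MLIR's actual OpFoldResult is a pointer union of Attribute and Value); all names below are illustrative, not MLIR's API.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <variant>

// Sketch of a ValueOrAttr abstraction: an operand such as an offset,
// size or stride is either a compile-time constant (attribute) or a
// runtime SSA value, and client code branches on which it holds.
struct Attr { int64_t constant; };     // static: known integer
struct Value { std::string ssaName; }; // dynamic: SSA value

using ValueOrAttr = std::variant<Attr, Value>;

inline bool isStatic(const ValueOrAttr &v) {
  return std::holds_alternative<Attr>(v);
}
```

Folding code can then treat both cases uniformly instead of threading separate value and attribute lists through every API.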
-
Nicolas Vasilache authored
-
Nicolas Vasilache authored
This revision addresses a remaining comment that was overlooked in https://reviews.llvm.org/D95243: the pad hoisting transformation now additionally bails out on side-effecting ops other than LoopLikeOps.
-
Nicolas Vasilache authored
This transformation anchors on a padding op whose result is only used as an input to a Linalg op, and pulls it out of a given number of loops. The result is a packing of padded tiles of ops that is amortized just before the outermost loop from which the pad operation is hoisted.

Differential Revision: https://reviews.llvm.org/D95243
-
Nicolas Vasilache authored
This revision allows the base Linalg tiling pattern to optionally require padding to a constant bounding shape. When requested, a simple analysis is performed, similar to buffer promotion.

A temporary `linalg.simple_pad` op is added to model padding for the purpose of connecting the dots. This will be replaced by a more fleshed out `linalg.pad_tensor` op when it is available. In the meantime, this temporary op serves the purpose of exhibiting the necessary properties required from a more fleshed out pad op, to compose properly with transformations.

Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D95149
-
- Jan 22, 2021
MaheshRavishankar authored
Depends on D95109
-
MaheshRavishankar authored
Fusion of generic/indexed_generic operations with tensor_reshape by expansion is disabled when the latter just adds/removes unit dimensions, since that would only add unit-trip-count loops.

Differential Revision: https://reviews.llvm.org/D94626
-
MaheshRavishankar authored
Differential Revision: https://reviews.llvm.org/D93086
-
MaheshRavishankar authored
Allow representing a dependence from a producer result to a consumer. With Linalg on tensors, the dependence between operations can be from the result of the producer to the consumer. This change is an NFC refactoring of LinalgDependenceGraphElem to allow representing both OpResult and OpOperand*.

Differential Revision: https://reviews.llvm.org/D95208
-
Hanhan Wang authored
`linalg.pad_tensor` is an operation that pads the `source` tensor with given `low` and `high` padding config.

Example 1:
```mlir
%pad_value = ... : f32
%1 = linalg.pad_tensor %0 low[1, 2] high[2, 3] {
  ^bb0(%arg0 : index, %arg1 : index):
    linalg.yield %pad_value : f32
} : tensor<?x?xf32> to tensor<?x?xf32>
```

Example 2:
```mlir
%pad_value = ... : f32
%1 = linalg.pad_tensor %arg0 low[2, %arg1, 3, 3] high[3, 3, %arg1, 2] {
  ^bb0(%arg2: index, %arg3: index, %arg4: index, %arg5: index):
    linalg.yield %pad_value : f32
} : tensor<1x2x2x?xf32> to tensor<6x?x?x?xf32>
```

Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D93704
-
- Jan 20, 2021
Nicolas Vasilache authored
This may simplify the composition of patterns but is otherwise NFC.
-
Nicolas Vasilache authored
-
Aart Bik authored
Use cases with 16- or even 8-bit pointer/index structures have been identified.

Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D95015
-
- Jan 19, 2021
Mehdi Amini authored
-
- Jan 16, 2021
Thomas Raoux authored
-