- Dec 02, 2020
Christian Sigg authored
Given that OpState already implicitly converts to Operation*, this seems reasonable. The alternative would be to add more functions to OpState which forward to Operation. Reviewed By: rriddle, ftynse Differential Revision: https://reviews.llvm.org/D92266
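A minimal sketch of what this conversion enables: any generated op class (which derives from OpState) can be handed directly to helpers taking Operation*. The helper and the choice of scf::ForOp below are illustrative.
```
#include "mlir/Dialect/SCF/SCF.h"
#include "mlir/IR/Operation.h"
#include "llvm/Support/raw_ostream.h"

using namespace mlir;

// A generic helper that only needs the Operation* interface.
static void dumpOpName(Operation *op) {
  llvm::errs() << op->getName().getStringRef() << "\n";
}

static void example(scf::ForOp forOp) {
  // OpState's implicit conversion to Operation* lets a concrete op be passed
  // where an Operation* is expected, without calling .getOperation().
  dumpOpName(forOp);
}
```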
-
- Dec 01, 2020
Sean Silva authored
- Address TODO in scf-bufferize: the argument materialization issue is now fixed and the code is now in Transforms/Bufferize.cpp.
- Tighten up finalizing-bufferize to avoid creating invalid IR when operand types potentially change.
- Tidy up the testing of func-bufferize, and move appropriate tests to a new finalizing-bufferize.mlir.
- The new stricter checking in finalizing-bufferize revealed that we needed a DimOp conversion pattern (found when integrating into npcomp). Previously, the conversion infrastructure was blindly changing the operand type during finalization, which happened to work due to DimOp's tensor/memref polymorphism, but is generally not encouraged (the new pattern is the way to tell the conversion infrastructure that it is legal to change that type).
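A hedged sketch of what such a DimOp bufferization pattern might look like; the operand layout and header path are assumptions, not necessarily the upstream pattern.
```
#include "mlir/Dialect/StandardOps/IR/Ops.h"
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

namespace {
// Rebuild `dim` on the converted (memref) operand instead of letting the
// conversion infrastructure silently change the tensor operand's type.
struct BufferizeDimOpSketch : public OpConversionPattern<DimOp> {
  using OpConversionPattern<DimOp>::OpConversionPattern;

  LogicalResult
  matchAndRewrite(DimOp op, ArrayRef<Value> operands,
                  ConversionPatternRewriter &rewriter) const override {
    // operands[0] is the bufferized source, operands[1] the dimension index.
    rewriter.replaceOpWithNewOp<DimOp>(op, operands[0], operands[1]);
    return success();
  }
};
} // namespace
```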
-
- Nov 29, 2020
Jacques Pienaar authored
Adds an op with a mapping from ops to corresponding shape functions for those ops in the library, and a mechanism to associate shape functions to functions. The mapping of operand to shape function is kept separate from the shape functions themselves, as the operation is associated to the shape function and not vice versa, and one could have a common library of shape functions that can be used in different contexts. Use fully qualified names and require a name for shape fn lib ops for now, and an explicit print/parse (based around the generated one & the GPU module op ones). This commit reverts d9da4c3e. Fixes missing headers (don't know how that was working locally). Differential Revision: https://reviews.llvm.org/D91672
-
Mehdi Amini authored
This reverts commit 6dd9596b. Build is broken.
-
Jacques Pienaar authored
Adds an op with a mapping from ops to corresponding shape functions for those ops in the library, and a mechanism to associate shape functions to functions. The mapping of operand to shape function is kept separate from the shape functions themselves, as the operation is associated to the shape function and not vice versa, and one could have a common library of shape functions that can be used in different contexts. Use fully qualified names and require a name for shape fn lib ops for now, and an explicit print/parse (based around the generated one & the GPU module op ones). Differential Revision: https://reviews.llvm.org/D91672
-
- Nov 27, 2020
Frederik Gossen authored
Overcome the assumption that parallel loops are only nested in other parallel loops. Differential Revision: https://reviews.llvm.org/D92188
-
- Nov 26, 2020
Stephan Herhut authored
This enables partial bufferization that includes function signatures. To test this, this change also makes the func-bufferize pass partial and adds a dedicated finalizing-bufferize pass. Differential Revision: https://reviews.llvm.org/D92032
-
Aart Bik authored
This change gives sparse compiler clients more control over selecting individual types for the pointers and indices in the sparse storage schemes. Narrower width obviously results in smaller memory footprints, but the range should always suffice for the maximum number of entries or index value. Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D92126
-
Sean Silva authored
It still had the old name from before ElementwiseMappable was added.
-
- Nov 25, 2020
Frank Laub authored
Adding missing custom builders for AffineVectorLoadOp & AffineVectorStoreOp. In practice, it is difficult to correctly construct these ops without these builders (because the AffineMap is not included at construction time). Differential Revision: https://reviews.llvm.org/D86380
-
Aart Bik authored
This CL adds the ability to request different parallelization strategies for the generated code. Every "parallel" loop is a candidate, and is converted to a parallel op if it is an actual for-loop (not a while) and the strategy allows dense/sparse outer/inner parallelization. This will connect directly with the work of @ezhulenev on parallel loops. Still TBD: vectorization strategy. Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D91978
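A sketch of the kind of client-facing knob this adds; the enumerator names below are illustrative placeholders, not necessarily the actual option names.
```
// Illustrative only: how aggressively the sparse compiler may turn the
// loops it generates into parallel loops.
enum class ParallelizationStrategySketch {
  kNone,                // keep all generated loops sequential
  kDenseOuterLoop,      // parallelize only dense outer loops
  kAnyStorageOuterLoop, // parallelize outer loops over dense or sparse storage
  kDenseAnyLoop,        // parallelize any dense loop
  kAnyStorageAnyLoop    // parallelize any loop, dense or sparse
};
```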
-
- Nov 24, 2020
Aart Bik authored
Generalizes invariant handling to anything defined outside the Linalg op (parameters and SSA computations). Fixes a bug that was using the parameter number as the tensor number. Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D91985
-
Alex Zinenko authored
Introduce a conversion pass from SCF parallel loops to OpenMP dialect constructs - parallel region and workshare loop. Loops with reductions are not supported because the OpenMP dialect cannot model them yet. The conversion currently targets only one level of parallelism, i.e. only one top-level `omp.parallel` operation is produced even if there are nested `scf.parallel` operations that could be mapped to `omp.wsloop`. Nested parallelism support is left for future work. Reviewed By: kiranchandramohan Differential Revision: https://reviews.llvm.org/D91982
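A minimal usage sketch, assuming the conversion is exposed through the usual pass-creation idiom; the factory name and header path are assumptions.
```
#include "mlir/Conversion/SCFToOpenMP/SCFToOpenMP.h" // assumed header path
#include "mlir/Pass/PassManager.h"

using namespace mlir;

static void buildPipeline(PassManager &pm) {
  // Converts top-level scf.parallel loops into omp.parallel regions with
  // workshare loops; nested scf.parallel loops are left as-is for now.
  pm.addPass(createConvertSCFToOpenMPPass()); // assumed factory name
}
```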
-
Nicolas Vasilache authored
Print part of an op of the form:
```
<optional-offset-prefix>`[` offset-list `]`
<optional-size-prefix>`[` size-list `]`
<optional-stride-prefix>`[` stride-list `]`
```
Also address some leftover nits. Differential revision: https://reviews.llvm.org/D92031
-
Nicolas Vasilache authored
Parse the trailing part of an op of the form:
```
<optional-offset-prefix>`[` offset-list `]`
<optional-size-prefix>`[` size-list `]`
<optional-stride-prefix>`[` stride-list `]`
```
Each entry in the offset, size and stride list either resolves to an integer constant or an operand of index type. Constants are added to the `result` as named integer array attributes with name `OffsetSizeAndStrideOpInterface::getStaticOffsetsAttrName()` (resp. `getStaticSizesAttrName()`, `getStaticStridesAttrName()`). Append the number of offset, size and stride operands to `segmentSizes` before adding it to `result` as the named attribute `OpTrait::AttrSizedOperandSegments<void>::getOperandSegmentSizeAttr()`. Offset, size and stride operand resolution occurs after `preResolutionFn` to give a chance to leading operands to resolve first, after parsing the types.
```
ParseResult parseOffsetsSizesAndStrides(
    OpAsmParser &parser, OperationState &result, ArrayRef<int> segmentSizes,
    llvm::function_ref<ParseResult(OpAsmParser &, OperationState &)>
        preResolutionFn = nullptr,
    llvm::function_ref<ParseResult(OpAsmParser &)>
        parseOptionalOffsetPrefix = nullptr,
    llvm::function_ref<ParseResult(OpAsmParser &)>
        parseOptionalSizePrefix = nullptr,
    llvm::function_ref<ParseResult(OpAsmParser &)>
        parseOptionalStridePrefix = nullptr);
```
Differential revision: https://reviews.llvm.org/D92030
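A sketch of how a custom op parser might use this helper with one leading (source) operand; the op form, type handling, and trailing syntax are illustrative, not an actual upstream parser.
```
#include "mlir/IR/OpImplementation.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/STLExtras.h"

using namespace mlir;

// Declaration quoted from the commit above; the defining header is not shown.
ParseResult parseOffsetsSizesAndStrides(
    OpAsmParser &parser, OperationState &result, ArrayRef<int> segmentSizes,
    llvm::function_ref<ParseResult(OpAsmParser &, OperationState &)>
        preResolutionFn,
    llvm::function_ref<ParseResult(OpAsmParser &)> parseOptionalOffsetPrefix =
        nullptr,
    llvm::function_ref<ParseResult(OpAsmParser &)> parseOptionalSizePrefix =
        nullptr,
    llvm::function_ref<ParseResult(OpAsmParser &)> parseOptionalStridePrefix =
        nullptr);

// Hypothetical parser for: my.subview %src [offsets] [sizes] [strides] : type
static ParseResult parseMySubViewLikeOp(OpAsmParser &parser,
                                        OperationState &result) {
  OpAsmParser::OperandType srcInfo;
  Type srcType;
  if (parser.parseOperand(srcInfo))
    return failure();

  // Resolve the single leading operand once its type has been parsed; this
  // runs before the offset/size/stride operands are resolved.
  auto preResolutionFn = [&](OpAsmParser &p, OperationState &res) {
    return failure(p.parseColonType(srcType) ||
                   p.resolveOperand(srcInfo, srcType, res.operands));
  };

  // One leading operand (the source) precedes offsets, sizes and strides.
  if (parseOffsetsSizesAndStrides(parser, result, /*segmentSizes=*/{1},
                                  preResolutionFn))
    return failure();
  // Sketch only: reuse the source type as the result type.
  result.addTypes(srcType);
  return success();
}
```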
-
Tei Jeong authored
Reviewed By: liufengdb Differential Revision: https://reviews.llvm.org/D92034
-
Stella Laurenzo authored
* Was missed in the initial submission and is required for a ConstantLike op.
* Also adds a materializeConstant hook to preserve it.
* Tightens up the argument constraint on tosa.const to match what is actually legal.

Differential Revision: https://reviews.llvm.org/D92040
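The `materializeConstant` hook has a fixed signature on `Dialect`; a sketch of what the TOSA override might look like (the header path and the `tosa.const` builder arguments are assumptions):
```
#include "mlir/Dialect/Tosa/IR/TosaOps.h" // assumed header path

using namespace mlir;

// Sketch: lets the folding infrastructure re-create tosa.const ops from
// attributes produced by folds, so constants survive canonicalization.
Operation *tosa::TosaDialect::materializeConstant(OpBuilder &builder,
                                                  Attribute value, Type type,
                                                  Location loc) {
  // Only materialize attributes that tosa.const can legally hold.
  if (auto elements = value.dyn_cast<ElementsAttr>())
    return builder.create<tosa::ConstOp>(loc, type, elements);
  return nullptr;
}
```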
-
Nicolas Vasilache authored
This revision will make it easier to create new ops based on the strided memref abstraction outside of the std dialect.

OffsetSizeAndStrideOpInterface is an interface for ops that allow specifying mixed dynamic and static offsets, sizes and strides variadic operands. Ops that implement this interface need to expose the following methods:
1. `getArrayAttrRanks` to specify the length of static integer attributes.
2. `offsets`, `sizes` and `strides` variadic operands.
3. `static_offsets`, resp. `static_sizes` and `static_strides` integer array attributes.

The invariants of this interface are:
1. `static_offsets`, `static_sizes` and `static_strides` have length exactly `getArrayAttrRanks()`[0] (resp. [1], [2]).
2. `offsets`, `sizes` and `strides` each have length at most `getArrayAttrRanks()`[0] (resp. [1], [2]).
3. If an entry of `static_offsets` (resp. `static_sizes`, `static_strides`) is equal to a special sentinel value, namely `ShapedType::kDynamicStrideOrOffset` (resp. `ShapedType::kDynamicSize`, `ShapedType::kDynamicStrideOrOffset`), then the corresponding entry is a dynamic offset (resp. size, stride).
4. A variadic `offsets` (resp. `sizes`, `strides`) operand must be present for each dynamic offset (resp. size, stride).

This interface is useful to factor out common behavior and provide support for carrying or injecting static behavior through the use of the static attributes.

Differential Revision: https://reviews.llvm.org/D92011
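A small sketch of how a client might interpret the static attributes under this sentinel convention; the helper and the header hosting `ShapedType` are assumptions.
```
#include "mlir/IR/Attributes.h"
#include "mlir/IR/StandardTypes.h" // home of ShapedType at the time (assumed)
#include "llvm/Support/raw_ostream.h"

#include <cstdint>

using namespace mlir;

// Hypothetical helper: report, for each entry of `static_offsets`, whether
// the offset is static or carried by the next variadic `offsets` operand.
static void printOffsetKinds(ArrayAttr staticOffsets) {
  for (Attribute attr : staticOffsets) {
    int64_t value = attr.cast<IntegerAttr>().getInt();
    if (value == ShapedType::kDynamicStrideOrOffset)
      llvm::outs() << "dynamic offset (next `offsets` operand)\n";
    else
      llvm::outs() << "static offset " << value << "\n";
  }
}
```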
-
Alexander Belyaev authored
Differential Revision: https://reviews.llvm.org/D92014
-
- Nov 23, 2020
Nicolas Vasilache authored
-
MaheshRavishankar authored
Exposing some utility functions from Linalg to allow for promotion of fused views outside of the core tile+fuse logic. This is an alternative to patch D91322, which adds the promotion logic to the tileAndFuse method. The downside of that approach is that it is not easily customizable based on needs. Differential Revision: https://reviews.llvm.org/D91503
-
MaheshRavishankar authored
Enhance the tile+fuse logic to allow fusing a sequence of operations. Make sure the value used to obtain the tile shape is a SubViewOp/SubTensorOp. The current logic used to get the bounds of the loop depends on the use of the `getOrCreateRange` method on `SubViewOp` and `SubTensorOp`. Make sure that the value/dim used to compute the range is from such ops. This fix is a reasonable WAR, but a better fix would be to make `getOrCreateRange` a method of `ViewInterface`. Differential Revision: https://reviews.llvm.org/D90991
-
Alex Zinenko authored
An SCF 'for' loop does not iterate if its lower bound is equal to its upper bound. Remove loops where both bounds are the same SSA value as such bounds are guaranteed to be equal. Similarly, remove 'parallel' loops where at least one pair of respective lower/upper bounds is specified by the same SSA value. Reviewed By: gysit Differential Revision: https://reviews.llvm.org/D91880
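A sketch of the simplest of these rewrites, restricted to loops without results; the accessor names on `scf::ForOp` are assumptions.
```
#include "mlir/Dialect/SCF/SCF.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

namespace {
// If both bounds are the same SSA value, the loop runs zero times, so a
// result-less loop can simply be erased.
struct EraseEmptyForLoopSketch : public OpRewritePattern<scf::ForOp> {
  using OpRewritePattern<scf::ForOp>::OpRewritePattern;

  LogicalResult matchAndRewrite(scf::ForOp op,
                                PatternRewriter &rewriter) const override {
    if (op.lowerBound() != op.upperBound() || op.getNumResults() != 0)
      return failure();
    rewriter.eraseOp(op);
    return success();
  }
};
} // namespace
```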
-
Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D91956
-
Nicolas Vasilache authored
This revision refactors code used in various Linalg transformations and makes it a first-class citizen of the LinalgStructuredOpInterface. This is in preparation for allowing more advanced Linalg behavior but is otherwise NFC. Differential revision: https://reviews.llvm.org/D91863
-
- Nov 21, 2020
Aart Bik authored
Adds tests for full sum reduction (tensors summed up into scalars) and the well-known sampled dense-dense matrix product. Refines the optimization rules slightly to handle the summation better. Reviewed By: penpornk Differential Revision: https://reviews.llvm.org/D91818
-
- Nov 20, 2020
Thomas Raoux authored
Add a transformation to forward a transfer_write into a subsequent transfer_read operation, and to remove a dead transfer_write when it is overwritten before being read. Differential Revision: https://reviews.llvm.org/D91321
-
Alex Zinenko authored
Add canonicalization patterns to remove zero-iteration 'for' loops, replace single-iteration 'for' loops with their bodies, remove known-false conditionals with no 'else' branch, and replace conditionals with a known condition value by the respective region. Although similar transformations are performed at the CFG level, not all flows reach that level; e.g., the GPU flow may want to remove single-iteration loops before deciding on loop mapping to thread dimensions. Reviewed By: herhut Differential Revision: https://reviews.llvm.org/D91865
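A sketch of one of these patterns: erasing a result-less 'if' whose condition is a known false constant and that has no 'else' region. Accessor names are assumptions.
```
#include "mlir/Dialect/SCF/SCF.h"
#include "mlir/IR/Matchers.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

namespace {
// scf.if %false { ... } with no else region never executes its body.
struct RemoveKnownFalseIfSketch : public OpRewritePattern<scf::IfOp> {
  using OpRewritePattern<scf::IfOp>::OpRewritePattern;

  LogicalResult matchAndRewrite(scf::IfOp op,
                                PatternRewriter &rewriter) const override {
    if (!matchPattern(op.condition(), m_Zero()))
      return failure();
    if (!op.elseRegion().empty() || op.getNumResults() != 0)
      return failure();
    rewriter.eraseOp(op);
    return success();
  }
};
} // namespace
```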
-
Stella Stamenova authored
This was removed from ops.h, but it is used by onnx-mlir Reviewed By: mehdi_amini Differential Revision: https://reviews.llvm.org/D91830
-
Stephan Herhut authored
This canonicalization helps propagate shape information through the program. Differential Revision: https://reviews.llvm.org/D91854
-
Stephan Herhut authored
This canonicalization is useful to resolve loads into scalar values when doing partial bufferization. Differential Revision: https://reviews.llvm.org/D91855
-
Stephan Herhut authored
For equal operands, comparisons can be decided statically. Differential Revision: https://reviews.llvm.org/D91856
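For example, `x <= x` is always true while `x < x` is always false. Taking std.cmpi as a concrete instance, a sketch of the decision table (written as a free function, not necessarily the shape of the actual fold):
```
#include "mlir/Dialect/StandardOps/IR/Ops.h"
#include "llvm/Support/ErrorHandling.h"

using namespace mlir;

// Statically-known result of an integer comparison whose operands are the
// same SSA value.
static bool cmpiResultForEqualOperands(CmpIPredicate predicate) {
  switch (predicate) {
  case CmpIPredicate::eq:
  case CmpIPredicate::sle:
  case CmpIPredicate::sge:
  case CmpIPredicate::ule:
  case CmpIPredicate::uge:
    return true; // reflexive predicates hold for x vs. x
  case CmpIPredicate::ne:
  case CmpIPredicate::slt:
  case CmpIPredicate::sgt:
  case CmpIPredicate::ult:
  case CmpIPredicate::ugt:
    return false; // strict predicates never hold for x vs. x
  }
  llvm_unreachable("unknown comparison predicate");
}
```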
-
Mikhail Goncharov authored
This reverts commit f8284d21.

Revert "[mlir][Linalg] NFC: Expose some utility functions used for promotion."
This reverts commit 0c59f515.

Revert "Remove unused isZero function"
This reverts commit 0f9f0a40.

Change f8284d21 led to multiple failures in IREE compilation.
-
Eugene Zhulenev authored
Depends On D89963

**Automatic reference counting algorithm outline:**
1. `ReturnLike` operations forward the reference counted values without modifying the reference count.
2. Use liveness analysis to find blocks in the CFG where the lifetime of reference counted values ends, and insert `drop_ref` operations after the last use of the value.
3. Insert `add_ref` before the `async.execute` operation capturing the value, and a pairing `drop_ref` before the async body region terminator, to release the captured reference counted value when execution completes.
4. If the reference counted value is passed only to some of the block successors, insert `drop_ref` operations in the beginning of the blocks that do not have reference counted value uses.

Reviewed By: silvas Differential Revision: https://reviews.llvm.org/D90716
-
Geoffrey Martin-Noble authored
Unused since https://reviews.llvm.org/D91503 and triggering -Wunused-function Reviewed By: rriddle Differential Revision: https://reviews.llvm.org/D91838
-
MaheshRavishankar authored
Exposing some utility functions from Linalg to allow for promotion of fused views outside of the core tile+fuse logic. This is an alternative to patch D91322, which adds the promotion logic to the tileAndFuse method. The downside of that approach is that it is not easily customizable based on needs. Differential Revision: https://reviews.llvm.org/D91503
-
MaheshRavishankar authored
Enhance the tile+fuse logic to allow fusing a sequence of operations. Differential Revision: https://reviews.llvm.org/D90991
-
MaheshRavishankar authored
Differential Revision: https://reviews.llvm.org/D91749
-
- Nov 19, 2020
River Riddle authored
* Move ops to a BuiltinOps.h
* Add file comments
-
ergawy authored
This commit extends the functionality of the SPIR-V module combiner library by adding new deduplication capabilities. In particular, deduplication of global variables, specialization constants, and functions is introduced. For global variables, 2 variables are considered duplicates if they either have the same descriptor set + binding or the same built_in attribute. For specialization constants, 2 spec constants are considered duplicates if they have the same spec_id attribute. 2 functions are considered duplicates if they are identical, i.e. they have the same prototype, attributes, and body. Reviewed By: antiagainst Differential Revision: https://reviews.llvm.org/D90951
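A sketch of the kind of key such deduplication might use for global variables, following the criteria above; the attribute names and the helper are illustrative.
```
#include "mlir/IR/Attributes.h"
#include "mlir/IR/Operation.h"
#include "llvm/ADT/Optional.h"

#include <cstdint>
#include <utility>

using namespace mlir;

// Hypothetical dedup key: two global variables are duplicates if they share
// descriptor set + binding (the built_in case would be handled separately).
static llvm::Optional<std::pair<int64_t, int64_t>>
getDescriptorBindingKey(Operation *globalVar) {
  auto set = globalVar->getAttrOfType<IntegerAttr>("descriptor_set");
  auto binding = globalVar->getAttrOfType<IntegerAttr>("binding");
  if (!set || !binding)
    return llvm::None;
  return std::make_pair(set.getInt(), binding.getInt());
}
```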
-