- Mar 21, 2021
-
Chris Lattner authored
This updates the codebase to pass the context when creating an instance of OwningRewritePatternList, and starts removing extraneous MLIRContext parameters. There are many, many more to be removed. Differential Revision: https://reviews.llvm.org/D99028
-
- Mar 19, 2021
-
Benjamin Kramer authored
Transforms.cpp:586:16: error: unused variable 'v' [-Werror,-Wunused-variable]
    for (Value v : operands)
               ^
-
Nicolas Vasilache authored
-
- Mar 18, 2021
-
Mehdi Amini authored
This reverts commit 32a744ab. CI is broken:
test/Dialect/Linalg/bufferize.mlir:274:12: error: CHECK: expected string not found in input
// CHECK: %[[MEMREF:.*]] = tensor_to_memref %[[IN]] : memref<?xf32>
          ^
-
Eugene Zhulenev authored
`BufferizeAnyLinalgOp` fails because `FillOp` is not a `LinalgGenericOp` and it breaks while reading the operand sizes attribute. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D98671
-
thomasraoux authored
This propagates the affine map to the transfer_read op in case it is not a minor identity map. Differential Revision: https://reviews.llvm.org/D98523
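For context, a sketch of what such a non-minor-identity read looks like; the operands %A, %i, %j, the padding value, and the transpose map are illustrative assumptions, not taken from the patch:

```mlir
// A transfer_read whose permutation map is a transpose rather than a
// minor identity; the vectorization pattern now propagates this map.
%pad = constant 0.0 : f32
%v = vector.transfer_read %A[%i, %j], %pad
       {permutation_map = affine_map<(d0, d1) -> (d1, d0)>}
       : memref<?x?xf32>, vector<4x8xf32>
```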
- Mar 15, 2021
-
Julian Gross authored
Create the memref dialect and move dialect-specific ops from the std dialect to this dialect.
Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
DeallocOp -> MemRef_DeallocOp
DimOp -> MemRef_DimOp
MemRefCastOp -> MemRef_CastOp
MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
LoadOp -> MemRef_LoadOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
SubViewOp -> MemRef_SubViewOp
TransposeOp -> MemRef_TransposeOp
TensorLoadOp -> MemRef_TensorLoadOp
TensorStoreOp -> MemRef_TensorStoreOp
TensorToMemRefOp -> MemRef_BufferCastOp
ViewOp -> MemRef_ViewOp
The roadmap to split the memref dialect from std is discussed here: https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667
Differential Revision: https://reviews.llvm.org/D98041
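As a minimal illustration of the renaming (the SSA names and shapes below are assumed, not from the patch):

```mlir
// Before the split, these ops lived in the std dialect:
%a = alloc() : memref<4xf32>
store %v, %a[%i] : memref<4xf32>
// After the split, the same ops live in the memref dialect:
%b = memref.alloc() : memref<4xf32>
memref.store %v, %b[%i] : memref<4xf32>
```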
-
- Mar 13, 2021
-
Aart Bik authored
This is a temporary work-around to get our all-annotations-all-flags stress testing effort to run clean. In the long run, we want to provide efficient implementations of strided loads and stores, though. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D98563
-
- Mar 10, 2021
-
Inho Seo authored
Moved the getStaticLoopRanges and getStaticShape methods to LinalgInterfaces.td to add static shape verification. This makes the methods available in LinalgInterfaces.cpp for additional static shape verification that matches the shaped operands and loops on linalg ops. Using the existing methods would have caused a circular dependency linking issue. Now we can use them as methods of LinalgOp. Reviewed By: hanchung Differential Revision: https://reviews.llvm.org/D98163
-
- Mar 09, 2021
-
Tobias Gysi authored
Return the vectorization results using a vector passed by reference instead of returning them embedded in a structure. Differential Revision: https://reviews.llvm.org/D98182
-
- Mar 05, 2021
-
Aart Bik authored
Reduction updates should be masked, just like the loads and stores. Note that, alternatively, we could use the fact that masked values are zero for += updates and mask the invariants to get this working, but that would not work for *= updates. Masking the update itself is cleanest. This change also replaces the constant mask with a broadcast of "true", since this folds much better for various folding patterns. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D98000
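A minimal sketch of the mask change, assuming a 16-lane vector (the width is illustrative):

```mlir
// Before: an all-true mask built as a constant mask.
%m1 = vector.constant_mask [16] : vector<16xi1>
// After: the same mask as a broadcast of "true", which constant-folds
// much better in subsequent folding patterns.
%true = constant true
%m2 = vector.broadcast %true : i1 to vector<16xi1>
```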
-
Nicolas Vasilache authored
-
- Mar 04, 2021
-
Aart Bik authored
Found with exhaustive testing: it is possible that a while loop appears in between chainable for loops. As long as we don't scalarize reductions in while loops, this means we need to terminate the chain at the while loop. This also refactors the reduction code into more readable helper methods. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97886
-
- Mar 03, 2021
-
Aart Bik authored
Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97795
-
- Mar 02, 2021
-
Frederik Gossen authored
Some elementwise operations are not scalarizable, vectorizable, or tensorizable. Split the `ElementwiseMappable` trait into the following, more precise traits:
- `Elementwise`
- `Scalarizable`
- `Vectorizable`
- `Tensorizable`
This allows for reuse of `Elementwise` in dialects like HLO.
Differential Revision: https://reviews.llvm.org/D97674
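Illustratively, an op carrying these traits can be applied at any of the three granularities (operand names below are assumed):

```mlir
// Scalarizable: the op applies to plain scalars.
%s = addf %a, %b : f32
// Vectorizable: the same op applies elementwise to vectors.
%v = addf %c, %d : vector<4xf32>
// Tensorizable: and elementwise to tensors.
%t = addf %e, %f : tensor<4xf32>
```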
-
KareemErgawy-TomTom authored
This patch continues the detensorizing implementation by detensoring internal control flow in functions. In order to detensorize functions, all non-entry block arguments are detensored and branches between such blocks are properly updated to reflect the detensored types as well. The function entry block (signature) is left intact. This continues work towards handling github/google/iree#1159. Reviewed By: silvas Differential Revision: https://reviews.llvm.org/D97148
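A minimal sketch of the control-flow rewrite, assuming a 0-d tensor argument and a tensor.extract bridging op (both assumptions for illustration):

```mlir
// Before: an internal block carries a tensor argument.
func @before(%t: tensor<i32>) {
  br ^bb1(%t : tensor<i32>)
^bb1(%arg: tensor<i32>):
  return
}
// After: the branch operand and block argument are detensored to i32,
// while the entry block signature stays intact.
func @after(%t: tensor<i32>) {
  %s = tensor.extract %t[] : tensor<i32>
  br ^bb1(%s : i32)
^bb1(%arg: i32):
  return
}
```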
-
- Feb 28, 2021
-
Aart Bik authored
The universal index was maintained if dense indices were still in place and lattice points followed. However, it should only be kept if any of those following lattice points actually consumes the universal index. This change also fixes an inaccuracy with a missing broadcast around a vector invariant. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97594
-
- Feb 26, 2021
-
Aart Bik authored
Similar to mask-load/store and compress/expand, the gather and scatter operations now allow for higher-dimensional uses. Note that, to support the mixed-type index, the new syntax is: vector.gather %base [%i,%j] [%kvector] .... The first client of this generalization is the sparse compiler, which needs to define scatters and gathers on dense operands of higher dimensions too. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97422
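A hedged sketch of the generalized gather; the shapes, mask, and pass-through operands are assumptions for illustration:

```mlir
// Gather from a 2-D base at indices [%i, %j], offset per lane by the
// index vector %kvector, under %mask, with %pass_thru for masked-off lanes.
%g = vector.gather %base[%i, %j] [%kvector], %mask, %pass_thru
       : memref<16x16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32>
         into vector<16xf32>
```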
-
Christian Sigg authored
Fix call sites. The method will be removed in 2 weeks. Reviewed By: rriddle Differential Revision: https://reviews.llvm.org/D97530
-
- Feb 23, 2021
-
Aart Bik authored
When computing a dense address, a vectorized index must be accounted for properly. This bug was formerly undetected because we get 0 * prev + i in most cases, which folds away the scalar part. Now it works for all cases. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97317
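For intuition, the dense-address recurrence at stake looks roughly like this; everything here is illustrative, not code from the patch:

```mlir
// addr = prev * size + i; when %prev is the constant 0, the multiply and
// add fold away, which is why the missing handling of a vectorized index
// went unnoticed in most cases.
%scaled = muli %prev, %size : index
%addr = addi %scaled, %i : index
```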
-
Nicolas Vasilache authored
This transformation was only used for quick experimentation and is not general enough. Retire it. Differential Revision: https://reviews.llvm.org/D97266
-
KareemErgawy-TomTom authored
This commit is the first baby step towards detensoring in linalg-on-tensors. Detensoring is the process through which a tensor value is converted to one or potentially more primitive value(s). During this process, operations with such detensored operands are also converted to an equivalent form that works on primitives. The detensoring process is driven by linalg-on-tensors ops. In particular, a linalg-on-tensors op is checked to see whether *all* its operands can be detensored. If so, those operands are converted to their primitive counterparts and the linalg op is replaced by an equivalent op that takes those new primitive values as operands. This works towards handling github/google/iree#1159. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D96271
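As a hedged sketch (all names assumed), detensoring turns a 0-d linalg-on-tensors addition into its primitive counterpart:

```mlir
// Before: an elementwise addition expressed on 0-d tensors.
%init = linalg.init_tensor [] : tensor<f32>
%sum = linalg.generic
         {indexing_maps = [affine_map<() -> ()>, affine_map<() -> ()>,
                           affine_map<() -> ()>],
          iterator_types = []}
         ins(%a, %b : tensor<f32>, tensor<f32>)
         outs(%init : tensor<f32>) {
       ^bb0(%x: f32, %y: f32, %out: f32):
         %r = addf %x, %y : f32
         linalg.yield %r : f32
       } -> tensor<f32>
// After: the operands are detensored and the linalg op is replaced by the
// primitive op from its body.
%r = addf %a_scalar, %b_scalar : f32
```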
-
Aart Bik authored
Simplifies the way lattices are optimized with fewer, but more powerful, rules. This also fixes an inaccuracy where too many lattices resulted (expecting a non-existing universal index). Also puts no-side-effects on all proper getters and unifies the bufferization flags order in integration tests (for future, more complex use cases). Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D97134
-
- Feb 19, 2021
-
Nicolas Vasilache authored
-
- Feb 18, 2021
-
Alexander Belyaev authored
This commit introduced a cyclic dependency: the MemRef dialect depends on Standard because it uses ConstantIndexOp, and Std depends on the MemRef dialect in its EDSC/Intrinsics.h. Working on a fix. This reverts commit 8aa6c376.
-
Julian Gross authored
Create the memref dialect and move several dialect-specific ops without dependencies on other ops from the std dialect to this dialect.
Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
DeallocOp -> MemRef_DeallocOp
MemRefCastOp -> MemRef_CastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
TransposeOp -> MemRef_TransposeOp
ViewOp -> MemRef_ViewOp
The roadmap to split the memref dialect from std is discussed here: https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667
Differential Revision: https://reviews.llvm.org/D96425
-
Aart Bik authored
Rationale: Narrower types for overhead storage yield a smaller memory footprint for sparse tensors and thus need to be supported. Also, more value types need to be supported to deal with all kinds of kernels. Since the "one-size-fits-all" sparse storage scheme implementation is used instead of actual codegen, the library needs to be able to support all combinations of desired types. With some crafty templating and overloading, the actual code for this is kept reasonably sized, though. Reviewed By: bixia Differential Revision: https://reviews.llvm.org/D96819
-
- Feb 16, 2021
-
Nicolas Vasilache authored
This revision adds support for hoisting "subtensor + vector.transfer_read" / "subtensor_insert + vector.transfer_write" pairs across scf.for. The unit of hoisting becomes a HoistableRead / HoistableWrite struct, which contains a pair of "vector.transfer_read + optional subtensor" / "vector.transfer_write + optional subtensor_insert". scf::ForOp canonicalization patterns are applied greedily on successful application of the transformation to clean up the IR more eagerly and potentially expose more transformation opportunities. Differential Revision: https://reviews.llvm.org/D96731
-
Nicolas Vasilache authored
SliceAnalysis was originally developed in the context of affine.for within mlfunc; it predates the notion of region. This revision updates it to not hardcode specific ops like scf::ForOp. When rooted at an op, the behavior of the slice computation changes as it recurses into the regions of the op. This no longer supports gathering all values transitively depending on a loop induction variable. Additional variants rooted at a Value are added to also support the existing behavior. Differential Revision: https://reviews.llvm.org/D96702
-
- Feb 14, 2021
-
Nicolas Vasilache authored
-
- Feb 12, 2021
-
Mehdi Amini authored
This revision takes advantage of the newly extended `ref` directive in assembly format to allow better region handling for LinalgOps. Specifically, FillOp and CopyOp now build their regions explicitly, which allows retiring older behavior that relied on specific op knowledge in both lowering to loops and vectorization. This reverts commit 3f22547f and relands 973e133b with a workaround for a gcc bug that does not accept lambda default parameters: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59949 Differential Revision: https://reviews.llvm.org/D96598
-
Mehdi Amini authored
This reverts commit 973e133b. It triggers an issue in gcc5 that requires investigation; the build is broken with:
/tmp/ccdpj3B9.s: Assembler messages:
/tmp/ccdpj3B9.s:5821: Error: symbol `_ZNSt17_Function_handlerIFvjjEUljjE2_E9_M_invokeERKSt9_Any_dataOjS6_' is already defined
/tmp/ccdpj3B9.s:5860: Error: symbol `_ZNSt14_Function_base13_Base_managerIUljjE2_E10_M_managerERSt9_Any_dataRKS3_St18_Manager_operation' is already defined
-
Nicolas Vasilache authored
This revision takes advantage of the newly extended `ref` directive in assembly format to allow better region handling for LinalgOps. Specifically, FillOp and CopyOp now build their regions explicitly which allows retiring older behavior that relied on specific op knowledge in both lowering to loops and vectorization. Differential Revision: https://reviews.llvm.org/D96598
-
Stephan Herhut authored
This does not split the transformations yet; those will be done as future cleanups. Differential Revision: https://reviews.llvm.org/D96272
-
- Feb 11, 2021
-
Nicolas Vasilache authored
The AffineMap in the MemRef type inferred by SubViewOp may have uncompressed symbols, which results in a type mismatch on otherwise unused symbols. Make the computation of the AffineMap compress those unused symbols, which results in better canonical types. Additionally, improve the error message to report which inferred type was expected. Differential Revision: https://reviews.llvm.org/D96551
-
Hanhan Wang authored
The dimension order of a filter in TensorFlow is [filter_height, filter_width, in_channels, out_channels], which is different from the current definition. The current definition follows the TOSA spec. Add TF-version conv ops to the .tc definitions, so we do not have to insert a transpose op around a conv op. Reviewed By: antiagainst Differential Revision: https://reviews.llvm.org/D96038
-
Sanjoy Das authored
This should have gone in with a76761cf.
-
Sanjoy Das authored
- Remove leftover comment from de2568aa
- Fix a typo in a comment
-
- Feb 10, 2021
-
Nicolas Vasilache authored
-