- Mar 19, 2021
-
-
Andrew Young authored
When deleting operations in DCE, the algorithm used a post-order walk of the IR to ensure that value uses were erased before value defs. Graph regions do not have the same structural invariants as SSA CFG regions, and this post-order walk could delete value defs before their uses; this is guaranteed to happen when there is a cycle in the use-def graph. This change stops DCE from relying on any particular visitation order for operations and blocks. Instead, we explicitly drop all uses of a value before deleting it. Reviewed By: mehdi_amini, rriddle Differential Revision: https://reviews.llvm.org/D98919
-
- Mar 15, 2021
-
-
Julian Gross authored
Create the memref dialect and move dialect-specific ops from the std dialect to this dialect.

Moved ops:
  AllocOp -> MemRef_AllocOp
  AllocaOp -> MemRef_AllocaOp
  AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
  DeallocOp -> MemRef_DeallocOp
  DimOp -> MemRef_DimOp
  MemRefCastOp -> MemRef_CastOp
  MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
  GetGlobalMemRefOp -> MemRef_GetGlobalOp
  GlobalMemRefOp -> MemRef_GlobalOp
  LoadOp -> MemRef_LoadOp
  PrefetchOp -> MemRef_PrefetchOp
  ReshapeOp -> MemRef_ReshapeOp
  StoreOp -> MemRef_StoreOp
  SubViewOp -> MemRef_SubViewOp
  TransposeOp -> MemRef_TransposeOp
  TensorLoadOp -> MemRef_TensorLoadOp
  TensorStoreOp -> MemRef_TensorStoreOp
  TensorToMemRefOp -> MemRef_BufferCastOp
  ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here: https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667 Differential Revision: https://reviews.llvm.org/D98041
-
Alex Zinenko authored
This reverts commit b5d9a3c9. The commit introduced a memory error in canonicalization/operation walking that is exposed when compiled with ASAN. It leads to crashes in some "release" configurations.
-
Chris Lattner authored
Two changes:
1) Change the canonicalizer to walk the function in top-down order instead of bottom-up order. This composes well with the "top down" nature of constant folding and simplification, reducing iterations and re-evaluation of ops in simple cases.
2) Explicitly enter existing constants into the OperationFolder table before canonicalizing. Previously we would "constant fold" them and rematerialize them, wastefully recreating a bunch of constants, which led to pointless memory traffic.

Both changes together provide a 33% speedup for canonicalize on some mid-size CIRCT examples. One artifact of this change is that the constants generated in normal pattern application get inserted at the top of the function as the patterns are applied. Because of this, we get "inverted" constants more often, which is an aesthetic change to the IR but does permute some testcases. Differential Revision: https://reviews.llvm.org/D98609
-
- Mar 11, 2021
-
-
Julian Gross authored
Buffer hoisting moved allocs upwards even when the alloc had a dependency within its nested region. This patch fixes that issue. https://bugs.llvm.org/show_bug.cgi?id=49142 Differential Revision: https://reviews.llvm.org/D98248
-
- Mar 09, 2021
-
-
Lei Zhang authored
This makes it easy to compose the distribution computation with other affine computations. Reviewed By: mravishankar Differential Revision: https://reviews.llvm.org/D98171
-
- Feb 27, 2021
-
-
Jacques Pienaar authored
-
- Feb 26, 2021
-
-
Vinayaka Bandishti authored
Fixes a bug in the affine fusion pipeline where an incorrect fusion was performed even though a call op that potentially modifies the memrefs under consideration exists between the source and target. Fixes part of https://bugs.llvm.org/show_bug.cgi?id=49220 Reviewed By: bondhugula, dcaballe Differential Revision: https://reviews.llvm.org/D97252
-
- Feb 25, 2021
-
-
Tung D. Le authored
This patch handles defining ops between the source and dest loop nests, and prevents loop nests with `iter_args` from being fused. If there is any SSA value in the dest loop nest whose defining op has a dependence from the source loop nest, we cannot fuse the loop nests. If there is an `affine.for` with `iter_args`, prevent it from being fused. Reviewed By: dcaballe, bondhugula Differential Revision: https://reviews.llvm.org/D97030
-
- Feb 23, 2021
-
-
Adam Straw authored
Affine parallel ops may contain and yield results from MemRefsNormalizable ops in the loop body. Thus, both affine.parallel and affine.yield should have the MemRefsNormalizable trait. Reviewed By: bondhugula Differential Revision: https://reviews.llvm.org/D96821
-
- Feb 22, 2021
-
-
Vinayaka Bandishti authored
[MLIR][affine] Prevent fusion when memory-effect-free ops are present between producer and consumer

This commit fixes a bug in the affine fusion pipeline where an incorrect fusion was performed even though a dealloc op was present between a producer and a consumer. This is fixed by creating a node for the dealloc op in the MDG. Reviewed By: bondhugula, dcaballe Differential Revision: https://reviews.llvm.org/D97032
-
- Feb 21, 2021
-
-
Jacques Pienaar authored
Move over to ODS & use pass options.
-
- Feb 19, 2021
-
-
Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D96995
-
- Feb 18, 2021
-
-
Alexander Belyaev authored
This commit introduced a cyclic dependency: the MemRef dialect depends on Standard because it uses ConstantIndexOp, while Std depends on the MemRef dialect in its EDSC/Intrinsics.h. Working on a fix. This reverts commit 8aa6c376.
-
Julian Gross authored
Create the memref dialect and move several dialect-specific ops without dependencies to other ops from the std dialect to this dialect.

Moved ops:
  AllocOp -> MemRef_AllocOp
  AllocaOp -> MemRef_AllocaOp
  DeallocOp -> MemRef_DeallocOp
  MemRefCastOp -> MemRef_CastOp
  GetGlobalMemRefOp -> MemRef_GetGlobalOp
  GlobalMemRefOp -> MemRef_GlobalOp
  PrefetchOp -> MemRef_PrefetchOp
  ReshapeOp -> MemRef_ReshapeOp
  StoreOp -> MemRef_StoreOp
  TransposeOp -> MemRef_TransposeOp
  ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here: https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667 Differential Revision: https://reviews.llvm.org/D96425
-
- Feb 12, 2021
-
-
Mehdi Amini authored
This revision takes advantage of the newly extended `ref` directive in assembly format to allow better region handling for LinalgOps. Specifically, FillOp and CopyOp now build their regions explicitly, which allows retiring older behavior that relied on specific op knowledge in both lowering to loops and vectorization. This reverts commit 3f22547f and relands 973e133b with a workaround for a gcc bug that does not accept lambda default parameters: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59949 Differential Revision: https://reviews.llvm.org/D96598
-
Mehdi Amini authored
This reverts commit 973e133b. It triggers an issue in gcc5 that requires investigation; the build is broken with:

/tmp/ccdpj3B9.s: Assembler messages:
/tmp/ccdpj3B9.s:5821: Error: symbol `_ZNSt17_Function_handlerIFvjjEUljjE2_E9_M_invokeERKSt9_Any_dataOjS6_' is already defined
/tmp/ccdpj3B9.s:5860: Error: symbol `_ZNSt14_Function_base13_Base_managerIUljjE2_E10_M_managerERSt9_Any_dataRKS3_St18_Manager_operation' is already defined
-
Nicolas Vasilache authored
This revision takes advantage of the newly extended `ref` directive in assembly format to allow better region handling for LinalgOps. Specifically, FillOp and CopyOp now build their regions explicitly which allows retiring older behavior that relied on specific op knowledge in both lowering to loops and vectorization. Differential Revision: https://reviews.llvm.org/D96598
-
Stephan Herhut authored
This does not split transformations, yet. Those will be done as future clean ups. Differential Revision: https://reviews.llvm.org/D96272
-
- Feb 06, 2021
-
-
Tung D. Le authored
This patch fixes the following bug when calling --affine-loop-fusion.

Input program:

```mlir
func @should_not_fuse_since_top_level_non_affine_non_result_users(
    %in0 : memref<32xf32>, %in1 : memref<32xf32>) {
  %c0 = constant 0 : index
  %cst_0 = constant 0.000000e+00 : f32
  affine.for %d = 0 to 32 {
    %lhs = affine.load %in0[%d] : memref<32xf32>
    %rhs = affine.load %in1[%d] : memref<32xf32>
    %add = addf %lhs, %rhs : f32
    affine.store %add, %in0[%d] : memref<32xf32>
  }
  store %cst_0, %in0[%c0] : memref<32xf32>
  affine.for %d = 0 to 32 {
    %lhs = affine.load %in0[%d] : memref<32xf32>
    %rhs = affine.load %in1[%d] : memref<32xf32>
    %add = addf %lhs, %rhs : f32
    affine.store %add, %in0[%d] : memref<32xf32>
  }
  return
}
```

Calling --affine-loop-fusion, we got an incorrect output:

```mlir
func @should_not_fuse_since_top_level_non_affine_non_result_users(%arg0: memref<32xf32>, %arg1: memref<32xf32>) {
  %c0 = constant 0 : index
  %cst = constant 0.000000e+00 : f32
  store %cst, %arg0[%c0] : memref<32xf32>
  affine.for %arg2 = 0 to 32 {
    %0 = affine.load %arg0[%arg2] : memref<32xf32>
    %1 = affine.load %arg1[%arg2] : memref<32xf32>
    %2 = addf %0, %1 : f32
    affine.store %2, %arg0[%arg2] : memref<32xf32>
    %3 = affine.load %arg0[%arg2] : memref<32xf32>
    %4 = affine.load %arg1[%arg2] : memref<32xf32>
    %5 = addf %3, %4 : f32
    affine.store %5, %arg0[%arg2] : memref<32xf32>
  }
  return
}
```

This happened because, when analyzing the source and destination nodes, affine loop fusion ignored non-result ops sandwiched between them. In other words, the MemRefDependencyGraph in affine loop fusion ignored these non-result ops. This patch solves the issue by adding these non-result ops to the MemRefDependencyGraph. Reviewed By: bondhugula Differential Revision: https://reviews.llvm.org/D95668
-
- Feb 04, 2021
-
-
Alex Zinenko authored
In dialect conversion infrastructure, source materialization applies as part of the finalization procedure to results of the newly produced operations that replace previously existing values with values having a different type. However, such operations may be created to replace operations created in other patterns. At this point, it is possible that the results of the _original_ operation are still in use and have mismatching types, but the results of the _intermediate_ operation that performed the type change are not in use, leading to the absence of source materialization. For example,

```mlir
%0 = dialect.produce : !dialect.A
dialect.use %0 : !dialect.A
```

can be replaced with

```mlir
%0 = dialect.other : !dialect.A
%1 = dialect.produce : !dialect.A  // replaced, scheduled for removal
dialect.use %1 : !dialect.A
```

and then with

```mlir
%0 = dialect.final : !dialect.B
%1 = dialect.other : !dialect.A    // replaced, scheduled for removal
%2 = dialect.produce : !dialect.A  // replaced, scheduled for removal
dialect.use %2 : !dialect.A
```

in the same rewriting, but only the %1->%0 replacement is currently considered. Change the logic in dialect conversion to look up all values that were replaced by the given value and perform source materialization if any of those values is still in use with mismatching types. This is performed by computing the inverse value replacement mapping. This arguably expensive manipulation is performed only if there were some type-changing replacements. An alternative could be to consider all replaced operations and not only those that resulted in type changes, but it would harm pattern-level composability: the pattern that performed the non-type-changing replacement would have to be made aware of the type converter in order to call the materialization hook. Reviewed By: rriddle Differential Revision: https://reviews.llvm.org/D95626
-
Mehdi Amini authored
We could extend this with an interface to allow a dialect to perform a type conversion, but that would make the folder create operations, which isn't the case at the moment and isn't necessarily always desirable. Reviewed By: rriddle Differential Revision: https://reviews.llvm.org/D95991
-
- Feb 02, 2021
-
-
Alex Zinenko authored
In dialect conversion, signature conversions essentially perform block argument replacement and are added to the general value remapping. However, the replaced values were not tracked, so if a signature conversion was rolled back, the construction of operand lists for the following patterns could have obtained block arguments from the mapping and given them to the pattern, leading to use-after-free. Keep track of signature conversions similarly to normal block argument replacement, and erase such replacements from the general mapping when the conversion is rolled back. Reviewed By: rriddle Differential Revision: https://reviews.llvm.org/D95688
-
- Jan 29, 2021
-
-
Alexander Belyaev authored
Currently, for an scf.parallel (i,j,k), after the loop collapsing to 1D is done, the IVs would be traversed as for an scf.parallel (k,j,i). Differential Revision: https://reviews.llvm.org/D95693
-
- Jan 25, 2021
-
-
Diego Caballero authored
This patch adds support for producer-consumer fusion scenarios with multiple producer stores to the AffineLoopFusion pass. The patch introduces some changes to the producer-consumer algorithm, including:

* For a given consumer loop, producer-consumer fusion iterates over its producer candidates until a fixed point is reached.
* Producer candidates are gathered beforehand for each iteration of the consumer loop and visited in reverse program order (not strictly guaranteed) to maximize the number of loops fused per iteration.

In general, these changes were needed to simplify the multi-store producer support and remove some of the workarounds that were introduced in the past to support more fusion cases under the single-store producer limitation. This patch also preserves the existing functionality of AffineLoopFusion with one minor change in behavior. Producer-consumer fusion didn't fuse scenarios with escaping memrefs and multiple outgoing edges (from a single store). Multi-store producer scenarios will usually (always?) have multiple outgoing edges, so we couldn't fuse any with escaping memrefs, which would greatly limit the applicability of this new feature. Therefore, the patch enables fusion for these scenarios. Please see the modified tests for specific details. Reviewed By: andydavis1, bondhugula Differential Revision: https://reviews.llvm.org/D92876
-
- Jan 20, 2021
-
-
Diego Caballero authored
This reverts commit 7dd19885. ASAN issue.
-
Diego Caballero authored
This patch adds support for producer-consumer fusion scenarios with multiple producer stores to the AffineLoopFusion pass. The patch introduces some changes to the producer-consumer algorithm, including:

* For a given consumer loop, producer-consumer fusion iterates over its producer candidates until a fixed point is reached.
* Producer candidates are gathered beforehand for each iteration of the consumer loop and visited in reverse program order (not strictly guaranteed) to maximize the number of loops fused per iteration.

In general, these changes were needed to simplify the multi-store producer support and remove some of the workarounds that were introduced in the past to support more fusion cases under the single-store producer limitation. This patch also preserves the existing functionality of AffineLoopFusion with one minor change in behavior. Producer-consumer fusion didn't fuse scenarios with escaping memrefs and multiple outgoing edges (from a single store). Multi-store producer scenarios will usually (always?) have multiple outgoing edges, so we couldn't fuse any with escaping memrefs, which would greatly limit the applicability of this new feature. Therefore, the patch enables fusion for these scenarios. Please see the modified tests for specific details. Reviewed By: andydavis1, bondhugula Differential Revision: https://reviews.llvm.org/D92876
-
Julian Gross authored
Add a check if regions do not implement the RegionBranchOpInterface. This is not allowed in the current deallocation steps. Furthermore, we handle edge-cases, where a single region is attached and the parent operation has no results. This fixes: https://bugs.llvm.org/show_bug.cgi?id=48575 Differential Revision: https://reviews.llvm.org/D94586
-
- Jan 19, 2021
-
-
Sean Silva authored
- DynamicTensorFromElementsOp
- TensorFromElements

Differential Revision: https://reviews.llvm.org/D94994
-
- Jan 14, 2021
-
-
Andrew Young authored
The standard and gpu dialects both have `alloc` operations which use the memory effect `MemAlloc`. In both cases, it is specified on both the operation itself and on the result. This results in two memory effects being created for these operations. When `MemAlloc` is defined on an operation, it represents some background effect which the compiler cannot reason about, and inhibits the ability of the compiler to remove dead `std.alloc` operations. This change removes the unneeded `MemAlloc` effect from these operations and leaves the effect on the result, which allows dead allocs to be erased. There is the same problem, but to a lesser extent, with MemFree, MemRead and MemWrite. Over-specifying these traits is not currently inhibiting any optimization. Differential Revision: https://reviews.llvm.org/D94662
-
River Riddle authored
This revision adds a new `replaceOpWithIf` hook that replaces uses of an operation that satisfy a given functor. If all uses are replaced, the operation gets erased in a similar manner to `replaceOp`. DialectConversion support will be added in a followup as this requires adjusting how replacements are tracked there. Differential Revision: https://reviews.llvm.org/D94632
-
River Riddle authored
In the overwhelmingly common case, enum attribute case strings represent valid identifiers in MLIR syntax. This revision updates the format generator to format as a keyword in these cases, removing the need to wrap values in a string. The parser still retains the ability to parse the string form, but the printer will use the keyword form when applicable. Differential Revision: https://reviews.llvm.org/D94575
-
- Jan 07, 2021
-
-
Alex Zinenko authored
The LLVM dialect type system has been closed until now, i.e. it did not support types from other dialects inside containers. While this has had the obvious benefit of deriving from a common base class, it has led to some simple types being almost identical with the built-in types, namely integer and floating point types. This in turn has led to a lot of larger-scale complexity: simple types must still be converted; numerous operations that correspond to LLVM IR intrinsics are replicated to produce versions operating on either LLVM dialect or built-in types, leading to quasi-duplicate dialects; lowering to the LLVM dialect is essentially required to be one-shot because of type conversion; etc. In this light, it is reasonable to trade off some local complexity in the internal implementation of LLVM dialect types for removing larger-scale system complexity. Previous commits to the LLVM dialect type system have adapted the API to support types from other dialects.

Replace LLVMIntegerType with the built-in IntegerType plus additional checks that such types are signless (these are isolated in a utility function that replaced `isa<LLVMType>` and in the parser). Temporarily keep the possibility to parse `!llvm.i32` as a synonym for `i32`, but add a deprecation notice. Reviewed By: mehdi_amini, silvas, antiagainst Differential Revision: https://reviews.llvm.org/D94178
-
- Jan 06, 2021
-
-
Kazuaki Ishizaki authored
Fix typos under the docs, test, and tools directories. Reviewed By: ftynse Differential Revision: https://reviews.llvm.org/D94158
-
- Dec 18, 2020
-
-
Sean Silva authored
This is almost entirely mechanical. Differential Revision: https://reviews.llvm.org/D93357
-
- Dec 15, 2020
-
-
River Riddle authored
Now that passes have support for running nested pipelines, the inliner can now allow for users to provide proper nested pipelines to use for optimization during inlining. This revision also changes the behavior of optimization during inlining to optimize before attempting to inline, which should lead to a more accurate cost model and prevents the need for users to schedule additional duplicate cleanup passes before/after the inliner that would already be run during inlining. Differential Revision: https://reviews.llvm.org/D91211
-
- Dec 11, 2020
-
-
Sean Silva authored
This reverts commit 0d48d265. This reapplies the following commit, with a fix for CAPI/ir.c: [mlir] Start splitting the `tensor` dialect out of `std`. This starts by moving `std.extract_element` to `tensor.extract` (this mirrors the naming of `vector.extract`). Curiously, `std.extract_element` supposedly works on vectors as well, and this patch removes that functionality. I would tend to do that in separate patch, but I couldn't find any downstream users relying on this, and the fact that we have `vector.extract` made it seem safe enough to lump in here. This also sets up the `tensor` dialect as a dependency of the `std` dialect, as some ops that currently live in `std` depend on `tensor.extract` via their canonicalization patterns. Part of RFC: https://llvm.discourse.group/t/rfc-split-the-tensor-dialect-from-std/2347/2 Differential Revision: https://reviews.llvm.org/D92991
-
Sean Silva authored
This reverts commit cab8dda9. I mistakenly thought that CAPI/ir.c failure was unrelated to this change. Need to debug it.
-
Sean Silva authored
This starts by moving `std.extract_element` to `tensor.extract` (this mirrors the naming of `vector.extract`). Curiously, `std.extract_element` supposedly works on vectors as well, and this patch removes that functionality. I would tend to do that in separate patch, but I couldn't find any downstream users relying on this, and the fact that we have `vector.extract` made it seem safe enough to lump in here. This also sets up the `tensor` dialect as a dependency of the `std` dialect, as some ops that currently live in `std` depend on `tensor.extract` via their canonicalization patterns. Part of RFC: https://llvm.discourse.group/t/rfc-split-the-tensor-dialect-from-std/2347/2 Differential Revision: https://reviews.llvm.org/D92991
-
- Dec 10, 2020
-
-
River Riddle authored
[mlir][SCCP] Don't visit private callables unless they are used when tracking interprocedural arguments/results

This fixes a subtle bug where SCCP could incorrectly optimize a private callable while waiting for its arguments to be resolved. Fixes PR#48457 Differential Revision: https://reviews.llvm.org/D92976
-