- Sep 22, 2020
-
-
Nicolas Vasilache authored
This revision allows representing a reduction at the level of Linalg on tensors for generic ops, by aligning them with the approach already used for named ops.
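A minimal sketch of the kind of op this enables, assuming the generic op follows the same ins/init convention as the named ops documented in the Sep 18 entry below; the exact assembly form at this revision may differ:
```mlir
// Illustrative row-sum reduction producing a tensor; names and syntax are a
// sketch, not taken from the patch.
#access = affine_map<(d0, d1) -> (d0, d1)>
#reduce = affine_map<(d0, d1) -> (d0)>
%sum = linalg.generic {indexing_maps = [#access, #reduce],
                       iterator_types = ["parallel", "reduction"]}
    ins(%A : tensor<?x?xf32>)
    init(%acc : tensor<?xf32>) {
  ^bb0(%a: f32, %b: f32):
    %0 = addf %a, %b : f32
    linalg.yield %0 : f32
} -> tensor<?xf32>
```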
-
- Sep 18, 2020
-
-
Nicolas Vasilache authored
This revision allows representing a reduction at the level of linalg on tensors for named ops. When a structured op has a reduction and returns tensor(s), new conventions are added and documented. As an illustration, the syntax for a `linalg.matmul` writing into a buffer is:
```
linalg.matmul ins(%a, %b : memref<?x?xf32>, tensor<?x?xf32>)
             outs(%c : memref<?x?xf32>)
```
whereas the syntax for a `linalg.matmul` returning a new tensor is:
```
%d = linalg.matmul ins(%a, %b : tensor<?x?xf32>, memref<?x?xf32>)
                  init(%c : tensor<?x?xf32>)
  -> tensor<?x?xf32>
```
Other parts of linalg will be extended accordingly to allow mixed buffer/tensor semantics in the presence of reductions.
-
- Sep 17, 2020
-
-
Jakub Lichman authored
ConvOp vectorization currently supports only convolutions of static shapes whose dimensions are of size either 3 (vectorized) or 1 (not vectorized), since the underlying vectors have to be of static shape as well. In this commit we add support for convolutions of any size, as well as dynamic shapes, by leveraging the existing matmul infrastructure to tile both the input and the kernel to sizes accepted by the previous version of ConvOp vectorization. In the future this pass can be extended to take a "tiling mask" as user input, which will enable vectorization of user-specified dimensions. Differential Revision: https://reviews.llvm.org/D87676
-
- Sep 16, 2020
-
-
Eugene Zhulenev authored
Enable inlining for Linalg dialect. Differential Revision: https://reviews.llvm.org/D87567
-
- Sep 14, 2020
-
-
Federico Lebrón authored
Now backends spell out which namespace they want to be in, instead of relying on clients #including them inside already-opened namespaces. This also means that cppNamespaces should be fully qualified, and there's no implicit "::mlir::" prepended to them anymore. Reviewed By: mehdi_amini Differential Revision: https://reviews.llvm.org/D86811
-
- Sep 11, 2020
-
-
Nicolas Vasilache authored
This revision refactors and cleans up a bunch of things to simplify StructuredOpInterface before work can proceed on Linalg on tensors:
- break out pieces of the StructuredOps trait that are part of the StructuredOpInterface,
- drop referenceIterators and referenceIndexingMaps that end up being more confusing than useful,
- drop NamedStructuredOpTrait
-
Benjamin Kramer authored
Previously only the input type was printed, and the parser applied it to both input and output, creating an invalid transpose. Print and parse both types, and verify that they match. Differential Revision: https://reviews.llvm.org/D87462
-
MaheshRavishankar authored
The LinalgTilingPattern class derived from the base deletes the original operation. This allows for the use case where more transformations are necessary on the original operation after tiling; in such cases the pattern can derive from LinalgBaseTilingPattern instead of LinalgTilingPattern. Differential Revision: https://reviews.llvm.org/D87308
-
- Sep 10, 2020
-
-
Eugene Burmako authored
This patch adds a new named structured op to accompany linalg.matmul and linalg.matvec. We needed it for our codegen, so I figured it would be useful to add it to Linalg. Reviewed By: nicolasvasilache, mravishankar Differential Revision: https://reviews.llvm.org/D87292
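The summary does not name the new op; assuming it is the vector-matrix counterpart of linalg.matvec (hypothetically `linalg.vecmat`), a use might look like the sketch below, written with the ins/outs convention documented in the Sep 18 entry above (the assembly form at the time of this commit may have differed):
```mlir
// Hypothetical: %y = %x * %A, i.e. a row vector times a matrix, buffer form.
linalg.vecmat ins(%x, %A : memref<?xf32>, memref<?x?xf32>)
             outs(%y : memref<?xf32>)
```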
-
Jakub Lichman authored
This commit addresses comments that were requested on D86619 after it was landed. Differential Revision: https://reviews.llvm.org/D87354
-
MaheshRavishankar authored
Also refactor the getViewSizes method to work on LinalgOp instead of being templated. The templated version is kept for compatibility. Differential Revision: https://reviews.llvm.org/D87303
-
- Sep 08, 2020
-
-
Ehsan Toosi authored
BufferPlacement has been removed, as allocations are no longer placed during the conversion. Differential Revision: https://reviews.llvm.org/D87079
-
Jakub Lichman authored
-
Jakub Lichman authored
This commit introduces a new way of lowering convolution ops. The conv op vectorization pass lowers linalg convolution ops into vector contractions. This lowering is possible when the conv op is first tiled by 1 along specific dimensions, which transforms it into a dot product between input and kernel subview memory buffers. This pass converts such a conv op into a vector contraction and inserts all the vector transfers needed to make it work. Differential Revision: https://reviews.llvm.org/D86619
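As a rough sketch of the shape of the rewrite, assuming a kernel tile of size 3 (the function and operand names are illustrative, not taken from the pass):
```mlir
// The 1-tiled convolution body is a dot product between the input and kernel
// subviews; it can be expressed as vector transfers feeding a vector.contract.
func @conv_tile_as_contraction(%in: memref<3xf32>, %ker: memref<3xf32>, %acc: f32) -> f32 {
  %c0 = constant 0 : index
  %f0 = constant 0.0 : f32
  %lhs = vector.transfer_read %in[%c0], %f0 : memref<3xf32>, vector<3xf32>
  %rhs = vector.transfer_read %ker[%c0], %f0 : memref<3xf32>, vector<3xf32>
  %res = vector.contract {indexing_maps = [affine_map<(d0) -> (d0)>,
                                           affine_map<(d0) -> (d0)>,
                                           affine_map<(d0) -> ()>],
                          iterator_types = ["reduction"]}
         %lhs, %rhs, %acc : vector<3xf32>, vector<3xf32> into f32
  return %res : f32
}
```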
-
- Sep 07, 2020
-
-
Frederik Gossen authored
With `dynamic_tensor_from_elements` tensor values of dynamic size can be created. The body of the operation essentially maps the index space to tensor elements. Declare SCF operations in the `scf` namespace to avoid name clash with the new `std.yield` operation. Resolve ambiguities between `linalg/shape/std/scf.yield` operations. Differential Revision: https://reviews.llvm.org/D86276
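A small sketch of the op, assuming a 1-D result and a trivial body (shapes and names are illustrative):
```mlir
// Build a dynamically sized 1-D tensor whose elements are all %value.
func @fill_dynamic(%size : index, %value : f32) -> tensor<?xf32> {
  %t = dynamic_tensor_from_elements %size {
  ^bb0(%i : index):
    yield %value : f32
  } : tensor<?xf32>
  return %t : tensor<?xf32>
}
```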
-
- Sep 03, 2020
-
-
Jakub Lichman authored
Sizes of tiles (subviews) are bigger by 1 than they should be. Consider a 1D convolution without batches or channels, and let m iterate over the output and n over the kernel; the input is then accessed at m + n. During tiling, subview sizes for convolutions are computed by composing the requested tile size together with the kernel size with the above expression. For a tile size of 2 the resulting subview size is 2 + size(n), which is bigger by one than it should be, since we move the kernel only once per tile. The underlying problem is that the range is not turned into a closed interval before the composition. This commit fixes the problem by first turning the ranges into closed intervals by subtracting 1, and after the composition turning them back into half-open intervals by adding 1. Differential Revision: https://reviews.llvm.org/D86638
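To make the off-by-one concrete: with a tile size of 2 over the output and a kernel of size 3, the tile accesses input indices m + n for m in {m0, m0 + 1} and n in {0, 1, 2}, i.e. 4 distinct elements (m0 through m0 + 3); composing the half-open ranges directly yields 2 + 3 = 5, one too many.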
-
- Sep 02, 2020
-
-
Ehsan Toosi authored
In this PR, the users of BufferPlacement can configure BufferAssignmentTypeConverter. These new configurations give the user more freedom in converting function signatures and in the return- and call-operation conversions. These are the new features:
- Accepting callback functions for decomposing types (i.e. 1-to-N type conversion, such as unpacking tuple types).
- Defining ResultConversionKind for specifying whether a function result with a certain type should be appended to the function arguments list or kept as a function result. (Usage: converter.setResultConversionKind<MemRefType>(AppendToArgumentList))
- Accepting callback functions for composing or decomposing values (i.e. N-to-1 and 1-to-N value conversion).
Differential Revision: https://reviews.llvm.org/D85133
-
Ehsan Toosi authored
In this PR, the users of BufferPlacement can configure BufferAssignmentTypeConverter. These new configurations give the user more freedom in converting function signatures and in the return- and call-operation conversions. These are the new features:
- Accepting callback functions for decomposing types (i.e. 1-to-N type conversion, such as unpacking tuple types).
- Defining ResultConversionKind for specifying whether a function result with a certain type should be appended to the function arguments list or kept as a function result. (Usage: converter.setResultConversionKind<MemRefType>(AppendToArgumentList))
- Accepting callback functions for composing or decomposing values (i.e. N-to-1 and 1-to-N value conversion).
Differential Revision: https://reviews.llvm.org/D85133
-
- Aug 28, 2020
-
-
Benjamin Kramer authored
-
Hanhan Wang authored
The tensor_reshape op was fusible only in the collapsing case. Now we propagate the op to all the operands, so there is a further chance to fuse it with a generic op. The preconditions are: 1) The producer is not an indexed_generic op. 2) All the shapes of the operands are the same. 3) All the indexing maps are identity. 4) All the loops are parallel loops. 5) The producer has a single user. It is possible to fuse the ops even if the producer is an indexed_generic op, since we can still compute the original indices. E.g., if the reshape op collapses d0 and d1, we can use DimOp to get the width of d1 and calculate the index `d0 * width + d1`, then replace all the uses with it. However, this pattern is not implemented in the patch. Reviewed By: mravishankar Differential Revision: https://reviews.llvm.org/D86314
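As a quick worked example of that index recomputation: if the reshape collapses d0 (size 4) and d1 (size 5, the "width") into a single dimension of size 20, the collapsed index 13 corresponds to d0 = 13 floordiv 5 = 2 and d1 = 13 mod 5 = 3, and indeed 2 * 5 + 3 = 13.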
-
- Aug 26, 2020
-
-
River Riddle authored
The PDL Interpreter dialect provides a lower level abstraction compared to the PDL dialect, and is targeted towards low level optimization and interpreter code generation. The dialect operations encapsulate low-level pattern match and rewrite "primitives", such as navigating the IR (Operation::getOperand), creating new operations (OpBuilder::create), etc. Many of the operations within this dialect also fuse branching control flow with some form of a predicate comparison operation. This type of fusion reduces the amount of work that an interpreter must do when executing. An example of this representation is shown below:
```mlir
// The following high level PDL pattern:
pdl.pattern : benefit(1) {
  %resultType = pdl.type
  %inputOperand = pdl.input
  %root, %results = pdl.operation "foo.op"(%inputOperand) -> %resultType
  pdl.rewrite %root {
    pdl.replace %root with (%inputOperand)
  }
}

// May be represented in the interpreter dialect as follows:
module {
  func @matcher(%arg0: !pdl.operation) {
    pdl_interp.check_operation_name of %arg0 is "foo.op" -> ^bb2, ^bb1
  ^bb1:
    pdl_interp.return
  ^bb2:
    pdl_interp.check_operand_count of %arg0 is 1 -> ^bb3, ^bb1
  ^bb3:
    pdl_interp.check_result_count of %arg0 is 1 -> ^bb4, ^bb1
  ^bb4:
    %0 = pdl_interp.get_operand 0 of %arg0
    pdl_interp.is_not_null %0 : !pdl.value -> ^bb5, ^bb1
  ^bb5:
    %1 = pdl_interp.get_result 0 of %arg0
    pdl_interp.is_not_null %1 : !pdl.value -> ^bb6, ^bb1
  ^bb6:
    pdl_interp.record_match @rewriters::@rewriter(%0, %arg0 : !pdl.value, !pdl.operation) : benefit(1), loc([%arg0]), root("foo.op") -> ^bb1
  }

  module @rewriters {
    func @rewriter(%arg0: !pdl.value, %arg1: !pdl.operation) {
      pdl_interp.replace %arg1 with(%arg0)
      pdl_interp.return
    }
  }
}
```
Differential Revision: https://reviews.llvm.org/D84579
-
- Aug 19, 2020
-
-
Benjamin Kramer authored
-
Mehdi Amini authored
This changes the behavior of constructing MLIRContext to no longer load globally registered dialects on construction. Instead Dialects are only loaded explicitly on demand:
- the Parser lazily loads Dialects in the context as it encounters them during parsing. This is the only purpose of registering dialects without loading them in the context.
- Passes are expected to declare the dialects they will create entities from (Operations, Attributes, or Types), and the PassManager loads Dialects into the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only needs to load the dialect for the IR it will emit, and the optimizer is self-contained and loads the required Dialects. For example in the Toy tutorial, the compiler only needs to load the Toy dialect in the Context; all the others (linalg, affine, std, LLVM, ...) are automatically loaded depending on the optimization pipeline enabled.

To adjust to this change, stop using the existing dialect registration: the global registry will be removed soon.

1) For passes, you need to override the method:
```
virtual void getDependentDialects(DialectRegistry &registry) const {}
```
and register on the provided registry any dialect that this pass can produce. Passes defined in TableGen can provide this list in the dependentDialects list field.

2) For dialects, on construction you can register dependent dialects using the provided MLIRContext: `context.getOrLoadDialect<DialectName>()`. This is useful if a dialect may canonicalize or have interfaces involving another dialect.

3) For loading IR, dialects that can be in the input file must be explicitly registered with the context. `MlirOptMain()` takes an explicit registry for this purpose. See how the standalone-opt.cpp example is set up:
```
mlir::DialectRegistry registry;
registry.insert<mlir::standalone::StandaloneDialect>();
registry.insert<mlir::StandardOpsDialect>();
```
Only operations from these two dialects can be in the input file. To include all of the dialects in MLIR Core, you can populate the registry this way: `mlir::registerAllDialects(registry);`

4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded in the context before emitting the IR: `context.getOrLoadDialect<ToyDialect>()`

Differential Revision: https://reviews.llvm.org/D85622
-
Mehdi Amini authored
This reverts commit d14cf457. The build is broken with GCC-5.
-
Mehdi Amini authored
This changes the behavior of constructing MLIRContext to no longer load globally registered dialects on construction. Instead Dialects are only loaded explicitly on demand:
- the Parser lazily loads Dialects in the context as it encounters them during parsing. This is the only purpose of registering dialects without loading them in the context.
- Passes are expected to declare the dialects they will create entities from (Operations, Attributes, or Types), and the PassManager loads Dialects into the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only needs to load the dialect for the IR it will emit, and the optimizer is self-contained and loads the required Dialects. For example in the Toy tutorial, the compiler only needs to load the Toy dialect in the Context; all the others (linalg, affine, std, LLVM, ...) are automatically loaded depending on the optimization pipeline enabled.

To adjust to this change, stop using the existing dialect registration: the global registry will be removed soon.

1) For passes, you need to override the method:
```
virtual void getDependentDialects(DialectRegistry &registry) const {}
```
and register on the provided registry any dialect that this pass can produce. Passes defined in TableGen can provide this list in the dependentDialects list field.

2) For dialects, on construction you can register dependent dialects using the provided MLIRContext: `context.getOrLoadDialect<DialectName>()`. This is useful if a dialect may canonicalize or have interfaces involving another dialect.

3) For loading IR, dialects that can be in the input file must be explicitly registered with the context. `MlirOptMain()` takes an explicit registry for this purpose. See how the standalone-opt.cpp example is set up:
```
mlir::DialectRegistry registry;
registry.insert<mlir::standalone::StandaloneDialect>();
registry.insert<mlir::StandardOpsDialect>();
```
Only operations from these two dialects can be in the input file. To include all of the dialects in MLIR Core, you can populate the registry this way: `mlir::registerAllDialects(registry);`

4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded in the context before emitting the IR: `context.getOrLoadDialect<ToyDialect>()`

Differential Revision: https://reviews.llvm.org/D85622
-
Mehdi Amini authored
This reverts commit e1de2b75. Broke a build bot.
-
- Aug 18, 2020
-
-
Mehdi Amini authored
This changes the behavior of constructing MLIRContext to no longer load globally registered dialects on construction. Instead Dialects are only loaded explicitly on demand:
- the Parser lazily loads Dialects in the context as it encounters them during parsing. This is the only purpose of registering dialects without loading them in the context.
- Passes are expected to declare the dialects they will create entities from (Operations, Attributes, or Types), and the PassManager loads Dialects into the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only needs to load the dialect for the IR it will emit, and the optimizer is self-contained and loads the required Dialects. For example in the Toy tutorial, the compiler only needs to load the Toy dialect in the Context; all the others (linalg, affine, std, LLVM, ...) are automatically loaded depending on the optimization pipeline enabled.

To adjust to this change, stop using the existing dialect registration: the global registry will be removed soon.

1) For passes, you need to override the method:
```
virtual void getDependentDialects(DialectRegistry &registry) const {}
```
and register on the provided registry any dialect that this pass can produce. Passes defined in TableGen can provide this list in the dependentDialects list field.

2) For dialects, on construction you can register dependent dialects using the provided MLIRContext: `context.getOrLoadDialect<DialectName>()`. This is useful if a dialect may canonicalize or have interfaces involving another dialect.

3) For loading IR, dialects that can be in the input file must be explicitly registered with the context. `MlirOptMain()` takes an explicit registry for this purpose. See how the standalone-opt.cpp example is set up:
```
mlir::DialectRegistry registry;
mlir::registerDialect<mlir::standalone::StandaloneDialect>();
mlir::registerDialect<mlir::StandardOpsDialect>();
```
Only operations from these two dialects can be in the input file. To include all of the dialects in MLIR Core, you can populate the registry this way: `mlir::registerAllDialects(registry);`

4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded in the context before emitting the IR: `context.getOrLoadDialect<ToyDialect>()`
-
MaheshRavishankar authored
LinalgDistribution options to allow more general distributions. The signature of the callback is changed to pass in the ranges for all the parallel loops and to expect a vector with the Value to use for the processor id and number of processors for each of the parallel loops. Differential Revision: https://reviews.llvm.org/D86095
-
MaheshRavishankar authored
When the operand to the linalg.tensor_reshape op is a splat constant, the result can be replaced with a splat constant of the same value but different type. Differential Revision: https://reviews.llvm.org/D86117
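A hedged sketch of the fold (the reassociation map and the shapes are illustrative):
```mlir
// Before: a reshape of a splat constant.
%cst = constant dense<1.5> : tensor<4x4xf32>
%0 = linalg.tensor_reshape %cst [affine_map<(d0, d1) -> (d0, d1)>]
    : tensor<4x4xf32> into tensor<16xf32>

// After the canonicalization: a splat constant of the result type.
%cst_folded = constant dense<1.5> : tensor<16xf32>
```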
-
- Aug 15, 2020
-
-
Mehdi Amini authored
This reverts commit 20563933. The build is broken on a few bots.
-
Mehdi Amini authored
This changes the behavior of constructing MLIRContext to no longer load globally registered dialects on construction. Instead Dialects are only loaded explicitly on demand:
- the Parser lazily loads Dialects in the context as it encounters them during parsing. This is the only purpose of registering dialects without loading them in the context.
- Passes are expected to declare the dialects they will create entities from (Operations, Attributes, or Types), and the PassManager loads Dialects into the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only needs to load the dialect for the IR it will emit, and the optimizer is self-contained and loads the required Dialects. For example in the Toy tutorial, the compiler only needs to load the Toy dialect in the Context; all the others (linalg, affine, std, LLVM, ...) are automatically loaded depending on the optimization pipeline enabled. Differential Revision: https://reviews.llvm.org/D85622
-
Mehdi Amini authored
This was landed by accident; it will be relanded with the comments from the reviews addressed. Also revert the dependent build fixes.
-
- Aug 14, 2020
-
-
Mehdi Amini authored
This changes the behavior of constructing MLIRContext to no longer load globally registered dialects on construction. Instead Dialects are only loaded explicitly on demand:
- the Parser lazily loads Dialects in the context as it encounters them during parsing. This is the only purpose of registering dialects without loading them in the context.
- Passes are expected to declare the dialects they will create entities from (Operations, Attributes, or Types), and the PassManager loads Dialects into the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only needs to load the dialect for the IR it will emit, and the optimizer is self-contained and loads the required Dialects. For example in the Toy tutorial, the compiler only needs to load the Toy dialect in the Context; all the others (linalg, affine, std, LLVM, ...) are automatically loaded depending on the optimization pipeline enabled.
-
- Aug 12, 2020
-
-
Valentin Clement authored
Extra semi-colons cause a bunch of warnings with GCC 9.2.0:
```
[1354/1516] Building CXX object tools/mlir/lib/Dialect/Linalg/IR/CMakeFiles/obj.MLIRLinalgOps.dir/LinalgOps.cpp.o
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1306:35: warning: extra ';' [-Wpedantic]
 1306 |   CANONICALIZERS_AND_FOLDERS(ConvOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1307:41: warning: extra ';' [-Wpedantic]
 1307 |   CANONICALIZERS_AND_FOLDERS(PoolingMaxOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1308:41: warning: extra ';' [-Wpedantic]
 1308 |   CANONICALIZERS_AND_FOLDERS(PoolingMinOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1309:41: warning: extra ';' [-Wpedantic]
 1309 |   CANONICALIZERS_AND_FOLDERS(PoolingSumOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1310:35: warning: extra ';' [-Wpedantic]
 1310 |   CANONICALIZERS_AND_FOLDERS(CopyOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1311:35: warning: extra ';' [-Wpedantic]
 1311 |   CANONICALIZERS_AND_FOLDERS(FillOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1312:38: warning: extra ';' [-Wpedantic]
 1312 |   CANONICALIZERS_AND_FOLDERS(GenericOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1313:45: warning: extra ';' [-Wpedantic]
 1313 |   CANONICALIZERS_AND_FOLDERS(IndexedGenericOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1318:42: warning: extra ';' [-Wpedantic]
 1318 |   CANONICALIZERS_AND_FOLDERS(BatchMatmulOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1319:34: warning: extra ';' [-Wpedantic]
 1319 |   CANONICALIZERS_AND_FOLDERS(DotOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1320:37: warning: extra ';' [-Wpedantic]
 1320 |   CANONICALIZERS_AND_FOLDERS(MatmulOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1321:37: warning: extra ';' [-Wpedantic]
 1321 |   CANONICALIZERS_AND_FOLDERS(MatvecOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1322:36: warning: extra ';' [-Wpedantic]
 1322 |   CANONICALIZERS_AND_FOLDERS(ConvWOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1323:38: warning: extra ';' [-Wpedantic]
 1323 |   CANONICALIZERS_AND_FOLDERS(ConvNWCOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1324:38: warning: extra ';' [-Wpedantic]
 1324 |   CANONICALIZERS_AND_FOLDERS(ConvNCWOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1325:37: warning: extra ';' [-Wpedantic]
 1325 |   CANONICALIZERS_AND_FOLDERS(ConvHWOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1326:39: warning: extra ';' [-Wpedantic]
 1326 |   CANONICALIZERS_AND_FOLDERS(ConvNHWCOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1327:39: warning: extra ';' [-Wpedantic]
 1327 |   CANONICALIZERS_AND_FOLDERS(ConvNCHWOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1328:38: warning: extra ';' [-Wpedantic]
 1328 |   CANONICALIZERS_AND_FOLDERS(ConvDHWOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1329:40: warning: extra ';' [-Wpedantic]
 1329 |   CANONICALIZERS_AND_FOLDERS(ConvNDHWCOp);
/home/4vn/versioning/llvm-project/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1330:40: warning: extra ';' [-Wpedantic]
 1330 |   CANONICALIZERS_AND_FOLDERS(ConvNCDHWOp);
```
Reviewed By: mehdi_amini, rriddle Differential Revision: https://reviews.llvm.org/D85766
-
- Aug 10, 2020
-
-
MaheshRavishankar authored
Linalg to processors. This change adds infrastructure to distribute the loops generated in Linalg to processors at the time of generation. This addresses the use case where the loops are instantiated just to distribute them. The option to distribute is added to TilingOptions for now and will allow specifying the distribution as a transformation option, just like tiling and promotion are specified as options. Differential Revision: https://reviews.llvm.org/D85147
-
- Aug 07, 2020
-
-
River Riddle authored
This is in preparation for removing the use of "kinds" within attributes and types in MLIR. Differential Revision: https://reviews.llvm.org/D85475
-
Nicolas Vasilache authored
This revision adds a folding pattern to replace affine.min ops by the actual min value, when it can be determined statically from the strides and bounds of the enclosing scf loop. This matches the type of expressions that Linalg produces during tiling and simplifies boundary checks. For now Linalg depends on both Affine and SCF but they do not depend on each other, so the pattern is added there; in the future this will move to a more appropriate place when it is determined. The canonicalization of AffineMinOp operations in the context of enclosing scf.for and scf.parallel proceeds by:
1. building an affine map where uses of the induction variable of a loop are replaced by `%lb + %step * floordiv(%iv - %lb, %step)` expressions,
2. checking if any of the results of this affine map divides all the other results (in which case it is also guaranteed to be the min), and
3. replacing the AffineMinOp by the result of (2).
The algorithm is functional in simple parametric tiling cases by using semi-affine maps. However, simplifications of such semi-affine maps are not yet available, so the canonicalization does not yet succeed in those cases. Differential Revision: https://reviews.llvm.org/D82009
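A small sketch of the kind of expression this targets, with bounds chosen so that the fold applies (values are illustrative):
```mlir
// The tile size 4 evenly divides the trip count of 16, so the affine.min is
// always 4 and can be replaced by that constant.
%c0 = constant 0 : index
%c4 = constant 4 : index
%c16 = constant 16 : index
scf.for %iv = %c0 to %c16 step %c4 {
  %min = affine.min affine_map<(d0) -> (4, -d0 + 16)>(%iv)
  // ... use %min as the bounded tile size ...
}
```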
-
Mehdi Amini authored
This patch moves the registration to a method in the MLIRContext: getOrCreateDialect<ConcreteDialect>(). This method requires the dialect to provide a static getDialectNamespace() and store a TypeID on the Dialect itself, which allows lazily creating a dialect when it is not yet loaded in the context. As a side effect, it means that duplicated registration of the same dialect is no longer an issue. To limit the boilerplate, TableGen dialect generation is modified to emit the constructor entirely and to invoke separately an "init()" method that the user implements. Differential Revision: https://reviews.llvm.org/D85495
-
- Aug 06, 2020
-
-
Nicolas Vasilache authored
When any of the memrefs in a structured linalg op has a zero dimension, it becomes dead. This is consistent with the fact that linalg ops deduce their loop bounds from their operands. Note however that this is not the case for the `tensor<0xelt_type>` which is a special convention that must be lowered away into either `memref<elt_type>` or just `elt_type` before this canonicalization can kick in. Differential Revision: https://reviews.llvm.org/D85413
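An illustrative case, written with the ins/outs form documented in the September entries above (the assembly form at the time of this commit differed; shapes are made up):
```mlir
// %a has a static dimension of size 0, so the loop bounds deduced from the
// operands give a zero trip count and the op can be erased.
linalg.matmul ins(%a, %b : memref<0x4xf32>, memref<4x8xf32>)
             outs(%c : memref<0x8xf32>)
```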
-