- Mar 24, 2021
Lei Zhang authored
Init tensor operands also have indexing maps and generally follow the same constraints we expect for non-init-tensor operands. Differential Revision: https://reviews.llvm.org/D99115
Lei Zhang authored
This commit exposes an option on the pattern FoldWithProducerReshapeOpByExpansion to allow folding unit-dim reshapes. This gives callers more fine-grained control. Differential Revision: https://reviews.llvm.org/D99114
Lei Zhang authored
This identifies a pattern where the producer affine min/max op is bound to a dimension/symbol that is used as a standalone expression in the consumer affine op's map. In that case the producer affine min/max op can be merged into its consumer. For example, a pattern like the following:

```
%0 = affine.min affine_map<()[s0] -> (s0 + 16, s0 * 8)> ()[%sym1]
%1 = affine.min affine_map<(d0)[s0] -> (s0 + 4, d0)> (%0)[%sym2]
```

can be turned into:

```
%1 = affine.min affine_map<()[s0, s1] -> (s0 + 4, s1 + 16, s1 * 8)> ()[%sym2, %sym1]
```

Differential Revision: https://reviews.llvm.org/D99016
Lei Zhang authored
If there are multiple identical expressions in an affine min/max op's map, we can just keep one. Differential Revision: https://reviews.llvm.org/D99015
Lei Zhang authored
Until now, Linalg fusion only allows fusing producers whose operands all have permutation indexing maps. That makes it easier to deduce the subtensor/subview, but it is an unnecessary constraint: on the tiling side we already have more advanced logic to deduce the subranges even when an operand does not have a permutation indexing map, e.g., the input operand of convolution ops. This patch uses that tiling-side logic to deduce subranges for fusion. This enables fusing convolutions with their consumer ops when possible. Along the way, we now generate proper affine.min ops to guard against size boundaries if we cannot be certain the accesses won't be out of bounds. Differential Revision: https://reviews.llvm.org/D99014
Lei Zhang authored
This is a preparation step to reuse makeTiledShapes in tensor fusion. Along the way, did some lightweight cleanups. Differential Revision: https://reviews.llvm.org/D99013
Jacques Pienaar authored
This avoids some conversion overhead in TypeUniquer, observed on a model, when converting from ArrayRef to TypeRange. Differential Revision: https://reviews.llvm.org/D99300
Tobias Gysi authored
All linalg operations that have a region builder shall call it during op creation. Calling it during vectorization is obsolete. Differential Revision: https://reviews.llvm.org/D99168
Alex Zinenko authored
Index type is an integer type of target-specific bitwidth present in many MLIR operations (loops, memory accesses). Converting values of this type to fixed-size integers has always been problematic. Introduce a data layout entry to specify the bitwidth of `index` in a given layout scope, defaulting to 64 bits, which is a commonly used assumption, e.g., in constants. Port builtin-to-LLVM type conversion to use this data layout entry when converting the `index` type, and untie it from pointer size. This is particularly relevant for GPU targets. Keep a possibility to forcibly override the index type in lowerings.

Depends On D98525
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D98937
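As a sketch of the override hook mentioned at the end (assuming the present-day `LowerToLLVMOptions` API; header paths and names have moved over time and are assumptions here):

```
#include "mlir/Conversion/LLVMCommon/LoweringOptions.h"
#include "mlir/Conversion/LLVMCommon/TypeConverter.h"

using namespace mlir;

// Force `index` to lower to i32 regardless of what the data layout entry
// says, using the forcible-override escape hatch kept by this commit.
void configureConverter(MLIRContext *ctx) {
  LowerToLLVMOptions options(ctx);
  options.overrideIndexBitwidth(32);
  LLVMTypeConverter converter(ctx, options);
  // ... use `converter` when populating conversion patterns ...
}
```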
Alex Zinenko authored
Even if the layout specification is missing from an op that supports it, the op is still expected to provide meaningful responses to data layout queries. Forward them to the op instead of directly calling the default implementation.

Depends On D98524
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D98525
Alex Zinenko authored
This is useful for bit-packing types such as vectors and tuples as well as for exotic architectures that have non-8-bit bytes.

Depends On D98500
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D98524
Alex Zinenko authored
ModuleOp is a natural place to provide scoped data layout information. However, it is undesirable for ModuleOp to implement the entirety of DataLayoutOpInterface because that would require either pushing the interface inside the IR library instead of a separate library, or putting the default implementation of the interface as inline functions in headers, leading to binary bloat. Instead, ModuleOp accepts an arbitrary data layout spec attribute and has a dedicated hook to extract it, and DataLayout is modified to know about ModuleOp particularities.

Reviewed By: herhut, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D98500
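A minimal sketch of the query side (assuming the `DataLayout` helper class from the data layout interfaces library; the free function name here is hypothetical):

```
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Interfaces/DataLayoutInterfaces.h"

using namespace mlir;

// DataLayout knows how to extract the spec attribute from the enclosing
// ModuleOp (or any DataLayoutOpInterface op) and answer size queries.
uint64_t getIndexBitwidth(ModuleOp module) {
  DataLayout layout(module);
  return layout.getTypeSizeInBits(IndexType::get(module.getContext()));
}
```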
Nicolas Vasilache authored
Fix the BlockAndValueMapping update that was missing entries for scf.for op's blockIterArgs. Skip cloning subtensors of the padded tensor as the logic for these is separate. Add a filter to drop side-effecting ops. Tests are beefed up to verify the IR is sound in all hoisting configurations for 2-level 3-D tiled matmul. Differential Revision: https://reviews.llvm.org/D99255
Vladislav Vinogradov authored
Use the new `MemRefType::getMemorySpace` method with a generic Attribute in cases where there is no specific logic around the memory space.

Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D99154
Mehdi Amini authored
This mechanism makes it possible for a dialect to not register all its operations but still answer interface-based queries. This can be useful for dialects that are "open" or connected to an external system while still interoperating with the compiler. It also opens up the possibility of a more extensible compiler at runtime: the compiler does not need a pre-registration for each operation, and the dialect can inject behavior dynamically.

Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D93085
Rob Suderman authored
Tosa's argmax lowering is representable as a linalg.indexed_generic operation. Include the lowering to this operation for both integer and floating-point types. Differential Revision: https://reviews.llvm.org/D99137
- Mar 23, 2021
Rob Suderman authored
Lowers tosa's pad op to the linalg equivalent for floating-point, integer, and quantized values. Differential Revision: https://reviews.llvm.org/D98990
River Riddle authored
[mlir][Pattern] Add better support for using interfaces/traits to match root operations in rewrite patterns

To match an interface or trait, users currently have to use the `MatchAny` tag. This tag can be quite problematic for compile time for things like the canonicalizer, as the `MatchAny` patterns may get applied to *every* operation. This revision adds better support by bucketing interface/trait patterns based on which registered operations implement them. This means that moving forward we will only attempt to match these patterns to operations that have the interface registered. To simplify defining patterns that match traits and interfaces, two new utility classes have been added: OpTraitRewritePattern and OpInterfaceRewritePattern.

Differential Revision: https://reviews.llvm.org/D98986
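As a sketch of the new utilities (the pattern name and body here are hypothetical):

```
#include "mlir/IR/PatternMatch.h"
#include "mlir/Interfaces/ViewLikeInterface.h"

using namespace mlir;

// Matches any registered op implementing ViewLikeOpInterface, without the
// compile-time cost of a MatchAny pattern that is tried on every operation.
struct SimplifyViewLikeOp
    : public OpInterfaceRewritePattern<ViewLikeOpInterface> {
  using OpInterfaceRewritePattern<ViewLikeOpInterface>::OpInterfaceRewritePattern;

  LogicalResult matchAndRewrite(ViewLikeOpInterface op,
                                PatternRewriter &rewriter) const override {
    // ... rewrite logic goes here; fail if the pattern does not apply.
    return failure();
  }
};
```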
Chris Lattner authored
This provides a simplified way to implement 'matchAndRewrite' style canonicalization patterns for ops that don't need the full power of RewritePatterns. Using this style, you can implement a static method with a signature like:

```
LogicalResult AssertOp::canonicalize(AssertOp op, PatternRewriter &rewriter) {
  return success();
}
```

instead of defining RewritePattern subclasses. This also adopts the approach for a few canonicalization patterns in the std dialect to show how it works.

Differential Revision: https://reviews.llvm.org/D99143
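For a flavor of a real body under this scheme, here is a sketch approximating the std.assert canonicalization adopted in this commit (the accessor and header names follow the std dialect of that era and are assumptions):

```
#include "mlir/Dialect/StandardOps/IR/Ops.h"
#include "mlir/IR/Matchers.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

// Erase an assert whose condition is the constant 'true'; report failure
// otherwise so the rewrite driver knows nothing changed.
static LogicalResult canonicalizeAssert(AssertOp op,
                                        PatternRewriter &rewriter) {
  if (matchPattern(op.arg(), m_One())) {
    rewriter.eraseOp(op);
    return success();
  }
  return failure();
}
```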
Rob Suderman authored
Tile operations are generic operations with modified indexing. Updated the tosa-to-linalg lowerings to perform this lowering. Differential Revision: https://reviews.llvm.org/D99113
natashaknk authored
Adds lowerings for matmul and fully_connected. Only supports 2D tensors for inputs and weights, and 1D tensors for bias.

Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D99211
Alex Zinenko authored
Nicolas Vasilache authored
This revision introduces proper backward slice computation during the hoisting of PadTensorOp. This allows hoisting padding even across multiple levels of tiling. Such hoisting requires the proper handling of loop bounds that may depend on enclosing loop variables. Differential revision: https://reviews.llvm.org/D98965
Alex Zinenko authored
This is an assumption that is made in numerous places in the code, in particular in the code generated by mlir-tblgen for operand/result accessors in ops with attr-sized operand or result lists. Make sure to verify this assumption. Note that operation traits are verified before the custom op verifier runs, which can therefore expect the trait verifiers to have passed; but some traits may be verified before the AttrSizedOperand/ResultTrait and should not make such assumptions.

Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D99183
Frederik Gossen authored
This reverts commit 5f8acd4f.
Frederik Gossen authored
Differential Revision: https://reviews.llvm.org/D99156
Frederik Gossen authored
Differential Revision: https://reviews.llvm.org/D99159
Frederik Gossen authored
Differential Revision: https://reviews.llvm.org/D99153
Sean Silva authored
This assertion can fire in the case of different contexts as well, which is not difficult to do from Python bindings, for example.
Chris Lattner authored
This nicely aligns the naming with RewritePatternSet. This type isn't as widely used, but we keep a using declaration in place to help with downstream consumption of this change. Differential Revision: https://reviews.llvm.org/D99131
Mehdi Amini authored
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D99007
Chris Lattner authored
[PatternMatch] Big mechanical rename OwningRewritePatternList -> RewritePatternSet and insert -> add. NFC

This doesn't change APIs; this just cleans up the many in-tree uses of these names to use the new preferred names. We'll keep the old names around for a couple of weeks to help transitions.

Differential Revision: https://reviews.llvm.org/D99127
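In practice the mechanical change looks like this (a sketch; `MyPattern` is a placeholder):

```
// Before:
OwningRewritePatternList patterns(context);
patterns.insert<MyPattern>(context);

// After:
RewritePatternSet patterns(context);
patterns.add<MyPattern>(context);
```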
Chris Lattner authored
This maintains the old name to have minimal source impact on downstream code, and does not do the huge mechanical patch. I expect the huge mechanical patch to land sometime this week, but we can keep the old names around for a couple of weeks to reduce impact on downstream projects. Differential Revision: https://reviews.llvm.org/D99119
- Mar 22, 2021
Chris Lattner authored
This allows adding a C function pointer as a matchAndRewrite style pattern, which is a very common case. This adopts it in ExpandTanh to show how it reduces a level of nesting. We could allow C++ lambdas here, but that doesn't work as well with type inference in the common case: instead of `patterns.insert(convertTanhOp);` you would need to specify `patterns.insert<math::TanhOp>(convertTanhOp);`, which is boilerplate'y. Capturing state like this is very uncommon, so we choose to require clients to define their own structs and use the non-convenience method when they need to do so.

Differential Revision: https://reviews.llvm.org/D99039
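A sketch of the shape this enables (the body is elided, and the populate helper name is hypothetical):

```
#include "mlir/Dialect/Math/IR/Math.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

// A plain function with the matchAndRewrite signature; the matched op type
// is inferred from the first parameter when the pointer is inserted.
static LogicalResult convertTanhOp(math::TanhOp op,
                                   PatternRewriter &rewriter) {
  // ... expansion logic elided ...
  return failure();
}

void populateExpandTanhPattern(OwningRewritePatternList &patterns) {
  patterns.insert(convertTanhOp); // no template argument needed
}
```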
Rob Suderman authored
Multiply-shift requires wider compute types or CPU-specific code to avoid premature truncation; apply_shift fixes this issue. Also, Tosa's mul op supports different input/output types. Added a path that sign-extends input values to int32 values before multiplying. Differential Revision: https://reviews.llvm.org/D99011
Nicolas Vasilache authored
- Drop unnecessary occurrences of rewriter.eraseOp: dead linalg ops on tensors should be cleaned up by DCE.
- Reimplement the part of Linalg fusion on tensors that constructs the body and block arguments: the previous implementation had too much magic. Instead, this spells out all cases explicitly and asserts / introduces TODOs for incorrect cases. As a consequence, we can use the default traversal order for this pattern.

Differential Revision: https://reviews.llvm.org/D99070
Adrian Kuegel authored
GreedyPatternRewriteDriver was changed from bottom-up traversal to top-down traversal. Not all passes work yet with that traversal order change. To give some time for fixing, add an option that allows switching back to bottom-up traversal. Use this option in FusionOfTensorOpsPass, which fails otherwise. Differential Revision: https://reviews.llvm.org/D99059
- Mar 21, 2021
Chris Lattner authored
Chris Lattner authored
This addresses the following warning:

```
mlir/lib/Dialect/Shape/IR/Shape.cpp:573:26: warning: loop variable 'shape' is always a copy because the range of type '::mlir::Operation::operand_range' (aka 'mlir::OperandRange') does not return a reference [-Wrange-loop-analysis]
  for (const auto &shape : shapes()) {
                           ^
```
Chris Lattner authored
This updates the codebase to pass the context when creating an instance of OwningRewritePatternList, and starts removing extraneous MLIRContext parameters. There are many, many more to be removed. Differential Revision: https://reviews.llvm.org/D99028
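A sketch of the new idiom inside a pass or populate function (`MyPattern` is a placeholder):

```
#include "mlir/IR/PatternMatch.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

using namespace mlir;

// The pattern list now carries the context from construction, which is the
// first step toward dropping MLIRContext parameters from populate* helpers.
void runMyPatterns(Operation *op, MLIRContext *context) {
  OwningRewritePatternList patterns(context);
  // patterns.insert<MyPattern>(context); // MyPattern is a placeholder.
  (void)applyPatternsAndFoldGreedily(op, std::move(patterns));
}
```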