- Apr 21, 2021
- Ahmed Taei authored
This will prevent fusion that spans all dims and generates a (d0, d1, ...) -> () reshape that isn't legal. Differential Revision: https://reviews.llvm.org/D100805
- Nico Weber authored
- Nico Weber authored
- Nico Weber authored
- Nico Weber authored
- thomasraoux authored
Break up the dependency between SCF ops and the substituteMin helper and make a more generic version of AffineMinSCFCanonicalization. This reduces dependencies between Linalg and SCF and will allow the logic to be used with other kinds of ops (like ID ops). Differential Revision: https://reviews.llvm.org/D100321
- Butygin authored
Previously, any terminator without the ReturnLike and BranchOpInterface traits (e.g. scf.condition) caused the pass to fail. Differential Revision: https://reviews.llvm.org/D100832
- Tobias Gysi authored
Instead of always running the region builder, check whether the generalized op has a region attached. If so, inline the existing region instead of calling the region builder. This change circumvents a problem with named operations that have a region builder taking captures, which the generalization pass does not know about. Differential Revision: https://reviews.llvm.org/D100880
- Amy Zhuang authored
Reviewed By: bondhugula Differential Revision: https://reviews.llvm.org/D100512
- Matthias Springer authored
The current implementation allows for TransferWriteOps with broadcasts that do not make sense. E.g., a broadcast could write a vector into a single (scalar) memory location, which is effectively the same as writing only the last element of the vector. Differential Revision: https://reviews.llvm.org/D100842
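As a rough plain-Python illustration (not the MLIR machinery itself) of why such a broadcast write reduces to storing only the vector's last element:

```python
# Sketch: element-by-element writes of a vector into one scalar
# memory cell; each write overwrites the previous one, so only the
# last element of the vector survives.
def write_vector_to_scalar(vector):
    cell = None  # a single (scalar) memory location
    for element in vector:
        cell = element
    return cell

print(write_vector_to_scalar([1, 2, 3, 4]))  # 4: the last element wins
```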
- Apr 20, 2021
- Mathieu Fehr authored
Currently, it is only possible to register an operation or a type when the TypeID is defined at compile time. The same applies to InterfaceMaps, which can only be defined with compile-time defined interfaces. With these changes, it is now possible to register types/operations with custom TypeIDs. This is necessary to define new operations/types at runtime. Differential Revision: https://reviews.llvm.org/D99084
- Javier Setoain authored
A couple of standard op examples that use an outdated syntax need an update. Differential Revision: https://reviews.llvm.org/D100840
- Butygin authored
[mlir] Pass AnalysisManager as an optional parameter to the analysis ctor, so it can request any other analysis as a dependency. Differential Revision: https://reviews.llvm.org/D100274
- thomasraoux authored
Differential Revision: https://reviews.llvm.org/D100814
- Hanhan Wang authored
std.xor ops on bool are lowered to spv.LogicalNotEqual. For Boolean values, xor and not-equal are the same thing. Reviewed By: antiagainst Differential Revision: https://reviews.llvm.org/D100817
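The equivalence can be checked exhaustively; a quick plain-Python illustration of the truth table (not the SPIR-V lowering itself):

```python
from itertools import product

# xor and not-equal agree on all four Boolean input combinations,
# which is what makes lowering std.xor on i1 to spv.LogicalNotEqual valid.
for a, b in product([False, True], repeat=2):
    assert (a ^ b) == (a != b)
print("xor and not-equal coincide on booleans")
```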
- Tobias Gysi authored
The patch extends the vectorization pass to lower linalg index operations to vector code. It allocates constant 1d vectors that enumerate the indices along the iteration dimensions and broadcasts/transposes these 1d vectors to the iteration space. Differential Revision: https://reviews.llvm.org/D100373
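A plain-Python sketch of the idea, with lists of lists standing in for vectors (names here are illustrative, not the pass's API):

```python
# Sketch: build a constant 1-d index vector per iteration dimension,
# then broadcast each one across the other dimension of a 2-d
# iteration space (rows x cols).
def broadcast_index_vectors(rows, cols):
    iota_rows = list(range(rows))  # enumerates indices along dim 0
    iota_cols = list(range(cols))  # enumerates indices along dim 1
    # dim-0 indices are constant across each row's columns ...
    idx0 = [[i] * cols for i in iota_rows]
    # ... and dim-1 indices repeat identically in every row.
    idx1 = [list(iota_cols) for _ in range(rows)]
    return idx0, idx1

idx0, idx1 = broadcast_index_vectors(2, 3)
print(idx0)  # [[0, 0, 0], [1, 1, 1]]
print(idx1)  # [[0, 1, 2], [0, 1, 2]]
```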
- Tobias Gysi authored
Test the vector to llvm lowering of index vectors with index element type. Differential Revision: https://reviews.llvm.org/D100827
- Matthias Springer authored
Add a new ProgressiveVectorToSCF pass that lowers vector transfer ops to SCF by gradually unpacking one dimension at a time. Unpacking stops at 1D, but can be configured to stop earlier, should the HW support (N>1)-d vectors. The current implementation cannot handle permutation maps, masks, tensor types and unrolling yet. These will be added in subsequent commits. Once features are on par with VectorToSCF, this implementation will replace VectorToSCF. Differential Revision: https://reviews.llvm.org/D100622
- Tres Popp authored
Some Math operations do not have an equivalent in LLVM. In these cases, allow a low-priority fallback of calling the libm functions. This provides functionality and is not a performant option. Differential Revision: https://reviews.llvm.org/D100367
- KareemErgawy-TomTom authored
This patch extends the control-flow cost-model for detensoring by implementing a forward-looking pass on block arguments that should be detensored. This makes sure that if a (to-be-detensored) block argument "escapes" its block through the terminator, then the successor arguments are also detensored. Reviewed By: silvas Differential Revision: https://reviews.llvm.org/D100457
- Tobias Gysi authored
The patch replaces the index operations in the body of fused producers and linearizes the indices after expansion. Differential Revision: https://reviews.llvm.org/D100479
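The linearization step can be sketched in plain Python (an illustrative helper assuming row-major strides, not the pass's actual code):

```python
# Sketch: recover the index of a collapsed source dimension from the
# indices of the dimensions it was expanded into (row-major strides).
def linearize(expanded_indices, expanded_shape):
    linear, stride = 0, 1
    for idx, size in zip(reversed(expanded_indices), reversed(expanded_shape)):
        linear += idx * stride
        stride *= size
    return linear

# A dimension of size 6 expanded into (2, 3): expanded index (1, 2)
# corresponds to original index 1 * 3 + 2 = 5.
print(linearize((1, 2), (2, 3)))  # 5
```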
- Tobias Gysi authored
Update the dimensions of the index operations to account for dropped dimensions and replace the index operations of dropped dimensions by zero. Differential Revision: https://reviews.llvm.org/D100395
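A small plain-Python sketch of the remapping rule (hypothetical helper, not the pass's code):

```python
# Sketch: when unit dimensions are dropped, an index op on a dropped
# dimension folds to the constant zero, and surviving dimensions are
# renumbered past the dropped ones.
def remap_index_dim(dim, dropped_dims):
    if dim in dropped_dims:
        return ("constant", 0)  # dropped unit dim: index is always zero
    # shift the dimension down by the number of dropped dims before it
    shift = sum(1 for d in dropped_dims if d < dim)
    return ("index", dim - shift)

print(remap_index_dim(1, {1}))  # ('constant', 0)
print(remap_index_dim(2, {1}))  # ('index', 1)
```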
- clementval authored
This patch adds the UnnamedAddr attribute for the GlobalOp in the LLVM dialect. The attribute is also translated to and from LLVM IR. This is meant to be used in a follow-up patch to lower OpenACC/OpenMP ops to calls to the kmp and tgt runtimes (D100678). Reviewed By: mehdi_amini Differential Revision: https://reviews.llvm.org/D100677
- Apr 19, 2021
- Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D100786
- Tobias Gysi authored
The patch enables the library call lowering for linalg operations that contain index operations. Differential Revision: https://reviews.llvm.org/D100537
- Alex Zinenko authored
Expose the debug flag as a readable and assignable property of a dedicated class instead of a write-only function. Actually test the fact of setting the flag. Move the test to a dedicated file; it has no relation to context_managers.py, where it was added. Arguably, it should be promoted from mlir.ir to the mlir module, but we are not re-exporting the latter, and this functionality is purposefully hidden, so it can stay in IR for now. Drop unnecessary export code. Refactor the C API and put Debug into a separate library; fix it to actually set the flag to the given value. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D100757
- Tobias Gysi authored
Instead of interchanging loops during the loop lowering, this pass performs the interchange by permuting the indexing maps. It also updates the iterator types and the index accesses in the body of the operation. Differential Revision: https://reviews.llvm.org/D100627
- Tres Popp authored
Differential Revision: https://reviews.llvm.org/D100289
- Apr 16, 2021
- Nicolas Vasilache authored
Fold scf.for iter_arg/result pairs that go through an incoming/outgoing tensor.cast op pair so as to pull the tensor.cast inside the scf.for:

```mlir
%0 = tensor.cast %t0 : tensor<32x1024xf32> to tensor<?x?xf32>
%1 = scf.for %i = %c0 to %c1024 step %c32 iter_args(%iter_t0 = %0) -> (tensor<?x?xf32>) {
  %2 = call @do(%iter_t0) : (tensor<?x?xf32>) -> tensor<?x?xf32>
  scf.yield %2 : tensor<?x?xf32>
}
%2 = tensor.cast %1 : tensor<?x?xf32> to tensor<32x1024xf32>
use_of(%2)
```

folds into:

```mlir
%0 = scf.for %arg2 = %c0 to %c1024 step %c32 iter_args(%arg3 = %arg0) -> (tensor<32x1024xf32>) {
  %2 = tensor.cast %arg3 : tensor<32x1024xf32> to tensor<?x?xf32>
  %3 = call @do(%2) : (tensor<?x?xf32>) -> tensor<?x?xf32>
  %4 = tensor.cast %3 : tensor<?x?xf32> to tensor<32x1024xf32>
  scf.yield %4 : tensor<32x1024xf32>
}
use_of(%0)
```

Differential Revision: https://reviews.llvm.org/D100661
- thomasraoux authored
Move the existing optimization for transfer ops on tensors to the folder and canonicalization. This handles the write-after-write and read-after-write cases and also adds the write-after-read case. Differential Revision: https://reviews.llvm.org/D100597
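A rough Python sketch of the folding logic, using a hypothetical tuple representation of transfer ops (not the actual pattern code):

```python
# Sketch: store-to-load style folding for transfer ops on tensors.
# A read following a write to the same position yields the written
# vector (read-after-write); a write over a previous write to the
# same position makes the earlier write dead (write-after-write).
def fold_transfers(ops):
    folded, last_write = [], {}
    for op in ops:
        kind, pos, _val = op
        if kind == "write":
            if pos in last_write:  # write-after-write: drop the dead write
                folded.remove(last_write[pos])
            last_write[pos] = op
            folded.append(op)
        else:
            if pos in last_write:  # read-after-write: forward the value
                continue           # the read folds away, no op emitted
            folded.append(op)
    return folded

ops = [("write", 0, "v0"), ("write", 0, "v1"), ("read", 0, None)]
print(fold_transfers(ops))  # [('write', 0, 'v1')]
```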
- Mats Petersson authored
The implementation supports the static schedule for Fortran do loops. This implements the dynamic variant of the same concept. Reviewed By: Meinersbur Differential Revision: https://reviews.llvm.org/D97393
- Javier Setoain authored
The ArmSVE dialect is behind the recent changes in how the Vector dialect interacts with backend vector dialects and the MLIR -> LLVM IR translation module. This patch cleans up ArmSVE initialization within Vector and removes the need for an LLVMArmSVE dialect. Reviewed By: ftynse Differential Revision: https://reviews.llvm.org/D100171
- Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D100643
- Frederik Gossen authored
Differential Revision: https://reviews.llvm.org/D100635
- Frederik Gossen authored
Differential Revision: https://reviews.llvm.org/D100636
- Nicolas Vasilache authored
When Linalg named ops support was added, captures were omitted from the body builder. This revision adds support for captures, which allows us to write FillOp in a more idiomatic fashion using the _linalg_ops_ext mixin support. This raises an issue in the generation of `_linalg_ops_gen.py`:

```python
@property
def result(self):
  return self.operation.results[0] if len(self.operation.results) > 1 else None
```

The condition should be `== 1`. This will be fixed in a separate commit. Differential Revision: https://reviews.llvm.org/D100363
- Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D100603
- Ahmed Taei authored
Without this, tile-and-pad will never terminate if padding fails. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D97720
- Prashant Kumar authored
This offers the ability to pass numpy arrays to the corresponding memref argument. Reviewed By: mehdi_amini, nicolasvasilache Differential Revision: https://reviews.llvm.org/D100077
- River Riddle authored
This allows for walking all nested locations of a given location, and is generally useful when processing locations. Differential Revision: https://reviews.llvm.org/D100437