- Feb 12, 2021
-
-
Stephan Herhut authored
This does not split transformations yet. Those will be done as future cleanups. Differential Revision: https://reviews.llvm.org/D96272
-
- Feb 11, 2021
-
-
Nicolas Vasilache authored
The AffineMap in the MemRef inferred by SubViewOp may have uncompressed symbols which result in type mismatch on otherwise unused symbols. Make the computation of the AffineMap compress those unused symbols which results in better canonical types. Additionally, improve the error message to report which inferred type was expected. Differential Revision: https://reviews.llvm.org/D96551
-
Stella Stamenova authored
Multi-configuration generators (such as Visual Studio and Xcode) allow the specification of a build flavor at build time instead of config time, so the lit configuration files need to support that - and they do for the most part. There are several places that had one of two issues (or both!):

1) Paths had %(build_mode)s set up, but it was then never configured, resulting in values that would not work correctly, e.g. D:/llvm-build/%(build_mode)s/bin/dsymutil.exe

2) Paths did not have %(build_mode)s set up, but instead contained $(Configuration) (which is the value for Visual Studio at configuration time; for Xcode they would have had the equivalent), e.g. "D:/llvm-build/$(Configuration)/lib".

This seems to indicate that we still have a lot of fragility in the configurations, but also that a number of these paths are never used (at least on Windows), since the errors appear to have been there a while. This patch fixes the configurations and has been tested with Ninja and Visual Studio to generate the correct paths. We should consider removing some of these settings altogether. Reviewed By: JDevlieghere, mehdi_amini Differential Revision: https://reviews.llvm.org/D96427
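The first failure mode is ordinary Python %-interpolation that never runs; a minimal sketch (the paths are illustrative, not taken from the actual lit configs):

```python
# Lit configuration files build paths with old-style %-interpolation.
template = "D:/llvm-build/%(build_mode)s/bin/dsymutil.exe"

# If the template is never interpolated, the literal placeholder leaks
# into the configured path, which can never resolve on disk:
broken = template
assert "%(build_mode)s" in broken

# Interpolating with the flavor chosen at build time yields a real path:
fixed = template % {"build_mode": "Release"}
print(fixed)  # D:/llvm-build/Release/bin/dsymutil.exe
```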
-
Hanhan Wang authored
The dimension order of a filter in TensorFlow is [filter_height, filter_width, in_channels, out_channels], which is different from the current definition; the current definition follows the TOSA spec. Add TF-version conv ops to .tc so we do not have to insert a transpose op around a conv op. Reviewed By: antiagainst Differential Revision: https://reviews.llvm.org/D96038
-
Aart Bik authored
Rationale: BuiltinTypes.cpp observed overflow when computing the size of tensor<100x200x300x400x500x600x700x800xf32>. Reviewed By: stellaraccident Differential Revision: https://reviews.llvm.org/D96475
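The element count alone makes the overflow plain; a quick check (Python integers are arbitrary-precision, so they show what a signed 64-bit counter cannot hold):

```python
dims = [100, 200, 300, 400, 500, 600, 700, 800]

num_elements = 1
for d in dims:
    num_elements *= d

print(num_elements)              # 403200000000000000000
# A signed 64-bit integer tops out at 2**63 - 1 (about 9.2e18),
# so even the element count, before multiplying by the f32 byte
# width, does not fit:
print(num_elements > 2**63 - 1)  # True
```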
-
Mehdi Amini authored
The CMake changes in 2aa1af9b to make it possible to build MLIR as a standalone project unfortunately disabled all unit tests in the regular in-tree build.
-
Rob Suderman authored
Added support for broadcasting size-1 dimensions for TOSA elementwise operations. Differential Revision: https://reviews.llvm.org/D96190
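Size-1 broadcasting here follows the familiar rule where a 1-extent dimension stretches to match the other operand; a hedged pure-Python sketch of the rank-matched case (illustrative only, not the TOSA implementation):

```python
def broadcast_shape(lhs, rhs):
    # Two dimensions are compatible when they are equal or one of them
    # is 1; a size-1 dimension stretches to the other extent.
    out = []
    for a, b in zip(lhs, rhs):
        if a == b or b == 1:
            out.append(a)
        elif a == 1:
            out.append(b)
        else:
            raise ValueError(f"incompatible dims {a} and {b}")
    return tuple(out)

print(broadcast_shape((3, 1, 5), (1, 4, 5)))  # (3, 4, 5)
```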
-
- Feb 10, 2021
-
-
Jing Pu authored
Make the type constraint consistent with other shape dialect operations. Reviewed By: jpienaar Differential Revision: https://reviews.llvm.org/D96377
-
Aart Bik authored
This revision connects the generated sparse code with an actual sparse storage scheme, which can be initialized from a test file. Lacking a first-class-citizen SparseTensor type (with buffer), the storage is hidden behind an opaque pointer with some "glue" to bring the pointer back to tensor land. Rather than generating sparse setup code for each different annotated tensor (viz. the "pack" methods in TACO), a single "one-size-fits-all" implementation has been added to the runtime support library. Many details and abstractions need to be refined in the future, but this revision allows full end-to-end integration testing and performance benchmarking (with, on one end, an annotated Linalg op and, on the other end, a JIT/AOT executable). Reviewed By: nicolasvasilache, bixia Differential Revision: https://reviews.llvm.org/D95847
-
Nicolas Vasilache authored
This revision fixes the indexing logic into the packed tensor that results from hoisting padding. Previously, the index was incorrectly set to the loop induction variable when in fact we need to compute the iteration count (i.e. `(iv - lb).ceilDiv(step)`). Differential Revision: https://reviews.llvm.org/D96417
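The iteration-count formula can be checked with a small sketch (the loop bounds below are made up for illustration):

```python
def ceil_div(a, b):
    # Ceiling division on integers, matching AffineExpr's ceilDiv.
    return -(-a // b)

def packed_index(iv, lb, step):
    # The packed tensor is indexed by how many steps the loop has taken
    # from its lower bound, not by the induction variable itself.
    return ceil_div(iv - lb, step)

# For a loop "for iv = 2 to 14 step 3", iv visits 2, 5, 8, 11:
print([packed_index(iv, lb=2, step=3) for iv in (2, 5, 8, 11)])  # [0, 1, 2, 3]
```

Using `iv` directly would index positions 2, 5, 8, 11 of a 4-element packed tensor, which is exactly the out-of-bounds bug the formula avoids.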
-
Nicolas Vasilache authored
The new pattern is exercised from the TestLinalgTransforms pass. Differential Revision: https://reviews.llvm.org/D96410
-
Tres Popp authored
Previously broadcast was a binary op. Now it can support more inputs. This has been changed in such a way that, for now, this is an NFC for all broadcast operations that were previously legal. Differential Revision: https://reviews.llvm.org/D95777
-
Uday Bondhugula authored
Update affine.for loop unroll utility for iteration arguments support. Fix promoteIfSingleIteration as well. Fixes PR49084: https://bugs.llvm.org/show_bug.cgi?id=49084 Differential Revision: https://reviews.llvm.org/D96383
-
Andrew Pritchard authored
These are similar to maxnum and minnum, but they're defined to treat -0 as less than +0. This behavior can't be expressed using float comparisons and selects, since comparisons are defined to treat different-signed zeros as equal. So, the only way to communicate this behavior into LLVM IR without defining target-specific intrinsics is to add the corresponding ops. Reviewed By: mehdi_amini Differential Revision: https://reviews.llvm.org/D96373
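The zero-sign subtlety can be sketched in Python (a hand-written illustration of IEEE-754 minimum semantics, ignoring NaN handling; not the actual lowering):

```python
import math

def ieee_minimum(a, b):
    # IEEE-754 minimum treats -0.0 as less than +0.0.
    if a == 0.0 and b == 0.0:
        return a if math.copysign(1.0, a) < 0 else b
    return min(a, b)

# A plain compare-and-select cannot tell the zeros apart, because
# -0.0 == 0.0 compares true:
print(-0.0 == 0.0)                                  # True
print(math.copysign(1.0, min(0.0, -0.0)))           # 1.0 (sign lost)
print(math.copysign(1.0, ieee_minimum(0.0, -0.0)))  # -1.0 (sign preserved)
```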
-
Andrew Pritchard authored
Previously it reported that an op had side effects precisely when it declared that it didn't have any. This had the undesirable result that canonicalization would always delete any intrinsic call that did memory stores and returned void. Reviewed By: ftynse, mehdi_amini Differential Revision: https://reviews.llvm.org/D96369
-
- Feb 09, 2021
-
-
River Riddle authored
This allows for referencing nearly every component of an operation from within a custom directive. It also fixes a bug with the current type_ref implementation (PR48478). Differential Revision: https://reviews.llvm.org/D96189
-
River Riddle authored
This revision adds a new `AliasAnalysis` class that represents the main alias analysis interface in MLIR. The purpose of this class is not to hold the aliasing logic itself, but to provide an interface into various different alias analysis implementations. As it evolves this should allow users to plug in specialized alias analysis implementations for their own needs, and have them immediately usable by other analyses and transformations. This revision also adds an initial, simple generic alias analysis, LocalAliasAnalysis, that provides support for performing stateless local alias queries between values. This class is similar in scope to LLVM's BasicAA. Differential Revision: https://reviews.llvm.org/D92343
-
George authored
I knew I would miss one... Reviewed By: stellaraccident Differential Revision: https://reviews.llvm.org/D96321
-
River Riddle authored
These properties were useful for a few things before traits had a better integration story, but don't really carry their weight well these days. Most of these properties are already checked via traits in most of the code. It is better to align the system around traits, and improve the performance/cost of traits in general. Differential Revision: https://reviews.llvm.org/D96088
-
Weiwei Li authored
Co-authored-by: Alan Liu <alanliu.yf@gmail.com> Reviewed By: antiagainst Differential Revision: https://reviews.llvm.org/D96169
-
George authored
Replace MlirDialectRegistrationHooks with MlirDialectHandle, which under-the-hood is an opaque pointer to MlirDialectRegistrationHooks. Then we expose the functionality previously directly on MlirDialectRegistrationHooks, as functions which take the opaque MlirDialectHandle struct. This makes the actual structure of the registration hooks an implementation detail, and happens to avoid this issue: https://llvm.discourse.group/t/strange-swift-issues-with-dialect-registration-hooks/2759/3 Reviewed By: stellaraccident Differential Revision: https://reviews.llvm.org/D96229
-
Thomas Raoux authored
Differential Revision: https://reviews.llvm.org/D96314
-
Denys Shabalin authored
Reviewed By: ftynse Differential Revision: https://reviews.llvm.org/D96333
-
Lei Zhang authored
This commit defines linalg.depthwise_conv_2d_nhwc for depthwise 2-D convolution with NHWC input/output data format. This op right now only supports a channel multiplier of 1, which is the most common case. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D94966
-
Lei Zhang authored
Indexing maps for named ops can reference attributes so that we can synthesize the indexing map dynamically. This supports cases like strides for convolution ops. However, it does cause an issue: now the indexing_maps() function call is dependent on those attributes. Linalg ops inherit LinalgOpInterfaceTraits, which calls verifyStructuredOpInterface() to verify the interface. verifyStructuredOpInterface() further calls indexing_maps(). Note that trait verification is done before the op itself, where ODS generates the verification for those attributes. So we can have indexing_maps() referencing a non-existing or invalid attribute before the ODS-generated verification kicks in. There isn't a dependency handling mechanism for traits. This commit adds new interface methods to query whether an op hasDynamicIndexingMaps() and then performs verifyIndexingMapRequiredAttributes() in verifyStructuredOpInterface() to handle the dependency issue. Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D96297
-
George authored
Reviewed By: mehdi_amini Differential Revision: https://reviews.llvm.org/D96301
-
- Feb 08, 2021
-
-
Nicolas Vasilache authored
This revision fixes the fact that the padding transformation did not have enough information to set the proper type for the padding value. Additionally, the verifier for Yield in the presence of PadTensorOp is fixed to properly report incorrect number of results or operands. Previously, the error would be silently ignored which made the core issue difficult to debug. Differential Revision: https://reviews.llvm.org/D96264
-
Alex Zinenko authored
After the LLVM dialect types were ported to use built-in types, the parser kept supporting the old syntax for LLVM dialect types to produce built-in types for compatibility. Drop this support. Reviewed By: mehdi_amini Differential Revision: https://reviews.llvm.org/D96275
-
Vladislav Vinogradov authored
This will allow using `NativeOpTrait` and operations declared outside of the `mlir` namespace. Reviewed By: ftynse Differential Revision: https://reviews.llvm.org/D96128
-
- Feb 06, 2021
-
-
Tung D. Le authored
This patch fixes the following bug when calling --affine-loop-fusion.

Input program:

```mlir
func @should_not_fuse_since_top_level_non_affine_non_result_users(
    %in0 : memref<32xf32>, %in1 : memref<32xf32>) {
  %c0 = constant 0 : index
  %cst_0 = constant 0.000000e+00 : f32
  affine.for %d = 0 to 32 {
    %lhs = affine.load %in0[%d] : memref<32xf32>
    %rhs = affine.load %in1[%d] : memref<32xf32>
    %add = addf %lhs, %rhs : f32
    affine.store %add, %in0[%d] : memref<32xf32>
  }
  store %cst_0, %in0[%c0] : memref<32xf32>
  affine.for %d = 0 to 32 {
    %lhs = affine.load %in0[%d] : memref<32xf32>
    %rhs = affine.load %in1[%d] : memref<32xf32>
    %add = addf %lhs, %rhs : f32
    affine.store %add, %in0[%d] : memref<32xf32>
  }
  return
}
```

Calling --affine-loop-fusion produced incorrect output:

```mlir
func @should_not_fuse_since_top_level_non_affine_non_result_users(%arg0: memref<32xf32>, %arg1: memref<32xf32>) {
  %c0 = constant 0 : index
  %cst = constant 0.000000e+00 : f32
  store %cst, %arg0[%c0] : memref<32xf32>
  affine.for %arg2 = 0 to 32 {
    %0 = affine.load %arg0[%arg2] : memref<32xf32>
    %1 = affine.load %arg1[%arg2] : memref<32xf32>
    %2 = addf %0, %1 : f32
    affine.store %2, %arg0[%arg2] : memref<32xf32>
    %3 = affine.load %arg0[%arg2] : memref<32xf32>
    %4 = affine.load %arg1[%arg2] : memref<32xf32>
    %5 = addf %3, %4 : f32
    affine.store %5, %arg0[%arg2] : memref<32xf32>
  }
  return
}
```

This happened because, when analyzing the source and destination nodes, affine loop fusion ignored non-result ops sandwiched between them. In other words, the MemRefDependencyGraph in affine loop fusion ignored these non-result ops. This patch solves the issue by adding these non-result ops to the MemRefDependencyGraph. Reviewed By: bondhugula Differential Revision: https://reviews.llvm.org/D95668
-
- Feb 05, 2021
-
-
Lei Zhang authored
These patterns move vector.bitcast ops to be before insert ops or after extract ops where suitable. With them, bitcast will happen on smaller vectors and there are more chances to share extract/insert ops. Reviewed By: ThomasRaoux Differential Revision: https://reviews.llvm.org/D96040
-
Lei Zhang authored
Reviewed By: ThomasRaoux Differential Revision: https://reviews.llvm.org/D96041
-
Lei Zhang authored
This patch introduces a few more straightforward patterns to convert vector ops operating on 1-4 element vectors to their corresponding SPIR-V counterparts. This patch also enables converting vector<1xT> to T. Reviewed By: ThomasRaoux Differential Revision: https://reviews.llvm.org/D96042
-
Lei Zhang authored
This patch adds patterns that use vector.shape_cast to cast away leading 1-dimensions from a few vector operations. It exposes more canonical forms of vector.transfer_read, vector.transfer_write, vector.extract_strided_slice, and vector.insert_strided_slice. With this, we have more opportunities to cancel extract/insert ops or forward write/read ops. Reviewed By: ThomasRaoux Differential Revision: https://reviews.llvm.org/D95873
-
Alex Zinenko authored
Historically, Linalg To LLVM conversion subsumed numerous other conversions, including (affine) loop lowerings to CFG and conversions from the Standard and Vector dialects to the LLVM dialect. This was due to the insufficient support for partial conversions in the infrastructure that essentially required conversions that involve type change (in this case, !linalg.range to !llvm.struct) to be performed in a single conversion sweep. This is no longer the case so remove the subsumed conversions and run them as separate passes when necessary. Depends On D95317 Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D96008
-
Nicolas Vasilache authored
-
Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D96116
-
Nicolas Vasilache authored
-
Nicolas Vasilache authored
Reviewed By: nicolasvasilache Differential Revision: https://reviews.llvm.org/D96094
-
River Riddle authored
This makes ignoring a result explicit by the user, and helps to prevent accidental errors with dropped results. Marking LogicalResult as nodiscard was always the intention from the beginning, but got lost along the way. Differential Revision: https://reviews.llvm.org/D95841
-