- Feb 12, 2021
-
-
Stephan Herhut authored
This does not split transformations yet; those will be done as future cleanups. Differential Revision: https://reviews.llvm.org/D96272
-
Alexander Belyaev authored
-
Alexander Belyaev authored
Differential Revision: https://reviews.llvm.org/D96579
-
Aart Bik authored
Rationale: This computation failed ASAN for the following input (integer overflow during 4032000000000000000 * 100): tensor<100x200x300x400x500x600x700x800xf32>

This change adds a simple overflow detection in debug mode (which we run more regularly than ASAN). Arguably this is an unrealistic tensor input, but in the context of sparse tensors, we may start to see cases like this.

Bug: https://bugs.llvm.org/show_bug.cgi?id=49136
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D96530
-
- Feb 11, 2021
-
-
Nicolas Vasilache authored
The AffineMap in the MemRef inferred by SubViewOp may have uncompressed symbols, which results in a type mismatch on otherwise unused symbols. Make the computation of the AffineMap compress those unused symbols, which results in better canonical types. Additionally, improve the error message to report which inferred type was expected. Differential Revision: https://reviews.llvm.org/D96551
-
Stella Stamenova authored
Multi-configuration generators (such as Visual Studio and Xcode) allow the specification of a build flavor at build time instead of config time, so the lit configuration files need to support that - and they do for the most part. There are several places that had one of two issues (or both!):

1) Paths had %(build_mode)s set up but then not configured, resulting in values that would not work correctly, e.g. D:/llvm-build/%(build_mode)s/bin/dsymutil.exe
2) Paths did not have %(build_mode)s set up, but instead contained $(Configuration) (which is the value for Visual Studio at configuration time; Xcode would have had the equivalent), e.g. "D:/llvm-build/$(Configuration)/lib".

This seems to indicate that we still have a lot of fragility in the configurations, but also that a number of these paths are never used (at least on Windows), since the errors appear to have been there a while. This patch fixes the configurations, and it has been tested with Ninja and Visual Studio to generate the correct paths. We should consider removing some of these settings altogether.

Reviewed By: JDevlieghere, mehdi_amini
Differential Revision: https://reviews.llvm.org/D96427
-
Nicolas Vasilache authored
Differential Revision: https://reviews.llvm.org/D96488
-
Alex Zinenko authored
ModuleTranslation contains multiple fields that keep track of the mappings between various MLIR and LLVM IR components. The original ModuleTranslation extension model was based on inheritance, with these fields being protected and thus accessible in ModuleTranslation and derived classes. The inheritance-based model doesn't scale to translation of more than one derived dialect and will be progressively replaced with a more flexible one based on dialect interfaces and a translation state that is separate from ModuleTranslation. This change prepares the replacement by making the mappings private and providing public methods to access them.

Depends On D96436
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D96437
-
Alex Zinenko authored
Historically, JitRunner has been registering all available dialects with the context and depending on them without real need. Make it take a registry that contains only the dialects expected in the input, and stop linking in all dialects. Reviewed By: mehdi_amini Differential Revision: https://reviews.llvm.org/D96436
-
Stephan Herhut authored
With the standard dialect being split up, the set of dialects that are used when converting to GPU is growing. This change modifies the SCFToGpu pass to allow all operations inside launch bodies. Differential Revision: https://reviews.llvm.org/D96480
-
Hanhan Wang authored
The dimension order of a filter in TensorFlow is [filter_height, filter_width, in_channels, out_channels], which is different from the current definition; the current definition follows the TOSA spec. Add TF-version conv ops to .tc so we do not have to insert a transpose op around a conv op. Reviewed By: antiagainst Differential Revision: https://reviews.llvm.org/D96038
-
Sanjoy Das authored
This should have gone in with a76761cf.
-
Sanjoy Das authored
- Remove leftover comment from de2568aa
- Fix a typo in a comment
-
Aart Bik authored
Rationale: BuiltinTypes.cpp observed overflow when computing size of tensor<100x200x300x400x500x600x700x800xf32>. Reviewed By: stellaraccident Differential Revision: https://reviews.llvm.org/D96475
-
Mehdi Amini authored
Differential Revision: https://reviews.llvm.org/D96474
-
Mehdi Amini authored
The CMake changes in 2aa1af9b to make it possible to build MLIR as a standalone project unfortunately disabled all unit tests from the regular in-tree build.
-
Rob Suderman authored
Added support for broadcasting size-1 dimensions for TOSA elementwise operations. Differential Revision: https://reviews.llvm.org/D96190
-
Sean Silva authored
Differential Revision: https://reviews.llvm.org/D96391
-
Sean Silva authored
After discussion, it seems like we want to go with "inherent/discardable". These seem to best capture the relationship with the op semantics and don't conflict with other terms. Please let me know your preferences. Some of the other contenders are:

```
"intrinsic" side | "annotation" side
-----------------+------------------
characteristic   | annotation
closed           | open
definitional     | advisory
essential        | discardable
expected         | unexpected
innate           | acquired
internal         | external
intrinsic        | extrinsic
known            | unknown
local            | global
native           | foreign
inherent         | acquired
```

Rationale:
- discardable: good. Discourages use for stable data.
- inherent: good.
- annotation: redundant and doesn't convey the difference.
- intrinsic: confusable with "compiler intrinsics".
- definitional: too much of a mouthful.
- extrinsic: too exotic a word and hard to say.
- acquired: doesn't convey the relationship to the semantics.
- internal/external: not immediately obvious: what is internal to what?
- innate: similar to intrinsic but worse.
- acquired: we don't typically think of an op as "acquiring" things.
- known/unknown: by whom?
- local/global: to what?
- native/foreign: to where?
- advisory: confusing distinction: is the attribute itself advisory, or is the information it provides advisory?
- essential: an intrinsic attribute need not be present.
- expected: same issue as essential.
- unexpected: by whom/what?
- closed/open: whether the set is open or closed doesn't seem essential to the attribute being intrinsic. Also, in theory an op can have an unbounded set of intrinsic attributes (e.g. `arg<N>` for func).
- characteristic: unless you have a math background, this probably doesn't make much sense.

Differential Revision: https://reviews.llvm.org/D96093
-
- Feb 10, 2021
-
-
Nicolas Vasilache authored
-
Nicolas Vasilache authored
-
Mehdi Amini authored
Fix the StridedMemRefType operator[] SFINAE so that the `int64_t` overload is correctly selected for non-container operands.
-
Jing Pu authored
Make the type constraint consistent with other shape dialect operations. Reviewed By: jpienaar Differential Revision: https://reviews.llvm.org/D96377
-
Aart Bik authored
This revision connects the generated sparse code with an actual sparse storage scheme, which can be initialized from a test file. Lacking a first-class-citizen SparseTensor type (with buffer), the storage is hidden behind an opaque pointer with some "glue" to bring the pointer back to tensor land. Rather than generating sparse setup code for each different annotated tensor (viz. the "pack" methods in TACO), a single "one-size-fits-all" implementation has been added to the runtime support library.

Many details and abstractions need to be refined in the future, but this revision allows full end-to-end integration testing and performance benchmarking (with, on one end, an annotated Linalg op and, on the other end, a JIT/AOT executable).

Reviewed By: nicolasvasilache, bixia
Differential Revision: https://reviews.llvm.org/D95847
-
Mehdi Amini authored
Reland 11f32a41 that was reverted in e49967fb after fixing the build. Differential Revision: https://reviews.llvm.org/D96192
-
Mehdi Amini authored
This reverts commit 11f32a41. The build is broken because this commit conflicts with the refactoring of the DialectRegistry APIs in the context. It'll reland shortly after fixing the API usage.
-
Mehdi Amini authored
Differential Revision: https://reviews.llvm.org/D96192
-
Nicolas Vasilache authored
This revision fixes the indexing logic into the packed tensor that results from hoisting padding. Previously, the index was incorrectly set to the loop induction variable when in fact we need to compute the iteration count (i.e. `(iv - lb).ceilDiv(step)`). Differential Revision: https://reviews.llvm.org/D96417
-
Nicolas Vasilache authored
The new pattern is exercised from the TestLinalgTransforms pass. Differential Revision: https://reviews.llvm.org/D96410
-
Alex Zinenko authored
MLIRContext allows its users direct access to the DialectRegistry it contains. While sometimes useful for registering additional dialects on an already existing context, this breaks encapsulation by essentially giving raw access to a part of the context's internal state. Remove this mutable access and instead provide a method to append a given DialectRegistry to the one already contained in the context. Also provide a shortcut mechanism to construct a context from an already existing registry, which seems to be a common use case in the wild. Keep read-only access to the registry contained in the context in case it needs to be copied or used for constructing another context.

With this change, DialectRegistry is no longer concerned with loading the dialects and deciding whether to invoke delayed interface registration. Loading is concentrated in the MLIRContext, and the functionality of the registry better reflects its name.

Depends On D96137
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D96331
-
Alex Zinenko authored
This introduces a mechanism to register interfaces for a dialect without making the dialect itself depend on the interface. The registration request happens on DialectRegistry and, if the dialect has not been loaded yet, the actual registration is delayed until the dialect is loaded. It requires DialectRegistry to become aware of the context that contains it, and the context to expose methods for querying whether a dialect is loaded.

This mechanism will enable a simple extension mechanism for dialects that can have interfaces defined outside of the dialect code. It is particularly helpful for, e.g., translation to LLVM IR, where we don't want the dialect itself to depend on LLVM IR libraries.

Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D96137
-
Tres Popp authored
Previously, broadcast was a binary op; now it can support more inputs. This has been changed in such a way that, for now, it is an NFC for all broadcast operations that were previously legal. Differential Revision: https://reviews.llvm.org/D95777
-
Uday Bondhugula authored
Fix build warnings from VectorTransforms.cpp.
-
Uday Bondhugula authored
Update the affine.for loop unroll utility to support iteration arguments. Fix promoteIfSingleIteration as well. Fixes PR49084: https://bugs.llvm.org/show_bug.cgi?id=49084 Differential Revision: https://reviews.llvm.org/D96383
-
Andrew Pritchard authored
These are similar to maxnum and minnum, but they're defined to treat -0 as less than +0. This behavior can't be expressed using float comparisons and selects, since comparisons are defined to treat different-signed zeros as equal. So, the only way to communicate this behavior into LLVM IR without defining target-specific intrinsics is to add the corresponding ops. Reviewed By: mehdi_amini Differential Revision: https://reviews.llvm.org/D96373
-
Andrew Pritchard authored
Previously it reported an op had side-effects iff it declared that it didn't have any side-effects. This had the undesirable result that canonicalization would always delete any intrinsic calls that did memory stores and returned void. Reviewed By: ftynse, mehdi_amini Differential Revision: https://reviews.llvm.org/D96369
-
Jing Pu authored
Reviewed By: jpienaar, silvas Differential Revision: https://reviews.llvm.org/D96358
-
- Feb 09, 2021
-
-
River Riddle authored
This allows for referencing nearly every component of an operation from within a custom directive. It also fixes a bug with the current type_ref implementation (PR48478). Differential Revision: https://reviews.llvm.org/D96189
-
River Riddle authored
This revision adds a new `AliasAnalysis` class that represents the main alias analysis interface in MLIR. The purpose of this class is not to hold the aliasing logic itself, but to provide an interface into various different alias analysis implementations. As it evolves, this should allow users to plug in specialized alias analysis implementations for their own needs and have them immediately usable by other analyses and transformations.

This revision also adds an initial simple generic alias analysis, LocalAliasAnalysis, that provides support for performing stateless local alias queries between values. This class is similar in scope to LLVM's BasicAA.

Differential Revision: https://reviews.llvm.org/D92343
-
George authored
I knew I would miss one... Reviewed By: stellaraccident Differential Revision: https://reviews.llvm.org/D96321
-