- Dec 02, 2020
- Christian Sigg authored
Given that OpState already implicitly converts to Operation*, this seems reasonable. The alternative would be to add more functions to OpState which forward to Operation.
Reviewed By: rriddle, ftynse
Differential Revision: https://reviews.llvm.org/D92266
- Alex Zinenko authored
OpenMPIRBuilder::createParallel outlines the body region of the parallel construct into a new function that accepts any value previously defined outside the region as a function argument. This function is called back by the OpenMP runtime function __kmpc_fork_call, which expects trailing arguments to be pointers. If the region uses a value that is not of a pointer type, e.g. a struct, the produced code would be invalid. In such cases, make createParallel emit IR that stores the value on the stack and passes the pointer to the outlined function instead. The outlined function then loads the value back and uses it as normal.
Reviewed By: jdoerfert, llitchev
Differential Revision: https://reviews.llvm.org/D92189
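The store-on-stack fix can be pictured with a small C++ analogue. All names below (`Payload`, `forkCall`, `outlinedBody`, `runParallel`) are ours for illustration, not the OpenMPIRBuilder or OpenMP runtime API; the point is only that a runtime which forwards pointer-sized arguments can still receive a by-value struct if the caller spills it to the stack and passes the address:

```cpp
#include <cassert>
#include <cstdint>

// A non-pointer value (e.g. a struct) captured by the parallel region.
struct Payload {
  int64_t a;
  int64_t b;
};

static int64_t result = 0;

// Stands in for the outlined body: it receives a pointer and loads the
// struct value back before using it, mirroring the generated load.
static void outlinedBody(void *arg) {
  Payload p = *static_cast<Payload *>(arg);
  result = p.a + p.b;
}

// Stands in for a runtime entry point that, like __kmpc_fork_call,
// only forwards pointer-sized trailing arguments.
static void forkCall(void (*body)(void *), void *arg) { body(arg); }

// In spirit, what the fixed createParallel emits: store the value on the
// stack and pass its address instead of the value itself.
int64_t runParallel(Payload value) {
  Payload onStack = value;
  forkCall(outlinedBody, &onStack);
  return result;
}
```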
- Hanhan Wang authored
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D92416
- River Riddle authored
- River Riddle authored
[mlir][PDL] Use .getOperation() when constructing SuccessorRange to avoid ambiguous constructor in GCC5
- zhanghb97 authored
The test process of ir_array_attributes.py depends on numpy. This patch checks for numpy in the Python bindings configuration.
- Add NumPy in find_package as a required component to check numpy.
- If numpy is found, print the version and include directory.
Differential Revision: https://reviews.llvm.org/D92276
- River Riddle authored
- River Riddle authored
PDL patterns are now supported via a new `PDLPatternModule` class. This class contains a ModuleOp with the pdl::PatternOp operations representing the patterns, as well as a collection of registered C++ functions for native constraints/creations/rewrites/etc. that may be invoked via the pdl patterns. Instances of this class are added to an OwningRewritePatternList in the same fashion as C++ RewritePatterns, i.e. via the `insert` method.
The PDL bytecode is an in-memory representation of the PDL interpreter dialect that can be efficiently interpreted/executed. The representation of the bytecode boils down to a code array (for opcodes/memory locations/etc.) and a memory buffer (for storing attributes/operations/values/any other data necessary). The bytecode operations are effectively a 1-1 mapping to the PDLInterp dialect operations, with a few exceptions in cases where the in-memory representation of the bytecode can be more efficient than the MLIR representation. For example, a generic `AreEqual` bytecode op can be used to represent AreEqualOp, CheckAttributeOp, and CheckTypeOp.
The execution of the bytecode is split into two phases: matching and rewriting. When matching, all of the matched patterns are collected to avoid the overhead of re-running parts of the matcher. These matched patterns are then considered alongside the native C++ patterns, which rewrite immediately in-place via `RewritePattern::matchAndRewrite`, for the given root operation. When a PDL pattern is matched and has the highest benefit, it is passed back to the bytecode to execute its rewriter.
Differential Revision: https://reviews.llvm.org/D89107
- Dec 01, 2020
- Rahul Joshi authored
- Change InferTypeOpInterface::inferResultTypes to use fully qualified types matching the ones generated by genTypeInterfaceMethods, so the redundancy can be detected.
- Move genTypeInterfaceMethods() before genOpInterfaceMethods() so that the inferResultTypes method generated by genTypeInterfaceMethods() takes precedence over the declaration that might be generated by genOpInterfaceMethods().
- Modified an op in the test dialect to exercise this (the modified op would fail to generate valid C++ code due to duplicate inferResultTypes methods).
Differential Revision: https://reviews.llvm.org/D92414
- ergawy authored
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D92333
- Eugene Zhulenev authored
ExecutionEngine/LLJIT do not run global destructors in loaded dynamic libraries when destroyed, and threads managed by the ThreadPool can race with program termination, which leads to segfaults.
TODO: Re-enable threading after fixing the problem with destructors, or removing static globals from the dynamic library.
Differential Revision: https://reviews.llvm.org/D92368
- Ray (I-Jui) Sung authored
Fixes out-of-bounds access in generated nested DAG rewriter matching code.
Reviewed By: tpopp
Differential Revision: https://reviews.llvm.org/D92075
- Sean Silva authored
- Address TODO in scf-bufferize: the argument materialization issue is now fixed and the code is now in Transforms/Bufferize.cpp.
- Tighten up finalizing-bufferize to avoid creating invalid IR when operand types potentially change.
- Tidy up the testing of func-bufferize, and move appropriate tests to a new finalizing-bufferize.mlir.
- The new stricter checking in finalizing-bufferize revealed that we needed a DimOp conversion pattern (found when integrating into npcomp). Previously, the conversion infrastructure was blindly changing the operand type during finalization, which happened to work due to DimOp's tensor/memref polymorphism, but is generally not encouraged (the new pattern is the way to tell the conversion infrastructure that it is legal to change that type).
- Nov 30, 2020
- Nicolas Vasilache authored
- Christian Sigg authored
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D92303
- Nicolas Vasilache authored
The InlineAsmOp mirrors the underlying LLVM semantics with a notable exception: the embedded `asm_string` is not allowed to define or reference any symbol or any global variable: only the operands of the op may be read, written, or referenced. Attempting to define or reference any symbol or any global behavior is considered undefined behavior at this time.
The asm dialect syntax is currently specified with an integer (0 [default] for the "att dialect", 1 for the intel dialect) to circumvent the ODS limitation on string enums.
Translation to LLVM is provided and raises the fact that the asm constraints string must be well-formed with respect to in/out operands. No check is performed on the asm_string.
An InlineAsm instruction in LLVM is a special call operation to a function that is constructed on the fly. It does not fit the current model of MLIR calls with symbols. As a consequence, the current implementation constructs the function type in ModuleTranslation.cpp. This should be refactored in the future.
The mlir-cpu-runner is augmented with the global initialization of the X86 asm parser to allow proper execution in JIT mode. Previously, only the X86 asm printer was initialized.
Differential Revision: https://reviews.llvm.org/D92166
- Stella Laurenzo authored
* Follows on https://reviews.llvm.org/D92193
* I had a mid-air collision with some additional occurrences and then noticed that there were a lot more. Think I got them all.
Differential Revision: https://reviews.llvm.org/D92292
- Stella Laurenzo authored
* If ODS redefines this, it is fine, but I have found this accessor to be universally useful in the old npcomp bindings and I'm closing gaps that will let me switch.
Differential Revision: https://reviews.llvm.org/D92287
- Stella Laurenzo authored
* Add capsule get/create for Attribute and Type, which already had capsule interop defined.
* Add capsule interop and get/create for Location.
* Add Location __eq__.
* Use get() and implicit cast to go from PyAttribute, PyType, PyLocation to MlirAttribute, MlirType, MlirLocation (bundled with this change because I didn't want to continue the pattern one more time).
Differential Revision: https://reviews.llvm.org/D92283
- George authored
`bool` is pretty well supported by now in C, and using it in place of `int` is not only more semantically accurate, but also improves automatic bindings for languages like Swift. There is more discussion here: https://llvm.discourse.group/t/adding-mlirbool-to-c-bindings/2280/5
Reviewed By: ftynse, mehdi_amini
Differential Revision: https://reviews.llvm.org/D92193
- Nov 29, 2020
- Jacques Pienaar authored
Adds an op with a mapping from ops to the corresponding shape functions for those ops in the library, and a mechanism to associate shape functions to functions. The mapping of operand to shape function is kept separate from the shape functions themselves, as the operation is associated to the shape function and not vice versa, and one could have a common library of shape functions that can be used in different contexts.
Use fully qualified names and require a name for shape fn lib ops for now, with an explicit print/parse (based around the generated one & the GPU module op ones).
This commit reverts d9da4c3e. Fixes missing headers (don't know how that was working locally).
Differential Revision: https://reviews.llvm.org/D91672
- George authored
This mirrors the underlying C++ API.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D92252
- Mehdi Amini authored
This reverts commit 6dd9596b. Build is broken.
- Jacques Pienaar authored
Adds an op with a mapping from ops to the corresponding shape functions for those ops in the library, and a mechanism to associate shape functions to functions. The mapping of operand to shape function is kept separate from the shape functions themselves, as the operation is associated to the shape function and not vice versa, and one could have a common library of shape functions that can be used in different contexts.
Use fully qualified names and require a name for shape fn lib ops for now, with an explicit print/parse (based around the generated one & the GPU module op ones).
Differential Revision: https://reviews.llvm.org/D91672
- Nov 28, 2020
- Christian Sigg authored
Differential Revision: https://reviews.llvm.org/D92265
- Christian Sigg authored
Reviewed By: herhut, ftynse
Differential Revision: https://reviews.llvm.org/D92111
- Nov 27, 2020
- Tamas Berghammer authored
A splat attribute has a single element when printed, so we should treat it as such when deciding whether to elide it based on the flag intended to elide large attributes.
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D92165
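The elision decision being fixed can be sketched in a few lines of C++. These helpers (`printedElements`, `shouldElide`) are our own toy names, not MLIR's printing API; the sketch only shows why the size check must look at the printed element count, under which a splat counts as one element, rather than the logical element count:

```cpp
#include <cassert>
#include <cstddef>

// How many elements actually appear in the printed form: a splat
// collapses to a single element regardless of its logical size.
size_t printedElements(size_t numElements, bool isSplat) {
  return isSplat ? 1 : numElements;
}

// The "elide large attributes" flag should compare its limit against the
// printed size, so even a million-element splat is never elided.
bool shouldElide(size_t numElements, bool isSplat, size_t limit) {
  return printedElements(numElements, isSplat) > limit;
}
```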
- Felipe de Azevedo Piovezan authored
Many pages have had their titles renamed over time, causing broken links to spread throughout the documentation.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D92093
- Frederik Gossen authored
Overcome the assumption that parallel loops are only nested in other parallel loops.
Differential Revision: https://reviews.llvm.org/D92188
- Christian Sigg authored
The ops are very similar to the std variants, but support async GPU execution. gpu.alloc does not currently support an alignment attribute, and the new ops do not have canonicalizers/folders like their std siblings do.
Reviewed By: herhut
Differential Revision: https://reviews.llvm.org/D91698
- Nicolas Vasilache authored
This adds LLVM triple propagation and updates the test that did not check it properly.
Differential Revision: https://reviews.llvm.org/D92182
- Nov 26, 2020
- Stephan Herhut authored
The rewrite logic has an optimization to drop a cast operation after rewriting block arguments if the cast operation has no users. This is unsafe, as there might be a pending rewrite that replaced the cast operation itself and hence would trigger a second free. Instead, do not remove the casts and leave it up to a later canonicalization to do so.
Differential Revision: https://reviews.llvm.org/D92184
- Benjamin Kramer authored
- Stephan Herhut authored
This change is required so that bufferization can properly identify the linalg.yield as a terminator with an associated parent op.
Differential Revision: https://reviews.llvm.org/D92173
- Stephan Herhut authored
This enables partial bufferization that includes function signatures. To test this, this change also makes the func-bufferize pass partial and adds a dedicated finalizing-bufferize pass.
Differential Revision: https://reviews.llvm.org/D92032
- Stella Laurenzo authored
Differential Revision: https://reviews.llvm.org/D92144
- Aart Bik authored
This change gives sparse compiler clients more control over selecting individual types for the pointers and indices in the sparse storage schemes. A narrower width obviously results in a smaller memory footprint, but the range should always suffice for the maximum number of entries or index value.
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D92126
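The footprint savings from narrower pointer/index widths are easy to make concrete. The helper below is a back-of-the-envelope calculation for a CSR-style scheme (our own sketch, not the sparse compiler's API or its exact storage layout), counting one row pointer per row plus one, and one index plus one value per stored entry:

```cpp
#include <cassert>
#include <cstddef>

// Bytes needed by a CSR-style sparse scheme with the given element widths.
size_t csrBytes(size_t rows, size_t nnz, size_t ptrBytes, size_t idxBytes,
                size_t valBytes) {
  return (rows + 1) * ptrBytes   // row pointers
       + nnz * idxBytes          // column indices
       + nnz * valBytes;         // nonzero values
}
```

For a 1000-row matrix with 10000 nonzeros and 8-byte values, dropping pointers and indices from 64-bit to 32-bit shrinks the storage from 168008 to 124004 bytes, while 32-bit indices still comfortably cover the entry count and index range.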
- Sean Silva authored
It still had the old name from before ElementwiseMappable was added.
- Nov 25, 2020
- Marius Brehler authored
- Frank Laub authored
Adding missing custom builders for AffineVectorLoadOp & AffineVectorStoreOp. In practice, it is difficult to correctly construct these ops without these builders (because the AffineMap is not included at construction time).
Differential Revision: https://reviews.llvm.org/D86380