  1. Oct 27, 2020
    • [mlir][Pattern] Add a new FrozenRewritePatternList class · 3fffffa8
      River Riddle authored
      This class represents a rewrite pattern list that has been frozen and is thus immutable. It replaces the uses of OwningRewritePatternList in pattern-driver-related APIs, such as dialect conversion. When PDL becomes more prevalent, this API will allow a set of patterns to be optimized once, rather than once per run of a pass.
      
      Differential Revision: https://reviews.llvm.org/D89104
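      The freeze-once/reuse-many idea above can be illustrated with a toy sketch. This is not the MLIR API: `OwningPatternList`, `FrozenPatternList`, and `buildPatterns` are hypothetical stand-ins showing how an immutable list, built once from a mutable one, can be shared across many driver runs.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Toy stand-in for a rewrite pattern: rewrites a string in place on match.
using Pattern = std::function<bool(std::string &)>;

// Mutable "owning" list: patterns can still be added.
struct OwningPatternList {
  std::vector<Pattern> patterns;
  void insert(Pattern p) { patterns.push_back(std::move(p)); }
};

// Frozen list: constructed once from an owning list, then immutable.
// In MLIR, freezing is the point where one-time work (e.g. compiling PDL
// patterns) could happen once instead of on every run of a pass.
class FrozenPatternList {
public:
  explicit FrozenPatternList(OwningPatternList &&owned)
      : patterns(std::move(owned.patterns)) {}

  bool applyFirstMatch(std::string &ir) const {
    for (const Pattern &p : patterns)
      if (p(ir))
        return true;
    return false;
  }

private:
  const std::vector<Pattern> patterns; // no way to add patterns later
};

// Hypothetical helper: build the pattern set once, freeze, and hand out.
inline FrozenPatternList buildPatterns() {
  OwningPatternList owned;
  owned.insert([](std::string &s) {
    if (s != "foo")
      return false;
    s = "bar";
    return true;
  });
  return FrozenPatternList(std::move(owned));
}
```

A driver would then hold one `FrozenPatternList` and apply it on every invocation, with no per-run rebuild.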
    • [mlir][NFC] Move around the code related to PatternRewriting to improve layering · b6eb26fd
      River Riddle authored
      There are several pieces of pattern rewriting infra in IR/ that really shouldn't be there. This revision moves those pieces to a better location so that they are easier to evolve in the future (e.g. with PDL). More concretely, this revision does the following:
      
      * Create a Transforms/GreedyPatternRewriteDriver.h and move the apply*andFold methods there.
      The definitions for these methods already live in Transforms/, so it doesn't make sense for the declarations to be in IR/.
      
      * Create a new lib/Rewrite library and move PatternApplicator there.
      This new library will be focused on applying rewrites, and will also include compiling rewrites with PDL.
      
      Differential Revision: https://reviews.llvm.org/D89103
    • [mlir][Linalg] Miscellaneous enhancements to cover more fusion cases. · 78f37b74
      MaheshRavishankar authored
      Adds support for:
      - Dropping unit-dimension loops for indexed_generic ops.
      - Folding consecutive collapsing (or expanding) reshapes when the result
        (or source) is a scalar.
      - Fixes to indexed_generic -> generic fusion when zero-dim tensors are
        involved.
      
      Differential Revision: https://reviews.llvm.org/D90118
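      The first item above, dropping unit-dimension loops, boils down to filtering size-1 dimensions out of a loop-nest shape. The sketch below is a hypothetical helper showing just that bookkeeping, not the Linalg implementation; the all-units fallback to a single dimension is an illustrative choice.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Drop all unit (size-1) dimensions from a shape, mirroring the idea of
// eliminating trivial loops from a loop nest. If every dimension is a
// unit, keep a single one so the result stays non-empty.
std::vector<int64_t> dropUnitDims(const std::vector<int64_t> &shape) {
  std::vector<int64_t> result;
  for (int64_t d : shape)
    if (d != 1)
      result.push_back(d);
  if (result.empty() && !shape.empty())
    result.push_back(1);
  return result;
}
```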
  2. Oct 26, 2020
  3. Oct 24, 2020
  4. Oct 23, 2020
  5. Oct 22, 2020
  6. Oct 21, 2020
  7. Oct 20, 2020
  8. Oct 19, 2020
  9. Oct 18, 2020
  10. Oct 16, 2020
    • [mlir] Add a new SymbolUserOpInterface class · 71eeb5ec
      River Riddle authored
      The initial goal of this interface is to fix the current problems with verifying symbol user operations, but it can extend beyond that in the future. The current problems with the verification of symbol uses are:
      * It is extremely inefficient:
      Most current symbol users perform the symbol lookup using the slow O(N) string-compare methods, which can lead to extremely long verification times in large modules.
      * It is invalid/breaks the constraints of the verification pass:
      If the symbol reference is not flat (and, in some cases, even if it is flat), a verifier for an operation is not permitted to touch the referenced operation, because it may be in the process of being mutated by a different thread within the pass manager.
      
      The new SymbolUserOpInterface exposes a method `verifySymbolUses` that will be invoked from the parent symbol table to allow for verifying the constraints of any referenced symbols. This method is passed a `SymbolTableCollection` to allow for O(1) lookups of any necessary symbol operation.
      
      Differential Revision: https://reviews.llvm.org/D89512
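      The efficiency difference described above is the classic linear-scan-vs-hashed-cache trade-off. The toy sketch below contrasts the two; `Symbol`, `lookupLinear`, and `SymbolTable` are illustrative stand-ins, with `SymbolTable` playing the role of the `SymbolTableCollection` that enables O(1) average-case lookups after a one-time build.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

struct Symbol {
  std::string name;
  int id;
};

// O(N) lookup: what repeated string-compare verification amounts to.
const Symbol *lookupLinear(const std::vector<Symbol> &ops,
                           const std::string &name) {
  for (const Symbol &s : ops)
    if (s.name == name)
      return &s;
  return nullptr;
}

// Toy cache built once over the ops, then queried in O(1) on average,
// the way a SymbolTableCollection-style helper amortizes lookups.
class SymbolTable {
public:
  explicit SymbolTable(const std::vector<Symbol> &ops) {
    for (const Symbol &s : ops)
      table.emplace(s.name, &s);
  }

  const Symbol *lookup(const std::string &name) const {
    auto it = table.find(name);
    return it == table.end() ? nullptr : it->second;
  }

private:
  std::unordered_map<std::string, const Symbol *> table;
};
```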
    • [mlir][vector] Add unrolling patterns for Transfer read/write · edbdea74
      Thomas Raoux authored
      Adds unroll support for the transfer read and transfer write operations. This allows picking the ideal size for the memory accesses for a given target.
      
      Differential Revision: https://reviews.llvm.org/D89289
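      Unrolling a large transfer into target-sized pieces is, at its core, an enumeration of tile offsets. The hypothetical sketch below shows that bookkeeping for the 2-D case, assuming the original shape divides evenly by the tile shape; it is not the vector-dialect pattern itself.

```cpp
#include <array>
#include <cassert>
#include <vector>

// Enumerate the offsets at which a 2-D access of `shape` would be split
// into `tile`-sized reads/writes, in row-major order. Assumes each
// dimension of `shape` is a multiple of the corresponding tile size.
std::vector<std::array<int, 2>> tileOffsets(std::array<int, 2> shape,
                                            std::array<int, 2> tile) {
  std::vector<std::array<int, 2>> offsets;
  for (int i = 0; i < shape[0]; i += tile[0])
    for (int j = 0; j < shape[1]; j += tile[1])
      offsets.push_back({i, j});
  return offsets;
}
```

Each offset would become one smaller transfer op in the unrolled form.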
  11. Oct 15, 2020
  12. Oct 14, 2020
  13. Oct 13, 2020
    • [mlir][Linalg] Lower padding attribute for pooling ops · 44865e91
      Alberto Magni authored
      Update linalg-to-loops lowering for pooling operations to perform
      padding of the input when specified by the corresponding attribute.
      
      Reviewed By: hanchung
      
      Differential Revision: https://reviews.llvm.org/D88911
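      A hedged sketch of the lowered loop structure this describes: a 1-D max pooling whose out-of-bounds reads take a padding value (here the type's minimum, the identity for max) instead of requiring a pre-padded input. The function name and pooling configuration are illustrative, not the Linalg lowering.

```cpp
#include <algorithm>
#include <cassert>
#include <limits>
#include <vector>

// Max pooling over `in` with the given window size and symmetric padding,
// stride 1. Out-of-bounds accesses read the padding value rather than
// loading from the (unpadded) input buffer.
std::vector<int> maxPool1D(const std::vector<int> &in, int window, int pad) {
  const int n = static_cast<int>(in.size());
  const int padValue = std::numeric_limits<int>::min();
  std::vector<int> out;
  for (int start = -pad; start + window <= n + pad; ++start) {
    int best = padValue;
    for (int k = 0; k < window; ++k) {
      int idx = start + k;
      // The padding the attribute describes: no load for out-of-bounds.
      int v = (idx < 0 || idx >= n) ? padValue : in[idx];
      best = std::max(best, v);
    }
    out.push_back(best);
  }
  return out;
}
```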
    • [mlir] Fix sporadic build failures due to missing dependency · 0c15a1b4
      Stella Stamenova authored
      The build of MLIR occasionally fails (especially on Windows) because there is a missing dependency between MLIRLLVMIR and MLIROpenMPOpsIncGen:
      
      1) LLVMDialect.cpp includes LLVMDialect.h
      2) LLVMDialect.h includes OpenMPDialect.h
      3) OpenMPDialect.h includes OpenMPOpsDialect.h.inc, OpenMPOpsEnums.h.inc and OpenMPOps.h.inc
      
      The OpenMP .inc files are generated by MLIROpenMPOpsIncGen, so MLIRLLVMIR, which builds LLVMDialect.cpp, should depend on MLIROpenMPOpsIncGen.
      
      Reviewed By: mehdi_amini
      
      Differential Revision: https://reviews.llvm.org/D89275
    • [mlir][Linalg] Fix TensorConstantOp bufferization in Linalg. · 61211174
      Nicolas Vasilache authored
      TensorConstantOp bufferization currently uses the vector dialect to store constant data into memory.
      Due to natural vector size and alignment properties, this is problematic with n>1-D vectors whose most minor dimension is not naturally aligned.
      
      Instead, this revision linearizes the constant and introduces a linalg.reshape to go back to the desired shape.
      
      Still, this is to be considered a workaround, and a better long-term solution will probably involve `llvm.global`.
      
      Differential Revision: https://reviews.llvm.org/D89311
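      The index math behind the workaround can be sketched as follows: store a 2-D constant as a flat (linearized) 1-D buffer, then recover elements through row-major delinearization, which is the role the introduced linalg.reshape plays. `linearize` and `at` are illustrative names, not MLIR APIs.

```cpp
#include <cassert>
#include <vector>

// Flatten a row-major 2-D constant into a 1-D buffer, sidestepping the
// alignment issues of storing it as an n-D vector.
std::vector<int> linearize(const std::vector<std::vector<int>> &mat) {
  std::vector<int> flat;
  for (const auto &row : mat)
    flat.insert(flat.end(), row.begin(), row.end());
  return flat;
}

// Recover element (i, j) of the original cols-wide shape from the flat
// buffer: the reshape back to the desired shape, as index arithmetic.
int at(const std::vector<int> &flat, int cols, int i, int j) {
  return flat[i * cols + j];
}
```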
    • [mlir][gpu] Add `gpu.wait` op. · db1cf3d9
      Christian Sigg authored
      This combines two separate ops (D88972: `gpu.create_token`, D89043: `gpu.host_wait`) into one.
      
      I do, after all, like the idea of combining the two ops, because it matches exactly the pattern we are going to have in the other gpu ops that will implement the AsyncOpInterface (launch_func, copies, alloc): if the op is async, we return a !gpu.async.token; otherwise, we synchronize with the host and don't return a token.
      
      The use cases for `gpu.wait async` and `gpu.wait` are further apart than those of e.g. `gpu.h2d async` and `gpu.h2d`,
      but I like the consistent meaning of the `async` keyword in GPU ops.
      
      Reviewed By: herhut
      
      Differential Revision: https://reviews.llvm.org/D89160
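      The async-vs-sync convention described above can be modeled on the host side with a toy sketch: the async form returns a token the caller can wait on later, while the sync form synchronizes before returning and yields no token. `std::future` stands in for a GPU stream token here; `launchAsync`/`launchSync` are illustrative names, not the gpu dialect.

```cpp
#include <cassert>
#include <future>

// Stand-in for !gpu.async.token: something the caller can wait on later.
using Token = std::future<void>;

// `async` form: kick off the work and return immediately with a token.
Token launchAsync(int &result) {
  return std::async(std::launch::async, [&result] { result = 42; });
}

// Sync form: same work, but synchronize with the host before returning,
// so no token is produced.
void launchSync(int &result) { launchAsync(result).wait(); }
```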