  Oct 26, 2020
    • [mlir] Do not print back 0 alignment in LLVM dialect 'alloca' op · 03e6f40c
      Alex Zinenko authored
      The alignment attribute in the 'alloca' op treats the value '0' as 'unset'.
      When parsing the custom form of the 'alloca' op, ignore the alignment attribute
      if its value is '0' instead of actually creating it, which would produce a
      textually different yet semantically equivalent form in the output.
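
      A sketch of the behavior (illustrative assembly syntax; exact type spellings vary across MLIR versions):

      ```mlir
      // With a meaningful alignment, the attribute is printed back:
      %1 = llvm.alloca %size x i32 {alignment = 8 : i64} : (i64) -> !llvm.ptr

      // With alignment 0 ("unset"), the parser no longer creates the
      // attribute, so the op round-trips to the same textual form:
      %2 = llvm.alloca %size x i32 : (i64) -> !llvm.ptr
      ```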
      
      Reviewed By: rriddle
      
      Differential Revision: https://reviews.llvm.org/D90179
    • [mlir][vector] Update doc strings for insert_map/extract_map and fix insert_map semantic · bd07be4f
      Thomas Raoux authored
      Based on a discourse discussion, fix the doc strings and remove examples with
      wrong semantics. Also fix the insert_map semantics by adding the missing operand
      for the vector we are inserting into.
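
      A hypothetical sketch of the fixed op (operand names and syntax are illustrative, not the exact assembly format): after the fix, `insert_map` takes both the slice being inserted and the destination vector:

      ```mlir
      // %v:   the slice being inserted
      // %dst: the vector we are inserting into (the previously missing operand)
      // %id:  the distribution index
      %r = vector.insert_map %v, %dst, %id : vector<4xf32> into vector<32xf32>
      ```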
      
      Differential Revision: https://reviews.llvm.org/D89563
    • [mlir][Linalg] Add basic support for TileAndFuse on Linalg on tensors. · 37e0fdd0
      Nicolas Vasilache authored
      This revision allows fusing the producer of input tensors into the consumer under a tiling transformation (which produces subtensors).
      Many pieces are still missing (e.g. support for init_tensors, better refactoring of the LinalgStructuredOp interface support, merging implementations and reusing code), but this still allows getting started.
      
      The greedy pass itself is just for testing purposes and will be extracted in a separate test pass.
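
      Conceptually (an illustrative sketch, not literal output of the pass): tiling the consumer on tensors produces `subtensor` ops, and fusion recomputes the producer on just the subtensor the consumer tile needs:

      ```mlir
      // Before: the producer feeds the whole tensor into the consumer.
      %p = linalg.generic ... ins(%in : tensor<128x128xf32>) ... -> tensor<128x128xf32>
      %c = linalg.generic ... ins(%p : tensor<128x128xf32>) ... -> tensor<128x128xf32>

      // After tiling the consumer and fusing the producer: inside the tiled
      // loop, the producer runs only on the needed subtensor.
      %st = subtensor %in[%i, %j] [32, 32] [1, 1] : tensor<128x128xf32> to tensor<32x32xf32>
      %pt = linalg.generic ... ins(%st : tensor<32x32xf32>) ... -> tensor<32x32xf32>
      ```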
      
      Differential Revision: https://reviews.llvm.org/D89491
    • [MLIR][mlir-spirv-cpu-runner] A SPIR-V cpu runner prototype · 89808ce7
      George Mitenkov authored
      This patch introduces a SPIR-V runner. The aim is to run a GPU
      kernel on a CPU via GPU -> SPIR-V -> LLVM conversions. This is a first
      prototype, so more features will be added in due time.
      
      - Overview
      The runner follows a flow similar to the other in-tree runners. However,
      having converted the kernel to SPIR-V, we encode the bind attributes of
      global variables that represent kernel arguments. Then the SPIR-V module is
      converted to LLVM. On the host side, we emulate passing the data to the device
      by creating globals in the main module with the same symbolic names as in the
      kernel module. These global variables are later linked with the ones from the
      nested module. We copy data from the kernel arguments to the globals, call the
      kernel function from the nested module, and then copy the data back.
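
      A sketch of the linking scheme (symbol names hypothetical; exact global syntax varies by MLIR version): both modules define the argument under the same symbolic name, so linking resolves them to one storage location:

      ```mlir
      // Host (main) module: global mirroring the kernel argument.
      llvm.mlir.global linkonce @arg_0(dense<0> : tensor<4xi32>) : !llvm.array<4 x i32>

      // Nested kernel module: the kernel accesses its argument through the
      // same symbol, which is linked with the host-side global.
      llvm.mlir.global external @arg_0() : !llvm.array<4 x i32>
      ```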
      
      - Current state
      At the moment, the runner is capable of running two modules, one nested in
      the other. The kernel module must contain exactly one kernel function. Also,
      the runner supports only rank-1 integer memref types as arguments (to be scaled).
      
      - Enhancement of JitRunner and ExecutionEngine
      To translate nested modules to LLVM IR, JitRunner and ExecutionEngine were
      altered to take an optional function reference (defaulting to `nullptr`) that
      serves as a custom LLVM IR module builder. This allows customizing LLVM IR
      module creation from MLIR modules.
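
      A minimal sketch of this "optional builder" pattern, using simplified stand-in types (the real code uses `mlir::ModuleOp`, `llvm::Module`, and `llvm::function_ref`; the names below are illustrative only):

      ```cpp
      #include <functional>
      #include <iostream>
      #include <memory>
      #include <string>

      // Simplified stand-ins for the MLIR/LLVM module types.
      struct MLIRModule { std::string name; };
      struct LLVMModule { std::string ir; };

      // The builder hook: given an MLIR module, produce an LLVM IR module.
      using ModuleBuilder =
          std::function<std::unique_ptr<LLVMModule>(MLIRModule &)>;

      // Stock translation used when no custom builder is supplied.
      std::unique_ptr<LLVMModule> defaultTranslate(MLIRModule &m) {
        return std::make_unique<LLVMModule>(LLVMModule{"default:" + m.name});
      }

      // JitRunner-style entry point: the custom builder is optional and
      // defaults to nullptr, falling back to the stock translation.
      std::unique_ptr<LLVMModule> compile(MLIRModule &m,
                                          ModuleBuilder custom = nullptr) {
        if (custom)
          return custom(m);
        return defaultTranslate(m);
      }

      int main() {
        MLIRModule m{"host_and_kernel"};
        // Without a custom builder: stock translation.
        std::cout << compile(m)->ir << "\n";
        // With a custom builder that, e.g., links a nested kernel module first.
        std::cout << compile(m, [](MLIRModule &mod) {
                       return std::make_unique<LLVMModule>(
                           LLVMModule{"custom-linked:" + mod.name});
                     })->ir
                  << "\n";
        return 0;
      }
      ```

      The design point is that callers who do not care about nested modules pass nothing and get the old behavior unchanged.
      
      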
      
      Reviewed By: ftynse, mravishankar
      
      Differential Revision: https://reviews.llvm.org/D86108
    • [MLIR][mlir-spirv-cpu-runner] A pass to emulate a call to kernel in LLVM · cae4067e
      George Mitenkov authored
      This patch introduces a pass for running the
      `mlir-spirv-cpu-runner`: LowerHostCodeToLLVMPass.
      
      This pass emulates the `gpu.launch_func` call in the LLVM dialect and lowers
      the host module code to LLVM. It removes the `gpu.module`, creates a
      sequence of global variables that are later linked to the variables
      in the kernel module, as well as a series of copies to/from
      them to emulate the memory transfer between the host and
      device sides. It also converts the remaining Standard dialect into the
      LLVM dialect, emitting C wrappers.
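
      An illustrative before/after sketch (the helper symbols are hypothetical; the actual pass emits inline load/store sequences rather than named helpers):

      ```mlir
      // Before: host code launches the kernel through the GPU dialect.
      gpu.launch_func @kernels::@foo blocks in (%c1, %c1, %c1)
          threads in (%c1, %c1, %c1) args(%arg0 : memref<4xi32>)

      // After (conceptually): copy argument data into the linked globals,
      // call the kernel as a plain LLVM function, and copy results back.
      llvm.call @copy_to_global(%arg0) : (...) -> ()
      llvm.call @foo() : () -> ()
      llvm.call @copy_from_global(%arg0) : (...) -> ()
      ```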
      
      Reviewed By: mravishankar
      
      Differential Revision: https://reviews.llvm.org/D86112