Commit ee491ac9 authored by Sean Silva

[mlir] Add std.tensor_to_memref op and teach the infra about it

The opposite of tensor_to_memref is tensor_load.
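
For reference, the two ops round-trip a value between tensor and memref
types; their pretty forms look roughly like this (the shape below is
just illustrative):

    %m = tensor_to_memref %t : memref<4xf32>
    %t2 = tensor_load %m : memref<4xf32>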

- Add some basic tensor_load/tensor_to_memref folding (e.g., the round
  trip shown above folds away).
- Add source/target materializations to BufferizeTypeConverter (see the
  sketch after this list).
- Add an example std bufferization pattern/pass that shows how the
  materializations work together (more std bufferization patterns to
  come in subsequent commits).
  - In upcoming commits, I'll document how to write composable
    bufferization passes/patterns and update the other in-tree
    bufferization passes to match this convention. The populate*
    functions will of course continue to be exposed for power users.
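
As a sketch of how the materializations compose during a partial
bufferization (the "some.*" op names below are hypothetical
placeholders, not real ops): a target materialization inserts
tensor_to_memref where a converted op needs a memref operand that is
still produced as a tensor, and a source materialization inserts
tensor_load where a not-yet-converted op needs a tensor operand that is
now produced as a memref.

    // Before bufferizing only "some.middle_op":
    %0 = "some.producer"() : () -> tensor<4xf32>
    %1 = "some.middle_op"(%0) : (tensor<4xf32>) -> tensor<4xf32>
    %2 = "some.consumer"(%1) : (tensor<4xf32>) -> tensor<4xf32>

    // After: the converter materializes casts at the conversion boundary.
    %0 = "some.producer"() : () -> tensor<4xf32>
    %m = tensor_to_memref %0 : memref<4xf32>   // target materialization
    %m2 = "some.middle_op"(%m) : (memref<4xf32>) -> memref<4xf32>
    %1 = tensor_load %m2 : memref<4xf32>       // source materialization
    %2 = "some.consumer"(%1) : (tensor<4xf32>) -> tensor<4xf32>

Once the neighboring ops are bufferized as well, the resulting
back-to-back cast pairs are exactly the kind of pattern the basic
folding added here is meant to clean up.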

The names tensor_load/tensor_to_memref and their pretty forms are not
very intuitive. I'm open to any suggestions here. One key observation
is that the memref type must always be the one specified in the pretty
form, since the tensor type can be inferred from the memref type but
not vice versa.
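
To make that concrete (a small sketch; the memory-space detail is just
an assumed example of information the tensor type does not carry):

    %a = tensor_load %m0 : memref<4xf32>     // %a : tensor<4xf32>
    %b = tensor_load %m1 : memref<4xf32, 2>  // %b : tensor<4xf32> too

Both memrefs load to the same tensor type, so the memref type cannot be
recovered from the tensor type alone and has to be spelled out.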

With this, I've been able to replace all my custom bufferization type
converters in npcomp with BufferizeTypeConverter!

Part of the plan discussed in:
https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17

Differential Revision: https://reviews.llvm.org/D89437
parent 9c728a7c