[mlir] Add std.tensor_to_memref op and teach the infra about it
The opposite of tensor_to_memref is tensor_load.

- Add some basic tensor_load/tensor_to_memref folding.
- Add source/target materializations to BufferizeTypeConverter.
- Add an example std bufferization pattern/pass that shows how the materializations work together (more std bufferization patterns to come in subsequent commits).
- In coming commits, I'll document how to write composable bufferization passes/patterns and update the other in-tree bufferization passes to match this convention. The populate* functions will of course continue to be exposed for power users.

The naming of tensor_load/tensor_to_memref and their pretty forms is not very intuitive. I'm open to any suggestions here. One key observation is that the memref type must always be the one specified in the pretty form, since the tensor type can be inferred from the memref type but not vice versa.

With this, I've been able to replace all my custom bufferization type converters in npcomp with BufferizeTypeConverter!

Part of the plan discussed in:
https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17

Differential Revision: https://reviews.llvm.org/D89437
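A minimal sketch of how the two ops and their pretty forms relate (the function name, value names, and shapes here are illustrative assumptions, not taken from this commit): note that only the memref type is written, and the tensor type is inferred from it, never the other way around.

```mlir
// Illustrative only; types and names are assumptions for this sketch.
func @roundtrip(%t: tensor<4xf32>) -> tensor<4xf32> {
  // Pretty form specifies the memref type; the tensor type
  // tensor<4xf32> is inferred from it.
  %m = tensor_to_memref %t : memref<4xf32>
  // The basic folding added here lets
  // tensor_load(tensor_to_memref(%t)) fold back to %t.
  %t2 = tensor_load %m : memref<4xf32>
  return %t2 : tensor<4xf32>
}
```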