[Mlir-commits] [mlir] [mlir][bufferization] Convert tensor encoding into memref layout (PR #161166)

Andrei Golubev llvmlistbot at llvm.org
Thu Oct 2 01:07:28 PDT 2025


================
@@ -364,6 +367,12 @@ struct BufferizationOptions {
   DefaultMemorySpaceFn defaultMemorySpaceFn =
       [](TensorType t) -> std::optional<Attribute> { return Attribute(); };
 
+  /// Construction function used to determine the memref layout based on the
+  /// original tensor type. Can be used to specialize tensor encoding -> memref
+  /// layout conversion. By default, it is unset, making the layout construction
+  /// behavior depend on the place where it is used.
+  ConstructMemRefLayoutFn constructMemRefLayoutFn = nullptr;
----------------
andrey-golubev wrote:

Hmmm, thanks for questioning actually! I thought originally that I would need this in places that rely on `RankedTensorType::getBufferType()` (thinking of `bufferization.to_tensor` conversion for example).

But: thus far, realistically, custom encoding -> custom layout conversion only works for our own downstream operations (e.g. we encode "layout" information differently from upstream MLIR; for example, the type itself records whether it's NHWC, NCHW, or some other "format").

> E.g.: bufferization.alloc_tensor -> memref.alloc: the result buffer always has an identity layout map.

Not in our case, but thus far we don't rely on `bufferization.alloc_tensor` either, I think - we have our own allocation for our own types, and use something from MLIR for memrefs. I guess I just have to check how that case works - probably via a test.

I think at this point I don't have a solid example of why custom layout conversion would be necessary, so I'll drop this one and try to mimic in our codebase what standard MLIR does. Perhaps the current infrastructure would be enough.

https://github.com/llvm/llvm-project/pull/161166
