[Mlir-commits] [mlir] [mlir][bufferization] Convert tensor encoding into memref layout (PR #161166)

Matthias Springer llvmlistbot at llvm.org
Tue Sep 30 06:17:10 PDT 2025


================
@@ -364,6 +367,12 @@ struct BufferizationOptions {
   DefaultMemorySpaceFn defaultMemorySpaceFn =
       [](TensorType t) -> std::optional<Attribute> { return Attribute(); };
 
+  /// Construction function used to determine the memref layout based on the
+  /// original tensor type. Can be used to specialize tensor encoding -> memref
+  /// layout conversion. By default, it is unset, making the layout construction
+  /// behavior depend on the place where it is used.
+  ConstructMemRefLayoutFn constructMemRefLayoutFn = nullptr;
----------------
matthias-springer wrote:

Where is this option useful? Bufferization processes operations top-to-bottom (defs before users), and the result type of a bufferized op can typically be inferred from its operands. For most ops, it is not possible to put a custom layout map on the result; the op would no longer verify.

E.g.: `tensor.extract_slice` -> `memref.subview`: based on the operands and their types, we can infer the result type.
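
For illustration, a hand-written sketch (the IR below is hypothetical, not taken from the PR):

```mlir
// Hypothetical input IR:
%0 = tensor.extract_slice %t[0, 1] [4, 4] [1, 1]
    : tensor<8x8xf32> to tensor<4x4xf32>

// After bufferization, the subview's result layout is fully determined by the
// source memref and the offsets/sizes/strides; it cannot be chosen freely:
%1 = memref.subview %m[0, 1] [4, 4] [1, 1]
    : memref<8x8xf32> to memref<4x4xf32, strided<[8, 1], offset: 1>>
```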

E.g.: `bufferization.alloc_tensor` -> `memref.alloc`: the result buffer always has an identity layout map.
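
Again as a hypothetical sketch (not from the PR):

```mlir
// Hypothetical input IR:
%0 = bufferization.alloc_tensor() : tensor<8x8xf32>

// The freshly allocated buffer always has an identity (contiguous) layout:
%1 = memref.alloc() : memref<8x8xf32>
```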

E.g.: tensor function argument: the bufferized type can be controlled via `FunctionArgTypeConverterFn`.
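
A sketch of what I mean (hypothetical IR; the pass option spelling assumes `-one-shot-bufferize`'s `function-boundary-type-conversion` option):

```mlir
// Hypothetical tensor function argument:
func.func @f(%arg0: tensor<8xf32>) {
  return
}

// Bufferized with a fully dynamic layout map:
func.func @f(%arg0: memref<8xf32, strided<[?], offset: ?>>) {
  return
}

// Bufferized with an identity layout map
// (e.g. function-boundary-type-conversion=identity-layout-map):
func.func @f(%arg0: memref<8xf32>) {
  return
}
```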

https://github.com/llvm/llvm-project/pull/161166
