[Mlir-commits] [mlir] [mlir][bufferization] Test tensor encoding -> memref layout conversion (PR #161166)
Andrei Golubev
llvmlistbot at llvm.org
Mon Oct 6 05:47:16 PDT 2025
================
@@ -1425,6 +1425,39 @@ TestMultiSlotAlloca::handleDestructuringComplete(
   return createNewMultiAllocaWithoutSlot(slot, builder, *this);
 }
+namespace {
+/// Returns test dialect's memref layout for test dialect's tensor encoding when
+/// applicable.
+MemRefLayoutAttrInterface
+getMemRefLayoutForTensorEncoding(RankedTensorType tensorType) {
+  if (auto encoding =
+          dyn_cast<test::TestTensorEncodingAttr>(tensorType.getEncoding())) {
+    return cast<MemRefLayoutAttrInterface>(test::TestMemRefLayoutAttr::get(
+        tensorType.getContext(), encoding.getDummy()));
+  }
+  return {};
+}
+
+/// Auxiliary bufferization function for test and builtin tensors.
+bufferization::BufferLikeType
+convertTensorToBuffer(mlir::Operation *op,
+                      const bufferization::BufferizationOptions &options,
+                      bufferization::TensorLikeType tensorLike) {
+  auto buffer =
+      *tensorLike.getBufferType(options, [&]() { return op->emitError(); });
+  if (auto memref = dyn_cast<MemRefType>(buffer)) {
----------------
andrey-golubev wrote:
note: if we had an options callback that provided customizable layout inference, this branch could be avoided. Instead, the one-shot bufferization options could be configured accordingly, and this whole thing would become just `return TensorLike::getBufferType()`.
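A rough sketch of that idea (the `inferMemRefLayout` hook on the options struct is hypothetical and does not exist in `BufferizationOptions` today; all names here are illustrative only, not part of this PR):

```cpp
// Hypothetical: bufferization options extended with a user-provided hook that
// maps a ranked tensor type (and its encoding) to a memref layout. This is
// only a sketch of the callback suggested in the comment above.
struct OptionsWithLayoutHook : public bufferization::BufferizationOptions {
  std::function<MemRefLayoutAttrInterface(RankedTensorType)> inferMemRefLayout;
};

// If the caller configured the hook (e.g. to call
// getMemRefLayoutForTensorEncoding for test-dialect encodings), the layout
// decision would live inside getBufferType() / the options, and the helper
// would no longer need to special-case MemRefType:
bufferization::BufferLikeType
convertTensorToBuffer(mlir::Operation *op, const OptionsWithLayoutHook &options,
                      bufferization::TensorLikeType tensorLike) {
  // Single call; no branch on the concrete buffer type.
  return *tensorLike.getBufferType(options,
                                   [&]() { return op->emitError(); });
}
```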
https://github.com/llvm/llvm-project/pull/161166