[Mlir-commits] [mlir] [mlir][bufferization] Support custom types (1/N) (PR #142986)

Matthias Springer llvmlistbot at llvm.org
Mon Jun 16 23:18:16 PDT 2025


================
@@ -473,19 +482,19 @@ def Bufferization_ToTensorOp : Bufferization_Op<"to_tensor", [
     FailureOr<BaseMemRefType> getBufferType(
         Value value, const BufferizationOptions &options,
         const BufferizationState &state, SmallVector<Value> &invocationStack) {
-      return ::llvm::cast<BaseMemRefType>(getMemref().getType());
+      return ::llvm::cast<BaseMemRefType>(getBuffer().getType());
     }
   }];
 
   let assemblyFormat = [{
-    $memref (`restrict` $restrict^)? (`writable` $writable^)? attr-dict
-      `:` type($memref) `to` type($result)
+    $buffer (`restrict` $restrict^)? (`writable` $writable^)? attr-dict
+      `:` type($buffer) `to` type($result)
   }];
 
   let builders = [
-    OpBuilder<(ins "Value":$memref, CArg<"bool", "false">:$restrict, CArg<"bool", "false">:$writeable), [{
-      auto rtt = memref::getTensorTypeFromMemRefType(memref.getType());
-      build($_builder, $_state, rtt, memref, restrict, writeable);
+    OpBuilder<(ins "Value":$buffer, CArg<"bool", "false">:$restrict, CArg<"bool", "false">:$writeable), [{
+      auto rtt = bufferization::detail::getTensorFromBuffer(buffer.getType());
----------------
matthias-springer wrote:

If we go with the above-mentioned `verifyCompatibleTensorType` approach, type inference must indeed be dropped. I think that's alright. We can add some helper functions to make it easy for folks to migrate their code.

I wouldn't pass the `BufferizationOptions` to the op builder. I think it's better to explicitly specify the result type when constructing a `to_tensor` / `to_buffer` op.
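
For illustration, a minimal sketch of what a call site could look like once builder-side type inference is dropped and the result type is specified explicitly. The `getTensorFromBuffer` helper name is taken from the diff above, but the exact builder overload (and whether the migration helpers keep this name/signature) is an assumption, not the final API of this PR:

```c++
#include "mlir/Dialect/Bufferization/IR/Bufferization.h"

using namespace mlir;

// Hypothetical call site (not part of this patch): the caller computes the
// tensor result type itself and hands it to the op builder explicitly,
// instead of relying on the builder to infer it from the buffer type.
static Value wrapInToTensor(OpBuilder &b, Location loc, Value buffer) {
  // Helper name mirrors the diff above; the migration helpers mentioned in
  // the comment may end up with a different name or signature.
  Type tensorType =
      bufferization::detail::getTensorFromBuffer(buffer.getType());
  // Assumed builder overload taking an explicit result type; restrict and
  // writable are left unset here.
  return b.create<bufferization::ToTensorOp>(loc, tensorType, buffer,
                                             /*restrict=*/false,
                                             /*writable=*/false);
}
```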

https://github.com/llvm/llvm-project/pull/142986

