[Mlir-commits] [mlir] [MLIR] Preserve Encoding During TensorOp Creation (PR #80871)

Ian Bearman llvmlistbot at llvm.org
Mon Feb 19 14:32:25 PST 2024


================
@@ -1622,7 +1623,20 @@ CollapseShapeOp::inferCollapsedType(RankedTensorType type,
     currentDim += dim;
   }
 
-  return RankedTensorType::get(newShape, type.getElementType());
+  auto encoding = type.getEncoding();
+  if (auto v = encoding.dyn_cast_or_null<VerifiableTensorEncoding>()) {
----------------
manbearian wrote:

Dropping encodings through operations seems exceedingly dangerous and semantically incorrect, at least for the intended use cases in my compiler.

First, in the tensor representation, an operation such as "SliceOp" should produce a tensor with the same encoding as the original tensor.

Second, if we allow operations to simply drop encodings, then at least in my case bufferization will either fail (since it cannot reconcile the types) or produce suboptimal code that converts between the types using copies.

Is it possible that I'm completely misusing the encoding field? For our compiler, it contains information on how the tensor will be laid out in memory when allocated.
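
For illustration only (this is not code from the PR): a minimal sketch of forwarding the source encoding into the inferred result type rather than dropping it, assuming the encoding stays valid for the new shape. The helper name below is hypothetical.

  #include "mlir/IR/BuiltinTypes.h"

  using namespace mlir;

  // Hypothetical helper: build the collapsed tensor type while keeping the
  // source tensor's encoding attribute attached to the result.
  static RankedTensorType
  inferCollapsedTypeWithEncoding(RankedTensorType srcType,
                                 ArrayRef<int64_t> newShape) {
    // RankedTensorType::get accepts an optional encoding attribute;
    // forwarding srcType.getEncoding() preserves the layout information.
    return RankedTensorType::get(newShape, srcType.getElementType(),
                                 srcType.getEncoding());
  }
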

https://github.com/llvm/llvm-project/pull/80871

