[Mlir-commits] [mlir] [MLIR] Setting MemorySpace During Bufferization + Fixes (PR #78484)

Aart Bik llvmlistbot at llvm.org
Mon Feb 5 13:12:22 PST 2024


================
@@ -1622,7 +1623,20 @@ CollapseShapeOp::inferCollapsedType(RankedTensorType type,
     currentDim += dim;
   }
 
-  return RankedTensorType::get(newShape, type.getElementType());
+  auto encoding = type.getEncoding();
+  if (auto v = encoding.dyn_cast_or_null<VerifiableTensorEncoding>()) {
+    auto ignoreError = [&] {
+      auto emitter = mlir::emitError(UnknownLoc::get(type.getContext()));
+      emitter.abandon();
----------------
aartbik wrote:

Yeah, this touches on the sensitive topic that the "verifiable tensor encoding" can really be used for several things, including sparsity. We should probably migrate to a more specialized solution in the long run, but for now we typically do an audit of preserving the encoding where it makes sense.
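(The pattern under discussion is roughly "keep the encoding on the inferred type only if it still verifies against the new shape; otherwise drop it". A minimal standalone sketch of that idea, with an invented `TensorType` struct and a toy `verifyEncoding` standing in for MLIR's `RankedTensorType` and `VerifiableTensorEncoding`:)

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <vector>

// Hypothetical stand-in for a ranked tensor type carrying an
// optional encoding attribute (e.g. a sparsity layout).
struct TensorType {
  std::vector<int> shape;
  std::optional<std::string> encoding;
};

// Toy verifier, invented for illustration: pretend the encoding is
// only valid for tensors of rank >= 2.
bool verifyEncoding(const std::string &enc, const std::vector<int> &shape) {
  (void)enc;
  return shape.size() >= 2;
}

// Mirrors the shape of the change in the diff: propagate the source
// encoding to the collapsed type only when it still verifies against
// the collapsed shape; silently drop it otherwise.
TensorType inferCollapsedType(const TensorType &t, std::vector<int> newShape) {
  TensorType out{std::move(newShape), std::nullopt};
  if (t.encoding && verifyEncoding(*t.encoding, out.shape))
    out.encoding = t.encoding;
  return out;
}
```

(Again, the names and the rank check above are placeholders; the actual PR verifies via the `VerifiableTensorEncoding` interface with an abandoned diagnostic emitter so that a failed verification does not emit errors.)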

https://github.com/llvm/llvm-project/pull/78484

