[Mlir-commits] [mlir] [MLIR] Preserve Encoding During TensorOp Creation (PR #80871)
Aart Bik
llvmlistbot at llvm.org
Mon Feb 12 10:25:47 PST 2024
================
@@ -1622,7 +1623,20 @@ CollapseShapeOp::inferCollapsedType(RankedTensorType type,
currentDim += dim;
}
- return RankedTensorType::get(newShape, type.getElementType());
+ auto encoding = type.getEncoding();
+ if (auto v = encoding.dyn_cast_or_null<VerifiableTensorEncoding>()) {
+ auto ignoreError = [&] {
+ auto emitter = mlir::emitError(UnknownLoc::get(type.getContext()));
+ emitter.abandon();
+ return emitter;
+ };
+ if (failed(
----------------
aartbik wrote:
Sorry for the late reply. Yeah, by "error" I really meant "not propagating the encoding" (to a sparse guy, that is always an error, but I don't know why I said it that way here ;-)).
In principle, dropping sparsity is always "correct", i.e., we simply fall back to the dense state of things. Note that we have some nifty new ideas on how to efficiently change ranks (viz., viewing a sparse matrix as a sparse vector), but that should really be audited very carefully.
So this is good to go!
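To make the fallback concrete, here is a minimal sketch of what "fall back to dense" means for the collapsed type: keep the source encoding only if it still verifies against the new shape, otherwise build the result type without any encoding. This is not the exact code from the PR (the hunk above is truncated); the includes, the helper name getCollapsedTypeSketch, and the verifyEncoding call are assumptions on my part.

// Sketch only (assumed, not the exact PR code): propagate the tensor
// encoding to the collapsed type when it still verifies against the new
// shape; otherwise drop it and produce a plain dense tensor type.
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Diagnostics.h"
#include "mlir/IR/Location.h"
#include "mlir/IR/TensorEncoding.h"

using namespace mlir;

static RankedTensorType
getCollapsedTypeSketch(RankedTensorType type, ArrayRef<int64_t> newShape) {
  Attribute encoding = type.getEncoding();
  if (auto v = encoding.dyn_cast_or_null<VerifiableTensorEncoding>()) {
    // The encoding verifier expects a diagnostic factory; abandoning the
    // diagnostic keeps a failed check silent so the encoding can simply
    // be dropped instead of reporting an error.
    auto ignoreError = [&] {
      auto emitter = mlir::emitError(UnknownLoc::get(type.getContext()));
      emitter.abandon();
      return emitter;
    };
    if (failed(v.verifyEncoding(newShape, type.getElementType(), ignoreError)))
      encoding = Attribute(); // Dense fallback: no encoding on the result.
  }
  return RankedTensorType::get(newShape, type.getElementType(), encoding);
}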
https://github.com/llvm/llvm-project/pull/80871