[Mlir-commits] [mlir] [MLIR][Tensor] Perform shape inference via in-place modification (NFC) (PR #111593)
Han-Chung Wang
llvmlistbot at llvm.org
Tue Oct 8 16:51:29 PDT 2024
================
@@ -4332,21 +4332,24 @@ LogicalResult PackOp::canonicalize(PackOp packOp, PatternRewriter &rewriter) {
rewriter.create<tensor::CastOp>(loc, newSrcType, packOp.getSource());
}
Value dest = packOp.getDest();
+ Type originalResultType = dest.getType();
if (destShape != packOp.getDestType().getShape()) {
auto newDestType = packOp.getDestType().clone(destShape);
dest =
rewriter.create<tensor::CastOp>(loc, newDestType, packOp.getDest());
}
- auto clonedPackOp = cast<PackOp>(rewriter.clone(*packOp));
- Value res = clonedPackOp.getResult();
- rewriter.startOpModification(clonedPackOp);
- clonedPackOp.getSourceMutable().assign(source);
- clonedPackOp.getDestMutable().assign(dest);
- res.setType(dest.getType());
- rewriter.finalizeOpModification(clonedPackOp);
-
- rewriter.replaceOpWithNewOp<tensor::CastOp>(
- packOp, packOp.getResult().getType(), clonedPackOp);
+ rewriter.modifyOpInPlace(packOp, [&] {
+ packOp.getSourceMutable().assign(source);
+ packOp.getDestMutable().assign(dest);
+ packOp.getResult().setType(cast<RankedTensorType>(dest.getType()));
+ });
+ // Insert a cast if needed
+ if (originalResultType != dest.getType()) {
+ rewriter.setInsertionPointAfter(packOp);
+ auto castOp =
+ rewriter.create<tensor::CastOp>(loc, originalResultType, packOp);
+ rewriter.replaceAllUsesExcept(packOp, castOp, castOp);
+ }
----------------
hanhanW wrote:
> There is a clear preference upstream (both LLVM and MLIR I believe) to really avoid this kind of inefficiency.
I can believe that; I understand some people prefer it. I see it as a tradeoff: I don't like applying the strict rule to small pieces of code when the cost feels cheap. I don't have a clear rule beyond my intuition. For this case, I browsed the codebase and noticed that LinalgOp (which is an interface with a much wider scope than tensor.pack) has followed this style for at least 2-3 years, and this feels like a tiny thing with respect to tensor.pack. Again, I don't have a strong opinion and I'm not saying it is correct; I just want to share what I think. :)
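For anyone skimming the thread, here is a minimal sketch of the two rewriter styles being compared (the actual operand/result-type updates are elided; the `PatternRewriter` hooks are the same ones used in the diff above):

```cpp
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

// Clone-based update: materializes a second op, mutates the clone, then
// replaces the original. The extra op is the inefficiency under discussion.
static void updateByCloning(Operation *op, PatternRewriter &rewriter) {
  Operation *cloned = rewriter.clone(*op);
  rewriter.startOpModification(cloned);
  // ... assign new operands / result types on `cloned` ...
  rewriter.finalizeOpModification(cloned);
  rewriter.replaceOp(op, cloned->getResults());
}

// In-place update: mutates the op directly; the callback keeps the rewriter's
// listeners (e.g. the greedy driver) notified of the modification.
static void updateInPlace(Operation *op, PatternRewriter &rewriter) {
  rewriter.modifyOpInPlace(op, [&] {
    // ... assign new operands / result types on `op` ...
  });
}
```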
https://github.com/llvm/llvm-project/pull/111593