[Mlir-commits] [mlir] [MLIR] Fix incorrect memref::DimOp canonicalization, move it to tensor::DimOp (PR #84225)
Sayan Saha
llvmlistbot at llvm.org
Wed Mar 6 20:09:01 PST 2024
================
@@ -1069,39 +1069,6 @@ OpFoldResult DimOp::fold(FoldAdaptor adaptor) {
   return {};
 }
-namespace {
-/// Fold dim of a memref reshape operation to a load into the reshape's shape
-/// operand.
-struct DimOfMemRefReshape : public OpRewritePattern<DimOp> {
-  using OpRewritePattern<DimOp>::OpRewritePattern;
-
-  LogicalResult matchAndRewrite(DimOp dim,
-                                PatternRewriter &rewriter) const override {
-    auto reshape = dim.getSource().getDefiningOp<ReshapeOp>();
-
-    if (!reshape)
-      return failure();
-
-    // Place the load directly after the reshape to ensure that the shape memref
-    // was not mutated.
-    rewriter.setInsertionPointAfter(reshape);
-    Location loc = dim.getLoc();
-    Value load =
-        rewriter.create<LoadOp>(loc, reshape.getShape(), dim.getIndex());
----------------
sahas3 wrote:
> * `dim.getIndex()` is defined in a parent block of `reshape`.
This will require recursing through all parent blocks of `reshape` -- is that cheap enough to be part of canonicalization? Should this be its own pass instead of a canonicalization pattern?
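For a rough sense of the cost, a check along these lines (just a sketch; `isIndexDefinedAbove` is a hypothetical helper, not code from this patch) only walks the parent chain of `reshape`, so it is bounded by the region nesting depth rather than by the number of ops in the function:

```cpp
#include "mlir/IR/Block.h"
#include "mlir/IR/Operation.h"
#include "mlir/IR/Value.h"

using namespace mlir;

/// Hypothetical helper: returns true if `index` is defined in the same block
/// as `reshape` or in one of its ancestor blocks. The loop visits only the
/// parent chain of `reshape`, so its cost is proportional to the nesting
/// depth of the IR, not its size.
static bool isIndexDefinedAbove(Value index, Operation *reshape) {
  Block *defBlock = index.getParentBlock();
  for (Block *block = reshape->getBlock(); block != nullptr;) {
    if (block == defBlock)
      return true;
    Operation *parentOp = block->getParentOp();
    block = parentOp ? parentOp->getBlock() : nullptr;
  }
  return false;
}
```

Note that if the index is defined in the same block as `reshape`, an additional ordering check would still be needed before moving the insertion point after the reshape.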
https://github.com/llvm/llvm-project/pull/84225
More information about the Mlir-commits mailing list