[Mlir-commits] [mlir] [mlir][Linalg] Allow expand shape propagation across linalg ops with dynamic shapes. (PR #127943)
llvmlistbot at llvm.org
Wed Mar 12 22:03:32 PDT 2025
================
@@ -910,31 +875,31 @@ fuseWithReshapeByExpansion(LinalgOp linalgOp, Operation *reshapeOp,
          "preconditions for fuse operation failed");
   Location loc = linalgOp.getLoc();
-  // Check if reshape is expanding or collapsing.
-  auto expandingReshapeOp = dyn_cast<tensor::ExpandShapeOp>(*reshapeOp);
-  auto collapsingReshapeOp = dyn_cast<tensor::CollapseShapeOp>(*reshapeOp);
-  bool isExpanding = (expandingReshapeOp != nullptr);
-  RankedTensorType expandedType = isExpanding
-                                      ? expandingReshapeOp.getResultType()
-                                      : collapsingReshapeOp.getSrcType();
-  RankedTensorType collapsedType = isExpanding
-                                       ? expandingReshapeOp.getSrcType()
-                                       : collapsingReshapeOp.getResultType();
+  SmallVector<OpFoldResult> expandedShape, collapsedShape;
+  SmallVector<AffineMap, 4> reassociationIndices;
+  Value src;
+  if (auto expandingReshapeOp = dyn_cast<tensor::ExpandShapeOp>(reshapeOp)) {
+    // Try to move the dynamic dimensions in output shape before the `linalgOp`
+    // to maintain SSA validity
+    if (failed(moveValueDefinitions(
+            rewriter, expandingReshapeOp.getOutputShape(), linalgOp)))
+      return std::nullopt;
----------------
MaheshRavishankar wrote:
Yeah, this happens in many places in the IR. I don't have a good solution for it, though. In general it's just changing the position of some `tensor.dim` operations, which shouldn't matter too much.
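
To make the concern concrete, here is a minimal MLIR sketch of the situation being discussed (the function name, value names, and shapes are illustrative assumptions, not taken from the PR's tests). The `tensor.expand_shape` consumes the `linalg.generic` result, but some of its `output_shape` operands are defined after the generic op; fusing the reshape into the generic op needs those sizes available before it, which is what `moveValueDefinitions` arranges:

```mlir
func.func @fuse_by_expansion(%arg0: tensor<?x?xf32>, %init: tensor<?x?xf32>,
                             %sz: index) -> tensor<?x?x4xf32> {
  %0 = linalg.generic
         {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                           affine_map<(d0, d1) -> (d0, d1)>],
          iterator_types = ["parallel", "parallel"]}
         ins(%arg0 : tensor<?x?xf32>) outs(%init : tensor<?x?xf32>) {
  ^bb0(%in: f32, %out: f32):
    %neg = arith.negf %in : f32
    linalg.yield %neg : f32
  } -> tensor<?x?xf32>
  // %c0 and %d0 are defined after the generic op but depend only on %arg0,
  // so they can legally be moved above it.
  %c0 = arith.constant 0 : index
  %d0 = tensor.dim %arg0, %c0 : tensor<?x?xf32>
  %1 = tensor.expand_shape %0 [[0], [1, 2]] output_shape [%d0, %sz, 4]
         : tensor<?x?xf32> into tensor<?x?x4xf32>
  return %1 : tensor<?x?x4xf32>
}
```

After fusion, the expanded `linalg.generic` is materialized at `%0`'s position and its operands are reshaped using `%d0` and `%sz`, so the definitions of `%c0` and `%d0` end up above the new op. As noted above, only the textual position of the `tensor.dim` (and the constant) changes; the computed values are the same.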
https://github.com/llvm/llvm-project/pull/127943