[Mlir-commits] [mlir] [mlir][Linalg] Allow expand shape propagation across linalg ops with dynamic shapes. (PR #127943)
Han-Chung Wang
llvmlistbot at llvm.org
Wed Mar 12 14:50:24 PDT 2025
================
@@ -708,16 +683,28 @@ getIndexingMapInExpandedOp(OpBuilder &builder, AffineMap indexingMap,
/// Return the type of the operand/result to use in the expanded op given the
/// type in the original op.
-static RankedTensorType getExpandedType(RankedTensorType originalType,
- AffineMap indexingMap,
- const ExpansionInfo &expansionInfo) {
- SmallVector<int64_t> expandedShape;
+static std::tuple<SmallVector<OpFoldResult>, RankedTensorType>
+getExpandedShapeAndType(RankedTensorType originalType, AffineMap indexingMap,
+ const ExpansionInfo &expansionInfo) {
+ SmallVector<int64_t> expandedStaticShape;
+ SmallVector<OpFoldResult> expandedShape;
for (AffineExpr expr : indexingMap.getResults()) {
unsigned dim = cast<AffineDimExpr>(expr).getPosition();
- auto dimExpansion = expansionInfo.getExpandedShapeOfDim(dim);
+ ArrayRef<OpFoldResult> dimExpansion =
+ expansionInfo.getExpandedShapeOfDim(dim);
+ llvm::append_range(expandedStaticShape,
+ llvm::map_range(dimExpansion, [](OpFoldResult ofr) {
+ std::optional<int64_t> staticShape =
+ getConstantIntValue(ofr);
+ if (staticShape) {
+ return staticShape.value();
+ }
+ return ShapedType::kDynamic;
+ }));
----------------
hanhanW wrote:
I think it is cleaner if we do it outside the loop. Furthermore, you can do `std::tie(expandedStaticShape, std::ignore) = decomposeMixedValues(expandedShape)`. Using `decomposeMixedValues` is optional, but I think we at least want to move the `expandedStaticShape` construction outside the loop.
https://github.com/llvm/llvm-project/pull/127943