[Mlir-commits] [mlir] [DRAFT] Generalize expand_shape to take shape as explicit input (PR #69267)

llvmlistbot at llvm.org llvmlistbot at llvm.org
Fri Nov 10 18:59:20 PST 2023


================
@@ -230,24 +309,17 @@ LogicalResult mlir::reshapeLikeShapesAreCompatible(
     ArrayRef<ReassociationIndices> reassociationMaps, bool isExpandingReshape) {
   unsigned expandedDimStart = 0;
   for (const auto &map : llvm::enumerate(reassociationMaps)) {
-    std::optional<int64_t> dynamicShape;
+    bool foundDynamicShape = false;
     int64_t linearizedStaticShape = 1;
+
     for (const auto &dim : llvm::enumerate(
              expandedShape.slice(expandedDimStart, map.value().size()))) {
-      if (ShapedType::isDynamic(dim.value())) {
-        if (isExpandingReshape && dynamicShape) {
-          return emitError("invalid to have a single dimension (" +
-                           Twine(map.index()) +
----------------
MaheshRavishankar wrote:

I think this is generally increasing the expressive power of the expand_shape op. You are right, it allows you to represent "how" a dynamic dimension is split into multiple dynamic dimensions. Most transformations AFAIK should not care about this split. The load-bearing information is knowing how the dimensions are split, i.e. the information carried by the reassociation indices...
I wouldn't say it's making expand_shape more general purpose. You are still splitting a dimension into multiple dimensions; it's just that those multiple dimensions can now be dynamic. The information about the final output shape is now carried by the op, making the split deterministic. Not sure if that makes sense, but I can explain on a call if that helps.
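To sketch what this looks like in IR (illustrative only; the exact assembly syntax in this draft PR may differ from what finally lands), splitting one dynamic dimension into two dynamic dimensions becomes deterministic once the op carries the output shape explicitly. Here `%d0` and `%d1` are hypothetical SSA values supplying the two dynamic extents:

```mlir
// Split dim 0 of a ?x32 tensor into two dynamic dims.
// The reassociation [[0, 1], [2]] says dims 0 and 1 of the
// result come from dim 0 of the source; the explicit output
// shape (%d0, %d1, 32) pins down how that split is made.
%expanded = tensor.expand_shape %src [[0, 1], [2]]
    output_shape [%d0, %d1, 32]
    : tensor<?x32xf32> into tensor<?x?x32xf32>
```

Without the explicit output shape, a verifier has no way to validate (and a consumer no way to recover) how the single dynamic source dimension was factored into `%d0 * %d1`.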

https://github.com/llvm/llvm-project/pull/69267


More information about the Mlir-commits mailing list