[Mlir-commits] [mlir] [MLIR] Allowing unsupported conv2d op to fail gracefully from vectorization (PR #130181)

Andrzej Warzyński llvmlistbot at llvm.org
Mon Mar 10 13:47:33 PDT 2025


================
@@ -1939,6 +1939,22 @@ vectorizeInsertSliceOpPrecondition(tensor::InsertSliceOp sliceOp,
   return success();
 }
 
+static LogicalResult vectorizeConvOpPrecondition(linalg::LinalgOp convOp) {
+  // We only support 1D convolutions; reject all other cases.
+  if (isa<linalg::Conv2DNhwcHwcfOp, linalg::Conv2DNhwcFhwcOp,
+          linalg::Conv2DNchwFchwOp>(convOp)) {
+    LDBG("2D convolutions are not supported\n");
+    return failure();
+  }
+
+  if (isa<linalg::Conv3DNdhwcDhwcfOp, linalg::Conv3DNcdhwFcdhwOp>(convOp)) {
+    LDBG("3D convolutions are not supported\n");
+    return failure();
+  }
----------------
banach-space wrote:

No strong opinion, with a mild preference for the bare minimum (i.e. the `isa<>` approach), mostly because I have that clearly organised in my head (+ it's late for me 😅) and I am keen to unblock @jerryyin.

To me, the very existence of a constructor has always been counter-intuitive. Also, ideally, I'd like to avoid constructing anything in `vectorizePrecondition` (for better separation of concerns).

I do wonder, why not copy this into `vectorizePrecondition`:

```cpp
      // Determine whether `linalgOp` can be generated with this generator
      if (linalgOp.getNumDpsInputs() != 2 || linalgOp.getNumDpsInits() != 1)
        return;
      lhsShaped = linalgOp.getDpsInputOperand(0)->get();
      rhsShaped = linalgOp.getDpsInputOperand(1)->get();
      resShaped = linalgOp.getDpsInitOperand(0)->get();
      lhsShapedType = dyn_cast<ShapedType>(lhsShaped.getType());
      rhsShapedType = dyn_cast<ShapedType>(rhsShaped.getType());
      resShapedType = dyn_cast<ShapedType>(resShaped.getType());
      if (!lhsShapedType || !rhsShapedType || !resShapedType)
        return;
      // (LHS has dimension NCW/NWC and RES has dimension NFW/NCW/NWF/NWC) OR
      // (non-channeled convolution -> LHS and RHS both have single dimensions).
      if ((lhsShapedType.getRank() != 3 || resShapedType.getRank() != 3) &&
          (lhsShapedType.getRank() != 1 || resShapedType.getRank() != 1))
        return;

      Operation *reduceOp = matchLinalgReduction(linalgOp.getDpsInitOperand(0));
      if (!reduceOp)
        return;

      auto maybeKind = getCombinerOpKind(reduceOp);
      // Typically a convolution will have an `Add` CombiningKind, but for the
      // i1 type it can get strength-reduced to `OR`, which is also supported.
      // This strength-reduction logic is in the `buildBinaryFn` helper in the
      // Linalg dialect.
      if (!maybeKind || ((*maybeKind != vector::CombiningKind::ADD &&
                          *maybeKind != vector::CombiningKind::OR) &&
                         (oper != Pool || !isSupportedPoolKind(*maybeKind)))) {
        return;
      }
      reductionKind = maybeKind.value();

      auto rhsRank = rhsShapedType.getRank();
      switch (oper) {
      case Conv:
        if (rhsRank != 1 && rhsRank != 2 && rhsRank != 3)
          return;
        break;
      case Pool:
        if (rhsRank != 1)
          return;
        break;
      }
```
? And then repeat this in the constructor, but without all of the `if` statements (because `vectorizePrecondition` would make sure that all is good). Definitely not the most graceful approach; what @hanhanW proposes would be neater. But it avoids calling the constructor in `vectorizePrecondition`.
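Concretely, extending this PR's new `vectorizeConvOpPrecondition` could look something like the sketch below (rough and untested; it covers the `Conv` case only, and it assumes the file-local helpers quoted above, `matchLinalgReduction` and `getCombinerOpKind`, are in scope at that point in the file):

```cpp
// Rough sketch: hoist the constructor's structural checks into the
// precondition, so the constructor can assume they already hold.
static LogicalResult vectorizeConvOpPrecondition(linalg::LinalgOp convOp) {
  if (convOp.getNumDpsInputs() != 2 || convOp.getNumDpsInits() != 1)
    return failure();

  auto lhsShapedType =
      dyn_cast<ShapedType>(convOp.getDpsInputOperand(0)->get().getType());
  auto rhsShapedType =
      dyn_cast<ShapedType>(convOp.getDpsInputOperand(1)->get().getType());
  auto resShapedType =
      dyn_cast<ShapedType>(convOp.getDpsInitOperand(0)->get().getType());
  if (!lhsShapedType || !rhsShapedType || !resShapedType)
    return failure();

  // (LHS has dimension NCW/NWC and RES has dimension NFW/NCW/NWF/NWC) OR
  // (non-channeled convolution -> LHS and RHS both have single dimensions).
  if ((lhsShapedType.getRank() != 3 || resShapedType.getRank() != 3) &&
      (lhsShapedType.getRank() != 1 || resShapedType.getRank() != 1))
    return failure();

  Operation *reduceOp = matchLinalgReduction(convOp.getDpsInitOperand(0));
  if (!reduceOp)
    return failure();

  // `Add` is the common case; `OR` covers the i1 strength-reduced form.
  auto maybeKind = getCombinerOpKind(reduceOp);
  if (!maybeKind || (*maybeKind != vector::CombiningKind::ADD &&
                     *maybeKind != vector::CombiningKind::OR))
    return failure();

  // For `Conv`, the RHS (filter) rank must be 1, 2 or 3.
  int64_t rhsRank = rhsShapedType.getRank();
  if (rhsRank != 1 && rhsRank != 2 && rhsRank != 3)
    return failure();

  return success();
}
```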

Lastly, I strongly suspect that many of these checks are not needed - are there any convs that take something that's not a `ShapedType`?
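If the operands are indeed always tensors or memrefs (just a guess on my part, worth double-checking), the `dyn_cast` + null checks above could collapse into plain `cast`s:

```cpp
// Hypothetical simplification, valid only if conv operands are guaranteed
// to be ShapedType (i.e. tensor or memref):
lhsShapedType = cast<ShapedType>(lhsShaped.getType());
rhsShapedType = cast<ShapedType>(rhsShaped.getType());
resShapedType = cast<ShapedType>(resShaped.getType());
```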

Most of this was typed in a rush; I will be happy with whatever approach you take :) (should you decide to land it overnight)

https://github.com/llvm/llvm-project/pull/130181

