[Mlir-commits] [mlir] [mlir][linalg] Add scalable vectorisation for depthwise convolutions (PR #81625)

Benjamin Maxwell llvmlistbot at llvm.org
Thu Feb 15 08:22:29 PST 2024


================
@@ -3000,13 +3018,21 @@ struct Conv1DGenerator
   /// kw is always unrolled.
   /// TODO: w (resp. kw) is unrolled when the strideW ( resp. dilationW) is
   /// > 1.
-  FailureOr<Operation *> depthwiseConv(bool flatten) {
+  FailureOr<Operation *> depthwiseConv(uint64_t channelDimVecSize,
+                                       bool flatten) {
     if (!valid)
       return rewriter.notifyMatchFailure(op, "unvectorizable depthwise conv");
 
+    bool scalableChDim = false;
     int64_t nSize, wSize, cSize, kwSize;
     // kernel{kw, c}
     bindShapeDims(rhsShapedType, kwSize, cSize);
+    // Dynamic channel size implies scalable vectorisation
+    if (ShapedType::isDynamic(cSize)) {
+      assert(channelDimVecSize != 0 && "Channel dim vec size must be > 0");
+      cSize = channelDimVecSize;
+      scalableChDim = true;
+    }
----------------
MacDue wrote:

I wonder about this too (though I'm not very familiar with the linalg vectorizer). Couldn't you also use scalable vectors if your channel dimension was large (say 100 elements) but still a static size?

Would it be possible to control this by passing the scalable dims flags for the vector sizes, as is the case with matmuls?
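
For comparison, the matmul path already takes per-dimension scalability from the caller via the transform dialect. A rough sketch of what I mean (from memory, so the exact sizes and op names here are illustrative rather than copied from a test):

  // Match the matmul and vectorize it with explicit vector sizes;
  // the bracketed [4] marks that dimension as scalable, while the
  // plain sizes stay fixed-width.
  %matmul = transform.structured.match ops{["linalg.matmul"]} in %module
      : (!transform.any_op) -> !transform.any_op
  transform.structured.vectorize %matmul vector_sizes [2, [4], 1]
      : !transform.any_op

Something analogous for the channel dim here would let the caller opt in to scalable vectors without tying scalability to the size being dynamic.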

https://github.com/llvm/llvm-project/pull/81625

