[Mlir-commits] [mlir] [mlir][linalg] Restrict scalable vectorisation (PR #98639)

Andrzej Warzyński llvmlistbot at llvm.org
Mon Jul 15 08:19:54 PDT 2024


================
@@ -1936,26 +1936,79 @@ vectorizePadOpPrecondition(tensor::PadOp padOp,
   return success();
 }
 
-/// Preconditions for scalable vectors.
+/// Preconditions for scalable vectors. This is quite restrictive - it models
+/// the fact that in practice we would only make selected dimensions scalable.
 static LogicalResult
 vectorizeScalableVectorPrecondition(Operation *op,
                                     ArrayRef<int64_t> inputVectorSizes,
                                     ArrayRef<bool> inputScalableVecDims) {
   assert(inputVectorSizes.size() == inputScalableVecDims.size() &&
          "Number of input vector sizes and scalable dims doesn't match");
 
-  if (inputVectorSizes.empty())
-    return success();
+  size_t numOfScalableDims =
+      llvm::count_if(inputScalableVecDims, [](bool flag) { return flag; });
 
-  bool isScalable = inputScalableVecDims.back();
-  if (!isScalable)
+  if (numOfScalableDims == 0)
     return success();
 
-  // Only element-wise and 1d depthwise conv ops supported in the presence of
-  // scalable dims.
   auto linalgOp = dyn_cast<LinalgOp>(op);
-  return success(linalgOp && (isElementwise(linalgOp) ||
-                              isa<linalg::DepthwiseConv1DNwcWcOp>(op)));
+
+  // Cond 1: There's been no need for scalable vectorisation of
+  // non-linalg Ops so far
+  if (!linalgOp)
+    return failure();
+
+  // Cond 2: There's been no need for more than 2 scalable dims so far
+  if (numOfScalableDims > 2)
+    return failure();
+
+  // Cond 3: Look at the configuration in `inputScalableVecDims` and verify that
+  // it matches one of the supported cases:
+  //  1. exactly 1 dim is scalable and that's the _last_ parallel dim
----------------
banach-space wrote:

> This feels like a strong limitations. 

Agreed. But this is only meant to document what we've tried so far and hence what we "advertise" as supported, just to make it clear to everyone who'd like to try this.

Also, the current pre-conditions require updating: https://github.com/llvm/llvm-project/blob/93d7d9bfd4aede19dda0ebaf8aead12c2adbd13b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp#L1950-L1958

Let me decompose that. This snippet ignores the fact that non-trailing dims can be (and are) made scalable:
```cpp
  bool isScalable = inputScalableVecDims.back();
  if (!isScalable)
    return success();
```
And this one is missing `linalg.matmul_transpose_a` (I think that's misleading):

```cpp
  return success(linalgOp && (isElementwise(linalgOp) ||
                              isa<linalg::DepthwiseConv1DNwcWcOp>(op)));
```

> Using split reduction, we should be able to vectorize the K dimension in a matmul, right? 
> And any arbitrary generic op. 

Yes, that's the plan. And as @zhaoshiz mentioned, there's #97788 to enable reductions. One step at a time 😅 

https://github.com/llvm/llvm-project/pull/98639

