[Mlir-commits] [mlir] [MLIR][Linalg] Scalable Vectorization of Reduction (PR #97788)

Zhaoshi Zheng llvmlistbot at llvm.org
Thu Jul 11 10:38:38 PDT 2024


================
@@ -1942,13 +1951,30 @@ vectorizeScalableVectorPrecondition(Operation *op,
   if (inputVectorSizes.empty())
     return success();
 
+  auto linalgOp = dyn_cast<LinalgOp>(op);
+  if (linalgOp && isLinalgReduction(linalgOp)) {
+    LDBG("Checking reduce op dims for scalable vectorization\n");
+    auto iteratorTypes = linalgOp.getIteratorTypesArray();
+    assert(iteratorTypes.size() == inputScalableVecDims.size() &&
+           "Number of iterator types and input scalable dims mismatch");
+    // For now, only support scalable vectorization of a reduction on the
+    // trailing dim.
+    for (size_t i = 0; i < inputScalableVecDims.size() - 1; ++i) {
+      if (inputScalableVecDims[i] && isReductionIterator(iteratorTypes[i])) {
+        LDBG("Non-trailing reduction dim requested for scalable "
+             "vectorization\n");
+        return failure();
+      }
+    }
+    return success();
+  }
+
   bool isScalable = inputScalableVecDims.back();
----------------
zhaoshiz wrote:

> Indeed, feel free to remove (just add a note in the summary).

As mentioned in my earlier question: this check allows vector sizes such as [[4], [4], 1] to be applied without inspecting the type of the linalg op.
Simply removing it would break some useful cases, such as matmul. Making sure we accept every valid combination of vector sizes and op types, while rejecting the unsupported ones, is beyond the scope of this PR. I'm happy to work on that later.
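For context, the combination I have in mind is what the transform-dialect vectorize op would be handed. A rough sketch of the matmul case (illustrative only; %mm, %arg0 and the enclosing transform sequence are assumed, exact syntax as in the Linalg transform op tests):

  %mm = transform.structured.match ops{["linalg.matmul"]} in %arg0
          : (!transform.any_op) -> !transform.any_op
  transform.structured.vectorize %mm vector_sizes [[4], [4], 1] : !transform.any_op

Here the two parallel dims are scalable and the trailing (reduction) dim is static, so the trailing-dim check above lets it through without looking at the op kind.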

https://github.com/llvm/llvm-project/pull/97788

