[Mlir-commits] [mlir] [mlir][linalg] Vectorize directly to a named contraction (PR #147296)
Han-Chung Wang
llvmlistbot at llvm.org
Tue Jul 8 11:51:33 PDT 2025
================
@@ -2093,6 +2097,84 @@ vectorizeInsertSliceOpPrecondition(tensor::InsertSliceOp sliceOp,
return success();
}
+/// Vectorize a named linalg contraction op into:
+///   vector::TransferReadOp - Reads vectors from the operands
+///   vector::ContractionOp - Performs contraction
+///   vector::TransferWriteOp - Writes the result vector back to the
+///                             destination
+/// The operand shapes are preserved and loaded directly into vectors.
+/// Any further permutations or numerical casting remain within the
+/// contraction.
+static LogicalResult
+vectorizeAsLinalgContraction(RewriterBase &rewriter, VectorizationState &state,
+                             LinalgOp linalgOp,
+                             SmallVectorImpl<Value> &newResults) {
+  Location loc = linalgOp.getLoc();
+  MLIRContext *ctx = linalgOp.getContext();
+
+  if (!isa<ContractionOpInterface>(linalgOp.getOperation()))
+    return failure();
+
+  OpOperand *outOperand = linalgOp.getDpsInitOperand(0);
+  Operation *reduceOp = matchLinalgReduction(outOperand);
+  auto maybeKind = getCombinerOpKind(reduceOp);
+  if (!maybeKind)
+    return failure();
+
+  // Check that all dimensions are present in the input operands.
+  // Arbitrary broadcasts are not supported by the vector contraction.
+  // Broadcasts are expected to be materialized before vectorization.
----------------
hanhanW wrote:
I think I am missing the "materialize" context in the vectorization concept. Is it a blocker to making the option the default? What does materializing broadcasts mean? Is it breaking a matmul into something like `broadcast(LHS)->matmul`?
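For concreteness only, the `broadcast(LHS)->matmul` shape named in the question could be written as an explicit linalg.broadcast feeding a plain named op. This is an illustrative sketch of that reading with made-up shapes and value names, not something taken from the patch:

  // Hypothetical example: %lhs is rank-2 and is first broadcast along the
  // batch dimension, then consumed by an ordinary named contraction.
  %empty = tensor.empty() : tensor<2x4x8xf32>
  %lhs3d = linalg.broadcast ins(%lhs : tensor<4x8xf32>)
                            outs(%empty : tensor<2x4x8xf32>)
                            dimensions = [0]
  %res = linalg.batch_matmul
           ins(%lhs3d, %rhs : tensor<2x4x8xf32>, tensor<2x8x16xf32>)
           outs(%acc : tensor<2x4x16xf32>) -> tensor<2x4x16xf32>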
https://github.com/llvm/llvm-project/pull/147296
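For readers of the archive, the doc comment in the hunk above describes the overall shape this path emits: operands are read at their original shapes, permutations and casts stay inside the contraction, and the result is written back to the destination. A minimal sketch for a static-shape linalg.matmul follows; the indexing maps, in_bounds flags, and zero padding value are illustrative assumptions, not output copied from the patch:

  %c0 = arith.constant 0 : index
  %pad = arith.constant 0.0 : f32

  // Read each operand at its original shape.
  %lhs = vector.transfer_read %A[%c0, %c0], %pad {in_bounds = [true, true]}
           : tensor<4x8xf32>, vector<4x8xf32>
  %rhs = vector.transfer_read %B[%c0, %c0], %pad {in_bounds = [true, true]}
           : tensor<8x16xf32>, vector<8x16xf32>
  %acc = vector.transfer_read %C[%c0, %c0], %pad {in_bounds = [true, true]}
           : tensor<4x16xf32>, vector<4x16xf32>

  // The contraction carries the indexing maps and the combiner kind.
  %res = vector.contract {
           indexing_maps = [affine_map<(d0, d1, d2) -> (d0, d2)>,
                            affine_map<(d0, d1, d2) -> (d2, d1)>,
                            affine_map<(d0, d1, d2) -> (d0, d1)>],
           iterator_types = ["parallel", "parallel", "reduction"],
           kind = #vector.kind<add>}
           %lhs, %rhs, %acc
           : vector<4x8xf32>, vector<8x16xf32> into vector<4x16xf32>

  // Write the result vector back to the destination tensor.
  %out = vector.transfer_write %res, %C[%c0, %c0] {in_bounds = [true, true]}
           : vector<4x16xf32>, tensor<4x16xf32>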