[Mlir-commits] [mlir] [mlir][linalg] Enable masked vectorisation for depthwise convolutions (PR #81625)
Benjamin Maxwell
llvmlistbot at llvm.org
Tue Mar 5 05:36:31 PST 2024
================
@@ -3000,13 +3030,23 @@ struct Conv1DGenerator
   /// kw is always unrolled.
   /// TODO: w (resp. kw) is unrolled when the strideW ( resp. dilationW) is
   /// > 1.
-  FailureOr<Operation *> depthwiseConv(bool flatten) {
+  FailureOr<Operation *> depthwiseConv(uint64_t channelDimVecSize,
+                                       bool channelDimScalableFlag,
+                                       bool flatten) {
     if (!valid)
       return rewriter.notifyMatchFailure(op, "unvectorizable depthwise conv");
+    bool scalableChDim = false;
+    bool useMasking = false;
     int64_t nSize, wSize, cSize, kwSize;
     // kernel{kw, c}
     bindShapeDims(rhsShapedType, kwSize, cSize);
+    if (ShapedType::isDynamic(cSize)) {
+      assert(channelDimVecSize != 0 && "Channel dim vec size must be > 0");
+      cSize = channelDimVecSize;
+      scalableChDim = channelDimScalableFlag;
----------------
MacDue wrote:
Maybe add a note here that scalable vectors are only used if the channel dim is dynamic and `channelDimScalableFlag` is set.
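As a rough illustration of the behaviour being discussed (this is a sketch, not taken from the PR's test files; the payload shapes and the exact `vector_sizes` entries are assumptions), a depthwise conv with a dynamic channel dimension could be vectorised from the transform dialect along these lines, where the bracketed `[4]` marks the channel dimension as scalable:

  func.func @dw_conv_dynamic_channels(%input: memref<1x10x?xf32>,
                                      %filter: memref<3x?xf32>,
                                      %output: memref<1x8x?xf32>) {
    linalg.depthwise_conv_1d_nwc_wc
      {dilations = dense<1> : tensor<1xi64>, strides = dense<1> : tensor<1xi64>}
      ins(%input, %filter : memref<1x10x?xf32>, memref<3x?xf32>)
      outs(%output : memref<1x8x?xf32>)
    return
  }

  module attributes {transform.with_named_sequence} {
    transform.named_sequence @__transform_main(%arg0: !transform.any_op {transform.readonly}) {
      %0 = transform.structured.match ops(["linalg.depthwise_conv_1d_nwc_wc"])
          in %arg0 : (!transform.any_op) -> !transform.any_op
      // Fixed vector sizes for n, w and kw; a scalable size for the dynamic
      // channel dim, which is what drives the masked code path above.
      transform.structured.vectorize %0 vector_sizes [1, 8, [4], 3] : !transform.any_op
      transform.yield
    }
  }

Running something like this through mlir-opt's transform interpreter should, assuming the sizes above satisfy the precondition checks, produce masked transfer_read/transfer_write ops on vector<1x8x[4]xf32>-style types, with scalable vectors used only because the channel dim is dynamic and its size is flagged scalable.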
https://github.com/llvm/llvm-project/pull/81625
More information about the Mlir-commits mailing list