[Mlir-commits] [mlir] [mlir][linalg][conv] Flatten the channel dimension when vectorizing (PR #71918)
Diego Caballero
llvmlistbot at llvm.org
Wed Nov 15 11:58:23 PST 2023
dcaballe wrote:
Should we schedule a meeting to discuss this? I agree with Nicolas that this might be a one-off case, but I also acknowledge that we have been looking at multiple options with significant drawbacks, and we have to find a way to move this forward.
A few replies:
> but that leads to either:
> - linalg.generics with indexing maps which are not permutations (does not vectorize), or
> - strided loads (which are bound to be slow).
Ultimately, this is something we may want to support. Supporting semi-affine maps should be somewhat challenging but not extremely difficult (I already played a bit with this), and I think it has been a long-standing TODO. Strided loads are another TODO in general, as we currently generate gather ops even for loads with a short static stride.
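To make the strided-load point concrete, here is a minimal sketch of the kind of IR we end up with today for a strided read; the source buffer `%mem`, the stride of 2, and the vector length of 4 are made up for illustration:

```mlir
// Hypothetical stride-2 read of 4 f32 elements starting at offset 0 from
// an assumed buffer %mem : memref<?xf32>. The non-contiguous access is
// expressed as a gather rather than a contiguous or strided vector load.
%c0   = arith.constant 0 : index
%offs = arith.constant dense<[0, 2, 4, 6]> : vector<4xindex>
%mask = arith.constant dense<true> : vector<4xi1>
%pad  = arith.constant dense<0.0> : vector<4xf32>
%v = vector.gather %mem[%c0][%offs], %mask, %pad
    : memref<?xf32>, vector<4xindex>, vector<4xi1>, vector<4xf32> into vector<4xf32>
```

On most targets a gather is typically much more expensive than a contiguous load (or two) followed by a shuffle, which is why lowering short static strides to something better remains a TODO.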
> we wouldn't be adding any new cases to the vectorizer (is that the design goal?),
I think the ultimate goal is to move the whole convolution decomposition step before the vectorizer, but nobody has been brave enough to do it :). I think Nicolas' rewrite suggestion goes along these lines.
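For reference, a rough transform-dialect sketch of that ordering (decompose at the linalg level first, vectorize afterwards) could look like the following; the conv flavor being matched and the assumption that the existing `transform.structured.decompose` patterns cover the cases discussed in this PR are illustrative only:

```mlir
transform.sequence failures(propagate) {
^bb0(%module: !transform.any_op):
  // Rewrite the convolutions into simpler ops before any vectorization.
  %convs = transform.structured.match ops{["linalg.conv_2d_nhwc_hwcf"]} in %module
      : (!transform.any_op) -> !transform.any_op
  %decomposed = transform.structured.decompose %convs
      : (!transform.any_op) -> !transform.any_op
  // Only then run the generic vectorizer over the enclosing functions.
  %funcs = transform.structured.match ops{["func.func"]} in %module
      : (!transform.any_op) -> !transform.any_op
  %vectorized = transform.structured.vectorize_children_and_apply_patterns %funcs
      : (!transform.any_op) -> !transform.any_op
}
```

The point is just the ordering: once the convolutions have already been rewritten into ops the generic vectorizer understands, the vectorizer itself would not need any conv-specific special-casing.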
I haven't looked much into this mechanism, but there is also the possibility of adding this specialization as a custom vectorization hook pattern. I'm not sure we still use that mechanism, but it's still there in the vectorizer, so it should work. Perhaps we could extend the API so users can provide external ad-hoc vectorization patterns? (Just a random thought; perhaps this is not even feasible.)
Hopefully that helps!
https://github.com/llvm/llvm-project/pull/71918