[Mlir-commits] [mlir] [linalg] Add quantized version of `conv_3d_ncdhw_fcdhw` (PR #113953)
Felix Schneider
llvmlistbot at llvm.org
Wed Oct 30 01:44:52 PDT 2024
ubfx wrote:
> @ubfx is there a way around adding a new operator?
We can of course live with the transposition arrangement for a while; the 3d convolution in particular is probably less important. But we also have the 2d depthwise convolution, which currently has no quantized channel-first named Op available and which we would ideally like to have a named representation for.
Just to be clear - I would also prefer having a single `quantized_conv3d` Op (or similar) that captures the whole set of dimension orderings. It's just that the `linalg` convolution Ops are probably among the Ops with the most downstream uses, so such an op would likely need a much more careful and elaborate design than I could provide. Of course, if such an op or concept existed, I'd be happy to port our uses in torch-mlir accordingly.
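To illustrate the transposition arrangement being discussed, here is a rough sketch (shapes simplified, zero-point operands as plain `i32`, and the new op name assumed from the PR title): a channel-first quantized 3d convolution today has to be expressed by transposing into the layout of the existing `linalg.conv_3d_ndhwc_dhwcf_q` and transposing the result back.

```mlir
// Status quo (sketch): transpose NCDHW input and FCDHW filter to the
// channels-last layouts of the existing quantized named op, convolve,
// then transpose the result back. %e0/%e1/%e2/%acc are suitable inits.
%in_t = linalg.transpose ins(%in : tensor<1x4x8x8x8xi8>)
          outs(%e0 : tensor<1x8x8x8x4xi8>) permutation = [0, 2, 3, 4, 1]
%f_t  = linalg.transpose ins(%f : tensor<2x4x3x3x3xi8>)
          outs(%e1 : tensor<3x3x3x4x2xi8>) permutation = [2, 3, 4, 1, 0]
%res  = linalg.conv_3d_ndhwc_dhwcf_q
          ins(%in_t, %f_t, %izp, %fzp : tensor<1x8x8x8x4xi8>,
              tensor<3x3x3x4x2xi8>, i32, i32)
          outs(%acc : tensor<1x6x6x6x2xi32>) -> tensor<1x6x6x6x2xi32>
%out  = linalg.transpose ins(%res : tensor<1x6x6x6x2xi32>)
          outs(%e2 : tensor<1x2x6x6x6xi32>) permutation = [0, 4, 1, 2, 3]

// With a channel-first quantized named op (name assumed), the
// transposes disappear:
%out2 = linalg.conv_3d_ncdhw_fcdhw_q
          ins(%in, %f, %izp, %fzp : tensor<1x4x8x8x8xi8>,
              tensor<2x4x3x3x3xi8>, i32, i32)
          outs(%acc2 : tensor<1x2x6x6x6xi32>) -> tensor<1x2x6x6x6xi32>
```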
> Avoiding unnecessary transposition is good, but will this convolution get vectorized? If not, does this make any difference? Without vectorization, things will be slow _with_ and _without_ transposing.
I'm not sure if/how well vectorization is performed on the CPU backend. Personally, I use linalg graphs lowered from torch IR as a starting point for compilation to a custom dataflow accelerator. Graph partitioning becomes increasingly difficult when additional transposition ops at the inputs and outputs of the conv ops have to be considered. In quantized graphs, DQ/Q ops might also sneak in between the convolution and the transpositions, which makes pattern matching even more annoying and less reliable. And if an un-fused transposition actually makes it into the final partitioned graph, that's a huge cost on my hardware.
I'm not trying to hijack `linalg` for my specific use case - but it does feel like the abstract goal of "avoiding unnecessary transpositions" is a general good, even if some backends and lowering paths benefit only minimally.
https://github.com/llvm/llvm-project/pull/113953
More information about the Mlir-commits mailing list