[Mlir-commits] [mlir] [mlir][linalg] Add quantized conv2d operator with FCHW, NCHW order (PR #107740)
Felix Schneider
llvmlistbot at llvm.org
Thu Sep 19 05:34:04 PDT 2024
ubfx wrote:
For context: In torch-mlir, we currently have to use an additional transposition on weights and inits for all quantized convolutions. This is because we have no fitting quantized convolution op
https://github.com/llvm/torch-mlir/blob/5ce48dfacd971e5075786731bac2152ae855cab4/lib/Conversion/TorchToLinalg/Linear.cpp#L1165
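To illustrate, here is a rough MLIR sketch of what that lowering looks like today versus what this PR would allow. The shapes and SSA names are made up, and linalg.conv_2d_nchw_fchw_q is only my assumed spelling for the new op.

// Rough sketch of the current lowering: the NCHW input and FCHW weights
// are transposed so the existing linalg.conv_2d_nhwc_hwcf_q can be used.
// (In practice the NHWC result then has to be transposed back to NCHW.)
func.func @workaround(%in_nchw : tensor<1x3x32x32xi8>,
                      %w_fchw : tensor<16x3x3x3xi8>,
                      %izp : i32, %wzp : i32,
                      %acc_nhwc : tensor<1x30x30x16xi32>) -> tensor<1x30x30x16xi32> {
  %in_e = tensor.empty() : tensor<1x32x32x3xi8>
  %in_nhwc = linalg.transpose ins(%in_nchw : tensor<1x3x32x32xi8>)
               outs(%in_e : tensor<1x32x32x3xi8>) permutation = [0, 2, 3, 1]
  %w_e = tensor.empty() : tensor<3x3x3x16xi8>
  %w_hwcf = linalg.transpose ins(%w_fchw : tensor<16x3x3x3xi8>)
              outs(%w_e : tensor<3x3x3x16xi8>) permutation = [2, 3, 1, 0]
  %r = linalg.conv_2d_nhwc_hwcf_q
         {dilations = dense<1> : tensor<2xi64>, strides = dense<1> : tensor<2xi64>}
         ins(%in_nhwc, %w_hwcf, %izp, %wzp
             : tensor<1x32x32x3xi8>, tensor<3x3x3x16xi8>, i32, i32)
         outs(%acc_nhwc : tensor<1x30x30x16xi32>) -> tensor<1x30x30x16xi32>
  return %r : tensor<1x30x30x16xi32>
}

// With a quantized NCHW/FCHW variant, the convolution takes the
// torch-side layouts directly and the transposes disappear.
func.func @with_new_op(%in_nchw : tensor<1x3x32x32xi8>,
                       %w_fchw : tensor<16x3x3x3xi8>,
                       %izp : i32, %wzp : i32,
                       %acc_nchw : tensor<1x16x30x30xi32>) -> tensor<1x16x30x30xi32> {
  %r = linalg.conv_2d_nchw_fchw_q
         {dilations = dense<1> : tensor<2xi64>, strides = dense<1> : tensor<2xi64>}
         ins(%in_nchw, %w_fchw, %izp, %wzp
             : tensor<1x3x32x32xi8>, tensor<16x3x3x3xi8>, i32, i32)
         outs(%acc_nchw : tensor<1x16x30x30xi32>) -> tensor<1x16x30x30xi32>
  return %r : tensor<1x16x30x30xi32>
}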
https://github.com/llvm/llvm-project/pull/107740