[Mlir-commits] [mlir] Add Vector bitwidth target to Linearize Vectorizable and Constant Ops (PR #83314)
Diego Caballero
llvmlistbot at llvm.org
Fri Mar 1 10:20:37 PST 2024
dcaballe wrote:
> Out of curiosity, is this for CPU codegen? When do we need it? I thought that we wanted to get rid of shape_cast. Instead, we want the memref.collapse_shape version? Is it for "scalar loads/stores" + "full utilization of vector computation"?
`memref.collapse_shape` only applies to memrefs, i.e., to the sources of the transfer ops. This pass is about "collapsing" the vectors themselves. If we only flatten the transfer ops but not their producers/consumers, we end up with `vector.shape_cast` ops that won't be optimized away. The `vector.shape_cast` ops introduced by this pass should cancel out with the ones introduced by the transfer read/write flattening pass.
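A rough sketch of what that cancellation looks like (hypothetical IR, not taken from the PR; the 4x4xf32 shape, value names, and ops are illustrative only). After the transfer-op flattening pass alone, the read is rewritten to 1-D and a `vector.shape_cast` restores the original 2-D type for the consumer:

```mlir
// Only the transfer op has been flattened; the consumer is still 2-D.
%collapsed = memref.collapse_shape %src [[0, 1]] : memref<4x4xf32> into memref<16xf32>
%v1d = vector.transfer_read %collapsed[%c0], %pad : memref<16xf32>, vector<16xf32>
%v2d = vector.shape_cast %v1d : vector<16xf32> to vector<4x4xf32>
%r2d = arith.addf %v2d, %v2d : vector<4x4xf32>
```

Once the producer/consumer ops are linearized as well, the consumer operates on the 1-D value directly, and the back-to-back `shape_cast` pair (1-D -> 2-D -> 1-D) folds away:

```mlir
// Consumer linearized too; the shape_cast ops cancel out.
%collapsed = memref.collapse_shape %src [[0, 1]] : memref<4x4xf32> into memref<16xf32>
%v1d = vector.transfer_read %collapsed[%c0], %pad : memref<16xf32>, vector<16xf32>
%r1d = arith.addf %v1d, %v1d : vector<16xf32>
```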
https://github.com/llvm/llvm-project/pull/83314