[Mlir-commits] [mlir] [mlir][vector] add tensor.concat, bitcast, expand_shape, collapse_shape vectorization support (PR #97297)
Han-Chung Wang
llvmlistbot at llvm.org
Tue Jul 9 10:19:03 PDT 2024
hanhanW wrote:
I'm not sure it is a good idea to vectorize expand_shape/collapse_shape into vector.shape_cast, because the tensor versions have no data-movement behavior while vector.shape_cast does. (The doc of vector.shape_cast is outdated.) Is there an RFC or discussion about why we are adding vectorization patterns for these ops? In our use cases, we handle all of this at the graph level, so we don't really need these vectorizations.
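To make the concern concrete, here is a minimal sketch of the kind of rewrite being discussed (the shapes are made up for illustration):

```mlir
// Tensor-level reshape: a metadata-only change on the tensor, no data movement.
%collapsed = tensor.collapse_shape %t [[0, 1]]
    : tensor<2x4xf32> into tensor<8xf32>

// The vector-level counterpart such a vectorization pattern would produce;
// as noted above, vector.shape_cast can imply data-movement semantics.
%cast = vector.shape_cast %v : vector<2x4xf32> to vector<8xf32>
```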
Also, can you add a high-level overview to the PR description? I'm sorry that I can't look at the implementation details because of my own priorities, so I can't advise much right now. However, having an overview of a big change (+529 -2) is generally helpful for reviewers and others.
On the other hand, concat and bitcast are new to me. Looking at the docs, tensor.bitcast is not equivalent to vector.bitcast. E.g., `%0 = tensor.bitcast %arg0 : tensor<2x4xi8> to tensor<2x2xi16>` is not valid, but `%0 = vector.bitcast %arg0 : vector<2x4xi8> to vector<2x2xi16>` is valid. IMO, tensor/linalg ops have higher-level semantics than vector ops. Lowering a tensor op to vector ops while adding more semantics looks bad to me.
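A small sketch of that difference, with hypothetical operands:

```mlir
// tensor.bitcast only reinterprets the element type at the same bit width;
// the shape stays unchanged.
%a = tensor.bitcast %arg0 : tensor<2x4xui8> to tensor<2x4xi8>

// vector.bitcast may additionally resize the innermost dimension, as long as
// the total bit count is preserved.
%b = vector.bitcast %arg1 : vector<2x4xi8> to vector<2x2xi16>
```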
https://github.com/llvm/llvm-project/pull/97297