[Mlir-commits] [mlir] [mlir][linalg] Add support for masked vectorization of `tensor.insert_slice` (1/N) (PR #122927)

Han-Chung Wang llvmlistbot at llvm.org
Thu Jan 23 05:09:00 PST 2025


================
@@ -59,6 +59,38 @@ vectorizeConvolution(RewriterBase &rewriter, LinalgOp convOp,
                      ArrayRef<bool> inputVecScalableFlags = {},
                      bool flatten1DDepthwiseConv = false);
 
+/// Vectorize tensor::InsertSliceOp with:
+///   * vector::TransferReadOp + vector::TransferWriteOp
+/// The vector sizes are either:
+///   * user-provided in `inputVectorSizes`, or
+///   * inferred from the static dims in the input and output tensors.
+/// Bails out if:
+///   * vector sizes are not user-provided, and
+///   * at least one dim is dynamic (in both the input and output tensors).
+///
+/// Before:
+///     !t_in_type = tensor<1x2x3xf32>
+///     !t_out_type = tensor<9x8x7x1x2x3xf32>
+///     !v_type = vector<1x2x3xf32>
+///     %inserted_slice = tensor.insert_slice %src into %dest ... : !t_in_type
+///     into !t_out_type
+/// After:
+///     %read = vector.transfer_read %src[...], %pad ... : !t_in_type, !v_type
+///     %write = vector.transfer_write %read, %dest ... : !v_type, !t_out_type
----------------
hanhanW wrote:

Thanks, it makes a lot of sense to me now. Somehow I missed that vector.transfer_write ops write a vector today. 😅

Re `inputVectorSizes`: yes, those are the options. The point was that the input vector sizes should be inferable from the IR if possible. And I did not know that both the read and the write use the same mask! The second patch does answer my questions. Thanks for all the details! 🙏
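For anyone else following along, here is a rough sketch of what I now understand the masked form to look like (not copied from the patch; the operands, offsets, and mask shape are illustrative), with a single mask guarding both the `vector.transfer_read` of `%src` and the `vector.transfer_write` into `%dest`:

```mlir
// Hypothetical IR sketch: one mask shared by the read and the write.
%mask = vector.create_mask %d0, %d1, %d2 : vector<1x2x3xi1>
%read = vector.mask %mask {
  vector.transfer_read %src[%c0, %c0, %c0], %pad
      : tensor<1x2x3xf32>, vector<1x2x3xf32>
} : vector<1x2x3xi1> -> vector<1x2x3xf32>
%write = vector.mask %mask {
  vector.transfer_write %read, %dest[%off0, %off1, %off2, %c0, %c0, %c0]
      : vector<1x2x3xf32>, tensor<9x8x7x1x2x3xf32>
} : vector<1x2x3xi1> -> tensor<9x8x7x1x2x3xf32>
```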

https://github.com/llvm/llvm-project/pull/122927
