[Mlir-commits] [mlir] [mlir][vector] Sink vector.extract/splat into load/store ops (PR #134389)
Diego Caballero
llvmlistbot at llvm.org
Mon Apr 14 10:55:55 PDT 2025
================
@@ -161,6 +161,20 @@ void populateVectorTransferCollapseInnerMostContiguousDimsPatterns(
void populateSinkVectorOpsPatterns(RewritePatternSet &patterns,
PatternBenefit benefit = 1);
+/// Patterns that remove redundant vector ops by merging them with load/store
+/// ops. For example:
+/// ```
+/// %0 = vector.load %arg0[%arg1] : memref<?xf32>, vector<4xf32>
+/// %1 = vector.extract %0[1] : f32 from vector<4xf32>
+/// ```
+/// gets converted to:
+/// ```
+/// %c1 = arith.constant 1 : index
+/// %0 = arith.addi %arg1, %c1 overflow<nsw> : index
+/// %1 = memref.load %arg0[%0] : memref<?xf32>
----------------
dcaballe wrote:
> I don't think we actually need any special handling or tests for sub-byte types. The only ways we can have a load of vector<8xi1> are either loading from memref<...xi1>, for which the semantics are fully consistent, or loading from memref<...xvector<8xi1>>, which is ignored by the current pattern.
I'd be surprised if there is no issue with the data layout: the vector load assumes a packed layout, while the scalar one would be unpacked. Looking at the generated LLVM IR for both cases would help.
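For illustration only (a hypothetical sketch, not taken from the PR), the sub-byte concern can be seen by comparing the two forms the pattern would relate, assuming a backend that packs `vector<8xi1>` into a single byte while addressing scalar `i1` elements byte-wise:

```mlir
// Vector form: if the lowering treats vector<8xi1> as a packed <8 x i1>
// (one byte of memory), this reads 8 bits from a single byte.
%v = vector.load %mem[%c0] : memref<8xi1>, vector<8xi1>
%b = vector.extract %v[3] : i1 from vector<8xi1>

// Scalar form after sinking: if each i1 element of memref<8xi1> is
// byte-addressed (unpacked), this reads a different memory location
// than bit 3 of the packed byte above.
%s = memref.load %mem[%c3] : memref<8xi1>
```

Whether the two forms actually diverge depends on how the memref and vector lowerings choose the `i1` storage layout, which is why comparing the generated LLVM IR for both cases is the suggested check.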
https://github.com/llvm/llvm-project/pull/134389