[Mlir-commits] [mlir] [mlir][vector] Sink vector.extract/splat into load/store ops (PR #134389)

Andrzej WarzyƄski llvmlistbot at llvm.org
Sun Apr 13 08:12:29 PDT 2025


================
@@ -161,6 +161,20 @@ void populateVectorTransferCollapseInnerMostContiguousDimsPatterns(
 void populateSinkVectorOpsPatterns(RewritePatternSet &patterns,
                                    PatternBenefit benefit = 1);
 
+/// Patterns that remove redundant Vector Ops by merging them with load/store
+/// ops
+/// ```
+/// %0 = vector.load %arg0[%arg1] : memref<?xf32>, vector<4xf32>
+/// vector.extract %0[1] : f32 from vector<4xf32>
+/// ```
+/// Gets converted to:
+/// ```
+/// %c1 = arith.constant 1 : index
+/// %0 = arith.addi %arg1, %c1 overflow<nsw> : index
+/// %1 = memref.load %arg0[%0] : memref<?xf32>
----------------
banach-space wrote:

Applying this pattern to a vector of bits would lead to `memref.load %src[%idx] : memref<8xi1>`, i.e. a load of a single bit. That doesn't feel sane.

Also, in cases like this:
```mlir
%x = vector.load ... : vector<8xi1>
%y = vector.extract %x[5] : i1 from vector<8xi1>
```
the vector load is probably lowered to a scalar load anyway.

My suggestion is to restrict this pattern to multi-byte element types (*) and rely on "narrow-type-emulation" to help with sub-bytes.

(*) Multi-byte: at least one byte wide.

https://github.com/llvm/llvm-project/pull/134389
