[llvm] [RISCV][TTI] Enable masked interleave vectorization (PR #150074)

Luke Lau via llvm-commits llvm-commits at lists.llvm.org
Tue Jul 22 19:55:06 PDT 2025


================
@@ -979,12 +979,14 @@ InstructionCost RISCVTTIImpl::getInterleavedMemoryOpCost(
     Align Alignment, unsigned AddressSpace, TTI::TargetCostKind CostKind,
     bool UseMaskForCond, bool UseMaskForGaps) const {
 
-  // The interleaved memory access pass will lower interleaved memory ops (i.e
-  // a load and store followed by a specific shuffle) to vlseg/vsseg
-  // intrinsics.
-  if (!UseMaskForCond && !UseMaskForGaps &&
+  auto *VTy = cast<VectorType>(VecTy);
+
+  // The interleaved memory access pass will lower (de)interleave ops combined
+  // with an adjacent appropriate memory op to vlseg/vsseg intrinsics.  We
+  // currently only support masking for the scalable path. vlseg/vsseg only
+  // support masking per-iteration (i.e. condition), not per-segment (i.e. gap).
+  if ((VTy->isScalableTy() || !UseMaskForCond) && !UseMaskForGaps &&
----------------
lukel97 wrote:

To make sure I'm understanding this right: we do support fixed-length deinterleave/interleave intrinsics, i.e. something like

```llvm
define {<8 x i8>, <8 x i8>, <8 x i8>, <8 x i8>} @masked_load_factor4_mask(ptr %p, <8 x i1> %mask) {
; CHECK-LABEL: masked_load_factor4_mask:
; CHECK:       # %bb.0:
; CHECK-NEXT:    vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT:    vlseg4e8.v v8, (a0), v0.t
; CHECK-NEXT:    ret
  %interleaved.mask = tail call <32 x i1> @llvm.vector.interleave4.v32i1(<8 x i1> %mask, <8 x i1> %mask, <8 x i1> %mask, <8 x i1> %mask)
  %vec = call <32 x i8> @llvm.masked.load.v32i8.p0(ptr %p, i32 4, <32 x i1> %interleaved.mask, <32 x i8> poison)
  %deinterleaved.results = call {<8 x i8>, <8 x i8>, <8 x i8>, <8 x i8>} @llvm.vector.deinterleave4.v32i8(<32 x i8> %vec)
  ret {<8 x i8>, <8 x i8>, <8 x i8>, <8 x i8>} %deinterleaved.results
}
```

will get lowered to a vlseg. It's just that we don't currently match a masked.load/store with shufflevector [de]interleaves?
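
For contrast, here's a minimal sketch of the shufflevector form of that kind of pattern (factor 2, hypothetical function name, and assuming the mask on the wide masked.load is the lane-replicated per-iteration mask, as in the intrinsic example above):

```llvm
; Hypothetical: masked wide load whose result is split with strided
; shufflevectors instead of llvm.vector.deinterleave2.
define {<8 x i8>, <8 x i8>} @masked_load_factor2_shuffle(ptr %p, <16 x i1> %mask) {
  %vec = call <16 x i8> @llvm.masked.load.v16i8.p0(ptr %p, i32 4, <16 x i1> %mask, <16 x i8> poison)
  ; Field 0 (even lanes) and field 1 (odd lanes) of the interleaved data.
  %v0 = shufflevector <16 x i8> %vec, <16 x i8> poison, <8 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14>
  %v1 = shufflevector <16 x i8> %vec, <16 x i8> poison, <8 x i32> <i32 1, i32 3, i32 5, i32 7, i32 9, i32 11, i32 13, i32 15>
  %res0 = insertvalue {<8 x i8>, <8 x i8>} poison, <8 x i8> %v0, 0
  %res1 = insertvalue {<8 x i8>, <8 x i8>} %res0, <8 x i8> %v1, 1
  ret {<8 x i8>, <8 x i8>} %res1
}
```

(The names and the factor-2 choice are just for illustration; I haven't checked whether the pass would accept exactly this shape.)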

https://github.com/llvm/llvm-project/pull/150074

