[llvm] [LV] Support strided memory accesses with a stride of -1 (PR #128718)

Pengcheng Wang via llvm-commits llvm-commits at lists.llvm.org
Tue Mar 11 01:19:02 PDT 2025


================
@@ -2615,9 +2627,25 @@ void VPWidenLoadRecipe::execute(VPTransformState &State) {
     NewLI = Builder.CreateMaskedGather(DataTy, Addr, Alignment, Mask, nullptr,
                                        "wide.masked.gather");
   } else if (Mask) {
-    NewLI =
-        Builder.CreateMaskedLoad(DataTy, Addr, Alignment, Mask,
-                                 PoisonValue::get(DataTy), "wide.masked.load");
+    if (isStrided()) {
+      const DataLayout &DL = LI->getDataLayout();
+      auto *PtrTy = Addr->getType();
+      auto *StrideTy = DL.getIndexType(PtrTy);
+      // TODO: Support non-unit-reverse strided accesses.
+      auto *StrideVal =
+          ConstantInt::get(StrideTy, -1 * DL.getTypeAllocSize(ScalarDataTy));
+      Value *RuntimeVF =
+          getRuntimeVF(State.Builder, State.Builder.getInt32Ty(), State.VF);
+      NewLI = Builder.CreateIntrinsic(
+          Intrinsic::experimental_vp_strided_load, {DataTy, PtrTy, StrideTy},
----------------
wangpc-pp wrote:

I have a concern about how `experimental_vp_strided_load` is expanded when the target doesn't support it; it also seems odd to use a VP intrinsic in a non-VP code path.
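For context on what the quoted code emits: `llvm.experimental.vp.strided.load` reads element `i` from address `base + i * stride`, so the patch's stride of `-1 * sizeof(element)` starting at the lane-0 address produces a contiguous load in reverse order. A minimal Python sketch of that addressing rule (the function name and flat-memory model are illustrative, not LLVM API):

```python
def vp_strided_load(memory, base, stride, evl):
    # Simulate the addressing of llvm.experimental.vp.strided.load on a
    # flat list of element-sized cells: lane i reads memory[base + i*stride],
    # for evl active lanes.
    return [memory[base + i * stride] for i in range(evl)]

memory = [10, 20, 30, 40, 50]

# Stride of -1 element starting at the last element gives a reversed
# contiguous load -- the unit-reverse case this patch targets.
reversed_load = vp_strided_load(memory, 4, -1, 5)
assert reversed_load == [50, 40, 30, 20, 10]
```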

https://github.com/llvm/llvm-project/pull/128718
