[llvm] [LV] Convert gather loads with constant stride into strided loads (PR #147297)
Luke Lau via llvm-commits
llvm-commits at lists.llvm.org
Thu Mar 26 01:20:54 PDT 2026
================
@@ -6364,3 +6385,111 @@ void VPlanTransforms::createPartialReductions(VPlan &Plan,
for (const VPPartialReductionChain &Chain : Chains)
transformToPartialReduction(Chain, CostCtx.Types, Plan, Phi);
}
+
+void VPlanTransforms::convertToStridedAccesses(VPlan &Plan,
+                                               PredicatedScalarEvolution &PSE,
+                                               Loop &L, VPCostContext &Ctx,
+                                               VFRange &Range) {
+  if (Plan.hasScalarVFOnly())
+    return;
+
+  VPTypeAnalysis TypeInfo(Plan);
+  VPRegionBlock *VectorLoop = Plan.getVectorLoopRegion();
+  SmallVector<VPWidenMemoryRecipe *> ToErase;
+  VPValue *I32VF = nullptr;
+  for (VPBasicBlock *VPBB : VPBlockUtils::blocksOnly<VPBasicBlock>(
+           vp_depth_first_shallow(VectorLoop->getEntry()))) {
+    for (VPRecipeBase &R : make_early_inc_range(*VPBB)) {
+      auto *LoadR = dyn_cast<VPWidenLoadRecipe>(&R);
+      // TODO: Support strided store.
+      // TODO: Transform reverse access into strided access with -1 stride.
+      // TODO: Transform gather/scatter with uniform address into strided
+      // access with 0 stride.
+      // TODO: Transform interleave access into multiple strided accesses.
+      if (!LoadR || LoadR->isConsecutive())
+        continue;
+
+      auto *Ptr = dyn_cast<VPWidenGEPRecipe>(LoadR->getAddr());
+      if (!Ptr)
+        continue;
+
+      Type *LoadTy = TypeInfo.inferScalarType(LoadR);
+      Align Alignment = LoadR->getAlign();
+      auto IsProfitable = [&](ElementCount VF) -> bool {
----------------
lukel97 wrote:
Nit: is the compiler able to deduce the return type automatically?
```suggestion
auto IsProfitable = [&](ElementCount VF) {
```
https://github.com/llvm/llvm-project/pull/147297