[llvm] [SLPVectorizer][NFC] Refactor `canVectorizeLoads`. (PR #157911)
Alexey Bataev via llvm-commits
llvm-commits at lists.llvm.org
Thu Sep 11 09:00:06 PDT 2025
================
@@ -6776,52 +6809,50 @@ isMaskedLoadCompress(ArrayRef<Value *> VL, ArrayRef<Value *> PointerOps,
                                CompressMask, LoadVecTy);
 }
-/// Checks if strided loads can be generated out of \p VL loads with pointers \p
-/// PointerOps:
-/// 1. Target with strided load support is detected.
-/// 2. The number of loads is greater than MinProfitableStridedLoads, or the
-/// potential stride <= MaxProfitableLoadStride and the potential stride is
-/// power-of-2 (to avoid perf regressions for the very small number of loads)
-/// and max distance > number of loads, or potential stride is -1.
-/// 3. The loads are ordered, or number of unordered loads <=
-/// MaxProfitableUnorderedLoads, or loads are in reversed order. (this check is
-/// to avoid extra costs for very expensive shuffles).
-/// 4. Any pointer operand is an instruction with the users outside of the
-/// current graph (for masked gathers extra extractelement instructions
-/// might be required).
-static bool isStridedLoad(ArrayRef<Value *> VL, ArrayRef<Value *> PointerOps,
-                          ArrayRef<unsigned> Order,
-                          const TargetTransformInfo &TTI, const DataLayout &DL,
-                          ScalarEvolution &SE,
-                          const bool IsAnyPointerUsedOutGraph,
-                          const int64_t Diff) {
-  const size_t Sz = VL.size();
-  const uint64_t AbsoluteDiff = std::abs(Diff);
-  Type *ScalarTy = VL.front()->getType();
-  auto *VecTy = getWidenedType(ScalarTy, Sz);
+bool BoUpSLP::analyzeConstantStrideCandidate(
+    ArrayRef<Value *> PointerOps, Type *ElemTy, Align CommonAlignment,
+    SmallVectorImpl<unsigned> &SortedIndices, StridedPtrInfo &SPtrInfo,
+    int64_t Diff, Value *Ptr0, Value *PtrN) const {
+  const unsigned Sz = PointerOps.size();
+  FixedVectorType *StridedLoadTy = getWidenedType(ElemTy, Sz);
+
+  // Try to generate strided load node if:
+  // 1. Target with strided load support is detected.
+  // 2. The number of loads is greater than MinProfitableStridedLoads,
+  // or the potential stride <= MaxProfitableLoadStride and the
+  // potential stride is power-of-2 (to avoid perf regressions for the very
+  // small number of loads) and max distance > number of loads, or potential
+  // stride is -1.
+  // 3. The loads are ordered, or number of unordered loads <=
+  // MaxProfitableUnorderedLoads, or loads are in reversed order.
+  // (this check is to avoid extra costs for very expensive shuffles).
+  // 4. Any pointer operand is an instruction with the users outside of the
+  // current graph (for masked gathers extra extractelement instructions
+  // might be required).
+
+  if (!TTI->isTypeLegal(StridedLoadTy) ||
+      !TTI->isLegalStridedLoadStore(StridedLoadTy, CommonAlignment))
+    return false;
+
+  // Simple check if not a strided access - clear order.
+  bool IsPossibleStrided = Diff % (Sz - 1) == 0;
+  auto IsAnyPointerUsedOutGraph =
+      IsPossibleStrided && any_of(PointerOps, [&](Value *V) {
+        return isa<Instruction>(V) && any_of(V->users(), [&](User *U) {
+          return !isVectorized(U) && !MustGather.contains(U);
+        });
+      });
+  const unsigned AbsoluteDiff = std::abs(Diff);
----------------
alexey-bataev wrote:
```suggestion
  const uint64_t AbsoluteDiff = std::abs(Diff);
```
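
`Diff` comes in as an `int64_t`, so `std::abs(Diff)` is 64 bits wide; assigning the result to `unsigned` would silently drop the upper half for large pointer distances. A minimal standalone sketch of that truncation (hypothetical values, not code from the patch):

```cpp
// Narrowing std::abs of an int64_t into `unsigned` wraps values >= 2^32.
#include <cstdint>
#include <cstdio>
#include <cstdlib>

int main() {
  const int64_t Diff = -(INT64_C(1) << 33); // hypothetical pointer distance
  const unsigned Narrow = std::abs(Diff);   // truncates: prints 0
  const uint64_t Wide = std::abs(Diff);     // keeps the full 2^33
  std::printf("unsigned: %u, uint64_t: %llu\n", Narrow,
              static_cast<unsigned long long>(Wide));
}
```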
https://github.com/llvm/llvm-project/pull/157911
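
A side note on the `IsPossibleStrided` guard in the hunk above: with the pointers sorted by address, `Diff` is the element distance between the first and last pointer, so a constant stride can exist only if that distance splits into `Sz - 1` equal steps. A minimal standalone sketch of the arithmetic (hypothetical values, not code from the patch):

```cpp
// Sz pointers at p, p+4, p+8, p+12 (in elements): Diff = 12 splits into
// Sz - 1 = 3 equal steps, giving a candidate stride of 4 elements.
#include <cassert>
#include <cstdint>

int main() {
  const unsigned Sz = 4;   // number of loads
  const int64_t Diff = 12; // distance between first and last pointer
  assert(Diff % (Sz - 1) == 0 && "unequal steps -> not constant-strided");
  const int64_t Stride = Diff / (Sz - 1);
  assert(Stride == 4);
}
```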