[llvm] [SLPVectorizer] Widen constant strided loads. (PR #162324)
via llvm-commits
llvm-commits at lists.llvm.org
Fri Oct 10 16:18:59 PDT 2025
github-actions[bot] wrote:
:warning: C/C++ code formatter, clang-format found issues in your code. :warning:
<details>
<summary>
You can test this locally with the following command:
</summary>
``````````bash
git-clang-format --diff origin/main HEAD --extensions cpp -- llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
``````````
:warning:
The reproduction instructions above might return results for more than one PR
in a stack if you are using a stacked PR workflow. You can limit the results by
changing `origin/main` to the base branch/commit you want to compare against.
:warning:
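For example, a sketch of the narrowed invocation (assuming a hypothetical base branch named `my-base-branch` in a stacked workflow):

``````````bash
# Compare only against the immediate base of this PR instead of origin/main.
# `my-base-branch` is a placeholder; substitute your actual base branch or commit.
git-clang-format --diff my-base-branch HEAD --extensions cpp -- llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
``````````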
</details>
<details>
<summary>
View the diff from clang-format here.
</summary>
``````````diff
diff --git a/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp b/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
index 385cc54f1..28676dc46 100644
--- a/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
+++ b/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
@@ -2247,10 +2247,9 @@ public:
/// Return true if an array of scalar loads can be replaced with a strided
/// load (with constant stride).
///
- /// It is possible that the load gets "widened". Suppose that originally each load loads `k` bytes and `PointerOps` can be arranged as follows (`%s` is constant):
- /// %b + 0 * %s + 0
- /// %b + 0 * %s + 1
- /// %b + 0 * %s + 2
+ /// It is possible that the load gets "widened". Suppose that originally each
+ /// load loads `k` bytes and `PointerOps` can be arranged as follows (`%s` is
+ /// constant): %b + 0 * %s + 0 %b + 0 * %s + 1 %b + 0 * %s + 2
/// ...
/// %b + 0 * %s + (w - 1)
///
@@ -2272,17 +2271,18 @@ public:
/// \param PointerOps list of pointer arguments of loads.
/// \param ElemTy original scalar type of loads.
/// \param Alignment alignment of the first load.
- /// \param SortedIndices is the order of PointerOps as returned by `sortPtrAccesses`
- /// \param Diff Pointer difference between the lowest and the highes pointer in `PointerOps` as returned by `getPointersDiff`.
+ /// \param SortedIndices is the order of PointerOps as returned by
+ /// `sortPtrAccesses`
+ /// \param Diff Pointer difference between the lowest and the highes pointer
+ /// in `PointerOps` as returned by `getPointersDiff`.
/// \param Ptr0 first pointer in `PointersOps`.
/// \param PtrN last pointer in `PointersOps`.
/// \param SPtrInfo If the function return `true`, it also sets all the fields
/// of `SPtrInfo` necessary to generate the strided load later.
- bool analyzeConstantStrideCandidate(const ArrayRef<Value *> PointerOps,
- Type *ElemTy, Align Alignment,
- const SmallVectorImpl<unsigned> &SortedIndices,
- const int64_t Diff, Value *Ptr0, Value *PtrN,
- StridedPtrInfo &SPtrInfo) const;
+ bool analyzeConstantStrideCandidate(
+ const ArrayRef<Value *> PointerOps, Type *ElemTy, Align Alignment,
+ const SmallVectorImpl<unsigned> &SortedIndices, const int64_t Diff,
+ Value *Ptr0, Value *PtrN, StridedPtrInfo &SPtrInfo) const;
/// Return true if an array of scalar loads can be replaced with a strided
/// load (with run-time stride).
@@ -6910,8 +6910,8 @@ bool BoUpSLP::isStridedLoad(ArrayRef<Value *> PointerOps, Type *ScalarTy,
bool BoUpSLP::analyzeConstantStrideCandidate(
const ArrayRef<Value *> PointerOps, Type *ElemTy, Align CommonAlignment,
- const SmallVectorImpl<unsigned> &SortedIndices, const int64_t Diff, Value *Ptr0,
- Value *PtrN, StridedPtrInfo &SPtrInfo) const {
+ const SmallVectorImpl<unsigned> &SortedIndices, const int64_t Diff,
+ Value *Ptr0, Value *PtrN, StridedPtrInfo &SPtrInfo) const {
const unsigned Sz = PointerOps.size();
SmallVector<int64_t> SortedOffsetsFromBase(Sz);
// Go through `PointerOps` in sorted order and record offsets from `Ptr0`.
``````````
</details>
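For reference, the doc comment being rewrapped describes the widening of constant strided loads: a group of `w` consecutive `k`-byte loads that repeats every `%s` bytes can be treated as a single strided load of `k * w`-byte elements. A minimal C++ sketch of such an access pattern, not taken from the patch (the function and variable names here are hypothetical):

``````````cpp
#include <cstddef>
#include <cstdint>

// Illustration of the access pattern from the doc comment: each iteration
// performs w = 4 consecutive 1-byte loads starting at Base + I * Stride, so
// the scalar loads can be widened into a single strided load of 4-byte
// elements with constant stride `Stride`.
void copyGroups(const uint8_t *Base, int64_t Stride, uint8_t *Out, size_t N) {
  for (size_t I = 0; I < N; ++I) {
    Out[4 * I + 0] = Base[I * Stride + 0];
    Out[4 * I + 1] = Base[I * Stride + 1];
    Out[4 * I + 2] = Base[I * Stride + 2];
    Out[4 * I + 3] = Base[I * Stride + 3];
  }
}
``````````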
https://github.com/llvm/llvm-project/pull/162324