[llvm] [SCEV,LAA] Introduce scoped SCEV, use in LAA computations (WIP). (PR #90742)
Florian Hahn via llvm-commits
llvm-commits at lists.llvm.org
Wed May 1 08:55:16 PDT 2024
https://github.com/fhahn created https://github.com/llvm/llvm-project/pull/90742
Note that this patch at the moment is mostly an experiment, and I'd
really appreciate any early feedback.
The motivating use case for this is better reasoning about pointer
bounds in LoopAccessAnalysis. LAA creates a number of SCEV expressions
that only need to be valid in the scope of the loop, where we know that
the address computations don't wrap. When we compute the SCEV for a
pointer in the last iteration, this frequently boils down to
Base + Offset, but the resulting SCEV currently cannot be marked as
<nuw>, because the no-wrap flags of a SCEV expression must be valid in
the whole function.
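
To make the intent concrete, here is a loosely condensed sketch of how
LAA is meant to use the new API (based on the RuntimePointerChecking::insert
changes in PATCH 2/2 below; not a final interface, and the helper name is
made up for illustration):

  #include "llvm/ADT/ScopeExit.h"
  #include "llvm/Analysis/ScalarEvolution.h"
  #include "llvm/Analysis/ScalarEvolutionExpressions.h"
  using namespace llvm;

  // Sketch only: compute the one-past-the-end bound for an AddRec while a
  // loop scope is active, so the bound may carry <nuw> valid within Lp.
  static const SCEV *computeScopedEnd(ScalarEvolution *SE, const Loop *Lp,
                                      const SCEVAddRecExpr *AR, const SCEV *BTC,
                                      const SCEV *EltSizeSCEV) {
    SE->setExprScope(Lp); // SCEVs created until cleared are loop-scoped.
    auto ClearOnExit = make_scope_exit([SE]() { SE->clearExprScope(); });

    // Address of the access in the last iteration; frequently Base + Offset.
    const SCEV *ScEnd = AR->evaluateAtIteration(BTC, *SE);
    // Within the loop we know the address computation does not wrap, so the
    // scoped add for the end of the accessed range can be marked <nuw>.
    if (SE->isScopedExpr(ScEnd))
      ScEnd = SE->getAddExpr(ScEnd, EltSizeSCEV, SCEV::FlagNUW);
    return ScEnd;
  }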
A concrete example is @test_distance_positive_backwards, where I'd like
to prove that the two pointer ranges always overlap, i.e. that the
runtime check is always false. (The current code structure in LAA isn't
necessarily ideal/finalized; I am mostly looking for feedback on the
SCEV side for now.)
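
For reference, this is how the scoped bounds get consumed on the LAA side
in this patch (condensed from the RuntimePointerChecking::generateChecks
hunk in PATCH 2/2, with comments added; AlwaysFalse is the new flag
introduced there):

  // With the scoped <nuw> on the High bounds, SCEV can now prove that the
  // two pointer ranges always overlap, i.e. the runtime check would always
  // fail.
  if (SE->isKnownPredicate(CmpInst::ICMP_UGT, CGI.High, CGJ.Low) &&
      SE->isKnownPredicate(CmpInst::ICMP_ULE, CGI.Low, CGJ.High)) {
    AlwaysFalse = true; // Later forces CanDoRT to false, so LAA reports
                        // "cannot check memory dependencies at runtime".
    return {};
  }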
I'd like to provide a way to create scoped SCEV expressions, which we
will only use to reason in the context of a loop.
This patch introduces a way to create SCEV commutative expressions that
are only valid for a loop scope. This is done by adding an ExprScope
member to ScalarEvolution, which is used as an additional pointer ID in
the lookup key for UniqueSCEVs. This should ensure that we only return
'scoped' SCEVs if a scope is set.
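
Concretely, the uniquing change looks like this (condensed from the
getOrCreateAddExpr hunk in PATCH 2/2, comments added; getOrCreateMulExpr
is changed the same way):

  FoldingSetNodeID ID;
  ID.AddInteger(scAddExpr);
  for (const SCEV *Op : Ops)
    ID.AddPointer(Op);
  if (ExprScope)              // Non-null only between setExprScope/clearExprScope.
    ID.AddPointer(ExprScope); // Extra key component keeps scoped and unscoped
                              // expressions with identical operands distinct.
  void *IP = nullptr;
  SCEVAddExpr *S =
      static_cast<SCEVAddExpr *>(UniqueSCEVs.FindNodeOrInsertPos(ID, IP));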
The idea is to keep scoped SCEVs separate from regular SCEVs, as in: if
no scope is set, no returned SCEV expression can reference any scoped
expressions (I think an assert to that effect could be added). This
should ensure that regular SCEVs cannot be 'polluted' by information
from scoped SCEVs. I added some test cases in
llvm/test/Analysis/LoopAccessAnalysis/scoped-scevs.ll to make sure
scoped SCEVs do not interfere with already-cached or later-constructed
SCEVs.
It's very likely that I am missing something here. In that case, any
suggestions on how to approach the problem differently would be very
welcome.
From b8cd93917bff7764cdc9f4924cdc86c454665787 Mon Sep 17 00:00:00 2001
From: Florian Hahn <flo at fhahn.com>
Date: Wed, 1 May 2024 11:03:42 +0100
Subject: [PATCH 1/2] [SCEV,LAA] Add tests to make sure scoped SCEVs don't
impact other SCEVs.
---
.../LoopAccessAnalysis/scoped-scevs.ll | 182 ++++++++++++++++++
1 file changed, 182 insertions(+)
create mode 100644 llvm/test/Analysis/LoopAccessAnalysis/scoped-scevs.ll
diff --git a/llvm/test/Analysis/LoopAccessAnalysis/scoped-scevs.ll b/llvm/test/Analysis/LoopAccessAnalysis/scoped-scevs.ll
new file mode 100644
index 00000000000000..323ba2a739cf83
--- /dev/null
+++ b/llvm/test/Analysis/LoopAccessAnalysis/scoped-scevs.ll
@@ -0,0 +1,182 @@
+; NOTE: Assertions have been autogenerated by utils/update_analyze_test_checks.py UTC_ARGS: --version 4
+; RUN: opt -passes='print<access-info>,print<scalar-evolution>' -disable-output %s 2>&1 | FileCheck --check-prefixes=LAA,AFTER %s
+; RUN: opt -passes='print<scalar-evolution>,print<access-info>,print<scalar-evolution>' -disable-output %s 2>&1 | FileCheck --check-prefixes=BEFORE,LAA,AFTER %s
+
+target datalayout = "e-m:o-i64:64-i128:128-n32:64-S128"
+
+declare void @use(ptr)
+
+; Check that scoped expressions created by LAA do not interfere with non-scoped
+; SCEVs with the same operands. The tests first run print<scalar-evolution> to
+; populate the SCEV cache. They contain a GEP computing A+405, which is the end
+; of the accessed range, before and/or after the loop. No nuw flags should be
+; added to them in the second print<scalar-evolution> output.
+
+define ptr @test_ptr_range_end_computed_before_and_after_loop(ptr %A) {
+; BEFORE-LABEL: 'test_ptr_range_end_computed_before_and_after_loop'
+; BEFORE-NEXT: Classifying expressions for: @test_ptr_range_end_computed_before_and_after_loop
+; BEFORE: %x = getelementptr inbounds i8, ptr %A, i64 405
+; BEFORE-NEXT: --> (405 + %A) U: full-set S: full-set
+; BEFORE: %y = getelementptr inbounds i8, ptr %A, i64 405
+; BEFORE-NEXT: --> (405 + %A) U: full-set S: full-set
+;
+; LAA-LABEL: 'test_ptr_range_end_computed_before_and_after_loop'
+; LAA-NEXT: loop:
+; LAA-NEXT: Memory dependences are safe with run-time checks
+; LAA-NEXT: Dependences:
+; LAA-NEXT: Run-time memory checks:
+; LAA-NEXT: Check 0:
+; LAA-NEXT: Comparing group ([[GRP1:0x[0-9a-f]+]]):
+; LAA-NEXT: %gep.A.400 = getelementptr inbounds i32, ptr %A.1, i64 %iv
+; LAA-NEXT: Against group ([[GRP2:0x[0-9a-f]+]]):
+; LAA-NEXT: %gep.A = getelementptr inbounds i8, ptr %A, i64 %iv
+; LAA-NEXT: Grouped accesses:
+; LAA-NEXT: Group [[GRP1]]:
+; LAA-NEXT: (Low: (1 + %A) High: (405 + %A))
+; LAA-NEXT: Member: {(1 + %A),+,4}<nw><%loop>
+; LAA-NEXT: Group [[GRP2]]:
+; LAA-NEXT: (Low: %A High: (101 + %A))
+; LAA-NEXT: Member: {%A,+,1}<nuw><%loop>
+; LAA-EMPTY:
+; LAA-NEXT: Non vectorizable stores to invariant address were not found in loop.
+; LAA-NEXT: SCEV assumptions:
+; LAA-EMPTY:
+; LAA-NEXT: Expressions re-written:
+;
+; AFTER-LABEL: 'test_ptr_range_end_computed_before_and_after_loop'
+; AFTER-NEXT: Classifying expressions for: @test_ptr_range_end_computed_before_and_after_loop
+; AFTER: %x = getelementptr inbounds i8, ptr %A, i64 405
+; AFTER-NEXT: --> (405 + %A) U: full-set S: full-set
+; AFTER: %y = getelementptr inbounds i8, ptr %A, i64 405
+; AFTER-NEXT: --> (405 + %A) U: full-set S: full-set
+entry:
+ %A.1 = getelementptr inbounds i8, ptr %A, i64 1
+ %x = getelementptr inbounds i8, ptr %A, i64 405
+ call void @use(ptr %x)
+ br label %loop
+
+loop:
+ %iv = phi i64 [ 0, %entry ], [ %iv.next, %loop ]
+ %gep.A.400 = getelementptr inbounds i32, ptr %A.1, i64 %iv
+ %gep.A = getelementptr inbounds i8, ptr %A, i64 %iv
+ %l = load i8, ptr %gep.A, align 1
+ %ext = zext i8 %l to i32
+ store i32 %ext, ptr %gep.A.400, align 4
+ %iv.next = add nuw nsw i64 %iv, 1
+ %ec = icmp eq i64 %iv, 100
+ br i1 %ec, label %exit, label %loop
+
+exit:
+ %y = getelementptr inbounds i8, ptr %A, i64 405
+ ret ptr %y
+}
+
+define void @test_ptr_range_end_computed_before_loop(ptr %A) {
+; BEFORE-LABEL: 'test_ptr_range_end_computed_before_loop'
+; BEFORE-NEXT: Classifying expressions for: @test_ptr_range_end_computed_before_loop
+; BEFORE-NEXT: %A.1 = getelementptr inbounds i8, ptr %A, i64 1
+; BEFORE-NEXT: --> (1 + %A) U: full-set S: full-set
+; BEFORE-NEXT: %x = getelementptr inbounds i8, ptr %A, i64 405
+;
+; LAA-LABEL: 'test_ptr_range_end_computed_before_loop'
+; LAA-NEXT: loop:
+; LAA-NEXT: Memory dependences are safe with run-time checks
+; LAA-NEXT: Dependences:
+; LAA-NEXT: Run-time memory checks:
+; LAA-NEXT: Check 0:
+; LAA-NEXT: Comparing group ([[GRP3:0x[0-9a-f]+]]):
+; LAA-NEXT: %gep.A.400 = getelementptr inbounds i32, ptr %A.1, i64 %iv
+; LAA-NEXT: Against group ([[GRP4:0x[0-9a-f]+]]):
+; LAA-NEXT: %gep.A = getelementptr inbounds i8, ptr %A, i64 %iv
+; LAA-NEXT: Grouped accesses:
+; LAA-NEXT: Group [[GRP3]]:
+; LAA-NEXT: (Low: (1 + %A) High: (405 + %A))
+; LAA-NEXT: Member: {(1 + %A),+,4}<nw><%loop>
+; LAA-NEXT: Group [[GRP4]]:
+; LAA-NEXT: (Low: %A High: (101 + %A))
+; LAA-NEXT: Member: {%A,+,1}<nuw><%loop>
+; LAA-EMPTY:
+; LAA-NEXT: Non vectorizable stores to invariant address were not found in loop.
+; LAA-NEXT: SCEV assumptions:
+; LAA-EMPTY:
+; LAA-NEXT: Expressions re-written:
+;
+; AFTER-LABEL: Classifying expressions for: @test_ptr_range_end_computed_before_loop
+; AFTER-NEXT: %A.1 = getelementptr inbounds i8, ptr %A, i64 1
+; AFTER-NEXT: --> (1 + %A) U: full-set S: full-set
+; AFTER-NEXT: %x = getelementptr inbounds i8, ptr %A, i64 405
+;
+entry:
+ %A.1 = getelementptr inbounds i8, ptr %A, i64 1
+ %x = getelementptr inbounds i8, ptr %A, i64 405
+ call void @use(ptr %x)
+ br label %loop
+
+loop:
+ %iv = phi i64 [ 0, %entry ], [ %iv.next, %loop ]
+ %gep.A.400 = getelementptr inbounds i32, ptr %A.1, i64 %iv
+ %gep.A = getelementptr inbounds i8, ptr %A, i64 %iv
+ %l = load i8, ptr %gep.A, align 1
+ %ext = zext i8 %l to i32
+ store i32 %ext, ptr %gep.A.400, align 4
+ %iv.next = add nuw nsw i64 %iv, 1
+ %ec = icmp eq i64 %iv, 100
+ br i1 %ec, label %exit, label %loop
+
+exit:
+ ret void
+}
+
+define ptr @test_ptr_range_end_computed_after_loop(ptr %A) {
+; BEFORE-LABEL: 'test_ptr_range_end_computed_after_loop'
+; BEFORE-NEXT: Classifying expressions for: @test_ptr_range_end_computed_after_loop
+; BEFORE: %y = getelementptr inbounds i8, ptr %A, i64 405
+; BEFORE-NEXT: --> (405 + %A) U: full-set S: full-set
+;
+; LAA-LABEL: 'test_ptr_range_end_computed_after_loop'
+; LAA-NEXT: loop:
+; LAA-NEXT: Memory dependences are safe with run-time checks
+; LAA-NEXT: Dependences:
+; LAA-NEXT: Run-time memory checks:
+; LAA-NEXT: Check 0:
+; LAA-NEXT: Comparing group ([[GRP5:0x[0-9a-f]+]]):
+; LAA-NEXT: %gep.A.400 = getelementptr inbounds i32, ptr %A.1, i64 %iv
+; LAA-NEXT: Against group ([[GRP6:0x[0-9a-f]+]]):
+; LAA-NEXT: %gep.A = getelementptr inbounds i8, ptr %A, i64 %iv
+; LAA-NEXT: Grouped accesses:
+; LAA-NEXT: Group [[GRP5]]:
+; LAA-NEXT: (Low: (1 + %A)<nuw> High: (405 + %A))
+; LAA-NEXT: Member: {(1 + %A)<nuw>,+,4}<nuw><%loop>
+; LAA-NEXT: Group [[GRP6]]:
+; LAA-NEXT: (Low: %A High: (101 + %A))
+; LAA-NEXT: Member: {%A,+,1}<nuw><%loop>
+; LAA-EMPTY:
+; LAA-NEXT: Non vectorizable stores to invariant address were not found in loop.
+; LAA-NEXT: SCEV assumptions:
+; LAA-EMPTY:
+; LAA-NEXT: Expressions re-written:
+;
+; AFTER-LABEL: 'test_ptr_range_end_computed_after_loop'
+; AFTER-NEXT: Classifying expressions for: @test_ptr_range_end_computed_after_loop
+; AFTER: %y = getelementptr inbounds i8, ptr %A, i64 405
+; AFTER-NEXT: --> (405 + %A) U: full-set S: full-set
+;
+entry:
+ %A.1 = getelementptr inbounds i8, ptr %A, i64 1
+ br label %loop
+
+loop:
+ %iv = phi i64 [ 0, %entry ], [ %iv.next, %loop ]
+ %gep.A.400 = getelementptr inbounds i32, ptr %A.1, i64 %iv
+ %gep.A = getelementptr inbounds i8, ptr %A, i64 %iv
+ %l = load i8, ptr %gep.A, align 1
+ %ext = zext i8 %l to i32
+ store i32 %ext, ptr %gep.A.400, align 4
+ %iv.next = add nuw nsw i64 %iv, 1
+ %ec = icmp eq i64 %iv, 100
+ br i1 %ec, label %exit, label %loop
+
+exit:
+ %y = getelementptr inbounds i8, ptr %A, i64 405
+ ret ptr %y
+}
From f9972444973d0d8bfc81dcf6f6d8a8c8b9906a47 Mon Sep 17 00:00:00 2001
From: Florian Hahn <flo at fhahn.com>
Date: Tue, 30 Apr 2024 21:25:56 +0100
Subject: [PATCH 2/2] [SCEV,LAA] Introduce scoped SCEV, use in LAA computations
(WIP).
Note that this patch at the moment is mostly an experiment, and I'd
really appreciate any early feedback.
The motivating use case for this is better reasoning about pointer
bounds in LoopAccessAnalysis. LAA creates a number of SCEV expressions
that only need to be valid in the scope of the loop, where we know that
the address computations don't wrap. When we compute the SCEV for a
pointer in the last iteration, this frequently boils down to
Base + Offset, but the resulting SCEV currently cannot be marked as
<nuw>, because the no-wrap flags of a SCEV expression must be valid in
the whole function.
A concrete example is @test_distance_positive_backwards, where I'd like
to prove that the two pointer ranges always overlap, i.e. that the
runtime check is always false. (The current code structure in LAA isn't
necessarily ideal/finalized; I am mostly looking for feedback on the
SCEV side for now.)
I'd like to provide a way to create scoped SCEV expressions, which we
will only use to reason in the context of a loop.
This patch introduces a way to create SCEV commutative expressions that
are only valid for a loop scope. This is done by adding an ExprScope
member to ScalarEvolution, which is used as an additional pointer ID in
the lookup key for UniqueSCEVs. This should ensure that we only return
'scoped' SCEVs if a scope is set.
The idea is to keep scoped SCEVs separate from regular SCEVs, as in: if
no scope is set, no returned SCEV expression can reference any scoped
expressions (I think an assert to that effect could be added). This
should ensure that regular SCEVs cannot be 'polluted' by information
from scoped SCEVs. I added some test cases in
llvm/test/Analysis/LoopAccessAnalysis/scoped-scevs.ll to make sure
scoped SCEVs do not interfere with already-cached or later-constructed
SCEVs.
It's very likely that I am missing something here. In that case, any
suggestions on how to approach the problem differently would be very
welcome.
---
.../llvm/Analysis/LoopAccessAnalysis.h | 4 ++
llvm/include/llvm/Analysis/ScalarEvolution.h | 14 +++++
llvm/lib/Analysis/LoopAccessAnalysis.cpp | 22 ++++++-
llvm/lib/Analysis/ScalarEvolution.cpp | 29 ++++++++-
.../LoopAccessAnalysis/forked-pointers.ll | 60 +++++++++----------
.../loops-with-indirect-reads-and-writes.ll | 4 +-
.../memcheck-off-by-one-error.ll | 4 +-
.../memcheck-store-vs-alloc-size.ll | 4 +-
.../LoopAccessAnalysis/number-of-memchecks.ll | 8 +--
.../LoopAccessAnalysis/pointer-phis.ll | 32 ++++++----
...endence-distance-different-access-sizes.ll | 27 +++------
.../reverse-memcheck-bounds.ll | 2 +-
.../LoopAccessAnalysis/scoped-scevs.ll | 47 ++++++++-------
13 files changed, 164 insertions(+), 93 deletions(-)
diff --git a/llvm/include/llvm/Analysis/LoopAccessAnalysis.h b/llvm/include/llvm/Analysis/LoopAccessAnalysis.h
index e39c371b41ec5c..ca01db664207ff 100644
--- a/llvm/include/llvm/Analysis/LoopAccessAnalysis.h
+++ b/llvm/include/llvm/Analysis/LoopAccessAnalysis.h
@@ -435,8 +435,10 @@ class RuntimePointerChecking {
/// Reset the state of the pointer runtime information.
void reset() {
Need = false;
+ AlwaysFalse = false;
Pointers.clear();
Checks.clear();
+ CheckingGroups.clear();
}
/// Insert a pointer and calculate the start and end SCEVs.
@@ -493,6 +495,8 @@ class RuntimePointerChecking {
/// This flag indicates if we need to add the runtime check.
bool Need = false;
+ bool AlwaysFalse = false;
+
/// Information about the pointers that may require checking.
SmallVector<PointerInfo, 2> Pointers;
diff --git a/llvm/include/llvm/Analysis/ScalarEvolution.h b/llvm/include/llvm/Analysis/ScalarEvolution.h
index 5828cc156cc785..7e5fbb8287e14c 100644
--- a/llvm/include/llvm/Analysis/ScalarEvolution.h
+++ b/llvm/include/llvm/Analysis/ScalarEvolution.h
@@ -1346,6 +1346,12 @@ class ScalarEvolution {
}
};
+ void setExprScope(const Loop *L);
+
+ void clearExprScope();
+
+ bool isScopedExpr(const SCEV *S);
+
private:
/// A CallbackVH to arrange for ScalarEvolution to be notified whenever a
/// Value is deleted.
@@ -1435,6 +1441,14 @@ class ScalarEvolution {
/// Memoized values for the getConstantMultiple
DenseMap<const SCEV *, APInt> ConstantMultipleCache;
+ /// When not nullptr, this indicates the scope for which an expression needs
+ /// to be valid for. This allows creation of SCEV expressions that only need
+ /// to be valid in a specific loop, allowing to use more specific no-wrap
+ /// flags.
+ const Loop *ExprScope = nullptr;
+
+ SmallVector<const SCEV *> ScopedExprs;
+
/// Return the Value set from which the SCEV expr is generated.
ArrayRef<Value *> getSCEVValues(const SCEV *S);
diff --git a/llvm/lib/Analysis/LoopAccessAnalysis.cpp b/llvm/lib/Analysis/LoopAccessAnalysis.cpp
index b0d29e2409f762..4126bc092d4cb3 100644
--- a/llvm/lib/Analysis/LoopAccessAnalysis.cpp
+++ b/llvm/lib/Analysis/LoopAccessAnalysis.cpp
@@ -17,6 +17,7 @@
#include "llvm/ADT/EquivalenceClasses.h"
#include "llvm/ADT/PointerIntPair.h"
#include "llvm/ADT/STLExtras.h"
+#include "llvm/ADT/ScopeExit.h"
#include "llvm/ADT/SetVector.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallSet.h"
@@ -208,7 +209,8 @@ void RuntimePointerChecking::insert(Loop *Lp, Value *Ptr, const SCEV *PtrExpr,
PredicatedScalarEvolution &PSE,
bool NeedsFreeze) {
ScalarEvolution *SE = PSE.getSE();
-
+ SE->setExprScope(Lp);
+ auto ClearOnExit = make_scope_exit([SE]() { SE->clearExprScope(); });
const SCEV *ScStart;
const SCEV *ScEnd;
@@ -222,6 +224,10 @@ void RuntimePointerChecking::insert(Loop *Lp, Value *Ptr, const SCEV *PtrExpr,
ScStart = AR->getStart();
ScEnd = AR->evaluateAtIteration(Ex, *SE);
const SCEV *Step = AR->getStepRecurrence(*SE);
+ if (AR->getNoWrapFlags(SCEV::FlagNUW) && SE->isScopedExpr(ScEnd)) {
+ if (auto *Comm = dyn_cast<SCEVCommutativeExpr>(ScEnd))
+ const_cast<SCEVCommutativeExpr *>(Comm)->setNoWrapFlags(SCEV::FlagNUW);
+ }
// For expressions with negative step, the upper bound is ScStart and the
// lower bound is ScEnd.
@@ -243,7 +249,13 @@ void RuntimePointerChecking::insert(Loop *Lp, Value *Ptr, const SCEV *PtrExpr,
auto &DL = Lp->getHeader()->getModule()->getDataLayout();
Type *IdxTy = DL.getIndexType(Ptr->getType());
const SCEV *EltSizeSCEV = SE->getStoreSizeOfExpr(IdxTy, AccessTy);
- ScEnd = SE->getAddExpr(ScEnd, EltSizeSCEV);
+ // TODO: this computes one-past-the-end. ScEnd + EltSizeSCEV - 1 is the last
+ // accessed byte. Not entirely sure if one-past-the-end must also not wrap? If
+ // it does, could compute and use last accessed byte instead.
+ if (SE->isScopedExpr(ScEnd))
+ ScEnd = SE->getAddExpr(ScEnd, EltSizeSCEV, SCEV::FlagNUW);
+ else
+ ScEnd = SE->getAddExpr(ScEnd, EltSizeSCEV, SCEV::FlagNUW);
Pointers.emplace_back(Ptr, ScStart, ScEnd, WritePtr, DepSetId, ASId, PtrExpr,
NeedsFreeze);
@@ -378,6 +390,11 @@ SmallVector<RuntimePointerCheck, 4> RuntimePointerChecking::generateChecks() {
if (needsChecking(CGI, CGJ)) {
tryToCreateDiffCheck(CGI, CGJ);
Checks.push_back(std::make_pair(&CGI, &CGJ));
+ if (SE->isKnownPredicate(CmpInst::ICMP_UGT, CGI.High, CGJ.Low) &&
+ SE->isKnownPredicate(CmpInst::ICMP_ULE, CGI.Low, CGJ.High)) {
+ AlwaysFalse = true;
+ return {};
+ }
}
}
}
@@ -1273,6 +1290,7 @@ bool AccessAnalysis::canCheckPtrAtRT(RuntimePointerChecking &RtCheck,
// If we can do run-time checks, but there are no checks, no runtime checks
// are needed. This can happen when all pointers point to the same underlying
// object for example.
+ CanDoRT &= !RtCheck.AlwaysFalse;
RtCheck.Need = CanDoRT ? RtCheck.getNumberOfChecks() != 0 : MayNeedRTCheck;
bool CanDoRTIfNeeded = !RtCheck.Need || CanDoRT;
diff --git a/llvm/lib/Analysis/ScalarEvolution.cpp b/llvm/lib/Analysis/ScalarEvolution.cpp
index 93f885c5d5ad8b..9916714b2854c6 100644
--- a/llvm/lib/Analysis/ScalarEvolution.cpp
+++ b/llvm/lib/Analysis/ScalarEvolution.cpp
@@ -2981,6 +2981,26 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
return getOrCreateAddExpr(Ops, ComputeFlags(Ops));
}
+void ScalarEvolution::setExprScope(const Loop *L) {
+ assert(!ExprScope && "cannot overwrite existing expression scope");
+ ExprScope = L;
+}
+
+void ScalarEvolution::clearExprScope() { ExprScope = nullptr; }
+
+bool ScalarEvolution::isScopedExpr(const SCEV *S) {
+ if (!ExprScope || !isa<SCEVCommutativeExpr>(S))
+ return false;
+
+ FoldingSetNodeID ID;
+ ID.AddInteger(S->getSCEVType());
+ for (const SCEV *Op : S->operands())
+ ID.AddPointer(Op);
+ ID.AddPointer(ExprScope);
+ void *IP = nullptr;
+ return UniqueSCEVs.FindNodeOrInsertPos(ID, IP);
+}
+
const SCEV *
ScalarEvolution::getOrCreateAddExpr(ArrayRef<const SCEV *> Ops,
SCEV::NoWrapFlags Flags) {
@@ -2988,6 +3008,8 @@ ScalarEvolution::getOrCreateAddExpr(ArrayRef<const SCEV *> Ops,
ID.AddInteger(scAddExpr);
for (const SCEV *Op : Ops)
ID.AddPointer(Op);
+ if (ExprScope)
+ ID.AddPointer(ExprScope);
void *IP = nullptr;
SCEVAddExpr *S =
static_cast<SCEVAddExpr *>(UniqueSCEVs.FindNodeOrInsertPos(ID, IP));
@@ -3034,6 +3056,8 @@ ScalarEvolution::getOrCreateMulExpr(ArrayRef<const SCEV *> Ops,
ID.AddInteger(scMulExpr);
for (const SCEV *Op : Ops)
ID.AddPointer(Op);
+ if (ExprScope)
+ ID.AddPointer(ExprScope);
void *IP = nullptr;
SCEVMulExpr *S =
static_cast<SCEVMulExpr *>(UniqueSCEVs.FindNodeOrInsertPos(ID, IP));
@@ -14746,12 +14770,15 @@ PredicatedScalarEvolution::PredicatedScalarEvolution(ScalarEvolution &SE,
void ScalarEvolution::registerUser(const SCEV *User,
ArrayRef<const SCEV *> Ops) {
- for (const auto *Op : Ops)
+ for (const auto *Op : Ops) {
// We do not expect that forgetting cached data for SCEVConstants will ever
// open any prospects for sharpening or introduce any correctness issues,
// so we don't bother storing their dependencies.
if (!isa<SCEVConstant>(Op))
SCEVUsers[Op].insert(User);
+ assert((ExprScope || !isScopedExpr(Op)) &&
+ "Non-scoped expression cannot have scoped operands!");
+ }
}
const SCEV *PredicatedScalarEvolution::getSCEV(Value *V) {
diff --git a/llvm/test/Analysis/LoopAccessAnalysis/forked-pointers.ll b/llvm/test/Analysis/LoopAccessAnalysis/forked-pointers.ll
index cd388b4ee87f22..a61c6dcc7af4d7 100644
--- a/llvm/test/Analysis/LoopAccessAnalysis/forked-pointers.ll
+++ b/llvm/test/Analysis/LoopAccessAnalysis/forked-pointers.ll
@@ -24,7 +24,7 @@ define void @forked_ptrs_simple(ptr nocapture readonly %Base1, ptr nocapture rea
; CHECK-NEXT: %select = select i1 %cmp, ptr %gep.1, ptr %gep.2
; CHECK-NEXT: Grouped accesses:
; CHECK-NEXT: Group [[GRP1]]:
-; CHECK-NEXT: (Low: %Dest High: (400 + %Dest))
+; CHECK-NEXT: (Low: %Dest High: (400 + %Dest)<nuw>)
; CHECK-NEXT: Member: {%Dest,+,4}<nuw><%loop>
; CHECK-NEXT: Member: {%Dest,+,4}<nuw><%loop>
; CHECK-NEXT: Group [[GRP2]]:
@@ -58,7 +58,7 @@ define void @forked_ptrs_simple(ptr nocapture readonly %Base1, ptr nocapture rea
; RECURSE-NEXT: %select = select i1 %cmp, ptr %gep.1, ptr %gep.2
; RECURSE-NEXT: Grouped accesses:
; RECURSE-NEXT: Group [[GRP4]]:
-; RECURSE-NEXT: (Low: %Dest High: (400 + %Dest))
+; RECURSE-NEXT: (Low: %Dest High: (400 + %Dest)<nuw>)
; RECURSE-NEXT: Member: {%Dest,+,4}<nuw><%loop>
; RECURSE-NEXT: Member: {%Dest,+,4}<nuw><%loop>
; RECURSE-NEXT: Group [[GRP5]]:
@@ -132,10 +132,10 @@ define dso_local void @forked_ptrs_different_base_same_offset(ptr nocapture read
; CHECK-NEXT: %.sink.in = getelementptr inbounds float, ptr %spec.select, i64 %indvars.iv
; CHECK-NEXT: Grouped accesses:
; CHECK-NEXT: Group [[GRP7]]:
-; CHECK-NEXT: (Low: %Dest High: (400 + %Dest))
+; CHECK-NEXT: (Low: %Dest High: (400 + %Dest)<nuw>)
; CHECK-NEXT: Member: {%Dest,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP8]]:
-; CHECK-NEXT: (Low: %Preds High: (400 + %Preds))
+; CHECK-NEXT: (Low: %Preds High: (400 + %Preds)<nuw>)
; CHECK-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP9]]:
; CHECK-NEXT: (Low: %Base2 High: (400 + %Base2))
@@ -171,10 +171,10 @@ define dso_local void @forked_ptrs_different_base_same_offset(ptr nocapture read
; RECURSE-NEXT: %.sink.in = getelementptr inbounds float, ptr %spec.select, i64 %indvars.iv
; RECURSE-NEXT: Grouped accesses:
; RECURSE-NEXT: Group [[GRP11]]:
-; RECURSE-NEXT: (Low: %Dest High: (400 + %Dest))
+; RECURSE-NEXT: (Low: %Dest High: (400 + %Dest)<nuw>)
; RECURSE-NEXT: Member: {%Dest,+,4}<nuw><%for.body>
; RECURSE-NEXT: Group [[GRP12]]:
-; RECURSE-NEXT: (Low: %Preds High: (400 + %Preds))
+; RECURSE-NEXT: (Low: %Preds High: (400 + %Preds)<nuw>)
; RECURSE-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; RECURSE-NEXT: Group [[GRP13]]:
; RECURSE-NEXT: (Low: %Base2 High: (400 + %Base2))
@@ -232,10 +232,10 @@ define dso_local void @forked_ptrs_different_base_same_offset_64b(ptr nocapture
; CHECK-NEXT: %.sink.in = getelementptr inbounds double, ptr %spec.select, i64 %indvars.iv
; CHECK-NEXT: Grouped accesses:
; CHECK-NEXT: Group [[GRP15]]:
-; CHECK-NEXT: (Low: %Dest High: (800 + %Dest))
+; CHECK-NEXT: (Low: %Dest High: (800 + %Dest)<nuw>)
; CHECK-NEXT: Member: {%Dest,+,8}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP16]]:
-; CHECK-NEXT: (Low: %Preds High: (400 + %Preds))
+; CHECK-NEXT: (Low: %Preds High: (400 + %Preds)<nuw>)
; CHECK-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP17]]:
; CHECK-NEXT: (Low: %Base2 High: (800 + %Base2))
@@ -271,10 +271,10 @@ define dso_local void @forked_ptrs_different_base_same_offset_64b(ptr nocapture
; RECURSE-NEXT: %.sink.in = getelementptr inbounds double, ptr %spec.select, i64 %indvars.iv
; RECURSE-NEXT: Grouped accesses:
; RECURSE-NEXT: Group [[GRP19]]:
-; RECURSE-NEXT: (Low: %Dest High: (800 + %Dest))
+; RECURSE-NEXT: (Low: %Dest High: (800 + %Dest)<nuw>)
; RECURSE-NEXT: Member: {%Dest,+,8}<nuw><%for.body>
; RECURSE-NEXT: Group [[GRP20]]:
-; RECURSE-NEXT: (Low: %Preds High: (400 + %Preds))
+; RECURSE-NEXT: (Low: %Preds High: (400 + %Preds)<nuw>)
; RECURSE-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; RECURSE-NEXT: Group [[GRP21]]:
; RECURSE-NEXT: (Low: %Base2 High: (800 + %Base2))
@@ -332,10 +332,10 @@ define dso_local void @forked_ptrs_different_base_same_offset_23b(ptr nocapture
; CHECK-NEXT: %.sink.in = getelementptr inbounds i23, ptr %spec.select, i64 %indvars.iv
; CHECK-NEXT: Grouped accesses:
; CHECK-NEXT: Group [[GRP23]]:
-; CHECK-NEXT: (Low: %Dest High: (399 + %Dest))
+; CHECK-NEXT: (Low: %Dest High: (399 + %Dest)<nuw>)
; CHECK-NEXT: Member: {%Dest,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP24]]:
-; CHECK-NEXT: (Low: %Preds High: (400 + %Preds))
+; CHECK-NEXT: (Low: %Preds High: (400 + %Preds)<nuw>)
; CHECK-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP25]]:
; CHECK-NEXT: (Low: %Base2 High: (399 + %Base2))
@@ -371,10 +371,10 @@ define dso_local void @forked_ptrs_different_base_same_offset_23b(ptr nocapture
; RECURSE-NEXT: %.sink.in = getelementptr inbounds i23, ptr %spec.select, i64 %indvars.iv
; RECURSE-NEXT: Grouped accesses:
; RECURSE-NEXT: Group [[GRP27]]:
-; RECURSE-NEXT: (Low: %Dest High: (399 + %Dest))
+; RECURSE-NEXT: (Low: %Dest High: (399 + %Dest)<nuw>)
; RECURSE-NEXT: Member: {%Dest,+,4}<nuw><%for.body>
; RECURSE-NEXT: Group [[GRP28]]:
-; RECURSE-NEXT: (Low: %Preds High: (400 + %Preds))
+; RECURSE-NEXT: (Low: %Preds High: (400 + %Preds)<nuw>)
; RECURSE-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; RECURSE-NEXT: Group [[GRP29]]:
; RECURSE-NEXT: (Low: %Base2 High: (399 + %Base2))
@@ -432,10 +432,10 @@ define dso_local void @forked_ptrs_different_base_same_offset_6b(ptr nocapture r
; CHECK-NEXT: %.sink.in = getelementptr inbounds i6, ptr %spec.select, i64 %indvars.iv
; CHECK-NEXT: Grouped accesses:
; CHECK-NEXT: Group [[GRP31]]:
-; CHECK-NEXT: (Low: %Dest High: (100 + %Dest))
+; CHECK-NEXT: (Low: %Dest High: (100 + %Dest)<nuw>)
; CHECK-NEXT: Member: {%Dest,+,1}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP32]]:
-; CHECK-NEXT: (Low: %Preds High: (400 + %Preds))
+; CHECK-NEXT: (Low: %Preds High: (400 + %Preds)<nuw>)
; CHECK-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP33]]:
; CHECK-NEXT: (Low: %Base2 High: (100 + %Base2))
@@ -471,10 +471,10 @@ define dso_local void @forked_ptrs_different_base_same_offset_6b(ptr nocapture r
; RECURSE-NEXT: %.sink.in = getelementptr inbounds i6, ptr %spec.select, i64 %indvars.iv
; RECURSE-NEXT: Grouped accesses:
; RECURSE-NEXT: Group [[GRP35]]:
-; RECURSE-NEXT: (Low: %Dest High: (100 + %Dest))
+; RECURSE-NEXT: (Low: %Dest High: (100 + %Dest)<nuw>)
; RECURSE-NEXT: Member: {%Dest,+,1}<nuw><%for.body>
; RECURSE-NEXT: Group [[GRP36]]:
-; RECURSE-NEXT: (Low: %Preds High: (400 + %Preds))
+; RECURSE-NEXT: (Low: %Preds High: (400 + %Preds)<nuw>)
; RECURSE-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; RECURSE-NEXT: Group [[GRP37]]:
; RECURSE-NEXT: (Low: %Base2 High: (100 + %Base2))
@@ -535,7 +535,7 @@ define dso_local void @forked_ptrs_different_base_same_offset_possible_poison(pt
; CHECK-NEXT: (Low: %Dest High: (400 + %Dest))
; CHECK-NEXT: Member: {%Dest,+,4}<nw><%for.body>
; CHECK-NEXT: Group [[GRP40]]:
-; CHECK-NEXT: (Low: %Preds High: (400 + %Preds))
+; CHECK-NEXT: (Low: %Preds High: (400 + %Preds)<nuw>)
; CHECK-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP41]]:
; CHECK-NEXT: (Low: %Base2 High: (400 + %Base2))
@@ -574,7 +574,7 @@ define dso_local void @forked_ptrs_different_base_same_offset_possible_poison(pt
; RECURSE-NEXT: (Low: %Dest High: (400 + %Dest))
; RECURSE-NEXT: Member: {%Dest,+,4}<nw><%for.body>
; RECURSE-NEXT: Group [[GRP44]]:
-; RECURSE-NEXT: (Low: %Preds High: (400 + %Preds))
+; RECURSE-NEXT: (Low: %Preds High: (400 + %Preds)<nuw>)
; RECURSE-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; RECURSE-NEXT: Group [[GRP45]]:
; RECURSE-NEXT: (Low: %Base2 High: (400 + %Base2))
@@ -696,10 +696,10 @@ define dso_local void @forked_ptrs_add_to_offset(ptr nocapture readonly %Base, p
; CHECK-NEXT: %arrayidx3 = getelementptr inbounds float, ptr %Base, i64 %offset
; CHECK-NEXT: Grouped accesses:
; CHECK-NEXT: Group [[GRP47]]:
-; CHECK-NEXT: (Low: %Dest High: (400 + %Dest))
+; CHECK-NEXT: (Low: %Dest High: (400 + %Dest)<nuw>)
; CHECK-NEXT: Member: {%Dest,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP48]]:
-; CHECK-NEXT: (Low: %Preds High: (400 + %Preds))
+; CHECK-NEXT: (Low: %Preds High: (400 + %Preds)<nuw>)
; CHECK-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP49]]:
; CHECK-NEXT: (Low: ((4 * %extra_offset) + %Base) High: (404 + (4 * %extra_offset) + %Base))
@@ -764,10 +764,10 @@ define dso_local void @forked_ptrs_sub_from_offset(ptr nocapture readonly %Base,
; CHECK-NEXT: %arrayidx3 = getelementptr inbounds float, ptr %Base, i64 %offset
; CHECK-NEXT: Grouped accesses:
; CHECK-NEXT: Group [[GRP50]]:
-; CHECK-NEXT: (Low: %Dest High: (400 + %Dest))
+; CHECK-NEXT: (Low: %Dest High: (400 + %Dest)<nuw>)
; CHECK-NEXT: Member: {%Dest,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP51]]:
-; CHECK-NEXT: (Low: %Preds High: (400 + %Preds))
+; CHECK-NEXT: (Low: %Preds High: (400 + %Preds)<nuw>)
; CHECK-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP52]]:
; CHECK-NEXT: (Low: ((-4 * %extra_offset) + %Base) High: (404 + (-4 * %extra_offset) + %Base))
@@ -832,10 +832,10 @@ define dso_local void @forked_ptrs_add_sub_offset(ptr nocapture readonly %Base,
; CHECK-NEXT: %arrayidx3 = getelementptr inbounds float, ptr %Base, i64 %offset
; CHECK-NEXT: Grouped accesses:
; CHECK-NEXT: Group [[GRP53]]:
-; CHECK-NEXT: (Low: %Dest High: (400 + %Dest))
+; CHECK-NEXT: (Low: %Dest High: (400 + %Dest)<nuw>)
; CHECK-NEXT: Member: {%Dest,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP54]]:
-; CHECK-NEXT: (Low: %Preds High: (400 + %Preds))
+; CHECK-NEXT: (Low: %Preds High: (400 + %Preds)<nuw>)
; CHECK-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP55]]:
; CHECK-NEXT: (Low: ((4 * %to_add) + (-4 * %to_sub) + %Base) High: (404 + (4 * %to_add) + (-4 * %to_sub) + %Base))
@@ -1256,7 +1256,7 @@ define void @sc_add_expr_ice(ptr %Base1, ptr %Base2, i64 %N) {
; CHECK-NEXT: %fptr = getelementptr inbounds double, ptr %Base2, i64 %sel
; CHECK-NEXT: Grouped accesses:
; CHECK-NEXT: Group [[GRP56]]:
-; CHECK-NEXT: (Low: %Base1 High: (8 + %Base1))
+; CHECK-NEXT: (Low: %Base1 High: (8 + %Base1)<nuw>)
; CHECK-NEXT: Member: %Base1
; CHECK-NEXT: Group [[GRP57]]:
; CHECK-NEXT: (Low: %Base2 High: ((8 * %N) + %Base2))
@@ -1283,7 +1283,7 @@ define void @sc_add_expr_ice(ptr %Base1, ptr %Base2, i64 %N) {
; RECURSE-NEXT: %fptr = getelementptr inbounds double, ptr %Base2, i64 %sel
; RECURSE-NEXT: Grouped accesses:
; RECURSE-NEXT: Group [[GRP58]]:
-; RECURSE-NEXT: (Low: %Base1 High: (8 + %Base1))
+; RECURSE-NEXT: (Low: %Base1 High: (8 + %Base1)<nuw>)
; RECURSE-NEXT: Member: %Base1
; RECURSE-NEXT: Group [[GRP59]]:
; RECURSE-NEXT: (Low: %Base2 High: ((8 * %N) + %Base2))
@@ -1351,7 +1351,7 @@ define void @forked_ptrs_with_different_base(ptr nocapture readonly %Preds, ptr
; CHECK-NEXT: (Low: %2 High: (63992 + %2))
; CHECK-NEXT: Member: {%2,+,8}<nw><%for.body>
; CHECK-NEXT: Group [[GRP61]]:
-; CHECK-NEXT: (Low: %Preds High: (31996 + %Preds))
+; CHECK-NEXT: (Low: %Preds High: (31996 + %Preds)<nuw>)
; CHECK-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; CHECK-NEXT: Group [[GRP62]]:
; CHECK-NEXT: (Low: %0 High: (63992 + %0))
@@ -1395,7 +1395,7 @@ define void @forked_ptrs_with_different_base(ptr nocapture readonly %Preds, ptr
; RECURSE-NEXT: (Low: %2 High: (63992 + %2))
; RECURSE-NEXT: Member: {%2,+,8}<nw><%for.body>
; RECURSE-NEXT: Group [[GRP65]]:
-; RECURSE-NEXT: (Low: %Preds High: (31996 + %Preds))
+; RECURSE-NEXT: (Low: %Preds High: (31996 + %Preds)<nuw>)
; RECURSE-NEXT: Member: {%Preds,+,4}<nuw><%for.body>
; RECURSE-NEXT: Group [[GRP66]]:
; RECURSE-NEXT: (Low: %0 High: (63992 + %0))
diff --git a/llvm/test/Analysis/LoopAccessAnalysis/loops-with-indirect-reads-and-writes.ll b/llvm/test/Analysis/LoopAccessAnalysis/loops-with-indirect-reads-and-writes.ll
index fd4f417e57b635..02c5fd1adc5fbe 100644
--- a/llvm/test/Analysis/LoopAccessAnalysis/loops-with-indirect-reads-and-writes.ll
+++ b/llvm/test/Analysis/LoopAccessAnalysis/loops-with-indirect-reads-and-writes.ll
@@ -101,7 +101,7 @@ define void @test_indirect_read_loop_also_modifies_pointer_array(ptr noundef %ar
; CHECK-NEXT: (Low: {(64 + %arr),+,64}<%loop.1> High: {(8064 + %arr),+,64}<%loop.1>)
; CHECK-NEXT: Member: {{\{\{}}(64 + %arr),+,64}<%loop.1>,+,8}<%loop.2>
; CHECK-NEXT: Group [[GRP2]]:
-; CHECK-NEXT: (Low: %arr High: (8000 + %arr))
+; CHECK-NEXT: (Low: %arr High: (8000 + %arr)<nuw>)
; CHECK-NEXT: Member: {%arr,+,8}<nuw><%loop.2>
; CHECK-EMPTY:
; CHECK-NEXT: Non vectorizable stores to invariant address were not found in loop.
@@ -169,7 +169,7 @@ define void @test_indirect_write_loop_also_modifies_pointer_array(ptr noundef %a
; CHECK-NEXT: (Low: {(64 + %arr),+,64}<%loop.1> High: {(8064 + %arr),+,64}<%loop.1>)
; CHECK-NEXT: Member: {{\{\{}}(64 + %arr),+,64}<%loop.1>,+,8}<%loop.2>
; CHECK-NEXT: Group [[GRP4]]:
-; CHECK-NEXT: (Low: %arr High: (8000 + %arr))
+; CHECK-NEXT: (Low: %arr High: (8000 + %arr)<nuw>)
; CHECK-NEXT: Member: {%arr,+,8}<nuw><%loop.2>
; CHECK-EMPTY:
; CHECK-NEXT: Non vectorizable stores to invariant address were not found in loop.
diff --git a/llvm/test/Analysis/LoopAccessAnalysis/memcheck-off-by-one-error.ll b/llvm/test/Analysis/LoopAccessAnalysis/memcheck-off-by-one-error.ll
index 4a9f004cb44a75..0cd83da79caede 100644
--- a/llvm/test/Analysis/LoopAccessAnalysis/memcheck-off-by-one-error.ll
+++ b/llvm/test/Analysis/LoopAccessAnalysis/memcheck-off-by-one-error.ll
@@ -19,8 +19,8 @@
; store a value at *%op touched memory under *%src.
;CHECK: function 'fastCopy':
-;CHECK: (Low: %op High: (32 + %op))
-;CHECK: (Low: %src High: (32 + %src))
+;CHECK: (Low: %op High: (32 + %op)<nuw>)
+;CHECK: (Low: %src High: (32 + %src)<nuw>)
define void @fastCopy(ptr nocapture readonly %src, ptr nocapture %op) {
entry:
diff --git a/llvm/test/Analysis/LoopAccessAnalysis/memcheck-store-vs-alloc-size.ll b/llvm/test/Analysis/LoopAccessAnalysis/memcheck-store-vs-alloc-size.ll
index 6bb1d21b90809d..59f3dcd6b61376 100644
--- a/llvm/test/Analysis/LoopAccessAnalysis/memcheck-store-vs-alloc-size.ll
+++ b/llvm/test/Analysis/LoopAccessAnalysis/memcheck-store-vs-alloc-size.ll
@@ -6,8 +6,8 @@
; Here, we use i19 instead of i64 because it has a different alloc size to its store size.
;CHECK: function 'fastCopy':
-;CHECK: (Low: %op High: (27 + %op))
-;CHECK: (Low: %src High: (27 + %src))
+;CHECK: (Low: %op High: (27 + %op)<nuw>)
+;CHECK: (Low: %src High: (27 + %src)<nuw>)
define void @fastCopy(ptr nocapture readonly %src, ptr nocapture %op) {
entry:
diff --git a/llvm/test/Analysis/LoopAccessAnalysis/number-of-memchecks.ll b/llvm/test/Analysis/LoopAccessAnalysis/number-of-memchecks.ll
index d4287612399ba4..ea968113126fa0 100644
--- a/llvm/test/Analysis/LoopAccessAnalysis/number-of-memchecks.ll
+++ b/llvm/test/Analysis/LoopAccessAnalysis/number-of-memchecks.ll
@@ -95,15 +95,15 @@ for.end: ; preds = %for.body
; CHECK-NEXT: %arrayidxB = getelementptr inbounds i16, ptr %b, i64 %ind
; CHECK-NEXT: Grouped accesses:
; CHECK-NEXT: Group {{.*}}[[ZERO]]:
-; CHECK-NEXT: (Low: %c High: (80 + %c))
+; CHECK-NEXT: (Low: %c High: (80 + %c)<nuw>)
; CHECK-NEXT: Member: {(2 + %c)<nuw>,+,4}
; CHECK-NEXT: Member: {%c,+,4}
; CHECK-NEXT: Group {{.*}}[[ONE]]:
-; CHECK-NEXT: (Low: %a High: (42 + %a))
+; CHECK-NEXT: (Low: %a High: (42 + %a)<nuw>)
; CHECK-NEXT: Member: {(2 + %a)<nuw>,+,2}
; CHECK-NEXT: Member: {%a,+,2}
; CHECK-NEXT: Group {{.*}}[[TWO]]:
-; CHECK-NEXT: (Low: %b High: (40 + %b))
+; CHECK-NEXT: (Low: %b High: (40 + %b)<nuw>)
; CHECK-NEXT: Member: {%b,+,2}
define void @testg(ptr %a,
@@ -167,7 +167,7 @@ for.end: ; preds = %for.body
; CHECK-NEXT: %arrayidxB = getelementptr i16, ptr %b, i64 %ind
; CHECK-NEXT: Grouped accesses:
; CHECK-NEXT: Group {{.*}}[[ZERO]]:
-; CHECK-NEXT: (Low: %c High: (80 + %c))
+; CHECK-NEXT: (Low: %c High: (80 + %c)<nuw>)
; CHECK-NEXT: Member: {(2 + %c)<nuw>,+,4}
; CHECK-NEXT: Member: {%c,+,4}
; CHECK-NEXT: Group {{.*}}[[ONE]]:
diff --git a/llvm/test/Analysis/LoopAccessAnalysis/pointer-phis.ll b/llvm/test/Analysis/LoopAccessAnalysis/pointer-phis.ll
index a214451bfd3fd4..2f24aacfafb906 100644
--- a/llvm/test/Analysis/LoopAccessAnalysis/pointer-phis.ll
+++ b/llvm/test/Analysis/LoopAccessAnalysis/pointer-phis.ll
@@ -277,14 +277,6 @@ define i32 @store_with_pointer_phi_incoming_phi(ptr %A, ptr %B, ptr %C, i1 %c.0,
; CHECK-NEXT: %arrayidx = getelementptr inbounds double, ptr %A, i64 %iv
; CHECK-NEXT: ptr %A
; CHECK-NEXT: Grouped accesses:
-; CHECK-NEXT: Group [[GRP4]]:
-; CHECK-NEXT: (Low: %C High: (8 + %C))
-; CHECK-NEXT: Member: %C
-; CHECK-NEXT: Group [[GRP5]]:
-; CHECK-NEXT: (Low: %B High: (8 + %B))
-; CHECK-NEXT: Member: %B
-; CHECK-NEXT: Group [[GRP6]]:
-; CHECK-NEXT: (Low: %A High: (256000 + %A))
; CHECK-NEXT: Member: {%A,+,8}<nuw><%loop.header>
; CHECK-NEXT: Member: %A
; CHECK-EMPTY:
@@ -360,7 +352,6 @@ define i32 @store_with_pointer_phi_incoming_phi_irreducible_cycle(ptr %A, ptr %B
; CHECK-NEXT: %arrayidx = getelementptr inbounds double, ptr %A, i64 %iv
; CHECK-NEXT: ptr %A
; CHECK-NEXT: Grouped accesses:
-; CHECK-NEXT: Group [[GRP7]]:
; CHECK-NEXT: (Low: %C High: (8 + %C))
; CHECK-NEXT: Member: %C
; CHECK-NEXT: Group [[GRP8]]:
@@ -368,6 +359,16 @@ define i32 @store_with_pointer_phi_incoming_phi_irreducible_cycle(ptr %A, ptr %B
; CHECK-NEXT: Member: %B
; CHECK-NEXT: Group [[GRP9]]:
; CHECK-NEXT: (Low: %A High: (256000 + %A))
+=======
+; CHECK-NEXT: Group [[GRP7]]:
+; CHECK-NEXT: (Low: %C High: (8 + %C)<nuw>)
+; CHECK-NEXT: Member: %C
+; CHECK-NEXT: Group [[GRP8]]:
+; CHECK-NEXT: (Low: %B High: (8 + %B)<nuw>)
+; CHECK-NEXT: Member: %B
+; CHECK-NEXT: Group [[GRP9]]:
+; CHECK-NEXT: (Low: %A High: (256000 + %A)<nuw>)
+>>>>>>> b250dd10a54f ([SCEV,LAA] Introduce scoped SCEV, use in LAA computations (WIP).)
; CHECK-NEXT: Member: {%A,+,8}<nuw><%loop.header>
; CHECK-NEXT: Member: %A
; CHECK-EMPTY:
@@ -532,7 +533,6 @@ define void @phi_load_store_memdep_check(i1 %c, ptr %A, ptr %B, ptr %C) {
; CHECK-NEXT: ptr %B
; CHECK-NEXT: ptr %B
; CHECK-NEXT: Grouped accesses:
-; CHECK-NEXT: Group [[GRP10]]:
; CHECK-NEXT: (Low: %A High: (2 + %A))
; CHECK-NEXT: Member: %A
; CHECK-NEXT: Member: %A
@@ -542,6 +542,18 @@ define void @phi_load_store_memdep_check(i1 %c, ptr %A, ptr %B, ptr %C) {
; CHECK-NEXT: Member: %C
; CHECK-NEXT: Group [[GRP12]]:
; CHECK-NEXT: (Low: %B High: (2 + %B))
+=======
+; CHECK-NEXT: Group [[GRP10]]:
+; CHECK-NEXT: (Low: %A High: (2 + %A)<nuw>)
+; CHECK-NEXT: Member: %A
+; CHECK-NEXT: Member: %A
+; CHECK-NEXT: Group [[GRP11]]:
+; CHECK-NEXT: (Low: %C High: (2 + %C)<nuw>)
+; CHECK-NEXT: Member: %C
+; CHECK-NEXT: Member: %C
+; CHECK-NEXT: Group [[GRP12]]:
+; CHECK-NEXT: (Low: %B High: (2 + %B)<nuw>)
+>>>>>>> b250dd10a54f ([SCEV,LAA] Introduce scoped SCEV, use in LAA computations (WIP).)
; CHECK-NEXT: Member: %B
; CHECK-NEXT: Member: %B
; CHECK-EMPTY:
diff --git a/llvm/test/Analysis/LoopAccessAnalysis/positive-dependence-distance-different-access-sizes.ll b/llvm/test/Analysis/LoopAccessAnalysis/positive-dependence-distance-different-access-sizes.ll
index 08e0bae7f05bac..9d68a860ff1a15 100644
--- a/llvm/test/Analysis/LoopAccessAnalysis/positive-dependence-distance-different-access-sizes.ll
+++ b/llvm/test/Analysis/LoopAccessAnalysis/positive-dependence-distance-different-access-sizes.ll
@@ -18,10 +18,10 @@ define void @test_distance_positive_independent_via_trip_count(ptr %A) {
; CHECK-NEXT: %gep.A = getelementptr inbounds i8, ptr %A, i64 %iv
; CHECK-NEXT: Grouped accesses:
; CHECK-NEXT: Group [[GRP1]]:
-; CHECK-NEXT: (Low: (400 + %A)<nuw> High: (804 + %A))
+; CHECK-NEXT: (Low: (400 + %A)<nuw> High: (804 + %A)<nuw>)
; CHECK-NEXT: Member: {(400 + %A)<nuw>,+,4}<nuw><%loop>
; CHECK-NEXT: Group [[GRP2]]:
-; CHECK-NEXT: (Low: %A High: (101 + %A))
+; CHECK-NEXT: (Low: %A High: (101 + %A)<nuw>)
; CHECK-NEXT: Member: {%A,+,1}<nuw><%loop>
; CHECK-EMPTY:
; CHECK-NEXT: Non vectorizable stores to invariant address were not found in loop.
@@ -53,21 +53,10 @@ exit:
define void @test_distance_positive_backwards(ptr %A) {
; CHECK-LABEL: 'test_distance_positive_backwards'
; CHECK-NEXT: loop:
-; CHECK-NEXT: Memory dependences are safe with run-time checks
+; CHECK-NEXT: Report: cannot check memory dependencies at runtime
; CHECK-NEXT: Dependences:
; CHECK-NEXT: Run-time memory checks:
-; CHECK-NEXT: Check 0:
-; CHECK-NEXT: Comparing group ([[GRP3:0x[0-9a-f]+]]):
-; CHECK-NEXT: %gep.A.400 = getelementptr inbounds i32, ptr %A.1, i64 %iv
-; CHECK-NEXT: Against group ([[GRP4:0x[0-9a-f]+]]):
-; CHECK-NEXT: %gep.A = getelementptr inbounds i8, ptr %A, i64 %iv
; CHECK-NEXT: Grouped accesses:
-; CHECK-NEXT: Group [[GRP3]]:
-; CHECK-NEXT: (Low: (1 + %A)<nuw> High: (405 + %A))
-; CHECK-NEXT: Member: {(1 + %A)<nuw>,+,4}<nuw><%loop>
-; CHECK-NEXT: Group [[GRP4]]:
-; CHECK-NEXT: (Low: %A High: (101 + %A))
-; CHECK-NEXT: Member: {%A,+,1}<nuw><%loop>
; CHECK-EMPTY:
; CHECK-NEXT: Non vectorizable stores to invariant address were not found in loop.
; CHECK-NEXT: SCEV assumptions:
@@ -100,16 +89,16 @@ define void @test_distance_positive_via_assume(ptr %A, i64 %off) {
; CHECK-NEXT: Dependences:
; CHECK-NEXT: Run-time memory checks:
; CHECK-NEXT: Check 0:
-; CHECK-NEXT: Comparing group ([[GRP5:0x[0-9a-f]+]]):
+; CHECK-NEXT: Comparing group ([[GRP3:0x[0-9a-f]+]]):
; CHECK-NEXT: %gep.A.400 = getelementptr inbounds i32, ptr %A.off, i64 %iv
-; CHECK-NEXT: Against group ([[GRP6:0x[0-9a-f]+]]):
+; CHECK-NEXT: Against group ([[GRP4:0x[0-9a-f]+]]):
; CHECK-NEXT: %gep.A = getelementptr inbounds i8, ptr %A, i64 %iv
; CHECK-NEXT: Grouped accesses:
-; CHECK-NEXT: Group [[GRP5]]:
+; CHECK-NEXT: Group [[GRP3]]:
; CHECK-NEXT: (Low: (%off + %A) High: (404 + %off + %A))
; CHECK-NEXT: Member: {(%off + %A),+,4}<nw><%loop>
-; CHECK-NEXT: Group [[GRP6]]:
-; CHECK-NEXT: (Low: %A High: (101 + %A))
+; CHECK-NEXT: Group [[GRP4]]:
+; CHECK-NEXT: (Low: %A High: (101 + %A)<nuw>)
; CHECK-NEXT: Member: {%A,+,1}<nuw><%loop>
; CHECK-EMPTY:
; CHECK-NEXT: Non vectorizable stores to invariant address were not found in loop.
diff --git a/llvm/test/Analysis/LoopAccessAnalysis/reverse-memcheck-bounds.ll b/llvm/test/Analysis/LoopAccessAnalysis/reverse-memcheck-bounds.ll
index 1496e1b0be82ba..ff7754e8e3fb9d 100644
--- a/llvm/test/Analysis/LoopAccessAnalysis/reverse-memcheck-bounds.ll
+++ b/llvm/test/Analysis/LoopAccessAnalysis/reverse-memcheck-bounds.ll
@@ -15,7 +15,7 @@ target datalayout = "e-m:e-i64:64-i128:128-n32:64-S128"
target triple = "aarch64"
; CHECK: function 'f':
-; CHECK: (Low: (20000 + %a)<nuw> High: (60004 + %a))
+; CHECK: (Low: (20000 + %a)<nuw> High: (60004 + %a)<nuw>)
@B = common global ptr null, align 8
@A = common global ptr null, align 8
diff --git a/llvm/test/Analysis/LoopAccessAnalysis/scoped-scevs.ll b/llvm/test/Analysis/LoopAccessAnalysis/scoped-scevs.ll
index 323ba2a739cf83..478a4bdb539789 100644
--- a/llvm/test/Analysis/LoopAccessAnalysis/scoped-scevs.ll
+++ b/llvm/test/Analysis/LoopAccessAnalysis/scoped-scevs.ll
@@ -1,6 +1,6 @@
; NOTE: Assertions have been autogenerated by utils/update_analyze_test_checks.py UTC_ARGS: --version 4
-; RUN: opt -passes='print<access-info>,print<scalar-evolution>' -disable-output %s 2>&1 | FileCheck --check-prefixes=LAA,AFTER %s
-; RUN: opt -passes='print<scalar-evolution>,print<access-info>,print<scalar-evolution>' -disable-output %s 2>&1 | FileCheck --check-prefixes=BEFORE,LAA,AFTER %s
+; RUN: opt -passes='print<access-info>,print<scalar-evolution>' -disable-output %s 2>&1 | FileCheck --check-prefixes=LAA,SCOPED,AFTER %s
+; RUN: opt -passes='print<scalar-evolution>,print<access-info>,print<scalar-evolution>' -disable-output %s 2>&1 | FileCheck --check-prefixes=BEFORE,LAA,NOSCOPED,AFTER %s
target datalayout = "e-m:o-i64:64-i128:128-n32:64-S128"
@@ -35,7 +35,8 @@ define ptr @test_ptr_range_end_computed_before_and_after_loop(ptr %A) {
; LAA-NEXT: (Low: (1 + %A) High: (405 + %A))
; LAA-NEXT: Member: {(1 + %A),+,4}<nw><%loop>
; LAA-NEXT: Group [[GRP2]]:
-; LAA-NEXT: (Low: %A High: (101 + %A))
+; SCOPED-NEXT: (Low: %A High: (101 + %A)<nuw>)
+; NOSCOPED-NEXT: (Low: %A High: (101 + %A))
; LAA-NEXT: Member: {%A,+,1}<nuw><%loop>
; LAA-EMPTY:
; LAA-NEXT: Non vectorizable stores to invariant address were not found in loop.
@@ -90,10 +91,11 @@ define void @test_ptr_range_end_computed_before_loop(ptr %A) {
; LAA-NEXT: %gep.A = getelementptr inbounds i8, ptr %A, i64 %iv
; LAA-NEXT: Grouped accesses:
; LAA-NEXT: Group [[GRP3]]:
-; LAA-NEXT: (Low: (1 + %A) High: (405 + %A))
+; LAA-NEXT: (Low: (1 + %A) High: (405 + %A))
; LAA-NEXT: Member: {(1 + %A),+,4}<nw><%loop>
; LAA-NEXT: Group [[GRP4]]:
-; LAA-NEXT: (Low: %A High: (101 + %A))
+; SCOPED-NEXT: (Low: %A High: (101 + %A)<nuw>)
+; NOSCOPED-NEXT: (Low: %A High: (101 + %A))
; LAA-NEXT: Member: {%A,+,1}<nuw><%loop>
; LAA-EMPTY:
; LAA-NEXT: Non vectorizable stores to invariant address were not found in loop.
@@ -135,21 +137,26 @@ define ptr @test_ptr_range_end_computed_after_loop(ptr %A) {
;
; LAA-LABEL: 'test_ptr_range_end_computed_after_loop'
; LAA-NEXT: loop:
-; LAA-NEXT: Memory dependences are safe with run-time checks
-; LAA-NEXT: Dependences:
-; LAA-NEXT: Run-time memory checks:
-; LAA-NEXT: Check 0:
-; LAA-NEXT: Comparing group ([[GRP5:0x[0-9a-f]+]]):
-; LAA-NEXT: %gep.A.400 = getelementptr inbounds i32, ptr %A.1, i64 %iv
-; LAA-NEXT: Against group ([[GRP6:0x[0-9a-f]+]]):
-; LAA-NEXT: %gep.A = getelementptr inbounds i8, ptr %A, i64 %iv
-; LAA-NEXT: Grouped accesses:
-; LAA-NEXT: Group [[GRP5]]:
-; LAA-NEXT: (Low: (1 + %A)<nuw> High: (405 + %A))
-; LAA-NEXT: Member: {(1 + %A)<nuw>,+,4}<nuw><%loop>
-; LAA-NEXT: Group [[GRP6]]:
-; LAA-NEXT: (Low: %A High: (101 + %A))
-; LAA-NEXT: Member: {%A,+,1}<nuw><%loop>
+; SCOPED-NEXT: Report: cannot check memory dependencies at runtime
+; SCOPED-NEXT: Dependences:
+; SCOPED-NEXT: Run-time memory checks:
+; SCOPED-NEXT: Grouped accesses:
+;
+; NOSCOPED-NEXT: Memory dependences are safe with run-time checks
+; NOSCOPED-NEXT: Dependences:
+; NOSCOPED-NEXT: Run-time memory checks:
+; NOSCOPED-NEXT: Check 0:
+; NOSCOPED-NEXT: Comparing group ([[GRP5:0x[0-9a-f]+]]):
+; NOSCOPED-NEXT: %gep.A.400 = getelementptr inbounds i32, ptr %A.1, i64 %iv
+; NOSCOPED-NEXT: Against group ([[GRP6:0x[0-9a-f]+]]):
+; NOSCOPED-NEXT: %gep.A = getelementptr inbounds i8, ptr %A, i64 %iv
+; NOSCOPED-NEXT: Grouped accesses:
+; NOSCOPED-NEXT: Group [[GRP5]]:
+; NOSCOPED-NEXT: (Low: (1 + %A)<nuw> High: (405 + %A))
+; NOSCOPED-NEXT: Member: {(1 + %A)<nuw>,+,4}<nuw><%loop>
+; NOSCOPED-NEXT: Group [[GRP6]]:
+; NOSCOPED-NEXT: (Low: %A High: (101 + %A))
+; NOSCOPED-NEXT: Member: {%A,+,1}<nuw><%loop>
; LAA-EMPTY:
; LAA-NEXT: Non vectorizable stores to invariant address were not found in loop.
; LAA-NEXT: SCEV assumptions: