<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p><br>
</p>
<div class="moz-cite-prefix">On 11/16/20 10:51 AM, Philip Reames via
llvm-commits wrote:<br>
</div>
<blockquote type="cite"
cite="mid:0516be9e-41d9-034e-9ca0-fe33acf23411@philipreames.com">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<p><br>
</p>
<div class="moz-cite-prefix">On 11/15/20 1:48 AM, Nikita Popov
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAF+90c-s_HELwd=hTUiFWPPu+7_14JbzaHaozK_5K89nhV7ssg@mail.gmail.com">
<meta http-equiv="content-type" content="text/html;
charset=UTF-8">
<div dir="ltr">
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Sun, Nov 15, 2020 at
4:21 AM Philip Reames via llvm-commits <<a
href="mailto:llvm-commits@lists.llvm.org"
moz-do-not-send="true">llvm-commits@lists.llvm.org</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex"><br>
Author: Philip Reames<br>
Date: 2020-11-14T19:21:05-08:00<br>
New Revision: 1ec6e1eb8a084bffae8a40236eb9925d8026dd07<br>
<br>
URL: <a
href="https://github.com/llvm/llvm-project/commit/1ec6e1eb8a084bffae8a40236eb9925d8026dd07"
rel="noreferrer" target="_blank" moz-do-not-send="true">https://github.com/llvm/llvm-project/commit/1ec6e1eb8a084bffae8a40236eb9925d8026dd07</a><br>
DIFF: <a
href="https://github.com/llvm/llvm-project/commit/1ec6e1eb8a084bffae8a40236eb9925d8026dd07.diff"
rel="noreferrer" target="_blank" moz-do-not-send="true">https://github.com/llvm/llvm-project/commit/1ec6e1eb8a084bffae8a40236eb9925d8026dd07.diff</a><br>
<br>
LOG: [SCEV] Factor out part of wrap flag detection logic
[NFC-ish]<br>
<br>
In an effort to make code around flag determination more
readable, and (possibly) prepare for a follow up change,
factor out some of the flag detection logic. In the
process, reduce the number of locations we mutate wrap
flags by a couple.<br>
<br>
Note that this isn't NFC. The old code tried for NSW xor
(NUW || NW). That is, two different paths computed
different sets of wrap flags. The new code will try for
all three. The result is that some expressions end up
with a few extra flags set.<br>
</blockquote>
<div><br>
</div>
<div>Hey Philip,</div>
<div><br>
</div>
<div>I've reverted this change, as it had a fairly large
compile-time impact for an NFC-ish change: <a
href="https://llvm-compile-time-tracker.com/compare.php?from=dd0b8b94d0796bd895cc998dd163b4fbebceb0b8&to=1ec6e1eb8a084bffae8a40236eb9925d8026dd07&stat=instructions"
moz-do-not-send="true">https://llvm-compile-time-tracker.com/compare.php?from=dd0b8b94d0796bd895cc998dd163b4fbebceb0b8&amp;to=1ec6e1eb8a084bffae8a40236eb9925d8026dd07&amp;stat=instructions</a>.
mafft in particular (which tends to stress-test SCEV) has
a &gt;2% regression.</div>
</div>
</div>
</blockquote>
Thank you for the revert. This is way higher than I'd have
expected.<br>
<blockquote type="cite"
cite="mid:CAF+90c-s_HELwd=hTUiFWPPu+7_14JbzaHaozK_5K89nhV7ssg@mail.gmail.com">
<div dir="ltr">
<div class="gmail_quote">
<div><br>
</div>
<div>It's pretty likely that this happens because you now
infer all nowrap flag kinds even though the particular
code doesn't need them. I suspect that we're also doing
some duplicate work, e.g. typically everything will get
both sext'ed and zext'ed while IndVars infers IR-level
nowrap flags, so both code-paths will try to re-infer the
full set of flags if inference is not successful.<br>
</div>
</div>
</div>
</blockquote>
<p>Looking at the change, the other possibility is that I changed
the order of two inference steps. If the one I moved earlier was
the expensive one, and the other mostly proved the required
flags, that might also explain it.</p>
<p>Do you have any suggestions for how to test patches? I'd
really rather not set up the full machinery locally; is
there a try bot for compile-time-sensitive changes? Or do I
just need to check in an attempt and watch carefully what
happens?<br>
</p>
</blockquote>
</blockquote>
JFYI, I landed a very cut-down version of this in commit 257d33c8.
I'll watch the compile time dashboard to see if anything regresses
before moving on to the likely candidates.<br>
<blockquote type="cite"
cite="mid:0516be9e-41d9-034e-9ca0-fe33acf23411@philipreames.com">
<p> </p>
<blockquote type="cite"
cite="mid:CAF+90c-s_HELwd=hTUiFWPPu+7_14JbzaHaozK_5K89nhV7ssg@mail.gmail.com">
<div dir="ltr">
<div class="gmail_quote">
<div><br>
</div>
<div>Regards,<br>
</div>
<div>Nikita<br>
</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex"> Added: <br>
<br>
<br>
Modified: <br>
llvm/include/llvm/Analysis/ScalarEvolution.h<br>
llvm/lib/Analysis/ScalarEvolution.cpp<br>
llvm/test/Analysis/ScalarEvolution/pr22641.ll<br>
llvm/test/Analysis/ScalarEvolution/sext-iv-2.ll<br>
llvm/test/Transforms/IndVarSimplify/X86/loop-invariant-conditions.ll<br>
<br>
Removed: <br>
<br>
<br>
<br>
################################################################################<br>
diff --git a/llvm/include/llvm/Analysis/ScalarEvolution.h
b/llvm/include/llvm/Analysis/ScalarEvolution.h<br>
index 71f56b8bbc0e..87489e0ffe99 100644<br>
--- a/llvm/include/llvm/Analysis/ScalarEvolution.h<br>
+++ b/llvm/include/llvm/Analysis/ScalarEvolution.h<br>
@@ -1905,6 +1905,10 @@ class ScalarEvolution {<br>
/// Try to prove NSW or NUW on \p AR relying on
ConstantRange manipulation.<br>
SCEV::NoWrapFlags proveNoWrapViaConstantRanges(const
SCEVAddRecExpr *AR);<br>
<br>
+ /// Try to prove NSW or NEW on \p AR by proving facts
about conditions known<br>
+ /// on entry and backedge.<br>
+ SCEV::NoWrapFlags proveNoWrapViaInduction(const
SCEVAddRecExpr *AR);<br>
+<br>
Optional<MonotonicPredicateType>
getMonotonicPredicateTypeImpl(<br>
const SCEVAddRecExpr *LHS, ICmpInst::Predicate
Pred,<br>
Optional<const SCEV *> NumIter, const
Instruction *Context);<br>
<br>
diff --git a/llvm/lib/Analysis/ScalarEvolution.cpp
b/llvm/lib/Analysis/ScalarEvolution.cpp<br>
index 7a8f54bd0c6e..bfb23f69e0b0 100644<br>
--- a/llvm/lib/Analysis/ScalarEvolution.cpp<br>
+++ b/llvm/lib/Analysis/ScalarEvolution.cpp<br>
@@ -1588,13 +1588,18 @@
ScalarEvolution::getZeroExtendExpr(const SCEV *Op, Type
*Ty, unsigned Depth) {<br>
setNoWrapFlags(const_cast<SCEVAddRecExpr
*>(AR), NewFlags);<br>
}<br>
<br>
+ if (!AR->hasNoUnsignedWrap()) {<br>
+ auto NewFlags = proveNoWrapViaInduction(AR);<br>
+ setNoWrapFlags(const_cast<SCEVAddRecExpr
*>(AR), NewFlags);<br>
+ }<br>
+<br>
// If we have special knowledge that this addrec
won't overflow,<br>
// we don't need to do any further analysis.<br>
if (AR->hasNoUnsignedWrap())<br>
return getAddRecExpr(<br>
getExtendAddRecStart<SCEVZeroExtendExpr>(AR, Ty,
this, Depth + 1),<br>
getZeroExtendExpr(Step, Ty, Depth + 1), L,
AR->getNoWrapFlags());<br>
-<br>
+ <br>
// Check whether the backedge-taken count is
SCEVCouldNotCompute.<br>
// Note that this serves two purposes: It filters
out loops that are<br>
// simply not analyzable, and it covers the case
where this code is<br>
@@ -1673,35 +1678,14 @@
ScalarEvolution::getZeroExtendExpr(const SCEV *Op, Type
*Ty, unsigned Depth) {<br>
// doing extra work that may not pay off.<br>
if (!isa<SCEVCouldNotCompute>(MaxBECount) ||
HasGuards ||<br>
!AC.assumptions().empty()) {<br>
- // If the backedge is guarded by a comparison
with the pre-inc<br>
- // value the addrec is safe. Also, if the entry
is guarded by<br>
- // a comparison with the start value and the
backedge is<br>
- // guarded by a comparison with the post-inc
value, the addrec<br>
- // is safe.<br>
- if (isKnownPositive(Step)) {<br>
- const SCEV *N =
getConstant(APInt::getMinValue(BitWidth) -<br>
-
getUnsignedRangeMax(Step));<br>
- if (isLoopBackedgeGuardedByCond(L,
ICmpInst::ICMP_ULT, AR, N) ||<br>
- isKnownOnEveryIteration(ICmpInst::ICMP_ULT,
AR, N)) {<br>
- // Cache knowledge of AR NUW, which is
propagated to this<br>
- // AddRec.<br>
- setNoWrapFlags(const_cast<SCEVAddRecExpr
*>(AR), SCEV::FlagNUW);<br>
- // Return the expression with the addrec on
the outside.<br>
- return getAddRecExpr(<br>
-
getExtendAddRecStart<SCEVZeroExtendExpr>(AR, Ty,
this,<br>
-
Depth + 1),<br>
- getZeroExtendExpr(Step, Ty, Depth + 1),
L,<br>
- AR->getNoWrapFlags());<br>
- }<br>
- } else if (isKnownNegative(Step)) {<br>
+ // For a negative step, we can extend the
operands iff doing so only<br>
+ // traverses values in the range
zext([0,UINT_MAX]). <br>
+ if (isKnownNegative(Step)) {<br>
const SCEV *N =
getConstant(APInt::getMaxValue(BitWidth) -<br>
getSignedRangeMin(Step));<br>
if (isLoopBackedgeGuardedByCond(L,
ICmpInst::ICMP_UGT, AR, N) ||<br>
isKnownOnEveryIteration(ICmpInst::ICMP_UGT,
AR, N)) {<br>
- // Cache knowledge of AR NW, which is
propagated to this<br>
- // AddRec. Negative step causes unsigned
wrap, but it<br>
- // still can't self-wrap.<br>
- setNoWrapFlags(const_cast<SCEVAddRecExpr
*>(AR), SCEV::FlagNW);<br>
+ // Note: We've proven NW here, but that's
already done above too.<br>
// Return the expression with the addrec on
the outside.<br>
return getAddRecExpr(<br>
getExtendAddRecStart<SCEVZeroExtendExpr>(AR, Ty,
this,<br>
@@ -1932,6 +1916,11 @@
ScalarEvolution::getSignExtendExpr(const SCEV *Op, Type
*Ty, unsigned Depth) {<br>
setNoWrapFlags(const_cast<SCEVAddRecExpr
*>(AR), NewFlags);<br>
}<br>
<br>
+ if (!AR->hasNoSignedWrap()) {<br>
+ auto NewFlags = proveNoWrapViaInduction(AR);<br>
+ setNoWrapFlags(const_cast<SCEVAddRecExpr
*>(AR), NewFlags);<br>
+ }<br>
+<br>
// If we have special knowledge that this addrec
won't overflow,<br>
// we don't need to do any further analysis.<br>
if (AR->hasNoSignedWrap())<br>
@@ -2015,35 +2004,6 @@
ScalarEvolution::getSignExtendExpr(const SCEV *Op, Type
*Ty, unsigned Depth) {<br>
}<br>
}<br>
<br>
- // Normally, in the cases we can prove no-overflow
via a<br>
- // backedge guarding condition, we can also compute
a backedge<br>
- // taken count for the loop. The exceptions are
assumptions and<br>
- // guards present in the loop -- SCEV is not great
at exploiting<br>
- // these to compute max backedge taken counts, but
can still use<br>
- // these to prove lack of overflow. Use this fact
to avoid<br>
- // doing extra work that may not pay off.<br>
-<br>
- if (!isa<SCEVCouldNotCompute>(MaxBECount) ||
HasGuards ||<br>
- !AC.assumptions().empty()) {<br>
- // If the backedge is guarded by a comparison
with the pre-inc<br>
- // value the addrec is safe. Also, if the entry
is guarded by<br>
- // a comparison with the start value and the
backedge is<br>
- // guarded by a comparison with the post-inc
value, the addrec<br>
- // is safe.<br>
- ICmpInst::Predicate Pred;<br>
- const SCEV *OverflowLimit =<br>
- getSignedOverflowLimitForStep(Step,
&Pred, this);<br>
- if (OverflowLimit &&<br>
- (isLoopBackedgeGuardedByCond(L, Pred, AR,
OverflowLimit) ||<br>
- isKnownOnEveryIteration(Pred, AR,
OverflowLimit))) {<br>
- // Cache knowledge of AR NSW, then propagate
NSW to the wide AddRec.<br>
- setNoWrapFlags(const_cast<SCEVAddRecExpr
*>(AR), SCEV::FlagNSW);<br>
- return getAddRecExpr(<br>
-
getExtendAddRecStart<SCEVSignExtendExpr>(AR, Ty,
this, Depth + 1),<br>
- getSignExtendExpr(Step, Ty, Depth + 1), L,
AR->getNoWrapFlags());<br>
- }<br>
- }<br>
-<br>
// sext({C,+,Step}) --> (sext(D) +
sext({C-D,+,Step}))<nuw><nsw><br>
// if D + (C - D + Step * n) could be proven to not
signed wrap<br>
// where D maximizes the number of trailing zeros
of (C - D + Step * n)<br>
@@ -4436,6 +4396,87 @@
ScalarEvolution::proveNoWrapViaConstantRanges(const
SCEVAddRecExpr *AR) {<br>
return Result;<br>
}<br>
<br>
+SCEV::NoWrapFlags<br>
+ScalarEvolution::proveNoWrapViaInduction(const
SCEVAddRecExpr *AR) {<br>
+ SCEV::NoWrapFlags Result = AR->getNoWrapFlags();<br>
+ if (!AR->isAffine())<br>
+ return Result;<br>
+<br>
+ const SCEV *Step = AR->getStepRecurrence(*this);<br>
+ unsigned BitWidth =
getTypeSizeInBits(AR->getType());<br>
+ const Loop *L = AR->getLoop();<br>
+<br>
+ // Check whether the backedge-taken count is
SCEVCouldNotCompute.<br>
+ // Note that this serves two purposes: It filters out
loops that are<br>
+ // simply not analyzable, and it covers the case where
this code is<br>
+ // being called from within backedge-taken count
analysis, such that<br>
+ // attempting to ask for the backedge-taken count would
likely result<br>
+ // in infinite recursion. In the later case, the
analysis code will<br>
+ // cope with a conservative value, and it will take
care to purge<br>
+ // that value once it has finished.<br>
+ const SCEV *MaxBECount =
getConstantMaxBackedgeTakenCount(L);<br>
+<br>
+ // Normally, in the cases we can prove no-overflow via
a<br>
+ // backedge guarding condition, we can also compute a
backedge<br>
+ // taken count for the loop. The exceptions are
assumptions and<br>
+ // guards present in the loop -- SCEV is not great at
exploiting<br>
+ // these to compute max backedge taken counts, but can
still use<br>
+ // these to prove lack of overflow. Use this fact to
avoid<br>
+ // doing extra work that may not pay off.<br>
+<br>
+ if (isa<SCEVCouldNotCompute>(MaxBECount)
&& !HasGuards &&<br>
+ AC.assumptions().empty())<br>
+ return Result;<br>
+<br>
+ if (!AR->hasNoSignedWrap()) {<br>
+ // If the backedge is guarded by a comparison with
the pre-inc<br>
+ // value the addrec is safe. Also, if the entry is
guarded by<br>
+ // a comparison with the start value and the backedge
is<br>
+ // guarded by a comparison with the post-inc value,
the addrec<br>
+ // is safe.<br>
+ ICmpInst::Predicate Pred;<br>
+ const SCEV *OverflowLimit =<br>
+ getSignedOverflowLimitForStep(Step, &Pred,
this);<br>
+ if (OverflowLimit &&<br>
+ (isLoopBackedgeGuardedByCond(L, Pred, AR,
OverflowLimit) ||<br>
+ isKnownOnEveryIteration(Pred, AR,
OverflowLimit))) {<br>
+ Result = setFlags(Result, SCEV::FlagNSW);<br>
+ }<br>
+ }<br>
+<br>
+ if (!AR->hasNoUnsignedWrap()) {<br>
+ // If the backedge is guarded by a comparison with
the pre-inc<br>
+ // value the addrec is safe. Also, if the entry is
guarded by<br>
+ // a comparison with the start value and the backedge
is<br>
+ // guarded by a comparison with the post-inc value,
the addrec<br>
+ // is safe.<br>
+ if (isKnownPositive(Step)) {<br>
+ const SCEV *N =
getConstant(APInt::getMinValue(BitWidth) -<br>
+
getUnsignedRangeMax(Step));<br>
+ if (isLoopBackedgeGuardedByCond(L,
ICmpInst::ICMP_ULT, AR, N) ||<br>
+ isKnownOnEveryIteration(ICmpInst::ICMP_ULT, AR,
N)) {<br>
+ Result = setFlags(Result, SCEV::FlagNUW);<br>
+ }<br>
+ }<br>
+ }<br>
+<br>
+ if (!AR->hasNoSelfWrap()) {<br>
+ if (isKnownNegative(Step)) {<br>
+ // TODO: We can generalize this condition by
proving (ugt AR, AR.start)<br>
+ // for the two clauses below.<br>
+ const SCEV *N =
getConstant(APInt::getMaxValue(BitWidth) -<br>
+
getSignedRangeMin(Step));<br>
+ if (isLoopBackedgeGuardedByCond(L,
ICmpInst::ICMP_UGT, AR, N) ||<br>
+ isKnownOnEveryIteration(ICmpInst::ICMP_UGT, AR,
N)) {<br>
+ // Negative step causes unsigned wrap, but it
still can't self-wrap.<br>
+ Result = setFlags(Result, SCEV::FlagNW);<br>
+ }<br>
+ }<br>
+ }<br>
+<br>
+ return Result;<br>
+}<br>
+<br>
namespace {<br>
<br>
/// Represents an abstract binary operation. This may
exist as a<br>
<br>
diff --git
a/llvm/test/Analysis/ScalarEvolution/pr22641.ll
b/llvm/test/Analysis/ScalarEvolution/pr22641.ll<br>
index 6c824e47a4eb..33f65e11d476 100644<br>
--- a/llvm/test/Analysis/ScalarEvolution/pr22641.ll<br>
+++ b/llvm/test/Analysis/ScalarEvolution/pr22641.ll<br>
@@ -12,7 +12,7 @@ body:<br>
%conv2 = zext i16 %dec2 to i32<br>
%conv = zext i16 %dec to i32<br>
; CHECK: %conv = zext i16 %dec to i32<br>
-; CHECK-NEXT: --> {(zext i16 (-1 + %a) to
i32),+,65535}<nuw><%body><br>
+; CHECK-NEXT: --> {(zext i16 (-1 + %a) to
i32),+,65535}<nuw><nsw><%body><br>
; CHECK-NOT: --> {(65535 + (zext i16 %a to
i32)),+,65535}<nuw><%body><br>
<br>
br label %cond<br>
<br>
diff --git
a/llvm/test/Analysis/ScalarEvolution/sext-iv-2.ll
b/llvm/test/Analysis/ScalarEvolution/sext-iv-2.ll<br>
index b84c13938dfa..a3a8a9783693 100644<br>
--- a/llvm/test/Analysis/ScalarEvolution/sext-iv-2.ll<br>
+++ b/llvm/test/Analysis/ScalarEvolution/sext-iv-2.ll<br>
@@ -2,9 +2,9 @@<br>
; RUN: opt < %s -disable-output
"-passes=print<scalar-evolution>" 2>&1 |
FileCheck %s<br>
<br>
; CHECK: %tmp3 = sext i8 %tmp2 to i32<br>
-; CHECK: --> (sext i8 {0,+,1}<%bb1> to i32){{
U: [^ ]+ S: [^ ]+}}{{ *}}Exits: -1<br>
+; CHECK: --> (sext i8 {0,+,1}<nuw><%bb1>
to i32){{ U: [^ ]+ S: [^ ]+}}{{ *}}Exits: -1<br>
; CHECK: %tmp4 = mul i32 %tmp3, %i.02<br>
-; CHECK: --> ((sext i8 {0,+,1}<%bb1> to i32) *
{0,+,1}<%bb>){{ U: [^ ]+ S: [^ ]+}}{{ *}}Exits:
{0,+,-1}<%bb><br>
+; CHECK: --> ((sext i8 {0,+,1}<nuw><%bb1>
to i32) * {0,+,1}<%bb>){{ U: [^ ]+ S: [^ ]+}}{{
*}}Exits: {0,+,-1}<%bb><br>
<br>
; These sexts are not foldable.<br>
<br>
<br>
diff --git
a/llvm/test/Transforms/IndVarSimplify/X86/loop-invariant-conditions.ll
b/llvm/test/Transforms/IndVarSimplify/X86/loop-invariant-conditions.ll<br>
index ad11bc015b66..e3a48890b276 100644<br>
---
a/llvm/test/Transforms/IndVarSimplify/X86/loop-invariant-conditions.ll<br>
+++
b/llvm/test/Transforms/IndVarSimplify/X86/loop-invariant-conditions.ll<br>
@@ -193,7 +193,7 @@ for.end:
; preds = %if.end, %entry<br>
define void @test7(i64 %start, i64* %inc_ptr) {<br>
; CHECK-LABEL: @test7(<br>
; CHECK-NEXT: entry:<br>
-; CHECK-NEXT: [[INC:%.*]] = load i64, i64*
[[INC_PTR:%.*]], !range !0<br>
+; CHECK-NEXT: [[INC:%.*]] = load i64, i64*
[[INC_PTR:%.*]], align 8, [[RNG0:!range !.*]]<br>
; CHECK-NEXT: [[OK:%.*]] = icmp sge i64 [[INC]], 0<br>
; CHECK-NEXT: br i1 [[OK]], label
[[LOOP_PREHEADER:%.*]], label [[FOR_END:%.*]]<br>
; CHECK: loop.preheader:<br>
@@ -317,7 +317,7 @@ define void @test3_neg(i64 %start) {<br>
; CHECK-NEXT: br label [[LOOP:%.*]]<br>
; CHECK: loop:<br>
; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [
[[START]], [[ENTRY:%.*]] ], [ [[INDVARS_IV_NEXT:%.*]],
[[LOOP]] ]<br>
-; CHECK-NEXT: [[INDVARS_IV_NEXT]] = add i64
[[INDVARS_IV]], 1<br>
+; CHECK-NEXT: [[INDVARS_IV_NEXT]] = add nsw i64
[[INDVARS_IV]], 1<br>
; CHECK-NEXT: [[EXITCOND:%.*]] = icmp ne i64
[[INDVARS_IV_NEXT]], [[TMP1]]<br>
; CHECK-NEXT: br i1 [[EXITCOND]], label [[LOOP]],
label [[FOR_END:%.*]]<br>
; CHECK: for.end:<br>
@@ -345,7 +345,7 @@ define void @test4_neg(i64 %start) {<br>
; CHECK-NEXT: br label [[LOOP:%.*]]<br>
; CHECK: loop:<br>
; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [
[[START]], [[ENTRY:%.*]] ], [ [[INDVARS_IV_NEXT:%.*]],
[[BACKEDGE:%.*]] ]<br>
-; CHECK-NEXT: [[INDVARS_IV_NEXT]] = add i64
[[INDVARS_IV]], 1<br>
+; CHECK-NEXT: [[INDVARS_IV_NEXT]] = add nsw i64
[[INDVARS_IV]], 1<br>
; CHECK-NEXT: [[CMP:%.*]] = icmp eq i64
[[INDVARS_IV_NEXT]], 25<br>
; CHECK-NEXT: br i1 [[CMP]], label [[BACKEDGE]], label
[[FOR_END:%.*]]<br>
; CHECK: backedge:<br>
@@ -405,7 +405,7 @@ for.end:
; preds = %if.end, %entry<br>
define void @test8(i64 %start, i64* %inc_ptr) {<br>
; CHECK-LABEL: @test8(<br>
; CHECK-NEXT: entry:<br>
-; CHECK-NEXT: [[INC:%.*]] = load i64, i64*
[[INC_PTR:%.*]], !range !1<br>
+; CHECK-NEXT: [[INC:%.*]] = load i64, i64*
[[INC_PTR:%.*]], align 8, [[RNG1:!range !.*]]<br>
; CHECK-NEXT: [[OK:%.*]] = icmp sge i64 [[INC]], 0<br>
; CHECK-NEXT: br i1 [[OK]], label
[[LOOP_PREHEADER:%.*]], label [[FOR_END:%.*]]<br>
; CHECK: loop.preheader:<br>
@@ -525,7 +525,7 @@ exit:<br>
define void @test11(i64* %inc_ptr) {<br>
; CHECK-LABEL: @test11(<br>
; CHECK-NEXT: entry:<br>
-; CHECK-NEXT: [[INC:%.*]] = load i64, i64*
[[INC_PTR:%.*]], !range !0<br>
+; CHECK-NEXT: [[INC:%.*]] = load i64, i64*
[[INC_PTR:%.*]], align 8, [[RNG0]]<br>
; CHECK-NEXT: [[NE_COND:%.*]] = icmp ne i64 [[INC]], 0<br>
; CHECK-NEXT: br i1 [[NE_COND]], label
[[LOOP_PREHEADER:%.*]], label [[EXIT:%.*]]<br>
; CHECK: loop.preheader:<br>
@@ -576,7 +576,7 @@ exit:<br>
define void @test12(i64* %inc_ptr) {<br>
; CHECK-LABEL: @test12(<br>
; CHECK-NEXT: entry:<br>
-; CHECK-NEXT: [[INC:%.*]] = load i64, i64*
[[INC_PTR:%.*]], !range !0<br>
+; CHECK-NEXT: [[INC:%.*]] = load i64, i64*
[[INC_PTR:%.*]], align 8, [[RNG0]]<br>
; CHECK-NEXT: br label [[LOOP:%.*]]<br>
; CHECK: loop:<br>
; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[INC]],
[[ENTRY:%.*]] ], [ [[IV_NEXT:%.*]], [[BACKEDGE:%.*]] ]<br>
<br>
<br>
<br>
_______________________________________________<br>
llvm-commits mailing list<br>
<a href="mailto:llvm-commits@lists.llvm.org"
target="_blank" moz-do-not-send="true">llvm-commits@lists.llvm.org</a><br>
<a
href="https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits"
rel="noreferrer" target="_blank" moz-do-not-send="true">https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits</a><br>
</blockquote>
</div>
</div>
</blockquote>
<br>
</blockquote>
</body>
</html>