[llvm] r217343 - Add additional patterns for @llvm.assume in ValueTracking

Hal Finkel hfinkel at anl.gov
Mon Sep 8 14:26:15 PDT 2014


----- Original Message -----
> From: "Sean Silva" <chisophugis at gmail.com>
> To: "Hal Finkel" <hfinkel at anl.gov>
> Cc: llvm-commits at cs.uiuc.edu
> Sent: Monday, September 8, 2014 4:22:44 PM
> Subject: Re: [llvm] r217343 - Add additional patterns for @llvm.assume in ValueTracking
> 
> 
> Hi Hal,
> 
> 
> Since these assumption intrinsics seem largely pitched at frontend
> writers, it seems like we should document the forms that we
> recognize. Especially when it comes to aliasing, alignment, and loop
> optimizations, most frontends' language semantics allow them to make
> much stronger assumptions than clang, so they are poised to get a
> lot of mileage out of these.
> 
> 
> Even just appending descriptions from your commit messages into an
> ever growing document would be useful. It's easier to fill in
> details on a skeleton than to find all the bones.
> 
> 
> (you can copy docs/SphinxQuickstartTemplate.rst to get up and running
> with a new document quickly)

That's a good idea. I'll start a new document detailing what we support.

 -Hal

> 
> 
> -- Sean Silva
> 
> 
> 
> On Sun, Sep 7, 2014 at 12:21 PM, Hal Finkel < hfinkel at anl.gov >
> wrote:
> 
> 
> Author: hfinkel
> Date: Sun Sep 7 14:21:07 2014
> New Revision: 217343
> 
> URL: http://llvm.org/viewvc/llvm-project?rev=217343&view=rev
> Log:
> Add additional patterns for @llvm.assume in ValueTracking
> 
> This builds on r217342, which added the infrastructure to compute
> known bits
> using assumptions (@llvm.assume calls). That original commit added
> only a few
> patterns (to catch common cases related to determining pointer
> alignment); this
> change adds several other patterns for simple cases.
> 
> r217342 established that, for assume(v & b = a), for those bits in the
> mask that are known to be one, we can propagate known bits from a to
> v. It also had a known-bits transfer for assume(a = b). This patch adds:
> 
> assume(~(v & b) = a) : For those bits in the mask that are known to
> be one, we can propagate inverted known bits from a to v.
> 
> assume(v | b = a) : For those bits in b that are known to be zero, we
> can propagate known bits from a to v.
> 
> assume(~(v | b) = a): For those bits in b that are known to be zero,
> we can propagate inverted known bits from a to v.
> 
> assume(v ^ b = a) : For those bits in b that are known to be zero, we
> can propagate known bits from a to v. For those bits in b that are
> known to be one, we can propagate inverted known bits from a to v.
> 
> assume(~(v ^ b) = a) : For those bits in b that are known to be zero,
> we can propagate inverted known bits from a to v. For those bits in b
> that are known to be one, we can propagate known bits from a to v.
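
[To make the bitwise rules above concrete, here is a small Python model --
illustration only, not the ValueTracking code, and the helper names are
made up. Known bits are tracked as a pair of masks: kz has a 1 where a
bit is known to be zero, ko where it is known to be one.]

```python
# Python model of the bitwise known-bits transfers (illustration only).
WIDTH = 32
MASK = (1 << WIDTH) - 1

def from_and(a_kz, a_ko, b_kz, b_ko):
    # assume(v & b == a): where b is known one, v's bits must equal a's.
    return a_kz & b_ko, a_ko & b_ko

def from_xor(a_kz, a_ko, b_kz, b_ko):
    # assume(v ^ b == a): where b is known zero, copy a's bits to v;
    # where b is known one, copy them inverted.
    return ((a_kz & b_kz) | (a_ko & b_ko),
            (a_ko & b_kz) | (a_kz & b_ko))

# assume(v & 15 == 5): both a and b are constants, so fully known.
kz, ko = from_and(~5 & MASK, 5, ~15 & MASK, 15)
assert (kz, ko) == (0b1010, 0b0101)   # the low four bits of v are 0101
```

[The same pattern as test1/test5 in the new assume2.ll: once the low bits
of v are pinned, a later `and` over those bits constant-folds.]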
> 
> assume(v << c = a) : For those bits in a that are known, we can
> propagate them
> to known bits in v shifted to the right by c.
> 
> assume(~(v << c) = a) : For those bits in a that are known, we can
> propagate
> them inverted to known bits in v shifted to the right by c.
> 
> assume(v >> c = a) : For those bits in a that are known, we can
> propagate them to known bits in v shifted to the left by c.
> 
> assume(~(v >> c) = a) : For those bits in a that are known, we can
> propagate them inverted to known bits in v shifted to the left by c.
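
[The shift rules can be modeled the same way (hypothetical Python, not
LLVM code). Note the direction flip: for v << c == a, a's known bits move
right to land on v; for v >> c == a, they move left:]

```python
WIDTH = 32
MASK = (1 << WIDTH) - 1

def from_shl(a_kz, a_ko, c):
    # assume(v << c == a): a's known bits, shifted right by c, are known in v.
    # The bits of v shifted out the top simply stay unknown.
    return (a_kz & MASK) >> c, (a_ko & MASK) >> c

def from_shr(a_kz, a_ko, c):
    # assume(v >> c == a): a's known bits, shifted left by c, are known in v.
    return (a_kz << c) & MASK, (a_ko << c) & MASK

# assume(v << 2 == 20): the relevant low bits of v are 20 >> 2 == 5,
# matching test6 in assume2.ll.
kz, ko = from_shl(~20 & MASK, 20, 2)
assert ko == 5
```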
> 
> assume(v >=_s c) where c is non-negative: The sign bit of v is zero
> 
> assume(v >_s c) where c is at least -1: The sign bit of v is zero
> 
> assume(v <=_s c) where c is negative: The sign bit of v is one
> 
> assume(v <_s c) where c is non-positive: The sign bit of v is one
> 
> assume(v <=_u c): Transfer the known high zero bits
> 
> assume(v <_u c): Transfer the known high zero bits (if c is known to
> be a power
> of 2, transfer one more)
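
[The comparison rules reduce to a sign-bit or high-zero-bits transfer.
Again a hypothetical Python sketch, not the ValueTracking code:]

```python
WIDTH = 32
SIGN = 1 << (WIDTH - 1)
MASK = (1 << WIDTH) - 1

def leading_ones(x):
    # count consecutive one bits starting at the top of the word
    n = 0
    for i in range(WIDTH - 1, -1, -1):
        if (x >> i) & 1:
            n += 1
        else:
            break
    return n

def from_ule(c_kz):
    # assume(v <=_u c): v inherits c's known-zero high bits.
    n = leading_ones(c_kz)
    return (MASK << (WIDTH - n)) & MASK if n else 0

def from_sge(c_kz):
    # assume(v >=_s c), c's sign bit known zero: v's sign bit is zero.
    return SIGN if c_kz & SIGN else 0

# assume(v <=_u 256): bits 9..31 of v become known zero, so v & 3072
# folds to 0, as in test11 of assume2.ll.
kz = from_ule(~256 & MASK)
assert kz & 3072 == 3072
```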
> 
> A small addition to InstCombine was necessary for some of the test
> cases. The
> problem is that when InstCombine was simplifying and, or, etc. it
> would fail to
> check the 'do I know all of the bits' condition before checking less
> specific
> conditions and would not fully constant-fold the result. I'm not sure
> how to
> trigger this aside from using assumptions, so I've just included the
> change
> here.
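
[The InstCombine change amounts to an early exit of roughly this shape
(Python sketch with made-up names): if every demanded bit of the 'and' is
already known, materialize the constant instead of falling through to the
weaker simplifications:]

```python
def fold_demanded_and(demanded, l_kz, l_ko, r_kz, r_ko):
    # A bit of (l & r) is known zero if either side is known zero,
    # and known one if both sides are known one.
    known = (l_kz | r_kz) | (l_ko & r_ko)
    if demanded & known == demanded:
        return l_ko & r_ko      # fully determined: fold to a constant
    return None                 # not fully known; try the other rules

# From test1: assume(a & 15 == 5) pins a's low four bits, so 'a & 7'
# has every demanded bit known and folds to 5.
MASK = (1 << 32) - 1
assert fold_demanded_and(MASK, 0b1010, 0b0101, ~7 & MASK, 7) == 5
```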
> 
> Added:
> llvm/trunk/test/Transforms/InstCombine/assume2.ll
> Modified:
> llvm/trunk/lib/Analysis/ValueTracking.cpp
> llvm/trunk/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp
> 
> Modified: llvm/trunk/lib/Analysis/ValueTracking.cpp
> URL:
> http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Analysis/ValueTracking.cpp?rev=217343&r1=217342&r2=217343&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Analysis/ValueTracking.cpp (original)
> +++ llvm/trunk/lib/Analysis/ValueTracking.cpp Sun Sep 7 14:21:07 2014
> @@ -458,6 +458,20 @@ m_c_And(const LHS &L, const RHS &R) {
> return m_CombineOr(m_And(L, R), m_And(R, L));
> }
> 
> +template<typename LHS, typename RHS>
> +inline match_combine_or<BinaryOp_match<LHS, RHS, Instruction::Or>,
> + BinaryOp_match<RHS, LHS, Instruction::Or>>
> +m_c_Or(const LHS &L, const RHS &R) {
> + return m_CombineOr(m_Or(L, R), m_Or(R, L));
> +}
> +
> +template<typename LHS, typename RHS>
> +inline match_combine_or<BinaryOp_match<LHS, RHS, Instruction::Xor>,
> + BinaryOp_match<RHS, LHS, Instruction::Xor>>
> +m_c_Xor(const LHS &L, const RHS &R) {
> + return m_CombineOr(m_Xor(L, R), m_Xor(R, L));
> +}
> +
> static void computeKnownBitsFromAssume(Value *V, APInt &KnownZero,
> APInt &KnownOne,
> const DataLayout *DL,
> @@ -489,6 +503,7 @@ static void computeKnownBitsFromAssume(V
> m_BitCast(m_Specific(V))));
> 
> CmpInst::Predicate Pred;
> + ConstantInt *C;
> // assume(v = a)
> if (match(I, m_Intrinsic<Intrinsic::assume>(
> m_c_ICmp(Pred, m_V, m_Value(A)))) &&
> @@ -510,6 +525,203 @@ static void computeKnownBitsFromAssume(V
> // known bits from the RHS to V.
> KnownZero |= RHSKnownZero & MaskKnownOne;
> KnownOne |= RHSKnownOne & MaskKnownOne;
> + // assume(~(v & b) = a)
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_c_ICmp(Pred, m_Not(m_c_And(m_V, m_Value(B))),
> + m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_EQ && isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> + APInt MaskKnownZero(BitWidth, 0), MaskKnownOne(BitWidth, 0);
> + computeKnownBits(B, MaskKnownZero, MaskKnownOne, DL, Depth+1, Query(Q, I));
> +
> + // For those bits in the mask that are known to be one, we can propagate
> + // inverted known bits from the RHS to V.
> + KnownZero |= RHSKnownOne & MaskKnownOne;
> + KnownOne |= RHSKnownZero & MaskKnownOne;
> + // assume(v | b = a)
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_c_ICmp(Pred, m_c_Or(m_V, m_Value(B)), m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_EQ && isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> + APInt BKnownZero(BitWidth, 0), BKnownOne(BitWidth, 0);
> + computeKnownBits(B, BKnownZero, BKnownOne, DL, Depth+1, Query(Q, I));
> +
> + // For those bits in B that are known to be zero, we can propagate known
> + // bits from the RHS to V.
> + KnownZero |= RHSKnownZero & BKnownZero;
> + KnownOne |= RHSKnownOne & BKnownZero;
> + // assume(~(v | b) = a)
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_c_ICmp(Pred, m_Not(m_c_Or(m_V, m_Value(B))),
> + m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_EQ && isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> + APInt BKnownZero(BitWidth, 0), BKnownOne(BitWidth, 0);
> + computeKnownBits(B, BKnownZero, BKnownOne, DL, Depth+1, Query(Q, I));
> +
> + // For those bits in B that are known to be zero, we can propagate
> + // inverted known bits from the RHS to V.
> + KnownZero |= RHSKnownOne & BKnownZero;
> + KnownOne |= RHSKnownZero & BKnownZero;
> + // assume(v ^ b = a)
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_c_ICmp(Pred, m_c_Xor(m_V, m_Value(B)), m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_EQ && isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> + APInt BKnownZero(BitWidth, 0), BKnownOne(BitWidth, 0);
> + computeKnownBits(B, BKnownZero, BKnownOne, DL, Depth+1, Query(Q, I));
> +
> + // For those bits in B that are known to be zero, we can propagate known
> + // bits from the RHS to V. For those bits in B that are known to be one,
> + // we can propagate inverted known bits from the RHS to V.
> + KnownZero |= RHSKnownZero & BKnownZero;
> + KnownOne |= RHSKnownOne & BKnownZero;
> + KnownZero |= RHSKnownOne & BKnownOne;
> + KnownOne |= RHSKnownZero & BKnownOne;
> + // assume(~(v ^ b) = a)
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_c_ICmp(Pred, m_Not(m_c_Xor(m_V, m_Value(B))),
> + m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_EQ && isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> + APInt BKnownZero(BitWidth, 0), BKnownOne(BitWidth, 0);
> + computeKnownBits(B, BKnownZero, BKnownOne, DL, Depth+1, Query(Q, I));
> +
> + // For those bits in B that are known to be zero, we can propagate
> + // inverted known bits from the RHS to V. For those bits in B that are
> + // known to be one, we can propagate known bits from the RHS to V.
> + KnownZero |= RHSKnownOne & BKnownZero;
> + KnownOne |= RHSKnownZero & BKnownZero;
> + KnownZero |= RHSKnownZero & BKnownOne;
> + KnownOne |= RHSKnownOne & BKnownOne;
> + // assume(v << c = a)
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_c_ICmp(Pred, m_Shl(m_V, m_ConstantInt(C)),
> + m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_EQ && isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> + // For those bits in RHS that are known, we can propagate them to known
> + // bits in V shifted to the right by C.
> + KnownZero |= RHSKnownZero.lshr(C->getZExtValue());
> + KnownOne |= RHSKnownOne.lshr(C->getZExtValue());
> + // assume(~(v << c) = a)
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_c_ICmp(Pred, m_Not(m_Shl(m_V, m_ConstantInt(C))),
> + m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_EQ && isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> + // For those bits in RHS that are known, we can propagate them inverted
> + // to known bits in V shifted to the right by C.
> + KnownZero |= RHSKnownOne.lshr(C->getZExtValue());
> + KnownOne |= RHSKnownZero.lshr(C->getZExtValue());
> + // assume(v >> c = a)
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_c_ICmp(Pred, m_CombineOr(m_LShr(m_V, m_ConstantInt(C)),
> + m_AShr(m_V, m_ConstantInt(C))),
> + m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_EQ && isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> + // For those bits in RHS that are known, we can propagate them to known
> + // bits in V shifted to the left by C.
> + KnownZero |= RHSKnownZero << C->getZExtValue();
> + KnownOne |= RHSKnownOne << C->getZExtValue();
> + // assume(~(v >> c) = a)
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_c_ICmp(Pred, m_Not(m_CombineOr(
> + m_LShr(m_V, m_ConstantInt(C)),
> + m_AShr(m_V, m_ConstantInt(C)))),
> + m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_EQ && isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> + // For those bits in RHS that are known, we can propagate them inverted
> + // to known bits in V shifted to the left by C.
> + KnownZero |= RHSKnownOne << C->getZExtValue();
> + KnownOne |= RHSKnownZero << C->getZExtValue();
> + // assume(v >=_s c) where c is non-negative
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_ICmp(Pred, m_V, m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_SGE &&
> + isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> +
> + if (RHSKnownZero.isNegative()) {
> + // We know that the sign bit is zero.
> + KnownZero |= APInt::getSignBit(BitWidth);
> + }
> + // assume(v >_s c) where c is at least -1.
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_ICmp(Pred, m_V, m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_SGT &&
> + isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> +
> + if (RHSKnownOne.isAllOnesValue() || RHSKnownZero.isNegative()) {
> + // We know that the sign bit is zero.
> + KnownZero |= APInt::getSignBit(BitWidth);
> + }
> + // assume(v <=_s c) where c is negative
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_ICmp(Pred, m_V, m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_SLE &&
> + isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> +
> + if (RHSKnownOne.isNegative()) {
> + // We know that the sign bit is one.
> + KnownOne |= APInt::getSignBit(BitWidth);
> + }
> + // assume(v <_s c) where c is non-positive
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_ICmp(Pred, m_V, m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_SLT &&
> + isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> +
> + if (RHSKnownZero.isAllOnesValue() || RHSKnownOne.isNegative()) {
> + // We know that the sign bit is one.
> + KnownOne |= APInt::getSignBit(BitWidth);
> + }
> + // assume(v <=_u c)
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_ICmp(Pred, m_V, m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_ULE &&
> + isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> +
> + // Whatever high bits in c are zero are known to be zero.
> + KnownZero |=
> + APInt::getHighBitsSet(BitWidth, RHSKnownZero.countLeadingOnes());
> + // assume(v <_u c)
> + } else if (match(I, m_Intrinsic<Intrinsic::assume>(
> + m_ICmp(Pred, m_V, m_Value(A)))) &&
> + Pred == ICmpInst::ICMP_ULT &&
> + isValidAssumeForContext(I, Q, DL)) {
> + APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
> + computeKnownBits(A, RHSKnownZero, RHSKnownOne, DL, Depth+1, Query(Q, I));
> +
> + // Whatever high bits in c are zero are known to be zero (if c is a power
> + // of 2, then one more).
> + if (isKnownToBeAPowerOfTwo(A, false, Depth+1, Query(Q, I)))
> + KnownZero |=
> + APInt::getHighBitsSet(BitWidth, RHSKnownZero.countLeadingOnes()+1);
> + else
> + KnownZero |=
> + APInt::getHighBitsSet(BitWidth, RHSKnownZero.countLeadingOnes());
> }
> }
> }
> 
> Modified:
> llvm/trunk/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp
> URL:
> http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp?rev=217343&r1=217342&r2=217343&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp (original)
> +++ llvm/trunk/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp Sun Sep 7 14:21:07 2014
> @@ -264,6 +264,12 @@ Value *InstCombiner::SimplifyDemandedUse
> assert(!(RHSKnownZero & RHSKnownOne) && "Bits known to be one AND zero?");
> assert(!(LHSKnownZero & LHSKnownOne) && "Bits known to be one AND zero?");
> 
> + // If the client is only demanding bits that we know, return the known
> + // constant.
> + if ((DemandedMask & ((RHSKnownZero | LHSKnownZero)|
> + (RHSKnownOne & LHSKnownOne))) == DemandedMask)
> + return Constant::getIntegerValue(VTy, RHSKnownOne & LHSKnownOne);
> +
> // If all of the demanded bits are known 1 on one side, return the other.
> // These bits cannot contribute to the result of the 'and'.
> if ((DemandedMask & ~LHSKnownZero & RHSKnownOne) ==
> @@ -296,6 +302,12 @@ Value *InstCombiner::SimplifyDemandedUse
> assert(!(RHSKnownZero & RHSKnownOne) && "Bits known to be one AND zero?");
> assert(!(LHSKnownZero & LHSKnownOne) && "Bits known to be one AND zero?");
> 
> + // If the client is only demanding bits that we know, return the known
> + // constant.
> + if ((DemandedMask & ((RHSKnownZero & LHSKnownZero)|
> + (RHSKnownOne | LHSKnownOne))) == DemandedMask)
> + return Constant::getIntegerValue(VTy, RHSKnownOne | LHSKnownOne);
> +
> // If all of the demanded bits are known zero on one side, return the other.
> // These bits cannot contribute to the result of the 'or'.
> if ((DemandedMask & ~LHSKnownOne & RHSKnownZero) ==
> @@ -332,6 +344,18 @@ Value *InstCombiner::SimplifyDemandedUse
> assert(!(RHSKnownZero & RHSKnownOne) && "Bits known to be one AND zero?");
> assert(!(LHSKnownZero & LHSKnownOne) && "Bits known to be one AND zero?");
> 
> + // Output known-0 bits are known if clear or set in both the LHS & RHS.
> + APInt IKnownZero = (RHSKnownZero & LHSKnownZero) |
> + (RHSKnownOne & LHSKnownOne);
> + // Output known-1 are known to be set if set in only one of the LHS, RHS.
> + APInt IKnownOne = (RHSKnownZero & LHSKnownOne) |
> + (RHSKnownOne & LHSKnownZero);
> +
> + // If the client is only demanding bits that we know, return the known
> + // constant.
> + if ((DemandedMask & (IKnownZero|IKnownOne)) == DemandedMask)
> + return Constant::getIntegerValue(VTy, IKnownOne);
> +
> // If all of the demanded bits are known zero on one side, return the other.
> // These bits cannot contribute to the result of the 'xor'.
> if ((DemandedMask & RHSKnownZero) == DemandedMask)
> 
> Added: llvm/trunk/test/Transforms/InstCombine/assume2.ll
> URL:
> http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/InstCombine/assume2.ll?rev=217343&view=auto
> ==============================================================================
> --- llvm/trunk/test/Transforms/InstCombine/assume2.ll (added)
> +++ llvm/trunk/test/Transforms/InstCombine/assume2.ll Sun Sep 7 14:21:07 2014
> @@ -0,0 +1,174 @@
> +; RUN: opt < %s -instcombine -S | FileCheck %s
> +target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
> +target triple = "x86_64-unknown-linux-gnu"
> +
> +; Function Attrs: nounwind
> +declare void @llvm.assume(i1) #1
> +
> +; Function Attrs: nounwind uwtable
> +define i32 @test1(i32 %a) #0 {
> +entry:
> +; CHECK-LABEL: @test1
> +; CHECK: call void @llvm.assume
> +; CHECK: ret i32 5
> +
> + %and = and i32 %a, 15
> + %cmp = icmp eq i32 %and, 5
> + tail call void @llvm.assume(i1 %cmp)
> +
> + %and1 = and i32 %a, 7
> + ret i32 %and1
> +}
> +
> +; Function Attrs: nounwind uwtable
> +define i32 @test2(i32 %a) #0 {
> +entry:
> +; CHECK-LABEL: @test2
> +; CHECK: call void @llvm.assume
> +; CHECK: ret i32 2
> +
> + %and = and i32 %a, 15
> + %nand = xor i32 %and, -1
> + %cmp = icmp eq i32 %nand, 4294967285
> + tail call void @llvm.assume(i1 %cmp)
> +
> + %and1 = and i32 %a, 7
> + ret i32 %and1
> +}
> +
> +; Function Attrs: nounwind uwtable
> +define i32 @test3(i32 %a) #0 {
> +entry:
> +; CHECK-LABEL: @test3
> +; CHECK: call void @llvm.assume
> +; CHECK: ret i32 5
> +
> + %v = or i32 %a, 4294967280
> + %cmp = icmp eq i32 %v, 4294967285
> + tail call void @llvm.assume(i1 %cmp)
> +
> + %and1 = and i32 %a, 7
> + ret i32 %and1
> +}
> +
> +; Function Attrs: nounwind uwtable
> +define i32 @test4(i32 %a) #0 {
> +entry:
> +; CHECK-LABEL: @test4
> +; CHECK: call void @llvm.assume
> +; CHECK: ret i32 2
> +
> + %v = or i32 %a, 4294967280
> + %nv = xor i32 %v, -1
> + %cmp = icmp eq i32 %nv, 5
> + tail call void @llvm.assume(i1 %cmp)
> +
> + %and1 = and i32 %a, 7
> + ret i32 %and1
> +}
> +
> +; Function Attrs: nounwind uwtable
> +define i32 @test5(i32 %a) #0 {
> +entry:
> +; CHECK-LABEL: @test5
> +; CHECK: call void @llvm.assume
> +; CHECK: ret i32 4
> +
> + %v = xor i32 %a, 1
> + %cmp = icmp eq i32 %v, 5
> + tail call void @llvm.assume(i1 %cmp)
> +
> + %and1 = and i32 %a, 7
> + ret i32 %and1
> +}
> +
> +; Function Attrs: nounwind uwtable
> +define i32 @test6(i32 %a) #0 {
> +entry:
> +; CHECK-LABEL: @test6
> +; CHECK: call void @llvm.assume
> +; CHECK: ret i32 5
> +
> + %v = shl i32 %a, 2
> + %cmp = icmp eq i32 %v, 20
> + tail call void @llvm.assume(i1 %cmp)
> +
> + %and1 = and i32 %a, 63
> + ret i32 %and1
> +}
> +
> +; Function Attrs: nounwind uwtable
> +define i32 @test7(i32 %a) #0 {
> +entry:
> +; CHECK-LABEL: @test7
> +; CHECK: call void @llvm.assume
> +; CHECK: ret i32 20
> +
> + %v = lshr i32 %a, 2
> + %cmp = icmp eq i32 %v, 5
> + tail call void @llvm.assume(i1 %cmp)
> +
> + %and1 = and i32 %a, 252
> + ret i32 %and1
> +}
> +
> +; Function Attrs: nounwind uwtable
> +define i32 @test8(i32 %a) #0 {
> +entry:
> +; CHECK-LABEL: @test8
> +; CHECK: call void @llvm.assume
> +; CHECK: ret i32 20
> +
> + %v = ashr i32 %a, 2
> + %cmp = icmp eq i32 %v, 5
> + tail call void @llvm.assume(i1 %cmp)
> +
> + %and1 = and i32 %a, 252
> + ret i32 %and1
> +}
> +
> +; Function Attrs: nounwind uwtable
> +define i32 @test9(i32 %a) #0 {
> +entry:
> +; CHECK-LABEL: @test9
> +; CHECK: call void @llvm.assume
> +; CHECK: ret i32 0
> +
> + %cmp = icmp sgt i32 %a, 5
> + tail call void @llvm.assume(i1 %cmp)
> +
> + %and1 = and i32 %a, 2147483648
> + ret i32 %and1
> +}
> +
> +; Function Attrs: nounwind uwtable
> +define i32 @test10(i32 %a) #0 {
> +entry:
> +; CHECK-LABEL: @test10
> +; CHECK: call void @llvm.assume
> +; CHECK: ret i32 -2147483648
> +
> + %cmp = icmp sle i32 %a, -2
> + tail call void @llvm.assume(i1 %cmp)
> +
> + %and1 = and i32 %a, 2147483648
> + ret i32 %and1
> +}
> +
> +; Function Attrs: nounwind uwtable
> +define i32 @test11(i32 %a) #0 {
> +entry:
> +; CHECK-LABEL: @test11
> +; CHECK: call void @llvm.assume
> +; CHECK: ret i32 0
> +
> + %cmp = icmp ule i32 %a, 256
> + tail call void @llvm.assume(i1 %cmp)
> +
> + %and1 = and i32 %a, 3072
> + ret i32 %and1
> +}
> +
> +attributes #0 = { nounwind uwtable }
> +attributes #1 = { nounwind }
> +
> 
> 
> _______________________________________________
> llvm-commits mailing list
> llvm-commits at cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits
> 
> 

-- 
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory


