[llvm] 352fef3 - [InstCombine] Negator - sink sinkable negations
Kostya Serebryany via llvm-commits
llvm-commits at lists.llvm.org
Thu Apr 23 13:32:20 PDT 2020
Ping. Shall I revert the change?
On Wed, Apr 22, 2020 at 4:41 PM Kostya Serebryany <kcc at google.com> wrote:
> Hi,
>
> This change causes our build bots to go red, indicating a potential
> performance regression.
> I've provided details in https://reviews.llvm.org/D68408
> Please take a look.
>
> --kcc
>
> On Tue, Apr 21, 2020 at 12:00 PM Roman Lebedev via llvm-commits <
> llvm-commits at lists.llvm.org> wrote:
>
>>
>> Author: Roman Lebedev
>> Date: 2020-04-21T22:00:23+03:00
>> New Revision: 352fef3f11f5ccb2ddc8e16cecb7302a54721e9f
>>
>> URL:
>> https://github.com/llvm/llvm-project/commit/352fef3f11f5ccb2ddc8e16cecb7302a54721e9f
>> DIFF:
>> https://github.com/llvm/llvm-project/commit/352fef3f11f5ccb2ddc8e16cecb7302a54721e9f.diff
>>
>> LOG: [InstCombine] Negator - sink sinkable negations
>>
>> Summary:
>> As we have discussed previously (e.g. in D63992 / D64090 / [[
>> https://bugs.llvm.org/show_bug.cgi?id=42457 | PR42457 ]]), the `sub`
>> instruction can almost be considered non-canonical. While we do convert
>> `sub %x, C` -> `add %x, -C`, we only sparsely do that for non-constants.
>> But we should.
>>
>> Here, I propose to interpret `sub %x, %y` as `add (sub 0, %y), %x` iff
>> the negation can be sunk into `%y`.
>>
>> This has some potential to cause endless combine loops (either around
>> PHIs, or if there are some opposite transforms).
>> For the former, there is the `-instcombine-negator-max-depth` option to
>> mitigate it, should this expose any such issues.
>> For the latter, if there are still any such opposing folds, we would
>> need to remove the colliding fold.
>> In any case, reproducers are welcome!
>>
>> Reviewers: spatel, nikic, efriedma, xbolva00
>>
>> Reviewed By: spatel
>>
>> Subscribers: xbolva00, mgorny, hiraditya, reames, llvm-commits
>>
>> Tags: #llvm
>>
>> Differential Revision: https://reviews.llvm.org/D68408
>>
>> Added:
>> llvm/lib/Transforms/InstCombine/InstCombineNegator.cpp
>>
>> Modified:
>> llvm/lib/Transforms/InstCombine/CMakeLists.txt
>> llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
>> llvm/lib/Transforms/InstCombine/InstCombineInternal.h
>> llvm/lib/Transforms/InstCombine/InstructionCombining.cpp
>> llvm/test/Transforms/InstCombine/and-or-icmps.ll
>> llvm/test/Transforms/InstCombine/fold-sub-of-not-to-inc-of-add.ll
>> llvm/test/Transforms/InstCombine/high-bit-signmask-with-trunc.ll
>> llvm/test/Transforms/InstCombine/high-bit-signmask.ll
>> llvm/test/Transforms/InstCombine/sub-of-negatible.ll
>> llvm/test/Transforms/InstCombine/sub.ll
>> llvm/test/Transforms/InstCombine/zext-bool-add-sub.ll
>>
>> Removed:
>>
>>
>>
>>
>> ################################################################################
>> diff --git a/llvm/lib/Transforms/InstCombine/CMakeLists.txt
>> b/llvm/lib/Transforms/InstCombine/CMakeLists.txt
>> index 2f19882c3316..1a34f22f064f 100644
>> --- a/llvm/lib/Transforms/InstCombine/CMakeLists.txt
>> +++ b/llvm/lib/Transforms/InstCombine/CMakeLists.txt
>> @@ -12,6 +12,7 @@ add_llvm_component_library(LLVMInstCombine
>> InstCombineCompares.cpp
>> InstCombineLoadStoreAlloca.cpp
>> InstCombineMulDivRem.cpp
>> + InstCombineNegator.cpp
>> InstCombinePHI.cpp
>> InstCombineSelect.cpp
>> InstCombineShifts.cpp
>>
>> diff --git a/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
>> b/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
>> index 7ca287f07a11..16666fe9430e 100644
>> --- a/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
>> +++ b/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
>> @@ -1682,12 +1682,10 @@ Instruction *InstCombiner::visitSub(BinaryOperator &I) {
>> if (Instruction *X = foldVectorBinop(I))
>> return X;
>>
>> - // (A*B)-(A*C) -> A*(B-C) etc
>> - if (Value *V = SimplifyUsingDistributiveLaws(I))
>> - return replaceInstUsesWith(I, V);
>> + Value *Op0 = I.getOperand(0), *Op1 = I.getOperand(1);
>>
>> // If this is a 'B = x-(-A)', change to B = x+A.
>> - Value *Op0 = I.getOperand(0), *Op1 = I.getOperand(1);
>> + // We deal with this without involving Negator to preserve NSW flag.
>> if (Value *V = dyn_castNegVal(Op1)) {
>> BinaryOperator *Res = BinaryOperator::CreateAdd(Op0, V);
>>
>> @@ -1704,6 +1702,45 @@ Instruction *InstCombiner::visitSub(BinaryOperator &I) {
>> return Res;
>> }
>>
>> + auto TryToNarrowDeduceFlags = [this, &I, &Op0, &Op1]() -> Instruction * {
>> + if (Instruction *Ext = narrowMathIfNoOverflow(I))
>> + return Ext;
>> +
>> + bool Changed = false;
>> + if (!I.hasNoSignedWrap() && willNotOverflowSignedSub(Op0, Op1, I)) {
>> + Changed = true;
>> + I.setHasNoSignedWrap(true);
>> + }
>> + if (!I.hasNoUnsignedWrap() && willNotOverflowUnsignedSub(Op0, Op1, I)) {
>> + Changed = true;
>> + I.setHasNoUnsignedWrap(true);
>> + }
>> +
>> + return Changed ? &I : nullptr;
>> + };
>> +
>> + // First, let's try to interpret `sub a, b` as `add a, (sub 0, b)`,
>> + // and let's try to sink `(sub 0, b)` into `b` itself. But only if this isn't
>> + // a pure negation used by a select that looks like abs/nabs.
>> + bool IsNegation = match(Op0, m_ZeroInt());
>> + if (!IsNegation || none_of(I.users(), [&I, Op1](const User *U) {
>> + const Instruction *UI = dyn_cast<Instruction>(U);
>> + if (!UI)
>> + return false;
>> + return match(UI,
>> + m_Select(m_Value(), m_Specific(Op1), m_Specific(&I))) ||
>> + match(UI, m_Select(m_Value(), m_Specific(&I), m_Specific(Op1)));
>> + })) {
>> + if (Value *NegOp1 = Negator::Negate(IsNegation, Op1, *this))
>> + return BinaryOperator::CreateAdd(NegOp1, Op0);
>> + }
>> + if (IsNegation)
>> + return TryToNarrowDeduceFlags(); // Should have been handled in Negator!
>> +
>> + // (A*B)-(A*C) -> A*(B-C) etc
>> + if (Value *V = SimplifyUsingDistributiveLaws(I))
>> + return replaceInstUsesWith(I, V);
>> +
>> if (I.getType()->isIntOrIntVectorTy(1))
>> return BinaryOperator::CreateXor(Op0, Op1);
>>
>> @@ -1720,22 +1757,7 @@ Instruction *InstCombiner::visitSub(BinaryOperator &I) {
>> if (match(Op0, m_OneUse(m_Add(m_Value(X), m_AllOnes()))))
>> return BinaryOperator::CreateAdd(Builder.CreateNot(Op1), X);
>>
>> - // Y - (X + 1) --> ~X + Y
>> - if (match(Op1, m_OneUse(m_Add(m_Value(X), m_One()))))
>> - return BinaryOperator::CreateAdd(Builder.CreateNot(X), Op0);
>> -
>> - // Y - ~X --> (X + 1) + Y
>> - if (match(Op1, m_OneUse(m_Not(m_Value(X))))) {
>> - return BinaryOperator::CreateAdd(
>> - Builder.CreateAdd(Op0, ConstantInt::get(I.getType(), 1)), X);
>> - }
>> -
>> if (Constant *C = dyn_cast<Constant>(Op0)) {
>> - // -f(x) -> f(-x) if possible.
>> - if (match(C, m_Zero()))
>> - if (Value *Neg = freelyNegateValue(Op1))
>> - return replaceInstUsesWith(I, Neg);
>> -
>> Value *X;
>> if (match(Op1, m_ZExt(m_Value(X))) &&
>> X->getType()->isIntOrIntVectorTy(1))
>> // C - (zext bool) --> bool ? C - 1 : C
>> @@ -1770,26 +1792,12 @@ Instruction *InstCombiner::visitSub(BinaryOperator &I) {
>> }
>>
>> const APInt *Op0C;
>> - if (match(Op0, m_APInt(Op0C))) {
>> - if (Op0C->isNullValue() && Op1->hasOneUse()) {
>> - Value *LHS, *RHS;
>> - SelectPatternFlavor SPF = matchSelectPattern(Op1, LHS, RHS).Flavor;
>> - if (SPF == SPF_ABS || SPF == SPF_NABS) {
>> - // This is a negate of an ABS/NABS pattern. Just swap the operands
>> - // of the select.
>> - cast<SelectInst>(Op1)->swapValues();
>> - // Don't swap prof metadata, we didn't change the branch behavior.
>> - return replaceInstUsesWith(I, Op1);
>> - }
>> - }
>> -
>> + if (match(Op0, m_APInt(Op0C)) && Op0C->isMask()) {
>> // Turn this into a xor if LHS is 2^n-1 and the remaining bits are known
>> // zero.
>> - if (Op0C->isMask()) {
>> - KnownBits RHSKnown = computeKnownBits(Op1, 0, &I);
>> - if ((*Op0C | RHSKnown.Zero).isAllOnesValue())
>> - return BinaryOperator::CreateXor(Op1, Op0);
>> - }
>> + KnownBits RHSKnown = computeKnownBits(Op1, 0, &I);
>> + if ((*Op0C | RHSKnown.Zero).isAllOnesValue())
>> + return BinaryOperator::CreateXor(Op1, Op0);
>> }
>>
>> {
>> @@ -1919,49 +1927,6 @@ Instruction *InstCombiner::visitSub(BinaryOperator &I) {
>> return BinaryOperator::CreateAnd(
>> Op0, Builder.CreateNot(Y, Y->getName() + ".not"));
>>
>> - if (Op1->hasOneUse()) {
>> - Value *Y = nullptr, *Z = nullptr;
>> - Constant *C = nullptr;
>> -
>> - // (X - (Y - Z)) --> (X + (Z - Y)).
>> - if (match(Op1, m_Sub(m_Value(Y), m_Value(Z))))
>> - return BinaryOperator::CreateAdd(Op0,
>> - Builder.CreateSub(Z, Y, Op1->getName()));
>> -
>> - // Subtracting -1/0 is the same as adding 1/0:
>> - // sub [nsw] Op0, sext(bool Y) -> add [nsw] Op0, zext(bool Y)
>> - // 'nuw' is dropped in favor of the canonical form.
>> - if (match(Op1, m_SExt(m_Value(Y))) &&
>> - Y->getType()->getScalarSizeInBits() == 1) {
>> - Value *Zext = Builder.CreateZExt(Y, I.getType());
>> - BinaryOperator *Add = BinaryOperator::CreateAdd(Op0, Zext);
>> - Add->setHasNoSignedWrap(I.hasNoSignedWrap());
>> - return Add;
>> - }
>> - // sub [nsw] X, zext(bool Y) -> add [nsw] X, sext(bool Y)
>> - // 'nuw' is dropped in favor of the canonical form.
>> - if (match(Op1, m_ZExt(m_Value(Y))) && Y->getType()->isIntOrIntVectorTy(1)) {
>> - Value *Sext = Builder.CreateSExt(Y, I.getType());
>> - BinaryOperator *Add = BinaryOperator::CreateAdd(Op0, Sext);
>> - Add->setHasNoSignedWrap(I.hasNoSignedWrap());
>> - return Add;
>> - }
>> -
>> - // X - A*-B -> X + A*B
>> - // X - -A*B -> X + A*B
>> - Value *A, *B;
>> - if (match(Op1, m_c_Mul(m_Value(A), m_Neg(m_Value(B)))))
>> - return BinaryOperator::CreateAdd(Op0, Builder.CreateMul(A, B));
>> -
>> - // X - A*C -> X + A*-C
>> - // No need to handle commuted multiply because multiply handling will
>> - // ensure constant will be move to the right hand side.
>> - if (match(Op1, m_Mul(m_Value(A), m_Constant(C))) && !isa<ConstantExpr>(C)) {
>> - Value *NewMul = Builder.CreateMul(A, ConstantExpr::getNeg(C));
>> - return BinaryOperator::CreateAdd(Op0, NewMul);
>> - }
>> - }
>> -
>> {
>> // ~A - Min/Max(~A, O) -> Max/Min(A, ~O) - A
>> // ~A - Min/Max(O, ~A) -> Max/Min(A, ~O) - A
>> @@ -2036,20 +2001,7 @@ Instruction *InstCombiner::visitSub(BinaryOperator &I) {
>>
>> canonicalizeCondSignextOfHighBitExtractToSignextHighBitExtract(I))
>> return V;
>>
>> - if (Instruction *Ext = narrowMathIfNoOverflow(I))
>> - return Ext;
>> -
>> - bool Changed = false;
>> - if (!I.hasNoSignedWrap() && willNotOverflowSignedSub(Op0, Op1, I)) {
>> - Changed = true;
>> - I.setHasNoSignedWrap(true);
>> - }
>> - if (!I.hasNoUnsignedWrap() && willNotOverflowUnsignedSub(Op0, Op1, I)) {
>> - Changed = true;
>> - I.setHasNoUnsignedWrap(true);
>> - }
>> -
>> - return Changed ? &I : nullptr;
>> + return TryToNarrowDeduceFlags();
>> }
>>
>> /// This eliminates floating-point negation in either 'fneg(X)' or
>>
>> diff --git a/llvm/lib/Transforms/InstCombine/InstCombineInternal.h
>> b/llvm/lib/Transforms/InstCombine/InstCombineInternal.h
>> index 6544fd4ee8da..a908349eaff1 100644
>> --- a/llvm/lib/Transforms/InstCombine/InstCombineInternal.h
>> +++ b/llvm/lib/Transforms/InstCombine/InstCombineInternal.h
>> @@ -16,6 +16,7 @@
>> #define LLVM_LIB_TRANSFORMS_INSTCOMBINE_INSTCOMBINEINTERNAL_H
>>
>> #include "llvm/ADT/ArrayRef.h"
>> +#include "llvm/ADT/Statistic.h"
>> #include "llvm/Analysis/AliasAnalysis.h"
>> #include "llvm/Analysis/InstructionSimplify.h"
>> #include "llvm/Analysis/TargetFolder.h"
>> @@ -471,7 +472,6 @@ class LLVM_LIBRARY_VISIBILITY InstCombiner
>> bool shouldChangeType(unsigned FromBitWidth, unsigned ToBitWidth) const;
>> bool shouldChangeType(Type *From, Type *To) const;
>> Value *dyn_castNegVal(Value *V) const;
>> - Value *freelyNegateValue(Value *V);
>> Type *FindElementAtOffset(PointerType *PtrTy, int64_t Offset,
>> SmallVectorImpl<Value *> &NewIndices);
>>
>> @@ -513,7 +513,7 @@ class LLVM_LIBRARY_VISIBILITY InstCombiner
>> Instruction *simplifyMaskedStore(IntrinsicInst &II);
>> Instruction *simplifyMaskedGather(IntrinsicInst &II);
>> Instruction *simplifyMaskedScatter(IntrinsicInst &II);
>> -
>> +
>> /// Transform (zext icmp) to bitwise / integer operations in order to
>> /// eliminate it.
>> ///
>> @@ -1014,6 +1014,55 @@ class LLVM_LIBRARY_VISIBILITY InstCombiner
>> Value *Descale(Value *Val, APInt Scale, bool &NoSignedWrap);
>> };
>>
>> +namespace {
>> +
>> +// As a default, let's assume that we want to be aggressive,
>> +// and attempt to traverse with no limits in attempt to sink negation.
>> +static constexpr unsigned NegatorDefaultMaxDepth = ~0U;
>> +
>> +// Let's guesstimate that most often we will end up visiting/producing
>> +// fairly small number of new instructions.
>> +static constexpr unsigned NegatorMaxNodesSSO = 16;
>> +
>> +} // namespace
>> +
>> +class Negator final {
>> + /// Top-to-bottom, def-to-use negated instruction tree we produced.
>> + SmallVector<Instruction *, NegatorMaxNodesSSO> NewInstructions;
>> +
>> + using BuilderTy = IRBuilder<TargetFolder, IRBuilderCallbackInserter>;
>> + BuilderTy Builder;
>> +
>> + const bool IsTrulyNegation;
>> +
>> + Negator(LLVMContext &C, const DataLayout &DL, bool IsTrulyNegation);
>> +
>> +#if LLVM_ENABLE_STATS
>> + unsigned NumValuesVisitedInThisNegator = 0;
>> + ~Negator();
>> +#endif
>> +
>> + using Result = std::pair<ArrayRef<Instruction *> /*NewInstructions*/,
>> + Value * /*NegatedRoot*/>;
>> +
>> + LLVM_NODISCARD Value *visit(Value *V, unsigned Depth);
>> +
>> + /// Recurse depth-first and attempt to sink the negation.
>> + /// FIXME: use worklist?
>> + LLVM_NODISCARD Optional<Result> run(Value *Root);
>> +
>> + Negator(const Negator &) = delete;
>> + Negator(Negator &&) = delete;
>> + Negator &operator=(const Negator &) = delete;
>> + Negator &operator=(Negator &&) = delete;
>> +
>> +public:
>> + /// Attempt to negate \p Root. Returns nullptr if negation can't be performed,
>> + /// otherwise returns negated value.
>> + LLVM_NODISCARD static Value *Negate(bool LHSIsZero, Value *Root,
>> + InstCombiner &IC);
>> +};
>> +
>> } // end namespace llvm
>>
>> #undef DEBUG_TYPE
>>
>> diff --git a/llvm/lib/Transforms/InstCombine/InstCombineNegator.cpp
>> b/llvm/lib/Transforms/InstCombine/InstCombineNegator.cpp
>> new file mode 100644
>> index 000000000000..2655ef304787
>> --- /dev/null
>> +++ b/llvm/lib/Transforms/InstCombine/InstCombineNegator.cpp
>> @@ -0,0 +1,377 @@
>> +//===- InstCombineNegator.cpp -----------------------------------*- C++ -*-===//
>> +//
>> +// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
>> +// See https://llvm.org/LICENSE.txt for license information.
>> +// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
>> +//
>> +//===----------------------------------------------------------------------===//
>> +//
>> +// This file implements sinking of negation into expression trees,
>> +// as long as that can be done without increasing instruction count.
>> +//
>> +//===----------------------------------------------------------------------===//
>> +
>> +#include "InstCombineInternal.h"
>> +#include "llvm/ADT/APInt.h"
>> +#include "llvm/ADT/ArrayRef.h"
>> +#include "llvm/ADT/None.h"
>> +#include "llvm/ADT/Optional.h"
>> +#include "llvm/ADT/STLExtras.h"
>> +#include "llvm/ADT/SmallVector.h"
>> +#include "llvm/ADT/Statistic.h"
>> +#include "llvm/ADT/StringRef.h"
>> +#include "llvm/ADT/Twine.h"
>> +#include "llvm/ADT/iterator_range.h"
>> +#include "llvm/Analysis/TargetFolder.h"
>> +#include "llvm/Analysis/ValueTracking.h"
>> +#include "llvm/IR/Constant.h"
>> +#include "llvm/IR/Constants.h"
>> +#include "llvm/IR/DebugLoc.h"
>> +#include "llvm/IR/DerivedTypes.h"
>> +#include "llvm/IR/IRBuilder.h"
>> +#include "llvm/IR/Instruction.h"
>> +#include "llvm/IR/Instructions.h"
>> +#include "llvm/IR/PatternMatch.h"
>> +#include "llvm/IR/Type.h"
>> +#include "llvm/IR/Use.h"
>> +#include "llvm/IR/User.h"
>> +#include "llvm/IR/Value.h"
>> +#include "llvm/Support/Casting.h"
>> +#include "llvm/Support/CommandLine.h"
>> +#include "llvm/Support/Compiler.h"
>> +#include "llvm/Support/DebugCounter.h"
>> +#include "llvm/Support/ErrorHandling.h"
>> +#include "llvm/Support/raw_ostream.h"
>> +#include <functional>
>> +#include <tuple>
>> +#include <utility>
>> +
>> +using namespace llvm;
>> +
>> +#define DEBUG_TYPE "instcombine"
>> +
>> +STATISTIC(NegatorTotalNegationsAttempted,
>> + "Negator: Number of negations attempted to be sinked");
>> +STATISTIC(NegatorNumTreesNegated,
>> + "Negator: Number of negations successfully sinked");
>> +STATISTIC(NegatorMaxDepthVisited, "Negator: Maximal traversal depth ever "
>> + "reached while attempting to sink negation");
>> +STATISTIC(NegatorTimesDepthLimitReached,
>> + "Negator: How many times did the traversal depth limit was reached "
>> + "during sinking");
>> +STATISTIC(
>> + NegatorNumValuesVisited,
>> + "Negator: Total number of values visited during attempts to sink negation");
>> +STATISTIC(NegatorMaxTotalValuesVisited,
>> + "Negator: Maximal number of values ever visited while attempting to "
>> + "sink negation");
>> +STATISTIC(NegatorNumInstructionsCreatedTotal,
>> + "Negator: Number of new negated instructions created, total");
>> +STATISTIC(NegatorMaxInstructionsCreated,
>> + "Negator: Maximal number of new instructions created during negation "
>> + "attempt");
>> +STATISTIC(NegatorNumInstructionsNegatedSuccess,
>> + "Negator: Number of new negated instructions created in successful "
>> + "negation sinking attempts");
>> +
>> +DEBUG_COUNTER(NegatorCounter, "instcombine-negator",
>> + "Controls Negator transformations in InstCombine pass");
>> +
>> +static cl::opt<bool>
>> + NegatorEnabled("instcombine-negator-enabled", cl::init(true),
>> + cl::desc("Should we attempt to sink negations?"));
>> +
>> +static cl::opt<unsigned>
>> + NegatorMaxDepth("instcombine-negator-max-depth",
>> + cl::init(NegatorDefaultMaxDepth),
>> + cl::desc("What is the maximal lookup depth when trying to "
>> + "check for viability of negation sinking."));
>> +
>> +Negator::Negator(LLVMContext &C, const DataLayout &DL, bool IsTrulyNegation_)
>> + : Builder(C, TargetFolder(DL),
>> + IRBuilderCallbackInserter([&](Instruction *I) {
>> + ++NegatorNumInstructionsCreatedTotal;
>> + NewInstructions.push_back(I);
>> + })),
>> + IsTrulyNegation(IsTrulyNegation_) {}
>> +
>> +#if LLVM_ENABLE_STATS
>> +Negator::~Negator() {
>> + NegatorMaxTotalValuesVisited.updateMax(NumValuesVisitedInThisNegator);
>> +}
>> +#endif
>> +
>> +// FIXME: can this be reworked into a worklist-based algorithm while preserving
>> +// the depth-first, early bailout traversal?
>> +LLVM_NODISCARD Value *Negator::visit(Value *V, unsigned Depth) {
>> + NegatorMaxDepthVisited.updateMax(Depth);
>> + ++NegatorNumValuesVisited;
>> +
>> +#if LLVM_ENABLE_STATS
>> + ++NumValuesVisitedInThisNegator;
>> +#endif
>> +
>> + // In i1, negation can simply be ignored.
>> + if (V->getType()->isIntOrIntVectorTy(1))
>> + return V;
>> +
>> + Value *X;
>> +
>> + // -(-(X)) -> X.
>> + if (match(V, m_Neg(m_Value(X))))
>> + return X;
>> +
>> + // Integral constants can be freely negated.
>> + if (match(V, m_AnyIntegralConstant()))
>> + return ConstantExpr::getNeg(cast<Constant>(V), /*HasNUW=*/false,
>> + /*HasNSW=*/false);
>> +
>> + // If we have a non-instruction, then give up.
>> + if (!isa<Instruction>(V))
>> + return nullptr;
>> +
>> + // If we have started with a true negation (i.e. `sub 0, %y`), then if we've
>> + // got instruction that does not require recursive reasoning, we can still
>> + // negate it even if it has other uses, without increasing instruction count.
>> + if (!V->hasOneUse() && !IsTrulyNegation)
>> + return nullptr;
>> +
>> + auto *I = cast<Instruction>(V);
>> + unsigned BitWidth = I->getType()->getScalarSizeInBits();
>> +
>> + // We must preserve the insertion point and debug info that is set in the
>> + // builder at the time this function is called.
>> + InstCombiner::BuilderTy::InsertPointGuard Guard(Builder);
>> + // And since we are trying to negate instruction I, that tells us about the
>> + // insertion point and the debug info that we need to keep.
>> + Builder.SetInsertPoint(I);
>> +
>> + // In some cases we can give the answer without further recursion.
>> + switch (I->getOpcode()) {
>> + case Instruction::Sub:
>> + // `sub` is always negatible.
>> + return Builder.CreateSub(I->getOperand(1), I->getOperand(0),
>> + I->getName() + ".neg");
>> + case Instruction::Add:
>> + // `inc` is always negatible.
>> + if (match(I->getOperand(1), m_One()))
>> + return Builder.CreateNot(I->getOperand(0), I->getName() + ".neg");
>> + break;
>> + case Instruction::Xor:
>> + // `not` is always negatible.
>> + if (match(I, m_Not(m_Value(X))))
>> + return Builder.CreateAdd(X, ConstantInt::get(X->getType(), 1),
>> + I->getName() + ".neg");
>> + break;
>> + case Instruction::AShr:
>> + case Instruction::LShr: {
>> + // Right-shift sign bit smear is negatible.
>> + const APInt *Op1Val;
>> + if (match(I->getOperand(1), m_APInt(Op1Val)) && *Op1Val == BitWidth - 1) {
>> + Value *BO = I->getOpcode() == Instruction::AShr
>> + ? Builder.CreateLShr(I->getOperand(0), I->getOperand(1))
>> + : Builder.CreateAShr(I->getOperand(0), I->getOperand(1));
>> + if (auto *NewInstr = dyn_cast<Instruction>(BO)) {
>> + NewInstr->copyIRFlags(I);
>> + NewInstr->setName(I->getName() + ".neg");
>> + }
>> + return BO;
>> + }
>> + break;
>> + }
>> + case Instruction::SDiv:
>> + // `sdiv` is negatible if divisor is not undef/INT_MIN/1.
>> + // While this is normally not behind a use-check,
>> + // let's consider division to be special since it's costly.
>> + if (!I->hasOneUse())
>> + break;
>> + if (auto *Op1C = dyn_cast<Constant>(I->getOperand(1))) {
>> + if (!Op1C->containsUndefElement() && Op1C->isNotMinSignedValue() &&
>> + Op1C->isNotOneValue()) {
>> + Value *BO =
>> + Builder.CreateSDiv(I->getOperand(0), ConstantExpr::getNeg(Op1C),
>> + I->getName() + ".neg");
>> + if (auto *NewInstr = dyn_cast<Instruction>(BO))
>> + NewInstr->setIsExact(I->isExact());
>> + return BO;
>> + }
>> + }
>> + break;
>> + case Instruction::SExt:
>> + case Instruction::ZExt:
>> + // `*ext` of i1 is always negatible
>> + if (I->getOperand(0)->getType()->isIntOrIntVectorTy(1))
>> + return I->getOpcode() == Instruction::SExt
>> + ? Builder.CreateZExt(I->getOperand(0), I->getType(),
>> + I->getName() + ".neg")
>> + : Builder.CreateSExt(I->getOperand(0), I->getType(),
>> + I->getName() + ".neg");
>> + break;
>> + default:
>> + break; // Other instructions require recursive reasoning.
>> + }
>> +
>> + // Rest of the logic is recursive, and if either the current instruction
>> + // has other uses or if it's time to give up then it's time.
>> + if (!V->hasOneUse())
>> + return nullptr;
>> + if (Depth > NegatorMaxDepth) {
>> + LLVM_DEBUG(dbgs() << "Negator: reached maximal allowed traversal depth in "
>> + << *V << ". Giving up.\n");
>> + ++NegatorTimesDepthLimitReached;
>> + return nullptr;
>> + }
>> +
>> + switch (I->getOpcode()) {
>> + case Instruction::PHI: {
>> + // `phi` is negatible if all the incoming values are negatible.
>> + PHINode *PHI = cast<PHINode>(I);
>> + SmallVector<Value *, 4> NegatedIncomingValues(PHI->getNumOperands());
>> + for (auto I : zip(PHI->incoming_values(), NegatedIncomingValues)) {
>> + if (!(std::get<1>(I) = visit(std::get<0>(I), Depth + 1))) // Early return.
>> + return nullptr;
>> + }
>> + // All incoming values are indeed negatible. Create negated PHI node.
>> + PHINode *NegatedPHI = Builder.CreatePHI(
>> + PHI->getType(), PHI->getNumOperands(), PHI->getName() + ".neg");
>> + for (auto I : zip(NegatedIncomingValues, PHI->blocks()))
>> + NegatedPHI->addIncoming(std::get<0>(I), std::get<1>(I));
>> + return NegatedPHI;
>> + }
>> + case Instruction::Select: {
>> + {
>> + // `abs`/`nabs` is always negatible.
>> + Value *LHS, *RHS;
>> + SelectPatternFlavor SPF =
>> + matchSelectPattern(I, LHS, RHS, /*CastOp=*/nullptr, Depth).Flavor;
>> + if (SPF == SPF_ABS || SPF == SPF_NABS) {
>> + auto *NewSelect = cast<SelectInst>(I->clone());
>> + // Just swap the operands of the select.
>> + NewSelect->swapValues();
>> + // Don't swap prof metadata, we didn't change the branch behavior.
>> + NewSelect->setName(I->getName() + ".neg");
>> + Builder.Insert(NewSelect);
>> + return NewSelect;
>> + }
>> + }
>> + // `select` is negatible if both hands of `select` are negatible.
>> + Value *NegOp1 = visit(I->getOperand(1), Depth + 1);
>> + if (!NegOp1) // Early return.
>> + return nullptr;
>> + Value *NegOp2 = visit(I->getOperand(2), Depth + 1);
>> + if (!NegOp2)
>> + return nullptr;
>> + // Do preserve the metadata!
>> + return Builder.CreateSelect(I->getOperand(0), NegOp1, NegOp2,
>> + I->getName() + ".neg", /*MDFrom=*/I);
>> + }
>> + case Instruction::Trunc: {
>> + // `trunc` is negatible if its operand is negatible.
>> + Value *NegOp = visit(I->getOperand(0), Depth + 1);
>> + if (!NegOp) // Early return.
>> + return nullptr;
>> + return Builder.CreateTrunc(NegOp, I->getType(), I->getName() + ".neg");
>> + }
>> + case Instruction::Shl: {
>> + // `shl` is negatible if the first operand is negatible.
>> + Value *NegOp0 = visit(I->getOperand(0), Depth + 1);
>> + if (!NegOp0) // Early return.
>> + return nullptr;
>> + return Builder.CreateShl(NegOp0, I->getOperand(1), I->getName() + ".neg");
>> + }
>> + case Instruction::Add: {
>> + // `add` is negatible if both of its operands are negatible.
>> + Value *NegOp0 = visit(I->getOperand(0), Depth + 1);
>> + if (!NegOp0) // Early return.
>> + return nullptr;
>> + Value *NegOp1 = visit(I->getOperand(1), Depth + 1);
>> + if (!NegOp1)
>> + return nullptr;
>> + return Builder.CreateAdd(NegOp0, NegOp1, I->getName() + ".neg");
>> + }
>> + case Instruction::Xor:
>> + // `xor` is negatible if one of its operands is invertible.
>> + // FIXME: InstCombineInverter? But how to connect Inverter and Negator?
>> + if (auto *C = dyn_cast<Constant>(I->getOperand(1))) {
>> + Value *Xor = Builder.CreateXor(I->getOperand(0), ConstantExpr::getNot(C));
>> + return Builder.CreateAdd(Xor, ConstantInt::get(Xor->getType(), 1),
>> + I->getName() + ".neg");
>> + }
>> + return nullptr;
>> + case Instruction::Mul: {
>> + // `mul` is negatible if one of its operands is negatible.
>> + Value *NegatedOp, *OtherOp;
>> + // First try the second operand, in case it's a constant it will be best to
>> + // just invert it instead of sinking the `neg` deeper.
>> + if (Value *NegOp1 = visit(I->getOperand(1), Depth + 1)) {
>> + NegatedOp = NegOp1;
>> + OtherOp = I->getOperand(0);
>> + } else if (Value *NegOp0 = visit(I->getOperand(0), Depth + 1)) {
>> + NegatedOp = NegOp0;
>> + OtherOp = I->getOperand(1);
>> + } else
>> + // Can't negate either of them.
>> + return nullptr;
>> + return Builder.CreateMul(NegatedOp, OtherOp, I->getName() + ".neg");
>> + }
>> + default:
>> + return nullptr; // Don't know, likely not negatible for free.
>> + }
>> +
>> + llvm_unreachable("Can't get here. We always return from switch.");
>> +};
>> +
>> +LLVM_NODISCARD Optional<Negator::Result> Negator::run(Value *Root) {
>> + Value *Negated = visit(Root, /*Depth=*/0);
>> + if (!Negated) {
>> + // We must cleanup newly-inserted instructions, to avoid any potential
>> + // endless combine looping.
>> + llvm::for_each(llvm::reverse(NewInstructions),
>> + [&](Instruction *I) { I->eraseFromParent(); });
>> + return llvm::None;
>> + }
>> + return std::make_pair(ArrayRef<Instruction *>(NewInstructions), Negated);
>> +};
>> +
>> +LLVM_NODISCARD Value *Negator::Negate(bool LHSIsZero, Value *Root,
>> + InstCombiner &IC) {
>> + ++NegatorTotalNegationsAttempted;
>> + LLVM_DEBUG(dbgs() << "Negator: attempting to sink negation into " << *Root
>> + << "\n");
>> +
>> + if (!NegatorEnabled || !DebugCounter::shouldExecute(NegatorCounter))
>> + return nullptr;
>> +
>> + Negator N(Root->getContext(), IC.getDataLayout(), LHSIsZero);
>> + Optional<Result> Res = N.run(Root);
>> + if (!Res) { // Negation failed.
>> + LLVM_DEBUG(dbgs() << "Negator: failed to sink negation into " << *Root
>> + << "\n");
>> + return nullptr;
>> + }
>> +
>> + LLVM_DEBUG(dbgs() << "Negator: successfully sunk negation into " << *Root
>> + << "\n NEW: " << *Res->second << "\n");
>> + ++NegatorNumTreesNegated;
>> +
>> + // We must temporarily unset the 'current' insertion point and DebugLoc of the
>> + // InstCombine's IRBuilder so that it won't interfere with the ones we have
>> + // already specified when producing negated instructions.
>> + InstCombiner::BuilderTy::InsertPointGuard Guard(IC.Builder);
>> + IC.Builder.ClearInsertionPoint();
>> + IC.Builder.SetCurrentDebugLocation(DebugLoc());
>> +
>> + // And finally, we must add newly-created instructions into the InstCombine's
>> + // worklist (in a proper order!) so it can attempt to combine them.
>> + LLVM_DEBUG(dbgs() << "Negator: Propagating " << Res->first.size()
>> + << " instrs to InstCombine\n");
>> + NegatorMaxInstructionsCreated.updateMax(Res->first.size());
>> + NegatorNumInstructionsNegatedSuccess += Res->first.size();
>> +
>> + // They are in def-use order, so nothing fancy, just insert them in order.
>> + llvm::for_each(Res->first,
>> + [&](Instruction *I) { IC.Builder.Insert(I, I->getName()); });
>> +
>> + // And return the new root.
>> + return Res->second;
>> +};
>>
>> diff --git a/llvm/lib/Transforms/InstCombine/InstructionCombining.cpp
>> b/llvm/lib/Transforms/InstCombine/InstructionCombining.cpp
>> index 4d286f19f919..88cb8d5300b9 100644
>> --- a/llvm/lib/Transforms/InstCombine/InstructionCombining.cpp
>> +++ b/llvm/lib/Transforms/InstCombine/InstructionCombining.cpp
>> @@ -853,120 +853,6 @@ Value *InstCombiner::dyn_castNegVal(Value *V) const {
>> return nullptr;
>> }
>>
>> -/// Get negated V (that is 0-V) without increasing instruction count,
>> -/// assuming that the original V will become unused.
>> -Value *InstCombiner::freelyNegateValue(Value *V) {
>> - if (Value *NegV = dyn_castNegVal(V))
>> - return NegV;
>> -
>> - Instruction *I = dyn_cast<Instruction>(V);
>> - if (!I)
>> - return nullptr;
>> -
>> - unsigned BitWidth = I->getType()->getScalarSizeInBits();
>> - switch (I->getOpcode()) {
>> - // 0-(zext i1 A) => sext i1 A
>> - case Instruction::ZExt:
>> - if (I->getOperand(0)->getType()->isIntOrIntVectorTy(1))
>> - return Builder.CreateSExtOrBitCast(
>> - I->getOperand(0), I->getType(), I->getName() + ".neg");
>> - return nullptr;
>> -
>> - // 0-(sext i1 A) => zext i1 A
>> - case Instruction::SExt:
>> - if (I->getOperand(0)->getType()->isIntOrIntVectorTy(1))
>> - return Builder.CreateZExtOrBitCast(
>> - I->getOperand(0), I->getType(), I->getName() + ".neg");
>> - return nullptr;
>> -
>> - // 0-(A lshr (BW-1)) => A ashr (BW-1)
>> - case Instruction::LShr:
>> - if (match(I->getOperand(1), m_SpecificInt(BitWidth - 1)))
>> - return Builder.CreateAShr(
>> - I->getOperand(0), I->getOperand(1),
>> - I->getName() + ".neg", cast<BinaryOperator>(I)->isExact());
>> - return nullptr;
>> -
>> - // 0-(A ashr (BW-1)) => A lshr (BW-1)
>> - case Instruction::AShr:
>> - if (match(I->getOperand(1), m_SpecificInt(BitWidth - 1)))
>> - return Builder.CreateLShr(
>> - I->getOperand(0), I->getOperand(1),
>> - I->getName() + ".neg", cast<BinaryOperator>(I)->isExact());
>> - return nullptr;
>> -
>> - // Negation is equivalent to bitwise-not + 1.
>> - case Instruction::Xor: {
>> - // Special case for negate of 'not' - replace with increment:
>> - // 0 - (~A) => ((A ^ -1) ^ -1) + 1 => A + 1
>> - Value *A;
>> - if (match(I, m_Not(m_Value(A))))
>> - return Builder.CreateAdd(A, ConstantInt::get(A->getType(), 1),
>> - I->getName() + ".neg");
>> -
>> - // General case xor (not a 'not') requires creating a new xor, so this has a
>> - // one-use limitation:
>> - // 0 - (A ^ C) => ((A ^ C) ^ -1) + 1 => A ^ ~C + 1
>> - Constant *C;
>> - if (match(I, m_OneUse(m_Xor(m_Value(A), m_Constant(C))))) {
>> - Value *Xor = Builder.CreateXor(A, ConstantExpr::getNot(C));
>> - return Builder.CreateAdd(Xor, ConstantInt::get(Xor->getType(), 1),
>> - I->getName() + ".neg");
>> - }
>> - return nullptr;
>> - }
>> -
>> - default:
>> - break;
>> - }
>> -
>> - // TODO: The "sub" pattern below could also be applied without the one-use
>> - // restriction. Not allowing it for now in line with existing behavior.
>> - if (!I->hasOneUse())
>> - return nullptr;
>> -
>> - switch (I->getOpcode()) {
>> - // 0-(A-B) => B-A
>> - case Instruction::Sub:
>> - return Builder.CreateSub(
>> - I->getOperand(1), I->getOperand(0), I->getName() + ".neg");
>> -
>> - // 0-(A sdiv C) => A sdiv (0-C) provided the negation doesn't overflow.
>> - case Instruction::SDiv: {
>> - Constant *C = dyn_cast<Constant>(I->getOperand(1));
>> - if (C && !C->containsUndefElement() && C->isNotMinSignedValue() &&
>> - C->isNotOneValue())
>> - return Builder.CreateSDiv(I->getOperand(0), ConstantExpr::getNeg(C),
>> - I->getName() + ".neg", cast<BinaryOperator>(I)->isExact());
>> - return nullptr;
>> - }
>> -
>> - // 0-(A<<B) => (0-A)<<B
>> - case Instruction::Shl:
>> - if (Value *NegA = freelyNegateValue(I->getOperand(0)))
>> - return Builder.CreateShl(NegA, I->getOperand(1), I->getName() + ".neg");
>> - return nullptr;
>> -
>> - // 0-(trunc A) => trunc (0-A)
>> - case Instruction::Trunc:
>> - if (Value *NegA = freelyNegateValue(I->getOperand(0)))
>> - return Builder.CreateTrunc(NegA, I->getType(), I->getName() + ".neg");
>> - return nullptr;
>> -
>> - // 0-(A*B) => (0-A)*B
>> - // 0-(A*B) => A*(0-B)
>> - case Instruction::Mul:
>> - if (Value *NegA = freelyNegateValue(I->getOperand(0)))
>> - return Builder.CreateMul(NegA, I->getOperand(1), V->getName() + ".neg");
>> - if (Value *NegB = freelyNegateValue(I->getOperand(1)))
>> - return Builder.CreateMul(I->getOperand(0), NegB, V->getName() + ".neg");
>> - return nullptr;
>> -
>> - default:
>> - return nullptr;
>> - }
>> -}
>> -
>> static Value *foldOperationIntoSelectOperand(Instruction &I, Value *SO,
>> InstCombiner::BuilderTy &Builder) {
>> if (auto *Cast = dyn_cast<CastInst>(&I))
>>
>> diff --git a/llvm/test/Transforms/InstCombine/and-or-icmps.ll b/llvm/test/Transforms/InstCombine/and-or-icmps.ll
>> index 175103886ff4..efc22068d9ee 100644
>> --- a/llvm/test/Transforms/InstCombine/and-or-icmps.ll
>> +++ b/llvm/test/Transforms/InstCombine/and-or-icmps.ll
>> @@ -210,12 +210,20 @@ define void @simplify_before_foldAndOfICmps() {
>> ; CHECK-LABEL: @simplify_before_foldAndOfICmps(
>> ; CHECK-NEXT: [[A8:%.*]] = alloca i16, align 2
>> ; CHECK-NEXT: [[L7:%.*]] = load i16, i16* [[A8]], align 2
>> -; CHECK-NEXT: [[C10:%.*]] = icmp ult i16 [[L7]], 2
>> +; CHECK-NEXT: [[TMP1:%.*]] = icmp eq i16 [[L7]], -1
>> +; CHECK-NEXT: [[B11:%.*]] = zext i1 [[TMP1]] to i16
>> +; CHECK-NEXT: [[C10:%.*]] = icmp ugt i16 [[L7]], [[B11]]
>> +; CHECK-NEXT: [[C5:%.*]] = icmp slt i16 [[L7]], 1
>> +; CHECK-NEXT: [[C11:%.*]] = icmp ne i16 [[L7]], 0
>> ; CHECK-NEXT: [[C7:%.*]] = icmp slt i16 [[L7]], 0
>> -; CHECK-NEXT: [[C18:%.*]] = or i1 [[C7]], [[C10]]
>> -; CHECK-NEXT: [[L7_LOBIT:%.*]] = ashr i16 [[L7]], 15
>> -; CHECK-NEXT: [[TMP1:%.*]] = sext i16 [[L7_LOBIT]] to i64
>> -; CHECK-NEXT: [[G26:%.*]] = getelementptr i1, i1* null, i64 [[TMP1]]
>> +; CHECK-NEXT: [[B15:%.*]] = xor i1 [[C7]], [[C10]]
>> +; CHECK-NEXT: [[B19:%.*]] = xor i1 [[C11]], [[B15]]
>> +; CHECK-NEXT: [[TMP2:%.*]] = and i1 [[C10]], [[C5]]
>> +; CHECK-NEXT: [[C3:%.*]] = and i1 [[TMP2]], [[B19]]
>> +; CHECK-NEXT: [[TMP3:%.*]] = xor i1 [[C10]], true
>> +; CHECK-NEXT: [[C18:%.*]] = or i1 [[C7]], [[TMP3]]
>> +; CHECK-NEXT: [[TMP4:%.*]] = sext i1 [[C3]] to i64
>> +; CHECK-NEXT: [[G26:%.*]] = getelementptr i1, i1* null, i64 [[TMP4]]
>> ; CHECK-NEXT: store i16 [[L7]], i16* undef, align 2
>> ; CHECK-NEXT: store i1 [[C18]], i1* undef, align 1
>> ; CHECK-NEXT: store i1* [[G26]], i1** undef, align 8
>>
>> diff --git a/llvm/test/Transforms/InstCombine/fold-sub-of-not-to-inc-of-add.ll b/llvm/test/Transforms/InstCombine/fold-sub-of-not-to-inc-of-add.ll
>> index cf9d051d125b..49d4ec43768b 100644
>> --- a/llvm/test/Transforms/InstCombine/fold-sub-of-not-to-inc-of-add.ll
>> +++ b/llvm/test/Transforms/InstCombine/fold-sub-of-not-to-inc-of-add.ll
>> @@ -13,8 +13,8 @@
>>
>> define i32 @p0_scalar(i32 %x, i32 %y) {
>> ; CHECK-LABEL: @p0_scalar(
>> -; CHECK-NEXT: [[TMP1:%.*]] = add i32 [[Y:%.*]], 1
>> -; CHECK-NEXT: [[T1:%.*]] = add i32 [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = add i32 [[X:%.*]], 1
>> +; CHECK-NEXT: [[T1:%.*]] = add i32 [[T0_NEG]], [[Y:%.*]]
>> ; CHECK-NEXT: ret i32 [[T1]]
>> ;
>> %t0 = xor i32 %x, -1
>> @@ -28,8 +28,8 @@ define i32 @p0_scalar(i32 %x, i32 %y) {
>>
>> define <4 x i32> @p1_vector_splat(<4 x i32> %x, <4 x i32> %y) {
>> ; CHECK-LABEL: @p1_vector_splat(
>> -; CHECK-NEXT: [[TMP1:%.*]] = add <4 x i32> [[Y:%.*]], <i32 1, i32 1, i32 1, i32 1>
>> -; CHECK-NEXT: [[T1:%.*]] = add <4 x i32> [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = add <4 x i32> [[X:%.*]], <i32 1, i32 1, i32 1, i32 1>
>> +; CHECK-NEXT: [[T1:%.*]] = add <4 x i32> [[T0_NEG]], [[Y:%.*]]
>> ; CHECK-NEXT: ret <4 x i32> [[T1]]
>> ;
>> %t0 = xor <4 x i32> %x, <i32 -1, i32 -1, i32 -1, i32 -1>
>> @@ -39,8 +39,8 @@ define <4 x i32> @p1_vector_splat(<4 x i32> %x, <4 x i32> %y) {
>>
>> define <4 x i32> @p2_vector_undef(<4 x i32> %x, <4 x i32> %y) {
>> ; CHECK-LABEL: @p2_vector_undef(
>> -; CHECK-NEXT: [[TMP1:%.*]] = add <4 x i32> [[Y:%.*]], <i32 1, i32 1, i32 1, i32 1>
>> -; CHECK-NEXT: [[T1:%.*]] = add <4 x i32> [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = add <4 x i32> [[X:%.*]], <i32 1, i32 1, i32 1, i32 1>
>> +; CHECK-NEXT: [[T1:%.*]] = add <4 x i32> [[T0_NEG]], [[Y:%.*]]
>> ; CHECK-NEXT: ret <4 x i32> [[T1]]
>> ;
>> %t0 = xor <4 x i32> %x, <i32 -1, i32 -1, i32 undef, i32 -1>
>> @@ -85,8 +85,8 @@ define i32 @n4(i32 %x, i32 %y) {
>>
>> define i32 @n5_is_not_not(i32 %x, i32 %y) {
>> ; CHECK-LABEL: @n5_is_not_not(
>> -; CHECK-NEXT: [[T0:%.*]] = xor i32 [[X:%.*]], 2147483647
>> -; CHECK-NEXT: [[T1:%.*]] = sub i32 [[Y:%.*]], [[T0]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = add i32 [[X:%.*]], -2147483647
>> +; CHECK-NEXT: [[T1:%.*]] = add i32 [[T0_NEG]], [[Y:%.*]]
>> ; CHECK-NEXT: ret i32 [[T1]]
>> ;
>> %t0 = xor i32 %x, 2147483647 ; not -1
>>
>> diff --git a/llvm/test/Transforms/InstCombine/high-bit-signmask-with-trunc.ll b/llvm/test/Transforms/InstCombine/high-bit-signmask-with-trunc.ll
>> index 034c285dd1ff..b9bce627d2ab 100644
>> --- a/llvm/test/Transforms/InstCombine/high-bit-signmask-with-trunc.ll
>> +++ b/llvm/test/Transforms/InstCombine/high-bit-signmask-with-trunc.ll
>> @@ -3,9 +3,9 @@
>>
>> define i32 @t0(i64 %x) {
>> ; CHECK-LABEL: @t0(
>> -; CHECK-NEXT: [[TMP1:%.*]] = ashr i64 [[X:%.*]], 63
>> -; CHECK-NEXT: [[R:%.*]] = trunc i64 [[TMP1]] to i32
>> -; CHECK-NEXT: ret i32 [[R]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = ashr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[T1_NEG:%.*]] = trunc i64 [[T0_NEG]] to i32
>> +; CHECK-NEXT: ret i32 [[T1_NEG]]
>> ;
>> %t0 = lshr i64 %x, 63
>> %t1 = trunc i64 %t0 to i32
>> @@ -14,9 +14,9 @@ define i32 @t0(i64 %x) {
>> }
>> define i32 @t1_exact(i64 %x) {
>> ; CHECK-LABEL: @t1_exact(
>> -; CHECK-NEXT: [[TMP1:%.*]] = ashr exact i64 [[X:%.*]], 63
>> -; CHECK-NEXT: [[R:%.*]] = trunc i64 [[TMP1]] to i32
>> -; CHECK-NEXT: ret i32 [[R]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = ashr exact i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[T1_NEG:%.*]] = trunc i64 [[T0_NEG]] to i32
>> +; CHECK-NEXT: ret i32 [[T1_NEG]]
>> ;
>> %t0 = lshr exact i64 %x, 63
>> %t1 = trunc i64 %t0 to i32
>> @@ -25,9 +25,9 @@ define i32 @t1_exact(i64 %x) {
>> }
>> define i32 @t2(i64 %x) {
>> ; CHECK-LABEL: @t2(
>> -; CHECK-NEXT: [[TMP1:%.*]] = lshr i64 [[X:%.*]], 63
>> -; CHECK-NEXT: [[R:%.*]] = trunc i64 [[TMP1]] to i32
>> -; CHECK-NEXT: ret i32 [[R]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = lshr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[T1_NEG:%.*]] = trunc i64 [[T0_NEG]] to i32
>> +; CHECK-NEXT: ret i32 [[T1_NEG]]
>> ;
>> %t0 = ashr i64 %x, 63
>> %t1 = trunc i64 %t0 to i32
>> @@ -36,9 +36,9 @@ define i32 @t2(i64 %x) {
>> }
>> define i32 @t3_exact(i64 %x) {
>> ; CHECK-LABEL: @t3_exact(
>> -; CHECK-NEXT: [[TMP1:%.*]] = lshr exact i64 [[X:%.*]], 63
>> -; CHECK-NEXT: [[R:%.*]] = trunc i64 [[TMP1]] to i32
>> -; CHECK-NEXT: ret i32 [[R]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = lshr exact i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[T1_NEG:%.*]] = trunc i64 [[T0_NEG]] to i32
>> +; CHECK-NEXT: ret i32 [[T1_NEG]]
>> ;
>> %t0 = ashr exact i64 %x, 63
>> %t1 = trunc i64 %t0 to i32
>> @@ -48,9 +48,9 @@ define i32 @t3_exact(i64 %x) {
>>
>> define <2 x i32> @t4(<2 x i64> %x) {
>> ; CHECK-LABEL: @t4(
>> -; CHECK-NEXT: [[TMP1:%.*]] = ashr <2 x i64> [[X:%.*]], <i64 63, i64 63>
>> -; CHECK-NEXT: [[R:%.*]] = trunc <2 x i64> [[TMP1]] to <2 x i32>
>> -; CHECK-NEXT: ret <2 x i32> [[R]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = ashr <2 x i64> [[X:%.*]], <i64 63, i64 63>
>> +; CHECK-NEXT: [[T1_NEG:%.*]] = trunc <2 x i64> [[T0_NEG]] to <2 x i32>
>> +; CHECK-NEXT: ret <2 x i32> [[T1_NEG]]
>> ;
>> %t0 = lshr <2 x i64> %x, <i64 63, i64 63>
>> %t1 = trunc <2 x i64> %t0 to <2 x i32>
>> @@ -76,11 +76,11 @@ declare void @use32(i32)
>>
>> define i32 @t6(i64 %x) {
>> ; CHECK-LABEL: @t6(
>> -; CHECK-NEXT: [[T0:%.*]] = lshr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = ashr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[T0:%.*]] = lshr i64 [[X]], 63
>> ; CHECK-NEXT: call void @use64(i64 [[T0]])
>> -; CHECK-NEXT: [[TMP1:%.*]] = ashr i64 [[X]], 63
>> -; CHECK-NEXT: [[R:%.*]] = trunc i64 [[TMP1]] to i32
>> -; CHECK-NEXT: ret i32 [[R]]
>> +; CHECK-NEXT: [[T1_NEG:%.*]] = trunc i64 [[T0_NEG]] to i32
>> +; CHECK-NEXT: ret i32 [[T1_NEG]]
>> ;
>> %t0 = lshr i64 %x, 63
>> call void @use64(i64 %t0)
>> @@ -136,9 +136,9 @@ define i32 @n9(i64 %x) {
>>
>> define i32 @n10(i64 %x) {
>> ; CHECK-LABEL: @n10(
>> -; CHECK-NEXT: [[T0:%.*]] = lshr i64 [[X:%.*]], 63
>> -; CHECK-NEXT: [[T1:%.*]] = trunc i64 [[T0]] to i32
>> -; CHECK-NEXT: [[R:%.*]] = xor i32 [[T1]], 1
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = ashr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[T1_NEG:%.*]] = trunc i64 [[T0_NEG]] to i32
>> +; CHECK-NEXT: [[R:%.*]] = add i32 [[T1_NEG]], 1
>> ; CHECK-NEXT: ret i32 [[R]]
>> ;
>> %t0 = lshr i64 %x, 63
>>
>> diff --git a/llvm/test/Transforms/InstCombine/high-bit-signmask.ll b/llvm/test/Transforms/InstCombine/high-bit-signmask.ll
>> index 4a1b395ca35f..18a87273c021 100644
>> --- a/llvm/test/Transforms/InstCombine/high-bit-signmask.ll
>> +++ b/llvm/test/Transforms/InstCombine/high-bit-signmask.ll
>> @@ -3,8 +3,8 @@
>>
>> define i64 @t0(i64 %x) {
>> ; CHECK-LABEL: @t0(
>> -; CHECK-NEXT: [[R:%.*]] = ashr i64 [[X:%.*]], 63
>> -; CHECK-NEXT: ret i64 [[R]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = ashr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: ret i64 [[T0_NEG]]
>> ;
>> %t0 = lshr i64 %x, 63
>> %r = sub i64 0, %t0
>> @@ -12,8 +12,8 @@ define i64 @t0(i64 %x) {
>> }
>> define i64 @t0_exact(i64 %x) {
>> ; CHECK-LABEL: @t0_exact(
>> -; CHECK-NEXT: [[R:%.*]] = ashr exact i64 [[X:%.*]], 63
>> -; CHECK-NEXT: ret i64 [[R]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = ashr exact i64 [[X:%.*]], 63
>> +; CHECK-NEXT: ret i64 [[T0_NEG]]
>> ;
>> %t0 = lshr exact i64 %x, 63
>> %r = sub i64 0, %t0
>> @@ -21,8 +21,8 @@ define i64 @t0_exact(i64 %x) {
>> }
>> define i64 @t2(i64 %x) {
>> ; CHECK-LABEL: @t2(
>> -; CHECK-NEXT: [[R:%.*]] = lshr i64 [[X:%.*]], 63
>> -; CHECK-NEXT: ret i64 [[R]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = lshr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: ret i64 [[T0_NEG]]
>> ;
>> %t0 = ashr i64 %x, 63
>> %r = sub i64 0, %t0
>> @@ -30,8 +30,8 @@ define i64 @t2(i64 %x) {
>> }
>> define i64 @t3_exact(i64 %x) {
>> ; CHECK-LABEL: @t3_exact(
>> -; CHECK-NEXT: [[R:%.*]] = lshr exact i64 [[X:%.*]], 63
>> -; CHECK-NEXT: ret i64 [[R]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = lshr exact i64 [[X:%.*]], 63
>> +; CHECK-NEXT: ret i64 [[T0_NEG]]
>> ;
>> %t0 = ashr exact i64 %x, 63
>> %r = sub i64 0, %t0
>> @@ -40,8 +40,8 @@ define i64 @t3_exact(i64 %x) {
>>
>> define <2 x i64> @t4(<2 x i64> %x) {
>> ; CHECK-LABEL: @t4(
>> -; CHECK-NEXT: [[R:%.*]] = ashr <2 x i64> [[X:%.*]], <i64 63, i64 63>
>> -; CHECK-NEXT: ret <2 x i64> [[R]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = ashr <2 x i64> [[X:%.*]], <i64 63, i64 63>
>> +; CHECK-NEXT: ret <2 x i64> [[T0_NEG]]
>> ;
>> %t0 = lshr <2 x i64> %x, <i64 63, i64 63>
>> %r = sub <2 x i64> zeroinitializer, %t0
>> @@ -64,10 +64,10 @@ declare void @use32(i64)
>>
>> define i64 @t6(i64 %x) {
>> ; CHECK-LABEL: @t6(
>> -; CHECK-NEXT: [[T0:%.*]] = lshr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = ashr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[T0:%.*]] = lshr i64 [[X]], 63
>> ; CHECK-NEXT: call void @use64(i64 [[T0]])
>> -; CHECK-NEXT: [[R:%.*]] = ashr i64 [[X]], 63
>> -; CHECK-NEXT: ret i64 [[R]]
>> +; CHECK-NEXT: ret i64 [[T0_NEG]]
>> ;
>> %t0 = lshr i64 %x, 63
>> call void @use64(i64 %t0)
>> @@ -77,10 +77,10 @@ define i64 @t6(i64 %x) {
>>
>> define i64 @n7(i64 %x) {
>> ; CHECK-LABEL: @n7(
>> -; CHECK-NEXT: [[T0:%.*]] = lshr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = ashr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[T0:%.*]] = lshr i64 [[X]], 63
>> ; CHECK-NEXT: call void @use32(i64 [[T0]])
>> -; CHECK-NEXT: [[R:%.*]] = ashr i64 [[X]], 63
>> -; CHECK-NEXT: ret i64 [[R]]
>> +; CHECK-NEXT: ret i64 [[T0_NEG]]
>> ;
>> %t0 = lshr i64 %x, 63
>> call void @use32(i64 %t0)
>> @@ -90,11 +90,11 @@ define i64 @n7(i64 %x) {
>>
>> define i64 @n8(i64 %x) {
>> ; CHECK-LABEL: @n8(
>> -; CHECK-NEXT: [[T0:%.*]] = lshr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = ashr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[T0:%.*]] = lshr i64 [[X]], 63
>> ; CHECK-NEXT: call void @use64(i64 [[T0]])
>> ; CHECK-NEXT: call void @use32(i64 [[T0]])
>> -; CHECK-NEXT: [[R:%.*]] = ashr i64 [[X]], 63
>> -; CHECK-NEXT: ret i64 [[R]]
>> +; CHECK-NEXT: ret i64 [[T0_NEG]]
>> ;
>> %t0 = lshr i64 %x, 63
>> call void @use64(i64 %t0)
>> @@ -116,8 +116,8 @@ define i64 @n9(i64 %x) {
>>
>> define i64 @n10(i64 %x) {
>> ; CHECK-LABEL: @n10(
>> -; CHECK-NEXT: [[T0:%.*]] = lshr i64 [[X:%.*]], 63
>> -; CHECK-NEXT: [[R:%.*]] = xor i64 [[T0]], 1
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = ashr i64 [[X:%.*]], 63
>> +; CHECK-NEXT: [[R:%.*]] = add nsw i64 [[T0_NEG]], 1
>> ; CHECK-NEXT: ret i64 [[R]]
>> ;
>> %t0 = lshr i64 %x, 63
>>
>> diff --git a/llvm/test/Transforms/InstCombine/sub-of-negatible.ll b/llvm/test/Transforms/InstCombine/sub-of-negatible.ll
>> index c256bd1ed4dc..40cb4aa29446 100644
>> --- a/llvm/test/Transforms/InstCombine/sub-of-negatible.ll
>> +++ b/llvm/test/Transforms/InstCombine/sub-of-negatible.ll
>> @@ -31,8 +31,8 @@ define i8 @t1(i8 %x, i8 %y) {
>> ; Shift-left can be negated if all uses can be updated
>> define i8 @t2(i8 %x, i8 %y) {
>> ; CHECK-LABEL: @t2(
>> -; CHECK-NEXT: [[T0:%.*]] = shl i8 -42, [[Y:%.*]]
>> -; CHECK-NEXT: [[T1:%.*]] = sub i8 [[X:%.*]], [[T0]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = shl i8 42, [[Y:%.*]]
>> +; CHECK-NEXT: [[T1:%.*]] = add i8 [[T0_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[T1]]
>> ;
>> %t0 = shl i8 -42, %y
>> @@ -55,8 +55,8 @@ define i8 @t3(i8 %x, i8 %y, i8 %z) {
>> ; CHECK-LABEL: @t3(
>> ; CHECK-NEXT: [[T0:%.*]] = sub i8 0, [[Z:%.*]]
>> ; CHECK-NEXT: call void @use8(i8 [[T0]])
>> -; CHECK-NEXT: [[T1:%.*]] = shl i8 [[T0]], [[Y:%.*]]
>> -; CHECK-NEXT: [[T2:%.*]] = sub i8 [[X:%.*]], [[T1]]
>> +; CHECK-NEXT: [[T1_NEG:%.*]] = shl i8 [[Z]], [[Y:%.*]]
>> +; CHECK-NEXT: [[T2:%.*]] = add i8 [[T1_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[T2]]
>> ;
>> %t0 = sub i8 0, %z
>> @@ -85,8 +85,8 @@ define i8 @n3(i8 %x, i8 %y, i8 %z) {
>> ; Select can be negated if all it's operands can be negated and all the users of select can be updated
>> define i8 @t4(i8 %x, i1 %y) {
>> ; CHECK-LABEL: @t4(
>> -; CHECK-NEXT: [[T0:%.*]] = select i1 [[Y:%.*]], i8 -42, i8 44
>> -; CHECK-NEXT: [[T1:%.*]] = sub i8 [[X:%.*]], [[T0]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = select i1 [[Y:%.*]], i8 42, i8 -44
>> +; CHECK-NEXT: [[T1:%.*]] = add i8 [[T0_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[T1]]
>> ;
>> %t0 = select i1 %y, i8 -42, i8 44
>> @@ -119,8 +119,8 @@ define i8 @t6(i8 %x, i1 %y, i8 %z) {
>> ; CHECK-LABEL: @t6(
>> ; CHECK-NEXT: [[T0:%.*]] = sub i8 0, [[Z:%.*]]
>> ; CHECK-NEXT: call void @use8(i8 [[T0]])
>> -; CHECK-NEXT: [[T1:%.*]] = select i1 [[Y:%.*]], i8 -42, i8 [[T0]]
>> -; CHECK-NEXT: [[T2:%.*]] = sub i8 [[X:%.*]], [[T1]]
>> +; CHECK-NEXT: [[T1_NEG:%.*]] = select i1 [[Y:%.*]], i8 42, i8 [[Z]]
>> +; CHECK-NEXT: [[T2:%.*]] = add i8 [[T1_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[T2]]
>> ;
>> %t0 = sub i8 0, %z
>> @@ -131,9 +131,9 @@ define i8 @t6(i8 %x, i1 %y, i8 %z) {
>> }
>> define i8 @t7(i8 %x, i1 %y, i8 %z) {
>> ; CHECK-LABEL: @t7(
>> -; CHECK-NEXT: [[T0:%.*]] = shl i8 1, [[Z:%.*]]
>> -; CHECK-NEXT: [[T1:%.*]] = select i1 [[Y:%.*]], i8 0, i8 [[T0]]
>> -; CHECK-NEXT: [[T2:%.*]] = sub i8 [[X:%.*]], [[T1]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = shl i8 -1, [[Z:%.*]]
>> +; CHECK-NEXT: [[T1_NEG:%.*]] = select i1 [[Y:%.*]], i8 0, i8 [[T0_NEG]]
>> +; CHECK-NEXT: [[T2:%.*]] = add i8 [[T1_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[T2]]
>> ;
>> %t0 = shl i8 1, %z
>> @@ -169,10 +169,10 @@ define i8 @t9(i8 %x, i8 %y) {
>> }
>> define i8 @n10(i8 %x, i8 %y, i8 %z) {
>> ; CHECK-LABEL: @n10(
>> -; CHECK-NEXT: [[T0:%.*]] = sub i8 [[Y:%.*]], [[X:%.*]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = sub i8 [[X:%.*]], [[Y:%.*]]
>> +; CHECK-NEXT: [[T0:%.*]] = sub i8 [[Y]], [[X]]
>> ; CHECK-NEXT: call void @use8(i8 [[T0]])
>> -; CHECK-NEXT: [[T1:%.*]] = sub i8 0, [[T0]]
>> -; CHECK-NEXT: ret i8 [[T1]]
>> +; CHECK-NEXT: ret i8 [[T0_NEG]]
>> ;
>> %t0 = sub i8 %y, %x
>> call void @use8(i8 %t0)
>> @@ -204,8 +204,8 @@ define i8 @n13(i8 %x, i8 %y, i8 %z) {
>> ; CHECK-LABEL: @n13(
>> ; CHECK-NEXT: [[T0:%.*]] = sub i8 0, [[Y:%.*]]
>> ; CHECK-NEXT: call void @use8(i8 [[T0]])
>> -; CHECK-NEXT: [[T11:%.*]] = sub i8 [[Y]], [[Z:%.*]]
>> -; CHECK-NEXT: [[T2:%.*]] = add i8 [[T11]], [[X:%.*]]
>> +; CHECK-NEXT: [[T1_NEG:%.*]] = sub i8 [[Y]], [[Z:%.*]]
>> +; CHECK-NEXT: [[T2:%.*]] = add i8 [[T1_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[T2]]
>> ;
>> %t0 = sub i8 0, %y
>> @@ -242,8 +242,8 @@ define i8 @t15(i8 %x, i8 %y, i8 %z) {
>> ; CHECK-LABEL: @t15(
>> ; CHECK-NEXT: [[T0:%.*]] = sub i8 0, [[Y:%.*]]
>> ; CHECK-NEXT: call void @use8(i8 [[T0]])
>> -; CHECK-NEXT: [[TMP1:%.*]] = mul i8 [[Z:%.*]], [[Y]]
>> -; CHECK-NEXT: [[T2:%.*]] = add i8 [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[T1_NEG:%.*]] = mul i8 [[Y]], [[Z:%.*]]
>> +; CHECK-NEXT: [[T2:%.*]] = add i8 [[T1_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[T2]]
>> ;
>> %t0 = sub i8 0, %y
>> @@ -279,8 +279,8 @@ define i8 @t16(i1 %c, i8 %x) {
>> ; CHECK: else:
>> ; CHECK-NEXT: br label [[END]]
>> ; CHECK: end:
>> -; CHECK-NEXT: [[Z:%.*]] = phi i8 [ [[X:%.*]], [[THEN]] ], [ 42, [[ELSE]] ]
>> -; CHECK-NEXT: ret i8 [[Z]]
>> +; CHECK-NEXT: [[Z_NEG:%.*]] = phi i8 [ [[X:%.*]], [[THEN]] ], [ 42, [[ELSE]] ]
>> +; CHECK-NEXT: ret i8 [[Z_NEG]]
>> ;
>> begin:
>> br i1 %c, label %then, label %else
>> @@ -352,9 +352,9 @@ end:
>> ; truncation can be negated if it's operand can be negated
>> define i8 @t20(i8 %x, i16 %y) {
>> ; CHECK-LABEL: @t20(
>> -; CHECK-NEXT: [[T0:%.*]] = shl i16 -42, [[Y:%.*]]
>> -; CHECK-NEXT: [[T1:%.*]] = trunc i16 [[T0]] to i8
>> -; CHECK-NEXT: [[T2:%.*]] = sub i8 [[X:%.*]], [[T1]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = shl i16 42, [[Y:%.*]]
>> +; CHECK-NEXT: [[T1_NEG:%.*]] = trunc i16 [[T0_NEG]] to i8
>> +; CHECK-NEXT: [[T2:%.*]] = add i8 [[T1_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[T2]]
>> ;
>> %t0 = shl i16 -42, %y
>> @@ -427,9 +427,9 @@ define i4 @negate_shl_xor(i4 %x, i4 %y) {
>>
>> define i8 @negate_shl_not_uses(i8 %x, i8 %y) {
>> ; CHECK-LABEL: @negate_shl_not_uses(
>> -; CHECK-NEXT: [[O:%.*]] = xor i8 [[X:%.*]], -1
>> +; CHECK-NEXT: [[O_NEG:%.*]] = add i8 [[X:%.*]], 1
>> +; CHECK-NEXT: [[O:%.*]] = xor i8 [[X]], -1
>> ; CHECK-NEXT: call void @use8(i8 [[O]])
>> -; CHECK-NEXT: [[O_NEG:%.*]] = add i8 [[X]], 1
>> ; CHECK-NEXT: [[S_NEG:%.*]] = shl i8 [[O_NEG]], [[Y:%.*]]
>> ; CHECK-NEXT: ret i8 [[S_NEG]]
>> ;
>> @@ -442,9 +442,9 @@ define i8 @negate_shl_not_uses(i8 %x, i8 %y) {
>>
>> define <2 x i4> @negate_mul_not_uses_vec(<2 x i4> %x, <2 x i4> %y) {
>> ; CHECK-LABEL: @negate_mul_not_uses_vec(
>> -; CHECK-NEXT: [[O:%.*]] = xor <2 x i4> [[X:%.*]], <i4 -1, i4 -1>
>> +; CHECK-NEXT: [[O_NEG:%.*]] = add <2 x i4> [[X:%.*]], <i4 1, i4 1>
>> +; CHECK-NEXT: [[O:%.*]] = xor <2 x i4> [[X]], <i4 -1, i4 -1>
>> ; CHECK-NEXT: call void @use_v2i4(<2 x i4> [[O]])
>> -; CHECK-NEXT: [[O_NEG:%.*]] = add <2 x i4> [[X]], <i4 1, i4 1>
>> ; CHECK-NEXT: [[S_NEG:%.*]] = mul <2 x i4> [[O_NEG]], [[Y:%.*]]
>> ; CHECK-NEXT: ret <2 x i4> [[S_NEG]]
>> ;
>> @@ -458,8 +458,8 @@ define <2 x i4> @negate_mul_not_uses_vec(<2 x i4> %x, <2 x i4> %y) {
>> ; signed division can be negated if divisor can be negated and is not 1/-1
>> define i8 @negate_sdiv(i8 %x, i8 %y) {
>> ; CHECK-LABEL: @negate_sdiv(
>> -; CHECK-NEXT: [[T0:%.*]] = sdiv i8 [[Y:%.*]], 42
>> -; CHECK-NEXT: [[T1:%.*]] = sub i8 [[X:%.*]], [[T0]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = sdiv i8 [[Y:%.*]], -42
>> +; CHECK-NEXT: [[T1:%.*]] = add i8 [[T0_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[T1]]
>> ;
>> %t0 = sdiv i8 %y, 42
>> @@ -478,12 +478,24 @@ define i8 @negate_sdiv_extrause(i8 %x, i8 %y) {
>> %t1 = sub i8 %x, %t0
>> ret i8 %t1
>> }
>> +define i8 @negate_sdiv_extrause2(i8 %x, i8 %y) {
>> +; CHECK-LABEL: @negate_sdiv_extrause2(
>> +; CHECK-NEXT: [[T0:%.*]] = sdiv i8 [[Y:%.*]], 42
>> +; CHECK-NEXT: call void @use8(i8 [[T0]])
>> +; CHECK-NEXT: [[T1:%.*]] = sub nsw i8 0, [[T0]]
>> +; CHECK-NEXT: ret i8 [[T1]]
>> +;
>> + %t0 = sdiv i8 %y, 42
>> + call void @use8(i8 %t0)
>> + %t1 = sub i8 0, %t0
>> + ret i8 %t1
>> +}
>>
>> ; Right-shift sign bit smear is negatible.
>> define i8 @negate_ashr(i8 %x, i8 %y) {
>> ; CHECK-LABEL: @negate_ashr(
>> -; CHECK-NEXT: [[T0:%.*]] = ashr i8 [[Y:%.*]], 7
>> -; CHECK-NEXT: [[T1:%.*]] = sub i8 [[X:%.*]], [[T0]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = lshr i8 [[Y:%.*]], 7
>> +; CHECK-NEXT: [[T1:%.*]] = add i8 [[T0_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[T1]]
>> ;
>> %t0 = ashr i8 %y, 7
>> @@ -492,8 +504,8 @@ define i8 @negate_ashr(i8 %x, i8 %y) {
>> }
>> define i8 @negate_lshr(i8 %x, i8 %y) {
>> ; CHECK-LABEL: @negate_lshr(
>> -; CHECK-NEXT: [[T0:%.*]] = lshr i8 [[Y:%.*]], 7
>> -; CHECK-NEXT: [[T1:%.*]] = sub i8 [[X:%.*]], [[T0]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = ashr i8 [[Y:%.*]], 7
>> +; CHECK-NEXT: [[T1:%.*]] = add i8 [[T0_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[T1]]
>> ;
>> %t0 = lshr i8 %y, 7
>> @@ -548,8 +560,8 @@ define i8 @negate_lshr_wrongshift(i8 %x, i8 %y) {
>> ; *ext of i1 is always negatible
>> define i8 @negate_sext(i8 %x, i1 %y) {
>> ; CHECK-LABEL: @negate_sext(
>> -; CHECK-NEXT: [[TMP1:%.*]] = zext i1 [[Y:%.*]] to i8
>> -; CHECK-NEXT: [[T1:%.*]] = add i8 [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = zext i1 [[Y:%.*]] to i8
>> +; CHECK-NEXT: [[T1:%.*]] = add i8 [[T0_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[T1]]
>> ;
>> %t0 = sext i1 %y to i8
>> @@ -558,8 +570,8 @@ define i8 @negate_sext(i8 %x, i1 %y) {
>> }
>> define i8 @negate_zext(i8 %x, i1 %y) {
>> ; CHECK-LABEL: @negate_zext(
>> -; CHECK-NEXT: [[TMP1:%.*]] = sext i1 [[Y:%.*]] to i8
>> -; CHECK-NEXT: [[T1:%.*]] = add i8 [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[T0_NEG:%.*]] = sext i1 [[Y:%.*]] to i8
>> +; CHECK-NEXT: [[T1:%.*]] = add i8 [[T0_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[T1]]
>> ;
>> %t0 = zext i1 %y to i8
>>
>> diff --git a/llvm/test/Transforms/InstCombine/sub.ll b/llvm/test/Transforms/InstCombine/sub.ll
>> index 6bda5b0a3019..1b4149291951 100644
>> --- a/llvm/test/Transforms/InstCombine/sub.ll
>> +++ b/llvm/test/Transforms/InstCombine/sub.ll
>> @@ -218,8 +218,8 @@ define <2 x i8> @notnotsub_vec_undef_elts(<2 x i8> %x, <2 x i8> %y) {
>>
>> define i32 @test5(i32 %A, i32 %B, i32 %C) {
>> ; CHECK-LABEL: @test5(
>> -; CHECK-NEXT: [[D1:%.*]] = sub i32 [[C:%.*]], [[B:%.*]]
>> -; CHECK-NEXT: [[E:%.*]] = add i32 [[D1]], [[A:%.*]]
>> +; CHECK-NEXT: [[D_NEG:%.*]] = sub i32 [[C:%.*]], [[B:%.*]]
>> +; CHECK-NEXT: [[E:%.*]] = add i32 [[D_NEG]], [[A:%.*]]
>> ; CHECK-NEXT: ret i32 [[E]]
>> ;
>> %D = sub i32 %B, %C
>> @@ -555,11 +555,11 @@ define i64 @test_neg_shl_sub(i64 %a, i64 %b) {
>>
>> define i64 @test_neg_shl_sub_extra_use1(i64 %a, i64 %b, i64* %p) {
>> ; CHECK-LABEL: @test_neg_shl_sub_extra_use1(
>> -; CHECK-NEXT: [[SUB:%.*]] = sub i64 [[A:%.*]], [[B:%.*]]
>> +; CHECK-NEXT: [[SUB_NEG:%.*]] = sub i64 [[B:%.*]], [[A:%.*]]
>> +; CHECK-NEXT: [[SUB:%.*]] = sub i64 [[A]], [[B]]
>> ; CHECK-NEXT: store i64 [[SUB]], i64* [[P:%.*]], align 8
>> -; CHECK-NEXT: [[MUL:%.*]] = shl i64 [[SUB]], 2
>> -; CHECK-NEXT: [[NEG:%.*]] = sub i64 0, [[MUL]]
>> -; CHECK-NEXT: ret i64 [[NEG]]
>> +; CHECK-NEXT: [[MUL_NEG:%.*]] = shl i64 [[SUB_NEG]], 2
>> +; CHECK-NEXT: ret i64 [[MUL_NEG]]
>> ;
>> %sub = sub i64 %a, %b
>> store i64 %sub, i64* %p
>> @@ -621,8 +621,8 @@ define i64 @test_neg_shl_sext_i1(i1 %a, i64 %b) {
>>
>> define i64 @test_neg_zext_i1_extra_use(i1 %a, i64 %b, i64* %p) {
>> ; CHECK-LABEL: @test_neg_zext_i1_extra_use(
>> -; CHECK-NEXT: [[EXT:%.*]] = zext i1 [[A:%.*]] to i64
>> -; CHECK-NEXT: [[EXT_NEG:%.*]] = sext i1 [[A]] to i64
>> +; CHECK-NEXT: [[EXT_NEG:%.*]] = sext i1 [[A:%.*]] to i64
>> +; CHECK-NEXT: [[EXT:%.*]] = zext i1 [[A]] to i64
>> ; CHECK-NEXT: store i64 [[EXT]], i64* [[P:%.*]], align 8
>> ; CHECK-NEXT: ret i64 [[EXT_NEG]]
>> ;
>> @@ -634,8 +634,8 @@ define i64 @test_neg_zext_i1_extra_use(i1 %a, i64 %b, i64* %p) {
>>
>> define i64 @test_neg_sext_i1_extra_use(i1 %a, i64 %b, i64* %p) {
>> ; CHECK-LABEL: @test_neg_sext_i1_extra_use(
>> -; CHECK-NEXT: [[EXT:%.*]] = sext i1 [[A:%.*]] to i64
>> -; CHECK-NEXT: [[EXT_NEG:%.*]] = zext i1 [[A]] to i64
>> +; CHECK-NEXT: [[EXT_NEG:%.*]] = zext i1 [[A:%.*]] to i64
>> +; CHECK-NEXT: [[EXT:%.*]] = sext i1 [[A]] to i64
>> ; CHECK-NEXT: store i64 [[EXT]], i64* [[P:%.*]], align 8
>> ; CHECK-NEXT: ret i64 [[EXT_NEG]]
>> ;
>> @@ -703,7 +703,7 @@ define i64 @test_neg_mul_sub_commuted(i64 %a, i64 %b, i64 %c) {
>> ; CHECK-LABEL: @test_neg_mul_sub_commuted(
>> ; CHECK-NEXT: [[COMPLEX:%.*]] = mul i64 [[C:%.*]], [[C]]
>> ; CHECK-NEXT: [[SUB_NEG:%.*]] = sub i64 [[B:%.*]], [[A:%.*]]
>> -; CHECK-NEXT: [[MUL_NEG:%.*]] = mul i64 [[COMPLEX]], [[SUB_NEG]]
>> +; CHECK-NEXT: [[MUL_NEG:%.*]] = mul i64 [[SUB_NEG]], [[COMPLEX]]
>> ; CHECK-NEXT: ret i64 [[MUL_NEG]]
>> ;
>> %complex = mul i64 %c, %c
>> @@ -715,8 +715,8 @@ define i64 @test_neg_mul_sub_commuted(i64 %a, i64 %b, i64 %c) {
>>
>> define i32 @test27(i32 %x, i32 %y) {
>> ; CHECK-LABEL: @test27(
>> -; CHECK-NEXT: [[TMP1:%.*]] = shl i32 [[Y:%.*]], 3
>> -; CHECK-NEXT: [[SUB:%.*]] = add i32 [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[MUL_NEG:%.*]] = shl i32 [[Y:%.*]], 3
>> +; CHECK-NEXT: [[SUB:%.*]] = add i32 [[MUL_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i32 [[SUB]]
>> ;
>> %mul = mul i32 %y, -8
>> @@ -726,8 +726,8 @@ define i32 @test27(i32 %x, i32 %y) {
>>
>> define <2 x i32> @test27vec(<2 x i32> %x, <2 x i32> %y) {
>> ; CHECK-LABEL: @test27vec(
>> -; CHECK-NEXT: [[TMP1:%.*]] = mul <2 x i32> [[Y:%.*]], <i32 8, i32 6>
>> -; CHECK-NEXT: [[SUB:%.*]] = add <2 x i32> [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[MUL_NEG:%.*]] = mul <2 x i32> [[Y:%.*]], <i32 8, i32 6>
>> +; CHECK-NEXT: [[SUB:%.*]] = add <2 x i32> [[MUL_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret <2 x i32> [[SUB]]
>> ;
>> %mul = mul <2 x i32> %y, <i32 -8, i32 -6>
>> @@ -737,8 +737,8 @@ define <2 x i32> @test27vec(<2 x i32> %x, <2 x i32> %y) {
>>
>> define <2 x i32> @test27vecsplat(<2 x i32> %x, <2 x i32> %y) {
>> ; CHECK-LABEL: @test27vecsplat(
>> -; CHECK-NEXT: [[TMP1:%.*]] = shl <2 x i32> [[Y:%.*]], <i32 3, i32 3>
>> -; CHECK-NEXT: [[SUB:%.*]] = add <2 x i32> [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[MUL_NEG:%.*]] = shl <2 x i32> [[Y:%.*]], <i32 3, i32 3>
>> +; CHECK-NEXT: [[SUB:%.*]] = add <2 x i32> [[MUL_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret <2 x i32> [[SUB]]
>> ;
>> %mul = mul <2 x i32> %y, <i32 -8, i32 -8>
>> @@ -748,8 +748,8 @@ define <2 x i32> @test27vecsplat(<2 x i32> %x, <2 x i32> %y) {
>>
>> define <2 x i32> @test27vecmixed(<2 x i32> %x, <2 x i32> %y) {
>> ; CHECK-LABEL: @test27vecmixed(
>> -; CHECK-NEXT: [[TMP1:%.*]] = mul <2 x i32> [[Y:%.*]], <i32 8, i32 -8>
>> -; CHECK-NEXT: [[SUB:%.*]] = add <2 x i32> [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[MUL_NEG:%.*]] = mul <2 x i32> [[Y:%.*]], <i32 8, i32 -8>
>> +; CHECK-NEXT: [[SUB:%.*]] = add <2 x i32> [[MUL_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret <2 x i32> [[SUB]]
>> ;
>> %mul = mul <2 x i32> %y, <i32 -8, i32 8>
>> @@ -759,8 +759,8 @@ define <2 x i32> @test27vecmixed(<2 x i32> %x, <2 x i32> %y) {
>>
>> define i32 @test27commuted(i32 %x, i32 %y) {
>> ; CHECK-LABEL: @test27commuted(
>> -; CHECK-NEXT: [[TMP1:%.*]] = shl i32 [[Y:%.*]], 3
>> -; CHECK-NEXT: [[SUB:%.*]] = add i32 [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[MUL_NEG:%.*]] = shl i32 [[Y:%.*]], 3
>> +; CHECK-NEXT: [[SUB:%.*]] = add i32 [[MUL_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i32 [[SUB]]
>> ;
>> %mul = mul i32 -8, %y
>> @@ -770,8 +770,8 @@ define i32 @test27commuted(i32 %x, i32 %y) {
>>
>> define <2 x i32> @test27commutedvec(<2 x i32> %x, <2 x i32> %y) {
>> ; CHECK-LABEL: @test27commutedvec(
>> -; CHECK-NEXT: [[TMP1:%.*]] = mul <2 x i32> [[Y:%.*]], <i32 8, i32 6>
>> -; CHECK-NEXT: [[SUB:%.*]] = add <2 x i32> [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[MUL_NEG:%.*]] = mul <2 x i32> [[Y:%.*]], <i32 8, i32 6>
>> +; CHECK-NEXT: [[SUB:%.*]] = add <2 x i32> [[MUL_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret <2 x i32> [[SUB]]
>> ;
>> %mul = mul <2 x i32> <i32 -8, i32 -6>, %y
>> @@ -781,8 +781,8 @@ define <2 x i32> @test27commutedvec(<2 x i32> %x, <2 x i32> %y) {
>>
>> define <2 x i32> @test27commutedvecsplat(<2 x i32> %x, <2 x i32> %y) {
>> ; CHECK-LABEL: @test27commutedvecsplat(
>> -; CHECK-NEXT: [[TMP1:%.*]] = shl <2 x i32> [[Y:%.*]], <i32 3, i32 3>
>> -; CHECK-NEXT: [[SUB:%.*]] = add <2 x i32> [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[MUL_NEG:%.*]] = shl <2 x i32> [[Y:%.*]], <i32 3, i32 3>
>> +; CHECK-NEXT: [[SUB:%.*]] = add <2 x i32> [[MUL_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret <2 x i32> [[SUB]]
>> ;
>> %mul = mul <2 x i32> <i32 -8, i32 -8>, %y
>> @@ -792,8 +792,8 @@ define <2 x i32> @test27commutedvecsplat(<2 x i32> %x, <2 x i32> %y) {
>>
>> define <2 x i32> @test27commutedvecmixed(<2 x i32> %x, <2 x i32> %y) {
>> ; CHECK-LABEL: @test27commutedvecmixed(
>> -; CHECK-NEXT: [[TMP1:%.*]] = mul <2 x i32> [[Y:%.*]], <i32 8, i32 -8>
>> -; CHECK-NEXT: [[SUB:%.*]] = add <2 x i32> [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[MUL_NEG:%.*]] = mul <2 x i32> [[Y:%.*]], <i32 8, i32 -8>
>> +; CHECK-NEXT: [[SUB:%.*]] = add <2 x i32> [[MUL_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret <2 x i32> [[SUB]]
>> ;
>> %mul = mul <2 x i32> <i32 -8, i32 8>, %y
>> @@ -1105,7 +1105,7 @@ define i32 @test52(i32 %X) {
>>
>> define <2 x i1> @test53(<2 x i1> %A, <2 x i1> %B) {
>> ; CHECK-LABEL: @test53(
>> -; CHECK-NEXT: [[SUB:%.*]] = xor <2 x i1> [[A:%.*]], [[B:%.*]]
>> +; CHECK-NEXT: [[SUB:%.*]] = xor <2 x i1> [[B:%.*]], [[A:%.*]]
>> ; CHECK-NEXT: ret <2 x i1> [[SUB]]
>> ;
>> %sub = sub <2 x i1> %A, %B
>> @@ -1149,8 +1149,8 @@ define i32 @test55(i1 %which) {
>> ; CHECK: delay:
>> ; CHECK-NEXT: br label [[FINAL]]
>> ; CHECK: final:
>> -; CHECK-NEXT: [[A:%.*]] = phi i32 [ -877, [[ENTRY:%.*]] ], [ 113, [[DELAY]] ]
>> -; CHECK-NEXT: ret i32 [[A]]
>> +; CHECK-NEXT: [[A_NEG:%.*]] = phi i32 [ -877, [[ENTRY:%.*]] ], [ 113, [[DELAY]] ]
>> +; CHECK-NEXT: ret i32 [[A_NEG]]
>> ;
>> entry:
>> br i1 %which, label %final, label %delay
>> @@ -1171,8 +1171,8 @@ define <2 x i32> @test55vec(i1 %which) {
>> ; CHECK: delay:
>> ; CHECK-NEXT: br label [[FINAL]]
>> ; CHECK: final:
>> -; CHECK-NEXT: [[A:%.*]] = phi <2 x i32> [ <i32 -877, i32 -877>, [[ENTRY:%.*]] ], [ <i32 113, i32 113>, [[DELAY]] ]
>> -; CHECK-NEXT: ret <2 x i32> [[A]]
>> +; CHECK-NEXT: [[A_NEG:%.*]] = phi <2 x i32> [ <i32 -877, i32 -877>, [[ENTRY:%.*]] ], [ <i32 113, i32 113>, [[DELAY]] ]
>> +; CHECK-NEXT: ret <2 x i32> [[A_NEG]]
>> ;
>> entry:
>> br i1 %which, label %final, label %delay
>> @@ -1193,8 +1193,8 @@ define <2 x i32> @test55vec2(i1 %which) {
>> ; CHECK: delay:
>> ; CHECK-NEXT: br label [[FINAL]]
>> ; CHECK: final:
>> -; CHECK-NEXT: [[A:%.*]] = phi <2 x i32> [ <i32 -877, i32 -2167>, [[ENTRY:%.*]] ], [ <i32 113, i32 303>, [[DELAY]] ]
>> -; CHECK-NEXT: ret <2 x i32> [[A]]
>> +; CHECK-NEXT: [[A_NEG:%.*]] = phi <2 x i32> [ <i32 -877, i32 -2167>, [[ENTRY:%.*]] ], [ <i32 113, i32 303>, [[DELAY]] ]
>> +; CHECK-NEXT: ret <2 x i32> [[A_NEG]]
>> ;
>> entry:
>> br i1 %which, label %final, label %delay
>> @@ -1356,8 +1356,8 @@ define i32 @test64(i32 %x) {
>> ; CHECK-LABEL: @test64(
>> ; CHECK-NEXT: [[TMP1:%.*]] = icmp slt i32 [[X:%.*]], 255
>> ; CHECK-NEXT: [[TMP2:%.*]] = select i1 [[TMP1]], i32 [[X]], i32 255
>> -; CHECK-NEXT: [[RES:%.*]] = add nsw i32 [[TMP2]], 1
>> -; CHECK-NEXT: ret i32 [[RES]]
>> +; CHECK-NEXT: [[DOTNEG:%.*]] = add nsw i32 [[TMP2]], 1
>> +; CHECK-NEXT: ret i32 [[DOTNEG]]
>> ;
>> %1 = xor i32 %x, -1
>> %2 = icmp sgt i32 %1, -256
>> @@ -1370,8 +1370,8 @@ define i32 @test65(i32 %x) {
>> ; CHECK-LABEL: @test65(
>> ; CHECK-NEXT: [[TMP1:%.*]] = icmp sgt i32 [[X:%.*]], -256
>> ; CHECK-NEXT: [[TMP2:%.*]] = select i1 [[TMP1]], i32 [[X]], i32 -256
>> -; CHECK-NEXT: [[RES:%.*]] = add i32 [[TMP2]], 1
>> -; CHECK-NEXT: ret i32 [[RES]]
>> +; CHECK-NEXT: [[DOTNEG:%.*]] = add i32 [[TMP2]], 1
>> +; CHECK-NEXT: ret i32 [[DOTNEG]]
>> ;
>> %1 = xor i32 %x, -1
>> %2 = icmp slt i32 %1, 255
>> @@ -1384,8 +1384,8 @@ define i32 @test66(i32 %x) {
>> ; CHECK-LABEL: @test66(
>> ; CHECK-NEXT: [[TMP1:%.*]] = icmp ult i32 [[X:%.*]], -101
>> ; CHECK-NEXT: [[TMP2:%.*]] = select i1 [[TMP1]], i32 [[X]], i32 -101
>> -; CHECK-NEXT: [[RES:%.*]] = add nuw i32 [[TMP2]], 1
>> -; CHECK-NEXT: ret i32 [[RES]]
>> +; CHECK-NEXT: [[DOTNEG:%.*]] = add nuw i32 [[TMP2]], 1
>> +; CHECK-NEXT: ret i32 [[DOTNEG]]
>> ;
>> %1 = xor i32 %x, -1
>> %2 = icmp ugt i32 %1, 100
>> @@ -1398,8 +1398,8 @@ define i32 @test67(i32 %x) {
>> ; CHECK-LABEL: @test67(
>> ; CHECK-NEXT: [[TMP1:%.*]] = icmp ugt i32 [[X:%.*]], 100
>> ; CHECK-NEXT: [[TMP2:%.*]] = select i1 [[TMP1]], i32 [[X]], i32 100
>> -; CHECK-NEXT: [[RES:%.*]] = add i32 [[TMP2]], 1
>> -; CHECK-NEXT: ret i32 [[RES]]
>> +; CHECK-NEXT: [[DOTNEG:%.*]] = add i32 [[TMP2]], 1
>> +; CHECK-NEXT: ret i32 [[DOTNEG]]
>> ;
>> %1 = xor i32 %x, -1
>> %2 = icmp ult i32 %1, -101
>> @@ -1413,8 +1413,8 @@ define <2 x i32> @test68(<2 x i32> %x) {
>> ; CHECK-LABEL: @test68(
>> ; CHECK-NEXT: [[TMP1:%.*]] = icmp slt <2 x i32> [[X:%.*]], <i32 255, i32 255>
>> ; CHECK-NEXT: [[TMP2:%.*]] = select <2 x i1> [[TMP1]], <2 x i32> [[X]], <2 x i32> <i32 255, i32 255>
>> -; CHECK-NEXT: [[RES:%.*]] = add nsw <2 x i32> [[TMP2]], <i32 1, i32 1>
>> -; CHECK-NEXT: ret <2 x i32> [[RES]]
>> +; CHECK-NEXT: [[DOTNEG:%.*]] = add nsw <2 x i32> [[TMP2]], <i32 1, i32 1>
>> +; CHECK-NEXT: ret <2 x i32> [[DOTNEG]]
>> ;
>> %1 = xor <2 x i32> %x, <i32 -1, i32 -1>
>> %2 = icmp sgt <2 x i32> %1, <i32 -256, i32 -256>
>> @@ -1428,8 +1428,8 @@ define <2 x i32> @test69(<2 x i32> %x) {
>> ; CHECK-LABEL: @test69(
>> ; CHECK-NEXT: [[TMP1:%.*]] = icmp slt <2 x i32> [[X:%.*]], <i32 255, i32 127>
>> ; CHECK-NEXT: [[TMP2:%.*]] = select <2 x i1> [[TMP1]], <2 x i32> [[X]], <2 x i32> <i32 255, i32 127>
>> -; CHECK-NEXT: [[RES:%.*]] = add <2 x i32> [[TMP2]], <i32 1, i32 1>
>> -; CHECK-NEXT: ret <2 x i32> [[RES]]
>> +; CHECK-NEXT: [[DOTNEG:%.*]] = add <2 x i32> [[TMP2]], <i32 1, i32 1>
>> +; CHECK-NEXT: ret <2 x i32> [[DOTNEG]]
>> ;
>> %1 = xor <2 x i32> %x, <i32 -1, i32 -1>
>> %2 = icmp sgt <2 x i32> %1, <i32 -256, i32 -128>
>>
>> diff --git a/llvm/test/Transforms/InstCombine/zext-bool-add-sub.ll b/llvm/test/Transforms/InstCombine/zext-bool-add-sub.ll
>> index 71fa9a795735..7ad0655dcbe8 100644
>> --- a/llvm/test/Transforms/InstCombine/zext-bool-add-sub.ll
>> +++ b/llvm/test/Transforms/InstCombine/zext-bool-add-sub.ll
>> @@ -5,9 +5,9 @@
>>
>> define i32 @a(i1 zeroext %x, i1 zeroext %y) {
>> ; CHECK-LABEL: @a(
>> -; CHECK-NEXT: [[CONV3_NEG:%.*]] = sext i1 [[Y:%.*]] to i32
>> +; CHECK-NEXT: [[CONV3_NEG1:%.*]] = sext i1 [[Y:%.*]] to i32
>> ; CHECK-NEXT: [[SUB:%.*]] = select i1 [[X:%.*]], i32 2, i32 1
>> -; CHECK-NEXT: [[ADD:%.*]] = add nsw i32 [[SUB]], [[CONV3_NEG]]
>> +; CHECK-NEXT: [[ADD:%.*]] = add nsw i32 [[SUB]], [[CONV3_NEG1]]
>> ; CHECK-NEXT: ret i32 [[ADD]]
>> ;
>> %conv = zext i1 %x to i32
>> @@ -95,8 +95,8 @@ declare void @use(i64)
>>
>> define i64 @zext_negate(i1 %A) {
>> ; CHECK-LABEL: @zext_negate(
>> -; CHECK-NEXT: [[SUB:%.*]] = sext i1 [[A:%.*]] to i64
>> -; CHECK-NEXT: ret i64 [[SUB]]
>> +; CHECK-NEXT: [[EXT_NEG:%.*]] = sext i1 [[A:%.*]] to i64
>> +; CHECK-NEXT: ret i64 [[EXT_NEG]]
>> ;
>> %ext = zext i1 %A to i64
>> %sub = sub i64 0, %ext
>> @@ -105,10 +105,10 @@ define i64 @zext_negate(i1 %A) {
>>
>> define i64 @zext_negate_extra_use(i1 %A) {
>> ; CHECK-LABEL: @zext_negate_extra_use(
>> -; CHECK-NEXT: [[EXT:%.*]] = zext i1 [[A:%.*]] to i64
>> -; CHECK-NEXT: [[SUB:%.*]] = sext i1 [[A]] to i64
>> +; CHECK-NEXT: [[EXT_NEG:%.*]] = sext i1 [[A:%.*]] to i64
>> +; CHECK-NEXT: [[EXT:%.*]] = zext i1 [[A]] to i64
>> ; CHECK-NEXT: call void @use(i64 [[EXT]])
>> -; CHECK-NEXT: ret i64 [[SUB]]
>> +; CHECK-NEXT: ret i64 [[EXT_NEG]]
>> ;
>> %ext = zext i1 %A to i64
>> %sub = sub i64 0, %ext
>> @@ -118,8 +118,8 @@ define i64 @zext_negate_extra_use(i1 %A) {
>>
>> define <2 x i64> @zext_negate_vec(<2 x i1> %A) {
>> ; CHECK-LABEL: @zext_negate_vec(
>> -; CHECK-NEXT: [[SUB:%.*]] = sext <2 x i1> [[A:%.*]] to <2 x i64>
>> -; CHECK-NEXT: ret <2 x i64> [[SUB]]
>> +; CHECK-NEXT: [[EXT_NEG:%.*]] = sext <2 x i1> [[A:%.*]] to <2 x i64>
>> +; CHECK-NEXT: ret <2 x i64> [[EXT_NEG]]
>> ;
>> %ext = zext <2 x i1> %A to <2 x i64>
>> %sub = sub <2 x i64> zeroinitializer, %ext
>> @@ -128,8 +128,8 @@ define <2 x i64> @zext_negate_vec(<2 x i1> %A) {
>>
>> define <2 x i64> @zext_negate_vec_undef_elt(<2 x i1> %A) {
>> ; CHECK-LABEL: @zext_negate_vec_undef_elt(
>> -; CHECK-NEXT: [[SUB:%.*]] = sext <2 x i1> [[A:%.*]] to <2 x i64>
>> -; CHECK-NEXT: ret <2 x i64> [[SUB]]
>> +; CHECK-NEXT: [[EXT_NEG:%.*]] = sext <2 x i1> [[A:%.*]] to <2 x i64>
>> +; CHECK-NEXT: ret <2 x i64> [[EXT_NEG]]
>> ;
>> %ext = zext <2 x i1> %A to <2 x i64>
>> %sub = sub <2 x i64> <i64 0, i64 undef>, %ext
>> @@ -181,8 +181,8 @@ define <2 x i64> @zext_sub_const_vec_undef_elt(<2 x i1> %A) {
>>
>> define i64 @sext_negate(i1 %A) {
>> ; CHECK-LABEL: @sext_negate(
>> -; CHECK-NEXT: [[SUB:%.*]] = zext i1 [[A:%.*]] to i64
>> -; CHECK-NEXT: ret i64 [[SUB]]
>> +; CHECK-NEXT: [[EXT_NEG:%.*]] = zext i1 [[A:%.*]] to i64
>> +; CHECK-NEXT: ret i64 [[EXT_NEG]]
>> ;
>> %ext = sext i1 %A to i64
>> %sub = sub i64 0, %ext
>> @@ -191,10 +191,10 @@ define i64 @sext_negate(i1 %A) {
>>
>> define i64 @sext_negate_extra_use(i1 %A) {
>> ; CHECK-LABEL: @sext_negate_extra_use(
>> -; CHECK-NEXT: [[EXT:%.*]] = sext i1 [[A:%.*]] to i64
>> -; CHECK-NEXT: [[SUB:%.*]] = zext i1 [[A]] to i64
>> +; CHECK-NEXT: [[EXT_NEG:%.*]] = zext i1 [[A:%.*]] to i64
>> +; CHECK-NEXT: [[EXT:%.*]] = sext i1 [[A]] to i64
>> ; CHECK-NEXT: call void @use(i64 [[EXT]])
>> -; CHECK-NEXT: ret i64 [[SUB]]
>> +; CHECK-NEXT: ret i64 [[EXT_NEG]]
>> ;
>> %ext = sext i1 %A to i64
>> %sub = sub i64 0, %ext
>> @@ -204,8 +204,8 @@ define i64 @sext_negate_extra_use(i1 %A) {
>>
>> define <2 x i64> @sext_negate_vec(<2 x i1> %A) {
>> ; CHECK-LABEL: @sext_negate_vec(
>> -; CHECK-NEXT: [[SUB:%.*]] = zext <2 x i1> [[A:%.*]] to <2 x i64>
>> -; CHECK-NEXT: ret <2 x i64> [[SUB]]
>> +; CHECK-NEXT: [[EXT_NEG:%.*]] = zext <2 x i1> [[A:%.*]] to <2 x i64>
>> +; CHECK-NEXT: ret <2 x i64> [[EXT_NEG]]
>> ;
>> %ext = sext <2 x i1> %A to <2 x i64>
>> %sub = sub <2 x i64> zeroinitializer, %ext
>> @@ -214,8 +214,8 @@ define <2 x i64> @sext_negate_vec(<2 x i1> %A) {
>>
>> define <2 x i64> @sext_negate_vec_undef_elt(<2 x i1> %A) {
>> ; CHECK-LABEL: @sext_negate_vec_undef_elt(
>> -; CHECK-NEXT: [[SUB:%.*]] = zext <2 x i1> [[A:%.*]] to <2 x i64>
>> -; CHECK-NEXT: ret <2 x i64> [[SUB]]
>> +; CHECK-NEXT: [[EXT_NEG:%.*]] = zext <2 x i1> [[A:%.*]] to <2 x i64>
>> +; CHECK-NEXT: ret <2 x i64> [[EXT_NEG]]
>> ;
>> %ext = sext <2 x i1> %A to <2 x i64>
>> %sub = sub <2 x i64> <i64 0, i64 undef>, %ext
>> @@ -267,8 +267,8 @@ define <2 x i64> @sext_sub_const_vec_undef_elt(<2 x i1> %A) {
>>
>> define i8 @sext_sub(i8 %x, i1 %y) {
>> ; CHECK-LABEL: @sext_sub(
>> -; CHECK-NEXT: [[TMP1:%.*]] = zext i1 [[Y:%.*]] to i8
>> -; CHECK-NEXT: [[SUB:%.*]] = add i8 [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[SEXT_NEG:%.*]] = zext i1 [[Y:%.*]] to i8
>> +; CHECK-NEXT: [[SUB:%.*]] = add i8 [[SEXT_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[SUB]]
>> ;
>> %sext = sext i1 %y to i8
>> @@ -280,8 +280,8 @@ define i8 @sext_sub(i8 %x, i1 %y) {
>>
>> define <2 x i8> @sext_sub_vec(<2 x i8> %x, <2 x i1> %y) {
>> ; CHECK-LABEL: @sext_sub_vec(
>> -; CHECK-NEXT: [[TMP1:%.*]] = zext <2 x i1> [[Y:%.*]] to <2 x i8>
>> -; CHECK-NEXT: [[SUB:%.*]] = add <2 x i8> [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[SEXT_NEG:%.*]] = zext <2 x i1> [[Y:%.*]] to <2 x i8>
>> +; CHECK-NEXT: [[SUB:%.*]] = add <2 x i8> [[SEXT_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret <2 x i8> [[SUB]]
>> ;
>> %sext = sext <2 x i1> %y to <2 x i8>
>> @@ -293,8 +293,8 @@ define <2 x i8> @sext_sub_vec(<2 x i8> %x, <2 x i1> %y) {
>>
>> define <2 x i8> @sext_sub_vec_nsw(<2 x i8> %x, <2 x i1> %y) {
>> ; CHECK-LABEL: @sext_sub_vec_nsw(
>> -; CHECK-NEXT: [[TMP1:%.*]] = zext <2 x i1> [[Y:%.*]] to <2 x i8>
>> -; CHECK-NEXT: [[SUB:%.*]] = add nsw <2 x i8> [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[SEXT_NEG:%.*]] = zext <2 x i1> [[Y:%.*]] to <2 x i8>
>> +; CHECK-NEXT: [[SUB:%.*]] = add <2 x i8> [[SEXT_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret <2 x i8> [[SUB]]
>> ;
>> %sext = sext <2 x i1> %y to <2 x i8>
>> @@ -306,8 +306,8 @@ define <2 x i8> @sext_sub_vec_nsw(<2 x i8> %x, <2 x i1> %y) {
>>
>> define i8 @sext_sub_nuw(i8 %x, i1 %y) {
>> ; CHECK-LABEL: @sext_sub_nuw(
>> -; CHECK-NEXT: [[TMP1:%.*]] = zext i1 [[Y:%.*]] to i8
>> -; CHECK-NEXT: [[SUB:%.*]] = add i8 [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[SEXT_NEG:%.*]] = zext i1 [[Y:%.*]] to i8
>> +; CHECK-NEXT: [[SUB:%.*]] = add i8 [[SEXT_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret i8 [[SUB]]
>> ;
>> %sext = sext i1 %y to i8
>> @@ -393,8 +393,8 @@ define i32 @zextbool_sub_uses(i1 %c, i32 %x) {
>>
>> define <4 x i32> @zextbool_sub_vector(<4 x i1> %c, <4 x i32> %x) {
>> ; CHECK-LABEL: @zextbool_sub_vector(
>> -; CHECK-NEXT: [[TMP1:%.*]] = sext <4 x i1> [[C:%.*]] to <4 x i32>
>> -; CHECK-NEXT: [[S:%.*]] = add <4 x i32> [[TMP1]], [[X:%.*]]
>> +; CHECK-NEXT: [[B_NEG:%.*]] = sext <4 x i1> [[C:%.*]] to <4 x i32>
>> +; CHECK-NEXT: [[S:%.*]] = add <4 x i32> [[B_NEG]], [[X:%.*]]
>> ; CHECK-NEXT: ret <4 x i32> [[S]]
>> ;
>> %b = zext <4 x i1> %c to <4 x i32>
>>
>>
>>
>> _______________________________________________
>> llvm-commits mailing list
>> llvm-commits at lists.llvm.org
>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits
>>
>
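[Editor's note] The updated CHECK lines in the quoted diff exercise two families of identities that the Negator relies on: a negation sinks into a multiply by a constant, -(y * C) == y * (-C) (see @test27commutedvecmixed), and for a 1-bit value b, -zext(b) == sext(b) and -sext(b) == zext(b) (see @zext_negate / @sext_negate). A minimal Python sketch, not part of the patch, that checks these identities under two's-complement arithmetic:

```python
BITS = 32  # assumed width for the two's-complement model

def to_signed(x: int) -> int:
    """Reinterpret an integer as a signed BITS-bit two's-complement value."""
    x &= (1 << BITS) - 1
    return x - (1 << BITS) if x >= (1 << (BITS - 1)) else x

def neg(x: int) -> int:
    """Two's-complement negation modulo 2**BITS."""
    return to_signed(-x)

def zext(b: bool) -> int:
    """zext i1 -> iN: maps {0, 1} to {0, 1}."""
    return 1 if b else 0

def sext(b: bool) -> int:
    """sext i1 -> iN: maps {0, 1} to {0, -1}."""
    return -1 if b else 0

# Identity 1: sinking a negation into `mul %y, C` just flips the constant.
for y in (0, 1, 7, -8, 12345, -99999):
    for c in (8, -8, 3):
        assert neg(to_signed(y * c)) == to_signed(y * -c)

# Identity 2: `sub 0, (zext i1 %A)` folds to `sext i1 %A`, and vice versa.
for b in (False, True):
    assert neg(zext(b)) == sext(b)
    assert neg(sext(b)) == zext(b)

print("ok")
```

If either identity failed for some input, the corresponding FileCheck lines above (e.g. the `[[EXT_NEG]]` rewrites) would be checking a miscompile rather than a canonicalization.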