[llvm] r341726 - [SimplifyIndVar] Avoid generating truncate instructions with non-hoisted Load operand.

Abderrazek Zaafrani via llvm-commits llvm-commits at lists.llvm.org
Mon Sep 10 17:24:41 PDT 2018


Philip,

I appreciate your feedback and the time you have taken to help with this.
I think bringing up several issues that could have helped, such as LICM and alias analysis (but that would not have solved the problem in its entirety), added confusion. So let's backtrack a little and start from what is currently happening in the code:

I can summarize it as follows: the current WidenIV code does its best not to insert truncate instructions, and let's agree and assume that this is a good goal. I am helping toward that goal in the following way: an operation that has a load instruction as one of its operands currently gets a truncate instruction inserted whenever the load is not hoisted outside the loop. With my patch, it no longer matters where the load is. In other words, I am helping toward the goal of avoiding truncates. I know that this is still not what you are looking for in terms of showing code generation, but I will send another email about that in a day or two.
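To make this concrete, here is a minimal sketch of the pattern in question (hypothetical IR, not taken from the patch's tests; all names are illustrative):

    loop:                                 ; %iv is the narrow IV being widened to i64
      %iv = phi i32 [ 1, %entry ], [ %iv.next, %loop ]
      %iv.next = add nuw nsw i32 %iv, 1
      %val = load i32, i32* %p, align 4   ; the load stays inside the loop
      %mul = mul nsw i32 %val, %iv.next   ; narrow use with a load operand
      %idx = sext i32 %mul to i64         ; extend feeding an addressing computation
      %gep = getelementptr inbounds i32, i32* %q, i64 %idx
      ...

Before the patch, widening %iv leaves %mul narrow, so a trunc from i64 back to i32 is emitted to feed it and the sext stays. With the patch, %mul itself is widened (the load result is extended instead) and the sext is eliminated, just as already happens when the load is hoisted to the preheader.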

I can add the following context to my summary above just in case you are interested:

In IndVarSimplify, there is a function called WidenIV::createWideIV that widens the main induction variable, then traverses forward along the def-use chain of that induction variable and calls WidenIV::widenIVUse to widen the users. The latter function performs some optimizations, such as eliminating sign/zero-extend instructions, and it avoids inserting truncate instructions. In the analysis that decides whether or not to insert a truncate instruction, it requires the use to be a scalar evolution expression, but scalar evolution cannot handle loop-variant expressions; see WidenIV::getExtendedOperandRecurrence for that. I am adding code in that area. More about examples/code generation later.

Abderrazek

From: Philip Reames [mailto:listmail at philipreames.com]
Sent: Monday, September 10, 2018 4:41 PM
To: Abderrazek Zaafrani; llvm-commits at lists.llvm.org; Sebastian Pop
Subject: Re: [llvm] r341726 - [SimplifyIndVar] Avoid generating truncate instructions with non-hoisted Load operand.


I have serious concerns with this patch.  I don't believe this patch meets our usual standards.  I believe this patch warrants being reverted, and then re-reviewed from scratch.

@sebpop, I don't think it was appropriate to approve this patch given the public history.  I wouldn't normally call that out, but this seems like a really clear cut case.

@Abderrazek, please don't let me scare you off here.  While I think this patch does need to be reverted, I think it can be adjusted into something clearly worthwhile, and I encourage you to do so.  I'm happy to help as well.  (Fair warning: my time is very limited, so my help will only go so far.)

Philip
On 09/07/2018 03:41 PM, Abderrazek Zaafrani via llvm-commits wrote:

Author: az
Date: Fri Sep  7 15:41:57 2018
New Revision: 341726

URL: http://llvm.org/viewvc/llvm-project?rev=341726&view=rev
Log:
[SimplifyIndVar] Avoid generating truncate instructions with non-hoisted Load operand.
(For the future, you had a nice comment in the review about the reasoning.  The submit comment should include that too.)

Your reasoning here is explained purely in terms of what we do for loop-invariant loads.  You don't state what the patch actually does, or provide a clear description of what the resulting code generation should be.  I've read over the review and the code, and can reverse engineer what you're going for, but it is your responsibility to make this clear from the submitted code.  This patch does not achieve that objective.

Reverse engineering the code, here's what I *think* you're trying to accomplish:
If we find a binary operator where we need to extend one side, check to see if we can cheaply extend the other operand and widen the load instead.  Many architectures can efficiently generate an extending load, so we consider loads to be one such cheaply extensible case.
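As a hedged illustration of that reading (not code from the patch; the names are invented), the rewrite trades a trunc inside the loop for an extend that the backend can usually fold into the load itself, e.g. as an extending load like AArch64's ldrsw:

    ; before: the wide IV must be truncated to feed the narrow mul
      %v = load i32, i32* %p, align 4
      %iv.nar = trunc i64 %iv.wide to i32
      %m = mul nsw i32 %v, %iv.nar
      %m.wide = sext i32 %m to i64

    ; after: extend the load result once and do the arithmetic wide
      %v = load i32, i32* %p, align 4
      %v.wide = sext i32 %v to i64        ; can fold into an extending load
      %m.wide = mul nsw i64 %v.wide, %iv.wide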

(I can't connect this understanding of the code to your comments in the review though, so maybe I'm missing something still?)

In terms of the review discussion, I am concerned that you appear to be mixing TBAA and AliasSet terminology.  You appear to have rejected an approach based on a possibly flawed understanding.  Your reviewers should have asked for a reference w.r.t. the rejected approach (e.g. email discussion, review, or rejected patch).

Differential Revision: https://reviews.llvm.org/D49151

Modified:
    llvm/trunk/lib/Transforms/Scalar/IndVarSimplify.cpp
    llvm/trunk/test/Transforms/IndVarSimplify/iv-widen-elim-ext.ll

Modified: llvm/trunk/lib/Transforms/Scalar/IndVarSimplify.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Scalar/IndVarSimplify.cpp?rev=341726&r1=341725&r2=341726&view=diff
==============================================================================
--- llvm/trunk/lib/Transforms/Scalar/IndVarSimplify.cpp (original)
+++ llvm/trunk/lib/Transforms/Scalar/IndVarSimplify.cpp Fri Sep  7 15:41:57 2018
@@ -1017,6 +1017,8 @@ protected:
   Instruction *widenIVUse(NarrowIVDefUse DU, SCEVExpander &Rewriter);

   bool widenLoopCompare(NarrowIVDefUse DU);
+  bool widenWithVariantLoadUse(NarrowIVDefUse DU);
+  void widenWithVariantLoadUseCodegen(NarrowIVDefUse DU);

   void pushNarrowIVUsers(Instruction *NarrowDef, Instruction *WideDef);
 };

@@ -1361,6 +1363,146 @@ bool WidenIV::widenLoopCompare(NarrowIVD
   return true;
 }
+/// If the narrow use is an instruction whose two operands are the defining
+/// instruction of DU and a load instruction, then we have the following:
+/// if the load is hoisted outside the loop, then we do not reach this function
+/// as scalar evolution analysis works fine in widenIVUse with variables
+/// hoisted outside the loop and efficient code is subsequently generated by
+/// not emitting truncate instructions. But when the load is not hoisted
+/// (whether due to limitation in alias analysis or due to a true legality),
+/// then scalar evolution can not proceed with loop variant values and
+/// inefficient code is generated. This function handles the non-hoisted load
+/// special case by making the optimization generate the same type of code for
+/// hoisted and non-hoisted load (widen use and eliminate sign extend
+/// instruction). This special case is important especially when the induction
+/// variables are affecting addressing mode in code generation.
This comment really tells me nothing about what this function does, or what its goal is.

+bool WidenIV::widenWithVariantLoadUse(NarrowIVDefUse DU) {
+  Instruction *NarrowUse = DU.NarrowUse;
+  Instruction *NarrowDef = DU.NarrowDef;
+  Instruction *WideDef = DU.WideDef;
+
+  // Handle the common case of add<nsw/nuw>
+  const unsigned OpCode = NarrowUse->getOpcode();
+  // Only Add/Sub/Mul instructions are supported.
Why?

+  if (OpCode != Instruction::Add && OpCode != Instruction::Sub &&
+      OpCode != Instruction::Mul)
+    return false;
+
+  // The operand that is not defined by NarrowDef of DU. Let's call it the
+  // other operand.
+  unsigned ExtendOperIdx = DU.NarrowUse->getOperand(0) == NarrowDef ? 1 : 0;
+  assert(DU.NarrowUse->getOperand(1 - ExtendOperIdx) == DU.NarrowDef &&
+         "bad DU");
+
+  const SCEV *ExtendOperExpr = nullptr;
+  const OverflowingBinaryOperator *OBO =
+    cast<OverflowingBinaryOperator>(NarrowUse);
+  ExtendKind ExtKind = getExtendKind(NarrowDef);
+  if (ExtKind == SignExtended && OBO->hasNoSignedWrap())
+    ExtendOperExpr = SE->getSignExtendExpr(
+      SE->getSCEV(NarrowUse->getOperand(ExtendOperIdx)), WideType);
+  else if (ExtKind == ZeroExtended && OBO->hasNoUnsignedWrap())
+    ExtendOperExpr = SE->getZeroExtendExpr(
+      SE->getSCEV(NarrowUse->getOperand(ExtendOperIdx)), WideType);
+  else
+    return false;
+
+  // We are interested in the other operand being a load instruction.
+  // But, we should look into relaxing this restriction later on.
+  auto *I = dyn_cast<Instruction>(NarrowUse->getOperand(ExtendOperIdx));
+  if (I && I->getOpcode() != Instruction::Load)
+    return false;
+
+  // Verifying that Defining operand is an AddRec
+  const SCEV *Op1 = SE->getSCEV(WideDef);
+  const SCEVAddRecExpr *AddRecOp1 = dyn_cast<SCEVAddRecExpr>(Op1);
+  if (!AddRecOp1 || AddRecOp1->getLoop() != L)
+    return false;
+  // Verifying that other operand is an Extend.
+  if (ExtKind == SignExtended) {
+    if (!isa<SCEVSignExtendExpr>(ExtendOperExpr))
+      return false;
+  } else {
+    if (!isa<SCEVZeroExtendExpr>(ExtendOperExpr))
+      return false;
+  }
+
+  if (ExtKind == SignExtended) {
+    for (Use &U : NarrowUse->uses()) {
+      SExtInst *User = dyn_cast<SExtInst>(U.getUser());
+      if (!User || User->getType() != WideType)
+        return false;
+    }
+  } else { // ExtKind == ZeroExtended
+    for (Use &U : NarrowUse->uses()) {
+      ZExtInst *User = dyn_cast<ZExtInst>(U.getUser());
+      if (!User || User->getType() != WideType)
+        return false;
+    }
+  }
This and the previous block of restrictions seem awfully narrow, with no clear reasoning as to why they need to be.

+
+  return true;
+}
+
+/// Special Case for widening with variant Loads (see
+/// WidenIV::widenWithVariantLoadUse). This is the code generation part.
+void WidenIV::widenWithVariantLoadUseCodegen(NarrowIVDefUse DU) {
+  Instruction *NarrowUse = DU.NarrowUse;
+  Instruction *NarrowDef = DU.NarrowDef;
+  Instruction *WideDef = DU.WideDef;
+
+  ExtendKind ExtKind = getExtendKind(NarrowDef);
+
+  LLVM_DEBUG(dbgs() << "Cloning arithmetic IVUser: " << *NarrowUse << "\n");
+
+  // Generating a widening use instruction.
+  Value *LHS = (NarrowUse->getOperand(0) == NarrowDef)
+                   ? WideDef
+                   : createExtendInst(NarrowUse->getOperand(0), WideType,
+                                      ExtKind, NarrowUse);
+  Value *RHS = (NarrowUse->getOperand(1) == NarrowDef)
+                   ? WideDef
+                   : createExtendInst(NarrowUse->getOperand(1), WideType,
+                                      ExtKind, NarrowUse);
+
+  auto *NarrowBO = cast<BinaryOperator>(NarrowUse);
+  auto *WideBO = BinaryOperator::Create(NarrowBO->getOpcode(), LHS, RHS,
+                                        NarrowBO->getName());
+  IRBuilder<> Builder(NarrowUse);
+  Builder.Insert(WideBO);
+  WideBO->copyIRFlags(NarrowBO);
+
+  if (ExtKind == SignExtended)
+    ExtendKindMap[NarrowUse] = SignExtended;
+  else
+    ExtendKindMap[NarrowUse] = ZeroExtended;
+
+  // Update the Use.
+  if (ExtKind == SignExtended) {
+    for (Use &U : NarrowUse->uses()) {
+      SExtInst *User = dyn_cast<SExtInst>(U.getUser());
+      if (User && User->getType() == WideType) {
+        LLVM_DEBUG(dbgs() << "INDVARS: eliminating " << *User << " replaced by "
+                          << *WideBO << "\n");
+        ++NumElimExt;
+        User->replaceAllUsesWith(WideBO);
+        DeadInsts.emplace_back(User);
+      }
+    }
+  } else { // ExtKind == ZeroExtended
+    for (Use &U : NarrowUse->uses()) {
+      ZExtInst *User = dyn_cast<ZExtInst>(U.getUser());
+      if (User && User->getType() == WideType) {
+        LLVM_DEBUG(dbgs() << "INDVARS: eliminating " << *User << " replaced by "
+                          << *WideBO << "\n");
+        ++NumElimExt;
+        User->replaceAllUsesWith(WideBO);
+        DeadInsts.emplace_back(User);
+      }
+    }
+  }
+}

+
 /// Determine whether an individual user of the narrow IV can be widened. If so,
 /// return the wide clone of the user.
 Instruction *WidenIV::widenIVUse(NarrowIVDefUse DU, SCEVExpander &Rewriter) {

@@ -1458,6 +1600,16 @@ Instruction *WidenIV::widenIVUse(NarrowI
     if (widenLoopCompare(DU))
       return nullptr;

+    // We are here about to generate a truncate instruction that may hurt
+    // performance because the scalar evolution expression computed earlier
+    // in WideAddRec.first does not indicate a polynomial induction expression.
+    // In that case, look at the operands of the use instruction to determine
+    // if we can still widen the use instead of truncating its operand.
+    if (widenWithVariantLoadUse(DU)) {
+      widenWithVariantLoadUseCodegen(DU);
+      return nullptr;
+    }
+
     // This user does not evaluate to a recurrence after widening, so don't
     // follow it. Instead insert a Trunc to kill off the original use,
     // eventually isolating the original narrow IV so it can be removed.

Modified: llvm/trunk/test/Transforms/IndVarSimplify/iv-widen-elim-ext.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/IndVarSimplify/iv-widen-elim-ext.ll?rev=341726&r1=341725&r2=341726&view=diff
==============================================================================
--- llvm/trunk/test/Transforms/IndVarSimplify/iv-widen-elim-ext.ll (original)
+++ llvm/trunk/test/Transforms/IndVarSimplify/iv-widen-elim-ext.ll Fri Sep  7 15:41:57 2018
@@ -273,3 +273,87 @@ for.end:
   %call = call i32 @dummy(i32* getelementptr inbounds ([100 x i32], [100 x i32]* @a, i32 0, i32 0), i32* getelementptr inbounds ([100 x i32], [100 x i32]* @b, i32 0, i32 0))
   ret i32 0
 }
The testing included here is insufficient.  Specifically:
1) The tests do not appear to be maximally reduced.
2) The tests do not clearly show what the expected output is (i.e. a positive check).
3) There are only minimal negative tests (i.e. tests exercising cases which shouldn't trigger).  Clearly missing cases include: non-load RHS, other binary op types, missing nsw/nuw, etc.  (A sketch of one such missing negative test follows below.)
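For instance, a reduced negative test for the missing-nsw case might look roughly like the following. This is a hypothetical sketch, not a test from the patch; the function name and structure are illustrative. Since the mul carries no nsw flag, the transform must not fire and the trunc should remain:

    define i32 @no_nsw(i32* %p, i32* %in, i32 %n) {
    entry:
      br label %for.body

    for.body:
      %iv = phi i32 [ 1, %entry ], [ %add, %for.body ]
      %add = add nuw nsw i32 %iv, 1
      %v = load i32, i32* %p, align 4
      %mul = mul i32 %v, %add                 ; no nsw: widening would be unsound
      %idx = sext i32 %mul to i64
      %gep = getelementptr inbounds i32, i32* %in, i64 %idx
      %x = load i32, i32* %gep, align 4
      %cmp = icmp slt i32 %add, %n
      br i1 %cmp, label %for.body, label %exit

    exit:
      %res = phi i32 [ %x, %for.body ]
      ret i32 %res
    }
    ; CHECK-LABEL: @no_nsw(
    ; CHECK: trunc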
+
+%struct.image = type {i32, i32}
+define i32 @foo4(%struct.image* %input, i32 %length, i32* %in) {
+entry:
+  %stride = getelementptr inbounds %struct.image, %struct.image* %input, i64 0, i32 1
+  %0 = load i32, i32* %stride, align 4
+  %cmp17 = icmp sgt i32 %length, 1
+  br i1 %cmp17, label %for.body.lr.ph, label %for.cond.cleanup
+
+for.body.lr.ph:                                   ; preds = %entry
+  %channel = getelementptr inbounds %struct.image, %struct.image* %input, i64 0, i32 0
+  br label %for.body
+
+for.cond.cleanup.loopexit:                        ; preds = %for.body
+  %1 = phi i32 [ %6, %for.body ]
+  br label %for.cond.cleanup
+
+for.cond.cleanup:                                 ; preds = %for.cond.cleanup.loopexit, %entry
+  %2 = phi i32 [ 0, %entry ], [ %1, %for.cond.cleanup.loopexit ]
+  ret i32 %2
+
+; The mul instructions below are widened instead of generating truncate instructions for them,
+; regardless of whether the load operand of the mul is inside or outside the loop (we have both cases).
+; CHECK: for.body:
+; CHECK-NOT: trunc
for.body:                                         ; preds = %for.body.lr.ph, %for.body
+  %x.018 = phi i32 [ 1, %for.body.lr.ph ], [ %add, %for.body ]
+  %add = add nuw nsw i32 %x.018, 1
+  %3 = load i32, i32* %channel, align 8
+  %mul = mul nsw i32 %3, %add
+  %idx.ext = sext i32 %mul to i64
+  %add.ptr = getelementptr inbounds i32, i32* %in, i64 %idx.ext
+  %4 = load i32, i32* %add.ptr, align 4
+  %mul1 = mul nsw i32 %0, %add
+  %idx.ext1 = sext i32 %mul1 to i64
+  %add.ptr1 = getelementptr inbounds i32, i32* %in, i64 %idx.ext1
+  %5 = load i32, i32* %add.ptr1, align 4
+  %6 = add i32 %4, %5
+  %cmp = icmp slt i32 %add, %length
+  br i1 %cmp, label %for.body, label %for.cond.cleanup.loopexit
+}

+
+define i32 @foo5(%struct.image* %input, i32 %length, i32* %in) {
+entry:
+  %stride = getelementptr inbounds %struct.image, %struct.image* %input, i64 0, i32 1
+  %0 = load i32, i32* %stride, align 4
+  %cmp17 = icmp sgt i32 %length, 1
+  br i1 %cmp17, label %for.body.lr.ph, label %for.cond.cleanup
+
+for.body.lr.ph:                                   ; preds = %entry
+  %channel = getelementptr inbounds %struct.image, %struct.image* %input, i64 0, i32 0
+  br label %for.body
+
+for.cond.cleanup.loopexit:                        ; preds = %for.body
+  %1 = phi i32 [ %7, %for.body ]
+  br label %for.cond.cleanup
+
+for.cond.cleanup:                                 ; preds = %for.cond.cleanup.loopexit, %entry
+  %2 = phi i32 [ 0, %entry ], [ %1, %for.cond.cleanup.loopexit ]
+  ret i32 %2
+
+; This example is the same as above, except that the first mul is used in two places,
+; which may result in having two versions of the multiply: an i32 and an i64 version.
+; In this case, keep the truncate instructions to avoid this redundancy.
+; CHECK: for.body:
+; CHECK: trunc
for.body:                                         ; preds = %for.body.lr.ph, %for.body
+  %x.018 = phi i32 [ 1, %for.body.lr.ph ], [ %add, %for.body ]
+  %add = add nuw nsw i32 %x.018, 1
+  %3 = load i32, i32* %channel, align 8
+  %mul = mul nsw i32 %3, %add
+  %idx.ext = sext i32 %mul to i64
+  %add.ptr = getelementptr inbounds i32, i32* %in, i64 %idx.ext
+  %4 = load i32, i32* %add.ptr, align 4
+  %mul1 = mul nsw i32 %0, %add
+  %idx.ext1 = sext i32 %mul1 to i64
+  %add.ptr1 = getelementptr inbounds i32, i32* %in, i64 %idx.ext1
+  %5 = load i32, i32* %add.ptr1, align 4
+  %6 = add i32 %4, %5
+  %7 = add i32 %6, %mul
+  %cmp = icmp slt i32 %add, %length
+  br i1 %cmp, label %for.body, label %for.cond.cleanup.loopexit
+}

