[llvm] r341726 - [SimplifyIndVar] Avoid generating truncate instructions with non-hoisted Laod operand.

Abderrazek Zaafrani via llvm-commits llvm-commits at lists.llvm.org
Fri Sep 14 12:03:01 PDT 2018


Philip,

Regarding the naming convention: dump.txt contains the LLVM IR dumped before and after the induction variable simplification pass with my patch disabled. dump.txt.patch contains the same two dumps with my patch enabled (i.e. current llvm trunk).

For now, I address the main concern, which is *Why* is shifting the extend to the load operand profitable? You also ask below why we should believe that the widened-bin-op + *ext form is profitable over the trunc form *in general*. My patch adds an extend to the load and removes a truncate (or, more precisely, avoids adding a truncate), but it does so only when it also removes an extend instruction (see widenWithVariantLoadUseCodegen near the LLVM_DEBUG statement). So my code adds an extend instruction on the load, but it also removes an extend instruction (the user of the operation that has the load as one of its operands) and a truncate instruction. The net result is one fewer instruction, namely a truncate. You can see that dump.txt.patch has 3 fewer truncate instructions than dump.txt, while the number of extend instructions is the same. What we observe in those dump files is what also happens in general.
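To make the shape of the transformation concrete, here is a minimal, hand-written IR sketch of the pattern (the names %iv.wide, %ld, %stride and the choice of mul are illustrative and are not taken from the attached dumps). Without the patch, the use is kept narrow, so a trunc of the widened IV feeds it and its result is sign-extended again; with the patch, the extend moves onto the load operand, the mul is done in i64, and both the trunc and the following sext disappear:

  ; without the patch: trunc + narrow mul + sext of the result
  %iv.trunc = trunc i64 %iv.wide to i32
  %ld       = load i32, i32* %stride, align 4
  %mul      = mul nsw i32 %ld, %iv.trunc
  %idx      = sext i32 %mul to i64
  %ptr      = getelementptr inbounds i32, i32* %in, i64 %idx

  ; with the patch: the extend moves onto the load, the mul is widened
  %ld       = load i32, i32* %stride, align 4
  %ld.ext   = sext i32 %ld to i64
  %mul.wide = mul nsw i64 %ld.ext, %iv.wide
  %ptr      = getelementptr inbounds i32, i32* %in, i64 %mul.wide

One extend is added (on the load) and one extend plus one truncate are removed, so the net effect is one fewer instruction, as described above.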
Note that I used to have the following comment in the code, but I lost it by mistake while updating the patch after one of the reviews. It would have helped with understanding the code.
+  // Profitability: Check if widening the use eliminates sign/zero extend
+  // instructions.

The next question you may have (I had it myself before doing this) is: we avoided adding a truncate instruction, but we changed where the extend instructions occur (added one and removed one). Can the latter hurt performance even with fewer truncate instructions? I do not think so, but I cannot prove it. This optimization is old and I am just extending it to cover more cases.

This discussion obviously shows that I should have explained the changes better in the code comment. We may also need to extend this beyond loads.

I will stop here before discussing the other points mentioned below, because we first need to agree on the core question, which is how this is profitable. I hope I have made things a little clearer.

Abderrazek

From: Philip Reames [mailto:listmail at philipreames.com]
Sent: Thursday, September 13, 2018 3:33 PM
To: Abderrazek Zaafrani; llvm-commits at lists.llvm.org; Sebastian Pop
Subject: Re: [llvm] r341726 - [SimplifyIndVar] Avoid generating truncate instructions with non-hoisted Laod operand.

Abderrazek,

Thank you for the fairly lengthy follow-up.

I've replied to a few specific points inline, but the macro concern I have is the same one I expressed in my previous email.  *Why* is shifting the extend to the load operand profitable?  Why do we believe this is a generally profitable transformation?  Your answer to date has been specific to one particular example.  That does not address the core concern, which is about generality.

Fair warning: I will revert this patch in a day or so unless we've come to a satisfactory conclusion.  That doesn't prevent us from continuing the conversation and then reintroducing it later.  It's just to avoid having trunk potentially broken (performance-wise) in the meantime.

Philip
On 09/12/2018 03:56 PM, Abderrazek Zaafrani wrote:
Philip,

This is a continuation of my previous reply and the main intent is to show some generated code.

The context of this work is to bring llvm on par with gcc. A well-known benchmark used for phones shows a gap between gcc and llvm. I cannot share the proprietary code, but I tried to reconstruct my own test (see test.cpp attached). The other two attached files are the LLVM IR dumps without the patch and with the patch. I am only including the dumps before and after the Induction Variable Simplification pass, as it is the only pass I am modifying.
Am I correct in interpreting dump.txt to be the before IR and dump.txt.patch to be the after IR?  I'd probably have named these before-indvars.ll and after-indvars.ll for clarity.

If you compare these two files, you can see that there are fewer truncate instructions with my patch. The current induction variable simplification pass, among other things, tries to avoid creating truncate instructions while widening the induction variables. My patch does not bring any new idea or optimization; it just tries harder to avoid creating truncate instructions.
I disagree here.  You may be extending an existing approach, but the specifics of how to "avoid a truncate" in this case are new and need to be justified.  Why should we believe that the widened-bin-op + *ext form is profitable over the trunc form *in general*?  (Not just for this one example.)


Let’s examine the effect that clean code, without unnecessary truncate instructions, has on the final generated code.

Here is the ARM assembly for test.cpp without the patch:
Unfortunately, I'm not fluent in ARM assembly, so this doesn't help me much.  Comments on key pieces and changes would have gone a long way.  Alternatively, I am much more familiar with X86 if you'd prefer.


    .cfi_startproc
// %bb.0:                               // %entry
    ldr   w9, [x0]
    add   x11, x1, w3, sxtw #2
    mov   w8, wzr
    neg   w10, w9
    lsl   w12, w9, #1
    orr   w13, wzr, #0x4
.LBB0_1:                                // %for.body
                                        // =>This Inner Loop Header: Depth=1
    add   w14, w12, w8
    ldrb  w14, [x2, w14, sxtw]
    ldrb  w15, [x2, w8, sxtw]
    add   w16, w10, w8
    ldrb  w16, [x2, w16, sxtw]
    add   w8, w8, w9
    sub   w14, w14, w15
    add   w14, w14, w16
    str   w14, [x11, x13]
    add   x13, x13, #4            // =4
    cmp   x13, #400               // =400
    b.ne  .LBB0_1
// %bb.2:

And with the patch:

    .cfi_startproc
// %bb.0:                               // %entry
    ldrsw x8, [x0]
    add   x10, x1, w3, sxtw #2
    neg   x9, x8
    lsl   x11, x8, #1
    orr   w12, wzr, #0x4
.LBB0_1:                                // %for.body
                                        // =>This Inner Loop Header: Depth=1
    ldrb  w13, [x2, x11]
    ldrb  w14, [x2]
    ldrb  w15, [x2, x9]
    add   x2, x2, x8
    sub   w13, w13, w14
    add   w13, w13, w15
    str   w13, [x10, x12]
    add   x12, x12, #4            // =4
    cmp   x12, #400               // =400
    b.ne  .LBB0_1
// %bb.2:                               // %for.cond.cleanup

If you look at the ldrb load instructions inside the loop in both versions, you can see that the offsets are computed more efficiently in the patched version: the offset is computed inside the loop in the non-patched version and outside the loop in the patched version.
Your framing here really confuses me.  What does extending the load have to do with the addressing form used?  *Why* is there a connection here?

An experiment on an ARM Cortex-A72 shows an improvement of close to 10% for one application in the benchmark.
This is a single benchmark.  I'm asking about the set of all possible benchmarks.


Abderrazek

From: Philip Reames [mailto:listmail at philipreames.com]
Sent: Monday, September 10, 2018 4:41 PM
To: Abderrazek Zaafrani; llvm-commits at lists.llvm.org; Sebastian Pop
Subject: Re: [llvm] r341726 - [SimplifyIndVar] Avoid generating truncate instructions with non-hoisted Laod operand.


I have serious concerns with this patch.  I don't believe this patch meets our usual standards.  I believe this patch warrants being reverted, and then re-reviewed from scratch.

@sebpop, I don't think it was appropriate to approve this patch given the public history.  I wouldn't normally call that out, but this seems like a really clear cut case.

@Abderrazek, please don't let me scare you off here.  While I think this patch does need to be reverted, I think it can be adjusted into something clearly worthwhile, and I encourage you to do so.  I'm happy to help as well.  (Fair warning: my time is very limited, so my help will only go so far.)

Philip
On 09/07/2018 03:41 PM, Abderrazek Zaafrani via llvm-commits wrote:

Author: az
Date: Fri Sep  7 15:41:57 2018
New Revision: 341726

URL: http://llvm.org/viewvc/llvm-project?rev=341726&view=rev

Log:
[SimplifyIndVar] Avoid generating truncate instructions with non-hoisted Laod operand.
(For the future, you had a nice comment in the review about the reasoning.  The submit comment should include that too.)

Your reasoning here is explained purely in terms of what we do for loop-invariant loads.  You don't state what the transform actually is, or provide a clear description of what the generated code should be.  I've read over the review and the code, and can reverse engineer what you're going for, but it is your responsibility to make this clear from the submitted code.  This patch does not achieve that objective.

Reverse engineering the code, here's what I *think* you're trying to accomplish:
If we find a binary operator where we need to extend one side, check to see if we can cheaply extend the other operand and widen the load instead.  Many architectures can efficiently generate an extending load, so we consider loads to be one such cheaply extensible case.

(I can't connect this understanding of the code to your comments in the review though, so maybe I'm missing something still?)

In terms of the review discussion, I am concerned that you appear to be mixing TBAA and AliasSet terminology.  You appear to have rejected an approach based on a possibly flawed understanding.  Your reviewers should have asked for a reference w.r.t. the rejected approach (e.g. email discussion, review, or rejected patch).









Differential Revision: https://reviews.llvm.org/D49151

Modified:
    llvm/trunk/lib/Transforms/Scalar/IndVarSimplify.cpp
    llvm/trunk/test/Transforms/IndVarSimplify/iv-widen-elim-ext.ll

Modified: llvm/trunk/lib/Transforms/Scalar/IndVarSimplify.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Scalar/IndVarSimplify.cpp?rev=341726&r1=341725&r2=341726&view=diff
==============================================================================
--- llvm/trunk/lib/Transforms/Scalar/IndVarSimplify.cpp (original)
+++ llvm/trunk/lib/Transforms/Scalar/IndVarSimplify.cpp Fri Sep  7 15:41:57 2018
@@ -1017,6 +1017,8 @@ protected:
   Instruction *widenIVUse(NarrowIVDefUse DU, SCEVExpander &Rewriter);

   bool widenLoopCompare(NarrowIVDefUse DU);
+  bool widenWithVariantLoadUse(NarrowIVDefUse DU);
+  void widenWithVariantLoadUseCodegen(NarrowIVDefUse DU);

   void pushNarrowIVUsers(Instruction *NarrowDef, Instruction *WideDef);
 };
@@ -1361,6 +1363,146 @@ bool WidenIV::widenLoopCompare(NarrowIVD
   return true;
 }

+/// If the narrow use is an instruction whose two operands are the defining
+/// instruction of DU and a load instruction, then we have the following:
+/// if the load is hoisted outside the loop, then we do not reach this function
+/// as scalar evolution analysis works fine in widenIVUse with variables
+/// hoisted outside the loop and efficient code is subsequently generated by
+/// not emitting truncate instructions. But when the load is not hoisted
+/// (whether due to limitation in alias analysis or due to a true legality),
+/// then scalar evolution can not proceed with loop variant values and
+/// inefficient code is generated. This function handles the non-hoisted load
+/// special case by making the optimization generate the same type of code for
+/// hoisted and non-hoisted load (widen use and eliminate sign extend
+/// instruction). This special case is important especially when the induction
+/// variables are affecting addressing mode in code generation.
This comment really tells me nothing about what this function does, or what its goal is.





+bool WidenIV::widenWithVariantLoadUse(NarrowIVDefUse DU) {
+  Instruction *NarrowUse = DU.NarrowUse;
+  Instruction *NarrowDef = DU.NarrowDef;
+  Instruction *WideDef = DU.WideDef;
+
+  // Handle the common case of add<nsw/nuw>
+  const unsigned OpCode = NarrowUse->getOpcode();
+  // Only Add/Sub/Mul instructions are supported.
Why?





+  if (OpCode != Instruction::Add && OpCode != Instruction::Sub &&
+      OpCode != Instruction::Mul)
+    return false;
+
+  // The operand that is not defined by NarrowDef of DU. Let's call it the
+  // other operand.
+  unsigned ExtendOperIdx = DU.NarrowUse->getOperand(0) == NarrowDef ? 1 : 0;
+  assert(DU.NarrowUse->getOperand(1 - ExtendOperIdx) == DU.NarrowDef &&
+         "bad DU");
+
+  const SCEV *ExtendOperExpr = nullptr;
+  const OverflowingBinaryOperator *OBO =
+    cast<OverflowingBinaryOperator>(NarrowUse);
+  ExtendKind ExtKind = getExtendKind(NarrowDef);
+  if (ExtKind == SignExtended && OBO->hasNoSignedWrap())
+    ExtendOperExpr = SE->getSignExtendExpr(
+      SE->getSCEV(NarrowUse->getOperand(ExtendOperIdx)), WideType);
+  else if (ExtKind == ZeroExtended && OBO->hasNoUnsignedWrap())
+    ExtendOperExpr = SE->getZeroExtendExpr(
+      SE->getSCEV(NarrowUse->getOperand(ExtendOperIdx)), WideType);
+  else
+    return false;
+
+  // We are interested in the other operand being a load instruction.
+  // But, we should look into relaxing this restriction later on.
+  auto *I = dyn_cast<Instruction>(NarrowUse->getOperand(ExtendOperIdx));
+  if (I && I->getOpcode() != Instruction::Load)
+    return false;
+
+  // Verifying that Defining operand is an AddRec
+  const SCEV *Op1 = SE->getSCEV(WideDef);
+  const SCEVAddRecExpr *AddRecOp1 = dyn_cast<SCEVAddRecExpr>(Op1);
+  if (!AddRecOp1 || AddRecOp1->getLoop() != L)
+    return false;
+  // Verifying that other operand is an Extend.
+  if (ExtKind == SignExtended) {
+    if (!isa<SCEVSignExtendExpr>(ExtendOperExpr))
+      return false;
+  } else {
+    if (!isa<SCEVZeroExtendExpr>(ExtendOperExpr))
+      return false;
+  }
+
+  if (ExtKind == SignExtended) {
+    for (Use &U : NarrowUse->uses()) {
+      SExtInst *User = dyn_cast<SExtInst>(U.getUser());
+      if (!User || User->getType() != WideType)
+        return false;
+    }
+  } else { // ExtKind == ZeroExtended
+    for (Use &U : NarrowUse->uses()) {
+      ZExtInst *User = dyn_cast<ZExtInst>(U.getUser());
+      if (!User || User->getType() != WideType)
+        return false;
+    }
+  }
This and the previous block of restrictions seem awfully narrow with no clear reasoning of why they need to be.





+
+  return true;
+}
+
+/// Special Case for widening with variant Loads (see
+/// WidenIV::widenWithVariantLoadUse). This is the code generation part.
+void WidenIV::widenWithVariantLoadUseCodegen(NarrowIVDefUse DU) {
+  Instruction *NarrowUse = DU.NarrowUse;
+  Instruction *NarrowDef = DU.NarrowDef;
+  Instruction *WideDef = DU.WideDef;
+
+  ExtendKind ExtKind = getExtendKind(NarrowDef);
+
+  LLVM_DEBUG(dbgs() << "Cloning arithmetic IVUser: " << *NarrowUse << "\n");
+
+  // Generating a widening use instruction.
+  Value *LHS = (NarrowUse->getOperand(0) == NarrowDef)
+                   ? WideDef
+                   : createExtendInst(NarrowUse->getOperand(0), WideType,
+                                      ExtKind, NarrowUse);
+  Value *RHS = (NarrowUse->getOperand(1) == NarrowDef)
+                   ? WideDef
+                   : createExtendInst(NarrowUse->getOperand(1), WideType,
+                                      ExtKind, NarrowUse);
+
+  auto *NarrowBO = cast<BinaryOperator>(NarrowUse);
+  auto *WideBO = BinaryOperator::Create(NarrowBO->getOpcode(), LHS, RHS,
+                                        NarrowBO->getName());
+  IRBuilder<> Builder(NarrowUse);
+  Builder.Insert(WideBO);
+  WideBO->copyIRFlags(NarrowBO);
+
+  if (ExtKind == SignExtended)
+    ExtendKindMap[NarrowUse] = SignExtended;
+  else
+    ExtendKindMap[NarrowUse] = ZeroExtended;
+
+  // Update the Use.
+  if (ExtKind == SignExtended) {
+    for (Use &U : NarrowUse->uses()) {
+      SExtInst *User = dyn_cast<SExtInst>(U.getUser());
+      if (User && User->getType() == WideType) {
+        LLVM_DEBUG(dbgs() << "INDVARS: eliminating " << *User << " replaced by "
+                          << *WideBO << "\n");
+        ++NumElimExt;
+        User->replaceAllUsesWith(WideBO);
+        DeadInsts.emplace_back(User);
+      }
+    }
+  } else { // ExtKind == ZeroExtended
+    for (Use &U : NarrowUse->uses()) {
+      ZExtInst *User = dyn_cast<ZExtInst>(U.getUser());
+      if (User && User->getType() == WideType) {
+        LLVM_DEBUG(dbgs() << "INDVARS: eliminating " << *User << " replaced by "
+                          << *WideBO << "\n");
+        ++NumElimExt;
+        User->replaceAllUsesWith(WideBO);
+        DeadInsts.emplace_back(User);
+      }
+    }
+  }
+}
+
 /// Determine whether an individual user of the narrow IV can be widened. If so,
 /// return the wide clone of the user.
 Instruction *WidenIV::widenIVUse(NarrowIVDefUse DU, SCEVExpander &Rewriter) {
@@ -1458,6 +1600,16 @@ Instruction *WidenIV::widenIVUse(NarrowI
     if (widenLoopCompare(DU))
       return nullptr;

+    // We are here about to generate a truncate instruction that may hurt
+    // performance because the scalar evolution expression computed earlier
+    // in WideAddRec.first does not indicate a polynomial induction expression.
+    // In that case, look at the operands of the use instruction to determine
+    // if we can still widen the use instead of truncating its operand.
+    if (widenWithVariantLoadUse(DU)) {
+      widenWithVariantLoadUseCodegen(DU);
+      return nullptr;
+    }
+
     // This user does not evaluate to a recurrence after widening, so don't
     // follow it. Instead insert a Trunc to kill off the original use,
     // eventually isolating the original narrow IV so it can be removed.

Modified: llvm/trunk/test/Transforms/IndVarSimplify/iv-widen-elim-ext.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/IndVarSimplify/iv-widen-elim-ext.ll?rev=341726&r1=341725&r2=341726&view=diff
==============================================================================
--- llvm/trunk/test/Transforms/IndVarSimplify/iv-widen-elim-ext.ll (original)
+++ llvm/trunk/test/Transforms/IndVarSimplify/iv-widen-elim-ext.ll Fri Sep  7 15:41:57 2018
@@ -273,3 +273,87 @@ for.end:
   %call = call i32 @dummy(i32* getelementptr inbounds ([100 x i32], [100 x i32]* @a, i32 0, i32 0), i32* getelementptr inbounds ([100 x i32], [100 x i32]* @b, i32 0, i32 0))
   ret i32 0
 }
The testing included here is insufficient.  Specifically:
1) The tests do not appear to be maximally reduced.
2) The tests do not clearly show what the expected output is (i.e. a positive check).
3) It includes only minimal negative tests (i.e. exercising cases which shouldn't trigger).  Clearly missing cases include: non-load RHS, other binary op types, missing nsw/nuw, etc. (a sketch of one such case follows below).
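To illustrate point (3), here is a hedged sketch of one possible reduced negative test; the function name, values, and CHECK lines are made up for illustration and have not been run through the pass. The idea is that the mul carries no nsw flag, so widenWithVariantLoadUse should bail out and a truncate would be expected to remain in the output:

; Hypothetical negative test: the mul has no nsw, so the use should not be
; widened and a trunc of the widened IV is expected to remain.
; CHECK-LABEL: @no_nsw_mul(
; CHECK: trunc
define void @no_nsw_mul(i32 %length, i32* %stride, i32* %in) {
entry:
  br label %for.body

for.body:
  %i = phi i32 [ 1, %entry ], [ %inc, %for.body ]
  %inc = add nuw nsw i32 %i, 1
  %ld = load i32, i32* %stride, align 4
  %mul = mul i32 %ld, %inc                      ; no nsw: transform should not fire
  %idx = sext i32 %mul to i64
  %ptr = getelementptr inbounds i32, i32* %in, i64 %idx
  %val = load i32, i32* %ptr, align 4
  store i32 %val, i32* %ptr, align 4
  %cmp = icmp slt i32 %inc, %length
  br i1 %cmp, label %for.body, label %exit

exit:
  ret void
}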






+
+%struct.image = type {i32, i32}
+define i32 @foo4(%struct.image* %input, i32 %length, i32* %in) {
+entry:
+  %stride = getelementptr inbounds %struct.image, %struct.image* %input, i64 0, i32 1
+  %0 = load i32, i32* %stride, align 4
+  %cmp17 = icmp sgt i32 %length, 1
+  br i1 %cmp17, label %for.body.lr.ph, label %for.cond.cleanup
+
+for.body.lr.ph:                                   ; preds = %entry
+  %channel = getelementptr inbounds %struct.image, %struct.image* %input, i64 0, i32 0
+  br label %for.body
+
+for.cond.cleanup.loopexit:                        ; preds = %for.body
+  %1 = phi i32 [ %6, %for.body ]
+  br label %for.cond.cleanup
+
+for.cond.cleanup:                                 ; preds = %for.cond.cleanup.loopexit, %entry
+  %2 = phi i32 [ 0, %entry ], [ %1, %for.cond.cleanup.loopexit ]
+  ret i32 %2
+
+; mul instruction below is widened instead of generating a truncate instruction for it
+; regardless if Load operand of mul is inside or outside the loop (we have both cases).
+; CHECK: for.body:
+; CHECK-NOT: trunc
+for.body:                                         ; preds = %for.body.lr.ph, %for.body
+  %x.018 = phi i32 [ 1, %for.body.lr.ph ], [ %add, %for.body ]
+  %add = add nuw nsw i32 %x.018, 1
+  %3 = load i32, i32* %channel, align 8
+  %mul = mul nsw i32 %3, %add
+  %idx.ext = sext i32 %mul to i64
+  %add.ptr = getelementptr inbounds i32, i32* %in, i64 %idx.ext
+  %4 = load i32, i32* %add.ptr, align 4
+  %mul1 = mul nsw i32 %0, %add
+  %idx.ext1 = sext i32 %mul1 to i64
+  %add.ptr1 = getelementptr inbounds i32, i32* %in, i64 %idx.ext1
+  %5 = load i32, i32* %add.ptr1, align 4
+  %6 = add i32 %4, %5
+  %cmp = icmp slt i32 %add, %length
+  br i1 %cmp, label %for.body, label %for.cond.cleanup.loopexit
+}
+
+
+define i32 @foo5(%struct.image* %input, i32 %length, i32* %in) {
+entry:
+  %stride = getelementptr inbounds %struct.image, %struct.image* %input, i64 0, i32 1
+  %0 = load i32, i32* %stride, align 4
+  %cmp17 = icmp sgt i32 %length, 1
+  br i1 %cmp17, label %for.body.lr.ph, label %for.cond.cleanup
+
+for.body.lr.ph:                                   ; preds = %entry
+  %channel = getelementptr inbounds %struct.image, %struct.image* %input, i64 0, i32 0
+  br label %for.body
+
+for.cond.cleanup.loopexit:                        ; preds = %for.body
+  %1 = phi i32 [ %7, %for.body ]
+  br label %for.cond.cleanup
+
+for.cond.cleanup:                                 ; preds = %for.cond.cleanup.loopexit, %entry
+  %2 = phi i32 [ 0, %entry ], [ %1, %for.cond.cleanup.loopexit ]
+  ret i32 %2
+
+; This example is the same as above except that the first mul is used in two places
+; and this may result in having two versions of the multiply: an i32 and i64 version.
+; In this case, keep the trucate instructions to avoid this redundancy.
+; CHECK: for.body:
+; CHECK: trunc
+for.body:                                         ; preds = %for.body.lr.ph, %for.body
+  %x.018 = phi i32 [ 1, %for.body.lr.ph ], [ %add, %for.body ]
+  %add = add nuw nsw i32 %x.018, 1
+  %3 = load i32, i32* %channel, align 8
+  %mul = mul nsw i32 %3, %add
+  %idx.ext = sext i32 %mul to i64
+  %add.ptr = getelementptr inbounds i32, i32* %in, i64 %idx.ext
+  %4 = load i32, i32* %add.ptr, align 4
+  %mul1 = mul nsw i32 %0, %add
+  %idx.ext1 = sext i32 %mul1 to i64
+  %add.ptr1 = getelementptr inbounds i32, i32* %in, i64 %idx.ext1
+  %5 = load i32, i32* %add.ptr1, align 4
+  %6 = add i32 %4, %5
+  %7 = add i32 %6, %mul
+  %cmp = icmp slt i32 %add, %length
+  br i1 %cmp, label %for.body, label %for.cond.cleanup.loopexit
+}




