[llvm] r294814 - [LSR] Recommit: Allow formula containing Reg for SCEVAddRecExpr related with outerloop.

Wei Mi via llvm-commits llvm-commits at lists.llvm.org
Fri Feb 10 16:50:24 PST 2017


Author: wmi
Date: Fri Feb 10 18:50:23 2017
New Revision: 294814

URL: http://llvm.org/viewvc/llvm-project?rev=294814&view=rev
Log:
[LSR] Recommit: Allow formula containing Reg for SCEVAddRecExpr related with outerloop.

The recommit includes some changes to testcases. No functional change to the patch.

In RateRegister of the existing LSR, if a formula contains a Reg which is a SCEVAddRecExpr
and that SCEVAddRecExpr's loop is an outer loop, the formula is marked as a Loser and
dropped.

Suppose we have IR in which %for.body is the outer loop and %for.body2 is the inner loop.
LSR currently handles only the innermost loop, so only %for.body2 will be processed.
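
For illustration, a hypothetical C++ loop nest with this shape (not the exact source of the
test added below) could look like the following; the inner loop's address expression mixes
the outer-loop induction value i*size with the inner induction variable j:

  void foo(char *array, long size, long nsteps) {
    for (long i = 0; i < nsteps; ++i)      // %for.body (outer loop)
      for (long j = 1; j < size; ++j)      // %for.body2 (inner loop)
        // store offset is roughly {1,+,size}<%for.body> + {0,+,1}<%for.body2>
        array[i * size + j] ^= array[j];
  }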

With the logic above, a formula like
reg(%array) + reg({1,+,%size}<%for.body>) + 1*reg({0,+,1}<%for.body2>) is always dropped,
because reg({1,+,%size}<%for.body>) is a SCEVAddRecExpr-type reg associated with the outer
loop. Only a formula like
reg(%array) + 1*reg({{1,+,%size}<%for.body>,+,1}<nuw><nsw><%for.body2>) is kept, because
the SCEVAddRecExpr associated with the outer loop is folded into the initial value of the
SCEVAddRecExpr associated with the current loop.
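
Expanding both forms (using the usual SCEV reading that {a,+,b}<L> equals a + n*b at
iteration n of L, with i the outer iteration count and n the inner iteration count) shows
they describe the same address; only the register structure differs:

  reg(%array) + reg({1,+,%size}<%for.body>) + 1*reg({0,+,1}<%for.body2>)
    = %array + (1 + i*%size) + n
  reg(%array) + 1*reg({{1,+,%size}<%for.body>,+,1}<nuw><nsw><%for.body2>)
    = %array + ((1 + i*%size) + n)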

But in some cases we do need to share the basic induction variable
reg({0,+,1}<%for.body2>) among LSR uses to reduce the total number of induction variables
used by LSR, so we do not want to drop a formula like
reg(%array) + reg({1,+,%size}<%for.body>) + 1*reg({0,+,1}<%for.body2>) unconditionally.
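
What that sharing buys is easiest to see at the source level. A hand-written sketch (not
actual LSR output) of the earlier example, rewritten so that both memory accesses in the
inner loop are driven by one shared counter lsr_iv, with the outer-loop part hoisted as a
loop-invariant base:

  void foo_shared_iv(char *array, long size, long nsteps) {
    for (long i = 0; i < nsteps; ++i) {
      char *base = array + 1 + i * size;     // invariant in the inner loop
      for (long lsr_iv = 0; lsr_iv + 1 < size; ++lsr_iv)
        base[lsr_iv] ^= array[lsr_iv + 1];   // both accesses share lsr_iv
    }
  }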

From the existing comment, the check tries to avoid considering multiple loop levels at the
same time. However, the existing LSR only handles the innermost loop, so a SCEVAddRecExpr
whose loop is not the current loop is invariant in the current loop and is simple to handle,
and the formula containing it does not have to be dropped.

Differential Revision: https://reviews.llvm.org/D26429

Added:
    llvm/trunk/test/Transforms/LoopStrengthReduce/X86/nested-loop.ll
Modified:
    llvm/trunk/lib/Transforms/Scalar/LoopStrengthReduce.cpp
    llvm/trunk/test/CodeGen/AArch64/eliminate-trunc.ll
    llvm/trunk/test/CodeGen/X86/licm-nested.ll

Modified: llvm/trunk/lib/Transforms/Scalar/LoopStrengthReduce.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Scalar/LoopStrengthReduce.cpp?rev=294814&r1=294813&r2=294814&view=diff
==============================================================================
--- llvm/trunk/lib/Transforms/Scalar/LoopStrengthReduce.cpp (original)
+++ llvm/trunk/lib/Transforms/Scalar/LoopStrengthReduce.cpp Fri Feb 10 18:50:23 2017
@@ -1095,17 +1095,16 @@ void Cost::RateRegister(const SCEV *Reg,
                         const Loop *L,
                         ScalarEvolution &SE, DominatorTree &DT) {
   if (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(Reg)) {
-    // If this is an addrec for another loop, don't second-guess its addrec phi
-    // nodes. LSR isn't currently smart enough to reason about more than one
-    // loop at a time. LSR has already run on inner loops, will not run on outer
-    // loops, and cannot be expected to change sibling loops.
+    // If this is an addrec for another loop, it should be an invariant
+    // with respect to L since L is the innermost loop (at least
+    // for now LSR only handles innermost loops).
     if (AR->getLoop() != L) {
       // If the AddRec exists, consider it's register free and leave it alone.
       if (isExistingPhi(AR, SE))
         return;
 
-      // Otherwise, do not consider this formula at all.
-      Lose();
+      // Otherwise, it will be an invariant with respect to Loop L.
+      ++NumRegs;
       return;
     }
     AddRecCost += 1; /// TODO: This should be a function of the stride.

Modified: llvm/trunk/test/CodeGen/AArch64/eliminate-trunc.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/eliminate-trunc.ll?rev=294814&r1=294813&r2=294814&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/eliminate-trunc.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/eliminate-trunc.ll Fri Feb 10 18:50:23 2017
@@ -6,7 +6,7 @@
 ; CHECK-NOT: add {{x[0-9]+}}, {{x[0-9]+}}, #1
 ; CHECK: add {{w[0-9]+}}, {{w[0-9]+}}, #1
 ; CHECK-NEXT: cmp {{w[0-9]+}}, {{w[0-9]+}}
-define void @test1_signed([8 x i8]* nocapture %a, i8* nocapture readonly %box, i8 %limit) minsize {
+define void @test1_signed([8 x i8]* nocapture %a, i8* nocapture readonly %box, i8 %limit, i64 %inv) minsize {
 entry:
   %conv = zext i8 %limit to i32
   %cmp223 = icmp eq i8 %limit, 0
@@ -14,7 +14,7 @@ entry:
 
 for.body4.us:
   %indvars.iv = phi i64 [ 0, %for.body4.lr.ph.us ], [ %indvars.iv.next, %for.body4.us ]
-  %arrayidx6.us = getelementptr inbounds [8 x i8], [8 x i8]* %a, i64 %indvars.iv26, i64 %indvars.iv
+  %arrayidx6.us = getelementptr inbounds [8 x i8], [8 x i8]* %a, i64 %indvars.iv, i64 %inv
   %0 = load i8, i8* %arrayidx6.us, align 1
   %idxprom7.us = zext i8 %0 to i64
   %arrayidx8.us = getelementptr inbounds i8, i8* %box, i64 %idxprom7.us

Modified: llvm/trunk/test/CodeGen/X86/licm-nested.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/licm-nested.ll?rev=294814&r1=294813&r2=294814&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/X86/licm-nested.ll (original)
+++ llvm/trunk/test/CodeGen/X86/licm-nested.ll Fri Feb 10 18:50:23 2017
@@ -1,5 +1,5 @@
 ; REQUIRES: asserts
-; RUN: llc -mtriple=x86_64-apple-darwin -march=x86-64 < %s -o /dev/null -stats -info-output-file - | grep "hoisted out of loops" | grep 4
+; RUN: llc -mtriple=x86_64-apple-darwin -march=x86-64 < %s -o /dev/null -stats -info-output-file - | grep "hoisted out of loops" | grep 5
 
 ; MachineLICM should be able to hoist the symbolic addresses out of
 ; the inner loops.

Added: llvm/trunk/test/Transforms/LoopStrengthReduce/X86/nested-loop.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/LoopStrengthReduce/X86/nested-loop.ll?rev=294814&view=auto
==============================================================================
--- llvm/trunk/test/Transforms/LoopStrengthReduce/X86/nested-loop.ll (added)
+++ llvm/trunk/test/Transforms/LoopStrengthReduce/X86/nested-loop.ll Fri Feb 10 18:50:23 2017
@@ -0,0 +1,65 @@
+; RUN: opt -loop-reduce -S < %s | FileCheck %s
+; Check when we use an outerloop induction variable inside of an innerloop
+; induction value expr, LSR can still choose to use single induction variable
+; for the innerloop and share it in multiple induction value exprs.
+
+target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
+target triple = "x86_64-unknown-linux-gnu"
+
+define void @foo(i32 %size, i32 %nsteps, i32 %hsize, i32* %lined, i8* %maxarray) {
+entry:
+  %cmp215 = icmp sgt i32 %size, 1
+  %t0 = zext i32 %size to i64
+  %t1 = sext i32 %nsteps to i64
+  %sub2 = sub i64 %t0, 2
+  br label %for.body
+
+for.body:                                         ; preds = %for.inc, %entry
+  %indvars.iv2 = phi i64 [ %indvars.iv.next3, %for.inc ], [ 0, %entry ]
+  %t2 = mul nsw i64 %indvars.iv2, %t0
+  br i1 %cmp215, label %for.body2.preheader, label %for.inc
+
+for.body2.preheader:                              ; preds = %for.body
+  br label %for.body2
+
+; Check LSR only generates one induction variable for for.body2 and the induction
+; variable will be shared by multiple array accesses.
+; CHECK: for.body2:
+; CHECK-NEXT: [[LSR:%[^,]+]] = phi i64 [ %lsr.iv.next, %for.body2 ], [ 0, %for.body2.preheader ]
+; CHECK-NOT:  = phi i64 [ {{.*}}, %for.body2 ], [ {{.*}}, %for.body2.preheader ]
+; CHECK:      [[SCEVGEP1:%[^,]+]] = getelementptr i8, i8* %maxarray, i64 [[LSR]]
+; CHECK:      [[SCEVGEP2:%[^,]+]] = getelementptr i8, i8* [[SCEVGEP1]], i64 1
+; CHECK:      {{.*}} = load i8, i8* [[SCEVGEP2]], align 1
+; CHECK:      [[SCEVGEP3:%[^,]+]] = getelementptr i8, i8* {{.*}}, i64 [[LSR]]
+; CHECK:      {{.*}} = load i8, i8* [[SCEVGEP3]], align 1
+; CHECK:      [[SCEVGEP4:%[^,]+]] = getelementptr i8, i8* {{.*}}, i64 [[LSR]]
+; CHECK:      store i8 {{.*}}, i8* [[SCEVGEP4]], align 1
+; CHECK:      br i1 %exitcond, label %for.body2, label %for.inc.loopexit
+
+for.body2:                                        ; preds = %for.body2.preheader, %for.body2
+  %indvars.iv = phi i64 [ 1, %for.body2.preheader ], [ %indvars.iv.next, %for.body2 ]
+  %arrayidx1 = getelementptr inbounds i8, i8* %maxarray, i64 %indvars.iv
+  %v1 = load i8, i8* %arrayidx1, align 1
+  %idx2 = add nsw i64 %indvars.iv, %sub2
+  %arrayidx2 = getelementptr inbounds i8, i8* %maxarray, i64 %idx2
+  %v2 = load i8, i8* %arrayidx2, align 1
+  %tmpv = xor i8 %v1, %v2
+  %t4 = add nsw i64 %t2, %indvars.iv
+  %add.ptr = getelementptr inbounds i8, i8* %maxarray, i64 %t4
+  store i8 %tmpv, i8* %add.ptr, align 1
+  %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1
+  %wide.trip.count = zext i32 %size to i64
+  %exitcond = icmp ne i64 %indvars.iv.next, %wide.trip.count
+  br i1 %exitcond, label %for.body2, label %for.inc.loopexit
+
+for.inc.loopexit:                                 ; preds = %for.body2
+  br label %for.inc
+
+for.inc:                                          ; preds = %for.inc.loopexit, %for.body
+  %indvars.iv.next3 = add nuw nsw i64 %indvars.iv2, 1
+  %cmp = icmp slt i64 %indvars.iv.next3, %t1
+  br i1 %cmp, label %for.body, label %for.end.loopexit
+
+for.end.loopexit:                                 ; preds = %for.inc
+  ret void
+}



