[llvm] r192608 - MachineSink: Fix and tweak critical-edge breaking heuristic.

Will Dietz wdietz2 at illinois.edu
Mon Oct 14 09:57:17 PDT 2013


Author: wdietz2
Date: Mon Oct 14 11:57:17 2013
New Revision: 192608

URL: http://llvm.org/viewvc/llvm-project?rev=192608&view=rev
Log:
MachineSink: Fix and tweak critical-edge breaking heuristic.

Per the original comment, the intention of this loop
is to go ahead and break the critical edge
(in order to sink this instruction) if there's
reason to believe doing so might "unblock" the
sinking of additional instructions that define
registers used by this one.  The idea is that if
we have a few instructions to sink "together",
breaking the edge might be worthwhile.

This commit makes a few small changes
to help better realize this goal:

First, modify the loop to ignore registers
defined by this instruction.  We don't
sink definitions of physical registers,
and sinking an SSA definition isn't
going to unblock an upstream instruction.

Second, ignore uses of physical registers.
Instructions that define physical registers are
rejected for sinking, and so moving this one
won't enable moving any defining instructions.
As an added bonus, while virtual register
use-def chains are generally small due
to SSA goodness, iteration over the uses
and definitions (used by hasOneNonDBGUse)
for physical registers like EFLAGS
can be rather expensive in practice.
(This was the original motivation for looking into this.)

Finally, to keep things simple, continue
to consider this trick only for registers that
have a single use (via hasOneNonDBGUse),
but to avoid spuriously breaking critical edges,
only do so if the definition resides
in the same MBB, so that this instruction directly
blocks it from being sunk as well.
If they really should be sunk together,
let the iterative nature of this pass
sink the definition into this block first.
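
For reference, the updated heuristic boils down to
the loop sketched below. This is a simplified
paraphrase of the MachineSink.cpp hunk in this patch,
not a drop-in replacement; the checks surrounding the
loop in the real function are elided.

  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
    const MachineOperand &MO = MI->getOperand(i);
    // Only register uses matter; our own defs can't unblock
    // an upstream instruction.
    if (!MO.isReg() || !MO.isUse())
      continue;
    unsigned Reg = MO.getReg();
    if (Reg == 0)
      continue;
    // Physreg defs (e.g. EFLAGS) are never sunk, and walking their
    // use/def lists via hasOneNonDBGUse can be expensive, so skip them.
    if (TargetRegisterInfo::isPhysicalRegister(Reg))
      continue;
    // Only claim the edge is worth breaking if we are the sole user
    // of a value defined in our own block, i.e. we are the one
    // blocking that definition from being sunk as well.
    if (MRI->hasOneNonDBGUse(Reg) &&
        MRI->getVRegDef(Reg)->getParent() == MI->getParent())
      return true;
  }
  return false;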

Update tests to accommodate this change,
and add a new testcase where sinking avoids pipeline stalls.

Added:
    llvm/trunk/test/CodeGen/ARM/2013-10-11-select-stalls.ll
Modified:
    llvm/trunk/lib/CodeGen/MachineSink.cpp
    llvm/trunk/test/CodeGen/ARM/2011-04-11-MachineLICMBug.ll
    llvm/trunk/test/CodeGen/ARM/2011-08-25-ldmia_ret.ll
    llvm/trunk/test/CodeGen/ARM/2012-08-30-select.ll
    llvm/trunk/test/CodeGen/Thumb2/v8_IT_1.ll
    llvm/trunk/test/CodeGen/X86/2012-11-30-handlemove-dbg.ll
    llvm/trunk/test/CodeGen/X86/fold-load.ll
    llvm/trunk/test/CodeGen/X86/hoist-common.ll
    llvm/trunk/test/CodeGen/X86/misched-balance.ll

Modified: llvm/trunk/lib/CodeGen/MachineSink.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MachineSink.cpp?rev=192608&r1=192607&r2=192608&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/MachineSink.cpp (original)
+++ llvm/trunk/lib/CodeGen/MachineSink.cpp Mon Oct 14 11:57:17 2013
@@ -308,12 +308,29 @@ bool MachineSinking::isWorthBreakingCrit
   // to be sunk then it's probably worth it.
   for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
     const MachineOperand &MO = MI->getOperand(i);
-    if (!MO.isReg()) continue;
+    if (!MO.isReg() || !MO.isUse())
+      continue;
     unsigned Reg = MO.getReg();
-    if (Reg == 0 || !TargetRegisterInfo::isPhysicalRegister(Reg))
+    if (Reg == 0)
       continue;
-    if (MRI->hasOneNonDBGUse(Reg))
-      return true;
+
+    // We don't move live definitions of physical registers,
+    // so sinking their uses won't enable any opportunities.
+    if (TargetRegisterInfo::isPhysicalRegister(Reg))
+      continue;
+
+    // If this instruction is the only user of a virtual register,
+    // check if breaking the edge will enable sinking
+    // both this instruction and the defining instruction.
+    if (MRI->hasOneNonDBGUse(Reg)) {
+      // If the definition resides in same MBB,
+      // claim it's likely we can sink these together.
+      // If definition resides elsewhere, we aren't
+      // blocking it from being sunk so don't break the edge.
+      MachineInstr *DefMI = MRI->getVRegDef(Reg);
+      if (DefMI->getParent() == MI->getParent())
+        return true;
+    }
   }
 
   return false;
@@ -615,9 +632,8 @@ bool MachineSinking::SinkInstruction(Mac
 
   DEBUG(dbgs() << "Sink instr " << *MI << "\tinto block " << *SuccToSinkTo);
 
-  // If the block has multiple predecessors, this would introduce computation on
-  // a path that it doesn't already exist.  We could split the critical edge,
-  // but for now we just punt.
+  // If the block has multiple predecessors, this is a critical edge.
+  // Decide if we can sink along it or need to break the edge.
   if (SuccToSinkTo->pred_size() > 1) {
     // We cannot sink a load across a critical edge - there may be stores in
     // other code paths.

Modified: llvm/trunk/test/CodeGen/ARM/2011-04-11-MachineLICMBug.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/2011-04-11-MachineLICMBug.ll?rev=192608&r1=192607&r2=192608&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/ARM/2011-04-11-MachineLICMBug.ll (original)
+++ llvm/trunk/test/CodeGen/ARM/2011-04-11-MachineLICMBug.ll Mon Oct 14 11:57:17 2013
@@ -15,15 +15,14 @@ for.cond:
 
 for.body:
 ; CHECK: %for.
-; CHECK: movs r{{[0-9]+}}, #{{[01]}}
+; CHECK: mov{{.*}} r{{[0-9]+}}, #{{[01]}}
+; CHECK: mov{{.*}} r{{[0-9]+}}, #{{[01]}}
+; CHECK-NOT: mov r{{[0-9]+}}, #{{[01]}}
   %arrayidx = getelementptr i32* %A, i32 %0
   %tmp4 = load i32* %arrayidx, align 4
   %cmp6 = icmp eq i32 %tmp4, %value
   br i1 %cmp6, label %return, label %for.inc
 
-; CHECK: %for.
-; CHECK: movs r{{[0-9]+}}, #{{[01]}}
-
 for.inc:
   %inc = add i32 %0, 1
   br label %for.cond

Modified: llvm/trunk/test/CodeGen/ARM/2011-08-25-ldmia_ret.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/2011-08-25-ldmia_ret.ll?rev=192608&r1=192607&r2=192608&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/ARM/2011-08-25-ldmia_ret.ll (original)
+++ llvm/trunk/test/CodeGen/ARM/2011-08-25-ldmia_ret.ll Mon Oct 14 11:57:17 2013
@@ -42,7 +42,7 @@ if.then:
 ; If-convert the return
 ; CHECK: it	ne
 ; Fold the CSR+return into a pop
-; CHECK: pop {r4, r5, r6, r7, pc}
+; CHECK: pop {r4, r5, r7, pc}
 sw.bb18:
   %call20 = tail call i32 @bar(i32 %in2) nounwind
   switch i32 %call20, label %sw.default56 [

Modified: llvm/trunk/test/CodeGen/ARM/2012-08-30-select.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/2012-08-30-select.ll?rev=192608&r1=192607&r2=192608&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/ARM/2012-08-30-select.ll (original)
+++ llvm/trunk/test/CodeGen/ARM/2012-08-30-select.ll Mon Oct 14 11:57:17 2013
@@ -5,14 +5,11 @@
 ;CHECK: it  ne
 ;CHECK-NEXT: vmovne.i32
 ;CHECK: bx
-define <16 x i8> @select_s_v_v(i32 %avail, i8* %bar) {
+define <16 x i8> @select_s_v_v(<16 x i8> %vec, i32 %avail) {
 entry:
-  %vld1 = call <16 x i8> @llvm.arm.neon.vld1.v16i8(i8* %bar, i32 1)
   %and = and i32 %avail, 1
   %tobool = icmp eq i32 %and, 0
-  %vld1. = select i1 %tobool, <16 x i8> %vld1, <16 x i8> zeroinitializer
-  ret <16 x i8> %vld1.
+  %ret = select i1 %tobool, <16 x i8> %vec, <16 x i8> zeroinitializer
+  ret <16 x i8> %ret
 }
 
-declare <16 x i8> @llvm.arm.neon.vld1.v16i8(i8* , i32 )
-

Added: llvm/trunk/test/CodeGen/ARM/2013-10-11-select-stalls.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/2013-10-11-select-stalls.ll?rev=192608&view=auto
==============================================================================
--- llvm/trunk/test/CodeGen/ARM/2013-10-11-select-stalls.ll (added)
+++ llvm/trunk/test/CodeGen/ARM/2013-10-11-select-stalls.ll Mon Oct 14 11:57:17 2013
@@ -0,0 +1,16 @@
+; REQUIRES: asserts
+; RUN: llc < %s -mtriple=thumbv7-apple-ios -stats 2>&1 | not grep "Number of pipeline stalls"
+; Evaluate the two vld1.8 instructions in separate MBB's,
+; instead of stalling on one and conditionally overwriting its result.
+
+define <16 x i8> @multiselect(i32 %avail, i8* %foo, i8* %bar) {
+entry:
+  %vld1 = call <16 x i8> @llvm.arm.neon.vld1.v16i8(i8* %foo, i32 1)
+  %vld2 = call <16 x i8> @llvm.arm.neon.vld1.v16i8(i8* %bar, i32 1)
+  %and = and i32 %avail, 1
+  %tobool = icmp eq i32 %and, 0
+  %retv = select i1 %tobool, <16 x i8> %vld1, <16 x i8> %vld2
+  ret <16 x i8> %retv
+}
+
+declare <16 x i8> @llvm.arm.neon.vld1.v16i8(i8* , i32 )

Modified: llvm/trunk/test/CodeGen/Thumb2/v8_IT_1.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/Thumb2/v8_IT_1.ll?rev=192608&r1=192607&r2=192608&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/Thumb2/v8_IT_1.ll (original)
+++ llvm/trunk/test/CodeGen/Thumb2/v8_IT_1.ll Mon Oct 14 11:57:17 2013
@@ -1,10 +1,7 @@
 ; RUN: llc < %s -mtriple=thumbv8 -mattr=+neon | FileCheck %s
 
 ;CHECK-LABEL: select_s_v_v:
-;CHECK: beq	.LBB0_2
-;CHECK-NEXT: @ BB#1:
-;CHECK-NEXT: vmov.i32
-;CHECK-NEXT: .LBB0_2:
+;CHECK-NOT: it
 ;CHECK: bx
 define <16 x i8> @select_s_v_v(i32 %avail, i8* %bar) {
 entry:

Modified: llvm/trunk/test/CodeGen/X86/2012-11-30-handlemove-dbg.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/2012-11-30-handlemove-dbg.ll?rev=192608&r1=192607&r2=192608&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/X86/2012-11-30-handlemove-dbg.ll (original)
+++ llvm/trunk/test/CodeGen/X86/2012-11-30-handlemove-dbg.ll Mon Oct 14 11:57:17 2013
@@ -6,7 +6,7 @@
 ;
 ; CHECK: %entry
 ; CHECK: DEBUG_VALUE: hg
-; CHECK: je
+; CHECK: j
 
 %struct.node.0.27 = type { i16, double, [3 x double], i32, i32 }
 %struct.hgstruct.2.29 = type { %struct.bnode.1.28*, [3 x double], double, [3 x double] }

Modified: llvm/trunk/test/CodeGen/X86/fold-load.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fold-load.ll?rev=192608&r1=192607&r2=192608&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/X86/fold-load.ll (original)
+++ llvm/trunk/test/CodeGen/X86/fold-load.ll Mon Oct 14 11:57:17 2013
@@ -49,10 +49,10 @@ L:
 ; xor in exit block will be CSE'ed and load will be folded to xor in entry.
 define i1 @test3(i32* %P, i32* %Q) nounwind {
 ; CHECK-LABEL: test3:
-; CHECK: movl 8(%esp), %eax
-; CHECK: xorl (%eax),
+; CHECK: movl 8(%esp), %e
+; CHECK: movl 4(%esp), %e
+; CHECK: xorl (%e
 ; CHECK: j
-; CHECK-NOT: xor
 entry:
   %0 = load i32* %P, align 4
   %1 = load i32* %Q, align 4

Modified: llvm/trunk/test/CodeGen/X86/hoist-common.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/hoist-common.ll?rev=192608&r1=192607&r2=192608&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/X86/hoist-common.ll (original)
+++ llvm/trunk/test/CodeGen/X86/hoist-common.ll Mon Oct 14 11:57:17 2013
@@ -1,4 +1,14 @@
 ; RUN: llc < %s -mtriple=x86_64-apple-macosx  | FileCheck %s
+; This is supposed to be testing BranchFolding's common
+; code hoisting logic, but has been erroneously passing due
+; to there being a redundant xorl in the entry block
+; and no common code to hoist.
+; However, now that MachineSink sinks the redundant xor
+; hoist-common looks at it and rejects it for hoisting,
+; which causes this test to fail.
+; Since it seems this test is broken, marking XFAIL for now
+; until someone decides to remove it or fix what it tests.
+; XFAIL: *
 
 ; Common "xorb al, al" instruction in the two successor blocks should be
 ; moved to the entry block above the test + je.

Modified: llvm/trunk/test/CodeGen/X86/misched-balance.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/misched-balance.ll?rev=192608&r1=192607&r2=192608&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/X86/misched-balance.ll (original)
+++ llvm/trunk/test/CodeGen/X86/misched-balance.ll Mon Oct 14 11:57:17 2013
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86-64 -mcpu=core2 -pre-RA-sched=source -enable-misched -verify-machineinstrs | FileCheck %s
+; RUN: llc < %s -mtriple=x86_64-unknown-linux-gnu -mcpu=core2 -pre-RA-sched=source -enable-misched -verify-machineinstrs | FileCheck %s
 ;
 ; Verify that misched resource/latency balancy heuristics are sane.
 




