<div dir="ltr">and a creduce-reduced test case</div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jan 17, 2017 at 6:04 PM, Kostya Serebryany <span dir="ltr"><<a href="mailto:kcc@google.com" target="_blank">kcc@google.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">clang -cc1 -triple x86_64-unknown-linux-gnu -emit-obj z.c -fsanitize-coverage-type=3 -fsanitize-coverage-trace-cmp -fsanitize=address -O2 <span class=""><br><div><div>llvm/lib/CodeGen/RegisterCoalescer.cpp:1059: bool (anonymous namespace)::RegisterCoalescer::reMaterializeTrivialDef(const llvm::CoalescerPair &, llvm::MachineInstr *, bool &): Assertion `ValNo && "CopyMI input register not live"' failed.</div></div><div><br></div></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jan 17, 2017 at 5:54 PM, Kostya Serebryany <span dir="ltr"><<a href="mailto:kcc@google.com" target="_blank">kcc@google.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Looks like this causes<div> llvm/lib/CodeGen/RegisterCoalescer.cpp:1059: bool (anonymous namespace)::RegisterCoalescer::reMaterializeTrivialDef(const llvm::CoalescerPair &, llvm::MachineInstr *, bool &): Assertion `ValNo && "CopyMI input register not live"' failed.<br></div><div><div><br></div></div><div><a href="http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fuzzer/builds/2397/steps/test%20c-ares-CVE-2016-5180%20fuzzer/logs/stdio" target="_blank">http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fuzzer/builds/2397/steps/test%20c-ares-CVE-2016-5180%20fuzzer/logs/stdio</a><br></div><div>working on a smaller repro</div></div><div class="m_-4077508993913586422HOEnZb"><div class="m_-4077508993913586422h5"><div class="gmail_extra"><br><div 
class="gmail_quote">On Tue, Jan 17, 2017 at 3:39 PM, Wei Mi via llvm-commits <span dir="ltr"><<a href="mailto:llvm-commits@lists.llvm.org" target="_blank">llvm-commits@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Author: wmi<br>
Date: Tue Jan 17 17:39:07 2017<br>
New Revision: 292292<br>
<br>
URL: <a href="http://llvm.org/viewvc/llvm-project?rev=292292&view=rev" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project?rev=292292&view=rev</a><br>
Log:<br>
[RegisterCoalescing] Remove partially redundant copy.<br>
<br>
The patch is to solve the performance problem described in PR27827.<br>
Register coalescing sometimes cannot remove a copy because of interference.<br>
But if we can find a reverse copy in one of the predecessor blocks of the<br>
copy, the copy is partially redundant and we may remove it partially by<br>
moving it to the predecessor block without the reverse copy.<br>
<br>
Differential Revision: <a href="https://reviews.llvm.org/D28585" rel="noreferrer" target="_blank">https://reviews.llvm.org/D28585</a><br>
<br>
Added:<br>
llvm/trunk/test/CodeGen/X86/pre-coalesce.ll<br>
llvm/trunk/test/CodeGen/X86/pre-coalesce.mir<br>
Modified:<br>
llvm/trunk/lib/CodeGen/RegisterCoalescer.cpp<br>
<br>
Modified: llvm/trunk/lib/CodeGen/RegisterCoalescer.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/RegisterCoalescer.cpp?rev=292292&r1=292291&r2=292292&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/RegisterCoalescer.cpp?rev=292292&r1=292291&r2=292292&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/CodeGen/RegisterCoalescer.cpp (original)<br>
+++ llvm/trunk/lib/CodeGen/RegisterCoalescer.cpp Tue Jan 17 17:39:07 2017<br>
@@ -22,6 +22,7 @@<br>
#include "llvm/CodeGen/LiveRangeEdit.h"<br>
#include "llvm/CodeGen/MachineFrameInfo.h"<br>
#include "llvm/CodeGen/MachineInstr.h"<br>
+#include "llvm/CodeGen/MachineInstrBuilder.h"<br>
#include "llvm/CodeGen/MachineLoopInfo.h"<br>
#include "llvm/CodeGen/MachineRegisterInfo.h"<br>
#include "llvm/CodeGen/Passes.h"<br>
@@ -189,6 +190,9 @@ namespace {<br>
/// This returns true if an interval was modified.<br>
bool removeCopyByCommutingDef(const CoalescerPair &CP,MachineInstr *CopyMI);<br>
<br>
+ /// We found a copy which can be moved to its less frequent predecessor.<br>
+ bool removePartialRedundancy(const CoalescerPair &CP, MachineInstr &CopyMI);<br>
+<br>
/// If the source of a copy is defined by a<br>
/// trivial computation, replace the copy by rematerialize the definition.<br>
bool reMaterializeTrivialDef(const CoalescerPair &CP, MachineInstr *CopyMI,<br>
@@ -861,6 +865,167 @@ bool RegisterCoalescer::removeCopyByComm<br>
return true;<br>
}<br>
<br>
+/// For copy B = A in BB2, if A is defined by A = B in BB0 which is a<br>
+/// predecessor of BB2, and if B is not redefined on the way from A = B<br>
+/// in BB0 to B = A in BB2, then B = A in BB2 is partially redundant when<br>
+/// execution goes through the path from BB0 to BB2. We may move B = A<br>
+/// to the predecessor without such a reversed copy.<br>
+/// So we will transform the program from:<br>
+///    BB0:<br>
+///       A = B;    BB1:<br>
+///        ...         ...<br>
+///      /     \       /<br>
+///             BB2:<br>
+///                ...<br>
+///                B = A;<br>
+///<br>
+/// to:<br>
+///<br>
+///    BB0:          BB1:<br>
+///       A = B;        ...<br>
+///        ...          B = A;<br>
+///      /     \       /<br>
+///             BB2:<br>
+///                ...<br>
+///<br>
+/// A special case is when BB0 and BB2 are the same BB which is the only<br>
+/// BB in a loop:<br>
+///    BB1:<br>
+///         ...<br>
+///    BB0/BB2:  ----<br>
+///         B = A;   |<br>
+///         ...      |<br>
+///         A = B;   |<br>
+///           |-------<br>
+///           |<br>
+/// We may hoist B = A from BB0/BB2 to BB1.<br>
+///<br>
+/// The major preconditions for correctness to remove such partial<br>
+/// redundancy include:<br>
+/// 1. A in B = A in BB2 is defined by a PHI in BB2, and one operand of<br>
+/// the PHI is defined by the reversed copy A = B in BB0.<br>
+/// 2. No B is referenced from the start of BB2 to B = A.<br>
+/// 3. No B is defined from A = B to the end of BB0.<br>
+/// 4. BB1 has only one successor.<br>
+///<br>
+/// 2 and 4 implicitly ensure B is not live at the end of BB1.<br>
+/// 4 guarantees BB2 is hotter than BB1, so we only ever move a copy to a<br>
+/// colder place, which not only prevents an endless loop but also makes<br>
+/// sure the movement of the copy is beneficial.<br>
+bool RegisterCoalescer::removePartialRedundancy(const CoalescerPair &CP,<br>
+ MachineInstr &CopyMI) {<br>
+ assert(!CP.isPhys());<br>
+ if (!CopyMI.isFullCopy())<br>
+ return false;<br>
+<br>
+ MachineBasicBlock &MBB = *CopyMI.getParent();<br>
+ if (MBB.isEHPad())<br>
+ return false;<br>
+<br>
+ if (MBB.pred_size() != 2)<br>
+ return false;<br>
+<br>
+ LiveInterval &IntA =<br>
+ LIS->getInterval(CP.isFlipped() ? CP.getDstReg() : CP.getSrcReg());<br>
+ LiveInterval &IntB =<br>
+ LIS->getInterval(CP.isFlipped() ? CP.getSrcReg() : CP.getDstReg());<br>
+<br>
+ // A is defined by PHI at the entry of MBB.<br>
+ SlotIndex CopyIdx = LIS->getInstructionIndex(CopyMI).getRegSlot(true);<br>
+ VNInfo *AValNo = IntA.getVNInfoAt(CopyIdx);<br>
+ assert(AValNo && !AValNo->isUnused() && "COPY source not live");<br>
+ if (!AValNo->isPHIDef())<br>
+ return false;<br>
+<br>
+ // No B is referenced before CopyMI in MBB.<br>
+ if (IntB.overlaps(LIS->getMBBStartIdx(&MBB), CopyIdx))<br>
+ return false;<br>
+<br>
+ // MBB has two predecessors: one contains A = B so no copy will be inserted<br>
+ // for it. The other one will have a copy moved from MBB.<br>
+ bool FoundReverseCopy = false;<br>
+ MachineBasicBlock *CopyLeftBB = nullptr;<br>
+ for (MachineBasicBlock *Pred : MBB.predecessors()) {<br>
+ VNInfo *PVal = IntA.getVNInfoBefore(LIS->getMBBEndIdx(Pred));<br>
+ MachineInstr *DefMI = LIS->getInstructionFromIndex(PVal->def);<br>
+ if (!DefMI || !DefMI->isFullCopy()) {<br>
+ CopyLeftBB = Pred;<br>
+ continue;<br>
+ }<br>
+ // Check DefMI is a reverse copy and it is in BB Pred.<br>
+ if (DefMI->getOperand(0).getReg() != IntA.reg ||<br>
+ DefMI->getOperand(1).getReg() != IntB.reg ||<br>
+ DefMI->getParent() != Pred) {<br>
+ CopyLeftBB = Pred;<br>
+ continue;<br>
+ }<br>
+ // If there is any other def of B after DefMI and before the end of Pred,<br>
+ // we need to keep the copy of B = A at the end of Pred if we remove<br>
+ // B = A from MBB.<br>
+ bool ValB_Changed = false;<br>
+ for (auto VNI : IntB.valnos) {<br>
+ if (VNI->isUnused())<br>
+ continue;<br>
+ if (PVal->def < VNI->def && VNI->def < LIS->getMBBEndIdx(Pred)) {<br>
+ ValB_Changed = true;<br>
+ break;<br>
+ }<br>
+ }<br>
+ if (ValB_Changed) {<br>
+ CopyLeftBB = Pred;<br>
+ continue;<br>
+ }<br>
+ FoundReverseCopy = true;<br>
+ }<br>
+<br>
+ // If no reverse copy is found in predecessors, nothing to do.<br>
+ if (!FoundReverseCopy)<br>
+ return false;<br>
+<br>
+ // If CopyLeftBB is nullptr, every predecessor of MBB contains a reverse<br>
+ // copy, so CopyMI can be removed trivially once IntA/IntB are updated.<br>
+ // If CopyLeftBB is not nullptr, move CopyMI from MBB to CopyLeftBB and<br>
+ // update IntA/IntB.<br>
+ //<br>
+ // If CopyLeftBB is not nullptr, ensure CopyLeftBB has a single succ so<br>
+ // MBB is hotter than CopyLeftBB.<br>
+ if (CopyLeftBB && CopyLeftBB->succ_size() > 1)<br>
+ return false;<br>
+<br>
+ // Now ok to move copy.<br>
+ if (CopyLeftBB) {<br>
+ DEBUG(dbgs() << "\tremovePartialRedundancy: Move the copy to BB#"<br>
+ << CopyLeftBB->getNumber() << '\t' << CopyMI);<br>
+<br>
+ // Insert new copy to CopyLeftBB.<br>
+ auto InsPos = CopyLeftBB->getFirstTerminator();<br>
+ MachineInstr *NewCopyMI = BuildMI(*CopyLeftBB, InsPos, CopyMI.getDebugLoc(),<br>
+ TII->get(TargetOpcode::COPY), IntB.reg)<br>
+ .addReg(IntA.reg);<br>
+ SlotIndex NewCopyIdx =<br>
+ LIS->InsertMachineInstrInMaps(*NewCopyMI).getRegSlot();<br>
+ VNInfo *VNI = IntB.getNextValue(NewCopyIdx, LIS->getVNInfoAllocator());<br>
+ IntB.createDeadDef(VNI);<br>
+ } else {<br>
+ DEBUG(dbgs() << "\tremovePartialRedundancy: Remove the copy from BB#"<br>
+ << MBB.getNumber() << '\t' << CopyMI);<br>
+ }<br>
+<br>
+ // Remove CopyMI.<br>
+ SlotIndex EndPoint = IntB.Query(CopyIdx.getRegSlot()).endPoint();<br>
+ LIS->removeVRegDefAt(IntB, CopyIdx.getRegSlot());<br>
+ LIS->RemoveMachineInstrFromMaps(CopyMI);<br>
+ CopyMI.eraseFromParent();<br>
+<br>
+ // Extend IntB to the EndPoint of its original live interval.<br>
+ SmallVector<SlotIndex, 8> EndPoints;<br>
+ EndPoints.push_back(EndPoint);<br>
+ LIS->extendToIndices(IntB, EndPoints);<br>
+<br>
+ shrinkToUses(&IntA);<br>
+ return true;<br>
+}<br>
+<br>
/// Returns true if @p MI defines the full vreg @p Reg, as opposed to just<br>
/// defining a subregister.<br>
static bool definesFullReg(const MachineInstr &MI, unsigned Reg) {<br>
@@ -1486,6 +1651,12 @@ bool RegisterCoalescer::joinCopy(Machine<br>
}<br>
}<br>
<br>
+ // Try and see if we can partially eliminate the copy by moving the copy to<br>
+ // its predecessor.<br>
+ if (!CP.isPartial() && !CP.isPhys())<br>
+ if (removePartialRedundancy(CP, *CopyMI))<br>
+ return true;<br>
+<br>
// Otherwise, we are unable to join the intervals.<br>
DEBUG(dbgs() << "\tInterference!\n");<br>
Again = true; // May be possible to coalesce later.<br>
<br>
Added: llvm/trunk/test/CodeGen/X86/pre-coalesce.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pre-coalesce.ll?rev=292292&view=auto" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pre-coalesce.ll?rev=292292&view=auto</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/pre-coalesce.ll (added)<br>
+++ llvm/trunk/test/CodeGen/X86/pre-coalesce.ll Tue Jan 17 17:39:07 2017<br>
@@ -0,0 +1,51 @@<br>
+; RUN: llc -regalloc=greedy -mtriple=x86_64-unknown-linux-gnu < %s -o - | FileCheck %s<br>
+;<br>
+; The test checks that no redundant mov like the following is generated in the %while.body loop.<br>
+; .LBB0_2:<br>
+; movsbl %cl, %ecx<br>
+; movl %edx, %eax ==> This movl can be promoted outside of the loop.<br>
+; shll $5, %eax<br>
+; ...<br>
+; movl %eax, %edx<br>
+; jne .LBB0_2<br>
+;<br>
+; CHECK-LABEL: foo:<br>
+; CHECK: [[L0:.LBB0_[0-9]+]]: # %while.body<br>
+; CHECK: movl %[[REGA:.*]], %[[REGB:.*]]<br>
+; CHECK-NOT: movl %[[REGB]], %[[REGA]]<br>
+; CHECK: jne [[L0]]<br>
+;<br>
+target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"<br>
+<br>
+@b = common local_unnamed_addr global i8* null, align 8<br>
+@a = common local_unnamed_addr global i32 0, align 4<br>
+<br>
+define i32 @foo() local_unnamed_addr {<br>
+entry:<br>
+ %t0 = load i8*, i8** @b, align 8<br>
+ %t1 = load i8, i8* %t0, align 1<br>
+ %cmp4 = icmp eq i8 %t1, 0<br>
+ %t2 = load i32, i32* @a, align 4<br>
+ br i1 %cmp4, label %while.end, label %while.body.preheader<br>
+<br>
+while.body.preheader: ; preds = %entry<br>
+ br label %while.body<br>
+<br>
+while.body: ; preds = %while.body.preheader, %while.body<br>
+ %t3 = phi i32 [ %add3, %while.body ], [ %t2, %while.body.preheader ]<br>
+ %t4 = phi i8 [ %t5, %while.body ], [ %t1, %while.body.preheader ]<br>
+ %conv = sext i8 %t4 to i32<br>
+ %add = mul i32 %t3, 33<br>
+ %add3 = add nsw i32 %add, %conv<br>
+ store i32 %add3, i32* @a, align 4<br>
+ %t5 = load i8, i8* %t0, align 1<br>
+ %cmp = icmp eq i8 %t5, 0<br>
+ br i1 %cmp, label %while.end.loopexit, label %while.body<br>
+<br>
+while.end.loopexit: ; preds = %while.body<br>
+ br label %while.end<br>
+<br>
+while.end: ; preds = %while.end.loopexit, %entry<br>
+ %.lcssa = phi i32 [ %t2, %entry ], [ %add3, %while.end.loopexit ]<br>
+ ret i32 %.lcssa<br>
+}<br>
<br>
Added: llvm/trunk/test/CodeGen/X86/pre-coalesce.mir<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pre-coalesce.mir?rev=292292&view=auto" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pre-coalesce.mir?rev=292292&view=auto</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/pre-coalesce.mir (added)<br>
+++ llvm/trunk/test/CodeGen/X86/pre-coalesce.mir Tue Jan 17 17:39:07 2017<br>
@@ -0,0 +1,122 @@<br>
+# RUN: llc -mtriple=x86_64-unknown-linux-gnu -run-pass simple-register-coalescing -o - %s | FileCheck %s<br>
+# Check that no partially redundant copy is left in the loop after register coalescing.<br>
+--- |<br>
+ ; ModuleID = '<stdin>'<br>
+ source_filename = "<stdin>"<br>
+ target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"<br>
+ target triple = "x86_64-unknown-linux-gnu"<br>
+<br>
+ @b = common local_unnamed_addr global i8* null, align 8<br>
+ @a = common local_unnamed_addr global i32 0, align 4<br>
+<br>
+ define i32 @foo() local_unnamed_addr {<br>
+ entry:<br>
+ %t0 = load i8*, i8** @b, align 8<br>
+ %t1 = load i8, i8* %t0, align 1<br>
+ %cmp4 = icmp eq i8 %t1, 0<br>
+ %t2 = load i32, i32* @a, align 4<br>
+ br i1 %cmp4, label %while.end, label %while.body.preheader<br>
+<br>
+ while.body.preheader: ; preds = %entry<br>
+ br label %while.body<br>
+<br>
+ while.body: ; preds = %while.body, %while.body.preheader<br>
+ %t3 = phi i32 [ %add3, %while.body ], [ %t2, %while.body.preheader ]<br>
+ %t4 = phi i8 [ %t5, %while.body ], [ %t1, %while.body.preheader ]<br>
+ %conv = sext i8 %t4 to i32<br>
+ %add = mul i32 %t3, 33<br>
+ %add3 = add nsw i32 %add, %conv<br>
+ store i32 %add3, i32* @a, align 4<br>
+ %t5 = load i8, i8* %t0, align 1<br>
+ %cmp = icmp eq i8 %t5, 0<br>
+ br i1 %cmp, label %while.end, label %while.body<br>
+<br>
+ while.end: ; preds = %while.body, %entry<br>
+ %.lcssa = phi i32 [ %t2, %entry ], [ %add3, %while.body ]<br>
+ ret i32 %.lcssa<br>
+ }<br>
+<br>
+...<br>
+---<br>
+# Check A = B and B = A copies will not exist in the loop at the same time.<br>
+# CHECK: name: foo<br>
+# CHECK: [[L1:bb.3.while.body]]:<br>
+# CHECK: %[[REGA:.*]] = COPY %[[REGB:.*]]<br>
+# CHECK-NOT: %[[REGB]] = COPY %[[REGA]]<br>
+# CHECK: JNE_1 %[[L1]]<br>
+<br>
+name: foo<br>
+alignment: 4<br>
+exposesReturnsTwice: false<br>
+legalized: false<br>
+regBankSelected: false<br>
+selected: false<br>
+tracksRegLiveness: true<br>
+registers:<br>
+ - { id: 0, class: gr64 }<br>
+ - { id: 1, class: gr8 }<br>
+ - { id: 2, class: gr32 }<br>
+ - { id: 3, class: gr32 }<br>
+ - { id: 4, class: gr8 }<br>
+ - { id: 5, class: gr32 }<br>
+ - { id: 6, class: gr8 }<br>
+ - { id: 7, class: gr32 }<br>
+ - { id: 8, class: gr32 }<br>
+ - { id: 9, class: gr32 }<br>
+ - { id: 10, class: gr32 }<br>
+ - { id: 11, class: gr32 }<br>
+ - { id: 12, class: gr8 }<br>
+ - { id: 13, class: gr32 }<br>
+frameInfo:<br>
+ isFrameAddressTaken: false<br>
+ isReturnAddressTaken: false<br>
+ hasStackMap: false<br>
+ hasPatchPoint: false<br>
+ stackSize: 0<br>
+ offsetAdjustment: 0<br>
+ maxAlignment: 0<br>
+ adjustsStack: false<br>
+ hasCalls: false<br>
+ maxCallFrameSize: 0<br>
+ hasOpaqueSPAdjustment: false<br>
+ hasVAStart: false<br>
+ hasMustTailInVarArgFunc: false<br>
+body: |<br>
+ bb.0.entry:<br>
+ successors: %bb.4(0x30000000), %bb.1.while.body.preheader(0x50000000)<br>
+<br>
+ %0 = MOV64rm %rip, 1, _, @b, _ :: (dereferenceable load 8 from @b)<br>
+ %12 = MOV8rm %0, 1, _, 0, _ :: (load 1 from %ir.t0)<br>
+ TEST8rr %12, %12, implicit-def %eflags<br>
+ %11 = MOV32rm %rip, 1, _, @a, _ :: (dereferenceable load 4 from @a)<br>
+ JNE_1 %bb.1.while.body.preheader, implicit killed %eflags<br>
+<br>
+ bb.4:<br>
+ successors: %bb.3.while.end(0x80000000)<br>
+<br>
+ %10 = COPY %11<br>
+ JMP_1 %bb.3.while.end<br>
+<br>
+ bb.1.while.body.preheader:<br>
+ successors: %bb.2.while.body(0x80000000)<br>
+<br>
+ bb.2.while.body:<br>
+ successors: %bb.3.while.end(0x04000000), %bb.2.while.body(0x7c000000)<br>
+<br>
+ %8 = MOVSX32rr8 %12<br>
+ %10 = COPY %11<br>
+ %10 = SHL32ri %10, 5, implicit-def dead %eflags<br>
+ %10 = ADD32rr %10, %11, implicit-def dead %eflags<br>
+ %10 = ADD32rr %10, %8, implicit-def dead %eflags<br>
+ MOV32mr %rip, 1, _, @a, _, %10 :: (store 4 into @a)<br>
+ %12 = MOV8rm %0, 1, _, 0, _ :: (load 1 from %ir.t0)<br>
+ TEST8rr %12, %12, implicit-def %eflags<br>
+ %11 = COPY %10<br>
+ JNE_1 %bb.2.while.body, implicit killed %eflags<br>
+ JMP_1 %bb.3.while.end<br>
+<br>
+ bb.3.while.end:<br>
+ %eax = COPY %10<br>
+ RET 0, killed %eax<br>
+<br>
+...<br>
<br>
<br>
_______________________________________________<br>
llvm-commits mailing list<br>
<a href="mailto:llvm-commits@lists.llvm.org" target="_blank">llvm-commits@lists.llvm.org</a><br>
<a href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits" rel="noreferrer" target="_blank">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits</a><br>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>