<html><head></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><br><div><div>On Aug 2, 2012, at 12:31 AM, Michael Liao wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><div>Some cases conflict with the previous effort by Bruno Cardoso Lopes to<br>remove partial register update stalls.<br><br>For example, sqrtsd with a memory operand is such an instruction: it<br>updates only part of the register in SSE. It should be selected if the<br>code is optimized for size. Otherwise, the sequence movsd + sqrtsd is<br>preferred over sqrtsd with a memory operand.<br></div></blockquote><div><br></div>Are you aware of other cases where it is a bad idea to perform memory folding?</div><div><br></div><div>This also seems to be breaking</div><div><a href="http://lab.llvm.org:8011/builders/dragonegg-x86_64-linux-gcc-4.6-test/builds/481">http://lab.llvm.org:8011/builders/dragonegg-x86_64-linux-gcc-4.6-test/builds/481</a><br>BUILD FAILED: failed make.check</div><div><br></div><div>I will try to limit the scope of this optimization to scalar instructions and see whether it can recover the bot.</div><div>Duncan, is it possible for me to reproduce this failure locally on my machine? 
I am not sure what is the best way to debug this.</div><div><br></div><div>Thanks,</div><div>Manman</div><div><br><blockquote type="cite"><div><br>Yours<br>- Michael<br><br>On Thu, 2012-08-02 at 00:56 +0000, Manman Ren wrote:<br><blockquote type="cite">Author: mren<br></blockquote><blockquote type="cite">Date: Wed Aug 1 19:56:42 2012<br></blockquote><blockquote type="cite">New Revision: 161152<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">URL: <a href="http://llvm.org/viewvc/llvm-project?rev=161152&view=rev">http://llvm.org/viewvc/llvm-project?rev=161152&view=rev</a><br></blockquote><blockquote type="cite">Log:<br></blockquote><blockquote type="cite">X86 Peephole: fold loads to the source register operand if possible.<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Machine CSE and other optimizations can remove instructions so folding<br></blockquote><blockquote type="cite">is possible at peephole while not possible at ISel.<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">This patch is a rework of r160919 and was tested on clang self-host on my local<br></blockquote><blockquote type="cite">machine.<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"><a href="rdar://10554090">rdar://10554090</a> and <a href="rdar://11873276">rdar://11873276</a><br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Modified:<br></blockquote><blockquote type="cite"> llvm/trunk/include/llvm/Target/TargetInstrInfo.h<br></blockquote><blockquote type="cite"> llvm/trunk/lib/CodeGen/PeepholeOptimizer.cpp<br></blockquote><blockquote type="cite"> llvm/trunk/lib/Target/X86/X86InstrInfo.cpp<br></blockquote><blockquote type="cite"> llvm/trunk/lib/Target/X86/X86InstrInfo.h<br></blockquote><blockquote type="cite"> llvm/trunk/test/CodeGen/X86/2012-05-19-avx2-store.ll<br></blockquote><blockquote type="cite"> 
llvm/trunk/test/CodeGen/X86/break-sse-dep.ll<br></blockquote><blockquote type="cite"> llvm/trunk/test/CodeGen/X86/fold-load.ll<br></blockquote><blockquote type="cite"> llvm/trunk/test/CodeGen/X86/fold-pcmpeqd-1.ll<br></blockquote><blockquote type="cite"> llvm/trunk/test/CodeGen/X86/sse-minmax.ll<br></blockquote><blockquote type="cite"> llvm/trunk/test/CodeGen/X86/vec_compare.ll<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Modified: llvm/trunk/include/llvm/Target/TargetInstrInfo.h<br></blockquote><blockquote type="cite">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Target/TargetInstrInfo.h?rev=161152&r1=161151&r2=161152&view=diff">http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Target/TargetInstrInfo.h?rev=161152&r1=161151&r2=161152&view=diff</a><br></blockquote><blockquote type="cite">==============================================================================<br></blockquote><blockquote type="cite">--- llvm/trunk/include/llvm/Target/TargetInstrInfo.h (original)<br></blockquote><blockquote type="cite">+++ llvm/trunk/include/llvm/Target/TargetInstrInfo.h Wed Aug 1 19:56:42 2012<br></blockquote><blockquote type="cite">@@ -14,6 +14,7 @@<br></blockquote><blockquote type="cite"> #ifndef LLVM_TARGET_TARGETINSTRINFO_H<br></blockquote><blockquote type="cite"> #define LLVM_TARGET_TARGETINSTRINFO_H<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">+#include "llvm/ADT/SmallSet.h"<br></blockquote><blockquote type="cite"> #include "llvm/MC/MCInstrInfo.h"<br></blockquote><blockquote type="cite"> #include "llvm/CodeGen/DFAPacketizer.h"<br></blockquote><blockquote type="cite"> #include "llvm/CodeGen/MachineFunction.h"<br></blockquote><blockquote type="cite">@@ -693,6 +694,16 @@<br></blockquote><blockquote type="cite"> return false;<br></blockquote><blockquote type="cite"> }<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">+ /// 
optimizeLoadInstr - Try to remove the load by folding it to a register<br></blockquote><blockquote type="cite">+ /// operand at the use. We fold the load instructions if and only if the<br></blockquote><blockquote type="cite">+ /// def and use are in the same BB.<br></blockquote><blockquote type="cite">+ virtual MachineInstr* optimizeLoadInstr(MachineInstr *MI,<br></blockquote><blockquote type="cite">+ const MachineRegisterInfo *MRI,<br></blockquote><blockquote type="cite">+ unsigned &FoldAsLoadDefReg,<br></blockquote><blockquote type="cite">+ MachineInstr *&DefMI) const {<br></blockquote><blockquote type="cite">+ return 0;<br></blockquote><blockquote type="cite">+ }<br></blockquote><blockquote type="cite">+<br></blockquote><blockquote type="cite"> /// FoldImmediate - 'Reg' is known to be defined by a move immediate<br></blockquote><blockquote type="cite"> /// instruction, try to fold the immediate into the use instruction.<br></blockquote><blockquote type="cite"> virtual bool FoldImmediate(MachineInstr *UseMI, MachineInstr *DefMI,<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Modified: llvm/trunk/lib/CodeGen/PeepholeOptimizer.cpp<br></blockquote><blockquote type="cite">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/PeepholeOptimizer.cpp?rev=161152&r1=161151&r2=161152&view=diff">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/PeepholeOptimizer.cpp?rev=161152&r1=161151&r2=161152&view=diff</a><br></blockquote><blockquote type="cite">==============================================================================<br></blockquote><blockquote type="cite">--- llvm/trunk/lib/CodeGen/PeepholeOptimizer.cpp (original)<br></blockquote><blockquote type="cite">+++ llvm/trunk/lib/CodeGen/PeepholeOptimizer.cpp Wed Aug 1 19:56:42 2012<br></blockquote><blockquote type="cite">@@ -78,6 +78,7 @@<br></blockquote><blockquote type="cite"> STATISTIC(NumBitcasts, "Number of bitcasts 
eliminated");<br></blockquote><blockquote type="cite"> STATISTIC(NumCmps, "Number of compares eliminated");<br></blockquote><blockquote type="cite"> STATISTIC(NumImmFold, "Number of move immediate folded");<br></blockquote><blockquote type="cite">+STATISTIC(NumLoadFold, "Number of loads folded");<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"> namespace {<br></blockquote><blockquote type="cite"> class PeepholeOptimizer : public MachineFunctionPass {<br></blockquote><blockquote type="cite">@@ -114,6 +115,7 @@<br></blockquote><blockquote type="cite"> bool foldImmediate(MachineInstr *MI, MachineBasicBlock *MBB,<br></blockquote><blockquote type="cite"> SmallSet<unsigned, 4> &ImmDefRegs,<br></blockquote><blockquote type="cite"> DenseMap<unsigned, MachineInstr*> &ImmDefMIs);<br></blockquote><blockquote type="cite">+ bool isLoadFoldable(MachineInstr *MI, unsigned &FoldAsLoadDefReg);<br></blockquote><blockquote type="cite"> };<br></blockquote><blockquote type="cite"> }<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">@@ -384,6 +386,29 @@<br></blockquote><blockquote type="cite"> return false;<br></blockquote><blockquote type="cite"> }<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">+/// isLoadFoldable - Check whether MI is a candidate for folding into a later<br></blockquote><blockquote type="cite">+/// instruction. 
We only fold loads to virtual registers and the virtual<br></blockquote><blockquote type="cite">+/// register defined has a single use.<br></blockquote><blockquote type="cite">+bool PeepholeOptimizer::isLoadFoldable(MachineInstr *MI,<br></blockquote><blockquote type="cite">+ unsigned &FoldAsLoadDefReg) {<br></blockquote><blockquote type="cite">+ if (MI->canFoldAsLoad()) {<br></blockquote><blockquote type="cite">+ const MCInstrDesc &MCID = MI->getDesc();<br></blockquote><blockquote type="cite">+ if (MCID.getNumDefs() == 1) {<br></blockquote><blockquote type="cite">+ unsigned Reg = MI->getOperand(0).getReg();<br></blockquote><blockquote type="cite">+ // To reduce compilation time, we check MRI->hasOneUse when inserting<br></blockquote><blockquote type="cite">+ // loads. It should be checked when processing uses of the load, since<br></blockquote><blockquote type="cite">+ // uses can be removed during peephole.<br></blockquote><blockquote type="cite">+ if (!MI->getOperand(0).getSubReg() &&<br></blockquote><blockquote type="cite">+ TargetRegisterInfo::isVirtualRegister(Reg) &&<br></blockquote><blockquote type="cite">+ MRI->hasOneUse(Reg)) {<br></blockquote><blockquote type="cite">+ FoldAsLoadDefReg = Reg;<br></blockquote><blockquote type="cite">+ return true;<br></blockquote><blockquote type="cite">+ }<br></blockquote><blockquote type="cite">+ }<br></blockquote><blockquote type="cite">+ }<br></blockquote><blockquote type="cite">+ return false;<br></blockquote><blockquote type="cite">+}<br></blockquote><blockquote type="cite">+<br></blockquote><blockquote type="cite"> bool PeepholeOptimizer::isMoveImmediate(MachineInstr *MI,<br></blockquote><blockquote type="cite"> SmallSet<unsigned, 4> &ImmDefRegs,<br></blockquote><blockquote type="cite"> DenseMap<unsigned, MachineInstr*> &ImmDefMIs) {<br></blockquote><blockquote type="cite">@@ -441,6 +466,7 @@<br></blockquote><blockquote type="cite"> SmallPtrSet<MachineInstr*, 8> LocalMIs;<br></blockquote><blockquote type="cite"> 
SmallSet<unsigned, 4> ImmDefRegs;<br></blockquote><blockquote type="cite"> DenseMap<unsigned, MachineInstr*> ImmDefMIs;<br></blockquote><blockquote type="cite">+ unsigned FoldAsLoadDefReg;<br></blockquote><blockquote type="cite"> for (MachineFunction::iterator I = MF.begin(), E = MF.end(); I != E; ++I) {<br></blockquote><blockquote type="cite"> MachineBasicBlock *MBB = &*I;<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">@@ -448,6 +474,7 @@<br></blockquote><blockquote type="cite"> LocalMIs.clear();<br></blockquote><blockquote type="cite"> ImmDefRegs.clear();<br></blockquote><blockquote type="cite"> ImmDefMIs.clear();<br></blockquote><blockquote type="cite">+ FoldAsLoadDefReg = 0;<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"> bool First = true;<br></blockquote><blockquote type="cite"> MachineBasicBlock::iterator PMII;<br></blockquote><blockquote type="cite">@@ -456,12 +483,17 @@<br></blockquote><blockquote type="cite"> MachineInstr *MI = &*MII;<br></blockquote><blockquote type="cite"> LocalMIs.insert(MI);<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">+ // If there exists an instruction which belongs to the following<br></blockquote><blockquote type="cite">+ // categories, we will discard the load candidate.<br></blockquote><blockquote type="cite"> if (MI->isLabel() || MI->isPHI() || MI->isImplicitDef() ||<br></blockquote><blockquote type="cite"> MI->isKill() || MI->isInlineAsm() || MI->isDebugValue() ||<br></blockquote><blockquote type="cite"> MI->hasUnmodeledSideEffects()) {<br></blockquote><blockquote type="cite">+ FoldAsLoadDefReg = 0;<br></blockquote><blockquote type="cite"> ++MII;<br></blockquote><blockquote type="cite"> continue;<br></blockquote><blockquote type="cite"> }<br></blockquote><blockquote type="cite">+ if (MI->mayStore() || MI->isCall())<br></blockquote><blockquote type="cite">+ FoldAsLoadDefReg = 0;<br></blockquote><blockquote 
type="cite"><br></blockquote><blockquote type="cite"> if (MI->isBitcast()) {<br></blockquote><blockquote type="cite"> if (optimizeBitcastInstr(MI, MBB)) {<br></blockquote><blockquote type="cite">@@ -489,6 +521,31 @@<br></blockquote><blockquote type="cite"> Changed |= foldImmediate(MI, MBB, ImmDefRegs, ImmDefMIs);<br></blockquote><blockquote type="cite"> }<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">+ // Check whether MI is a load candidate for folding into a later<br></blockquote><blockquote type="cite">+ // instruction. If MI is not a candidate, check whether we can fold an<br></blockquote><blockquote type="cite">+ // earlier load into MI.<br></blockquote><blockquote type="cite">+ if (!isLoadFoldable(MI, FoldAsLoadDefReg) && FoldAsLoadDefReg) {<br></blockquote><blockquote type="cite">+ // We need to fold load after optimizeCmpInstr, since optimizeCmpInstr<br></blockquote><blockquote type="cite">+ // can enable folding by converting SUB to CMP.<br></blockquote><blockquote type="cite">+ MachineInstr *DefMI = 0;<br></blockquote><blockquote type="cite">+ MachineInstr *FoldMI = TII->optimizeLoadInstr(MI, MRI,<br></blockquote><blockquote type="cite">+ FoldAsLoadDefReg, DefMI);<br></blockquote><blockquote type="cite">+ if (FoldMI) {<br></blockquote><blockquote type="cite">+ // Update LocalMIs since we replaced MI with FoldMI and deleted DefMI.<br></blockquote><blockquote type="cite">+ LocalMIs.erase(MI);<br></blockquote><blockquote type="cite">+ LocalMIs.erase(DefMI);<br></blockquote><blockquote type="cite">+ LocalMIs.insert(FoldMI);<br></blockquote><blockquote type="cite">+ MI->eraseFromParent();<br></blockquote><blockquote type="cite">+ DefMI->eraseFromParent();<br></blockquote><blockquote type="cite">+ ++NumLoadFold;<br></blockquote><blockquote type="cite">+<br></blockquote><blockquote type="cite">+ // MI is replaced with FoldMI.<br></blockquote><blockquote type="cite">+ Changed = true;<br></blockquote><blockquote type="cite">+ 
PMII = FoldMI;<br></blockquote><blockquote type="cite">+ MII = llvm::next(PMII);<br></blockquote><blockquote type="cite">+ continue;<br></blockquote><blockquote type="cite">+ }<br></blockquote><blockquote type="cite">+ }<br></blockquote><blockquote type="cite"> First = false;<br></blockquote><blockquote type="cite"> PMII = MII;<br></blockquote><blockquote type="cite"> ++MII;<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Modified: llvm/trunk/lib/Target/X86/X86InstrInfo.cpp<br></blockquote><blockquote type="cite">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrInfo.cpp?rev=161152&r1=161151&r2=161152&view=diff">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrInfo.cpp?rev=161152&r1=161151&r2=161152&view=diff</a><br></blockquote><blockquote type="cite">==============================================================================<br></blockquote><blockquote type="cite">--- llvm/trunk/lib/Target/X86/X86InstrInfo.cpp (original)<br></blockquote><blockquote type="cite">+++ llvm/trunk/lib/Target/X86/X86InstrInfo.cpp Wed Aug 1 19:56:42 2012<br></blockquote><blockquote type="cite">@@ -3323,6 +3323,81 @@<br></blockquote><blockquote type="cite"> return true;<br></blockquote><blockquote type="cite"> }<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">+/// optimizeLoadInstr - Try to remove the load by folding it to a register<br></blockquote><blockquote type="cite">+/// operand at the use. 
We fold the load instructions if load defines a virtual<br></blockquote><blockquote type="cite">+/// register, the virtual register is used once in the same BB, and the<br></blockquote><blockquote type="cite">+/// instructions in-between do not load or store, and have no side effects.<br></blockquote><blockquote type="cite">+MachineInstr* X86InstrInfo::<br></blockquote><blockquote type="cite">+optimizeLoadInstr(MachineInstr *MI, const MachineRegisterInfo *MRI,<br></blockquote><blockquote type="cite">+ unsigned &FoldAsLoadDefReg,<br></blockquote><blockquote type="cite">+ MachineInstr *&DefMI) const {<br></blockquote><blockquote type="cite">+ if (FoldAsLoadDefReg == 0)<br></blockquote><blockquote type="cite">+ return 0;<br></blockquote><blockquote type="cite">+ // To be conservative, if there exists another load, clear the load candidate.<br></blockquote><blockquote type="cite">+ if (MI->mayLoad()) {<br></blockquote><blockquote type="cite">+ FoldAsLoadDefReg = 0;<br></blockquote><blockquote type="cite">+ return 0;<br></blockquote><blockquote type="cite">+ }<br></blockquote><blockquote type="cite">+<br></blockquote><blockquote type="cite">+ // Check whether we can move DefMI here.<br></blockquote><blockquote type="cite">+ DefMI = MRI->getVRegDef(FoldAsLoadDefReg);<br></blockquote><blockquote type="cite">+ assert(DefMI);<br></blockquote><blockquote type="cite">+ bool SawStore = false;<br></blockquote><blockquote type="cite">+ if (!DefMI->isSafeToMove(this, 0, SawStore))<br></blockquote><blockquote type="cite">+ return 0;<br></blockquote><blockquote type="cite">+<br></blockquote><blockquote type="cite">+ // We try to commute MI if possible.<br></blockquote><blockquote type="cite">+ unsigned IdxEnd = (MI->isCommutable()) ? 
2 : 1;<br></blockquote><blockquote type="cite">+ for (unsigned Idx = 0; Idx < IdxEnd; Idx++) {<br></blockquote><blockquote type="cite">+ // Collect information about virtual register operands of MI.<br></blockquote><blockquote type="cite">+ unsigned SrcOperandId = 0;<br></blockquote><blockquote type="cite">+ bool FoundSrcOperand = false;<br></blockquote><blockquote type="cite">+ for (unsigned i = 0, e = MI->getDesc().getNumOperands(); i != e; ++i) {<br></blockquote><blockquote type="cite">+ MachineOperand &MO = MI->getOperand(i);<br></blockquote><blockquote type="cite">+ if (!MO.isReg())<br></blockquote><blockquote type="cite">+ continue;<br></blockquote><blockquote type="cite">+ unsigned Reg = MO.getReg();<br></blockquote><blockquote type="cite">+ if (Reg != FoldAsLoadDefReg)<br></blockquote><blockquote type="cite">+ continue;<br></blockquote><blockquote type="cite">+ // Do not fold if we have a subreg use or a def or multiple uses.<br></blockquote><blockquote type="cite">+ if (MO.getSubReg() || MO.isDef() || FoundSrcOperand)<br></blockquote><blockquote type="cite">+ return 0;<br></blockquote><blockquote type="cite">+<br></blockquote><blockquote type="cite">+ SrcOperandId = i;<br></blockquote><blockquote type="cite">+ FoundSrcOperand = true;<br></blockquote><blockquote type="cite">+ }<br></blockquote><blockquote type="cite">+ if (!FoundSrcOperand) return 0;<br></blockquote><blockquote type="cite">+<br></blockquote><blockquote type="cite">+ // Check whether we can fold the def into SrcOperandId.<br></blockquote><blockquote type="cite">+ SmallVector<unsigned, 8> Ops;<br></blockquote><blockquote type="cite">+ Ops.push_back(SrcOperandId);<br></blockquote><blockquote type="cite">+ MachineInstr *FoldMI = foldMemoryOperand(MI, Ops, DefMI);<br></blockquote><blockquote type="cite">+ if (FoldMI) {<br></blockquote><blockquote type="cite">+ FoldAsLoadDefReg = 0;<br></blockquote><blockquote type="cite">+ return FoldMI;<br></blockquote><blockquote type="cite">+ 
}<br></blockquote><blockquote type="cite">+<br></blockquote><blockquote type="cite">+ if (Idx == 1) {<br></blockquote><blockquote type="cite">+ // MI was changed but it didn't help, commute it back!<br></blockquote><blockquote type="cite">+ commuteInstruction(MI, false);<br></blockquote><blockquote type="cite">+ return 0;<br></blockquote><blockquote type="cite">+ }<br></blockquote><blockquote type="cite">+<br></blockquote><blockquote type="cite">+ // Check whether we can commute MI and enable folding.<br></blockquote><blockquote type="cite">+ if (MI->isCommutable()) {<br></blockquote><blockquote type="cite">+ MachineInstr *NewMI = commuteInstruction(MI, false);<br></blockquote><blockquote type="cite">+ // Unable to commute.<br></blockquote><blockquote type="cite">+ if (!NewMI) return 0;<br></blockquote><blockquote type="cite">+ if (NewMI != MI) {<br></blockquote><blockquote type="cite">+ // New instruction. It doesn't need to be kept.<br></blockquote><blockquote type="cite">+ NewMI->eraseFromParent();<br></blockquote><blockquote type="cite">+ return 0;<br></blockquote><blockquote type="cite">+ }<br></blockquote><blockquote type="cite">+ }<br></blockquote><blockquote type="cite">+ }<br></blockquote><blockquote type="cite">+ return 0;<br></blockquote><blockquote type="cite">+}<br></blockquote><blockquote type="cite">+<br></blockquote><blockquote type="cite"> /// Expand2AddrUndef - Expand a single-def pseudo instruction to a two-addr<br></blockquote><blockquote type="cite"> /// instruction with two undef reads of the register being defined. 
This is<br></blockquote><blockquote type="cite"> /// used for mapping:<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Modified: llvm/trunk/lib/Target/X86/X86InstrInfo.h<br></blockquote><blockquote type="cite">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrInfo.h?rev=161152&r1=161151&r2=161152&view=diff">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrInfo.h?rev=161152&r1=161151&r2=161152&view=diff</a><br></blockquote><blockquote type="cite">==============================================================================<br></blockquote><blockquote type="cite">--- llvm/trunk/lib/Target/X86/X86InstrInfo.h (original)<br></blockquote><blockquote type="cite">+++ llvm/trunk/lib/Target/X86/X86InstrInfo.h Wed Aug 1 19:56:42 2012<br></blockquote><blockquote type="cite">@@ -387,6 +387,14 @@<br></blockquote><blockquote type="cite"> unsigned SrcReg2, int CmpMask, int CmpValue,<br></blockquote><blockquote type="cite"> const MachineRegisterInfo *MRI) const;<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">+ /// optimizeLoadInstr - Try to remove the load by folding it to a register<br></blockquote><blockquote type="cite">+ /// operand at the use. 
We fold the load instructions if and only if the<br></blockquote><blockquote type="cite">+ /// def and use are in the same BB.<br></blockquote><blockquote type="cite">+ virtual MachineInstr* optimizeLoadInstr(MachineInstr *MI,<br></blockquote><blockquote type="cite">+ const MachineRegisterInfo *MRI,<br></blockquote><blockquote type="cite">+ unsigned &FoldAsLoadDefReg,<br></blockquote><blockquote type="cite">+ MachineInstr *&DefMI) const;<br></blockquote><blockquote type="cite">+<br></blockquote><blockquote type="cite"> private:<br></blockquote><blockquote type="cite"> MachineInstr * convertToThreeAddressWithLEA(unsigned MIOpc,<br></blockquote><blockquote type="cite"> MachineFunction::iterator &MFI,<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Modified: llvm/trunk/test/CodeGen/X86/2012-05-19-avx2-store.ll<br></blockquote><blockquote type="cite">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/2012-05-19-avx2-store.ll?rev=161152&r1=161151&r2=161152&view=diff">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/2012-05-19-avx2-store.ll?rev=161152&r1=161151&r2=161152&view=diff</a><br></blockquote><blockquote type="cite">==============================================================================<br></blockquote><blockquote type="cite">--- llvm/trunk/test/CodeGen/X86/2012-05-19-avx2-store.ll (original)<br></blockquote><blockquote type="cite">+++ llvm/trunk/test/CodeGen/X86/2012-05-19-avx2-store.ll Wed Aug 1 19:56:42 2012<br></blockquote><blockquote type="cite">@@ -3,8 +3,7 @@<br></blockquote><blockquote type="cite"> define void @double_save(<4 x i32>* %Ap, <4 x i32>* %Bp, <8 x i32>* %P) nounwind ssp {<br></blockquote><blockquote type="cite"> entry:<br></blockquote><blockquote type="cite"> ; CHECK: vmovaps<br></blockquote><blockquote type="cite">- ; CHECK: vmovaps<br></blockquote><blockquote type="cite">- ; CHECK: vinsertf128<br></blockquote><blockquote type="cite">+ ; CHECK: vinsertf128 $1, 
([[A0:%rdi|%rsi]]),<br></blockquote><blockquote type="cite"> ; CHECK: vmovups<br></blockquote><blockquote type="cite"> %A = load <4 x i32>* %Ap<br></blockquote><blockquote type="cite"> %B = load <4 x i32>* %Bp<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Modified: llvm/trunk/test/CodeGen/X86/break-sse-dep.ll<br></blockquote><blockquote type="cite">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/break-sse-dep.ll?rev=161152&r1=161151&r2=161152&view=diff">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/break-sse-dep.ll?rev=161152&r1=161151&r2=161152&view=diff</a><br></blockquote><blockquote type="cite">==============================================================================<br></blockquote><blockquote type="cite">--- llvm/trunk/test/CodeGen/X86/break-sse-dep.ll (original)<br></blockquote><blockquote type="cite">+++ llvm/trunk/test/CodeGen/X86/break-sse-dep.ll Wed Aug 1 19:56:42 2012<br></blockquote><blockquote type="cite">@@ -34,8 +34,7 @@<br></blockquote><blockquote type="cite"> define double @squirt(double* %x) nounwind {<br></blockquote><blockquote type="cite"> entry:<br></blockquote><blockquote type="cite"> ; CHECK: squirt:<br></blockquote><blockquote type="cite">-; CHECK: movsd ([[A0]]), %xmm0<br></blockquote><blockquote type="cite">-; CHECK: sqrtsd %xmm0, %xmm0<br></blockquote><blockquote type="cite">+; CHECK: sqrtsd ([[A0]]), %xmm0<br></blockquote><blockquote type="cite"> %z = load double* %x<br></blockquote><blockquote type="cite"> %t = call double @llvm.sqrt.f64(double %z)<br></blockquote><blockquote type="cite"> ret double %t<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Modified: llvm/trunk/test/CodeGen/X86/fold-load.ll<br></blockquote><blockquote type="cite">URL: <a 
href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fold-load.ll?rev=161152&r1=161151&r2=161152&view=diff">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fold-load.ll?rev=161152&r1=161151&r2=161152&view=diff</a><br></blockquote><blockquote type="cite">==============================================================================<br></blockquote><blockquote type="cite">--- llvm/trunk/test/CodeGen/X86/fold-load.ll (original)<br></blockquote><blockquote type="cite">+++ llvm/trunk/test/CodeGen/X86/fold-load.ll Wed Aug 1 19:56:42 2012<br></blockquote><blockquote type="cite">@@ -45,3 +45,29 @@<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"> }<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">+; <a href="rdar://10554090">rdar://10554090</a><br></blockquote><blockquote type="cite">+; xor in exit block will be CSE'ed and load will be folded to xor in entry.<br></blockquote><blockquote type="cite">+define i1 @test3(i32* %P, i32* %Q) nounwind {<br></blockquote><blockquote type="cite">+; CHECK: test3:<br></blockquote><blockquote type="cite">+; CHECK: movl 8(%esp), %eax<br></blockquote><blockquote type="cite">+; CHECK: xorl (%eax),<br></blockquote><blockquote type="cite">+; CHECK: j<br></blockquote><blockquote type="cite">+; CHECK-NOT: xor<br></blockquote><blockquote type="cite">+entry:<br></blockquote><blockquote type="cite">+ %0 = load i32* %P, align 4<br></blockquote><blockquote type="cite">+ %1 = load i32* %Q, align 4<br></blockquote><blockquote type="cite">+ %2 = xor i32 %0, %1<br></blockquote><blockquote type="cite">+ %3 = and i32 %2, 65535<br></blockquote><blockquote type="cite">+ %4 = icmp eq i32 %3, 0<br></blockquote><blockquote type="cite">+ br i1 %4, label %exit, label %land.end<br></blockquote><blockquote type="cite">+<br></blockquote><blockquote type="cite">+exit:<br></blockquote><blockquote type="cite">+ %shr.i.i19 = xor i32 %1, %0<br></blockquote><blockquote 
type="cite">+ %5 = and i32 %shr.i.i19, 2147418112<br></blockquote><blockquote type="cite">+ %6 = icmp eq i32 %5, 0<br></blockquote><blockquote type="cite">+ br label %land.end<br></blockquote><blockquote type="cite">+<br></blockquote><blockquote type="cite">+land.end:<br></blockquote><blockquote type="cite">+ %7 = phi i1 [ %6, %exit ], [ false, %entry ]<br></blockquote><blockquote type="cite">+ ret i1 %7<br></blockquote><blockquote type="cite">+}<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Modified: llvm/trunk/test/CodeGen/X86/fold-pcmpeqd-1.ll<br></blockquote><blockquote type="cite">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fold-pcmpeqd-1.ll?rev=161152&r1=161151&r2=161152&view=diff">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fold-pcmpeqd-1.ll?rev=161152&r1=161151&r2=161152&view=diff</a><br></blockquote><blockquote type="cite">==============================================================================<br></blockquote><blockquote type="cite">--- llvm/trunk/test/CodeGen/X86/fold-pcmpeqd-1.ll (original)<br></blockquote><blockquote type="cite">+++ llvm/trunk/test/CodeGen/X86/fold-pcmpeqd-1.ll Wed Aug 1 19:56:42 2012<br></blockquote><blockquote type="cite">@@ -1,11 +1,14 @@<br></blockquote><blockquote type="cite">-; RUN: llc < %s -march=x86 -mattr=+sse2 > %t<br></blockquote><blockquote type="cite">-; RUN: grep pcmpeqd %t | count 1<br></blockquote><blockquote type="cite">-; RUN: grep xor %t | count 1<br></blockquote><blockquote type="cite">-; RUN: not grep LCP %t<br></blockquote><blockquote type="cite">+; RUN: llc < %s -march=x86 -mattr=+sse2 | FileCheck %s<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"> define <2 x double> @foo() nounwind {<br></blockquote><blockquote type="cite"> ret <2 x double> bitcast (<2 x i64><i64 -1, i64 -1> to <2 x double>)<br></blockquote><blockquote type="cite">+; CHECK: foo:<br></blockquote><blockquote 
type="cite">+; CHECK: pcmpeqd %xmm{{[0-9]+}}, %xmm{{[0-9]+}}<br></blockquote><blockquote type="cite">+; CHECK-NEXT: ret<br></blockquote><blockquote type="cite"> }<br></blockquote><blockquote type="cite"> define <2 x double> @bar() nounwind {<br></blockquote><blockquote type="cite"> ret <2 x double> bitcast (<2 x i64><i64 0, i64 0> to <2 x double>)<br></blockquote><blockquote type="cite">+; CHECK: bar:<br></blockquote><blockquote type="cite">+; CHECK: xorps %xmm{{[0-9]+}}, %xmm{{[0-9]+}}<br></blockquote><blockquote type="cite">+; CHECK-NEXT: ret<br></blockquote><blockquote type="cite"> }<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Modified: llvm/trunk/test/CodeGen/X86/sse-minmax.ll<br></blockquote><blockquote type="cite">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse-minmax.ll?rev=161152&r1=161151&r2=161152&view=diff">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse-minmax.ll?rev=161152&r1=161151&r2=161152&view=diff</a><br></blockquote><blockquote type="cite">==============================================================================<br></blockquote><blockquote type="cite">--- llvm/trunk/test/CodeGen/X86/sse-minmax.ll (original)<br></blockquote><blockquote type="cite">+++ llvm/trunk/test/CodeGen/X86/sse-minmax.ll Wed Aug 1 19:56:42 2012<br></blockquote><blockquote type="cite">@@ -1,6 +1,6 @@<br></blockquote><blockquote type="cite">-; RUN: llc < %s -march=x86-64 -mcpu=nehalem -asm-verbose=false | FileCheck %s<br></blockquote><blockquote type="cite">-; RUN: llc < %s -march=x86-64 -mcpu=nehalem -asm-verbose=false -enable-unsafe-fp-math -enable-no-nans-fp-math | FileCheck -check-prefix=UNSAFE %s<br></blockquote><blockquote type="cite">-; RUN: llc < %s -march=x86-64 -mcpu=nehalem -asm-verbose=false -enable-no-nans-fp-math | FileCheck -check-prefix=FINITE %s<br></blockquote><blockquote type="cite">+; RUN: llc < %s -march=x86-64 -mtriple=x86_64-apple-darwin -mcpu=nehalem 
-asm-verbose=false | FileCheck %s<br></blockquote><blockquote type="cite">+; RUN: llc < %s -march=x86-64 -mtriple=x86_64-apple-darwin -mcpu=nehalem -asm-verbose=false -enable-unsafe-fp-math -enable-no-nans-fp-math | FileCheck -check-prefix=UNSAFE %s<br></blockquote><blockquote type="cite">+; RUN: llc < %s -march=x86-64 -mtriple=x86_64-apple-darwin -mcpu=nehalem -asm-verbose=false -enable-no-nans-fp-math | FileCheck -check-prefix=FINITE %s<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"> ; Some of these patterns can be matched as SSE min or max. Some of<br></blockquote><blockquote type="cite"> ; them can be matched provided that the operands are swapped.<br></blockquote><blockquote type="cite">@@ -137,16 +137,13 @@<br></blockquote><blockquote type="cite"> }<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"> ; CHECK: ogt_x:<br></blockquote><blockquote type="cite">-; CHECK-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; CHECK-NEXT: maxsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; CHECK-NEXT: maxsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; CHECK-NEXT: ret<br></blockquote><blockquote type="cite"> ; UNSAFE: ogt_x:<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: maxsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; UNSAFE-NEXT: maxsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; UNSAFE-NEXT: ret<br></blockquote><blockquote type="cite"> ; FINITE: ogt_x:<br></blockquote><blockquote type="cite">-; FINITE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; FINITE-NEXT: maxsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; FINITE-NEXT: maxsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; FINITE-NEXT: ret<br></blockquote><blockquote type="cite"> define double @ogt_x(double %x) nounwind 
{<br></blockquote><blockquote type="cite"> %c = fcmp ogt double %x, 0.000000e+00<br></blockquote><blockquote type="cite">@@ -155,16 +152,13 @@<br></blockquote><blockquote type="cite"> }<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"> ; CHECK: olt_x:<br></blockquote><blockquote type="cite">-; CHECK-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; CHECK-NEXT: minsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; CHECK-NEXT: minsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; CHECK-NEXT: ret<br></blockquote><blockquote type="cite"> ; UNSAFE: olt_x:<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: minsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; UNSAFE-NEXT: minsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; UNSAFE-NEXT: ret<br></blockquote><blockquote type="cite"> ; FINITE: olt_x:<br></blockquote><blockquote type="cite">-; FINITE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; FINITE-NEXT: minsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; FINITE-NEXT: minsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; FINITE-NEXT: ret<br></blockquote><blockquote type="cite"> define double @olt_x(double %x) nounwind {<br></blockquote><blockquote type="cite"> %c = fcmp olt double %x, 0.000000e+00<br></blockquote><blockquote type="cite">@@ -217,12 +211,10 @@<br></blockquote><blockquote type="cite"> ; CHECK: oge_x:<br></blockquote><blockquote type="cite"> ; CHECK: ucomisd %xmm1, %xmm0<br></blockquote><blockquote type="cite"> ; UNSAFE: oge_x:<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: maxsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; UNSAFE-NEXT: maxsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; 
UNSAFE-NEXT: ret<br></blockquote><blockquote type="cite"> ; FINITE: oge_x:<br></blockquote><blockquote type="cite">-; FINITE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; FINITE-NEXT: maxsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; FINITE-NEXT: maxsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; FINITE-NEXT: ret<br></blockquote><blockquote type="cite"> define double @oge_x(double %x) nounwind {<br></blockquote><blockquote type="cite"> %c = fcmp oge double %x, 0.000000e+00<br></blockquote><blockquote type="cite">@@ -233,12 +225,10 @@<br></blockquote><blockquote type="cite"> ; CHECK: ole_x:<br></blockquote><blockquote type="cite"> ; CHECK: ucomisd %xmm0, %xmm1<br></blockquote><blockquote type="cite"> ; UNSAFE: ole_x:<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: minsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; UNSAFE-NEXT: minsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; UNSAFE-NEXT: ret<br></blockquote><blockquote type="cite"> ; FINITE: ole_x:<br></blockquote><blockquote type="cite">-; FINITE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; FINITE-NEXT: minsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; FINITE-NEXT: minsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; FINITE-NEXT: ret<br></blockquote><blockquote type="cite"> define double @ole_x(double %x) nounwind {<br></blockquote><blockquote type="cite"> %c = fcmp ole double %x, 0.000000e+00<br></blockquote><blockquote type="cite">@@ -411,12 +401,10 @@<br></blockquote><blockquote type="cite"> ; CHECK: ugt_x:<br></blockquote><blockquote type="cite"> ; CHECK: ucomisd %xmm0, %xmm1<br></blockquote><blockquote type="cite"> ; UNSAFE: ugt_x:<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: 
maxsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; UNSAFE-NEXT: maxsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; UNSAFE-NEXT: ret<br></blockquote><blockquote type="cite"> ; FINITE: ugt_x:<br></blockquote><blockquote type="cite">-; FINITE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; FINITE-NEXT: maxsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; FINITE-NEXT: maxsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; FINITE-NEXT: ret<br></blockquote><blockquote type="cite"> define double @ugt_x(double %x) nounwind {<br></blockquote><blockquote type="cite"> %c = fcmp ugt double %x, 0.000000e+00<br></blockquote><blockquote type="cite">@@ -427,12 +415,10 @@<br></blockquote><blockquote type="cite"> ; CHECK: ult_x:<br></blockquote><blockquote type="cite"> ; CHECK: ucomisd %xmm1, %xmm0<br></blockquote><blockquote type="cite"> ; UNSAFE: ult_x:<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: minsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; UNSAFE-NEXT: minsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; UNSAFE-NEXT: ret<br></blockquote><blockquote type="cite"> ; FINITE: ult_x:<br></blockquote><blockquote type="cite">-; FINITE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; FINITE-NEXT: minsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; FINITE-NEXT: minsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; FINITE-NEXT: ret<br></blockquote><blockquote type="cite"> define double @ult_x(double %x) nounwind {<br></blockquote><blockquote type="cite"> %c = fcmp ult double %x, 0.000000e+00<br></blockquote><blockquote type="cite">@@ -482,12 +468,10 @@<br></blockquote><blockquote type="cite"> ; CHECK-NEXT: movap{{[sd]}} %xmm1, %xmm0<br></blockquote><blockquote type="cite"> ; CHECK-NEXT: ret<br></blockquote><blockquote 
type="cite"> ; UNSAFE: uge_x:<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: maxsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; UNSAFE-NEXT: maxsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; UNSAFE-NEXT: ret<br></blockquote><blockquote type="cite"> ; FINITE: uge_x:<br></blockquote><blockquote type="cite">-; FINITE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; FINITE-NEXT: maxsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; FINITE-NEXT: maxsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; FINITE-NEXT: ret<br></blockquote><blockquote type="cite"> define double @uge_x(double %x) nounwind {<br></blockquote><blockquote type="cite"> %c = fcmp uge double %x, 0.000000e+00<br></blockquote><blockquote type="cite">@@ -501,12 +485,10 @@<br></blockquote><blockquote type="cite"> ; CHECK-NEXT: movap{{[sd]}} %xmm1, %xmm0<br></blockquote><blockquote type="cite"> ; CHECK-NEXT: ret<br></blockquote><blockquote type="cite"> ; UNSAFE: ule_x:<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; UNSAFE-NEXT: minsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; UNSAFE-NEXT: minsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; UNSAFE-NEXT: ret<br></blockquote><blockquote type="cite"> ; FINITE: ule_x:<br></blockquote><blockquote type="cite">-; FINITE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; FINITE-NEXT: minsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; FINITE-NEXT: minsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; FINITE-NEXT: ret<br></blockquote><blockquote type="cite"> define double @ule_x(double %x) nounwind {<br></blockquote><blockquote type="cite"> %c = fcmp ule double %x, 0.000000e+00<br></blockquote><blockquote type="cite">@@ -515,8 +497,7 
@@<br></blockquote><blockquote type="cite"> }<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"> ; CHECK: uge_inverse_x:<br></blockquote><blockquote type="cite">-; CHECK-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; CHECK-NEXT: minsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; CHECK-NEXT: minsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; CHECK-NEXT: ret<br></blockquote><blockquote type="cite"> ; UNSAFE: uge_inverse_x:<br></blockquote><blockquote type="cite"> ; UNSAFE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">@@ -535,8 +516,7 @@<br></blockquote><blockquote type="cite"> }<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"> ; CHECK: ule_inverse_x:<br></blockquote><blockquote type="cite">-; CHECK-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite">-; CHECK-NEXT: maxsd %xmm1, %xmm0<br></blockquote><blockquote type="cite">+; CHECK-NEXT: maxsd LCP{{.*}}(%rip), %xmm0<br></blockquote><blockquote type="cite"> ; CHECK-NEXT: ret<br></blockquote><blockquote type="cite"> ; UNSAFE: ule_inverse_x:<br></blockquote><blockquote type="cite"> ; UNSAFE-NEXT: xorp{{[sd]}} %xmm1, %xmm1<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Modified: llvm/trunk/test/CodeGen/X86/vec_compare.ll<br></blockquote><blockquote type="cite">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_compare.ll?rev=161152&r1=161151&r2=161152&view=diff">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_compare.ll?rev=161152&r1=161151&r2=161152&view=diff</a><br></blockquote><blockquote type="cite">==============================================================================<br></blockquote><blockquote type="cite">--- llvm/trunk/test/CodeGen/X86/vec_compare.ll (original)<br></blockquote><blockquote type="cite">+++ llvm/trunk/test/CodeGen/X86/vec_compare.ll Wed Aug 1 
19:56:42 2012<br></blockquote><blockquote type="cite">@@ -1,4 +1,4 @@<br></blockquote><blockquote type="cite">-; RUN: llc < %s -march=x86 -mcpu=yonah | FileCheck %s<br></blockquote><blockquote type="cite">+; RUN: llc < %s -march=x86 -mcpu=yonah -mtriple=i386-apple-darwin | FileCheck %s<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"> define <4 x i32> @test1(<4 x i32> %A, <4 x i32> %B) nounwind {<br></blockquote><blockquote type="cite">@@ -14,8 +14,8 @@<br></blockquote><blockquote type="cite"> define <4 x i32> @test2(<4 x i32> %A, <4 x i32> %B) nounwind {<br></blockquote><blockquote type="cite"> ; CHECK: test2:<br></blockquote><blockquote type="cite"> ; CHECK: pcmp<br></blockquote><blockquote type="cite">-; CHECK: pcmp<br></blockquote><blockquote type="cite">-; CHECK: pxor<br></blockquote><blockquote type="cite">+; CHECK: pxor LCP<br></blockquote><blockquote type="cite">+; CHECK: movdqa<br></blockquote><blockquote type="cite"> ; CHECK: ret<br></blockquote><blockquote type="cite"> %C = icmp sge <4 x i32> %A, %B<br></blockquote><blockquote type="cite"> %D = sext <4 x i1> %C to <4 x i32><br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">_______________________________________________<br></blockquote><blockquote type="cite">llvm-commits mailing list<br></blockquote><blockquote type="cite"><a href="mailto:llvm-commits@cs.uiuc.edu">llvm-commits@cs.uiuc.edu</a><br></blockquote><blockquote type="cite"><a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits">http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits</a><br></blockquote><br><br></div></blockquote></div><br></body></html>