Hi Geoff,

I see the MachineVerifier complaining after this commit for our out-of-tree target. Moreover, AddressSanitizer is complaining about an invalid access:

==50770==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000008 (pc 0x000102e7e42e bp 0x700002f37170 sp 0x700002f36f60 T9)
==50770==The signal is caused by a READ memory access.
==50770==Hint: address points to the zero page.
#0 0x102e7e42d in (anonymous namespace)::MachineCopyPropagation::forwardUses(llvm::MachineInstr&) MachineCopyPropagation.cpp:596
#1 0x102e79c22 in (anonymous namespace)::MachineCopyPropagation::runOnMachineFunction(llvm::MachineFunction&) MachineCopyPropagation.cpp:806

The MachineVerifier error is:

*** Bad machine code: Instruction ending live segment doesn't read the register ***

Could you revert while we investigate?

Thanks,
-Quentin

On Oct 2, 2017, at 3:01 PM, Geoff Berry via llvm-commits <llvm-commits@lists.llvm.org> wrote:

> Author: gberry
> Date: Mon Oct 2 15:01:37 2017
> New Revision: 314729
>
> URL: http://llvm.org/viewvc/llvm-project?rev=314729&view=rev
> Log:
> Re-enable "[MachineCopyPropagation] Extend pass to do COPY source forwarding"
>
> Issues addressed since original review:
> - Avoid bug in regalloc greedy/machine verifier when forwarding to use
>   in an instruction that re-defines the same virtual register.
> - Fixed bug when forwarding to use in EarlyClobber instruction slot.
> - Fixed incorrect forwarding to register definitions that showed up in
>   explicit_uses() iterator (e.g. in INLINEASM).
> - Moved removal of dead instructions found by
>   LiveIntervals::shrinkToUses() outside of loop iterating over
>   instructions to avoid instructions being deleted while pointed to by
>   iterator.
> - Fixed ARMLoadStoreOptimizer bug exposed by this change in r311907.
> - The pass no longer forwards COPYs to physical register uses, since
>   doing so can break code that implicitly relies on the physical
>   register number of the use.
> - The pass no longer forwards COPYs to undef uses, since doing so
>   can break the machine verifier by creating LiveRanges that don't
>   end on a use (since the undef operand is not considered a use).
>
>   [MachineCopyPropagation] Extend pass to do COPY source forwarding
>
>   This change extends MachineCopyPropagation to do COPY source forwarding.
>
>   This change also extends the MachineCopyPropagation pass to be able to
>   be run during register allocation, after physical registers have been
>   assigned, but before the virtual registers have been re-written, which
>   allows it to remove virtual register COPY LiveIntervals that become dead
>   through the forwarding of all of their uses.
>
> Modified:
>     llvm/trunk/include/llvm/CodeGen/Passes.h
>     llvm/trunk/include/llvm/InitializePasses.h
>     llvm/trunk/lib/CodeGen/CodeGen.cpp
>     llvm/trunk/lib/CodeGen/MachineCopyPropagation.cpp
>     llvm/trunk/lib/CodeGen/TargetPassConfig.cpp
>     llvm/trunk/test/CodeGen/AArch64/aarch64-fold-lslfast.ll
>     llvm/trunk/test/CodeGen/AArch64/arm64-AdvSIMD-Scalar.ll
>     llvm/trunk/test/CodeGen/AArch64/arm64-zero-cycle-regmov.ll
>     llvm/trunk/test/CodeGen/AArch64/f16-instructions.ll
>     llvm/trunk/test/CodeGen/AArch64/flags-multiuse.ll
>     llvm/trunk/test/CodeGen/AArch64/merge-store-dependency.ll
>     llvm/trunk/test/CodeGen/AArch64/neg-imm.ll
>     llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-sgprs.ll
>     llvm/trunk/test/CodeGen/AMDGPU/mad-mix.ll
>     llvm/trunk/test/CodeGen/AMDGPU/ret.ll
>     llvm/trunk/test/CodeGen/ARM/atomic-op.ll
>     llvm/trunk/test/CodeGen/ARM/intrinsics-overflow.ll
>     llvm/trunk/test/CodeGen/ARM/swifterror.ll
>     llvm/trunk/test/CodeGen/Mips/llvm-ir/sub.ll
>     llvm/trunk/test/CodeGen/PowerPC/fma-mutate.ll
>     llvm/trunk/test/CodeGen/PowerPC/gpr-vsr-spill.ll
>     llvm/trunk/test/CodeGen/PowerPC/inlineasm-i64-reg.ll
>     llvm/trunk/test/CodeGen/PowerPC/opt-li-add-to-addi.ll
>     llvm/trunk/test/CodeGen/PowerPC/tail-dup-layout.ll
>     llvm/trunk/test/CodeGen/SPARC/atomics.ll
>     llvm/trunk/test/CodeGen/Thumb/thumb-shrink-wrapping.ll
>     llvm/trunk/test/CodeGen/X86/2006-03-01-InstrSchedBug.ll
>     llvm/trunk/test/CodeGen/X86/arg-copy-elide.ll
>     llvm/trunk/test/CodeGen/X86/avg.ll
>     llvm/trunk/test/CodeGen/X86/avx-load-store.ll
>     llvm/trunk/test/CodeGen/X86/avx512-bugfix-25270.ll
>     llvm/trunk/test/CodeGen/X86/avx512-calling-conv.ll
>     llvm/trunk/test/CodeGen/X86/avx512-mask-op.ll
>     llvm/trunk/test/CodeGen/X86/avx512-schedule.ll
>     llvm/trunk/test/CodeGen/X86/avx512bw-intrinsics-upgrade.ll
>     llvm/trunk/test/CodeGen/X86/buildvec-insertvec.ll
>     llvm/trunk/test/CodeGen/X86/combine-fcopysign.ll
>     llvm/trunk/test/CodeGen/X86/combine-shl.ll
>     llvm/trunk/test/CodeGen/X86/complex-fastmath.ll
>     llvm/trunk/test/CodeGen/X86/divide-by-constant.ll
>     llvm/trunk/test/CodeGen/X86/fmaxnum.ll
>     llvm/trunk/test/CodeGen/X86/fmf-flags.ll
>     llvm/trunk/test/CodeGen/X86/fminnum.ll
>     llvm/trunk/test/CodeGen/X86/fp128-i128.ll
>     llvm/trunk/test/CodeGen/X86/haddsub-2.ll
>     llvm/trunk/test/CodeGen/X86/haddsub-undef.ll
>     llvm/trunk/test/CodeGen/X86/half.ll
>     llvm/trunk/test/CodeGen/X86/inline-asm-fpstack.ll
>     llvm/trunk/test/CodeGen/X86/ipra-local-linkage.ll
>     llvm/trunk/test/CodeGen/X86/localescape.ll
>     llvm/trunk/test/CodeGen/X86/machine-cp.ll
>     llvm/trunk/test/CodeGen/X86/mul-i1024.ll
>     llvm/trunk/test/CodeGen/X86/mul-i512.ll
>     llvm/trunk/test/CodeGen/X86/mul128.ll
>     llvm/trunk/test/CodeGen/X86/mulvi32.ll
>     llvm/trunk/test/CodeGen/X86/pmul.ll
>     llvm/trunk/test/CodeGen/X86/powi.ll
>     llvm/trunk/test/CodeGen/X86/pr11334.ll
>     llvm/trunk/test/CodeGen/X86/pr29112.ll
>     llvm/trunk/test/CodeGen/X86/psubus.ll
>     llvm/trunk/test/CodeGen/X86/shrink-wrap-chkstk.ll
>     llvm/trunk/test/CodeGen/X86/sqrt-fastmath.ll
>     llvm/trunk/test/CodeGen/X86/sse1.ll
>     llvm/trunk/test/CodeGen/X86/sse3-avx-addsub-2.ll
>     llvm/trunk/test/CodeGen/X86/statepoint-live-in.ll
>     llvm/trunk/test/CodeGen/X86/statepoint-stack-usage.ll
>     llvm/trunk/test/CodeGen/X86/vec_fp_to_int.ll
>     llvm/trunk/test/CodeGen/X86/vec_int_to_fp.ll
>     llvm/trunk/test/CodeGen/X86/vec_minmax_sint.ll
>     llvm/trunk/test/CodeGen/X86/vec_shift4.ll
>     llvm/trunk/test/CodeGen/X86/vector-blend.ll
>     llvm/trunk/test/CodeGen/X86/vector-idiv-sdiv-128.ll
>     llvm/trunk/test/CodeGen/X86/vector-idiv-udiv-128.ll
>     llvm/trunk/test/CodeGen/X86/vector-mul.ll
>     llvm/trunk/test/CodeGen/X86/vector-rotate-128.ll
>     llvm/trunk/test/CodeGen/X86/vector-sext.ll
>     llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll
>     llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll
>     llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll
>     llvm/trunk/test/CodeGen/X86/vector-shuffle-combining.ll
>     llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll
>     llvm/trunk/test/CodeGen/X86/vector-zext.ll
>     llvm/trunk/test/CodeGen/X86/vselect-minmax.ll
>     llvm/trunk/test/CodeGen/X86/widen_conv-3.ll
>     llvm/trunk/test/CodeGen/X86/widen_conv-4.ll
>     llvm/trunk/test/CodeGen/X86/x86-interleaved-access.ll
>     llvm/trunk/test/CodeGen/X86/x86-shrink-wrap-unwind.ll
>     llvm/trunk/test/CodeGen/X86/x86-shrink-wrapping.ll
>     llvm/trunk/test/DebugInfo/X86/live-debug-variables.ll
>     llvm/trunk/test/DebugInfo/X86/spill-nospill.ll
>
> Modified: llvm/trunk/include/llvm/CodeGen/Passes.h
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/CodeGen/Passes.h?rev=314729&r1=314728&r2=314729&view=diff
> ==============================================================================
> --- llvm/trunk/include/llvm/CodeGen/Passes.h (original)
> +++ llvm/trunk/include/llvm/CodeGen/Passes.h Mon Oct 2 15:01:37 2017
> @@ -278,6 +278,11 @@ namespace llvm {
>    /// MachineSinking - This pass performs sinking on machine instructions.
>    extern char &MachineSinkingID;
>
> +  /// MachineCopyPropagationPreRegRewrite - This pass performs copy propagation
> +  /// on machine instructions after register allocation but before virtual
> +  /// register re-writing..
> +  extern char &MachineCopyPropagationPreRegRewriteID;
> +
>    /// MachineCopyPropagation - This pass performs copy propagation on
>    /// machine instructions.
>    extern char &MachineCopyPropagationID;
>
> Modified: llvm/trunk/include/llvm/InitializePasses.h
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/InitializePasses.h?rev=314729&r1=314728&r2=314729&view=diff
> ==============================================================================
> --- llvm/trunk/include/llvm/InitializePasses.h (original)
> +++ llvm/trunk/include/llvm/InitializePasses.h Mon Oct 2 15:01:37 2017
> @@ -233,6 +233,7 @@ void initializeMachineBranchProbabilityI
>  void initializeMachineCSEPass(PassRegistry&);
>  void initializeMachineCombinerPass(PassRegistry&);
>  void initializeMachineCopyPropagationPass(PassRegistry&);
> +void initializeMachineCopyPropagationPreRegRewritePass(PassRegistry&);
>  void initializeMachineDominanceFrontierPass(PassRegistry&);
>  void initializeMachineDominatorTreePass(PassRegistry&);
>  void initializeMachineFunctionPrinterPassPass(PassRegistry&);
>
> Modified: llvm/trunk/lib/CodeGen/CodeGen.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/CodeGen.cpp?rev=314729&r1=314728&r2=314729&view=diff
> ==============================================================================
> --- llvm/trunk/lib/CodeGen/CodeGen.cpp (original)
> +++ llvm/trunk/lib/CodeGen/CodeGen.cpp Mon Oct 2 15:01:37 2017
> @@ -53,6 +53,7 @@ void llvm::initializeCodeGen(PassRegistr
>    initializeMachineCSEPass(Registry);
>    initializeMachineCombinerPass(Registry);
>    initializeMachineCopyPropagationPass(Registry);
> +  initializeMachineCopyPropagationPreRegRewritePass(Registry);
>    initializeMachineDominatorTreePass(Registry);
>    initializeMachineFunctionPrinterPassPass(Registry);
>    initializeMachineLICMPass(Registry);
>
> Modified: llvm/trunk/lib/CodeGen/MachineCopyPropagation.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MachineCopyPropagation.cpp?rev=314729&r1=314728&r2=314729&view=diff
> ==============================================================================
> --- llvm/trunk/lib/CodeGen/MachineCopyPropagation.cpp (original)
> +++ llvm/trunk/lib/CodeGen/MachineCopyPropagation.cpp Mon Oct 2 15:01:37 2017
> @@ -7,25 +7,71 @@
>  //
>  //===----------------------------------------------------------------------===//
>  //
> -// This is an extremely simple MachineInstr-level copy propagation pass.
> +// This is a simple MachineInstr-level copy forwarding pass.  It may be run at
> +// two places in the codegen pipeline:
> +//   - After register allocation but before virtual registers have been remapped
> +//     to physical registers.
> +//   - After physical register remapping.
> +//
> +// The optimizations done vary slightly based on whether virtual registers are
> +// still present.  In both cases, this pass forwards the source of COPYs to the
> +// users of their destinations when doing so is legal.  For example:
> +//
> +//   %vreg1 = COPY %vreg0
> +//   ...
> +//   ... = OP %vreg1
> +//
> +// If
> +//   - the physical register assigned to %vreg0 has not been clobbered by the
> +//     time of the use of %vreg1
> +//   - the register class constraints are satisfied
> +//   - the COPY def is the only value that reaches OP
> +// then this pass replaces the above with:
> +//
> +//   %vreg1 = COPY %vreg0
> +//   ...
> +//   ... = OP %vreg0
> +//
> +// and updates the relevant state required by VirtRegMap (e.g. LiveIntervals).
> +// COPYs whose LiveIntervals become dead as a result of this forwarding (i.e. if
> +// all uses of %vreg1 are changed to %vreg0) are removed.
> +//
> +// When being run with only physical registers, this pass will also remove some
> +// redundant COPYs.  For example:
> +//
> +//   %R1 = COPY %R0
> +//   ...        // No clobber of %R1
> +//   %R0 = COPY %R1 <<< Removed
> +//
> +// or
> +//
> +//   %R1 = COPY %R0
> +//   ...        // No clobber of %R0
> +//   %R1 = COPY %R0 <<< Removed
>  //
>  //===----------------------------------------------------------------------===//
>
> +#include "LiveDebugVariables.h"
>  #include "llvm/ADT/DenseMap.h"
>  #include "llvm/ADT/STLExtras.h"
>  #include "llvm/ADT/SetVector.h"
>  #include "llvm/ADT/SmallVector.h"
>  #include "llvm/ADT/Statistic.h"
>  #include "llvm/ADT/iterator_range.h"
> +#include "llvm/CodeGen/LiveRangeEdit.h"
> +#include "llvm/CodeGen/LiveStackAnalysis.h"
>  #include "llvm/CodeGen/MachineBasicBlock.h"
>  #include "llvm/CodeGen/MachineFunction.h"
>  #include "llvm/CodeGen/MachineFunctionPass.h"
>  #include "llvm/CodeGen/MachineInstr.h"
>  #include "llvm/CodeGen/MachineOperand.h"
>  #include "llvm/CodeGen/MachineRegisterInfo.h"
> +#include "llvm/CodeGen/Passes.h"
> +#include "llvm/CodeGen/VirtRegMap.h"
>  #include "llvm/MC/MCRegisterInfo.h"
>  #include "llvm/Pass.h"
>  #include "llvm/Support/Debug.h"
> +#include "llvm/Support/DebugCounter.h"
>  #include "llvm/Support/raw_ostream.h"
>  #include "llvm/Target/TargetInstrInfo.h"
>  #include "llvm/Target/TargetRegisterInfo.h"
> @@ -38,6 +84,9 @@ using namespace llvm;
>  #define DEBUG_TYPE "machine-cp"
>
>  STATISTIC(NumDeletes, "Number of dead copies deleted");
> +STATISTIC(NumCopyForwards, "Number of copy uses forwarded");
> +DEBUG_COUNTER(FwdCounter, "machine-cp-fwd",
> +              "Controls which register COPYs are forwarded");
>
>  namespace {
>
> @@ -45,19 +94,42 @@ using RegList = SmallVector<unsigned, 4>
>  using SourceMap = DenseMap<unsigned, RegList>;
>  using Reg2MIMap = DenseMap<unsigned, MachineInstr *>;
>
> -  class MachineCopyPropagation : public MachineFunctionPass {
> +  class MachineCopyPropagation : public MachineFunctionPass,
> +                                 private LiveRangeEdit::Delegate {
>      const TargetRegisterInfo *TRI;
>      const TargetInstrInfo *TII;
> -    const MachineRegisterInfo *MRI;
> +    MachineRegisterInfo *MRI;
> +    MachineFunction *MF;
> +    SlotIndexes *Indexes;
> +    LiveIntervals *LIS;
> +    const VirtRegMap *VRM;
> +    // True if this pass being run before virtual registers are remapped to
> +    // physical ones.
> +    bool PreRegRewrite;
> +    bool NoSubRegLiveness;
> +
> +  protected:
> +    MachineCopyPropagation(char &ID, bool PreRegRewrite)
> +        : MachineFunctionPass(ID), PreRegRewrite(PreRegRewrite) {}
>
>    public:
>      static char ID; // Pass identification, replacement for typeid
>
> -    MachineCopyPropagation() : MachineFunctionPass(ID) {
> +    MachineCopyPropagation() : MachineCopyPropagation(ID, false) {
>        initializeMachineCopyPropagationPass(*PassRegistry::getPassRegistry());
>      }
>
>      void getAnalysisUsage(AnalysisUsage &AU) const override {
> +      if (PreRegRewrite) {
> +        AU.addRequired<SlotIndexes>();
> +        AU.addPreserved<SlotIndexes>();
> +        AU.addRequired<LiveIntervals>();
> +        AU.addPreserved<LiveIntervals>();
> +        AU.addRequired<VirtRegMap>();
> +        AU.addPreserved<VirtRegMap>();
> +        AU.addPreserved<LiveDebugVariables>();
> +        AU.addPreserved<LiveStacks>();
> +      }
>        AU.setPreservesCFG();
>        MachineFunctionPass::getAnalysisUsage(AU);
>      }
> @@ -65,6 +137,10 @@ using Reg2MIMap = DenseMap<unsigned, Mac
>      bool runOnMachineFunction(MachineFunction &MF) override;
>
>      MachineFunctionProperties getRequiredProperties() const override {
> +      if (PreRegRewrite)
> +        return MachineFunctionProperties()
> +            .set(MachineFunctionProperties::Property::NoPHIs)
> +            .set(MachineFunctionProperties::Property::TracksLiveness);
>        return MachineFunctionProperties().set(
>            MachineFunctionProperties::Property::NoVRegs);
>      }
> @@ -74,9 +150,33 @@ using Reg2MIMap = DenseMap<unsigned, Mac
>      void ReadRegister(unsigned Reg);
>      void CopyPropagateBlock(MachineBasicBlock &MBB);
>      bool eraseIfRedundant(MachineInstr &Copy, unsigned Src, unsigned Def);
> +    unsigned getPhysReg(unsigned Reg, unsigned SubReg);
> +    unsigned getPhysReg(const MachineOperand &Opnd) {
> +      return getPhysReg(Opnd.getReg(), Opnd.getSubReg());
> +    }
> +    unsigned getFullPhysReg(const MachineOperand &Opnd) {
> +      return getPhysReg(Opnd.getReg(), 0);
> +    }
> +    void forwardUses(MachineInstr &MI);
> +    bool isForwardableRegClassCopy(const MachineInstr &Copy,
> +                                   const MachineInstr &UseI);
> +    std::tuple<unsigned, unsigned, bool>
> +    checkUseSubReg(const MachineOperand &CopySrc, const MachineOperand &MOUse);
> +    bool hasImplicitOverlap(const MachineInstr &MI, const MachineOperand &Use);
> +    void narrowRegClass(const MachineInstr &MI, const MachineOperand &MOUse,
> +                        unsigned NewUseReg, unsigned NewUseSubReg);
> +    void updateForwardedCopyLiveInterval(const MachineInstr &Copy,
> +                                         const MachineInstr &UseMI,
> +                                         bool UseIsEarlyClobber,
> +                                         unsigned OrigUseReg,
> +                                         unsigned NewUseReg,
> +                                         unsigned NewUseSubReg);
> +    /// LiveRangeEdit callback for eliminateDeadDefs().
> +    void LRE_WillEraseInstruction(MachineInstr *MI) override;
>
>      /// Candidates for deletion.
>      SmallSetVector<MachineInstr*, 8> MaybeDeadCopies;
> +    SmallVector<MachineInstr*, 8> ShrunkDeadInsts;
>
>      /// Def -> available copies map.
>      Reg2MIMap AvailCopyMap;
> @@ -90,6 +190,14 @@ using Reg2MIMap = DenseMap<unsigned, Mac
>      bool Changed;
>    };
>
> +  class MachineCopyPropagationPreRegRewrite : public MachineCopyPropagation {
> +  public:
> +    static char ID; // Pass identification, replacement for typeid
> +    MachineCopyPropagationPreRegRewrite()
> +        : MachineCopyPropagation(ID, true) {
> +      initializeMachineCopyPropagationPreRegRewritePass(*PassRegistry::getPassRegistry());
> +    }
> +  };
>  } // end anonymous namespace
>
>  char MachineCopyPropagation::ID = 0;
> @@ -99,6 +207,29 @@ char &llvm::MachineCopyPropagationID = M
>  INITIALIZE_PASS(MachineCopyPropagation, DEBUG_TYPE,
>                  "Machine Copy Propagation Pass", false, false)
>
> +/// We have two separate passes that are very similar, the only difference being
> +/// where they are meant to be run in the pipeline.  This is done for several
> +/// reasons:
> +/// - the two passes have different dependencies
> +/// - some targets want to disable the later run of this pass, but not the
> +///   earlier one (e.g. NVPTX and WebAssembly)
> +/// - it allows for easier debugging via llc
> +
> +char MachineCopyPropagationPreRegRewrite::ID = 0;
> +char &llvm::MachineCopyPropagationPreRegRewriteID = MachineCopyPropagationPreRegRewrite::ID;
> +
> +INITIALIZE_PASS_BEGIN(MachineCopyPropagationPreRegRewrite,
> +                      "machine-cp-prerewrite",
> +                      "Machine Copy Propagation Pre-Register Rewrite Pass",
> +                      false, false)
> +INITIALIZE_PASS_DEPENDENCY(SlotIndexes)
> +INITIALIZE_PASS_DEPENDENCY(LiveIntervals)
> +INITIALIZE_PASS_DEPENDENCY(VirtRegMap)
> +INITIALIZE_PASS_END(MachineCopyPropagationPreRegRewrite,
> +                    "machine-cp-prerewrite",
> +                    "Machine Copy Propagation Pre-Register Rewrite Pass", false,
> +                    false)
> +
>  /// Remove any entry in \p Map where the register is a subregister or equal to
>  /// a register contained in \p Regs.
>  static void removeRegsFromMap(Reg2MIMap &Map, const RegList &Regs,
> @@ -139,6 +270,10 @@ void MachineCopyPropagation::ClobberRegi
>  }
>
>  void MachineCopyPropagation::ReadRegister(unsigned Reg) {
> +  // We don't track MaybeDeadCopies when running pre-VirtRegRewriter.
> +  if (PreRegRewrite)
> +    return;
> +
>    // If 'Reg' is defined by a copy, the copy is no longer a candidate
>    // for elimination.
>    for (MCRegAliasIterator AI(Reg, TRI, true); AI.isValid(); ++AI) {
> @@ -170,6 +305,46 @@ static bool isNopCopy(const MachineInstr
>    return SubIdx == TRI->getSubRegIndex(PreviousDef, Def);
>  }
>
> +/// Return the physical register assigned to \p Reg if it is a virtual register,
> +/// otherwise just return the physical reg from the operand itself.
> +///
> +/// If \p SubReg is 0 then return the full physical register assigned to the
> +/// virtual register ignoring subregs.  If we aren't tracking sub-reg liveness
> +/// then we need to use this to be more conservative with clobbers by killing
> +/// all super reg and their sub reg COPYs as well.  This is to prevent COPY
> +/// forwarding in cases like the following:
> +///
> +///   %vreg2 = COPY %vreg1:sub1
> +///   %vreg3 = COPY %vreg1:sub0
> +///   ... = OP1 %vreg2
> +///   ... = OP2 %vreg3
> +///
> +/// After forward %vreg2 (assuming this is the last use of %vreg1) and
> +/// VirtRegRewriter adding kill markers we have:
> +///
> +///   %vreg3 = COPY %vreg1:sub0
> +///   ... = OP1 %vreg1:sub1<kill>
> +///   ... = OP2 %vreg3
> +///
> +/// If %vreg3 is assigned to a sub-reg of %vreg1, then after rewriting we have:
> +///
> +///   ... = OP1 R0:sub1, R0<imp-use,kill>
> +///   ... = OP2 R0:sub0
> +///
> +/// and the use of R0 by OP2 will not have a valid definition.
> +unsigned MachineCopyPropagation::getPhysReg(unsigned Reg, unsigned SubReg) {
> +
> +  // Physical registers cannot have subregs.
> +  if (!TargetRegisterInfo::isVirtualRegister(Reg))
> +    return Reg;
> +
> +  assert(PreRegRewrite && "Unexpected virtual register encountered");
> +  Reg = VRM->getPhys(Reg);
> +  if (SubReg && !NoSubRegLiveness)
> +    Reg = TRI->getSubReg(Reg, SubReg);
> +  return Reg;
> +}
> +
>  /// Remove instruction \p Copy if there exists a previous copy that copies the
>  /// register \p Src to the register \p Def; This may happen indirectly by
>  /// copying the super registers.
> @@ -207,6 +382,397 @@ bool MachineCopyPropagation::eraseIfRedu
>    return true;
>  }
>
> +
> +/// Decide whether we should forward the destination of \param Copy to its use
> +/// in \param UseI based on the register class of the Copy operands.  Same-class
> +/// COPYs are always accepted by this function, but cross-class COPYs are only
> +/// accepted if they are forwarded to another COPY with the operand register
> +/// classes reversed.  For example:
> +///
> +///   RegClassA = COPY RegClassB  // Copy parameter
> +///   ...
> +///   RegClassB = COPY RegClassA  // UseI parameter
> +///
> +/// which after forwarding becomes
> +///
> +///   RegClassA = COPY RegClassB
> +///   ...
> +///   RegClassB = COPY RegClassB
> +///
> +/// so we have reduced the number of cross-class COPYs and potentially
> +/// introduced a no COPY that can be removed.
> +bool MachineCopyPropagation::isForwardableRegClassCopy(
> +    const MachineInstr &Copy, const MachineInstr &UseI) {
> +  auto isCross = [&](const MachineOperand &Dst, const MachineOperand &Src) {
> +    unsigned DstReg = Dst.getReg();
> +    unsigned SrcPhysReg = getPhysReg(Src);
> +    const TargetRegisterClass *DstRC;
> +    if (TargetRegisterInfo::isVirtualRegister(DstReg)) {
> +      DstRC = MRI->getRegClass(DstReg);
> +      unsigned DstSubReg = Dst.getSubReg();
> +      if (DstSubReg)
> +        SrcPhysReg = TRI->getMatchingSuperReg(SrcPhysReg, DstSubReg, DstRC);
> +    } else
> +      DstRC = TRI->getMinimalPhysRegClass(DstReg);
> +
> +    return !DstRC->contains(SrcPhysReg);
> +  };
> +
> +  const MachineOperand &CopyDst = Copy.getOperand(0);
> +  const MachineOperand &CopySrc = Copy.getOperand(1);
> +
> +  if (!isCross(CopyDst, CopySrc))
> +    return true;
> +
> +  if (!UseI.isCopy())
> +    return false;
> +
> +  assert(getFullPhysReg(UseI.getOperand(1)) == getFullPhysReg(CopyDst));
> +  return !isCross(UseI.getOperand(0), CopySrc);
> +}
> +
> +/// Check that the subregs on the copy source operand (\p CopySrc) and the use
> +/// operand to be forwarded to (\p MOUse) are compatible with doing the
> +/// forwarding.  Also computes the new register and subregister to be used in
> +/// the forwarded-to instruction.
> +std::tuple<unsigned, unsigned, bool> MachineCopyPropagation::checkUseSubReg(
> +    const MachineOperand &CopySrc, const MachineOperand &MOUse) {
> +  unsigned NewUseReg = CopySrc.getReg();
> +  unsigned NewUseSubReg;
> +
> +  if (TargetRegisterInfo::isPhysicalRegister(NewUseReg)) {
> +    // If MOUse is a virtual reg, we need to apply it to the new physical reg
> +    // we're going to replace it with.
> +    if (MOUse.getSubReg())
> +      NewUseReg = TRI->getSubReg(NewUseReg, MOUse.getSubReg());
> +    // If the original use subreg isn't valid on the new src reg, we can't
> +    // forward it here.
> +    if (!NewUseReg)
> +      return std::make_tuple(0, 0, false);
> +    NewUseSubReg = 0;
> +  } else {
> +    // %v1 = COPY %v2:sub1
> +    //  USE %v1:sub2
> +    // The new use is %v2:sub1:sub2
> +    NewUseSubReg =
> +        TRI->composeSubRegIndices(CopySrc.getSubReg(), MOUse.getSubReg());
> +    // Check that NewUseSubReg is valid on NewUseReg
> +    if (NewUseSubReg &&
> +        !TRI->getSubClassWithSubReg(MRI->getRegClass(NewUseReg), NewUseSubReg))
> +      return std::make_tuple(0, 0, false);
> +  }
> +
> +  return std::make_tuple(NewUseReg, NewUseSubReg, true);
> +}
> +
> +/// Check that \p MI does not have implicit uses that overlap with it's \p Use
> +/// operand (the register being replaced), since these can sometimes be
> +/// implicitly tied to other operands.  For example, on AMDGPU:
> +///
> +/// V_MOVRELS_B32_e32 %VGPR2, %M0<imp-use>, %EXEC<imp-use>, %VGPR2_VGPR3_VGPR4_VGPR5<imp-use>
> +///
> +/// the %VGPR2 is implicitly tied to the larger reg operand, but we have no
> +/// way of knowing we need to update the latter when updating the former.
> +bool MachineCopyPropagation::hasImplicitOverlap(const MachineInstr &MI,
> +                                                const MachineOperand &Use) {
> +  if (!TargetRegisterInfo::isPhysicalRegister(Use.getReg()))
> +    return false;
> +
> +  for (const MachineOperand &MIUse : MI.uses())
> +    if (&MIUse != &Use && MIUse.isReg() && MIUse.isImplicit() &&
> +        TRI->regsOverlap(Use.getReg(), MIUse.getReg()))
> +      return true;
> +
> +  return false;
> +}
> +
> +/// Narrow the register class of the forwarded vreg so it matches any
> +/// instruction constraints.  \p MI is the instruction being forwarded to. \p
> +/// MOUse is the operand being replaced in \p MI (which hasn't yet been updated
> +/// at the time this function is called).  \p NewUseReg and \p NewUseSubReg are
> +/// what the \p MOUse will be changed to after forwarding.
> +///
> +/// If we are forwarding
> +///    A:RCA = COPY B:RCB
> +/// into
> +///    ... = OP A:RCA
> +///
> +/// then we need to narrow the register class of B so that it is a subclass
> +/// of RCA so that it meets the instruction register class constraints.
> +void MachineCopyPropagation::narrowRegClass(const MachineInstr &MI,
> +                                            const MachineOperand &MOUse,
> +                                            unsigned NewUseReg,
> +                                            unsigned NewUseSubReg) {
> +  if (!TargetRegisterInfo::isVirtualRegister(NewUseReg))
> +    return;
> +
> +  // Make sure the virtual reg class allows the subreg.
> +  if (NewUseSubReg) {
> +    const TargetRegisterClass *CurUseRC = MRI->getRegClass(NewUseReg);
> +    const TargetRegisterClass *NewUseRC =
> +        TRI->getSubClassWithSubReg(CurUseRC, NewUseSubReg);
> +    if (CurUseRC != NewUseRC) {
> +      DEBUG(dbgs() << "MCP: Setting regclass of " << PrintReg(NewUseReg, TRI)
> +                   << " to " << TRI->getRegClassName(NewUseRC) << "\n");
> +      MRI->setRegClass(NewUseReg, NewUseRC);
> +    }
> +  }
> +
> +  unsigned MOUseOpNo = &MOUse - &MI.getOperand(0);
> +  const TargetRegisterClass *InstRC =
> +      TII->getRegClass(MI.getDesc(), MOUseOpNo, TRI, *MF);
> +  if (InstRC) {
> +    const TargetRegisterClass *CurUseRC = MRI->getRegClass(NewUseReg);
> +    if (NewUseSubReg)
> +      InstRC = TRI->getMatchingSuperRegClass(CurUseRC, InstRC, NewUseSubReg);
> +    if (!InstRC->hasSubClassEq(CurUseRC)) {
> +      const TargetRegisterClass *NewUseRC =
> +          TRI->getCommonSubClass(InstRC, CurUseRC);
> +      DEBUG(dbgs() << "MCP: Setting regclass of " << PrintReg(NewUseReg, TRI)
> +                   << " to " << TRI->getRegClassName(NewUseRC) << "\n");
> +      MRI->setRegClass(NewUseReg, NewUseRC);
> +    }
> +  }
> +}
> +
> +/// Update the LiveInterval information to reflect the destination of \p Copy
> +/// being forwarded to a use in \p UseMI.  \p OrigUseReg is the register being
> +/// forwarded through.  It should be the destination register of \p Copy and has
> +/// already been replaced in \p UseMI at the point this function is called.  \p
> +/// NewUseReg and \p NewUseSubReg are the register and subregister being
> +/// forwarded.  They should be the source register of the \p Copy and should be
> +/// the value of the \p UseMI operand being forwarded at the point this function
> +/// is called.  \p UseIsEarlyClobber is true if the use being re-written is in
> +/// the EarlyClobber slot index (as opposed to the register slot of most
> +/// register operands).
> +void MachineCopyPropagation::updateForwardedCopyLiveInterval(
> +    const MachineInstr &Copy, const MachineInstr &UseMI, bool UseIsEarlyClobber,
> +    unsigned OrigUseReg, unsigned NewUseReg, unsigned NewUseSubReg) {
> +
> +  assert(TRI->isSubRegisterEq(getPhysReg(OrigUseReg, 0),
> +                              getFullPhysReg(Copy.getOperand(0))) &&
> +         "OrigUseReg mismatch");
> +  assert(TRI->isSubRegisterEq(getFullPhysReg(Copy.getOperand(1)),
> +                              getPhysReg(NewUseReg, 0)) &&
> +         "NewUseReg mismatch");
> +
> +  // Extend live range starting from COPY early-clobber slot, since that
> +  // is where the original src live range ends.
> +  SlotIndex CopyUseIdx =
> +      Indexes->getInstructionIndex(Copy).getRegSlot(true /*=EarlyClobber*/);
> +  SlotIndex UseEndIdx =
> +      Indexes->getInstructionIndex(UseMI).getRegSlot(UseIsEarlyClobber);
> +  if (TargetRegisterInfo::isVirtualRegister(NewUseReg)) {
> +    LiveInterval &LI = LIS->getInterval(NewUseReg);
> +    LI.extendInBlock(CopyUseIdx, UseEndIdx);
> +    LaneBitmask UseMask = TRI->getSubRegIndexLaneMask(NewUseSubReg);
> +    for (auto &S : LI.subranges())
> +      if ((S.LaneMask & UseMask).any() && S.find(CopyUseIdx))
> +        S.extendInBlock(CopyUseIdx, UseEndIdx);
> +  } else {
> +    assert(NewUseSubReg == 0 && "Unexpected subreg on physical register!");
> +    for (MCRegUnitIterator UI(NewUseReg, TRI); UI.isValid(); ++UI) {
> +      LiveRange &LR = LIS->getRegUnit(*UI);
> +      LR.extendInBlock(CopyUseIdx, UseEndIdx);
> +    }
> +  }
> +
> +  if (!TargetRegisterInfo::isVirtualRegister(OrigUseReg))
> +    return;
> +
> +  // Shrink the live-range for the old use reg if the forwarded use was it's
> +  // last use.
> +  LiveInterval &OrigUseLI = LIS->getInterval(OrigUseReg);
> +
> +  // Can happen for undef uses.
> +  if (OrigUseLI.empty())
> +    return;
> +
> +  SlotIndex CopyDefIdx = Indexes->getInstructionIndex(Copy).getRegSlot();
> +  const LiveRange::Segment *OrigUseSeg =
> +      OrigUseLI.getSegmentContaining(CopyDefIdx);
> +
> +  // Only shrink if forwarded use is the end of a segment.
> +  if (OrigUseSeg->end != UseEndIdx)
> +    return;
> +
> +  LIS->shrinkToUses(&OrigUseLI, &ShrunkDeadInsts);
> +}
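
FWIW, the faulting address (0x8) looks like a field read through a null pointer, and one spot in the hunk just above that could produce such a read is the unguarded use of getSegmentContaining(): it returns nullptr when no segment covers CopyDefIdx, so OrigUseSeg->end would then read a few bytes past address zero. I have not confirmed this is the read ASAN is flagging (our backtrace points at forwardUses), so please treat the following purely as an illustrative sketch of the kind of guard I mean, not as the actual fix:

    const LiveRange::Segment *OrigUseSeg =
        OrigUseLI.getSegmentContaining(CopyDefIdx);
    // Hypothetical guard: bail out if no segment covers CopyDefIdx instead of
    // dereferencing a null Segment pointer below.
    if (!OrigUseSeg || OrigUseSeg->end != UseEndIdx)
      return;

Again, just a guess from reading the patch; the real failure may well be elsewhere.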
> +
> +void MachineCopyPropagation::LRE_WillEraseInstruction(MachineInstr *MI) {
> +  Changed = true;
> +}
> +
> +/// Look for available copies whose destination register is used by \p MI and
> +/// replace the use in \p MI with the copy's source register.
> +void MachineCopyPropagation::forwardUses(MachineInstr &MI) {
> +  // We can't generally forward uses after virtual registers have been renamed
> +  // because some targets generate code that has implicit dependencies on the
> +  // physical register numbers.  For example, in PowerPC, when spilling
> +  // condition code registers, the following code pattern is generated:
> +  //
> +  //   %CR7 = COPY %CR0
> +  //   %R6 = MFOCRF %CR7
> +  //   %R6 = RLWINM %R6, 29, 31, 31
> +  //
> +  // where the shift amount in the RLWINM instruction depends on the source
> +  // register number of the MFOCRF instruction.  If we were to forward %CR0 to
> +  // the MFOCRF instruction, the shift amount would no longer be correct.
> +  //
> +  // FIXME: It may be possible to define a target hook that checks the register
> +  // class or user opcode and allows some cases, but prevents cases like the
> +  // above from being broken to enable later register copy forwarding.
> +  if (!PreRegRewrite)
> +    return;
> +
> +  if (AvailCopyMap.empty())
> +    return;
> +
> +  // Look for non-tied explicit vreg uses that have an active COPY
> +  // instruction that defines the physical register allocated to them.
> +  // Replace the vreg with the source of the active COPY.
> +  for (MachineOperand &MOUse : MI.explicit_uses()) {
> +    // Don't forward into undef use operands since doing so can cause problems
> +    // with the machine verifier, since it doesn't treat undef reads as reads,
> +    // so we can end up with a live range that ends on an undef read, leading to
> +    // an error that the live range doesn't end on a read of the live range
> +    // register.
> +    if (!MOUse.isReg() || MOUse.isTied() || MOUse.isUndef() || MOUse.isDef())
> +      continue;
> +
> +    unsigned UseReg = MOUse.getReg();
> +    if (!UseReg)
> +      continue;
> +
> +    // See comment above check for !PreRegRewrite regarding forwarding changing
> +    // physical registers.
> +    if (!TargetRegisterInfo::isVirtualRegister(UseReg))
> +      continue;
> +    else {
> +      // FIXME: Don't forward COPYs to a use that is on an instruction that
> +      // re-defines the same virtual register.  This leads to machine
> +      // verification failures because of a bug in the greedy
> +      // allocator/verifier.  The bug is that just after greedy regalloc, we can
> +      // end up with code that looks like:
> +      //
> +      //   %vreg1<def> = ...
> +      //   ...
> +      //   ... = %vreg1
> +      //   ...
> +      //   %vreg1<def> = %vreg1
> +      //   ...
> +      //
> +      // verifyLiveInterval() accepts this code as valid since it sees the
> +      // second def as part of the same live interval component because
> +      // ConnectedVNInfoEqClasses::Classify() sees this second def as a
> +      // "two-addr" redefinition, even though the def and source operands are
> +      // not tied.
> +      //
> +      // If we replace just the second use of %vreg1 in the above code, then we
> +      // end up with:
> +      //
> +      //   ...
> +      //   %vreg1<def> = ...
> +      //   ...
> +      //   ... = %vreg1
> +      //   ...
> +      //   %vreg1<def> = *%vreg2*
> +      //   ...
> +      //
> +      // verifyLiveInterval() now rejects this code since it sees these two def
> +      // live ranges as being separate components.  To get rid of this
> +      // restriction on forwarding, regalloc greedy would need to be fixed to
> +      // avoid generating code like the first snippet above, as well as the
> +      // verifier being fixed to reject such code.
> +      if (llvm::any_of(MI.defs(), [UseReg](MachineOperand &Def) {
> +            if (!Def.isReg() || !Def.isDef() || Def.getReg() != UseReg)
> +              return false;
> +            // Only tied use/def operands or a subregister def without the undef
> +            // flag should result in connected liveranges.
> +            if (Def.isTied())
> +              return false;
> +            if (Def.getSubReg() != 0 && Def.isUndef())
> +              return false;
> +            return true;
> +          }))
> +        continue;
> +    }
> +
> +    UseReg = VRM->getPhys(UseReg);
> +
> +    // Don't forward COPYs via non-allocatable regs since they can have
> +    // non-standard semantics.
> +    if (!MRI->isAllocatable(UseReg))
> +      continue;
> +
> +    auto CI = AvailCopyMap.find(UseReg);
> +    if (CI == AvailCopyMap.end())
> +      continue;
> +
> +    MachineInstr &Copy = *CI->second;
> +    MachineOperand &CopyDst = Copy.getOperand(0);
> +    MachineOperand &CopySrc = Copy.getOperand(1);
> +
> +    // Don't forward COPYs that are already NOPs due to register assignment.
> +    if (getPhysReg(CopyDst) == getPhysReg(CopySrc))
> +      continue;
> +
> +    // FIXME: Don't handle partial uses of wider COPYs yet.
> +    if (CopyDst.getSubReg() != 0 || UseReg != getPhysReg(CopyDst))
> +      continue;
> +
> +    // Don't forward COPYs of non-allocatable regs unless they are constant.
> +    unsigned CopySrcReg = CopySrc.getReg();
> +    if (TargetRegisterInfo::isPhysicalRegister(CopySrcReg) &&
> +        !MRI->isAllocatable(CopySrcReg) && !MRI->isConstantPhysReg(CopySrcReg))
> +      continue;
> +
> +    if (!isForwardableRegClassCopy(Copy, MI))
> +      continue;
> +
> +    unsigned NewUseReg, NewUseSubReg;
> +    bool SubRegOK;
> +    std::tie(NewUseReg, NewUseSubReg, SubRegOK) =
> +        checkUseSubReg(CopySrc, MOUse);
> +    if (!SubRegOK)
> +      continue;
> +
> +    if (hasImplicitOverlap(MI, MOUse))
> +      continue;
> +
> +    if (!DebugCounter::shouldExecute(FwdCounter))
> +      continue;
> +
> +    DEBUG(dbgs() << "MCP: Replacing "
> +                 << PrintReg(MOUse.getReg(), TRI, MOUse.getSubReg())
> +                 << "\n with "
> +                 << PrintReg(NewUseReg, TRI, CopySrc.getSubReg())
> +                 << "\n in "
> +                 << MI
> +                 << " from "
> +                 << Copy);
> +
> +    narrowRegClass(MI, MOUse, NewUseReg, NewUseSubReg);
> +
> +    unsigned OrigUseReg = MOUse.getReg();
> +    MOUse.setReg(NewUseReg);
> +    MOUse.setSubReg(NewUseSubReg);
> +
> +    DEBUG(dbgs() << "MCP: After replacement: " << MI << "\n");
> +
> +    if (PreRegRewrite)
> +      updateForwardedCopyLiveInterval(Copy, MI, MOUse.isEarlyClobber(),
> +                                      OrigUseReg, NewUseReg, NewUseSubReg);
> +    else
> +      for (MachineInstr &KMI :
> +           make_range(Copy.getIterator(), std::next(MI.getIterator())))
> +        KMI.clearRegisterKills(NewUseReg, TRI);
> +
> +    ++NumCopyForwards;
> +    Changed = true;
> +  }
> +}
> +
>  void MachineCopyPropagation::CopyPropagateBlock(MachineBasicBlock &MBB) {
>    DEBUG(dbgs() << "MCP: CopyPropagateBlock " << MBB.getName() << "\n");
>
> @@ -215,12 +781,8 @@ void MachineCopyPropagation::CopyPropaga
>      ++I;
>
>      if (MI->isCopy()) {
> -      unsigned Def = MI->getOperand(0).getReg();
> -      unsigned Src = MI->getOperand(1).getReg();
> -
> -      assert(!TargetRegisterInfo::isVirtualRegister(Def) &&
> -             !TargetRegisterInfo::isVirtualRegister(Src) &&
> -             "MachineCopyPropagation should be run after register allocation!");
> +      unsigned Def = getPhysReg(MI->getOperand(0));
> +      unsigned Src = getPhysReg(MI->getOperand(1));
>
>        // The two copies cancel out and the source of the first copy
>        // hasn't been overridden, eliminate the second one. e.g.
> @@ -237,8 +799,16 @@ void MachineCopyPropagation::CopyPropaga
>        //  %ECX<def> = COPY %EAX
>        // =>
>        //  %ECX<def> = COPY %EAX
> -      if (eraseIfRedundant(*MI, Def, Src) || eraseIfRedundant(*MI, Src, Def))
> -        continue;
> +      if (!PreRegRewrite)
> +        if (eraseIfRedundant(*MI, Def, Src) || eraseIfRedundant(*MI, Src, Def))
> +          continue;
> +
> +      forwardUses(*MI);
> +
> +      // Src may have been changed by forwardUses()
> +      Src = getPhysReg(MI->getOperand(1));
> +      unsigned DefClobber = getFullPhysReg(MI->getOperand(0));
> +      unsigned SrcClobber = getFullPhysReg(MI->getOperand(1));
>
>        // If Src is defined by a previous copy, the previous copy cannot be
>        // eliminated.
> @@ -255,7 +825,10 @@ void MachineCopyPropagation::CopyPropaga
>        DEBUG(dbgs() << "MCP: Copy is a deletion candidate: "; MI->dump());
>
>        // Copy is now a candidate for deletion.
> -      if (!MRI->isReserved(Def))
> +      // Only look for dead COPYs if we're not running just before
> +      // VirtRegRewriter, since presumably these COPYs will have already been
> +      // removed.
> +      if (!PreRegRewrite && !MRI->isReserved(Def))
>          MaybeDeadCopies.insert(MI);
>
>        // If 'Def' is previously source of another copy, then this earlier copy's
> @@ -265,11 +838,11 @@ void MachineCopyPropagation::CopyPropaga
>        //  %xmm2<def> = copy %xmm0
>        //  ...
>        //  %xmm2<def> = copy %xmm9
> -      ClobberRegister(Def);
> +      ClobberRegister(DefClobber);
>        for (const MachineOperand &MO : MI->implicit_operands()) {
>          if (!MO.isReg() || !MO.isDef())
>            continue;
> -        unsigned Reg = MO.getReg();
> +        unsigned Reg = getFullPhysReg(MO);
>          if (!Reg)
>            continue;
>          ClobberRegister(Reg);
> @@ -284,13 +857,27 @@ void MachineCopyPropagation::CopyPropaga
>
>        // Remember source that's copied to Def.  Once it's clobbered, then
>        // it's no longer available for copy propagation.
> -      RegList &DestList = SrcMap[Src];
> -      if (!is_contained(DestList, Def))
> -        DestList.push_back(Def);
> +      RegList &DestList = SrcMap[SrcClobber];
> +      if (!is_contained(DestList, DefClobber))
> +        DestList.push_back(DefClobber);
>
>        continue;
>      }
>
> +    // Clobber any earlyclobber regs first.
> +    for (const MachineOperand &MO : MI->operands())
> +      if (MO.isReg() && MO.isEarlyClobber()) {
> +        unsigned Reg = getFullPhysReg(MO);
> +        // If we have a tied earlyclobber, that means it is also read by this
> +        // instruction, so we need to make sure we don't remove it as dead
> +        // later.
> +        if (MO.isTied())
> +          ReadRegister(Reg);
> +        ClobberRegister(Reg);
> +      }
> +
> +    forwardUses(*MI);
> +
>      // Not a copy.
>      SmallVector<unsigned, 2> Defs;
>      const MachineOperand *RegMask = nullptr;
> @@ -299,14 +886,11 @@ void MachineCopyPropagation::CopyPropaga
>        RegMask = &MO;
>        if (!MO.isReg())
>          continue;
> -      unsigned Reg = MO.getReg();
> +      unsigned Reg = getFullPhysReg(MO);
>        if (!Reg)
>          continue;
>
> -      assert(!TargetRegisterInfo::isVirtualRegister(Reg) &&
> -             "MachineCopyPropagation should be run after register allocation!");
> -
> -      if (MO.isDef()) {
> +      if (MO.isDef() && !MO.isEarlyClobber()) {
>          Defs.push_back(Reg);
>          continue;
>        } else if (MO.readsReg())
> @@ -358,11 +942,22 @@ void MachineCopyPropagation::CopyPropaga
>        ClobberRegister(Reg);
>    }
>
> +  // Remove instructions that were made dead by shrinking live ranges.  Do this
> +  // after iterating over instructions to avoid instructions changing while
> +  // iterating.
> +  if (!ShrunkDeadInsts.empty()) {
> +    SmallVector<unsigned, 8> NewRegs;
> +    LiveRangeEdit(nullptr, NewRegs, *MF, *LIS, nullptr, this)
> +        .eliminateDeadDefs(ShrunkDeadInsts);
> +  }
> +
>    // If MBB doesn't have successors, delete the copies whose defs are not used.
>    // If MBB does have successors, then conservative assume the defs are live-out
>    // since we don't want to trust live-in lists.
>    if (MBB.succ_empty()) {
>      for (MachineInstr *MaybeDead : MaybeDeadCopies) {
> +      DEBUG(dbgs() << "MCP: Removing copy due to no live-out succ: ";
> +            MaybeDead->dump());
>        assert(!MRI->isReserved(MaybeDead->getOperand(0).getReg()));
>        MaybeDead->eraseFromParent();
>        Changed = true;
> @@ -374,6 +969,7 @@ void MachineCopyPropagation::CopyPropaga
>    AvailCopyMap.clear();
>    CopyMap.clear();
>    SrcMap.clear();
> +  ShrunkDeadInsts.clear();
>  }
>
>  bool MachineCopyPropagation::runOnMachineFunction(MachineFunction &MF) {
> @@ -385,6 +981,13 @@ bool MachineCopyPropagation::runOnMachin
>    TRI = MF.getSubtarget().getRegisterInfo();
>    TII = MF.getSubtarget().getInstrInfo();
>    MRI = &MF.getRegInfo();
> +  this->MF = &MF;
> +  if (PreRegRewrite) {
> +    Indexes = &getAnalysis<SlotIndexes>();
> +    LIS = &getAnalysis<LiveIntervals>();
> +    VRM = &getAnalysis<VirtRegMap>();
> +  }
> +  NoSubRegLiveness = !MRI->subRegLivenessEnabled();
>
>    for (MachineBasicBlock &MBB : MF)
>      CopyPropagateBlock(MBB);
>
> Modified: llvm/trunk/lib/CodeGen/TargetPassConfig.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/TargetPassConfig.cpp?rev=314729&r1=314728&r2=314729&view=diff
> ==============================================================================
> --- llvm/trunk/lib/CodeGen/TargetPassConfig.cpp (original)
> +++ llvm/trunk/lib/CodeGen/TargetPassConfig.cpp Mon Oct 2 15:01:37 2017
> @@ -88,6 +88,8 @@ static cl::opt<bool> DisableCGP("disable
>      cl::desc("Disable Codegen Prepare"));
>  static cl::opt<bool> DisableCopyProp("disable-copyprop", cl::Hidden,
>      cl::desc("Disable Copy Propagation pass"));
> +static cl::opt<bool> DisableCopyPropPreRegRewrite("disable-copyprop-prerewrite", cl::Hidden,
> +    cl::desc("Disable Copy Propagation Pre-Register Re-write pass"));
>  static cl::opt<bool> DisablePartialLibcallInlining("disable-partial-libcall-inlining",
>      cl::Hidden, cl::desc("Disable Partial Libcall Inlining"));
>  static cl::opt<bool> EnableImplicitNullChecks(
> @@ -252,6 +254,9 @@ static IdentifyingPassPtr overridePass(A
>    if (StandardID == &MachineCopyPropagationID)
>      return applyDisable(TargetID, DisableCopyProp);
>
> +  if (StandardID == &MachineCopyPropagationPreRegRewriteID)
> +    return applyDisable(TargetID, DisableCopyPropPreRegRewrite);
class="">+<br class=""> return TargetID;<br class=""> }<br class=""><br class="">@@ -1064,6 +1069,10 @@ void TargetPassConfig::addOptimizedRegAl<br class=""> // Allow targets to change the register assignments before rewriting.<br class=""> addPreRewrite();<br class=""><br class="">+ // Copy propagate to forward register uses and try to eliminate COPYs that<br class="">+ // were not coalesced.<br class="">+ addPass(&MachineCopyPropagationPreRegRewriteID);<br class="">+<br class=""> // Finally rewrite virtual registers.<br class=""> addPass(&VirtRegRewriterID);<br class=""><br class=""><br class="">Modified: llvm/trunk/test/CodeGen/AArch64/aarch64-fold-lslfast.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/aarch64-fold-lslfast.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/aarch64-fold-lslfast.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/AArch64/aarch64-fold-lslfast.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/AArch64/aarch64-fold-lslfast.ll Mon Oct 2 15:01:37 2017<br class="">@@ -9,7 +9,8 @@ define i16 @halfword(%struct.a* %ctx, i3<br class=""> ; CHECK-LABEL: halfword:<br class=""> ; CHECK: ubfx [[REG:x[0-9]+]], x1, #9, #8<br class=""> ; CHECK: ldrh [[REG1:w[0-9]+]], [{{.*}}[[REG2:x[0-9]+]], [[REG]], lsl #1]<br class="">-; CHECK: strh [[REG1]], [{{.*}}[[REG2]], [[REG]], lsl #1]<br class="">+; CHECK: mov [[REG3:x[0-9]+]], [[REG2]]<br class="">+; CHECK: strh [[REG1]], [{{.*}}[[REG3]], [[REG]], lsl #1]<br class=""> %shr81 = lshr i32 %xor72, 9<br class=""> %conv82 = zext i32 %shr81 to i64<br class=""> %idxprom83 = and i64 %conv82, 255<br class="">@@ -24,7 +25,8 @@ define i32 @word(%struct.b* %ctx, i32 %x<br class=""> ; CHECK-LABEL: word:<br class=""> ; CHECK: ubfx [[REG:x[0-9]+]], x1, #9, #8<br class=""> ; CHECK: ldr [[REG1:w[0-9]+]], [{{.*}}[[REG2:x[0-9]+]], [[REG]], lsl #2]<br class="">-; CHECK: str [[REG1]], [{{.*}}[[REG2]], [[REG]], lsl #2]<br class="">+; CHECK: mov [[REG3:x[0-9]+]], [[REG2]]<br class="">+; CHECK: str [[REG1]], [{{.*}}[[REG3]], [[REG]], lsl #2]<br class=""> %shr81 = lshr i32 %xor72, 9<br class=""> %conv82 = zext i32 %shr81 to i64<br class=""> %idxprom83 = and i64 %conv82, 255<br class="">@@ -39,7 +41,8 @@ define i64 @doubleword(%struct.c* %ctx,<br class=""> ; CHECK-LABEL: doubleword:<br class=""> ; CHECK: ubfx [[REG:x[0-9]+]], x1, #9, #8<br class=""> ; CHECK: ldr [[REG1:x[0-9]+]], [{{.*}}[[REG2:x[0-9]+]], [[REG]], lsl #3]<br class="">-; CHECK: str [[REG1]], [{{.*}}[[REG2]], [[REG]], lsl #3]<br class="">+; CHECK: mov [[REG3:x[0-9]+]], [[REG2]]<br class="">+; CHECK: str [[REG1]], [{{.*}}[[REG3]], [[REG]], lsl #3]<br class=""> %shr81 = lshr i32 %xor72, 9<br class=""> %conv82 = zext i32 %shr81 to i64<br class=""> %idxprom83 = and i64 %conv82, 255<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/AArch64/arm64-AdvSIMD-Scalar.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-AdvSIMD-Scalar.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-AdvSIMD-Scalar.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/AArch64/arm64-AdvSIMD-Scalar.ll (original)<br 
class="">+++ llvm/trunk/test/CodeGen/AArch64/arm64-AdvSIMD-Scalar.ll Mon Oct 2 15:01:37 2017<br class="">@@ -8,15 +8,9 @@ define <2 x i64> @bar(<2 x i64> %a, <2 x<br class=""> ; CHECK: add.2d<span class="Apple-tab-span" style="white-space:pre"> </span>v[[REG:[0-9]+]], v0, v1<br class=""> ; CHECK: add<span class="Apple-tab-span" style="white-space:pre"> </span>d[[REG3:[0-9]+]], d[[REG]], d1<br class=""> ; CHECK: sub<span class="Apple-tab-span" style="white-space:pre"> </span>d[[REG2:[0-9]+]], d[[REG]], d1<br class="">-; Without advanced copy optimization, we end up with cross register<br class="">-; banks copies that cannot be coalesced.<br class="">-; CHECK-NOOPT: fmov [[COPY_REG3:x[0-9]+]], d[[REG3]]<br class="">-; With advanced copy optimization, we end up with just one copy<br class="">-; to insert the computed high part into the V register. <br class="">-; CHECK-OPT-NOT: fmov<br class="">+; CHECK-NOT: fmov<br class=""> ; CHECK: fmov [[COPY_REG2:x[0-9]+]], d[[REG2]]<br class="">-; CHECK-NOOPT: fmov d0, [[COPY_REG3]]<br class="">-; CHECK-OPT-NOT: fmov<br class="">+; CHECK-NOT: fmov<br class=""> ; CHECK: ins.d v0[1], [[COPY_REG2]]<br class=""> ; CHECK-NEXT: ret<br class=""> ;<br class="">@@ -24,11 +18,9 @@ define <2 x i64> @bar(<2 x i64> %a, <2 x<br class=""> ; GENERIC: add<span class="Apple-tab-span" style="white-space:pre"> </span>v[[REG:[0-9]+]].2d, v0.2d, v1.2d<br class=""> ; GENERIC: add<span class="Apple-tab-span" style="white-space:pre"> </span>d[[REG3:[0-9]+]], d[[REG]], d1<br class=""> ; GENERIC: sub<span class="Apple-tab-span" style="white-space:pre"> </span>d[[REG2:[0-9]+]], d[[REG]], d1<br class="">-; GENERIC-NOOPT: fmov [[COPY_REG3:x[0-9]+]], d[[REG3]]<br class="">-; GENERIC-OPT-NOT: fmov<br class="">+; GENERIC-NOT: fmov<br class=""> ; GENERIC: fmov [[COPY_REG2:x[0-9]+]], d[[REG2]]<br class="">-; GENERIC-NOOPT: fmov d0, [[COPY_REG3]]<br class="">-; GENERIC-OPT-NOT: fmov<br class="">+; GENERIC-NOT: fmov<br class=""> ; GENERIC: ins v0.d[1], [[COPY_REG2]]<br class=""> ; GENERIC-NEXT: ret<br class=""> %add = add <2 x i64> %a, %b<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/AArch64/arm64-zero-cycle-regmov.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-zero-cycle-regmov.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-zero-cycle-regmov.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/AArch64/arm64-zero-cycle-regmov.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/AArch64/arm64-zero-cycle-regmov.ll Mon Oct 2 15:01:37 2017<br class="">@@ -4,8 +4,10 @@<br class=""> define i32 @t(i32 %a, i32 %b, i32 %c, i32 %d) nounwind ssp {<br class=""> entry:<br class=""> ; CHECK-LABEL: t:<br class="">-; CHECK: mov x0, [[REG1:x[0-9]+]]<br class="">-; CHECK: mov x1, [[REG2:x[0-9]+]]<br class="">+; CHECK: mov [[REG2:x[0-9]+]], x3<br class="">+; CHECK: mov [[REG1:x[0-9]+]], x2<br class="">+; CHECK: mov x0, x2<br class="">+; CHECK: mov x1, x3<br class=""> ; CHECK: bl _foo<br class=""> ; CHECK: mov x0, [[REG1]]<br class=""> ; CHECK: mov x1, [[REG2]]<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/AArch64/f16-instructions.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/f16-instructions.ll?rev=314729&r1=314728&r2=314729&view=diff" 
class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/f16-instructions.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/AArch64/f16-instructions.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/AArch64/f16-instructions.ll Mon Oct 2 15:01:37 2017<br class="">@@ -489,7 +489,7 @@ else:<br class=""><br class=""> ; CHECK-COMMON-LABEL: test_phi:<br class=""> ; CHECK-COMMON: mov x[[PTR:[0-9]+]], x0<br class="">-; CHECK-COMMON: ldr h[[AB:[0-9]+]], [x[[PTR]]]<br class="">+; CHECK-COMMON: ldr h[[AB:[0-9]+]], [x0]<br class=""> ; CHECK-COMMON: [[LOOP:LBB[0-9_]+]]:<br class=""> ; CHECK-COMMON: mov.16b v[[R:[0-9]+]], v[[AB]]<br class=""> ; CHECK-COMMON: ldr h[[AB]], [x[[PTR]]]<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/AArch64/flags-multiuse.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/flags-multiuse.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/flags-multiuse.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/AArch64/flags-multiuse.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/AArch64/flags-multiuse.ll Mon Oct 2 15:01:37 2017<br class="">@@ -17,6 +17,9 @@ define i32 @test_multiflag(i32 %n, i32 %<br class=""> %val = zext i1 %test to i32<br class=""> ; CHECK: cset {{[xw][0-9]+}}, ne<br class=""><br class="">+; CHECK: mov [[RHSCOPY:w[0-9]+]], [[RHS]]<br class="">+; CHECK: mov [[LHSCOPY:w[0-9]+]], [[LHS]]<br class="">+<br class=""> store i32 %val, i32* @var<br class=""><br class=""> call void @bar()<br class="">@@ -25,7 +28,7 @@ define i32 @test_multiflag(i32 %n, i32 %<br class=""> ; Currently, the comparison is emitted again. 
An MSR/MRS pair would also be<br class=""> ; acceptable, but assuming the call preserves NZCV is not.<br class=""> br i1 %test, label %iftrue, label %iffalse<br class="">-; CHECK: cmp [[LHS]], [[RHS]]<br class="">+; CHECK: cmp [[LHSCOPY]], [[RHSCOPY]]<br class=""> ; CHECK: b.eq<br class=""><br class=""> iftrue:<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/AArch64/merge-store-dependency.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/merge-store-dependency.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/merge-store-dependency.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/AArch64/merge-store-dependency.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/AArch64/merge-store-dependency.ll Mon Oct 2 15:01:37 2017<br class="">@@ -8,10 +8,9 @@<br class=""> define void @test(%struct1* %fde, i32 %fd, void (i32, i32, i8*)* %func, i8* %arg) {<br class=""> ;CHECK-LABEL: test<br class=""> entry:<br class="">-; A53: mov [[DATA:w[0-9]+]], w1<br class=""> ; A53: str q{{[0-9]+}}, {{.*}}<br class=""> ; A53: str q{{[0-9]+}}, {{.*}}<br class="">-; A53: str [[DATA]], {{.*}}<br class="">+; A53: str w1, {{.*}}<br class=""><br class=""> %0 = bitcast %struct1* %fde to i8*<br class=""> tail call void @llvm.memset.p0i8.i64(i8* %0, i8 0, i64 40, i32 8, i1 false)<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/AArch64/neg-imm.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/neg-imm.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/neg-imm.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/AArch64/neg-imm.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/AArch64/neg-imm.ll Mon Oct 2 15:01:37 2017<br class="">@@ -7,8 +7,8 @@ declare void @foo(i32)<br class=""> define void @test(i32 %px) {<br class=""> ; CHECK_LABEL: test:<br class=""> ; CHECK_LABEL: %entry<br class="">-; CHECK: subs<br class="">-; CHECK-NEXT: csel<br class="">+; CHECK: subs [[REG0:w[0-9]+]],<br class="">+; CHECK: csel {{w[0-9]+}}, wzr, [[REG0]]<br class=""> entry:<br class=""> %sub = add nsw i32 %px, -1<br class=""> %cmp = icmp slt i32 %px, 1<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-sgprs.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-sgprs.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-sgprs.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-sgprs.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-sgprs.ll Mon Oct 2 15:01:37 2017<br class="">@@ -547,16 +547,16 @@ define void @func_use_every_sgpr_input_c<br class=""> ; GCN: s_mov_b32 s5, s32<br class=""> ; GCN: s_add_u32 s32, s32, 0x300<br class=""><br class="">-; GCN-DAG: s_mov_b32 [[SAVE_X:s[0-9]+]], s14<br class="">-; GCN-DAG: s_mov_b32 [[SAVE_Y:s[0-9]+]], s15<br class="">-; GCN-DAG: 
s_mov_b32 [[SAVE_Z:s[0-9]+]], s16<br class="">+; GCN-DAG: s_mov_b32 [[SAVE_X:s[0-57-9][0-9]*]], s14<br class="">+; GCN-DAG: s_mov_b32 [[SAVE_Y:s[0-68-9][0-9]*]], s15<br class="">+; GCN-DAG: s_mov_b32 [[SAVE_Z:s[0-79][0-9]*]], s16<br class=""> ; GCN-DAG: s_mov_b64 {{s\[[0-9]+:[0-9]+\]}}, s[6:7]<br class=""> ; GCN-DAG: s_mov_b64 {{s\[[0-9]+:[0-9]+\]}}, s[8:9]<br class=""> ; GCN-DAG: s_mov_b64 {{s\[[0-9]+:[0-9]+\]}}, s[10:11]<br class=""><br class="">-; GCN-DAG: s_mov_b32 s6, [[SAVE_X]]<br class="">-; GCN-DAG: s_mov_b32 s7, [[SAVE_Y]]<br class="">-; GCN-DAG: s_mov_b32 s8, [[SAVE_Z]]<br class="">+; GCN-DAG: s_mov_b32 s6, s14<br class="">+; GCN-DAG: s_mov_b32 s7, s15<br class="">+; GCN-DAG: s_mov_b32 s8, s16<br class=""> ; GCN: s_swappc_b64<br class=""><br class=""> ; GCN: buffer_store_dword v{{[0-9]+}}, off, s[0:3], s5 offset:4<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/AMDGPU/mad-mix.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/mad-mix.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/mad-mix.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/AMDGPU/mad-mix.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/AMDGPU/mad-mix.ll Mon Oct 2 15:01:37 2017<br class="">@@ -51,7 +51,7 @@ define float @v_mad_mix_f32_f16hi_f16hi_<br class=""><br class=""> ; GCN-LABEL: {{^}}v_mad_mix_v2f32:<br class=""> ; GFX9: v_mov_b32_e32 v3, v1<br class="">-; GFX9-NEXT: v_mad_mix_f32 v1, v0, v3, v2 op_sel:[1,1,1]<br class="">+; GFX9-NEXT: v_mad_mix_f32 v1, v0, v1, v2 op_sel:[1,1,1]<br class=""> ; GFX9-NEXT: v_mad_mix_f32 v0, v0, v3, v2<br class=""><br class=""> ; CIVI: v_mac_f32<br class="">@@ -66,7 +66,7 @@ define <2 x float> @v_mad_mix_v2f32(<2 x<br class=""> ; GCN-LABEL: {{^}}v_mad_mix_v2f32_shuffle:<br class=""> ; GCN: s_waitcnt<br class=""> ; GFX9-NEXT: v_mov_b32_e32 v3, v1<br class="">-; GFX9-NEXT: v_mad_mix_f32 v1, v0, v3, v2 op_sel:[0,1,1]<br class="">+; GFX9-NEXT: v_mad_mix_f32 v1, v0, v1, v2 op_sel:[0,1,1]<br class=""> ; GFX9-NEXT: v_mad_mix_f32 v0, v0, v3, v2 op_sel:[1,0,1]<br class=""> ; GFX9-NEXT: s_setpc_b64<br class=""><br class="">@@ -246,7 +246,7 @@ define float @v_mad_mix_f32_f16lo_f16lo_<br class=""> ; GCN-LABEL: {{^}}v_mad_mix_v2f32_f32imm1:<br class=""> ; GFX9: v_mov_b32_e32 v2, v1<br class=""> ; GFX9: v_mov_b32_e32 v3, 1.0<br class="">-; GFX9: v_mad_mix_f32 v1, v0, v2, v3 op_sel:[1,1,0] op_sel_hi:[1,1,0] ; encoding<br class="">+; GFX9: v_mad_mix_f32 v1, v0, v1, v3 op_sel:[1,1,0] op_sel_hi:[1,1,0] ; encoding<br class=""> ; GFX9: v_mad_mix_f32 v0, v0, v2, v3 op_sel_hi:[1,1,0] ; encoding<br class=""> define <2 x float> @v_mad_mix_v2f32_f32imm1(<2 x half> %src0, <2 x half> %src1) #0 {<br class=""> %src0.ext = fpext <2 x half> %src0 to <2 x float><br class="">@@ -258,7 +258,7 @@ define <2 x float> @v_mad_mix_v2f32_f32i<br class=""> ; GCN-LABEL: {{^}}v_mad_mix_v2f32_cvtf16imminv2pi:<br class=""> ; GFX9: v_mov_b32_e32 v2, v1<br class=""> ; GFX9: v_mov_b32_e32 v3, 0x3e230000<br class="">-; GFX9: v_mad_mix_f32 v1, v0, v2, v3 op_sel:[1,1,0] op_sel_hi:[1,1,0] ; encoding<br class="">+; GFX9: v_mad_mix_f32 v1, v0, v1, v3 op_sel:[1,1,0] op_sel_hi:[1,1,0] ; encoding<br class=""> ; GFX9: v_mad_mix_f32 v0, v0, v2, v3 op_sel_hi:[1,1,0] ; encoding<br class=""> define <2 x float> @v_mad_mix_v2f32_cvtf16imminv2pi(<2 x half> %src0, <2 x half> %src1) #0 
{<br class=""> %src0.ext = fpext <2 x half> %src0 to <2 x float><br class="">@@ -271,7 +271,7 @@ define <2 x float> @v_mad_mix_v2f32_cvtf<br class=""> ; GCN-LABEL: {{^}}v_mad_mix_v2f32_f32imminv2pi:<br class=""> ; GFX9: v_mov_b32_e32 v2, v1<br class=""> ; GFX9: v_mov_b32_e32 v3, 0.15915494<br class="">-; GFX9: v_mad_mix_f32 v1, v0, v2, v3 op_sel:[1,1,0] op_sel_hi:[1,1,0] ; encoding<br class="">+; GFX9: v_mad_mix_f32 v1, v0, v1, v3 op_sel:[1,1,0] op_sel_hi:[1,1,0] ; encoding<br class=""> ; GFX9: v_mad_mix_f32 v0, v0, v2, v3 op_sel_hi:[1,1,0] ; encoding<br class=""> define <2 x float> @v_mad_mix_v2f32_f32imminv2pi(<2 x half> %src0, <2 x half> %src1) #0 {<br class=""> %src0.ext = fpext <2 x half> %src0 to <2 x float><br class=""><br class="">Modified: llvm/trunk/test/CodeGen/AMDGPU/ret.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/ret.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/ret.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/AMDGPU/ret.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/AMDGPU/ret.ll Mon Oct 2 15:01:37 2017<br class="">@@ -2,10 +2,10 @@<br class=""> ; RUN: llc -march=amdgcn -mcpu=tonga -verify-machineinstrs < %s | FileCheck -check-prefix=GCN %s<br class=""><br class=""> ; GCN-LABEL: {{^}}vgpr:<br class="">-; GCN: v_mov_b32_e32 v1, v0<br class="">-; GCN-DAG: v_add_f32_e32 v0, 1.0, v1<br class="">-; GCN-DAG: exp mrt0 v1, v1, v1, v1 done vm<br class="">+; GCN-DAG: v_mov_b32_e32 v1, v0<br class="">+; GCN-DAG: exp mrt0 v0, v0, v0, v0 done vm<br class=""> ; GCN: s_waitcnt expcnt(0)<br class="">+; GCN: v_add_f32_e32 v0, 1.0, v0<br class=""> ; GCN-NOT: s_endpgm<br class=""> define amdgpu_vs { float, float } @vgpr([9 x <16 x i8>] addrspace(2)* byval %arg, i32 inreg %arg1, i32 inreg %arg2, float %arg3) #0 {<br class=""> bb:<br class="">@@ -204,13 +204,13 @@ bb:<br class=""> }<br class=""><br class=""> ; GCN-LABEL: {{^}}both:<br class="">-; GCN: v_mov_b32_e32 v1, v0<br class="">-; GCN-DAG: exp mrt0 v1, v1, v1, v1 done vm<br class="">-; GCN-DAG: v_add_f32_e32 v0, 1.0, v1<br class="">-; GCN-DAG: s_add_i32 s0, s3, 2<br class="">+; GCN-DAG: exp mrt0 v0, v0, v0, v0 done vm<br class="">+; GCN-DAG: v_mov_b32_e32 v1, v0<br class=""> ; GCN-DAG: s_mov_b32 s1, s2<br class="">-; GCN: s_mov_b32 s2, s3<br class=""> ; GCN: s_waitcnt expcnt(0)<br class="">+; GCN: v_add_f32_e32 v0, 1.0, v0<br class="">+; GCN-DAG: s_add_i32 s0, s3, 2<br class="">+; GCN-DAG: s_mov_b32 s2, s3<br class=""> ; GCN-NOT: s_endpgm<br class=""> define amdgpu_vs { float, i32, float, i32, i32 } @both([9 x <16 x i8>] addrspace(2)* byval %arg, i32 inreg %arg1, i32 inreg %arg2, float %arg3) #0 {<br class=""> bb:<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/ARM/atomic-op.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/atomic-op.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/atomic-op.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/ARM/atomic-op.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/ARM/atomic-op.ll Mon Oct 2 15:01:37 2017<br class="">@@ -287,7 +287,8 @@ define i32 @test_cmpxchg_fail_order(i32<br 
class=""><br class=""> %pair = cmpxchg i32* %addr, i32 %desired, i32 %new seq_cst monotonic<br class=""> %oldval = extractvalue { i32, i1 } %pair, 0<br class="">-; CHECK-ARMV7: ldrex [[OLDVAL:r[0-9]+]], [r[[ADDR:[0-9]+]]]<br class="">+; CHECK-ARMV7: mov r[[ADDR:[0-9]+]], r0<br class="">+; CHECK-ARMV7: ldrex [[OLDVAL:r[0-9]+]], [r0]<br class=""> ; CHECK-ARMV7: cmp [[OLDVAL]], r1<br class=""> ; CHECK-ARMV7: bne [[FAIL_BB:\.?LBB[0-9]+_[0-9]+]]<br class=""> ; CHECK-ARMV7: dmb ish<br class="">@@ -305,7 +306,8 @@ define i32 @test_cmpxchg_fail_order(i32<br class=""> ; CHECK-ARMV7: dmb ish<br class=""> ; CHECK-ARMV7: bx lr<br class=""><br class="">-; CHECK-T2: ldrex [[OLDVAL:r[0-9]+]], [r[[ADDR:[0-9]+]]]<br class="">+; CHECK-T2: mov r[[ADDR:[0-9]+]], r0<br class="">+; CHECK-T2: ldrex [[OLDVAL:r[0-9]+]], [r0]<br class=""> ; CHECK-T2: cmp [[OLDVAL]], r1<br class=""> ; CHECK-T2: bne [[FAIL_BB:\.?LBB.*]]<br class=""> ; CHECK-T2: dmb ish<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/ARM/intrinsics-overflow.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/intrinsics-overflow.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/intrinsics-overflow.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/ARM/intrinsics-overflow.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/ARM/intrinsics-overflow.ll Mon Oct 2 15:01:37 2017<br class="">@@ -39,7 +39,7 @@ define i32 @sadd_overflow(i32 %a, i32 %b<br class=""> ; ARM: movvc r[[R1]], #0<br class=""><br class=""> ; THUMBV6: mov r[[R2:[0-9]+]], r[[R0:[0-9]+]]<br class="">- ; THUMBV6: adds r[[R3:[0-9]+]], r[[R2]], r[[R1:[0-9]+]]<br class="">+ ; THUMBV6: adds r[[R3:[0-9]+]], r[[R0]], r[[R1:[0-9]+]]<br class=""> ; THUMBV6: movs r[[R0]], #0<br class=""> ; THUMBV6: movs r[[R1]], #1<br class=""> ; THUMBV6: cmp r[[R3]], r[[R2]]<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/ARM/swifterror.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/swifterror.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/swifterror.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/ARM/swifterror.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/ARM/swifterror.ll Mon Oct 2 15:01:37 2017<br class="">@@ -182,7 +182,7 @@ define float @foo_loop(%swift_error** sw<br class=""> ; CHECK-APPLE: beq<br class=""> ; CHECK-APPLE: mov r0, #16<br class=""> ; CHECK-APPLE: malloc<br class="">-; CHECK-APPLE: strb r{{.*}}, [{{.*}}[[ID]], #8]<br class="">+; CHECK-APPLE: strb r{{.*}}, [r0, #8]<br class=""> ; CHECK-APPLE: ble<br class=""> ; CHECK-APPLE: mov r8, [[ID]]<br class=""><br class=""><br class="">Modified: llvm/trunk/test/CodeGen/Mips/llvm-ir/sub.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/Mips/llvm-ir/sub.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/Mips/llvm-ir/sub.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/Mips/llvm-ir/sub.ll (original)<br class="">+++ 
llvm/trunk/test/CodeGen/Mips/llvm-ir/sub.ll Mon Oct 2 15:01:37 2017<br class="">@@ -165,7 +165,7 @@ entry:<br class=""> ; MMR3: subu16 $5, $[[T19]], $[[T20]]<br class=""><br class=""> ; MMR6: move $[[T0:[0-9]+]], $7<br class="">-; MMR6: sw $[[T0]], 8($sp)<br class="">+; MMR6: sw $7, 8($sp)<br class=""> ; MMR6: move $[[T1:[0-9]+]], $5<br class=""> ; MMR6: sw $4, 12($sp)<br class=""> ; MMR6: lw $[[T2:[0-9]+]], 48($sp)<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/PowerPC/fma-mutate.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/fma-mutate.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/fma-mutate.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/PowerPC/fma-mutate.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/PowerPC/fma-mutate.ll Mon Oct 2 15:01:37 2017<br class="">@@ -14,7 +14,8 @@ define double @foo3(double %a) nounwind<br class=""> ret double %r<br class=""><br class=""> ; CHECK: @foo3<br class="">-; CHECK: xsnmsubadp [[REG:[0-9]+]], {{[0-9]+}}, [[REG]]<br class="">+; CHECK: fmr [[REG:[0-9]+]], [[REG2:[0-9]+]]<br class="">+; CHECK: xsnmsubadp [[REG]], {{[0-9]+}}, [[REG2]]<br class=""> ; CHECK: xsmaddmdp<br class=""> ; CHECK: xsmaddadp<br class=""> }<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/PowerPC/gpr-vsr-spill.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/gpr-vsr-spill.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/gpr-vsr-spill.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/PowerPC/gpr-vsr-spill.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/PowerPC/gpr-vsr-spill.ll Mon Oct 2 15:01:37 2017<br class="">@@ -16,8 +16,8 @@ if.end:<br class=""> ret i32 %e.0<br class=""> ; CHECK: @foo<br class=""> ; CHECK: mr [[NEWREG:[0-9]+]], 3<br class="">+; CHECK: mr [[REG1:[0-9]+]], 4<br class=""> ; CHECK: mtvsrd [[NEWREG2:[0-9]+]], 4<br class="">-; CHECK: mffprd [[REG1:[0-9]+]], [[NEWREG2]]<br class=""> ; CHECK: add {{[0-9]+}}, [[NEWREG]], [[REG1]]<br class=""> ; CHECK: mffprd [[REG2:[0-9]+]], [[NEWREG2]]<br class=""> ; CHECK: add {{[0-9]+}}, [[REG2]], [[NEWREG]]<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/PowerPC/inlineasm-i64-reg.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/inlineasm-i64-reg.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/inlineasm-i64-reg.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/PowerPC/inlineasm-i64-reg.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/PowerPC/inlineasm-i64-reg.ll Mon Oct 2 15:01:37 2017<br class="">@@ -75,7 +75,7 @@ entry:<br class=""><br class=""> ; CHECK-DAG: mr [[REG:[0-9]+]], 3<br class=""> ; CHECK-DAG: li 0, 1076<br class="">-; CHECK: stw [[REG]],<br class="">+; CHECK-DAG: stw 3,<br class=""><br class=""> ; CHECK: #APP<br class=""> ; CHECK: sc<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/PowerPC/opt-li-add-to-addi.ll<br 
class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/opt-li-add-to-addi.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/opt-li-add-to-addi.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/PowerPC/opt-li-add-to-addi.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/PowerPC/opt-li-add-to-addi.ll Mon Oct 2 15:01:37 2017<br class="">@@ -3,7 +3,7 @@<br class=""><br class=""> define i64 @testOptimizeLiAddToAddi(i64 %a) {<br class=""> ; CHECK-LABEL: testOptimizeLiAddToAddi:<br class="">-; CHECK: addi 3, 30, 2444<br class="">+; CHECK: addi 3, 3, 2444<br class=""> ; CHECK: bl callv<br class=""> ; CHECK: addi 3, 30, 234<br class=""> ; CHECK: bl call<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/PowerPC/tail-dup-layout.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/tail-dup-layout.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/tail-dup-layout.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/PowerPC/tail-dup-layout.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/PowerPC/tail-dup-layout.ll Mon Oct 2 15:01:37 2017<br class="">@@ -23,7 +23,7 @@ target triple = "powerpc64le-grtev4-linu<br class=""> ;CHECK-LABEL: straight_test:<br class=""> ; test1 may have been merged with entry<br class=""> ;CHECK: mr [[TAGREG:[0-9]+]], 3<br class="">-;CHECK: andi. {{[0-9]+}}, [[TAGREG]], 1<br class="">+;CHECK: andi. {{[0-9]+}}, [[TAGREG:[0-9]+]], 1<br class=""> ;CHECK-NEXT: bc 12, 1, .[[OPT1LABEL:[_0-9A-Za-z]+]]<br class=""> ;CHECK-NEXT: # %test2<br class=""> ;CHECK-NEXT: rlwinm. 
{{[0-9]+}}, [[TAGREG]], 0, 30, 30<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/SPARC/atomics.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/SPARC/atomics.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/SPARC/atomics.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/SPARC/atomics.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/SPARC/atomics.ll Mon Oct 2 15:01:37 2017<br class="">@@ -235,8 +235,9 @@ entry:<br class=""><br class=""> ; CHECK-LABEL: test_load_add_i32<br class=""> ; CHECK: membar<br class="">-; CHECK: add [[V:%[gilo][0-7]]], %o1, [[U:%[gilo][0-7]]]<br class="">-; CHECK: cas [%o0], [[V]], [[U]]<br class="">+; CHECK: mov [[U:%[gilo][0-7]]], [[V:%[gilo][0-7]]]<br class="">+; CHECK: add [[U:%[gilo][0-7]]], %o1, [[V2:%[gilo][0-7]]]<br class="">+; CHECK: cas [%o0], [[V]], [[V2]]<br class=""> ; CHECK: membar<br class=""> define zeroext i32 @test_load_add_i32(i32* %p, i32 zeroext %v) {<br class=""> entry:<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/Thumb/thumb-shrink-wrapping.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/Thumb/thumb-shrink-wrapping.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/Thumb/thumb-shrink-wrapping.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/Thumb/thumb-shrink-wrapping.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/Thumb/thumb-shrink-wrapping.ll Mon Oct 2 15:01:37 2017<br class="">@@ -598,7 +598,7 @@ declare void @abort() #0<br class=""> define i32 @b_to_bx(i32 %value) {<br class=""> ; CHECK-LABEL: b_to_bx:<br class=""> ; DISABLE: push {r7, lr}<br class="">-; CHECK: cmp r1, #49<br class="">+; CHECK: cmp r0, #49<br class=""> ; CHECK-NEXT: bgt [[ELSE_LABEL:LBB[0-9_]+]]<br class=""> ; ENABLE: push {r7, lr}<br class=""><br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/2006-03-01-InstrSchedBug.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/2006-03-01-InstrSchedBug.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/2006-03-01-InstrSchedBug.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/2006-03-01-InstrSchedBug.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/2006-03-01-InstrSchedBug.ll Mon Oct 2 15:01:37 2017<br class="">@@ -7,7 +7,7 @@ define i32 @f(i32 %a, i32 %b) {<br class=""> ; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax<br class=""> ; CHECK-NEXT: movl {{[0-9]+}}(%esp), %ecx<br class=""> ; CHECK-NEXT: movl %ecx, %edx<br class="">-; CHECK-NEXT: imull %edx, %edx<br class="">+; CHECK-NEXT: imull %ecx, %edx<br class=""> ; CHECK-NEXT: imull %eax, %ecx<br class=""> ; CHECK-NEXT: imull %eax, %eax<br class=""> ; CHECK-NEXT: addl %edx, %eax<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/arg-copy-elide.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/arg-copy-elide.ll?rev=314729&r1=314728&r2=314729&view=diff" 
class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/arg-copy-elide.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/arg-copy-elide.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/arg-copy-elide.ll Mon Oct 2 15:01:37 2017<br class="">@@ -106,7 +106,7 @@ entry:<br class=""> ; CHECK-DAG: movl %edx, %[[r1:[^ ]*]]<br class=""> ; CHECK-DAG: movl 8(%ebp), %[[r2:[^ ]*]]<br class=""> ; CHECK-DAG: movl %[[r2]], 4(%esp)<br class="">-; CHECK-DAG: movl %[[r1]], (%esp)<br class="">+; CHECK-DAG: movl %edx, (%esp)<br class=""> ; CHECK: movl %esp, %[[reg:[^ ]*]]<br class=""> ; CHECK: pushl %[[reg]]<br class=""> ; CHECK: calll _addrof_i64<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/avg.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avg.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avg.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/avg.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/avg.ll Mon Oct 2 15:01:37 2017<br class="">@@ -407,7 +407,6 @@ define void @avg_v64i8(<64 x i8>* %a, <6<br class=""> ; SSE2-NEXT: pand %xmm0, %xmm2<br class=""> ; SSE2-NEXT: packuswb %xmm1, %xmm2<br class=""> ; SSE2-NEXT: packuswb %xmm10, %xmm2<br class="">-; SSE2-NEXT: movdqa %xmm2, %xmm1<br class=""> ; SSE2-NEXT: psrld $1, %xmm4<br class=""> ; SSE2-NEXT: psrld $1, %xmm12<br class=""> ; SSE2-NEXT: pand %xmm0, %xmm12<br class="">@@ -444,7 +443,7 @@ define void @avg_v64i8(<64 x i8>* %a, <6<br class=""> ; SSE2-NEXT: movdqu %xmm7, (%rax)<br class=""> ; SSE2-NEXT: movdqu %xmm11, (%rax)<br class=""> ; SSE2-NEXT: movdqu %xmm13, (%rax)<br class="">-; SSE2-NEXT: movdqu %xmm1, (%rax)<br class="">+; SSE2-NEXT: movdqu %xmm2, (%rax)<br class=""> ; SSE2-NEXT: retq<br class=""> ;<br class=""> ; AVX1-LABEL: avg_v64i8:<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/avx-load-store.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx-load-store.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx-load-store.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/avx-load-store.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/avx-load-store.ll Mon Oct 2 15:01:37 2017<br class="">@@ -12,11 +12,11 @@ define void @test_256_load(double* nocap<br class=""> ; CHECK-NEXT: movq %rdx, %r14<br class=""> ; CHECK-NEXT: movq %rsi, %r15<br class=""> ; CHECK-NEXT: movq %rdi, %rbx<br class="">-; CHECK-NEXT: vmovaps (%rbx), %ymm0<br class="">+; CHECK-NEXT: vmovaps (%rdi), %ymm0<br class=""> ; CHECK-NEXT: vmovups %ymm0, {{[0-9]+}}(%rsp) # 32-byte Spill<br class="">-; CHECK-NEXT: vmovaps (%r15), %ymm1<br class="">+; CHECK-NEXT: vmovaps (%rsi), %ymm1<br class=""> ; CHECK-NEXT: vmovups %ymm1, {{[0-9]+}}(%rsp) # 32-byte Spill<br class="">-; CHECK-NEXT: vmovaps (%r14), %ymm2<br class="">+; CHECK-NEXT: vmovaps (%rdx), %ymm2<br class=""> ; CHECK-NEXT: vmovups %ymm2, (%rsp) # 32-byte Spill<br class=""> ; CHECK-NEXT: callq dummy<br class=""> ; CHECK-NEXT: vmovups 
{{[0-9]+}}(%rsp), %ymm0 # 32-byte Reload<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/avx512-bugfix-25270.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-bugfix-25270.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-bugfix-25270.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/avx512-bugfix-25270.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/avx512-bugfix-25270.ll Mon Oct 2 15:01:37 2017<br class="">@@ -9,10 +9,10 @@ define void @bar__512(<16 x i32>* %var)<br class=""> ; CHECK-NEXT: pushq %rbx<br class=""> ; CHECK-NEXT: subq $112, %rsp<br class=""> ; CHECK-NEXT: movq %rdi, %rbx<br class="">-; CHECK-NEXT: vmovups (%rbx), %zmm0<br class="">+; CHECK-NEXT: vmovups (%rdi), %zmm0<br class=""> ; CHECK-NEXT: vmovups %zmm0, (%rsp) ## 64-byte Spill<br class=""> ; CHECK-NEXT: vbroadcastss {{.*}}(%rip), %zmm1<br class="">-; CHECK-NEXT: vmovaps %zmm1, (%rbx)<br class="">+; CHECK-NEXT: vmovaps %zmm1, (%rdi)<br class=""> ; CHECK-NEXT: callq _Print__512<br class=""> ; CHECK-NEXT: vmovups (%rsp), %zmm0 ## 64-byte Reload<br class=""> ; CHECK-NEXT: callq _Print__512<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/avx512-calling-conv.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-calling-conv.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-calling-conv.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/avx512-calling-conv.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/avx512-calling-conv.ll Mon Oct 2 15:01:37 2017<br class="">@@ -466,7 +466,7 @@ define i32 @test12(i32 %a1, i32 %a2, i32<br class=""> ; KNL_X32-NEXT: movl %edi, (%esp)<br class=""> ; KNL_X32-NEXT: calll _test11<br class=""> ; KNL_X32-NEXT: movl %eax, %ebx<br class="">-; KNL_X32-NEXT: movzbl %bl, %eax<br class="">+; KNL_X32-NEXT: movzbl %al, %eax<br class=""> ; KNL_X32-NEXT: movl %eax, {{[0-9]+}}(%esp)<br class=""> ; KNL_X32-NEXT: movl %esi, {{[0-9]+}}(%esp)<br class=""> ; KNL_X32-NEXT: movl %edi, (%esp)<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/avx512-mask-op.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-mask-op.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-mask-op.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/avx512-mask-op.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/avx512-mask-op.ll Mon Oct 2 15:01:37 2017<br class="">@@ -1175,7 +1175,6 @@ define <8 x i1> @test18(i8 %a, i16 %y) {<br class=""> ; KNL-NEXT: kmovw %esi, %k0<br class=""> ; KNL-NEXT: kshiftlw $7, %k0, %k2<br class=""> ; KNL-NEXT: kshiftrw $15, %k2, %k2<br class="">-; KNL-NEXT: kmovw %k2, %eax<br class=""> ; KNL-NEXT: kshiftlw $6, %k0, %k0<br class=""> ; KNL-NEXT: kshiftrw $15, %k0, %k0<br class=""> ; KNL-NEXT: kmovw %k0, %ecx<br class="">@@ -1188,8 +1187,7 @@ define <8 x i1> @test18(i8 %a, i16 %y) {<br class=""> ; 
KNL-NEXT: vptestmq %zmm0, %zmm0, %k0<br class=""> ; KNL-NEXT: kshiftlw $1, %k0, %k0<br class=""> ; KNL-NEXT: kshiftrw $1, %k0, %k0<br class="">-; KNL-NEXT: kmovw %eax, %k1<br class="">-; KNL-NEXT: kshiftlw $7, %k1, %k1<br class="">+; KNL-NEXT: kshiftlw $7, %k2, %k1<br class=""> ; KNL-NEXT: korw %k1, %k0, %k1<br class=""> ; KNL-NEXT: vpternlogq $255, %zmm0, %zmm0, %zmm0 {%k1} {z}<br class=""> ; KNL-NEXT: vpmovqw %zmm0, %xmm0<br class="">@@ -1202,20 +1200,16 @@ define <8 x i1> @test18(i8 %a, i16 %y) {<br class=""> ; SKX-NEXT: kmovd %esi, %k1<br class=""> ; SKX-NEXT: kshiftlw $7, %k1, %k2<br class=""> ; SKX-NEXT: kshiftrw $15, %k2, %k2<br class="">-; SKX-NEXT: kmovd %k2, %eax<br class=""> ; SKX-NEXT: kshiftlw $6, %k1, %k1<br class=""> ; SKX-NEXT: kshiftrw $15, %k1, %k1<br class="">-; SKX-NEXT: kmovd %k1, %ecx<br class=""> ; SKX-NEXT: vpmovm2q %k0, %zmm0<br class="">-; SKX-NEXT: kmovd %ecx, %k0<br class="">-; SKX-NEXT: vpmovm2q %k0, %zmm1<br class="">+; SKX-NEXT: vpmovm2q %k1, %zmm1<br class=""> ; SKX-NEXT: vmovdqa64 {{.*#+}} zmm2 = [0,1,2,3,4,5,8,7]<br class=""> ; SKX-NEXT: vpermi2q %zmm1, %zmm0, %zmm2<br class=""> ; SKX-NEXT: vpmovq2m %zmm2, %k0<br class=""> ; SKX-NEXT: kshiftlb $1, %k0, %k0<br class=""> ; SKX-NEXT: kshiftrb $1, %k0, %k0<br class="">-; SKX-NEXT: kmovd %eax, %k1<br class="">-; SKX-NEXT: kshiftlb $7, %k1, %k1<br class="">+; SKX-NEXT: kshiftlb $7, %k2, %k1<br class=""> ; SKX-NEXT: korb %k1, %k0, %k0<br class=""> ; SKX-NEXT: vpmovm2w %k0, %xmm0<br class=""> ; SKX-NEXT: vzeroupper<br class="">@@ -1227,7 +1221,6 @@ define <8 x i1> @test18(i8 %a, i16 %y) {<br class=""> ; AVX512BW-NEXT: kmovd %esi, %k0<br class=""> ; AVX512BW-NEXT: kshiftlw $7, %k0, %k2<br class=""> ; AVX512BW-NEXT: kshiftrw $15, %k2, %k2<br class="">-; AVX512BW-NEXT: kmovd %k2, %eax<br class=""> ; AVX512BW-NEXT: kshiftlw $6, %k0, %k0<br class=""> ; AVX512BW-NEXT: kshiftrw $15, %k0, %k0<br class=""> ; AVX512BW-NEXT: kmovd %k0, %ecx<br class="">@@ -1240,8 +1233,7 @@ define <8 x i1> @test18(i8 %a, i16 %y) {<br class=""> ; AVX512BW-NEXT: vptestmq %zmm0, %zmm0, %k0<br class=""> ; AVX512BW-NEXT: kshiftlw $1, %k0, %k0<br class=""> ; AVX512BW-NEXT: kshiftrw $1, %k0, %k0<br class="">-; AVX512BW-NEXT: kmovd %eax, %k1<br class="">-; AVX512BW-NEXT: kshiftlw $7, %k1, %k1<br class="">+; AVX512BW-NEXT: kshiftlw $7, %k2, %k1<br class=""> ; AVX512BW-NEXT: korw %k1, %k0, %k0<br class=""> ; AVX512BW-NEXT: vpmovm2w %k0, %zmm0<br class=""> ; AVX512BW-NEXT: ## kill: %XMM0<def> %XMM0<kill> %ZMM0<kill><br class="">@@ -1254,20 +1246,16 @@ define <8 x i1> @test18(i8 %a, i16 %y) {<br class=""> ; AVX512DQ-NEXT: kmovw %esi, %k1<br class=""> ; AVX512DQ-NEXT: kshiftlw $7, %k1, %k2<br class=""> ; AVX512DQ-NEXT: kshiftrw $15, %k2, %k2<br class="">-; AVX512DQ-NEXT: kmovw %k2, %eax<br class=""> ; AVX512DQ-NEXT: kshiftlw $6, %k1, %k1<br class=""> ; AVX512DQ-NEXT: kshiftrw $15, %k1, %k1<br class="">-; AVX512DQ-NEXT: kmovw %k1, %ecx<br class=""> ; AVX512DQ-NEXT: vpmovm2q %k0, %zmm0<br class="">-; AVX512DQ-NEXT: kmovw %ecx, %k0<br class="">-; AVX512DQ-NEXT: vpmovm2q %k0, %zmm1<br class="">+; AVX512DQ-NEXT: vpmovm2q %k1, %zmm1<br class=""> ; AVX512DQ-NEXT: vmovdqa64 {{.*#+}} zmm2 = [0,1,2,3,4,5,8,7]<br class=""> ; AVX512DQ-NEXT: vpermi2q %zmm1, %zmm0, %zmm2<br class=""> ; AVX512DQ-NEXT: vpmovq2m %zmm2, %k0<br class=""> ; AVX512DQ-NEXT: kshiftlb $1, %k0, %k0<br class=""> ; AVX512DQ-NEXT: kshiftrb $1, %k0, %k0<br class="">-; AVX512DQ-NEXT: kmovw %eax, %k1<br class="">-; AVX512DQ-NEXT: kshiftlb $7, %k1, %k1<br class="">+; AVX512DQ-NEXT: kshiftlb $7, %k2, 
%k1<br class=""> ; AVX512DQ-NEXT: korb %k1, %k0, %k0<br class=""> ; AVX512DQ-NEXT: vpmovm2q %k0, %zmm0<br class=""> ; AVX512DQ-NEXT: vpmovqw %zmm0, %xmm0<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/avx512-schedule.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-schedule.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-schedule.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/avx512-schedule.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/avx512-schedule.ll Mon Oct 2 15:01:37 2017<br class="">@@ -5973,20 +5973,16 @@ define <8 x i1> @vmov_test18(i8 %a, i16<br class=""> ; CHECK-NEXT: kmovd %esi, %k1<br class=""> ; CHECK-NEXT: kshiftlw $7, %k1, %k2<br class=""> ; CHECK-NEXT: kshiftrw $15, %k2, %k2<br class="">-; CHECK-NEXT: kmovd %k2, %eax<br class=""> ; CHECK-NEXT: kshiftlw $6, %k1, %k1<br class=""> ; CHECK-NEXT: kshiftrw $15, %k1, %k1<br class="">-; CHECK-NEXT: kmovd %k1, %ecx<br class=""> ; CHECK-NEXT: vpmovm2q %k0, %zmm0<br class="">-; CHECK-NEXT: kmovd %ecx, %k0<br class="">-; CHECK-NEXT: vpmovm2q %k0, %zmm1<br class="">+; CHECK-NEXT: vpmovm2q %k1, %zmm1<br class=""> ; CHECK-NEXT: vmovdqa64 {{.*#+}} zmm2 = [0,1,2,3,4,5,8,7] sched: [5:0.50]<br class=""> ; CHECK-NEXT: vpermi2q %zmm1, %zmm0, %zmm2<br class=""> ; CHECK-NEXT: vpmovq2m %zmm2, %k0<br class=""> ; CHECK-NEXT: kshiftlb $1, %k0, %k0<br class=""> ; CHECK-NEXT: kshiftrb $1, %k0, %k0<br class="">-; CHECK-NEXT: kmovd %eax, %k1<br class="">-; CHECK-NEXT: kshiftlb $7, %k1, %k1<br class="">+; CHECK-NEXT: kshiftlb $7, %k2, %k1<br class=""> ; CHECK-NEXT: korb %k1, %k0, %k0<br class=""> ; CHECK-NEXT: vpmovm2w %k0, %xmm0<br class=""> ; CHECK-NEXT: vzeroupper # sched: [4:1.00]<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/avx512bw-intrinsics-upgrade.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512bw-intrinsics-upgrade.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512bw-intrinsics-upgrade.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/avx512bw-intrinsics-upgrade.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/avx512bw-intrinsics-upgrade.ll Mon Oct 2 15:01:37 2017<br class="">@@ -2090,7 +2090,7 @@ define i64 @test_mask_cmp_b_512(<64 x i8<br class=""> ; AVX512F-32-NEXT: vpblendvb %ymm1, %ymm0, %ymm2, %ymm2<br class=""> ; AVX512F-32-NEXT: vshufi64x2 {{.*#+}} zmm0 = zmm2[0,1,2,3],zmm0[4,5,6,7]<br class=""> ; AVX512F-32-NEXT: vpmovb2m %zmm0, %k0<br class="">-; AVX512F-32-NEXT: movl %esi, %eax<br class="">+; AVX512F-32-NEXT: movl %ecx, %eax<br class=""> ; AVX512F-32-NEXT: shrl $30, %eax<br class=""> ; AVX512F-32-NEXT: kmovd %eax, %k1<br class=""> ; AVX512F-32-NEXT: vpmovm2b %k1, %zmm0<br class="">@@ -2101,7 +2101,7 @@ define i64 @test_mask_cmp_b_512(<64 x i8<br class=""> ; AVX512F-32-NEXT: vpblendvb %ymm2, %ymm0, %ymm1, %ymm1<br class=""> ; AVX512F-32-NEXT: vshufi64x2 {{.*#+}} zmm0 = zmm1[0,1,2,3],zmm0[4,5,6,7]<br class=""> ; AVX512F-32-NEXT: vpmovb2m %zmm0, %k0<br class="">-; AVX512F-32-NEXT: movl %esi, %eax<br class="">+; AVX512F-32-NEXT: movl %ecx, %eax<br class=""> ; 
AVX512F-32-NEXT: shrl $31, %eax<br class=""> ; AVX512F-32-NEXT: kmovd %eax, %k1<br class=""> ; AVX512F-32-NEXT: vpmovm2b %k1, %zmm0<br class="">@@ -2980,7 +2980,7 @@ define i64 @test_mask_x86_avx512_ucmp_b_<br class=""> ; AVX512F-32-NEXT: vpblendvb %ymm1, %ymm0, %ymm2, %ymm2<br class=""> ; AVX512F-32-NEXT: vshufi64x2 {{.*#+}} zmm0 = zmm2[0,1,2,3],zmm0[4,5,6,7]<br class=""> ; AVX512F-32-NEXT: vpmovb2m %zmm0, %k0<br class="">-; AVX512F-32-NEXT: movl %esi, %eax<br class="">+; AVX512F-32-NEXT: movl %ecx, %eax<br class=""> ; AVX512F-32-NEXT: shrl $30, %eax<br class=""> ; AVX512F-32-NEXT: kmovd %eax, %k1<br class=""> ; AVX512F-32-NEXT: vpmovm2b %k1, %zmm0<br class="">@@ -2991,7 +2991,7 @@ define i64 @test_mask_x86_avx512_ucmp_b_<br class=""> ; AVX512F-32-NEXT: vpblendvb %ymm2, %ymm0, %ymm1, %ymm1<br class=""> ; AVX512F-32-NEXT: vshufi64x2 {{.*#+}} zmm0 = zmm1[0,1,2,3],zmm0[4,5,6,7]<br class=""> ; AVX512F-32-NEXT: vpmovb2m %zmm0, %k0<br class="">-; AVX512F-32-NEXT: movl %esi, %eax<br class="">+; AVX512F-32-NEXT: movl %ecx, %eax<br class=""> ; AVX512F-32-NEXT: shrl $31, %eax<br class=""> ; AVX512F-32-NEXT: kmovd %eax, %k1<br class=""> ; AVX512F-32-NEXT: vpmovm2b %k1, %zmm0<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/buildvec-insertvec.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/buildvec-insertvec.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/buildvec-insertvec.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/buildvec-insertvec.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/buildvec-insertvec.ll Mon Oct 2 15:01:37 2017<br class="">@@ -38,7 +38,7 @@ define <4 x float> @test_negative_zero_1<br class=""> ; SSE2-LABEL: test_negative_zero_1:<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movaps %xmm0, %xmm1<br class="">-; SSE2-NEXT: movhlps {{.*#+}} xmm1 = xmm1[1,1]<br class="">+; SSE2-NEXT: movhlps {{.*#+}} xmm1 = xmm0[1],xmm1[1]<br class=""> ; SSE2-NEXT: movss {{.*#+}} xmm2 = mem[0],zero,zero,zero<br class=""> ; SSE2-NEXT: unpcklps {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]<br class=""> ; SSE2-NEXT: xorps %xmm2, %xmm2<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/combine-fcopysign.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/combine-fcopysign.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/combine-fcopysign.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/combine-fcopysign.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/combine-fcopysign.ll Mon Oct 2 15:01:37 2017<br class="">@@ -231,8 +231,8 @@ define <4 x double> @combine_vec_fcopysi<br class=""> ; SSE-NEXT: cvtss2sd %xmm2, %xmm4<br class=""> ; SSE-NEXT: movshdup {{.*#+}} xmm5 = xmm2[1,1,3,3]<br class=""> ; SSE-NEXT: movaps %xmm2, %xmm6<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm6 = xmm6[1,1]<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1,2,3]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm6 = xmm2[1],xmm6[1]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1],xmm2[2,3]<br class=""> ; SSE-NEXT: movaps {{.*#+}} xmm7<br class=""> ; SSE-NEXT: movaps %xmm0, 
%xmm2<br class=""> ; SSE-NEXT: andps %xmm7, %xmm2<br class="">@@ -247,7 +247,7 @@ define <4 x double> @combine_vec_fcopysi<br class=""> ; SSE-NEXT: orps %xmm0, %xmm4<br class=""> ; SSE-NEXT: movlhps {{.*#+}} xmm2 = xmm2[0],xmm4[0]<br class=""> ; SSE-NEXT: movaps %xmm1, %xmm0<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm0 = xmm0[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm0 = xmm1[1],xmm0[1]<br class=""> ; SSE-NEXT: andps %xmm7, %xmm0<br class=""> ; SSE-NEXT: cvtss2sd %xmm3, %xmm3<br class=""> ; SSE-NEXT: andps %xmm8, %xmm3<br class="">@@ -294,7 +294,7 @@ define <4 x float> @combine_vec_fcopysig<br class=""> ; SSE-NEXT: orps %xmm6, %xmm1<br class=""> ; SSE-NEXT: unpcklps {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]<br class=""> ; SSE-NEXT: movaps %xmm3, %xmm1<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm1[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm3[1],xmm1[1]<br class=""> ; SSE-NEXT: andps %xmm5, %xmm1<br class=""> ; SSE-NEXT: xorps %xmm6, %xmm6<br class=""> ; SSE-NEXT: cvtsd2ss %xmm2, %xmm6<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/combine-shl.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/combine-shl.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/combine-shl.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/combine-shl.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/combine-shl.ll Mon Oct 2 15:01:37 2017<br class="">@@ -194,7 +194,7 @@ define <8 x i32> @combine_vec_shl_ext_sh<br class=""> ; SSE-LABEL: combine_vec_shl_ext_shl0:<br class=""> ; SSE: # BB#0:<br class=""> ; SSE-NEXT: movdqa %xmm0, %xmm1<br class="">-; SSE-NEXT: pmovzxwd {{.*#+}} xmm0 = xmm1[0],zero,xmm1[1],zero,xmm1[2],zero,xmm1[3],zero<br class="">+; SSE-NEXT: pmovzxwd {{.*#+}} xmm0 = xmm0[0],zero,xmm0[1],zero,xmm0[2],zero,xmm0[3],zero<br class=""> ; SSE-NEXT: punpckhwd {{.*#+}} xmm1 = xmm1[4],xmm0[4],xmm1[5],xmm0[5],xmm1[6],xmm0[6],xmm1[7],xmm0[7]<br class=""> ; SSE-NEXT: pslld $20, %xmm1<br class=""> ; SSE-NEXT: pslld $20, %xmm0<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/complex-fastmath.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/complex-fastmath.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/complex-fastmath.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/complex-fastmath.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/complex-fastmath.ll Mon Oct 2 15:01:37 2017<br class="">@@ -14,7 +14,7 @@ define <2 x float> @complex_square_f32(<<br class=""> ; SSE: # BB#0:<br class=""> ; SSE-NEXT: movshdup {{.*#+}} xmm1 = xmm0[1,1,3,3]<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm2<br class="">-; SSE-NEXT: addss %xmm2, %xmm2<br class="">+; SSE-NEXT: addss %xmm0, %xmm2<br class=""> ; SSE-NEXT: mulss %xmm1, %xmm2<br class=""> ; SSE-NEXT: mulss %xmm0, %xmm0<br class=""> ; SSE-NEXT: mulss %xmm1, %xmm1<br class="">@@ -58,9 +58,9 @@ define <2 x double> @complex_square_f64(<br class=""> ; SSE-LABEL: complex_square_f64:<br class=""> ; SSE: # BB#0:<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm1<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm1[1,1]<br 
class="">+; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm0[1],xmm1[1]<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm2<br class="">-; SSE-NEXT: addsd %xmm2, %xmm2<br class="">+; SSE-NEXT: addsd %xmm0, %xmm2<br class=""> ; SSE-NEXT: mulsd %xmm1, %xmm2<br class=""> ; SSE-NEXT: mulsd %xmm0, %xmm0<br class=""> ; SSE-NEXT: mulsd %xmm1, %xmm1<br class="">@@ -161,9 +161,9 @@ define <2 x double> @complex_mul_f64(<2<br class=""> ; SSE-LABEL: complex_mul_f64:<br class=""> ; SSE: # BB#0:<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm2<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm2[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm0[1],xmm2[1]<br class=""> ; SSE-NEXT: movaps %xmm1, %xmm3<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm3[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm1[1],xmm3[1]<br class=""> ; SSE-NEXT: movaps %xmm3, %xmm4<br class=""> ; SSE-NEXT: mulsd %xmm0, %xmm4<br class=""> ; SSE-NEXT: mulsd %xmm1, %xmm0<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/divide-by-constant.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/divide-by-constant.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/divide-by-constant.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/divide-by-constant.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/divide-by-constant.ll Mon Oct 2 15:01:37 2017<br class="">@@ -318,7 +318,7 @@ define i64 @PR23590(i64 %x) nounwind {<br class=""> ; X64: # BB#0: # %entry<br class=""> ; X64-NEXT: movq %rdi, %rcx<br class=""> ; X64-NEXT: movabsq $6120523590596543007, %rdx # imm = 0x54F077C718E7C21F<br class="">-; X64-NEXT: movq %rcx, %rax<br class="">+; X64-NEXT: movq %rdi, %rax<br class=""> ; X64-NEXT: mulq %rdx<br class=""> ; X64-NEXT: shrq $12, %rdx<br class=""> ; X64-NEXT: imulq $12345, %rdx, %rax # imm = 0x3039<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/fmaxnum.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fmaxnum.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fmaxnum.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/fmaxnum.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/fmaxnum.ll Mon Oct 2 15:01:37 2017<br class="">@@ -18,7 +18,7 @@ declare <8 x double> @llvm.maxnum.v8f64(<br class=""><br class=""> ; CHECK-LABEL: @test_fmaxf<br class=""> ; SSE: movaps %xmm0, %xmm2<br class="">-; SSE-NEXT: cmpunordss %xmm2, %xmm2<br class="">+; SSE-NEXT: cmpunordss %xmm0, %xmm2<br class=""> ; SSE-NEXT: movaps %xmm2, %xmm3<br class=""> ; SSE-NEXT: andps %xmm1, %xmm3<br class=""> ; SSE-NEXT: maxss %xmm0, %xmm1<br class="">@@ -47,7 +47,7 @@ define float @test_fmaxf_minsize(float %<br class=""><br class=""> ; CHECK-LABEL: @test_fmax<br class=""> ; SSE: movapd %xmm0, %xmm2<br class="">-; SSE-NEXT: cmpunordsd %xmm2, %xmm2<br class="">+; SSE-NEXT: cmpunordsd %xmm0, %xmm2<br class=""> ; SSE-NEXT: movapd %xmm2, %xmm3<br class=""> ; SSE-NEXT: andpd %xmm1, %xmm3<br class=""> ; SSE-NEXT: maxsd %xmm0, %xmm1<br class="">@@ -74,7 +74,7 @@ define x86_fp80 @test_fmaxl(x86_fp80 %x,<br class=""><br class=""> ; CHECK-LABEL: @test_intrinsic_fmaxf<br class=""> 
; SSE: movaps %xmm0, %xmm2<br class="">-; SSE-NEXT: cmpunordss %xmm2, %xmm2<br class="">+; SSE-NEXT: cmpunordss %xmm0, %xmm2<br class=""> ; SSE-NEXT: movaps %xmm2, %xmm3<br class=""> ; SSE-NEXT: andps %xmm1, %xmm3<br class=""> ; SSE-NEXT: maxss %xmm0, %xmm1<br class="">@@ -95,7 +95,7 @@ define float @test_intrinsic_fmaxf(float<br class=""><br class=""> ; CHECK-LABEL: @test_intrinsic_fmax<br class=""> ; SSE: movapd %xmm0, %xmm2<br class="">-; SSE-NEXT: cmpunordsd %xmm2, %xmm2<br class="">+; SSE-NEXT: cmpunordsd %xmm0, %xmm2<br class=""> ; SSE-NEXT: movapd %xmm2, %xmm3<br class=""> ; SSE-NEXT: andpd %xmm1, %xmm3<br class=""> ; SSE-NEXT: maxsd %xmm0, %xmm1<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/fmf-flags.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fmf-flags.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fmf-flags.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/fmf-flags.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/fmf-flags.ll Mon Oct 2 15:01:37 2017<br class="">@@ -30,7 +30,7 @@ define float @fast_fmuladd_opts(float %a<br class=""> ; X64-LABEL: fast_fmuladd_opts:<br class=""> ; X64: # BB#0:<br class=""> ; X64-NEXT: movaps %xmm0, %xmm1<br class="">-; X64-NEXT: addss %xmm1, %xmm1<br class="">+; X64-NEXT: addss %xmm0, %xmm1<br class=""> ; X64-NEXT: addss %xmm0, %xmm1<br class=""> ; X64-NEXT: movaps %xmm1, %xmm0<br class=""> ; X64-NEXT: retq<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/fminnum.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fminnum.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fminnum.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/fminnum.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/fminnum.ll Mon Oct 2 15:01:37 2017<br class="">@@ -18,7 +18,7 @@ declare <8 x double> @llvm.minnum.v8f64(<br class=""><br class=""> ; CHECK-LABEL: @test_fminf<br class=""> ; SSE: movaps %xmm0, %xmm2<br class="">-; SSE-NEXT: cmpunordss %xmm2, %xmm2<br class="">+; SSE-NEXT: cmpunordss %xmm0, %xmm2<br class=""> ; SSE-NEXT: movaps %xmm2, %xmm3<br class=""> ; SSE-NEXT: andps %xmm1, %xmm3<br class=""> ; SSE-NEXT: minss %xmm0, %xmm1<br class="">@@ -40,7 +40,7 @@ define float @test_fminf(float %x, float<br class=""><br class=""> ; CHECK-LABEL: @test_fmin<br class=""> ; SSE: movapd %xmm0, %xmm2<br class="">-; SSE-NEXT: cmpunordsd %xmm2, %xmm2<br class="">+; SSE-NEXT: cmpunordsd %xmm0, %xmm2<br class=""> ; SSE-NEXT: movapd %xmm2, %xmm3<br class=""> ; SSE-NEXT: andpd %xmm1, %xmm3<br class=""> ; SSE-NEXT: minsd %xmm0, %xmm1<br class="">@@ -67,7 +67,7 @@ define x86_fp80 @test_fminl(x86_fp80 %x,<br class=""><br class=""> ; CHECK-LABEL: @test_intrinsic_fminf<br class=""> ; SSE: movaps %xmm0, %xmm2<br class="">-; SSE-NEXT: cmpunordss %xmm2, %xmm2<br class="">+; SSE-NEXT: cmpunordss %xmm0, %xmm2<br class=""> ; SSE-NEXT: movaps %xmm2, %xmm3<br class=""> ; SSE-NEXT: andps %xmm1, %xmm3<br class=""> ; SSE-NEXT: minss %xmm0, %xmm1<br class="">@@ -87,7 +87,7 @@ define float @test_intrinsic_fminf(float<br class=""><br class=""> ; CHECK-LABEL: 
@test_intrinsic_fmin<br class=""> ; SSE: movapd %xmm0, %xmm2<br class="">-; SSE-NEXT: cmpunordsd %xmm2, %xmm2<br class="">+; SSE-NEXT: cmpunordsd %xmm0, %xmm2<br class=""> ; SSE-NEXT: movapd %xmm2, %xmm3<br class=""> ; SSE-NEXT: andpd %xmm1, %xmm3<br class=""> ; SSE-NEXT: minsd %xmm0, %xmm1<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/fp128-i128.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fp128-i128.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fp128-i128.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/fp128-i128.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/fp128-i128.ll Mon Oct 2 15:01:37 2017<br class="">@@ -227,7 +227,7 @@ define fp128 @TestI128_4(fp128 %x) #0 {<br class=""> ; CHECK: # BB#0: # %entry<br class=""> ; CHECK-NEXT: subq $40, %rsp<br class=""> ; CHECK-NEXT: movaps %xmm0, %xmm1<br class="">-; CHECK-NEXT: movaps %xmm1, {{[0-9]+}}(%rsp)<br class="">+; CHECK-NEXT: movaps %xmm0, {{[0-9]+}}(%rsp)<br class=""> ; CHECK-NEXT: movq {{[0-9]+}}(%rsp), %rax<br class=""> ; CHECK-NEXT: movq %rax, {{[0-9]+}}(%rsp)<br class=""> ; CHECK-NEXT: movq $0, (%rsp)<br class="">@@ -275,7 +275,7 @@ define fp128 @acosl(fp128 %x) #0 {<br class=""> ; CHECK: # BB#0: # %entry<br class=""> ; CHECK-NEXT: subq $40, %rsp<br class=""> ; CHECK-NEXT: movaps %xmm0, %xmm1<br class="">-; CHECK-NEXT: movaps %xmm1, {{[0-9]+}}(%rsp)<br class="">+; CHECK-NEXT: movaps %xmm0, {{[0-9]+}}(%rsp)<br class=""> ; CHECK-NEXT: movq {{[0-9]+}}(%rsp), %rax<br class=""> ; CHECK-NEXT: movq %rax, {{[0-9]+}}(%rsp)<br class=""> ; CHECK-NEXT: movq $0, (%rsp)<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/haddsub-2.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/haddsub-2.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/haddsub-2.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/haddsub-2.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/haddsub-2.ll Mon Oct 2 15:01:37 2017<br class="">@@ -908,16 +908,16 @@ define <4 x float> @not_a_hsub_2(<4 x fl<br class=""> ; SSE-LABEL: not_a_hsub_2:<br class=""> ; SSE: # BB#0:<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm2<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm2[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm0[1],xmm2[1]<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm3<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1,2,3]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1],xmm0[2,3]<br class=""> ; SSE-NEXT: subss %xmm3, %xmm2<br class=""> ; SSE-NEXT: movshdup {{.*#+}} xmm3 = xmm0[1,1,3,3]<br class=""> ; SSE-NEXT: subss %xmm3, %xmm0<br class=""> ; SSE-NEXT: movaps %xmm1, %xmm3<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1,2,3]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1],xmm1[2,3]<br class=""> ; SSE-NEXT: movaps %xmm1, %xmm4<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm4 = xmm4[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm4 = xmm1[1],xmm4[1]<br class=""> ; SSE-NEXT: subss %xmm4, %xmm3<br class=""> ; SSE-NEXT: movshdup {{.*#+}} xmm4 = xmm1[1,1,3,3]<br class=""> ; SSE-NEXT: subss %xmm4, %xmm1<br 
class="">@@ -965,10 +965,10 @@ define <2 x double> @not_a_hsub_3(<2 x d<br class=""> ; SSE-LABEL: not_a_hsub_3:<br class=""> ; SSE: # BB#0:<br class=""> ; SSE-NEXT: movaps %xmm1, %xmm2<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm2[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm1[1],xmm2[1]<br class=""> ; SSE-NEXT: subsd %xmm2, %xmm1<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm2<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm2[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm0[1],xmm2[1]<br class=""> ; SSE-NEXT: subsd %xmm0, %xmm2<br class=""> ; SSE-NEXT: unpcklpd {{.*#+}} xmm2 = xmm2[0],xmm1[0]<br class=""> ; SSE-NEXT: movapd %xmm2, %xmm0<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/haddsub-undef.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/haddsub-undef.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/haddsub-undef.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/haddsub-undef.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/haddsub-undef.ll Mon Oct 2 15:01:37 2017<br class="">@@ -103,7 +103,7 @@ define <2 x double> @test5_undef(<2 x do<br class=""> ; SSE-LABEL: test5_undef:<br class=""> ; SSE: # BB#0:<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm1<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm1[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm0[1],xmm1[1]<br class=""> ; SSE-NEXT: addsd %xmm0, %xmm1<br class=""> ; SSE-NEXT: movapd %xmm1, %xmm0<br class=""> ; SSE-NEXT: retq<br class="">@@ -168,7 +168,7 @@ define <4 x float> @test8_undef(<4 x flo<br class=""> ; SSE-NEXT: movshdup {{.*#+}} xmm1 = xmm0[1,1,3,3]<br class=""> ; SSE-NEXT: addss %xmm0, %xmm1<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm2<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm2[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm0[1],xmm2[1]<br class=""> ; SSE-NEXT: shufps {{.*#+}} xmm0 = xmm0[3,1,2,3]<br class=""> ; SSE-NEXT: addss %xmm2, %xmm0<br class=""> ; SSE-NEXT: movlhps {{.*#+}} xmm1 = xmm1[0],xmm0[0]<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/half.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/half.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/half.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/half.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/half.ll Mon Oct 2 15:01:37 2017<br class="">@@ -386,7 +386,7 @@ define <4 x float> @test_extend32_vec4(<<br class=""> ; CHECK-LIBCALL-NEXT: pushq %rbx<br class=""> ; CHECK-LIBCALL-NEXT: subq $48, %rsp<br class=""> ; CHECK-LIBCALL-NEXT: movq %rdi, %rbx<br class="">-; CHECK-LIBCALL-NEXT: movzwl (%rbx), %edi<br class="">+; CHECK-LIBCALL-NEXT: movzwl (%rdi), %edi<br class=""> ; CHECK-LIBCALL-NEXT: callq __gnu_h2f_ieee<br class=""> ; CHECK-LIBCALL-NEXT: movaps %xmm0, {{[0-9]+}}(%rsp) # 16-byte Spill<br class=""> ; CHECK-LIBCALL-NEXT: movzwl 2(%rbx), %edi<br class="">@@ -472,7 +472,7 @@ define <4 x double> @test_extend64_vec4(<br class=""> ; CHECK-LIBCALL-NEXT: pushq %rbx<br class=""> ; CHECK-LIBCALL-NEXT: subq $16, %rsp<br class=""> ; CHECK-LIBCALL-NEXT: movq %rdi, %rbx<br 
class="">-; CHECK-LIBCALL-NEXT: movzwl 4(%rbx), %edi<br class="">+; CHECK-LIBCALL-NEXT: movzwl 4(%rdi), %edi<br class=""> ; CHECK-LIBCALL-NEXT: callq __gnu_h2f_ieee<br class=""> ; CHECK-LIBCALL-NEXT: movss %xmm0, {{[0-9]+}}(%rsp) # 4-byte Spill<br class=""> ; CHECK-LIBCALL-NEXT: movzwl 6(%rbx), %edi<br class="">@@ -657,7 +657,7 @@ define void @test_trunc32_vec4(<4 x floa<br class=""> ; CHECK-I686-NEXT: movaps %xmm0, {{[0-9]+}}(%esp) # 16-byte Spill<br class=""> ; CHECK-I686-NEXT: movl {{[0-9]+}}(%esp), %ebp<br class=""> ; CHECK-I686-NEXT: movaps %xmm0, %xmm1<br class="">-; CHECK-I686-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1,2,3]<br class="">+; CHECK-I686-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1],xmm0[2,3]<br class=""> ; CHECK-I686-NEXT: movss %xmm1, (%esp)<br class=""> ; CHECK-I686-NEXT: calll __gnu_f2h_ieee<br class=""> ; CHECK-I686-NEXT: movw %ax, %si<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/inline-asm-fpstack.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/inline-asm-fpstack.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/inline-asm-fpstack.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/inline-asm-fpstack.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/inline-asm-fpstack.ll Mon Oct 2 15:01:37 2017<br class="">@@ -162,6 +162,7 @@ define void @testPR4459(x86_fp80 %a) {<br class=""> ; CHECK-NEXT: fstpt (%esp)<br class=""> ; CHECK-NEXT: calll _ceil<br class=""> ; CHECK-NEXT: fld %st(0)<br class="">+; CHECK-NEXT: fxch %st(1)<br class=""> ; CHECK-NEXT: ## InlineAsm Start<br class=""> ; CHECK-NEXT: fistpl %st(0)<br class=""> ; CHECK-NEXT: ## InlineAsm End<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/ipra-local-linkage.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/ipra-local-linkage.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/ipra-local-linkage.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/ipra-local-linkage.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/ipra-local-linkage.ll Mon Oct 2 15:01:37 2017<br class="">@@ -24,7 +24,7 @@ define void @bar(i32 %X) {<br class=""> call void @foo()<br class=""> ; CHECK-LABEL: bar:<br class=""> ; CHECK: callq foo<br class="">- ; CHECK-NEXT: movl %eax, %r15d<br class="">+ ; CHECK-NEXT: movl %edi, %r15d<br class=""> call void asm sideeffect "movl $0, %r12d", "{r15}~{r12}"(i32 %X)<br class=""> ret void<br class=""> }<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/localescape.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/localescape.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/localescape.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/localescape.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/localescape.ll Mon Oct 2 15:01:37 2017<br class="">@@ -27,7 +27,7 @@ define void @print_framealloc_from_fp(i8<br class=""><br class=""> ; 
X64-LABEL: print_framealloc_from_fp:<br class=""> ; X64: movq %rcx, %[[parent_fp:[a-z]+]]<br class="">-; X64: movl .Lalloc_func$frame_escape_0(%[[parent_fp]]), %edx<br class="">+; X64: movl .Lalloc_func$frame_escape_0(%rcx), %edx<br class=""> ; X64: leaq {{.*}}(%rip), %[[str:[a-z]+]]<br class=""> ; X64: movq %[[str]], %rcx<br class=""> ; X64: callq printf<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/machine-cp.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/machine-cp.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/machine-cp.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/machine-cp.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/machine-cp.ll Mon Oct 2 15:01:37 2017<br class="">@@ -8,7 +8,7 @@ define i32 @t1(i32 %a, i32 %b) nounwind<br class=""> ; CHECK: ## BB#0: ## %entry<br class=""> ; CHECK-NEXT: movl %esi, %edx<br class=""> ; CHECK-NEXT: movl %edi, %eax<br class="">-; CHECK-NEXT: testl %edx, %edx<br class="">+; CHECK-NEXT: testl %esi, %esi<br class=""> ; CHECK-NEXT: je LBB0_1<br class=""> ; CHECK-NEXT: .p2align 4, 0x90<br class=""> ; CHECK-NEXT: LBB0_2: ## %while.body<br class="">@@ -59,7 +59,7 @@ define i32 @t3(i64 %a, i64 %b) nounwind<br class=""> ; CHECK: ## BB#0: ## %entry<br class=""> ; CHECK-NEXT: movq %rsi, %rdx<br class=""> ; CHECK-NEXT: movq %rdi, %rax<br class="">-; CHECK-NEXT: testq %rdx, %rdx<br class="">+; CHECK-NEXT: testq %rsi, %rsi<br class=""> ; CHECK-NEXT: je LBB2_1<br class=""> ; CHECK-NEXT: .p2align 4, 0x90<br class=""> ; CHECK-NEXT: LBB2_2: ## %while.body<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/mul-i1024.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/mul-i1024.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/mul-i1024.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/mul-i1024.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/mul-i1024.ll Mon Oct 2 15:01:37 2017<br class="">@@ -159,7 +159,7 @@ define void @test_1024(i1024* %a, i1024*<br class=""> ; X32-NEXT: pushl {{[0-9]+}}(%esp) # 4-byte Folded Reload<br class=""> ; X32-NEXT: pushl %esi<br class=""> ; X32-NEXT: movl %esi, %ebx<br class="">-; X32-NEXT: movl %ebx, {{[0-9]+}}(%esp) # 4-byte Spill<br class="">+; X32-NEXT: movl %esi, {{[0-9]+}}(%esp) # 4-byte Spill<br class=""> ; X32-NEXT: pushl %edi<br class=""> ; X32-NEXT: movl %edi, {{[0-9]+}}(%esp) # 4-byte Spill<br class=""> ; X32-NEXT: pushl {{[0-9]+}}(%esp) # 4-byte Folded Reload<br class="">@@ -752,7 +752,7 @@ define void @test_1024(i1024* %a, i1024*<br class=""> ; X32-NEXT: pushl $0<br class=""> ; X32-NEXT: pushl %edi<br class=""> ; X32-NEXT: movl %ebx, %esi<br class="">-; X32-NEXT: pushl %esi<br class="">+; X32-NEXT: pushl %ebx<br class=""> ; X32-NEXT: pushl $0<br class=""> ; X32-NEXT: pushl $0<br class=""> ; X32-NEXT: pushl {{[0-9]+}}(%esp) # 4-byte Folded Reload<br class="">@@ -898,7 +898,6 @@ define void @test_1024(i1024* %a, i1024*<br class=""> ; X32-NEXT: pushl $0<br class=""> ; X32-NEXT: pushl $0<br class=""> ; X32-NEXT: pushl %edi<br class="">-; X32-NEXT: movl %edi, %ebx<br class=""> ; X32-NEXT: pushl 
Modified: llvm/trunk/test/CodeGen/X86/mul-i512.ll
Modified: llvm/trunk/test/CodeGen/X86/mul128.ll
Modified: llvm/trunk/test/CodeGen/X86/mulvi32.ll
Modified: llvm/trunk/test/CodeGen/X86/pmul.ll
Modified: llvm/trunk/test/CodeGen/X86/powi.ll
Modified: llvm/trunk/test/CodeGen/X86/pr11334.ll
Modified: llvm/trunk/test/CodeGen/X86/pr29112.ll
class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/pr29112.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/pr29112.ll Mon Oct 2 15:01:37 2017<br class="">@@ -49,16 +49,16 @@ define <4 x float> @bar(<4 x float>* %a1<br class=""> ; CHECK-NEXT: vinsertps {{.*#+}} xmm2 = xmm9[0,1],xmm2[3],xmm9[3]<br class=""> ; CHECK-NEXT: vinsertps {{.*#+}} xmm2 = xmm2[0,1,2],xmm12[0]<br class=""> ; CHECK-NEXT: vaddps %xmm3, %xmm2, %xmm2<br class="">-; CHECK-NEXT: vmovaps %xmm15, %xmm1<br class="">-; CHECK-NEXT: vmovaps %xmm1, {{[0-9]+}}(%rsp) # 16-byte Spill<br class="">-; CHECK-NEXT: vaddps %xmm0, %xmm1, %xmm9<br class="">+; CHECK-NEXT: vmovaps %xmm15, {{[0-9]+}}(%rsp) # 16-byte Spill<br class="">+; CHECK-NEXT: vaddps %xmm0, %xmm15, %xmm9<br class=""> ; CHECK-NEXT: vaddps %xmm14, %xmm10, %xmm0<br class="">-; CHECK-NEXT: vaddps %xmm1, %xmm1, %xmm8<br class="">+; CHECK-NEXT: vaddps %xmm15, %xmm15, %xmm8<br class=""> ; CHECK-NEXT: vaddps %xmm11, %xmm3, %xmm3<br class=""> ; CHECK-NEXT: vaddps %xmm0, %xmm3, %xmm0<br class="">-; CHECK-NEXT: vaddps %xmm0, %xmm1, %xmm0<br class="">+; CHECK-NEXT: vaddps %xmm0, %xmm15, %xmm0<br class=""> ; CHECK-NEXT: vmovaps %xmm8, {{[0-9]+}}(%rsp)<br class=""> ; CHECK-NEXT: vmovaps %xmm9, (%rsp)<br class="">+; CHECK-NEXT: vmovaps %xmm15, %xmm1<br class=""> ; CHECK-NEXT: vmovaps {{[0-9]+}}(%rsp), %xmm3 # 16-byte Reload<br class=""> ; CHECK-NEXT: vzeroupper<br class=""> ; CHECK-NEXT: callq foo<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/psubus.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/psubus.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/psubus.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/psubus.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/psubus.ll Mon Oct 2 15:01:37 2017<br class="">@@ -712,7 +712,7 @@ define <16 x i8> @test14(<16 x i8> %x, <<br class=""> ; SSE41-LABEL: test14:<br class=""> ; SSE41: # BB#0: # %vector.ph<br class=""> ; SSE41-NEXT: movdqa %xmm0, %xmm5<br class="">-; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm5[1,1,2,3]<br class="">+; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm0[1,1,2,3]<br class=""> ; SSE41-NEXT: pmovzxbd {{.*#+}} xmm8 = xmm0[0],zero,zero,zero,xmm0[1],zero,zero,zero,xmm0[2],zero,zero,zero,xmm0[3],zero,zero,zero<br class=""> ; SSE41-NEXT: pmovzxbd {{.*#+}} xmm0 = xmm5[0],zero,zero,zero,xmm5[1],zero,zero,zero,xmm5[2],zero,zero,zero,xmm5[3],zero,zero,zero<br class=""> ; SSE41-NEXT: pshufd {{.*#+}} xmm6 = xmm5[2,3,0,1]<br class="">@@ -1320,7 +1320,7 @@ define <32 x i16> @psubus_32i16_max(<32<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm9<br class=""> ; SSE2-NEXT: movdqa %xmm0, %xmm8<br class=""> ; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [32768,32768,32768,32768,32768,32768,32768,32768]<br class="">-; SSE2-NEXT: movdqa %xmm8, %xmm1<br class="">+; SSE2-NEXT: movdqa %xmm0, %xmm1<br class=""> ; SSE2-NEXT: pxor %xmm3, %xmm1<br class=""> ; SSE2-NEXT: movdqa %xmm4, %xmm0<br class=""> ; SSE2-NEXT: pxor %xmm3, %xmm0<br class="">@@ -1368,7 +1368,7 @@ define <32 x i16> @psubus_32i16_max(<32<br class=""> ; SSSE3-NEXT: movdqa %xmm1, %xmm9<br class=""> ; SSSE3-NEXT: movdqa %xmm0, %xmm8<br class=""> ; SSSE3-NEXT: movdqa {{.*#+}} xmm3 = [32768,32768,32768,32768,32768,32768,32768,32768]<br class="">-; 
SSSE3-NEXT: movdqa %xmm8, %xmm1<br class="">+; SSSE3-NEXT: movdqa %xmm0, %xmm1<br class=""> ; SSSE3-NEXT: pxor %xmm3, %xmm1<br class=""> ; SSSE3-NEXT: movdqa %xmm4, %xmm0<br class=""> ; SSSE3-NEXT: pxor %xmm3, %xmm0<br class="">@@ -2042,7 +2042,7 @@ define <16 x i16> @psubus_16i32_max(<16<br class=""> ; SSE2-NEXT: movdqa %xmm9, %xmm11<br class=""> ; SSE2-NEXT: punpckhwd {{.*#+}} xmm11 = xmm11[4],xmm0[4],xmm11[5],xmm0[5],xmm11[6],xmm0[6],xmm11[7],xmm0[7]<br class=""> ; SSE2-NEXT: punpcklwd {{.*#+}} xmm9 = xmm9[0],xmm0[0],xmm9[1],xmm0[1],xmm9[2],xmm0[2],xmm9[3],xmm0[3]<br class="">-; SSE2-NEXT: movdqa %xmm8, %xmm10<br class="">+; SSE2-NEXT: movdqa %xmm1, %xmm10<br class=""> ; SSE2-NEXT: punpckhwd {{.*#+}} xmm10 = xmm10[4],xmm0[4],xmm10[5],xmm0[5],xmm10[6],xmm0[6],xmm10[7],xmm0[7]<br class=""> ; SSE2-NEXT: punpcklwd {{.*#+}} xmm8 = xmm8[0],xmm0[0],xmm8[1],xmm0[1],xmm8[2],xmm0[2],xmm8[3],xmm0[3]<br class=""> ; SSE2-NEXT: movdqa {{.*#+}} xmm6 = [2147483648,2147483648,2147483648,2147483648]<br class="">@@ -2105,7 +2105,7 @@ define <16 x i16> @psubus_16i32_max(<16<br class=""> ; SSSE3-NEXT: movdqa %xmm9, %xmm11<br class=""> ; SSSE3-NEXT: punpckhwd {{.*#+}} xmm11 = xmm11[4],xmm0[4],xmm11[5],xmm0[5],xmm11[6],xmm0[6],xmm11[7],xmm0[7]<br class=""> ; SSSE3-NEXT: punpcklwd {{.*#+}} xmm9 = xmm9[0],xmm0[0],xmm9[1],xmm0[1],xmm9[2],xmm0[2],xmm9[3],xmm0[3]<br class="">-; SSSE3-NEXT: movdqa %xmm8, %xmm10<br class="">+; SSSE3-NEXT: movdqa %xmm1, %xmm10<br class=""> ; SSSE3-NEXT: punpckhwd {{.*#+}} xmm10 = xmm10[4],xmm0[4],xmm10[5],xmm0[5],xmm10[6],xmm0[6],xmm10[7],xmm0[7]<br class=""> ; SSSE3-NEXT: punpcklwd {{.*#+}} xmm8 = xmm8[0],xmm0[0],xmm8[1],xmm0[1],xmm8[2],xmm0[2],xmm8[3],xmm0[3]<br class=""> ; SSSE3-NEXT: movdqa {{.*#+}} xmm6 = [2147483648,2147483648,2147483648,2147483648]<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/shrink-wrap-chkstk.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/shrink-wrap-chkstk.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/shrink-wrap-chkstk.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/shrink-wrap-chkstk.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/shrink-wrap-chkstk.ll Mon Oct 2 15:01:37 2017<br class="">@@ -61,7 +61,7 @@ false:<br class=""><br class=""> ; CHECK-LABEL: @use_eax_before_prologue@8: # @use_eax_before_prologue<br class=""> ; CHECK: movl %ecx, %eax<br class="">-; CHECK: cmpl %edx, %eax<br class="">+; CHECK: cmpl %edx, %ecx<br class=""> ; CHECK: jge LBB1_2<br class=""> ; CHECK: pushl %eax<br class=""> ; CHECK: movl $4092, %eax<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/sqrt-fastmath.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sqrt-fastmath.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sqrt-fastmath.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/sqrt-fastmath.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/sqrt-fastmath.ll Mon Oct 2 15:01:37 2017<br class="">@@ -132,7 +132,7 @@ define float @f32_estimate(float %x) #1<br class=""> ; SSE: # BB#0:<br class=""> ; SSE-NEXT: rsqrtss %xmm0, %xmm1<br class=""> ; 
SSE-NEXT: movaps %xmm1, %xmm2<br class="">-; SSE-NEXT: mulss %xmm2, %xmm2<br class="">+; SSE-NEXT: mulss %xmm1, %xmm2<br class=""> ; SSE-NEXT: mulss %xmm0, %xmm2<br class=""> ; SSE-NEXT: addss {{.*}}(%rip), %xmm2<br class=""> ; SSE-NEXT: mulss {{.*}}(%rip), %xmm1<br class="">@@ -178,7 +178,7 @@ define <4 x float> @v4f32_estimate(<4 x<br class=""> ; SSE: # BB#0:<br class=""> ; SSE-NEXT: rsqrtps %xmm0, %xmm1<br class=""> ; SSE-NEXT: movaps %xmm1, %xmm2<br class="">-; SSE-NEXT: mulps %xmm2, %xmm2<br class="">+; SSE-NEXT: mulps %xmm1, %xmm2<br class=""> ; SSE-NEXT: mulps %xmm0, %xmm2<br class=""> ; SSE-NEXT: addps {{.*}}(%rip), %xmm2<br class=""> ; SSE-NEXT: mulps {{.*}}(%rip), %xmm1<br class="">@@ -228,7 +228,7 @@ define <8 x float> @v8f32_estimate(<8 x<br class=""> ; SSE-NEXT: rsqrtps %xmm0, %xmm3<br class=""> ; SSE-NEXT: movaps {{.*#+}} xmm4 = [-5.000000e-01,-5.000000e-01,-5.000000e-01,-5.000000e-01]<br class=""> ; SSE-NEXT: movaps %xmm3, %xmm2<br class="">-; SSE-NEXT: mulps %xmm2, %xmm2<br class="">+; SSE-NEXT: mulps %xmm3, %xmm2<br class=""> ; SSE-NEXT: mulps %xmm0, %xmm2<br class=""> ; SSE-NEXT: movaps {{.*#+}} xmm0 = [-3.000000e+00,-3.000000e+00,-3.000000e+00,-3.000000e+00]<br class=""> ; SSE-NEXT: addps %xmm0, %xmm2<br class="">@@ -236,7 +236,7 @@ define <8 x float> @v8f32_estimate(<8 x<br class=""> ; SSE-NEXT: mulps %xmm3, %xmm2<br class=""> ; SSE-NEXT: rsqrtps %xmm1, %xmm5<br class=""> ; SSE-NEXT: movaps %xmm5, %xmm3<br class="">-; SSE-NEXT: mulps %xmm3, %xmm3<br class="">+; SSE-NEXT: mulps %xmm5, %xmm3<br class=""> ; SSE-NEXT: mulps %xmm1, %xmm3<br class=""> ; SSE-NEXT: addps %xmm0, %xmm3<br class=""> ; SSE-NEXT: mulps %xmm4, %xmm3<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/sse1.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse1.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse1.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/sse1.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/sse1.ll Mon Oct 2 15:01:37 2017<br class="">@@ -16,7 +16,7 @@ define <2 x float> @test4(<2 x float> %A<br class=""> ; X32-LABEL: test4:<br class=""> ; X32: # BB#0: # %entry<br class=""> ; X32-NEXT: movaps %xmm0, %xmm2<br class="">-; X32-NEXT: shufps {{.*#+}} xmm2 = xmm2[1,1,2,3]<br class="">+; X32-NEXT: shufps {{.*#+}} xmm2 = xmm2[1,1],xmm0[2,3]<br class=""> ; X32-NEXT: addss %xmm1, %xmm0<br class=""> ; X32-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1,2,3]<br class=""> ; X32-NEXT: subss %xmm1, %xmm2<br class="">@@ -26,7 +26,7 @@ define <2 x float> @test4(<2 x float> %A<br class=""> ; X64-LABEL: test4:<br class=""> ; X64: # BB#0: # %entry<br class=""> ; X64-NEXT: movaps %xmm0, %xmm2<br class="">-; X64-NEXT: shufps {{.*#+}} xmm2 = xmm2[1,1,2,3]<br class="">+; X64-NEXT: shufps {{.*#+}} xmm2 = xmm2[1,1],xmm0[2,3]<br class=""> ; X64-NEXT: addss %xmm1, %xmm0<br class=""> ; X64-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1,2,3]<br class=""> ; X64-NEXT: subss %xmm1, %xmm2<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/sse3-avx-addsub-2.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse3-avx-addsub-2.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse3-avx-addsub-2.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br 
class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/sse3-avx-addsub-2.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/sse3-avx-addsub-2.ll Mon Oct 2 15:01:37 2017<br class="">@@ -406,9 +406,9 @@ define <4 x float> @test16(<4 x float> %<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm2<br class=""> ; SSE-NEXT: subss %xmm0, %xmm2<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm3<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm3[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm0[1],xmm3[1]<br class=""> ; SSE-NEXT: movaps %xmm1, %xmm4<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm4 = xmm4[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm4 = xmm1[1],xmm4[1]<br class=""> ; SSE-NEXT: subss %xmm4, %xmm3<br class=""> ; SSE-NEXT: movshdup {{.*#+}} xmm4 = xmm0[1,1,3,3]<br class=""> ; SSE-NEXT: addss %xmm0, %xmm4<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/statepoint-live-in.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/statepoint-live-in.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/statepoint-live-in.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/statepoint-live-in.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/statepoint-live-in.ll Mon Oct 2 15:01:37 2017<br class="">@@ -126,7 +126,7 @@ define void @test6(i32 %a) gc "statepoin<br class=""> ; CHECK-NEXT: Lcfi11:<br class=""> ; CHECK-NEXT: .cfi_offset %rbx, -16<br class=""> ; CHECK-NEXT: movl %edi, %ebx<br class="">-; CHECK-NEXT: movl %ebx, {{[0-9]+}}(%rsp)<br class="">+; CHECK-NEXT: movl %edi, {{[0-9]+}}(%rsp)<br class=""> ; CHECK-NEXT: callq _baz<br class=""> ; CHECK-NEXT: Ltmp6:<br class=""> ; CHECK-NEXT: callq _bar<br class="">@@ -153,13 +153,13 @@ entry:<br class=""> ; CHECK: .byte<span class="Apple-tab-span" style="white-space:pre"> </span>1<br class=""> ; CHECK-NEXT: .byte 0<br class=""> ; CHECK-NEXT: .short 4<br class="">-; CHECK-NEXT: .short<span class="Apple-tab-span" style="white-space:pre"> </span>6<br class="">+; CHECK-NEXT: .short<span class="Apple-tab-span" style="white-space:pre"> </span>5<br class=""> ; CHECK-NEXT: .short 0<br class=""> ; CHECK-NEXT: .long<span class="Apple-tab-span" style="white-space:pre"> </span>0<br class=""> ; CHECK: .byte<span class="Apple-tab-span" style="white-space:pre"> </span>1<br class=""> ; CHECK-NEXT: .byte 0<br class=""> ; CHECK-NEXT: .short 4<br class="">-; CHECK-NEXT: .short<span class="Apple-tab-span" style="white-space:pre"> </span>3<br class="">+; CHECK-NEXT: .short<span class="Apple-tab-span" style="white-space:pre"> </span>4<br class=""> ; CHECK-NEXT: .short 0<br class=""> ; CHECK-NEXT: .long<span class="Apple-tab-span" style="white-space:pre"> </span>0<br class=""> ; CHECK: Ltmp2-_test2<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/statepoint-stack-usage.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/statepoint-stack-usage.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/statepoint-stack-usage.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- 
llvm/trunk/test/CodeGen/X86/statepoint-stack-usage.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/statepoint-stack-usage.ll Mon Oct 2 15:01:37 2017<br class="">@@ -61,9 +61,9 @@ define i32 @back_to_back_deopt(i32 %a, i<br class=""> gc "statepoint-example" {<br class=""> ; CHECK-LABEL: back_to_back_deopt<br class=""> ; The exact stores don't matter, but there need to be three stack slots created<br class="">-; CHECK-DAG: movl<span class="Apple-tab-span" style="white-space:pre"> </span>%ebx, 12(%rsp)<br class="">-; CHECK-DAG: movl<span class="Apple-tab-span" style="white-space:pre"> </span>%ebp, 8(%rsp)<br class="">-; CHECK-DAG: movl<span class="Apple-tab-span" style="white-space:pre"> </span>%r14d, 4(%rsp)<br class="">+; CHECK-DAG: movl<span class="Apple-tab-span" style="white-space:pre"> </span>%edi, 12(%rsp)<br class="">+; CHECK-DAG: movl<span class="Apple-tab-span" style="white-space:pre"> </span>%esi, 8(%rsp)<br class="">+; CHECK-DAG: movl<span class="Apple-tab-span" style="white-space:pre"> </span>%edx, 4(%rsp)<br class=""> ; CHECK: callq<br class=""> ; CHECK-DAG: movl<span class="Apple-tab-span" style="white-space:pre"> </span>%ebx, 12(%rsp)<br class=""> ; CHECK-DAG: movl<span class="Apple-tab-span" style="white-space:pre"> </span>%ebp, 8(%rsp)<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vec_fp_to_int.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_fp_to_int.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_fp_to_int.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vec_fp_to_int.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vec_fp_to_int.ll Mon Oct 2 15:01:37 2017<br class="">@@ -1018,12 +1018,12 @@ define <4 x i64> @fptosi_4f32_to_4i64(<8<br class=""> ; SSE-NEXT: cvttss2si %xmm0, %rax<br class=""> ; SSE-NEXT: movq %rax, %xmm2<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm1<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1,2,3]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1],xmm0[2,3]<br class=""> ; SSE-NEXT: cvttss2si %xmm1, %rax<br class=""> ; SSE-NEXT: movq %rax, %xmm1<br class=""> ; SSE-NEXT: punpcklqdq {{.*#+}} xmm2 = xmm2[0],xmm1[0]<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm1<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[3,1,2,3]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[3,1],xmm0[2,3]<br class=""> ; SSE-NEXT: cvttss2si %xmm1, %rax<br class=""> ; SSE-NEXT: movq %rax, %xmm3<br class=""> ; SSE-NEXT: movhlps {{.*#+}} xmm0 = xmm0[1,1]<br class="">@@ -1126,12 +1126,12 @@ define <4 x i64> @fptosi_8f32_to_4i64(<8<br class=""> ; SSE-NEXT: cvttss2si %xmm0, %rax<br class=""> ; SSE-NEXT: movq %rax, %xmm2<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm1<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1,2,3]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1],xmm0[2,3]<br class=""> ; SSE-NEXT: cvttss2si %xmm1, %rax<br class=""> ; SSE-NEXT: movq %rax, %xmm1<br class=""> ; SSE-NEXT: punpcklqdq {{.*#+}} xmm2 = xmm2[0],xmm1[0]<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm1<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[3,1,2,3]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[3,1],xmm0[2,3]<br class=""> ; SSE-NEXT: cvttss2si %xmm1, %rax<br class=""> ; SSE-NEXT: movq %rax, %xmm3<br class=""> ; SSE-NEXT: movhlps {{.*#+}} xmm0 = xmm0[1,1]<br class="">@@ 
-1316,11 +1316,11 @@ define <4 x i32> @fptoui_4f32_to_4i32(<4<br class=""> ; SSE-LABEL: fptoui_4f32_to_4i32:<br class=""> ; SSE: # BB#0:<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm1<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[3,1,2,3]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[3,1],xmm0[2,3]<br class=""> ; SSE-NEXT: cvttss2si %xmm1, %rax<br class=""> ; SSE-NEXT: movd %eax, %xmm1<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm2<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm2[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm0[1],xmm2[1]<br class=""> ; SSE-NEXT: cvttss2si %xmm2, %rax<br class=""> ; SSE-NEXT: movd %eax, %xmm2<br class=""> ; SSE-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm1[0],xmm2[1],xmm1[1]<br class="">@@ -1560,7 +1560,7 @@ define <8 x i32> @fptoui_8f32_to_8i32(<8<br class=""> ; SSE-NEXT: cvttss2si %xmm0, %rax<br class=""> ; SSE-NEXT: movd %eax, %xmm0<br class=""> ; SSE-NEXT: movaps %xmm2, %xmm3<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm3[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm2[1],xmm3[1]<br class=""> ; SSE-NEXT: cvttss2si %xmm3, %rax<br class=""> ; SSE-NEXT: movd %eax, %xmm3<br class=""> ; SSE-NEXT: punpckldq {{.*#+}} xmm3 = xmm3[0],xmm0[0],xmm3[1],xmm0[1]<br class="">@@ -1572,11 +1572,11 @@ define <8 x i32> @fptoui_8f32_to_8i32(<8<br class=""> ; SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]<br class=""> ; SSE-NEXT: punpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm3[0]<br class=""> ; SSE-NEXT: movaps %xmm1, %xmm2<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm2 = xmm2[3,1,2,3]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm2 = xmm2[3,1],xmm1[2,3]<br class=""> ; SSE-NEXT: cvttss2si %xmm2, %rax<br class=""> ; SSE-NEXT: movd %eax, %xmm2<br class=""> ; SSE-NEXT: movaps %xmm1, %xmm3<br class="">-; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm3[1,1]<br class="">+; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm1[1],xmm3[1]<br class=""> ; SSE-NEXT: cvttss2si %xmm3, %rax<br class=""> ; SSE-NEXT: movd %eax, %xmm3<br class=""> ; SSE-NEXT: punpckldq {{.*#+}} xmm3 = xmm3[0],xmm2[0],xmm3[1],xmm2[1]<br class="">@@ -1687,7 +1687,7 @@ define <4 x i64> @fptoui_4f32_to_4i64(<8<br class=""> ; SSE-NEXT: cmovaeq %rcx, %rdx<br class=""> ; SSE-NEXT: movq %rdx, %xmm2<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm3<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[1,1,2,3]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[1,1],xmm0[2,3]<br class=""> ; SSE-NEXT: movaps %xmm3, %xmm4<br class=""> ; SSE-NEXT: subss %xmm1, %xmm4<br class=""> ; SSE-NEXT: cvttss2si %xmm4, %rcx<br class="">@@ -1698,7 +1698,7 @@ define <4 x i64> @fptoui_4f32_to_4i64(<8<br class=""> ; SSE-NEXT: movq %rdx, %xmm3<br class=""> ; SSE-NEXT: punpcklqdq {{.*#+}} xmm2 = xmm2[0],xmm3[0]<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm3<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1,2,3]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1],xmm0[2,3]<br class=""> ; SSE-NEXT: movaps %xmm3, %xmm4<br class=""> ; SSE-NEXT: subss %xmm1, %xmm4<br class=""> ; SSE-NEXT: cvttss2si %xmm4, %rcx<br class="">@@ -1865,7 +1865,7 @@ define <4 x i64> @fptoui_8f32_to_4i64(<8<br class=""> ; SSE-NEXT: cmovaeq %rcx, %rdx<br class=""> ; SSE-NEXT: movq %rdx, %xmm2<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm3<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[1,1,2,3]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[1,1],xmm0[2,3]<br class=""> ; SSE-NEXT: movaps %xmm3, %xmm4<br class=""> ; SSE-NEXT: subss %xmm1, %xmm4<br class=""> ; SSE-NEXT: cvttss2si %xmm4, %rcx<br class="">@@ -1876,7 +1876,7 @@ 
define <4 x i64> @fptoui_8f32_to_4i64(<8<br class=""> ; SSE-NEXT: movq %rdx, %xmm3<br class=""> ; SSE-NEXT: punpcklqdq {{.*#+}} xmm2 = xmm2[0],xmm3[0]<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm3<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1,2,3]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1],xmm0[2,3]<br class=""> ; SSE-NEXT: movaps %xmm3, %xmm4<br class=""> ; SSE-NEXT: subss %xmm1, %xmm4<br class=""> ; SSE-NEXT: cvttss2si %xmm4, %rcx<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vec_int_to_fp.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_int_to_fp.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_int_to_fp.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vec_int_to_fp.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vec_int_to_fp.ll Mon Oct 2 15:01:37 2017<br class="">@@ -1591,7 +1591,7 @@ define <4 x float> @uitofp_2i64_to_4f32(<br class=""> ; SSE-LABEL: uitofp_2i64_to_4f32:<br class=""> ; SSE: # BB#0:<br class=""> ; SSE-NEXT: movdqa %xmm0, %xmm1<br class="">-; SSE-NEXT: movq %xmm1, %rax<br class="">+; SSE-NEXT: movq %xmm0, %rax<br class=""> ; SSE-NEXT: testq %rax, %rax<br class=""> ; SSE-NEXT: js .LBB39_1<br class=""> ; SSE-NEXT: # BB#2:<br class="">@@ -1819,7 +1819,7 @@ define <4 x float> @uitofp_4i64_to_4f32_<br class=""> ; SSE-LABEL: uitofp_4i64_to_4f32_undef:<br class=""> ; SSE: # BB#0:<br class=""> ; SSE-NEXT: movdqa %xmm0, %xmm1<br class="">-; SSE-NEXT: movq %xmm1, %rax<br class="">+; SSE-NEXT: movq %xmm0, %rax<br class=""> ; SSE-NEXT: testq %rax, %rax<br class=""> ; SSE-NEXT: js .LBB41_1<br class=""> ; SSE-NEXT: # BB#2:<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vec_minmax_sint.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_minmax_sint.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_minmax_sint.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vec_minmax_sint.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vec_minmax_sint.ll Mon Oct 2 15:01:37 2017<br class="">@@ -437,7 +437,7 @@ define <2 x i64> @max_ge_v2i64(<2 x i64><br class=""> ; SSE42: # BB#0:<br class=""> ; SSE42-NEXT: movdqa %xmm0, %xmm2<br class=""> ; SSE42-NEXT: movdqa %xmm1, %xmm3<br class="">-; SSE42-NEXT: pcmpgtq %xmm2, %xmm3<br class="">+; SSE42-NEXT: pcmpgtq %xmm0, %xmm3<br class=""> ; SSE42-NEXT: pcmpeqd %xmm0, %xmm0<br class=""> ; SSE42-NEXT: pxor %xmm3, %xmm0<br class=""> ; SSE42-NEXT: blendvpd %xmm0, %xmm2, %xmm1<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vec_shift4.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_shift4.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_shift4.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vec_shift4.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vec_shift4.ll Mon Oct 2 15:01:37 2017<br class="">@@ -35,7 +35,7 @@ 
define <2 x i64> @shl2(<16 x i8> %r, <16<br class=""> ; X32: # BB#0: # %entry<br class=""> ; X32-NEXT: movdqa %xmm0, %xmm2<br class=""> ; X32-NEXT: psllw $5, %xmm1<br class="">-; X32-NEXT: movdqa %xmm2, %xmm3<br class="">+; X32-NEXT: movdqa %xmm0, %xmm3<br class=""> ; X32-NEXT: psllw $4, %xmm3<br class=""> ; X32-NEXT: pand {{\.LCPI.*}}, %xmm3<br class=""> ; X32-NEXT: movdqa %xmm1, %xmm0<br class="">@@ -47,7 +47,7 @@ define <2 x i64> @shl2(<16 x i8> %r, <16<br class=""> ; X32-NEXT: movdqa %xmm1, %xmm0<br class=""> ; X32-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br class=""> ; X32-NEXT: movdqa %xmm2, %xmm3<br class="">-; X32-NEXT: paddb %xmm3, %xmm3<br class="">+; X32-NEXT: paddb %xmm2, %xmm3<br class=""> ; X32-NEXT: paddb %xmm1, %xmm1<br class=""> ; X32-NEXT: movdqa %xmm1, %xmm0<br class=""> ; X32-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br class="">@@ -58,7 +58,7 @@ define <2 x i64> @shl2(<16 x i8> %r, <16<br class=""> ; X64: # BB#0: # %entry<br class=""> ; X64-NEXT: movdqa %xmm0, %xmm2<br class=""> ; X64-NEXT: psllw $5, %xmm1<br class="">-; X64-NEXT: movdqa %xmm2, %xmm3<br class="">+; X64-NEXT: movdqa %xmm0, %xmm3<br class=""> ; X64-NEXT: psllw $4, %xmm3<br class=""> ; X64-NEXT: pand {{.*}}(%rip), %xmm3<br class=""> ; X64-NEXT: movdqa %xmm1, %xmm0<br class="">@@ -70,7 +70,7 @@ define <2 x i64> @shl2(<16 x i8> %r, <16<br class=""> ; X64-NEXT: movdqa %xmm1, %xmm0<br class=""> ; X64-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br class=""> ; X64-NEXT: movdqa %xmm2, %xmm3<br class="">-; X64-NEXT: paddb %xmm3, %xmm3<br class="">+; X64-NEXT: paddb %xmm2, %xmm3<br class=""> ; X64-NEXT: paddb %xmm1, %xmm1<br class=""> ; X64-NEXT: movdqa %xmm1, %xmm0<br class=""> ; X64-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vector-blend.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-blend.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-blend.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vector-blend.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vector-blend.ll Mon Oct 2 15:01:37 2017<br class="">@@ -986,7 +986,7 @@ define <4 x i32> @blend_neg_logic_v4i32_<br class=""> ; SSE41: # BB#0: # %entry<br class=""> ; SSE41-NEXT: movdqa %xmm0, %xmm2<br class=""> ; SSE41-NEXT: pxor %xmm3, %xmm3<br class="">-; SSE41-NEXT: psubd %xmm2, %xmm3<br class="">+; SSE41-NEXT: psubd %xmm0, %xmm3<br class=""> ; SSE41-NEXT: movaps %xmm1, %xmm0<br class=""> ; SSE41-NEXT: blendvps %xmm0, %xmm2, %xmm3<br class=""> ; SSE41-NEXT: movaps %xmm3, %xmm0<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vector-idiv-sdiv-128.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-idiv-sdiv-128.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-idiv-sdiv-128.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vector-idiv-sdiv-128.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vector-idiv-sdiv-128.ll Mon Oct 2 15:01:37 2017<br class="">@@ -177,13 +177,13 @@ define <16 x i8> @test_div7_16i8(<16 x i<br class=""> ; SSE2-LABEL: test_div7_16i8:<br class=""> ; SSE2: # BB#0:<br 
class=""> ; SSE2-NEXT: movdqa %xmm0, %xmm2<br class="">-; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br class="">+; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8],xmm0[8],xmm2[9],xmm0[9],xmm2[10],xmm0[10],xmm2[11],xmm0[11],xmm2[12],xmm0[12],xmm2[13],xmm0[13],xmm2[14],xmm0[14],xmm2[15],xmm0[15]<br class=""> ; SSE2-NEXT: psraw $8, %xmm2<br class=""> ; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [65427,65427,65427,65427,65427,65427,65427,65427]<br class=""> ; SSE2-NEXT: pmullw %xmm3, %xmm2<br class=""> ; SSE2-NEXT: psrlw $8, %xmm2<br class=""> ; SSE2-NEXT: movdqa %xmm0, %xmm1<br class="">-; SSE2-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br class="">+; SSE2-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0],xmm0[0],xmm1[1],xmm0[1],xmm1[2],xmm0[2],xmm1[3],xmm0[3],xmm1[4],xmm0[4],xmm1[5],xmm0[5],xmm1[6],xmm0[6],xmm1[7],xmm0[7]<br class=""> ; SSE2-NEXT: psraw $8, %xmm1<br class=""> ; SSE2-NEXT: pmullw %xmm3, %xmm1<br class=""> ; SSE2-NEXT: psrlw $8, %xmm1<br class="">@@ -501,13 +501,13 @@ define <16 x i8> @test_rem7_16i8(<16 x i<br class=""> ; SSE2-LABEL: test_rem7_16i8:<br class=""> ; SSE2: # BB#0:<br class=""> ; SSE2-NEXT: movdqa %xmm0, %xmm2<br class="">-; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br class="">+; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8],xmm0[8],xmm2[9],xmm0[9],xmm2[10],xmm0[10],xmm2[11],xmm0[11],xmm2[12],xmm0[12],xmm2[13],xmm0[13],xmm2[14],xmm0[14],xmm2[15],xmm0[15]<br class=""> ; SSE2-NEXT: psraw $8, %xmm2<br class=""> ; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [65427,65427,65427,65427,65427,65427,65427,65427]<br class=""> ; SSE2-NEXT: pmullw %xmm3, %xmm2<br class=""> ; SSE2-NEXT: psrlw $8, %xmm2<br class=""> ; SSE2-NEXT: movdqa %xmm0, %xmm1<br class="">-; SSE2-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br class="">+; SSE2-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0],xmm0[0],xmm1[1],xmm0[1],xmm1[2],xmm0[2],xmm1[3],xmm0[3],xmm1[4],xmm0[4],xmm1[5],xmm0[5],xmm1[6],xmm0[6],xmm1[7],xmm0[7]<br class=""> ; SSE2-NEXT: psraw $8, %xmm1<br class=""> ; SSE2-NEXT: pmullw %xmm3, %xmm1<br class=""> ; SSE2-NEXT: psrlw $8, %xmm1<br class="">@@ -523,7 +523,7 @@ define <16 x i8> @test_rem7_16i8(<16 x i<br class=""> ; SSE2-NEXT: pand {{.*}}(%rip), %xmm1<br class=""> ; SSE2-NEXT: paddb %xmm2, %xmm1<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm2<br class="">-; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br class="">+; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8],xmm1[8],xmm2[9],xmm1[9],xmm2[10],xmm1[10],xmm2[11],xmm1[11],xmm2[12],xmm1[12],xmm2[13],xmm1[13],xmm2[14],xmm1[14],xmm2[15],xmm1[15]<br class=""> ; SSE2-NEXT: psraw $8, %xmm2<br class=""> ; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [7,7,7,7,7,7,7,7]<br class=""> ; SSE2-NEXT: pmullw %xmm3, %xmm2<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vector-idiv-udiv-128.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-idiv-udiv-128.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-idiv-udiv-128.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vector-idiv-udiv-128.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vector-idiv-udiv-128.ll Mon Oct 2 15:01:37 2017<br class="">@@ -497,7 +497,7 @@ define <16 x i8> @test_rem7_16i8(<16 
x i<br class=""> ; SSE2-NEXT: psrlw $2, %xmm1<br class=""> ; SSE2-NEXT: pand {{.*}}(%rip), %xmm1<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm2<br class="">-; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br class="">+; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8],xmm1[8],xmm2[9],xmm1[9],xmm2[10],xmm1[10],xmm2[11],xmm1[11],xmm2[12],xmm1[12],xmm2[13],xmm1[13],xmm2[14],xmm1[14],xmm2[15],xmm1[15]<br class=""> ; SSE2-NEXT: psraw $8, %xmm2<br class=""> ; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [7,7,7,7,7,7,7,7]<br class=""> ; SSE2-NEXT: pmullw %xmm3, %xmm2<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vector-mul.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-mul.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-mul.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vector-mul.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vector-mul.ll Mon Oct 2 15:01:37 2017<br class="">@@ -178,7 +178,7 @@ define <16 x i8> @mul_v16i8_1_2_4_8_1_2_<br class=""> ; X86-LABEL: mul_v16i8_1_2_4_8_1_2_4_8_1_2_4_8_1_2_4_8:<br class=""> ; X86: # BB#0:<br class=""> ; X86-NEXT: movdqa %xmm0, %xmm1<br class="">-; X86-NEXT: movdqa %xmm1, %xmm2<br class="">+; X86-NEXT: movdqa %xmm0, %xmm2<br class=""> ; X86-NEXT: psllw $4, %xmm2<br class=""> ; X86-NEXT: pand {{\.LCPI.*}}, %xmm2<br class=""> ; X86-NEXT: movdqa {{.*#+}} xmm0 = [8192,24640,8192,24640,8192,24640,8192,24640]<br class="">@@ -189,7 +189,7 @@ define <16 x i8> @mul_v16i8_1_2_4_8_1_2_<br class=""> ; X86-NEXT: paddb %xmm0, %xmm0<br class=""> ; X86-NEXT: pblendvb %xmm0, %xmm2, %xmm1<br class=""> ; X86-NEXT: movdqa %xmm1, %xmm2<br class="">-; X86-NEXT: paddb %xmm2, %xmm2<br class="">+; X86-NEXT: paddb %xmm1, %xmm2<br class=""> ; X86-NEXT: paddb %xmm0, %xmm0<br class=""> ; X86-NEXT: pblendvb %xmm0, %xmm2, %xmm1<br class=""> ; X86-NEXT: movdqa %xmm1, %xmm0<br class="">@@ -198,7 +198,7 @@ define <16 x i8> @mul_v16i8_1_2_4_8_1_2_<br class=""> ; X64-LABEL: mul_v16i8_1_2_4_8_1_2_4_8_1_2_4_8_1_2_4_8:<br class=""> ; X64: # BB#0:<br class=""> ; X64-NEXT: movdqa %xmm0, %xmm1<br class="">-; X64-NEXT: movdqa %xmm1, %xmm2<br class="">+; X64-NEXT: movdqa %xmm0, %xmm2<br class=""> ; X64-NEXT: psllw $4, %xmm2<br class=""> ; X64-NEXT: pand {{.*}}(%rip), %xmm2<br class=""> ; X64-NEXT: movdqa {{.*#+}} xmm0 = [8192,24640,8192,24640,8192,24640,8192,24640]<br class="">@@ -209,7 +209,7 @@ define <16 x i8> @mul_v16i8_1_2_4_8_1_2_<br class=""> ; X64-NEXT: paddb %xmm0, %xmm0<br class=""> ; X64-NEXT: pblendvb %xmm0, %xmm2, %xmm1<br class=""> ; X64-NEXT: movdqa %xmm1, %xmm2<br class="">-; X64-NEXT: paddb %xmm2, %xmm2<br class="">+; X64-NEXT: paddb %xmm1, %xmm2<br class=""> ; X64-NEXT: paddb %xmm0, %xmm0<br class=""> ; X64-NEXT: pblendvb %xmm0, %xmm2, %xmm1<br class=""> ; X64-NEXT: movdqa %xmm1, %xmm0<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vector-rotate-128.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-rotate-128.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-rotate-128.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- 
llvm/trunk/test/CodeGen/X86/vector-rotate-128.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vector-rotate-128.ll Mon Oct 2 15:01:37 2017<br class="">@@ -361,7 +361,7 @@ define <8 x i16> @var_rotate_v8i16(<8 x<br class=""> ; SSE41-NEXT: psllw $4, %xmm1<br class=""> ; SSE41-NEXT: por %xmm0, %xmm1<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm4<br class="">-; SSE41-NEXT: paddw %xmm4, %xmm4<br class="">+; SSE41-NEXT: paddw %xmm1, %xmm4<br class=""> ; SSE41-NEXT: movdqa %xmm3, %xmm6<br class=""> ; SSE41-NEXT: psllw $8, %xmm6<br class=""> ; SSE41-NEXT: movdqa %xmm3, %xmm5<br class="">@@ -386,7 +386,7 @@ define <8 x i16> @var_rotate_v8i16(<8 x<br class=""> ; SSE41-NEXT: psllw $4, %xmm2<br class=""> ; SSE41-NEXT: por %xmm0, %xmm2<br class=""> ; SSE41-NEXT: movdqa %xmm2, %xmm1<br class="">-; SSE41-NEXT: paddw %xmm1, %xmm1<br class="">+; SSE41-NEXT: paddw %xmm2, %xmm1<br class=""> ; SSE41-NEXT: movdqa %xmm3, %xmm4<br class=""> ; SSE41-NEXT: psrlw $8, %xmm4<br class=""> ; SSE41-NEXT: movdqa %xmm2, %xmm0<br class="">@@ -631,10 +631,10 @@ define <16 x i8> @var_rotate_v16i8(<16 x<br class=""> ; SSE41-NEXT: movdqa {{.*#+}} xmm2 = [8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8]<br class=""> ; SSE41-NEXT: psubb %xmm3, %xmm2<br class=""> ; SSE41-NEXT: psllw $5, %xmm3<br class="">-; SSE41-NEXT: movdqa %xmm1, %xmm5<br class="">+; SSE41-NEXT: movdqa %xmm0, %xmm5<br class=""> ; SSE41-NEXT: psllw $4, %xmm5<br class=""> ; SSE41-NEXT: pand {{.*}}(%rip), %xmm5<br class="">-; SSE41-NEXT: movdqa %xmm1, %xmm4<br class="">+; SSE41-NEXT: movdqa %xmm0, %xmm4<br class=""> ; SSE41-NEXT: movdqa %xmm3, %xmm0<br class=""> ; SSE41-NEXT: pblendvb %xmm0, %xmm5, %xmm4<br class=""> ; SSE41-NEXT: movdqa %xmm4, %xmm5<br class="">@@ -644,13 +644,13 @@ define <16 x i8> @var_rotate_v16i8(<16 x<br class=""> ; SSE41-NEXT: movdqa %xmm3, %xmm0<br class=""> ; SSE41-NEXT: pblendvb %xmm0, %xmm5, %xmm4<br class=""> ; SSE41-NEXT: movdqa %xmm4, %xmm5<br class="">-; SSE41-NEXT: paddb %xmm5, %xmm5<br class="">+; SSE41-NEXT: paddb %xmm4, %xmm5<br class=""> ; SSE41-NEXT: paddb %xmm3, %xmm3<br class=""> ; SSE41-NEXT: movdqa %xmm3, %xmm0<br class=""> ; SSE41-NEXT: pblendvb %xmm0, %xmm5, %xmm4<br class=""> ; SSE41-NEXT: psllw $5, %xmm2<br class=""> ; SSE41-NEXT: movdqa %xmm2, %xmm3<br class="">-; SSE41-NEXT: paddb %xmm3, %xmm3<br class="">+; SSE41-NEXT: paddb %xmm2, %xmm3<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm5<br class=""> ; SSE41-NEXT: psrlw $4, %xmm5<br class=""> ; SSE41-NEXT: pand {{.*}}(%rip), %xmm5<br class="">@@ -1191,7 +1191,7 @@ define <16 x i8> @constant_rotate_v16i8(<br class=""> ; SSE41-LABEL: constant_rotate_v16i8:<br class=""> ; SSE41: # BB#0:<br class=""> ; SSE41-NEXT: movdqa %xmm0, %xmm1<br class="">-; SSE41-NEXT: movdqa %xmm1, %xmm3<br class="">+; SSE41-NEXT: movdqa %xmm0, %xmm3<br class=""> ; SSE41-NEXT: psllw $4, %xmm3<br class=""> ; SSE41-NEXT: pand {{.*}}(%rip), %xmm3<br class=""> ; SSE41-NEXT: movdqa {{.*#+}} xmm0 = [8192,24640,41088,57536,57600,41152,24704,8256]<br class="">@@ -1203,7 +1203,7 @@ define <16 x i8> @constant_rotate_v16i8(<br class=""> ; SSE41-NEXT: paddb %xmm0, %xmm0<br class=""> ; SSE41-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br class=""> ; SSE41-NEXT: movdqa %xmm2, %xmm3<br class="">-; SSE41-NEXT: paddb %xmm3, %xmm3<br class="">+; SSE41-NEXT: paddb %xmm2, %xmm3<br class=""> ; SSE41-NEXT: paddb %xmm0, %xmm0<br class=""> ; SSE41-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm3<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vector-sext.ll<br class="">URL: <a 
href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-sext.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-sext.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vector-sext.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vector-sext.ll Mon Oct 2 15:01:37 2017<br class="">@@ -243,7 +243,7 @@ define <8 x i32> @sext_16i8_to_8i32(<16<br class=""> ; SSSE3-LABEL: sext_16i8_to_8i32:<br class=""> ; SSSE3: # BB#0: # %entry<br class=""> ; SSSE3-NEXT: movdqa %xmm0, %xmm1<br class="">-; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3],xmm0[4],xmm1[4],xmm0[5],xmm1[5],xmm0[6],xmm1[6],xmm0[7],xmm1[7]<br class="">+; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br class=""> ; SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br class=""> ; SSSE3-NEXT: psrad $24, %xmm0<br class=""> ; SSSE3-NEXT: pshufb {{.*#+}} xmm1 = xmm1[u,u,u,4,u,u,u,5,u,u,u,6,u,u,u,7]<br class="">@@ -312,7 +312,7 @@ define <16 x i32> @sext_16i8_to_16i32(<1<br class=""> ; SSSE3-LABEL: sext_16i8_to_16i32:<br class=""> ; SSSE3: # BB#0: # %entry<br class=""> ; SSSE3-NEXT: movdqa %xmm0, %xmm3<br class="">-; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1],xmm0[2],xmm3[2],xmm0[3],xmm3[3],xmm0[4],xmm3[4],xmm0[5],xmm3[5],xmm0[6],xmm3[6],xmm0[7],xmm3[7]<br class="">+; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br class=""> ; SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br class=""> ; SSSE3-NEXT: psrad $24, %xmm0<br class=""> ; SSSE3-NEXT: punpckhbw {{.*#+}} xmm1 = xmm1[8],xmm3[8],xmm1[9],xmm3[9],xmm1[10],xmm3[10],xmm1[11],xmm3[11],xmm1[12],xmm3[12],xmm1[13],xmm3[13],xmm1[14],xmm3[14],xmm1[15],xmm3[15]<br class="">@@ -443,7 +443,7 @@ define <4 x i64> @sext_16i8_to_4i64(<16<br class=""> ; SSSE3-LABEL: sext_16i8_to_4i64:<br class=""> ; SSSE3: # BB#0: # %entry<br class=""> ; SSSE3-NEXT: movdqa %xmm0, %xmm1<br class="">-; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3],xmm0[4],xmm1[4],xmm0[5],xmm1[5],xmm0[6],xmm1[6],xmm0[7],xmm1[7]<br class="">+; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br class=""> ; SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br class=""> ; SSSE3-NEXT: movdqa %xmm0, %xmm2<br class=""> ; SSSE3-NEXT: psrad $31, %xmm2<br class="">@@ -499,7 +499,7 @@ define <8 x i64> @sext_16i8_to_8i64(<16<br class=""> ; SSE2-LABEL: sext_16i8_to_8i64:<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm0, %xmm1<br class="">-; SSE2-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3],xmm0[4],xmm1[4],xmm0[5],xmm1[5],xmm0[6],xmm1[6],xmm0[7],xmm1[7]<br class="">+; SSE2-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br class=""> ; SSE2-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br class=""> ; SSE2-NEXT: movdqa %xmm0, %xmm2<br class=""> ; SSE2-NEXT: psrad $31, %xmm2<br class="">@@ -1112,7 +1112,7 @@ define <8 x i64> @sext_8i32_to_8i64(<8 x<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm2<br class=""> ; SSE2-NEXT: movdqa %xmm0, %xmm3<br class=""> ; SSE2-NEXT: psrad $31, %xmm3<br class="">-; SSE2-NEXT: movdqa %xmm2, %xmm4<br class="">+; SSE2-NEXT: movdqa %xmm1, 
%xmm4<br class=""> ; SSE2-NEXT: psrad $31, %xmm4<br class=""> ; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,0,1]<br class=""> ; SSE2-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1]<br class="">@@ -1131,7 +1131,7 @@ define <8 x i64> @sext_8i32_to_8i64(<8 x<br class=""> ; SSSE3-NEXT: movdqa %xmm1, %xmm2<br class=""> ; SSSE3-NEXT: movdqa %xmm0, %xmm3<br class=""> ; SSSE3-NEXT: psrad $31, %xmm3<br class="">-; SSSE3-NEXT: movdqa %xmm2, %xmm4<br class="">+; SSSE3-NEXT: movdqa %xmm1, %xmm4<br class=""> ; SSSE3-NEXT: psrad $31, %xmm4<br class=""> ; SSSE3-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,0,1]<br class=""> ; SSSE3-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1]<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll Mon Oct 2 15:01:37 2017<br class="">@@ -275,7 +275,7 @@ define <8 x i16> @var_shift_v8i16(<8 x i<br class=""> ; SSE41-NEXT: psllw $4, %xmm1<br class=""> ; SSE41-NEXT: por %xmm0, %xmm1<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm3<br class="">-; SSE41-NEXT: paddw %xmm3, %xmm3<br class="">+; SSE41-NEXT: paddw %xmm1, %xmm3<br class=""> ; SSE41-NEXT: movdqa %xmm2, %xmm4<br class=""> ; SSE41-NEXT: psraw $8, %xmm4<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm0<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll Mon Oct 2 15:01:37 2017<br class="">@@ -245,7 +245,7 @@ define <8 x i16> @var_shift_v8i16(<8 x i<br class=""> ; SSE41-NEXT: psllw $4, %xmm1<br class=""> ; SSE41-NEXT: por %xmm0, %xmm1<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm3<br class="">-; SSE41-NEXT: paddw %xmm3, %xmm3<br class="">+; SSE41-NEXT: paddw %xmm1, %xmm3<br class=""> ; SSE41-NEXT: movdqa %xmm2, %xmm4<br class=""> ; SSE41-NEXT: psrlw $8, %xmm4<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm0<br class="">@@ -410,7 +410,7 @@ define <16 x i8> @var_shift_v16i8(<16 x<br class=""> ; SSE41: # BB#0:<br class=""> ; SSE41-NEXT: movdqa %xmm0, %xmm2<br class=""> ; SSE41-NEXT: psllw $5, %xmm1<br class="">-; SSE41-NEXT: movdqa %xmm2, %xmm3<br class="">+; SSE41-NEXT: movdqa %xmm0, %xmm3<br class=""> ; SSE41-NEXT: psrlw $4, %xmm3<br class=""> ; SSE41-NEXT: pand {{.*}}(%rip), %xmm3<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm0<br class="">@@ -684,7 +684,7 @@ define <16 x i8> @splatvar_shift_v16i8(<<br class=""> ; SSE41-NEXT: pshufb %xmm0, %xmm1<br class=""> ; SSE41-NEXT: psllw $5, %xmm1<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm3<br class="">-; SSE41-NEXT: paddb %xmm3, 
%xmm3<br class="">+; SSE41-NEXT: paddb %xmm1, %xmm3<br class=""> ; SSE41-NEXT: movdqa %xmm2, %xmm4<br class=""> ; SSE41-NEXT: psrlw $4, %xmm4<br class=""> ; SSE41-NEXT: pand {{.*}}(%rip), %xmm4<br class="">@@ -1111,7 +1111,7 @@ define <16 x i8> @constant_shift_v16i8(<<br class=""> ; SSE41-LABEL: constant_shift_v16i8:<br class=""> ; SSE41: # BB#0:<br class=""> ; SSE41-NEXT: movdqa %xmm0, %xmm1<br class="">-; SSE41-NEXT: movdqa %xmm1, %xmm2<br class="">+; SSE41-NEXT: movdqa %xmm0, %xmm2<br class=""> ; SSE41-NEXT: psrlw $4, %xmm2<br class=""> ; SSE41-NEXT: pand {{.*}}(%rip), %xmm2<br class=""> ; SSE41-NEXT: movdqa {{.*#+}} xmm0 = [8192,24640,41088,57536,49376,32928,16480,32]<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll Mon Oct 2 15:01:37 2017<br class="">@@ -202,7 +202,7 @@ define <8 x i16> @var_shift_v8i16(<8 x i<br class=""> ; SSE41-NEXT: psllw $4, %xmm1<br class=""> ; SSE41-NEXT: por %xmm0, %xmm1<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm3<br class="">-; SSE41-NEXT: paddw %xmm3, %xmm3<br class="">+; SSE41-NEXT: paddw %xmm1, %xmm3<br class=""> ; SSE41-NEXT: movdqa %xmm2, %xmm4<br class=""> ; SSE41-NEXT: psllw $8, %xmm4<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm0<br class="">@@ -364,7 +364,7 @@ define <16 x i8> @var_shift_v16i8(<16 x<br class=""> ; SSE41: # BB#0:<br class=""> ; SSE41-NEXT: movdqa %xmm0, %xmm2<br class=""> ; SSE41-NEXT: psllw $5, %xmm1<br class="">-; SSE41-NEXT: movdqa %xmm2, %xmm3<br class="">+; SSE41-NEXT: movdqa %xmm0, %xmm3<br class=""> ; SSE41-NEXT: psllw $4, %xmm3<br class=""> ; SSE41-NEXT: pand {{.*}}(%rip), %xmm3<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm0<br class="">@@ -376,7 +376,7 @@ define <16 x i8> @var_shift_v16i8(<16 x<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm0<br class=""> ; SSE41-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br class=""> ; SSE41-NEXT: movdqa %xmm2, %xmm3<br class="">-; SSE41-NEXT: paddb %xmm3, %xmm3<br class="">+; SSE41-NEXT: paddb %xmm2, %xmm3<br class=""> ; SSE41-NEXT: paddb %xmm1, %xmm1<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm0<br class=""> ; SSE41-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br class="">@@ -632,7 +632,7 @@ define <16 x i8> @splatvar_shift_v16i8(<<br class=""> ; SSE41-NEXT: pshufb %xmm0, %xmm1<br class=""> ; SSE41-NEXT: psllw $5, %xmm1<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm3<br class="">-; SSE41-NEXT: paddb %xmm3, %xmm3<br class="">+; SSE41-NEXT: paddb %xmm1, %xmm3<br class=""> ; SSE41-NEXT: movdqa %xmm2, %xmm4<br class=""> ; SSE41-NEXT: psllw $4, %xmm4<br class=""> ; SSE41-NEXT: pand {{.*}}(%rip), %xmm4<br class="">@@ -644,7 +644,7 @@ define <16 x i8> @splatvar_shift_v16i8(<<br class=""> ; SSE41-NEXT: movdqa %xmm3, %xmm0<br class=""> ; SSE41-NEXT: pblendvb %xmm0, %xmm1, %xmm2<br class=""> ; SSE41-NEXT: movdqa %xmm2, %xmm1<br class="">-; SSE41-NEXT: paddb %xmm1, %xmm1<br class="">+; SSE41-NEXT: paddb %xmm2, %xmm1<br class=""> ; SSE41-NEXT: paddb %xmm3, %xmm3<br class=""> ; SSE41-NEXT: movdqa %xmm3, %xmm0<br class=""> ; SSE41-NEXT: pblendvb %xmm0, 
%xmm1, %xmm2<br class="">@@ -965,7 +965,7 @@ define <16 x i8> @constant_shift_v16i8(<<br class=""> ; SSE41-LABEL: constant_shift_v16i8:<br class=""> ; SSE41: # BB#0:<br class=""> ; SSE41-NEXT: movdqa %xmm0, %xmm1<br class="">-; SSE41-NEXT: movdqa %xmm1, %xmm2<br class="">+; SSE41-NEXT: movdqa %xmm0, %xmm2<br class=""> ; SSE41-NEXT: psllw $4, %xmm2<br class=""> ; SSE41-NEXT: pand {{.*}}(%rip), %xmm2<br class=""> ; SSE41-NEXT: movdqa {{.*#+}} xmm0 = [8192,24640,41088,57536,49376,32928,16480,32]<br class="">@@ -976,7 +976,7 @@ define <16 x i8> @constant_shift_v16i8(<<br class=""> ; SSE41-NEXT: paddb %xmm0, %xmm0<br class=""> ; SSE41-NEXT: pblendvb %xmm0, %xmm2, %xmm1<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm2<br class="">-; SSE41-NEXT: paddb %xmm2, %xmm2<br class="">+; SSE41-NEXT: paddb %xmm1, %xmm2<br class=""> ; SSE41-NEXT: paddb %xmm0, %xmm0<br class=""> ; SSE41-NEXT: pblendvb %xmm0, %xmm2, %xmm1<br class=""> ; SSE41-NEXT: movdqa %xmm1, %xmm0<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vector-shuffle-combining.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shuffle-combining.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shuffle-combining.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vector-shuffle-combining.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vector-shuffle-combining.ll Mon Oct 2 15:01:37 2017<br class="">@@ -2783,7 +2783,7 @@ define <4 x float> @PR22377(<4 x float><br class=""> ; SSE-LABEL: PR22377:<br class=""> ; SSE: # BB#0: # %entry<br class=""> ; SSE-NEXT: movaps %xmm0, %xmm1<br class="">-; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,3,1,3]<br class="">+; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,3],xmm0[1,3]<br class=""> ; SSE-NEXT: shufps {{.*#+}} xmm0 = xmm0[0,2,0,2]<br class=""> ; SSE-NEXT: addps %xmm0, %xmm1<br class=""> ; SSE-NEXT: unpcklps {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll Mon Oct 2 15:01:37 2017<br class="">@@ -4964,7 +4964,7 @@ define <4 x i32> @mul_add_const_v4i64_v4<br class=""> ; SSE-LABEL: mul_add_const_v4i64_v4i32:<br class=""> ; SSE: # BB#0:<br class=""> ; SSE-NEXT: movdqa %xmm0, %xmm2<br class="">-; SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,1,1,3]<br class="">+; SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm0[0,1,1,3]<br class=""> ; SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm2[2,1,3,3]<br class=""> ; SSE-NEXT: pshufd {{.*#+}} xmm3 = xmm1[0,1,1,3]<br class=""> ; SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[2,1,3,3]<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vector-zext.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-zext.ll?rev=314729&r1=314728&r2=314729&view=diff" 
class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-zext.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vector-zext.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vector-zext.ll Mon Oct 2 15:01:37 2017<br class="">@@ -246,7 +246,7 @@ define <16 x i32> @zext_16i8_to_16i32(<1<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm0, %xmm3<br class=""> ; SSE2-NEXT: pxor %xmm4, %xmm4<br class="">-; SSE2-NEXT: movdqa %xmm3, %xmm1<br class="">+; SSE2-NEXT: movdqa %xmm0, %xmm1<br class=""> ; SSE2-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1],xmm1[2],xmm4[2],xmm1[3],xmm4[3],xmm1[4],xmm4[4],xmm1[5],xmm4[5],xmm1[6],xmm4[6],xmm1[7],xmm4[7]<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm0<br class=""> ; SSE2-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm4[0],xmm0[1],xmm4[1],xmm0[2],xmm4[2],xmm0[3],xmm4[3]<br class="">@@ -261,7 +261,7 @@ define <16 x i32> @zext_16i8_to_16i32(<1<br class=""> ; SSSE3: # BB#0: # %entry<br class=""> ; SSSE3-NEXT: movdqa %xmm0, %xmm3<br class=""> ; SSSE3-NEXT: pxor %xmm4, %xmm4<br class="">-; SSSE3-NEXT: movdqa %xmm3, %xmm1<br class="">+; SSSE3-NEXT: movdqa %xmm0, %xmm1<br class=""> ; SSSE3-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1],xmm1[2],xmm4[2],xmm1[3],xmm4[3],xmm1[4],xmm4[4],xmm1[5],xmm4[5],xmm1[6],xmm4[6],xmm1[7],xmm4[7]<br class=""> ; SSSE3-NEXT: movdqa %xmm1, %xmm0<br class=""> ; SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm4[0],xmm0[1],xmm4[1],xmm0[2],xmm4[2],xmm0[3],xmm4[3]<br class="">@@ -399,7 +399,7 @@ define <8 x i64> @zext_16i8_to_8i64(<16<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm0, %xmm1<br class=""> ; SSE2-NEXT: pxor %xmm4, %xmm4<br class="">-; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm1[1,1,2,3]<br class="">+; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm0[1,1,2,3]<br class=""> ; SSE2-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1],xmm1[2],xmm4[2],xmm1[3],xmm4[3],xmm1[4],xmm4[4],xmm1[5],xmm4[5],xmm1[6],xmm4[6],xmm1[7],xmm4[7]<br class=""> ; SSE2-NEXT: punpcklwd {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1],xmm1[2],xmm4[2],xmm1[3],xmm4[3]<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm0<br class="">@@ -700,7 +700,7 @@ define <8 x i64> @zext_8i16_to_8i64(<8 x<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm0, %xmm3<br class=""> ; SSE2-NEXT: pxor %xmm4, %xmm4<br class="">-; SSE2-NEXT: movdqa %xmm3, %xmm1<br class="">+; SSE2-NEXT: movdqa %xmm0, %xmm1<br class=""> ; SSE2-NEXT: punpcklwd {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1],xmm1[2],xmm4[2],xmm1[3],xmm4[3]<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm0<br class=""> ; SSE2-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm4[0],xmm0[1],xmm4[1]<br class="">@@ -715,7 +715,7 @@ define <8 x i64> @zext_8i16_to_8i64(<8 x<br class=""> ; SSSE3: # BB#0: # %entry<br class=""> ; SSSE3-NEXT: movdqa %xmm0, %xmm3<br class=""> ; SSSE3-NEXT: pxor %xmm4, %xmm4<br class="">-; SSSE3-NEXT: movdqa %xmm3, %xmm1<br class="">+; SSSE3-NEXT: movdqa %xmm0, %xmm1<br class=""> ; SSSE3-NEXT: punpcklwd {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1],xmm1[2],xmm4[2],xmm1[3],xmm4[3]<br class=""> ; SSSE3-NEXT: movdqa %xmm1, %xmm0<br class=""> ; SSSE3-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm4[0],xmm0[1],xmm4[1]<br class="">@@ -1582,7 +1582,7 @@ define <8 x i32> @shuf_zext_8i16_to_8i32<br class=""> ; SSE41: # BB#0: # %entry<br class=""> ; SSE41-NEXT: 
movdqa %xmm0, %xmm1<br class=""> ; SSE41-NEXT: pxor %xmm2, %xmm2<br class="">-; SSE41-NEXT: pmovzxwd {{.*#+}} xmm0 = xmm1[0],zero,xmm1[1],zero,xmm1[2],zero,xmm1[3],zero<br class="">+; SSE41-NEXT: pmovzxwd {{.*#+}} xmm0 = xmm0[0],zero,xmm0[1],zero,xmm0[2],zero,xmm0[3],zero<br class=""> ; SSE41-NEXT: punpckhwd {{.*#+}} xmm1 = xmm1[4],xmm2[4],xmm1[5],xmm2[5],xmm1[6],xmm2[6],xmm1[7],xmm2[7]<br class=""> ; SSE41-NEXT: retq<br class=""> ;<br class="">@@ -1630,7 +1630,7 @@ define <4 x i64> @shuf_zext_4i32_to_4i64<br class=""> ; SSE41: # BB#0: # %entry<br class=""> ; SSE41-NEXT: movdqa %xmm0, %xmm1<br class=""> ; SSE41-NEXT: pxor %xmm2, %xmm2<br class="">-; SSE41-NEXT: pmovzxdq {{.*#+}} xmm0 = xmm1[0],zero,xmm1[1],zero<br class="">+; SSE41-NEXT: pmovzxdq {{.*#+}} xmm0 = xmm0[0],zero,xmm0[1],zero<br class=""> ; SSE41-NEXT: punpckhdq {{.*#+}} xmm1 = xmm1[2],xmm2[2],xmm1[3],xmm2[3]<br class=""> ; SSE41-NEXT: retq<br class=""> ;<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/vselect-minmax.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vselect-minmax.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vselect-minmax.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/vselect-minmax.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/vselect-minmax.ll Mon Oct 2 15:01:37 2017<br class="">@@ -3344,12 +3344,12 @@ define <64 x i8> @test98(<64 x i8> %a, <<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm3, %xmm8<br class=""> ; SSE2-NEXT: movdqa %xmm2, %xmm9<br class="">-; SSE2-NEXT: movdqa %xmm8, %xmm12<br class="">+; SSE2-NEXT: movdqa %xmm3, %xmm12<br class=""> ; SSE2-NEXT: pcmpgtb %xmm7, %xmm12<br class=""> ; SSE2-NEXT: pcmpeqd %xmm13, %xmm13<br class=""> ; SSE2-NEXT: movdqa %xmm12, %xmm3<br class=""> ; SSE2-NEXT: pxor %xmm13, %xmm3<br class="">-; SSE2-NEXT: movdqa %xmm9, %xmm14<br class="">+; SSE2-NEXT: movdqa %xmm2, %xmm14<br class=""> ; SSE2-NEXT: pcmpgtb %xmm6, %xmm14<br class=""> ; SSE2-NEXT: movdqa %xmm14, %xmm2<br class=""> ; SSE2-NEXT: pxor %xmm13, %xmm2<br class="">@@ -3487,12 +3487,12 @@ define <64 x i8> @test100(<64 x i8> %a,<br class=""> ; SSE2-NEXT: movdqa %xmm2, %xmm9<br class=""> ; SSE2-NEXT: movdqa %xmm0, %xmm10<br class=""> ; SSE2-NEXT: movdqa %xmm7, %xmm12<br class="">-; SSE2-NEXT: pcmpgtb %xmm8, %xmm12<br class="">+; SSE2-NEXT: pcmpgtb %xmm3, %xmm12<br class=""> ; SSE2-NEXT: pcmpeqd %xmm0, %xmm0<br class=""> ; SSE2-NEXT: movdqa %xmm12, %xmm3<br class=""> ; SSE2-NEXT: pxor %xmm0, %xmm3<br class=""> ; SSE2-NEXT: movdqa %xmm6, %xmm13<br class="">-; SSE2-NEXT: pcmpgtb %xmm9, %xmm13<br class="">+; SSE2-NEXT: pcmpgtb %xmm2, %xmm13<br class=""> ; SSE2-NEXT: movdqa %xmm13, %xmm2<br class=""> ; SSE2-NEXT: pxor %xmm0, %xmm2<br class=""> ; SSE2-NEXT: movdqa %xmm5, %xmm14<br class="">@@ -4225,12 +4225,12 @@ define <16 x i32> @test114(<16 x i32> %a<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm3, %xmm8<br class=""> ; SSE2-NEXT: movdqa %xmm2, %xmm9<br class="">-; SSE2-NEXT: movdqa %xmm8, %xmm12<br class="">+; SSE2-NEXT: movdqa %xmm3, %xmm12<br class=""> ; SSE2-NEXT: pcmpgtd %xmm7, %xmm12<br class=""> ; SSE2-NEXT: pcmpeqd %xmm13, %xmm13<br class=""> ; SSE2-NEXT: movdqa %xmm12, %xmm3<br class=""> ; SSE2-NEXT: pxor %xmm13, %xmm3<br class="">-; SSE2-NEXT: movdqa %xmm9, %xmm14<br class="">+; 
SSE2-NEXT: movdqa %xmm2, %xmm14<br class=""> ; SSE2-NEXT: pcmpgtd %xmm6, %xmm14<br class=""> ; SSE2-NEXT: movdqa %xmm14, %xmm2<br class=""> ; SSE2-NEXT: pxor %xmm13, %xmm2<br class="">@@ -4368,12 +4368,12 @@ define <16 x i32> @test116(<16 x i32> %a<br class=""> ; SSE2-NEXT: movdqa %xmm2, %xmm9<br class=""> ; SSE2-NEXT: movdqa %xmm0, %xmm10<br class=""> ; SSE2-NEXT: movdqa %xmm7, %xmm12<br class="">-; SSE2-NEXT: pcmpgtd %xmm8, %xmm12<br class="">+; SSE2-NEXT: pcmpgtd %xmm3, %xmm12<br class=""> ; SSE2-NEXT: pcmpeqd %xmm0, %xmm0<br class=""> ; SSE2-NEXT: movdqa %xmm12, %xmm3<br class=""> ; SSE2-NEXT: pxor %xmm0, %xmm3<br class=""> ; SSE2-NEXT: movdqa %xmm6, %xmm13<br class="">-; SSE2-NEXT: pcmpgtd %xmm9, %xmm13<br class="">+; SSE2-NEXT: pcmpgtd %xmm2, %xmm13<br class=""> ; SSE2-NEXT: movdqa %xmm13, %xmm2<br class=""> ; SSE2-NEXT: pxor %xmm0, %xmm2<br class=""> ; SSE2-NEXT: movdqa %xmm5, %xmm14<br class="">@@ -4890,7 +4890,7 @@ define <8 x i64> @test122(<8 x i64> %a,<br class=""> ; SSE2-LABEL: test122:<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm7, %xmm8<br class="">-; SSE2-NEXT: movdqa %xmm8, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class="">+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class=""> ; SSE2-NEXT: movdqa %xmm3, %xmm7<br class=""> ; SSE2-NEXT: movdqa %xmm2, %xmm3<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm2<br class="">@@ -5164,7 +5164,7 @@ define <8 x i64> @test124(<8 x i64> %a,<br class=""> ; SSE2-LABEL: test124:<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm7, %xmm11<br class="">-; SSE2-NEXT: movdqa %xmm11, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class="">+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class=""> ; SSE2-NEXT: movdqa %xmm3, %xmm7<br class=""> ; SSE2-NEXT: movdqa %xmm2, %xmm3<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm2<br class="">@@ -5467,7 +5467,7 @@ define <8 x i64> @test126(<8 x i64> %a,<br class=""> ; SSE2-LABEL: test126:<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm7, %xmm8<br class="">-; SSE2-NEXT: movdqa %xmm8, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class="">+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class=""> ; SSE2-NEXT: movdqa %xmm3, %xmm7<br class=""> ; SSE2-NEXT: movdqa %xmm2, %xmm3<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm2<br class="">@@ -5795,7 +5795,7 @@ define <8 x i64> @test128(<8 x i64> %a,<br class=""> ; SSE2-LABEL: test128:<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm7, %xmm11<br class="">-; SSE2-NEXT: movdqa %xmm11, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class="">+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class=""> ; SSE2-NEXT: movdqa %xmm3, %xmm7<br class=""> ; SSE2-NEXT: movdqa %xmm2, %xmm3<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm2<br class="">@@ -6047,7 +6047,7 @@ define <64 x i8> @test130(<64 x i8> %a,<br class=""> ; SSE2-NEXT: pcmpeqd %xmm13, %xmm13<br class=""> ; SSE2-NEXT: movdqa %xmm12, %xmm9<br class=""> ; SSE2-NEXT: pxor %xmm13, %xmm9<br class="">-; SSE2-NEXT: movdqa %xmm8, %xmm14<br class="">+; SSE2-NEXT: movdqa %xmm2, %xmm14<br class=""> ; SSE2-NEXT: pcmpgtb %xmm6, %xmm14<br class=""> ; SSE2-NEXT: movdqa %xmm14, %xmm2<br class=""> ; SSE2-NEXT: pxor %xmm13, %xmm2<br class="">@@ -6190,7 +6190,7 @@ define <64 x i8> @test132(<64 x i8> %a,<br class=""> ; SSE2-NEXT: movdqa %xmm12, %xmm9<br class=""> ; SSE2-NEXT: pxor %xmm0, %xmm9<br class=""> ; SSE2-NEXT: movdqa %xmm6, %xmm13<br class="">-; SSE2-NEXT: pcmpgtb %xmm8, %xmm13<br class="">+; 
SSE2-NEXT: pcmpgtb %xmm2, %xmm13<br class=""> ; SSE2-NEXT: movdqa %xmm13, %xmm2<br class=""> ; SSE2-NEXT: pxor %xmm0, %xmm2<br class=""> ; SSE2-NEXT: movdqa %xmm5, %xmm14<br class="">@@ -6941,7 +6941,7 @@ define <16 x i32> @test146(<16 x i32> %a<br class=""> ; SSE2-NEXT: pcmpeqd %xmm13, %xmm13<br class=""> ; SSE2-NEXT: movdqa %xmm12, %xmm9<br class=""> ; SSE2-NEXT: pxor %xmm13, %xmm9<br class="">-; SSE2-NEXT: movdqa %xmm8, %xmm14<br class="">+; SSE2-NEXT: movdqa %xmm2, %xmm14<br class=""> ; SSE2-NEXT: pcmpgtd %xmm6, %xmm14<br class=""> ; SSE2-NEXT: movdqa %xmm14, %xmm2<br class=""> ; SSE2-NEXT: pxor %xmm13, %xmm2<br class="">@@ -7084,7 +7084,7 @@ define <16 x i32> @test148(<16 x i32> %a<br class=""> ; SSE2-NEXT: movdqa %xmm12, %xmm9<br class=""> ; SSE2-NEXT: pxor %xmm0, %xmm9<br class=""> ; SSE2-NEXT: movdqa %xmm6, %xmm13<br class="">-; SSE2-NEXT: pcmpgtd %xmm8, %xmm13<br class="">+; SSE2-NEXT: pcmpgtd %xmm2, %xmm13<br class=""> ; SSE2-NEXT: movdqa %xmm13, %xmm2<br class=""> ; SSE2-NEXT: pxor %xmm0, %xmm2<br class=""> ; SSE2-NEXT: movdqa %xmm5, %xmm14<br class="">@@ -7610,7 +7610,7 @@ define <8 x i64> @test154(<8 x i64> %a,<br class=""> ; SSE2-LABEL: test154:<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm7, %xmm8<br class="">-; SSE2-NEXT: movdqa %xmm8, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class="">+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class=""> ; SSE2-NEXT: movdqa %xmm3, %xmm7<br class=""> ; SSE2-NEXT: movdqa %xmm2, %xmm3<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm2<br class="">@@ -7882,7 +7882,7 @@ define <8 x i64> @test156(<8 x i64> %a,<br class=""> ; SSE2-LABEL: test156:<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm7, %xmm11<br class="">-; SSE2-NEXT: movdqa %xmm11, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class="">+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class=""> ; SSE2-NEXT: movdqa %xmm3, %xmm7<br class=""> ; SSE2-NEXT: movdqa %xmm2, %xmm3<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm2<br class="">@@ -8183,7 +8183,7 @@ define <8 x i64> @test158(<8 x i64> %a,<br class=""> ; SSE2-LABEL: test158:<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm7, %xmm8<br class="">-; SSE2-NEXT: movdqa %xmm8, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class="">+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class=""> ; SSE2-NEXT: movdqa %xmm3, %xmm7<br class=""> ; SSE2-NEXT: movdqa %xmm2, %xmm3<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm2<br class="">@@ -8509,7 +8509,7 @@ define <8 x i64> @test160(<8 x i64> %a,<br class=""> ; SSE2-LABEL: test160:<br class=""> ; SSE2: # BB#0: # %entry<br class=""> ; SSE2-NEXT: movdqa %xmm7, %xmm11<br class="">-; SSE2-NEXT: movdqa %xmm11, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class="">+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br class=""> ; SSE2-NEXT: movdqa %xmm3, %xmm7<br class=""> ; SSE2-NEXT: movdqa %xmm2, %xmm3<br class=""> ; SSE2-NEXT: movdqa %xmm1, %xmm2<br class="">@@ -10289,7 +10289,7 @@ define <2 x i64> @test180(<2 x i64> %a,<br class=""> ; SSE4: # BB#0: # %entry<br class=""> ; SSE4-NEXT: movdqa %xmm0, %xmm2<br class=""> ; SSE4-NEXT: movdqa %xmm1, %xmm3<br class="">-; SSE4-NEXT: pcmpgtq %xmm2, %xmm3<br class="">+; SSE4-NEXT: pcmpgtq %xmm0, %xmm3<br class=""> ; SSE4-NEXT: pcmpeqd %xmm0, %xmm0<br class=""> ; SSE4-NEXT: pxor %xmm3, %xmm0<br class=""> ; SSE4-NEXT: blendvpd %xmm0, %xmm2, %xmm1<br class="">@@ -10768,7 +10768,7 @@ define <2 x i64> @test188(<2 x i64> %a,<br class=""> ; SSE4: # BB#0: # %entry<br 
class=""> ; SSE4-NEXT: movdqa %xmm0, %xmm2<br class=""> ; SSE4-NEXT: movdqa %xmm1, %xmm3<br class="">-; SSE4-NEXT: pcmpgtq %xmm2, %xmm3<br class="">+; SSE4-NEXT: pcmpgtq %xmm0, %xmm3<br class=""> ; SSE4-NEXT: pcmpeqd %xmm0, %xmm0<br class=""> ; SSE4-NEXT: pxor %xmm3, %xmm0<br class=""> ; SSE4-NEXT: blendvpd %xmm0, %xmm1, %xmm2<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/widen_conv-3.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/widen_conv-3.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/widen_conv-3.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/widen_conv-3.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/widen_conv-3.ll Mon Oct 2 15:01:37 2017<br class="">@@ -74,7 +74,7 @@ define void @convert_v3i8_to_v3f32(<3 x<br class=""> ; X86-SSE2-NEXT: cvtdq2ps %xmm0, %xmm0<br class=""> ; X86-SSE2-NEXT: movss %xmm0, (%eax)<br class=""> ; X86-SSE2-NEXT: movaps %xmm0, %xmm1<br class="">-; X86-SSE2-NEXT: movhlps {{.*#+}} xmm1 = xmm1[1,1]<br class="">+; X86-SSE2-NEXT: movhlps {{.*#+}} xmm1 = xmm0[1],xmm1[1]<br class=""> ; X86-SSE2-NEXT: movss %xmm1, 8(%eax)<br class=""> ; X86-SSE2-NEXT: shufps {{.*#+}} xmm0 = xmm0[1,1,2,3]<br class=""> ; X86-SSE2-NEXT: movss %xmm0, 4(%eax)<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/widen_conv-4.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/widen_conv-4.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/widen_conv-4.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/widen_conv-4.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/widen_conv-4.ll Mon Oct 2 15:01:37 2017<br class="">@@ -19,7 +19,7 @@ define void @convert_v7i16_v7f32(<7 x fl<br class=""> ; X86-SSE2-NEXT: movups %xmm0, (%eax)<br class=""> ; X86-SSE2-NEXT: movss %xmm2, 16(%eax)<br class=""> ; X86-SSE2-NEXT: movaps %xmm2, %xmm0<br class="">-; X86-SSE2-NEXT: movhlps {{.*#+}} xmm0 = xmm0[1,1]<br class="">+; X86-SSE2-NEXT: movhlps {{.*#+}} xmm0 = xmm2[1],xmm0[1]<br class=""> ; X86-SSE2-NEXT: movss %xmm0, 24(%eax)<br class=""> ; X86-SSE2-NEXT: shufps {{.*#+}} xmm2 = xmm2[1,1,2,3]<br class=""> ; X86-SSE2-NEXT: movss %xmm2, 20(%eax)<br class="">@@ -100,7 +100,7 @@ define void @convert_v3i8_to_v3f32(<3 x<br class=""> ; X86-SSE2-NEXT: cvtdq2ps %xmm0, %xmm0<br class=""> ; X86-SSE2-NEXT: movss %xmm0, (%eax)<br class=""> ; X86-SSE2-NEXT: movaps %xmm0, %xmm1<br class="">-; X86-SSE2-NEXT: movhlps {{.*#+}} xmm1 = xmm1[1,1]<br class="">+; X86-SSE2-NEXT: movhlps {{.*#+}} xmm1 = xmm0[1],xmm1[1]<br class=""> ; X86-SSE2-NEXT: movss %xmm1, 8(%eax)<br class=""> ; X86-SSE2-NEXT: shufps {{.*#+}} xmm0 = xmm0[1,1,2,3]<br class=""> ; X86-SSE2-NEXT: movss %xmm0, 4(%eax)<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/x86-interleaved-access.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/x86-interleaved-access.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/x86-interleaved-access.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br 
class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/x86-interleaved-access.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/x86-interleaved-access.ll Mon Oct 2 15:01:37 2017<br class="">@@ -1721,7 +1721,7 @@ define void @interleaved_store_vf64_i8_s<br class=""> ; AVX1-NEXT: vpunpckhbw {{.*#+}} xmm3 = xmm4[8],xmm3[8],xmm4[9],xmm3[9],xmm4[10],xmm3[10],xmm4[11],xmm3[11],xmm4[12],xmm3[12],xmm4[13],xmm3[13],xmm4[14],xmm3[14],xmm4[15],xmm3[15]<br class=""> ; AVX1-NEXT: vpunpcklwd {{.*#+}} xmm4 = xmm15[0],xmm5[0],xmm15[1],xmm5[1],xmm15[2],xmm5[2],xmm15[3],xmm5[3]<br class=""> ; AVX1-NEXT: vmovdqa %xmm8, %xmm1<br class="">-; AVX1-NEXT: vpunpcklwd {{.*#+}} xmm11 = xmm1[0],xmm2[0],xmm1[1],xmm2[1],xmm1[2],xmm2[2],xmm1[3],xmm2[3]<br class="">+; AVX1-NEXT: vpunpcklwd {{.*#+}} xmm11 = xmm8[0],xmm2[0],xmm8[1],xmm2[1],xmm8[2],xmm2[2],xmm8[3],xmm2[3]<br class=""> ; AVX1-NEXT: vinsertf128 $1, %xmm4, %ymm11, %ymm0<br class=""> ; AVX1-NEXT: vmovups %ymm0, -{{[0-9]+}}(%rsp) # 32-byte Spill<br class=""> ; AVX1-NEXT: vpunpcklwd {{.*#+}} xmm4 = xmm10[0],xmm9[0],xmm10[1],xmm9[1],xmm10[2],xmm9[2],xmm10[3],xmm9[3]<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/x86-shrink-wrap-unwind.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/x86-shrink-wrap-unwind.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/x86-shrink-wrap-unwind.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/x86-shrink-wrap-unwind.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/x86-shrink-wrap-unwind.ll Mon Oct 2 15:01:37 2017<br class="">@@ -23,7 +23,7 @@ target triple = "x86_64-apple-macosx"<br class=""> ; Compare the arguments and jump to exit.<br class=""> ; After the prologue is set.<br class=""> ; CHECK: movl %edi, [[ARG0CPY:%e[a-z]+]]<br class="">-; CHECK-NEXT: cmpl %esi, [[ARG0CPY]]<br class="">+; CHECK-NEXT: cmpl %esi, %edi<br class=""> ; CHECK-NEXT: jge [[EXIT_LABEL:LBB[0-9_]+]]<br class=""> ;<br class=""> ; Store %a in the alloca.<br class="">@@ -69,7 +69,7 @@ attributes #0 = { "no-frame-pointer-elim<br class=""> ; Compare the arguments and jump to exit.<br class=""> ; After the prologue is set.<br class=""> ; CHECK: movl %edi, [[ARG0CPY:%e[a-z]+]]<br class="">-; CHECK-NEXT: cmpl %esi, [[ARG0CPY]]<br class="">+; CHECK-NEXT: cmpl %esi, %edi<br class=""> ; CHECK-NEXT: jge [[EXIT_LABEL:LBB[0-9_]+]]<br class=""> ;<br class=""> ; Prologue code.<br class="">@@ -115,7 +115,7 @@ attributes #1 = { "no-frame-pointer-elim<br class=""> ; Compare the arguments and jump to exit.<br class=""> ; After the prologue is set.<br class=""> ; CHECK: movl %edi, [[ARG0CPY:%e[a-z]+]]<br class="">-; CHECK-NEXT: cmpl %esi, [[ARG0CPY]]<br class="">+; CHECK-NEXT: cmpl %esi, %edi<br class=""> ; CHECK-NEXT: jge [[EXIT_LABEL:LBB[0-9_]+]]<br class=""> ;<br class=""> ; Prologue code.<br class=""><br class="">Modified: llvm/trunk/test/CodeGen/X86/x86-shrink-wrapping.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/x86-shrink-wrapping.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/x86-shrink-wrapping.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br 
class="">==============================================================================<br class="">--- llvm/trunk/test/CodeGen/X86/x86-shrink-wrapping.ll (original)<br class="">+++ llvm/trunk/test/CodeGen/X86/x86-shrink-wrapping.ll Mon Oct 2 15:01:37 2017<br class="">@@ -17,7 +17,7 @@ target triple = "x86_64-apple-macosx"<br class=""> ; Compare the arguments and jump to exit.<br class=""> ; No prologue needed.<br class=""> ; ENABLE: movl %edi, [[ARG0CPY:%e[a-z]+]]<br class="">-; ENABLE-NEXT: cmpl %esi, [[ARG0CPY]]<br class="">+; ENABLE-NEXT: cmpl %esi, %edi<br class=""> ; ENABLE-NEXT: jge [[EXIT_LABEL:LBB[0-9_]+]]<br class=""> ;<br class=""> ; Prologue code.<br class="">@@ -27,7 +27,7 @@ target triple = "x86_64-apple-macosx"<br class=""> ; Compare the arguments and jump to exit.<br class=""> ; After the prologue is set.<br class=""> ; DISABLE: movl %edi, [[ARG0CPY:%e[a-z]+]]<br class="">-; DISABLE-NEXT: cmpl %esi, [[ARG0CPY]]<br class="">+; DISABLE-NEXT: cmpl %esi, %edi<br class=""> ; DISABLE-NEXT: jge [[EXIT_LABEL:LBB[0-9_]+]]<br class=""> ;<br class=""> ; Store %a in the alloca.<br class=""><br class="">Modified: llvm/trunk/test/DebugInfo/X86/live-debug-variables.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/DebugInfo/X86/live-debug-variables.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/DebugInfo/X86/live-debug-variables.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/DebugInfo/X86/live-debug-variables.ll (original)<br class="">+++ llvm/trunk/test/DebugInfo/X86/live-debug-variables.ll Mon Oct 2 15:01:37 2017<br class="">@@ -24,7 +24,7 @@<br class=""><br class=""> ; CHECK: .debug_loc contents:<br class=""> ; CHECK-NEXT: 0x00000000:<br class="">-; CHECK-NEXT: 0x000000000000001f - 0x000000000000003c: DW_OP_reg3 RBX<br class="">+; CHECK-NEXT: 0x000000000000001f - 0x000000000000005a: DW_OP_reg3 RBX<br class=""> ; We should only have one entry<br class=""> ; CHECK-NOT: :<br class=""><br class=""><br class="">Modified: llvm/trunk/test/DebugInfo/X86/spill-nospill.ll<br class="">URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/DebugInfo/X86/spill-nospill.ll?rev=314729&r1=314728&r2=314729&view=diff" class="">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/DebugInfo/X86/spill-nospill.ll?rev=314729&r1=314728&r2=314729&view=diff</a><br class="">==============================================================================<br class="">--- llvm/trunk/test/DebugInfo/X86/spill-nospill.ll (original)<br class="">+++ llvm/trunk/test/DebugInfo/X86/spill-nospill.ll Mon Oct 2 15:01:37 2017<br class="">@@ -30,7 +30,7 @@<br class=""> ; CHECK: callq g<br class=""> ; CHECK: movl %eax, %[[CSR:[^ ]*]]<br class=""> ; CHECK: #DEBUG_VALUE: f:y <- %ESI<br class="">-; CHECK: movl %[[CSR]], %ecx<br class="">+; CHECK: movl %eax, %ecx<br class=""> ; CHECK: callq g<br class=""> ; CHECK: movl %[[CSR]], %ecx<br class=""> ; CHECK: callq g<br class=""><br class=""><br class="">_______________________________________________<br class="">llvm-commits mailing list<br class=""><a href="mailto:llvm-commits@lists.llvm.org" class="">llvm-commits@lists.llvm.org</a><br class="">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits<br class=""></div></div></blockquote></div><br class=""></div></body></html>