<div dir="ltr">Looks like after this patch Android tests consistently hang<div><a href="http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-android/builds/1825">http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-android/builds/1825</a><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Aug 16, 2017 at 1:50 PM, Geoff Berry via llvm-commits <span dir="ltr"><<a href="mailto:llvm-commits@lists.llvm.org" target="_blank">llvm-commits@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Author: gberry<br>
Date: Wed Aug 16 13:50:01 2017
New Revision: 311038

URL: http://llvm.org/viewvc/llvm-project?rev=311038&view=rev
Log:
[MachineCopyPropagation] Extend pass to do COPY source forwarding

This change extends MachineCopyPropagation to do COPY source forwarding.

It also extends the pass so that it can be run during register allocation,
after physical registers have been assigned but before the virtual registers
have been rewritten, which allows it to remove virtual register COPYs whose
LiveIntervals become dead once all of their uses have been forwarded.

Reviewers: qcolombet, javed.absar, MatzeB, jonpa

Subscribers: jyknight, nemanjai, llvm-commits, nhaehnle, mcrosier, mgorny

Differential Revision: https://reviews.llvm.org/D30751
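
For readers skimming the log, the core transformation, shown in the same
MIR-style notation the new file header comment uses further down (register
names here are illustrative only), is:

    %vreg1 = COPY %vreg0
    ...
    ... = OP %vreg1        ; use of the COPY destination

which, when the register assigned to %vreg0 is not clobbered in between and
the other legality checks in the pass succeed, becomes:

    %vreg1 = COPY %vreg0   ; deleted later if forwarding removed its last use
    ...
    ... = OP %vreg0        ; use rewritten to the COPY source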

Modified:
    llvm/trunk/include/llvm/CodeGen/Passes.h
    llvm/trunk/include/llvm/InitializePasses.h
    llvm/trunk/lib/CodeGen/CodeGen.cpp
    llvm/trunk/lib/CodeGen/MachineCopyPropagation.cpp
    llvm/trunk/lib/CodeGen/TargetPassConfig.cpp
    llvm/trunk/test/CodeGen/AArch64/aarch64-fold-lslfast.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-AdvSIMD-Scalar.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-zero-cycle-regmov.ll
    llvm/trunk/test/CodeGen/AArch64/f16-instructions.ll
    llvm/trunk/test/CodeGen/AArch64/flags-multiuse.ll
    llvm/trunk/test/CodeGen/AArch64/merge-store-dependency.ll
    llvm/trunk/test/CodeGen/AArch64/neg-imm.ll
    llvm/trunk/test/CodeGen/AMDGPU/byval-frame-setup.ll
    llvm/trunk/test/CodeGen/AMDGPU/call-argument-types.ll
    llvm/trunk/test/CodeGen/AMDGPU/call-preserved-registers.ll
    llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-sgprs.ll
    llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-vgprs.ll
    llvm/trunk/test/CodeGen/AMDGPU/mubuf-offset-private.ll
    llvm/trunk/test/CodeGen/AMDGPU/multilevel-break.ll
    llvm/trunk/test/CodeGen/AMDGPU/private-access-no-objects.ll
    llvm/trunk/test/CodeGen/AMDGPU/ret.ll
    llvm/trunk/test/CodeGen/ARM/atomic-op.ll
    llvm/trunk/test/CodeGen/ARM/swifterror.ll
    llvm/trunk/test/CodeGen/Mips/llvm-ir/sub.ll
    llvm/trunk/test/CodeGen/PowerPC/fma-mutate.ll
    llvm/trunk/test/CodeGen/PowerPC/inlineasm-i64-reg.ll
    llvm/trunk/test/CodeGen/PowerPC/tail-dup-layout.ll
    llvm/trunk/test/CodeGen/SPARC/32abi.ll
    llvm/trunk/test/CodeGen/SPARC/atomics.ll
    llvm/trunk/test/CodeGen/Thumb/thumb-shrink-wrapping.ll
    llvm/trunk/test/CodeGen/X86/2006-03-01-InstrSchedBug.ll
    llvm/trunk/test/CodeGen/X86/arg-copy-elide.ll
    llvm/trunk/test/CodeGen/X86/avg.ll
    llvm/trunk/test/CodeGen/X86/avx-load-store.ll
    llvm/trunk/test/CodeGen/X86/avx512-bugfix-25270.ll
    llvm/trunk/test/CodeGen/X86/avx512-calling-conv.ll
    llvm/trunk/test/CodeGen/X86/avx512-mask-op.ll
    llvm/trunk/test/CodeGen/X86/avx512bw-intrinsics-upgrade.ll
    llvm/trunk/test/CodeGen/X86/bitcast-int-to-vector-bool-sext.ll
    llvm/trunk/test/CodeGen/X86/bitcast-int-to-vector-bool-zext.ll
    llvm/trunk/test/CodeGen/X86/buildvec-insertvec.ll
    llvm/trunk/test/CodeGen/X86/combine-fcopysign.ll
    llvm/trunk/test/CodeGen/X86/complex-fastmath.ll
    llvm/trunk/test/CodeGen/X86/divide-by-constant.ll
    llvm/trunk/test/CodeGen/X86/fmaxnum.ll
    llvm/trunk/test/CodeGen/X86/fminnum.ll
    llvm/trunk/test/CodeGen/X86/fp128-i128.ll
    llvm/trunk/test/CodeGen/X86/haddsub-2.ll
    llvm/trunk/test/CodeGen/X86/haddsub-undef.ll
    llvm/trunk/test/CodeGen/X86/half.ll
    llvm/trunk/test/CodeGen/X86/inline-asm-fpstack.ll
    llvm/trunk/test/CodeGen/X86/ipra-local-linkage.ll
    llvm/trunk/test/CodeGen/X86/localescape.ll
    llvm/trunk/test/CodeGen/X86/mul-i1024.ll
    llvm/trunk/test/CodeGen/X86/mul-i512.ll
    llvm/trunk/test/CodeGen/X86/mul128.ll
    llvm/trunk/test/CodeGen/X86/pmul.ll
    llvm/trunk/test/CodeGen/X86/powi.ll
    llvm/trunk/test/CodeGen/X86/pr11334.ll
    llvm/trunk/test/CodeGen/X86/pr29112.ll
    llvm/trunk/test/CodeGen/X86/psubus.ll
    llvm/trunk/test/CodeGen/X86/select.ll
    llvm/trunk/test/CodeGen/X86/shrink-wrap-chkstk.ll
    llvm/trunk/test/CodeGen/X86/sqrt-fastmath.ll
    llvm/trunk/test/CodeGen/X86/sse-scalar-fp-arith.ll
    llvm/trunk/test/CodeGen/X86/sse1.ll
    llvm/trunk/test/CodeGen/X86/sse3-avx-addsub-2.ll
    llvm/trunk/test/CodeGen/X86/statepoint-live-in.ll
    llvm/trunk/test/CodeGen/X86/statepoint-stack-usage.ll
    llvm/trunk/test/CodeGen/X86/vec_fp_to_int.ll
    llvm/trunk/test/CodeGen/X86/vec_int_to_fp.ll
    llvm/trunk/test/CodeGen/X86/vec_minmax_sint.ll
    llvm/trunk/test/CodeGen/X86/vec_shift4.ll
    llvm/trunk/test/CodeGen/X86/vector-blend.ll
    llvm/trunk/test/CodeGen/X86/vector-idiv-sdiv-128.ll
    llvm/trunk/test/CodeGen/X86/vector-idiv-udiv-128.ll
    llvm/trunk/test/CodeGen/X86/vector-rotate-128.ll
    llvm/trunk/test/CodeGen/X86/vector-sext.ll
    llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll
    llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll
    llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll
    llvm/trunk/test/CodeGen/X86/vector-shuffle-combining.ll
    llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll
    llvm/trunk/test/CodeGen/X86/vector-zext.ll
    llvm/trunk/test/CodeGen/X86/vselect-minmax.ll
    llvm/trunk/test/CodeGen/X86/widen_conv-3.ll
    llvm/trunk/test/CodeGen/X86/widen_conv-4.ll
    llvm/trunk/test/CodeGen/X86/x86-shrink-wrap-unwind.ll
    llvm/trunk/test/CodeGen/X86/x86-shrink-wrapping.ll

Modified: llvm/trunk/include/llvm/CodeGen/Passes.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/CodeGen/Passes.h?rev=311038&r1=311037&r2=311038&view=diff
==============================================================================
--- llvm/trunk/include/llvm/CodeGen/Passes.h (original)
+++ llvm/trunk/include/llvm/CodeGen/Passes.h Wed Aug 16 13:50:01 2017
@@ -278,6 +278,11 @@ namespace llvm {
 /// MachineSinking - This pass performs sinking on machine instructions.
 extern char &MachineSinkingID;

+ /// MachineCopyPropagationPreRegRewrite - This pass performs copy propagation
+ /// on machine instructions after register allocation but before virtual
+ /// register re-writing.
+ extern char &MachineCopyPropagationPreRegRewriteID;
+
 /// MachineCopyPropagation - This pass performs copy propagation on
 /// machine instructions.
 extern char &MachineCopyPropagationID;

Modified: llvm/trunk/include/llvm/InitializePasses.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/InitializePasses.h?rev=311038&r1=311037&r2=311038&view=diff
==============================================================================
--- llvm/trunk/include/llvm/InitializePasses.h (original)
+++ llvm/trunk/include/llvm/InitializePasses.h Wed Aug 16 13:50:01 2017
@@ -233,6 +233,7 @@ void initializeMachineBranchProbabilityI
 void initializeMachineCSEPass(PassRegistry&);
 void initializeMachineCombinerPass(PassRegistry&);
 void initializeMachineCopyPropagationPass(PassRegistry&);
+void initializeMachineCopyPropagationPreRegRewritePass(PassRegistry&);
 void initializeMachineDominanceFrontierPass(PassRegistry&);
 void initializeMachineDominatorTreePass(PassRegistry&);
 void initializeMachineFunctionPrinterPassPass(PassRegistry&);

Modified: llvm/trunk/lib/CodeGen/CodeGen.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/CodeGen.cpp?rev=311038&r1=311037&r2=311038&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/CodeGen.cpp (original)
+++ llvm/trunk/lib/CodeGen/CodeGen.cpp Wed Aug 16 13:50:01 2017
@@ -54,6 +54,7 @@ void llvm::initializeCodeGen(PassRegistr
 initializeMachineCSEPass(Registry);
 initializeMachineCombinerPass(Registry);
 initializeMachineCopyPropagationPass(Registry);
+ initializeMachineCopyPropagationPreRegRewritePass(Registry);
 initializeMachineDominatorTreePass(Registry);
 initializeMachineFunctionPrinterPassPass(Registry);
 initializeMachineLICMPass(Registry);

Modified: llvm/trunk/lib/CodeGen/MachineCopyPropagation.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MachineCopyPropagation.cpp?rev=311038&r1=311037&r2=311038&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/MachineCopyPropagation.cpp (original)
+++ llvm/trunk/lib/CodeGen/MachineCopyPropagation.cpp Wed Aug 16 13:50:01 2017
@@ -7,18 +7,62 @@
 //
 //===----------------------------------------------------------------------===//
 //
-// This is an extremely simple MachineInstr-level copy propagation pass.
+// This is a simple MachineInstr-level copy forwarding pass. It may be run at
+// two places in the codegen pipeline:
+// - After register allocation but before virtual registers have been remapped
+// to physical registers.
+// - After physical register remapping.
+//
+// The optimizations done vary slightly based on whether virtual registers are
+// still present. In both cases, this pass forwards the source of COPYs to the
+// users of their destinations when doing so is legal. For example:
+//
+// %vreg1 = COPY %vreg0
+// ...
+// ... = OP %vreg1
+//
+// If
+// - the physical register assigned to %vreg0 has not been clobbered by the
+// time of the use of %vreg1
+// - the register class constraints are satisfied
+// - the COPY def is the only value that reaches OP
+// then this pass replaces the above with:
+//
+// %vreg1 = COPY %vreg0
+// ...
+// ... = OP %vreg0
+//
+// and updates the relevant state required by VirtRegMap (e.g. LiveIntervals).
+// COPYs whose LiveIntervals become dead as a result of this forwarding (i.e. if
+// all uses of %vreg1 are changed to %vreg0) are removed.
+//
+// When being run with only physical registers, this pass will also remove some
+// redundant COPYs. For example:
+//
+// %R1 = COPY %R0
+// ... // No clobber of %R1
+// %R0 = COPY %R1 <<< Removed
+//
+// or
+//
+// %R1 = COPY %R0
+// ... // No clobber of %R0
+// %R1 = COPY %R0 <<< Removed
 //
 //===----------------------------------------------------------------------===//

+#include "LiveDebugVariables.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/SetVector.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/Statistic.h"
+#include "llvm/CodeGen/LiveRangeEdit.h"
+#include "llvm/CodeGen/LiveStackAnalysis.h"
 #include "llvm/CodeGen/MachineFunction.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
 #include "llvm/CodeGen/Passes.h"
+#include "llvm/CodeGen/VirtRegMap.h"
 #include "llvm/Pass.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
@@ -30,24 +74,48 @@ using namespace llvm;
 #define DEBUG_TYPE "machine-cp"

 STATISTIC(NumDeletes, "Number of dead copies deleted");
+STATISTIC(NumCopyForwards, "Number of copy uses forwarded");

 namespace {
 typedef SmallVector<unsigned, 4> RegList;
 typedef DenseMap<unsigned, RegList> SourceMap;
 typedef DenseMap<unsigned, MachineInstr*> Reg2MIMap;

- class MachineCopyPropagation : public MachineFunctionPass {
+ class MachineCopyPropagation : public MachineFunctionPass,
+ private LiveRangeEdit::Delegate {
 const TargetRegisterInfo *TRI;
 const TargetInstrInfo *TII;
- const MachineRegisterInfo *MRI;
+ MachineRegisterInfo *MRI;
+ MachineFunction *MF;
+ SlotIndexes *Indexes;
+ LiveIntervals *LIS;
+ const VirtRegMap *VRM;
+ // True if this pass is being run before virtual registers are remapped to
+ // physical ones.
+ bool PreRegRewrite;
+ bool NoSubRegLiveness;
+
+ protected:
+ MachineCopyPropagation(char &ID, bool PreRegRewrite)
+ : MachineFunctionPass(ID), PreRegRewrite(PreRegRewrite) {}

 public:
 static char ID; // Pass identification, replacement for typeid
- MachineCopyPropagation() : MachineFunctionPass(ID) {
+ MachineCopyPropagation() : MachineCopyPropagation(ID, false) {
 initializeMachineCopyPropagationPass(*PassRegistry::getPassRegistry());
 }

 void getAnalysisUsage(AnalysisUsage &AU) const override {
+ if (PreRegRewrite) {
+ AU.addRequired<SlotIndexes>();
+ AU.addPreserved<SlotIndexes>();
+ AU.addRequired<LiveIntervals>();
+ AU.addPreserved<LiveIntervals>();
+ AU.addRequired<VirtRegMap>();
+ AU.addPreserved<VirtRegMap>();
+ AU.addPreserved<LiveDebugVariables>();
+ AU.addPreserved<LiveStacks>();
+ }
 AU.setPreservesCFG();
 MachineFunctionPass::getAnalysisUsage(AU);
 }
@@ -55,6 +123,10 @@ namespace {
 bool runOnMachineFunction(MachineFunction &MF) override;

 MachineFunctionProperties getRequiredProperties() const override {
+ if (PreRegRewrite)
+ return MachineFunctionProperties()
+ .set(MachineFunctionProperties::Property::NoPHIs)
+ .set(MachineFunctionProperties::Property::TracksLiveness);
 return MachineFunctionProperties().set(
 MachineFunctionProperties::Property::NoVRegs);
 }
@@ -64,6 +136,28 @@ namespace {
 void ReadRegister(unsigned Reg);
 void CopyPropagateBlock(MachineBasicBlock &MBB);
 bool eraseIfRedundant(MachineInstr &Copy, unsigned Src, unsigned Def);
+ unsigned getPhysReg(unsigned Reg, unsigned SubReg);
+ unsigned getPhysReg(const MachineOperand &Opnd) {
+ return getPhysReg(Opnd.getReg(), Opnd.getSubReg());
+ }
+ unsigned getFullPhysReg(const MachineOperand &Opnd) {
+ return getPhysReg(Opnd.getReg(), 0);
+ }
+ void forwardUses(MachineInstr &MI);
+ bool isForwardableRegClassCopy(const MachineInstr &Copy,
+ const MachineInstr &UseI);
+ std::tuple<unsigned, unsigned, bool>
+ checkUseSubReg(const MachineOperand &CopySrc, const MachineOperand &MOUse);
+ bool hasImplicitOverlap(const MachineInstr &MI, const MachineOperand &Use);
+ void narrowRegClass(const MachineInstr &MI, const MachineOperand &MOUse,
+ unsigned NewUseReg, unsigned NewUseSubReg);
+ void updateForwardedCopyLiveInterval(const MachineInstr &Copy,
+ const MachineInstr &UseMI,
+ unsigned OrigUseReg,
+ unsigned NewUseReg,
+ unsigned NewUseSubReg);
+ /// LiveRangeEdit callback for eliminateDeadDefs().
+ void LRE_WillEraseInstruction(MachineInstr *MI) override;

 /// Candidates for deletion.
 SmallSetVector<MachineInstr*, 8> MaybeDeadCopies;
@@ -75,6 +169,15 @@ namespace {
 SourceMap SrcMap;
 bool Changed;
 };
+
+ class MachineCopyPropagationPreRegRewrite : public MachineCopyPropagation {
+ public:
+ static char ID; // Pass identification, replacement for typeid
+ MachineCopyPropagationPreRegRewrite()
+ : MachineCopyPropagation(ID, true) {
+ initializeMachineCopyPropagationPreRegRewritePass(*PassRegistry::getPassRegistry());
+ }
+ };
 }
 char MachineCopyPropagation::ID = 0;
 char &llvm::MachineCopyPropagationID = MachineCopyPropagation::ID;
@@ -82,6 +185,29 @@ char &llvm::MachineCopyPropagationID = M
 INITIALIZE_PASS(MachineCopyPropagation, DEBUG_TYPE,
 "Machine Copy Propagation Pass", false, false)

+/// We have two separate passes that are very similar, the only difference being
+/// where they are meant to be run in the pipeline. This is done for several
+/// reasons:
+/// - the two passes have different dependencies
+/// - some targets want to disable the later run of this pass, but not the
+/// earlier one (e.g. NVPTX and WebAssembly)
+/// - it allows for easier debugging via llc
+
+char MachineCopyPropagationPreRegRewrite::ID = 0;
+char &llvm::MachineCopyPropagationPreRegRewriteID = MachineCopyPropagationPreRegRewrite::ID;
+
+INITIALIZE_PASS_BEGIN(MachineCopyPropagationPreRegRewrite,
+ "machine-cp-prerewrite",
+ "Machine Copy Propagation Pre-Register Rewrite Pass",
+ false, false)
+INITIALIZE_PASS_DEPENDENCY(SlotIndexes)
+INITIALIZE_PASS_DEPENDENCY(LiveIntervals)
+INITIALIZE_PASS_DEPENDENCY(VirtRegMap)
+INITIALIZE_PASS_END(MachineCopyPropagationPreRegRewrite,
+ "machine-cp-prerewrite",
+ "Machine Copy Propagation Pre-Register Rewrite Pass", false,
+ false)
+
 /// Remove any entry in \p Map where the register is a subregister or equal to
 /// a register contained in \p Regs.
 static void removeRegsFromMap(Reg2MIMap &Map, const RegList &Regs,
@@ -122,6 +248,10 @@ void MachineCopyPropagation::ClobberRegi
 }

 void MachineCopyPropagation::ReadRegister(unsigned Reg) {
+ // We don't track MaybeDeadCopies when running pre-VirtRegRewriter.
+ if (PreRegRewrite)
+ return;
+
 // If 'Reg' is defined by a copy, the copy is no longer a candidate
 // for elimination.
 for (MCRegAliasIterator AI(Reg, TRI, true); AI.isValid(); ++AI) {
@@ -153,6 +283,46 @@ static bool isNopCopy(const MachineInstr
 return SubIdx == TRI->getSubRegIndex(PreviousDef, Def);
 }

+/// Return the physical register assigned to \p Reg if it is a virtual register,
+/// otherwise just return the physical reg from the operand itself.
+///
+/// If \p SubReg is 0 then return the full physical register assigned to the
+/// virtual register ignoring subregs. If we aren't tracking sub-reg liveness
+/// then we need to use this to be more conservative with clobbers by killing
+/// all super reg and their sub reg COPYs as well. This is to prevent COPY
+/// forwarding in cases like the following:
+///
+/// %vreg2 = COPY %vreg1:sub1
+/// %vreg3 = COPY %vreg1:sub0
+/// ... = OP1 %vreg2
+/// ... = OP2 %vreg3
+///
+/// After forwarding %vreg2 (assuming this is the last use of %vreg1) and
+/// VirtRegRewriter adding kill markers we have:
+///
+/// %vreg3 = COPY %vreg1:sub0
+/// ... = OP1 %vreg1:sub1<kill>
+/// ... = OP2 %vreg3
+///
+/// If %vreg3 is assigned to a sub-reg of %vreg1, then after rewriting we have:
+///
+/// ... = OP1 R0:sub1, R0<imp-use,kill>
+/// ... = OP2 R0:sub0
+///
+/// and the use of R0 by OP2 will not have a valid definition.
+unsigned MachineCopyPropagation::getPhysReg(unsigned Reg, unsigned SubReg) {
+
+ // Physical registers cannot have subregs.
+ if (!TargetRegisterInfo::isVirtualRegister(Reg))
+ return Reg;
+
+ assert(PreRegRewrite && "Unexpected virtual register encountered");
+ Reg = VRM->getPhys(Reg);
+ if (SubReg && !NoSubRegLiveness)
+ Reg = TRI->getSubReg(Reg, SubReg);
+ return Reg;
+}
+
 /// Remove instruction \p Copy if there exists a previous copy that copies the
 /// register \p Src to the register \p Def; This may happen indirectly by
 /// copying the super registers.
@@ -190,6 +360,325 @@ bool MachineCopyPropagation::eraseIfRedu
 return true;
 }

+
+/// Decide whether we should forward the destination of \param Copy to its use
+/// in \param UseI based on the register class of the Copy operands. Same-class
+/// COPYs are always accepted by this function, but cross-class COPYs are only
+/// accepted if they are forwarded to another COPY with the operand register
+/// classes reversed. For example:
+///
+/// RegClassA = COPY RegClassB // Copy parameter
+/// ...
+/// RegClassB = COPY RegClassA // UseI parameter
+///
+/// which after forwarding becomes
+///
+/// RegClassA = COPY RegClassB
+/// ...
+/// RegClassB = COPY RegClassB
+///
+/// so we have reduced the number of cross-class COPYs and potentially
+/// introduced a nop COPY that can be removed.
+bool MachineCopyPropagation::isForwardableRegClassCopy(
+ const MachineInstr &Copy, const MachineInstr &UseI) {
+ auto isCross = [&](const MachineOperand &Dst, const MachineOperand &Src) {
+ unsigned DstReg = Dst.getReg();
+ unsigned SrcPhysReg = getPhysReg(Src);
+ const TargetRegisterClass *DstRC;
+ if (TargetRegisterInfo::isVirtualRegister(DstReg)) {
+ DstRC = MRI->getRegClass(DstReg);
+ unsigned DstSubReg = Dst.getSubReg();
+ if (DstSubReg)
+ SrcPhysReg = TRI->getMatchingSuperReg(SrcPhysReg, DstSubReg, DstRC);
+ } else
+ DstRC = TRI->getMinimalPhysRegClass(DstReg);
+
+ return !DstRC->contains(SrcPhysReg);
+ };
+
+ const MachineOperand &CopyDst = Copy.getOperand(0);
+ const MachineOperand &CopySrc = Copy.getOperand(1);
+
+ if (!isCross(CopyDst, CopySrc))
+ return true;
+
+ if (!UseI.isCopy())
+ return false;
+
+ assert(getFullPhysReg(UseI.getOperand(1)) == getFullPhysReg(CopyDst));
+ return !isCross(UseI.getOperand(0), CopySrc);
+}
+
+/// Check that the subregs on the copy source operand (\p CopySrc) and the use
+/// operand to be forwarded to (\p MOUse) are compatible with doing the
+/// forwarding. Also computes the new register and subregister to be used in
+/// the forwarded-to instruction.
+std::tuple<unsigned, unsigned, bool> MachineCopyPropagation::checkUseSubReg(
+ const MachineOperand &CopySrc, const MachineOperand &MOUse) {
+ unsigned NewUseReg = CopySrc.getReg();
+ unsigned NewUseSubReg;
+
+ if (TargetRegisterInfo::isPhysicalRegister(NewUseReg)) {
+ // If MOUse is a virtual reg, we need to apply it to the new physical reg
+ // we're going to replace it with.
+ if (MOUse.getSubReg())
+ NewUseReg = TRI->getSubReg(NewUseReg, MOUse.getSubReg());
+ // If the original use subreg isn't valid on the new src reg, we can't
+ // forward it here.
+ if (!NewUseReg)
+ return std::make_tuple(0, 0, false);
+ NewUseSubReg = 0;
+ } else {
+ // %v1 = COPY %v2:sub1
+ // USE %v1:sub2
+ // The new use is %v2:sub1:sub2
+ NewUseSubReg =
+ TRI->composeSubRegIndices(CopySrc.getSubReg(), MOUse.getSubReg());
+ // Check that NewUseSubReg is valid on NewUseReg
+ if (NewUseSubReg &&
+ !TRI->getSubClassWithSubReg(MRI->getRegClass(NewUseReg), NewUseSubReg))
+ return std::make_tuple(0, 0, false);
+ }
+
+ return std::make_tuple(NewUseReg, NewUseSubReg, true);
+}
+
+/// Check that \p MI does not have implicit uses that overlap with its \p Use
+/// operand (the register being replaced), since these can sometimes be
+/// implicitly tied to other operands. For example, on AMDGPU:
+///
+/// V_MOVRELS_B32_e32 %VGPR2, %M0<imp-use>, %EXEC<imp-use>, %VGPR2_VGPR3_VGPR4_VGPR5<imp-use>
+///
+/// the %VGPR2 is implicitly tied to the larger reg operand, but we have no
+/// way of knowing we need to update the latter when updating the former.
+bool MachineCopyPropagation::hasImplicitOverlap(const MachineInstr &MI,
+ const MachineOperand &Use) {
+ if (!TargetRegisterInfo::isPhysicalRegister(Use.getReg()))
+ return false;
+
+ for (const MachineOperand &MIUse : MI.uses())
+ if (&MIUse != &Use && MIUse.isReg() && MIUse.isImplicit() &&
+ TRI->regsOverlap(Use.getReg(), MIUse.getReg()))
+ return true;
+
+ return false;
+}
+<br>
+/// Narrow the register class of the forwarded vreg so it matches any<br>
+/// instruction constraints. \p MI is the instruction being forwarded to. \p<br>
+/// MOUse is the operand being replaced in \p MI (which hasn't yet been updated<br>
+/// at the time this function is called). \p NewUseReg and \p NewUseSubReg are<br>
+/// what the \p MOUse will be changed to after forwarding.<br>
+///<br>
+/// If we are forwarding<br>
+/// A:RCA = COPY B:RCB<br>
+/// into<br>
+/// ... = OP A:RCA<br>
+///<br>
+/// then we need to narrow the register class of B so that it is a subclass<br>
+/// of RCA so that it meets the instruction register class constraints.<br>
+void MachineCopyPropagation::<wbr>narrowRegClass(const MachineInstr &MI,<br>
+ const MachineOperand &MOUse,<br>
+ unsigned NewUseReg,<br>
+ unsigned NewUseSubReg) {<br>
+ if (!TargetRegisterInfo::<wbr>isVirtualRegister(NewUseReg))<br>
+ return;<br>
+<br>
+ // Make sure the virtual reg class allows the subreg.<br>
+ if (NewUseSubReg) {<br>
+ const TargetRegisterClass *CurUseRC = MRI->getRegClass(NewUseReg);<br>
+ const TargetRegisterClass *NewUseRC =<br>
+ TRI->getSubClassWithSubReg(<wbr>CurUseRC, NewUseSubReg);<br>
+ if (CurUseRC != NewUseRC) {<br>
+ DEBUG(dbgs() << "MCP: Setting regclass of " << PrintReg(NewUseReg, TRI)<br>
+ << " to " << TRI->getRegClassName(NewUseRC) << "\n");<br>
+ MRI->setRegClass(NewUseReg, NewUseRC);<br>
+ }<br>
+ }<br>
+<br>
+ unsigned MOUseOpNo = &MOUse - &MI.getOperand(0);<br>
+ const TargetRegisterClass *InstRC =<br>
+ TII->getRegClass(MI.getDesc(), MOUseOpNo, TRI, *MF);<br>
+ if (InstRC) {<br>
+ const TargetRegisterClass *CurUseRC = MRI->getRegClass(NewUseReg);<br>
+ if (NewUseSubReg)<br>
+ InstRC = TRI->getMatchingSuperRegClass(<wbr>CurUseRC, InstRC, NewUseSubReg);<br>
+ if (!InstRC->hasSubClassEq(<wbr>CurUseRC)) {<br>
+ const TargetRegisterClass *NewUseRC =<br>
+ TRI->getCommonSubClass(InstRC, CurUseRC);<br>
+ DEBUG(dbgs() << "MCP: Setting regclass of " << PrintReg(NewUseReg, TRI)<br>
+ << " to " << TRI->getRegClassName(NewUseRC) << "\n");<br>
+ MRI->setRegClass(NewUseReg, NewUseRC);<br>
+ }<br>
+ }<br>
+}<br>
+<br>
+/// Update the LiveInterval information to reflect the destination of \p Copy<br>
+/// being forwarded to a use in \p UseMI. \p OrigUseReg is the register being<br>
+/// forwarded through. It should be the destination register of \p Copy and has<br>
+/// already been replaced in \p UseMI at the point this function is called. \p<br>
+/// NewUseReg and \p NewUseSubReg are the register and subregister being<br>
+/// forwarded. They should be the source register of the \p Copy and should be<br>
+/// the value of the \p UseMI operand being forwarded at the point this function<br>
+/// is called.<br>
+void MachineCopyPropagation::<wbr>updateForwardedCopyLiveInterva<wbr>l(<br>
+ const MachineInstr &Copy, const MachineInstr &UseMI, unsigned OrigUseReg,<br>
+ unsigned NewUseReg, unsigned NewUseSubReg) {<br>
+<br>
+ assert(TRI->isSubRegisterEq(<wbr>getPhysReg(OrigUseReg, 0),<br>
+ getFullPhysReg(Copy.<wbr>getOperand(0))) &&<br>
+ "OrigUseReg mismatch");<br>
+ assert(TRI->isSubRegisterEq(<wbr>getFullPhysReg(Copy.<wbr>getOperand(1)),<br>
+ getPhysReg(NewUseReg, 0)) &&<br>
+ "NewUseReg mismatch");<br>
+<br>
+ // Extend live range starting from COPY early-clobber slot, since that<br>
+ // is where the original src live range ends.<br>
+ SlotIndex CopyUseIdx =<br>
+ Indexes->getInstructionIndex(<wbr>Copy).getRegSlot(true /*=EarlyClobber*/);<br>
+ SlotIndex UseIdx = Indexes->getInstructionIndex(<wbr>UseMI).getRegSlot();<br>
+ if (TargetRegisterInfo::<wbr>isVirtualRegister(NewUseReg)) {<br>
+ LiveInterval &LI = LIS->getInterval(NewUseReg);<br>
+ LI.extendInBlock(CopyUseIdx, UseIdx);<br>
+ LaneBitmask UseMask = TRI->getSubRegIndexLaneMask(<wbr>NewUseSubReg);<br>
+ for (auto &S : LI.subranges())<br>
+ if ((S.LaneMask & UseMask).any() && S.find(CopyUseIdx))<br>
+ S.extendInBlock(CopyUseIdx, UseIdx);<br>
+ } else {<br>
+ assert(NewUseSubReg == 0 && "Unexpected subreg on physical register!");<br>
+ for (MCRegUnitIterator UI(NewUseReg, TRI); UI.isValid(); ++UI) {<br>
+ LiveRange &LR = LIS->getRegUnit(*UI);<br>
+ LR.extendInBlock(CopyUseIdx, UseIdx);<br>
+ }<br>
+ }<br>
+<br>
+ if (!TargetRegisterInfo::<wbr>isVirtualRegister(OrigUseReg))<br>
+ return;<br>
+<br>
+ LiveInterval &LI = LIS->getInterval(OrigUseReg);<br>
+<br>
+ // Can happen for undef uses.<br>
+ if (LI.empty())<br>
+ return;<br>
+<br>
+ SlotIndex UseIndex = Indexes->getInstructionIndex(<wbr>UseMI);<br>
+ const LiveRange::Segment *UseSeg = LI.getSegmentContaining(<wbr>UseIndex);<br>
+<br>
+ // Only shrink if forwarded use is the end of a segment.<br>
+ if (UseSeg->end != UseIndex.getRegSlot())<br>
+ return;<br>
+<br>
+ SmallVector<MachineInstr *, 4> DeadInsts;<br>
+ LIS->shrinkToUses(&LI, &DeadInsts);<br>
+ if (!DeadInsts.empty()) {<br>
+ SmallVector<unsigned, 8> NewRegs;<br>
+ LiveRangeEdit(nullptr, NewRegs, *MF, *LIS, nullptr, this)<br>
+ .eliminateDeadDefs(DeadInsts);<br>
+ }<br>
+}<br>
+<br>
+void MachineCopyPropagation::LRE_<wbr>WillEraseInstruction(<wbr>MachineInstr *MI) {<br>
+ // Remove this COPY from further consideration for forwarding.<br>
+ ClobberRegister(<wbr>getFullPhysReg(MI->getOperand(<wbr>0)));<br>
+ Changed = true;<br>
+}<br>
+<br>
+/// Look for available copies whose destination register is used by \p MI and<br>
+/// replace the use in \p MI with the copy's source register.<br>
+void MachineCopyPropagation::<wbr>forwardUses(MachineInstr &MI) {<br>
+ if (AvailCopyMap.empty())<br>
+ return;<br>
+<br>
+ // Look for non-tied explicit vreg uses that have an active COPY<br>
+ // instruction that defines the physical register allocated to them.<br>
+ // Replace the vreg with the source of the active COPY.<br>
+ for (MachineOperand &MOUse : MI.explicit_uses()) {<br>
+ if (!MOUse.isReg() || MOUse.isTied())<br>
+ continue;<br>
+<br>
+ unsigned UseReg = MOUse.getReg();<br>
+ if (!UseReg)<br>
+ continue;<br>
+<br>
+ if (TargetRegisterInfo::<wbr>isVirtualRegister(UseReg))<br>
+ UseReg = VRM->getPhys(UseReg);<br>
+ else if (MI.isCall() || MI.isReturn() || MI.isInlineAsm() ||<br>
+ MI.hasUnmodeledSideEffects() || MI.isDebugValue() || MI.isKill())<br>
+ // Some instructions seem to have ABI uses e.g. not marked as<br>
+ // implicit, which can lead to forwarding them when we shouldn't, so<br>
+ // restrict the types of instructions we forward physical regs into.<br>
+ continue;<br>
+<br>
+ // Don't forward COPYs via non-allocatable regs since they can have<br>
+ // non-standard semantics.<br>
+ if (!MRI->isAllocatable(UseReg))<br>
+ continue;<br>
+<br>
+ auto CI = AvailCopyMap.find(UseReg);<br>
+ if (CI == AvailCopyMap.end())<br>
+ continue;<br>
+<br>
+ MachineInstr &Copy = *CI->second;<br>
+ MachineOperand &CopyDst = Copy.getOperand(0);<br>
+ MachineOperand &CopySrc = Copy.getOperand(1);<br>
+<br>
+ // Don't forward COPYs that are already NOPs due to register assignment.<br>
+ if (getPhysReg(CopyDst) == getPhysReg(CopySrc))<br>
+ continue;<br>
+<br>
+ // FIXME: Don't handle partial uses of wider COPYs yet.<br>
+ if (CopyDst.getSubReg() != 0 || UseReg != getPhysReg(CopyDst))<br>
+ continue;<br>
+<br>
+ // Don't forward COPYs of non-allocatable regs unless they are constant.<br>
+ unsigned CopySrcReg = CopySrc.getReg();<br>
+ if (TargetRegisterInfo::<wbr>isPhysicalRegister(CopySrcReg) &&<br>
+ !MRI->isAllocatable(<wbr>CopySrcReg) && !MRI->isConstantPhysReg(<wbr>CopySrcReg))<br>
+ continue;<br>
+<br>
+ if (!isForwardableRegClassCopy(<wbr>Copy, MI))<br>
+ continue;<br>
+<br>
+ unsigned NewUseReg, NewUseSubReg;<br>
+ bool SubRegOK;<br>
+ std::tie(NewUseReg, NewUseSubReg, SubRegOK) =<br>
+ checkUseSubReg(CopySrc, MOUse);<br>
+ if (!SubRegOK)<br>
+ continue;<br>
+<br>
+ if (hasImplicitOverlap(MI, MOUse))<br>
+ continue;<br>
+<br>
+ DEBUG(dbgs() << "MCP: Replacing "<br>
+ << PrintReg(MOUse.getReg(), TRI, MOUse.getSubReg())<br>
+ << "\n with "<br>
+ << PrintReg(NewUseReg, TRI, CopySrc.getSubReg())<br>
+ << "\n in "<br>
+ << MI<br>
+ << " from "<br>
+ << Copy);<br>
+<br>
+ narrowRegClass(MI, MOUse, NewUseReg, NewUseSubReg);<br>
+<br>
+ unsigned OrigUseReg = MOUse.getReg();<br>
+ MOUse.setReg(NewUseReg);<br>
+ MOUse.setSubReg(NewUseSubReg);<br>
+<br>
+ DEBUG(dbgs() << "MCP: After replacement: " << MI << "\n");<br>
+<br>
+ if (PreRegRewrite)<br>
+ updateForwardedCopyLiveInterva<wbr>l(Copy, MI, OrigUseReg, NewUseReg,<br>
+ NewUseSubReg);<br>
+ else<br>
+ for (MachineInstr &KMI :<br>
+ make_range(Copy.getIterator(), std::next(MI.getIterator())))<br>
+ KMI.clearRegisterKills(<wbr>NewUseReg, TRI);<br>
+<br>
+ ++NumCopyForwards;<br>
+ Changed = true;<br>
+ }<br>
+}<br>
+<br>
void MachineCopyPropagation::<wbr>CopyPropagateBlock(<wbr>MachineBasicBlock &MBB) {<br>
DEBUG(dbgs() << "MCP: CopyPropagateBlock " << MBB.getName() << "\n");<br>
<br>
@@ -198,12 +687,8 @@ void MachineCopyPropagation::<wbr>CopyPropaga<br>
++I;<br>
<br>
if (MI->isCopy()) {<br>
- unsigned Def = MI->getOperand(0).getReg();<br>
- unsigned Src = MI->getOperand(1).getReg();<br>
-<br>
- assert(!TargetRegisterInfo::<wbr>isVirtualRegister(Def) &&<br>
- !TargetRegisterInfo::<wbr>isVirtualRegister(Src) &&<br>
- "MachineCopyPropagation should be run after register allocation!");<br>
+ unsigned Def = getPhysReg(MI->getOperand(0));<br>
+ unsigned Src = getPhysReg(MI->getOperand(1));<br>
<br>
// The two copies cancel out and the source of the first copy<br>
// hasn't been overridden, eliminate the second one. e.g.<br>
@@ -220,8 +705,16 @@ void MachineCopyPropagation::<wbr>CopyPropaga<br>
// %ECX<def> = COPY %EAX<br>
// =><br>
// %ECX<def> = COPY %EAX<br>
- if (eraseIfRedundant(*MI, Def, Src) || eraseIfRedundant(*MI, Src, Def))<br>
- continue;<br>
+ if (!PreRegRewrite)<br>
+ if (eraseIfRedundant(*MI, Def, Src) || eraseIfRedundant(*MI, Src, Def))<br>
+ continue;<br>
+<br>
+ forwardUses(*MI);<br>
+<br>
+ // Src may have been changed by forwardUses()<br>
+ Src = getPhysReg(MI->getOperand(1));<br>
+ unsigned DefClobber = getFullPhysReg(MI->getOperand(<wbr>0));<br>
+ unsigned SrcClobber = getFullPhysReg(MI->getOperand(<wbr>1));<br>
<br>
// If Src is defined by a previous copy, the previous copy cannot be<br>
// eliminated.<br>
@@ -238,7 +731,10 @@ void MachineCopyPropagation::<wbr>CopyPropaga<br>
DEBUG(dbgs() << "MCP: Copy is a deletion candidate: "; MI->dump());<br>
<br>
// Copy is now a candidate for deletion.<br>
- if (!MRI->isReserved(Def))<br>
+ // Only look for dead COPYs if we're not running just before<br>
+ // VirtRegRewriter, since presumably these COPYs will have already been<br>
+ // removed.<br>
+ if (!PreRegRewrite && !MRI->isReserved(Def))<br>
MaybeDeadCopies.insert(MI);<br>
<br>
// If 'Def' is previously source of another copy, then this earlier copy's<br>
@@ -248,11 +744,11 @@ void MachineCopyPropagation::<wbr>CopyPropaga<br>
// %xmm2<def> = copy %xmm0<br>
// ...<br>
// %xmm2<def> = copy %xmm9<br>
- ClobberRegister(Def);<br>
+ ClobberRegister(DefClobber);<br>
for (const MachineOperand &MO : MI->implicit_operands()) {<br>
if (!MO.isReg() || !MO.isDef())<br>
continue;<br>
- unsigned Reg = MO.getReg();<br>
+ unsigned Reg = getFullPhysReg(MO);<br>
if (!Reg)<br>
continue;<br>
ClobberRegister(Reg);<br>
@@ -267,13 +763,27 @@ void MachineCopyPropagation::<wbr>CopyPropaga<br>
<br>
// Remember source that's copied to Def. Once it's clobbered, then<br>
// it's no longer available for copy propagation.<br>
- RegList &DestList = SrcMap[Src];<br>
- if (!is_contained(DestList, Def))<br>
- DestList.push_back(Def);<br>
+ RegList &DestList = SrcMap[SrcClobber];<br>
+ if (!is_contained(DestList, DefClobber))<br>
+ DestList.push_back(DefClobber)<wbr>;<br>
<br>
continue;<br>
}<br>
<br>
+ // Clobber any earlyclobber regs first.<br>
+ for (const MachineOperand &MO : MI->operands())<br>
+ if (MO.isReg() && MO.isEarlyClobber()) {<br>
+ unsigned Reg = getFullPhysReg(MO);<br>
+ // If we have a tied earlyclobber, that means it is also read by this<br>
+ // instruction, so we need to make sure we don't remove it as dead<br>
+ // later.<br>
+ if (MO.isTied())<br>
+ ReadRegister(Reg);<br>
+ ClobberRegister(Reg);<br>
+ }<br>
+<br>
+ forwardUses(*MI);<br>
+<br>
// Not a copy.<br>
SmallVector<unsigned, 2> Defs;<br>
const MachineOperand *RegMask = nullptr;<br>
@@ -282,14 +792,11 @@ void MachineCopyPropagation::<wbr>CopyPropaga<br>
RegMask = &MO;<br>
if (!MO.isReg())<br>
continue;<br>
- unsigned Reg = MO.getReg();<br>
+ unsigned Reg = getFullPhysReg(MO);<br>
if (!Reg)<br>
continue;<br>
<br>
- assert(!TargetRegisterInfo::<wbr>isVirtualRegister(Reg) &&<br>
- "MachineCopyPropagation should be run after register allocation!");<br>
-<br>
- if (MO.isDef()) {<br>
+ if (MO.isDef() && !MO.isEarlyClobber()) {<br>
Defs.push_back(Reg);<br>
continue;<br>
} else if (MO.readsReg())<br>
@@ -346,6 +853,8 @@ void MachineCopyPropagation::<wbr>CopyPropaga<br>
// since we don't want to trust live-in lists.<br>
if (MBB.succ_empty()) {<br>
for (MachineInstr *MaybeDead : MaybeDeadCopies) {<br>
+ DEBUG(dbgs() << "MCP: Removing copy due to no live-out succ: ";<br>
+ MaybeDead->dump());<br>
assert(!MRI->isReserved(<wbr>MaybeDead->getOperand(0).<wbr>getReg()));<br>
MaybeDead->eraseFromParent();<br>
Changed = true;<br>
@@ -368,10 +877,16 @@ bool MachineCopyPropagation::<wbr>runOnMachin<br>
TRI = MF.getSubtarget().<wbr>getRegisterInfo();<br>
TII = MF.getSubtarget().<wbr>getInstrInfo();<br>
MRI = &MF.getRegInfo();<br>
+ this->MF = &MF;<br>
+ if (PreRegRewrite) {<br>
+ Indexes = &getAnalysis<SlotIndexes>();<br>
+ LIS = &getAnalysis<LiveIntervals>();<br>
+ VRM = &getAnalysis<VirtRegMap>();<br>
+ }<br>
+ NoSubRegLiveness = !MRI->subRegLivenessEnabled();<br>
<br>
for (MachineBasicBlock &MBB : MF)<br>
CopyPropagateBlock(MBB);<br>
<br>
return Changed;<br>
}<br>
-<br>
<br>
Modified: llvm/trunk/lib/CodeGen/TargetPassConfig.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/TargetPassConfig.cpp?rev=311038&r1=311037&r2=311038&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/TargetPassConfig.cpp (original)
+++ llvm/trunk/lib/CodeGen/TargetPassConfig.cpp Wed Aug 16 13:50:01 2017
@@ -88,6 +88,8 @@ static cl::opt<bool> DisableCGP("disable
 cl::desc("Disable Codegen Prepare"));
 static cl::opt<bool> DisableCopyProp("disable-copyprop", cl::Hidden,
 cl::desc("Disable Copy Propagation pass"));
+static cl::opt<bool> DisableCopyPropPreRegRewrite("disable-copyprop-prerewrite", cl::Hidden,
+ cl::desc("Disable Copy Propagation Pre-Register Re-write pass"));
 static cl::opt<bool> DisablePartialLibcallInlining("disable-partial-libcall-inlining",
 cl::Hidden, cl::desc("Disable Partial Libcall Inlining"));
 static cl::opt<bool> EnableImplicitNullChecks(
@@ -248,6 +250,9 @@ static IdentifyingPassPtr overridePass(A
 if (StandardID == &MachineCopyPropagationID)
 return applyDisable(TargetID, DisableCopyProp);

+ if (StandardID == &MachineCopyPropagationPreRegRewriteID)
+ return applyDisable(TargetID, DisableCopyPropPreRegRewrite);
+
 return TargetID;
 }

@@ -1059,6 +1064,10 @@ void TargetPassConfig::addOptimizedRegAl
 // Allow targets to change the register assignments before rewriting.
 addPreRewrite();

+ // Copy propagate to forward register uses and try to eliminate COPYs that
+ // were not coalesced.
+ addPass(&MachineCopyPropagationPreRegRewriteID);
+
 // Finally rewrite virtual registers.
 addPass(&VirtRegRewriterID);

<br>
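The TargetPassConfig change above also wires up a dedicated off switch for the
new run, mirroring the existing -disable-copyprop option. Assuming the new
option behaves like the other disable-* overrides in that file, the two runs
could be toggled independently roughly like this (test.ll is a placeholder
input):

    llc -O2 -disable-copyprop-prerewrite test.ll -o test.s   # skip only the new pre-rewrite run
    llc -O2 -disable-copyprop test.ll -o test.s              # skip only the post-RA run
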
Modified: llvm/trunk/test/CodeGen/<wbr>AArch64/aarch64-fold-lslfast.<wbr>ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/aarch64-fold-lslfast.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-<wbr>project/llvm/trunk/test/<wbr>CodeGen/AArch64/aarch64-fold-<wbr>lslfast.ll?rev=311038&r1=<wbr>311037&r2=311038&view=diff</a><br>
==============================<wbr>==============================<wbr>==================<br>
--- llvm/trunk/test/CodeGen/<wbr>AArch64/aarch64-fold-lslfast.<wbr>ll (original)<br>
+++ llvm/trunk/test/CodeGen/<wbr>AArch64/aarch64-fold-lslfast.<wbr>ll Wed Aug 16 13:50:01 2017<br>
@@ -9,7 +9,8 @@ define i16 @halfword(%struct.a* %ctx, i3<br>
; CHECK-LABEL: halfword:<br>
; CHECK: ubfx [[REG:x[0-9]+]], x1, #9, #8<br>
; CHECK: ldrh [[REG1:w[0-9]+]], [{{.*}}[[REG2:x[0-9]+]], [[REG]], lsl #1]<br>
-; CHECK: strh [[REG1]], [{{.*}}[[REG2]], [[REG]], lsl #1]<br>
+; CHECK: mov [[REG3:x[0-9]+]], [[REG2]]<br>
+; CHECK: strh [[REG1]], [{{.*}}[[REG3]], [[REG]], lsl #1]<br>
%shr81 = lshr i32 %xor72, 9<br>
%conv82 = zext i32 %shr81 to i64<br>
%idxprom83 = and i64 %conv82, 255<br>
@@ -24,7 +25,8 @@ define i32 @word(%struct.b* %ctx, i32 %x<br>
; CHECK-LABEL: word:<br>
; CHECK: ubfx [[REG:x[0-9]+]], x1, #9, #8<br>
; CHECK: ldr [[REG1:w[0-9]+]], [{{.*}}[[REG2:x[0-9]+]], [[REG]], lsl #2]<br>
-; CHECK: str [[REG1]], [{{.*}}[[REG2]], [[REG]], lsl #2]<br>
+; CHECK: mov [[REG3:x[0-9]+]], [[REG2]]<br>
+; CHECK: str [[REG1]], [{{.*}}[[REG3]], [[REG]], lsl #2]<br>
%shr81 = lshr i32 %xor72, 9<br>
%conv82 = zext i32 %shr81 to i64<br>
%idxprom83 = and i64 %conv82, 255<br>
@@ -39,7 +41,8 @@ define i64 @doubleword(%struct.c* %ctx,<br>
; CHECK-LABEL: doubleword:<br>
; CHECK: ubfx [[REG:x[0-9]+]], x1, #9, #8<br>
; CHECK: ldr [[REG1:x[0-9]+]], [{{.*}}[[REG2:x[0-9]+]], [[REG]], lsl #3]<br>
-; CHECK: str [[REG1]], [{{.*}}[[REG2]], [[REG]], lsl #3]<br>
+; CHECK: mov [[REG3:x[0-9]+]], [[REG2]]<br>
+; CHECK: str [[REG1]], [{{.*}}[[REG3]], [[REG]], lsl #3]<br>
%shr81 = lshr i32 %xor72, 9<br>
%conv82 = zext i32 %shr81 to i64<br>
%idxprom83 = and i64 %conv82, 255<br>
<br>
Modified: llvm/trunk/test/CodeGen/<wbr>AArch64/arm64-AdvSIMD-Scalar.<wbr>ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-AdvSIMD-Scalar.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-<wbr>project/llvm/trunk/test/<wbr>CodeGen/AArch64/arm64-AdvSIMD-<wbr>Scalar.ll?rev=311038&r1=<wbr>311037&r2=311038&view=diff</a><br>
==============================<wbr>==============================<wbr>==================<br>
--- llvm/trunk/test/CodeGen/<wbr>AArch64/arm64-AdvSIMD-Scalar.<wbr>ll (original)<br>
+++ llvm/trunk/test/CodeGen/<wbr>AArch64/arm64-AdvSIMD-Scalar.<wbr>ll Wed Aug 16 13:50:01 2017<br>
@@ -8,15 +8,9 @@ define <2 x i64> @bar(<2 x i64> %a, <2 x<br>
; CHECK: add.2d v[[REG:[0-9]+]], v0, v1<br>
; CHECK: add d[[REG3:[0-9]+]], d[[REG]], d1<br>
; CHECK: sub d[[REG2:[0-9]+]], d[[REG]], d1<br>
-; Without advanced copy optimization, we end up with cross register<br>
-; banks copies that cannot be coalesced.<br>
-; CHECK-NOOPT: fmov [[COPY_REG3:x[0-9]+]], d[[REG3]]<br>
-; With advanced copy optimization, we end up with just one copy<br>
-; to insert the computed high part into the V register.<br>
-; CHECK-OPT-NOT: fmov<br>
+; CHECK-NOT: fmov<br>
; CHECK: fmov [[COPY_REG2:x[0-9]+]], d[[REG2]]<br>
-; CHECK-NOOPT: fmov d0, [[COPY_REG3]]<br>
-; CHECK-OPT-NOT: fmov<br>
+; CHECK-NOT: fmov<br>
; CHECK: ins.d v0[1], [[COPY_REG2]]<br>
; CHECK-NEXT: ret<br>
;<br>
@@ -24,11 +18,9 @@ define <2 x i64> @bar(<2 x i64> %a, <2 x<br>
; GENERIC: add v[[REG:[0-9]+]].2d, v0.2d, v1.2d<br>
; GENERIC: add d[[REG3:[0-9]+]], d[[REG]], d1<br>
; GENERIC: sub d[[REG2:[0-9]+]], d[[REG]], d1<br>
-; GENERIC-NOOPT: fmov [[COPY_REG3:x[0-9]+]], d[[REG3]]<br>
-; GENERIC-OPT-NOT: fmov<br>
+; GENERIC-NOT: fmov<br>
; GENERIC: fmov [[COPY_REG2:x[0-9]+]], d[[REG2]]<br>
-; GENERIC-NOOPT: fmov d0, [[COPY_REG3]]<br>
-; GENERIC-OPT-NOT: fmov<br>
+; GENERIC-NOT: fmov<br>
; GENERIC: ins v0.d[1], [[COPY_REG2]]<br>
; GENERIC-NEXT: ret<br>
%add = add <2 x i64> %a, %b<br>
<br>
Modified: llvm/trunk/test/CodeGen/<wbr>AArch64/arm64-zero-cycle-<wbr>regmov.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-zero-cycle-regmov.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-<wbr>project/llvm/trunk/test/<wbr>CodeGen/AArch64/arm64-zero-<wbr>cycle-regmov.ll?rev=311038&r1=<wbr>311037&r2=311038&view=diff</a><br>
==============================<wbr>==============================<wbr>==================<br>
--- llvm/trunk/test/CodeGen/<wbr>AArch64/arm64-zero-cycle-<wbr>regmov.ll (original)<br>
+++ llvm/trunk/test/CodeGen/<wbr>AArch64/arm64-zero-cycle-<wbr>regmov.ll Wed Aug 16 13:50:01 2017<br>
@@ -4,8 +4,10 @@<br>
define i32 @t(i32 %a, i32 %b, i32 %c, i32 %d) nounwind ssp {<br>
entry:<br>
; CHECK-LABEL: t:<br>
-; CHECK: mov x0, [[REG1:x[0-9]+]]<br>
-; CHECK: mov x1, [[REG2:x[0-9]+]]<br>
+; CHECK: mov [[REG2:x[0-9]+]], x3<br>
+; CHECK: mov [[REG1:x[0-9]+]], x2<br>
+; CHECK: mov x0, x2<br>
+; CHECK: mov x1, x3<br>
; CHECK: bl _foo<br>
; CHECK: mov x0, [[REG1]]<br>
; CHECK: mov x1, [[REG2]]<br>
<br>
Modified: llvm/trunk/test/CodeGen/<wbr>AArch64/f16-instructions.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/f16-instructions.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-<wbr>project/llvm/trunk/test/<wbr>CodeGen/AArch64/f16-<wbr>instructions.ll?rev=311038&r1=<wbr>311037&r2=311038&view=diff</a><br>
==============================<wbr>==============================<wbr>==================<br>
--- llvm/trunk/test/CodeGen/<wbr>AArch64/f16-instructions.ll (original)<br>
+++ llvm/trunk/test/CodeGen/<wbr>AArch64/f16-instructions.ll Wed Aug 16 13:50:01 2017<br>
@@ -350,7 +350,7 @@ else:<br>
<br>
; CHECK-LABEL: test_phi:<br>
; CHECK: mov x[[PTR:[0-9]+]], x0<br>
-; CHECK: ldr h[[AB:[0-9]+]], [x[[PTR]]]<br>
+; CHECK: ldr h[[AB:[0-9]+]], [x0]<br>
; CHECK: [[LOOP:LBB[0-9_]+]]:<br>
; CHECK: mov.16b v[[R:[0-9]+]], v[[AB]]<br>
; CHECK: ldr h[[AB]], [x[[PTR]]]<br>
<br>
Modified: llvm/trunk/test/CodeGen/<wbr>AArch64/flags-multiuse.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/flags-multiuse.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-<wbr>project/llvm/trunk/test/<wbr>CodeGen/AArch64/flags-<wbr>multiuse.ll?rev=311038&r1=<wbr>311037&r2=311038&view=diff</a><br>
==============================<wbr>==============================<wbr>==================<br>
--- llvm/trunk/test/CodeGen/<wbr>AArch64/flags-multiuse.ll (original)<br>
+++ llvm/trunk/test/CodeGen/<wbr>AArch64/flags-multiuse.ll Wed Aug 16 13:50:01 2017<br>
@@ -17,6 +17,9 @@ define i32 @test_multiflag(i32 %n, i32 %<br>
%val = zext i1 %test to i32<br>
; CHECK: cset {{[xw][0-9]+}}, ne<br>
<br>
+; CHECK: mov [[RHSCOPY:w[0-9]+]], [[RHS]]<br>
+; CHECK: mov [[LHSCOPY:w[0-9]+]], [[LHS]]<br>
+<br>
store i32 %val, i32* @var<br>
<br>
call void @bar()<br>
@@ -25,7 +28,7 @@ define i32 @test_multiflag(i32 %n, i32 %<br>
; Currently, the comparison is emitted again. An MSR/MRS pair would also be<br>
; acceptable, but assuming the call preserves NZCV is not.<br>
br i1 %test, label %iftrue, label %iffalse<br>
-; CHECK: cmp [[LHS]], [[RHS]]<br>
+; CHECK: cmp [[LHSCOPY]], [[RHSCOPY]]<br>
; CHECK: b.eq<br>
<br>
iftrue:<br>
<br>
Modified: llvm/trunk/test/CodeGen/<wbr>AArch64/merge-store-<wbr>dependency.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/merge-store-dependency.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-<wbr>project/llvm/trunk/test/<wbr>CodeGen/AArch64/merge-store-<wbr>dependency.ll?rev=311038&r1=<wbr>311037&r2=311038&view=diff</a><br>
==============================<wbr>==============================<wbr>==================<br>
--- llvm/trunk/test/CodeGen/<wbr>AArch64/merge-store-<wbr>dependency.ll (original)<br>
+++ llvm/trunk/test/CodeGen/<wbr>AArch64/merge-store-<wbr>dependency.ll Wed Aug 16 13:50:01 2017<br>
@@ -8,10 +8,9 @@<br>
define void @test(%struct1* %fde, i32 %fd, void (i32, i32, i8*)* %func, i8* %arg) {<br>
;CHECK-LABEL: test<br>
entry:<br>
-; A53: mov [[DATA:w[0-9]+]], w1<br>
; A53: str q{{[0-9]+}}, {{.*}}<br>
; A53: str q{{[0-9]+}}, {{.*}}<br>
-; A53: str [[DATA]], {{.*}}<br>
+; A53: str w1, {{.*}}<br>
<br>
%0 = bitcast %struct1* %fde to i8*<br>
tail call void @llvm.memset.p0i8.i64(i8* %0, i8 0, i64 40, i32 8, i1 false)<br>
<br>
Modified: llvm/trunk/test/CodeGen/<wbr>AArch64/neg-imm.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/neg-imm.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-<wbr>project/llvm/trunk/test/<wbr>CodeGen/AArch64/neg-imm.ll?<wbr>rev=311038&r1=311037&r2=<wbr>311038&view=diff</a><br>
==============================<wbr>==============================<wbr>==================<br>
--- llvm/trunk/test/CodeGen/<wbr>AArch64/neg-imm.ll (original)<br>
+++ llvm/trunk/test/CodeGen/<wbr>AArch64/neg-imm.ll Wed Aug 16 13:50:01 2017<br>
@@ -7,8 +7,8 @@ declare void @foo(i32)<br>
define void @test(i32 %px) {<br>
; CHECK_LABEL: test:<br>
; CHECK_LABEL: %entry<br>
-; CHECK: subs<br>
-; CHECK-NEXT: csel<br>
+; CHECK: subs [[REG0:w[0-9]+]],<br>
+; CHECK: csel {{w[0-9]+}}, wzr, [[REG0]]<br>
entry:<br>
%sub = add nsw i32 %px, -1<br>
%cmp = icmp slt i32 %px, 1<br>
<br>
Modified: llvm/trunk/test/CodeGen/<wbr>AMDGPU/byval-frame-setup.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/byval-frame-setup.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-<wbr>project/llvm/trunk/test/<wbr>CodeGen/AMDGPU/byval-frame-<wbr>setup.ll?rev=311038&r1=311037&<wbr>r2=311038&view=diff</a><br>
==============================<wbr>==============================<wbr>==================<br>
--- llvm/trunk/test/CodeGen/<wbr>AMDGPU/byval-frame-setup.ll (original)<br>
+++ llvm/trunk/test/CodeGen/<wbr>AMDGPU/byval-frame-setup.ll Wed Aug 16 13:50:01 2017<br>
@@ -127,20 +127,21 @@ entry:<br>
}<br>
<br>
; GCN-LABEL: {{^}}call_void_func_byval_<wbr>struct_kernel:<br>
-; GCN: s_mov_b32 s33, s7<br>
-; GCN: s_add_u32 s32, s33, 0xa00{{$}}<br>
+; GCN: s_add_u32 s32, s7, 0xa00{{$}}<br>
<br>
; GCN-DAG: v_mov_b32_e32 [[NINE:v[0-9]+]], 9<br>
; GCN-DAG: v_mov_b32_e32 [[THIRTEEN:v[0-9]+]], 13<br>
-; GCN-DAG: buffer_store_dword [[NINE]], off, s[0:3], s33 offset:8<br>
-; GCN: buffer_store_dword [[THIRTEEN]], off, s[0:3], s33 offset:24<br>
+; GCN-DAG: buffer_store_dword [[NINE]], off, s[0:3], s7 offset:8<br>
+; GCN: buffer_store_dword [[THIRTEEN]], off, s[0:3], s7 offset:24<br>
+<br>
+; GCN: s_mov_b32 s33, s7<br>
<br>
; GCN-DAG: s_add_u32 s32, s32, 0x800{{$}}<br>
<br>
-; GCN-DAG: buffer_load_dword [[LOAD0:v[0-9]+]], off, s[0:3], s33 offset:8<br>
-; GCN-DAG: buffer_load_dword [[LOAD1:v[0-9]+]], off, s[0:3], s33 offset:12<br>
-; GCN-DAG: buffer_load_dword [[LOAD2:v[0-9]+]], off, s[0:3], s33 offset:16<br>
-; GCN-DAG: buffer_load_dword [[LOAD3:v[0-9]+]], off, s[0:3], s33 offset:20<br>
+; GCN-DAG: buffer_load_dword [[LOAD0:v[0-9]+]], off, s[0:3], s{{7|33}} offset:8<br>
+; GCN-DAG: buffer_load_dword [[LOAD1:v[0-9]+]], off, s[0:3], s{{7|33}} offset:12<br>
+; GCN-DAG: buffer_load_dword [[LOAD2:v[0-9]+]], off, s[0:3], s{{7|33}} offset:16<br>
+; GCN-DAG: buffer_load_dword [[LOAD3:v[0-9]+]], off, s[0:3], s{{7|33}} offset:20<br>
<br>
; GCN-DAG: buffer_store_dword [[LOAD0]], off, s[0:3], s32 offset:4{{$}}<br>
; GCN-DAG: buffer_store_dword [[LOAD1]], off, s[0:3], s32 offset:8<br>
<br>
Modified: llvm/trunk/test/CodeGen/<wbr>AMDGPU/call-argument-types.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/call-argument-types.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-<wbr>project/llvm/trunk/test/<wbr>CodeGen/AMDGPU/call-argument-<wbr>types.ll?rev=311038&r1=311037&<wbr>r2=311038&view=diff</a><br>
==============================<wbr>==============================<wbr>==================<br>
--- llvm/trunk/test/CodeGen/<wbr>AMDGPU/call-argument-types.ll (original)<br>
+++ llvm/trunk/test/CodeGen/<wbr>AMDGPU/call-argument-types.ll Wed Aug 16 13:50:01 2017<br>
@@ -65,17 +65,17 @@ define amdgpu_kernel void @test_call_ext<br>
<br>
; GCN-LABEL: {{^}}test_call_external_void_<wbr>func_i1_signext:<br>
; MESA: s_mov_b32 s33, s3{{$}}<br>
+; MESA-DAG: s_mov_b32 s32, s3{{$}}<br>
; HSA: s_mov_b32 s33, s9{{$}}<br>
+; HSA: s_mov_b32 s32, s9{{$}}<br>
<br>
; GCN: s_getpc_b64 s{{\[}}[[PC_LO:[0-9]+]]:[[PC_<wbr>HI:[0-9]+]]{{\]}}<br>
; GCN-NEXT: s_add_u32 s[[PC_LO]], s[[PC_LO]], external_void_func_i1_signext@<wbr>rel32@lo+4<br>
; GCN-NEXT: s_addc_u32 s[[PC_HI]], s[[PC_HI]], external_void_func_i1_signext@<wbr>rel32@hi+4<br>
; GCN-NEXT: buffer_load_ubyte [[VAR:v[0-9]+]]<br>
; HSA-NEXT: s_mov_b32 s4, s33<br>
-; HSA-NEXT: s_mov_b32 s32, s33<br>
<br>
; MESA-DAG: s_mov_b32 s4, s33{{$}}<br>
-; MESA-DAG: s_mov_b32 s32, s33{{$}}<br>
<br>
; GCN: s_waitcnt vmcnt(0)<br>
; GCN-NEXT: v_bfe_i32 v0, v0, 0, 1<br>
@@ -90,6 +90,8 @@ define amdgpu_kernel void @test_call_ext<br>
; FIXME: load should be scheduled before getpc<br>
; GCN-LABEL: {{^}}test_call_external_void_<wbr>func_i1_zeroext:<br>
; MESA: s_mov_b32 s33, s3{{$}}<br>
+; MESA: s_mov_b32 s32, s3{{$}}<br>
+; HSA: s_mov_b32 s32, s9{{$}}<br>
<br>
; GCN: s_getpc_b64 s{{\[}}[[PC_LO:[0-9]+]]:[[PC_<wbr>HI:[0-9]+]]{{\]}}<br>
; GCN-NEXT: s_add_u32 s[[PC_LO]], s[[PC_LO]], external_void_func_i1_zeroext@<wbr>rel32@lo+4<br>
@@ -97,7 +99,6 @@ define amdgpu_kernel void @test_call_ext<br>
; GCN-NEXT: buffer_load_ubyte v0<br>
<br>
; GCN-DAG: s_mov_b32 s4, s33{{$}}<br>
-; GCN-DAG: s_mov_b32 s32, s33{{$}}<br>
<br>
; GCN: s_waitcnt vmcnt(0)<br>
; GCN-NEXT: v_and_b32_e32 v0, 1, v0<br>
@@ -111,14 +112,15 @@ define amdgpu_kernel void @test_call_ext<br>
<br>
; GCN-LABEL: {{^}}test_call_external_void_<wbr>func_i8_imm:<br>
; MESA-DAG: s_mov_b32 s33, s3{{$}}<br>
+; MESA: s_mov_b32 s32, s3{{$}}<br>
<br>
; GCN: s_getpc_b64 s{{\[}}[[PC_LO:[0-9]+]]:[[PC_<wbr>HI:[0-9]+]]{{\]}}<br>
; GCN-NEXT: s_add_u32 s[[PC_LO]], s[[PC_LO]], external_void_func_i8@rel32@<wbr>lo+4<br>
; GCN-NEXT: s_addc_u32 s[[PC_HI]], s[[PC_HI]], external_void_func_i8@rel32@<wbr>hi+4<br>
; GCN-NEXT: v_mov_b32_e32 v0, 0x7b<br>
<br>
-; HSA-DAG: s_mov_b32 s4, s33{{$}}<br>
-; GCN-DAG: s_mov_b32 s32, s33{{$}}<br>
+; HSA-DAG: s_mov_b32 s4, s9{{$}}<br>
+; HSA-DAG: s_mov_b32 s32, s9{{$}}<br>
<br>
; GCN: s_swappc_b64 s[30:31], s{{\[}}[[PC_LO]]:[[PC_HI]]{{\]<wbr>}}<br>
; GCN-NEXT: s_endpgm<br>
@@ -129,16 +131,17 @@ define amdgpu_kernel void @test_call_ext<br>
<br>
; FIXME: don't wait before call<br>
; GCN-LABEL: {{^}}test_call_external_void_<wbr>func_i8_signext:<br>
-; HSA-DAG: s_mov_b32 s33, s9{{$}}<br>
; MESA-DAG: s_mov_b32 s33, s3{{$}}<br>
+; MESA-DAG: s_mov_b32 s32, s3{{$}}<br>
<br>
; GCN-DAG: buffer_load_sbyte v0<br>
; GCN: s_getpc_b64 s{{\[}}[[PC_LO:[0-9]+]]:[[PC_<wbr>HI:[0-9]+]]{{\]}}<br>
; GCN-NEXT: s_add_u32 s[[PC_LO]], s[[PC_LO]], external_void_func_i8_signext@<wbr>rel32@lo+4<br>
; GCN-NEXT: s_addc_u32 s[[PC_HI]], s[[PC_HI]], external_void_func_i8_signext@<wbr>rel32@hi+4<br>
<br>
-; GCN-DAG: s_mov_b32 s4, s33<br>
-; GCN-DAG: s_mov_b32 s32, s3<br>
+; MESA-DAG: s_mov_b32 s4, s33<br>
+; HSA-DAG: s_mov_b32 s4, s9<br>
+; HSA-DAG: s_mov_b32 s32, s9{{$}}<br>
<br>
; GCN: s_waitcnt vmcnt(0)<br>
; GCN-NEXT: s_swappc_b64 s[30:31], s{{\[}}[[PC_LO]]:[[PC_HI]]{{\]<wbr>}}<br>
@@ -151,15 +154,16 @@ define amdgpu_kernel void @test_call_ext<br>
<br>
; GCN-LABEL: {{^}}test_call_external_void_<wbr>func_i8_zeroext:<br>
; MESA-DAG: s_mov_b32 s33, s3{{$}}<br>
-; HSA-DAG: s_mov_b32 s33, s9{{$}}<br>
+; MESA-DAG: s_mov_b32 s32, s3{{$}}<br>
<br>
; GCN-DAG: buffer_load_ubyte v0<br>
; GCN: s_getpc_b64 s{{\[}}[[PC_LO:[0-9]+]]:[[PC_<wbr>HI:[0-9]+]]{{\]}}<br>
; GCN-NEXT: s_add_u32 s[[PC_LO]], s[[PC_LO]], external_void_func_i8_zeroext@<wbr>rel32@lo+4<br>
; GCN-NEXT: s_addc_u32 s[[PC_HI]], s[[PC_HI]], external_void_func_i8_zeroext@<wbr>rel32@hi+4<br>
<br>
-; GCN-DAG: s_mov_b32 s4, s33<br>
-; GCN-DAG: s_mov_b32 s32, s33<br>
+; MESA-DAG: s_mov_b32 s4, s33<br>
+; HSA-DAG: s_mov_b32 s4, s9<br>
+; HSA-DAG: s_mov_b32 s32, s9<br>
<br>
; GCN: s_waitcnt vmcnt(0)<br>
; GCN-NEXT: s_swappc_b64 s[30:31], s{{\[}}[[PC_LO]]:[[PC_HI]]{{\]<wbr>}}<br>
@@ -174,7 +178,8 @@ define amdgpu_kernel void @test_call_ext<br>
; GCN-DAG: v_mov_b32_e32 v0, 0x7b{{$}}<br>
<br>
; GCN-DAG: s_mov_b32 s4, s33<br>
-; GCN-DAG: s_mov_b32 s32, s33<br>
+; MESA-DAG: s_mov_b32 s32, s1<br>
+; HSA-DAG: s_mov_b32 s32, s7<br>
<br>
; GCN: s_swappc_b64<br>
define amdgpu_kernel void @test_call_external_void_func_<wbr>i16_imm() #0 {<br>
@@ -184,14 +189,16 @@ define amdgpu_kernel void @test_call_ext<br>
<br>
; GCN-LABEL: {{^}}test_call_external_void_<wbr>func_i16_signext:<br>
; MESA-DAG: s_mov_b32 s33, s3{{$}}<br>
+; MESA-DAG: s_mov_b32 s32, s3{{$}}<br>
<br>
; GCN-DAG: buffer_load_sshort v0<br>
; GCN: s_getpc_b64 s{{\[}}[[PC_LO:[0-9]+]]:[[PC_<wbr>HI:[0-9]+]]{{\]}}<br>
; GCN-NEXT: s_add_u32 s[[PC_LO]], s[[PC_LO]], external_void_func_i16_<wbr>signext@rel32@lo+4<br>
; GCN-NEXT: s_addc_u32 s[[PC_HI]], s[[PC_HI]], external_void_func_i16_<wbr>signext@rel32@hi+4<br>
<br>
-; GCN-DAG: s_mov_b32 s4, s33<br>
-; GCN-DAG: s_mov_b32 s32, s33<br>
+; MESA-DAG: s_mov_b32 s4, s33<br>
+; HSA-DAG: s_mov_b32 s4, s9<br>
+; HSA-DAG: s_mov_b32 s32, s9<br>
<br>
; GCN: s_waitcnt vmcnt(0)<br>
; GCN-NEXT: s_swappc_b64 s[30:31], s{{\[}}[[PC_LO]]:[[PC_HI]]{{\]<wbr>}}<br>
@@ -204,6 +211,7 @@ define amdgpu_kernel void @test_call_ext<br>
<br>
; GCN-LABEL: {{^}}test_call_external_void_<wbr>func_i16_zeroext:<br>
; MESA-DAG: s_mov_b32 s33, s3{{$}}<br>
+; MESA-DAG: s_mov_b32 s32, s3{{$}}<br>
<br>
<br>
; GCN-DAG: buffer_load_ushort v0<br>
@@ -211,8 +219,9 @@ define amdgpu_kernel void @test_call_ext<br>
; GCN-NEXT: s_add_u32 s[[PC_LO]], s[[PC_LO]], external_void_func_i16_<wbr>zeroext@rel32@lo+4<br>
; GCN-NEXT: s_addc_u32 s[[PC_HI]], s[[PC_HI]], external_void_func_i16_<wbr>zeroext@rel32@hi+4<br>
<br>
-; GCN-DAG: s_mov_b32 s4, s33<br>
-; GCN-DAG: s_mov_b32 s32, s33<br>
+; MESA-DAG: s_mov_b32 s4, s33<br>
+; HSA-DAG: s_mov_b32 s4, s9<br>
+; HSA-DAG: s_mov_b32 s32, s9<br>
<br>
; GCN: s_waitcnt vmcnt(0)<br>
; GCN-NEXT: s_swappc_b64 s[30:31], s{{\[}}[[PC_LO]]:[[PC_HI]]{{\]<wbr>}}<br>
@@ -225,13 +234,15 @@ define amdgpu_kernel void @test_call_ext<br>
<br>
; GCN-LABEL: {{^}}test_call_external_void_<wbr>func_i32_imm:<br>
; MESA-DAG: s_mov_b32 s33, s3{{$}}<br>
+; MESA-DAG: s_mov_b32 s32, s3{{$}}<br>
<br>
; GCN: s_getpc_b64 s{{\[}}[[PC_LO:[0-9]+]]:[[PC_<wbr>HI:[0-9]+]]{{\]}}<br>
; GCN-NEXT: s_add_u32 s[[PC_LO]], s[[PC_LO]], external_void_func_i32@rel32@<wbr>lo+4<br>
; GCN-NEXT: s_addc_u32 s[[PC_HI]], s[[PC_HI]], external_void_func_i32@rel32@<wbr>hi+4<br>
; GCN: v_mov_b32_e32 v0, 42<br>
-; GCN-DAG: s_mov_b32 s4, s33<br>
-; GCN-DAG: s_mov_b32 s32, s33<br>
+; MESA-DAG: s_mov_b32 s4, s33<br>
+; HSA-DAG: s_mov_b32 s4, s9<br>
+; HSA-DAG: s_mov_b32 s32, s9<br>
<br>
; GCN: s_swappc_b64 s[30:31], s{{\[}}[[PC_LO]]:[[PC_HI]]{{\]<wbr>}}<br>
; GCN-NEXT: s_endpgm<br>
@@ -384,11 +395,10 @@ define amdgpu_kernel void @test_call_ext<br>
}<br>
<br>
; GCN-LABEL: {{^}}test_call_external_void_<wbr>func_v32i32_i32:<br>
-; HSA-DAG: s_mov_b32 s33, s9<br>
-; HSA-DAG: s_add_u32 [[SP_REG:s[0-9]+]], s33, 0x100{{$}}<br>
+; HSA-DAG: s_add_u32 [[SP_REG:s[0-9]+]], s9, 0x100{{$}}<br>
<br>
; MESA-DAG: s_mov_b32 s33, s3{{$}}<br>
-; MESA-DAG: s_add_u32 [[SP_REG:s[0-9]+]], s33, 0x100{{$}}<br>
+; MESA-DAG: s_add_u32 [[SP_REG:s[0-9]+]], s3, 0x100{{$}}<br>
<br>
; GCN-DAG: buffer_load_dword [[VAL1:v[0-9]+]], off, s[{{[0-9]+}}:{{[0-9]+}}], 0{{$}}<br>
; GCN-DAG: buffer_load_dwordx4 v[0:3], off<br>
@@ -437,27 +447,29 @@ define amdgpu_kernel void @test_call_ext<br>
}<br>
<br>
; GCN-LABEL: {{^}}test_call_external_void_<wbr>func_byval_struct_i8_i32:<br>
-; GCN-DAG: s_add_u32 [[SP:s[0-9]+]], s33, 0x400{{$}}<br>
+; MESA-DAG: s_add_u32 [[SP:s[0-9]+]], s1, 0x400{{$}}<br>
+; HSA-DAG: s_add_u32 [[SP:s[0-9]+]], s7, 0x400{{$}}<br>
<br>
; GCN-DAG: v_mov_b32_e32 [[VAL0:v[0-9]+]], 3<br>
; GCN-DAG: v_mov_b32_e32 [[VAL1:v[0-9]+]], 8<br>
-; MESA-DAG: buffer_store_byte [[VAL0]], off, s[36:39], s33 offset:8<br>
-; MESA-DAG: buffer_store_dword [[VAL1]], off, s[36:39], s33 offset:12<br>
+; MESA-DAG: buffer_store_byte [[VAL0]], off, s[36:39], s1 offset:8<br>
+; MESA-DAG: buffer_store_dword [[VAL1]], off, s[36:39], s1 offset:12<br>
<br>
-; HSA-DAG: buffer_store_byte [[VAL0]], off, s[0:3], s33 offset:8<br>
-; HSA-DAG: buffer_store_dword [[VAL1]], off, s[0:3], s33 offset:12<br>
+; HSA-DAG: s_mov_b32 s33, s7<br>
+; HSA-DAG: buffer_store_byte [[VAL0]], off, s[0:3], s{{7|33}} offset:8<br>
+; HSA-DAG: buffer_store_dword [[VAL1]], off, s[0:3], s{{7|33}} offset:12<br>
<br>
; GCN: s_add_u32 [[SP]], [[SP]], 0x200<br>
<br>
-; HSA: buffer_load_dword [[RELOAD_VAL0:v[0-9]+]], off, s[0:3], s33 offset:8<br>
-; HSA: buffer_load_dword [[RELOAD_VAL1:v[0-9]+]], off, s[0:3], s33 offset:12<br>
+; HSA: buffer_load_dword [[RELOAD_VAL0:v[0-9]+]], off, s[0:3], s{{7|33}} offset:8<br>
+; HSA: buffer_load_dword [[RELOAD_VAL1:v[0-9]+]], off, s[0:3], s{{7|33}} offset:12<br>
<br>
; HSA: buffer_store_dword [[RELOAD_VAL1]], off, s[0:3], [[SP]] offset:8<br>
; HSA: buffer_store_dword [[RELOAD_VAL0]], off, s[0:3], [[SP]] offset:4<br>
<br>
<br>
-; MESA: buffer_load_dword [[RELOAD_VAL0:v[0-9]+]], off, s[36:39], s33 offset:8<br>
-; MESA: buffer_load_dword [[RELOAD_VAL1:v[0-9]+]], off, s[36:39], s33 offset:12<br>
+; MESA: buffer_load_dword [[RELOAD_VAL0:v[0-9]+]], off, s[36:39], s1 offset:8<br>
+; MESA: buffer_load_dword [[RELOAD_VAL1:v[0-9]+]], off, s[36:39], s1 offset:12<br>
<br>
; MESA: buffer_store_dword [[RELOAD_VAL1]], off, s[36:39], [[SP]] offset:8<br>
; MESA: buffer_store_dword [[RELOAD_VAL0]], off, s[36:39], [[SP]] offset:4<br>
@@ -490,8 +502,8 @@ define amdgpu_kernel void @test_call_ext<br>
; GCN: buffer_store_dword [[RELOAD_VAL1]], off, s{{\[[0-9]+:[0-9]+\]}}, [[SP]] offset:8<br>
; GCN: buffer_store_dword [[RELOAD_VAL0]], off, s{{\[[0-9]+:[0-9]+\]}}, [[SP]] offset:4<br>
; GCN-NEXT: s_swappc_b64<br>
-; GCN-DAG: buffer_load_ubyte [[LOAD_OUT_VAL0:v[0-9]+]], off, s{{\[[0-9]+:[0-9]+\]}}, [[FP_REG]] offset:16<br>
-; GCN-DAG: buffer_load_dword [[LOAD_OUT_VAL1:v[0-9]+]], off, s{{\[[0-9]+:[0-9]+\]}}, [[FP_REG]] offset:20<br>
+; GCN-DAG: buffer_load_ubyte [[LOAD_OUT_VAL0:v[0-9]+]], off, s{{\[[0-9]+:[0-9]+\]}}, s33 offset:16<br>
+; GCN-DAG: buffer_load_dword [[LOAD_OUT_VAL1:v[0-9]+]], off, s{{\[[0-9]+:[0-9]+\]}}, s33 offset:20<br>
; GCN: s_sub_u32 [[SP]], [[SP]], 0x200<br>
<br>
; GCN: buffer_store_byte [[LOAD_OUT_VAL0]], off<br>
<br>
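Taken together, the updated checks above all encode the same rewrite: an instruction that used to read the copy destination (s33, or s4/s32 set up from it) now reads the copy's original source (s3, s7 or s9 depending on the run line), and where either outcome is acceptable the test matches s{{7|33}}. As a reading aid, here is a minimal standalone C++ sketch of that rewrite; the Inst model and the forwardCopies helper are illustrative inventions for this note, not the data structures of the pass under test.

// Minimal standalone sketch (not the real pass): an instruction is one
// defined register plus a list of read registers, and a copy is a
// single-source instruction flagged IsCopy.  Forwarding walks forward from
// each copy and rewrites later reads of the copy's destination to read the
// copy's source instead, stopping once either register is redefined.
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

struct Inst {
  std::string Dest;              // register defined ("" if none)
  std::vector<std::string> Srcs; // registers read
  bool IsCopy = false;           // true for "Dest = COPY Srcs[0]"
};

static void forwardCopies(std::vector<Inst> &Block) {
  for (std::size_t I = 0; I < Block.size(); ++I) {
    if (!Block[I].IsCopy)
      continue;
    const std::string CopyDst = Block[I].Dest;
    const std::string CopySrc = Block[I].Srcs[0];
    for (std::size_t J = I + 1; J < Block.size(); ++J) {
      // Rewrite reads of the copy destination to the original source.
      for (std::string &Use : Block[J].Srcs)
        if (Use == CopyDst)
          Use = CopySrc;
      // Once either register is redefined the equivalence no longer holds.
      if (Block[J].Dest == CopyDst || Block[J].Dest == CopySrc)
        break;
    }
  }
}

int main() {
  // Mirrors the shape of the AMDGPU checks above: s33 = COPY s9, then the
  // call setup reads s33.  After forwarding, those reads use s9 directly.
  std::vector<Inst> Block = {
      {"s33", {"s9"}, true},   // s33 = COPY s9
      {"s4", {"s33"}, false},  // s4  = ... s33
      {"s32", {"s33"}, false}, // s32 = ... s33
  };
  forwardCopies(Block);
  assert(Block[1].Srcs[0] == "s9" && Block[2].Srcs[0] == "s9");
  return 0;
}

The sketch stops at the first redefinition of either register, which is one reason several tests above still expect the copy instruction itself even though its uses were rewritten: the copied value is still needed later under its own name.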
Modified: llvm/trunk/test/CodeGen/AMDGPU/call-preserved-registers.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/call-preserved-registers.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/AMDGPU/call-preserved-registers.ll (original)<br>
+++ llvm/trunk/test/CodeGen/AMDGPU/call-preserved-registers.ll Wed Aug 16 13:50:01 2017<br>
@@ -5,12 +5,12 @@<br>
declare void @external_void_func_void() #0<br>
<br>
; GCN-LABEL: {{^}}test_kernel_call_<wbr>external_void_func_void_<wbr>clobber_s30_s31_call_external_<wbr>void_func_void:<br>
-; GCN: s_mov_b32 s33, s7<br>
; GCN: s_getpc_b64 s[34:35]<br>
; GCN-NEXT: s_add_u32 s34, s34,<br>
; GCN-NEXT: s_addc_u32 s35, s35,<br>
-; GCN-NEXT: s_mov_b32 s4, s33<br>
-; GCN-NEXT: s_mov_b32 s32, s33<br>
+; GCN-NEXT: s_mov_b32 s4, s7<br>
+; GCN-NEXT: s_mov_b32 s33, s7<br>
+; GCN-NEXT: s_mov_b32 s32, s7<br>
; GCN: s_swappc_b64 s[30:31], s[34:35]<br>
<br>
; GCN-NEXT: s_mov_b32 s4, s33<br>
@@ -112,14 +112,13 @@ define amdgpu_kernel void @test_call_voi<br>
}<br>
<br>
; GCN-LABEL: {{^}}test_call_void_func_void_<wbr>preserves_s33:<br>
-; GCN: s_mov_b32 s34, s9<br>
; GCN: ; def s33<br>
; GCN-NEXT: #ASMEND<br>
; GCN: s_getpc_b64 s[6:7]<br>
; GCN-NEXT: s_add_u32 s6, s6, external_void_func_void@rel32@<wbr>lo+4<br>
; GCN-NEXT: s_addc_u32 s7, s7, external_void_func_void@rel32@<wbr>hi+4<br>
-; GCN-NEXT: s_mov_b32 s4, s34<br>
-; GCN-NEXT: s_mov_b32 s32, s34<br>
+; GCN-NEXT: s_mov_b32 s4, s9<br>
+; GCN-NEXT: s_mov_b32 s32, s9<br>
; GCN-NEXT: s_swappc_b64 s[30:31], s[6:7]<br>
; GCN-NEXT: ;;#ASMSTART<br>
; GCN-NEXT: ; use s33<br>
@@ -133,14 +132,13 @@ define amdgpu_kernel void @test_call_voi<br>
}<br>
<br>
; GCN-LABEL: {{^}}test_call_void_func_void_<wbr>preserves_v32:<br>
-; GCN: s_mov_b32 s33, s9<br>
; GCN: ; def v32<br>
; GCN-NEXT: #ASMEND<br>
; GCN: s_getpc_b64 s[6:7]<br>
; GCN-NEXT: s_add_u32 s6, s6, external_void_func_void@rel32@<wbr>lo+4<br>
; GCN-NEXT: s_addc_u32 s7, s7, external_void_func_void@rel32@<wbr>hi+4<br>
-; GCN-NEXT: s_mov_b32 s4, s33<br>
-; GCN-NEXT: s_mov_b32 s32, s33<br>
+; GCN-NEXT: s_mov_b32 s4, s9<br>
+; GCN-NEXT: s_mov_b32 s32, s9<br>
; GCN-NEXT: s_swappc_b64 s[30:31], s[6:7]<br>
; GCN-NEXT: ;;#ASMSTART<br>
; GCN-NEXT: ; use v32<br>
@@ -167,11 +165,11 @@ define void @void_func_void_clobber_s33(<br>
<br>
; GCN-LABEL: {{^}}test_call_void_func_void_<wbr>clobber_s33:<br>
; GCN: s_mov_b32 s33, s7<br>
+; GCN: s_mov_b32 s32, s7<br>
; GCN: s_getpc_b64<br>
; GCN-NEXT: s_add_u32<br>
; GCN-NEXT: s_addc_u32<br>
; GCN-NEXT: s_mov_b32 s4, s33<br>
-; GCN-NEXT: s_mov_b32 s32, s33<br>
; GCN: s_swappc_b64<br>
; GCN-NEXT: s_endpgm<br>
define amdgpu_kernel void @test_call_void_func_void_<wbr>clobber_s33() #0 {<br>
<br>
Modified: llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-sgprs.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-sgprs.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-sgprs.ll (original)<br>
+++ llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-sgprs.ll Wed Aug 16 13:50:01 2017<br>
@@ -191,11 +191,9 @@ define void @use_workgroup_id_yz() #1 {<br>
; GCN: enable_sgpr_workgroup_id_z = 0<br>
<br>
; GCN-NOT: s6<br>
-; GCN: s_mov_b32 s33, s7<br>
+; GCN: s_mov_b32 s4, s7<br>
; GCN-NOT: s6<br>
-; GCN: s_mov_b32 s4, s33<br>
-; GCN-NOT: s6<br>
-; GCN: s_mov_b32 s32, s33<br>
+; GCN: s_mov_b32 s32, s7<br>
; GCN: s_swappc_b64<br>
define amdgpu_kernel void @kern_indirect_use_workgroup_<wbr>id_x() #1 {<br>
call void @use_workgroup_id_x()<br>
@@ -208,9 +206,9 @@ define amdgpu_kernel void @kern_indirect<br>
; GCN: enable_sgpr_workgroup_id_z = 0<br>
<br>
; GCN: s_mov_b32 s33, s8<br>
+; GCN: s_mov_b32 s32, s8<br>
; GCN: s_mov_b32 s4, s33<br>
; GCN: s_mov_b32 s6, s7<br>
-; GCN: s_mov_b32 s32, s33<br>
; GCN: s_swappc_b64<br>
define amdgpu_kernel void @kern_indirect_use_workgroup_<wbr>id_y() #1 {<br>
call void @use_workgroup_id_y()<br>
@@ -239,10 +237,10 @@ define amdgpu_kernel void @kern_indirect<br>
; GCN: s_mov_b32 s33, s8<br>
; GCN-NOT: s6<br>
; GCN-NOT: s7<br>
-; GCN: s_mov_b32 s4, s33<br>
+; GCN: s_mov_b32 s32, s8<br>
; GCN-NOT: s6<br>
; GCN-NOT: s7<br>
-; GCN: s_mov_b32 s32, s33<br>
+; GCN: s_mov_b32 s4, s33<br>
; GCN-NOT: s6<br>
; GCN-NOT: s7<br>
; GCN: s_swappc_b64<br>
@@ -256,19 +254,17 @@ define amdgpu_kernel void @kern_indirect<br>
; GCN: enable_sgpr_workgroup_id_y = 1<br>
; GCN: enable_sgpr_workgroup_id_z = 1<br>
<br>
-; GCN: s_mov_b32 s33, s9<br>
-<br>
; GCN-NOT: s6<br>
; GCN-NOT: s7<br>
; GCN-NOT: s8<br>
<br>
-; GCN: s_mov_b32 s4, s33<br>
+; GCN: s_mov_b32 s4, s9<br>
<br>
; GCN-NOT: s6<br>
; GCN-NOT: s7<br>
; GCN-NOT: s8<br>
<br>
-; GCN: s_mov_b32 s32, s33<br>
+; GCN: s_mov_b32 s32, s9<br>
<br>
; GCN-NOT: s6<br>
; GCN-NOT: s7<br>
@@ -289,11 +285,11 @@ define amdgpu_kernel void @kern_indirect<br>
; GCN-NOT: s6<br>
; GCN-NOT: s7<br>
<br>
-; GCN: s_mov_b32 s4, s33<br>
+; GCN: s_mov_b32 s32, s8<br>
; GCN-NOT: s6<br>
; GCN-NOT: s7<br>
<br>
-; GCN: s_mov_b32 s32, s33<br>
+; GCN: s_mov_b32 s4, s33<br>
; GCN-NOT: s6<br>
; GCN-NOT: s7<br>
<br>
@@ -308,11 +304,10 @@ define amdgpu_kernel void @kern_indirect<br>
; GCN: enable_sgpr_workgroup_id_y = 1<br>
; GCN: enable_sgpr_workgroup_id_z = 1<br>
<br>
-; GCN: s_mov_b32 s33, s9<br>
; GCN: s_mov_b32 s6, s7<br>
-; GCN: s_mov_b32 s4, s33<br>
+; GCN: s_mov_b32 s4, s9<br>
; GCN: s_mov_b32 s7, s8<br>
-; GCN: s_mov_b32 s32, s33<br>
+; GCN: s_mov_b32 s32, s9<br>
; GCN: s_swappc_b64<br>
define amdgpu_kernel void @kern_indirect_use_workgroup_<wbr>id_yz() #1 {<br>
call void @use_workgroup_id_yz()<br>
@@ -376,13 +371,12 @@ define void @other_arg_use_workgroup_id_<br>
; GCN: enable_sgpr_workgroup_id_y = 0<br>
; GCN: enable_sgpr_workgroup_id_z = 0<br>
<br>
-; GCN-DAG: s_mov_b32 s33, s7<br>
; GCN-DAG: v_mov_b32_e32 v0, 0x22b<br>
<br>
; GCN-NOT: s6<br>
-; GCN: s_mov_b32 s4, s33<br>
+; GCN: s_mov_b32 s4, s7<br>
; GCN-NOT: s6<br>
-; GCN-DAG: s_mov_b32 s32, s33<br>
+; GCN-DAG: s_mov_b32 s32, s7<br>
; GCN: s_swappc_b64<br>
define amdgpu_kernel void @kern_indirect_other_arg_use_<wbr>workgroup_id_x() #1 {<br>
call void @other_arg_use_workgroup_id_x(<wbr>i32 555)<br>
@@ -395,10 +389,10 @@ define amdgpu_kernel void @kern_indirect<br>
; GCN: enable_sgpr_workgroup_id_z = 0<br>
<br>
; GCN-DAG: s_mov_b32 s33, s8<br>
+; GCN-DAG: s_mov_b32 s32, s8<br>
; GCN-DAG: v_mov_b32_e32 v0, 0x22b<br>
; GCN: s_mov_b32 s4, s33<br>
; GCN-DAG: s_mov_b32 s6, s7<br>
-; GCN-DAG: s_mov_b32 s32, s33<br>
; GCN: s_swappc_b64<br>
define amdgpu_kernel void @kern_indirect_other_arg_use_<wbr>workgroup_id_y() #1 {<br>
call void @other_arg_use_workgroup_id_y(<wbr>i32 555)<br>
@@ -411,11 +405,11 @@ define amdgpu_kernel void @kern_indirect<br>
; GCN: enable_sgpr_workgroup_id_z = 1<br>
<br>
; GCN: s_mov_b32 s33, s8<br>
+; GCN: s_mov_b32 s32, s8<br>
; GCN-DAG: v_mov_b32_e32 v0, 0x22b<br>
; GCN: s_mov_b32 s4, s33<br>
; GCN-DAG: s_mov_b32 s6, s7<br>
<br>
-; GCN: s_mov_b32 s32, s33<br>
; GCN: s_swappc_b64<br>
define amdgpu_kernel void @kern_indirect_other_arg_use_<wbr>workgroup_id_z() #1 {<br>
call void @other_arg_use_workgroup_id_z(<wbr>i32 555)<br>
@@ -475,13 +469,12 @@ define void @use_every_sgpr_input() #1 {<br>
; GCN: enable_sgpr_dispatch_id = 1<br>
; GCN: enable_sgpr_flat_scratch_init = 1<br>
<br>
-; GCN: s_mov_b32 s33, s17<br>
; GCN: s_mov_b64 s[12:13], s[10:11]<br>
; GCN: s_mov_b64 s[10:11], s[8:9]<br>
; GCN: s_mov_b64 s[8:9], s[6:7]<br>
; GCN: s_mov_b64 s[6:7], s[4:5]<br>
-; GCN: s_mov_b32 s4, s33<br>
-; GCN: s_mov_b32 s32, s33<br>
+; GCN: s_mov_b32 s4, s17<br>
+; GCN: s_mov_b32 s32, s17<br>
; GCN: s_swappc_b64<br>
define amdgpu_kernel void @kern_indirect_use_every_sgpr_<wbr>input() #1 {<br>
call void @use_every_sgpr_input()<br>
@@ -547,16 +540,18 @@ define void @func_use_every_sgpr_input_c<br>
; GCN: s_mov_b32 s5, s32<br>
; GCN: s_add_u32 s32, s32, 0x300<br>
<br>
-; GCN-DAG: s_mov_b32 [[SAVE_X:s[0-9]+]], s14<br>
-; GCN-DAG: s_mov_b32 [[SAVE_Y:s[0-9]+]], s15<br>
-; GCN-DAG: s_mov_b32 [[SAVE_Z:s[0-9]+]], s16<br>
-; GCN-DAG: s_mov_b64 {{s\[[0-9]+:[0-9]+\]}}, s[6:7]<br>
-; GCN-DAG: s_mov_b64 {{s\[[0-9]+:[0-9]+\]}}, s[8:9]<br>
-; GCN-DAG: s_mov_b64 {{s\[[0-9]+:[0-9]+\]}}, s[10:11]<br>
-<br>
-; GCN-DAG: s_mov_b32 s6, [[SAVE_X]]<br>
-; GCN-DAG: s_mov_b32 s7, [[SAVE_Y]]<br>
-; GCN-DAG: s_mov_b32 s8, [[SAVE_Z]]<br>
+; GCN: s_mov_b64 {{s\[[0-9]+:[0-9]+\]}}, s[6:7]<br>
+; GCN: s_mov_b64 {{s\[[0-9]+:[0-9]+\]}}, s[10:11]<br>
+; GCN: s_mov_b64 {{s\[[0-9]+:[0-9]+\]}}, s[8:9]<br>
+<br>
+; GCN: s_mov_b32 s6, s14<br>
+; GCN: s_mov_b32 s7, s15<br>
+; GCN: s_mov_b32 s8, s16<br>
+<br>
+; GCN: s_mov_b32 [[SAVE_Z:s[0-9]+]], s16<br>
+; GCN: s_mov_b32 [[SAVE_Y:s[0-9]+]], s15<br>
+; GCN: s_mov_b32 [[SAVE_X:s[0-9]+]], s14<br>
+<br>
; GCN: s_swappc_b64<br>
<br>
; GCN: buffer_store_dword v{{[0-9]+}}, off, s[0:3], s5 offset:4<br>
<br>
Modified: llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-vgprs.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-vgprs.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-vgprs.ll (original)<br>
+++ llvm/trunk/test/CodeGen/AMDGPU/callee-special-input-vgprs.ll Wed Aug 16 13:50:01 2017<br>
@@ -288,8 +288,8 @@ define void @too_many_args_use_workitem_<br>
; GCN-LABEL: {{^}}kern_call_too_many_args_<wbr>use_workitem_id_x:<br>
; GCN: enable_vgpr_workitem_id = 0<br>
<br>
+; GCN: s_mov_b32 s32, s7<br>
; GCN: s_mov_b32 s33, s7<br>
-; GCN: s_mov_b32 s32, s33<br>
; GCN: buffer_store_dword v0, off, s[0:3], s32 offset:8<br>
; GCN: s_mov_b32 s4, s33<br>
; GCN: s_swappc_b64<br>
@@ -422,15 +422,16 @@ define void @too_many_args_use_workitem_<br>
; GCN-LABEL: {{^}}kern_call_too_many_args_<wbr>use_workitem_id_x_byval:<br>
; GCN: enable_vgpr_workitem_id = 0<br>
<br>
-; GCN: s_mov_b32 s33, s7<br>
-; GCN: s_add_u32 s32, s33, 0x200{{$}}<br>
+; GCN: s_add_u32 s32, s7, 0x200{{$}}<br>
+; GCN: v_mov_b32_e32 [[K:v[0-9]+]], 0x3e7{{$}}<br>
+; GCN: s_add_u32 s32, s32, 0x100{{$}}<br>
<br>
-; GCN-DAG: s_add_u32 s32, s32, 0x100{{$}}<br>
-; GCN-DAG: v_mov_b32_e32 [[K:v[0-9]+]], 0x3e7{{$}}<br>
-; GCN: buffer_store_dword [[K]], off, s[0:3], s33 offset:4<br>
+<br>
+; GCN: buffer_store_dword [[K]], off, s[0:3], s7 offset:4<br>
; GCN: buffer_store_dword v0, off, s[0:3], s32 offset:12<br>
+; GCN: buffer_load_dword [[RELOAD_BYVAL:v[0-9]+]], off, s[0:3], s7 offset:4<br>
<br>
-; GCN: buffer_load_dword [[RELOAD_BYVAL:v[0-9]+]], off, s[0:3], s33 offset:4<br>
+; GCN: s_mov_b32 s33, s7<br>
; GCN: buffer_store_dword [[RELOAD_BYVAL]], off, s[0:3], s32 offset:4{{$}}<br>
; GCN: v_mov_b32_e32 [[RELOAD_BYVAL]],<br>
; GCN: s_swappc_b64<br>
@@ -548,8 +549,8 @@ define void @too_many_args_use_workitem_<br>
; GCN-LABEL: {{^}}kern_call_too_many_args_<wbr>use_workitem_id_xyz:<br>
; GCN: enable_vgpr_workitem_id = 2<br>
<br>
+; GCN: s_mov_b32 s32, s7<br>
; GCN: s_mov_b32 s33, s7<br>
-; GCN: s_mov_b32 s32, s33<br>
<br>
; GCN-DAG: buffer_store_dword v0, off, s[0:3], s32 offset:8<br>
; GCN-DAG: buffer_store_dword v1, off, s[0:3], s32 offset:12<br>
@@ -643,8 +644,8 @@ define void @too_many_args_use_workitem_<br>
; GCN-LABEL: {{^}}kern_call_too_many_args_<wbr>use_workitem_id_x_stack_yz:<br>
; GCN: enable_vgpr_workitem_id = 2<br>
<br>
+; GCN: s_mov_b32 s32, s7<br>
; GCN: s_mov_b32 s33, s7<br>
-; GCN: s_mov_b32 s32, s33<br>
<br>
; GCN-DAG: v_mov_b32_e32 v31, v0<br>
; GCN-DAG: buffer_store_dword v1, off, s[0:3], s32 offset:8<br>
<br>
Modified: llvm/trunk/test/CodeGen/AMDGPU/mubuf-offset-private.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/mubuf-offset-private.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/AMDGPU/mubuf-offset-private.ll (original)<br>
+++ llvm/trunk/test/CodeGen/AMDGPU/mubuf-offset-private.ll Wed Aug 16 13:50:01 2017<br>
@@ -5,49 +5,49 @@<br>
; Test addressing modes when the scratch base is not a frame index.<br>
<br>
; GCN-LABEL: {{^}}store_private_offset_i8:<br>
-; GCN: buffer_store_byte v{{[0-9]+}}, off, s[4:7], s2 offset:8<br>
+; GCN: buffer_store_byte v{{[0-9]+}}, off, s[4:7], s1 offset:8<br>
define amdgpu_kernel void @store_private_offset_i8() #0 {<br>
store volatile i8 5, i8* inttoptr (i32 8 to i8*)<br>
ret void<br>
}<br>
<br>
; GCN-LABEL: {{^}}store_private_offset_i16:<br>
-; GCN: buffer_store_short v{{[0-9]+}}, off, s[4:7], s2 offset:8<br>
+; GCN: buffer_store_short v{{[0-9]+}}, off, s[4:7], s1 offset:8<br>
define amdgpu_kernel void @store_private_offset_i16() #0 {<br>
store volatile i16 5, i16* inttoptr (i32 8 to i16*)<br>
ret void<br>
}<br>
<br>
; GCN-LABEL: {{^}}store_private_offset_i32:<br>
-; GCN: buffer_store_dword v{{[0-9]+}}, off, s[4:7], s2 offset:8<br>
+; GCN: buffer_store_dword v{{[0-9]+}}, off, s[4:7], s1 offset:8<br>
define amdgpu_kernel void @store_private_offset_i32() #0 {<br>
store volatile i32 5, i32* inttoptr (i32 8 to i32*)<br>
ret void<br>
}<br>
<br>
; GCN-LABEL: {{^}}store_private_offset_<wbr>v2i32:<br>
-; GCN: buffer_store_dwordx2 v{{\[[0-9]+:[0-9]+\]}}, off, s[4:7], s2 offset:8<br>
+; GCN: buffer_store_dwordx2 v{{\[[0-9]+:[0-9]+\]}}, off, s[4:7], s1 offset:8<br>
define amdgpu_kernel void @store_private_offset_v2i32() #0 {<br>
store volatile <2 x i32> <i32 5, i32 10>, <2 x i32>* inttoptr (i32 8 to <2 x i32>*)<br>
ret void<br>
}<br>
<br>
; GCN-LABEL: {{^}}store_private_offset_<wbr>v4i32:<br>
-; GCN: buffer_store_dwordx4 v{{\[[0-9]+:[0-9]+\]}}, off, s[4:7], s2 offset:8<br>
+; GCN: buffer_store_dwordx4 v{{\[[0-9]+:[0-9]+\]}}, off, s[4:7], s1 offset:8<br>
define amdgpu_kernel void @store_private_offset_v4i32() #0 {<br>
store volatile <4 x i32> <i32 5, i32 10, i32 15, i32 0>, <4 x i32>* inttoptr (i32 8 to <4 x i32>*)<br>
ret void<br>
}<br>
<br>
; GCN-LABEL: {{^}}load_private_offset_i8:<br>
-; GCN: buffer_load_ubyte v{{[0-9]+}}, off, s[4:7], s2 offset:8<br>
+; GCN: buffer_load_ubyte v{{[0-9]+}}, off, s[4:7], s1 offset:8<br>
define amdgpu_kernel void @load_private_offset_i8() #0 {<br>
%load = load volatile i8, i8* inttoptr (i32 8 to i8*)<br>
ret void<br>
}<br>
<br>
; GCN-LABEL: {{^}}sextload_private_offset_<wbr>i8:<br>
-; GCN: buffer_load_sbyte v{{[0-9]+}}, off, s[4:7], s8 offset:8<br>
+; GCN: buffer_load_sbyte v{{[0-9]+}}, off, s[4:7], s3 offset:8<br>
define amdgpu_kernel void @sextload_private_offset_i8(<wbr>i32 addrspace(1)* %out) #0 {<br>
%load = load volatile i8, i8* inttoptr (i32 8 to i8*)<br>
%sextload = sext i8 %load to i32<br>
@@ -56,7 +56,7 @@ define amdgpu_kernel void @sextload_priv<br>
}<br>
<br>
; GCN-LABEL: {{^}}zextload_private_offset_<wbr>i8:<br>
-; GCN: buffer_load_ubyte v{{[0-9]+}}, off, s[4:7], s8 offset:8<br>
+; GCN: buffer_load_ubyte v{{[0-9]+}}, off, s[4:7], s3 offset:8<br>
define amdgpu_kernel void @zextload_private_offset_i8(<wbr>i32 addrspace(1)* %out) #0 {<br>
%load = load volatile i8, i8* inttoptr (i32 8 to i8*)<br>
%zextload = zext i8 %load to i32<br>
@@ -65,14 +65,14 @@ define amdgpu_kernel void @zextload_priv<br>
}<br>
<br>
; GCN-LABEL: {{^}}load_private_offset_i16:<br>
-; GCN: buffer_load_ushort v{{[0-9]+}}, off, s[4:7], s2 offset:8<br>
+; GCN: buffer_load_ushort v{{[0-9]+}}, off, s[4:7], s1 offset:8<br>
define amdgpu_kernel void @load_private_offset_i16() #0 {<br>
%load = load volatile i16, i16* inttoptr (i32 8 to i16*)<br>
ret void<br>
}<br>
<br>
; GCN-LABEL: {{^}}sextload_private_offset_<wbr>i16:<br>
-; GCN: buffer_load_sshort v{{[0-9]+}}, off, s[4:7], s8 offset:8<br>
+; GCN: buffer_load_sshort v{{[0-9]+}}, off, s[4:7], s3 offset:8<br>
define amdgpu_kernel void @sextload_private_offset_i16(<wbr>i32 addrspace(1)* %out) #0 {<br>
%load = load volatile i16, i16* inttoptr (i32 8 to i16*)<br>
%sextload = sext i16 %load to i32<br>
@@ -81,7 +81,7 @@ define amdgpu_kernel void @sextload_priv<br>
}<br>
<br>
; GCN-LABEL: {{^}}zextload_private_offset_<wbr>i16:<br>
-; GCN: buffer_load_ushort v{{[0-9]+}}, off, s[4:7], s8 offset:8<br>
+; GCN: buffer_load_ushort v{{[0-9]+}}, off, s[4:7], s3 offset:8<br>
define amdgpu_kernel void @zextload_private_offset_i16(<wbr>i32 addrspace(1)* %out) #0 {<br>
%load = load volatile i16, i16* inttoptr (i32 8 to i16*)<br>
%zextload = zext i16 %load to i32<br>
@@ -90,28 +90,28 @@ define amdgpu_kernel void @zextload_priv<br>
}<br>
<br>
; GCN-LABEL: {{^}}load_private_offset_i32:<br>
-; GCN: buffer_load_dword v{{[0-9]+}}, off, s[4:7], s2 offset:8<br>
+; GCN: buffer_load_dword v{{[0-9]+}}, off, s[4:7], s1 offset:8<br>
define amdgpu_kernel void @load_private_offset_i32() #0 {<br>
%load = load volatile i32, i32* inttoptr (i32 8 to i32*)<br>
ret void<br>
}<br>
<br>
; GCN-LABEL: {{^}}load_private_offset_<wbr>v2i32:<br>
-; GCN: buffer_load_dwordx2 v{{\[[0-9]+:[0-9]+\]}}, off, s[4:7], s2 offset:8<br>
+; GCN: buffer_load_dwordx2 v{{\[[0-9]+:[0-9]+\]}}, off, s[4:7], s1 offset:8<br>
define amdgpu_kernel void @load_private_offset_v2i32() #0 {<br>
%load = load volatile <2 x i32>, <2 x i32>* inttoptr (i32 8 to <2 x i32>*)<br>
ret void<br>
}<br>
<br>
; GCN-LABEL: {{^}}load_private_offset_<wbr>v4i32:<br>
-; GCN: buffer_load_dwordx4 v{{\[[0-9]+:[0-9]+\]}}, off, s[4:7], s2 offset:8<br>
+; GCN: buffer_load_dwordx4 v{{\[[0-9]+:[0-9]+\]}}, off, s[4:7], s1 offset:8<br>
define amdgpu_kernel void @load_private_offset_v4i32() #0 {<br>
%load = load volatile <4 x i32>, <4 x i32>* inttoptr (i32 8 to <4 x i32>*)<br>
ret void<br>
}<br>
<br>
; GCN-LABEL: {{^}}store_private_offset_i8_<wbr>max_offset:<br>
-; GCN: buffer_store_byte v{{[0-9]+}}, off, s[4:7], s2 offset:4095<br>
+; GCN: buffer_store_byte v{{[0-9]+}}, off, s[4:7], s1 offset:4095<br>
define amdgpu_kernel void @store_private_offset_i8_max_<wbr>offset() #0 {<br>
store volatile i8 5, i8* inttoptr (i32 4095 to i8*)<br>
ret void<br>
@@ -119,7 +119,7 @@ define amdgpu_kernel void @store_private<br>
<br>
; GCN-LABEL: {{^}}store_private_offset_i8_<wbr>max_offset_plus1:<br>
; GCN: v_mov_b32_e32 [[OFFSET:v[0-9]+]], 0x1000<br>
-; GCN: buffer_store_byte v{{[0-9]+}}, [[OFFSET]], s[4:7], s2 offen{{$}}<br>
+; GCN: buffer_store_byte v{{[0-9]+}}, [[OFFSET]], s[4:7], s1 offen{{$}}<br>
define amdgpu_kernel void @store_private_offset_i8_max_<wbr>offset_plus1() #0 {<br>
store volatile i8 5, i8* inttoptr (i32 4096 to i8*)<br>
ret void<br>
@@ -127,7 +127,7 @@ define amdgpu_kernel void @store_private<br>
<br>
; GCN-LABEL: {{^}}store_private_offset_i8_<wbr>max_offset_plus2:<br>
; GCN: v_mov_b32_e32 [[OFFSET:v[0-9]+]], 0x1000<br>
-; GCN: buffer_store_byte v{{[0-9]+}}, [[OFFSET]], s[4:7], s2 offen offset:1{{$}}<br>
+; GCN: buffer_store_byte v{{[0-9]+}}, [[OFFSET]], s[4:7], s1 offen offset:1{{$}}<br>
define amdgpu_kernel void @store_private_offset_i8_max_<wbr>offset_plus2() #0 {<br>
store volatile i8 5, i8* inttoptr (i32 4097 to i8*)<br>
ret void<br>
<br>
Modified: llvm/trunk/test/CodeGen/AMDGPU/multilevel-break.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/multilevel-break.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/AMDGPU/multilevel-break.ll (original)<br>
+++ llvm/trunk/test/CodeGen/AMDGPU/multilevel-break.ll Wed Aug 16 13:50:01 2017<br>
@@ -78,7 +78,7 @@ ENDIF:<br>
<br>
; Uses a copy instead of an or<br>
; GCN: s_mov_b64 [[COPY:s\[[0-9]+:[0-9]+\]]], [[BREAK_REG]]<br>
-; GCN: s_or_b64 [[BREAK_REG]], exec, [[COPY]]<br>
+; GCN: s_or_b64 [[BREAK_REG]], exec, [[BREAK_REG]]<br>
define amdgpu_kernel void @multi_if_break_loop(i32 %arg) #0 {<br>
bb:<br>
%id = call i32 @llvm.amdgcn.workitem.id.x()<br>
<br>
Modified: llvm/trunk/test/CodeGen/AMDGPU/private-access-no-objects.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/private-access-no-objects.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/AMDGPU/private-access-no-objects.ll (original)<br>
+++ llvm/trunk/test/CodeGen/AMDGPU/private-access-no-objects.ll Wed Aug 16 13:50:01 2017<br>
@@ -1,6 +1,6 @@<br>
; RUN: llc -mtriple=amdgcn--amdhsa -mcpu=fiji -verify-machineinstrs < %s | FileCheck -enable-var-scope -check-prefix=GCN -check-prefix=VI -check-prefix=OPT %s<br>
; RUN: llc -mtriple=amdgcn--amdhsa -mcpu=hawaii -verify-machineinstrs < %s | FileCheck -enable-var-scope -check-prefix=GCN -check-prefix=CI -check-prefix=OPT %s<br>
-; RUN: llc -mtriple=amdgcn--amdhsa -mcpu=iceland -verify-machineinstrs < %s | FileCheck -enable-var-scope -check-prefix=GCN -check-prefix=VI -check-prefix=OPT %s<br>
+; RUN: llc -mtriple=amdgcn--amdhsa -mcpu=iceland -verify-machineinstrs < %s | FileCheck -enable-var-scope -check-prefix=GCN -check-prefix=VI -check-prefix=OPTICELAND %s<br>
; RUN: llc -O0 -mtriple=amdgcn--amdhsa -mcpu=fiji -verify-machineinstrs < %s | FileCheck -enable-var-scope -check-prefix=GCN -check-prefix=OPTNONE %s<br>
<br>
; There are no stack objects, but still a private memory access. The<br>
@@ -8,10 +8,12 @@<br>
; shifted down to the end of the used registers.<br>
<br>
; GCN-LABEL: {{^}}store_to_undef:<br>
-; OPT-DAG: s_mov_b64 s{{\[}}[[RSRC_LO:[0-9]+]]:{{[<wbr>0-9]+\]}}, s[0:1]<br>
-; OPT-DAG: s_mov_b64 s{{\[[0-9]+}}:[[RSRC_HI:[0-9]+<wbr>]]{{\]}}, s[2:3]<br>
-; OPT-DAG: s_mov_b32 [[SOFFSET:s[0-9]+]], s5{{$}}<br>
-; OPT: buffer_store_dword v{{[0-9]+}}, v{{[0-9]+}}, s{{\[}}[[RSRC_LO]]:[[RSRC_HI]]<wbr>{{\]}}, [[SOFFSET]] offen{{$}}<br>
+; OPT: buffer_store_dword v{{[0-9]+}}, v{{[0-9]+}}, s[0:3], s5 offen{{$}}<br>
+; The -mcpu=iceland case doesn't copy-propagate the same as the other two opt cases because the temp registers %SGPR88_SGPR89_SGPR90_SGPR91 and %SGPR93 are marked as non-allocatable by this subtarget.<br>
+; OPTICELAND-DAG: s_mov_b64 s{{\[}}[[RSRC_LO:[0-9]+]]:{{[<wbr>0-9]+\]}}, s[0:1]<br>
+; OPTICELAND-DAG: s_mov_b64 s{{\[[0-9]+}}:[[RSRC_HI:[0-9]+<wbr>]]{{\]}}, s[2:3]<br>
+; OPTICELAND-DAG: s_mov_b32 [[SOFFSET:s[0-9]+]], s5{{$}}<br>
+; OPTICELAND: buffer_store_dword v{{[0-9]+}}, v{{[0-9]+}}, s{{\[}}[[RSRC_LO]]:[[RSRC_HI]]<wbr>{{\]}}, [[SOFFSET]] offen{{$}}<br>
<br>
; -O0 should assume spilling, so the input scratch resource descriptor<br>
; -should be used directly without any copies.<br>
@@ -24,30 +26,21 @@ define amdgpu_kernel void @store_to_unde<br>
}<br>
<br>
; GCN-LABEL: {{^}}store_to_inttoptr:<br>
-; OPT-DAG: s_mov_b64 s{{\[}}[[RSRC_LO:[0-9]+]]:{{[<wbr>0-9]+\]}}, s[0:1]<br>
-; OPT-DAG: s_mov_b64 s{{\[[0-9]+}}:[[RSRC_HI:[0-9]+<wbr>]]{{\]}}, s[2:3]<br>
-; OPT-DAG: s_mov_b32 [[SOFFSET:s[0-9]+]], s5{{$}}<br>
-; OPT: buffer_store_dword v{{[0-9]+}}, off, s{{\[}}[[RSRC_LO]]:[[RSRC_HI]]<wbr>{{\]}}, [[SOFFSET]] offset:124{{$}}<br>
+; OPT: buffer_store_dword v{{[0-9]+}}, off, s[0:3], s5 offset:124{{$}}<br>
define amdgpu_kernel void @store_to_inttoptr() #0 {<br>
store volatile i32 0, i32* inttoptr (i32 124 to i32*)<br>
ret void<br>
}<br>
<br>
; GCN-LABEL: {{^}}load_from_undef:<br>
-; OPT-DAG: s_mov_b64 s{{\[}}[[RSRC_LO:[0-9]+]]:{{[<wbr>0-9]+\]}}, s[0:1]<br>
-; OPT-DAG: s_mov_b64 s{{\[[0-9]+}}:[[RSRC_HI:[0-9]+<wbr>]]{{\]}}, s[2:3]<br>
-; OPT-DAG: s_mov_b32 [[SOFFSET:s[0-9]+]], s5{{$}}<br>
-; OPT: buffer_load_dword v{{[0-9]+}}, v{{[0-9]+}}, s{{\[}}[[RSRC_LO]]:[[RSRC_HI]]<wbr>{{\]}}, [[SOFFSET]] offen{{$}}<br>
+; OPT: buffer_load_dword v{{[0-9]+}}, v{{[0-9]+}}, s[0:3], s5 offen{{$}}<br>
define amdgpu_kernel void @load_from_undef() #0 {<br>
%ld = load volatile i32, i32* undef<br>
ret void<br>
}<br>
<br>
; GCN-LABEL: {{^}}load_from_inttoptr:<br>
-; OPT-DAG: s_mov_b64 s{{\[}}[[RSRC_LO:[0-9]+]]:{{[<wbr>0-9]+\]}}, s[0:1]<br>
-; OPT-DAG: s_mov_b64 s{{\[[0-9]+}}:[[RSRC_HI:[0-9]+<wbr>]]{{\]}}, s[2:3]<br>
-; OPT-DAG: s_mov_b32 [[SOFFSET:s[0-9]+]], s5{{$}}<br>
-; OPT: buffer_load_dword v{{[0-9]+}}, off, s{{\[}}[[RSRC_LO]]:[[RSRC_HI]]<wbr>{{\]}}, [[SOFFSET]] offset:124{{$}}<br>
+; OPT: buffer_load_dword v{{[0-9]+}}, off, s[0:3], s5 offset:124{{$}}<br>
define amdgpu_kernel void @load_from_inttoptr() #0 {<br>
%ld = load volatile i32, i32* inttoptr (i32 124 to i32*)<br>
ret void<br>
<br>
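The run-line change to an OPTICELAND prefix and the comment added in the hunk above describe a case where the rewrite is deliberately not applied: on -mcpu=iceland the temporaries holding the scratch resource descriptor and scratch offset are non-allocatable, so the copies stay and the original move sequence is still checked. Below is a hedged sketch of the kind of guard that behaviour suggests; the NonAllocatable set, the Inst model and canForward are placeholders for this note, not the real subtarget or register-info query.

// Sketch of a legality check matching the ICELAND comment above: before
// forwarding, make sure neither register involved is one the allocator may
// not touch, and that nothing redefines the source between the copy and the
// use being rewritten.
#include <cassert>
#include <cstddef>
#include <set>
#include <string>
#include <vector>

struct Inst {
  std::string Dest;
  std::vector<std::string> Srcs;
};

static bool canForward(const std::vector<Inst> &Block, std::size_t CopyIdx,
                       std::size_t UseIdx, const std::string &CopyDst,
                       const std::string &CopySrc,
                       const std::set<std::string> &NonAllocatable) {
  // Back off if either register is non-allocatable on this subtarget.
  if (NonAllocatable.count(CopyDst) || NonAllocatable.count(CopySrc))
    return false;
  // Back off if the source is redefined between the copy and the use.
  for (std::size_t I = CopyIdx + 1; I < UseIdx; ++I)
    if (Block[I].Dest == CopySrc)
      return false;
  return true;
}

int main() {
  std::vector<Inst> Block = {
      {"s93", {"s5"}},       // s93 = COPY s5   (scratch-offset temp)
      {"v0", {"v1", "s93"}}, // buffer access reading the temp
  };
  assert(canForward(Block, 0, 1, "s93", "s5", {}));
  // Modeling the iceland case: the temp is non-allocatable, so no forwarding.
  assert(!canForward(Block, 0, 1, "s93", "s5", {"s93"}));
  return 0;
}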
Modified: llvm/trunk/test/CodeGen/AMDGPU/ret.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/ret.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/AMDGPU/ret.ll (original)<br>
+++ llvm/trunk/test/CodeGen/AMDGPU/ret.ll Wed Aug 16 13:50:01 2017<br>
@@ -2,10 +2,10 @@<br>
; RUN: llc -march=amdgcn -mcpu=tonga -verify-machineinstrs < %s | FileCheck -check-prefix=GCN %s<br>
<br>
; GCN-LABEL: {{^}}vgpr:<br>
-; GCN: v_mov_b32_e32 v1, v0<br>
-; GCN-DAG: v_add_f32_e32 v0, 1.0, v1<br>
-; GCN-DAG: exp mrt0 v1, v1, v1, v1 done vm<br>
+; GCN-DAG: v_mov_b32_e32 v1, v0<br>
+; GCN-DAG: exp mrt0 v0, v0, v0, v0 done vm<br>
; GCN: s_waitcnt expcnt(0)<br>
+; GCN: v_add_f32_e32 v0, 1.0, v0<br>
; GCN-NOT: s_endpgm<br>
define amdgpu_vs { float, float } @vgpr([9 x <16 x i8>] addrspace(2)* byval %arg, i32 inreg %arg1, i32 inreg %arg2, float %arg3) #0 {<br>
bb:<br>
@@ -204,13 +204,13 @@ bb:<br>
}<br>
<br>
; GCN-LABEL: {{^}}both:<br>
-; GCN: v_mov_b32_e32 v1, v0<br>
-; GCN-DAG: exp mrt0 v1, v1, v1, v1 done vm<br>
-; GCN-DAG: v_add_f32_e32 v0, 1.0, v1<br>
-; GCN-DAG: s_add_i32 s0, s3, 2<br>
+; GCN-DAG: exp mrt0 v0, v0, v0, v0 done vm<br>
+; GCN-DAG: v_mov_b32_e32 v1, v0<br>
; GCN-DAG: s_mov_b32 s1, s2<br>
-; GCN: s_mov_b32 s2, s3<br>
; GCN: s_waitcnt expcnt(0)<br>
+; GCN: v_add_f32_e32 v0, 1.0, v0<br>
+; GCN-DAG: s_add_i32 s0, s3, 2<br>
+; GCN-DAG: s_mov_b32 s2, s3<br>
; GCN-NOT: s_endpgm<br>
define amdgpu_vs { float, i32, float, i32, i32 } @both([9 x <16 x i8>] addrspace(2)* byval %arg, i32 inreg %arg1, i32 inreg %arg2, float %arg3) #0 {<br>
bb:<br>
<br>
Modified: llvm/trunk/test/CodeGen/ARM/atomic-op.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/atomic-op.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/ARM/atomic-op.ll (original)<br>
+++ llvm/trunk/test/CodeGen/ARM/atomic-op.ll Wed Aug 16 13:50:01 2017<br>
@@ -287,7 +287,8 @@ define i32 @test_cmpxchg_fail_order(i32<br>
<br>
%pair = cmpxchg i32* %addr, i32 %desired, i32 %new seq_cst monotonic<br>
%oldval = extractvalue { i32, i1 } %pair, 0<br>
-; CHECK-ARMV7: ldrex [[OLDVAL:r[0-9]+]], [r[[ADDR:[0-9]+]]]<br>
+; CHECK-ARMV7: mov r[[ADDR:[0-9]+]], r0<br>
+; CHECK-ARMV7: ldrex [[OLDVAL:r[0-9]+]], [r0]<br>
; CHECK-ARMV7: cmp [[OLDVAL]], r1<br>
; CHECK-ARMV7: bne [[FAIL_BB:\.?LBB[0-9]+_[0-9]+]<wbr>]<br>
; CHECK-ARMV7: dmb ish<br>
@@ -305,7 +306,8 @@ define i32 @test_cmpxchg_fail_order(i32<br>
; CHECK-ARMV7: dmb ish<br>
; CHECK-ARMV7: bx lr<br>
<br>
-; CHECK-T2: ldrex [[OLDVAL:r[0-9]+]], [r[[ADDR:[0-9]+]]]<br>
+; CHECK-T2: mov r[[ADDR:[0-9]+]], r0<br>
+; CHECK-T2: ldrex [[OLDVAL:r[0-9]+]], [r0]<br>
; CHECK-T2: cmp [[OLDVAL]], r1<br>
; CHECK-T2: bne [[FAIL_BB:\.?LBB.*]]<br>
; CHECK-T2: dmb ish<br>
<br>
Modified: llvm/trunk/test/CodeGen/ARM/swifterror.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/swifterror.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/ARM/swifterror.ll (original)<br>
+++ llvm/trunk/test/CodeGen/ARM/swifterror.ll Wed Aug 16 13:50:01 2017<br>
@@ -181,7 +181,7 @@ define float @foo_loop(%swift_error** sw<br>
; CHECK-APPLE: beq<br>
; CHECK-APPLE: mov r0, #16<br>
; CHECK-APPLE: malloc<br>
-; CHECK-APPLE: strb r{{.*}}, [{{.*}}[[ID]], #8]<br>
+; CHECK-APPLE: strb r{{.*}}, [r0, #8]<br>
; CHECK-APPLE: ble<br>
; CHECK-APPLE: mov r8, [[ID]]<br>
<br>
<br>
Modified: llvm/trunk/test/CodeGen/Mips/llvm-ir/sub.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/Mips/llvm-ir/sub.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/Mips/llvm-ir/sub.ll (original)<br>
+++ llvm/trunk/test/CodeGen/Mips/llvm-ir/sub.ll Wed Aug 16 13:50:01 2017<br>
@@ -165,7 +165,7 @@ entry:<br>
; MMR3: subu16 $5, $[[T19]], $[[T20]]<br>
<br>
; MMR6: move $[[T0:[0-9]+]], $7<br>
-; MMR6: sw $[[T0]], 8($sp)<br>
+; MMR6: sw $7, 8($sp)<br>
; MMR6: move $[[T1:[0-9]+]], $5<br>
; MMR6: sw $4, 12($sp)<br>
; MMR6: lw $[[T2:[0-9]+]], 48($sp)<br>
<br>
Modified: llvm/trunk/test/CodeGen/PowerPC/fma-mutate.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/fma-mutate.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/PowerPC/fma-mutate.ll (original)<br>
+++ llvm/trunk/test/CodeGen/PowerPC/fma-mutate.ll Wed Aug 16 13:50:01 2017<br>
@@ -14,7 +14,8 @@ define double @foo3(double %a) nounwind<br>
ret double %r<br>
<br>
; CHECK: @foo3<br>
-; CHECK: xsnmsubadp [[REG:[0-9]+]], {{[0-9]+}}, [[REG]]<br>
+; CHECK: fmr [[REG:[0-9]+]], [[REG2:[0-9]+]]<br>
+; CHECK: xsnmsubadp [[REG]], {{[0-9]+}}, [[REG2]]<br>
; CHECK: xsmaddmdp<br>
; CHECK: xsmaddadp<br>
}<br>
<br>
Modified: llvm/trunk/test/CodeGen/PowerPC/inlineasm-i64-reg.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/inlineasm-i64-reg.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/PowerPC/inlineasm-i64-reg.ll (original)<br>
+++ llvm/trunk/test/CodeGen/PowerPC/inlineasm-i64-reg.ll Wed Aug 16 13:50:01 2017<br>
@@ -75,7 +75,7 @@ entry:<br>
<br>
; CHECK-DAG: mr [[REG:[0-9]+]], 3<br>
; CHECK-DAG: li 0, 1076<br>
-; CHECK: stw [[REG]],<br>
+; CHECK-DAG: stw 3,<br>
<br>
; CHECK: #APP<br>
; CHECK: sc<br>
<br>
Modified: llvm/trunk/test/CodeGen/PowerPC/tail-dup-layout.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/tail-dup-layout.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/PowerPC/tail-dup-layout.ll (original)<br>
+++ llvm/trunk/test/CodeGen/PowerPC/tail-dup-layout.ll Wed Aug 16 13:50:01 2017<br>
@@ -23,7 +23,7 @@ target triple = "powerpc64le-grtev4-linu<br>
;CHECK-LABEL: straight_test:<br>
; test1 may have been merged with entry<br>
;CHECK: mr [[TAGREG:[0-9]+]], 3<br>
-;CHECK: andi. {{[0-9]+}}, [[TAGREG]], 1<br>
+;CHECK: andi. {{[0-9]+}}, [[TAGREG:[0-9]+]], 1<br>
;CHECK-NEXT: bc 12, 1, .[[OPT1LABEL:[_0-9A-Za-z]+]]<br>
;CHECK-NEXT: # %test2<br>
;CHECK-NEXT: rlwinm. {{[0-9]+}}, [[TAGREG]], 0, 30, 30<br>
<br>
Modified: llvm/trunk/test/CodeGen/SPARC/32abi.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/SPARC/32abi.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/SPARC/32abi.ll (original)<br>
+++ llvm/trunk/test/CodeGen/SPARC/32abi.ll Wed Aug 16 13:50:01 2017<br>
@@ -156,9 +156,9 @@ define double @floatarg(double %a0, ;<br>
; HARD-NEXT: std %o0, [%sp+96]<br>
; HARD-NEXT: st %o1, [%sp+92]<br>
; HARD-NEXT: mov %i0, %o2<br>
-; HARD-NEXT: mov %o0, %o3<br>
+; HARD-NEXT: mov %i1, %o3<br>
; HARD-NEXT: mov %o1, %o4<br>
-; HARD-NEXT: mov %o0, %o5<br>
+; HARD-NEXT: mov %i1, %o5<br>
; HARD-NEXT: call floatarg<br>
; HARD: std %f0, [%i4]<br>
; SOFT: st %i0, [%sp+104]<br>
<br>
Modified: llvm/trunk/test/CodeGen/SPARC/atomics.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/SPARC/atomics.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/SPARC/atomics.ll (original)<br>
+++ llvm/trunk/test/CodeGen/SPARC/atomics.ll Wed Aug 16 13:50:01 2017<br>
@@ -235,8 +235,9 @@ entry:<br>
<br>
; CHECK-LABEL: test_load_add_i32<br>
; CHECK: membar<br>
-; CHECK: add [[V:%[gilo][0-7]]], %o1, [[U:%[gilo][0-7]]]<br>
-; CHECK: cas [%o0], [[V]], [[U]]<br>
+; CHECK: mov [[U:%[gilo][0-7]]], [[V:%[gilo][0-7]]]<br>
+; CHECK: add [[U:%[gilo][0-7]]], %o1, [[V2:%[gilo][0-7]]]<br>
+; CHECK: cas [%o0], [[V]], [[V2]]<br>
; CHECK: membar<br>
define zeroext i32 @test_load_add_i32(i32* %p, i32 zeroext %v) {<br>
entry:<br>
<br>
Modified: llvm/trunk/test/CodeGen/Thumb/thumb-shrink-wrapping.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/Thumb/thumb-shrink-wrapping.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/Thumb/thumb-shrink-wrapping.ll (original)<br>
+++ llvm/trunk/test/CodeGen/Thumb/thumb-shrink-wrapping.ll Wed Aug 16 13:50:01 2017<br>
@@ -598,7 +598,7 @@ declare void @abort() #0<br>
define i32 @b_to_bx(i32 %value) {<br>
; CHECK-LABEL: b_to_bx:<br>
; DISABLE: push {r7, lr}<br>
-; CHECK: cmp r1, #49<br>
+; CHECK: cmp r0, #49<br>
; CHECK-NEXT: bgt [[ELSE_LABEL:LBB[0-9_]+]]<br>
; ENABLE: push {r7, lr}<br>
<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/2006-03-01-InstrSchedBug.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/2006-03-01-InstrSchedBug.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/2006-03-01-InstrSchedBug.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/2006-03-01-InstrSchedBug.ll Wed Aug 16 13:50:01 2017<br>
@@ -7,7 +7,7 @@ define i32 @f(i32 %a, i32 %b) {<br>
; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax<br>
; CHECK-NEXT: movl {{[0-9]+}}(%esp), %ecx<br>
; CHECK-NEXT: movl %ecx, %edx<br>
-; CHECK-NEXT: imull %edx, %edx<br>
+; CHECK-NEXT: imull %ecx, %edx<br>
; CHECK-NEXT: imull %eax, %ecx<br>
; CHECK-NEXT: imull %eax, %eax<br>
; CHECK-NEXT: addl %edx, %eax<br>
<br>
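The 2006-03-01-InstrSchedBug.ll hunk above is the smallest self-contained example of the rewrite: immediately after movl %ecx, %edx the two registers hold the same value, so imull %ecx, %edx computes exactly what imull %edx, %edx did and the CHECK update is behaviour-preserving. A tiny C++ sanity check of that equivalence (plain arithmetic, nothing target-specific):

#include <cassert>

int main() {
  for (unsigned ecx : {0u, 1u, 7u, 0xdeadbeefu}) {
    unsigned edx = ecx;            // movl %ecx, %edx
    unsigned viaCopy = edx * edx;  // imull %edx, %edx (old CHECK)
    unsigned viaSrc = ecx * edx;   // imull %ecx, %edx (new CHECK)
    assert(viaCopy == viaSrc);     // same product either way
  }
  return 0;
}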
Modified: llvm/trunk/test/CodeGen/X86/arg-copy-elide.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/arg-copy-elide.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/arg-copy-elide.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/arg-copy-elide.ll Wed Aug 16 13:50:01 2017<br>
@@ -106,7 +106,7 @@ entry:<br>
; CHECK-DAG: movl %edx, %[[r1:[^ ]*]]<br>
; CHECK-DAG: movl 8(%ebp), %[[r2:[^ ]*]]<br>
; CHECK-DAG: movl %[[r2]], 4(%esp)<br>
-; CHECK-DAG: movl %[[r1]], (%esp)<br>
+; CHECK-DAG: movl %edx, (%esp)<br>
; CHECK: movl %esp, %[[reg:[^ ]*]]<br>
; CHECK: pushl %[[reg]]<br>
; CHECK: calll _addrof_i64<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/avg.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avg.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/avg.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/avg.ll Wed Aug 16 13:50:01 2017<br>
@@ -407,7 +407,6 @@ define void @avg_v64i8(<64 x i8>* %a, <6<br>
; SSE2-NEXT: pand %xmm0, %xmm2<br>
; SSE2-NEXT: packuswb %xmm1, %xmm2<br>
; SSE2-NEXT: packuswb %xmm10, %xmm2<br>
-; SSE2-NEXT: movdqa %xmm2, %xmm1<br>
; SSE2-NEXT: psrld $1, %xmm4<br>
; SSE2-NEXT: psrld $1, %xmm12<br>
; SSE2-NEXT: pand %xmm0, %xmm12<br>
@@ -444,7 +443,7 @@ define void @avg_v64i8(<64 x i8>* %a, <6<br>
; SSE2-NEXT: movdqu %xmm7, (%rax)<br>
; SSE2-NEXT: movdqu %xmm11, (%rax)<br>
; SSE2-NEXT: movdqu %xmm13, (%rax)<br>
-; SSE2-NEXT: movdqu %xmm1, (%rax)<br>
+; SSE2-NEXT: movdqu %xmm2, (%rax)<br>
; SSE2-NEXT: retq<br>
;<br>
; AVX1-LABEL: avg_v64i8:<br>
<br>
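In the avg.ll hunk just above, the copy movdqa %xmm2, %xmm1 disappears and the final movdqu stores %xmm2 directly: once every read of the copy's destination has been rewritten to the source, the copy itself is dead and can be deleted. A rough sketch of that cleanup follows, under the simplifying assumption that the destination is not live out of the block; copyIsDead and the Inst model are illustrative only.

// Sketch of the follow-on cleanup suggested by the avg.ll hunk above: a copy
// is dead once its destination is never read before being redefined (or
// before the end of the block, assuming it is not live out).
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

struct Inst {
  std::string Dest;
  std::vector<std::string> Srcs;
  bool IsCopy = false;
};

static bool copyIsDead(const std::vector<Inst> &Block, std::size_t CopyIdx) {
  const std::string &Dst = Block[CopyIdx].Dest;
  for (std::size_t I = CopyIdx + 1; I < Block.size(); ++I) {
    if (std::find(Block[I].Srcs.begin(), Block[I].Srcs.end(), Dst) !=
        Block[I].Srcs.end())
      return false;             // destination still read somewhere
    if (Block[I].Dest == Dst)
      return true;              // redefined before any read
  }
  return true;                  // assumes Dst is not live out of the block
}

int main() {
  // Shape of the avg.ll change: xmm1 = COPY xmm2, and after forwarding the
  // final store reads xmm2, so the copy can go away.
  std::vector<Inst> Block = {
      {"xmm1", {"xmm2"}, true}, // movdqa %xmm2, %xmm1
      {"", {"xmm2"}, false},    // movdqu %xmm2, (%rax)  (already forwarded)
  };
  if (copyIsDead(Block, 0))
    Block.erase(Block.begin());
  assert(Block.size() == 1 && Block[0].Srcs[0] == "xmm2");
  return 0;
}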
Modified: llvm/trunk/test/CodeGen/X86/avx-load-store.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx-load-store.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/avx-load-store.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/avx-load-store.ll Wed Aug 16 13:50:01 2017<br>
@@ -12,11 +12,11 @@ define void @test_256_load(double* nocap<br>
; CHECK-NEXT: movq %rdx, %r14<br>
; CHECK-NEXT: movq %rsi, %r15<br>
; CHECK-NEXT: movq %rdi, %rbx<br>
-; CHECK-NEXT: vmovaps (%rbx), %ymm0<br>
+; CHECK-NEXT: vmovaps (%rdi), %ymm0<br>
; CHECK-NEXT: vmovups %ymm0, {{[0-9]+}}(%rsp) # 32-byte Spill<br>
-; CHECK-NEXT: vmovaps (%r15), %ymm1<br>
+; CHECK-NEXT: vmovaps (%rsi), %ymm1<br>
; CHECK-NEXT: vmovups %ymm1, {{[0-9]+}}(%rsp) # 32-byte Spill<br>
-; CHECK-NEXT: vmovaps (%r14), %ymm2<br>
+; CHECK-NEXT: vmovaps (%rdx), %ymm2<br>
; CHECK-NEXT: vmovups %ymm2, (%rsp) # 32-byte Spill<br>
; CHECK-NEXT: callq dummy<br>
; CHECK-NEXT: vmovups {{[0-9]+}}(%rsp), %ymm0 # 32-byte Reload<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/avx512-bugfix-25270.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-bugfix-25270.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/avx512-bugfix-25270.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/avx512-bugfix-25270.ll Wed Aug 16 13:50:01 2017<br>
@@ -9,10 +9,10 @@ define void @bar__512(<16 x i32>* %var)<br>
; CHECK-NEXT: pushq %rbx<br>
; CHECK-NEXT: subq $112, %rsp<br>
; CHECK-NEXT: movq %rdi, %rbx<br>
-; CHECK-NEXT: vmovups (%rbx), %zmm0<br>
+; CHECK-NEXT: vmovups (%rdi), %zmm0<br>
; CHECK-NEXT: vmovups %zmm0, (%rsp) ## 64-byte Spill<br>
; CHECK-NEXT: vbroadcastss {{.*}}(%rip), %zmm1<br>
-; CHECK-NEXT: vmovaps %zmm1, (%rbx)<br>
+; CHECK-NEXT: vmovaps %zmm1, (%rdi)<br>
; CHECK-NEXT: callq _Print__512<br>
; CHECK-NEXT: vmovups (%rsp), %zmm0 ## 64-byte Reload<br>
; CHECK-NEXT: callq _Print__512<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/avx512-calling-conv.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-calling-conv.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/avx512-calling-conv.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/avx512-calling-conv.ll Wed Aug 16 13:50:01 2017<br>
@@ -466,7 +466,7 @@ define i32 @test12(i32 %a1, i32 %a2, i32<br>
; KNL_X32-NEXT: movl %edi, (%esp)<br>
; KNL_X32-NEXT: calll _test11<br>
; KNL_X32-NEXT: movl %eax, %ebx<br>
-; KNL_X32-NEXT: movzbl %bl, %eax<br>
+; KNL_X32-NEXT: movzbl %al, %eax<br>
; KNL_X32-NEXT: movl %eax, {{[0-9]+}}(%esp)<br>
; KNL_X32-NEXT: movl %esi, {{[0-9]+}}(%esp)<br>
; KNL_X32-NEXT: movl %edi, (%esp)<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/avx512-mask-op.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-mask-op.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/avx512-mask-op.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/avx512-mask-op.ll Wed Aug 16 13:50:01 2017<br>
@@ -1171,7 +1171,6 @@ define <8 x i1> @test18(i8 %a, i16 %y) {<br>
; KNL-NEXT: kmovw %esi, %k0<br>
; KNL-NEXT: kshiftlw $7, %k0, %k2<br>
; KNL-NEXT: kshiftrw $15, %k2, %k2<br>
-; KNL-NEXT: kmovw %k2, %eax<br>
; KNL-NEXT: kshiftlw $6, %k0, %k0<br>
; KNL-NEXT: kshiftrw $15, %k0, %k0<br>
; KNL-NEXT: kmovw %k0, %ecx<br>
@@ -1184,8 +1183,7 @@ define <8 x i1> @test18(i8 %a, i16 %y) {<br>
; KNL-NEXT: vptestmq %zmm0, %zmm0, %k0<br>
; KNL-NEXT: kshiftlw $1, %k0, %k0<br>
; KNL-NEXT: kshiftrw $1, %k0, %k0<br>
-; KNL-NEXT: kmovw %eax, %k1<br>
-; KNL-NEXT: kshiftlw $7, %k1, %k1<br>
+; KNL-NEXT: kshiftlw $7, %k2, %k1<br>
; KNL-NEXT: korw %k1, %k0, %k1<br>
; KNL-NEXT: vpternlogq $255, %zmm0, %zmm0, %zmm0 {%k1} {z}<br>
; KNL-NEXT: vpmovqw %zmm0, %xmm0<br>
@@ -1197,20 +1195,16 @@ define <8 x i1> @test18(i8 %a, i16 %y) {<br>
; SKX-NEXT: kmovd %esi, %k1<br>
; SKX-NEXT: kshiftlw $7, %k1, %k2<br>
; SKX-NEXT: kshiftrw $15, %k2, %k2<br>
-; SKX-NEXT: kmovd %k2, %eax<br>
; SKX-NEXT: kshiftlw $6, %k1, %k1<br>
; SKX-NEXT: kshiftrw $15, %k1, %k1<br>
-; SKX-NEXT: kmovd %k1, %ecx<br>
; SKX-NEXT: vpmovm2q %k0, %zmm0<br>
-; SKX-NEXT: kmovd %ecx, %k0<br>
-; SKX-NEXT: vpmovm2q %k0, %zmm1<br>
+; SKX-NEXT: vpmovm2q %k1, %zmm1<br>
; SKX-NEXT: vmovdqa64 {{.*#+}} zmm2 = [0,1,2,3,4,5,8,7]<br>
; SKX-NEXT: vpermi2q %zmm1, %zmm0, %zmm2<br>
; SKX-NEXT: vpmovq2m %zmm2, %k0<br>
; SKX-NEXT: kshiftlb $1, %k0, %k0<br>
; SKX-NEXT: kshiftrb $1, %k0, %k0<br>
-; SKX-NEXT: kmovd %eax, %k1<br>
-; SKX-NEXT: kshiftlb $7, %k1, %k1<br>
+; SKX-NEXT: kshiftlb $7, %k2, %k1<br>
; SKX-NEXT: korb %k1, %k0, %k0<br>
; SKX-NEXT: vpmovm2w %k0, %xmm0<br>
; SKX-NEXT: vzeroupper<br>
@@ -1222,7 +1216,6 @@ define <8 x i1> @test18(i8 %a, i16 %y) {<br>
; AVX512BW-NEXT: kmovd %esi, %k0<br>
; AVX512BW-NEXT: kshiftlw $7, %k0, %k2<br>
; AVX512BW-NEXT: kshiftrw $15, %k2, %k2<br>
-; AVX512BW-NEXT: kmovd %k2, %eax<br>
; AVX512BW-NEXT: kshiftlw $6, %k0, %k0<br>
; AVX512BW-NEXT: kshiftrw $15, %k0, %k0<br>
; AVX512BW-NEXT: kmovd %k0, %ecx<br>
@@ -1235,8 +1228,7 @@ define <8 x i1> @test18(i8 %a, i16 %y) {<br>
; AVX512BW-NEXT: vptestmq %zmm0, %zmm0, %k0<br>
; AVX512BW-NEXT: kshiftlw $1, %k0, %k0<br>
; AVX512BW-NEXT: kshiftrw $1, %k0, %k0<br>
-; AVX512BW-NEXT: kmovd %eax, %k1<br>
-; AVX512BW-NEXT: kshiftlw $7, %k1, %k1<br>
+; AVX512BW-NEXT: kshiftlw $7, %k2, %k1<br>
; AVX512BW-NEXT: korw %k1, %k0, %k0<br>
; AVX512BW-NEXT: vpmovm2w %k0, %zmm0<br>
; AVX512BW-NEXT: ## kill: %XMM0<def> %XMM0<kill> %ZMM0<kill><br>
@@ -1249,20 +1241,16 @@ define <8 x i1> @test18(i8 %a, i16 %y) {<br>
; AVX512DQ-NEXT: kmovw %esi, %k1<br>
; AVX512DQ-NEXT: kshiftlw $7, %k1, %k2<br>
; AVX512DQ-NEXT: kshiftrw $15, %k2, %k2<br>
-; AVX512DQ-NEXT: kmovw %k2, %eax<br>
; AVX512DQ-NEXT: kshiftlw $6, %k1, %k1<br>
; AVX512DQ-NEXT: kshiftrw $15, %k1, %k1<br>
-; AVX512DQ-NEXT: kmovw %k1, %ecx<br>
; AVX512DQ-NEXT: vpmovm2q %k0, %zmm0<br>
-; AVX512DQ-NEXT: kmovw %ecx, %k0<br>
-; AVX512DQ-NEXT: vpmovm2q %k0, %zmm1<br>
+; AVX512DQ-NEXT: vpmovm2q %k1, %zmm1<br>
; AVX512DQ-NEXT: vmovdqa64 {{.*#+}} zmm2 = [0,1,2,3,4,5,8,7]<br>
; AVX512DQ-NEXT: vpermi2q %zmm1, %zmm0, %zmm2<br>
; AVX512DQ-NEXT: vpmovq2m %zmm2, %k0<br>
; AVX512DQ-NEXT: kshiftlb $1, %k0, %k0<br>
; AVX512DQ-NEXT: kshiftrb $1, %k0, %k0<br>
-; AVX512DQ-NEXT: kmovw %eax, %k1<br>
-; AVX512DQ-NEXT: kshiftlb $7, %k1, %k1<br>
+; AVX512DQ-NEXT: kshiftlb $7, %k2, %k1<br>
; AVX512DQ-NEXT: korb %k1, %k0, %k0<br>
; AVX512DQ-NEXT: vpmovm2q %k0, %zmm0<br>
; AVX512DQ-NEXT: vpmovqw %zmm0, %xmm0<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/avx512bw-intrinsics-upgrade.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512bw-intrinsics-upgrade.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/avx512bw-intrinsics-upgrade.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/avx512bw-intrinsics-upgrade.ll Wed Aug 16 13:50:01 2017<br>
@@ -2005,7 +2005,7 @@ define i64 @test_mask_cmp_b_512(<64 x i8<br>
; AVX512F-32-NEXT: vpblendvb %ymm1, %ymm0, %ymm2, %ymm2<br>
; AVX512F-32-NEXT: vshufi64x2 {{.*#+}} zmm0 = zmm2[0,1,2,3],zmm0[4,5,6,7]<br>
; AVX512F-32-NEXT: vpmovb2m %zmm0, %k0<br>
-; AVX512F-32-NEXT: movl %esi, %eax<br>
+; AVX512F-32-NEXT: movl %ecx, %eax<br>
; AVX512F-32-NEXT: shrl $30, %eax<br>
; AVX512F-32-NEXT: kmovd %eax, %k1<br>
; AVX512F-32-NEXT: vpmovm2b %k1, %zmm0<br>
@@ -2016,7 +2016,7 @@ define i64 @test_mask_cmp_b_512(<64 x i8<br>
; AVX512F-32-NEXT: vpblendvb %ymm2, %ymm0, %ymm1, %ymm1<br>
; AVX512F-32-NEXT: vshufi64x2 {{.*#+}} zmm0 = zmm1[0,1,2,3],zmm0[4,5,6,7]<br>
; AVX512F-32-NEXT: vpmovb2m %zmm0, %k0<br>
-; AVX512F-32-NEXT: movl %esi, %eax<br>
+; AVX512F-32-NEXT: movl %ecx, %eax<br>
; AVX512F-32-NEXT: shrl $31, %eax<br>
; AVX512F-32-NEXT: kmovd %eax, %k1<br>
; AVX512F-32-NEXT: vpmovm2b %k1, %zmm0<br>
@@ -2891,7 +2891,7 @@ define i64 @test_mask_x86_avx512_ucmp_b_<br>
; AVX512F-32-NEXT: vpblendvb %ymm1, %ymm0, %ymm2, %ymm2<br>
; AVX512F-32-NEXT: vshufi64x2 {{.*#+}} zmm0 = zmm2[0,1,2,3],zmm0[4,5,6,7]<br>
; AVX512F-32-NEXT: vpmovb2m %zmm0, %k0<br>
-; AVX512F-32-NEXT: movl %esi, %eax<br>
+; AVX512F-32-NEXT: movl %ecx, %eax<br>
; AVX512F-32-NEXT: shrl $30, %eax<br>
; AVX512F-32-NEXT: kmovd %eax, %k1<br>
; AVX512F-32-NEXT: vpmovm2b %k1, %zmm0<br>
@@ -2902,7 +2902,7 @@ define i64 @test_mask_x86_avx512_ucmp_b_<br>
; AVX512F-32-NEXT: vpblendvb %ymm2, %ymm0, %ymm1, %ymm1<br>
; AVX512F-32-NEXT: vshufi64x2 {{.*#+}} zmm0 = zmm1[0,1,2,3],zmm0[4,5,6,7]<br>
; AVX512F-32-NEXT: vpmovb2m %zmm0, %k0<br>
-; AVX512F-32-NEXT: movl %esi, %eax<br>
+; AVX512F-32-NEXT: movl %ecx, %eax<br>
; AVX512F-32-NEXT: shrl $31, %eax<br>
; AVX512F-32-NEXT: kmovd %eax, %k1<br>
; AVX512F-32-NEXT: vpmovm2b %k1, %zmm0<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/bitcast-int-to-vector-bool-sext.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/bitcast-int-to-vector-bool-sext.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/bitcast-int-to-vector-bool-sext.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/bitcast-int-to-vector-bool-sext.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/bitcast-int-to-vector-bool-sext.ll Wed Aug 16 13:50:01 2017<br>
@@ -546,7 +546,7 @@ define <8 x i32> @ext_i8_8i32(i8 %a0) {<br>
; SSE2-SSSE3-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm0[0],xmm2[1],xmm0[<wbr>1]<br>
; SSE2-SSSE3-NEXT: punpcklqdq {{.*#+}} xmm1 = xmm1[0],xmm2[0]<br>
; SSE2-SSSE3-NEXT: movdqa %xmm1, %xmm0<br>
-; SSE2-SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br>
+; SSE2-SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[<wbr>1],xmm0[2],xmm1[2],xmm0[3],<wbr>xmm1[3]<br>
; SSE2-SSSE3-NEXT: pslld $31, %xmm0<br>
; SSE2-SSSE3-NEXT: psrad $31, %xmm0<br>
; SSE2-SSSE3-NEXT: punpckhwd {{.*#+}} xmm1 = xmm1[4],xmm0[4],xmm1[5],xmm0[<wbr>5],xmm1[6],xmm0[6],xmm1[7],<wbr>xmm0[7]<br>
@@ -722,7 +722,7 @@ define <16 x i16> @ext_i16_16i16(i16 %a0<br>
; SSE2-SSSE3-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[<wbr>1]<br>
; SSE2-SSSE3-NEXT: punpcklqdq {{.*#+}} xmm1 = xmm1[0],xmm0[0]<br>
; SSE2-SSSE3-NEXT: movdqa %xmm1, %xmm0<br>
-; SSE2-SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,<wbr>6,6,7,7]<br>
+; SSE2-SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[<wbr>1],xmm0[2],xmm1[2],xmm0[3],<wbr>xmm1[3],xmm0[4],xmm1[4],xmm0[<wbr>5],xmm1[5],xmm0[6],xmm1[6],<wbr>xmm0[7],xmm1[7]<br>
; SSE2-SSSE3-NEXT: psllw $15, %xmm0<br>
; SSE2-SSSE3-NEXT: psraw $15, %xmm0<br>
; SSE2-SSSE3-NEXT: punpckhbw {{.*#+}} xmm1 = xmm1[8],xmm0[8],xmm1[9],xmm0[<wbr>9],xmm1[10],xmm0[10],xmm1[11],<wbr>xmm0[11],xmm1[12],xmm0[12],<wbr>xmm1[13],xmm0[13],xmm1[14],<wbr>xmm0[14],xmm1[15],xmm0[15]<br>
@@ -1753,7 +1753,7 @@ define <16 x i32> @ext_i16_16i32(i16 %a0<br>
; SSE2-SSSE3-NEXT: movdqa %xmm3, %xmm1<br>
; SSE2-SSSE3-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0],xmm0[0],xmm1[1],xmm0[<wbr>1],xmm1[2],xmm0[2],xmm1[3],<wbr>xmm0[3],xmm1[4],xmm0[4],xmm1[<wbr>5],xmm0[5],xmm1[6],xmm0[6],<wbr>xmm1[7],xmm0[7]<br>
; SSE2-SSSE3-NEXT: movdqa %xmm1, %xmm0<br>
-; SSE2-SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br>
+; SSE2-SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[<wbr>1],xmm0[2],xmm1[2],xmm0[3],<wbr>xmm1[3]<br>
; SSE2-SSSE3-NEXT: pslld $31, %xmm0<br>
; SSE2-SSSE3-NEXT: psrad $31, %xmm0<br>
; SSE2-SSSE3-NEXT: punpckhwd {{.*#+}} xmm1 = xmm1[4],xmm0[4],xmm1[5],xmm0[<wbr>5],xmm1[6],xmm0[6],xmm1[7],<wbr>xmm0[7]<br>
@@ -2103,7 +2103,7 @@ define <32 x i16> @ext_i32_32i16(i32 %a0<br>
; SSE2-SSSE3-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm4[0],xmm0[1],xmm4[<wbr>1]<br>
; SSE2-SSSE3-NEXT: punpcklqdq {{.*#+}} xmm1 = xmm1[0],xmm0[0]<br>
; SSE2-SSSE3-NEXT: movdqa %xmm1, %xmm0<br>
-; SSE2-SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,<wbr>6,6,7,7]<br>
+; SSE2-SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[<wbr>1],xmm0[2],xmm1[2],xmm0[3],<wbr>xmm1[3],xmm0[4],xmm1[4],xmm0[<wbr>5],xmm1[5],xmm0[6],xmm1[6],<wbr>xmm0[7],xmm1[7]<br>
; SSE2-SSSE3-NEXT: psllw $15, %xmm0<br>
; SSE2-SSSE3-NEXT: psraw $15, %xmm0<br>
; SSE2-SSSE3-NEXT: punpckhbw {{.*#+}} xmm1 = xmm1[8],xmm0[8],xmm1[9],xmm0[<wbr>9],xmm1[10],xmm0[10],xmm1[11],<wbr>xmm0[11],xmm1[12],xmm0[12],<wbr>xmm1[13],xmm0[13],xmm1[14],<wbr>xmm0[14],xmm1[15],xmm0[15]<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/bitcast-int-to-vector-bool-zext.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/bitcast-int-to-vector-bool-zext.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/bitcast-int-to-vector-bool-zext.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/bitcast-int-to-vector-bool-zext.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/bitcast-int-to-vector-bool-zext.ll Wed Aug 16 13:50:01 2017<br>
@@ -649,7 +649,7 @@ define <8 x i32> @ext_i8_8i32(i8 %a0) {<br>
; SSE2-SSSE3-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm0[0],xmm2[1],xmm0[<wbr>1]<br>
; SSE2-SSSE3-NEXT: punpcklqdq {{.*#+}} xmm1 = xmm1[0],xmm2[0]<br>
; SSE2-SSSE3-NEXT: movdqa %xmm1, %xmm0<br>
-; SSE2-SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br>
+; SSE2-SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[<wbr>1],xmm0[2],xmm1[2],xmm0[3],<wbr>xmm1[3]<br>
; SSE2-SSSE3-NEXT: movdqa {{.*#+}} xmm2 = [1,1,1,1]<br>
; SSE2-SSSE3-NEXT: pand %xmm2, %xmm0<br>
; SSE2-SSSE3-NEXT: punpckhwd {{.*#+}} xmm1 = xmm1[4],xmm0[4],xmm1[5],xmm0[<wbr>5],xmm1[6],xmm0[6],xmm1[7],<wbr>xmm0[7]<br>
@@ -808,7 +808,7 @@ define <16 x i16> @ext_i16_16i16(i16 %a0<br>
; SSE2-SSSE3-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[<wbr>1]<br>
; SSE2-SSSE3-NEXT: punpcklqdq {{.*#+}} xmm1 = xmm1[0],xmm0[0]<br>
; SSE2-SSSE3-NEXT: movdqa %xmm1, %xmm0<br>
-; SSE2-SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,<wbr>6,6,7,7]<br>
+; SSE2-SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[<wbr>1],xmm0[2],xmm1[2],xmm0[3],<wbr>xmm1[3],xmm0[4],xmm1[4],xmm0[<wbr>5],xmm1[5],xmm0[6],xmm1[6],<wbr>xmm0[7],xmm1[7]<br>
; SSE2-SSSE3-NEXT: movdqa {{.*#+}} xmm2 = [1,1,1,1,1,1,1,1]<br>
; SSE2-SSSE3-NEXT: pand %xmm2, %xmm0<br>
; SSE2-SSSE3-NEXT: punpckhbw {{.*#+}} xmm1 = xmm1[8],xmm0[8],xmm1[9],xmm0[<wbr>9],xmm1[10],xmm0[10],xmm1[11],<wbr>xmm0[11],xmm1[12],xmm0[12],<wbr>xmm1[13],xmm0[13],xmm1[14],<wbr>xmm0[14],xmm1[15],xmm0[15]<br>
@@ -1667,7 +1667,7 @@ define <16 x i32> @ext_i16_16i32(i16 %a0<br>
; SSE2-SSSE3-NEXT: movdqa %xmm3, %xmm1<br>
; SSE2-SSSE3-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0],xmm0[0],xmm1[1],xmm0[<wbr>1],xmm1[2],xmm0[2],xmm1[3],<wbr>xmm0[3],xmm1[4],xmm0[4],xmm1[<wbr>5],xmm0[5],xmm1[6],xmm0[6],<wbr>xmm1[7],xmm0[7]<br>
; SSE2-SSSE3-NEXT: movdqa %xmm1, %xmm0<br>
-; SSE2-SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br>
+; SSE2-SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[<wbr>1],xmm0[2],xmm1[2],xmm0[3],<wbr>xmm1[3]<br>
; SSE2-SSSE3-NEXT: movdqa {{.*#+}} xmm4 = [1,1,1,1]<br>
; SSE2-SSSE3-NEXT: pand %xmm4, %xmm0<br>
; SSE2-SSSE3-NEXT: punpckhwd {{.*#+}} xmm1 = xmm1[4],xmm0[4],xmm1[5],xmm0[<wbr>5],xmm1[6],xmm0[6],xmm1[7],<wbr>xmm0[7]<br>
@@ -2008,7 +2008,7 @@ define <32 x i16> @ext_i32_32i16(i32 %a0<br>
; SSE2-SSSE3-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm4[0],xmm0[1],xmm4[<wbr>1]<br>
; SSE2-SSSE3-NEXT: punpcklqdq {{.*#+}} xmm1 = xmm1[0],xmm0[0]<br>
; SSE2-SSSE3-NEXT: movdqa %xmm1, %xmm0<br>
-; SSE2-SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,<wbr>6,6,7,7]<br>
+; SSE2-SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[<wbr>1],xmm0[2],xmm1[2],xmm0[3],<wbr>xmm1[3],xmm0[4],xmm1[4],xmm0[<wbr>5],xmm1[5],xmm0[6],xmm1[6],<wbr>xmm0[7],xmm1[7]<br>
; SSE2-SSSE3-NEXT: movdqa {{.*#+}} xmm4 = [1,1,1,1,1,1,1,1]<br>
; SSE2-SSSE3-NEXT: pand %xmm4, %xmm0<br>
; SSE2-SSSE3-NEXT: punpckhbw {{.*#+}} xmm1 = xmm1[8],xmm0[8],xmm1[9],xmm0[<wbr>9],xmm1[10],xmm0[10],xmm1[11],<wbr>xmm0[11],xmm1[12],xmm0[12],<wbr>xmm1[13],xmm0[13],xmm1[14],<wbr>xmm0[14],xmm1[15],xmm0[15]<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/buildvec-insertvec.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/buildvec-insertvec.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/buildvec-insertvec.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/buildvec-insertvec.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/buildvec-insertvec.ll Wed Aug 16 13:50:01 2017<br>
@@ -38,7 +38,7 @@ define <4 x float> @test_negative_zero_1<br>
; SSE2-LABEL: test_negative_zero_1:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movaps %xmm0, %xmm1<br>
-; SSE2-NEXT: movhlps {{.*#+}} xmm1 = xmm1[1,1]<br>
+; SSE2-NEXT: movhlps {{.*#+}} xmm1 = xmm0[1],xmm1[1]<br>
; SSE2-NEXT: movss {{.*#+}} xmm2 = mem[0],zero,zero,zero<br>
; SSE2-NEXT: unpcklps {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[<wbr>1]<br>
; SSE2-NEXT: xorps %xmm2, %xmm2<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/combine-fcopysign.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/combine-fcopysign.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/combine-fcopysign.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/combine-fcopysign.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/combine-fcopysign.ll Wed Aug 16 13:50:01 2017<br>
@@ -231,8 +231,8 @@ define <4 x double> @combine_vec_fcopysi<br>
; SSE-NEXT: cvtss2sd %xmm2, %xmm4<br>
; SSE-NEXT: movshdup {{.*#+}} xmm5 = xmm2[1,1,3,3]<br>
; SSE-NEXT: movaps %xmm2, %xmm6<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm6 = xmm6[1,1]<br>
-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1,2,3]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm6 = xmm2[1],xmm6[1]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1],xmm2[2,3]<br>
; SSE-NEXT: movaps {{.*#+}} xmm7<br>
; SSE-NEXT: movaps %xmm0, %xmm2<br>
; SSE-NEXT: andps %xmm7, %xmm2<br>
@@ -247,7 +247,7 @@ define <4 x double> @combine_vec_fcopysi<br>
; SSE-NEXT: orps %xmm0, %xmm4<br>
; SSE-NEXT: unpcklpd {{.*#+}} xmm2 = xmm2[0],xmm4[0]<br>
; SSE-NEXT: movaps %xmm1, %xmm0<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm0 = xmm0[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm0 = xmm1[1],xmm0[1]<br>
; SSE-NEXT: andps %xmm7, %xmm0<br>
; SSE-NEXT: cvtss2sd %xmm3, %xmm3<br>
; SSE-NEXT: andps %xmm8, %xmm3<br>
@@ -294,7 +294,7 @@ define <4 x float> @combine_vec_fcopysig<br>
; SSE-NEXT: orps %xmm6, %xmm1<br>
; SSE-NEXT: unpcklps {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[<wbr>1]<br>
; SSE-NEXT: movaps %xmm3, %xmm1<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm1[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm3[1],xmm1[1]<br>
; SSE-NEXT: andps %xmm5, %xmm1<br>
; SSE-NEXT: xorps %xmm6, %xmm6<br>
; SSE-NEXT: cvtsd2ss %xmm2, %xmm6<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/complex-fastmath.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/complex-fastmath.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/complex-fastmath.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/complex-fastmath.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/complex-fastmath.ll Wed Aug 16 13:50:01 2017<br>
@@ -14,7 +14,7 @@ define <2 x float> @complex_square_f32(<<br>
; SSE: # BB#0:<br>
; SSE-NEXT: movshdup {{.*#+}} xmm1 = xmm0[1,1,3,3]<br>
; SSE-NEXT: movaps %xmm0, %xmm2<br>
-; SSE-NEXT: addss %xmm2, %xmm2<br>
+; SSE-NEXT: addss %xmm0, %xmm2<br>
; SSE-NEXT: mulss %xmm1, %xmm2<br>
; SSE-NEXT: mulss %xmm0, %xmm0<br>
; SSE-NEXT: mulss %xmm1, %xmm1<br>
@@ -58,9 +58,9 @@ define <2 x double> @complex_square_f64(<br>
; SSE-LABEL: complex_square_f64:<br>
; SSE: # BB#0:<br>
; SSE-NEXT: movaps %xmm0, %xmm1<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm1[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm0[1],xmm1[1]<br>
; SSE-NEXT: movaps %xmm0, %xmm2<br>
-; SSE-NEXT: addsd %xmm2, %xmm2<br>
+; SSE-NEXT: addsd %xmm0, %xmm2<br>
; SSE-NEXT: mulsd %xmm1, %xmm2<br>
; SSE-NEXT: mulsd %xmm0, %xmm0<br>
; SSE-NEXT: mulsd %xmm1, %xmm1<br>
@@ -161,9 +161,9 @@ define <2 x double> @complex_mul_f64(<2<br>
; SSE-LABEL: complex_mul_f64:<br>
; SSE: # BB#0:<br>
; SSE-NEXT: movaps %xmm0, %xmm2<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm2[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm0[1],xmm2[1]<br>
; SSE-NEXT: movaps %xmm1, %xmm3<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm3[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm1[1],xmm3[1]<br>
; SSE-NEXT: movaps %xmm3, %xmm4<br>
; SSE-NEXT: mulsd %xmm0, %xmm4<br>
; SSE-NEXT: mulsd %xmm1, %xmm0<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/divide-by-constant.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/divide-by-constant.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/divide-by-constant.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/divide-by-constant.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/divide-by-constant.ll Wed Aug 16 13:50:01 2017<br>
@@ -318,7 +318,7 @@ define i64 @PR23590(i64 %x) nounwind {<br>
; X64: # BB#0: # %entry<br>
; X64-NEXT: movq %rdi, %rcx<br>
; X64-NEXT: movabsq $6120523590596543007, %rdx # imm = 0x54F077C718E7C21F<br>
-; X64-NEXT: movq %rcx, %rax<br>
+; X64-NEXT: movq %rdi, %rax<br>
; X64-NEXT: mulq %rdx<br>
; X64-NEXT: shrq $12, %rdx<br>
; X64-NEXT: imulq $12345, %rdx, %rax # imm = 0x3039<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/fmaxnum.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fmaxnum.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fmaxnum.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/fmaxnum.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/fmaxnum.ll Wed Aug 16 13:50:01 2017<br>
@@ -18,7 +18,7 @@ declare <8 x double> @llvm.maxnum.v8f64(<br>
<br>
; CHECK-LABEL: @test_fmaxf<br>
; SSE: movaps %xmm0, %xmm2<br>
-; SSE-NEXT: cmpunordss %xmm2, %xmm2<br>
+; SSE-NEXT: cmpunordss %xmm0, %xmm2<br>
; SSE-NEXT: movaps %xmm2, %xmm3<br>
; SSE-NEXT: andps %xmm1, %xmm3<br>
; SSE-NEXT: maxss %xmm0, %xmm1<br>
@@ -47,7 +47,7 @@ define float @test_fmaxf_minsize(float %<br>
<br>
; CHECK-LABEL: @test_fmax<br>
; SSE: movapd %xmm0, %xmm2<br>
-; SSE-NEXT: cmpunordsd %xmm2, %xmm2<br>
+; SSE-NEXT: cmpunordsd %xmm0, %xmm2<br>
; SSE-NEXT: movapd %xmm2, %xmm3<br>
; SSE-NEXT: andpd %xmm1, %xmm3<br>
; SSE-NEXT: maxsd %xmm0, %xmm1<br>
@@ -74,7 +74,7 @@ define x86_fp80 @test_fmaxl(x86_fp80 %x,<br>
<br>
; CHECK-LABEL: @test_intrinsic_fmaxf<br>
; SSE: movaps %xmm0, %xmm2<br>
-; SSE-NEXT: cmpunordss %xmm2, %xmm2<br>
+; SSE-NEXT: cmpunordss %xmm0, %xmm2<br>
; SSE-NEXT: movaps %xmm2, %xmm3<br>
; SSE-NEXT: andps %xmm1, %xmm3<br>
; SSE-NEXT: maxss %xmm0, %xmm1<br>
@@ -95,7 +95,7 @@ define float @test_intrinsic_fmaxf(float<br>
<br>
; CHECK-LABEL: @test_intrinsic_fmax<br>
; SSE: movapd %xmm0, %xmm2<br>
-; SSE-NEXT: cmpunordsd %xmm2, %xmm2<br>
+; SSE-NEXT: cmpunordsd %xmm0, %xmm2<br>
; SSE-NEXT: movapd %xmm2, %xmm3<br>
; SSE-NEXT: andpd %xmm1, %xmm3<br>
; SSE-NEXT: maxsd %xmm0, %xmm1<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/fminnum.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fminnum.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fminnum.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/fminnum.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/fminnum.ll Wed Aug 16 13:50:01 2017<br>
@@ -18,7 +18,7 @@ declare <8 x double> @llvm.minnum.v8f64(<br>
<br>
; CHECK-LABEL: @test_fminf<br>
; SSE: movaps %xmm0, %xmm2<br>
-; SSE-NEXT: cmpunordss %xmm2, %xmm2<br>
+; SSE-NEXT: cmpunordss %xmm0, %xmm2<br>
; SSE-NEXT: movaps %xmm2, %xmm3<br>
; SSE-NEXT: andps %xmm1, %xmm3<br>
; SSE-NEXT: minss %xmm0, %xmm1<br>
@@ -40,7 +40,7 @@ define float @test_fminf(float %x, float<br>
<br>
; CHECK-LABEL: @test_fmin<br>
; SSE: movapd %xmm0, %xmm2<br>
-; SSE-NEXT: cmpunordsd %xmm2, %xmm2<br>
+; SSE-NEXT: cmpunordsd %xmm0, %xmm2<br>
; SSE-NEXT: movapd %xmm2, %xmm3<br>
; SSE-NEXT: andpd %xmm1, %xmm3<br>
; SSE-NEXT: minsd %xmm0, %xmm1<br>
@@ -67,7 +67,7 @@ define x86_fp80 @test_fminl(x86_fp80 %x,<br>
<br>
; CHECK-LABEL: @test_intrinsic_fminf<br>
; SSE: movaps %xmm0, %xmm2<br>
-; SSE-NEXT: cmpunordss %xmm2, %xmm2<br>
+; SSE-NEXT: cmpunordss %xmm0, %xmm2<br>
; SSE-NEXT: movaps %xmm2, %xmm3<br>
; SSE-NEXT: andps %xmm1, %xmm3<br>
; SSE-NEXT: minss %xmm0, %xmm1<br>
@@ -87,7 +87,7 @@ define float @test_intrinsic_fminf(float<br>
<br>
; CHECK-LABEL: @test_intrinsic_fmin<br>
; SSE: movapd %xmm0, %xmm2<br>
-; SSE-NEXT: cmpunordsd %xmm2, %xmm2<br>
+; SSE-NEXT: cmpunordsd %xmm0, %xmm2<br>
; SSE-NEXT: movapd %xmm2, %xmm3<br>
; SSE-NEXT: andpd %xmm1, %xmm3<br>
; SSE-NEXT: minsd %xmm0, %xmm1<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/fp128-i128.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fp128-i128.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fp128-i128.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/fp128-i128.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/fp128-i128.ll Wed Aug 16 13:50:01 2017<br>
@@ -227,7 +227,7 @@ define fp128 @TestI128_4(fp128 %x) #0 {<br>
; CHECK: # BB#0: # %entry<br>
; CHECK-NEXT: subq $40, %rsp<br>
; CHECK-NEXT: movaps %xmm0, %xmm1<br>
-; CHECK-NEXT: movaps %xmm1, {{[0-9]+}}(%rsp)<br>
+; CHECK-NEXT: movaps %xmm0, {{[0-9]+}}(%rsp)<br>
; CHECK-NEXT: movq {{[0-9]+}}(%rsp), %rax<br>
; CHECK-NEXT: movq %rax, {{[0-9]+}}(%rsp)<br>
; CHECK-NEXT: movq $0, (%rsp)<br>
@@ -275,7 +275,7 @@ define fp128 @acosl(fp128 %x) #0 {<br>
; CHECK: # BB#0: # %entry<br>
; CHECK-NEXT: subq $40, %rsp<br>
; CHECK-NEXT: movaps %xmm0, %xmm1<br>
-; CHECK-NEXT: movaps %xmm1, {{[0-9]+}}(%rsp)<br>
+; CHECK-NEXT: movaps %xmm0, {{[0-9]+}}(%rsp)<br>
; CHECK-NEXT: movq {{[0-9]+}}(%rsp), %rax<br>
; CHECK-NEXT: movq %rax, {{[0-9]+}}(%rsp)<br>
; CHECK-NEXT: movq $0, (%rsp)<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/haddsub-2.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/haddsub-2.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/haddsub-2.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/haddsub-2.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/haddsub-2.ll Wed Aug 16 13:50:01 2017<br>
@@ -908,16 +908,16 @@ define <4 x float> @not_a_hsub_2(<4 x fl<br>
; SSE-LABEL: not_a_hsub_2:<br>
; SSE: # BB#0:<br>
; SSE-NEXT: movaps %xmm0, %xmm2<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm2[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm0[1],xmm2[1]<br>
; SSE-NEXT: movaps %xmm0, %xmm3<br>
-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1,2,3]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1],xmm0[2,3]<br>
; SSE-NEXT: subss %xmm3, %xmm2<br>
; SSE-NEXT: movshdup {{.*#+}} xmm3 = xmm0[1,1,3,3]<br>
; SSE-NEXT: subss %xmm3, %xmm0<br>
; SSE-NEXT: movaps %xmm1, %xmm3<br>
-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1,2,3]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1],xmm1[2,3]<br>
; SSE-NEXT: movaps %xmm1, %xmm4<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm4 = xmm4[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm4 = xmm1[1],xmm4[1]<br>
; SSE-NEXT: subss %xmm4, %xmm3<br>
; SSE-NEXT: movshdup {{.*#+}} xmm4 = xmm1[1,1,3,3]<br>
; SSE-NEXT: subss %xmm4, %xmm1<br>
@@ -965,10 +965,10 @@ define <2 x double> @not_a_hsub_3(<2 x d<br>
; SSE-LABEL: not_a_hsub_3:<br>
; SSE: # BB#0:<br>
; SSE-NEXT: movaps %xmm1, %xmm2<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm2[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm1[1],xmm2[1]<br>
; SSE-NEXT: subsd %xmm2, %xmm1<br>
; SSE-NEXT: movaps %xmm0, %xmm2<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm2[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm0[1],xmm2[1]<br>
; SSE-NEXT: subsd %xmm0, %xmm2<br>
; SSE-NEXT: unpcklpd {{.*#+}} xmm2 = xmm2[0],xmm1[0]<br>
; SSE-NEXT: movapd %xmm2, %xmm0<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/haddsub-undef.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/haddsub-undef.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/haddsub-undef.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/haddsub-undef.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/haddsub-undef.ll Wed Aug 16 13:50:01 2017<br>
@@ -103,7 +103,7 @@ define <2 x double> @test5_undef(<2 x do<br>
; SSE-LABEL: test5_undef:<br>
; SSE: # BB#0:<br>
; SSE-NEXT: movaps %xmm0, %xmm1<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm1[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm0[1],xmm1[1]<br>
; SSE-NEXT: addsd %xmm0, %xmm1<br>
; SSE-NEXT: movapd %xmm1, %xmm0<br>
; SSE-NEXT: retq<br>
@@ -168,7 +168,7 @@ define <4 x float> @test8_undef(<4 x flo<br>
; SSE-NEXT: movshdup {{.*#+}} xmm1 = xmm0[1,1,3,3]<br>
; SSE-NEXT: addss %xmm0, %xmm1<br>
; SSE-NEXT: movaps %xmm0, %xmm2<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm2[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm0[1],xmm2[1]<br>
; SSE-NEXT: shufps {{.*#+}} xmm0 = xmm0[3,1,2,3]<br>
; SSE-NEXT: addss %xmm2, %xmm0<br>
; SSE-NEXT: unpcklpd {{.*#+}} xmm1 = xmm1[0],xmm0[0]<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/half.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/half.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/half.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/half.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/half.ll Wed Aug 16 13:50:01 2017<br>
@@ -386,7 +386,7 @@ define <4 x float> @test_extend32_vec4(<<br>
; CHECK-LIBCALL-NEXT: pushq %rbx<br>
; CHECK-LIBCALL-NEXT: subq $48, %rsp<br>
; CHECK-LIBCALL-NEXT: movq %rdi, %rbx<br>
-; CHECK-LIBCALL-NEXT: movzwl (%rbx), %edi<br>
+; CHECK-LIBCALL-NEXT: movzwl (%rdi), %edi<br>
; CHECK-LIBCALL-NEXT: callq __gnu_h2f_ieee<br>
; CHECK-LIBCALL-NEXT: movaps %xmm0, {{[0-9]+}}(%rsp) # 16-byte Spill<br>
; CHECK-LIBCALL-NEXT: movzwl 2(%rbx), %edi<br>
@@ -472,7 +472,7 @@ define <4 x double> @test_extend64_vec4(<br>
; CHECK-LIBCALL-NEXT: pushq %rbx<br>
; CHECK-LIBCALL-NEXT: subq $16, %rsp<br>
; CHECK-LIBCALL-NEXT: movq %rdi, %rbx<br>
-; CHECK-LIBCALL-NEXT: movzwl 4(%rbx), %edi<br>
+; CHECK-LIBCALL-NEXT: movzwl 4(%rdi), %edi<br>
; CHECK-LIBCALL-NEXT: callq __gnu_h2f_ieee<br>
; CHECK-LIBCALL-NEXT: movss %xmm0, {{[0-9]+}}(%rsp) # 4-byte Spill<br>
; CHECK-LIBCALL-NEXT: movzwl 6(%rbx), %edi<br>
@@ -657,7 +657,7 @@ define void @test_trunc32_vec4(<4 x floa<br>
; CHECK-I686-NEXT: movaps %xmm0, {{[0-9]+}}(%esp) # 16-byte Spill<br>
; CHECK-I686-NEXT: movl {{[0-9]+}}(%esp), %ebp<br>
; CHECK-I686-NEXT: movaps %xmm0, %xmm1<br>
-; CHECK-I686-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1,2,3]<br>
+; CHECK-I686-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1],xmm0[2,3]<br>
; CHECK-I686-NEXT: movss %xmm1, (%esp)<br>
; CHECK-I686-NEXT: calll __gnu_f2h_ieee<br>
; CHECK-I686-NEXT: movw %ax, %si<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/inline-asm-fpstack.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/inline-asm-fpstack.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/inline-asm-fpstack.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/inline-asm-fpstack.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/inline-asm-fpstack.ll Wed Aug 16 13:50:01 2017<br>
@@ -162,6 +162,7 @@ define void @testPR4459(x86_fp80 %a) {<br>
; CHECK-NEXT: fstpt (%esp)<br>
; CHECK-NEXT: calll _ceil<br>
; CHECK-NEXT: fld %st(0)<br>
+; CHECK-NEXT: fxch %st(1)<br>
; CHECK-NEXT: ## InlineAsm Start<br>
; CHECK-NEXT: fistpl %st(0)<br>
; CHECK-NEXT: ## InlineAsm End<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/ipra-local-linkage.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/ipra-local-linkage.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/ipra-local-linkage.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/ipra-local-linkage.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/ipra-local-linkage.ll Wed Aug 16 13:50:01 2017<br>
@@ -24,7 +24,7 @@ define void @bar(i32 %X) {<br>
call void @foo()<br>
; CHECK-LABEL: bar:<br>
; CHECK: callq foo<br>
- ; CHECK-NEXT: movl %eax, %r15d<br>
+ ; CHECK-NEXT: movl %edi, %r15d<br>
call void asm sideeffect "movl $0, %r12d", "{r15}~{r12}"(i32 %X)<br>
ret void<br>
}<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/localescape.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/localescape.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/localescape.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/localescape.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/localescape.ll Wed Aug 16 13:50:01 2017<br>
@@ -27,7 +27,7 @@ define void @print_framealloc_from_fp(i8<br>
<br>
; X64-LABEL: print_framealloc_from_fp:<br>
; X64: movq %rcx, %[[parent_fp:[a-z]+]]<br>
-; X64: movl .Lalloc_func$frame_escape_0(%[[parent_fp]]), %edx<br>
+; X64: movl .Lalloc_func$frame_escape_0(%rcx), %edx<br>
; X64: leaq {{.*}}(%rip), %[[str:[a-z]+]]<br>
; X64: movq %[[str]], %rcx<br>
; X64: callq printf<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/mul-i1024.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/mul-i1024.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/mul-i1024.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/mul-i1024.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/mul-i1024.ll Wed Aug 16 13:50:01 2017<br>
@@ -159,7 +159,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X32-NEXT: pushl {{[0-9]+}}(%esp) # 4-byte Folded Reload<br>
; X32-NEXT: pushl %esi<br>
; X32-NEXT: movl %esi, %ebx<br>
-; X32-NEXT: movl %ebx, {{[0-9]+}}(%esp) # 4-byte Spill<br>
+; X32-NEXT: movl %esi, {{[0-9]+}}(%esp) # 4-byte Spill<br>
; X32-NEXT: pushl %edi<br>
; X32-NEXT: movl %edi, {{[0-9]+}}(%esp) # 4-byte Spill<br>
; X32-NEXT: pushl {{[0-9]+}}(%esp) # 4-byte Folded Reload<br>
@@ -752,7 +752,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X32-NEXT: pushl $0<br>
; X32-NEXT: pushl %edi<br>
; X32-NEXT: movl %ebx, %esi<br>
-; X32-NEXT: pushl %esi<br>
+; X32-NEXT: pushl %ebx<br>
; X32-NEXT: pushl $0<br>
; X32-NEXT: pushl $0<br>
; X32-NEXT: pushl {{[0-9]+}}(%esp) # 4-byte Folded Reload<br>
@@ -898,7 +898,6 @@ define void @test_1024(i1024* %a, i1024*<br>
; X32-NEXT: pushl $0<br>
; X32-NEXT: pushl $0<br>
; X32-NEXT: pushl %edi<br>
-; X32-NEXT: movl %edi, %ebx<br>
; X32-NEXT: pushl %esi<br>
; X32-NEXT: pushl $0<br>
; X32-NEXT: pushl $0<br>
@@ -910,7 +909,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X32-NEXT: leal {{[0-9]+}}(%esp), %eax<br>
; X32-NEXT: pushl $0<br>
; X32-NEXT: pushl $0<br>
-; X32-NEXT: pushl %ebx<br>
+; X32-NEXT: pushl %edi<br>
; X32-NEXT: pushl %esi<br>
; X32-NEXT: pushl $0<br>
; X32-NEXT: pushl $0<br>
@@ -1365,7 +1364,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X32-NEXT: pushl $0<br>
; X32-NEXT: pushl $0<br>
; X32-NEXT: movl %edi, %ebx<br>
-; X32-NEXT: pushl %ebx<br>
+; X32-NEXT: pushl %edi<br>
; X32-NEXT: movl {{[0-9]+}}(%esp), %esi # 4-byte Reload<br>
; X32-NEXT: pushl %esi<br>
; X32-NEXT: pushl $0<br>
@@ -2442,7 +2441,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X32-NEXT: movl %edx, {{[0-9]+}}(%esp) # 4-byte Spill<br>
; X32-NEXT: adcl %edi, %eax<br>
; X32-NEXT: movl %eax, %esi<br>
-; X32-NEXT: movl %esi, {{[0-9]+}}(%esp) # 4-byte Spill<br>
+; X32-NEXT: movl %eax, {{[0-9]+}}(%esp) # 4-byte Spill<br>
; X32-NEXT: movl {{[0-9]+}}(%esp), %eax # 4-byte Reload<br>
; X32-NEXT: addl %eax, {{[0-9]+}}(%esp) # 4-byte Folded Spill<br>
; X32-NEXT: movl {{[0-9]+}}(%esp), %eax # 4-byte Reload<br>
@@ -4265,7 +4264,6 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: adcq $0, %rbp<br>
; X64-NEXT: addq %rcx, %rbx<br>
; X64-NEXT: movq %rbx, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
-; X64-NEXT: movq %rcx, %r11<br>
; X64-NEXT: adcq %rdi, %rbp<br>
; X64-NEXT: setb %bl<br>
; X64-NEXT: movzbl %bl, %ebx<br>
@@ -4275,12 +4273,12 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: mulq %r8<br>
; X64-NEXT: movq %rax, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq %rdx, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
-; X64-NEXT: movq %r11, %r12<br>
-; X64-NEXT: movq %r11, %r8<br>
+; X64-NEXT: movq %rcx, %r12<br>
+; X64-NEXT: movq %rcx, %r8<br>
; X64-NEXT: addq %rax, %r12<br>
; X64-NEXT: movq %rdi, %rax<br>
; X64-NEXT: movq %rdi, %r9<br>
-; X64-NEXT: movq %r9, (%rsp) # 8-byte Spill<br>
+; X64-NEXT: movq %rdi, (%rsp) # 8-byte Spill<br>
; X64-NEXT: adcq %rdx, %rax<br>
; X64-NEXT: addq %rbp, %r12<br>
; X64-NEXT: movq %r12, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
@@ -4309,7 +4307,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: adcq %rdx, %rbx<br>
; X64-NEXT: movq 16(%rsi), %rax<br>
; X64-NEXT: movq %rsi, %r13<br>
-; X64-NEXT: movq %r13, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
+; X64-NEXT: movq %rsi, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq %rax, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: mulq %r11<br>
; X64-NEXT: movq %rax, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
@@ -4322,7 +4320,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: adcq %rbx, %r11<br>
; X64-NEXT: movq %r8, %rax<br>
; X64-NEXT: movq %r8, %rbp<br>
-; X64-NEXT: movq %rbp, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
+; X64-NEXT: movq %r8, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: addq %rdi, %rax<br>
; X64-NEXT: movq %r9, %rax<br>
; X64-NEXT: adcq %rcx, %rax<br>
@@ -4334,8 +4332,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: movq %rdx, %rsi<br>
; X64-NEXT: movq %rax, %rbx<br>
; X64-NEXT: addq %rdi, %rax<br>
-; X64-NEXT: movq %rdi, %r9<br>
-; X64-NEXT: movq %rsi, %rax<br>
+; X64-NEXT: movq %rdx, %rax<br>
; X64-NEXT: adcq %rcx, %rax<br>
; X64-NEXT: movq %rax, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq 32(%r13), %rax<br>
@@ -4351,9 +4348,10 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: adcq %rdx, %rax<br>
; X64-NEXT: movq %rax, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq %rbp, %rax<br>
-; X64-NEXT: addq %r9, %rax<br>
+; X64-NEXT: addq %rdi, %rax<br>
; X64-NEXT: movq %rax, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
-; X64-NEXT: movq %r9, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
+; X64-NEXT: movq %rdi, %r9<br>
+; X64-NEXT: movq %rdi, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq -{{[0-9]+}}(%rsp), %rax # 8-byte Reload<br>
; X64-NEXT: adcq %r15, %rax<br>
; X64-NEXT: movq %rax, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
@@ -4371,7 +4369,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: addq %rsi, %r11<br>
; X64-NEXT: movq %rdx, %rbp<br>
; X64-NEXT: adcq $0, %rbp<br>
-; X64-NEXT: addq %rcx, %r11<br>
+; X64-NEXT: addq %rbx, %r11<br>
; X64-NEXT: adcq %rsi, %rbp<br>
; X64-NEXT: movq %rsi, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: setb %bl<br>
@@ -4392,11 +4390,11 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: adcq %rbx, %r10<br>
; X64-NEXT: movq %rcx, %rdx<br>
; X64-NEXT: movq %rcx, %r12<br>
-; X64-NEXT: movq %r12, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
+; X64-NEXT: movq %rcx, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: addq %r9, %rdx<br>
; X64-NEXT: movq %rdx, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq %r11, %r8<br>
-; X64-NEXT: adcq %r8, %r15<br>
+; X64-NEXT: adcq %r11, %r15<br>
; X64-NEXT: movq %r15, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: adcq %rax, %r14<br>
; X64-NEXT: movq %r14, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
@@ -4492,13 +4490,12 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: adcq %rdx, %r12<br>
; X64-NEXT: movq {{[0-9]+}}(%rsp), %rcx # 8-byte Reload<br>
; X64-NEXT: movq %rcx, %rax<br>
-; X64-NEXT: movq %r10, %rbp<br>
-; X64-NEXT: mulq %rbp<br>
+; X64-NEXT: mulq %r10<br>
; X64-NEXT: movq %rdx, %rsi<br>
; X64-NEXT: movq %rax, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq {{[0-9]+}}(%rsp), %rdi # 8-byte Reload<br>
; X64-NEXT: movq %rdi, %rax<br>
-; X64-NEXT: mulq %rbp<br>
+; X64-NEXT: mulq %r10<br>
; X64-NEXT: movq %rdx, %rbp<br>
; X64-NEXT: movq %rax, %rbx<br>
; X64-NEXT: addq %rsi, %rbx<br>
@@ -4525,7 +4522,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: adcq $0, %r15<br>
; X64-NEXT: adcq $0, %r12<br>
; X64-NEXT: movq %r10, %rbx<br>
-; X64-NEXT: movq %rbx, %rax<br>
+; X64-NEXT: movq %r10, %rax<br>
; X64-NEXT: movq -{{[0-9]+}}(%rsp), %r11 # 8-byte Reload<br>
; X64-NEXT: mulq %r11<br>
; X64-NEXT: movq %rdx, %rcx<br>
@@ -4542,7 +4539,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: movq %rbx, %rax<br>
; X64-NEXT: mulq %rcx<br>
; X64-NEXT: movq %rcx, %rbx<br>
-; X64-NEXT: movq %rbx, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
+; X64-NEXT: movq %rcx, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq %rdx, %rcx<br>
; X64-NEXT: movq %rax, %r8<br>
; X64-NEXT: addq %rbp, %r8<br>
@@ -4573,7 +4570,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: movq {{[0-9]+}}(%rsp), %rcx # 8-byte Reload<br>
; X64-NEXT: movq %rcx, %rax<br>
; X64-NEXT: movq %r11, %rsi<br>
-; X64-NEXT: mulq %rsi<br>
+; X64-NEXT: mulq %r11<br>
; X64-NEXT: movq %rdx, %r11<br>
; X64-NEXT: movq %rax, %r13<br>
; X64-NEXT: movq -{{[0-9]+}}(%rsp), %r12 # 8-byte Reload<br>
@@ -4653,13 +4650,12 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: adcq %rdx, %r10<br>
; X64-NEXT: movq {{[0-9]+}}(%rsp), %rcx # 8-byte Reload<br>
; X64-NEXT: movq %rcx, %rax<br>
-; X64-NEXT: movq %r11, %rbp<br>
-; X64-NEXT: mulq %rbp<br>
+; X64-NEXT: mulq %r11<br>
; X64-NEXT: movq %rdx, %rdi<br>
; X64-NEXT: movq %rax, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq {{[0-9]+}}(%rsp), %rsi # 8-byte Reload<br>
; X64-NEXT: movq %rsi, %rax<br>
-; X64-NEXT: mulq %rbp<br>
+; X64-NEXT: mulq %r11<br>
; X64-NEXT: movq %rdx, %rbp<br>
; X64-NEXT: movq %rax, %rbx<br>
; X64-NEXT: addq %rdi, %rbx<br>
@@ -4789,7 +4785,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: movq %rdx, %rsi<br>
; X64-NEXT: movq %rax, %r14<br>
; X64-NEXT: movq %r8, %rbp<br>
-; X64-NEXT: movq %rbp, %rax<br>
+; X64-NEXT: movq %r8, %rax<br>
; X64-NEXT: mulq %rcx<br>
; X64-NEXT: movq %rcx, %r11<br>
; X64-NEXT: movq %rdx, %rbx<br>
@@ -4849,7 +4845,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: adcq $0, %r9<br>
; X64-NEXT: adcq $0, %r10<br>
; X64-NEXT: movq %rbp, %rsi<br>
-; X64-NEXT: movq %rsi, %rax<br>
+; X64-NEXT: movq %rbp, %rax<br>
; X64-NEXT: movq -{{[0-9]+}}(%rsp), %rcx # 8-byte Reload<br>
; X64-NEXT: mulq %rcx<br>
; X64-NEXT: movq %rdx, %r14<br>
@@ -4906,8 +4902,8 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: adcq $0, %r15<br>
; X64-NEXT: movq %rbp, %rax<br>
; X64-NEXT: movq %r8, %rdi<br>
-; X64-NEXT: movq %rdi, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
-; X64-NEXT: mulq %rdi<br>
+; X64-NEXT: movq %r8, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
+; X64-NEXT: mulq %r8<br>
; X64-NEXT: movq %rdx, %r9<br>
; X64-NEXT: movq %rax, %r8<br>
; X64-NEXT: addq %rbx, %r8<br>
@@ -4990,13 +4986,12 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: movq %rcx, %r14<br>
; X64-NEXT: movq {{[0-9]+}}(%rsp), %rcx # 8-byte Reload<br>
; X64-NEXT: movq %rcx, %rax<br>
-; X64-NEXT: movq %r10, %rdi<br>
-; X64-NEXT: mulq %rdi<br>
+; X64-NEXT: mulq %r10<br>
; X64-NEXT: movq %rdx, %r11<br>
; X64-NEXT: movq %rax, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq {{[0-9]+}}(%rsp), %rsi # 8-byte Reload<br>
; X64-NEXT: movq %rsi, %rax<br>
-; X64-NEXT: mulq %rdi<br>
+; X64-NEXT: mulq %r10<br>
; X64-NEXT: movq %rdx, %rdi<br>
; X64-NEXT: movq %rax, %rbx<br>
; X64-NEXT: addq %r11, %rbx<br>
@@ -5024,8 +5019,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: movq %r8, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: adcq $0, %r14<br>
; X64-NEXT: movq %r14, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
-; X64-NEXT: movq %r13, %rbx<br>
-; X64-NEXT: movq %rbx, %rax<br>
+; X64-NEXT: movq %r13, %rax<br>
; X64-NEXT: movq -{{[0-9]+}}(%rsp), %rcx # 8-byte Reload<br>
; X64-NEXT: mulq %rcx<br>
; X64-NEXT: movq %rdx, %r8<br>
@@ -5038,7 +5032,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: movq %rax, %rcx<br>
; X64-NEXT: addq %r8, %rcx<br>
; X64-NEXT: adcq $0, %rsi<br>
-; X64-NEXT: movq %rbx, %rax<br>
+; X64-NEXT: movq %r13, %rax<br>
; X64-NEXT: movq {{[0-9]+}}(%rsp), %r13 # 8-byte Reload<br>
; X64-NEXT: mulq %r13<br>
; X64-NEXT: movq %rdx, %rbx<br>
@@ -5072,13 +5066,12 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: setb -{{[0-9]+}}(%rsp) # 1-byte Folded Spill<br>
; X64-NEXT: movq {{[0-9]+}}(%rsp), %rbx # 8-byte Reload<br>
; X64-NEXT: movq %rbx, %rax<br>
-; X64-NEXT: movq %r10, %rsi<br>
-; X64-NEXT: mulq %rsi<br>
+; X64-NEXT: mulq %r10<br>
; X64-NEXT: movq %rdx, %rcx<br>
; X64-NEXT: movq %rax, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq -{{[0-9]+}}(%rsp), %r8 # 8-byte Reload<br>
; X64-NEXT: movq %r8, %rax<br>
-; X64-NEXT: mulq %rsi<br>
+; X64-NEXT: mulq %r10<br>
; X64-NEXT: movq %rdx, %rsi<br>
; X64-NEXT: movq %rax, %rdi<br>
; X64-NEXT: addq %rcx, %rdi<br>
@@ -5154,7 +5147,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: movq %r9, %rax<br>
; X64-NEXT: mulq %rcx<br>
; X64-NEXT: movq %rcx, %r10<br>
-; X64-NEXT: movq %r10, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
+; X64-NEXT: movq %rcx, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq %rdx, %rcx<br>
; X64-NEXT: movq %rax, %rdi<br>
; X64-NEXT: addq %rsi, %rdi<br>
@@ -5166,16 +5159,16 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: movq %rax, %rbx<br>
; X64-NEXT: movq %rdx, %r14<br>
; X64-NEXT: movq -{{[0-9]+}}(%rsp), %r12 # 8-byte Reload<br>
-; X64-NEXT: addq %rbx, %r12<br>
+; X64-NEXT: addq %rax, %r12<br>
; X64-NEXT: movq -{{[0-9]+}}(%rsp), %r15 # 8-byte Reload<br>
-; X64-NEXT: adcq %r14, %r15<br>
+; X64-NEXT: adcq %rdx, %r15<br>
; X64-NEXT: addq %rdi, %r12<br>
; X64-NEXT: adcq %rcx, %r15<br>
; X64-NEXT: movq {{[0-9]+}}(%rsp), %rcx # 8-byte Reload<br>
; X64-NEXT: movq %rcx, %rax<br>
; X64-NEXT: movq %r11, %rsi<br>
-; X64-NEXT: movq %rsi, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
-; X64-NEXT: mulq %rsi<br>
+; X64-NEXT: movq %r11, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
+; X64-NEXT: mulq %r11<br>
; X64-NEXT: movq %rdx, %r11<br>
; X64-NEXT: movq %rax, {{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq {{[0-9]+}}(%rsp), %r9 # 8-byte Reload<br>
@@ -5239,7 +5232,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: movq %rdx, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq %rax, %r9<br>
; X64-NEXT: movq {{[0-9]+}}(%rsp), %rbp # 8-byte Reload<br>
-; X64-NEXT: addq %r9, %rbp<br>
+; X64-NEXT: addq %rax, %rbp<br>
; X64-NEXT: movq {{[0-9]+}}(%rsp), %rax # 8-byte Reload<br>
; X64-NEXT: adcq %rdx, %rax<br>
; X64-NEXT: addq %rsi, %rbp<br>
@@ -5417,7 +5410,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: movq 88(%rsi), %rax<br>
; X64-NEXT: movq %rsi, %r9<br>
; X64-NEXT: movq %rax, %rsi<br>
-; X64-NEXT: movq %rsi, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
+; X64-NEXT: movq %rax, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: mulq %rcx<br>
; X64-NEXT: movq %rcx, %r11<br>
; X64-NEXT: movq %rdx, %rbp<br>
@@ -5453,13 +5446,12 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: adcq %r8, %r10<br>
; X64-NEXT: addq %rbx, %rsi<br>
; X64-NEXT: adcq %rbp, %r10<br>
-; X64-NEXT: movq %r9, %rdi<br>
-; X64-NEXT: movq 64(%rdi), %r13<br>
+; X64-NEXT: movq 64(%r9), %r13<br>
; X64-NEXT: movq %r13, %rax<br>
; X64-NEXT: mulq %r11<br>
; X64-NEXT: movq %rax, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq %rdx, %rcx<br>
-; X64-NEXT: movq 72(%rdi), %r9<br>
+; X64-NEXT: movq 72(%r9), %r9<br>
; X64-NEXT: movq %r9, %rax<br>
; X64-NEXT: mulq %r11<br>
; X64-NEXT: movq %rdx, %rbp<br>
@@ -5487,8 +5479,8 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: movq %rdx, %r11<br>
; X64-NEXT: movq %rax, %r15<br>
; X64-NEXT: movq %r12, %rcx<br>
-; X64-NEXT: addq %r15, %rcx<br>
-; X64-NEXT: adcq %r11, %r8<br>
+; X64-NEXT: addq %rax, %rcx<br>
+; X64-NEXT: adcq %rdx, %r8<br>
; X64-NEXT: addq %rbp, %rcx<br>
; X64-NEXT: adcq %rbx, %r8<br>
; X64-NEXT: addq -{{[0-9]+}}(%rsp), %rcx # 8-byte Folded Reload<br>
@@ -5540,14 +5532,13 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: setb %r10b<br>
; X64-NEXT: movq -{{[0-9]+}}(%rsp), %rsi # 8-byte Reload<br>
; X64-NEXT: movq %rsi, %rax<br>
-; X64-NEXT: movq %r8, %rdi<br>
-; X64-NEXT: mulq %rdi<br>
+; X64-NEXT: mulq %r8<br>
; X64-NEXT: movq %rdx, %rcx<br>
; X64-NEXT: movq %rax, %r9<br>
; X64-NEXT: movq -{{[0-9]+}}(%rsp), %rbp # 8-byte Reload<br>
; X64-NEXT: movq %rbp, %rax<br>
-; X64-NEXT: mulq %rdi<br>
-; X64-NEXT: movq %rdi, %r12<br>
+; X64-NEXT: mulq %r8<br>
+; X64-NEXT: movq %r8, %r12<br>
; X64-NEXT: movq %rdx, %rdi<br>
; X64-NEXT: movq %rax, %rbx<br>
; X64-NEXT: addq %rcx, %rbx<br>
@@ -5586,7 +5577,7 @@ define void @test_1024(i1024* %a, i1024*<br>
; X64-NEXT: imulq %rcx, %rdi<br>
; X64-NEXT: movq %rcx, %rax<br>
; X64-NEXT: movq %r12, %rsi<br>
-; X64-NEXT: mulq %rsi<br>
+; X64-NEXT: mulq %r12<br>
; X64-NEXT: movq %rax, %r9<br>
; X64-NEXT: addq %rdi, %rdx<br>
; X64-NEXT: movq 104(%rbp), %r8<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/mul-i512.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/mul-i512.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/mul-i512.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/mul-i512.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/mul-i512.ll Wed Aug 16 13:50:01 2017<br>
@@ -909,7 +909,7 @@ define void @test_512(i512* %a, i512* %b<br>
; X64-NEXT: movq 8(%rsi), %rbp<br>
; X64-NEXT: movq %r15, %rax<br>
; X64-NEXT: movq %rdx, %rsi<br>
-; X64-NEXT: mulq %rsi<br>
+; X64-NEXT: mulq %rdx<br>
; X64-NEXT: movq %rdx, %r9<br>
; X64-NEXT: movq %rax, %r8<br>
; X64-NEXT: movq %r11, %rax<br>
@@ -932,23 +932,24 @@ define void @test_512(i512* %a, i512* %b<br>
; X64-NEXT: movq %r11, %rax<br>
; X64-NEXT: mulq %rbp<br>
; X64-NEXT: movq %rbp, %r14<br>
-; X64-NEXT: movq %r14, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
+; X64-NEXT: movq %rbp, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq %rdx, %rsi<br>
; X64-NEXT: movq %rax, %rbp<br>
; X64-NEXT: addq %rcx, %rbp<br>
; X64-NEXT: adcq %rbx, %rsi<br>
; X64-NEXT: xorl %ecx, %ecx<br>
; X64-NEXT: movq %r10, %rbx<br>
-; X64-NEXT: movq %rbx, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
-; X64-NEXT: movq %rbx, %rax<br>
+; X64-NEXT: movq %r10, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
+; X64-NEXT: movq %r10, %rax<br>
; X64-NEXT: mulq %rcx<br>
; X64-NEXT: movq %rdx, %r13<br>
; X64-NEXT: movq %rax, %r10<br>
; X64-NEXT: movq %r15, %rax<br>
; X64-NEXT: mulq %rcx<br>
; X64-NEXT: movq %rdx, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
+; X64-NEXT: # kill: %RAX<kill><br>
+; X64-NEXT: movq %rax, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq %rax, %r15<br>
-; X64-NEXT: movq %r15, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: addq %r10, %r15<br>
; X64-NEXT: adcq %r13, %rdx<br>
; X64-NEXT: addq %rbp, %r15<br>
@@ -987,8 +988,8 @@ define void @test_512(i512* %a, i512* %b<br>
; X64-NEXT: mulq %rdx<br>
; X64-NEXT: movq %rdx, %r14<br>
; X64-NEXT: movq %rax, %r11<br>
-; X64-NEXT: addq %r11, %r10<br>
-; X64-NEXT: adcq %r14, %r13<br>
+; X64-NEXT: addq %rax, %r10<br>
+; X64-NEXT: adcq %rdx, %r13<br>
; X64-NEXT: addq %rbp, %r10<br>
; X64-NEXT: adcq %rsi, %r13<br>
; X64-NEXT: addq %r8, %r10<br>
@@ -1000,7 +1001,7 @@ define void @test_512(i512* %a, i512* %b<br>
; X64-NEXT: movq 16(%rsi), %r8<br>
; X64-NEXT: movq %rcx, %rax<br>
; X64-NEXT: movq %rcx, %r9<br>
-; X64-NEXT: movq %r9, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
+; X64-NEXT: movq %rcx, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: mulq %r8<br>
; X64-NEXT: movq %rdx, %rdi<br>
; X64-NEXT: movq %rax, %r12<br>
@@ -1031,7 +1032,7 @@ define void @test_512(i512* %a, i512* %b<br>
; X64-NEXT: mulq %rcx<br>
; X64-NEXT: movq %rdx, -{{[0-9]+}}(%rsp) # 8-byte Spill<br>
; X64-NEXT: movq %rax, %rbp<br>
-; X64-NEXT: addq %rbp, %r11<br>
+; X64-NEXT: addq %rax, %r11<br>
; X64-NEXT: adcq %rdx, %r14<br>
; X64-NEXT: addq %r9, %r11<br>
; X64-NEXT: adcq %rbx, %r14<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/mul128.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/mul128.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/mul128.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/mul128.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/mul128.ll Wed Aug 16 13:50:01 2017<br>
@@ -7,7 +7,7 @@ define i128 @foo(i128 %t, i128 %u) {<br>
; X64-NEXT: movq %rdx, %r8<br>
; X64-NEXT: imulq %rdi, %rcx<br>
; X64-NEXT: movq %rdi, %rax<br>
-; X64-NEXT: mulq %r8<br>
+; X64-NEXT: mulq %rdx<br>
; X64-NEXT: addq %rcx, %rdx<br>
; X64-NEXT: imulq %r8, %rsi<br>
; X64-NEXT: addq %rsi, %rdx<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/pmul.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pmul.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pmul.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/pmul.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/pmul.ll Wed Aug 16 13:50:01 2017<br>
@@ -9,7 +9,7 @@ define <16 x i8> @mul_v16i8c(<16 x i8> %<br>
; SSE2-LABEL: mul_v16i8c:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm1 = xmm1[8,8,9,9,10,10,11,11,12,<wbr>12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm1 = xmm1[8],xmm0[8],xmm1[9],xmm0[<wbr>9],xmm1[10],xmm0[10],xmm1[11],<wbr>xmm0[11],xmm1[12],xmm0[12],<wbr>xmm1[13],xmm0[13],xmm1[14],<wbr>xmm0[14],xmm1[15],xmm0[15]<br>
; SSE2-NEXT: psraw $8, %xmm1<br>
; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [117,117,117,117,117,117,117,<wbr>117]<br>
; SSE2-NEXT: pmullw %xmm2, %xmm1<br>
@@ -143,10 +143,10 @@ define <16 x i8> @mul_v16i8(<16 x i8> %i<br>
; SSE2-LABEL: mul_v16i8:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm1, %xmm2<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8,8,9,9,10,10,11,11,12,<wbr>12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8],xmm1[8],xmm2[9],xmm1[<wbr>9],xmm2[10],xmm1[10],xmm2[11],<wbr>xmm1[11],xmm2[12],xmm1[12],<wbr>xmm2[13],xmm1[13],xmm2[14],<wbr>xmm1[14],xmm2[15],xmm1[15]<br>
; SSE2-NEXT: psraw $8, %xmm2<br>
; SSE2-NEXT: movdqa %xmm0, %xmm3<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm3 = xmm3[8,8,9,9,10,10,11,11,12,<wbr>12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm3 = xmm3[8],xmm0[8],xmm3[9],xmm0[<wbr>9],xmm3[10],xmm0[10],xmm3[11],<wbr>xmm0[11],xmm3[12],xmm0[12],<wbr>xmm3[13],xmm0[13],xmm3[14],<wbr>xmm0[14],xmm3[15],xmm0[15]<br>
; SSE2-NEXT: psraw $8, %xmm3<br>
; SSE2-NEXT: pmullw %xmm2, %xmm3<br>
; SSE2-NEXT: movdqa {{.*#+}} xmm2 = [255,255,255,255,255,255,255,<wbr>255]<br>
@@ -386,7 +386,7 @@ define <32 x i8> @mul_v32i8c(<32 x i8> %<br>
; SSE2-LABEL: mul_v32i8c:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm0, %xmm2<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8,8,9,9,10,10,11,11,12,<wbr>12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8],xmm0[8],xmm2[9],xmm0[<wbr>9],xmm2[10],xmm0[10],xmm2[11],<wbr>xmm0[11],xmm2[12],xmm0[12],<wbr>xmm2[13],xmm0[13],xmm2[14],<wbr>xmm0[14],xmm2[15],xmm0[15]<br>
; SSE2-NEXT: psraw $8, %xmm2<br>
; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [117,117,117,117,117,117,117,<wbr>117]<br>
; SSE2-NEXT: pmullw %xmm3, %xmm2<br>
@@ -398,7 +398,7 @@ define <32 x i8> @mul_v32i8c(<32 x i8> %<br>
; SSE2-NEXT: pand %xmm4, %xmm0<br>
; SSE2-NEXT: packuswb %xmm2, %xmm0<br>
; SSE2-NEXT: movdqa %xmm1, %xmm2<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8,8,9,9,10,10,11,11,12,<wbr>12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8],xmm1[8],xmm2[9],xmm1[<wbr>9],xmm2[10],xmm1[10],xmm2[11],<wbr>xmm1[11],xmm2[12],xmm1[12],<wbr>xmm2[13],xmm1[13],xmm2[14],<wbr>xmm1[14],xmm2[15],xmm1[15]<br>
; SSE2-NEXT: psraw $8, %xmm2<br>
; SSE2-NEXT: pmullw %xmm3, %xmm2<br>
; SSE2-NEXT: pand %xmm4, %xmm2<br>
@@ -567,10 +567,10 @@ define <32 x i8> @mul_v32i8(<32 x i8> %i<br>
; SSE2-LABEL: mul_v32i8:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm2, %xmm4<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm4 = xmm4[8,8,9,9,10,10,11,11,12,<wbr>12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm4 = xmm4[8],xmm2[8],xmm4[9],xmm2[<wbr>9],xmm4[10],xmm2[10],xmm4[11],<wbr>xmm2[11],xmm4[12],xmm2[12],<wbr>xmm4[13],xmm2[13],xmm4[14],<wbr>xmm2[14],xmm4[15],xmm2[15]<br>
; SSE2-NEXT: psraw $8, %xmm4<br>
; SSE2-NEXT: movdqa %xmm0, %xmm5<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm5 = xmm5[8,8,9,9,10,10,11,11,12,<wbr>12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm5 = xmm5[8],xmm0[8],xmm5[9],xmm0[<wbr>9],xmm5[10],xmm0[10],xmm5[11],<wbr>xmm0[11],xmm5[12],xmm0[12],<wbr>xmm5[13],xmm0[13],xmm5[14],<wbr>xmm0[14],xmm5[15],xmm0[15]<br>
; SSE2-NEXT: psraw $8, %xmm5<br>
; SSE2-NEXT: pmullw %xmm4, %xmm5<br>
; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [255,255,255,255,255,255,255,<wbr>255]<br>
@@ -583,10 +583,10 @@ define <32 x i8> @mul_v32i8(<32 x i8> %i<br>
; SSE2-NEXT: pand %xmm4, %xmm0<br>
; SSE2-NEXT: packuswb %xmm5, %xmm0<br>
; SSE2-NEXT: movdqa %xmm3, %xmm2<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8,8,9,9,10,10,11,11,12,<wbr>12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8],xmm3[8],xmm2[9],xmm3[<wbr>9],xmm2[10],xmm3[10],xmm2[11],<wbr>xmm3[11],xmm2[12],xmm3[12],<wbr>xmm2[13],xmm3[13],xmm2[14],<wbr>xmm3[14],xmm2[15],xmm3[15]<br>
; SSE2-NEXT: psraw $8, %xmm2<br>
; SSE2-NEXT: movdqa %xmm1, %xmm5<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm5 = xmm5[8,8,9,9,10,10,11,11,12,<wbr>12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm5 = xmm5[8],xmm1[8],xmm5[9],xmm1[<wbr>9],xmm5[10],xmm1[10],xmm5[11],<wbr>xmm1[11],xmm5[12],xmm1[12],<wbr>xmm5[13],xmm1[13],xmm5[14],<wbr>xmm1[14],xmm5[15],xmm1[15]<br>
; SSE2-NEXT: psraw $8, %xmm5<br>
; SSE2-NEXT: pmullw %xmm2, %xmm5<br>
; SSE2-NEXT: pand %xmm4, %xmm5<br>
@@ -774,7 +774,7 @@ define <64 x i8> @mul_v64i8c(<64 x i8> %<br>
; SSE2-LABEL: mul_v64i8c:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm0, %xmm6<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm6 = xmm6[8,8,9,9,10,10,11,11,12,<wbr>12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm6 = xmm6[8],xmm0[8],xmm6[9],xmm0[<wbr>9],xmm6[10],xmm0[10],xmm6[11],<wbr>xmm0[11],xmm6[12],xmm0[12],<wbr>xmm6[13],xmm0[13],xmm6[14],<wbr>xmm0[14],xmm6[15],xmm0[15]<br>
; SSE2-NEXT: psraw $8, %xmm6<br>
; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [117,117,117,117,117,117,117,<wbr>117]<br>
; SSE2-NEXT: pmullw %xmm4, %xmm6<br>
@@ -786,7 +786,7 @@ define <64 x i8> @mul_v64i8c(<64 x i8> %<br>
; SSE2-NEXT: pand %xmm5, %xmm0<br>
; SSE2-NEXT: packuswb %xmm6, %xmm0<br>
; SSE2-NEXT: movdqa %xmm1, %xmm6<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm6 = xmm6[8,8,9,9,10,10,11,11,12,<wbr>12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm6 = xmm6[8],xmm1[8],xmm6[9],xmm1[<wbr>9],xmm6[10],xmm1[10],xmm6[11],<wbr>xmm1[11],xmm6[12],xmm1[12],<wbr>xmm6[13],xmm1[13],xmm6[14],<wbr>xmm1[14],xmm6[15],xmm1[15]<br>
; SSE2-NEXT: psraw $8, %xmm6<br>
; SSE2-NEXT: pmullw %xmm4, %xmm6<br>
; SSE2-NEXT: pand %xmm5, %xmm6<br>
@@ -796,7 +796,7 @@ define <64 x i8> @mul_v64i8c(<64 x i8> %<br>
; SSE2-NEXT: pand %xmm5, %xmm1<br>
; SSE2-NEXT: packuswb %xmm6, %xmm1<br>
; SSE2-NEXT: movdqa %xmm2, %xmm6<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm6 = xmm6[8,8,9,9,10,10,11,11,12,<wbr>12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm6 = xmm6[8],xmm2[8],xmm6[9],xmm2[<wbr>9],xmm6[10],xmm2[10],xmm6[11],<wbr>xmm2[11],xmm6[12],xmm2[12],<wbr>xmm6[13],xmm2[13],xmm6[14],<wbr>xmm2[14],xmm6[15],xmm2[15]<br>
; SSE2-NEXT: psraw $8, %xmm6<br>
; SSE2-NEXT: pmullw %xmm4, %xmm6<br>
; SSE2-NEXT: pand %xmm5, %xmm6<br>
@@ -806,7 +806,7 @@ define <64 x i8> @mul_v64i8c(<64 x i8> %<br>
; SSE2-NEXT: pand %xmm5, %xmm2<br>
; SSE2-NEXT: packuswb %xmm6, %xmm2<br>
; SSE2-NEXT: movdqa %xmm3, %xmm6<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm6 = xmm6[8,8,9,9,10,10,11,11,12,<wbr>12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm6 = xmm6[8],xmm3[8],xmm6[9],xmm3[<wbr>9],xmm6[10],xmm3[10],xmm6[11],<wbr>xmm3[11],xmm6[12],xmm3[12],<wbr>xmm6[13],xmm3[13],xmm6[14],<wbr>xmm3[14],xmm6[15],xmm3[15]<br>
; SSE2-NEXT: psraw $8, %xmm6<br>
; SSE2-NEXT: pmullw %xmm4, %xmm6<br>
; SSE2-NEXT: pand %xmm5, %xmm6<br>
@@ -821,7 +821,7 @@ define <64 x i8> @mul_v64i8c(<64 x i8> %<br>
; SSE41: # BB#0: # %entry<br>
; SSE41-NEXT: movdqa %xmm1, %xmm4<br>
; SSE41-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE41-NEXT: pmovsxbw %xmm1, %xmm0<br>
+; SSE41-NEXT: pmovsxbw %xmm0, %xmm0<br>
; SSE41-NEXT: movdqa {{.*#+}} xmm6 = [117,117,117,117,117,117,117,<wbr>117]<br>
; SSE41-NEXT: pmullw %xmm6, %xmm0<br>
; SSE41-NEXT: movdqa {{.*#+}} xmm7 = [255,255,255,255,255,255,255,<wbr>255]<br>
@@ -939,10 +939,10 @@ define <64 x i8> @mul_v64i8(<64 x i8> %i<br>
; SSE2-LABEL: mul_v64i8:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm4, %xmm8<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm8 = xmm8[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm8 = xmm8[8],xmm4[8],xmm8[9],xmm4[9],xmm8[10],xmm4[10],xmm8[11],xmm4[11],xmm8[12],xmm4[12],xmm8[13],xmm4[13],xmm8[14],xmm4[14],xmm8[15],xmm4[15]<br>
; SSE2-NEXT: psraw $8, %xmm8<br>
; SSE2-NEXT: movdqa %xmm0, %xmm9<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm9 = xmm9[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm9 = xmm9[8],xmm0[8],xmm9[9],xmm0[9],xmm9[10],xmm0[10],xmm9[11],xmm0[11],xmm9[12],xmm0[12],xmm9[13],xmm0[13],xmm9[14],xmm0[14],xmm9[15],xmm0[15]<br>
; SSE2-NEXT: psraw $8, %xmm9<br>
; SSE2-NEXT: pmullw %xmm8, %xmm9<br>
; SSE2-NEXT: movdqa {{.*#+}} xmm8 = [255,255,255,255,255,255,255,255]<br>
@@ -955,10 +955,10 @@ define <64 x i8> @mul_v64i8(<64 x i8> %i<br>
; SSE2-NEXT: pand %xmm8, %xmm0<br>
; SSE2-NEXT: packuswb %xmm9, %xmm0<br>
; SSE2-NEXT: movdqa %xmm5, %xmm9<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm9 = xmm9[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm9 = xmm9[8],xmm5[8],xmm9[9],xmm5[9],xmm9[10],xmm5[10],xmm9[11],xmm5[11],xmm9[12],xmm5[12],xmm9[13],xmm5[13],xmm9[14],xmm5[14],xmm9[15],xmm5[15]<br>
; SSE2-NEXT: psraw $8, %xmm9<br>
; SSE2-NEXT: movdqa %xmm1, %xmm4<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm4 = xmm4[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm4 = xmm4[8],xmm1[8],xmm4[9],xmm1[9],xmm4[10],xmm1[10],xmm4[11],xmm1[11],xmm4[12],xmm1[12],xmm4[13],xmm1[13],xmm4[14],xmm1[14],xmm4[15],xmm1[15]<br>
; SSE2-NEXT: psraw $8, %xmm4<br>
; SSE2-NEXT: pmullw %xmm9, %xmm4<br>
; SSE2-NEXT: pand %xmm8, %xmm4<br>
@@ -970,10 +970,10 @@ define <64 x i8> @mul_v64i8(<64 x i8> %i<br>
; SSE2-NEXT: pand %xmm8, %xmm1<br>
; SSE2-NEXT: packuswb %xmm4, %xmm1<br>
; SSE2-NEXT: movdqa %xmm6, %xmm4<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm4 = xmm4[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm4 = xmm4[8],xmm6[8],xmm4[9],xmm6[9],xmm4[10],xmm6[10],xmm4[11],xmm6[11],xmm4[12],xmm6[12],xmm4[13],xmm6[13],xmm4[14],xmm6[14],xmm4[15],xmm6[15]<br>
; SSE2-NEXT: psraw $8, %xmm4<br>
; SSE2-NEXT: movdqa %xmm2, %xmm5<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm5 = xmm5[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm5 = xmm5[8],xmm2[8],xmm5[9],xmm2[9],xmm5[10],xmm2[10],xmm5[11],xmm2[11],xmm5[12],xmm2[12],xmm5[13],xmm2[13],xmm5[14],xmm2[14],xmm5[15],xmm2[15]<br>
; SSE2-NEXT: psraw $8, %xmm5<br>
; SSE2-NEXT: pmullw %xmm4, %xmm5<br>
; SSE2-NEXT: pand %xmm8, %xmm5<br>
@@ -985,10 +985,10 @@ define <64 x i8> @mul_v64i8(<64 x i8> %i<br>
; SSE2-NEXT: pand %xmm8, %xmm2<br>
; SSE2-NEXT: packuswb %xmm5, %xmm2<br>
; SSE2-NEXT: movdqa %xmm7, %xmm4<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm4 = xmm4[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm4 = xmm4[8],xmm7[8],xmm4[9],xmm7[9],xmm4[10],xmm7[10],xmm4[11],xmm7[11],xmm4[12],xmm7[12],xmm4[13],xmm7[13],xmm4[14],xmm7[14],xmm4[15],xmm7[15]<br>
; SSE2-NEXT: psraw $8, %xmm4<br>
; SSE2-NEXT: movdqa %xmm3, %xmm5<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm5 = xmm5[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm5 = xmm5[8],xmm3[8],xmm5[9],xmm3[9],xmm5[10],xmm3[10],xmm5[11],xmm3[11],xmm5[12],xmm3[12],xmm5[13],xmm3[13],xmm5[14],xmm3[14],xmm5[15],xmm3[15]<br>
; SSE2-NEXT: psraw $8, %xmm5<br>
; SSE2-NEXT: pmullw %xmm4, %xmm5<br>
; SSE2-NEXT: pand %xmm8, %xmm5<br>
@@ -1006,7 +1006,7 @@ define <64 x i8> @mul_v64i8(<64 x i8> %i<br>
; SSE41-NEXT: movdqa %xmm1, %xmm8<br>
; SSE41-NEXT: movdqa %xmm0, %xmm1<br>
; SSE41-NEXT: pmovsxbw %xmm4, %xmm9<br>
-; SSE41-NEXT: pmovsxbw %xmm1, %xmm0<br>
+; SSE41-NEXT: pmovsxbw %xmm0, %xmm0<br>
; SSE41-NEXT: pmullw %xmm9, %xmm0<br>
; SSE41-NEXT: movdqa {{.*#+}} xmm9 = [255,255,255,255,255,255,255,255]<br>
; SSE41-NEXT: pand %xmm9, %xmm0<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/powi.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/powi.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/powi.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/powi.ll Wed Aug 16 13:50:01 2017<br>
@@ -5,7 +5,7 @@ define double @pow_wrapper(double %a) no<br>
; CHECK-LABEL: pow_wrapper:<br>
; CHECK: # BB#0:<br>
; CHECK-NEXT: movapd %xmm0, %xmm1<br>
-; CHECK-NEXT: mulsd %xmm1, %xmm1<br>
+; CHECK-NEXT: mulsd %xmm0, %xmm1<br>
; CHECK-NEXT: mulsd %xmm1, %xmm0<br>
; CHECK-NEXT: mulsd %xmm1, %xmm1<br>
; CHECK-NEXT: mulsd %xmm1, %xmm0<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/pr11334.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pr11334.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/pr11334.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/pr11334.ll Wed Aug 16 13:50:01 2017<br>
@@ -25,7 +25,7 @@ define <3 x double> @v3f2d_ext_vec(<3 x<br>
; SSE-NEXT: cvtps2pd %xmm0, %xmm0<br>
; SSE-NEXT: movlps %xmm0, -{{[0-9]+}}(%rsp)<br>
; SSE-NEXT: movaps %xmm2, %xmm1<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm1[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm1 = xmm2[1],xmm1[1]<br>
; SSE-NEXT: fldl -{{[0-9]+}}(%rsp)<br>
; SSE-NEXT: movaps %xmm2, %xmm0<br>
; SSE-NEXT: retq<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/pr29112.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pr29112.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/pr29112.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/pr29112.ll Wed Aug 16 13:50:01 2017<br>
@@ -49,16 +49,16 @@ define <4 x float> @bar(<4 x float>* %a1<br>
; CHECK-NEXT: vinsertps {{.*#+}} xmm2 = xmm9[0,1],xmm2[3],xmm9[3]<br>
; CHECK-NEXT: vinsertps {{.*#+}} xmm2 = xmm2[0,1,2],xmm12[0]<br>
; CHECK-NEXT: vaddps %xmm3, %xmm2, %xmm2<br>
-; CHECK-NEXT: vmovaps %xmm15, %xmm1<br>
-; CHECK-NEXT: vmovaps %xmm1, {{[0-9]+}}(%rsp) # 16-byte Spill<br>
-; CHECK-NEXT: vaddps %xmm0, %xmm1, %xmm9<br>
+; CHECK-NEXT: vmovaps %xmm15, {{[0-9]+}}(%rsp) # 16-byte Spill<br>
+; CHECK-NEXT: vaddps %xmm0, %xmm15, %xmm9<br>
; CHECK-NEXT: vaddps %xmm14, %xmm10, %xmm0<br>
-; CHECK-NEXT: vaddps %xmm1, %xmm1, %xmm8<br>
+; CHECK-NEXT: vaddps %xmm15, %xmm15, %xmm8<br>
; CHECK-NEXT: vaddps %xmm11, %xmm3, %xmm3<br>
; CHECK-NEXT: vaddps %xmm0, %xmm3, %xmm0<br>
-; CHECK-NEXT: vaddps %xmm0, %xmm1, %xmm0<br>
+; CHECK-NEXT: vaddps %xmm0, %xmm15, %xmm0<br>
; CHECK-NEXT: vmovaps %xmm8, {{[0-9]+}}(%rsp)<br>
; CHECK-NEXT: vmovaps %xmm9, (%rsp)<br>
+; CHECK-NEXT: vmovaps %xmm15, %xmm1<br>
; CHECK-NEXT: vmovaps {{[0-9]+}}(%rsp), %xmm3 # 16-byte Reload<br>
; CHECK-NEXT: vzeroupper<br>
; CHECK-NEXT: callq foo<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/psubus.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/psubus.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/psubus.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/psubus.ll Wed Aug 16 13:50:01 2017<br>
@@ -638,7 +638,7 @@ define <16 x i8> @test14(<16 x i8> %x, <<br>
; SSE41-LABEL: test14:<br>
; SSE41: ## BB#0: ## %vector.ph<br>
; SSE41-NEXT: movdqa %xmm0, %xmm5<br>
-; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm5[1,1,2,3]<br>
+; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm0[1,1,2,3]<br>
; SSE41-NEXT: pmovzxbd {{.*#+}} xmm8 = xmm0[0],zero,zero,zero,xmm0[1],zero,zero,zero,xmm0[2],zero,zero,zero,xmm0[3],zero,zero,zero<br>
; SSE41-NEXT: pmovzxbd {{.*#+}} xmm0 = xmm5[0],zero,zero,zero,xmm5[1],zero,zero,zero,xmm5[2],zero,zero,zero,xmm5[3],zero,zero,zero<br>
; SSE41-NEXT: pshufd {{.*#+}} xmm6 = xmm5[2,3,0,1]<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/select.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/select.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/select.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/select.ll Wed Aug 16 13:50:01 2017<br>
@@ -23,8 +23,7 @@ define i32 @test1(%0* %p, %0* %q, i1 %r)<br>
; MCU-NEXT: jne .LBB0_1<br>
; MCU-NEXT: # BB#2:<br>
; MCU-NEXT: addl $8, %edx<br>
-; MCU-NEXT: movl %edx, %eax<br>
-; MCU-NEXT: movl (%eax), %eax<br>
+; MCU-NEXT: movl (%edx), %eax<br>
; MCU-NEXT: retl<br>
; MCU-NEXT: .LBB0_1:<br>
; MCU-NEXT: addl $8, %eax<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/shrink-wrap-chkstk.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/shrink-wrap-chkstk.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/shrink-wrap-chkstk.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/shrink-wrap-chkstk.ll Wed Aug 16 13:50:01 2017<br>
@@ -61,7 +61,7 @@ false:<br>
<br>
; CHECK-LABEL: @use_eax_before_prologue@8: # @use_eax_before_prologue<br>
; CHECK: movl %ecx, %eax<br>
-; CHECK: cmpl %edx, %eax<br>
+; CHECK: cmpl %edx, %ecx<br>
; CHECK: jge LBB1_2<br>
; CHECK: pushl %eax<br>
; CHECK: movl $4092, %eax<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/sqrt-fastmath.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sqrt-fastmath.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/sqrt-fastmath.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/sqrt-fastmath.ll Wed Aug 16 13:50:01 2017<br>
@@ -132,7 +132,7 @@ define float @f32_estimate(float %x) #1<br>
; SSE: # BB#0:<br>
; SSE-NEXT: rsqrtss %xmm0, %xmm1<br>
; SSE-NEXT: movaps %xmm1, %xmm2<br>
-; SSE-NEXT: mulss %xmm2, %xmm2<br>
+; SSE-NEXT: mulss %xmm1, %xmm2<br>
; SSE-NEXT: mulss %xmm0, %xmm2<br>
; SSE-NEXT: addss {{.*}}(%rip), %xmm2<br>
; SSE-NEXT: mulss {{.*}}(%rip), %xmm1<br>
@@ -178,7 +178,7 @@ define <4 x float> @v4f32_estimate(<4 x<br>
; SSE: # BB#0:<br>
; SSE-NEXT: rsqrtps %xmm0, %xmm1<br>
; SSE-NEXT: movaps %xmm1, %xmm2<br>
-; SSE-NEXT: mulps %xmm2, %xmm2<br>
+; SSE-NEXT: mulps %xmm1, %xmm2<br>
; SSE-NEXT: mulps %xmm0, %xmm2<br>
; SSE-NEXT: addps {{.*}}(%rip), %xmm2<br>
; SSE-NEXT: mulps {{.*}}(%rip), %xmm1<br>
@@ -228,7 +228,7 @@ define <8 x float> @v8f32_estimate(<8 x<br>
; SSE-NEXT: rsqrtps %xmm0, %xmm3<br>
; SSE-NEXT: movaps {{.*#+}} xmm4 = [-5.000000e-01,-5.000000e-01,-5.000000e-01,-5.000000e-01]<br>
; SSE-NEXT: movaps %xmm3, %xmm2<br>
-; SSE-NEXT: mulps %xmm2, %xmm2<br>
+; SSE-NEXT: mulps %xmm3, %xmm2<br>
; SSE-NEXT: mulps %xmm0, %xmm2<br>
; SSE-NEXT: movaps {{.*#+}} xmm0 = [-3.000000e+00,-3.000000e+00,-3.000000e+00,-3.000000e+00]<br>
; SSE-NEXT: addps %xmm0, %xmm2<br>
@@ -236,7 +236,7 @@ define <8 x float> @v8f32_estimate(<8 x<br>
; SSE-NEXT: mulps %xmm3, %xmm2<br>
; SSE-NEXT: rsqrtps %xmm1, %xmm5<br>
; SSE-NEXT: movaps %xmm5, %xmm3<br>
-; SSE-NEXT: mulps %xmm3, %xmm3<br>
+; SSE-NEXT: mulps %xmm5, %xmm3<br>
; SSE-NEXT: mulps %xmm1, %xmm3<br>
; SSE-NEXT: addps %xmm0, %xmm3<br>
; SSE-NEXT: mulps %xmm4, %xmm3<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/sse-scalar-fp-arith.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse-scalar-fp-arith.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/sse-scalar-fp-arith.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/sse-scalar-fp-arith.ll Wed Aug 16 13:50:01 2017<br>
@@ -1084,8 +1084,7 @@ define <4 x float> @add_ss_mask(<4 x flo<br>
; SSE2-NEXT: testb $1, %dil<br>
; SSE2-NEXT: jne .LBB62_1<br>
; SSE2-NEXT: # BB#2:<br>
-; SSE2-NEXT: movaps %xmm2, %xmm1<br>
-; SSE2-NEXT: movss {{.*#+}} xmm0 = xmm1[0],xmm0[1,2,3]<br>
+; SSE2-NEXT: movss {{.*#+}} xmm0 = xmm2[0],xmm0[1,2,3]<br>
; SSE2-NEXT: retq<br>
; SSE2-NEXT: .LBB62_1:<br>
; SSE2-NEXT: addss %xmm0, %xmm1<br>
@@ -1097,8 +1096,7 @@ define <4 x float> @add_ss_mask(<4 x flo<br>
; SSE41-NEXT: testb $1, %dil<br>
; SSE41-NEXT: jne .LBB62_1<br>
; SSE41-NEXT: # BB#2:<br>
-; SSE41-NEXT: movaps %xmm2, %xmm1<br>
-; SSE41-NEXT: blendps {{.*#+}} xmm0 = xmm1[0],xmm0[1,2,3]<br>
+; SSE41-NEXT: blendps {{.*#+}} xmm0 = xmm2[0],xmm0[1,2,3]<br>
; SSE41-NEXT: retq<br>
; SSE41-NEXT: .LBB62_1:<br>
; SSE41-NEXT: addss %xmm0, %xmm1<br>
@@ -1139,8 +1137,7 @@ define <2 x double> @add_sd_mask(<2 x do<br>
; SSE2-NEXT: testb $1, %dil<br>
; SSE2-NEXT: jne .LBB63_1<br>
; SSE2-NEXT: # BB#2:<br>
-; SSE2-NEXT: movapd %xmm2, %xmm1<br>
-; SSE2-NEXT: movsd {{.*#+}} xmm0 = xmm1[0],xmm0[1]<br>
+; SSE2-NEXT: movsd {{.*#+}} xmm0 = xmm2[0],xmm0[1]<br>
; SSE2-NEXT: retq<br>
; SSE2-NEXT: .LBB63_1:<br>
; SSE2-NEXT: addsd %xmm0, %xmm1<br>
@@ -1152,8 +1149,7 @@ define <2 x double> @add_sd_mask(<2 x do<br>
; SSE41-NEXT: testb $1, %dil<br>
; SSE41-NEXT: jne .LBB63_1<br>
; SSE41-NEXT: # BB#2:<br>
-; SSE41-NEXT: movapd %xmm2, %xmm1<br>
-; SSE41-NEXT: blendpd {{.*#+}} xmm0 = xmm1[0],xmm0[1]<br>
+; SSE41-NEXT: blendpd {{.*#+}} xmm0 = xmm2[0],xmm0[1]<br>
; SSE41-NEXT: retq<br>
; SSE41-NEXT: .LBB63_1:<br>
; SSE41-NEXT: addsd %xmm0, %xmm1<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/sse1.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse1.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/sse1.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/sse1.ll Wed Aug 16 13:50:01 2017<br>
@@ -16,7 +16,7 @@ define <2 x float> @test4(<2 x float> %A<br>
; X32-LABEL: test4:<br>
; X32: # BB#0: # %entry<br>
; X32-NEXT: movaps %xmm0, %xmm2<br>
-; X32-NEXT: shufps {{.*#+}} xmm2 = xmm2[1,1,2,3]<br>
+; X32-NEXT: shufps {{.*#+}} xmm2 = xmm2[1,1],xmm0[2,3]<br>
; X32-NEXT: addss %xmm1, %xmm0<br>
; X32-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1,2,3]<br>
; X32-NEXT: subss %xmm1, %xmm2<br>
@@ -26,7 +26,7 @@ define <2 x float> @test4(<2 x float> %A<br>
; X64-LABEL: test4:<br>
; X64: # BB#0: # %entry<br>
; X64-NEXT: movaps %xmm0, %xmm2<br>
-; X64-NEXT: shufps {{.*#+}} xmm2 = xmm2[1,1,2,3]<br>
+; X64-NEXT: shufps {{.*#+}} xmm2 = xmm2[1,1],xmm0[2,3]<br>
; X64-NEXT: addss %xmm1, %xmm0<br>
; X64-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1,2,3]<br>
; X64-NEXT: subss %xmm1, %xmm2<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/sse3-avx-addsub-2.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse3-avx-addsub-2.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/sse3-avx-addsub-2.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/sse3-avx-addsub-2.ll Wed Aug 16 13:50:01 2017<br>
@@ -406,9 +406,9 @@ define <4 x float> @test16(<4 x float> %<br>
; SSE-NEXT: movaps %xmm0, %xmm2<br>
; SSE-NEXT: subss %xmm0, %xmm2<br>
; SSE-NEXT: movaps %xmm0, %xmm3<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm3[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm0[1],xmm3[1]<br>
; SSE-NEXT: movaps %xmm1, %xmm4<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm4 = xmm4[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm4 = xmm1[1],xmm4[1]<br>
; SSE-NEXT: subss %xmm4, %xmm3<br>
; SSE-NEXT: movshdup {{.*#+}} xmm4 = xmm0[1,1,3,3]<br>
; SSE-NEXT: addss %xmm0, %xmm4<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/statepoint-live-in.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/statepoint-live-in.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/statepoint-live-in.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/statepoint-live-in.ll Wed Aug 16 13:50:01 2017<br>
@@ -126,7 +126,7 @@ define void @test6(i32 %a) gc "statepoin<br>
; CHECK-NEXT: Lcfi11:<br>
; CHECK-NEXT: .cfi_offset %rbx, -16<br>
; CHECK-NEXT: movl %edi, %ebx<br>
-; CHECK-NEXT: movl %ebx, {{[0-9]+}}(%rsp)<br>
+; CHECK-NEXT: movl %edi, {{[0-9]+}}(%rsp)<br>
; CHECK-NEXT: callq _baz<br>
; CHECK-NEXT: Ltmp6:<br>
; CHECK-NEXT: callq _bar<br>
@@ -153,13 +153,13 @@ entry:<br>
; CHECK: .byte 1<br>
; CHECK-NEXT: .byte 0<br>
; CHECK-NEXT: .short 4<br>
-; CHECK-NEXT: .short 6<br>
+; CHECK-NEXT: .short 5<br>
; CHECK-NEXT: .short 0<br>
; CHECK-NEXT: .long 0<br>
; CHECK: .byte 1<br>
; CHECK-NEXT: .byte 0<br>
; CHECK-NEXT: .short 4<br>
-; CHECK-NEXT: .short 3<br>
+; CHECK-NEXT: .short 4<br>
; CHECK-NEXT: .short 0<br>
; CHECK-NEXT: .long 0<br>
; CHECK: Ltmp2-_test2<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/statepoint-stack-usage.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/statepoint-stack-usage.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/statepoint-stack-usage.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/statepoint-stack-usage.ll Wed Aug 16 13:50:01 2017<br>
@@ -61,9 +61,9 @@ define i32 @back_to_back_deopt(i32 %a, i<br>
gc "statepoint-example" {<br>
; CHECK-LABEL: back_to_back_deopt<br>
; The exact stores don't matter, but there need to be three stack slots created<br>
-; CHECK-DAG: movl %ebx, 12(%rsp)<br>
-; CHECK-DAG: movl %ebp, 8(%rsp)<br>
-; CHECK-DAG: movl %r14d, 4(%rsp)<br>
+; CHECK-DAG: movl %edi, 12(%rsp)<br>
+; CHECK-DAG: movl %esi, 8(%rsp)<br>
+; CHECK-DAG: movl %edx, 4(%rsp)<br>
; CHECK: callq<br>
; CHECK-DAG: movl %ebx, 12(%rsp)<br>
; CHECK-DAG: movl %ebp, 8(%rsp)<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vec_fp_to_int.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_fp_to_int.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vec_fp_to_int.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vec_fp_to_int.ll Wed Aug 16 13:50:01 2017<br>
@@ -1018,12 +1018,12 @@ define <4 x i64> @fptosi_4f32_to_4i64(<8<br>
; SSE-NEXT: cvttss2si %xmm0, %rax<br>
; SSE-NEXT: movq %rax, %xmm2<br>
; SSE-NEXT: movaps %xmm0, %xmm1<br>
-; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1,2,3]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1],xmm0[2,3]<br>
; SSE-NEXT: cvttss2si %xmm1, %rax<br>
; SSE-NEXT: movq %rax, %xmm1<br>
; SSE-NEXT: punpcklqdq {{.*#+}} xmm2 = xmm2[0],xmm1[0]<br>
; SSE-NEXT: movaps %xmm0, %xmm1<br>
-; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[3,1,2,3]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[3,1],xmm0[2,3]<br>
; SSE-NEXT: cvttss2si %xmm1, %rax<br>
; SSE-NEXT: movq %rax, %xmm3<br>
; SSE-NEXT: movhlps {{.*#+}} xmm0 = xmm0[1,1]<br>
@@ -1126,12 +1126,12 @@ define <4 x i64> @fptosi_8f32_to_4i64(<8<br>
; SSE-NEXT: cvttss2si %xmm0, %rax<br>
; SSE-NEXT: movq %rax, %xmm2<br>
; SSE-NEXT: movaps %xmm0, %xmm1<br>
-; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1,2,3]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,1],xmm0[2,3]<br>
; SSE-NEXT: cvttss2si %xmm1, %rax<br>
; SSE-NEXT: movq %rax, %xmm1<br>
; SSE-NEXT: punpcklqdq {{.*#+}} xmm2 = xmm2[0],xmm1[0]<br>
; SSE-NEXT: movaps %xmm0, %xmm1<br>
-; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[3,1,2,3]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[3,1],xmm0[2,3]<br>
; SSE-NEXT: cvttss2si %xmm1, %rax<br>
; SSE-NEXT: movq %rax, %xmm3<br>
; SSE-NEXT: movhlps {{.*#+}} xmm0 = xmm0[1,1]<br>
@@ -1316,11 +1316,11 @@ define <4 x i32> @fptoui_4f32_to_4i32(<4<br>
; SSE-LABEL: fptoui_4f32_to_4i32:<br>
; SSE: # BB#0:<br>
; SSE-NEXT: movaps %xmm0, %xmm1<br>
-; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[3,1,2,3]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[3,1],xmm0[2,3]<br>
; SSE-NEXT: cvttss2si %xmm1, %rax<br>
; SSE-NEXT: movd %eax, %xmm1<br>
; SSE-NEXT: movaps %xmm0, %xmm2<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm2[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm2 = xmm0[1],xmm2[1]<br>
; SSE-NEXT: cvttss2si %xmm2, %rax<br>
; SSE-NEXT: movd %eax, %xmm2<br>
; SSE-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm1[0],xmm2[1],xmm1[1]<br>
@@ -1560,7 +1560,7 @@ define <8 x i32> @fptoui_8f32_to_8i32(<8<br>
; SSE-NEXT: cvttss2si %xmm0, %rax<br>
; SSE-NEXT: movd %eax, %xmm0<br>
; SSE-NEXT: movaps %xmm2, %xmm3<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm3[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm2[1],xmm3[1]<br>
; SSE-NEXT: cvttss2si %xmm3, %rax<br>
; SSE-NEXT: movd %eax, %xmm3<br>
; SSE-NEXT: punpckldq {{.*#+}} xmm3 = xmm3[0],xmm0[0],xmm3[1],xmm0[1]<br>
@@ -1572,11 +1572,11 @@ define <8 x i32> @fptoui_8f32_to_8i32(<8<br>
; SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]<br>
; SSE-NEXT: punpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm3[0]<br>
; SSE-NEXT: movaps %xmm1, %xmm2<br>
-; SSE-NEXT: shufps {{.*#+}} xmm2 = xmm2[3,1,2,3]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm2 = xmm2[3,1],xmm1[2,3]<br>
; SSE-NEXT: cvttss2si %xmm2, %rax<br>
; SSE-NEXT: movd %eax, %xmm2<br>
; SSE-NEXT: movaps %xmm1, %xmm3<br>
-; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm3[1,1]<br>
+; SSE-NEXT: movhlps {{.*#+}} xmm3 = xmm1[1],xmm3[1]<br>
; SSE-NEXT: cvttss2si %xmm3, %rax<br>
; SSE-NEXT: movd %eax, %xmm3<br>
; SSE-NEXT: punpckldq {{.*#+}} xmm3 = xmm3[0],xmm2[0],xmm3[1],xmm2[1]<br>
@@ -1687,7 +1687,7 @@ define <4 x i64> @fptoui_4f32_to_4i64(<8<br>
; SSE-NEXT: cmovaeq %rcx, %rdx<br>
; SSE-NEXT: movq %rdx, %xmm2<br>
; SSE-NEXT: movaps %xmm0, %xmm3<br>
-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[1,1,2,3]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[1,1],xmm0[2,3]<br>
; SSE-NEXT: movaps %xmm3, %xmm4<br>
; SSE-NEXT: subss %xmm1, %xmm4<br>
; SSE-NEXT: cvttss2si %xmm4, %rcx<br>
@@ -1698,7 +1698,7 @@ define <4 x i64> @fptoui_4f32_to_4i64(<8<br>
; SSE-NEXT: movq %rdx, %xmm3<br>
; SSE-NEXT: punpcklqdq {{.*#+}} xmm2 = xmm2[0],xmm3[0]<br>
; SSE-NEXT: movaps %xmm0, %xmm3<br>
-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1,2,3]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1],xmm0[2,3]<br>
; SSE-NEXT: movaps %xmm3, %xmm4<br>
; SSE-NEXT: subss %xmm1, %xmm4<br>
; SSE-NEXT: cvttss2si %xmm4, %rcx<br>
@@ -1865,7 +1865,7 @@ define <4 x i64> @fptoui_8f32_to_4i64(<8<br>
; SSE-NEXT: cmovaeq %rcx, %rdx<br>
; SSE-NEXT: movq %rdx, %xmm2<br>
; SSE-NEXT: movaps %xmm0, %xmm3<br>
-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[1,1,2,3]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[1,1],xmm0[2,3]<br>
; SSE-NEXT: movaps %xmm3, %xmm4<br>
; SSE-NEXT: subss %xmm1, %xmm4<br>
; SSE-NEXT: cvttss2si %xmm4, %rcx<br>
@@ -1876,7 +1876,7 @@ define <4 x i64> @fptoui_8f32_to_4i64(<8<br>
; SSE-NEXT: movq %rdx, %xmm3<br>
; SSE-NEXT: punpcklqdq {{.*#+}} xmm2 = xmm2[0],xmm3[0]<br>
; SSE-NEXT: movaps %xmm0, %xmm3<br>
-; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1,2,3]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm3 = xmm3[3,1],xmm0[2,3]<br>
; SSE-NEXT: movaps %xmm3, %xmm4<br>
; SSE-NEXT: subss %xmm1, %xmm4<br>
; SSE-NEXT: cvttss2si %xmm4, %rcx<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vec_int_to_fp.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_int_to_fp.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vec_int_to_fp.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vec_int_to_fp.ll Wed Aug 16 13:50:01 2017<br>
@@ -1611,7 +1611,7 @@ define <4 x float> @uitofp_2i64_to_4f32(<br>
; SSE-LABEL: uitofp_2i64_to_4f32:<br>
; SSE: # BB#0:<br>
; SSE-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE-NEXT: movq %xmm1, %rax<br>
+; SSE-NEXT: movq %xmm0, %rax<br>
; SSE-NEXT: testq %rax, %rax<br>
; SSE-NEXT: js .LBB39_1<br>
; SSE-NEXT: # BB#2:<br>
@@ -1839,7 +1839,7 @@ define <4 x float> @uitofp_4i64_to_4f32_<br>
; SSE-LABEL: uitofp_4i64_to_4f32_undef:<br>
; SSE: # BB#0:<br>
; SSE-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE-NEXT: movq %xmm1, %rax<br>
+; SSE-NEXT: movq %xmm0, %rax<br>
; SSE-NEXT: testq %rax, %rax<br>
; SSE-NEXT: js .LBB41_1<br>
; SSE-NEXT: # BB#2:<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vec_minmax_sint.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_minmax_sint.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vec_minmax_sint.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vec_minmax_sint.ll Wed Aug 16 13:50:01 2017<br>
@@ -437,7 +437,7 @@ define <2 x i64> @max_ge_v2i64(<2 x i64><br>
; SSE42: # BB#0:<br>
; SSE42-NEXT: movdqa %xmm0, %xmm2<br>
; SSE42-NEXT: movdqa %xmm1, %xmm3<br>
-; SSE42-NEXT: pcmpgtq %xmm2, %xmm3<br>
+; SSE42-NEXT: pcmpgtq %xmm0, %xmm3<br>
; SSE42-NEXT: pcmpeqd %xmm0, %xmm0<br>
; SSE42-NEXT: pxor %xmm3, %xmm0<br>
; SSE42-NEXT: blendvpd %xmm0, %xmm2, %xmm1<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vec_shift4.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_shift4.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vec_shift4.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vec_shift4.ll Wed Aug 16 13:50:01 2017<br>
@@ -35,7 +35,7 @@ define <2 x i64> @shl2(<16 x i8> %r, <16<br>
; X32: # BB#0: # %entry<br>
; X32-NEXT: movdqa %xmm0, %xmm2<br>
; X32-NEXT: psllw $5, %xmm1<br>
-; X32-NEXT: movdqa %xmm2, %xmm3<br>
+; X32-NEXT: movdqa %xmm0, %xmm3<br>
; X32-NEXT: psllw $4, %xmm3<br>
; X32-NEXT: pand {{\.LCPI.*}}, %xmm3<br>
; X32-NEXT: movdqa %xmm1, %xmm0<br>
@@ -47,7 +47,7 @@ define <2 x i64> @shl2(<16 x i8> %r, <16<br>
; X32-NEXT: movdqa %xmm1, %xmm0<br>
; X32-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br>
; X32-NEXT: movdqa %xmm2, %xmm3<br>
-; X32-NEXT: paddb %xmm3, %xmm3<br>
+; X32-NEXT: paddb %xmm2, %xmm3<br>
; X32-NEXT: paddb %xmm1, %xmm1<br>
; X32-NEXT: movdqa %xmm1, %xmm0<br>
; X32-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br>
@@ -58,7 +58,7 @@ define <2 x i64> @shl2(<16 x i8> %r, <16<br>
; X64: # BB#0: # %entry<br>
; X64-NEXT: movdqa %xmm0, %xmm2<br>
; X64-NEXT: psllw $5, %xmm1<br>
-; X64-NEXT: movdqa %xmm2, %xmm3<br>
+; X64-NEXT: movdqa %xmm0, %xmm3<br>
; X64-NEXT: psllw $4, %xmm3<br>
; X64-NEXT: pand {{.*}}(%rip), %xmm3<br>
; X64-NEXT: movdqa %xmm1, %xmm0<br>
@@ -70,7 +70,7 @@ define <2 x i64> @shl2(<16 x i8> %r, <16<br>
; X64-NEXT: movdqa %xmm1, %xmm0<br>
; X64-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br>
; X64-NEXT: movdqa %xmm2, %xmm3<br>
-; X64-NEXT: paddb %xmm3, %xmm3<br>
+; X64-NEXT: paddb %xmm2, %xmm3<br>
; X64-NEXT: paddb %xmm1, %xmm1<br>
; X64-NEXT: movdqa %xmm1, %xmm0<br>
; X64-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-blend.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-blend.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-blend.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-blend.ll Wed Aug 16 13:50:01 2017<br>
@@ -992,7 +992,7 @@ define <4 x i32> @blend_neg_logic_v4i32_<br>
; SSE41-NEXT: movdqa %xmm0, %xmm2<br>
; SSE41-NEXT: psrad $31, %xmm1<br>
; SSE41-NEXT: pxor %xmm3, %xmm3<br>
-; SSE41-NEXT: psubd %xmm2, %xmm3<br>
+; SSE41-NEXT: psubd %xmm0, %xmm3<br>
; SSE41-NEXT: movdqa %xmm1, %xmm0<br>
; SSE41-NEXT: blendvps %xmm0, %xmm2, %xmm3<br>
; SSE41-NEXT: movaps %xmm3, %xmm0<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-idiv-sdiv-128.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-idiv-sdiv-128.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-idiv-sdiv-128.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-idiv-sdiv-128.ll Wed Aug 16 13:50:01 2017<br>
@@ -176,13 +176,13 @@ define <16 x i8> @test_div7_16i8(<16 x i<br>
; SSE2-LABEL: test_div7_16i8:<br>
; SSE2: # BB#0:<br>
; SSE2-NEXT: movdqa %xmm0, %xmm2<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8],xmm0[8],xmm2[9],xmm0[9],xmm2[10],xmm0[10],xmm2[11],xmm0[11],xmm2[12],xmm0[12],xmm2[13],xmm0[13],xmm2[14],xmm0[14],xmm2[15],xmm0[15]<br>
; SSE2-NEXT: psraw $8, %xmm2<br>
; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [65427,65427,65427,65427,65427,65427,65427,65427]<br>
; SSE2-NEXT: pmullw %xmm3, %xmm2<br>
; SSE2-NEXT: psrlw $8, %xmm2<br>
; SSE2-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE2-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
+; SSE2-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0],xmm0[0],xmm1[1],xmm0[1],xmm1[2],xmm0[2],xmm1[3],xmm0[3],xmm1[4],xmm0[4],xmm1[5],xmm0[5],xmm1[6],xmm0[6],xmm1[7],xmm0[7]<br>
; SSE2-NEXT: psraw $8, %xmm1<br>
; SSE2-NEXT: pmullw %xmm3, %xmm1<br>
; SSE2-NEXT: psrlw $8, %xmm1<br>
@@ -482,13 +482,13 @@ define <16 x i8> @test_rem7_16i8(<16 x i<br>
; SSE2-LABEL: test_rem7_16i8:<br>
; SSE2: # BB#0:<br>
; SSE2-NEXT: movdqa %xmm0, %xmm2<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8],xmm0[8],xmm2[9],xmm0[9],xmm2[10],xmm0[10],xmm2[11],xmm0[11],xmm2[12],xmm0[12],xmm2[13],xmm0[13],xmm2[14],xmm0[14],xmm2[15],xmm0[15]<br>
; SSE2-NEXT: psraw $8, %xmm2<br>
; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [65427,65427,65427,65427,65427,65427,65427,65427]<br>
; SSE2-NEXT: pmullw %xmm3, %xmm2<br>
; SSE2-NEXT: psrlw $8, %xmm2<br>
; SSE2-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE2-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
+; SSE2-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0],xmm0[0],xmm1[1],xmm0[1],xmm1[2],xmm0[2],xmm1[3],xmm0[3],xmm1[4],xmm0[4],xmm1[5],xmm0[5],xmm1[6],xmm0[6],xmm1[7],xmm0[7]<br>
; SSE2-NEXT: psraw $8, %xmm1<br>
; SSE2-NEXT: pmullw %xmm3, %xmm1<br>
; SSE2-NEXT: psrlw $8, %xmm1<br>
@@ -504,7 +504,7 @@ define <16 x i8> @test_rem7_16i8(<16 x i<br>
; SSE2-NEXT: pand {{.*}}(%rip), %xmm1<br>
; SSE2-NEXT: paddb %xmm2, %xmm1<br>
; SSE2-NEXT: movdqa %xmm1, %xmm2<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8],xmm1[8],xmm2[9],xmm1[9],xmm2[10],xmm1[10],xmm2[11],xmm1[11],xmm2[12],xmm1[12],xmm2[13],xmm1[13],xmm2[14],xmm1[14],xmm2[15],xmm1[15]<br>
; SSE2-NEXT: psraw $8, %xmm2<br>
; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [7,7,7,7,7,7,7,7]<br>
; SSE2-NEXT: pmullw %xmm3, %xmm2<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-idiv-udiv-128.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-idiv-udiv-128.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-idiv-udiv-128.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-idiv-udiv-128.ll Wed Aug 16 13:50:01 2017<br>
@@ -481,7 +481,7 @@ define <16 x i8> @test_rem7_16i8(<16 x i<br>
; SSE2-NEXT: psrlw $2, %xmm1<br>
; SSE2-NEXT: pand {{.*}}(%rip), %xmm1<br>
; SSE2-NEXT: movdqa %xmm1, %xmm2<br>
-; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15]<br>
+; SSE2-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8],xmm1[8],xmm2[9],xmm1[9],xmm2[10],xmm1[10],xmm2[11],xmm1[11],xmm2[12],xmm1[12],xmm2[13],xmm1[13],xmm2[14],xmm1[14],xmm2[15],xmm1[15]<br>
; SSE2-NEXT: psraw $8, %xmm2<br>
; SSE2-NEXT: movdqa {{.*#+}} xmm3 = [7,7,7,7,7,7,7,7]<br>
; SSE2-NEXT: pmullw %xmm3, %xmm2<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-rotate-128.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-rotate-128.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-rotate-128.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-rotate-128.ll Wed Aug 16 13:50:01 2017<br>
@@ -361,7 +361,7 @@ define <8 x i16> @var_rotate_v8i16(<8 x<br>
; SSE41-NEXT: psllw $4, %xmm1<br>
; SSE41-NEXT: por %xmm0, %xmm1<br>
; SSE41-NEXT: movdqa %xmm1, %xmm4<br>
-; SSE41-NEXT: paddw %xmm4, %xmm4<br>
+; SSE41-NEXT: paddw %xmm1, %xmm4<br>
; SSE41-NEXT: movdqa %xmm3, %xmm6<br>
; SSE41-NEXT: psllw $8, %xmm6<br>
; SSE41-NEXT: movdqa %xmm3, %xmm5<br>
@@ -386,7 +386,7 @@ define <8 x i16> @var_rotate_v8i16(<8 x<br>
; SSE41-NEXT: psllw $4, %xmm2<br>
; SSE41-NEXT: por %xmm0, %xmm2<br>
; SSE41-NEXT: movdqa %xmm2, %xmm1<br>
-; SSE41-NEXT: paddw %xmm1, %xmm1<br>
+; SSE41-NEXT: paddw %xmm2, %xmm1<br>
; SSE41-NEXT: movdqa %xmm3, %xmm4<br>
; SSE41-NEXT: psrlw $8, %xmm4<br>
; SSE41-NEXT: movdqa %xmm2, %xmm0<br>
@@ -631,10 +631,10 @@ define <16 x i8> @var_rotate_v16i8(<16 x<br>
; SSE41-NEXT: movdqa {{.*#+}} xmm2 = [8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8]<br>
; SSE41-NEXT: psubb %xmm3, %xmm2<br>
; SSE41-NEXT: psllw $5, %xmm3<br>
-; SSE41-NEXT: movdqa %xmm1, %xmm5<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm5<br>
; SSE41-NEXT: psllw $4, %xmm5<br>
; SSE41-NEXT: pand {{.*}}(%rip), %xmm5<br>
-; SSE41-NEXT: movdqa %xmm1, %xmm4<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm4<br>
; SSE41-NEXT: movdqa %xmm3, %xmm0<br>
; SSE41-NEXT: pblendvb %xmm0, %xmm5, %xmm4<br>
; SSE41-NEXT: movdqa %xmm4, %xmm5<br>
@@ -644,13 +644,13 @@ define <16 x i8> @var_rotate_v16i8(<16 x<br>
; SSE41-NEXT: movdqa %xmm3, %xmm0<br>
; SSE41-NEXT: pblendvb %xmm0, %xmm5, %xmm4<br>
; SSE41-NEXT: movdqa %xmm4, %xmm5<br>
-; SSE41-NEXT: paddb %xmm5, %xmm5<br>
+; SSE41-NEXT: paddb %xmm4, %xmm5<br>
; SSE41-NEXT: paddb %xmm3, %xmm3<br>
; SSE41-NEXT: movdqa %xmm3, %xmm0<br>
; SSE41-NEXT: pblendvb %xmm0, %xmm5, %xmm4<br>
; SSE41-NEXT: psllw $5, %xmm2<br>
; SSE41-NEXT: movdqa %xmm2, %xmm3<br>
-; SSE41-NEXT: paddb %xmm3, %xmm3<br>
+; SSE41-NEXT: paddb %xmm2, %xmm3<br>
; SSE41-NEXT: movdqa %xmm1, %xmm5<br>
; SSE41-NEXT: psrlw $4, %xmm5<br>
; SSE41-NEXT: pand {{.*}}(%rip), %xmm5<br>
@@ -1191,7 +1191,7 @@ define <16 x i8> @constant_rotate_v16i8(<br>
; SSE41-LABEL: constant_rotate_v16i8:<br>
; SSE41: # BB#0:<br>
; SSE41-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE41-NEXT: movdqa %xmm1, %xmm3<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm3<br>
; SSE41-NEXT: psllw $4, %xmm3<br>
; SSE41-NEXT: pand {{.*}}(%rip), %xmm3<br>
; SSE41-NEXT: movdqa {{.*#+}} xmm0 = [8192,24640,41088,57536,57600,41152,24704,8256]<br>
@@ -1203,7 +1203,7 @@ define <16 x i8> @constant_rotate_v16i8(<br>
; SSE41-NEXT: paddb %xmm0, %xmm0<br>
; SSE41-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br>
; SSE41-NEXT: movdqa %xmm2, %xmm3<br>
-; SSE41-NEXT: paddb %xmm3, %xmm3<br>
+; SSE41-NEXT: paddb %xmm2, %xmm3<br>
; SSE41-NEXT: paddb %xmm0, %xmm0<br>
; SSE41-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br>
; SSE41-NEXT: movdqa %xmm1, %xmm3<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-sext.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-sext.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-sext.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-sext.ll Wed Aug 16 13:50:01 2017<br>
@@ -243,7 +243,7 @@ define <8 x i32> @sext_16i8_to_8i32(<16<br>
; SSSE3-LABEL: sext_16i8_to_8i32:<br>
; SSSE3: # BB#0: # %entry<br>
; SSSE3-NEXT: movdqa %xmm0, %xmm1<br>
-; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3],xmm0[4],xmm1[4],xmm0[5],xmm1[5],xmm0[6],xmm1[6],xmm0[7],xmm1[7]<br>
+; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
; SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br>
; SSSE3-NEXT: psrad $24, %xmm0<br>
; SSSE3-NEXT: pshufb {{.*#+}} xmm1 = xmm1[u,u,u,4,u,u,u,5,u,u,u,6,u,u,u,7]<br>
@@ -312,7 +312,7 @@ define <16 x i32> @sext_16i8_to_16i32(<1<br>
; SSSE3-LABEL: sext_16i8_to_16i32:<br>
; SSSE3: # BB#0: # %entry<br>
; SSSE3-NEXT: movdqa %xmm0, %xmm3<br>
-; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1],xmm0[2],xmm3[2],xmm0[3],xmm3[3],xmm0[4],xmm3[4],xmm0[5],xmm3[5],xmm0[6],xmm3[6],xmm0[7],xmm3[7]<br>
+; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
; SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br>
; SSSE3-NEXT: psrad $24, %xmm0<br>
; SSSE3-NEXT: punpckhbw {{.*#+}} xmm1 = xmm1[8],xmm3[8],xmm1[9],xmm3[9],xmm1[10],xmm3[10],xmm1[11],xmm3[11],xmm1[12],xmm3[12],xmm1[13],xmm3[13],xmm1[14],xmm3[14],xmm1[15],xmm3[15]<br>
@@ -443,7 +443,7 @@ define <4 x i64> @sext_16i8_to_4i64(<16<br>
; SSSE3-LABEL: sext_16i8_to_4i64:<br>
; SSSE3: # BB#0: # %entry<br>
; SSSE3-NEXT: movdqa %xmm0, %xmm1<br>
-; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3],xmm0[4],xmm1[4],xmm0[5],xmm1[5],xmm0[6],xmm1[6],xmm0[7],xmm1[7]<br>
+; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
; SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br>
; SSSE3-NEXT: movdqa %xmm0, %xmm2<br>
; SSSE3-NEXT: psrad $31, %xmm2<br>
@@ -499,7 +499,7 @@ define <8 x i64> @sext_16i8_to_8i64(<16<br>
; SSE2-LABEL: sext_16i8_to_8i64:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE2-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3],xmm0[4],xmm1[4],xmm0[5],xmm1[5],xmm0[6],xmm1[6],xmm0[7],xmm1[7]<br>
+; SSE2-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
; SSE2-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br>
; SSE2-NEXT: movdqa %xmm0, %xmm2<br>
; SSE2-NEXT: psrad $31, %xmm2<br>
@@ -1112,7 +1112,7 @@ define <8 x i64> @sext_8i32_to_8i64(<8 x<br>
; SSE2-NEXT: movdqa %xmm1, %xmm2<br>
; SSE2-NEXT: movdqa %xmm0, %xmm3<br>
; SSE2-NEXT: psrad $31, %xmm3<br>
-; SSE2-NEXT: movdqa %xmm2, %xmm4<br>
+; SSE2-NEXT: movdqa %xmm1, %xmm4<br>
; SSE2-NEXT: psrad $31, %xmm4<br>
; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,0,1]<br>
; SSE2-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1]<br>
@@ -1131,7 +1131,7 @@ define <8 x i64> @sext_8i32_to_8i64(<8 x<br>
; SSSE3-NEXT: movdqa %xmm1, %xmm2<br>
; SSSE3-NEXT: movdqa %xmm0, %xmm3<br>
; SSSE3-NEXT: psrad $31, %xmm3<br>
-; SSSE3-NEXT: movdqa %xmm2, %xmm4<br>
+; SSSE3-NEXT: movdqa %xmm1, %xmm4<br>
; SSSE3-NEXT: psrad $31, %xmm4<br>
; SSSE3-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,0,1]<br>
; SSSE3-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1]<br>
@@ -2228,7 +2228,7 @@ define <8 x i32> @load_sext_8i1_to_8i32(<br>
; SSE2-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm0[0],xmm2[1],xmm0[1]<br>
; SSE2-NEXT: punpcklqdq {{.*#+}} xmm1 = xmm1[0],xmm2[0]<br>
; SSE2-NEXT: movdqa %xmm1, %xmm0<br>
-; SSE2-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br>
+; SSE2-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3]<br>
; SSE2-NEXT: pslld $31, %xmm0<br>
; SSE2-NEXT: psrad $31, %xmm0<br>
; SSE2-NEXT: punpckhwd {{.*#+}} xmm1 = xmm1[4],xmm0[4],xmm1[5],xmm0[5],xmm1[6],xmm0[6],xmm1[7],xmm0[7]<br>
@@ -2277,7 +2277,7 @@ define <8 x i32> @load_sext_8i1_to_8i32(<br>
; SSSE3-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm0[0],xmm2[1],xmm0[1]<br>
; SSSE3-NEXT: punpcklqdq {{.*#+}} xmm1 = xmm1[0],xmm2[0]<br>
; SSSE3-NEXT: movdqa %xmm1, %xmm0<br>
-; SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]<br>
+; SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3]<br>
; SSSE3-NEXT: pslld $31, %xmm0<br>
; SSSE3-NEXT: psrad $31, %xmm0<br>
; SSSE3-NEXT: punpckhwd {{.*#+}} xmm1 = xmm1[4],xmm0[4],xmm1[5],xmm0[5],xmm1[6],xmm0[6],xmm1[7],xmm0[7]<br>
@@ -3079,7 +3079,7 @@ define <16 x i16> @load_sext_16i1_to_16i<br>
; SSE2-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1]<br>
; SSE2-NEXT: punpcklqdq {{.*#+}} xmm1 = xmm1[0],xmm0[0]<br>
; SSE2-NEXT: movdqa %xmm1, %xmm0<br>
-; SSE2-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
+; SSE2-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3],xmm0[4],xmm1[4],xmm0[5],xmm1[5],xmm0[6],xmm1[6],xmm0[7],xmm1[7]<br>
; SSE2-NEXT: psllw $15, %xmm0<br>
; SSE2-NEXT: psraw $15, %xmm0<br>
; SSE2-NEXT: punpckhbw {{.*#+}} xmm1 = xmm1[8],xmm0[8],xmm1[9],xmm0[9],xmm1[10],xmm0[10],xmm1[11],xmm0[11],xmm1[12],xmm0[12],xmm1[13],xmm0[13],xmm1[14],xmm0[14],xmm1[15],xmm0[15]<br>
@@ -3168,7 +3168,7 @@ define <16 x i16> @load_sext_16i1_to_16i<br>
; SSSE3-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1]<br>
; SSSE3-NEXT: punpcklqdq {{.*#+}} xmm1 = xmm1[0],xmm0[0]<br>
; SSSE3-NEXT: movdqa %xmm1, %xmm0<br>
-; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
+; SSSE3-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3],xmm0[4],xmm1[4],xmm0[5],xmm1[5],xmm0[6],xmm1[6],xmm0[7],xmm1[7]<br>
; SSSE3-NEXT: psllw $15, %xmm0<br>
; SSSE3-NEXT: psraw $15, %xmm0<br>
; SSSE3-NEXT: punpckhbw {{.*#+}} xmm1 = xmm1[8],xmm0[8],xmm1[9],xmm0[9],xmm1[10],xmm0[10],xmm1[11],xmm0[11],xmm1[12],xmm0[12],xmm1[13],xmm0[13],xmm1[14],xmm0[14],xmm1[15],xmm0[15]<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll Wed Aug 16 13:50:01 2017<br>
@@ -274,7 +274,7 @@ define <8 x i16> @var_shift_v8i16(<8 x i<br>
; SSE41-NEXT: psllw $4, %xmm1<br>
; SSE41-NEXT: por %xmm0, %xmm1<br>
; SSE41-NEXT: movdqa %xmm1, %xmm3<br>
-; SSE41-NEXT: paddw %xmm3, %xmm3<br>
+; SSE41-NEXT: paddw %xmm1, %xmm3<br>
; SSE41-NEXT: movdqa %xmm2, %xmm4<br>
; SSE41-NEXT: psraw $8, %xmm4<br>
; SSE41-NEXT: movdqa %xmm1, %xmm0<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll Wed Aug 16 13:50:01 2017<br>
@@ -245,7 +245,7 @@ define <8 x i16> @var_shift_v8i16(<8 x i<br>
; SSE41-NEXT: psllw $4, %xmm1<br>
; SSE41-NEXT: por %xmm0, %xmm1<br>
; SSE41-NEXT: movdqa %xmm1, %xmm3<br>
-; SSE41-NEXT: paddw %xmm3, %xmm3<br>
+; SSE41-NEXT: paddw %xmm1, %xmm3<br>
; SSE41-NEXT: movdqa %xmm2, %xmm4<br>
; SSE41-NEXT: psrlw $8, %xmm4<br>
; SSE41-NEXT: movdqa %xmm1, %xmm0<br>
@@ -407,7 +407,7 @@ define <16 x i8> @var_shift_v16i8(<16 x<br>
; SSE41: # BB#0:<br>
; SSE41-NEXT: movdqa %xmm0, %xmm2<br>
; SSE41-NEXT: psllw $5, %xmm1<br>
-; SSE41-NEXT: movdqa %xmm2, %xmm3<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm3<br>
; SSE41-NEXT: psrlw $4, %xmm3<br>
; SSE41-NEXT: pand {{.*}}(%rip), %xmm3<br>
; SSE41-NEXT: movdqa %xmm1, %xmm0<br>
@@ -679,7 +679,7 @@ define <16 x i8> @splatvar_shift_v16i8(<<br>
; SSE41-NEXT: pshufb %xmm0, %xmm1<br>
; SSE41-NEXT: psllw $5, %xmm1<br>
; SSE41-NEXT: movdqa %xmm1, %xmm3<br>
-; SSE41-NEXT: paddb %xmm3, %xmm3<br>
+; SSE41-NEXT: paddb %xmm1, %xmm3<br>
; SSE41-NEXT: movdqa %xmm2, %xmm4<br>
; SSE41-NEXT: psrlw $4, %xmm4<br>
; SSE41-NEXT: pand {{.*}}(%rip), %xmm4<br>
@@ -1101,7 +1101,7 @@ define <16 x i8> @constant_shift_v16i8(<<br>
; SSE41-LABEL: constant_shift_v16i8:<br>
; SSE41: # BB#0:<br>
; SSE41-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE41-NEXT: movdqa %xmm1, %xmm2<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm2<br>
; SSE41-NEXT: psrlw $4, %xmm2<br>
; SSE41-NEXT: pand {{.*}}(%rip), %xmm2<br>
; SSE41-NEXT: movdqa {{.*#+}} xmm0 = [8192,24640,41088,57536,49376,32928,16480,32]<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll Wed Aug 16 13:50:01 2017<br>
@@ -202,7 +202,7 @@ define <8 x i16> @var_shift_v8i16(<8 x i<br>
; SSE41-NEXT: psllw $4, %xmm1<br>
; SSE41-NEXT: por %xmm0, %xmm1<br>
; SSE41-NEXT: movdqa %xmm1, %xmm3<br>
-; SSE41-NEXT: paddw %xmm3, %xmm3<br>
+; SSE41-NEXT: paddw %xmm1, %xmm3<br>
; SSE41-NEXT: movdqa %xmm2, %xmm4<br>
; SSE41-NEXT: psllw $8, %xmm4<br>
; SSE41-NEXT: movdqa %xmm1, %xmm0<br>
@@ -361,7 +361,7 @@ define <16 x i8> @var_shift_v16i8(<16 x<br>
; SSE41: # BB#0:<br>
; SSE41-NEXT: movdqa %xmm0, %xmm2<br>
; SSE41-NEXT: psllw $5, %xmm1<br>
-; SSE41-NEXT: movdqa %xmm2, %xmm3<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm3<br>
; SSE41-NEXT: psllw $4, %xmm3<br>
; SSE41-NEXT: pand {{.*}}(%rip), %xmm3<br>
; SSE41-NEXT: movdqa %xmm1, %xmm0<br>
@@ -373,7 +373,7 @@ define <16 x i8> @var_shift_v16i8(<16 x<br>
; SSE41-NEXT: movdqa %xmm1, %xmm0<br>
; SSE41-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br>
; SSE41-NEXT: movdqa %xmm2, %xmm3<br>
-; SSE41-NEXT: paddb %xmm3, %xmm3<br>
+; SSE41-NEXT: paddb %xmm2, %xmm3<br>
; SSE41-NEXT: paddb %xmm1, %xmm1<br>
; SSE41-NEXT: movdqa %xmm1, %xmm0<br>
; SSE41-NEXT: pblendvb %xmm0, %xmm3, %xmm2<br>
@@ -627,7 +627,7 @@ define <16 x i8> @splatvar_shift_v16i8(<<br>
; SSE41-NEXT: pshufb %xmm0, %xmm1<br>
; SSE41-NEXT: psllw $5, %xmm1<br>
; SSE41-NEXT: movdqa %xmm1, %xmm3<br>
-; SSE41-NEXT: paddb %xmm3, %xmm3<br>
+; SSE41-NEXT: paddb %xmm1, %xmm3<br>
; SSE41-NEXT: movdqa %xmm2, %xmm4<br>
; SSE41-NEXT: psllw $4, %xmm4<br>
; SSE41-NEXT: pand {{.*}}(%rip), %xmm4<br>
@@ -639,7 +639,7 @@ define <16 x i8> @splatvar_shift_v16i8(<<br>
; SSE41-NEXT: movdqa %xmm3, %xmm0<br>
; SSE41-NEXT: pblendvb %xmm0, %xmm1, %xmm2<br>
; SSE41-NEXT: movdqa %xmm2, %xmm1<br>
-; SSE41-NEXT: paddb %xmm1, %xmm1<br>
+; SSE41-NEXT: paddb %xmm2, %xmm1<br>
; SSE41-NEXT: paddb %xmm3, %xmm3<br>
; SSE41-NEXT: movdqa %xmm3, %xmm0<br>
; SSE41-NEXT: pblendvb %xmm0, %xmm1, %xmm2<br>
@@ -957,7 +957,7 @@ define <16 x i8> @constant_shift_v16i8(<<br>
; SSE41-LABEL: constant_shift_v16i8:<br>
; SSE41: # BB#0:<br>
; SSE41-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE41-NEXT: movdqa %xmm1, %xmm2<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm2<br>
; SSE41-NEXT: psllw $4, %xmm2<br>
; SSE41-NEXT: pand {{.*}}(%rip), %xmm2<br>
; SSE41-NEXT: movdqa {{.*#+}} xmm0 = [8192,24640,41088,57536,49376,32928,16480,32]<br>
@@ -968,7 +968,7 @@ define <16 x i8> @constant_shift_v16i8(<<br>
; SSE41-NEXT: paddb %xmm0, %xmm0<br>
; SSE41-NEXT: pblendvb %xmm0, %xmm2, %xmm1<br>
; SSE41-NEXT: movdqa %xmm1, %xmm2<br>
-; SSE41-NEXT: paddb %xmm2, %xmm2<br>
+; SSE41-NEXT: paddb %xmm1, %xmm2<br>
; SSE41-NEXT: paddb %xmm0, %xmm0<br>
; SSE41-NEXT: pblendvb %xmm0, %xmm2, %xmm1<br>
; SSE41-NEXT: movdqa %xmm1, %xmm0<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-shuffle-combining.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shuffle-combining.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-shuffle-combining.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-shuffle-combining.ll Wed Aug 16 13:50:01 2017<br>
@@ -2792,7 +2792,7 @@ define <4 x float> @PR22377(<4 x float><br>
; SSE-LABEL: PR22377:<br>
; SSE: # BB#0: # %entry<br>
; SSE-NEXT: movaps %xmm0, %xmm1<br>
-; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,3,1,3]<br>
+; SSE-NEXT: shufps {{.*#+}} xmm1 = xmm1[1,3],xmm0[1,3]<br>
; SSE-NEXT: shufps {{.*#+}} xmm0 = xmm0[0,2,0,2]<br>
; SSE-NEXT: addps %xmm0, %xmm1<br>
; SSE-NEXT: unpcklps {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll Wed Aug 16 13:50:01 2017<br>
@@ -5198,7 +5198,7 @@ define <4 x i32> @mul_add_const_v4i64_v4<br>
; SSE-LABEL: mul_add_const_v4i64_v4i32:<br>
; SSE: # BB#0:<br>
; SSE-NEXT: movdqa %xmm0, %xmm2<br>
-; SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,1,1,3]<br>
+; SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm0[0,1,1,3]<br>
; SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm2[2,1,3,3]<br>
; SSE-NEXT: pshufd {{.*#+}} xmm3 = xmm1[0,1,1,3]<br>
; SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[2,1,3,3]<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-zext.ll<br>
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-zext.ll?rev=311038&r1=311037&r2=311038&view=diff<br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-zext.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-zext.ll Wed Aug 16 13:50:01 2017<br>
@@ -246,7 +246,7 @@ define <16 x i32> @zext_16i8_to_16i32(<1<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm0, %xmm3<br>
; SSE2-NEXT: pxor %xmm4, %xmm4<br>
-; SSE2-NEXT: movdqa %xmm3, %xmm1<br>
+; SSE2-NEXT: movdqa %xmm0, %xmm1<br>
; SSE2-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1],xmm1[2],xmm4[2],xmm1[3],xmm4[3],xmm1[4],xmm4[4],xmm1[5],xmm4[5],xmm1[6],xmm4[6],xmm1[7],xmm4[7]<br>
; SSE2-NEXT: movdqa %xmm1, %xmm0<br>
; SSE2-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm4[0],xmm0[1],xmm4[1],xmm0[2],xmm4[2],xmm0[3],xmm4[3]<br>
@@ -261,7 +261,7 @@ define <16 x i32> @zext_16i8_to_16i32(<1<br>
; SSSE3: # BB#0: # %entry<br>
; SSSE3-NEXT: movdqa %xmm0, %xmm3<br>
; SSSE3-NEXT: pxor %xmm4, %xmm4<br>
-; SSSE3-NEXT: movdqa %xmm3, %xmm1<br>
+; SSSE3-NEXT: movdqa %xmm0, %xmm1<br>
; SSSE3-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1],xmm1[2],xmm4[2],xmm1[3],xmm4[3],xmm1[4],xmm4[4],xmm1[5],xmm4[5],xmm1[6],xmm4[6],xmm1[7],xmm4[7]<br>
; SSSE3-NEXT: movdqa %xmm1, %xmm0<br>
; SSSE3-NEXT: punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm4[0],xmm0[1],xmm4[1],xmm0[2],xmm4[2],xmm0[3],xmm4[3]<br>
@@ -399,7 +399,7 @@ define <8 x i64> @zext_16i8_to_8i64(<16<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm0, %xmm1<br>
; SSE2-NEXT: pxor %xmm4, %xmm4<br>
-; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm1[1,1,2,3]<br>
+; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm0[1,1,2,3]<br>
; SSE2-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1],xmm1[2],xmm4[2],xmm1[3],xmm4[3],xmm1[4],xmm4[4],xmm1[5],xmm4[5],xmm1[6],xmm4[6],xmm1[7],xmm4[7]<br>
; SSE2-NEXT: punpcklwd {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1],xmm1[2],xmm4[2],xmm1[3],xmm4[3]<br>
; SSE2-NEXT: movdqa %xmm1, %xmm0<br>
@@ -700,7 +700,7 @@ define <8 x i64> @zext_8i16_to_8i64(<8 x<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm0, %xmm3<br>
; SSE2-NEXT: pxor %xmm4, %xmm4<br>
-; SSE2-NEXT: movdqa %xmm3, %xmm1<br>
+; SSE2-NEXT: movdqa %xmm0, %xmm1<br>
; SSE2-NEXT: punpcklwd {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1],xmm1[2],xmm4[2],xmm1[3],xmm4[3]<br>
; SSE2-NEXT: movdqa %xmm1, %xmm0<br>
; SSE2-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm4[0],xmm0[1],xmm4[1]<br>
@@ -715,7 +715,7 @@ define <8 x i64> @zext_8i16_to_8i64(<8 x<br>
; SSSE3: # BB#0: # %entry<br>
; SSSE3-NEXT: movdqa %xmm0, %xmm3<br>
; SSSE3-NEXT: pxor %xmm4, %xmm4<br>
-; SSSE3-NEXT: movdqa %xmm3, %xmm1<br>
+; SSSE3-NEXT: movdqa %xmm0, %xmm1<br>
; SSSE3-NEXT: punpcklwd {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1],xmm1[2],xmm4[2],xmm1[3],xmm4[3]<br>
; SSSE3-NEXT: movdqa %xmm1, %xmm0<br>
; SSSE3-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm4[0],xmm0[1],xmm4[1]<br>
@@ -1582,7 +1582,7 @@ define <8 x i32> @shuf_zext_8i16_to_8i32<br>
; SSE41: # BB#0: # %entry<br>
; SSE41-NEXT: movdqa %xmm0, %xmm1<br>
; SSE41-NEXT: pxor %xmm2, %xmm2<br>
-; SSE41-NEXT: pmovzxwd {{.*#+}} xmm0 = xmm1[0],zero,xmm1[1],zero,xmm1[2],zero,xmm1[3],zero<br>
+; SSE41-NEXT: pmovzxwd {{.*#+}} xmm0 = xmm0[0],zero,xmm0[1],zero,xmm0[2],zero,xmm0[3],zero<br>
; SSE41-NEXT: punpckhwd {{.*#+}} xmm1 = xmm1[4],xmm2[4],xmm1[5],xmm2[5],xmm1[6],xmm2[6],xmm1[7],xmm2[7]<br>
; SSE41-NEXT: retq<br>
;<br>
@@ -1630,7 +1630,7 @@ define <4 x i64> @shuf_zext_4i32_to_4i64<br>
; SSE41: # BB#0: # %entry<br>
; SSE41-NEXT: movdqa %xmm0, %xmm1<br>
; SSE41-NEXT: pxor %xmm2, %xmm2<br>
-; SSE41-NEXT: pmovzxdq {{.*#+}} xmm0 = xmm1[0],zero,xmm1[1],zero<br>
+; SSE41-NEXT: pmovzxdq {{.*#+}} xmm0 = xmm0[0],zero,xmm0[1],zero<br>
; SSE41-NEXT: punpckhdq {{.*#+}} xmm1 = xmm1[2],xmm2[2],xmm1[3],xmm2[3]<br>
; SSE41-NEXT: retq<br>
;<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vselect-minmax.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vselect-minmax.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vselect-minmax.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vselect-minmax.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vselect-minmax.ll Wed Aug 16 13:50:01 2017<br>
@@ -3344,12 +3344,12 @@ define <64 x i8> @test98(<64 x i8> %a, <<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm3, %xmm8<br>
; SSE2-NEXT: movdqa %xmm2, %xmm9<br>
-; SSE2-NEXT: movdqa %xmm8, %xmm12<br>
+; SSE2-NEXT: movdqa %xmm3, %xmm12<br>
; SSE2-NEXT: pcmpgtb %xmm7, %xmm12<br>
; SSE2-NEXT: pcmpeqd %xmm13, %xmm13<br>
; SSE2-NEXT: movdqa %xmm12, %xmm3<br>
; SSE2-NEXT: pxor %xmm13, %xmm3<br>
-; SSE2-NEXT: movdqa %xmm9, %xmm14<br>
+; SSE2-NEXT: movdqa %xmm2, %xmm14<br>
; SSE2-NEXT: pcmpgtb %xmm6, %xmm14<br>
; SSE2-NEXT: movdqa %xmm14, %xmm2<br>
; SSE2-NEXT: pxor %xmm13, %xmm2<br>
@@ -3487,12 +3487,12 @@ define <64 x i8> @test100(<64 x i8> %a,<br>
; SSE2-NEXT: movdqa %xmm2, %xmm9<br>
; SSE2-NEXT: movdqa %xmm0, %xmm10<br>
; SSE2-NEXT: movdqa %xmm7, %xmm12<br>
-; SSE2-NEXT: pcmpgtb %xmm8, %xmm12<br>
+; SSE2-NEXT: pcmpgtb %xmm3, %xmm12<br>
; SSE2-NEXT: pcmpeqd %xmm0, %xmm0<br>
; SSE2-NEXT: movdqa %xmm12, %xmm3<br>
; SSE2-NEXT: pxor %xmm0, %xmm3<br>
; SSE2-NEXT: movdqa %xmm6, %xmm13<br>
-; SSE2-NEXT: pcmpgtb %xmm9, %xmm13<br>
+; SSE2-NEXT: pcmpgtb %xmm2, %xmm13<br>
; SSE2-NEXT: movdqa %xmm13, %xmm2<br>
; SSE2-NEXT: pxor %xmm0, %xmm2<br>
; SSE2-NEXT: movdqa %xmm5, %xmm14<br>
@@ -4225,12 +4225,12 @@ define <16 x i32> @test114(<16 x i32> %a<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm3, %xmm8<br>
; SSE2-NEXT: movdqa %xmm2, %xmm9<br>
-; SSE2-NEXT: movdqa %xmm8, %xmm12<br>
+; SSE2-NEXT: movdqa %xmm3, %xmm12<br>
; SSE2-NEXT: pcmpgtd %xmm7, %xmm12<br>
; SSE2-NEXT: pcmpeqd %xmm13, %xmm13<br>
; SSE2-NEXT: movdqa %xmm12, %xmm3<br>
; SSE2-NEXT: pxor %xmm13, %xmm3<br>
-; SSE2-NEXT: movdqa %xmm9, %xmm14<br>
+; SSE2-NEXT: movdqa %xmm2, %xmm14<br>
; SSE2-NEXT: pcmpgtd %xmm6, %xmm14<br>
; SSE2-NEXT: movdqa %xmm14, %xmm2<br>
; SSE2-NEXT: pxor %xmm13, %xmm2<br>
@@ -4368,12 +4368,12 @@ define <16 x i32> @test116(<16 x i32> %a<br>
; SSE2-NEXT: movdqa %xmm2, %xmm9<br>
; SSE2-NEXT: movdqa %xmm0, %xmm10<br>
; SSE2-NEXT: movdqa %xmm7, %xmm12<br>
-; SSE2-NEXT: pcmpgtd %xmm8, %xmm12<br>
+; SSE2-NEXT: pcmpgtd %xmm3, %xmm12<br>
; SSE2-NEXT: pcmpeqd %xmm0, %xmm0<br>
; SSE2-NEXT: movdqa %xmm12, %xmm3<br>
; SSE2-NEXT: pxor %xmm0, %xmm3<br>
; SSE2-NEXT: movdqa %xmm6, %xmm13<br>
-; SSE2-NEXT: pcmpgtd %xmm9, %xmm13<br>
+; SSE2-NEXT: pcmpgtd %xmm2, %xmm13<br>
; SSE2-NEXT: movdqa %xmm13, %xmm2<br>
; SSE2-NEXT: pxor %xmm0, %xmm2<br>
; SSE2-NEXT: movdqa %xmm5, %xmm14<br>
@@ -4890,7 +4890,7 @@ define <8 x i64> @test122(<8 x i64> %a,<br>
; SSE2-LABEL: test122:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm7, %xmm8<br>
-; SSE2-NEXT: movdqa %xmm8, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
; SSE2-NEXT: movdqa %xmm3, %xmm7<br>
; SSE2-NEXT: movdqa %xmm2, %xmm3<br>
; SSE2-NEXT: movdqa %xmm1, %xmm2<br>
@@ -5164,7 +5164,7 @@ define <8 x i64> @test124(<8 x i64> %a,<br>
; SSE2-LABEL: test124:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm7, %xmm11<br>
-; SSE2-NEXT: movdqa %xmm11, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
; SSE2-NEXT: movdqa %xmm3, %xmm7<br>
; SSE2-NEXT: movdqa %xmm2, %xmm3<br>
; SSE2-NEXT: movdqa %xmm1, %xmm2<br>
@@ -5467,7 +5467,7 @@ define <8 x i64> @test126(<8 x i64> %a,<br>
; SSE2-LABEL: test126:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm7, %xmm8<br>
-; SSE2-NEXT: movdqa %xmm8, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
; SSE2-NEXT: movdqa %xmm3, %xmm7<br>
; SSE2-NEXT: movdqa %xmm2, %xmm3<br>
; SSE2-NEXT: movdqa %xmm1, %xmm2<br>
@@ -5795,7 +5795,7 @@ define <8 x i64> @test128(<8 x i64> %a,<br>
; SSE2-LABEL: test128:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm7, %xmm11<br>
-; SSE2-NEXT: movdqa %xmm11, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
; SSE2-NEXT: movdqa %xmm3, %xmm7<br>
; SSE2-NEXT: movdqa %xmm2, %xmm3<br>
; SSE2-NEXT: movdqa %xmm1, %xmm2<br>
@@ -6047,7 +6047,7 @@ define <64 x i8> @test130(<64 x i8> %a,<br>
; SSE2-NEXT: pcmpeqd %xmm13, %xmm13<br>
; SSE2-NEXT: movdqa %xmm12, %xmm9<br>
; SSE2-NEXT: pxor %xmm13, %xmm9<br>
-; SSE2-NEXT: movdqa %xmm8, %xmm14<br>
+; SSE2-NEXT: movdqa %xmm2, %xmm14<br>
; SSE2-NEXT: pcmpgtb %xmm6, %xmm14<br>
; SSE2-NEXT: movdqa %xmm14, %xmm2<br>
; SSE2-NEXT: pxor %xmm13, %xmm2<br>
@@ -6190,7 +6190,7 @@ define <64 x i8> @test132(<64 x i8> %a,<br>
; SSE2-NEXT: movdqa %xmm12, %xmm9<br>
; SSE2-NEXT: pxor %xmm0, %xmm9<br>
; SSE2-NEXT: movdqa %xmm6, %xmm13<br>
-; SSE2-NEXT: pcmpgtb %xmm8, %xmm13<br>
+; SSE2-NEXT: pcmpgtb %xmm2, %xmm13<br>
; SSE2-NEXT: movdqa %xmm13, %xmm2<br>
; SSE2-NEXT: pxor %xmm0, %xmm2<br>
; SSE2-NEXT: movdqa %xmm5, %xmm14<br>
@@ -6941,7 +6941,7 @@ define <16 x i32> @test146(<16 x i32> %a<br>
; SSE2-NEXT: pcmpeqd %xmm13, %xmm13<br>
; SSE2-NEXT: movdqa %xmm12, %xmm9<br>
; SSE2-NEXT: pxor %xmm13, %xmm9<br>
-; SSE2-NEXT: movdqa %xmm8, %xmm14<br>
+; SSE2-NEXT: movdqa %xmm2, %xmm14<br>
; SSE2-NEXT: pcmpgtd %xmm6, %xmm14<br>
; SSE2-NEXT: movdqa %xmm14, %xmm2<br>
; SSE2-NEXT: pxor %xmm13, %xmm2<br>
@@ -7084,7 +7084,7 @@ define <16 x i32> @test148(<16 x i32> %a<br>
; SSE2-NEXT: movdqa %xmm12, %xmm9<br>
; SSE2-NEXT: pxor %xmm0, %xmm9<br>
; SSE2-NEXT: movdqa %xmm6, %xmm13<br>
-; SSE2-NEXT: pcmpgtd %xmm8, %xmm13<br>
+; SSE2-NEXT: pcmpgtd %xmm2, %xmm13<br>
; SSE2-NEXT: movdqa %xmm13, %xmm2<br>
; SSE2-NEXT: pxor %xmm0, %xmm2<br>
; SSE2-NEXT: movdqa %xmm5, %xmm14<br>
@@ -7610,7 +7610,7 @@ define <8 x i64> @test154(<8 x i64> %a,<br>
; SSE2-LABEL: test154:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm7, %xmm8<br>
-; SSE2-NEXT: movdqa %xmm8, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
; SSE2-NEXT: movdqa %xmm3, %xmm7<br>
; SSE2-NEXT: movdqa %xmm2, %xmm3<br>
; SSE2-NEXT: movdqa %xmm1, %xmm2<br>
@@ -7882,7 +7882,7 @@ define <8 x i64> @test156(<8 x i64> %a,<br>
; SSE2-LABEL: test156:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm7, %xmm11<br>
-; SSE2-NEXT: movdqa %xmm11, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
; SSE2-NEXT: movdqa %xmm3, %xmm7<br>
; SSE2-NEXT: movdqa %xmm2, %xmm3<br>
; SSE2-NEXT: movdqa %xmm1, %xmm2<br>
@@ -8183,7 +8183,7 @@ define <8 x i64> @test158(<8 x i64> %a,<br>
; SSE2-LABEL: test158:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm7, %xmm8<br>
-; SSE2-NEXT: movdqa %xmm8, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
; SSE2-NEXT: movdqa %xmm3, %xmm7<br>
; SSE2-NEXT: movdqa %xmm2, %xmm3<br>
; SSE2-NEXT: movdqa %xmm1, %xmm2<br>
@@ -8509,7 +8509,7 @@ define <8 x i64> @test160(<8 x i64> %a,<br>
; SSE2-LABEL: test160:<br>
; SSE2: # BB#0: # %entry<br>
; SSE2-NEXT: movdqa %xmm7, %xmm11<br>
-; SSE2-NEXT: movdqa %xmm11, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
+; SSE2-NEXT: movdqa %xmm7, -{{[0-9]+}}(%rsp) # 16-byte Spill<br>
; SSE2-NEXT: movdqa %xmm3, %xmm7<br>
; SSE2-NEXT: movdqa %xmm2, %xmm3<br>
; SSE2-NEXT: movdqa %xmm1, %xmm2<br>
@@ -10289,7 +10289,7 @@ define <2 x i64> @test180(<2 x i64> %a,<br>
; SSE4: # BB#0: # %entry<br>
; SSE4-NEXT: movdqa %xmm0, %xmm2<br>
; SSE4-NEXT: movdqa %xmm1, %xmm3<br>
-; SSE4-NEXT: pcmpgtq %xmm2, %xmm3<br>
+; SSE4-NEXT: pcmpgtq %xmm0, %xmm3<br>
; SSE4-NEXT: pcmpeqd %xmm0, %xmm0<br>
; SSE4-NEXT: pxor %xmm3, %xmm0<br>
; SSE4-NEXT: blendvpd %xmm0, %xmm2, %xmm1<br>
@@ -10768,7 +10768,7 @@ define <2 x i64> @test188(<2 x i64> %a,<br>
; SSE4: # BB#0: # %entry<br>
; SSE4-NEXT: movdqa %xmm0, %xmm2<br>
; SSE4-NEXT: movdqa %xmm1, %xmm3<br>
-; SSE4-NEXT: pcmpgtq %xmm2, %xmm3<br>
+; SSE4-NEXT: pcmpgtq %xmm0, %xmm3<br>
; SSE4-NEXT: pcmpeqd %xmm0, %xmm0<br>
; SSE4-NEXT: pxor %xmm3, %xmm0<br>
; SSE4-NEXT: blendvpd %xmm0, %xmm1, %xmm2<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/widen_conv-3.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/widen_conv-3.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/widen_conv-3.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/widen_conv-3.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/widen_conv-3.ll Wed Aug 16 13:50:01 2017<br>
@@ -74,7 +74,7 @@ define void @convert_v3i8_to_v3f32(<3 x<br>
; X86-SSE2-NEXT: cvtdq2ps %xmm0, %xmm0<br>
; X86-SSE2-NEXT: movss %xmm0, (%eax)<br>
; X86-SSE2-NEXT: movaps %xmm0, %xmm1<br>
-; X86-SSE2-NEXT: movhlps {{.*#+}} xmm1 = xmm1[1,1]<br>
+; X86-SSE2-NEXT: movhlps {{.*#+}} xmm1 = xmm0[1],xmm1[1]<br>
; X86-SSE2-NEXT: movss %xmm1, 8(%eax)<br>
; X86-SSE2-NEXT: shufps {{.*#+}} xmm0 = xmm0[1,1,2,3]<br>
; X86-SSE2-NEXT: movss %xmm0, 4(%eax)<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/widen_conv-4.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/widen_conv-4.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/widen_conv-4.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/widen_conv-4.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/widen_conv-4.ll Wed Aug 16 13:50:01 2017<br>
@@ -19,7 +19,7 @@ define void @convert_v7i16_v7f32(<7 x fl<br>
; X86-SSE2-NEXT: movups %xmm0, (%eax)<br>
; X86-SSE2-NEXT: movss %xmm2, 16(%eax)<br>
; X86-SSE2-NEXT: movaps %xmm2, %xmm0<br>
-; X86-SSE2-NEXT: movhlps {{.*#+}} xmm0 = xmm0[1,1]<br>
+; X86-SSE2-NEXT: movhlps {{.*#+}} xmm0 = xmm2[1],xmm0[1]<br>
; X86-SSE2-NEXT: movss %xmm0, 24(%eax)<br>
; X86-SSE2-NEXT: shufps {{.*#+}} xmm2 = xmm2[1,1,2,3]<br>
; X86-SSE2-NEXT: movss %xmm2, 20(%eax)<br>
@@ -100,7 +100,7 @@ define void @convert_v3i8_to_v3f32(<3 x<br>
; X86-SSE2-NEXT: cvtdq2ps %xmm0, %xmm0<br>
; X86-SSE2-NEXT: movss %xmm0, (%eax)<br>
; X86-SSE2-NEXT: movaps %xmm0, %xmm1<br>
-; X86-SSE2-NEXT: movhlps {{.*#+}} xmm1 = xmm1[1,1]<br>
+; X86-SSE2-NEXT: movhlps {{.*#+}} xmm1 = xmm0[1],xmm1[1]<br>
; X86-SSE2-NEXT: movss %xmm1, 8(%eax)<br>
; X86-SSE2-NEXT: shufps {{.*#+}} xmm0 = xmm0[1,1,2,3]<br>
; X86-SSE2-NEXT: movss %xmm0, 4(%eax)<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/x86-shrink-wrap-unwind.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/x86-shrink-wrap-unwind.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/x86-shrink-wrap-unwind.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/x86-shrink-wrap-unwind.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/x86-shrink-wrap-unwind.ll Wed Aug 16 13:50:01 2017<br>
@@ -23,7 +23,7 @@ target triple = "x86_64-apple-macosx"<br>
; Compare the arguments and jump to exit.<br>
; After the prologue is set.<br>
; CHECK: movl %edi, [[ARG0CPY:%e[a-z]+]]<br>
-; CHECK-NEXT: cmpl %esi, [[ARG0CPY]]<br>
+; CHECK-NEXT: cmpl %esi, %edi<br>
; CHECK-NEXT: jge [[EXIT_LABEL:LBB[0-9_]+]]<br>
;<br>
; Store %a in the alloca.<br>
@@ -69,7 +69,7 @@ attributes #0 = { "no-frame-pointer-elim<br>
; Compare the arguments and jump to exit.<br>
; After the prologue is set.<br>
; CHECK: movl %edi, [[ARG0CPY:%e[a-z]+]]<br>
-; CHECK-NEXT: cmpl %esi, [[ARG0CPY]]<br>
+; CHECK-NEXT: cmpl %esi, %edi<br>
; CHECK-NEXT: jge [[EXIT_LABEL:LBB[0-9_]+]]<br>
;<br>
; Prologue code.<br>
@@ -115,7 +115,7 @@ attributes #1 = { "no-frame-pointer-elim<br>
; Compare the arguments and jump to exit.<br>
; After the prologue is set.<br>
; CHECK: movl %edi, [[ARG0CPY:%e[a-z]+]]<br>
-; CHECK-NEXT: cmpl %esi, [[ARG0CPY]]<br>
+; CHECK-NEXT: cmpl %esi, %edi<br>
; CHECK-NEXT: jge [[EXIT_LABEL:LBB[0-9_]+]]<br>
;<br>
; Prologue code.<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/x86-shrink-wrapping.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/x86-shrink-wrapping.ll?rev=311038&r1=311037&r2=311038&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/x86-shrink-wrapping.ll?rev=311038&r1=311037&r2=311038&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/x86-shrink-wrapping.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/x86-shrink-wrapping.ll Wed Aug 16 13:50:01 2017<br>
@@ -17,7 +17,7 @@ target triple = "x86_64-apple-macosx"<br>
; Compare the arguments and jump to exit.<br>
; No prologue needed.<br>
; ENABLE: movl %edi, [[ARG0CPY:%e[a-z]+]]<br>
-; ENABLE-NEXT: cmpl %esi, [[ARG0CPY]]<br>
+; ENABLE-NEXT: cmpl %esi, %edi<br>
; ENABLE-NEXT: jge [[EXIT_LABEL:LBB[0-9_]+]]<br>
;<br>
; Prologue code.<br>
@@ -27,7 +27,7 @@ target triple = "x86_64-apple-macosx"<br>
; Compare the arguments and jump to exit.<br>
; After the prologue is set.<br>
; DISABLE: movl %edi, [[ARG0CPY:%e[a-z]+]]<br>
-; DISABLE-NEXT: cmpl %esi, [[ARG0CPY]]<br>
+; DISABLE-NEXT: cmpl %esi, %edi<br>
; DISABLE-NEXT: jge [[EXIT_LABEL:LBB[0-9_]+]]<br>
;<br>
; Store %a in the alloca.<br>
<br>
<br>
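The register renames in the check lines above all follow the pattern this patch introduces: once a register-to-register copy such as movdqa %xmm0, %xmm2 is seen, later reads of the copy destination %xmm2 are forwarded to the copy source %xmm0 for as long as neither register has been redefined (for example, pcmpgtq %xmm2, %xmm3 becomes pcmpgtq %xmm0, %xmm3 in test180 above). As a rough illustration of that forwarding step only, here is a toy sketch; it is not the MachineCopyPropagation implementation, and the Inst type below is invented for the example rather than taken from LLVM's MachineInstr API.<br>
<br>
// Toy sketch of COPY source forwarding, mirroring the renames in the hunks above.
// Not LLVM code: Inst, Defs, and Uses are invented for this example.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Inst {
  std::string Opcode;             // "COPY" for reg-to-reg moves, anything else otherwise
  std::vector<std::string> Defs;  // registers written
  std::vector<std::string> Uses;  // registers read
};

// Rewrite reads of a copy's destination to read its source while both registers
// are unmodified, e.g. "pcmpgtq %xmm2, %xmm3" -> "pcmpgtq %xmm0, %xmm3"
// after "movdqa %xmm0, %xmm2".
void forwardCopySources(std::vector<Inst> &Block) {
  std::map<std::string, std::string> CopyOf;  // copy destination -> copy source
  for (Inst &I : Block) {
    // Forward each use through a still-live copy, if any.
    for (std::string &U : I.Uses) {
      auto It = CopyOf.find(U);
      if (It != CopyOf.end())
        U = It->second;
    }
    // A definition invalidates any copy whose source or destination it clobbers.
    for (const std::string &D : I.Defs)
      for (auto It = CopyOf.begin(); It != CopyOf.end();) {
        if (It->first == D || It->second == D)
          It = CopyOf.erase(It);
        else
          ++It;
      }
    // Finally, record a new register-to-register copy.
    if (I.Opcode == "COPY")
      CopyOf[I.Defs[0]] = I.Uses[0];
  }
}

int main() {
  std::vector<Inst> Block = {
      {"COPY", {"xmm2"}, {"xmm0"}},             // movdqa %xmm0, %xmm2
      {"PCMPGTQ", {"xmm3"}, {"xmm3", "xmm2"}},  // reads the copy destination xmm2
  };
  forwardCopySources(Block);
  std::cout << Block[1].Uses[1] << "\n";        // prints xmm0: the use was forwarded
}

The real pass additionally has to prove each forwarding is legal (register class and sub-register constraints, operand restrictions, no intervening clobbers) and, when run before virtual registers are rewritten, it uses the forwarding to make the original COPY dead so it can be removed; the sketch above deliberately skips all of that.<br>
<br>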
_______________________________________________<br>
llvm-commits mailing list<br>
<a href="mailto:llvm-commits@lists.llvm.org">llvm-commits@lists.llvm.org</a><br>
<a href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits" rel="noreferrer" target="_blank">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits</a><br>
</blockquote></div><br></div>