[llvm] [CodeGen] Only deduplicate PHIs on critical edges (PR #97064)
Alexis Engelke via llvm-commits
llvm-commits at lists.llvm.org
Fri Jun 28 07:55:48 PDT 2024
https://github.com/aengelke created https://github.com/llvm/llvm-project/pull/97064
PHIElim deduplicates identical PHI nodes to reduce the number of copies inserted. There are two cases:
1. Identical PHI nodes are in different blocks. That's the reason for this optimization; this can't be avoided at the SSA level. A necessary prerequisite is that all basic blocks where such a PHI node could occur have the same predecessors. This implies that all (>= 2) predecessors must have multiple successors, i.e., all edges into the block are critical edges.
2. Identical PHI nodes are in the same block. CSE can remove these. There are a few cases, however, where they still occur regardless:
- expand-large-div-rem creates PHI nodes with large integers, which get lowered into one PHI per MVT. Later, some identical values (zeroes) get folded, resulting in identical PHI nodes.
- peephole-opt occasionally inserts PHIs for the same value.
- Some pseudo instruction emitters create redundant PHI nodes (e.g., AVR's insertShift), merging the same values more than once.
In any case, this happens rarely, and MachineCSE handles most cases anyway, so PHIElim only sees very few such cases (see the changed test files).
Currently, all PHI nodes are inserted into a DenseMap that checks equality by operands rather than by pointer. This hash map is fairly expensive (both the hashing and the map operations), but only really useful in the first case.
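For context, this is roughly what that map looks like; the declaration below mirrors the existing LoweredPHIs map in PHIElimination.cpp (trait name taken from the current code), with comments spelling out why the hashing is per-operand:

```c++
#include "llvm/ADT/DenseMap.h"
#include "llvm/CodeGen/MachineInstr.h" // MachineInstrExpressionTrait
using namespace llvm;

// Sketch: the trait hashes and compares MachineInstrs by opcode and operands
// rather than by pointer, so two structurally identical PHI nodes (possibly
// in different blocks) collapse to the same map entry.
using LoweredPHIMap =
    DenseMap<MachineInstr *, unsigned, MachineInstrExpressionTrait>;
LoweredPHIMap LoweredPHIs; // identical PHI -> incoming vreg to reuse
```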
Avoid this expensive hashing most of the time by restricting it to basic blocks whose incoming edges are all critical. This improves performance for code with many PHI nodes, especially at -O0. (Note that Clang often doesn't generate PHI nodes and -O0 includes no mem2reg, so there are no very visible improvements in [c-t-t](http://llvm-compile-time-tracker.com/compare.php?from=a1bce0b89e800cb7ab1d3cf3437f8f34d0695468&to=679de0bf60dfa547413a34e18882049d409626c5&stat=instructions:u). Other compilers always generate PHI nodes.)
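Condensed, the check added in EliminatePHINodes boils down to the following (a sketch extracted from the patch below; MBB is the block whose PHIs are about to be lowered):

```c++
// Cross-block deduplication is only possible when every incoming edge is
// critical: MBB needs >= 2 predecessors, and each predecessor must have
// multiple successors.
bool AllEdgesCritical = MBB.pred_size() >= 2;
for (MachineBasicBlock *Pred : MBB.predecessors()) {
  if (Pred->succ_size() < 2) {
    AllEdgesCritical = false; // the edge Pred -> MBB is not critical
    break;
  }
}
// Only when AllEdgesCritical is true does LowerPHINode hash the PHI into
// LoweredPHIs and consider reusing an earlier identical PHI's register.
```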
>From 679de0bf60dfa547413a34e18882049d409626c5 Mon Sep 17 00:00:00 2001
From: Alexis Engelke <engelke at in.tum.de>
Date: Fri, 28 Jun 2024 14:26:54 +0200
Subject: [PATCH] [CodeGen] Only dedup PHIs on critical edges
PHIElim deduplicates identical PHI nodes to reduce the number of copies
inserted. There are two cases:
1. Identical PHI nodes are in different blocks. That's the reason for
this optimization; this can't be avoided at the SSA level. A necessary
prerequisite is that all basic blocks where such a PHI node could
occur have the same predecessors. This implies that all (>= 2)
predecessors must have multiple successors, i.e., all edges
into the block are critical edges.
2. Identical PHI nodes are in the same block. CSE can remove these.
There are a few cases, however, where they still occur regardless:
- expand-large-div-rem creates PHI nodes with large integers, which
get lowered into one PHI per MVT. Later, some identical values
(zeroes) get folded, resulting in identical PHI nodes.
- peephole-opt occasionally inserts PHIs for the same value.
- Some pseudo instruction emitters create redundant PHI nodes (e.g.,
AVR's insertShift), merging the same values more than once.
In any case, this happens rarely, and MachineCSE handles most cases
anyway, so PHIElim only sees very few such cases (see the changed
test files).
Currently, all PHI nodes are inserted into a DenseMap that checks
equality by operands rather than by pointer. This hash map is fairly
expensive (both the hashing and the map operations), but only really
useful in the first case.
Avoid this expensive hashing most of the time by restricting it to
basic blocks whose incoming edges are all critical. This improves
performance, especially at -O0.
---
llvm/lib/CodeGen/PHIElimination.cpp | 57 ++-
.../branch-folding-implicit-def-subreg.ll | 4 +-
llvm/test/CodeGen/X86/bfloat.ll | 340 +++++++++---------
.../X86/div-rem-pair-recomposition-signed.ll | 294 +++++++--------
4 files changed, 361 insertions(+), 334 deletions(-)
diff --git a/llvm/lib/CodeGen/PHIElimination.cpp b/llvm/lib/CodeGen/PHIElimination.cpp
index 592972f5c83b2..4fde4ec78ea28 100644
--- a/llvm/lib/CodeGen/PHIElimination.cpp
+++ b/llvm/lib/CodeGen/PHIElimination.cpp
@@ -83,7 +83,8 @@ namespace {
bool EliminatePHINodes(MachineFunction &MF, MachineBasicBlock &MBB);
void LowerPHINode(MachineBasicBlock &MBB,
- MachineBasicBlock::iterator LastPHIIt);
+ MachineBasicBlock::iterator LastPHIIt,
+ bool AllEdgesCritical);
/// analyzePHINodes - Gather information about the PHI nodes in
/// here. In particular, we want to map the number of uses of a virtual
@@ -191,7 +192,8 @@ bool PHIElimination::runOnMachineFunction(MachineFunction &MF) {
MRI->leaveSSA();
// Populate VRegPHIUseCount
- analyzePHINodes(MF);
+ if (LV || LIS)
+ analyzePHINodes(MF);
// Eliminate PHI instructions by inserting copies into predecessor blocks.
for (auto &MBB : MF)
@@ -239,8 +241,20 @@ bool PHIElimination::EliminatePHINodes(MachineFunction &MF,
MachineBasicBlock::iterator LastPHIIt =
std::prev(MBB.SkipPHIsAndLabels(MBB.begin()));
+ // If all incoming edges are critical, we try to deduplicate identical PHIs
+ // so that we generate fewer copies. If any edge is non-critical, we either
+ // have fewer than two predecessors (=> no PHIs) or a predecessor has only us
+ // as a successor (=> an identical PHI node can't occur in a different block).
+ bool AllEdgesCritical = MBB.pred_size() >= 2;
+ for (MachineBasicBlock *Pred : MBB.predecessors()) {
+ if (Pred->succ_size() < 2) {
+ AllEdgesCritical = false;
+ break;
+ }
+ }
+
while (MBB.front().isPHI())
- LowerPHINode(MBB, LastPHIIt);
+ LowerPHINode(MBB, LastPHIIt, AllEdgesCritical);
return true;
}
@@ -267,7 +281,8 @@ static bool allPhiOperandsUndefined(const MachineInstr &MPhi,
}
/// LowerPHINode - Lower the PHI node at the top of the specified block.
void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
- MachineBasicBlock::iterator LastPHIIt) {
+ MachineBasicBlock::iterator LastPHIIt,
+ bool AllEdgesCritical) {
++NumLowered;
MachineBasicBlock::iterator AfterPHIsIt = std::next(LastPHIIt);
@@ -283,6 +298,7 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
// Create a new register for the incoming PHI arguments.
MachineFunction &MF = *MBB.getParent();
unsigned IncomingReg = 0;
+ bool EliminateNow = true; // delay elimination of nodes in LoweredPHIs
bool reusedIncoming = false; // Is IncomingReg reused from an earlier PHI?
// Insert a register to register copy at the top of the current block (but
@@ -297,19 +313,28 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
TII->get(TargetOpcode::IMPLICIT_DEF), DestReg);
else {
// Can we reuse an earlier PHI node? This only happens for critical edges,
- // typically those created by tail duplication.
- unsigned &entry = LoweredPHIs[MPhi];
- if (entry) {
+ // typically those created by tail duplication. Otherwise, an identical PHI
+ // node can't occur, so avoid hashing/storing such PHIs, which is somewhat
+ // expensive.
+ unsigned *Entry = nullptr;
+ if (AllEdgesCritical)
+ Entry = &LoweredPHIs[MPhi];
+ if (Entry && *Entry) {
// An identical PHI node was already lowered. Reuse the incoming register.
- IncomingReg = entry;
+ IncomingReg = *Entry;
reusedIncoming = true;
++NumReused;
LLVM_DEBUG(dbgs() << "Reusing " << printReg(IncomingReg) << " for "
<< *MPhi);
} else {
const TargetRegisterClass *RC = MF.getRegInfo().getRegClass(DestReg);
- entry = IncomingReg = MF.getRegInfo().createVirtualRegister(RC);
+ IncomingReg = MF.getRegInfo().createVirtualRegister(RC);
+ if (Entry) {
+ EliminateNow = false;
+ *Entry = IncomingReg;
+ }
}
+
// Give the target possiblity to handle special cases fallthrough otherwise
PHICopy = TII->createPHIDestinationCopy(MBB, AfterPHIsIt, MPhi->getDebugLoc(),
IncomingReg, DestReg);
@@ -445,11 +470,13 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
}
// Adjust the VRegPHIUseCount map to account for the removal of this PHI node.
- for (unsigned i = 1; i != MPhi->getNumOperands(); i += 2) {
- if (!MPhi->getOperand(i).isUndef()) {
- --VRegPHIUseCount[BBVRegPair(
- MPhi->getOperand(i + 1).getMBB()->getNumber(),
- MPhi->getOperand(i).getReg())];
+ if (LV || LIS) {
+ for (unsigned i = 1; i != MPhi->getNumOperands(); i += 2) {
+ if (!MPhi->getOperand(i).isUndef()) {
+ --VRegPHIUseCount[BBVRegPair(
+ MPhi->getOperand(i + 1).getMBB()->getNumber(),
+ MPhi->getOperand(i).getReg())];
+ }
}
}
@@ -646,7 +673,7 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
}
// Really delete the PHI instruction now, if it is not in the LoweredPHIs map.
- if (reusedIncoming || !IncomingReg) {
+ if (EliminateNow) {
if (LIS)
LIS->RemoveMachineInstrFromMaps(*MPhi);
MF.deleteMachineInstr(MPhi);
diff --git a/llvm/test/CodeGen/AMDGPU/branch-folding-implicit-def-subreg.ll b/llvm/test/CodeGen/AMDGPU/branch-folding-implicit-def-subreg.ll
index 384715a849c1e..0bd030f1a3750 100644
--- a/llvm/test/CodeGen/AMDGPU/branch-folding-implicit-def-subreg.ll
+++ b/llvm/test/CodeGen/AMDGPU/branch-folding-implicit-def-subreg.ll
@@ -804,6 +804,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: renamable $vgpr15 = V_ALIGNBIT_B32_e64 $vgpr15, $vgpr14, 1, implicit $exec
; GFX90A-NEXT: renamable $sgpr48_sgpr49 = S_XOR_B64 $exec, -1, implicit-def dead $scc
; GFX90A-NEXT: renamable $sgpr60_sgpr61 = S_OR_B64 renamable $sgpr28_sgpr29, $exec, implicit-def dead $scc
+ ; GFX90A-NEXT: renamable $vgpr10 = COPY renamable $vgpr14, implicit $exec
; GFX90A-NEXT: S_BRANCH %bb.61
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.58:
@@ -886,11 +887,10 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.61.Flow31:
; GFX90A-NEXT: successors: %bb.62(0x80000000)
- ; GFX90A-NEXT: liveins: $sgpr12, $sgpr13, $sgpr14, $sgpr15, $vgpr15, $vgpr17, $vgpr18, $vgpr30, $vgpr31, $vgpr52, $vgpr53, $sgpr4_sgpr5, $sgpr6_sgpr7:0x000000000000000F, $sgpr8_sgpr9, $sgpr10_sgpr11, $sgpr16_sgpr17, $sgpr24_sgpr25, $sgpr26_sgpr27, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr46_sgpr47, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr58_sgpr59, $sgpr60_sgpr61, $sgpr16_sgpr17_sgpr18_sgpr19:0x00000000000000F0, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $vgpr0_vgpr1:0x000000000000000F, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr6_vgpr7:0x000000000000000F, $vgpr8_vgpr9:0x000000000000000F, $vgpr10_vgpr11:0x000000000000000C, $vgpr12_vgpr13:0x000000000000000F, $vgpr14_vgpr15:0x0000000000000003, $vgpr16_vgpr17:0x0000000000000003, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
+ ; GFX90A-NEXT: liveins: $sgpr12, $sgpr13, $sgpr14, $sgpr15, $vgpr15, $vgpr17, $vgpr18, $vgpr30, $vgpr31, $vgpr52, $vgpr53, $sgpr4_sgpr5, $sgpr6_sgpr7:0x000000000000000F, $sgpr8_sgpr9, $sgpr10_sgpr11, $sgpr16_sgpr17, $sgpr24_sgpr25, $sgpr26_sgpr27, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr46_sgpr47, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr58_sgpr59, $sgpr60_sgpr61, $sgpr16_sgpr17_sgpr18_sgpr19:0x00000000000000F0, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $vgpr0_vgpr1:0x000000000000000F, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr6_vgpr7:0x000000000000000F, $vgpr8_vgpr9:0x000000000000000F, $vgpr10_vgpr11:0x000000000000000F, $vgpr12_vgpr13:0x000000000000000F, $vgpr14_vgpr15:0x0000000000000003, $vgpr16_vgpr17:0x0000000000000003, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: $exec = S_OR_B64 $exec, killed renamable $sgpr50_sgpr51, implicit-def $scc
; GFX90A-NEXT: renamable $sgpr50_sgpr51 = S_MOV_B64 0
- ; GFX90A-NEXT: renamable $vgpr10 = COPY renamable $vgpr14, implicit $exec
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.62.Flow30:
; GFX90A-NEXT: successors: %bb.56(0x80000000)
diff --git a/llvm/test/CodeGen/X86/bfloat.ll b/llvm/test/CodeGen/X86/bfloat.ll
index 8b5ca57df27ed..ec76e8b05678b 100644
--- a/llvm/test/CodeGen/X86/bfloat.ll
+++ b/llvm/test/CodeGen/X86/bfloat.ll
@@ -770,12 +770,70 @@ define <32 x bfloat> @pr63017_2() nounwind {
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
; SSE2-NEXT: movd {{.*#+}} xmm0 = [-1.0E+0,0.0E+0,0.0E+0,0.0E+0]
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movdqa %xmm0, %xmm15
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movdqa %xmm0, %xmm13
+; SSE2-NEXT: movdqa %xmm0, %xmm14
+; SSE2-NEXT: movdqa %xmm0, %xmm11
+; SSE2-NEXT: movdqa %xmm0, %xmm12
+; SSE2-NEXT: movdqa %xmm0, %xmm9
+; SSE2-NEXT: movdqa %xmm0, %xmm10
+; SSE2-NEXT: movdqa %xmm0, %xmm7
+; SSE2-NEXT: movdqa %xmm0, %xmm8
+; SSE2-NEXT: movdqa %xmm0, %xmm5
+; SSE2-NEXT: movdqa %xmm0, %xmm6
+; SSE2-NEXT: movdqa %xmm0, %xmm3
+; SSE2-NEXT: movdqa %xmm0, %xmm4
; SSE2-NEXT: movdqa %xmm0, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm2
; SSE2-NEXT: jmp .LBB12_3
; SSE2-NEXT: .LBB12_1:
-; SSE2-NEXT: movd {{.*#+}} xmm1 = [-1.0E+0,0.0E+0,0.0E+0,0.0E+0]
-; SSE2-NEXT: movdqa %xmm1, %xmm0
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd {{.*#+}} xmm2 = [-1.0E+0,0.0E+0,0.0E+0,0.0E+0]
+; SSE2-NEXT: movdqa %xmm2, %xmm0
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movdqa %xmm2, %xmm15
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movdqa %xmm2, %xmm13
+; SSE2-NEXT: movdqa %xmm2, %xmm14
+; SSE2-NEXT: movdqa %xmm2, %xmm11
+; SSE2-NEXT: movdqa %xmm2, %xmm12
+; SSE2-NEXT: movdqa %xmm2, %xmm9
+; SSE2-NEXT: movdqa %xmm2, %xmm10
+; SSE2-NEXT: movdqa %xmm2, %xmm7
+; SSE2-NEXT: movdqa %xmm2, %xmm8
+; SSE2-NEXT: movdqa %xmm2, %xmm5
+; SSE2-NEXT: movdqa %xmm2, %xmm6
+; SSE2-NEXT: movdqa %xmm2, %xmm3
+; SSE2-NEXT: movdqa %xmm2, %xmm4
+; SSE2-NEXT: movdqa %xmm2, %xmm1
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
; SSE2-NEXT: .LBB12_3: # %else
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
@@ -787,299 +845,240 @@ define <32 x bfloat> @pr63017_2() nounwind {
; SSE2-NEXT: .LBB12_5: # %else2
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_6
-; SSE2-NEXT: # %bb.7: # %cond.load4
+; SSE2-NEXT: jne .LBB12_7
+; SSE2-NEXT: # %bb.6: # %cond.load4
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movdqa %xmm1, %xmm14
-; SSE2-NEXT: movdqa %xmm1, %xmm15
-; SSE2-NEXT: movdqa %xmm1, %xmm12
-; SSE2-NEXT: movdqa %xmm1, %xmm13
-; SSE2-NEXT: movdqa %xmm1, %xmm10
-; SSE2-NEXT: movdqa %xmm1, %xmm11
-; SSE2-NEXT: movdqa %xmm1, %xmm8
-; SSE2-NEXT: movdqa %xmm1, %xmm9
-; SSE2-NEXT: movdqa %xmm1, %xmm6
-; SSE2-NEXT: movdqa %xmm1, %xmm7
-; SSE2-NEXT: movdqa %xmm1, %xmm4
-; SSE2-NEXT: movdqa %xmm1, %xmm5
-; SSE2-NEXT: movdqa %xmm1, %xmm2
-; SSE2-NEXT: movdqa %xmm1, %xmm3
-; SSE2-NEXT: movd %eax, %xmm1
-; SSE2-NEXT: jmp .LBB12_8
-; SSE2-NEXT: .LBB12_6:
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movdqa %xmm1, %xmm14
-; SSE2-NEXT: movdqa %xmm1, %xmm15
-; SSE2-NEXT: movdqa %xmm1, %xmm12
-; SSE2-NEXT: movdqa %xmm1, %xmm13
-; SSE2-NEXT: movdqa %xmm1, %xmm10
-; SSE2-NEXT: movdqa %xmm1, %xmm11
-; SSE2-NEXT: movdqa %xmm1, %xmm8
-; SSE2-NEXT: movdqa %xmm1, %xmm9
-; SSE2-NEXT: movdqa %xmm1, %xmm6
-; SSE2-NEXT: movdqa %xmm1, %xmm7
-; SSE2-NEXT: movdqa %xmm1, %xmm4
-; SSE2-NEXT: movdqa %xmm1, %xmm5
-; SSE2-NEXT: movdqa %xmm1, %xmm2
-; SSE2-NEXT: movdqa %xmm1, %xmm3
-; SSE2-NEXT: .LBB12_8: # %else5
+; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
+; SSE2-NEXT: .LBB12_7: # %else5
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_10
-; SSE2-NEXT: # %bb.9: # %cond.load7
+; SSE2-NEXT: jne .LBB12_9
+; SSE2-NEXT: # %bb.8: # %cond.load7
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_10: # %else8
+; SSE2-NEXT: .LBB12_9: # %else8
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_12
-; SSE2-NEXT: # %bb.11: # %cond.load10
+; SSE2-NEXT: jne .LBB12_11
+; SSE2-NEXT: # %bb.10: # %cond.load10
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_12: # %else11
+; SSE2-NEXT: .LBB12_11: # %else11
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_14
-; SSE2-NEXT: # %bb.13: # %cond.load13
+; SSE2-NEXT: jne .LBB12_13
+; SSE2-NEXT: # %bb.12: # %cond.load13
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_14: # %else14
+; SSE2-NEXT: .LBB12_13: # %else14
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_16
-; SSE2-NEXT: # %bb.15: # %cond.load16
+; SSE2-NEXT: jne .LBB12_15
+; SSE2-NEXT: # %bb.14: # %cond.load16
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_16: # %else17
+; SSE2-NEXT: .LBB12_15: # %else17
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_18
-; SSE2-NEXT: # %bb.17: # %cond.load19
+; SSE2-NEXT: jne .LBB12_17
+; SSE2-NEXT: # %bb.16: # %cond.load19
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_18: # %else20
+; SSE2-NEXT: .LBB12_17: # %else20
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_20
-; SSE2-NEXT: # %bb.19: # %cond.load22
+; SSE2-NEXT: jne .LBB12_19
+; SSE2-NEXT: # %bb.18: # %cond.load22
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_20: # %else23
+; SSE2-NEXT: .LBB12_19: # %else23
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_22
-; SSE2-NEXT: # %bb.21: # %cond.load25
+; SSE2-NEXT: jne .LBB12_21
+; SSE2-NEXT: # %bb.20: # %cond.load25
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_22: # %else26
+; SSE2-NEXT: .LBB12_21: # %else26
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_24
-; SSE2-NEXT: # %bb.23: # %cond.load28
+; SSE2-NEXT: jne .LBB12_23
+; SSE2-NEXT: # %bb.22: # %cond.load28
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_24: # %else29
+; SSE2-NEXT: .LBB12_23: # %else29
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_26
-; SSE2-NEXT: # %bb.25: # %cond.load31
+; SSE2-NEXT: jne .LBB12_25
+; SSE2-NEXT: # %bb.24: # %cond.load31
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_26: # %else32
+; SSE2-NEXT: .LBB12_25: # %else32
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_28
-; SSE2-NEXT: # %bb.27: # %cond.load34
+; SSE2-NEXT: jne .LBB12_27
+; SSE2-NEXT: # %bb.26: # %cond.load34
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_28: # %else35
+; SSE2-NEXT: .LBB12_27: # %else35
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_30
-; SSE2-NEXT: # %bb.29: # %cond.load37
+; SSE2-NEXT: jne .LBB12_29
+; SSE2-NEXT: # %bb.28: # %cond.load37
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_30: # %else38
+; SSE2-NEXT: .LBB12_29: # %else38
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_32
-; SSE2-NEXT: # %bb.31: # %cond.load40
+; SSE2-NEXT: jne .LBB12_31
+; SSE2-NEXT: # %bb.30: # %cond.load40
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_32: # %else41
+; SSE2-NEXT: .LBB12_31: # %else41
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_34
-; SSE2-NEXT: # %bb.33: # %cond.load43
+; SSE2-NEXT: jne .LBB12_33
+; SSE2-NEXT: # %bb.32: # %cond.load43
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_34: # %else44
+; SSE2-NEXT: .LBB12_33: # %else44
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_36
-; SSE2-NEXT: # %bb.35: # %cond.load46
+; SSE2-NEXT: jne .LBB12_35
+; SSE2-NEXT: # %bb.34: # %cond.load46
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
-; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_36: # %else47
+; SSE2-NEXT: movd %eax, %xmm15
+; SSE2-NEXT: .LBB12_35: # %else47
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_38
-; SSE2-NEXT: # %bb.37: # %cond.load49
+; SSE2-NEXT: jne .LBB12_37
+; SSE2-NEXT: # %bb.36: # %cond.load49
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_38: # %else50
+; SSE2-NEXT: .LBB12_37: # %else50
+; SSE2-NEXT: xorl %eax, %eax
+; SSE2-NEXT: testb %al, %al
+; SSE2-NEXT: jne .LBB12_39
+; SSE2-NEXT: # %bb.38: # %cond.load52
+; SSE2-NEXT: movzwl (%rax), %eax
+; SSE2-NEXT: shll $16, %eax
+; SSE2-NEXT: movd %eax, %xmm13
+; SSE2-NEXT: .LBB12_39: # %else53
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_40
-; SSE2-NEXT: # %bb.39: # %cond.load52
+; SSE2-NEXT: jne .LBB12_41
+; SSE2-NEXT: # %bb.40: # %cond.load55
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movd %eax, %xmm14
-; SSE2-NEXT: .LBB12_40: # %else53
+; SSE2-NEXT: .LBB12_41: # %else56
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_42
-; SSE2-NEXT: # %bb.41: # %cond.load55
+; SSE2-NEXT: jne .LBB12_43
+; SSE2-NEXT: # %bb.42: # %cond.load58
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
-; SSE2-NEXT: movd %eax, %xmm15
-; SSE2-NEXT: .LBB12_42: # %else56
+; SSE2-NEXT: movd %eax, %xmm11
+; SSE2-NEXT: .LBB12_43: # %else59
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_44
-; SSE2-NEXT: # %bb.43: # %cond.load58
+; SSE2-NEXT: jne .LBB12_45
+; SSE2-NEXT: # %bb.44: # %cond.load61
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movd %eax, %xmm12
-; SSE2-NEXT: .LBB12_44: # %else59
+; SSE2-NEXT: .LBB12_45: # %else62
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_46
-; SSE2-NEXT: # %bb.45: # %cond.load61
+; SSE2-NEXT: jne .LBB12_47
+; SSE2-NEXT: # %bb.46: # %cond.load64
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
-; SSE2-NEXT: movd %eax, %xmm13
-; SSE2-NEXT: .LBB12_46: # %else62
+; SSE2-NEXT: movd %eax, %xmm9
+; SSE2-NEXT: .LBB12_47: # %else65
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_48
-; SSE2-NEXT: # %bb.47: # %cond.load64
+; SSE2-NEXT: jne .LBB12_49
+; SSE2-NEXT: # %bb.48: # %cond.load67
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movd %eax, %xmm10
-; SSE2-NEXT: .LBB12_48: # %else65
+; SSE2-NEXT: .LBB12_49: # %else68
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_50
-; SSE2-NEXT: # %bb.49: # %cond.load67
+; SSE2-NEXT: jne .LBB12_51
+; SSE2-NEXT: # %bb.50: # %cond.load70
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
-; SSE2-NEXT: movd %eax, %xmm11
-; SSE2-NEXT: .LBB12_50: # %else68
+; SSE2-NEXT: movd %eax, %xmm7
+; SSE2-NEXT: .LBB12_51: # %else71
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_52
-; SSE2-NEXT: # %bb.51: # %cond.load70
+; SSE2-NEXT: jne .LBB12_53
+; SSE2-NEXT: # %bb.52: # %cond.load73
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movd %eax, %xmm8
-; SSE2-NEXT: .LBB12_52: # %else71
+; SSE2-NEXT: .LBB12_53: # %else74
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_54
-; SSE2-NEXT: # %bb.53: # %cond.load73
+; SSE2-NEXT: jne .LBB12_55
+; SSE2-NEXT: # %bb.54: # %cond.load76
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
-; SSE2-NEXT: movd %eax, %xmm9
-; SSE2-NEXT: .LBB12_54: # %else74
+; SSE2-NEXT: movd %eax, %xmm5
+; SSE2-NEXT: .LBB12_55: # %else77
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_56
-; SSE2-NEXT: # %bb.55: # %cond.load76
+; SSE2-NEXT: jne .LBB12_57
+; SSE2-NEXT: # %bb.56: # %cond.load79
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movd %eax, %xmm6
-; SSE2-NEXT: .LBB12_56: # %else77
+; SSE2-NEXT: .LBB12_57: # %else80
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_58
-; SSE2-NEXT: # %bb.57: # %cond.load79
+; SSE2-NEXT: jne .LBB12_59
+; SSE2-NEXT: # %bb.58: # %cond.load82
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
-; SSE2-NEXT: movd %eax, %xmm7
-; SSE2-NEXT: .LBB12_58: # %else80
+; SSE2-NEXT: movd %eax, %xmm3
+; SSE2-NEXT: .LBB12_59: # %else83
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_60
-; SSE2-NEXT: # %bb.59: # %cond.load82
+; SSE2-NEXT: jne .LBB12_61
+; SSE2-NEXT: # %bb.60: # %cond.load85
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movd %eax, %xmm4
-; SSE2-NEXT: .LBB12_60: # %else83
-; SSE2-NEXT: xorl %eax, %eax
-; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_62
-; SSE2-NEXT: # %bb.61: # %cond.load85
-; SSE2-NEXT: movzwl (%rax), %eax
-; SSE2-NEXT: shll $16, %eax
-; SSE2-NEXT: movd %eax, %xmm5
-; SSE2-NEXT: .LBB12_62: # %else86
+; SSE2-NEXT: .LBB12_61: # %else86
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_64
-; SSE2-NEXT: # %bb.63: # %cond.load88
+; SSE2-NEXT: jne .LBB12_63
+; SSE2-NEXT: # %bb.62: # %cond.load88
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
-; SSE2-NEXT: movd %eax, %xmm2
-; SSE2-NEXT: .LBB12_64: # %else89
+; SSE2-NEXT: movd %eax, %xmm1
+; SSE2-NEXT: .LBB12_63: # %else89
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm3, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
; SSE2-NEXT: movd %xmm4, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
; SSE2-NEXT: movd %xmm5, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
; SSE2-NEXT: movd %xmm6, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
@@ -1087,21 +1086,20 @@ define <32 x bfloat> @pr63017_2() nounwind {
; SSE2-NEXT: movd %xmm8, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
; SSE2-NEXT: movd %xmm9, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
; SSE2-NEXT: movd %xmm10, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
; SSE2-NEXT: movd %xmm11, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
; SSE2-NEXT: movd %xmm12, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
; SSE2-NEXT: movd %xmm13, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
; SSE2-NEXT: movd %xmm14, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
; SSE2-NEXT: movd %xmm15, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: jne .LBB12_65
-; SSE2-NEXT: # %bb.66: # %cond.load91
+; SSE2-NEXT: jne .LBB12_64
+; SSE2-NEXT: # %bb.65: # %cond.load91
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: jmp .LBB12_67
-; SSE2-NEXT: .LBB12_65:
-; SSE2-NEXT: movd %xmm3, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: .LBB12_67: # %else92
+; SSE2-NEXT: jmp .LBB12_66
+; SSE2-NEXT: .LBB12_64:
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: .LBB12_66: # %else92
; SSE2-NEXT: callq __truncsfbf2 at PLT
; SSE2-NEXT: pextrw $0, %xmm0, %ebx
; SSE2-NEXT: shll $16, %ebx
diff --git a/llvm/test/CodeGen/X86/div-rem-pair-recomposition-signed.ll b/llvm/test/CodeGen/X86/div-rem-pair-recomposition-signed.ll
index 1c303de55c95d..aa7b77f01d5ba 100644
--- a/llvm/test/CodeGen/X86/div-rem-pair-recomposition-signed.ll
+++ b/llvm/test/CodeGen/X86/div-rem-pair-recomposition-signed.ll
@@ -177,7 +177,7 @@ define i128 @scalar_i128(i128 %x, i128 %y, ptr %divdst) nounwind {
; X86-NEXT: pushl %ebx
; X86-NEXT: pushl %edi
; X86-NEXT: pushl %esi
-; X86-NEXT: subl $156, %esp
+; X86-NEXT: subl $152, %esp
; X86-NEXT: movl {{[0-9]+}}(%esp), %ecx
; X86-NEXT: movl {{[0-9]+}}(%esp), %edx
; X86-NEXT: movl {{[0-9]+}}(%esp), %esi
@@ -273,44 +273,42 @@ define i128 @scalar_i128(i128 %x, i128 %y, ptr %divdst) nounwind {
; X86-NEXT: movl %ebp, %esi
; X86-NEXT: orl %ebx, %esi
; X86-NEXT: cmovnel %ecx, %edx
-; X86-NEXT: xorl %ebx, %ebx
+; X86-NEXT: xorl %esi, %esi
; X86-NEXT: subl %edx, %edi
+; X86-NEXT: movl $0, %ebx
+; X86-NEXT: sbbl %ebx, %ebx
; X86-NEXT: movl $0, %edx
; X86-NEXT: sbbl %edx, %edx
; X86-NEXT: movl $0, %eax
; X86-NEXT: sbbl %eax, %eax
-; X86-NEXT: movl $0, %esi
-; X86-NEXT: sbbl %esi, %esi
; X86-NEXT: movl $127, %ecx
; X86-NEXT: movl %edi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
; X86-NEXT: cmpl %edi, %ecx
+; X86-NEXT: movl %ebx, %edi
+; X86-NEXT: movl $0, %ecx
+; X86-NEXT: sbbl %ebx, %ecx
; X86-NEXT: movl $0, %ecx
; X86-NEXT: movl %edx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
; X86-NEXT: sbbl %edx, %ecx
; X86-NEXT: movl $0, %ecx
; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
; X86-NEXT: sbbl %eax, %ecx
-; X86-NEXT: movl $0, %ecx
-; X86-NEXT: movl %esi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: sbbl %esi, %ecx
; X86-NEXT: setb %cl
; X86-NEXT: orb {{[-0-9]+}}(%e{{[sb]}}p), %cl # 1-byte Folded Reload
; X86-NEXT: movl (%esp), %edx # 4-byte Reload
-; X86-NEXT: cmovnel %ebx, %edx
-; X86-NEXT: cmovnel %ebx, %ebp
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
-; X86-NEXT: cmovnel %ebx, %eax
-; X86-NEXT: cmovel {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Folded Reload
-; X86-NEXT: movl %ebx, %esi
+; X86-NEXT: cmovnel %esi, %edx
+; X86-NEXT: cmovnel %esi, %ebp
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
+; X86-NEXT: cmovnel %esi, %ebx
+; X86-NEXT: cmovel {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Folded Reload
; X86-NEXT: jne .LBB4_8
; X86-NEXT: # %bb.1: # %_udiv-special-cases
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edi # 4-byte Reload
-; X86-NEXT: xorl $127, %edi
-; X86-NEXT: orl {{[-0-9]+}}(%e{{[sb]}}p), %edi # 4-byte Folded Reload
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
-; X86-NEXT: movl %ebx, %ecx
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
+; X86-NEXT: xorl $127, %eax
+; X86-NEXT: orl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Folded Reload
+; X86-NEXT: movl %edi, %ecx
; X86-NEXT: orl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Folded Reload
-; X86-NEXT: orl %edi, %ecx
+; X86-NEXT: orl %eax, %ecx
; X86-NEXT: je .LBB4_8
; X86-NEXT: # %bb.2: # %udiv-bb1
; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
@@ -333,34 +331,34 @@ define i128 @scalar_i128(i128 %x, i128 %y, ptr %divdst) nounwind {
; X86-NEXT: shrb $3, %al
; X86-NEXT: andb $15, %al
; X86-NEXT: negb %al
-; X86-NEXT: movsbl %al, %edi
-; X86-NEXT: movl 148(%esp,%edi), %edx
-; X86-NEXT: movl 152(%esp,%edi), %esi
+; X86-NEXT: movsbl %al, %eax
+; X86-NEXT: movl 144(%esp,%eax), %edx
+; X86-NEXT: movl 148(%esp,%eax), %esi
; X86-NEXT: movb %ch, %cl
; X86-NEXT: shldl %cl, %edx, %esi
; X86-NEXT: movl %esi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
; X86-NEXT: shll %cl, %edx
; X86-NEXT: notb %cl
-; X86-NEXT: movl 144(%esp,%edi), %eax
-; X86-NEXT: movl %eax, %ebp
+; X86-NEXT: movl 140(%esp,%eax), %ebx
+; X86-NEXT: movl %ebx, %ebp
; X86-NEXT: shrl %ebp
; X86-NEXT: shrl %cl, %ebp
; X86-NEXT: orl %edx, %ebp
-; X86-NEXT: movl 140(%esp,%edi), %edx
+; X86-NEXT: movl 136(%esp,%eax), %eax
; X86-NEXT: movb %ch, %cl
-; X86-NEXT: shldl %cl, %edx, %eax
-; X86-NEXT: shll %cl, %edx
-; X86-NEXT: movl %edx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: shldl %cl, %eax, %ebx
+; X86-NEXT: shll %cl, %eax
+; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
; X86-NEXT: addl $1, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
-; X86-NEXT: adcl $0, %ebx
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edi # 4-byte Reload
; X86-NEXT: adcl $0, %edi
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
+; X86-NEXT: adcl $0, %ecx
; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edx # 4-byte Reload
; X86-NEXT: adcl $0, %edx
; X86-NEXT: jae .LBB4_3
; X86-NEXT: # %bb.6:
-; X86-NEXT: xorl %edi, %edi
; X86-NEXT: xorl %ecx, %ecx
+; X86-NEXT: xorl %eax, %eax
; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edx # 4-byte Reload
; X86-NEXT: jmp .LBB4_7
; X86-NEXT: .LBB4_3: # %udiv-preheader
@@ -376,176 +374,180 @@ define i128 @scalar_i128(i128 %x, i128 %y, ptr %divdst) nounwind {
; X86-NEXT: movl $0, {{[0-9]+}}(%esp)
; X86-NEXT: movl $0, {{[0-9]+}}(%esp)
; X86-NEXT: movl $0, {{[0-9]+}}(%esp)
-; X86-NEXT: movl %edx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edx # 4-byte Reload
-; X86-NEXT: movl %ebx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
; X86-NEXT: movl %edi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: movb %dl, %ch
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
+; X86-NEXT: movl %ecx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: movb %al, %ch
; X86-NEXT: andb $7, %ch
-; X86-NEXT: movb %dl, %cl
-; X86-NEXT: shrb $3, %cl
-; X86-NEXT: andb $15, %cl
-; X86-NEXT: movzbl %cl, %edx
-; X86-NEXT: movl 104(%esp,%edx), %ebx
-; X86-NEXT: movl 100(%esp,%edx), %edi
-; X86-NEXT: movl %ebp, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: movl %edi, %ebp
+; X86-NEXT: # kill: def $al killed $al killed $eax
+; X86-NEXT: shrb $3, %al
+; X86-NEXT: andb $15, %al
+; X86-NEXT: movzbl %al, %eax
+; X86-NEXT: movl 100(%esp,%eax), %esi
+; X86-NEXT: movl %esi, (%esp) # 4-byte Spill
+; X86-NEXT: movl %edx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: movl 96(%esp,%eax), %edx
+; X86-NEXT: movl %ebp, %edi
+; X86-NEXT: movl %edx, %ebp
; X86-NEXT: movb %ch, %cl
-; X86-NEXT: shrdl %cl, %ebx, %ebp
-; X86-NEXT: movl 92(%esp,%edx), %esi
-; X86-NEXT: movl %esi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: movl 96(%esp,%edx), %esi
-; X86-NEXT: movl %esi, %edx
-; X86-NEXT: shrl %cl, %edx
+; X86-NEXT: shrdl %cl, %esi, %ebp
+; X86-NEXT: movl %ebx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: movl 88(%esp,%eax), %ebx
+; X86-NEXT: movl 92(%esp,%eax), %esi
+; X86-NEXT: movl %esi, %eax
+; X86-NEXT: shrl %cl, %eax
; X86-NEXT: notb %cl
-; X86-NEXT: addl %edi, %edi
-; X86-NEXT: shll %cl, %edi
-; X86-NEXT: orl %edx, %edi
-; X86-NEXT: movl %edi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: addl %edx, %edx
+; X86-NEXT: shll %cl, %edx
+; X86-NEXT: orl %eax, %edx
+; X86-NEXT: movl %edx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
; X86-NEXT: movb %ch, %cl
-; X86-NEXT: shrl %cl, %ebx
+; X86-NEXT: shrl %cl, (%esp) # 4-byte Folded Spill
+; X86-NEXT: shrdl %cl, %esi, %ebx
; X86-NEXT: movl %ebx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: shrdl %cl, %esi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
-; X86-NEXT: addl $-1, %ecx
-; X86-NEXT: movl %ecx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
-; X86-NEXT: adcl $-1, %ecx
-; X86-NEXT: movl %ecx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
-; X86-NEXT: adcl $-1, %ecx
-; X86-NEXT: movl %ecx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
-; X86-NEXT: adcl $-1, %ecx
-; X86-NEXT: movl %ecx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
+; X86-NEXT: addl $-1, %eax
+; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
+; X86-NEXT: adcl $-1, %eax
+; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
+; X86-NEXT: adcl $-1, %eax
+; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
+; X86-NEXT: movl %esi, %eax
+; X86-NEXT: adcl $-1, %eax
+; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
; X86-NEXT: movl $0, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
; X86-NEXT: movl $0, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edx # 4-byte Reload
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
; X86-NEXT: .p2align 4, 0x90
; X86-NEXT: .LBB4_4: # %udiv-do-while
; X86-NEXT: # =>This Inner Loop Header: Depth=1
-; X86-NEXT: movl %ebp, (%esp) # 4-byte Spill
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
-; X86-NEXT: shldl $1, %ebp, %ebx
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebp # 4-byte Reload
+; X86-NEXT: movl %edx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: movl %ebp, %edx
; X86-NEXT: shldl $1, %ebp, (%esp) # 4-byte Folded Spill
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edx # 4-byte Reload
-; X86-NEXT: shldl $1, %edx, %ebp
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edi # 4-byte Reload
-; X86-NEXT: shldl $1, %edi, %edx
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebp # 4-byte Reload
+; X86-NEXT: shldl $1, %ebp, %edx
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
+; X86-NEXT: shldl $1, %ebx, %ebp
+; X86-NEXT: shldl $1, %ecx, %ebx
+; X86-NEXT: shldl $1, %edi, %ecx
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
+; X86-NEXT: orl %eax, %ecx
+; X86-NEXT: movl %ecx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
; X86-NEXT: shldl $1, %ecx, %edi
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
-; X86-NEXT: orl %esi, %edi
+; X86-NEXT: orl %eax, %edi
; X86-NEXT: movl %edi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: shldl $1, %eax, %ecx
-; X86-NEXT: orl %esi, %ecx
-; X86-NEXT: movl %ecx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
-; X86-NEXT: shldl $1, %ecx, %eax
-; X86-NEXT: orl %esi, %eax
-; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: addl %ecx, %ecx
-; X86-NEXT: orl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Folded Reload
+; X86-NEXT: movl %esi, %edi
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
+; X86-NEXT: shldl $1, %esi, %ecx
+; X86-NEXT: orl %eax, %ecx
; X86-NEXT: movl %ecx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: cmpl %edx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Reload
+; X86-NEXT: addl %esi, %esi
+; X86-NEXT: orl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Folded Reload
+; X86-NEXT: movl %esi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: cmpl %ebx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Reload
; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
; X86-NEXT: sbbl %ebp, %ecx
; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
-; X86-NEXT: sbbl (%esp), %ecx # 4-byte Folded Reload
+; X86-NEXT: sbbl %edx, %ecx
; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
-; X86-NEXT: sbbl %ebx, %ecx
+; X86-NEXT: sbbl (%esp), %ecx # 4-byte Folded Reload
; X86-NEXT: sarl $31, %ecx
; X86-NEXT: movl %ecx, %eax
; X86-NEXT: andl $1, %eax
; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
; X86-NEXT: movl %ecx, %esi
-; X86-NEXT: andl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Folded Reload
+; X86-NEXT: andl %edi, %esi
; X86-NEXT: movl %ecx, %edi
; X86-NEXT: andl {{[-0-9]+}}(%e{{[sb]}}p), %edi # 4-byte Folded Reload
; X86-NEXT: movl %ecx, %eax
; X86-NEXT: andl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Folded Reload
; X86-NEXT: andl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Folded Reload
-; X86-NEXT: subl %ecx, %edx
-; X86-NEXT: movl %edx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: subl %ecx, %ebx
+; X86-NEXT: movl %ebx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
; X86-NEXT: sbbl %eax, %ebp
; X86-NEXT: movl %ebp, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: movl (%esp), %ebp # 4-byte Reload
-; X86-NEXT: sbbl %edi, %ebp
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
-; X86-NEXT: sbbl %esi, %ebx
-; X86-NEXT: movl %ebx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: sbbl %edi, %edx
+; X86-NEXT: movl %edx, %ebp
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edx # 4-byte Reload
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edi # 4-byte Reload
+; X86-NEXT: sbbl %esi, (%esp) # 4-byte Folded Spill
; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
; X86-NEXT: addl $-1, %ecx
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edi # 4-byte Reload
-; X86-NEXT: adcl $-1, %edi
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
-; X86-NEXT: adcl $-1, %ebx
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
+; X86-NEXT: adcl $-1, %eax
; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
; X86-NEXT: adcl $-1, %esi
-; X86-NEXT: movl %edi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: movl %esi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: orl %esi, %edi
+; X86-NEXT: adcl $-1, %edx
+; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: orl %edx, %eax
; X86-NEXT: movl %ecx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: movl %ebx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: orl %ebx, %ecx
-; X86-NEXT: orl %edi, %ecx
+; X86-NEXT: movl %esi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-NEXT: orl %esi, %ecx
+; X86-NEXT: orl %eax, %ecx
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
; X86-NEXT: jne .LBB4_4
; X86-NEXT: # %bb.5:
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebp # 4-byte Reload
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edx # 4-byte Reload
+; X86-NEXT: movl %edi, %ebp
+; X86-NEXT: movl %ecx, %edx
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edi # 4-byte Reload
; X86-NEXT: .LBB4_7: # %udiv-loop-exit
; X86-NEXT: shldl $1, %ebp, %edx
-; X86-NEXT: orl %ecx, %edx
-; X86-NEXT: shldl $1, %eax, %ebp
-; X86-NEXT: orl %ecx, %ebp
+; X86-NEXT: orl %eax, %edx
+; X86-NEXT: shldl $1, %ebx, %ebp
+; X86-NEXT: orl %eax, %ebp
; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
-; X86-NEXT: shldl $1, %esi, %eax
-; X86-NEXT: orl %ecx, %eax
+; X86-NEXT: shldl $1, %esi, %ebx
+; X86-NEXT: orl %eax, %ebx
; X86-NEXT: addl %esi, %esi
-; X86-NEXT: orl %edi, %esi
+; X86-NEXT: orl %ecx, %esi
; X86-NEXT: .LBB4_8: # %udiv-end
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Reload
-; X86-NEXT: xorl %ecx, %edx
-; X86-NEXT: xorl %ecx, %ebp
-; X86-NEXT: xorl %ecx, %eax
-; X86-NEXT: xorl %ecx, %esi
-; X86-NEXT: subl %ecx, %esi
+; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
+; X86-NEXT: xorl %eax, %edx
+; X86-NEXT: xorl %eax, %ebp
+; X86-NEXT: xorl %eax, %ebx
+; X86-NEXT: xorl %eax, %esi
+; X86-NEXT: subl %eax, %esi
; X86-NEXT: movl %esi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: sbbl %ecx, %eax
-; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: sbbl %ecx, %ebp
-; X86-NEXT: sbbl %ecx, %edx
+; X86-NEXT: sbbl %eax, %ebx
+; X86-NEXT: sbbl %eax, %ebp
+; X86-NEXT: sbbl %eax, %edx
; X86-NEXT: movl %edx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X86-NEXT: movl %esi, (%ecx)
-; X86-NEXT: movl %eax, 4(%ecx)
-; X86-NEXT: movl %ebp, 8(%ecx)
-; X86-NEXT: movl %edx, 12(%ecx)
+; X86-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-NEXT: movl %esi, (%eax)
+; X86-NEXT: movl %ebx, 4(%eax)
+; X86-NEXT: movl %ebp, 8(%eax)
+; X86-NEXT: movl %edx, 12(%eax)
+; X86-NEXT: movl %ebx, %eax
; X86-NEXT: movl {{[0-9]+}}(%esp), %ecx
; X86-NEXT: movl %ebp, %edi
; X86-NEXT: mull %ecx
-; X86-NEXT: movl %edx, %ebx
-; X86-NEXT: movl %eax, %ebp
+; X86-NEXT: movl %edx, %ebp
+; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
; X86-NEXT: movl %esi, %eax
; X86-NEXT: mull %ecx
; X86-NEXT: movl %eax, (%esp) # 4-byte Spill
; X86-NEXT: movl %edx, %ecx
-; X86-NEXT: addl %ebp, %ecx
-; X86-NEXT: adcl $0, %ebx
+; X86-NEXT: addl {{[-0-9]+}}(%e{{[sb]}}p), %ecx # 4-byte Folded Reload
+; X86-NEXT: adcl $0, %ebp
; X86-NEXT: movl %esi, %eax
-; X86-NEXT: movl {{[0-9]+}}(%esp), %ebp
-; X86-NEXT: mull %ebp
+; X86-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-NEXT: mull %esi
; X86-NEXT: addl %ecx, %eax
; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: adcl %ebx, %edx
-; X86-NEXT: movl %edx, %ebx
+; X86-NEXT: adcl %ebp, %edx
+; X86-NEXT: movl %edx, %ebp
; X86-NEXT: setb %cl
-; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
-; X86-NEXT: movl %esi, %eax
-; X86-NEXT: mull %ebp
-; X86-NEXT: addl %ebx, %eax
+; X86-NEXT: movl %ebx, %eax
+; X86-NEXT: mull %esi
+; X86-NEXT: addl %ebp, %eax
; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
; X86-NEXT: movzbl %cl, %eax
; X86-NEXT: adcl %eax, %edx
@@ -555,12 +557,12 @@ define i128 @scalar_i128(i128 %x, i128 %y, ptr %divdst) nounwind {
; X86-NEXT: imull %eax, %ecx
; X86-NEXT: mull %edi
; X86-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
-; X86-NEXT: imull %ebp, %edi
+; X86-NEXT: imull %esi, %edi
; X86-NEXT: addl %edx, %edi
; X86-NEXT: addl %ecx, %edi
; X86-NEXT: movl {{[0-9]+}}(%esp), %eax
; X86-NEXT: movl %eax, %ecx
-; X86-NEXT: imull %esi, %ecx
+; X86-NEXT: imull %ebx, %ecx
; X86-NEXT: movl {{[0-9]+}}(%esp), %esi
; X86-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edx # 4-byte Reload
; X86-NEXT: imull %edx, %esi
@@ -584,7 +586,7 @@ define i128 @scalar_i128(i128 %x, i128 %y, ptr %divdst) nounwind {
; X86-NEXT: movl %edx, 4(%eax)
; X86-NEXT: movl %ebx, 8(%eax)
; X86-NEXT: movl %edi, 12(%eax)
-; X86-NEXT: addl $156, %esp
+; X86-NEXT: addl $152, %esp
; X86-NEXT: popl %esi
; X86-NEXT: popl %edi
; X86-NEXT: popl %ebx