[llvm] [CodeGen] Only deduplicate PHIs on critical edges (PR #97064)
via llvm-commits
llvm-commits at lists.llvm.org
Fri Jun 28 07:56:18 PDT 2024
llvmbot wrote:
@llvm/pr-subscribers-backend-x86
Author: Alexis Engelke (aengelke)
Changes:
PHIElim deduplicates identical PHI nodes to reduce the number of copies inserted. There are two cases:
1. Identical PHI nodes are in different blocks. This is the reason for the optimization; it cannot be avoided at the SSA level. A necessary prerequisite is that all basic blocks where such a PHI node could occur have the same predecessors. This implies that every one of the (>= 2) predecessors has multiple successors, i.e. all edges into the block are critical edges (a small sketch below illustrates this condition).
2. Identical PHI nodes are in the same block. CSE can remove these. There are a few cases where they still occur:
- expand-large-div-rem creates PHI nodes with large integers, which get lowered into one PHI per MVT. Later, some identical values (zeroes) get folded, resulting in identical PHI nodes.
- peephole-opt occasionally inserts PHIs for the same value.
- Some pseudo instruction emitters create redundant PHI nodes (e.g., AVR's insertShift), merging the same values more than once.
In any case, this happens rarely, and MachineCSE handles most occurrences anyway, so PHIElim only sees very few such cases (see the changed test files).
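To make the critical-edge condition from case 1 concrete, here is a minimal, self-contained sketch; `ToyBlock` and `allIncomingEdgesCritical` are illustrative stand-ins I introduce here, not LLVM's `MachineBasicBlock` API:

```cpp
#include <vector>

// Illustrative stand-in for a machine basic block (not LLVM's API).
struct ToyBlock {
  std::vector<ToyBlock *> Preds; // predecessor blocks
  std::vector<ToyBlock *> Succs; // successor blocks
};

// A PHI in this block can only have an identical twin in another block if the
// block has >= 2 predecessors and every predecessor has >= 2 successors, i.e.
// every incoming edge is a critical edge.
bool allIncomingEdgesCritical(const ToyBlock &BB) {
  if (BB.Preds.size() < 2)
    return false; // fewer than two predecessors => no non-trivial PHIs
  for (const ToyBlock *Pred : BB.Preds)
    if (Pred->Succs.size() < 2)
      return false; // Pred branches only to us => that edge is not critical
  return true;
}
```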
Currently, all PHI nodes are inserted into a DenseMap that checks equality by operands rather than by pointer. This hash map is fairly expensive (both the hashing and the map operations), but only really useful in the first case.
Avoid this expensive hashing most of the time by restricting it to basic blocks whose incoming edges are all critical. This improves performance for code with many PHI nodes, especially at -O0. (Note that Clang often doesn't generate PHI nodes and -O0 doesn't run mem2reg, so there are no very visible improvements in [c-t-t](http://llvm-compile-time-tracker.com/compare.php?from=a1bce0b89e800cb7ab1d3cf3437f8f34d0695468&to=679de0bf60dfa547413a34e18882049d409626c5&stat=instructions:u). Other compilers always generate PHI nodes.)
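As a rough, hedged illustration of gating the deduplication on that condition (this is a toy model, not the patch's actual code): `ToyPHI`, `OperandKey`, and `std::map` stand in for the real `MachineInstr` operands and the operand-hashing DenseMap; the point is simply that the value-based lookup is skipped entirely when not all incoming edges are critical.

```cpp
#include <map>
#include <utility>
#include <vector>

using Reg = unsigned;
// (incoming register, predecessor block number) pairs; a stand-in for the
// operands that make two PHI nodes "identical".
using OperandKey = std::vector<std::pair<Reg, int>>;

struct ToyPHI {
  Reg Dest;
  OperandKey Incoming;
};

// Maps an already-lowered PHI's operands to the register its incoming values
// were merged into; std::map stands in for the operand-hashing DenseMap.
using LoweredPHIMap = std::map<OperandKey, Reg>;

// Returns the register the PHI's incoming values should be copied into.
Reg lowerToyPHI(const ToyPHI &Phi, bool AllEdgesCritical,
                LoweredPHIMap &Lowered, Reg &NextVReg) {
  if (AllEdgesCritical) {
    // Only here do we pay for comparing/hashing the operands.
    auto [It, Inserted] = Lowered.try_emplace(Phi.Incoming, NextVReg);
    if (!Inserted)
      return It->second; // reuse the register of an identical, earlier PHI
    return NextVReg++;   // remember this PHI and hand out a fresh register
  }
  return NextVReg++; // common case: no map traffic at all
}
```

The actual patch keeps the existing LoweredPHIs map and simply bypasses it for blocks that have a non-critical incoming edge, as shown in the diff below.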
---
Patch is 49.51 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/97064.diff
4 Files Affected:
- (modified) llvm/lib/CodeGen/PHIElimination.cpp (+42-15)
- (modified) llvm/test/CodeGen/AMDGPU/branch-folding-implicit-def-subreg.ll (+2-2)
- (modified) llvm/test/CodeGen/X86/bfloat.ll (+169-171)
- (modified) llvm/test/CodeGen/X86/div-rem-pair-recomposition-signed.ll (+148-146)
``````````diff
diff --git a/llvm/lib/CodeGen/PHIElimination.cpp b/llvm/lib/CodeGen/PHIElimination.cpp
index 592972f5c83b2..4fde4ec78ea28 100644
--- a/llvm/lib/CodeGen/PHIElimination.cpp
+++ b/llvm/lib/CodeGen/PHIElimination.cpp
@@ -83,7 +83,8 @@ namespace {
bool EliminatePHINodes(MachineFunction &MF, MachineBasicBlock &MBB);
void LowerPHINode(MachineBasicBlock &MBB,
- MachineBasicBlock::iterator LastPHIIt);
+ MachineBasicBlock::iterator LastPHIIt,
+ bool AllEdgesCritical);
/// analyzePHINodes - Gather information about the PHI nodes in
/// here. In particular, we want to map the number of uses of a virtual
@@ -191,7 +192,8 @@ bool PHIElimination::runOnMachineFunction(MachineFunction &MF) {
MRI->leaveSSA();
// Populate VRegPHIUseCount
- analyzePHINodes(MF);
+ if (LV || LIS)
+ analyzePHINodes(MF);
// Eliminate PHI instructions by inserting copies into predecessor blocks.
for (auto &MBB : MF)
@@ -239,8 +241,20 @@ bool PHIElimination::EliminatePHINodes(MachineFunction &MF,
MachineBasicBlock::iterator LastPHIIt =
std::prev(MBB.SkipPHIsAndLabels(MBB.begin()));
+ // If all incoming edges are critical, we try to deduplicate identical PHIs so
+ // that we generate fewer copies. If any edge is non-critical, we either
+ // have fewer than two predecessors (=> no PHIs) or a predecessor has only us
+ // as a successor (=> an identical PHI node can't occur in a different block).
+ bool AllEdgesCritical = MBB.pred_size() >= 2;
+ for (MachineBasicBlock *Pred : MBB.predecessors()) {
+ if (Pred->succ_size() < 2) {
+ AllEdgesCritical = false;
+ break;
+ }
+ }
+
while (MBB.front().isPHI())
- LowerPHINode(MBB, LastPHIIt);
+ LowerPHINode(MBB, LastPHIIt, AllEdgesCritical);
return true;
}
@@ -267,7 +281,8 @@ static bool allPhiOperandsUndefined(const MachineInstr &MPhi,
}
/// LowerPHINode - Lower the PHI node at the top of the specified block.
void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
- MachineBasicBlock::iterator LastPHIIt) {
+ MachineBasicBlock::iterator LastPHIIt,
+ bool AllEdgesCritical) {
++NumLowered;
MachineBasicBlock::iterator AfterPHIsIt = std::next(LastPHIIt);
@@ -283,6 +298,7 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
// Create a new register for the incoming PHI arguments.
MachineFunction &MF = *MBB.getParent();
unsigned IncomingReg = 0;
+ bool EliminateNow = true; // delay elimination of nodes in LoweredPHIs
bool reusedIncoming = false; // Is IncomingReg reused from an earlier PHI?
// Insert a register to register copy at the top of the current block (but
@@ -297,19 +313,28 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
TII->get(TargetOpcode::IMPLICIT_DEF), DestReg);
else {
// Can we reuse an earlier PHI node? This only happens for critical edges,
- // typically those created by tail duplication.
- unsigned &entry = LoweredPHIs[MPhi];
- if (entry) {
+ // typically those created by tail duplication. In general, an identical PHI
+ // node can't occur, so avoid hashing/storing such PHIs, which is somewhat
+ // expensive.
+ unsigned *Entry = nullptr;
+ if (AllEdgesCritical)
+ Entry = &LoweredPHIs[MPhi];
+ if (Entry && *Entry) {
// An identical PHI node was already lowered. Reuse the incoming register.
- IncomingReg = entry;
+ IncomingReg = *Entry;
reusedIncoming = true;
++NumReused;
LLVM_DEBUG(dbgs() << "Reusing " << printReg(IncomingReg) << " for "
<< *MPhi);
} else {
const TargetRegisterClass *RC = MF.getRegInfo().getRegClass(DestReg);
- entry = IncomingReg = MF.getRegInfo().createVirtualRegister(RC);
+ IncomingReg = MF.getRegInfo().createVirtualRegister(RC);
+ if (Entry) {
+ EliminateNow = false;
+ *Entry = IncomingReg;
+ }
}
+
// Give the target possiblity to handle special cases fallthrough otherwise
PHICopy = TII->createPHIDestinationCopy(MBB, AfterPHIsIt, MPhi->getDebugLoc(),
IncomingReg, DestReg);
@@ -445,11 +470,13 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
}
// Adjust the VRegPHIUseCount map to account for the removal of this PHI node.
- for (unsigned i = 1; i != MPhi->getNumOperands(); i += 2) {
- if (!MPhi->getOperand(i).isUndef()) {
- --VRegPHIUseCount[BBVRegPair(
- MPhi->getOperand(i + 1).getMBB()->getNumber(),
- MPhi->getOperand(i).getReg())];
+ if (LV || LIS) {
+ for (unsigned i = 1; i != MPhi->getNumOperands(); i += 2) {
+ if (!MPhi->getOperand(i).isUndef()) {
+ --VRegPHIUseCount[BBVRegPair(
+ MPhi->getOperand(i + 1).getMBB()->getNumber(),
+ MPhi->getOperand(i).getReg())];
+ }
}
}
@@ -646,7 +673,7 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
}
// Really delete the PHI instruction now, if it is not in the LoweredPHIs map.
- if (reusedIncoming || !IncomingReg) {
+ if (EliminateNow) {
if (LIS)
LIS->RemoveMachineInstrFromMaps(*MPhi);
MF.deleteMachineInstr(MPhi);
diff --git a/llvm/test/CodeGen/AMDGPU/branch-folding-implicit-def-subreg.ll b/llvm/test/CodeGen/AMDGPU/branch-folding-implicit-def-subreg.ll
index 384715a849c1e..0bd030f1a3750 100644
--- a/llvm/test/CodeGen/AMDGPU/branch-folding-implicit-def-subreg.ll
+++ b/llvm/test/CodeGen/AMDGPU/branch-folding-implicit-def-subreg.ll
@@ -804,6 +804,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: renamable $vgpr15 = V_ALIGNBIT_B32_e64 $vgpr15, $vgpr14, 1, implicit $exec
; GFX90A-NEXT: renamable $sgpr48_sgpr49 = S_XOR_B64 $exec, -1, implicit-def dead $scc
; GFX90A-NEXT: renamable $sgpr60_sgpr61 = S_OR_B64 renamable $sgpr28_sgpr29, $exec, implicit-def dead $scc
+ ; GFX90A-NEXT: renamable $vgpr10 = COPY renamable $vgpr14, implicit $exec
; GFX90A-NEXT: S_BRANCH %bb.61
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.58:
@@ -886,11 +887,10 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.61.Flow31:
; GFX90A-NEXT: successors: %bb.62(0x80000000)
- ; GFX90A-NEXT: liveins: $sgpr12, $sgpr13, $sgpr14, $sgpr15, $vgpr15, $vgpr17, $vgpr18, $vgpr30, $vgpr31, $vgpr52, $vgpr53, $sgpr4_sgpr5, $sgpr6_sgpr7:0x000000000000000F, $sgpr8_sgpr9, $sgpr10_sgpr11, $sgpr16_sgpr17, $sgpr24_sgpr25, $sgpr26_sgpr27, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr46_sgpr47, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr58_sgpr59, $sgpr60_sgpr61, $sgpr16_sgpr17_sgpr18_sgpr19:0x00000000000000F0, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $vgpr0_vgpr1:0x000000000000000F, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr6_vgpr7:0x000000000000000F, $vgpr8_vgpr9:0x000000000000000F, $vgpr10_vgpr11:0x000000000000000C, $vgpr12_vgpr13:0x000000000000000F, $vgpr14_vgpr15:0x0000000000000003, $vgpr16_vgpr17:0x0000000000000003, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
+ ; GFX90A-NEXT: liveins: $sgpr12, $sgpr13, $sgpr14, $sgpr15, $vgpr15, $vgpr17, $vgpr18, $vgpr30, $vgpr31, $vgpr52, $vgpr53, $sgpr4_sgpr5, $sgpr6_sgpr7:0x000000000000000F, $sgpr8_sgpr9, $sgpr10_sgpr11, $sgpr16_sgpr17, $sgpr24_sgpr25, $sgpr26_sgpr27, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr46_sgpr47, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr58_sgpr59, $sgpr60_sgpr61, $sgpr16_sgpr17_sgpr18_sgpr19:0x00000000000000F0, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $vgpr0_vgpr1:0x000000000000000F, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr6_vgpr7:0x000000000000000F, $vgpr8_vgpr9:0x000000000000000F, $vgpr10_vgpr11:0x000000000000000F, $vgpr12_vgpr13:0x000000000000000F, $vgpr14_vgpr15:0x0000000000000003, $vgpr16_vgpr17:0x0000000000000003, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x0000000000000003, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: $exec = S_OR_B64 $exec, killed renamable $sgpr50_sgpr51, implicit-def $scc
; GFX90A-NEXT: renamable $sgpr50_sgpr51 = S_MOV_B64 0
- ; GFX90A-NEXT: renamable $vgpr10 = COPY renamable $vgpr14, implicit $exec
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.62.Flow30:
; GFX90A-NEXT: successors: %bb.56(0x80000000)
diff --git a/llvm/test/CodeGen/X86/bfloat.ll b/llvm/test/CodeGen/X86/bfloat.ll
index 8b5ca57df27ed..ec76e8b05678b 100644
--- a/llvm/test/CodeGen/X86/bfloat.ll
+++ b/llvm/test/CodeGen/X86/bfloat.ll
@@ -770,12 +770,70 @@ define <32 x bfloat> @pr63017_2() nounwind {
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
; SSE2-NEXT: movd {{.*#+}} xmm0 = [-1.0E+0,0.0E+0,0.0E+0,0.0E+0]
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movdqa %xmm0, %xmm15
+; SSE2-NEXT: movd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movdqa %xmm0, %xmm13
+; SSE2-NEXT: movdqa %xmm0, %xmm14
+; SSE2-NEXT: movdqa %xmm0, %xmm11
+; SSE2-NEXT: movdqa %xmm0, %xmm12
+; SSE2-NEXT: movdqa %xmm0, %xmm9
+; SSE2-NEXT: movdqa %xmm0, %xmm10
+; SSE2-NEXT: movdqa %xmm0, %xmm7
+; SSE2-NEXT: movdqa %xmm0, %xmm8
+; SSE2-NEXT: movdqa %xmm0, %xmm5
+; SSE2-NEXT: movdqa %xmm0, %xmm6
+; SSE2-NEXT: movdqa %xmm0, %xmm3
+; SSE2-NEXT: movdqa %xmm0, %xmm4
; SSE2-NEXT: movdqa %xmm0, %xmm1
+; SSE2-NEXT: movdqa %xmm0, %xmm2
; SSE2-NEXT: jmp .LBB12_3
; SSE2-NEXT: .LBB12_1:
-; SSE2-NEXT: movd {{.*#+}} xmm1 = [-1.0E+0,0.0E+0,0.0E+0,0.0E+0]
-; SSE2-NEXT: movdqa %xmm1, %xmm0
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd {{.*#+}} xmm2 = [-1.0E+0,0.0E+0,0.0E+0,0.0E+0]
+; SSE2-NEXT: movdqa %xmm2, %xmm0
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movdqa %xmm2, %xmm15
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
+; SSE2-NEXT: movdqa %xmm2, %xmm13
+; SSE2-NEXT: movdqa %xmm2, %xmm14
+; SSE2-NEXT: movdqa %xmm2, %xmm11
+; SSE2-NEXT: movdqa %xmm2, %xmm12
+; SSE2-NEXT: movdqa %xmm2, %xmm9
+; SSE2-NEXT: movdqa %xmm2, %xmm10
+; SSE2-NEXT: movdqa %xmm2, %xmm7
+; SSE2-NEXT: movdqa %xmm2, %xmm8
+; SSE2-NEXT: movdqa %xmm2, %xmm5
+; SSE2-NEXT: movdqa %xmm2, %xmm6
+; SSE2-NEXT: movdqa %xmm2, %xmm3
+; SSE2-NEXT: movdqa %xmm2, %xmm4
+; SSE2-NEXT: movdqa %xmm2, %xmm1
+; SSE2-NEXT: movd %xmm2, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
; SSE2-NEXT: .LBB12_3: # %else
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
@@ -787,299 +845,240 @@ define <32 x bfloat> @pr63017_2() nounwind {
; SSE2-NEXT: .LBB12_5: # %else2
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_6
-; SSE2-NEXT: # %bb.7: # %cond.load4
+; SSE2-NEXT: jne .LBB12_7
+; SSE2-NEXT: # %bb.6: # %cond.load4
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movdqa %xmm1, %xmm14
-; SSE2-NEXT: movdqa %xmm1, %xmm15
-; SSE2-NEXT: movdqa %xmm1, %xmm12
-; SSE2-NEXT: movdqa %xmm1, %xmm13
-; SSE2-NEXT: movdqa %xmm1, %xmm10
-; SSE2-NEXT: movdqa %xmm1, %xmm11
-; SSE2-NEXT: movdqa %xmm1, %xmm8
-; SSE2-NEXT: movdqa %xmm1, %xmm9
-; SSE2-NEXT: movdqa %xmm1, %xmm6
-; SSE2-NEXT: movdqa %xmm1, %xmm7
-; SSE2-NEXT: movdqa %xmm1, %xmm4
-; SSE2-NEXT: movdqa %xmm1, %xmm5
-; SSE2-NEXT: movdqa %xmm1, %xmm2
-; SSE2-NEXT: movdqa %xmm1, %xmm3
-; SSE2-NEXT: movd %eax, %xmm1
-; SSE2-NEXT: jmp .LBB12_8
-; SSE2-NEXT: .LBB12_6:
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Folded Spill
-; SSE2-NEXT: movdqa %xmm1, %xmm14
-; SSE2-NEXT: movdqa %xmm1, %xmm15
-; SSE2-NEXT: movdqa %xmm1, %xmm12
-; SSE2-NEXT: movdqa %xmm1, %xmm13
-; SSE2-NEXT: movdqa %xmm1, %xmm10
-; SSE2-NEXT: movdqa %xmm1, %xmm11
-; SSE2-NEXT: movdqa %xmm1, %xmm8
-; SSE2-NEXT: movdqa %xmm1, %xmm9
-; SSE2-NEXT: movdqa %xmm1, %xmm6
-; SSE2-NEXT: movdqa %xmm1, %xmm7
-; SSE2-NEXT: movdqa %xmm1, %xmm4
-; SSE2-NEXT: movdqa %xmm1, %xmm5
-; SSE2-NEXT: movdqa %xmm1, %xmm2
-; SSE2-NEXT: movdqa %xmm1, %xmm3
-; SSE2-NEXT: .LBB12_8: # %else5
+; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
+; SSE2-NEXT: .LBB12_7: # %else5
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_10
-; SSE2-NEXT: # %bb.9: # %cond.load7
+; SSE2-NEXT: jne .LBB12_9
+; SSE2-NEXT: # %bb.8: # %cond.load7
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_10: # %else8
+; SSE2-NEXT: .LBB12_9: # %else8
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_12
-; SSE2-NEXT: # %bb.11: # %cond.load10
+; SSE2-NEXT: jne .LBB12_11
+; SSE2-NEXT: # %bb.10: # %cond.load10
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_12: # %else11
+; SSE2-NEXT: .LBB12_11: # %else11
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_14
-; SSE2-NEXT: # %bb.13: # %cond.load13
+; SSE2-NEXT: jne .LBB12_13
+; SSE2-NEXT: # %bb.12: # %cond.load13
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_14: # %else14
+; SSE2-NEXT: .LBB12_13: # %else14
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_16
-; SSE2-NEXT: # %bb.15: # %cond.load16
+; SSE2-NEXT: jne .LBB12_15
+; SSE2-NEXT: # %bb.14: # %cond.load16
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_16: # %else17
+; SSE2-NEXT: .LBB12_15: # %else17
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_18
-; SSE2-NEXT: # %bb.17: # %cond.load19
+; SSE2-NEXT: jne .LBB12_17
+; SSE2-NEXT: # %bb.16: # %cond.load19
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_18: # %else20
+; SSE2-NEXT: .LBB12_17: # %else20
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_20
-; SSE2-NEXT: # %bb.19: # %cond.load22
+; SSE2-NEXT: jne .LBB12_19
+; SSE2-NEXT: # %bb.18: # %cond.load22
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_20: # %else23
+; SSE2-NEXT: .LBB12_19: # %else23
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_22
-; SSE2-NEXT: # %bb.21: # %cond.load25
+; SSE2-NEXT: jne .LBB12_21
+; SSE2-NEXT: # %bb.20: # %cond.load25
; SSE2-NEXT: movzwl (%rax), %eax
; SSE2-NEXT: shll $16, %eax
; SSE2-NEXT: movl %eax, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
-; SSE2-NEXT: .LBB12_22: # %else26
+; SSE2-NEXT: .LBB12_21: # %else26
; SSE2-NEXT: xorl %eax, %eax
; SSE2-NEXT: testb %al, %al
-; SSE2-NEXT: jne .LBB12_24
-; SSE2-NEXT: # %bb.23: # %cond.load28
+; SSE2-NEXT: jne .LBB12_23
+; SSE2-NEXT: # %bb.22: # %cond.load28...
[truncated]
``````````
https://github.com/llvm/llvm-project/pull/97064