[llvm] [AMDGPU] Avoid rewrite for code patterns that the mfma rewrite scheduler stage cannot properly handle. (PR #188750)
Tony Linthicum via llvm-commits
llvm-commits at lists.llvm.org
Thu Mar 26 07:12:02 PDT 2026
https://github.com/tlinthic created https://github.com/llvm/llvm-project/pull/188750
The mfma rewrite scheduler stage cannot generate correct code when a rewrite candidate register (i.e. a register defined by an MFMA or used as its OpC) is partially overwritten by a non-MFMA instruction and that "merged" register is then used. There are a couple of ways this can fail (illegal operands and partially defined registers from copies), but in short, rewrite() cannot handle partial rewrites; it would need to be fully subreg aware to do so, and even then the result would not be clean. Here's an example:
%reg:vreg_512 = V_MFMA_F32_32X32X1F32 ...
...
%reg.sub2:vreg_512 = COPY %x
...
%result = V_ADD_F32 %reg.sub0, ... ; Non-MAI use
The workaround is to recognize that we have a partial redef of an MFMA result and disallow MFMA rewrites in that case. The proper fix will likely be to replace rewrite() (the function responsible for copy insertion) with an implementation that is driven by LiveIntervals and is akin to live range splitting.
>From 410bad5c3e16df92059c008f45ceddc0e638f3c9 Mon Sep 17 00:00:00 2001
From: Tony Linthicum <tlinthic at gmail.com>
Date: Thu, 12 Mar 2026 10:57:21 -0500
Subject: [PATCH] [AMDGPU] Avoid rewrite for code patterns that the mfma
rewrite scheduler stage cannot properly handle.
The mfma rewrite scheduler stage cannot generate correct code when a
rewrite candidate register (i.e. a register defined by an MFMA or used
as its OpC) is partially overwritten by a non-MFMA instruction and that
"merged" register is then used. There are a couple of ways this can
fail (illegal operands and partially defined registers from copies),
but in short, rewrite() cannot handle partial rewrites; it would need
to be fully subreg aware to do so, and even then the result would not
be clean. Here's an example:
%reg:vreg_512 = V_MFMA_F32_32X32X1F32 ...
...
%reg.sub2:vreg_512 = COPY %x
...
%result = V_ADD_F32 %reg.sub0, ... ; Non-MAI use
The workaround is to recognize that we have a partial redef of an MFMA
result and disallow MFMA rewrites in that case. The proper fix will
likely be to replace rewrite() (the function responsible for copy
insertion) with an implementation that is driven by LiveIntervals and
is akin to live range splitting.
---
llvm/lib/Target/AMDGPU/GCNSchedStrategy.cpp | 216 +++++-
llvm/lib/Target/AMDGPU/GCNSchedStrategy.h | 17 +-
.../AMDGPU/sched_mfma_rewrite_redef.mir | 689 ++++++++++++++++++
3 files changed, 919 insertions(+), 3 deletions(-)
create mode 100644 llvm/test/CodeGen/AMDGPU/sched_mfma_rewrite_redef.mir
diff --git a/llvm/lib/Target/AMDGPU/GCNSchedStrategy.cpp b/llvm/lib/Target/AMDGPU/GCNSchedStrategy.cpp
index ad24bad1fd5d7..cdda320f52168 100644
--- a/llvm/lib/Target/AMDGPU/GCNSchedStrategy.cpp
+++ b/llvm/lib/Target/AMDGPU/GCNSchedStrategy.cpp
@@ -102,7 +102,7 @@ static cl::opt<bool> PrintMaxRPRegUsageAfterScheduler(
static cl::opt<bool> DisableRewriteMFMAFormSchedStage(
"amdgpu-disable-rewrite-mfma-form-sched-stage", cl::Hidden,
- cl::desc("Disable rewrite mfma rewrite scheduling stage"), cl::init(true));
+ cl::desc("Disable rewrite mfma rewrite scheduling stage"), cl::init(false));
const unsigned ScheduleMetrics::ScaleFactor = 100;
@@ -1280,6 +1280,99 @@ bool GCNSchedStage::initGCNSchedStage() {
return true;
}
+bool RewriteMFMAFormStage::getDefsFromLiveInterval(
+ LiveInterval &LI, SlotIndex Idx, LaneBitmask UseLanes,
+ SmallVectorImpl<SlotIndex> &DefIdxs) {
+
+ if (!LI.hasSubRanges()) {
+ // No subranges - use main interval
+ VNInfo *VNI = LI.getVNInfoAt(Idx);
+ if (!VNI)
+ return false;
+
+ if (!VNI->isPHIDef()) {
+ DefIdxs.push_back(VNI->def);
+ return false;
+ }
+ return true; // Has PHI, need traversal
+ }
+
+ bool NeedsPredTraversal = false;
+ for (LiveInterval::SubRange &SR : LI.subranges()) {
+ // Only check subranges that cover lanes we're using
+ if ((SR.LaneMask & UseLanes).none())
+ continue;
+
+ VNInfo *SubVNI = SR.getVNInfoAt(Idx);
+ if (!SubVNI)
+ continue;
+
+ // Need to look for duplicates before adding to vector as there may
+ // be multiple subregs assigned by the same def.
+ if (!SubVNI->isPHIDef() && !llvm::is_contained(DefIdxs, SubVNI->def)) {
+ // Direct def - this def writes to these lanes.
+ DefIdxs.push_back(SubVNI->def);
+ } else {
+ // PHI def - will need to traverse predecessors.
+ NeedsPredTraversal = true;
+ }
+ }
+
+ return NeedsPredTraversal;
+}
+
+// TODO: if appropriate, replace findReachingDefs with this implementation
+// when all of RewriteMFMAFormStage is subreg aware. If it is no longer
+// needed in that implementation, then delete it.
+void RewriteMFMAFormStage::findReachingDefsSubRegAware(
+ MachineOperand &UseMO, LiveIntervals *LIS,
+ SmallVectorImpl<SlotIndex> &DefIdxs) {
+ MachineInstr *UseMI = UseMO.getParent();
+ LiveInterval &UseLI = LIS->getInterval(UseMO.getReg());
+ SlotIndex UseIdx = LIS->getInstructionIndex(*UseMI);
+
+ // Determine which lanes the use reads.
+ LaneBitmask UseLanes = LaneBitmask::getAll();
+ if (UseMO.getSubReg() != 0) {
+ UseLanes = DAG.TRI->getSubRegIndexLaneMask(UseMO.getSubReg());
+ }
+
+ // Get the definitions that reach this use by looking at the LiveInterval
+ // for the register and looking at subranges (if any). Continue traversal
+ // if any PHI nodes are encountered.
+ if (!getDefsFromLiveInterval(UseLI, UseIdx, UseLanes, DefIdxs))
+ return;
+
+ SmallPtrSet<MachineBasicBlock *, 8> Visited = {UseMI->getParent()};
+ SmallVector<MachineBasicBlock *, 8> Worklist;
+
+ // Mark the predecessor blocks for traversal
+ for (MachineBasicBlock *PredMBB : UseMI->getParent()->predecessors()) {
+ Worklist.push_back(PredMBB);
+ Visited.insert(PredMBB);
+ }
+
+ while (!Worklist.empty()) {
+ MachineBasicBlock *CurrMBB = Worklist.pop_back_val();
+ SlotIndex CurrMBBEnd = LIS->getMBBEndIdx(CurrMBB);
+ SlotIndex CurrIdx = CurrMBBEnd.getPrevSlot();
+
+ // If there is a def in this block, then add it to the list. This is the
+  // reaching def of this path. We will traverse predecessors if any of the
+ // subregs of UseMO are defined by a PHI.
+ if (!getDefsFromLiveInterval(UseLI, CurrIdx, UseLanes, DefIdxs))
+ continue;
+
+ VNInfo *VNI = UseLI.getVNInfoAt(CurrIdx);
+ MachineBasicBlock *DefMBB = LIS->getMBBFromIndex(VNI->def);
+
+ for (MachineBasicBlock *PredMBB : DefMBB->predecessors()) {
+ if (Visited.insert(PredMBB).second)
+ Worklist.push_back(PredMBB);
+ }
+ }
+}
+
void RewriteMFMAFormStage::findReachingDefs(
MachineOperand &UseMO, LiveIntervals *LIS,
SmallVectorImpl<SlotIndex> &DefIdxs) {
@@ -2232,12 +2325,132 @@ bool RewriteMFMAFormStage::isRewriteCandidate(MachineInstr *MI) const {
return AMDGPU::getMFMASrcCVDstAGPROp(MI->getOpcode()) != -1;
}
+bool RewriteMFMAFormStage::canSafelyConvertToAGPR(Register Reg) {
+ if (!Reg.isVirtual())
+ return true;
+
+ for (MachineOperand &UseOp : DAG.MRI.use_nodbg_operands(Reg)) {
+ MachineInstr *UseMI = UseOp.getParent();
+
+ // Rewrite candidates can use the source as an AGPR.
+ if (isRewriteCandidate(UseMI))
+ continue;
+
+ // If the use is a COPY, the source of the COPY can be converted
+ // to an AGPR if the destination is a VGPR.
+ if (UseMI->isCopy()) {
+ Register DstReg = UseMI->getOperand(0).getReg();
+ const TargetRegisterClass *RC = DAG.MRI.getRegClass(DstReg);
+ if (SRI->hasVGPRs(RC))
+ continue;
+ }
+
+ // Non-MAI use. Check reaching defs using subreg-aware analysis.
+ SmallVector<SlotIndex, 8> ReachingDefIndexes;
+ findReachingDefsSubRegAware(UseOp, DAG.LIS, ReachingDefIndexes);
+
+ // Check if at least one MAI def reaches this use.
+ bool HasMAIDef = false;
+ SmallVector<MachineInstr *, 4> PartialRedefs;
+
+ for (SlotIndex DefIdx : ReachingDefIndexes) {
+ MachineInstr *DefMI = DAG.LIS->getInstructionFromIndex(DefIdx);
+ if (!DefMI)
+ continue;
+
+ // If the destination of the definition isn't a register, this is
+ // likely an inline asm or other special instruction. In such cases,
+ // we conservatively bail since we can't analyze the definition.
+ MachineOperand &DefMO = DefMI->getOperand(0);
+ if (!DefMO.isReg())
+ return false;
+
+ if (isRewriteCandidate(DefMI)) {
+ HasMAIDef = true;
+ continue;
+ }
+
+ // Check for partial redef.
+ if (DefMO.getReg() == Reg && DefMO.getSubReg() != 0) {
+ PartialRedefs.push_back(DefMI);
+ }
+ }
+
+ if (!HasMAIDef)
+ return false;
+
+ // For each partial redef that reaches this use, check if it merges with
+ // an MFMA def. A partial write like "%reg.sub2 = COPY ..." implicitly uses
+ // the existing value of %reg (for the other subregs). If an MFMA value
+ // reaches the point of the partial write, then the partial write creates a
+ // merged value combining AGPR lanes from the MFMA with the new value which
+ // rewrite() cannot handle correctly.
+ for (MachineInstr *PartialRedef : PartialRedefs) {
+ SlotIndex PartialRedefIdx = DAG.LIS->getInstructionIndex(*PartialRedef);
+ LiveInterval &LI = DAG.LIS->getInterval(Reg);
+
+ // Get what's live just before the partial write.
+ VNInfo *VNI = LI.getVNInfoBefore(PartialRedefIdx);
+ // If there's no prior def, then this is an initialization and is safe.
+ if (!VNI)
+ continue;
+
+ // If it's a PHI def, conservatively bail if there's any MFMA def
+ // in the live interval (the MFMA might flow through the PHI).
+ if (VNI->isPHIDef()) {
+ for (const VNInfo *CheckVNI : LI.valnos) {
+ if (!CheckVNI || CheckVNI->isPHIDef())
+ continue;
+ MachineInstr *Def = DAG.LIS->getInstructionFromIndex(CheckVNI->def);
+ if (Def && isRewriteCandidate(Def)) {
+ return false;
+ }
+ }
+ continue;
+ }
+
+ // Not a PHI, so check the single reaching def. If it's an MFMA, the
+ // partial rewrite is definitely something that rewrite() cannot handle.
+ MachineInstr *ReachingDef = DAG.LIS->getInstructionFromIndex(VNI->def);
+ if (ReachingDef && isRewriteCandidate(ReachingDef)) {
+ return false;
+ }
+ }
+ }
+
+ return true;
+}
+
bool RewriteMFMAFormStage::initHeuristics(
std::vector<std::pair<MachineInstr *, unsigned>> &RewriteCands,
DenseMap<MachineBasicBlock *, std::set<Register>> &CopyForUse,
SmallPtrSetImpl<MachineInstr *> &CopyForDef) {
bool Changed = false;
+  // This is a workaround for a bug. If all or part of a rewrite
+  // candidate is redefined, the current implementation of rewrite()
+  // won't see that definition, so it misses the need for a copy in
+  // the case of a partial def, or for a new temp in the case of a
+  // full def. Nor is the redefinition accounted for in the cost
+  // analysis. A new implementation of rewrite() is in the works and
+  // will handle this type of code.
+  // TODO: this code and getRewriteCost() should be removed when
+  // rewrite() is reimplemented.
+ for (MachineBasicBlock &MBB : MF) {
+ for (MachineInstr &MI : MBB) {
+ if (!isRewriteCandidate(&MI))
+ continue;
+ Register DstReg = MI.getOperand(0).getReg();
+ if (!canSafelyConvertToAGPR(DstReg))
+ return false;
+
+ MachineOperand *Src2 = TII->getNamedOperand(MI, AMDGPU::OpName::src2);
+ if (Src2 && Src2->isReg() && Src2->getReg() != DstReg &&
+ !canSafelyConvertToAGPR(Src2->getReg()))
+ return false;
+ }
+ }
+
// Prepare for the heuristics
for (MachineBasicBlock &MBB : MF) {
for (MachineInstr &MI : MBB) {
@@ -2758,7 +2971,6 @@ bool RewriteMFMAFormStage::rewrite(
const TargetRegisterClass *CurrRC = DAG.MRI.getRegClass(RegToRewrite);
const TargetRegisterClass *AGPRRC = SRI->getEquivalentAGPRClass(CurrRC);
-
DAG.MRI.setRegClass(RegToRewrite, AGPRRC);
}
diff --git a/llvm/lib/Target/AMDGPU/GCNSchedStrategy.h b/llvm/lib/Target/AMDGPU/GCNSchedStrategy.h
index ae86388af5545..8fbdad9348610 100644
--- a/llvm/lib/Target/AMDGPU/GCNSchedStrategy.h
+++ b/llvm/lib/Target/AMDGPU/GCNSchedStrategy.h
@@ -473,13 +473,28 @@ class RewriteMFMAFormStage : public GCNSchedStage {
/// \returns true if this MI is a rewrite candidate.
bool isRewriteCandidate(MachineInstr *MI) const;
+ /// \returns true if this register can safely be converted to AGPR.
+ bool canSafelyConvertToAGPR(Register Reg);
+
+  /// Finds all the defs from SubRanges (if any) that define lanes found in
+  /// UseLanes. Returns true if any of those lanes is defined by a PHI,
+  /// meaning predecessor traversal is still needed.
+ bool getDefsFromLiveInterval(LiveInterval &LI, SlotIndex Idx,
+ LaneBitmask UseLanes,
+ SmallVectorImpl<SlotIndex> &DefIdxs);
+
+ /// Finds all the reaching defs of \p UseMO and stores the SlotIndexes into \p
+ /// DefIdxs. Tracks subregs.
+ void findReachingDefsSubRegAware(MachineOperand &UseMO, LiveIntervals *LIS,
+ SmallVectorImpl<SlotIndex> &DefIdxs);
+
/// Finds all the reaching defs of \p UseMO and stores the SlotIndexes into \p
/// DefIdxs
void findReachingDefs(MachineOperand &UseMO, LiveIntervals *LIS,
SmallVectorImpl<SlotIndex> &DefIdxs);
/// Finds all the reaching uses of \p DefMI and stores the use operands in \p
- /// ReachingUses
+ /// ReachingUses.
void findReachingUses(MachineInstr *DefMI, LiveIntervals *LIS,
SmallVectorImpl<MachineOperand *> &ReachingUses);
diff --git a/llvm/test/CodeGen/AMDGPU/sched_mfma_rewrite_redef.mir b/llvm/test/CodeGen/AMDGPU/sched_mfma_rewrite_redef.mir
new file mode 100644
index 0000000000000..a8ede60c0793d
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/sched_mfma_rewrite_redef.mir
@@ -0,0 +1,689 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 6
+# RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx950 -run-pass=machine-scheduler -amdgpu-disable-rewrite-mfma-form-sched-stage=false -verify-machineinstrs -o - %s | FileCheck %s
+
+--- |
+ define void @test_partial_redef() #0 {
+ entry:
+ unreachable
+ }
+
+ define void @test_full_redef() #0 {
+ entry:
+ unreachable
+ }
+
+ define void @test_subreg_partial_redef() #0 {
+ entry:
+ unreachable
+ }
+
+ define void @test_subreg_safe_rewrite() #0 {
+ entry:
+ unreachable
+ }
+
+ define void @test_mfma_partial_redef_merge() #0 {
+ entry:
+ unreachable
+ }
+
+ define void @test_partial_init_safe_rewrite() #0 {
+ entry:
+ unreachable
+ }
+
+ attributes #0 = { "amdgpu-waves-per-eu"="1,1" "amdgpu-flat-work-group-size"="64,64"}
+...
+
+---
+name: test_partial_redef
+tracksRegLiveness: true
+machineFunctionInfo:
+ isEntryFunction: true
+ scratchRSrcReg: '$sgpr96_sgpr97_sgpr98_sgpr99'
+ stackPtrOffsetReg: '$sgpr32'
+ argumentInfo:
+ privateSegmentBuffer: { reg: '$sgpr0_sgpr1_sgpr2_sgpr3' }
+ kernargSegmentPtr: { reg: '$sgpr4_sgpr5' }
+ workGroupIDX: { reg: '$sgpr6' }
+ privateSegmentWaveByteOffset: { reg: '$sgpr7' }
+ workItemIDX: { reg: '$vgpr0' }
+ sgprForEXECCopy: '$sgpr100_sgpr101'
+body: |
+ ; CHECK-LABEL: name: test_partial_redef
+ ; CHECK: bb.0:
+ ; CHECK-NEXT: successors: %bb.2(0x40000000), %bb.1(0x40000000)
+ ; CHECK-NEXT: liveins: $vgpr0, $sgpr4_sgpr5
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[DEF:%[0-9]+]]:vreg_512 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF1:%[0-9]+]]:vreg_64 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF2:%[0-9]+]]:vgpr_32 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF3:%[0-9]+]]:vreg_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF4:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF5:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF6:%[0-9]+]]:vreg_512 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF7:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF8:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF9:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF10:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF11:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF12:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF13:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF14:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: SCHED_BARRIER 0
+ ; CHECK-NEXT: $scc = IMPLICIT_DEF
+ ; CHECK-NEXT: S_CBRANCH_SCC1 %bb.2, implicit killed $scc
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.1:
+ ; CHECK-NEXT: successors: %bb.3(0x80000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: undef [[V_ADD_U32_e32_:%[0-9]+]].sub0:vreg_128_align2 = V_ADD_U32_e32 [[DEF3]].sub0, [[DEF2]], implicit $exec
+ ; CHECK-NEXT: S_BRANCH %bb.3
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.2:
+ ; CHECK-NEXT: successors: %bb.3(0x80000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: undef [[V_ADD_U32_e32_:%[0-9]+]].sub0:vreg_128_align2 = V_ADD_U32_e32 [[DEF3]].sub1, [[DEF2]], implicit $exec
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.3:
+ ; CHECK-NEXT: successors: %bb.3(0x40000000), %bb.4(0x40000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[V_MFMA_F32_16X16X4F32_vgprcd_e64_:%[0-9]+]]:vreg_128_align2 = V_MFMA_F32_16X16X4F32_vgprcd_e64 [[DEF2]], [[DEF2]], [[DEF3]], 0, 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_F32_16X16X4F32_vgprcd_e64_:%[0-9]+]].sub2:vreg_128_align2 = COPY [[DEF4]]
+ ; CHECK-NEXT: [[V_ADD_U32_e32_1:%[0-9]+]]:vgpr_32 = V_ADD_U32_e32 [[DEF3]].sub1, [[V_ADD_U32_e32_]].sub0, implicit $exec
+ ; CHECK-NEXT: [[V_CMP_EQ_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_EQ_U32_e64 [[V_ADD_U32_e32_1]], [[DEF5]], implicit $exec
+ ; CHECK-NEXT: dead [[V_ADD_F32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_F32_e64 0, [[V_MFMA_F32_16X16X4F32_vgprcd_e64_]].sub2, 0, [[V_MFMA_F32_16X16X4F32_vgprcd_e64_]].sub3, 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: $scc = COPY [[V_CMP_EQ_U32_e64_]].sub0
+ ; CHECK-NEXT: S_CBRANCH_SCC0 %bb.3, implicit killed $scc
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.4:
+ ; CHECK-NEXT: SCHED_BARRIER 0
+ ; CHECK-NEXT: [[DEF15:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: KILL [[DEF15]], [[DEF7]], [[DEF8]], [[DEF9]], [[DEF10]], [[DEF11]], [[DEF12]], [[DEF13]], [[DEF14]], [[DEF]], [[DEF6]], [[DEF1]], [[DEF2]], [[DEF3]]
+ ; CHECK-NEXT: S_ENDPGM 0
+ bb.0:
+ liveins: $vgpr0, $sgpr4_sgpr5
+ %1:vreg_1024 = IMPLICIT_DEF
+ %2:vreg_1024 = IMPLICIT_DEF
+ %3:vreg_1024 = IMPLICIT_DEF
+ %4:vreg_1024 = IMPLICIT_DEF
+ %5:vreg_1024 = IMPLICIT_DEF
+ %6:vreg_1024 = IMPLICIT_DEF
+ %7:vreg_1024 = IMPLICIT_DEF
+ %8:vreg_1024 = IMPLICIT_DEF
+ %9:vreg_1024 = IMPLICIT_DEF
+ %10:vreg_512 = IMPLICIT_DEF
+ %11:vreg_512 = IMPLICIT_DEF
+ %20:vreg_64 = IMPLICIT_DEF
+ %21:vgpr_32 = IMPLICIT_DEF
+ %22:vreg_128_align2 = IMPLICIT_DEF
+ %23:sgpr_32 = IMPLICIT_DEF
+ %50:sgpr_32 = IMPLICIT_DEF
+ SCHED_BARRIER 0
+ $scc = IMPLICIT_DEF
+ S_CBRANCH_SCC1 %bb.2, implicit killed $scc
+
+ bb.1:
+ undef %84.sub0:vreg_128_align2 = V_ADD_U32_e32 %22.sub0, %21, implicit $exec
+ S_BRANCH %bb.3
+
+ bb.2:
+ undef %84.sub0:vreg_128_align2 = V_ADD_U32_e32 %22.sub1, %21, implicit $exec
+
+ bb.3:
+ %15:vreg_128_align2 = V_MFMA_F32_16X16X4F32_vgprcd_e64 %21:vgpr_32, %21:vgpr_32, %22:vreg_128_align2, 0, 0, 0, implicit $mode, implicit $exec
+ %15.sub2:vreg_128_align2 = COPY %23:sgpr_32
+ %30:vgpr_32 = V_ADD_F32_e64 0, %15.sub2:vreg_128_align2, 0, %15.sub3:vreg_128_align2, 0, 0, implicit $mode, implicit $exec
+ %104:vgpr_32 = V_ADD_U32_e32 %22.sub1, %84.sub0, implicit $exec
+ %105:sreg_64 = V_CMP_EQ_U32_e64 %104:vgpr_32, %50:sgpr_32, implicit $exec
+ $scc = COPY %105.sub0:sreg_64
+ S_CBRANCH_SCC0 %bb.3, implicit killed $scc
+
+ bb.4:
+ SCHED_BARRIER 0
+ KILL %1, %2, %3, %4, %5, %6, %7, %8, %9, %10, %11, %20, %21, %22
+ S_ENDPGM 0
+
+---
+name: test_full_redef
+tracksRegLiveness: true
+machineFunctionInfo:
+ isEntryFunction: true
+ scratchRSrcReg: '$sgpr96_sgpr97_sgpr98_sgpr99'
+ stackPtrOffsetReg: '$sgpr32'
+ argumentInfo:
+ privateSegmentBuffer: { reg: '$sgpr0_sgpr1_sgpr2_sgpr3' }
+ kernargSegmentPtr: { reg: '$sgpr4_sgpr5' }
+ workGroupIDX: { reg: '$sgpr6' }
+ privateSegmentWaveByteOffset: { reg: '$sgpr7' }
+ workItemIDX: { reg: '$vgpr0' }
+ sgprForEXECCopy: '$sgpr100_sgpr101'
+body: |
+ ; CHECK-LABEL: name: test_full_redef
+ ; CHECK: bb.0:
+ ; CHECK-NEXT: successors: %bb.2(0x40000000), %bb.1(0x40000000)
+ ; CHECK-NEXT: liveins: $vgpr0, $sgpr4_sgpr5
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[DEF:%[0-9]+]]:vreg_512 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF1:%[0-9]+]]:vreg_64 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF2:%[0-9]+]]:vgpr_32 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF3:%[0-9]+]]:vreg_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF4:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF5:%[0-9]+]]:vreg_512 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF6:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF7:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF8:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF9:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF10:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF11:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF12:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF13:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: SCHED_BARRIER 0
+ ; CHECK-NEXT: $scc = IMPLICIT_DEF
+ ; CHECK-NEXT: S_CBRANCH_SCC1 %bb.2, implicit killed $scc
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.1:
+ ; CHECK-NEXT: successors: %bb.3(0x80000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: undef [[V_ADD_U32_e32_:%[0-9]+]].sub0:vreg_128_align2 = V_ADD_U32_e32 [[DEF3]].sub0, [[DEF2]], implicit $exec
+ ; CHECK-NEXT: S_BRANCH %bb.3
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.2:
+ ; CHECK-NEXT: successors: %bb.3(0x80000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: undef [[V_ADD_U32_e32_:%[0-9]+]].sub0:vreg_128_align2 = V_ADD_U32_e32 [[DEF3]].sub1, [[DEF2]], implicit $exec
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.3:
+ ; CHECK-NEXT: successors: %bb.3(0x40000000), %bb.4(0x40000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[DEF14:%[0-9]+]]:vreg_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vreg_128_align2 = COPY [[DEF14]]
+ ; CHECK-NEXT: dead [[V_ADD_F32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_F32_e64 0, [[COPY]].sub2, 0, [[COPY]].sub3, 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_ADD_U32_e32_1:%[0-9]+]]:vgpr_32 = V_ADD_U32_e32 [[DEF3]].sub1, [[V_ADD_U32_e32_]].sub0, implicit $exec
+ ; CHECK-NEXT: [[V_CMP_EQ_U32_e64_:%[0-9]+]]:sreg_64 = V_CMP_EQ_U32_e64 [[V_ADD_U32_e32_1]], [[DEF4]], implicit $exec
+ ; CHECK-NEXT: dead [[V_MFMA_F32_16X16X4F32_vgprcd_e64_:%[0-9]+]]:vreg_128_align2 = V_MFMA_F32_16X16X4F32_vgprcd_e64 [[DEF2]], [[DEF2]], [[DEF3]], 0, 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: $scc = COPY [[V_CMP_EQ_U32_e64_]].sub0
+ ; CHECK-NEXT: S_CBRANCH_SCC0 %bb.3, implicit killed $scc
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.4:
+ ; CHECK-NEXT: SCHED_BARRIER 0
+ ; CHECK-NEXT: [[DEF15:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: KILL [[DEF15]], [[DEF6]], [[DEF7]], [[DEF8]], [[DEF9]], [[DEF10]], [[DEF11]], [[DEF12]], [[DEF13]], [[DEF]], [[DEF5]], [[DEF1]], [[DEF2]], [[DEF3]]
+ ; CHECK-NEXT: S_ENDPGM 0
+ bb.0:
+ liveins: $vgpr0, $sgpr4_sgpr5
+ %1:vreg_1024 = IMPLICIT_DEF
+ %2:vreg_1024 = IMPLICIT_DEF
+ %3:vreg_1024 = IMPLICIT_DEF
+ %4:vreg_1024 = IMPLICIT_DEF
+ %5:vreg_1024 = IMPLICIT_DEF
+ %6:vreg_1024 = IMPLICIT_DEF
+ %7:vreg_1024 = IMPLICIT_DEF
+ %8:vreg_1024 = IMPLICIT_DEF
+ %9:vreg_1024 = IMPLICIT_DEF
+ %10:vreg_512 = IMPLICIT_DEF
+ %11:vreg_512 = IMPLICIT_DEF
+ %20:vreg_64 = IMPLICIT_DEF
+ %21:vgpr_32 = IMPLICIT_DEF
+ %22:vreg_128_align2 = IMPLICIT_DEF
+ %23:vreg_128_align2 = IMPLICIT_DEF
+ %50:sgpr_32 = IMPLICIT_DEF
+ SCHED_BARRIER 0
+ $scc = IMPLICIT_DEF
+ S_CBRANCH_SCC1 %bb.2, implicit killed $scc
+
+ bb.1:
+ undef %84.sub0:vreg_128_align2 = V_ADD_U32_e32 %22.sub0, %21, implicit $exec
+ S_BRANCH %bb.3
+
+ bb.2:
+ undef %84.sub0:vreg_128_align2 = V_ADD_U32_e32 %22.sub1, %21, implicit $exec
+
+ bb.3:
+ %35:vreg_128_align2 = V_MFMA_F32_16X16X4F32_vgprcd_e64 %21:vgpr_32, %21:vgpr_32, %22:vreg_128_align2, 0, 0, 0, implicit $mode, implicit $exec
+ %35:vreg_128_align2 = COPY %23:vreg_128_align2
+ %40:vgpr_32 = V_ADD_F32_e64 0, %35.sub2:vreg_128_align2, 0, %35.sub3:vreg_128_align2, 0, 0, implicit $mode, implicit $exec
+ %104:vgpr_32 = V_ADD_U32_e32 %22.sub1, %84.sub0, implicit $exec
+ %105:sreg_64 = V_CMP_EQ_U32_e64 %104:vgpr_32, %50:sgpr_32, implicit $exec
+ $scc = COPY %105.sub0:sreg_64
+ S_CBRANCH_SCC0 %bb.3, implicit killed $scc
+
+ bb.4:
+ SCHED_BARRIER 0
+ KILL %1, %2, %3, %4, %5, %6, %7, %8, %9, %10, %11, %20, %21, %22
+ S_ENDPGM 0
+
+---
+name: test_subreg_partial_redef
+tracksRegLiveness: true
+machineFunctionInfo:
+ isEntryFunction: true
+ scratchRSrcReg: '$sgpr96_sgpr97_sgpr98_sgpr99'
+ stackPtrOffsetReg: '$sgpr32'
+ argumentInfo:
+ privateSegmentBuffer: { reg: '$sgpr0_sgpr1_sgpr2_sgpr3' }
+ kernargSegmentPtr: { reg: '$sgpr4_sgpr5' }
+ workGroupIDX: { reg: '$sgpr6' }
+ privateSegmentWaveByteOffset: { reg: '$sgpr7' }
+ workItemIDX: { reg: '$vgpr0' }
+ sgprForEXECCopy: '$sgpr100_sgpr101'
+body: |
+ ; CHECK-LABEL: name: test_subreg_partial_redef
+ ; CHECK: bb.0:
+ ; CHECK-NEXT: successors: %bb.1(0x80000000)
+ ; CHECK-NEXT: liveins: $vgpr0, $sgpr4_sgpr5
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[DEF:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF1:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF2:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF3:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF4:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF5:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF6:%[0-9]+]]:vreg_512 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF7:%[0-9]+]]:vreg_64 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF8:%[0-9]+]]:vgpr_32 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF9:%[0-9]+]]:vreg_128 = IMPLICIT_DEF
+ ; CHECK-NEXT: S_NOP 0, implicit-def [[DEF10:%[0-9]+]]
+ ; CHECK-NEXT: S_NOP 0, implicit-def [[DEF11:%[0-9]+]]
+ ; CHECK-NEXT: [[DEF12:%[0-9]+]]:vreg_512 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF13:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF14:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF15:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF16:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: SCHED_BARRIER 0
+ ; CHECK-NEXT: [[DEF17:%[0-9]+]]:av_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF18:%[0-9]+]]:av_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF19:%[0-9]+]]:vreg_64_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF20:%[0-9]+]]:vgpr_32 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF21:%[0-9]+]]:vreg_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.1:
+ ; CHECK-NEXT: successors: %bb.2(0x80000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.2:
+ ; CHECK-NEXT: successors: %bb.3(0x80000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: undef [[V_ADD_U32_e32_:%[0-9]+]].sub0:vreg_128_align2 = V_ADD_U32_e32 [[DEF21]].sub0, [[DEF20]], implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_:%[0-9]+]]:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 [[DEF17]], [[DEF18]], [[V_ADD_U32_e32_]], 4, 4, [[DEF19]].sub0, [[DEF20]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_1:%[0-9]+]]:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 [[DEF17]], [[DEF18]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_]], 4, 4, [[DEF19]].sub0, [[DEF20]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_2:%[0-9]+]]:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 [[DEF17]], [[DEF18]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_1]], 4, 4, [[DEF19]].sub0, [[DEF20]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: undef [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_3:%[0-9]+]].sub0_sub1_sub2_sub3:vreg_512_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 [[DEF17]], [[DEF18]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_2]], 4, 4, [[DEF19]].sub0, [[DEF20]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_3]].sub4:vreg_512_align2 = V_ADD_U32_e32 [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_2]].sub0, [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_2]].sub1, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_3]].sub8_sub9_sub10_sub11:vreg_512_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 [[DEF17]], [[DEF18]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_2]], 4, 4, [[DEF19]].sub0, [[DEF20]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.3:
+ ; CHECK-NEXT: undef [[V_ADD_U32_e32_1:%[0-9]+]].sub0:vreg_128_align2 = V_ADD_U32_e32 [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_3]].sub4, [[DEF20]], implicit $exec
+ ; CHECK-NEXT: SCHED_BARRIER 0
+ ; CHECK-NEXT: [[DEF22:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF23:%[0-9]+]]:vreg_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: KILL [[DEF22]], [[DEF]], [[DEF1]], [[DEF2]], [[DEF3]], [[DEF4]], [[DEF5]], [[DEF6]], [[DEF12]], [[DEF7]], [[DEF8]], [[DEF9]], [[DEF13]], [[DEF14]], [[DEF15]], [[DEF16]], [[DEF23]], [[DEF21]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_1]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_2]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_3]], [[V_ADD_U32_e32_1]]
+ ; CHECK-NEXT: S_NOP 0, implicit [[DEF10]], implicit [[DEF11]]
+ ; CHECK-NEXT: S_ENDPGM 0
+ bb.0:
+ liveins: $vgpr0, $sgpr4_sgpr5
+ ; Add register pressure to force rewrites
+ %1:vreg_1024 = IMPLICIT_DEF
+ %2:vreg_1024 = IMPLICIT_DEF
+ %3:vreg_1024 = IMPLICIT_DEF
+ %4:vreg_1024 = IMPLICIT_DEF
+ %5:vreg_1024 = IMPLICIT_DEF
+ %6:vreg_1024 = IMPLICIT_DEF
+ %7:vreg_1024 = IMPLICIT_DEF
+ %8:vreg_512 = IMPLICIT_DEF
+ %9:vreg_512 = IMPLICIT_DEF
+ %10:vreg_64 = IMPLICIT_DEF
+ %11:vgpr_32 = IMPLICIT_DEF
+ %12:vreg_128 = IMPLICIT_DEF
+ %13:vreg_1024 = IMPLICIT_DEF
+ %14:vreg_1024 = IMPLICIT_DEF
+ %15:vreg_1024 = IMPLICIT_DEF
+ %16:vreg_1024 = IMPLICIT_DEF
+ S_NOP 0, implicit-def %50:av_512
+ S_NOP 0, implicit-def %51:av_512
+ SCHED_BARRIER 0
+ %60:av_128_align2 = IMPLICIT_DEF
+ %61:av_128_align2 = IMPLICIT_DEF
+ %62:vreg_128_align2 = IMPLICIT_DEF
+ %63:vreg_64_align2 = IMPLICIT_DEF
+ %64:vgpr_32 = IMPLICIT_DEF
+ %72:vreg_128_align2 = IMPLICIT_DEF
+
+ bb.1:
+ undef %84.sub0:vreg_128_align2 = V_ADD_U32_e32 %72.sub0, %64, implicit $exec
+
+ bb.2:
+ %85:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %84:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+ %86:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %85:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+ %87:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %86:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+
+ undef %88.sub0_sub1_sub2_sub3:vreg_512_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %87:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+ %88.sub4 = V_ADD_U32_e32 %87.sub0, %87.sub1, implicit $exec
+ %88.sub8_sub9_sub10_sub11:vreg_512_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %87:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+
+ bb.3:
+ undef %94.sub0:vreg_128_align2 = V_ADD_U32_e32 %88.sub4, %64, implicit $exec
+ SCHED_BARRIER 0
+ KILL %1, %2, %3, %4, %5, %6, %7, %8, %9, %10, %11, %12, %13, %14, %15, %16, %62, %72, %85, %86, %87, %88, %94
+ S_NOP 0, implicit %50, implicit %51
+ S_ENDPGM 0
+
+---
+name: test_subreg_safe_rewrite
+tracksRegLiveness: true
+machineFunctionInfo:
+ isEntryFunction: true
+ scratchRSrcReg: '$sgpr96_sgpr97_sgpr98_sgpr99'
+ stackPtrOffsetReg: '$sgpr32'
+ argumentInfo:
+ privateSegmentBuffer: { reg: '$sgpr0_sgpr1_sgpr2_sgpr3' }
+ kernargSegmentPtr: { reg: '$sgpr4_sgpr5' }
+ workGroupIDX: { reg: '$sgpr6' }
+ privateSegmentWaveByteOffset: { reg: '$sgpr7' }
+ workItemIDX: { reg: '$vgpr0' }
+ sgprForEXECCopy: '$sgpr100_sgpr101'
+body: |
+ ; CHECK-LABEL: name: test_subreg_safe_rewrite
+ ; CHECK: bb.0:
+ ; CHECK-NEXT: successors: %bb.1(0x80000000)
+ ; CHECK-NEXT: liveins: $vgpr0, $sgpr4_sgpr5
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[DEF:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF1:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF2:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF3:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF4:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF5:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF6:%[0-9]+]]:vreg_512 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF7:%[0-9]+]]:vreg_64 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF8:%[0-9]+]]:vgpr_32 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF9:%[0-9]+]]:vreg_128 = IMPLICIT_DEF
+ ; CHECK-NEXT: S_NOP 0, implicit-def [[DEF10:%[0-9]+]]
+ ; CHECK-NEXT: S_NOP 0, implicit-def [[DEF11:%[0-9]+]]
+ ; CHECK-NEXT: [[DEF12:%[0-9]+]]:vreg_512 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF13:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF14:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF15:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF16:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: SCHED_BARRIER 0
+ ; CHECK-NEXT: [[DEF17:%[0-9]+]]:av_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF18:%[0-9]+]]:av_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF19:%[0-9]+]]:vreg_64_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF20:%[0-9]+]]:vgpr_32 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF21:%[0-9]+]]:vreg_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.1:
+ ; CHECK-NEXT: successors: %bb.2(0x80000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.2:
+ ; CHECK-NEXT: successors: %bb.3(0x80000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: undef [[V_ADD_U32_e32_:%[0-9]+]].sub0:vreg_128_align2 = V_ADD_U32_e32 [[DEF21]].sub0, [[DEF20]], implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_:%[0-9]+]]:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 [[DEF17]], [[DEF18]], [[V_ADD_U32_e32_]], 4, 4, [[DEF19]].sub0, [[DEF20]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_1:%[0-9]+]]:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 [[DEF17]], [[DEF18]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_]], 4, 4, [[DEF19]].sub0, [[DEF20]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_2:%[0-9]+]]:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 [[DEF17]], [[DEF18]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_1]], 4, 4, [[DEF19]].sub0, [[DEF20]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_3:%[0-9]+]]:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 [[DEF17]], [[DEF18]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_2]], 4, 4, [[DEF19]].sub0, [[DEF20]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.3:
+ ; CHECK-NEXT: undef [[V_ADD_U32_e32_1:%[0-9]+]].sub0:vreg_128_align2 = V_ADD_U32_e32 [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_3]].sub0, [[DEF20]], implicit $exec
+ ; CHECK-NEXT: SCHED_BARRIER 0
+ ; CHECK-NEXT: [[DEF22:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF23:%[0-9]+]]:vreg_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: KILL [[DEF22]], [[DEF]], [[DEF1]], [[DEF2]], [[DEF3]], [[DEF4]], [[DEF5]], [[DEF6]], [[DEF12]], [[DEF7]], [[DEF8]], [[DEF9]], [[DEF13]], [[DEF14]], [[DEF15]], [[DEF16]], [[DEF23]], [[DEF21]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_1]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_2]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64_3]], [[V_ADD_U32_e32_1]]
+ ; CHECK-NEXT: S_NOP 0, implicit [[DEF10]], implicit [[DEF11]]
+ ; CHECK-NEXT: S_ENDPGM 0
+ bb.0:
+ liveins: $vgpr0, $sgpr4_sgpr5
+ ; Add register pressure to force rewrites
+ %1:vreg_1024 = IMPLICIT_DEF
+ %2:vreg_1024 = IMPLICIT_DEF
+ %3:vreg_1024 = IMPLICIT_DEF
+ %4:vreg_1024 = IMPLICIT_DEF
+ %5:vreg_1024 = IMPLICIT_DEF
+ %6:vreg_1024 = IMPLICIT_DEF
+ %7:vreg_1024 = IMPLICIT_DEF
+ %8:vreg_512 = IMPLICIT_DEF
+ %9:vreg_512 = IMPLICIT_DEF
+ %10:vreg_64 = IMPLICIT_DEF
+ %11:vgpr_32 = IMPLICIT_DEF
+ %12:vreg_128 = IMPLICIT_DEF
+ %13:vreg_1024 = IMPLICIT_DEF
+ %14:vreg_1024 = IMPLICIT_DEF
+ %15:vreg_1024 = IMPLICIT_DEF
+ %16:vreg_1024 = IMPLICIT_DEF
+ S_NOP 0, implicit-def %50:av_512
+ S_NOP 0, implicit-def %51:av_512
+ SCHED_BARRIER 0
+ %60:av_128_align2 = IMPLICIT_DEF
+ %61:av_128_align2 = IMPLICIT_DEF
+ %62:vreg_128_align2 = IMPLICIT_DEF
+ %63:vreg_64_align2 = IMPLICIT_DEF
+ %64:vgpr_32 = IMPLICIT_DEF
+ %72:vreg_128_align2 = IMPLICIT_DEF
+
+ bb.1:
+ undef %84.sub0:vreg_128_align2 = V_ADD_U32_e32 %72.sub0, %64, implicit $exec
+
+ bb.2:
+ %85:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %84:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+ %86:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %85:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+ %87:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %86:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+ %88:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %87:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+
+ bb.3:
+ undef %94.sub0:vreg_128_align2 = V_ADD_U32_e32 %88.sub0, %64, implicit $exec
+ SCHED_BARRIER 0
+ KILL %1, %2, %3, %4, %5, %6, %7, %8, %9, %10, %11, %12, %13, %14, %15, %16, %62, %72, %85, %86, %87, %88, %94
+ S_NOP 0, implicit %50, implicit %51
+ S_ENDPGM 0
+
+---
+name: test_mfma_partial_redef_merge
+tracksRegLiveness: true
+machineFunctionInfo:
+ isEntryFunction: true
+ scratchRSrcReg: '$sgpr96_sgpr97_sgpr98_sgpr99'
+ stackPtrOffsetReg: '$sgpr32'
+ argumentInfo:
+ privateSegmentBuffer: { reg: '$sgpr0_sgpr1_sgpr2_sgpr3' }
+ kernargSegmentPtr: { reg: '$sgpr4_sgpr5' }
+ workGroupIDX: { reg: '$sgpr6' }
+ privateSegmentWaveByteOffset: { reg: '$sgpr7' }
+ workItemIDX: { reg: '$vgpr0' }
+ sgprForEXECCopy: '$sgpr100_sgpr101'
+body: |
+ ; CHECK-LABEL: name: test_mfma_partial_redef_merge
+ ; CHECK: bb.0:
+ ; CHECK-NEXT: successors: %bb.1(0x80000000)
+ ; CHECK-NEXT: liveins: $vgpr0, $sgpr4_sgpr5
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[DEF:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF1:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF2:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF3:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF4:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF5:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF6:%[0-9]+]]:vreg_512 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF7:%[0-9]+]]:vreg_64 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF8:%[0-9]+]]:vgpr_32 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF9:%[0-9]+]]:vreg_128 = IMPLICIT_DEF
+ ; CHECK-NEXT: S_NOP 0, implicit-def [[DEF10:%[0-9]+]]
+ ; CHECK-NEXT: S_NOP 0, implicit-def [[DEF11:%[0-9]+]]
+ ; CHECK-NEXT: [[DEF12:%[0-9]+]]:vreg_512 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF13:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF14:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF15:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF16:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: SCHED_BARRIER 0
+ ; CHECK-NEXT: dead [[DEF17:%[0-9]+]]:av_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: dead [[DEF18:%[0-9]+]]:av_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF19:%[0-9]+]]:vreg_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: dead [[DEF20:%[0-9]+]]:vreg_64_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF21:%[0-9]+]]:vgpr_32 = IMPLICIT_DEF
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.1:
+ ; CHECK-NEXT: successors: %bb.2(0x80000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[V_MFMA_F32_16X16X4F32_vgprcd_e64_:%[0-9]+]]:vreg_128_align2 = contract nofpexcept V_MFMA_F32_16X16X4F32_vgprcd_e64 [[DEF21]], [[DEF21]], [[DEF19]], 0, 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_F32_16X16X4F32_vgprcd_e64_1:%[0-9]+]]:vreg_128_align2 = contract nofpexcept V_MFMA_F32_16X16X4F32_vgprcd_e64 [[DEF21]], [[DEF21]], [[DEF19]], 0, 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_F32_16X16X4F32_vgprcd_e64_]].sub2:vreg_128_align2 = COPY [[V_MFMA_F32_16X16X4F32_vgprcd_e64_1]].sub3
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.2:
+ ; CHECK-NEXT: [[V_MUL_F32_e32_:%[0-9]+]]:vgpr_32 = contract nofpexcept V_MUL_F32_e32 0, [[V_MFMA_F32_16X16X4F32_vgprcd_e64_]].sub0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MUL_F32_e32_1:%[0-9]+]]:vgpr_32 = contract nofpexcept V_MUL_F32_e32 [[V_MFMA_F32_16X16X4F32_vgprcd_e64_]].sub1, [[DEF21]], implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_PK_MUL_F32_:%[0-9]+]]:vreg_64_align2 = contract nofpexcept V_PK_MUL_F32 8, [[V_MFMA_F32_16X16X4F32_vgprcd_e64_]].sub2_sub3, 0, 0, 0, 0, 0, 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: SCHED_BARRIER 0
+ ; CHECK-NEXT: [[DEF22:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: KILL [[DEF22]], [[DEF]], [[DEF1]], [[DEF2]], [[DEF3]], [[DEF4]], [[DEF5]], [[DEF6]], [[DEF12]], [[DEF7]], [[DEF8]], [[DEF9]], [[DEF13]], [[DEF14]], [[DEF15]], [[DEF16]], [[DEF19]], [[V_MFMA_F32_16X16X4F32_vgprcd_e64_]], [[V_MFMA_F32_16X16X4F32_vgprcd_e64_1]], [[V_MUL_F32_e32_]], [[V_MUL_F32_e32_1]], [[V_PK_MUL_F32_]]
+ ; CHECK-NEXT: S_NOP 0, implicit [[DEF10]], implicit [[DEF11]]
+ ; CHECK-NEXT: S_ENDPGM 0
+ bb.0:
+ liveins: $vgpr0, $sgpr4_sgpr5
+ ; Add register pressure to force rewrites
+ %1:vreg_1024 = IMPLICIT_DEF
+ %2:vreg_1024 = IMPLICIT_DEF
+ %3:vreg_1024 = IMPLICIT_DEF
+ %4:vreg_1024 = IMPLICIT_DEF
+ %5:vreg_1024 = IMPLICIT_DEF
+ %6:vreg_1024 = IMPLICIT_DEF
+ %7:vreg_1024 = IMPLICIT_DEF
+ %8:vreg_512 = IMPLICIT_DEF
+ %9:vreg_512 = IMPLICIT_DEF
+ %10:vreg_64 = IMPLICIT_DEF
+ %11:vgpr_32 = IMPLICIT_DEF
+ %12:vreg_128 = IMPLICIT_DEF
+ %13:vreg_1024 = IMPLICIT_DEF
+ %14:vreg_1024 = IMPLICIT_DEF
+ %15:vreg_1024 = IMPLICIT_DEF
+ %16:vreg_1024 = IMPLICIT_DEF
+ S_NOP 0, implicit-def %50:av_512
+ S_NOP 0, implicit-def %51:av_512
+ SCHED_BARRIER 0
+ %60:av_128_align2 = IMPLICIT_DEF
+ %61:av_128_align2 = IMPLICIT_DEF
+ %62:vreg_128_align2 = IMPLICIT_DEF
+ %63:vreg_64_align2 = IMPLICIT_DEF
+ %64:vgpr_32 = IMPLICIT_DEF
+
+ bb.1:
+ %100:vreg_128_align2 = contract nofpexcept V_MFMA_F32_16X16X4F32_vgprcd_e64 %64:vgpr_32, %64:vgpr_32, %62:vreg_128_align2, 0, 0, 0, implicit $mode, implicit $exec
+ %101:vreg_128_align2 = contract nofpexcept V_MFMA_F32_16X16X4F32_vgprcd_e64 %64:vgpr_32, %64:vgpr_32, %62:vreg_128_align2, 0, 0, 0, implicit $mode, implicit $exec
+ %100.sub2:vreg_128_align2 = COPY %101.sub3:vreg_128_align2
+
+ bb.2:
+ %110:vgpr_32 = contract nofpexcept V_MUL_F32_e32 0, %100.sub0:vreg_128_align2, implicit $mode, implicit $exec
+ %111:vgpr_32 = contract nofpexcept V_MUL_F32_e32 %100.sub1:vreg_128_align2, %64:vgpr_32, implicit $mode, implicit $exec
+ %112:vreg_64_align2 = contract nofpexcept V_PK_MUL_F32 8, %100.sub2_sub3:vreg_128_align2, 0, 0, 0, 0, 0, 0, 0, implicit $mode, implicit $exec
+
+ SCHED_BARRIER 0
+ KILL %1, %2, %3, %4, %5, %6, %7, %8, %9, %10, %11, %12, %13, %14, %15, %16, %62, %100, %101, %110, %111, %112
+ S_NOP 0, implicit %50, implicit %51
+ S_ENDPGM 0
+
+---
+name: test_partial_init_safe_rewrite
+tracksRegLiveness: true
+machineFunctionInfo:
+ isEntryFunction: true
+ scratchRSrcReg: '$sgpr96_sgpr97_sgpr98_sgpr99'
+ stackPtrOffsetReg: '$sgpr32'
+ argumentInfo:
+ privateSegmentBuffer: { reg: '$sgpr0_sgpr1_sgpr2_sgpr3' }
+ kernargSegmentPtr: { reg: '$sgpr4_sgpr5' }
+ workGroupIDX: { reg: '$sgpr6' }
+ privateSegmentWaveByteOffset: { reg: '$sgpr7' }
+ workItemIDX: { reg: '$vgpr0' }
+ sgprForEXECCopy: '$sgpr100_sgpr101'
+body: |
+ ; CHECK-LABEL: name: test_partial_init_safe_rewrite
+ ; CHECK: bb.0:
+ ; CHECK-NEXT: successors: %bb.3(0x40000000), %bb.1(0x40000000)
+ ; CHECK-NEXT: liveins: $vgpr0, $sgpr4_sgpr5
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[DEF:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF1:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF2:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF3:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF4:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF5:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: S_NOP 0, implicit-def [[DEF6:%[0-9]+]]
+ ; CHECK-NEXT: S_NOP 0, implicit-def [[DEF7:%[0-9]+]]
+ ; CHECK-NEXT: SCHED_BARRIER 0
+ ; CHECK-NEXT: [[DEF8:%[0-9]+]]:vgpr_32 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DS_READ_B128_gfx9_:%[0-9]+]]:vreg_128_align2 = DS_READ_B128_gfx9 [[DEF8]], 0, 0, implicit $exec
+ ; CHECK-NEXT: [[DEF9:%[0-9]+]]:av_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF10:%[0-9]+]]:av_128_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: [[DEF11:%[0-9]+]]:vreg_64_align2 = IMPLICIT_DEF
+ ; CHECK-NEXT: $scc = IMPLICIT_DEF
+ ; CHECK-NEXT: S_CBRANCH_SCC1 %bb.3, implicit killed $scc
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.1:
+ ; CHECK-NEXT: successors: %bb.2(0x80000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: undef [[V_ADD_U32_e32_:%[0-9]+]].sub0:vreg_128_align2 = V_ADD_U32_e32 [[DEF8]], [[DEF8]], implicit $exec
+ ; CHECK-NEXT: [[V_ADD_U32_e32_]].sub1:vreg_128_align2 = V_ADD_U32_e32 [[DEF8]], [[DEF8]], implicit $exec
+ ; CHECK-NEXT: [[V_ADD_U32_e32_]].sub2:vreg_128_align2 = V_ADD_U32_e32 [[DEF8]], [[DEF8]], implicit $exec
+ ; CHECK-NEXT: [[V_ADD_U32_e32_]].sub3:vreg_128_align2 = V_ADD_U32_e32 [[DEF8]], [[DEF8]], implicit $exec
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:areg_128_align2 = COPY [[V_ADD_U32_e32_]]
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64_:%[0-9]+]]:areg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64 [[DEF9]], [[DEF10]], [[COPY]], 4, 4, [[DEF11]].sub0, [[DEF8]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64_1:%[0-9]+]]:areg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64 [[DEF9]], [[DEF10]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64_]], 4, 4, [[DEF11]].sub0, [[DEF8]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64_2:%[0-9]+]]:areg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64 [[DEF9]], [[DEF10]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64_1]], 4, 4, [[DEF11]].sub0, [[DEF8]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64_3:%[0-9]+]]:areg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64 [[DEF9]], [[DEF10]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64_2]], 4, 4, [[DEF11]].sub0, [[DEF8]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64_4:%[0-9]+]]:areg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64 [[DEF9]], [[DEF10]], [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64_3]], 4, 4, [[DEF11]].sub0, [[DEF8]], 0, 0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.2:
+ ; CHECK-NEXT: successors: %bb.3(0x80000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vreg_128_align2 = COPY [[V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_e64_4]]
+ ; CHECK-NEXT: dead [[V_MUL_F32_e32_:%[0-9]+]]:vgpr_32 = contract nofpexcept V_MUL_F32_e32 0, [[COPY1]].sub0, implicit $mode, implicit $exec
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.3:
+ ; CHECK-NEXT: DS_WRITE_B128_gfx9 [[DEF8]], [[DS_READ_B128_gfx9_]], 0, 0, implicit $exec
+ ; CHECK-NEXT: SCHED_BARRIER 0
+ ; CHECK-NEXT: [[DEF12:%[0-9]+]]:vreg_1024 = IMPLICIT_DEF
+ ; CHECK-NEXT: KILL [[DEF12]], [[DEF]], [[DEF1]], [[DEF2]], [[DEF3]], [[DEF4]], [[DEF5]], [[DS_READ_B128_gfx9_]]
+ ; CHECK-NEXT: S_NOP 0, implicit [[DEF6]], implicit [[DEF7]]
+ ; CHECK-NEXT: S_ENDPGM 0
+ bb.0:
+ liveins: $vgpr0, $sgpr4_sgpr5
+ ; Add register pressure
+ %1:vreg_1024 = IMPLICIT_DEF
+ %2:vreg_1024 = IMPLICIT_DEF
+ %3:vreg_1024 = IMPLICIT_DEF
+ %4:vreg_1024 = IMPLICIT_DEF
+ %5:vreg_1024 = IMPLICIT_DEF
+ %6:vreg_1024 = IMPLICIT_DEF
+ %7:vreg_1024 = IMPLICIT_DEF
+ S_NOP 0, implicit-def %50:av_512
+ S_NOP 0, implicit-def %51:av_512
+ SCHED_BARRIER 0
+ %60:av_128_align2 = IMPLICIT_DEF
+ %61:av_128_align2 = IMPLICIT_DEF
+ %63:vreg_64_align2 = IMPLICIT_DEF
+ %64:vgpr_32 = IMPLICIT_DEF
+ ; Real live value from memory
+ %84:vreg_128_align2 = DS_READ_B128_gfx9 %64:vgpr_32, 0, 0, implicit $exec
+ $scc = IMPLICIT_DEF
+ S_CBRANCH_SCC1 %bb.3, implicit killed $scc
+
+ bb.1:
+ undef %100.sub0:vreg_128_align2 = V_ADD_U32_e32 %64:vgpr_32, %64:vgpr_32, implicit $exec
+ %100.sub1:vreg_128_align2 = V_ADD_U32_e32 %64:vgpr_32, %64:vgpr_32, implicit $exec
+ %100.sub2:vreg_128_align2 = V_ADD_U32_e32 %64:vgpr_32, %64:vgpr_32, implicit $exec
+ %100.sub3:vreg_128_align2 = V_ADD_U32_e32 %64:vgpr_32, %64:vgpr_32, implicit $exec
+ %100:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %100:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+ %100:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %100:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+ %100:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %100:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+ %100:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %100:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+ %101:vreg_128_align2 = contract nofpexcept V_MFMA_SCALE_F32_16X16X128_F8F6F4_f4_f4_vgprcd_e64 %60:av_128_align2, %61:av_128_align2, %100:vreg_128_align2, 4, 4, %63.sub0:vreg_64_align2, %64:vgpr_32, 0, 0, implicit $mode, implicit $exec
+
+ bb.2:
+ ; Non-MAI use of %101 - AGPR to VGPR copy should be inserted
+ %110:vgpr_32 = contract nofpexcept V_MUL_F32_e32 0, %101.sub0:vreg_128_align2, implicit $mode, implicit $exec
+
+ bb.3:
+ DS_WRITE_B128_gfx9 %64:vgpr_32, %84:vreg_128_align2, 0, 0, implicit $exec
+ SCHED_BARRIER 0
+ KILL %1, %2, %3, %4, %5, %6, %7, %84
+ S_NOP 0, implicit %50, implicit %51
+ S_ENDPGM 0
+...