[llvm] [AMDGPU] ISel & PEI for whole wave functions (PR #131334)
Diana Picus via llvm-commits
llvm-commits at lists.llvm.org
Thu Mar 27 03:21:25 PDT 2025
https://github.com/rovka updated https://github.com/llvm/llvm-project/pull/131334
>From a525bba95403e64d83e550c44e1e05286ea1bc9c Mon Sep 17 00:00:00 2001
From: Diana Picus <diana-magda.picus at amd.com>
Date: Fri, 24 Jan 2025 10:18:23 +0100
Subject: [PATCH 01/11] Add subtarget feature
---
llvm/lib/Target/AMDGPU/AMDGPU.td | 6 ++++++
llvm/lib/Target/AMDGPU/GCNSubtarget.h | 6 ++++++
2 files changed, 12 insertions(+)
diff --git a/llvm/lib/Target/AMDGPU/AMDGPU.td b/llvm/lib/Target/AMDGPU/AMDGPU.td
index 1c8dc09d3060b..cc3a78beefd93 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPU.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPU.td
@@ -1251,6 +1251,12 @@ def FeatureXF32Insts : SubtargetFeature<"xf32-insts",
"v_mfma_f32_16x16x8_xf32 and v_mfma_f32_32x32x4_xf32"
>;
+def FeatureWholeWaveFunction : SubtargetFeature<"whole-wave-function",
+ "IsWholeWaveFunction",
+ "true",
+ "Current function is a whole wave function (runs with all lanes enabled)"
+ >;
+
// Dummy feature used to disable assembler instructions.
def FeatureDisable : SubtargetFeature<"",
"FeatureDisable","true",
diff --git a/llvm/lib/Target/AMDGPU/GCNSubtarget.h b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
index f7f03fe5911bd..1263f374d3db0 100644
--- a/llvm/lib/Target/AMDGPU/GCNSubtarget.h
+++ b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
@@ -257,6 +257,8 @@ class GCNSubtarget final : public AMDGPUGenSubtargetInfo,
bool RequiresCOV6 = false;
+ bool IsWholeWaveFunction = false;
+
// Dummy feature to use for assembler in tablegen.
bool FeatureDisable = false;
@@ -1448,6 +1450,10 @@ class GCNSubtarget final : public AMDGPUGenSubtargetInfo,
// of sign-extending.
bool hasGetPCZeroExtension() const { return GFX12Insts; }
+ /// \returns true if the current function is a whole wave function (i.e. it
+ /// runs with all the lanes enabled).
+ bool isWholeWaveFunction() const { return IsWholeWaveFunction; }
+
/// \returns SGPR allocation granularity supported by the subtarget.
unsigned getSGPRAllocGranule() const {
return AMDGPU::IsaInfo::getSGPRAllocGranule(this);
>From 1ceab6a6e910c57250f4f41bb45cf9a89faa66e0 Mon Sep 17 00:00:00 2001
From: Diana Picus <diana-magda.picus at amd.com>
Date: Mon, 27 Jan 2025 13:17:19 +0100
Subject: [PATCH 02/11] [AMDGPU] ISel & PEI for whole wave functions
Whole wave functions are functions that will run with a full EXEC mask.
They will not be invoked directly, but instead will be launched by way
of a new intrinsic, `llvm.amdgcn.call.whole.wave` (to be added in
a future patch). These functions are meant as an alternative to the
`llvm.amdgcn.init.whole.wave` or `llvm.amdgcn.strict.wwm` intrinsics.
Whole wave functions will set EXEC to -1 in the prologue and restore the
original value of EXEC in the epilogue. They must have a special first
argument, `i1 %active`, that is going to be mapped to EXEC. They may
have either the default calling convention or amdgpu_gfx. The inactive
lanes need to be preserved for all registers used; the active lanes only
need to be preserved for the CSRs.
At the IR level, arguments to a whole wave function (other than
`%active`) contain poison in their inactive lanes. Likewise, the return
value for the inactive lanes is poison.
This patch contains the following work:
* 2 new pseudos, SI_SETUP_WHOLE_WAVE_FUNC and SI_WHOLE_WAVE_FUNC_RETURN
used for managing the EXEC mask. SI_SETUP_WHOLE_WAVE_FUNC will return
a SReg_1 representing `%active`, which needs to be passed into
SI_WHOLE_WAVE_FUNC_RETURN.
* SelectionDAG support for generating these 2 new pseudos and the
special handling of %active. Since the return may be in a different
basic block, it's difficult to add the virtual reg for %active to
SI_WHOLE_WAVE_FUNC_RETURN, so we initially generate an IMPLICIT_DEF
which is later replaced via a custom inserter.
* Expansion of the 2 pseudos during prolog/epilog insertion. PEI also
  marks any used VGPRs as WWM registers, which are then spilled and
  restored with the usual logic.
I'm still working on the GlobalISel support and on adding some docs in
AMDGPUUsage.rst.
Future patches will include the `llvm.amdgcn.call.whole.wave` intrinsic,
a codegen prepare patch that looks for the callees of that intrinsic and
marks them as whole wave functions, and probably a lot of optimization
work.
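For illustration, a whole wave function at the IR level looks roughly like the
following sketch (modeled on the `basic_test` case in the new
isel-whole-wave-functions.ll test; the function and value names here are
illustrative only):

```llvm
; Sketch: the first argument, i1 %active, is mapped to the original EXEC
; mask. All other arguments, and the return value, contain poison in the
; inactive lanes.
define amdgpu_gfx i32 @wwf_example(i1 %active, i32 %a) {
  ; Use %active to pick a well-defined value for the inactive lanes
  ; before performing a whole-wave operation such as DPP.
  %x = select i1 %active, i32 %a, i32 0
  %ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %x, i32 1, i32 1, i32 1, i1 false)
  ret i32 %ret
}
```

During ISel this lowers to SI_SETUP_WHOLE_WAVE_FUNC (producing `%active`)
at entry and SI_WHOLE_WAVE_FUNC_RETURN (consuming the saved EXEC) at the
return, which PEI then expands as described above.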
---
llvm/lib/Target/AMDGPU/AMDGPU.td | 2 +
llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp | 2 +
llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h | 6 +
llvm/lib/Target/AMDGPU/AMDGPUInstrInfo.td | 11 +
llvm/lib/Target/AMDGPU/SIFrameLowering.cpp | 81 +++-
llvm/lib/Target/AMDGPU/SIISelLowering.cpp | 30 +-
llvm/lib/Target/AMDGPU/SIInsertWaitcnts.cpp | 1 +
llvm/lib/Target/AMDGPU/SIInstrInfo.cpp | 11 +
llvm/lib/Target/AMDGPU/SIInstrInfo.h | 2 +
llvm/lib/Target/AMDGPU/SIInstructions.td | 29 ++
.../AMDGPU/isel-whole-wave-functions.ll | 116 +++++
.../AMDGPU/whole-wave-functions-pei.mir | 439 ++++++++++++++++++
.../CodeGen/AMDGPU/whole-wave-functions.ll | 285 ++++++++++++
13 files changed, 1002 insertions(+), 13 deletions(-)
create mode 100644 llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
create mode 100644 llvm/test/CodeGen/AMDGPU/whole-wave-functions-pei.mir
create mode 100644 llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
diff --git a/llvm/lib/Target/AMDGPU/AMDGPU.td b/llvm/lib/Target/AMDGPU/AMDGPU.td
index cc3a78beefd93..30e52bc0945c6 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPU.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPU.td
@@ -2538,6 +2538,8 @@ def HasXF32Insts : Predicate<"Subtarget->hasXF32Insts()">,
def HasAshrPkInsts : Predicate<"Subtarget->hasAshrPkInsts()">,
AssemblerPredicate<(all_of FeatureAshrPkInsts)>;
+def IsWholeWaveFunction : Predicate<"Subtarget->isWholeWaveFunction()">;
+
// Include AMDGPU TD files
include "SISchedule.td"
include "GCNProcessors.td"
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
index ade81f17ecca5..457996e9b941e 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
@@ -5630,6 +5630,8 @@ const char* AMDGPUTargetLowering::getTargetNodeName(unsigned Opcode) const {
NODE_NAME_CASE(BUFFER_ATOMIC_FMIN)
NODE_NAME_CASE(BUFFER_ATOMIC_FMAX)
NODE_NAME_CASE(BUFFER_ATOMIC_COND_SUB_U32)
+ NODE_NAME_CASE(WHOLE_WAVE_SETUP)
+ NODE_NAME_CASE(WHOLE_WAVE_RETURN)
}
return nullptr;
}
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h
index c74dc7942f52c..0b1d83563e872 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h
+++ b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h
@@ -605,6 +605,12 @@ enum NodeType : unsigned {
BUFFER_ATOMIC_FMAX,
BUFFER_ATOMIC_COND_SUB_U32,
LAST_MEMORY_OPCODE = BUFFER_ATOMIC_COND_SUB_U32,
+
+ // Set up a whole wave function.
+ WHOLE_WAVE_SETUP,
+
+ // Return from a whole wave function.
+ WHOLE_WAVE_RETURN,
};
} // End namespace AMDGPUISD
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUInstrInfo.td b/llvm/lib/Target/AMDGPU/AMDGPUInstrInfo.td
index bec294a945d2f..22a83b7c1c883 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUInstrInfo.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPUInstrInfo.td
@@ -342,6 +342,17 @@ def AMDGPUfdot2_impl : SDNode<"AMDGPUISD::FDOT2",
def AMDGPUperm_impl : SDNode<"AMDGPUISD::PERM", AMDGPUDTIntTernaryOp, []>;
+// Marks the entry into a whole wave function.
+def AMDGPUwhole_wave_setup : SDNode<
+ "AMDGPUISD::WHOLE_WAVE_SETUP", SDTypeProfile<1, 0, [SDTCisInt<0>]>,
+ [SDNPHasChain, SDNPSideEffect]>;
+
+// Marks the return from a whole wave function.
+def AMDGPUwhole_wave_return : SDNode<
+ "AMDGPUISD::WHOLE_WAVE_RETURN", SDTNone,
+ [SDNPHasChain, SDNPOptInGlue, SDNPVariadic]
+>;
+
// SI+ export
def AMDGPUExportOp : SDTypeProfile<0, 8, [
SDTCisInt<0>, // i8 tgt
diff --git a/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp b/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
index 97736e2410c18..671db9595d7a2 100644
--- a/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
@@ -900,8 +900,18 @@ static Register buildScratchExecCopy(LiveRegUnits &LiveUnits,
initLiveUnits(LiveUnits, TRI, FuncInfo, MF, MBB, MBBI, IsProlog);
- ScratchExecCopy = findScratchNonCalleeSaveRegister(
- MRI, LiveUnits, *TRI.getWaveMaskRegClass());
+ if (ST.isWholeWaveFunction()) {
+ // Whole wave functions already have a copy of the original EXEC mask that
+ // we can use.
+ assert(IsProlog && "Epilog should look at return, not setup");
+ ScratchExecCopy =
+ TII->getWholeWaveFunctionSetup(MBB)->getOperand(0).getReg();
+ assert(ScratchExecCopy && "Couldn't find copy of EXEC");
+ } else {
+ ScratchExecCopy = findScratchNonCalleeSaveRegister(
+ MRI, LiveUnits, *TRI.getWaveMaskRegClass());
+ }
+
if (!ScratchExecCopy)
report_fatal_error("failed to find free scratch register");
@@ -950,10 +960,15 @@ void SIFrameLowering::emitCSRSpillStores(
};
StoreWWMRegisters(WWMScratchRegs);
+
+ auto EnableAllLanes = [&]() {
+ unsigned MovOpc = ST.isWave32() ? AMDGPU::S_MOV_B32 : AMDGPU::S_MOV_B64;
+ BuildMI(MBB, MBBI, DL, TII->get(MovOpc), TRI.getExec()).addImm(-1);
+ };
+
if (!WWMCalleeSavedRegs.empty()) {
if (ScratchExecCopy) {
- unsigned MovOpc = ST.isWave32() ? AMDGPU::S_MOV_B32 : AMDGPU::S_MOV_B64;
- BuildMI(MBB, MBBI, DL, TII->get(MovOpc), TRI.getExec()).addImm(-1);
+ EnableAllLanes();
} else {
ScratchExecCopy = buildScratchExecCopy(LiveUnits, MF, MBB, MBBI, DL,
/*IsProlog*/ true,
@@ -962,7 +977,15 @@ void SIFrameLowering::emitCSRSpillStores(
}
StoreWWMRegisters(WWMCalleeSavedRegs);
- if (ScratchExecCopy) {
+ if (ST.isWholeWaveFunction()) {
+ // SI_SETUP_WHOLE_WAVE_FUNCTION has outlived its purpose, so we can remove
+ // it now. If we have already saved some WWM CSR registers, then the EXEC is
+ // already -1 and we don't need to do anything else. Otherwise, set EXEC to
+ // -1 here.
+ if (WWMCalleeSavedRegs.empty())
+ EnableAllLanes();
+ TII->getWholeWaveFunctionSetup(MBB)->eraseFromParent();
+ } else if (ScratchExecCopy) {
// FIXME: Split block and make terminator.
unsigned ExecMov = ST.isWave32() ? AMDGPU::S_MOV_B32 : AMDGPU::S_MOV_B64;
BuildMI(MBB, MBBI, DL, TII->get(ExecMov), TRI.getExec())
@@ -1037,11 +1060,6 @@ void SIFrameLowering::emitCSRSpillRestores(
Register ScratchExecCopy;
SmallVector<std::pair<Register, int>, 2> WWMCalleeSavedRegs, WWMScratchRegs;
FuncInfo->splitWWMSpillRegisters(MF, WWMCalleeSavedRegs, WWMScratchRegs);
- if (!WWMScratchRegs.empty())
- ScratchExecCopy =
- buildScratchExecCopy(LiveUnits, MF, MBB, MBBI, DL,
- /*IsProlog*/ false, /*EnableInactiveLanes*/ true);
-
auto RestoreWWMRegisters =
[&](SmallVectorImpl<std::pair<Register, int>> &WWMRegs) {
for (const auto &Reg : WWMRegs) {
@@ -1052,6 +1070,36 @@ void SIFrameLowering::emitCSRSpillRestores(
}
};
+ if (ST.isWholeWaveFunction()) {
+ // For whole wave functions, the EXEC is already -1 at this point.
+ // Therefore, we can restore the CSR WWM registers right away.
+ RestoreWWMRegisters(WWMCalleeSavedRegs);
+
+ // The original EXEC is the first operand of the return instruction.
+ const MachineInstr &Return = MBB.instr_back();
+ assert(Return.getOpcode() == AMDGPU::SI_WHOLE_WAVE_FUNC_RETURN &&
+ "Unexpected return inst");
+ Register OrigExec = Return.getOperand(0).getReg();
+
+ if (!WWMScratchRegs.empty()) {
+ unsigned XorOpc = ST.isWave32() ? AMDGPU::S_XOR_B32 : AMDGPU::S_XOR_B64;
+ BuildMI(MBB, MBBI, DL, TII->get(XorOpc), TRI.getExec())
+ .addReg(OrigExec)
+ .addImm(-1);
+ RestoreWWMRegisters(WWMScratchRegs);
+ }
+
+ // Restore original EXEC.
+ unsigned MovOpc = ST.isWave32() ? AMDGPU::S_MOV_B32 : AMDGPU::S_MOV_B64;
+ BuildMI(MBB, MBBI, DL, TII->get(MovOpc), TRI.getExec()).addReg(OrigExec);
+ return;
+ }
+
+ if (!WWMScratchRegs.empty())
+ ScratchExecCopy =
+ buildScratchExecCopy(LiveUnits, MF, MBB, MBBI, DL,
+ /*IsProlog*/ false, /*EnableInactiveLanes*/ true);
+
RestoreWWMRegisters(WWMScratchRegs);
if (!WWMCalleeSavedRegs.empty()) {
if (ScratchExecCopy) {
@@ -1588,6 +1636,7 @@ void SIFrameLowering::determineCalleeSaves(MachineFunction &MF,
NeedExecCopyReservedReg = true;
else if (MI.getOpcode() == AMDGPU::SI_RETURN ||
MI.getOpcode() == AMDGPU::SI_RETURN_TO_EPILOG ||
+ MI.getOpcode() == AMDGPU::SI_WHOLE_WAVE_FUNC_RETURN ||
(MFI->isChainFunction() &&
TII->isChainCallOpcode(MI.getOpcode()))) {
// We expect all return to be the same size.
@@ -1616,6 +1665,18 @@ void SIFrameLowering::determineCalleeSaves(MachineFunction &MF,
if (MFI->isEntryFunction())
return;
+ if (ST.isWholeWaveFunction()) {
+ // In practice, all the VGPRs are WWM registers, and we will need to save at
+ // least their inactive lanes. Add them to WWMReservedRegs.
+ assert(!NeedExecCopyReservedReg && "Whole wave functions can use the reg mapped for their i1 argument");
+ for (MCRegister Reg : AMDGPU::VGPR_32RegClass)
+ if (MF.getRegInfo().isPhysRegModified(Reg)) {
+ MFI->reserveWWMRegister(Reg);
+ MF.begin()->addLiveIn(Reg);
+ }
+ MF.begin()->sortUniqueLiveIns();
+ }
+
// Remove any VGPRs used in the return value because these do not need to be saved.
// This prevents CSR restore from clobbering return VGPRs.
if (ReturnMI) {
diff --git a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
index 9743320601ed4..a6f2c951d8a50 100644
--- a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
@@ -2877,6 +2877,8 @@ SDValue SITargetLowering::LowerFormalArguments(
!Info->hasWorkGroupIDZ());
}
+ bool IsWholeWaveFunc = getSubtarget()->isWholeWaveFunction();
+
if (CallConv == CallingConv::AMDGPU_PS) {
processPSInputArgs(Splits, CallConv, Ins, Skipped, FType, Info);
@@ -2917,7 +2919,8 @@ SDValue SITargetLowering::LowerFormalArguments(
} else if (IsKernel) {
assert(Info->hasWorkGroupIDX() && Info->hasWorkItemIDX());
} else {
- Splits.append(Ins.begin(), Ins.end());
+ Splits.append(IsWholeWaveFunc ? std::next(Ins.begin()) : Ins.begin(),
+ Ins.end());
}
if (IsKernel)
@@ -2948,6 +2951,13 @@ SDValue SITargetLowering::LowerFormalArguments(
SmallVector<SDValue, 16> Chains;
+ if (IsWholeWaveFunc) {
+ SDValue Setup = DAG.getNode(AMDGPUISD::WHOLE_WAVE_SETUP, DL,
+ {MVT::i1, MVT::Other}, Chain);
+ InVals.push_back(Setup.getValue(0));
+ Chains.push_back(Setup.getValue(1));
+ }
+
// FIXME: This is the minimum kernel argument alignment. We should improve
// this to the maximum alignment of the arguments.
//
@@ -2955,7 +2965,8 @@ SDValue SITargetLowering::LowerFormalArguments(
// kern arg offset.
const Align KernelArgBaseAlign = Align(16);
- for (unsigned i = 0, e = Ins.size(), ArgIdx = 0; i != e; ++i) {
+ for (unsigned i = IsWholeWaveFunc ? 1 : 0, e = Ins.size(), ArgIdx = 0; i != e;
+ ++i) {
const ISD::InputArg &Arg = Ins[i];
if ((Arg.isOrigArg() && Skipped[Arg.getOrigArgIndex()]) || IsError) {
InVals.push_back(DAG.getUNDEF(Arg.VT));
@@ -3300,7 +3311,9 @@ SITargetLowering::LowerReturn(SDValue Chain, CallingConv::ID CallConv,
unsigned Opc = AMDGPUISD::ENDPGM;
if (!IsWaveEnd)
- Opc = IsShader ? AMDGPUISD::RETURN_TO_EPILOG : AMDGPUISD::RET_GLUE;
+ Opc = Subtarget->isWholeWaveFunction() ? AMDGPUISD::WHOLE_WAVE_RETURN
+ : IsShader ? AMDGPUISD::RETURN_TO_EPILOG
+ : AMDGPUISD::RET_GLUE;
return DAG.getNode(Opc, DL, MVT::Other, RetOps);
}
@@ -5670,6 +5683,17 @@ SITargetLowering::EmitInstrWithCustomInserter(MachineInstr &MI,
MI.eraseFromParent();
return SplitBB;
}
+ case AMDGPU::SI_WHOLE_WAVE_FUNC_RETURN: {
+ assert(Subtarget->isWholeWaveFunction());
+
+ // During ISel, it's difficult to propagate the original EXEC mask to use as
+ // an input to SI_WHOLE_WAVE_FUNC_RETURN. Set it up here instead.
+ MachineInstr *Setup =
+ TII->getWholeWaveFunctionSetup(*BB->getParent()->begin());
+ assert(Setup && "Couldn't find SI_SETUP_WHOLE_WAVE_FUNC");
+ MI.getOperand(0).setReg(Setup->getOperand(0).getReg());
+ return BB;
+ }
default:
if (TII->isImage(MI) || TII->isMUBUF(MI)) {
if (!MI.mayStore())
diff --git a/llvm/lib/Target/AMDGPU/SIInsertWaitcnts.cpp b/llvm/lib/Target/AMDGPU/SIInsertWaitcnts.cpp
index 239f2664f59f3..6d5aac1eeb842 100644
--- a/llvm/lib/Target/AMDGPU/SIInsertWaitcnts.cpp
+++ b/llvm/lib/Target/AMDGPU/SIInsertWaitcnts.cpp
@@ -1636,6 +1636,7 @@ bool SIInsertWaitcnts::generateWaitcntInstBefore(MachineInstr &MI,
// with knowledge of the called routines.
if (MI.getOpcode() == AMDGPU::SI_RETURN_TO_EPILOG ||
MI.getOpcode() == AMDGPU::SI_RETURN ||
+ MI.getOpcode() == AMDGPU::SI_WHOLE_WAVE_FUNC_RETURN ||
MI.getOpcode() == AMDGPU::S_SETPC_B64_return ||
(MI.isReturn() && MI.isCall() && !callWaitsOnFunctionEntry(MI))) {
Wait = Wait.combined(WCG->getAllZeroWaitcnt(/*IncludeVSCnt=*/false));
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
index 1e025f481ffa9..4868a873703bc 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
@@ -2509,6 +2509,7 @@ bool SIInstrInfo::expandPostRAPseudo(MachineInstr &MI) const {
MI.setDesc(get(ST.isWave32() ? AMDGPU::S_MOV_B32 : AMDGPU::S_MOV_B64));
break;
}
+ case AMDGPU::SI_WHOLE_WAVE_FUNC_RETURN:
case AMDGPU::SI_RETURN: {
const MachineFunction *MF = MBB.getParent();
const GCNSubtarget &ST = MF->getSubtarget<GCNSubtarget>();
@@ -5773,6 +5774,16 @@ void SIInstrInfo::restoreExec(MachineFunction &MF, MachineBasicBlock &MBB,
Indexes->insertMachineInstrInMaps(*ExecRestoreMI);
}
+MachineInstr *
+SIInstrInfo::getWholeWaveFunctionSetup(MachineBasicBlock &MBB) const {
+ assert(ST.isWholeWaveFunction() && "Not a whole wave func");
+ for (MachineInstr &MI : MBB)
+ if (MI.getOpcode() == AMDGPU::SI_SETUP_WHOLE_WAVE_FUNC)
+ return &MI;
+
+ llvm_unreachable("Couldn't find instruction. Wrong MBB?");
+}
+
static const TargetRegisterClass *
adjustAllocatableRegClass(const GCNSubtarget &ST, const SIRegisterInfo &RI,
const MachineRegisterInfo &MRI,
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.h b/llvm/lib/Target/AMDGPU/SIInstrInfo.h
index 79ef1432d512a..1850b107b9fa5 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.h
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.h
@@ -1178,6 +1178,8 @@ class SIInstrInfo final : public AMDGPUGenInstrInfo {
MachineBasicBlock::iterator MBBI, const DebugLoc &DL,
Register Reg, SlotIndexes *Indexes = nullptr) const;
+ MachineInstr *getWholeWaveFunctionSetup(MachineBasicBlock &MBB) const;
+
/// Return the correct register class for \p OpNo. For target-specific
/// instructions, this will return the register class that has been defined
/// in tablegen. For generic instructions, like REG_SEQUENCE it will return
diff --git a/llvm/lib/Target/AMDGPU/SIInstructions.td b/llvm/lib/Target/AMDGPU/SIInstructions.td
index de77401eb0137..7ab10a6519d7a 100644
--- a/llvm/lib/Target/AMDGPU/SIInstructions.td
+++ b/llvm/lib/Target/AMDGPU/SIInstructions.td
@@ -623,6 +623,35 @@ def SI_INIT_WHOLE_WAVE : SPseudoInstSI <
let isConvergent = 1;
}
+let SubtargetPredicate = IsWholeWaveFunction in {
+// Sets EXEC to all lanes and returns the previous EXEC.
+def SI_SETUP_WHOLE_WAVE_FUNC : SPseudoInstSI <
+ (outs SReg_1:$dst), (ins), [(set i1:$dst, (AMDGPUwhole_wave_setup))]> {
+ let Defs = [EXEC];
+ let Uses = [EXEC];
+
+ let isConvergent = 1;
+}
+
+// Restores the previous EXEC and otherwise behaves entirely like a SI_RETURN.
+def SI_WHOLE_WAVE_FUNC_RETURN : SPseudoInstSI <
+ (outs), (ins SReg_1:$orig_exec)> {
+ let isTerminator = 1;
+ let isBarrier = 1;
+ let isReturn = 1;
+ let SchedRW = [WriteBranch];
+
+ // We're going to use custom handling to set the $orig_exec to the correct value.
+ let usesCustomInserter = 1;
+}
+
+// Generate a SI_WHOLE_WAVE_FUNC_RETURN pseudo with a placeholder for its
+// argument. It will be filled in by the custom inserter.
+def : GCNPat<
+ (AMDGPUwhole_wave_return), (SI_WHOLE_WAVE_FUNC_RETURN (i1 (IMPLICIT_DEF)))>;
+
+} // SubtargetPredicate = IsWholeWaveFunction
+
// Return for returning shaders to a shader variant epilog.
def SI_RETURN_TO_EPILOG : SPseudoInstSI <
(outs), (ins variable_ops), [(AMDGPUreturn_to_epilog)]> {
diff --git a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
new file mode 100644
index 0000000000000..9e41b4e4dd614
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
@@ -0,0 +1,116 @@
+; NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -global-isel=0 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+whole-wave-function -stop-after=finalize-isel -verify-machineinstrs < %s | FileCheck --check-prefix=DAGISEL %s
+; TODO: llc -global-isel=1 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+whole-wave-function < %s | FileCheck --check-prefix=GISEL %s
+
+define amdgpu_gfx i32 @basic_test(i1 %active, i32 %a, i32 %b) {
+ ; DAGISEL-LABEL: name: basic_test
+ ; DAGISEL: bb.0 (%ir-block.0):
+ ; DAGISEL-NEXT: liveins: $vgpr0, $vgpr1
+ ; DAGISEL-NEXT: {{ $}}
+ ; DAGISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr1
+ ; DAGISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr0
+ ; DAGISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32_xm0_xexec = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; DAGISEL-NEXT: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 5
+ ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, killed [[S_MOV_B32_]], 0, [[COPY1]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; DAGISEL-NEXT: [[S_MOV_B32_1:%[0-9]+]]:sreg_32 = S_MOV_B32 3
+ ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_1:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, killed [[S_MOV_B32_1]], 0, [[COPY]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; DAGISEL-NEXT: [[V_MOV_B32_dpp:%[0-9]+]]:vgpr_32 = V_MOV_B32_dpp [[V_CNDMASK_B32_e64_]], killed [[V_CNDMASK_B32_e64_1]], 1, 1, 1, 0, implicit $exec
+ ; DAGISEL-NEXT: $vgpr0 = COPY [[V_MOV_B32_dpp]]
+ ; DAGISEL-NEXT: [[DEF:%[0-9]+]]:sreg_32 = IMPLICIT_DEF
+ ; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
+ %x = select i1 %active, i32 %a, i32 5
+ %y = select i1 %active, i32 %b, i32 3
+ %ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false) #0
+ ret i32 %ret
+}
+
+; Make sure we don't crash if %active is not used at all.
+define amdgpu_gfx i32 @unused_active(i1 %active, i32 %a, i32 %b) {
+ ; DAGISEL-LABEL: name: unused_active
+ ; DAGISEL: bb.0 (%ir-block.0):
+ ; DAGISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32 = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; DAGISEL-NEXT: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 14, implicit $exec
+ ; DAGISEL-NEXT: $vgpr0 = COPY [[V_MOV_B32_e32_]]
+ ; DAGISEL-NEXT: [[DEF:%[0-9]+]]:sreg_32 = IMPLICIT_DEF
+ ; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
+ ret i32 14
+}
+
+define amdgpu_gfx i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
+ ; DAGISEL-LABEL: name: multiple_blocks
+ ; DAGISEL: bb.0 (%ir-block.0):
+ ; DAGISEL-NEXT: successors: %bb.1(0x40000000), %bb.2(0x40000000)
+ ; DAGISEL-NEXT: liveins: $vgpr0, $vgpr1
+ ; DAGISEL-NEXT: {{ $}}
+ ; DAGISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr1
+ ; DAGISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr0
+ ; DAGISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32 = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; DAGISEL-NEXT: [[COPY2:%[0-9]+]]:sreg_32 = COPY [[SI_SETUP_WHOLE_WAVE_FUNC]]
+ ; DAGISEL-NEXT: [[V_CMP_EQ_U32_e64_:%[0-9]+]]:sreg_32 = V_CMP_EQ_U32_e64 [[COPY1]], [[COPY]], implicit $exec
+ ; DAGISEL-NEXT: [[SI_IF:%[0-9]+]]:sreg_32 = SI_IF killed [[V_CMP_EQ_U32_e64_]], %bb.2, implicit-def dead $exec, implicit-def dead $scc, implicit $exec
+ ; DAGISEL-NEXT: S_BRANCH %bb.1
+ ; DAGISEL-NEXT: {{ $}}
+ ; DAGISEL-NEXT: bb.1.if.then:
+ ; DAGISEL-NEXT: successors: %bb.2(0x80000000)
+ ; DAGISEL-NEXT: {{ $}}
+ ; DAGISEL-NEXT: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY1]], [[COPY]], 0, implicit $exec
+ ; DAGISEL-NEXT: {{ $}}
+ ; DAGISEL-NEXT: bb.2.if.end:
+ ; DAGISEL-NEXT: [[PHI:%[0-9]+]]:vgpr_32 = PHI [[COPY]], %bb.0, [[V_ADD_U32_e64_]], %bb.1
+ ; DAGISEL-NEXT: SI_END_CF [[SI_IF]], implicit-def dead $exec, implicit-def dead $scc, implicit $exec
+ ; DAGISEL-NEXT: [[COPY3:%[0-9]+]]:sreg_32_xm0_xexec = COPY [[COPY2]]
+ ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[PHI]], 0, [[COPY1]], [[COPY3]], implicit $exec
+ ; DAGISEL-NEXT: $vgpr0 = COPY [[V_CNDMASK_B32_e64_]]
+ ; DAGISEL-NEXT: [[DEF:%[0-9]+]]:sreg_32 = IMPLICIT_DEF
+ ; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
+ %c = icmp eq i32 %a, %b
+ br i1 %c, label %if.then, label %if.end
+
+if.then: ; preds = %0
+ %d = add i32 %a, %b
+ br label %if.end
+
+if.end:
+ %f = phi i32 [ %d, %if.then ], [ %b, %0 ]
+ %e = select i1 %active, i32 %a, i32 %f
+ ret i32 %e
+}
+
+define amdgpu_gfx i64 @ret_64(i1 %active, i64 %a, i64 %b) {
+ ; DAGISEL-LABEL: name: ret_64
+ ; DAGISEL: bb.0 (%ir-block.0):
+ ; DAGISEL-NEXT: liveins: $vgpr0, $vgpr1, $vgpr2, $vgpr3
+ ; DAGISEL-NEXT: {{ $}}
+ ; DAGISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr3
+ ; DAGISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr2
+ ; DAGISEL-NEXT: [[COPY2:%[0-9]+]]:vgpr_32 = COPY $vgpr1
+ ; DAGISEL-NEXT: [[COPY3:%[0-9]+]]:vgpr_32 = COPY $vgpr0
+ ; DAGISEL-NEXT: [[DEF:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
+ ; DAGISEL-NEXT: [[DEF1:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
+ ; DAGISEL-NEXT: [[REG_SEQUENCE:%[0-9]+]]:vreg_64 = REG_SEQUENCE [[COPY1]], %subreg.sub0, [[COPY]], %subreg.sub1
+ ; DAGISEL-NEXT: [[DEF2:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
+ ; DAGISEL-NEXT: [[DEF3:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
+ ; DAGISEL-NEXT: [[REG_SEQUENCE1:%[0-9]+]]:vreg_64 = REG_SEQUENCE [[COPY3]], %subreg.sub0, [[COPY2]], %subreg.sub1
+ ; DAGISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32_xm0_xexec = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; DAGISEL-NEXT: [[COPY4:%[0-9]+]]:vgpr_32 = COPY [[REG_SEQUENCE1]].sub1
+ ; DAGISEL-NEXT: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 0
+ ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[S_MOV_B32_]], 0, killed [[COPY4]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; DAGISEL-NEXT: [[COPY5:%[0-9]+]]:vgpr_32 = COPY [[REG_SEQUENCE1]].sub0
+ ; DAGISEL-NEXT: [[S_MOV_B32_1:%[0-9]+]]:sreg_32 = S_MOV_B32 5
+ ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_1:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, killed [[S_MOV_B32_1]], 0, killed [[COPY5]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; DAGISEL-NEXT: [[COPY6:%[0-9]+]]:vgpr_32 = COPY [[REG_SEQUENCE]].sub1
+ ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_2:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[S_MOV_B32_]], 0, killed [[COPY6]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; DAGISEL-NEXT: [[COPY7:%[0-9]+]]:vgpr_32 = COPY [[REG_SEQUENCE]].sub0
+ ; DAGISEL-NEXT: [[S_MOV_B32_2:%[0-9]+]]:sreg_32 = S_MOV_B32 3
+ ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_3:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, killed [[S_MOV_B32_2]], 0, killed [[COPY7]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; DAGISEL-NEXT: [[V_MOV_B32_dpp:%[0-9]+]]:vgpr_32 = V_MOV_B32_dpp [[V_CNDMASK_B32_e64_1]], killed [[V_CNDMASK_B32_e64_3]], 1, 1, 1, 0, implicit $exec
+ ; DAGISEL-NEXT: [[V_MOV_B32_dpp1:%[0-9]+]]:vgpr_32 = V_MOV_B32_dpp [[V_CNDMASK_B32_e64_]], killed [[V_CNDMASK_B32_e64_2]], 1, 1, 1, 0, implicit $exec
+ ; DAGISEL-NEXT: $vgpr0 = COPY [[V_MOV_B32_dpp]]
+ ; DAGISEL-NEXT: $vgpr1 = COPY [[V_MOV_B32_dpp1]]
+ ; DAGISEL-NEXT: [[DEF4:%[0-9]+]]:sreg_32 = IMPLICIT_DEF
+ ; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0, implicit $vgpr1
+ %x = select i1 %active, i64 %a, i64 5
+ %y = select i1 %active, i64 %b, i64 3
+ %ret = call i64 @llvm.amdgcn.update.dpp.i64(i64 %x, i64 %y, i32 1, i32 1, i32 1, i1 false) #0
+ ret i64 %ret
+}
diff --git a/llvm/test/CodeGen/AMDGPU/whole-wave-functions-pei.mir b/llvm/test/CodeGen/AMDGPU/whole-wave-functions-pei.mir
new file mode 100644
index 0000000000000..d62e90441284c
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/whole-wave-functions-pei.mir
@@ -0,0 +1,439 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
+# RUN: llc -mtriple=amdgcn -mcpu=gfx1200 -mattr=+whole-wave-function -run-pass=prologepilog -o - %s | FileCheck %s
+
+---
+name: save_inactive_lanes_non_csr_vgpr
+alignment: 1
+tracksRegLiveness: true
+noPhis: true
+isSSA: false
+noVRegs: true
+hasFakeUses: false
+tracksDebugUserValues: true
+frameInfo:
+ maxAlignment: 1
+ isCalleeSavedInfoValid: true
+machineFunctionInfo:
+ maxKernArgAlign: 1
+ frameOffsetReg: '$sgpr33'
+ stackPtrOffsetReg: '$sgpr32'
+ returnsVoid: false
+ occupancy: 16
+ sgprForEXECCopy: '$sgpr105'
+body: |
+ bb.0:
+ ; CHECK-LABEL: name: save_inactive_lanes_non_csr_vgpr
+ ; CHECK: liveins: $vgpr0
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: $sgpr0 = S_XOR_SAVEEXEC_B32 -1, implicit-def $exec, implicit-def dead $scc, implicit $exec
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr0, $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.0, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 -1
+ ; CHECK-NEXT: $vgpr0 = V_MOV_B32_e32 14, implicit $exec
+ ; CHECK-NEXT: $exec_lo = S_XOR_B32 $sgpr0, -1, implicit-def $scc
+ ; CHECK-NEXT: $vgpr0 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 0, 0, implicit $exec, implicit $flat_scr, implicit $vgpr0(tied-def 0) :: (load (s32) from %stack.0, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 $sgpr0
+ ; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0, implicit killed $vgpr0
+ renamable $sgpr0 = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ $vgpr0 = V_MOV_B32_e32 14, implicit $exec
+ SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0, implicit killed $vgpr0
+
+...
+---
+name: save_all_lanes_csr_vgpr
+alignment: 1
+tracksRegLiveness: true
+noPhis: true
+isSSA: false
+noVRegs: true
+hasFakeUses: false
+tracksDebugUserValues: true
+frameInfo:
+ maxAlignment: 1
+ isCalleeSavedInfoValid: true
+machineFunctionInfo:
+ maxKernArgAlign: 1
+ frameOffsetReg: '$sgpr33'
+ stackPtrOffsetReg: '$sgpr32'
+ returnsVoid: false
+ occupancy: 16
+ sgprForEXECCopy: '$sgpr105'
+body: |
+ bb.0:
+ ; CHECK-LABEL: name: save_all_lanes_csr_vgpr
+ ; CHECK: liveins: $vgpr40
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: $sgpr0 = S_OR_SAVEEXEC_B32 -1, implicit-def $exec, implicit-def dead $scc, implicit $exec
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr40, $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.0, addrspace 5)
+ ; CHECK-NEXT: $vgpr40 = V_MOV_B32_e32 14, implicit $exec
+ ; CHECK-NEXT: $vgpr40 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.0, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 $sgpr0
+ ; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0
+ renamable $sgpr0 = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ $vgpr40 = V_MOV_B32_e32 14, implicit $exec
+ SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0
+
+...
+---
+name: save_csr_sgpr_to_non_csr_vgpr
+alignment: 1
+tracksRegLiveness: true
+noPhis: true
+isSSA: false
+noVRegs: true
+hasFakeUses: false
+tracksDebugUserValues: true
+frameInfo:
+ maxAlignment: 1
+ isCalleeSavedInfoValid: true
+machineFunctionInfo:
+ maxKernArgAlign: 1
+ frameOffsetReg: '$sgpr33'
+ stackPtrOffsetReg: '$sgpr32'
+ returnsVoid: false
+ occupancy: 16
+ sgprForEXECCopy: '$sgpr105'
+body: |
+ bb.0:
+ liveins: $sgpr20, $vgpr191
+ ; CHECK-LABEL: name: save_csr_sgpr_to_non_csr_vgpr
+ ; CHECK: liveins: $sgpr20, $vgpr191, $vgpr192
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: $vcc_lo = S_XOR_SAVEEXEC_B32 -1, implicit-def $exec, implicit-def dead $scc, implicit $exec
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr192, $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.0, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 -1
+ ; CHECK-NEXT: $vgpr192 = SI_SPILL_S32_TO_VGPR killed $sgpr20, 0, $vgpr192
+ ; CHECK-NEXT: $sgpr20 = S_MOV_B32 14, implicit $exec
+ ; CHECK-NEXT: $sgpr20 = SI_RESTORE_S32_FROM_VGPR $vgpr192, 0
+ ; CHECK-NEXT: $exec_lo = S_XOR_B32 $vcc_lo, -1, implicit-def $scc
+ ; CHECK-NEXT: $vgpr192 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.0, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 $vcc_lo
+ ; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $vcc_lo
+ $vgpr192 = SI_SPILL_S32_TO_VGPR killed $sgpr20, 0, $vgpr192
+ renamable $vcc_lo = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ $sgpr20 = S_MOV_B32 14, implicit $exec
+ $sgpr20 = SI_RESTORE_S32_FROM_VGPR $vgpr192, 0
+ SI_WHOLE_WAVE_FUNC_RETURN killed renamable $vcc_lo
+
+...
+---
+name: save_csr_sgpr_to_csr_vgpr
+alignment: 1
+tracksRegLiveness: true
+noPhis: true
+isSSA: false
+noVRegs: true
+hasFakeUses: false
+tracksDebugUserValues: true
+frameInfo:
+ maxAlignment: 1
+ isCalleeSavedInfoValid: true
+machineFunctionInfo:
+ maxKernArgAlign: 1
+ frameOffsetReg: '$sgpr33'
+ stackPtrOffsetReg: '$sgpr32'
+ returnsVoid: false
+ occupancy: 16
+ sgprForEXECCopy: '$sgpr105'
+body: |
+ bb.0:
+ liveins: $sgpr20, $vgpr191
+ ; CHECK-LABEL: name: save_csr_sgpr_to_csr_vgpr
+ ; CHECK: liveins: $sgpr20, $vgpr191
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: $vcc_lo = S_OR_SAVEEXEC_B32 -1, implicit-def $exec, implicit-def dead $scc, implicit $exec
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr191, $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.0, addrspace 5)
+ ; CHECK-NEXT: $vgpr191 = SI_SPILL_S32_TO_VGPR killed $sgpr20, 0, $vgpr191
+ ; CHECK-NEXT: $sgpr20 = S_MOV_B32 14, implicit $exec
+ ; CHECK-NEXT: $sgpr20 = SI_RESTORE_S32_FROM_VGPR $vgpr191, 0
+ ; CHECK-NEXT: $vgpr191 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.0, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 $vcc_lo
+ ; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $vcc_lo
+ $vgpr191 = SI_SPILL_S32_TO_VGPR killed $sgpr20, 0, $vgpr191
+ renamable $vcc_lo = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ $sgpr20 = S_MOV_B32 14, implicit $exec
+ $sgpr20 = SI_RESTORE_S32_FROM_VGPR $vgpr191, 0
+ SI_WHOLE_WAVE_FUNC_RETURN killed renamable $vcc_lo
+
+...
+---
+name: vgpr_and_sgpr_csr
+alignment: 1
+tracksRegLiveness: true
+noPhis: true
+isSSA: false
+noVRegs: true
+hasFakeUses: false
+tracksDebugUserValues: true
+liveins:
+ - { reg: '$vgpr0' }
+ - { reg: '$vgpr1' }
+frameInfo:
+ maxAlignment: 4
+ isCalleeSavedInfoValid: true
+machineFunctionInfo:
+ maxKernArgAlign: 1
+ hasSpilledSGPRs: true
+ frameOffsetReg: '$sgpr33'
+ stackPtrOffsetReg: '$sgpr32'
+ returnsVoid: false
+ occupancy: 16
+ spillPhysVGPRs:
+ - '$vgpr191'
+ wwmReservedRegs:
+ - '$vgpr191'
+body: |
+ bb.0:
+ liveins: $sgpr20, $vgpr0, $vgpr1, $vgpr191
+
+ ; CHECK-LABEL: name: vgpr_and_sgpr_csr
+ ; CHECK: liveins: $sgpr20, $vgpr0, $vgpr1, $vgpr40, $vgpr49
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: $vcc_lo = S_XOR_SAVEEXEC_B32 -1, implicit-def $exec, implicit-def dead $scc, implicit $exec
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr0, $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.0, addrspace 5)
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr49, $sgpr32, 8, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.2, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 -1
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr40, $sgpr32, 4, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.1, addrspace 5)
+ ; CHECK-NEXT: $vgpr0 = SI_SPILL_S32_TO_VGPR killed $sgpr20, 0, $vgpr0
+ ; CHECK-NEXT: S_NOP 0, implicit-def $vgpr40, implicit-def $sgpr20
+ ; CHECK-NEXT: S_NOP 0, implicit-def $vgpr49, implicit-def $sgpr40
+ ; CHECK-NEXT: $sgpr20 = SI_RESTORE_S32_FROM_VGPR $vgpr0, 0
+ ; CHECK-NEXT: $vgpr40 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 4, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.1, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_XOR_B32 $vcc_lo, -1, implicit-def $scc
+ ; CHECK-NEXT: $vgpr0 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.0, addrspace 5)
+ ; CHECK-NEXT: $vgpr49 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 8, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.2, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 $vcc_lo
+ ; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $vcc_lo
+ $vgpr191 = SI_SPILL_S32_TO_VGPR killed $sgpr20, 0, $vgpr191
+ renamable $vcc_lo = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ S_NOP 0, implicit-def $vgpr40, implicit-def $sgpr20
+ S_NOP 0, implicit-def $vgpr49, implicit-def $sgpr40
+ $sgpr20 = SI_RESTORE_S32_FROM_VGPR $vgpr191, 0
+ SI_WHOLE_WAVE_FUNC_RETURN killed renamable $vcc_lo
+
+...
+---
+name: split_orig_exec
+alignment: 1
+tracksRegLiveness: true
+noPhis: true
+isSSA: false
+noVRegs: true
+hasFakeUses: false
+tracksDebugUserValues: true
+liveins:
+ - { reg: '$vgpr0' }
+ - { reg: '$vgpr1' }
+frameInfo:
+ maxAlignment: 4
+ isCalleeSavedInfoValid: true
+machineFunctionInfo:
+ maxKernArgAlign: 1
+ hasSpilledSGPRs: true
+ frameOffsetReg: '$sgpr33'
+ stackPtrOffsetReg: '$sgpr32'
+ returnsVoid: false
+ occupancy: 16
+ spillPhysVGPRs:
+ - '$vgpr191'
+ wwmReservedRegs:
+ - '$vgpr191'
+body: |
+ bb.0:
+ liveins: $sgpr20, $vgpr0, $vgpr1, $vgpr191
+
+ ; CHECK-LABEL: name: split_orig_exec
+ ; CHECK: liveins: $sgpr20, $vgpr0, $vgpr1, $vgpr40, $vgpr49
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: $vcc_lo = S_XOR_SAVEEXEC_B32 -1, implicit-def $exec, implicit-def dead $scc, implicit $exec
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr0, $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.0, addrspace 5)
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr49, $sgpr32, 8, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.2, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 -1
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr40, $sgpr32, 4, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.1, addrspace 5)
+ ; CHECK-NEXT: $vgpr0 = SI_SPILL_S32_TO_VGPR killed $sgpr20, 0, $vgpr0
+ ; CHECK-NEXT: S_NOP 0, implicit-def $vgpr40, implicit-def $sgpr20
+ ; CHECK-NEXT: $sgpr3 = COPY $vcc_lo
+ ; CHECK-NEXT: S_NOP 0, implicit-def $vgpr49, implicit-def $sgpr40
+ ; CHECK-NEXT: $sgpr20 = SI_RESTORE_S32_FROM_VGPR $vgpr0, 0
+ ; CHECK-NEXT: $vgpr40 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 4, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.1, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_XOR_B32 $sgpr3, -1, implicit-def $scc
+ ; CHECK-NEXT: $vgpr0 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.0, addrspace 5)
+ ; CHECK-NEXT: $vgpr49 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 8, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.2, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 $sgpr3
+ ; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr3
+ $vgpr191 = SI_SPILL_S32_TO_VGPR killed $sgpr20, 0, $vgpr191
+ renamable $vcc_lo = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ S_NOP 0, implicit-def $vgpr40, implicit-def $sgpr20
+ $sgpr3 = COPY $vcc_lo
+ S_NOP 0, implicit-def $vgpr49, implicit-def $sgpr40
+ $sgpr20 = SI_RESTORE_S32_FROM_VGPR $vgpr191, 0
+ SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr3
+
+...
+---
+name: vgpr_superregs
+alignment: 1
+tracksRegLiveness: true
+noPhis: true
+isSSA: false
+noVRegs: true
+hasFakeUses: false
+tracksDebugUserValues: true
+frameInfo:
+ maxAlignment: 1
+ isCalleeSavedInfoValid: true
+machineFunctionInfo:
+ maxKernArgAlign: 1
+ frameOffsetReg: '$sgpr33'
+ stackPtrOffsetReg: '$sgpr32'
+ returnsVoid: false
+ occupancy: 16
+ sgprForEXECCopy: '$sgpr105'
+body: |
+ bb.0:
+ ; CHECK-LABEL: name: vgpr_superregs
+ ; CHECK: liveins: $vgpr0, $vgpr2, $vgpr3, $vgpr4, $vgpr5, $vgpr40, $vgpr41, $vgpr42
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: $sgpr0 = S_XOR_SAVEEXEC_B32 -1, implicit-def $exec, implicit-def dead $scc, implicit $exec
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr0, $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.0, addrspace 5)
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr2, $sgpr32, 4, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.1, addrspace 5)
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr3, $sgpr32, 8, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.2, addrspace 5)
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr4, $sgpr32, 12, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.3, addrspace 5)
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr5, $sgpr32, 16, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.4, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 -1
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr40, $sgpr32, 20, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.5, addrspace 5)
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr41, $sgpr32, 24, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.6, addrspace 5)
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr42, $sgpr32, 28, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.7, addrspace 5)
+ ; CHECK-NEXT: $vgpr0 = V_MOV_B32_e32 14, implicit $exec
+ ; CHECK-NEXT: S_NOP 0, implicit-def $vgpr2_vgpr3_vgpr4_vgpr5, implicit-def $vgpr40_vgpr41_vgpr42
+ ; CHECK-NEXT: $vgpr40 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 20, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.5, addrspace 5)
+ ; CHECK-NEXT: $vgpr41 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 24, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.6, addrspace 5)
+ ; CHECK-NEXT: $vgpr42 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 28, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.7, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_XOR_B32 $sgpr0, -1, implicit-def $scc
+ ; CHECK-NEXT: $vgpr0 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 0, 0, implicit $exec, implicit $flat_scr, implicit $vgpr0(tied-def 0) :: (load (s32) from %stack.0, addrspace 5)
+ ; CHECK-NEXT: $vgpr2 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 4, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.1, addrspace 5)
+ ; CHECK-NEXT: $vgpr3 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 8, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.2, addrspace 5)
+ ; CHECK-NEXT: $vgpr4 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 12, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.3, addrspace 5)
+ ; CHECK-NEXT: $vgpr5 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 16, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.4, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 $sgpr0
+ ; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0, implicit killed $vgpr0
+ renamable $sgpr0 = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ $vgpr0 = V_MOV_B32_e32 14, implicit $exec
+ S_NOP 0, implicit-def $vgpr2_vgpr3_vgpr4_vgpr5, implicit-def $vgpr40_vgpr41_vgpr42
+ SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0, implicit killed $vgpr0
+
+...
+---
+name: dont_restore_used_vgprs
+alignment: 1
+tracksRegLiveness: true
+noPhis: true
+isSSA: false
+noVRegs: true
+hasFakeUses: false
+tracksDebugUserValues: true
+liveins:
+ - { reg: '$vgpr0' }
+ - { reg: '$vgpr20' }
+ - { reg: '$vgpr40' }
+frameInfo:
+ maxAlignment: 1
+ isCalleeSavedInfoValid: true
+machineFunctionInfo:
+ maxKernArgAlign: 1
+ frameOffsetReg: '$sgpr33'
+ stackPtrOffsetReg: '$sgpr32'
+ returnsVoid: false
+ occupancy: 16
+ sgprForEXECCopy: '$sgpr105'
+body: |
+ bb.0:
+ liveins: $vgpr0, $vgpr20, $vgpr40
+
+ ; CHECK-LABEL: name: dont_restore_used_vgprs
+ ; CHECK: liveins: $vgpr0, $vgpr20, $vgpr40
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 -1
+ ; CHECK-NEXT: S_NOP 0, implicit $vgpr0, implicit $vgpr20, implicit $vgpr40
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 $sgpr0
+ ; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0, implicit killed $vgpr0
+ renamable $sgpr0 = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ S_NOP 0, implicit $vgpr0, implicit $vgpr20, implicit $vgpr40
+ SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0, implicit killed $vgpr0
+
+...
+---
+name: multiple_blocks
+alignment: 1
+tracksRegLiveness: true
+noPhis: true
+isSSA: false
+noVRegs: true
+hasFakeUses: false
+tracksDebugUserValues: true
+liveins:
+ - { reg: '$vgpr0' }
+ - { reg: '$vgpr1' }
+frameInfo:
+ maxAlignment: 1
+ isCalleeSavedInfoValid: true
+machineFunctionInfo:
+ maxKernArgAlign: 1
+ frameOffsetReg: '$sgpr33'
+ stackPtrOffsetReg: '$sgpr32'
+ returnsVoid: false
+ occupancy: 16
+ sgprForEXECCopy: '$sgpr105'
+body: |
+ ; CHECK-LABEL: name: multiple_blocks
+ ; CHECK: bb.0:
+ ; CHECK-NEXT: successors: %bb.1(0x40000000), %bb.2(0x40000000)
+ ; CHECK-NEXT: liveins: $vgpr0, $vgpr1
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: $vcc_lo = S_XOR_SAVEEXEC_B32 -1, implicit-def $exec, implicit-def dead $scc, implicit $exec
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr0, $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.0, addrspace 5)
+ ; CHECK-NEXT: SCRATCH_STORE_DWORD_SADDR $vgpr1, $sgpr32, 4, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %stack.1, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 -1
+ ; CHECK-NEXT: $sgpr1 = S_MOV_B32 $exec_lo
+ ; CHECK-NEXT: V_CMPX_EQ_U32_nosdst_e64 $vgpr0, $vgpr1, implicit-def $exec, implicit $exec
+ ; CHECK-NEXT: S_CBRANCH_EXECZ %bb.2, implicit $exec
+ ; CHECK-NEXT: S_BRANCH %bb.1
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.1:
+ ; CHECK-NEXT: successors: %bb.2(0x80000000)
+ ; CHECK-NEXT: liveins: $vcc_lo, $sgpr1, $vgpr0, $vgpr1
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: renamable $vgpr1 = V_ADD_U32_e64 $vgpr0, $vgpr1, 0, implicit $exec
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.2:
+ ; CHECK-NEXT: liveins: $vcc_lo, $sgpr1, $vgpr0, $vgpr1
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: $exec_lo = S_OR_B32 $exec_lo, killed renamable $sgpr1, implicit-def $scc
+ ; CHECK-NEXT: renamable $vgpr0 = V_CNDMASK_B32_e64 0, $vgpr1, 0, $vgpr0, $vcc_lo, implicit $exec
+ ; CHECK-NEXT: $exec_lo = S_XOR_B32 $vcc_lo, -1, implicit-def $scc
+ ; CHECK-NEXT: $vgpr0 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 0, 0, implicit $exec, implicit $flat_scr, implicit $vgpr0(tied-def 0) :: (load (s32) from %stack.0, addrspace 5)
+ ; CHECK-NEXT: $vgpr1 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 4, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.1, addrspace 5)
+ ; CHECK-NEXT: $exec_lo = S_MOV_B32 $vcc_lo
+ ; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $vcc_lo, implicit $vgpr0
+ bb.0:
+ successors: %bb.1, %bb.2
+ liveins: $vgpr0, $vgpr1
+
+ renamable $vcc_lo = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ $sgpr1 = S_MOV_B32 $exec_lo
+ V_CMPX_EQ_U32_nosdst_e64 $vgpr0, $vgpr1, implicit-def $exec, implicit $exec
+ S_CBRANCH_EXECZ %bb.2, implicit $exec
+ S_BRANCH %bb.1
+
+ bb.1:
+ liveins: $vcc_lo, $sgpr1, $vgpr0, $vgpr1
+
+ renamable $vgpr1 = V_ADD_U32_e64 $vgpr0, $vgpr1, 0, implicit $exec
+
+ bb.2:
+ liveins: $vcc_lo, $sgpr1, $vgpr0, $vgpr1
+
+ $exec_lo = S_OR_B32 $exec_lo, killed renamable $sgpr1, implicit-def $scc
+ renamable $vgpr0 = V_CNDMASK_B32_e64 0, $vgpr1, 0, $vgpr0, $vcc_lo, implicit $exec
+ SI_WHOLE_WAVE_FUNC_RETURN killed renamable $vcc_lo, implicit $vgpr0
+
+...
diff --git a/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
new file mode 100644
index 0000000000000..9a951e95f3983
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
@@ -0,0 +1,285 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -global-isel=0 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+whole-wave-function -verify-machineinstrs < %s | FileCheck --check-prefix=DAGISEL %s
+; TODO: llc -global-isel=1 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+whole-wave-function < %s | FileCheck --check-prefix=GISEL %s
+
+; Make sure the i1 %active is passed through EXEC.
+; The EXEC mask should be set to -1 for the duration of the function
+; and restored to its original value in the epilogue.
+; We will also need to restore the inactive lanes for any allocated VGPRs.
+define i32 @basic_test(i1 %active, i32 %a, i32 %b) {
+; DAGISEL-LABEL: basic_test:
+; DAGISEL: ; %bb.0:
+; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL-NEXT: s_wait_expcnt 0x0
+; DAGISEL-NEXT: s_wait_samplecnt 0x0
+; DAGISEL-NEXT: s_wait_bvhcnt 0x0
+; DAGISEL-NEXT: s_wait_kmcnt 0x0
+; DAGISEL-NEXT: s_xor_saveexec_b32 vcc_lo, -1
+; DAGISEL-NEXT: s_clause 0x1
+; DAGISEL-NEXT: scratch_store_b32 off, v0, s32
+; DAGISEL-NEXT: scratch_store_b32 off, v1, s32 offset:4
+; DAGISEL-NEXT: s_mov_b32 exec_lo, -1
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: v_dual_cndmask_b32 v0, 5, v0 :: v_dual_cndmask_b32 v1, 3, v1
+; DAGISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; DAGISEL-NEXT: v_mov_b32_dpp v0, v1 quad_perm:[1,0,0,0] row_mask:0x1 bank_mask:0x1
+; DAGISEL-NEXT: s_xor_b32 exec_lo, vcc_lo, -1
+; DAGISEL-NEXT: s_clause 0x1
+; DAGISEL-NEXT: scratch_load_b32 v0, off, s32
+; DAGISEL-NEXT: scratch_load_b32 v1, off, s32 offset:4
+; DAGISEL-NEXT: s_mov_b32 exec_lo, vcc_lo
+; DAGISEL-NEXT: s_wait_loadcnt 0x0
+; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+ %x = select i1 %active, i32 %a, i32 5
+ %y = select i1 %active, i32 %b, i32 3
+ %ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false) #0
+ ret i32 %ret
+}
+
+; Make sure we don't crash if %active is not used at all.
+define i32 @unused_active(i1 %active, i32 %a, i32 %b) {
+; DAGISEL-LABEL: unused_active:
+; DAGISEL: ; %bb.0:
+; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL-NEXT: s_wait_expcnt 0x0
+; DAGISEL-NEXT: s_wait_samplecnt 0x0
+; DAGISEL-NEXT: s_wait_bvhcnt 0x0
+; DAGISEL-NEXT: s_wait_kmcnt 0x0
+; DAGISEL-NEXT: s_xor_saveexec_b32 s0, -1
+; DAGISEL-NEXT: scratch_store_b32 off, v0, s32 ; 4-byte Folded Spill
+; DAGISEL-NEXT: s_mov_b32 exec_lo, -1
+; DAGISEL-NEXT: v_mov_b32_e32 v0, 14
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_xor_b32 exec_lo, s0, -1
+; DAGISEL-NEXT: scratch_load_b32 v0, off, s32 ; 4-byte Folded Reload
+; DAGISEL-NEXT: s_mov_b32 exec_lo, s0
+; DAGISEL-NEXT: s_wait_loadcnt 0x0
+; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+ ret i32 14
+}
+
+; For any used VGPRs (including those used for SGPR spills), we need to restore the inactive lanes.
+; For CSR VGPRs, we need to restore all lanes.
+define i32 @csr_default_cc(i1 %active, i32 %a, i32 %b) {
+; DAGISEL-LABEL: csr_default_cc:
+; DAGISEL: ; %bb.0:
+; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL-NEXT: s_wait_expcnt 0x0
+; DAGISEL-NEXT: s_wait_samplecnt 0x0
+; DAGISEL-NEXT: s_wait_bvhcnt 0x0
+; DAGISEL-NEXT: s_wait_kmcnt 0x0
+; DAGISEL-NEXT: s_xor_saveexec_b32 vcc_lo, -1
+; DAGISEL-NEXT: s_clause 0x3
+; DAGISEL-NEXT: scratch_store_b32 off, v2, s32
+; DAGISEL-NEXT: scratch_store_b32 off, v0, s32 offset:4
+; DAGISEL-NEXT: scratch_store_b32 off, v1, s32 offset:8
+; DAGISEL-NEXT: scratch_store_b32 off, v49, s32 offset:16
+; DAGISEL-NEXT: s_mov_b32 exec_lo, -1
+; DAGISEL-NEXT: scratch_store_b32 off, v40, s32 offset:12 ; 4-byte Folded Spill
+; DAGISEL-NEXT: v_writelane_b32 v2, s48, 0
+; DAGISEL-NEXT: ;;#ASMSTART
+; DAGISEL-NEXT: ; clobber CSR
+; DAGISEL-NEXT: ;;#ASMEND
+; DAGISEL-NEXT: ;;#ASMSTART
+; DAGISEL-NEXT: ; clobber non-CSR
+; DAGISEL-NEXT: ;;#ASMEND
+; DAGISEL-NEXT: scratch_load_b32 v40, off, s32 offset:12 ; 4-byte Folded Reload
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: v_dual_cndmask_b32 v0, 5, v0 :: v_dual_cndmask_b32 v1, 3, v1
+; DAGISEL-NEXT: v_readlane_b32 s48, v2, 0
+; DAGISEL-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; DAGISEL-NEXT: v_mov_b32_dpp v0, v1 quad_perm:[1,0,0,0] row_mask:0x1 bank_mask:0x1
+; DAGISEL-NEXT: s_xor_b32 exec_lo, vcc_lo, -1
+; DAGISEL-NEXT: s_clause 0x3
+; DAGISEL-NEXT: scratch_load_b32 v2, off, s32
+; DAGISEL-NEXT: scratch_load_b32 v0, off, s32 offset:4
+; DAGISEL-NEXT: scratch_load_b32 v1, off, s32 offset:8
+; DAGISEL-NEXT: scratch_load_b32 v49, off, s32 offset:16
+; DAGISEL-NEXT: s_mov_b32 exec_lo, vcc_lo
+; DAGISEL-NEXT: s_wait_loadcnt 0x0
+; DAGISEL-NEXT: s_wait_alu 0xf1ff
+; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+ %x = select i1 %active, i32 %a, i32 5
+ %y = select i1 %active, i32 %b, i32 3
+ call void asm sideeffect "; clobber CSR", "~{v40},~{s48}"()
+ call void asm sideeffect "; clobber non-CSR", "~{v49},~{s40}"()
+ %ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false) #0
+ ret i32 %ret
+}
+
+; Same as above, but with the amdgpu_gfx calling convention.
+define amdgpu_gfx i32 @csr_amdgpu_gfx(i1 %active, i32 %a, i32 %b) {
+; DAGISEL-LABEL: csr_amdgpu_gfx:
+; DAGISEL: ; %bb.0:
+; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL-NEXT: s_wait_expcnt 0x0
+; DAGISEL-NEXT: s_wait_samplecnt 0x0
+; DAGISEL-NEXT: s_wait_bvhcnt 0x0
+; DAGISEL-NEXT: s_wait_kmcnt 0x0
+; DAGISEL-NEXT: s_xor_saveexec_b32 vcc_lo, -1
+; DAGISEL-NEXT: s_clause 0x3
+; DAGISEL-NEXT: scratch_store_b32 off, v2, s32
+; DAGISEL-NEXT: scratch_store_b32 off, v0, s32 offset:4
+; DAGISEL-NEXT: scratch_store_b32 off, v1, s32 offset:8
+; DAGISEL-NEXT: scratch_store_b32 off, v49, s32 offset:16
+; DAGISEL-NEXT: s_mov_b32 exec_lo, -1
+; DAGISEL-NEXT: scratch_store_b32 off, v40, s32 offset:12 ; 4-byte Folded Spill
+; DAGISEL-NEXT: v_writelane_b32 v2, s28, 0
+; DAGISEL-NEXT: ;;#ASMSTART
+; DAGISEL-NEXT: ; clobber CSR
+; DAGISEL-NEXT: ;;#ASMEND
+; DAGISEL-NEXT: ;;#ASMSTART
+; DAGISEL-NEXT: ; clobber non-CSR
+; DAGISEL-NEXT: ;;#ASMEND
+; DAGISEL-NEXT: scratch_load_b32 v40, off, s32 offset:12 ; 4-byte Folded Reload
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: v_dual_cndmask_b32 v0, 5, v0 :: v_dual_cndmask_b32 v1, 3, v1
+; DAGISEL-NEXT: v_readlane_b32 s28, v2, 0
+; DAGISEL-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; DAGISEL-NEXT: v_mov_b32_dpp v0, v1 quad_perm:[1,0,0,0] row_mask:0x1 bank_mask:0x1
+; DAGISEL-NEXT: s_xor_b32 exec_lo, vcc_lo, -1
+; DAGISEL-NEXT: s_clause 0x3
+; DAGISEL-NEXT: scratch_load_b32 v2, off, s32
+; DAGISEL-NEXT: scratch_load_b32 v0, off, s32 offset:4
+; DAGISEL-NEXT: scratch_load_b32 v1, off, s32 offset:8
+; DAGISEL-NEXT: scratch_load_b32 v49, off, s32 offset:16
+; DAGISEL-NEXT: s_mov_b32 exec_lo, vcc_lo
+; DAGISEL-NEXT: s_wait_loadcnt 0x0
+; DAGISEL-NEXT: s_wait_alu 0xf1ff
+; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+ %x = select i1 %active, i32 %a, i32 5
+ %y = select i1 %active, i32 %b, i32 3
+ call void asm sideeffect "; clobber CSR", "~{v40},~{s28}"()
+ call void asm sideeffect "; clobber non-CSR", "~{v49},~{s40}"()
+ %ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false) #0
+ ret i32 %ret
+}
+
+; Save and restore all lanes of v40.
+define void @csr_vgpr_only(i1 %active, i32 %a, i32 %b) {
+; DAGISEL-LABEL: csr_vgpr_only:
+; DAGISEL: ; %bb.0:
+; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL-NEXT: s_wait_expcnt 0x0
+; DAGISEL-NEXT: s_wait_samplecnt 0x0
+; DAGISEL-NEXT: s_wait_bvhcnt 0x0
+; DAGISEL-NEXT: s_wait_kmcnt 0x0
+; DAGISEL-NEXT: s_or_saveexec_b32 s0, -1
+; DAGISEL-NEXT: scratch_store_b32 off, v40, s32 ; 4-byte Folded Spill
+; DAGISEL-NEXT: ;;#ASMSTART
+; DAGISEL-NEXT: ; clobber CSR VGPR
+; DAGISEL-NEXT: ;;#ASMEND
+; DAGISEL-NEXT: scratch_load_b32 v40, off, s32 ; 4-byte Folded Reload
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_mov_b32 exec_lo, s0
+; DAGISEL-NEXT: s_wait_loadcnt 0x0
+; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+ call void asm sideeffect "; clobber CSR VGPR", "~{v40}"()
+ ret void
+}
+
+define void @sgpr_spill_only(i1 %active, i32 %a, i32 %b) {
+; DAGISEL-LABEL: sgpr_spill_only:
+; DAGISEL: ; %bb.0:
+; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL-NEXT: s_wait_expcnt 0x0
+; DAGISEL-NEXT: s_wait_samplecnt 0x0
+; DAGISEL-NEXT: s_wait_bvhcnt 0x0
+; DAGISEL-NEXT: s_wait_kmcnt 0x0
+; DAGISEL-NEXT: s_xor_saveexec_b32 s0, -1
+; DAGISEL-NEXT: scratch_store_b32 off, v0, s32 ; 4-byte Folded Spill
+; DAGISEL-NEXT: s_mov_b32 exec_lo, -1
+; DAGISEL-NEXT: v_writelane_b32 v0, s48, 0
+; DAGISEL-NEXT: ;;#ASMSTART
+; DAGISEL-NEXT: ; clobber CSR SGPR
+; DAGISEL-NEXT: ;;#ASMEND
+; DAGISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; DAGISEL-NEXT: v_readlane_b32 s48, v0, 0
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_xor_b32 exec_lo, s0, -1
+; DAGISEL-NEXT: scratch_load_b32 v0, off, s32 ; 4-byte Folded Reload
+; DAGISEL-NEXT: s_mov_b32 exec_lo, s0
+; DAGISEL-NEXT: s_wait_loadcnt 0x0
+; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+ call void asm sideeffect "; clobber CSR SGPR", "~{s48}"()
+ ret void
+}
+
+define i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
+; DAGISEL-LABEL: multiple_blocks:
+; DAGISEL: ; %bb.0:
+; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL-NEXT: s_wait_expcnt 0x0
+; DAGISEL-NEXT: s_wait_samplecnt 0x0
+; DAGISEL-NEXT: s_wait_bvhcnt 0x0
+; DAGISEL-NEXT: s_wait_kmcnt 0x0
+; DAGISEL-NEXT: s_xor_saveexec_b32 vcc_lo, -1
+; DAGISEL-NEXT: s_clause 0x1
+; DAGISEL-NEXT: scratch_store_b32 off, v0, s32
+; DAGISEL-NEXT: scratch_store_b32 off, v1, s32 offset:4
+; DAGISEL-NEXT: s_mov_b32 exec_lo, -1
+; DAGISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; DAGISEL-NEXT: s_mov_b32 s1, exec_lo
+; DAGISEL-NEXT: v_cmpx_eq_u32_e64 v0, v1
+; DAGISEL-NEXT: ; %bb.1: ; %if.then
+; DAGISEL-NEXT: v_add_nc_u32_e32 v1, v0, v1
+; DAGISEL-NEXT: ; %bb.2: ; %if.end
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_or_b32 exec_lo, exec_lo, s1
+; DAGISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; DAGISEL-NEXT: v_cndmask_b32_e32 v0, v1, v0, vcc_lo
+; DAGISEL-NEXT: s_xor_b32 exec_lo, vcc_lo, -1
+; DAGISEL-NEXT: s_clause 0x1
+; DAGISEL-NEXT: scratch_load_b32 v0, off, s32
+; DAGISEL-NEXT: scratch_load_b32 v1, off, s32 offset:4
+; DAGISEL-NEXT: s_mov_b32 exec_lo, vcc_lo
+; DAGISEL-NEXT: s_wait_loadcnt 0x0
+; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+ %c = icmp eq i32 %a, %b
+ br i1 %c, label %if.then, label %if.end
+
+if.then: ; preds = %0
+ %d = add i32 %a, %b
+ br label %if.end
+
+if.end:
+ %f = phi i32 [ %d, %if.then ], [ %b, %0 ]
+ %e = select i1 %active, i32 %a, i32 %f
+ ret i32 %e
+}
+
+define i64 @ret_64(i1 %active, i64 %a, i64 %b) {
+; DAGISEL-LABEL: ret_64:
+; DAGISEL: ; %bb.0:
+; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL-NEXT: s_wait_expcnt 0x0
+; DAGISEL-NEXT: s_wait_samplecnt 0x0
+; DAGISEL-NEXT: s_wait_bvhcnt 0x0
+; DAGISEL-NEXT: s_wait_kmcnt 0x0
+; DAGISEL-NEXT: s_xor_saveexec_b32 vcc_lo, -1
+; DAGISEL-NEXT: s_clause 0x3
+; DAGISEL-NEXT: scratch_store_b32 off, v0, s32
+; DAGISEL-NEXT: scratch_store_b32 off, v1, s32 offset:4
+; DAGISEL-NEXT: scratch_store_b32 off, v2, s32 offset:8
+; DAGISEL-NEXT: scratch_store_b32 off, v3, s32 offset:12
+; DAGISEL-NEXT: s_mov_b32 exec_lo, -1
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: v_dual_cndmask_b32 v1, 0, v1 :: v_dual_cndmask_b32 v0, 5, v0
+; DAGISEL-NEXT: v_dual_cndmask_b32 v2, 3, v2 :: v_dual_cndmask_b32 v3, 0, v3
+; DAGISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_2)
+; DAGISEL-NEXT: v_mov_b32_dpp v0, v2 quad_perm:[1,0,0,0] row_mask:0x1 bank_mask:0x1
+; DAGISEL-NEXT: v_mov_b32_dpp v1, v3 quad_perm:[1,0,0,0] row_mask:0x1 bank_mask:0x1
+; DAGISEL-NEXT: s_xor_b32 exec_lo, vcc_lo, -1
+; DAGISEL-NEXT: s_clause 0x3
+; DAGISEL-NEXT: scratch_load_b32 v0, off, s32
+; DAGISEL-NEXT: scratch_load_b32 v1, off, s32 offset:4
+; DAGISEL-NEXT: scratch_load_b32 v2, off, s32 offset:8
+; DAGISEL-NEXT: scratch_load_b32 v3, off, s32 offset:12
+; DAGISEL-NEXT: s_mov_b32 exec_lo, vcc_lo
+; DAGISEL-NEXT: s_wait_loadcnt 0x0
+; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+ %x = select i1 %active, i64 %a, i64 5
+ %y = select i1 %active, i64 %b, i64 3
+ %ret = call i64 @llvm.amdgcn.update.dpp.i64(i64 %x, i64 %y, i32 1, i32 1, i32 1, i1 false) #0
+ ret i64 %ret
+}
>From 399e08c00b2f4b94c8dcfb740d181db7cc2449f5 Mon Sep 17 00:00:00 2001
From: Diana Picus <diana-magda.picus at amd.com>
Date: Mon, 17 Mar 2025 12:47:21 +0100
Subject: [PATCH 03/11] Use MF instead of MBB
---
llvm/lib/Target/AMDGPU/SIFrameLowering.cpp | 4 ++--
llvm/lib/Target/AMDGPU/SIISelLowering.cpp | 3 +--
llvm/lib/Target/AMDGPU/SIInstrInfo.cpp | 3 ++-
llvm/lib/Target/AMDGPU/SIInstrInfo.h | 2 +-
4 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp b/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
index 671db9595d7a2..7e822f75b31f5 100644
--- a/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
@@ -905,7 +905,7 @@ static Register buildScratchExecCopy(LiveRegUnits &LiveUnits,
// we can use.
assert(IsProlog && "Epilog should look at return, not setup");
ScratchExecCopy =
- TII->getWholeWaveFunctionSetup(MBB)->getOperand(0).getReg();
+ TII->getWholeWaveFunctionSetup(MF)->getOperand(0).getReg();
assert(ScratchExecCopy && "Couldn't find copy of EXEC");
} else {
ScratchExecCopy = findScratchNonCalleeSaveRegister(
@@ -984,7 +984,7 @@ void SIFrameLowering::emitCSRSpillStores(
// -1 here.
if (WWMCalleeSavedRegs.empty())
EnableAllLanes();
- TII->getWholeWaveFunctionSetup(MBB)->eraseFromParent();
+ TII->getWholeWaveFunctionSetup(MF)->eraseFromParent();
} else if (ScratchExecCopy) {
// FIXME: Split block and make terminator.
unsigned ExecMov = ST.isWave32() ? AMDGPU::S_MOV_B32 : AMDGPU::S_MOV_B64;
diff --git a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
index a6f2c951d8a50..3eb334171bc61 100644
--- a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
@@ -5688,8 +5688,7 @@ SITargetLowering::EmitInstrWithCustomInserter(MachineInstr &MI,
// During ISel, it's difficult to propagate the original EXEC mask to use as
// an input to SI_WHOLE_WAVE_FUNC_RETURN. Set it up here instead.
- MachineInstr *Setup =
- TII->getWholeWaveFunctionSetup(*BB->getParent()->begin());
+ MachineInstr *Setup = TII->getWholeWaveFunctionSetup(*BB->getParent());
assert(Setup && "Couldn't find SI_SETUP_WHOLE_WAVE_FUNC");
MI.getOperand(0).setReg(Setup->getOperand(0).getReg());
return BB;
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
index 4868a873703bc..b6c5ed7343f7a 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
@@ -5775,8 +5775,9 @@ void SIInstrInfo::restoreExec(MachineFunction &MF, MachineBasicBlock &MBB,
}
MachineInstr *
-SIInstrInfo::getWholeWaveFunctionSetup(MachineBasicBlock &MBB) const {
+SIInstrInfo::getWholeWaveFunctionSetup(MachineFunction &MF) const {
assert(ST.isWholeWaveFunction() && "Not a whole wave func");
+ MachineBasicBlock &MBB = *MF.begin();
for (MachineInstr &MI : MBB)
if (MI.getOpcode() == AMDGPU::SI_SETUP_WHOLE_WAVE_FUNC)
return &MI;
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.h b/llvm/lib/Target/AMDGPU/SIInstrInfo.h
index 1850b107b9fa5..14909b023a97b 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.h
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.h
@@ -1178,7 +1178,7 @@ class SIInstrInfo final : public AMDGPUGenInstrInfo {
MachineBasicBlock::iterator MBBI, const DebugLoc &DL,
Register Reg, SlotIndexes *Indexes = nullptr) const;
- MachineInstr *getWholeWaveFunctionSetup(MachineBasicBlock &MBB) const;
+ MachineInstr *getWholeWaveFunctionSetup(MachineFunction &MF) const;
/// Return the correct register class for \p OpNo. For target-specific
/// instructions, this will return the register class that has been defined
>From 8f72b59d81fc4105b671892cb4e3a2ecfae4f64a Mon Sep 17 00:00:00 2001
From: Diana Picus <diana-magda.picus at amd.com>
Date: Tue, 11 Mar 2025 12:27:47 +0100
Subject: [PATCH 04/11] Revert "Add subtarget feature"
This reverts commit c6e9211d5644061521cbce8edac7c475c83b01d6.
---
llvm/lib/Target/AMDGPU/AMDGPU.td | 6 ------
llvm/lib/Target/AMDGPU/GCNSubtarget.h | 6 ------
2 files changed, 12 deletions(-)
diff --git a/llvm/lib/Target/AMDGPU/AMDGPU.td b/llvm/lib/Target/AMDGPU/AMDGPU.td
index 30e52bc0945c6..3db93a794ea10 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPU.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPU.td
@@ -1251,12 +1251,6 @@ def FeatureXF32Insts : SubtargetFeature<"xf32-insts",
"v_mfma_f32_16x16x8_xf32 and v_mfma_f32_32x32x4_xf32"
>;
-def FeatureWholeWaveFunction : SubtargetFeature<"whole-wave-function",
- "IsWholeWaveFunction",
- "true",
- "Current function is a whole wave function (runs with all lanes enabled)"
- >;
-
// Dummy feature used to disable assembler instructions.
def FeatureDisable : SubtargetFeature<"",
"FeatureDisable","true",
diff --git a/llvm/lib/Target/AMDGPU/GCNSubtarget.h b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
index 1263f374d3db0..f7f03fe5911bd 100644
--- a/llvm/lib/Target/AMDGPU/GCNSubtarget.h
+++ b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
@@ -257,8 +257,6 @@ class GCNSubtarget final : public AMDGPUGenSubtargetInfo,
bool RequiresCOV6 = false;
- bool IsWholeWaveFunction = false;
-
// Dummy feature to use for assembler in tablegen.
bool FeatureDisable = false;
@@ -1450,10 +1448,6 @@ class GCNSubtarget final : public AMDGPUGenSubtargetInfo,
// of sign-extending.
bool hasGetPCZeroExtension() const { return GFX12Insts; }
- /// \returns true if the current function is a whole wave function (i.e. it
- /// runs with all the lanes enabled).
- bool isWholeWaveFunction() const { return IsWholeWaveFunction; }
-
/// \returns SGPR allocation granularity supported by the subtarget.
unsigned getSGPRAllocGranule() const {
return AMDGPU::IsaInfo::getSGPRAllocGranule(this);
>From accbe8ef0daacf67c3023e46992c28d63787bc50 Mon Sep 17 00:00:00 2001
From: Diana Picus <diana-magda.picus at amd.com>
Date: Wed, 19 Mar 2025 14:50:47 +0100
Subject: [PATCH 05/11] Add new CC. Do nothing
---
llvm/include/llvm/IR/CallingConv.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/llvm/include/llvm/IR/CallingConv.h b/llvm/include/llvm/IR/CallingConv.h
index 7897aabb6c1a9..030e1321ec0eb 100644
--- a/llvm/include/llvm/IR/CallingConv.h
+++ b/llvm/include/llvm/IR/CallingConv.h
@@ -284,6 +284,9 @@ namespace CallingConv {
RISCV_VLSCall_32768 = 122,
RISCV_VLSCall_65536 = 123,
+ // Calling convention for AMDGPU whole wave functions.
+ AMDGPU_Whole_Wave = 124,
+
/// The highest possible ID. Must be some 2^k - 1.
MaxID = 1023
};
>From 1a82d880f6799e58c19d6162381d9b4de56c546a Mon Sep 17 00:00:00 2001
From: Diana Picus <diana-magda.picus at amd.com>
Date: Tue, 11 Mar 2025 12:32:09 +0100
Subject: [PATCH 06/11] Replace SubtargetFeature with CallingConv
---
llvm/include/llvm/AsmParser/LLToken.h | 1 +
llvm/include/llvm/IR/CallingConv.h | 2 +-
llvm/lib/AsmParser/LLLexer.cpp | 1 +
llvm/lib/AsmParser/LLParser.cpp | 3 +
llvm/lib/IR/AsmWriter.cpp | 3 +
llvm/lib/Target/AMDGPU/AMDGPU.td | 2 -
llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp | 6 +-
llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp | 2 +
llvm/lib/Target/AMDGPU/SIFrameLowering.cpp | 8 +-
llvm/lib/Target/AMDGPU/SIISelLowering.cpp | 13 +--
llvm/lib/Target/AMDGPU/SIInstrInfo.cpp | 3 +-
llvm/lib/Target/AMDGPU/SIInstructions.td | 3 -
.../Target/AMDGPU/SIMachineFunctionInfo.cpp | 9 +-
.../lib/Target/AMDGPU/SIMachineFunctionInfo.h | 6 ++
llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp | 2 +
.../Target/AMDGPU/Utils/AMDGPUPALMetadata.cpp | 1 +
.../AMDGPU/isel-whole-wave-functions.ll | 15 ++--
.../AMDGPU/whole-wave-functions-pei.mir | 11 ++-
.../CodeGen/AMDGPU/whole-wave-functions.ll | 84 ++++---------------
19 files changed, 79 insertions(+), 96 deletions(-)
diff --git a/llvm/include/llvm/AsmParser/LLToken.h b/llvm/include/llvm/AsmParser/LLToken.h
index 81b9929b1fab8..bcc4b56dac7aa 100644
--- a/llvm/include/llvm/AsmParser/LLToken.h
+++ b/llvm/include/llvm/AsmParser/LLToken.h
@@ -181,6 +181,7 @@ enum Kind {
kw_amdgpu_cs_chain_preserve,
kw_amdgpu_kernel,
kw_amdgpu_gfx,
+ kw_amdgpu_whole_wave,
kw_tailcc,
kw_m68k_rtdcc,
kw_graalcc,
diff --git a/llvm/include/llvm/IR/CallingConv.h b/llvm/include/llvm/IR/CallingConv.h
index 030e1321ec0eb..8e37f0c81382a 100644
--- a/llvm/include/llvm/IR/CallingConv.h
+++ b/llvm/include/llvm/IR/CallingConv.h
@@ -285,7 +285,7 @@ namespace CallingConv {
RISCV_VLSCall_65536 = 123,
// Calling convention for AMDGPU whole wave functions.
- AMDGPU_Whole_Wave = 124,
+ AMDGPU_WholeWave = 124,
/// The highest possible ID. Must be some 2^k - 1.
MaxID = 1023
diff --git a/llvm/lib/AsmParser/LLLexer.cpp b/llvm/lib/AsmParser/LLLexer.cpp
index fd0a50d25e714..e99f41f1c49b5 100644
--- a/llvm/lib/AsmParser/LLLexer.cpp
+++ b/llvm/lib/AsmParser/LLLexer.cpp
@@ -679,6 +679,7 @@ lltok::Kind LLLexer::LexIdentifier() {
KEYWORD(amdgpu_cs_chain_preserve);
KEYWORD(amdgpu_kernel);
KEYWORD(amdgpu_gfx);
+ KEYWORD(amdgpu_whole_wave);
KEYWORD(tailcc);
KEYWORD(m68k_rtdcc);
KEYWORD(graalcc);
diff --git a/llvm/lib/AsmParser/LLParser.cpp b/llvm/lib/AsmParser/LLParser.cpp
index c8d792981793d..408479ea62c60 100644
--- a/llvm/lib/AsmParser/LLParser.cpp
+++ b/llvm/lib/AsmParser/LLParser.cpp
@@ -2286,6 +2286,9 @@ bool LLParser::parseOptionalCallingConv(unsigned &CC) {
CC = CallingConv::AMDGPU_CS_ChainPreserve;
break;
case lltok::kw_amdgpu_kernel: CC = CallingConv::AMDGPU_KERNEL; break;
+ case lltok::kw_amdgpu_whole_wave:
+ CC = CallingConv::AMDGPU_WholeWave;
+ break;
case lltok::kw_tailcc: CC = CallingConv::Tail; break;
case lltok::kw_m68k_rtdcc: CC = CallingConv::M68k_RTD; break;
case lltok::kw_graalcc: CC = CallingConv::GRAAL; break;
diff --git a/llvm/lib/IR/AsmWriter.cpp b/llvm/lib/IR/AsmWriter.cpp
index ae68da0182dc4..00d61e3eb91ac 100644
--- a/llvm/lib/IR/AsmWriter.cpp
+++ b/llvm/lib/IR/AsmWriter.cpp
@@ -376,6 +376,9 @@ static void PrintCallingConv(unsigned cc, raw_ostream &Out) {
break;
case CallingConv::AMDGPU_KERNEL: Out << "amdgpu_kernel"; break;
case CallingConv::AMDGPU_Gfx: Out << "amdgpu_gfx"; break;
+ case CallingConv::AMDGPU_WholeWave:
+ Out << "amdgpu_whole_wave";
+ break;
case CallingConv::M68k_RTD: Out << "m68k_rtdcc"; break;
case CallingConv::RISCV_VectorCall:
Out << "riscv_vector_cc";
diff --git a/llvm/lib/Target/AMDGPU/AMDGPU.td b/llvm/lib/Target/AMDGPU/AMDGPU.td
index 3db93a794ea10..1c8dc09d3060b 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPU.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPU.td
@@ -2532,8 +2532,6 @@ def HasXF32Insts : Predicate<"Subtarget->hasXF32Insts()">,
def HasAshrPkInsts : Predicate<"Subtarget->hasAshrPkInsts()">,
AssemblerPredicate<(all_of FeatureAshrPkInsts)>;
-def IsWholeWaveFunction : Predicate<"Subtarget->isWholeWaveFunction()">;
-
// Include AMDGPU TD files
include "SISchedule.td"
include "GCNProcessors.td"
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp b/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
index 478a4c161fce7..9baee05047f99 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
@@ -1285,7 +1285,8 @@ bool AMDGPUCallLowering::lowerTailCall(
SmallVector<std::pair<MCRegister, Register>, 12> ImplicitArgRegs;
if (Info.CallConv != CallingConv::AMDGPU_Gfx &&
- !AMDGPU::isChainCC(Info.CallConv)) {
+ !AMDGPU::isChainCC(Info.CallConv) &&
+ Info.CallConv != CallingConv::AMDGPU_WholeWave) {
// With a fixed ABI, allocate fixed registers before user arguments.
if (!passSpecialInputs(MIRBuilder, CCInfo, ImplicitArgRegs, Info))
return false;
@@ -1467,7 +1468,8 @@ bool AMDGPUCallLowering::lowerCall(MachineIRBuilder &MIRBuilder,
// after the ordinary user argument registers.
SmallVector<std::pair<MCRegister, Register>, 12> ImplicitArgRegs;
- if (Info.CallConv != CallingConv::AMDGPU_Gfx) {
+ if (Info.CallConv != CallingConv::AMDGPU_Gfx &&
+ Info.CallConv != CallingConv::AMDGPU_WholeWave) {
// With a fixed ABI, allocate fixed registers before user arguments.
if (!passSpecialInputs(MIRBuilder, CCInfo, ImplicitArgRegs, Info))
return false;
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
index 457996e9b941e..3e2f6cb6f9a17 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
@@ -1132,6 +1132,7 @@ CCAssignFn *AMDGPUCallLowering::CCAssignFnForCall(CallingConv::ID CC,
case CallingConv::Cold:
return CC_AMDGPU_Func;
case CallingConv::AMDGPU_Gfx:
+ case CallingConv::AMDGPU_WholeWave:
return CC_SI_Gfx;
case CallingConv::AMDGPU_KERNEL:
case CallingConv::SPIR_KERNEL:
@@ -1157,6 +1158,7 @@ CCAssignFn *AMDGPUCallLowering::CCAssignFnForReturn(CallingConv::ID CC,
case CallingConv::AMDGPU_LS:
return RetCC_SI_Shader;
case CallingConv::AMDGPU_Gfx:
+ case CallingConv::AMDGPU_WholeWave:
return RetCC_SI_Gfx;
case CallingConv::C:
case CallingConv::Fast:
diff --git a/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp b/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
index 7e822f75b31f5..1a20406f45e5d 100644
--- a/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
@@ -900,7 +900,7 @@ static Register buildScratchExecCopy(LiveRegUnits &LiveUnits,
initLiveUnits(LiveUnits, TRI, FuncInfo, MF, MBB, MBBI, IsProlog);
- if (ST.isWholeWaveFunction()) {
+ if (FuncInfo->isWholeWaveFunction()) {
// Whole wave functions already have a copy of the original EXEC mask that
// we can use.
assert(IsProlog && "Epilog should look at return, not setup");
@@ -977,7 +977,7 @@ void SIFrameLowering::emitCSRSpillStores(
}
StoreWWMRegisters(WWMCalleeSavedRegs);
- if (ST.isWholeWaveFunction()) {
+ if (FuncInfo->isWholeWaveFunction()) {
// SI_SETUP_WHOLE_WAVE_FUNCTION has outlived its purpose, so we can remove
// it now. If we have already saved some WWM CSR registers, then the EXEC is
// already -1 and we don't need to do anything else. Otherwise, set EXEC to
@@ -1070,7 +1070,7 @@ void SIFrameLowering::emitCSRSpillRestores(
}
};
- if (ST.isWholeWaveFunction()) {
+ if (FuncInfo->isWholeWaveFunction()) {
// For whole wave functions, the EXEC is already -1 at this point.
// Therefore, we can restore the CSR WWM registers right away.
RestoreWWMRegisters(WWMCalleeSavedRegs);
@@ -1665,7 +1665,7 @@ void SIFrameLowering::determineCalleeSaves(MachineFunction &MF,
if (MFI->isEntryFunction())
return;
- if (ST.isWholeWaveFunction()) {
+ if (MFI->isWholeWaveFunction()) {
// In practice, all the VGPRs are WWM registers, and we will need to save at
// least their inactive lanes. Add them to WWMReservedRegs.
assert(!NeedExecCopyReservedReg && "Whole wave functions can use the reg mapped for their i1 argument");
diff --git a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
index 3eb334171bc61..d6301dd818441 100644
--- a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
@@ -2877,7 +2877,7 @@ SDValue SITargetLowering::LowerFormalArguments(
!Info->hasWorkGroupIDZ());
}
- bool IsWholeWaveFunc = getSubtarget()->isWholeWaveFunction();
+ bool IsWholeWaveFunc = Info->isWholeWaveFunction();
if (CallConv == CallingConv::AMDGPU_PS) {
processPSInputArgs(Splits, CallConv, Ins, Skipped, FType, Info);
@@ -3311,9 +3311,9 @@ SITargetLowering::LowerReturn(SDValue Chain, CallingConv::ID CallConv,
unsigned Opc = AMDGPUISD::ENDPGM;
if (!IsWaveEnd)
- Opc = Subtarget->isWholeWaveFunction() ? AMDGPUISD::WHOLE_WAVE_RETURN
- : IsShader ? AMDGPUISD::RETURN_TO_EPILOG
- : AMDGPUISD::RET_GLUE;
+ Opc = Info->isWholeWaveFunction() ? AMDGPUISD::WHOLE_WAVE_RETURN
+ : IsShader ? AMDGPUISD::RETURN_TO_EPILOG
+ : AMDGPUISD::RET_GLUE;
return DAG.getNode(Opc, DL, MVT::Other, RetOps);
}
@@ -3779,7 +3779,8 @@ SDValue SITargetLowering::LowerCall(CallLoweringInfo &CLI,
CCState CCInfo(CallConv, IsVarArg, MF, ArgLocs, *DAG.getContext());
CCAssignFn *AssignFn = CCAssignFnForCall(CallConv, IsVarArg);
- if (CallConv != CallingConv::AMDGPU_Gfx && !AMDGPU::isChainCC(CallConv)) {
+ if (CallConv != CallingConv::AMDGPU_Gfx && !AMDGPU::isChainCC(CallConv) &&
+ CallConv != CallingConv::AMDGPU_WholeWave) {
// With a fixed ABI, allocate fixed registers before user arguments.
passSpecialInputs(CLI, CCInfo, *Info, RegsToPass, MemOpChains, Chain);
}
@@ -5684,7 +5685,7 @@ SITargetLowering::EmitInstrWithCustomInserter(MachineInstr &MI,
return SplitBB;
}
case AMDGPU::SI_WHOLE_WAVE_FUNC_RETURN: {
- assert(Subtarget->isWholeWaveFunction());
+ assert(MFI->isWholeWaveFunction());
// During ISel, it's difficult to propagate the original EXEC mask to use as
// an input to SI_WHOLE_WAVE_FUNC_RETURN. Set it up here instead.
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
index b6c5ed7343f7a..e62fa0d5f0c8a 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
@@ -5776,7 +5776,8 @@ void SIInstrInfo::restoreExec(MachineFunction &MF, MachineBasicBlock &MBB,
MachineInstr *
SIInstrInfo::getWholeWaveFunctionSetup(MachineFunction &MF) const {
- assert(ST.isWholeWaveFunction() && "Not a whole wave func");
+ assert(MF.getInfo<SIMachineFunctionInfo>()->isWholeWaveFunction() &&
+ "Not a whole wave func");
MachineBasicBlock &MBB = *MF.begin();
for (MachineInstr &MI : MBB)
if (MI.getOpcode() == AMDGPU::SI_SETUP_WHOLE_WAVE_FUNC)
diff --git a/llvm/lib/Target/AMDGPU/SIInstructions.td b/llvm/lib/Target/AMDGPU/SIInstructions.td
index 7ab10a6519d7a..e8c3f97e65e40 100644
--- a/llvm/lib/Target/AMDGPU/SIInstructions.td
+++ b/llvm/lib/Target/AMDGPU/SIInstructions.td
@@ -623,7 +623,6 @@ def SI_INIT_WHOLE_WAVE : SPseudoInstSI <
let isConvergent = 1;
}
-let SubtargetPredicate = IsWholeWaveFunction in {
// Sets EXEC to all lanes and returns the previous EXEC.
def SI_SETUP_WHOLE_WAVE_FUNC : SPseudoInstSI <
(outs SReg_1:$dst), (ins), [(set i1:$dst, (AMDGPUwhole_wave_setup))]> {
@@ -650,8 +649,6 @@ def SI_WHOLE_WAVE_FUNC_RETURN : SPseudoInstSI <
def : GCNPat<
(AMDGPUwhole_wave_return), (SI_WHOLE_WAVE_FUNC_RETURN (i1 (IMPLICIT_DEF)))>;
-} // SubtargetPredicate = IsWholeWaveFunction
-
// Return for returning shaders to a shader variant epilog.
def SI_RETURN_TO_EPILOG : SPseudoInstSI <
(outs), (ins variable_ops), [(AMDGPUreturn_to_epilog)]> {
diff --git a/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp b/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp
index abd19c988a7eb..6867e28cc8761 100644
--- a/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp
@@ -41,7 +41,8 @@ SIMachineFunctionInfo::SIMachineFunctionInfo(const Function &F,
WorkGroupIDZ(false), WorkGroupInfo(false), LDSKernelId(false),
PrivateSegmentWaveByteOffset(false), WorkItemIDX(false),
WorkItemIDY(false), WorkItemIDZ(false), ImplicitArgPtr(false),
- GITPtrHigh(0xffffffff), HighBitsOf32BitAddress(0) {
+ GITPtrHigh(0xffffffff), HighBitsOf32BitAddress(0),
+ IsWholeWaveFunction(F.getCallingConv() == CallingConv::AMDGPU_WholeWave) {
const GCNSubtarget &ST = *static_cast<const GCNSubtarget *>(STI);
FlatWorkGroupSizes = ST.getFlatWorkGroupSizes(F);
WavesPerEU = ST.getWavesPerEU(F);
@@ -83,7 +84,7 @@ SIMachineFunctionInfo::SIMachineFunctionInfo(const Function &F,
ImplicitArgPtr = false;
} else if (!isEntryFunction()) {
- if (CC != CallingConv::AMDGPU_Gfx)
+ if (CC != CallingConv::AMDGPU_Gfx && CC != CallingConv::AMDGPU_WholeWave)
ArgInfo = AMDGPUArgumentUsageInfo::FixedABIFunctionInfo;
FrameOffsetReg = AMDGPU::SGPR33;
@@ -713,7 +714,8 @@ yaml::SIMachineFunctionInfo::SIMachineFunctionInfo(
ArgInfo(convertArgumentInfo(MFI.getArgInfo(), TRI)),
PSInputAddr(MFI.getPSInputAddr()), PSInputEnable(MFI.getPSInputEnable()),
MaxMemoryClusterDWords(MFI.getMaxMemoryClusterDWords()),
- Mode(MFI.getMode()), HasInitWholeWave(MFI.hasInitWholeWave()) {
+ Mode(MFI.getMode()), HasInitWholeWave(MFI.hasInitWholeWave()),
+ IsWholeWaveFunction(MFI.isWholeWaveFunction()) {
for (Register Reg : MFI.getSGPRSpillPhysVGPRs())
SpillPhysVGPRS.push_back(regToString(Reg, TRI));
@@ -758,6 +760,7 @@ bool SIMachineFunctionInfo::initializeBaseYamlFields(
HasSpilledVGPRs = YamlMFI.HasSpilledVGPRs;
BytesInStackArgArea = YamlMFI.BytesInStackArgArea;
ReturnsVoid = YamlMFI.ReturnsVoid;
+ IsWholeWaveFunction = YamlMFI.IsWholeWaveFunction;
if (YamlMFI.ScavengeFI) {
auto FIOrErr = YamlMFI.ScavengeFI->getFI(MF.getFrameInfo());
diff --git a/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.h b/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.h
index 740f752bc93b7..2209753694e56 100644
--- a/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.h
+++ b/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.h
@@ -298,6 +298,7 @@ struct SIMachineFunctionInfo final : public yaml::MachineFunctionInfo {
StringValue LongBranchReservedReg;
bool HasInitWholeWave = false;
+ bool IsWholeWaveFunction = false;
SIMachineFunctionInfo() = default;
SIMachineFunctionInfo(const llvm::SIMachineFunctionInfo &,
@@ -350,6 +351,7 @@ template <> struct MappingTraits<SIMachineFunctionInfo> {
YamlIO.mapOptional("longBranchReservedReg", MFI.LongBranchReservedReg,
StringValue());
YamlIO.mapOptional("hasInitWholeWave", MFI.HasInitWholeWave, false);
+ YamlIO.mapOptional("isWholeWaveFunction", MFI.IsWholeWaveFunction, false);
}
};
@@ -543,6 +545,8 @@ class SIMachineFunctionInfo final : public AMDGPUMachineFunction,
// the serialization easier.
ReservedRegSet WWMReservedRegs;
+ bool IsWholeWaveFunction = false;
+
using PrologEpilogSGPRSpill =
std::pair<Register, PrologEpilogSGPRSaveRestoreInfo>;
// To track the SGPR spill method used for a CSR SGPR register during
@@ -626,6 +630,8 @@ class SIMachineFunctionInfo final : public AMDGPUMachineFunction,
const WWMSpillsMap &getWWMSpills() const { return WWMSpills; }
const ReservedRegSet &getWWMReservedRegs() const { return WWMReservedRegs; }
+ bool isWholeWaveFunction() const { return IsWholeWaveFunction; }
+
ArrayRef<PrologEpilogSGPRSpill> getPrologEpilogSGPRSpills() const {
assert(is_sorted(PrologEpilogSGPRSpills, llvm::less_first()));
return PrologEpilogSGPRSpills;
diff --git a/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp b/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp
index 12b451ece3b96..25ee7a8d337ec 100644
--- a/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp
@@ -407,6 +407,7 @@ const MCPhysReg *SIRegisterInfo::getCalleeSavedRegs(
return ST.hasGFX90AInsts() ? CSR_AMDGPU_GFX90AInsts_SaveList
: CSR_AMDGPU_SaveList;
case CallingConv::AMDGPU_Gfx:
+ case CallingConv::AMDGPU_WholeWave:
return ST.hasGFX90AInsts() ? CSR_AMDGPU_SI_Gfx_GFX90AInsts_SaveList
: CSR_AMDGPU_SI_Gfx_SaveList;
case CallingConv::AMDGPU_CS_ChainPreserve:
@@ -433,6 +434,7 @@ const uint32_t *SIRegisterInfo::getCallPreservedMask(const MachineFunction &MF,
return ST.hasGFX90AInsts() ? CSR_AMDGPU_GFX90AInsts_RegMask
: CSR_AMDGPU_RegMask;
case CallingConv::AMDGPU_Gfx:
+ case CallingConv::AMDGPU_WholeWave:
return ST.hasGFX90AInsts() ? CSR_AMDGPU_SI_Gfx_GFX90AInsts_RegMask
: CSR_AMDGPU_SI_Gfx_RegMask;
case CallingConv::AMDGPU_CS_Chain:
diff --git a/llvm/lib/Target/AMDGPU/Utils/AMDGPUPALMetadata.cpp b/llvm/lib/Target/AMDGPU/Utils/AMDGPUPALMetadata.cpp
index 296031e4a068f..b7e351bdb6d01 100644
--- a/llvm/lib/Target/AMDGPU/Utils/AMDGPUPALMetadata.cpp
+++ b/llvm/lib/Target/AMDGPU/Utils/AMDGPUPALMetadata.cpp
@@ -43,6 +43,7 @@ static const char *getStageName(CallingConv::ID CC) {
case CallingConv::AMDGPU_LS:
return ".ls";
case CallingConv::AMDGPU_Gfx:
+ case CallingConv::AMDGPU_WholeWave:
llvm_unreachable("Callable shader has no hardware stage");
default:
return ".cs";
diff --git a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
index 9e41b4e4dd614..f3f16f4659a5b 100644
--- a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
@@ -1,8 +1,8 @@
; NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
-; RUN: llc -global-isel=0 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+whole-wave-function -stop-after=finalize-isel -verify-machineinstrs < %s | FileCheck --check-prefix=DAGISEL %s
+; RUN: llc -global-isel=0 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -stop-after=finalize-isel -verify-machineinstrs < %s | FileCheck --check-prefix=DAGISEL %s
; TODO: llc -global-isel=1 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+whole-wave-function < %s | FileCheck --check-prefix=GISEL %s
-define amdgpu_gfx i32 @basic_test(i1 %active, i32 %a, i32 %b) {
+define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: name: basic_test
; DAGISEL: bb.0 (%ir-block.0):
; DAGISEL-NEXT: liveins: $vgpr0, $vgpr1
@@ -20,12 +20,12 @@ define amdgpu_gfx i32 @basic_test(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
%x = select i1 %active, i32 %a, i32 5
%y = select i1 %active, i32 %b, i32 3
- %ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false) #0
+ %ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false)
ret i32 %ret
}
; Make sure we don't crash if %active is not used at all.
-define amdgpu_gfx i32 @unused_active(i1 %active, i32 %a, i32 %b) {
+define amdgpu_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: name: unused_active
; DAGISEL: bb.0 (%ir-block.0):
; DAGISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32 = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
@@ -36,7 +36,7 @@ define amdgpu_gfx i32 @unused_active(i1 %active, i32 %a, i32 %b) {
ret i32 14
}
-define amdgpu_gfx i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
+define amdgpu_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: name: multiple_blocks
; DAGISEL: bb.0 (%ir-block.0):
; DAGISEL-NEXT: successors: %bb.1(0x40000000), %bb.2(0x40000000)
@@ -76,7 +76,7 @@ if.end:
ret i32 %e
}
-define amdgpu_gfx i64 @ret_64(i1 %active, i64 %a, i64 %b) {
+define amdgpu_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
; DAGISEL-LABEL: name: ret_64
; DAGISEL: bb.0 (%ir-block.0):
; DAGISEL-NEXT: liveins: $vgpr0, $vgpr1, $vgpr2, $vgpr3
@@ -111,6 +111,7 @@ define amdgpu_gfx i64 @ret_64(i1 %active, i64 %a, i64 %b) {
; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0, implicit $vgpr1
%x = select i1 %active, i64 %a, i64 5
%y = select i1 %active, i64 %b, i64 3
- %ret = call i64 @llvm.amdgcn.update.dpp.i64(i64 %x, i64 %y, i32 1, i32 1, i32 1, i1 false) #0
+ %ret = call i64 @llvm.amdgcn.update.dpp.i64(i64 %x, i64 %y, i32 1, i32 1, i32 1, i1 false)
ret i64 %ret
}
+
diff --git a/llvm/test/CodeGen/AMDGPU/whole-wave-functions-pei.mir b/llvm/test/CodeGen/AMDGPU/whole-wave-functions-pei.mir
index d62e90441284c..a5a35c40b719c 100644
--- a/llvm/test/CodeGen/AMDGPU/whole-wave-functions-pei.mir
+++ b/llvm/test/CodeGen/AMDGPU/whole-wave-functions-pei.mir
@@ -1,5 +1,5 @@
# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
-# RUN: llc -mtriple=amdgcn -mcpu=gfx1200 -mattr=+whole-wave-function -run-pass=prologepilog -o - %s | FileCheck %s
+# RUN: llc -mtriple=amdgcn -mcpu=gfx1200 -run-pass=prologepilog -o - %s | FileCheck %s
---
name: save_inactive_lanes_non_csr_vgpr
@@ -20,6 +20,7 @@ machineFunctionInfo:
returnsVoid: false
occupancy: 16
sgprForEXECCopy: '$sgpr105'
+ isWholeWaveFunction: true
body: |
bb.0:
; CHECK-LABEL: name: save_inactive_lanes_non_csr_vgpr
@@ -57,6 +58,7 @@ machineFunctionInfo:
returnsVoid: false
occupancy: 16
sgprForEXECCopy: '$sgpr105'
+ isWholeWaveFunction: true
body: |
bb.0:
; CHECK-LABEL: name: save_all_lanes_csr_vgpr
@@ -92,6 +94,7 @@ machineFunctionInfo:
returnsVoid: false
occupancy: 16
sgprForEXECCopy: '$sgpr105'
+ isWholeWaveFunction: true
body: |
bb.0:
liveins: $sgpr20, $vgpr191
@@ -134,6 +137,7 @@ machineFunctionInfo:
returnsVoid: false
occupancy: 16
sgprForEXECCopy: '$sgpr105'
+ isWholeWaveFunction: true
body: |
bb.0:
liveins: $sgpr20, $vgpr191
@@ -181,6 +185,7 @@ machineFunctionInfo:
- '$vgpr191'
wwmReservedRegs:
- '$vgpr191'
+ isWholeWaveFunction: true
body: |
bb.0:
liveins: $sgpr20, $vgpr0, $vgpr1, $vgpr191
@@ -237,6 +242,7 @@ machineFunctionInfo:
- '$vgpr191'
wwmReservedRegs:
- '$vgpr191'
+ isWholeWaveFunction: true
body: |
bb.0:
liveins: $sgpr20, $vgpr0, $vgpr1, $vgpr191
@@ -288,6 +294,7 @@ machineFunctionInfo:
returnsVoid: false
occupancy: 16
sgprForEXECCopy: '$sgpr105'
+ isWholeWaveFunction: true
body: |
bb.0:
; CHECK-LABEL: name: vgpr_superregs
@@ -345,6 +352,7 @@ machineFunctionInfo:
returnsVoid: false
occupancy: 16
sgprForEXECCopy: '$sgpr105'
+ isWholeWaveFunction: true
body: |
bb.0:
liveins: $vgpr0, $vgpr20, $vgpr40
@@ -383,6 +391,7 @@ machineFunctionInfo:
returnsVoid: false
occupancy: 16
sgprForEXECCopy: '$sgpr105'
+ isWholeWaveFunction: true
body: |
; CHECK-LABEL: name: multiple_blocks
; CHECK: bb.0:
diff --git a/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
index 9a951e95f3983..c6890414ed5bc 100644
--- a/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
@@ -1,12 +1,12 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
-; RUN: llc -global-isel=0 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+whole-wave-function -verify-machineinstrs < %s | FileCheck --check-prefix=DAGISEL %s
+; RUN: llc -global-isel=0 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -verify-machineinstrs < %s | FileCheck --check-prefix=DAGISEL %s
; TODO: llc -global-isel=1 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+whole-wave-function < %s | FileCheck --check-prefix=GISEL %s
; Make sure the i1 %active is passed through EXEC.
; The EXEC mask should be set to -1 for the duration of the function
; and restored to its original value in the epilogue.
; We will also need to restore the inactive lanes for any allocated VGPRs.
-define i32 @basic_test(i1 %active, i32 %a, i32 %b) {
+define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: basic_test:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
@@ -32,12 +32,12 @@ define i32 @basic_test(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: s_setpc_b64 s[30:31]
%x = select i1 %active, i32 %a, i32 5
%y = select i1 %active, i32 %b, i32 3
- %ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false) #0
+ %ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false)
ret i32 %ret
}
; Make sure we don't crash if %active is not used at all.
-define i32 @unused_active(i1 %active, i32 %a, i32 %b) {
+define amdgpu_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: unused_active:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
@@ -60,8 +60,8 @@ define i32 @unused_active(i1 %active, i32 %a, i32 %b) {
; For any used VGPRs (including those used for SGPR spills), we need to restore the inactive lanes.
; For CSR VGPRs, we need to restore all lanes.
-define i32 @csr_default_cc(i1 %active, i32 %a, i32 %b) {
-; DAGISEL-LABEL: csr_default_cc:
+define amdgpu_whole_wave i32 @csr(i1 %active, i32 %a, i32 %b) {
+; DAGISEL-LABEL: csr:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; DAGISEL-NEXT: s_wait_expcnt 0x0
@@ -76,17 +76,17 @@ define i32 @csr_default_cc(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: scratch_store_b32 off, v49, s32 offset:16
; DAGISEL-NEXT: s_mov_b32 exec_lo, -1
; DAGISEL-NEXT: scratch_store_b32 off, v40, s32 offset:12 ; 4-byte Folded Spill
-; DAGISEL-NEXT: v_writelane_b32 v2, s48, 0
; DAGISEL-NEXT: ;;#ASMSTART
; DAGISEL-NEXT: ; clobber CSR
; DAGISEL-NEXT: ;;#ASMEND
+; DAGISEL-NEXT: v_writelane_b32 v2, s20, 0
; DAGISEL-NEXT: ;;#ASMSTART
; DAGISEL-NEXT: ; clobber non-CSR
; DAGISEL-NEXT: ;;#ASMEND
; DAGISEL-NEXT: scratch_load_b32 v40, off, s32 offset:12 ; 4-byte Folded Reload
; DAGISEL-NEXT: s_wait_alu 0xfffe
; DAGISEL-NEXT: v_dual_cndmask_b32 v0, 5, v0 :: v_dual_cndmask_b32 v1, 3, v1
-; DAGISEL-NEXT: v_readlane_b32 s48, v2, 0
+; DAGISEL-NEXT: v_readlane_b32 s20, v2, 0
; DAGISEL-NEXT: s_delay_alu instid0(VALU_DEP_2)
; DAGISEL-NEXT: v_mov_b32_dpp v0, v1 quad_perm:[1,0,0,0] row_mask:0x1 bank_mask:0x1
; DAGISEL-NEXT: s_xor_b32 exec_lo, vcc_lo, -1
@@ -102,61 +102,13 @@ define i32 @csr_default_cc(i1 %active, i32 %a, i32 %b) {
%x = select i1 %active, i32 %a, i32 5
%y = select i1 %active, i32 %b, i32 3
call void asm sideeffect "; clobber CSR", "~{v40},~{s48}"()
- call void asm sideeffect "; clobber non-CSR", "~{v49},~{s40}"()
- %ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false) #0
- ret i32 %ret
-}
-
-; Same as above, but with the amdgpu_gfx calling convention.
-define amdgpu_gfx i32 @csr_amdgpu_gfx(i1 %active, i32 %a, i32 %b) {
-; DAGISEL-LABEL: csr_amdgpu_gfx:
-; DAGISEL: ; %bb.0:
-; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
-; DAGISEL-NEXT: s_wait_expcnt 0x0
-; DAGISEL-NEXT: s_wait_samplecnt 0x0
-; DAGISEL-NEXT: s_wait_bvhcnt 0x0
-; DAGISEL-NEXT: s_wait_kmcnt 0x0
-; DAGISEL-NEXT: s_xor_saveexec_b32 vcc_lo, -1
-; DAGISEL-NEXT: s_clause 0x3
-; DAGISEL-NEXT: scratch_store_b32 off, v2, s32
-; DAGISEL-NEXT: scratch_store_b32 off, v0, s32 offset:4
-; DAGISEL-NEXT: scratch_store_b32 off, v1, s32 offset:8
-; DAGISEL-NEXT: scratch_store_b32 off, v49, s32 offset:16
-; DAGISEL-NEXT: s_mov_b32 exec_lo, -1
-; DAGISEL-NEXT: scratch_store_b32 off, v40, s32 offset:12 ; 4-byte Folded Spill
-; DAGISEL-NEXT: v_writelane_b32 v2, s28, 0
-; DAGISEL-NEXT: ;;#ASMSTART
-; DAGISEL-NEXT: ; clobber CSR
-; DAGISEL-NEXT: ;;#ASMEND
-; DAGISEL-NEXT: ;;#ASMSTART
-; DAGISEL-NEXT: ; clobber non-CSR
-; DAGISEL-NEXT: ;;#ASMEND
-; DAGISEL-NEXT: scratch_load_b32 v40, off, s32 offset:12 ; 4-byte Folded Reload
-; DAGISEL-NEXT: s_wait_alu 0xfffe
-; DAGISEL-NEXT: v_dual_cndmask_b32 v0, 5, v0 :: v_dual_cndmask_b32 v1, 3, v1
-; DAGISEL-NEXT: v_readlane_b32 s28, v2, 0
-; DAGISEL-NEXT: s_delay_alu instid0(VALU_DEP_2)
-; DAGISEL-NEXT: v_mov_b32_dpp v0, v1 quad_perm:[1,0,0,0] row_mask:0x1 bank_mask:0x1
-; DAGISEL-NEXT: s_xor_b32 exec_lo, vcc_lo, -1
-; DAGISEL-NEXT: s_clause 0x3
-; DAGISEL-NEXT: scratch_load_b32 v2, off, s32
-; DAGISEL-NEXT: scratch_load_b32 v0, off, s32 offset:4
-; DAGISEL-NEXT: scratch_load_b32 v1, off, s32 offset:8
-; DAGISEL-NEXT: scratch_load_b32 v49, off, s32 offset:16
-; DAGISEL-NEXT: s_mov_b32 exec_lo, vcc_lo
-; DAGISEL-NEXT: s_wait_loadcnt 0x0
-; DAGISEL-NEXT: s_wait_alu 0xf1ff
-; DAGISEL-NEXT: s_setpc_b64 s[30:31]
- %x = select i1 %active, i32 %a, i32 5
- %y = select i1 %active, i32 %b, i32 3
- call void asm sideeffect "; clobber CSR", "~{v40},~{s28}"()
- call void asm sideeffect "; clobber non-CSR", "~{v49},~{s40}"()
- %ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false) #0
+ call void asm sideeffect "; clobber non-CSR", "~{v49},~{s20}"()
+ %ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false)
ret i32 %ret
}
; Save and restore all lanes of v40.
-define void @csr_vgpr_only(i1 %active, i32 %a, i32 %b) {
+define amdgpu_whole_wave void @csr_vgpr_only(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: csr_vgpr_only:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
@@ -178,7 +130,7 @@ define void @csr_vgpr_only(i1 %active, i32 %a, i32 %b) {
ret void
}
-define void @sgpr_spill_only(i1 %active, i32 %a, i32 %b) {
+define amdgpu_whole_wave void @sgpr_spill_only(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: sgpr_spill_only:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
@@ -189,23 +141,23 @@ define void @sgpr_spill_only(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: s_xor_saveexec_b32 s0, -1
; DAGISEL-NEXT: scratch_store_b32 off, v0, s32 ; 4-byte Folded Spill
; DAGISEL-NEXT: s_mov_b32 exec_lo, -1
-; DAGISEL-NEXT: v_writelane_b32 v0, s48, 0
+; DAGISEL-NEXT: v_writelane_b32 v0, s68, 0
; DAGISEL-NEXT: ;;#ASMSTART
; DAGISEL-NEXT: ; clobber CSR SGPR
; DAGISEL-NEXT: ;;#ASMEND
; DAGISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; DAGISEL-NEXT: v_readlane_b32 s48, v0, 0
+; DAGISEL-NEXT: v_readlane_b32 s68, v0, 0
; DAGISEL-NEXT: s_wait_alu 0xfffe
; DAGISEL-NEXT: s_xor_b32 exec_lo, s0, -1
; DAGISEL-NEXT: scratch_load_b32 v0, off, s32 ; 4-byte Folded Reload
; DAGISEL-NEXT: s_mov_b32 exec_lo, s0
; DAGISEL-NEXT: s_wait_loadcnt 0x0
; DAGISEL-NEXT: s_setpc_b64 s[30:31]
- call void asm sideeffect "; clobber CSR SGPR", "~{s48}"()
+ call void asm sideeffect "; clobber CSR SGPR", "~{s68}"()
ret void
}
-define i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
+define amdgpu_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: multiple_blocks:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
@@ -248,7 +200,7 @@ if.end:
ret i32 %e
}
-define i64 @ret_64(i1 %active, i64 %a, i64 %b) {
+define amdgpu_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
; DAGISEL-LABEL: ret_64:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
@@ -280,6 +232,6 @@ define i64 @ret_64(i1 %active, i64 %a, i64 %b) {
; DAGISEL-NEXT: s_setpc_b64 s[30:31]
%x = select i1 %active, i64 %a, i64 5
%y = select i1 %active, i64 %b, i64 3
- %ret = call i64 @llvm.amdgcn.update.dpp.i64(i64 %x, i64 %y, i32 1, i32 1, i32 1, i1 false) #0
+ %ret = call i64 @llvm.amdgcn.update.dpp.i64(i64 %x, i64 %y, i32 1, i32 1, i32 1, i1 false)
ret i64 %ret
}
From ea3821b51dfc4c7abadd5e384be78d1de9b9c2b8 Mon Sep 17 00:00:00 2001
From: Diana Picus <diana-magda.picus at amd.com>
Date: Mon, 17 Mar 2025 14:00:49 +0100
Subject: [PATCH 07/11] Enable gisel in tests
---
llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll | 2 +-
llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
index f3f16f4659a5b..23e97dd2e2fdf 100644
--- a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
@@ -1,6 +1,6 @@
; NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
; RUN: llc -global-isel=0 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -stop-after=finalize-isel -verify-machineinstrs < %s | FileCheck --check-prefix=DAGISEL %s
-; TODO: llc -global-isel=1 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+whole-wave-function < %s | FileCheck --check-prefix=GISEL %s
+; RUN: llc -global-isel=1 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -stop-after=finalize-isel -verify-machineinstrs < %s | FileCheck --check-prefix=GISEL %s
define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: name: basic_test
diff --git a/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
index c6890414ed5bc..6663fe89f0bc7 100644
--- a/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
@@ -1,6 +1,6 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
; RUN: llc -global-isel=0 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -verify-machineinstrs < %s | FileCheck --check-prefix=DAGISEL %s
-; TODO: llc -global-isel=1 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+whole-wave-function < %s | FileCheck --check-prefix=GISEL %s
+; RUN: llc -global-isel=1 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -verify-machineinstrs < %s | FileCheck --check-prefix=GISEL %s
; Make sure the i1 %active is passed through EXEC.
; The EXEC mask should be set to -1 for the duration of the function
From 1b20edd7dfebadf27b611aa289bc4d328c1f75b8 Mon Sep 17 00:00:00 2001
From: Diana Picus <diana-magda.picus at amd.com>
Date: Tue, 11 Mar 2025 12:26:55 +0100
Subject: [PATCH 08/11] GISel support
---
llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp | 24 ++-
llvm/lib/Target/AMDGPU/AMDGPUGISel.td | 4 +
.../AMDGPU/AMDGPUInstructionSelector.cpp | 4 +
.../Target/AMDGPU/AMDGPURegisterBankInfo.cpp | 4 +
llvm/lib/Target/AMDGPU/SIInstrInfo.cpp | 5 +-
llvm/lib/Target/AMDGPU/SIInstructions.td | 14 ++
.../regbankselect-whole-wave-functions.mir | 40 ++++
.../irtranslator-whole-wave-functions.ll | 103 ++++++++++
.../AMDGPU/isel-whole-wave-functions.ll | 73 +++++++
.../CodeGen/AMDGPU/whole-wave-functions.ll | 182 ++++++++++++++++++
10 files changed, 449 insertions(+), 4 deletions(-)
create mode 100644 llvm/test/CodeGen/AMDGPU/GlobalISel/regbankselect-whole-wave-functions.mir
create mode 100644 llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp b/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
index 9baee05047f99..76d7f33d122d7 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
@@ -374,8 +374,10 @@ bool AMDGPUCallLowering::lowerReturn(MachineIRBuilder &B, const Value *Val,
return true;
}
- unsigned ReturnOpc =
- IsShader ? AMDGPU::SI_RETURN_TO_EPILOG : AMDGPU::SI_RETURN;
+ const bool IsWholeWave = MFI->isWholeWaveFunction();
+ unsigned ReturnOpc = IsWholeWave ? AMDGPU::G_AMDGPU_WHOLE_WAVE_FUNC_RETURN
+ : IsShader ? AMDGPU::SI_RETURN_TO_EPILOG
+ : AMDGPU::SI_RETURN;
auto Ret = B.buildInstrNoInsert(ReturnOpc);
if (!FLI.CanLowerReturn)
@@ -383,6 +385,13 @@ bool AMDGPUCallLowering::lowerReturn(MachineIRBuilder &B, const Value *Val,
else if (!lowerReturnVal(B, Val, VRegs, Ret))
return false;
+ if (IsWholeWave) {
+ const GCNSubtarget &ST = MF.getSubtarget<GCNSubtarget>();
+ const SIInstrInfo *TII = ST.getInstrInfo();
+ const MachineInstr *Setup = TII->getWholeWaveFunctionSetup(MF);
+ Ret.addReg(Setup->getOperand(0).getReg());
+ }
+
// TODO: Handle CalleeSavedRegsViaCopy.
B.insertInstr(Ret);
@@ -626,6 +635,17 @@ bool AMDGPUCallLowering::lowerFormalArguments(
if (DL.getTypeStoreSize(Arg.getType()) == 0)
continue;
+ if (Info->isWholeWaveFunction() && Idx == 0) {
+ assert(VRegs[Idx].size() == 1 && "Expected only one register");
+
+ // The first argument for whole wave functions is the original EXEC value.
+ B.buildInstr(AMDGPU::G_AMDGPU_WHOLE_WAVE_FUNC_SETUP)
+ .addDef(VRegs[Idx][0]);
+
+ ++Idx;
+ continue;
+ }
+
const bool InReg = Arg.hasAttribute(Attribute::InReg);
if (Arg.hasAttribute(Attribute::SwiftSelf) ||
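The return lowering in the hunk above boils down to a three-way opcode choice in which the whole-wave property takes priority over the shader/non-shader split. A minimal sketch of that selection logic (plain Python, opcode names as strings):

```python
def pick_return_opcode(is_whole_wave: bool, is_shader: bool) -> str:
    """Mirrors the opcode choice in AMDGPUCallLowering::lowerReturn:
    whole wave functions are checked first, then shaders, then the
    ordinary function return."""
    if is_whole_wave:
        return "G_AMDGPU_WHOLE_WAVE_FUNC_RETURN"
    return "SI_RETURN_TO_EPILOG" if is_shader else "SI_RETURN"

assert pick_return_opcode(True, True) == "G_AMDGPU_WHOLE_WAVE_FUNC_RETURN"
assert pick_return_opcode(False, True) == "SI_RETURN_TO_EPILOG"
assert pick_return_opcode(False, False) == "SI_RETURN"
```

The same priority shows up on the argument side: for a whole wave function, argument 0 (`%active`) is never read from a register, but materialized by `G_AMDGPU_WHOLE_WAVE_FUNC_SETUP` from the caller's EXEC, and that setup's result is later fed back into the return instruction.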
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUGISel.td b/llvm/lib/Target/AMDGPU/AMDGPUGISel.td
index 1b909568fc555..c5063c4de4ad3 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUGISel.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPUGISel.td
@@ -300,6 +300,10 @@ def : GINodeEquiv<G_AMDGPU_S_BUFFER_LOAD_SSHORT, SIsbuffer_load_short>;
def : GINodeEquiv<G_AMDGPU_S_BUFFER_LOAD_USHORT, SIsbuffer_load_ushort>;
def : GINodeEquiv<G_AMDGPU_S_BUFFER_PREFETCH, SIsbuffer_prefetch>;
+def : GINodeEquiv<G_AMDGPU_WHOLE_WAVE_FUNC_SETUP, AMDGPUwhole_wave_setup>;
+// G_AMDGPU_WHOLE_WAVE_FUNC_RETURN is simpler than AMDGPUwhole_wave_return,
+// so we don't mark it as equivalent.
+
class GISelSop2Pat <
SDPatternOperator node,
Instruction inst,
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUInstructionSelector.cpp b/llvm/lib/Target/AMDGPU/AMDGPUInstructionSelector.cpp
index 2ee82381c4ef0..1727fd0329432 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUInstructionSelector.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUInstructionSelector.cpp
@@ -4107,6 +4107,10 @@ bool AMDGPUInstructionSelector::select(MachineInstr &I) {
return true;
case AMDGPU::G_AMDGPU_WAVE_ADDRESS:
return selectWaveAddress(I);
+ case AMDGPU::G_AMDGPU_WHOLE_WAVE_FUNC_RETURN: {
+ I.setDesc(TII.get(AMDGPU::SI_WHOLE_WAVE_FUNC_RETURN));
+ return true;
+ }
case AMDGPU::G_STACKRESTORE:
return selectStackRestore(I);
case AMDGPU::G_PHI:
diff --git a/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp b/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp
index c19ee14ab1574..78622e4b5dd13 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp
@@ -5400,6 +5400,10 @@ AMDGPURegisterBankInfo::getInstrMapping(const MachineInstr &MI) const {
case AMDGPU::G_PREFETCH:
OpdsMapping[0] = getSGPROpMapping(MI.getOperand(0).getReg(), MRI, *TRI);
break;
+ case AMDGPU::G_AMDGPU_WHOLE_WAVE_FUNC_SETUP:
+ case AMDGPU::G_AMDGPU_WHOLE_WAVE_FUNC_RETURN:
+ OpdsMapping[0] = AMDGPU::getValueMapping(AMDGPU::VCCRegBankID, 1);
+ break;
}
return getInstructionMapping(/*ID*/1, /*Cost*/1,
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
index e62fa0d5f0c8a..d31649a1a810c 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
@@ -5780,10 +5780,11 @@ SIInstrInfo::getWholeWaveFunctionSetup(MachineFunction &MF) const {
"Not a whole wave func");
MachineBasicBlock &MBB = *MF.begin();
for (MachineInstr &MI : MBB)
- if (MI.getOpcode() == AMDGPU::SI_SETUP_WHOLE_WAVE_FUNC)
+ if (MI.getOpcode() == AMDGPU::SI_SETUP_WHOLE_WAVE_FUNC ||
+ MI.getOpcode() == AMDGPU::G_AMDGPU_WHOLE_WAVE_FUNC_SETUP)
return &MI;
- llvm_unreachable("Couldn't find instruction. Wrong MBB?");
+ llvm_unreachable("Couldn't find whole wave function setup instruction");
}
static const TargetRegisterClass *
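The `getWholeWaveFunctionSetup` change above just widens a linear scan of the entry block to accept either the post-isel pseudo or the pre-isel generic opcode. A sketch of that scan (plain Python, instructions modeled as opcode strings):

```python
def get_whole_wave_function_setup(entry_block):
    """Sketch of SIInstrInfo::getWholeWaveFunctionSetup: the setup
    instruction must live in the entry block, either as the selected
    SI_SETUP_WHOLE_WAVE_FUNC or, before instruction selection, as the
    generic G_AMDGPU_WHOLE_WAVE_FUNC_SETUP."""
    for mi in entry_block:
        if mi in ("SI_SETUP_WHOLE_WAVE_FUNC",
                  "G_AMDGPU_WHOLE_WAVE_FUNC_SETUP"):
            return mi
    raise AssertionError("Couldn't find whole wave function setup instruction")

assert get_whole_wave_function_setup(
    ["COPY", "G_AMDGPU_WHOLE_WAVE_FUNC_SETUP"]) == "G_AMDGPU_WHOLE_WAVE_FUNC_SETUP"
```

Accepting both opcodes matters because GlobalISel's `lowerReturn` calls this helper while the function still contains generic instructions, whereas the DAG path only ever sees the selected pseudo.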
diff --git a/llvm/lib/Target/AMDGPU/SIInstructions.td b/llvm/lib/Target/AMDGPU/SIInstructions.td
index e8c3f97e65e40..51f52875c30d3 100644
--- a/llvm/lib/Target/AMDGPU/SIInstructions.td
+++ b/llvm/lib/Target/AMDGPU/SIInstructions.td
@@ -4340,6 +4340,20 @@ def G_AMDGPU_S_MUL_I64_I32 : AMDGPUGenericInstruction {
let hasSideEffects = 0;
}
+def G_AMDGPU_WHOLE_WAVE_FUNC_SETUP : AMDGPUGenericInstruction {
+ let OutOperandList = (outs type0:$origExec);
+ let InOperandList = (ins);
+ let isConvergent = 1;
+}
+
+def G_AMDGPU_WHOLE_WAVE_FUNC_RETURN : AMDGPUGenericInstruction {
+ let OutOperandList = (outs);
+ let InOperandList = (ins type0:$origExec);
+ let isTerminator = 1;
+ let isBarrier = 1;
+ let isReturn = 1;
+}
+
// This is equivalent to the G_INTRINSIC*, but the operands may have
// been legalized depending on the subtarget requirements.
def G_AMDGPU_INTRIN_IMAGE_LOAD : AMDGPUGenericInstruction {
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankselect-whole-wave-functions.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankselect-whole-wave-functions.mir
new file mode 100644
index 0000000000000..beca901945753
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/regbankselect-whole-wave-functions.mir
@@ -0,0 +1,40 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
+# RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1200 -run-pass=regbankselect %s -verify-machineinstrs -o - -regbankselect-fast | FileCheck %s
+# RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1200 -run-pass=regbankselect %s -verify-machineinstrs -o - -regbankselect-greedy | FileCheck %s
+# RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+wavefrontsize64 -run-pass=regbankselect %s -verify-machineinstrs -o - -regbankselect-greedy | FileCheck %s
+---
+name: basic_test
+legalized: true
+machineFunctionInfo:
+ isWholeWaveFunction: true
+body: |
+ bb.1:
+ liveins: $vgpr0, $vgpr1
+
+ ; CHECK-LABEL: name: basic_test
+ ; CHECK: liveins: $vgpr0, $vgpr1
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr(s32) = COPY $vgpr0
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vgpr(s32) = COPY $vgpr1
+ ; CHECK-NEXT: [[AMDGPU_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:vcc(s1) = G_AMDGPU_WHOLE_WAVE_FUNC_SETUP
+ ; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s32) = G_CONSTANT i32 5
+ ; CHECK-NEXT: [[COPY2:%[0-9]+]]:vgpr(s32) = COPY [[C]](s32)
+ ; CHECK-NEXT: [[SELECT:%[0-9]+]]:vgpr(s32) = G_SELECT [[AMDGPU_WHOLE_WAVE_FUNC_SETUP]](s1), [[COPY]], [[COPY2]]
+ ; CHECK-NEXT: [[C1:%[0-9]+]]:sgpr(s32) = G_CONSTANT i32 3
+ ; CHECK-NEXT: [[COPY3:%[0-9]+]]:vgpr(s32) = COPY [[C1]](s32)
+ ; CHECK-NEXT: [[SELECT1:%[0-9]+]]:vgpr(s32) = G_SELECT [[AMDGPU_WHOLE_WAVE_FUNC_SETUP]](s1), [[COPY1]], [[COPY3]]
+ ; CHECK-NEXT: [[INTRINSIC_CONVERGENT:%[0-9]+]]:vgpr(s32) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.update.dpp), [[SELECT]](s32), [[SELECT1]](s32), 1, 1, 1, 0
+ ; CHECK-NEXT: $vgpr0 = COPY [[INTRINSIC_CONVERGENT]](s32)
+ ; CHECK-NEXT: G_AMDGPU_WHOLE_WAVE_FUNC_RETURN [[AMDGPU_WHOLE_WAVE_FUNC_SETUP]](s1), implicit $vgpr0
+ %1:_(s32) = COPY $vgpr0
+ %2:_(s32) = COPY $vgpr1
+ %0:_(s1) = G_AMDGPU_WHOLE_WAVE_FUNC_SETUP
+ %12:_(s32) = G_CONSTANT i32 5
+ %11:_(s32) = G_SELECT %0(s1), %1, %12
+ %14:_(s32) = G_CONSTANT i32 3
+ %13:_(s32) = G_SELECT %0(s1), %2, %14
+ %15:_(s32) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.update.dpp), %11(s32), %13(s32), 1, 1, 1, 0
+ $vgpr0 = COPY %15(s32)
+ G_AMDGPU_WHOLE_WAVE_FUNC_RETURN %0(s1), implicit $vgpr0
+
+...
diff --git a/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll
new file mode 100644
index 0000000000000..f18d8128a91ff
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll
@@ -0,0 +1,103 @@
+; NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -global-isel=1 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -stop-after=irtranslator -verify-machineinstrs < %s | FileCheck %s
+
+define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
+ ; CHECK-LABEL: name: basic_test
+ ; CHECK: bb.1 (%ir-block.0):
+ ; CHECK-NEXT: liveins: $vgpr0, $vgpr1
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $vgpr1
+ ; CHECK-NEXT: [[AMDGPU_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:_(s1) = G_AMDGPU_WHOLE_WAVE_FUNC_SETUP
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 5
+ ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 3
+ ; CHECK-NEXT: [[SELECT:%[0-9]+]]:_(s32) = G_SELECT [[AMDGPU_WHOLE_WAVE_FUNC_SETUP]](s1), [[COPY]], [[C]]
+ ; CHECK-NEXT: [[SELECT1:%[0-9]+]]:_(s32) = G_SELECT [[AMDGPU_WHOLE_WAVE_FUNC_SETUP]](s1), [[COPY1]], [[C1]]
+ ; CHECK-NEXT: [[INTRINSIC_CONVERGENT:%[0-9]+]]:_(s32) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.update.dpp), [[SELECT]](s32), [[SELECT1]](s32), 1, 1, 1, 0
+ ; CHECK-NEXT: $vgpr0 = COPY [[INTRINSIC_CONVERGENT]](s32)
+ ; CHECK-NEXT: G_AMDGPU_WHOLE_WAVE_FUNC_RETURN [[AMDGPU_WHOLE_WAVE_FUNC_SETUP]](s1), implicit $vgpr0
+ %x = select i1 %active, i32 %a, i32 5
+ %y = select i1 %active, i32 %b, i32 3
+ %ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false)
+ ret i32 %ret
+}
+
+; Make sure we don't crash if %active is not used at all.
+define amdgpu_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
+ ; CHECK-LABEL: name: unused_active
+ ; CHECK: bb.1 (%ir-block.0):
+ ; CHECK-NEXT: liveins: $vgpr0, $vgpr1
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $vgpr1
+ ; CHECK-NEXT: [[AMDGPU_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:_(s1) = G_AMDGPU_WHOLE_WAVE_FUNC_SETUP
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 14
+ ; CHECK-NEXT: $vgpr0 = COPY [[C]](s32)
+ ; CHECK-NEXT: G_AMDGPU_WHOLE_WAVE_FUNC_RETURN [[AMDGPU_WHOLE_WAVE_FUNC_SETUP]](s1), implicit $vgpr0
+ ret i32 14
+}
+
+define amdgpu_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
+ ; CHECK-LABEL: name: multiple_blocks
+ ; CHECK: bb.1 (%ir-block.0):
+ ; CHECK-NEXT: successors: %bb.2(0x40000000), %bb.3(0x40000000)
+ ; CHECK-NEXT: liveins: $vgpr0, $vgpr1
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $vgpr1
+ ; CHECK-NEXT: [[AMDGPU_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:_(s1) = G_AMDGPU_WHOLE_WAVE_FUNC_SETUP
+ ; CHECK-NEXT: [[ICMP:%[0-9]+]]:_(s1) = G_ICMP intpred(eq), [[COPY]](s32), [[COPY1]]
+ ; CHECK-NEXT: [[INT:%[0-9]+]]:_(s1), [[INT1:%[0-9]+]]:_(s32) = G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.if), [[ICMP]](s1)
+ ; CHECK-NEXT: G_BRCOND [[INT]](s1), %bb.2
+ ; CHECK-NEXT: G_BR %bb.3
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.2.if.then:
+ ; CHECK-NEXT: successors: %bb.3(0x80000000)
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[ADD:%[0-9]+]]:_(s32) = G_ADD [[COPY]], [[COPY1]]
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: bb.3.if.end:
+ ; CHECK-NEXT: [[PHI:%[0-9]+]]:_(s32) = G_PHI [[COPY1]](s32), %bb.1, [[ADD]](s32), %bb.2
+ ; CHECK-NEXT: G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), [[INT1]](s32)
+ ; CHECK-NEXT: [[SELECT:%[0-9]+]]:_(s32) = G_SELECT [[AMDGPU_WHOLE_WAVE_FUNC_SETUP]](s1), [[COPY]], [[PHI]]
+ ; CHECK-NEXT: $vgpr0 = COPY [[SELECT]](s32)
+ ; CHECK-NEXT: G_AMDGPU_WHOLE_WAVE_FUNC_RETURN [[AMDGPU_WHOLE_WAVE_FUNC_SETUP]](s1), implicit $vgpr0
+ %c = icmp eq i32 %a, %b
+ br i1 %c, label %if.then, label %if.end
+
+if.then: ; preds = %0
+ %d = add i32 %a, %b
+ br label %if.end
+
+if.end:
+ %f = phi i32 [ %d, %if.then ], [ %b, %0 ]
+ %e = select i1 %active, i32 %a, i32 %f
+ ret i32 %e
+}
+
+define amdgpu_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
+ ; CHECK-LABEL: name: ret_64
+ ; CHECK: bb.1 (%ir-block.0):
+ ; CHECK-NEXT: liveins: $vgpr0, $vgpr1, $vgpr2, $vgpr3
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $vgpr1
+ ; CHECK-NEXT: [[MV:%[0-9]+]]:_(s64) = G_MERGE_VALUES [[COPY]](s32), [[COPY1]](s32)
+ ; CHECK-NEXT: [[COPY2:%[0-9]+]]:_(s32) = COPY $vgpr2
+ ; CHECK-NEXT: [[COPY3:%[0-9]+]]:_(s32) = COPY $vgpr3
+ ; CHECK-NEXT: [[MV1:%[0-9]+]]:_(s64) = G_MERGE_VALUES [[COPY2]](s32), [[COPY3]](s32)
+ ; CHECK-NEXT: [[AMDGPU_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:_(s1) = G_AMDGPU_WHOLE_WAVE_FUNC_SETUP
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 5
+ ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s64) = G_CONSTANT i64 3
+ ; CHECK-NEXT: [[SELECT:%[0-9]+]]:_(s64) = G_SELECT [[AMDGPU_WHOLE_WAVE_FUNC_SETUP]](s1), [[MV]], [[C]]
+ ; CHECK-NEXT: [[SELECT1:%[0-9]+]]:_(s64) = G_SELECT [[AMDGPU_WHOLE_WAVE_FUNC_SETUP]](s1), [[MV1]], [[C1]]
+ ; CHECK-NEXT: [[INTRINSIC_CONVERGENT:%[0-9]+]]:_(s64) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.update.dpp), [[SELECT]](s64), [[SELECT1]](s64), 1, 1, 1, 0
+ ; CHECK-NEXT: [[UV:%[0-9]+]]:_(s32), [[UV1:%[0-9]+]]:_(s32) = G_UNMERGE_VALUES [[INTRINSIC_CONVERGENT]](s64)
+ ; CHECK-NEXT: $vgpr0 = COPY [[UV]](s32)
+ ; CHECK-NEXT: $vgpr1 = COPY [[UV1]](s32)
+ ; CHECK-NEXT: G_AMDGPU_WHOLE_WAVE_FUNC_RETURN [[AMDGPU_WHOLE_WAVE_FUNC_SETUP]](s1), implicit $vgpr0, implicit $vgpr1
+ %x = select i1 %active, i64 %a, i64 5
+ %y = select i1 %active, i64 %b, i64 3
+ %ret = call i64 @llvm.amdgcn.update.dpp.i64(i64 %x, i64 %y, i32 1, i32 1, i32 1, i1 false)
+ ret i64 %ret
+}
diff --git a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
index 23e97dd2e2fdf..300c7863b6966 100644
--- a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
@@ -18,6 +18,23 @@ define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: $vgpr0 = COPY [[V_MOV_B32_dpp]]
; DAGISEL-NEXT: [[DEF:%[0-9]+]]:sreg_32 = IMPLICIT_DEF
; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
+ ;
+ ; GISEL-LABEL: name: basic_test
+ ; GISEL: bb.1 (%ir-block.0):
+ ; GISEL-NEXT: liveins: $vgpr0, $vgpr1
+ ; GISEL-NEXT: {{ $}}
+ ; GISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
+ ; GISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr1
+ ; GISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32_xm0_xexec = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; GISEL-NEXT: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 5
+ ; GISEL-NEXT: [[COPY2:%[0-9]+]]:vgpr_32 = COPY [[S_MOV_B32_]]
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[COPY2]], 0, [[COPY]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; GISEL-NEXT: [[S_MOV_B32_1:%[0-9]+]]:sreg_32 = S_MOV_B32 3
+ ; GISEL-NEXT: [[COPY3:%[0-9]+]]:vgpr_32 = COPY [[S_MOV_B32_1]]
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_1:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[COPY3]], 0, [[COPY1]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; GISEL-NEXT: [[V_MOV_B32_dpp:%[0-9]+]]:vgpr_32 = V_MOV_B32_dpp [[V_CNDMASK_B32_e64_]], [[V_CNDMASK_B32_e64_1]], 1, 1, 1, 0, implicit $exec
+ ; GISEL-NEXT: $vgpr0 = COPY [[V_MOV_B32_dpp]]
+ ; GISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
%x = select i1 %active, i32 %a, i32 5
%y = select i1 %active, i32 %b, i32 3
%ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false)
@@ -33,6 +50,15 @@ define amdgpu_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: $vgpr0 = COPY [[V_MOV_B32_e32_]]
; DAGISEL-NEXT: [[DEF:%[0-9]+]]:sreg_32 = IMPLICIT_DEF
; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
+ ;
+ ; GISEL-LABEL: name: unused_active
+ ; GISEL: bb.1 (%ir-block.0):
+ ; GISEL-NEXT: liveins: $vgpr0, $vgpr1
+ ; GISEL-NEXT: {{ $}}
+ ; GISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32_xm0_xexec = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; GISEL-NEXT: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 14
+ ; GISEL-NEXT: $vgpr0 = COPY [[S_MOV_B32_]]
+ ; GISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
ret i32 14
}
@@ -63,6 +89,30 @@ define amdgpu_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: $vgpr0 = COPY [[V_CNDMASK_B32_e64_]]
; DAGISEL-NEXT: [[DEF:%[0-9]+]]:sreg_32 = IMPLICIT_DEF
; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
+ ;
+ ; GISEL-LABEL: name: multiple_blocks
+ ; GISEL: bb.1 (%ir-block.0):
+ ; GISEL-NEXT: successors: %bb.2(0x40000000), %bb.3(0x40000000)
+ ; GISEL-NEXT: liveins: $vgpr0, $vgpr1
+ ; GISEL-NEXT: {{ $}}
+ ; GISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
+ ; GISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr1
+ ; GISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32_xm0_xexec = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; GISEL-NEXT: [[V_CMP_EQ_U32_e64_:%[0-9]+]]:sreg_32_xm0_xexec = V_CMP_EQ_U32_e64 [[COPY]], [[COPY1]], implicit $exec
+ ; GISEL-NEXT: [[SI_IF:%[0-9]+]]:sreg_32_xm0_xexec = SI_IF [[V_CMP_EQ_U32_e64_]], %bb.3, implicit-def $exec, implicit-def $scc, implicit $exec
+ ; GISEL-NEXT: S_BRANCH %bb.2
+ ; GISEL-NEXT: {{ $}}
+ ; GISEL-NEXT: bb.2.if.then:
+ ; GISEL-NEXT: successors: %bb.3(0x80000000)
+ ; GISEL-NEXT: {{ $}}
+ ; GISEL-NEXT: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], [[COPY1]], 0, implicit $exec
+ ; GISEL-NEXT: {{ $}}
+ ; GISEL-NEXT: bb.3.if.end:
+ ; GISEL-NEXT: [[PHI:%[0-9]+]]:vgpr_32 = PHI [[COPY1]], %bb.1, [[V_ADD_U32_e64_]], %bb.2
+ ; GISEL-NEXT: SI_END_CF [[SI_IF]], implicit-def $exec, implicit-def $scc, implicit $exec
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[PHI]], 0, [[COPY]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; GISEL-NEXT: $vgpr0 = COPY [[V_CNDMASK_B32_e64_]]
+ ; GISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
%c = icmp eq i32 %a, %b
br i1 %c, label %if.then, label %if.end
@@ -109,6 +159,29 @@ define amdgpu_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
; DAGISEL-NEXT: $vgpr1 = COPY [[V_MOV_B32_dpp1]]
; DAGISEL-NEXT: [[DEF4:%[0-9]+]]:sreg_32 = IMPLICIT_DEF
; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0, implicit $vgpr1
+ ;
+ ; GISEL-LABEL: name: ret_64
+ ; GISEL: bb.1 (%ir-block.0):
+ ; GISEL-NEXT: liveins: $vgpr0, $vgpr1, $vgpr2, $vgpr3
+ ; GISEL-NEXT: {{ $}}
+ ; GISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
+ ; GISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr1
+ ; GISEL-NEXT: [[COPY2:%[0-9]+]]:vgpr_32 = COPY $vgpr2
+ ; GISEL-NEXT: [[COPY3:%[0-9]+]]:vgpr_32 = COPY $vgpr3
+ ; GISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32_xm0_xexec = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; GISEL-NEXT: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 5, implicit $exec
+ ; GISEL-NEXT: [[V_MOV_B32_e32_1:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[V_MOV_B32_e32_]], 0, [[COPY]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_1:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[V_MOV_B32_e32_1]], 0, [[COPY1]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; GISEL-NEXT: [[V_MOV_B32_e32_2:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
+ ; GISEL-NEXT: [[V_MOV_B32_e32_3:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_2:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[V_MOV_B32_e32_2]], 0, [[COPY2]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_3:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[V_MOV_B32_e32_3]], 0, [[COPY3]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; GISEL-NEXT: [[V_MOV_B32_dpp:%[0-9]+]]:vgpr_32 = V_MOV_B32_dpp [[V_CNDMASK_B32_e64_]], [[V_CNDMASK_B32_e64_2]], 1, 1, 1, 0, implicit $exec
+ ; GISEL-NEXT: [[V_MOV_B32_dpp1:%[0-9]+]]:vgpr_32 = V_MOV_B32_dpp [[V_CNDMASK_B32_e64_1]], [[V_CNDMASK_B32_e64_3]], 1, 1, 1, 0, implicit $exec
+ ; GISEL-NEXT: $vgpr0 = COPY [[V_MOV_B32_dpp]]
+ ; GISEL-NEXT: $vgpr1 = COPY [[V_MOV_B32_dpp1]]
+ ; GISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0, implicit $vgpr1
%x = select i1 %active, i64 %a, i64 5
%y = select i1 %active, i64 %b, i64 3
%ret = call i64 @llvm.amdgcn.update.dpp.i64(i64 %x, i64 %y, i32 1, i32 1, i32 1, i1 false)
diff --git a/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
index 6663fe89f0bc7..715244d39765f 100644
--- a/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
@@ -30,6 +30,30 @@ define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: s_mov_b32 exec_lo, vcc_lo
; DAGISEL-NEXT: s_wait_loadcnt 0x0
; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+;
+; GISEL-LABEL: basic_test:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL-NEXT: s_wait_expcnt 0x0
+; GISEL-NEXT: s_wait_samplecnt 0x0
+; GISEL-NEXT: s_wait_bvhcnt 0x0
+; GISEL-NEXT: s_wait_kmcnt 0x0
+; GISEL-NEXT: s_xor_saveexec_b32 vcc_lo, -1
+; GISEL-NEXT: s_clause 0x1
+; GISEL-NEXT: scratch_store_b32 off, v0, s32
+; GISEL-NEXT: scratch_store_b32 off, v1, s32 offset:4
+; GISEL-NEXT: s_mov_b32 exec_lo, -1
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: v_dual_cndmask_b32 v0, 5, v0 :: v_dual_cndmask_b32 v1, 3, v1
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GISEL-NEXT: v_mov_b32_dpp v0, v1 quad_perm:[1,0,0,0] row_mask:0x1 bank_mask:0x1
+; GISEL-NEXT: s_xor_b32 exec_lo, vcc_lo, -1
+; GISEL-NEXT: s_clause 0x1
+; GISEL-NEXT: scratch_load_b32 v0, off, s32
+; GISEL-NEXT: scratch_load_b32 v1, off, s32 offset:4
+; GISEL-NEXT: s_mov_b32 exec_lo, vcc_lo
+; GISEL-NEXT: s_wait_loadcnt 0x0
+; GISEL-NEXT: s_setpc_b64 s[30:31]
%x = select i1 %active, i32 %a, i32 5
%y = select i1 %active, i32 %b, i32 3
%ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false)
@@ -55,6 +79,24 @@ define amdgpu_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: s_mov_b32 exec_lo, s0
; DAGISEL-NEXT: s_wait_loadcnt 0x0
; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+;
+; GISEL-LABEL: unused_active:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL-NEXT: s_wait_expcnt 0x0
+; GISEL-NEXT: s_wait_samplecnt 0x0
+; GISEL-NEXT: s_wait_bvhcnt 0x0
+; GISEL-NEXT: s_wait_kmcnt 0x0
+; GISEL-NEXT: s_xor_saveexec_b32 s0, -1
+; GISEL-NEXT: scratch_store_b32 off, v0, s32 ; 4-byte Folded Spill
+; GISEL-NEXT: s_mov_b32 exec_lo, -1
+; GISEL-NEXT: v_mov_b32_e32 v0, 14
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_xor_b32 exec_lo, s0, -1
+; GISEL-NEXT: scratch_load_b32 v0, off, s32 ; 4-byte Folded Reload
+; GISEL-NEXT: s_mov_b32 exec_lo, s0
+; GISEL-NEXT: s_wait_loadcnt 0x0
+; GISEL-NEXT: s_setpc_b64 s[30:31]
ret i32 14
}
@@ -99,6 +141,45 @@ define amdgpu_whole_wave i32 @csr(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: s_wait_loadcnt 0x0
; DAGISEL-NEXT: s_wait_alu 0xf1ff
; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+;
+; GISEL-LABEL: csr:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL-NEXT: s_wait_expcnt 0x0
+; GISEL-NEXT: s_wait_samplecnt 0x0
+; GISEL-NEXT: s_wait_bvhcnt 0x0
+; GISEL-NEXT: s_wait_kmcnt 0x0
+; GISEL-NEXT: s_xor_saveexec_b32 vcc_lo, -1
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: scratch_store_b32 off, v2, s32
+; GISEL-NEXT: scratch_store_b32 off, v0, s32 offset:4
+; GISEL-NEXT: scratch_store_b32 off, v1, s32 offset:8
+; GISEL-NEXT: scratch_store_b32 off, v49, s32 offset:16
+; GISEL-NEXT: s_mov_b32 exec_lo, -1
+; GISEL-NEXT: scratch_store_b32 off, v40, s32 offset:12 ; 4-byte Folded Spill
+; GISEL-NEXT: ;;#ASMSTART
+; GISEL-NEXT: ; clobber CSR
+; GISEL-NEXT: ;;#ASMEND
+; GISEL-NEXT: v_writelane_b32 v2, s20, 0
+; GISEL-NEXT: ;;#ASMSTART
+; GISEL-NEXT: ; clobber non-CSR
+; GISEL-NEXT: ;;#ASMEND
+; GISEL-NEXT: scratch_load_b32 v40, off, s32 offset:12 ; 4-byte Folded Reload
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: v_dual_cndmask_b32 v0, 5, v0 :: v_dual_cndmask_b32 v1, 3, v1
+; GISEL-NEXT: v_readlane_b32 s20, v2, 0
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GISEL-NEXT: v_mov_b32_dpp v0, v1 quad_perm:[1,0,0,0] row_mask:0x1 bank_mask:0x1
+; GISEL-NEXT: s_xor_b32 exec_lo, vcc_lo, -1
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: scratch_load_b32 v2, off, s32
+; GISEL-NEXT: scratch_load_b32 v0, off, s32 offset:4
+; GISEL-NEXT: scratch_load_b32 v1, off, s32 offset:8
+; GISEL-NEXT: scratch_load_b32 v49, off, s32 offset:16
+; GISEL-NEXT: s_mov_b32 exec_lo, vcc_lo
+; GISEL-NEXT: s_wait_loadcnt 0x0
+; GISEL-NEXT: s_wait_alu 0xf1ff
+; GISEL-NEXT: s_setpc_b64 s[30:31]
%x = select i1 %active, i32 %a, i32 5
%y = select i1 %active, i32 %b, i32 3
call void asm sideeffect "; clobber CSR", "~{v40},~{s48}"()
@@ -126,6 +207,24 @@ define amdgpu_whole_wave void @csr_vgpr_only(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: s_mov_b32 exec_lo, s0
; DAGISEL-NEXT: s_wait_loadcnt 0x0
; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+;
+; GISEL-LABEL: csr_vgpr_only:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL-NEXT: s_wait_expcnt 0x0
+; GISEL-NEXT: s_wait_samplecnt 0x0
+; GISEL-NEXT: s_wait_bvhcnt 0x0
+; GISEL-NEXT: s_wait_kmcnt 0x0
+; GISEL-NEXT: s_or_saveexec_b32 s0, -1
+; GISEL-NEXT: scratch_store_b32 off, v40, s32 ; 4-byte Folded Spill
+; GISEL-NEXT: ;;#ASMSTART
+; GISEL-NEXT: ; clobber CSR VGPR
+; GISEL-NEXT: ;;#ASMEND
+; GISEL-NEXT: scratch_load_b32 v40, off, s32 ; 4-byte Folded Reload
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_mov_b32 exec_lo, s0
+; GISEL-NEXT: s_wait_loadcnt 0x0
+; GISEL-NEXT: s_setpc_b64 s[30:31]
call void asm sideeffect "; clobber CSR VGPR", "~{v40}"()
ret void
}
@@ -153,6 +252,29 @@ define amdgpu_whole_wave void @sgpr_spill_only(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: s_mov_b32 exec_lo, s0
; DAGISEL-NEXT: s_wait_loadcnt 0x0
; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+;
+; GISEL-LABEL: sgpr_spill_only:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL-NEXT: s_wait_expcnt 0x0
+; GISEL-NEXT: s_wait_samplecnt 0x0
+; GISEL-NEXT: s_wait_bvhcnt 0x0
+; GISEL-NEXT: s_wait_kmcnt 0x0
+; GISEL-NEXT: s_xor_saveexec_b32 s0, -1
+; GISEL-NEXT: scratch_store_b32 off, v0, s32 ; 4-byte Folded Spill
+; GISEL-NEXT: s_mov_b32 exec_lo, -1
+; GISEL-NEXT: v_writelane_b32 v0, s68, 0
+; GISEL-NEXT: ;;#ASMSTART
+; GISEL-NEXT: ; clobber CSR SGPR
+; GISEL-NEXT: ;;#ASMEND
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GISEL-NEXT: v_readlane_b32 s68, v0, 0
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_xor_b32 exec_lo, s0, -1
+; GISEL-NEXT: scratch_load_b32 v0, off, s32 ; 4-byte Folded Reload
+; GISEL-NEXT: s_mov_b32 exec_lo, s0
+; GISEL-NEXT: s_wait_loadcnt 0x0
+; GISEL-NEXT: s_setpc_b64 s[30:31]
call void asm sideeffect "; clobber CSR SGPR", "~{s68}"()
ret void
}
@@ -187,6 +309,36 @@ define amdgpu_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: s_mov_b32 exec_lo, vcc_lo
; DAGISEL-NEXT: s_wait_loadcnt 0x0
; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+;
+; GISEL-LABEL: multiple_blocks:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL-NEXT: s_wait_expcnt 0x0
+; GISEL-NEXT: s_wait_samplecnt 0x0
+; GISEL-NEXT: s_wait_bvhcnt 0x0
+; GISEL-NEXT: s_wait_kmcnt 0x0
+; GISEL-NEXT: s_xor_saveexec_b32 vcc_lo, -1
+; GISEL-NEXT: s_clause 0x1
+; GISEL-NEXT: scratch_store_b32 off, v0, s32
+; GISEL-NEXT: scratch_store_b32 off, v1, s32 offset:4
+; GISEL-NEXT: s_mov_b32 exec_lo, -1
+; GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GISEL-NEXT: s_mov_b32 s1, exec_lo
+; GISEL-NEXT: v_cmpx_eq_u32_e64 v0, v1
+; GISEL-NEXT: ; %bb.1: ; %if.then
+; GISEL-NEXT: v_add_nc_u32_e32 v1, v0, v1
+; GISEL-NEXT: ; %bb.2: ; %if.end
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_or_b32 exec_lo, exec_lo, s1
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GISEL-NEXT: v_cndmask_b32_e32 v0, v1, v0, vcc_lo
+; GISEL-NEXT: s_xor_b32 exec_lo, vcc_lo, -1
+; GISEL-NEXT: s_clause 0x1
+; GISEL-NEXT: scratch_load_b32 v0, off, s32
+; GISEL-NEXT: scratch_load_b32 v1, off, s32 offset:4
+; GISEL-NEXT: s_mov_b32 exec_lo, vcc_lo
+; GISEL-NEXT: s_wait_loadcnt 0x0
+; GISEL-NEXT: s_setpc_b64 s[30:31]
%c = icmp eq i32 %a, %b
br i1 %c, label %if.then, label %if.end
@@ -230,6 +382,36 @@ define amdgpu_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
; DAGISEL-NEXT: s_mov_b32 exec_lo, vcc_lo
; DAGISEL-NEXT: s_wait_loadcnt 0x0
; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+;
+; GISEL-LABEL: ret_64:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL-NEXT: s_wait_expcnt 0x0
+; GISEL-NEXT: s_wait_samplecnt 0x0
+; GISEL-NEXT: s_wait_bvhcnt 0x0
+; GISEL-NEXT: s_wait_kmcnt 0x0
+; GISEL-NEXT: s_xor_saveexec_b32 vcc_lo, -1
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: scratch_store_b32 off, v0, s32
+; GISEL-NEXT: scratch_store_b32 off, v1, s32 offset:4
+; GISEL-NEXT: scratch_store_b32 off, v2, s32 offset:8
+; GISEL-NEXT: scratch_store_b32 off, v3, s32 offset:12
+; GISEL-NEXT: s_mov_b32 exec_lo, -1
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: v_dual_cndmask_b32 v0, 5, v0 :: v_dual_cndmask_b32 v1, 0, v1
+; GISEL-NEXT: v_dual_cndmask_b32 v2, 3, v2 :: v_dual_cndmask_b32 v3, 0, v3
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_2)
+; GISEL-NEXT: v_mov_b32_dpp v0, v2 quad_perm:[1,0,0,0] row_mask:0x1 bank_mask:0x1
+; GISEL-NEXT: v_mov_b32_dpp v1, v3 quad_perm:[1,0,0,0] row_mask:0x1 bank_mask:0x1
+; GISEL-NEXT: s_xor_b32 exec_lo, vcc_lo, -1
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: scratch_load_b32 v0, off, s32
+; GISEL-NEXT: scratch_load_b32 v1, off, s32 offset:4
+; GISEL-NEXT: scratch_load_b32 v2, off, s32 offset:8
+; GISEL-NEXT: scratch_load_b32 v3, off, s32 offset:12
+; GISEL-NEXT: s_mov_b32 exec_lo, vcc_lo
+; GISEL-NEXT: s_wait_loadcnt 0x0
+; GISEL-NEXT: s_setpc_b64 s[30:31]
%x = select i1 %active, i64 %a, i64 5
%y = select i1 %active, i64 %b, i64 3
%ret = call i64 @llvm.amdgcn.update.dpp.i64(i64 %x, i64 %y, i32 1, i32 1, i32 1, i1 false)
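The `%active` argument in these tests lets a whole wave function give the caller's inactive lanes well-defined values via `select`, as in `%x = select i1 %active, i64 %a, i64 5` above. A rough per-lane model in Python (helper names are illustrative, not part of LLVM):

```python
def lane_select(active_mask, lane, if_active, if_inactive):
    """Per-lane model of `select i1 %active, a, b` inside a whole wave
    function: lanes that were active in the caller's EXEC mask see the
    real argument, the rest see the chosen default."""
    return if_active if (active_mask >> lane) & 1 else if_inactive

mask = 0b0101  # caller entered with lanes 0 and 2 active
vals = [lane_select(mask, l, 10 + l, 5) for l in range(4)]
assert vals == [10, 5, 12, 5]
```

This is why the generated code above selects between the incoming VGPRs and the constants 5/3 under the saved EXEC mask before the `update.dpp` runs across all lanes.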
>From 5e977507276f34192ed80a1471f432f93a5739c0 Mon Sep 17 00:00:00 2001
From: Diana Picus <diana-magda.picus at amd.com>
Date: Wed, 19 Mar 2025 10:56:02 +0100
Subject: [PATCH 09/11] Rename pseudo to match others
---
llvm/lib/Target/AMDGPU/SIFrameLowering.cpp | 2 +-
llvm/lib/Target/AMDGPU/SIInstrInfo.cpp | 2 +-
llvm/lib/Target/AMDGPU/SIInstructions.td | 2 +-
.../AMDGPU/isel-whole-wave-functions.ll | 60 +++++++++----------
.../AMDGPU/whole-wave-functions-pei.mir | 18 +++---
5 files changed, 42 insertions(+), 42 deletions(-)
diff --git a/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp b/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
index 1a20406f45e5d..02ce1007a3fcc 100644
--- a/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
@@ -978,7 +978,7 @@ void SIFrameLowering::emitCSRSpillStores(
StoreWWMRegisters(WWMCalleeSavedRegs);
if (FuncInfo->isWholeWaveFunction()) {
- // SI_SETUP_WHOLE_WAVE_FUNCTION has outlived its purpose, so we can remove
+ // SI_WHOLE_WAVE_FUNC_SETUP has outlived its purpose, so we can remove
// it now. If we have already saved some WWM CSR registers, then the EXEC is
// already -1 and we don't need to do anything else. Otherwise, set EXEC to
// -1 here.
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
index d31649a1a810c..13d757f9e8769 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
@@ -5780,7 +5780,7 @@ SIInstrInfo::getWholeWaveFunctionSetup(MachineFunction &MF) const {
"Not a whole wave func");
MachineBasicBlock &MBB = *MF.begin();
for (MachineInstr &MI : MBB)
- if (MI.getOpcode() == AMDGPU::SI_SETUP_WHOLE_WAVE_FUNC ||
+ if (MI.getOpcode() == AMDGPU::SI_WHOLE_WAVE_FUNC_SETUP ||
MI.getOpcode() == AMDGPU::G_AMDGPU_WHOLE_WAVE_FUNC_SETUP)
return &MI;
diff --git a/llvm/lib/Target/AMDGPU/SIInstructions.td b/llvm/lib/Target/AMDGPU/SIInstructions.td
index 51f52875c30d3..db48144032302 100644
--- a/llvm/lib/Target/AMDGPU/SIInstructions.td
+++ b/llvm/lib/Target/AMDGPU/SIInstructions.td
@@ -624,7 +624,7 @@ def SI_INIT_WHOLE_WAVE : SPseudoInstSI <
}
// Sets EXEC to all lanes and returns the previous EXEC.
-def SI_SETUP_WHOLE_WAVE_FUNC : SPseudoInstSI <
+def SI_WHOLE_WAVE_FUNC_SETUP : SPseudoInstSI <
(outs SReg_1:$dst), (ins), [(set i1:$dst, (AMDGPUwhole_wave_setup))]> {
let Defs = [EXEC];
let Uses = [EXEC];
diff --git a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
index 300c7863b6966..851dc5107a8a1 100644
--- a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
@@ -9,15 +9,15 @@ define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: {{ $}}
; DAGISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr1
; DAGISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr0
- ; DAGISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32_xm0_xexec = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; DAGISEL-NEXT: [[SI_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:sreg_32_xm0_xexec = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
; DAGISEL-NEXT: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 5
- ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, killed [[S_MOV_B32_]], 0, [[COPY1]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, killed [[S_MOV_B32_]], 0, [[COPY1]], [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $exec
; DAGISEL-NEXT: [[S_MOV_B32_1:%[0-9]+]]:sreg_32 = S_MOV_B32 3
- ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_1:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, killed [[S_MOV_B32_1]], 0, [[COPY]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_1:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, killed [[S_MOV_B32_1]], 0, [[COPY]], [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $exec
; DAGISEL-NEXT: [[V_MOV_B32_dpp:%[0-9]+]]:vgpr_32 = V_MOV_B32_dpp [[V_CNDMASK_B32_e64_]], killed [[V_CNDMASK_B32_e64_1]], 1, 1, 1, 0, implicit $exec
; DAGISEL-NEXT: $vgpr0 = COPY [[V_MOV_B32_dpp]]
; DAGISEL-NEXT: [[DEF:%[0-9]+]]:sreg_32 = IMPLICIT_DEF
- ; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
+ ; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $vgpr0
;
; GISEL-LABEL: name: basic_test
; GISEL: bb.1 (%ir-block.0):
@@ -25,16 +25,16 @@ define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
; GISEL-NEXT: {{ $}}
; GISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr1
- ; GISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32_xm0_xexec = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; GISEL-NEXT: [[SI_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:sreg_32_xm0_xexec = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
; GISEL-NEXT: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 5
; GISEL-NEXT: [[COPY2:%[0-9]+]]:vgpr_32 = COPY [[S_MOV_B32_]]
- ; GISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[COPY2]], 0, [[COPY]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[COPY2]], 0, [[COPY]], [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $exec
; GISEL-NEXT: [[S_MOV_B32_1:%[0-9]+]]:sreg_32 = S_MOV_B32 3
; GISEL-NEXT: [[COPY3:%[0-9]+]]:vgpr_32 = COPY [[S_MOV_B32_1]]
- ; GISEL-NEXT: [[V_CNDMASK_B32_e64_1:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[COPY3]], 0, [[COPY1]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_1:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[COPY3]], 0, [[COPY1]], [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $exec
; GISEL-NEXT: [[V_MOV_B32_dpp:%[0-9]+]]:vgpr_32 = V_MOV_B32_dpp [[V_CNDMASK_B32_e64_]], [[V_CNDMASK_B32_e64_1]], 1, 1, 1, 0, implicit $exec
; GISEL-NEXT: $vgpr0 = COPY [[V_MOV_B32_dpp]]
- ; GISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
+ ; GISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $vgpr0
%x = select i1 %active, i32 %a, i32 5
%y = select i1 %active, i32 %b, i32 3
%ret = call i32 @llvm.amdgcn.update.dpp.i32(i32 %x, i32 %y, i32 1, i32 1, i32 1, i1 false)
@@ -45,20 +45,20 @@ define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
define amdgpu_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: name: unused_active
; DAGISEL: bb.0 (%ir-block.0):
- ; DAGISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32 = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; DAGISEL-NEXT: [[SI_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:sreg_32 = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
; DAGISEL-NEXT: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 14, implicit $exec
; DAGISEL-NEXT: $vgpr0 = COPY [[V_MOV_B32_e32_]]
; DAGISEL-NEXT: [[DEF:%[0-9]+]]:sreg_32 = IMPLICIT_DEF
- ; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
+ ; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $vgpr0
;
; GISEL-LABEL: name: unused_active
; GISEL: bb.1 (%ir-block.0):
; GISEL-NEXT: liveins: $vgpr0, $vgpr1
; GISEL-NEXT: {{ $}}
- ; GISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32_xm0_xexec = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; GISEL-NEXT: [[SI_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:sreg_32_xm0_xexec = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
; GISEL-NEXT: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 14
; GISEL-NEXT: $vgpr0 = COPY [[S_MOV_B32_]]
- ; GISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
+ ; GISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $vgpr0
ret i32 14
}
@@ -70,8 +70,8 @@ define amdgpu_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: {{ $}}
; DAGISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr1
; DAGISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr0
- ; DAGISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32 = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
- ; DAGISEL-NEXT: [[COPY2:%[0-9]+]]:sreg_32 = COPY [[SI_SETUP_WHOLE_WAVE_FUNC]]
+ ; DAGISEL-NEXT: [[SI_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:sreg_32 = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
+ ; DAGISEL-NEXT: [[COPY2:%[0-9]+]]:sreg_32 = COPY [[SI_WHOLE_WAVE_FUNC_SETUP]]
; DAGISEL-NEXT: [[V_CMP_EQ_U32_e64_:%[0-9]+]]:sreg_32 = V_CMP_EQ_U32_e64 [[COPY1]], [[COPY]], implicit $exec
; DAGISEL-NEXT: [[SI_IF:%[0-9]+]]:sreg_32 = SI_IF killed [[V_CMP_EQ_U32_e64_]], %bb.2, implicit-def dead $exec, implicit-def dead $scc, implicit $exec
; DAGISEL-NEXT: S_BRANCH %bb.1
@@ -88,7 +88,7 @@ define amdgpu_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[PHI]], 0, [[COPY1]], [[COPY3]], implicit $exec
; DAGISEL-NEXT: $vgpr0 = COPY [[V_CNDMASK_B32_e64_]]
; DAGISEL-NEXT: [[DEF:%[0-9]+]]:sreg_32 = IMPLICIT_DEF
- ; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
+ ; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $vgpr0
;
; GISEL-LABEL: name: multiple_blocks
; GISEL: bb.1 (%ir-block.0):
@@ -97,7 +97,7 @@ define amdgpu_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
; GISEL-NEXT: {{ $}}
; GISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
; GISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr1
- ; GISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32_xm0_xexec = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; GISEL-NEXT: [[SI_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:sreg_32_xm0_xexec = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
; GISEL-NEXT: [[V_CMP_EQ_U32_e64_:%[0-9]+]]:sreg_32_xm0_xexec = V_CMP_EQ_U32_e64 [[COPY]], [[COPY1]], implicit $exec
; GISEL-NEXT: [[SI_IF:%[0-9]+]]:sreg_32_xm0_xexec = SI_IF [[V_CMP_EQ_U32_e64_]], %bb.3, implicit-def $exec, implicit-def $scc, implicit $exec
; GISEL-NEXT: S_BRANCH %bb.2
@@ -110,9 +110,9 @@ define amdgpu_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
; GISEL-NEXT: bb.3.if.end:
; GISEL-NEXT: [[PHI:%[0-9]+]]:vgpr_32 = PHI [[COPY1]], %bb.1, [[V_ADD_U32_e64_]], %bb.2
; GISEL-NEXT: SI_END_CF [[SI_IF]], implicit-def $exec, implicit-def $scc, implicit $exec
- ; GISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[PHI]], 0, [[COPY]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[PHI]], 0, [[COPY]], [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $exec
; GISEL-NEXT: $vgpr0 = COPY [[V_CNDMASK_B32_e64_]]
- ; GISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0
+ ; GISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $vgpr0
%c = icmp eq i32 %a, %b
br i1 %c, label %if.then, label %if.end
@@ -141,24 +141,24 @@ define amdgpu_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
; DAGISEL-NEXT: [[DEF2:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
; DAGISEL-NEXT: [[DEF3:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
; DAGISEL-NEXT: [[REG_SEQUENCE1:%[0-9]+]]:vreg_64 = REG_SEQUENCE [[COPY3]], %subreg.sub0, [[COPY2]], %subreg.sub1
- ; DAGISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32_xm0_xexec = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; DAGISEL-NEXT: [[SI_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:sreg_32_xm0_xexec = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
; DAGISEL-NEXT: [[COPY4:%[0-9]+]]:vgpr_32 = COPY [[REG_SEQUENCE1]].sub1
; DAGISEL-NEXT: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 0
- ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[S_MOV_B32_]], 0, killed [[COPY4]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[S_MOV_B32_]], 0, killed [[COPY4]], [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $exec
; DAGISEL-NEXT: [[COPY5:%[0-9]+]]:vgpr_32 = COPY [[REG_SEQUENCE1]].sub0
; DAGISEL-NEXT: [[S_MOV_B32_1:%[0-9]+]]:sreg_32 = S_MOV_B32 5
- ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_1:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, killed [[S_MOV_B32_1]], 0, killed [[COPY5]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_1:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, killed [[S_MOV_B32_1]], 0, killed [[COPY5]], [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $exec
; DAGISEL-NEXT: [[COPY6:%[0-9]+]]:vgpr_32 = COPY [[REG_SEQUENCE]].sub1
- ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_2:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[S_MOV_B32_]], 0, killed [[COPY6]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_2:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[S_MOV_B32_]], 0, killed [[COPY6]], [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $exec
; DAGISEL-NEXT: [[COPY7:%[0-9]+]]:vgpr_32 = COPY [[REG_SEQUENCE]].sub0
; DAGISEL-NEXT: [[S_MOV_B32_2:%[0-9]+]]:sreg_32 = S_MOV_B32 3
- ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_3:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, killed [[S_MOV_B32_2]], 0, killed [[COPY7]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; DAGISEL-NEXT: [[V_CNDMASK_B32_e64_3:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, killed [[S_MOV_B32_2]], 0, killed [[COPY7]], [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $exec
; DAGISEL-NEXT: [[V_MOV_B32_dpp:%[0-9]+]]:vgpr_32 = V_MOV_B32_dpp [[V_CNDMASK_B32_e64_1]], killed [[V_CNDMASK_B32_e64_3]], 1, 1, 1, 0, implicit $exec
; DAGISEL-NEXT: [[V_MOV_B32_dpp1:%[0-9]+]]:vgpr_32 = V_MOV_B32_dpp [[V_CNDMASK_B32_e64_]], killed [[V_CNDMASK_B32_e64_2]], 1, 1, 1, 0, implicit $exec
; DAGISEL-NEXT: $vgpr0 = COPY [[V_MOV_B32_dpp]]
; DAGISEL-NEXT: $vgpr1 = COPY [[V_MOV_B32_dpp1]]
; DAGISEL-NEXT: [[DEF4:%[0-9]+]]:sreg_32 = IMPLICIT_DEF
- ; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0, implicit $vgpr1
+ ; DAGISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $vgpr0, implicit $vgpr1
;
; GISEL-LABEL: name: ret_64
; GISEL: bb.1 (%ir-block.0):
@@ -168,20 +168,20 @@ define amdgpu_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
; GISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr1
; GISEL-NEXT: [[COPY2:%[0-9]+]]:vgpr_32 = COPY $vgpr2
; GISEL-NEXT: [[COPY3:%[0-9]+]]:vgpr_32 = COPY $vgpr3
- ; GISEL-NEXT: [[SI_SETUP_WHOLE_WAVE_FUNC:%[0-9]+]]:sreg_32_xm0_xexec = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ ; GISEL-NEXT: [[SI_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:sreg_32_xm0_xexec = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
; GISEL-NEXT: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 5, implicit $exec
; GISEL-NEXT: [[V_MOV_B32_e32_1:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
- ; GISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[V_MOV_B32_e32_]], 0, [[COPY]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
- ; GISEL-NEXT: [[V_CNDMASK_B32_e64_1:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[V_MOV_B32_e32_1]], 0, [[COPY1]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[V_MOV_B32_e32_]], 0, [[COPY]], [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $exec
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_1:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[V_MOV_B32_e32_1]], 0, [[COPY1]], [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $exec
; GISEL-NEXT: [[V_MOV_B32_e32_2:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 3, implicit $exec
; GISEL-NEXT: [[V_MOV_B32_e32_3:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
- ; GISEL-NEXT: [[V_CNDMASK_B32_e64_2:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[V_MOV_B32_e32_2]], 0, [[COPY2]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
- ; GISEL-NEXT: [[V_CNDMASK_B32_e64_3:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[V_MOV_B32_e32_3]], 0, [[COPY3]], [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $exec
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_2:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[V_MOV_B32_e32_2]], 0, [[COPY2]], [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $exec
+ ; GISEL-NEXT: [[V_CNDMASK_B32_e64_3:%[0-9]+]]:vgpr_32 = V_CNDMASK_B32_e64 0, [[V_MOV_B32_e32_3]], 0, [[COPY3]], [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $exec
; GISEL-NEXT: [[V_MOV_B32_dpp:%[0-9]+]]:vgpr_32 = V_MOV_B32_dpp [[V_CNDMASK_B32_e64_]], [[V_CNDMASK_B32_e64_2]], 1, 1, 1, 0, implicit $exec
; GISEL-NEXT: [[V_MOV_B32_dpp1:%[0-9]+]]:vgpr_32 = V_MOV_B32_dpp [[V_CNDMASK_B32_e64_1]], [[V_CNDMASK_B32_e64_3]], 1, 1, 1, 0, implicit $exec
; GISEL-NEXT: $vgpr0 = COPY [[V_MOV_B32_dpp]]
; GISEL-NEXT: $vgpr1 = COPY [[V_MOV_B32_dpp1]]
- ; GISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN [[SI_SETUP_WHOLE_WAVE_FUNC]], implicit $vgpr0, implicit $vgpr1
+ ; GISEL-NEXT: SI_WHOLE_WAVE_FUNC_RETURN [[SI_WHOLE_WAVE_FUNC_SETUP]], implicit $vgpr0, implicit $vgpr1
%x = select i1 %active, i64 %a, i64 5
%y = select i1 %active, i64 %b, i64 3
%ret = call i64 @llvm.amdgcn.update.dpp.i64(i64 %x, i64 %y, i32 1, i32 1, i32 1, i1 false)
diff --git a/llvm/test/CodeGen/AMDGPU/whole-wave-functions-pei.mir b/llvm/test/CodeGen/AMDGPU/whole-wave-functions-pei.mir
index a5a35c40b719c..5d6906bacf336 100644
--- a/llvm/test/CodeGen/AMDGPU/whole-wave-functions-pei.mir
+++ b/llvm/test/CodeGen/AMDGPU/whole-wave-functions-pei.mir
@@ -34,7 +34,7 @@ body: |
; CHECK-NEXT: $vgpr0 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 0, 0, implicit $exec, implicit $flat_scr, implicit $vgpr0(tied-def 0) :: (load (s32) from %stack.0, addrspace 5)
; CHECK-NEXT: $exec_lo = S_MOV_B32 $sgpr0
; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0, implicit killed $vgpr0
- renamable $sgpr0 = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ renamable $sgpr0 = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
$vgpr0 = V_MOV_B32_e32 14, implicit $exec
SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0, implicit killed $vgpr0
@@ -70,7 +70,7 @@ body: |
; CHECK-NEXT: $vgpr40 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 0, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.0, addrspace 5)
; CHECK-NEXT: $exec_lo = S_MOV_B32 $sgpr0
; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0
- renamable $sgpr0 = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ renamable $sgpr0 = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
$vgpr40 = V_MOV_B32_e32 14, implicit $exec
SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0
@@ -112,7 +112,7 @@ body: |
; CHECK-NEXT: $exec_lo = S_MOV_B32 $vcc_lo
; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $vcc_lo
$vgpr192 = SI_SPILL_S32_TO_VGPR killed $sgpr20, 0, $vgpr192
- renamable $vcc_lo = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ renamable $vcc_lo = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
$sgpr20 = S_MOV_B32 14, implicit $exec
$sgpr20 = SI_RESTORE_S32_FROM_VGPR $vgpr192, 0
SI_WHOLE_WAVE_FUNC_RETURN killed renamable $vcc_lo
@@ -153,7 +153,7 @@ body: |
; CHECK-NEXT: $exec_lo = S_MOV_B32 $vcc_lo
; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $vcc_lo
$vgpr191 = SI_SPILL_S32_TO_VGPR killed $sgpr20, 0, $vgpr191
- renamable $vcc_lo = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ renamable $vcc_lo = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
$sgpr20 = S_MOV_B32 14, implicit $exec
$sgpr20 = SI_RESTORE_S32_FROM_VGPR $vgpr191, 0
SI_WHOLE_WAVE_FUNC_RETURN killed renamable $vcc_lo
@@ -209,7 +209,7 @@ body: |
; CHECK-NEXT: $exec_lo = S_MOV_B32 $vcc_lo
; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $vcc_lo
$vgpr191 = SI_SPILL_S32_TO_VGPR killed $sgpr20, 0, $vgpr191
- renamable $vcc_lo = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ renamable $vcc_lo = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
S_NOP 0, implicit-def $vgpr40, implicit-def $sgpr20
S_NOP 0, implicit-def $vgpr49, implicit-def $sgpr40
$sgpr20 = SI_RESTORE_S32_FROM_VGPR $vgpr191, 0
@@ -267,7 +267,7 @@ body: |
; CHECK-NEXT: $exec_lo = S_MOV_B32 $sgpr3
; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr3
$vgpr191 = SI_SPILL_S32_TO_VGPR killed $sgpr20, 0, $vgpr191
- renamable $vcc_lo = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ renamable $vcc_lo = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
S_NOP 0, implicit-def $vgpr40, implicit-def $sgpr20
$sgpr3 = COPY $vcc_lo
S_NOP 0, implicit-def $vgpr49, implicit-def $sgpr40
@@ -323,7 +323,7 @@ body: |
; CHECK-NEXT: $vgpr5 = SCRATCH_LOAD_DWORD_SADDR $sgpr32, 16, 0, implicit $exec, implicit $flat_scr :: (load (s32) from %stack.4, addrspace 5)
; CHECK-NEXT: $exec_lo = S_MOV_B32 $sgpr0
; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0, implicit killed $vgpr0
- renamable $sgpr0 = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ renamable $sgpr0 = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
$vgpr0 = V_MOV_B32_e32 14, implicit $exec
S_NOP 0, implicit-def $vgpr2_vgpr3_vgpr4_vgpr5, implicit-def $vgpr40_vgpr41_vgpr42
SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0, implicit killed $vgpr0
@@ -364,7 +364,7 @@ body: |
; CHECK-NEXT: S_NOP 0, implicit $vgpr0, implicit $vgpr20, implicit $vgpr40
; CHECK-NEXT: $exec_lo = S_MOV_B32 $sgpr0
; CHECK-NEXT: SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0, implicit killed $vgpr0
- renamable $sgpr0 = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ renamable $sgpr0 = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
S_NOP 0, implicit $vgpr0, implicit $vgpr20, implicit $vgpr40
SI_WHOLE_WAVE_FUNC_RETURN killed renamable $sgpr0, implicit killed $vgpr0
@@ -427,7 +427,7 @@ body: |
successors: %bb.1, %bb.2
liveins: $vgpr0, $vgpr1
- renamable $vcc_lo = SI_SETUP_WHOLE_WAVE_FUNC implicit-def dead $exec, implicit $exec
+ renamable $vcc_lo = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
$sgpr1 = S_MOV_B32 $exec_lo
V_CMPX_EQ_U32_nosdst_e64 $vgpr0, $vgpr1, implicit-def $exec, implicit $exec
S_CBRANCH_EXECZ %bb.2, implicit $exec
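Abstractly, the renamed pseudos in this patch bracket the function body: `SI_WHOLE_WAVE_FUNC_SETUP` saves the incoming EXEC and forces all lanes on, and `SI_WHOLE_WAVE_FUNC_RETURN` restores the saved mask. A minimal Python model of that contract (wave32; names are illustrative, not the actual LLVM API):

```python
ALL_LANES = 0xFFFFFFFF  # wave32 EXEC with every lane enabled (-1)

def whole_wave_func_setup(exec_mask):
    """Model of SI_WHOLE_WAVE_FUNC_SETUP: return the caller's EXEC
    and the new EXEC with all lanes enabled."""
    return exec_mask, ALL_LANES

def whole_wave_func_return(saved_mask):
    """Model of SI_WHOLE_WAVE_FUNC_RETURN: the caller's EXEC comes back."""
    return saved_mask

incoming = 0x0000FF00          # only lanes 8..15 active at the call
saved, exec_mask = whole_wave_func_setup(incoming)
assert exec_mask == ALL_LANES  # body runs with every lane enabled
assert whole_wave_func_return(saved) == incoming
```

In the real lowering the saved mask lives in an SGPR (or `vcc_lo`, as in the MIR tests above), and PEI inserts the WWM spills/reloads around the two pseudos.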
>From be094cedfee06b240b3f92b28ace61f4b02285e0 Mon Sep 17 00:00:00 2001
From: Diana Picus <diana-magda.picus at amd.com>
Date: Tue, 25 Mar 2025 13:59:15 +0100
Subject: [PATCH 10/11] Rename CC
---
llvm/include/llvm/AsmParser/LLToken.h | 2 +-
llvm/include/llvm/IR/CallingConv.h | 2 +-
llvm/lib/AsmParser/LLLexer.cpp | 2 +-
llvm/lib/AsmParser/LLParser.cpp | 4 ++--
llvm/lib/IR/AsmWriter.cpp | 4 ++--
llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp | 6 +++---
llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp | 4 ++--
llvm/lib/Target/AMDGPU/SIISelLowering.cpp | 2 +-
llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp | 6 ++++--
llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp | 4 ++--
llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp | 3 ++-
llvm/lib/Target/AMDGPU/Utils/AMDGPUPALMetadata.cpp | 2 +-
.../AMDGPU/irtranslator-whole-wave-functions.ll | 8 ++++----
.../CodeGen/AMDGPU/isel-whole-wave-functions.ll | 8 ++++----
llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll | 14 +++++++-------
15 files changed, 37 insertions(+), 34 deletions(-)
diff --git a/llvm/include/llvm/AsmParser/LLToken.h b/llvm/include/llvm/AsmParser/LLToken.h
index bcc4b56dac7aa..a73111297b15a 100644
--- a/llvm/include/llvm/AsmParser/LLToken.h
+++ b/llvm/include/llvm/AsmParser/LLToken.h
@@ -181,7 +181,7 @@ enum Kind {
kw_amdgpu_cs_chain_preserve,
kw_amdgpu_kernel,
kw_amdgpu_gfx,
- kw_amdgpu_whole_wave,
+ kw_amdgpu_gfx_whole_wave,
kw_tailcc,
kw_m68k_rtdcc,
kw_graalcc,
diff --git a/llvm/include/llvm/IR/CallingConv.h b/llvm/include/llvm/IR/CallingConv.h
index 8e37f0c81382a..77abf18fe4c75 100644
--- a/llvm/include/llvm/IR/CallingConv.h
+++ b/llvm/include/llvm/IR/CallingConv.h
@@ -285,7 +285,7 @@ namespace CallingConv {
RISCV_VLSCall_65536 = 123,
// Calling convention for AMDGPU whole wave functions.
- AMDGPU_WholeWave = 124,
+ AMDGPU_Gfx_WholeWave = 124,
/// The highest possible ID. Must be some 2^k - 1.
MaxID = 1023
diff --git a/llvm/lib/AsmParser/LLLexer.cpp b/llvm/lib/AsmParser/LLLexer.cpp
index e99f41f1c49b5..fcea952d3e997 100644
--- a/llvm/lib/AsmParser/LLLexer.cpp
+++ b/llvm/lib/AsmParser/LLLexer.cpp
@@ -679,7 +679,7 @@ lltok::Kind LLLexer::LexIdentifier() {
KEYWORD(amdgpu_cs_chain_preserve);
KEYWORD(amdgpu_kernel);
KEYWORD(amdgpu_gfx);
- KEYWORD(amdgpu_whole_wave);
+ KEYWORD(amdgpu_gfx_whole_wave);
KEYWORD(tailcc);
KEYWORD(m68k_rtdcc);
KEYWORD(graalcc);
diff --git a/llvm/lib/AsmParser/LLParser.cpp b/llvm/lib/AsmParser/LLParser.cpp
index 408479ea62c60..e1406e2ec48b1 100644
--- a/llvm/lib/AsmParser/LLParser.cpp
+++ b/llvm/lib/AsmParser/LLParser.cpp
@@ -2286,8 +2286,8 @@ bool LLParser::parseOptionalCallingConv(unsigned &CC) {
CC = CallingConv::AMDGPU_CS_ChainPreserve;
break;
case lltok::kw_amdgpu_kernel: CC = CallingConv::AMDGPU_KERNEL; break;
- case lltok::kw_amdgpu_whole_wave:
- CC = CallingConv::AMDGPU_WholeWave;
+ case lltok::kw_amdgpu_gfx_whole_wave:
+ CC = CallingConv::AMDGPU_Gfx_WholeWave;
break;
case lltok::kw_tailcc: CC = CallingConv::Tail; break;
case lltok::kw_m68k_rtdcc: CC = CallingConv::M68k_RTD; break;
diff --git a/llvm/lib/IR/AsmWriter.cpp b/llvm/lib/IR/AsmWriter.cpp
index 00d61e3eb91ac..7108901911b78 100644
--- a/llvm/lib/IR/AsmWriter.cpp
+++ b/llvm/lib/IR/AsmWriter.cpp
@@ -376,8 +376,8 @@ static void PrintCallingConv(unsigned cc, raw_ostream &Out) {
break;
case CallingConv::AMDGPU_KERNEL: Out << "amdgpu_kernel"; break;
case CallingConv::AMDGPU_Gfx: Out << "amdgpu_gfx"; break;
- case CallingConv::AMDGPU_WholeWave:
- Out << "amdgpu_whole_wave";
+ case CallingConv::AMDGPU_Gfx_WholeWave:
+ Out << "amdgpu_gfx_whole_wave";
break;
case CallingConv::M68k_RTD: Out << "m68k_rtdcc"; break;
case CallingConv::RISCV_VectorCall:
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp b/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
index 76d7f33d122d7..a30d3b267f8ad 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
@@ -1305,8 +1305,8 @@ bool AMDGPUCallLowering::lowerTailCall(
SmallVector<std::pair<MCRegister, Register>, 12> ImplicitArgRegs;
if (Info.CallConv != CallingConv::AMDGPU_Gfx &&
- !AMDGPU::isChainCC(Info.CallConv) &&
- Info.CallConv != CallingConv::AMDGPU_WholeWave) {
+ Info.CallConv != CallingConv::AMDGPU_Gfx_WholeWave &&
+ !AMDGPU::isChainCC(Info.CallConv)) {
// With a fixed ABI, allocate fixed registers before user arguments.
if (!passSpecialInputs(MIRBuilder, CCInfo, ImplicitArgRegs, Info))
return false;
@@ -1489,7 +1489,7 @@ bool AMDGPUCallLowering::lowerCall(MachineIRBuilder &MIRBuilder,
SmallVector<std::pair<MCRegister, Register>, 12> ImplicitArgRegs;
if (Info.CallConv != CallingConv::AMDGPU_Gfx &&
- Info.CallConv != CallingConv::AMDGPU_WholeWave) {
+ Info.CallConv != CallingConv::AMDGPU_Gfx_WholeWave) {
// With a fixed ABI, allocate fixed registers before user arguments.
if (!passSpecialInputs(MIRBuilder, CCInfo, ImplicitArgRegs, Info))
return false;
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
index 3e2f6cb6f9a17..a91989604520b 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
@@ -1132,7 +1132,7 @@ CCAssignFn *AMDGPUCallLowering::CCAssignFnForCall(CallingConv::ID CC,
case CallingConv::Cold:
return CC_AMDGPU_Func;
case CallingConv::AMDGPU_Gfx:
- case CallingConv::AMDGPU_WholeWave:
+ case CallingConv::AMDGPU_Gfx_WholeWave:
return CC_SI_Gfx;
case CallingConv::AMDGPU_KERNEL:
case CallingConv::SPIR_KERNEL:
@@ -1158,7 +1158,7 @@ CCAssignFn *AMDGPUCallLowering::CCAssignFnForReturn(CallingConv::ID CC,
case CallingConv::AMDGPU_LS:
return RetCC_SI_Shader;
case CallingConv::AMDGPU_Gfx:
- case CallingConv::AMDGPU_WholeWave:
+ case CallingConv::AMDGPU_Gfx_WholeWave:
return RetCC_SI_Gfx;
case CallingConv::C:
case CallingConv::Fast:
diff --git a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
index d6301dd818441..cbc234bb490a3 100644
--- a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
@@ -3780,7 +3780,7 @@ SDValue SITargetLowering::LowerCall(CallLoweringInfo &CLI,
CCAssignFn *AssignFn = CCAssignFnForCall(CallConv, IsVarArg);
if (CallConv != CallingConv::AMDGPU_Gfx && !AMDGPU::isChainCC(CallConv) &&
- CallConv != CallingConv::AMDGPU_WholeWave) {
+ CallConv != CallingConv::AMDGPU_Gfx_WholeWave) {
// With a fixed ABI, allocate fixed registers before user arguments.
passSpecialInputs(CLI, CCInfo, *Info, RegsToPass, MemOpChains, Chain);
}
diff --git a/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp b/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp
index 6867e28cc8761..b93f5b9536621 100644
--- a/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp
@@ -42,7 +42,8 @@ SIMachineFunctionInfo::SIMachineFunctionInfo(const Function &F,
PrivateSegmentWaveByteOffset(false), WorkItemIDX(false),
WorkItemIDY(false), WorkItemIDZ(false), ImplicitArgPtr(false),
GITPtrHigh(0xffffffff), HighBitsOf32BitAddress(0),
- IsWholeWaveFunction(F.getCallingConv() == CallingConv::AMDGPU_WholeWave) {
+ IsWholeWaveFunction(F.getCallingConv() ==
+ CallingConv::AMDGPU_Gfx_WholeWave) {
const GCNSubtarget &ST = *static_cast<const GCNSubtarget *>(STI);
FlatWorkGroupSizes = ST.getFlatWorkGroupSizes(F);
WavesPerEU = ST.getWavesPerEU(F);
@@ -84,7 +85,8 @@ SIMachineFunctionInfo::SIMachineFunctionInfo(const Function &F,
ImplicitArgPtr = false;
} else if (!isEntryFunction()) {
- if (CC != CallingConv::AMDGPU_Gfx && CC != CallingConv::AMDGPU_WholeWave)
+ if (CC != CallingConv::AMDGPU_Gfx &&
+ CC != CallingConv::AMDGPU_Gfx_WholeWave)
ArgInfo = AMDGPUArgumentUsageInfo::FixedABIFunctionInfo;
FrameOffsetReg = AMDGPU::SGPR33;
diff --git a/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp b/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp
index 25ee7a8d337ec..732384af5bcc7 100644
--- a/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp
@@ -407,7 +407,7 @@ const MCPhysReg *SIRegisterInfo::getCalleeSavedRegs(
return ST.hasGFX90AInsts() ? CSR_AMDGPU_GFX90AInsts_SaveList
: CSR_AMDGPU_SaveList;
case CallingConv::AMDGPU_Gfx:
- case CallingConv::AMDGPU_WholeWave:
+ case CallingConv::AMDGPU_Gfx_WholeWave:
return ST.hasGFX90AInsts() ? CSR_AMDGPU_SI_Gfx_GFX90AInsts_SaveList
: CSR_AMDGPU_SI_Gfx_SaveList;
case CallingConv::AMDGPU_CS_ChainPreserve:
@@ -434,7 +434,7 @@ const uint32_t *SIRegisterInfo::getCallPreservedMask(const MachineFunction &MF,
return ST.hasGFX90AInsts() ? CSR_AMDGPU_GFX90AInsts_RegMask
: CSR_AMDGPU_RegMask;
case CallingConv::AMDGPU_Gfx:
- case CallingConv::AMDGPU_WholeWave:
+ case CallingConv::AMDGPU_Gfx_WholeWave:
return ST.hasGFX90AInsts() ? CSR_AMDGPU_SI_Gfx_GFX90AInsts_RegMask
: CSR_AMDGPU_SI_Gfx_RegMask;
case CallingConv::AMDGPU_CS_Chain:
diff --git a/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp b/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp
index ac6b07bad3e35..fe22293fb2c60 100644
--- a/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp
@@ -2129,7 +2129,8 @@ bool isShader(CallingConv::ID cc) {
}
bool isGraphics(CallingConv::ID cc) {
- return isShader(cc) || cc == CallingConv::AMDGPU_Gfx;
+ return isShader(cc) || cc == CallingConv::AMDGPU_Gfx ||
+ cc == CallingConv::AMDGPU_Gfx_WholeWave;
}
bool isCompute(CallingConv::ID cc) {
diff --git a/llvm/lib/Target/AMDGPU/Utils/AMDGPUPALMetadata.cpp b/llvm/lib/Target/AMDGPU/Utils/AMDGPUPALMetadata.cpp
index b7e351bdb6d01..8cbce56808038 100644
--- a/llvm/lib/Target/AMDGPU/Utils/AMDGPUPALMetadata.cpp
+++ b/llvm/lib/Target/AMDGPU/Utils/AMDGPUPALMetadata.cpp
@@ -43,7 +43,7 @@ static const char *getStageName(CallingConv::ID CC) {
case CallingConv::AMDGPU_LS:
return ".ls";
case CallingConv::AMDGPU_Gfx:
- case CallingConv::AMDGPU_WholeWave:
+ case CallingConv::AMDGPU_Gfx_WholeWave:
llvm_unreachable("Callable shader has no hardware stage");
default:
return ".cs";
diff --git a/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll
index f18d8128a91ff..b68786b579dd2 100644
--- a/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll
@@ -1,7 +1,7 @@
; NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
; RUN: llc -global-isel=1 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -stop-after=irtranslator -verify-machineinstrs < %s | FileCheck %s
-define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
+define amdgpu_gfx_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
; CHECK-LABEL: name: basic_test
; CHECK: bb.1 (%ir-block.0):
; CHECK-NEXT: liveins: $vgpr0, $vgpr1
@@ -23,7 +23,7 @@ define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
}
; Make sure we don't crash if %active is not used at all.
-define amdgpu_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
+define amdgpu_gfx_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
; CHECK-LABEL: name: unused_active
; CHECK: bb.1 (%ir-block.0):
; CHECK-NEXT: liveins: $vgpr0, $vgpr1
@@ -37,7 +37,7 @@ define amdgpu_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
ret i32 14
}
-define amdgpu_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
+define amdgpu_gfx_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
; CHECK-LABEL: name: multiple_blocks
; CHECK: bb.1 (%ir-block.0):
; CHECK-NEXT: successors: %bb.2(0x40000000), %bb.3(0x40000000)
@@ -75,7 +75,7 @@ if.end:
ret i32 %e
}
-define amdgpu_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
+define amdgpu_gfx_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
; CHECK-LABEL: name: ret_64
; CHECK: bb.1 (%ir-block.0):
; CHECK-NEXT: liveins: $vgpr0, $vgpr1, $vgpr2, $vgpr3
diff --git a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
index 851dc5107a8a1..0bd87f493f1ac 100644
--- a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
@@ -2,7 +2,7 @@
; RUN: llc -global-isel=0 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -stop-after=finalize-isel -verify-machineinstrs < %s | FileCheck --check-prefix=DAGISEL %s
; RUN: llc -global-isel=1 -mtriple=amdgcn--amdpal -mcpu=gfx1200 -stop-after=finalize-isel -verify-machineinstrs < %s | FileCheck --check-prefix=GISEL %s
-define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
+define amdgpu_gfx_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: name: basic_test
; DAGISEL: bb.0 (%ir-block.0):
; DAGISEL-NEXT: liveins: $vgpr0, $vgpr1
@@ -42,7 +42,7 @@ define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
}
; Make sure we don't crash if %active is not used at all.
-define amdgpu_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
+define amdgpu_gfx_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: name: unused_active
; DAGISEL: bb.0 (%ir-block.0):
; DAGISEL-NEXT: [[SI_WHOLE_WAVE_FUNC_SETUP:%[0-9]+]]:sreg_32 = SI_WHOLE_WAVE_FUNC_SETUP implicit-def dead $exec, implicit $exec
@@ -62,7 +62,7 @@ define amdgpu_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
ret i32 14
}
-define amdgpu_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
+define amdgpu_gfx_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: name: multiple_blocks
; DAGISEL: bb.0 (%ir-block.0):
; DAGISEL-NEXT: successors: %bb.1(0x40000000), %bb.2(0x40000000)
@@ -126,7 +126,7 @@ if.end:
ret i32 %e
}
-define amdgpu_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
+define amdgpu_gfx_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
; DAGISEL-LABEL: name: ret_64
; DAGISEL: bb.0 (%ir-block.0):
; DAGISEL-NEXT: liveins: $vgpr0, $vgpr1, $vgpr2, $vgpr3
diff --git a/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
index 715244d39765f..039d68befe299 100644
--- a/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
@@ -6,7 +6,7 @@
; The EXEC mask should be set to -1 for the duration of the function
; and restored to its original value in the epilogue.
; We will also need to restore the inactive lanes for any allocated VGPRs.
-define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
+define amdgpu_gfx_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: basic_test:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
@@ -61,7 +61,7 @@ define amdgpu_whole_wave i32 @basic_test(i1 %active, i32 %a, i32 %b) {
}
; Make sure we don't crash if %active is not used at all.
-define amdgpu_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
+define amdgpu_gfx_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: unused_active:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
@@ -102,7 +102,7 @@ define amdgpu_whole_wave i32 @unused_active(i1 %active, i32 %a, i32 %b) {
; For any used VGPRs (including those used for SGPR spills), we need to restore the inactive lanes.
; For CSR VGPRs, we need to restore all lanes.
-define amdgpu_whole_wave i32 @csr(i1 %active, i32 %a, i32 %b) {
+define amdgpu_gfx_whole_wave i32 @csr(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: csr:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
@@ -189,7 +189,7 @@ define amdgpu_whole_wave i32 @csr(i1 %active, i32 %a, i32 %b) {
}
; Save and restore all lanes of v40.
-define amdgpu_whole_wave void @csr_vgpr_only(i1 %active, i32 %a, i32 %b) {
+define amdgpu_gfx_whole_wave void @csr_vgpr_only(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: csr_vgpr_only:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
@@ -229,7 +229,7 @@ define amdgpu_whole_wave void @csr_vgpr_only(i1 %active, i32 %a, i32 %b) {
ret void
}
-define amdgpu_whole_wave void @sgpr_spill_only(i1 %active, i32 %a, i32 %b) {
+define amdgpu_gfx_whole_wave void @sgpr_spill_only(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: sgpr_spill_only:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
@@ -279,7 +279,7 @@ define amdgpu_whole_wave void @sgpr_spill_only(i1 %active, i32 %a, i32 %b) {
ret void
}
-define amdgpu_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
+define amdgpu_gfx_whole_wave i32 @multiple_blocks(i1 %active, i32 %a, i32 %b) {
; DAGISEL-LABEL: multiple_blocks:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
@@ -352,7 +352,7 @@ if.end:
ret i32 %e
}
-define amdgpu_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
+define amdgpu_gfx_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
; DAGISEL-LABEL: ret_64:
; DAGISEL: ; %bb.0:
; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
From b1a17c6e4d6c0ff8c42e5c20d8c238f2c9c924ea Mon Sep 17 00:00:00 2001
From: Diana Picus <diana-magda.picus at amd.com>
Date: Tue, 25 Mar 2025 15:00:34 +0100
Subject: [PATCH 11/11] Fix formatting
---
llvm/lib/Target/AMDGPU/SIFrameLowering.cpp | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp b/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
index 02ce1007a3fcc..a892fe57649bd 100644
--- a/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIFrameLowering.cpp
@@ -1668,7 +1668,8 @@ void SIFrameLowering::determineCalleeSaves(MachineFunction &MF,
if (MFI->isWholeWaveFunction()) {
// In practice, all the VGPRs are WWM registers, and we will need to save at
// least their inactive lanes. Add them to WWMReservedRegs.
- assert(!NeedExecCopyReservedReg && "Whole wave functions can use the reg mapped for their i1 argument");
+ assert(!NeedExecCopyReservedReg &&
+ "Whole wave functions can use the reg mapped for their i1 argument");
for (MCRegister Reg : AMDGPU::VGPR_32RegClass)
if (MF.getRegInfo().isPhysRegModified(Reg)) {
MFI->reserveWWMRegister(Reg);
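
For reference, a minimal sketch of how the renamed calling convention reads at the IR level after this patch series (the function name and body below are illustrative, not taken from the patch; the signature shape matches the tests above):

```llvm
; A whole wave function runs with all lanes enabled for its duration;
; the leading i1 %active argument tells the body which lanes were
; actually active at the call site. Note the renamed calling
; convention: amdgpu_gfx_whole_wave (previously amdgpu_whole_wave).
define amdgpu_gfx_whole_wave i32 @select_by_lane(i1 %active, i32 %a, i32 %b) {
  %x = select i1 %active, i32 %a, i32 %b
  ret i32 %x
}
```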