[llvm-branch-commits] [llvm] [NFC][HLSL][DirectX] Consolidate `ResourceClassNames` (PR #152213)
Finn Plummer via llvm-branch-commits
llvm-branch-commits at lists.llvm.org
Thu Aug 7 09:33:15 PDT 2025
Message-ID:
In-Reply-To: <llvm.org/llvm/llvm-project/pull/152213 at github.com>
https://github.com/inbelic updated https://github.com/llvm/llvm-project/pull/152213
>From d08c2977e86fc7220b3b9c5cb83616705c10046e Mon Sep 17 00:00:00 2001
From: Stanislav Mekhanoshin <Stanislav.Mekhanoshin at amd.com>
Date: Tue, 5 Aug 2025 14:35:48 -0700
Subject: [PATCH 001/171] [AMDGPU] Add MC support for new gfx1250
src_flat_scratch_base_lo/hi (#152203)
---
llvm/lib/Target/AMDGPU/AMDGPU.td | 8 ++
.../AMDGPU/AsmParser/AMDGPUAsmParser.cpp | 89 ++++++++++---------
.../Disassembler/AMDGPUDisassembler.cpp | 3 +
llvm/lib/Target/AMDGPU/GCNSubtarget.h | 5 ++
llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp | 2 +
llvm/lib/Target/AMDGPU/SIRegisterInfo.td | 21 ++++-
.../Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp | 2 +
llvm/test/MC/AMDGPU/gfx1250_asm_operands.s | 29 ++++++
.../AMDGPU/gfx1250_dasm_operands.txt | 22 +++++
9 files changed, 139 insertions(+), 42 deletions(-)
create mode 100644 llvm/test/MC/AMDGPU/gfx1250_asm_operands.s
create mode 100644 llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_operands.txt
diff --git a/llvm/lib/Target/AMDGPU/AMDGPU.td b/llvm/lib/Target/AMDGPU/AMDGPU.td
index 18f3c4761748a..d84f512f4976d 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPU.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPU.td
@@ -1365,6 +1365,13 @@ def FeatureXF32Insts : SubtargetFeature<"xf32-insts",
"v_mfma_f32_16x16x8_xf32 and v_mfma_f32_32x32x4_xf32"
>;
+def FeatureGloballyAddressableScratch : SubtargetFeature<
+ "globally-addressable-scratch",
+ "HasGloballyAddressableScratch",
+ "true",
+ "FLAT instructions can access scratch memory for any thread in any wave"
+>;
+
// FIXME: Remove after all users are migrated to attribute.
def FeatureDynamicVGPR : SubtargetFeature <"dynamic-vgpr",
"DynamicVGPR",
@@ -2055,6 +2062,7 @@ def FeatureISAVersion12_50 : FeatureSet<
FeatureAtomicFMinFMaxF64FlatInsts,
FeatureFlatBufferGlobalAtomicFaddF64Inst,
FeatureMemoryAtomicFAddF32DenormalSupport,
+ FeatureGloballyAddressableScratch,
FeatureKernargPreload,
FeatureVmemPrefInsts,
FeatureLshlAddU64Inst,
diff --git a/llvm/lib/Target/AMDGPU/AsmParser/AMDGPUAsmParser.cpp b/llvm/lib/Target/AMDGPU/AsmParser/AMDGPUAsmParser.cpp
index d33765db9cc7d..ff8efd2debc21 100644
--- a/llvm/lib/Target/AMDGPU/AsmParser/AMDGPUAsmParser.cpp
+++ b/llvm/lib/Target/AMDGPU/AsmParser/AMDGPUAsmParser.cpp
@@ -1620,6 +1620,10 @@ class AMDGPUAsmParser : public MCTargetAsmParser {
return getFeatureBits()[AMDGPU::FeaturePartialNSAEncoding];
}
+ bool hasGloballyAddressableScratch() const {
+ return getFeatureBits()[AMDGPU::FeatureGloballyAddressableScratch];
+ }
+
unsigned getNSAMaxSize(bool HasSampler = false) const {
return AMDGPU::getNSAMaxSize(getSTI(), HasSampler);
}
@@ -2759,46 +2763,48 @@ static int getRegClass(RegisterKind Is, unsigned RegWidth) {
static MCRegister getSpecialRegForName(StringRef RegName) {
return StringSwitch<unsigned>(RegName)
- .Case("exec", AMDGPU::EXEC)
- .Case("vcc", AMDGPU::VCC)
- .Case("flat_scratch", AMDGPU::FLAT_SCR)
- .Case("xnack_mask", AMDGPU::XNACK_MASK)
- .Case("shared_base", AMDGPU::SRC_SHARED_BASE)
- .Case("src_shared_base", AMDGPU::SRC_SHARED_BASE)
- .Case("shared_limit", AMDGPU::SRC_SHARED_LIMIT)
- .Case("src_shared_limit", AMDGPU::SRC_SHARED_LIMIT)
- .Case("private_base", AMDGPU::SRC_PRIVATE_BASE)
- .Case("src_private_base", AMDGPU::SRC_PRIVATE_BASE)
- .Case("private_limit", AMDGPU::SRC_PRIVATE_LIMIT)
- .Case("src_private_limit", AMDGPU::SRC_PRIVATE_LIMIT)
- .Case("pops_exiting_wave_id", AMDGPU::SRC_POPS_EXITING_WAVE_ID)
- .Case("src_pops_exiting_wave_id", AMDGPU::SRC_POPS_EXITING_WAVE_ID)
- .Case("lds_direct", AMDGPU::LDS_DIRECT)
- .Case("src_lds_direct", AMDGPU::LDS_DIRECT)
- .Case("m0", AMDGPU::M0)
- .Case("vccz", AMDGPU::SRC_VCCZ)
- .Case("src_vccz", AMDGPU::SRC_VCCZ)
- .Case("execz", AMDGPU::SRC_EXECZ)
- .Case("src_execz", AMDGPU::SRC_EXECZ)
- .Case("scc", AMDGPU::SRC_SCC)
- .Case("src_scc", AMDGPU::SRC_SCC)
- .Case("tba", AMDGPU::TBA)
- .Case("tma", AMDGPU::TMA)
- .Case("flat_scratch_lo", AMDGPU::FLAT_SCR_LO)
- .Case("flat_scratch_hi", AMDGPU::FLAT_SCR_HI)
- .Case("xnack_mask_lo", AMDGPU::XNACK_MASK_LO)
- .Case("xnack_mask_hi", AMDGPU::XNACK_MASK_HI)
- .Case("vcc_lo", AMDGPU::VCC_LO)
- .Case("vcc_hi", AMDGPU::VCC_HI)
- .Case("exec_lo", AMDGPU::EXEC_LO)
- .Case("exec_hi", AMDGPU::EXEC_HI)
- .Case("tma_lo", AMDGPU::TMA_LO)
- .Case("tma_hi", AMDGPU::TMA_HI)
- .Case("tba_lo", AMDGPU::TBA_LO)
- .Case("tba_hi", AMDGPU::TBA_HI)
- .Case("pc", AMDGPU::PC_REG)
- .Case("null", AMDGPU::SGPR_NULL)
- .Default(AMDGPU::NoRegister);
+ .Case("exec", AMDGPU::EXEC)
+ .Case("vcc", AMDGPU::VCC)
+ .Case("flat_scratch", AMDGPU::FLAT_SCR)
+ .Case("xnack_mask", AMDGPU::XNACK_MASK)
+ .Case("shared_base", AMDGPU::SRC_SHARED_BASE)
+ .Case("src_shared_base", AMDGPU::SRC_SHARED_BASE)
+ .Case("shared_limit", AMDGPU::SRC_SHARED_LIMIT)
+ .Case("src_shared_limit", AMDGPU::SRC_SHARED_LIMIT)
+ .Case("private_base", AMDGPU::SRC_PRIVATE_BASE)
+ .Case("src_private_base", AMDGPU::SRC_PRIVATE_BASE)
+ .Case("private_limit", AMDGPU::SRC_PRIVATE_LIMIT)
+ .Case("src_private_limit", AMDGPU::SRC_PRIVATE_LIMIT)
+ .Case("src_flat_scratch_base_lo", AMDGPU::SRC_FLAT_SCRATCH_BASE_LO)
+ .Case("src_flat_scratch_base_hi", AMDGPU::SRC_FLAT_SCRATCH_BASE_HI)
+ .Case("pops_exiting_wave_id", AMDGPU::SRC_POPS_EXITING_WAVE_ID)
+ .Case("src_pops_exiting_wave_id", AMDGPU::SRC_POPS_EXITING_WAVE_ID)
+ .Case("lds_direct", AMDGPU::LDS_DIRECT)
+ .Case("src_lds_direct", AMDGPU::LDS_DIRECT)
+ .Case("m0", AMDGPU::M0)
+ .Case("vccz", AMDGPU::SRC_VCCZ)
+ .Case("src_vccz", AMDGPU::SRC_VCCZ)
+ .Case("execz", AMDGPU::SRC_EXECZ)
+ .Case("src_execz", AMDGPU::SRC_EXECZ)
+ .Case("scc", AMDGPU::SRC_SCC)
+ .Case("src_scc", AMDGPU::SRC_SCC)
+ .Case("tba", AMDGPU::TBA)
+ .Case("tma", AMDGPU::TMA)
+ .Case("flat_scratch_lo", AMDGPU::FLAT_SCR_LO)
+ .Case("flat_scratch_hi", AMDGPU::FLAT_SCR_HI)
+ .Case("xnack_mask_lo", AMDGPU::XNACK_MASK_LO)
+ .Case("xnack_mask_hi", AMDGPU::XNACK_MASK_HI)
+ .Case("vcc_lo", AMDGPU::VCC_LO)
+ .Case("vcc_hi", AMDGPU::VCC_HI)
+ .Case("exec_lo", AMDGPU::EXEC_LO)
+ .Case("exec_hi", AMDGPU::EXEC_HI)
+ .Case("tma_lo", AMDGPU::TMA_LO)
+ .Case("tma_hi", AMDGPU::TMA_HI)
+ .Case("tba_lo", AMDGPU::TBA_LO)
+ .Case("tba_hi", AMDGPU::TBA_HI)
+ .Case("pc", AMDGPU::PC_REG)
+ .Case("null", AMDGPU::SGPR_NULL)
+ .Default(AMDGPU::NoRegister);
}
bool AMDGPUAsmParser::ParseRegister(MCRegister &RegNo, SMLoc &StartLoc,
@@ -6744,6 +6750,9 @@ bool AMDGPUAsmParser::subtargetHasRegister(const MCRegisterInfo &MRI,
case SRC_PRIVATE_LIMIT_LO:
case SRC_PRIVATE_LIMIT:
return isGFX9Plus();
+ case SRC_FLAT_SCRATCH_BASE_LO:
+ case SRC_FLAT_SCRATCH_BASE_HI:
+ return hasGloballyAddressableScratch();
case SRC_POPS_EXITING_WAVE_ID:
return isGFX9Plus() && !isGFX11Plus();
case TBA:
diff --git a/llvm/lib/Target/AMDGPU/Disassembler/AMDGPUDisassembler.cpp b/llvm/lib/Target/AMDGPU/Disassembler/AMDGPUDisassembler.cpp
index fef0d7eb45a8c..fb7d634e62272 100644
--- a/llvm/lib/Target/AMDGPU/Disassembler/AMDGPUDisassembler.cpp
+++ b/llvm/lib/Target/AMDGPU/Disassembler/AMDGPUDisassembler.cpp
@@ -1914,6 +1914,8 @@ MCOperand AMDGPUDisassembler::decodeSpecialReg32(unsigned Val) const {
return isGFX11Plus() ? createRegOperand(M0) : createRegOperand(SGPR_NULL);
case 126: return createRegOperand(EXEC_LO);
case 127: return createRegOperand(EXEC_HI);
+ case 230: return createRegOperand(SRC_FLAT_SCRATCH_BASE_LO);
+ case 231: return createRegOperand(SRC_FLAT_SCRATCH_BASE_HI);
case 235: return createRegOperand(SRC_SHARED_BASE_LO);
case 236: return createRegOperand(SRC_SHARED_LIMIT_LO);
case 237: return createRegOperand(SRC_PRIVATE_BASE_LO);
@@ -1947,6 +1949,7 @@ MCOperand AMDGPUDisassembler::decodeSpecialReg64(unsigned Val) const {
return createRegOperand(SGPR_NULL);
break;
case 126: return createRegOperand(EXEC);
+ case 230: return createRegOperand(SRC_FLAT_SCRATCH_BASE_LO);
case 235: return createRegOperand(SRC_SHARED_BASE);
case 236: return createRegOperand(SRC_SHARED_LIMIT);
case 237: return createRegOperand(SRC_PRIVATE_BASE);
diff --git a/llvm/lib/Target/AMDGPU/GCNSubtarget.h b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
index c84ba1a0a9d47..5530886831cae 100644
--- a/llvm/lib/Target/AMDGPU/GCNSubtarget.h
+++ b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
@@ -281,6 +281,7 @@ class GCNSubtarget final : public AMDGPUGenSubtargetInfo,
bool RequiresCOV6 = false;
bool UseBlockVGPROpsForCSR = false;
+ bool HasGloballyAddressableScratch = false;
// Dummy feature to use for assembler in tablegen.
bool FeatureDisable = false;
@@ -1325,6 +1326,10 @@ class GCNSubtarget final : public AMDGPUGenSubtargetInfo,
bool useVGPRBlockOpsForCSR() const { return UseBlockVGPROpsForCSR; }
+ bool hasGloballyAddressableScratch() const {
+ return HasGloballyAddressableScratch;
+ }
+
bool hasVALUMaskWriteHazard() const { return getGeneration() == GFX11; }
bool hasVALUReadSGPRHazard() const { return GFX12Insts && !GFX1250Insts; }
diff --git a/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp b/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp
index f3acc5c2ea159..ae0f304ea3041 100644
--- a/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp
@@ -598,6 +598,8 @@ BitVector SIRegisterInfo::getReservedRegs(const MachineFunction &MF) const {
reserveRegisterTuples(Reserved, AMDGPU::SRC_SHARED_LIMIT);
reserveRegisterTuples(Reserved, AMDGPU::SRC_PRIVATE_BASE);
reserveRegisterTuples(Reserved, AMDGPU::SRC_PRIVATE_LIMIT);
+ reserveRegisterTuples(Reserved, AMDGPU::SRC_FLAT_SCRATCH_BASE_LO);
+ reserveRegisterTuples(Reserved, AMDGPU::SRC_FLAT_SCRATCH_BASE_HI);
// Reserve async counters pseudo registers
reserveRegisterTuples(Reserved, AMDGPU::ASYNCcnt);
diff --git a/llvm/lib/Target/AMDGPU/SIRegisterInfo.td b/llvm/lib/Target/AMDGPU/SIRegisterInfo.td
index 08d07c927e4c4..ed6b973580502 100644
--- a/llvm/lib/Target/AMDGPU/SIRegisterInfo.td
+++ b/llvm/lib/Target/AMDGPU/SIRegisterInfo.td
@@ -246,6 +246,22 @@ defm SRC_SHARED_LIMIT : ApertureRegister<"src_shared_limit", 236>;
defm SRC_PRIVATE_BASE : ApertureRegister<"src_private_base", 237>;
defm SRC_PRIVATE_LIMIT : ApertureRegister<"src_private_limit", 238>;
+let isConstant = true in {
+ defm SRC_FLAT_SCRATCH_BASE_LO : SIRegLoHi16<"src_flat_scratch_base_lo", 230>;
+ defm SRC_FLAT_SCRATCH_BASE_HI : SIRegLoHi16<"src_flat_scratch_base_hi", 231>;
+
+ // Using src_flat_scratch_base_lo in a 64-bit context gets the full 64-bit
+ // hi:lo value.
+ def SRC_FLAT_SCRATCH_BASE :
+ RegisterWithSubRegs<"src_flat_scratch_base_lo",
+ [SRC_FLAT_SCRATCH_BASE_LO,
+ SRC_FLAT_SCRATCH_BASE_HI]> {
+ let Namespace = "AMDGPU";
+ let SubRegIndices = [sub0, sub1];
+ let HWEncoding = SRC_FLAT_SCRATCH_BASE_LO.HWEncoding;
+ }
+}
+
defm SRC_POPS_EXITING_WAVE_ID : SIRegLoHi16<"src_pops_exiting_wave_id", 239>;
// Not addressable
@@ -765,7 +781,7 @@ def SReg_32_XM0_XEXEC : SIRegisterClass<"AMDGPU", [i32, f32, i16, f16, bf16, v2i
SGPR_NULL, SGPR_NULL_HI, TTMP_32, TMA_LO, TMA_HI, TBA_LO, TBA_HI, SRC_SHARED_BASE_LO,
SRC_SHARED_LIMIT_LO, SRC_PRIVATE_BASE_LO, SRC_PRIVATE_LIMIT_LO, SRC_SHARED_BASE_HI,
SRC_SHARED_LIMIT_HI, SRC_PRIVATE_BASE_HI, SRC_PRIVATE_LIMIT_HI, SRC_POPS_EXITING_WAVE_ID,
- SRC_VCCZ, SRC_EXECZ, SRC_SCC)> {
+ SRC_VCCZ, SRC_EXECZ, SRC_SCC, SRC_FLAT_SCRATCH_BASE_LO, SRC_FLAT_SCRATCH_BASE_HI)> {
let AllocationPriority = 0;
}
@@ -776,7 +792,8 @@ def SReg_LO16 : SIRegisterClass<"AMDGPU", [i16, f16, bf16], 16,
SRC_SHARED_LIMIT_LO_LO16, SRC_PRIVATE_BASE_LO_LO16, SRC_PRIVATE_LIMIT_LO_LO16,
SRC_SHARED_BASE_HI_LO16, SRC_SHARED_LIMIT_HI_LO16, SRC_PRIVATE_BASE_HI_LO16,
SRC_PRIVATE_LIMIT_HI_LO16, SRC_POPS_EXITING_WAVE_ID_LO16, SRC_VCCZ_LO16,
- SRC_EXECZ_LO16, SRC_SCC_LO16, EXEC_LO_LO16, EXEC_HI_LO16, M0_CLASS_LO16)> {
+ SRC_EXECZ_LO16, SRC_SCC_LO16, EXEC_LO_LO16, EXEC_HI_LO16, M0_CLASS_LO16,
+ SRC_FLAT_SCRATCH_BASE_LO_LO16, SRC_FLAT_SCRATCH_BASE_HI_LO16)> {
let Size = 16;
let isAllocatable = 0;
let BaseClassOrder = 16;
diff --git a/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp b/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp
index 65fa0884b11c9..00dcb9b52d4bd 100644
--- a/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp
@@ -2654,6 +2654,8 @@ bool isInlineValue(unsigned Reg) {
case AMDGPU::SRC_PRIVATE_BASE:
case AMDGPU::SRC_PRIVATE_LIMIT_LO:
case AMDGPU::SRC_PRIVATE_LIMIT:
+ case AMDGPU::SRC_FLAT_SCRATCH_BASE_LO:
+ case AMDGPU::SRC_FLAT_SCRATCH_BASE_HI:
case AMDGPU::SRC_POPS_EXITING_WAVE_ID:
return true;
case AMDGPU::SRC_VCCZ:
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_operands.s b/llvm/test/MC/AMDGPU/gfx1250_asm_operands.s
new file mode 100644
index 0000000000000..8b7465b5df574
--- /dev/null
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_operands.s
@@ -0,0 +1,29 @@
+// RUN: not llvm-mc -triple=amdgcn -mcpu=gfx1200 -show-encoding %s 2>&1 | FileCheck --check-prefixes=GFX1200-ERR %s
+// RUN: llvm-mc -triple=amdgcn -mcpu=gfx1250 -show-encoding %s | FileCheck --check-prefix=GFX1250 %s
+
+s_mov_b32 s0, src_flat_scratch_base_lo
+// GFX1200-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: src_flat_scratch_base_lo register not available on this GPU
+// GFX1250: encoding: [0xe6,0x00,0x80,0xbe]
+
+s_mov_b32 s0, src_flat_scratch_base_hi
+// GFX1200-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: src_flat_scratch_base_hi register not available on this GPU
+// GFX1250: encoding: [0xe7,0x00,0x80,0xbe]
+
+s_mov_b64 s[0:1], src_flat_scratch_base_lo
+// GFX1200-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: src_flat_scratch_base_lo register not available on this GPU
+// GFX1250: encoding: [0xe6,0x01,0x80,0xbe]
+
+s_mov_b64 s[0:1], shared_base
+// GFX1250: encoding: [0xeb,0x01,0x80,0xbe]
+
+s_mov_b64 s[0:1], src_shared_base
+// GFX1250: encoding: [0xeb,0x01,0x80,0xbe]
+
+s_mov_b64 s[0:1], shared_limit
+// GFX1250: encoding: [0xec,0x01,0x80,0xbe]
+
+s_mov_b64 s[0:1], src_shared_limit
+// GFX1250: encoding: [0xec,0x01,0x80,0xbe]
+
+s_getreg_b32 s1, hwreg(33)
+// GFX1250: encoding: [0x21,0xf8,0x81,0xb8]
diff --git a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_operands.txt b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_operands.txt
new file mode 100644
index 0000000000000..a3e7e570ab9b6
--- /dev/null
+++ b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_operands.txt
@@ -0,0 +1,22 @@
+# RUN: llvm-mc -triple=amdgcn -mcpu=gfx1250 -disassemble -show-encoding < %s | FileCheck -strict-whitespace -check-prefix=GFX1250 %s
+
+# GFX1250: s_mov_b32 s0, src_flat_scratch_base_lo ; encoding: [0xe6,0x00,0x80,0xbe]
+0xe6,0x00,0x80,0xbe
+
+# GFX1250: s_mov_b32 s0, src_flat_scratch_base_hi ; encoding: [0xe7,0x00,0x80,0xbe]
+0xe7,0x00,0x80,0xbe
+
+# GFX1250: s_mov_b64 s[0:1], src_flat_scratch_base_lo ; encoding: [0xe6,0x01,0x80,0xbe]
+0xe6,0x01,0x80,0xbe
+
+# GFX1250: s_mov_b64 s[0:1], src_shared_base ; encoding: [0xeb,0x01,0x80,0xbe]
+0xeb,0x01,0x80,0xbe
+
+# GFX1250: s_mov_b64 s[0:1], src_shared_base ; encoding: [0xeb,0x01,0x80,0xbe]
+0xeb,0x01,0x80,0xbe
+
+# GFX1250: s_mov_b64 s[0:1], src_shared_limit ; encoding: [0xec,0x01,0x80,0xbe]
+0xec,0x01,0x80,0xbe
+
+# GFX1250: s_mov_b64 s[0:1], src_shared_limit ; encoding: [0xec,0x01,0x80,0xbe]
+0xec,0x01,0x80,0xbe
>From 06884d0204ecffc8f1164d520bab6117ae06baee Mon Sep 17 00:00:00 2001
From: Charitha Saumya <136391709+charithaintc at users.noreply.github.com>
Date: Tue, 5 Aug 2025 14:42:14 -0700
Subject: [PATCH 002/171] [mlir][xegpu] Bug fix in UpdateNdOffset
distribution. (#150545)
The root cause is that the UpdateNdOffset source operand did not retain
its layout when yielded by the warp op. The `warp_execute_on_lane_0` op
expects the TensorDesc type to remain unchanged during distribution out
of its region. We use UnrealizedCasts to reconcile this mismatch outside
the warp op (via `resolveDistributedTy`).
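
For illustration only, a minimal sketch of that reconciliation step with a
hypothetical helper name (the actual helper is `resolveDistributedTy`, which
additionally tags the cast with `resolve_simt_type_mismatch`, as the new
CHECK lines below show): when the type yielded by the warp op differs from
the type its user expects, an unrealized conversion cast bridges the two.

  #include "mlir/IR/BuiltinOps.h"
  #include "mlir/IR/PatternMatch.h"

  // Hedged sketch, not the verbatim helper: bridge a value whose
  // distributed type dropped the layout to the type its user expects.
  static mlir::Value reconcileDistributedType(mlir::Value val,
                                              mlir::Type expectedTy,
                                              mlir::PatternRewriter &rewriter) {
    if (val.getType() == expectedTy)
      return val;
    auto cast = rewriter.create<mlir::UnrealizedConversionCastOp>(
        val.getLoc(), mlir::TypeRange{expectedTy}, mlir::ValueRange{val});
    return cast->getResult(0);
  }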
---
.../Transforms/XeGPUSubgroupDistribute.cpp | 62 +++++++------------
.../Dialect/XeGPU/subgroup-distribute.mlir | 24 ++++++-
2 files changed, 45 insertions(+), 41 deletions(-)
diff --git a/mlir/lib/Dialect/XeGPU/Transforms/XeGPUSubgroupDistribute.cpp b/mlir/lib/Dialect/XeGPU/Transforms/XeGPUSubgroupDistribute.cpp
index 8957ea5399ea2..2088c3c7fc5ec 100644
--- a/mlir/lib/Dialect/XeGPU/Transforms/XeGPUSubgroupDistribute.cpp
+++ b/mlir/lib/Dialect/XeGPU/Transforms/XeGPUSubgroupDistribute.cpp
@@ -277,22 +277,13 @@ struct CreateNdDescDistribution final : public gpu::WarpDistributionPattern {
descOp, "the tensor descriptor lacks layout attribute");
SmallVector<size_t> newRetIndices;
- SmallVector<Value> newYieldValues;
- SmallVector<Type> newYieldTypes;
-
- for (Value operand : descOp->getOperands()) {
- newYieldValues.push_back(operand);
- newYieldTypes.push_back(operand.getType());
- }
rewriter.setInsertionPoint(warpOp);
gpu::WarpExecuteOnLane0Op newWarpOp = moveRegionToNewWarpOpAndAppendReturns(
- rewriter, warpOp, /* new yieled values = */ newYieldValues,
- /* new yielded types = */ newYieldTypes, newRetIndices);
+ rewriter, warpOp, /* new yielded values = */ descOp->getOperands(),
+ /* new yielded types = */ descOp.getOperandTypes(), newRetIndices);
- SmallVector<Value> newDescOperands;
- for (size_t i : newRetIndices) {
- newDescOperands.push_back(newWarpOp.getResult(i));
- }
+ SmallVector<Value> newDescOperands = llvm::map_to_vector(
+ newRetIndices, [&](size_t i) { return newWarpOp.getResult(i); });
rewriter.setInsertionPointAfter(newWarpOp);
xegpu::TensorDescType distributedTensorDescTy =
descOp.getType().dropLayouts(); // Distributed tensor descriptor type
@@ -696,39 +687,30 @@ struct UpdateNdOffsetDistribution final : public gpu::WarpDistributionPattern {
warpOp, "warp result is not a xegpu::UpdateNdOffset op");
auto updateOp = operand->get().getDefiningOp<xegpu::UpdateNdOffsetOp>();
unsigned operandIdx = operand->getOperandNumber();
- // new update op does not have layout attribute.
- xegpu::TensorDescType newTensorDescTy =
- updateOp.getTensorDescType().dropLayouts();
- SmallVector<Value, 3> newYieldValues;
- SmallVector<Type, 3> newYieldTypes;
- for (Value operand : updateOp->getOperands()) {
- newYieldValues.push_back(operand);
- if (isa<xegpu::TensorDescType>(operand.getType())) {
- newYieldTypes.push_back(newTensorDescTy);
- } else {
- newYieldTypes.push_back(operand.getType());
- }
- }
SmallVector<size_t> newRetIndices;
gpu::WarpExecuteOnLane0Op newWarpOp = moveRegionToNewWarpOpAndAppendReturns(
- rewriter, warpOp, newYieldValues, newYieldTypes, newRetIndices);
+ rewriter, warpOp, updateOp->getOperands(), updateOp.getOperandTypes(),
+ newRetIndices);
rewriter.setInsertionPointAfter(newWarpOp);
- SmallVector<Value> newUpdateOperands;
- for (size_t i : newRetIndices) {
- // For the tensor descriptor operand, the layout attribute is dropped
- // after distribution. Types needs to be resolved in this case.
- if (isa<xegpu::TensorDescType>(newWarpOp.getResult(i).getType())) {
- newUpdateOperands.push_back(resolveDistributedTy(
- newWarpOp.getResult(i), newTensorDescTy, rewriter));
- } else {
- newUpdateOperands.push_back(newWarpOp.getResult(i));
- }
- }
+ // new update op does not have layout attribute.
+ xegpu::TensorDescType distributedTensorDescTy =
+ updateOp.getTensorDescType().dropLayouts();
+ SmallVector<Value> newUpdateOperands =
+ llvm::map_to_vector(newRetIndices, [&](size_t i) {
+ // For the tensor descriptor operand, the layout attribute is
+ // dropped after distribution. Types need to be resolved in this
+ // case.
+ if (isa<xegpu::TensorDescType>(newWarpOp.getResult(i).getType())) {
+ return resolveDistributedTy(newWarpOp.getResult(i),
+ distributedTensorDescTy, rewriter);
+ }
+ return newWarpOp.getResult(i);
+ });
// Create a new update op outside the warp op.
auto newUpdateOp = xegpu::UpdateNdOffsetOp::create(
- rewriter, newWarpOp.getLoc(), newTensorDescTy, newUpdateOperands,
- updateOp->getAttrs());
+ rewriter, newWarpOp.getLoc(), distributedTensorDescTy,
+ newUpdateOperands, updateOp->getAttrs());
xegpu::removeLayoutAttrs(newUpdateOp);
Value distributedVal = newWarpOp.getResult(operandIdx);
// Resolve the distributed type with the original type.
diff --git a/mlir/test/Dialect/XeGPU/subgroup-distribute.mlir b/mlir/test/Dialect/XeGPU/subgroup-distribute.mlir
index e78ae4a17710b..54ef56e013abb 100644
--- a/mlir/test/Dialect/XeGPU/subgroup-distribute.mlir
+++ b/mlir/test/Dialect/XeGPU/subgroup-distribute.mlir
@@ -1,4 +1,4 @@
-// RUN: mlir-opt -xegpu-subgroup-distribute -canonicalize -cse -split-input-file %s | FileCheck %s
+// RUN: mlir-opt -xegpu-subgroup-distribute -allow-unregistered-dialect -canonicalize -cse -split-input-file %s | FileCheck %s
// CHECK-LABEL: gpu.func @store_nd_1d
// CHECK: (%[[ARG0:[0-9a-zA-Z]+]]: memref<16xf32>) {
@@ -265,6 +265,28 @@ gpu.module @test {
}
}
+// -----
+// Explicitly check that the update_nd_offset op's source retains its layout when yielded from the warp op (PR150545)
+// CHECK-LABEL: gpu.func @check_update_nd_offset_distributed_tensor_desc
+// CHECK: %[[W:.*]] = gpu.warp_execute_on_lane_0(%{{.*}})[16] ->
+// CHECK-SAME: (!xegpu.tensor_desc<16x16xf32, #xegpu.layout<lane_layout = [1, 16], lane_data = [1, 1]>>) {
+// CHECK: %[[T0:.*]] = "some_op"() : () -> !xegpu.tensor_desc<16x16xf32, #xegpu.layout<lane_layout = [1, 16], lane_data = [1, 1]>>
+// CHECK: gpu.yield %[[T0]] : !xegpu.tensor_desc<16x16xf32, #xegpu.layout<lane_layout = [1, 16], lane_data = [1, 1]>>
+// CHECK: }
+// CHECK: %[[T1:.*]] = builtin.unrealized_conversion_cast %[[W]] :
+// CHECK-SAME: !xegpu.tensor_desc<16x16xf32, #xegpu.layout<lane_layout = [1, 16], lane_data = [1, 1]>> to !xegpu.tensor_desc<16x16xf32> {resolve_simt_type_mismatch}
+// CHECK: xegpu.update_nd_offset %[[T1]], [%{{.*}}] : !xegpu.tensor_desc<16x16xf32>
+gpu.module @test {
+ gpu.func @check_update_nd_offset_distributed_tensor_desc() {
+ %c32 = arith.constant 32 : index
+ %cst = arith.constant {layout_result_0 = #xegpu.layout<lane_layout = [1, 16], lane_data = [1, 1]>} dense<1.000000e+00> : vector<16x16xf32>
+ %0 = "some_op"() : () -> !xegpu.tensor_desc<16x16xf32, #xegpu.layout<lane_layout = [1, 16], lane_data = [1, 1]>>
+ %1 = xegpu.update_nd_offset %0, [%c32, %c32] : !xegpu.tensor_desc<16x16xf32, #xegpu.layout<lane_layout = [1, 16], lane_data = [1, 1]>>
+ xegpu.store_nd %cst, %1 : vector<16x16xf32>, !xegpu.tensor_desc<16x16xf32, #xegpu.layout<lane_layout = [1, 16], lane_data = [1, 1]>>
+ gpu.return
+ }
+}
+
// -----
// CHECK-LABEL: gpu.func @prefetch_1d
// CHECK: (%[[ARG0:[0-9a-zA-Z]+]]: memref<256xf16>) {
>From 81ed75679dcfb0b60764db1c8e0a91065a26a742 Mon Sep 17 00:00:00 2001
From: keinflue <keinflue at posteo.de>
Date: Tue, 5 Aug 2025 23:44:15 +0200
Subject: [PATCH 003/171] [clang] Fix constant evaluation of member pointer
access into sibling class. (#150829)
HandleMemberPointerAccess checked whether the lvalue path in a member
pointer access matched the base classes of the class containing the
member, but neglected to perform the same check for the containing class
itself, thereby failing to reject access attempts to members of direct
sibling classes.
Fixes #150705.
Fixes #150709.
---
clang/docs/ReleaseNotes.rst | 20 +++++++++++++
clang/lib/AST/ExprConstant.cpp | 21 +++++++++++++
.../SemaCXX/constant-expression-cxx11.cpp | 30 +++++++++++++++++++
.../SemaCXX/constant-expression-cxx2a.cpp | 13 ++++++++
4 files changed, 84 insertions(+)
diff --git a/clang/docs/ReleaseNotes.rst b/clang/docs/ReleaseNotes.rst
index 9231f2cae9bfa..17e3df467593d 100644
--- a/clang/docs/ReleaseNotes.rst
+++ b/clang/docs/ReleaseNotes.rst
@@ -45,6 +45,26 @@ C++ Specific Potentially Breaking Changes
regressions if your build system supports two-phase compilation model but haven't support
reduced BMI or it is a compiler bug or a bug in users code.
+- Clang now correctly diagnoses as ill-formed, during constant expression evaluation, undefined
+  behavior due to member pointer access to a member which is not a direct or indirect member of
+  the most-derived object of the accessed object but is instead located directly in a sibling
+  class of one of the classes along the inheritance hierarchy of the most-derived object.
+  Other scenarios in which the member is not a member of the most-derived object were already
+  diagnosed previously. (#GH150709)
+
+ .. code-block:: c++
+
+ struct A {};
+ struct B : A {};
+ struct C : A { constexpr int foo() const { return 1; } };
+ constexpr A a;
+ constexpr B b;
+ constexpr C c;
+ constexpr auto mp = static_cast<int(A::*)() const>(&C::foo);
+ static_assert((a.*mp)() == 1); // continues to be rejected
+ static_assert((b.*mp)() == 1); // newly rejected
+ static_assert((c.*mp)() == 1); // accepted
+
ABI Changes in This Version
---------------------------
diff --git a/clang/lib/AST/ExprConstant.cpp b/clang/lib/AST/ExprConstant.cpp
index 3a47fa87b5f77..34af9ccd2e96e 100644
--- a/clang/lib/AST/ExprConstant.cpp
+++ b/clang/lib/AST/ExprConstant.cpp
@@ -5035,6 +5035,9 @@ static const ValueDecl *HandleMemberPointerAccess(EvalInfo &Info,
// This is a member of some derived class. Truncate LV appropriately.
// The end of the derived-to-base path for the base object must match the
// derived-to-base path for the member pointer.
+ // C++23 [expr.mptr.oper]p4:
+ // If the result of E1 is an object [...] whose most derived object does
+ // not contain the member to which E2 refers, the behavior is undefined.
if (LV.Designator.MostDerivedPathLength + MemPtr.Path.size() >
LV.Designator.Entries.size()) {
Info.FFDiag(RHS);
@@ -5051,6 +5054,24 @@ static const ValueDecl *HandleMemberPointerAccess(EvalInfo &Info,
return nullptr;
}
}
+ // MemPtr.Path only contains the base classes of the class directly
+ // containing the member E2. It is still necessary to check that the class
+ // directly containing the member E2 lies on the derived-to-base path of E1
+ // to avoid incorrectly permitting member pointer access into a sibling
+ // class of the class containing the member E2. If this class would
+ // correspond to the most-derived class of E1, it either isn't contained in
+ // LV.Designator.Entries or the corresponding entry refers to an array
+ // element instead. Therefore get the most derived class directly in this
+ // case. Otherwise the previous entry should correspond to this class.
+ const CXXRecordDecl *LastLVDecl =
+ (PathLengthToMember > LV.Designator.MostDerivedPathLength)
+ ? getAsBaseClass(LV.Designator.Entries[PathLengthToMember - 1])
+ : LV.Designator.MostDerivedType->getAsCXXRecordDecl();
+ const CXXRecordDecl *LastMPDecl = MemPtr.getContainingRecord();
+ if (LastLVDecl->getCanonicalDecl() != LastMPDecl->getCanonicalDecl()) {
+ Info.FFDiag(RHS);
+ return nullptr;
+ }
// Truncate the lvalue to the appropriate derived class.
if (!CastToDerivedClass(Info, RHS, LV, MemPtr.getContainingRecord(),
diff --git a/clang/test/SemaCXX/constant-expression-cxx11.cpp b/clang/test/SemaCXX/constant-expression-cxx11.cpp
index 5ecb8c607f59a..2423a77e6e7d2 100644
--- a/clang/test/SemaCXX/constant-expression-cxx11.cpp
+++ b/clang/test/SemaCXX/constant-expression-cxx11.cpp
@@ -2615,3 +2615,33 @@ namespace DoubleCapture {
};
}
}
+
+namespace GH150709 {
+ struct C { };
+ struct D : C {
+ constexpr int f() const { return 1; };
+ };
+ struct E : C { };
+ struct F : D { };
+ struct G : E { };
+
+ constexpr C c1, c2[2];
+ constexpr D d1, d2[2];
+ constexpr E e1, e2[2];
+ constexpr F f;
+ constexpr G g;
+
+ constexpr auto mp = static_cast<int (C::*)() const>(&D::f);
+
+ // sanity checks for fix of GH150709 (unchanged behavior)
+ static_assert((c1.*mp)() == 1, ""); // expected-error {{constant expression}}
+ static_assert((d1.*mp)() == 1, "");
+ static_assert((f.*mp)() == 1, "");
+ static_assert((c2[0].*mp)() == 1, ""); // expected-error {{constant expression}}
+ static_assert((d2[0].*mp)() == 1, "");
+
+ // incorrectly accepted before the fix for GH150709
+ static_assert((e1.*mp)() == 1, ""); // expected-error {{constant expression}}
+ static_assert((e2[0].*mp)() == 1, ""); // expected-error {{constant expression}}
+ static_assert((g.*mp)() == 1, ""); // expected-error {{constant expression}}
+}
diff --git a/clang/test/SemaCXX/constant-expression-cxx2a.cpp b/clang/test/SemaCXX/constant-expression-cxx2a.cpp
index ffb7e633c2919..b22a82c57ef06 100644
--- a/clang/test/SemaCXX/constant-expression-cxx2a.cpp
+++ b/clang/test/SemaCXX/constant-expression-cxx2a.cpp
@@ -1497,3 +1497,16 @@ namespace GH67317 {
// expected-note {{subobject of type 'const unsigned char' is not initialized}}
__builtin_bit_cast(unsigned char, *new char[3][1]);
};
+
+namespace GH150705 {
+ struct A { };
+ struct B : A { };
+ struct C : A {
+ constexpr virtual int foo() const { return 0; }
+ };
+ constexpr auto p = &C::foo;
+ constexpr auto q = static_cast<int (A::*)() const>(p);
+ constexpr B b;
+ constexpr const A& a = b;
+ constexpr auto x = (a.*q)(); // expected-error {{constant expression}}
+}
>From 07da480614cd7ce3f105a153626a23f6888e503f Mon Sep 17 00:00:00 2001
From: Daniel Paoliello <danpao at microsoft.com>
Date: Tue, 5 Aug 2025 14:54:08 -0700
Subject: [PATCH 004/171] [win][arm64ec] More fixes for building and testing
Arm64EC Windows (#151409)
* `tools/llvm-objcopy/MachO/update-section-object.test` was failing on
Windows since the input file (`macho_sections.s`) might be checked out
with the wrong line ending, resulting in a difference in the size of
the sections being checked.
* Removed the check for Windows in `AArch64Arm64ECCallLowering`: when
`llc` is run without an explicit target, the module's target triple is
unknown, so this assert fires.
* Expect `llvm/test/CodeGen/Generic/allow-check.ll` to fail for Arm64EC:
Global ISel is not supported.
---
llvm/.gitattributes | 1 +
llvm/lib/Target/AArch64/AArch64Arm64ECCallLowering.cpp | 3 ---
llvm/test/CodeGen/Generic/allow-check.ll | 1 +
3 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/llvm/.gitattributes b/llvm/.gitattributes
index fc3afb28a8d52..9f4ed8a24ecd0 100644
--- a/llvm/.gitattributes
+++ b/llvm/.gitattributes
@@ -32,3 +32,4 @@ test/tools/split-file/basic.test text eol=lf
test/tools/split-file/Inputs/basic-*.txt eol=lf
test/tools/split-file/basic.crlf.test text eol=crlf
test/tools/split-file/Inputs/basic-*.crlf eol=crlf
+test/tools/llvm-objcopy/MachO/Inputs/macho_sections.s text eol=lf
diff --git a/llvm/lib/Target/AArch64/AArch64Arm64ECCallLowering.cpp b/llvm/lib/Target/AArch64/AArch64Arm64ECCallLowering.cpp
index 082de56d0bd8a..ad8368e1692be 100644
--- a/llvm/lib/Target/AArch64/AArch64Arm64ECCallLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64Arm64ECCallLowering.cpp
@@ -735,9 +735,6 @@ AArch64Arm64ECCallLowering::buildPatchableThunk(GlobalAlias *UnmangledAlias,
// Lower an indirect call with inline code.
void AArch64Arm64ECCallLowering::lowerCall(CallBase *CB) {
- assert(CB->getModule()->getTargetTriple().isOSWindows() &&
- "Only applicable for Windows targets");
-
IRBuilder<> B(CB);
Value *CalledOperand = CB->getCalledOperand();
diff --git a/llvm/test/CodeGen/Generic/allow-check.ll b/llvm/test/CodeGen/Generic/allow-check.ll
index 148ee811ea806..97719a7af6229 100644
--- a/llvm/test/CodeGen/Generic/allow-check.ll
+++ b/llvm/test/CodeGen/Generic/allow-check.ll
@@ -6,6 +6,7 @@
; XFAIL: target=nvptx{{.*}}
; XFAIL: target=sparc{{.*}}
; XFAIL: target=hexagon-{{.*}}
+; XFAIL: target=arm64ec-{{.*}}
; RUN: llc < %s -O3 -global-isel=0 -fast-isel=0
; RUN: llc < %s -O3 -global-isel=1 -fast-isel=0
>From 5a076e3b4d5ec26d8944bff129c1cae67a44b3ef Mon Sep 17 00:00:00 2001
From: Caslyn Tonelli <6718161+Caslyn at users.noreply.github.com>
Date: Tue, 5 Aug 2025 15:06:08 -0700
Subject: [PATCH 005/171] [libc] Add dladdr to dlfcn.h (#149872)
An initial commit for #97929, this adds a stub implementation for
`dladdr` and includes the definition of the `Dl_info` type used as one
of its arguments.
While a full `dladdr` implementation relies on dynamic linker support,
this patch adds its prototype to the generated `dlfcn.h` header so that
it can be used by downstream platforms that have their own `dladdr`
implementation.
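
For reference, typical caller-side usage of the prototype added here, under
glibc/BSD-style semantics (a nonzero return indicates the address was
matched and the `Dl_info` fields were filled in); a minimal sketch, not
part of this patch, which assumes a platform with a real dynamic-linker
backed `dladdr`:

  #include <dlfcn.h>
  #include <stdio.h>

  int main(void) {
    Dl_info info;
    // Look up the loaded image and nearest symbol containing main's address.
    if (dladdr((void *)&main, &info) != 0 && info.dli_sname != NULL)
      printf("%s defined in %s\n", info.dli_sname, info.dli_fname);
    return 0;
  }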
---
libc/include/dlfcn.yaml | 9 +++++++++
libc/include/llvm-libc-types/Dl_info.h | 19 +++++++++++++++++++
libc/src/dlfcn/CMakeLists.txt | 11 +++++++++++
libc/src/dlfcn/dladdr.cpp | 21 +++++++++++++++++++++
libc/src/dlfcn/dladdr.h | 20 ++++++++++++++++++++
5 files changed, 80 insertions(+)
create mode 100644 libc/include/llvm-libc-types/Dl_info.h
create mode 100644 libc/src/dlfcn/dladdr.cpp
create mode 100644 libc/src/dlfcn/dladdr.h
diff --git a/libc/include/dlfcn.yaml b/libc/include/dlfcn.yaml
index a8218b6780c2c..db2893aaff5d9 100644
--- a/libc/include/dlfcn.yaml
+++ b/libc/include/dlfcn.yaml
@@ -86,6 +86,8 @@ enums:
standards:
- gnu
value: 11
+types:
+ - type_name: Dl_info
functions:
- name: dlclose
standards:
@@ -120,3 +122,10 @@ functions:
- type: void *__restrict
- type: int
- type: void *__restrict
+ - name: dladdr
+ standards:
+ - POSIX
+ return_type: int
+ arguments:
+ - type: const void *
+ - type: Dl_info *
diff --git a/libc/include/llvm-libc-types/Dl_info.h b/libc/include/llvm-libc-types/Dl_info.h
new file mode 100644
index 0000000000000..b082e30549a02
--- /dev/null
+++ b/libc/include/llvm-libc-types/Dl_info.h
@@ -0,0 +1,19 @@
+//===-- Definition of Dl_info type ----------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_LIBC_TYPES_DL_INFO_H
+#define LLVM_LIBC_TYPES_DL_INFO_H
+
+typedef struct {
+ const char *dli_fname;
+ void *dli_fbase;
+ const char *dli_sname;
+ void *dli_saddr;
+} Dl_info;
+
+#endif // LLVM_LIBC_TYPES_DL_INFO_H
diff --git a/libc/src/dlfcn/CMakeLists.txt b/libc/src/dlfcn/CMakeLists.txt
index 1ee05fc380c94..8ef0540c01a2e 100644
--- a/libc/src/dlfcn/CMakeLists.txt
+++ b/libc/src/dlfcn/CMakeLists.txt
@@ -49,3 +49,14 @@ add_entrypoint_object(
libc.include.dlfcn
libc.src.errno.errno
)
+
+add_entrypoint_object(
+ dladdr
+ SRCS
+ dladdr.cpp
+ HDRS
+ dladdr.h
+ DEPENDS
+ libc.include.dlfcn
+ libc.src.errno.errno
+)
diff --git a/libc/src/dlfcn/dladdr.cpp b/libc/src/dlfcn/dladdr.cpp
new file mode 100644
index 0000000000000..61490fd9a64b5
--- /dev/null
+++ b/libc/src/dlfcn/dladdr.cpp
@@ -0,0 +1,21 @@
+//===-- Implementation of dladdr ------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "dladdr.h"
+
+#include "src/__support/common.h"
+#include "src/__support/macros/config.h"
+
+namespace LIBC_NAMESPACE_DECL {
+
+// TODO: https://github.com/llvm/llvm-project/issues/97929
+LLVM_LIBC_FUNCTION(int, dladdr, (const void *addr, Dl_info *info)) {
+ return -1;
+}
+
+} // namespace LIBC_NAMESPACE_DECL
diff --git a/libc/src/dlfcn/dladdr.h b/libc/src/dlfcn/dladdr.h
new file mode 100644
index 0000000000000..346fc8dc27aed
--- /dev/null
+++ b/libc/src/dlfcn/dladdr.h
@@ -0,0 +1,20 @@
+//===-- Implementation header of dladdr -------------------------*- C++ -*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_LIBC_SRC_DLFCN_DLADDR_H
+#define LLVM_LIBC_SRC_DLFCN_DLADDR_H
+
+#include "src/__support/macros/config.h"
+
+namespace LIBC_NAMESPACE_DECL {
+
+int dladdr(const void *, Dl_info *);
+
+} // namespace LIBC_NAMESPACE_DECL
+
+#endif // LLVM_LIBC_SRC_DLFCN_DLADDR_H
>From 34aed0ed5615583a8f1aaf9c036cc69fa88b3503 Mon Sep 17 00:00:00 2001
From: Stanislav Mekhanoshin <Stanislav.Mekhanoshin at amd.com>
Date: Tue, 5 Aug 2025 15:15:21 -0700
Subject: [PATCH 006/171] [AMDGPU] Add gfx1250 wmma_scale[16]_f32_32x16x128_f4
instructions (#152194)
---
clang/include/clang/Basic/BuiltinsAMDGPU.def | 2 +
clang/lib/CodeGen/TargetBuiltins/AMDGPU.cpp | 10 +
.../builtins-amdgcn-gfx1250-wmma-w32.cl | 22 ++
...ins-amdgcn-error-gfx1250-wmma-w32-param.cl | 26 ++
llvm/include/llvm/IR/IntrinsicsAMDGPU.td | 24 ++
.../Target/AMDGPU/AMDGPURegisterBankInfo.cpp | 2 +
llvm/lib/Target/AMDGPU/VOP3PInstructions.td | 19 ++
.../AMDGPU/llvm.amdgcn.wmma.gfx1250.w32.ll | 166 ++++++++++
.../llvm.amdgcn.wmma.imm.gfx1250.w32.ll | 308 ++++++++++++++++++
.../llvm.amdgcn.wmma.imod.gfx1250.w32.ll | 158 +++++++++
llvm/test/MC/AMDGPU/gfx1250_asm_wmma_w32.s | 170 ++++++++++
.../test/MC/AMDGPU/gfx1250_asm_wmma_w32_err.s | 30 ++
.../AMDGPU/gfx1250_dasm_wmma_w32.txt | 90 +++++
13 files changed, 1027 insertions(+)
diff --git a/clang/include/clang/Basic/BuiltinsAMDGPU.def b/clang/include/clang/Basic/BuiltinsAMDGPU.def
index 5821a69b4de17..b16d4a22207a7 100644
--- a/clang/include/clang/Basic/BuiltinsAMDGPU.def
+++ b/clang/include/clang/Basic/BuiltinsAMDGPU.def
@@ -805,6 +805,8 @@ TARGET_BUILTIN(__builtin_amdgcn_wmma_scale16_f32_16x16x128_f8f6f4, "V8fIiV16iIiV
TARGET_BUILTIN(__builtin_amdgcn_wmma_f32_16x16x32_f16, "V8fIbV16hIbV16hIsV8fIbIb", "nc", "gfx1250-insts,wavefrontsize32")
TARGET_BUILTIN(__builtin_amdgcn_wmma_f16_16x16x32_f16, "V8hIbV16hIbV16hIsV8hIbIb", "nc", "gfx1250-insts,wavefrontsize32")
TARGET_BUILTIN(__builtin_amdgcn_wmma_f32_32x16x128_f4, "V16fV16iV8iIsV16f", "nc", "gfx1250-insts,wavefrontsize32")
+TARGET_BUILTIN(__builtin_amdgcn_wmma_scale_f32_32x16x128_f4, "V16fV16iV8iIsV16fIiIiiIiIiiIbIb", "nc", "gfx1250-insts,wavefrontsize32")
+TARGET_BUILTIN(__builtin_amdgcn_wmma_scale16_f32_32x16x128_f4, "V16fV16iV8iIsV16fIiIiLiIiIiLiIbIb", "nc", "gfx1250-insts,wavefrontsize32")
TARGET_BUILTIN(__builtin_amdgcn_swmmac_f32_16x16x64_bf16, "V8fIbV16yIbV32yV8fiIbIb", "nc", "gfx1250-insts,wavefrontsize32")
TARGET_BUILTIN(__builtin_amdgcn_swmmac_bf16_16x16x64_bf16, "V8yIbV16yIbV32yV8yiIbIb", "nc", "gfx1250-insts,wavefrontsize32")
TARGET_BUILTIN(__builtin_amdgcn_swmmac_bf16f32_16x16x64_bf16, "V8fIbV16yIbV32yV8fiIbIb", "nc", "gfx1250-insts,wavefrontsize32")
diff --git a/clang/lib/CodeGen/TargetBuiltins/AMDGPU.cpp b/clang/lib/CodeGen/TargetBuiltins/AMDGPU.cpp
index e450b1415a04b..dad1f95ac710d 100644
--- a/clang/lib/CodeGen/TargetBuiltins/AMDGPU.cpp
+++ b/clang/lib/CodeGen/TargetBuiltins/AMDGPU.cpp
@@ -894,6 +894,8 @@ Value *CodeGenFunction::EmitAMDGPUBuiltinExpr(unsigned BuiltinID,
case AMDGPU::BI__builtin_amdgcn_wmma_f32_32x16x128_f4:
case AMDGPU::BI__builtin_amdgcn_wmma_scale_f32_16x16x128_f8f6f4:
case AMDGPU::BI__builtin_amdgcn_wmma_scale16_f32_16x16x128_f8f6f4:
+ case AMDGPU::BI__builtin_amdgcn_wmma_scale_f32_32x16x128_f4:
+ case AMDGPU::BI__builtin_amdgcn_wmma_scale16_f32_32x16x128_f4:
case AMDGPU::BI__builtin_amdgcn_swmmac_f32_16x16x64_f16:
case AMDGPU::BI__builtin_amdgcn_swmmac_f32_16x16x64_bf16:
case AMDGPU::BI__builtin_amdgcn_swmmac_f16_16x16x64_f16:
@@ -1172,6 +1174,14 @@ Value *CodeGenFunction::EmitAMDGPUBuiltinExpr(unsigned BuiltinID,
ArgsForMatchingMatrixTypes = {3, 0, 1};
BuiltinWMMAOp = Intrinsic::amdgcn_wmma_f32_32x16x128_f4;
break;
+ case AMDGPU::BI__builtin_amdgcn_wmma_scale_f32_32x16x128_f4:
+ ArgsForMatchingMatrixTypes = {3, 0, 1};
+ BuiltinWMMAOp = Intrinsic::amdgcn_wmma_scale_f32_32x16x128_f4;
+ break;
+ case AMDGPU::BI__builtin_amdgcn_wmma_scale16_f32_32x16x128_f4:
+ ArgsForMatchingMatrixTypes = {3, 0, 1};
+ BuiltinWMMAOp = Intrinsic::amdgcn_wmma_scale16_f32_32x16x128_f4;
+ break;
case AMDGPU::BI__builtin_amdgcn_swmmac_f32_16x16x64_f16:
ArgsForMatchingMatrixTypes = {4, 1, 3, 5};
BuiltinWMMAOp = Intrinsic::amdgcn_swmmac_f32_16x16x64_f16;
diff --git a/clang/test/CodeGenOpenCL/builtins-amdgcn-gfx1250-wmma-w32.cl b/clang/test/CodeGenOpenCL/builtins-amdgcn-gfx1250-wmma-w32.cl
index af8a457bec686..bdb1a7f0bb32f 100644
--- a/clang/test/CodeGenOpenCL/builtins-amdgcn-gfx1250-wmma-w32.cl
+++ b/clang/test/CodeGenOpenCL/builtins-amdgcn-gfx1250-wmma-w32.cl
@@ -314,6 +314,28 @@ void test_amdgcn_wmma_f32_wmma_f32_32x16x128_f4(global v16f* out, v16i a, v8i b,
*out = __builtin_amdgcn_wmma_f32_32x16x128_f4(a, b, 0, c);
}
+// CHECK-GFX1250-LABEL: @test_amdgcn_wmma_scale_f32_32x16x128_f4(
+// CHECK-GFX1250-NEXT: entry:
+// CHECK-GFX1250-NEXT: [[TMP0:%.*]] = tail call <16 x float> @llvm.amdgcn.wmma.scale.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> [[A:%.*]], <8 x i32> [[B:%.*]], i16 0, <16 x float> [[C:%.*]], i32 1, i32 2, i32 [[SCALE_SRC0:%.*]], i32 2, i32 1, i32 [[SCALE_SRC1:%.*]], i1 false, i1 true)
+// CHECK-GFX1250-NEXT: store <16 x float> [[TMP0]], ptr addrspace(1) [[OUT:%.*]], align 64, !tbaa [[TBAA4]]
+// CHECK-GFX1250-NEXT: ret void
+//
+void test_amdgcn_wmma_scale_f32_32x16x128_f4(global v16f* out, v16i a, v8i b, v16f c, int scale_src0, int scale_src1)
+{
+ *out = __builtin_amdgcn_wmma_scale_f32_32x16x128_f4(a, b, 0, c, 1, 2, scale_src0, 2, 1, scale_src1, 0, 1);
+}
+
+// CHECK-GFX1250-LABEL: @test_amdgcn_wmma_scale16_f32_32x16x128_f4(
+// CHECK-GFX1250-NEXT: entry:
+// CHECK-GFX1250-NEXT: [[TMP0:%.*]] = tail call <16 x float> @llvm.amdgcn.wmma.scale16.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> [[A:%.*]], <8 x i32> [[B:%.*]], i16 0, <16 x float> [[C:%.*]], i32 1, i32 2, i64 [[SCALE_SRC0:%.*]], i32 2, i32 1, i64 [[SCALE_SRC1:%.*]], i1 false, i1 true)
+// CHECK-GFX1250-NEXT: store <16 x float> [[TMP0]], ptr addrspace(1) [[OUT:%.*]], align 64, !tbaa [[TBAA4]]
+// CHECK-GFX1250-NEXT: ret void
+//
+void test_amdgcn_wmma_scale16_f32_32x16x128_f4(global v16f* out, v16i a, v8i b, v16f c, long scale_src0, long scale_src1)
+{
+ *out = __builtin_amdgcn_wmma_scale16_f32_32x16x128_f4(a, b, 0, c, 1, 2, scale_src0, 2, 1, scale_src1, 0, 1);
+}
+
// CHECK-GFX1250-LABEL: @test_amdgcn_swmmac_f32_16x16x64_bf16(
// CHECK-GFX1250-NEXT: entry:
// CHECK-GFX1250-NEXT: [[TMP0:%.*]] = tail call <8 x float> @llvm.amdgcn.swmmac.f32.16x16x64.bf16.v8f32.v16bf16.v32bf16.i32(i1 false, <16 x bfloat> [[A:%.*]], i1 false, <32 x bfloat> [[B:%.*]], <8 x float> [[C:%.*]], i32 [[INDEX:%.*]], i1 false, i1 true)
diff --git a/clang/test/SemaOpenCL/builtins-amdgcn-error-gfx1250-wmma-w32-param.cl b/clang/test/SemaOpenCL/builtins-amdgcn-error-gfx1250-wmma-w32-param.cl
index 83fdc8c8c60ba..49ef2e571740c 100644
--- a/clang/test/SemaOpenCL/builtins-amdgcn-error-gfx1250-wmma-w32-param.cl
+++ b/clang/test/SemaOpenCL/builtins-amdgcn-error-gfx1250-wmma-w32-param.cl
@@ -230,6 +230,32 @@ void test_amdgcn_wmma_f32_wmma_f32_32x16x128_f4(global v16f* out, v16i a, v8i b,
*out = __builtin_amdgcn_wmma_f32_32x16x128_f4(a, b, mod, c); // expected-error {{'__builtin_amdgcn_wmma_f32_32x16x128_f4' must be a constant integer}}
}
+void test_amdgcn_wmma_scale_f32_32x16x128_f4(global v16f* out, v16i a, v8i b, v16f c, int mod, int scale_src0, int scale_src1, bool reuse)
+{
+ *out = __builtin_amdgcn_wmma_scale_f32_32x16x128_f4(a, b, mod, c, 1, 0, scale_src0, 2, 0, scale_src1, 1, 0); // expected-error {{'__builtin_amdgcn_wmma_scale_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale_f32_32x16x128_f4(a, b, 0, c, mod, 0, scale_src0, 2, 0, scale_src1, 0, 0); // expected-error {{'__builtin_amdgcn_wmma_scale_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale_f32_32x16x128_f4(a, b, 0, c, 1, 0, scale_src0, mod, 0, scale_src1, 0, 0); // expected-error {{'__builtin_amdgcn_wmma_scale_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale_f32_32x16x128_f4(a, b, 0, c, 1, 0, scale_src0, 2, 0, scale_src1, reuse, 0); // expected-error {{'__builtin_amdgcn_wmma_scale_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale_f32_32x16x128_f4(a, b, 0, c, 1, 0, scale_src0, 2, 0, scale_src1, 0, reuse); // expected-error {{'__builtin_amdgcn_wmma_scale_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale_f32_32x16x128_f4(a, b, 0, c, 1, mod, scale_src0, 2, 0, scale_src1, 1, 0); // expected-error {{'__builtin_amdgcn_wmma_scale_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale_f32_32x16x128_f4(a, b, 0, c, 1, 0, scale_src0, 2, mod, scale_src1, 1, 0); // expected-error {{'__builtin_amdgcn_wmma_scale_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale_f32_32x16x128_f4(a, b, 0, c, 1, 0, scale_src0, 2, 0, scale_src1, mod, 0); // expected-error {{'__builtin_amdgcn_wmma_scale_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale_f32_32x16x128_f4(a, b, 0, c, 1, 0, scale_src0, 2, 0, scale_src1, 1, mod); // expected-error {{'__builtin_amdgcn_wmma_scale_f32_32x16x128_f4' must be a constant integer}}
+}
+
+void test_amdgcn_wmma_scale16_f32_32x16x128_f4(global v16f* out, v16i a, v8i b, v16f c, int mod, long scale_src0, long scale_src1, bool reuse)
+{
+ *out = __builtin_amdgcn_wmma_scale16_f32_32x16x128_f4(a, b, mod, c, 1, 0, scale_src0, 2, 0, scale_src1, 1, 0); // expected-error {{'__builtin_amdgcn_wmma_scale16_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale16_f32_32x16x128_f4(a, b, 0, c, mod, 0, scale_src0, 2, 0, scale_src1, 0, 0); // expected-error {{'__builtin_amdgcn_wmma_scale16_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale16_f32_32x16x128_f4(a, b, 0, c, 1, 0, scale_src0, mod, 0, scale_src1, 0, 0); // expected-error {{'__builtin_amdgcn_wmma_scale16_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale16_f32_32x16x128_f4(a, b, 0, c, 1, 0, scale_src0, 2, 0, scale_src1, reuse, 0); // expected-error {{'__builtin_amdgcn_wmma_scale16_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale16_f32_32x16x128_f4(a, b, 0, c, 1, 0, scale_src0, 2, 0, scale_src1, 0, reuse); // expected-error {{'__builtin_amdgcn_wmma_scale16_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale16_f32_32x16x128_f4(a, b, 0, c, 1, mod, scale_src0, 2, 0, scale_src1, 1, 0); // expected-error {{'__builtin_amdgcn_wmma_scale16_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale16_f32_32x16x128_f4(a, b, 0, c, 1, 0, scale_src0, 2, mod, scale_src1, 1, 0); // expected-error {{'__builtin_amdgcn_wmma_scale16_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale16_f32_32x16x128_f4(a, b, 0, c, 1, 0, scale_src0, 2, 0, scale_src1, mod, 0); // expected-error {{'__builtin_amdgcn_wmma_scale16_f32_32x16x128_f4' must be a constant integer}}
+ *out = __builtin_amdgcn_wmma_scale16_f32_32x16x128_f4(a, b, 0, c, 1, 0, scale_src0, 2, 0, scale_src1, 1, mod); // expected-error {{'__builtin_amdgcn_wmma_scale16_f32_32x16x128_f4' must be a constant integer}}
+}
+
void test_amdgcn_swmmac_f32_16x16x64_bf16(global v8f* out, v16bf16 a, v32bf16 b, v8f c, int index, int mod)
{
*out = __builtin_amdgcn_swmmac_f32_16x16x64_bf16(mod, a, 0, b, c, index, false, false); // expected-error {{'__builtin_amdgcn_swmmac_f32_16x16x64_bf16' must be a constant integer}}
diff --git a/llvm/include/llvm/IR/IntrinsicsAMDGPU.td b/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
index bfadc6a58f7f1..90cfd8cedd51b 100644
--- a/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
+++ b/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
@@ -3956,6 +3956,28 @@ class AMDGPUWmmaScaleIntrinsicModsC<LLVMType scale_ty> :
IntrWillReturn, IntrNoCallback, IntrNoFree]
>;
+class AMDGPUWmmaScaleF4IntrinsicModsC<LLVMType scale_ty> :
+ Intrinsic<
+ [llvm_anyfloat_ty], // %D
+ [
+ llvm_anyint_ty, // %A
+ llvm_anyint_ty, // %B
+ llvm_i16_ty, // %C_mod: 0 - none, 1 - neg, 2 - abs, 3 - neg(abs)
+ LLVMMatchType<0>, // %C
+ llvm_i32_ty, // matrix_a_scale
+ llvm_i32_ty, // matrix_a_scale_fmt
+ scale_ty, // matrix a scale exponential
+ llvm_i32_ty, // matrix_b_scale
+ llvm_i32_ty, // matrix_b_scale_fmt
+ scale_ty, // matrix b scale exponential
+ llvm_i1_ty, // matrix_a_reuse
+ llvm_i1_ty, // matrix_b_reuse
+ ],
+ [IntrNoMem, IntrConvergent, ImmArg<ArgIndex<2>>, ImmArg<ArgIndex<4>>, ImmArg<ArgIndex<5>>, ImmArg<ArgIndex<7>>,
+ ImmArg<ArgIndex<8>>, ImmArg<ArgIndex<10>>, ImmArg<ArgIndex<11>>,
+ IntrWillReturn, IntrNoCallback, IntrNoFree]
+>;
+
defset list<Intrinsic> AMDGPUWMMAIntrinsicsGFX1250 = {
def int_amdgcn_wmma_f32_16x16x4_f32 : AMDGPUWmmaIntrinsicModsAllReuse<llvm_anyfloat_ty, llvm_anyfloat_ty>;
def int_amdgcn_wmma_f32_16x16x32_bf16 : AMDGPUWmmaIntrinsicModsAllReuse<llvm_anyfloat_ty, llvm_anyfloat_ty>;
@@ -3984,6 +4006,8 @@ def int_amdgcn_wmma_f32_16x16x128_f8f6f4 : AMDGPUWmmaIntrinsicModsC_MatrixFMT;
def int_amdgcn_wmma_scale_f32_16x16x128_f8f6f4 : AMDGPUWmmaScaleIntrinsicModsC<llvm_i32_ty>;
def int_amdgcn_wmma_scale16_f32_16x16x128_f8f6f4 : AMDGPUWmmaScaleIntrinsicModsC<llvm_i64_ty>;
def int_amdgcn_wmma_f32_32x16x128_f4 : AMDGPUWmmaIntrinsicF4ModsC<llvm_anyint_ty, llvm_anyint_ty, llvm_anyfloat_ty>;
+def int_amdgcn_wmma_scale_f32_32x16x128_f4 : AMDGPUWmmaScaleF4IntrinsicModsC<llvm_i32_ty>;
+def int_amdgcn_wmma_scale16_f32_32x16x128_f4 : AMDGPUWmmaScaleF4IntrinsicModsC<llvm_i64_ty>;
}
class AMDGPUSWmmacIntrinsicABIdx<LLVMType A, LLVMType B, LLVMType CD, LLVMType Index> :
diff --git a/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp b/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp
index 74230a543ef16..868b1a21e3cd5 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp
@@ -4801,6 +4801,8 @@ AMDGPURegisterBankInfo::getInstrMapping(const MachineInstr &MI) const {
case Intrinsic::amdgcn_wmma_scale_f32_16x16x128_f8f6f4:
case Intrinsic::amdgcn_wmma_scale16_f32_16x16x128_f8f6f4:
case Intrinsic::amdgcn_wmma_f32_32x16x128_f4:
+ case Intrinsic::amdgcn_wmma_scale_f32_32x16x128_f4:
+ case Intrinsic::amdgcn_wmma_scale16_f32_32x16x128_f4:
case Intrinsic::amdgcn_swmmac_f16_16x16x64_f16:
case Intrinsic::amdgcn_swmmac_bf16_16x16x64_bf16:
case Intrinsic::amdgcn_swmmac_f32_16x16x64_bf16:
diff --git a/llvm/lib/Target/AMDGPU/VOP3PInstructions.td b/llvm/lib/Target/AMDGPU/VOP3PInstructions.td
index 9264935ffad7b..ce280d484da1b 100644
--- a/llvm/lib/Target/AMDGPU/VOP3PInstructions.td
+++ b/llvm/lib/Target/AMDGPU/VOP3PInstructions.td
@@ -1763,6 +1763,8 @@ def F16_FP8BF8X64_WMMA_w32 : VOP3PWMMA_Profile<[v8f16, v8i32, v8i32, v8f16
def F16_FP8BF8X128_WMMA_w32 : VOP3PWMMA_Profile<[v8f16, v16i32, v16i32, v8f16], 0, 0, 0, 1, 1, 0, 0, 0, 1>;
def F32_32X16X128_F4_WMMA_w32 : VOP3PWMMA_Profile<[v16f32, v16i32, v8i32, v16f32], 0, 0, 0, 0, 1, 0, 0, 0, 0, 1>;
def I32_IU8X64_WMMA_w32 : VOP3PWMMA_Profile<[v8i32, v8i32, v8i32, v8i32], 0, 0, 1, 0, 1, 0, 0, 0, 1>;
+def F32_32X16X128_F4_SCALE_w32 : VOP3PWMMA_Profile<[v16f32, v16i32, v8i32, v16f32], 0, 0, 0, 1, 1, 0, 1, 0, 1>;
+def F32_32X16X128_F4_SCALE16_w32 : VOP3PWMMA_Profile<[v16f32, v16i32, v8i32, v16f32], 0, 0, 0, 1, 1, 0, 1, 1, 1>;
def F32_F16X64_SWMMAC_w32 : VOP3PWMMA_Profile<[v8f32, v16f16, v32f16, v8f32], 1, 16, 0, 0, 1, 0, 0, 0, 1>;
def F32_BF16X64_SWMMAC_w32 : VOP3PWMMA_Profile<[v8f32, v16bf16, v32bf16, v8f32], 1, 16, 0, 0, 1, 0, 0, 0, 1>;
def F16_F16X64_SWMMAC_w32 : VOP3PWMMA_Profile<[v8f16, v16f16, v32f16, v8f16], 1, 16, 0, 0, 1, 0, 0, 0, 1>;
@@ -1852,6 +1854,9 @@ defm V_SWMMAC_F16_16X16X64_F16_w32 : SWMMACInstGFX12<"v_swmmac_f16_16x16x64
defm V_WMMA_F32_16X16X128_F8F6F4 : WMMAInst_SrcFormats_mc<"v_wmma_f32_16x16x128_f8f6f4", "F32_16X16X128_F8F6F4">;
defm V_WMMA_SCALE_F32_16X16X128_F8F6F4 : WMMAInst_SrcFormats_mc<"v_wmma_scale_f32_16x16x128_f8f6f4", "F32_16X16X128_F8F6F4_SCALE">;
defm V_WMMA_SCALE16_F32_16X16X128_F8F6F4 : WMMAInst_SrcFormats_mc<"v_wmma_scale16_f32_16x16x128_f8f6f4", "F32_16X16X128_F8F6F4_SCALE16">;
+
+defm V_WMMA_SCALE_F32_32X16X128_F4_w32 : WMMAInstGFX12<"v_wmma_scale_f32_32x16x128_f4", F32_32X16X128_F4_SCALE_w32, "_w32">;
+defm V_WMMA_SCALE16_F32_32X16X128_F4_w32 : WMMAInstGFX12<"v_wmma_scale16_f32_32x16x128_f4", F32_32X16X128_F4_SCALE16_w32, "_w32">;
} // End is_wmma_xdl = 1.
defm V_WMMA_LD_SCALE_PAIRED_B32 : VOP3PInst<"v_wmma_ld_scale_paired_b32", VOP_WMMA_LD_SCALE<i32, VCSrc_b32>>;
@@ -2010,6 +2015,8 @@ let SubtargetPredicate = isGFX125xOnly in {
defm : WMMAPat<"V_WMMA_F32_16X16X128_BF8_FP8_w32", int_amdgcn_wmma_f32_16x16x128_bf8_fp8, F32_FP8BF8X128_WMMA_w32>;
defm : WMMAPat<"V_WMMA_F32_16X16X128_BF8_BF8_w32", int_amdgcn_wmma_f32_16x16x128_bf8_bf8, F32_FP8BF8X128_WMMA_w32>;
defm : WMMAPat<"V_WMMA_F32_32X16X128_F4_w32", int_amdgcn_wmma_f32_32x16x128_f4, F32_32X16X128_F4_WMMA_w32>;
+ defm : WMMAPat<"V_WMMA_SCALE_F32_32X16X128_F4_w32", int_amdgcn_wmma_scale_f32_32x16x128_f4, F32_32X16X128_F4_SCALE_w32>;
+ defm : WMMAPat<"V_WMMA_SCALE16_F32_32X16X128_F4_w32", int_amdgcn_wmma_scale16_f32_32x16x128_f4, F32_32X16X128_F4_SCALE16_w32>;
foreach I = ["f8_f8", "f8_f6", "f8_f4", "f6_f8", "f6_f6", "f6_f4", "f4_f8", "f4_f6", "f4_f4"] in {
defm : WMMAPat<"V_WMMA_F32_16X16X128_F8F6F4_" # I # "_w32", int_amdgcn_wmma_f32_16x16x128_f8f6f4, !cast<VOP3PWMMA_Profile>("F32_16X16X128_F8F6F4_" # I # "_w32")>;
@@ -2191,6 +2198,15 @@ class VOP3PX2e <bits<8> op, bits<8> LdScaleOp, VOP3PWMMA_Profile P> : Enc128, VO
let Inst{127} = !if(P.NegLo2, src2_modifiers{0}, 0);
}
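+// Like VOP3PX2_Real_ScaledWMMA below, but for the F4 profile: its mnemonic
+// carries no "_f8_f8_w32"-style source-format suffix, so it is used as-is
+// rather than being trimmed via !substr.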
+multiclass VOP3PX2_Real_ScaledWMMA_F4<bits<8> op, bits<8> LdScaleOp, VOP3PWMMA_Profile WMMAP> {
+ defvar PS = !cast<VOP3P_Pseudo>(NAME # "_twoaddr");
+ let SubtargetPredicate = isGFX1250Plus, WaveSizePredicate = isWave32,
+ DecoderNamespace = "GFX1250" in {
+ def _gfx1250 : VOP3P_Real_Gen<PS, GFX1250Gen, PS.Mnemonic>,
+ VOP3PX2e <op, LdScaleOp, WMMAP>;
+ }
+}
+
multiclass VOP3PX2_Real_ScaledWMMA<bits<8> op, bits<8> LdScaleOp, VOP3PWMMA_Profile WMMAP> {
defvar PS = !cast<VOP3P_Pseudo>(NAME # "_twoaddr");
defvar asmName = !substr(PS.Mnemonic, 0, !sub(!size(PS.Mnemonic), !size("_f8_f8_w32")));
@@ -2292,6 +2308,9 @@ defm V_WMMA_F32_16X16X128_F8F6F4 : VOP3P_Real_WMMA_gfx1250_SrcFormats<0x0
defm V_WMMA_SCALE_F32_16X16X128_F8F6F4 : VOP3PX2_Real_ScaledWMMA_SrcFormats<0x033, 0x35, "F32_16X16X128_F8F6F4_SCALE">;
defm V_WMMA_SCALE16_F32_16X16X128_F8F6F4 : VOP3PX2_Real_ScaledWMMA_SrcFormats<0x033, 0x3a, "F32_16X16X128_F8F6F4_SCALE16">;
+defm V_WMMA_SCALE_F32_32X16X128_F4_w32 : VOP3PX2_Real_ScaledWMMA_F4<0x088, 0x35, F32_32X16X128_F4_SCALE_w32>;
+defm V_WMMA_SCALE16_F32_32X16X128_F4_w32 : VOP3PX2_Real_ScaledWMMA_F4<0x088, 0x3a, F32_32X16X128_F4_SCALE16_w32>;
+
defm V_SWMMAC_F32_16X16X64_F16_w32 : VOP3P_Real_WMMA_gfx1250 <0x065, F32_F16X64_SWMMAC_w32>;
defm V_SWMMAC_F32_16X16X64_BF16_w32 : VOP3P_Real_WMMA_gfx1250 <0x066, F32_BF16X64_SWMMAC_w32>;
defm V_SWMMAC_F16_16X16X64_F16_w32 : VOP3P_Real_WMMA_gfx1250 <0x067, F16_F16X64_SWMMAC_w32>;
diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wmma.gfx1250.w32.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wmma.gfx1250.w32.ll
index 1c7c625daaa78..1bf865c414279 100644
--- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wmma.gfx1250.w32.ll
+++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wmma.gfx1250.w32.ll
@@ -2236,6 +2236,170 @@ bb:
ret void
}
+define amdgpu_ps void @test_wmma_scale_f32_32x16x128_f4(<16 x i32> %A, <8 x i32> %B, <16 x float> %C, i32 %scale_src0, i32 %scale_src1, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale_f32_32x16x128_f4:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_wmma_scale_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], v40, v41 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[42:43], v[36:39], off offset:48
+; GFX1250-NEXT: global_store_b128 v[42:43], v[32:35], off offset:32
+; GFX1250-NEXT: global_store_b128 v[42:43], v[28:31], off offset:16
+; GFX1250-NEXT: global_store_b128 v[42:43], v[24:27], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale_f32_32x16x128_f4:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_wmma_scale_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], v40, v41 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[42:43], v[24:27], off
+; GISEL-NEXT: global_store_b128 v[42:43], v[28:31], off offset:16
+; GISEL-NEXT: global_store_b128 v[42:43], v[32:35], off offset:32
+; GISEL-NEXT: global_store_b128 v[42:43], v[36:39], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 0, <16 x float> %C, i32 1, i32 0, i32 %scale_src0, i32 1, i32 0, i32 %scale_src1, i1 false, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale_f32_32x16x128_f4_ss(<16 x i32> %A, <8 x i32> %B, <16 x float> %C, i32 inreg %scale_src0, i32 inreg %scale_src1, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale_f32_32x16x128_f4_ss:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_wmma_scale_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], s0, s1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E5M3 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E4M3 matrix_a_reuse
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GFX1250-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GFX1250-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GFX1250-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale_f32_32x16x128_f4_ss:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_wmma_scale_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], s0, s1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E5M3 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E4M3 matrix_a_reuse
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GISEL-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GISEL-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GISEL-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 0, <16 x float> %C, i32 2, i32 1, i32 %scale_src0, i32 1, i32 2, i32 %scale_src1, i1 true, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale_f32_32x16x128_f4_si_scale(<16 x i32> %A, <8 x i32> %B, <16 x float> %C, i32 inreg %scale_src0, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale_f32_32x16x128_f4_si_scale:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: s_movk_i32 s1, 0x64
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: v_wmma_scale_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], s0, s1 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E4M3 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E5M3 matrix_b_reuse
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GFX1250-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GFX1250-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GFX1250-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale_f32_32x16x128_f4_si_scale:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_mov_b32_e32 v42, 0x64
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GISEL-NEXT: v_wmma_scale_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], s0, v42 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E4M3 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E5M3 matrix_b_reuse
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GISEL-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GISEL-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GISEL-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 0, <16 x float> %C, i32 3, i32 2, i32 %scale_src0, i32 0, i32 1, i32 100, i1 false, i1 true)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale16_f32_32x16x128_f4(<16 x i32> %A, <8 x i32> %B, <16 x float> %C, i64 %scale_src0, i64 %scale_src1, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale16_f32_32x16x128_f4:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], v[40:41], v[42:43] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[44:45], v[36:39], off offset:48
+; GFX1250-NEXT: global_store_b128 v[44:45], v[32:35], off offset:32
+; GFX1250-NEXT: global_store_b128 v[44:45], v[28:31], off offset:16
+; GFX1250-NEXT: global_store_b128 v[44:45], v[24:27], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale16_f32_32x16x128_f4:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], v[40:41], v[42:43] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[44:45], v[24:27], off
+; GISEL-NEXT: global_store_b128 v[44:45], v[28:31], off offset:16
+; GISEL-NEXT: global_store_b128 v[44:45], v[32:35], off offset:32
+; GISEL-NEXT: global_store_b128 v[44:45], v[36:39], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale16.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 0, <16 x float> %C, i32 1, i32 0, i64 %scale_src0, i32 1, i32 0, i64 %scale_src1, i1 false, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale16_f32_32x16x128_f4_ss(<16 x i32> %A, <8 x i32> %B, <16 x float> %C, i64 inreg %scale_src0, i64 inreg %scale_src1, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale16_f32_32x16x128_f4_ss:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], s[0:1], s[2:3] matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E5M3 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E4M3 matrix_a_reuse
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GFX1250-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GFX1250-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GFX1250-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale16_f32_32x16x128_f4_ss:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], s[0:1], s[2:3] matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E5M3 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E4M3 matrix_a_reuse
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GISEL-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GISEL-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GISEL-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale16.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 0, <16 x float> %C, i32 2, i32 1, i64 %scale_src0, i32 1, i32 2, i64 %scale_src1, i1 true, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale16_f32_32x16x128_f4_si_scale(<16 x i32> %A, <8 x i32> %B, <16 x float> %C, i64 inreg %scale_src0, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale16_f32_32x16x128_f4_si_scale:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: s_mov_b64 s[2:3], 0x64
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], s[0:1], s[2:3] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E4M3 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E5M3 matrix_b_reuse
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GFX1250-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GFX1250-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GFX1250-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale16_f32_32x16x128_f4_si_scale:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_mov_b64_e32 v[42:43], 0x64
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GISEL-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], s[0:1], v[42:43] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E4M3 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E5M3 matrix_b_reuse
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GISEL-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GISEL-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GISEL-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale16.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 0, <16 x float> %C, i32 3, i32 2, i64 %scale_src0, i32 0, i32 1, i64 100, i1 false, i1 true)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
define amdgpu_ps void @test_swmmac_f32_16x16x64_bf16(<16 x bfloat> %A, <32 x bfloat> %B, <8 x float> %C, i16 %Index, ptr addrspace(1) %out) {
; GFX1250-LABEL: test_swmmac_f32_16x16x64_bf16:
; GFX1250: ; %bb.0: ; %bb
@@ -2573,6 +2737,8 @@ declare <8 x float> @llvm.amdgcn.wmma.f32.16x16x128.fp8.bf8.v8f32.v16i32(<16 x i
declare <8 x float> @llvm.amdgcn.wmma.f32.16x16x128.bf8.fp8.v8f32.v16i32(<16 x i32>, <16 x i32>, i16, <8 x float>, i1, i1)
declare <8 x float> @llvm.amdgcn.wmma.f32.16x16x128.bf8.bf8.v8f32.v16i32(<16 x i32>, <16 x i32>, i16, <8 x float>, i1, i1)
declare <16 x float> @llvm.amdgcn.wmma.f32.32x16x128.f4.v16i32.v8i32.v16f32(<16 x i32>, <8 x i32>, i16, <16 x float>)
+declare <16 x float> @llvm.amdgcn.wmma.scale.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32>, <8 x i32>, i16, <16 x float>, i32, i32, i32, i32, i32, i32, i1, i1)
+declare <16 x float> @llvm.amdgcn.wmma.scale16.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32>, <8 x i32>, i16, <16 x float>, i32, i32, i64, i32, i32, i64, i1, i1)
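+; The scale16 variant takes its two scale values as i64 (lowered to register
+; pairs in the tests above) where the scale variant takes i32; all other
+; operands match.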
declare <8 x float> @llvm.amdgcn.swmmac.f32.16x16x64.bf16.v8f32.v16bf16.v32bf16.i16(i1, <16 x bfloat>, i1, <32 x bfloat>, <8 x float>, i16, i1, i1)
declare <8 x bfloat> @llvm.amdgcn.swmmac.bf16.16x16x64.bf16.v8bf16.v16bf16.v32bf16.i16(i1, <16 x bfloat>, i1, <32 x bfloat>, <8 x bfloat>, i16, i1, i1)
diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wmma.imm.gfx1250.w32.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wmma.imm.gfx1250.w32.ll
index e602c31ebd809..48303c004f1d0 100644
--- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wmma.imm.gfx1250.w32.ll
+++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wmma.imm.gfx1250.w32.ll
@@ -2530,6 +2530,312 @@ bb:
ret void
}
+define amdgpu_ps void @test_wmma_scale_f32_32x16x128_f4(<16 x i32> %A, <8 x i32> %B, i32 inreg %scale_src0, i32 inreg %scale_src1, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale_f32_32x16x128_f4:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_wmma_scale_f32_32x16x128_f4 v[26:41], v[0:15], v[16:23], 1.0, s0, s1 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[24:25], v[38:41], off offset:48
+; GFX1250-NEXT: global_store_b128 v[24:25], v[34:37], off offset:32
+; GFX1250-NEXT: global_store_b128 v[24:25], v[30:33], off offset:16
+; GFX1250-NEXT: global_store_b128 v[24:25], v[26:29], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale_f32_32x16x128_f4:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_wmma_scale_f32_32x16x128_f4 v[26:41], v[0:15], v[16:23], 1.0, s0, s1 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[24:25], v[26:29], off
+; GISEL-NEXT: global_store_b128 v[24:25], v[30:33], off offset:16
+; GISEL-NEXT: global_store_b128 v[24:25], v[34:37], off offset:32
+; GISEL-NEXT: global_store_b128 v[24:25], v[38:41], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 0, <16 x float> <float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0>, i32 1, i32 0, i32 %scale_src0, i32 1, i32 0, i32 %scale_src1, i1 true, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale_f32_32x16x128_f4_non_splat(<16 x i32> %A, <8 x i32> %B, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale_f32_32x16x128_f4_non_splat:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_dual_mov_b32 v26, 1.0 :: v_dual_mov_b32 v27, 2.0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_dual_mov_b32 v28, v26 :: v_dual_mov_b32 v29, v26
+; GFX1250-NEXT: v_dual_mov_b32 v30, v26 :: v_dual_mov_b32 v31, v26
+; GFX1250-NEXT: v_dual_mov_b32 v32, v26 :: v_dual_mov_b32 v33, v26
+; GFX1250-NEXT: v_dual_mov_b32 v34, v26 :: v_dual_mov_b32 v35, v26
+; GFX1250-NEXT: v_dual_mov_b32 v36, v26 :: v_dual_mov_b32 v37, v26
+; GFX1250-NEXT: v_dual_mov_b32 v38, v26 :: v_dual_mov_b32 v39, v26
+; GFX1250-NEXT: v_dual_mov_b32 v40, v26 :: v_dual_mov_b32 v41, v26
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_wmma_scale_f32_32x16x128_f4 v[26:41], v[0:15], v[16:23], v[26:41], 1, 2 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[24:25], v[38:41], off offset:48
+; GFX1250-NEXT: global_store_b128 v[24:25], v[34:37], off offset:32
+; GFX1250-NEXT: global_store_b128 v[24:25], v[30:33], off offset:16
+; GFX1250-NEXT: global_store_b128 v[24:25], v[26:29], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale_f32_32x16x128_f4_non_splat:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: s_mov_b32 s0, 1.0
+; GISEL-NEXT: s_mov_b32 s1, 2.0
+; GISEL-NEXT: s_mov_b32 s14, s0
+; GISEL-NEXT: s_mov_b32 s15, s0
+; GISEL-NEXT: s_mov_b32 s2, s0
+; GISEL-NEXT: s_mov_b32 s3, s0
+; GISEL-NEXT: s_mov_b32 s4, s0
+; GISEL-NEXT: s_mov_b32 s5, s0
+; GISEL-NEXT: s_mov_b32 s6, s0
+; GISEL-NEXT: s_mov_b32 s7, s0
+; GISEL-NEXT: s_mov_b32 s8, s0
+; GISEL-NEXT: s_mov_b32 s9, s0
+; GISEL-NEXT: s_mov_b32 s10, s0
+; GISEL-NEXT: s_mov_b32 s11, s0
+; GISEL-NEXT: s_mov_b32 s12, s0
+; GISEL-NEXT: s_mov_b32 s13, s0
+; GISEL-NEXT: v_mov_b64_e32 v[40:41], s[14:15]
+; GISEL-NEXT: v_mov_b64_e32 v[38:39], s[12:13]
+; GISEL-NEXT: v_mov_b64_e32 v[36:37], s[10:11]
+; GISEL-NEXT: v_mov_b64_e32 v[34:35], s[8:9]
+; GISEL-NEXT: v_mov_b64_e32 v[32:33], s[6:7]
+; GISEL-NEXT: v_mov_b64_e32 v[30:31], s[4:5]
+; GISEL-NEXT: v_mov_b64_e32 v[28:29], s[2:3]
+; GISEL-NEXT: v_mov_b64_e32 v[26:27], s[0:1]
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GISEL-NEXT: v_wmma_scale_f32_32x16x128_f4 v[26:41], v[0:15], v[16:23], v[26:41], 1, 2 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[24:25], v[26:29], off
+; GISEL-NEXT: global_store_b128 v[24:25], v[30:33], off offset:16
+; GISEL-NEXT: global_store_b128 v[24:25], v[34:37], off offset:32
+; GISEL-NEXT: global_store_b128 v[24:25], v[38:41], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 0, <16 x float> <float 1.0, float 2.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0>, i32 1, i32 0, i32 1, i32 1, i32 0, i32 2, i1 false, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale_f32_32x16x128_f4_non_inlineable(<16 x i32> %A, <8 x i32> %B, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale_f32_32x16x128_f4_non_inlineable:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_mov_b32_e32 v26, 0x40400000
+; GFX1250-NEXT: s_movk_i32 s0, 0x65
+; GFX1250-NEXT: s_movk_i32 s1, 0x64
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_dual_mov_b32 v27, v26 :: v_dual_mov_b32 v28, v26
+; GFX1250-NEXT: v_dual_mov_b32 v29, v26 :: v_dual_mov_b32 v30, v26
+; GFX1250-NEXT: v_dual_mov_b32 v31, v26 :: v_dual_mov_b32 v32, v26
+; GFX1250-NEXT: v_dual_mov_b32 v33, v26 :: v_dual_mov_b32 v34, v26
+; GFX1250-NEXT: v_dual_mov_b32 v35, v26 :: v_dual_mov_b32 v36, v26
+; GFX1250-NEXT: v_dual_mov_b32 v37, v26 :: v_dual_mov_b32 v38, v26
+; GFX1250-NEXT: v_dual_mov_b32 v39, v26 :: v_dual_mov_b32 v40, v26
+; GFX1250-NEXT: v_mov_b32_e32 v41, v26
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_wmma_scale_f32_32x16x128_f4 v[26:41], v[0:15], v[16:23], v[26:41], s1, s0 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[24:25], v[38:41], off offset:48
+; GFX1250-NEXT: global_store_b128 v[24:25], v[34:37], off offset:32
+; GFX1250-NEXT: global_store_b128 v[24:25], v[30:33], off offset:16
+; GFX1250-NEXT: global_store_b128 v[24:25], v[26:29], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale_f32_32x16x128_f4_non_inlineable:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: s_mov_b32 s0, 0x40400000
+; GISEL-NEXT: v_mov_b32_e32 v42, 0x64
+; GISEL-NEXT: s_mov_b32 s14, s0
+; GISEL-NEXT: s_mov_b32 s15, s0
+; GISEL-NEXT: s_mov_b32 s1, s0
+; GISEL-NEXT: s_mov_b32 s2, s0
+; GISEL-NEXT: s_mov_b32 s3, s0
+; GISEL-NEXT: s_mov_b32 s4, s0
+; GISEL-NEXT: s_mov_b32 s5, s0
+; GISEL-NEXT: s_mov_b32 s6, s0
+; GISEL-NEXT: s_mov_b32 s7, s0
+; GISEL-NEXT: s_mov_b32 s8, s0
+; GISEL-NEXT: s_mov_b32 s9, s0
+; GISEL-NEXT: s_mov_b32 s10, s0
+; GISEL-NEXT: s_mov_b32 s11, s0
+; GISEL-NEXT: s_mov_b32 s12, s0
+; GISEL-NEXT: s_mov_b32 s13, s0
+; GISEL-NEXT: v_mov_b64_e32 v[40:41], s[14:15]
+; GISEL-NEXT: v_mov_b64_e32 v[38:39], s[12:13]
+; GISEL-NEXT: v_mov_b64_e32 v[36:37], s[10:11]
+; GISEL-NEXT: v_mov_b64_e32 v[34:35], s[8:9]
+; GISEL-NEXT: v_mov_b64_e32 v[32:33], s[6:7]
+; GISEL-NEXT: v_mov_b64_e32 v[30:31], s[4:5]
+; GISEL-NEXT: v_mov_b64_e32 v[28:29], s[2:3]
+; GISEL-NEXT: v_mov_b64_e32 v[26:27], s[0:1]
+; GISEL-NEXT: v_mov_b32_e32 v43, 0x65
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GISEL-NEXT: v_wmma_scale_f32_32x16x128_f4 v[26:41], v[0:15], v[16:23], v[26:41], v42, v43 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[24:25], v[26:29], off
+; GISEL-NEXT: global_store_b128 v[24:25], v[30:33], off offset:16
+; GISEL-NEXT: global_store_b128 v[24:25], v[34:37], off offset:32
+; GISEL-NEXT: global_store_b128 v[24:25], v[38:41], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 0, <16 x float> <float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0>, i32 1, i32 0, i32 100, i32 1, i32 0, i32 101, i1 true, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale16_f32_32x16x128_f4(<16 x i32> %A, <8 x i32> %B, i64 inreg %scale_src0, i64 inreg %scale_src1, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale16_f32_32x16x128_f4:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[26:41], v[0:15], v[16:23], 1.0, s[0:1], s[2:3] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[24:25], v[38:41], off offset:48
+; GFX1250-NEXT: global_store_b128 v[24:25], v[34:37], off offset:32
+; GFX1250-NEXT: global_store_b128 v[24:25], v[30:33], off offset:16
+; GFX1250-NEXT: global_store_b128 v[24:25], v[26:29], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale16_f32_32x16x128_f4:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[26:41], v[0:15], v[16:23], 1.0, s[0:1], s[2:3] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[24:25], v[26:29], off
+; GISEL-NEXT: global_store_b128 v[24:25], v[30:33], off offset:16
+; GISEL-NEXT: global_store_b128 v[24:25], v[34:37], off offset:32
+; GISEL-NEXT: global_store_b128 v[24:25], v[38:41], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale16.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 0, <16 x float> <float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0>, i32 1, i32 0, i64 %scale_src0, i32 1, i32 0, i64 %scale_src1, i1 true, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale16_f32_32x16x128_f4_non_splat(<16 x i32> %A, <8 x i32> %B, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale16_f32_32x16x128_f4_non_splat:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_dual_mov_b32 v26, 1.0 :: v_dual_mov_b32 v27, 2.0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_dual_mov_b32 v28, v26 :: v_dual_mov_b32 v29, v26
+; GFX1250-NEXT: v_dual_mov_b32 v30, v26 :: v_dual_mov_b32 v31, v26
+; GFX1250-NEXT: v_dual_mov_b32 v32, v26 :: v_dual_mov_b32 v33, v26
+; GFX1250-NEXT: v_dual_mov_b32 v34, v26 :: v_dual_mov_b32 v35, v26
+; GFX1250-NEXT: v_dual_mov_b32 v36, v26 :: v_dual_mov_b32 v37, v26
+; GFX1250-NEXT: v_dual_mov_b32 v38, v26 :: v_dual_mov_b32 v39, v26
+; GFX1250-NEXT: v_dual_mov_b32 v40, v26 :: v_dual_mov_b32 v41, v26
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[26:41], v[0:15], v[16:23], v[26:41], 1, 2 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[24:25], v[38:41], off offset:48
+; GFX1250-NEXT: global_store_b128 v[24:25], v[34:37], off offset:32
+; GFX1250-NEXT: global_store_b128 v[24:25], v[30:33], off offset:16
+; GFX1250-NEXT: global_store_b128 v[24:25], v[26:29], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale16_f32_32x16x128_f4_non_splat:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: s_mov_b32 s0, 1.0
+; GISEL-NEXT: s_mov_b32 s1, 2.0
+; GISEL-NEXT: s_mov_b32 s14, s0
+; GISEL-NEXT: s_mov_b32 s15, s0
+; GISEL-NEXT: s_mov_b32 s2, s0
+; GISEL-NEXT: s_mov_b32 s3, s0
+; GISEL-NEXT: s_mov_b32 s4, s0
+; GISEL-NEXT: s_mov_b32 s5, s0
+; GISEL-NEXT: s_mov_b32 s6, s0
+; GISEL-NEXT: s_mov_b32 s7, s0
+; GISEL-NEXT: s_mov_b32 s8, s0
+; GISEL-NEXT: s_mov_b32 s9, s0
+; GISEL-NEXT: s_mov_b32 s10, s0
+; GISEL-NEXT: s_mov_b32 s11, s0
+; GISEL-NEXT: s_mov_b32 s12, s0
+; GISEL-NEXT: s_mov_b32 s13, s0
+; GISEL-NEXT: v_mov_b64_e32 v[40:41], s[14:15]
+; GISEL-NEXT: v_mov_b64_e32 v[38:39], s[12:13]
+; GISEL-NEXT: v_mov_b64_e32 v[36:37], s[10:11]
+; GISEL-NEXT: v_mov_b64_e32 v[34:35], s[8:9]
+; GISEL-NEXT: v_mov_b64_e32 v[32:33], s[6:7]
+; GISEL-NEXT: v_mov_b64_e32 v[30:31], s[4:5]
+; GISEL-NEXT: v_mov_b64_e32 v[28:29], s[2:3]
+; GISEL-NEXT: v_mov_b64_e32 v[26:27], s[0:1]
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GISEL-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[26:41], v[0:15], v[16:23], v[26:41], 1, 2 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[24:25], v[26:29], off
+; GISEL-NEXT: global_store_b128 v[24:25], v[30:33], off offset:16
+; GISEL-NEXT: global_store_b128 v[24:25], v[34:37], off offset:32
+; GISEL-NEXT: global_store_b128 v[24:25], v[38:41], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale16.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 0, <16 x float> <float 1.0, float 2.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0, float 1.0>, i32 1, i32 0, i64 1, i32 1, i32 0, i64 2, i1 false, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale16_f32_32x16x128_f4_non_inlineable(<16 x i32> %A, <8 x i32> %B, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale16_f32_32x16x128_f4_non_inlineable:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_mov_b32_e32 v26, 0x40400000
+; GFX1250-NEXT: s_mov_b64 s[0:1], 0x65
+; GFX1250-NEXT: s_mov_b64 s[2:3], 0x64
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_dual_mov_b32 v27, v26 :: v_dual_mov_b32 v28, v26
+; GFX1250-NEXT: v_dual_mov_b32 v29, v26 :: v_dual_mov_b32 v30, v26
+; GFX1250-NEXT: v_dual_mov_b32 v31, v26 :: v_dual_mov_b32 v32, v26
+; GFX1250-NEXT: v_dual_mov_b32 v33, v26 :: v_dual_mov_b32 v34, v26
+; GFX1250-NEXT: v_dual_mov_b32 v35, v26 :: v_dual_mov_b32 v36, v26
+; GFX1250-NEXT: v_dual_mov_b32 v37, v26 :: v_dual_mov_b32 v38, v26
+; GFX1250-NEXT: v_dual_mov_b32 v39, v26 :: v_dual_mov_b32 v40, v26
+; GFX1250-NEXT: v_mov_b32_e32 v41, v26
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[26:41], v[0:15], v[16:23], v[26:41], s[2:3], s[0:1] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[24:25], v[38:41], off offset:48
+; GFX1250-NEXT: global_store_b128 v[24:25], v[34:37], off offset:32
+; GFX1250-NEXT: global_store_b128 v[24:25], v[30:33], off offset:16
+; GFX1250-NEXT: global_store_b128 v[24:25], v[26:29], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale16_f32_32x16x128_f4_non_inlineable:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: s_mov_b32 s0, 0x40400000
+; GISEL-NEXT: v_mov_b64_e32 v[42:43], 0x64
+; GISEL-NEXT: s_mov_b32 s14, s0
+; GISEL-NEXT: s_mov_b32 s15, s0
+; GISEL-NEXT: s_mov_b32 s1, s0
+; GISEL-NEXT: s_mov_b32 s2, s0
+; GISEL-NEXT: s_mov_b32 s3, s0
+; GISEL-NEXT: s_mov_b32 s4, s0
+; GISEL-NEXT: s_mov_b32 s5, s0
+; GISEL-NEXT: s_mov_b32 s6, s0
+; GISEL-NEXT: s_mov_b32 s7, s0
+; GISEL-NEXT: s_mov_b32 s8, s0
+; GISEL-NEXT: s_mov_b32 s9, s0
+; GISEL-NEXT: s_mov_b32 s10, s0
+; GISEL-NEXT: s_mov_b32 s11, s0
+; GISEL-NEXT: s_mov_b32 s12, s0
+; GISEL-NEXT: s_mov_b32 s13, s0
+; GISEL-NEXT: v_mov_b64_e32 v[40:41], s[14:15]
+; GISEL-NEXT: v_mov_b64_e32 v[38:39], s[12:13]
+; GISEL-NEXT: v_mov_b64_e32 v[36:37], s[10:11]
+; GISEL-NEXT: v_mov_b64_e32 v[34:35], s[8:9]
+; GISEL-NEXT: v_mov_b64_e32 v[32:33], s[6:7]
+; GISEL-NEXT: v_mov_b64_e32 v[30:31], s[4:5]
+; GISEL-NEXT: v_mov_b64_e32 v[28:29], s[2:3]
+; GISEL-NEXT: v_mov_b64_e32 v[26:27], s[0:1]
+; GISEL-NEXT: v_mov_b64_e32 v[44:45], 0x65
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GISEL-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[26:41], v[0:15], v[16:23], v[26:41], v[42:43], v[44:45] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[24:25], v[26:29], off
+; GISEL-NEXT: global_store_b128 v[24:25], v[30:33], off offset:16
+; GISEL-NEXT: global_store_b128 v[24:25], v[34:37], off offset:32
+; GISEL-NEXT: global_store_b128 v[24:25], v[38:41], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale16.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 0, <16 x float> <float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0, float 3.0>, i32 1, i32 0, i64 100, i32 1, i32 0, i64 101, i1 true, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
declare <8 x float> @llvm.amdgcn.wmma.f32.16x16x4.f32.v8f32.v2f32(i1, <2 x float>, i1, <2 x float>, i16, <8 x float>, i1, i1)
declare <8 x float> @llvm.amdgcn.wmma.f32.16x16x32.bf16.v8f32.v16bf16(i1, <16 x bfloat>, i1, <16 x bfloat>, i16, <8 x float>, i1, i1)
declare <8 x bfloat> @llvm.amdgcn.wmma.bf16.16x16x32.bf16.v8bf16.v16bf16(i1, <16 x bfloat>, i1, <16 x bfloat>, i16, <8 x bfloat>, i1, i1)
@@ -2557,3 +2863,5 @@ declare <8 x float> @llvm.amdgcn.wmma.f32.16x16x128.fp8.bf8.v8f32.v16i32(<16 x i
declare <8 x float> @llvm.amdgcn.wmma.f32.16x16x128.bf8.fp8.v8f32.v16i32(<16 x i32>, <16 x i32>, i16, <8 x float>, i1, i1)
declare <8 x float> @llvm.amdgcn.wmma.f32.16x16x128.bf8.bf8.v8f32.v16i32(<16 x i32>, <16 x i32>, i16, <8 x float>, i1, i1)
declare <16 x float> @llvm.amdgcn.wmma.f32.32x16x128.f4.v16i32.v8i32.v16f32(<16 x i32>, <8 x i32>, i16, <16 x float>)
+declare <16 x float> @llvm.amdgcn.wmma.scale.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32>, <8 x i32>, i16, <16 x float>, i32, i32, i32, i32, i32, i32, i1, i1)
+declare <16 x float> @llvm.amdgcn.wmma.scale16.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32>, <8 x i32>, i16, <16 x float>, i32, i32, i64, i32, i32, i64, i1, i1)
diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wmma.imod.gfx1250.w32.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wmma.imod.gfx1250.w32.ll
index 14699ce630c19..8f674f84206ff 100644
--- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wmma.imod.gfx1250.w32.ll
+++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wmma.imod.gfx1250.w32.ll
@@ -1882,6 +1882,162 @@ bb:
ret void
}
+define amdgpu_ps void @test_wmma_scale_f32_32x16x128_f4_negC(<16 x i32> %A, <8 x i32> %B, <16 x float> %C, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale_f32_32x16x128_f4_negC:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_wmma_scale_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], 2, 4 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1]
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GFX1250-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GFX1250-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GFX1250-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale_f32_32x16x128_f4_negC:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_wmma_scale_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], 2, 4 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1]
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GISEL-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GISEL-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GISEL-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 1, <16 x float> %C, i32 1, i32 0, i32 2, i32 1, i32 0, i32 4, i1 false, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale_f32_32x16x128_f4_neg_absC(<16 x i32> %A, <8 x i32> %B, <16 x float> %C, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale_f32_32x16x128_f4_neg_absC:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_wmma_scale_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], 2, 4 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1] neg_hi:[0,0,1]
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GFX1250-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GFX1250-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GFX1250-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale_f32_32x16x128_f4_neg_absC:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_wmma_scale_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], 2, 4 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1] neg_hi:[0,0,1]
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GISEL-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GISEL-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GISEL-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 3, <16 x float> %C, i32 1, i32 0, i32 2, i32 1, i32 0, i32 4, i1 false, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale_f32_32x16x128_f4_ignoreC(<16 x i32> %A, <8 x i32> %B, <16 x float> %C, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale_f32_32x16x128_f4_ignoreC:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_wmma_scale_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], 2, 4 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GFX1250-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GFX1250-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GFX1250-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale_f32_32x16x128_f4_ignoreC:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_wmma_scale_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], 2, 4 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GISEL-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GISEL-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GISEL-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 4, <16 x float> %C, i32 1, i32 0, i32 2, i32 1, i32 0, i32 4, i1 false, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale16_f32_32x16x128_f4_negC(<16 x i32> %A, <8 x i32> %B, <16 x float> %C, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale16_f32_32x16x128_f4_negC:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], 2, 4 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1]
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GFX1250-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GFX1250-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GFX1250-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale16_f32_32x16x128_f4_negC:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], 2, 4 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1]
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GISEL-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GISEL-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GISEL-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale16.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 1, <16 x float> %C, i32 1, i32 0, i64 2, i32 1, i32 0, i64 4, i1 false, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale16_f32_32x16x128_f4_neg_absC(<16 x i32> %A, <8 x i32> %B, <16 x float> %C, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale16_f32_32x16x128_f4_neg_absC:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], 2, 4 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1] neg_hi:[0,0,1]
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GFX1250-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GFX1250-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GFX1250-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale16_f32_32x16x128_f4_neg_absC:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], 2, 4 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1] neg_hi:[0,0,1]
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GISEL-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GISEL-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GISEL-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale16.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 3, <16 x float> %C, i32 1, i32 0, i64 2, i32 1, i32 0, i64 4, i1 false, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
+define amdgpu_ps void @test_wmma_scale16_f32_32x16x128_f4_ignoreC(<16 x i32> %A, <8 x i32> %B, <16 x float> %C, ptr addrspace(1) %out) {
+; GFX1250-LABEL: test_wmma_scale16_f32_32x16x128_f4_ignoreC:
+; GFX1250: ; %bb.0: ; %bb
+; GFX1250-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], 2, 4 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1
+; GFX1250-NEXT: s_clause 0x3
+; GFX1250-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GFX1250-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GFX1250-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GFX1250-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GFX1250-NEXT: s_endpgm
+;
+; GISEL-LABEL: test_wmma_scale16_f32_32x16x128_f4_ignoreC:
+; GISEL: ; %bb.0: ; %bb
+; GISEL-NEXT: v_wmma_scale16_f32_32x16x128_f4 v[24:39], v[0:15], v[16:23], v[24:39], 2, 4 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1
+; GISEL-NEXT: s_clause 0x3
+; GISEL-NEXT: global_store_b128 v[40:41], v[24:27], off
+; GISEL-NEXT: global_store_b128 v[40:41], v[28:31], off offset:16
+; GISEL-NEXT: global_store_b128 v[40:41], v[32:35], off offset:32
+; GISEL-NEXT: global_store_b128 v[40:41], v[36:39], off offset:48
+; GISEL-NEXT: s_endpgm
+bb:
+ %res = call <16 x float> @llvm.amdgcn.wmma.scale16.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32> %A, <8 x i32> %B, i16 4, <16 x float> %C, i32 1, i32 0, i64 2, i32 1, i32 0, i64 4, i1 false, i1 false)
+ store <16 x float> %res, ptr addrspace(1) %out
+ ret void
+}
+
define amdgpu_ps void @test_swmmac_f32_16x16x64_bf16_negA(<16 x bfloat> %A, <32 x bfloat> %B, <8 x float> %C, i16 %Index, ptr addrspace(1) %out) {
; GFX1250-LABEL: test_swmmac_f32_16x16x64_bf16_negA:
; GFX1250: ; %bb.0: ; %bb
@@ -2177,6 +2333,8 @@ declare <8 x float> @llvm.amdgcn.wmma.f32.16x16x128.fp8.bf8.v8f32.v16i32(<16 x i
declare <8 x float> @llvm.amdgcn.wmma.f32.16x16x128.bf8.fp8.v8f32.v16i32(<16 x i32>, <16 x i32>, i16, <8 x float>, i1, i1)
declare <8 x float> @llvm.amdgcn.wmma.f32.16x16x128.bf8.bf8.v8f32.v16i32(<16 x i32>, <16 x i32>, i16, <8 x float>, i1, i1)
declare <16 x float> @llvm.amdgcn.wmma.f32.32x16x128.f4.v16i32.v8i32.v16f32(<16 x i32>, <8 x i32>, i16, <16 x float>)
+declare <16 x float> @llvm.amdgcn.wmma.scale.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32>, <8 x i32>, i16, <16 x float>, i32, i32, i32, i32, i32, i32, i1, i1)
+declare <16 x float> @llvm.amdgcn.wmma.scale16.f32.32x16x128.f4.v16f32.v16i32.v8i32(<16 x i32>, <8 x i32>, i16, <16 x float>, i32, i32, i64, i32, i32, i64, i1, i1)
declare <8 x float> @llvm.amdgcn.swmmac.f32.16x16x64.bf16.v8f32.v16bf16.v32bf16.i16(i1, <16 x bfloat>, i1, <32 x bfloat>, <8 x float>, i16, i1, i1)
declare <8 x bfloat> @llvm.amdgcn.swmmac.bf16.16x16x64.bf16.v8bf16.v16bf16.v32bf16.i16(i1, <16 x bfloat>, i1, <32 x bfloat>, <8 x bfloat>, i16, i1, i1)
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_wmma_w32.s b/llvm/test/MC/AMDGPU/gfx1250_asm_wmma_w32.s
index 93e65d3444b89..8185b77beb935 100644
--- a/llvm/test/MC/AMDGPU/gfx1250_asm_wmma_w32.s
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_wmma_w32.s
@@ -1737,3 +1737,173 @@ v_wmma_f32_32x16x128_f4 v[4:19], v[0:15], v[2:9], v[4:19] neg_lo:[0,0,1] neg_hi:
// GFX1250: v_wmma_f32_32x16x128_f4 v[4:19], v[0:15], v[2:9], v[4:19] neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x04,0x44,0x88,0xcc,0x00,0x05,0x12,0x9c]
// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1] neg_hi:[0,0,1]
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x00,0x08,0x35,0xcc,0x01,0x05,0x02,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], s1, s2 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse matrix_b_reuse neg_lo:[0,0,1] neg_hi:[0,0,1]
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], s1, s2 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse matrix_b_reuse neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x00,0x68,0x35,0xcc,0x01,0x04,0x00,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 ; encoding: [0x00,0x00,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_a_scale:MATRIX_SCALE_ROW0
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 ; encoding: [0x00,0x00,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_a_scale:MATRIX_SCALE_ROW1
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_a_scale:MATRIX_SCALE_ROW1 ; encoding: [0x00,0x08,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_a_reuse
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_a_reuse ; encoding: [0x00,0x20,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_a_reuse
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_a_reuse ; encoding: [0x00,0x28,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_b_scale:MATRIX_SCALE_ROW0
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 ; encoding: [0x00,0x00,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_b_scale:MATRIX_SCALE_ROW1
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_b_scale:MATRIX_SCALE_ROW1 ; encoding: [0x00,0x00,0x35,0xcc,0x00,0x00,0x00,0x08,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_b_reuse
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_b_reuse ; encoding: [0x00,0x40,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_b_reuse
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_b_reuse ; encoding: [0x00,0x40,0x35,0xcc,0x00,0x00,0x00,0x08,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E8 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E8
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 ; encoding: [0x00,0x00,0x35,0xcc,0x01,0x05,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E5M3
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E5M3 ; encoding: [0x00,0x00,0x35,0xcc,0x01,0x05,0x02,0x20,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E4M3
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E4M3 ; encoding: [0x00,0x00,0x35,0xcc,0x01,0x05,0x02,0x40,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E5M3
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E5M3 ; encoding: [0x00,0x01,0x35,0xcc,0x01,0x05,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E4M3
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E4M3 ; encoding: [0x00,0x02,0x35,0xcc,0x01,0x05,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E8 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E8 matrix_a_reuse matrix_b_reuse neg_lo:[0,0,1] neg_hi:[0,0,1]
+// GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse matrix_b_reuse neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x00,0x68,0x35,0xcc,0x01,0x05,0x02,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
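+// The scale16 forms below pair the same WMMA opcode with ld-scale opcode
+// 0x3a instead of 0x35 (compare the 0x3a,0xcc vs 0x35,0xcc bytes in the
+// first dword of the encodings) and take the scale operands as 64-bit
+// register pairs.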
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1] neg_hi:[0,0,1]
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x00,0x08,0x3a,0xcc,0x02,0x09,0x02,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], s[2:3], s[4:5] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse matrix_b_reuse neg_lo:[0,0,1] neg_hi:[0,0,1]
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], s[2:3], s[4:5] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse matrix_b_reuse neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x00,0x68,0x3a,0xcc,0x02,0x08,0x00,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1]
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] ; encoding: [0x00,0x00,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_a_scale:MATRIX_SCALE_ROW0
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] ; encoding: [0x00,0x00,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_a_scale:MATRIX_SCALE_ROW1
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_a_scale:MATRIX_SCALE_ROW1 ; encoding: [0x00,0x08,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_a_reuse
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_a_reuse ; encoding: [0x00,0x20,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_a_reuse
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_a_reuse ; encoding: [0x00,0x28,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_b_scale:MATRIX_SCALE_ROW0
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] ; encoding: [0x00,0x00,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_b_scale:MATRIX_SCALE_ROW1
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_b_scale:MATRIX_SCALE_ROW1 ; encoding: [0x00,0x00,0x3a,0xcc,0x00,0x00,0x00,0x08,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_b_reuse
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_b_reuse ; encoding: [0x00,0x40,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_b_scale:MATRIX_SCALE_ROW1 matrix_b_reuse
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_b_scale:MATRIX_SCALE_ROW1 matrix_b_reuse ; encoding: [0x00,0x40,0x3a,0xcc,0x00,0x00,0x00,0x08,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_a_scale_fmt:MATRIX_SCALE_FMT_E8 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E8
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] ; encoding: [0x00,0x00,0x3a,0xcc,0x02,0x09,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_a_scale_fmt:MATRIX_SCALE_FMT_E5M3
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_a_scale_fmt:MATRIX_SCALE_FMT_E5M3 ; encoding: [0x00,0x00,0x3a,0xcc,0x02,0x09,0x02,0x20,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_a_scale_fmt:MATRIX_SCALE_FMT_E4M3
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_a_scale_fmt:MATRIX_SCALE_FMT_E4M3 ; encoding: [0x00,0x00,0x3a,0xcc,0x02,0x09,0x02,0x40,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_b_scale_fmt:MATRIX_SCALE_FMT_E5M3
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_b_scale_fmt:MATRIX_SCALE_FMT_E5M3 ; encoding: [0x00,0x01,0x3a,0xcc,0x02,0x09,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_b_scale_fmt:MATRIX_SCALE_FMT_E4M3
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_b_scale_fmt:MATRIX_SCALE_FMT_E4M3 ; encoding: [0x00,0x02,0x3a,0xcc,0x02,0x09,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E8 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E8 matrix_a_reuse matrix_b_reuse neg_lo:[0,0,1] neg_hi:[0,0,1]
+// GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse matrix_b_reuse neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x00,0x68,0x3a,0xcc,0x02,0x09,0x02,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c]
+// WAVESIZE-ERR: :[[@LINE-2]]:1: error: instruction requires wavesize=32
+// GFX12-ERR: :[[@LINE-3]]:1: error: instruction not supported on this GPU
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_wmma_w32_err.s b/llvm/test/MC/AMDGPU/gfx1250_asm_wmma_w32_err.s
index 1eae8f6ba451c..41cac9d1470ae 100644
--- a/llvm/test/MC/AMDGPU/gfx1250_asm_wmma_w32_err.s
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_wmma_w32_err.s
@@ -449,6 +449,16 @@ v_wmma_f32_16x16x128_f8f6f4 v[0:7], v[0:15], v[20:35], v[40:47] matrix_b_fmt:MAT
// GFX1250-ERR-NEXT: {{^}}v_wmma_f32_16x16x128_f8f6f4 v[0:7], v[0:15], v[20:35], v[40:47] matrix_b_fmt:MATRIX_FMT_FP4
// GFX1250-ERR-NEXT: {{^}} ^
+v_wmma_scale_f32_16x16x128_f8f6f4 v[0:7], v[8:23], v[24:31], v[40:47], v1, v2
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: wrong register tuple size for MATRIX_FMT_FP8
+// GFX1250-ERR-NEXT: {{^}}v_wmma_scale_f32_16x16x128_f8f6f4 v[0:7], v[8:23], v[24:31], v[40:47], v1, v2
+// GFX1250-ERR-NEXT: {{^}} ^
+
+v_wmma_scale16_f32_16x16x128_f8f6f4 v[0:7], v[0:7], v[0:15], v[0:7], s[0:1], s[0:1] matrix_a_fmt:MATRIX_FMT_FP6
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: wrong register tuple size for MATRIX_FMT_FP6
+// GFX1250-ERR-NEXT: {{^}}v_wmma_scale16_f32_16x16x128_f8f6f4 v[0:7], v[0:7], v[0:15], v[0:7], s[0:1], s[0:1] matrix_a_fmt:MATRIX_FMT_FP6
+// GFX1250-ERR-NEXT: {{^}} ^
+
v_wmma_f32_32x16x128_f4 v[4:19], v[0:15], v[2:9], v[4:19] neg_lo:[1,0,0]
// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid neg_lo operand
// GFX1250-ERR-NEXT: {{^}}v_wmma_f32_32x16x128_f4 v[4:19], v[0:15], v[2:9], v[4:19] neg_lo:[1,0,0]
@@ -468,3 +478,23 @@ v_wmma_f32_32x16x128_f4 v[4:19], v[0:15], v[2:9], v[4:19] neg_hi:[0,1,0]
// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid neg_hi operand
// GFX1250-ERR-NEXT: {{^}}v_wmma_f32_32x16x128_f4 v[4:19], v[0:15], v[2:9], v[4:19] neg_hi:[0,1,0]
// GFX1250-ERR-NEXT: {{^}} ^
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 neg_lo:[1,0,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid neg_lo operand
+// GFX1250-ERR-NEXT: {{^}}v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 neg_lo:[1,0,0]
+// GFX1250-ERR-NEXT: {{^}} ^
+
+v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_fmt:0
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1250-ERR-NEXT: {{^}}v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_fmt:0
+// GFX1250-ERR-NEXT: {{^}} ^
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[0:1], v[2:3] neg_lo:[1,0,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid neg_lo operand
+// GFX1250-ERR-NEXT: {{^}}v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[0:1], v[2:3] neg_lo:[1,0,0]
+// GFX1250-ERR-NEXT: {{^}} ^
+
+v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[0:1], v[2:3] matrix_a_fmt:0
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1250-ERR-NEXT: {{^}}v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[0:1], v[2:3] matrix_a_fmt:0
+// GFX1250-ERR-NEXT: {{^}} ^
diff --git a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_wmma_w32.txt b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_wmma_w32.txt
index 2216348fa43c3..a409dac321f83 100644
--- a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_wmma_w32.txt
+++ b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_wmma_w32.txt
@@ -999,3 +999,93 @@
0x04,0x44,0x88,0xcc,0x00,0x05,0x12,0x9c
# GFX1250: v_wmma_f32_32x16x128_f4 v[4:19], v[0:15], v[2:9], v[4:19] neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x04,0x44,0x88,0xcc,0x00,0x05,0x12,0x9c]
+
+0x00,0x00,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 ; encoding: [0x00,0x00,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x20,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_a_reuse ; encoding: [0x00,0x20,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x08,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_a_scale:MATRIX_SCALE_ROW1 ; encoding: [0x00,0x08,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x28,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_a_reuse ; encoding: [0x00,0x28,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x40,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_b_reuse ; encoding: [0x00,0x40,0x35,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x00,0x35,0xcc,0x00,0x00,0x00,0x08,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_b_scale:MATRIX_SCALE_ROW1 ; encoding: [0x00,0x00,0x35,0xcc,0x00,0x00,0x00,0x08,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x40,0x35,0xcc,0x00,0x00,0x00,0x08,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s0, s0 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_b_reuse ; encoding: [0x00,0x40,0x35,0xcc,0x00,0x00,0x00,0x08,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x68,0x35,0xcc,0x01,0x04,0x00,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], s1, s2 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse matrix_b_reuse neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x00,0x68,0x35,0xcc,0x01,0x04,0x00,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c]
+
+0x00,0x00,0x35,0xcc,0x01,0x05,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 ; encoding: [0x00,0x00,0x35,0xcc,0x01,0x05,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+
+0x00,0x68,0x35,0xcc,0x01,0x05,0x02,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse matrix_b_reuse neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x00,0x68,0x35,0xcc,0x01,0x05,0x02,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c]
+
+0x00,0x08,0x35,0xcc,0x01,0x05,0x02,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x00,0x08,0x35,0xcc,0x01,0x05,0x02,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c]
+
+0x00,0x00,0x35,0xcc,0x01,0x05,0x02,0x40,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E4M3 ; encoding: [0x00,0x00,0x35,0xcc,0x01,0x05,0x02,0x40,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+
+0x00,0x00,0x35,0xcc,0x01,0x05,0x02,0x20,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_a_scale_fmt:MATRIX_SCALE_FMT_E5M3 ; encoding: [0x00,0x00,0x35,0xcc,0x01,0x05,0x02,0x20,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+
+0x00,0x02,0x35,0xcc,0x01,0x05,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E4M3 ; encoding: [0x00,0x02,0x35,0xcc,0x01,0x05,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+
+0x00,0x01,0x35,0xcc,0x01,0x05,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c
+# GFX1250: v_wmma_scale_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v1, v2 matrix_b_scale_fmt:MATRIX_SCALE_FMT_E5M3 ; encoding: [0x00,0x01,0x35,0xcc,0x01,0x05,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+
+0x00,0x00,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] ; encoding: [0x00,0x00,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x20,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_a_reuse ; encoding: [0x00,0x20,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x08,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_a_scale:MATRIX_SCALE_ROW1 ; encoding: [0x00,0x08,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x28,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_a_reuse ; encoding: [0x00,0x28,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x40,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_b_reuse ; encoding: [0x00,0x40,0x3a,0xcc,0x00,0x00,0x00,0x00,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x00,0x3a,0xcc,0x00,0x00,0x00,0x08,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_b_scale:MATRIX_SCALE_ROW1 ; encoding: [0x00,0x00,0x3a,0xcc,0x00,0x00,0x00,0x08,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x40,0x3a,0xcc,0x00,0x00,0x00,0x08,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[0:7], v[0:15], s[0:1], s[0:1] matrix_b_scale:MATRIX_SCALE_ROW1 matrix_b_reuse ; encoding: [0x00,0x40,0x3a,0xcc,0x00,0x00,0x00,0x08,0x00,0x40,0x88,0xcc,0x08,0x01,0x02,0x1c]
+
+0x00,0x68,0x3a,0xcc,0x02,0x08,0x00,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], s[2:3], s[4:5] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse matrix_b_reuse neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x00,0x68,0x3a,0xcc,0x02,0x08,0x00,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c]
+
+0x00,0x00,0x3a,0xcc,0x02,0x09,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] ; encoding: [0x00,0x00,0x3a,0xcc,0x02,0x09,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+
+0x00,0x68,0x3a,0xcc,0x02,0x09,0x02,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 matrix_a_reuse matrix_b_reuse neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x00,0x68,0x3a,0xcc,0x02,0x09,0x02,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c]
+
+0x00,0x08,0x3a,0xcc,0x02,0x09,0x02,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_a_scale:MATRIX_SCALE_ROW1 matrix_b_scale:MATRIX_SCALE_ROW1 neg_lo:[0,0,1] neg_hi:[0,0,1] ; encoding: [0x00,0x08,0x3a,0xcc,0x02,0x09,0x02,0x08,0x00,0x44,0x88,0xcc,0x08,0x31,0xa2,0x9c]
+
+0x00,0x00,0x3a,0xcc,0x02,0x09,0x02,0x40,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_a_scale_fmt:MATRIX_SCALE_FMT_E4M3 ; encoding: [0x00,0x00,0x3a,0xcc,0x02,0x09,0x02,0x40,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+
+0x00,0x00,0x3a,0xcc,0x02,0x09,0x02,0x20,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_a_scale_fmt:MATRIX_SCALE_FMT_E5M3 ; encoding: [0x00,0x00,0x3a,0xcc,0x02,0x09,0x02,0x20,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+
+0x00,0x02,0x3a,0xcc,0x02,0x09,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_b_scale_fmt:MATRIX_SCALE_FMT_E4M3 ; encoding: [0x00,0x02,0x3a,0xcc,0x02,0x09,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
+
+0x00,0x01,0x3a,0xcc,0x02,0x09,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c
+# GFX1250: v_wmma_scale16_f32_32x16x128_f4 v[0:15], v[8:23], v[24:31], v[40:55], v[2:3], v[4:5] matrix_b_scale_fmt:MATRIX_SCALE_FMT_E5M3 ; encoding: [0x00,0x01,0x3a,0xcc,0x02,0x09,0x02,0x00,0x00,0x40,0x88,0xcc,0x08,0x31,0xa2,0x1c]
>From 9c6bb180407a7db004624d13d9de108d7cebc73c Mon Sep 17 00:00:00 2001
From: Jasmine Tang <jjasmine at igalia.com>
Date: Tue, 5 Aug 2025 15:22:37 -0700
Subject: [PATCH 007/171] [WebAssembly] Constant fold wasm.dot (#149619)
Constant fold wasm.dot of constant vectors/splats.
Test cases added in
`llvm/test/Transforms/InstSimplify/ConstProp/WebAssembly/dot.ll`.
Related to https://github.com/llvm/llvm-project/issues/55933
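
For reference (an editorial sketch, not code from the patch; `wasmDotModel` is
an illustrative name), the arithmetic being folded is pairwise: sign-extend
adjacent i16 lanes, multiply, and add each adjacent pair of i32 products,
wrapping modulo 2^32. A minimal C++ model:

  #include <array>
  #include <cstdint>

  // Model of the wasm.dot fold: i16x8 * i16x8 -> i32x4, where result lane I
  // is the sum of the sign-extended products of lanes 2*I and 2*I+1.
  std::array<int32_t, 4> wasmDotModel(const std::array<int16_t, 8> &A,
                                      const std::array<int16_t, 8> &B) {
    std::array<int32_t, 4> R;
    for (int I = 0; I < 4; ++I) {
      int32_t Lo = int32_t(A[2 * I]) * int32_t(B[2 * I]);
      int32_t Hi = int32_t(A[2 * I + 1]) * int32_t(B[2 * I + 1]);
      // Add as unsigned so the i16 extremes wrap instead of overflowing,
      // e.g. 2 * (-32768)^2 wraps to INT32_MIN as in the modulo-spec tests.
      R[I] = int32_t(uint32_t(Lo) + uint32_t(Hi));
    }
    return R;
  }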
---
llvm/lib/Analysis/ConstantFolding.cpp | 25 +++++++++
.../InstSimplify/ConstProp/WebAssembly/dot.ll | 56 +++++++++++++++++++
2 files changed, 81 insertions(+)
create mode 100644 llvm/test/Transforms/InstSimplify/ConstProp/WebAssembly/dot.ll
diff --git a/llvm/lib/Analysis/ConstantFolding.cpp b/llvm/lib/Analysis/ConstantFolding.cpp
index dd98b62baca33..4969528a1b29b 100644
--- a/llvm/lib/Analysis/ConstantFolding.cpp
+++ b/llvm/lib/Analysis/ConstantFolding.cpp
@@ -1659,6 +1659,7 @@ bool llvm::canConstantFoldCallTo(const CallBase *Call, const Function *F) {
case Intrinsic::aarch64_sve_convert_from_svbool:
case Intrinsic::wasm_alltrue:
case Intrinsic::wasm_anytrue:
+ case Intrinsic::wasm_dot:
// WebAssembly float semantics are always known
case Intrinsic::wasm_trunc_signed:
case Intrinsic::wasm_trunc_unsigned:
@@ -3989,6 +3990,30 @@ static Constant *ConstantFoldFixedVectorCall(
}
return ConstantVector::get(Result);
}
+ case Intrinsic::wasm_dot: {
+ unsigned NumElements =
+ cast<FixedVectorType>(Operands[0]->getType())->getNumElements();
+
+ assert(NumElements == 8 && Result.size() == 4 &&
+ "wasm dot takes i16x8 and produces i32x4");
+ assert(Ty->isIntegerTy());
+ int32_t MulVector[8];
+
+ for (unsigned I = 0; I < NumElements; ++I) {
+ ConstantInt *Elt0 =
+ cast<ConstantInt>(Operands[0]->getAggregateElement(I));
+ ConstantInt *Elt1 =
+ cast<ConstantInt>(Operands[1]->getAggregateElement(I));
+
+ MulVector[I] = Elt0->getSExtValue() * Elt1->getSExtValue();
+ }
+ for (unsigned I = 0; I < Result.size(); I++) {
+ int32_t IAdd = MulVector[I * 2] + MulVector[I * 2 + 1];
+ Result[I] = ConstantInt::get(Ty, IAdd);
+ }
+
+ return ConstantVector::get(Result);
+ }
default:
break;
}
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/WebAssembly/dot.ll b/llvm/test/Transforms/InstSimplify/ConstProp/WebAssembly/dot.ll
new file mode 100644
index 0000000000000..b537b7bccf861
--- /dev/null
+++ b/llvm/test/Transforms/InstSimplify/ConstProp/WebAssembly/dot.ll
@@ -0,0 +1,56 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 5
+
+; RUN: opt -passes=instsimplify -S < %s | FileCheck %s
+
+; Test that calls to the wasm dot intrinsic are constant folded
+
+target triple = "wasm32-unknown-unknown"
+
+
+define <4 x i32> @dot_zero() {
+; CHECK-LABEL: define <4 x i32> @dot_zero() {
+; CHECK-NEXT: ret <4 x i32> zeroinitializer
+;
+ %res = tail call <4 x i32> @llvm.wasm.dot(<8 x i16> zeroinitializer, <8 x i16> zeroinitializer)
+ ret <4 x i32> %res
+}
+
+; a = 1 2 3 4 5 6 7 8
+; b = 1 2 3 4 5 6 7 8
+; k1|k2 = a * b = 1 4 9 16 25 36 49 64
+; k1 + k2 = (1+4) | (9 + 16) | (25 + 36) | (49 + 64)
+; result = 5 | 25 | 61 | 113
+define <4 x i32> @dot_nonzero() {
+; CHECK-LABEL: define <4 x i32> @dot_nonzero() {
+; CHECK-NEXT: ret <4 x i32> <i32 5, i32 25, i32 61, i32 113>
+;
+ %res = tail call <4 x i32> @llvm.wasm.dot(<8 x i16> <i16 1, i16 2, i16 3, i16 4, i16 5, i16 6, i16 7, i16 8>, <8 x i16> <i16 1, i16 2, i16 3, i16 4, i16 5, i16 6, i16 7, i16 8>)
+ ret <4 x i32> %res
+}
+
+define <4 x i32> @dot_doubly_negative() {
+; CHECK-LABEL: define <4 x i32> @dot_doubly_negative() {
+; CHECK-NEXT: ret <4 x i32> splat (i32 2)
+;
+ %res = tail call <4 x i32> @llvm.wasm.dot(<8 x i16> <i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1>, <8 x i16> <i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1>)
+ ret <4 x i32> %res
+}
+
+; Tests that i16 max signed values fit in i32
+define <4 x i32> @dot_follow_modulo_spec_1() {
+; CHECK-LABEL: define <4 x i32> @dot_follow_modulo_spec_1() {
+; CHECK-NEXT: ret <4 x i32> <i32 2147352578, i32 0, i32 0, i32 0>
+;
+ %res = tail call <4 x i32> @llvm.wasm.dot(<8 x i16> <i16 32767, i16 32767, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0>, <8 x i16> <i16 32767, i16 32767, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0>)
+ ret <4 x i32> %res
+}
+
+; Tests that i16 min signed values fit in i32
+define <4 x i32> @dot_follow_modulo_spec_2() {
+; CHECK-LABEL: define <4 x i32> @dot_follow_modulo_spec_2() {
+; CHECK-NEXT: ret <4 x i32> <i32 -2147483648, i32 0, i32 0, i32 0>
+;
+ %res = tail call <4 x i32> @llvm.wasm.dot(<8 x i16> <i16 -32768, i16 -32768, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0>, <8 x i16> <i16 -32768, i16 -32768, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0>)
+ ret <4 x i32> %res
+}
+
>From 3bc1b15235c86f08afec9d7a43e7ae431ee18926 Mon Sep 17 00:00:00 2001
From: Joseph Huber <huberjn at outlook.com>
Date: Tue, 5 Aug 2025 17:38:11 -0500
Subject: [PATCH 008/171] [OpenMP] Fix weak linkage on malloc declaration
Summary:
Declaring this weak forces the external reference to be weak. Whether the
symbol is weak or not should be decided by its definition, which is pulled
in from `libc`. Declaring it weak here causes the definition to not be
extracted properly.
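
As background (an editorial note, not part of the patch): a weak declaration
turns every reference to the symbol into a weak reference, which a static
linker will not satisfy by extracting an archive member, so the definition can
silently go missing. A minimal C++ sketch using a hypothetical symbol name:

  #include <cstddef>

  // Weak declaration: if no definition gets linked in, the symbol resolves
  // to null instead of forcing extraction from a static archive.
  extern "C" [[gnu::weak]] void *custom_alloc(size_t Size); // hypothetical

  void *tryAlloc(size_t N) {
    // Weak references must be null-checked; a plain (strong) declaration
    // would instead make the linker pull the defining object, e.g. from libc.
    return custom_alloc ? custom_alloc(N) : nullptr;
  }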
---
offload/DeviceRTL/include/Allocator.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/offload/DeviceRTL/include/Allocator.h b/offload/DeviceRTL/include/Allocator.h
index 79c69a2a96b4e..dc4d029ed75f3 100644
--- a/offload/DeviceRTL/include/Allocator.h
+++ b/offload/DeviceRTL/include/Allocator.h
@@ -38,8 +38,8 @@ void free(void *Ptr);
} // namespace ompx
extern "C" {
-[[gnu::weak]] void *malloc(size_t Size);
-[[gnu::weak]] void free(void *Ptr);
+void *malloc(size_t Size);
+void free(void *Ptr);
}
#endif
>From 64eba6ef9610a4a82e1610ecd806b8488144bad0 Mon Sep 17 00:00:00 2001
From: Igor Kudrin <ikudrin at accesssoftek.com>
Date: Tue, 5 Aug 2025 15:42:08 -0700
Subject: [PATCH 009/171] [lldb] Fix auto advance PC in
`EmulateInstructionARM64` if PC >= 4G (#151460)
The `EmulateInstructionARM64::EvaluateInstruction()` method used
`uint32_t` to store the PC value, truncating addresses at or above 2^32.
For now, the issue does not affect mainline because the method is never
called with both the `eEmulateInstructionOptionAutoAdvancePC` and
`eEmulateInstructionOptionIgnoreConditions` options set. However, it can
trigger in a downstream that uses software stepping.
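
An editorial illustration of the truncation being fixed, reusing the address
from the new unit test:

  #include <cstdint>
  #include <cstdio>

  int main() {
    uint64_t PC = 0x123456789abcde00; // PC above 4 GiB
    uint32_t Truncated = PC;          // old code: high 32 bits silently lost
    std::printf("%#llx -> %#x\n", (unsigned long long)PC, Truncated);
    // prints: 0x123456789abcde00 -> 0x9abcde00
  }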
---
.../ARM64/EmulateInstructionARM64.cpp | 4 +-
.../Instruction/ARM64/TestAArch64Emulator.cpp | 120 +++++++++++++++++-
2 files changed, 121 insertions(+), 3 deletions(-)
diff --git a/lldb/source/Plugins/Instruction/ARM64/EmulateInstructionARM64.cpp b/lldb/source/Plugins/Instruction/ARM64/EmulateInstructionARM64.cpp
index 29f03fee47b0d..a8901beda3970 100644
--- a/lldb/source/Plugins/Instruction/ARM64/EmulateInstructionARM64.cpp
+++ b/lldb/source/Plugins/Instruction/ARM64/EmulateInstructionARM64.cpp
@@ -404,7 +404,7 @@ bool EmulateInstructionARM64::EvaluateInstruction(uint32_t evaluate_options) {
if (!success && !m_ignore_conditions)
return false;
- uint32_t orig_pc_value = 0;
+ uint64_t orig_pc_value = 0;
if (auto_advance_pc) {
orig_pc_value =
ReadRegisterUnsigned(eRegisterKindLLDB, gpr_pc_arm64, 0, &success);
@@ -418,7 +418,7 @@ bool EmulateInstructionARM64::EvaluateInstruction(uint32_t evaluate_options) {
return false;
if (auto_advance_pc) {
- uint32_t new_pc_value =
+ uint64_t new_pc_value =
ReadRegisterUnsigned(eRegisterKindLLDB, gpr_pc_arm64, 0, &success);
if (!success)
return false;
diff --git a/lldb/unittests/Instruction/ARM64/TestAArch64Emulator.cpp b/lldb/unittests/Instruction/ARM64/TestAArch64Emulator.cpp
index 4506c200dee3b..fdd456600da18 100644
--- a/lldb/unittests/Instruction/ARM64/TestAArch64Emulator.cpp
+++ b/lldb/unittests/Instruction/ARM64/TestAArch64Emulator.cpp
@@ -13,15 +13,118 @@
#include "lldb/Core/Disassembler.h"
#include "lldb/Target/ExecutionContext.h"
#include "lldb/Utility/ArchSpec.h"
+#include "lldb/Utility/RegisterValue.h"
#include "Plugins/Instruction/ARM64/EmulateInstructionARM64.h"
+#include "Plugins/Process/Utility/RegisterInfoPOSIX_arm64.h"
+#include "Plugins/Process/Utility/lldb-arm64-register-enums.h"
using namespace lldb;
using namespace lldb_private;
struct Arch64EmulatorTester : public EmulateInstructionARM64 {
+ RegisterInfoPOSIX_arm64::GPR gpr;
+ uint8_t memory[64] = {0};
+ uint64_t memory_offset = 0;
+
Arch64EmulatorTester()
- : EmulateInstructionARM64(ArchSpec("arm64-apple-ios")) {}
+ : EmulateInstructionARM64(ArchSpec("arm64-apple-ios")) {
+ memset(&gpr, 0, sizeof(gpr));
+ EmulateInstruction::SetCallbacks(ReadMemoryCallback, WriteMemoryCallback,
+ ReadRegisterCallback,
+ WriteRegisterCallback);
+ }
+
+ static bool ReadRegisterCallback(EmulateInstruction *instruction, void *baton,
+ const RegisterInfo *reg_info,
+ RegisterValue ®_value) {
+ auto *tester = static_cast<Arch64EmulatorTester *>(instruction);
+ uint32_t reg = reg_info->kinds[eRegisterKindLLDB];
+ if (reg >= gpr_x0_arm64 && reg <= gpr_x28_arm64) {
+ reg_value.SetUInt64(tester->gpr.x[reg - gpr_x0_arm64]);
+ return true;
+ }
+ if (reg >= gpr_w0_arm64 && reg <= gpr_w28_arm64) {
+ reg_value.SetUInt32(tester->gpr.x[reg - gpr_w0_arm64]);
+ return true;
+ }
+ switch (reg) {
+ case gpr_fp_arm64:
+ reg_value.SetUInt64(tester->gpr.fp);
+ return true;
+ case gpr_lr_arm64:
+ reg_value.SetUInt64(tester->gpr.lr);
+ return true;
+ case gpr_sp_arm64:
+ reg_value.SetUInt64(tester->gpr.sp);
+ return true;
+ case gpr_pc_arm64:
+ reg_value.SetUInt64(tester->gpr.pc);
+ return true;
+ case gpr_cpsr_arm64:
+ reg_value.SetUInt64(tester->gpr.cpsr);
+ return true;
+ default:
+ return false;
+ }
+ }
+
+ static bool WriteRegisterCallback(EmulateInstruction *instruction,
+ void *baton, const Context &context,
+ const RegisterInfo *reg_info,
+ const RegisterValue ®_value) {
+ auto *tester = static_cast<Arch64EmulatorTester *>(instruction);
+ uint32_t reg = reg_info->kinds[eRegisterKindLLDB];
+ if (reg >= gpr_x0_arm64 && reg <= gpr_x28_arm64) {
+ tester->gpr.x[reg - gpr_x0_arm64] = reg_value.GetAsUInt64();
+ return true;
+ }
+ if (reg >= gpr_w0_arm64 && reg <= gpr_w28_arm64) {
+ tester->gpr.x[reg - gpr_w0_arm64] = reg_value.GetAsUInt32();
+ return true;
+ }
+ switch (reg) {
+ case gpr_fp_arm64:
+ tester->gpr.fp = reg_value.GetAsUInt64();
+ return true;
+ case gpr_lr_arm64:
+ tester->gpr.lr = reg_value.GetAsUInt64();
+ return true;
+ case gpr_sp_arm64:
+ tester->gpr.sp = reg_value.GetAsUInt64();
+ return true;
+ case gpr_pc_arm64:
+ tester->gpr.pc = reg_value.GetAsUInt64();
+ return true;
+ case gpr_cpsr_arm64:
+ tester->gpr.cpsr = reg_value.GetAsUInt64();
+ return true;
+ default:
+ return false;
+ }
+ }
+
+ static size_t ReadMemoryCallback(EmulateInstruction *instruction, void *baton,
+ const Context &context, addr_t addr,
+ void *dst, size_t length) {
+ auto *tester = static_cast<Arch64EmulatorTester *>(instruction);
+ assert(addr >= tester->memory_offset);
+ assert(addr - tester->memory_offset + length <= sizeof(tester->memory));
+ if (addr >= tester->memory_offset &&
+ addr - tester->memory_offset + length <= sizeof(tester->memory)) {
+ memcpy(dst, tester->memory + addr - tester->memory_offset, length);
+ return length;
+ }
+ return 0;
+ };
+
+ static size_t WriteMemoryCallback(EmulateInstruction *instruction,
+ void *baton, const Context &context,
+ addr_t addr, const void *dst,
+ size_t length) {
+ llvm_unreachable("implement when required");
+ return 0;
+ };
static uint64_t AddWithCarry(uint32_t N, uint64_t x, uint64_t y, bool carry_in,
EmulateInstructionARM64::ProcState &proc_state) {
@@ -60,3 +163,18 @@ TEST_F(TestAArch64Emulator, TestOverflow) {
ASSERT_EQ(pstate.V, 1ULL);
ASSERT_EQ(pstate.C, 0ULL);
}
+
+TEST_F(TestAArch64Emulator, TestAutoAdvancePC) {
+ Arch64EmulatorTester emu;
+ emu.memory_offset = 0x123456789abcde00;
+ emu.gpr.pc = 0x123456789abcde00;
+ emu.gpr.x[8] = 0x123456789abcde20;
+ memcpy(emu.memory, "\x08\x01\x40\xb9", 4); // ldr w8, [x8]
+ memcpy(emu.memory + 0x20, "\x11\x22\x33\x44", 4); // 0x44332211
+ ASSERT_TRUE(emu.ReadInstruction());
+ ASSERT_TRUE(
+ emu.EvaluateInstruction(eEmulateInstructionOptionAutoAdvancePC |
+ eEmulateInstructionOptionIgnoreConditions));
+ ASSERT_EQ(emu.gpr.pc, 0x123456789abcde04);
+ ASSERT_EQ(emu.gpr.x[8], 0x44332211);
+}
>From 73685583c859deae7b30bb01692670fd7356c7db Mon Sep 17 00:00:00 2001
From: Craig Topper <craig.topper at sifive.com>
Date: Tue, 5 Aug 2025 16:12:42 -0700
Subject: [PATCH 010/171] [VP][RISCV] Add a vp.load.ff intrinsic for fault only
first load. (#128593)
There's been some interest in supporting early-exit loops recently.
https://discourse.llvm.org/t/rfc-supporting-more-early-exit-loops/84690
This patch was extracted from our downstream where we've been using it
in our vectorizer.
---
llvm/docs/LangRef.rst | 86 ++
llvm/include/llvm/CodeGen/SelectionDAG.h | 3 +
llvm/include/llvm/CodeGen/SelectionDAGNodes.h | 17 +
llvm/include/llvm/IR/Intrinsics.td | 6 +
llvm/include/llvm/IR/VPIntrinsics.def | 6 +
llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h | 2 +
.../SelectionDAG/LegalizeVectorTypes.cpp | 68 ++
.../lib/CodeGen/SelectionDAG/SelectionDAG.cpp | 36 +
.../SelectionDAG/SelectionDAGBuilder.cpp | 31 +
.../SelectionDAG/SelectionDAGBuilder.h | 2 +
llvm/lib/IR/IntrinsicInst.cpp | 5 +
llvm/lib/Target/RISCV/RISCVISelLowering.cpp | 52 +
llvm/lib/Target/RISCV/RISCVISelLowering.h | 1 +
.../RISCV/rvv/fixed-vectors-vploadff.ll | 586 ++++++++++
llvm/test/CodeGen/RISCV/rvv/vploadff.ll | 1008 +++++++++++++++++
llvm/unittests/IR/VPIntrinsicTest.cpp | 3 +
16 files changed, 1912 insertions(+)
create mode 100644 llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vploadff.ll
create mode 100644 llvm/test/CodeGen/RISCV/rvv/vploadff.ll
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index d8cd3b894cda4..3a3a74f323a26 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24243,6 +24243,92 @@ Examples:
%also.r = call <8 x i8> @llvm.masked.load.v8i8.p0(ptr %ptr, i32 2, <8 x i1> %mask, <8 x i8> poison)
+.. _int_vp_load_ff:
+
+'``llvm.vp.load.ff``' Intrinsic
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Syntax:
+"""""""
+This is an overloaded intrinsic.
+
+::
+
+ declare {<4 x float>, i32} @llvm.vp.load.ff.v4f32.p0(ptr %ptr, <4 x i1> %mask, i32 %evl)
+ declare {<vscale x 2 x i16>, i32} @llvm.vp.load.ff.nxv2i16.p0(ptr %ptr, <vscale x 2 x i1> %mask, i32 %evl)
+ declare {<8 x float>, i32} @llvm.vp.load.ff.v8f32.p1(ptr addrspace(1) %ptr, <8 x i1> %mask, i32 %evl)
+ declare {<vscale x 1 x i64>, i32} @llvm.vp.load.ff.nxv1i64.p6(ptr addrspace(6) %ptr, <vscale x 1 x i1> %mask, i32 %evl)
+
+Overview:
+"""""""""
+
+The '``llvm.vp.load.ff.*``' intrinsic is similar to
+'``llvm.vp.load.*``', but will not trap if there are fewer than ``evl``
+readable lanes at the pointer. '``ff``' stands for fault-first or
+fault-only-first.
+
+Arguments:
+""""""""""
+
+The first argument is the base pointer for the load. The second argument is a
+vector of boolean values with the same number of elements as the first return
+type. The third is the explicit vector length of the operation. The first
+return type and underlying type of the base pointer are the same vector types.
+
+The :ref:`align <attr_align>` parameter attribute can be provided for the first
+argument.
+
+Semantics:
+""""""""""
+
+The '``llvm.vp.load.ff``' intrinsic is designed for reading vector lanes in a single
+IR operation where the number of lanes that can be read is not known and can
+only be determined by looking at the data. This is useful for vectorizing
+strcmp- or strlen-like loops where the data contains a null terminator. Some
+targets have a fault-only-first load instruction that this intrinsic can be
+lowered to. Other targets may support this intrinsic differently, for example by
+lowering to a single scalar load guarded by ``evl!=0`` and ``mask[0]==1`` and
+indicating only 1 lane could be read.
+
+Like '``llvm.vp.load``', this intrinsic reads memory based on a ``mask`` and an
+``evl``. If ``evl`` is non-zero and the first lane is masked-on, then the
+first lane of the vector needs to be inbounds of an allocation. The remaining
+masked-on lanes with index less than ``evl`` do not need to be inbounds of
+the same allocation or of any allocation.
+
+The second return value from the intrinsic indicates the index of the first
+lane that could not be read for some reason, or ``evl`` if all lanes could
+be read. Lanes at this index or higher in the first return value are
+:ref:`poison value <poisonvalues>`. If ``evl`` is non-zero, the result in the
+second return value must be at least 1, even if the first lane is masked-off.
+
+The second result is usually less than ``evl`` when an exception would occur
+for reading that lane, but it can be reduced for any reason. This facilitates
+emulating this intrinsic when the hardware only supports narrower vector
+types natively or when the hardware does not support fault-only-first loads.
+
+Masked-on lanes that are not inbounds of the allocation that contains the first
+lane are :ref:`poison value <poisonvalues>`. There should be a marker in the
+allocation that indicates where valid data stops, such as a null terminator. The
+terminator should be checked for after calling this intrinsic to prevent using
+any lanes past the terminator. Even if the second return value is less than
+``evl``, the terminator value may not have been read.
+
+This intrinsic will typically be called in a loop until a terminator is
+found. The second result should be used to indicate how many elements are
+valid to search for the null terminator. If the terminator is not found, the
+pointer should be advanced by the number of elements in the second result and
+the intrinsic called again.
+
+The default alignment is taken as the ABI alignment of the first return
+type as specified by the :ref:`datalayout string<langref_datalayout>`.
+
+Examples:
+"""""""""
+
+.. code-block:: text
+
+ %r = call {<8 x i8>, i32} @llvm.vp.load.ff.v8i8.p0(ptr align 2 %ptr, <8 x i1> %mask, i32 %evl)
+
.. _int_vp_store:
'``llvm.vp.store``' Intrinsic
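
For intuition (an editorial model, not code from this patch), here is a
minimal C++ sketch of the strlen-style loop the LangRef semantics above
describe: only lanes below the returned count are scanned for the terminator,
and if it is not found the pointer is advanced by that count and the load is
retried. `load_ff_model` is a hypothetical stand-in for the intrinsic.

  #include <cstddef>
  #include <cstring>

  // Stand-in for llvm.vp.load.ff: reads up to `evl` bytes from `ptr` into
  // `out` and returns how many lanes were actually readable. A real lowering
  // may report fewer than `evl` for any reason, but at least 1 when `evl`
  // is non-zero.
  static size_t load_ff_model(const char *ptr, char *out, size_t evl) {
    std::memcpy(out, ptr, evl); // sketch: assume all `evl` lanes are readable
    return evl;
  }

  size_t strlen_ff(const char *s) {
    constexpr size_t VL = 8;
    char lanes[VL];
    size_t len = 0;
    for (;;) {
      size_t valid = load_ff_model(s + len, lanes, VL);
      // Only lanes below `valid` may be inspected; the rest are poison.
      for (size_t i = 0; i < valid; ++i)
        if (lanes[i] == '\0')
          return len + i;
      len += valid; // terminator not found; advance and call again
    }
  }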
diff --git a/llvm/include/llvm/CodeGen/SelectionDAG.h b/llvm/include/llvm/CodeGen/SelectionDAG.h
index e5644a5ef206a..6f2ad33094dce 100644
--- a/llvm/include/llvm/CodeGen/SelectionDAG.h
+++ b/llvm/include/llvm/CodeGen/SelectionDAG.h
@@ -1668,6 +1668,9 @@ class SelectionDAG {
ArrayRef<SDValue> Ops,
MachineMemOperand *MMO,
ISD::MemIndexType IndexType);
+ LLVM_ABI SDValue getLoadFFVP(EVT VT, const SDLoc &DL, SDValue Chain,
+ SDValue Ptr, SDValue Mask, SDValue EVL,
+ MachineMemOperand *MMO);
LLVM_ABI SDValue getGetFPEnv(SDValue Chain, const SDLoc &dl, SDValue Ptr,
EVT MemVT, MachineMemOperand *MMO);
diff --git a/llvm/include/llvm/CodeGen/SelectionDAGNodes.h b/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
index 11ae8cd5eb77a..65528b3050fe5 100644
--- a/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
+++ b/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
@@ -3099,6 +3099,23 @@ class MaskedHistogramSDNode : public MaskedGatherScatterSDNode {
}
};
+class VPLoadFFSDNode : public MemSDNode {
+public:
+ friend class SelectionDAG;
+
+ VPLoadFFSDNode(unsigned Order, const DebugLoc &DL, SDVTList VTs, EVT MemVT,
+ MachineMemOperand *MMO)
+ : MemSDNode(ISD::VP_LOAD_FF, Order, DL, VTs, MemVT, MMO) {}
+
+ const SDValue &getBasePtr() const { return getOperand(1); }
+ const SDValue &getMask() const { return getOperand(2); }
+ const SDValue &getVectorLength() const { return getOperand(3); }
+
+ static bool classof(const SDNode *N) {
+ return N->getOpcode() == ISD::VP_LOAD_FF;
+ }
+};
+
class FPStateAccessSDNode : public MemSDNode {
public:
friend class SelectionDAG;
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index bd6f94ac1286c..1a23276bc6ffb 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -1932,6 +1932,12 @@ def int_vp_load : DefaultAttrsIntrinsic<[ llvm_anyvector_ty],
llvm_i32_ty],
[ NoCapture<ArgIndex<0>>, IntrReadMem, IntrArgMemOnly ]>;
+def int_vp_load_ff : DefaultAttrsIntrinsic<[ llvm_anyvector_ty, llvm_i32_ty ],
+ [ llvm_anyptr_ty,
+ LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>,
+ llvm_i32_ty],
+ [ NoCapture<ArgIndex<0>>, IntrNoSync, IntrReadMem, IntrWillReturn, IntrArgMemOnly ]>;
+
def int_vp_gather: DefaultAttrsIntrinsic<[ llvm_anyvector_ty],
[ LLVMVectorOfAnyPointersToElt<0>,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>,
diff --git a/llvm/include/llvm/IR/VPIntrinsics.def b/llvm/include/llvm/IR/VPIntrinsics.def
index 55f4719da7c8b..4a71097226f18 100644
--- a/llvm/include/llvm/IR/VPIntrinsics.def
+++ b/llvm/include/llvm/IR/VPIntrinsics.def
@@ -587,6 +587,12 @@ VP_PROPERTY_FUNCTIONAL_OPC(Load)
VP_PROPERTY_FUNCTIONAL_INTRINSIC(masked_load)
END_REGISTER_VP(vp_load, VP_LOAD)
+BEGIN_REGISTER_VP_INTRINSIC(vp_load_ff, 1, 2)
+// val,chain = VP_LOAD_FF chain,base,mask,evl
+BEGIN_REGISTER_VP_SDNODE(VP_LOAD_FF, -1, vp_load_ff, 2, 3)
+HELPER_MAP_VPID_TO_VPSD(vp_load_ff, VP_LOAD_FF)
+VP_PROPERTY_NO_FUNCTIONAL
+END_REGISTER_VP(vp_load_ff, VP_LOAD_FF)
// llvm.experimental.vp.strided.load(ptr,stride,mask,vlen)
BEGIN_REGISTER_VP_INTRINSIC(experimental_vp_strided_load, 2, 3)
// chain = EXPERIMENTAL_VP_STRIDED_LOAD chain,base,offset,stride,mask,evl
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index 2e13b1854bf29..63544e63e1da1 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -971,6 +971,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
void SplitVecRes_INSERT_VECTOR_ELT(SDNode *N, SDValue &Lo, SDValue &Hi);
void SplitVecRes_LOAD(LoadSDNode *LD, SDValue &Lo, SDValue &Hi);
void SplitVecRes_VP_LOAD(VPLoadSDNode *LD, SDValue &Lo, SDValue &Hi);
+ void SplitVecRes_VP_LOAD_FF(VPLoadFFSDNode *LD, SDValue &Lo, SDValue &Hi);
void SplitVecRes_VP_STRIDED_LOAD(VPStridedLoadSDNode *SLD, SDValue &Lo,
SDValue &Hi);
void SplitVecRes_MLOAD(MaskedLoadSDNode *MLD, SDValue &Lo, SDValue &Hi);
@@ -1075,6 +1076,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue WidenVecRes_INSERT_VECTOR_ELT(SDNode* N);
SDValue WidenVecRes_LOAD(SDNode* N);
SDValue WidenVecRes_VP_LOAD(VPLoadSDNode *N);
+ SDValue WidenVecRes_VP_LOAD_FF(VPLoadFFSDNode *N);
SDValue WidenVecRes_VP_STRIDED_LOAD(VPStridedLoadSDNode *N);
SDValue WidenVecRes_VECTOR_COMPRESS(SDNode *N);
SDValue WidenVecRes_MLOAD(MaskedLoadSDNode* N);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index 1661814d5a897..bc2dbfb4cbaae 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -1152,6 +1152,9 @@ void DAGTypeLegalizer::SplitVectorResult(SDNode *N, unsigned ResNo) {
case ISD::VP_LOAD:
SplitVecRes_VP_LOAD(cast<VPLoadSDNode>(N), Lo, Hi);
break;
+ case ISD::VP_LOAD_FF:
+ SplitVecRes_VP_LOAD_FF(cast<VPLoadFFSDNode>(N), Lo, Hi);
+ break;
case ISD::EXPERIMENTAL_VP_STRIDED_LOAD:
SplitVecRes_VP_STRIDED_LOAD(cast<VPStridedLoadSDNode>(N), Lo, Hi);
break;
@@ -2227,6 +2230,45 @@ void DAGTypeLegalizer::SplitVecRes_VP_LOAD(VPLoadSDNode *LD, SDValue &Lo,
ReplaceValueWith(SDValue(LD, 1), Ch);
}
+void DAGTypeLegalizer::SplitVecRes_VP_LOAD_FF(VPLoadFFSDNode *LD, SDValue &Lo,
+ SDValue &Hi) {
+ SDLoc dl(LD);
+ auto [LoVT, HiVT] = DAG.GetSplitDestVTs(LD->getValueType(0));
+
+ SDValue Ch = LD->getChain();
+ SDValue Ptr = LD->getBasePtr();
+ Align Alignment = LD->getBaseAlign();
+ SDValue Mask = LD->getMask();
+ SDValue EVL = LD->getVectorLength();
+
+ // Split Mask operand
+ SDValue MaskLo, MaskHi;
+ if (Mask.getOpcode() == ISD::SETCC) {
+ SplitVecRes_SETCC(Mask.getNode(), MaskLo, MaskHi);
+ } else {
+ if (getTypeAction(Mask.getValueType()) == TargetLowering::TypeSplitVector)
+ GetSplitVector(Mask, MaskLo, MaskHi);
+ else
+ std::tie(MaskLo, MaskHi) = DAG.SplitVector(Mask, dl);
+ }
+
+ // Split EVL operand
+ auto [EVLLo, EVLHi] = DAG.SplitEVL(EVL, LD->getValueType(0), dl);
+
+ MachineMemOperand *MMO = DAG.getMachineFunction().getMachineMemOperand(
+ LD->getPointerInfo(), MachineMemOperand::MOLoad,
+ LocationSize::beforeOrAfterPointer(), Alignment, LD->getAAInfo(),
+ LD->getRanges());
+
+ Lo = DAG.getLoadFFVP(LoVT, dl, Ch, Ptr, MaskLo, EVLLo, MMO);
+
+ // Fill the upper half with poison.
+ Hi = DAG.getUNDEF(HiVT);
+
+ ReplaceValueWith(SDValue(LD, 1), Lo.getValue(1));
+ ReplaceValueWith(SDValue(LD, 2), Lo.getValue(2));
+}
+
void DAGTypeLegalizer::SplitVecRes_VP_STRIDED_LOAD(VPStridedLoadSDNode *SLD,
SDValue &Lo, SDValue &Hi) {
assert(SLD->isUnindexed() &&
@@ -4707,6 +4749,9 @@ void DAGTypeLegalizer::WidenVectorResult(SDNode *N, unsigned ResNo) {
case ISD::VP_LOAD:
Res = WidenVecRes_VP_LOAD(cast<VPLoadSDNode>(N));
break;
+ case ISD::VP_LOAD_FF:
+ Res = WidenVecRes_VP_LOAD_FF(cast<VPLoadFFSDNode>(N));
+ break;
case ISD::EXPERIMENTAL_VP_STRIDED_LOAD:
Res = WidenVecRes_VP_STRIDED_LOAD(cast<VPStridedLoadSDNode>(N));
break;
@@ -6163,6 +6208,29 @@ SDValue DAGTypeLegalizer::WidenVecRes_VP_LOAD(VPLoadSDNode *N) {
return Res;
}
+SDValue DAGTypeLegalizer::WidenVecRes_VP_LOAD_FF(VPLoadFFSDNode *N) {
+ EVT WidenVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
+ SDValue Mask = N->getMask();
+ SDValue EVL = N->getVectorLength();
+ SDLoc dl(N);
+
+ // The mask should be widened as well
+ assert(getTypeAction(Mask.getValueType()) ==
+ TargetLowering::TypeWidenVector &&
+ "Unable to widen binary VP op");
+ Mask = GetWidenedVector(Mask);
+ assert(Mask.getValueType().getVectorElementCount() ==
+ TLI.getTypeToTransformTo(*DAG.getContext(), Mask.getValueType())
+ .getVectorElementCount() &&
+ "Unable to widen vector load");
+
+ SDValue Res = DAG.getLoadFFVP(WidenVT, dl, N->getChain(), N->getBasePtr(),
+ Mask, EVL, N->getMemOperand());
+ ReplaceValueWith(SDValue(N, 1), Res.getValue(1));
+ ReplaceValueWith(SDValue(N, 2), Res.getValue(2));
+ return Res;
+}
+
SDValue DAGTypeLegalizer::WidenVecRes_VP_STRIDED_LOAD(VPStridedLoadSDNode *N) {
SDLoc DL(N);
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
index 61f114411cd0d..71a175dfd7b24 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
@@ -837,6 +837,14 @@ static void AddNodeIDCustom(FoldingSetNodeID &ID, const SDNode *N) {
ID.AddInteger(ELD->getMemOperand()->getFlags());
break;
}
+ case ISD::VP_LOAD_FF: {
+ const auto *LD = cast<VPLoadFFSDNode>(N);
+ ID.AddInteger(LD->getMemoryVT().getRawBits());
+ ID.AddInteger(LD->getRawSubclassData());
+ ID.AddInteger(LD->getPointerInfo().getAddrSpace());
+ ID.AddInteger(LD->getMemOperand()->getFlags());
+ break;
+ }
case ISD::VP_STORE: {
const VPStoreSDNode *EST = cast<VPStoreSDNode>(N);
ID.AddInteger(EST->getMemoryVT().getRawBits());
@@ -10433,6 +10441,34 @@ SDValue SelectionDAG::getMaskedHistogram(SDVTList VTs, EVT MemVT,
return V;
}
+SDValue SelectionDAG::getLoadFFVP(EVT VT, const SDLoc &DL, SDValue Chain,
+ SDValue Ptr, SDValue Mask, SDValue EVL,
+ MachineMemOperand *MMO) {
+ SDVTList VTs = getVTList(VT, EVL.getValueType(), MVT::Other);
+ SDValue Ops[] = {Chain, Ptr, Mask, EVL};
+ FoldingSetNodeID ID;
+ AddNodeIDNode(ID, ISD::VP_LOAD_FF, VTs, Ops);
+ ID.AddInteger(VT.getRawBits());
+ ID.AddInteger(getSyntheticNodeSubclassData<VPLoadFFSDNode>(DL.getIROrder(),
+ VTs, VT, MMO));
+ ID.AddInteger(MMO->getPointerInfo().getAddrSpace());
+ ID.AddInteger(MMO->getFlags());
+ void *IP = nullptr;
+ if (SDNode *E = FindNodeOrInsertPos(ID, DL, IP)) {
+ cast<VPLoadFFSDNode>(E)->refineAlignment(MMO);
+ return SDValue(E, 0);
+ }
+ auto *N = newSDNode<VPLoadFFSDNode>(DL.getIROrder(), DL.getDebugLoc(), VTs,
+ VT, MMO);
+ createOperands(N, Ops);
+
+ CSEMap.InsertNode(N, IP);
+ InsertNode(N);
+ SDValue V(N, 0);
+ NewSDValueDbgMsg(V, "Creating new node: ", this);
+ return V;
+}
+
SDValue SelectionDAG::getGetFPEnv(SDValue Chain, const SDLoc &dl, SDValue Ptr,
EVT MemVT, MachineMemOperand *MMO) {
assert(Chain.getValueType() == MVT::Other && "Invalid chain type");
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index ac0440fef5f60..d0815e9f51822 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8442,6 +8442,34 @@ void SelectionDAGBuilder::visitVPLoad(
setValue(&VPIntrin, LD);
}
+void SelectionDAGBuilder::visitVPLoadFF(
+ const VPIntrinsic &VPIntrin, EVT VT, EVT EVLVT,
+ const SmallVectorImpl<SDValue> &OpValues) {
+ assert(OpValues.size() == 3 && "Unexpected number of operands");
+ SDLoc DL = getCurSDLoc();
+ Value *PtrOperand = VPIntrin.getArgOperand(0);
+ MaybeAlign Alignment = VPIntrin.getPointerAlignment();
+ AAMDNodes AAInfo = VPIntrin.getAAMetadata();
+ const MDNode *Ranges = VPIntrin.getMetadata(LLVMContext::MD_range);
+ SDValue LD;
+ // Do not serialize variable-length loads of constant memory with
+ // anything.
+ if (!Alignment)
+ Alignment = DAG.getEVTAlign(VT);
+ MemoryLocation ML = MemoryLocation::getAfter(PtrOperand, AAInfo);
+ bool AddToChain = !BatchAA || !BatchAA->pointsToConstantMemory(ML);
+ SDValue InChain = AddToChain ? DAG.getRoot() : DAG.getEntryNode();
+ MachineMemOperand *MMO = DAG.getMachineFunction().getMachineMemOperand(
+ MachinePointerInfo(PtrOperand), MachineMemOperand::MOLoad,
+ LocationSize::beforeOrAfterPointer(), *Alignment, AAInfo, Ranges);
+ LD = DAG.getLoadFFVP(VT, DL, InChain, OpValues[0], OpValues[1], OpValues[2],
+ MMO);
+ SDValue Trunc = DAG.getNode(ISD::TRUNCATE, DL, EVLVT, LD.getValue(1));
+ if (AddToChain)
+ PendingLoads.push_back(LD.getValue(2));
+ setValue(&VPIntrin, DAG.getMergeValues({LD.getValue(0), Trunc}, DL));
+}
+
void SelectionDAGBuilder::visitVPGather(
const VPIntrinsic &VPIntrin, EVT VT,
const SmallVectorImpl<SDValue> &OpValues) {
@@ -8675,6 +8703,9 @@ void SelectionDAGBuilder::visitVectorPredicationIntrinsic(
case ISD::VP_LOAD:
visitVPLoad(VPIntrin, ValueVTs[0], OpValues);
break;
+ case ISD::VP_LOAD_FF:
+ visitVPLoadFF(VPIntrin, ValueVTs[0], ValueVTs[1], OpValues);
+ break;
case ISD::VP_GATHER:
visitVPGather(VPIntrin, ValueVTs[0], OpValues);
break;
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
index 1c278076a219d..c251755ee7064 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
@@ -631,6 +631,8 @@ class SelectionDAGBuilder {
void visitVectorExtractLastActive(const CallInst &I, unsigned Intrinsic);
void visitVPLoad(const VPIntrinsic &VPIntrin, EVT VT,
const SmallVectorImpl<SDValue> &OpValues);
+ void visitVPLoadFF(const VPIntrinsic &VPIntrin, EVT VT, EVT EVLVT,
+ const SmallVectorImpl<SDValue> &OpValues);
void visitVPStore(const VPIntrinsic &VPIntrin,
const SmallVectorImpl<SDValue> &OpValues);
void visitVPGather(const VPIntrinsic &VPIntrin, EVT VT,
diff --git a/llvm/lib/IR/IntrinsicInst.cpp b/llvm/lib/IR/IntrinsicInst.cpp
index b1d3339c5a414..23a4d1b5c615e 100644
--- a/llvm/lib/IR/IntrinsicInst.cpp
+++ b/llvm/lib/IR/IntrinsicInst.cpp
@@ -448,6 +448,7 @@ VPIntrinsic::getMemoryPointerParamPos(Intrinsic::ID VPID) {
case Intrinsic::experimental_vp_strided_store:
return 1;
case Intrinsic::vp_load:
+ case Intrinsic::vp_load_ff:
case Intrinsic::vp_gather:
case Intrinsic::experimental_vp_strided_load:
return 0;
@@ -671,6 +672,10 @@ Function *VPIntrinsic::getOrInsertDeclarationForParams(
VPFunc = Intrinsic::getOrInsertDeclaration(
M, VPID, {ReturnType, Params[0]->getType()});
break;
+ case Intrinsic::vp_load_ff:
+ VPFunc = Intrinsic::getOrInsertDeclaration(
+ M, VPID, {ReturnType->getStructElementType(0), Params[0]->getType()});
+ break;
case Intrinsic::experimental_vp_strided_load:
VPFunc = Intrinsic::getOrInsertDeclaration(
M, VPID, {ReturnType, Params[0]->getType(), Params[1]->getType()});
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index e09e6fb5b26ee..0077ecf59dd6d 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -927,6 +927,7 @@ RISCVTargetLowering::RISCVTargetLowering(const TargetMachine &TM,
{ISD::VP_LOAD, ISD::VP_STORE, ISD::EXPERIMENTAL_VP_STRIDED_LOAD,
ISD::EXPERIMENTAL_VP_STRIDED_STORE, ISD::VP_GATHER, ISD::VP_SCATTER},
VT, Custom);
+ setOperationAction(ISD::VP_LOAD_FF, VT, Custom);
setOperationAction({ISD::CONCAT_VECTORS, ISD::INSERT_SUBVECTOR,
ISD::EXTRACT_SUBVECTOR, ISD::SCALAR_TO_VECTOR},
@@ -1105,6 +1106,7 @@ RISCVTargetLowering::RISCVTargetLowering(const TargetMachine &TM,
{ISD::VP_LOAD, ISD::VP_STORE, ISD::EXPERIMENTAL_VP_STRIDED_LOAD,
ISD::EXPERIMENTAL_VP_STRIDED_STORE, ISD::VP_GATHER, ISD::VP_SCATTER},
VT, Custom);
+ setOperationAction(ISD::VP_LOAD_FF, VT, Custom);
setOperationAction(ISD::SELECT, VT, Custom);
setOperationAction(ISD::SELECT_CC, VT, Expand);
@@ -1181,6 +1183,7 @@ RISCVTargetLowering::RISCVTargetLowering(const TargetMachine &TM,
ISD::EXPERIMENTAL_VP_STRIDED_STORE, ISD::VP_GATHER,
ISD::VP_SCATTER},
VT, Custom);
+ setOperationAction(ISD::VP_LOAD_FF, VT, Custom);
setOperationAction(ISD::FNEG, VT, Expand);
setOperationAction(ISD::FABS, VT, Expand);
@@ -1352,6 +1355,7 @@ RISCVTargetLowering::RISCVTargetLowering(const TargetMachine &TM,
ISD::EXPERIMENTAL_VP_STRIDED_STORE, ISD::VP_GATHER,
ISD::VP_SCATTER},
VT, Custom);
+ setOperationAction(ISD::VP_LOAD_FF, VT, Custom);
setOperationAction({ISD::ADD, ISD::MUL, ISD::SUB, ISD::AND, ISD::OR,
ISD::XOR, ISD::SDIV, ISD::SREM, ISD::UDIV,
@@ -1442,6 +1446,7 @@ RISCVTargetLowering::RISCVTargetLowering(const TargetMachine &TM,
ISD::VP_SCATTER, ISD::EXPERIMENTAL_VP_STRIDED_LOAD,
ISD::EXPERIMENTAL_VP_STRIDED_STORE},
VT, Custom);
+ setOperationAction(ISD::VP_LOAD_FF, VT, Custom);
setOperationAction({ISD::FP_ROUND, ISD::FP_EXTEND}, VT, Custom);
setOperationAction({ISD::STRICT_FP_ROUND, ISD::STRICT_FP_EXTEND}, VT,
@@ -8117,6 +8122,8 @@ SDValue RISCVTargetLowering::LowerOperation(SDValue Op,
case ISD::MLOAD:
case ISD::VP_LOAD:
return lowerMaskedLoad(Op, DAG);
+ case ISD::VP_LOAD_FF:
+ return lowerLoadFF(Op, DAG);
case ISD::MSTORE:
case ISD::VP_STORE:
return lowerMaskedStore(Op, DAG);
@@ -12725,6 +12732,51 @@ SDValue RISCVTargetLowering::lowerMaskedLoad(SDValue Op,
return DAG.getMergeValues({Result, Chain}, DL);
}
+SDValue RISCVTargetLowering::lowerLoadFF(SDValue Op, SelectionDAG &DAG) const {
+ SDLoc DL(Op);
+ MVT VT = Op->getSimpleValueType(0);
+
+ const auto *VPLoadFF = cast<VPLoadFFSDNode>(Op);
+ EVT MemVT = VPLoadFF->getMemoryVT();
+ MachineMemOperand *MMO = VPLoadFF->getMemOperand();
+ SDValue Chain = VPLoadFF->getChain();
+ SDValue BasePtr = VPLoadFF->getBasePtr();
+
+ SDValue Mask = VPLoadFF->getMask();
+ SDValue VL = VPLoadFF->getVectorLength();
+
+ MVT XLenVT = Subtarget.getXLenVT();
+
+ MVT ContainerVT = VT;
+ if (VT.isFixedLengthVector()) {
+ ContainerVT = getContainerForFixedLengthVector(VT);
+ MVT MaskVT = getMaskTypeFor(ContainerVT);
+ Mask = convertToScalableVector(MaskVT, Mask, DAG, Subtarget);
+ }
+
+ unsigned IntID = Intrinsic::riscv_vleff_mask;
+ SDValue Ops[] = {
+ Chain,
+ DAG.getTargetConstant(IntID, DL, XLenVT),
+ DAG.getUNDEF(ContainerVT),
+ BasePtr,
+ Mask,
+ VL,
+ DAG.getTargetConstant(RISCVVType::TAIL_AGNOSTIC, DL, XLenVT)};
+
+ SDVTList VTs = DAG.getVTList({ContainerVT, Op->getValueType(1), MVT::Other});
+
+ SDValue Result =
+ DAG.getMemIntrinsicNode(ISD::INTRINSIC_W_CHAIN, DL, VTs, Ops, MemVT, MMO);
+ SDValue OutVL = Result.getValue(1);
+ Chain = Result.getValue(2);
+
+ if (VT.isFixedLengthVector())
+ Result = convertFromScalableVector(VT, Result, DAG, Subtarget);
+
+ return DAG.getMergeValues({Result, OutVL, Chain}, DL);
+}
+
SDValue RISCVTargetLowering::lowerMaskedStore(SDValue Op,
SelectionDAG &DAG) const {
SDLoc DL(Op);
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.h b/llvm/lib/Target/RISCV/RISCVISelLowering.h
index fa50e2105a708..433b8be5c562e 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.h
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.h
@@ -526,6 +526,7 @@ class RISCVTargetLowering : public TargetLowering {
SDValue lowerVECTOR_SPLICE(SDValue Op, SelectionDAG &DAG) const;
SDValue lowerABS(SDValue Op, SelectionDAG &DAG) const;
SDValue lowerMaskedLoad(SDValue Op, SelectionDAG &DAG) const;
+ SDValue lowerLoadFF(SDValue Op, SelectionDAG &DAG) const;
SDValue lowerMaskedStore(SDValue Op, SelectionDAG &DAG) const;
SDValue lowerVectorCompress(SDValue Op, SelectionDAG &DAG) const;
SDValue lowerFixedLengthVectorFCOPYSIGNToRVV(SDValue Op,
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vploadff.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vploadff.ll
new file mode 100644
index 0000000000000..5b01976dbbebd
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vploadff.ll
@@ -0,0 +1,586 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=riscv32 -mattr=+d,+zvfh,+zvfbfmin,+v \
+; RUN: -verify-machineinstrs < %s | FileCheck %s
+; RUN: llc -mtriple=riscv64 -mattr=+d,+zvfh,+zvfbfmin,+v \
+; RUN: -verify-machineinstrs < %s | FileCheck %s
+; RUN: llc -mtriple=riscv32 -mattr=+d,+zvfhmin,+zvfbfmin,+v \
+; RUN: -verify-machineinstrs < %s | FileCheck %s
+; RUN: llc -mtriple=riscv64 -mattr=+d,+zvfhmin,+zvfbfmin,+v \
+; RUN: -verify-machineinstrs < %s | FileCheck %s
+
+define { <2 x i8>, i32 } @vploadff_v2i8(ptr %ptr, <2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x i8>, i32 } @llvm.vp.load.ff.v2i8.p0(ptr %ptr, <2 x i1> %m, i32 %evl)
+ ret { <2 x i8>, i32 } %load
+}
+
+define { <2 x i8>, i32 } @vploadff_v2i8_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2i8_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x i8>, i32 } @llvm.vp.load.ff.v2i8.p0(ptr %ptr, <2 x i1> splat (i1 true), i32 %evl)
+ ret { <2 x i8>, i32 } %load
+}
+
+define { <4 x i8>, i32 } @vploadff_v4i8(ptr %ptr, <4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x i8>, i32 } @llvm.vp.load.ff.v4i8.p0(ptr %ptr, <4 x i1> %m, i32 %evl)
+ ret { <4 x i8>, i32 } %load
+}
+
+define { <4 x i8>, i32 } @vploadff_v4i8_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4i8_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x i8>, i32 } @llvm.vp.load.ff.v4i8.p0(ptr %ptr, <4 x i1> splat (i1 true), i32 %evl)
+ ret { <4 x i8>, i32 } %load
+}
+
+define { <8 x i8>, i32 } @vploadff_v8i8(ptr %ptr, <8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x i8>, i32 } @llvm.vp.load.ff.v8i8.p0(ptr %ptr, <8 x i1> %m, i32 %evl)
+ ret { <8 x i8>, i32 } %load
+}
+
+define { <8 x i8>, i32 } @vploadff_v8i8_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8i8_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x i8>, i32 } @llvm.vp.load.ff.v8i8.p0(ptr %ptr, <8 x i1> splat (i1 true), i32 %evl)
+ ret { <8 x i8>, i32 } %load
+}
+
+define { <2 x i16>, i32 } @vploadff_v2i16(ptr %ptr, <2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x i16>, i32 } @llvm.vp.load.ff.v2i16.p0(ptr %ptr, <2 x i1> %m, i32 %evl)
+ ret { <2 x i16>, i32 } %load
+}
+
+define { <2 x i16>, i32 } @vploadff_v2i16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2i16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x i16>, i32 } @llvm.vp.load.ff.v2i16.p0(ptr %ptr, <2 x i1> splat (i1 true), i32 %evl)
+ ret { <2 x i16>, i32 } %load
+}
+
+define { <4 x i16>, i32 } @vploadff_v4i16(ptr %ptr, <4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x i16>, i32 } @llvm.vp.load.ff.v4i16.p0(ptr %ptr, <4 x i1> %m, i32 %evl)
+ ret { <4 x i16>, i32 } %load
+}
+
+define { <4 x i16>, i32 } @vploadff_v4i16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4i16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x i16>, i32 } @llvm.vp.load.ff.v4i16.p0(ptr %ptr, <4 x i1> splat (i1 true), i32 %evl)
+ ret { <4 x i16>, i32 } %load
+}
+
+define { <8 x i16>, i32 } @vploadff_v8i16(ptr %ptr, <8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x i16>, i32 } @llvm.vp.load.ff.v8i16.p0(ptr %ptr, <8 x i1> %m, i32 %evl)
+ ret { <8 x i16>, i32 } %load
+}
+
+define { <8 x i16>, i32 } @vploadff_v8i16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8i16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x i16>, i32 } @llvm.vp.load.ff.v8i16.p0(ptr %ptr, <8 x i1> splat (i1 true), i32 %evl)
+ ret { <8 x i16>, i32 } %load
+}
+
+define { <2 x i32>, i32 } @vploadff_v2i32(ptr %ptr, <2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2i32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x i32>, i32 } @llvm.vp.load.ff.v2i32.p0(ptr %ptr, <2 x i1> %m, i32 %evl)
+ ret { <2 x i32>, i32 } %load
+}
+
+define { <2 x i32>, i32 } @vploadff_v2i32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2i32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x i32>, i32 } @llvm.vp.load.ff.v2i32.p0(ptr %ptr, <2 x i1> splat (i1 true), i32 %evl)
+ ret { <2 x i32>, i32 } %load
+}
+
+define { <4 x i32>, i32 } @vploadff_v4i32(ptr %ptr, <4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4i32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x i32>, i32 } @llvm.vp.load.ff.v4i32.p0(ptr %ptr, <4 x i1> %m, i32 %evl)
+ ret { <4 x i32>, i32 } %load
+}
+
+define { <4 x i32>, i32 } @vploadff_v4i32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4i32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x i32>, i32 } @llvm.vp.load.ff.v4i32.p0(ptr %ptr, <4 x i1> splat (i1 true), i32 %evl)
+ ret { <4 x i32>, i32 } %load
+}
+
+define { <8 x i32>, i32 } @vploadff_v8i32(ptr %ptr, <8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8i32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x i32>, i32 } @llvm.vp.load.ff.v8i32.p0(ptr %ptr, <8 x i1> %m, i32 %evl)
+ ret { <8 x i32>, i32 } %load
+}
+
+define { <8 x i32>, i32 } @vploadff_v8i32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8i32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x i32>, i32 } @llvm.vp.load.ff.v8i32.p0(ptr %ptr, <8 x i1> splat (i1 true), i32 %evl)
+ ret { <8 x i32>, i32 } %load
+}
+
+define { <2 x i64>, i32 } @vploadff_v2i64(ptr %ptr, <2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2i64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x i64>, i32 } @llvm.vp.load.ff.v2i64.p0(ptr %ptr, <2 x i1> %m, i32 %evl)
+ ret { <2 x i64>, i32 } %load
+}
+
+define { <2 x i64>, i32 } @vploadff_v2i64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2i64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x i64>, i32 } @llvm.vp.load.ff.v2i64.p0(ptr %ptr, <2 x i1> splat (i1 true), i32 %evl)
+ ret { <2 x i64>, i32 } %load
+}
+
+define { <4 x i64>, i32 } @vploadff_v4i64(ptr %ptr, <4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4i64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x i64>, i32 } @llvm.vp.load.ff.v4i64.p0(ptr %ptr, <4 x i1> %m, i32 %evl)
+ ret { <4 x i64>, i32 } %load
+}
+
+define { <4 x i64>, i32 } @vploadff_v4i64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4i64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x i64>, i32 } @llvm.vp.load.ff.v4i64.p0(ptr %ptr, <4 x i1> splat (i1 true), i32 %evl)
+ ret { <4 x i64>, i32 } %load
+}
+
+define { <8 x i64>, i32 } @vploadff_v8i64(ptr %ptr, <8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8i64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x i64>, i32 } @llvm.vp.load.ff.v8i64.p0(ptr %ptr, <8 x i1> %m, i32 %evl)
+ ret { <8 x i64>, i32 } %load
+}
+
+define { <8 x i64>, i32 } @vploadff_v8i64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8i64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x i64>, i32 } @llvm.vp.load.ff.v8i64.p0(ptr %ptr, <8 x i1> splat (i1 true), i32 %evl)
+ ret { <8 x i64>, i32 } %load
+}
+
+define { <32 x i64>, i32 } @vploadff_v32i64(ptr %ptr, <32 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v32i64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: li a3, 16
+; CHECK-NEXT: bltu a2, a3, .LBB24_2
+; CHECK-NEXT: # %bb.1:
+; CHECK-NEXT: li a2, 16
+; CHECK-NEXT: .LBB24_2:
+; CHECK-NEXT: vsetvli zero, a2, e64, m8, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a1), v0.t
+; CHECK-NEXT: csrr a1, vl
+; CHECK-NEXT: sw a1, 256(a0)
+; CHECK-NEXT: vsetivli zero, 16, e64, m8, ta, ma
+; CHECK-NEXT: vse64.v v8, (a0)
+; CHECK-NEXT: ret
+ %load = call { <32 x i64>, i32 } @llvm.vp.load.ff.v32i64.p0(ptr %ptr, <32 x i1> %m, i32 %evl)
+ ret { <32 x i64>, i32 } %load
+}
+
+define { <32 x i64>, i32 } @vploadff_v32i64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v32i64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: li a3, 16
+; CHECK-NEXT: bltu a2, a3, .LBB25_2
+; CHECK-NEXT: # %bb.1:
+; CHECK-NEXT: li a2, 16
+; CHECK-NEXT: .LBB25_2:
+; CHECK-NEXT: vsetvli zero, a2, e64, m8, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a1)
+; CHECK-NEXT: csrr a1, vl
+; CHECK-NEXT: sw a1, 256(a0)
+; CHECK-NEXT: vsetivli zero, 16, e64, m8, ta, ma
+; CHECK-NEXT: vse64.v v8, (a0)
+; CHECK-NEXT: ret
+ %load = call { <32 x i64>, i32 } @llvm.vp.load.ff.v32i64.p0(ptr %ptr, <32 x i1> splat (i1 true), i32 %evl)
+ ret { <32 x i64>, i32 } %load
+}
+
+define { <2 x half>, i32 } @vploadff_v2f16(ptr %ptr, <2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2f16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x half>, i32 } @llvm.vp.load.ff.v2f16.p0(ptr %ptr, <2 x i1> %m, i32 %evl)
+ ret { <2 x half>, i32 } %load
+}
+
+define { <2 x half>, i32 } @vploadff_v2f16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2f16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x half>, i32 } @llvm.vp.load.ff.v2f16.p0(ptr %ptr, <2 x i1> splat (i1 true), i32 %evl)
+ ret { <2 x half>, i32 } %load
+}
+
+define { <4 x half>, i32 } @vploadff_v4f16(ptr %ptr, <4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4f16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x half>, i32 } @llvm.vp.load.ff.v4f16.p0(ptr %ptr, <4 x i1> %m, i32 %evl)
+ ret { <4 x half>, i32 } %load
+}
+
+define { <4 x half>, i32 } @vploadff_v4f16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4f16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x half>, i32 } @llvm.vp.load.ff.v4f16.p0(ptr %ptr, <4 x i1> splat (i1 true), i32 %evl)
+ ret { <4 x half>, i32 } %load
+}
+
+define { <8 x half>, i32 } @vploadff_v8f16(ptr %ptr, <8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8f16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x half>, i32 } @llvm.vp.load.ff.v8f16.p0(ptr %ptr, <8 x i1> %m, i32 %evl)
+ ret { <8 x half>, i32 } %load
+}
+
+define { <8 x half>, i32 } @vploadff_v8f16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8f16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x half>, i32 } @llvm.vp.load.ff.v8f16.p0(ptr %ptr, <8 x i1> splat (i1 true), i32 %evl)
+ ret { <8 x half>, i32 } %load
+}
+
+define { <2 x float>, i32 } @vploadff_v2f32(ptr %ptr, <2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2f32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x float>, i32 } @llvm.vp.load.ff.v2f32.p0(ptr %ptr, <2 x i1> %m, i32 %evl)
+ ret { <2 x float>, i32 } %load
+}
+
+define { <2 x float>, i32 } @vploadff_v2f32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2f32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x float>, i32 } @llvm.vp.load.ff.v2f32.p0(ptr %ptr, <2 x i1> splat (i1 true), i32 %evl)
+ ret { <2 x float>, i32 } %load
+}
+
+define { <4 x float>, i32 } @vploadff_v4f32(ptr %ptr, <4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4f32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x float>, i32 } @llvm.vp.load.ff.v4f32.p0(ptr %ptr, <4 x i1> %m, i32 %evl)
+ ret { <4 x float>, i32 } %load
+}
+
+define { <4 x float>, i32 } @vploadff_v4f32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4f32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x float>, i32 } @llvm.vp.load.ff.v4f32.p0(ptr %ptr, <4 x i1> splat (i1 true), i32 %evl)
+ ret { <4 x float>, i32 } %load
+}
+
+define { <8 x float>, i32 } @vploadff_v8f32(ptr %ptr, <8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8f32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x float>, i32 } @llvm.vp.load.ff.v8f32.p0(ptr %ptr, <8 x i1> %m, i32 %evl)
+ ret { <8 x float>, i32 } %load
+}
+
+define { <8 x float>, i32 } @vploadff_v8f32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8f32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x float>, i32 } @llvm.vp.load.ff.v8f32.p0(ptr %ptr, <8 x i1> splat (i1 true), i32 %evl)
+ ret { <8 x float>, i32 } %load
+}
+
+define { <2 x double>, i32 } @vploadff_v2f64(ptr %ptr, <2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2f64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x double>, i32 } @llvm.vp.load.ff.v2f64.p0(ptr %ptr, <2 x i1> %m, i32 %evl)
+ ret { <2 x double>, i32 } %load
+}
+
+define { <2 x double>, i32 } @vploadff_v2f64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2f64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x double>, i32 } @llvm.vp.load.ff.v2f64.p0(ptr %ptr, <2 x i1> splat (i1 true), i32 %evl)
+ ret { <2 x double>, i32 } %load
+}
+
+define { <4 x double>, i32 } @vploadff_v4f64(ptr %ptr, <4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4f64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x double>, i32 } @llvm.vp.load.ff.v4f64.p0(ptr %ptr, <4 x i1> %m, i32 %evl)
+ ret { <4 x double>, i32 } %load
+}
+
+define { <4 x double>, i32 } @vploadff_v4f64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4f64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x double>, i32 } @llvm.vp.load.ff.v4f64.p0(ptr %ptr, <4 x i1> splat (i1 true), i32 %evl)
+ ret { <4 x double>, i32 } %load
+}
+
+define { <8 x double>, i32 } @vploadff_v8f64(ptr %ptr, <8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8f64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x double>, i32 } @llvm.vp.load.ff.v8f64.p0(ptr %ptr, <8 x i1> %m, i32 %evl)
+ ret { <8 x double>, i32 } %load
+}
+
+define { <8 x double>, i32 } @vploadff_v8f64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8f64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x double>, i32 } @llvm.vp.load.ff.v8f64.p0(ptr %ptr, <8 x i1> splat (i1 true), i32 %evl)
+ ret { <8 x double>, i32 } %load
+}
+
+define { <2 x bfloat>, i32 } @vploadff_v2bf16(ptr %ptr, <2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2bf16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x bfloat>, i32 } @llvm.vp.load.ff.v2bf16.p0(ptr %ptr, <2 x i1> %m, i32 %evl)
+ ret { <2 x bfloat>, i32 } %load
+}
+
+define { <2 x bfloat>, i32 } @vploadff_v2bf16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v2bf16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <2 x bfloat>, i32 } @llvm.vp.load.ff.v2bf16.p0(ptr %ptr, <2 x i1> splat (i1 true), i32 %evl)
+ ret { <2 x bfloat>, i32 } %load
+}
+
+define { <4 x bfloat>, i32 } @vploadff_v4bf16(ptr %ptr, <4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4bf16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x bfloat>, i32 } @llvm.vp.load.ff.v4bf16.p0(ptr %ptr, <4 x i1> %m, i32 %evl)
+ ret { <4 x bfloat>, i32 } %load
+}
+
+define { <4 x bfloat>, i32 } @vploadff_v4bf16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v4bf16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <4 x bfloat>, i32 } @llvm.vp.load.ff.v4bf16.p0(ptr %ptr, <4 x i1> splat (i1 true), i32 %evl)
+ ret { <4 x bfloat>, i32 } %load
+}
+
+define { <8 x bfloat>, i32 } @vploadff_v8bf16(ptr %ptr, <8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8bf16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x bfloat>, i32 } @llvm.vp.load.ff.v8bf16.p0(ptr %ptr, <8 x i1> %m, i32 %evl)
+ ret { <8 x bfloat>, i32 } %load
+}
+
+define { <8 x bfloat>, i32 } @vploadff_v8bf16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v8bf16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <8 x bfloat>, i32 } @llvm.vp.load.ff.v8bf16.p0(ptr %ptr, <8 x i1> splat (i1 true), i32 %evl)
+ ret { <8 x bfloat>, i32 } %load
+}
+
+define { <7 x i8>, i32 } @vploadff_v7i8(ptr %ptr, <7 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_v7i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <7 x i8>, i32 } @llvm.vp.load.ff.v7i8.p0(ptr %ptr, <7 x i1> %m, i32 %evl)
+ ret { <7 x i8>, i32 } %load
+}
diff --git a/llvm/test/CodeGen/RISCV/rvv/vploadff.ll b/llvm/test/CodeGen/RISCV/rvv/vploadff.ll
new file mode 100644
index 0000000000000..9e08938a9fe6c
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/rvv/vploadff.ll
@@ -0,0 +1,1008 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=riscv32 -mattr=+d,+zvfh,+zvfbfmin,+v \
+; RUN: -verify-machineinstrs < %s | FileCheck %s
+; RUN: llc -mtriple=riscv64 -mattr=+d,+zvfh,+zvfbfmin,+v \
+; RUN: -verify-machineinstrs < %s | FileCheck %s
+; RUN: llc -mtriple=riscv32 -mattr=+d,+zvfhmin,+zvfbfmin,+v \
+; RUN: -verify-machineinstrs < %s | FileCheck %s
+; RUN: llc -mtriple=riscv64 -mattr=+d,+zvfhmin,+zvfbfmin,+v \
+; RUN: -verify-machineinstrs < %s | FileCheck %s
+
+define { <vscale x 1 x i8>, i32 } @vploadff_nxv1i8(ptr %ptr, <vscale x 1 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x i8>, i32 } @llvm.vp.load.ff.nxv1i8.p0(ptr %ptr, <vscale x 1 x i1> %m, i32 %evl)
+ ret { <vscale x 1 x i8>, i32 } %load
+}
+
+define { <vscale x 1 x i8>, i32 } @vploadff_nxv1i8_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1i8_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x i8>, i32 } @llvm.vp.load.ff.nxv1i8.p0(ptr %ptr, <vscale x 1 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 1 x i8>, i32 } %load
+}
+
+define { <vscale x 2 x i8>, i32 } @vploadff_nxv2i8(ptr %ptr, <vscale x 2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x i8>, i32 } @llvm.vp.load.ff.nxv2i8.p0(ptr %ptr, <vscale x 2 x i1> %m, i32 %evl)
+ ret { <vscale x 2 x i8>, i32 } %load
+}
+
+define { <vscale x 2 x i8>, i32 } @vploadff_nxv2i8_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2i8_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x i8>, i32 } @llvm.vp.load.ff.nxv2i8.p0(ptr %ptr, <vscale x 2 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 2 x i8>, i32 } %load
+}
+
+define { <vscale x 4 x i8>, i32 } @vploadff_nxv4i8(ptr %ptr, <vscale x 4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x i8>, i32 } @llvm.vp.load.ff.nxv4i8.p0(ptr %ptr, <vscale x 4 x i1> %m, i32 %evl)
+ ret { <vscale x 4 x i8>, i32 } %load
+}
+
+define { <vscale x 4 x i8>, i32 } @vploadff_nxv4i8_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4i8_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x i8>, i32 } @llvm.vp.load.ff.nxv4i8.p0(ptr %ptr, <vscale x 4 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 4 x i8>, i32 } %load
+}
+
+define { <vscale x 8 x i8>, i32 } @vploadff_nxv8i8(ptr %ptr, <vscale x 8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x i8>, i32 } @llvm.vp.load.ff.nxv8i8.p0(ptr %ptr, <vscale x 8 x i1> %m, i32 %evl)
+ ret { <vscale x 8 x i8>, i32 } %load
+}
+
+define { <vscale x 8 x i8>, i32 } @vploadff_nxv8i8_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8i8_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x i8>, i32 } @llvm.vp.load.ff.nxv8i8.p0(ptr %ptr, <vscale x 8 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 8 x i8>, i32 } %load
+}
+
+define { <vscale x 16 x i8>, i32 } @vploadff_nxv16i8(ptr %ptr, <vscale x 16 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv16i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 16 x i8>, i32 } @llvm.vp.load.ff.nxv16i8.p0(ptr %ptr, <vscale x 16 x i1> %m, i32 %evl)
+ ret { <vscale x 16 x i8>, i32 } %load
+}
+
+define { <vscale x 16 x i8>, i32 } @vploadff_nxv16i8_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv16i8_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 16 x i8>, i32 } @llvm.vp.load.ff.nxv16i8.p0(ptr %ptr, <vscale x 16 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 16 x i8>, i32 } %load
+}
+
+define { <vscale x 32 x i8>, i32 } @vploadff_nxv32i8(ptr %ptr, <vscale x 32 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv32i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 32 x i8>, i32 } @llvm.vp.load.ff.nxv32i8.p0(ptr %ptr, <vscale x 32 x i1> %m, i32 %evl)
+ ret { <vscale x 32 x i8>, i32 } %load
+}
+
+define { <vscale x 32 x i8>, i32 } @vploadff_nxv32i8_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv32i8_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 32 x i8>, i32 } @llvm.vp.load.ff.nxv32i8.p0(ptr %ptr, <vscale x 32 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 32 x i8>, i32 } %load
+}
+
+define { <vscale x 64 x i8>, i32 } @vploadff_nxv64i8(ptr %ptr, <vscale x 64 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv64i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 64 x i8>, i32 } @llvm.vp.load.ff.nxv64i8.p0(ptr %ptr, <vscale x 64 x i1> %m, i32 %evl)
+ ret { <vscale x 64 x i8>, i32 } %load
+}
+
+define { <vscale x 64 x i8>, i32 } @vploadff_nxv64i8_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv64i8_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 64 x i8>, i32 } @llvm.vp.load.ff.nxv64i8.p0(ptr %ptr, <vscale x 64 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 64 x i8>, i32 } %load
+}
+
+define <vscale x 128 x i8> @vploadff_nxv128i8(ptr %ptr, ptr %evl_out, <vscale x 128 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv128i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: csrr a3, vlenb
+; CHECK-NEXT: slli a3, a3, 3
+; CHECK-NEXT: bltu a2, a3, .LBB14_2
+; CHECK-NEXT: # %bb.1:
+; CHECK-NEXT: mv a2, a3
+; CHECK-NEXT: .LBB14_2:
+; CHECK-NEXT: vsetvli zero, a2, e8, m8, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: sw a0, 0(a1)
+; CHECK-NEXT: ret
+ %load = call { <vscale x 128 x i8>, i32 } @llvm.vp.load.ff.nxv128i8.p0(ptr %ptr, <vscale x 128 x i1> %m, i32 %evl)
+ %result0 = extractvalue { <vscale x 128 x i8>, i32 } %load, 0
+ %result1 = extractvalue { <vscale x 128 x i8>, i32 } %load, 1
+ store i32 %result1, ptr %evl_out
+ ret <vscale x 128 x i8> %result0
+}
+
+define <vscale x 128 x i8> @vploadff_nxv128i8_allones_mask(ptr %ptr, ptr %evl_out, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv128i8_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: csrr a3, vlenb
+; CHECK-NEXT: slli a3, a3, 3
+; CHECK-NEXT: bltu a2, a3, .LBB15_2
+; CHECK-NEXT: # %bb.1:
+; CHECK-NEXT: mv a2, a3
+; CHECK-NEXT: .LBB15_2:
+; CHECK-NEXT: vsetvli zero, a2, e8, m8, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: sw a0, 0(a1)
+; CHECK-NEXT: ret
+ %load = call { <vscale x 128 x i8>, i32 } @llvm.vp.load.ff.nxv128i8.p0(ptr %ptr, <vscale x 128 x i1> splat (i1 true), i32 %evl)
+ %result0 = extractvalue { <vscale x 128 x i8>, i32 } %load, 0
+ %result1 = extractvalue { <vscale x 128 x i8>, i32 } %load, 1
+ store i32 %result1, ptr %evl_out
+ ret <vscale x 128 x i8> %result0
+}
+
+define { <vscale x 1 x i16>, i32 } @vploadff_nxv1i16(ptr %ptr, <vscale x 1 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x i16>, i32 } @llvm.vp.load.ff.nxv1i16.p0(ptr %ptr, <vscale x 1 x i1> %m, i32 %evl)
+ ret { <vscale x 1 x i16>, i32 } %load
+}
+
+define { <vscale x 1 x i16>, i32 } @vploadff_nxv1i16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1i16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x i16>, i32 } @llvm.vp.load.ff.nxv1i16.p0(ptr %ptr, <vscale x 1 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 1 x i16>, i32 } %load
+}
+
+define { <vscale x 2 x i16>, i32 } @vploadff_nxv2i16(ptr %ptr, <vscale x 2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x i16>, i32 } @llvm.vp.load.ff.nxv2i16.p0(ptr %ptr, <vscale x 2 x i1> %m, i32 %evl)
+ ret { <vscale x 2 x i16>, i32 } %load
+}
+
+define { <vscale x 2 x i16>, i32 } @vploadff_nxv2i16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2i16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x i16>, i32 } @llvm.vp.load.ff.nxv2i16.p0(ptr %ptr, <vscale x 2 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 2 x i16>, i32 } %load
+}
+
+define { <vscale x 4 x i16>, i32 } @vploadff_nxv4i16(ptr %ptr, <vscale x 4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x i16>, i32 } @llvm.vp.load.ff.nxv4i16.p0(ptr %ptr, <vscale x 4 x i1> %m, i32 %evl)
+ ret { <vscale x 4 x i16>, i32 } %load
+}
+
+define { <vscale x 4 x i16>, i32 } @vploadff_nxv4i16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4i16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x i16>, i32 } @llvm.vp.load.ff.nxv4i16.p0(ptr %ptr, <vscale x 4 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 4 x i16>, i32 } %load
+}
+
+define { <vscale x 8 x i16>, i32 } @vploadff_nxv8i16(ptr %ptr, <vscale x 8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x i16>, i32 } @llvm.vp.load.ff.nxv8i16.p0(ptr %ptr, <vscale x 8 x i1> %m, i32 %evl)
+ ret { <vscale x 8 x i16>, i32 } %load
+}
+
+define { <vscale x 8 x i16>, i32 } @vploadff_nxv8i16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8i16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x i16>, i32 } @llvm.vp.load.ff.nxv8i16.p0(ptr %ptr, <vscale x 8 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 8 x i16>, i32 } %load
+}
+
+define { <vscale x 16 x i16>, i32 } @vploadff_nxv16i16(ptr %ptr, <vscale x 16 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv16i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 16 x i16>, i32 } @llvm.vp.load.ff.nxv16i16.p0(ptr %ptr, <vscale x 16 x i1> %m, i32 %evl)
+ ret { <vscale x 16 x i16>, i32 } %load
+}
+
+define { <vscale x 16 x i16>, i32 } @vploadff_nxv16i16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv16i16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 16 x i16>, i32 } @llvm.vp.load.ff.nxv16i16.p0(ptr %ptr, <vscale x 16 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 16 x i16>, i32 } %load
+}
+
+define { <vscale x 32 x i16>, i32 } @vploadff_nxv32i16(ptr %ptr, <vscale x 32 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv32i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m8, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 32 x i16>, i32 } @llvm.vp.load.ff.nxv32i16.p0(ptr %ptr, <vscale x 32 x i1> %m, i32 %evl)
+ ret { <vscale x 32 x i16>, i32 } %load
+}
+
+define { <vscale x 32 x i16>, i32 } @vploadff_nxv32i16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv32i16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m8, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 32 x i16>, i32 } @llvm.vp.load.ff.nxv32i16.p0(ptr %ptr, <vscale x 32 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 32 x i16>, i32 } %load
+}
+
+define { <vscale x 1 x i32>, i32 } @vploadff_nxv1i32(ptr %ptr, <vscale x 1 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1i32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x i32>, i32 } @llvm.vp.load.ff.nxv1i32.p0(ptr %ptr, <vscale x 1 x i1> %m, i32 %evl)
+ ret { <vscale x 1 x i32>, i32 } %load
+}
+
+define { <vscale x 1 x i32>, i32 } @vploadff_nxv1i32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1i32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x i32>, i32 } @llvm.vp.load.ff.nxv1i32.p0(ptr %ptr, <vscale x 1 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 1 x i32>, i32 } %load
+}
+
+define { <vscale x 2 x i32>, i32 } @vploadff_nxv2i32(ptr %ptr, <vscale x 2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2i32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x i32>, i32 } @llvm.vp.load.ff.nxv2i32.p0(ptr %ptr, <vscale x 2 x i1> %m, i32 %evl)
+ ret { <vscale x 2 x i32>, i32 } %load
+}
+
+define { <vscale x 2 x i32>, i32 } @vploadff_nxv2i32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2i32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x i32>, i32 } @llvm.vp.load.ff.nxv2i32.p0(ptr %ptr, <vscale x 2 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 2 x i32>, i32 } %load
+}
+
+define { <vscale x 4 x i32>, i32 } @vploadff_nxv4i32(ptr %ptr, <vscale x 4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4i32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x i32>, i32 } @llvm.vp.load.ff.nxv4i32.p0(ptr %ptr, <vscale x 4 x i1> %m, i32 %evl)
+ ret { <vscale x 4 x i32>, i32 } %load
+}
+
+define { <vscale x 4 x i32>, i32 } @vploadff_nxv4i32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4i32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x i32>, i32 } @llvm.vp.load.ff.nxv4i32.p0(ptr %ptr, <vscale x 4 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 4 x i32>, i32 } %load
+}
+
+define { <vscale x 8 x i32>, i32 } @vploadff_nxv8i32(ptr %ptr, <vscale x 8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8i32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x i32>, i32 } @llvm.vp.load.ff.nxv8i32.p0(ptr %ptr, <vscale x 8 x i1> %m, i32 %evl)
+ ret { <vscale x 8 x i32>, i32 } %load
+}
+
+define { <vscale x 8 x i32>, i32 } @vploadff_nxv8i32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8i32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x i32>, i32 } @llvm.vp.load.ff.nxv8i32.p0(ptr %ptr, <vscale x 8 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 8 x i32>, i32 } %load
+}
+
+define { <vscale x 16 x i32>, i32 } @vploadff_nxv16i32(ptr %ptr, <vscale x 16 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv16i32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m8, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 16 x i32>, i32 } @llvm.vp.load.ff.nxv16i32.p0(ptr %ptr, <vscale x 16 x i1> %m, i32 %evl)
+ ret { <vscale x 16 x i32>, i32 } %load
+}
+
+define { <vscale x 16 x i32>, i32 } @vploadff_nxv16i32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv16i32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m8, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 16 x i32>, i32 } @llvm.vp.load.ff.nxv16i32.p0(ptr %ptr, <vscale x 16 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 16 x i32>, i32 } %load
+}
+
+define { <vscale x 1 x i64>, i32 } @vploadff_nxv1i64(ptr %ptr, <vscale x 1 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1i64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x i64>, i32 } @llvm.vp.load.ff.nxv1i64.p0(ptr %ptr, <vscale x 1 x i1> %m, i32 %evl)
+ ret { <vscale x 1 x i64>, i32 } %load
+}
+
+define { <vscale x 1 x i64>, i32 } @vploadff_nxv1i64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1i64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x i64>, i32 } @llvm.vp.load.ff.nxv1i64.p0(ptr %ptr, <vscale x 1 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 1 x i64>, i32 } %load
+}
+
+define { <vscale x 2 x i64>, i32 } @vploadff_nxv2i64(ptr %ptr, <vscale x 2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2i64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x i64>, i32 } @llvm.vp.load.ff.nxv2i64.p0(ptr %ptr, <vscale x 2 x i1> %m, i32 %evl)
+ ret { <vscale x 2 x i64>, i32 } %load
+}
+
+define { <vscale x 2 x i64>, i32 } @vploadff_nxv2i64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2i64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x i64>, i32 } @llvm.vp.load.ff.nxv2i64.p0(ptr %ptr, <vscale x 2 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 2 x i64>, i32 } %load
+}
+
+define { <vscale x 4 x i64>, i32 } @vploadff_nxv4i64(ptr %ptr, <vscale x 4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4i64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x i64>, i32 } @llvm.vp.load.ff.nxv4i64.p0(ptr %ptr, <vscale x 4 x i1> %m, i32 %evl)
+ ret { <vscale x 4 x i64>, i32 } %load
+}
+
+define { <vscale x 4 x i64>, i32 } @vploadff_nxv4i64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4i64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x i64>, i32 } @llvm.vp.load.ff.nxv4i64.p0(ptr %ptr, <vscale x 4 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 4 x i64>, i32 } %load
+}
+
+define { <vscale x 8 x i64>, i32 } @vploadff_nxv8i64(ptr %ptr, <vscale x 8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8i64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x i64>, i32 } @llvm.vp.load.ff.nxv8i64.p0(ptr %ptr, <vscale x 8 x i1> %m, i32 %evl)
+ ret { <vscale x 8 x i64>, i32 } %load
+}
+
+define { <vscale x 8 x i64>, i32 } @vploadff_nxv8i64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8i64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x i64>, i32 } @llvm.vp.load.ff.nxv8i64.p0(ptr %ptr, <vscale x 8 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 8 x i64>, i32 } %load
+}
+
+define { <vscale x 1 x half>, i32 } @vploadff_nxv1f16(ptr %ptr, <vscale x 1 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1f16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x half>, i32 } @llvm.vp.load.ff.nxv1f16.p0(ptr %ptr, <vscale x 1 x i1> %m, i32 %evl)
+ ret { <vscale x 1 x half>, i32 } %load
+}
+
+define { <vscale x 1 x half>, i32 } @vploadff_nxv1f16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1f16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x half>, i32 } @llvm.vp.load.ff.nxv1f16.p0(ptr %ptr, <vscale x 1 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 1 x half>, i32 } %load
+}
+
+define { <vscale x 2 x half>, i32 } @vploadff_nxv2f16(ptr %ptr, <vscale x 2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2f16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x half>, i32 } @llvm.vp.load.ff.nxv2f16.p0(ptr %ptr, <vscale x 2 x i1> %m, i32 %evl)
+ ret { <vscale x 2 x half>, i32 } %load
+}
+
+define { <vscale x 2 x half>, i32 } @vploadff_nxv2f16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2f16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x half>, i32 } @llvm.vp.load.ff.nxv2f16.p0(ptr %ptr, <vscale x 2 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 2 x half>, i32 } %load
+}
+
+define { <vscale x 4 x half>, i32 } @vploadff_nxv4f16(ptr %ptr, <vscale x 4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4f16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x half>, i32 } @llvm.vp.load.ff.nxv4f16.p0(ptr %ptr, <vscale x 4 x i1> %m, i32 %evl)
+ ret { <vscale x 4 x half>, i32 } %load
+}
+
+define { <vscale x 4 x half>, i32 } @vploadff_nxv4f16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4f16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x half>, i32 } @llvm.vp.load.ff.nxv4f16.p0(ptr %ptr, <vscale x 4 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 4 x half>, i32 } %load
+}
+
+define { <vscale x 8 x half>, i32 } @vploadff_nxv8f16(ptr %ptr, <vscale x 8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8f16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x half>, i32 } @llvm.vp.load.ff.nxv8f16.p0(ptr %ptr, <vscale x 8 x i1> %m, i32 %evl)
+ ret { <vscale x 8 x half>, i32 } %load
+}
+
+define { <vscale x 8 x half>, i32 } @vploadff_nxv8f16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8f16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x half>, i32 } @llvm.vp.load.ff.nxv8f16.p0(ptr %ptr, <vscale x 8 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 8 x half>, i32 } %load
+}
+
+define { <vscale x 16 x half>, i32 } @vploadff_nxv16f16(ptr %ptr, <vscale x 16 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv16f16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 16 x half>, i32 } @llvm.vp.load.ff.nxv16f16.p0(ptr %ptr, <vscale x 16 x i1> %m, i32 %evl)
+ ret { <vscale x 16 x half>, i32 } %load
+}
+
+define { <vscale x 16 x half>, i32 } @vploadff_nxv16f16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv16f16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 16 x half>, i32 } @llvm.vp.load.ff.nxv16f16.p0(ptr %ptr, <vscale x 16 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 16 x half>, i32 } %load
+}
+
+define { <vscale x 32 x half>, i32 } @vploadff_nxv32f16(ptr %ptr, <vscale x 32 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv32f16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m8, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 32 x half>, i32 } @llvm.vp.load.ff.nxv32f16.p0(ptr %ptr, <vscale x 32 x i1> %m, i32 %evl)
+ ret { <vscale x 32 x half>, i32 } %load
+}
+
+define { <vscale x 32 x half>, i32 } @vploadff_nxv32f16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv32f16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m8, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 32 x half>, i32 } @llvm.vp.load.ff.nxv32f16.p0(ptr %ptr, <vscale x 32 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 32 x half>, i32 } %load
+}
+
+define { <vscale x 1 x float>, i32 } @vploadff_nxv1f32(ptr %ptr, <vscale x 1 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1f32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x float>, i32 } @llvm.vp.load.ff.nxv1f32.p0(ptr %ptr, <vscale x 1 x i1> %m, i32 %evl)
+ ret { <vscale x 1 x float>, i32 } %load
+}
+
+define { <vscale x 1 x float>, i32 } @vploadff_nxv1f32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1f32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x float>, i32 } @llvm.vp.load.ff.nxv1f32.p0(ptr %ptr, <vscale x 1 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 1 x float>, i32 } %load
+}
+
+define { <vscale x 2 x float>, i32 } @vploadff_nxv2f32(ptr %ptr, <vscale x 2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2f32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x float>, i32 } @llvm.vp.load.ff.nxv2f32.p0(ptr %ptr, <vscale x 2 x i1> %m, i32 %evl)
+ ret { <vscale x 2 x float>, i32 } %load
+}
+
+define { <vscale x 2 x float>, i32 } @vploadff_nxv2f32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2f32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x float>, i32 } @llvm.vp.load.ff.nxv2f32.p0(ptr %ptr, <vscale x 2 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 2 x float>, i32 } %load
+}
+
+define { <vscale x 4 x float>, i32 } @vploadff_nxv4f32(ptr %ptr, <vscale x 4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4f32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x float>, i32 } @llvm.vp.load.ff.nxv4f32.p0(ptr %ptr, <vscale x 4 x i1> %m, i32 %evl)
+ ret { <vscale x 4 x float>, i32 } %load
+}
+
+define { <vscale x 4 x float>, i32 } @vploadff_nxv4f32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4f32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x float>, i32 } @llvm.vp.load.ff.nxv4f32.p0(ptr %ptr, <vscale x 4 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 4 x float>, i32 } %load
+}
+
+define { <vscale x 8 x float>, i32 } @vploadff_nxv8f32(ptr %ptr, <vscale x 8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8f32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x float>, i32 } @llvm.vp.load.ff.nxv8f32.p0(ptr %ptr, <vscale x 8 x i1> %m, i32 %evl)
+ ret { <vscale x 8 x float>, i32 } %load
+}
+
+define { <vscale x 8 x float>, i32 } @vploadff_nxv8f32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8f32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x float>, i32 } @llvm.vp.load.ff.nxv8f32.p0(ptr %ptr, <vscale x 8 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 8 x float>, i32 } %load
+}
+
+define { <vscale x 16 x float>, i32 } @vploadff_nxv16f32(ptr %ptr, <vscale x 16 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv16f32:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m8, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 16 x float>, i32 } @llvm.vp.load.ff.nxv16f32.p0(ptr %ptr, <vscale x 16 x i1> %m, i32 %evl)
+ ret { <vscale x 16 x float>, i32 } %load
+}
+
+define { <vscale x 16 x float>, i32 } @vploadff_nxv16f32_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv16f32_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e32, m8, ta, ma
+; CHECK-NEXT: vle32ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 16 x float>, i32 } @llvm.vp.load.ff.nxv16f32.p0(ptr %ptr, <vscale x 16 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 16 x float>, i32 } %load
+}
+
+define { <vscale x 1 x double>, i32 } @vploadff_nxv1f64(ptr %ptr, <vscale x 1 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1f64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x double>, i32 } @llvm.vp.load.ff.nxv1f64.p0(ptr %ptr, <vscale x 1 x i1> %m, i32 %evl)
+ ret { <vscale x 1 x double>, i32 } %load
+}
+
+define { <vscale x 1 x double>, i32 } @vploadff_nxv1f64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1f64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x double>, i32 } @llvm.vp.load.ff.nxv1f64.p0(ptr %ptr, <vscale x 1 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 1 x double>, i32 } %load
+}
+
+define { <vscale x 2 x double>, i32 } @vploadff_nxv2f64(ptr %ptr, <vscale x 2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2f64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x double>, i32 } @llvm.vp.load.ff.nxv2f64.p0(ptr %ptr, <vscale x 2 x i1> %m, i32 %evl)
+ ret { <vscale x 2 x double>, i32 } %load
+}
+
+define { <vscale x 2 x double>, i32 } @vploadff_nxv2f64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2f64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x double>, i32 } @llvm.vp.load.ff.nxv2f64.p0(ptr %ptr, <vscale x 2 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 2 x double>, i32 } %load
+}
+
+define { <vscale x 4 x double>, i32 } @vploadff_nxv4f64(ptr %ptr, <vscale x 4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4f64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x double>, i32 } @llvm.vp.load.ff.nxv4f64.p0(ptr %ptr, <vscale x 4 x i1> %m, i32 %evl)
+ ret { <vscale x 4 x double>, i32 } %load
+}
+
+define { <vscale x 4 x double>, i32 } @vploadff_nxv4f64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4f64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x double>, i32 } @llvm.vp.load.ff.nxv4f64.p0(ptr %ptr, <vscale x 4 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 4 x double>, i32 } %load
+}
+
+define { <vscale x 8 x double>, i32 } @vploadff_nxv8f64(ptr %ptr, <vscale x 8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8f64:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x double>, i32 } @llvm.vp.load.ff.nxv8f64.p0(ptr %ptr, <vscale x 8 x i1> %m, i32 %evl)
+ ret { <vscale x 8 x double>, i32 } %load
+}
+
+define { <vscale x 8 x double>, i32 } @vploadff_nxv8f64_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8f64_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; CHECK-NEXT: vle64ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x double>, i32 } @llvm.vp.load.ff.nxv8f64.p0(ptr %ptr, <vscale x 8 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 8 x double>, i32 } %load
+}
+
+define { <vscale x 1 x bfloat>, i32 } @vploadff_nxv1bf16(ptr %ptr, <vscale x 1 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1bf16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x bfloat>, i32 } @llvm.vp.load.ff.nxv1bf16.p0(ptr %ptr, <vscale x 1 x i1> %m, i32 %evl)
+ ret { <vscale x 1 x bfloat>, i32 } %load
+}
+
+define { <vscale x 1 x bfloat>, i32 } @vploadff_nxv1bf16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv1bf16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 1 x bfloat>, i32 } @llvm.vp.load.ff.nxv1bf16.p0(ptr %ptr, <vscale x 1 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 1 x bfloat>, i32 } %load
+}
+
+define { <vscale x 2 x bfloat>, i32 } @vploadff_nxv2bf16(ptr %ptr, <vscale x 2 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2bf16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x bfloat>, i32 } @llvm.vp.load.ff.nxv2bf16.p0(ptr %ptr, <vscale x 2 x i1> %m, i32 %evl)
+ ret { <vscale x 2 x bfloat>, i32 } %load
+}
+
+define { <vscale x 2 x bfloat>, i32 } @vploadff_nxv2bf16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv2bf16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 2 x bfloat>, i32 } @llvm.vp.load.ff.nxv2bf16.p0(ptr %ptr, <vscale x 2 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 2 x bfloat>, i32 } %load
+}
+
+define { <vscale x 4 x bfloat>, i32 } @vploadff_nxv4bf16(ptr %ptr, <vscale x 4 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4bf16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x bfloat>, i32 } @llvm.vp.load.ff.nxv4bf16.p0(ptr %ptr, <vscale x 4 x i1> %m, i32 %evl)
+ ret { <vscale x 4 x bfloat>, i32 } %load
+}
+
+define { <vscale x 4 x bfloat>, i32 } @vploadff_nxv4bf16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv4bf16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 4 x bfloat>, i32 } @llvm.vp.load.ff.nxv4bf16.p0(ptr %ptr, <vscale x 4 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 4 x bfloat>, i32 } %load
+}
+
+define { <vscale x 8 x bfloat>, i32 } @vploadff_nxv8bf16(ptr %ptr, <vscale x 8 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8bf16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x bfloat>, i32 } @llvm.vp.load.ff.nxv8bf16.p0(ptr %ptr, <vscale x 8 x i1> %m, i32 %evl)
+ ret { <vscale x 8 x bfloat>, i32 } %load
+}
+
+define { <vscale x 8 x bfloat>, i32 } @vploadff_nxv8bf16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv8bf16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 8 x bfloat>, i32 } @llvm.vp.load.ff.nxv8bf16.p0(ptr %ptr, <vscale x 8 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 8 x bfloat>, i32 } %load
+}
+
+define { <vscale x 16 x bfloat>, i32 } @vploadff_nxv16bf16(ptr %ptr, <vscale x 16 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv16bf16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 16 x bfloat>, i32 } @llvm.vp.load.ff.nxv16bf16.p0(ptr %ptr, <vscale x 16 x i1> %m, i32 %evl)
+ ret { <vscale x 16 x bfloat>, i32 } %load
+}
+
+define { <vscale x 16 x bfloat>, i32 } @vploadff_nxv16bf16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv16bf16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 16 x bfloat>, i32 } @llvm.vp.load.ff.nxv16bf16.p0(ptr %ptr, <vscale x 16 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 16 x bfloat>, i32 } %load
+}
+
+define { <vscale x 32 x bfloat>, i32 } @vploadff_nxv32bf16(ptr %ptr, <vscale x 32 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv32bf16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m8, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 32 x bfloat>, i32 } @llvm.vp.load.ff.nxv32bf16.p0(ptr %ptr, <vscale x 32 x i1> %m, i32 %evl)
+ ret { <vscale x 32 x bfloat>, i32 } %load
+}
+
+define { <vscale x 32 x bfloat>, i32 } @vploadff_nxv32bf16_allones_mask(ptr %ptr, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv32bf16_allones_mask:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e16, m8, ta, ma
+; CHECK-NEXT: vle16ff.v v8, (a0)
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 32 x bfloat>, i32 } @llvm.vp.load.ff.nxv32bf16.p0(ptr %ptr, <vscale x 32 x i1> splat (i1 true), i32 %evl)
+ ret { <vscale x 32 x bfloat>, i32 } %load
+}
+
+define { <vscale x 3 x i8>, i32 } @vploadff_nxv3i8(ptr %ptr, <vscale x 3 x i1> %m, i32 zeroext %evl) {
+; CHECK-LABEL: vploadff_nxv3i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
+; CHECK-NEXT: vle8ff.v v8, (a0), v0.t
+; CHECK-NEXT: csrr a0, vl
+; CHECK-NEXT: ret
+ %load = call { <vscale x 3 x i8>, i32 } @llvm.vp.load.ff.nxv3i8.p0(ptr %ptr, <vscale x 3 x i1> %m, i32 %evl)
+ ret { <vscale x 3 x i8>, i32 } %load
+}
diff --git a/llvm/unittests/IR/VPIntrinsicTest.cpp b/llvm/unittests/IR/VPIntrinsicTest.cpp
index d6ad7599ce461..0dd352a94f1c7 100644
--- a/llvm/unittests/IR/VPIntrinsicTest.cpp
+++ b/llvm/unittests/IR/VPIntrinsicTest.cpp
@@ -100,6 +100,9 @@ class VPIntrinsicTest : public testing::Test {
"i32*>, <8 x i1>, i32) ";
Str << " declare <8 x i32> @llvm.vp.load.v8i32.p0v8i32(<8 x i32>*, <8 x "
"i1>, i32) ";
+ Str << " declare {<8 x i32>, i32} "
+ "@llvm.vp.load.ff.v8i32.p0v8i32(<8 x "
+ "i32>*, <8 x i1>, i32) ";
Str << "declare <8 x i32> "
"@llvm.experimental.vp.strided.load.v8i32.i32(i32*, i32, <8 "
"x i1>, i32) ";
From 600976f4bfb06526c283dcc4efc4801792f08ca5 Mon Sep 17 00:00:00 2001
From: Igor Kudrin <ikudrin at accesssoftek.com>
Date: Tue, 5 Aug 2025 16:18:01 -0700
Subject: [PATCH 011/171] Revert "[lldb] Fix auto advance PC in
`EmulateInstructionARM64` if PC >= 4G (#151460)"
This reverts commit 64eba6ef9610a4a82e1610ecd806b8488144bad0.
It breaks the lldb-arm-ubuntu buildbot.
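Note that reverting narrows orig_pc_value/new_pc_value back to
uint32_t, so the auto-advance path once again truncates PC values at or
above 4 GiB until the fix can be relanded. A minimal standalone
illustration of the truncation (not lldb code):

  #include <cstdint>
  #include <cstdio>

  int main() {
    uint64_t pc = 0x123456789abcde00ULL;   // a PC above 4 GiB
    uint32_t narrowed = static_cast<uint32_t>(pc);
    // Prints 123456789abcde00 -> 9abcde00: the top 32 bits are lost,
    // so advancing PC from the narrowed value targets the wrong address.
    std::printf("%llx -> %x\n", (unsigned long long)pc, narrowed);
  }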
---
.../ARM64/EmulateInstructionARM64.cpp | 4 +-
.../Instruction/ARM64/TestAArch64Emulator.cpp | 120 +-----------------
2 files changed, 3 insertions(+), 121 deletions(-)
diff --git a/lldb/source/Plugins/Instruction/ARM64/EmulateInstructionARM64.cpp b/lldb/source/Plugins/Instruction/ARM64/EmulateInstructionARM64.cpp
index a8901beda3970..29f03fee47b0d 100644
--- a/lldb/source/Plugins/Instruction/ARM64/EmulateInstructionARM64.cpp
+++ b/lldb/source/Plugins/Instruction/ARM64/EmulateInstructionARM64.cpp
@@ -404,7 +404,7 @@ bool EmulateInstructionARM64::EvaluateInstruction(uint32_t evaluate_options) {
if (!success && !m_ignore_conditions)
return false;
- uint64_t orig_pc_value = 0;
+ uint32_t orig_pc_value = 0;
if (auto_advance_pc) {
orig_pc_value =
ReadRegisterUnsigned(eRegisterKindLLDB, gpr_pc_arm64, 0, &success);
@@ -418,7 +418,7 @@ bool EmulateInstructionARM64::EvaluateInstruction(uint32_t evaluate_options) {
return false;
if (auto_advance_pc) {
- uint64_t new_pc_value =
+ uint32_t new_pc_value =
ReadRegisterUnsigned(eRegisterKindLLDB, gpr_pc_arm64, 0, &success);
if (!success)
return false;
diff --git a/lldb/unittests/Instruction/ARM64/TestAArch64Emulator.cpp b/lldb/unittests/Instruction/ARM64/TestAArch64Emulator.cpp
index fdd456600da18..4506c200dee3b 100644
--- a/lldb/unittests/Instruction/ARM64/TestAArch64Emulator.cpp
+++ b/lldb/unittests/Instruction/ARM64/TestAArch64Emulator.cpp
@@ -13,118 +13,15 @@
#include "lldb/Core/Disassembler.h"
#include "lldb/Target/ExecutionContext.h"
#include "lldb/Utility/ArchSpec.h"
-#include "lldb/Utility/RegisterValue.h"
#include "Plugins/Instruction/ARM64/EmulateInstructionARM64.h"
-#include "Plugins/Process/Utility/RegisterInfoPOSIX_arm64.h"
-#include "Plugins/Process/Utility/lldb-arm64-register-enums.h"
using namespace lldb;
using namespace lldb_private;
struct Arch64EmulatorTester : public EmulateInstructionARM64 {
- RegisterInfoPOSIX_arm64::GPR gpr;
- uint8_t memory[64] = {0};
- uint64_t memory_offset = 0;
-
Arch64EmulatorTester()
- : EmulateInstructionARM64(ArchSpec("arm64-apple-ios")) {
- memset(&gpr, 0, sizeof(gpr));
- EmulateInstruction::SetCallbacks(ReadMemoryCallback, WriteMemoryCallback,
- ReadRegisterCallback,
- WriteRegisterCallback);
- }
-
- static bool ReadRegisterCallback(EmulateInstruction *instruction, void *baton,
- const RegisterInfo *reg_info,
- RegisterValue ®_value) {
- auto *tester = static_cast<Arch64EmulatorTester *>(instruction);
- uint32_t reg = reg_info->kinds[eRegisterKindLLDB];
- if (reg >= gpr_x0_arm64 && reg <= gpr_x28_arm64) {
- reg_value.SetUInt64(tester->gpr.x[reg - gpr_x0_arm64]);
- return true;
- }
- if (reg >= gpr_w0_arm64 && reg <= gpr_w28_arm64) {
- reg_value.SetUInt32(tester->gpr.x[reg - gpr_w0_arm64]);
- return true;
- }
- switch (reg) {
- case gpr_fp_arm64:
- reg_value.SetUInt64(tester->gpr.fp);
- return true;
- case gpr_lr_arm64:
- reg_value.SetUInt64(tester->gpr.lr);
- return true;
- case gpr_sp_arm64:
- reg_value.SetUInt64(tester->gpr.sp);
- return true;
- case gpr_pc_arm64:
- reg_value.SetUInt64(tester->gpr.pc);
- return true;
- case gpr_cpsr_arm64:
- reg_value.SetUInt64(tester->gpr.cpsr);
- return true;
- default:
- return false;
- }
- }
-
- static bool WriteRegisterCallback(EmulateInstruction *instruction,
- void *baton, const Context &context,
- const RegisterInfo *reg_info,
- const RegisterValue ®_value) {
- auto *tester = static_cast<Arch64EmulatorTester *>(instruction);
- uint32_t reg = reg_info->kinds[eRegisterKindLLDB];
- if (reg >= gpr_x0_arm64 && reg <= gpr_x28_arm64) {
- tester->gpr.x[reg - gpr_x0_arm64] = reg_value.GetAsUInt64();
- return true;
- }
- if (reg >= gpr_w0_arm64 && reg <= gpr_w28_arm64) {
- tester->gpr.x[reg - gpr_w0_arm64] = reg_value.GetAsUInt32();
- return true;
- }
- switch (reg) {
- case gpr_fp_arm64:
- tester->gpr.fp = reg_value.GetAsUInt64();
- return true;
- case gpr_lr_arm64:
- tester->gpr.lr = reg_value.GetAsUInt64();
- return true;
- case gpr_sp_arm64:
- tester->gpr.sp = reg_value.GetAsUInt64();
- return true;
- case gpr_pc_arm64:
- tester->gpr.pc = reg_value.GetAsUInt64();
- return true;
- case gpr_cpsr_arm64:
- tester->gpr.cpsr = reg_value.GetAsUInt64();
- return true;
- default:
- return false;
- }
- }
-
- static size_t ReadMemoryCallback(EmulateInstruction *instruction, void *baton,
- const Context &context, addr_t addr,
- void *dst, size_t length) {
- auto *tester = static_cast<Arch64EmulatorTester *>(instruction);
- assert(addr >= tester->memory_offset);
- assert(addr - tester->memory_offset + length <= sizeof(tester->memory));
- if (addr >= tester->memory_offset &&
- addr - tester->memory_offset + length <= sizeof(tester->memory)) {
- memcpy(dst, tester->memory + addr - tester->memory_offset, length);
- return length;
- }
- return 0;
- };
-
- static size_t WriteMemoryCallback(EmulateInstruction *instruction,
- void *baton, const Context &context,
- addr_t addr, const void *dst,
- size_t length) {
- llvm_unreachable("implement when required");
- return 0;
- };
+ : EmulateInstructionARM64(ArchSpec("arm64-apple-ios")) {}
static uint64_t AddWithCarry(uint32_t N, uint64_t x, uint64_t y, bool carry_in,
EmulateInstructionARM64::ProcState &proc_state) {
@@ -163,18 +60,3 @@ TEST_F(TestAArch64Emulator, TestOverflow) {
ASSERT_EQ(pstate.V, 1ULL);
ASSERT_EQ(pstate.C, 0ULL);
}
-
-TEST_F(TestAArch64Emulator, TestAutoAdvancePC) {
- Arch64EmulatorTester emu;
- emu.memory_offset = 0x123456789abcde00;
- emu.gpr.pc = 0x123456789abcde00;
- emu.gpr.x[8] = 0x123456789abcde20;
- memcpy(emu.memory, "\x08\x01\x40\xb9", 4); // ldr w8, [x8]
- memcpy(emu.memory + 0x20, "\x11\x22\x33\x44", 4); // 0x44332211
- ASSERT_TRUE(emu.ReadInstruction());
- ASSERT_TRUE(
- emu.EvaluateInstruction(eEmulateInstructionOptionAutoAdvancePC |
- eEmulateInstructionOptionIgnoreConditions));
- ASSERT_EQ(emu.gpr.pc, 0x123456789abcde04);
- ASSERT_EQ(emu.gpr.x[8], 0x44332211);
-}
From 4c2d56318fec16d1d5241cd91d38b59cb65bce83 Mon Sep 17 00:00:00 2001
From: Jonas Devlieghere <jonas at devlieghere.com>
Date: Tue, 5 Aug 2025 16:19:59 -0700
Subject: [PATCH 012/171] [lldb] Drop PY_MINOR_VERSION >= 3 check (NFC)
The minimum supported Python version is now 3.8, so there's no supported
configuration where the minor version is less than 3.
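A minimal sketch of the preprocessor pattern being dropped (hypothetical
standalone example; the real change in PythonDataObjects.cpp follows):

  #include <Python.h>

  // With the 3.8 floor, the PY_MINOR_VERSION >= 3 branch is always
  // taken, so the guard and the PyUnicode_GetSize fallback can go.
  Py_ssize_t getLength(PyObject *str) {
  #if PY_MINOR_VERSION >= 3
    return PyUnicode_GetLength(str);
  #else
    return PyUnicode_GetSize(str);   // dead with Python >= 3.8
  #endif
  }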
---
.../Plugins/ScriptInterpreter/Python/PythonDataObjects.cpp | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/lldb/source/Plugins/ScriptInterpreter/Python/PythonDataObjects.cpp b/lldb/source/Plugins/ScriptInterpreter/Python/PythonDataObjects.cpp
index 4a45673a6ebbc..82aa022bbae0b 100644
--- a/lldb/source/Plugins/ScriptInterpreter/Python/PythonDataObjects.cpp
+++ b/lldb/source/Plugins/ScriptInterpreter/Python/PythonDataObjects.cpp
@@ -430,13 +430,8 @@ Expected<llvm::StringRef> PythonString::AsUTF8() const {
}
size_t PythonString::GetSize() const {
- if (IsValid()) {
-#if PY_MINOR_VERSION >= 3
+ if (IsValid())
return PyUnicode_GetLength(m_py_obj);
-#else
- return PyUnicode_GetSize(m_py_obj);
-#endif
- }
return 0;
}
From b8eb61adc92bb384bc63f01b7ccddd931409b223 Mon Sep 17 00:00:00 2001
From: Stanislav Mekhanoshin <Stanislav.Mekhanoshin at amd.com>
Date: Tue, 5 Aug 2025 16:25:23 -0700
Subject: [PATCH 013/171] [AMDGPU] Implement addrspacecast from flat <->
private on gfx1250 (#152218)
---
.../lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp | 87 +-
llvm/lib/Target/AMDGPU/SIISelLowering.cpp | 91 +-
llvm/lib/Target/AMDGPU/SIRegisterInfo.td | 3 +-
.../GlobalISel/legalize-addrspacecast.mir | 20 +-
llvm/test/CodeGen/AMDGPU/addrspacecast-gas.ll | 134 ++
.../test/CodeGen/AMDGPU/flat-saddr-atomics.ll | 1654 +++++++++++------
llvm/test/CodeGen/AMDGPU/scale-offset-flat.ll | 43 +-
7 files changed, 1374 insertions(+), 658 deletions(-)
create mode 100644 llvm/test/CodeGen/AMDGPU/addrspacecast-gas.ll
diff --git a/llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp b/llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp
index 1fdf272ee2191..a6e4a63de4c62 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp
@@ -2271,6 +2271,9 @@ Register AMDGPULegalizerInfo::getSegmentAperture(
const unsigned ApertureRegNo = (AS == AMDGPUAS::LOCAL_ADDRESS)
? AMDGPU::SRC_SHARED_BASE
: AMDGPU::SRC_PRIVATE_BASE;
+ assert((ApertureRegNo != AMDGPU::SRC_PRIVATE_BASE ||
+ !ST.hasGloballyAddressableScratch()) &&
+ "Cannot use src_private_base with globally addressable scratch!");
// FIXME: It would be more natural to emit a COPY here, but then copy
// coalescing would kick in and it would think it's okay to use the "HI"
// subregister (instead of extracting the HI 32 bits) which is an artificial
@@ -2396,11 +2399,30 @@ bool AMDGPULegalizerInfo::legalizeAddrSpaceCast(
if (SrcAS == AMDGPUAS::FLAT_ADDRESS &&
(DestAS == AMDGPUAS::LOCAL_ADDRESS ||
DestAS == AMDGPUAS::PRIVATE_ADDRESS)) {
+ auto castFlatToLocalOrPrivate = [&](const DstOp &Dst) -> Register {
+ if (DestAS == AMDGPUAS::PRIVATE_ADDRESS &&
+ ST.hasGloballyAddressableScratch()) {
+ // flat -> private with globally addressable scratch: subtract
+ // src_flat_scratch_base_lo.
+ const LLT S32 = LLT::scalar(32);
+ Register SrcLo = B.buildExtract(S32, Src, 0).getReg(0);
+ Register FlatScratchBaseLo =
+ B.buildInstr(AMDGPU::S_MOV_B32, {S32},
+ {Register(AMDGPU::SRC_FLAT_SCRATCH_BASE_LO)})
+ .getReg(0);
+ MRI.setRegClass(FlatScratchBaseLo, &AMDGPU::SReg_32RegClass);
+ Register Sub = B.buildSub(S32, SrcLo, FlatScratchBaseLo).getReg(0);
+ return B.buildIntToPtr(Dst, Sub).getReg(0);
+ }
+
+ // Extract low 32-bits of the pointer.
+ return B.buildExtract(Dst, Src, 0).getReg(0);
+ };
+
// For llvm.amdgcn.addrspacecast.nonnull we can always assume non-null, for
// G_ADDRSPACE_CAST we need to guess.
if (isa<GIntrinsic>(MI) || isKnownNonNull(Src, MRI, TM, SrcAS)) {
- // Extract low 32-bits of the pointer.
- B.buildExtract(Dst, Src, 0);
+ castFlatToLocalOrPrivate(Dst);
MI.eraseFromParent();
return true;
}
@@ -2411,7 +2433,7 @@ bool AMDGPULegalizerInfo::legalizeAddrSpaceCast(
auto FlatNull = B.buildConstant(SrcTy, 0);
// Extract low 32-bits of the pointer.
- auto PtrLo32 = B.buildExtract(DstTy, Src, 0);
+ auto PtrLo32 = castFlatToLocalOrPrivate(DstTy);
auto CmpRes =
B.buildICmp(CmpInst::ICMP_NE, LLT::scalar(1), Src, FlatNull.getReg(0));
@@ -2425,14 +2447,45 @@ bool AMDGPULegalizerInfo::legalizeAddrSpaceCast(
(SrcAS == AMDGPUAS::LOCAL_ADDRESS ||
SrcAS == AMDGPUAS::PRIVATE_ADDRESS)) {
auto castLocalOrPrivateToFlat = [&](const DstOp &Dst) -> Register {
- Register ApertureReg = getSegmentAperture(SrcAS, MRI, B);
- if (!ApertureReg.isValid())
- return false;
-
// Coerce the type of the low half of the result so we can use
// merge_values.
Register SrcAsInt = B.buildPtrToInt(S32, Src).getReg(0);
+ if (SrcAS == AMDGPUAS::PRIVATE_ADDRESS &&
+ ST.hasGloballyAddressableScratch()) {
+ // For wave32: Addr = (TID[4:0] << 52) + FLAT_SCRATCH_BASE + privateAddr
+ // For wave64: Addr = (TID[5:0] << 51) + FLAT_SCRATCH_BASE + privateAddr
+ Register AllOnes = B.buildConstant(S32, -1).getReg(0);
+ Register ThreadID = B.buildConstant(S32, 0).getReg(0);
+ ThreadID = B.buildIntrinsic(Intrinsic::amdgcn_mbcnt_lo, {S32})
+ .addUse(AllOnes)
+ .addUse(ThreadID)
+ .getReg(0);
+ if (ST.isWave64()) {
+ ThreadID = B.buildIntrinsic(Intrinsic::amdgcn_mbcnt_hi, {S32})
+ .addUse(AllOnes)
+ .addUse(ThreadID)
+ .getReg(0);
+ }
+ Register ShAmt =
+ B.buildConstant(S32, 57 - 32 - ST.getWavefrontSizeLog2()).getReg(0);
+ Register SrcHi = B.buildShl(S32, ThreadID, ShAmt).getReg(0);
+ Register CvtPtr =
+ B.buildMergeLikeInstr(DstTy, {SrcAsInt, SrcHi}).getReg(0);
+ // Accessing src_flat_scratch_base_lo as a 64-bit operand gives the full
+ // 64-bit hi:lo value.
+ Register FlatScratchBase =
+ B.buildInstr(AMDGPU::S_MOV_B64, {S64},
+ {Register(AMDGPU::SRC_FLAT_SCRATCH_BASE)})
+ .getReg(0);
+ MRI.setRegClass(FlatScratchBase, &AMDGPU::SReg_64RegClass);
+ return B.buildPtrAdd(Dst, CvtPtr, FlatScratchBase).getReg(0);
+ }
+
+ Register ApertureReg = getSegmentAperture(SrcAS, MRI, B);
+ if (!ApertureReg.isValid())
+ return false;
+
// TODO: Should we allow mismatched types but matching sizes in merges to
// avoid the ptrtoint?
return B.buildMergeLikeInstr(Dst, {SrcAsInt, ApertureReg}).getReg(0);
@@ -5788,11 +5841,25 @@ bool AMDGPULegalizerInfo::legalizeIsAddrSpace(MachineInstr &MI,
MachineRegisterInfo &MRI,
MachineIRBuilder &B,
unsigned AddrSpace) const {
- Register ApertureReg = getSegmentAperture(AddrSpace, MRI, B);
- auto Unmerge = B.buildUnmerge(LLT::scalar(32), MI.getOperand(2).getReg());
+ const LLT S32 = LLT::scalar(32);
+ auto Unmerge = B.buildUnmerge(S32, MI.getOperand(2).getReg());
Register Hi32 = Unmerge.getReg(1);
- B.buildICmp(ICmpInst::ICMP_EQ, MI.getOperand(0), Hi32, ApertureReg);
+ if (AddrSpace == AMDGPUAS::PRIVATE_ADDRESS &&
+ ST.hasGloballyAddressableScratch()) {
+ Register FlatScratchBaseHi =
+ B.buildInstr(AMDGPU::S_MOV_B32, {S32},
+ {Register(AMDGPU::SRC_FLAT_SCRATCH_BASE_HI)})
+ .getReg(0);
+ MRI.setRegClass(FlatScratchBaseHi, &AMDGPU::SReg_32RegClass);
+ // Test bits 63..58 against the aperture address.
+ Register XOR = B.buildXor(S32, Hi32, FlatScratchBaseHi).getReg(0);
+ B.buildICmp(ICmpInst::ICMP_ULT, MI.getOperand(0), XOR,
+ B.buildConstant(S32, 1u << 26));
+ } else {
+ Register ApertureReg = getSegmentAperture(AddrSpace, MRI, B);
+ B.buildICmp(ICmpInst::ICMP_EQ, MI.getOperand(0), Hi32, ApertureReg);
+ }
MI.eraseFromParent();
return true;
}
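As a sanity check on the formulas in the comments above: with globally
addressable scratch, private -> flat places the thread ID in the high
bits (wave32: TID[4:0] << 52; wave64: TID[5:0] << 51) and adds the
64-bit FLAT_SCRATCH_BASE, while flat -> private simply subtracts
src_flat_scratch_base_lo from the low word. A standalone arithmetic
sketch under those assumptions (values hypothetical; the real lowering
reads hardware registers):

  #include <cstdint>

  // Private -> flat: Addr = (TID[waveLog2-1:0] << (57 - waveLog2))
  //                         + FLAT_SCRATCH_BASE + privateAddr
  // waveLog2 is 5 for wave32 (shift 52) and 6 for wave64 (shift 51).
  uint64_t privateToFlat(uint32_t tid, uint64_t flatScratchBase,
                         uint32_t privateAddr, unsigned waveLog2) {
    uint64_t tidBits = uint64_t(tid & ((1u << waveLog2) - 1))
                       << (57 - waveLog2);
    return tidBits + flatScratchBase + privateAddr;
  }

  // Flat -> private: drop the high word, subtract the base's low half.
  uint32_t flatToPrivate(uint64_t flatAddr, uint32_t flatScratchBaseLo) {
    return uint32_t(flatAddr) - flatScratchBaseLo;
  }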
diff --git a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
index 4d67e4a5cbcf9..63826b782a377 100644
--- a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
@@ -2098,10 +2098,17 @@ bool SITargetLowering::isNonGlobalAddrSpace(unsigned AS) {
bool SITargetLowering::isFreeAddrSpaceCast(unsigned SrcAS,
unsigned DestAS) const {
- // Flat -> private/local is a simple truncate.
- // Flat -> global is no-op
- if (SrcAS == AMDGPUAS::FLAT_ADDRESS)
+ if (SrcAS == AMDGPUAS::FLAT_ADDRESS) {
+ if (DestAS == AMDGPUAS::PRIVATE_ADDRESS &&
+ Subtarget->hasGloballyAddressableScratch()) {
+ // Flat -> private requires subtracting src_flat_scratch_base_lo.
+ return false;
+ }
+
+ // Flat -> private/local is a simple truncate.
+ // Flat -> global is no-op
return true;
+ }
const GCNTargetMachine &TM =
static_cast<const GCNTargetMachine &>(getTargetMachine());
@@ -7650,6 +7657,9 @@ SDValue SITargetLowering::getSegmentAperture(unsigned AS, const SDLoc &DL,
const unsigned ApertureRegNo = (AS == AMDGPUAS::LOCAL_ADDRESS)
? AMDGPU::SRC_SHARED_BASE
: AMDGPU::SRC_PRIVATE_BASE;
+ assert((ApertureRegNo != AMDGPU::SRC_PRIVATE_BASE ||
+ !Subtarget->hasGloballyAddressableScratch()) &&
+ "Cannot use src_private_base with globally addressable scratch!");
// Note: this feature (register) is broken. When used as a 32-bit operand,
// it returns a wrong value (all zeroes?). The real value is in the upper 32
// bits.
@@ -7760,6 +7770,18 @@ SDValue SITargetLowering::lowerADDRSPACECAST(SDValue Op,
DestAS == AMDGPUAS::PRIVATE_ADDRESS) {
SDValue Ptr = DAG.getNode(ISD::TRUNCATE, SL, MVT::i32, Src);
+ if (DestAS == AMDGPUAS::PRIVATE_ADDRESS &&
+ Subtarget->hasGloballyAddressableScratch()) {
+ // flat -> private with globally addressable scratch: subtract
+ // src_flat_scratch_base_lo.
+ SDValue FlatScratchBaseLo(
+ DAG.getMachineNode(
+ AMDGPU::S_MOV_B32, SL, MVT::i32,
+ DAG.getRegister(AMDGPU::SRC_FLAT_SCRATCH_BASE_LO, MVT::i32)),
+ 0);
+ Ptr = DAG.getNode(ISD::SUB, SL, MVT::i32, Ptr, FlatScratchBaseLo);
+ }
+
if (IsNonNull || isKnownNonNull(Op, DAG, TM, SrcAS))
return Ptr;
@@ -7776,11 +7798,40 @@ SDValue SITargetLowering::lowerADDRSPACECAST(SDValue Op,
if (DestAS == AMDGPUAS::FLAT_ADDRESS) {
if (SrcAS == AMDGPUAS::LOCAL_ADDRESS ||
SrcAS == AMDGPUAS::PRIVATE_ADDRESS) {
-
- SDValue Aperture = getSegmentAperture(SrcAS, SL, DAG);
- SDValue CvtPtr =
- DAG.getNode(ISD::BUILD_VECTOR, SL, MVT::v2i32, Src, Aperture);
- CvtPtr = DAG.getNode(ISD::BITCAST, SL, MVT::i64, CvtPtr);
+ SDValue CvtPtr;
+ if (SrcAS == AMDGPUAS::PRIVATE_ADDRESS &&
+ Subtarget->hasGloballyAddressableScratch()) {
+ // For wave32: Addr = (TID[4:0] << 52) + FLAT_SCRATCH_BASE + privateAddr
+ // For wave64: Addr = (TID[5:0] << 51) + FLAT_SCRATCH_BASE + privateAddr
+ SDValue AllOnes = DAG.getSignedTargetConstant(-1, SL, MVT::i32);
+ SDValue ThreadID = DAG.getConstant(0, SL, MVT::i32);
+ ThreadID = DAG.getNode(
+ ISD::INTRINSIC_WO_CHAIN, SL, MVT::i32,
+ DAG.getTargetConstant(Intrinsic::amdgcn_mbcnt_lo, SL, MVT::i32),
+ AllOnes, ThreadID);
+ if (Subtarget->isWave64())
+ ThreadID = DAG.getNode(
+ ISD::INTRINSIC_WO_CHAIN, SL, MVT::i32,
+ DAG.getTargetConstant(Intrinsic::amdgcn_mbcnt_hi, SL, MVT::i32),
+ AllOnes, ThreadID);
+ SDValue ShAmt = DAG.getShiftAmountConstant(
+ 57 - 32 - Subtarget->getWavefrontSizeLog2(), MVT::i32, SL);
+ SDValue SrcHi = DAG.getNode(ISD::SHL, SL, MVT::i32, ThreadID, ShAmt);
+ CvtPtr = DAG.getNode(ISD::BUILD_VECTOR, SL, MVT::v2i32, Src, SrcHi);
+ CvtPtr = DAG.getNode(ISD::BITCAST, SL, MVT::i64, CvtPtr);
+ // Accessing src_flat_scratch_base_lo as a 64-bit operand gives the full
+ // 64-bit hi:lo value.
+ SDValue FlatScratchBase = {
+ DAG.getMachineNode(
+ AMDGPU::S_MOV_B64, SL, MVT::i64,
+ DAG.getRegister(AMDGPU::SRC_FLAT_SCRATCH_BASE, MVT::i64)),
+ 0};
+ CvtPtr = DAG.getNode(ISD::ADD, SL, MVT::i64, CvtPtr, FlatScratchBase);
+ } else {
+ SDValue Aperture = getSegmentAperture(SrcAS, SL, DAG);
+ CvtPtr = DAG.getNode(ISD::BUILD_VECTOR, SL, MVT::v2i32, Src, Aperture);
+ CvtPtr = DAG.getNode(ISD::BITCAST, SL, MVT::i64, CvtPtr);
+ }
if (IsNonNull || isKnownNonNull(Op, DAG, TM, SrcAS))
return CvtPtr;
@@ -9424,15 +9475,29 @@ SDValue SITargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
case Intrinsic::amdgcn_is_shared:
case Intrinsic::amdgcn_is_private: {
SDLoc SL(Op);
- unsigned AS = (IntrinsicID == Intrinsic::amdgcn_is_shared)
- ? AMDGPUAS::LOCAL_ADDRESS
- : AMDGPUAS::PRIVATE_ADDRESS;
- SDValue Aperture = getSegmentAperture(AS, SL, DAG);
SDValue SrcVec =
DAG.getNode(ISD::BITCAST, DL, MVT::v2i32, Op.getOperand(1));
-
SDValue SrcHi = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, SL, MVT::i32, SrcVec,
DAG.getConstant(1, SL, MVT::i32));
+
+ unsigned AS = (IntrinsicID == Intrinsic::amdgcn_is_shared)
+ ? AMDGPUAS::LOCAL_ADDRESS
+ : AMDGPUAS::PRIVATE_ADDRESS;
+ if (AS == AMDGPUAS::PRIVATE_ADDRESS &&
+ Subtarget->hasGloballyAddressableScratch()) {
+ SDValue FlatScratchBaseHi(
+ DAG.getMachineNode(
+ AMDGPU::S_MOV_B32, DL, MVT::i32,
+ DAG.getRegister(AMDGPU::SRC_FLAT_SCRATCH_BASE_HI, MVT::i32)),
+ 0);
+ // Test bits 63..58 against the aperture address.
+ return DAG.getSetCC(
+ SL, MVT::i1,
+ DAG.getNode(ISD::XOR, SL, MVT::i32, SrcHi, FlatScratchBaseHi),
+ DAG.getConstant(1u << 26, SL, MVT::i32), ISD::SETULT);
+ }
+
+ SDValue Aperture = getSegmentAperture(AS, SL, DAG);
return DAG.getSetCC(SL, MVT::i1, SrcHi, Aperture, ISD::SETEQ);
}
case Intrinsic::amdgcn_perm:
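The is_private lowering above encodes a range check: a flat pointer
lies in the scratch window iff its bits 63..58 match those of
FLAT_SCRATCH_BASE, i.e. XOR-ing the high 32-bit words clears everything
at or above bit 26. A small sketch of the predicate (names
hypothetical):

  #include <cstdint>

  // Test bits 63..58 of the pointer against the scratch aperture:
  // (hi32 ^ base_hi) < 2^26 holds exactly when those six bits agree.
  bool isPrivateFlatPtr(uint64_t flatPtr, uint32_t flatScratchBaseHi) {
    uint32_t hi = uint32_t(flatPtr >> 32);
    return (hi ^ flatScratchBaseHi) < (1u << 26);
  }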
diff --git a/llvm/lib/Target/AMDGPU/SIRegisterInfo.td b/llvm/lib/Target/AMDGPU/SIRegisterInfo.td
index ed6b973580502..81655f5a829fb 100644
--- a/llvm/lib/Target/AMDGPU/SIRegisterInfo.td
+++ b/llvm/lib/Target/AMDGPU/SIRegisterInfo.td
@@ -866,7 +866,8 @@ def TTMP_64 : SIRegisterClass<"AMDGPU", [v2i32, i64, f64, v4i16, v4f16, v4bf16],
def SReg_64_XEXEC_XNULL : SIRegisterClass<"AMDGPU", [v2i32, i64, v2f32, f64, i1, v4i16, v4f16, v4bf16], 32,
(add SGPR_64, VCC, FLAT_SCR, XNACK_MASK, SRC_SHARED_BASE,
- SRC_SHARED_LIMIT, SRC_PRIVATE_BASE, SRC_PRIVATE_LIMIT, TTMP_64, TBA, TMA)> {
+ SRC_SHARED_LIMIT, SRC_PRIVATE_BASE, SRC_PRIVATE_LIMIT, TTMP_64, TBA, TMA,
+ SRC_FLAT_SCRATCH_BASE)> {
let CopyCost = 1;
let AllocationPriority = 1;
let HasSGPR = 1;
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-addrspacecast.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-addrspacecast.mir
index 6a4522f5a97ad..d69a3e1a15bbd 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-addrspacecast.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-addrspacecast.mir
@@ -141,11 +141,11 @@ body: |
; SIVI-NEXT: {{ $}}
; SIVI-NEXT: [[COPY:%[0-9]+]]:sgpr_64(p4) = COPY $sgpr4_sgpr5
; SIVI-NEXT: [[COPY1:%[0-9]+]]:_(p5) = COPY $vgpr0
+ ; SIVI-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[COPY1]](p5)
; SIVI-NEXT: [[COPY2:%[0-9]+]]:_(p4) = COPY [[COPY]](p4)
; SIVI-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 68
; SIVI-NEXT: [[PTR_ADD:%[0-9]+]]:_(p4) = nuw inbounds G_PTR_ADD [[COPY2]], [[C]](s64)
; SIVI-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[PTR_ADD]](p4) :: (dereferenceable invariant load (s32), addrspace 4)
- ; SIVI-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[COPY1]](p5)
; SIVI-NEXT: [[MV:%[0-9]+]]:_(p0) = G_MERGE_VALUES [[PTRTOINT]](s32), [[LOAD]](s32)
; SIVI-NEXT: [[C1:%[0-9]+]]:_(p5) = G_CONSTANT i32 -1
; SIVI-NEXT: [[C2:%[0-9]+]]:_(p0) = G_CONSTANT i64 0
@@ -157,9 +157,9 @@ body: |
; GFX9: liveins: $vgpr0
; GFX9-NEXT: {{ $}}
; GFX9-NEXT: [[COPY:%[0-9]+]]:_(p5) = COPY $vgpr0
+ ; GFX9-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[COPY]](p5)
; GFX9-NEXT: [[S_MOV_B64_:%[0-9]+]]:sreg_64(s64) = S_MOV_B64 $src_private_base
; GFX9-NEXT: [[UV:%[0-9]+]]:_(s32), [[UV1:%[0-9]+]]:_(s32) = G_UNMERGE_VALUES [[S_MOV_B64_]](s64)
- ; GFX9-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[COPY]](p5)
; GFX9-NEXT: [[MV:%[0-9]+]]:_(p0) = G_MERGE_VALUES [[PTRTOINT]](s32), [[UV1]](s32)
; GFX9-NEXT: [[C:%[0-9]+]]:_(p5) = G_CONSTANT i32 -1
; GFX9-NEXT: [[C1:%[0-9]+]]:_(p0) = G_CONSTANT i64 0
@@ -210,11 +210,11 @@ body: |
; SIVI-NEXT: {{ $}}
; SIVI-NEXT: [[COPY:%[0-9]+]]:sgpr_64(p4) = COPY $sgpr4_sgpr5
; SIVI-NEXT: [[COPY1:%[0-9]+]]:_(p3) = COPY $vgpr0
+ ; SIVI-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[COPY1]](p3)
; SIVI-NEXT: [[COPY2:%[0-9]+]]:_(p4) = COPY [[COPY]](p4)
; SIVI-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 64
; SIVI-NEXT: [[PTR_ADD:%[0-9]+]]:_(p4) = nuw inbounds G_PTR_ADD [[COPY2]], [[C]](s64)
; SIVI-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[PTR_ADD]](p4) :: (dereferenceable invariant load (s32), align 64, addrspace 4)
- ; SIVI-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[COPY1]](p3)
; SIVI-NEXT: [[MV:%[0-9]+]]:_(p0) = G_MERGE_VALUES [[PTRTOINT]](s32), [[LOAD]](s32)
; SIVI-NEXT: [[C1:%[0-9]+]]:_(p3) = G_CONSTANT i32 -1
; SIVI-NEXT: [[C2:%[0-9]+]]:_(p0) = G_CONSTANT i64 0
@@ -226,9 +226,9 @@ body: |
; GFX9: liveins: $vgpr0
; GFX9-NEXT: {{ $}}
; GFX9-NEXT: [[COPY:%[0-9]+]]:_(p3) = COPY $vgpr0
+ ; GFX9-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[COPY]](p3)
; GFX9-NEXT: [[S_MOV_B64_:%[0-9]+]]:sreg_64(s64) = S_MOV_B64 $src_shared_base
; GFX9-NEXT: [[UV:%[0-9]+]]:_(s32), [[UV1:%[0-9]+]]:_(s32) = G_UNMERGE_VALUES [[S_MOV_B64_]](s64)
- ; GFX9-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[COPY]](p3)
; GFX9-NEXT: [[MV:%[0-9]+]]:_(p0) = G_MERGE_VALUES [[PTRTOINT]](s32), [[UV1]](s32)
; GFX9-NEXT: [[C:%[0-9]+]]:_(p3) = G_CONSTANT i32 -1
; GFX9-NEXT: [[C1:%[0-9]+]]:_(p0) = G_CONSTANT i64 0
@@ -354,20 +354,20 @@ body: |
; SIVI-NEXT: [[COPY:%[0-9]+]]:sgpr_64(p4) = COPY $sgpr4_sgpr5
; SIVI-NEXT: [[COPY1:%[0-9]+]]:_(<2 x p3>) = COPY $vgpr0_vgpr1
; SIVI-NEXT: [[UV:%[0-9]+]]:_(p3), [[UV1:%[0-9]+]]:_(p3) = G_UNMERGE_VALUES [[COPY1]](<2 x p3>)
+ ; SIVI-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[UV]](p3)
; SIVI-NEXT: [[COPY2:%[0-9]+]]:_(p4) = COPY [[COPY]](p4)
; SIVI-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 64
; SIVI-NEXT: [[PTR_ADD:%[0-9]+]]:_(p4) = nuw inbounds G_PTR_ADD [[COPY2]], [[C]](s64)
; SIVI-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[PTR_ADD]](p4) :: (dereferenceable invariant load (s32), align 64, addrspace 4)
- ; SIVI-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[UV]](p3)
; SIVI-NEXT: [[MV:%[0-9]+]]:_(p0) = G_MERGE_VALUES [[PTRTOINT]](s32), [[LOAD]](s32)
; SIVI-NEXT: [[C1:%[0-9]+]]:_(p3) = G_CONSTANT i32 -1
; SIVI-NEXT: [[C2:%[0-9]+]]:_(p0) = G_CONSTANT i64 0
; SIVI-NEXT: [[ICMP:%[0-9]+]]:_(s1) = G_ICMP intpred(ne), [[UV]](p3), [[C1]]
; SIVI-NEXT: [[SELECT:%[0-9]+]]:_(p0) = G_SELECT [[ICMP]](s1), [[MV]], [[C2]]
+ ; SIVI-NEXT: [[PTRTOINT1:%[0-9]+]]:_(s32) = G_PTRTOINT [[UV1]](p3)
; SIVI-NEXT: [[COPY3:%[0-9]+]]:_(p4) = COPY [[COPY]](p4)
; SIVI-NEXT: [[PTR_ADD1:%[0-9]+]]:_(p4) = nuw inbounds G_PTR_ADD [[COPY3]], [[C]](s64)
; SIVI-NEXT: [[LOAD1:%[0-9]+]]:_(s32) = G_LOAD [[PTR_ADD1]](p4) :: (dereferenceable invariant load (s32), align 64, addrspace 4)
- ; SIVI-NEXT: [[PTRTOINT1:%[0-9]+]]:_(s32) = G_PTRTOINT [[UV1]](p3)
; SIVI-NEXT: [[MV1:%[0-9]+]]:_(p0) = G_MERGE_VALUES [[PTRTOINT1]](s32), [[LOAD1]](s32)
; SIVI-NEXT: [[ICMP1:%[0-9]+]]:_(s1) = G_ICMP intpred(ne), [[UV1]](p3), [[C1]]
; SIVI-NEXT: [[SELECT1:%[0-9]+]]:_(p0) = G_SELECT [[ICMP1]](s1), [[MV1]], [[C2]]
@@ -379,17 +379,17 @@ body: |
; GFX9-NEXT: {{ $}}
; GFX9-NEXT: [[COPY:%[0-9]+]]:_(<2 x p3>) = COPY $vgpr0_vgpr1
; GFX9-NEXT: [[UV:%[0-9]+]]:_(p3), [[UV1:%[0-9]+]]:_(p3) = G_UNMERGE_VALUES [[COPY]](<2 x p3>)
+ ; GFX9-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[UV]](p3)
; GFX9-NEXT: [[S_MOV_B64_:%[0-9]+]]:sreg_64(s64) = S_MOV_B64 $src_shared_base
; GFX9-NEXT: [[UV2:%[0-9]+]]:_(s32), [[UV3:%[0-9]+]]:_(s32) = G_UNMERGE_VALUES [[S_MOV_B64_]](s64)
- ; GFX9-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[UV]](p3)
; GFX9-NEXT: [[MV:%[0-9]+]]:_(p0) = G_MERGE_VALUES [[PTRTOINT]](s32), [[UV3]](s32)
; GFX9-NEXT: [[C:%[0-9]+]]:_(p3) = G_CONSTANT i32 -1
; GFX9-NEXT: [[C1:%[0-9]+]]:_(p0) = G_CONSTANT i64 0
; GFX9-NEXT: [[ICMP:%[0-9]+]]:_(s1) = G_ICMP intpred(ne), [[UV]](p3), [[C]]
; GFX9-NEXT: [[SELECT:%[0-9]+]]:_(p0) = G_SELECT [[ICMP]](s1), [[MV]], [[C1]]
+ ; GFX9-NEXT: [[PTRTOINT1:%[0-9]+]]:_(s32) = G_PTRTOINT [[UV1]](p3)
; GFX9-NEXT: [[S_MOV_B64_1:%[0-9]+]]:sreg_64(s64) = S_MOV_B64 $src_shared_base
; GFX9-NEXT: [[UV4:%[0-9]+]]:_(s32), [[UV5:%[0-9]+]]:_(s32) = G_UNMERGE_VALUES [[S_MOV_B64_1]](s64)
- ; GFX9-NEXT: [[PTRTOINT1:%[0-9]+]]:_(s32) = G_PTRTOINT [[UV1]](p3)
; GFX9-NEXT: [[MV1:%[0-9]+]]:_(p0) = G_MERGE_VALUES [[PTRTOINT1]](s32), [[UV5]](s32)
; GFX9-NEXT: [[ICMP1:%[0-9]+]]:_(s1) = G_ICMP intpred(ne), [[UV1]](p3), [[C]]
; GFX9-NEXT: [[SELECT1:%[0-9]+]]:_(p0) = G_SELECT [[ICMP1]](s1), [[MV1]], [[C1]]
@@ -506,19 +506,19 @@ body: |
; SIVI-NEXT: {{ $}}
; SIVI-NEXT: [[COPY:%[0-9]+]]:sgpr_64(p4) = COPY $sgpr4_sgpr5
; SIVI-NEXT: [[FRAME_INDEX:%[0-9]+]]:_(p5) = G_FRAME_INDEX %stack.0
+ ; SIVI-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[FRAME_INDEX]](p5)
; SIVI-NEXT: [[COPY1:%[0-9]+]]:_(p4) = COPY [[COPY]](p4)
; SIVI-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 68
; SIVI-NEXT: [[PTR_ADD:%[0-9]+]]:_(p4) = nuw inbounds G_PTR_ADD [[COPY1]], [[C]](s64)
; SIVI-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[PTR_ADD]](p4) :: (dereferenceable invariant load (s32), addrspace 4)
- ; SIVI-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[FRAME_INDEX]](p5)
; SIVI-NEXT: [[MV:%[0-9]+]]:_(p0) = G_MERGE_VALUES [[PTRTOINT]](s32), [[LOAD]](s32)
; SIVI-NEXT: $vgpr0_vgpr1 = COPY [[MV]](p0)
;
; GFX9-LABEL: name: test_addrspacecast_p5_fi_to_p0
; GFX9: [[FRAME_INDEX:%[0-9]+]]:_(p5) = G_FRAME_INDEX %stack.0
+ ; GFX9-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[FRAME_INDEX]](p5)
; GFX9-NEXT: [[S_MOV_B64_:%[0-9]+]]:sreg_64(s64) = S_MOV_B64 $src_private_base
; GFX9-NEXT: [[UV:%[0-9]+]]:_(s32), [[UV1:%[0-9]+]]:_(s32) = G_UNMERGE_VALUES [[S_MOV_B64_]](s64)
- ; GFX9-NEXT: [[PTRTOINT:%[0-9]+]]:_(s32) = G_PTRTOINT [[FRAME_INDEX]](p5)
; GFX9-NEXT: [[MV:%[0-9]+]]:_(p0) = G_MERGE_VALUES [[PTRTOINT]](s32), [[UV1]](s32)
; GFX9-NEXT: $vgpr0_vgpr1 = COPY [[MV]](p0)
%0:_(p5) = G_FRAME_INDEX %stack.0
diff --git a/llvm/test/CodeGen/AMDGPU/addrspacecast-gas.ll b/llvm/test/CodeGen/AMDGPU/addrspacecast-gas.ll
new file mode 100644
index 0000000000000..4b6375cc60800
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/addrspacecast-gas.ll
@@ -0,0 +1,134 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -global-isel=0 -mtriple=amdgcn -mcpu=gfx1250 < %s | FileCheck -check-prefixes=GFX1250,GFX1250-SDAG %s
+; RUN: llc -global-isel=1 -mtriple=amdgcn -mcpu=gfx1250 < %s | FileCheck -check-prefixes=GFX1250,GFX1250-GISEL %s
+
+; Test code sequences for addrspacecast with globally addressable scratch.
+
+target triple = "amdgcn-amd-amdhsa"
+
+define amdgpu_kernel void @use_private_to_flat_addrspacecast(ptr addrspace(5) %ptr) {
+; GFX1250-SDAG-LABEL: use_private_to_flat_addrspacecast:
+; GFX1250-SDAG: ; %bb.0:
+; GFX1250-SDAG-NEXT: s_load_b32 s2, s[4:5], 0x24
+; GFX1250-SDAG-NEXT: v_mbcnt_lo_u32_b32 v0, -1, 0
+; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_flat_scratch_base_lo
+; GFX1250-SDAG-NEXT: s_wait_kmcnt 0x0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_2) | instid1(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_dual_mov_b32 v0, s2 :: v_dual_lshlrev_b32 v1, 20, v0
+; GFX1250-SDAG-NEXT: s_cmp_lg_u32 s2, -1
+; GFX1250-SDAG-NEXT: s_cselect_b32 vcc_lo, -1, 0
+; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: v_dual_mov_b32 v2, 0 :: v_dual_cndmask_b32 v1, 0, v1
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v0, 0, v0, vcc_lo
+; GFX1250-SDAG-NEXT: flat_store_b32 v[0:1], v2 scope:SCOPE_SYS
+; GFX1250-SDAG-NEXT: s_wait_storecnt 0x0
+; GFX1250-SDAG-NEXT: s_endpgm
+;
+; GFX1250-GISEL-LABEL: use_private_to_flat_addrspacecast:
+; GFX1250-GISEL: ; %bb.0:
+; GFX1250-GISEL-NEXT: s_load_b32 s2, s[4:5], 0x24
+; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_flat_scratch_base_lo
+; GFX1250-GISEL-NEXT: v_mbcnt_lo_u32_b32 v2, -1, 0
+; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[0:1]
+; GFX1250-GISEL-NEXT: s_wait_kmcnt 0x0
+; GFX1250-GISEL-NEXT: s_cmp_lg_u32 s2, -1
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_2) | instid1(SALU_CYCLE_1)
+; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, s2, v0
+; GFX1250-GISEL-NEXT: v_lshlrev_b32_e32 v2, 20, v2
+; GFX1250-GISEL-NEXT: s_cselect_b32 s0, 1, 0
+; GFX1250-GISEL-NEXT: s_and_b32 s0, 1, s0
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_2)
+; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, v2, v1, vcc_lo
+; GFX1250-GISEL-NEXT: v_cmp_ne_u32_e64 vcc_lo, 0, s0
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v2, 0 :: v_dual_cndmask_b32 v1, 0, v1
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v0, 0, v0, vcc_lo
+; GFX1250-GISEL-NEXT: flat_store_b32 v[0:1], v2 scope:SCOPE_SYS
+; GFX1250-GISEL-NEXT: s_wait_storecnt 0x0
+; GFX1250-GISEL-NEXT: s_endpgm
+ %stof = addrspacecast ptr addrspace(5) %ptr to ptr
+ store volatile i32 0, ptr %stof
+ ret void
+}
+
+define amdgpu_kernel void @use_private_to_flat_addrspacecast_nonnull(ptr addrspace(5) %ptr) {
+; GFX1250-SDAG-LABEL: use_private_to_flat_addrspacecast_nonnull:
+; GFX1250-SDAG: ; %bb.0:
+; GFX1250-SDAG-NEXT: s_load_b32 s0, s[4:5], 0x24
+; GFX1250-SDAG-NEXT: v_mbcnt_lo_u32_b32 v0, -1, 0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_dual_mov_b32 v2, 0 :: v_dual_lshlrev_b32 v1, 20, v0
+; GFX1250-SDAG-NEXT: s_wait_kmcnt 0x0
+; GFX1250-SDAG-NEXT: v_mov_b32_e32 v0, s0
+; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_flat_scratch_base_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
+; GFX1250-SDAG-NEXT: flat_store_b32 v[0:1], v2 scope:SCOPE_SYS
+; GFX1250-SDAG-NEXT: s_wait_storecnt 0x0
+; GFX1250-SDAG-NEXT: s_endpgm
+;
+; GFX1250-GISEL-LABEL: use_private_to_flat_addrspacecast_nonnull:
+; GFX1250-GISEL: ; %bb.0:
+; GFX1250-GISEL-NEXT: s_load_b32 s2, s[4:5], 0x24
+; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_flat_scratch_base_lo
+; GFX1250-GISEL-NEXT: v_mbcnt_lo_u32_b32 v2, -1, 0
+; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[0:1]
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(SKIP_1) | instid1(VALU_DEP_2)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, 0 :: v_dual_lshlrev_b32 v2, 20, v2
+; GFX1250-GISEL-NEXT: s_wait_kmcnt 0x0
+; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, s2, v0
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, v2, v1, vcc_lo
+; GFX1250-GISEL-NEXT: flat_store_b32 v[0:1], v3 scope:SCOPE_SYS
+; GFX1250-GISEL-NEXT: s_wait_storecnt 0x0
+; GFX1250-GISEL-NEXT: s_endpgm
+ %stof = call ptr @llvm.amdgcn.addrspacecast.nonnull.p0.p5(ptr addrspace(5) %ptr)
+ store volatile i32 0, ptr %stof
+ ret void
+}
+
+define amdgpu_kernel void @use_flat_to_private_addrspacecast(ptr %ptr) {
+; GFX1250-LABEL: use_flat_to_private_addrspacecast:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_load_b64 s[0:1], s[4:5], 0x24
+; GFX1250-NEXT: s_mov_b32 s2, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_mov_b32_e32 v0, 0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_sub_co_i32 s2, s0, s2
+; GFX1250-NEXT: s_cmp_lg_u64 s[0:1], 0
+; GFX1250-NEXT: s_cselect_b32 s0, s2, -1
+; GFX1250-NEXT: scratch_store_b32 off, v0, s0 scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_storecnt 0x0
+; GFX1250-NEXT: s_endpgm
+ %ftos = addrspacecast ptr %ptr to ptr addrspace(5)
+ store volatile i32 0, ptr addrspace(5) %ftos
+ ret void
+}
+
+define amdgpu_kernel void @use_flat_to_private_addrspacecast_nonnull(ptr %ptr) {
+; GFX1250-SDAG-LABEL: use_flat_to_private_addrspacecast_nonnull:
+; GFX1250-SDAG: ; %bb.0:
+; GFX1250-SDAG-NEXT: s_load_b32 s0, s[4:5], 0x24
+; GFX1250-SDAG-NEXT: v_mov_b32_e32 v0, 0
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-SDAG-NEXT: s_wait_kmcnt 0x0
+; GFX1250-SDAG-NEXT: s_sub_co_i32 s0, s0, s1
+; GFX1250-SDAG-NEXT: scratch_store_b32 off, v0, s0 scope:SCOPE_SYS
+; GFX1250-SDAG-NEXT: s_wait_storecnt 0x0
+; GFX1250-SDAG-NEXT: s_endpgm
+;
+; GFX1250-GISEL-LABEL: use_flat_to_private_addrspacecast_nonnull:
+; GFX1250-GISEL: ; %bb.0:
+; GFX1250-GISEL-NEXT: s_load_b64 s[0:1], s[4:5], 0x24
+; GFX1250-GISEL-NEXT: v_mov_b32_e32 v0, 0
+; GFX1250-GISEL-NEXT: s_wait_kmcnt 0x0
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-GISEL-NEXT: s_sub_co_i32 s0, s0, s1
+; GFX1250-GISEL-NEXT: scratch_store_b32 off, v0, s0 scope:SCOPE_SYS
+; GFX1250-GISEL-NEXT: s_wait_storecnt 0x0
+; GFX1250-GISEL-NEXT: s_endpgm
+ %ftos = call ptr addrspace(5) @llvm.amdgcn.addrspacecast.nonnull.p5.p0(ptr %ptr)
+ store volatile i32 0, ptr addrspace(5) %ftos
+ ret void
+}
diff --git a/llvm/test/CodeGen/AMDGPU/flat-saddr-atomics.ll b/llvm/test/CodeGen/AMDGPU/flat-saddr-atomics.ll
index 2ff66c9b9017a..7d36c9f07ea73 100644
--- a/llvm/test/CodeGen/AMDGPU/flat-saddr-atomics.ll
+++ b/llvm/test/CodeGen/AMDGPU/flat-saddr-atomics.ll
@@ -252,13 +252,15 @@ define amdgpu_ps <2 x float> @flat_xchg_saddr_i64_rtn(ptr inreg %sbase, i32 %vof
; GFX1250-SDAG-LABEL: flat_xchg_saddr_i64_rtn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[2:3], v[0:1]
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB10_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -277,9 +279,11 @@ define amdgpu_ps <2 x float> @flat_xchg_saddr_i64_rtn(ptr inreg %sbase, i32 %vof
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB10_2
; GFX1250-SDAG-NEXT: .LBB10_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: scratch_store_b64 v4, v[2:3], off scope:SCOPE_SE
; GFX1250-SDAG-NEXT: s_wait_xcnt 0x0
@@ -292,15 +296,16 @@ define amdgpu_ps <2 x float> @flat_xchg_saddr_i64_rtn(ptr inreg %sbase, i32 %vof
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, 0, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB10_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -314,13 +319,16 @@ define amdgpu_ps <2 x float> @flat_xchg_saddr_i64_rtn(ptr inreg %sbase, i32 %vof
; GFX1250-GISEL-NEXT: flat_atomic_swap_b64 v[0:1], v3, v[4:5], s[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB10_2
; GFX1250-GISEL-NEXT: .LBB10_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v6, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: scratch_store_b64 v2, v[4:5], off scope:SCOPE_SE
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
@@ -344,11 +352,13 @@ define amdgpu_ps <2 x float> @flat_xchg_saddr_i64_rtn_neg128(ptr inreg %sbase, i
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB11_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -367,8 +377,11 @@ define amdgpu_ps <2 x float> @flat_xchg_saddr_i64_rtn_neg128(ptr inreg %sbase, i
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB11_2
; GFX1250-SDAG-NEXT: .LBB11_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: scratch_store_b64 v4, v[2:3], off scope:SCOPE_SE
; GFX1250-SDAG-NEXT: s_wait_xcnt 0x0
@@ -381,18 +394,19 @@ define amdgpu_ps <2 x float> @flat_xchg_saddr_i64_rtn_neg128(ptr inreg %sbase, i
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, 0, v1, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, 0xffffff80, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, -1, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB11_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -406,13 +420,16 @@ define amdgpu_ps <2 x float> @flat_xchg_saddr_i64_rtn_neg128(ptr inreg %sbase, i
; GFX1250-GISEL-NEXT: flat_atomic_swap_b64 v[0:1], v3, v[4:5], s[2:3] offset:-128 th:TH_ATOMIC_RETURN scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB11_2
; GFX1250-GISEL-NEXT: .LBB11_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v6, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: scratch_store_b64 v2, v[4:5], off scope:SCOPE_SE
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
@@ -433,11 +450,13 @@ define amdgpu_ps void @flat_xchg_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB12_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -455,9 +474,11 @@ define amdgpu_ps void @flat_xchg_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB12_2
; GFX1250-SDAG-NEXT: .LBB12_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v0, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v0, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_store_b64 v0, v[2:3], off scope:SCOPE_SE
; GFX1250-SDAG-NEXT: s_endpgm
;
@@ -465,13 +486,14 @@ define amdgpu_ps void @flat_xchg_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB12_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -483,14 +505,17 @@ define amdgpu_ps void @flat_xchg_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL-NEXT: flat_atomic_swap_b64 v0, v[4:5], s[2:3] scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB12_2
; GFX1250-GISEL-NEXT: .LBB12_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v0, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v0, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_store_b64 v0, v[4:5], off scope:SCOPE_SE
; GFX1250-GISEL-NEXT: s_endpgm
%zext.offset = zext i32 %voffset to i64
@@ -508,10 +533,12 @@ define amdgpu_ps void @flat_xchg_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %v
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB13_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -529,8 +556,11 @@ define amdgpu_ps void @flat_xchg_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %v
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB13_2
; GFX1250-SDAG-NEXT: .LBB13_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v0, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v0, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_store_b64 v0, v[2:3], off scope:SCOPE_SE
; GFX1250-SDAG-NEXT: s_endpgm
;
@@ -538,16 +568,17 @@ define amdgpu_ps void @flat_xchg_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %v
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v1, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v1
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB13_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -559,14 +590,17 @@ define amdgpu_ps void @flat_xchg_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %v
; GFX1250-GISEL-NEXT: flat_atomic_swap_b64 v0, v[4:5], s[2:3] offset:-128 scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB13_2
; GFX1250-GISEL-NEXT: .LBB13_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v0, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v0, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_store_b64 v0, v[4:5], off scope:SCOPE_SE
; GFX1250-GISEL-NEXT: s_endpgm
%zext.offset = zext i32 %voffset to i64
@@ -642,13 +676,15 @@ define amdgpu_ps <2 x float> @flat_add_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-LABEL: flat_add_saddr_i64_rtn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[2:3], v[0:1]
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB18_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -667,9 +703,11 @@ define amdgpu_ps <2 x float> @flat_add_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB18_2
; GFX1250-SDAG-NEXT: .LBB18_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[2:3], v[0:1], v[2:3]
@@ -683,15 +721,16 @@ define amdgpu_ps <2 x float> @flat_add_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, 0, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB18_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -705,13 +744,16 @@ define amdgpu_ps <2 x float> @flat_add_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL-NEXT: flat_atomic_add_u64 v[0:1], v3, v[4:5], s[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB18_2
; GFX1250-GISEL-NEXT: .LBB18_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_add_nc_u64_e32 v[2:3], v[0:1], v[4:5]
@@ -736,11 +778,13 @@ define amdgpu_ps <2 x float> @flat_add_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB19_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -759,8 +803,11 @@ define amdgpu_ps <2 x float> @flat_add_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB19_2
; GFX1250-SDAG-NEXT: .LBB19_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[2:3], v[0:1], v[2:3]
@@ -774,18 +821,19 @@ define amdgpu_ps <2 x float> @flat_add_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, 0, v1, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, 0xffffff80, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, -1, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB19_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -799,13 +847,16 @@ define amdgpu_ps <2 x float> @flat_add_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL-NEXT: flat_atomic_add_u64 v[0:1], v3, v[4:5], s[2:3] offset:-128 th:TH_ATOMIC_RETURN scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB19_2
; GFX1250-GISEL-NEXT: .LBB19_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_add_nc_u64_e32 v[2:3], v[0:1], v[4:5]
@@ -827,11 +878,13 @@ define amdgpu_ps void @flat_add_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB20_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -849,9 +902,11 @@ define amdgpu_ps void @flat_add_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB20_2
; GFX1250-SDAG-NEXT: .LBB20_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], v[0:1], v[2:3]
@@ -862,13 +917,14 @@ define amdgpu_ps void @flat_add_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB20_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -880,14 +936,17 @@ define amdgpu_ps void @flat_add_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL-NEXT: flat_atomic_add_u64 v0, v[4:5], s[2:3] scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB20_2
; GFX1250-GISEL-NEXT: .LBB20_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_add_nc_u64_e32 v[0:1], v[0:1], v[4:5]
@@ -908,10 +967,12 @@ define amdgpu_ps void @flat_add_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB21_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -929,8 +990,11 @@ define amdgpu_ps void @flat_add_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB21_2
; GFX1250-SDAG-NEXT: .LBB21_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], v[0:1], v[2:3]
@@ -941,16 +1005,17 @@ define amdgpu_ps void @flat_add_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v1, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v1
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB21_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -962,14 +1027,17 @@ define amdgpu_ps void @flat_add_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL-NEXT: flat_atomic_add_u64 v0, v[4:5], s[2:3] offset:-128 scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB21_2
; GFX1250-GISEL-NEXT: .LBB21_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_add_nc_u64_e32 v[0:1], v[0:1], v[4:5]
@@ -1048,13 +1116,15 @@ define amdgpu_ps <2 x float> @flat_sub_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-LABEL: flat_sub_saddr_i64_rtn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[2:3], v[0:1]
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB26_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -1073,9 +1143,11 @@ define amdgpu_ps <2 x float> @flat_sub_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB26_2
; GFX1250-SDAG-NEXT: .LBB26_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_sub_nc_u64_e32 v[2:3], v[0:1], v[2:3]
@@ -1089,15 +1161,16 @@ define amdgpu_ps <2 x float> @flat_sub_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, 0, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB26_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -1111,13 +1184,16 @@ define amdgpu_ps <2 x float> @flat_sub_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL-NEXT: flat_atomic_sub_u64 v[0:1], v3, v[4:5], s[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB26_2
; GFX1250-GISEL-NEXT: .LBB26_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_sub_nc_u64_e32 v[2:3], v[0:1], v[4:5]
@@ -1142,11 +1218,13 @@ define amdgpu_ps <2 x float> @flat_sub_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB27_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -1165,8 +1243,11 @@ define amdgpu_ps <2 x float> @flat_sub_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB27_2
; GFX1250-SDAG-NEXT: .LBB27_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_sub_nc_u64_e32 v[2:3], v[0:1], v[2:3]
@@ -1180,18 +1261,19 @@ define amdgpu_ps <2 x float> @flat_sub_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, 0, v1, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, 0xffffff80, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, -1, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB27_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -1205,13 +1287,16 @@ define amdgpu_ps <2 x float> @flat_sub_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL-NEXT: flat_atomic_sub_u64 v[0:1], v3, v[4:5], s[2:3] offset:-128 th:TH_ATOMIC_RETURN scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB27_2
; GFX1250-GISEL-NEXT: .LBB27_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_sub_nc_u64_e32 v[2:3], v[0:1], v[4:5]
@@ -1233,11 +1318,13 @@ define amdgpu_ps void @flat_sub_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB28_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -1255,9 +1342,11 @@ define amdgpu_ps void @flat_sub_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB28_2
; GFX1250-SDAG-NEXT: .LBB28_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_sub_nc_u64_e32 v[0:1], v[0:1], v[2:3]
@@ -1268,13 +1357,14 @@ define amdgpu_ps void @flat_sub_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB28_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -1286,14 +1376,17 @@ define amdgpu_ps void @flat_sub_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL-NEXT: flat_atomic_sub_u64 v0, v[4:5], s[2:3] scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB28_2
; GFX1250-GISEL-NEXT: .LBB28_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_sub_nc_u64_e32 v[0:1], v[0:1], v[4:5]
@@ -1314,10 +1407,12 @@ define amdgpu_ps void @flat_sub_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB29_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -1335,8 +1430,11 @@ define amdgpu_ps void @flat_sub_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB29_2
; GFX1250-SDAG-NEXT: .LBB29_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_sub_nc_u64_e32 v[0:1], v[0:1], v[2:3]
@@ -1347,16 +1445,17 @@ define amdgpu_ps void @flat_sub_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v1, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v1
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB29_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -1368,14 +1467,17 @@ define amdgpu_ps void @flat_sub_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL-NEXT: flat_atomic_sub_u64 v0, v[4:5], s[2:3] offset:-128 scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB29_2
; GFX1250-GISEL-NEXT: .LBB29_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_sub_nc_u64_e32 v[0:1], v[0:1], v[4:5]
@@ -1454,13 +1556,15 @@ define amdgpu_ps <2 x float> @flat_and_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-LABEL: flat_and_saddr_i64_rtn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[2:3], v[0:1]
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB34_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -1479,9 +1583,11 @@ define amdgpu_ps <2 x float> @flat_and_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB34_2
; GFX1250-SDAG-NEXT: .LBB34_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_and_b32_e32 v3, v1, v3
@@ -1496,15 +1602,16 @@ define amdgpu_ps <2 x float> @flat_and_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, 0, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB34_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -1518,13 +1625,16 @@ define amdgpu_ps <2 x float> @flat_and_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL-NEXT: flat_atomic_and_b64 v[0:1], v3, v[4:5], s[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB34_2
; GFX1250-GISEL-NEXT: .LBB34_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_and_b32_e32 v2, v0, v4
@@ -1550,11 +1660,13 @@ define amdgpu_ps <2 x float> @flat_and_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB35_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -1573,8 +1685,11 @@ define amdgpu_ps <2 x float> @flat_and_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB35_2
; GFX1250-SDAG-NEXT: .LBB35_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_and_b32_e32 v3, v1, v3
@@ -1589,18 +1704,19 @@ define amdgpu_ps <2 x float> @flat_and_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, 0, v1, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, 0xffffff80, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, -1, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB35_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -1614,13 +1730,16 @@ define amdgpu_ps <2 x float> @flat_and_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL-NEXT: flat_atomic_and_b64 v[0:1], v3, v[4:5], s[2:3] offset:-128 th:TH_ATOMIC_RETURN scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB35_2
; GFX1250-GISEL-NEXT: .LBB35_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_and_b32_e32 v2, v0, v4
@@ -1643,11 +1762,13 @@ define amdgpu_ps void @flat_and_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB36_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -1665,9 +1786,11 @@ define amdgpu_ps void @flat_and_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB36_2
; GFX1250-SDAG-NEXT: .LBB36_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_and_b32_e32 v1, v1, v3
@@ -1679,13 +1802,14 @@ define amdgpu_ps void @flat_and_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB36_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -1697,14 +1821,17 @@ define amdgpu_ps void @flat_and_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL-NEXT: flat_atomic_and_b64 v0, v[4:5], s[2:3] scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB36_2
; GFX1250-GISEL-NEXT: .LBB36_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_and_b32_e32 v0, v0, v4
@@ -1726,10 +1853,12 @@ define amdgpu_ps void @flat_and_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB37_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -1747,8 +1876,11 @@ define amdgpu_ps void @flat_and_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB37_2
; GFX1250-SDAG-NEXT: .LBB37_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_and_b32_e32 v1, v1, v3
@@ -1760,16 +1892,17 @@ define amdgpu_ps void @flat_and_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v1, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v1
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB37_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -1781,14 +1914,17 @@ define amdgpu_ps void @flat_and_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL-NEXT: flat_atomic_and_b64 v0, v[4:5], s[2:3] offset:-128 scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB37_2
; GFX1250-GISEL-NEXT: .LBB37_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_and_b32_e32 v0, v0, v4
@@ -1868,13 +2004,15 @@ define amdgpu_ps <2 x float> @flat_or_saddr_i64_rtn(ptr inreg %sbase, i32 %voffs
; GFX1250-SDAG-LABEL: flat_or_saddr_i64_rtn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[2:3], v[0:1]
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB42_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -1893,9 +2031,11 @@ define amdgpu_ps <2 x float> @flat_or_saddr_i64_rtn(ptr inreg %sbase, i32 %voffs
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB42_2
; GFX1250-SDAG-NEXT: .LBB42_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_or_b32_e32 v3, v1, v3
@@ -1910,15 +2050,16 @@ define amdgpu_ps <2 x float> @flat_or_saddr_i64_rtn(ptr inreg %sbase, i32 %voffs
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, 0, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB42_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -1932,13 +2073,16 @@ define amdgpu_ps <2 x float> @flat_or_saddr_i64_rtn(ptr inreg %sbase, i32 %voffs
; GFX1250-GISEL-NEXT: flat_atomic_or_b64 v[0:1], v3, v[4:5], s[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB42_2
; GFX1250-GISEL-NEXT: .LBB42_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_or_b32_e32 v2, v0, v4
@@ -1964,11 +2108,13 @@ define amdgpu_ps <2 x float> @flat_or_saddr_i64_rtn_neg128(ptr inreg %sbase, i32
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB43_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -1987,8 +2133,11 @@ define amdgpu_ps <2 x float> @flat_or_saddr_i64_rtn_neg128(ptr inreg %sbase, i32
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB43_2
; GFX1250-SDAG-NEXT: .LBB43_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_or_b32_e32 v3, v1, v3
@@ -2003,18 +2152,19 @@ define amdgpu_ps <2 x float> @flat_or_saddr_i64_rtn_neg128(ptr inreg %sbase, i32
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, 0, v1, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, 0xffffff80, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, -1, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB43_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -2028,13 +2178,16 @@ define amdgpu_ps <2 x float> @flat_or_saddr_i64_rtn_neg128(ptr inreg %sbase, i32
; GFX1250-GISEL-NEXT: flat_atomic_or_b64 v[0:1], v3, v[4:5], s[2:3] offset:-128 th:TH_ATOMIC_RETURN scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB43_2
; GFX1250-GISEL-NEXT: .LBB43_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_or_b32_e32 v2, v0, v4
@@ -2057,11 +2210,13 @@ define amdgpu_ps void @flat_or_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset, i
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB44_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -2079,9 +2234,11 @@ define amdgpu_ps void @flat_or_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset, i
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB44_2
; GFX1250-SDAG-NEXT: .LBB44_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_or_b32_e32 v1, v1, v3
@@ -2093,13 +2250,14 @@ define amdgpu_ps void @flat_or_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset, i
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB44_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -2111,14 +2269,17 @@ define amdgpu_ps void @flat_or_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset, i
; GFX1250-GISEL-NEXT: flat_atomic_or_b64 v0, v[4:5], s[2:3] scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB44_2
; GFX1250-GISEL-NEXT: .LBB44_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_or_b32_e32 v0, v0, v4
@@ -2140,10 +2301,12 @@ define amdgpu_ps void @flat_or_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vof
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB45_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -2161,8 +2324,11 @@ define amdgpu_ps void @flat_or_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vof
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB45_2
; GFX1250-SDAG-NEXT: .LBB45_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_or_b32_e32 v1, v1, v3
@@ -2174,16 +2340,17 @@ define amdgpu_ps void @flat_or_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vof
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v1, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v1
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB45_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -2195,14 +2362,17 @@ define amdgpu_ps void @flat_or_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vof
; GFX1250-GISEL-NEXT: flat_atomic_or_b64 v0, v[4:5], s[2:3] offset:-128 scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB45_2
; GFX1250-GISEL-NEXT: .LBB45_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_or_b32_e32 v0, v0, v4
@@ -2282,13 +2452,15 @@ define amdgpu_ps <2 x float> @flat_xor_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-LABEL: flat_xor_saddr_i64_rtn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[2:3], v[0:1]
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB50_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -2307,9 +2479,11 @@ define amdgpu_ps <2 x float> @flat_xor_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB50_2
; GFX1250-SDAG-NEXT: .LBB50_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_xor_b32_e32 v3, v1, v3
@@ -2324,15 +2498,16 @@ define amdgpu_ps <2 x float> @flat_xor_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, 0, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB50_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -2346,13 +2521,16 @@ define amdgpu_ps <2 x float> @flat_xor_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL-NEXT: flat_atomic_xor_b64 v[0:1], v3, v[4:5], s[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB50_2
; GFX1250-GISEL-NEXT: .LBB50_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_xor_b32_e32 v2, v0, v4
@@ -2378,11 +2556,13 @@ define amdgpu_ps <2 x float> @flat_xor_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB51_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -2401,8 +2581,11 @@ define amdgpu_ps <2 x float> @flat_xor_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB51_2
; GFX1250-SDAG-NEXT: .LBB51_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_xor_b32_e32 v3, v1, v3
@@ -2417,18 +2600,19 @@ define amdgpu_ps <2 x float> @flat_xor_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, 0, v1, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, 0xffffff80, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, -1, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB51_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -2442,13 +2626,16 @@ define amdgpu_ps <2 x float> @flat_xor_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL-NEXT: flat_atomic_xor_b64 v[0:1], v3, v[4:5], s[2:3] offset:-128 th:TH_ATOMIC_RETURN scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB51_2
; GFX1250-GISEL-NEXT: .LBB51_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_xor_b32_e32 v2, v0, v4
@@ -2471,11 +2658,13 @@ define amdgpu_ps void @flat_xor_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB52_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -2493,9 +2682,11 @@ define amdgpu_ps void @flat_xor_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB52_2
; GFX1250-SDAG-NEXT: .LBB52_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_xor_b32_e32 v1, v1, v3
@@ -2507,13 +2698,14 @@ define amdgpu_ps void @flat_xor_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB52_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -2525,14 +2717,17 @@ define amdgpu_ps void @flat_xor_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL-NEXT: flat_atomic_xor_b64 v0, v[4:5], s[2:3] scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB52_2
; GFX1250-GISEL-NEXT: .LBB52_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_xor_b32_e32 v0, v0, v4
@@ -2554,10 +2749,12 @@ define amdgpu_ps void @flat_xor_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB53_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -2575,8 +2772,11 @@ define amdgpu_ps void @flat_xor_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB53_2
; GFX1250-SDAG-NEXT: .LBB53_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_xor_b32_e32 v1, v1, v3
@@ -2588,16 +2788,17 @@ define amdgpu_ps void @flat_xor_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v1, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v1
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB53_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -2609,14 +2810,17 @@ define amdgpu_ps void @flat_xor_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL-NEXT: flat_atomic_xor_b64 v0, v[4:5], s[2:3] offset:-128 scope:SCOPE_DEV
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB53_2
; GFX1250-GISEL-NEXT: .LBB53_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_xor_b32_e32 v0, v0, v4
@@ -2690,13 +2894,15 @@ define amdgpu_ps <2 x float> @flat_max_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-LABEL: flat_max_saddr_i64_rtn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[2:3], v[0:1]
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB58_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -2715,10 +2921,12 @@ define amdgpu_ps <2 x float> @flat_max_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB58_2
; GFX1250-SDAG-NEXT: .LBB58_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_max_i64 v[2:3], v[0:1], v[2:3]
@@ -2732,15 +2940,16 @@ define amdgpu_ps <2 x float> @flat_max_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, 0, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB58_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -2753,15 +2962,18 @@ define amdgpu_ps <2 x float> @flat_max_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL-NEXT: .LBB58_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_max_i64 v[0:1], v3, v[4:5], s[2:3] th:TH_ATOMIC_RETURN
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB58_2
; GFX1250-GISEL-NEXT: .LBB58_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_max_i64 v[2:3], v[0:1], v[4:5]
@@ -2786,11 +2998,13 @@ define amdgpu_ps <2 x float> @flat_max_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB59_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -2809,9 +3023,12 @@ define amdgpu_ps <2 x float> @flat_max_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB59_2
; GFX1250-SDAG-NEXT: .LBB59_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_max_i64 v[2:3], v[0:1], v[2:3]
@@ -2825,18 +3042,19 @@ define amdgpu_ps <2 x float> @flat_max_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, 0, v1, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, 0xffffff80, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, -1, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB59_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -2849,15 +3067,18 @@ define amdgpu_ps <2 x float> @flat_max_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL-NEXT: .LBB59_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_max_i64 v[0:1], v3, v[4:5], s[2:3] offset:-128 th:TH_ATOMIC_RETURN
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB59_2
; GFX1250-GISEL-NEXT: .LBB59_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_max_i64 v[2:3], v[0:1], v[4:5]
@@ -2879,11 +3100,13 @@ define amdgpu_ps void @flat_max_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB60_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -2900,9 +3123,11 @@ define amdgpu_ps void @flat_max_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB60_2
; GFX1250-SDAG-NEXT: .LBB60_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_max_i64 v[0:1], v[0:1], v[2:3]
@@ -2913,13 +3138,14 @@ define amdgpu_ps void @flat_max_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB60_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -2930,14 +3156,17 @@ define amdgpu_ps void @flat_max_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL-NEXT: .LBB60_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_max_i64 v0, v[4:5], s[2:3]
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB60_2
; GFX1250-GISEL-NEXT: .LBB60_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_max_i64 v[0:1], v[0:1], v[4:5]
@@ -2958,10 +3187,12 @@ define amdgpu_ps void @flat_max_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB61_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -2978,8 +3209,11 @@ define amdgpu_ps void @flat_max_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB61_2
; GFX1250-SDAG-NEXT: .LBB61_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_max_i64 v[0:1], v[0:1], v[2:3]
@@ -2990,16 +3224,17 @@ define amdgpu_ps void @flat_max_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v1, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v1
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB61_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -3010,14 +3245,17 @@ define amdgpu_ps void @flat_max_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL-NEXT: .LBB61_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_max_i64 v0, v[4:5], s[2:3] offset:-128
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB61_2
; GFX1250-GISEL-NEXT: .LBB61_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_max_i64 v[0:1], v[0:1], v[4:5]
@@ -3090,13 +3328,15 @@ define amdgpu_ps <2 x float> @flat_min_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-LABEL: flat_min_saddr_i64_rtn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[2:3], v[0:1]
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB66_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -3115,10 +3355,12 @@ define amdgpu_ps <2 x float> @flat_min_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB66_2
; GFX1250-SDAG-NEXT: .LBB66_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_min_i64 v[2:3], v[0:1], v[2:3]
@@ -3132,15 +3374,16 @@ define amdgpu_ps <2 x float> @flat_min_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, 0, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB66_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -3153,15 +3396,18 @@ define amdgpu_ps <2 x float> @flat_min_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL-NEXT: .LBB66_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_min_i64 v[0:1], v3, v[4:5], s[2:3] th:TH_ATOMIC_RETURN
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB66_2
; GFX1250-GISEL-NEXT: .LBB66_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_min_i64 v[2:3], v[0:1], v[4:5]
@@ -3186,11 +3432,13 @@ define amdgpu_ps <2 x float> @flat_min_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB67_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -3209,9 +3457,12 @@ define amdgpu_ps <2 x float> @flat_min_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB67_2
; GFX1250-SDAG-NEXT: .LBB67_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_min_i64 v[2:3], v[0:1], v[2:3]
@@ -3225,18 +3476,19 @@ define amdgpu_ps <2 x float> @flat_min_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, 0, v1, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, 0xffffff80, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, -1, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB67_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -3249,15 +3501,18 @@ define amdgpu_ps <2 x float> @flat_min_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL-NEXT: .LBB67_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_min_i64 v[0:1], v3, v[4:5], s[2:3] offset:-128 th:TH_ATOMIC_RETURN
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB67_2
; GFX1250-GISEL-NEXT: .LBB67_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_min_i64 v[2:3], v[0:1], v[4:5]
@@ -3279,11 +3534,13 @@ define amdgpu_ps void @flat_min_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB68_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -3300,9 +3557,11 @@ define amdgpu_ps void @flat_min_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB68_2
; GFX1250-SDAG-NEXT: .LBB68_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_min_i64 v[0:1], v[0:1], v[2:3]
@@ -3313,13 +3572,14 @@ define amdgpu_ps void @flat_min_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB68_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -3330,14 +3590,17 @@ define amdgpu_ps void @flat_min_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL-NEXT: .LBB68_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_min_i64 v0, v[4:5], s[2:3]
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB68_2
; GFX1250-GISEL-NEXT: .LBB68_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_min_i64 v[0:1], v[0:1], v[4:5]
@@ -3358,10 +3621,12 @@ define amdgpu_ps void @flat_min_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB69_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -3378,8 +3643,11 @@ define amdgpu_ps void @flat_min_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB69_2
; GFX1250-SDAG-NEXT: .LBB69_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_min_i64 v[0:1], v[0:1], v[2:3]
@@ -3390,16 +3658,17 @@ define amdgpu_ps void @flat_min_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v1, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v1
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB69_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -3410,14 +3679,17 @@ define amdgpu_ps void @flat_min_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL-NEXT: .LBB69_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_min_i64 v0, v[4:5], s[2:3] offset:-128
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB69_2
; GFX1250-GISEL-NEXT: .LBB69_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_min_i64 v[0:1], v[0:1], v[4:5]
@@ -3490,13 +3762,15 @@ define amdgpu_ps <2 x float> @flat_umax_saddr_i64_rtn(ptr inreg %sbase, i32 %vof
; GFX1250-SDAG-LABEL: flat_umax_saddr_i64_rtn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[2:3], v[0:1]
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB74_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -3515,10 +3789,12 @@ define amdgpu_ps <2 x float> @flat_umax_saddr_i64_rtn(ptr inreg %sbase, i32 %vof
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB74_2
; GFX1250-SDAG-NEXT: .LBB74_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_max_u64 v[2:3], v[0:1], v[2:3]
@@ -3532,15 +3808,16 @@ define amdgpu_ps <2 x float> @flat_umax_saddr_i64_rtn(ptr inreg %sbase, i32 %vof
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, 0, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB74_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -3553,15 +3830,18 @@ define amdgpu_ps <2 x float> @flat_umax_saddr_i64_rtn(ptr inreg %sbase, i32 %vof
; GFX1250-GISEL-NEXT: .LBB74_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_max_u64 v[0:1], v3, v[4:5], s[2:3] th:TH_ATOMIC_RETURN
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB74_2
; GFX1250-GISEL-NEXT: .LBB74_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_max_u64 v[2:3], v[0:1], v[4:5]
@@ -3586,11 +3866,13 @@ define amdgpu_ps <2 x float> @flat_umax_saddr_i64_rtn_neg128(ptr inreg %sbase, i
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB75_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -3609,9 +3891,12 @@ define amdgpu_ps <2 x float> @flat_umax_saddr_i64_rtn_neg128(ptr inreg %sbase, i
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB75_2
; GFX1250-SDAG-NEXT: .LBB75_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_max_u64 v[2:3], v[0:1], v[2:3]
@@ -3625,18 +3910,19 @@ define amdgpu_ps <2 x float> @flat_umax_saddr_i64_rtn_neg128(ptr inreg %sbase, i
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, 0, v1, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, 0xffffff80, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, -1, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB75_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -3649,15 +3935,18 @@ define amdgpu_ps <2 x float> @flat_umax_saddr_i64_rtn_neg128(ptr inreg %sbase, i
; GFX1250-GISEL-NEXT: .LBB75_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_max_u64 v[0:1], v3, v[4:5], s[2:3] offset:-128 th:TH_ATOMIC_RETURN
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB75_2
; GFX1250-GISEL-NEXT: .LBB75_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_max_u64 v[2:3], v[0:1], v[4:5]
@@ -3679,11 +3968,13 @@ define amdgpu_ps void @flat_umax_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB76_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -3700,9 +3991,11 @@ define amdgpu_ps void @flat_umax_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB76_2
; GFX1250-SDAG-NEXT: .LBB76_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_max_u64 v[0:1], v[0:1], v[2:3]
@@ -3713,13 +4006,14 @@ define amdgpu_ps void @flat_umax_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB76_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -3730,14 +4024,17 @@ define amdgpu_ps void @flat_umax_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL-NEXT: .LBB76_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_max_u64 v0, v[4:5], s[2:3]
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB76_2
; GFX1250-GISEL-NEXT: .LBB76_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_max_u64 v[0:1], v[0:1], v[4:5]
@@ -3758,10 +4055,12 @@ define amdgpu_ps void @flat_umax_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %v
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB77_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -3778,8 +4077,11 @@ define amdgpu_ps void @flat_umax_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %v
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB77_2
; GFX1250-SDAG-NEXT: .LBB77_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_max_u64 v[0:1], v[0:1], v[2:3]
@@ -3790,16 +4092,17 @@ define amdgpu_ps void @flat_umax_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %v
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v1, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v1
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB77_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -3810,14 +4113,17 @@ define amdgpu_ps void @flat_umax_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %v
; GFX1250-GISEL-NEXT: .LBB77_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_max_u64 v0, v[4:5], s[2:3] offset:-128
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB77_2
; GFX1250-GISEL-NEXT: .LBB77_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_max_u64 v[0:1], v[0:1], v[4:5]
@@ -3890,13 +4196,15 @@ define amdgpu_ps <2 x float> @flat_umin_saddr_i64_rtn(ptr inreg %sbase, i32 %vof
; GFX1250-SDAG-LABEL: flat_umin_saddr_i64_rtn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[2:3], v[0:1]
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB82_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -3915,10 +4223,12 @@ define amdgpu_ps <2 x float> @flat_umin_saddr_i64_rtn(ptr inreg %sbase, i32 %vof
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB82_2
; GFX1250-SDAG-NEXT: .LBB82_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_min_u64 v[2:3], v[0:1], v[2:3]
@@ -3932,15 +4242,16 @@ define amdgpu_ps <2 x float> @flat_umin_saddr_i64_rtn(ptr inreg %sbase, i32 %vof
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, 0, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB82_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -3953,15 +4264,18 @@ define amdgpu_ps <2 x float> @flat_umin_saddr_i64_rtn(ptr inreg %sbase, i32 %vof
; GFX1250-GISEL-NEXT: .LBB82_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_min_u64 v[0:1], v3, v[4:5], s[2:3] th:TH_ATOMIC_RETURN
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB82_2
; GFX1250-GISEL-NEXT: .LBB82_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_min_u64 v[2:3], v[0:1], v[4:5]
@@ -3986,11 +4300,13 @@ define amdgpu_ps <2 x float> @flat_umin_saddr_i64_rtn_neg128(ptr inreg %sbase, i
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB83_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -4009,9 +4325,12 @@ define amdgpu_ps <2 x float> @flat_umin_saddr_i64_rtn_neg128(ptr inreg %sbase, i
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB83_2
; GFX1250-SDAG-NEXT: .LBB83_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_min_u64 v[2:3], v[0:1], v[2:3]
@@ -4025,18 +4344,19 @@ define amdgpu_ps <2 x float> @flat_umin_saddr_i64_rtn_neg128(ptr inreg %sbase, i
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, 0, v1, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, 0xffffff80, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, -1, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB83_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -4049,15 +4369,18 @@ define amdgpu_ps <2 x float> @flat_umin_saddr_i64_rtn_neg128(ptr inreg %sbase, i
; GFX1250-GISEL-NEXT: .LBB83_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_min_u64 v[0:1], v3, v[4:5], s[2:3] offset:-128 th:TH_ATOMIC_RETURN
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB83_2
; GFX1250-GISEL-NEXT: .LBB83_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_min_u64 v[2:3], v[0:1], v[4:5]
@@ -4079,11 +4402,13 @@ define amdgpu_ps void @flat_umin_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB84_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -4100,9 +4425,11 @@ define amdgpu_ps void @flat_umin_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB84_2
; GFX1250-SDAG-NEXT: .LBB84_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_min_u64 v[0:1], v[0:1], v[2:3]
@@ -4113,13 +4440,14 @@ define amdgpu_ps void @flat_umin_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB84_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -4130,14 +4458,17 @@ define amdgpu_ps void @flat_umin_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL-NEXT: .LBB84_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_min_u64 v0, v[4:5], s[2:3]
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB84_2
; GFX1250-GISEL-NEXT: .LBB84_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_min_u64 v[0:1], v[0:1], v[4:5]
@@ -4158,10 +4489,12 @@ define amdgpu_ps void @flat_umin_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %v
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB85_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -4178,8 +4511,11 @@ define amdgpu_ps void @flat_umin_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %v
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB85_2
; GFX1250-SDAG-NEXT: .LBB85_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_min_u64 v[0:1], v[0:1], v[2:3]
@@ -4190,16 +4526,17 @@ define amdgpu_ps void @flat_umin_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %v
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v1, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v1
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB85_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -4210,14 +4547,17 @@ define amdgpu_ps void @flat_umin_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %v
; GFX1250-GISEL-NEXT: .LBB85_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_min_u64 v0, v[4:5], s[2:3] offset:-128
; GFX1250-GISEL-NEXT: s_wait_dscnt 0x0
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB85_2
; GFX1250-GISEL-NEXT: .LBB85_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_min_u64 v[0:1], v[0:1], v[4:5]
@@ -4310,14 +4650,16 @@ define amdgpu_ps <2 x float> @flat_cmpxchg_saddr_i64_rtn(ptr inreg %sbase, i32 %
; GFX1250-SDAG-LABEL: flat_cmpxchg_saddr_i64_rtn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v7, v2 :: v_dual_mov_b32 v6, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v1, 0 :: v_dual_mov_b32 v5, v4
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v4, v3
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[2:3], s[2:3], v[0:1]
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v3
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB90_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -4338,9 +4680,11 @@ define amdgpu_ps <2 x float> @flat_cmpxchg_saddr_i64_rtn(ptr inreg %sbase, i32 %
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB90_2
; GFX1250-SDAG-NEXT: .LBB90_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v8, -1, v2, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v2
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v8, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v8, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[0:1], v[6:7]
@@ -4356,15 +4700,16 @@ define amdgpu_ps <2 x float> @flat_cmpxchg_saddr_i64_rtn(ptr inreg %sbase, i32 %
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v0 :: v_dual_mov_b32 v8, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v9, v2 :: v_dual_mov_b32 v6, v3
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v7, v4
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_3) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v0, v5
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v7, v4 :: v_dual_bitop2_b32 v0, s0, v3 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB90_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -4380,13 +4725,16 @@ define amdgpu_ps <2 x float> @flat_cmpxchg_saddr_i64_rtn(ptr inreg %sbase, i32 %
; GFX1250-GISEL-NEXT: flat_atomic_cmpswap_b64 v[0:1], v5, v[6:9], s[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_SYS
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr8_vgpr9
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB90_2
; GFX1250-GISEL-NEXT: .LBB90_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v4, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[0:1], v[8:9]
@@ -4414,11 +4762,13 @@ define amdgpu_ps <2 x float> @flat_cmpxchg_saddr_i64_rtn_neg128(ptr inreg %sbase
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[2:3], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v3
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB91_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -4439,8 +4789,11 @@ define amdgpu_ps <2 x float> @flat_cmpxchg_saddr_i64_rtn_neg128(ptr inreg %sbase
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB91_2
; GFX1250-SDAG-NEXT: .LBB91_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v8, -1, v2, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v2
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v8, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v8, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[0:1], v[6:7]
@@ -4456,18 +4809,19 @@ define amdgpu_ps <2 x float> @flat_cmpxchg_saddr_i64_rtn_neg128(ptr inreg %sbase
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v0 :: v_dual_mov_b32 v8, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v9, v2 :: v_dual_mov_b32 v6, v3
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v7, v4
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_3) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, v0, v5
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, 0, v1, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v7, v4 :: v_dual_bitop2_b32 v0, s0, v3 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB91_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -4483,13 +4837,16 @@ define amdgpu_ps <2 x float> @flat_cmpxchg_saddr_i64_rtn_neg128(ptr inreg %sbase
; GFX1250-GISEL-NEXT: flat_atomic_cmpswap_b64 v[0:1], v5, v[6:9], s[2:3] offset:-128 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_SYS
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr8_vgpr9
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB91_2
; GFX1250-GISEL-NEXT: .LBB91_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v4, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[0:1], v[8:9]
@@ -4512,13 +4869,15 @@ define amdgpu_ps void @flat_cmpxchg_saddr_i64_nortn(ptr inreg %sbase, i32 %voffs
; GFX1250-SDAG-LABEL: flat_cmpxchg_saddr_i64_nortn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v7, v2 :: v_dual_mov_b32 v6, v1
-; GFX1250-SDAG-NEXT: v_dual_mov_b32 v1, 0 :: v_dual_mov_b32 v5, v4
-; GFX1250-SDAG-NEXT: v_mov_b32_e32 v4, v3
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: v_dual_mov_b32 v5, v4 :: v_dual_mov_b32 v4, v3
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v2, s0, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v2
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB92_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -4538,9 +4897,11 @@ define amdgpu_ps void @flat_cmpxchg_saddr_i64_nortn(ptr inreg %sbase, i32 %voffs
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB92_2
; GFX1250-SDAG-NEXT: .LBB92_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v2, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[0:1], v[6:7]
@@ -4553,13 +4914,14 @@ define amdgpu_ps void @flat_cmpxchg_saddr_i64_nortn(ptr inreg %sbase, i32 %voffs
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v8, v1 :: v_dual_mov_b32 v9, v2
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v6, v3 :: v_dual_mov_b32 v7, v4
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB92_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -4573,14 +4935,17 @@ define amdgpu_ps void @flat_cmpxchg_saddr_i64_nortn(ptr inreg %sbase, i32 %voffs
; GFX1250-GISEL-NEXT: flat_atomic_cmpswap_b64 v0, v[6:9], s[2:3] scope:SCOPE_SYS
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_SYS
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr8_vgpr9
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB92_2
; GFX1250-GISEL-NEXT: .LBB92_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[0:1], v[8:9]
@@ -4603,10 +4968,12 @@ define amdgpu_ps void @flat_cmpxchg_saddr_i64_nortn_neg128(ptr inreg %sbase, i32
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v2, s0, v1
; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v2
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB93_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -4626,8 +4993,11 @@ define amdgpu_ps void @flat_cmpxchg_saddr_i64_nortn_neg128(ptr inreg %sbase, i32
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB93_2
; GFX1250-SDAG-NEXT: .LBB93_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v2, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[0:1], v[6:7]
@@ -4640,16 +5010,17 @@ define amdgpu_ps void @flat_cmpxchg_saddr_i64_nortn_neg128(ptr inreg %sbase, i32
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v8, v1 :: v_dual_mov_b32 v9, v2
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v6, v3 :: v_dual_mov_b32 v7, v4
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v1, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v1
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB93_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -4663,14 +5034,17 @@ define amdgpu_ps void @flat_cmpxchg_saddr_i64_nortn_neg128(ptr inreg %sbase, i32
; GFX1250-GISEL-NEXT: flat_atomic_cmpswap_b64 v0, v[6:9], s[2:3] offset:-128 scope:SCOPE_SYS
; GFX1250-GISEL-NEXT: s_wait_storecnt_dscnt 0x0
; GFX1250-GISEL-NEXT: global_inv scope:SCOPE_SYS
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr8_vgpr9
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB93_2
; GFX1250-GISEL-NEXT: .LBB93_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[0:1], v[8:9]
@@ -4742,13 +5116,15 @@ define amdgpu_ps <2 x float> @flat_inc_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-LABEL: flat_inc_saddr_i64_rtn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[2:3], v[0:1]
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB98_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -4766,15 +5142,16 @@ define amdgpu_ps <2 x float> @flat_inc_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB98_2
; GFX1250-SDAG-NEXT: .LBB98_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_4) | instid1(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], 1, v[0:1]
; GFX1250-SDAG-NEXT: v_cmp_lt_u64_e32 vcc_lo, v[0:1], v[2:3]
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
; GFX1250-SDAG-NEXT: v_dual_cndmask_b32 v3, 0, v5 :: v_dual_cndmask_b32 v2, 0, v4
; GFX1250-SDAG-NEXT: scratch_store_b64 v6, v[2:3], off scope:SCOPE_SE
; GFX1250-SDAG-NEXT: s_wait_xcnt 0x0
@@ -4786,15 +5163,16 @@ define amdgpu_ps <2 x float> @flat_inc_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, 0, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB98_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -4806,21 +5184,24 @@ define amdgpu_ps <2 x float> @flat_inc_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL-NEXT: s_branch .LBB98_5
; GFX1250-GISEL-NEXT: .LBB98_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_inc_u64 v[0:1], v3, v[4:5], s[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB98_2
; GFX1250-GISEL-NEXT: .LBB98_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_4) | instid1(VALU_DEP_2)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_add_nc_u64_e32 v[2:3], 1, v[0:1]
; GFX1250-GISEL-NEXT: v_cmp_ge_u64_e32 vcc_lo, v[0:1], v[4:5]
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_3)
; GFX1250-GISEL-NEXT: v_cndmask_b32_e64 v2, v2, 0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_3)
; GFX1250-GISEL-NEXT: v_cndmask_b32_e64 v3, v3, 0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_store_b64 v6, v[2:3], off scope:SCOPE_SE
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
@@ -4843,11 +5224,13 @@ define amdgpu_ps <2 x float> @flat_inc_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB99_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -4865,14 +5248,16 @@ define amdgpu_ps <2 x float> @flat_inc_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB99_2
; GFX1250-SDAG-NEXT: .LBB99_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_4) | instid1(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], 1, v[0:1]
; GFX1250-SDAG-NEXT: v_cmp_lt_u64_e32 vcc_lo, v[0:1], v[2:3]
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
; GFX1250-SDAG-NEXT: v_dual_cndmask_b32 v3, 0, v5 :: v_dual_cndmask_b32 v2, 0, v4
; GFX1250-SDAG-NEXT: scratch_store_b64 v6, v[2:3], off scope:SCOPE_SE
; GFX1250-SDAG-NEXT: s_wait_xcnt 0x0
@@ -4884,18 +5269,19 @@ define amdgpu_ps <2 x float> @flat_inc_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, 0, v1, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, 0xffffff80, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, -1, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB99_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -4907,21 +5293,24 @@ define amdgpu_ps <2 x float> @flat_inc_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL-NEXT: s_branch .LBB99_5
; GFX1250-GISEL-NEXT: .LBB99_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_inc_u64 v[0:1], v3, v[4:5], s[2:3] offset:-128 th:TH_ATOMIC_RETURN scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB99_2
; GFX1250-GISEL-NEXT: .LBB99_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_4) | instid1(VALU_DEP_2)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_add_nc_u64_e32 v[2:3], 1, v[0:1]
; GFX1250-GISEL-NEXT: v_cmp_ge_u64_e32 vcc_lo, v[0:1], v[4:5]
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_3)
; GFX1250-GISEL-NEXT: v_cndmask_b32_e64 v2, v2, 0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_3)
; GFX1250-GISEL-NEXT: v_cndmask_b32_e64 v3, v3, 0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_store_b64 v6, v[2:3], off scope:SCOPE_SE
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
@@ -4941,11 +5330,13 @@ define amdgpu_ps void @flat_inc_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB100_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -4961,14 +5352,15 @@ define amdgpu_ps void @flat_inc_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB100_2
; GFX1250-SDAG-NEXT: .LBB100_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_4) | instid1(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], 1, v[0:1]
; GFX1250-SDAG-NEXT: v_cmp_lt_u64_e32 vcc_lo, v[0:1], v[2:3]
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
; GFX1250-SDAG-NEXT: v_dual_cndmask_b32 v1, 0, v5 :: v_dual_cndmask_b32 v0, 0, v4
; GFX1250-SDAG-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
; GFX1250-SDAG-NEXT: s_endpgm
@@ -4977,13 +5369,14 @@ define amdgpu_ps void @flat_inc_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB100_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -4993,20 +5386,23 @@ define amdgpu_ps void @flat_inc_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL-NEXT: s_endpgm
; GFX1250-GISEL-NEXT: .LBB100_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_inc_u64 v0, v[4:5], s[2:3] scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB100_2
; GFX1250-GISEL-NEXT: .LBB100_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_4) | instid1(VALU_DEP_2)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_add_nc_u64_e32 v[2:3], 1, v[0:1]
; GFX1250-GISEL-NEXT: v_cmp_ge_u64_e32 vcc_lo, v[0:1], v[4:5]
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_3)
; GFX1250-GISEL-NEXT: v_cndmask_b32_e64 v0, v2, 0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_3)
; GFX1250-GISEL-NEXT: v_cndmask_b32_e64 v1, v3, 0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
; GFX1250-GISEL-NEXT: s_endpgm
@@ -5025,10 +5421,12 @@ define amdgpu_ps void @flat_inc_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB101_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -5044,13 +5442,15 @@ define amdgpu_ps void @flat_inc_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB101_2
; GFX1250-SDAG-NEXT: .LBB101_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_4) | instid1(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], 1, v[0:1]
; GFX1250-SDAG-NEXT: v_cmp_lt_u64_e32 vcc_lo, v[0:1], v[2:3]
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
; GFX1250-SDAG-NEXT: v_dual_cndmask_b32 v1, 0, v5 :: v_dual_cndmask_b32 v0, 0, v4
; GFX1250-SDAG-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
; GFX1250-SDAG-NEXT: s_endpgm
@@ -5059,16 +5459,17 @@ define amdgpu_ps void @flat_inc_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v1, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v1
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB101_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -5078,20 +5479,23 @@ define amdgpu_ps void @flat_inc_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL-NEXT: s_endpgm
; GFX1250-GISEL-NEXT: .LBB101_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_inc_u64 v0, v[4:5], s[2:3] offset:-128 scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB101_2
; GFX1250-GISEL-NEXT: .LBB101_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_4) | instid1(VALU_DEP_2)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_add_nc_u64_e32 v[2:3], 1, v[0:1]
; GFX1250-GISEL-NEXT: v_cmp_ge_u64_e32 vcc_lo, v[0:1], v[4:5]
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_3)
; GFX1250-GISEL-NEXT: v_cndmask_b32_e64 v0, v2, 0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_3)
; GFX1250-GISEL-NEXT: v_cndmask_b32_e64 v1, v3, 0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
; GFX1250-GISEL-NEXT: s_endpgm
@@ -5161,13 +5565,15 @@ define amdgpu_ps <2 x float> @flat_dec_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-LABEL: flat_dec_saddr_i64_rtn:
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[2:3], v[0:1]
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB106_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -5185,10 +5591,12 @@ define amdgpu_ps <2 x float> @flat_dec_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s1, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB106_2
; GFX1250-SDAG-NEXT: .LBB106_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s0, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_cmp_eq_u64_e32 vcc_lo, 0, v[0:1]
@@ -5207,15 +5615,16 @@ define amdgpu_ps <2 x float> @flat_dec_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, 0, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB106_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -5227,15 +5636,18 @@ define amdgpu_ps <2 x float> @flat_dec_saddr_i64_rtn(ptr inreg %sbase, i32 %voff
; GFX1250-GISEL-NEXT: s_branch .LBB106_5
; GFX1250-GISEL-NEXT: .LBB106_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_dec_u64 v[0:1], v3, v[4:5], s[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s1, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB106_2
; GFX1250-GISEL-NEXT: .LBB106_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_cmp_eq_u64_e32 vcc_lo, 0, v[0:1]
@@ -5265,11 +5677,13 @@ define amdgpu_ps <2 x float> @flat_dec_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[4:5], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v0, s0, v5
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; GFX1250-SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v5
+; GFX1250-SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB107_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -5287,9 +5701,12 @@ define amdgpu_ps <2 x float> @flat_dec_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s1, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB107_2
; GFX1250-SDAG-NEXT: .LBB107_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v0, s0, v4
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_cmp_eq_u64_e32 vcc_lo, 0, v[0:1]
@@ -5308,18 +5725,19 @@ define amdgpu_ps <2 x float> @flat_dec_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v3, v0 :: v_dual_mov_b32 v4, v1
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[0:1], s[2:3]
-; GFX1250-GISEL-NEXT: v_mov_b32_e32 v5, v2
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v0, vcc_lo, v0, v3
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v1, null, 0, v1, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v6, vcc_lo, 0xffffff80, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v7, null, -1, v1, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_bitop2_b32 v0, s0, v7 bitop3:0x14
+; GFX1250-GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v7
+; GFX1250-GISEL-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB107_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -5331,15 +5749,18 @@ define amdgpu_ps <2 x float> @flat_dec_saddr_i64_rtn_neg128(ptr inreg %sbase, i3
; GFX1250-GISEL-NEXT: s_branch .LBB107_5
; GFX1250-GISEL-NEXT: .LBB107_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_dec_u64 v[0:1], v3, v[4:5], s[2:3] offset:-128 th:TH_ATOMIC_RETURN scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6_vgpr7
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr6
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s1, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB107_2
; GFX1250-GISEL-NEXT: .LBB107_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[6:7]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v6, vcc_lo
; GFX1250-GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v6
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v6, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_cmp_eq_u64_e32 vcc_lo, 0, v[0:1]
@@ -5366,11 +5787,13 @@ define amdgpu_ps void @flat_dec_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG: ; %bb.0:
; GFX1250-SDAG-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
; GFX1250-SDAG-NEXT: v_mov_b32_e32 v1, 0
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB108_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -5386,9 +5809,11 @@ define amdgpu_ps void @flat_dec_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB108_2
; GFX1250-SDAG-NEXT: .LBB108_4: ; %atomicrmw.private
-; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_cmp_eq_u64_e32 vcc_lo, 0, v[0:1]
@@ -5404,13 +5829,14 @@ define amdgpu_ps void @flat_dec_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB108_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -5420,14 +5846,17 @@ define amdgpu_ps void @flat_dec_saddr_i64_nortn(ptr inreg %sbase, i32 %voffset,
; GFX1250-GISEL-NEXT: s_endpgm
; GFX1250-GISEL-NEXT: .LBB108_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_dec_u64 v0, v[4:5], s[2:3] scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB108_2
; GFX1250-GISEL-NEXT: .LBB108_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_cmp_eq_u64_e32 vcc_lo, 0, v[0:1]
@@ -5453,10 +5882,12 @@ define amdgpu_ps void @flat_dec_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[2:3], v[0:1]
; GFX1250-SDAG-NEXT: v_add_nc_u64_e32 v[0:1], s[0:1], v[0:1]
-; GFX1250-SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; GFX1250-SDAG-NEXT: v_xor_b32_e32 v4, s0, v1
; GFX1250-SDAG-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-SDAG-NEXT: v_cmpx_lt_u32_e32 0x3ffffff, v4
; GFX1250-SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-SDAG-NEXT: s_cbranch_execnz .LBB109_3
; GFX1250-SDAG-NEXT: ; %bb.1: ; %Flow
@@ -5472,8 +5903,11 @@ define amdgpu_ps void @flat_dec_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-SDAG-NEXT: s_cbranch_execz .LBB109_2
; GFX1250-SDAG-NEXT: .LBB109_4: ; %atomicrmw.private
+; GFX1250-SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
-; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
+; GFX1250-SDAG-NEXT: v_subrev_nc_u32_e32 v4, s0, v0
+; GFX1250-SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GFX1250-SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; GFX1250-SDAG-NEXT: s_wait_loadcnt 0x0
; GFX1250-SDAG-NEXT: v_cmp_eq_u64_e32 vcc_lo, 0, v[0:1]
@@ -5489,16 +5923,17 @@ define amdgpu_ps void @flat_dec_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL: ; %bb.0:
; GFX1250-GISEL-NEXT: v_dual_mov_b32 v4, v1 :: v_dual_mov_b32 v5, v2
; GFX1250-GISEL-NEXT: v_mov_b64_e32 v[2:3], s[2:3]
-; GFX1250-GISEL-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v1, vcc_lo, v2, v0
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, 0, v3, vcc_lo
; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GFX1250-GISEL-NEXT: v_add_co_u32 v2, vcc_lo, 0xffffff80, v1
; GFX1250-GISEL-NEXT: v_add_co_ci_u32_e64 v3, null, -1, v3, vcc_lo
-; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-GISEL-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_xor_b32_e32 v1, s0, v3
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-GISEL-NEXT: v_cmpx_le_u32_e32 0x4000000, v1
; GFX1250-GISEL-NEXT: s_xor_b32 s0, exec_lo, s0
; GFX1250-GISEL-NEXT: s_cbranch_execnz .LBB109_3
; GFX1250-GISEL-NEXT: ; %bb.1: ; %Flow
@@ -5508,14 +5943,17 @@ define amdgpu_ps void @flat_dec_saddr_i64_nortn_neg128(ptr inreg %sbase, i32 %vo
; GFX1250-GISEL-NEXT: s_endpgm
; GFX1250-GISEL-NEXT: .LBB109_3: ; %atomicrmw.global
; GFX1250-GISEL-NEXT: flat_atomic_dec_u64 v0, v[4:5], s[2:3] offset:-128 scope:SCOPE_DEV
-; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr2
; GFX1250-GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GFX1250-GISEL-NEXT: s_wait_xcnt 0x0
; GFX1250-GISEL-NEXT: s_and_not1_saveexec_b32 s0, s0
; GFX1250-GISEL-NEXT: s_cbranch_execz .LBB109_2
; GFX1250-GISEL-NEXT: .LBB109_4: ; %atomicrmw.private
+; GFX1250-GISEL-NEXT: s_mov_b32 s0, src_flat_scratch_base_lo
; GFX1250-GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v2, vcc_lo
+; GFX1250-GISEL-NEXT: v_subrev_nc_u32_e32 v0, s0, v2
+; GFX1250-GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-GISEL-NEXT: v_cndmask_b32_e32 v2, -1, v0, vcc_lo
; GFX1250-GISEL-NEXT: scratch_load_b64 v[0:1], v2, off
; GFX1250-GISEL-NEXT: s_wait_loadcnt 0x0
; GFX1250-GISEL-NEXT: v_cmp_eq_u64_e32 vcc_lo, 0, v[0:1]
diff --git a/llvm/test/CodeGen/AMDGPU/scale-offset-flat.ll b/llvm/test/CodeGen/AMDGPU/scale-offset-flat.ll
index 735720a794611..725d57d852966 100644
--- a/llvm/test/CodeGen/AMDGPU/scale-offset-flat.ll
+++ b/llvm/test/CodeGen/AMDGPU/scale-offset-flat.ll
@@ -285,7 +285,7 @@ define amdgpu_ps void @flat_store_b32_idxprom(ptr align 4 inreg %p, i32 %idx) {
; GCN-LABEL: flat_store_b32_idxprom:
; GCN: ; %bb.0: ; %entry
; GCN-NEXT: v_mov_b32_e32 v1, 1.0
-; GCN-NEXT: flat_store_b32 v0, v1, s[0:1] scale_offset
+; GCN-NEXT: flat_store_b32 v0, v1, s[0:1] scale_offset scope:SCOPE_SE
; GCN-NEXT: s_endpgm
entry:
%idxprom = sext i32 %idx to i64
@@ -298,7 +298,7 @@ define amdgpu_ps void @flat_store_b16_idxprom(ptr align 2 inreg %p, i32 %idx) {
; GCN-LABEL: flat_store_b16_idxprom:
; GCN: ; %bb.0: ; %entry
; GCN-NEXT: v_mov_b32_e32 v1, 1
-; GCN-NEXT: flat_store_b16 v0, v1, s[0:1] scale_offset
+; GCN-NEXT: flat_store_b16 v0, v1, s[0:1] scale_offset scope:SCOPE_SE
; GCN-NEXT: s_endpgm
entry:
%idxprom = sext i32 %idx to i64
@@ -311,7 +311,7 @@ define amdgpu_ps void @flat_store_b64_idxprom(ptr align 4 inreg %p, i32 %idx) {
; GCN-LABEL: flat_store_b64_idxprom:
; GCN: ; %bb.0: ; %entry
; GCN-NEXT: v_mov_b64_e32 v[2:3], 1.0
-; GCN-NEXT: flat_store_b64 v0, v[2:3], s[0:1] scale_offset
+; GCN-NEXT: flat_store_b64 v0, v[2:3], s[0:1] scale_offset scope:SCOPE_SE
; GCN-NEXT: s_endpgm
entry:
%idxprom = sext i32 %idx to i64
@@ -337,12 +337,15 @@ define amdgpu_ps <2 x float> @flat_atomicrmw_b64_rtn_idxprom(ptr align 8 inreg %
; SDAG-LABEL: flat_atomicrmw_b64_rtn_idxprom:
; SDAG: ; %bb.0: ; %entry
; SDAG-NEXT: v_ashrrev_i32_e32 v1, 31, v0
-; SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_2) | instid1(VALU_DEP_1)
+; SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
; SDAG-NEXT: v_lshl_add_u64 v[2:3], v[0:1], 3, s[0:1]
-; SDAG-NEXT: s_mov_b64 s[0:1], src_private_base
-; SDAG-NEXT: s_mov_b32 s0, exec_lo
+; SDAG-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instid1(SALU_CYCLE_1)
+; SDAG-NEXT: v_xor_b32_e32 v0, s0, v3
+; SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
+; SDAG-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v0
; SDAG-NEXT: ; implicit-def: $vgpr0_vgpr1
-; SDAG-NEXT: v_cmpx_ne_u32_e64 s1, v3
+; SDAG-NEXT: s_and_saveexec_b32 s0, vcc_lo
; SDAG-NEXT: s_xor_b32 s0, exec_lo, s0
; SDAG-NEXT: s_cbranch_execnz .LBB21_3
; SDAG-NEXT: ; %bb.1: ; %Flow
@@ -360,13 +363,16 @@ define amdgpu_ps <2 x float> @flat_atomicrmw_b64_rtn_idxprom(ptr align 8 inreg %
; SDAG-NEXT: s_and_not1_saveexec_b32 s0, s0
; SDAG-NEXT: s_cbranch_execz .LBB21_2
; SDAG-NEXT: .LBB21_4: ; %atomicrmw.private
+; SDAG-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; SDAG-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[2:3]
-; SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v2, vcc_lo
; SDAG-NEXT: s_wait_loadcnt_dscnt 0x0
+; SDAG-NEXT: v_subrev_nc_u32_e32 v0, s1, v2
+; SDAG-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; SDAG-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; SDAG-NEXT: scratch_load_b64 v[0:1], v4, off
; SDAG-NEXT: s_wait_loadcnt 0x0
; SDAG-NEXT: v_add_nc_u64_e32 v[2:3], 1, v[0:1]
-; SDAG-NEXT: scratch_store_b64 v4, v[2:3], off
+; SDAG-NEXT: scratch_store_b64 v4, v[2:3], off scope:SCOPE_SE
; SDAG-NEXT: s_wait_xcnt 0x0
; SDAG-NEXT: s_or_b32 exec_lo, exec_lo, s0
; SDAG-NEXT: s_branch .LBB21_5
@@ -374,19 +380,21 @@ define amdgpu_ps <2 x float> @flat_atomicrmw_b64_rtn_idxprom(ptr align 8 inreg %
;
; GISEL-LABEL: flat_atomicrmw_b64_rtn_idxprom:
; GISEL: ; %bb.0: ; %entry
+; GISEL-NEXT: s_mov_b32 s2, src_flat_scratch_base_hi
; GISEL-NEXT: v_mov_b32_e32 v2, v0
; GISEL-NEXT: v_mov_b64_e32 v[4:5], s[0:1]
-; GISEL-NEXT: s_mov_b64 s[2:3], src_private_base
-; GISEL-NEXT: s_mov_b32 s2, exec_lo
; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1)
; GISEL-NEXT: v_ashrrev_i32_e32 v3, 31, v2
; GISEL-NEXT: v_lshlrev_b64_e32 v[0:1], 3, v[2:3]
; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
; GISEL-NEXT: v_add_co_u32 v4, vcc_lo, v4, v0
; GISEL-NEXT: v_add_co_ci_u32_e64 v5, null, v5, v1, vcc_lo
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GISEL-NEXT: v_xor_b32_e32 v0, s2, v5
+; GISEL-NEXT: v_cmp_le_u32_e32 vcc_lo, 0x4000000, v0
; GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
-; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GISEL-NEXT: v_cmpx_ne_u32_e64 s3, v5
+; GISEL-NEXT: s_and_saveexec_b32 s2, vcc_lo
+; GISEL-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
; GISEL-NEXT: s_xor_b32 s2, exec_lo, s2
; GISEL-NEXT: s_cbranch_execnz .LBB21_3
; GISEL-NEXT: ; %bb.1: ; %Flow
@@ -398,19 +406,22 @@ define amdgpu_ps <2 x float> @flat_atomicrmw_b64_rtn_idxprom(ptr align 8 inreg %
; GISEL-NEXT: s_branch .LBB21_5
; GISEL-NEXT: .LBB21_3: ; %atomicrmw.global
; GISEL-NEXT: v_mov_b64_e32 v[0:1], 1
-; GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GISEL-NEXT: ; implicit-def: $vgpr4
; GISEL-NEXT: flat_atomic_add_u64 v[0:1], v2, v[0:1], s[0:1] scale_offset th:TH_ATOMIC_RETURN scope:SCOPE_SYS
; GISEL-NEXT: s_wait_xcnt 0x0
; GISEL-NEXT: s_and_not1_saveexec_b32 s0, s2
; GISEL-NEXT: s_cbranch_execz .LBB21_2
; GISEL-NEXT: .LBB21_4: ; %atomicrmw.private
+; GISEL-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
; GISEL-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[4:5]
-; GISEL-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc_lo
; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL-NEXT: v_subrev_nc_u32_e32 v0, s1, v4
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GISEL-NEXT: v_cndmask_b32_e32 v4, -1, v0, vcc_lo
; GISEL-NEXT: scratch_load_b64 v[0:1], v4, off
; GISEL-NEXT: s_wait_loadcnt 0x0
; GISEL-NEXT: v_add_nc_u64_e32 v[2:3], 1, v[0:1]
-; GISEL-NEXT: scratch_store_b64 v4, v[2:3], off
+; GISEL-NEXT: scratch_store_b64 v4, v[2:3], off scope:SCOPE_SE
; GISEL-NEXT: s_wait_xcnt 0x0
; GISEL-NEXT: s_or_b32 exec_lo, exec_lo, s0
; GISEL-NEXT: s_branch .LBB21_5
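A note on the updated checks above: instead of loading src_private_base
and comparing the high word of the flat address against it, the new
GFX1250 sequences XOR the high 32 bits with src_flat_scratch_base_hi and
compare the result against 0x3ffffff/0x4000000 to pick the private
(scratch) path, where src_flat_scratch_base_lo is then subtracted to
rebase the pointer. A rough C++ sketch of what those instructions compute
(the exact aperture semantics are hardware-defined and not spelled out in
this diff; the names are illustrative only):

  #include <cstdint>

  // v_xor_b32 + v_cmp_lt/le_u32 0x3ffffff/0x4000000: the address is taken
  // as private scratch when the XOR of its high word with
  // src_flat_scratch_base_hi is below 1 << 26, i.e. the two values agree
  // in their upper 6 bits.
  static bool isPrivateFlatAddr(uint64_t flatAddr, uint32_t scratchBaseHi) {
    uint32_t hi = static_cast<uint32_t>(flatAddr >> 32);
    return (hi ^ scratchBaseHi) < (1u << 26);
  }

  // On the private path, v_subrev_nc_u32 against src_flat_scratch_base_lo
  // rebases the low word into a scratch offset.
  static uint32_t toScratchOffset(uint64_t flatAddr, uint32_t scratchBaseLo) {
    return static_cast<uint32_t>(flatAddr) - scratchBaseLo;
  }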
From 435b8b51dc7e236d3940efe4b94104338dbac0e8 Mon Sep 17 00:00:00 2001
From: Thurston Dang <thurston at google.com>
Date: Tue, 5 Aug 2025 16:31:35 -0700
Subject: [PATCH 014/171] [sanitizer] Don't TestPTrace() if SPARC; don't give
up if internal_fork() fails (#152072)
Fixes corner cases of https://github.com/llvm/llvm-project/pull/151406:
- Don't run TestPTrace() on SPARC, because internal_fork() on SPARC
actually calls __fork(). We can't safely __fork(), because it's possible
seccomp has been configured to disallow fork() but allow clone().
- If internal_fork() fails for whatever reason, we shouldn't give up. It
is strictly worse to give up early than to attempt StopTheWorld.
Also updates some comments/TODOs.
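For readers outside the sanitizer tree, the probe-then-proceed pattern
described above looks roughly like the following minimal sketch (plain
libc calls rather than the sanitizer's internal_* wrappers; an
approximation of the idea, not the actual implementation):

  #include <cstdio>
  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  // Probe whether ptrace is permitted, but never abort when the probe
  // itself cannot run: a failed fork is treated as "unknown" and the
  // caller still attempts to stop the world.
  static void ProbePtrace() {
    pid_t pid = fork();
    if (pid < 0) {
      fprintf(stderr, "WARNING: probe fork failed; continuing anyway\n");
      return;  // keep going: this is the behavior the patch adds
    }
    if (pid == 0) {
      // Child: if seccomp kills ptrace (SCMP_ACT_KILL), we die by signal;
      // otherwise exit cleanly regardless of the ptrace return value.
      ptrace(PTRACE_ATTACH, 0, nullptr, nullptr);
      _exit(0);
    }
    int wstatus;
    waitpid(pid, &wstatus, 0);
    if (WIFSIGNALED(wstatus))
      fprintf(stderr,
              "WARNING: ptrace appears to be blocked (seccomp?)\n");
  }

On SPARC the probe is skipped entirely, since fork there is a real
__fork() that seccomp may block even when clone() is allowed.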
---
.../sanitizer_stoptheworld_linux_libcdep.cpp | 36 +++++++++++++++----
1 file changed, 29 insertions(+), 7 deletions(-)
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_stoptheworld_linux_libcdep.cpp b/compiler-rt/lib/sanitizer_common/sanitizer_stoptheworld_linux_libcdep.cpp
index d5cf0f14dfeb8..5fde65ea48c42 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_stoptheworld_linux_libcdep.cpp
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_stoptheworld_linux_libcdep.cpp
@@ -405,10 +405,21 @@ struct ScopedSetTracerPID {
// This detects whether ptrace is blocked (e.g., by seccomp), by forking and
// then attempting ptrace.
-// This separate check is necessary because StopTheWorld() creates a child
-// process with a shared virtual address space and shared TLS, and therefore
+// This separate check is necessary because StopTheWorld() creates a thread
+// with a shared virtual address space and shared TLS, and therefore
// cannot use waitpid() due to the shared errno.
static void TestPTrace() {
+# if SANITIZER_SPARC
+ // internal_fork() on SPARC actually calls __fork(). We can't safely fork,
+ // because it's possible seccomp has been configured to disallow fork() but
+ // allow clone().
+ Report("WARNING: skipping TestPTrace() because this is SPARC\n");
+ Report(
+ "If seccomp blocks ptrace, LeakSanitizer may hang without further "
+ "notice\n");
+ Report(
+ "If seccomp does not block ptrace, you can safely ignore this warning\n");
+# else
// Heuristic: only check the first time this is called. This is not always
// correct (e.g., user manually triggers leak detection, then updates
// seccomp, then leak detection is triggered again).
@@ -417,35 +428,46 @@ static void TestPTrace() {
return;
checked = true;
- // We hope that fork() is not too expensive, because of copy-on-write.
+ // Hopefully internal_fork() is not too expensive, thanks to copy-on-write.
// Besides, this is only called the first time.
+ // Note that internal_fork() on non-SPARC Linux actually calls
+ // SYSCALL(clone); thus, it is reasonable to use it because if seccomp kills
+ // TestPTrace(), it would have killed StopTheWorld() anyway.
int pid = internal_fork();
if (pid < 0) {
int rverrno;
- if (internal_iserror(pid, &rverrno)) {
+ if (internal_iserror(pid, &rverrno))
Report("WARNING: TestPTrace() failed to fork (errno %d)\n", rverrno);
- }
- internal__exit(-1);
+
+ // We don't abort the sanitizer - it's still worth letting the sanitizer
+ // try.
+ return;
}
if (pid == 0) {
// Child subprocess
+
+ // TODO: consider checking return value of internal_ptrace, to handle
+ // SCMP_ACT_ERRNO. However, be careful not to consume too many
+ // resources performing a proper ptrace.
internal_ptrace(PTRACE_ATTACH, 0, nullptr, nullptr);
internal__exit(0);
} else {
int wstatus;
internal_waitpid(pid, &wstatus, 0);
+ // Handle SCMP_ACT_KILL
if (WIFSIGNALED(wstatus)) {
VReport(0,
- "Warning: ptrace appears to be blocked (is seccomp enabled?). "
+ "WARNING: ptrace appears to be blocked (is seccomp enabled?). "
"LeakSanitizer may hang.\n");
VReport(0, "Child exited with signal %d.\n", WTERMSIG(wstatus));
// We don't abort the sanitizer - it's still worth letting the sanitizer
// try.
}
}
+# endif
}
void StopTheWorld(StopTheWorldCallback callback, void *argument) {
>From 315e1f59c8a5a0f64c78b93e43af94d62312c7e2 Mon Sep 17 00:00:00 2001
From: Rahul Joshi <rjoshi at nvidia.com>
Date: Tue, 5 Aug 2025 16:37:29 -0700
Subject: [PATCH 015/171] [NFC] Run clang-format on TGLexer and TGParser
(#151509)
In https://github.com/llvm/llvm-project/pull/149248, clang-format
applied some formatting to lines untouched by that PR, because the
existing code is not clang-format compliant. Hence, this change applies
clang-format to those files in their entirety.
---
llvm/lib/TableGen/TGLexer.cpp | 62 +++---
llvm/lib/TableGen/TGLexer.h | 8 +-
llvm/lib/TableGen/TGParser.cpp | 389 ++++++++++++++++++++-------------
llvm/lib/TableGen/TGParser.h | 12 +-
4 files changed, 277 insertions(+), 194 deletions(-)
diff --git a/llvm/lib/TableGen/TGLexer.cpp b/llvm/lib/TableGen/TGLexer.cpp
index c369916a48f0d..7b23736169bba 100644
--- a/llvm/lib/TableGen/TGLexer.cpp
+++ b/llvm/lib/TableGen/TGLexer.cpp
@@ -93,9 +93,7 @@ TGLexer::TGLexer(SourceMgr &SM, ArrayRef<std::string> Macros) : SrcMgr(SM) {
}
}
-SMLoc TGLexer::getLoc() const {
- return SMLoc::getFromPointer(TokStart);
-}
+SMLoc TGLexer::getLoc() const { return SMLoc::getFromPointer(TokStart); }
SMRange TGLexer::getLocRange() const {
return {getLoc(), SMLoc::getFromPointer(CurPtr)};
@@ -162,16 +160,13 @@ int TGLexer::getNextChar() {
// Handle the newline character by ignoring it and incrementing the line
// count. However, be careful about 'dos style' files with \n\r in them.
// Only treat a \n\r or \r\n as a single line.
- if ((*CurPtr == '\n' || (*CurPtr == '\r')) &&
- *CurPtr != CurChar)
- ++CurPtr; // Eat the two char newline sequence.
+ if ((*CurPtr == '\n' || (*CurPtr == '\r')) && *CurPtr != CurChar)
+ ++CurPtr; // Eat the two char newline sequence.
return '\n';
}
}
-int TGLexer::peekNextChar(int Index) const {
- return *(CurPtr + Index);
-}
+int TGLexer::peekNextChar(int Index) const { return *(CurPtr + Index); }
tgtok::TokKind TGLexer::LexToken(bool FileOrLineStart) {
while (true) {
@@ -367,7 +362,9 @@ tgtok::TokKind TGLexer::LexString() {
++CurPtr;
switch (*CurPtr) {
- case '\\': case '\'': case '"':
+ case '\\':
+ case '\'':
+ case '"':
// These turn into their literal character.
CurStrVal += *CurPtr++;
break;
@@ -421,7 +418,7 @@ tgtok::TokKind TGLexer::LexIdentifier() {
++CurPtr;
// Check to see if this identifier is a reserved keyword.
- StringRef Str(IdentStart, CurPtr-IdentStart);
+ StringRef Str(IdentStart, CurPtr - IdentStart);
tgtok::TokKind Kind = StringSwitch<tgtok::TokKind>(Str)
.Case("int", tgtok::Int)
@@ -454,14 +451,15 @@ tgtok::TokKind TGLexer::LexIdentifier() {
// A couple of tokens require special processing.
switch (Kind) {
- case tgtok::Include:
- if (LexInclude()) return tgtok::Error;
- return Lex();
- case tgtok::Id:
- CurStrVal.assign(Str.begin(), Str.end());
- break;
- default:
- break;
+ case tgtok::Include:
+ if (LexInclude())
+ return tgtok::Error;
+ return Lex();
+ case tgtok::Id:
+ CurStrVal.assign(Str.begin(), Str.end());
+ break;
+ default:
+ break;
}
return Kind;
@@ -472,7 +470,8 @@ tgtok::TokKind TGLexer::LexIdentifier() {
bool TGLexer::LexInclude() {
// The token after the include must be a string.
tgtok::TokKind Tok = LexToken();
- if (Tok == tgtok::Error) return true;
+ if (Tok == tgtok::Error)
+ return true;
if (Tok != tgtok::StrVal) {
PrintError(getLoc(), "expected filename after include");
return true;
@@ -501,7 +500,7 @@ bool TGLexer::LexInclude() {
/// SkipBCPLComment - Skip over the comment by finding the next CR or LF.
/// Or we may end up at the end of the buffer.
void TGLexer::SkipBCPLComment() {
- ++CurPtr; // skip the second slash.
+ ++CurPtr; // skip the second slash.
auto EOLPos = CurBuf.find_first_of("\r\n", CurPtr - CurBuf.data());
CurPtr = (EOLPos == StringRef::npos) ? CurBuf.end() : CurBuf.data() + EOLPos;
}
@@ -509,7 +508,7 @@ void TGLexer::SkipBCPLComment() {
/// SkipCComment - This skips C-style /**/ comments. The only difference from C
/// is that we allow nesting.
bool TGLexer::SkipCComment() {
- ++CurPtr; // skip the star.
+ ++CurPtr; // skip the star.
unsigned CommentDepth = 1;
while (true) {
@@ -520,15 +519,17 @@ bool TGLexer::SkipCComment() {
return true;
case '*':
// End of the comment?
- if (CurPtr[0] != '/') break;
+ if (CurPtr[0] != '/')
+ break;
- ++CurPtr; // End the */.
+ ++CurPtr; // End the */.
if (--CommentDepth == 0)
return false;
break;
case '/':
// Start of a nested comment?
- if (CurPtr[0] != '*') break;
+ if (CurPtr[0] != '*')
+ break;
++CurPtr;
++CommentDepth;
break;
@@ -608,14 +609,17 @@ tgtok::TokKind TGLexer::LexBracket() {
const char *CodeStart = CurPtr;
while (true) {
int Char = getNextChar();
- if (Char == EOF) break;
+ if (Char == EOF)
+ break;
- if (Char != '}') continue;
+ if (Char != '}')
+ continue;
Char = getNextChar();
- if (Char == EOF) break;
+ if (Char == EOF)
+ break;
if (Char == ']') {
- CurStrVal.assign(CodeStart, CurPtr-2);
+ CurStrVal.assign(CodeStart, CurPtr - 2);
return tgtok::CodeFragment;
}
}
diff --git a/llvm/lib/TableGen/TGLexer.h b/llvm/lib/TableGen/TGLexer.h
index 5725e391d0c4d..753470dfb5374 100644
--- a/llvm/lib/TableGen/TGLexer.h
+++ b/llvm/lib/TableGen/TGLexer.h
@@ -216,13 +216,9 @@ class TGLexer {
public:
TGLexer(SourceMgr &SrcMgr, ArrayRef<std::string> Macros);
- tgtok::TokKind Lex() {
- return CurCode = LexToken(CurPtr == CurBuf.begin());
- }
+ tgtok::TokKind Lex() { return CurCode = LexToken(CurPtr == CurBuf.begin()); }
- const DependenciesSetTy &getDependencies() const {
- return Dependencies;
- }
+ const DependenciesSetTy &getDependencies() const { return Dependencies; }
tgtok::TokKind getCode() const { return CurCode; }
diff --git a/llvm/lib/TableGen/TGParser.cpp b/llvm/lib/TableGen/TGParser.cpp
index 81b61b19f687b..0c6add59cb282 100644
--- a/llvm/lib/TableGen/TGParser.cpp
+++ b/llvm/lib/TableGen/TGParser.cpp
@@ -99,11 +99,11 @@ static void checkConcrete(Record &R) {
if (const Init *V = RV.getValue()) {
bool Ok = isa<BitsInit>(V) ? checkBitsConcrete(R, RV) : V->isConcrete();
if (!Ok) {
- PrintError(R.getLoc(),
- Twine("Initializer of '") + RV.getNameInitAsString() +
- "' in '" + R.getNameInitAsString() +
- "' could not be fully resolved: " +
- RV.getValue()->getAsString());
+ PrintError(R.getLoc(), Twine("Initializer of '") +
+ RV.getNameInitAsString() + "' in '" +
+ R.getNameInitAsString() +
+ "' could not be fully resolved: " +
+ RV.getValue()->getAsString());
}
}
}
@@ -218,9 +218,10 @@ bool TGParser::AddValue(Record *CurRec, SMLoc Loc, const RecordVal &RV) {
// The value already exists in the class, treat this as a set.
if (ERV->setValue(RV.getValue()))
return Error(Loc, "New definition of '" + RV.getName() + "' of type '" +
- RV.getType()->getAsString() + "' is incompatible with " +
- "previous definition of type '" +
- ERV->getType()->getAsString() + "'");
+ RV.getType()->getAsString() +
+ "' is incompatible with " +
+ "previous definition of type '" +
+ ERV->getType()->getAsString() + "'");
} else {
CurRec->addValue(RV);
}
@@ -232,14 +233,16 @@ bool TGParser::AddValue(Record *CurRec, SMLoc Loc, const RecordVal &RV) {
bool TGParser::SetValue(Record *CurRec, SMLoc Loc, const Init *ValName,
ArrayRef<unsigned> BitList, const Init *V,
bool AllowSelfAssignment, bool OverrideDefLoc) {
- if (!V) return false;
+ if (!V)
+ return false;
- if (!CurRec) CurRec = &CurMultiClass->Rec;
+ if (!CurRec)
+ CurRec = &CurMultiClass->Rec;
RecordVal *RV = CurRec->getValue(ValName);
if (!RV)
- return Error(Loc, "Value '" + ValName->getAsUnquotedString() +
- "' unknown!");
+ return Error(Loc,
+ "Value '" + ValName->getAsUnquotedString() + "' unknown!");
// Do not allow assignments like 'X = X'. This will just cause infinite loops
// in the resolution machinery.
@@ -254,7 +257,7 @@ bool TGParser::SetValue(Record *CurRec, SMLoc Loc, const Init *ValName,
const auto *CurVal = dyn_cast<BitsInit>(RV->getValue());
if (!CurVal)
return Error(Loc, "Value '" + ValName->getAsUnquotedString() +
- "' is not a bits type");
+ "' is not a bits type");
// Convert the incoming value to a bits type of the appropriate size...
const Init *BI = V->getCastTo(BitsRecTy::get(Records, BitList.size()));
@@ -268,7 +271,8 @@ bool TGParser::SetValue(Record *CurRec, SMLoc Loc, const Init *ValName,
unsigned Bit = BitList[i];
if (NewBits[Bit])
return Error(Loc, "Cannot set bit #" + Twine(Bit) + " of value '" +
- ValName->getAsUnquotedString() + "' more than once");
+ ValName->getAsUnquotedString() +
+ "' more than once");
NewBits[Bit] = BI->getBit(i);
}
@@ -283,7 +287,8 @@ bool TGParser::SetValue(Record *CurRec, SMLoc Loc, const Init *ValName,
std::string InitType;
if (const auto *BI = dyn_cast<BitsInit>(V))
InitType = (Twine("' of type bit initializer with length ") +
- Twine(BI->getNumBits())).str();
+ Twine(BI->getNumBits()))
+ .str();
else if (const auto *TI = dyn_cast<TypedInit>(V))
InitType =
(Twine("' of type '") + TI->getType()->getAsString() + "'").str();
@@ -416,9 +421,8 @@ bool TGParser::addEntry(RecordsEntry E) {
///
/// The resulting records are stored in \p Dest if non-null. Otherwise, they
/// are added to the global record keeper.
-bool TGParser::resolve(const ForeachLoop &Loop, SubstStack &Substs,
- bool Final, std::vector<RecordsEntry> *Dest,
- SMLoc *Loc) {
+bool TGParser::resolve(const ForeachLoop &Loop, SubstStack &Substs, bool Final,
+ std::vector<RecordsEntry> *Dest, SMLoc *Loc) {
MapResolver R;
for (const auto &S : Substs)
@@ -437,28 +441,28 @@ bool TGParser::resolve(const ForeachLoop &Loop, SubstStack &Substs,
R.setFinal(true);
const Init *LHS = OldLHS->resolveReferences(R);
if (LHS == OldLHS) {
- PrintError(Loop.Loc,
- Twine("unable to resolve if condition '") +
- LHS->getAsString() + "' at end of containing scope");
+ PrintError(Loop.Loc, Twine("unable to resolve if condition '") +
+ LHS->getAsString() +
+ "' at end of containing scope");
return true;
}
const Init *MHS = TI->getMHS();
const Init *RHS = TI->getRHS();
List = TernOpInit::get(TernOpInit::IF, LHS, MHS, RHS, TI->getType())
- ->Fold(nullptr);
+ ->Fold(nullptr);
}
const auto *LI = dyn_cast<ListInit>(List);
if (!LI) {
if (!Final) {
- Dest->emplace_back(std::make_unique<ForeachLoop>(Loop.Loc, Loop.IterVar,
- List));
+ Dest->emplace_back(
+ std::make_unique<ForeachLoop>(Loop.Loc, Loop.IterVar, List));
return resolve(Loop.Entries, Substs, Final, &Dest->back().Loop->Entries,
Loc);
}
PrintError(Loop.Loc, Twine("attempting to loop over '") +
- List->getAsString() + "', expected a list");
+ List->getAsString() + "', expected a list");
return true;
}
@@ -571,7 +575,7 @@ bool TGParser::addDefOne(std::unique_ptr<Record> Rec) {
if (!I->getType()->typeIsA(Defset->EltTy)) {
PrintError(Rec->getLoc(), Twine("adding record of incompatible type '") +
I->getType()->getAsString() +
- "' to defset");
+ "' to defset");
PrintNote(Defset->Loc, "location of defset declaration");
return true;
}
@@ -751,8 +755,8 @@ MultiClass *TGParser::ParseMultiClassID() {
/// SubClassRef ::= ClassID
/// SubClassRef ::= ClassID '<' ArgValueList '>'
///
-SubClassReference TGParser::
-ParseSubClassReference(Record *CurRec, bool isDefm) {
+SubClassReference TGParser::ParseSubClassReference(Record *CurRec,
+ bool isDefm) {
SubClassReference Result;
Result.RefRange.Start = Lex.getLoc();
@@ -762,7 +766,8 @@ ParseSubClassReference(Record *CurRec, bool isDefm) {
} else {
Result.Rec = ParseClassID();
}
- if (!Result.Rec) return Result;
+ if (!Result.Rec)
+ return Result;
// If there is no template arg list, we're done.
if (!consume(tgtok::less)) {
@@ -793,13 +798,14 @@ ParseSubClassReference(Record *CurRec, bool isDefm) {
/// SubMultiClassRef ::= MultiClassID
/// SubMultiClassRef ::= MultiClassID '<' ArgValueList '>'
///
-SubMultiClassReference TGParser::
-ParseSubMultiClassReference(MultiClass *CurMC) {
+SubMultiClassReference
+TGParser::ParseSubMultiClassReference(MultiClass *CurMC) {
SubMultiClassReference Result;
Result.RefRange.Start = Lex.getLoc();
Result.MC = ParseMultiClassID();
- if (!Result.MC) return Result;
+ if (!Result.MC)
+ return Result;
// If there is no template arg list, we're done.
if (!consume(tgtok::less)) {
@@ -1049,7 +1055,8 @@ bool TGParser::ParseOptionalRangeList(SmallVectorImpl<unsigned> &Ranges) {
// Parse the range list.
ParseRangeList(Ranges);
- if (Ranges.empty()) return true;
+ if (Ranges.empty())
+ return true;
if (!consume(tgtok::greater)) {
TokError("expected '>' at end of range list");
@@ -1068,7 +1075,8 @@ bool TGParser::ParseOptionalBitList(SmallVectorImpl<unsigned> &Ranges) {
// Parse the range list.
ParseRangeList(Ranges);
- if (Ranges.empty()) return true;
+ if (Ranges.empty())
+ return true;
if (!consume(tgtok::r_brace)) {
TokError("expected '}' at end of bit list");
@@ -1090,7 +1098,9 @@ bool TGParser::ParseOptionalBitList(SmallVectorImpl<unsigned> &Ranges) {
///
const RecTy *TGParser::ParseType() {
switch (Lex.getCode()) {
- default: TokError("Unknown token when expecting a type"); return nullptr;
+ default:
+ TokError("Unknown token when expecting a type");
+ return nullptr;
case tgtok::String:
case tgtok::Code:
Lex.Lex();
@@ -1129,7 +1139,7 @@ const RecTy *TGParser::ParseType() {
TokError("expected '>' at end of bits<n> type");
return nullptr;
}
- Lex.Lex(); // Eat '>'
+ Lex.Lex(); // Eat '>'
return BitsRecTy::get(Records, Val);
}
case tgtok::List: {
@@ -1137,9 +1147,10 @@ const RecTy *TGParser::ParseType() {
TokError("expected '<' after list type");
return nullptr;
}
- Lex.Lex(); // Eat '<'
+ Lex.Lex(); // Eat '<'
const RecTy *SubType = ParseType();
- if (!SubType) return nullptr;
+ if (!SubType)
+ return nullptr;
if (!consume(tgtok::greater)) {
TokError("expected '>' at end of list<ty> type");
@@ -1206,9 +1217,10 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
const RecTy *Type = nullptr;
switch (Lex.getCode()) {
- default: llvm_unreachable("Unhandled code!");
+ default:
+ llvm_unreachable("Unhandled code!");
case tgtok::XCast:
- Lex.Lex(); // eat the operation
+ Lex.Lex(); // eat the operation
Code = UnOpInit::CAST;
Type = ParseOperatorType();
@@ -1235,7 +1247,7 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
Type = StringRecTy::get(Records);
break;
case tgtok::XNOT:
- Lex.Lex(); // eat the operation
+ Lex.Lex(); // eat the operation
Code = UnOpInit::NOT;
Type = IntRecTy::get(Records);
break;
@@ -1245,16 +1257,16 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
Type = IntRecTy::get(Records); // Bogus type used here.
break;
case tgtok::XLOG2:
- Lex.Lex(); // eat the operation
+ Lex.Lex(); // eat the operation
Code = UnOpInit::LOG2;
Type = IntRecTy::get(Records);
break;
case tgtok::XHead:
- Lex.Lex(); // eat the operation
+ Lex.Lex(); // eat the operation
Code = UnOpInit::HEAD;
break;
case tgtok::XTail:
- Lex.Lex(); // eat the operation
+ Lex.Lex(); // eat the operation
Code = UnOpInit::TAIL;
break;
case tgtok::XSize:
@@ -1263,12 +1275,12 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
Type = IntRecTy::get(Records);
break;
case tgtok::XEmpty:
- Lex.Lex(); // eat the operation
+ Lex.Lex(); // eat the operation
Code = UnOpInit::EMPTY;
Type = IntRecTy::get(Records);
break;
case tgtok::XGetDagOp:
- Lex.Lex(); // eat the operation
+ Lex.Lex(); // eat the operation
if (Lex.getCode() == tgtok::less) {
// Parse an optional type suffix, so that you can say
// !getdagop<BaseClass>(someDag) as a shorthand for
@@ -1306,7 +1318,8 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
}
const Init *LHS = ParseValue(CurRec);
- if (!LHS) return nullptr;
+ if (!LHS)
+ return nullptr;
if (Code == UnOpInit::EMPTY || Code == UnOpInit::SIZE) {
const auto *LHSl = dyn_cast<ListInit>(LHS);
@@ -1314,12 +1327,14 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
const auto *LHSd = dyn_cast<DagInit>(LHS);
const auto *LHSt = dyn_cast<TypedInit>(LHS);
if (!LHSl && !LHSs && !LHSd && !LHSt) {
- TokError("expected string, list, or dag type argument in unary operator");
+ TokError(
+ "expected string, list, or dag type argument in unary operator");
return nullptr;
}
if (LHSt) {
if (!isa<ListRecTy, StringRecTy, DagRecTy>(LHSt->getType())) {
- TokError("expected string, list, or dag type argument in unary operator");
+ TokError(
+ "expected string, list, or dag type argument in unary operator");
return nullptr;
}
}
@@ -1525,39 +1540,84 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
case tgtok::XSetDagOpName: { // Value ::= !binop '(' Value ',' Value ')'
tgtok::TokKind OpTok = Lex.getCode();
SMLoc OpLoc = Lex.getLoc();
- Lex.Lex(); // eat the operation
+ Lex.Lex(); // eat the operation
BinOpInit::BinaryOp Code;
switch (OpTok) {
- default: llvm_unreachable("Unhandled code!");
- case tgtok::XConcat: Code = BinOpInit::CONCAT; break;
+ default:
+ llvm_unreachable("Unhandled code!");
+ case tgtok::XConcat:
+ Code = BinOpInit::CONCAT;
+ break;
case tgtok::XMatch:
Code = BinOpInit::MATCH;
break;
- case tgtok::XADD: Code = BinOpInit::ADD; break;
- case tgtok::XSUB: Code = BinOpInit::SUB; break;
- case tgtok::XMUL: Code = BinOpInit::MUL; break;
- case tgtok::XDIV: Code = BinOpInit::DIV; break;
- case tgtok::XAND: Code = BinOpInit::AND; break;
- case tgtok::XOR: Code = BinOpInit::OR; break;
- case tgtok::XXOR: Code = BinOpInit::XOR; break;
- case tgtok::XSRA: Code = BinOpInit::SRA; break;
- case tgtok::XSRL: Code = BinOpInit::SRL; break;
- case tgtok::XSHL: Code = BinOpInit::SHL; break;
- case tgtok::XEq: Code = BinOpInit::EQ; break;
- case tgtok::XNe: Code = BinOpInit::NE; break;
- case tgtok::XLe: Code = BinOpInit::LE; break;
- case tgtok::XLt: Code = BinOpInit::LT; break;
- case tgtok::XGe: Code = BinOpInit::GE; break;
- case tgtok::XGt: Code = BinOpInit::GT; break;
- case tgtok::XListConcat: Code = BinOpInit::LISTCONCAT; break;
- case tgtok::XListSplat: Code = BinOpInit::LISTSPLAT; break;
+ case tgtok::XADD:
+ Code = BinOpInit::ADD;
+ break;
+ case tgtok::XSUB:
+ Code = BinOpInit::SUB;
+ break;
+ case tgtok::XMUL:
+ Code = BinOpInit::MUL;
+ break;
+ case tgtok::XDIV:
+ Code = BinOpInit::DIV;
+ break;
+ case tgtok::XAND:
+ Code = BinOpInit::AND;
+ break;
+ case tgtok::XOR:
+ Code = BinOpInit::OR;
+ break;
+ case tgtok::XXOR:
+ Code = BinOpInit::XOR;
+ break;
+ case tgtok::XSRA:
+ Code = BinOpInit::SRA;
+ break;
+ case tgtok::XSRL:
+ Code = BinOpInit::SRL;
+ break;
+ case tgtok::XSHL:
+ Code = BinOpInit::SHL;
+ break;
+ case tgtok::XEq:
+ Code = BinOpInit::EQ;
+ break;
+ case tgtok::XNe:
+ Code = BinOpInit::NE;
+ break;
+ case tgtok::XLe:
+ Code = BinOpInit::LE;
+ break;
+ case tgtok::XLt:
+ Code = BinOpInit::LT;
+ break;
+ case tgtok::XGe:
+ Code = BinOpInit::GE;
+ break;
+ case tgtok::XGt:
+ Code = BinOpInit::GT;
+ break;
+ case tgtok::XListConcat:
+ Code = BinOpInit::LISTCONCAT;
+ break;
+ case tgtok::XListSplat:
+ Code = BinOpInit::LISTSPLAT;
+ break;
case tgtok::XListRemove:
Code = BinOpInit::LISTREMOVE;
break;
- case tgtok::XStrConcat: Code = BinOpInit::STRCONCAT; break;
- case tgtok::XInterleave: Code = BinOpInit::INTERLEAVE; break;
- case tgtok::XSetDagOp: Code = BinOpInit::SETDAGOP; break;
+ case tgtok::XStrConcat:
+ Code = BinOpInit::STRCONCAT;
+ break;
+ case tgtok::XInterleave:
+ Code = BinOpInit::INTERLEAVE;
+ break;
+ case tgtok::XSetDagOp:
+ Code = BinOpInit::SETDAGOP;
+ break;
case tgtok::XSetDagOpName:
Code = BinOpInit::SETDAGOPNAME;
break;
@@ -1642,9 +1702,8 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
}
if (Type && ItemType && !Type->typeIsConvertibleTo(ItemType)) {
- Error(OpLoc, Twine("expected value of type '") +
- ItemType->getAsString() + "', got '" +
- Type->getAsString() + "'");
+ Error(OpLoc, Twine("expected value of type '") + ItemType->getAsString() +
+ "', got '" + Type->getAsString() + "'");
return nullptr;
}
@@ -1660,7 +1719,8 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
for (;;) {
SMLoc InitLoc = Lex.getLoc();
InitList.push_back(ParseValue(CurRec, ArgType));
- if (!InitList.back()) return nullptr;
+ if (!InitList.back())
+ return nullptr;
const auto *InitListBack = dyn_cast<TypedInit>(InitList.back());
if (!InitListBack) {
@@ -1678,7 +1738,7 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
case BinOpInit::LISTCONCAT:
if (!isa<ListRecTy>(ArgType)) {
Error(InitLoc, Twine("expected a list, got value of type '") +
- ArgType->getAsString() + "'");
+ ArgType->getAsString() + "'");
return nullptr;
}
break;
@@ -1747,9 +1807,10 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
if (ArgType != StringRecTy::get(Records)->getListTy() &&
!ArgType->typeIsConvertibleTo(
IntRecTy::get(Records)->getListTy())) {
- Error(InitLoc, Twine("expected list of string, int, bits, or bit; "
- "got value of type '") +
- ArgType->getAsString() + "'");
+ Error(InitLoc,
+ Twine("expected list of string, int, bits, or bit; "
+ "got value of type '") +
+ ArgType->getAsString() + "'");
return nullptr;
}
break;
@@ -1761,11 +1822,12 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
return nullptr;
}
break;
- default: ;
+ default:;
}
ArgType = nullptr; // Broken invariant: types not identical.
break;
- default: llvm_unreachable("other ops have fixed argument types");
+ default:
+ llvm_unreachable("other ops have fixed argument types");
}
} else {
@@ -1966,7 +2028,8 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
tgtok::TokKind LexCode = Lex.getCode();
Lex.Lex(); // Eat the operation.
switch (LexCode) {
- default: llvm_unreachable("Unhandled code!");
+ default:
+ llvm_unreachable("Unhandled code!");
case tgtok::XDag:
Code = TernOpInit::DAG;
Type = DagRecTy::get(Records);
@@ -1995,7 +2058,8 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
}
const Init *LHS = ParseValue(CurRec);
- if (!LHS) return nullptr;
+ if (!LHS)
+ return nullptr;
if (!consume(tgtok::comma)) {
TokError("expected ',' in ternary operator");
@@ -2023,7 +2087,8 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
}
switch (LexCode) {
- default: llvm_unreachable("Unhandled code!");
+ default:
+ llvm_unreachable("Unhandled code!");
case tgtok::XDag: {
const auto *MHSt = dyn_cast<TypedInit>(MHS);
if (!MHSt && !isa<UnsetInit>(MHS)) {
@@ -2231,7 +2296,8 @@ const Init *TGParser::ParseOperation(Record *CurRec, const RecTy *ItemType) {
std::unique_ptr<Record> ParseRecTmp;
Record *ParseRec = CurRec;
if (!ParseRec) {
- ParseRecTmp = std::make_unique<Record>(".parse", ArrayRef<SMLoc>{}, Records);
+ ParseRecTmp =
+ std::make_unique<Record>(".parse", ArrayRef<SMLoc>{}, Records);
ParseRec = ParseRecTmp.get();
}
@@ -2347,9 +2413,8 @@ const Init *TGParser::ParseOperationSubstr(Record *CurRec,
}
if (ItemType && !Type->typeIsConvertibleTo(ItemType)) {
- Error(RHSLoc, Twine("expected value of type '") +
- ItemType->getAsString() + "', got '" +
- Type->getAsString() + "'");
+ Error(RHSLoc, Twine("expected value of type '") + ItemType->getAsString() +
+ "', got '" + Type->getAsString() + "'");
}
const auto *LHSt = dyn_cast<TypedInit>(LHS);
@@ -2436,9 +2501,8 @@ const Init *TGParser::ParseOperationFind(Record *CurRec,
}
if (ItemType && !Type->typeIsConvertibleTo(ItemType)) {
- Error(RHSLoc, Twine("expected value of type '") +
- ItemType->getAsString() + "', got '" +
- Type->getAsString() + "'");
+ Error(RHSLoc, Twine("expected value of type '") + ItemType->getAsString() +
+ "', got '" + Type->getAsString() + "'");
}
const auto *LHSt = dyn_cast<TypedInit>(LHS);
@@ -2540,10 +2604,9 @@ const Init *TGParser::ParseOperationForEachFilter(Record *CurRec,
? OutListTy->getElementType()
: IntRecTy::get(Records);
} else {
- Error(OpLoc,
- "expected value of type '" +
- Twine(ItemType->getAsString()) +
- "', but got list type");
+ Error(OpLoc, "expected value of type '" +
+ Twine(ItemType->getAsString()) +
+ "', but got list type");
return nullptr;
}
}
@@ -2554,9 +2617,8 @@ const Init *TGParser::ParseOperationForEachFilter(Record *CurRec,
}
InEltType = InDagTy;
if (ItemType && !isa<DagRecTy>(ItemType)) {
- Error(OpLoc,
- "expected value of type '" + Twine(ItemType->getAsString()) +
- "', but got dag type");
+ Error(OpLoc, "expected value of type '" + Twine(ItemType->getAsString()) +
+ "', but got dag type");
return nullptr;
}
IsDAG = true;
@@ -2610,7 +2672,7 @@ const Init *TGParser::ParseOperationForEachFilter(Record *CurRec,
const Init *TGParser::ParseOperationCond(Record *CurRec,
const RecTy *ItemType) {
- Lex.Lex(); // eat the operation 'cond'
+ Lex.Lex(); // eat the operation 'cond'
if (!consume(tgtok::l_paren)) {
TokError("expected '(' after !cond operator");
@@ -2649,7 +2711,8 @@ const Init *TGParser::ParseOperationCond(Record *CurRec,
}
if (Case.size() < 1) {
- TokError("there should be at least 1 'condition : value' in the !cond operator");
+ TokError(
+ "there should be at least 1 'condition : value' in the !cond operator");
return nullptr;
}
@@ -2672,7 +2735,7 @@ const Init *TGParser::ParseOperationCond(Record *CurRec,
const RecTy *RType = resolveTypes(Type, VTy);
if (!RType) {
TokError(Twine("inconsistent types '") + Type->getAsString() +
- "' and '" + VTy->getAsString() + "' for !cond");
+ "' and '" + VTy->getAsString() + "' for !cond");
return nullptr;
}
Type = RType;
@@ -2724,7 +2787,9 @@ const Init *TGParser::ParseSimpleValue(Record *CurRec, const RecTy *ItemType,
return ParseOperation(CurRec, ItemType);
switch (Code) {
- default: TokError("Unknown or reserved token when parsing a value"); break;
+ default:
+ TokError("Unknown or reserved token when parsing a value");
+ break;
case tgtok::TrueVal:
R = IntInit::get(Records, 1);
@@ -2740,7 +2805,7 @@ const Init *TGParser::ParseSimpleValue(Record *CurRec, const RecTy *ItemType,
break;
case tgtok::BinaryIntVal: {
auto BinaryVal = Lex.getCurBinaryIntVal();
- SmallVector<Init*, 16> Bits(BinaryVal.second);
+ SmallVector<Init *, 16> Bits(BinaryVal.second);
for (unsigned i = 0, e = BinaryVal.second; i != e; ++i)
Bits[i] = BitInit::get(Records, BinaryVal.first & (1LL << i));
R = BitsInit::get(Records, Bits);
@@ -2803,14 +2868,15 @@ const Init *TGParser::ParseSimpleValue(Record *CurRec, const RecTy *ItemType,
Class->appendReferenceLoc(NameLoc);
return VarDefInit::get(NameLoc.Start, Class, Args)->Fold();
}
- case tgtok::l_brace: { // Value ::= '{' ValueList '}'
+ case tgtok::l_brace: { // Value ::= '{' ValueList '}'
SMLoc BraceLoc = Lex.getLoc();
Lex.Lex(); // eat the '{'
SmallVector<const Init *, 16> Vals;
if (Lex.getCode() != tgtok::r_brace) {
ParseValueList(Vals, CurRec);
- if (Vals.empty()) return nullptr;
+ if (Vals.empty())
+ return nullptr;
}
if (!consume(tgtok::r_brace)) {
TokError("expected '}' at end of bit list value");
@@ -2845,7 +2911,7 @@ const Init *TGParser::ParseSimpleValue(Record *CurRec, const RecTy *ItemType,
const Init *Bit = Vals[i]->getCastTo(BitRecTy::get(Records));
if (!Bit) {
Error(BraceLoc, "Element #" + Twine(i) + " (" + Vals[i]->getAsString() +
- ") is not convertable to a bit");
+ ") is not convertable to a bit");
return nullptr;
}
NewBits.push_back(Bit);
@@ -2853,8 +2919,8 @@ const Init *TGParser::ParseSimpleValue(Record *CurRec, const RecTy *ItemType,
std::reverse(NewBits.begin(), NewBits.end());
return BitsInit::get(Records, NewBits);
}
- case tgtok::l_square: { // Value ::= '[' ValueList ']'
- Lex.Lex(); // eat the '['
+ case tgtok::l_square: { // Value ::= '[' ValueList ']'
+ Lex.Lex(); // eat the '['
SmallVector<const Init *, 16> Vals;
const RecTy *DeducedEltTy = nullptr;
@@ -2873,7 +2939,8 @@ const Init *TGParser::ParseSimpleValue(Record *CurRec, const RecTy *ItemType,
if (Lex.getCode() != tgtok::r_square) {
ParseValueList(Vals, CurRec,
GivenListTy ? GivenListTy->getElementType() : nullptr);
- if (Vals.empty()) return nullptr;
+ if (Vals.empty())
+ return nullptr;
}
if (!consume(tgtok::r_square)) {
TokError("expected ']' at end of list value");
@@ -2946,7 +3013,7 @@ const Init *TGParser::ParseSimpleValue(Record *CurRec, const RecTy *ItemType,
}
case tgtok::l_paren: { // Value ::= '(' IDValue DagArgList ')'
// Value ::= '(' '[' ValueList ']' DagArgList ')'
- Lex.Lex(); // eat the '('
+ Lex.Lex(); // eat the '('
if (Lex.getCode() != tgtok::Id && Lex.getCode() != tgtok::XCast &&
Lex.getCode() != tgtok::question && Lex.getCode() != tgtok::XGetDagOp &&
Lex.getCode() != tgtok::l_square) {
@@ -2955,7 +3022,8 @@ const Init *TGParser::ParseSimpleValue(Record *CurRec, const RecTy *ItemType,
}
const Init *Operator = ParseValue(CurRec);
- if (!Operator) return nullptr;
+ if (!Operator)
+ return nullptr;
// If the operator name is present, parse it.
const StringInit *OperatorName = nullptr;
@@ -2965,13 +3033,14 @@ const Init *TGParser::ParseSimpleValue(Record *CurRec, const RecTy *ItemType,
return nullptr;
}
OperatorName = StringInit::get(Records, Lex.getCurStrVal());
- Lex.Lex(); // eat the VarName.
+ Lex.Lex(); // eat the VarName.
}
SmallVector<std::pair<const Init *, const StringInit *>, 8> DagArgs;
if (Lex.getCode() != tgtok::r_paren) {
ParseDagArgList(DagArgs, CurRec);
- if (DagArgs.empty()) return nullptr;
+ if (DagArgs.empty())
+ return nullptr;
}
if (!consume(tgtok::r_paren)) {
@@ -2997,12 +3066,14 @@ const Init *TGParser::ParseValue(Record *CurRec, const RecTy *ItemType,
IDParseMode Mode) {
SMLoc LHSLoc = Lex.getLoc();
const Init *Result = ParseSimpleValue(CurRec, ItemType, Mode);
- if (!Result) return nullptr;
+ if (!Result)
+ return nullptr;
// Parse the suffixes now if present.
while (true) {
switch (Lex.getCode()) {
- default: return Result;
+ default:
+ return Result;
case tgtok::l_brace: {
if (Mode == ParseNameMode)
// This is the beginning of the object body.
@@ -3012,7 +3083,8 @@ const Init *TGParser::ParseValue(Record *CurRec, const RecTy *ItemType,
Lex.Lex(); // eat the '{'
SmallVector<unsigned, 16> Ranges;
ParseRangeList(Ranges);
- if (Ranges.empty()) return nullptr;
+ if (Ranges.empty())
+ return nullptr;
// Reverse the bitlist.
std::reverse(Ranges.begin(), Ranges.end());
@@ -3095,7 +3167,7 @@ const Init *TGParser::ParseValue(Record *CurRec, const RecTy *ItemType,
}
Result = FieldInit::get(Result, FieldName)->Fold(CurRec);
- Lex.Lex(); // eat field name
+ Lex.Lex(); // eat field name
break;
}
@@ -3109,7 +3181,7 @@ const Init *TGParser::ParseValue(Record *CurRec, const RecTy *ItemType,
// Check if it's a 'listA # listB'
if (isa<ListRecTy>(LHS->getType())) {
- Lex.Lex(); // Eat the '#'.
+ Lex.Lex(); // Eat the '#'.
assert(Mode == ParseValueMode && "encountered paste of lists in name");
@@ -3145,7 +3217,7 @@ const Init *TGParser::ParseValue(Record *CurRec, const RecTy *ItemType,
const TypedInit *RHS = nullptr;
- Lex.Lex(); // Eat the '#'.
+ Lex.Lex(); // Eat the '#'.
switch (Lex.getCode()) {
case tgtok::colon:
case tgtok::semi:
@@ -3223,7 +3295,7 @@ void TGParser::ParseDagArgList(
return;
}
VarName = StringInit::get(Records, Lex.getCurStrVal());
- Lex.Lex(); // eat the VarName.
+ Lex.Lex(); // eat the VarName.
}
Result.emplace_back(Val, VarName);
@@ -3351,7 +3423,8 @@ const Init *TGParser::ParseDeclaration(Record *CurRec,
bool HasField = consume(tgtok::Field);
const RecTy *Type = ParseType();
- if (!Type) return nullptr;
+ if (!Type)
+ return nullptr;
if (Lex.getCode() != tgtok::Id) {
TokError("Expected identifier in declaration");
@@ -3440,7 +3513,7 @@ TGParser::ParseForeachDeclaration(const Init *&ForeachListValue) {
switch (Lex.getCode()) {
case tgtok::l_brace: { // '{' RangeList '}'
- Lex.Lex(); // eat the '{'
+ Lex.Lex(); // eat the '{'
ParseRangeList(Ranges);
if (!consume(tgtok::r_brace)) {
TokError("expected '}' at end of bit range list");
@@ -3471,13 +3544,12 @@ TGParser::ParseForeachDeclaration(const Init *&ForeachListValue) {
Error(ValueLoc, "expected a list, got '" + I->getAsString() + "'");
if (CurMultiClass) {
PrintNote({}, "references to multiclass template arguments cannot be "
- "resolved at this time");
+ "resolved at this time");
}
return nullptr;
}
}
-
if (!Ranges.empty()) {
assert(!IterType && "Type already initialized?");
IterType = IntRecTy::get(Records);
@@ -3516,7 +3588,7 @@ bool TGParser::ParseTemplateArgList(Record *CurRec) {
while (consume(tgtok::comma)) {
// Read the following declarations.
SMLoc Loc = Lex.getLoc();
- TemplArg = ParseDeclaration(CurRec, true/*templateargs*/);
+ TemplArg = ParseDeclaration(CurRec, true /*templateargs*/);
if (!TemplArg)
return true;
@@ -3565,7 +3637,7 @@ bool TGParser::ParseBodyItem(Record *CurRec) {
SMLoc IdLoc = Lex.getLoc();
const StringInit *FieldName = StringInit::get(Records, Lex.getCurStrVal());
- Lex.Lex(); // eat the field name.
+ Lex.Lex(); // eat the field name.
SmallVector<unsigned, 16> BitList;
if (ParseOptionalBitList(BitList))
@@ -3587,7 +3659,8 @@ bool TGParser::ParseBodyItem(Record *CurRec) {
}
const Init *Val = ParseValue(CurRec, Type);
- if (!Val) return true;
+ if (!Val)
+ return true;
if (!consume(tgtok::semi))
return TokError("expected ';' after let expression");
@@ -3677,7 +3750,8 @@ bool TGParser::ParseObjectBody(Record *CurRec) {
SubClassReference SubClass = ParseSubClassReference(CurRec, false);
while (true) {
// Check for error.
- if (!SubClass.Rec) return true;
+ if (!SubClass.Rec)
+ return true;
// Add it.
if (AddSubClass(CurRec, SubClass))
@@ -3705,7 +3779,7 @@ bool TGParser::ParseObjectBody(Record *CurRec) {
bool TGParser::ParseDef(MultiClass *CurMultiClass) {
SMLoc DefLoc = Lex.getLoc();
assert(Lex.getCode() == tgtok::Def && "Unknown tok");
- Lex.Lex(); // Eat the 'def' token.
+ Lex.Lex(); // Eat the 'def' token.
// If the name of the def is an Id token, use that for the location.
// Otherwise, the name is more complex and we use the location of the 'def'
@@ -3867,7 +3941,7 @@ bool TGParser::ParseDefvar(Record *CurRec) {
bool TGParser::ParseForeach(MultiClass *CurMultiClass) {
SMLoc Loc = Lex.getLoc();
assert(Lex.getCode() == tgtok::Foreach && "Unknown tok");
- Lex.Lex(); // Eat the 'for' token.
+ Lex.Lex(); // Eat the 'for' token.
// Make a temporary object to record items associated with the for
// loop.
@@ -3892,7 +3966,7 @@ bool TGParser::ParseForeach(MultiClass *CurMultiClass) {
} else {
SMLoc BraceLoc = Lex.getLoc();
// Otherwise, this is a group foreach.
- Lex.Lex(); // eat the '{'.
+ Lex.Lex(); // eat the '{'.
// Parse the object list.
if (ParseObjectList(CurMultiClass))
@@ -4119,7 +4193,7 @@ void TGParser::ParseLetList(SmallVectorImpl<LetRecord> &Result) {
const StringInit *Name = StringInit::get(Records, Lex.getCurStrVal());
SMLoc NameLoc = Lex.getLoc();
- Lex.Lex(); // Eat the identifier.
+ Lex.Lex(); // Eat the identifier.
// Check for an optional RangeList.
SmallVector<unsigned, 16> Bits;
@@ -4159,7 +4233,8 @@ bool TGParser::ParseTopLevelLet(MultiClass *CurMultiClass) {
// Add this entry to the let stack.
SmallVector<LetRecord, 8> LetInfo;
ParseLetList(LetInfo);
- if (LetInfo.empty()) return true;
+ if (LetInfo.empty())
+ return true;
LetStack.push_back(std::move(LetInfo));
if (!consume(tgtok::In))
@@ -4170,10 +4245,10 @@ bool TGParser::ParseTopLevelLet(MultiClass *CurMultiClass) {
// LET LetList IN Object
if (ParseObject(CurMultiClass))
return true;
- } else { // Object ::= LETCommand '{' ObjectList '}'
+ } else { // Object ::= LETCommand '{' ObjectList '}'
SMLoc BraceLoc = Lex.getLoc();
// Otherwise, this is a group let.
- Lex.Lex(); // eat the '{'.
+ Lex.Lex(); // eat the '{'.
// A group let introduces a new scope for local variables.
TGVarScope *LetScope = PushScope();
@@ -4210,7 +4285,7 @@ bool TGParser::ParseTopLevelLet(MultiClass *CurMultiClass) {
///
bool TGParser::ParseMultiClass() {
assert(Lex.getCode() == tgtok::MultiClass && "Unexpected token");
- Lex.Lex(); // Eat the multiclass token.
+ Lex.Lex(); // Eat the multiclass token.
if (Lex.getCode() != tgtok::Id)
return TokError("expected identifier after multiclass for name");
@@ -4223,7 +4298,7 @@ bool TGParser::ParseMultiClass() {
return TokError("multiclass '" + Name + "' already defined");
CurMultiClass = Result.first->second.get();
- Lex.Lex(); // Eat the identifier.
+ Lex.Lex(); // Eat the identifier.
// A multiclass body introduces a new scope for local variables.
TGVarScope *MulticlassScope = PushScope(CurMultiClass);
@@ -4241,10 +4316,11 @@ bool TGParser::ParseMultiClass() {
// Read all of the submulticlasses.
SubMultiClassReference SubMultiClass =
- ParseSubMultiClassReference(CurMultiClass);
+ ParseSubMultiClassReference(CurMultiClass);
while (true) {
// Check for error.
- if (!SubMultiClass.MC) return true;
+ if (!SubMultiClass.MC)
+ return true;
// Add it.
if (AddSubMultiClass(CurMultiClass, SubMultiClass))
@@ -4262,7 +4338,7 @@ bool TGParser::ParseMultiClass() {
if (!consume(tgtok::semi))
return TokError("expected ';' in multiclass definition");
} else {
- if (Lex.Lex() == tgtok::r_brace) // eat the '{'.
+ if (Lex.Lex() == tgtok::r_brace) // eat the '{'.
return TokError("multiclass must contain at least one def");
while (Lex.getCode() != tgtok::r_brace) {
@@ -4284,7 +4360,7 @@ bool TGParser::ParseMultiClass() {
break;
}
}
- Lex.Lex(); // eat the '}'.
+ Lex.Lex(); // eat the '}'.
// If we have a semicolon, print a gentle error.
SMLoc SemiLoc = Lex.getLoc();
@@ -4338,7 +4414,8 @@ bool TGParser::ParseDefm(MultiClass *CurMultiClass) {
SubClassReference Ref = ParseSubClassReference(nullptr, true);
while (true) {
- if (!Ref.Rec) return true;
+ if (!Ref.Rec)
+ return true;
// To instantiate a multiclass, we get the multiclass and then loop
// through its template argument names. Substs contains a substitution
@@ -4380,7 +4457,8 @@ bool TGParser::ParseDefm(MultiClass *CurMultiClass) {
SubClassReference SubClass = ParseSubClassReference(nullptr, false);
while (true) {
// Check for error.
- if (!SubClass.Rec) return true;
+ if (!SubClass.Rec)
+ return true;
// Get the expanded definition prototypes and teach them about
// the record values the current class to inherit has
@@ -4426,17 +4504,24 @@ bool TGParser::ParseObject(MultiClass *MC) {
default:
return TokError(
"Expected assert, class, def, defm, defset, dump, foreach, if, or let");
- case tgtok::Assert: return ParseAssert(MC);
- case tgtok::Def: return ParseDef(MC);
- case tgtok::Defm: return ParseDefm(MC);
+ case tgtok::Assert:
+ return ParseAssert(MC);
+ case tgtok::Def:
+ return ParseDef(MC);
+ case tgtok::Defm:
+ return ParseDefm(MC);
case tgtok::Deftype:
return ParseDeftype();
- case tgtok::Defvar: return ParseDefvar();
+ case tgtok::Defvar:
+ return ParseDefvar();
case tgtok::Dump:
return ParseDump(MC);
- case tgtok::Foreach: return ParseForeach(MC);
- case tgtok::If: return ParseIf(MC);
- case tgtok::Let: return ParseTopLevelLet(MC);
+ case tgtok::Foreach:
+ return ParseForeach(MC);
+ case tgtok::If:
+ return ParseIf(MC);
+ case tgtok::Let:
+ return ParseTopLevelLet(MC);
case tgtok::Defset:
if (MC)
return TokError("defset is not allowed inside multiclass");
diff --git a/llvm/lib/TableGen/TGParser.h b/llvm/lib/TableGen/TGParser.h
index 2a5a1925343cf..7edb6c7a9aac6 100644
--- a/llvm/lib/TableGen/TGParser.h
+++ b/llvm/lib/TableGen/TGParser.h
@@ -167,9 +167,9 @@ class TGParser {
// in the middle of creating in. For those situations, allow the
// parser to ignore missing object errors.
enum IDParseMode {
- ParseValueMode, // We are parsing a value we expect to look up.
- ParseNameMode, // We are parsing a name of an object that does not yet
- // exist.
+ ParseValueMode, // We are parsing a value we expect to look up.
+ ParseNameMode, // We are parsing a name of an object that does not yet
+ // exist.
};
bool NoWarnOnUnusedTemplateArgs = false;
@@ -191,9 +191,7 @@ class TGParser {
PrintError(L, Msg);
return true;
}
- bool TokError(const Twine &Msg) const {
- return Error(Lex.getLoc(), Msg);
- }
+ bool TokError(const Twine &Msg) const { return Error(Lex.getLoc(), Msg); }
const TGLexer::DependenciesSetTy &getDependencies() const {
return Lex.getDependencies();
}
@@ -257,7 +255,7 @@ class TGParser {
ArrayRef<const ArgumentInit *> ArgValues,
const Init *DefmName, SMLoc Loc);
-private: // Parser methods.
+private: // Parser methods.
bool consume(tgtok::TokKind K);
bool ParseObjectList(MultiClass *MC = nullptr);
bool ParseObject(MultiClass *MC);
From 124722bfe5bf668def1563cfb5778d9aa1b5436d Mon Sep 17 00:00:00 2001
From: Adam Nemet <anemet at apple.com>
Date: Tue, 5 Aug 2025 16:47:50 -0700
Subject: [PATCH 016/171] Revert "[CG] Add VTs for v[567]i1 and v[567]f16"
(#152217)
Reverts llvm/llvm-project#151763
It caused: https://github.com/llvm/llvm-project/issues/152150
---
llvm/include/llvm/CodeGen/ValueTypes.td | 496 +++++++++---------
llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp | 12 -
2 files changed, 242 insertions(+), 266 deletions(-)
diff --git a/llvm/include/llvm/CodeGen/ValueTypes.td b/llvm/include/llvm/CodeGen/ValueTypes.td
index b06158d85f510..4551e7e4b9b60 100644
--- a/llvm/include/llvm/CodeGen/ValueTypes.td
+++ b/llvm/include/llvm/CodeGen/ValueTypes.td
@@ -92,270 +92,258 @@ def v1i1 : VTVec<1, i1, 17>; // 1 x i1 vector value
def v2i1 : VTVec<2, i1, 18>; // 2 x i1 vector value
def v3i1 : VTVec<3, i1, 19>; // 3 x i1 vector value
def v4i1 : VTVec<4, i1, 20>; // 4 x i1 vector value
-def v5i1 : VTVec<5, i1, 21>; // 5 x i1 vector value
-def v6i1 : VTVec<6, i1, 22>; // 6 x i1 vector value
-def v7i1 : VTVec<7, i1, 23>; // 7 x i1 vector value
-def v8i1 : VTVec<8, i1, 24>; // 8 x i1 vector value
-def v16i1 : VTVec<16, i1, 25>; // 16 x i1 vector value
-def v32i1 : VTVec<32, i1, 26>; // 32 x i1 vector value
-def v64i1 : VTVec<64, i1, 27>; // 64 x i1 vector value
-def v128i1 : VTVec<128, i1, 28>; // 128 x i1 vector value
-def v256i1 : VTVec<256, i1, 29>; // 256 x i1 vector value
-def v512i1 : VTVec<512, i1, 30>; // 512 x i1 vector value
-def v1024i1 : VTVec<1024, i1, 31>; // 1024 x i1 vector value
-def v2048i1 : VTVec<2048, i1, 32>; // 2048 x i1 vector value
-def v4096i1 : VTVec<4096, i1, 33>; // 4096 x i1 vector value
-
-def v128i2 : VTVec<128, i2, 34>; // 128 x i2 vector value
-def v256i2 : VTVec<256, i2, 35>; // 256 x i2 vector value
-
-def v64i4 : VTVec<64, i4, 36>; // 64 x i4 vector value
-def v128i4 : VTVec<128, i4, 37>; // 128 x i4 vector value
-
-def v1i8 : VTVec<1, i8, 38>; // 1 x i8 vector value
-def v2i8 : VTVec<2, i8, 39>; // 2 x i8 vector value
-def v3i8 : VTVec<3, i8, 40>; // 3 x i8 vector value
-def v4i8 : VTVec<4, i8, 41>; // 4 x i8 vector value
-def v5i8 : VTVec<5, i8, 42>; // 5 x i8 vector value
-def v6i8 : VTVec<6, i8, 43>; // 6 x i8 vector value
-def v7i8 : VTVec<7, i8, 44>; // 7 x i8 vector value
-def v8i8 : VTVec<8, i8, 45>; // 8 x i8 vector value
-def v16i8 : VTVec<16, i8, 46>; // 16 x i8 vector value
-def v32i8 : VTVec<32, i8, 47>; // 32 x i8 vector value
-def v64i8 : VTVec<64, i8, 48>; // 64 x i8 vector value
-def v128i8 : VTVec<128, i8, 49>; // 128 x i8 vector value
-def v256i8 : VTVec<256, i8, 50>; // 256 x i8 vector value
-def v512i8 : VTVec<512, i8, 51>; // 512 x i8 vector value
-def v1024i8 : VTVec<1024, i8, 52>; // 1024 x i8 vector value
-
-def v1i16 : VTVec<1, i16, 53>; // 1 x i16 vector value
-def v2i16 : VTVec<2, i16, 54>; // 2 x i16 vector value
-def v3i16 : VTVec<3, i16, 55>; // 3 x i16 vector value
-def v4i16 : VTVec<4, i16, 56>; // 4 x i16 vector value
-def v5i16 : VTVec<5, i16, 57>; // 5 x i16 vector value
-def v6i16 : VTVec<6, i16, 58>; // 6 x i16 vector value
-def v7i16 : VTVec<7, i16, 59>; // 7 x i16 vector value
-def v8i16 : VTVec<8, i16, 60>; // 8 x i16 vector value
-def v16i16 : VTVec<16, i16, 61>; // 16 x i16 vector value
-def v32i16 : VTVec<32, i16, 62>; // 32 x i16 vector value
-def v64i16 : VTVec<64, i16, 63>; // 64 x i16 vector value
-def v128i16 : VTVec<128, i16, 64>; // 128 x i16 vector value
-def v256i16 : VTVec<256, i16, 65>; // 256 x i16 vector value
-def v512i16 : VTVec<512, i16, 66>; // 512 x i16 vector value
-def v4096i16 : VTVec<4096, i16, 67>; // 4096 x i16 vector value
-
-def v1i32 : VTVec<1, i32, 68>; // 1 x i32 vector value
-def v2i32 : VTVec<2, i32, 69>; // 2 x i32 vector value
-def v3i32 : VTVec<3, i32, 70>; // 3 x i32 vector value
-def v4i32 : VTVec<4, i32, 71>; // 4 x i32 vector value
-def v5i32 : VTVec<5, i32, 72>; // 5 x i32 vector value
-def v6i32 : VTVec<6, i32, 73>; // 6 x i32 vector value
-def v7i32 : VTVec<7, i32, 74>; // 7 x i32 vector value
-def v8i32 : VTVec<8, i32, 75>; // 8 x i32 vector value
-def v9i32 : VTVec<9, i32, 76>; // 9 x i32 vector value
-def v10i32 : VTVec<10, i32, 77>; // 10 x i32 vector value
-def v11i32 : VTVec<11, i32, 78>; // 11 x i32 vector value
-def v12i32 : VTVec<12, i32, 79>; // 12 x i32 vector value
-def v16i32 : VTVec<16, i32, 80>; // 16 x i32 vector value
-def v32i32 : VTVec<32, i32, 81>; // 32 x i32 vector value
-def v64i32 : VTVec<64, i32, 82>; // 64 x i32 vector value
-def v128i32 : VTVec<128, i32, 83>; // 128 x i32 vector value
-def v256i32 : VTVec<256, i32, 84>; // 256 x i32 vector value
-def v512i32 : VTVec<512, i32, 85>; // 512 x i32 vector value
-def v1024i32 : VTVec<1024, i32, 86>; // 1024 x i32 vector value
-def v2048i32 : VTVec<2048, i32, 87>; // 2048 x i32 vector value
-def v4096i32 : VTVec<4096, i32, 88>; // 4096 x i32 vector value
-
-def v1i64 : VTVec<1, i64, 89>; // 1 x i64 vector value
-def v2i64 : VTVec<2, i64, 90>; // 2 x i64 vector value
-def v3i64 : VTVec<3, i64, 91>; // 3 x i64 vector value
-def v4i64 : VTVec<4, i64, 92>; // 4 x i64 vector value
-def v8i64 : VTVec<8, i64, 93>; // 8 x i64 vector value
-def v16i64 : VTVec<16, i64, 94>; // 16 x i64 vector value
-def v32i64 : VTVec<32, i64, 95>; // 32 x i64 vector value
-def v64i64 : VTVec<64, i64, 96>; // 64 x i64 vector value
-def v128i64 : VTVec<128, i64, 97>; // 128 x i64 vector value
-def v256i64 : VTVec<256, i64, 98>; // 256 x i64 vector value
-
-def v1i128 : VTVec<1, i128, 99>; // 1 x i128 vector value
-
-def v1f16 : VTVec<1, f16, 100>; // 1 x f16 vector value
-def v2f16 : VTVec<2, f16, 101>; // 2 x f16 vector value
-def v3f16 : VTVec<3, f16, 102>; // 3 x f16 vector value
-def v4f16 : VTVec<4, f16, 103>; // 4 x f16 vector value
-def v5f16 : VTVec<5, f16, 104>; // 5 x f16 vector value
-def v6f16 : VTVec<6, f16, 105>; // 6 x f16 vector value
-def v7f16 : VTVec<7, f16, 106>; // 7 x f16 vector value
-def v8f16 : VTVec<8, f16, 107>; // 8 x f16 vector value
-def v16f16 : VTVec<16, f16, 108>; // 16 x f16 vector value
-def v32f16 : VTVec<32, f16, 109>; // 32 x f16 vector value
-def v64f16 : VTVec<64, f16, 110>; // 64 x f16 vector value
-def v128f16 : VTVec<128, f16, 111>; // 128 x f16 vector value
-def v256f16 : VTVec<256, f16, 112>; // 256 x f16 vector value
-def v512f16 : VTVec<512, f16, 113>; // 512 x f16 vector value
-def v4096f16 : VTVec<4096, f16, 114>; // 4096 x f16 vector value
-
-def v1bf16 : VTVec<1, bf16, 115>; // 1 x bf16 vector value
-def v2bf16 : VTVec<2, bf16, 116>; // 2 x bf16 vector value
-def v3bf16 : VTVec<3, bf16, 117>; // 3 x bf16 vector value
-def v4bf16 : VTVec<4, bf16, 118>; // 4 x bf16 vector value
-def v8bf16 : VTVec<8, bf16, 119>; // 8 x bf16 vector value
-def v16bf16 : VTVec<16, bf16, 120>; // 16 x bf16 vector value
-def v32bf16 : VTVec<32, bf16, 121>; // 32 x bf16 vector value
-def v64bf16 : VTVec<64, bf16, 122>; // 64 x bf16 vector value
-def v128bf16 : VTVec<128, bf16, 123>; // 128 x bf16 vector value
-def v4096bf16 : VTVec<4096, bf16, 124>; // 4096 x bf16 vector value
-
-def v1f32 : VTVec<1, f32, 125>; // 1 x f32 vector value
-def v2f32 : VTVec<2, f32, 126>; // 2 x f32 vector value
-def v3f32 : VTVec<3, f32, 127>; // 3 x f32 vector value
-def v4f32 : VTVec<4, f32, 128>; // 4 x f32 vector value
-def v5f32 : VTVec<5, f32, 129>; // 5 x f32 vector value
-def v6f32 : VTVec<6, f32, 130>; // 6 x f32 vector value
-def v7f32 : VTVec<7, f32, 131>; // 7 x f32 vector value
-def v8f32 : VTVec<8, f32, 132>; // 8 x f32 vector value
-def v9f32 : VTVec<9, f32, 133>; // 9 x f32 vector value
-def v10f32 : VTVec<10, f32, 134>; // 10 x f32 vector value
-def v11f32 : VTVec<11, f32, 135>; // 11 x f32 vector value
-def v12f32 : VTVec<12, f32, 136>; // 12 x f32 vector value
-def v16f32 : VTVec<16, f32, 137>; // 16 x f32 vector value
-def v32f32 : VTVec<32, f32, 138>; // 32 x f32 vector value
-def v64f32 : VTVec<64, f32, 139>; // 64 x f32 vector value
-def v128f32 : VTVec<128, f32, 140>; // 128 x f32 vector value
-def v256f32 : VTVec<256, f32, 141>; // 256 x f32 vector value
-def v512f32 : VTVec<512, f32, 142>; // 512 x f32 vector value
-def v1024f32 : VTVec<1024, f32, 143>; // 1024 x f32 vector value
-def v2048f32 : VTVec<2048, f32, 144>; // 2048 x f32 vector value
-
-def v1f64 : VTVec<1, f64, 145>; // 1 x f64 vector value
-def v2f64 : VTVec<2, f64, 146>; // 2 x f64 vector value
-def v3f64 : VTVec<3, f64, 147>; // 3 x f64 vector value
-def v4f64 : VTVec<4, f64, 148>; // 4 x f64 vector value
-def v8f64 : VTVec<8, f64, 149>; // 8 x f64 vector value
-def v16f64 : VTVec<16, f64, 150>; // 16 x f64 vector value
-def v32f64 : VTVec<32, f64, 151>; // 32 x f64 vector value
-def v64f64 : VTVec<64, f64, 152>; // 64 x f64 vector value
-def v128f64 : VTVec<128, f64, 153>; // 128 x f64 vector value
-def v256f64 : VTVec<256, f64, 154>; // 256 x f64 vector value
-
-def nxv1i1 : VTScalableVec<1, i1, 155>; // n x 1 x i1 vector value
-def nxv2i1 : VTScalableVec<2, i1, 156>; // n x 2 x i1 vector value
-def nxv4i1 : VTScalableVec<4, i1, 157>; // n x 4 x i1 vector value
-def nxv8i1 : VTScalableVec<8, i1, 158>; // n x 8 x i1 vector value
-def nxv16i1 : VTScalableVec<16, i1, 159>; // n x 16 x i1 vector value
-def nxv32i1 : VTScalableVec<32, i1, 160>; // n x 32 x i1 vector value
-def nxv64i1 : VTScalableVec<64, i1, 161>; // n x 64 x i1 vector value
-
-def nxv1i8 : VTScalableVec<1, i8, 162>; // n x 1 x i8 vector value
-def nxv2i8 : VTScalableVec<2, i8, 163>; // n x 2 x i8 vector value
-def nxv4i8 : VTScalableVec<4, i8, 164>; // n x 4 x i8 vector value
-def nxv8i8 : VTScalableVec<8, i8, 165>; // n x 8 x i8 vector value
-def nxv16i8 : VTScalableVec<16, i8, 166>; // n x 16 x i8 vector value
-def nxv32i8 : VTScalableVec<32, i8, 167>; // n x 32 x i8 vector value
-def nxv64i8 : VTScalableVec<64, i8, 168>; // n x 64 x i8 vector value
-
-def nxv1i16 : VTScalableVec<1, i16, 169>; // n x 1 x i16 vector value
-def nxv2i16 : VTScalableVec<2, i16, 170>; // n x 2 x i16 vector value
-def nxv4i16 : VTScalableVec<4, i16, 171>; // n x 4 x i16 vector value
-def nxv8i16 : VTScalableVec<8, i16, 172>; // n x 8 x i16 vector value
-def nxv16i16 : VTScalableVec<16, i16, 173>; // n x 16 x i16 vector value
-def nxv32i16 : VTScalableVec<32, i16, 174>; // n x 32 x i16 vector value
-
-def nxv1i32 : VTScalableVec<1, i32, 175>; // n x 1 x i32 vector value
-def nxv2i32 : VTScalableVec<2, i32, 176>; // n x 2 x i32 vector value
-def nxv4i32 : VTScalableVec<4, i32, 177>; // n x 4 x i32 vector value
-def nxv8i32 : VTScalableVec<8, i32, 178>; // n x 8 x i32 vector value
-def nxv16i32 : VTScalableVec<16, i32, 179>; // n x 16 x i32 vector value
-def nxv32i32 : VTScalableVec<32, i32, 180>; // n x 32 x i32 vector value
-
-def nxv1i64 : VTScalableVec<1, i64, 181>; // n x 1 x i64 vector value
-def nxv2i64 : VTScalableVec<2, i64, 182>; // n x 2 x i64 vector value
-def nxv4i64 : VTScalableVec<4, i64, 183>; // n x 4 x i64 vector value
-def nxv8i64 : VTScalableVec<8, i64, 184>; // n x 8 x i64 vector value
-def nxv16i64 : VTScalableVec<16, i64, 185>; // n x 16 x i64 vector value
-def nxv32i64 : VTScalableVec<32, i64, 186>; // n x 32 x i64 vector value
-
-def nxv1f16 : VTScalableVec<1, f16, 187>; // n x 1 x f16 vector value
-def nxv2f16 : VTScalableVec<2, f16, 188>; // n x 2 x f16 vector value
-def nxv4f16 : VTScalableVec<4, f16, 189>; // n x 4 x f16 vector value
-def nxv8f16 : VTScalableVec<8, f16, 190>; // n x 8 x f16 vector value
-def nxv16f16 : VTScalableVec<16, f16, 191>; // n x 16 x f16 vector value
-def nxv32f16 : VTScalableVec<32, f16, 192>; // n x 32 x f16 vector value
-
-def nxv1bf16 : VTScalableVec<1, bf16, 193>; // n x 1 x bf16 vector value
-def nxv2bf16 : VTScalableVec<2, bf16, 194>; // n x 2 x bf16 vector value
-def nxv4bf16 : VTScalableVec<4, bf16, 195>; // n x 4 x bf16 vector value
-def nxv8bf16 : VTScalableVec<8, bf16, 196>; // n x 8 x bf16 vector value
-def nxv16bf16 : VTScalableVec<16, bf16, 197>; // n x 16 x bf16 vector value
-def nxv32bf16 : VTScalableVec<32, bf16, 198>; // n x 32 x bf16 vector value
-
-def nxv1f32 : VTScalableVec<1, f32, 199>; // n x 1 x f32 vector value
-def nxv2f32 : VTScalableVec<2, f32, 200>; // n x 2 x f32 vector value
-def nxv4f32 : VTScalableVec<4, f32, 201>; // n x 4 x f32 vector value
-def nxv8f32 : VTScalableVec<8, f32, 202>; // n x 8 x f32 vector value
-def nxv16f32 : VTScalableVec<16, f32, 203>; // n x 16 x f32 vector value
-
-def nxv1f64 : VTScalableVec<1, f64, 204>; // n x 1 x f64 vector value
-def nxv2f64 : VTScalableVec<2, f64, 205>; // n x 2 x f64 vector value
-def nxv4f64 : VTScalableVec<4, f64, 206>; // n x 4 x f64 vector value
-def nxv8f64 : VTScalableVec<8, f64, 207>; // n x 8 x f64 vector value
+def v8i1 : VTVec<8, i1, 21>; // 8 x i1 vector value
+def v16i1 : VTVec<16, i1, 22>; // 16 x i1 vector value
+def v32i1 : VTVec<32, i1, 23>; // 32 x i1 vector value
+def v64i1 : VTVec<64, i1, 24>; // 64 x i1 vector value
+def v128i1 : VTVec<128, i1, 25>; // 128 x i1 vector value
+def v256i1 : VTVec<256, i1, 26>; // 256 x i1 vector value
+def v512i1 : VTVec<512, i1, 27>; // 512 x i1 vector value
+def v1024i1 : VTVec<1024, i1, 28>; // 1024 x i1 vector value
+def v2048i1 : VTVec<2048, i1, 29>; // 2048 x i1 vector value
+def v4096i1 : VTVec<4096, i1, 30>; // 4096 x i1 vector value
+
+def v128i2 : VTVec<128, i2, 31>; // 128 x i2 vector value
+def v256i2 : VTVec<256, i2, 32>; // 256 x i2 vector value
+
+def v64i4 : VTVec<64, i4, 33>; // 64 x i4 vector value
+def v128i4 : VTVec<128, i4, 34>; // 128 x i4 vector value
+
+def v1i8 : VTVec<1, i8, 35>; // 1 x i8 vector value
+def v2i8 : VTVec<2, i8, 36>; // 2 x i8 vector value
+def v3i8 : VTVec<3, i8, 37>; // 3 x i8 vector value
+def v4i8 : VTVec<4, i8, 38>; // 4 x i8 vector value
+def v8i8 : VTVec<8, i8, 39>; // 8 x i8 vector value
+def v16i8 : VTVec<16, i8, 40>; // 16 x i8 vector value
+def v32i8 : VTVec<32, i8, 41>; // 32 x i8 vector value
+def v64i8 : VTVec<64, i8, 42>; // 64 x i8 vector value
+def v128i8 : VTVec<128, i8, 43>; // 128 x i8 vector value
+def v256i8 : VTVec<256, i8, 44>; // 256 x i8 vector value
+def v512i8 : VTVec<512, i8, 45>; // 512 x i8 vector value
+def v1024i8 : VTVec<1024, i8, 46>; // 1024 x i8 vector value
+
+def v1i16 : VTVec<1, i16, 47>; // 1 x i16 vector value
+def v2i16 : VTVec<2, i16, 48>; // 2 x i16 vector value
+def v3i16 : VTVec<3, i16, 49>; // 3 x i16 vector value
+def v4i16 : VTVec<4, i16, 50>; // 4 x i16 vector value
+def v8i16 : VTVec<8, i16, 51>; // 8 x i16 vector value
+def v16i16 : VTVec<16, i16, 52>; // 16 x i16 vector value
+def v32i16 : VTVec<32, i16, 53>; // 32 x i16 vector value
+def v64i16 : VTVec<64, i16, 54>; // 64 x i16 vector value
+def v128i16 : VTVec<128, i16, 55>; // 128 x i16 vector value
+def v256i16 : VTVec<256, i16, 56>; // 256 x i16 vector value
+def v512i16 : VTVec<512, i16, 57>; // 512 x i16 vector value
+def v4096i16 : VTVec<4096, i16, 58>; // 4096 x i16 vector value
+
+def v1i32 : VTVec<1, i32, 59>; // 1 x i32 vector value
+def v2i32 : VTVec<2, i32, 60>; // 2 x i32 vector value
+def v3i32 : VTVec<3, i32, 61>; // 3 x i32 vector value
+def v4i32 : VTVec<4, i32, 62>; // 4 x i32 vector value
+def v5i32 : VTVec<5, i32, 63>; // 5 x i32 vector value
+def v6i32 : VTVec<6, i32, 64>; // 6 x i32 vector value
+def v7i32 : VTVec<7, i32, 65>; // 7 x i32 vector value
+def v8i32 : VTVec<8, i32, 66>; // 8 x i32 vector value
+def v9i32 : VTVec<9, i32, 67>; // 9 x i32 vector value
+def v10i32 : VTVec<10, i32, 68>; // 10 x i32 vector value
+def v11i32 : VTVec<11, i32, 69>; // 11 x i32 vector value
+def v12i32 : VTVec<12, i32, 70>; // 12 x i32 vector value
+def v16i32 : VTVec<16, i32, 71>; // 16 x i32 vector value
+def v32i32 : VTVec<32, i32, 72>; // 32 x i32 vector value
+def v64i32 : VTVec<64, i32, 73>; // 64 x i32 vector value
+def v128i32 : VTVec<128, i32, 74>; // 128 x i32 vector value
+def v256i32 : VTVec<256, i32, 75>; // 256 x i32 vector value
+def v512i32 : VTVec<512, i32, 76>; // 512 x i32 vector value
+def v1024i32 : VTVec<1024, i32, 77>; // 1024 x i32 vector value
+def v2048i32 : VTVec<2048, i32, 78>; // 2048 x i32 vector value
+def v4096i32 : VTVec<4096, i32, 79>; // 4096 x i32 vector value
+
+def v1i64 : VTVec<1, i64, 80>; // 1 x i64 vector value
+def v2i64 : VTVec<2, i64, 81>; // 2 x i64 vector value
+def v3i64 : VTVec<3, i64, 82>; // 3 x i64 vector value
+def v4i64 : VTVec<4, i64, 83>; // 4 x i64 vector value
+def v8i64 : VTVec<8, i64, 84>; // 8 x i64 vector value
+def v16i64 : VTVec<16, i64, 85>; // 16 x i64 vector value
+def v32i64 : VTVec<32, i64, 86>; // 32 x i64 vector value
+def v64i64 : VTVec<64, i64, 87>; // 64 x i64 vector value
+def v128i64 : VTVec<128, i64, 88>; // 128 x i64 vector value
+def v256i64 : VTVec<256, i64, 89>; // 256 x i64 vector value
+
+def v1i128 : VTVec<1, i128, 90>; // 1 x i128 vector value
+
+def v1f16 : VTVec<1, f16, 91>; // 1 x f16 vector value
+def v2f16 : VTVec<2, f16, 92>; // 2 x f16 vector value
+def v3f16 : VTVec<3, f16, 93>; // 3 x f16 vector value
+def v4f16 : VTVec<4, f16, 94>; // 4 x f16 vector value
+def v8f16 : VTVec<8, f16, 95>; // 8 x f16 vector value
+def v16f16 : VTVec<16, f16, 96>; // 16 x f16 vector value
+def v32f16 : VTVec<32, f16, 97>; // 32 x f16 vector value
+def v64f16 : VTVec<64, f16, 98>; // 64 x f16 vector value
+def v128f16 : VTVec<128, f16, 99>; // 128 x f16 vector value
+def v256f16 : VTVec<256, f16, 100>; // 256 x f16 vector value
+def v512f16 : VTVec<512, f16, 101>; // 512 x f16 vector value
+def v4096f16 : VTVec<4096, f16, 102>; // 4096 x f16 vector value
+
+def v1bf16 : VTVec<1, bf16, 103>; // 1 x bf16 vector value
+def v2bf16 : VTVec<2, bf16, 104>; // 2 x bf16 vector value
+def v3bf16 : VTVec<3, bf16, 105>; // 3 x bf16 vector value
+def v4bf16 : VTVec<4, bf16, 106>; // 4 x bf16 vector value
+def v8bf16 : VTVec<8, bf16, 107>; // 8 x bf16 vector value
+def v16bf16 : VTVec<16, bf16, 108>; // 16 x bf16 vector value
+def v32bf16 : VTVec<32, bf16, 109>; // 32 x bf16 vector value
+def v64bf16 : VTVec<64, bf16, 110>; // 64 x bf16 vector value
+def v128bf16 : VTVec<128, bf16, 111>; // 128 x bf16 vector value
+def v4096bf16 : VTVec<4096, bf16, 112>; // 4096 x bf16 vector value
+
+def v1f32 : VTVec<1, f32, 113>; // 1 x f32 vector value
+def v2f32 : VTVec<2, f32, 114>; // 2 x f32 vector value
+def v3f32 : VTVec<3, f32, 115>; // 3 x f32 vector value
+def v4f32 : VTVec<4, f32, 116>; // 4 x f32 vector value
+def v5f32 : VTVec<5, f32, 117>; // 5 x f32 vector value
+def v6f32 : VTVec<6, f32, 118>; // 6 x f32 vector value
+def v7f32 : VTVec<7, f32, 119>; // 7 x f32 vector value
+def v8f32 : VTVec<8, f32, 120>; // 8 x f32 vector value
+def v9f32 : VTVec<9, f32, 121>; // 9 x f32 vector value
+def v10f32 : VTVec<10, f32, 122>; // 10 x f32 vector value
+def v11f32 : VTVec<11, f32, 123>; // 11 x f32 vector value
+def v12f32 : VTVec<12, f32, 124>; // 12 x f32 vector value
+def v16f32 : VTVec<16, f32, 125>; // 16 x f32 vector value
+def v32f32 : VTVec<32, f32, 126>; // 32 x f32 vector value
+def v64f32 : VTVec<64, f32, 127>; // 64 x f32 vector value
+def v128f32 : VTVec<128, f32, 128>; // 128 x f32 vector value
+def v256f32 : VTVec<256, f32, 129>; // 256 x f32 vector value
+def v512f32 : VTVec<512, f32, 130>; // 512 x f32 vector value
+def v1024f32 : VTVec<1024, f32, 131>; // 1024 x f32 vector value
+def v2048f32 : VTVec<2048, f32, 132>; // 2048 x f32 vector value
+
+def v1f64 : VTVec<1, f64, 133>; // 1 x f64 vector value
+def v2f64 : VTVec<2, f64, 134>; // 2 x f64 vector value
+def v3f64 : VTVec<3, f64, 135>; // 3 x f64 vector value
+def v4f64 : VTVec<4, f64, 136>; // 4 x f64 vector value
+def v8f64 : VTVec<8, f64, 137>; // 8 x f64 vector value
+def v16f64 : VTVec<16, f64, 138>; // 16 x f64 vector value
+def v32f64 : VTVec<32, f64, 139>; // 32 x f64 vector value
+def v64f64 : VTVec<64, f64, 140>; // 64 x f64 vector value
+def v128f64 : VTVec<128, f64, 141>; // 128 x f64 vector value
+def v256f64 : VTVec<256, f64, 142>; // 256 x f64 vector value
+
+def nxv1i1 : VTScalableVec<1, i1, 143>; // n x 1 x i1 vector value
+def nxv2i1 : VTScalableVec<2, i1, 144>; // n x 2 x i1 vector value
+def nxv4i1 : VTScalableVec<4, i1, 145>; // n x 4 x i1 vector value
+def nxv8i1 : VTScalableVec<8, i1, 146>; // n x 8 x i1 vector value
+def nxv16i1 : VTScalableVec<16, i1, 147>; // n x 16 x i1 vector value
+def nxv32i1 : VTScalableVec<32, i1, 148>; // n x 32 x i1 vector value
+def nxv64i1 : VTScalableVec<64, i1, 149>; // n x 64 x i1 vector value
+
+def nxv1i8 : VTScalableVec<1, i8, 150>; // n x 1 x i8 vector value
+def nxv2i8 : VTScalableVec<2, i8, 151>; // n x 2 x i8 vector value
+def nxv4i8 : VTScalableVec<4, i8, 152>; // n x 4 x i8 vector value
+def nxv8i8 : VTScalableVec<8, i8, 153>; // n x 8 x i8 vector value
+def nxv16i8 : VTScalableVec<16, i8, 154>; // n x 16 x i8 vector value
+def nxv32i8 : VTScalableVec<32, i8, 155>; // n x 32 x i8 vector value
+def nxv64i8 : VTScalableVec<64, i8, 156>; // n x 64 x i8 vector value
+
+def nxv1i16 : VTScalableVec<1, i16, 157>; // n x 1 x i16 vector value
+def nxv2i16 : VTScalableVec<2, i16, 158>; // n x 2 x i16 vector value
+def nxv4i16 : VTScalableVec<4, i16, 159>; // n x 4 x i16 vector value
+def nxv8i16 : VTScalableVec<8, i16, 160>; // n x 8 x i16 vector value
+def nxv16i16 : VTScalableVec<16, i16, 161>; // n x 16 x i16 vector value
+def nxv32i16 : VTScalableVec<32, i16, 162>; // n x 32 x i16 vector value
+
+def nxv1i32 : VTScalableVec<1, i32, 163>; // n x 1 x i32 vector value
+def nxv2i32 : VTScalableVec<2, i32, 164>; // n x 2 x i32 vector value
+def nxv4i32 : VTScalableVec<4, i32, 165>; // n x 4 x i32 vector value
+def nxv8i32 : VTScalableVec<8, i32, 166>; // n x 8 x i32 vector value
+def nxv16i32 : VTScalableVec<16, i32, 167>; // n x 16 x i32 vector value
+def nxv32i32 : VTScalableVec<32, i32, 168>; // n x 32 x i32 vector value
+
+def nxv1i64 : VTScalableVec<1, i64, 169>; // n x 1 x i64 vector value
+def nxv2i64 : VTScalableVec<2, i64, 170>; // n x 2 x i64 vector value
+def nxv4i64 : VTScalableVec<4, i64, 171>; // n x 4 x i64 vector value
+def nxv8i64 : VTScalableVec<8, i64, 172>; // n x 8 x i64 vector value
+def nxv16i64 : VTScalableVec<16, i64, 173>; // n x 16 x i64 vector value
+def nxv32i64 : VTScalableVec<32, i64, 174>; // n x 32 x i64 vector value
+
+def nxv1f16 : VTScalableVec<1, f16, 175>; // n x 1 x f16 vector value
+def nxv2f16 : VTScalableVec<2, f16, 176>; // n x 2 x f16 vector value
+def nxv4f16 : VTScalableVec<4, f16, 177>; // n x 4 x f16 vector value
+def nxv8f16 : VTScalableVec<8, f16, 178>; // n x 8 x f16 vector value
+def nxv16f16 : VTScalableVec<16, f16, 179>; // n x 16 x f16 vector value
+def nxv32f16 : VTScalableVec<32, f16, 180>; // n x 32 x f16 vector value
+
+def nxv1bf16 : VTScalableVec<1, bf16, 181>; // n x 1 x bf16 vector value
+def nxv2bf16 : VTScalableVec<2, bf16, 182>; // n x 2 x bf16 vector value
+def nxv4bf16 : VTScalableVec<4, bf16, 183>; // n x 4 x bf16 vector value
+def nxv8bf16 : VTScalableVec<8, bf16, 184>; // n x 8 x bf16 vector value
+def nxv16bf16 : VTScalableVec<16, bf16, 185>; // n x 16 x bf16 vector value
+def nxv32bf16 : VTScalableVec<32, bf16, 186>; // n x 32 x bf16 vector value
+
+def nxv1f32 : VTScalableVec<1, f32, 187>; // n x 1 x f32 vector value
+def nxv2f32 : VTScalableVec<2, f32, 188>; // n x 2 x f32 vector value
+def nxv4f32 : VTScalableVec<4, f32, 189>; // n x 4 x f32 vector value
+def nxv8f32 : VTScalableVec<8, f32, 190>; // n x 8 x f32 vector value
+def nxv16f32 : VTScalableVec<16, f32, 191>; // n x 16 x f32 vector value
+
+def nxv1f64 : VTScalableVec<1, f64, 192>; // n x 1 x f64 vector value
+def nxv2f64 : VTScalableVec<2, f64, 193>; // n x 2 x f64 vector value
+def nxv4f64 : VTScalableVec<4, f64, 194>; // n x 4 x f64 vector value
+def nxv8f64 : VTScalableVec<8, f64, 195>; // n x 8 x f64 vector value
// Sz = NF * MinNumElts * 8(bits)
-def riscv_nxv1i8x2 : VTVecTup<16, 2, i8, 208>; // RISCV vector tuple(min_num_elts=1, nf=2)
-def riscv_nxv1i8x3 : VTVecTup<24, 3, i8, 209>; // RISCV vector tuple(min_num_elts=1, nf=3)
-def riscv_nxv1i8x4 : VTVecTup<32, 4, i8, 210>; // RISCV vector tuple(min_num_elts=1, nf=4)
-def riscv_nxv1i8x5 : VTVecTup<40, 5, i8, 211>; // RISCV vector tuple(min_num_elts=1, nf=5)
-def riscv_nxv1i8x6 : VTVecTup<48, 6, i8, 212>; // RISCV vector tuple(min_num_elts=1, nf=6)
-def riscv_nxv1i8x7 : VTVecTup<56, 7, i8, 213>; // RISCV vector tuple(min_num_elts=1, nf=7)
-def riscv_nxv1i8x8 : VTVecTup<64, 8, i8, 214>; // RISCV vector tuple(min_num_elts=1, nf=8)
-def riscv_nxv2i8x2 : VTVecTup<32, 2, i8, 215>; // RISCV vector tuple(min_num_elts=2, nf=2)
-def riscv_nxv2i8x3 : VTVecTup<48, 3, i8, 216>; // RISCV vector tuple(min_num_elts=2, nf=3)
-def riscv_nxv2i8x4 : VTVecTup<64, 4, i8, 217>; // RISCV vector tuple(min_num_elts=2, nf=4)
-def riscv_nxv2i8x5 : VTVecTup<80, 5, i8, 218>; // RISCV vector tuple(min_num_elts=2, nf=5)
-def riscv_nxv2i8x6 : VTVecTup<96, 6, i8, 219>; // RISCV vector tuple(min_num_elts=2, nf=6)
-def riscv_nxv2i8x7 : VTVecTup<112, 7, i8, 220>; // RISCV vector tuple(min_num_elts=2, nf=7)
-def riscv_nxv2i8x8 : VTVecTup<128, 8, i8, 221>; // RISCV vector tuple(min_num_elts=2, nf=8)
-def riscv_nxv4i8x2 : VTVecTup<64, 2, i8, 222>; // RISCV vector tuple(min_num_elts=4, nf=2)
-def riscv_nxv4i8x3 : VTVecTup<96, 3, i8, 223>; // RISCV vector tuple(min_num_elts=4, nf=3)
-def riscv_nxv4i8x4 : VTVecTup<128, 4, i8, 224>; // RISCV vector tuple(min_num_elts=4, nf=4)
-def riscv_nxv4i8x5 : VTVecTup<160, 5, i8, 225>; // RISCV vector tuple(min_num_elts=4, nf=5)
-def riscv_nxv4i8x6 : VTVecTup<192, 6, i8, 226>; // RISCV vector tuple(min_num_elts=4, nf=6)
-def riscv_nxv4i8x7 : VTVecTup<224, 7, i8, 227>; // RISCV vector tuple(min_num_elts=4, nf=7)
-def riscv_nxv4i8x8 : VTVecTup<256, 8, i8, 228>; // RISCV vector tuple(min_num_elts=4, nf=8)
-def riscv_nxv8i8x2 : VTVecTup<128, 2, i8, 229>; // RISCV vector tuple(min_num_elts=8, nf=2)
-def riscv_nxv8i8x3 : VTVecTup<192, 3, i8, 230>; // RISCV vector tuple(min_num_elts=8, nf=3)
-def riscv_nxv8i8x4 : VTVecTup<256, 4, i8, 231>; // RISCV vector tuple(min_num_elts=8, nf=4)
-def riscv_nxv8i8x5 : VTVecTup<320, 5, i8, 232>; // RISCV vector tuple(min_num_elts=8, nf=5)
-def riscv_nxv8i8x6 : VTVecTup<384, 6, i8, 233>; // RISCV vector tuple(min_num_elts=8, nf=6)
-def riscv_nxv8i8x7 : VTVecTup<448, 7, i8, 234>; // RISCV vector tuple(min_num_elts=8, nf=7)
-def riscv_nxv8i8x8 : VTVecTup<512, 8, i8, 235>; // RISCV vector tuple(min_num_elts=8, nf=8)
-def riscv_nxv16i8x2 : VTVecTup<256, 2, i8, 236>; // RISCV vector tuple(min_num_elts=16, nf=2)
-def riscv_nxv16i8x3 : VTVecTup<384, 3, i8, 237>; // RISCV vector tuple(min_num_elts=16, nf=3)
-def riscv_nxv16i8x4 : VTVecTup<512, 4, i8, 238>; // RISCV vector tuple(min_num_elts=16, nf=4)
-def riscv_nxv32i8x2 : VTVecTup<512, 2, i8, 239>; // RISCV vector tuple(min_num_elts=32, nf=2)
-
-def x86mmx : ValueType<64, 240>; // X86 MMX value
-def Glue : ValueType<0, 241>; // Pre-RA sched glue
-def isVoid : ValueType<0, 242>; // Produces no value
-def untyped : ValueType<8, 243> { // Produces an untyped value
+def riscv_nxv1i8x2 : VTVecTup<16, 2, i8, 196>; // RISCV vector tuple(min_num_elts=1, nf=2)
+def riscv_nxv1i8x3 : VTVecTup<24, 3, i8, 197>; // RISCV vector tuple(min_num_elts=1, nf=3)
+def riscv_nxv1i8x4 : VTVecTup<32, 4, i8, 198>; // RISCV vector tuple(min_num_elts=1, nf=4)
+def riscv_nxv1i8x5 : VTVecTup<40, 5, i8, 199>; // RISCV vector tuple(min_num_elts=1, nf=5)
+def riscv_nxv1i8x6 : VTVecTup<48, 6, i8, 200>; // RISCV vector tuple(min_num_elts=1, nf=6)
+def riscv_nxv1i8x7 : VTVecTup<56, 7, i8, 201>; // RISCV vector tuple(min_num_elts=1, nf=7)
+def riscv_nxv1i8x8 : VTVecTup<64, 8, i8, 202>; // RISCV vector tuple(min_num_elts=1, nf=8)
+def riscv_nxv2i8x2 : VTVecTup<32, 2, i8, 203>; // RISCV vector tuple(min_num_elts=2, nf=2)
+def riscv_nxv2i8x3 : VTVecTup<48, 3, i8, 204>; // RISCV vector tuple(min_num_elts=2, nf=3)
+def riscv_nxv2i8x4 : VTVecTup<64, 4, i8, 205>; // RISCV vector tuple(min_num_elts=2, nf=4)
+def riscv_nxv2i8x5 : VTVecTup<80, 5, i8, 206>; // RISCV vector tuple(min_num_elts=2, nf=5)
+def riscv_nxv2i8x6 : VTVecTup<96, 6, i8, 207>; // RISCV vector tuple(min_num_elts=2, nf=6)
+def riscv_nxv2i8x7 : VTVecTup<112, 7, i8, 208>; // RISCV vector tuple(min_num_elts=2, nf=7)
+def riscv_nxv2i8x8 : VTVecTup<128, 8, i8, 209>; // RISCV vector tuple(min_num_elts=2, nf=8)
+def riscv_nxv4i8x2 : VTVecTup<64, 2, i8, 210>; // RISCV vector tuple(min_num_elts=4, nf=2)
+def riscv_nxv4i8x3 : VTVecTup<96, 3, i8, 211>; // RISCV vector tuple(min_num_elts=4, nf=3)
+def riscv_nxv4i8x4 : VTVecTup<128, 4, i8, 212>; // RISCV vector tuple(min_num_elts=4, nf=4)
+def riscv_nxv4i8x5 : VTVecTup<160, 5, i8, 213>; // RISCV vector tuple(min_num_elts=4, nf=5)
+def riscv_nxv4i8x6 : VTVecTup<192, 6, i8, 214>; // RISCV vector tuple(min_num_elts=4, nf=6)
+def riscv_nxv4i8x7 : VTVecTup<224, 7, i8, 215>; // RISCV vector tuple(min_num_elts=4, nf=7)
+def riscv_nxv4i8x8 : VTVecTup<256, 8, i8, 216>; // RISCV vector tuple(min_num_elts=4, nf=8)
+def riscv_nxv8i8x2 : VTVecTup<128, 2, i8, 217>; // RISCV vector tuple(min_num_elts=8, nf=2)
+def riscv_nxv8i8x3 : VTVecTup<192, 3, i8, 218>; // RISCV vector tuple(min_num_elts=8, nf=3)
+def riscv_nxv8i8x4 : VTVecTup<256, 4, i8, 219>; // RISCV vector tuple(min_num_elts=8, nf=4)
+def riscv_nxv8i8x5 : VTVecTup<320, 5, i8, 220>; // RISCV vector tuple(min_num_elts=8, nf=5)
+def riscv_nxv8i8x6 : VTVecTup<384, 6, i8, 221>; // RISCV vector tuple(min_num_elts=8, nf=6)
+def riscv_nxv8i8x7 : VTVecTup<448, 7, i8, 222>; // RISCV vector tuple(min_num_elts=8, nf=7)
+def riscv_nxv8i8x8 : VTVecTup<512, 8, i8, 223>; // RISCV vector tuple(min_num_elts=8, nf=8)
+def riscv_nxv16i8x2 : VTVecTup<256, 2, i8, 224>; // RISCV vector tuple(min_num_elts=16, nf=2)
+def riscv_nxv16i8x3 : VTVecTup<384, 3, i8, 225>; // RISCV vector tuple(min_num_elts=16, nf=3)
+def riscv_nxv16i8x4 : VTVecTup<512, 4, i8, 226>; // RISCV vector tuple(min_num_elts=16, nf=4)
+def riscv_nxv32i8x2 : VTVecTup<512, 2, i8, 227>; // RISCV vector tuple(min_num_elts=32, nf=2)
+
+def x86mmx : ValueType<64, 228>; // X86 MMX value
+def Glue : ValueType<0, 229>; // Pre-RA sched glue
+def isVoid : ValueType<0, 230>; // Produces no value
+def untyped : ValueType<8, 231> { // Produces an untyped value
let LLVMName = "Untyped";
}
-def funcref : ValueType<0, 244>; // WebAssembly's funcref type
-def externref : ValueType<0, 245>; // WebAssembly's externref type
-def exnref : ValueType<0, 246>; // WebAssembly's exnref type
-def x86amx : ValueType<8192, 247>; // X86 AMX value
-def i64x8 : ValueType<512, 248>; // 8 Consecutive GPRs (AArch64)
+def funcref : ValueType<0, 232>; // WebAssembly's funcref type
+def externref : ValueType<0, 233>; // WebAssembly's externref type
+def exnref : ValueType<0, 234>; // WebAssembly's exnref type
+def x86amx : ValueType<8192, 235>; // X86 AMX value
+def i64x8 : ValueType<512, 236>; // 8 Consecutive GPRs (AArch64)
def aarch64svcount
- : ValueType<16, 249>; // AArch64 predicate-as-counter
-def spirvbuiltin : ValueType<0, 250>; // SPIR-V's builtin type
+ : ValueType<16, 237>; // AArch64 predicate-as-counter
+def spirvbuiltin : ValueType<0, 238>; // SPIR-V's builtin type
// AMDGPU buffer fat pointer, buffer rsrc + offset, rewritten before MIR translation.
// FIXME: Remove this and the getPointerType() override if MVT::i160 is added.
-def amdgpuBufferFatPointer : ValueType<160, 251>;
+def amdgpuBufferFatPointer : ValueType<160, 239>;
// AMDGPU buffer strided pointer, buffer rsrc + index + offset, doesn't reach MIR.
// FIXME: Remove this and the getPointerType() override if MVT::i82 is added.
-def amdgpuBufferStridedPointer : ValueType<192, 252>;
+def amdgpuBufferStridedPointer : ValueType<192, 240>;
-def aarch64mfp8 : ValueType<8, 253>; // 8-bit value in FPR (AArch64)
+def aarch64mfp8 : ValueType<8, 241>; // 8-bit value in FPR (AArch64)
let isNormalValueType = false in {
def token : ValueType<0, 504>; // TokenTy
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
index 7771f9b70c78b..64e68ab7d753c 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
@@ -367,18 +367,6 @@ AMDGPUTargetLowering::AMDGPUTargetLowering(const TargetMachine &TM,
setTruncStoreAction(MVT::v4f64, MVT::v4bf16, Expand);
setTruncStoreAction(MVT::v4f64, MVT::v4f16, Expand);
- setTruncStoreAction(MVT::v5i32, MVT::v5i1, Expand);
- setTruncStoreAction(MVT::v5i32, MVT::v5i8, Expand);
- setTruncStoreAction(MVT::v5i32, MVT::v5i16, Expand);
-
- setTruncStoreAction(MVT::v6i32, MVT::v6i1, Expand);
- setTruncStoreAction(MVT::v6i32, MVT::v6i8, Expand);
- setTruncStoreAction(MVT::v6i32, MVT::v6i16, Expand);
-
- setTruncStoreAction(MVT::v7i32, MVT::v7i1, Expand);
- setTruncStoreAction(MVT::v7i32, MVT::v7i8, Expand);
- setTruncStoreAction(MVT::v7i32, MVT::v7i16, Expand);
-
setTruncStoreAction(MVT::v8f64, MVT::v8f32, Expand);
setTruncStoreAction(MVT::v8f64, MVT::v8bf16, Expand);
setTruncStoreAction(MVT::v8f64, MVT::v8f16, Expand);
>From d44754c344885c249f381ace54ab1947e2d0f9fc Mon Sep 17 00:00:00 2001
From: Matt Arsenault <Matthew.Arsenault at amd.com>
Date: Wed, 6 Aug 2025 08:48:01 +0900
Subject: [PATCH 017/171] ARM: Remove redundant or buggy config of __aeabi_d2h
(#152126)
This was set if `TT.isTargetAEABI()`, while the same libcall was
previously set above if `TM.isAAPCS_ABI() && (TT.isTargetAEABI() ||
TT.isTargetGNUAEABI() || TT.isTargetMuslAEABI() || TT.isAndroid())`.
So the result could differ based on a manually specified -target-abi
flag, due to the `isAAPCS_ABI` part of the original condition. I'm
guessing these should be consistent, so either this second group of
setLibcallImpl calls should have been guarded by the `isAAPCS_ABI`
check, or the first condition should drop it.
There doesn't appear to be any meaningful test coverage using the
manually specified ABI option, so #152108 tries to remove it.
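As a rough sketch of the inconsistency (not the actual code; `TM` and
`TT` stand in for the ARMTargetLowering state):

    // First group of setLibcallImpl calls: gated on the selected ABI.
    bool FirstGuard = TM.isAAPCS_ABI() &&
                      (TT.isTargetAEABI() || TT.isTargetGNUAEABI() ||
                       TT.isTargetMuslAEABI() || TT.isAndroid());
    // Second group (including __aeabi_d2h): gated on the triple alone.
    bool SecondGuard = TT.isTargetAEABI();
    // With a manually specified -target-abi, isAAPCS_ABI() can be false
    // while isTargetAEABI() is true, so the two groups can disagree.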
---
llvm/lib/Target/ARM/ARMISelLowering.cpp | 1 -
1 file changed, 1 deletion(-)
diff --git a/llvm/lib/Target/ARM/ARMISelLowering.cpp b/llvm/lib/Target/ARM/ARMISelLowering.cpp
index 7f8b4460bb814..18766c8c6befa 100644
--- a/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -737,7 +737,6 @@ ARMTargetLowering::ARMTargetLowering(const TargetMachine &TM_,
const RTLIB::LibcallImpl Impl;
} LibraryCalls[] = {
{RTLIB::FPROUND_F32_F16, RTLIB::__aeabi_f2h},
- {RTLIB::FPROUND_F64_F16, RTLIB::__aeabi_d2h},
{RTLIB::FPEXT_F16_F32, RTLIB::__aeabi_h2f},
};
>From 9e9bf7ac5bc8bf7c2b07dbc8a09d7cb494c70acb Mon Sep 17 00:00:00 2001
From: Amy Huang <akhuang at google.com>
Date: Tue, 5 Aug 2025 16:57:16 -0700
Subject: [PATCH 018/171] Update the cmake options in libc windows build docs
(#152205)
Add the LLVM_ENABLE_RUNTIMES option and remove some options that are
out of date or already the default.
---
libc/config/windows/README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/libc/config/windows/README.md b/libc/config/windows/README.md
index 3ac058e207a65..ee5d5fb66e8fc 100644
--- a/libc/config/windows/README.md
+++ b/libc/config/windows/README.md
@@ -59,7 +59,7 @@ libc, and finally, build and test the libc.
by Clang, so ensure Clang is specified as the C and C++ compiler.
```
- cmake -G Ninja ../llvm-project/llvm -DCMAKE_C_COMPILER=C:/src/clang-build/bin/clang-cl.exe -DCMAKE_CXX_COMPILER=C:/src/clang-build/bin/clang-cl.exe -DLLVM_TARGETS_TO_BUILD=X86 -DLLVM_FORCE_BUILD_RUNTIME=libc -DLLVM_ENABLE_PROJECTS=libc -DLLVM_NATIVE_ARCH=x86_64 -DLLVM_HOST_TRIPLE=x86_64-window-x86-gnu
+ cmake -G Ninja ../llvm-project/runtimes -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=C:/src/clang-build/bin/clang-cl.exe -DCMAKE_CXX_COMPILER=C:/src/clang-build/bin/clang-cl.exe -DLLVM_ENABLE_RUNTIMES=libc
```
Some LLVM libc math unittests test correctness/accuracy against results from
>From 7b9786b4a602839179e7436620956a5597ffc0a3 Mon Sep 17 00:00:00 2001
From: Rahul Joshi <rjoshi at nvidia.com>
Date: Tue, 5 Aug 2025 17:26:08 -0700
Subject: [PATCH 019/171] [NFC][TableGen] Capitalize comments in TGLexer.cpp
(#152224)
---
llvm/lib/TableGen/TGLexer.cpp | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/llvm/lib/TableGen/TGLexer.cpp b/llvm/lib/TableGen/TGLexer.cpp
index 7b23736169bba..30eae6e7837cb 100644
--- a/llvm/lib/TableGen/TGLexer.cpp
+++ b/llvm/lib/TableGen/TGLexer.cpp
@@ -500,7 +500,7 @@ bool TGLexer::LexInclude() {
/// SkipBCPLComment - Skip over the comment by finding the next CR or LF.
/// Or we may end up at the end of the buffer.
void TGLexer::SkipBCPLComment() {
- ++CurPtr; // skip the second slash.
+ ++CurPtr; // Skip the second slash.
auto EOLPos = CurBuf.find_first_of("\r\n", CurPtr - CurBuf.data());
CurPtr = (EOLPos == StringRef::npos) ? CurBuf.end() : CurBuf.data() + EOLPos;
}
@@ -508,7 +508,7 @@ void TGLexer::SkipBCPLComment() {
/// SkipCComment - This skips C-style /**/ comments. The only difference from C
/// is that we allow nesting.
bool TGLexer::SkipCComment() {
- ++CurPtr; // skip the star.
+ ++CurPtr; // Skip the star.
unsigned CommentDepth = 1;
while (true) {
>From ab209cafdcd8759302f04205d2f09f4243149f0f Mon Sep 17 00:00:00 2001
From: Peter Collingbourne <peter at pcc.me.uk>
Date: Tue, 5 Aug 2025 17:31:15 -0700
Subject: [PATCH 020/171] Add link to CFI design doc to the documentation
index.
---
clang/docs/index.rst | 1 +
1 file changed, 1 insertion(+)
diff --git a/clang/docs/index.rst b/clang/docs/index.rst
index 4871d05e932ae..542bfc94cd576 100644
--- a/clang/docs/index.rst
+++ b/clang/docs/index.rst
@@ -117,6 +117,7 @@ Design Documents
OffloadingDesign
PCHInternals
ItaniumMangleAbiTags
+ ControlFlowIntegrityDesign
HardwareAssistedAddressSanitizerDesign.rst
ConstantInterpreter
>From de09c27f5e4192e8d5f1ba6661c40d6337409a4d Mon Sep 17 00:00:00 2001
From: Matt Arsenault <Matthew.Arsenault at amd.com>
Date: Wed, 6 Aug 2025 09:41:37 +0900
Subject: [PATCH 021/171] ARM: Simplify logic for default libcall calling
convention (#152166)
---
llvm/include/llvm/IR/RuntimeLibcalls.td | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/llvm/include/llvm/IR/RuntimeLibcalls.td b/llvm/include/llvm/IR/RuntimeLibcalls.td
index 5d1015e585e47..553a302aa47b6 100644
--- a/llvm/include/llvm/IR/RuntimeLibcalls.td
+++ b/llvm/include/llvm/IR/RuntimeLibcalls.td
@@ -1511,10 +1511,9 @@ def ARMSystemLibrary
(!TT.isiOS() || !TT.isOSVersionLT(5, 0))}]>>,
DefaultStackProtector)> {
let DefaultLibcallCallingConv = LibcallCallingConv<[{
- (!TT.isOSDarwin() && !TT.isiOS() && !TT.isWatchOS() && !TT.isDriverKit()) ?
+ TT.isOSDarwin() ? CallingConv::C :
(FloatABI == FloatABI::Hard ? CallingConv::ARM_AAPCS_VFP
- : CallingConv::ARM_AAPCS) :
- CallingConv::C
+ : CallingConv::ARM_AAPCS)
}]>;
}
>From 726847829553079a13b1b7104f2c2db9dcda9c1d Mon Sep 17 00:00:00 2001
From: Oliver Hunt <oliver at apple.com>
Date: Tue, 5 Aug 2025 17:41:55 -0700
Subject: [PATCH 022/171] [clang][PAC] Fix PAC codegen for final class
dynamic_cast optimization (#152227)
The codegen for the final class dynamic_cast optimization fails to
consider pointer authentication. This change resolves this by simply
disabling the optimization when pointer authentication is enabled.
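A minimal illustration, mirroring the shape of the test below (the
function is hypothetical; only the class layout comes from the test):

    struct A { virtual ~A(); };
    struct B final : A { };
    B *cast(A *a) {
      // With -fptrauth-calls this now lowers to the generic
      // __dynamic_cast runtime call instead of the exact vtable
      // comparison fast path.
      return dynamic_cast<B *>(a);
    }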
---
clang/lib/CodeGen/CGExprCXX.cpp | 3 ++-
clang/test/CodeGenCXX/dynamic-cast-exact-disabled.cpp | 1 +
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/clang/lib/CodeGen/CGExprCXX.cpp b/clang/lib/CodeGen/CGExprCXX.cpp
index c7e53331fe05f..fef1baf25d1d2 100644
--- a/clang/lib/CodeGen/CGExprCXX.cpp
+++ b/clang/lib/CodeGen/CGExprCXX.cpp
@@ -2292,7 +2292,8 @@ llvm::Value *CodeGenFunction::EmitDynamicCast(Address ThisAddr,
bool IsExact = !IsDynamicCastToVoid &&
CGM.getCodeGenOpts().OptimizationLevel > 0 &&
DestRecordTy->getAsCXXRecordDecl()->isEffectivelyFinal() &&
- CGM.getCXXABI().shouldEmitExactDynamicCast(DestRecordTy);
+ CGM.getCXXABI().shouldEmitExactDynamicCast(DestRecordTy) &&
+ !getLangOpts().PointerAuthCalls;
// C++ [expr.dynamic.cast]p4:
// If the value of v is a null pointer value in the pointer case, the result
diff --git a/clang/test/CodeGenCXX/dynamic-cast-exact-disabled.cpp b/clang/test/CodeGenCXX/dynamic-cast-exact-disabled.cpp
index 9a8ce1997a7f9..19c2a9bd0497e 100644
--- a/clang/test/CodeGenCXX/dynamic-cast-exact-disabled.cpp
+++ b/clang/test/CodeGenCXX/dynamic-cast-exact-disabled.cpp
@@ -3,6 +3,7 @@
// RUN: %clang_cc1 -I%S %s -triple x86_64-apple-darwin10 -O1 -fvisibility=hidden -emit-llvm -std=c++11 -o - | FileCheck %s --check-prefixes=CHECK,INEXACT
// RUN: %clang_cc1 -I%S %s -triple x86_64-apple-darwin10 -O1 -fapple-kext -emit-llvm -std=c++11 -o - | FileCheck %s --check-prefixes=CHECK,INEXACT
// RUN: %clang_cc1 -I%S %s -triple x86_64-apple-darwin10 -O1 -fno-assume-unique-vtables -emit-llvm -std=c++11 -o - | FileCheck %s --check-prefixes=CHECK,INEXACT
+// RUN: %clang_cc1 -I%S %s -triple arm64e-apple-darwin10 -O1 -fptrauth-calls -emit-llvm -std=c++11 -o - | FileCheck %s --check-prefixes=CHECK,INEXACT
struct A { virtual ~A(); };
struct B final : A { };
>From f67548390582292253e1321a55cdf2c37abe08a7 Mon Sep 17 00:00:00 2001
From: Mircea Trofin <mtrofin at google.com>
Date: Wed, 6 Aug 2025 02:48:50 +0200
Subject: [PATCH 023/171] [profcheck] Annotate `select` instructions (#152171)
For `select`, we don't have the equivalent of the branch probability
analysis to offer defaults, so we make up our own and allow overriding
them with flags.
Issue #147390
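For reference, a sketch of the injected annotation with the default
weights, as exercised by the test below:

    ; Before prof-inject:
    %v = select i1 %c, i32 1, i32 2
    ; After prof-inject, with the defaults of
    ; -profcheck-default-select-true-weight=2 and
    ; -profcheck-default-select-false-weight=3:
    %v = select i1 %c, i32 1, i32 2, !prof !1
    !1 = !{!"branch_weights", i32 2, i32 3}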
---
llvm/lib/Transforms/Utils/ProfileVerify.cpp | 29 ++++++++-
.../Transforms/PGOProfile/profcheck-select.ll | 63 +++++++++++++++++++
2 files changed, 90 insertions(+), 2 deletions(-)
create mode 100644 llvm/test/Transforms/PGOProfile/profcheck-select.ll
diff --git a/llvm/lib/Transforms/Utils/ProfileVerify.cpp b/llvm/lib/Transforms/Utils/ProfileVerify.cpp
index d67192f9d44ee..0ffea3f53fef6 100644
--- a/llvm/lib/Transforms/Utils/ProfileVerify.cpp
+++ b/llvm/lib/Transforms/Utils/ProfileVerify.cpp
@@ -26,6 +26,18 @@ using namespace llvm;
static cl::opt<int64_t>
DefaultFunctionEntryCount("profcheck-default-function-entry-count",
cl::init(1000));
+static cl::opt<bool>
+ AnnotateSelect("profcheck-annotate-select", cl::init(true),
+ cl::desc("Also inject (if missing) and verify MD_prof for "
+ "`select` instructions"));
+static cl::opt<uint32_t> SelectTrueWeight(
+ "profcheck-default-select-true-weight", cl::init(2U),
+ cl::desc("When annotating `select` instructions, this value will be used "
+ "for the first ('true') case."));
+static cl::opt<uint32_t> SelectFalseWeight(
+ "profcheck-default-select-false-weight", cl::init(3U),
+ cl::desc("When annotating `select` instructions, this value will be used "
+ "for the second ('false') case."));
namespace {
class ProfileInjector {
Function &F;
@@ -82,6 +94,13 @@ bool ProfileInjector::inject() {
return false;
bool Changed = false;
for (auto &BB : F) {
+ if (AnnotateSelect) {
+ for (auto &I : BB) {
+ if (isa<SelectInst>(I) && !I.getMetadata(LLVMContext::MD_prof))
+ setBranchWeights(I, {SelectTrueWeight, SelectFalseWeight},
+ /*IsExpected=*/false);
+ }
+ }
auto *Term = getTerminatorBenefitingFromMDProf(BB);
if (!Term || Term->getMetadata(LLVMContext::MD_prof))
continue;
@@ -144,12 +163,18 @@ PreservedAnalyses ProfileVerifierPass::run(Function &F,
}
if (EntryCount->getCount() == 0)
return PreservedAnalyses::all();
- for (const auto &BB : F)
+ for (const auto &BB : F) {
+ if (AnnotateSelect) {
+ for (const auto &I : BB)
+ if (isa<SelectInst>(I) && !I.getMetadata(LLVMContext::MD_prof))
+ F.getContext().emitError(
+ "Profile verification failed: select annotation missing");
+ }
if (const auto *Term =
ProfileInjector::getTerminatorBenefitingFromMDProf(BB))
if (!Term->getMetadata(LLVMContext::MD_prof))
F.getContext().emitError(
"Profile verification failed: branch annotation missing");
-
+ }
return PreservedAnalyses::all();
}
diff --git a/llvm/test/Transforms/PGOProfile/profcheck-select.ll b/llvm/test/Transforms/PGOProfile/profcheck-select.ll
new file mode 100644
index 0000000000000..b5dc97d2d5a6d
--- /dev/null
+++ b/llvm/test/Transforms/PGOProfile/profcheck-select.ll
@@ -0,0 +1,63 @@
+; RUN: split-file %s %t
+
+; RUN: opt -passes=prof-inject %t/inject.ll -S -o - | FileCheck %t/inject.ll
+
+; RUN: opt -passes=prof-inject %t/inject-some.ll \
+; RUN: -profcheck-default-select-true-weight=1 -profcheck-default-select-false-weight=6 \
+; RUN: -S -o - | FileCheck %t/inject-some.ll
+
+; RUN: opt -passes=prof-verify %t/verify.ll 2>&1 | FileCheck %t/verify.ll
+
+; RUN: not opt -passes=prof-verify %t/verify-missing.ll 2>&1 | FileCheck %t/verify-missing.ll
+
+; verify we can disable it. It's sufficient to see opt not failing.
+; RUN: opt -passes=prof-verify -profcheck-annotate-select=0 %t/verify-missing.ll
+
+;--- inject.ll
+declare void @foo(i32 %a);
+define void @bar(i1 %c) {
+ %v = select i1 %c, i32 1, i32 2
+ call void @foo(i32 %v)
+ ret void
+}
+; CHECK-LABEL: @bar
+; CHECK: %v = select i1 %c, i32 1, i32 2, !prof !1
+; CHECK: !0 = !{!"function_entry_count", i64 1000}
+; CHECK: !1 = !{!"branch_weights", i32 2, i32 3}
+
+;--- inject-some.ll
+declare void @foo(i32 %a);
+define void @bar(i1 %c) {
+ %e = select i1 %c, i32 1, i32 2, !prof !0
+ %c2 = icmp eq i32 %e, 2
+ %v = select i1 %c2, i32 5, i32 10
+ call void @foo(i32 %v)
+ ret void
+}
+!0 = !{!"branch_weights", i32 2, i32 3}
+; CHECK-LABEL: @bar
+; CHECK: %v = select i1 %c2, i32 5, i32 10, !prof !2
+; CHECK: !0 = !{!"function_entry_count", i64 1000}
+; CHECK: !1 = !{!"branch_weights", i32 2, i32 3}
+; CHECK: !2 = !{!"branch_weights", i32 1, i32 6}
+
+;--- verify.ll
+declare void @foo(i32 %a);
+define void @bar(i1 %c) !prof !0 {
+ %v = select i1 %c, i32 1, i32 2, !prof !1
+ call void @foo(i32 %v)
+ ret void
+}
+!0 = !{!"function_entry_count", i64 1000}
+!1 = !{!"branch_weights", i32 1, i32 7}
+; CHECK-NOT: Profile verification failed: select annotation missing
+
+;--- verify-missing.ll
+declare void @foo(i32 %a);
+define void @bar(i1 %c) !prof !0 {
+ %v = select i1 %c, i32 1, i32 2
+ call void @foo(i32 %v)
+ ret void
+}
+!0 = !{!"function_entry_count", i64 1000}
+; CHECK: Profile verification failed: select annotation missing
\ No newline at end of file
>From 813e477155dee55d69297acbeac135b86ee72751 Mon Sep 17 00:00:00 2001
From: Peter Collingbourne <peter at pcc.me.uk>
Date: Tue, 5 Aug 2025 17:46:38 -0700
Subject: [PATCH 024/171] Mention MTE in the HWASan design doc.
---
clang/docs/HardwareAssistedAddressSanitizerDesign.rst | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/clang/docs/HardwareAssistedAddressSanitizerDesign.rst b/clang/docs/HardwareAssistedAddressSanitizerDesign.rst
index 20db41c032c56..014d10382e725 100644
--- a/clang/docs/HardwareAssistedAddressSanitizerDesign.rst
+++ b/clang/docs/HardwareAssistedAddressSanitizerDesign.rst
@@ -291,7 +291,7 @@ implement page aliasing.
Related Work
============
-* `SPARC ADI`_ implements a similar tool mostly in hardware.
+* `SPARC ADI`_ and `Arm MTE`_ implement a similar tool mostly in hardware.
* `Effective and Efficient Memory Protection Using Dynamic Tainting`_ discusses
similar approaches ("lock & key").
* `Watchdog`_ discussed a heavier, but still somewhat similar
@@ -302,6 +302,7 @@ Related Work
.. _Watchdog: https://www.cis.upenn.edu/acg/papers/isca12_watchdog.pdf
.. _Effective and Efficient Memory Protection Using Dynamic Tainting: https://www.cc.gatech.edu/~orso/papers/clause.doudalis.orso.prvulovic.pdf
.. _SPARC ADI: https://lazytyped.blogspot.com/2017/09/getting-started-with-adi.html
+.. _Arm MTE: https://developer.arm.com/documentation/108035/0100/Introduction-to-the-Memory-Tagging-Extension
.. _AddressSanitizer paper: https://www.usenix.org/system/files/conference/atc12/atc12-final39.pdf
.. _Address Tagging: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0024a/ch12s05s01.html
.. _Linear Address Masking: https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html
>From fe0948c9a5d4ad0255c94306b16ac977c2e84ee0 Mon Sep 17 00:00:00 2001
From: Steven Perron <stevenperron at google.com>
Date: Tue, 5 Aug 2025 21:00:06 -0400
Subject: [PATCH 025/171] [HLSL][SPIRV] Add vk::binding attribute (#150957)
The vk::binding attribute allows users to explicitly set the descriptor
set and binding for a resource in SPIR-V without changing the
"register" attribute, which will be used when targeting DXIL.
Fixes https://github.com/llvm/llvm-project/issues/136894
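For example (taken from the tests added below), a single resource can
carry independent SPIR-V and DXIL bindings:

    // SPIR-V: descriptor set 1, binding 14. DXIL: register t23, space102.
    [[vk::binding(14, 1)]] StructuredBuffer<float> Buf2 : register(t23, space102);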
---
clang/include/clang/Basic/Attr.td | 8 +++
clang/include/clang/Basic/AttrDocs.td | 26 ++++++++
clang/include/clang/Parse/Parser.h | 2 +-
clang/include/clang/Sema/SemaHLSL.h | 1 +
clang/lib/CodeGen/CGHLSLRuntime.cpp | 37 +++++++++--
clang/lib/CodeGen/CGHLSLRuntime.h | 4 ++
clang/lib/Parse/ParseDecl.cpp | 2 +-
clang/lib/Parse/ParseHLSL.cpp | 4 +-
clang/lib/Sema/SemaDeclAttr.cpp | 3 +
clang/lib/Sema/SemaHLSL.cpp | 29 ++++++++-
clang/test/AST/HLSL/vk_binding_attr.hlsl | 70 +++++++++++++++++++++
clang/test/CodeGenHLSL/vk_binding_attr.hlsl | 44 +++++++++++++
12 files changed, 220 insertions(+), 10 deletions(-)
create mode 100644 clang/test/AST/HLSL/vk_binding_attr.hlsl
create mode 100644 clang/test/CodeGenHLSL/vk_binding_attr.hlsl
diff --git a/clang/include/clang/Basic/Attr.td b/clang/include/clang/Basic/Attr.td
index 63fa6aa508827..30efb9f39e4f4 100644
--- a/clang/include/clang/Basic/Attr.td
+++ b/clang/include/clang/Basic/Attr.td
@@ -4887,6 +4887,14 @@ def HLSLSV_GroupIndex: HLSLAnnotationAttr {
let Documentation = [HLSLSV_GroupIndexDocs];
}
+def HLSLVkBinding : InheritableAttr {
+ let Spellings = [CXX11<"vk", "binding">];
+ let Subjects = SubjectList<[HLSLBufferObj, ExternalGlobalVar], ErrorDiag>;
+ let Args = [IntArgument<"Binding">, IntArgument<"Set", 1>];
+ let LangOpts = [HLSL];
+ let Documentation = [HLSLVkBindingDocs];
+}
+
def HLSLResourceBinding: InheritableAttr {
let Spellings = [HLSLAnnotation<"register">];
let Subjects = SubjectList<[HLSLBufferObj, ExternalGlobalVar], ErrorDiag>;
diff --git a/clang/include/clang/Basic/AttrDocs.td b/clang/include/clang/Basic/AttrDocs.td
index 326cf0bc842db..2b095ab975202 100644
--- a/clang/include/clang/Basic/AttrDocs.td
+++ b/clang/include/clang/Basic/AttrDocs.td
@@ -8781,6 +8781,32 @@ def ReadOnlyPlacementDocs : Documentation {
}];
}
+def HLSLVkBindingDocs : Documentation {
+ let Category = DocCatVariable;
+ let Content = [{
+The ``[[vk::binding]]`` attribute allows you to explicitly specify the descriptor
+set and binding for a resource when targeting SPIR-V. This is particularly
+useful when you need different bindings for SPIR-V and DXIL, as the ``register``
+attribute can be used for DXIL-specific bindings.
+
+The attribute takes two integer arguments: the binding and the descriptor set.
+The descriptor set is optional and defaults to 0 if not provided.
+
+.. code-block:: c++
+
+ // A structured buffer with binding 23 in descriptor set 102.
+ [[vk::binding(23, 102)]] StructuredBuffer<float> Buf;
+
+ // A structured buffer with binding 14 in descriptor set 0.
+ [[vk::binding(14)]] StructuredBuffer<float> Buf2;
+
+ // A cbuffer with binding 1 in descriptor set 2.
+ [[vk::binding(1, 2)]] cbuffer MyCBuffer {
+ float4x4 worldViewProj;
+ };
+ }];
+}
+
def WebAssemblyFuncrefDocs : Documentation {
let Category = DocCatType;
let Content = [{
diff --git a/clang/include/clang/Parse/Parser.h b/clang/include/clang/Parse/Parser.h
index 683934321a449..e9437e6d46366 100644
--- a/clang/include/clang/Parse/Parser.h
+++ b/clang/include/clang/Parse/Parser.h
@@ -5191,7 +5191,7 @@ class Parser : public CodeCompletionHandler {
void ParseHLSLAnnotations(ParsedAttributes &Attrs,
SourceLocation *EndLoc = nullptr,
bool CouldBeBitField = false);
- Decl *ParseHLSLBuffer(SourceLocation &DeclEnd);
+ Decl *ParseHLSLBuffer(SourceLocation &DeclEnd, ParsedAttributes &Attrs);
///@}
diff --git a/clang/include/clang/Sema/SemaHLSL.h b/clang/include/clang/Sema/SemaHLSL.h
index 116f568daf617..085c9ed9f3ebd 100644
--- a/clang/include/clang/Sema/SemaHLSL.h
+++ b/clang/include/clang/Sema/SemaHLSL.h
@@ -161,6 +161,7 @@ class SemaHLSL : public SemaBase {
void handleNumThreadsAttr(Decl *D, const ParsedAttr &AL);
void handleWaveSizeAttr(Decl *D, const ParsedAttr &AL);
void handleVkConstantIdAttr(Decl *D, const ParsedAttr &AL);
+ void handleVkBindingAttr(Decl *D, const ParsedAttr &AL);
void handleSV_DispatchThreadIDAttr(Decl *D, const ParsedAttr &AL);
void handleSV_GroupThreadIDAttr(Decl *D, const ParsedAttr &AL);
void handleSV_GroupIDAttr(Decl *D, const ParsedAttr &AL);
diff --git a/clang/lib/CodeGen/CGHLSLRuntime.cpp b/clang/lib/CodeGen/CGHLSLRuntime.cpp
index a47d1cc22980d..f64ac20545fa3 100644
--- a/clang/lib/CodeGen/CGHLSLRuntime.cpp
+++ b/clang/lib/CodeGen/CGHLSLRuntime.cpp
@@ -273,10 +273,14 @@ void CGHLSLRuntime::addBuffer(const HLSLBufferDecl *BufDecl) {
emitBufferGlobalsAndMetadata(BufDecl, BufGV);
// Initialize cbuffer from binding (implicit or explicit)
- HLSLResourceBindingAttr *RBA = BufDecl->getAttr<HLSLResourceBindingAttr>();
- assert(RBA &&
- "cbuffer/tbuffer should always have resource binding attribute");
- initializeBufferFromBinding(BufDecl, BufGV, RBA);
+ if (HLSLVkBindingAttr *VkBinding = BufDecl->getAttr<HLSLVkBindingAttr>()) {
+ initializeBufferFromBinding(BufDecl, BufGV, VkBinding);
+ } else {
+ HLSLResourceBindingAttr *RBA = BufDecl->getAttr<HLSLResourceBindingAttr>();
+ assert(RBA &&
+ "cbuffer/tbuffer should always have resource binding attribute");
+ initializeBufferFromBinding(BufDecl, BufGV, RBA);
+ }
}
llvm::TargetExtType *
@@ -593,6 +597,31 @@ static void initializeBuffer(CodeGenModule &CGM, llvm::GlobalVariable *GV,
CGM.AddCXXGlobalInit(InitResFunc);
}
+static Value *buildNameForResource(llvm::StringRef BaseName,
+ CodeGenModule &CGM) {
+ std::string Str(BaseName);
+ std::string GlobalName(Str + ".str");
+ return CGM.GetAddrOfConstantCString(Str, GlobalName.c_str()).getPointer();
+}
+
+void CGHLSLRuntime::initializeBufferFromBinding(const HLSLBufferDecl *BufDecl,
+ llvm::GlobalVariable *GV,
+ HLSLVkBindingAttr *VkBinding) {
+ assert(VkBinding && "expect a nonnull binding attribute");
+ llvm::Type *Int1Ty = llvm::Type::getInt1Ty(CGM.getLLVMContext());
+ auto *NonUniform = llvm::ConstantInt::get(Int1Ty, false);
+ auto *Index = llvm::ConstantInt::get(CGM.IntTy, 0);
+ auto *RangeSize = llvm::ConstantInt::get(CGM.IntTy, 1);
+ auto *Set = llvm::ConstantInt::get(CGM.IntTy, VkBinding->getSet());
+ auto *Binding = llvm::ConstantInt::get(CGM.IntTy, VkBinding->getBinding());
+ Value *Name = buildNameForResource(BufDecl->getName(), CGM);
+ llvm::Intrinsic::ID IntrinsicID =
+ CGM.getHLSLRuntime().getCreateHandleFromBindingIntrinsic();
+
+ SmallVector<Value *> Args{Set, Binding, RangeSize, Index, NonUniform, Name};
+ initializeBuffer(CGM, GV, IntrinsicID, Args);
+}
+
void CGHLSLRuntime::initializeBufferFromBinding(const HLSLBufferDecl *BufDecl,
llvm::GlobalVariable *GV,
HLSLResourceBindingAttr *RBA) {
diff --git a/clang/lib/CodeGen/CGHLSLRuntime.h b/clang/lib/CodeGen/CGHLSLRuntime.h
index 89d2aff85d913..31d1728da9c56 100644
--- a/clang/lib/CodeGen/CGHLSLRuntime.h
+++ b/clang/lib/CodeGen/CGHLSLRuntime.h
@@ -62,6 +62,7 @@ class VarDecl;
class ParmVarDecl;
class InitListExpr;
class HLSLBufferDecl;
+class HLSLVkBindingAttr;
class HLSLResourceBindingAttr;
class Type;
class RecordType;
@@ -166,6 +167,9 @@ class CGHLSLRuntime {
private:
void emitBufferGlobalsAndMetadata(const HLSLBufferDecl *BufDecl,
llvm::GlobalVariable *BufGV);
+ void initializeBufferFromBinding(const HLSLBufferDecl *BufDecl,
+ llvm::GlobalVariable *GV,
+ HLSLVkBindingAttr *VkBinding);
void initializeBufferFromBinding(const HLSLBufferDecl *BufDecl,
llvm::GlobalVariable *GV,
HLSLResourceBindingAttr *RBA);
diff --git a/clang/lib/Parse/ParseDecl.cpp b/clang/lib/Parse/ParseDecl.cpp
index 5fb3f92b2e7ce..fd53cca5a13ff 100644
--- a/clang/lib/Parse/ParseDecl.cpp
+++ b/clang/lib/Parse/ParseDecl.cpp
@@ -1901,7 +1901,7 @@ Parser::DeclGroupPtrTy Parser::ParseDeclaration(DeclaratorContext Context,
case tok::kw_cbuffer:
case tok::kw_tbuffer:
- SingleDecl = ParseHLSLBuffer(DeclEnd);
+ SingleDecl = ParseHLSLBuffer(DeclEnd, DeclAttrs);
break;
case tok::kw_namespace:
ProhibitAttributes(DeclAttrs);
diff --git a/clang/lib/Parse/ParseHLSL.cpp b/clang/lib/Parse/ParseHLSL.cpp
index e6caa81b309ca..f243b0cb95eae 100644
--- a/clang/lib/Parse/ParseHLSL.cpp
+++ b/clang/lib/Parse/ParseHLSL.cpp
@@ -48,7 +48,8 @@ static bool validateDeclsInsideHLSLBuffer(Parser::DeclGroupPtrTy DG,
return IsValid;
}
-Decl *Parser::ParseHLSLBuffer(SourceLocation &DeclEnd) {
+Decl *Parser::ParseHLSLBuffer(SourceLocation &DeclEnd,
+ ParsedAttributes &Attrs) {
assert((Tok.is(tok::kw_cbuffer) || Tok.is(tok::kw_tbuffer)) &&
"Not a cbuffer or tbuffer!");
bool IsCBuffer = Tok.is(tok::kw_cbuffer);
@@ -62,7 +63,6 @@ Decl *Parser::ParseHLSLBuffer(SourceLocation &DeclEnd) {
IdentifierInfo *Identifier = Tok.getIdentifierInfo();
SourceLocation IdentifierLoc = ConsumeToken();
- ParsedAttributes Attrs(AttrFactory);
MaybeParseHLSLAnnotations(Attrs, nullptr);
ParseScope BufferScope(this, Scope::DeclScope);
diff --git a/clang/lib/Sema/SemaDeclAttr.cpp b/clang/lib/Sema/SemaDeclAttr.cpp
index 16b18bcb6a2a0..f5f18b06c2fff 100644
--- a/clang/lib/Sema/SemaDeclAttr.cpp
+++ b/clang/lib/Sema/SemaDeclAttr.cpp
@@ -7441,6 +7441,9 @@ ProcessDeclAttribute(Sema &S, Scope *scope, Decl *D, const ParsedAttr &AL,
case ParsedAttr::AT_HLSLVkConstantId:
S.HLSL().handleVkConstantIdAttr(D, AL);
break;
+ case ParsedAttr::AT_HLSLVkBinding:
+ S.HLSL().handleVkBindingAttr(D, AL);
+ break;
case ParsedAttr::AT_HLSLSV_GroupThreadID:
S.HLSL().handleSV_GroupThreadIDAttr(D, AL);
break;
diff --git a/clang/lib/Sema/SemaHLSL.cpp b/clang/lib/Sema/SemaHLSL.cpp
index e59849add69f1..8536e04a129ee 100644
--- a/clang/lib/Sema/SemaHLSL.cpp
+++ b/clang/lib/Sema/SemaHLSL.cpp
@@ -597,8 +597,9 @@ void SemaHLSL::ActOnFinishBuffer(Decl *Dcl, SourceLocation RBrace) {
// create buffer layout struct
createHostLayoutStructForBuffer(SemaRef, BufDecl);
+ HLSLVkBindingAttr *VkBinding = Dcl->getAttr<HLSLVkBindingAttr>();
HLSLResourceBindingAttr *RBA = Dcl->getAttr<HLSLResourceBindingAttr>();
- if (!RBA || !RBA->hasRegisterSlot()) {
+ if (!VkBinding && (!RBA || !RBA->hasRegisterSlot())) {
SemaRef.Diag(Dcl->getLocation(), diag::warn_hlsl_implicit_binding);
// Use HLSLResourceBindingAttr to transfer implicit binding order_ID
// to codegen. If it does not exist, create an implicit attribute.
@@ -1496,6 +1497,23 @@ void SemaHLSL::handleVkConstantIdAttr(Decl *D, const ParsedAttr &AL) {
D->addAttr(NewAttr);
}
+void SemaHLSL::handleVkBindingAttr(Decl *D, const ParsedAttr &AL) {
+ // The vk::binding attribute only applies to SPIR-V.
+ if (!getASTContext().getTargetInfo().getTriple().isSPIRV())
+ return;
+
+ uint32_t Binding = 0;
+ if (!SemaRef.checkUInt32Argument(AL, AL.getArgAsExpr(0), Binding))
+ return;
+ uint32_t Set = 0;
+ if (AL.getNumArgs() > 1 &&
+ !SemaRef.checkUInt32Argument(AL, AL.getArgAsExpr(1), Set))
+ return;
+
+ D->addAttr(::new (getASTContext())
+ HLSLVkBindingAttr(getASTContext(), AL, Binding, Set));
+}
+
bool SemaHLSL::diagnoseInputIDType(QualType T, const ParsedAttr &AL) {
const auto *VT = T->getAs<VectorType>();
@@ -3660,8 +3678,12 @@ static bool initVarDeclWithCtor(Sema &S, VarDecl *VD,
bool SemaHLSL::initGlobalResourceDecl(VarDecl *VD) {
std::optional<uint32_t> RegisterSlot;
uint32_t SpaceNo = 0;
+ HLSLVkBindingAttr *VkBinding = VD->getAttr<HLSLVkBindingAttr>();
HLSLResourceBindingAttr *RBA = VD->getAttr<HLSLResourceBindingAttr>();
- if (RBA) {
+ if (VkBinding) {
+ RegisterSlot = VkBinding->getBinding();
+ SpaceNo = VkBinding->getSet();
+ } else if (RBA) {
if (RBA->hasRegisterSlot())
RegisterSlot = RBA->getSlotNumber();
SpaceNo = RBA->getSpaceNumber();
@@ -3764,6 +3786,9 @@ void SemaHLSL::processExplicitBindingsOnDecl(VarDecl *VD) {
bool HasBinding = false;
for (Attr *A : VD->attrs()) {
+ if (isa<HLSLVkBindingAttr>(A))
+ HasBinding = true;
+
HLSLResourceBindingAttr *RBA = dyn_cast<HLSLResourceBindingAttr>(A);
if (!RBA || !RBA->hasRegisterSlot())
continue;
diff --git a/clang/test/AST/HLSL/vk_binding_attr.hlsl b/clang/test/AST/HLSL/vk_binding_attr.hlsl
new file mode 100644
index 0000000000000..4cb2abdaef01a
--- /dev/null
+++ b/clang/test/AST/HLSL/vk_binding_attr.hlsl
@@ -0,0 +1,70 @@
+// RUN: %clang_cc1 -triple spirv-unknown-vulkan1.3-library -finclude-default-header -ast-dump -o - %s | FileCheck %s -check-prefixes=SPV,CHECK
+// RUN: %clang_cc1 -triple dxil-pc-shadermodel6.8-library -finclude-default-header -ast-dump -o - %s | FileCheck %s -check-prefixes=DXIL,CHECK
+
+// CHECK: VarDecl {{.*}} Buf 'StructuredBuffer<float>':'hlsl::StructuredBuffer<float>'
+// SPV-NEXT: CXXConstructExpr {{.*}} 'StructuredBuffer<float>':'hlsl::StructuredBuffer<float>' 'void (unsigned int, unsigned int, int, unsigned int, const char *)'
+// SPV-NEXT: IntegerLiteral {{.*}} 'unsigned int' 23
+// SPV-NEXT: IntegerLiteral {{.*}} 'unsigned int' 102
+// DXIL-NEXT: CXXConstructExpr {{.*}} 'StructuredBuffer<float>':'hlsl::StructuredBuffer<float>' 'void (unsigned int, int, unsigned int, unsigned int, const char *)'
+// DXIL-NEXT: IntegerLiteral {{.*}} 'unsigned int' 0
+// DXIL-NEXT: IntegerLiteral {{.*}} 'int' 1
+// SPV: HLSLVkBindingAttr {{.*}} 23 102
+// DXIL-NOT: HLSLVkBindingAttr
+[[vk::binding(23, 102)]] StructuredBuffer<float> Buf;
+
+// CHECK: VarDecl {{.*}} Buf2 'StructuredBuffer<float>':'hlsl::StructuredBuffer<float>'
+// CHECK-NEXT: CXXConstructExpr {{.*}} 'StructuredBuffer<float>':'hlsl::StructuredBuffer<float>' 'void (unsigned int, unsigned int, int, unsigned int, const char *)'
+// SPV-NEXT: IntegerLiteral {{.*}} 'unsigned int' 14
+// SPV-NEXT: IntegerLiteral {{.*}} 'unsigned int' 1
+// DXIL-NEXT: IntegerLiteral {{.*}} 'unsigned int' 23
+// DXIL-NEXT: IntegerLiteral {{.*}} 'unsigned int' 102
+// SPV: HLSLVkBindingAttr {{.*}} 14 1
+// DXIL-NOT: HLSLVkBindingAttr
+// CHECK: HLSLResourceBindingAttr {{.*}} "t23" "space102"
+[[vk::binding(14, 1)]] StructuredBuffer<float> Buf2 : register(t23, space102);
+
+// CHECK: VarDecl {{.*}} Buf3 'StructuredBuffer<float>':'hlsl::StructuredBuffer<float>'
+// CHECK-NEXT: CXXConstructExpr {{.*}} 'StructuredBuffer<float>':'hlsl::StructuredBuffer<float>' 'void (unsigned int, unsigned int, int, unsigned int, const char *)'
+// SPV-NEXT: IntegerLiteral {{.*}} 'unsigned int' 14
+// SPV-NEXT: IntegerLiteral {{.*}} 'unsigned int' 0
+// DXIL-NEXT: IntegerLiteral {{.*}} 'unsigned int' 23
+// DXIL-NEXT: IntegerLiteral {{.*}} 'unsigned int' 102
+// SPV: HLSLVkBindingAttr {{.*}} 14 0
+// DXIL-NOT: HLSLVkBindingAttr
+// CHECK: HLSLResourceBindingAttr {{.*}} "t23" "space102"
+[[vk::binding(14)]] StructuredBuffer<float> Buf3 : register(t23, space102);
+
+// CHECK: HLSLBufferDecl {{.*}} cbuffer CB
+// CHECK-NEXT: HLSLResourceClassAttr {{.*}} Implicit CBuffer
+// SPV-NEXT: HLSLVkBindingAttr {{.*}} 1 2
+// DXIL-NOT: HLSLVkBindingAttr
+[[vk::binding(1, 2)]] cbuffer CB {
+ float a;
+}
+
+// CHECK: VarDecl {{.*}} Buf4 'Buffer<int>':'hlsl::Buffer<int>'
+// SPV-NEXT: CXXConstructExpr {{.*}} 'Buffer<int>':'hlsl::Buffer<int>' 'void (unsigned int, unsigned int, int, unsigned int, const char *)'
+// SPV-NEXT: IntegerLiteral {{.*}} 'unsigned int' 24
+// SPV-NEXT: IntegerLiteral {{.*}} 'unsigned int' 103
+// DXL-NEXT: CXXConstructExpr {{.*}} 'Buffer<int>':'hlsl::Buffer<int>' 'void (unsigned int, int, unsigned int, unsigned int, const char *)'
+// SPV: HLSLVkBindingAttr {{.*}} 24 103
+// DXIL-NOT: HLSLVkBindingAttr
+[[vk::binding(24, 103)]] Buffer<int> Buf4;
+
+// CHECK: VarDecl {{.*}} Buf5 'RWBuffer<int2>':'hlsl::RWBuffer<vector<int, 2>>'
+// SPV-NEXT: CXXConstructExpr {{.*}} 'RWBuffer<int2>':'hlsl::RWBuffer<vector<int, 2>>' 'void (unsigned int, unsigned int, int, unsigned int, const char *)'
+// SPV-NEXT: IntegerLiteral {{.*}} 'unsigned int' 25
+// SPV-NEXT: IntegerLiteral {{.*}} 'unsigned int' 104
+// DXL-NEXT: CXXConstructExpr {{.*}} 'Buffer<int>':'hlsl::Buffer<int>' 'void (unsigned int, int, unsigned int, unsigned int, const char *)'
+// SPV: HLSLVkBindingAttr {{.*}} 25 104
+// DXIL-NOT: HLSLVkBindingAttr
+[[vk::binding(25, 104)]] RWBuffer<int2> Buf5;
+
+// CHECK: VarDecl {{.*}} Buf6 'RWStructuredBuffer<int>':'hlsl::RWStructuredBuffer<int>'
+// SPV-NEXT: CXXConstructExpr {{.*}} 'RWStructuredBuffer<int>':'hlsl::RWStructuredBuffer<int>' 'void (unsigned int, unsigned int, int, unsigned int, const char *)'
+// SPV-NEXT: IntegerLiteral {{.*}} 'unsigned int' 26
+// SPV-NEXT: IntegerLiteral {{.*}} 'unsigned int' 105
+// DXL-NEXT: CXXConstructExpr {{.*}} 'Buffer<int>':'hlsl::Buffer<int>' 'void (unsigned int, int, unsigned int, unsigned int, const char *)'
+// SPV: HLSLVkBindingAttr {{.*}} 26 105
+// DXIL-NOT: HLSLVkBindingAttr
+[[vk::binding(26, 105)]] RWStructuredBuffer<int> Buf6;
diff --git a/clang/test/CodeGenHLSL/vk_binding_attr.hlsl b/clang/test/CodeGenHLSL/vk_binding_attr.hlsl
new file mode 100644
index 0000000000000..bbef05130116d
--- /dev/null
+++ b/clang/test/CodeGenHLSL/vk_binding_attr.hlsl
@@ -0,0 +1,44 @@
+// RUN: %clang_cc1 -triple spirv-unknown-vulkan1.3-library -finclude-default-header -O3 -emit-llvm -o - %s | FileCheck %s
+// CHECK: [[Buf:@.*]] = private unnamed_addr constant [4 x i8] c"Buf\00"
+// CHECK: [[Buf2:@.*]] = private unnamed_addr constant [5 x i8] c"Buf2\00"
+// CHECK: [[Buf3:@.*]] = private unnamed_addr constant [5 x i8] c"Buf3\00"
+// CHECK: [[CB:@.*]] = private unnamed_addr constant [3 x i8] c"CB\00"
+// CHECK: [[CB2:@.*]] = private unnamed_addr constant [4 x i8] c"CB2\00"
+// CHECK: [[Buf4:@.*]] = private unnamed_addr constant [5 x i8] c"Buf4\00"
+// CHECK: [[Buf5:@.*]] = private unnamed_addr constant [5 x i8] c"Buf5\00"
+// CHECK: [[Buf6:@.*]] = private unnamed_addr constant [5 x i8] c"Buf6\00"
+
+[[vk::binding(23, 102)]] StructuredBuffer<float> Buf;
+[[vk::binding(14, 1)]] StructuredBuffer<float> Buf2 : register(t23, space102);
+[[vk::binding(14)]] StructuredBuffer<float> Buf3 : register(t23, space102);
+
+[[vk::binding(1, 2)]] cbuffer CB {
+ float a;
+};
+
+[[vk::binding(10,20)]] cbuffer CB2 {
+ float b;
+};
+
+
+[[vk::binding(24, 103)]] Buffer<int> Buf4;
+[[vk::binding(25, 104)]] RWBuffer<int2> Buf5;
+[[vk::binding(26, 105)]] RWStructuredBuffer<float> Buf6;
+
+[numthreads(1,1,1)]
+void main() {
+// CHECK: call {{.*}} @llvm.spv.resource.handlefrombinding{{.*}}(i32 102, i32 23, {{.*}} [[Buf]])
+// CHECK: call {{.*}} @llvm.spv.resource.handlefrombinding{{.*}}(i32 1, i32 14, {{.*}} [[Buf2]])
+// CHECK: call {{.*}} @llvm.spv.resource.handlefrombinding{{.*}}(i32 0, i32 14, {{.*}} [[Buf3]])
+// CHECK: call {{.*}} @llvm.spv.resource.handlefrombinding{{.*}}(i32 2, i32 1, {{.*}} [[CB]])
+// CHECK: call {{.*}} @llvm.spv.resource.handlefrombinding{{.*}}(i32 20, i32 10, {{.*}} [[CB2]])
+// CHECK: call {{.*}} @llvm.spv.resource.handlefrombinding{{.*}}(i32 103, i32 24, {{.*}} [[Buf4]])
+// CHECK: call {{.*}} @llvm.spv.resource.handlefrombinding{{.*}}(i32 104, i32 25, {{.*}} [[Buf5]])
+// CHECK: call {{.*}} @llvm.spv.resource.handlefrombinding{{.*}}(i32 105, i32 26, {{.*}} [[Buf6]])
+ float f1 = Buf.Load(0);
+ float f2 = Buf2.Load(0);
+ float f3 = Buf3.Load(0);
+ int i = Buf4.Load(0);
+ Buf5[0] = i;
+ Buf6[0] = f1+f2+f3+a+b;
+}
>From b05e26be8a487d21cf15d34ef60b39b3ca94f235 Mon Sep 17 00:00:00 2001
From: tangaac <tangyan01 at loongson.cn>
Date: Wed, 6 Aug 2025 09:11:34 +0800
Subject: [PATCH 026/171] [LoongArch] Optimize extracting f32/f64 from 256-bit
vector by using XVPICKVE. (#151914)
---
.../LoongArch/LoongArchLASXInstrInfo.td | 27 +++++------
llvm/test/CodeGen/LoongArch/lasx/fpowi.ll | 48 +++++++++----------
.../lasx/ir-instruction/fix-xvshuf.ll | 12 ++---
.../ir-instruction/insert-extract-element.ll | 8 ++--
4 files changed, 47 insertions(+), 48 deletions(-)
diff --git a/llvm/lib/Target/LoongArch/LoongArchLASXInstrInfo.td b/llvm/lib/Target/LoongArch/LoongArchLASXInstrInfo.td
index 5096a8fcda8eb..d8bb16fe9b94d 100644
--- a/llvm/lib/Target/LoongArch/LoongArchLASXInstrInfo.td
+++ b/llvm/lib/Target/LoongArch/LoongArchLASXInstrInfo.td
@@ -1651,20 +1651,19 @@ def : Pat<(vector_insert v8i32:$xd, GRLenVT:$rj, uimm3:$imm),
(XVINSGR2VR_W v8i32:$xd, GRLenVT:$rj, uimm3:$imm)>;
def : Pat<(vector_insert v4i64:$xd, GRLenVT:$rj, uimm2:$imm),
(XVINSGR2VR_D v4i64:$xd, GRLenVT:$rj, uimm2:$imm)>;
-def : Pat<(vector_insert v8f32:$xd, (loongarch_movgr2fr_w_la64 GPR:$rj), uimm3:$imm),
- (XVINSGR2VR_W $xd, $rj, uimm3:$imm)>;
-def : Pat<(vector_insert v4f64:$xd, (f64 (bitconvert i64:$rj)), uimm2:$imm),
- (XVINSGR2VR_D $xd, $rj, uimm2:$imm)>;
-def : Pat<(vector_insert v8f32:$xd, (f32 (vector_extract v8f32:$xj, uimm3:$imm1)), uimm3:$imm2),
- (XVINSGR2VR_W $xd, (XVPICKVE2GR_W v8f32:$xj, uimm3:$imm1), uimm3:$imm2)>;
-def : Pat<(vector_insert v4f64:$xd, (f64 (vector_extract v4f64:$xj, uimm2:$imm1)), uimm2:$imm2),
- (XVINSGR2VR_D $xd, (XVPICKVE2GR_D v4f64:$xj, uimm2:$imm1), uimm2:$imm2)>;
+def : Pat<(vector_insert v8f32:$xd, (loongarch_movgr2fr_w_la64 GPR:$rj),
+ uimm3:$imm),
+ (XVINSGR2VR_W v8f32:$xd, GPR:$rj, uimm3:$imm)>;
+def : Pat<(vector_insert v4f64:$xd, (f64(bitconvert i64:$rj)), uimm2:$imm),
+ (XVINSGR2VR_D v4f64:$xd, GPR:$rj, uimm2:$imm)>;
// XVINSVE0_{W/D}
def : Pat<(vector_insert v8f32:$xd, FPR32:$fj, uimm3:$imm),
- (XVINSVE0_W $xd, (SUBREG_TO_REG (i64 0), FPR32:$fj, sub_32), uimm3:$imm)>;
+ (XVINSVE0_W v8f32:$xd, (SUBREG_TO_REG(i64 0), FPR32:$fj, sub_32),
+ uimm3:$imm)>;
def : Pat<(vector_insert v4f64:$xd, FPR64:$fj, uimm2:$imm),
- (XVINSVE0_D $xd, (SUBREG_TO_REG (i64 0), FPR64:$fj, sub_64), uimm2:$imm)>;
+ (XVINSVE0_D v4f64:$xd, (SUBREG_TO_REG(i64 0), FPR64:$fj, sub_64),
+ uimm2:$imm)>;
// scalar_to_vector
def : Pat<(v8f32 (scalar_to_vector FPR32:$fj)),
@@ -1884,10 +1883,10 @@ def : Pat<(i64 (vector_extract v8i32:$xj, uimm3:$imm)),
(XVPICKVE2GR_W v8i32:$xj, uimm3:$imm)>;
def : Pat<(i64 (vector_extract v4i64:$xj, uimm2:$imm)),
(XVPICKVE2GR_D v4i64:$xj, uimm2:$imm)>;
-def : Pat<(f32 (vector_extract v8f32:$xj, uimm3:$imm)),
- (MOVGR2FR_W (XVPICKVE2GR_W v8f32:$xj, uimm3:$imm))>;
-def : Pat<(f64 (vector_extract v4f64:$xj, uimm2:$imm)),
- (MOVGR2FR_D (XVPICKVE2GR_D v4f64:$xj, uimm2:$imm))>;
+def : Pat<(f32(vector_extract v8f32:$xj, uimm3:$imm)),
+ (EXTRACT_SUBREG(XVPICKVE_W v8f32:$xj, uimm3:$imm), sub_32)>;
+def : Pat<(f64(vector_extract v4f64:$xj, uimm2:$imm)),
+ (EXTRACT_SUBREG(XVPICKVE_D v4f64:$xj, uimm2:$imm), sub_64)>;
// vselect
def : Pat<(v32i8 (vselect LASX256:$xd, (v32i8 (SplatPat_uimm8 uimm8:$imm)),
diff --git a/llvm/test/CodeGen/LoongArch/lasx/fpowi.ll b/llvm/test/CodeGen/LoongArch/lasx/fpowi.ll
index 380071266d80e..f0277a78fa452 100644
--- a/llvm/test/CodeGen/LoongArch/lasx/fpowi.ll
+++ b/llvm/test/CodeGen/LoongArch/lasx/fpowi.ll
@@ -11,16 +11,16 @@ define <8 x float> @powi_v8f32(<8 x float> %va, i32 %b) nounwind {
; CHECK-NEXT: st.d $fp, $sp, 80 # 8-byte Folded Spill
; CHECK-NEXT: xvst $xr0, $sp, 16 # 32-byte Folded Spill
; CHECK-NEXT: addi.w $fp, $a0, 0
-; CHECK-NEXT: xvpickve2gr.w $a0, $xr0, 1
-; CHECK-NEXT: movgr2fr.w $fa0, $a0
+; CHECK-NEXT: xvpickve.w $xr0, $xr0, 1
+; CHECK-NEXT: # kill: def $f0 killed $f0 killed $xr0
; CHECK-NEXT: move $a0, $fp
; CHECK-NEXT: pcaddu18i $ra, %call36(__powisf2)
; CHECK-NEXT: jirl $ra, $ra, 0
; CHECK-NEXT: # kill: def $f0 killed $f0 def $xr0
; CHECK-NEXT: xvst $xr0, $sp, 48 # 32-byte Folded Spill
; CHECK-NEXT: xvld $xr0, $sp, 16 # 32-byte Folded Reload
-; CHECK-NEXT: xvpickve2gr.w $a0, $xr0, 0
-; CHECK-NEXT: movgr2fr.w $fa0, $a0
+; CHECK-NEXT: xvpickve.w $xr0, $xr0, 0
+; CHECK-NEXT: # kill: def $f0 killed $f0 killed $xr0
; CHECK-NEXT: move $a0, $fp
; CHECK-NEXT: pcaddu18i $ra, %call36(__powisf2)
; CHECK-NEXT: jirl $ra, $ra, 0
@@ -29,8 +29,8 @@ define <8 x float> @powi_v8f32(<8 x float> %va, i32 %b) nounwind {
; CHECK-NEXT: xvinsve0.w $xr0, $xr1, 1
; CHECK-NEXT: xvst $xr0, $sp, 48 # 32-byte Folded Spill
; CHECK-NEXT: xvld $xr0, $sp, 16 # 32-byte Folded Reload
-; CHECK-NEXT: xvpickve2gr.w $a0, $xr0, 2
-; CHECK-NEXT: movgr2fr.w $fa0, $a0
+; CHECK-NEXT: xvpickve.w $xr0, $xr0, 2
+; CHECK-NEXT: # kill: def $f0 killed $f0 killed $xr0
; CHECK-NEXT: move $a0, $fp
; CHECK-NEXT: pcaddu18i $ra, %call36(__powisf2)
; CHECK-NEXT: jirl $ra, $ra, 0
@@ -39,8 +39,8 @@ define <8 x float> @powi_v8f32(<8 x float> %va, i32 %b) nounwind {
; CHECK-NEXT: xvinsve0.w $xr1, $xr0, 2
; CHECK-NEXT: xvst $xr1, $sp, 48 # 32-byte Folded Spill
; CHECK-NEXT: xvld $xr0, $sp, 16 # 32-byte Folded Reload
-; CHECK-NEXT: xvpickve2gr.w $a0, $xr0, 3
-; CHECK-NEXT: movgr2fr.w $fa0, $a0
+; CHECK-NEXT: xvpickve.w $xr0, $xr0, 3
+; CHECK-NEXT: # kill: def $f0 killed $f0 killed $xr0
; CHECK-NEXT: move $a0, $fp
; CHECK-NEXT: pcaddu18i $ra, %call36(__powisf2)
; CHECK-NEXT: jirl $ra, $ra, 0
@@ -49,8 +49,8 @@ define <8 x float> @powi_v8f32(<8 x float> %va, i32 %b) nounwind {
; CHECK-NEXT: xvinsve0.w $xr1, $xr0, 3
; CHECK-NEXT: xvst $xr1, $sp, 48 # 32-byte Folded Spill
; CHECK-NEXT: xvld $xr0, $sp, 16 # 32-byte Folded Reload
-; CHECK-NEXT: xvpickve2gr.w $a0, $xr0, 4
-; CHECK-NEXT: movgr2fr.w $fa0, $a0
+; CHECK-NEXT: xvpickve.w $xr0, $xr0, 4
+; CHECK-NEXT: # kill: def $f0 killed $f0 killed $xr0
; CHECK-NEXT: move $a0, $fp
; CHECK-NEXT: pcaddu18i $ra, %call36(__powisf2)
; CHECK-NEXT: jirl $ra, $ra, 0
@@ -59,8 +59,8 @@ define <8 x float> @powi_v8f32(<8 x float> %va, i32 %b) nounwind {
; CHECK-NEXT: xvinsve0.w $xr1, $xr0, 4
; CHECK-NEXT: xvst $xr1, $sp, 48 # 32-byte Folded Spill
; CHECK-NEXT: xvld $xr0, $sp, 16 # 32-byte Folded Reload
-; CHECK-NEXT: xvpickve2gr.w $a0, $xr0, 5
-; CHECK-NEXT: movgr2fr.w $fa0, $a0
+; CHECK-NEXT: xvpickve.w $xr0, $xr0, 5
+; CHECK-NEXT: # kill: def $f0 killed $f0 killed $xr0
; CHECK-NEXT: move $a0, $fp
; CHECK-NEXT: pcaddu18i $ra, %call36(__powisf2)
; CHECK-NEXT: jirl $ra, $ra, 0
@@ -69,8 +69,8 @@ define <8 x float> @powi_v8f32(<8 x float> %va, i32 %b) nounwind {
; CHECK-NEXT: xvinsve0.w $xr1, $xr0, 5
; CHECK-NEXT: xvst $xr1, $sp, 48 # 32-byte Folded Spill
; CHECK-NEXT: xvld $xr0, $sp, 16 # 32-byte Folded Reload
-; CHECK-NEXT: xvpickve2gr.w $a0, $xr0, 6
-; CHECK-NEXT: movgr2fr.w $fa0, $a0
+; CHECK-NEXT: xvpickve.w $xr0, $xr0, 6
+; CHECK-NEXT: # kill: def $f0 killed $f0 killed $xr0
; CHECK-NEXT: move $a0, $fp
; CHECK-NEXT: pcaddu18i $ra, %call36(__powisf2)
; CHECK-NEXT: jirl $ra, $ra, 0
@@ -79,8 +79,8 @@ define <8 x float> @powi_v8f32(<8 x float> %va, i32 %b) nounwind {
; CHECK-NEXT: xvinsve0.w $xr1, $xr0, 6
; CHECK-NEXT: xvst $xr1, $sp, 48 # 32-byte Folded Spill
; CHECK-NEXT: xvld $xr0, $sp, 16 # 32-byte Folded Reload
-; CHECK-NEXT: xvpickve2gr.w $a0, $xr0, 7
-; CHECK-NEXT: movgr2fr.w $fa0, $a0
+; CHECK-NEXT: xvpickve.w $xr0, $xr0, 7
+; CHECK-NEXT: # kill: def $f0 killed $f0 killed $xr0
; CHECK-NEXT: move $a0, $fp
; CHECK-NEXT: pcaddu18i $ra, %call36(__powisf2)
; CHECK-NEXT: jirl $ra, $ra, 0
@@ -107,16 +107,16 @@ define <4 x double> @powi_v4f64(<4 x double> %va, i32 %b) nounwind {
; CHECK-NEXT: st.d $fp, $sp, 80 # 8-byte Folded Spill
; CHECK-NEXT: xvst $xr0, $sp, 48 # 32-byte Folded Spill
; CHECK-NEXT: addi.w $fp, $a0, 0
-; CHECK-NEXT: xvpickve2gr.d $a0, $xr0, 1
-; CHECK-NEXT: movgr2fr.d $fa0, $a0
+; CHECK-NEXT: xvpickve.d $xr0, $xr0, 1
+; CHECK-NEXT: # kill: def $f0_64 killed $f0_64 killed $xr0
; CHECK-NEXT: move $a0, $fp
; CHECK-NEXT: pcaddu18i $ra, %call36(__powidf2)
; CHECK-NEXT: jirl $ra, $ra, 0
; CHECK-NEXT: # kill: def $f0_64 killed $f0_64 def $xr0
; CHECK-NEXT: xvst $xr0, $sp, 16 # 32-byte Folded Spill
; CHECK-NEXT: xvld $xr0, $sp, 48 # 32-byte Folded Reload
-; CHECK-NEXT: xvpickve2gr.d $a0, $xr0, 0
-; CHECK-NEXT: movgr2fr.d $fa0, $a0
+; CHECK-NEXT: xvpickve.d $xr0, $xr0, 0
+; CHECK-NEXT: # kill: def $f0_64 killed $f0_64 killed $xr0
; CHECK-NEXT: move $a0, $fp
; CHECK-NEXT: pcaddu18i $ra, %call36(__powidf2)
; CHECK-NEXT: jirl $ra, $ra, 0
@@ -125,8 +125,8 @@ define <4 x double> @powi_v4f64(<4 x double> %va, i32 %b) nounwind {
; CHECK-NEXT: xvinsve0.d $xr0, $xr1, 1
; CHECK-NEXT: xvst $xr0, $sp, 16 # 32-byte Folded Spill
; CHECK-NEXT: xvld $xr0, $sp, 48 # 32-byte Folded Reload
-; CHECK-NEXT: xvpickve2gr.d $a0, $xr0, 2
-; CHECK-NEXT: movgr2fr.d $fa0, $a0
+; CHECK-NEXT: xvpickve.d $xr0, $xr0, 2
+; CHECK-NEXT: # kill: def $f0_64 killed $f0_64 killed $xr0
; CHECK-NEXT: move $a0, $fp
; CHECK-NEXT: pcaddu18i $ra, %call36(__powidf2)
; CHECK-NEXT: jirl $ra, $ra, 0
@@ -135,8 +135,8 @@ define <4 x double> @powi_v4f64(<4 x double> %va, i32 %b) nounwind {
; CHECK-NEXT: xvinsve0.d $xr1, $xr0, 2
; CHECK-NEXT: xvst $xr1, $sp, 16 # 32-byte Folded Spill
; CHECK-NEXT: xvld $xr0, $sp, 48 # 32-byte Folded Reload
-; CHECK-NEXT: xvpickve2gr.d $a0, $xr0, 3
-; CHECK-NEXT: movgr2fr.d $fa0, $a0
+; CHECK-NEXT: xvpickve.d $xr0, $xr0, 3
+; CHECK-NEXT: # kill: def $f0_64 killed $f0_64 killed $xr0
; CHECK-NEXT: move $a0, $fp
; CHECK-NEXT: pcaddu18i $ra, %call36(__powidf2)
; CHECK-NEXT: jirl $ra, $ra, 0
diff --git a/llvm/test/CodeGen/LoongArch/lasx/ir-instruction/fix-xvshuf.ll b/llvm/test/CodeGen/LoongArch/lasx/ir-instruction/fix-xvshuf.ll
index 221aba3166ed7..8ee567c2a92f9 100644
--- a/llvm/test/CodeGen/LoongArch/lasx/ir-instruction/fix-xvshuf.ll
+++ b/llvm/test/CodeGen/LoongArch/lasx/ir-instruction/fix-xvshuf.ll
@@ -6,12 +6,12 @@
define <4 x double> @shufflevector_v4f64(<4 x double> %a, <4 x double> %b) {
; CHECK-LABEL: shufflevector_v4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: xvpickve2gr.d $a0, $xr1, 2
-; CHECK-NEXT: xvpickve2gr.d $a1, $xr0, 3
-; CHECK-NEXT: xvinsgr2vr.d $xr0, $a0, 1
-; CHECK-NEXT: xvinsgr2vr.d $xr0, $a1, 2
-; CHECK-NEXT: xvpickve2gr.d $a0, $xr1, 3
-; CHECK-NEXT: xvinsgr2vr.d $xr0, $a0, 3
+; CHECK-NEXT: xvpickve.d $xr2, $xr1, 2
+; CHECK-NEXT: xvpickve.d $xr3, $xr0, 3
+; CHECK-NEXT: xvinsve0.d $xr0, $xr2, 1
+; CHECK-NEXT: xvinsve0.d $xr0, $xr3, 2
+; CHECK-NEXT: xvpickve.d $xr1, $xr1, 3
+; CHECK-NEXT: xvinsve0.d $xr0, $xr1, 3
; CHECK-NEXT: ret
entry:
%c = shufflevector <4 x double> %a, <4 x double> %b, <4 x i32> <i32 0, i32 6, i32 3, i32 7>
diff --git a/llvm/test/CodeGen/LoongArch/lasx/ir-instruction/insert-extract-element.ll b/llvm/test/CodeGen/LoongArch/lasx/ir-instruction/insert-extract-element.ll
index 271e3eca31dbe..ac5a2143451d0 100644
--- a/llvm/test/CodeGen/LoongArch/lasx/ir-instruction/insert-extract-element.ll
+++ b/llvm/test/CodeGen/LoongArch/lasx/ir-instruction/insert-extract-element.ll
@@ -42,8 +42,8 @@ entry:
define <8 x float> @insert_extract_v8f32(<8 x float> %a) nounwind {
; CHECK-LABEL: insert_extract_v8f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: xvpickve2gr.w $a0, $xr0, 7
-; CHECK-NEXT: xvinsgr2vr.w $xr0, $a0, 1
+; CHECK-NEXT: xvpickve.w $xr1, $xr0, 7
+; CHECK-NEXT: xvinsve0.w $xr0, $xr1, 1
; CHECK-NEXT: ret
entry:
%b = extractelement <8 x float> %a, i32 7
@@ -66,8 +66,8 @@ entry:
define <4 x double> @insert_extract_v4f64(<4 x double> %a) nounwind {
; CHECK-LABEL: insert_extract_v4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: xvpickve2gr.d $a0, $xr0, 3
-; CHECK-NEXT: xvinsgr2vr.d $xr0, $a0, 1
+; CHECK-NEXT: xvpickve.d $xr1, $xr0, 3
+; CHECK-NEXT: xvinsve0.d $xr0, $xr1, 1
; CHECK-NEXT: ret
entry:
%b = extractelement <4 x double> %a, i32 3
>From 8470027f257a3304b2abe50e5663bcd711f6ca29 Mon Sep 17 00:00:00 2001
From: khaki3 <47756807+khaki3 at users.noreply.github.com>
Date: Tue, 5 Aug 2025 18:21:31 -0700
Subject: [PATCH 027/171] [flang][acc] Add a semantic check for the validity of
nested parallelism (#152225)
This PR implements a semantic checker to ensure the legality of nested
OpenACC parallelism. The following are quotes from Spec 3.3. We need to
disallow loops from having parallelism at the same level as or at a
sub-level of child loops.
>**2.9.2 gang clause**
>[2064] <ins>When the parent compute construct is a parallel
construct</ins>, or on an orphaned loop construct, the gang clause
behaves as follows. (...) The associated dimension is the value of the
dim argument, if it appears, or is dimension one. The dim argument must
be a constant positive integer with value 1, 2, or 3.
>[2112] The region of a loop with a gang(dim:d) clause may not contain a
loop construct with a gang(dim:e) clause where e >= d unless it appears
within a nested compute region.
>[2074] <ins>When the parent compute construct is a kernels
construct</ins>, the gang clause behaves as follows. (...)
>[2148] The region of a loop with the gang clause may not contain
another loop with a gang clause unless within a nested compute region.
>**2.9.3 worker clause**
>[2122]/[2129] The region of a loop with the worker clause may not
contain a loop with the gang or worker clause unless within a nested
compute region.
>**2.9.4 vector clause**
>[2141]/[2148] The region of a loop with the vector clause may not
contain a loop with a gang, worker, or vector clause unless within a
nested compute region.
https://openacc.org/sites/default/files/inline-images/Specification/OpenACC-3.3-final.pdf
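As a hypothetical example of the nesting the checker now rejects (a
gang loop containing another gang loop inside a parallel construct,
with no intervening compute region):

    !$acc parallel
    !$acc loop gang
    do i = 1, n
       !$acc loop gang   ! invalid: gang within the region of a gang loop
       do j = 1, n
       end do
    end do
    !$acc end parallel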
---
flang/lib/Semantics/check-acc-structure.cpp | 117 ++++++++++++++++++--
flang/lib/Semantics/check-acc-structure.h | 6 +
flang/test/Semantics/OpenACC/acc-loop.f90 | 94 +++++++++++++++-
3 files changed, 206 insertions(+), 11 deletions(-)
diff --git a/flang/lib/Semantics/check-acc-structure.cpp b/flang/lib/Semantics/check-acc-structure.cpp
index 77e2b01578641..051abdceba19d 100644
--- a/flang/lib/Semantics/check-acc-structure.cpp
+++ b/flang/lib/Semantics/check-acc-structure.cpp
@@ -6,6 +6,7 @@
//
//===----------------------------------------------------------------------===//
#include "check-acc-structure.h"
+#include "resolve-names-utils.h"
#include "flang/Common/enum-set.h"
#include "flang/Evaluate/tools.h"
#include "flang/Parser/parse-tree.h"
@@ -106,18 +107,25 @@ bool AccStructureChecker::IsComputeConstruct(
directive == llvm::acc::ACCD_kernels_loop;
}
-bool AccStructureChecker::IsInsideComputeConstruct() const {
- if (dirContext_.size() <= 1) {
- return false;
- }
+bool AccStructureChecker::IsLoopConstruct(
+ llvm::acc::Directive directive) const {
+ return directive == llvm::acc::Directive::ACCD_loop ||
+ directive == llvm::acc::ACCD_parallel_loop ||
+ directive == llvm::acc::ACCD_serial_loop ||
+ directive == llvm::acc::ACCD_kernels_loop;
+}
+std::optional<llvm::acc::Directive>
+AccStructureChecker::getParentComputeConstruct() const {
// Check all nested context skipping the first one.
- for (std::size_t i = dirContext_.size() - 1; i > 0; --i) {
- if (IsComputeConstruct(dirContext_[i - 1].directive)) {
- return true;
- }
- }
- return false;
+ for (std::size_t i = dirContext_.size() - 1; i > 0; --i)
+ if (IsComputeConstruct(dirContext_[i - 1].directive))
+ return dirContext_[i - 1].directive;
+ return std::nullopt;
+}
+
+bool AccStructureChecker::IsInsideComputeConstruct() const {
+ return getParentComputeConstruct().has_value();
}
void AccStructureChecker::CheckNotInComputeConstruct() {
@@ -128,6 +136,14 @@ void AccStructureChecker::CheckNotInComputeConstruct() {
}
}
+bool AccStructureChecker::IsInsideParallelConstruct() const {
+ if (auto directive = getParentComputeConstruct())
+ if (*directive == llvm::acc::ACCD_parallel ||
+ *directive == llvm::acc::ACCD_parallel_loop)
+ return true;
+ return false;
+}
+
void AccStructureChecker::Enter(const parser::AccClause &x) {
SetContextClause(x);
}
@@ -250,6 +266,85 @@ void AccStructureChecker::Leave(const parser::OpenACCCombinedConstruct &x) {
dirContext_.pop_back();
}
+std::optional<std::int64_t> AccStructureChecker::getGangDimensionSize(
+ DirectiveContext &dirContext) {
+ for (auto it : dirContext.clauseInfo) {
+ const auto *clause{it.second};
+ if (const auto *gangClause{
+ std::get_if<parser::AccClause::Gang>(&clause->u)})
+ if (gangClause->v) {
+ const Fortran::parser::AccGangArgList &x{*gangClause->v};
+ for (const Fortran::parser::AccGangArg &gangArg : x.v)
+ if (const auto *dim{
+ std::get_if<Fortran::parser::AccGangArg::Dim>(&gangArg.u)})
+ if (const auto v{EvaluateInt64(context_, dim->v)})
+ return *v;
+ }
+ }
+ return std::nullopt;
+}
+
+void AccStructureChecker::CheckNotInSameOrSubLevelLoopConstruct() {
+ for (std::size_t i = dirContext_.size() - 1; i > 0; --i) {
+ auto &parent{dirContext_[i - 1]};
+ if (IsLoopConstruct(parent.directive)) {
+ for (auto parentClause : parent.actualClauses) {
+ for (auto cl : GetContext().actualClauses) {
+ bool invalid{false};
+ if (parentClause == llvm::acc::Clause::ACCC_gang &&
+ cl == llvm::acc::Clause::ACCC_gang) {
+ if (IsInsideParallelConstruct()) {
+ auto parentDim = getGangDimensionSize(parent);
+ auto currentDim = getGangDimensionSize(GetContext());
+ std::int64_t parentDimNum = 1, currentDimNum = 1;
+ if (parentDim)
+ parentDimNum = *parentDim;
+ if (currentDim)
+ currentDimNum = *currentDim;
+ if (parentDimNum <= currentDimNum) {
+ std::string parentDimStr, currentDimStr;
+ if (parentDim)
+ parentDimStr = "(dim:" + std::to_string(parentDimNum) + ")";
+ if (currentDim)
+ currentDimStr = "(dim:" + std::to_string(currentDimNum) + ")";
+ context_.Say(GetContext().clauseSource,
+ "%s%s clause is not allowed in the region of a loop with the %s%s clause"_err_en_US,
+ parser::ToUpperCaseLetters(
+ llvm::acc::getOpenACCClauseName(cl).str()),
+ currentDimStr,
+ parser::ToUpperCaseLetters(
+ llvm::acc::getOpenACCClauseName(parentClause).str()),
+ parentDimStr);
+ continue;
+ }
+ } else {
+ invalid = true;
+ }
+ } else if (parentClause == llvm::acc::Clause::ACCC_worker &&
+ (cl == llvm::acc::Clause::ACCC_gang ||
+ cl == llvm::acc::Clause::ACCC_worker)) {
+ invalid = true;
+ } else if (parentClause == llvm::acc::Clause::ACCC_vector &&
+ (cl == llvm::acc::Clause::ACCC_gang ||
+ cl == llvm::acc::Clause::ACCC_worker ||
+ cl == llvm::acc::Clause::ACCC_vector)) {
+ invalid = true;
+ }
+ if (invalid)
+ context_.Say(GetContext().clauseSource,
+ "%s clause is not allowed in the region of a loop with the %s clause"_err_en_US,
+ parser::ToUpperCaseLetters(
+ llvm::acc::getOpenACCClauseName(cl).str()),
+ parser::ToUpperCaseLetters(
+ llvm::acc::getOpenACCClauseName(parentClause).str()));
+ }
+ }
+ }
+ if (IsComputeConstruct(parent.directive))
+ break;
+ }
+}
+
void AccStructureChecker::Enter(const parser::OpenACCLoopConstruct &x) {
const auto &beginDir{std::get<parser::AccBeginLoopDirective>(x.t)};
const auto &loopDir{std::get<parser::AccLoopDirective>(beginDir.t)};
@@ -267,6 +362,8 @@ void AccStructureChecker::Leave(const parser::OpenACCLoopConstruct &x) {
CheckNotAllowedIfClause(llvm::acc::Clause::ACCC_seq,
{llvm::acc::Clause::ACCC_gang, llvm::acc::Clause::ACCC_vector,
llvm::acc::Clause::ACCC_worker});
+ // Restriction - 2.9.2, 2.9.3, 2.9.4
+ CheckNotInSameOrSubLevelLoopConstruct();
}
dirContext_.pop_back();
}
diff --git a/flang/lib/Semantics/check-acc-structure.h b/flang/lib/Semantics/check-acc-structure.h
index 359f1557b62cb..711d0326349a4 100644
--- a/flang/lib/Semantics/check-acc-structure.h
+++ b/flang/lib/Semantics/check-acc-structure.h
@@ -98,8 +98,14 @@ class AccStructureChecker
bool CheckAllowedModifier(llvm::acc::Clause clause);
bool IsComputeConstruct(llvm::acc::Directive directive) const;
+ bool IsLoopConstruct(llvm::acc::Directive directive) const;
+ std::optional<llvm::acc::Directive> getParentComputeConstruct() const;
bool IsInsideComputeConstruct() const;
+ bool IsInsideParallelConstruct() const;
void CheckNotInComputeConstruct();
+ std::optional<std::int64_t> getGangDimensionSize(
+ DirectiveContext &dirContext);
+ void CheckNotInSameOrSubLevelLoopConstruct();
void CheckMultipleOccurrenceInDeclare(
const parser::AccObjectList &, llvm::acc::Clause);
void CheckMultipleOccurrenceInDeclare(
diff --git a/flang/test/Semantics/OpenACC/acc-loop.f90 b/flang/test/Semantics/OpenACC/acc-loop.f90
index 859cf3feec0d6..9301cf85305d4 100644
--- a/flang/test/Semantics/OpenACC/acc-loop.f90
+++ b/flang/test/Semantics/OpenACC/acc-loop.f90
@@ -13,7 +13,7 @@ program openacc_loop_validity
integer :: n
end type atype
- integer :: i, j, k, b, gang_size, vector_size, worker_size
+ integer :: i, j, k, l, m, b, gang_size, vector_size, worker_size
integer, parameter :: N = 256
integer, dimension(N) :: c
logical, dimension(N) :: d, e
@@ -259,6 +259,98 @@ program openacc_loop_validity
end do
!$acc end parallel
+ !$acc parallel
+ !$acc loop gang
+ do i = 1, n
+ !$acc loop worker
+ do j = 1, n
+ !ERROR: GANG clause is not allowed in the region of a loop with the WORKER clause
+ !ERROR: GANG clause is not allowed in the region of a loop with the GANG clause
+ !$acc loop gang vector
+ do k = 1, i
+ end do
+ end do
+ end do
+ !$acc end parallel
+
+ !$acc parallel loop vector
+ do i = 1, n
+ !ERROR: GANG clause is not allowed in the region of a loop with the VECTOR clause
+ !$acc loop gang
+ do j = 1, n
+ !ERROR: WORKER clause is not allowed in the region of a loop with the VECTOR clause
+ !$acc loop worker
+ do k = 1, i
+ !ERROR: VECTOR clause is not allowed in the region of a loop with the VECTOR clause
+ !$acc loop vector
+ do l = 1, 1
+ end do
+ end do
+ end do
+ end do
+ !$acc end parallel loop
+
+ !$acc kernels
+ do i = 1, n
+ !$acc loop gang worker
+ do j = 1, n
+ !ERROR: WORKER clause is not allowed in the region of a loop with the WORKER clause
+ !$acc loop worker vector
+ do k = 1, i
+ end do
+ end do
+ end do
+ !$acc end kernels
+
+ !$acc parallel
+ !$acc loop gang(dim:1)
+ do i = 1, n
+ !ERROR: GANG(dim:1) clause is not allowed in the region of a loop with the GANG(dim:1) clause
+ !$acc loop gang(dim:1)
+ do j = 1, n
+ !ERROR: GANG(dim:2) clause is not allowed in the region of a loop with the GANG(dim:1) clause
+ !$acc loop gang(dim:2)
+ do k = 1, i
+ !ERROR: GANG(dim:3) clause is not allowed in the region of a loop with the GANG(dim:2) clause
+ !ERROR: GANG(dim:3) clause is not allowed in the region of a loop with the GANG(dim:1) clause
+ !$acc loop gang(dim:3)
+ do l = 1, 1
+ !ERROR: GANG(dim:3) clause is not allowed in the region of a loop with the GANG(dim:3) clause
+ !ERROR: GANG(dim:3) clause is not allowed in the region of a loop with the GANG(dim:2) clause
+ !ERROR: GANG(dim:3) clause is not allowed in the region of a loop with the GANG(dim:1) clause
+ !$acc loop gang(dim:3)
+ do m = 1, 1
+ end do
+ end do
+ end do
+ end do
+ end do
+ !$acc end parallel
+
+ !$acc parallel loop gang(dim:3)
+ do i = 1, n
+ !$acc loop gang(dim:2)
+ do j = 1, n
+ !$acc loop gang(dim:1) worker vector
+ do k = 1, i
+ end do
+ end do
+ end do
+ !$acc end parallel loop
+
+ !$acc kernels loop gang(dim:3)
+ do i = 1, n
+ !ERROR: GANG clause is not allowed in the region of a loop with the GANG clause
+ !$acc loop gang(dim:2)
+ do j = 1, n
+ !ERROR: GANG clause is not allowed in the region of a loop with the GANG clause
+ !$acc loop gang(dim:1) worker vector
+ do k = 1, i
+ end do
+ end do
+ end do
+ !$acc end kernels loop
+
!ERROR: Clause IF is not allowed after clause DEVICE_TYPE on the PARALLEL directive
!$acc parallel device_type(*) if(.TRUE.)
!$acc loop
>From 342bf58f93a72a675183c18b42ae811c635dda37 Mon Sep 17 00:00:00 2001
From: Matt Arsenault <Matthew.Arsenault at amd.com>
Date: Wed, 6 Aug 2025 10:26:36 +0900
Subject: [PATCH 028/171] RuntimeLibcalls: Add entries for
__security_check_cookie (#151843)
Avoids hardcoding the string name based on the target, and gets
the entry into the centralized list of emitted calls.
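The query pattern the targets switch to (names as in the diffs below)
looks roughly like:
```cpp
// Ask the centralized RuntimeLibcalls tables instead of a hardcoded,
// per-target string. RTLIB::Unsupported means no implementation is
// available for the current triple.
RTLIB::LibcallImpl Impl = getLibcallImpl(RTLIB::SECURITY_CHECK_COOKIE);
if (Impl != RTLIB::Unsupported) {
  // "__security_check_cookie", or "#__security_check_cookie_arm64ec" on
  // Arm64EC, as selected by the .td tables.
  const char *Name = getLibcallImplName(Impl);
  // ... declare or look up the cookie-check function by Name ...
}
```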
---
llvm/include/llvm/CodeGen/TargetLowering.h | 5 ++++
llvm/include/llvm/IR/RuntimeLibcalls.td | 21 +++++++++++++---
.../Target/AArch64/AArch64ISelLowering.cpp | 12 ++++++---
llvm/lib/Target/AArch64/AArch64Subtarget.h | 6 -----
llvm/lib/Target/ARM/ARMISelLowering.cpp | 25 +++++++++++++------
.../CodeGen/AArch64/stacksmash-arm64ec.ll | 6 +++--
6 files changed, 53 insertions(+), 22 deletions(-)
diff --git a/llvm/include/llvm/CodeGen/TargetLowering.h b/llvm/include/llvm/CodeGen/TargetLowering.h
index 52729e9e7cee7..01f8fb5ed061f 100644
--- a/llvm/include/llvm/CodeGen/TargetLowering.h
+++ b/llvm/include/llvm/CodeGen/TargetLowering.h
@@ -3553,6 +3553,11 @@ class LLVM_ABI TargetLoweringBase {
return Libcalls.getLibcallName(Call);
}
+ /// Get the libcall routine name for the specified libcall implementation
+ const char *getLibcallImplName(RTLIB::LibcallImpl Call) const {
+ return Libcalls.getLibcallImplName(Call);
+ }
+
const char *getMemcpyName() const { return Libcalls.getMemcpyName(); }
/// Get the comparison predicate that's to be used to test the result of the
diff --git a/llvm/include/llvm/IR/RuntimeLibcalls.td b/llvm/include/llvm/IR/RuntimeLibcalls.td
index 553a302aa47b6..df472d4b9cfee 100644
--- a/llvm/include/llvm/IR/RuntimeLibcalls.td
+++ b/llvm/include/llvm/IR/RuntimeLibcalls.td
@@ -25,7 +25,8 @@ def isNotOSMSVCRT : RuntimeLibcallPredicate<"!TT.isOSMSVCRT()">;
def isPS : RuntimeLibcallPredicate<"TT.isPS()">;
def isNotOSWindowsOrIsCygwinMinGW
: RuntimeLibcallPredicate<"!TT.isOSWindows() || TT.isOSCygMing()">;
-
+def isWindowsMSVCEnvironment : RuntimeLibcallPredicate<
+ [{TT.isWindowsMSVCEnvironment()}]>;
def isGNUEnvironment : RuntimeLibcallPredicate<"TT.isGNUEnvironment()">;
def darwinHasSinCosStret : RuntimeLibcallPredicate<"darwinHasSinCosStret(TT)">;
@@ -369,6 +370,8 @@ def STACK_SMASH_HANDLER : RuntimeLibcall;
// Safe stack
def SAFESTACK_POINTER_ADDRESS : RuntimeLibcall;
+def SECURITY_CHECK_COOKIE : RuntimeLibcall;
+
// Deoptimization
def DEOPTIMIZE : RuntimeLibcall;
@@ -1009,6 +1012,10 @@ def __stack_smash_handler : RuntimeLibcallImpl<STACK_SMASH_HANDLER>;
def __riscv_flush_icache : RuntimeLibcallImpl<RISCV_FLUSH_ICACHE>;
+def __security_check_cookie : RuntimeLibcallImpl<SECURITY_CHECK_COOKIE>;
+def __security_check_cookie_arm64ec : RuntimeLibcallImpl<SECURITY_CHECK_COOKIE,
+ "#__security_check_cookie_arm64ec">;
+
//===----------------------------------------------------------------------===//
// F128 libm Runtime Libcalls
//===----------------------------------------------------------------------===//
@@ -1111,6 +1118,9 @@ defvar DarwinSinCosStret = LibcallImpls<(add __sincosf_stret, __sincos_stret),
darwinHasSinCosStret>;
defvar DarwinExp10 = LibcallImpls<(add __exp10f, __exp10), darwinHasExp10>;
+defvar SecurityCheckCookieIfWinMSVC =
+ LibcallImpls<(add __security_check_cookie), isWindowsMSVCEnvironment>;
+
defvar LibmHasSinCosF32 = LibcallImpls<(add sincosf), hasSinCos>;
defvar LibmHasSinCosF64 = LibcallImpls<(add sincos), hasSinCos>;
defvar LibmHasSinCosF80 = LibcallImpls<(add sincos_f80), hasSinCos>;
@@ -1233,7 +1243,8 @@ def AArch64SystemLibrary : SystemRuntimeLibrary<
DarwinExp10, DarwinSinCosStret,
LibmHasSinCosF32, LibmHasSinCosF64, LibmHasSinCosF128,
DefaultLibmExp10,
- DefaultStackProtector)
+ DefaultStackProtector,
+ SecurityCheckCookieIfWinMSVC)
>;
// Prepend a # to every name
@@ -1252,7 +1263,9 @@ def arm64ec___stack_chk_fail : DuplicateLibcallImplWithPrefix<__stack_chk_fail,
def WindowsARM64ECSystemLibrary
: SystemRuntimeLibrary<isWindowsArm64EC,
(add WinArm64ECDefaultRuntimeLibcallImpls,
- arm64ec___stack_chk_fail)>;
+ arm64ec___stack_chk_fail,
+ LibcallImpls<(add __security_check_cookie_arm64ec),
+ isWindowsMSVCEnvironment>)>;
//===----------------------------------------------------------------------===//
// AMDGPU Runtime Libcalls
@@ -1500,6 +1513,7 @@ def ARMSystemLibrary
LibmHasFrexpF128, LibmHasLdexpF128,
WindowARMDivRemCalls,
WindowARMFPIntCasts,
+ SecurityCheckCookieIfWinMSVC,
AEABIDivRemCalls,
DarwinSinCosStret, DarwinExp10,
LibmHasSinCosF32, LibmHasSinCosF64, LibmHasSinCosF128,
@@ -2158,6 +2172,7 @@ defvar X86CommonLibcalls =
DefaultRuntimeLibcallImpls_f80,
LibmHasExp10F32, LibmHasExp10F64, LibmHasExp10F80,
LibcallImpls<(add MostPowI), isNotOSMSVCRT>,
+ SecurityCheckCookieIfWinMSVC,
// FIXME: MSVCRT doesn't have powi. The f128 case is added as a
// hack for one test relying on it.
__powitf2_f128,
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 2b6ea86ee1af5..018c16d61b12d 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -28609,14 +28609,16 @@ Value *AArch64TargetLowering::getIRStackGuard(IRBuilderBase &IRB) const {
void AArch64TargetLowering::insertSSPDeclarations(Module &M) const {
// MSVC CRT provides functionalities for stack protection.
- if (Subtarget->getTargetTriple().isWindowsMSVCEnvironment()) {
+ RTLIB::LibcallImpl SecurityCheckCookieLibcall =
+ getLibcallImpl(RTLIB::SECURITY_CHECK_COOKIE);
+ if (SecurityCheckCookieLibcall != RTLIB::Unsupported) {
// MSVC CRT has a global variable holding security cookie.
M.getOrInsertGlobal("__security_cookie",
PointerType::getUnqual(M.getContext()));
// MSVC CRT has a function to validate security cookie.
FunctionCallee SecurityCheckCookie =
- M.getOrInsertFunction(Subtarget->getSecurityCheckCookieName(),
+ M.getOrInsertFunction(getLibcallImplName(SecurityCheckCookieLibcall),
Type::getVoidTy(M.getContext()),
PointerType::getUnqual(M.getContext()));
if (Function *F = dyn_cast<Function>(SecurityCheckCookie.getCallee())) {
@@ -28637,8 +28639,10 @@ Value *AArch64TargetLowering::getSDagStackGuard(const Module &M) const {
Function *AArch64TargetLowering::getSSPStackGuardCheck(const Module &M) const {
// MSVC CRT has a function to validate security cookie.
- if (Subtarget->getTargetTriple().isWindowsMSVCEnvironment())
- return M.getFunction(Subtarget->getSecurityCheckCookieName());
+ RTLIB::LibcallImpl SecurityCheckCookieLibcall =
+ getLibcallImpl(RTLIB::SECURITY_CHECK_COOKIE);
+ if (SecurityCheckCookieLibcall != RTLIB::Unsupported)
+ return M.getFunction(getLibcallImplName(SecurityCheckCookieLibcall));
return TargetLowering::getSSPStackGuardCheck(M);
}
diff --git a/llvm/lib/Target/AArch64/AArch64Subtarget.h b/llvm/lib/Target/AArch64/AArch64Subtarget.h
index 061ed611e5e47..d00e4471e107d 100644
--- a/llvm/lib/Target/AArch64/AArch64Subtarget.h
+++ b/llvm/lib/Target/AArch64/AArch64Subtarget.h
@@ -451,12 +451,6 @@ class AArch64Subtarget final : public AArch64GenSubtargetInfo {
return "__chkstk";
}
- const char* getSecurityCheckCookieName() const {
- if (isWindowsArm64EC())
- return "#__security_check_cookie_arm64ec";
- return "__security_check_cookie";
- }
-
/// Choose a method of checking LR before performing a tail call.
AArch64PAuth::AuthCheckMethod
getAuthenticatedLRCheckMethod(const MachineFunction &MF) const;
diff --git a/llvm/lib/Target/ARM/ARMISelLowering.cpp b/llvm/lib/Target/ARM/ARMISelLowering.cpp
index 18766c8c6befa..74c7c97e6e927 100644
--- a/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -21358,7 +21358,9 @@ bool ARMTargetLowering::useLoadStackGuardNode(const Module &M) const {
}
void ARMTargetLowering::insertSSPDeclarations(Module &M) const {
- if (!Subtarget->getTargetTriple().isWindowsMSVCEnvironment())
+ RTLIB::LibcallImpl SecurityCheckCookieLibcall =
+ getLibcallImpl(RTLIB::SECURITY_CHECK_COOKIE);
+ if (SecurityCheckCookieLibcall == RTLIB::Unsupported)
return TargetLowering::insertSSPDeclarations(M);
// MSVC CRT has a global variable holding security cookie.
@@ -21367,23 +21369,32 @@ void ARMTargetLowering::insertSSPDeclarations(Module &M) const {
// MSVC CRT has a function to validate security cookie.
FunctionCallee SecurityCheckCookie = M.getOrInsertFunction(
- "__security_check_cookie", Type::getVoidTy(M.getContext()),
- PointerType::getUnqual(M.getContext()));
+ getLibcallImplName(SecurityCheckCookieLibcall),
+ Type::getVoidTy(M.getContext()), PointerType::getUnqual(M.getContext()));
if (Function *F = dyn_cast<Function>(SecurityCheckCookie.getCallee()))
F->addParamAttr(0, Attribute::AttrKind::InReg);
}
Value *ARMTargetLowering::getSDagStackGuard(const Module &M) const {
- // MSVC CRT has a global variable holding security cookie.
- if (Subtarget->getTargetTriple().isWindowsMSVCEnvironment())
+ RTLIB::LibcallImpl SecurityCheckCookieLibcall =
+ getLibcallImpl(RTLIB::SECURITY_CHECK_COOKIE);
+ if (SecurityCheckCookieLibcall != RTLIB::Unsupported) {
+ // MSVC CRT has a global variable holding security cookie.
+ //
+ // FIXME: We have a libcall entry for the correlated check function, but not
+ // the global name.
return M.getGlobalVariable("__security_cookie");
+ }
+
return TargetLowering::getSDagStackGuard(M);
}
Function *ARMTargetLowering::getSSPStackGuardCheck(const Module &M) const {
// MSVC CRT has a function to validate security cookie.
- if (Subtarget->getTargetTriple().isWindowsMSVCEnvironment())
- return M.getFunction("__security_check_cookie");
+ RTLIB::LibcallImpl SecurityCheckCookie =
+ getLibcallImpl(RTLIB::SECURITY_CHECK_COOKIE);
+ if (SecurityCheckCookie != RTLIB::Unsupported)
+ return M.getFunction(getLibcallImplName(SecurityCheckCookie));
return TargetLowering::getSSPStackGuardCheck(M);
}
diff --git a/llvm/test/CodeGen/AArch64/stacksmash-arm64ec.ll b/llvm/test/CodeGen/AArch64/stacksmash-arm64ec.ll
index 0960133d7d054..bd4110173f013 100644
--- a/llvm/test/CodeGen/AArch64/stacksmash-arm64ec.ll
+++ b/llvm/test/CodeGen/AArch64/stacksmash-arm64ec.ll
@@ -1,8 +1,10 @@
-; RUN: llc -mtriple=arm64ec-unknown-windows-gnu < %s | FileCheck %s
+; RUN: llc -mtriple=arm64ec-unknown-windows < %s | FileCheck -check-prefixes=CHECK,NONGNU %s
+; RUN: llc -mtriple=arm64ec-unknown-windows-gnu < %s | FileCheck -check-prefixes=CHECK,GNU %s
; CHECK-LABEL: func = "#func"
; CHECK: bl "#other"
-; CHECK: bl "#__stack_chk_fail"
+; NONGNU: bl "#__security_check_cookie_arm64ec"
+; GNU: bl "#__stack_chk_fail"
define void @func() #0 {
entry:
%buf = alloca [10 x i8], align 1
>From a15b629527a975ec592c95d69d04ef3537915d1d Mon Sep 17 00:00:00 2001
From: Jon Roelofs <jonathan_roelofs at apple.com>
Date: Tue, 5 Aug 2025 17:08:00 -0700
Subject: [PATCH 029/171] Revert "Strip the full path from __FILE__ in the LDBG
macro and keep only the filename (#150677)"
This reverts commit 5d26e3c227f4b4a1761a8b0001b3165198def479.
It breaks the modules build of clang, since every source file has a different
version of this macro, which causes
https://green.lab.llvm.org/job/clang-stage2-Rthinlto/ to run out of space. We
should probably be using __FILE_NAME__ instead when the host compiler supports
it, but until then let's revert to green and unblock the bots.
see: https://github.com/llvm/llvm-project/pull/150677
see: https://github.com/llvm/llvm-project/pull/150677#issuecomment-3156779021
rdar://157465825
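A possible shape for that follow-up, as a sketch only (LDBG_FILE is a
hypothetical name; __FILE_NAME__ is a Clang/recent-GCC extension that
expands to the basename):
```cpp
// Use the basename-only macro where the host compiler provides it, and
// fall back to the full path otherwise; the macro is then identical
// across source files, which keeps the modules build happy.
#ifdef __FILE_NAME__
#define LDBG_FILE __FILE_NAME__
#else
#define LDBG_FILE __FILE__
#endif
```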
---
llvm/cmake/modules/LLVMProcessSources.cmake | 15 ----
llvm/include/llvm/Support/DebugLog.h | 91 +++++----------------
2 files changed, 21 insertions(+), 85 deletions(-)
diff --git a/llvm/cmake/modules/LLVMProcessSources.cmake b/llvm/cmake/modules/LLVMProcessSources.cmake
index cf358a88f5fb6..0670d60bf2afd 100644
--- a/llvm/cmake/modules/LLVMProcessSources.cmake
+++ b/llvm/cmake/modules/LLVMProcessSources.cmake
@@ -58,21 +58,6 @@ function(llvm_process_sources OUT_VAR)
set(sources ${ARG_UNPARSED_ARGUMENTS})
llvm_check_source_file_list(${sources})
- # Don't generate __SHORT_FILE__ on VS builds as it can prevent build parallelisation.
- if(NOT CMAKE_GENERATOR MATCHES "Visual Studio")
- foreach(fn ${sources})
- get_filename_component(suf ${fn} EXT)
- if("${suf}" STREQUAL ".cpp" OR "${suf}" STREQUAL ".c")
- get_filename_component(short_name ${fn} NAME)
- set_property(
- SOURCE ${fn}
- APPEND
- PROPERTY COMPILE_DEFINITIONS __SHORT_FILE__="${short_name}")
- endif()
- endforeach()
- endif()
-
-
# This adds .td and .h files to the Visual Studio solution:
add_td_sources(sources)
find_all_header_files(hdrs "${ARG_ADDITIONAL_HEADER_DIRS}")
diff --git a/llvm/include/llvm/Support/DebugLog.h b/llvm/include/llvm/Support/DebugLog.h
index a3312950da94e..386cb7f1f9716 100644
--- a/llvm/include/llvm/Support/DebugLog.h
+++ b/llvm/include/llvm/Support/DebugLog.h
@@ -41,81 +41,32 @@ namespace llvm {
//
#define LDBG(...) _GET_LDBG_MACRO(__VA_ARGS__)(__VA_ARGS__)
-// Helper macros to choose the correct macro based on the number of arguments.
-#define LDBG_FUNC_CHOOSER(_f1, _f2, ...) _f2
-#define LDBG_FUNC_RECOMPOSER(argsWithParentheses) \
- LDBG_FUNC_CHOOSER argsWithParentheses
-#define LDBG_CHOOSE_FROM_ARG_COUNT(...) \
- LDBG_FUNC_RECOMPOSER((__VA_ARGS__, LDBG_LOG_LEVEL, ))
-#define LDBG_NO_ARG_EXPANDER() , LDBG_LOG_LEVEL_1
-#define _GET_LDBG_MACRO(...) \
- LDBG_CHOOSE_FROM_ARG_COUNT(LDBG_NO_ARG_EXPANDER __VA_ARGS__())
-
-// Dispatch macros to support the `level` argument or none (default to 1)
-#define LDBG_LOG_LEVEL(LEVEL) \
- DEBUGLOG_WITH_STREAM_AND_TYPE(llvm::dbgs(), LEVEL, DEBUG_TYPE)
-#define LDBG_LOG_LEVEL_1() LDBG_LOG_LEVEL(1)
-
-#define DEBUGLOG_WITH_STREAM_TYPE_FILE_AND_LINE(STREAM, LEVEL, TYPE, FILE, \
- LINE) \
- for (bool _c = \
- (::llvm::DebugFlag && ::llvm::isCurrentDebugType(TYPE, LEVEL)); \
- _c; _c = false) \
- for (::llvm::impl::RAIINewLineStream NewLineStream{(STREAM)}; _c; \
- _c = false) \
- ::llvm::impl::raw_ldbg_ostream{ \
- ::llvm::impl::computePrefix(TYPE, FILE, LINE, LEVEL), NewLineStream} \
- .asLvalue()
-
-#define DEBUGLOG_WITH_STREAM_TYPE_AND_FILE(STREAM, LEVEL, TYPE, FILE) \
- DEBUGLOG_WITH_STREAM_TYPE_FILE_AND_LINE(STREAM, LEVEL, TYPE, FILE, __LINE__)
-// When __SHORT_FILE__ is not defined, the File is the full path,
-// otherwise __SHORT_FILE__ is defined in CMake to provide the file name
-// without the path prefix.
-#if defined(__SHORT_FILE__)
-#define DEBUGLOG_WITH_STREAM_AND_TYPE(STREAM, LEVEL, TYPE) \
- DEBUGLOG_WITH_STREAM_TYPE_AND_FILE(STREAM, LEVEL, TYPE, __SHORT_FILE__)
-#else
-#define DEBUGLOG_WITH_STREAM_AND_TYPE(STREAM, LEVEL, TYPE) \
- DEBUGLOG_WITH_STREAM_TYPE_AND_FILE(STREAM, LEVEL, TYPE, \
- ::llvm::impl::getShortFileName(__FILE__))
-#endif
+#define DEBUGLOG_WITH_STREAM_AND_TYPE(STREAM, TYPE) \
+ for (bool _c = (::llvm::DebugFlag && ::llvm::isCurrentDebugType(TYPE)); _c; \
+ _c = false) \
+ ::llvm::impl::LogWithNewline(TYPE, __FILE__, __LINE__, (STREAM))
namespace impl {
-
-/// A raw_ostream that tracks `\n` and print the prefix after each
-/// newline.
-class LLVM_ABI raw_ldbg_ostream final : public raw_ostream {
- std::string Prefix;
- raw_ostream &Os;
- bool HasPendingNewline;
-
- /// Split the line on newlines and insert the prefix before each
- /// newline. Forward everything to the underlying stream.
- void write_impl(const char *Ptr, size_t Size) final {
- auto Str = StringRef(Ptr, Size);
- // Handle the initial prefix.
- if (!Str.empty())
- writeWithPrefix(StringRef());
-
- auto Eol = Str.find('\n');
- while (Eol != StringRef::npos) {
- StringRef Line = Str.take_front(Eol + 1);
- if (!Line.empty())
- writeWithPrefix(Line);
- HasPendingNewline = true;
- Str = Str.drop_front(Eol + 1);
- Eol = Str.find('\n');
- }
- if (!Str.empty())
- writeWithPrefix(Str);
+class LogWithNewline {
+public:
+ LogWithNewline(const char *debug_type, const char *file, int line,
+ raw_ostream &os)
+ : os(os) {
+ if (debug_type)
+ os << "[" << debug_type << "] ";
+ os << file << ":" << line << " ";
}
- void emitPrefix() { Os.write(Prefix.c_str(), Prefix.size()); }
- void writeWithPrefix(StringRef Str) {
- flushEol();
- Os.write(Str.data(), Str.size());
+ ~LogWithNewline() { os << '\n'; }
+ template <typename T> raw_ostream &operator<<(const T &t) && {
+ return os << t;
}
+ // Prevent copying, as this class manages newline responsibility and is
+ // intended for use as a temporary.
+ LogWithNewline(const LogWithNewline &) = delete;
+ LogWithNewline &operator=(const LogWithNewline &) = delete;
+ LogWithNewline &operator=(LogWithNewline &&) = delete;
+
public:
explicit raw_ldbg_ostream(std::string Prefix, raw_ostream &Os,
bool HasPendingNewline = true)
>From af16fc2e2a50c1cbac49726ea70739ad6e193729 Mon Sep 17 00:00:00 2001
From: Wenju He <wenju.he at intel.com>
Date: Wed, 6 Aug 2025 09:49:28 +0800
Subject: [PATCH 030/171] [libclc] Move mem_fence and barrier to clc library
(#151446)
The __clc_mem_fence and __clc_work_group_barrier functions have two
parameters, memory_scope and memory_order. This design allows the clc
functions to implement the SPIR-V ControlBarrier and MemoryBarrier
functions in the future.
The default memory ordering in clc is set to __ATOMIC_SEQ_CST, which is
also the default and strongest ordering in OpenCL and C++.
The OpenCL cl_mem_fence_flags parameter is converted to a combination of
__MEMORY_SCOPE_DEVICE and __MEMORY_SCOPE_WRKGRP, which is passed to clc.
llvm-diff shows no change to nvptx64--nvidiacl.bc.
llvm-diff shows a small change to amdgcn--amdhsa.bc, where the number of
LLVM IR instructions is reduced by 1: https://alive2.llvm.org/ce/z/_Uhqvt
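Condensed, the OpenCL entry points now do the following (mirroring
getCLCMemoryScope and mem_fence from the diffs below):
```cpp
// cl_mem_fence_flags -> clc memory scope, then delegate with seq_cst
// order. __MEMORY_SCOPE_DEVICE/__MEMORY_SCOPE_WRKGRP and
// __ATOMIC_SEQ_CST are Clang-provided builtin constants.
int memory_scope = 0;
if (flags & CLK_GLOBAL_MEM_FENCE)
  memory_scope |= __MEMORY_SCOPE_DEVICE;
if (flags & CLK_LOCAL_MEM_FENCE)
  memory_scope |= __MEMORY_SCOPE_WRKGRP;
__clc_mem_fence(memory_scope, __ATOMIC_SEQ_CST);
```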
---
.../clc/include/clc/mem_fence/clc_mem_fence.h | 17 +++++++++
.../synchronization/clc_work_group_barrier.h | 17 +++++++++
libclc/clc/lib/amdgcn/SOURCES | 2 +
.../clc/lib/amdgcn/mem_fence/clc_mem_fence.cl | 37 +++++++++++++++++++
.../synchronization/clc_work_group_barrier.cl | 16 ++++++++
libclc/clc/lib/ptx-nvidiacl/SOURCES | 2 +
.../ptx-nvidiacl/mem_fence/clc_mem_fence.cl | 15 ++++++++
.../synchronization/clc_work_group_barrier.cl | 14 +++++++
.../synchronization/cl_mem_fence_flags.h | 1 +
.../clc/opencl/synchronization/utils.h | 24 ++++++++++++
libclc/opencl/lib/amdgcn/mem_fence/fence.cl | 29 +++------------
.../lib/amdgcn/synchronization/barrier.cl | 8 ++--
.../lib/ptx-nvidiacl/mem_fence/fence.cl | 7 +++-
.../ptx-nvidiacl/synchronization/barrier.cl | 6 ++-
14 files changed, 165 insertions(+), 30 deletions(-)
create mode 100644 libclc/clc/include/clc/mem_fence/clc_mem_fence.h
create mode 100644 libclc/clc/include/clc/synchronization/clc_work_group_barrier.h
create mode 100644 libclc/clc/lib/amdgcn/mem_fence/clc_mem_fence.cl
create mode 100644 libclc/clc/lib/amdgcn/synchronization/clc_work_group_barrier.cl
create mode 100644 libclc/clc/lib/ptx-nvidiacl/mem_fence/clc_mem_fence.cl
create mode 100644 libclc/clc/lib/ptx-nvidiacl/synchronization/clc_work_group_barrier.cl
create mode 100644 libclc/opencl/include/clc/opencl/synchronization/utils.h
diff --git a/libclc/clc/include/clc/mem_fence/clc_mem_fence.h b/libclc/clc/include/clc/mem_fence/clc_mem_fence.h
new file mode 100644
index 0000000000000..2321634c76842
--- /dev/null
+++ b/libclc/clc/include/clc/mem_fence/clc_mem_fence.h
@@ -0,0 +1,17 @@
+//===----------------------------------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef __CLC_MEM_FENCE_CLC_MEM_FENCE_H__
+#define __CLC_MEM_FENCE_CLC_MEM_FENCE_H__
+
+#include <clc/internal/clc.h>
+
+_CLC_OVERLOAD _CLC_DECL void __clc_mem_fence(int memory_scope,
+ int memory_order);
+
+#endif // __CLC_MEM_FENCE_CLC_MEM_FENCE_H__
diff --git a/libclc/clc/include/clc/synchronization/clc_work_group_barrier.h b/libclc/clc/include/clc/synchronization/clc_work_group_barrier.h
new file mode 100644
index 0000000000000..5f864e1057b8b
--- /dev/null
+++ b/libclc/clc/include/clc/synchronization/clc_work_group_barrier.h
@@ -0,0 +1,17 @@
+//===----------------------------------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef __CLC_SYNCHRONIZATION_CLC_WORK_GROUP_BARRIER_H__
+#define __CLC_SYNCHRONIZATION_CLC_WORK_GROUP_BARRIER_H__
+
+#include <clc/internal/clc.h>
+
+_CLC_OVERLOAD _CLC_DECL void __clc_work_group_barrier(int memory_scope,
+ int memory_order);
+
+#endif // __CLC_SYNCHRONIZATION_CLC_WORK_GROUP_BARRIER_H__
diff --git a/libclc/clc/lib/amdgcn/SOURCES b/libclc/clc/lib/amdgcn/SOURCES
index d91f08533e149..76c3266e3af7b 100644
--- a/libclc/clc/lib/amdgcn/SOURCES
+++ b/libclc/clc/lib/amdgcn/SOURCES
@@ -1,4 +1,6 @@
math/clc_ldexp_override.cl
+mem_fence/clc_mem_fence.cl
+synchronization/clc_work_group_barrier.cl
workitem/clc_get_global_offset.cl
workitem/clc_get_global_size.cl
workitem/clc_get_group_id.cl
diff --git a/libclc/clc/lib/amdgcn/mem_fence/clc_mem_fence.cl b/libclc/clc/lib/amdgcn/mem_fence/clc_mem_fence.cl
new file mode 100644
index 0000000000000..9e6460313718e
--- /dev/null
+++ b/libclc/clc/lib/amdgcn/mem_fence/clc_mem_fence.cl
@@ -0,0 +1,37 @@
+//===----------------------------------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include <clc/mem_fence/clc_mem_fence.h>
+
+void __clc_amdgcn_s_waitcnt(unsigned flags);
+
+// s_waitcnt takes 16bit argument with a combined number of maximum allowed
+// pending operations:
+// [12:8] LGKM -- LDS, GDS, Konstant (SMRD), Messages
+// [7] -- undefined
+// [6:4] -- exports, GDS, and mem write
+// [3:0] -- vector memory operations
+
+// Newer clang supports __builtin_amdgcn_s_waitcnt
+#if __clang_major__ >= 5
+#define __waitcnt(x) __builtin_amdgcn_s_waitcnt(x)
+#else
+#define __waitcnt(x) __clc_amdgcn_s_waitcnt(x)
+_CLC_DEF void __clc_amdgcn_s_waitcnt(unsigned) __asm("llvm.amdgcn.s.waitcnt");
+#endif
+
+_CLC_OVERLOAD _CLC_DEF void __clc_mem_fence(int memory_scope,
+ int memory_order) {
+ if (memory_scope & __MEMORY_SCOPE_DEVICE) {
+ // scalar loads are counted with LGKM but we don't know whether
+ // the compiler turned any loads to scalar
+ __waitcnt(0);
+ } else if (memory_scope & __MEMORY_SCOPE_WRKGRP)
+ __waitcnt(0xff); // LGKM is [12:8]
+}
+#undef __waitcnt
diff --git a/libclc/clc/lib/amdgcn/synchronization/clc_work_group_barrier.cl b/libclc/clc/lib/amdgcn/synchronization/clc_work_group_barrier.cl
new file mode 100644
index 0000000000000..ff3628fa7c339
--- /dev/null
+++ b/libclc/clc/lib/amdgcn/synchronization/clc_work_group_barrier.cl
@@ -0,0 +1,16 @@
+//===----------------------------------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include <clc/mem_fence/clc_mem_fence.h>
+#include <clc/synchronization/clc_work_group_barrier.h>
+
+_CLC_OVERLOAD _CLC_DEF void __clc_work_group_barrier(int memory_scope,
+ int memory_order) {
+ __clc_mem_fence(memory_scope, memory_order);
+ __builtin_amdgcn_s_barrier();
+}
diff --git a/libclc/clc/lib/ptx-nvidiacl/SOURCES b/libclc/clc/lib/ptx-nvidiacl/SOURCES
index 05368c5e4d4e3..b6f50654f89c5 100644
--- a/libclc/clc/lib/ptx-nvidiacl/SOURCES
+++ b/libclc/clc/lib/ptx-nvidiacl/SOURCES
@@ -1,3 +1,5 @@
+mem_fence/clc_mem_fence.cl
+synchronization/clc_work_group_barrier.cl
workitem/clc_get_global_id.cl
workitem/clc_get_group_id.cl
workitem/clc_get_local_id.cl
diff --git a/libclc/clc/lib/ptx-nvidiacl/mem_fence/clc_mem_fence.cl b/libclc/clc/lib/ptx-nvidiacl/mem_fence/clc_mem_fence.cl
new file mode 100644
index 0000000000000..b3e2375e755a2
--- /dev/null
+++ b/libclc/clc/lib/ptx-nvidiacl/mem_fence/clc_mem_fence.cl
@@ -0,0 +1,15 @@
+//===----------------------------------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include <clc/mem_fence/clc_mem_fence.h>
+
+_CLC_OVERLOAD _CLC_DEF void __clc_mem_fence(int memory_scope,
+ int memory_order) {
+ if (memory_scope & (__MEMORY_SCOPE_DEVICE | __MEMORY_SCOPE_WRKGRP))
+ __nvvm_membar_cta();
+}
diff --git a/libclc/clc/lib/ptx-nvidiacl/synchronization/clc_work_group_barrier.cl b/libclc/clc/lib/ptx-nvidiacl/synchronization/clc_work_group_barrier.cl
new file mode 100644
index 0000000000000..6cb37a38f06ac
--- /dev/null
+++ b/libclc/clc/lib/ptx-nvidiacl/synchronization/clc_work_group_barrier.cl
@@ -0,0 +1,14 @@
+//===----------------------------------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include <clc/synchronization/clc_work_group_barrier.h>
+
+_CLC_OVERLOAD _CLC_DEF void __clc_work_group_barrier(int memory_scope,
+ int memory_order) {
+ __syncthreads();
+}
diff --git a/libclc/opencl/include/clc/opencl/synchronization/cl_mem_fence_flags.h b/libclc/opencl/include/clc/opencl/synchronization/cl_mem_fence_flags.h
index 6636515fca47d..7b2f701c1ff99 100644
--- a/libclc/opencl/include/clc/opencl/synchronization/cl_mem_fence_flags.h
+++ b/libclc/opencl/include/clc/opencl/synchronization/cl_mem_fence_flags.h
@@ -13,5 +13,6 @@ typedef uint cl_mem_fence_flags;
#define CLK_LOCAL_MEM_FENCE 1
#define CLK_GLOBAL_MEM_FENCE 2
+#define CLK_IMAGE_MEM_FENCE 4
#endif // __CLC_OPENCL_SYNCHRONIZATION_CL_MEM_FENCE_FLAGS_H__
diff --git a/libclc/opencl/include/clc/opencl/synchronization/utils.h b/libclc/opencl/include/clc/opencl/synchronization/utils.h
new file mode 100644
index 0000000000000..cf3baf28cb5f1
--- /dev/null
+++ b/libclc/opencl/include/clc/opencl/synchronization/utils.h
@@ -0,0 +1,24 @@
+//===----------------------------------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef __CLC_OPENCL_SYNCHRONIZATION_UTILS_H__
+#define __CLC_OPENCL_SYNCHRONIZATION_UTILS_H__
+
+#include <clc/internal/clc.h>
+#include <clc/opencl/synchronization/cl_mem_fence_flags.h>
+
+_CLC_INLINE int getCLCMemoryScope(cl_mem_fence_flags flag) {
+ int memory_scope = 0;
+ if (flag & CLK_GLOBAL_MEM_FENCE)
+ memory_scope |= __MEMORY_SCOPE_DEVICE;
+ if (flag & CLK_LOCAL_MEM_FENCE)
+ memory_scope |= __MEMORY_SCOPE_WRKGRP;
+ return memory_scope;
+}
+
+#endif // __CLC_OPENCL_SYNCHRONIZATION_UTILS_H__
diff --git a/libclc/opencl/lib/amdgcn/mem_fence/fence.cl b/libclc/opencl/lib/amdgcn/mem_fence/fence.cl
index 88b953005aae6..81216d6a26cf2 100644
--- a/libclc/opencl/lib/amdgcn/mem_fence/fence.cl
+++ b/libclc/opencl/lib/amdgcn/mem_fence/fence.cl
@@ -6,34 +6,15 @@
//
//===----------------------------------------------------------------------===//
+#include <clc/mem_fence/clc_mem_fence.h>
#include <clc/opencl/explicit_fence/explicit_memory_fence.h>
-
-void __clc_amdgcn_s_waitcnt(unsigned flags);
-
-// s_waitcnt takes 16bit argument with a combined number of maximum allowed
-// pending operations:
-// [12:8] LGKM -- LDS, GDS, Konstant (SMRD), Messages
-// [7] -- undefined
-// [6:4] -- exports, GDS, and mem write
-// [3:0] -- vector memory operations
-
-// Newer clang supports __builtin_amdgcn_s_waitcnt
-#if __clang_major__ >= 5
-#define __waitcnt(x) __builtin_amdgcn_s_waitcnt(x)
-#else
-#define __waitcnt(x) __clc_amdgcn_s_waitcnt(x)
-_CLC_DEF void __clc_amdgcn_s_waitcnt(unsigned) __asm("llvm.amdgcn.s.waitcnt");
-#endif
+#include <clc/opencl/synchronization/utils.h>
_CLC_DEF _CLC_OVERLOAD void mem_fence(cl_mem_fence_flags flags) {
- if (flags & CLK_GLOBAL_MEM_FENCE) {
- // scalar loads are counted with LGKM but we don't know whether
- // the compiler turned any loads to scalar
- __waitcnt(0);
- } else if (flags & CLK_LOCAL_MEM_FENCE)
- __waitcnt(0xff); // LGKM is [12:8]
+ int memory_scope = getCLCMemoryScope(flags);
+ int memory_order = __ATOMIC_SEQ_CST;
+ __clc_mem_fence(memory_scope, memory_order);
}
-#undef __waitcnt
// We don't have separate mechanism for read and write fences
_CLC_DEF _CLC_OVERLOAD void read_mem_fence(cl_mem_fence_flags flags) {
diff --git a/libclc/opencl/lib/amdgcn/synchronization/barrier.cl b/libclc/opencl/lib/amdgcn/synchronization/barrier.cl
index 5203db72f484c..c8322e602302c 100644
--- a/libclc/opencl/lib/amdgcn/synchronization/barrier.cl
+++ b/libclc/opencl/lib/amdgcn/synchronization/barrier.cl
@@ -6,10 +6,12 @@
//
//===----------------------------------------------------------------------===//
-#include <clc/opencl/explicit_fence/explicit_memory_fence.h>
#include <clc/opencl/synchronization/barrier.h>
+#include <clc/opencl/synchronization/utils.h>
+#include <clc/synchronization/clc_work_group_barrier.h>
_CLC_DEF _CLC_OVERLOAD void barrier(cl_mem_fence_flags flags) {
- mem_fence(flags);
- __builtin_amdgcn_s_barrier();
+ int memory_scope = getCLCMemoryScope(flags);
+ int memory_order = __ATOMIC_SEQ_CST;
+ __clc_work_group_barrier(memory_scope, memory_order);
}
diff --git a/libclc/opencl/lib/ptx-nvidiacl/mem_fence/fence.cl b/libclc/opencl/lib/ptx-nvidiacl/mem_fence/fence.cl
index d24569ecda1bc..e22ed870a7e6b 100644
--- a/libclc/opencl/lib/ptx-nvidiacl/mem_fence/fence.cl
+++ b/libclc/opencl/lib/ptx-nvidiacl/mem_fence/fence.cl
@@ -6,11 +6,14 @@
//
//===----------------------------------------------------------------------===//
+#include <clc/mem_fence/clc_mem_fence.h>
#include <clc/opencl/explicit_fence/explicit_memory_fence.h>
+#include <clc/opencl/synchronization/utils.h>
_CLC_DEF _CLC_OVERLOAD void mem_fence(cl_mem_fence_flags flags) {
- if (flags & (CLK_GLOBAL_MEM_FENCE | CLK_LOCAL_MEM_FENCE))
- __nvvm_membar_cta();
+ int memory_scope = getCLCMemoryScope(flags);
+ int memory_order = __ATOMIC_SEQ_CST;
+ __clc_mem_fence(memory_scope, memory_order);
}
// We do not have separate mechanism for read and write fences.
diff --git a/libclc/opencl/lib/ptx-nvidiacl/synchronization/barrier.cl b/libclc/opencl/lib/ptx-nvidiacl/synchronization/barrier.cl
index 7c57478795dda..c8322e602302c 100644
--- a/libclc/opencl/lib/ptx-nvidiacl/synchronization/barrier.cl
+++ b/libclc/opencl/lib/ptx-nvidiacl/synchronization/barrier.cl
@@ -7,7 +7,11 @@
//===----------------------------------------------------------------------===//
#include <clc/opencl/synchronization/barrier.h>
+#include <clc/opencl/synchronization/utils.h>
+#include <clc/synchronization/clc_work_group_barrier.h>
_CLC_DEF _CLC_OVERLOAD void barrier(cl_mem_fence_flags flags) {
- __syncthreads();
+ int memory_scope = getCLCMemoryScope(flags);
+ int memory_order = __ATOMIC_SEQ_CST;
+ __clc_work_group_barrier(memory_scope, memory_order);
}
>From ee6afeb72ef2cee918c1338c926c481ad36aa915 Mon Sep 17 00:00:00 2001
From: Chuanqi Xu <yedeng.yd at linux.alibaba.com>
Date: Wed, 6 Aug 2025 09:59:59 +0800
Subject: [PATCH 031/171] [NFC] [C++20] [Modules] Lookup in cache before
checking if the primary template is an exposure
Checking whether a declaration is an exposure may be relatively expensive.
Using the cache to avoid such checks where possible is helpful.
---
clang/lib/Sema/SemaModule.cpp | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/clang/lib/Sema/SemaModule.cpp b/clang/lib/Sema/SemaModule.cpp
index b137549b8f859..ff9f85f960d93 100644
--- a/clang/lib/Sema/SemaModule.cpp
+++ b/clang/lib/Sema/SemaModule.cpp
@@ -1222,7 +1222,8 @@ bool ExposureChecker::isTULocal(const NamedDecl *D) {
// [basic.link]p15.5
// - a specialization of a template whose (possibly instantiated) declaration
// is an exposure.
- if (checkExposure(PrimaryTemplate, /*Diag=*/false))
+ if (ExposureSet.count(PrimaryTemplate) ||
+ checkExposure(PrimaryTemplate, /*Diag=*/false))
return true;
// Avoid calling checkExposure again since it is expensive.
>From d27802a2173ab3d864d3bf1ac507a4acc656e457 Mon Sep 17 00:00:00 2001
From: Alex MacLean <amaclean at nvidia.com>
Date: Tue, 5 Aug 2025 19:20:17 -0700
Subject: [PATCH 032/171] [DAGCombiner] Fold setcc of trunc, generalizing some
NVPTX isel logic (#150270)
This change adds support for folding a SETCC when one or both of the
operands is a TRUNCATE with the appropriate no-wrap flags. This pattern
can occur when promoting i8 operations in NVPTX, and we currently have
some ISel rules that try to handle it.
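Why the fold is sound can be seen with a small scalar model (an
illustration, not code from the patch): nuw on the truncate guarantees
the dropped high bits were zero, so an unsigned compare of the narrow
value agrees with an unsigned compare of the wide value against the
zero-extended constant; nsw plays the same role for signed compares with
sign extension.
```cpp
#include <cassert>
#include <cstdint>

// setcc (trunc nuw x to i8), 100, ugt  ==  setcc x, zext(100), ugt
bool narrowCmp(uint32_t x) { return static_cast<uint8_t>(x) > 100u; }
bool wideCmp(uint32_t x) { return x > 100u; }

int main() {
  // The nuw guarantee: truncation dropped no set bits, i.e. x < 256.
  for (uint32_t x = 0; x < 256; ++x)
    assert(narrowCmp(x) == wideCmp(x));
}
```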
---
.../CodeGen/SelectionDAG/TargetLowering.cpp | 25 ++
llvm/lib/Target/NVPTX/NVPTXInstrInfo.td | 12 -
llvm/test/CodeGen/NVPTX/sext-setcc.ll | 13 +-
llvm/test/CodeGen/NVPTX/trunc-setcc.ll | 269 ++++++++++++++++++
4 files changed, 298 insertions(+), 21 deletions(-)
create mode 100644 llvm/test/CodeGen/NVPTX/trunc-setcc.ll
diff --git a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index a68f521ee59cd..e235d144e85ff 100644
--- a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -5118,6 +5118,20 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
Cond == ISD::SETEQ ? ISD::SETLT : ISD::SETGE);
}
+ // fold (setcc (trunc x) c) -> (setcc x c)
+ if (N0.getOpcode() == ISD::TRUNCATE &&
+ ((N0->getFlags().hasNoUnsignedWrap() && !ISD::isSignedIntSetCC(Cond)) ||
+ (N0->getFlags().hasNoSignedWrap() &&
+ !ISD::isUnsignedIntSetCC(Cond))) &&
+ isTypeDesirableForOp(ISD::SETCC, N0.getOperand(0).getValueType())) {
+ EVT NewVT = N0.getOperand(0).getValueType();
+ SDValue NewConst = DAG.getConstant(ISD::isSignedIntSetCC(Cond)
+ ? C1.sext(NewVT.getSizeInBits())
+ : C1.zext(NewVT.getSizeInBits()),
+ dl, NewVT);
+ return DAG.getSetCC(dl, VT, N0.getOperand(0), NewConst, Cond);
+ }
+
if (SDValue V =
optimizeSetCCOfSignedTruncationCheck(VT, N0, N1, Cond, DCI, dl))
return V;
@@ -5654,6 +5668,17 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
return N0;
}
+ // Fold (setcc (trunc x) (trunc y)) -> (setcc x y)
+ if (N0.getOpcode() == ISD::TRUNCATE && N1.getOpcode() == ISD::TRUNCATE &&
+ N0.getOperand(0).getValueType() == N1.getOperand(0).getValueType() &&
+ ((!ISD::isSignedIntSetCC(Cond) && N0->getFlags().hasNoUnsignedWrap() &&
+ N1->getFlags().hasNoUnsignedWrap()) ||
+ (!ISD::isUnsignedIntSetCC(Cond) && N0->getFlags().hasNoSignedWrap() &&
+ N1->getFlags().hasNoSignedWrap())) &&
+ isTypeDesirableForOp(ISD::SETCC, N0.getOperand(0).getValueType())) {
+ return DAG.getSetCC(dl, VT, N0.getOperand(0), N1.getOperand(0), Cond);
+ }
+
// Could not fold it.
return SDValue();
}
diff --git a/llvm/lib/Target/NVPTX/NVPTXInstrInfo.td b/llvm/lib/Target/NVPTX/NVPTXInstrInfo.td
index 6765ecb77da3a..aac611d4c903a 100644
--- a/llvm/lib/Target/NVPTX/NVPTXInstrInfo.td
+++ b/llvm/lib/Target/NVPTX/NVPTXInstrInfo.td
@@ -1560,18 +1560,6 @@ def : Pat<(setcc (i16 (sext_inreg (trunc (prmt i32:$a, 0, byte_extract_prmt:$sel
(PRMT_B32rii i32:$b, 0, (to_sign_extend_selector $sel_b), PrmtNONE),
(cond2cc $cc))>;
-// A 16-bit comparison of truncated byte extracts can be be converted to 32-bit
-// comparison because we know that the truncate is just trancating off zeros
-// and that the most-significant byte is also zeros so the meaning of signed and
-// unsigned comparisons will not be changed.
-def : Pat<(setcc (i16 (trunc (prmt i32:$a, 0, byte_extract_prmt:$sel_a, PrmtNONE))),
- (i16 (trunc (prmt i32:$b, 0, byte_extract_prmt:$sel_b, PrmtNONE))),
- cond:$cc),
- (SETP_i32rr (PRMT_B32rii i32:$a, 0, byte_extract_prmt:$sel_a, PrmtNONE),
- (PRMT_B32rii i32:$b, 0, byte_extract_prmt:$sel_b, PrmtNONE),
- (cond2cc $cc))>;
-
-
def SDTDeclareArrayParam :
SDTypeProfile<0, 3, [SDTCisVT<0, i32>, SDTCisVT<1, i32>, SDTCisVT<2, i32>]>;
def SDTDeclareScalarParam :
diff --git a/llvm/test/CodeGen/NVPTX/sext-setcc.ll b/llvm/test/CodeGen/NVPTX/sext-setcc.ll
index 9a67bdfeb067b..97918a6f26cdf 100644
--- a/llvm/test/CodeGen/NVPTX/sext-setcc.ll
+++ b/llvm/test/CodeGen/NVPTX/sext-setcc.ll
@@ -29,7 +29,6 @@ define <4 x i8> @sext_setcc_v4i1_to_v4i8(ptr %p) {
; CHECK-LABEL: sext_setcc_v4i1_to_v4i8(
; CHECK: {
; CHECK-NEXT: .reg .pred %p<5>;
-; CHECK-NEXT: .reg .b16 %rs<5>;
; CHECK-NEXT: .reg .b32 %r<13>;
; CHECK-NEXT: .reg .b64 %rd<2>;
; CHECK-EMPTY:
@@ -37,17 +36,13 @@ define <4 x i8> @sext_setcc_v4i1_to_v4i8(ptr %p) {
; CHECK-NEXT: ld.param.b64 %rd1, [sext_setcc_v4i1_to_v4i8_param_0];
; CHECK-NEXT: ld.b32 %r1, [%rd1];
; CHECK-NEXT: prmt.b32 %r2, %r1, 0, 0x7770U;
-; CHECK-NEXT: cvt.u16.u32 %rs1, %r2;
-; CHECK-NEXT: setp.eq.b16 %p1, %rs1, 0;
+; CHECK-NEXT: setp.eq.b32 %p1, %r2, 0;
; CHECK-NEXT: prmt.b32 %r3, %r1, 0, 0x7771U;
-; CHECK-NEXT: cvt.u16.u32 %rs2, %r3;
-; CHECK-NEXT: setp.eq.b16 %p2, %rs2, 0;
+; CHECK-NEXT: setp.eq.b32 %p2, %r3, 0;
; CHECK-NEXT: prmt.b32 %r4, %r1, 0, 0x7772U;
-; CHECK-NEXT: cvt.u16.u32 %rs3, %r4;
-; CHECK-NEXT: setp.eq.b16 %p3, %rs3, 0;
+; CHECK-NEXT: setp.eq.b32 %p3, %r4, 0;
; CHECK-NEXT: prmt.b32 %r5, %r1, 0, 0x7773U;
-; CHECK-NEXT: cvt.u16.u32 %rs4, %r5;
-; CHECK-NEXT: setp.eq.b16 %p4, %rs4, 0;
+; CHECK-NEXT: setp.eq.b32 %p4, %r5, 0;
; CHECK-NEXT: selp.b32 %r6, -1, 0, %p4;
; CHECK-NEXT: selp.b32 %r7, -1, 0, %p3;
; CHECK-NEXT: prmt.b32 %r8, %r7, %r6, 0x3340U;
diff --git a/llvm/test/CodeGen/NVPTX/trunc-setcc.ll b/llvm/test/CodeGen/NVPTX/trunc-setcc.ll
new file mode 100644
index 0000000000000..f22e37e203966
--- /dev/null
+++ b/llvm/test/CodeGen/NVPTX/trunc-setcc.ll
@@ -0,0 +1,269 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc < %s -mcpu=sm_50 | FileCheck %s
+; RUN: %if ptxas %{ llc < %s -mcpu=sm_50 | %ptxas-verify -arch=sm_50 %}
+
+target triple = "nvptx64-nvidia-cuda"
+
+define i1 @trunc_nsw_singed_const(i32 %a) {
+; CHECK-LABEL: trunc_nsw_singed_const(
+; CHECK: {
+; CHECK-NEXT: .reg .pred %p<2>;
+; CHECK-NEXT: .reg .b32 %r<4>;
+; CHECK-EMPTY:
+; CHECK-NEXT: // %bb.0:
+; CHECK-NEXT: ld.param.b32 %r1, [trunc_nsw_singed_const_param_0];
+; CHECK-NEXT: add.s32 %r2, %r1, 1;
+; CHECK-NEXT: setp.gt.s32 %p1, %r2, -1;
+; CHECK-NEXT: selp.b32 %r3, -1, 0, %p1;
+; CHECK-NEXT: st.param.b32 [func_retval0], %r3;
+; CHECK-NEXT: ret;
+ %a2 = add i32 %a, 1
+ %b = trunc nsw i32 %a2 to i8
+ %c = icmp sgt i8 %b, -1
+ ret i1 %c
+}
+
+define i1 @trunc_nuw_singed_const(i32 %a) {
+; CHECK-LABEL: trunc_nuw_singed_const(
+; CHECK: {
+; CHECK-NEXT: .reg .pred %p<2>;
+; CHECK-NEXT: .reg .b16 %rs<4>;
+; CHECK-NEXT: .reg .b32 %r<2>;
+; CHECK-EMPTY:
+; CHECK-NEXT: // %bb.0:
+; CHECK-NEXT: ld.param.b8 %rs1, [trunc_nuw_singed_const_param_0];
+; CHECK-NEXT: add.s16 %rs2, %rs1, 1;
+; CHECK-NEXT: cvt.s16.s8 %rs3, %rs2;
+; CHECK-NEXT: setp.lt.s16 %p1, %rs3, 100;
+; CHECK-NEXT: selp.b32 %r1, -1, 0, %p1;
+; CHECK-NEXT: st.param.b32 [func_retval0], %r1;
+; CHECK-NEXT: ret;
+ %a2 = add i32 %a, 1
+ %b = trunc nuw i32 %a2 to i8
+ %c = icmp slt i8 %b, 100
+ ret i1 %c
+}
+
+define i1 @trunc_nsw_unsinged_const(i32 %a) {
+; CHECK-LABEL: trunc_nsw_unsinged_const(
+; CHECK: {
+; CHECK-NEXT: .reg .pred %p<2>;
+; CHECK-NEXT: .reg .b16 %rs<4>;
+; CHECK-NEXT: .reg .b32 %r<2>;
+; CHECK-EMPTY:
+; CHECK-NEXT: // %bb.0:
+; CHECK-NEXT: ld.param.b8 %rs1, [trunc_nsw_unsinged_const_param_0];
+; CHECK-NEXT: add.s16 %rs2, %rs1, 1;
+; CHECK-NEXT: and.b16 %rs3, %rs2, 255;
+; CHECK-NEXT: setp.lt.u16 %p1, %rs3, 236;
+; CHECK-NEXT: selp.b32 %r1, -1, 0, %p1;
+; CHECK-NEXT: st.param.b32 [func_retval0], %r1;
+; CHECK-NEXT: ret;
+ %a2 = add i32 %a, 1
+ %b = trunc nsw i32 %a2 to i8
+ %c = icmp ult i8 %b, -20
+ ret i1 %c
+}
+
+define i1 @trunc_nuw_unsinged_const(i32 %a) {
+; CHECK-LABEL: trunc_nuw_unsinged_const(
+; CHECK: {
+; CHECK-NEXT: .reg .pred %p<2>;
+; CHECK-NEXT: .reg .b32 %r<4>;
+; CHECK-EMPTY:
+; CHECK-NEXT: // %bb.0:
+; CHECK-NEXT: ld.param.b32 %r1, [trunc_nuw_unsinged_const_param_0];
+; CHECK-NEXT: add.s32 %r2, %r1, 1;
+; CHECK-NEXT: setp.gt.u32 %p1, %r2, 100;
+; CHECK-NEXT: selp.b32 %r3, -1, 0, %p1;
+; CHECK-NEXT: st.param.b32 [func_retval0], %r3;
+; CHECK-NEXT: ret;
+ %a2 = add i32 %a, 1
+ %b = trunc nuw i32 %a2 to i8
+ %c = icmp ugt i8 %b, 100
+ ret i1 %c
+}
+
+
+define i1 @trunc_nsw_eq_const(i32 %a) {
+; CHECK-LABEL: trunc_nsw_eq_const(
+; CHECK: {
+; CHECK-NEXT: .reg .pred %p<2>;
+; CHECK-NEXT: .reg .b32 %r<3>;
+; CHECK-EMPTY:
+; CHECK-NEXT: // %bb.0:
+; CHECK-NEXT: ld.param.b32 %r1, [trunc_nsw_eq_const_param_0];
+; CHECK-NEXT: setp.eq.b32 %p1, %r1, 99;
+; CHECK-NEXT: selp.b32 %r2, -1, 0, %p1;
+; CHECK-NEXT: st.param.b32 [func_retval0], %r2;
+; CHECK-NEXT: ret;
+ %a2 = add i32 %a, 1
+ %b = trunc nsw i32 %a2 to i8
+ %c = icmp eq i8 %b, 100
+ ret i1 %c
+}
+
+define i1 @trunc_nuw_eq_const(i32 %a) {
+; CHECK-LABEL: trunc_nuw_eq_const(
+; CHECK: {
+; CHECK-NEXT: .reg .pred %p<2>;
+; CHECK-NEXT: .reg .b32 %r<3>;
+; CHECK-EMPTY:
+; CHECK-NEXT: // %bb.0:
+; CHECK-NEXT: ld.param.b32 %r1, [trunc_nuw_eq_const_param_0];
+; CHECK-NEXT: setp.eq.b32 %p1, %r1, 99;
+; CHECK-NEXT: selp.b32 %r2, -1, 0, %p1;
+; CHECK-NEXT: st.param.b32 [func_retval0], %r2;
+; CHECK-NEXT: ret;
+ %a2 = add i32 %a, 1
+ %b = trunc nuw i32 %a2 to i8
+ %c = icmp eq i8 %b, 100
+ ret i1 %c
+}
+
+;;;
+
+define i1 @trunc_nsw_singed(i32 %a1, i32 %a2) {
+; CHECK-LABEL: trunc_nsw_singed(
+; CHECK: {
+; CHECK-NEXT: .reg .pred %p<2>;
+; CHECK-NEXT: .reg .b32 %r<6>;
+; CHECK-EMPTY:
+; CHECK-NEXT: // %bb.0:
+; CHECK-NEXT: ld.param.b32 %r1, [trunc_nsw_singed_param_0];
+; CHECK-NEXT: add.s32 %r2, %r1, 1;
+; CHECK-NEXT: ld.param.b32 %r3, [trunc_nsw_singed_param_1];
+; CHECK-NEXT: add.s32 %r4, %r3, 7;
+; CHECK-NEXT: setp.gt.s32 %p1, %r2, %r4;
+; CHECK-NEXT: selp.b32 %r5, -1, 0, %p1;
+; CHECK-NEXT: st.param.b32 [func_retval0], %r5;
+; CHECK-NEXT: ret;
+ %b1 = add i32 %a1, 1
+ %b2 = add i32 %a2, 7
+ %c1 = trunc nsw i32 %b1 to i8
+ %c2 = trunc nsw i32 %b2 to i8
+ %c = icmp sgt i8 %c1, %c2
+ ret i1 %c
+}
+
+define i1 @trunc_nuw_singed(i32 %a1, i32 %a2) {
+; CHECK-LABEL: trunc_nuw_singed(
+; CHECK: {
+; CHECK-NEXT: .reg .pred %p<2>;
+; CHECK-NEXT: .reg .b16 %rs<7>;
+; CHECK-NEXT: .reg .b32 %r<2>;
+; CHECK-EMPTY:
+; CHECK-NEXT: // %bb.0:
+; CHECK-NEXT: ld.param.b8 %rs1, [trunc_nuw_singed_param_0];
+; CHECK-NEXT: ld.param.b8 %rs2, [trunc_nuw_singed_param_1];
+; CHECK-NEXT: add.s16 %rs3, %rs1, 1;
+; CHECK-NEXT: cvt.s16.s8 %rs4, %rs3;
+; CHECK-NEXT: add.s16 %rs5, %rs2, 6;
+; CHECK-NEXT: cvt.s16.s8 %rs6, %rs5;
+; CHECK-NEXT: setp.lt.s16 %p1, %rs4, %rs6;
+; CHECK-NEXT: selp.b32 %r1, -1, 0, %p1;
+; CHECK-NEXT: st.param.b32 [func_retval0], %r1;
+; CHECK-NEXT: ret;
+ %b1 = add i32 %a1, 1
+ %b2 = add i32 %a2, 6
+ %c1 = trunc nuw i32 %b1 to i8
+ %c2 = trunc nuw i32 %b2 to i8
+ %c = icmp slt i8 %c1, %c2
+ ret i1 %c
+}
+
+define i1 @trunc_nsw_unsinged(i32 %a1, i32 %a2) {
+; CHECK-LABEL: trunc_nsw_unsinged(
+; CHECK: {
+; CHECK-NEXT: .reg .pred %p<2>;
+; CHECK-NEXT: .reg .b16 %rs<7>;
+; CHECK-NEXT: .reg .b32 %r<2>;
+; CHECK-EMPTY:
+; CHECK-NEXT: // %bb.0:
+; CHECK-NEXT: ld.param.b8 %rs1, [trunc_nsw_unsinged_param_0];
+; CHECK-NEXT: ld.param.b8 %rs2, [trunc_nsw_unsinged_param_1];
+; CHECK-NEXT: add.s16 %rs3, %rs1, 1;
+; CHECK-NEXT: and.b16 %rs4, %rs3, 255;
+; CHECK-NEXT: add.s16 %rs5, %rs2, 4;
+; CHECK-NEXT: and.b16 %rs6, %rs5, 255;
+; CHECK-NEXT: setp.lt.u16 %p1, %rs4, %rs6;
+; CHECK-NEXT: selp.b32 %r1, -1, 0, %p1;
+; CHECK-NEXT: st.param.b32 [func_retval0], %r1;
+; CHECK-NEXT: ret;
+ %b1 = add i32 %a1, 1
+ %b2 = add i32 %a2, 4
+ %c1 = trunc nsw i32 %b1 to i8
+ %c2 = trunc nsw i32 %b2 to i8
+ %c = icmp ult i8 %c1, %c2
+ ret i1 %c
+}
+
+define i1 @trunc_nuw_unsinged(i32 %a1, i32 %a2) {
+; CHECK-LABEL: trunc_nuw_unsinged(
+; CHECK: {
+; CHECK-NEXT: .reg .pred %p<2>;
+; CHECK-NEXT: .reg .b32 %r<6>;
+; CHECK-EMPTY:
+; CHECK-NEXT: // %bb.0:
+; CHECK-NEXT: ld.param.b32 %r1, [trunc_nuw_unsinged_param_0];
+; CHECK-NEXT: add.s32 %r2, %r1, 1;
+; CHECK-NEXT: ld.param.b32 %r3, [trunc_nuw_unsinged_param_1];
+; CHECK-NEXT: add.s32 %r4, %r3, 5;
+; CHECK-NEXT: setp.gt.u32 %p1, %r2, %r4;
+; CHECK-NEXT: selp.b32 %r5, -1, 0, %p1;
+; CHECK-NEXT: st.param.b32 [func_retval0], %r5;
+; CHECK-NEXT: ret;
+ %b1 = add i32 %a1, 1
+ %b2 = add i32 %a2, 5
+ %c1 = trunc nuw i32 %b1 to i8
+ %c2 = trunc nuw i32 %b2 to i8
+ %c = icmp ugt i8 %c1, %c2
+ ret i1 %c
+}
+
+
+define i1 @trunc_nsw_eq(i32 %a1, i32 %a2) {
+; CHECK-LABEL: trunc_nsw_eq(
+; CHECK: {
+; CHECK-NEXT: .reg .pred %p<2>;
+; CHECK-NEXT: .reg .b32 %r<6>;
+; CHECK-EMPTY:
+; CHECK-NEXT: // %bb.0:
+; CHECK-NEXT: ld.param.b32 %r1, [trunc_nsw_eq_param_0];
+; CHECK-NEXT: add.s32 %r2, %r1, 1;
+; CHECK-NEXT: ld.param.b32 %r3, [trunc_nsw_eq_param_1];
+; CHECK-NEXT: add.s32 %r4, %r3, 3;
+; CHECK-NEXT: setp.eq.b32 %p1, %r2, %r4;
+; CHECK-NEXT: selp.b32 %r5, -1, 0, %p1;
+; CHECK-NEXT: st.param.b32 [func_retval0], %r5;
+; CHECK-NEXT: ret;
+ %b1 = add i32 %a1, 1
+ %b2 = add i32 %a2, 3
+ %c1 = trunc nsw i32 %b1 to i8
+ %c2 = trunc nsw i32 %b2 to i8
+ %c = icmp eq i8 %c1, %c2
+ ret i1 %c
+}
+
+define i1 @trunc_nuw_eq(i32 %a1, i32 %a2) {
+; CHECK-LABEL: trunc_nuw_eq(
+; CHECK: {
+; CHECK-NEXT: .reg .pred %p<2>;
+; CHECK-NEXT: .reg .b32 %r<6>;
+; CHECK-EMPTY:
+; CHECK-NEXT: // %bb.0:
+; CHECK-NEXT: ld.param.b32 %r1, [trunc_nuw_eq_param_0];
+; CHECK-NEXT: add.s32 %r2, %r1, 2;
+; CHECK-NEXT: ld.param.b32 %r3, [trunc_nuw_eq_param_1];
+; CHECK-NEXT: add.s32 %r4, %r3, 1;
+; CHECK-NEXT: setp.eq.b32 %p1, %r2, %r4;
+; CHECK-NEXT: selp.b32 %r5, -1, 0, %p1;
+; CHECK-NEXT: st.param.b32 [func_retval0], %r5;
+; CHECK-NEXT: ret;
+ %b1 = add i32 %a1, 2
+ %b2 = add i32 %a2, 1
+ %c1 = trunc nuw i32 %b1 to i8
+ %c2 = trunc nuw i32 %b2 to i8
+ %c = icmp eq i8 %c1, %c2
+ ret i1 %c
+}
>From 7b8dea265e72c3037b6b1e54d5ab51b7e14f328b Mon Sep 17 00:00:00 2001
From: Jonas Devlieghere <jonas at devlieghere.com>
Date: Tue, 5 Aug 2025 19:21:36 -0700
Subject: [PATCH 033/171] [lldb] Workaround omission of PyBUF_READ in the
stable API (#152214)
PyMemoryView_FromMemory is part of the stable ABI, but the flag constants
such as PyBUF_READ are not. This was fixed in Python 3.11 [1], but the
workaround is still required when building against older versions.
[1] https://github.com/python/cpython/issues/98680
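For context, this is the call the constant feeds (a sketch;
MakeReadOnlyView, data, and size are placeholders, while
PyMemoryView_FromMemory and PyBUF_READ are real CPython names):
```cpp
#include <Python.h>

// PyMemoryView_FromMemory(char *, Py_ssize_t, int) is in the stable ABI;
// the PyBUF_READ flag (0x100) historically was not, hence the #define.
PyObject *MakeReadOnlyView(const char *data, Py_ssize_t size) {
  return PyMemoryView_FromMemory(const_cast<char *>(data), size,
                                 PyBUF_READ);
}
```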
---
lldb/source/Plugins/ScriptInterpreter/Python/lldb-python.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/lldb/source/Plugins/ScriptInterpreter/Python/lldb-python.h b/lldb/source/Plugins/ScriptInterpreter/Python/lldb-python.h
index 9b4a1dd2cecb2..0c281793613a5 100644
--- a/lldb/source/Plugins/ScriptInterpreter/Python/lldb-python.h
+++ b/lldb/source/Plugins/ScriptInterpreter/Python/lldb-python.h
@@ -60,6 +60,12 @@ static llvm::Expected<bool> *g_fcxx_modules_workaround [[maybe_unused]];
// with a version of Python we don't support.
static_assert(PY_VERSION_HEX >= LLDB_MINIMUM_PYTHON_VERSION,
"LLDB requires at least Python 3.8");
+
+// PyMemoryView_FromMemory is part of the stable ABI but the flag constants are not.
+// See https://github.com/python/cpython/issues/98680
+#ifndef PyBUF_READ
+#define PyBUF_READ 0x100
+#endif
#endif
#endif // LLDB_PLUGINS_SCRIPTINTERPRETER_PYTHON_LLDB_PYTHON_H
>From 2d4e5c4e1f0ee00cc668b5f012d718e750a99cc7 Mon Sep 17 00:00:00 2001
From: nerix <nerixdev at outlook.de>
Date: Wed, 6 Aug 2025 04:57:26 +0200
Subject: [PATCH 034/171] [LLDB][NativePDB] Use undecorated name for types if
UniqueName isn't mangled (#152114)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Languages other than C/C++ don't necessarily emit mangled names in the
`UniqueName` field of type records. Rust specifically emits a unique ID
that doesn't contain the name.
For example, `(i32, i32)` is emitted as
```llvm
!266 = !DICompositeType(
tag: DW_TAG_structure_type, name: "tuple$<i32,i32>", file: !9, size: 64, align: 32,
elements: !267, templateParams: !17, identifier: "19122721b0632fe96c0dd37477674472"
)
```
which results in
```
0x1091 | LF_STRUCTURE [size = 72, hash = 0x1AC67] `tuple$<i32,i32>`
unique name: `19122721b0632fe96c0dd37477674472`
vtable: <no type>, base list: <no type>, field list: 0x1090
options: has unique name, sizeof 8
```
In C++ with Clang and MSVC, a structure similar to this would result in
```
0x136F | LF_STRUCTURE [size = 44, hash = 0x30BE2] `MyTuple`
unique name: `.?AUMyTuple@@`
vtable: <no type>, base list: <no type>, field list: 0x136E
options: has unique name, sizeof 8
```
With this PR, if a `UniqueName` is encountered that couldn't be parsed,
we fall back to using the undecorated name (i.e., behave the same as if
the unique name were empty/unavailable).
I'm not sure how to test this. Maybe compiling the LLVM IR that rustc
emits?
Fixes #152051.
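For illustration, the shape of the fallback (a simplified sketch, not
verbatim code; the helper name `getTagName` is hypothetical):
```cpp
// Sketch: try to demangle UniqueName; on failure fall back to the plain
// (undecorated) Name, dropping any leading scope.
static std::string getTagName(const llvm::codeview::TagRecord &record) {
  llvm::ms_demangle::Demangler D;
  std::string_view UN(record.UniqueName.begin(), record.UniqueName.size());
  llvm::ms_demangle::TagTypeNode *TTN = D.parseTagUniqueName(UN);
  if (D.Error || !TTN) // e.g. Rust's "19122721b063..." is not MSVC-mangled
    return std::string(MSVCUndecoratedNameParser::DropScope(record.Name));
  // ... otherwise derive the name from TTN->QualifiedName as before.
  return std::string(record.Name);
}
```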
---
.../SymbolFile/NativePDB/PdbAstBuilder.cpp | 2 +-
.../NativePDB/SymbolFileNativePDB.cpp | 10 +--
.../SymbolFile/NativePDB/rust-unique-name.ll | 85 +++++++++++++++++++
lldb/test/Shell/lit.cfg.py | 2 +-
4 files changed, 90 insertions(+), 9 deletions(-)
create mode 100644 lldb/test/Shell/SymbolFile/NativePDB/rust-unique-name.ll
diff --git a/lldb/source/Plugins/SymbolFile/NativePDB/PdbAstBuilder.cpp b/lldb/source/Plugins/SymbolFile/NativePDB/PdbAstBuilder.cpp
index 92a102bbc92a8..f01fba3c48ce9 100644
--- a/lldb/source/Plugins/SymbolFile/NativePDB/PdbAstBuilder.cpp
+++ b/lldb/source/Plugins/SymbolFile/NativePDB/PdbAstBuilder.cpp
@@ -178,7 +178,7 @@ PdbAstBuilder::CreateDeclInfoForType(const TagRecord &record, TypeIndex ti) {
std::string_view sv(record.UniqueName.begin(), record.UniqueName.size());
llvm::ms_demangle::TagTypeNode *ttn = demangler.parseTagUniqueName(sv);
if (demangler.Error)
- return {m_clang.GetTranslationUnitDecl(), std::string(record.UniqueName)};
+ return CreateDeclInfoForUndecoratedName(record.Name);
llvm::ms_demangle::IdentifierNode *idn =
ttn->QualifiedName->getUnqualifiedIdentifier();
diff --git a/lldb/source/Plugins/SymbolFile/NativePDB/SymbolFileNativePDB.cpp b/lldb/source/Plugins/SymbolFile/NativePDB/SymbolFileNativePDB.cpp
index b85790e09a85e..dcea33dd9f854 100644
--- a/lldb/source/Plugins/SymbolFile/NativePDB/SymbolFileNativePDB.cpp
+++ b/lldb/source/Plugins/SymbolFile/NativePDB/SymbolFileNativePDB.cpp
@@ -622,18 +622,14 @@ lldb::TypeSP SymbolFileNativePDB::CreateSimpleType(TypeIndex ti,
}
static std::string GetUnqualifiedTypeName(const TagRecord &record) {
- if (!record.hasUniqueName()) {
- MSVCUndecoratedNameParser parser(record.Name);
- llvm::ArrayRef<MSVCUndecoratedNameSpecifier> specs = parser.GetSpecifiers();
-
- return std::string(specs.back().GetBaseName());
- }
+ if (!record.hasUniqueName())
+ return std::string(MSVCUndecoratedNameParser::DropScope(record.Name));
llvm::ms_demangle::Demangler demangler;
std::string_view sv(record.UniqueName.begin(), record.UniqueName.size());
llvm::ms_demangle::TagTypeNode *ttn = demangler.parseTagUniqueName(sv);
if (demangler.Error)
- return std::string(record.Name);
+ return std::string(MSVCUndecoratedNameParser::DropScope(record.Name));
llvm::ms_demangle::IdentifierNode *idn =
ttn->QualifiedName->getUnqualifiedIdentifier();
diff --git a/lldb/test/Shell/SymbolFile/NativePDB/rust-unique-name.ll b/lldb/test/Shell/SymbolFile/NativePDB/rust-unique-name.ll
new file mode 100644
index 0000000000000..55008ec6983c4
--- /dev/null
+++ b/lldb/test/Shell/SymbolFile/NativePDB/rust-unique-name.ll
@@ -0,0 +1,85 @@
+; REQUIRES: system-windows
+; RUN: %build --compiler=clang-cl --nodefaultlib -o %t.exe -- %s
+; RUN: lldb-test symbols %t.exe | FileCheck %s
+
+; Output from
+; rustc lib.rs -C debuginfo=2 --emit=llvm-ir --crate-type=rlib
+; (using rlib to avoid including panic handlers in IR)
+;
+; lib.rs:
+;
+; #![no_std]
+; mod my_module {
+; #[repr(C)]
+; pub struct MyStruct {
+; pub a: i32,
+; pub b: i32,
+; }
+; }
+; #[unsafe(no_mangle)]
+; extern "C" fn mainCRTStartup() -> my_module::MyStruct {
+; my_module::MyStruct { a: 3, b: 4 }
+; }
+; #[unsafe(no_mangle)]
+; extern "C" fn main() {}
+
+; =======================================================================
+; ModuleID = 'lib.b43fc69277defcf4-cgu.0'
+source_filename = "lib.b43fc69277defcf4-cgu.0"
+target datalayout = "e-m:w-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128"
+target triple = "x86_64-pc-windows-msvc"
+
+; Function Attrs: nounwind uwtable
+define i64 @mainCRTStartup() unnamed_addr #0 !dbg !6 {
+start:
+ %_0 = alloca [8 x i8], align 4
+ store i32 3, ptr %_0, align 4, !dbg !20
+ %0 = getelementptr inbounds i8, ptr %_0, i64 4, !dbg !20
+ store i32 4, ptr %0, align 4, !dbg !20
+ %1 = load i64, ptr %_0, align 4, !dbg !21
+ ret i64 %1, !dbg !21
+}
+
+; Function Attrs: nounwind uwtable
+define void @main() unnamed_addr #0 !dbg !22 {
+start:
+ ret void, !dbg !25
+}
+
+attributes #0 = { nounwind uwtable "target-cpu"="x86-64" "target-features"="+cx16,+sse3,+sahf" }
+
+!llvm.module.flags = !{!0, !1, !2}
+!llvm.ident = !{!3}
+!llvm.dbg.cu = !{!4}
+
+!0 = !{i32 8, !"PIC Level", i32 2}
+!1 = !{i32 2, !"CodeView", i32 1}
+!2 = !{i32 2, !"Debug Info Version", i32 3}
+!3 = !{!"rustc version 1.88.0 (6b00bc388 2025-06-23)"}
+!4 = distinct !DICompileUnit(language: DW_LANG_Rust, file: !5, producer: "clang LLVM (rustc version 1.88.0 (6b00bc388 2025-06-23))", isOptimized: false, runtimeVersion: 0, emissionKind: FullDebug, splitDebugInlining: false)
+!5 = !DIFile(filename: "src/lib.rs\\@\\lib.b43fc69277defcf4-cgu.0", directory: "F:\\Dev\\testing")
+!6 = distinct !DISubprogram(name: "mainCRTStartup", scope: !8, file: !7, line: 12, type: !9, scopeLine: 12, flags: DIFlagPrototyped, spFlags: DISPFlagDefinition, unit: !4, templateParams: !19)
+!7 = !DIFile(filename: "src/lib.rs", directory: "F:\\Dev\\testing", checksumkind: CSK_SHA256, checksum: "586087c038922a8fa0183c4b20c075445761d545e02d06af80cd5a62dcadb3ec")
+!8 = !DINamespace(name: "lib", scope: null)
+!9 = !DISubroutineType(types: !10)
+!10 = !{!11}
+!11 = !DICompositeType(tag: DW_TAG_structure_type, name: "MyStruct", scope: !13, file: !12, size: 64, align: 32, flags: DIFlagPublic, elements: !14, templateParams: !19, identifier: "99a3f33b03974e4eaf7224f807a544bf")
+!12 = !DIFile(filename: "<unknown>", directory: "")
+!13 = !DINamespace(name: "my_module", scope: !8)
+!14 = !{!15, !18}
+!15 = !DIDerivedType(tag: DW_TAG_member, name: "a", scope: !11, file: !12, baseType: !16, size: 32, align: 32, flags: DIFlagPublic)
+!16 = !DIDerivedType(tag: DW_TAG_typedef, name: "i32", file: !12, baseType: !17)
+!17 = !DIBasicType(name: "__int32", size: 32, encoding: DW_ATE_signed)
+!18 = !DIDerivedType(tag: DW_TAG_member, name: "b", scope: !11, file: !12, baseType: !16, size: 32, align: 32, offset: 32, flags: DIFlagPublic)
+!19 = !{}
+!20 = !DILocation(line: 13, scope: !6)
+!21 = !DILocation(line: 14, scope: !6)
+!22 = distinct !DISubprogram(name: "main", scope: !8, file: !7, line: 17, type: !23, scopeLine: 17, flags: DIFlagPrototyped, spFlags: DISPFlagDefinition, unit: !4, templateParams: !19)
+!23 = !DISubroutineType(types: !24)
+!24 = !{null}
+!25 = !DILocation(line: 17, scope: !22)
+
+; CHECK: Type{{.*}} , name = "MyStruct", size = 8, compiler_type = {{.*}} struct MyStruct {
+; CHECK-NEXT: int a;
+; CHECK-NEXT: int b;
+; CHECK-NEXT: }
diff --git a/lldb/test/Shell/lit.cfg.py b/lldb/test/Shell/lit.cfg.py
index 8c9448b23c56b..cfa5506e48640 100644
--- a/lldb/test/Shell/lit.cfg.py
+++ b/lldb/test/Shell/lit.cfg.py
@@ -25,7 +25,7 @@
# suffixes: A list of file extensions to treat as test files. This is overriden
# by individual lit.local.cfg files in the test subdirectories.
-config.suffixes = [".test", ".cpp", ".s", ".m"]
+config.suffixes = [".test", ".cpp", ".s", ".m", ".ll"]
# excludes: A list of directories to exclude from the testsuite. The 'Inputs'
# subdirectories contain auxiliary inputs for various tests in their parent
>From 6ba6efea8438006be7d46ec064eb5da6dfda3d1b Mon Sep 17 00:00:00 2001
From: Craig Topper <craig.topper at sifive.com>
Date: Tue, 5 Aug 2025 20:11:59 -0700
Subject: [PATCH 035/171] [RISCV] Simplify one of the RV32 PACK isel patterns.
(#152045)
This pattern previously checked a specific variant of 4 bytes being
packed that is generated by unaligned load expansion.
Our individual PACK patterns don't handle that particular case because a
DAG combine turns (or (or A, (shl B, 8)), (shl (or C, (shl D, 8)), 16))
into (or (or A, (shl B, 8)), (or (shl C, 16), (shl D, 24))). After this,
the outer OR doesn't have a shl operand, so we needed a pattern that
looks through 2 layers of OR.
To match this pattern we don't need to look at the (or A, (shl B, 8))
part since that part wasn't affected by the DAG combine and can be
matched to PACKH by itself. It's enough to make sure that part of the
pattern has zeros in the upper 16 bits.
This allows tablegen to automatically generate more permutations of this pattern.
The associative variant expansion is limited to 3 children.
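For illustration, the post-combine form the new pattern has to match
(mirroring the tests below; `%a`-`%d` stand for zero-extended bytes):
```llvm
%e = shl i32 %b, 8
%f = shl i32 %c, 16
%g = shl i32 %d, 24
%lo = or i32 %a, %e    ; bytes 0/1: matched to PACKH on its own
%hi = or i32 %f, %g    ; bytes 2/3: the outer or has no shl operand
%res = or i32 %lo, %hi ; matched to PACK of %lo with a PACKH of %c/%d
```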
---
llvm/lib/Target/RISCV/RISCVInstrInfoZb.td | 14 ++--
llvm/test/CodeGen/RISCV/rv32zbkb.ll | 80 +++++++++++++++++++++++
2 files changed, 88 insertions(+), 6 deletions(-)
diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td b/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td
index d2a651444169c..04ffb05c513f4 100644
--- a/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td
+++ b/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td
@@ -641,13 +641,15 @@ def : Pat<(binop_allhusers<or> (shl GPR:$rs2, (XLenVT 8)),
let Predicates = [HasStdExtZbkb, IsRV32] in {
def : Pat<(i32 (or (zexti16 (i32 GPR:$rs1)), (shl GPR:$rs2, (i32 16)))),
(PACK GPR:$rs1, GPR:$rs2)>;
-def : Pat<(or (or
- (shl (zexti8 (XLenVT GPR:$op1rs2)), (XLenVT 24)),
+
+// Match a pattern of 2 bytes being inserted into bits [31:16], with bits
+// [15:0] coming from a zero-extended value. We can use pack with packh for
+// bits [31:16]. If bits [15:0] can also be a packh, it can be matched
+// separately.
+def : Pat<(or (or (shl (zexti8 (XLenVT GPR:$op1rs2)), (XLenVT 24)),
(shl (zexti8 (XLenVT GPR:$op1rs1)), (XLenVT 16))),
- (or
- (shl (zexti8 (XLenVT GPR:$op0rs2)), (XLenVT 8)),
- (zexti8 (XLenVT GPR:$op0rs1)))),
- (PACK (XLenVT (PACKH GPR:$op0rs1, GPR:$op0rs2)),
+ (zexti16 (XLenVT GPR:$rs1))),
+ (PACK (XLenVT GPR:$rs1),
(XLenVT (PACKH GPR:$op1rs1, GPR:$op1rs2)))>;
}
diff --git a/llvm/test/CodeGen/RISCV/rv32zbkb.ll b/llvm/test/CodeGen/RISCV/rv32zbkb.ll
index 4aa6dd4dba6c2..7ebbd7802b70f 100644
--- a/llvm/test/CodeGen/RISCV/rv32zbkb.ll
+++ b/llvm/test/CodeGen/RISCV/rv32zbkb.ll
@@ -319,3 +319,83 @@ define i64 @zext_i16_to_i64(i16 %a) nounwind {
%1 = zext i16 %a to i64
ret i64 %1
}
+
+define i32 @pack_lo_packh_hi_packh(i8 zeroext %0, i8 zeroext %1, i8 zeroext %2, i8 zeroext %3) nounwind {
+; RV32I-LABEL: pack_lo_packh_hi_packh:
+; RV32I: # %bb.0:
+; RV32I-NEXT: slli a1, a1, 8
+; RV32I-NEXT: slli a2, a2, 16
+; RV32I-NEXT: slli a3, a3, 24
+; RV32I-NEXT: or a0, a0, a1
+; RV32I-NEXT: or a2, a2, a3
+; RV32I-NEXT: or a0, a0, a2
+; RV32I-NEXT: ret
+;
+; RV32ZBKB-LABEL: pack_lo_packh_hi_packh:
+; RV32ZBKB: # %bb.0:
+; RV32ZBKB-NEXT: packh a0, a0, a1
+; RV32ZBKB-NEXT: packh a1, a2, a3
+; RV32ZBKB-NEXT: pack a0, a0, a1
+; RV32ZBKB-NEXT: ret
+ %a = zext i8 %0 to i32
+ %b = zext i8 %1 to i32
+ %c = zext i8 %2 to i32
+ %d = zext i8 %3 to i32
+ %e = shl i32 %b, 8
+ %f = shl i32 %c, 16
+ %g = shl i32 %d, 24
+ %h = or i32 %a, %e
+ %i = or i32 %h, %f
+ %j = or i32 %i, %g
+ ret i32 %j
+}
+
+define i32 @pack_lo_zext_hi_packh(i16 zeroext %0, i8 zeroext %1, i8 zeroext %2) nounwind {
+; RV32I-LABEL: pack_lo_zext_hi_packh:
+; RV32I: # %bb.0:
+; RV32I-NEXT: slli a1, a2, 16
+; RV32I-NEXT: slli a2, a2, 24
+; RV32I-NEXT: or a1, a2, a1
+; RV32I-NEXT: or a0, a1, a0
+; RV32I-NEXT: ret
+;
+; RV32ZBKB-LABEL: pack_lo_zext_hi_packh:
+; RV32ZBKB: # %bb.0:
+; RV32ZBKB-NEXT: packh a1, a2, a2
+; RV32ZBKB-NEXT: pack a0, a0, a1
+; RV32ZBKB-NEXT: ret
+ %a = zext i16 %0 to i32
+ %b = zext i8 %1 to i32
+ %c = zext i8 %2 to i32
+ %d = shl i32 %c, 8
+ %e = or i32 %c, %d
+ %f = shl i32 %e, 16
+ %g = or i32 %f, %a
+ ret i32 %g
+}
+
+; Negative test, %a isn't extended so we can't use pack for the outer or, but
+; we can use packh for the high half.
+define i32 @pack_lo_noext_hi_packh(i32 %a, i8 zeroext %1, i8 zeroext %2) nounwind {
+; RV32I-LABEL: pack_lo_noext_hi_packh:
+; RV32I: # %bb.0:
+; RV32I-NEXT: slli a1, a2, 16
+; RV32I-NEXT: slli a2, a2, 24
+; RV32I-NEXT: or a1, a2, a1
+; RV32I-NEXT: or a0, a1, a0
+; RV32I-NEXT: ret
+;
+; RV32ZBKB-LABEL: pack_lo_noext_hi_packh:
+; RV32ZBKB: # %bb.0:
+; RV32ZBKB-NEXT: packh a1, a2, a2
+; RV32ZBKB-NEXT: slli a1, a1, 16
+; RV32ZBKB-NEXT: or a0, a1, a0
+; RV32ZBKB-NEXT: ret
+ %b = zext i8 %1 to i32
+ %c = zext i8 %2 to i32
+ %d = shl i32 %c, 8
+ %e = or i32 %c, %d
+ %f = shl i32 %e, 16
+ %g = or i32 %f, %a
+ ret i32 %g
+}
>From e797c716091b41af7901aa88c4739bed97661436 Mon Sep 17 00:00:00 2001
From: Schrodinger ZHU Yifan <yifanzhu at rochester.edu>
Date: Tue, 5 Aug 2025 23:12:19 -0400
Subject: [PATCH 036/171] Reland "[libc] make integration test malloc work
properly when threaded" (#152236)
Reverts llvm/llvm-project#152096
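The core of the reland is replacing the racy bump pointer with an atomic
fetch-add. A standalone sketch of the same idea in plain C++ (assumed
equivalent for illustration, not the actual libc code):
```cpp
#include <atomic>
#include <cstddef>

alignas(16) static unsigned char arena[1 << 16];
static std::atomic<std::size_t> cursor{0};

// Lock-free bump allocator: each thread reserves its block with fetch_add,
// so concurrent calls can never hand out overlapping memory.
void *bump_alloc(std::size_t size) {
  size = (size + 15) & ~std::size_t(15); // round up to the arena alignment
  std::size_t old = cursor.fetch_add(size, std::memory_order_relaxed);
  return (old + size > sizeof(arena)) ? nullptr : arena + old;
}
```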
---
libc/test/IntegrationTest/CMakeLists.txt | 1 +
libc/test/IntegrationTest/test.cpp | 21 +++++++++++++--------
2 files changed, 14 insertions(+), 8 deletions(-)
diff --git a/libc/test/IntegrationTest/CMakeLists.txt b/libc/test/IntegrationTest/CMakeLists.txt
index 3afe354eca986..235e9fe2f55ee 100644
--- a/libc/test/IntegrationTest/CMakeLists.txt
+++ b/libc/test/IntegrationTest/CMakeLists.txt
@@ -13,5 +13,6 @@ add_object_library(
DEPENDS
libc.hdr.stdint_proxy
libc.src.__support.OSUtil.osutil
+ libc.src.__support.CPP.atomic
${arch_specific_deps}
)
diff --git a/libc/test/IntegrationTest/test.cpp b/libc/test/IntegrationTest/test.cpp
index 8baf74637b309..0e4825202a9e2 100644
--- a/libc/test/IntegrationTest/test.cpp
+++ b/libc/test/IntegrationTest/test.cpp
@@ -5,8 +5,8 @@
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
-
#include "hdr/stdint_proxy.h"
+#include "src/__support/CPP/atomic.h"
#include "src/__support/common.h"
#include "src/__support/macros/config.h"
#include <stddef.h>
@@ -63,16 +63,21 @@ int atexit(void (*func)(void)) { return LIBC_NAMESPACE::atexit(func); }
// which just hands out continuous blocks from a statically allocated chunk of
// memory.
-static constexpr uint64_t MEMORY_SIZE = 16384;
-static uint8_t memory[MEMORY_SIZE];
-static uint8_t *ptr = memory;
+static constexpr uint64_t ALIGNMENT = alignof(long double);
+static constexpr uint64_t MEMORY_SIZE = 65336;
+alignas(ALIGNMENT) static uint8_t memory[MEMORY_SIZE];
+static size_t ptr = 0;
extern "C" {
-void *malloc(size_t s) {
- void *mem = ptr;
- ptr += s;
- return static_cast<uint64_t>(ptr - memory) >= MEMORY_SIZE ? nullptr : mem;
+void *malloc(size_t size) {
+ LIBC_NAMESPACE::cpp::AtomicRef<size_t> ref(ptr);
+ size = (size + ALIGNMENT - 1) & ~(ALIGNMENT - 1);
+ size_t old_ptr =
+ ref.fetch_add(size, LIBC_NAMESPACE::cpp::MemoryOrder::RELAXED);
+ if (static_cast<size_t>(old_ptr + size) > MEMORY_SIZE)
+ return nullptr;
+ return &memory[old_ptr];
}
void free(void *) {}
>From 3339a0045d66c0e56081b699b8535aecfb95ef85 Mon Sep 17 00:00:00 2001
From: Vincent <llvm at viceroygroup.ca>
Date: Tue, 5 Aug 2025 21:21:36 -0700
Subject: [PATCH 037/171] [clang] Respect [[gnu::error]] on functions passed to
[[gnu::cleanup]] (#152082)
Forward SourceLocation to `EmitCall` so that clang triggers an error
when a function inside `[[gnu::cleanup(func)]]` is annotated with
`[[gnu::error("some message")]]`.
resolves #146520
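A minimal reproducer for the fixed behavior (sketch; the test added below
is the authoritative version, and assumes optimizations are enabled as in
that test):
```c
[[gnu::error("do not call")]]
void bad_cleanup(char *);

void f(void) {
  // Previously compiled silently; the forwarded SourceLocation now lets
  // clang point the [[gnu::error]] diagnostic at this declaration.
  [[gnu::cleanup(bad_cleanup)]] char x;
}
```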
---
clang/docs/ReleaseNotes.rst | 2 ++
clang/lib/CodeGen/CGDecl.cpp | 15 ++++++++++-----
.../backend-attribute-error-warning-optimize.c | 10 ++++++++++
3 files changed, 22 insertions(+), 5 deletions(-)
diff --git a/clang/docs/ReleaseNotes.rst b/clang/docs/ReleaseNotes.rst
index 17e3df467593d..2a95d1e2736a3 100644
--- a/clang/docs/ReleaseNotes.rst
+++ b/clang/docs/ReleaseNotes.rst
@@ -172,6 +172,8 @@ Bug Fixes to Attribute Support
- ``[[nodiscard]]`` is now respected on Objective-C and Objective-C++ methods.
(#GH141504)
+- Using ``[[gnu::cleanup(some_func)]]`` where some_func is annotated with
+ ``[[gnu::error("some error")]]`` now correctly triggers an error. (#GH146520)
Bug Fixes to C++ Support
^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/clang/lib/CodeGen/CGDecl.cpp b/clang/lib/CodeGen/CGDecl.cpp
index 04f13c7d7a6a3..ff2dadad0d4ef 100644
--- a/clang/lib/CodeGen/CGDecl.cpp
+++ b/clang/lib/CodeGen/CGDecl.cpp
@@ -599,10 +599,11 @@ namespace {
llvm::Constant *CleanupFn;
const CGFunctionInfo &FnInfo;
const VarDecl &Var;
+ const CleanupAttr *Attribute;
CallCleanupFunction(llvm::Constant *CleanupFn, const CGFunctionInfo *Info,
- const VarDecl *Var)
- : CleanupFn(CleanupFn), FnInfo(*Info), Var(*Var) {}
+ const VarDecl *Var, const CleanupAttr *Attr)
+ : CleanupFn(CleanupFn), FnInfo(*Info), Var(*Var), Attribute(Attr) {}
void Emit(CodeGenFunction &CGF, Flags flags) override {
DeclRefExpr DRE(CGF.getContext(), const_cast<VarDecl *>(&Var), false,
@@ -624,8 +625,11 @@ namespace {
CallArgList Args;
Args.add(RValue::get(Arg),
CGF.getContext().getPointerType(Var.getType()));
- auto Callee = CGCallee::forDirect(CleanupFn);
- CGF.EmitCall(FnInfo, Callee, ReturnValueSlot(), Args);
+ GlobalDecl GD = GlobalDecl(Attribute->getFunctionDecl());
+ auto Callee = CGCallee::forDirect(CleanupFn, CGCalleeInfo(GD));
+ CGF.EmitCall(FnInfo, Callee, ReturnValueSlot(), Args,
+ /*callOrInvoke*/ nullptr, /*IsMustTail*/ false,
+ Attribute->getLoc());
}
};
} // end anonymous namespace
@@ -2231,7 +2235,8 @@ void CodeGenFunction::EmitAutoVarCleanups(const AutoVarEmission &emission) {
assert(F && "Could not find function!");
const CGFunctionInfo &Info = CGM.getTypes().arrangeFunctionDeclaration(FD);
- EHStack.pushCleanup<CallCleanupFunction>(NormalAndEHCleanup, F, &Info, &D);
+ EHStack.pushCleanup<CallCleanupFunction>(NormalAndEHCleanup, F, &Info, &D,
+ CA);
}
// If this is a block variable, call _Block_object_destroy
diff --git a/clang/test/Frontend/backend-attribute-error-warning-optimize.c b/clang/test/Frontend/backend-attribute-error-warning-optimize.c
index d3951e3b6b1f5..d5a8d57e36c96 100644
--- a/clang/test/Frontend/backend-attribute-error-warning-optimize.c
+++ b/clang/test/Frontend/backend-attribute-error-warning-optimize.c
@@ -20,3 +20,13 @@ void indirect(void) {
quux = foo;
quux();
}
+
+// https://github.com/llvm/llvm-project/issues/146520
+
+[[gnu::error("error please")]]
+void cleaner_function(char*);
+
+void asdf(void){
+ [[gnu::cleanup(cleaner_function)]] // expected-error {{call to 'cleaner_function' declared with 'error' attribute: error please}}
+ char x;
+}
>From 2959051e655a5c77401285a68bab2027aa958b88 Mon Sep 17 00:00:00 2001
From: Dave Lee <davelee.com at gmail.com>
Date: Tue, 5 Aug 2025 21:23:46 -0700
Subject: [PATCH 038/171] [lldb] Preserve original symbol of Mangled function
names (#152201)
Fixes a bug that surfaces in frame recognizers.
Details about the bug:
A new frame recognizer is configured to match a specific symbol
(`swift_willThrow`). This is an `extern "C"` symbol defined in a C++
source file. When Swift is built with debug info, the function
`ParseFunctionFromDWARF` will use the debug info to construct a function
name that looks like a C++ declaration (`::swift_willThrow(void *,
SwiftError**)`). The `Mangled` instance will have this string as its
`m_demangled` field, and have _no_ string for its `m_mangled` field.
The result is the frame recognizer would not match the symbol to the
name (`swift_willThrow` != `::swift_willThrow(void *, SwiftError**)`).
By changing `ParseFunctionFromDWARF` to assign both a demangled name and
a mangled name, frame recognizers can successfully match symbols in this
configuration.
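The resulting assignment, roughly (an excerpt-style sketch using the names
from the diff below):
```cpp
Mangled func_name;
// Keep both forms: the DWARF-derived display name and the raw symbol,
// so a recognizer matching on "swift_willThrow" still succeeds.
func_name.SetDemangledName(ConstructDemangledNameFromDWARF(die));
func_name.SetMangledName(ConstString(name));
```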
---
.../source/Plugins/SymbolFile/DWARF/DWARFASTParserClang.cpp | 4 +++-
lldb/test/API/lang/cpp/extern_c/main.cpp | 6 ++++--
.../Shell/SymbolFile/DWARF/x86/debug-types-address-ranges.s | 2 +-
3 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/lldb/source/Plugins/SymbolFile/DWARF/DWARFASTParserClang.cpp b/lldb/source/Plugins/SymbolFile/DWARF/DWARFASTParserClang.cpp
index 4c6c3054ba179..a429ea848b7f7 100644
--- a/lldb/source/Plugins/SymbolFile/DWARF/DWARFASTParserClang.cpp
+++ b/lldb/source/Plugins/SymbolFile/DWARF/DWARFASTParserClang.cpp
@@ -2509,7 +2509,9 @@ Function *DWARFASTParserClang::ParseFunctionFromDWARF(
// If the mangled name is not present in the DWARF, generate the
// demangled name using the decl context. We skip if the function is
// "main" as its name is never mangled.
- func_name.SetValue(ConstructDemangledNameFromDWARF(die));
+ func_name.SetDemangledName(ConstructDemangledNameFromDWARF(die));
+ // Ensure symbol is preserved (as the mangled name).
+ func_name.SetMangledName(ConstString(name));
} else
func_name.SetValue(ConstString(name));
diff --git a/lldb/test/API/lang/cpp/extern_c/main.cpp b/lldb/test/API/lang/cpp/extern_c/main.cpp
index 727ea9255ca7f..7017c745be178 100644
--- a/lldb/test/API/lang/cpp/extern_c/main.cpp
+++ b/lldb/test/API/lang/cpp/extern_c/main.cpp
@@ -8,8 +8,10 @@ extern "C"
int foo()
{
- puts("foo");
- return 2;
+ puts("foo"); //% self.expect("image lookup -va $pc",
+ //% substrs=[' name = "::foo()"',
+ //% ' mangled = "foo"'])
+ return 2;
}
int main (int argc, char const *argv[], char const *envp[])
diff --git a/lldb/test/Shell/SymbolFile/DWARF/x86/debug-types-address-ranges.s b/lldb/test/Shell/SymbolFile/DWARF/x86/debug-types-address-ranges.s
index 8b3b0e9d56956..792555d99d7b7 100644
--- a/lldb/test/Shell/SymbolFile/DWARF/x86/debug-types-address-ranges.s
+++ b/lldb/test/Shell/SymbolFile/DWARF/x86/debug-types-address-ranges.s
@@ -11,7 +11,7 @@
# RUN: %lldb %t -o "image lookup -a 0x48000 -v" -o exit | FileCheck %s
# CHECK: CompileUnit: id = {0x00000001}, file = "/tmp/a.cc", language = "c++"
-# CHECK: Function: id = {0x0000006a}, name = "::_start({{.*}})", range = [0x0000000000048000-0x000000000004800c)
+# CHECK: Function: id = {0x0000006a}, name = "::_start({{.*}})", mangled = "_start", range = [0x0000000000048000-0x000000000004800c)
# CHECK: LineEntry: [0x0000000000048000-0x000000000004800a): /tmp/a.cc:4
# CHECK: Symbol: id = {0x00000002}, range = [0x0000000000048000-0x000000000004800c), name="_start"
# CHECK: Variable: id = {0x00000075}, name = "v1", {{.*}} decl = a.cc:4
>From 255be51f3f64ff6ef20c6379c3f3fdae5504813b Mon Sep 17 00:00:00 2001
From: agozillon <Andrew.Gozillon at amd.com>
Date: Tue, 5 Aug 2025 23:45:36 -0500
Subject: [PATCH 039/171] [Flang][OpenMP] Fix missing dereference in
isConstructWithTopLevelTarget
This was causing false negatives, yielding incorrect programs.
---
flang/lib/Lower/OpenMP/DataSharingProcessor.cpp | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/flang/lib/Lower/OpenMP/DataSharingProcessor.cpp b/flang/lib/Lower/OpenMP/DataSharingProcessor.cpp
index eee208a16648d..67a9a4675f115 100644
--- a/flang/lib/Lower/OpenMP/DataSharingProcessor.cpp
+++ b/flang/lib/Lower/OpenMP/DataSharingProcessor.cpp
@@ -47,7 +47,7 @@ bool DataSharingProcessor::OMPConstructSymbolVisitor::isSymbolDefineBy(
static bool isConstructWithTopLevelTarget(lower::pft::Evaluation &eval) {
const auto *ompEval = eval.getIf<parser::OpenMPConstruct>();
if (ompEval) {
- auto dir = parser::omp::GetOmpDirectiveName(ompEval).v;
+ auto dir = parser::omp::GetOmpDirectiveName(*ompEval).v;
if (llvm::omp::topTargetSet.test(dir))
return true;
}
>From f9386d3b1e5977a7920465d072761fc5d70968dc Mon Sep 17 00:00:00 2001
From: Jon Roelofs <jonathan_roelofs at apple.com>
Date: Tue, 5 Aug 2025 22:51:01 -0700
Subject: [PATCH 040/171] Revert "Revert "Strip the full path from __FILE__ in
the LDBG macro and keep only the filename (#150677)""
This reverts commit a15b629527a975ec592c95d69d04ef3537915d1d.
The revert made things worse. Oops.
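Usage of the macro is unchanged; an illustrative sketch (assuming a
translation unit that defines `DEBUG_TYPE`):
```cpp
#define DEBUG_TYPE "my-pass"
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/DebugLog.h"

void example(llvm::StringRef Name) {
  LDBG() << "visiting " << Name; // prefix now uses the short file name
  LDBG(2) << "extra detail";     // only printed at debug level >= 2
}
```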
---
llvm/cmake/modules/LLVMProcessSources.cmake | 15 ++++
llvm/include/llvm/Support/DebugLog.h | 91 ++++++++++++++++-----
2 files changed, 85 insertions(+), 21 deletions(-)
diff --git a/llvm/cmake/modules/LLVMProcessSources.cmake b/llvm/cmake/modules/LLVMProcessSources.cmake
index 0670d60bf2afd..cf358a88f5fb6 100644
--- a/llvm/cmake/modules/LLVMProcessSources.cmake
+++ b/llvm/cmake/modules/LLVMProcessSources.cmake
@@ -58,6 +58,21 @@ function(llvm_process_sources OUT_VAR)
set(sources ${ARG_UNPARSED_ARGUMENTS})
llvm_check_source_file_list(${sources})
+ # Don't generate __SHORT_FILE__ on VS builds as it can prevent build parallelisation.
+ if(NOT CMAKE_GENERATOR MATCHES "Visual Studio")
+ foreach(fn ${sources})
+ get_filename_component(suf ${fn} EXT)
+ if("${suf}" STREQUAL ".cpp" OR "${suf}" STREQUAL ".c")
+ get_filename_component(short_name ${fn} NAME)
+ set_property(
+ SOURCE ${fn}
+ APPEND
+ PROPERTY COMPILE_DEFINITIONS __SHORT_FILE__="${short_name}")
+ endif()
+ endforeach()
+ endif()
+
+
# This adds .td and .h files to the Visual Studio solution:
add_td_sources(sources)
find_all_header_files(hdrs "${ARG_ADDITIONAL_HEADER_DIRS}")
diff --git a/llvm/include/llvm/Support/DebugLog.h b/llvm/include/llvm/Support/DebugLog.h
index 386cb7f1f9716..a3312950da94e 100644
--- a/llvm/include/llvm/Support/DebugLog.h
+++ b/llvm/include/llvm/Support/DebugLog.h
@@ -41,32 +41,81 @@ namespace llvm {
//
#define LDBG(...) _GET_LDBG_MACRO(__VA_ARGS__)(__VA_ARGS__)
-#define DEBUGLOG_WITH_STREAM_AND_TYPE(STREAM, TYPE) \
- for (bool _c = (::llvm::DebugFlag && ::llvm::isCurrentDebugType(TYPE)); _c; \
- _c = false) \
- ::llvm::impl::LogWithNewline(TYPE, __FILE__, __LINE__, (STREAM))
+// Helper macros to choose the correct macro based on the number of arguments.
+#define LDBG_FUNC_CHOOSER(_f1, _f2, ...) _f2
+#define LDBG_FUNC_RECOMPOSER(argsWithParentheses) \
+ LDBG_FUNC_CHOOSER argsWithParentheses
+#define LDBG_CHOOSE_FROM_ARG_COUNT(...) \
+ LDBG_FUNC_RECOMPOSER((__VA_ARGS__, LDBG_LOG_LEVEL, ))
+#define LDBG_NO_ARG_EXPANDER() , LDBG_LOG_LEVEL_1
+#define _GET_LDBG_MACRO(...) \
+ LDBG_CHOOSE_FROM_ARG_COUNT(LDBG_NO_ARG_EXPANDER __VA_ARGS__())
+
+// Dispatch macros to support the `level` argument or none (default to 1)
+#define LDBG_LOG_LEVEL(LEVEL) \
+ DEBUGLOG_WITH_STREAM_AND_TYPE(llvm::dbgs(), LEVEL, DEBUG_TYPE)
+#define LDBG_LOG_LEVEL_1() LDBG_LOG_LEVEL(1)
+
+#define DEBUGLOG_WITH_STREAM_TYPE_FILE_AND_LINE(STREAM, LEVEL, TYPE, FILE, \
+ LINE) \
+ for (bool _c = \
+ (::llvm::DebugFlag && ::llvm::isCurrentDebugType(TYPE, LEVEL)); \
+ _c; _c = false) \
+ for (::llvm::impl::RAIINewLineStream NewLineStream{(STREAM)}; _c; \
+ _c = false) \
+ ::llvm::impl::raw_ldbg_ostream{ \
+ ::llvm::impl::computePrefix(TYPE, FILE, LINE, LEVEL), NewLineStream} \
+ .asLvalue()
+
+#define DEBUGLOG_WITH_STREAM_TYPE_AND_FILE(STREAM, LEVEL, TYPE, FILE) \
+ DEBUGLOG_WITH_STREAM_TYPE_FILE_AND_LINE(STREAM, LEVEL, TYPE, FILE, __LINE__)
+// When __SHORT_FILE__ is not defined, the File is the full path,
+// otherwise __SHORT_FILE__ is defined in CMake to provide the file name
+// without the path prefix.
+#if defined(__SHORT_FILE__)
+#define DEBUGLOG_WITH_STREAM_AND_TYPE(STREAM, LEVEL, TYPE) \
+ DEBUGLOG_WITH_STREAM_TYPE_AND_FILE(STREAM, LEVEL, TYPE, __SHORT_FILE__)
+#else
+#define DEBUGLOG_WITH_STREAM_AND_TYPE(STREAM, LEVEL, TYPE) \
+ DEBUGLOG_WITH_STREAM_TYPE_AND_FILE(STREAM, LEVEL, TYPE, \
+ ::llvm::impl::getShortFileName(__FILE__))
+#endif
namespace impl {
-class LogWithNewline {
-public:
- LogWithNewline(const char *debug_type, const char *file, int line,
- raw_ostream &os)
- : os(os) {
- if (debug_type)
- os << "[" << debug_type << "] ";
- os << file << ":" << line << " ";
+
+/// A raw_ostream that tracks `\n` and prints the prefix after each
+/// newline.
+class LLVM_ABI raw_ldbg_ostream final : public raw_ostream {
+ std::string Prefix;
+ raw_ostream &Os;
+ bool HasPendingNewline;
+
+ /// Split the line on newlines and insert the prefix before each
+ /// newline. Forward everything to the underlying stream.
+ void write_impl(const char *Ptr, size_t Size) final {
+ auto Str = StringRef(Ptr, Size);
+ // Handle the initial prefix.
+ if (!Str.empty())
+ writeWithPrefix(StringRef());
+
+ auto Eol = Str.find('\n');
+ while (Eol != StringRef::npos) {
+ StringRef Line = Str.take_front(Eol + 1);
+ if (!Line.empty())
+ writeWithPrefix(Line);
+ HasPendingNewline = true;
+ Str = Str.drop_front(Eol + 1);
+ Eol = Str.find('\n');
+ }
+ if (!Str.empty())
+ writeWithPrefix(Str);
}
- ~LogWithNewline() { os << '\n'; }
- template <typename T> raw_ostream &operator<<(const T &t) && {
- return os << t;
+ void emitPrefix() { Os.write(Prefix.c_str(), Prefix.size()); }
+ void writeWithPrefix(StringRef Str) {
+ flushEol();
+ Os.write(Str.data(), Str.size());
}
- // Prevent copying, as this class manages newline responsibility and is
- // intended for use as a temporary.
- LogWithNewline(const LogWithNewline &) = delete;
- LogWithNewline &operator=(const LogWithNewline &) = delete;
- LogWithNewline &operator=(LogWithNewline &&) = delete;
-
public:
explicit raw_ldbg_ostream(std::string Prefix, raw_ostream &Os,
bool HasPendingNewline = true)
>From 8c9feb7e13b2c01fd1cd2ecc5f699ba5cc32e369 Mon Sep 17 00:00:00 2001
From: Andrew Rogers <andrurogerz at gmail.com>
Date: Tue, 5 Aug 2025 23:11:09 -0700
Subject: [PATCH 041/171] [llvm] get Linux `-fvisibility=hidden` shared library
build working with GCC (#151365)
## Purpose
Add missing annotations so that LLVM can build as a shared library on
Linux with `-fvisibility=hidden` using GCC.
## Overview
Add a couple of annotations that make GCC happy:
* Add `LLVM_TEMPLATE_ABI` to the `extern` declaration of
`LoopBase<MachineBasicBlock, MachineLoop>`. The instantiation of this of
this template is already properly annotated with `LLVM_EXPORT_TEMPLATE`
and this declaration was missed. This omission did not cause problems
with MSVC or Clang but results in undefined reference to the destructor
with GCC.
* Add `LLVM_ATTRIBUTE_VISIBILITY_DEFAULT` to the template declaration
for `InnerAnalysisManagerProxy::Key`. `LLVM_ABI` cannot be used here
because MSVC disallows storage-class specifiers on class members outside
of the class declaration (C2720). Since
`LLVM_ATTRIBUTE_VISIBILITY_DEFAULT` only applies to non-Windows targets,
it can be used in its place. Omitting this annotation does not cause
problems with Clang, but results in undefined references to
`InnerAnalysisManagerProxy::Key` fields for explicitly instantiated
declarations when compiling with GCC.
## Background
This effort is tracked in #109483. Additional context is provided in
[this
discourse](https://discourse.llvm.org/t/psa-annotating-llvm-public-interface/85307),
and documentation for `LLVM_ABI` and related annotations is found in the
LLVM repo
[here](https://github.com/llvm/llvm-project/blob/main/llvm/docs/InterfaceExportAnnotations.rst).
## Validation
On Fedora 42, build with GCC `LLVM_BUILD_LLVM_DYLIB=ON`,
`LLVM_BUILD_LLVM_DYLIB_VIS=ON`, and `LLVM_LINK_LLVM_DYLIB=ON`.
```
cmake -B build -S llvm -G Ninja -DLLVM_ENABLE_PROJECTS="llvm;clang;clang-tools-extra;lldb;lld" -DLLVM_OPTIMIZED_TABLEGEN=ON -DLLVM_BUILD_TESTS=ON -DLLVM_BUILD_EXAMPLES=ON -DLLDB_ENABLE_PYTHON=NO -DLLVM_BUILD_LLVM_DYLIB=ON -DLLVM_BUILD_LLVM_DYLIB_VIS=ON -DLLVM_LINK_LLVM_DYLIB=ON -DCLANG_LINK_CLANG_DYLIB=OFF -DCMAKE_BUILD_TYPE=Release
ninja -C build
```
---
llvm/include/llvm/CodeGen/MachineLoopInfo.h | 3 ++-
llvm/include/llvm/IR/PassManager.h | 8 +++++++-
2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/llvm/include/llvm/CodeGen/MachineLoopInfo.h b/llvm/include/llvm/CodeGen/MachineLoopInfo.h
index 6942264a11c0e..bcec6df39e73c 100644
--- a/llvm/include/llvm/CodeGen/MachineLoopInfo.h
+++ b/llvm/include/llvm/CodeGen/MachineLoopInfo.h
@@ -42,7 +42,8 @@ namespace llvm {
class MachineDominatorTree;
// Implementation in LoopInfoImpl.h
class MachineLoop;
-extern template class LoopBase<MachineBasicBlock, MachineLoop>;
+extern template class LLVM_TEMPLATE_ABI
+ LoopBase<MachineBasicBlock, MachineLoop>;
class MachineLoop : public LoopBase<MachineBasicBlock, MachineLoop> {
public:
diff --git a/llvm/include/llvm/IR/PassManager.h b/llvm/include/llvm/IR/PassManager.h
index ea8226c6e17ba..acb17a8090c51 100644
--- a/llvm/include/llvm/IR/PassManager.h
+++ b/llvm/include/llvm/IR/PassManager.h
@@ -657,8 +657,14 @@ class LLVM_TEMPLATE_ABI InnerAnalysisManagerProxy
AnalysisManagerT *InnerAM;
};
+// NOTE: The LLVM_ABI annotation cannot be used here because MSVC disallows
+// storage-class specifiers on class members outside of the class declaration
+// (C2720). LLVM_ATTRIBUTE_VISIBILITY_DEFAULT only applies to non-Windows
+// targets so it is used instead. Without this annotation, compiling LLVM as a
+// shared library with -fvisibility=hidden using GCC fails to export the symbol
+// even though InnerAnalysisManagerProxy is already annotated with LLVM_ABI.
template <typename AnalysisManagerT, typename IRUnitT, typename... ExtraArgTs>
-AnalysisKey
+LLVM_ATTRIBUTE_VISIBILITY_DEFAULT AnalysisKey
InnerAnalysisManagerProxy<AnalysisManagerT, IRUnitT, ExtraArgTs...>::Key;
/// Provide the \c FunctionAnalysisManager to \c Module proxy.
>From a3c386d241a103481604dc970ada168b786c56c1 Mon Sep 17 00:00:00 2001
From: Andrew Rogers <andrurogerz at gmail.com>
Date: Tue, 5 Aug 2025 23:12:07 -0700
Subject: [PATCH 042/171] [llvm] annotate recently added interfaces for DLL
export (#152179)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
## Purpose
This patch is one in a series of code-mods that annotate LLVM’s public
interface for export. This patch annotates symbols that were recently
added to LLVM and fixes incorrectly annotated symbols.
## Background
This effort is tracked in #109483. Additional context is provided in
[this
discourse](https://discourse.llvm.org/t/psa-annotating-llvm-public-interface/85307),
and documentation for `LLVM_ABI` and related annotations is found in the
LLVM repo
[here](https://github.com/llvm/llvm-project/blob/main/llvm/docs/InterfaceExportAnnotations.rst).
## Overview
The bulk of these changes were generated automatically using the
[Interface Definition Scanner (IDS)](https://github.com/compnerd/ids)
tool, followed by formatting with `git clang-format`.
The following manual adjustments were also applied after running IDS:
- Add `LLVM_EXPORT_TEMPLATE` and `LLVM_TEMPLATE_ABI` annotations to
explicitly instantiated instances of `llvm::object::SFrameParser`.
## Validation
On Windows 11:
```
cmake -B build -S llvm -G Ninja -DLLVM_ENABLE_PROJECTS="llvm;clang;clang-tools-extra;lldb;lld" -DLLVM_OPTIMIZED_TABLEGEN=ON -DLLVM_BUILD_LLVM_DYLIB=ON -DLLVM_BUILD_LLVM_DYLIB_VIS=ON -DLLVM_LINK_LLVM_DYLIB=ON -DLLVM_BUILD_TESTS=ON -DCLANG_LINK_CLANG_DYLIB=OFF -DCMAKE_BUILD_TYPE=Release
ninja -C build
```
---
llvm/include/llvm/Analysis/DXILResource.h | 4 ++--
llvm/include/llvm/CodeGen/MachineFunction.h | 2 +-
llvm/include/llvm/Frontend/HLSL/HLSLBinding.h | 5 +++--
llvm/include/llvm/Frontend/HLSL/RootSignatureMetadata.h | 7 ++++---
llvm/include/llvm/Frontend/Offloading/PropertySet.h | 7 +++++--
llvm/include/llvm/IR/IRBuilder.h | 3 ++-
llvm/include/llvm/Object/DXContainer.h | 2 +-
llvm/include/llvm/Transforms/HipStdPar/HipStdPar.h | 2 +-
.../llvm/Transforms/Utils/SplitModuleByCategory.h | 3 ++-
llvm/lib/Transforms/Vectorize/VPlan.h | 9 +++++----
10 files changed, 26 insertions(+), 18 deletions(-)
diff --git a/llvm/include/llvm/Analysis/DXILResource.h b/llvm/include/llvm/Analysis/DXILResource.h
index 93c6bfb057ef5..9d07bc0013ca7 100644
--- a/llvm/include/llvm/Analysis/DXILResource.h
+++ b/llvm/include/llvm/Analysis/DXILResource.h
@@ -649,8 +649,8 @@ class DXILResourceBindingInfo {
bool hasOverlappingBinding() const { return HasOverlappingBinding; }
void setHasOverlappingBinding(bool Value) { HasOverlappingBinding = Value; }
- LLVM_ABI std::optional<uint32_t>
- findAvailableBinding(dxil::ResourceClass RC, uint32_t Space, int32_t Size) {
+ std::optional<uint32_t> findAvailableBinding(dxil::ResourceClass RC,
+ uint32_t Space, int32_t Size) {
return Bindings.findAvailableBinding(RC, Space, Size);
}
diff --git a/llvm/include/llvm/CodeGen/MachineFunction.h b/llvm/include/llvm/CodeGen/MachineFunction.h
index 06c4daf245fa0..69b7a3f570c89 100644
--- a/llvm/include/llvm/CodeGen/MachineFunction.h
+++ b/llvm/include/llvm/CodeGen/MachineFunction.h
@@ -523,7 +523,7 @@ class LLVM_ABI MachineFunction {
/// Extracts the numeric type id from the CallBase's callee_type Metadata,
/// and sets CalleeTypeIds. This is used as type id for the indirect call in
/// the call graph section.
- CallSiteInfo(const CallBase &CB);
+ LLVM_ABI CallSiteInfo(const CallBase &CB);
};
struct CalledGlobalInfo {
diff --git a/llvm/include/llvm/Frontend/HLSL/HLSLBinding.h b/llvm/include/llvm/Frontend/HLSL/HLSLBinding.h
index 70a2eeb632f1b..f4f46b35cf83d 100644
--- a/llvm/include/llvm/Frontend/HLSL/HLSLBinding.h
+++ b/llvm/include/llvm/Frontend/HLSL/HLSLBinding.h
@@ -15,6 +15,7 @@
#include "llvm/ADT/STLFunctionalExtras.h"
#include "llvm/ADT/SmallVector.h"
+#include "llvm/Support/Compiler.h"
#include "llvm/Support/DXILABI.h"
#include "llvm/Support/ErrorHandling.h"
@@ -138,7 +139,7 @@ class BindingInfoBuilder {
}
/// Calculate the binding info - \c ReportOverlap will be called once for each
/// overlapping binding.
- BindingInfo calculateBindingInfo(
+ LLVM_ABI BindingInfo calculateBindingInfo(
llvm::function_ref<void(const BindingInfoBuilder &Builder,
const Binding &Overlapping)>
ReportOverlap);
@@ -153,7 +154,7 @@ class BindingInfoBuilder {
/// For use in the \c ReportOverlap callback of \c calculateBindingInfo -
/// finds a binding that the \c ReportedBinding overlaps with.
- const Binding &findOverlapping(const Binding &ReportedBinding) const;
+ LLVM_ABI const Binding &findOverlapping(const Binding &ReportedBinding) const;
};
} // namespace hlsl
diff --git a/llvm/include/llvm/Frontend/HLSL/RootSignatureMetadata.h b/llvm/include/llvm/Frontend/HLSL/RootSignatureMetadata.h
index 0bd0774641287..c6d7c32c4ad95 100644
--- a/llvm/include/llvm/Frontend/HLSL/RootSignatureMetadata.h
+++ b/llvm/include/llvm/Frontend/HLSL/RootSignatureMetadata.h
@@ -18,6 +18,7 @@
#include "llvm/Frontend/HLSL/HLSLRootSignature.h"
#include "llvm/IR/Constants.h"
#include "llvm/MC/DXContainerRootSignature.h"
+#include "llvm/Support/Compiler.h"
namespace llvm {
class LLVMContext;
@@ -49,7 +50,7 @@ class RootSignatureValidationError
class GenericRSMetadataError : public ErrorInfo<GenericRSMetadataError> {
public:
- static char ID;
+ LLVM_ABI static char ID;
StringRef Message;
MDNode *MD;
@@ -71,7 +72,7 @@ class GenericRSMetadataError : public ErrorInfo<GenericRSMetadataError> {
class InvalidRSMetadataFormat : public ErrorInfo<InvalidRSMetadataFormat> {
public:
- static char ID;
+ LLVM_ABI static char ID;
StringRef ElementName;
InvalidRSMetadataFormat(StringRef ElementName) : ElementName(ElementName) {}
@@ -87,7 +88,7 @@ class InvalidRSMetadataFormat : public ErrorInfo<InvalidRSMetadataFormat> {
class InvalidRSMetadataValue : public ErrorInfo<InvalidRSMetadataValue> {
public:
- static char ID;
+ LLVM_ABI static char ID;
StringRef ParamName;
InvalidRSMetadataValue(StringRef ParamName) : ParamName(ParamName) {}
diff --git a/llvm/include/llvm/Frontend/Offloading/PropertySet.h b/llvm/include/llvm/Frontend/Offloading/PropertySet.h
index d198d3e603264..fbc1cf0f09211 100644
--- a/llvm/include/llvm/Frontend/Offloading/PropertySet.h
+++ b/llvm/include/llvm/Frontend/Offloading/PropertySet.h
@@ -10,6 +10,7 @@
//===----------------------------------------------------------------------===//
#include "llvm/ADT/SmallVector.h"
+#include "llvm/Support/Compiler.h"
#include "llvm/Support/Error.h"
#include <map>
@@ -26,8 +27,10 @@ using PropertyValue = std::variant<uint32_t, ByteArray>;
using PropertySet = std::map<std::string, PropertyValue>;
using PropertySetRegistry = std::map<std::string, PropertySet>;
-void writePropertiesToJSON(const PropertySetRegistry &P, raw_ostream &O);
-Expected<PropertySetRegistry> readPropertiesFromJSON(MemoryBufferRef Buf);
+LLVM_ABI void writePropertiesToJSON(const PropertySetRegistry &P,
+ raw_ostream &O);
+LLVM_ABI Expected<PropertySetRegistry>
+readPropertiesFromJSON(MemoryBufferRef Buf);
} // namespace offloading
} // namespace llvm
diff --git a/llvm/include/llvm/IR/IRBuilder.h b/llvm/include/llvm/IR/IRBuilder.h
index 6d3d864b46559..d106dedf814e5 100644
--- a/llvm/include/llvm/IR/IRBuilder.h
+++ b/llvm/include/llvm/IR/IRBuilder.h
@@ -2614,7 +2614,8 @@ class IRBuilderBase {
return CreateShuffleVector(V, PoisonValue::get(V->getType()), Mask, Name);
}
- Value *CreateVectorInterleave(ArrayRef<Value *> Ops, const Twine &Name = "");
+ LLVM_ABI Value *CreateVectorInterleave(ArrayRef<Value *> Ops,
+ const Twine &Name = "");
Value *CreateExtractValue(Value *Agg, ArrayRef<unsigned> Idxs,
const Twine &Name = "") {
diff --git a/llvm/include/llvm/Object/DXContainer.h b/llvm/include/llvm/Object/DXContainer.h
index 3c8cd174afede..ad1b2361ff064 100644
--- a/llvm/include/llvm/Object/DXContainer.h
+++ b/llvm/include/llvm/Object/DXContainer.h
@@ -586,7 +586,7 @@ class DXContainer {
}
};
-class DXContainerObjectFile : public ObjectFile {
+class LLVM_ABI DXContainerObjectFile : public ObjectFile {
private:
friend class ObjectFile;
DXContainer Container;
diff --git a/llvm/include/llvm/Transforms/HipStdPar/HipStdPar.h b/llvm/include/llvm/Transforms/HipStdPar/HipStdPar.h
index a9a370b27988a..b6b753c6f5cfe 100644
--- a/llvm/include/llvm/Transforms/HipStdPar/HipStdPar.h
+++ b/llvm/include/llvm/Transforms/HipStdPar/HipStdPar.h
@@ -43,7 +43,7 @@ class HipStdParAllocationInterpositionPass
class HipStdParMathFixupPass : public PassInfoMixin<HipStdParMathFixupPass> {
public:
- PreservedAnalyses run(Module &M, ModuleAnalysisManager &MAM);
+ LLVM_ABI PreservedAnalyses run(Module &M, ModuleAnalysisManager &MAM);
static bool isRequired() { return true; }
};
diff --git a/llvm/include/llvm/Transforms/Utils/SplitModuleByCategory.h b/llvm/include/llvm/Transforms/Utils/SplitModuleByCategory.h
index b32cfaf7859ab..cfcd1611e27fe 100644
--- a/llvm/include/llvm/Transforms/Utils/SplitModuleByCategory.h
+++ b/llvm/include/llvm/Transforms/Utils/SplitModuleByCategory.h
@@ -12,6 +12,7 @@
#define LLVM_TRANSFORM_UTILS_SPLIT_MODULE_BY_CATEGORY_H
#include "llvm/ADT/STLFunctionalExtras.h"
+#include "llvm/Support/Compiler.h"
#include <memory>
#include <optional>
@@ -54,7 +55,7 @@ class Function;
///
/// FIXME: For now, the algorithm assumes no recursion in the input Module. This
/// will be addressed in the near future.
-void splitModuleTransitiveFromEntryPoints(
+LLVM_ABI void splitModuleTransitiveFromEntryPoints(
std::unique_ptr<Module> M,
function_ref<std::optional<int>(const Function &F)> EntryPointCategorizer,
function_ref<void(std::unique_ptr<Module> Part)> Callback);
diff --git a/llvm/lib/Transforms/Vectorize/VPlan.h b/llvm/lib/Transforms/Vectorize/VPlan.h
index 8dfb982a7d2f9..46dd11425aa2c 100644
--- a/llvm/lib/Transforms/Vectorize/VPlan.h
+++ b/llvm/lib/Transforms/Vectorize/VPlan.h
@@ -1279,7 +1279,7 @@ class VPIRInstruction : public VPRecipeBase {
/// Create a new VPIRPhi for \p \I, if it is a PHINode, otherwise create a
/// VPIRInstruction.
- static VPIRInstruction *create(Instruction &I);
+ LLVM_ABI_FOR_TEST static VPIRInstruction *create(Instruction &I);
VP_CLASSOF_IMPL(VPDef::VPIRInstructionSC)
@@ -1293,8 +1293,8 @@ class VPIRInstruction : public VPRecipeBase {
void execute(VPTransformState &State) override;
/// Return the cost of this VPIRInstruction.
- InstructionCost computeCost(ElementCount VF,
- VPCostContext &Ctx) const override;
+ LLVM_ABI_FOR_TEST InstructionCost
+ computeCost(ElementCount VF, VPCostContext &Ctx) const override;
Instruction &getInstruction() const { return I; }
@@ -1332,7 +1332,8 @@ class VPIRInstruction : public VPRecipeBase {
/// cast/dyn_cast/isa and execute() implementation. A single VPValue operand is
/// allowed, and it is used to add a new incoming value for the single
/// predecessor VPBB.
-struct VPIRPhi : public VPIRInstruction, public VPPhiAccessors {
+struct LLVM_ABI_FOR_TEST VPIRPhi : public VPIRInstruction,
+ public VPPhiAccessors {
VPIRPhi(PHINode &PN) : VPIRInstruction(PN) {}
static inline bool classof(const VPRecipeBase *U) {
>From cd1363bf42aad4cc55f6fe6892f63de9b32977ae Mon Sep 17 00:00:00 2001
From: Adrian Kuegel <akuegel at google.com>
Date: Wed, 6 Aug 2025 08:58:20 +0200
Subject: [PATCH 043/171] [mlir][Bufferization] Support cast from ranked to unranked in canonic… (#152257)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
https://github.com/llvm/llvm-project/pull/150511 changed the
canonicalization pattern to not allow casts from ranked to unranked
anymore. This patch restores this functionality, while still keeping the
fix to preserve memory space and layout.
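Illustrative before/after of the restored canonicalization (simplified
from the new test below):
```mlir
// Before: the cast to unranked feeds to_buffer directly.
%0 = tensor.cast %arg0 : tensor<4xi8> to tensor<*xi8>
%1 = bufferization.to_buffer %0 read_only : tensor<*xi8> to memref<*xi8>

// After: to_buffer the ranked source, then memref.cast to unranked.
%m = bufferization.to_buffer %arg0 read_only : tensor<4xi8> to memref<4xi8>
%r = memref.cast %m : memref<4xi8> to memref<*xi8>
```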
---
.../Dialect/Bufferization/IR/BufferizationOps.cpp | 8 +++-----
mlir/test/Dialect/Bufferization/canonicalize.mlir | 13 +++++++++++++
2 files changed, 16 insertions(+), 5 deletions(-)
diff --git a/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp b/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
index 7eb729f349638..f1f12f4bca70e 100644
--- a/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
+++ b/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
@@ -806,14 +806,12 @@ struct ToBufferOfCast : public OpRewritePattern<ToBufferOp> {
if (!srcTensorType)
return failure();
auto currentOutputMemRefType =
- dyn_cast<MemRefType>(toBuffer.getResult().getType());
+ dyn_cast<BaseMemRefType>(toBuffer.getResult().getType());
if (!currentOutputMemRefType)
return failure();
- auto memrefType = MemRefType::get(srcTensorType.getShape(),
- srcTensorType.getElementType(),
- currentOutputMemRefType.getLayout(),
- currentOutputMemRefType.getMemorySpace());
+ auto memrefType = currentOutputMemRefType.cloneWith(
+ srcTensorType.getShape(), srcTensorType.getElementType());
Value memref = ToBufferOp::create(rewriter, toBuffer.getLoc(), memrefType,
tensorCastOperand.getOperand(),
toBuffer.getReadOnly());
diff --git a/mlir/test/Dialect/Bufferization/canonicalize.mlir b/mlir/test/Dialect/Bufferization/canonicalize.mlir
index 2acd19453a04d..ae1d1fcfc19dc 100644
--- a/mlir/test/Dialect/Bufferization/canonicalize.mlir
+++ b/mlir/test/Dialect/Bufferization/canonicalize.mlir
@@ -263,6 +263,19 @@ func.func @tensor_cast_to_buffer(%arg0 : tensor<4x6x16x32xi8>) ->
// CHECK-SAME: memref<4x6x16x32xi8> to memref<?x?x16x32xi8>
// CHECK: return %[[M1]] : memref<?x?x16x32xi8>
+// CHECK-LABEL: func @tensor_cast_to_unranked_buffer
+// CHECK-SAME: %[[ARG0:.+]]: tensor<4x6x16x32xi8>
+func.func @tensor_cast_to_unranked_buffer(%arg0 : tensor<4x6x16x32xi8>) ->
+ memref<*xi8> {
+ %0 = tensor.cast %arg0 : tensor<4x6x16x32xi8> to tensor<*xi8>
+ %1 = bufferization.to_buffer %0 read_only : tensor<*xi8> to memref<*xi8>
+ return %1 : memref<*xi8>
+}
+// CHECK: %[[M:.+]] = bufferization.to_buffer %[[ARG0]] read_only : tensor<4x6x16x32xi8>
+// CHECK: %[[M1:.+]] = memref.cast %[[M]]
+// CHECK-SAME: memref<4x6x16x32xi8> to memref<*xi8>
+// CHECK: return %[[M1]] : memref<*xi8>
+
// -----
// CHECK-LABEL: func @tensor_cast_to_buffer
>From fe0d67b75413c95d79742cadb0d006377ae8f025 Mon Sep 17 00:00:00 2001
From: Martin Storsjö <martin at martin.st>
Date: Wed, 6 Aug 2025 10:06:29 +0300
Subject: [PATCH 044/171] [libcxx] [ci] Update Clang on Windows (#152206)
Update clang-cl/LLVM to 20.1.8.
Update to llvm-mingw 20250709 (which is also built on LLVM 20.1.8). This
release of llvm-mingw is the first release to be built with PGO, making
it significantly faster for the CI runs (on par with the clang-cl
cases); running the current tests in around 1 h rather than 1 h 20 min.
---
.github/workflows/libcxx-build-and-test.yaml | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/.github/workflows/libcxx-build-and-test.yaml b/.github/workflows/libcxx-build-and-test.yaml
index 41a2aad1da236..4b29c9296bf9d 100644
--- a/.github/workflows/libcxx-build-and-test.yaml
+++ b/.github/workflows/libcxx-build-and-test.yaml
@@ -260,11 +260,11 @@ jobs:
- name: Install a current LLVM
if: ${{ matrix.mingw != true }}
run: |
- choco install -y llvm --version=19.1.7 --allow-downgrade
+ choco install -y llvm --version=20.1.8 --allow-downgrade
- name: Install llvm-mingw
if: ${{ matrix.mingw == true }}
run: |
- curl -LO https://github.com/mstorsjo/llvm-mingw/releases/download/20250114/llvm-mingw-20250114-ucrt-x86_64.zip
+ curl -LO https://github.com/mstorsjo/llvm-mingw/releases/download/20250709/llvm-mingw-20250709-ucrt-x86_64.zip
powershell Expand-Archive llvm-mingw*.zip -DestinationPath .
del llvm-mingw*.zip
mv llvm-mingw* c:\llvm-mingw
>From 90d1d2350721f684bb6697b9db7ce36df252fbed Mon Sep 17 00:00:00 2001
From: Timm Baeder <tbaeder at redhat.com>
Date: Wed, 6 Aug 2025 09:07:40 +0200
Subject: [PATCH 045/171] [clang][bytecode] Override locs for certain
CXXConstructExprs (#152185)
Do it only if we will end up skipping the initializer anyway because
it's a trivial copy or move constructor.
---
clang/lib/AST/ByteCode/Compiler.cpp | 27 +++++++++++++++------
clang/lib/AST/ByteCode/InterpFrame.cpp | 5 ++++
clang/test/AST/ByteCode/cxx11.cpp | 12 +++++++++
clang/test/SemaCXX/constexpr-value-init.cpp | 1 +
4 files changed, 37 insertions(+), 8 deletions(-)
diff --git a/clang/lib/AST/ByteCode/Compiler.cpp b/clang/lib/AST/ByteCode/Compiler.cpp
index 4ab4dee5b853c..cc99efa01b83e 100644
--- a/clang/lib/AST/ByteCode/Compiler.cpp
+++ b/clang/lib/AST/ByteCode/Compiler.cpp
@@ -5997,6 +5997,23 @@ bool Compiler<Emitter>::checkLiteralType(const Expr *E) {
return this->emitCheckLiteralType(E->getType().getTypePtr(), E);
}
+static bool initNeedsOverridenLoc(const CXXCtorInitializer *Init) {
+ const Expr *InitExpr = Init->getInit();
+
+ if (!Init->isWritten() && !Init->isInClassMemberInitializer() &&
+ !isa<CXXConstructExpr>(InitExpr))
+ return true;
+
+ if (const auto *CE = dyn_cast<CXXConstructExpr>(InitExpr)) {
+ const CXXConstructorDecl *Ctor = CE->getConstructor();
+ if (Ctor->isDefaulted() && Ctor->isCopyOrMoveConstructor() &&
+ Ctor->isTrivial())
+ return true;
+ }
+
+ return false;
+}
+
template <class Emitter>
bool Compiler<Emitter>::compileConstructor(const CXXConstructorDecl *Ctor) {
assert(!ReturnType);
@@ -6071,10 +6088,7 @@ bool Compiler<Emitter>::compileConstructor(const CXXConstructorDecl *Ctor) {
const Record::Field *F = R->getField(Member);
LocOverrideScope<Emitter> LOS(this, SourceInfo{},
- !Init->isWritten() &&
- !Init->isInClassMemberInitializer() &&
- (!isa<CXXConstructExpr>(InitExpr) ||
- Member->isAnonymousStructOrUnion()));
+ initNeedsOverridenLoc(Init));
if (!emitFieldInitializer(F, F->Offset, InitExpr, IsUnion))
return false;
} else if (const Type *Base = Init->getBaseClass()) {
@@ -6104,10 +6118,7 @@ bool Compiler<Emitter>::compileConstructor(const CXXConstructorDecl *Ctor) {
return false;
} else if (const IndirectFieldDecl *IFD = Init->getIndirectMember()) {
LocOverrideScope<Emitter> LOS(this, SourceInfo{},
- !Init->isWritten() &&
- !Init->isInClassMemberInitializer() &&
- !isa<CXXConstructExpr>(InitExpr));
-
+ initNeedsOverridenLoc(Init));
assert(IFD->getChainingSize() >= 2);
unsigned NestedFieldOffset = 0;
diff --git a/clang/lib/AST/ByteCode/InterpFrame.cpp b/clang/lib/AST/ByteCode/InterpFrame.cpp
index 14f99c7b76847..93421927d2ea9 100644
--- a/clang/lib/AST/ByteCode/InterpFrame.cpp
+++ b/clang/lib/AST/ByteCode/InterpFrame.cpp
@@ -133,6 +133,11 @@ static bool shouldSkipInBacktrace(const Function *F) {
MD && MD->getParent()->isAnonymousStructOrUnion())
return true;
+ if (const auto *Ctor = dyn_cast<CXXConstructorDecl>(FD);
+ Ctor && Ctor->isDefaulted() && Ctor->isTrivial() &&
+ Ctor->isCopyOrMoveConstructor() && Ctor->inits().empty())
+ return true;
+
return false;
}
diff --git a/clang/test/AST/ByteCode/cxx11.cpp b/clang/test/AST/ByteCode/cxx11.cpp
index bb8aca27eb965..7aecf23b0f149 100644
--- a/clang/test/AST/ByteCode/cxx11.cpp
+++ b/clang/test/AST/ByteCode/cxx11.cpp
@@ -318,3 +318,15 @@ namespace PR19010 {
};
void test() { constexpr Test t; }
}
+
+namespace ReadMutableInCopyCtor {
+ struct G {
+ struct X {};
+ union U { X a; };
+ mutable U u; // both-note {{declared here}}
+ };
+ constexpr G g1 = {};
+ constexpr G g2 = g1; // both-error {{must be initialized by a constant expression}} \
+ // both-note {{read of mutable member 'u'}} \
+ // both-note {{in call to 'G(g1)'}}
+}
diff --git a/clang/test/SemaCXX/constexpr-value-init.cpp b/clang/test/SemaCXX/constexpr-value-init.cpp
index 3314174a0ea17..18fa9cf0736a6 100644
--- a/clang/test/SemaCXX/constexpr-value-init.cpp
+++ b/clang/test/SemaCXX/constexpr-value-init.cpp
@@ -1,4 +1,5 @@
// RUN: %clang_cc1 %s -Wno-uninitialized -std=c++17 -fsyntax-only -verify
+// RUN: %clang_cc1 %s -Wno-uninitialized -std=c++17 -fsyntax-only -verify -fexperimental-new-constant-interpreter
struct A {
constexpr A() : a(b + 1), b(a + 1) {} // expected-note 5{{outside its lifetime}}
>From 753885eaaf8745a64f502a3a65fa89456d632cd5 Mon Sep 17 00:00:00 2001
From: Michael Buch <michaelbuch12 at gmail.com>
Date: Wed, 6 Aug 2025 09:17:33 +0100
Subject: [PATCH 046/171] [lldb][test] Skip TestQueueFromStdModule.py
pre-Clang-17
Failing on the macOS matrix bot for Clang-15:
```
07:38:40 File "/Users/ec2-user/jenkins/workspace/llvm.org/lldb-cmake-matrix/llvm-project/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py", line 38, in test
07:38:40 self.expect_expr(
07:38:40 File "/Users/ec2-user/jenkins/workspace/llvm.org/lldb-cmake-matrix/llvm-project/lldb/packages/Python/lldbsuite/test/lldbtest.py", line 2571, in expect_expr
07:38:40 value_check.check_value(self, eval_result, str(eval_result))
07:38:40 File "/Users/ec2-user/jenkins/workspace/llvm.org/lldb-cmake-matrix/llvm-project/lldb/packages/Python/lldbsuite/test/lldbtest.py", line 301, in check_value
07:38:40 test_base.assertSuccess(val.GetError())
07:38:40 File "/Users/ec2-user/jenkins/workspace/llvm.org/lldb-cmake-matrix/llvm-project/lldb/packages/Python/lldbsuite/test/lldbtest.py", line 2606, in assertSuccess
07:38:40 self.fail(self._formatMessage(msg, "'{}' is not success".format(error)))
07:38:40 AssertionError: 'error: /Applications/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.2.sdk/usr/include/module.modulemap:93:11: header 'stdarg.h' not found
07:38:40 93 | header "stdarg.h" // note: supplied by the compiler
07:38:40 | ^
07:38:40 /Users/ec2-user/jenkins/workspace/llvm.org/lldb-cmake-matrix/clang_1501_build/include/c++/v1/ctype.h:38:15: submodule of top-level module 'Darwin' implicitly imported here
07:38:40 38 | #include_next <ctype.h>
07:38:40 | ^
07:38:40 error: While building module 'std' imported from <lldb wrapper prefix>:42:
```
---
.../import-std-module/queue/TestQueueFromStdModule.py | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py b/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py
index b08a53855e1db..95aaa8e7aac88 100644
--- a/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py
+++ b/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py
@@ -15,6 +15,10 @@ class TestQueue(TestBase):
compiler_version=[">", "16.0"],
bugnumber="https://github.com/llvm/llvm-project/issues/68968",
)
+ @skipIf(
+ compiler="clang",
+ compiler_version=["<", "17.0"],
+ )
def test(self):
self.build()
>From ff5fa711b3078b3305aa5b5a2d02f9d97421c662 Mon Sep 17 00:00:00 2001
From: Benjamin Maxwell <benjamin.maxwell at arm.com>
Date: Wed, 6 Aug 2025 09:21:57 +0100
Subject: [PATCH 047/171] [AArch64][SVE] Tweak how SVE CFI expressions are
emitted (#151677)
The main change in this patch is that we go from emitting the expression:
@ cfa - NumBytes - NumScalableBytes * VG
To:
@ cfa - VG * NumScalableBytes - NumBytes
That is, the VG-scaled term now comes first. This is in preparation for
a future patch that adds an alternative way to resolve VG (which uses
the CFA, so it is convenient for the CFA to be at the top of the stack).
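Concretely, taking the `$d8 @ cfa - 8 * VG - 16` save from the updated
unwind-info checks below (0x2e is the DWARF register number of VG), the
emitted operator sequence changes from:
```
DW_OP_consts -16, DW_OP_plus, DW_OP_consts -8, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
```
to:
```
DW_OP_bregx 0x2e +0, DW_OP_consts -8, DW_OP_mul, DW_OP_plus, DW_OP_lit16, DW_OP_minus
```
so the VG read now sits directly on top of the implicitly pushed CFA.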
Since doing this is fairly churn-heavy, I took the opportunity to also
save up to 4 bytes per SVE CFI expression. This is done by folding
LEB128 constants into `DW_OP_lit0`..`DW_OP_lit31` literals when they
fall in the range 0 to 31, and by using the offset field of
`DW_OP_breg*` expressions.
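As a rough illustration of where the size win comes from (a standalone
sketch, not the code in this patch; the opcode values are standard DWARF
and match the `.cfi_escape` updates below):
```cpp
// Sketch of the literal-folding idea: DW_OP_consts costs an opcode byte
// plus an SLEB128 payload, while DW_OP_lit0..DW_OP_lit31 encode small
// values in a single byte.
#include <cstdint>
#include <vector>

constexpr uint8_t DW_OP_consts = 0x11; // signed LEB128 payload follows
constexpr uint8_t DW_OP_minus = 0x1c;
constexpr uint8_t DW_OP_plus = 0x22;
constexpr uint8_t DW_OP_lit0 = 0x30;   // literals 0..31 are 0x30..0x4f

// Append "push C; apply Op", folding small constants to one-byte literals.
void appendConstant(std::vector<uint8_t> &Expr, int64_t C, uint8_t Op) {
  if (Op == DW_OP_plus && C < 0 && -C <= 31) {
    Expr.push_back(DW_OP_lit0 - C); // e.g. -15 DW_OP_plus -> DW_OP_lit15 ...
    Op = DW_OP_minus;               // ... DW_OP_minus
  } else if (C >= 0 && C <= 31) {
    Expr.push_back(DW_OP_lit0 + C); // one byte instead of two or more
  } else {
    Expr.push_back(DW_OP_consts);
    for (bool More = true; More;) { // inline SLEB128 encoding
      uint8_t Byte = C & 0x7f;
      C >>= 7;
      More = !((C == 0 && !(Byte & 0x40)) || (C == -1 && (Byte & 0x40)));
      Expr.push_back(More ? (Byte | 0x80) : Byte);
    }
  }
  Expr.push_back(Op);
}
```
For example, the `sp + 16 + 24 * VG` CFA expression shrinks from a
12-byte payload to an 8-byte one (`.cfi_escape 0x0f, 0x0c, ...` becomes
`.cfi_escape 0x0f, 0x08, ...` in the test updates below).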
---
llvm/include/llvm/Support/LEB128.h | 17 +
llvm/lib/Target/AArch64/AArch64InstrInfo.cpp | 95 +++--
.../alloca-load-store-scalable-array.ll | 8 +-
.../alloca-load-store-scalable-struct.ll | 2 +-
llvm/test/CodeGen/AArch64/fp8-sme2-cvtn.ll | 12 +-
.../framelayout-sve-calleesaves-fix.mir | 8 +-
llvm/test/CodeGen/AArch64/framelayout-sve.mir | 390 +++++++++--------
.../AArch64/intrinsic-vector-match-sve2.ll | 6 +-
llvm/test/CodeGen/AArch64/luti-with-sme2.ll | 18 +-
.../test/CodeGen/AArch64/perm-tb-with-sme2.ll | 42 +-
llvm/test/CodeGen/AArch64/sme-vg-to-stack.ll | 38 +-
.../AArch64/sme2-fp8-intrinsics-cvt.ll | 8 +-
.../CodeGen/AArch64/sme2-intrinsics-qcvt.ll | 2 +-
.../CodeGen/AArch64/sme2-intrinsics-qrshr.ll | 8 +-
.../AArch64/sme2-multivec-regalloc.mir | 4 +-
.../CodeGen/AArch64/split-vector-insert.ll | 8 +-
llvm/test/CodeGen/AArch64/stack-hazard.ll | 398 +++++++++---------
.../test/CodeGen/AArch64/stack-probing-sve.ll | 152 +++----
llvm/test/CodeGen/AArch64/sve-alloca.ll | 16 +-
.../AArch64/sve-callee-save-restore-pairs.ll | 132 +++---
.../AArch64/sve-calling-convention-mixed.ll | 42 +-
.../sve-extract-fixed-from-scalable-vector.ll | 18 +-
.../AArch64/sve-extract-scalable-vector.ll | 2 +-
.../CodeGen/AArch64/sve-fp-reduce-fadda.ll | 4 +-
llvm/test/CodeGen/AArch64/sve-fptosi-sat.ll | 4 +-
llvm/test/CodeGen/AArch64/sve-fptoui-sat.ll | 4 +-
.../CodeGen/AArch64/sve-insert-element.ll | 2 +-
.../test/CodeGen/AArch64/sve-insert-vector.ll | 36 +-
llvm/test/CodeGen/AArch64/sve-ldnf1.mir | 92 ++--
llvm/test/CodeGen/AArch64/sve-ldstnt1.mir | 64 +--
llvm/test/CodeGen/AArch64/sve-llrint.ll | 82 ++--
llvm/test/CodeGen/AArch64/sve-lrint.ll | 82 ++--
llvm/test/CodeGen/AArch64/sve-pred-arith.ll | 4 +-
.../CodeGen/AArch64/sve-split-extract-elt.ll | 14 +-
.../CodeGen/AArch64/sve-split-insert-elt.ll | 10 +-
.../CodeGen/AArch64/sve-stack-frame-layout.ll | 26 +-
llvm/test/CodeGen/AArch64/sve-trunc.ll | 2 +-
.../CodeGen/AArch64/sve-vector-compress.ll | 2 +-
.../AArch64/sve2p1-intrinsics-loads.ll | 16 +-
llvm/test/CodeGen/AArch64/unwind-preserved.ll | 72 ++--
40 files changed, 1018 insertions(+), 924 deletions(-)
diff --git a/llvm/include/llvm/Support/LEB128.h b/llvm/include/llvm/Support/LEB128.h
index ce789cc49f295..6102c1dc1b952 100644
--- a/llvm/include/llvm/Support/LEB128.h
+++ b/llvm/include/llvm/Support/LEB128.h
@@ -221,6 +221,23 @@ inline uint64_t decodeULEB128AndIncUnsafe(const uint8_t *&p) {
return decodeULEB128AndInc(p, nullptr);
}
+enum class LEB128Sign { Unsigned, Signed };
+
+template <LEB128Sign Sign, typename T, typename U = char,
+ unsigned MaxLEB128SizeBytes = 16>
+inline void appendLEB128(SmallVectorImpl<U> &Buffer, T Value) {
+ static_assert(sizeof(U) == 1, "Expected buffer of bytes");
+ unsigned LEB128ValueSize;
+ U TmpBuffer[MaxLEB128SizeBytes];
+ if constexpr (Sign == LEB128Sign::Signed)
+ LEB128ValueSize =
+ encodeSLEB128(Value, reinterpret_cast<uint8_t *>(TmpBuffer));
+ else
+ LEB128ValueSize =
+ encodeULEB128(Value, reinterpret_cast<uint8_t *>(TmpBuffer));
+ Buffer.append(TmpBuffer, TmpBuffer + LEB128ValueSize);
+}
+
/// Utility function to get the size of the ULEB128-encoded value.
LLVM_ABI extern unsigned getULEB128Size(uint64_t Value);
diff --git a/llvm/lib/Target/AArch64/AArch64InstrInfo.cpp b/llvm/lib/Target/AArch64/AArch64InstrInfo.cpp
index 59d4fd26f6f91..98ebd512b0b75 100644
--- a/llvm/lib/Target/AArch64/AArch64InstrInfo.cpp
+++ b/llvm/lib/Target/AArch64/AArch64InstrInfo.cpp
@@ -5861,33 +5861,41 @@ void AArch64InstrInfo::decomposeStackOffsetForFrameOffsets(
}
}
-// Convenience function to create a DWARF expression for
-// Expr + NumBytes + NumVGScaledBytes * AArch64::VG
-static void appendVGScaledOffsetExpr(SmallVectorImpl<char> &Expr, int NumBytes,
- int NumVGScaledBytes, unsigned VG,
- llvm::raw_string_ostream &Comment) {
- uint8_t buffer[16];
-
- if (NumBytes) {
+// Convenience function to create a DWARF expression for: Constant `Operation`.
+// This helper emits compact sequences for common cases. For example, for `-15
+// DW_OP_plus`, this helper would create DW_OP_lit15 DW_OP_minus.
+static void appendConstantExpr(SmallVectorImpl<char> &Expr, int64_t Constant,
+ dwarf::LocationAtom Operation) {
+ if (Operation == dwarf::DW_OP_plus && Constant < 0 && -Constant <= 31) {
+ // -Constant (1 to 31)
+ Expr.push_back(dwarf::DW_OP_lit0 - Constant);
+ Operation = dwarf::DW_OP_minus;
+ } else if (Constant >= 0 && Constant <= 31) {
+ // Literal value 0 to 31
+ Expr.push_back(dwarf::DW_OP_lit0 + Constant);
+ } else {
+ // Signed constant
Expr.push_back(dwarf::DW_OP_consts);
- Expr.append(buffer, buffer + encodeSLEB128(NumBytes, buffer));
- Expr.push_back((uint8_t)dwarf::DW_OP_plus);
- Comment << (NumBytes < 0 ? " - " : " + ") << std::abs(NumBytes);
+ appendLEB128<LEB128Sign::Signed>(Expr, Constant);
}
+ return Expr.push_back(Operation);
+}
- if (NumVGScaledBytes) {
- Expr.push_back((uint8_t)dwarf::DW_OP_consts);
- Expr.append(buffer, buffer + encodeSLEB128(NumVGScaledBytes, buffer));
-
- Expr.push_back((uint8_t)dwarf::DW_OP_bregx);
- Expr.append(buffer, buffer + encodeULEB128(VG, buffer));
- Expr.push_back(0);
-
- Expr.push_back((uint8_t)dwarf::DW_OP_mul);
- Expr.push_back((uint8_t)dwarf::DW_OP_plus);
+// Convenience function to create a DWARF expression for a register.
+static void appendReadRegExpr(SmallVectorImpl<char> &Expr, unsigned RegNum) {
+ Expr.push_back(dwarf::DW_OP_bregx);
+ appendLEB128<LEB128Sign::Unsigned>(Expr, RegNum);
+ Expr.push_back(0);
+}
- Comment << (NumVGScaledBytes < 0 ? " - " : " + ")
- << std::abs(NumVGScaledBytes) << " * VG";
+// Convenience function to create a comment for
+// (+/-) NumBytes (* RegScale)?
+static void appendOffsetComment(int NumBytes, llvm::raw_string_ostream &Comment,
+ StringRef RegScale = {}) {
+ if (NumBytes) {
+ Comment << (NumBytes < 0 ? " - " : " + ") << std::abs(NumBytes);
+ if (!RegScale.empty())
+ Comment << ' ' << RegScale;
}
}
@@ -5909,19 +5917,26 @@ static MCCFIInstruction createDefCFAExpression(const TargetRegisterInfo &TRI,
else
Comment << printReg(Reg, &TRI);
- // Build up the expression (Reg + NumBytes + NumVGScaledBytes * AArch64::VG)
+ // Build up the expression (Reg + NumBytes + VG * NumVGScaledBytes)
SmallString<64> Expr;
unsigned DwarfReg = TRI.getDwarfRegNum(Reg, true);
- Expr.push_back((uint8_t)(dwarf::DW_OP_breg0 + DwarfReg));
- Expr.push_back(0);
- appendVGScaledOffsetExpr(Expr, NumBytes, NumVGScaledBytes,
- TRI.getDwarfRegNum(AArch64::VG, true), Comment);
+ assert(DwarfReg >= 0 && DwarfReg <= 31 && "DwarfReg out of bounds (0..31)");
+ // Reg + NumBytes
+ Expr.push_back(dwarf::DW_OP_breg0 + DwarfReg);
+ appendLEB128<LEB128Sign::Signed>(Expr, NumBytes);
+ appendOffsetComment(NumBytes, Comment);
+ if (NumVGScaledBytes) {
+ // + VG * NumVGScaledBytes
+ appendOffsetComment(NumVGScaledBytes, Comment, "* VG");
+ appendReadRegExpr(Expr, TRI.getDwarfRegNum(AArch64::VG, true));
+ appendConstantExpr(Expr, NumVGScaledBytes, dwarf::DW_OP_mul);
+ Expr.push_back(dwarf::DW_OP_plus);
+ }
// Wrap this into DW_CFA_def_cfa.
SmallString<64> DefCfaExpr;
DefCfaExpr.push_back(dwarf::DW_CFA_def_cfa_expression);
- uint8_t buffer[16];
- DefCfaExpr.append(buffer, buffer + encodeULEB128(Expr.size(), buffer));
+ appendLEB128<LEB128Sign::Unsigned>(DefCfaExpr, Expr.size());
DefCfaExpr.append(Expr.str());
return MCCFIInstruction::createEscape(nullptr, DefCfaExpr.str(), SMLoc(),
Comment.str());
@@ -5958,17 +5973,25 @@ MCCFIInstruction llvm::createCFAOffset(const TargetRegisterInfo &TRI,
llvm::raw_string_ostream Comment(CommentBuffer);
Comment << printReg(Reg, &TRI) << " @ cfa";
- // Build up expression (NumBytes + NumVGScaledBytes * AArch64::VG)
+ // Build up expression (CFA + VG * NumVGScaledBytes + NumBytes)
+ assert(NumVGScaledBytes && "Expected scalable offset");
SmallString<64> OffsetExpr;
- appendVGScaledOffsetExpr(OffsetExpr, NumBytes, NumVGScaledBytes,
- TRI.getDwarfRegNum(AArch64::VG, true), Comment);
+ // + VG * NumVGScaledBytes
+ appendOffsetComment(NumVGScaledBytes, Comment, "* VG");
+ appendReadRegExpr(OffsetExpr, TRI.getDwarfRegNum(AArch64::VG, true));
+ appendConstantExpr(OffsetExpr, NumVGScaledBytes, dwarf::DW_OP_mul);
+ OffsetExpr.push_back(dwarf::DW_OP_plus);
+ if (NumBytes) {
+ // + NumBytes
+ appendOffsetComment(NumBytes, Comment);
+ appendConstantExpr(OffsetExpr, NumBytes, dwarf::DW_OP_plus);
+ }
// Wrap this into DW_CFA_expression
SmallString<64> CfaExpr;
CfaExpr.push_back(dwarf::DW_CFA_expression);
- uint8_t buffer[16];
- CfaExpr.append(buffer, buffer + encodeULEB128(DwarfReg, buffer));
- CfaExpr.append(buffer, buffer + encodeULEB128(OffsetExpr.size(), buffer));
+ appendLEB128<LEB128Sign::Unsigned>(CfaExpr, DwarfReg);
+ appendLEB128<LEB128Sign::Unsigned>(CfaExpr, OffsetExpr.size());
CfaExpr.append(OffsetExpr.str());
return MCCFIInstruction::createEscape(nullptr, CfaExpr.str(), SMLoc(),
diff --git a/llvm/test/CodeGen/AArch64/alloca-load-store-scalable-array.ll b/llvm/test/CodeGen/AArch64/alloca-load-store-scalable-array.ll
index 3a808f5a02f0d..dd018a659d1b8 100644
--- a/llvm/test/CodeGen/AArch64/alloca-load-store-scalable-array.ll
+++ b/llvm/test/CodeGen/AArch64/alloca-load-store-scalable-array.ll
@@ -11,7 +11,7 @@ define void @array_1D(ptr %addr) #0 {
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-3
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: ldr z0, [x0]
; CHECK-NEXT: ldr z1, [x0, #2, mul vl]
@@ -34,7 +34,7 @@ define %my_subtype @array_1D_extract(ptr %addr) #0 {
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-3
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: ldr z0, [x0, #1, mul vl]
; CHECK-NEXT: addvl sp, sp, #3
@@ -52,7 +52,7 @@ define void @array_1D_insert(ptr %addr, %my_subtype %elt) #0 {
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-3
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: ldr z1, [x0, #2, mul vl]
; CHECK-NEXT: ldr z2, [x0]
@@ -75,7 +75,7 @@ define void @array_2D(ptr %addr) #0 {
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-6
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x30, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 48 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x30, 0x1e, 0x22 // sp + 16 + 48 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: ldr z0, [x0]
; CHECK-NEXT: ldr z1, [x0, #5, mul vl]
diff --git a/llvm/test/CodeGen/AArch64/alloca-load-store-scalable-struct.ll b/llvm/test/CodeGen/AArch64/alloca-load-store-scalable-struct.ll
index e7d8f4ff39cee..be73dc91aac51 100644
--- a/llvm/test/CodeGen/AArch64/alloca-load-store-scalable-struct.ll
+++ b/llvm/test/CodeGen/AArch64/alloca-load-store-scalable-struct.ll
@@ -10,7 +10,7 @@ define void @test(ptr %addr) #0 {
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-3
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: ldr z0, [x0]
; CHECK-NEXT: ldr z1, [x0, #2, mul vl]
diff --git a/llvm/test/CodeGen/AArch64/fp8-sme2-cvtn.ll b/llvm/test/CodeGen/AArch64/fp8-sme2-cvtn.ll
index d1e0729db30e5..6a91d85a71baf 100644
--- a/llvm/test/CodeGen/AArch64/fp8-sme2-cvtn.ll
+++ b/llvm/test/CodeGen/AArch64/fp8-sme2-cvtn.ll
@@ -11,10 +11,10 @@ define { <vscale x 16 x i8>, <vscale x 16 x i8> } @cvtn_f16_tuple(i64 %stride, p
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z10, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: add x8, x1, x0
; CHECK-NEXT: ld1h { z2.h, z10.h }, pn8/z, [x1]
@@ -52,10 +52,10 @@ define { <vscale x 16 x i8>, <vscale x 16 x i8> } @cvtnt_f32_tuple(i64 %stride,
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z10, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: add x8, x1, x0
; CHECK-NEXT: mov z1.d, z0.d
diff --git a/llvm/test/CodeGen/AArch64/framelayout-sve-calleesaves-fix.mir b/llvm/test/CodeGen/AArch64/framelayout-sve-calleesaves-fix.mir
index aed3145073619..e970d8339d792 100644
--- a/llvm/test/CodeGen/AArch64/framelayout-sve-calleesaves-fix.mir
+++ b/llvm/test/CodeGen/AArch64/framelayout-sve-calleesaves-fix.mir
@@ -9,16 +9,16 @@
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-2
- ; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+ ; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #1, mul vl] // 16-byte Folded Spill
- ; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
+ ; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
; CHECK-NEXT: addvl sp, sp, #-1
- ; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+ ; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: // implicit-def: $z8
; CHECK-NEXT: // implicit-def: $p4
; CHECK-NEXT: addvl sp, sp, #1
- ; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+ ; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: ldr z8, [sp, #1, mul vl] // 16-byte Folded Reload
; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
; CHECK-NEXT: addvl sp, sp, #2
diff --git a/llvm/test/CodeGen/AArch64/framelayout-sve.mir b/llvm/test/CodeGen/AArch64/framelayout-sve.mir
index 17b1ad2197c46..03a6aabffaaf1 100644
--- a/llvm/test/CodeGen/AArch64/framelayout-sve.mir
+++ b/llvm/test/CodeGen/AArch64/framelayout-sve.mir
@@ -64,7 +64,7 @@
# CHECK-NEXT: $sp = frame-setup SUBXri $sp, 16, 0
# CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 32
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -2
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 2
# CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa $wsp, 32
@@ -79,7 +79,8 @@
# ASM: .cfi_def_cfa_offset 16
# ASM-NEXT: .cfi_offset w29, -16
# ASM: .cfi_def_cfa_offset 32
-# ASM: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 32 + 16 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 32 + 16 * VG
# ASM: .cfi_def_cfa wsp, 32
# ASM: .cfi_def_cfa_offset 16
# ASM: .cfi_def_cfa_offset 0
@@ -88,8 +89,8 @@
#
# UNWINDINFO: DW_CFA_def_cfa_offset: +16
# UNWINDINFO-NEXT: DW_CFA_offset: reg29 -16
-# UNWINDINFO: DW_CFA_def_cfa_offset: +32
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +32, DW_OP_plus, DW_OP_consts +16, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_offset: +32
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +32, DW_OP_bregx 0x2e +0, DW_OP_lit16, DW_OP_mul, DW_OP_plus
# UNWINDINFO: DW_CFA_def_cfa: reg31 +32
# UNWINDINFO: DW_CFA_def_cfa_offset: +16
# UNWINDINFO: DW_CFA_def_cfa_offset: +0
@@ -129,7 +130,7 @@ body: |
# CHECK-NEXT: $sp = frame-setup SUBXri $sp, 16, 0
# CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 48
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -2
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
#
# CHECK-NEXT: $x20 = IMPLICIT_DEF
@@ -152,7 +153,8 @@ body: |
# ASM-NEXT: .cfi_offset w21, -16
# ASM-NEXT: .cfi_offset w29, -32
# ASM: .cfi_def_cfa_offset 48
-# ASM: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 48 + 16 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 48 + 16 * VG
#
# ASM: .cfi_def_cfa wsp, 48
# ASM: .cfi_def_cfa_offset 32
@@ -166,9 +168,8 @@ body: |
# UNWINDINFO: DW_CFA_offset: reg20 -8
# UNWINDINFO-NEXT: DW_CFA_offset: reg21 -16
# UNWINDINFO-NEXT: DW_CFA_offset: reg29 -32
-# UNWINDINFO: DW_CFA_def_cfa_offset: +48
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +48, DW_OP_plus, DW_OP_consts +16, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-#
+# UNWINDINFO: DW_CFA_def_cfa_offset: +48
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +48, DW_OP_bregx 0x2e +0, DW_OP_lit16, DW_OP_mul, DW_OP_plus
# UNWINDINFO: DW_CFA_def_cfa: reg31 +48
# UNWINDINFO: DW_CFA_def_cfa_offset: +32
# UNWINDINFO: DW_CFA_def_cfa_offset: +0
@@ -272,7 +273,7 @@ body: |
# CHECK-NEXT: $sp = frame-setup SUBXri $sp, 16, 0
# CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 32
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -3
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $[[TMP:x[0-9]+]] = ADDXri $sp, 16
# CHECK-NEXT: STR_ZXI $z0, killed $[[TMP]], 2
@@ -295,7 +296,8 @@ body: |
# ASM: .cfi_def_cfa_offset 16
# ASM-NEXT: .cfi_offset w29, -16
# ASM: .cfi_def_cfa_offset 32
-# ASM: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 32 + 24 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 32 + 24 * VG
#
# ASM: .cfi_def_cfa wsp, 32
# ASM: .cfi_def_cfa_offset 16
@@ -305,7 +307,7 @@ body: |
# UNWINDINFO: DW_CFA_def_cfa_offset: +16
# UNWINDINFO-NEXT: DW_CFA_offset: reg29 -16
# UNWINDINFO: DW_CFA_def_cfa_offset: +32
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +32, DW_OP_plus, DW_OP_consts +24, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +32, DW_OP_bregx 0x2e +0, DW_OP_lit24, DW_OP_mul, DW_OP_plus
#
# UNWINDINFO: DW_CFA_def_cfa: reg31 +32
# UNWINDINFO: DW_CFA_def_cfa_offset: +16
@@ -434,7 +436,7 @@ body: |
# CHECK-NEXT: $sp = frame-setup SUBXri $sp, 16, 0
# CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 32
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -1
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK: $[[TMP:x[0-9]+]] = ADDVL_XXI $sp, 1
# CHECK-NEXT: $x0 = LDRXui killed $[[TMP]], 4
@@ -451,7 +453,8 @@ body: |
# ASM: .cfi_def_cfa_offset 16
# ASM-NEXT: .cfi_offset w29, -16
# ASM: .cfi_def_cfa_offset 32
-# ASM: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 32 + 8 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 32 + 8 * VG
#
# ASM: .cfi_def_cfa wsp, 32
# ASM: .cfi_def_cfa_offset 16
@@ -461,7 +464,7 @@ body: |
# UNWINDINFO: DW_CFA_def_cfa_offset: +16
# UNWINDINFO-NEXT: DW_CFA_offset: reg29 -16
# UNWINDINFO: DW_CFA_def_cfa_offset: +32
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +32, DW_OP_plus, DW_OP_consts +8, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +32, DW_OP_bregx 0x2e +0, DW_OP_lit8, DW_OP_mul, DW_OP_plus
#
# UNWINDINFO: DW_CFA_def_cfa: reg31 +32
# UNWINDINFO: DW_CFA_def_cfa_offset: +16
@@ -504,23 +507,23 @@ body: |
# CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 16
# CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $w29, -16
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -32
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x02, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -32
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x04, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -32
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x06, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -32
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -32
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x0a, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -32
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x0c, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -32
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x0e, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -32
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -1
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x88, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $[[TMP2:x[0-9]+]] = ADDVL_XXI $sp, 1
# CHECK-NEXT: STR_ZXI $z0, killed $[[TMP2]], 255
@@ -529,21 +532,21 @@ body: |
# CHECK-NEXT: STR_PXI $p0, killed $[[TMP2]], 255
# CHECK: $sp = frame-destroy ADDVL_XXI $sp, 31
-# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x0e, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 31
-# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x98, 0x0c, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 31
-# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xa0, 0x0a, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 31
-# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xa8, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 31
-# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xb0, 0x06, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 31
-# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xb8, 0x04, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 31
-# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc0, 0x02, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 31
-# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc8, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 9
# CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa $wsp, 16
# CHECK-NEXT: $sp, $[[SCRATCH]] = frame-destroy LDRXpost $sp, 16
@@ -554,48 +557,65 @@ body: |
# ASM-LABEL: test_address_sve_out_of_range:
# ASM: .cfi_def_cfa_offset 16
# ASM-NEXT: .cfi_offset w29, -16
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x02, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 256 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x04, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 512 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x06, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 768 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1024 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x0a, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1280 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x0c, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1536 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x0e, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1792 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 2048 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x88, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 2056 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 256 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 512 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 768 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 1024 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 1280 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 1536 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 1792 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 2048 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 2056 * VG
#
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x0e, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1808 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x98, 0x0c, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1560 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xa0, 0x0a, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1312 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xa8, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1064 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xb0, 0x06, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 816 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xb8, 0x04, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 568 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc0, 0x02, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 320 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc8, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 72 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 1808 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 1560 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 1312 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 1064 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 816 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 568 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 320 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 72 * VG
# ASM: .cfi_def_cfa wsp, 16
# ASM: .cfi_def_cfa_offset 0
# ASM-NEXT: .cfi_restore w29
# UNWINDINFO: DW_CFA_def_cfa_offset: +16
# UNWINDINFO-NEXT: DW_CFA_offset: reg29 -16
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +256, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +512, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +768, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +1024, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +1280, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +1536, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +1792, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +2048, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +2056, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +256, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +512, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +768, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +1024, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +1280, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +1536, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +1792, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +2048, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +2056, DW_OP_mul, DW_OP_plus
#
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +1808, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +1560, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +1312, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +1064, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +816, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +568, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +320, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +72, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +1808, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +1560, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +1312, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +1064, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +816, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +568, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +320, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +72, DW_OP_mul, DW_OP_plus
# UNWINDINFO: DW_CFA_def_cfa: reg31 +16
# UNWINDINFO: DW_CFA_def_cfa_offset: +0
# UNWINDINFO-NEXT: DW_CFA_restore: reg29
@@ -702,15 +722,15 @@ body: |
# CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 16
# CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $w29, -16
# CHECK: $sp = frame-setup ADDVL_XXI $sp, -1
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK: frame-setup STR_PXI killed $p6, $sp, 5
# CHECK: frame-setup STR_PXI killed $p5, $sp, 6
# CHECK: frame-setup STR_PXI killed $p4, $sp, 7
# CHECK: $sp = frame-setup SUBXri $sp, 32, 0
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x30, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK: $sp = frame-destroy ADDXri $sp, 32, 0
-# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape
# CHECK: $p6 = frame-destroy LDR_PXI $sp, 5
# CHECK: $p5 = frame-destroy LDR_PXI $sp, 6
# CHECK: $p4 = frame-destroy LDR_PXI $sp, 7
@@ -725,20 +745,23 @@ body: |
# ASM-LABEL: save_restore_pregs_sve:
# ASM: .cfi_def_cfa_offset 16
# ASM-NEXT: .cfi_offset w29, -16
-# ASM: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
-# ASM: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x30, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 48 + 8 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 8 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 48 + 8 * VG
#
-# ASM: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 8 * VG
# ASM: .cfi_def_cfa wsp, 16
# ASM: .cfi_def_cfa_offset 0
# ASM-NEXT: .cfi_restore w29
# UNWINDINFO: DW_CFA_def_cfa_offset: +16
# UNWINDINFO: DW_CFA_offset: reg29 -16
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +8, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +48, DW_OP_plus, DW_OP_consts +8, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_lit8, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +48, DW_OP_bregx 0x2e +0, DW_OP_lit8, DW_OP_mul, DW_OP_plus
#
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +8, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_lit8, DW_OP_mul, DW_OP_plus
# UNWINDINFO: DW_CFA_def_cfa: reg31 +16
# UNWINDINFO: DW_CFA_def_cfa_offset: +0
# UNWINDINFO-NEXT: DW_CFA_restore: reg29
@@ -761,18 +784,18 @@ body: |
# CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 16
# CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $w29, -16
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -3
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: frame-setup STR_ZXI killed $z10, $sp, 0
# CHECK-NEXT: frame-setup STR_ZXI killed $z9, $sp, 1
# CHECK-NEXT: frame-setup STR_ZXI killed $z8, $sp, 2
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-setup SUBXri $sp, 32, 0
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x30, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK: $sp = frame-destroy ADDXri $sp, 32, 0
-# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape
# CHECK-NEXT: $z10 = frame-destroy LDR_ZXI $sp, 0
# CHECK-NEXT: $z9 = frame-destroy LDR_ZXI $sp, 1
# CHECK-NEXT: $z8 = frame-destroy LDR_ZXI $sp, 2
@@ -789,13 +812,19 @@ body: |
# ASM-LABEL: save_restore_zregs_sve:
# ASM: .cfi_def_cfa_offset 16
# ASM-NEXT: .cfi_offset w29, -16
-# ASM: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
-# ASM: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-# ASM: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x30, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 48 + 24 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 24 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // $d8 @ cfa - 8 * VG - 16
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d9 @ cfa - 16 * VG - 16
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d10 @ cfa - 24 * VG - 16
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 48 + 24 * VG
#
-# ASM: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 24 * VG
# ASM: .cfi_def_cfa wsp, 16
# ASM-NEXT: .cfi_restore z8
# ASM-NEXT: .cfi_restore z9
@@ -805,13 +834,13 @@ body: |
# UNWINDINFO: DW_CFA_def_cfa_offset: +16
# UNWINDINFO-NEXT: DW_CFA_offset: reg29 -16
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +24, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_expression: reg72 DW_OP_consts -16, DW_OP_plus, DW_OP_consts -8, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg73 DW_OP_consts -16, DW_OP_plus, DW_OP_consts -16, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg74 DW_OP_consts -16, DW_OP_plus, DW_OP_consts -24, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +48, DW_OP_plus, DW_OP_consts +24, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_lit24, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_expression: reg72 DW_OP_bregx 0x2e +0, DW_OP_consts -8, DW_OP_mul, DW_OP_plus, DW_OP_lit16, DW_OP_minus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg73 DW_OP_bregx 0x2e +0, DW_OP_consts -16, DW_OP_mul, DW_OP_plus, DW_OP_lit16, DW_OP_minus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg74 DW_OP_bregx 0x2e +0, DW_OP_consts -24, DW_OP_mul, DW_OP_plus, DW_OP_lit16, DW_OP_minus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +48, DW_OP_bregx 0x2e +0, DW_OP_lit24, DW_OP_mul, DW_OP_plus
#
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +24, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_lit24, DW_OP_mul, DW_OP_plus
# UNWINDINFO: DW_CFA_def_cfa: reg31 +16
# UNWINDINFO-NEXT: DW_CFA_restore_extended: reg104
# UNWINDINFO-NEXT: DW_CFA_restore_extended: reg105
@@ -848,7 +877,7 @@ body: |
# CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $w29, -32
# CHECK: $sp = frame-setup ADDVL_XXI $sp, -18
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK: frame-setup STR_PXI killed $p15, $sp, 4
# CHECK: frame-setup STR_PXI killed $p14, $sp, 5
# CHECK: frame-setup STR_PXI killed $p5, $sp, 14
@@ -857,23 +886,23 @@ body: |
# CHECK: frame-setup STR_ZXI killed $z22, $sp, 3
# CHECK: frame-setup STR_ZXI killed $z9, $sp, 16
# CHECK: frame-setup STR_ZXI killed $z8, $sp, 17
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x48, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x49, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x4a, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x4b, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x4c, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x4d, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x4e, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x4f, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK: $sp = frame-setup SUBXri $sp, 32, 0
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK: $sp = frame-setup ADDVL_XXI $sp, -1
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x98, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK: $sp = frame-destroy ADDXri $sp, 32, 0
-# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x98, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape
# CHECK: $sp = frame-destroy ADDVL_XXI $sp, 1
-# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape
# CHECK: $z23 = frame-destroy LDR_ZXI $sp, 2
# CHECK: $z22 = frame-destroy LDR_ZXI $sp, 3
# CHECK: $z9 = frame-destroy LDR_ZXI $sp, 16
@@ -909,20 +938,33 @@ body: |
# ASM-NEXT: .cfi_offset w20, -16
# ASM-NEXT: .cfi_offset w21, -24
# ASM-NEXT: .cfi_offset w29, -32
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 32 + 144 * VG
-# ASM: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 32 - 8 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 32 - 16 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 32 - 24 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 32 - 32 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 32 - 40 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 32 - 48 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 32 - 56 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 32 - 64 * VG
-# ASM: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 64 + 144 * VG
-# ASM: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x98, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 64 + 152 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 32 + 144 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // $d8 @ cfa - 8 * VG - 32
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d9 @ cfa - 16 * VG - 32
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d10 @ cfa - 24 * VG - 32
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d11 @ cfa - 32 * VG - 32
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d12 @ cfa - 40 * VG - 32
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d13 @ cfa - 48 * VG - 32
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d14 @ cfa - 56 * VG - 32
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d15 @ cfa - 64 * VG - 32
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 64 + 144 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 64 + 152 * VG
#
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x98, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 32 + 152 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 32 + 144 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 32 + 152 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 32 + 144 * VG
# ASM: .cfi_def_cfa wsp, 32
# ASM-NEXT: .cfi_restore z8
# ASM-NEXT: .cfi_restore z9
@@ -943,20 +985,20 @@ body: |
# UNWINDINFO-NEXT: DW_CFA_offset: reg20 -16
# UNWINDINFO-NEXT: DW_CFA_offset: reg21 -24
# UNWINDINFO-NEXT: DW_CFA_offset: reg29 -32
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +32, DW_OP_plus, DW_OP_consts +144, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_expression: reg72 DW_OP_consts -32, DW_OP_plus, DW_OP_consts -8, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg73 DW_OP_consts -32, DW_OP_plus, DW_OP_consts -16, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg74 DW_OP_consts -32, DW_OP_plus, DW_OP_consts -24, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg75 DW_OP_consts -32, DW_OP_plus, DW_OP_consts -32, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg76 DW_OP_consts -32, DW_OP_plus, DW_OP_consts -40, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg77 DW_OP_consts -32, DW_OP_plus, DW_OP_consts -48, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg78 DW_OP_consts -32, DW_OP_plus, DW_OP_consts -56, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg79 DW_OP_consts -32, DW_OP_plus, DW_OP_consts -64, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +64, DW_OP_plus, DW_OP_consts +144, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +64, DW_OP_plus, DW_OP_consts +152, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +32, DW_OP_bregx 0x2e +0, DW_OP_consts +144, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_expression: reg72 DW_OP_bregx 0x2e +0, DW_OP_consts -8, DW_OP_mul, DW_OP_plus, DW_OP_consts -32, DW_OP_plus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg73 DW_OP_bregx 0x2e +0, DW_OP_consts -16, DW_OP_mul, DW_OP_plus, DW_OP_consts -32, DW_OP_plus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg74 DW_OP_bregx 0x2e +0, DW_OP_consts -24, DW_OP_mul, DW_OP_plus, DW_OP_consts -32, DW_OP_plus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg75 DW_OP_bregx 0x2e +0, DW_OP_consts -32, DW_OP_mul, DW_OP_plus, DW_OP_consts -32, DW_OP_plus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg76 DW_OP_bregx 0x2e +0, DW_OP_consts -40, DW_OP_mul, DW_OP_plus, DW_OP_consts -32, DW_OP_plus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg77 DW_OP_bregx 0x2e +0, DW_OP_consts -48, DW_OP_mul, DW_OP_plus, DW_OP_consts -32, DW_OP_plus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg78 DW_OP_bregx 0x2e +0, DW_OP_consts -56, DW_OP_mul, DW_OP_plus, DW_OP_consts -32, DW_OP_plus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg79 DW_OP_bregx 0x2e +0, DW_OP_consts -64, DW_OP_mul, DW_OP_plus, DW_OP_consts -32, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +64, DW_OP_bregx 0x2e +0, DW_OP_consts +144, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +64, DW_OP_bregx 0x2e +0, DW_OP_consts +152, DW_OP_mul, DW_OP_plus
#
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +32, DW_OP_plus, DW_OP_consts +152, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +32, DW_OP_plus, DW_OP_consts +144, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +32, DW_OP_bregx 0x2e +0, DW_OP_consts +152, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +32, DW_OP_bregx 0x2e +0, DW_OP_consts +144, DW_OP_mul, DW_OP_plus
# UNWINDINFO: DW_CFA_def_cfa: reg31 +32
# UNWINDINFO-NEXT: DW_CFA_restore_extended: reg104
# UNWINDINFO-NEXT: DW_CFA_restore_extended: reg105
@@ -1025,14 +1067,14 @@ body: |
# CHECK-NEXT: STR_ZXI killed $z22, $sp, 3
# CHECK: STR_ZXI killed $z9, $sp, 16
# CHECK-NEXT: STR_ZXI killed $z8, $sp, 17
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $[[TMP:x[0-9]+]] = frame-setup SUBXri $sp, 16, 0
# CHECK-NEXT: $[[TMP]] = frame-setup ADDVL_XXI $[[TMP]], -1
# CHECK-NEXT: $sp = frame-setup ANDXri killed $[[TMP]]
@@ -1067,14 +1109,22 @@ body: |
# ASM: .cfi_def_cfa w29, 16
# ASM-NEXT: .cfi_offset w30, -8
# ASM-NEXT: .cfi_offset w29, -16
-# ASM: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-# ASM-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // $d8 @ cfa - 8 * VG - 16
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d9 @ cfa - 16 * VG - 16
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d10 @ cfa - 24 * VG - 16
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d11 @ cfa - 32 * VG - 16
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d12 @ cfa - 40 * VG - 16
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d13 @ cfa - 48 * VG - 16
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d14 @ cfa - 56 * VG - 16
+# ASM-NEXT: .cfi_escape
+# ASM-SAME: // $d15 @ cfa - 64 * VG - 16
#
# ASM: .cfi_restore z8
# ASM-NEXT: .cfi_restore z9
@@ -1093,14 +1143,14 @@ body: |
# UNWINDINFO: DW_CFA_def_cfa: reg29 +16
# UNWINDINFO-NEXT: DW_CFA_offset: reg30 -8
# UNWINDINFO-NEXT: DW_CFA_offset: reg29 -16
-# UNWINDINFO: DW_CFA_expression: reg72 DW_OP_consts -16, DW_OP_plus, DW_OP_consts -8, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg73 DW_OP_consts -16, DW_OP_plus, DW_OP_consts -16, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg74 DW_OP_consts -16, DW_OP_plus, DW_OP_consts -24, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg75 DW_OP_consts -16, DW_OP_plus, DW_OP_consts -32, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg76 DW_OP_consts -16, DW_OP_plus, DW_OP_consts -40, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg77 DW_OP_consts -16, DW_OP_plus, DW_OP_consts -48, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg78 DW_OP_consts -16, DW_OP_plus, DW_OP_consts -56, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO-NEXT: DW_CFA_expression: reg79 DW_OP_consts -16, DW_OP_plus, DW_OP_consts -64, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_expression: reg72 DW_OP_bregx 0x2e +0, DW_OP_consts -8, DW_OP_mul, DW_OP_plus, DW_OP_lit16, DW_OP_minus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg73 DW_OP_bregx 0x2e +0, DW_OP_consts -16, DW_OP_mul, DW_OP_plus, DW_OP_lit16, DW_OP_minus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg74 DW_OP_bregx 0x2e +0, DW_OP_consts -24, DW_OP_mul, DW_OP_plus, DW_OP_lit16, DW_OP_minus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg75 DW_OP_bregx 0x2e +0, DW_OP_consts -32, DW_OP_mul, DW_OP_plus, DW_OP_lit16, DW_OP_minus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg76 DW_OP_bregx 0x2e +0, DW_OP_consts -40, DW_OP_mul, DW_OP_plus, DW_OP_lit16, DW_OP_minus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg77 DW_OP_bregx 0x2e +0, DW_OP_consts -48, DW_OP_mul, DW_OP_plus, DW_OP_lit16, DW_OP_minus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg78 DW_OP_bregx 0x2e +0, DW_OP_consts -56, DW_OP_mul, DW_OP_plus, DW_OP_lit16, DW_OP_minus
+# UNWINDINFO-NEXT: DW_CFA_expression: reg79 DW_OP_bregx 0x2e +0, DW_OP_consts -64, DW_OP_mul, DW_OP_plus, DW_OP_lit16, DW_OP_minus
#
# UNWINDINFO: DW_CFA_restore_extended: reg104
# UNWINDINFO-NEXT: DW_CFA_restore_extended: reg105
@@ -1188,17 +1238,17 @@ body: |
# CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 16
# CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $w29, -16
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -3
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: STR_PXI killed $p15, $sp, 6
# CHECK-NEXT: STR_PXI killed $p4, $sp, 7
# CHECK-NEXT: STR_ZXI killed $z23, $sp, 1
# CHECK-NEXT: STR_ZXI killed $z8, $sp, 2
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -7
-# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xd0, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-setup CFI_INSTRUCTION escape
# CHECK: $sp = frame-destroy ADDVL_XXI $sp, 7
-# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22
+# CHECK-NEXT: frame-destroy CFI_INSTRUCTION escape
# CHECK-NEXT: $z23 = frame-destroy LDR_ZXI $sp, 1
# CHECK-NEXT: $z8 = frame-destroy LDR_ZXI $sp, 2
# CHECK-NEXT: $p15 = frame-destroy LDR_PXI $sp, 6
@@ -1214,11 +1264,15 @@ body: |
# ASM-LABEL: frame_layout:
# ASM: .cfi_def_cfa_offset 16
# ASM-NEXT: .cfi_offset w29, -16
-# ASM: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
-# ASM: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-# ASM: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xd0, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 80 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 24 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // $d8 @ cfa - 8 * VG - 16
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 80 * VG
#
-# ASM: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+# ASM: .cfi_escape
+# ASM-SAME: // sp + 16 + 24 * VG
# ASM: .cfi_def_cfa wsp, 16
# ASM-NEXT: .cfi_restore z8
# ASM: .cfi_def_cfa_offset 0
@@ -1226,11 +1280,11 @@ body: |
# UNWINDINFO: DW_CFA_def_cfa_offset: +16
# UNWINDINFO-NEXT: DW_CFA_offset: reg29 -16
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +24, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_expression: reg72 DW_OP_consts -16, DW_OP_plus, DW_OP_consts -8, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +80, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_lit24, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_expression: reg72 DW_OP_bregx 0x2e +0, DW_OP_consts -8, DW_OP_mul, DW_OP_plus, DW_OP_lit16, DW_OP_minus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_consts +80, DW_OP_mul, DW_OP_plus
#
-# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +0, DW_OP_consts +16, DW_OP_plus, DW_OP_consts +24, DW_OP_bregx 0x2e +0, DW_OP_mul, DW_OP_plus
+# UNWINDINFO: DW_CFA_def_cfa_expression: DW_OP_breg31 +16, DW_OP_bregx 0x2e +0, DW_OP_lit24, DW_OP_mul, DW_OP_plus
# UNWINDINFO: DW_CFA_def_cfa: reg31 +16
# UNWINDINFO-NEXT: DW_CFA_restore_extended: reg104
# UNWINDINFO: DW_CFA_def_cfa_offset: +0
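
For reference, the shortened escape payloads above can be cross-checked by decoding them as DWARF expressions. What follows is a reader-side sketch, not part of the patch: a minimal Python decoder, assuming DWARF v5 opcode numbering (0x8f = DW_OP_breg31, 0x92 = DW_OP_bregx, 0x30..0x4f = DW_OP_lit0..31, 0x11 = DW_OP_consts, 0x1c/0x1e/0x22 = minus/mul/plus) and that ULEB register 0x2e is AArch64's VG (vector granule) pseudo-register. It handles both the old and the new byte strings.

def sleb128(data, i):
    # Signed LEB128 decode; returns (value, next index).
    value, shift = 0, 0
    while True:
        byte = data[i]; i += 1
        value |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):
            if byte & 0x40:          # sign-extend the final group
                value -= 1 << shift
            return value, i

def uleb128(data, i):
    # Unsigned LEB128 decode; returns (value, next index).
    value, shift = 0, 0
    while True:
        byte = data[i]; i += 1
        value |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):
            return value, i

def decode(expr):
    # Render a DWARF expression as RPN tokens (only the subset used above).
    out, i = [], 0
    while i < len(expr):
        op = expr[i]; i += 1
        if op == 0x8F:               # DW_OP_breg31: sp + offset
            off, i = sleb128(expr, i); out.append(f"sp+{off}")
        elif op == 0x92:             # DW_OP_bregx reg, offset
            reg, i = uleb128(expr, i); off, i = sleb128(expr, i)
            out.append("VG" if reg == 0x2E and off == 0 else f"r{reg}+{off}")
        elif op == 0x11:             # DW_OP_consts
            val, i = sleb128(expr, i); out.append(str(val))
        elif 0x30 <= op <= 0x4F:     # DW_OP_lit0 .. DW_OP_lit31
            out.append(str(op - 0x30))
        elif op == 0x1C: out.append("minus")
        elif op == 0x1E: out.append("mul")
        elif op == 0x22: out.append("plus")
        else: out.append(f"op_{op:#x}")
    return " ".join(out)

# New def_cfa_expression above: 0x0f is the CFI opcode, 0x08 the ULEB length,
# the remaining 8 bytes are the expression. Prints "sp+16 VG 8 mul plus",
# i.e. sp + 16 + 8 * VG, matching the emitted comment.
print(decode([0x8F, 0x10, 0x92, 0x2E, 0x00, 0x38, 0x1E, 0x22]))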
diff --git a/llvm/test/CodeGen/AArch64/intrinsic-vector-match-sve2.ll b/llvm/test/CodeGen/AArch64/intrinsic-vector-match-sve2.ll
index 2cf8621ca066d..474a9d1003e8c 100644
--- a/llvm/test/CodeGen/AArch64/intrinsic-vector-match-sve2.ll
+++ b/llvm/test/CodeGen/AArch64/intrinsic-vector-match-sve2.ll
@@ -36,7 +36,7 @@ define <vscale x 16 x i1> @match_nxv16i8_v4i8(<vscale x 16 x i8> %op1, <4 x i8>
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: // kill: def $d1 killed $d1 def $q1
; CHECK-NEXT: umov w8, v1.h[1]
@@ -241,7 +241,7 @@ define <vscale x 16 x i1> @match_nxv16i8_v32i8(<vscale x 16 x i8> %op1, <32 x i8
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: // kill: def $q1 killed $q1 def $z1
; CHECK-NEXT: mov z3.b, z1.b[1]
@@ -463,7 +463,7 @@ define <vscale x 4 x i1> @match_nxv4xi32_v4i32(<vscale x 4 x i32> %op1, <4 x i32
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: // kill: def $q1 killed $q1 def $z1
; CHECK-NEXT: mov z2.s, z1.s[1]
diff --git a/llvm/test/CodeGen/AArch64/luti-with-sme2.ll b/llvm/test/CodeGen/AArch64/luti-with-sme2.ll
index 2d30167e2b124..59e1cba8317bd 100644
--- a/llvm/test/CodeGen/AArch64/luti-with-sme2.ll
+++ b/llvm/test/CodeGen/AArch64/luti-with-sme2.ll
@@ -9,10 +9,10 @@ define { <vscale x 8 x i16>, <vscale x 8 x i16> } @test_luti4_lane_i16_x2_tuple(
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z12, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: add x8, x1, x0
; CHECK-NEXT: ld1h { z3.h, z11.h }, pn8/z, [x1]
@@ -50,10 +50,10 @@ define { <vscale x 8 x half>, <vscale x 8 x half> } @test_luti4_lane_f16_x2_tupl
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z12, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: add x8, x1, x0
; CHECK-NEXT: ld1h { z3.h, z11.h }, pn8/z, [x1]
@@ -91,10 +91,10 @@ define { <vscale x 8 x bfloat>, <vscale x 8 x bfloat> } @test_luti4_lane_bf16_x2
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z12, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: add x8, x1, x0
; CHECK-NEXT: ld1h { z3.h, z11.h }, pn8/z, [x1]
diff --git a/llvm/test/CodeGen/AArch64/perm-tb-with-sme2.ll b/llvm/test/CodeGen/AArch64/perm-tb-with-sme2.ll
index 7b55c69ce9378..1ceb25b89a364 100644
--- a/llvm/test/CodeGen/AArch64/perm-tb-with-sme2.ll
+++ b/llvm/test/CodeGen/AArch64/perm-tb-with-sme2.ll
@@ -13,10 +13,10 @@ define { <vscale x 16 x i8>, <vscale x 16 x i8> } @tbl2_b_tuple(i64 %stride, ptr
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z12, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: ld1b { z3.b, z11.b }, pn8/z, [x1]
; CHECK-NEXT: ld1b { z4.b, z12.b }, pn8/z, [x1, x0]
@@ -53,10 +53,10 @@ define { <vscale x 8 x i16>, <vscale x 8 x i16> } @tbl2_h_tuple(i64 %stride, ptr
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z12, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: add x8, x1, x0
; CHECK-NEXT: ld1h { z3.h, z11.h }, pn8/z, [x1]
@@ -94,10 +94,10 @@ define { <vscale x 4 x i32>, <vscale x 4 x i32> } @tbl2_s_tuple(i64 %stride, ptr
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z12, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: add x8, x1, x0
; CHECK-NEXT: ld1w { z3.s, z11.s }, pn8/z, [x1]
@@ -135,10 +135,10 @@ define { <vscale x 2 x i64>, <vscale x 2 x i64> } @tbl2_d_tuple(i64 %stride, ptr
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z12, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: add x8, x1, x0
; CHECK-NEXT: ld1d { z3.d, z11.d }, pn8/z, [x1]
@@ -176,10 +176,10 @@ define { <vscale x 8 x bfloat>, <vscale x 8 x bfloat> } @tbl2_bf16_tuple(i64 %st
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z12, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: add x8, x1, x0
; CHECK-NEXT: ld1h { z3.h, z11.h }, pn8/z, [x1]
@@ -217,10 +217,10 @@ define { <vscale x 4 x float>, <vscale x 4 x float> } @tbl2_f32_tuple(i64 %strid
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z12, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: add x8, x1, x0
; CHECK-NEXT: ld1w { z3.s, z11.s }, pn8/z, [x1]
@@ -258,10 +258,10 @@ define { <vscale x 2 x double>, <vscale x 2 x double> } @tbl2_f64_tuple(i64 %str
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z12, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: add x8, x1, x0
; CHECK-NEXT: ld1d { z3.d, z11.d }, pn8/z, [x1]
diff --git a/llvm/test/CodeGen/AArch64/sme-vg-to-stack.ll b/llvm/test/CodeGen/AArch64/sme-vg-to-stack.ll
index 0853325e449af..6fcfc5b242c11 100644
--- a/llvm/test/CodeGen/AArch64/sme-vg-to-stack.ll
+++ b/llvm/test/CodeGen/AArch64/sme-vg-to-stack.ll
@@ -328,7 +328,7 @@ define void @vg_unwind_with_sve_args(<vscale x 2 x i64> %x) #0 {
; CHECK-NEXT: .cfi_offset w30, -24
; CHECK-NEXT: .cfi_offset w29, -32
; CHECK-NEXT: addvl sp, sp, #-18
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 32 + 144 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x20, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 32 + 144 * VG
; CHECK-NEXT: str p8, [sp, #11, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: str p15, [sp, #4, mul vl] // 2-byte Folded Spill
@@ -351,16 +351,16 @@ define void @vg_unwind_with_sve_args(<vscale x 2 x i64> %x) #0 {
; CHECK-NEXT: str p4, [sp, #15, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 32 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 32 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 32 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 32 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 32 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 32 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 32 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 32 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d8 @ cfa - 8 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d9 @ cfa - 16 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d10 @ cfa - 24 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d11 @ cfa - 32 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d12 @ cfa - 40 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d13 @ cfa - 48 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d14 @ cfa - 56 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d15 @ cfa - 64 * VG - 32
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x98, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 32 + 152 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x20, 0x92, 0x2e, 0x00, 0x11, 0x98, 0x01, 0x1e, 0x22 // sp + 32 + 152 * VG
; CHECK-NEXT: str z0, [sp] // 16-byte Folded Spill
; CHECK-NEXT: //APP
; CHECK-NEXT: //NO_APP
@@ -371,7 +371,7 @@ define void @vg_unwind_with_sve_args(<vscale x 2 x i64> %x) #0 {
; CHECK-NEXT: smstart sm
; CHECK-NEXT: .cfi_restore vg
; CHECK-NEXT: addvl sp, sp, #1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 32 + 144 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x20, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 32 + 144 * VG
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: ldr z9, [sp, #16, mul vl] // 16-byte Folded Reload
; CHECK-NEXT: ldr z8, [sp, #17, mul vl] // 16-byte Folded Reload
@@ -448,14 +448,14 @@ define void @vg_unwind_with_sve_args(<vscale x 2 x i64> %x) #0 {
; FP-CHECK-NEXT: str p4, [sp, #15, mul vl] // 2-byte Folded Spill
; FP-CHECK-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; FP-CHECK-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; FP-CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 48 - 8 * VG
-; FP-CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 48 - 16 * VG
-; FP-CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 48 - 24 * VG
-; FP-CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 48 - 32 * VG
-; FP-CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 48 - 40 * VG
-; FP-CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 48 - 48 * VG
-; FP-CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 48 - 56 * VG
-; FP-CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 48 - 64 * VG
+; FP-CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d8 @ cfa - 8 * VG - 48
+; FP-CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d9 @ cfa - 16 * VG - 48
+; FP-CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d10 @ cfa - 24 * VG - 48
+; FP-CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d11 @ cfa - 32 * VG - 48
+; FP-CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d12 @ cfa - 40 * VG - 48
+; FP-CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d13 @ cfa - 48 * VG - 48
+; FP-CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d14 @ cfa - 56 * VG - 48
+; FP-CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d15 @ cfa - 64 * VG - 48
; FP-CHECK-NEXT: addvl sp, sp, #-1
; FP-CHECK-NEXT: str z0, [x29, #-19, mul vl] // 16-byte Folded Spill
; FP-CHECK-NEXT: //APP
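
The per-register save slots follow the same scheme. Taking the new $d8 string (0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c): 0x10 is DW_CFA_expression, 0x48 is register 72 ($d8), 0x09 the expression length. A reader-side hand trace, under the same assumptions as the sketch further up plus the DWARF v5 rule (§6.4.2.3) that a DW_CFA_expression starts evaluation with the CFA already on the stack:

# Hand trace of the 9 expression bytes for $d8 (sketch, not patch content).
stack = ["CFA"]
stack.append("VG")   # 0x92 0x2e 0x00  DW_OP_bregx VG+0
stack.append(-8)     # 0x11 0x78       DW_OP_consts -8
# 0x1e DW_OP_mul  -> ["CFA", "-8*VG"]
# 0x22 DW_OP_plus -> ["CFA - 8*VG"]
# 0x40 DW_OP_lit16, 0x1c DW_OP_minus -> ["CFA - 8*VG - 16"]
# which is exactly the "$d8 @ cfa - 8 * VG - 16" comment emitted above.

The deeper-frame variants in sme-vg-to-stack.ll end in DW_OP_consts/DW_OP_plus instead (0x11, 0x60, 0x22 is consts -32, plus), presumably because offsets of 32 and beyond no longer fit DW_OP_lit0..31; that is why those strings carry a length byte of 0x0a rather than 0x09.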
diff --git a/llvm/test/CodeGen/AArch64/sme2-fp8-intrinsics-cvt.ll b/llvm/test/CodeGen/AArch64/sme2-fp8-intrinsics-cvt.ll
index b0390ec73ae97..8398e07f63801 100644
--- a/llvm/test/CodeGen/AArch64/sme2-fp8-intrinsics-cvt.ll
+++ b/llvm/test/CodeGen/AArch64/sme2-fp8-intrinsics-cvt.ll
@@ -36,7 +36,7 @@ define { <vscale x 16 x i8>, <vscale x 16 x i8>, <vscale x 16 x i8>, <vscale x 1
; CHECK-NEXT: str z18, [sp, #6, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z17, [sp, #7, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z16, [sp, #8, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc8, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 72 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xc8, 0x00, 0x1e, 0x22 // sp + 16 + 72 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: lsl x8, x0, #1
; CHECK-NEXT: add x9, x1, x0
@@ -129,10 +129,10 @@ define { <vscale x 16 x i8>, <vscale x 16 x i8> } @bfcvt_tuple(i64 %stride, ptr
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z10, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: add x8, x1, x0
; CHECK-NEXT: ld1h { z2.h, z10.h }, pn8/z, [x1]
diff --git a/llvm/test/CodeGen/AArch64/sme2-intrinsics-qcvt.ll b/llvm/test/CodeGen/AArch64/sme2-intrinsics-qcvt.ll
index b4a83c10df94a..58d2e253eaafd 100644
--- a/llvm/test/CodeGen/AArch64/sme2-intrinsics-qcvt.ll
+++ b/llvm/test/CodeGen/AArch64/sme2-intrinsics-qcvt.ll
@@ -58,7 +58,7 @@ define { <vscale x 8 x i16>, <vscale x 8 x i16>, <vscale x 8 x i16>, <vscale x 8
; CHECK-NEXT: str z18, [sp, #6, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z17, [sp, #7, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z16, [sp, #8, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc8, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 72 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xc8, 0x00, 0x1e, 0x22 // sp + 16 + 72 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: lsl x8, x0, #1
; CHECK-NEXT: add x9, x1, x0
diff --git a/llvm/test/CodeGen/AArch64/sme2-intrinsics-qrshr.ll b/llvm/test/CodeGen/AArch64/sme2-intrinsics-qrshr.ll
index 0bc9e15786a8a..3bb516daa95ca 100644
--- a/llvm/test/CodeGen/AArch64/sme2-intrinsics-qrshr.ll
+++ b/llvm/test/CodeGen/AArch64/sme2-intrinsics-qrshr.ll
@@ -24,10 +24,10 @@ define { <vscale x 8 x i16>, <vscale x 8 x i16> } @multi_vector_sat_shift_narrow
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z11, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z10, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue pn8.b
; CHECK-NEXT: add x8, x1, x0
; CHECK-NEXT: ld1w { z2.s, z10.s }, pn8/z, [x1]
@@ -98,7 +98,7 @@ define { <vscale x 16 x i8>, <vscale x 16 x i8>, <vscale x 16 x i8>, <vscale x 1
; CHECK-NEXT: str z18, [sp, #6, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z17, [sp, #7, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z16, [sp, #8, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc8, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 72 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xc8, 0x00, 0x1e, 0x22 // sp + 16 + 72 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: lsl x8, x0, #1
; CHECK-NEXT: add x9, x1, x0
diff --git a/llvm/test/CodeGen/AArch64/sme2-multivec-regalloc.mir b/llvm/test/CodeGen/AArch64/sme2-multivec-regalloc.mir
index 1d04cc6d7ca2a..c3338b14522cb 100644
--- a/llvm/test/CodeGen/AArch64/sme2-multivec-regalloc.mir
+++ b/llvm/test/CodeGen/AArch64/sme2-multivec-regalloc.mir
@@ -17,7 +17,7 @@ body: |
; CHECK-NEXT: stp d9, d8, [sp, #16]
; CHECK-NEXT: str x29, [sp, #32]
; CHECK-NEXT: addvl sp, sp, #-2
- ; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 48 + 16 * VG
+ ; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x30, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 48 + 16 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: .cfi_offset b8, -24
; CHECK-NEXT: .cfi_offset b9, -32
@@ -97,7 +97,7 @@ body: |
; CHECK: str x29, [sp, #-16]!
; CHECK-NEXT: addvl sp, sp, #-2
- ; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+ ; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: lsl x9, x1, #1
; CHECK-NEXT: ptrue pn8.b
diff --git a/llvm/test/CodeGen/AArch64/split-vector-insert.ll b/llvm/test/CodeGen/AArch64/split-vector-insert.ll
index 555e38a3df205..109059e76bbc6 100644
--- a/llvm/test/CodeGen/AArch64/split-vector-insert.ll
+++ b/llvm/test/CodeGen/AArch64/split-vector-insert.ll
@@ -16,7 +16,7 @@ define <vscale x 2 x i64> @test_nxv2i64_v8i64(<vscale x 2 x i64> %a, <8 x i64> %
; CHECK-LEGALIZATION-NEXT: .cfi_def_cfa_offset 16
; CHECK-LEGALIZATION-NEXT: .cfi_offset w29, -16
; CHECK-LEGALIZATION-NEXT: addvl sp, sp, #-3
-; CHECK-LEGALIZATION-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-LEGALIZATION-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-LEGALIZATION-NEXT: cntd x8
; CHECK-LEGALIZATION-NEXT: ptrue p0.d, vl2
; CHECK-LEGALIZATION-NEXT: mov w9, #2 // =0x2
@@ -59,7 +59,7 @@ define <vscale x 2 x i64> @test_nxv2i64_v8i64(<vscale x 2 x i64> %a, <8 x i64> %
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-3
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: cntd x8
; CHECK-NEXT: ptrue p0.d, vl2
; CHECK-NEXT: mov w9, #2 // =0x2
@@ -111,7 +111,7 @@ define <vscale x 2 x double> @test_nxv2f64_v8f64(<vscale x 2 x double> %a, <8 x
; CHECK-LEGALIZATION-NEXT: .cfi_def_cfa_offset 16
; CHECK-LEGALIZATION-NEXT: .cfi_offset w29, -16
; CHECK-LEGALIZATION-NEXT: addvl sp, sp, #-3
-; CHECK-LEGALIZATION-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-LEGALIZATION-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-LEGALIZATION-NEXT: cntd x8
; CHECK-LEGALIZATION-NEXT: ptrue p0.d, vl2
; CHECK-LEGALIZATION-NEXT: mov w9, #2 // =0x2
@@ -154,7 +154,7 @@ define <vscale x 2 x double> @test_nxv2f64_v8f64(<vscale x 2 x double> %a, <8 x
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-3
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: cntd x8
; CHECK-NEXT: ptrue p0.d, vl2
; CHECK-NEXT: mov w9, #2 // =0x2
diff --git a/llvm/test/CodeGen/AArch64/stack-hazard.ll b/llvm/test/CodeGen/AArch64/stack-hazard.ll
index 3a33405200132..4615b1a6a9b2e 100644
--- a/llvm/test/CodeGen/AArch64/stack-hazard.ll
+++ b/llvm/test/CodeGen/AArch64/stack-hazard.ll
@@ -388,7 +388,7 @@ define i32 @csr_d8_allocnxv4i32(i64 %d) "aarch64_pstate_sm_compatible" {
; CHECK0-NEXT: str d8, [sp, #-16]! // 8-byte Folded Spill
; CHECK0-NEXT: str x29, [sp, #8] // 8-byte Folded Spill
; CHECK0-NEXT: addvl sp, sp, #-1
-; CHECK0-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK0-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK0-NEXT: .cfi_offset w29, -8
; CHECK0-NEXT: .cfi_offset b8, -16
; CHECK0-NEXT: mov z0.s, #0 // =0x0
@@ -407,7 +407,7 @@ define i32 @csr_d8_allocnxv4i32(i64 %d) "aarch64_pstate_sm_compatible" {
; CHECK64-NEXT: str x29, [sp, #72] // 8-byte Folded Spill
; CHECK64-NEXT: sub sp, sp, #64
; CHECK64-NEXT: addvl sp, sp, #-1
-; CHECK64-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x90, 0x01, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 144 + 8 * VG
+; CHECK64-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 144 + 8 * VG
; CHECK64-NEXT: .cfi_offset w29, -8
; CHECK64-NEXT: .cfi_offset b8, -80
; CHECK64-NEXT: mov z0.s, #0 // =0x0
@@ -429,7 +429,7 @@ define i32 @csr_d8_allocnxv4i32(i64 %d) "aarch64_pstate_sm_compatible" {
; CHECK1024-NEXT: str x29, [sp, #1032] // 8-byte Folded Spill
; CHECK1024-NEXT: sub sp, sp, #1024
; CHECK1024-NEXT: addvl sp, sp, #-1
-; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x90, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 2064 + 8 * VG
+; CHECK1024-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x90, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 2064 + 8 * VG
; CHECK1024-NEXT: .cfi_offset w29, -8
; CHECK1024-NEXT: .cfi_offset b8, -1040
; CHECK1024-NEXT: mov z0.s, #0 // =0x0
@@ -955,9 +955,9 @@ define i32 @svecc_csr_d8(i32 noundef %num, <vscale x 4 x i32> %vs) "aarch64_psta
; CHECK0-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK0-NEXT: addvl sp, sp, #-1
; CHECK0-NEXT: str z8, [sp] // 16-byte Folded Spill
-; CHECK0-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK0-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK0-NEXT: .cfi_offset w29, -16
-; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
+; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
; CHECK0-NEXT: //APP
; CHECK0-NEXT: //NO_APP
; CHECK0-NEXT: mov w0, wzr
@@ -973,9 +973,9 @@ define i32 @svecc_csr_d8(i32 noundef %num, <vscale x 4 x i32> %vs) "aarch64_psta
; CHECK64-NEXT: addvl sp, sp, #-1
; CHECK64-NEXT: str z8, [sp] // 16-byte Folded Spill
; CHECK64-NEXT: sub sp, sp, #64
-; CHECK64-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x90, 0x01, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 144 + 8 * VG
+; CHECK64-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 144 + 8 * VG
; CHECK64-NEXT: .cfi_offset w29, -16
-; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xb0, 0x7f, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 80 - 8 * VG
+; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xb0, 0x7f, 0x22 // $d8 @ cfa - 8 * VG - 80
; CHECK64-NEXT: mov w0, wzr
; CHECK64-NEXT: //APP
; CHECK64-NEXT: //NO_APP
@@ -993,9 +993,9 @@ define i32 @svecc_csr_d8(i32 noundef %num, <vscale x 4 x i32> %vs) "aarch64_psta
; CHECK1024-NEXT: addvl sp, sp, #-1
; CHECK1024-NEXT: str z8, [sp] // 16-byte Folded Spill
; CHECK1024-NEXT: sub sp, sp, #1024
-; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x90, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 2064 + 8 * VG
+; CHECK1024-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x90, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 2064 + 8 * VG
; CHECK1024-NEXT: .cfi_offset w29, -16
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xf0, 0x77, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 1040 - 8 * VG
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xf0, 0x77, 0x22 // $d8 @ cfa - 8 * VG - 1040
; CHECK1024-NEXT: mov w0, wzr
; CHECK1024-NEXT: //APP
; CHECK1024-NEXT: //NO_APP
@@ -1017,10 +1017,10 @@ define i32 @svecc_csr_d8d9(i32 noundef %num, <vscale x 4 x i32> %vs) "aarch64_ps
; CHECK0-NEXT: addvl sp, sp, #-2
; CHECK0-NEXT: str z9, [sp] // 16-byte Folded Spill
; CHECK0-NEXT: str z8, [sp, #1, mul vl] // 16-byte Folded Spill
-; CHECK0-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK0-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK0-NEXT: .cfi_offset w29, -16
-; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
+; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
; CHECK0-NEXT: //APP
; CHECK0-NEXT: //NO_APP
; CHECK0-NEXT: mov w0, wzr
@@ -1038,10 +1038,10 @@ define i32 @svecc_csr_d8d9(i32 noundef %num, <vscale x 4 x i32> %vs) "aarch64_ps
; CHECK64-NEXT: str z9, [sp] // 16-byte Folded Spill
; CHECK64-NEXT: str z8, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK64-NEXT: sub sp, sp, #64
-; CHECK64-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x90, 0x01, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 144 + 16 * VG
+; CHECK64-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 144 + 16 * VG
; CHECK64-NEXT: .cfi_offset w29, -16
-; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xb0, 0x7f, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 80 - 8 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0xb0, 0x7f, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 80 - 16 * VG
+; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xb0, 0x7f, 0x22 // $d8 @ cfa - 8 * VG - 80
+; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0xb0, 0x7f, 0x22 // $d9 @ cfa - 16 * VG - 80
; CHECK64-NEXT: mov w0, wzr
; CHECK64-NEXT: //APP
; CHECK64-NEXT: //NO_APP
@@ -1061,10 +1061,10 @@ define i32 @svecc_csr_d8d9(i32 noundef %num, <vscale x 4 x i32> %vs) "aarch64_ps
; CHECK1024-NEXT: str z9, [sp] // 16-byte Folded Spill
; CHECK1024-NEXT: str z8, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK1024-NEXT: sub sp, sp, #1024
-; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x90, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 2064 + 16 * VG
+; CHECK1024-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x90, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 2064 + 16 * VG
; CHECK1024-NEXT: .cfi_offset w29, -16
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xf0, 0x77, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 1040 - 8 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0xf0, 0x77, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 1040 - 16 * VG
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xf0, 0x77, 0x22 // $d8 @ cfa - 8 * VG - 1040
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0xf0, 0x77, 0x22 // $d9 @ cfa - 16 * VG - 1040
; CHECK1024-NEXT: mov w0, wzr
; CHECK1024-NEXT: //APP
; CHECK1024-NEXT: //NO_APP
@@ -1086,9 +1086,9 @@ define i32 @svecc_csr_d8_allocd(double %d, <vscale x 4 x i32> %vs) "aarch64_psta
; CHECK0-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK0-NEXT: addvl sp, sp, #-1
; CHECK0-NEXT: str z8, [sp] // 16-byte Folded Spill
-; CHECK0-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK0-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK0-NEXT: .cfi_offset w29, -16
-; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
+; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
; CHECK0-NEXT: //APP
; CHECK0-NEXT: //NO_APP
; CHECK0-NEXT: addvl x8, sp, #1
@@ -1106,9 +1106,9 @@ define i32 @svecc_csr_d8_allocd(double %d, <vscale x 4 x i32> %vs) "aarch64_psta
; CHECK64-NEXT: addvl sp, sp, #-1
; CHECK64-NEXT: str z8, [sp] // 16-byte Folded Spill
; CHECK64-NEXT: sub sp, sp, #80
-; CHECK64-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0xa0, 0x01, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 160 + 8 * VG
+; CHECK64-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0xa0, 0x01, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 160 + 8 * VG
; CHECK64-NEXT: .cfi_offset w29, -16
-; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xb0, 0x7f, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 80 - 8 * VG
+; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xb0, 0x7f, 0x22 // $d8 @ cfa - 8 * VG - 80
; CHECK64-NEXT: mov w0, wzr
; CHECK64-NEXT: //APP
; CHECK64-NEXT: //NO_APP
@@ -1127,9 +1127,9 @@ define i32 @svecc_csr_d8_allocd(double %d, <vscale x 4 x i32> %vs) "aarch64_psta
; CHECK1024-NEXT: addvl sp, sp, #-1
; CHECK1024-NEXT: str z8, [sp] // 16-byte Folded Spill
; CHECK1024-NEXT: sub sp, sp, #1040
-; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0xa0, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 2080 + 8 * VG
+; CHECK1024-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0xa0, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 2080 + 8 * VG
; CHECK1024-NEXT: .cfi_offset w29, -16
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xf0, 0x77, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 1040 - 8 * VG
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xf0, 0x77, 0x22 // $d8 @ cfa - 8 * VG - 1040
; CHECK1024-NEXT: mov w0, wzr
; CHECK1024-NEXT: //APP
; CHECK1024-NEXT: //NO_APP
@@ -1153,9 +1153,9 @@ define i32 @svecc_csr_d8_alloci64(i64 %d, <vscale x 4 x i32> %vs) "aarch64_pstat
; CHECK0-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK0-NEXT: addvl sp, sp, #-1
; CHECK0-NEXT: str z8, [sp] // 16-byte Folded Spill
-; CHECK0-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK0-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK0-NEXT: .cfi_offset w29, -16
-; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
+; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
; CHECK0-NEXT: //APP
; CHECK0-NEXT: //NO_APP
; CHECK0-NEXT: mov x8, x0
@@ -1174,9 +1174,9 @@ define i32 @svecc_csr_d8_alloci64(i64 %d, <vscale x 4 x i32> %vs) "aarch64_pstat
; CHECK64-NEXT: addvl sp, sp, #-1
; CHECK64-NEXT: str z8, [sp] // 16-byte Folded Spill
; CHECK64-NEXT: sub sp, sp, #80
-; CHECK64-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0xa0, 0x01, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 160 + 8 * VG
+; CHECK64-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0xa0, 0x01, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 160 + 8 * VG
; CHECK64-NEXT: .cfi_offset w29, -16
-; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xb0, 0x7f, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 80 - 8 * VG
+; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xb0, 0x7f, 0x22 // $d8 @ cfa - 8 * VG - 80
; CHECK64-NEXT: mov x8, x0
; CHECK64-NEXT: mov w0, wzr
; CHECK64-NEXT: //APP
@@ -1196,9 +1196,9 @@ define i32 @svecc_csr_d8_alloci64(i64 %d, <vscale x 4 x i32> %vs) "aarch64_pstat
; CHECK1024-NEXT: addvl sp, sp, #-1
; CHECK1024-NEXT: str z8, [sp] // 16-byte Folded Spill
; CHECK1024-NEXT: sub sp, sp, #1040
-; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0xa0, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 2080 + 8 * VG
+; CHECK1024-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0xa0, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 2080 + 8 * VG
; CHECK1024-NEXT: .cfi_offset w29, -16
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xf0, 0x77, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 1040 - 8 * VG
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xf0, 0x77, 0x22 // $d8 @ cfa - 8 * VG - 1040
; CHECK1024-NEXT: mov x8, x0
; CHECK1024-NEXT: mov w0, wzr
; CHECK1024-NEXT: //APP
@@ -1224,9 +1224,9 @@ define i32 @svecc_csr_d8_allocnxv4i32(i64 %d, <vscale x 4 x i32> %vs) "aarch64_p
; CHECK0-NEXT: addvl sp, sp, #-1
; CHECK0-NEXT: str z8, [sp] // 16-byte Folded Spill
; CHECK0-NEXT: addvl sp, sp, #-1
-; CHECK0-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK0-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK0-NEXT: .cfi_offset w29, -16
-; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
+; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
; CHECK0-NEXT: mov z0.s, #0 // =0x0
; CHECK0-NEXT: mov w0, wzr
; CHECK0-NEXT: //APP
@@ -1246,9 +1246,9 @@ define i32 @svecc_csr_d8_allocnxv4i32(i64 %d, <vscale x 4 x i32> %vs) "aarch64_p
; CHECK64-NEXT: str z8, [sp] // 16-byte Folded Spill
; CHECK64-NEXT: sub sp, sp, #64
; CHECK64-NEXT: addvl sp, sp, #-1
-; CHECK64-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x90, 0x01, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 144 + 16 * VG
+; CHECK64-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 144 + 16 * VG
; CHECK64-NEXT: .cfi_offset w29, -16
-; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xb0, 0x7f, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 80 - 8 * VG
+; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xb0, 0x7f, 0x22 // $d8 @ cfa - 8 * VG - 80
; CHECK64-NEXT: mov z0.s, #0 // =0x0
; CHECK64-NEXT: add x8, sp, #64
; CHECK64-NEXT: mov w0, wzr
@@ -1271,9 +1271,9 @@ define i32 @svecc_csr_d8_allocnxv4i32(i64 %d, <vscale x 4 x i32> %vs) "aarch64_p
; CHECK1024-NEXT: str z8, [sp] // 16-byte Folded Spill
; CHECK1024-NEXT: sub sp, sp, #1024
; CHECK1024-NEXT: addvl sp, sp, #-1
-; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x90, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 2064 + 16 * VG
+; CHECK1024-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x90, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 2064 + 16 * VG
; CHECK1024-NEXT: .cfi_offset w29, -16
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xf0, 0x77, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 1040 - 8 * VG
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xf0, 0x77, 0x22 // $d8 @ cfa - 8 * VG - 1040
; CHECK1024-NEXT: mov z0.s, #0 // =0x0
; CHECK1024-NEXT: add x8, sp, #1024
; CHECK1024-NEXT: mov w0, wzr
@@ -1311,7 +1311,7 @@ define i32 @svecc_csr_x18_25_d8_15_allocdi64(i64 %d, double %e, <vscale x 4 x i3
; CHECK0-NEXT: str z9, [sp, #6, mul vl] // 16-byte Folded Spill
; CHECK0-NEXT: str z8, [sp, #7, mul vl] // 16-byte Folded Spill
; CHECK0-NEXT: sub sp, sp, #16
-; CHECK0-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xd0, 0x00, 0x22, 0x11, 0xc0, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 80 + 64 * VG
+; CHECK0-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xd0, 0x00, 0x92, 0x2e, 0x00, 0x11, 0xc0, 0x00, 0x1e, 0x22 // sp + 80 + 64 * VG
; CHECK0-NEXT: .cfi_offset w19, -8
; CHECK0-NEXT: .cfi_offset w20, -16
; CHECK0-NEXT: .cfi_offset w21, -24
@@ -1320,14 +1320,14 @@ define i32 @svecc_csr_x18_25_d8_15_allocdi64(i64 %d, double %e, <vscale x 4 x i3
; CHECK0-NEXT: .cfi_offset w24, -48
; CHECK0-NEXT: .cfi_offset w25, -56
; CHECK0-NEXT: .cfi_offset w29, -64
-; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 64 - 8 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 64 - 16 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 64 - 24 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 64 - 32 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 64 - 40 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 64 - 48 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 64 - 56 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 64 - 64 * VG
+; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d8 @ cfa - 8 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d9 @ cfa - 16 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d10 @ cfa - 24 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d11 @ cfa - 32 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d12 @ cfa - 40 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d13 @ cfa - 48 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d14 @ cfa - 56 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d15 @ cfa - 64 * VG - 64
; CHECK0-NEXT: mov x8, x0
; CHECK0-NEXT: mov w0, wzr
; CHECK0-NEXT: //APP
@@ -1368,7 +1368,7 @@ define i32 @svecc_csr_x18_25_d8_15_allocdi64(i64 %d, double %e, <vscale x 4 x i3
; CHECK64-NEXT: str z9, [sp, #6, mul vl] // 16-byte Folded Spill
; CHECK64-NEXT: str z8, [sp, #7, mul vl] // 16-byte Folded Spill
; CHECK64-NEXT: sub sp, sp, #96
-; CHECK64-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xe0, 0x01, 0x22, 0x11, 0xc0, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 224 + 64 * VG
+; CHECK64-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xe0, 0x01, 0x92, 0x2e, 0x00, 0x11, 0xc0, 0x00, 0x1e, 0x22 // sp + 224 + 64 * VG
; CHECK64-NEXT: .cfi_offset w19, -8
; CHECK64-NEXT: .cfi_offset w20, -16
; CHECK64-NEXT: .cfi_offset w21, -24
@@ -1377,14 +1377,14 @@ define i32 @svecc_csr_x18_25_d8_15_allocdi64(i64 %d, double %e, <vscale x 4 x i3
; CHECK64-NEXT: .cfi_offset w24, -48
; CHECK64-NEXT: .cfi_offset w25, -56
; CHECK64-NEXT: .cfi_offset w29, -64
-; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 128 - 8 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 128 - 16 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 128 - 24 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 128 - 32 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 128 - 40 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 128 - 48 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 128 - 56 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 128 - 64 * VG
+; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d8 @ cfa - 8 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d9 @ cfa - 16 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d10 @ cfa - 24 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d11 @ cfa - 32 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d12 @ cfa - 40 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d13 @ cfa - 48 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d14 @ cfa - 56 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d15 @ cfa - 64 * VG - 128
; CHECK64-NEXT: mov x8, x0
; CHECK64-NEXT: mov w0, wzr
; CHECK64-NEXT: //APP
@@ -1431,7 +1431,7 @@ define i32 @svecc_csr_x18_25_d8_15_allocdi64(i64 %d, double %e, <vscale x 4 x i3
; CHECK1024-NEXT: str z9, [sp, #6, mul vl] // 16-byte Folded Spill
; CHECK1024-NEXT: str z8, [sp, #7, mul vl] // 16-byte Folded Spill
; CHECK1024-NEXT: sub sp, sp, #1056
-; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xe0, 0x10, 0x22, 0x11, 0xc0, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 2144 + 64 * VG
+; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xe0, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xc0, 0x00, 0x1e, 0x22 // sp + 2144 + 64 * VG
; CHECK1024-NEXT: .cfi_offset w19, -8
; CHECK1024-NEXT: .cfi_offset w20, -16
; CHECK1024-NEXT: .cfi_offset w21, -24
@@ -1440,14 +1440,14 @@ define i32 @svecc_csr_x18_25_d8_15_allocdi64(i64 %d, double %e, <vscale x 4 x i3
; CHECK1024-NEXT: .cfi_offset w24, -48
; CHECK1024-NEXT: .cfi_offset w25, -56
; CHECK1024-NEXT: .cfi_offset w29, -64
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 1088 - 8 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 1088 - 16 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 1088 - 24 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 1088 - 32 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 1088 - 40 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 1088 - 48 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 1088 - 56 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 1088 - 64 * VG
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d8 @ cfa - 8 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d9 @ cfa - 16 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d10 @ cfa - 24 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d11 @ cfa - 32 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d12 @ cfa - 40 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d13 @ cfa - 48 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d14 @ cfa - 56 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d15 @ cfa - 64 * VG - 1088
; CHECK1024-NEXT: mov x8, x0
; CHECK1024-NEXT: mov w0, wzr
; CHECK1024-NEXT: //APP
@@ -1869,7 +1869,7 @@ define i32 @svecc_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8> %P3,
; CHECK0-NEXT: .cfi_offset w30, -40
; CHECK0-NEXT: .cfi_offset w29, -48
; CHECK0-NEXT: addvl sp, sp, #-18
-; CHECK0-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x30, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 48 + 144 * VG
+; CHECK0-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x30, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 48 + 144 * VG
; CHECK0-NEXT: str p15, [sp, #4, mul vl] // 2-byte Folded Spill
; CHECK0-NEXT: str p14, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK0-NEXT: str p13, [sp, #6, mul vl] // 2-byte Folded Spill
@@ -1898,14 +1898,14 @@ define i32 @svecc_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8> %P3,
; CHECK0-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK0-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK0-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 48 - 8 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 48 - 16 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 48 - 24 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 48 - 32 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 48 - 40 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 48 - 48 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 48 - 56 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 48 - 64 * VG
+; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d8 @ cfa - 8 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d9 @ cfa - 16 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d10 @ cfa - 24 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d11 @ cfa - 32 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d12 @ cfa - 40 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d13 @ cfa - 48 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d14 @ cfa - 56 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d15 @ cfa - 64 * VG - 48
; CHECK0-NEXT: mov x8, x0
; CHECK0-NEXT: //APP
; CHECK0-NEXT: //NO_APP
@@ -1990,7 +1990,7 @@ define i32 @svecc_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8> %P3,
; CHECK64-NEXT: .cfi_offset w30, -40
; CHECK64-NEXT: .cfi_offset w29, -48
; CHECK64-NEXT: addvl sp, sp, #-18
-; CHECK64-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xf0, 0x00, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 112 + 144 * VG
+; CHECK64-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xf0, 0x00, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 112 + 144 * VG
; CHECK64-NEXT: str p15, [sp, #4, mul vl] // 2-byte Folded Spill
; CHECK64-NEXT: str p14, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK64-NEXT: str p13, [sp, #6, mul vl] // 2-byte Folded Spill
@@ -2019,16 +2019,16 @@ define i32 @svecc_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8> %P3,
; CHECK64-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK64-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK64-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 112 - 8 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 112 - 16 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 112 - 24 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 112 - 32 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 112 - 40 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 112 - 48 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 112 - 56 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 112 - 64 * VG
+; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d8 @ cfa - 8 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d9 @ cfa - 16 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d10 @ cfa - 24 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d11 @ cfa - 32 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d12 @ cfa - 40 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d13 @ cfa - 48 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d14 @ cfa - 56 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d15 @ cfa - 64 * VG - 112
; CHECK64-NEXT: sub sp, sp, #64
-; CHECK64-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xb0, 0x01, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 176 + 144 * VG
+; CHECK64-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xb0, 0x01, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 176 + 144 * VG
; CHECK64-NEXT: mov x8, x0
; CHECK64-NEXT: //APP
; CHECK64-NEXT: //NO_APP
@@ -2051,7 +2051,7 @@ define i32 @svecc_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8> %P3,
; CHECK64-NEXT: movk w0, #59491, lsl #16
; CHECK64-NEXT: .cfi_restore vg
; CHECK64-NEXT: add sp, sp, #64
-; CHECK64-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xf0, 0x00, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 112 + 144 * VG
+; CHECK64-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xf0, 0x00, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 112 + 144 * VG
; CHECK64-NEXT: ldr z23, [sp, #2, mul vl] // 16-byte Folded Reload
; CHECK64-NEXT: ldr z22, [sp, #3, mul vl] // 16-byte Folded Reload
; CHECK64-NEXT: ldr z21, [sp, #4, mul vl] // 16-byte Folded Reload
@@ -2119,7 +2119,7 @@ define i32 @svecc_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8> %P3,
; CHECK1024-NEXT: .cfi_offset w30, -40
; CHECK1024-NEXT: .cfi_offset w29, -48
; CHECK1024-NEXT: addvl sp, sp, #-18
-; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xb0, 0x08, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 1072 + 144 * VG
+; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xb0, 0x08, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 1072 + 144 * VG
; CHECK1024-NEXT: str p15, [sp, #4, mul vl] // 2-byte Folded Spill
; CHECK1024-NEXT: str p14, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK1024-NEXT: str p13, [sp, #6, mul vl] // 2-byte Folded Spill
@@ -2148,16 +2148,16 @@ define i32 @svecc_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8> %P3,
; CHECK1024-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK1024-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK1024-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 1072 - 8 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 1072 - 16 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 1072 - 24 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 1072 - 32 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 1072 - 40 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 1072 - 48 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 1072 - 56 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 1072 - 64 * VG
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d8 @ cfa - 8 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d9 @ cfa - 16 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d10 @ cfa - 24 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d11 @ cfa - 32 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d12 @ cfa - 40 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d13 @ cfa - 48 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d14 @ cfa - 56 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d15 @ cfa - 64 * VG - 1072
; CHECK1024-NEXT: sub sp, sp, #1024
-; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xb0, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 2096 + 144 * VG
+; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xb0, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 2096 + 144 * VG
; CHECK1024-NEXT: mov x8, x0
; CHECK1024-NEXT: //APP
; CHECK1024-NEXT: //NO_APP
@@ -2180,7 +2180,7 @@ define i32 @svecc_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8> %P3,
; CHECK1024-NEXT: movk w0, #59491, lsl #16
; CHECK1024-NEXT: .cfi_restore vg
; CHECK1024-NEXT: add sp, sp, #1024
-; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xb0, 0x08, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 1072 + 144 * VG
+; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xb0, 0x08, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 1072 + 144 * VG
; CHECK1024-NEXT: ldr z23, [sp, #2, mul vl] // 16-byte Folded Reload
; CHECK1024-NEXT: ldr z22, [sp, #3, mul vl] // 16-byte Folded Reload
; CHECK1024-NEXT: ldr z21, [sp, #4, mul vl] // 16-byte Folded Reload
@@ -2252,7 +2252,7 @@ define i32 @svecc_alloca_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8
; CHECK0-NEXT: .cfi_offset w30, -40
; CHECK0-NEXT: .cfi_offset w29, -48
; CHECK0-NEXT: addvl sp, sp, #-18
-; CHECK0-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x30, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 48 + 144 * VG
+; CHECK0-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x30, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 48 + 144 * VG
; CHECK0-NEXT: str p15, [sp, #4, mul vl] // 2-byte Folded Spill
; CHECK0-NEXT: str p14, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK0-NEXT: str p13, [sp, #6, mul vl] // 2-byte Folded Spill
@@ -2281,16 +2281,16 @@ define i32 @svecc_alloca_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8
; CHECK0-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK0-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK0-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 48 - 8 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 48 - 16 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 48 - 24 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 48 - 32 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 48 - 40 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 48 - 48 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 48 - 56 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 48 - 64 * VG
+; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d8 @ cfa - 8 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d9 @ cfa - 16 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d10 @ cfa - 24 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d11 @ cfa - 32 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d12 @ cfa - 40 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d13 @ cfa - 48 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d14 @ cfa - 56 * VG - 48
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d15 @ cfa - 64 * VG - 48
; CHECK0-NEXT: sub sp, sp, #48
-; CHECK0-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xe0, 0x00, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 96 + 144 * VG
+; CHECK0-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xe0, 0x00, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 96 + 144 * VG
; CHECK0-NEXT: //APP
; CHECK0-NEXT: //NO_APP
; CHECK0-NEXT: bl __arm_sme_state
@@ -2312,7 +2312,7 @@ define i32 @svecc_alloca_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8
; CHECK0-NEXT: movk w0, #59491, lsl #16
; CHECK0-NEXT: .cfi_restore vg
; CHECK0-NEXT: add sp, sp, #48
-; CHECK0-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x30, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 48 + 144 * VG
+; CHECK0-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x30, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 48 + 144 * VG
; CHECK0-NEXT: ldr z23, [sp, #2, mul vl] // 16-byte Folded Reload
; CHECK0-NEXT: ldr z22, [sp, #3, mul vl] // 16-byte Folded Reload
; CHECK0-NEXT: ldr z21, [sp, #4, mul vl] // 16-byte Folded Reload
@@ -2376,7 +2376,7 @@ define i32 @svecc_alloca_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8
; CHECK64-NEXT: .cfi_offset w30, -40
; CHECK64-NEXT: .cfi_offset w29, -48
; CHECK64-NEXT: addvl sp, sp, #-18
-; CHECK64-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xf0, 0x00, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 112 + 144 * VG
+; CHECK64-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xf0, 0x00, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 112 + 144 * VG
; CHECK64-NEXT: str p15, [sp, #4, mul vl] // 2-byte Folded Spill
; CHECK64-NEXT: str p14, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK64-NEXT: str p13, [sp, #6, mul vl] // 2-byte Folded Spill
@@ -2405,16 +2405,16 @@ define i32 @svecc_alloca_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8
; CHECK64-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK64-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK64-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 112 - 8 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 112 - 16 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 112 - 24 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 112 - 32 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 112 - 40 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 112 - 48 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 112 - 56 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x11, 0x90, 0x7f, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 112 - 64 * VG
+; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d8 @ cfa - 8 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d9 @ cfa - 16 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d10 @ cfa - 24 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d11 @ cfa - 32 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d12 @ cfa - 40 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d13 @ cfa - 48 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d14 @ cfa - 56 * VG - 112
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x90, 0x7f, 0x22 // $d15 @ cfa - 64 * VG - 112
; CHECK64-NEXT: sub sp, sp, #112
-; CHECK64-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xe0, 0x01, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 224 + 144 * VG
+; CHECK64-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xe0, 0x01, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 224 + 144 * VG
; CHECK64-NEXT: //APP
; CHECK64-NEXT: //NO_APP
; CHECK64-NEXT: bl __arm_sme_state
@@ -2436,7 +2436,7 @@ define i32 @svecc_alloca_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8
; CHECK64-NEXT: movk w0, #59491, lsl #16
; CHECK64-NEXT: .cfi_restore vg
; CHECK64-NEXT: add sp, sp, #112
-; CHECK64-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xf0, 0x00, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 112 + 144 * VG
+; CHECK64-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xf0, 0x00, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 112 + 144 * VG
; CHECK64-NEXT: ldr z23, [sp, #2, mul vl] // 16-byte Folded Reload
; CHECK64-NEXT: ldr z22, [sp, #3, mul vl] // 16-byte Folded Reload
; CHECK64-NEXT: ldr z21, [sp, #4, mul vl] // 16-byte Folded Reload
@@ -2504,7 +2504,7 @@ define i32 @svecc_alloca_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8
; CHECK1024-NEXT: .cfi_offset w30, -40
; CHECK1024-NEXT: .cfi_offset w29, -48
; CHECK1024-NEXT: addvl sp, sp, #-18
-; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xb0, 0x08, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 1072 + 144 * VG
+; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xb0, 0x08, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 1072 + 144 * VG
; CHECK1024-NEXT: str p15, [sp, #4, mul vl] // 2-byte Folded Spill
; CHECK1024-NEXT: str p14, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK1024-NEXT: str p13, [sp, #6, mul vl] // 2-byte Folded Spill
@@ -2533,16 +2533,16 @@ define i32 @svecc_alloca_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8
; CHECK1024-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK1024-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK1024-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 1072 - 8 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 1072 - 16 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 1072 - 24 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 1072 - 32 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 1072 - 40 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 1072 - 48 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 1072 - 56 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x11, 0xd0, 0x77, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 1072 - 64 * VG
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d8 @ cfa - 8 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d9 @ cfa - 16 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d10 @ cfa - 24 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d11 @ cfa - 32 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d12 @ cfa - 40 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d13 @ cfa - 48 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d14 @ cfa - 56 * VG - 1072
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0xd0, 0x77, 0x22 // $d15 @ cfa - 64 * VG - 1072
; CHECK1024-NEXT: sub sp, sp, #1072
-; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xe0, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 2144 + 144 * VG
+; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xe0, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 2144 + 144 * VG
; CHECK1024-NEXT: //APP
; CHECK1024-NEXT: //NO_APP
; CHECK1024-NEXT: bl __arm_sme_state
@@ -2564,7 +2564,7 @@ define i32 @svecc_alloca_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8
; CHECK1024-NEXT: movk w0, #59491, lsl #16
; CHECK1024-NEXT: .cfi_restore vg
; CHECK1024-NEXT: add sp, sp, #1072
-; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0xb0, 0x08, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 1072 + 144 * VG
+; CHECK1024-NEXT: .cfi_escape 0x0f, 0x0b, 0x8f, 0xb0, 0x08, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 1072 + 144 * VG
; CHECK1024-NEXT: ldr z23, [sp, #2, mul vl] // 16-byte Folded Reload
; CHECK1024-NEXT: ldr z22, [sp, #3, mul vl] // 16-byte Folded Reload
; CHECK1024-NEXT: ldr z21, [sp, #4, mul vl] // 16-byte Folded Reload
@@ -3192,14 +3192,14 @@ define i32 @svecc_call_dynamic_alloca(<4 x i16> %P0, i32 %P1, i32 %P2, <vscale x
; CHECK0-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK0-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK0-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 64 - 8 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 64 - 16 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 64 - 24 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 64 - 32 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 64 - 40 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 64 - 48 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 64 - 56 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 64 - 64 * VG
+; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d8 @ cfa - 8 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d9 @ cfa - 16 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d10 @ cfa - 24 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d11 @ cfa - 32 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d12 @ cfa - 40 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d13 @ cfa - 48 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d14 @ cfa - 56 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d15 @ cfa - 64 * VG - 64
; CHECK0-NEXT: mov w9, w0
; CHECK0-NEXT: mov x8, sp
; CHECK0-NEXT: mov w2, w1
@@ -3327,14 +3327,14 @@ define i32 @svecc_call_dynamic_alloca(<4 x i16> %P0, i32 %P1, i32 %P2, <vscale x
; CHECK64-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK64-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK64-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 128 - 8 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 128 - 16 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 128 - 24 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 128 - 32 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 128 - 40 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 128 - 48 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 128 - 56 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 128 - 64 * VG
+; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d8 @ cfa - 8 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d9 @ cfa - 16 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d10 @ cfa - 24 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d11 @ cfa - 32 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d12 @ cfa - 40 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d13 @ cfa - 48 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d14 @ cfa - 56 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d15 @ cfa - 64 * VG - 128
; CHECK64-NEXT: sub sp, sp, #64
; CHECK64-NEXT: mov w9, w0
; CHECK64-NEXT: mov x8, sp
@@ -3469,14 +3469,14 @@ define i32 @svecc_call_dynamic_alloca(<4 x i16> %P0, i32 %P1, i32 %P2, <vscale x
; CHECK1024-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK1024-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK1024-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 1088 - 8 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 1088 - 16 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 1088 - 24 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 1088 - 32 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 1088 - 40 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 1088 - 48 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 1088 - 56 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 1088 - 64 * VG
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d8 @ cfa - 8 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d9 @ cfa - 16 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d10 @ cfa - 24 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d11 @ cfa - 32 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d12 @ cfa - 40 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d13 @ cfa - 48 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d14 @ cfa - 56 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d15 @ cfa - 64 * VG - 1088
; CHECK1024-NEXT: sub sp, sp, #1024
; CHECK1024-NEXT: mov w9, w0
; CHECK1024-NEXT: mov x8, sp
@@ -3616,14 +3616,14 @@ define i32 @svecc_call_realign(<4 x i16> %P0, i32 %P1, i32 %P2, <vscale x 16 x i
; CHECK0-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK0-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK0-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 64 - 8 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 64 - 16 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 64 - 24 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 64 - 32 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 64 - 40 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 64 - 48 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 64 - 56 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 64 - 64 * VG
+; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d8 @ cfa - 8 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d9 @ cfa - 16 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d10 @ cfa - 24 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d11 @ cfa - 32 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d12 @ cfa - 40 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d13 @ cfa - 48 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d14 @ cfa - 56 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d15 @ cfa - 64 * VG - 64
; CHECK0-NEXT: sub x9, sp, #1024
; CHECK0-NEXT: and sp, x9, #0xffffffffffffffe0
; CHECK0-NEXT: mov w2, w1
@@ -3743,14 +3743,14 @@ define i32 @svecc_call_realign(<4 x i16> %P0, i32 %P1, i32 %P2, <vscale x 16 x i
; CHECK64-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK64-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK64-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 128 - 8 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 128 - 16 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 128 - 24 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 128 - 32 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 128 - 40 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 128 - 48 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 128 - 56 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 128 - 64 * VG
+; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d8 @ cfa - 8 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d9 @ cfa - 16 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d10 @ cfa - 24 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d11 @ cfa - 32 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d12 @ cfa - 40 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d13 @ cfa - 48 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d14 @ cfa - 56 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d15 @ cfa - 64 * VG - 128
; CHECK64-NEXT: sub x9, sp, #1088
; CHECK64-NEXT: and sp, x9, #0xffffffffffffffe0
; CHECK64-NEXT: mov w2, w1
@@ -3875,14 +3875,14 @@ define i32 @svecc_call_realign(<4 x i16> %P0, i32 %P1, i32 %P2, <vscale x 16 x i
; CHECK1024-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK1024-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK1024-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 1088 - 8 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 1088 - 16 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 1088 - 24 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 1088 - 32 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 1088 - 40 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 1088 - 48 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 1088 - 56 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 1088 - 64 * VG
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d8 @ cfa - 8 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d9 @ cfa - 16 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d10 @ cfa - 24 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d11 @ cfa - 32 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d12 @ cfa - 40 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d13 @ cfa - 48 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d14 @ cfa - 56 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d15 @ cfa - 64 * VG - 1088
; CHECK1024-NEXT: sub x9, sp, #2048
; CHECK1024-NEXT: and sp, x9, #0xffffffffffffffe0
; CHECK1024-NEXT: mov w2, w1
@@ -4016,14 +4016,14 @@ define i32 @svecc_call_dynamic_and_scalable_alloca(<4 x i16> %P0, i32 %P1, i32 %
; CHECK0-NEXT: .cfi_offset w28, -48
; CHECK0-NEXT: .cfi_offset w30, -56
; CHECK0-NEXT: .cfi_offset w29, -64
-; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 64 - 8 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 64 - 16 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 64 - 24 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 64 - 32 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 64 - 40 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 64 - 48 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 64 - 56 * VG
-; CHECK0-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x40, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 64 - 64 * VG
+; CHECK0-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d8 @ cfa - 8 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d9 @ cfa - 16 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d10 @ cfa - 24 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d11 @ cfa - 32 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d12 @ cfa - 40 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d13 @ cfa - 48 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d14 @ cfa - 56 * VG - 64
+; CHECK0-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x40, 0x22 // $d15 @ cfa - 64 * VG - 64
; CHECK0-NEXT: // kill: def $w0 killed $w0 def $x0
; CHECK0-NEXT: ubfiz x8, x0, #2, #32
; CHECK0-NEXT: mov x9, sp
@@ -4125,14 +4125,14 @@ define i32 @svecc_call_dynamic_and_scalable_alloca(<4 x i16> %P0, i32 %P1, i32 %
; CHECK64-NEXT: .cfi_offset w28, -48
; CHECK64-NEXT: .cfi_offset w30, -56
; CHECK64-NEXT: .cfi_offset w29, -64
-; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 128 - 8 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 128 - 16 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 128 - 24 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 128 - 32 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 128 - 40 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 128 - 48 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 128 - 56 * VG
-; CHECK64-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x11, 0x80, 0x7f, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 128 - 64 * VG
+; CHECK64-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d8 @ cfa - 8 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d9 @ cfa - 16 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d10 @ cfa - 24 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d11 @ cfa - 32 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d12 @ cfa - 40 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d13 @ cfa - 48 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d14 @ cfa - 56 * VG - 128
+; CHECK64-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x80, 0x7f, 0x22 // $d15 @ cfa - 64 * VG - 128
; CHECK64-NEXT: // kill: def $w0 killed $w0 def $x0
; CHECK64-NEXT: ubfiz x8, x0, #2, #32
; CHECK64-NEXT: mov x9, sp
@@ -4240,14 +4240,14 @@ define i32 @svecc_call_dynamic_and_scalable_alloca(<4 x i16> %P0, i32 %P1, i32 %
; CHECK1024-NEXT: .cfi_offset w28, -48
; CHECK1024-NEXT: .cfi_offset w30, -56
; CHECK1024-NEXT: .cfi_offset w29, -64
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 1088 - 8 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 1088 - 16 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 1088 - 24 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 1088 - 32 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 1088 - 40 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 1088 - 48 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 1088 - 56 * VG
-; CHECK1024-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x11, 0xc0, 0x77, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 1088 - 64 * VG
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x48, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d8 @ cfa - 8 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x49, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d9 @ cfa - 16 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4a, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d10 @ cfa - 24 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4b, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d11 @ cfa - 32 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4c, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d12 @ cfa - 40 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4d, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d13 @ cfa - 48 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4e, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d14 @ cfa - 56 * VG - 1088
+; CHECK1024-NEXT: .cfi_escape 0x10, 0x4f, 0x0b, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0xc0, 0x77, 0x22 // $d15 @ cfa - 64 * VG - 1088
; CHECK1024-NEXT: // kill: def $w0 killed $w0 def $x0
; CHECK1024-NEXT: ubfiz x8, x0, #2, #32
; CHECK1024-NEXT: mov x9, sp
diff --git a/llvm/test/CodeGen/AArch64/stack-probing-sve.ll b/llvm/test/CodeGen/AArch64/stack-probing-sve.ll
index 56d865ef83e6b..59b95be6fc568 100644
--- a/llvm/test/CodeGen/AArch64/stack-probing-sve.ll
+++ b/llvm/test/CodeGen/AArch64/stack-probing-sve.ll
@@ -18,7 +18,7 @@ define void @sve_1_vector(ptr %out) #0 {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: addvl sp, sp, #1
; CHECK-NEXT: .cfi_def_cfa wsp, 16
; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
@@ -38,7 +38,7 @@ define void @sve_4_vector(ptr %out) #0 {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: addvl sp, sp, #4
; CHECK-NEXT: .cfi_def_cfa wsp, 16
; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
@@ -63,7 +63,7 @@ define void @sve_16_vector(ptr %out) #0 {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-16
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 128 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x01, 0x1e, 0x22 // sp + 16 + 128 * VG
; CHECK-NEXT: str xzr, [sp]
; CHECK-NEXT: addvl sp, sp, #16
; CHECK-NEXT: .cfi_def_cfa wsp, 16
@@ -103,7 +103,7 @@ define void @sve_17_vector(ptr %out) #0 {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl x9, sp, #-17
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x79, 0x00, 0x11, 0x10, 0x22, 0x11, 0x88, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $x9 + 16 + 136 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x79, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x88, 0x01, 0x1e, 0x22 // $x9 + 16 + 136 * VG
; CHECK-NEXT: .LBB3_1: // %entry
; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
; CHECK-NEXT: sub sp, sp, #1, lsl #12 // =4096
@@ -155,9 +155,9 @@ define void @sve_1v_csr(<vscale x 4 x float> %a) #0 {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: str z8, [sp] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
; CHECK-NEXT: //APP
; CHECK-NEXT: //NO_APP
; CHECK-NEXT: ldr z8, [sp] // 16-byte Folded Reload
@@ -180,15 +180,15 @@ define void @sve_4v_csr(<vscale x 4 x float> %a) #0 {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: str z11, [sp] // 16-byte Folded Spill
; CHECK-NEXT: str z10, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z9, [sp, #2, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #3, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
; CHECK-NEXT: //APP
; CHECK-NEXT: //NO_APP
; CHECK-NEXT: ldr z11, [sp] // 16-byte Folded Reload
@@ -217,7 +217,7 @@ define void @sve_16v_csr(<vscale x 4 x float> %a) #0 {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-16
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 128 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x01, 0x1e, 0x22 // sp + 16 + 128 * VG
; CHECK-NEXT: str xzr, [sp]
; CHECK-NEXT: str z23, [sp] // 16-byte Folded Spill
; CHECK-NEXT: str z22, [sp, #1, mul vl] // 16-byte Folded Spill
@@ -235,14 +235,14 @@ define void @sve_16v_csr(<vscale x 4 x float> %a) #0 {
; CHECK-NEXT: str z10, [sp, #13, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z9, [sp, #14, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #15, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; CHECK-NEXT: //APP
; CHECK-NEXT: //NO_APP
; CHECK-NEXT: ldr z23, [sp] // 16-byte Folded Reload
@@ -287,7 +287,7 @@ define void @sve_1p_csr(<vscale x 4 x float> %a) #0 {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: //APP
; CHECK-NEXT: //NO_APP
@@ -310,7 +310,7 @@ define void @sve_4p_csr(<vscale x 4 x float> %a) #0 {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: str p11, [sp, #4, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p10, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p9, [sp, #6, mul vl] // 2-byte Folded Spill
@@ -339,7 +339,7 @@ define void @sve_16v_1p_csr(<vscale x 4 x float> %a) #0 {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl x9, sp, #-17
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x79, 0x00, 0x11, 0x10, 0x22, 0x11, 0x88, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $x9 + 16 + 136 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x79, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x88, 0x01, 0x1e, 0x22 // $x9 + 16 + 136 * VG
; CHECK-NEXT: .LBB9_1: // %entry
; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
; CHECK-NEXT: sub sp, sp, #1, lsl #12 // =4096
@@ -370,14 +370,14 @@ define void @sve_16v_1p_csr(<vscale x 4 x float> %a) #0 {
; CHECK-NEXT: str z10, [sp, #14, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z9, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #16, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; CHECK-NEXT: //APP
; CHECK-NEXT: //NO_APP
; CHECK-NEXT: ldr z23, [sp, #1, mul vl] // 16-byte Folded Reload
@@ -426,7 +426,7 @@ define void @sve_1_vector_16_arr(ptr %out) #0 {
; CHECK-NEXT: sub sp, sp, #16
; CHECK-NEXT: .cfi_def_cfa_offset 32
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 32 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x20, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 32 + 8 * VG
; CHECK-NEXT: addvl sp, sp, #1
; CHECK-NEXT: .cfi_def_cfa wsp, 32
; CHECK-NEXT: add sp, sp, #16
@@ -453,9 +453,9 @@ define void @sve_1_vector_4096_arr(ptr %out) #0 {
; CHECK-NEXT: sub x9, sp, #3, lsl #12 // =12288
; CHECK-NEXT: .cfi_def_cfa w9, 12304
; CHECK-NEXT: addvl x9, x9, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0f, 0x79, 0x00, 0x11, 0x90, 0xe0, 0x00, 0x22, 0x11, 0x80, 0x02, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $x9 + 12304 + 256 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x79, 0x90, 0xe0, 0x00, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x02, 0x1e, 0x22 // $x9 + 12304 + 256 * VG
; CHECK-NEXT: addvl x9, x9, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0f, 0x79, 0x00, 0x11, 0x90, 0xe0, 0x00, 0x22, 0x11, 0x80, 0x04, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $x9 + 12304 + 512 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x79, 0x90, 0xe0, 0x00, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x04, 0x1e, 0x22 // $x9 + 12304 + 512 * VG
; CHECK-NEXT: .LBB11_1: // %entry
; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
; CHECK-NEXT: sub sp, sp, #1, lsl #12 // =4096
@@ -470,9 +470,9 @@ define void @sve_1_vector_4096_arr(ptr %out) #0 {
; CHECK-NEXT: ldr xzr, [sp]
; CHECK-NEXT: .cfi_def_cfa_register wsp
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0f, 0x8f, 0x00, 0x11, 0x90, 0xe0, 0x00, 0x22, 0x11, 0x88, 0x02, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 12304 + 264 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x90, 0xe0, 0x00, 0x92, 0x2e, 0x00, 0x11, 0x88, 0x02, 0x1e, 0x22 // sp + 12304 + 264 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0e, 0x8f, 0x00, 0x11, 0x90, 0xe0, 0x00, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 12304 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x90, 0xe0, 0x00, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 12304 + 16 * VG
; CHECK-NEXT: addvl sp, sp, #2
; CHECK-NEXT: .cfi_def_cfa wsp, 12304
; CHECK-NEXT: add sp, sp, #3, lsl #12 // =12288
@@ -538,38 +538,38 @@ define void @sve_1024_64k_guard(ptr %out) #0 "stack-probe-size"="65536" {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x02, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 256 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x02, 0x1e, 0x22 // sp + 16 + 256 * VG
; CHECK-NEXT: addvl sp, sp, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x04, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 512 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x04, 0x1e, 0x22 // sp + 16 + 512 * VG
; CHECK-NEXT: addvl sp, sp, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x06, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 768 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x06, 0x1e, 0x22 // sp + 16 + 768 * VG
; CHECK-NEXT: addvl sp, sp, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1024 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x08, 0x1e, 0x22 // sp + 16 + 1024 * VG
; CHECK-NEXT: addvl sp, sp, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x0a, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1280 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x0a, 0x1e, 0x22 // sp + 16 + 1280 * VG
; CHECK-NEXT: addvl sp, sp, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x0c, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1536 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x0c, 0x1e, 0x22 // sp + 16 + 1536 * VG
; CHECK-NEXT: addvl sp, sp, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x0e, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1792 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x0e, 0x1e, 0x22 // sp + 16 + 1792 * VG
; CHECK-NEXT: addvl sp, sp, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 2048 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x10, 0x1e, 0x22 // sp + 16 + 2048 * VG
; CHECK-NEXT: str xzr, [sp]
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x88, 0x0e, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1800 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x88, 0x0e, 0x1e, 0x22 // sp + 16 + 1800 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x0c, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1552 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x0c, 0x1e, 0x22 // sp + 16 + 1552 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x98, 0x0a, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1304 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x98, 0x0a, 0x1e, 0x22 // sp + 16 + 1304 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xa0, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1056 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xa0, 0x08, 0x1e, 0x22 // sp + 16 + 1056 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xa8, 0x06, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 808 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xa8, 0x06, 0x1e, 0x22 // sp + 16 + 808 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xb0, 0x04, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 560 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xb0, 0x04, 0x1e, 0x22 // sp + 16 + 560 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xb8, 0x02, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 312 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xb8, 0x02, 0x1e, 0x22 // sp + 16 + 312 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc0, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 64 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xc0, 0x00, 0x1e, 0x22 // sp + 16 + 64 * VG
; CHECK-NEXT: addvl sp, sp, #8
; CHECK-NEXT: .cfi_def_cfa wsp, 16
; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
@@ -588,23 +588,23 @@ define void @sve_1028_64k_guard(ptr %out) #0 "stack-probe-size"="65536" {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl x9, sp, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x79, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x02, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $x9 + 16 + 256 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x79, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x02, 0x1e, 0x22 // $x9 + 16 + 256 * VG
; CHECK-NEXT: addvl x9, x9, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x79, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x04, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $x9 + 16 + 512 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x79, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x04, 0x1e, 0x22 // $x9 + 16 + 512 * VG
; CHECK-NEXT: addvl x9, x9, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x79, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x06, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $x9 + 16 + 768 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x79, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x06, 0x1e, 0x22 // $x9 + 16 + 768 * VG
; CHECK-NEXT: addvl x9, x9, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x79, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $x9 + 16 + 1024 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x79, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x08, 0x1e, 0x22 // $x9 + 16 + 1024 * VG
; CHECK-NEXT: addvl x9, x9, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x79, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x0a, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $x9 + 16 + 1280 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x79, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x0a, 0x1e, 0x22 // $x9 + 16 + 1280 * VG
; CHECK-NEXT: addvl x9, x9, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x79, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x0c, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $x9 + 16 + 1536 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x79, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x0c, 0x1e, 0x22 // $x9 + 16 + 1536 * VG
; CHECK-NEXT: addvl x9, x9, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x79, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x0e, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $x9 + 16 + 1792 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x79, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x0e, 0x1e, 0x22 // $x9 + 16 + 1792 * VG
; CHECK-NEXT: addvl x9, x9, #-32
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x79, 0x00, 0x11, 0x10, 0x22, 0x11, 0x80, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $x9 + 16 + 2048 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x79, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x80, 0x10, 0x1e, 0x22 // $x9 + 16 + 2048 * VG
; CHECK-NEXT: addvl x9, x9, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x79, 0x00, 0x11, 0x10, 0x22, 0x11, 0x88, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $x9 + 16 + 2056 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x79, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x88, 0x10, 0x1e, 0x22 // $x9 + 16 + 2056 * VG
; CHECK-NEXT: .LBB14_1: // %entry
; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
; CHECK-NEXT: sub sp, sp, #16, lsl #12 // =65536
@@ -619,21 +619,21 @@ define void @sve_1028_64k_guard(ptr %out) #0 "stack-probe-size"="65536" {
; CHECK-NEXT: ldr xzr, [sp]
; CHECK-NEXT: .cfi_def_cfa_register wsp
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x0e, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1808 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x0e, 0x1e, 0x22 // sp + 16 + 1808 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x98, 0x0c, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1560 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x98, 0x0c, 0x1e, 0x22 // sp + 16 + 1560 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xa0, 0x0a, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1312 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xa0, 0x0a, 0x1e, 0x22 // sp + 16 + 1312 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xa8, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 1064 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xa8, 0x08, 0x1e, 0x22 // sp + 16 + 1064 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xb0, 0x06, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 816 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xb0, 0x06, 0x1e, 0x22 // sp + 16 + 816 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xb8, 0x04, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 568 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xb8, 0x04, 0x1e, 0x22 // sp + 16 + 568 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc0, 0x02, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 320 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xc0, 0x02, 0x1e, 0x22 // sp + 16 + 320 * VG
; CHECK-NEXT: addvl sp, sp, #31
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc8, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 72 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xc8, 0x00, 0x1e, 0x22 // sp + 16 + 72 * VG
; CHECK-NEXT: addvl sp, sp, #9
; CHECK-NEXT: .cfi_def_cfa wsp, 16
; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
@@ -656,7 +656,7 @@ define void @sve_5_vector(ptr %out) #0 {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-5
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x28, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 40 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x28, 0x1e, 0x22 // sp + 16 + 40 * VG
; CHECK-NEXT: str xzr, [sp]
; CHECK-NEXT: addvl sp, sp, #5
; CHECK-NEXT: .cfi_def_cfa wsp, 16
@@ -682,21 +682,21 @@ define void @sve_unprobed_area(<vscale x 4 x float> %a, i32 %n) #0 {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: str xzr, [sp]
; CHECK-NEXT: str p9, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z10, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z9, [sp, #2, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #3, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
; CHECK-NEXT: addvl sp, sp, #-4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc0, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 64 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xc0, 0x00, 0x1e, 0x22 // sp + 16 + 64 * VG
; CHECK-NEXT: //APP
; CHECK-NEXT: //NO_APP
; CHECK-NEXT: addvl sp, sp, #4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: ldr z10, [sp, #1, mul vl] // 16-byte Folded Reload
; CHECK-NEXT: ldr z9, [sp, #2, mul vl] // 16-byte Folded Reload
; CHECK-NEXT: ldr z8, [sp, #3, mul vl] // 16-byte Folded Reload
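
The escapes above are hand-encoded DWARF expressions: byte 0x0f is
DW_CFA_def_cfa_expression, followed by a ULEB128 length and the expression
itself in postfix form. As a sanity check on the annotated formulas, the
following minimal LEB128/DWARF-op decoder is a sketch (not part of this
patch; it assumes AArch64 DWARF numbering, where register 31 is sp and
register 46 is the VG pseudo-register) that reproduces the "sp + 16 + 8 * VG"
comment from the new compact encoding:

import sys

def uleb(b, i):
    # Decode one unsigned LEB128 value starting at index i.
    result, shift = 0, 0
    while True:
        byte = b[i]; i += 1
        result |= (byte & 0x7f) << shift
        shift += 7
        if not byte & 0x80:
            return result, i

def sleb(b, i):
    # Decode one signed LEB128 value starting at index i.
    result, shift = 0, 0
    while True:
        byte = b[i]; i += 1
        result |= (byte & 0x7f) << shift
        shift += 7
        if not byte & 0x80:
            if byte & 0x40:          # sign-extend
                result -= 1 << shift
            return result, i

def decode_ops(b):
    # Print the expression as a postfix (RPN) token stream.
    i, out = 0, []
    while i < len(b):
        op = b[i]; i += 1
        if 0x30 <= op <= 0x4f:       # DW_OP_lit0..lit31
            out.append(str(op - 0x30))
        elif 0x70 <= op <= 0x8f:     # DW_OP_breg0..breg31 + SLEB offset
            off, i = sleb(b, i)
            out.append(f"breg{op - 0x70}{off:+d}")
        elif op == 0x92:             # DW_OP_bregx: ULEB reg, SLEB offset
            reg, i = uleb(b, i); off, i = sleb(b, i)
            out.append(f"bregx{reg}{off:+d}")
        elif op == 0x11:             # DW_OP_consts: SLEB value
            val, i = sleb(b, i)
            out.append(str(val))
        elif op in (0x1c, 0x1e, 0x22):   # minus, mul, plus
            out.append({0x1c: "-", 0x1e: "*", 0x22: "+"}[op])
        else:
            out.append(f"op{op:#x}")
    return " ".join(out)

# New CFA expression from the hunks above: "sp + 16 + 8 * VG".
esc = [0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22]
assert esc[0] == 0x0f                # DW_CFA_def_cfa_expression
length, i = uleb(esc, 1)
print(decode_ops(esc[i:i + length]))
# -> breg31+16 bregx46+0 8 * +  i.e. (sp + 16) + (VG * 8) in postfix

The old 12-byte form computed the same value with separate DW_OP_consts/
DW_OP_plus pairs; the new form folds the constant 16 into the breg operand
and uses DW_OP_lit8 for the multiplier, hence the shorter length byte.
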
diff --git a/llvm/test/CodeGen/AArch64/sve-alloca.ll b/llvm/test/CodeGen/AArch64/sve-alloca.ll
index 2520095cce62e..8b7fa9e7b7f71 100644
--- a/llvm/test/CodeGen/AArch64/sve-alloca.ll
+++ b/llvm/test/CodeGen/AArch64/sve-alloca.ll
@@ -46,14 +46,14 @@ define void @foo(<vscale x 4 x i64> %dst, i1 %cond) {
; CHECK-NEXT: .cfi_offset w28, -16
; CHECK-NEXT: .cfi_offset w30, -24
; CHECK-NEXT: .cfi_offset w29, -32
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 32 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 32 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 32 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 32 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 32 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 32 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 32 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x60, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 32 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d8 @ cfa - 8 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d9 @ cfa - 16 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d10 @ cfa - 24 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d11 @ cfa - 32 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d12 @ cfa - 40 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d13 @ cfa - 48 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d14 @ cfa - 56 * VG - 32
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x60, 0x22 // $d15 @ cfa - 64 * VG - 32
; CHECK-NEXT: rdvl x9, #2
; CHECK-NEXT: mov x8, sp
; CHECK-NEXT: add x9, x9, #15
diff --git a/llvm/test/CodeGen/AArch64/sve-callee-save-restore-pairs.ll b/llvm/test/CodeGen/AArch64/sve-callee-save-restore-pairs.ll
index 30a8396d85ab7..254b8e03636da 100644
--- a/llvm/test/CodeGen/AArch64/sve-callee-save-restore-pairs.ll
+++ b/llvm/test/CodeGen/AArch64/sve-callee-save-restore-pairs.ll
@@ -43,17 +43,17 @@ define void @fbyte(<vscale x 16 x i8> %v){
; NOPAIR-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; NOPAIR-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; NOPAIR-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; NOPAIR-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; NOPAIR-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; NOPAIR-NEXT: .cfi_offset w30, -8
; NOPAIR-NEXT: .cfi_offset w29, -16
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; NOPAIR-NEXT: bl my_func
; NOPAIR-NEXT: ldr z23, [sp, #2, mul vl] // 16-byte Folded Reload
; NOPAIR-NEXT: ldr z22, [sp, #3, mul vl] // 16-byte Folded Reload
@@ -113,17 +113,17 @@ define void @fbyte(<vscale x 16 x i8> %v){
; PAIR-NEXT: str p4, [sp, #15, mul vl] // 2-byte Folded Spill
; PAIR-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; PAIR-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; PAIR-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; PAIR-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; PAIR-NEXT: .cfi_offset w30, -8
; PAIR-NEXT: .cfi_offset w29, -16
-; PAIR-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; PAIR-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; PAIR-NEXT: bl my_func
; PAIR-NEXT: ptrue pn8.b
; PAIR-NEXT: ldr z9, [sp, #16, mul vl] // 16-byte Folded Reload
@@ -187,17 +187,17 @@ define void @fhalf(<vscale x 8 x half> %v) {
; NOPAIR-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; NOPAIR-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; NOPAIR-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; NOPAIR-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; NOPAIR-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; NOPAIR-NEXT: .cfi_offset w30, -8
; NOPAIR-NEXT: .cfi_offset w29, -16
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; NOPAIR-NEXT: bl my_func
; NOPAIR-NEXT: ldr z23, [sp, #2, mul vl] // 16-byte Folded Reload
; NOPAIR-NEXT: ldr z22, [sp, #3, mul vl] // 16-byte Folded Reload
@@ -257,17 +257,17 @@ define void @fhalf(<vscale x 8 x half> %v) {
; PAIR-NEXT: str p4, [sp, #15, mul vl] // 2-byte Folded Spill
; PAIR-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; PAIR-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; PAIR-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; PAIR-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; PAIR-NEXT: .cfi_offset w30, -8
; PAIR-NEXT: .cfi_offset w29, -16
-; PAIR-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; PAIR-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; PAIR-NEXT: bl my_func
; PAIR-NEXT: ptrue pn8.b
; PAIR-NEXT: ldr z9, [sp, #16, mul vl] // 16-byte Folded Reload
@@ -310,11 +310,11 @@ define aarch64_sve_vector_pcs void @test_clobbers_z_p_regs() {
; NOPAIR-NEXT: str z10, [sp, #1, mul vl] // 16-byte Folded Spill
; NOPAIR-NEXT: str z9, [sp, #2, mul vl] // 16-byte Folded Spill
; NOPAIR-NEXT: str z8, [sp, #3, mul vl] // 16-byte Folded Spill
-; NOPAIR-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; NOPAIR-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; NOPAIR-NEXT: .cfi_offset w29, -16
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
; NOPAIR-NEXT: //APP
; NOPAIR-NEXT: //NO_APP
; NOPAIR-NEXT: ldr z10, [sp, #1, mul vl] // 16-byte Folded Reload
@@ -336,11 +336,11 @@ define aarch64_sve_vector_pcs void @test_clobbers_z_p_regs() {
; PAIR-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
; PAIR-NEXT: str z10, [sp, #1, mul vl] // 16-byte Folded Spill
; PAIR-NEXT: st1b { z8.b, z9.b }, pn8, [sp, #2, mul vl] // 32-byte Folded Spill
-; PAIR-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; PAIR-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; PAIR-NEXT: .cfi_offset w29, -16
-; PAIR-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
+; PAIR-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
; PAIR-NEXT: //APP
; PAIR-NEXT: //NO_APP
; PAIR-NEXT: ptrue pn8.b
@@ -368,11 +368,11 @@ define aarch64_sve_vector_pcs void @test_clobbers_z_p_regs2() {
; NOPAIR-NEXT: str z10, [sp, #1, mul vl] // 16-byte Folded Spill
; NOPAIR-NEXT: str z9, [sp, #2, mul vl] // 16-byte Folded Spill
; NOPAIR-NEXT: str z8, [sp, #3, mul vl] // 16-byte Folded Spill
-; NOPAIR-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; NOPAIR-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; NOPAIR-NEXT: .cfi_offset w29, -16
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
; NOPAIR-NEXT: //APP
; NOPAIR-NEXT: //NO_APP
; NOPAIR-NEXT: ldr z10, [sp, #1, mul vl] // 16-byte Folded Reload
@@ -393,11 +393,11 @@ define aarch64_sve_vector_pcs void @test_clobbers_z_p_regs2() {
; PAIR-NEXT: str p10, [sp, #6, mul vl] // 2-byte Folded Spill
; PAIR-NEXT: str z10, [sp, #1, mul vl] // 16-byte Folded Spill
; PAIR-NEXT: st1b { z8.b, z9.b }, pn9, [sp, #2, mul vl] // 32-byte Folded Spill
-; PAIR-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; PAIR-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; PAIR-NEXT: .cfi_offset w29, -16
-; PAIR-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
+; PAIR-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
; PAIR-NEXT: //APP
; PAIR-NEXT: //NO_APP
; PAIR-NEXT: ptrue pn9.b
@@ -421,10 +421,10 @@ define aarch64_sve_vector_pcs void @test_clobbers_z_regs() {
; NOPAIR-NEXT: addvl sp, sp, #-2
; NOPAIR-NEXT: str z9, [sp] // 16-byte Folded Spill
; NOPAIR-NEXT: str z8, [sp, #1, mul vl] // 16-byte Folded Spill
-; NOPAIR-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; NOPAIR-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; NOPAIR-NEXT: .cfi_offset w29, -16
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
; NOPAIR-NEXT: //APP
; NOPAIR-NEXT: //NO_APP
; NOPAIR-NEXT: ldr z9, [sp] // 16-byte Folded Reload
@@ -440,10 +440,10 @@ define aarch64_sve_vector_pcs void @test_clobbers_z_regs() {
; PAIR-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
; PAIR-NEXT: str z9, [sp, #1, mul vl] // 16-byte Folded Spill
; PAIR-NEXT: str z8, [sp, #2, mul vl] // 16-byte Folded Spill
-; PAIR-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; PAIR-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; PAIR-NEXT: .cfi_offset w29, -16
-; PAIR-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
+; PAIR-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
; PAIR-NEXT: //APP
; PAIR-NEXT: //NO_APP
; PAIR-NEXT: ldr p8, [sp, #7, mul vl] // 2-byte Folded Reload
@@ -494,10 +494,10 @@ define aarch64_sve_vector_pcs void @test_clobbers_2_z_regs_negative() {
; NOPAIR-NEXT: addvl sp, sp, #-2
; NOPAIR-NEXT: str z10, [sp] // 16-byte Folded Spill
; NOPAIR-NEXT: str z8, [sp, #1, mul vl] // 16-byte Folded Spill
-; NOPAIR-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; NOPAIR-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; NOPAIR-NEXT: .cfi_offset w29, -16
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; NOPAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 16 * VG
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; NOPAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 16 * VG - 16
; NOPAIR-NEXT: //APP
; NOPAIR-NEXT: //NO_APP
; NOPAIR-NEXT: ldr z10, [sp] // 16-byte Folded Reload
@@ -512,10 +512,10 @@ define aarch64_sve_vector_pcs void @test_clobbers_2_z_regs_negative() {
; PAIR-NEXT: addvl sp, sp, #-2
; PAIR-NEXT: str z10, [sp] // 16-byte Folded Spill
; PAIR-NEXT: str z8, [sp, #1, mul vl] // 16-byte Folded Spill
-; PAIR-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; PAIR-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; PAIR-NEXT: .cfi_offset w29, -16
-; PAIR-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; PAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 16 * VG
+; PAIR-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; PAIR-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 16 * VG - 16
; PAIR-NEXT: //APP
; PAIR-NEXT: //NO_APP
; PAIR-NEXT: ldr z10, [sp] // 16-byte Folded Reload
@@ -536,7 +536,7 @@ define aarch64_sve_vector_pcs void @test_clobbers_p_reg_negative() {
; NOPAIR-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; NOPAIR-NEXT: addvl sp, sp, #-1
; NOPAIR-NEXT: str p10, [sp, #7, mul vl] // 2-byte Folded Spill
-; NOPAIR-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; NOPAIR-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; NOPAIR-NEXT: .cfi_offset w29, -16
; NOPAIR-NEXT: //APP
; NOPAIR-NEXT: //NO_APP
@@ -550,7 +550,7 @@ define aarch64_sve_vector_pcs void @test_clobbers_p_reg_negative() {
; PAIR-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; PAIR-NEXT: addvl sp, sp, #-1
; PAIR-NEXT: str p10, [sp, #7, mul vl] // 2-byte Folded Spill
-; PAIR-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; PAIR-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; PAIR-NEXT: .cfi_offset w29, -16
; PAIR-NEXT: //APP
; PAIR-NEXT: //NO_APP
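
The register-save escapes (leading byte 0x10, DW_CFA_expression) follow the
same scheme, except the opcode is followed by a ULEB128 DWARF register
number and the expression length, and the CFA is the implicit initial stack
entry. Decoding the new $d8 rule above by hand (again assuming the standard
AArch64 DWARF numbering, where 0x48 = 72 is d8 and 46 is VG):

  0x10              DW_CFA_expression
  0x48              register 72 ($d8)
  0x09              expression length: 9 bytes
  0x92 0x2e 0x00    DW_OP_bregx   VG + 0
  0x11 0x78         DW_OP_consts  -8
  0x1e              DW_OP_mul     -> -8 * VG
  0x22              DW_OP_plus    -> CFA - 8 * VG
  0x40              DW_OP_lit16
  0x1c              DW_OP_minus   -> CFA - 8 * VG - 16

which matches the "$d8 @ cfa - 8 * VG - 16" annotation emitted alongside
the bytes.
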
diff --git a/llvm/test/CodeGen/AArch64/sve-calling-convention-mixed.ll b/llvm/test/CodeGen/AArch64/sve-calling-convention-mixed.ll
index 5e4c8916cbbdb..90660515e4255 100644
--- a/llvm/test/CodeGen/AArch64/sve-calling-convention-mixed.ll
+++ b/llvm/test/CodeGen/AArch64/sve-calling-convention-mixed.ll
@@ -438,7 +438,7 @@ define void @non_sve_caller_non_sve_callee_high_range() {
; CHECK: // %bb.0:
; CHECK-NEXT: stp x29, x30, [sp, #-16]! // 16-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w30, -8
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: movi d0, #0000000000000000
@@ -464,7 +464,7 @@ define void @non_sve_caller_high_range_non_sve_callee_high_range(float %f0, floa
; CHECK: // %bb.0:
; CHECK-NEXT: stp x29, x30, [sp, #-16]! // 16-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w30, -8
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: movi d0, #0000000000000000
@@ -523,17 +523,17 @@ define <vscale x 4 x float> @sve_caller_non_sve_callee_high_range(<vscale x 4 x
; CHECK-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-3
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xa8, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 168 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xa8, 0x01, 0x1e, 0x22 // sp + 16 + 168 * VG
; CHECK-NEXT: .cfi_offset w30, -8
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; CHECK-NEXT: mov z25.d, z0.d
; CHECK-NEXT: str z0, [sp] // 16-byte Folded Spill
; CHECK-NEXT: movi d0, #0000000000000000
@@ -621,17 +621,17 @@ define <vscale x 4 x float> @sve_ret_caller_non_sve_callee_high_range() {
; CHECK-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xa0, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 160 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xa0, 0x01, 0x1e, 0x22 // sp + 16 + 160 * VG
; CHECK-NEXT: .cfi_offset w30, -8
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; CHECK-NEXT: movi d0, #0000000000000000
; CHECK-NEXT: fmov s1, #1.00000000
; CHECK-NEXT: addvl x0, sp, #1
@@ -686,7 +686,7 @@ define void @verify_all_operands_are_initialised() {
; CHECK-NEXT: stp x29, x30, [sp, #-16]! // 16-byte Folded Spill
; CHECK-NEXT: sub sp, sp, #16
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 32 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x20, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 32 + 8 * VG
; CHECK-NEXT: .cfi_offset w30, -8
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: movi d0, #0000000000000000
diff --git a/llvm/test/CodeGen/AArch64/sve-extract-fixed-from-scalable-vector.ll b/llvm/test/CodeGen/AArch64/sve-extract-fixed-from-scalable-vector.ll
index d02aa061b25d9..6c6a691760af3 100644
--- a/llvm/test/CodeGen/AArch64/sve-extract-fixed-from-scalable-vector.ll
+++ b/llvm/test/CodeGen/AArch64/sve-extract-fixed-from-scalable-vector.ll
@@ -8,7 +8,7 @@ define <4 x i32> @extract_v4i32_nxv16i32_12(<vscale x 16 x i32> %arg) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: str z3, [sp, #3, mul vl]
; CHECK-NEXT: str z2, [sp, #2, mul vl]
@@ -27,7 +27,7 @@ define <8 x i16> @extract_v8i16_nxv32i16_8(<vscale x 32 x i16> %arg) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: str z1, [sp, #1, mul vl]
; CHECK-NEXT: str z0, [sp]
@@ -44,7 +44,7 @@ define <4 x i16> @extract_v4i16_nxv32i16_8(<vscale x 32 x i16> %arg) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: str z3, [sp, #3, mul vl]
; CHECK-NEXT: str z2, [sp, #2, mul vl]
@@ -65,7 +65,7 @@ define <2 x i16> @extract_v2i16_nxv32i16_8(<vscale x 32 x i16> %arg) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-8
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc0, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 64 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xc0, 0x00, 0x1e, 0x22 // sp + 16 + 64 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov x8, sp
; CHECK-NEXT: str z3, [sp, #3, mul vl]
@@ -94,7 +94,7 @@ define <2 x i64> @extract_v2i64_nxv8i64_8(<vscale x 8 x i64> %arg) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: cnth x8
; CHECK-NEXT: mov w9, #8 // =0x8
@@ -120,7 +120,7 @@ define <4 x float> @extract_v4f32_nxv16f32_12(<vscale x 16 x float> %arg) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: str z3, [sp, #3, mul vl]
; CHECK-NEXT: str z2, [sp, #2, mul vl]
@@ -168,7 +168,7 @@ define <4 x i1> @extract_v4i1_nxv32i1_16(<vscale x 32 x i1> %arg) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-8
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc0, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 64 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xc0, 0x00, 0x1e, 0x22 // sp + 16 + 64 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov z0.b, p1/z, #1 // =0x1
; CHECK-NEXT: mov z1.b, p0/z, #1 // =0x1
@@ -224,7 +224,7 @@ define <4 x i3> @extract_v4i3_nxv32i3_16(<vscale x 32 x i3> %arg) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-8
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc0, 0x00, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 64 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xc0, 0x00, 0x1e, 0x22 // sp + 16 + 64 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov x8, sp
; CHECK-NEXT: str z1, [sp, #1, mul vl]
@@ -271,7 +271,7 @@ define <4 x i64> @extract_v4i64_nxv8i64_0(<vscale x 8 x i64> %arg) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: str z1, [sp, #1, mul vl]
; CHECK-NEXT: str z0, [sp]
diff --git a/llvm/test/CodeGen/AArch64/sve-extract-scalable-vector.ll b/llvm/test/CodeGen/AArch64/sve-extract-scalable-vector.ll
index cbede1bf8bb74..4aaa25e5e66c5 100644
--- a/llvm/test/CodeGen/AArch64/sve-extract-scalable-vector.ll
+++ b/llvm/test/CodeGen/AArch64/sve-extract-scalable-vector.ll
@@ -63,7 +63,7 @@ define <vscale x 14 x i1> @extract_nxv14i1_nxv28i1_14(<vscale x 28 x i1> %in) uw
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: punpkhi p2.h, p1.b
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: punpklo p1.h, p1.b
diff --git a/llvm/test/CodeGen/AArch64/sve-fp-reduce-fadda.ll b/llvm/test/CodeGen/AArch64/sve-fp-reduce-fadda.ll
index 4b93900c7d272..8750867c56731 100644
--- a/llvm/test/CodeGen/AArch64/sve-fp-reduce-fadda.ll
+++ b/llvm/test/CodeGen/AArch64/sve-fp-reduce-fadda.ll
@@ -49,7 +49,7 @@ define half @fadda_nxv6f16(<vscale x 6 x half> %v, half %s) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov w8, #32768 // =0x8000
; CHECK-NEXT: ptrue p0.d
@@ -73,7 +73,7 @@ define half @fadda_nxv10f16(<vscale x 10 x half> %v, half %s) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-3
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: ptrue p0.h
; CHECK-NEXT: // kill: def $h2 killed $h2 def $z2
diff --git a/llvm/test/CodeGen/AArch64/sve-fptosi-sat.ll b/llvm/test/CodeGen/AArch64/sve-fptosi-sat.ll
index 1b6b92af8c64a..43744092a1348 100644
--- a/llvm/test/CodeGen/AArch64/sve-fptosi-sat.ll
+++ b/llvm/test/CodeGen/AArch64/sve-fptosi-sat.ll
@@ -254,7 +254,7 @@ define <vscale x 8 x i32> @test_signed_v8f64_v8i32(<vscale x 8 x double> %f) {
; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov x8, #-4476578029606273024 // =0xc1e0000000000000
; CHECK-NEXT: ptrue p0.d
@@ -341,7 +341,7 @@ define <vscale x 8 x i16> @test_signed_v8f64_v8i16(<vscale x 8 x double> %f) {
; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov x8, #-4548635623644200960 // =0xc0e0000000000000
; CHECK-NEXT: ptrue p0.d
diff --git a/llvm/test/CodeGen/AArch64/sve-fptoui-sat.ll b/llvm/test/CodeGen/AArch64/sve-fptoui-sat.ll
index b3aefb8460985..1df28198711e1 100644
--- a/llvm/test/CodeGen/AArch64/sve-fptoui-sat.ll
+++ b/llvm/test/CodeGen/AArch64/sve-fptoui-sat.ll
@@ -208,7 +208,7 @@ define <vscale x 8 x i32> @test_signed_v8f64_v8i32(<vscale x 8 x double> %f) {
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: ptrue p0.d
; CHECK-NEXT: mov x8, #281474974613504 // =0xffffffe00000
@@ -275,7 +275,7 @@ define <vscale x 8 x i16> @test_signed_v8f64_v8i16(<vscale x 8 x double> %f) {
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: ptrue p0.d
; CHECK-NEXT: mov x8, #281337537757184 // =0xffe000000000
diff --git a/llvm/test/CodeGen/AArch64/sve-insert-element.ll b/llvm/test/CodeGen/AArch64/sve-insert-element.ll
index 7f558e32ae397..8ca005a88add3 100644
--- a/llvm/test/CodeGen/AArch64/sve-insert-element.ll
+++ b/llvm/test/CodeGen/AArch64/sve-insert-element.ll
@@ -588,7 +588,7 @@ define <vscale x 32 x i1> @test_predicate_insert_32xi1(<vscale x 32 x i1> %val,
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: rdvl x8, #2
; CHECK-NEXT: mov z0.b, p1/z, #1 // =0x1
; CHECK-NEXT: mov z1.b, p0/z, #1 // =0x1
diff --git a/llvm/test/CodeGen/AArch64/sve-insert-vector.ll b/llvm/test/CodeGen/AArch64/sve-insert-vector.ll
index dcf3317a98b99..73c783d4735f8 100644
--- a/llvm/test/CodeGen/AArch64/sve-insert-vector.ll
+++ b/llvm/test/CodeGen/AArch64/sve-insert-vector.ll
@@ -186,7 +186,7 @@ define void @insert_v2i64_nxv16i64(<2 x i64> %sv0, <2 x i64> %sv1, ptr %out) uwt
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: // kill: def $q0 killed $q0 def $z0
; CHECK-NEXT: str z0, [sp]
; CHECK-NEXT: str q1, [sp, #32]
@@ -229,7 +229,7 @@ define void @insert_v2i64_nxv16i64_lo2(ptr %psv, ptr %out) uwtable {
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: ldr q0, [x0]
; CHECK-NEXT: str q0, [sp, #16]
; CHECK-NEXT: ldr z0, [sp, #1, mul vl]
@@ -896,7 +896,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_0(<vscale x 16 x i1> %vec, <vsc
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpklo p2.h, p0.b
; CHECK-NEXT: punpkhi p0.h, p0.b
@@ -923,7 +923,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_1(<vscale x 16 x i1> %vec, <vsc
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpklo p2.h, p0.b
; CHECK-NEXT: punpkhi p0.h, p0.b
@@ -950,7 +950,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_2(<vscale x 16 x i1> %vec, <vsc
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpklo p2.h, p0.b
; CHECK-NEXT: punpkhi p0.h, p0.b
@@ -977,7 +977,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_3(<vscale x 16 x i1> %vec, <vsc
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpklo p2.h, p0.b
; CHECK-NEXT: punpkhi p0.h, p0.b
@@ -1004,7 +1004,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_4(<vscale x 16 x i1> %vec, <vsc
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpklo p2.h, p0.b
; CHECK-NEXT: punpkhi p0.h, p0.b
@@ -1031,7 +1031,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_5(<vscale x 16 x i1> %vec, <vsc
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpklo p2.h, p0.b
; CHECK-NEXT: punpkhi p0.h, p0.b
@@ -1058,7 +1058,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_6(<vscale x 16 x i1> %vec, <vsc
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpklo p2.h, p0.b
; CHECK-NEXT: punpkhi p0.h, p0.b
@@ -1085,7 +1085,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_7(<vscale x 16 x i1> %vec, <vsc
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpklo p2.h, p0.b
; CHECK-NEXT: punpkhi p0.h, p0.b
@@ -1112,7 +1112,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_8(<vscale x 16 x i1> %vec, <vsc
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpkhi p2.h, p0.b
; CHECK-NEXT: punpklo p0.h, p0.b
@@ -1139,7 +1139,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_9(<vscale x 16 x i1> %vec, <vsc
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpkhi p2.h, p0.b
; CHECK-NEXT: punpklo p0.h, p0.b
@@ -1166,7 +1166,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_10(<vscale x 16 x i1> %vec, <vs
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpkhi p2.h, p0.b
; CHECK-NEXT: punpklo p0.h, p0.b
@@ -1193,7 +1193,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_11(<vscale x 16 x i1> %vec, <vs
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpkhi p2.h, p0.b
; CHECK-NEXT: punpklo p0.h, p0.b
@@ -1220,7 +1220,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_12(<vscale x 16 x i1> %vec, <vs
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpkhi p2.h, p0.b
; CHECK-NEXT: punpklo p0.h, p0.b
@@ -1247,7 +1247,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_13(<vscale x 16 x i1> %vec, <vs
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpkhi p2.h, p0.b
; CHECK-NEXT: punpklo p0.h, p0.b
@@ -1274,7 +1274,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_14(<vscale x 16 x i1> %vec, <vs
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpkhi p2.h, p0.b
; CHECK-NEXT: punpklo p0.h, p0.b
@@ -1301,7 +1301,7 @@ define <vscale x 16 x i1> @insert_nxv1i1_nxv16i1_15(<vscale x 16 x i1> %vec, <vs
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpkhi p2.h, p0.b
; CHECK-NEXT: punpklo p0.h, p0.b
diff --git a/llvm/test/CodeGen/AArch64/sve-ldnf1.mir b/llvm/test/CodeGen/AArch64/sve-ldnf1.mir
index 6d094259c55d9..2a7e8a43c6dcb 100644
--- a/llvm/test/CodeGen/AArch64/sve-ldnf1.mir
+++ b/llvm/test/CodeGen/AArch64/sve-ldnf1.mir
@@ -41,13 +41,13 @@ body: |
liveins: $p0
; CHECK-LABEL: name: testcase_positive_offset
- ; CHECK: liveins: $p0
+ ; CHECK: liveins: $p0, $fp
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: early-clobber $sp = frame-setup STRXpre killed $fp, $sp, -16 :: (store (s64) into %stack.2)
; CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 16
; CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $w29, -16
- ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4
- ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22
+ ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4, implicit $vg
+ ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22
; CHECK-NEXT: renamable $z0 = LDNF1B_IMM renamable $p0, $sp, 7, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
; CHECK-NEXT: renamable $z0 = LDNF1B_H_IMM renamable $p0, $sp, 7, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
; CHECK-NEXT: renamable $z0 = LDNF1B_S_IMM renamable $p0, $sp, 7, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
@@ -64,7 +64,7 @@ body: |
; CHECK-NEXT: renamable $z0 = LDNF1W_D_IMM renamable $p0, $sp, 7, implicit $ffr, implicit-def $ffr :: (load (s32) from %ir.object, align 8)
; CHECK-NEXT: renamable $z0 = LDNF1SW_D_IMM renamable $p0, $sp, 7, implicit $ffr, implicit-def $ffr :: (load (s32) from %ir.object, align 8)
; CHECK-NEXT: renamable $z0 = LDNF1D_IMM renamable $p0, $sp, 7, implicit $ffr, implicit-def $ffr :: (load (s64) from %ir.object)
- ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4
+ ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa $wsp, 16
; CHECK-NEXT: early-clobber $sp, $fp = frame-destroy LDRXpost $sp, 16 :: (load (s64) from %stack.2)
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa_offset 0
@@ -100,13 +100,13 @@ body: |
liveins: $p0
; CHECK-LABEL: name: testcase_negative_offset
- ; CHECK: liveins: $p0
+ ; CHECK: liveins: $p0, $fp
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: early-clobber $sp = frame-setup STRXpre killed $fp, $sp, -16 :: (store (s64) into %stack.2)
; CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 16
; CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $w29, -16
- ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4
- ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22
+ ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4, implicit $vg
+ ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22
; CHECK-NEXT: renamable $z0 = LDNF1B_IMM renamable $p0, $sp, -8, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
; CHECK-NEXT: renamable $z0 = LDNF1B_H_IMM renamable $p0, $sp, -8, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
; CHECK-NEXT: renamable $z0 = LDNF1B_S_IMM renamable $p0, $sp, -8, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
@@ -123,7 +123,7 @@ body: |
; CHECK-NEXT: renamable $z0 = LDNF1W_D_IMM renamable $p0, $sp, -8, implicit $ffr, implicit-def $ffr :: (load (s32) from %ir.object, align 8)
; CHECK-NEXT: renamable $z0 = LDNF1SW_D_IMM renamable $p0, $sp, -8, implicit $ffr, implicit-def $ffr :: (load (s32) from %ir.object, align 8)
; CHECK-NEXT: renamable $z0 = LDNF1D_IMM renamable $p0, $sp, -8, implicit $ffr, implicit-def $ffr :: (load (s64) from %ir.object)
- ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4
+ ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa $wsp, 16
; CHECK-NEXT: early-clobber $sp, $fp = frame-destroy LDRXpost $sp, 16 :: (load (s64) from %stack.2)
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa_offset 0
@@ -159,44 +159,44 @@ body: |
liveins: $p0
; CHECK-LABEL: name: testcase_positive_offset_out_of_range
- ; CHECK: liveins: $p0
+ ; CHECK: liveins: $p0, $fp
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: early-clobber $sp = frame-setup STRXpre killed $fp, $sp, -16 :: (store (s64) into %stack.2)
; CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 16
; CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $w29, -16
- ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4
- ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1
+ ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4, implicit $vg
+ ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1B_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 4
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1B_H_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 2
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 2, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1B_S_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 1
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1B_D_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 4
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1SB_H_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 2
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 2, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1SB_S_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 1
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1SB_D_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1H_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s16) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 4
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1H_S_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s16) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 2
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 2, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1H_D_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s16) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 4
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1SH_S_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s16) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 2
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 2, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1SH_D_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s16) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1W_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s32) from %ir.object, align 8)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 4
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1W_D_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s32) from %ir.object, align 8)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 4
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1SW_D_IMM renamable $p0, killed $x8, 7, implicit $ffr, implicit-def $ffr :: (load (s32) from %ir.object, align 8)
- ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4
+ ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa $wsp, 16
; CHECK-NEXT: early-clobber $sp, $fp = frame-destroy LDRXpost $sp, 16 :: (load (s64) from %stack.2)
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa_offset 0
@@ -231,44 +231,44 @@ body: |
liveins: $p0
; CHECK-LABEL: name: testcase_negative_offset_out_of_range
- ; CHECK: liveins: $p0
+ ; CHECK: liveins: $p0, $fp
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: early-clobber $sp = frame-setup STRXpre killed $fp, $sp, -16 :: (store (s64) into %stack.2)
; CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 16
; CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $w29, -16
- ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4
- ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1
+ ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4, implicit $vg
+ ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1B_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -4
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -4, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1B_H_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -2
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -2, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1B_S_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -1
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1B_D_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -4
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -4, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1SB_H_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -2
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -2, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1SB_S_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -1
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1SB_D_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1H_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s16) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -4
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -4, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1H_S_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s16) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -2
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -2, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1H_D_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s16) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -4
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -4, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1SH_S_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s16) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -2
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -2, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1SH_D_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s16) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1W_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s32) from %ir.object, align 8)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -4
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -4, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1W_D_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s32) from %ir.object, align 8)
- ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -4
+ ; CHECK-NEXT: $x8 = ADDPL_XXI $sp, -4, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNF1SW_D_IMM renamable $p0, killed $x8, -8, implicit $ffr, implicit-def $ffr :: (load (s32) from %ir.object, align 8)
- ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4
+ ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa $wsp, 16
; CHECK-NEXT: early-clobber $sp, $fp = frame-destroy LDRXpost $sp, 16 :: (load (s64) from %stack.2)
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa_offset 0
diff --git a/llvm/test/CodeGen/AArch64/sve-ldstnt1.mir b/llvm/test/CodeGen/AArch64/sve-ldstnt1.mir
index 1352b9ddcacdf..863d4d1975e44 100644
--- a/llvm/test/CodeGen/AArch64/sve-ldstnt1.mir
+++ b/llvm/test/CodeGen/AArch64/sve-ldstnt1.mir
@@ -41,13 +41,13 @@ body: |
liveins: $p0
; CHECK-LABEL: name: testcase_positive_offset
- ; CHECK: liveins: $p0
+ ; CHECK: liveins: $p0, $fp
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: early-clobber $sp = frame-setup STRXpre killed $fp, $sp, -16 :: (store (s64) into %stack.2)
; CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 16
; CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $w29, -16
- ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4
- ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22
+ ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4, implicit $vg
+ ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22
; CHECK-NEXT: renamable $z0 = LDNT1B_ZRI renamable $p0, $sp, 7 :: (load (s8) from %ir.object, align 2)
; CHECK-NEXT: renamable $z0 = LDNT1H_ZRI renamable $p0, $sp, 7 :: (load (s16) from %ir.object)
; CHECK-NEXT: renamable $z0 = LDNT1W_ZRI renamable $p0, $sp, 7 :: (load (s32) from %ir.object, align 8)
@@ -56,7 +56,7 @@ body: |
; CHECK-NEXT: STNT1H_ZRI renamable $z0, renamable $p0, $sp, 7 :: (store (s16) into %ir.object, align 8)
; CHECK-NEXT: STNT1W_ZRI renamable $z0, renamable $p0, $sp, 7 :: (store (s32) into %ir.object, align 8)
; CHECK-NEXT: STNT1D_ZRI renamable $z0, renamable $p0, $sp, 7 :: (store (s64) into %ir.object)
- ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4
+ ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa $wsp, 16
; CHECK-NEXT: early-clobber $sp, $fp = frame-destroy LDRXpost $sp, 16 :: (load (s64) from %stack.2)
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa_offset 0
@@ -84,13 +84,13 @@ body: |
liveins: $p0
; CHECK-LABEL: name: testcase_negative_offset
- ; CHECK: liveins: $p0
+ ; CHECK: liveins: $p0, $fp
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: early-clobber $sp = frame-setup STRXpre killed $fp, $sp, -16 :: (store (s64) into %stack.2)
; CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 16
; CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $w29, -16
- ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4
- ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22
+ ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4, implicit $vg
+ ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22
; CHECK-NEXT: renamable $z0 = LDNT1B_ZRI renamable $p0, $sp, -8 :: (load (s8) from %ir.object, align 2)
; CHECK-NEXT: renamable $z0 = LDNT1H_ZRI renamable $p0, $sp, -8 :: (load (s16) from %ir.object)
; CHECK-NEXT: renamable $z0 = LDNT1W_ZRI renamable $p0, $sp, -8 :: (load (s32) from %ir.object)
@@ -99,7 +99,7 @@ body: |
; CHECK-NEXT: STNT1H_ZRI renamable $z0, renamable $p0, $sp, -8 :: (store (s16) into %ir.object, align 8)
; CHECK-NEXT: STNT1W_ZRI renamable $z0, renamable $p0, $sp, -8 :: (store (s32) into %ir.object, align 8)
; CHECK-NEXT: STNT1D_ZRI renamable $z0, renamable $p0, $sp, -8 :: (store (s64) into %ir.object)
- ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4
+ ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa $wsp, 16
; CHECK-NEXT: early-clobber $sp, $fp = frame-destroy LDRXpost $sp, 16 :: (load (s64) from %stack.2)
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa_offset 0
@@ -127,30 +127,30 @@ body: |
liveins: $p0
; CHECK-LABEL: name: testcase_positive_offset_out_of_range
- ; CHECK: liveins: $p0
+ ; CHECK: liveins: $p0, $fp
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: early-clobber $sp = frame-setup STRXpre killed $fp, $sp, -16 :: (store (s64) into %stack.2)
; CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 16
; CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $w29, -16
- ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4
- ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1
+ ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4, implicit $vg
+ ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNT1B_ZRI renamable $p0, killed $x8, 7 :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNT1H_ZRI renamable $p0, killed $x8, 7 :: (load (s16) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNT1W_ZRI renamable $p0, killed $x8, 7 :: (load (s32) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNT1D_ZRI renamable $p0, killed $x8, 7 :: (load (s64) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1, implicit $vg
; CHECK-NEXT: STNT1B_ZRI renamable $z0, renamable $p0, killed $x8, 7 :: (store (s8) into %ir.object, align 8)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1, implicit $vg
; CHECK-NEXT: STNT1H_ZRI renamable $z0, renamable $p0, killed $x8, 7 :: (store (s16) into %ir.object, align 8)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1, implicit $vg
; CHECK-NEXT: STNT1W_ZRI renamable $z0, renamable $p0, killed $x8, 7 :: (store (s32) into %ir.object, align 8)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, 1, implicit $vg
; CHECK-NEXT: STNT1D_ZRI renamable $z0, renamable $p0, killed $x8, 7 :: (store (s64) into %ir.object)
- ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4
+ ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa $wsp, 16
; CHECK-NEXT: early-clobber $sp, $fp = frame-destroy LDRXpost $sp, 16 :: (load (s64) from %stack.2)
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa_offset 0
@@ -178,30 +178,30 @@ body: |
liveins: $p0
; CHECK-LABEL: name: testcase_negative_offset_out_of_range
- ; CHECK: liveins: $p0
+ ; CHECK: liveins: $p0, $fp
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: early-clobber $sp = frame-setup STRXpre killed $fp, $sp, -16 :: (store (s64) into %stack.2)
; CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 16
; CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $w29, -16
- ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4
- ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1
+ ; CHECK-NEXT: $sp = frame-setup ADDVL_XXI $sp, -4, implicit $vg
+ ; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNT1B_ZRI renamable $p0, killed $x8, -8 :: (load (s8) from %ir.object, align 2)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNT1H_ZRI renamable $p0, killed $x8, -8 :: (load (s16) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNT1W_ZRI renamable $p0, killed $x8, -8 :: (load (s32) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1, implicit $vg
; CHECK-NEXT: renamable $z0 = LDNT1D_ZRI renamable $p0, killed $x8, -8 :: (load (s64) from %ir.object)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1, implicit $vg
; CHECK-NEXT: STNT1B_ZRI renamable $z0, renamable $p0, killed $x8, -8 :: (store (s8) into %ir.object, align 8)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1, implicit $vg
; CHECK-NEXT: STNT1H_ZRI renamable $z0, renamable $p0, killed $x8, -8 :: (store (s16) into %ir.object, align 8)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1, implicit $vg
; CHECK-NEXT: STNT1W_ZRI renamable $z0, renamable $p0, killed $x8, -8 :: (store (s32) into %ir.object, align 8)
- ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1
+ ; CHECK-NEXT: $x8 = ADDVL_XXI $sp, -1, implicit $vg
; CHECK-NEXT: STNT1D_ZRI renamable $z0, renamable $p0, killed $x8, -8 :: (store (s64) into %ir.object)
- ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4
+ ; CHECK-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 4, implicit $vg
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa $wsp, 16
; CHECK-NEXT: early-clobber $sp, $fp = frame-destroy LDRXpost $sp, 16 :: (load (s64) from %stack.2)
; CHECK-NEXT: frame-destroy CFI_INSTRUCTION def_cfa_offset 0
diff --git a/llvm/test/CodeGen/AArch64/sve-llrint.ll b/llvm/test/CodeGen/AArch64/sve-llrint.ll
index b0198cf9d1247..12d49183edea4 100644
--- a/llvm/test/CodeGen/AArch64/sve-llrint.ll
+++ b/llvm/test/CodeGen/AArch64/sve-llrint.ll
@@ -88,7 +88,7 @@ define <vscale x 8 x i64> @llrint_v8i64_v8f16(<vscale x 8 x half> %x) {
; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: uunpklo z1.s, z0.h
; CHECK-NEXT: uunpkhi z0.s, z0.h
@@ -161,11 +161,11 @@ define <vscale x 16 x i64> @llrint_v16i64_v16f16(<vscale x 16 x half> %x) {
; CHECK-NEXT: str z10, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z9, [sp, #2, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #3, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
; CHECK-NEXT: uunpklo z2.s, z0.h
; CHECK-NEXT: uunpkhi z0.s, z0.h
; CHECK-NEXT: mov w8, #64511 // =0xfbff
@@ -299,16 +299,16 @@ define <vscale x 32 x i64> @llrint_v32i64_v32f16(<vscale x 32 x half> %x) {
; CHECK-NEXT: str z9, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; CHECK-NEXT: uunpklo z4.s, z0.h
; CHECK-NEXT: uunpkhi z0.s, z0.h
; CHECK-NEXT: mov w9, #64511 // =0xfbff
@@ -614,7 +614,7 @@ define <vscale x 8 x i64> @llrint_v8i64_v8f32(<vscale x 8 x float> %x) {
; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: uunpklo z2.d, z0.s
; CHECK-NEXT: uunpkhi z0.d, z0.s
@@ -684,11 +684,11 @@ define <vscale x 16 x i64> @llrint_v16i64_v16f32(<vscale x 16 x float> %x) {
; CHECK-NEXT: str z10, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z9, [sp, #2, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #3, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
; CHECK-NEXT: uunpklo z4.d, z0.s
; CHECK-NEXT: uunpkhi z0.d, z0.s
; CHECK-NEXT: mov w8, #-553648128 // =0xdf000000
@@ -818,16 +818,16 @@ define <vscale x 32 x i64> @llrint_v32i64_v32f32(<vscale x 32 x float> %x) {
; CHECK-NEXT: str z9, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; CHECK-NEXT: uunpklo z24.d, z0.s
; CHECK-NEXT: uunpkhi z25.d, z0.s
; CHECK-NEXT: mov w9, #-553648128 // =0xdf000000
@@ -1125,7 +1125,7 @@ define <vscale x 8 x i64> @llrint_v8i64_v8f64(<vscale x 8 x double> %x) {
; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: ptrue p0.d
; CHECK-NEXT: mov x8, #-4332462841530417152 // =0xc3e0000000000000
@@ -1190,10 +1190,10 @@ define <vscale x 16 x i64> @llrint_v16f64(<vscale x 16 x double> %x) {
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z9, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue p0.d
; CHECK-NEXT: mov x8, #-4332462841530417152 // =0xc3e0000000000000
; CHECK-NEXT: mov z26.d, #0x8000000000000000
@@ -1312,16 +1312,16 @@ define <vscale x 32 x i64> @llrint_v32f64(<vscale x 32 x double> %x) {
; CHECK-NEXT: str z9, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; CHECK-NEXT: ldr z0, [x0]
; CHECK-NEXT: ptrue p0.d
; CHECK-NEXT: ldr z2, [x0, #2, mul vl]
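
For reference, the shortened escapes decode directly from the standard DWARF
opcode values; register 46 is the AArch64 VG pseudo-register, and for
DW_CFA_expression the CFA is the initial stack entry. The two recurring forms,
worked through byte by byte:

  sp + 16 + 8 * VG  -- CFA rule, now 10 bytes instead of 14:
    0x0f            DW_CFA_def_cfa_expression
    0x08            expression length: 8 bytes
    0x8f 0x10       DW_OP_breg31 +16       -> sp + 16
    0x92 0x2e 0x00  DW_OP_bregx 46 +0      -> VG
    0x38            DW_OP_lit8
    0x1e            DW_OP_mul              -> 8 * VG
    0x22            DW_OP_plus             -> sp + 16 + 8 * VG

  $d8 @ cfa - 8 * VG - 16  -- register rule, now 12 bytes instead of 13:
    0x10            DW_CFA_expression
    0x48            register 72 ($d8)
    0x09            expression length: 9 bytes
    0x92 0x2e 0x00  DW_OP_bregx 46 +0      -> VG
    0x11 0x78       DW_OP_consts -8
    0x1e            DW_OP_mul
    0x22            DW_OP_plus             -> CFA - 8 * VG
    0x40            DW_OP_lit16
    0x1c            DW_OP_minus            -> CFA - 8 * VG - 16
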
diff --git a/llvm/test/CodeGen/AArch64/sve-lrint.ll b/llvm/test/CodeGen/AArch64/sve-lrint.ll
index aa5863901b9d3..58ac53d36f9ae 100644
--- a/llvm/test/CodeGen/AArch64/sve-lrint.ll
+++ b/llvm/test/CodeGen/AArch64/sve-lrint.ll
@@ -89,7 +89,7 @@ define <vscale x 8 x iXLen> @lrint_v8f16(<vscale x 8 x half> %x) {
; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: uunpklo z1.s, z0.h
; CHECK-NEXT: uunpkhi z0.s, z0.h
@@ -162,11 +162,11 @@ define <vscale x 16 x iXLen> @lrint_v16f16(<vscale x 16 x half> %x) {
; CHECK-NEXT: str z10, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z9, [sp, #2, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #3, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
; CHECK-NEXT: uunpklo z2.s, z0.h
; CHECK-NEXT: uunpkhi z0.s, z0.h
; CHECK-NEXT: mov w8, #64511 // =0xfbff
@@ -300,16 +300,16 @@ define <vscale x 32 x iXLen> @lrint_v32f16(<vscale x 32 x half> %x) {
; CHECK-NEXT: str z9, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; CHECK-NEXT: uunpklo z4.s, z0.h
; CHECK-NEXT: uunpkhi z0.s, z0.h
; CHECK-NEXT: mov w9, #64511 // =0xfbff
@@ -615,7 +615,7 @@ define <vscale x 8 x iXLen> @lrint_v8f32(<vscale x 8 x float> %x) {
; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: uunpklo z2.d, z0.s
; CHECK-NEXT: uunpkhi z0.d, z0.s
@@ -685,11 +685,11 @@ define <vscale x 16 x iXLen> @lrint_v16f32(<vscale x 16 x float> %x) {
; CHECK-NEXT: str z10, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z9, [sp, #2, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #3, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
; CHECK-NEXT: uunpklo z4.d, z0.s
; CHECK-NEXT: uunpkhi z0.d, z0.s
; CHECK-NEXT: mov w8, #-553648128 // =0xdf000000
@@ -819,16 +819,16 @@ define <vscale x 32 x iXLen> @lrint_v32f32(<vscale x 32 x float> %x) {
; CHECK-NEXT: str z9, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; CHECK-NEXT: uunpklo z24.d, z0.s
; CHECK-NEXT: uunpkhi z25.d, z0.s
; CHECK-NEXT: mov w9, #-553648128 // =0xdf000000
@@ -1126,7 +1126,7 @@ define <vscale x 8 x iXLen> @lrint_v8f64(<vscale x 8 x double> %x) {
; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: ptrue p0.d
; CHECK-NEXT: mov x8, #-4332462841530417152 // =0xc3e0000000000000
@@ -1191,10 +1191,10 @@ define <vscale x 16 x iXLen> @lrint_v16f64(<vscale x 16 x double> %x) {
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str z9, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #2, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 24 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x48, 0x1e, 0x22 // sp + 16 + 24 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
; CHECK-NEXT: ptrue p0.d
; CHECK-NEXT: mov x8, #-4332462841530417152 // =0xc3e0000000000000
; CHECK-NEXT: mov z26.d, #0x8000000000000000
@@ -1313,16 +1313,16 @@ define <vscale x 32 x iXLen> @lrint_v32f64(<vscale x 32 x double> %x) {
; CHECK-NEXT: str z9, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; CHECK-NEXT: ldr z0, [x0]
; CHECK-NEXT: ptrue p0.d
; CHECK-NEXT: ldr z2, [x0, #2, mul vl]
diff --git a/llvm/test/CodeGen/AArch64/sve-pred-arith.ll b/llvm/test/CodeGen/AArch64/sve-pred-arith.ll
index 6e08606db9537..24df76b1ab25f 100644
--- a/llvm/test/CodeGen/AArch64/sve-pred-arith.ll
+++ b/llvm/test/CodeGen/AArch64/sve-pred-arith.ll
@@ -53,7 +53,7 @@ define aarch64_sve_vector_pcs <vscale x 64 x i1> @add_nxv64i1(<vscale x 64 x i1>
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
@@ -137,7 +137,7 @@ define aarch64_sve_vector_pcs <vscale x 64 x i1> @sub_nxv64i1(<vscale x 64 x i1>
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
diff --git a/llvm/test/CodeGen/AArch64/sve-split-extract-elt.ll b/llvm/test/CodeGen/AArch64/sve-split-extract-elt.ll
index 9a4231a57c61f..0bc8cb8bc5003 100644
--- a/llvm/test/CodeGen/AArch64/sve-split-extract-elt.ll
+++ b/llvm/test/CodeGen/AArch64/sve-split-extract-elt.ll
@@ -20,7 +20,7 @@ define i8 @split_extract_32i8_idx(<vscale x 32 x i8> %a, i32 %idx) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: rdvl x8, #2
; CHECK-NEXT: mov w9, w0
@@ -43,7 +43,7 @@ define i16 @split_extract_16i16_idx(<vscale x 16 x i16> %a, i32 %idx) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: rdvl x8, #1
; CHECK-NEXT: mov w9, w0
@@ -66,7 +66,7 @@ define i32 @split_extract_8i32_idx(<vscale x 8 x i32> %a, i32 %idx) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: cnth x8
; CHECK-NEXT: mov w9, w0
@@ -89,7 +89,7 @@ define i64 @split_extract_8i64_idx(<vscale x 8 x i64> %a, i32 %idx) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: cnth x8
; CHECK-NEXT: mov w9, w0
@@ -134,7 +134,7 @@ define i16 @split_extract_16i16(<vscale x 16 x i16> %a) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: rdvl x8, #1
; CHECK-NEXT: mov w9, #128 // =0x80
@@ -157,7 +157,7 @@ define i32 @split_extract_16i32(<vscale x 16 x i32> %a) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: rdvl x8, #1
; CHECK-NEXT: mov w9, #34464 // =0x86a0
@@ -183,7 +183,7 @@ define i64 @split_extract_4i64(<vscale x 4 x i64> %a) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: cntw x8
; CHECK-NEXT: mov w9, #10 // =0xa
diff --git a/llvm/test/CodeGen/AArch64/sve-split-insert-elt.ll b/llvm/test/CodeGen/AArch64/sve-split-insert-elt.ll
index d7ed42d717937..4ed59bc67db0c 100644
--- a/llvm/test/CodeGen/AArch64/sve-split-insert-elt.ll
+++ b/llvm/test/CodeGen/AArch64/sve-split-insert-elt.ll
@@ -21,7 +21,7 @@ define <vscale x 32 x i8> @split_insert_32i8_idx(<vscale x 32 x i8> %a, i8 %elt,
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: rdvl x8, #2
; CHECK-NEXT: mov x9, sp
@@ -45,7 +45,7 @@ define <vscale x 8 x float> @split_insert_8f32_idx(<vscale x 8 x float> %a, floa
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: cnth x8
; CHECK-NEXT: mov x9, sp
@@ -69,7 +69,7 @@ define <vscale x 8 x i64> @split_insert_8i64_idx(<vscale x 8 x i64> %a, i64 %elt
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: cnth x8
; CHECK-NEXT: mov x9, sp
@@ -130,7 +130,7 @@ define <vscale x 32 x i16> @split_insert_32i16(<vscale x 32 x i16> %a, i16 %elt)
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-4
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 32 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x09, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x20, 0x1e, 0x22 // sp + 16 + 32 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: rdvl x8, #2
; CHECK-NEXT: mov w9, #128 // =0x80
@@ -159,7 +159,7 @@ define <vscale x 8 x i32> @split_insert_8i32(<vscale x 8 x i32> %a, i32 %elt) {
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: cnth x8
; CHECK-NEXT: mov w9, #16960 // =0x4240
diff --git a/llvm/test/CodeGen/AArch64/sve-stack-frame-layout.ll b/llvm/test/CodeGen/AArch64/sve-stack-frame-layout.ll
index c5cf4593cc86d..e0da9b57c6556 100644
--- a/llvm/test/CodeGen/AArch64/sve-stack-frame-layout.ll
+++ b/llvm/test/CodeGen/AArch64/sve-stack-frame-layout.ll
@@ -16,7 +16,7 @@ define i32 @csr_d8_allocnxv4i32i32f64(double %d) "aarch64_pstate_sm_compatible"
; CHECK-NEXT: str x29, [sp, #8] // 8-byte Folded Spill
; CHECK-NEXT: sub sp, sp, #16
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 32 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x20, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 32 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -8
; CHECK-NEXT: .cfi_offset b8, -16
; CHECK-NEXT: mov z1.s, #0 // =0x0
@@ -219,7 +219,7 @@ define i32 @csr_d8_allocnxv4i32i32f64_stackargsi32f64(double %d0, double %d1, do
; CHECK-NEXT: str x29, [sp, #8] // 8-byte Folded Spill
; CHECK-NEXT: sub sp, sp, #16
; CHECK-NEXT: addvl sp, sp, #-1
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x20, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 32 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x20, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 32 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -8
; CHECK-NEXT: .cfi_offset b8, -16
; CHECK-NEXT: mov z1.s, #0 // =0x0
@@ -266,7 +266,7 @@ define i32 @svecc_z8_allocnxv4i32i32f64_fp(double %d, <vscale x 4 x i32> %v) "aa
; CHECK-NEXT: .cfi_def_cfa w29, 16
; CHECK-NEXT: .cfi_offset w30, -8
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
; CHECK-NEXT: mov w0, wzr
; CHECK-NEXT: //APP
; CHECK-NEXT: //NO_APP
@@ -310,7 +310,7 @@ define i32 @svecc_z8_allocnxv4i32i32f64_stackargsi32_fp(double %d, i32 %i0, i32
; CHECK-NEXT: .cfi_def_cfa w29, 16
; CHECK-NEXT: .cfi_offset w30, -8
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
; CHECK-NEXT: mov w0, wzr
; CHECK-NEXT: //APP
; CHECK-NEXT: //NO_APP
@@ -383,7 +383,7 @@ define i32 @svecc_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8> %P3,
; CHECK-NEXT: .cfi_offset w30, -40
; CHECK-NEXT: .cfi_offset w29, -48
; CHECK-NEXT: addvl sp, sp, #-18
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x30, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 48 + 144 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x30, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 48 + 144 * VG
; CHECK-NEXT: str p15, [sp, #4, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p14, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p13, [sp, #6, mul vl] // 2-byte Folded Spill
@@ -412,14 +412,14 @@ define i32 @svecc_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8> %P3,
; CHECK-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 48 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 48 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 48 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 48 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 48 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 48 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 48 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x50, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 48 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d8 @ cfa - 8 * VG - 48
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d9 @ cfa - 16 * VG - 48
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d10 @ cfa - 24 * VG - 48
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d11 @ cfa - 32 * VG - 48
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d12 @ cfa - 40 * VG - 48
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d13 @ cfa - 48 * VG - 48
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d14 @ cfa - 56 * VG - 48
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x11, 0x50, 0x22 // $d15 @ cfa - 64 * VG - 48
; CHECK-NEXT: mov x8, x0
; CHECK-NEXT: //APP
; CHECK-NEXT: //NO_APP
diff --git a/llvm/test/CodeGen/AArch64/sve-trunc.ll b/llvm/test/CodeGen/AArch64/sve-trunc.ll
index 0ec6538947c73..50580cb772937 100644
--- a/llvm/test/CodeGen/AArch64/sve-trunc.ll
+++ b/llvm/test/CodeGen/AArch64/sve-trunc.ll
@@ -115,7 +115,7 @@ define <vscale x 16 x i1> @trunc_i64toi1_split3(<vscale x 16 x i64> %in) {
; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: and z7.d, z7.d, #0x1
; CHECK-NEXT: and z6.d, z6.d, #0x1
diff --git a/llvm/test/CodeGen/AArch64/sve-vector-compress.ll b/llvm/test/CodeGen/AArch64/sve-vector-compress.ll
index 8a504cd739211..198e0a37c56fa 100644
--- a/llvm/test/CodeGen/AArch64/sve-vector-compress.ll
+++ b/llvm/test/CodeGen/AArch64/sve-vector-compress.ll
@@ -105,7 +105,7 @@ define <vscale x 8 x i32> @test_compress_large(<vscale x 8 x i32> %vec, <vscale
; CHECK: // %bb.0:
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: punpklo p2.h, p0.b
; CHECK-NEXT: cnth x9
diff --git a/llvm/test/CodeGen/AArch64/sve2p1-intrinsics-loads.ll b/llvm/test/CodeGen/AArch64/sve2p1-intrinsics-loads.ll
index 0eacac2ca63f5..1dbd7ddf46328 100644
--- a/llvm/test/CodeGen/AArch64/sve2p1-intrinsics-loads.ll
+++ b/llvm/test/CodeGen/AArch64/sve2p1-intrinsics-loads.ll
@@ -276,7 +276,7 @@ define <vscale x 16 x i8> @ld1_x2_i8_z0_taken(target("aarch64.svcount") %pn, ptr
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov p8.b, p0.b
; CHECK-NEXT: ld1b { z2.b, z3.b }, pn8/z, [x0]
@@ -298,7 +298,7 @@ define <vscale x 16 x i8> @ld1_x2_i8_z0_taken_scalar(target("aarch64.svcount") %
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov p8.b, p0.b
; CHECK-NEXT: ld1b { z2.b, z3.b }, pn8/z, [x0, x1]
@@ -585,7 +585,7 @@ define <vscale x 8 x i16> @ld1_x4_i16_z0_taken(target("aarch64.svcount") %pn, pt
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov p8.b, p0.b
; CHECK-NEXT: ld1h { z4.h - z7.h }, pn8/z, [x0]
@@ -607,7 +607,7 @@ define <vscale x 8 x i16> @ld1_x4_i16_z0_taken_scalar(target("aarch64.svcount")
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov p8.b, p0.b
; CHECK-NEXT: ld1h { z4.h - z7.h }, pn8/z, [x0, x1, lsl #1]
@@ -896,7 +896,7 @@ define <vscale x 4 x i32> @ldnt1_x2_i32_z0_taken(target("aarch64.svcount") %pn,
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov p8.b, p0.b
; CHECK-NEXT: ldnt1w { z2.s, z3.s }, pn8/z, [x0]
@@ -918,7 +918,7 @@ define <vscale x 4 x i32> @ldnt1_x2_i32_z0_taken_scalar(target("aarch64.svcount"
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov p8.b, p0.b
; CHECK-NEXT: ldnt1w { z2.s, z3.s }, pn8/z, [x0, x1, lsl #2]
@@ -1205,7 +1205,7 @@ define <vscale x 2 x i64> @ldnt1_x4_i64_z0_taken(target("aarch64.svcount") %pn,
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov p8.b, p0.b
; CHECK-NEXT: ldnt1d { z4.d - z7.d }, pn8/z, [x0]
@@ -1227,7 +1227,7 @@ define <vscale x 2 x i64> @ldnt1_x4_i64_z0_taken_scalar(target("aarch64.svcount"
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
; CHECK-NEXT: str p8, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: mov p8.b, p0.b
; CHECK-NEXT: ldnt1d { z4.d - z7.d }, pn8/z, [x0, x1, lsl #3]
diff --git a/llvm/test/CodeGen/AArch64/unwind-preserved.ll b/llvm/test/CodeGen/AArch64/unwind-preserved.ll
index 822be14faaeb1..7e1f63d822273 100644
--- a/llvm/test/CodeGen/AArch64/unwind-preserved.ll
+++ b/llvm/test/CodeGen/AArch64/unwind-preserved.ll
@@ -13,7 +13,7 @@ define <vscale x 4 x i32> @invoke_callee_may_throw_sve(<vscale x 4 x i32> %v) uw
; CHECK-NEXT: .cfi_offset w30, -8
; CHECK-NEXT: .cfi_offset w29, -16
; CHECK-NEXT: addvl sp, sp, #-18
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; CHECK-NEXT: str p15, [sp, #4, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p14, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p13, [sp, #6, mul vl] // 2-byte Folded Spill
@@ -42,27 +42,27 @@ define <vscale x 4 x i32> @invoke_callee_may_throw_sve(<vscale x 4 x i32> %v) uw
; CHECK-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; CHECK-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; CHECK-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; CHECK-NEXT: addvl sp, sp, #-2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xa0, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 160 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xa0, 0x01, 0x1e, 0x22 // sp + 16 + 160 * VG
; CHECK-NEXT: .cfi_remember_state
; CHECK-NEXT: str z0, [sp] // 16-byte Folded Spill
-; CHECK-NEXT: .Ltmp0:
+; CHECK-NEXT: .Ltmp0: // EH_LABEL
; CHECK-NEXT: bl may_throw_sve
-; CHECK-NEXT: .Ltmp1:
+; CHECK-NEXT: .Ltmp1: // EH_LABEL
; CHECK-NEXT: str z0, [sp, #1, mul vl] // 16-byte Folded Spill
; CHECK-NEXT: b .LBB0_1
; CHECK-NEXT: .LBB0_1: // %.Lcontinue
; CHECK-NEXT: ldr z0, [sp, #1, mul vl] // 16-byte Folded Reload
; CHECK-NEXT: addvl sp, sp, #2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; CHECK-NEXT: ldr z23, [sp, #2, mul vl] // 16-byte Folded Reload
; CHECK-NEXT: ldr z22, [sp, #3, mul vl] // 16-byte Folded Reload
; CHECK-NEXT: ldr z21, [sp, #4, mul vl] // 16-byte Folded Reload
@@ -108,10 +108,10 @@ define <vscale x 4 x i32> @invoke_callee_may_throw_sve(<vscale x 4 x i32> %v) uw
; CHECK-NEXT: ret
; CHECK-NEXT: .LBB0_2: // %.Lunwind
; CHECK-NEXT: .cfi_restore_state
-; CHECK-NEXT: .Ltmp2:
+; CHECK-NEXT: .Ltmp2: // EH_LABEL
; CHECK-NEXT: ldr z0, [sp] // 16-byte Folded Reload
; CHECK-NEXT: addvl sp, sp, #2
-; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; CHECK-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; CHECK-NEXT: ldr z23, [sp, #2, mul vl] // 16-byte Folded Reload
; CHECK-NEXT: ldr z22, [sp, #3, mul vl] // 16-byte Folded Reload
; CHECK-NEXT: ldr z21, [sp, #4, mul vl] // 16-byte Folded Reload
@@ -165,7 +165,7 @@ define <vscale x 4 x i32> @invoke_callee_may_throw_sve(<vscale x 4 x i32> %v) uw
; GISEL-NEXT: .cfi_offset w30, -8
; GISEL-NEXT: .cfi_offset w29, -16
; GISEL-NEXT: addvl sp, sp, #-18
-; GISEL-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; GISEL-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; GISEL-NEXT: str p15, [sp, #4, mul vl] // 2-byte Folded Spill
; GISEL-NEXT: str p14, [sp, #5, mul vl] // 2-byte Folded Spill
; GISEL-NEXT: str p13, [sp, #6, mul vl] // 2-byte Folded Spill
@@ -194,27 +194,27 @@ define <vscale x 4 x i32> @invoke_callee_may_throw_sve(<vscale x 4 x i32> %v) uw
; GISEL-NEXT: str z10, [sp, #15, mul vl] // 16-byte Folded Spill
; GISEL-NEXT: str z9, [sp, #16, mul vl] // 16-byte Folded Spill
; GISEL-NEXT: str z8, [sp, #17, mul vl] // 16-byte Folded Spill
-; GISEL-NEXT: .cfi_escape 0x10, 0x48, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x78, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d8 @ cfa - 16 - 8 * VG
-; GISEL-NEXT: .cfi_escape 0x10, 0x49, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x70, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d9 @ cfa - 16 - 16 * VG
-; GISEL-NEXT: .cfi_escape 0x10, 0x4a, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x68, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d10 @ cfa - 16 - 24 * VG
-; GISEL-NEXT: .cfi_escape 0x10, 0x4b, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x60, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d11 @ cfa - 16 - 32 * VG
-; GISEL-NEXT: .cfi_escape 0x10, 0x4c, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x58, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d12 @ cfa - 16 - 40 * VG
-; GISEL-NEXT: .cfi_escape 0x10, 0x4d, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x50, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d13 @ cfa - 16 - 48 * VG
-; GISEL-NEXT: .cfi_escape 0x10, 0x4e, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x48, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d14 @ cfa - 16 - 56 * VG
-; GISEL-NEXT: .cfi_escape 0x10, 0x4f, 0x0a, 0x11, 0x70, 0x22, 0x11, 0x40, 0x92, 0x2e, 0x00, 0x1e, 0x22 // $d15 @ cfa - 16 - 64 * VG
+; GISEL-NEXT: .cfi_escape 0x10, 0x48, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x78, 0x1e, 0x22, 0x40, 0x1c // $d8 @ cfa - 8 * VG - 16
+; GISEL-NEXT: .cfi_escape 0x10, 0x49, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x70, 0x1e, 0x22, 0x40, 0x1c // $d9 @ cfa - 16 * VG - 16
+; GISEL-NEXT: .cfi_escape 0x10, 0x4a, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x68, 0x1e, 0x22, 0x40, 0x1c // $d10 @ cfa - 24 * VG - 16
+; GISEL-NEXT: .cfi_escape 0x10, 0x4b, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x60, 0x1e, 0x22, 0x40, 0x1c // $d11 @ cfa - 32 * VG - 16
+; GISEL-NEXT: .cfi_escape 0x10, 0x4c, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x58, 0x1e, 0x22, 0x40, 0x1c // $d12 @ cfa - 40 * VG - 16
+; GISEL-NEXT: .cfi_escape 0x10, 0x4d, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x50, 0x1e, 0x22, 0x40, 0x1c // $d13 @ cfa - 48 * VG - 16
+; GISEL-NEXT: .cfi_escape 0x10, 0x4e, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x48, 0x1e, 0x22, 0x40, 0x1c // $d14 @ cfa - 56 * VG - 16
+; GISEL-NEXT: .cfi_escape 0x10, 0x4f, 0x09, 0x92, 0x2e, 0x00, 0x11, 0x40, 0x1e, 0x22, 0x40, 0x1c // $d15 @ cfa - 64 * VG - 16
; GISEL-NEXT: addvl sp, sp, #-2
-; GISEL-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0xa0, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 160 * VG
+; GISEL-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0xa0, 0x01, 0x1e, 0x22 // sp + 16 + 160 * VG
; GISEL-NEXT: .cfi_remember_state
; GISEL-NEXT: str z0, [sp] // 16-byte Folded Spill
-; GISEL-NEXT: .Ltmp0:
+; GISEL-NEXT: .Ltmp0: // EH_LABEL
; GISEL-NEXT: bl may_throw_sve
-; GISEL-NEXT: .Ltmp1:
+; GISEL-NEXT: .Ltmp1: // EH_LABEL
; GISEL-NEXT: str z0, [sp, #1, mul vl] // 16-byte Folded Spill
; GISEL-NEXT: b .LBB0_1
; GISEL-NEXT: .LBB0_1: // %.Lcontinue
; GISEL-NEXT: ldr z0, [sp, #1, mul vl] // 16-byte Folded Reload
; GISEL-NEXT: addvl sp, sp, #2
-; GISEL-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; GISEL-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; GISEL-NEXT: ldr z23, [sp, #2, mul vl] // 16-byte Folded Reload
; GISEL-NEXT: ldr z22, [sp, #3, mul vl] // 16-byte Folded Reload
; GISEL-NEXT: ldr z21, [sp, #4, mul vl] // 16-byte Folded Reload
@@ -260,10 +260,10 @@ define <vscale x 4 x i32> @invoke_callee_may_throw_sve(<vscale x 4 x i32> %v) uw
; GISEL-NEXT: ret
; GISEL-NEXT: .LBB0_2: // %.Lunwind
; GISEL-NEXT: .cfi_restore_state
-; GISEL-NEXT: .Ltmp2:
+; GISEL-NEXT: .Ltmp2: // EH_LABEL
; GISEL-NEXT: ldr z0, [sp] // 16-byte Folded Reload
; GISEL-NEXT: addvl sp, sp, #2
-; GISEL-NEXT: .cfi_escape 0x0f, 0x0d, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x90, 0x01, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 144 * VG
+; GISEL-NEXT: .cfi_escape 0x0f, 0x0a, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x11, 0x90, 0x01, 0x1e, 0x22 // sp + 16 + 144 * VG
; GISEL-NEXT: ldr z23, [sp, #2, mul vl] // 16-byte Folded Reload
; GISEL-NEXT: ldr z22, [sp, #3, mul vl] // 16-byte Folded Reload
; GISEL-NEXT: ldr z21, [sp, #4, mul vl] // 16-byte Folded Reload
@@ -355,9 +355,9 @@ define aarch64_vector_pcs <4 x i32> @invoke_callee_may_throw_neon(<4 x i32> %v)
; CHECK-NEXT: .cfi_offset b23, -272
; CHECK-NEXT: .cfi_remember_state
; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
-; CHECK-NEXT: .Ltmp3:
+; CHECK-NEXT: .Ltmp3: // EH_LABEL
; CHECK-NEXT: bl may_throw_neon
-; CHECK-NEXT: .Ltmp4:
+; CHECK-NEXT: .Ltmp4: // EH_LABEL
; CHECK-NEXT: str q0, [sp, #16] // 16-byte Folded Spill
; CHECK-NEXT: b .LBB1_1
; CHECK-NEXT: .LBB1_1: // %.Lcontinue
@@ -394,7 +394,7 @@ define aarch64_vector_pcs <4 x i32> @invoke_callee_may_throw_neon(<4 x i32> %v)
; CHECK-NEXT: ret
; CHECK-NEXT: .LBB1_2: // %.Lunwind
; CHECK-NEXT: .cfi_restore_state
-; CHECK-NEXT: .Ltmp5:
+; CHECK-NEXT: .Ltmp5: // EH_LABEL
; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
; CHECK-NEXT: ldp x29, x30, [sp, #288] // 16-byte Folded Reload
; CHECK-NEXT: ldp q9, q8, [sp, #256] // 32-byte Folded Reload
@@ -462,10 +462,10 @@ define aarch64_vector_pcs <4 x i32> @invoke_callee_may_throw_neon(<4 x i32> %v)
; GISEL-NEXT: .cfi_offset b23, -272
; GISEL-NEXT: .cfi_remember_state
; GISEL-NEXT: str q0, [sp] // 16-byte Folded Spill
-; GISEL-NEXT: .Ltmp3:
+; GISEL-NEXT: .Ltmp3: // EH_LABEL
; GISEL-NEXT: bl may_throw_neon
; GISEL-NEXT: str q0, [sp, #16] // 16-byte Folded Spill
-; GISEL-NEXT: .Ltmp4:
+; GISEL-NEXT: .Ltmp4: // EH_LABEL
; GISEL-NEXT: b .LBB1_1
; GISEL-NEXT: .LBB1_1: // %.Lcontinue
; GISEL-NEXT: ldr q0, [sp, #16] // 16-byte Folded Reload
@@ -501,7 +501,7 @@ define aarch64_vector_pcs <4 x i32> @invoke_callee_may_throw_neon(<4 x i32> %v)
; GISEL-NEXT: ret
; GISEL-NEXT: .LBB1_2: // %.Lunwind
; GISEL-NEXT: .cfi_restore_state
-; GISEL-NEXT: .Ltmp5:
+; GISEL-NEXT: .Ltmp5: // EH_LABEL
; GISEL-NEXT: ldr q0, [sp] // 16-byte Folded Reload
; GISEL-NEXT: ldp x29, x30, [sp, #288] // 16-byte Folded Reload
; GISEL-NEXT: ldp q9, q8, [sp, #256] // 32-byte Folded Reload
From 0461cd3d1d6f722b2833dd913c1f974aeebcf82a Mon Sep 17 00:00:00 2001
From: Diana Picus <Diana-Magda.Picus at amd.com>
Date: Wed, 6 Aug 2025 10:25:53 +0200
Subject: [PATCH 048/171] [AMDGPU] Intrinsic for launching whole wave functions
(#145859)

Add the llvm.amdgcn.call.whole.wave intrinsic for calling whole wave
functions. This will take as its first argument the callee with the
amdgpu_gfx_whole_wave calling convention, followed by the call
parameters, which must match the signature of the callee except for the
first function argument (the i1 original EXEC mask, which doesn't need
to be passed in). Indirect calls are not allowed.

Make direct calls to amdgpu_gfx_whole_wave functions a verifier error.

Unspeakable horrors happen around calls from whole wave functions; the
plan is to improve the handling of caller/callee-saved registers in
a future patch. Tail calls will also be handled in a future patch.
---
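
As a usage sketch, reconstructed from the amdgcn-call-whole-wave.ll test
added below (the exact textual mangling of the overloaded intrinsic name
may differ):

  declare amdgpu_gfx_whole_wave i32 @good_callee(i1 %active, i32 %x, i32 %y, i32 inreg %c)

  define amdgpu_gfx void @basic_test(i32 %x, i32 inreg %c, ptr addrspace(1) %ptr) {
    ; The callee's i1 %active argument is not passed here; the lowering
    ; supplies the original EXEC mask.
    %y = add i32 %x, 13
    %ret = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @good_callee, i32 %x, i32 %y, i32 inreg %c)
    store i32 %ret, ptr addrspace(1) %ptr
    ret void
  }
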
---
llvm/include/llvm/IR/IntrinsicsAMDGPU.td | 12 +
llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp | 1 +
.../SelectionDAG/SelectionDAGBuilder.cpp | 37 +
llvm/lib/IR/Verifier.cpp | 30 +
llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp | 19 +-
.../CodeGen/AMDGPU/amdgcn-call-whole-wave.ll | 174 ++
.../irtranslator-whole-wave-functions.ll | 26 +
.../AMDGPU/isel-whole-wave-functions.ll | 76 +
.../CodeGen/AMDGPU/whole-wave-functions.ll | 1424 +++++++++++++++++
.../intrinsic-amdgcn-call-whole-wave.ll | 46 +
10 files changed, 1842 insertions(+), 3 deletions(-)
create mode 100644 llvm/test/CodeGen/AMDGPU/amdgcn-call-whole-wave.ll
create mode 100644 llvm/test/Verifier/AMDGPU/intrinsic-amdgcn-call-whole-wave.ll
diff --git a/llvm/include/llvm/IR/IntrinsicsAMDGPU.td b/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
index 90cfd8cedd51b..191ed5f523a74 100644
--- a/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
+++ b/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
@@ -2671,6 +2671,18 @@ def int_amdgcn_cs_chain:
],
[IntrConvergent, IntrNoReturn, ImmArg<ArgIndex<4>>]>;
+// Run a function with all the lanes enabled. Only direct calls are allowed. The
+// first argument is the callee, which must have the `amdgpu_gfx_whole_wave`
+// calling convention and must not be variadic. The remaining arguments to the
+// callee are taken from the arguments passed to the intrinsic. Lanes that are
+// inactive at the point of the call will receive poison. The return value is
+// the return value of the callee for the active lanes (there is no return
+// value in the inactive ones).
+def int_amdgcn_call_whole_wave:
+ Intrinsic<[llvm_any_ty], // The return type of the callee.
+ [llvm_anyptr_ty, // The callee.
+ llvm_vararg_ty], // The arguments to the callee.
+ [IntrConvergent]>;
//===----------------------------------------------------------------------===//
// CI+ Intrinsics
diff --git a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
index bbfae570e1e1a..787543df1f0f0 100644
--- a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
@@ -2556,6 +2556,7 @@ bool IRTranslator::translateKnownIntrinsic(const CallInst &CI, Intrinsic::ID ID,
getOrCreateVReg(*ConstantInt::getTrue(CI.getType())));
return true;
case Intrinsic::amdgcn_cs_chain:
+ case Intrinsic::amdgcn_call_whole_wave:
return translateCallBase(CI, MIRBuilder);
case Intrinsic::fptrunc_round: {
uint32_t Flags = MachineInstr::copyFlagsFromInstruction(CI);
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index d0815e9f51822..d5b904055e547 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -7984,6 +7984,43 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
HasTailCall = true;
return;
}
+ case Intrinsic::amdgcn_call_whole_wave: {
+ TargetLowering::ArgListTy Args;
+
+ // The first argument is the callee. Skip it when assembling the call args.
+ TargetLowering::ArgListEntry Arg;
+ for (unsigned Idx = 1; Idx < I.arg_size(); ++Idx) {
+ Arg.Node = getValue(I.getArgOperand(Idx));
+ Arg.Ty = I.getArgOperand(Idx)->getType();
+ Arg.setAttributes(&I, Idx);
+ Args.push_back(Arg);
+ }
+
+ SDValue ConvControlToken;
+ if (auto Bundle = I.getOperandBundle(LLVMContext::OB_convergencectrl)) {
+ auto *Token = Bundle->Inputs[0].get();
+ ConvControlToken = getValue(Token);
+ }
+
+ TargetLowering::CallLoweringInfo CLI(DAG);
+ CLI.setDebugLoc(getCurSDLoc())
+ .setChain(getRoot())
+ .setCallee(CallingConv::AMDGPU_Gfx_WholeWave, I.getType(),
+ getValue(I.getArgOperand(0)), std::move(Args))
+ .setTailCall(false)
+ .setIsPreallocated(
+ I.countOperandBundlesOfType(LLVMContext::OB_preallocated) != 0)
+ .setConvergent(I.isConvergent())
+ .setConvergenceControlToken(ConvControlToken);
+ CLI.CB = &I;
+
+ std::pair<SDValue, SDValue> Result =
+ lowerInvokable(CLI, /*EHPadBB=*/nullptr);
+
+ if (Result.first.getNode())
+ setValue(&I, Result.first);
+ return;
+ }
case Intrinsic::ptrmask: {
SDValue Ptr = getValue(I.getOperand(0));
SDValue Mask = getValue(I.getOperand(1));
diff --git a/llvm/lib/IR/Verifier.cpp b/llvm/lib/IR/Verifier.cpp
index ca3f148f881a4..f3f0ae5233977 100644
--- a/llvm/lib/IR/Verifier.cpp
+++ b/llvm/lib/IR/Verifier.cpp
@@ -6612,6 +6612,36 @@ void Verifier::visitIntrinsicCall(Intrinsic::ID ID, CallBase &Call) {
"Value for inactive lanes must be a VGPR function argument", &Call);
break;
}
+ case Intrinsic::amdgcn_call_whole_wave: {
+ auto F = dyn_cast<Function>(Call.getArgOperand(0));
+ Check(F, "Indirect whole wave calls are not allowed", &Call);
+
+ CallingConv::ID CC = F->getCallingConv();
+ Check(CC == CallingConv::AMDGPU_Gfx_WholeWave,
+ "Callee must have the amdgpu_gfx_whole_wave calling convention",
+ &Call);
+
+ Check(!F->isVarArg(), "Variadic whole wave calls are not allowed", &Call);
+
+ Check(Call.arg_size() == F->arg_size(),
+ "Call argument count must match callee argument count", &Call);
+
+ // The first argument of the call is the callee, and the first argument of
+ // the callee is the active mask. The rest of the arguments must match.
+ Check(F->arg_begin()->getType()->isIntegerTy(1),
+ "Callee must have i1 as its first argument", &Call);
+ for (auto [CallArg, FuncArg] :
+ drop_begin(zip_equal(Call.args(), F->args()))) {
+ Check(CallArg->getType() == FuncArg.getType(),
+ "Argument types must match", &Call);
+
+ // Check that inreg attributes match between call site and function
+ Check(Call.paramHasAttr(FuncArg.getArgNo(), Attribute::InReg) ==
+ FuncArg.hasInRegAttr(),
+ "Argument inreg attributes must match", &Call);
+ }
+ break;
+ }
case Intrinsic::amdgcn_s_prefetch_data: {
Check(
AMDGPU::isFlatGlobalAddrSpace(
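
By way of example (hypothetical function names; the diagnostic is one of
the Check strings above), the verifier now rejects IR such as:

  declare amdgpu_gfx i32 @not_whole_wave(i1 %active, i32 %x)

  define i32 @bad(i32 %x) {
    ; error: Callee must have the amdgpu_gfx_whole_wave calling convention
    %r = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @not_whole_wave, i32 %x)
    ret i32 %r
  }
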
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp b/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
index 3d8d274f06246..3ff6e22fbb943 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
@@ -1464,9 +1464,22 @@ bool AMDGPUCallLowering::lowerCall(MachineIRBuilder &MIRBuilder,
CallLoweringInfo &Info) const {
if (Function *F = Info.CB->getCalledFunction())
if (F->isIntrinsic()) {
- assert(F->getIntrinsicID() == Intrinsic::amdgcn_cs_chain &&
- "Unexpected intrinsic");
- return lowerChainCall(MIRBuilder, Info);
+ switch (F->getIntrinsicID()) {
+ case Intrinsic::amdgcn_cs_chain:
+ return lowerChainCall(MIRBuilder, Info);
+ case Intrinsic::amdgcn_call_whole_wave:
+ Info.CallConv = CallingConv::AMDGPU_Gfx_WholeWave;
+
+ // Get the callee from the original instruction, so it doesn't look like
+ // this is an indirect call.
+ Info.Callee = MachineOperand::CreateGA(
+ cast<GlobalValue>(Info.CB->getOperand(0)), /*Offset=*/0);
+ Info.OrigArgs.erase(Info.OrigArgs.begin());
+ Info.IsVarArg = false;
+ break;
+ default:
+ llvm_unreachable("Unexpected intrinsic call");
+ }
}
if (Info.IsVarArg) {
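Since the intrinsic is convergent, a call site may also carry a convergence control bundle, which the SelectionDAG path above forwards via setConvergenceControlToken. A hedged sketch of such a call site (@wwf is an illustrative name):

  declare amdgpu_gfx_whole_wave i32 @wwf(i1 %active, i32 %x)

  define amdgpu_gfx i32 @with_token(i32 %x) {
    %tok = call token @llvm.experimental.convergence.anchor()
    %r = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @wwf, i32 %x) [ "convergencectrl"(token %tok) ]
    ret i32 %r
  }
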
diff --git a/llvm/test/CodeGen/AMDGPU/amdgcn-call-whole-wave.ll b/llvm/test/CodeGen/AMDGPU/amdgcn-call-whole-wave.ll
new file mode 100644
index 0000000000000..eac0767c88d80
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/amdgcn-call-whole-wave.ll
@@ -0,0 +1,174 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -global-isel=0 -mtriple=amdgcn--amdpal -mcpu=gfx1200 < %s | FileCheck %s --check-prefix=DAGISEL
+; RUN: llc -global-isel=1 -mtriple=amdgcn--amdpal -mcpu=gfx1200 < %s | FileCheck %s --check-prefix=GISEL
+
+declare amdgpu_gfx_whole_wave i32 @good_callee(i1 %active, i32 %x, i32 %y, i32 inreg %c)
+
+define amdgpu_gfx void @basic_test(i32 %x, i32 inreg %c, ptr addrspace(1) %ptr) {
+; DAGISEL-LABEL: basic_test:
+; DAGISEL: ; %bb.0:
+; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL-NEXT: s_wait_expcnt 0x0
+; DAGISEL-NEXT: s_wait_samplecnt 0x0
+; DAGISEL-NEXT: s_wait_bvhcnt 0x0
+; DAGISEL-NEXT: s_wait_kmcnt 0x0
+; DAGISEL-NEXT: s_mov_b32 s0, s33
+; DAGISEL-NEXT: s_mov_b32 s33, s32
+; DAGISEL-NEXT: s_or_saveexec_b32 s1, -1
+; DAGISEL-NEXT: scratch_store_b32 off, v42, s33 offset:8 ; 4-byte Folded Spill
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_mov_b32 exec_lo, s1
+; DAGISEL-NEXT: v_writelane_b32 v42, s0, 2
+; DAGISEL-NEXT: s_clause 0x1
+; DAGISEL-NEXT: scratch_store_b32 off, v40, s33 offset:4
+; DAGISEL-NEXT: scratch_store_b32 off, v41, s33
+; DAGISEL-NEXT: v_dual_mov_b32 v41, v2 :: v_dual_mov_b32 v40, v1
+; DAGISEL-NEXT: v_add_nc_u32_e32 v1, 13, v0
+; DAGISEL-NEXT: v_writelane_b32 v42, s30, 0
+; DAGISEL-NEXT: s_mov_b32 s1, good_callee@abs32@hi
+; DAGISEL-NEXT: s_mov_b32 s0, good_callee@abs32@lo
+; DAGISEL-NEXT: s_add_co_i32 s32, s32, 16
+; DAGISEL-NEXT: v_writelane_b32 v42, s31, 1
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
+; DAGISEL-NEXT: global_store_b32 v[40:41], v0, off
+; DAGISEL-NEXT: s_clause 0x1
+; DAGISEL-NEXT: scratch_load_b32 v41, off, s33
+; DAGISEL-NEXT: scratch_load_b32 v40, off, s33 offset:4
+; DAGISEL-NEXT: v_readlane_b32 s31, v42, 1
+; DAGISEL-NEXT: v_readlane_b32 s30, v42, 0
+; DAGISEL-NEXT: s_mov_b32 s32, s33
+; DAGISEL-NEXT: v_readlane_b32 s0, v42, 2
+; DAGISEL-NEXT: s_or_saveexec_b32 s1, -1
+; DAGISEL-NEXT: scratch_load_b32 v42, off, s33 offset:8 ; 4-byte Folded Reload
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_mov_b32 exec_lo, s1
+; DAGISEL-NEXT: s_mov_b32 s33, s0
+; DAGISEL-NEXT: s_wait_loadcnt 0x0
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+;
+; GISEL-LABEL: basic_test:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL-NEXT: s_wait_expcnt 0x0
+; GISEL-NEXT: s_wait_samplecnt 0x0
+; GISEL-NEXT: s_wait_bvhcnt 0x0
+; GISEL-NEXT: s_wait_kmcnt 0x0
+; GISEL-NEXT: s_mov_b32 s0, s33
+; GISEL-NEXT: s_mov_b32 s33, s32
+; GISEL-NEXT: s_or_saveexec_b32 s1, -1
+; GISEL-NEXT: scratch_store_b32 off, v42, s33 offset:8 ; 4-byte Folded Spill
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_mov_b32 exec_lo, s1
+; GISEL-NEXT: v_writelane_b32 v42, s0, 2
+; GISEL-NEXT: s_clause 0x1
+; GISEL-NEXT: scratch_store_b32 off, v40, s33 offset:4
+; GISEL-NEXT: scratch_store_b32 off, v41, s33
+; GISEL-NEXT: v_dual_mov_b32 v40, v1 :: v_dual_mov_b32 v41, v2
+; GISEL-NEXT: v_add_nc_u32_e32 v1, 13, v0
+; GISEL-NEXT: v_writelane_b32 v42, s30, 0
+; GISEL-NEXT: s_mov_b32 s0, good_callee@abs32@lo
+; GISEL-NEXT: s_mov_b32 s1, good_callee@abs32@hi
+; GISEL-NEXT: s_add_co_i32 s32, s32, 16
+; GISEL-NEXT: v_writelane_b32 v42, s31, 1
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
+; GISEL-NEXT: global_store_b32 v[40:41], v0, off
+; GISEL-NEXT: s_clause 0x1
+; GISEL-NEXT: scratch_load_b32 v41, off, s33
+; GISEL-NEXT: scratch_load_b32 v40, off, s33 offset:4
+; GISEL-NEXT: v_readlane_b32 s31, v42, 1
+; GISEL-NEXT: v_readlane_b32 s30, v42, 0
+; GISEL-NEXT: s_mov_b32 s32, s33
+; GISEL-NEXT: v_readlane_b32 s0, v42, 2
+; GISEL-NEXT: s_or_saveexec_b32 s1, -1
+; GISEL-NEXT: scratch_load_b32 v42, off, s33 offset:8 ; 4-byte Folded Reload
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_mov_b32 exec_lo, s1
+; GISEL-NEXT: s_mov_b32 s33, s0
+; GISEL-NEXT: s_wait_loadcnt 0x0
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_setpc_b64 s[30:31]
+ %y = add i32 %x, 13
+ %ret = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @good_callee, i32 %x, i32 %y, i32 inreg %c)
+ store i32 %ret, ptr addrspace(1) %ptr
+ ret void
+}
+
+declare amdgpu_gfx_whole_wave void @void_callee(i1 %active, i32 %x)
+
+define amdgpu_gfx void @ret_void(i32 %x) {
+; DAGISEL-LABEL: ret_void:
+; DAGISEL: ; %bb.0:
+; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL-NEXT: s_wait_expcnt 0x0
+; DAGISEL-NEXT: s_wait_samplecnt 0x0
+; DAGISEL-NEXT: s_wait_bvhcnt 0x0
+; DAGISEL-NEXT: s_wait_kmcnt 0x0
+; DAGISEL-NEXT: s_mov_b32 s0, s33
+; DAGISEL-NEXT: s_mov_b32 s33, s32
+; DAGISEL-NEXT: s_or_saveexec_b32 s1, -1
+; DAGISEL-NEXT: scratch_store_b32 off, v40, s33 ; 4-byte Folded Spill
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_mov_b32 exec_lo, s1
+; DAGISEL-NEXT: v_writelane_b32 v40, s0, 2
+; DAGISEL-NEXT: s_mov_b32 s1, void_callee@abs32@hi
+; DAGISEL-NEXT: s_mov_b32 s0, void_callee@abs32@lo
+; DAGISEL-NEXT: s_add_co_i32 s32, s32, 16
+; DAGISEL-NEXT: v_writelane_b32 v40, s30, 0
+; DAGISEL-NEXT: v_writelane_b32 v40, s31, 1
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
+; DAGISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; DAGISEL-NEXT: v_readlane_b32 s31, v40, 1
+; DAGISEL-NEXT: v_readlane_b32 s30, v40, 0
+; DAGISEL-NEXT: s_mov_b32 s32, s33
+; DAGISEL-NEXT: v_readlane_b32 s0, v40, 2
+; DAGISEL-NEXT: s_or_saveexec_b32 s1, -1
+; DAGISEL-NEXT: scratch_load_b32 v40, off, s33 ; 4-byte Folded Reload
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_mov_b32 exec_lo, s1
+; DAGISEL-NEXT: s_mov_b32 s33, s0
+; DAGISEL-NEXT: s_wait_loadcnt 0x0
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+;
+; GISEL-LABEL: ret_void:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL-NEXT: s_wait_expcnt 0x0
+; GISEL-NEXT: s_wait_samplecnt 0x0
+; GISEL-NEXT: s_wait_bvhcnt 0x0
+; GISEL-NEXT: s_wait_kmcnt 0x0
+; GISEL-NEXT: s_mov_b32 s0, s33
+; GISEL-NEXT: s_mov_b32 s33, s32
+; GISEL-NEXT: s_or_saveexec_b32 s1, -1
+; GISEL-NEXT: scratch_store_b32 off, v40, s33 ; 4-byte Folded Spill
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_mov_b32 exec_lo, s1
+; GISEL-NEXT: v_writelane_b32 v40, s0, 2
+; GISEL-NEXT: s_mov_b32 s0, void_callee@abs32@lo
+; GISEL-NEXT: s_mov_b32 s1, void_callee@abs32@hi
+; GISEL-NEXT: s_add_co_i32 s32, s32, 16
+; GISEL-NEXT: v_writelane_b32 v40, s30, 0
+; GISEL-NEXT: v_writelane_b32 v40, s31, 1
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
+; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GISEL-NEXT: v_readlane_b32 s31, v40, 1
+; GISEL-NEXT: v_readlane_b32 s30, v40, 0
+; GISEL-NEXT: s_mov_b32 s32, s33
+; GISEL-NEXT: v_readlane_b32 s0, v40, 2
+; GISEL-NEXT: s_or_saveexec_b32 s1, -1
+; GISEL-NEXT: scratch_load_b32 v40, off, s33 ; 4-byte Folded Reload
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_mov_b32 exec_lo, s1
+; GISEL-NEXT: s_mov_b32 s33, s0
+; GISEL-NEXT: s_wait_loadcnt 0x0
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_setpc_b64 s[30:31]
+ call void(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @void_callee, i32 %x)
+ ret void
+}
+
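Note that basic_test passes %c as inreg both at the call site and in @good_callee's signature; the verifier change above requires the two to agree. A sketch of the rejected mismatch (illustrative, not part of the tests):

  declare amdgpu_gfx_whole_wave i32 @wwf(i1 %active, i32 inreg %c)

  define amdgpu_gfx void @bad_inreg(i32 %c) {
    ; Rejected: "Argument inreg attributes must match" (inreg missing at the call site).
    %r = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @wwf, i32 %c)
    ret void
  }
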
diff --git a/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll
index 8fc5afb155573..17c8010bcbe05 100644
--- a/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll
@@ -101,3 +101,29 @@ define amdgpu_gfx_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
%ret = call i64 @llvm.amdgcn.update.dpp.i64(i64 %x, i64 %y, i32 1, i32 1, i32 1, i1 false)
ret i64 %ret
}
+
+declare amdgpu_gfx_whole_wave i32 @callee(i1 %active, i32 %x)
+
+; Make sure we don't pass the first argument (i1).
+define amdgpu_cs void @call(i32 %x, ptr %p) {
+ ; CHECK-LABEL: name: call
+ ; CHECK: bb.1 (%ir-block.0):
+ ; CHECK-NEXT: liveins: $vgpr0, $vgpr1, $vgpr2
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $vgpr1
+ ; CHECK-NEXT: [[COPY2:%[0-9]+]]:_(s32) = COPY $vgpr2
+ ; CHECK-NEXT: [[MV:%[0-9]+]]:_(p0) = G_MERGE_VALUES [[COPY1]](s32), [[COPY2]](s32)
+ ; CHECK-NEXT: [[GV:%[0-9]+]]:_(p0) = G_GLOBAL_VALUE @callee
+ ; CHECK-NEXT: ADJCALLSTACKUP 0, 0, implicit-def $scc
+ ; CHECK-NEXT: [[GV1:%[0-9]+]]:_(p0) = G_GLOBAL_VALUE @callee
+ ; CHECK-NEXT: $vgpr0 = COPY [[COPY]](s32)
+ ; CHECK-NEXT: $sgpr30_sgpr31 = G_SI_CALL [[GV1]](p0), @callee, csr_amdgpu_si_gfx, implicit $vgpr0, implicit-def $vgpr0
+ ; CHECK-NEXT: [[COPY3:%[0-9]+]]:_(s32) = COPY $vgpr0
+ ; CHECK-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $scc
+ ; CHECK-NEXT: G_STORE [[COPY3]](s32), [[MV]](p0) :: (store (s32) into %ir.p)
+ ; CHECK-NEXT: S_ENDPGM 0
+ %ret = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @callee, i32 %x) convergent
+ store i32 %ret, ptr %p
+ ret void
+}
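On the callee side, the i1 parameter is not supplied by the caller at all: for amdgpu_gfx_whole_wave functions the backend populates it with the set of lanes that were active at the call site. A sketch of a callee consuming it (illustrative; assumes the documented whole-wave-function semantics):

  define amdgpu_gfx_whole_wave i32 @cb(i1 %active, i32 %x) {
    ; %active is true exactly in the lanes that were live in the caller.
    %r = select i1 %active, i32 %x, i32 0
    ret i32 %r
  }
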
diff --git a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
index 3450d63ff7b4a..69809b115e037 100644
--- a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
@@ -189,3 +189,79 @@ define amdgpu_gfx_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
ret i64 %ret
}
+declare amdgpu_gfx_whole_wave i32 @callee(i1 %active, <8 x i32> %x)
+
+; Make sure we don't pass the first argument (i1).
+define amdgpu_cs void @call(<8 x i32> %x, ptr %p) {
+ ; DAGISEL-LABEL: name: call
+ ; DAGISEL: bb.0 (%ir-block.0):
+ ; DAGISEL-NEXT: liveins: $vgpr0, $vgpr1, $vgpr2, $vgpr3, $vgpr4, $vgpr5, $vgpr6, $vgpr7, $vgpr8, $vgpr9
+ ; DAGISEL-NEXT: {{ $}}
+ ; DAGISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr9
+ ; DAGISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr8
+ ; DAGISEL-NEXT: [[COPY2:%[0-9]+]]:vgpr_32 = COPY $vgpr7
+ ; DAGISEL-NEXT: [[COPY3:%[0-9]+]]:vgpr_32 = COPY $vgpr6
+ ; DAGISEL-NEXT: [[COPY4:%[0-9]+]]:vgpr_32 = COPY $vgpr5
+ ; DAGISEL-NEXT: [[COPY5:%[0-9]+]]:vgpr_32 = COPY $vgpr4
+ ; DAGISEL-NEXT: [[COPY6:%[0-9]+]]:vgpr_32 = COPY $vgpr3
+ ; DAGISEL-NEXT: [[COPY7:%[0-9]+]]:vgpr_32 = COPY $vgpr2
+ ; DAGISEL-NEXT: [[COPY8:%[0-9]+]]:vgpr_32 = COPY $vgpr1
+ ; DAGISEL-NEXT: [[COPY9:%[0-9]+]]:vgpr_32 = COPY $vgpr0
+ ; DAGISEL-NEXT: [[DEF:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
+ ; DAGISEL-NEXT: [[DEF1:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
+ ; DAGISEL-NEXT: [[REG_SEQUENCE:%[0-9]+]]:vreg_64 = REG_SEQUENCE [[COPY1]], %subreg.sub0, [[COPY]], %subreg.sub1
+ ; DAGISEL-NEXT: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 target-flags(amdgpu-abs32-hi) @callee
+ ; DAGISEL-NEXT: [[S_MOV_B32_1:%[0-9]+]]:sreg_32 = S_MOV_B32 target-flags(amdgpu-abs32-lo) @callee
+ ; DAGISEL-NEXT: [[REG_SEQUENCE1:%[0-9]+]]:sreg_64 = REG_SEQUENCE killed [[S_MOV_B32_1]], %subreg.sub0, killed [[S_MOV_B32_]], %subreg.sub1
+ ; DAGISEL-NEXT: ADJCALLSTACKUP 0, 0, implicit-def dead $scc, implicit-def $sgpr32, implicit $sgpr32
+ ; DAGISEL-NEXT: $vgpr0 = COPY [[COPY9]]
+ ; DAGISEL-NEXT: $vgpr1 = COPY [[COPY8]]
+ ; DAGISEL-NEXT: $vgpr2 = COPY [[COPY7]]
+ ; DAGISEL-NEXT: $vgpr3 = COPY [[COPY6]]
+ ; DAGISEL-NEXT: $vgpr4 = COPY [[COPY5]]
+ ; DAGISEL-NEXT: $vgpr5 = COPY [[COPY4]]
+ ; DAGISEL-NEXT: $vgpr6 = COPY [[COPY3]]
+ ; DAGISEL-NEXT: $vgpr7 = COPY [[COPY2]]
+ ; DAGISEL-NEXT: $sgpr30_sgpr31 = SI_CALL killed [[REG_SEQUENCE1]], @callee, csr_amdgpu_si_gfx, implicit $vgpr0, implicit $vgpr1, implicit $vgpr2, implicit $vgpr3, implicit $vgpr4, implicit $vgpr5, implicit $vgpr6, implicit $vgpr7, implicit-def $vgpr0
+ ; DAGISEL-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def dead $scc, implicit-def $sgpr32, implicit $sgpr32
+ ; DAGISEL-NEXT: [[COPY10:%[0-9]+]]:vgpr_32 = COPY $vgpr0
+ ; DAGISEL-NEXT: [[COPY11:%[0-9]+]]:vreg_64 = COPY [[REG_SEQUENCE]]
+ ; DAGISEL-NEXT: FLAT_STORE_DWORD killed [[COPY11]], [[COPY10]], 0, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %ir.p)
+ ; DAGISEL-NEXT: S_ENDPGM 0
+ ;
+ ; GISEL-LABEL: name: call
+ ; GISEL: bb.1 (%ir-block.0):
+ ; GISEL-NEXT: liveins: $vgpr0, $vgpr1, $vgpr2, $vgpr3, $vgpr4, $vgpr5, $vgpr6, $vgpr7, $vgpr8, $vgpr9
+ ; GISEL-NEXT: {{ $}}
+ ; GISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
+ ; GISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr1
+ ; GISEL-NEXT: [[COPY2:%[0-9]+]]:vgpr_32 = COPY $vgpr2
+ ; GISEL-NEXT: [[COPY3:%[0-9]+]]:vgpr_32 = COPY $vgpr3
+ ; GISEL-NEXT: [[COPY4:%[0-9]+]]:vgpr_32 = COPY $vgpr4
+ ; GISEL-NEXT: [[COPY5:%[0-9]+]]:vgpr_32 = COPY $vgpr5
+ ; GISEL-NEXT: [[COPY6:%[0-9]+]]:vgpr_32 = COPY $vgpr6
+ ; GISEL-NEXT: [[COPY7:%[0-9]+]]:vgpr_32 = COPY $vgpr7
+ ; GISEL-NEXT: [[COPY8:%[0-9]+]]:vgpr_32 = COPY $vgpr8
+ ; GISEL-NEXT: [[COPY9:%[0-9]+]]:vgpr_32 = COPY $vgpr9
+ ; GISEL-NEXT: [[REG_SEQUENCE:%[0-9]+]]:vreg_64 = REG_SEQUENCE [[COPY8]], %subreg.sub0, [[COPY9]], %subreg.sub1
+ ; GISEL-NEXT: ADJCALLSTACKUP 0, 0, implicit-def $scc, implicit-def $sgpr32, implicit $sgpr32
+ ; GISEL-NEXT: $vgpr0 = COPY [[COPY]]
+ ; GISEL-NEXT: $vgpr1 = COPY [[COPY1]]
+ ; GISEL-NEXT: $vgpr2 = COPY [[COPY2]]
+ ; GISEL-NEXT: $vgpr3 = COPY [[COPY3]]
+ ; GISEL-NEXT: $vgpr4 = COPY [[COPY4]]
+ ; GISEL-NEXT: $vgpr5 = COPY [[COPY5]]
+ ; GISEL-NEXT: $vgpr6 = COPY [[COPY6]]
+ ; GISEL-NEXT: $vgpr7 = COPY [[COPY7]]
+ ; GISEL-NEXT: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 target-flags(amdgpu-abs32-lo) @callee
+ ; GISEL-NEXT: [[S_MOV_B32_1:%[0-9]+]]:sreg_32 = S_MOV_B32 target-flags(amdgpu-abs32-hi) @callee
+ ; GISEL-NEXT: [[REG_SEQUENCE1:%[0-9]+]]:sreg_64 = REG_SEQUENCE [[S_MOV_B32_]], %subreg.sub0, [[S_MOV_B32_1]], %subreg.sub1
+ ; GISEL-NEXT: $sgpr30_sgpr31 = SI_CALL [[REG_SEQUENCE1]], @callee, csr_amdgpu_si_gfx, implicit $vgpr0, implicit $vgpr1, implicit $vgpr2, implicit $vgpr3, implicit $vgpr4, implicit $vgpr5, implicit $vgpr6, implicit $vgpr7, implicit-def $vgpr0
+ ; GISEL-NEXT: [[COPY10:%[0-9]+]]:vgpr_32 = COPY $vgpr0
+ ; GISEL-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $scc, implicit-def $sgpr32, implicit $sgpr32
+ ; GISEL-NEXT: FLAT_STORE_DWORD [[REG_SEQUENCE]], [[COPY10]], 0, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %ir.p)
+ ; GISEL-NEXT: S_ENDPGM 0
+ %ret = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @callee, <8 x i32> %x) convergent
+ store i32 %ret, ptr %p
+ ret void
+}
diff --git a/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
index a13a68a665aee..36e8adb23f1f5 100644
--- a/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
@@ -2412,3 +2412,1427 @@ define amdgpu_gfx_whole_wave <2 x half> @call_gfx_from_whole_wave(i1 %active, <2
%ret = call amdgpu_gfx <2 x half>(<2 x half>, <2 x half>) @gfx_callee(<2 x half> %y, <2 x half> %x) convergent
ret <2 x half> %ret
}
+
+declare amdgpu_gfx_whole_wave float @callee(i1 %active, <8 x float> %x)
+
+define amdgpu_cs void @call_from_entry(<8 x float> %x, ptr %p) {
+; DAGISEL-LABEL: call_from_entry:
+; DAGISEL: ; %bb.0:
+; DAGISEL-NEXT: s_mov_b32 s1, callee@abs32@hi
+; DAGISEL-NEXT: s_mov_b32 s0, callee@abs32@lo
+; DAGISEL-NEXT: s_mov_b32 s32, 0
+; DAGISEL-NEXT: v_dual_mov_b32 v41, v9 :: v_dual_mov_b32 v40, v8
+; DAGISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
+; DAGISEL-NEXT: flat_store_b32 v[40:41], v0
+; DAGISEL-NEXT: s_endpgm
+;
+; GISEL-LABEL: call_from_entry:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_mov_b32 s0, callee@abs32@lo
+; GISEL-NEXT: s_mov_b32 s1, callee@abs32@hi
+; GISEL-NEXT: s_mov_b32 s32, 0
+; GISEL-NEXT: v_dual_mov_b32 v40, v8 :: v_dual_mov_b32 v41, v9
+; GISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
+; GISEL-NEXT: flat_store_b32 v[40:41], v0
+; GISEL-NEXT: s_endpgm
+;
+; DAGISEL64-LABEL: call_from_entry:
+; DAGISEL64: ; %bb.0:
+; DAGISEL64-NEXT: s_mov_b32 s1, callee@abs32@hi
+; DAGISEL64-NEXT: s_mov_b32 s0, callee@abs32@lo
+; DAGISEL64-NEXT: s_mov_b32 s32, 0
+; DAGISEL64-NEXT: v_mov_b32_e32 v41, v9
+; DAGISEL64-NEXT: v_mov_b32_e32 v40, v8
+; DAGISEL64-NEXT: s_swappc_b64 s[30:31], s[0:1]
+; DAGISEL64-NEXT: flat_store_b32 v[40:41], v0
+; DAGISEL64-NEXT: s_endpgm
+;
+; GISEL64-LABEL: call_from_entry:
+; GISEL64: ; %bb.0:
+; GISEL64-NEXT: s_mov_b32 s0, callee@abs32@lo
+; GISEL64-NEXT: s_mov_b32 s1, callee@abs32@hi
+; GISEL64-NEXT: s_mov_b32 s32, 0
+; GISEL64-NEXT: v_mov_b32_e32 v40, v8
+; GISEL64-NEXT: v_mov_b32_e32 v41, v9
+; GISEL64-NEXT: s_swappc_b64 s[30:31], s[0:1]
+; GISEL64-NEXT: flat_store_b32 v[40:41], v0
+; GISEL64-NEXT: s_endpgm
+ %ret = call float(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @callee, <8 x float> %x) convergent
+ store float %ret, ptr %p
+ ret void
+}
+
+define amdgpu_gfx_whole_wave void @call_from_whole_wave(i1 %unused, <8 x float> %x, ptr %p) {
+; DAGISEL-LABEL: call_from_whole_wave:
+; DAGISEL: ; %bb.0:
+; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL-NEXT: s_wait_expcnt 0x0
+; DAGISEL-NEXT: s_wait_samplecnt 0x0
+; DAGISEL-NEXT: s_wait_bvhcnt 0x0
+; DAGISEL-NEXT: s_wait_kmcnt 0x0
+; DAGISEL-NEXT: s_mov_b32 s0, s33
+; DAGISEL-NEXT: s_mov_b32 s33, s32
+; DAGISEL-NEXT: s_xor_saveexec_b32 s4, -1
+; DAGISEL-NEXT: s_clause 0x1f
+; DAGISEL-NEXT: scratch_store_b32 off, v0, s33 offset:4
+; DAGISEL-NEXT: scratch_store_b32 off, v1, s33 offset:8
+; DAGISEL-NEXT: scratch_store_b32 off, v2, s33 offset:12
+; DAGISEL-NEXT: scratch_store_b32 off, v3, s33 offset:16
+; DAGISEL-NEXT: scratch_store_b32 off, v4, s33 offset:20
+; DAGISEL-NEXT: scratch_store_b32 off, v5, s33 offset:24
+; DAGISEL-NEXT: scratch_store_b32 off, v6, s33 offset:28
+; DAGISEL-NEXT: scratch_store_b32 off, v7, s33 offset:32
+; DAGISEL-NEXT: scratch_store_b32 off, v8, s33 offset:36
+; DAGISEL-NEXT: scratch_store_b32 off, v9, s33 offset:40
+; DAGISEL-NEXT: scratch_store_b32 off, v10, s33 offset:44
+; DAGISEL-NEXT: scratch_store_b32 off, v11, s33 offset:48
+; DAGISEL-NEXT: scratch_store_b32 off, v12, s33 offset:52
+; DAGISEL-NEXT: scratch_store_b32 off, v13, s33 offset:56
+; DAGISEL-NEXT: scratch_store_b32 off, v14, s33 offset:60
+; DAGISEL-NEXT: scratch_store_b32 off, v15, s33 offset:64
+; DAGISEL-NEXT: scratch_store_b32 off, v16, s33 offset:68
+; DAGISEL-NEXT: scratch_store_b32 off, v17, s33 offset:72
+; DAGISEL-NEXT: scratch_store_b32 off, v18, s33 offset:76
+; DAGISEL-NEXT: scratch_store_b32 off, v19, s33 offset:80
+; DAGISEL-NEXT: scratch_store_b32 off, v20, s33 offset:84
+; DAGISEL-NEXT: scratch_store_b32 off, v21, s33 offset:88
+; DAGISEL-NEXT: scratch_store_b32 off, v22, s33 offset:92
+; DAGISEL-NEXT: scratch_store_b32 off, v23, s33 offset:96
+; DAGISEL-NEXT: scratch_store_b32 off, v24, s33 offset:100
+; DAGISEL-NEXT: scratch_store_b32 off, v25, s33 offset:104
+; DAGISEL-NEXT: scratch_store_b32 off, v26, s33 offset:108
+; DAGISEL-NEXT: scratch_store_b32 off, v27, s33 offset:112
+; DAGISEL-NEXT: scratch_store_b32 off, v28, s33 offset:116
+; DAGISEL-NEXT: scratch_store_b32 off, v29, s33 offset:120
+; DAGISEL-NEXT: scratch_store_b32 off, v30, s33 offset:124
+; DAGISEL-NEXT: scratch_store_b32 off, v31, s33 offset:128
+; DAGISEL-NEXT: s_clause 0x1f
+; DAGISEL-NEXT: scratch_store_b32 off, v32, s33 offset:132
+; DAGISEL-NEXT: scratch_store_b32 off, v33, s33 offset:136
+; DAGISEL-NEXT: scratch_store_b32 off, v34, s33 offset:140
+; DAGISEL-NEXT: scratch_store_b32 off, v35, s33 offset:144
+; DAGISEL-NEXT: scratch_store_b32 off, v36, s33 offset:148
+; DAGISEL-NEXT: scratch_store_b32 off, v37, s33 offset:152
+; DAGISEL-NEXT: scratch_store_b32 off, v38, s33 offset:156
+; DAGISEL-NEXT: scratch_store_b32 off, v39, s33 offset:160
+; DAGISEL-NEXT: scratch_store_b32 off, v48, s33 offset:172
+; DAGISEL-NEXT: scratch_store_b32 off, v49, s33 offset:176
+; DAGISEL-NEXT: scratch_store_b32 off, v50, s33 offset:180
+; DAGISEL-NEXT: scratch_store_b32 off, v51, s33 offset:184
+; DAGISEL-NEXT: scratch_store_b32 off, v52, s33 offset:188
+; DAGISEL-NEXT: scratch_store_b32 off, v53, s33 offset:192
+; DAGISEL-NEXT: scratch_store_b32 off, v54, s33 offset:196
+; DAGISEL-NEXT: scratch_store_b32 off, v55, s33 offset:200
+; DAGISEL-NEXT: scratch_store_b32 off, v64, s33 offset:204
+; DAGISEL-NEXT: scratch_store_b32 off, v65, s33 offset:208
+; DAGISEL-NEXT: scratch_store_b32 off, v66, s33 offset:212
+; DAGISEL-NEXT: scratch_store_b32 off, v67, s33 offset:216
+; DAGISEL-NEXT: scratch_store_b32 off, v68, s33 offset:220
+; DAGISEL-NEXT: scratch_store_b32 off, v69, s33 offset:224
+; DAGISEL-NEXT: scratch_store_b32 off, v70, s33 offset:228
+; DAGISEL-NEXT: scratch_store_b32 off, v71, s33 offset:232
+; DAGISEL-NEXT: scratch_store_b32 off, v80, s33 offset:236
+; DAGISEL-NEXT: scratch_store_b32 off, v81, s33 offset:240
+; DAGISEL-NEXT: scratch_store_b32 off, v82, s33 offset:244
+; DAGISEL-NEXT: scratch_store_b32 off, v83, s33 offset:248
+; DAGISEL-NEXT: scratch_store_b32 off, v84, s33 offset:252
+; DAGISEL-NEXT: scratch_store_b32 off, v85, s33 offset:256
+; DAGISEL-NEXT: scratch_store_b32 off, v86, s33 offset:260
+; DAGISEL-NEXT: scratch_store_b32 off, v87, s33 offset:264
+; DAGISEL-NEXT: s_clause 0x1f
+; DAGISEL-NEXT: scratch_store_b32 off, v96, s33 offset:268
+; DAGISEL-NEXT: scratch_store_b32 off, v97, s33 offset:272
+; DAGISEL-NEXT: scratch_store_b32 off, v98, s33 offset:276
+; DAGISEL-NEXT: scratch_store_b32 off, v99, s33 offset:280
+; DAGISEL-NEXT: scratch_store_b32 off, v100, s33 offset:284
+; DAGISEL-NEXT: scratch_store_b32 off, v101, s33 offset:288
+; DAGISEL-NEXT: scratch_store_b32 off, v102, s33 offset:292
+; DAGISEL-NEXT: scratch_store_b32 off, v103, s33 offset:296
+; DAGISEL-NEXT: scratch_store_b32 off, v112, s33 offset:300
+; DAGISEL-NEXT: scratch_store_b32 off, v113, s33 offset:304
+; DAGISEL-NEXT: scratch_store_b32 off, v114, s33 offset:308
+; DAGISEL-NEXT: scratch_store_b32 off, v115, s33 offset:312
+; DAGISEL-NEXT: scratch_store_b32 off, v116, s33 offset:316
+; DAGISEL-NEXT: scratch_store_b32 off, v117, s33 offset:320
+; DAGISEL-NEXT: scratch_store_b32 off, v118, s33 offset:324
+; DAGISEL-NEXT: scratch_store_b32 off, v119, s33 offset:328
+; DAGISEL-NEXT: scratch_store_b32 off, v128, s33 offset:332
+; DAGISEL-NEXT: scratch_store_b32 off, v129, s33 offset:336
+; DAGISEL-NEXT: scratch_store_b32 off, v130, s33 offset:340
+; DAGISEL-NEXT: scratch_store_b32 off, v131, s33 offset:344
+; DAGISEL-NEXT: scratch_store_b32 off, v132, s33 offset:348
+; DAGISEL-NEXT: scratch_store_b32 off, v133, s33 offset:352
+; DAGISEL-NEXT: scratch_store_b32 off, v134, s33 offset:356
+; DAGISEL-NEXT: scratch_store_b32 off, v135, s33 offset:360
+; DAGISEL-NEXT: scratch_store_b32 off, v144, s33 offset:364
+; DAGISEL-NEXT: scratch_store_b32 off, v145, s33 offset:368
+; DAGISEL-NEXT: scratch_store_b32 off, v146, s33 offset:372
+; DAGISEL-NEXT: scratch_store_b32 off, v147, s33 offset:376
+; DAGISEL-NEXT: scratch_store_b32 off, v148, s33 offset:380
+; DAGISEL-NEXT: scratch_store_b32 off, v149, s33 offset:384
+; DAGISEL-NEXT: scratch_store_b32 off, v150, s33 offset:388
+; DAGISEL-NEXT: scratch_store_b32 off, v151, s33 offset:392
+; DAGISEL-NEXT: s_clause 0x1f
+; DAGISEL-NEXT: scratch_store_b32 off, v160, s33 offset:396
+; DAGISEL-NEXT: scratch_store_b32 off, v161, s33 offset:400
+; DAGISEL-NEXT: scratch_store_b32 off, v162, s33 offset:404
+; DAGISEL-NEXT: scratch_store_b32 off, v163, s33 offset:408
+; DAGISEL-NEXT: scratch_store_b32 off, v164, s33 offset:412
+; DAGISEL-NEXT: scratch_store_b32 off, v165, s33 offset:416
+; DAGISEL-NEXT: scratch_store_b32 off, v166, s33 offset:420
+; DAGISEL-NEXT: scratch_store_b32 off, v167, s33 offset:424
+; DAGISEL-NEXT: scratch_store_b32 off, v176, s33 offset:428
+; DAGISEL-NEXT: scratch_store_b32 off, v177, s33 offset:432
+; DAGISEL-NEXT: scratch_store_b32 off, v178, s33 offset:436
+; DAGISEL-NEXT: scratch_store_b32 off, v179, s33 offset:440
+; DAGISEL-NEXT: scratch_store_b32 off, v180, s33 offset:444
+; DAGISEL-NEXT: scratch_store_b32 off, v181, s33 offset:448
+; DAGISEL-NEXT: scratch_store_b32 off, v182, s33 offset:452
+; DAGISEL-NEXT: scratch_store_b32 off, v183, s33 offset:456
+; DAGISEL-NEXT: scratch_store_b32 off, v192, s33 offset:460
+; DAGISEL-NEXT: scratch_store_b32 off, v193, s33 offset:464
+; DAGISEL-NEXT: scratch_store_b32 off, v194, s33 offset:468
+; DAGISEL-NEXT: scratch_store_b32 off, v195, s33 offset:472
+; DAGISEL-NEXT: scratch_store_b32 off, v196, s33 offset:476
+; DAGISEL-NEXT: scratch_store_b32 off, v197, s33 offset:480
+; DAGISEL-NEXT: scratch_store_b32 off, v198, s33 offset:484
+; DAGISEL-NEXT: scratch_store_b32 off, v199, s33 offset:488
+; DAGISEL-NEXT: scratch_store_b32 off, v208, s33 offset:492
+; DAGISEL-NEXT: scratch_store_b32 off, v209, s33 offset:496
+; DAGISEL-NEXT: scratch_store_b32 off, v210, s33 offset:500
+; DAGISEL-NEXT: scratch_store_b32 off, v211, s33 offset:504
+; DAGISEL-NEXT: scratch_store_b32 off, v212, s33 offset:508
+; DAGISEL-NEXT: scratch_store_b32 off, v213, s33 offset:512
+; DAGISEL-NEXT: scratch_store_b32 off, v214, s33 offset:516
+; DAGISEL-NEXT: scratch_store_b32 off, v215, s33 offset:520
+; DAGISEL-NEXT: s_clause 0xf
+; DAGISEL-NEXT: scratch_store_b32 off, v224, s33 offset:524
+; DAGISEL-NEXT: scratch_store_b32 off, v225, s33 offset:528
+; DAGISEL-NEXT: scratch_store_b32 off, v226, s33 offset:532
+; DAGISEL-NEXT: scratch_store_b32 off, v227, s33 offset:536
+; DAGISEL-NEXT: scratch_store_b32 off, v228, s33 offset:540
+; DAGISEL-NEXT: scratch_store_b32 off, v229, s33 offset:544
+; DAGISEL-NEXT: scratch_store_b32 off, v230, s33 offset:548
+; DAGISEL-NEXT: scratch_store_b32 off, v231, s33 offset:552
+; DAGISEL-NEXT: scratch_store_b32 off, v240, s33 offset:556
+; DAGISEL-NEXT: scratch_store_b32 off, v241, s33 offset:560
+; DAGISEL-NEXT: scratch_store_b32 off, v242, s33 offset:564
+; DAGISEL-NEXT: scratch_store_b32 off, v243, s33 offset:568
+; DAGISEL-NEXT: scratch_store_b32 off, v244, s33 offset:572
+; DAGISEL-NEXT: scratch_store_b32 off, v245, s33 offset:576
+; DAGISEL-NEXT: scratch_store_b32 off, v246, s33 offset:580
+; DAGISEL-NEXT: scratch_store_b32 off, v247, s33 offset:584
+; DAGISEL-NEXT: s_mov_b32 exec_lo, -1
+; DAGISEL-NEXT: s_clause 0x2
+; DAGISEL-NEXT: scratch_store_b32 off, v42, s33
+; DAGISEL-NEXT: scratch_store_b32 off, v40, s33 offset:164
+; DAGISEL-NEXT: scratch_store_b32 off, v41, s33 offset:168
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: v_writelane_b32 v42, s0, 3
+; DAGISEL-NEXT: s_mov_b32 s1, callee@abs32@hi
+; DAGISEL-NEXT: s_mov_b32 s0, callee@abs32@lo
+; DAGISEL-NEXT: s_addk_co_i32 s32, 0x250
+; DAGISEL-NEXT: v_dual_mov_b32 v41, v9 :: v_dual_mov_b32 v40, v8
+; DAGISEL-NEXT: v_writelane_b32 v42, s4, 0
+; DAGISEL-NEXT: v_writelane_b32 v42, s30, 1
+; DAGISEL-NEXT: v_writelane_b32 v42, s31, 2
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
+; DAGISEL-NEXT: flat_store_b32 v[40:41], v0
+; DAGISEL-NEXT: v_readlane_b32 s31, v42, 2
+; DAGISEL-NEXT: v_readlane_b32 s30, v42, 1
+; DAGISEL-NEXT: v_readlane_b32 s4, v42, 0
+; DAGISEL-NEXT: v_readlane_b32 s0, v42, 3
+; DAGISEL-NEXT: s_clause 0x2
+; DAGISEL-NEXT: scratch_load_b32 v42, off, s33
+; DAGISEL-NEXT: scratch_load_b32 v40, off, s33 offset:164
+; DAGISEL-NEXT: scratch_load_b32 v41, off, s33 offset:168
+; DAGISEL-NEXT: s_mov_b32 s32, s33
+; DAGISEL-NEXT: s_xor_b32 exec_lo, s4, -1
+; DAGISEL-NEXT: s_clause 0x1f
+; DAGISEL-NEXT: scratch_load_b32 v0, off, s33 offset:4
+; DAGISEL-NEXT: scratch_load_b32 v1, off, s33 offset:8
+; DAGISEL-NEXT: scratch_load_b32 v2, off, s33 offset:12
+; DAGISEL-NEXT: scratch_load_b32 v3, off, s33 offset:16
+; DAGISEL-NEXT: scratch_load_b32 v4, off, s33 offset:20
+; DAGISEL-NEXT: scratch_load_b32 v5, off, s33 offset:24
+; DAGISEL-NEXT: scratch_load_b32 v6, off, s33 offset:28
+; DAGISEL-NEXT: scratch_load_b32 v7, off, s33 offset:32
+; DAGISEL-NEXT: scratch_load_b32 v8, off, s33 offset:36
+; DAGISEL-NEXT: scratch_load_b32 v9, off, s33 offset:40
+; DAGISEL-NEXT: scratch_load_b32 v10, off, s33 offset:44
+; DAGISEL-NEXT: scratch_load_b32 v11, off, s33 offset:48
+; DAGISEL-NEXT: scratch_load_b32 v12, off, s33 offset:52
+; DAGISEL-NEXT: scratch_load_b32 v13, off, s33 offset:56
+; DAGISEL-NEXT: scratch_load_b32 v14, off, s33 offset:60
+; DAGISEL-NEXT: scratch_load_b32 v15, off, s33 offset:64
+; DAGISEL-NEXT: scratch_load_b32 v16, off, s33 offset:68
+; DAGISEL-NEXT: scratch_load_b32 v17, off, s33 offset:72
+; DAGISEL-NEXT: scratch_load_b32 v18, off, s33 offset:76
+; DAGISEL-NEXT: scratch_load_b32 v19, off, s33 offset:80
+; DAGISEL-NEXT: scratch_load_b32 v20, off, s33 offset:84
+; DAGISEL-NEXT: scratch_load_b32 v21, off, s33 offset:88
+; DAGISEL-NEXT: scratch_load_b32 v22, off, s33 offset:92
+; DAGISEL-NEXT: scratch_load_b32 v23, off, s33 offset:96
+; DAGISEL-NEXT: scratch_load_b32 v24, off, s33 offset:100
+; DAGISEL-NEXT: scratch_load_b32 v25, off, s33 offset:104
+; DAGISEL-NEXT: scratch_load_b32 v26, off, s33 offset:108
+; DAGISEL-NEXT: scratch_load_b32 v27, off, s33 offset:112
+; DAGISEL-NEXT: scratch_load_b32 v28, off, s33 offset:116
+; DAGISEL-NEXT: scratch_load_b32 v29, off, s33 offset:120
+; DAGISEL-NEXT: scratch_load_b32 v30, off, s33 offset:124
+; DAGISEL-NEXT: scratch_load_b32 v31, off, s33 offset:128
+; DAGISEL-NEXT: s_clause 0x1f
+; DAGISEL-NEXT: scratch_load_b32 v32, off, s33 offset:132
+; DAGISEL-NEXT: scratch_load_b32 v33, off, s33 offset:136
+; DAGISEL-NEXT: scratch_load_b32 v34, off, s33 offset:140
+; DAGISEL-NEXT: scratch_load_b32 v35, off, s33 offset:144
+; DAGISEL-NEXT: scratch_load_b32 v36, off, s33 offset:148
+; DAGISEL-NEXT: scratch_load_b32 v37, off, s33 offset:152
+; DAGISEL-NEXT: scratch_load_b32 v38, off, s33 offset:156
+; DAGISEL-NEXT: scratch_load_b32 v39, off, s33 offset:160
+; DAGISEL-NEXT: scratch_load_b32 v48, off, s33 offset:172
+; DAGISEL-NEXT: scratch_load_b32 v49, off, s33 offset:176
+; DAGISEL-NEXT: scratch_load_b32 v50, off, s33 offset:180
+; DAGISEL-NEXT: scratch_load_b32 v51, off, s33 offset:184
+; DAGISEL-NEXT: scratch_load_b32 v52, off, s33 offset:188
+; DAGISEL-NEXT: scratch_load_b32 v53, off, s33 offset:192
+; DAGISEL-NEXT: scratch_load_b32 v54, off, s33 offset:196
+; DAGISEL-NEXT: scratch_load_b32 v55, off, s33 offset:200
+; DAGISEL-NEXT: scratch_load_b32 v64, off, s33 offset:204
+; DAGISEL-NEXT: scratch_load_b32 v65, off, s33 offset:208
+; DAGISEL-NEXT: scratch_load_b32 v66, off, s33 offset:212
+; DAGISEL-NEXT: scratch_load_b32 v67, off, s33 offset:216
+; DAGISEL-NEXT: scratch_load_b32 v68, off, s33 offset:220
+; DAGISEL-NEXT: scratch_load_b32 v69, off, s33 offset:224
+; DAGISEL-NEXT: scratch_load_b32 v70, off, s33 offset:228
+; DAGISEL-NEXT: scratch_load_b32 v71, off, s33 offset:232
+; DAGISEL-NEXT: scratch_load_b32 v80, off, s33 offset:236
+; DAGISEL-NEXT: scratch_load_b32 v81, off, s33 offset:240
+; DAGISEL-NEXT: scratch_load_b32 v82, off, s33 offset:244
+; DAGISEL-NEXT: scratch_load_b32 v83, off, s33 offset:248
+; DAGISEL-NEXT: scratch_load_b32 v84, off, s33 offset:252
+; DAGISEL-NEXT: scratch_load_b32 v85, off, s33 offset:256
+; DAGISEL-NEXT: scratch_load_b32 v86, off, s33 offset:260
+; DAGISEL-NEXT: scratch_load_b32 v87, off, s33 offset:264
+; DAGISEL-NEXT: s_clause 0x1f
+; DAGISEL-NEXT: scratch_load_b32 v96, off, s33 offset:268
+; DAGISEL-NEXT: scratch_load_b32 v97, off, s33 offset:272
+; DAGISEL-NEXT: scratch_load_b32 v98, off, s33 offset:276
+; DAGISEL-NEXT: scratch_load_b32 v99, off, s33 offset:280
+; DAGISEL-NEXT: scratch_load_b32 v100, off, s33 offset:284
+; DAGISEL-NEXT: scratch_load_b32 v101, off, s33 offset:288
+; DAGISEL-NEXT: scratch_load_b32 v102, off, s33 offset:292
+; DAGISEL-NEXT: scratch_load_b32 v103, off, s33 offset:296
+; DAGISEL-NEXT: scratch_load_b32 v112, off, s33 offset:300
+; DAGISEL-NEXT: scratch_load_b32 v113, off, s33 offset:304
+; DAGISEL-NEXT: scratch_load_b32 v114, off, s33 offset:308
+; DAGISEL-NEXT: scratch_load_b32 v115, off, s33 offset:312
+; DAGISEL-NEXT: scratch_load_b32 v116, off, s33 offset:316
+; DAGISEL-NEXT: scratch_load_b32 v117, off, s33 offset:320
+; DAGISEL-NEXT: scratch_load_b32 v118, off, s33 offset:324
+; DAGISEL-NEXT: scratch_load_b32 v119, off, s33 offset:328
+; DAGISEL-NEXT: scratch_load_b32 v128, off, s33 offset:332
+; DAGISEL-NEXT: scratch_load_b32 v129, off, s33 offset:336
+; DAGISEL-NEXT: scratch_load_b32 v130, off, s33 offset:340
+; DAGISEL-NEXT: scratch_load_b32 v131, off, s33 offset:344
+; DAGISEL-NEXT: scratch_load_b32 v132, off, s33 offset:348
+; DAGISEL-NEXT: scratch_load_b32 v133, off, s33 offset:352
+; DAGISEL-NEXT: scratch_load_b32 v134, off, s33 offset:356
+; DAGISEL-NEXT: scratch_load_b32 v135, off, s33 offset:360
+; DAGISEL-NEXT: scratch_load_b32 v144, off, s33 offset:364
+; DAGISEL-NEXT: scratch_load_b32 v145, off, s33 offset:368
+; DAGISEL-NEXT: scratch_load_b32 v146, off, s33 offset:372
+; DAGISEL-NEXT: scratch_load_b32 v147, off, s33 offset:376
+; DAGISEL-NEXT: scratch_load_b32 v148, off, s33 offset:380
+; DAGISEL-NEXT: scratch_load_b32 v149, off, s33 offset:384
+; DAGISEL-NEXT: scratch_load_b32 v150, off, s33 offset:388
+; DAGISEL-NEXT: scratch_load_b32 v151, off, s33 offset:392
+; DAGISEL-NEXT: s_clause 0x1f
+; DAGISEL-NEXT: scratch_load_b32 v160, off, s33 offset:396
+; DAGISEL-NEXT: scratch_load_b32 v161, off, s33 offset:400
+; DAGISEL-NEXT: scratch_load_b32 v162, off, s33 offset:404
+; DAGISEL-NEXT: scratch_load_b32 v163, off, s33 offset:408
+; DAGISEL-NEXT: scratch_load_b32 v164, off, s33 offset:412
+; DAGISEL-NEXT: scratch_load_b32 v165, off, s33 offset:416
+; DAGISEL-NEXT: scratch_load_b32 v166, off, s33 offset:420
+; DAGISEL-NEXT: scratch_load_b32 v167, off, s33 offset:424
+; DAGISEL-NEXT: scratch_load_b32 v176, off, s33 offset:428
+; DAGISEL-NEXT: scratch_load_b32 v177, off, s33 offset:432
+; DAGISEL-NEXT: scratch_load_b32 v178, off, s33 offset:436
+; DAGISEL-NEXT: scratch_load_b32 v179, off, s33 offset:440
+; DAGISEL-NEXT: scratch_load_b32 v180, off, s33 offset:444
+; DAGISEL-NEXT: scratch_load_b32 v181, off, s33 offset:448
+; DAGISEL-NEXT: scratch_load_b32 v182, off, s33 offset:452
+; DAGISEL-NEXT: scratch_load_b32 v183, off, s33 offset:456
+; DAGISEL-NEXT: scratch_load_b32 v192, off, s33 offset:460
+; DAGISEL-NEXT: scratch_load_b32 v193, off, s33 offset:464
+; DAGISEL-NEXT: scratch_load_b32 v194, off, s33 offset:468
+; DAGISEL-NEXT: scratch_load_b32 v195, off, s33 offset:472
+; DAGISEL-NEXT: scratch_load_b32 v196, off, s33 offset:476
+; DAGISEL-NEXT: scratch_load_b32 v197, off, s33 offset:480
+; DAGISEL-NEXT: scratch_load_b32 v198, off, s33 offset:484
+; DAGISEL-NEXT: scratch_load_b32 v199, off, s33 offset:488
+; DAGISEL-NEXT: scratch_load_b32 v208, off, s33 offset:492
+; DAGISEL-NEXT: scratch_load_b32 v209, off, s33 offset:496
+; DAGISEL-NEXT: scratch_load_b32 v210, off, s33 offset:500
+; DAGISEL-NEXT: scratch_load_b32 v211, off, s33 offset:504
+; DAGISEL-NEXT: scratch_load_b32 v212, off, s33 offset:508
+; DAGISEL-NEXT: scratch_load_b32 v213, off, s33 offset:512
+; DAGISEL-NEXT: scratch_load_b32 v214, off, s33 offset:516
+; DAGISEL-NEXT: scratch_load_b32 v215, off, s33 offset:520
+; DAGISEL-NEXT: s_clause 0xf
+; DAGISEL-NEXT: scratch_load_b32 v224, off, s33 offset:524
+; DAGISEL-NEXT: scratch_load_b32 v225, off, s33 offset:528
+; DAGISEL-NEXT: scratch_load_b32 v226, off, s33 offset:532
+; DAGISEL-NEXT: scratch_load_b32 v227, off, s33 offset:536
+; DAGISEL-NEXT: scratch_load_b32 v228, off, s33 offset:540
+; DAGISEL-NEXT: scratch_load_b32 v229, off, s33 offset:544
+; DAGISEL-NEXT: scratch_load_b32 v230, off, s33 offset:548
+; DAGISEL-NEXT: scratch_load_b32 v231, off, s33 offset:552
+; DAGISEL-NEXT: scratch_load_b32 v240, off, s33 offset:556
+; DAGISEL-NEXT: scratch_load_b32 v241, off, s33 offset:560
+; DAGISEL-NEXT: scratch_load_b32 v242, off, s33 offset:564
+; DAGISEL-NEXT: scratch_load_b32 v243, off, s33 offset:568
+; DAGISEL-NEXT: scratch_load_b32 v244, off, s33 offset:572
+; DAGISEL-NEXT: scratch_load_b32 v245, off, s33 offset:576
+; DAGISEL-NEXT: scratch_load_b32 v246, off, s33 offset:580
+; DAGISEL-NEXT: scratch_load_b32 v247, off, s33 offset:584
+; DAGISEL-NEXT: s_mov_b32 exec_lo, s4
+; DAGISEL-NEXT: s_mov_b32 s33, s0
+; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL-NEXT: s_wait_alu 0xfffe
+; DAGISEL-NEXT: s_setpc_b64 s[30:31]
+;
+; GISEL-LABEL: call_from_whole_wave:
+; GISEL: ; %bb.0:
+; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL-NEXT: s_wait_expcnt 0x0
+; GISEL-NEXT: s_wait_samplecnt 0x0
+; GISEL-NEXT: s_wait_bvhcnt 0x0
+; GISEL-NEXT: s_wait_kmcnt 0x0
+; GISEL-NEXT: s_mov_b32 s0, s33
+; GISEL-NEXT: s_mov_b32 s33, s32
+; GISEL-NEXT: s_xor_saveexec_b32 s4, -1
+; GISEL-NEXT: s_clause 0x1f
+; GISEL-NEXT: scratch_store_b32 off, v0, s33 offset:4
+; GISEL-NEXT: scratch_store_b32 off, v1, s33 offset:8
+; GISEL-NEXT: scratch_store_b32 off, v2, s33 offset:12
+; GISEL-NEXT: scratch_store_b32 off, v3, s33 offset:16
+; GISEL-NEXT: scratch_store_b32 off, v4, s33 offset:20
+; GISEL-NEXT: scratch_store_b32 off, v5, s33 offset:24
+; GISEL-NEXT: scratch_store_b32 off, v6, s33 offset:28
+; GISEL-NEXT: scratch_store_b32 off, v7, s33 offset:32
+; GISEL-NEXT: scratch_store_b32 off, v8, s33 offset:36
+; GISEL-NEXT: scratch_store_b32 off, v9, s33 offset:40
+; GISEL-NEXT: scratch_store_b32 off, v10, s33 offset:44
+; GISEL-NEXT: scratch_store_b32 off, v11, s33 offset:48
+; GISEL-NEXT: scratch_store_b32 off, v12, s33 offset:52
+; GISEL-NEXT: scratch_store_b32 off, v13, s33 offset:56
+; GISEL-NEXT: scratch_store_b32 off, v14, s33 offset:60
+; GISEL-NEXT: scratch_store_b32 off, v15, s33 offset:64
+; GISEL-NEXT: scratch_store_b32 off, v16, s33 offset:68
+; GISEL-NEXT: scratch_store_b32 off, v17, s33 offset:72
+; GISEL-NEXT: scratch_store_b32 off, v18, s33 offset:76
+; GISEL-NEXT: scratch_store_b32 off, v19, s33 offset:80
+; GISEL-NEXT: scratch_store_b32 off, v20, s33 offset:84
+; GISEL-NEXT: scratch_store_b32 off, v21, s33 offset:88
+; GISEL-NEXT: scratch_store_b32 off, v22, s33 offset:92
+; GISEL-NEXT: scratch_store_b32 off, v23, s33 offset:96
+; GISEL-NEXT: scratch_store_b32 off, v24, s33 offset:100
+; GISEL-NEXT: scratch_store_b32 off, v25, s33 offset:104
+; GISEL-NEXT: scratch_store_b32 off, v26, s33 offset:108
+; GISEL-NEXT: scratch_store_b32 off, v27, s33 offset:112
+; GISEL-NEXT: scratch_store_b32 off, v28, s33 offset:116
+; GISEL-NEXT: scratch_store_b32 off, v29, s33 offset:120
+; GISEL-NEXT: scratch_store_b32 off, v30, s33 offset:124
+; GISEL-NEXT: scratch_store_b32 off, v31, s33 offset:128
+; GISEL-NEXT: s_clause 0x1f
+; GISEL-NEXT: scratch_store_b32 off, v32, s33 offset:132
+; GISEL-NEXT: scratch_store_b32 off, v33, s33 offset:136
+; GISEL-NEXT: scratch_store_b32 off, v34, s33 offset:140
+; GISEL-NEXT: scratch_store_b32 off, v35, s33 offset:144
+; GISEL-NEXT: scratch_store_b32 off, v36, s33 offset:148
+; GISEL-NEXT: scratch_store_b32 off, v37, s33 offset:152
+; GISEL-NEXT: scratch_store_b32 off, v38, s33 offset:156
+; GISEL-NEXT: scratch_store_b32 off, v39, s33 offset:160
+; GISEL-NEXT: scratch_store_b32 off, v48, s33 offset:172
+; GISEL-NEXT: scratch_store_b32 off, v49, s33 offset:176
+; GISEL-NEXT: scratch_store_b32 off, v50, s33 offset:180
+; GISEL-NEXT: scratch_store_b32 off, v51, s33 offset:184
+; GISEL-NEXT: scratch_store_b32 off, v52, s33 offset:188
+; GISEL-NEXT: scratch_store_b32 off, v53, s33 offset:192
+; GISEL-NEXT: scratch_store_b32 off, v54, s33 offset:196
+; GISEL-NEXT: scratch_store_b32 off, v55, s33 offset:200
+; GISEL-NEXT: scratch_store_b32 off, v64, s33 offset:204
+; GISEL-NEXT: scratch_store_b32 off, v65, s33 offset:208
+; GISEL-NEXT: scratch_store_b32 off, v66, s33 offset:212
+; GISEL-NEXT: scratch_store_b32 off, v67, s33 offset:216
+; GISEL-NEXT: scratch_store_b32 off, v68, s33 offset:220
+; GISEL-NEXT: scratch_store_b32 off, v69, s33 offset:224
+; GISEL-NEXT: scratch_store_b32 off, v70, s33 offset:228
+; GISEL-NEXT: scratch_store_b32 off, v71, s33 offset:232
+; GISEL-NEXT: scratch_store_b32 off, v80, s33 offset:236
+; GISEL-NEXT: scratch_store_b32 off, v81, s33 offset:240
+; GISEL-NEXT: scratch_store_b32 off, v82, s33 offset:244
+; GISEL-NEXT: scratch_store_b32 off, v83, s33 offset:248
+; GISEL-NEXT: scratch_store_b32 off, v84, s33 offset:252
+; GISEL-NEXT: scratch_store_b32 off, v85, s33 offset:256
+; GISEL-NEXT: scratch_store_b32 off, v86, s33 offset:260
+; GISEL-NEXT: scratch_store_b32 off, v87, s33 offset:264
+; GISEL-NEXT: s_clause 0x1f
+; GISEL-NEXT: scratch_store_b32 off, v96, s33 offset:268
+; GISEL-NEXT: scratch_store_b32 off, v97, s33 offset:272
+; GISEL-NEXT: scratch_store_b32 off, v98, s33 offset:276
+; GISEL-NEXT: scratch_store_b32 off, v99, s33 offset:280
+; GISEL-NEXT: scratch_store_b32 off, v100, s33 offset:284
+; GISEL-NEXT: scratch_store_b32 off, v101, s33 offset:288
+; GISEL-NEXT: scratch_store_b32 off, v102, s33 offset:292
+; GISEL-NEXT: scratch_store_b32 off, v103, s33 offset:296
+; GISEL-NEXT: scratch_store_b32 off, v112, s33 offset:300
+; GISEL-NEXT: scratch_store_b32 off, v113, s33 offset:304
+; GISEL-NEXT: scratch_store_b32 off, v114, s33 offset:308
+; GISEL-NEXT: scratch_store_b32 off, v115, s33 offset:312
+; GISEL-NEXT: scratch_store_b32 off, v116, s33 offset:316
+; GISEL-NEXT: scratch_store_b32 off, v117, s33 offset:320
+; GISEL-NEXT: scratch_store_b32 off, v118, s33 offset:324
+; GISEL-NEXT: scratch_store_b32 off, v119, s33 offset:328
+; GISEL-NEXT: scratch_store_b32 off, v128, s33 offset:332
+; GISEL-NEXT: scratch_store_b32 off, v129, s33 offset:336
+; GISEL-NEXT: scratch_store_b32 off, v130, s33 offset:340
+; GISEL-NEXT: scratch_store_b32 off, v131, s33 offset:344
+; GISEL-NEXT: scratch_store_b32 off, v132, s33 offset:348
+; GISEL-NEXT: scratch_store_b32 off, v133, s33 offset:352
+; GISEL-NEXT: scratch_store_b32 off, v134, s33 offset:356
+; GISEL-NEXT: scratch_store_b32 off, v135, s33 offset:360
+; GISEL-NEXT: scratch_store_b32 off, v144, s33 offset:364
+; GISEL-NEXT: scratch_store_b32 off, v145, s33 offset:368
+; GISEL-NEXT: scratch_store_b32 off, v146, s33 offset:372
+; GISEL-NEXT: scratch_store_b32 off, v147, s33 offset:376
+; GISEL-NEXT: scratch_store_b32 off, v148, s33 offset:380
+; GISEL-NEXT: scratch_store_b32 off, v149, s33 offset:384
+; GISEL-NEXT: scratch_store_b32 off, v150, s33 offset:388
+; GISEL-NEXT: scratch_store_b32 off, v151, s33 offset:392
+; GISEL-NEXT: s_clause 0x1f
+; GISEL-NEXT: scratch_store_b32 off, v160, s33 offset:396
+; GISEL-NEXT: scratch_store_b32 off, v161, s33 offset:400
+; GISEL-NEXT: scratch_store_b32 off, v162, s33 offset:404
+; GISEL-NEXT: scratch_store_b32 off, v163, s33 offset:408
+; GISEL-NEXT: scratch_store_b32 off, v164, s33 offset:412
+; GISEL-NEXT: scratch_store_b32 off, v165, s33 offset:416
+; GISEL-NEXT: scratch_store_b32 off, v166, s33 offset:420
+; GISEL-NEXT: scratch_store_b32 off, v167, s33 offset:424
+; GISEL-NEXT: scratch_store_b32 off, v176, s33 offset:428
+; GISEL-NEXT: scratch_store_b32 off, v177, s33 offset:432
+; GISEL-NEXT: scratch_store_b32 off, v178, s33 offset:436
+; GISEL-NEXT: scratch_store_b32 off, v179, s33 offset:440
+; GISEL-NEXT: scratch_store_b32 off, v180, s33 offset:444
+; GISEL-NEXT: scratch_store_b32 off, v181, s33 offset:448
+; GISEL-NEXT: scratch_store_b32 off, v182, s33 offset:452
+; GISEL-NEXT: scratch_store_b32 off, v183, s33 offset:456
+; GISEL-NEXT: scratch_store_b32 off, v192, s33 offset:460
+; GISEL-NEXT: scratch_store_b32 off, v193, s33 offset:464
+; GISEL-NEXT: scratch_store_b32 off, v194, s33 offset:468
+; GISEL-NEXT: scratch_store_b32 off, v195, s33 offset:472
+; GISEL-NEXT: scratch_store_b32 off, v196, s33 offset:476
+; GISEL-NEXT: scratch_store_b32 off, v197, s33 offset:480
+; GISEL-NEXT: scratch_store_b32 off, v198, s33 offset:484
+; GISEL-NEXT: scratch_store_b32 off, v199, s33 offset:488
+; GISEL-NEXT: scratch_store_b32 off, v208, s33 offset:492
+; GISEL-NEXT: scratch_store_b32 off, v209, s33 offset:496
+; GISEL-NEXT: scratch_store_b32 off, v210, s33 offset:500
+; GISEL-NEXT: scratch_store_b32 off, v211, s33 offset:504
+; GISEL-NEXT: scratch_store_b32 off, v212, s33 offset:508
+; GISEL-NEXT: scratch_store_b32 off, v213, s33 offset:512
+; GISEL-NEXT: scratch_store_b32 off, v214, s33 offset:516
+; GISEL-NEXT: scratch_store_b32 off, v215, s33 offset:520
+; GISEL-NEXT: s_clause 0xf
+; GISEL-NEXT: scratch_store_b32 off, v224, s33 offset:524
+; GISEL-NEXT: scratch_store_b32 off, v225, s33 offset:528
+; GISEL-NEXT: scratch_store_b32 off, v226, s33 offset:532
+; GISEL-NEXT: scratch_store_b32 off, v227, s33 offset:536
+; GISEL-NEXT: scratch_store_b32 off, v228, s33 offset:540
+; GISEL-NEXT: scratch_store_b32 off, v229, s33 offset:544
+; GISEL-NEXT: scratch_store_b32 off, v230, s33 offset:548
+; GISEL-NEXT: scratch_store_b32 off, v231, s33 offset:552
+; GISEL-NEXT: scratch_store_b32 off, v240, s33 offset:556
+; GISEL-NEXT: scratch_store_b32 off, v241, s33 offset:560
+; GISEL-NEXT: scratch_store_b32 off, v242, s33 offset:564
+; GISEL-NEXT: scratch_store_b32 off, v243, s33 offset:568
+; GISEL-NEXT: scratch_store_b32 off, v244, s33 offset:572
+; GISEL-NEXT: scratch_store_b32 off, v245, s33 offset:576
+; GISEL-NEXT: scratch_store_b32 off, v246, s33 offset:580
+; GISEL-NEXT: scratch_store_b32 off, v247, s33 offset:584
+; GISEL-NEXT: s_mov_b32 exec_lo, -1
+; GISEL-NEXT: s_clause 0x2
+; GISEL-NEXT: scratch_store_b32 off, v42, s33
+; GISEL-NEXT: scratch_store_b32 off, v40, s33 offset:164
+; GISEL-NEXT: scratch_store_b32 off, v41, s33 offset:168
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: v_writelane_b32 v42, s0, 3
+; GISEL-NEXT: s_mov_b32 s0, callee@abs32@lo
+; GISEL-NEXT: s_mov_b32 s1, callee@abs32@hi
+; GISEL-NEXT: s_addk_co_i32 s32, 0x250
+; GISEL-NEXT: v_dual_mov_b32 v40, v8 :: v_dual_mov_b32 v41, v9
+; GISEL-NEXT: v_writelane_b32 v42, s4, 0
+; GISEL-NEXT: v_writelane_b32 v42, s30, 1
+; GISEL-NEXT: v_writelane_b32 v42, s31, 2
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
+; GISEL-NEXT: flat_store_b32 v[40:41], v0
+; GISEL-NEXT: v_readlane_b32 s31, v42, 2
+; GISEL-NEXT: v_readlane_b32 s30, v42, 1
+; GISEL-NEXT: v_readlane_b32 s4, v42, 0
+; GISEL-NEXT: v_readlane_b32 s0, v42, 3
+; GISEL-NEXT: s_clause 0x2
+; GISEL-NEXT: scratch_load_b32 v42, off, s33
+; GISEL-NEXT: scratch_load_b32 v40, off, s33 offset:164
+; GISEL-NEXT: scratch_load_b32 v41, off, s33 offset:168
+; GISEL-NEXT: s_mov_b32 s32, s33
+; GISEL-NEXT: s_xor_b32 exec_lo, s4, -1
+; GISEL-NEXT: s_clause 0x1f
+; GISEL-NEXT: scratch_load_b32 v0, off, s33 offset:4
+; GISEL-NEXT: scratch_load_b32 v1, off, s33 offset:8
+; GISEL-NEXT: scratch_load_b32 v2, off, s33 offset:12
+; GISEL-NEXT: scratch_load_b32 v3, off, s33 offset:16
+; GISEL-NEXT: scratch_load_b32 v4, off, s33 offset:20
+; GISEL-NEXT: scratch_load_b32 v5, off, s33 offset:24
+; GISEL-NEXT: scratch_load_b32 v6, off, s33 offset:28
+; GISEL-NEXT: scratch_load_b32 v7, off, s33 offset:32
+; GISEL-NEXT: scratch_load_b32 v8, off, s33 offset:36
+; GISEL-NEXT: scratch_load_b32 v9, off, s33 offset:40
+; GISEL-NEXT: scratch_load_b32 v10, off, s33 offset:44
+; GISEL-NEXT: scratch_load_b32 v11, off, s33 offset:48
+; GISEL-NEXT: scratch_load_b32 v12, off, s33 offset:52
+; GISEL-NEXT: scratch_load_b32 v13, off, s33 offset:56
+; GISEL-NEXT: scratch_load_b32 v14, off, s33 offset:60
+; GISEL-NEXT: scratch_load_b32 v15, off, s33 offset:64
+; GISEL-NEXT: scratch_load_b32 v16, off, s33 offset:68
+; GISEL-NEXT: scratch_load_b32 v17, off, s33 offset:72
+; GISEL-NEXT: scratch_load_b32 v18, off, s33 offset:76
+; GISEL-NEXT: scratch_load_b32 v19, off, s33 offset:80
+; GISEL-NEXT: scratch_load_b32 v20, off, s33 offset:84
+; GISEL-NEXT: scratch_load_b32 v21, off, s33 offset:88
+; GISEL-NEXT: scratch_load_b32 v22, off, s33 offset:92
+; GISEL-NEXT: scratch_load_b32 v23, off, s33 offset:96
+; GISEL-NEXT: scratch_load_b32 v24, off, s33 offset:100
+; GISEL-NEXT: scratch_load_b32 v25, off, s33 offset:104
+; GISEL-NEXT: scratch_load_b32 v26, off, s33 offset:108
+; GISEL-NEXT: scratch_load_b32 v27, off, s33 offset:112
+; GISEL-NEXT: scratch_load_b32 v28, off, s33 offset:116
+; GISEL-NEXT: scratch_load_b32 v29, off, s33 offset:120
+; GISEL-NEXT: scratch_load_b32 v30, off, s33 offset:124
+; GISEL-NEXT: scratch_load_b32 v31, off, s33 offset:128
+; GISEL-NEXT: s_clause 0x1f
+; GISEL-NEXT: scratch_load_b32 v32, off, s33 offset:132
+; GISEL-NEXT: scratch_load_b32 v33, off, s33 offset:136
+; GISEL-NEXT: scratch_load_b32 v34, off, s33 offset:140
+; GISEL-NEXT: scratch_load_b32 v35, off, s33 offset:144
+; GISEL-NEXT: scratch_load_b32 v36, off, s33 offset:148
+; GISEL-NEXT: scratch_load_b32 v37, off, s33 offset:152
+; GISEL-NEXT: scratch_load_b32 v38, off, s33 offset:156
+; GISEL-NEXT: scratch_load_b32 v39, off, s33 offset:160
+; GISEL-NEXT: scratch_load_b32 v48, off, s33 offset:172
+; GISEL-NEXT: scratch_load_b32 v49, off, s33 offset:176
+; GISEL-NEXT: scratch_load_b32 v50, off, s33 offset:180
+; GISEL-NEXT: scratch_load_b32 v51, off, s33 offset:184
+; GISEL-NEXT: scratch_load_b32 v52, off, s33 offset:188
+; GISEL-NEXT: scratch_load_b32 v53, off, s33 offset:192
+; GISEL-NEXT: scratch_load_b32 v54, off, s33 offset:196
+; GISEL-NEXT: scratch_load_b32 v55, off, s33 offset:200
+; GISEL-NEXT: scratch_load_b32 v64, off, s33 offset:204
+; GISEL-NEXT: scratch_load_b32 v65, off, s33 offset:208
+; GISEL-NEXT: scratch_load_b32 v66, off, s33 offset:212
+; GISEL-NEXT: scratch_load_b32 v67, off, s33 offset:216
+; GISEL-NEXT: scratch_load_b32 v68, off, s33 offset:220
+; GISEL-NEXT: scratch_load_b32 v69, off, s33 offset:224
+; GISEL-NEXT: scratch_load_b32 v70, off, s33 offset:228
+; GISEL-NEXT: scratch_load_b32 v71, off, s33 offset:232
+; GISEL-NEXT: scratch_load_b32 v80, off, s33 offset:236
+; GISEL-NEXT: scratch_load_b32 v81, off, s33 offset:240
+; GISEL-NEXT: scratch_load_b32 v82, off, s33 offset:244
+; GISEL-NEXT: scratch_load_b32 v83, off, s33 offset:248
+; GISEL-NEXT: scratch_load_b32 v84, off, s33 offset:252
+; GISEL-NEXT: scratch_load_b32 v85, off, s33 offset:256
+; GISEL-NEXT: scratch_load_b32 v86, off, s33 offset:260
+; GISEL-NEXT: scratch_load_b32 v87, off, s33 offset:264
+; GISEL-NEXT: s_clause 0x1f
+; GISEL-NEXT: scratch_load_b32 v96, off, s33 offset:268
+; GISEL-NEXT: scratch_load_b32 v97, off, s33 offset:272
+; GISEL-NEXT: scratch_load_b32 v98, off, s33 offset:276
+; GISEL-NEXT: scratch_load_b32 v99, off, s33 offset:280
+; GISEL-NEXT: scratch_load_b32 v100, off, s33 offset:284
+; GISEL-NEXT: scratch_load_b32 v101, off, s33 offset:288
+; GISEL-NEXT: scratch_load_b32 v102, off, s33 offset:292
+; GISEL-NEXT: scratch_load_b32 v103, off, s33 offset:296
+; GISEL-NEXT: scratch_load_b32 v112, off, s33 offset:300
+; GISEL-NEXT: scratch_load_b32 v113, off, s33 offset:304
+; GISEL-NEXT: scratch_load_b32 v114, off, s33 offset:308
+; GISEL-NEXT: scratch_load_b32 v115, off, s33 offset:312
+; GISEL-NEXT: scratch_load_b32 v116, off, s33 offset:316
+; GISEL-NEXT: scratch_load_b32 v117, off, s33 offset:320
+; GISEL-NEXT: scratch_load_b32 v118, off, s33 offset:324
+; GISEL-NEXT: scratch_load_b32 v119, off, s33 offset:328
+; GISEL-NEXT: scratch_load_b32 v128, off, s33 offset:332
+; GISEL-NEXT: scratch_load_b32 v129, off, s33 offset:336
+; GISEL-NEXT: scratch_load_b32 v130, off, s33 offset:340
+; GISEL-NEXT: scratch_load_b32 v131, off, s33 offset:344
+; GISEL-NEXT: scratch_load_b32 v132, off, s33 offset:348
+; GISEL-NEXT: scratch_load_b32 v133, off, s33 offset:352
+; GISEL-NEXT: scratch_load_b32 v134, off, s33 offset:356
+; GISEL-NEXT: scratch_load_b32 v135, off, s33 offset:360
+; GISEL-NEXT: scratch_load_b32 v144, off, s33 offset:364
+; GISEL-NEXT: scratch_load_b32 v145, off, s33 offset:368
+; GISEL-NEXT: scratch_load_b32 v146, off, s33 offset:372
+; GISEL-NEXT: scratch_load_b32 v147, off, s33 offset:376
+; GISEL-NEXT: scratch_load_b32 v148, off, s33 offset:380
+; GISEL-NEXT: scratch_load_b32 v149, off, s33 offset:384
+; GISEL-NEXT: scratch_load_b32 v150, off, s33 offset:388
+; GISEL-NEXT: scratch_load_b32 v151, off, s33 offset:392
+; GISEL-NEXT: s_clause 0x1f
+; GISEL-NEXT: scratch_load_b32 v160, off, s33 offset:396
+; GISEL-NEXT: scratch_load_b32 v161, off, s33 offset:400
+; GISEL-NEXT: scratch_load_b32 v162, off, s33 offset:404
+; GISEL-NEXT: scratch_load_b32 v163, off, s33 offset:408
+; GISEL-NEXT: scratch_load_b32 v164, off, s33 offset:412
+; GISEL-NEXT: scratch_load_b32 v165, off, s33 offset:416
+; GISEL-NEXT: scratch_load_b32 v166, off, s33 offset:420
+; GISEL-NEXT: scratch_load_b32 v167, off, s33 offset:424
+; GISEL-NEXT: scratch_load_b32 v176, off, s33 offset:428
+; GISEL-NEXT: scratch_load_b32 v177, off, s33 offset:432
+; GISEL-NEXT: scratch_load_b32 v178, off, s33 offset:436
+; GISEL-NEXT: scratch_load_b32 v179, off, s33 offset:440
+; GISEL-NEXT: scratch_load_b32 v180, off, s33 offset:444
+; GISEL-NEXT: scratch_load_b32 v181, off, s33 offset:448
+; GISEL-NEXT: scratch_load_b32 v182, off, s33 offset:452
+; GISEL-NEXT: scratch_load_b32 v183, off, s33 offset:456
+; GISEL-NEXT: scratch_load_b32 v192, off, s33 offset:460
+; GISEL-NEXT: scratch_load_b32 v193, off, s33 offset:464
+; GISEL-NEXT: scratch_load_b32 v194, off, s33 offset:468
+; GISEL-NEXT: scratch_load_b32 v195, off, s33 offset:472
+; GISEL-NEXT: scratch_load_b32 v196, off, s33 offset:476
+; GISEL-NEXT: scratch_load_b32 v197, off, s33 offset:480
+; GISEL-NEXT: scratch_load_b32 v198, off, s33 offset:484
+; GISEL-NEXT: scratch_load_b32 v199, off, s33 offset:488
+; GISEL-NEXT: scratch_load_b32 v208, off, s33 offset:492
+; GISEL-NEXT: scratch_load_b32 v209, off, s33 offset:496
+; GISEL-NEXT: scratch_load_b32 v210, off, s33 offset:500
+; GISEL-NEXT: scratch_load_b32 v211, off, s33 offset:504
+; GISEL-NEXT: scratch_load_b32 v212, off, s33 offset:508
+; GISEL-NEXT: scratch_load_b32 v213, off, s33 offset:512
+; GISEL-NEXT: scratch_load_b32 v214, off, s33 offset:516
+; GISEL-NEXT: scratch_load_b32 v215, off, s33 offset:520
+; GISEL-NEXT: s_clause 0xf
+; GISEL-NEXT: scratch_load_b32 v224, off, s33 offset:524
+; GISEL-NEXT: scratch_load_b32 v225, off, s33 offset:528
+; GISEL-NEXT: scratch_load_b32 v226, off, s33 offset:532
+; GISEL-NEXT: scratch_load_b32 v227, off, s33 offset:536
+; GISEL-NEXT: scratch_load_b32 v228, off, s33 offset:540
+; GISEL-NEXT: scratch_load_b32 v229, off, s33 offset:544
+; GISEL-NEXT: scratch_load_b32 v230, off, s33 offset:548
+; GISEL-NEXT: scratch_load_b32 v231, off, s33 offset:552
+; GISEL-NEXT: scratch_load_b32 v240, off, s33 offset:556
+; GISEL-NEXT: scratch_load_b32 v241, off, s33 offset:560
+; GISEL-NEXT: scratch_load_b32 v242, off, s33 offset:564
+; GISEL-NEXT: scratch_load_b32 v243, off, s33 offset:568
+; GISEL-NEXT: scratch_load_b32 v244, off, s33 offset:572
+; GISEL-NEXT: scratch_load_b32 v245, off, s33 offset:576
+; GISEL-NEXT: scratch_load_b32 v246, off, s33 offset:580
+; GISEL-NEXT: scratch_load_b32 v247, off, s33 offset:584
+; GISEL-NEXT: s_mov_b32 exec_lo, s4
+; GISEL-NEXT: s_mov_b32 s33, s0
+; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL-NEXT: s_wait_alu 0xfffe
+; GISEL-NEXT: s_setpc_b64 s[30:31]
+;
+; DAGISEL64-LABEL: call_from_whole_wave:
+; DAGISEL64: ; %bb.0:
+; DAGISEL64-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL64-NEXT: s_wait_expcnt 0x0
+; DAGISEL64-NEXT: s_wait_samplecnt 0x0
+; DAGISEL64-NEXT: s_wait_bvhcnt 0x0
+; DAGISEL64-NEXT: s_wait_kmcnt 0x0
+; DAGISEL64-NEXT: s_mov_b32 s0, s33
+; DAGISEL64-NEXT: s_mov_b32 s33, s32
+; DAGISEL64-NEXT: s_xor_saveexec_b64 s[4:5], -1
+; DAGISEL64-NEXT: s_clause 0x1f
+; DAGISEL64-NEXT: scratch_store_b32 off, v0, s33 offset:4
+; DAGISEL64-NEXT: scratch_store_b32 off, v1, s33 offset:8
+; DAGISEL64-NEXT: scratch_store_b32 off, v2, s33 offset:12
+; DAGISEL64-NEXT: scratch_store_b32 off, v3, s33 offset:16
+; DAGISEL64-NEXT: scratch_store_b32 off, v4, s33 offset:20
+; DAGISEL64-NEXT: scratch_store_b32 off, v5, s33 offset:24
+; DAGISEL64-NEXT: scratch_store_b32 off, v6, s33 offset:28
+; DAGISEL64-NEXT: scratch_store_b32 off, v7, s33 offset:32
+; DAGISEL64-NEXT: scratch_store_b32 off, v8, s33 offset:36
+; DAGISEL64-NEXT: scratch_store_b32 off, v9, s33 offset:40
+; DAGISEL64-NEXT: scratch_store_b32 off, v10, s33 offset:44
+; DAGISEL64-NEXT: scratch_store_b32 off, v11, s33 offset:48
+; DAGISEL64-NEXT: scratch_store_b32 off, v12, s33 offset:52
+; DAGISEL64-NEXT: scratch_store_b32 off, v13, s33 offset:56
+; DAGISEL64-NEXT: scratch_store_b32 off, v14, s33 offset:60
+; DAGISEL64-NEXT: scratch_store_b32 off, v15, s33 offset:64
+; DAGISEL64-NEXT: scratch_store_b32 off, v16, s33 offset:68
+; DAGISEL64-NEXT: scratch_store_b32 off, v17, s33 offset:72
+; DAGISEL64-NEXT: scratch_store_b32 off, v18, s33 offset:76
+; DAGISEL64-NEXT: scratch_store_b32 off, v19, s33 offset:80
+; DAGISEL64-NEXT: scratch_store_b32 off, v20, s33 offset:84
+; DAGISEL64-NEXT: scratch_store_b32 off, v21, s33 offset:88
+; DAGISEL64-NEXT: scratch_store_b32 off, v22, s33 offset:92
+; DAGISEL64-NEXT: scratch_store_b32 off, v23, s33 offset:96
+; DAGISEL64-NEXT: scratch_store_b32 off, v24, s33 offset:100
+; DAGISEL64-NEXT: scratch_store_b32 off, v25, s33 offset:104
+; DAGISEL64-NEXT: scratch_store_b32 off, v26, s33 offset:108
+; DAGISEL64-NEXT: scratch_store_b32 off, v27, s33 offset:112
+; DAGISEL64-NEXT: scratch_store_b32 off, v28, s33 offset:116
+; DAGISEL64-NEXT: scratch_store_b32 off, v29, s33 offset:120
+; DAGISEL64-NEXT: scratch_store_b32 off, v30, s33 offset:124
+; DAGISEL64-NEXT: scratch_store_b32 off, v31, s33 offset:128
+; DAGISEL64-NEXT: s_clause 0x1f
+; DAGISEL64-NEXT: scratch_store_b32 off, v32, s33 offset:132
+; DAGISEL64-NEXT: scratch_store_b32 off, v33, s33 offset:136
+; DAGISEL64-NEXT: scratch_store_b32 off, v34, s33 offset:140
+; DAGISEL64-NEXT: scratch_store_b32 off, v35, s33 offset:144
+; DAGISEL64-NEXT: scratch_store_b32 off, v36, s33 offset:148
+; DAGISEL64-NEXT: scratch_store_b32 off, v37, s33 offset:152
+; DAGISEL64-NEXT: scratch_store_b32 off, v38, s33 offset:156
+; DAGISEL64-NEXT: scratch_store_b32 off, v39, s33 offset:160
+; DAGISEL64-NEXT: scratch_store_b32 off, v48, s33 offset:172
+; DAGISEL64-NEXT: scratch_store_b32 off, v49, s33 offset:176
+; DAGISEL64-NEXT: scratch_store_b32 off, v50, s33 offset:180
+; DAGISEL64-NEXT: scratch_store_b32 off, v51, s33 offset:184
+; DAGISEL64-NEXT: scratch_store_b32 off, v52, s33 offset:188
+; DAGISEL64-NEXT: scratch_store_b32 off, v53, s33 offset:192
+; DAGISEL64-NEXT: scratch_store_b32 off, v54, s33 offset:196
+; DAGISEL64-NEXT: scratch_store_b32 off, v55, s33 offset:200
+; DAGISEL64-NEXT: scratch_store_b32 off, v64, s33 offset:204
+; DAGISEL64-NEXT: scratch_store_b32 off, v65, s33 offset:208
+; DAGISEL64-NEXT: scratch_store_b32 off, v66, s33 offset:212
+; DAGISEL64-NEXT: scratch_store_b32 off, v67, s33 offset:216
+; DAGISEL64-NEXT: scratch_store_b32 off, v68, s33 offset:220
+; DAGISEL64-NEXT: scratch_store_b32 off, v69, s33 offset:224
+; DAGISEL64-NEXT: scratch_store_b32 off, v70, s33 offset:228
+; DAGISEL64-NEXT: scratch_store_b32 off, v71, s33 offset:232
+; DAGISEL64-NEXT: scratch_store_b32 off, v80, s33 offset:236
+; DAGISEL64-NEXT: scratch_store_b32 off, v81, s33 offset:240
+; DAGISEL64-NEXT: scratch_store_b32 off, v82, s33 offset:244
+; DAGISEL64-NEXT: scratch_store_b32 off, v83, s33 offset:248
+; DAGISEL64-NEXT: scratch_store_b32 off, v84, s33 offset:252
+; DAGISEL64-NEXT: scratch_store_b32 off, v85, s33 offset:256
+; DAGISEL64-NEXT: scratch_store_b32 off, v86, s33 offset:260
+; DAGISEL64-NEXT: scratch_store_b32 off, v87, s33 offset:264
+; DAGISEL64-NEXT: s_clause 0x1f
+; DAGISEL64-NEXT: scratch_store_b32 off, v96, s33 offset:268
+; DAGISEL64-NEXT: scratch_store_b32 off, v97, s33 offset:272
+; DAGISEL64-NEXT: scratch_store_b32 off, v98, s33 offset:276
+; DAGISEL64-NEXT: scratch_store_b32 off, v99, s33 offset:280
+; DAGISEL64-NEXT: scratch_store_b32 off, v100, s33 offset:284
+; DAGISEL64-NEXT: scratch_store_b32 off, v101, s33 offset:288
+; DAGISEL64-NEXT: scratch_store_b32 off, v102, s33 offset:292
+; DAGISEL64-NEXT: scratch_store_b32 off, v103, s33 offset:296
+; DAGISEL64-NEXT: scratch_store_b32 off, v112, s33 offset:300
+; DAGISEL64-NEXT: scratch_store_b32 off, v113, s33 offset:304
+; DAGISEL64-NEXT: scratch_store_b32 off, v114, s33 offset:308
+; DAGISEL64-NEXT: scratch_store_b32 off, v115, s33 offset:312
+; DAGISEL64-NEXT: scratch_store_b32 off, v116, s33 offset:316
+; DAGISEL64-NEXT: scratch_store_b32 off, v117, s33 offset:320
+; DAGISEL64-NEXT: scratch_store_b32 off, v118, s33 offset:324
+; DAGISEL64-NEXT: scratch_store_b32 off, v119, s33 offset:328
+; DAGISEL64-NEXT: scratch_store_b32 off, v128, s33 offset:332
+; DAGISEL64-NEXT: scratch_store_b32 off, v129, s33 offset:336
+; DAGISEL64-NEXT: scratch_store_b32 off, v130, s33 offset:340
+; DAGISEL64-NEXT: scratch_store_b32 off, v131, s33 offset:344
+; DAGISEL64-NEXT: scratch_store_b32 off, v132, s33 offset:348
+; DAGISEL64-NEXT: scratch_store_b32 off, v133, s33 offset:352
+; DAGISEL64-NEXT: scratch_store_b32 off, v134, s33 offset:356
+; DAGISEL64-NEXT: scratch_store_b32 off, v135, s33 offset:360
+; DAGISEL64-NEXT: scratch_store_b32 off, v144, s33 offset:364
+; DAGISEL64-NEXT: scratch_store_b32 off, v145, s33 offset:368
+; DAGISEL64-NEXT: scratch_store_b32 off, v146, s33 offset:372
+; DAGISEL64-NEXT: scratch_store_b32 off, v147, s33 offset:376
+; DAGISEL64-NEXT: scratch_store_b32 off, v148, s33 offset:380
+; DAGISEL64-NEXT: scratch_store_b32 off, v149, s33 offset:384
+; DAGISEL64-NEXT: scratch_store_b32 off, v150, s33 offset:388
+; DAGISEL64-NEXT: scratch_store_b32 off, v151, s33 offset:392
+; DAGISEL64-NEXT: s_clause 0x1f
+; DAGISEL64-NEXT: scratch_store_b32 off, v160, s33 offset:396
+; DAGISEL64-NEXT: scratch_store_b32 off, v161, s33 offset:400
+; DAGISEL64-NEXT: scratch_store_b32 off, v162, s33 offset:404
+; DAGISEL64-NEXT: scratch_store_b32 off, v163, s33 offset:408
+; DAGISEL64-NEXT: scratch_store_b32 off, v164, s33 offset:412
+; DAGISEL64-NEXT: scratch_store_b32 off, v165, s33 offset:416
+; DAGISEL64-NEXT: scratch_store_b32 off, v166, s33 offset:420
+; DAGISEL64-NEXT: scratch_store_b32 off, v167, s33 offset:424
+; DAGISEL64-NEXT: scratch_store_b32 off, v176, s33 offset:428
+; DAGISEL64-NEXT: scratch_store_b32 off, v177, s33 offset:432
+; DAGISEL64-NEXT: scratch_store_b32 off, v178, s33 offset:436
+; DAGISEL64-NEXT: scratch_store_b32 off, v179, s33 offset:440
+; DAGISEL64-NEXT: scratch_store_b32 off, v180, s33 offset:444
+; DAGISEL64-NEXT: scratch_store_b32 off, v181, s33 offset:448
+; DAGISEL64-NEXT: scratch_store_b32 off, v182, s33 offset:452
+; DAGISEL64-NEXT: scratch_store_b32 off, v183, s33 offset:456
+; DAGISEL64-NEXT: scratch_store_b32 off, v192, s33 offset:460
+; DAGISEL64-NEXT: scratch_store_b32 off, v193, s33 offset:464
+; DAGISEL64-NEXT: scratch_store_b32 off, v194, s33 offset:468
+; DAGISEL64-NEXT: scratch_store_b32 off, v195, s33 offset:472
+; DAGISEL64-NEXT: scratch_store_b32 off, v196, s33 offset:476
+; DAGISEL64-NEXT: scratch_store_b32 off, v197, s33 offset:480
+; DAGISEL64-NEXT: scratch_store_b32 off, v198, s33 offset:484
+; DAGISEL64-NEXT: scratch_store_b32 off, v199, s33 offset:488
+; DAGISEL64-NEXT: scratch_store_b32 off, v208, s33 offset:492
+; DAGISEL64-NEXT: scratch_store_b32 off, v209, s33 offset:496
+; DAGISEL64-NEXT: scratch_store_b32 off, v210, s33 offset:500
+; DAGISEL64-NEXT: scratch_store_b32 off, v211, s33 offset:504
+; DAGISEL64-NEXT: scratch_store_b32 off, v212, s33 offset:508
+; DAGISEL64-NEXT: scratch_store_b32 off, v213, s33 offset:512
+; DAGISEL64-NEXT: scratch_store_b32 off, v214, s33 offset:516
+; DAGISEL64-NEXT: scratch_store_b32 off, v215, s33 offset:520
+; DAGISEL64-NEXT: s_clause 0xf
+; DAGISEL64-NEXT: scratch_store_b32 off, v224, s33 offset:524
+; DAGISEL64-NEXT: scratch_store_b32 off, v225, s33 offset:528
+; DAGISEL64-NEXT: scratch_store_b32 off, v226, s33 offset:532
+; DAGISEL64-NEXT: scratch_store_b32 off, v227, s33 offset:536
+; DAGISEL64-NEXT: scratch_store_b32 off, v228, s33 offset:540
+; DAGISEL64-NEXT: scratch_store_b32 off, v229, s33 offset:544
+; DAGISEL64-NEXT: scratch_store_b32 off, v230, s33 offset:548
+; DAGISEL64-NEXT: scratch_store_b32 off, v231, s33 offset:552
+; DAGISEL64-NEXT: scratch_store_b32 off, v240, s33 offset:556
+; DAGISEL64-NEXT: scratch_store_b32 off, v241, s33 offset:560
+; DAGISEL64-NEXT: scratch_store_b32 off, v242, s33 offset:564
+; DAGISEL64-NEXT: scratch_store_b32 off, v243, s33 offset:568
+; DAGISEL64-NEXT: scratch_store_b32 off, v244, s33 offset:572
+; DAGISEL64-NEXT: scratch_store_b32 off, v245, s33 offset:576
+; DAGISEL64-NEXT: scratch_store_b32 off, v246, s33 offset:580
+; DAGISEL64-NEXT: scratch_store_b32 off, v247, s33 offset:584
+; DAGISEL64-NEXT: s_mov_b64 exec, -1
+; DAGISEL64-NEXT: s_clause 0x2
+; DAGISEL64-NEXT: scratch_store_b32 off, v42, s33
+; DAGISEL64-NEXT: scratch_store_b32 off, v40, s33 offset:164
+; DAGISEL64-NEXT: scratch_store_b32 off, v41, s33 offset:168
+; DAGISEL64-NEXT: s_wait_alu 0xfffe
+; DAGISEL64-NEXT: v_writelane_b32 v42, s0, 4
+; DAGISEL64-NEXT: s_mov_b32 s1, callee at abs32@hi
+; DAGISEL64-NEXT: s_mov_b32 s0, callee at abs32@lo
+; DAGISEL64-NEXT: s_addk_co_i32 s32, 0x250
+; DAGISEL64-NEXT: v_mov_b32_e32 v41, v9
+; DAGISEL64-NEXT: v_writelane_b32 v42, s4, 0
+; DAGISEL64-NEXT: v_mov_b32_e32 v40, v8
+; DAGISEL64-NEXT: v_writelane_b32 v42, s5, 1
+; DAGISEL64-NEXT: v_writelane_b32 v42, s30, 2
+; DAGISEL64-NEXT: v_writelane_b32 v42, s31, 3
+; DAGISEL64-NEXT: s_wait_alu 0xfffe
+; DAGISEL64-NEXT: s_swappc_b64 s[30:31], s[0:1]
+; DAGISEL64-NEXT: flat_store_b32 v[40:41], v0
+; DAGISEL64-NEXT: v_readlane_b32 s31, v42, 3
+; DAGISEL64-NEXT: v_readlane_b32 s30, v42, 2
+; DAGISEL64-NEXT: v_readlane_b32 s5, v42, 1
+; DAGISEL64-NEXT: v_readlane_b32 s4, v42, 0
+; DAGISEL64-NEXT: v_readlane_b32 s0, v42, 4
+; DAGISEL64-NEXT: s_clause 0x2
+; DAGISEL64-NEXT: scratch_load_b32 v42, off, s33
+; DAGISEL64-NEXT: scratch_load_b32 v40, off, s33 offset:164
+; DAGISEL64-NEXT: scratch_load_b32 v41, off, s33 offset:168
+; DAGISEL64-NEXT: s_mov_b32 s32, s33
+; DAGISEL64-NEXT: s_xor_b64 exec, s[4:5], -1
+; DAGISEL64-NEXT: s_clause 0x1f
+; DAGISEL64-NEXT: scratch_load_b32 v0, off, s33 offset:4
+; DAGISEL64-NEXT: scratch_load_b32 v1, off, s33 offset:8
+; DAGISEL64-NEXT: scratch_load_b32 v2, off, s33 offset:12
+; DAGISEL64-NEXT: scratch_load_b32 v3, off, s33 offset:16
+; DAGISEL64-NEXT: scratch_load_b32 v4, off, s33 offset:20
+; DAGISEL64-NEXT: scratch_load_b32 v5, off, s33 offset:24
+; DAGISEL64-NEXT: scratch_load_b32 v6, off, s33 offset:28
+; DAGISEL64-NEXT: scratch_load_b32 v7, off, s33 offset:32
+; DAGISEL64-NEXT: scratch_load_b32 v8, off, s33 offset:36
+; DAGISEL64-NEXT: scratch_load_b32 v9, off, s33 offset:40
+; DAGISEL64-NEXT: scratch_load_b32 v10, off, s33 offset:44
+; DAGISEL64-NEXT: scratch_load_b32 v11, off, s33 offset:48
+; DAGISEL64-NEXT: scratch_load_b32 v12, off, s33 offset:52
+; DAGISEL64-NEXT: scratch_load_b32 v13, off, s33 offset:56
+; DAGISEL64-NEXT: scratch_load_b32 v14, off, s33 offset:60
+; DAGISEL64-NEXT: scratch_load_b32 v15, off, s33 offset:64
+; DAGISEL64-NEXT: scratch_load_b32 v16, off, s33 offset:68
+; DAGISEL64-NEXT: scratch_load_b32 v17, off, s33 offset:72
+; DAGISEL64-NEXT: scratch_load_b32 v18, off, s33 offset:76
+; DAGISEL64-NEXT: scratch_load_b32 v19, off, s33 offset:80
+; DAGISEL64-NEXT: scratch_load_b32 v20, off, s33 offset:84
+; DAGISEL64-NEXT: scratch_load_b32 v21, off, s33 offset:88
+; DAGISEL64-NEXT: scratch_load_b32 v22, off, s33 offset:92
+; DAGISEL64-NEXT: scratch_load_b32 v23, off, s33 offset:96
+; DAGISEL64-NEXT: scratch_load_b32 v24, off, s33 offset:100
+; DAGISEL64-NEXT: scratch_load_b32 v25, off, s33 offset:104
+; DAGISEL64-NEXT: scratch_load_b32 v26, off, s33 offset:108
+; DAGISEL64-NEXT: scratch_load_b32 v27, off, s33 offset:112
+; DAGISEL64-NEXT: scratch_load_b32 v28, off, s33 offset:116
+; DAGISEL64-NEXT: scratch_load_b32 v29, off, s33 offset:120
+; DAGISEL64-NEXT: scratch_load_b32 v30, off, s33 offset:124
+; DAGISEL64-NEXT: scratch_load_b32 v31, off, s33 offset:128
+; DAGISEL64-NEXT: s_clause 0x1f
+; DAGISEL64-NEXT: scratch_load_b32 v32, off, s33 offset:132
+; DAGISEL64-NEXT: scratch_load_b32 v33, off, s33 offset:136
+; DAGISEL64-NEXT: scratch_load_b32 v34, off, s33 offset:140
+; DAGISEL64-NEXT: scratch_load_b32 v35, off, s33 offset:144
+; DAGISEL64-NEXT: scratch_load_b32 v36, off, s33 offset:148
+; DAGISEL64-NEXT: scratch_load_b32 v37, off, s33 offset:152
+; DAGISEL64-NEXT: scratch_load_b32 v38, off, s33 offset:156
+; DAGISEL64-NEXT: scratch_load_b32 v39, off, s33 offset:160
+; DAGISEL64-NEXT: scratch_load_b32 v48, off, s33 offset:172
+; DAGISEL64-NEXT: scratch_load_b32 v49, off, s33 offset:176
+; DAGISEL64-NEXT: scratch_load_b32 v50, off, s33 offset:180
+; DAGISEL64-NEXT: scratch_load_b32 v51, off, s33 offset:184
+; DAGISEL64-NEXT: scratch_load_b32 v52, off, s33 offset:188
+; DAGISEL64-NEXT: scratch_load_b32 v53, off, s33 offset:192
+; DAGISEL64-NEXT: scratch_load_b32 v54, off, s33 offset:196
+; DAGISEL64-NEXT: scratch_load_b32 v55, off, s33 offset:200
+; DAGISEL64-NEXT: scratch_load_b32 v64, off, s33 offset:204
+; DAGISEL64-NEXT: scratch_load_b32 v65, off, s33 offset:208
+; DAGISEL64-NEXT: scratch_load_b32 v66, off, s33 offset:212
+; DAGISEL64-NEXT: scratch_load_b32 v67, off, s33 offset:216
+; DAGISEL64-NEXT: scratch_load_b32 v68, off, s33 offset:220
+; DAGISEL64-NEXT: scratch_load_b32 v69, off, s33 offset:224
+; DAGISEL64-NEXT: scratch_load_b32 v70, off, s33 offset:228
+; DAGISEL64-NEXT: scratch_load_b32 v71, off, s33 offset:232
+; DAGISEL64-NEXT: scratch_load_b32 v80, off, s33 offset:236
+; DAGISEL64-NEXT: scratch_load_b32 v81, off, s33 offset:240
+; DAGISEL64-NEXT: scratch_load_b32 v82, off, s33 offset:244
+; DAGISEL64-NEXT: scratch_load_b32 v83, off, s33 offset:248
+; DAGISEL64-NEXT: scratch_load_b32 v84, off, s33 offset:252
+; DAGISEL64-NEXT: scratch_load_b32 v85, off, s33 offset:256
+; DAGISEL64-NEXT: scratch_load_b32 v86, off, s33 offset:260
+; DAGISEL64-NEXT: scratch_load_b32 v87, off, s33 offset:264
+; DAGISEL64-NEXT: s_clause 0x1f
+; DAGISEL64-NEXT: scratch_load_b32 v96, off, s33 offset:268
+; DAGISEL64-NEXT: scratch_load_b32 v97, off, s33 offset:272
+; DAGISEL64-NEXT: scratch_load_b32 v98, off, s33 offset:276
+; DAGISEL64-NEXT: scratch_load_b32 v99, off, s33 offset:280
+; DAGISEL64-NEXT: scratch_load_b32 v100, off, s33 offset:284
+; DAGISEL64-NEXT: scratch_load_b32 v101, off, s33 offset:288
+; DAGISEL64-NEXT: scratch_load_b32 v102, off, s33 offset:292
+; DAGISEL64-NEXT: scratch_load_b32 v103, off, s33 offset:296
+; DAGISEL64-NEXT: scratch_load_b32 v112, off, s33 offset:300
+; DAGISEL64-NEXT: scratch_load_b32 v113, off, s33 offset:304
+; DAGISEL64-NEXT: scratch_load_b32 v114, off, s33 offset:308
+; DAGISEL64-NEXT: scratch_load_b32 v115, off, s33 offset:312
+; DAGISEL64-NEXT: scratch_load_b32 v116, off, s33 offset:316
+; DAGISEL64-NEXT: scratch_load_b32 v117, off, s33 offset:320
+; DAGISEL64-NEXT: scratch_load_b32 v118, off, s33 offset:324
+; DAGISEL64-NEXT: scratch_load_b32 v119, off, s33 offset:328
+; DAGISEL64-NEXT: scratch_load_b32 v128, off, s33 offset:332
+; DAGISEL64-NEXT: scratch_load_b32 v129, off, s33 offset:336
+; DAGISEL64-NEXT: scratch_load_b32 v130, off, s33 offset:340
+; DAGISEL64-NEXT: scratch_load_b32 v131, off, s33 offset:344
+; DAGISEL64-NEXT: scratch_load_b32 v132, off, s33 offset:348
+; DAGISEL64-NEXT: scratch_load_b32 v133, off, s33 offset:352
+; DAGISEL64-NEXT: scratch_load_b32 v134, off, s33 offset:356
+; DAGISEL64-NEXT: scratch_load_b32 v135, off, s33 offset:360
+; DAGISEL64-NEXT: scratch_load_b32 v144, off, s33 offset:364
+; DAGISEL64-NEXT: scratch_load_b32 v145, off, s33 offset:368
+; DAGISEL64-NEXT: scratch_load_b32 v146, off, s33 offset:372
+; DAGISEL64-NEXT: scratch_load_b32 v147, off, s33 offset:376
+; DAGISEL64-NEXT: scratch_load_b32 v148, off, s33 offset:380
+; DAGISEL64-NEXT: scratch_load_b32 v149, off, s33 offset:384
+; DAGISEL64-NEXT: scratch_load_b32 v150, off, s33 offset:388
+; DAGISEL64-NEXT: scratch_load_b32 v151, off, s33 offset:392
+; DAGISEL64-NEXT: s_clause 0x1f
+; DAGISEL64-NEXT: scratch_load_b32 v160, off, s33 offset:396
+; DAGISEL64-NEXT: scratch_load_b32 v161, off, s33 offset:400
+; DAGISEL64-NEXT: scratch_load_b32 v162, off, s33 offset:404
+; DAGISEL64-NEXT: scratch_load_b32 v163, off, s33 offset:408
+; DAGISEL64-NEXT: scratch_load_b32 v164, off, s33 offset:412
+; DAGISEL64-NEXT: scratch_load_b32 v165, off, s33 offset:416
+; DAGISEL64-NEXT: scratch_load_b32 v166, off, s33 offset:420
+; DAGISEL64-NEXT: scratch_load_b32 v167, off, s33 offset:424
+; DAGISEL64-NEXT: scratch_load_b32 v176, off, s33 offset:428
+; DAGISEL64-NEXT: scratch_load_b32 v177, off, s33 offset:432
+; DAGISEL64-NEXT: scratch_load_b32 v178, off, s33 offset:436
+; DAGISEL64-NEXT: scratch_load_b32 v179, off, s33 offset:440
+; DAGISEL64-NEXT: scratch_load_b32 v180, off, s33 offset:444
+; DAGISEL64-NEXT: scratch_load_b32 v181, off, s33 offset:448
+; DAGISEL64-NEXT: scratch_load_b32 v182, off, s33 offset:452
+; DAGISEL64-NEXT: scratch_load_b32 v183, off, s33 offset:456
+; DAGISEL64-NEXT: scratch_load_b32 v192, off, s33 offset:460
+; DAGISEL64-NEXT: scratch_load_b32 v193, off, s33 offset:464
+; DAGISEL64-NEXT: scratch_load_b32 v194, off, s33 offset:468
+; DAGISEL64-NEXT: scratch_load_b32 v195, off, s33 offset:472
+; DAGISEL64-NEXT: scratch_load_b32 v196, off, s33 offset:476
+; DAGISEL64-NEXT: scratch_load_b32 v197, off, s33 offset:480
+; DAGISEL64-NEXT: scratch_load_b32 v198, off, s33 offset:484
+; DAGISEL64-NEXT: scratch_load_b32 v199, off, s33 offset:488
+; DAGISEL64-NEXT: scratch_load_b32 v208, off, s33 offset:492
+; DAGISEL64-NEXT: scratch_load_b32 v209, off, s33 offset:496
+; DAGISEL64-NEXT: scratch_load_b32 v210, off, s33 offset:500
+; DAGISEL64-NEXT: scratch_load_b32 v211, off, s33 offset:504
+; DAGISEL64-NEXT: scratch_load_b32 v212, off, s33 offset:508
+; DAGISEL64-NEXT: scratch_load_b32 v213, off, s33 offset:512
+; DAGISEL64-NEXT: scratch_load_b32 v214, off, s33 offset:516
+; DAGISEL64-NEXT: scratch_load_b32 v215, off, s33 offset:520
+; DAGISEL64-NEXT: s_clause 0xf
+; DAGISEL64-NEXT: scratch_load_b32 v224, off, s33 offset:524
+; DAGISEL64-NEXT: scratch_load_b32 v225, off, s33 offset:528
+; DAGISEL64-NEXT: scratch_load_b32 v226, off, s33 offset:532
+; DAGISEL64-NEXT: scratch_load_b32 v227, off, s33 offset:536
+; DAGISEL64-NEXT: scratch_load_b32 v228, off, s33 offset:540
+; DAGISEL64-NEXT: scratch_load_b32 v229, off, s33 offset:544
+; DAGISEL64-NEXT: scratch_load_b32 v230, off, s33 offset:548
+; DAGISEL64-NEXT: scratch_load_b32 v231, off, s33 offset:552
+; DAGISEL64-NEXT: scratch_load_b32 v240, off, s33 offset:556
+; DAGISEL64-NEXT: scratch_load_b32 v241, off, s33 offset:560
+; DAGISEL64-NEXT: scratch_load_b32 v242, off, s33 offset:564
+; DAGISEL64-NEXT: scratch_load_b32 v243, off, s33 offset:568
+; DAGISEL64-NEXT: scratch_load_b32 v244, off, s33 offset:572
+; DAGISEL64-NEXT: scratch_load_b32 v245, off, s33 offset:576
+; DAGISEL64-NEXT: scratch_load_b32 v246, off, s33 offset:580
+; DAGISEL64-NEXT: scratch_load_b32 v247, off, s33 offset:584
+; DAGISEL64-NEXT: s_mov_b64 exec, s[4:5]
+; DAGISEL64-NEXT: s_mov_b32 s33, s0
+; DAGISEL64-NEXT: s_wait_loadcnt_dscnt 0x0
+; DAGISEL64-NEXT: s_wait_alu 0xfffe
+; DAGISEL64-NEXT: s_setpc_b64 s[30:31]
+;
+; GISEL64-LABEL: call_from_whole_wave:
+; GISEL64: ; %bb.0:
+; GISEL64-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL64-NEXT: s_wait_expcnt 0x0
+; GISEL64-NEXT: s_wait_samplecnt 0x0
+; GISEL64-NEXT: s_wait_bvhcnt 0x0
+; GISEL64-NEXT: s_wait_kmcnt 0x0
+; GISEL64-NEXT: s_mov_b32 s0, s33
+; GISEL64-NEXT: s_mov_b32 s33, s32
+; GISEL64-NEXT: s_xor_saveexec_b64 s[4:5], -1
+; GISEL64-NEXT: s_clause 0x1f
+; GISEL64-NEXT: scratch_store_b32 off, v0, s33 offset:4
+; GISEL64-NEXT: scratch_store_b32 off, v1, s33 offset:8
+; GISEL64-NEXT: scratch_store_b32 off, v2, s33 offset:12
+; GISEL64-NEXT: scratch_store_b32 off, v3, s33 offset:16
+; GISEL64-NEXT: scratch_store_b32 off, v4, s33 offset:20
+; GISEL64-NEXT: scratch_store_b32 off, v5, s33 offset:24
+; GISEL64-NEXT: scratch_store_b32 off, v6, s33 offset:28
+; GISEL64-NEXT: scratch_store_b32 off, v7, s33 offset:32
+; GISEL64-NEXT: scratch_store_b32 off, v8, s33 offset:36
+; GISEL64-NEXT: scratch_store_b32 off, v9, s33 offset:40
+; GISEL64-NEXT: scratch_store_b32 off, v10, s33 offset:44
+; GISEL64-NEXT: scratch_store_b32 off, v11, s33 offset:48
+; GISEL64-NEXT: scratch_store_b32 off, v12, s33 offset:52
+; GISEL64-NEXT: scratch_store_b32 off, v13, s33 offset:56
+; GISEL64-NEXT: scratch_store_b32 off, v14, s33 offset:60
+; GISEL64-NEXT: scratch_store_b32 off, v15, s33 offset:64
+; GISEL64-NEXT: scratch_store_b32 off, v16, s33 offset:68
+; GISEL64-NEXT: scratch_store_b32 off, v17, s33 offset:72
+; GISEL64-NEXT: scratch_store_b32 off, v18, s33 offset:76
+; GISEL64-NEXT: scratch_store_b32 off, v19, s33 offset:80
+; GISEL64-NEXT: scratch_store_b32 off, v20, s33 offset:84
+; GISEL64-NEXT: scratch_store_b32 off, v21, s33 offset:88
+; GISEL64-NEXT: scratch_store_b32 off, v22, s33 offset:92
+; GISEL64-NEXT: scratch_store_b32 off, v23, s33 offset:96
+; GISEL64-NEXT: scratch_store_b32 off, v24, s33 offset:100
+; GISEL64-NEXT: scratch_store_b32 off, v25, s33 offset:104
+; GISEL64-NEXT: scratch_store_b32 off, v26, s33 offset:108
+; GISEL64-NEXT: scratch_store_b32 off, v27, s33 offset:112
+; GISEL64-NEXT: scratch_store_b32 off, v28, s33 offset:116
+; GISEL64-NEXT: scratch_store_b32 off, v29, s33 offset:120
+; GISEL64-NEXT: scratch_store_b32 off, v30, s33 offset:124
+; GISEL64-NEXT: scratch_store_b32 off, v31, s33 offset:128
+; GISEL64-NEXT: s_clause 0x1f
+; GISEL64-NEXT: scratch_store_b32 off, v32, s33 offset:132
+; GISEL64-NEXT: scratch_store_b32 off, v33, s33 offset:136
+; GISEL64-NEXT: scratch_store_b32 off, v34, s33 offset:140
+; GISEL64-NEXT: scratch_store_b32 off, v35, s33 offset:144
+; GISEL64-NEXT: scratch_store_b32 off, v36, s33 offset:148
+; GISEL64-NEXT: scratch_store_b32 off, v37, s33 offset:152
+; GISEL64-NEXT: scratch_store_b32 off, v38, s33 offset:156
+; GISEL64-NEXT: scratch_store_b32 off, v39, s33 offset:160
+; GISEL64-NEXT: scratch_store_b32 off, v48, s33 offset:172
+; GISEL64-NEXT: scratch_store_b32 off, v49, s33 offset:176
+; GISEL64-NEXT: scratch_store_b32 off, v50, s33 offset:180
+; GISEL64-NEXT: scratch_store_b32 off, v51, s33 offset:184
+; GISEL64-NEXT: scratch_store_b32 off, v52, s33 offset:188
+; GISEL64-NEXT: scratch_store_b32 off, v53, s33 offset:192
+; GISEL64-NEXT: scratch_store_b32 off, v54, s33 offset:196
+; GISEL64-NEXT: scratch_store_b32 off, v55, s33 offset:200
+; GISEL64-NEXT: scratch_store_b32 off, v64, s33 offset:204
+; GISEL64-NEXT: scratch_store_b32 off, v65, s33 offset:208
+; GISEL64-NEXT: scratch_store_b32 off, v66, s33 offset:212
+; GISEL64-NEXT: scratch_store_b32 off, v67, s33 offset:216
+; GISEL64-NEXT: scratch_store_b32 off, v68, s33 offset:220
+; GISEL64-NEXT: scratch_store_b32 off, v69, s33 offset:224
+; GISEL64-NEXT: scratch_store_b32 off, v70, s33 offset:228
+; GISEL64-NEXT: scratch_store_b32 off, v71, s33 offset:232
+; GISEL64-NEXT: scratch_store_b32 off, v80, s33 offset:236
+; GISEL64-NEXT: scratch_store_b32 off, v81, s33 offset:240
+; GISEL64-NEXT: scratch_store_b32 off, v82, s33 offset:244
+; GISEL64-NEXT: scratch_store_b32 off, v83, s33 offset:248
+; GISEL64-NEXT: scratch_store_b32 off, v84, s33 offset:252
+; GISEL64-NEXT: scratch_store_b32 off, v85, s33 offset:256
+; GISEL64-NEXT: scratch_store_b32 off, v86, s33 offset:260
+; GISEL64-NEXT: scratch_store_b32 off, v87, s33 offset:264
+; GISEL64-NEXT: s_clause 0x1f
+; GISEL64-NEXT: scratch_store_b32 off, v96, s33 offset:268
+; GISEL64-NEXT: scratch_store_b32 off, v97, s33 offset:272
+; GISEL64-NEXT: scratch_store_b32 off, v98, s33 offset:276
+; GISEL64-NEXT: scratch_store_b32 off, v99, s33 offset:280
+; GISEL64-NEXT: scratch_store_b32 off, v100, s33 offset:284
+; GISEL64-NEXT: scratch_store_b32 off, v101, s33 offset:288
+; GISEL64-NEXT: scratch_store_b32 off, v102, s33 offset:292
+; GISEL64-NEXT: scratch_store_b32 off, v103, s33 offset:296
+; GISEL64-NEXT: scratch_store_b32 off, v112, s33 offset:300
+; GISEL64-NEXT: scratch_store_b32 off, v113, s33 offset:304
+; GISEL64-NEXT: scratch_store_b32 off, v114, s33 offset:308
+; GISEL64-NEXT: scratch_store_b32 off, v115, s33 offset:312
+; GISEL64-NEXT: scratch_store_b32 off, v116, s33 offset:316
+; GISEL64-NEXT: scratch_store_b32 off, v117, s33 offset:320
+; GISEL64-NEXT: scratch_store_b32 off, v118, s33 offset:324
+; GISEL64-NEXT: scratch_store_b32 off, v119, s33 offset:328
+; GISEL64-NEXT: scratch_store_b32 off, v128, s33 offset:332
+; GISEL64-NEXT: scratch_store_b32 off, v129, s33 offset:336
+; GISEL64-NEXT: scratch_store_b32 off, v130, s33 offset:340
+; GISEL64-NEXT: scratch_store_b32 off, v131, s33 offset:344
+; GISEL64-NEXT: scratch_store_b32 off, v132, s33 offset:348
+; GISEL64-NEXT: scratch_store_b32 off, v133, s33 offset:352
+; GISEL64-NEXT: scratch_store_b32 off, v134, s33 offset:356
+; GISEL64-NEXT: scratch_store_b32 off, v135, s33 offset:360
+; GISEL64-NEXT: scratch_store_b32 off, v144, s33 offset:364
+; GISEL64-NEXT: scratch_store_b32 off, v145, s33 offset:368
+; GISEL64-NEXT: scratch_store_b32 off, v146, s33 offset:372
+; GISEL64-NEXT: scratch_store_b32 off, v147, s33 offset:376
+; GISEL64-NEXT: scratch_store_b32 off, v148, s33 offset:380
+; GISEL64-NEXT: scratch_store_b32 off, v149, s33 offset:384
+; GISEL64-NEXT: scratch_store_b32 off, v150, s33 offset:388
+; GISEL64-NEXT: scratch_store_b32 off, v151, s33 offset:392
+; GISEL64-NEXT: s_clause 0x1f
+; GISEL64-NEXT: scratch_store_b32 off, v160, s33 offset:396
+; GISEL64-NEXT: scratch_store_b32 off, v161, s33 offset:400
+; GISEL64-NEXT: scratch_store_b32 off, v162, s33 offset:404
+; GISEL64-NEXT: scratch_store_b32 off, v163, s33 offset:408
+; GISEL64-NEXT: scratch_store_b32 off, v164, s33 offset:412
+; GISEL64-NEXT: scratch_store_b32 off, v165, s33 offset:416
+; GISEL64-NEXT: scratch_store_b32 off, v166, s33 offset:420
+; GISEL64-NEXT: scratch_store_b32 off, v167, s33 offset:424
+; GISEL64-NEXT: scratch_store_b32 off, v176, s33 offset:428
+; GISEL64-NEXT: scratch_store_b32 off, v177, s33 offset:432
+; GISEL64-NEXT: scratch_store_b32 off, v178, s33 offset:436
+; GISEL64-NEXT: scratch_store_b32 off, v179, s33 offset:440
+; GISEL64-NEXT: scratch_store_b32 off, v180, s33 offset:444
+; GISEL64-NEXT: scratch_store_b32 off, v181, s33 offset:448
+; GISEL64-NEXT: scratch_store_b32 off, v182, s33 offset:452
+; GISEL64-NEXT: scratch_store_b32 off, v183, s33 offset:456
+; GISEL64-NEXT: scratch_store_b32 off, v192, s33 offset:460
+; GISEL64-NEXT: scratch_store_b32 off, v193, s33 offset:464
+; GISEL64-NEXT: scratch_store_b32 off, v194, s33 offset:468
+; GISEL64-NEXT: scratch_store_b32 off, v195, s33 offset:472
+; GISEL64-NEXT: scratch_store_b32 off, v196, s33 offset:476
+; GISEL64-NEXT: scratch_store_b32 off, v197, s33 offset:480
+; GISEL64-NEXT: scratch_store_b32 off, v198, s33 offset:484
+; GISEL64-NEXT: scratch_store_b32 off, v199, s33 offset:488
+; GISEL64-NEXT: scratch_store_b32 off, v208, s33 offset:492
+; GISEL64-NEXT: scratch_store_b32 off, v209, s33 offset:496
+; GISEL64-NEXT: scratch_store_b32 off, v210, s33 offset:500
+; GISEL64-NEXT: scratch_store_b32 off, v211, s33 offset:504
+; GISEL64-NEXT: scratch_store_b32 off, v212, s33 offset:508
+; GISEL64-NEXT: scratch_store_b32 off, v213, s33 offset:512
+; GISEL64-NEXT: scratch_store_b32 off, v214, s33 offset:516
+; GISEL64-NEXT: scratch_store_b32 off, v215, s33 offset:520
+; GISEL64-NEXT: s_clause 0xf
+; GISEL64-NEXT: scratch_store_b32 off, v224, s33 offset:524
+; GISEL64-NEXT: scratch_store_b32 off, v225, s33 offset:528
+; GISEL64-NEXT: scratch_store_b32 off, v226, s33 offset:532
+; GISEL64-NEXT: scratch_store_b32 off, v227, s33 offset:536
+; GISEL64-NEXT: scratch_store_b32 off, v228, s33 offset:540
+; GISEL64-NEXT: scratch_store_b32 off, v229, s33 offset:544
+; GISEL64-NEXT: scratch_store_b32 off, v230, s33 offset:548
+; GISEL64-NEXT: scratch_store_b32 off, v231, s33 offset:552
+; GISEL64-NEXT: scratch_store_b32 off, v240, s33 offset:556
+; GISEL64-NEXT: scratch_store_b32 off, v241, s33 offset:560
+; GISEL64-NEXT: scratch_store_b32 off, v242, s33 offset:564
+; GISEL64-NEXT: scratch_store_b32 off, v243, s33 offset:568
+; GISEL64-NEXT: scratch_store_b32 off, v244, s33 offset:572
+; GISEL64-NEXT: scratch_store_b32 off, v245, s33 offset:576
+; GISEL64-NEXT: scratch_store_b32 off, v246, s33 offset:580
+; GISEL64-NEXT: scratch_store_b32 off, v247, s33 offset:584
+; GISEL64-NEXT: s_mov_b64 exec, -1
+; GISEL64-NEXT: s_clause 0x2
+; GISEL64-NEXT: scratch_store_b32 off, v42, s33
+; GISEL64-NEXT: scratch_store_b32 off, v40, s33 offset:164
+; GISEL64-NEXT: scratch_store_b32 off, v41, s33 offset:168
+; GISEL64-NEXT: s_wait_alu 0xfffe
+; GISEL64-NEXT: v_writelane_b32 v42, s0, 4
+; GISEL64-NEXT: s_mov_b32 s0, callee at abs32@lo
+; GISEL64-NEXT: s_mov_b32 s1, callee at abs32@hi
+; GISEL64-NEXT: s_addk_co_i32 s32, 0x250
+; GISEL64-NEXT: v_mov_b32_e32 v40, v8
+; GISEL64-NEXT: v_writelane_b32 v42, s4, 0
+; GISEL64-NEXT: v_mov_b32_e32 v41, v9
+; GISEL64-NEXT: v_writelane_b32 v42, s5, 1
+; GISEL64-NEXT: v_writelane_b32 v42, s30, 2
+; GISEL64-NEXT: v_writelane_b32 v42, s31, 3
+; GISEL64-NEXT: s_wait_alu 0xfffe
+; GISEL64-NEXT: s_swappc_b64 s[30:31], s[0:1]
+; GISEL64-NEXT: flat_store_b32 v[40:41], v0
+; GISEL64-NEXT: v_readlane_b32 s31, v42, 3
+; GISEL64-NEXT: v_readlane_b32 s30, v42, 2
+; GISEL64-NEXT: v_readlane_b32 s5, v42, 1
+; GISEL64-NEXT: v_readlane_b32 s4, v42, 0
+; GISEL64-NEXT: v_readlane_b32 s0, v42, 4
+; GISEL64-NEXT: s_clause 0x2
+; GISEL64-NEXT: scratch_load_b32 v42, off, s33
+; GISEL64-NEXT: scratch_load_b32 v40, off, s33 offset:164
+; GISEL64-NEXT: scratch_load_b32 v41, off, s33 offset:168
+; GISEL64-NEXT: s_mov_b32 s32, s33
+; GISEL64-NEXT: s_xor_b64 exec, s[4:5], -1
+; GISEL64-NEXT: s_clause 0x1f
+; GISEL64-NEXT: scratch_load_b32 v0, off, s33 offset:4
+; GISEL64-NEXT: scratch_load_b32 v1, off, s33 offset:8
+; GISEL64-NEXT: scratch_load_b32 v2, off, s33 offset:12
+; GISEL64-NEXT: scratch_load_b32 v3, off, s33 offset:16
+; GISEL64-NEXT: scratch_load_b32 v4, off, s33 offset:20
+; GISEL64-NEXT: scratch_load_b32 v5, off, s33 offset:24
+; GISEL64-NEXT: scratch_load_b32 v6, off, s33 offset:28
+; GISEL64-NEXT: scratch_load_b32 v7, off, s33 offset:32
+; GISEL64-NEXT: scratch_load_b32 v8, off, s33 offset:36
+; GISEL64-NEXT: scratch_load_b32 v9, off, s33 offset:40
+; GISEL64-NEXT: scratch_load_b32 v10, off, s33 offset:44
+; GISEL64-NEXT: scratch_load_b32 v11, off, s33 offset:48
+; GISEL64-NEXT: scratch_load_b32 v12, off, s33 offset:52
+; GISEL64-NEXT: scratch_load_b32 v13, off, s33 offset:56
+; GISEL64-NEXT: scratch_load_b32 v14, off, s33 offset:60
+; GISEL64-NEXT: scratch_load_b32 v15, off, s33 offset:64
+; GISEL64-NEXT: scratch_load_b32 v16, off, s33 offset:68
+; GISEL64-NEXT: scratch_load_b32 v17, off, s33 offset:72
+; GISEL64-NEXT: scratch_load_b32 v18, off, s33 offset:76
+; GISEL64-NEXT: scratch_load_b32 v19, off, s33 offset:80
+; GISEL64-NEXT: scratch_load_b32 v20, off, s33 offset:84
+; GISEL64-NEXT: scratch_load_b32 v21, off, s33 offset:88
+; GISEL64-NEXT: scratch_load_b32 v22, off, s33 offset:92
+; GISEL64-NEXT: scratch_load_b32 v23, off, s33 offset:96
+; GISEL64-NEXT: scratch_load_b32 v24, off, s33 offset:100
+; GISEL64-NEXT: scratch_load_b32 v25, off, s33 offset:104
+; GISEL64-NEXT: scratch_load_b32 v26, off, s33 offset:108
+; GISEL64-NEXT: scratch_load_b32 v27, off, s33 offset:112
+; GISEL64-NEXT: scratch_load_b32 v28, off, s33 offset:116
+; GISEL64-NEXT: scratch_load_b32 v29, off, s33 offset:120
+; GISEL64-NEXT: scratch_load_b32 v30, off, s33 offset:124
+; GISEL64-NEXT: scratch_load_b32 v31, off, s33 offset:128
+; GISEL64-NEXT: s_clause 0x1f
+; GISEL64-NEXT: scratch_load_b32 v32, off, s33 offset:132
+; GISEL64-NEXT: scratch_load_b32 v33, off, s33 offset:136
+; GISEL64-NEXT: scratch_load_b32 v34, off, s33 offset:140
+; GISEL64-NEXT: scratch_load_b32 v35, off, s33 offset:144
+; GISEL64-NEXT: scratch_load_b32 v36, off, s33 offset:148
+; GISEL64-NEXT: scratch_load_b32 v37, off, s33 offset:152
+; GISEL64-NEXT: scratch_load_b32 v38, off, s33 offset:156
+; GISEL64-NEXT: scratch_load_b32 v39, off, s33 offset:160
+; GISEL64-NEXT: scratch_load_b32 v48, off, s33 offset:172
+; GISEL64-NEXT: scratch_load_b32 v49, off, s33 offset:176
+; GISEL64-NEXT: scratch_load_b32 v50, off, s33 offset:180
+; GISEL64-NEXT: scratch_load_b32 v51, off, s33 offset:184
+; GISEL64-NEXT: scratch_load_b32 v52, off, s33 offset:188
+; GISEL64-NEXT: scratch_load_b32 v53, off, s33 offset:192
+; GISEL64-NEXT: scratch_load_b32 v54, off, s33 offset:196
+; GISEL64-NEXT: scratch_load_b32 v55, off, s33 offset:200
+; GISEL64-NEXT: scratch_load_b32 v64, off, s33 offset:204
+; GISEL64-NEXT: scratch_load_b32 v65, off, s33 offset:208
+; GISEL64-NEXT: scratch_load_b32 v66, off, s33 offset:212
+; GISEL64-NEXT: scratch_load_b32 v67, off, s33 offset:216
+; GISEL64-NEXT: scratch_load_b32 v68, off, s33 offset:220
+; GISEL64-NEXT: scratch_load_b32 v69, off, s33 offset:224
+; GISEL64-NEXT: scratch_load_b32 v70, off, s33 offset:228
+; GISEL64-NEXT: scratch_load_b32 v71, off, s33 offset:232
+; GISEL64-NEXT: scratch_load_b32 v80, off, s33 offset:236
+; GISEL64-NEXT: scratch_load_b32 v81, off, s33 offset:240
+; GISEL64-NEXT: scratch_load_b32 v82, off, s33 offset:244
+; GISEL64-NEXT: scratch_load_b32 v83, off, s33 offset:248
+; GISEL64-NEXT: scratch_load_b32 v84, off, s33 offset:252
+; GISEL64-NEXT: scratch_load_b32 v85, off, s33 offset:256
+; GISEL64-NEXT: scratch_load_b32 v86, off, s33 offset:260
+; GISEL64-NEXT: scratch_load_b32 v87, off, s33 offset:264
+; GISEL64-NEXT: s_clause 0x1f
+; GISEL64-NEXT: scratch_load_b32 v96, off, s33 offset:268
+; GISEL64-NEXT: scratch_load_b32 v97, off, s33 offset:272
+; GISEL64-NEXT: scratch_load_b32 v98, off, s33 offset:276
+; GISEL64-NEXT: scratch_load_b32 v99, off, s33 offset:280
+; GISEL64-NEXT: scratch_load_b32 v100, off, s33 offset:284
+; GISEL64-NEXT: scratch_load_b32 v101, off, s33 offset:288
+; GISEL64-NEXT: scratch_load_b32 v102, off, s33 offset:292
+; GISEL64-NEXT: scratch_load_b32 v103, off, s33 offset:296
+; GISEL64-NEXT: scratch_load_b32 v112, off, s33 offset:300
+; GISEL64-NEXT: scratch_load_b32 v113, off, s33 offset:304
+; GISEL64-NEXT: scratch_load_b32 v114, off, s33 offset:308
+; GISEL64-NEXT: scratch_load_b32 v115, off, s33 offset:312
+; GISEL64-NEXT: scratch_load_b32 v116, off, s33 offset:316
+; GISEL64-NEXT: scratch_load_b32 v117, off, s33 offset:320
+; GISEL64-NEXT: scratch_load_b32 v118, off, s33 offset:324
+; GISEL64-NEXT: scratch_load_b32 v119, off, s33 offset:328
+; GISEL64-NEXT: scratch_load_b32 v128, off, s33 offset:332
+; GISEL64-NEXT: scratch_load_b32 v129, off, s33 offset:336
+; GISEL64-NEXT: scratch_load_b32 v130, off, s33 offset:340
+; GISEL64-NEXT: scratch_load_b32 v131, off, s33 offset:344
+; GISEL64-NEXT: scratch_load_b32 v132, off, s33 offset:348
+; GISEL64-NEXT: scratch_load_b32 v133, off, s33 offset:352
+; GISEL64-NEXT: scratch_load_b32 v134, off, s33 offset:356
+; GISEL64-NEXT: scratch_load_b32 v135, off, s33 offset:360
+; GISEL64-NEXT: scratch_load_b32 v144, off, s33 offset:364
+; GISEL64-NEXT: scratch_load_b32 v145, off, s33 offset:368
+; GISEL64-NEXT: scratch_load_b32 v146, off, s33 offset:372
+; GISEL64-NEXT: scratch_load_b32 v147, off, s33 offset:376
+; GISEL64-NEXT: scratch_load_b32 v148, off, s33 offset:380
+; GISEL64-NEXT: scratch_load_b32 v149, off, s33 offset:384
+; GISEL64-NEXT: scratch_load_b32 v150, off, s33 offset:388
+; GISEL64-NEXT: scratch_load_b32 v151, off, s33 offset:392
+; GISEL64-NEXT: s_clause 0x1f
+; GISEL64-NEXT: scratch_load_b32 v160, off, s33 offset:396
+; GISEL64-NEXT: scratch_load_b32 v161, off, s33 offset:400
+; GISEL64-NEXT: scratch_load_b32 v162, off, s33 offset:404
+; GISEL64-NEXT: scratch_load_b32 v163, off, s33 offset:408
+; GISEL64-NEXT: scratch_load_b32 v164, off, s33 offset:412
+; GISEL64-NEXT: scratch_load_b32 v165, off, s33 offset:416
+; GISEL64-NEXT: scratch_load_b32 v166, off, s33 offset:420
+; GISEL64-NEXT: scratch_load_b32 v167, off, s33 offset:424
+; GISEL64-NEXT: scratch_load_b32 v176, off, s33 offset:428
+; GISEL64-NEXT: scratch_load_b32 v177, off, s33 offset:432
+; GISEL64-NEXT: scratch_load_b32 v178, off, s33 offset:436
+; GISEL64-NEXT: scratch_load_b32 v179, off, s33 offset:440
+; GISEL64-NEXT: scratch_load_b32 v180, off, s33 offset:444
+; GISEL64-NEXT: scratch_load_b32 v181, off, s33 offset:448
+; GISEL64-NEXT: scratch_load_b32 v182, off, s33 offset:452
+; GISEL64-NEXT: scratch_load_b32 v183, off, s33 offset:456
+; GISEL64-NEXT: scratch_load_b32 v192, off, s33 offset:460
+; GISEL64-NEXT: scratch_load_b32 v193, off, s33 offset:464
+; GISEL64-NEXT: scratch_load_b32 v194, off, s33 offset:468
+; GISEL64-NEXT: scratch_load_b32 v195, off, s33 offset:472
+; GISEL64-NEXT: scratch_load_b32 v196, off, s33 offset:476
+; GISEL64-NEXT: scratch_load_b32 v197, off, s33 offset:480
+; GISEL64-NEXT: scratch_load_b32 v198, off, s33 offset:484
+; GISEL64-NEXT: scratch_load_b32 v199, off, s33 offset:488
+; GISEL64-NEXT: scratch_load_b32 v208, off, s33 offset:492
+; GISEL64-NEXT: scratch_load_b32 v209, off, s33 offset:496
+; GISEL64-NEXT: scratch_load_b32 v210, off, s33 offset:500
+; GISEL64-NEXT: scratch_load_b32 v211, off, s33 offset:504
+; GISEL64-NEXT: scratch_load_b32 v212, off, s33 offset:508
+; GISEL64-NEXT: scratch_load_b32 v213, off, s33 offset:512
+; GISEL64-NEXT: scratch_load_b32 v214, off, s33 offset:516
+; GISEL64-NEXT: scratch_load_b32 v215, off, s33 offset:520
+; GISEL64-NEXT: s_clause 0xf
+; GISEL64-NEXT: scratch_load_b32 v224, off, s33 offset:524
+; GISEL64-NEXT: scratch_load_b32 v225, off, s33 offset:528
+; GISEL64-NEXT: scratch_load_b32 v226, off, s33 offset:532
+; GISEL64-NEXT: scratch_load_b32 v227, off, s33 offset:536
+; GISEL64-NEXT: scratch_load_b32 v228, off, s33 offset:540
+; GISEL64-NEXT: scratch_load_b32 v229, off, s33 offset:544
+; GISEL64-NEXT: scratch_load_b32 v230, off, s33 offset:548
+; GISEL64-NEXT: scratch_load_b32 v231, off, s33 offset:552
+; GISEL64-NEXT: scratch_load_b32 v240, off, s33 offset:556
+; GISEL64-NEXT: scratch_load_b32 v241, off, s33 offset:560
+; GISEL64-NEXT: scratch_load_b32 v242, off, s33 offset:564
+; GISEL64-NEXT: scratch_load_b32 v243, off, s33 offset:568
+; GISEL64-NEXT: scratch_load_b32 v244, off, s33 offset:572
+; GISEL64-NEXT: scratch_load_b32 v245, off, s33 offset:576
+; GISEL64-NEXT: scratch_load_b32 v246, off, s33 offset:580
+; GISEL64-NEXT: scratch_load_b32 v247, off, s33 offset:584
+; GISEL64-NEXT: s_mov_b64 exec, s[4:5]
+; GISEL64-NEXT: s_mov_b32 s33, s0
+; GISEL64-NEXT: s_wait_loadcnt_dscnt 0x0
+; GISEL64-NEXT: s_wait_alu 0xfffe
+; GISEL64-NEXT: s_setpc_b64 s[30:31]
+ %ret = call float(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @callee, <8 x float> %x) convergent
+ store float %ret, ptr %p
+ ret void
+}
diff --git a/llvm/test/Verifier/AMDGPU/intrinsic-amdgcn-call-whole-wave.ll b/llvm/test/Verifier/AMDGPU/intrinsic-amdgcn-call-whole-wave.ll
new file mode 100644
index 0000000000000..a744bf318be9a
--- /dev/null
+++ b/llvm/test/Verifier/AMDGPU/intrinsic-amdgcn-call-whole-wave.ll
@@ -0,0 +1,46 @@
+; RUN: not llvm-as %s -disable-output 2>&1 | FileCheck %s
+
+define amdgpu_cs void @indirect(ptr %fn, i32 %x) {
+ ; CHECK: Indirect whole wave calls are not allowed
+ %whatever = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr %fn, i32 %x)
+ ret void
+}
+
+declare amdgpu_gfx_whole_wave void @variadic_callee(i1 %active, i32 %x, ...)
+
+define amdgpu_cs void @variadic(ptr %fn, i32 %x) {
+ ; CHECK: Variadic whole wave calls are not allowed
+ %whatever = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @variadic_callee, i32 %x)
+ ret void
+}
+
+declare amdgpu_gfx void @bad_cc_callee(i1 %active, i32 %x)
+
+define amdgpu_cs void @bad_cc(i32 %x) {
+ ; CHECK: Callee must have the amdgpu_gfx_whole_wave calling convention
+ %whatever = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @bad_cc_callee, i32 %x)
+ ret void
+}
+
+declare amdgpu_gfx_whole_wave i32 @no_i1_callee(i32 %active, i32 %y, i32 %z)
+
+define amdgpu_cs void @no_i1(i32 %x) {
+ ; CHECK: Callee must have i1 as its first argument
+ %whatever = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @no_i1_callee, i32 %x, i32 0)
+ ret void
+}
+
+declare amdgpu_gfx_whole_wave i32 @good_callee(i1 %active, i32 %x, i32 inreg %y)
+
+define amdgpu_cs void @bad_args(i32 %x) {
+ ; CHECK: Call argument count must match callee argument count
+ %whatever.0 = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @good_callee, i32 %x)
+
+ ; CHECK: Argument types must match
+ %whatever.1 = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @good_callee, i32 %x, i64 inreg 0)
+
+ ; CHECK: Argument inreg attributes must match
+ %whatever.2 = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @good_callee, i32 %x, i32 0)
+
+ ret void
+}
>From df34eaca5961f4f2d201191fe3ad43651066fb21 Mon Sep 17 00:00:00 2001
From: Ricardo Jesus <rjj at nvidia.com>
Date: Wed, 6 Aug 2025 09:27:35 +0100
Subject: [PATCH 049/171] [AArch64] Drop poison flags when lowering absolute
difference patterns. (#152130)
As a follow-up to #151177, when lowering SELECT_CC nodes of absolute
difference patterns, drop poison-generating flags from the negated
operand to avoid inadvertently propagating poison.
As discussed in the PR above, I didn't find practical issues with the
current code, but it seems safer to do this preemptively.
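
For readers unfamiliar with the pattern, here is a minimal sketch in plain C++ (not the patch's code; names and values are illustrative) of what the canonicalisation computes and why the kept sub's poison flags cannot survive it: after the rewrite the single difference is evaluated for every input, including inputs where the original select would have chosen the other operand, so an nsw-style no-overflow promise would now cover cases it never did.
```
#include <cassert>
#include <cstdint>

// select_cc a, b, (b - a), (a - b), lt  ==>  subs d = a - b; cneg(d)
// Compute one difference and conditionally negate it; on AArch64 the
// compare and the subtraction share the same subs instruction.
int64_t absdiff(int64_t a, int64_t b) {
  int64_t d = a - b;     // the one sub that is kept
  return a < b ? -d : d; // cneg: negate when the condition selects (b - a)
}

int main() {
  assert(absdiff(3, 10) == 7);
  assert(absdiff(10, 3) == 7);
  return 0;
}
```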
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 018c16d61b12d..43b9743639ab2 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -11390,13 +11390,18 @@ SDValue AArch64TargetLowering::LowerSELECT_CC(
// select_cc lhs, rhs, sub(rhs, lhs), sub(lhs, rhs), cc ->
// select_cc lhs, rhs, neg(sub(lhs, rhs)), sub(lhs, rhs), cc
// The second forms can be matched into subs+cneg.
+ // NOTE: Drop poison generating flags from the negated operand to avoid
+ // inadvertently propagating poison after the canonicalisation.
if (TVal.getOpcode() == ISD::SUB && FVal.getOpcode() == ISD::SUB) {
if (TVal.getOperand(0) == LHS && TVal.getOperand(1) == RHS &&
- FVal.getOperand(0) == RHS && FVal.getOperand(1) == LHS)
+ FVal.getOperand(0) == RHS && FVal.getOperand(1) == LHS) {
+ TVal->dropFlags(SDNodeFlags::PoisonGeneratingFlags);
FVal = DAG.getNegative(TVal, DL, TVal.getValueType());
- else if (TVal.getOperand(0) == RHS && TVal.getOperand(1) == LHS &&
- FVal.getOperand(0) == LHS && FVal.getOperand(1) == RHS)
+ } else if (TVal.getOperand(0) == RHS && TVal.getOperand(1) == LHS &&
+ FVal.getOperand(0) == LHS && FVal.getOperand(1) == RHS) {
+ FVal->dropFlags(SDNodeFlags::PoisonGeneratingFlags);
TVal = DAG.getNegative(FVal, DL, FVal.getValueType());
+ }
}
unsigned Opcode = AArch64ISD::CSEL;
>From 907b7d0f07bb72a4a9732e234621adb589f77d42 Mon Sep 17 00:00:00 2001
From: eleviant <56861949+eleviant at users.noreply.github.com>
Date: Wed, 6 Aug 2025 10:30:49 +0200
Subject: [PATCH 050/171] [ARM] Fix inline asm register validation for vector
types (#152175)
This patch allows the following piece of code to compile successfully:
```
register uint8x8_t V asm("d3") = vdup_n_u8(0xff);
```
---
llvm/lib/Target/ARM/ARMISelLowering.cpp | 3 ++-
llvm/test/CodeGen/ARM/bad-constraint.ll | 6 ++++++
llvm/test/CodeGen/ARM/inlineasm-vec-to-double.ll | 14 ++++++++++++++
3 files changed, 22 insertions(+), 1 deletion(-)
create mode 100644 llvm/test/CodeGen/ARM/inlineasm-vec-to-double.ll
diff --git a/llvm/lib/Target/ARM/ARMISelLowering.cpp b/llvm/lib/Target/ARM/ARMISelLowering.cpp
index 74c7c97e6e927..c5bae19e0e02e 100644
--- a/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -20350,7 +20350,8 @@ static bool isIncompatibleReg(const MCPhysReg &PR, MVT VT) {
if (PR == 0 || VT == MVT::Other)
return false;
return (ARM::SPRRegClass.contains(PR) && VT != MVT::f32 && VT != MVT::i32) ||
- (ARM::DPRRegClass.contains(PR) && VT != MVT::f64);
+ (ARM::DPRRegClass.contains(PR) && VT != MVT::f64 &&
+ !VT.is64BitVector());
}
using RCPair = std::pair<unsigned, const TargetRegisterClass *>;
diff --git a/llvm/test/CodeGen/ARM/bad-constraint.ll b/llvm/test/CodeGen/ARM/bad-constraint.ll
index 9b8fcd576db5d..7d80f0cfff591 100644
--- a/llvm/test/CodeGen/ARM/bad-constraint.ll
+++ b/llvm/test/CodeGen/ARM/bad-constraint.ll
@@ -1,6 +1,7 @@
; RUN: not llc -filetype=obj %s -o /dev/null 2>&1 | FileCheck %s
; CHECK: error: couldn't allocate input reg for constraint '{d2}'
; CHECK-NEXT: error: couldn't allocate input reg for constraint '{s2}'
+; CHECK-NEXT: error: couldn't allocate input reg for constraint '{d3}'
target datalayout = "e-m:e-p:32:32-Fi8-i64:64-v128:64:128-a:0:32-n32-S64"
target triple = "armv8a-unknown-linux-gnueabihf"
@@ -23,3 +24,8 @@ entry:
ret void
}
+define void @_Z1dv() local_unnamed_addr {
+entry:
+ tail call void asm sideeffect "", "{d3}"(<16 x i8> splat (i8 -1))
+ ret void
+}
diff --git a/llvm/test/CodeGen/ARM/inlineasm-vec-to-double.ll b/llvm/test/CodeGen/ARM/inlineasm-vec-to-double.ll
new file mode 100644
index 0000000000000..0c01bb9ea6867
--- /dev/null
+++ b/llvm/test/CodeGen/ARM/inlineasm-vec-to-double.ll
@@ -0,0 +1,14 @@
+; RUN: llc %s -filetype=asm -o - | FileCheck %s
+
+; CHECK: vmov.i8 d3, #0xff
+
+target datalayout = "e-m:e-p:32:32-Fi8-i64:64-v128:64:128-a:0:32-n32-S64"
+target triple = "armv8a-unknown-linux-gnueabihf"
+
+; Function Attrs: mustprogress noimplicitfloat nounwind
+define void @cvt_vec() local_unnamed_addr {
+entry:
+ tail call void asm sideeffect "", "{d3}"(<8 x i8> splat (i8 -1))
+ ret void
+}
+
>From 4077e66432a1d17543d0cef4b5a9280caff6e974 Mon Sep 17 00:00:00 2001
From: David Spickett <david.spickett at linaro.org>
Date: Wed, 6 Aug 2025 09:31:09 +0100
Subject: [PATCH 051/171] [lldb] Treat address found via function name as a
callable address (#151973)
I discovered this building the lldb test suite with `-mthumb` set, so
that all test programs are purely Arm Thumb code.
When C++ expressions did a function lookup, they took a different path
from C programs. That path happened to land on the line I've changed,
where we try to look something up as a function but don't then resolve
the address as a callable.
With Thumb, if you do the non-callable lookup, the bottom bit won't be
set. This means when lldb's expression wrapper function branches into
the found function, it'll be in Arm mode trying to execute Thumb code.
Thumb is the only instance where you'd notice this, aside from perhaps
MicroMIPS or MIPS16, but I expect there are no users of those with
lldb.
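As a hedged illustration (the addresses here are made up), the interworking rule at play is that a callable Thumb address carries bit 0 set, which tells BX/BLX to switch the core into Thumb state, while the symbol's plain load or file address is even:
```
#include <cassert>
#include <cstdint>

int main() {
  uintptr_t load_addr = 0x00450afc;   // hypothetical even Thumb code address
  uintptr_t callable = load_addr | 1; // what the callable lookup should yield
  assert((callable & 1) != 0);                     // branch executes as Thumb
  assert((callable & ~uintptr_t{1}) == load_addr); // mode bit is the only diff
  return 0;
}
```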
I have added a new test case that will simulate this situation in
"normal" Arm builds so that it will get run on Linaro's buildbot.
This change also fixes the following existing tests when `-mthumb` is
used:
```
lldb-api :: commands/expression/anonymous-struct/TestCallUserAnonTypedef.py
lldb-api :: commands/expression/argument_passing_restrictions/TestArgumentPassingRestrictions.py
lldb-api :: commands/expression/call-function/TestCallStopAndContinue.py
lldb-api :: commands/expression/call-function/TestCallUserDefinedFunction.py
lldb-api :: commands/expression/char/TestExprsChar.py
lldb-api :: commands/expression/class_template_specialization_empty_pack/TestClassTemplateSpecializationParametersHandling.py
lldb-api :: commands/expression/context-object/TestContextObject.py
lldb-api :: commands/expression/formatters/TestFormatters.py
lldb-api :: commands/expression/import_base_class_when_class_has_derived_member/TestImportBaseClassWhenClassHasDerivedMember.py
lldb-api :: commands/expression/inline-namespace/TestInlineNamespace.py
lldb-api :: commands/expression/namespace-alias/TestInlineNamespaceAlias.py
lldb-api :: commands/expression/no-deadlock/TestExprDoesntBlock.py
lldb-api :: commands/expression/pr35310/TestExprsBug35310.py
lldb-api :: commands/expression/static-initializers/TestStaticInitializers.py
lldb-api :: commands/expression/test/TestExprs.py
lldb-api :: commands/expression/timeout/TestCallWithTimeout.py
lldb-api :: commands/expression/top-level/TestTopLevelExprs.py
lldb-api :: commands/expression/unwind_expression/TestUnwindExpression.py
lldb-api :: commands/expression/xvalue/TestXValuePrinting.py
lldb-api :: functionalities/thread/main_thread_exit/TestMainThreadExit.py
lldb-api :: lang/cpp/call-function/TestCallCPPFunction.py
lldb-api :: lang/cpp/chained-calls/TestCppChainedCalls.py
lldb-api :: lang/cpp/class-template-parameter-pack/TestClassTemplateParameterPack.py
lldb-api :: lang/cpp/constructors/TestCppConstructors.py
lldb-api :: lang/cpp/function-qualifiers/TestCppFunctionQualifiers.py
lldb-api :: lang/cpp/function-ref-qualifiers/TestCppFunctionRefQualifiers.py
lldb-api :: lang/cpp/global_operators/TestCppGlobalOperators.py
lldb-api :: lang/cpp/llvm-style/TestLLVMStyle.py
lldb-api :: lang/cpp/multiple-inheritance/TestCppMultipleInheritance.py
lldb-api :: lang/cpp/namespace/TestNamespace.py
lldb-api :: lang/cpp/namespace/TestNamespaceLookup.py
lldb-api :: lang/cpp/namespace_conflicts/TestNamespaceConflicts.py
lldb-api :: lang/cpp/operators/TestCppOperators.py
lldb-api :: lang/cpp/overloaded-functions/TestOverloadedFunctions.py
lldb-api :: lang/cpp/rvalue-references/TestRvalueReferences.py
lldb-api :: lang/cpp/static_methods/TestCPPStaticMethods.py
lldb-api :: lang/cpp/template/TestTemplateArgs.py
lldb-api :: python_api/thread/TestThreadAPI.py
```
There are other failures that are due to different problems, and this
change does not make those worse.
(I have no plans to run the test suite with `-mthumb` regularly, I just
did it to test some other refactoring)
---
lldb/source/Expression/IRExecutionUnit.cpp | 5 +-
.../test/API/arm/thumb-function-addr/Makefile | 3 +
.../TestThumbFunctionAddr.py | 67 +++++++++++++++++++
lldb/test/API/arm/thumb-function-addr/main.c | 9 +++
4 files changed, 82 insertions(+), 2 deletions(-)
create mode 100644 lldb/test/API/arm/thumb-function-addr/Makefile
create mode 100644 lldb/test/API/arm/thumb-function-addr/TestThumbFunctionAddr.py
create mode 100644 lldb/test/API/arm/thumb-function-addr/main.c
diff --git a/lldb/source/Expression/IRExecutionUnit.cpp b/lldb/source/Expression/IRExecutionUnit.cpp
index 5e40df282e7b0..e7a26d3c2dcf4 100644
--- a/lldb/source/Expression/IRExecutionUnit.cpp
+++ b/lldb/source/Expression/IRExecutionUnit.cpp
@@ -737,8 +737,9 @@ class LoadAddressResolver {
// If that didn't work, try the function.
if (load_address == LLDB_INVALID_ADDRESS && candidate_sc.function) {
Address addr = candidate_sc.function->GetAddress();
- load_address = m_target.GetProcessSP() ? addr.GetLoadAddress(&m_target)
- : addr.GetFileAddress();
+ load_address = m_target.GetProcessSP()
+ ? addr.GetCallableLoadAddress(&m_target)
+ : addr.GetFileAddress();
}
// We found a load address.
diff --git a/lldb/test/API/arm/thumb-function-addr/Makefile b/lldb/test/API/arm/thumb-function-addr/Makefile
new file mode 100644
index 0000000000000..10495940055b6
--- /dev/null
+++ b/lldb/test/API/arm/thumb-function-addr/Makefile
@@ -0,0 +1,3 @@
+C_SOURCES := main.c
+
+include Makefile.rules
diff --git a/lldb/test/API/arm/thumb-function-addr/TestThumbFunctionAddr.py b/lldb/test/API/arm/thumb-function-addr/TestThumbFunctionAddr.py
new file mode 100644
index 0000000000000..d08099f6331e5
--- /dev/null
+++ b/lldb/test/API/arm/thumb-function-addr/TestThumbFunctionAddr.py
@@ -0,0 +1,67 @@
+"""
+Test that addresses of functions compiled for Arm Thumb include the Thumb mode
+bit (bit 0 of the address) when resolved and used in expressions.
+"""
+
+import lldb
+from lldbsuite.test.decorators import *
+from lldbsuite.test.lldbtest import *
+from lldbsuite.test import lldbutil
+
+
+class TestThumbFunctionAddr(TestBase):
+ def do_thumb_function_test(self, language):
+ self.build(dictionary={"CFLAGS_EXTRAS": f"-x {language} -mthumb"})
+
+ exe = self.getBuildArtifact("a.out")
+ line = line_number("main.c", "// Set break point at this line.")
+ self.runCmd("target create %s" % exe)
+ bpid = lldbutil.run_break_set_by_file_and_line(self, "main.c", line)
+
+ self.runCmd("run")
+ self.assertIsNotNone(
+ lldbutil.get_one_thread_stopped_at_breakpoint_id(self.process(), bpid),
+ "Process is not stopped at breakpoint",
+ )
+
+ # The compiler set this, so the mode bit will be included here.
+ a_function_addr_var = (
+ self.thread().GetFrameAtIndex(0).FindVariable("a_function_addr")
+ )
+ self.assertTrue(a_function_addr_var.IsValid())
+ a_function_addr = a_function_addr_var.GetValueAsUnsigned()
+ self.assertTrue(a_function_addr & 1)
+
+ self.expect("p/x a_function_addr", substrs=[f"0x{a_function_addr:08x}"])
+ # If lldb did not pay attention to the mode bit this would SIGILL trying
+ # to execute Thumb encodings in Arm mode.
+ self.expect("expression -- a_function()", substrs=["= 123"])
+
+ # We cannot call GetCallableLoadAddress via the API, so we expect this
+ # to not have the bit set, as it's treated as a non-function symbol.
+ found_function = self.target().FindFunctions("a_function")[0]
+ self.assertTrue(found_function.IsValid())
+ found_function = found_function.GetFunction()
+ self.assertTrue(found_function.IsValid())
+ found_function_addr = found_function.GetStartAddress()
+ a_function_load_addr = found_function_addr.GetLoadAddress(self.target())
+ self.assertEqual(a_function_load_addr, a_function_addr & ~1)
+
+ # image lookup should not include the mode bit.
+ a_function_file_addr = found_function_addr.GetFileAddress()
+ self.expect(
+ "image lookup -n a_function", substrs=[f"0x{a_function_file_addr:08x}"]
+ )
+
+ # This test is run for C and C++ because the two will take different paths
+ # trying to resolve the function's address.
+
+ @skipIf(archs=no_match(["arm$"]))
+ @skipIf(archs=["arm64"])
+ def test_function_addr_c(self):
+ self.do_thumb_function_test("c")
+
+ @skipIf(archs=no_match(["arm$"]))
+ @skipIf(archs=["arm64"])
+ def test_function_addr_cpp(self):
+ self.do_thumb_function_test("c++")
diff --git a/lldb/test/API/arm/thumb-function-addr/main.c b/lldb/test/API/arm/thumb-function-addr/main.c
new file mode 100644
index 0000000000000..f3e01b78f575b
--- /dev/null
+++ b/lldb/test/API/arm/thumb-function-addr/main.c
@@ -0,0 +1,9 @@
+#include <stdint.h>
+
+int a_function() { return 123; }
+
+int main() {
+ const uintptr_t a_function_addr = (uintptr_t)a_function;
+ // Set break point at this line.
+ return a_function();
+}
>From b24ad98caa7cdc48ed9034310c8876811d6e52aa Mon Sep 17 00:00:00 2001
From: Dan Blackwell <dan_blackwell at apple.com>
Date: Wed, 6 Aug 2025 10:08:57 +0100
Subject: [PATCH 052/171] [sanitizer_common] Disable SanitizerCommon lsan tests
on Apple arm64 (#151929)
There is an issue tracking lsan incompatibility on these platforms:
https://github.com/llvm/llvm-project/issues/131678. Many of these tests
are currently failing and creating CI noise.
rdar://157252316
---
compiler-rt/test/sanitizer_common/CMakeLists.txt | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/compiler-rt/test/sanitizer_common/CMakeLists.txt b/compiler-rt/test/sanitizer_common/CMakeLists.txt
index 615666676f57a..7596e2005a3b5 100644
--- a/compiler-rt/test/sanitizer_common/CMakeLists.txt
+++ b/compiler-rt/test/sanitizer_common/CMakeLists.txt
@@ -89,9 +89,10 @@ foreach(tool ${SUPPORTED_TOOLS})
${CMAKE_CURRENT_BINARY_DIR}/${CONFIG_NAME}/lit.site.cfg.py)
# FIXME(dliew): LSan i386 on Darwin is completely broken right now.
# so don't run the tests by default.
+ # Tests fail on arm64, see https://github.com/llvm/llvm-project/issues/131678
if (NOT (CMAKE_SYSTEM_NAME MATCHES "Darwin" AND
${tool} STREQUAL "lsan" AND
- ${arch} STREQUAL "i386"))
+ (${arch} STREQUAL "i386" OR ${arch} MATCHES "arm64")))
list(APPEND SANITIZER_COMMON_TESTSUITES
${CMAKE_CURRENT_BINARY_DIR}/${CONFIG_NAME})
endif()
>From 49d5dd37f8bdd961d11cdf4df95d26982b253e97 Mon Sep 17 00:00:00 2001
From: David Spickett <david.spickett at linaro.org>
Date: Wed, 6 Aug 2025 08:44:01 +0000
Subject: [PATCH 053/171] Reland "[lldb] Fix auto advance PC in
`EmulateInstructionARM64` if PC >= 4G (#151460)"
This reverts commit 600976f4bfb06526c283dcc4efc4801792f08ca5.
The test was crashing trying to access any element of the GPR struct.
(gdb) disas
Dump of assembler code for function _ZN7testing8internal11CmpHelperEQIyyEENS_15AssertionResultEPKcS4_RKT_RKT0_:
0x00450afc <+0>: push {r4, r5, r6, r7, r8, r9, r10, r11, lr}
0x00450b00 <+4>: sub sp, sp, #60 ; 0x3c
0x00450b04 <+8>: ldr r5, [sp, #96] ; 0x60
=> 0x00450b08 <+12>: ldm r3, {r4, r7}
0x00450b0c <+16>: ldm r5, {r6, r9}
0x00450b10 <+20>: eor r7, r7, r9
0x00450b14 <+24>: eor r6, r4, r6
0x00450b18 <+28>: orrs r7, r6, r7
(gdb) p/x r3
$3 = 0x3e300f6e
"However, load and store multiple instructions (LDM and STM) and load and store double-word (LDRD or STRD) must be aligned to at least a word boundary."
https://developer.arm.com/documentation/den0013/d/Porting/Alignment
>>> 0x3e300f6e % 4
2
Program received signal SIGBUS, Bus error.
0x00450b08 in testing::AssertionResult testing::internal::CmpHelperEQ<unsigned long long, unsigned long long>(char const*, char const*, unsigned long long const&, unsigned long long const&) ()
The struct is packed with 1-byte alignment, but it needs to start at an aligned address for us
to ldm from it, so I've ensured that with alignas.
Also fixed some compiler warnings in the test itself.
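For illustration, a minimal standalone sketch of the layout fix (assuming GCC/Clang packing semantics; the trailing fields are completed from the test's register callbacks, so treat them as an assumption rather than the header's exact contents):
```
#include <cstdint>

// pack(1) removes padding *between* members, but the struct must still
// *start* on a word boundary so 32-bit Arm can ldm/stm from it;
// alignas(4) restores that guarantee without reintroducing padding.
#pragma pack(push, 1)
struct alignas(4) GPR {
  uint64_t x[29]; // x0-x28
  uint64_t fp;    // x29
  uint64_t lr;    // x30
  uint64_t sp;
  uint64_t pc;
  uint32_t cpsr;
};
#pragma pack(pop)

static_assert(alignof(GPR) == 4, "starts word-aligned");
static_assert(sizeof(GPR) == 29 * 8 + 4 * 8 + 4, "no padding between members");
```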
---
.../ARM64/EmulateInstructionARM64.cpp | 4 +-
.../Process/Utility/RegisterInfoPOSIX_arm64.h | 4 +-
.../Instruction/ARM64/TestAArch64Emulator.cpp | 120 +++++++++++++++++-
3 files changed, 124 insertions(+), 4 deletions(-)
diff --git a/lldb/source/Plugins/Instruction/ARM64/EmulateInstructionARM64.cpp b/lldb/source/Plugins/Instruction/ARM64/EmulateInstructionARM64.cpp
index 29f03fee47b0d..a8901beda3970 100644
--- a/lldb/source/Plugins/Instruction/ARM64/EmulateInstructionARM64.cpp
+++ b/lldb/source/Plugins/Instruction/ARM64/EmulateInstructionARM64.cpp
@@ -404,7 +404,7 @@ bool EmulateInstructionARM64::EvaluateInstruction(uint32_t evaluate_options) {
if (!success && !m_ignore_conditions)
return false;
- uint32_t orig_pc_value = 0;
+ uint64_t orig_pc_value = 0;
if (auto_advance_pc) {
orig_pc_value =
ReadRegisterUnsigned(eRegisterKindLLDB, gpr_pc_arm64, 0, &success);
@@ -418,7 +418,7 @@ bool EmulateInstructionARM64::EvaluateInstruction(uint32_t evaluate_options) {
return false;
if (auto_advance_pc) {
- uint32_t new_pc_value =
+ uint64_t new_pc_value =
ReadRegisterUnsigned(eRegisterKindLLDB, gpr_pc_arm64, 0, &success);
if (!success)
return false;
diff --git a/lldb/source/Plugins/Process/Utility/RegisterInfoPOSIX_arm64.h b/lldb/source/Plugins/Process/Utility/RegisterInfoPOSIX_arm64.h
index d2ddf7d86d8c3..021ec49b3d2be 100644
--- a/lldb/source/Plugins/Process/Utility/RegisterInfoPOSIX_arm64.h
+++ b/lldb/source/Plugins/Process/Utility/RegisterInfoPOSIX_arm64.h
@@ -45,8 +45,10 @@ class RegisterInfoPOSIX_arm64
};
// based on RegisterContextDarwin_arm64.h
+ // Pack this so there are no extra bytes, but align its start address to at
+ // least 4 bytes to prevent alignment errors on Arm 32-bit.
LLVM_PACKED_START
- struct GPR {
+ struct alignas(4) GPR {
uint64_t x[29]; // x0-x28
uint64_t fp; // x29
uint64_t lr; // x30
diff --git a/lldb/unittests/Instruction/ARM64/TestAArch64Emulator.cpp b/lldb/unittests/Instruction/ARM64/TestAArch64Emulator.cpp
index 4506c200dee3b..08296da564976 100644
--- a/lldb/unittests/Instruction/ARM64/TestAArch64Emulator.cpp
+++ b/lldb/unittests/Instruction/ARM64/TestAArch64Emulator.cpp
@@ -13,15 +13,118 @@
#include "lldb/Core/Disassembler.h"
#include "lldb/Target/ExecutionContext.h"
#include "lldb/Utility/ArchSpec.h"
+#include "lldb/Utility/RegisterValue.h"
#include "Plugins/Instruction/ARM64/EmulateInstructionARM64.h"
+#include "Plugins/Process/Utility/RegisterInfoPOSIX_arm64.h"
+#include "Plugins/Process/Utility/lldb-arm64-register-enums.h"
using namespace lldb;
using namespace lldb_private;
struct Arch64EmulatorTester : public EmulateInstructionARM64 {
+ RegisterInfoPOSIX_arm64::GPR gpr;
+ uint8_t memory[64] = {0};
+ uint64_t memory_offset = 0;
+
Arch64EmulatorTester()
- : EmulateInstructionARM64(ArchSpec("arm64-apple-ios")) {}
+ : EmulateInstructionARM64(ArchSpec("arm64-apple-ios")) {
+ memset(&gpr, 0, sizeof(gpr));
+ EmulateInstruction::SetCallbacks(ReadMemoryCallback, WriteMemoryCallback,
+ ReadRegisterCallback,
+ WriteRegisterCallback);
+ }
+
+ static bool ReadRegisterCallback(EmulateInstruction *instruction, void *baton,
+ const RegisterInfo *reg_info,
+ RegisterValue &reg_value) {
+ auto *tester = static_cast<Arch64EmulatorTester *>(instruction);
+ uint32_t reg = reg_info->kinds[eRegisterKindLLDB];
+ if (reg >= gpr_x0_arm64 && reg <= gpr_x28_arm64) {
+ reg_value.SetUInt64(tester->gpr.x[reg - gpr_x0_arm64]);
+ return true;
+ }
+ if (reg >= gpr_w0_arm64 && reg <= gpr_w28_arm64) {
+ reg_value.SetUInt32(tester->gpr.x[reg - gpr_w0_arm64]);
+ return true;
+ }
+ switch (reg) {
+ case gpr_fp_arm64:
+ reg_value.SetUInt64(tester->gpr.fp);
+ return true;
+ case gpr_lr_arm64:
+ reg_value.SetUInt64(tester->gpr.lr);
+ return true;
+ case gpr_sp_arm64:
+ reg_value.SetUInt64(tester->gpr.sp);
+ return true;
+ case gpr_pc_arm64:
+ reg_value.SetUInt64(tester->gpr.pc);
+ return true;
+ case gpr_cpsr_arm64:
+ reg_value.SetUInt64(tester->gpr.cpsr);
+ return true;
+ default:
+ return false;
+ }
+ }
+
+ static bool WriteRegisterCallback(EmulateInstruction *instruction,
+ void *baton, const Context &context,
+ const RegisterInfo *reg_info,
+ const RegisterValue &reg_value) {
+ auto *tester = static_cast<Arch64EmulatorTester *>(instruction);
+ uint32_t reg = reg_info->kinds[eRegisterKindLLDB];
+ if (reg >= gpr_x0_arm64 && reg <= gpr_x28_arm64) {
+ tester->gpr.x[reg - gpr_x0_arm64] = reg_value.GetAsUInt64();
+ return true;
+ }
+ if (reg >= gpr_w0_arm64 && reg <= gpr_w28_arm64) {
+ tester->gpr.x[reg - gpr_w0_arm64] = reg_value.GetAsUInt32();
+ return true;
+ }
+ switch (reg) {
+ case gpr_fp_arm64:
+ tester->gpr.fp = reg_value.GetAsUInt64();
+ return true;
+ case gpr_lr_arm64:
+ tester->gpr.lr = reg_value.GetAsUInt64();
+ return true;
+ case gpr_sp_arm64:
+ tester->gpr.sp = reg_value.GetAsUInt64();
+ return true;
+ case gpr_pc_arm64:
+ tester->gpr.pc = reg_value.GetAsUInt64();
+ return true;
+ case gpr_cpsr_arm64:
+ tester->gpr.cpsr = reg_value.GetAsUInt64();
+ return true;
+ default:
+ return false;
+ }
+ }
+
+ static size_t ReadMemoryCallback(EmulateInstruction *instruction, void *baton,
+ const Context &context, addr_t addr,
+ void *dst, size_t length) {
+ auto *tester = static_cast<Arch64EmulatorTester *>(instruction);
+ assert(addr >= tester->memory_offset);
+ assert(addr - tester->memory_offset + length <= sizeof(tester->memory));
+ if (addr >= tester->memory_offset &&
+ addr - tester->memory_offset + length <= sizeof(tester->memory)) {
+ memcpy(dst, tester->memory + addr - tester->memory_offset, length);
+ return length;
+ }
+ return 0;
+ };
+
+ static size_t WriteMemoryCallback(EmulateInstruction *instruction,
+ void *baton, const Context &context,
+ addr_t addr, const void *dst,
+ size_t length) {
+ llvm_unreachable("implement when required");
+ return 0;
+ };
static uint64_t AddWithCarry(uint32_t N, uint64_t x, uint64_t y, bool carry_in,
EmulateInstructionARM64::ProcState &proc_state) {
@@ -60,3 +163,18 @@ TEST_F(TestAArch64Emulator, TestOverflow) {
ASSERT_EQ(pstate.V, 1ULL);
ASSERT_EQ(pstate.C, 0ULL);
}
+
+TEST_F(TestAArch64Emulator, TestAutoAdvancePC) {
+ Arch64EmulatorTester emu;
+ emu.memory_offset = 0x123456789abcde00;
+ emu.gpr.pc = 0x123456789abcde00;
+ emu.gpr.x[8] = 0x123456789abcde20;
+ memcpy(emu.memory, "\x08\x01\x40\xb9", 4); // ldr w8, [x8]
+ memcpy(emu.memory + 0x20, "\x11\x22\x33\x44", 4); // 0x44332211
+ ASSERT_TRUE(emu.ReadInstruction());
+ ASSERT_TRUE(
+ emu.EvaluateInstruction(eEmulateInstructionOptionAutoAdvancePC |
+ eEmulateInstructionOptionIgnoreConditions));
+ ASSERT_EQ(emu.gpr.pc, (uint64_t)0x123456789abcde04);
+ ASSERT_EQ(emu.gpr.x[8], (uint64_t)0x44332211);
+}
>From dace67e941f309318b5ce200c1f4e180a4471d20 Mon Sep 17 00:00:00 2001
From: Ilia Kuklin <ikuklin at accesssoftek.com>
Date: Wed, 6 Aug 2025 14:32:19 +0500
Subject: [PATCH 054/171] [lldb] Add `ValueObject::CreateValueObjectFromScalar`
and fix `Scalar::GetData` (#151350)
Add a `ValueObject::CreateValueObjectFromScalar` function and adjust
`Scalar::GetData` so that it can both extend and truncate the data bytes
in the Scalar to the specified size.
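Roughly, the resizing rule the new overload follows can be sketched in plain C++ (these helpers are illustrative only, not the LLDB classes): on little-endian hosts the most significant bytes sit at the end of the buffer, so resizing works at the back, while on big-endian hosts the MSBytes come first, so truncation and extension are applied at the front.
```
// Illustrative only: byte-buffer resizing with host endianness in mind.
#include <algorithm>
#include <cstdint>
#include <vector>

// Little-endian: LSByte first, so copy from the front and zero-extend (or
// truncate) at the back.
std::vector<uint8_t> ResizeLE(const std::vector<uint8_t> &src, size_t n) {
  std::vector<uint8_t> out(n, 0);
  std::copy_n(src.begin(), std::min(src.size(), n), out.begin());
  return out;
}

// Big-endian: MSByte first, so keep the value right-aligned and drop or pad
// bytes at the front.
std::vector<uint8_t> ResizeBE(const std::vector<uint8_t> &src, size_t n) {
  std::vector<uint8_t> out(n, 0);
  size_t copied = std::min(src.size(), n);
  std::copy(src.end() - copied, src.end(), out.end() - copied);
  return out;
}
```
For example, extending the 16-bit value 0x0123 to 8 bytes zero-fills the six most significant bytes, matching the new unit test below.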
---
lldb/include/lldb/Utility/Scalar.h | 6 +-
lldb/include/lldb/ValueObject/ValueObject.h | 6 ++
.../lldb/ValueObject/ValueObjectConstResult.h | 11 ++++
lldb/source/Core/Value.cpp | 12 +---
lldb/source/Utility/Scalar.cpp | 60 ++++++++++++++-----
lldb/source/ValueObject/ValueObject.cpp | 7 +++
.../ValueObject/ValueObjectConstResult.cpp | 27 +++++++++
lldb/unittests/Utility/ScalarTest.cpp | 18 ++++++
8 files changed, 122 insertions(+), 25 deletions(-)
diff --git a/lldb/include/lldb/Utility/Scalar.h b/lldb/include/lldb/Utility/Scalar.h
index b4b9c7e189582..dbb260962f1d6 100644
--- a/lldb/include/lldb/Utility/Scalar.h
+++ b/lldb/include/lldb/Utility/Scalar.h
@@ -84,11 +84,15 @@ class Scalar {
/// Store the binary representation of this value into the given storage.
/// Exactly GetByteSize() bytes will be stored, and the buffer must be large
/// enough to hold this data.
+ void GetBytes(uint8_t *storage, size_t length) const;
void GetBytes(llvm::MutableArrayRef<uint8_t> storage) const;
size_t GetByteSize() const;
- bool GetData(DataExtractor &data, size_t limit_byte_size = UINT32_MAX) const;
+ /// Get data with a byte size of GetByteSize().
+ bool GetData(DataExtractor &data) const;
+ /// Get data with a byte size forced to \p result_byte_size.
+ bool GetData(DataExtractor &data, size_t result_byte_size) const;
size_t GetAsMemoryData(void *dst, size_t dst_len,
lldb::ByteOrder dst_byte_order, Status &error) const;
diff --git a/lldb/include/lldb/ValueObject/ValueObject.h b/lldb/include/lldb/ValueObject/ValueObject.h
index 3c62f3c17619e..3f9f2b5de8dbe 100644
--- a/lldb/include/lldb/ValueObject/ValueObject.h
+++ b/lldb/include/lldb/ValueObject/ValueObject.h
@@ -737,6 +737,12 @@ class ValueObject {
CreateValueObjectFromAPFloat(lldb::TargetSP target, const llvm::APFloat &v,
CompilerType type, llvm::StringRef name);
+ /// Create a value object containing the given Scalar value.
+ static lldb::ValueObjectSP CreateValueObjectFromScalar(lldb::TargetSP target,
+ Scalar &s,
+ CompilerType type,
+ llvm::StringRef name);
+
/// Create a value object containing the given boolean value.
static lldb::ValueObjectSP CreateValueObjectFromBool(lldb::TargetSP target,
bool value,
diff --git a/lldb/include/lldb/ValueObject/ValueObjectConstResult.h b/lldb/include/lldb/ValueObject/ValueObjectConstResult.h
index 1e4b81c4dc7f1..6fbb15328ee5c 100644
--- a/lldb/include/lldb/ValueObject/ValueObjectConstResult.h
+++ b/lldb/include/lldb/ValueObject/ValueObjectConstResult.h
@@ -60,6 +60,11 @@ class ValueObjectConstResult : public ValueObject {
Value &value, ConstString name,
Module *module = nullptr);
+ static lldb::ValueObjectSP Create(ExecutionContextScope *exe_scope,
+ const CompilerType &compiler_type,
+ Scalar &scalar, ConstString name,
+ Module *module = nullptr);
+
// When an expression fails to evaluate, we return an error
static lldb::ValueObjectSP Create(ExecutionContextScope *exe_scope,
Status &&error);
@@ -145,6 +150,12 @@ class ValueObjectConstResult : public ValueObject {
ValueObjectManager &manager, const Value &value,
ConstString name, Module *module = nullptr);
+ ValueObjectConstResult(ExecutionContextScope *exe_scope,
+ ValueObjectManager &manager,
+ const CompilerType &compiler_type,
+ const Scalar &scalar, ConstString name,
+ Module *module = nullptr);
+
ValueObjectConstResult(ExecutionContextScope *exe_scope,
ValueObjectManager &manager, Status &&error);
diff --git a/lldb/source/Core/Value.cpp b/lldb/source/Core/Value.cpp
index c91b3f852f986..028f0587c5790 100644
--- a/lldb/source/Core/Value.cpp
+++ b/lldb/source/Core/Value.cpp
@@ -347,15 +347,9 @@ Status Value::GetValueAsData(ExecutionContext *exe_ctx, DataExtractor &data,
else
data.SetAddressByteSize(sizeof(void *));
- uint32_t limit_byte_size = UINT32_MAX;
-
- if (type_size)
- limit_byte_size = *type_size;
-
- if (limit_byte_size <= m_value.GetByteSize()) {
- if (m_value.GetData(data, limit_byte_size))
- return error; // Success;
- }
+ uint32_t result_byte_size = *type_size;
+ if (m_value.GetData(data, result_byte_size))
+ return error; // Success;
error = Status::FromErrorString("extracting data from value failed");
break;
diff --git a/lldb/source/Utility/Scalar.cpp b/lldb/source/Utility/Scalar.cpp
index f07a9f3bed00c..7fbe46d46194f 100644
--- a/lldb/source/Utility/Scalar.cpp
+++ b/lldb/source/Utility/Scalar.cpp
@@ -82,7 +82,7 @@ Scalar::Type Scalar::PromoteToMaxType(Scalar &lhs, Scalar &rhs) {
return Scalar::e_void;
}
-bool Scalar::GetData(DataExtractor &data, size_t limit_byte_size) const {
+bool Scalar::GetData(DataExtractor &data) const {
size_t byte_size = GetByteSize();
if (byte_size == 0) {
data.Clear();
@@ -90,27 +90,57 @@ bool Scalar::GetData(DataExtractor &data, size_t limit_byte_size) const {
}
auto buffer_up = std::make_unique<DataBufferHeap>(byte_size, 0);
GetBytes(buffer_up->GetData());
- lldb::offset_t offset = 0;
-
- if (limit_byte_size < byte_size) {
- if (endian::InlHostByteOrder() == eByteOrderLittle) {
- // On little endian systems if we want fewer bytes from the current
- // type we just specify fewer bytes since the LSByte is first...
- byte_size = limit_byte_size;
- } else if (endian::InlHostByteOrder() == eByteOrderBig) {
- // On big endian systems if we want fewer bytes from the current type
- // have to advance our initial byte pointer and trim down the number of
- // bytes since the MSByte is first
- offset = byte_size - limit_byte_size;
- byte_size = limit_byte_size;
+ data.SetData(std::move(buffer_up), 0, byte_size);
+ data.SetByteOrder(endian::InlHostByteOrder());
+ return true;
+}
+
+bool Scalar::GetData(DataExtractor &data, size_t result_byte_size) const {
+ size_t byte_size = GetByteSize();
+ if (byte_size == 0 || result_byte_size == 0) {
+ data.Clear();
+ return false;
+ }
+
+ if (endian::InlHostByteOrder() == lldb::eByteOrderBig) {
+ // On big endian systems if we want fewer bytes from the current type
+ // we have to advance our initial byte pointer since the MSByte is
+ // first.
+ if (result_byte_size <= byte_size) {
+ auto buffer_up = std::make_unique<DataBufferHeap>(byte_size, 0);
+ GetBytes(buffer_up->GetData());
+ auto offset = byte_size - result_byte_size;
+ data.SetData(std::move(buffer_up), offset, result_byte_size);
+ data.SetByteOrder(endian::InlHostByteOrder());
+ } else {
+ // Extend created buffer size and insert the data bytes with an offset
+ auto buffer_up = std::make_unique<DataBufferHeap>(result_byte_size, 0);
+ auto offset = result_byte_size - byte_size;
+ GetBytes(buffer_up->GetBytes() + offset, byte_size);
+ data.SetData(std::move(buffer_up), 0, result_byte_size);
+ data.SetByteOrder(endian::InlHostByteOrder());
}
+ return true;
}
- data.SetData(std::move(buffer_up), offset, byte_size);
+ // On little endian systems MSBytes get trimmed or extended automatically by
+ // size.
+ if (byte_size < result_byte_size)
+ byte_size = result_byte_size;
+ auto buffer_up = std::make_unique<DataBufferHeap>(byte_size, 0);
+ GetBytes(buffer_up->GetData());
+ data.SetData(std::move(buffer_up), 0, result_byte_size);
data.SetByteOrder(endian::InlHostByteOrder());
+
return true;
}
+void Scalar::GetBytes(uint8_t *storage, size_t size) const {
+ assert(size >= GetByteSize());
+ llvm::MutableArrayRef<uint8_t> storage_ref(storage, size);
+ GetBytes(storage_ref);
+}
+
void Scalar::GetBytes(llvm::MutableArrayRef<uint8_t> storage) const {
assert(storage.size() >= GetByteSize());
diff --git a/lldb/source/ValueObject/ValueObject.cpp b/lldb/source/ValueObject/ValueObject.cpp
index 38784426b8024..38b9f77e6ddda 100644
--- a/lldb/source/ValueObject/ValueObject.cpp
+++ b/lldb/source/ValueObject/ValueObject.cpp
@@ -3590,6 +3590,13 @@ lldb::ValueObjectSP ValueObject::CreateValueObjectFromAPFloat(
return CreateValueObjectFromAPInt(target, v.bitcastToAPInt(), type, name);
}
+lldb::ValueObjectSP ValueObject::CreateValueObjectFromScalar(
+ lldb::TargetSP target, Scalar &s, CompilerType type, llvm::StringRef name) {
+ ExecutionContext exe_ctx(target.get(), false);
+ return ValueObjectConstResult::Create(exe_ctx.GetBestExecutionContextScope(),
+ type, s, ConstString(name));
+}
+
lldb::ValueObjectSP
ValueObject::CreateValueObjectFromBool(lldb::TargetSP target, bool value,
llvm::StringRef name) {
diff --git a/lldb/source/ValueObject/ValueObjectConstResult.cpp b/lldb/source/ValueObject/ValueObjectConstResult.cpp
index 774749620e0d0..10a62970edcbb 100644
--- a/lldb/source/ValueObject/ValueObjectConstResult.cpp
+++ b/lldb/source/ValueObject/ValueObjectConstResult.cpp
@@ -105,6 +105,16 @@ ValueObjectSP ValueObjectConstResult::Create(ExecutionContextScope *exe_scope,
->GetSP();
}
+ValueObjectSP ValueObjectConstResult::Create(ExecutionContextScope *exe_scope,
+ const CompilerType &compiler_type,
+ Scalar &scalar, ConstString name,
+ Module *module) {
+ auto manager_sp = ValueObjectManager::Create();
+ return (new ValueObjectConstResult(exe_scope, *manager_sp, compiler_type,
+ scalar, name, module))
+ ->GetSP();
+}
+
ValueObjectConstResult::ValueObjectConstResult(
ExecutionContextScope *exe_scope, ValueObjectManager &manager,
const CompilerType &compiler_type, ConstString name,
@@ -193,6 +203,23 @@ ValueObjectConstResult::ValueObjectConstResult(ExecutionContextScope *exe_scope,
m_error = m_value.GetValueAsData(&exe_ctx, m_data, module);
}
+ValueObjectConstResult::ValueObjectConstResult(
+ ExecutionContextScope *exe_scope, ValueObjectManager &manager,
+ const CompilerType &compiler_type, const Scalar &scalar, ConstString name,
+ Module *module)
+ : ValueObject(exe_scope, manager), m_impl(this) {
+ m_value = Value(scalar);
+ m_value.SetCompilerType(compiler_type);
+ m_value.SetValueType(Value::ValueType::Scalar);
+ m_name = name;
+ ExecutionContext exe_ctx;
+ exe_scope->CalculateExecutionContext(exe_ctx);
+ m_error = m_value.GetValueAsData(&exe_ctx, m_data, module);
+ SetIsConstant();
+ SetValueIsValid(true);
+ SetAddressTypeOfChildren(eAddressTypeLoad);
+}
+
ValueObjectConstResult::~ValueObjectConstResult() = default;
CompilerType ValueObjectConstResult::GetCompilerTypeImpl() {
diff --git a/lldb/unittests/Utility/ScalarTest.cpp b/lldb/unittests/Utility/ScalarTest.cpp
index 65b9783b9416f..256d456783583 100644
--- a/lldb/unittests/Utility/ScalarTest.cpp
+++ b/lldb/unittests/Utility/ScalarTest.cpp
@@ -191,6 +191,24 @@ TEST(ScalarTest, GetData) {
EXPECT_THAT(
get_data(llvm::APSInt::getMaxValue(/*numBits=*/9, /*Unsigned=*/true)),
vec({0x01, 0xff}));
+
+ auto get_data_with_size = [](llvm::APInt v, size_t size) {
+ DataExtractor data;
+ Scalar(v).GetData(data, size);
+ return data.GetData().vec();
+ };
+
+ EXPECT_THAT(get_data_with_size(llvm::APInt(16, 0x0123), 8),
+ vec({0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x23}));
+
+ EXPECT_THAT(get_data_with_size(llvm::APInt(32, 0x01234567), 4),
+ vec({0x01, 0x23, 0x45, 0x67}));
+
+ EXPECT_THAT(get_data_with_size(llvm::APInt(48, 0xABCD01234567UL), 4),
+ vec({0x01, 0x23, 0x45, 0x67}));
+
+ EXPECT_THAT(get_data_with_size(llvm::APInt(64, 0xABCDEF0123456789UL), 2),
+ vec({0x67, 0x89}));
}
TEST(ScalarTest, SetValueFromData) {
>From 35110445081152f7f2d2a9d053bb6fa718216d7b Mon Sep 17 00:00:00 2001
From: Nikolas Klauser <nikolasklauser at berlin.de>
Date: Wed, 6 Aug 2025 12:05:17 +0200
Subject: [PATCH 055/171] [libc++] Fix incorrect down cast in __tree::operator=
This was introduced by #151304. The problem is only diagnosed by UBSan
with optimizations enabled, and since we currently run UBSan only with
optimizations disabled, it isn't caught in our CI. We should look into
enabling UBSan with optimizations to catch these sorts of issues
before landing a patch.
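A minimal sketch of the pattern involved (hypothetical names standing in for libc++'s `__tree_end_node` and `__tree_node_base`): the end node is only the base-class portion of a node, so downcasting it to the full node type is an invalid down cast even if the callee never touches the extra members; reading the already-typed left pointer sidesteps the cast.
```
// Hypothetical stand-ins for libc++'s internal tree node types.
struct NodeBase;
struct EndNode {
  NodeBase *left; // points at the tree's root
};
struct NodeBase : EndNode {
  NodeBase *right;
  NodeBase *parent;
};

// Leftmost node of the subtree rooted at n (in the spirit of __tree_min).
NodeBase *Min(NodeBase *n) {
  while (n->left)
    n = n->left;
  return n;
}

NodeBase *BeginOf(EndNode *end) {
  // Invalid: end does not point at a NodeBase object, so this downcast is UB:
  //   return Min(static_cast<NodeBase *>(end));
  // Fine: start from the root that `left` already points at (assuming a
  // non-empty tree, as the surrounding code guarantees).
  return Min(end->left);
}
```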
---
libcxx/include/__tree | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/libcxx/include/__tree b/libcxx/include/__tree
index 6ca1a623536f2..64d1436055bfe 100644
--- a/libcxx/include/__tree
+++ b/libcxx/include/__tree
@@ -1388,7 +1388,7 @@ __tree<_Tp, _Compare, _Allocator>& __tree<_Tp, _Compare, _Allocator>::operator=(
if (__root())
__root()->__parent_ = __end_node();
}
- __begin_node_ = static_cast<__end_node_pointer>(std::__tree_min(static_cast<__node_base_pointer>(__end_node())));
+ __begin_node_ = static_cast<__end_node_pointer>(std::__tree_min(__end_node()->__left_));
__size_ = __t.size();
return *this;
>From 5499a70c39bfea10a0139ed6e98a267b9473448d Mon Sep 17 00:00:00 2001
From: Nikolas Klauser <nikolasklauser at berlin.de>
Date: Wed, 6 Aug 2025 12:09:07 +0200
Subject: [PATCH 056/171] Revert "[libc++] Fix incorrect down cast in
__tree::operator="
This reverts commit 35110445081152f7f2d2a9d053bb6fa718216d7b.
I accidentally pushed to the wrong branch.
---
libcxx/include/__tree | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/libcxx/include/__tree b/libcxx/include/__tree
index 64d1436055bfe..6ca1a623536f2 100644
--- a/libcxx/include/__tree
+++ b/libcxx/include/__tree
@@ -1388,7 +1388,7 @@ __tree<_Tp, _Compare, _Allocator>& __tree<_Tp, _Compare, _Allocator>::operator=(
if (__root())
__root()->__parent_ = __end_node();
}
- __begin_node_ = static_cast<__end_node_pointer>(std::__tree_min(__end_node()->__left_));
+ __begin_node_ = static_cast<__end_node_pointer>(std::__tree_min(static_cast<__node_base_pointer>(__end_node())));
__size_ = __t.size();
return *this;
>From a5d85a6ab5daf67b67da654c90adc494d37833c8 Mon Sep 17 00:00:00 2001
From: Simon Pilgrim <llvm-dev at redking.me.uk>
Date: Wed, 6 Aug 2025 11:14:22 +0100
Subject: [PATCH 057/171] [Headers][X86] Allow AVX _mm256_set* intrinsics to be
used in constexpr (#152173)
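With the attribute change these initializers become usable in constant expressions; a minimal usage sketch, assuming Clang, C++, and -mavx:
```
// Sketch only: the set/set1/setr initializers now fold at compile time.
#include <immintrin.h>

constexpr __m256i kIota = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
constexpr __m256i kOnes = _mm256_set1_epi64x(1);

static_assert(((__v8si)kIota)[3] == 3, "evaluated at compile time");
static_assert(((__v4di)kOnes)[0] == 1, "evaluated at compile time");
```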
---
clang/lib/Headers/avxintrin.h | 24 +++++++++----------
clang/test/CodeGen/X86/avx-builtins.c | 12 ++++++++++
clang/test/CodeGen/X86/builtin_test_helpers.h | 16 +++++++++++++
3 files changed, 40 insertions(+), 12 deletions(-)
diff --git a/clang/lib/Headers/avxintrin.h b/clang/lib/Headers/avxintrin.h
index 8e497a9823499..b9ca013c25c7a 100644
--- a/clang/lib/Headers/avxintrin.h
+++ b/clang/lib/Headers/avxintrin.h
@@ -3777,7 +3777,7 @@ _mm256_set_ps(float __a, float __b, float __c, float __d,
/// \param __i7
/// A 32-bit integral value used to initialize bits [31:0] of the result.
/// \returns An initialized 256-bit integer vector.
-static __inline __m256i __DEFAULT_FN_ATTRS
+static __inline __m256i __DEFAULT_FN_ATTRS_CONSTEXPR
_mm256_set_epi32(int __i0, int __i1, int __i2, int __i3,
int __i4, int __i5, int __i6, int __i7)
{
@@ -3825,7 +3825,7 @@ _mm256_set_epi32(int __i0, int __i1, int __i2, int __i3,
/// \param __w00
/// A 16-bit integral value used to initialize bits [15:0] of the result.
/// \returns An initialized 256-bit integer vector.
-static __inline __m256i __DEFAULT_FN_ATTRS
+static __inline __m256i __DEFAULT_FN_ATTRS_CONSTEXPR
_mm256_set_epi16(short __w15, short __w14, short __w13, short __w12,
short __w11, short __w10, short __w09, short __w08,
short __w07, short __w06, short __w05, short __w04,
@@ -3908,7 +3908,7 @@ _mm256_set_epi16(short __w15, short __w14, short __w13, short __w12,
/// \param __b00
/// An 8-bit integral value used to initialize bits [7:0] of the result.
/// \returns An initialized 256-bit integer vector.
-static __inline __m256i __DEFAULT_FN_ATTRS
+static __inline __m256i __DEFAULT_FN_ATTRS_CONSTEXPR
_mm256_set_epi8(char __b31, char __b30, char __b29, char __b28,
char __b27, char __b26, char __b25, char __b24,
char __b23, char __b22, char __b21, char __b20,
@@ -3943,7 +3943,7 @@ _mm256_set_epi8(char __b31, char __b30, char __b29, char __b28,
/// \param __d
/// A 64-bit integral value used to initialize bits [63:0] of the result.
/// \returns An initialized 256-bit integer vector.
-static __inline __m256i __DEFAULT_FN_ATTRS
+static __inline __m256i __DEFAULT_FN_ATTRS_CONSTEXPR
_mm256_set_epi64x(long long __a, long long __b, long long __c, long long __d)
{
return __extension__ (__m256i)(__v4di){ __d, __c, __b, __a };
@@ -4044,7 +4044,7 @@ _mm256_setr_ps(float __a, float __b, float __c, float __d,
/// \param __i7
/// A 32-bit integral value used to initialize bits [255:224] of the result.
/// \returns An initialized 256-bit integer vector.
-static __inline __m256i __DEFAULT_FN_ATTRS
+static __inline __m256i __DEFAULT_FN_ATTRS_CONSTEXPR
_mm256_setr_epi32(int __i0, int __i1, int __i2, int __i3,
int __i4, int __i5, int __i6, int __i7)
{
@@ -4092,7 +4092,7 @@ _mm256_setr_epi32(int __i0, int __i1, int __i2, int __i3,
/// \param __w00
/// A 16-bit integral value used to initialize bits [255:240] of the result.
/// \returns An initialized 256-bit integer vector.
-static __inline __m256i __DEFAULT_FN_ATTRS
+static __inline __m256i __DEFAULT_FN_ATTRS_CONSTEXPR
_mm256_setr_epi16(short __w15, short __w14, short __w13, short __w12,
short __w11, short __w10, short __w09, short __w08,
short __w07, short __w06, short __w05, short __w04,
@@ -4177,7 +4177,7 @@ _mm256_setr_epi16(short __w15, short __w14, short __w13, short __w12,
/// \param __b00
/// An 8-bit integral value used to initialize bits [255:248] of the result.
/// \returns An initialized 256-bit integer vector.
-static __inline __m256i __DEFAULT_FN_ATTRS
+static __inline __m256i __DEFAULT_FN_ATTRS_CONSTEXPR
_mm256_setr_epi8(char __b31, char __b30, char __b29, char __b28,
char __b27, char __b26, char __b25, char __b24,
char __b23, char __b22, char __b21, char __b20,
@@ -4210,7 +4210,7 @@ _mm256_setr_epi8(char __b31, char __b30, char __b29, char __b28,
/// \param __d
/// A 64-bit integral value used to initialize bits [255:192] of the result.
/// \returns An initialized 256-bit integer vector.
-static __inline __m256i __DEFAULT_FN_ATTRS
+static __inline __m256i __DEFAULT_FN_ATTRS_CONSTEXPR
_mm256_setr_epi64x(long long __a, long long __b, long long __c, long long __d)
{
return _mm256_set_epi64x(__d, __c, __b, __a);
@@ -4267,7 +4267,7 @@ _mm256_set1_ps(float __w)
/// A 32-bit integral value used to initialize each vector element of the
/// result.
/// \returns An initialized 256-bit integer vector of [8 x i32].
-static __inline __m256i __DEFAULT_FN_ATTRS
+static __inline __m256i __DEFAULT_FN_ATTRS_CONSTEXPR
_mm256_set1_epi32(int __i)
{
return _mm256_set_epi32(__i, __i, __i, __i, __i, __i, __i, __i);
@@ -4285,7 +4285,7 @@ _mm256_set1_epi32(int __i)
/// A 16-bit integral value used to initialize each vector element of the
/// result.
/// \returns An initialized 256-bit integer vector of [16 x i16].
-static __inline __m256i __DEFAULT_FN_ATTRS
+static __inline __m256i __DEFAULT_FN_ATTRS_CONSTEXPR
_mm256_set1_epi16(short __w)
{
return _mm256_set_epi16(__w, __w, __w, __w, __w, __w, __w, __w,
@@ -4303,7 +4303,7 @@ _mm256_set1_epi16(short __w)
/// An 8-bit integral value used to initialize each vector element of the
/// result.
/// \returns An initialized 256-bit integer vector of [32 x i8].
-static __inline __m256i __DEFAULT_FN_ATTRS
+static __inline __m256i __DEFAULT_FN_ATTRS_CONSTEXPR
_mm256_set1_epi8(char __b)
{
return _mm256_set_epi8(__b, __b, __b, __b, __b, __b, __b, __b,
@@ -4324,7 +4324,7 @@ _mm256_set1_epi8(char __b)
/// A 64-bit integral value used to initialize each vector element of the
/// result.
/// \returns An initialized 256-bit integer vector of [4 x i64].
-static __inline __m256i __DEFAULT_FN_ATTRS
+static __inline __m256i __DEFAULT_FN_ATTRS_CONSTEXPR
_mm256_set1_epi64x(long long __q)
{
return _mm256_set_epi64x(__q, __q, __q, __q);
diff --git a/clang/test/CodeGen/X86/avx-builtins.c b/clang/test/CodeGen/X86/avx-builtins.c
index ed39862377197..a6e70aae420ea 100644
--- a/clang/test/CodeGen/X86/avx-builtins.c
+++ b/clang/test/CodeGen/X86/avx-builtins.c
@@ -1443,6 +1443,7 @@ __m256i test_mm256_set_epi8(char A0, char A1, char A2, char A3, char A4, char A5
// CHECK: insertelement <32 x i8> %{{.*}}, i8 %{{.*}}, i32 31
return _mm256_set_epi8(A0, A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22, A23, A24, A25, A26, A27, A28, A29, A30, A31);
}
+TEST_CONSTEXPR(match_v32qi(_mm256_set_epi8(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31), 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0));
__m256i test_mm256_set_epi16(short A0, short A1, short A2, short A3, short A4, short A5, short A6, short A7,
short A8, short A9, short A10, short A11, short A12, short A13, short A14, short A15) {
@@ -1465,6 +1466,7 @@ __m256i test_mm256_set_epi16(short A0, short A1, short A2, short A3, short A4, s
// CHECK: insertelement <16 x i16> %{{.*}}, i16 %{{.*}}, i32 15
return _mm256_set_epi16(A0, A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15);
}
+TEST_CONSTEXPR(match_v16hi(_mm256_set_epi16(0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10, -11, -12, -13, -14, -15), -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0));
__m256i test_mm256_set_epi32(int A0, int A1, int A2, int A3, int A4, int A5, int A6, int A7) {
// CHECK-LABEL: test_mm256_set_epi32
@@ -1478,6 +1480,7 @@ __m256i test_mm256_set_epi32(int A0, int A1, int A2, int A3, int A4, int A5, int
// CHECK: insertelement <8 x i32> %{{.*}}, i32 %{{.*}}, i32 7
return _mm256_set_epi32(A0, A1, A2, A3, A4, A5, A6, A7);
}
+TEST_CONSTEXPR(match_v8si(_mm256_set_epi32(1, -3, 5, -7, 9, -11, 13, -15), -15, 13, -11, 9, -7, 5, -3, 1));
__m256i test_mm256_set_epi64x(long long A0, long long A1, long long A2, long long A3) {
// CHECK-LABEL: test_mm256_set_epi64x
@@ -1487,6 +1490,7 @@ __m256i test_mm256_set_epi64x(long long A0, long long A1, long long A2, long lon
// CHECK: insertelement <4 x i64> %{{.*}}, i64 %{{.*}}, i32 3
return _mm256_set_epi64x(A0, A1, A2, A3);
}
+TEST_CONSTEXPR(match_v4di(_mm256_set_epi64x(100, -1000, 2000, -200), -200, 2000, -1000, 100));
__m256 test_mm256_set_m128(__m128 A, __m128 B) {
// CHECK-LABEL: test_mm256_set_m128
@@ -1566,6 +1570,7 @@ __m256i test_mm256_set1_epi8(char A) {
// CHECK: insertelement <32 x i8> %{{.*}}, i8 %{{.*}}, i32 31
return _mm256_set1_epi8(A);
}
+TEST_CONSTEXPR(match_v32qi(_mm256_set1_epi8(99), 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99));
__m256i test_mm256_set1_epi16(short A) {
// CHECK-LABEL: test_mm256_set1_epi16
@@ -1587,6 +1592,7 @@ __m256i test_mm256_set1_epi16(short A) {
// CHECK: insertelement <16 x i16> %{{.*}}, i16 %{{.*}}, i32 15
return _mm256_set1_epi16(A);
}
+TEST_CONSTEXPR(match_v16hi(_mm256_set1_epi16(-128), -128, -128, -128, -128, -128, -128, -128, -128, -128, -128, -128, -128, -128, -128, -128, -128));
__m256i test_mm256_set1_epi32(int A) {
// CHECK-LABEL: test_mm256_set1_epi32
@@ -1600,6 +1606,7 @@ __m256i test_mm256_set1_epi32(int A) {
// CHECK: insertelement <8 x i32> %{{.*}}, i32 %{{.*}}, i32 7
return _mm256_set1_epi32(A);
}
+TEST_CONSTEXPR(match_v8si(_mm256_set1_epi32(55), 55, 55, 55, 55, 55, 55, 55, 55));
__m256i test_mm256_set1_epi64x(long long A) {
// CHECK-LABEL: test_mm256_set1_epi64x
@@ -1609,6 +1616,7 @@ __m256i test_mm256_set1_epi64x(long long A) {
// CHECK: insertelement <4 x i64> %{{.*}}, i64 %{{.*}}, i32 3
return _mm256_set1_epi64x(A);
}
+TEST_CONSTEXPR(match_v4di(_mm256_set1_epi64x(-65535), -65535, -65535, -65535, -65535));
__m256d test_mm256_set1_pd(double A) {
// CHECK-LABEL: test_mm256_set1_pd
@@ -1673,6 +1681,7 @@ __m256i test_mm256_setr_epi8(char A0, char A1, char A2, char A3, char A4, char A
// CHECK: insertelement <32 x i8> %{{.*}}, i8 %{{.*}}, i32 31
return _mm256_setr_epi8(A0, A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22, A23, A24, A25, A26, A27, A28, A29, A30, A31);
}
+TEST_CONSTEXPR(match_v32qi(_mm256_setr_epi8(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31), 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31));
__m256i test_mm256_setr_epi16(short A0, short A1, short A2, short A3, short A4, short A5, short A6, short A7,
short A8, short A9, short A10, short A11, short A12, short A13, short A14, short A15) {
@@ -1695,6 +1704,7 @@ __m256i test_mm256_setr_epi16(short A0, short A1, short A2, short A3, short A4,
// CHECK: insertelement <16 x i16> %{{.*}}, i16 %{{.*}}, i32 15
return _mm256_setr_epi16(A0, A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15);
}
+TEST_CONSTEXPR(match_v16hi(_mm256_setr_epi16(0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10, -11, -12, -13, -14, -15), 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10, -11, -12, -13, -14, -15));
__m256i test_mm256_setr_epi32(int A0, int A1, int A2, int A3, int A4, int A5, int A6, int A7) {
// CHECK-LABEL: test_mm256_setr_epi32
@@ -1708,6 +1718,7 @@ __m256i test_mm256_setr_epi32(int A0, int A1, int A2, int A3, int A4, int A5, in
// CHECK: insertelement <8 x i32> %{{.*}}, i32 %{{.*}}, i32 7
return _mm256_setr_epi32(A0, A1, A2, A3, A4, A5, A6, A7);
}
+TEST_CONSTEXPR(match_v8si(_mm256_setr_epi32(1, -3, 5, -7, 9, -11, 13, -15), 1, -3, 5, -7, 9, -11, 13, -15));
__m256i test_mm256_setr_epi64x(long long A0, long long A1, long long A2, long long A3) {
// CHECK-LABEL: test_mm256_setr_epi64x
@@ -1717,6 +1728,7 @@ __m256i test_mm256_setr_epi64x(long long A0, long long A1, long long A2, long lo
// CHECK: insertelement <4 x i64> %{{.*}}, i64 %{{.*}}, i32 3
return _mm256_setr_epi64x(A0, A1, A2, A3);
}
+TEST_CONSTEXPR(match_v4di(_mm256_setr_epi64x(100, -1000, 2000, -200), 100, -1000, 2000, -200));
__m256 test_mm256_setr_m128(__m128 A, __m128 B) {
// CHECK-LABEL: test_mm256_setr_m128
diff --git a/clang/test/CodeGen/X86/builtin_test_helpers.h b/clang/test/CodeGen/X86/builtin_test_helpers.h
index 22a87ce9623be..f719694d41e25 100644
--- a/clang/test/CodeGen/X86/builtin_test_helpers.h
+++ b/clang/test/CodeGen/X86/builtin_test_helpers.h
@@ -83,6 +83,22 @@ constexpr bool match_v8si(__m256i _v, int a, int b, int c, int d, int e, int f,
return v[0] == a && v[1] == b && v[2] == c && v[3] == d && v[4] == e && v[5] == f && v[6] == g && v[7] == h;
}
+constexpr bool match_v16hi(__m256i _v, short a, short b, short c, short d, short e, short f, short g, short h, short i, short j, short k, short l, short m, short n, short o, short p) {
+ __v16hi v = (__v16hi)_v;
+ return v[0] == a && v[1] == b && v[2] == c && v[3] == d && v[4] == e && v[5] == f && v[6] == g && v[7] == h && v[8] == i && v[9] == j && v[10] == k && v[11] == l && v[12] == m && v[13] == n && v[14] == o && v[15] == p;
+}
+
+constexpr bool match_v32qi(__m256i _v, char __b00, char __b01, char __b02, char __b03, char __b04, char __b05, char __b06, char __b07,
+ char __b08, char __b09, char __b10, char __b11, char __b12, char __b13, char __b14, char __b15,
+ char __b16, char __b17, char __b18, char __b19, char __b20, char __b21, char __b22, char __b23,
+ char __b24, char __b25, char __b26, char __b27, char __b28, char __b29, char __b30, char __b31) {
+ __v32qi v = (__v32qi)_v;
+ return v[ 0] == __b00 && v[ 1] == __b01 && v[ 2] == __b02 && v[ 3] == __b03 && v[ 4] == __b04 && v[ 5] == __b05 && v[ 6] == __b06 && v[ 7] == __b07 &&
+ v[ 8] == __b08 && v[ 9] == __b09 && v[10] == __b10 && v[11] == __b11 && v[12] == __b12 && v[13] == __b13 && v[14] == __b14 && v[15] == __b15 &&
+ v[16] == __b16 && v[17] == __b17 && v[18] == __b18 && v[19] == __b19 && v[20] == __b20 && v[21] == __b21 && v[22] == __b22 && v[23] == __b23 &&
+ v[24] == __b24 && v[25] == __b25 && v[26] == __b26 && v[27] == __b27 && v[28] == __b28 && v[29] == __b29 && v[30] == __b30 && v[31] == __b31;
+}
+
constexpr bool match_m512(__m512 v, float a, float b, float c, float d, float e, float f, float g, float h, float i, float j, float k, float l, float m, float n, float o, float p) {
return v[0] == a && v[1] == b && v[2] == c && v[3] == d && v[4] == e && v[5] == f && v[6] == g && v[7] == h && v[8] == i && v[9] == j && v[10] == k && v[11] == l && v[12] == m && v[13] == n && v[14] == o && v[15] == p;
}
>From 14cd1339318b16e08c1363ec6896bd7d1e4ae281 Mon Sep 17 00:00:00 2001
From: Diana Picus <Diana-Magda.Picus at amd.com>
Date: Wed, 6 Aug 2025 12:24:52 +0200
Subject: [PATCH 058/171] Revert "[AMDGPU] Intrinsic for launching whole wave
functions" (#152286)
Reverts llvm/llvm-project#145859 because it broke a HIP test:
```
[34/59] Building CXX object External/HIP/CMakeFiles/TheNextWeek-hip-6.3.0.dir/workload/ray-tracing/TheNextWeek/main.cc.o
FAILED: External/HIP/CMakeFiles/TheNextWeek-hip-6.3.0.dir/workload/ray-tracing/TheNextWeek/main.cc.o
/home/botworker/bbot/clang-hip-vega20/botworker/clang-hip-vega20/llvm/bin/clang++ -DNDEBUG -O3 -DNDEBUG -w -Werror=date-time --rocm-path=/opt/botworker/llvm/External/hip/rocm-6.3.0 --offload-arch=gfx908 --offload-arch=gfx90a --offload-arch=gfx1030 --offload-arch=gfx1100 -xhip -mfma -MD -MT External/HIP/CMakeFiles/TheNextWeek-hip-6.3.0.dir/workload/ray-tracing/TheNextWeek/main.cc.o -MF External/HIP/CMakeFiles/TheNextWeek-hip-6.3.0.dir/workload/ray-tracing/TheNextWeek/main.cc.o.d -o External/HIP/CMakeFiles/TheNextWeek-hip-6.3.0.dir/workload/ray-tracing/TheNextWeek/main.cc.o -c /home/botworker/bbot/clang-hip-vega20/llvm-test-suite/External/HIP/workload/ray-tracing/TheNextWeek/main.cc
fatal error: error in backend: Cannot select: intrinsic %llvm.amdgcn.readfirstlane
```
---
llvm/include/llvm/IR/IntrinsicsAMDGPU.td | 12 -
llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp | 1 -
.../SelectionDAG/SelectionDAGBuilder.cpp | 37 -
llvm/lib/IR/Verifier.cpp | 30 -
llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp | 19 +-
.../CodeGen/AMDGPU/amdgcn-call-whole-wave.ll | 174 --
.../irtranslator-whole-wave-functions.ll | 26 -
.../AMDGPU/isel-whole-wave-functions.ll | 76 -
.../CodeGen/AMDGPU/whole-wave-functions.ll | 1424 -----------------
.../intrinsic-amdgcn-call-whole-wave.ll | 46 -
10 files changed, 3 insertions(+), 1842 deletions(-)
delete mode 100644 llvm/test/CodeGen/AMDGPU/amdgcn-call-whole-wave.ll
delete mode 100644 llvm/test/Verifier/AMDGPU/intrinsic-amdgcn-call-whole-wave.ll
diff --git a/llvm/include/llvm/IR/IntrinsicsAMDGPU.td b/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
index 191ed5f523a74..90cfd8cedd51b 100644
--- a/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
+++ b/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
@@ -2671,18 +2671,6 @@ def int_amdgcn_cs_chain:
],
[IntrConvergent, IntrNoReturn, ImmArg<ArgIndex<4>>]>;
-// Run a function with all the lanes enabled. Only direct calls are allowed. The
-// first argument is the callee, which must have the `amdgpu_gfx_whole_wave`
-// calling convention and must not be variadic. The remaining arguments to the
-// callee are taken from the arguments passed to the intrinsic. Lanes that are
-// inactive at the point of the call will receive poison. The return value is
-// the return value of the callee for the active lanes (there is no return
-// value in the inactive ones).
-def int_amdgcn_call_whole_wave:
- Intrinsic<[llvm_any_ty], // The return type of the callee.
- [llvm_anyptr_ty, // The callee.
- llvm_vararg_ty], // The arguments to the callee.
- [IntrConvergent]>;
//===----------------------------------------------------------------------===//
// CI+ Intrinsics
diff --git a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
index 787543df1f0f0..bbfae570e1e1a 100644
--- a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
@@ -2556,7 +2556,6 @@ bool IRTranslator::translateKnownIntrinsic(const CallInst &CI, Intrinsic::ID ID,
getOrCreateVReg(*ConstantInt::getTrue(CI.getType())));
return true;
case Intrinsic::amdgcn_cs_chain:
- case Intrinsic::amdgcn_call_whole_wave:
return translateCallBase(CI, MIRBuilder);
case Intrinsic::fptrunc_round: {
uint32_t Flags = MachineInstr::copyFlagsFromInstruction(CI);
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index d5b904055e547..d0815e9f51822 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -7984,43 +7984,6 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
HasTailCall = true;
return;
}
- case Intrinsic::amdgcn_call_whole_wave: {
- TargetLowering::ArgListTy Args;
-
- // The first argument is the callee. Skip it when assembling the call args.
- TargetLowering::ArgListEntry Arg;
- for (unsigned Idx = 1; Idx < I.arg_size(); ++Idx) {
- Arg.Node = getValue(I.getArgOperand(Idx));
- Arg.Ty = I.getArgOperand(Idx)->getType();
- Arg.setAttributes(&I, Idx);
- Args.push_back(Arg);
- }
-
- SDValue ConvControlToken;
- if (auto Bundle = I.getOperandBundle(LLVMContext::OB_convergencectrl)) {
- auto *Token = Bundle->Inputs[0].get();
- ConvControlToken = getValue(Token);
- }
-
- TargetLowering::CallLoweringInfo CLI(DAG);
- CLI.setDebugLoc(getCurSDLoc())
- .setChain(getRoot())
- .setCallee(CallingConv::AMDGPU_Gfx_WholeWave, I.getType(),
- getValue(I.getArgOperand(0)), std::move(Args))
- .setTailCall(false)
- .setIsPreallocated(
- I.countOperandBundlesOfType(LLVMContext::OB_preallocated) != 0)
- .setConvergent(I.isConvergent())
- .setConvergenceControlToken(ConvControlToken);
- CLI.CB = &I;
-
- std::pair<SDValue, SDValue> Result =
- lowerInvokable(CLI, /*EHPadBB=*/nullptr);
-
- if (Result.first.getNode())
- setValue(&I, Result.first);
- return;
- }
case Intrinsic::ptrmask: {
SDValue Ptr = getValue(I.getOperand(0));
SDValue Mask = getValue(I.getOperand(1));
diff --git a/llvm/lib/IR/Verifier.cpp b/llvm/lib/IR/Verifier.cpp
index f3f0ae5233977..ca3f148f881a4 100644
--- a/llvm/lib/IR/Verifier.cpp
+++ b/llvm/lib/IR/Verifier.cpp
@@ -6612,36 +6612,6 @@ void Verifier::visitIntrinsicCall(Intrinsic::ID ID, CallBase &Call) {
"Value for inactive lanes must be a VGPR function argument", &Call);
break;
}
- case Intrinsic::amdgcn_call_whole_wave: {
- auto F = dyn_cast<Function>(Call.getArgOperand(0));
- Check(F, "Indirect whole wave calls are not allowed", &Call);
-
- CallingConv::ID CC = F->getCallingConv();
- Check(CC == CallingConv::AMDGPU_Gfx_WholeWave,
- "Callee must have the amdgpu_gfx_whole_wave calling convention",
- &Call);
-
- Check(!F->isVarArg(), "Variadic whole wave calls are not allowed", &Call);
-
- Check(Call.arg_size() == F->arg_size(),
- "Call argument count must match callee argument count", &Call);
-
- // The first argument of the call is the callee, and the first argument of
- // the callee is the active mask. The rest of the arguments must match.
- Check(F->arg_begin()->getType()->isIntegerTy(1),
- "Callee must have i1 as its first argument", &Call);
- for (auto [CallArg, FuncArg] :
- drop_begin(zip_equal(Call.args(), F->args()))) {
- Check(CallArg->getType() == FuncArg.getType(),
- "Argument types must match", &Call);
-
- // Check that inreg attributes match between call site and function
- Check(Call.paramHasAttr(FuncArg.getArgNo(), Attribute::InReg) ==
- FuncArg.hasInRegAttr(),
- "Argument inreg attributes must match", &Call);
- }
- break;
- }
case Intrinsic::amdgcn_s_prefetch_data: {
Check(
AMDGPU::isFlatGlobalAddrSpace(
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp b/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
index 3ff6e22fbb943..3d8d274f06246 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUCallLowering.cpp
@@ -1464,22 +1464,9 @@ bool AMDGPUCallLowering::lowerCall(MachineIRBuilder &MIRBuilder,
CallLoweringInfo &Info) const {
if (Function *F = Info.CB->getCalledFunction())
if (F->isIntrinsic()) {
- switch (F->getIntrinsicID()) {
- case Intrinsic::amdgcn_cs_chain:
- return lowerChainCall(MIRBuilder, Info);
- case Intrinsic::amdgcn_call_whole_wave:
- Info.CallConv = CallingConv::AMDGPU_Gfx_WholeWave;
-
- // Get the callee from the original instruction, so it doesn't look like
- // this is an indirect call.
- Info.Callee = MachineOperand::CreateGA(
- cast<GlobalValue>(Info.CB->getOperand(0)), /*Offset=*/0);
- Info.OrigArgs.erase(Info.OrigArgs.begin());
- Info.IsVarArg = false;
- break;
- default:
- llvm_unreachable("Unexpected intrinsic call");
- }
+ assert(F->getIntrinsicID() == Intrinsic::amdgcn_cs_chain &&
+ "Unexpected intrinsic");
+ return lowerChainCall(MIRBuilder, Info);
}
if (Info.IsVarArg) {
diff --git a/llvm/test/CodeGen/AMDGPU/amdgcn-call-whole-wave.ll b/llvm/test/CodeGen/AMDGPU/amdgcn-call-whole-wave.ll
deleted file mode 100644
index eac0767c88d80..0000000000000
--- a/llvm/test/CodeGen/AMDGPU/amdgcn-call-whole-wave.ll
+++ /dev/null
@@ -1,174 +0,0 @@
-; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
-; RUN: llc -global-isel=0 -mtriple=amdgcn--amdpal -mcpu=gfx1200 < %s | FileCheck %s --check-prefix=DAGISEL
-; RUN: llc -global-isel=1 -mtriple=amdgcn--amdpal -mcpu=gfx1200 < %s | FileCheck %s --check-prefix=GISEL
-
-declare amdgpu_gfx_whole_wave i32 @good_callee(i1 %active, i32 %x, i32 %y, i32 inreg %c)
-
-define amdgpu_gfx void @basic_test(i32 %x, i32 inreg %c, ptr addrspace(1) %ptr) {
-; DAGISEL-LABEL: basic_test:
-; DAGISEL: ; %bb.0:
-; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
-; DAGISEL-NEXT: s_wait_expcnt 0x0
-; DAGISEL-NEXT: s_wait_samplecnt 0x0
-; DAGISEL-NEXT: s_wait_bvhcnt 0x0
-; DAGISEL-NEXT: s_wait_kmcnt 0x0
-; DAGISEL-NEXT: s_mov_b32 s0, s33
-; DAGISEL-NEXT: s_mov_b32 s33, s32
-; DAGISEL-NEXT: s_or_saveexec_b32 s1, -1
-; DAGISEL-NEXT: scratch_store_b32 off, v42, s33 offset:8 ; 4-byte Folded Spill
-; DAGISEL-NEXT: s_wait_alu 0xfffe
-; DAGISEL-NEXT: s_mov_b32 exec_lo, s1
-; DAGISEL-NEXT: v_writelane_b32 v42, s0, 2
-; DAGISEL-NEXT: s_clause 0x1
-; DAGISEL-NEXT: scratch_store_b32 off, v40, s33 offset:4
-; DAGISEL-NEXT: scratch_store_b32 off, v41, s33
-; DAGISEL-NEXT: v_dual_mov_b32 v41, v2 :: v_dual_mov_b32 v40, v1
-; DAGISEL-NEXT: v_add_nc_u32_e32 v1, 13, v0
-; DAGISEL-NEXT: v_writelane_b32 v42, s30, 0
-; DAGISEL-NEXT: s_mov_b32 s1, good_callee@abs32@hi
-; DAGISEL-NEXT: s_mov_b32 s0, good_callee@abs32@lo
-; DAGISEL-NEXT: s_add_co_i32 s32, s32, 16
-; DAGISEL-NEXT: v_writelane_b32 v42, s31, 1
-; DAGISEL-NEXT: s_wait_alu 0xfffe
-; DAGISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
-; DAGISEL-NEXT: global_store_b32 v[40:41], v0, off
-; DAGISEL-NEXT: s_clause 0x1
-; DAGISEL-NEXT: scratch_load_b32 v41, off, s33
-; DAGISEL-NEXT: scratch_load_b32 v40, off, s33 offset:4
-; DAGISEL-NEXT: v_readlane_b32 s31, v42, 1
-; DAGISEL-NEXT: v_readlane_b32 s30, v42, 0
-; DAGISEL-NEXT: s_mov_b32 s32, s33
-; DAGISEL-NEXT: v_readlane_b32 s0, v42, 2
-; DAGISEL-NEXT: s_or_saveexec_b32 s1, -1
-; DAGISEL-NEXT: scratch_load_b32 v42, off, s33 offset:8 ; 4-byte Folded Reload
-; DAGISEL-NEXT: s_wait_alu 0xfffe
-; DAGISEL-NEXT: s_mov_b32 exec_lo, s1
-; DAGISEL-NEXT: s_mov_b32 s33, s0
-; DAGISEL-NEXT: s_wait_loadcnt 0x0
-; DAGISEL-NEXT: s_wait_alu 0xfffe
-; DAGISEL-NEXT: s_setpc_b64 s[30:31]
-;
-; GISEL-LABEL: basic_test:
-; GISEL: ; %bb.0:
-; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
-; GISEL-NEXT: s_wait_expcnt 0x0
-; GISEL-NEXT: s_wait_samplecnt 0x0
-; GISEL-NEXT: s_wait_bvhcnt 0x0
-; GISEL-NEXT: s_wait_kmcnt 0x0
-; GISEL-NEXT: s_mov_b32 s0, s33
-; GISEL-NEXT: s_mov_b32 s33, s32
-; GISEL-NEXT: s_or_saveexec_b32 s1, -1
-; GISEL-NEXT: scratch_store_b32 off, v42, s33 offset:8 ; 4-byte Folded Spill
-; GISEL-NEXT: s_wait_alu 0xfffe
-; GISEL-NEXT: s_mov_b32 exec_lo, s1
-; GISEL-NEXT: v_writelane_b32 v42, s0, 2
-; GISEL-NEXT: s_clause 0x1
-; GISEL-NEXT: scratch_store_b32 off, v40, s33 offset:4
-; GISEL-NEXT: scratch_store_b32 off, v41, s33
-; GISEL-NEXT: v_dual_mov_b32 v40, v1 :: v_dual_mov_b32 v41, v2
-; GISEL-NEXT: v_add_nc_u32_e32 v1, 13, v0
-; GISEL-NEXT: v_writelane_b32 v42, s30, 0
-; GISEL-NEXT: s_mov_b32 s0, good_callee@abs32@lo
-; GISEL-NEXT: s_mov_b32 s1, good_callee@abs32@hi
-; GISEL-NEXT: s_add_co_i32 s32, s32, 16
-; GISEL-NEXT: v_writelane_b32 v42, s31, 1
-; GISEL-NEXT: s_wait_alu 0xfffe
-; GISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
-; GISEL-NEXT: global_store_b32 v[40:41], v0, off
-; GISEL-NEXT: s_clause 0x1
-; GISEL-NEXT: scratch_load_b32 v41, off, s33
-; GISEL-NEXT: scratch_load_b32 v40, off, s33 offset:4
-; GISEL-NEXT: v_readlane_b32 s31, v42, 1
-; GISEL-NEXT: v_readlane_b32 s30, v42, 0
-; GISEL-NEXT: s_mov_b32 s32, s33
-; GISEL-NEXT: v_readlane_b32 s0, v42, 2
-; GISEL-NEXT: s_or_saveexec_b32 s1, -1
-; GISEL-NEXT: scratch_load_b32 v42, off, s33 offset:8 ; 4-byte Folded Reload
-; GISEL-NEXT: s_wait_alu 0xfffe
-; GISEL-NEXT: s_mov_b32 exec_lo, s1
-; GISEL-NEXT: s_mov_b32 s33, s0
-; GISEL-NEXT: s_wait_loadcnt 0x0
-; GISEL-NEXT: s_wait_alu 0xfffe
-; GISEL-NEXT: s_setpc_b64 s[30:31]
- %y = add i32 %x, 13
- %ret = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @good_callee, i32 %x, i32 %y, i32 inreg %c)
- store i32 %ret, ptr addrspace(1) %ptr
- ret void
-}
-
-declare amdgpu_gfx_whole_wave void @void_callee(i1 %active, i32 %x)
-
-define amdgpu_gfx void @ret_void(i32 %x) {
-; DAGISEL-LABEL: ret_void:
-; DAGISEL: ; %bb.0:
-; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
-; DAGISEL-NEXT: s_wait_expcnt 0x0
-; DAGISEL-NEXT: s_wait_samplecnt 0x0
-; DAGISEL-NEXT: s_wait_bvhcnt 0x0
-; DAGISEL-NEXT: s_wait_kmcnt 0x0
-; DAGISEL-NEXT: s_mov_b32 s0, s33
-; DAGISEL-NEXT: s_mov_b32 s33, s32
-; DAGISEL-NEXT: s_or_saveexec_b32 s1, -1
-; DAGISEL-NEXT: scratch_store_b32 off, v40, s33 ; 4-byte Folded Spill
-; DAGISEL-NEXT: s_wait_alu 0xfffe
-; DAGISEL-NEXT: s_mov_b32 exec_lo, s1
-; DAGISEL-NEXT: v_writelane_b32 v40, s0, 2
-; DAGISEL-NEXT: s_mov_b32 s1, void_callee@abs32@hi
-; DAGISEL-NEXT: s_mov_b32 s0, void_callee@abs32@lo
-; DAGISEL-NEXT: s_add_co_i32 s32, s32, 16
-; DAGISEL-NEXT: v_writelane_b32 v40, s30, 0
-; DAGISEL-NEXT: v_writelane_b32 v40, s31, 1
-; DAGISEL-NEXT: s_wait_alu 0xfffe
-; DAGISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
-; DAGISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; DAGISEL-NEXT: v_readlane_b32 s31, v40, 1
-; DAGISEL-NEXT: v_readlane_b32 s30, v40, 0
-; DAGISEL-NEXT: s_mov_b32 s32, s33
-; DAGISEL-NEXT: v_readlane_b32 s0, v40, 2
-; DAGISEL-NEXT: s_or_saveexec_b32 s1, -1
-; DAGISEL-NEXT: scratch_load_b32 v40, off, s33 ; 4-byte Folded Reload
-; DAGISEL-NEXT: s_wait_alu 0xfffe
-; DAGISEL-NEXT: s_mov_b32 exec_lo, s1
-; DAGISEL-NEXT: s_mov_b32 s33, s0
-; DAGISEL-NEXT: s_wait_loadcnt 0x0
-; DAGISEL-NEXT: s_wait_alu 0xfffe
-; DAGISEL-NEXT: s_setpc_b64 s[30:31]
-;
-; GISEL-LABEL: ret_void:
-; GISEL: ; %bb.0:
-; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
-; GISEL-NEXT: s_wait_expcnt 0x0
-; GISEL-NEXT: s_wait_samplecnt 0x0
-; GISEL-NEXT: s_wait_bvhcnt 0x0
-; GISEL-NEXT: s_wait_kmcnt 0x0
-; GISEL-NEXT: s_mov_b32 s0, s33
-; GISEL-NEXT: s_mov_b32 s33, s32
-; GISEL-NEXT: s_or_saveexec_b32 s1, -1
-; GISEL-NEXT: scratch_store_b32 off, v40, s33 ; 4-byte Folded Spill
-; GISEL-NEXT: s_wait_alu 0xfffe
-; GISEL-NEXT: s_mov_b32 exec_lo, s1
-; GISEL-NEXT: v_writelane_b32 v40, s0, 2
-; GISEL-NEXT: s_mov_b32 s0, void_callee@abs32@lo
-; GISEL-NEXT: s_mov_b32 s1, void_callee@abs32@hi
-; GISEL-NEXT: s_add_co_i32 s32, s32, 16
-; GISEL-NEXT: v_writelane_b32 v40, s30, 0
-; GISEL-NEXT: v_writelane_b32 v40, s31, 1
-; GISEL-NEXT: s_wait_alu 0xfffe
-; GISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
-; GISEL-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GISEL-NEXT: v_readlane_b32 s31, v40, 1
-; GISEL-NEXT: v_readlane_b32 s30, v40, 0
-; GISEL-NEXT: s_mov_b32 s32, s33
-; GISEL-NEXT: v_readlane_b32 s0, v40, 2
-; GISEL-NEXT: s_or_saveexec_b32 s1, -1
-; GISEL-NEXT: scratch_load_b32 v40, off, s33 ; 4-byte Folded Reload
-; GISEL-NEXT: s_wait_alu 0xfffe
-; GISEL-NEXT: s_mov_b32 exec_lo, s1
-; GISEL-NEXT: s_mov_b32 s33, s0
-; GISEL-NEXT: s_wait_loadcnt 0x0
-; GISEL-NEXT: s_wait_alu 0xfffe
-; GISEL-NEXT: s_setpc_b64 s[30:31]
- call void(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @void_callee, i32 %x)
- ret void
-}
-
diff --git a/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll
index 17c8010bcbe05..8fc5afb155573 100644
--- a/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/irtranslator-whole-wave-functions.ll
@@ -101,29 +101,3 @@ define amdgpu_gfx_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
%ret = call i64 @llvm.amdgcn.update.dpp.i64(i64 %x, i64 %y, i32 1, i32 1, i32 1, i1 false)
ret i64 %ret
}
-
-declare amdgpu_gfx_whole_wave i32 @callee(i1 %active, i32 %x)
-
-; Make sure we don't pass the first argument (i1).
-define amdgpu_cs void @call(i32 %x, ptr %p) {
- ; CHECK-LABEL: name: call
- ; CHECK: bb.1 (%ir-block.0):
- ; CHECK-NEXT: liveins: $vgpr0, $vgpr1, $vgpr2
- ; CHECK-NEXT: {{ $}}
- ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $vgpr0
- ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $vgpr1
- ; CHECK-NEXT: [[COPY2:%[0-9]+]]:_(s32) = COPY $vgpr2
- ; CHECK-NEXT: [[MV:%[0-9]+]]:_(p0) = G_MERGE_VALUES [[COPY1]](s32), [[COPY2]](s32)
- ; CHECK-NEXT: [[GV:%[0-9]+]]:_(p0) = G_GLOBAL_VALUE @callee
- ; CHECK-NEXT: ADJCALLSTACKUP 0, 0, implicit-def $scc
- ; CHECK-NEXT: [[GV1:%[0-9]+]]:_(p0) = G_GLOBAL_VALUE @callee
- ; CHECK-NEXT: $vgpr0 = COPY [[COPY]](s32)
- ; CHECK-NEXT: $sgpr30_sgpr31 = G_SI_CALL [[GV1]](p0), @callee, csr_amdgpu_si_gfx, implicit $vgpr0, implicit-def $vgpr0
- ; CHECK-NEXT: [[COPY3:%[0-9]+]]:_(s32) = COPY $vgpr0
- ; CHECK-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $scc
- ; CHECK-NEXT: G_STORE [[COPY3]](s32), [[MV]](p0) :: (store (s32) into %ir.p)
- ; CHECK-NEXT: S_ENDPGM 0
- %ret = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @callee, i32 %x) convergent
- store i32 %ret, ptr %p
- ret void
-}
diff --git a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
index 69809b115e037..3450d63ff7b4a 100644
--- a/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/isel-whole-wave-functions.ll
@@ -189,79 +189,3 @@ define amdgpu_gfx_whole_wave i64 @ret_64(i1 %active, i64 %a, i64 %b) {
ret i64 %ret
}
-declare amdgpu_gfx_whole_wave i32 @callee(i1 %active, <8 x i32> %x)
-
-; Make sure we don't pass the first argument (i1).
-define amdgpu_cs void @call(<8 x i32> %x, ptr %p) {
- ; DAGISEL-LABEL: name: call
- ; DAGISEL: bb.0 (%ir-block.0):
- ; DAGISEL-NEXT: liveins: $vgpr0, $vgpr1, $vgpr2, $vgpr3, $vgpr4, $vgpr5, $vgpr6, $vgpr7, $vgpr8, $vgpr9
- ; DAGISEL-NEXT: {{ $}}
- ; DAGISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr9
- ; DAGISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr8
- ; DAGISEL-NEXT: [[COPY2:%[0-9]+]]:vgpr_32 = COPY $vgpr7
- ; DAGISEL-NEXT: [[COPY3:%[0-9]+]]:vgpr_32 = COPY $vgpr6
- ; DAGISEL-NEXT: [[COPY4:%[0-9]+]]:vgpr_32 = COPY $vgpr5
- ; DAGISEL-NEXT: [[COPY5:%[0-9]+]]:vgpr_32 = COPY $vgpr4
- ; DAGISEL-NEXT: [[COPY6:%[0-9]+]]:vgpr_32 = COPY $vgpr3
- ; DAGISEL-NEXT: [[COPY7:%[0-9]+]]:vgpr_32 = COPY $vgpr2
- ; DAGISEL-NEXT: [[COPY8:%[0-9]+]]:vgpr_32 = COPY $vgpr1
- ; DAGISEL-NEXT: [[COPY9:%[0-9]+]]:vgpr_32 = COPY $vgpr0
- ; DAGISEL-NEXT: [[DEF:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
- ; DAGISEL-NEXT: [[DEF1:%[0-9]+]]:sgpr_32 = IMPLICIT_DEF
- ; DAGISEL-NEXT: [[REG_SEQUENCE:%[0-9]+]]:vreg_64 = REG_SEQUENCE [[COPY1]], %subreg.sub0, [[COPY]], %subreg.sub1
- ; DAGISEL-NEXT: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 target-flags(amdgpu-abs32-hi) @callee
- ; DAGISEL-NEXT: [[S_MOV_B32_1:%[0-9]+]]:sreg_32 = S_MOV_B32 target-flags(amdgpu-abs32-lo) @callee
- ; DAGISEL-NEXT: [[REG_SEQUENCE1:%[0-9]+]]:sreg_64 = REG_SEQUENCE killed [[S_MOV_B32_1]], %subreg.sub0, killed [[S_MOV_B32_]], %subreg.sub1
- ; DAGISEL-NEXT: ADJCALLSTACKUP 0, 0, implicit-def dead $scc, implicit-def $sgpr32, implicit $sgpr32
- ; DAGISEL-NEXT: $vgpr0 = COPY [[COPY9]]
- ; DAGISEL-NEXT: $vgpr1 = COPY [[COPY8]]
- ; DAGISEL-NEXT: $vgpr2 = COPY [[COPY7]]
- ; DAGISEL-NEXT: $vgpr3 = COPY [[COPY6]]
- ; DAGISEL-NEXT: $vgpr4 = COPY [[COPY5]]
- ; DAGISEL-NEXT: $vgpr5 = COPY [[COPY4]]
- ; DAGISEL-NEXT: $vgpr6 = COPY [[COPY3]]
- ; DAGISEL-NEXT: $vgpr7 = COPY [[COPY2]]
- ; DAGISEL-NEXT: $sgpr30_sgpr31 = SI_CALL killed [[REG_SEQUENCE1]], @callee, csr_amdgpu_si_gfx, implicit $vgpr0, implicit $vgpr1, implicit $vgpr2, implicit $vgpr3, implicit $vgpr4, implicit $vgpr5, implicit $vgpr6, implicit $vgpr7, implicit-def $vgpr0
- ; DAGISEL-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def dead $scc, implicit-def $sgpr32, implicit $sgpr32
- ; DAGISEL-NEXT: [[COPY10:%[0-9]+]]:vgpr_32 = COPY $vgpr0
- ; DAGISEL-NEXT: [[COPY11:%[0-9]+]]:vreg_64 = COPY [[REG_SEQUENCE]]
- ; DAGISEL-NEXT: FLAT_STORE_DWORD killed [[COPY11]], [[COPY10]], 0, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %ir.p)
- ; DAGISEL-NEXT: S_ENDPGM 0
- ;
- ; GISEL-LABEL: name: call
- ; GISEL: bb.1 (%ir-block.0):
- ; GISEL-NEXT: liveins: $vgpr0, $vgpr1, $vgpr2, $vgpr3, $vgpr4, $vgpr5, $vgpr6, $vgpr7, $vgpr8, $vgpr9
- ; GISEL-NEXT: {{ $}}
- ; GISEL-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY $vgpr0
- ; GISEL-NEXT: [[COPY1:%[0-9]+]]:vgpr_32 = COPY $vgpr1
- ; GISEL-NEXT: [[COPY2:%[0-9]+]]:vgpr_32 = COPY $vgpr2
- ; GISEL-NEXT: [[COPY3:%[0-9]+]]:vgpr_32 = COPY $vgpr3
- ; GISEL-NEXT: [[COPY4:%[0-9]+]]:vgpr_32 = COPY $vgpr4
- ; GISEL-NEXT: [[COPY5:%[0-9]+]]:vgpr_32 = COPY $vgpr5
- ; GISEL-NEXT: [[COPY6:%[0-9]+]]:vgpr_32 = COPY $vgpr6
- ; GISEL-NEXT: [[COPY7:%[0-9]+]]:vgpr_32 = COPY $vgpr7
- ; GISEL-NEXT: [[COPY8:%[0-9]+]]:vgpr_32 = COPY $vgpr8
- ; GISEL-NEXT: [[COPY9:%[0-9]+]]:vgpr_32 = COPY $vgpr9
- ; GISEL-NEXT: [[REG_SEQUENCE:%[0-9]+]]:vreg_64 = REG_SEQUENCE [[COPY8]], %subreg.sub0, [[COPY9]], %subreg.sub1
- ; GISEL-NEXT: ADJCALLSTACKUP 0, 0, implicit-def $scc, implicit-def $sgpr32, implicit $sgpr32
- ; GISEL-NEXT: $vgpr0 = COPY [[COPY]]
- ; GISEL-NEXT: $vgpr1 = COPY [[COPY1]]
- ; GISEL-NEXT: $vgpr2 = COPY [[COPY2]]
- ; GISEL-NEXT: $vgpr3 = COPY [[COPY3]]
- ; GISEL-NEXT: $vgpr4 = COPY [[COPY4]]
- ; GISEL-NEXT: $vgpr5 = COPY [[COPY5]]
- ; GISEL-NEXT: $vgpr6 = COPY [[COPY6]]
- ; GISEL-NEXT: $vgpr7 = COPY [[COPY7]]
- ; GISEL-NEXT: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 target-flags(amdgpu-abs32-lo) @callee
- ; GISEL-NEXT: [[S_MOV_B32_1:%[0-9]+]]:sreg_32 = S_MOV_B32 target-flags(amdgpu-abs32-hi) @callee
- ; GISEL-NEXT: [[REG_SEQUENCE1:%[0-9]+]]:sreg_64 = REG_SEQUENCE [[S_MOV_B32_]], %subreg.sub0, [[S_MOV_B32_1]], %subreg.sub1
- ; GISEL-NEXT: $sgpr30_sgpr31 = SI_CALL [[REG_SEQUENCE1]], @callee, csr_amdgpu_si_gfx, implicit $vgpr0, implicit $vgpr1, implicit $vgpr2, implicit $vgpr3, implicit $vgpr4, implicit $vgpr5, implicit $vgpr6, implicit $vgpr7, implicit-def $vgpr0
- ; GISEL-NEXT: [[COPY10:%[0-9]+]]:vgpr_32 = COPY $vgpr0
- ; GISEL-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def $scc, implicit-def $sgpr32, implicit $sgpr32
- ; GISEL-NEXT: FLAT_STORE_DWORD [[REG_SEQUENCE]], [[COPY10]], 0, 0, implicit $exec, implicit $flat_scr :: (store (s32) into %ir.p)
- ; GISEL-NEXT: S_ENDPGM 0
- %ret = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @callee, <8 x i32> %x) convergent
- store i32 %ret, ptr %p
- ret void
-}
diff --git a/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
index 36e8adb23f1f5..a13a68a665aee 100644
--- a/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/whole-wave-functions.ll
@@ -2412,1427 +2412,3 @@ define amdgpu_gfx_whole_wave <2 x half> @call_gfx_from_whole_wave(i1 %active, <2
%ret = call amdgpu_gfx <2 x half>(<2 x half>, <2 x half>) @gfx_callee(<2 x half> %y, <2 x half> %x) convergent
ret <2 x half> %ret
}
-
-declare amdgpu_gfx_whole_wave float @callee(i1 %active, <8 x float> %x)
-
-define amdgpu_cs void @call_from_entry(<8 x float> %x, ptr %p) {
-; DAGISEL-LABEL: call_from_entry:
-; DAGISEL: ; %bb.0:
- ; DAGISEL-NEXT: s_mov_b32 s1, callee@abs32@hi
- ; DAGISEL-NEXT: s_mov_b32 s0, callee@abs32@lo
-; DAGISEL-NEXT: s_mov_b32 s32, 0
-; DAGISEL-NEXT: v_dual_mov_b32 v41, v9 :: v_dual_mov_b32 v40, v8
-; DAGISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
-; DAGISEL-NEXT: flat_store_b32 v[40:41], v0
-; DAGISEL-NEXT: s_endpgm
-;
-; GISEL-LABEL: call_from_entry:
-; GISEL: ; %bb.0:
- ; GISEL-NEXT: s_mov_b32 s0, callee@abs32@lo
- ; GISEL-NEXT: s_mov_b32 s1, callee@abs32@hi
-; GISEL-NEXT: s_mov_b32 s32, 0
-; GISEL-NEXT: v_dual_mov_b32 v40, v8 :: v_dual_mov_b32 v41, v9
-; GISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
-; GISEL-NEXT: flat_store_b32 v[40:41], v0
-; GISEL-NEXT: s_endpgm
-;
-; DAGISEL64-LABEL: call_from_entry:
-; DAGISEL64: ; %bb.0:
- ; DAGISEL64-NEXT: s_mov_b32 s1, callee@abs32@hi
- ; DAGISEL64-NEXT: s_mov_b32 s0, callee@abs32@lo
-; DAGISEL64-NEXT: s_mov_b32 s32, 0
-; DAGISEL64-NEXT: v_mov_b32_e32 v41, v9
-; DAGISEL64-NEXT: v_mov_b32_e32 v40, v8
-; DAGISEL64-NEXT: s_swappc_b64 s[30:31], s[0:1]
-; DAGISEL64-NEXT: flat_store_b32 v[40:41], v0
-; DAGISEL64-NEXT: s_endpgm
-;
-; GISEL64-LABEL: call_from_entry:
-; GISEL64: ; %bb.0:
- ; GISEL64-NEXT: s_mov_b32 s0, callee@abs32@lo
- ; GISEL64-NEXT: s_mov_b32 s1, callee@abs32@hi
-; GISEL64-NEXT: s_mov_b32 s32, 0
-; GISEL64-NEXT: v_mov_b32_e32 v40, v8
-; GISEL64-NEXT: v_mov_b32_e32 v41, v9
-; GISEL64-NEXT: s_swappc_b64 s[30:31], s[0:1]
-; GISEL64-NEXT: flat_store_b32 v[40:41], v0
-; GISEL64-NEXT: s_endpgm
- %ret = call float(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @callee, <8 x float> %x) convergent
- store float %ret, ptr %p
- ret void
-}
-
-define amdgpu_gfx_whole_wave void @call_from_whole_wave(i1 %unused, <8 x float> %x, ptr %p) {
-; DAGISEL-LABEL: call_from_whole_wave:
-; DAGISEL: ; %bb.0:
-; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
-; DAGISEL-NEXT: s_wait_expcnt 0x0
-; DAGISEL-NEXT: s_wait_samplecnt 0x0
-; DAGISEL-NEXT: s_wait_bvhcnt 0x0
-; DAGISEL-NEXT: s_wait_kmcnt 0x0
-; DAGISEL-NEXT: s_mov_b32 s0, s33
-; DAGISEL-NEXT: s_mov_b32 s33, s32
-; DAGISEL-NEXT: s_xor_saveexec_b32 s4, -1
-; DAGISEL-NEXT: s_clause 0x1f
-; DAGISEL-NEXT: scratch_store_b32 off, v0, s33 offset:4
-; DAGISEL-NEXT: scratch_store_b32 off, v1, s33 offset:8
-; DAGISEL-NEXT: scratch_store_b32 off, v2, s33 offset:12
-; DAGISEL-NEXT: scratch_store_b32 off, v3, s33 offset:16
-; DAGISEL-NEXT: scratch_store_b32 off, v4, s33 offset:20
-; DAGISEL-NEXT: scratch_store_b32 off, v5, s33 offset:24
-; DAGISEL-NEXT: scratch_store_b32 off, v6, s33 offset:28
-; DAGISEL-NEXT: scratch_store_b32 off, v7, s33 offset:32
-; DAGISEL-NEXT: scratch_store_b32 off, v8, s33 offset:36
-; DAGISEL-NEXT: scratch_store_b32 off, v9, s33 offset:40
-; DAGISEL-NEXT: scratch_store_b32 off, v10, s33 offset:44
-; DAGISEL-NEXT: scratch_store_b32 off, v11, s33 offset:48
-; DAGISEL-NEXT: scratch_store_b32 off, v12, s33 offset:52
-; DAGISEL-NEXT: scratch_store_b32 off, v13, s33 offset:56
-; DAGISEL-NEXT: scratch_store_b32 off, v14, s33 offset:60
-; DAGISEL-NEXT: scratch_store_b32 off, v15, s33 offset:64
-; DAGISEL-NEXT: scratch_store_b32 off, v16, s33 offset:68
-; DAGISEL-NEXT: scratch_store_b32 off, v17, s33 offset:72
-; DAGISEL-NEXT: scratch_store_b32 off, v18, s33 offset:76
-; DAGISEL-NEXT: scratch_store_b32 off, v19, s33 offset:80
-; DAGISEL-NEXT: scratch_store_b32 off, v20, s33 offset:84
-; DAGISEL-NEXT: scratch_store_b32 off, v21, s33 offset:88
-; DAGISEL-NEXT: scratch_store_b32 off, v22, s33 offset:92
-; DAGISEL-NEXT: scratch_store_b32 off, v23, s33 offset:96
-; DAGISEL-NEXT: scratch_store_b32 off, v24, s33 offset:100
-; DAGISEL-NEXT: scratch_store_b32 off, v25, s33 offset:104
-; DAGISEL-NEXT: scratch_store_b32 off, v26, s33 offset:108
-; DAGISEL-NEXT: scratch_store_b32 off, v27, s33 offset:112
-; DAGISEL-NEXT: scratch_store_b32 off, v28, s33 offset:116
-; DAGISEL-NEXT: scratch_store_b32 off, v29, s33 offset:120
-; DAGISEL-NEXT: scratch_store_b32 off, v30, s33 offset:124
-; DAGISEL-NEXT: scratch_store_b32 off, v31, s33 offset:128
-; DAGISEL-NEXT: s_clause 0x1f
-; DAGISEL-NEXT: scratch_store_b32 off, v32, s33 offset:132
-; DAGISEL-NEXT: scratch_store_b32 off, v33, s33 offset:136
-; DAGISEL-NEXT: scratch_store_b32 off, v34, s33 offset:140
-; DAGISEL-NEXT: scratch_store_b32 off, v35, s33 offset:144
-; DAGISEL-NEXT: scratch_store_b32 off, v36, s33 offset:148
-; DAGISEL-NEXT: scratch_store_b32 off, v37, s33 offset:152
-; DAGISEL-NEXT: scratch_store_b32 off, v38, s33 offset:156
-; DAGISEL-NEXT: scratch_store_b32 off, v39, s33 offset:160
-; DAGISEL-NEXT: scratch_store_b32 off, v48, s33 offset:172
-; DAGISEL-NEXT: scratch_store_b32 off, v49, s33 offset:176
-; DAGISEL-NEXT: scratch_store_b32 off, v50, s33 offset:180
-; DAGISEL-NEXT: scratch_store_b32 off, v51, s33 offset:184
-; DAGISEL-NEXT: scratch_store_b32 off, v52, s33 offset:188
-; DAGISEL-NEXT: scratch_store_b32 off, v53, s33 offset:192
-; DAGISEL-NEXT: scratch_store_b32 off, v54, s33 offset:196
-; DAGISEL-NEXT: scratch_store_b32 off, v55, s33 offset:200
-; DAGISEL-NEXT: scratch_store_b32 off, v64, s33 offset:204
-; DAGISEL-NEXT: scratch_store_b32 off, v65, s33 offset:208
-; DAGISEL-NEXT: scratch_store_b32 off, v66, s33 offset:212
-; DAGISEL-NEXT: scratch_store_b32 off, v67, s33 offset:216
-; DAGISEL-NEXT: scratch_store_b32 off, v68, s33 offset:220
-; DAGISEL-NEXT: scratch_store_b32 off, v69, s33 offset:224
-; DAGISEL-NEXT: scratch_store_b32 off, v70, s33 offset:228
-; DAGISEL-NEXT: scratch_store_b32 off, v71, s33 offset:232
-; DAGISEL-NEXT: scratch_store_b32 off, v80, s33 offset:236
-; DAGISEL-NEXT: scratch_store_b32 off, v81, s33 offset:240
-; DAGISEL-NEXT: scratch_store_b32 off, v82, s33 offset:244
-; DAGISEL-NEXT: scratch_store_b32 off, v83, s33 offset:248
-; DAGISEL-NEXT: scratch_store_b32 off, v84, s33 offset:252
-; DAGISEL-NEXT: scratch_store_b32 off, v85, s33 offset:256
-; DAGISEL-NEXT: scratch_store_b32 off, v86, s33 offset:260
-; DAGISEL-NEXT: scratch_store_b32 off, v87, s33 offset:264
-; DAGISEL-NEXT: s_clause 0x1f
-; DAGISEL-NEXT: scratch_store_b32 off, v96, s33 offset:268
-; DAGISEL-NEXT: scratch_store_b32 off, v97, s33 offset:272
-; DAGISEL-NEXT: scratch_store_b32 off, v98, s33 offset:276
-; DAGISEL-NEXT: scratch_store_b32 off, v99, s33 offset:280
-; DAGISEL-NEXT: scratch_store_b32 off, v100, s33 offset:284
-; DAGISEL-NEXT: scratch_store_b32 off, v101, s33 offset:288
-; DAGISEL-NEXT: scratch_store_b32 off, v102, s33 offset:292
-; DAGISEL-NEXT: scratch_store_b32 off, v103, s33 offset:296
-; DAGISEL-NEXT: scratch_store_b32 off, v112, s33 offset:300
-; DAGISEL-NEXT: scratch_store_b32 off, v113, s33 offset:304
-; DAGISEL-NEXT: scratch_store_b32 off, v114, s33 offset:308
-; DAGISEL-NEXT: scratch_store_b32 off, v115, s33 offset:312
-; DAGISEL-NEXT: scratch_store_b32 off, v116, s33 offset:316
-; DAGISEL-NEXT: scratch_store_b32 off, v117, s33 offset:320
-; DAGISEL-NEXT: scratch_store_b32 off, v118, s33 offset:324
-; DAGISEL-NEXT: scratch_store_b32 off, v119, s33 offset:328
-; DAGISEL-NEXT: scratch_store_b32 off, v128, s33 offset:332
-; DAGISEL-NEXT: scratch_store_b32 off, v129, s33 offset:336
-; DAGISEL-NEXT: scratch_store_b32 off, v130, s33 offset:340
-; DAGISEL-NEXT: scratch_store_b32 off, v131, s33 offset:344
-; DAGISEL-NEXT: scratch_store_b32 off, v132, s33 offset:348
-; DAGISEL-NEXT: scratch_store_b32 off, v133, s33 offset:352
-; DAGISEL-NEXT: scratch_store_b32 off, v134, s33 offset:356
-; DAGISEL-NEXT: scratch_store_b32 off, v135, s33 offset:360
-; DAGISEL-NEXT: scratch_store_b32 off, v144, s33 offset:364
-; DAGISEL-NEXT: scratch_store_b32 off, v145, s33 offset:368
-; DAGISEL-NEXT: scratch_store_b32 off, v146, s33 offset:372
-; DAGISEL-NEXT: scratch_store_b32 off, v147, s33 offset:376
-; DAGISEL-NEXT: scratch_store_b32 off, v148, s33 offset:380
-; DAGISEL-NEXT: scratch_store_b32 off, v149, s33 offset:384
-; DAGISEL-NEXT: scratch_store_b32 off, v150, s33 offset:388
-; DAGISEL-NEXT: scratch_store_b32 off, v151, s33 offset:392
-; DAGISEL-NEXT: s_clause 0x1f
-; DAGISEL-NEXT: scratch_store_b32 off, v160, s33 offset:396
-; DAGISEL-NEXT: scratch_store_b32 off, v161, s33 offset:400
-; DAGISEL-NEXT: scratch_store_b32 off, v162, s33 offset:404
-; DAGISEL-NEXT: scratch_store_b32 off, v163, s33 offset:408
-; DAGISEL-NEXT: scratch_store_b32 off, v164, s33 offset:412
-; DAGISEL-NEXT: scratch_store_b32 off, v165, s33 offset:416
-; DAGISEL-NEXT: scratch_store_b32 off, v166, s33 offset:420
-; DAGISEL-NEXT: scratch_store_b32 off, v167, s33 offset:424
-; DAGISEL-NEXT: scratch_store_b32 off, v176, s33 offset:428
-; DAGISEL-NEXT: scratch_store_b32 off, v177, s33 offset:432
-; DAGISEL-NEXT: scratch_store_b32 off, v178, s33 offset:436
-; DAGISEL-NEXT: scratch_store_b32 off, v179, s33 offset:440
-; DAGISEL-NEXT: scratch_store_b32 off, v180, s33 offset:444
-; DAGISEL-NEXT: scratch_store_b32 off, v181, s33 offset:448
-; DAGISEL-NEXT: scratch_store_b32 off, v182, s33 offset:452
-; DAGISEL-NEXT: scratch_store_b32 off, v183, s33 offset:456
-; DAGISEL-NEXT: scratch_store_b32 off, v192, s33 offset:460
-; DAGISEL-NEXT: scratch_store_b32 off, v193, s33 offset:464
-; DAGISEL-NEXT: scratch_store_b32 off, v194, s33 offset:468
-; DAGISEL-NEXT: scratch_store_b32 off, v195, s33 offset:472
-; DAGISEL-NEXT: scratch_store_b32 off, v196, s33 offset:476
-; DAGISEL-NEXT: scratch_store_b32 off, v197, s33 offset:480
-; DAGISEL-NEXT: scratch_store_b32 off, v198, s33 offset:484
-; DAGISEL-NEXT: scratch_store_b32 off, v199, s33 offset:488
-; DAGISEL-NEXT: scratch_store_b32 off, v208, s33 offset:492
-; DAGISEL-NEXT: scratch_store_b32 off, v209, s33 offset:496
-; DAGISEL-NEXT: scratch_store_b32 off, v210, s33 offset:500
-; DAGISEL-NEXT: scratch_store_b32 off, v211, s33 offset:504
-; DAGISEL-NEXT: scratch_store_b32 off, v212, s33 offset:508
-; DAGISEL-NEXT: scratch_store_b32 off, v213, s33 offset:512
-; DAGISEL-NEXT: scratch_store_b32 off, v214, s33 offset:516
-; DAGISEL-NEXT: scratch_store_b32 off, v215, s33 offset:520
-; DAGISEL-NEXT: s_clause 0xf
-; DAGISEL-NEXT: scratch_store_b32 off, v224, s33 offset:524
-; DAGISEL-NEXT: scratch_store_b32 off, v225, s33 offset:528
-; DAGISEL-NEXT: scratch_store_b32 off, v226, s33 offset:532
-; DAGISEL-NEXT: scratch_store_b32 off, v227, s33 offset:536
-; DAGISEL-NEXT: scratch_store_b32 off, v228, s33 offset:540
-; DAGISEL-NEXT: scratch_store_b32 off, v229, s33 offset:544
-; DAGISEL-NEXT: scratch_store_b32 off, v230, s33 offset:548
-; DAGISEL-NEXT: scratch_store_b32 off, v231, s33 offset:552
-; DAGISEL-NEXT: scratch_store_b32 off, v240, s33 offset:556
-; DAGISEL-NEXT: scratch_store_b32 off, v241, s33 offset:560
-; DAGISEL-NEXT: scratch_store_b32 off, v242, s33 offset:564
-; DAGISEL-NEXT: scratch_store_b32 off, v243, s33 offset:568
-; DAGISEL-NEXT: scratch_store_b32 off, v244, s33 offset:572
-; DAGISEL-NEXT: scratch_store_b32 off, v245, s33 offset:576
-; DAGISEL-NEXT: scratch_store_b32 off, v246, s33 offset:580
-; DAGISEL-NEXT: scratch_store_b32 off, v247, s33 offset:584
-; DAGISEL-NEXT: s_mov_b32 exec_lo, -1
-; DAGISEL-NEXT: s_clause 0x2
-; DAGISEL-NEXT: scratch_store_b32 off, v42, s33
-; DAGISEL-NEXT: scratch_store_b32 off, v40, s33 offset:164
-; DAGISEL-NEXT: scratch_store_b32 off, v41, s33 offset:168
-; DAGISEL-NEXT: s_wait_alu 0xfffe
-; DAGISEL-NEXT: v_writelane_b32 v42, s0, 3
- ; DAGISEL-NEXT: s_mov_b32 s1, callee@abs32@hi
- ; DAGISEL-NEXT: s_mov_b32 s0, callee@abs32@lo
-; DAGISEL-NEXT: s_addk_co_i32 s32, 0x250
-; DAGISEL-NEXT: v_dual_mov_b32 v41, v9 :: v_dual_mov_b32 v40, v8
-; DAGISEL-NEXT: v_writelane_b32 v42, s4, 0
-; DAGISEL-NEXT: v_writelane_b32 v42, s30, 1
-; DAGISEL-NEXT: v_writelane_b32 v42, s31, 2
-; DAGISEL-NEXT: s_wait_alu 0xfffe
-; DAGISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
-; DAGISEL-NEXT: flat_store_b32 v[40:41], v0
-; DAGISEL-NEXT: v_readlane_b32 s31, v42, 2
-; DAGISEL-NEXT: v_readlane_b32 s30, v42, 1
-; DAGISEL-NEXT: v_readlane_b32 s4, v42, 0
-; DAGISEL-NEXT: v_readlane_b32 s0, v42, 3
-; DAGISEL-NEXT: s_clause 0x2
-; DAGISEL-NEXT: scratch_load_b32 v42, off, s33
-; DAGISEL-NEXT: scratch_load_b32 v40, off, s33 offset:164
-; DAGISEL-NEXT: scratch_load_b32 v41, off, s33 offset:168
-; DAGISEL-NEXT: s_mov_b32 s32, s33
-; DAGISEL-NEXT: s_xor_b32 exec_lo, s4, -1
-; DAGISEL-NEXT: s_clause 0x1f
-; DAGISEL-NEXT: scratch_load_b32 v0, off, s33 offset:4
-; DAGISEL-NEXT: scratch_load_b32 v1, off, s33 offset:8
-; DAGISEL-NEXT: scratch_load_b32 v2, off, s33 offset:12
-; DAGISEL-NEXT: scratch_load_b32 v3, off, s33 offset:16
-; DAGISEL-NEXT: scratch_load_b32 v4, off, s33 offset:20
-; DAGISEL-NEXT: scratch_load_b32 v5, off, s33 offset:24
-; DAGISEL-NEXT: scratch_load_b32 v6, off, s33 offset:28
-; DAGISEL-NEXT: scratch_load_b32 v7, off, s33 offset:32
-; DAGISEL-NEXT: scratch_load_b32 v8, off, s33 offset:36
-; DAGISEL-NEXT: scratch_load_b32 v9, off, s33 offset:40
-; DAGISEL-NEXT: scratch_load_b32 v10, off, s33 offset:44
-; DAGISEL-NEXT: scratch_load_b32 v11, off, s33 offset:48
-; DAGISEL-NEXT: scratch_load_b32 v12, off, s33 offset:52
-; DAGISEL-NEXT: scratch_load_b32 v13, off, s33 offset:56
-; DAGISEL-NEXT: scratch_load_b32 v14, off, s33 offset:60
-; DAGISEL-NEXT: scratch_load_b32 v15, off, s33 offset:64
-; DAGISEL-NEXT: scratch_load_b32 v16, off, s33 offset:68
-; DAGISEL-NEXT: scratch_load_b32 v17, off, s33 offset:72
-; DAGISEL-NEXT: scratch_load_b32 v18, off, s33 offset:76
-; DAGISEL-NEXT: scratch_load_b32 v19, off, s33 offset:80
-; DAGISEL-NEXT: scratch_load_b32 v20, off, s33 offset:84
-; DAGISEL-NEXT: scratch_load_b32 v21, off, s33 offset:88
-; DAGISEL-NEXT: scratch_load_b32 v22, off, s33 offset:92
-; DAGISEL-NEXT: scratch_load_b32 v23, off, s33 offset:96
-; DAGISEL-NEXT: scratch_load_b32 v24, off, s33 offset:100
-; DAGISEL-NEXT: scratch_load_b32 v25, off, s33 offset:104
-; DAGISEL-NEXT: scratch_load_b32 v26, off, s33 offset:108
-; DAGISEL-NEXT: scratch_load_b32 v27, off, s33 offset:112
-; DAGISEL-NEXT: scratch_load_b32 v28, off, s33 offset:116
-; DAGISEL-NEXT: scratch_load_b32 v29, off, s33 offset:120
-; DAGISEL-NEXT: scratch_load_b32 v30, off, s33 offset:124
-; DAGISEL-NEXT: scratch_load_b32 v31, off, s33 offset:128
-; DAGISEL-NEXT: s_clause 0x1f
-; DAGISEL-NEXT: scratch_load_b32 v32, off, s33 offset:132
-; DAGISEL-NEXT: scratch_load_b32 v33, off, s33 offset:136
-; DAGISEL-NEXT: scratch_load_b32 v34, off, s33 offset:140
-; DAGISEL-NEXT: scratch_load_b32 v35, off, s33 offset:144
-; DAGISEL-NEXT: scratch_load_b32 v36, off, s33 offset:148
-; DAGISEL-NEXT: scratch_load_b32 v37, off, s33 offset:152
-; DAGISEL-NEXT: scratch_load_b32 v38, off, s33 offset:156
-; DAGISEL-NEXT: scratch_load_b32 v39, off, s33 offset:160
-; DAGISEL-NEXT: scratch_load_b32 v48, off, s33 offset:172
-; DAGISEL-NEXT: scratch_load_b32 v49, off, s33 offset:176
-; DAGISEL-NEXT: scratch_load_b32 v50, off, s33 offset:180
-; DAGISEL-NEXT: scratch_load_b32 v51, off, s33 offset:184
-; DAGISEL-NEXT: scratch_load_b32 v52, off, s33 offset:188
-; DAGISEL-NEXT: scratch_load_b32 v53, off, s33 offset:192
-; DAGISEL-NEXT: scratch_load_b32 v54, off, s33 offset:196
-; DAGISEL-NEXT: scratch_load_b32 v55, off, s33 offset:200
-; DAGISEL-NEXT: scratch_load_b32 v64, off, s33 offset:204
-; DAGISEL-NEXT: scratch_load_b32 v65, off, s33 offset:208
-; DAGISEL-NEXT: scratch_load_b32 v66, off, s33 offset:212
-; DAGISEL-NEXT: scratch_load_b32 v67, off, s33 offset:216
-; DAGISEL-NEXT: scratch_load_b32 v68, off, s33 offset:220
-; DAGISEL-NEXT: scratch_load_b32 v69, off, s33 offset:224
-; DAGISEL-NEXT: scratch_load_b32 v70, off, s33 offset:228
-; DAGISEL-NEXT: scratch_load_b32 v71, off, s33 offset:232
-; DAGISEL-NEXT: scratch_load_b32 v80, off, s33 offset:236
-; DAGISEL-NEXT: scratch_load_b32 v81, off, s33 offset:240
-; DAGISEL-NEXT: scratch_load_b32 v82, off, s33 offset:244
-; DAGISEL-NEXT: scratch_load_b32 v83, off, s33 offset:248
-; DAGISEL-NEXT: scratch_load_b32 v84, off, s33 offset:252
-; DAGISEL-NEXT: scratch_load_b32 v85, off, s33 offset:256
-; DAGISEL-NEXT: scratch_load_b32 v86, off, s33 offset:260
-; DAGISEL-NEXT: scratch_load_b32 v87, off, s33 offset:264
-; DAGISEL-NEXT: s_clause 0x1f
-; DAGISEL-NEXT: scratch_load_b32 v96, off, s33 offset:268
-; DAGISEL-NEXT: scratch_load_b32 v97, off, s33 offset:272
-; DAGISEL-NEXT: scratch_load_b32 v98, off, s33 offset:276
-; DAGISEL-NEXT: scratch_load_b32 v99, off, s33 offset:280
-; DAGISEL-NEXT: scratch_load_b32 v100, off, s33 offset:284
-; DAGISEL-NEXT: scratch_load_b32 v101, off, s33 offset:288
-; DAGISEL-NEXT: scratch_load_b32 v102, off, s33 offset:292
-; DAGISEL-NEXT: scratch_load_b32 v103, off, s33 offset:296
-; DAGISEL-NEXT: scratch_load_b32 v112, off, s33 offset:300
-; DAGISEL-NEXT: scratch_load_b32 v113, off, s33 offset:304
-; DAGISEL-NEXT: scratch_load_b32 v114, off, s33 offset:308
-; DAGISEL-NEXT: scratch_load_b32 v115, off, s33 offset:312
-; DAGISEL-NEXT: scratch_load_b32 v116, off, s33 offset:316
-; DAGISEL-NEXT: scratch_load_b32 v117, off, s33 offset:320
-; DAGISEL-NEXT: scratch_load_b32 v118, off, s33 offset:324
-; DAGISEL-NEXT: scratch_load_b32 v119, off, s33 offset:328
-; DAGISEL-NEXT: scratch_load_b32 v128, off, s33 offset:332
-; DAGISEL-NEXT: scratch_load_b32 v129, off, s33 offset:336
-; DAGISEL-NEXT: scratch_load_b32 v130, off, s33 offset:340
-; DAGISEL-NEXT: scratch_load_b32 v131, off, s33 offset:344
-; DAGISEL-NEXT: scratch_load_b32 v132, off, s33 offset:348
-; DAGISEL-NEXT: scratch_load_b32 v133, off, s33 offset:352
-; DAGISEL-NEXT: scratch_load_b32 v134, off, s33 offset:356
-; DAGISEL-NEXT: scratch_load_b32 v135, off, s33 offset:360
-; DAGISEL-NEXT: scratch_load_b32 v144, off, s33 offset:364
-; DAGISEL-NEXT: scratch_load_b32 v145, off, s33 offset:368
-; DAGISEL-NEXT: scratch_load_b32 v146, off, s33 offset:372
-; DAGISEL-NEXT: scratch_load_b32 v147, off, s33 offset:376
-; DAGISEL-NEXT: scratch_load_b32 v148, off, s33 offset:380
-; DAGISEL-NEXT: scratch_load_b32 v149, off, s33 offset:384
-; DAGISEL-NEXT: scratch_load_b32 v150, off, s33 offset:388
-; DAGISEL-NEXT: scratch_load_b32 v151, off, s33 offset:392
-; DAGISEL-NEXT: s_clause 0x1f
-; DAGISEL-NEXT: scratch_load_b32 v160, off, s33 offset:396
-; DAGISEL-NEXT: scratch_load_b32 v161, off, s33 offset:400
-; DAGISEL-NEXT: scratch_load_b32 v162, off, s33 offset:404
-; DAGISEL-NEXT: scratch_load_b32 v163, off, s33 offset:408
-; DAGISEL-NEXT: scratch_load_b32 v164, off, s33 offset:412
-; DAGISEL-NEXT: scratch_load_b32 v165, off, s33 offset:416
-; DAGISEL-NEXT: scratch_load_b32 v166, off, s33 offset:420
-; DAGISEL-NEXT: scratch_load_b32 v167, off, s33 offset:424
-; DAGISEL-NEXT: scratch_load_b32 v176, off, s33 offset:428
-; DAGISEL-NEXT: scratch_load_b32 v177, off, s33 offset:432
-; DAGISEL-NEXT: scratch_load_b32 v178, off, s33 offset:436
-; DAGISEL-NEXT: scratch_load_b32 v179, off, s33 offset:440
-; DAGISEL-NEXT: scratch_load_b32 v180, off, s33 offset:444
-; DAGISEL-NEXT: scratch_load_b32 v181, off, s33 offset:448
-; DAGISEL-NEXT: scratch_load_b32 v182, off, s33 offset:452
-; DAGISEL-NEXT: scratch_load_b32 v183, off, s33 offset:456
-; DAGISEL-NEXT: scratch_load_b32 v192, off, s33 offset:460
-; DAGISEL-NEXT: scratch_load_b32 v193, off, s33 offset:464
-; DAGISEL-NEXT: scratch_load_b32 v194, off, s33 offset:468
-; DAGISEL-NEXT: scratch_load_b32 v195, off, s33 offset:472
-; DAGISEL-NEXT: scratch_load_b32 v196, off, s33 offset:476
-; DAGISEL-NEXT: scratch_load_b32 v197, off, s33 offset:480
-; DAGISEL-NEXT: scratch_load_b32 v198, off, s33 offset:484
-; DAGISEL-NEXT: scratch_load_b32 v199, off, s33 offset:488
-; DAGISEL-NEXT: scratch_load_b32 v208, off, s33 offset:492
-; DAGISEL-NEXT: scratch_load_b32 v209, off, s33 offset:496
-; DAGISEL-NEXT: scratch_load_b32 v210, off, s33 offset:500
-; DAGISEL-NEXT: scratch_load_b32 v211, off, s33 offset:504
-; DAGISEL-NEXT: scratch_load_b32 v212, off, s33 offset:508
-; DAGISEL-NEXT: scratch_load_b32 v213, off, s33 offset:512
-; DAGISEL-NEXT: scratch_load_b32 v214, off, s33 offset:516
-; DAGISEL-NEXT: scratch_load_b32 v215, off, s33 offset:520
-; DAGISEL-NEXT: s_clause 0xf
-; DAGISEL-NEXT: scratch_load_b32 v224, off, s33 offset:524
-; DAGISEL-NEXT: scratch_load_b32 v225, off, s33 offset:528
-; DAGISEL-NEXT: scratch_load_b32 v226, off, s33 offset:532
-; DAGISEL-NEXT: scratch_load_b32 v227, off, s33 offset:536
-; DAGISEL-NEXT: scratch_load_b32 v228, off, s33 offset:540
-; DAGISEL-NEXT: scratch_load_b32 v229, off, s33 offset:544
-; DAGISEL-NEXT: scratch_load_b32 v230, off, s33 offset:548
-; DAGISEL-NEXT: scratch_load_b32 v231, off, s33 offset:552
-; DAGISEL-NEXT: scratch_load_b32 v240, off, s33 offset:556
-; DAGISEL-NEXT: scratch_load_b32 v241, off, s33 offset:560
-; DAGISEL-NEXT: scratch_load_b32 v242, off, s33 offset:564
-; DAGISEL-NEXT: scratch_load_b32 v243, off, s33 offset:568
-; DAGISEL-NEXT: scratch_load_b32 v244, off, s33 offset:572
-; DAGISEL-NEXT: scratch_load_b32 v245, off, s33 offset:576
-; DAGISEL-NEXT: scratch_load_b32 v246, off, s33 offset:580
-; DAGISEL-NEXT: scratch_load_b32 v247, off, s33 offset:584
-; DAGISEL-NEXT: s_mov_b32 exec_lo, s4
-; DAGISEL-NEXT: s_mov_b32 s33, s0
-; DAGISEL-NEXT: s_wait_loadcnt_dscnt 0x0
-; DAGISEL-NEXT: s_wait_alu 0xfffe
-; DAGISEL-NEXT: s_setpc_b64 s[30:31]
-;
-; GISEL-LABEL: call_from_whole_wave:
-; GISEL: ; %bb.0:
-; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
-; GISEL-NEXT: s_wait_expcnt 0x0
-; GISEL-NEXT: s_wait_samplecnt 0x0
-; GISEL-NEXT: s_wait_bvhcnt 0x0
-; GISEL-NEXT: s_wait_kmcnt 0x0
-; GISEL-NEXT: s_mov_b32 s0, s33
-; GISEL-NEXT: s_mov_b32 s33, s32
-; GISEL-NEXT: s_xor_saveexec_b32 s4, -1
-; GISEL-NEXT: s_clause 0x1f
-; GISEL-NEXT: scratch_store_b32 off, v0, s33 offset:4
-; GISEL-NEXT: scratch_store_b32 off, v1, s33 offset:8
-; GISEL-NEXT: scratch_store_b32 off, v2, s33 offset:12
-; GISEL-NEXT: scratch_store_b32 off, v3, s33 offset:16
-; GISEL-NEXT: scratch_store_b32 off, v4, s33 offset:20
-; GISEL-NEXT: scratch_store_b32 off, v5, s33 offset:24
-; GISEL-NEXT: scratch_store_b32 off, v6, s33 offset:28
-; GISEL-NEXT: scratch_store_b32 off, v7, s33 offset:32
-; GISEL-NEXT: scratch_store_b32 off, v8, s33 offset:36
-; GISEL-NEXT: scratch_store_b32 off, v9, s33 offset:40
-; GISEL-NEXT: scratch_store_b32 off, v10, s33 offset:44
-; GISEL-NEXT: scratch_store_b32 off, v11, s33 offset:48
-; GISEL-NEXT: scratch_store_b32 off, v12, s33 offset:52
-; GISEL-NEXT: scratch_store_b32 off, v13, s33 offset:56
-; GISEL-NEXT: scratch_store_b32 off, v14, s33 offset:60
-; GISEL-NEXT: scratch_store_b32 off, v15, s33 offset:64
-; GISEL-NEXT: scratch_store_b32 off, v16, s33 offset:68
-; GISEL-NEXT: scratch_store_b32 off, v17, s33 offset:72
-; GISEL-NEXT: scratch_store_b32 off, v18, s33 offset:76
-; GISEL-NEXT: scratch_store_b32 off, v19, s33 offset:80
-; GISEL-NEXT: scratch_store_b32 off, v20, s33 offset:84
-; GISEL-NEXT: scratch_store_b32 off, v21, s33 offset:88
-; GISEL-NEXT: scratch_store_b32 off, v22, s33 offset:92
-; GISEL-NEXT: scratch_store_b32 off, v23, s33 offset:96
-; GISEL-NEXT: scratch_store_b32 off, v24, s33 offset:100
-; GISEL-NEXT: scratch_store_b32 off, v25, s33 offset:104
-; GISEL-NEXT: scratch_store_b32 off, v26, s33 offset:108
-; GISEL-NEXT: scratch_store_b32 off, v27, s33 offset:112
-; GISEL-NEXT: scratch_store_b32 off, v28, s33 offset:116
-; GISEL-NEXT: scratch_store_b32 off, v29, s33 offset:120
-; GISEL-NEXT: scratch_store_b32 off, v30, s33 offset:124
-; GISEL-NEXT: scratch_store_b32 off, v31, s33 offset:128
-; GISEL-NEXT: s_clause 0x1f
-; GISEL-NEXT: scratch_store_b32 off, v32, s33 offset:132
-; GISEL-NEXT: scratch_store_b32 off, v33, s33 offset:136
-; GISEL-NEXT: scratch_store_b32 off, v34, s33 offset:140
-; GISEL-NEXT: scratch_store_b32 off, v35, s33 offset:144
-; GISEL-NEXT: scratch_store_b32 off, v36, s33 offset:148
-; GISEL-NEXT: scratch_store_b32 off, v37, s33 offset:152
-; GISEL-NEXT: scratch_store_b32 off, v38, s33 offset:156
-; GISEL-NEXT: scratch_store_b32 off, v39, s33 offset:160
-; GISEL-NEXT: scratch_store_b32 off, v48, s33 offset:172
-; GISEL-NEXT: scratch_store_b32 off, v49, s33 offset:176
-; GISEL-NEXT: scratch_store_b32 off, v50, s33 offset:180
-; GISEL-NEXT: scratch_store_b32 off, v51, s33 offset:184
-; GISEL-NEXT: scratch_store_b32 off, v52, s33 offset:188
-; GISEL-NEXT: scratch_store_b32 off, v53, s33 offset:192
-; GISEL-NEXT: scratch_store_b32 off, v54, s33 offset:196
-; GISEL-NEXT: scratch_store_b32 off, v55, s33 offset:200
-; GISEL-NEXT: scratch_store_b32 off, v64, s33 offset:204
-; GISEL-NEXT: scratch_store_b32 off, v65, s33 offset:208
-; GISEL-NEXT: scratch_store_b32 off, v66, s33 offset:212
-; GISEL-NEXT: scratch_store_b32 off, v67, s33 offset:216
-; GISEL-NEXT: scratch_store_b32 off, v68, s33 offset:220
-; GISEL-NEXT: scratch_store_b32 off, v69, s33 offset:224
-; GISEL-NEXT: scratch_store_b32 off, v70, s33 offset:228
-; GISEL-NEXT: scratch_store_b32 off, v71, s33 offset:232
-; GISEL-NEXT: scratch_store_b32 off, v80, s33 offset:236
-; GISEL-NEXT: scratch_store_b32 off, v81, s33 offset:240
-; GISEL-NEXT: scratch_store_b32 off, v82, s33 offset:244
-; GISEL-NEXT: scratch_store_b32 off, v83, s33 offset:248
-; GISEL-NEXT: scratch_store_b32 off, v84, s33 offset:252
-; GISEL-NEXT: scratch_store_b32 off, v85, s33 offset:256
-; GISEL-NEXT: scratch_store_b32 off, v86, s33 offset:260
-; GISEL-NEXT: scratch_store_b32 off, v87, s33 offset:264
-; GISEL-NEXT: s_clause 0x1f
-; GISEL-NEXT: scratch_store_b32 off, v96, s33 offset:268
-; GISEL-NEXT: scratch_store_b32 off, v97, s33 offset:272
-; GISEL-NEXT: scratch_store_b32 off, v98, s33 offset:276
-; GISEL-NEXT: scratch_store_b32 off, v99, s33 offset:280
-; GISEL-NEXT: scratch_store_b32 off, v100, s33 offset:284
-; GISEL-NEXT: scratch_store_b32 off, v101, s33 offset:288
-; GISEL-NEXT: scratch_store_b32 off, v102, s33 offset:292
-; GISEL-NEXT: scratch_store_b32 off, v103, s33 offset:296
-; GISEL-NEXT: scratch_store_b32 off, v112, s33 offset:300
-; GISEL-NEXT: scratch_store_b32 off, v113, s33 offset:304
-; GISEL-NEXT: scratch_store_b32 off, v114, s33 offset:308
-; GISEL-NEXT: scratch_store_b32 off, v115, s33 offset:312
-; GISEL-NEXT: scratch_store_b32 off, v116, s33 offset:316
-; GISEL-NEXT: scratch_store_b32 off, v117, s33 offset:320
-; GISEL-NEXT: scratch_store_b32 off, v118, s33 offset:324
-; GISEL-NEXT: scratch_store_b32 off, v119, s33 offset:328
-; GISEL-NEXT: scratch_store_b32 off, v128, s33 offset:332
-; GISEL-NEXT: scratch_store_b32 off, v129, s33 offset:336
-; GISEL-NEXT: scratch_store_b32 off, v130, s33 offset:340
-; GISEL-NEXT: scratch_store_b32 off, v131, s33 offset:344
-; GISEL-NEXT: scratch_store_b32 off, v132, s33 offset:348
-; GISEL-NEXT: scratch_store_b32 off, v133, s33 offset:352
-; GISEL-NEXT: scratch_store_b32 off, v134, s33 offset:356
-; GISEL-NEXT: scratch_store_b32 off, v135, s33 offset:360
-; GISEL-NEXT: scratch_store_b32 off, v144, s33 offset:364
-; GISEL-NEXT: scratch_store_b32 off, v145, s33 offset:368
-; GISEL-NEXT: scratch_store_b32 off, v146, s33 offset:372
-; GISEL-NEXT: scratch_store_b32 off, v147, s33 offset:376
-; GISEL-NEXT: scratch_store_b32 off, v148, s33 offset:380
-; GISEL-NEXT: scratch_store_b32 off, v149, s33 offset:384
-; GISEL-NEXT: scratch_store_b32 off, v150, s33 offset:388
-; GISEL-NEXT: scratch_store_b32 off, v151, s33 offset:392
-; GISEL-NEXT: s_clause 0x1f
-; GISEL-NEXT: scratch_store_b32 off, v160, s33 offset:396
-; GISEL-NEXT: scratch_store_b32 off, v161, s33 offset:400
-; GISEL-NEXT: scratch_store_b32 off, v162, s33 offset:404
-; GISEL-NEXT: scratch_store_b32 off, v163, s33 offset:408
-; GISEL-NEXT: scratch_store_b32 off, v164, s33 offset:412
-; GISEL-NEXT: scratch_store_b32 off, v165, s33 offset:416
-; GISEL-NEXT: scratch_store_b32 off, v166, s33 offset:420
-; GISEL-NEXT: scratch_store_b32 off, v167, s33 offset:424
-; GISEL-NEXT: scratch_store_b32 off, v176, s33 offset:428
-; GISEL-NEXT: scratch_store_b32 off, v177, s33 offset:432
-; GISEL-NEXT: scratch_store_b32 off, v178, s33 offset:436
-; GISEL-NEXT: scratch_store_b32 off, v179, s33 offset:440
-; GISEL-NEXT: scratch_store_b32 off, v180, s33 offset:444
-; GISEL-NEXT: scratch_store_b32 off, v181, s33 offset:448
-; GISEL-NEXT: scratch_store_b32 off, v182, s33 offset:452
-; GISEL-NEXT: scratch_store_b32 off, v183, s33 offset:456
-; GISEL-NEXT: scratch_store_b32 off, v192, s33 offset:460
-; GISEL-NEXT: scratch_store_b32 off, v193, s33 offset:464
-; GISEL-NEXT: scratch_store_b32 off, v194, s33 offset:468
-; GISEL-NEXT: scratch_store_b32 off, v195, s33 offset:472
-; GISEL-NEXT: scratch_store_b32 off, v196, s33 offset:476
-; GISEL-NEXT: scratch_store_b32 off, v197, s33 offset:480
-; GISEL-NEXT: scratch_store_b32 off, v198, s33 offset:484
-; GISEL-NEXT: scratch_store_b32 off, v199, s33 offset:488
-; GISEL-NEXT: scratch_store_b32 off, v208, s33 offset:492
-; GISEL-NEXT: scratch_store_b32 off, v209, s33 offset:496
-; GISEL-NEXT: scratch_store_b32 off, v210, s33 offset:500
-; GISEL-NEXT: scratch_store_b32 off, v211, s33 offset:504
-; GISEL-NEXT: scratch_store_b32 off, v212, s33 offset:508
-; GISEL-NEXT: scratch_store_b32 off, v213, s33 offset:512
-; GISEL-NEXT: scratch_store_b32 off, v214, s33 offset:516
-; GISEL-NEXT: scratch_store_b32 off, v215, s33 offset:520
-; GISEL-NEXT: s_clause 0xf
-; GISEL-NEXT: scratch_store_b32 off, v224, s33 offset:524
-; GISEL-NEXT: scratch_store_b32 off, v225, s33 offset:528
-; GISEL-NEXT: scratch_store_b32 off, v226, s33 offset:532
-; GISEL-NEXT: scratch_store_b32 off, v227, s33 offset:536
-; GISEL-NEXT: scratch_store_b32 off, v228, s33 offset:540
-; GISEL-NEXT: scratch_store_b32 off, v229, s33 offset:544
-; GISEL-NEXT: scratch_store_b32 off, v230, s33 offset:548
-; GISEL-NEXT: scratch_store_b32 off, v231, s33 offset:552
-; GISEL-NEXT: scratch_store_b32 off, v240, s33 offset:556
-; GISEL-NEXT: scratch_store_b32 off, v241, s33 offset:560
-; GISEL-NEXT: scratch_store_b32 off, v242, s33 offset:564
-; GISEL-NEXT: scratch_store_b32 off, v243, s33 offset:568
-; GISEL-NEXT: scratch_store_b32 off, v244, s33 offset:572
-; GISEL-NEXT: scratch_store_b32 off, v245, s33 offset:576
-; GISEL-NEXT: scratch_store_b32 off, v246, s33 offset:580
-; GISEL-NEXT: scratch_store_b32 off, v247, s33 offset:584
-; GISEL-NEXT: s_mov_b32 exec_lo, -1
-; GISEL-NEXT: s_clause 0x2
-; GISEL-NEXT: scratch_store_b32 off, v42, s33
-; GISEL-NEXT: scratch_store_b32 off, v40, s33 offset:164
-; GISEL-NEXT: scratch_store_b32 off, v41, s33 offset:168
-; GISEL-NEXT: s_wait_alu 0xfffe
-; GISEL-NEXT: v_writelane_b32 v42, s0, 3
- ; GISEL-NEXT: s_mov_b32 s0, callee@abs32@lo
- ; GISEL-NEXT: s_mov_b32 s1, callee@abs32@hi
-; GISEL-NEXT: s_addk_co_i32 s32, 0x250
-; GISEL-NEXT: v_dual_mov_b32 v40, v8 :: v_dual_mov_b32 v41, v9
-; GISEL-NEXT: v_writelane_b32 v42, s4, 0
-; GISEL-NEXT: v_writelane_b32 v42, s30, 1
-; GISEL-NEXT: v_writelane_b32 v42, s31, 2
-; GISEL-NEXT: s_wait_alu 0xfffe
-; GISEL-NEXT: s_swappc_b64 s[30:31], s[0:1]
-; GISEL-NEXT: flat_store_b32 v[40:41], v0
-; GISEL-NEXT: v_readlane_b32 s31, v42, 2
-; GISEL-NEXT: v_readlane_b32 s30, v42, 1
-; GISEL-NEXT: v_readlane_b32 s4, v42, 0
-; GISEL-NEXT: v_readlane_b32 s0, v42, 3
-; GISEL-NEXT: s_clause 0x2
-; GISEL-NEXT: scratch_load_b32 v42, off, s33
-; GISEL-NEXT: scratch_load_b32 v40, off, s33 offset:164
-; GISEL-NEXT: scratch_load_b32 v41, off, s33 offset:168
-; GISEL-NEXT: s_mov_b32 s32, s33
-; GISEL-NEXT: s_xor_b32 exec_lo, s4, -1
-; GISEL-NEXT: s_clause 0x1f
-; GISEL-NEXT: scratch_load_b32 v0, off, s33 offset:4
-; GISEL-NEXT: scratch_load_b32 v1, off, s33 offset:8
-; GISEL-NEXT: scratch_load_b32 v2, off, s33 offset:12
-; GISEL-NEXT: scratch_load_b32 v3, off, s33 offset:16
-; GISEL-NEXT: scratch_load_b32 v4, off, s33 offset:20
-; GISEL-NEXT: scratch_load_b32 v5, off, s33 offset:24
-; GISEL-NEXT: scratch_load_b32 v6, off, s33 offset:28
-; GISEL-NEXT: scratch_load_b32 v7, off, s33 offset:32
-; GISEL-NEXT: scratch_load_b32 v8, off, s33 offset:36
-; GISEL-NEXT: scratch_load_b32 v9, off, s33 offset:40
-; GISEL-NEXT: scratch_load_b32 v10, off, s33 offset:44
-; GISEL-NEXT: scratch_load_b32 v11, off, s33 offset:48
-; GISEL-NEXT: scratch_load_b32 v12, off, s33 offset:52
-; GISEL-NEXT: scratch_load_b32 v13, off, s33 offset:56
-; GISEL-NEXT: scratch_load_b32 v14, off, s33 offset:60
-; GISEL-NEXT: scratch_load_b32 v15, off, s33 offset:64
-; GISEL-NEXT: scratch_load_b32 v16, off, s33 offset:68
-; GISEL-NEXT: scratch_load_b32 v17, off, s33 offset:72
-; GISEL-NEXT: scratch_load_b32 v18, off, s33 offset:76
-; GISEL-NEXT: scratch_load_b32 v19, off, s33 offset:80
-; GISEL-NEXT: scratch_load_b32 v20, off, s33 offset:84
-; GISEL-NEXT: scratch_load_b32 v21, off, s33 offset:88
-; GISEL-NEXT: scratch_load_b32 v22, off, s33 offset:92
-; GISEL-NEXT: scratch_load_b32 v23, off, s33 offset:96
-; GISEL-NEXT: scratch_load_b32 v24, off, s33 offset:100
-; GISEL-NEXT: scratch_load_b32 v25, off, s33 offset:104
-; GISEL-NEXT: scratch_load_b32 v26, off, s33 offset:108
-; GISEL-NEXT: scratch_load_b32 v27, off, s33 offset:112
-; GISEL-NEXT: scratch_load_b32 v28, off, s33 offset:116
-; GISEL-NEXT: scratch_load_b32 v29, off, s33 offset:120
-; GISEL-NEXT: scratch_load_b32 v30, off, s33 offset:124
-; GISEL-NEXT: scratch_load_b32 v31, off, s33 offset:128
-; GISEL-NEXT: s_clause 0x1f
-; GISEL-NEXT: scratch_load_b32 v32, off, s33 offset:132
-; GISEL-NEXT: scratch_load_b32 v33, off, s33 offset:136
-; GISEL-NEXT: scratch_load_b32 v34, off, s33 offset:140
-; GISEL-NEXT: scratch_load_b32 v35, off, s33 offset:144
-; GISEL-NEXT: scratch_load_b32 v36, off, s33 offset:148
-; GISEL-NEXT: scratch_load_b32 v37, off, s33 offset:152
-; GISEL-NEXT: scratch_load_b32 v38, off, s33 offset:156
-; GISEL-NEXT: scratch_load_b32 v39, off, s33 offset:160
-; GISEL-NEXT: scratch_load_b32 v48, off, s33 offset:172
-; GISEL-NEXT: scratch_load_b32 v49, off, s33 offset:176
-; GISEL-NEXT: scratch_load_b32 v50, off, s33 offset:180
-; GISEL-NEXT: scratch_load_b32 v51, off, s33 offset:184
-; GISEL-NEXT: scratch_load_b32 v52, off, s33 offset:188
-; GISEL-NEXT: scratch_load_b32 v53, off, s33 offset:192
-; GISEL-NEXT: scratch_load_b32 v54, off, s33 offset:196
-; GISEL-NEXT: scratch_load_b32 v55, off, s33 offset:200
-; GISEL-NEXT: scratch_load_b32 v64, off, s33 offset:204
-; GISEL-NEXT: scratch_load_b32 v65, off, s33 offset:208
-; GISEL-NEXT: scratch_load_b32 v66, off, s33 offset:212
-; GISEL-NEXT: scratch_load_b32 v67, off, s33 offset:216
-; GISEL-NEXT: scratch_load_b32 v68, off, s33 offset:220
-; GISEL-NEXT: scratch_load_b32 v69, off, s33 offset:224
-; GISEL-NEXT: scratch_load_b32 v70, off, s33 offset:228
-; GISEL-NEXT: scratch_load_b32 v71, off, s33 offset:232
-; GISEL-NEXT: scratch_load_b32 v80, off, s33 offset:236
-; GISEL-NEXT: scratch_load_b32 v81, off, s33 offset:240
-; GISEL-NEXT: scratch_load_b32 v82, off, s33 offset:244
-; GISEL-NEXT: scratch_load_b32 v83, off, s33 offset:248
-; GISEL-NEXT: scratch_load_b32 v84, off, s33 offset:252
-; GISEL-NEXT: scratch_load_b32 v85, off, s33 offset:256
-; GISEL-NEXT: scratch_load_b32 v86, off, s33 offset:260
-; GISEL-NEXT: scratch_load_b32 v87, off, s33 offset:264
-; GISEL-NEXT: s_clause 0x1f
-; GISEL-NEXT: scratch_load_b32 v96, off, s33 offset:268
-; GISEL-NEXT: scratch_load_b32 v97, off, s33 offset:272
-; GISEL-NEXT: scratch_load_b32 v98, off, s33 offset:276
-; GISEL-NEXT: scratch_load_b32 v99, off, s33 offset:280
-; GISEL-NEXT: scratch_load_b32 v100, off, s33 offset:284
-; GISEL-NEXT: scratch_load_b32 v101, off, s33 offset:288
-; GISEL-NEXT: scratch_load_b32 v102, off, s33 offset:292
-; GISEL-NEXT: scratch_load_b32 v103, off, s33 offset:296
-; GISEL-NEXT: scratch_load_b32 v112, off, s33 offset:300
-; GISEL-NEXT: scratch_load_b32 v113, off, s33 offset:304
-; GISEL-NEXT: scratch_load_b32 v114, off, s33 offset:308
-; GISEL-NEXT: scratch_load_b32 v115, off, s33 offset:312
-; GISEL-NEXT: scratch_load_b32 v116, off, s33 offset:316
-; GISEL-NEXT: scratch_load_b32 v117, off, s33 offset:320
-; GISEL-NEXT: scratch_load_b32 v118, off, s33 offset:324
-; GISEL-NEXT: scratch_load_b32 v119, off, s33 offset:328
-; GISEL-NEXT: scratch_load_b32 v128, off, s33 offset:332
-; GISEL-NEXT: scratch_load_b32 v129, off, s33 offset:336
-; GISEL-NEXT: scratch_load_b32 v130, off, s33 offset:340
-; GISEL-NEXT: scratch_load_b32 v131, off, s33 offset:344
-; GISEL-NEXT: scratch_load_b32 v132, off, s33 offset:348
-; GISEL-NEXT: scratch_load_b32 v133, off, s33 offset:352
-; GISEL-NEXT: scratch_load_b32 v134, off, s33 offset:356
-; GISEL-NEXT: scratch_load_b32 v135, off, s33 offset:360
-; GISEL-NEXT: scratch_load_b32 v144, off, s33 offset:364
-; GISEL-NEXT: scratch_load_b32 v145, off, s33 offset:368
-; GISEL-NEXT: scratch_load_b32 v146, off, s33 offset:372
-; GISEL-NEXT: scratch_load_b32 v147, off, s33 offset:376
-; GISEL-NEXT: scratch_load_b32 v148, off, s33 offset:380
-; GISEL-NEXT: scratch_load_b32 v149, off, s33 offset:384
-; GISEL-NEXT: scratch_load_b32 v150, off, s33 offset:388
-; GISEL-NEXT: scratch_load_b32 v151, off, s33 offset:392
-; GISEL-NEXT: s_clause 0x1f
-; GISEL-NEXT: scratch_load_b32 v160, off, s33 offset:396
-; GISEL-NEXT: scratch_load_b32 v161, off, s33 offset:400
-; GISEL-NEXT: scratch_load_b32 v162, off, s33 offset:404
-; GISEL-NEXT: scratch_load_b32 v163, off, s33 offset:408
-; GISEL-NEXT: scratch_load_b32 v164, off, s33 offset:412
-; GISEL-NEXT: scratch_load_b32 v165, off, s33 offset:416
-; GISEL-NEXT: scratch_load_b32 v166, off, s33 offset:420
-; GISEL-NEXT: scratch_load_b32 v167, off, s33 offset:424
-; GISEL-NEXT: scratch_load_b32 v176, off, s33 offset:428
-; GISEL-NEXT: scratch_load_b32 v177, off, s33 offset:432
-; GISEL-NEXT: scratch_load_b32 v178, off, s33 offset:436
-; GISEL-NEXT: scratch_load_b32 v179, off, s33 offset:440
-; GISEL-NEXT: scratch_load_b32 v180, off, s33 offset:444
-; GISEL-NEXT: scratch_load_b32 v181, off, s33 offset:448
-; GISEL-NEXT: scratch_load_b32 v182, off, s33 offset:452
-; GISEL-NEXT: scratch_load_b32 v183, off, s33 offset:456
-; GISEL-NEXT: scratch_load_b32 v192, off, s33 offset:460
-; GISEL-NEXT: scratch_load_b32 v193, off, s33 offset:464
-; GISEL-NEXT: scratch_load_b32 v194, off, s33 offset:468
-; GISEL-NEXT: scratch_load_b32 v195, off, s33 offset:472
-; GISEL-NEXT: scratch_load_b32 v196, off, s33 offset:476
-; GISEL-NEXT: scratch_load_b32 v197, off, s33 offset:480
-; GISEL-NEXT: scratch_load_b32 v198, off, s33 offset:484
-; GISEL-NEXT: scratch_load_b32 v199, off, s33 offset:488
-; GISEL-NEXT: scratch_load_b32 v208, off, s33 offset:492
-; GISEL-NEXT: scratch_load_b32 v209, off, s33 offset:496
-; GISEL-NEXT: scratch_load_b32 v210, off, s33 offset:500
-; GISEL-NEXT: scratch_load_b32 v211, off, s33 offset:504
-; GISEL-NEXT: scratch_load_b32 v212, off, s33 offset:508
-; GISEL-NEXT: scratch_load_b32 v213, off, s33 offset:512
-; GISEL-NEXT: scratch_load_b32 v214, off, s33 offset:516
-; GISEL-NEXT: scratch_load_b32 v215, off, s33 offset:520
-; GISEL-NEXT: s_clause 0xf
-; GISEL-NEXT: scratch_load_b32 v224, off, s33 offset:524
-; GISEL-NEXT: scratch_load_b32 v225, off, s33 offset:528
-; GISEL-NEXT: scratch_load_b32 v226, off, s33 offset:532
-; GISEL-NEXT: scratch_load_b32 v227, off, s33 offset:536
-; GISEL-NEXT: scratch_load_b32 v228, off, s33 offset:540
-; GISEL-NEXT: scratch_load_b32 v229, off, s33 offset:544
-; GISEL-NEXT: scratch_load_b32 v230, off, s33 offset:548
-; GISEL-NEXT: scratch_load_b32 v231, off, s33 offset:552
-; GISEL-NEXT: scratch_load_b32 v240, off, s33 offset:556
-; GISEL-NEXT: scratch_load_b32 v241, off, s33 offset:560
-; GISEL-NEXT: scratch_load_b32 v242, off, s33 offset:564
-; GISEL-NEXT: scratch_load_b32 v243, off, s33 offset:568
-; GISEL-NEXT: scratch_load_b32 v244, off, s33 offset:572
-; GISEL-NEXT: scratch_load_b32 v245, off, s33 offset:576
-; GISEL-NEXT: scratch_load_b32 v246, off, s33 offset:580
-; GISEL-NEXT: scratch_load_b32 v247, off, s33 offset:584
-; GISEL-NEXT: s_mov_b32 exec_lo, s4
-; GISEL-NEXT: s_mov_b32 s33, s0
-; GISEL-NEXT: s_wait_loadcnt_dscnt 0x0
-; GISEL-NEXT: s_wait_alu 0xfffe
-; GISEL-NEXT: s_setpc_b64 s[30:31]
-;
-; DAGISEL64-LABEL: call_from_whole_wave:
-; DAGISEL64: ; %bb.0:
-; DAGISEL64-NEXT: s_wait_loadcnt_dscnt 0x0
-; DAGISEL64-NEXT: s_wait_expcnt 0x0
-; DAGISEL64-NEXT: s_wait_samplecnt 0x0
-; DAGISEL64-NEXT: s_wait_bvhcnt 0x0
-; DAGISEL64-NEXT: s_wait_kmcnt 0x0
-; DAGISEL64-NEXT: s_mov_b32 s0, s33
-; DAGISEL64-NEXT: s_mov_b32 s33, s32
-; DAGISEL64-NEXT: s_xor_saveexec_b64 s[4:5], -1
-; DAGISEL64-NEXT: s_clause 0x1f
-; DAGISEL64-NEXT: scratch_store_b32 off, v0, s33 offset:4
-; DAGISEL64-NEXT: scratch_store_b32 off, v1, s33 offset:8
-; DAGISEL64-NEXT: scratch_store_b32 off, v2, s33 offset:12
-; DAGISEL64-NEXT: scratch_store_b32 off, v3, s33 offset:16
-; DAGISEL64-NEXT: scratch_store_b32 off, v4, s33 offset:20
-; DAGISEL64-NEXT: scratch_store_b32 off, v5, s33 offset:24
-; DAGISEL64-NEXT: scratch_store_b32 off, v6, s33 offset:28
-; DAGISEL64-NEXT: scratch_store_b32 off, v7, s33 offset:32
-; DAGISEL64-NEXT: scratch_store_b32 off, v8, s33 offset:36
-; DAGISEL64-NEXT: scratch_store_b32 off, v9, s33 offset:40
-; DAGISEL64-NEXT: scratch_store_b32 off, v10, s33 offset:44
-; DAGISEL64-NEXT: scratch_store_b32 off, v11, s33 offset:48
-; DAGISEL64-NEXT: scratch_store_b32 off, v12, s33 offset:52
-; DAGISEL64-NEXT: scratch_store_b32 off, v13, s33 offset:56
-; DAGISEL64-NEXT: scratch_store_b32 off, v14, s33 offset:60
-; DAGISEL64-NEXT: scratch_store_b32 off, v15, s33 offset:64
-; DAGISEL64-NEXT: scratch_store_b32 off, v16, s33 offset:68
-; DAGISEL64-NEXT: scratch_store_b32 off, v17, s33 offset:72
-; DAGISEL64-NEXT: scratch_store_b32 off, v18, s33 offset:76
-; DAGISEL64-NEXT: scratch_store_b32 off, v19, s33 offset:80
-; DAGISEL64-NEXT: scratch_store_b32 off, v20, s33 offset:84
-; DAGISEL64-NEXT: scratch_store_b32 off, v21, s33 offset:88
-; DAGISEL64-NEXT: scratch_store_b32 off, v22, s33 offset:92
-; DAGISEL64-NEXT: scratch_store_b32 off, v23, s33 offset:96
-; DAGISEL64-NEXT: scratch_store_b32 off, v24, s33 offset:100
-; DAGISEL64-NEXT: scratch_store_b32 off, v25, s33 offset:104
-; DAGISEL64-NEXT: scratch_store_b32 off, v26, s33 offset:108
-; DAGISEL64-NEXT: scratch_store_b32 off, v27, s33 offset:112
-; DAGISEL64-NEXT: scratch_store_b32 off, v28, s33 offset:116
-; DAGISEL64-NEXT: scratch_store_b32 off, v29, s33 offset:120
-; DAGISEL64-NEXT: scratch_store_b32 off, v30, s33 offset:124
-; DAGISEL64-NEXT: scratch_store_b32 off, v31, s33 offset:128
-; DAGISEL64-NEXT: s_clause 0x1f
-; DAGISEL64-NEXT: scratch_store_b32 off, v32, s33 offset:132
-; DAGISEL64-NEXT: scratch_store_b32 off, v33, s33 offset:136
-; DAGISEL64-NEXT: scratch_store_b32 off, v34, s33 offset:140
-; DAGISEL64-NEXT: scratch_store_b32 off, v35, s33 offset:144
-; DAGISEL64-NEXT: scratch_store_b32 off, v36, s33 offset:148
-; DAGISEL64-NEXT: scratch_store_b32 off, v37, s33 offset:152
-; DAGISEL64-NEXT: scratch_store_b32 off, v38, s33 offset:156
-; DAGISEL64-NEXT: scratch_store_b32 off, v39, s33 offset:160
-; DAGISEL64-NEXT: scratch_store_b32 off, v48, s33 offset:172
-; DAGISEL64-NEXT: scratch_store_b32 off, v49, s33 offset:176
-; DAGISEL64-NEXT: scratch_store_b32 off, v50, s33 offset:180
-; DAGISEL64-NEXT: scratch_store_b32 off, v51, s33 offset:184
-; DAGISEL64-NEXT: scratch_store_b32 off, v52, s33 offset:188
-; DAGISEL64-NEXT: scratch_store_b32 off, v53, s33 offset:192
-; DAGISEL64-NEXT: scratch_store_b32 off, v54, s33 offset:196
-; DAGISEL64-NEXT: scratch_store_b32 off, v55, s33 offset:200
-; DAGISEL64-NEXT: scratch_store_b32 off, v64, s33 offset:204
-; DAGISEL64-NEXT: scratch_store_b32 off, v65, s33 offset:208
-; DAGISEL64-NEXT: scratch_store_b32 off, v66, s33 offset:212
-; DAGISEL64-NEXT: scratch_store_b32 off, v67, s33 offset:216
-; DAGISEL64-NEXT: scratch_store_b32 off, v68, s33 offset:220
-; DAGISEL64-NEXT: scratch_store_b32 off, v69, s33 offset:224
-; DAGISEL64-NEXT: scratch_store_b32 off, v70, s33 offset:228
-; DAGISEL64-NEXT: scratch_store_b32 off, v71, s33 offset:232
-; DAGISEL64-NEXT: scratch_store_b32 off, v80, s33 offset:236
-; DAGISEL64-NEXT: scratch_store_b32 off, v81, s33 offset:240
-; DAGISEL64-NEXT: scratch_store_b32 off, v82, s33 offset:244
-; DAGISEL64-NEXT: scratch_store_b32 off, v83, s33 offset:248
-; DAGISEL64-NEXT: scratch_store_b32 off, v84, s33 offset:252
-; DAGISEL64-NEXT: scratch_store_b32 off, v85, s33 offset:256
-; DAGISEL64-NEXT: scratch_store_b32 off, v86, s33 offset:260
-; DAGISEL64-NEXT: scratch_store_b32 off, v87, s33 offset:264
-; DAGISEL64-NEXT: s_clause 0x1f
-; DAGISEL64-NEXT: scratch_store_b32 off, v96, s33 offset:268
-; DAGISEL64-NEXT: scratch_store_b32 off, v97, s33 offset:272
-; DAGISEL64-NEXT: scratch_store_b32 off, v98, s33 offset:276
-; DAGISEL64-NEXT: scratch_store_b32 off, v99, s33 offset:280
-; DAGISEL64-NEXT: scratch_store_b32 off, v100, s33 offset:284
-; DAGISEL64-NEXT: scratch_store_b32 off, v101, s33 offset:288
-; DAGISEL64-NEXT: scratch_store_b32 off, v102, s33 offset:292
-; DAGISEL64-NEXT: scratch_store_b32 off, v103, s33 offset:296
-; DAGISEL64-NEXT: scratch_store_b32 off, v112, s33 offset:300
-; DAGISEL64-NEXT: scratch_store_b32 off, v113, s33 offset:304
-; DAGISEL64-NEXT: scratch_store_b32 off, v114, s33 offset:308
-; DAGISEL64-NEXT: scratch_store_b32 off, v115, s33 offset:312
-; DAGISEL64-NEXT: scratch_store_b32 off, v116, s33 offset:316
-; DAGISEL64-NEXT: scratch_store_b32 off, v117, s33 offset:320
-; DAGISEL64-NEXT: scratch_store_b32 off, v118, s33 offset:324
-; DAGISEL64-NEXT: scratch_store_b32 off, v119, s33 offset:328
-; DAGISEL64-NEXT: scratch_store_b32 off, v128, s33 offset:332
-; DAGISEL64-NEXT: scratch_store_b32 off, v129, s33 offset:336
-; DAGISEL64-NEXT: scratch_store_b32 off, v130, s33 offset:340
-; DAGISEL64-NEXT: scratch_store_b32 off, v131, s33 offset:344
-; DAGISEL64-NEXT: scratch_store_b32 off, v132, s33 offset:348
-; DAGISEL64-NEXT: scratch_store_b32 off, v133, s33 offset:352
-; DAGISEL64-NEXT: scratch_store_b32 off, v134, s33 offset:356
-; DAGISEL64-NEXT: scratch_store_b32 off, v135, s33 offset:360
-; DAGISEL64-NEXT: scratch_store_b32 off, v144, s33 offset:364
-; DAGISEL64-NEXT: scratch_store_b32 off, v145, s33 offset:368
-; DAGISEL64-NEXT: scratch_store_b32 off, v146, s33 offset:372
-; DAGISEL64-NEXT: scratch_store_b32 off, v147, s33 offset:376
-; DAGISEL64-NEXT: scratch_store_b32 off, v148, s33 offset:380
-; DAGISEL64-NEXT: scratch_store_b32 off, v149, s33 offset:384
-; DAGISEL64-NEXT: scratch_store_b32 off, v150, s33 offset:388
-; DAGISEL64-NEXT: scratch_store_b32 off, v151, s33 offset:392
-; DAGISEL64-NEXT: s_clause 0x1f
-; DAGISEL64-NEXT: scratch_store_b32 off, v160, s33 offset:396
-; DAGISEL64-NEXT: scratch_store_b32 off, v161, s33 offset:400
-; DAGISEL64-NEXT: scratch_store_b32 off, v162, s33 offset:404
-; DAGISEL64-NEXT: scratch_store_b32 off, v163, s33 offset:408
-; DAGISEL64-NEXT: scratch_store_b32 off, v164, s33 offset:412
-; DAGISEL64-NEXT: scratch_store_b32 off, v165, s33 offset:416
-; DAGISEL64-NEXT: scratch_store_b32 off, v166, s33 offset:420
-; DAGISEL64-NEXT: scratch_store_b32 off, v167, s33 offset:424
-; DAGISEL64-NEXT: scratch_store_b32 off, v176, s33 offset:428
-; DAGISEL64-NEXT: scratch_store_b32 off, v177, s33 offset:432
-; DAGISEL64-NEXT: scratch_store_b32 off, v178, s33 offset:436
-; DAGISEL64-NEXT: scratch_store_b32 off, v179, s33 offset:440
-; DAGISEL64-NEXT: scratch_store_b32 off, v180, s33 offset:444
-; DAGISEL64-NEXT: scratch_store_b32 off, v181, s33 offset:448
-; DAGISEL64-NEXT: scratch_store_b32 off, v182, s33 offset:452
-; DAGISEL64-NEXT: scratch_store_b32 off, v183, s33 offset:456
-; DAGISEL64-NEXT: scratch_store_b32 off, v192, s33 offset:460
-; DAGISEL64-NEXT: scratch_store_b32 off, v193, s33 offset:464
-; DAGISEL64-NEXT: scratch_store_b32 off, v194, s33 offset:468
-; DAGISEL64-NEXT: scratch_store_b32 off, v195, s33 offset:472
-; DAGISEL64-NEXT: scratch_store_b32 off, v196, s33 offset:476
-; DAGISEL64-NEXT: scratch_store_b32 off, v197, s33 offset:480
-; DAGISEL64-NEXT: scratch_store_b32 off, v198, s33 offset:484
-; DAGISEL64-NEXT: scratch_store_b32 off, v199, s33 offset:488
-; DAGISEL64-NEXT: scratch_store_b32 off, v208, s33 offset:492
-; DAGISEL64-NEXT: scratch_store_b32 off, v209, s33 offset:496
-; DAGISEL64-NEXT: scratch_store_b32 off, v210, s33 offset:500
-; DAGISEL64-NEXT: scratch_store_b32 off, v211, s33 offset:504
-; DAGISEL64-NEXT: scratch_store_b32 off, v212, s33 offset:508
-; DAGISEL64-NEXT: scratch_store_b32 off, v213, s33 offset:512
-; DAGISEL64-NEXT: scratch_store_b32 off, v214, s33 offset:516
-; DAGISEL64-NEXT: scratch_store_b32 off, v215, s33 offset:520
-; DAGISEL64-NEXT: s_clause 0xf
-; DAGISEL64-NEXT: scratch_store_b32 off, v224, s33 offset:524
-; DAGISEL64-NEXT: scratch_store_b32 off, v225, s33 offset:528
-; DAGISEL64-NEXT: scratch_store_b32 off, v226, s33 offset:532
-; DAGISEL64-NEXT: scratch_store_b32 off, v227, s33 offset:536
-; DAGISEL64-NEXT: scratch_store_b32 off, v228, s33 offset:540
-; DAGISEL64-NEXT: scratch_store_b32 off, v229, s33 offset:544
-; DAGISEL64-NEXT: scratch_store_b32 off, v230, s33 offset:548
-; DAGISEL64-NEXT: scratch_store_b32 off, v231, s33 offset:552
-; DAGISEL64-NEXT: scratch_store_b32 off, v240, s33 offset:556
-; DAGISEL64-NEXT: scratch_store_b32 off, v241, s33 offset:560
-; DAGISEL64-NEXT: scratch_store_b32 off, v242, s33 offset:564
-; DAGISEL64-NEXT: scratch_store_b32 off, v243, s33 offset:568
-; DAGISEL64-NEXT: scratch_store_b32 off, v244, s33 offset:572
-; DAGISEL64-NEXT: scratch_store_b32 off, v245, s33 offset:576
-; DAGISEL64-NEXT: scratch_store_b32 off, v246, s33 offset:580
-; DAGISEL64-NEXT: scratch_store_b32 off, v247, s33 offset:584
-; DAGISEL64-NEXT: s_mov_b64 exec, -1
-; DAGISEL64-NEXT: s_clause 0x2
-; DAGISEL64-NEXT: scratch_store_b32 off, v42, s33
-; DAGISEL64-NEXT: scratch_store_b32 off, v40, s33 offset:164
-; DAGISEL64-NEXT: scratch_store_b32 off, v41, s33 offset:168
-; DAGISEL64-NEXT: s_wait_alu 0xfffe
-; DAGISEL64-NEXT: v_writelane_b32 v42, s0, 4
- ; DAGISEL64-NEXT: s_mov_b32 s1, callee@abs32@hi
- ; DAGISEL64-NEXT: s_mov_b32 s0, callee@abs32@lo
-; DAGISEL64-NEXT: s_addk_co_i32 s32, 0x250
-; DAGISEL64-NEXT: v_mov_b32_e32 v41, v9
-; DAGISEL64-NEXT: v_writelane_b32 v42, s4, 0
-; DAGISEL64-NEXT: v_mov_b32_e32 v40, v8
-; DAGISEL64-NEXT: v_writelane_b32 v42, s5, 1
-; DAGISEL64-NEXT: v_writelane_b32 v42, s30, 2
-; DAGISEL64-NEXT: v_writelane_b32 v42, s31, 3
-; DAGISEL64-NEXT: s_wait_alu 0xfffe
-; DAGISEL64-NEXT: s_swappc_b64 s[30:31], s[0:1]
-; DAGISEL64-NEXT: flat_store_b32 v[40:41], v0
-; DAGISEL64-NEXT: v_readlane_b32 s31, v42, 3
-; DAGISEL64-NEXT: v_readlane_b32 s30, v42, 2
-; DAGISEL64-NEXT: v_readlane_b32 s5, v42, 1
-; DAGISEL64-NEXT: v_readlane_b32 s4, v42, 0
-; DAGISEL64-NEXT: v_readlane_b32 s0, v42, 4
-; DAGISEL64-NEXT: s_clause 0x2
-; DAGISEL64-NEXT: scratch_load_b32 v42, off, s33
-; DAGISEL64-NEXT: scratch_load_b32 v40, off, s33 offset:164
-; DAGISEL64-NEXT: scratch_load_b32 v41, off, s33 offset:168
-; DAGISEL64-NEXT: s_mov_b32 s32, s33
-; DAGISEL64-NEXT: s_xor_b64 exec, s[4:5], -1
-; DAGISEL64-NEXT: s_clause 0x1f
-; DAGISEL64-NEXT: scratch_load_b32 v0, off, s33 offset:4
-; DAGISEL64-NEXT: scratch_load_b32 v1, off, s33 offset:8
-; DAGISEL64-NEXT: scratch_load_b32 v2, off, s33 offset:12
-; DAGISEL64-NEXT: scratch_load_b32 v3, off, s33 offset:16
-; DAGISEL64-NEXT: scratch_load_b32 v4, off, s33 offset:20
-; DAGISEL64-NEXT: scratch_load_b32 v5, off, s33 offset:24
-; DAGISEL64-NEXT: scratch_load_b32 v6, off, s33 offset:28
-; DAGISEL64-NEXT: scratch_load_b32 v7, off, s33 offset:32
-; DAGISEL64-NEXT: scratch_load_b32 v8, off, s33 offset:36
-; DAGISEL64-NEXT: scratch_load_b32 v9, off, s33 offset:40
-; DAGISEL64-NEXT: scratch_load_b32 v10, off, s33 offset:44
-; DAGISEL64-NEXT: scratch_load_b32 v11, off, s33 offset:48
-; DAGISEL64-NEXT: scratch_load_b32 v12, off, s33 offset:52
-; DAGISEL64-NEXT: scratch_load_b32 v13, off, s33 offset:56
-; DAGISEL64-NEXT: scratch_load_b32 v14, off, s33 offset:60
-; DAGISEL64-NEXT: scratch_load_b32 v15, off, s33 offset:64
-; DAGISEL64-NEXT: scratch_load_b32 v16, off, s33 offset:68
-; DAGISEL64-NEXT: scratch_load_b32 v17, off, s33 offset:72
-; DAGISEL64-NEXT: scratch_load_b32 v18, off, s33 offset:76
-; DAGISEL64-NEXT: scratch_load_b32 v19, off, s33 offset:80
-; DAGISEL64-NEXT: scratch_load_b32 v20, off, s33 offset:84
-; DAGISEL64-NEXT: scratch_load_b32 v21, off, s33 offset:88
-; DAGISEL64-NEXT: scratch_load_b32 v22, off, s33 offset:92
-; DAGISEL64-NEXT: scratch_load_b32 v23, off, s33 offset:96
-; DAGISEL64-NEXT: scratch_load_b32 v24, off, s33 offset:100
-; DAGISEL64-NEXT: scratch_load_b32 v25, off, s33 offset:104
-; DAGISEL64-NEXT: scratch_load_b32 v26, off, s33 offset:108
-; DAGISEL64-NEXT: scratch_load_b32 v27, off, s33 offset:112
-; DAGISEL64-NEXT: scratch_load_b32 v28, off, s33 offset:116
-; DAGISEL64-NEXT: scratch_load_b32 v29, off, s33 offset:120
-; DAGISEL64-NEXT: scratch_load_b32 v30, off, s33 offset:124
-; DAGISEL64-NEXT: scratch_load_b32 v31, off, s33 offset:128
-; DAGISEL64-NEXT: s_clause 0x1f
-; DAGISEL64-NEXT: scratch_load_b32 v32, off, s33 offset:132
-; DAGISEL64-NEXT: scratch_load_b32 v33, off, s33 offset:136
-; DAGISEL64-NEXT: scratch_load_b32 v34, off, s33 offset:140
-; DAGISEL64-NEXT: scratch_load_b32 v35, off, s33 offset:144
-; DAGISEL64-NEXT: scratch_load_b32 v36, off, s33 offset:148
-; DAGISEL64-NEXT: scratch_load_b32 v37, off, s33 offset:152
-; DAGISEL64-NEXT: scratch_load_b32 v38, off, s33 offset:156
-; DAGISEL64-NEXT: scratch_load_b32 v39, off, s33 offset:160
-; DAGISEL64-NEXT: scratch_load_b32 v48, off, s33 offset:172
-; DAGISEL64-NEXT: scratch_load_b32 v49, off, s33 offset:176
-; DAGISEL64-NEXT: scratch_load_b32 v50, off, s33 offset:180
-; DAGISEL64-NEXT: scratch_load_b32 v51, off, s33 offset:184
-; DAGISEL64-NEXT: scratch_load_b32 v52, off, s33 offset:188
-; DAGISEL64-NEXT: scratch_load_b32 v53, off, s33 offset:192
-; DAGISEL64-NEXT: scratch_load_b32 v54, off, s33 offset:196
-; DAGISEL64-NEXT: scratch_load_b32 v55, off, s33 offset:200
-; DAGISEL64-NEXT: scratch_load_b32 v64, off, s33 offset:204
-; DAGISEL64-NEXT: scratch_load_b32 v65, off, s33 offset:208
-; DAGISEL64-NEXT: scratch_load_b32 v66, off, s33 offset:212
-; DAGISEL64-NEXT: scratch_load_b32 v67, off, s33 offset:216
-; DAGISEL64-NEXT: scratch_load_b32 v68, off, s33 offset:220
-; DAGISEL64-NEXT: scratch_load_b32 v69, off, s33 offset:224
-; DAGISEL64-NEXT: scratch_load_b32 v70, off, s33 offset:228
-; DAGISEL64-NEXT: scratch_load_b32 v71, off, s33 offset:232
-; DAGISEL64-NEXT: scratch_load_b32 v80, off, s33 offset:236
-; DAGISEL64-NEXT: scratch_load_b32 v81, off, s33 offset:240
-; DAGISEL64-NEXT: scratch_load_b32 v82, off, s33 offset:244
-; DAGISEL64-NEXT: scratch_load_b32 v83, off, s33 offset:248
-; DAGISEL64-NEXT: scratch_load_b32 v84, off, s33 offset:252
-; DAGISEL64-NEXT: scratch_load_b32 v85, off, s33 offset:256
-; DAGISEL64-NEXT: scratch_load_b32 v86, off, s33 offset:260
-; DAGISEL64-NEXT: scratch_load_b32 v87, off, s33 offset:264
-; DAGISEL64-NEXT: s_clause 0x1f
-; DAGISEL64-NEXT: scratch_load_b32 v96, off, s33 offset:268
-; DAGISEL64-NEXT: scratch_load_b32 v97, off, s33 offset:272
-; DAGISEL64-NEXT: scratch_load_b32 v98, off, s33 offset:276
-; DAGISEL64-NEXT: scratch_load_b32 v99, off, s33 offset:280
-; DAGISEL64-NEXT: scratch_load_b32 v100, off, s33 offset:284
-; DAGISEL64-NEXT: scratch_load_b32 v101, off, s33 offset:288
-; DAGISEL64-NEXT: scratch_load_b32 v102, off, s33 offset:292
-; DAGISEL64-NEXT: scratch_load_b32 v103, off, s33 offset:296
-; DAGISEL64-NEXT: scratch_load_b32 v112, off, s33 offset:300
-; DAGISEL64-NEXT: scratch_load_b32 v113, off, s33 offset:304
-; DAGISEL64-NEXT: scratch_load_b32 v114, off, s33 offset:308
-; DAGISEL64-NEXT: scratch_load_b32 v115, off, s33 offset:312
-; DAGISEL64-NEXT: scratch_load_b32 v116, off, s33 offset:316
-; DAGISEL64-NEXT: scratch_load_b32 v117, off, s33 offset:320
-; DAGISEL64-NEXT: scratch_load_b32 v118, off, s33 offset:324
-; DAGISEL64-NEXT: scratch_load_b32 v119, off, s33 offset:328
-; DAGISEL64-NEXT: scratch_load_b32 v128, off, s33 offset:332
-; DAGISEL64-NEXT: scratch_load_b32 v129, off, s33 offset:336
-; DAGISEL64-NEXT: scratch_load_b32 v130, off, s33 offset:340
-; DAGISEL64-NEXT: scratch_load_b32 v131, off, s33 offset:344
-; DAGISEL64-NEXT: scratch_load_b32 v132, off, s33 offset:348
-; DAGISEL64-NEXT: scratch_load_b32 v133, off, s33 offset:352
-; DAGISEL64-NEXT: scratch_load_b32 v134, off, s33 offset:356
-; DAGISEL64-NEXT: scratch_load_b32 v135, off, s33 offset:360
-; DAGISEL64-NEXT: scratch_load_b32 v144, off, s33 offset:364
-; DAGISEL64-NEXT: scratch_load_b32 v145, off, s33 offset:368
-; DAGISEL64-NEXT: scratch_load_b32 v146, off, s33 offset:372
-; DAGISEL64-NEXT: scratch_load_b32 v147, off, s33 offset:376
-; DAGISEL64-NEXT: scratch_load_b32 v148, off, s33 offset:380
-; DAGISEL64-NEXT: scratch_load_b32 v149, off, s33 offset:384
-; DAGISEL64-NEXT: scratch_load_b32 v150, off, s33 offset:388
-; DAGISEL64-NEXT: scratch_load_b32 v151, off, s33 offset:392
-; DAGISEL64-NEXT: s_clause 0x1f
-; DAGISEL64-NEXT: scratch_load_b32 v160, off, s33 offset:396
-; DAGISEL64-NEXT: scratch_load_b32 v161, off, s33 offset:400
-; DAGISEL64-NEXT: scratch_load_b32 v162, off, s33 offset:404
-; DAGISEL64-NEXT: scratch_load_b32 v163, off, s33 offset:408
-; DAGISEL64-NEXT: scratch_load_b32 v164, off, s33 offset:412
-; DAGISEL64-NEXT: scratch_load_b32 v165, off, s33 offset:416
-; DAGISEL64-NEXT: scratch_load_b32 v166, off, s33 offset:420
-; DAGISEL64-NEXT: scratch_load_b32 v167, off, s33 offset:424
-; DAGISEL64-NEXT: scratch_load_b32 v176, off, s33 offset:428
-; DAGISEL64-NEXT: scratch_load_b32 v177, off, s33 offset:432
-; DAGISEL64-NEXT: scratch_load_b32 v178, off, s33 offset:436
-; DAGISEL64-NEXT: scratch_load_b32 v179, off, s33 offset:440
-; DAGISEL64-NEXT: scratch_load_b32 v180, off, s33 offset:444
-; DAGISEL64-NEXT: scratch_load_b32 v181, off, s33 offset:448
-; DAGISEL64-NEXT: scratch_load_b32 v182, off, s33 offset:452
-; DAGISEL64-NEXT: scratch_load_b32 v183, off, s33 offset:456
-; DAGISEL64-NEXT: scratch_load_b32 v192, off, s33 offset:460
-; DAGISEL64-NEXT: scratch_load_b32 v193, off, s33 offset:464
-; DAGISEL64-NEXT: scratch_load_b32 v194, off, s33 offset:468
-; DAGISEL64-NEXT: scratch_load_b32 v195, off, s33 offset:472
-; DAGISEL64-NEXT: scratch_load_b32 v196, off, s33 offset:476
-; DAGISEL64-NEXT: scratch_load_b32 v197, off, s33 offset:480
-; DAGISEL64-NEXT: scratch_load_b32 v198, off, s33 offset:484
-; DAGISEL64-NEXT: scratch_load_b32 v199, off, s33 offset:488
-; DAGISEL64-NEXT: scratch_load_b32 v208, off, s33 offset:492
-; DAGISEL64-NEXT: scratch_load_b32 v209, off, s33 offset:496
-; DAGISEL64-NEXT: scratch_load_b32 v210, off, s33 offset:500
-; DAGISEL64-NEXT: scratch_load_b32 v211, off, s33 offset:504
-; DAGISEL64-NEXT: scratch_load_b32 v212, off, s33 offset:508
-; DAGISEL64-NEXT: scratch_load_b32 v213, off, s33 offset:512
-; DAGISEL64-NEXT: scratch_load_b32 v214, off, s33 offset:516
-; DAGISEL64-NEXT: scratch_load_b32 v215, off, s33 offset:520
-; DAGISEL64-NEXT: s_clause 0xf
-; DAGISEL64-NEXT: scratch_load_b32 v224, off, s33 offset:524
-; DAGISEL64-NEXT: scratch_load_b32 v225, off, s33 offset:528
-; DAGISEL64-NEXT: scratch_load_b32 v226, off, s33 offset:532
-; DAGISEL64-NEXT: scratch_load_b32 v227, off, s33 offset:536
-; DAGISEL64-NEXT: scratch_load_b32 v228, off, s33 offset:540
-; DAGISEL64-NEXT: scratch_load_b32 v229, off, s33 offset:544
-; DAGISEL64-NEXT: scratch_load_b32 v230, off, s33 offset:548
-; DAGISEL64-NEXT: scratch_load_b32 v231, off, s33 offset:552
-; DAGISEL64-NEXT: scratch_load_b32 v240, off, s33 offset:556
-; DAGISEL64-NEXT: scratch_load_b32 v241, off, s33 offset:560
-; DAGISEL64-NEXT: scratch_load_b32 v242, off, s33 offset:564
-; DAGISEL64-NEXT: scratch_load_b32 v243, off, s33 offset:568
-; DAGISEL64-NEXT: scratch_load_b32 v244, off, s33 offset:572
-; DAGISEL64-NEXT: scratch_load_b32 v245, off, s33 offset:576
-; DAGISEL64-NEXT: scratch_load_b32 v246, off, s33 offset:580
-; DAGISEL64-NEXT: scratch_load_b32 v247, off, s33 offset:584
-; DAGISEL64-NEXT: s_mov_b64 exec, s[4:5]
-; DAGISEL64-NEXT: s_mov_b32 s33, s0
-; DAGISEL64-NEXT: s_wait_loadcnt_dscnt 0x0
-; DAGISEL64-NEXT: s_wait_alu 0xfffe
-; DAGISEL64-NEXT: s_setpc_b64 s[30:31]
-;
-; GISEL64-LABEL: call_from_whole_wave:
-; GISEL64: ; %bb.0:
-; GISEL64-NEXT: s_wait_loadcnt_dscnt 0x0
-; GISEL64-NEXT: s_wait_expcnt 0x0
-; GISEL64-NEXT: s_wait_samplecnt 0x0
-; GISEL64-NEXT: s_wait_bvhcnt 0x0
-; GISEL64-NEXT: s_wait_kmcnt 0x0
-; GISEL64-NEXT: s_mov_b32 s0, s33
-; GISEL64-NEXT: s_mov_b32 s33, s32
-; GISEL64-NEXT: s_xor_saveexec_b64 s[4:5], -1
-; GISEL64-NEXT: s_clause 0x1f
-; GISEL64-NEXT: scratch_store_b32 off, v0, s33 offset:4
-; GISEL64-NEXT: scratch_store_b32 off, v1, s33 offset:8
-; GISEL64-NEXT: scratch_store_b32 off, v2, s33 offset:12
-; GISEL64-NEXT: scratch_store_b32 off, v3, s33 offset:16
-; GISEL64-NEXT: scratch_store_b32 off, v4, s33 offset:20
-; GISEL64-NEXT: scratch_store_b32 off, v5, s33 offset:24
-; GISEL64-NEXT: scratch_store_b32 off, v6, s33 offset:28
-; GISEL64-NEXT: scratch_store_b32 off, v7, s33 offset:32
-; GISEL64-NEXT: scratch_store_b32 off, v8, s33 offset:36
-; GISEL64-NEXT: scratch_store_b32 off, v9, s33 offset:40
-; GISEL64-NEXT: scratch_store_b32 off, v10, s33 offset:44
-; GISEL64-NEXT: scratch_store_b32 off, v11, s33 offset:48
-; GISEL64-NEXT: scratch_store_b32 off, v12, s33 offset:52
-; GISEL64-NEXT: scratch_store_b32 off, v13, s33 offset:56
-; GISEL64-NEXT: scratch_store_b32 off, v14, s33 offset:60
-; GISEL64-NEXT: scratch_store_b32 off, v15, s33 offset:64
-; GISEL64-NEXT: scratch_store_b32 off, v16, s33 offset:68
-; GISEL64-NEXT: scratch_store_b32 off, v17, s33 offset:72
-; GISEL64-NEXT: scratch_store_b32 off, v18, s33 offset:76
-; GISEL64-NEXT: scratch_store_b32 off, v19, s33 offset:80
-; GISEL64-NEXT: scratch_store_b32 off, v20, s33 offset:84
-; GISEL64-NEXT: scratch_store_b32 off, v21, s33 offset:88
-; GISEL64-NEXT: scratch_store_b32 off, v22, s33 offset:92
-; GISEL64-NEXT: scratch_store_b32 off, v23, s33 offset:96
-; GISEL64-NEXT: scratch_store_b32 off, v24, s33 offset:100
-; GISEL64-NEXT: scratch_store_b32 off, v25, s33 offset:104
-; GISEL64-NEXT: scratch_store_b32 off, v26, s33 offset:108
-; GISEL64-NEXT: scratch_store_b32 off, v27, s33 offset:112
-; GISEL64-NEXT: scratch_store_b32 off, v28, s33 offset:116
-; GISEL64-NEXT: scratch_store_b32 off, v29, s33 offset:120
-; GISEL64-NEXT: scratch_store_b32 off, v30, s33 offset:124
-; GISEL64-NEXT: scratch_store_b32 off, v31, s33 offset:128
-; GISEL64-NEXT: s_clause 0x1f
-; GISEL64-NEXT: scratch_store_b32 off, v32, s33 offset:132
-; GISEL64-NEXT: scratch_store_b32 off, v33, s33 offset:136
-; GISEL64-NEXT: scratch_store_b32 off, v34, s33 offset:140
-; GISEL64-NEXT: scratch_store_b32 off, v35, s33 offset:144
-; GISEL64-NEXT: scratch_store_b32 off, v36, s33 offset:148
-; GISEL64-NEXT: scratch_store_b32 off, v37, s33 offset:152
-; GISEL64-NEXT: scratch_store_b32 off, v38, s33 offset:156
-; GISEL64-NEXT: scratch_store_b32 off, v39, s33 offset:160
-; GISEL64-NEXT: scratch_store_b32 off, v48, s33 offset:172
-; GISEL64-NEXT: scratch_store_b32 off, v49, s33 offset:176
-; GISEL64-NEXT: scratch_store_b32 off, v50, s33 offset:180
-; GISEL64-NEXT: scratch_store_b32 off, v51, s33 offset:184
-; GISEL64-NEXT: scratch_store_b32 off, v52, s33 offset:188
-; GISEL64-NEXT: scratch_store_b32 off, v53, s33 offset:192
-; GISEL64-NEXT: scratch_store_b32 off, v54, s33 offset:196
-; GISEL64-NEXT: scratch_store_b32 off, v55, s33 offset:200
-; GISEL64-NEXT: scratch_store_b32 off, v64, s33 offset:204
-; GISEL64-NEXT: scratch_store_b32 off, v65, s33 offset:208
-; GISEL64-NEXT: scratch_store_b32 off, v66, s33 offset:212
-; GISEL64-NEXT: scratch_store_b32 off, v67, s33 offset:216
-; GISEL64-NEXT: scratch_store_b32 off, v68, s33 offset:220
-; GISEL64-NEXT: scratch_store_b32 off, v69, s33 offset:224
-; GISEL64-NEXT: scratch_store_b32 off, v70, s33 offset:228
-; GISEL64-NEXT: scratch_store_b32 off, v71, s33 offset:232
-; GISEL64-NEXT: scratch_store_b32 off, v80, s33 offset:236
-; GISEL64-NEXT: scratch_store_b32 off, v81, s33 offset:240
-; GISEL64-NEXT: scratch_store_b32 off, v82, s33 offset:244
-; GISEL64-NEXT: scratch_store_b32 off, v83, s33 offset:248
-; GISEL64-NEXT: scratch_store_b32 off, v84, s33 offset:252
-; GISEL64-NEXT: scratch_store_b32 off, v85, s33 offset:256
-; GISEL64-NEXT: scratch_store_b32 off, v86, s33 offset:260
-; GISEL64-NEXT: scratch_store_b32 off, v87, s33 offset:264
-; GISEL64-NEXT: s_clause 0x1f
-; GISEL64-NEXT: scratch_store_b32 off, v96, s33 offset:268
-; GISEL64-NEXT: scratch_store_b32 off, v97, s33 offset:272
-; GISEL64-NEXT: scratch_store_b32 off, v98, s33 offset:276
-; GISEL64-NEXT: scratch_store_b32 off, v99, s33 offset:280
-; GISEL64-NEXT: scratch_store_b32 off, v100, s33 offset:284
-; GISEL64-NEXT: scratch_store_b32 off, v101, s33 offset:288
-; GISEL64-NEXT: scratch_store_b32 off, v102, s33 offset:292
-; GISEL64-NEXT: scratch_store_b32 off, v103, s33 offset:296
-; GISEL64-NEXT: scratch_store_b32 off, v112, s33 offset:300
-; GISEL64-NEXT: scratch_store_b32 off, v113, s33 offset:304
-; GISEL64-NEXT: scratch_store_b32 off, v114, s33 offset:308
-; GISEL64-NEXT: scratch_store_b32 off, v115, s33 offset:312
-; GISEL64-NEXT: scratch_store_b32 off, v116, s33 offset:316
-; GISEL64-NEXT: scratch_store_b32 off, v117, s33 offset:320
-; GISEL64-NEXT: scratch_store_b32 off, v118, s33 offset:324
-; GISEL64-NEXT: scratch_store_b32 off, v119, s33 offset:328
-; GISEL64-NEXT: scratch_store_b32 off, v128, s33 offset:332
-; GISEL64-NEXT: scratch_store_b32 off, v129, s33 offset:336
-; GISEL64-NEXT: scratch_store_b32 off, v130, s33 offset:340
-; GISEL64-NEXT: scratch_store_b32 off, v131, s33 offset:344
-; GISEL64-NEXT: scratch_store_b32 off, v132, s33 offset:348
-; GISEL64-NEXT: scratch_store_b32 off, v133, s33 offset:352
-; GISEL64-NEXT: scratch_store_b32 off, v134, s33 offset:356
-; GISEL64-NEXT: scratch_store_b32 off, v135, s33 offset:360
-; GISEL64-NEXT: scratch_store_b32 off, v144, s33 offset:364
-; GISEL64-NEXT: scratch_store_b32 off, v145, s33 offset:368
-; GISEL64-NEXT: scratch_store_b32 off, v146, s33 offset:372
-; GISEL64-NEXT: scratch_store_b32 off, v147, s33 offset:376
-; GISEL64-NEXT: scratch_store_b32 off, v148, s33 offset:380
-; GISEL64-NEXT: scratch_store_b32 off, v149, s33 offset:384
-; GISEL64-NEXT: scratch_store_b32 off, v150, s33 offset:388
-; GISEL64-NEXT: scratch_store_b32 off, v151, s33 offset:392
-; GISEL64-NEXT: s_clause 0x1f
-; GISEL64-NEXT: scratch_store_b32 off, v160, s33 offset:396
-; GISEL64-NEXT: scratch_store_b32 off, v161, s33 offset:400
-; GISEL64-NEXT: scratch_store_b32 off, v162, s33 offset:404
-; GISEL64-NEXT: scratch_store_b32 off, v163, s33 offset:408
-; GISEL64-NEXT: scratch_store_b32 off, v164, s33 offset:412
-; GISEL64-NEXT: scratch_store_b32 off, v165, s33 offset:416
-; GISEL64-NEXT: scratch_store_b32 off, v166, s33 offset:420
-; GISEL64-NEXT: scratch_store_b32 off, v167, s33 offset:424
-; GISEL64-NEXT: scratch_store_b32 off, v176, s33 offset:428
-; GISEL64-NEXT: scratch_store_b32 off, v177, s33 offset:432
-; GISEL64-NEXT: scratch_store_b32 off, v178, s33 offset:436
-; GISEL64-NEXT: scratch_store_b32 off, v179, s33 offset:440
-; GISEL64-NEXT: scratch_store_b32 off, v180, s33 offset:444
-; GISEL64-NEXT: scratch_store_b32 off, v181, s33 offset:448
-; GISEL64-NEXT: scratch_store_b32 off, v182, s33 offset:452
-; GISEL64-NEXT: scratch_store_b32 off, v183, s33 offset:456
-; GISEL64-NEXT: scratch_store_b32 off, v192, s33 offset:460
-; GISEL64-NEXT: scratch_store_b32 off, v193, s33 offset:464
-; GISEL64-NEXT: scratch_store_b32 off, v194, s33 offset:468
-; GISEL64-NEXT: scratch_store_b32 off, v195, s33 offset:472
-; GISEL64-NEXT: scratch_store_b32 off, v196, s33 offset:476
-; GISEL64-NEXT: scratch_store_b32 off, v197, s33 offset:480
-; GISEL64-NEXT: scratch_store_b32 off, v198, s33 offset:484
-; GISEL64-NEXT: scratch_store_b32 off, v199, s33 offset:488
-; GISEL64-NEXT: scratch_store_b32 off, v208, s33 offset:492
-; GISEL64-NEXT: scratch_store_b32 off, v209, s33 offset:496
-; GISEL64-NEXT: scratch_store_b32 off, v210, s33 offset:500
-; GISEL64-NEXT: scratch_store_b32 off, v211, s33 offset:504
-; GISEL64-NEXT: scratch_store_b32 off, v212, s33 offset:508
-; GISEL64-NEXT: scratch_store_b32 off, v213, s33 offset:512
-; GISEL64-NEXT: scratch_store_b32 off, v214, s33 offset:516
-; GISEL64-NEXT: scratch_store_b32 off, v215, s33 offset:520
-; GISEL64-NEXT: s_clause 0xf
-; GISEL64-NEXT: scratch_store_b32 off, v224, s33 offset:524
-; GISEL64-NEXT: scratch_store_b32 off, v225, s33 offset:528
-; GISEL64-NEXT: scratch_store_b32 off, v226, s33 offset:532
-; GISEL64-NEXT: scratch_store_b32 off, v227, s33 offset:536
-; GISEL64-NEXT: scratch_store_b32 off, v228, s33 offset:540
-; GISEL64-NEXT: scratch_store_b32 off, v229, s33 offset:544
-; GISEL64-NEXT: scratch_store_b32 off, v230, s33 offset:548
-; GISEL64-NEXT: scratch_store_b32 off, v231, s33 offset:552
-; GISEL64-NEXT: scratch_store_b32 off, v240, s33 offset:556
-; GISEL64-NEXT: scratch_store_b32 off, v241, s33 offset:560
-; GISEL64-NEXT: scratch_store_b32 off, v242, s33 offset:564
-; GISEL64-NEXT: scratch_store_b32 off, v243, s33 offset:568
-; GISEL64-NEXT: scratch_store_b32 off, v244, s33 offset:572
-; GISEL64-NEXT: scratch_store_b32 off, v245, s33 offset:576
-; GISEL64-NEXT: scratch_store_b32 off, v246, s33 offset:580
-; GISEL64-NEXT: scratch_store_b32 off, v247, s33 offset:584
-; GISEL64-NEXT: s_mov_b64 exec, -1
-; GISEL64-NEXT: s_clause 0x2
-; GISEL64-NEXT: scratch_store_b32 off, v42, s33
-; GISEL64-NEXT: scratch_store_b32 off, v40, s33 offset:164
-; GISEL64-NEXT: scratch_store_b32 off, v41, s33 offset:168
-; GISEL64-NEXT: s_wait_alu 0xfffe
-; GISEL64-NEXT: v_writelane_b32 v42, s0, 4
-; GISEL64-NEXT: s_mov_b32 s0, callee at abs32@lo
-; GISEL64-NEXT: s_mov_b32 s1, callee at abs32@hi
-; GISEL64-NEXT: s_addk_co_i32 s32, 0x250
-; GISEL64-NEXT: v_mov_b32_e32 v40, v8
-; GISEL64-NEXT: v_writelane_b32 v42, s4, 0
-; GISEL64-NEXT: v_mov_b32_e32 v41, v9
-; GISEL64-NEXT: v_writelane_b32 v42, s5, 1
-; GISEL64-NEXT: v_writelane_b32 v42, s30, 2
-; GISEL64-NEXT: v_writelane_b32 v42, s31, 3
-; GISEL64-NEXT: s_wait_alu 0xfffe
-; GISEL64-NEXT: s_swappc_b64 s[30:31], s[0:1]
-; GISEL64-NEXT: flat_store_b32 v[40:41], v0
-; GISEL64-NEXT: v_readlane_b32 s31, v42, 3
-; GISEL64-NEXT: v_readlane_b32 s30, v42, 2
-; GISEL64-NEXT: v_readlane_b32 s5, v42, 1
-; GISEL64-NEXT: v_readlane_b32 s4, v42, 0
-; GISEL64-NEXT: v_readlane_b32 s0, v42, 4
-; GISEL64-NEXT: s_clause 0x2
-; GISEL64-NEXT: scratch_load_b32 v42, off, s33
-; GISEL64-NEXT: scratch_load_b32 v40, off, s33 offset:164
-; GISEL64-NEXT: scratch_load_b32 v41, off, s33 offset:168
-; GISEL64-NEXT: s_mov_b32 s32, s33
-; GISEL64-NEXT: s_xor_b64 exec, s[4:5], -1
-; GISEL64-NEXT: s_clause 0x1f
-; GISEL64-NEXT: scratch_load_b32 v0, off, s33 offset:4
-; GISEL64-NEXT: scratch_load_b32 v1, off, s33 offset:8
-; GISEL64-NEXT: scratch_load_b32 v2, off, s33 offset:12
-; GISEL64-NEXT: scratch_load_b32 v3, off, s33 offset:16
-; GISEL64-NEXT: scratch_load_b32 v4, off, s33 offset:20
-; GISEL64-NEXT: scratch_load_b32 v5, off, s33 offset:24
-; GISEL64-NEXT: scratch_load_b32 v6, off, s33 offset:28
-; GISEL64-NEXT: scratch_load_b32 v7, off, s33 offset:32
-; GISEL64-NEXT: scratch_load_b32 v8, off, s33 offset:36
-; GISEL64-NEXT: scratch_load_b32 v9, off, s33 offset:40
-; GISEL64-NEXT: scratch_load_b32 v10, off, s33 offset:44
-; GISEL64-NEXT: scratch_load_b32 v11, off, s33 offset:48
-; GISEL64-NEXT: scratch_load_b32 v12, off, s33 offset:52
-; GISEL64-NEXT: scratch_load_b32 v13, off, s33 offset:56
-; GISEL64-NEXT: scratch_load_b32 v14, off, s33 offset:60
-; GISEL64-NEXT: scratch_load_b32 v15, off, s33 offset:64
-; GISEL64-NEXT: scratch_load_b32 v16, off, s33 offset:68
-; GISEL64-NEXT: scratch_load_b32 v17, off, s33 offset:72
-; GISEL64-NEXT: scratch_load_b32 v18, off, s33 offset:76
-; GISEL64-NEXT: scratch_load_b32 v19, off, s33 offset:80
-; GISEL64-NEXT: scratch_load_b32 v20, off, s33 offset:84
-; GISEL64-NEXT: scratch_load_b32 v21, off, s33 offset:88
-; GISEL64-NEXT: scratch_load_b32 v22, off, s33 offset:92
-; GISEL64-NEXT: scratch_load_b32 v23, off, s33 offset:96
-; GISEL64-NEXT: scratch_load_b32 v24, off, s33 offset:100
-; GISEL64-NEXT: scratch_load_b32 v25, off, s33 offset:104
-; GISEL64-NEXT: scratch_load_b32 v26, off, s33 offset:108
-; GISEL64-NEXT: scratch_load_b32 v27, off, s33 offset:112
-; GISEL64-NEXT: scratch_load_b32 v28, off, s33 offset:116
-; GISEL64-NEXT: scratch_load_b32 v29, off, s33 offset:120
-; GISEL64-NEXT: scratch_load_b32 v30, off, s33 offset:124
-; GISEL64-NEXT: scratch_load_b32 v31, off, s33 offset:128
-; GISEL64-NEXT: s_clause 0x1f
-; GISEL64-NEXT: scratch_load_b32 v32, off, s33 offset:132
-; GISEL64-NEXT: scratch_load_b32 v33, off, s33 offset:136
-; GISEL64-NEXT: scratch_load_b32 v34, off, s33 offset:140
-; GISEL64-NEXT: scratch_load_b32 v35, off, s33 offset:144
-; GISEL64-NEXT: scratch_load_b32 v36, off, s33 offset:148
-; GISEL64-NEXT: scratch_load_b32 v37, off, s33 offset:152
-; GISEL64-NEXT: scratch_load_b32 v38, off, s33 offset:156
-; GISEL64-NEXT: scratch_load_b32 v39, off, s33 offset:160
-; GISEL64-NEXT: scratch_load_b32 v48, off, s33 offset:172
-; GISEL64-NEXT: scratch_load_b32 v49, off, s33 offset:176
-; GISEL64-NEXT: scratch_load_b32 v50, off, s33 offset:180
-; GISEL64-NEXT: scratch_load_b32 v51, off, s33 offset:184
-; GISEL64-NEXT: scratch_load_b32 v52, off, s33 offset:188
-; GISEL64-NEXT: scratch_load_b32 v53, off, s33 offset:192
-; GISEL64-NEXT: scratch_load_b32 v54, off, s33 offset:196
-; GISEL64-NEXT: scratch_load_b32 v55, off, s33 offset:200
-; GISEL64-NEXT: scratch_load_b32 v64, off, s33 offset:204
-; GISEL64-NEXT: scratch_load_b32 v65, off, s33 offset:208
-; GISEL64-NEXT: scratch_load_b32 v66, off, s33 offset:212
-; GISEL64-NEXT: scratch_load_b32 v67, off, s33 offset:216
-; GISEL64-NEXT: scratch_load_b32 v68, off, s33 offset:220
-; GISEL64-NEXT: scratch_load_b32 v69, off, s33 offset:224
-; GISEL64-NEXT: scratch_load_b32 v70, off, s33 offset:228
-; GISEL64-NEXT: scratch_load_b32 v71, off, s33 offset:232
-; GISEL64-NEXT: scratch_load_b32 v80, off, s33 offset:236
-; GISEL64-NEXT: scratch_load_b32 v81, off, s33 offset:240
-; GISEL64-NEXT: scratch_load_b32 v82, off, s33 offset:244
-; GISEL64-NEXT: scratch_load_b32 v83, off, s33 offset:248
-; GISEL64-NEXT: scratch_load_b32 v84, off, s33 offset:252
-; GISEL64-NEXT: scratch_load_b32 v85, off, s33 offset:256
-; GISEL64-NEXT: scratch_load_b32 v86, off, s33 offset:260
-; GISEL64-NEXT: scratch_load_b32 v87, off, s33 offset:264
-; GISEL64-NEXT: s_clause 0x1f
-; GISEL64-NEXT: scratch_load_b32 v96, off, s33 offset:268
-; GISEL64-NEXT: scratch_load_b32 v97, off, s33 offset:272
-; GISEL64-NEXT: scratch_load_b32 v98, off, s33 offset:276
-; GISEL64-NEXT: scratch_load_b32 v99, off, s33 offset:280
-; GISEL64-NEXT: scratch_load_b32 v100, off, s33 offset:284
-; GISEL64-NEXT: scratch_load_b32 v101, off, s33 offset:288
-; GISEL64-NEXT: scratch_load_b32 v102, off, s33 offset:292
-; GISEL64-NEXT: scratch_load_b32 v103, off, s33 offset:296
-; GISEL64-NEXT: scratch_load_b32 v112, off, s33 offset:300
-; GISEL64-NEXT: scratch_load_b32 v113, off, s33 offset:304
-; GISEL64-NEXT: scratch_load_b32 v114, off, s33 offset:308
-; GISEL64-NEXT: scratch_load_b32 v115, off, s33 offset:312
-; GISEL64-NEXT: scratch_load_b32 v116, off, s33 offset:316
-; GISEL64-NEXT: scratch_load_b32 v117, off, s33 offset:320
-; GISEL64-NEXT: scratch_load_b32 v118, off, s33 offset:324
-; GISEL64-NEXT: scratch_load_b32 v119, off, s33 offset:328
-; GISEL64-NEXT: scratch_load_b32 v128, off, s33 offset:332
-; GISEL64-NEXT: scratch_load_b32 v129, off, s33 offset:336
-; GISEL64-NEXT: scratch_load_b32 v130, off, s33 offset:340
-; GISEL64-NEXT: scratch_load_b32 v131, off, s33 offset:344
-; GISEL64-NEXT: scratch_load_b32 v132, off, s33 offset:348
-; GISEL64-NEXT: scratch_load_b32 v133, off, s33 offset:352
-; GISEL64-NEXT: scratch_load_b32 v134, off, s33 offset:356
-; GISEL64-NEXT: scratch_load_b32 v135, off, s33 offset:360
-; GISEL64-NEXT: scratch_load_b32 v144, off, s33 offset:364
-; GISEL64-NEXT: scratch_load_b32 v145, off, s33 offset:368
-; GISEL64-NEXT: scratch_load_b32 v146, off, s33 offset:372
-; GISEL64-NEXT: scratch_load_b32 v147, off, s33 offset:376
-; GISEL64-NEXT: scratch_load_b32 v148, off, s33 offset:380
-; GISEL64-NEXT: scratch_load_b32 v149, off, s33 offset:384
-; GISEL64-NEXT: scratch_load_b32 v150, off, s33 offset:388
-; GISEL64-NEXT: scratch_load_b32 v151, off, s33 offset:392
-; GISEL64-NEXT: s_clause 0x1f
-; GISEL64-NEXT: scratch_load_b32 v160, off, s33 offset:396
-; GISEL64-NEXT: scratch_load_b32 v161, off, s33 offset:400
-; GISEL64-NEXT: scratch_load_b32 v162, off, s33 offset:404
-; GISEL64-NEXT: scratch_load_b32 v163, off, s33 offset:408
-; GISEL64-NEXT: scratch_load_b32 v164, off, s33 offset:412
-; GISEL64-NEXT: scratch_load_b32 v165, off, s33 offset:416
-; GISEL64-NEXT: scratch_load_b32 v166, off, s33 offset:420
-; GISEL64-NEXT: scratch_load_b32 v167, off, s33 offset:424
-; GISEL64-NEXT: scratch_load_b32 v176, off, s33 offset:428
-; GISEL64-NEXT: scratch_load_b32 v177, off, s33 offset:432
-; GISEL64-NEXT: scratch_load_b32 v178, off, s33 offset:436
-; GISEL64-NEXT: scratch_load_b32 v179, off, s33 offset:440
-; GISEL64-NEXT: scratch_load_b32 v180, off, s33 offset:444
-; GISEL64-NEXT: scratch_load_b32 v181, off, s33 offset:448
-; GISEL64-NEXT: scratch_load_b32 v182, off, s33 offset:452
-; GISEL64-NEXT: scratch_load_b32 v183, off, s33 offset:456
-; GISEL64-NEXT: scratch_load_b32 v192, off, s33 offset:460
-; GISEL64-NEXT: scratch_load_b32 v193, off, s33 offset:464
-; GISEL64-NEXT: scratch_load_b32 v194, off, s33 offset:468
-; GISEL64-NEXT: scratch_load_b32 v195, off, s33 offset:472
-; GISEL64-NEXT: scratch_load_b32 v196, off, s33 offset:476
-; GISEL64-NEXT: scratch_load_b32 v197, off, s33 offset:480
-; GISEL64-NEXT: scratch_load_b32 v198, off, s33 offset:484
-; GISEL64-NEXT: scratch_load_b32 v199, off, s33 offset:488
-; GISEL64-NEXT: scratch_load_b32 v208, off, s33 offset:492
-; GISEL64-NEXT: scratch_load_b32 v209, off, s33 offset:496
-; GISEL64-NEXT: scratch_load_b32 v210, off, s33 offset:500
-; GISEL64-NEXT: scratch_load_b32 v211, off, s33 offset:504
-; GISEL64-NEXT: scratch_load_b32 v212, off, s33 offset:508
-; GISEL64-NEXT: scratch_load_b32 v213, off, s33 offset:512
-; GISEL64-NEXT: scratch_load_b32 v214, off, s33 offset:516
-; GISEL64-NEXT: scratch_load_b32 v215, off, s33 offset:520
-; GISEL64-NEXT: s_clause 0xf
-; GISEL64-NEXT: scratch_load_b32 v224, off, s33 offset:524
-; GISEL64-NEXT: scratch_load_b32 v225, off, s33 offset:528
-; GISEL64-NEXT: scratch_load_b32 v226, off, s33 offset:532
-; GISEL64-NEXT: scratch_load_b32 v227, off, s33 offset:536
-; GISEL64-NEXT: scratch_load_b32 v228, off, s33 offset:540
-; GISEL64-NEXT: scratch_load_b32 v229, off, s33 offset:544
-; GISEL64-NEXT: scratch_load_b32 v230, off, s33 offset:548
-; GISEL64-NEXT: scratch_load_b32 v231, off, s33 offset:552
-; GISEL64-NEXT: scratch_load_b32 v240, off, s33 offset:556
-; GISEL64-NEXT: scratch_load_b32 v241, off, s33 offset:560
-; GISEL64-NEXT: scratch_load_b32 v242, off, s33 offset:564
-; GISEL64-NEXT: scratch_load_b32 v243, off, s33 offset:568
-; GISEL64-NEXT: scratch_load_b32 v244, off, s33 offset:572
-; GISEL64-NEXT: scratch_load_b32 v245, off, s33 offset:576
-; GISEL64-NEXT: scratch_load_b32 v246, off, s33 offset:580
-; GISEL64-NEXT: scratch_load_b32 v247, off, s33 offset:584
-; GISEL64-NEXT: s_mov_b64 exec, s[4:5]
-; GISEL64-NEXT: s_mov_b32 s33, s0
-; GISEL64-NEXT: s_wait_loadcnt_dscnt 0x0
-; GISEL64-NEXT: s_wait_alu 0xfffe
-; GISEL64-NEXT: s_setpc_b64 s[30:31]
- %ret = call float(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @callee, <8 x float> %x) convergent
- store float %ret, ptr %p
- ret void
-}
diff --git a/llvm/test/Verifier/AMDGPU/intrinsic-amdgcn-call-whole-wave.ll b/llvm/test/Verifier/AMDGPU/intrinsic-amdgcn-call-whole-wave.ll
deleted file mode 100644
index a744bf318be9a..0000000000000
--- a/llvm/test/Verifier/AMDGPU/intrinsic-amdgcn-call-whole-wave.ll
+++ /dev/null
@@ -1,46 +0,0 @@
-; RUN: not llvm-as %s -disable-output 2>&1 | FileCheck %s
-
-define amdgpu_cs void @indirect(ptr %fn, i32 %x) {
- ; CHECK: Indirect whole wave calls are not allowed
- %whatever = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr %fn, i32 %x)
- ret void
-}
-
-declare amdgpu_gfx_whole_wave void @variadic_callee(i1 %active, i32 %x, ...)
-
-define amdgpu_cs void @variadic(ptr %fn, i32 %x) {
- ; CHECK: Variadic whole wave calls are not allowed
- %whatever = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @variadic_callee, i32 %x)
- ret void
-}
-
-declare amdgpu_gfx void @bad_cc_callee(i1 %active, i32 %x)
-
-define amdgpu_cs void @bad_cc(i32 %x) {
- ; CHECK: Callee must have the amdgpu_gfx_whole_wave calling convention
- %whatever = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @bad_cc_callee, i32 %x)
- ret void
-}
-
-declare amdgpu_gfx_whole_wave i32 @no_i1_callee(i32 %active, i32 %y, i32 %z)
-
-define amdgpu_cs void @no_i1(i32 %x) {
- ; CHECK: Callee must have i1 as its first argument
- %whatever = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @no_i1_callee, i32 %x, i32 0)
- ret void
-}
-
-declare amdgpu_gfx_whole_wave i32 @good_callee(i1 %active, i32 %x, i32 inreg %y)
-
-define amdgpu_cs void @bad_args(i32 %x) {
- ; CHECK: Call argument count must match callee argument count
- %whatever.0 = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @good_callee, i32 %x)
-
- ; CHECK: Argument types must match
- %whatever.1 = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @good_callee, i32 %x, i64 inreg 0)
-
- ; CHECK: Argument inreg attributes must match
- %whatever.2 = call i32(ptr, ...) @llvm.amdgcn.call.whole.wave(ptr @good_callee, i32 %x, i32 0)
-
- ret void
-}
From 1f1b903a645e260195bbdd6c74ae1f34caacec79 Mon Sep 17 00:00:00 2001
From: Himadhith <79003240+Himadhith at users.noreply.github.com>
Date: Wed, 6 Aug 2025 15:59:47 +0530
Subject: [PATCH 059/171] [NFC][PowerPC] Cleaning up test file and removing
redundant front-end test (#151971)
NFC patch to clean up extra lines of code in the file
`llvm/test/CodeGen/PowerPC/check-zero-vector.ll`, as the current version has
the loop unrolled.
Also removing the file `clang/test/CodeGen/PowerPC/check-zero-vector.c`
as the patch affects only the backend.
Co-authored-by: himadhith <himadhith.v at ibm.com>
---
.../test/CodeGen/PowerPC/check-zero-vector.c | 143 ---------
.../test/CodeGen/PowerPC/check-zero-vector.ll | 303 +++++-------------
2 files changed, 77 insertions(+), 369 deletions(-)
delete mode 100644 clang/test/CodeGen/PowerPC/check-zero-vector.c
diff --git a/clang/test/CodeGen/PowerPC/check-zero-vector.c b/clang/test/CodeGen/PowerPC/check-zero-vector.c
deleted file mode 100644
index cb6c826641366..0000000000000
--- a/clang/test/CodeGen/PowerPC/check-zero-vector.c
+++ /dev/null
@@ -1,143 +0,0 @@
-// NOTE: Assertions have been autogenerated by utils/update_cc_test_checks.py UTC_ARGS: --version 5
-// RUN: %clang_cc1 -triple powerpc64-ibm-aix -emit-llvm %s -o - | FileCheck %s --check-prefix=POWERPC_64
-// RUN: %clang_cc1 -triple powerpc64le-unknown-linux-gnu -emit-llvm %s -o - | FileCheck %s --check-prefix=POWERPC_64LE
-// RUN: %clang_cc1 -triple powerpc-ibm-aix -emit-llvm %s -o - | FileCheck %s --check-prefix=POWERPC_32
-
-// POWERPC_64-LABEL: define signext i32 @test_Greater_than(
-// POWERPC_64-SAME: ptr noundef [[COLAUTHS:%.*]]) #[[ATTR0:[0-9]+]] {
-// POWERPC_64-NEXT: [[ENTRY:.*:]]
-// POWERPC_64-NEXT: [[COLAUTHS_ADDR:%.*]] = alloca ptr, align 8
-// POWERPC_64-NEXT: [[RESULT:%.*]] = alloca i16, align 2
-// POWERPC_64-NEXT: [[I:%.*]] = alloca i32, align 4
-// POWERPC_64-NEXT: store ptr [[COLAUTHS]], ptr [[COLAUTHS_ADDR]], align 8
-// POWERPC_64-NEXT: store i16 0, ptr [[RESULT]], align 2
-// POWERPC_64-NEXT: store i32 0, ptr [[I]], align 4
-// POWERPC_64-NEXT: br label %[[FOR_COND:.*]]
-// POWERPC_64: [[FOR_COND]]:
-// POWERPC_64-NEXT: [[TMP0:%.*]] = load i32, ptr [[I]], align 4
-// POWERPC_64-NEXT: [[CMP:%.*]] = icmp slt i32 [[TMP0]], 4
-// POWERPC_64-NEXT: br i1 [[CMP]], label %[[FOR_BODY:.*]], label %[[FOR_END:.*]]
-// POWERPC_64: [[FOR_BODY]]:
-// POWERPC_64-NEXT: [[TMP1:%.*]] = load ptr, ptr [[COLAUTHS_ADDR]], align 8
-// POWERPC_64-NEXT: [[TMP2:%.*]] = load i32, ptr [[I]], align 4
-// POWERPC_64-NEXT: [[IDXPROM:%.*]] = sext i32 [[TMP2]] to i64
-// POWERPC_64-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i16, ptr [[TMP1]], i64 [[IDXPROM]]
-// POWERPC_64-NEXT: [[TMP3:%.*]] = load i16, ptr [[ARRAYIDX]], align 2
-// POWERPC_64-NEXT: [[CONV:%.*]] = zext i16 [[TMP3]] to i32
-// POWERPC_64-NEXT: [[CMP1:%.*]] = icmp sgt i32 [[CONV]], 0
-// POWERPC_64-NEXT: br i1 [[CMP1]], label %[[IF_THEN:.*]], label %[[IF_END:.*]]
-// POWERPC_64: [[IF_THEN]]:
-// POWERPC_64-NEXT: [[TMP4:%.*]] = load i16, ptr [[RESULT]], align 2
-// POWERPC_64-NEXT: [[INC:%.*]] = add i16 [[TMP4]], 1
-// POWERPC_64-NEXT: store i16 [[INC]], ptr [[RESULT]], align 2
-// POWERPC_64-NEXT: br label %[[IF_END]]
-// POWERPC_64: [[IF_END]]:
-// POWERPC_64-NEXT: br label %[[FOR_INC:.*]]
-// POWERPC_64: [[FOR_INC]]:
-// POWERPC_64-NEXT: [[TMP5:%.*]] = load i32, ptr [[I]], align 4
-// POWERPC_64-NEXT: [[INC3:%.*]] = add nsw i32 [[TMP5]], 1
-// POWERPC_64-NEXT: store i32 [[INC3]], ptr [[I]], align 4
-// POWERPC_64-NEXT: br label %[[FOR_COND]], !llvm.loop [[LOOP2:![0-9]+]]
-// POWERPC_64: [[FOR_END]]:
-// POWERPC_64-NEXT: [[TMP6:%.*]] = load i16, ptr [[RESULT]], align 2
-// POWERPC_64-NEXT: [[CONV4:%.*]] = zext i16 [[TMP6]] to i32
-// POWERPC_64-NEXT: ret i32 [[CONV4]]
-//
-// POWERPC_64LE-LABEL: define dso_local signext i32 @test_Greater_than(
-// POWERPC_64LE-SAME: ptr noundef [[COLAUTHS:%.*]]) #[[ATTR0:[0-9]+]] {
-// POWERPC_64LE-NEXT: [[ENTRY:.*:]]
-// POWERPC_64LE-NEXT: [[COLAUTHS_ADDR:%.*]] = alloca ptr, align 8
-// POWERPC_64LE-NEXT: [[RESULT:%.*]] = alloca i16, align 2
-// POWERPC_64LE-NEXT: [[I:%.*]] = alloca i32, align 4
-// POWERPC_64LE-NEXT: store ptr [[COLAUTHS]], ptr [[COLAUTHS_ADDR]], align 8
-// POWERPC_64LE-NEXT: store i16 0, ptr [[RESULT]], align 2
-// POWERPC_64LE-NEXT: store i32 0, ptr [[I]], align 4
-// POWERPC_64LE-NEXT: br label %[[FOR_COND:.*]]
-// POWERPC_64LE: [[FOR_COND]]:
-// POWERPC_64LE-NEXT: [[TMP0:%.*]] = load i32, ptr [[I]], align 4
-// POWERPC_64LE-NEXT: [[CMP:%.*]] = icmp slt i32 [[TMP0]], 4
-// POWERPC_64LE-NEXT: br i1 [[CMP]], label %[[FOR_BODY:.*]], label %[[FOR_END:.*]]
-// POWERPC_64LE: [[FOR_BODY]]:
-// POWERPC_64LE-NEXT: [[TMP1:%.*]] = load ptr, ptr [[COLAUTHS_ADDR]], align 8
-// POWERPC_64LE-NEXT: [[TMP2:%.*]] = load i32, ptr [[I]], align 4
-// POWERPC_64LE-NEXT: [[IDXPROM:%.*]] = sext i32 [[TMP2]] to i64
-// POWERPC_64LE-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i16, ptr [[TMP1]], i64 [[IDXPROM]]
-// POWERPC_64LE-NEXT: [[TMP3:%.*]] = load i16, ptr [[ARRAYIDX]], align 2
-// POWERPC_64LE-NEXT: [[CONV:%.*]] = zext i16 [[TMP3]] to i32
-// POWERPC_64LE-NEXT: [[CMP1:%.*]] = icmp sgt i32 [[CONV]], 0
-// POWERPC_64LE-NEXT: br i1 [[CMP1]], label %[[IF_THEN:.*]], label %[[IF_END:.*]]
-// POWERPC_64LE: [[IF_THEN]]:
-// POWERPC_64LE-NEXT: [[TMP4:%.*]] = load i16, ptr [[RESULT]], align 2
-// POWERPC_64LE-NEXT: [[INC:%.*]] = add i16 [[TMP4]], 1
-// POWERPC_64LE-NEXT: store i16 [[INC]], ptr [[RESULT]], align 2
-// POWERPC_64LE-NEXT: br label %[[IF_END]]
-// POWERPC_64LE: [[IF_END]]:
-// POWERPC_64LE-NEXT: br label %[[FOR_INC:.*]]
-// POWERPC_64LE: [[FOR_INC]]:
-// POWERPC_64LE-NEXT: [[TMP5:%.*]] = load i32, ptr [[I]], align 4
-// POWERPC_64LE-NEXT: [[INC3:%.*]] = add nsw i32 [[TMP5]], 1
-// POWERPC_64LE-NEXT: store i32 [[INC3]], ptr [[I]], align 4
-// POWERPC_64LE-NEXT: br label %[[FOR_COND]], !llvm.loop [[LOOP2:![0-9]+]]
-// POWERPC_64LE: [[FOR_END]]:
-// POWERPC_64LE-NEXT: [[TMP6:%.*]] = load i16, ptr [[RESULT]], align 2
-// POWERPC_64LE-NEXT: [[CONV4:%.*]] = zext i16 [[TMP6]] to i32
-// POWERPC_64LE-NEXT: ret i32 [[CONV4]]
-//
-// POWERPC_32-LABEL: define i32 @test_Greater_than(
-// POWERPC_32-SAME: ptr noundef [[COLAUTHS:%.*]]) #[[ATTR0:[0-9]+]] {
-// POWERPC_32-NEXT: [[ENTRY:.*:]]
-// POWERPC_32-NEXT: [[COLAUTHS_ADDR:%.*]] = alloca ptr, align 4
-// POWERPC_32-NEXT: [[RESULT:%.*]] = alloca i16, align 2
-// POWERPC_32-NEXT: [[I:%.*]] = alloca i32, align 4
-// POWERPC_32-NEXT: store ptr [[COLAUTHS]], ptr [[COLAUTHS_ADDR]], align 4
-// POWERPC_32-NEXT: store i16 0, ptr [[RESULT]], align 2
-// POWERPC_32-NEXT: store i32 0, ptr [[I]], align 4
-// POWERPC_32-NEXT: br label %[[FOR_COND:.*]]
-// POWERPC_32: [[FOR_COND]]:
-// POWERPC_32-NEXT: [[TMP0:%.*]] = load i32, ptr [[I]], align 4
-// POWERPC_32-NEXT: [[CMP:%.*]] = icmp slt i32 [[TMP0]], 4
-// POWERPC_32-NEXT: br i1 [[CMP]], label %[[FOR_BODY:.*]], label %[[FOR_END:.*]]
-// POWERPC_32: [[FOR_BODY]]:
-// POWERPC_32-NEXT: [[TMP1:%.*]] = load ptr, ptr [[COLAUTHS_ADDR]], align 4
-// POWERPC_32-NEXT: [[TMP2:%.*]] = load i32, ptr [[I]], align 4
-// POWERPC_32-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i16, ptr [[TMP1]], i32 [[TMP2]]
-// POWERPC_32-NEXT: [[TMP3:%.*]] = load i16, ptr [[ARRAYIDX]], align 2
-// POWERPC_32-NEXT: [[CONV:%.*]] = zext i16 [[TMP3]] to i32
-// POWERPC_32-NEXT: [[CMP1:%.*]] = icmp sgt i32 [[CONV]], 0
-// POWERPC_32-NEXT: br i1 [[CMP1]], label %[[IF_THEN:.*]], label %[[IF_END:.*]]
-// POWERPC_32: [[IF_THEN]]:
-// POWERPC_32-NEXT: [[TMP4:%.*]] = load i16, ptr [[RESULT]], align 2
-// POWERPC_32-NEXT: [[INC:%.*]] = add i16 [[TMP4]], 1
-// POWERPC_32-NEXT: store i16 [[INC]], ptr [[RESULT]], align 2
-// POWERPC_32-NEXT: br label %[[IF_END]]
-// POWERPC_32: [[IF_END]]:
-// POWERPC_32-NEXT: br label %[[FOR_INC:.*]]
-// POWERPC_32: [[FOR_INC]]:
-// POWERPC_32-NEXT: [[TMP5:%.*]] = load i32, ptr [[I]], align 4
-// POWERPC_32-NEXT: [[INC3:%.*]] = add nsw i32 [[TMP5]], 1
-// POWERPC_32-NEXT: store i32 [[INC3]], ptr [[I]], align 4
-// POWERPC_32-NEXT: br label %[[FOR_COND]], !llvm.loop [[LOOP2:![0-9]+]]
-// POWERPC_32: [[FOR_END]]:
-// POWERPC_32-NEXT: [[TMP6:%.*]] = load i16, ptr [[RESULT]], align 2
-// POWERPC_32-NEXT: [[CONV4:%.*]] = zext i16 [[TMP6]] to i32
-// POWERPC_32-NEXT: ret i32 [[CONV4]]
-//
-int test_Greater_than(unsigned short *colauths) {
- unsigned short result = 0;
- for (int i = 0; i < 4; i++) {
- if (colauths[i] > 0) {
- result++;
- }
- }
- return result;
-}
-//.
-// POWERPC_64: [[LOOP2]] = distinct !{[[LOOP2]], [[META3:![0-9]+]]}
-// POWERPC_64: [[META3]] = !{!"llvm.loop.mustprogress"}
-//.
-// POWERPC_64LE: [[LOOP2]] = distinct !{[[LOOP2]], [[META3:![0-9]+]]}
-// POWERPC_64LE: [[META3]] = !{!"llvm.loop.mustprogress"}
-//.
-// POWERPC_32: [[LOOP2]] = distinct !{[[LOOP2]], [[META3:![0-9]+]]}
-// POWERPC_32: [[META3]] = !{!"llvm.loop.mustprogress"}
-//.
diff --git a/llvm/test/CodeGen/PowerPC/check-zero-vector.ll b/llvm/test/CodeGen/PowerPC/check-zero-vector.ll
index 59173e22edf26..d8e66d6500f5f 100644
--- a/llvm/test/CodeGen/PowerPC/check-zero-vector.ll
+++ b/llvm/test/CodeGen/PowerPC/check-zero-vector.ll
@@ -1,3 +1,4 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
; RUN: llc -verify-machineinstrs -mcpu=pwr9 -mtriple=powerpc64le-unknown-linux-gnu \
; RUN: < %s | FileCheck %s --check-prefix=POWERPC_64LE
@@ -7,240 +8,90 @@
; RUN: llc -verify-machineinstrs -mcpu=pwr9 -mtriple=powerpc-ibm-aix \
; RUN: < %s | FileCheck %s --check-prefix=POWERPC_32
-define i32 @test_Greater_than(ptr %colauths, i32 signext %ncols) {
-; This testcase is manually reduced to isolate the critical code blocks.
-; It is designed to check for vector comparison specifically for zero vectors.
-; In the vector.body section, we are expecting a comparison instruction (vcmpequh),
-; merge instructions (vmrghh and vmrglh) which use exactly 2 vectors.
-; The output of the merge instruction is being used by xxland and finally
-; accumulated by vadduwm instruction.
-
+define i32 @test_Greater_than(ptr %colauths) {
+; This testcase is for the special case of zero-vector comparisons.
+; Currently the generated code does a comparison (vcmpequh) and then a negation (xxlnor).
+; This pattern is expected to be optimized in a future patch.
; POWERPC_64LE-LABEL: test_Greater_than:
-; POWERPC_64LE: .LBB0_6: # %vector.body
-; POWERPC_64LE-NEXT: #
-; POWERPC_64LE-NEXT: lxv [[R1:[0-9]+]], -64(4)
-; POWERPC_64LE-NEXT: vcmpequh [[R2:[0-9]+]], [[R2]], [[R3:[0-9]+]]
-; POWERPC_64LE-NEXT: xxlnor [[R1]], [[R1]], [[R1]]
-; POWERPC_64LE-NEXT: vmrghh [[R4:[0-9]+]], [[R2]], [[R2]]
-; POWERPC_64LE-NEXT: vmrglh [[R2]], [[R2]], [[R2]]
-; POWERPC_64LE-NEXT: xxland [[R5:[0-9]+]], [[R5]], [[R6:[0-9]+]]
-; POWERPC_64LE-NEXT: xxland [[R1]], [[R1]], [[R6]]
-; POWERPC_64LE-NEXT: vadduwm [[R7:[0-9]+]], [[R7]], [[R4]]
-; POWERPC_64LE: .LBB0_10: # %vec.epilog.vector.body
-; POWERPC_64LE-NEXT: #
-; POWERPC_64LE-NEXT: lxv [[R8:[0-9]+]], 0(4)
-; POWERPC_64LE-NEXT: addi 4, 4, 16
-; POWERPC_64LE-NEXT: vcmpequh [[R9:[0-9]+]], [[R9]], [[R10:[0-9]+]]
-; POWERPC_64LE-NEXT: xxlnor [[R8]], [[R8]], [[R8]]
-; POWERPC_64LE-NEXT: vmrglh [[R11:[0-9]+]], [[R9]], [[R9]]
-; POWERPC_64LE-NEXT: vmrghh [[R9]], [[R9]], [[R9]]
-; POWERPC_64LE-NEXT: xxland [[R12:[0-9]+]], [[R12]], [[R6]]
-; POWERPC_64LE-NEXT: xxland [[R8]], [[R8]], [[R6]]
-; POWERPC_64LE-NEXT: vadduwm [[R7]], [[R7]], [[R9]]
-; POWERPC_64LE-NEXT: vadduwm [[R3]], [[R3]], [[R11]]
-; POWERPC_64LE-NEXT: bdnz .LBB0_10
-; POWERPC_64LE: blr
+; POWERPC_64LE: # %bb.0: # %entry
+; POWERPC_64LE-NEXT: lfd 0, 0(3)
+; POWERPC_64LE-NEXT: xxlxor 35, 35, 35
+; POWERPC_64LE-NEXT: li 4, 0
+; POWERPC_64LE-NEXT: li 3, 4
+; POWERPC_64LE-NEXT: xxswapd 34, 0
+; POWERPC_64LE-NEXT: vcmpequh 2, 2, 3
+; POWERPC_64LE-NEXT: xxlnor 34, 34, 34
+; POWERPC_64LE-NEXT: vmrglh 3, 2, 2
+; POWERPC_64LE-NEXT: vextuwrx 4, 4, 2
+; POWERPC_64LE-NEXT: vextuwrx 3, 3, 3
+; POWERPC_64LE-NEXT: clrlwi 4, 4, 31
+; POWERPC_64LE-NEXT: rlwimi 4, 3, 1, 30, 30
+; POWERPC_64LE-NEXT: mfvsrwz 3, 35
+; POWERPC_64LE-NEXT: rlwimi 4, 3, 2, 29, 29
+; POWERPC_64LE-NEXT: li 3, 12
+; POWERPC_64LE-NEXT: vextuwrx 3, 3, 3
+; POWERPC_64LE-NEXT: rlwimi 4, 3, 3, 28, 28
+; POWERPC_64LE-NEXT: stb 4, -1(1)
+; POWERPC_64LE-NEXT: lbz 3, -1(1)
+; POWERPC_64LE-NEXT: popcntd 3, 3
+; POWERPC_64LE-NEXT: blr
;
; POWERPC_64-LABEL: test_Greater_than:
-; POWERPC_64: L..BB0_6: # %vector.body
-; POWERPC_64-NEXT: #
-; POWERPC_64-NEXT: lxv [[R1:[0-9]+]], -64(4)
-; POWERPC_64-NEXT: vcmpequh [[R2:[0-9]+]], [[R2]], [[R3:[0-9]+]]
-; POWERPC_64-NEXT: xxlnor [[R1]], [[R1]], [[R1]]
-; POWERPC_64-NEXT: vmrglh [[R4:[0-9]+]], [[R2]], [[R2]]
-; POWERPC_64-NEXT: vmrghh [[R2]], [[R2]], [[R2]]
-; POWERPC_64-NEXT: xxland [[R5:[0-9]+]], [[R5]], [[R6:[0-9]+]]
-; POWERPC_64-NEXT: xxland [[R1]], [[R1]], [[R6]]
-; POWERPC_64-NEXT: vadduwm [[R7:[0-9]+]], [[R7]], [[R4]]
-; POWERPC_64: L..BB0_10: # %vec.epilog.vector.body
-; POWERPC_64-NEXT: #
-; POWERPC_64-NEXT: lxv [[R8:[0-9]+]], 0(4)
-; POWERPC_64-NEXT: addi 4, 4, 16
-; POWERPC_64-NEXT: vcmpequh [[R9:[0-9]+]], [[R9]], [[R10:[0-9]+]]
-; POWERPC_64-NEXT: xxlnor [[R8]], [[R8]], [[R8]]
-; POWERPC_64-NEXT: vmrghh [[R11:[0-9]+]], [[R9]], [[R9]]
-; POWERPC_64-NEXT: vmrglh [[R9]], [[R9]], [[R9]]
-; POWERPC_64-NEXT: xxland [[R12:[0-9]+]], [[R12]], [[R6]]
-; POWERPC_64-NEXT: xxland [[R8]], [[R8]], [[R6]]
-; POWERPC_64-NEXT: vadduwm [[R7]], [[R7]], [[R9]]
-; POWERPC_64-NEXT: vadduwm [[R3]], [[R3]], [[R11]]
-; POWERPC_64-NEXT: bdnz L..BB0_10
-; POWERPC_64: blr
+; POWERPC_64: # %bb.0: # %entry
+; POWERPC_64-NEXT: lxsd 2, 0(3)
+; POWERPC_64-NEXT: xxlxor 35, 35, 35
+; POWERPC_64-NEXT: li 4, 12
+; POWERPC_64-NEXT: li 3, 8
+; POWERPC_64-NEXT: vcmpequh 2, 2, 3
+; POWERPC_64-NEXT: xxlnor 34, 34, 34
+; POWERPC_64-NEXT: vmrghh 2, 2, 2
+; POWERPC_64-NEXT: vextuwlx 4, 4, 2
+; POWERPC_64-NEXT: vextuwlx 3, 3, 2
+; POWERPC_64-NEXT: clrlwi 4, 4, 31
+; POWERPC_64-NEXT: rlwimi 4, 3, 1, 30, 30
+; POWERPC_64-NEXT: mfvsrwz 3, 34
+; POWERPC_64-NEXT: rlwimi 4, 3, 2, 29, 29
+; POWERPC_64-NEXT: li 3, 0
+; POWERPC_64-NEXT: vextuwlx 3, 3, 2
+; POWERPC_64-NEXT: rlwimi 4, 3, 3, 28, 28
+; POWERPC_64-NEXT: stb 4, -1(1)
+; POWERPC_64-NEXT: lbz 3, -1(1)
+; POWERPC_64-NEXT: popcntd 3, 3
+; POWERPC_64-NEXT: blr
;
; POWERPC_32-LABEL: test_Greater_than:
-; POWERPC_32: L..BB0_7: # %vector.body
-; POWERPC_32-NEXT: #
-; POWERPC_32-NEXT: lxv [[R1:[0-9]+]], 0(10)
-; POWERPC_32-NEXT: addic [[R13:[0-9]+]], [[R13]], 64
-; POWERPC_32-NEXT: addze [[R14:[0-9]+]], [[R14]]
-; POWERPC_32-NEXT: xor [[R15:[0-9]+]], [[R13]], [[R16:[0-9]+]]
-; POWERPC_32-NEXT: or. [[R15]], [[R15]], [[R14]]
-; POWERPC_32-NEXT: vcmpequh [[R2:[0-9]+]], [[R2]], [[R3:[0-9]+]]
-; POWERPC_32-NEXT: xxlnor [[R1]], [[R1]], [[R1]]
-; POWERPC_32-NEXT: vmrglh [[R4:[0-9]+]], [[R2]], [[R2]]
-; POWERPC_32-NEXT: vmrghh [[R2]], [[R2]], [[R2]]
-; POWERPC_32-NEXT: xxland [[R5:[0-9]+]], [[R5]], [[R6:[0-9]+]]
-; POWERPC_32-NEXT: xxland [[R1]], [[R1]], [[R6]]
-; POWERPC_32-NEXT: vadduwm [[R7:[0-9]+]], [[R7]], [[R4]]
-; POWERPC_32: L..BB0_11: # %vec.epilog.vector.body
-; POWERPC_32-NEXT: #
-; POWERPC_32-NEXT: slwi [[R14]], [[R13]], 1
-; POWERPC_32-NEXT: addic [[R13]], [[R13]], 8
-; POWERPC_32-NEXT: addze [[R17:[0-9]+]], [[R17]]
-; POWERPC_32-NEXT: lxvx [[R8:[0-9]+]], [[R18:[0-9]+]], [[R14]]
-; POWERPC_32-NEXT: xor [[R14]], [[R13]], [[R16]]
-; POWERPC_32-NEXT: or. [[R14]], [[R14]], [[R17]]
-; POWERPC_32-NEXT: vcmpequh [[R9:[0-9]+]], [[R9]], [[R3]]
-; POWERPC_32-NEXT: xxlnor [[R8]], [[R8]], [[R8]]
-; POWERPC_32-NEXT: vmrghh [[R11:[0-9]+]], [[R9]], [[R9]]
-; POWERPC_32-NEXT: vmrglh [[R9]], [[R9]], [[R9]]
-; POWERPC_32-NEXT: xxland [[R12:[0-9]+]], [[R12]], [[R6]]
-; POWERPC_32-NEXT: xxland [[R8]], [[R8]], [[R6]]
-; POWERPC_32-NEXT: vadduwm [[R7]], [[R7]], [[R9]]
-; POWERPC_32-NEXT: vadduwm [[R19:[0-9]+]], [[R19]], [[R11]]
-; POWERPC_32-NEXT: bne 0, L..BB0_11
-; POWERPC_32: blr
- entry:
- %cmp5 = icmp sgt i32 %ncols, 0
- br i1 %cmp5, label %iter.check, label %for.cond.cleanup
-
-iter.check: ; preds = %entry
- %wide.trip.count = zext nneg i32 %ncols to i64
- %min.iters.check = icmp ult i32 %ncols, 8
- br i1 %min.iters.check, label %for.body.preheader, label %vector.main.loop.iter.check
-
-for.body.preheader: ; preds = %vec.epilog.iter.check, %vec.epilog.middle.block, %iter.check
- %indvars.iv.ph = phi i64 [ 0, %iter.check ], [ %n.vec, %vec.epilog.iter.check ], [ %n.vec31, %vec.epilog.middle.block ]
- %num_cols_needed.06.ph = phi i32 [ 0, %iter.check ], [ %33, %vec.epilog.iter.check ], [ %40, %vec.epilog.middle.block ]
- br label %for.body
-
-vector.main.loop.iter.check: ; preds = %iter.check
- %min.iters.check9 = icmp ult i32 %ncols, 64
- br i1 %min.iters.check9, label %vec.epilog.ph, label %vector.ph
-
-vector.ph: ; preds = %vector.main.loop.iter.check
- %n.vec = and i64 %wide.trip.count, 2147483584
- br label %vector.body
-
-vector.body: ; preds = %vector.body, %vector.ph
- %index = phi i64 [ 0, %vector.ph ], [ %index.next, %vector.body ]
- %vec.phi = phi <8 x i32> [ zeroinitializer, %vector.ph ], [ %24, %vector.body ]
- %vec.phi10 = phi <8 x i32> [ zeroinitializer, %vector.ph ], [ %25, %vector.body ]
- %vec.phi11 = phi <8 x i32> [ zeroinitializer, %vector.ph ], [ %26, %vector.body ]
- %vec.phi12 = phi <8 x i32> [ zeroinitializer, %vector.ph ], [ %27, %vector.body ]
- %vec.phi13 = phi <8 x i32> [ zeroinitializer, %vector.ph ], [ %28, %vector.body ]
- %vec.phi14 = phi <8 x i32> [ zeroinitializer, %vector.ph ], [ %29, %vector.body ]
- %vec.phi15 = phi <8 x i32> [ zeroinitializer, %vector.ph ], [ %30, %vector.body ]
- %vec.phi16 = phi <8 x i32> [ zeroinitializer, %vector.ph ], [ %31, %vector.body ]
- %0 = getelementptr inbounds nuw i16, ptr %colauths, i64 %index
- %1 = getelementptr inbounds nuw i8, ptr %0, i64 16
- %2 = getelementptr inbounds nuw i8, ptr %0, i64 32
- %3 = getelementptr inbounds nuw i8, ptr %0, i64 48
- %4 = getelementptr inbounds nuw i8, ptr %0, i64 64
- %5 = getelementptr inbounds nuw i8, ptr %0, i64 80
- %6 = getelementptr inbounds nuw i8, ptr %0, i64 96
- %7 = getelementptr inbounds nuw i8, ptr %0, i64 112
- %wide.load = load <8 x i16>, ptr %0, align 2, !tbaa !5
- %wide.load17 = load <8 x i16>, ptr %1, align 2, !tbaa !5
- %wide.load18 = load <8 x i16>, ptr %2, align 2, !tbaa !5
- %wide.load19 = load <8 x i16>, ptr %3, align 2, !tbaa !5
- %wide.load20 = load <8 x i16>, ptr %4, align 2, !tbaa !5
- %wide.load21 = load <8 x i16>, ptr %5, align 2, !tbaa !5
- %wide.load22 = load <8 x i16>, ptr %6, align 2, !tbaa !5
- %wide.load23 = load <8 x i16>, ptr %7, align 2, !tbaa !5
- %8 = icmp ne <8 x i16> %wide.load, zeroinitializer
- %9 = icmp ne <8 x i16> %wide.load17, zeroinitializer
- %10 = icmp ne <8 x i16> %wide.load18, zeroinitializer
- %11 = icmp ne <8 x i16> %wide.load19, zeroinitializer
- %12 = icmp ne <8 x i16> %wide.load20, zeroinitializer
- %13 = icmp ne <8 x i16> %wide.load21, zeroinitializer
- %14 = icmp ne <8 x i16> %wide.load22, zeroinitializer
- %15 = icmp ne <8 x i16> %wide.load23, zeroinitializer
- %16 = zext <8 x i1> %8 to <8 x i32>
- %17 = zext <8 x i1> %9 to <8 x i32>
- %18 = zext <8 x i1> %10 to <8 x i32>
- %19 = zext <8 x i1> %11 to <8 x i32>
- %20 = zext <8 x i1> %12 to <8 x i32>
- %21 = zext <8 x i1> %13 to <8 x i32>
- %22 = zext <8 x i1> %14 to <8 x i32>
- %23 = zext <8 x i1> %15 to <8 x i32>
- %24 = add <8 x i32> %vec.phi, %16
- %25 = add <8 x i32> %vec.phi10, %17
- %26 = add <8 x i32> %vec.phi11, %18
- %27 = add <8 x i32> %vec.phi12, %19
- %28 = add <8 x i32> %vec.phi13, %20
- %29 = add <8 x i32> %vec.phi14, %21
- %30 = add <8 x i32> %vec.phi15, %22
- %31 = add <8 x i32> %vec.phi16, %23
- %index.next = add nuw i64 %index, 64
- %32 = icmp eq i64 %index.next, %n.vec
- br i1 %32, label %middle.block, label %vector.body, !llvm.loop !9
-
-middle.block: ; preds = %vector.body
- %bin.rdx = add <8 x i32> %25, %24
- %bin.rdx24 = add <8 x i32> %26, %bin.rdx
- %bin.rdx25 = add <8 x i32> %27, %bin.rdx24
- %bin.rdx26 = add <8 x i32> %28, %bin.rdx25
- %bin.rdx27 = add <8 x i32> %29, %bin.rdx26
- %bin.rdx28 = add <8 x i32> %30, %bin.rdx27
- %bin.rdx29 = add <8 x i32> %31, %bin.rdx28
- %33 = tail call i32 @llvm.vector.reduce.add.v8i32(<8 x i32> %bin.rdx29)
- %cmp.n = icmp eq i64 %n.vec, %wide.trip.count
- br i1 %cmp.n, label %for.cond.cleanup, label %vec.epilog.iter.check
-
-vec.epilog.iter.check: ; preds = %middle.block
- %n.vec.remaining = and i64 %wide.trip.count, 56
- %min.epilog.iters.check = icmp eq i64 %n.vec.remaining, 0
- br i1 %min.epilog.iters.check, label %for.body.preheader, label %vec.epilog.ph
-
-vec.epilog.ph: ; preds = %vec.epilog.iter.check, %vector.main.loop.iter.check
- %vec.epilog.resume.val = phi i64 [ %n.vec, %vec.epilog.iter.check ], [ 0, %vector.main.loop.iter.check ]
- %bc.merge.rdx = phi i32 [ %33, %vec.epilog.iter.check ], [ 0, %vector.main.loop.iter.check ]
- %n.vec31 = and i64 %wide.trip.count, 2147483640
- %34 = insertelement <8 x i32> <i32 poison, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0>, i32 %bc.merge.rdx, i64 0
- br label %vec.epilog.vector.body
-
-vec.epilog.vector.body: ; preds = %vec.epilog.vector.body, %vec.epilog.ph
- %index32 = phi i64 [ %vec.epilog.resume.val, %vec.epilog.ph ], [ %index.next35, %vec.epilog.vector.body ]
- %vec.phi33 = phi <8 x i32> [ %34, %vec.epilog.ph ], [ %38, %vec.epilog.vector.body ]
- %35 = getelementptr inbounds nuw i16, ptr %colauths, i64 %index32
- %wide.load34 = load <8 x i16>, ptr %35, align 2, !tbaa !5
- %36 = icmp ne <8 x i16> %wide.load34, zeroinitializer
- %37 = zext <8 x i1> %36 to <8 x i32>
- %38 = add <8 x i32> %vec.phi33, %37
- %index.next35 = add nuw i64 %index32, 8
- %39 = icmp eq i64 %index.next35, %n.vec31
- br i1 %39, label %vec.epilog.middle.block, label %vec.epilog.vector.body, !llvm.loop !13
-
-vec.epilog.middle.block: ; preds = %vec.epilog.vector.body
- %40 = tail call i32 @llvm.vector.reduce.add.v8i32(<8 x i32> %38)
- %cmp.n36 = icmp eq i64 %n.vec31, %wide.trip.count
- br i1 %cmp.n36, label %for.cond.cleanup, label %for.body.preheader
-
-for.cond.cleanup: ; preds = %for.body, %middle.block, %vec.epilog.middle.block, %entry
- %num_cols_needed.0.lcssa = phi i32 [ 0, %entry ], [ %33, %middle.block ], [ %40, %vec.epilog.middle.block ], [ %spec.select, %for.body ]
- ret i32 %num_cols_needed.0.lcssa
-
-for.body: ; preds = %for.body.preheader, %for.body
- %indvars.iv = phi i64 [ %indvars.iv.next, %for.body ], [ %indvars.iv.ph, %for.body.preheader ]
- %num_cols_needed.06 = phi i32 [ %spec.select, %for.body ], [ %num_cols_needed.06.ph, %for.body.preheader ]
- %arrayidx = getelementptr inbounds nuw i16, ptr %colauths, i64 %indvars.iv
- %41 = load i16, ptr %arrayidx, align 2, !tbaa !5
- %tobool.not = icmp ne i16 %41, 0
- %inc = zext i1 %tobool.not to i32
- %spec.select = add nuw nsw i32 %num_cols_needed.06, %inc
- %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1
- %exitcond.not = icmp eq i64 %indvars.iv.next, %wide.trip.count
- br i1 %exitcond.not, label %for.cond.cleanup, label %for.body, !llvm.loop !14
+; POWERPC_32: # %bb.0: # %entry
+; POWERPC_32-NEXT: li 4, 4
+; POWERPC_32-NEXT: lxvwsx 1, 0, 3
+; POWERPC_32-NEXT: xxlxor 35, 35, 35
+; POWERPC_32-NEXT: lxvwsx 0, 3, 4
+; POWERPC_32-NEXT: xxmrghw 34, 1, 0
+; POWERPC_32-NEXT: vcmpequh 2, 2, 3
+; POWERPC_32-NEXT: xxlnor 34, 34, 34
+; POWERPC_32-NEXT: vmrghh 2, 2, 2
+; POWERPC_32-NEXT: stxv 34, -32(1)
+; POWERPC_32-NEXT: lwz 3, -20(1)
+; POWERPC_32-NEXT: lwz 4, -24(1)
+; POWERPC_32-NEXT: clrlwi 3, 3, 31
+; POWERPC_32-NEXT: rlwimi 3, 4, 1, 30, 30
+; POWERPC_32-NEXT: lwz 4, -28(1)
+; POWERPC_32-NEXT: rlwimi 3, 4, 2, 29, 29
+; POWERPC_32-NEXT: lwz 4, -32(1)
+; POWERPC_32-NEXT: rlwimi 3, 4, 3, 28, 28
+; POWERPC_32-NEXT: popcntw 3, 3
+; POWERPC_32-NEXT: blr
+entry:
+ %0 = load <4 x i16>, ptr %colauths, align 2, !tbaa !5
+ %1 = icmp ne <4 x i16> %0, zeroinitializer
+ %2 = bitcast <4 x i1> %1 to i4
+ %3 = tail call range(i4 0, 5) i4 @llvm.ctpop.i4(i4 %2)
+ %4 = zext nneg i4 %3 to i32
+ ret i32 %4
}
+declare i4 @llvm.ctpop.i4(i4) #1
+
!5 = !{!6, !6, i64 0}
!6 = !{!"short", !7, i64 0}
!7 = !{!"omnipotent char", !8, i64 0}
!8 = !{!"Simple C/C++ TBAA"}
-!9 = distinct !{!9, !10, !11, !12}
-!10 = !{!"llvm.loop.mustprogress"}
-!11 = !{!"llvm.loop.isvectorized", i32 1}
-!12 = !{!"llvm.loop.unroll.runtime.disable"}
-!13 = distinct !{!13, !10, !11, !12}
-!14 = distinct !{!14, !10, !12, !11}
From 9b7b3828716a127e6a9155b516f624a0f5a30887 Mon Sep 17 00:00:00 2001
From: Simon Pilgrim <llvm-dev at redking.me.uk>
Date: Wed, 6 Aug 2025 11:39:35 +0100
Subject: [PATCH 060/171] Fix MSVC truncation to char warning. NFC.
---
llvm/lib/Target/AArch64/AArch64InstrInfo.cpp | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/llvm/lib/Target/AArch64/AArch64InstrInfo.cpp b/llvm/lib/Target/AArch64/AArch64InstrInfo.cpp
index 98ebd512b0b75..fb59c9f131fb2 100644
--- a/llvm/lib/Target/AArch64/AArch64InstrInfo.cpp
+++ b/llvm/lib/Target/AArch64/AArch64InstrInfo.cpp
@@ -5883,7 +5883,7 @@ static void appendConstantExpr(SmallVectorImpl<char> &Expr, int64_t Constant,
// Convenience function to create a DWARF expression for a register.
static void appendReadRegExpr(SmallVectorImpl<char> &Expr, unsigned RegNum) {
- Expr.push_back(dwarf::DW_OP_bregx);
+ Expr.push_back((char)dwarf::DW_OP_bregx);
appendLEB128<LEB128Sign::Unsigned>(Expr, RegNum);
Expr.push_back(0);
}
From 777c320e6c96e2de9d4c6dc52d6a85a5ef3b8569 Mon Sep 17 00:00:00 2001
From: Florian Hahn <flo at fhahn.com>
Date: Wed, 6 Aug 2025 11:52:08 +0100
Subject: [PATCH 061/171] [VPlan] Address comments missed in #142309.
Address additional comments from
https://github.com/llvm/llvm-project/pull/142309.
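As a worked instance of the renamed transform (numbers purely illustrative, not from the patch): with an original trip count of 100, VF = 8 and UF = 2, VFxUF = 16, so the materialized constant vector trip count is (100 udiv 16) * 16 = 96; non-constant trip counts now bail out of the transform early.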
---
llvm/lib/Transforms/Vectorize/LoopVectorize.cpp | 10 ++++++----
llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp | 13 ++++++++++---
llvm/lib/Transforms/Vectorize/VPlanTransforms.h | 7 ++++---
3 files changed, 20 insertions(+), 10 deletions(-)
diff --git a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
index a52aa8420b301..307b5cf1a2c50 100644
--- a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
+++ b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
@@ -10321,8 +10321,9 @@ bool LoopVectorizePass::processLoop(Loop *L) {
// TODO: Move to general VPlan pipeline once epilogue loops are also
// supported.
- VPlanTransforms::runPass(VPlanTransforms::materializeVectorTripCount,
- BestPlan, VF.Width, IC, PSE);
+ VPlanTransforms::runPass(
+ VPlanTransforms::materializeConstantVectorTripCount, BestPlan,
+ VF.Width, IC, PSE);
LVP.executePlan(VF.Width, IC, BestPlan, Unroller, DT, false);
@@ -10393,8 +10394,9 @@ bool LoopVectorizePass::processLoop(Loop *L) {
Checks, BestPlan);
// TODO: Move to general VPlan pipeline once epilogue loops are also
// supported.
- VPlanTransforms::runPass(VPlanTransforms::materializeVectorTripCount,
- BestPlan, VF.Width, IC, PSE);
+ VPlanTransforms::runPass(
+ VPlanTransforms::materializeConstantVectorTripCount, BestPlan,
+ VF.Width, IC, PSE);
LVP.executePlan(VF.Width, IC, BestPlan, LB, DT, false);
++LoopsVectorized;
diff --git a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
index a7965a053e6e3..57f8e32029cbf 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
@@ -3222,7 +3222,7 @@ void VPlanTransforms::materializeBroadcasts(VPlan &Plan) {
}
}
-void VPlanTransforms::materializeVectorTripCount(
+void VPlanTransforms::materializeConstantVectorTripCount(
VPlan &Plan, ElementCount BestVF, unsigned BestUF,
PredicatedScalarEvolution &PSE) {
assert(Plan.hasVF(BestVF) && "BestVF is not available in Plan");
@@ -3230,19 +3230,26 @@ void VPlanTransforms::materializeVectorTripCount(
VPValue *TC = Plan.getTripCount();
// Skip cases for which the trip count may be non-trivial to materialize.
+ // I.e., when a scalar tail is absent - due to tail folding, or when a scalar
+ // tail is required.
if (!Plan.hasScalarTail() ||
Plan.getMiddleBlock()->getSingleSuccessor() ==
Plan.getScalarPreheader() ||
!TC->isLiveIn())
return;
+
// Materialize vector trip counts for constants early if it can simply
// be computed as (Original TC / VF * UF) * VF * UF.
+ // TODO: Compute vector trip counts for loops requiring a scalar epilogue and
+ // tail-folded loops.
ScalarEvolution &SE = *PSE.getSE();
auto *TCScev = SE.getSCEV(TC->getLiveInIRValue());
+ if (!isa<SCEVConstant>(TCScev))
+ return;
const SCEV *VFxUF = SE.getElementCount(TCScev->getType(), BestVF * BestUF);
auto VecTCScev = SE.getMulExpr(SE.getUDivExpr(TCScev, VFxUF), VFxUF);
- if (auto *NewC = dyn_cast<SCEVConstant>(VecTCScev))
- Plan.getVectorTripCount().setUnderlyingValue(NewC->getValue());
+ if (auto *ConstVecTC = dyn_cast<SCEVConstant>(VecTCScev))
+ Plan.getVectorTripCount().setUnderlyingValue(ConstVecTC->getValue());
}
void VPlanTransforms::materializeBackedgeTakenCount(VPlan &Plan,
diff --git a/llvm/lib/Transforms/Vectorize/VPlanTransforms.h b/llvm/lib/Transforms/Vectorize/VPlanTransforms.h
index 5943684e17a76..ecaca72e8e7ed 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanTransforms.h
+++ b/llvm/lib/Transforms/Vectorize/VPlanTransforms.h
@@ -252,9 +252,10 @@ struct VPlanTransforms {
// Materialize vector trip counts for constants early if it can simply be
// computed as (Original TC / VF * UF) * VF * UF.
- static void materializeVectorTripCount(VPlan &Plan, ElementCount BestVF,
- unsigned BestUF,
- PredicatedScalarEvolution &PSE);
+ static void
+ materializeConstantVectorTripCount(VPlan &Plan, ElementCount BestVF,
+ unsigned BestUF,
+ PredicatedScalarEvolution &PSE);
/// Materialize the backedge-taken count to be computed explicitly using
/// VPInstructions.
From 2b4b3fd03f716b9ddbb2a69ccfbe144312bedd12 Mon Sep 17 00:00:00 2001
From: Michael Buch <michaelbuch12 at gmail.com>
Date: Wed, 6 Aug 2025 09:56:29 +0100
Subject: [PATCH 062/171] [lldb][test] Re-enable TestQueueFromStdModule.py
Tried this with newer Clang versions locally on my Darwin machine and the test passes. Try re-enabling it again.
---
.../import-std-module/queue/TestQueueFromStdModule.py | 5 -----
1 file changed, 5 deletions(-)
diff --git a/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py b/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py
index 95aaa8e7aac88..0d89b62390c50 100644
--- a/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py
+++ b/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py
@@ -10,11 +10,6 @@
class TestQueue(TestBase):
@add_test_categories(["libc++"])
@skipIf(compiler=no_match("clang"))
- @skipIf(
- compiler="clang",
- compiler_version=[">", "16.0"],
- bugnumber="https://github.com/llvm/llvm-project/issues/68968",
- )
@skipIf(
compiler="clang",
compiler_version=["<", "17.0"],
From b242150b075a8a720b00821682a9469258bbcd30 Mon Sep 17 00:00:00 2001
From: Michael Buch <michaelbuch12 at gmail.com>
Date: Wed, 6 Aug 2025 12:19:04 +0100
Subject: [PATCH 063/171] Revert "[lldb][test] Re-enable
TestQueueFromStdModule.py"
This reverts commit 2b4b3fd03f716b9ddbb2a69ccfbe144312bedd12.
Turns out the CI still fails with this test enabled:
```
11:08:50 File "/Users/ec2-user/jenkins/workspace/llvm.org/as-lldb-cmake/llvm-project/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py", line 37, in test
11:08:50 self.expect_expr(
11:08:50 File "/Users/ec2-user/jenkins/workspace/llvm.org/as-lldb-cmake/llvm-project/lldb/packages/Python/lldbsuite/test/lldbtest.py", line 2571, in expect_expr
11:08:50 value_check.check_value(self, eval_result, str(eval_result))
11:08:50 File "/Users/ec2-user/jenkins/workspace/llvm.org/as-lldb-cmake/llvm-project/lldb/packages/Python/lldbsuite/test/lldbtest.py", line 301, in check_value
11:08:50 test_base.assertSuccess(val.GetError())
11:08:50 File "/Users/ec2-user/jenkins/workspace/llvm.org/as-lldb-cmake/llvm-project/lldb/packages/Python/lldbsuite/test/lldbtest.py", line 2606, in assertSuccess
11:08:50 self.fail(self._formatMessage(msg, "'{}' is not success".format(error)))
11:08:50 AssertionError: 'error: /Applications/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.2.sdk/usr/include/module.modulemap:93:11: header 'stdarg.h' not found
11:08:50 93 | header "stdarg.h" // note: supplied by the compiler
11:08:50 | ^
11:08:50 /Users/ec2-user/jenkins/workspace/llvm.org/as-lldb-cmake/lldb-build/lib/clang/22/include/stdint.h:56:16: submodule of top-level module 'Darwin' implicitly imported here
11:08:50 56 | # include_next <stdint.h>
11:08:50 | ^
11:08:50 error: While building module 'std' imported from <lldb wrapper prefix>:42:
```
---
.../import-std-module/queue/TestQueueFromStdModule.py | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py b/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py
index 0d89b62390c50..95aaa8e7aac88 100644
--- a/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py
+++ b/lldb/test/API/commands/expression/import-std-module/queue/TestQueueFromStdModule.py
@@ -10,6 +10,11 @@
class TestQueue(TestBase):
@add_test_categories(["libc++"])
@skipIf(compiler=no_match("clang"))
+ @skipIf(
+ compiler="clang",
+ compiler_version=[">", "16.0"],
+ bugnumber="https://github.com/llvm/llvm-project/issues/68968",
+ )
@skipIf(
compiler="clang",
compiler_version=["<", "17.0"],
>From d8f896172da036da99a158331856a85cc328fcab Mon Sep 17 00:00:00 2001
From: Ricardo Jesus <rjj at nvidia.com>
Date: Wed, 6 Aug 2025 13:00:28 +0100
Subject: [PATCH 064/171] [AArch64] Improve lowering of scalar abs(sub(a, b)).
(#151180)
This patch avoids a comparison against zero when lowering abs(sub(a, b))
patterns, instead reusing the condition codes generated by a subs of the
operands directly.
For example, currently:
```
sxtb w8, w0
sub w8, w8, w1, sxtb
cmp w8, #0
cneg w0, w8, mi
```
becomes:
```
sxtb w8, w0
subs w8, w8, w1, sxtb
cneg w0, w8, mi
```
Together with #151177, this should handle the remaining patterns in
#118413.
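The safety condition can be illustrated with a small standalone sketch (the
enum is a simplified stand-in for AArch64CC::CondCode and the helper name is
made up): SUBS(x, y) and SUBS(SUB(x, y), 0) always agree on the N and Z flags
but may disagree on C and V, so the fold is restricted to condition codes
that read only N or Z.
```cpp
#include <cstdio>

// Simplified stand-in for AArch64CC::CondCode.
enum CondCode { EQ, NE, MI, PL, GE, LT, VS, VC };

// True if CC only reads the N or Z flag, so replacing the flag-producer
// SUBS(SUB(x, y), 0) with SUBS(x, y) cannot change the selected value.
static bool ccReadsOnlyNZ(CondCode CC) {
  switch (CC) {
  case EQ: // Z == 1
  case NE: // Z == 0
  case MI: // N == 1
  case PL: // N == 0
    return true;
  default: // GE/LT/VS/VC and friends read C or V; keep the compare with zero.
    return false;
  }
}

int main() {
  printf("MI safe: %d, GE safe: %d\n", ccReadsOnlyNZ(MI), ccReadsOnlyNZ(GE));
}
```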
---
.../Target/AArch64/AArch64ISelLowering.cpp | 23 ++++
llvm/test/CodeGen/AArch64/abds-neg.ll | 15 +--
llvm/test/CodeGen/AArch64/abds.ll | 36 ++----
llvm/test/CodeGen/AArch64/abdu-neg.ll | 15 +--
llvm/test/CodeGen/AArch64/abdu.ll | 38 ++----
.../CodeGen/AArch64/csel-subs-dag-combine.ll | 112 ++++++++++++++++++
llvm/test/CodeGen/AArch64/midpoint-int.ll | 28 ++---
7 files changed, 182 insertions(+), 85 deletions(-)
create mode 100644 llvm/test/CodeGen/AArch64/csel-subs-dag-combine.ll
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 43b9743639ab2..9f6e552ea8916 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -25455,6 +25455,29 @@ static SDValue performCSELCombine(SDNode *N,
}
}
+ // CSEL a, b, cc, SUBS(SUB(x,y), 0) -> CSEL a, b, cc, SUBS(x,y) if cc doesn't
+ // use overflow flags, to avoid the comparison with zero. In case of success,
+ // this also replaces the original SUB(x,y) with the newly created SUBS(x,y).
+ // NOTE: Perhaps in the future use performFlagSettingCombine to replace SUB
+ // nodes with their SUBS equivalent as is already done for other flag-setting
+ // operators, in which case doing the replacement here becomes redundant.
+ if (Cond.getOpcode() == AArch64ISD::SUBS && Cond->hasNUsesOfValue(1, 1) &&
+ isNullConstant(Cond.getOperand(1))) {
+ SDValue Sub = Cond.getOperand(0);
+ AArch64CC::CondCode CC =
+ static_cast<AArch64CC::CondCode>(N->getConstantOperandVal(2));
+ if (Sub.getOpcode() == ISD::SUB &&
+ (CC == AArch64CC::EQ || CC == AArch64CC::NE || CC == AArch64CC::MI ||
+ CC == AArch64CC::PL)) {
+ SDLoc DL(N);
+ SDValue Subs = DAG.getNode(AArch64ISD::SUBS, DL, Cond->getVTList(),
+ Sub.getOperand(0), Sub.getOperand(1));
+ DCI.CombineTo(Sub.getNode(), Subs);
+ DCI.CombineTo(Cond.getNode(), Subs, Subs.getValue(1));
+ return SDValue(N, 0);
+ }
+ }
+
// CSEL (LASTB P, Z), X, NE(ANY P) -> CLASTB P, X, Z
if (SDValue CondLast = foldCSELofLASTB(N, DAG))
return CondLast;
diff --git a/llvm/test/CodeGen/AArch64/abds-neg.ll b/llvm/test/CodeGen/AArch64/abds-neg.ll
index 75247823ee793..02c76ba7343a0 100644
--- a/llvm/test/CodeGen/AArch64/abds-neg.ll
+++ b/llvm/test/CodeGen/AArch64/abds-neg.ll
@@ -9,8 +9,7 @@ define i8 @abd_ext_i8(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_ext_i8:
; CHECK: // %bb.0:
; CHECK-NEXT: sxtb w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxtb
; CHECK-NEXT: cneg w0, w8, pl
; CHECK-NEXT: ret
%aext = sext i8 %a to i64
@@ -26,8 +25,7 @@ define i8 @abd_ext_i8_i16(i8 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_ext_i8_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: sxtb w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxth
; CHECK-NEXT: cneg w0, w8, pl
; CHECK-NEXT: ret
%aext = sext i8 %a to i64
@@ -43,8 +41,7 @@ define i8 @abd_ext_i8_undef(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_ext_i8_undef:
; CHECK: // %bb.0:
; CHECK-NEXT: sxtb w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxtb
; CHECK-NEXT: cneg w0, w8, pl
; CHECK-NEXT: ret
%aext = sext i8 %a to i64
@@ -60,8 +57,7 @@ define i16 @abd_ext_i16(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_ext_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: sxth w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxth
; CHECK-NEXT: cneg w0, w8, pl
; CHECK-NEXT: ret
%aext = sext i16 %a to i64
@@ -93,8 +89,7 @@ define i16 @abd_ext_i16_undef(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_ext_i16_undef:
; CHECK: // %bb.0:
; CHECK-NEXT: sxth w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxth
; CHECK-NEXT: cneg w0, w8, pl
; CHECK-NEXT: ret
%aext = sext i16 %a to i64
diff --git a/llvm/test/CodeGen/AArch64/abds.ll b/llvm/test/CodeGen/AArch64/abds.ll
index bbdb116851710..bf52e71ec21fe 100644
--- a/llvm/test/CodeGen/AArch64/abds.ll
+++ b/llvm/test/CodeGen/AArch64/abds.ll
@@ -9,8 +9,7 @@ define i8 @abd_ext_i8(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_ext_i8:
; CHECK: // %bb.0:
; CHECK-NEXT: sxtb w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxtb
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%aext = sext i8 %a to i64
@@ -25,8 +24,7 @@ define i8 @abd_ext_i8_i16(i8 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_ext_i8_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: sxtb w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxth
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%aext = sext i8 %a to i64
@@ -41,8 +39,7 @@ define i8 @abd_ext_i8_undef(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_ext_i8_undef:
; CHECK: // %bb.0:
; CHECK-NEXT: sxtb w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxtb
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%aext = sext i8 %a to i64
@@ -57,8 +54,7 @@ define i16 @abd_ext_i16(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_ext_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: sxth w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxth
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%aext = sext i16 %a to i64
@@ -88,8 +84,7 @@ define i16 @abd_ext_i16_undef(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_ext_i16_undef:
; CHECK: // %bb.0:
; CHECK-NEXT: sxth w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxth
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%aext = sext i16 %a to i64
@@ -215,8 +210,7 @@ define i8 @abd_minmax_i8(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_minmax_i8:
; CHECK: // %bb.0:
; CHECK-NEXT: sxtb w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxtb
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%min = call i8 @llvm.smin.i8(i8 %a, i8 %b)
@@ -229,8 +223,7 @@ define i16 @abd_minmax_i16(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_minmax_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: sxth w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxth
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%min = call i16 @llvm.smin.i16(i16 %a, i16 %b)
@@ -287,8 +280,7 @@ define i8 @abd_cmp_i8(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_cmp_i8:
; CHECK: // %bb.0:
; CHECK-NEXT: sxtb w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxtb
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%cmp = icmp sgt i8 %a, %b
@@ -302,8 +294,7 @@ define i16 @abd_cmp_i16(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_cmp_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: sxth w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxth
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%cmp = icmp sge i16 %a, %b
@@ -508,9 +499,8 @@ define i64 @vector_legalized(i16 %a, i16 %b) {
; CHECK: // %bb.0:
; CHECK-NEXT: movi v0.2d, #0000000000000000
; CHECK-NEXT: sxth w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxth
+; CHECK-NEXT: subs w8, w8, w1, sxth
; CHECK-NEXT: addp d0, v0.2d
-; CHECK-NEXT: cmp w8, #0
; CHECK-NEXT: cneg w8, w8, mi
; CHECK-NEXT: fmov x9, d0
; CHECK-NEXT: add x0, x9, x8
@@ -533,8 +523,7 @@ define i8 @abd_select_i8(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_select_i8:
; CHECK: // %bb.0:
; CHECK-NEXT: sxtb w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxtb
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%cmp = icmp slt i8 %a, %b
@@ -548,8 +537,7 @@ define i16 @abd_select_i16(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_select_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: sxth w8, w0
-; CHECK-NEXT: sub w8, w8, w1, sxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, sxth
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%cmp = icmp sle i16 %a, %b
diff --git a/llvm/test/CodeGen/AArch64/abdu-neg.ll b/llvm/test/CodeGen/AArch64/abdu-neg.ll
index d07f099a536ab..400031b64cb84 100644
--- a/llvm/test/CodeGen/AArch64/abdu-neg.ll
+++ b/llvm/test/CodeGen/AArch64/abdu-neg.ll
@@ -9,8 +9,7 @@ define i8 @abd_ext_i8(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_ext_i8:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xff
-; CHECK-NEXT: sub w8, w8, w1, uxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxtb
; CHECK-NEXT: cneg w0, w8, pl
; CHECK-NEXT: ret
%aext = zext i8 %a to i64
@@ -26,8 +25,7 @@ define i8 @abd_ext_i8_i16(i8 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_ext_i8_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xff
-; CHECK-NEXT: sub w8, w8, w1, uxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxth
; CHECK-NEXT: cneg w0, w8, pl
; CHECK-NEXT: ret
%aext = zext i8 %a to i64
@@ -43,8 +41,7 @@ define i8 @abd_ext_i8_undef(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_ext_i8_undef:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xff
-; CHECK-NEXT: sub w8, w8, w1, uxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxtb
; CHECK-NEXT: cneg w0, w8, pl
; CHECK-NEXT: ret
%aext = zext i8 %a to i64
@@ -60,8 +57,7 @@ define i16 @abd_ext_i16(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_ext_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xffff
-; CHECK-NEXT: sub w8, w8, w1, uxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxth
; CHECK-NEXT: cneg w0, w8, pl
; CHECK-NEXT: ret
%aext = zext i16 %a to i64
@@ -93,8 +89,7 @@ define i16 @abd_ext_i16_undef(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_ext_i16_undef:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xffff
-; CHECK-NEXT: sub w8, w8, w1, uxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxth
; CHECK-NEXT: cneg w0, w8, pl
; CHECK-NEXT: ret
%aext = zext i16 %a to i64
diff --git a/llvm/test/CodeGen/AArch64/abdu.ll b/llvm/test/CodeGen/AArch64/abdu.ll
index 1045ee20dc734..8d2b0b0742d7d 100644
--- a/llvm/test/CodeGen/AArch64/abdu.ll
+++ b/llvm/test/CodeGen/AArch64/abdu.ll
@@ -9,8 +9,7 @@ define i8 @abd_ext_i8(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_ext_i8:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xff
-; CHECK-NEXT: sub w8, w8, w1, uxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxtb
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%aext = zext i8 %a to i64
@@ -25,8 +24,7 @@ define i8 @abd_ext_i8_i16(i8 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_ext_i8_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xff
-; CHECK-NEXT: sub w8, w8, w1, uxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxth
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%aext = zext i8 %a to i64
@@ -41,8 +39,7 @@ define i8 @abd_ext_i8_undef(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_ext_i8_undef:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xff
-; CHECK-NEXT: sub w8, w8, w1, uxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxtb
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%aext = zext i8 %a to i64
@@ -57,8 +54,7 @@ define i16 @abd_ext_i16(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_ext_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xffff
-; CHECK-NEXT: sub w8, w8, w1, uxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxth
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%aext = zext i16 %a to i64
@@ -88,8 +84,7 @@ define i16 @abd_ext_i16_undef(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_ext_i16_undef:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xffff
-; CHECK-NEXT: sub w8, w8, w1, uxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxth
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%aext = zext i16 %a to i64
@@ -219,8 +214,7 @@ define i8 @abd_minmax_i8(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_minmax_i8:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xff
-; CHECK-NEXT: sub w8, w8, w1, uxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxtb
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%min = call i8 @llvm.umin.i8(i8 %a, i8 %b)
@@ -233,8 +227,7 @@ define i16 @abd_minmax_i16(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_minmax_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xffff
-; CHECK-NEXT: sub w8, w8, w1, uxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxth
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%min = call i16 @llvm.umin.i16(i16 %a, i16 %b)
@@ -293,8 +286,7 @@ define i8 @abd_cmp_i8(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_cmp_i8:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xff
-; CHECK-NEXT: sub w8, w8, w1, uxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxtb
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%cmp = icmp ugt i8 %a, %b
@@ -308,8 +300,7 @@ define i16 @abd_cmp_i16(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_cmp_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xffff
-; CHECK-NEXT: sub w8, w8, w1, uxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxth
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%cmp = icmp uge i16 %a, %b
@@ -373,10 +364,9 @@ define i64 @vector_legalized(i16 %a, i16 %b) {
; CHECK: // %bb.0:
; CHECK-NEXT: movi v0.2d, #0000000000000000
; CHECK-NEXT: and w8, w0, #0xffff
-; CHECK-NEXT: sub w8, w8, w1, uxth
-; CHECK-NEXT: cmp w8, #0
-; CHECK-NEXT: addp d0, v0.2d
+; CHECK-NEXT: subs w8, w8, w1, uxth
; CHECK-NEXT: cneg w8, w8, mi
+; CHECK-NEXT: addp d0, v0.2d
; CHECK-NEXT: fmov x9, d0
; CHECK-NEXT: add x0, x9, x8
; CHECK-NEXT: ret
@@ -398,8 +388,7 @@ define i8 @abd_select_i8(i8 %a, i8 %b) nounwind {
; CHECK-LABEL: abd_select_i8:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xff
-; CHECK-NEXT: sub w8, w8, w1, uxtb
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxtb
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%cmp = icmp ult i8 %a, %b
@@ -413,8 +402,7 @@ define i16 @abd_select_i16(i16 %a, i16 %b) nounwind {
; CHECK-LABEL: abd_select_i16:
; CHECK: // %bb.0:
; CHECK-NEXT: and w8, w0, #0xffff
-; CHECK-NEXT: sub w8, w8, w1, uxth
-; CHECK-NEXT: cmp w8, #0
+; CHECK-NEXT: subs w8, w8, w1, uxth
; CHECK-NEXT: cneg w0, w8, mi
; CHECK-NEXT: ret
%cmp = icmp ule i16 %a, %b
diff --git a/llvm/test/CodeGen/AArch64/csel-subs-dag-combine.ll b/llvm/test/CodeGen/AArch64/csel-subs-dag-combine.ll
new file mode 100644
index 0000000000000..5036be9c45e69
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/csel-subs-dag-combine.ll
@@ -0,0 +1,112 @@
+; RUN: llc -debug-only=isel -o /dev/null < %s 2>&1 | FileCheck %s
+
+; REQUIRES: asserts
+
+; These tests ensure that we don't combine
+; CSEL a, b, cc, SUBS(SUB(x,y), 0) -> CSEL a, b, cc, SUBS(x,y)
+; if the flags set by SUBS(SUB(x,y), 0) have more than one use.
+;
+; This restriction exists because combining SUBS(SUB(x,y), 0) -> SUBS(x,y) is
+; only valid if there are no users of the overflow flags (C/V) generated by the
+; SUBS. Currently, we only check the flags used by the CSEL, and therefore we
+; conservatively reject cases where the SUBS's flags have other uses.
+
+target triple = "aarch64-unknown-linux-gnu"
+
+; CHECK-LABEL: Legalized selection DAG: %bb.0 'combine_subs:'
+; CHECK-NEXT: SelectionDAG has 13 nodes:
+; CHECK-NEXT: t0: ch,glue = EntryToken
+; CHECK-NEXT: t2: i32,ch = CopyFromReg t0, Register:i32 %0
+; CHECK-NEXT: t4: i32,ch = CopyFromReg t0, Register:i32 %1
+; CHECK-NEXT: t5: i32 = sub t2, t4
+; CHECK-NEXT: t14: i32,i32 = AArch64ISD::SUBS t5, Constant:i32<0>
+; CHECK-NEXT: t16: i32 = AArch64ISD::CSEL t2, t4, Constant:i32<1>, t14:1
+; CHECK-NEXT: t11: ch,glue = CopyToReg t0, Register:i32 $w0, t16
+; CHECK-NEXT: t12: ch = AArch64ISD::RET_GLUE t11, Register:i32 $w0, t11:1
+
+; CHECK-LABEL: Optimized legalized selection DAG: %bb.0 'combine_subs:'
+; CHECK-NEXT: SelectionDAG has 11 nodes:
+; CHECK-NEXT: t0: ch,glue = EntryToken
+; CHECK-NEXT: t2: i32,ch = CopyFromReg t0, Register:i32 %0
+; CHECK-NEXT: t4: i32,ch = CopyFromReg t0, Register:i32 %1
+; CHECK-NEXT: t18: i32,i32 = AArch64ISD::SUBS t2, t4
+; CHECK-NEXT: t16: i32 = AArch64ISD::CSEL t2, t4, Constant:i32<1>, t18:1
+; CHECK-NEXT: t11: ch,glue = CopyToReg t0, Register:i32 $w0, t16
+; CHECK-NEXT: t12: ch = AArch64ISD::RET_GLUE t11, Register:i32 $w0, t11:1
+
+define i32 @combine_subs(i32 %a, i32 %b) {
+ %sub = sub i32 %a, %b
+ %cc = icmp ne i32 %sub, 0
+ %sel = select i1 %cc, i32 %a, i32 %b
+ ret i32 %sel
+}
+
+; CHECK-LABEL: Legalized selection DAG: %bb.0 'combine_subs_multiple_sub_uses:'
+; CHECK-NEXT: SelectionDAG has 14 nodes:
+; CHECK-NEXT: t0: ch,glue = EntryToken
+; CHECK-NEXT: t2: i32,ch = CopyFromReg t0, Register:i32 %0
+; CHECK-NEXT: t4: i32,ch = CopyFromReg t0, Register:i32 %1
+; CHECK-NEXT: t5: i32 = sub t2, t4
+; CHECK-NEXT: t15: i32,i32 = AArch64ISD::SUBS t5, Constant:i32<0>
+; CHECK-NEXT: t17: i32 = AArch64ISD::CSEL t2, t4, Constant:i32<1>, t15:1
+; CHECK-NEXT: t10: i32 = add t17, t5
+; CHECK-NEXT: t12: ch,glue = CopyToReg t0, Register:i32 $w0, t10
+; CHECK-NEXT: t13: ch = AArch64ISD::RET_GLUE t12, Register:i32 $w0, t12:1
+
+; CHECK-LABEL: Optimized legalized selection DAG: %bb.0 'combine_subs_multiple_sub_uses:'
+; CHECK-NEXT: SelectionDAG has 12 nodes:
+; CHECK-NEXT: t0: ch,glue = EntryToken
+; CHECK-NEXT: t2: i32,ch = CopyFromReg t0, Register:i32 %0
+; CHECK-NEXT: t4: i32,ch = CopyFromReg t0, Register:i32 %1
+; CHECK-NEXT: t17: i32 = AArch64ISD::CSEL t2, t4, Constant:i32<1>, t19:1
+; CHECK-NEXT: t10: i32 = add t17, t19
+; CHECK-NEXT: t12: ch,glue = CopyToReg t0, Register:i32 $w0, t10
+; CHECK-NEXT: t19: i32,i32 = AArch64ISD::SUBS t2, t4
+; CHECK-NEXT: t13: ch = AArch64ISD::RET_GLUE t12, Register:i32 $w0, t12:1
+
+define i32 @combine_subs_multiple_sub_uses(i32 %a, i32 %b) {
+ %sub = sub i32 %a, %b
+ %cc = icmp ne i32 %sub, 0
+ %sel = select i1 %cc, i32 %a, i32 %b
+ %add = add i32 %sel, %sub
+ ret i32 %add
+}
+
+; CHECK-LABEL: Legalized selection DAG: %bb.0 'do_not_combine_subs_multiple_flag_uses:'
+; CHECK-NEXT: SelectionDAG has 19 nodes:
+; CHECK-NEXT: t0: ch,glue = EntryToken
+; CHECK-NEXT: t2: i32,ch = CopyFromReg t0, Register:i32 %0
+; CHECK-NEXT: t4: i32,ch = CopyFromReg t0, Register:i32 %1
+; CHECK-NEXT: t24: i32 = AArch64ISD::CSEL t2, t4, Constant:i32<1>, t21:1
+; CHECK-NEXT: t6: i32,ch = CopyFromReg t0, Register:i32 %2
+; CHECK-NEXT: t8: i32,ch = CopyFromReg t0, Register:i32 %3
+; CHECK-NEXT: t23: i32 = AArch64ISD::CSEL t6, t8, Constant:i32<1>, t21:1
+; CHECK-NEXT: t15: i32 = add t24, t23
+; CHECK-NEXT: t17: ch,glue = CopyToReg t0, Register:i32 $w0, t15
+; CHECK-NEXT: t9: i32 = sub t2, t4
+; CHECK-NEXT: t21: i32,i32 = AArch64ISD::SUBS t9, Constant:i32<0>
+; CHECK-NEXT: t18: ch = AArch64ISD::RET_GLUE t17, Register:i32 $w0, t17:1
+
+; CHECK-LABEL: Optimized legalized selection DAG: %bb.0 'do_not_combine_subs_multiple_flag_uses:'
+; CHECK-NEXT: SelectionDAG has 19 nodes:
+; CHECK-NEXT: t0: ch,glue = EntryToken
+; CHECK-NEXT: t2: i32,ch = CopyFromReg t0, Register:i32 %0
+; CHECK-NEXT: t4: i32,ch = CopyFromReg t0, Register:i32 %1
+; CHECK-NEXT: t24: i32 = AArch64ISD::CSEL t2, t4, Constant:i32<1>, t21:1
+; CHECK-NEXT: t6: i32,ch = CopyFromReg t0, Register:i32 %2
+; CHECK-NEXT: t8: i32,ch = CopyFromReg t0, Register:i32 %3
+; CHECK-NEXT: t23: i32 = AArch64ISD::CSEL t6, t8, Constant:i32<1>, t21:1
+; CHECK-NEXT: t15: i32 = add t24, t23
+; CHECK-NEXT: t17: ch,glue = CopyToReg t0, Register:i32 $w0, t15
+; CHECK-NEXT: t9: i32 = sub t2, t4
+; CHECK-NEXT: t21: i32,i32 = AArch64ISD::SUBS t9, Constant:i32<0>
+; CHECK-NEXT: t18: ch = AArch64ISD::RET_GLUE t17, Register:i32 $w0, t17:1
+
+define i32 @do_not_combine_subs_multiple_flag_uses(i32 %a, i32 %b, i32 %c, i32 %d) {
+ %sub = sub i32 %a, %b
+ %cc = icmp ne i32 %sub, 0
+ %sel = select i1 %cc, i32 %a, i32 %b
+ %other = select i1 %cc, i32 %c, i32 %d
+ %add = add i32 %sel, %other
+ ret i32 %add
+}
diff --git a/llvm/test/CodeGen/AArch64/midpoint-int.ll b/llvm/test/CodeGen/AArch64/midpoint-int.ll
index 15c1dffae749e..d881e998db2bc 100644
--- a/llvm/test/CodeGen/AArch64/midpoint-int.ll
+++ b/llvm/test/CodeGen/AArch64/midpoint-int.ll
@@ -255,12 +255,11 @@ define i64 @scalar_i64_signed_mem_mem(ptr %a1_addr, ptr %a2_addr) nounwind {
define i16 @scalar_i16_signed_reg_reg(i16 %a1, i16 %a2) nounwind {
; CHECK-LABEL: scalar_i16_signed_reg_reg:
; CHECK: // %bb.0:
-; CHECK-NEXT: sxth w9, w1
-; CHECK-NEXT: sxth w10, w0
+; CHECK-NEXT: sxth w9, w0
; CHECK-NEXT: mov w8, #-1 // =0xffffffff
-; CHECK-NEXT: subs w9, w10, w9
-; CHECK-NEXT: cneg w9, w9, mi
+; CHECK-NEXT: subs w9, w9, w1, sxth
; CHECK-NEXT: cneg w8, w8, le
+; CHECK-NEXT: cneg w9, w9, mi
; CHECK-NEXT: lsr w9, w9, #1
; CHECK-NEXT: madd w0, w9, w8, w0
; CHECK-NEXT: ret
@@ -278,12 +277,11 @@ define i16 @scalar_i16_signed_reg_reg(i16 %a1, i16 %a2) nounwind {
define i16 @scalar_i16_unsigned_reg_reg(i16 %a1, i16 %a2) nounwind {
; CHECK-LABEL: scalar_i16_unsigned_reg_reg:
; CHECK: // %bb.0:
-; CHECK-NEXT: and w9, w1, #0xffff
-; CHECK-NEXT: and w10, w0, #0xffff
+; CHECK-NEXT: and w9, w0, #0xffff
; CHECK-NEXT: mov w8, #-1 // =0xffffffff
-; CHECK-NEXT: subs w9, w10, w9
-; CHECK-NEXT: cneg w9, w9, mi
+; CHECK-NEXT: subs w9, w9, w1, uxth
; CHECK-NEXT: cneg w8, w8, ls
+; CHECK-NEXT: cneg w9, w9, mi
; CHECK-NEXT: lsr w9, w9, #1
; CHECK-NEXT: madd w0, w9, w8, w0
; CHECK-NEXT: ret
@@ -382,12 +380,11 @@ define i16 @scalar_i16_signed_mem_mem(ptr %a1_addr, ptr %a2_addr) nounwind {
define i8 @scalar_i8_signed_reg_reg(i8 %a1, i8 %a2) nounwind {
; CHECK-LABEL: scalar_i8_signed_reg_reg:
; CHECK: // %bb.0:
-; CHECK-NEXT: sxtb w9, w1
-; CHECK-NEXT: sxtb w10, w0
+; CHECK-NEXT: sxtb w9, w0
; CHECK-NEXT: mov w8, #-1 // =0xffffffff
-; CHECK-NEXT: subs w9, w10, w9
-; CHECK-NEXT: cneg w9, w9, mi
+; CHECK-NEXT: subs w9, w9, w1, sxtb
; CHECK-NEXT: cneg w8, w8, le
+; CHECK-NEXT: cneg w9, w9, mi
; CHECK-NEXT: lsr w9, w9, #1
; CHECK-NEXT: madd w0, w9, w8, w0
; CHECK-NEXT: ret
@@ -405,12 +402,11 @@ define i8 @scalar_i8_signed_reg_reg(i8 %a1, i8 %a2) nounwind {
define i8 @scalar_i8_unsigned_reg_reg(i8 %a1, i8 %a2) nounwind {
; CHECK-LABEL: scalar_i8_unsigned_reg_reg:
; CHECK: // %bb.0:
-; CHECK-NEXT: and w9, w1, #0xff
-; CHECK-NEXT: and w10, w0, #0xff
+; CHECK-NEXT: and w9, w0, #0xff
; CHECK-NEXT: mov w8, #-1 // =0xffffffff
-; CHECK-NEXT: subs w9, w10, w9
-; CHECK-NEXT: cneg w9, w9, mi
+; CHECK-NEXT: subs w9, w9, w1, uxtb
; CHECK-NEXT: cneg w8, w8, ls
+; CHECK-NEXT: cneg w9, w9, mi
; CHECK-NEXT: lsr w9, w9, #1
; CHECK-NEXT: madd w0, w9, w8, w0
; CHECK-NEXT: ret
>From e99c565cd2f750c5ec3a71f60e306a5912d8cec9 Mon Sep 17 00:00:00 2001
From: Tim Renouf <tim.renouf at amd.com>
Date: Wed, 6 Aug 2025 13:25:45 +0100
Subject: [PATCH 065/171] MC,AMDGPU: Don't pad .text with s_code_end if it
would otherwise be empty (#147980)
We don't want that padding in a module that only contains data, not
code.
Also fix MCSection::hasInstructions() so it works with the asm streamer
too.
---
llvm/lib/MC/MCAsmStreamer.cpp | 5 +++++
llvm/lib/Target/AMDGPU/AMDGPUAsmPrinter.cpp | 8 ++++++--
llvm/test/CodeGen/AMDGPU/empty-text.ll | 9 +++++++++
3 files changed, 20 insertions(+), 2 deletions(-)
create mode 100644 llvm/test/CodeGen/AMDGPU/empty-text.ll
diff --git a/llvm/lib/MC/MCAsmStreamer.cpp b/llvm/lib/MC/MCAsmStreamer.cpp
index 93614cd61bf6e..9a5e07095fa5a 100644
--- a/llvm/lib/MC/MCAsmStreamer.cpp
+++ b/llvm/lib/MC/MCAsmStreamer.cpp
@@ -2432,6 +2432,11 @@ void MCAsmStreamer::AddEncodingComment(const MCInst &Inst,
void MCAsmStreamer::emitInstruction(const MCInst &Inst,
const MCSubtargetInfo &STI) {
+ if (CurFrag) {
+ MCSection *Sec = getCurrentSectionOnly();
+ Sec->setHasInstructions(true);
+ }
+
if (MAI->isAIX() && CurFrag)
// Now that a machine instruction has been assembled into this section, make
// a line entry for any .loc directive that has been seen.
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUAsmPrinter.cpp b/llvm/lib/Target/AMDGPU/AMDGPUAsmPrinter.cpp
index 668139383f56c..2a324e5683910 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUAsmPrinter.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUAsmPrinter.cpp
@@ -486,12 +486,16 @@ bool AMDGPUAsmPrinter::doFinalization(Module &M) {
// Pad with s_code_end to help tools and guard against instruction prefetch
// causing stale data in caches. Arguably this should be done by the linker,
// which is why this isn't done for Mesa.
+ // Don't do it if there is no code.
const MCSubtargetInfo &STI = *getGlobalSTI();
if ((AMDGPU::isGFX10Plus(STI) || AMDGPU::isGFX90A(STI)) &&
(STI.getTargetTriple().getOS() == Triple::AMDHSA ||
STI.getTargetTriple().getOS() == Triple::AMDPAL)) {
- OutStreamer->switchSection(getObjFileLowering().getTextSection());
- getTargetStreamer()->EmitCodeEnd(STI);
+ MCSection *TextSect = getObjFileLowering().getTextSection();
+ if (TextSect->hasInstructions()) {
+ OutStreamer->switchSection(TextSect);
+ getTargetStreamer()->EmitCodeEnd(STI);
+ }
}
// Assign expressions which can only be resolved when all other functions are
diff --git a/llvm/test/CodeGen/AMDGPU/empty-text.ll b/llvm/test/CodeGen/AMDGPU/empty-text.ll
new file mode 100644
index 0000000000000..8aa8600cacd23
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/empty-text.ll
@@ -0,0 +1,9 @@
+; Test that there is no s_code_end padding if .text is otherwise empty.
+
+; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1200 < %s | FileCheck %s --check-prefixes=GCN
+
+ at globalVar = global i32 37
+
+declare amdgpu_ps void @funcDecl()
+
+; GCN-NOT: .fill
>From 97cf061e83a0d0cada38d8a2d75c8a840b01ba26 Mon Sep 17 00:00:00 2001
From: Michael Kruse <llvm-project at meinersbur.de>
Date: Wed, 6 Aug 2025 14:13:35 +0200
Subject: [PATCH 066/171] [flang][cmake] Re-apply "Fix bbc dependencies
(#125822)"
It was overwritten by an automatic merge gone wrong for #124416.
---
flang/tools/bbc/CMakeLists.txt | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/flang/tools/bbc/CMakeLists.txt b/flang/tools/bbc/CMakeLists.txt
index 469266cc81558..5fb2c55933b09 100644
--- a/flang/tools/bbc/CMakeLists.txt
+++ b/flang/tools/bbc/CMakeLists.txt
@@ -30,6 +30,11 @@ target_link_libraries(bbc PRIVATE
flangFrontend
flangPasses
FlangOpenMPTransforms
+ FortranCommon
+ FortranParser
+ FortranEvaluate
+ FortranSemantics
+ FortranLower
)
mlir_target_link_libraries(bbc PRIVATE
@@ -37,9 +42,4 @@ mlir_target_link_libraries(bbc PRIVATE
${extension_libs}
MLIRAffineToStandard
MLIRSCFToControlFlow
- FortranSupport
- FortranParser
- FortranEvaluate
- FortranSemantics
- FortranLower
)
>From 56ba1181e9e98c54f22181723c1bc5bb8068ae66 Mon Sep 17 00:00:00 2001
From: Joseph Huber <huberjn at outlook.com>
Date: Wed, 6 Aug 2025 07:30:07 -0500
Subject: [PATCH 067/171] [Clang] Fix warning on synthetic offload arch
argument in host only mode (#151969)
Summary:
These arguments are synthetically generated and should always be
considered used. This was emitting a warning on the new driver.
---
clang/lib/Driver/Driver.cpp | 1 +
clang/test/Driver/hip-options.hip | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/clang/lib/Driver/Driver.cpp b/clang/lib/Driver/Driver.cpp
index 586f287843f3e..8c0bba938a09b 100644
--- a/clang/lib/Driver/Driver.cpp
+++ b/clang/lib/Driver/Driver.cpp
@@ -1012,6 +1012,7 @@ inferOffloadToolchains(Compilation &C, Action::OffloadKind Kind) {
Arg *A = new Arg(Opt, C.getArgs().getArgString(Index), Index,
C.getArgs().MakeArgString(Triple.split("-").first),
C.getArgs().MakeArgString("--offload-arch=" + Arch));
+ A->claim();
C.getArgs().append(A);
C.getArgs().AddSynthesizedArg(A);
Triples.insert(Triple);
diff --git a/clang/test/Driver/hip-options.hip b/clang/test/Driver/hip-options.hip
index ba23bc2d59b56..6206020d76db6 100644
--- a/clang/test/Driver/hip-options.hip
+++ b/clang/test/Driver/hip-options.hip
@@ -241,7 +241,7 @@
// Check --offload-compress --offload-jobs=N does not cause warning.
// RUN: %clang -### -Werror --target=x86_64-unknown-linux-gnu -nogpuinc -nogpulib \
// RUN: --offload-arch=gfx1100 --offload-compress --offload-host-only -M %s \
-// RUN: --offload-jobs=4
+// RUN: --offload-jobs=4 --offload-new-driver
// Check --offload-jobs=N option.
>From 86c192694d8c7ecf83d5fd706c861773c347b02c Mon Sep 17 00:00:00 2001
From: Tony Varghese <tonypalampalliyil at gmail.com>
Date: Wed, 6 Aug 2025 18:06:48 +0530
Subject: [PATCH 068/171] [NFC][PowerPC] Rebase the anonymous xxeval patterns
to use the new XXEvalPattern class (#151462)
This change rebases the anonymous `xxeval` patterns onto the new
XXEvalPattern class introduced in [[PowerPC] Exploit xxeval instruction
for ternary patterns - ternary(A, X,
and(B,C))](https://github.com/llvm/llvm-project/pull/141733#top).
Co-authored-by: Tony Varghese <tony.varghese at ibm.com>
---
llvm/lib/Target/PowerPC/PPCInstrP10.td | 75 +++++++++++++-------------
1 file changed, 36 insertions(+), 39 deletions(-)
diff --git a/llvm/lib/Target/PowerPC/PPCInstrP10.td b/llvm/lib/Target/PowerPC/PPCInstrP10.td
index 1dc485d802075..98dd8464c0ac8 100644
--- a/llvm/lib/Target/PowerPC/PPCInstrP10.td
+++ b/llvm/lib/Target/PowerPC/PPCInstrP10.td
@@ -2175,10 +2175,7 @@ let AddedComplexity = 400, Predicates = [IsISA3_1, HasVSX] in {
// - Other vector types [v16i8, v8i16] require COPY_TO_REGCLASS to/from VRRC
// =============================================================================
-class XXEvalPattern<dag pattern, bits<8> imm>
- : Pat<(v4i32 pattern), (XXEVAL $vA, $vB, $vC, imm)> {}
-
-class XXEvalPatterns<ValueType Vt, dag InputPattern, bits<8> Imm>
+class XXEvalPattern<ValueType Vt, dag InputPattern, bits<8> Imm>
: Pat<(Vt InputPattern),
!if(!or(!eq(Vt, v4i32), !eq(Vt, v2i64)),
// VSRC path: direct XXEVAL for v4i32 and v2i64
@@ -2246,26 +2243,26 @@ def VEqv
// =============================================================================
multiclass XXEvalTernarySelectAnd<ValueType Vt> {
// Pattern: A ? XOR(B,C) : AND(B,C) XXEVAL immediate value: 22
- def : XXEvalPatterns<
+ def : XXEvalPattern<
Vt, (vselect Vt:$vA, (VXor Vt:$vB, Vt:$vC), (VAnd Vt:$vB, Vt:$vC)),
22>;
// Pattern: A ? NOR(B,C) : AND(B,C) XXEVAL immediate value: 24
- def : XXEvalPatterns<
+ def : XXEvalPattern<
Vt, (vselect Vt:$vA, (VNor Vt:$vB, Vt:$vC), (VAnd Vt:$vB, Vt:$vC)),
24>;
// Pattern: A ? EQV(B,C) : AND(B,C) XXEVAL immediate value: 25
- def : XXEvalPatterns<
+ def : XXEvalPattern<
Vt, (vselect Vt:$vA, (VEqv Vt:$vB, Vt:$vC), (VAnd Vt:$vB, Vt:$vC)),
25>;
// Pattern: A ? NOT(C) : AND(B,C) XXEVAL immediate value: 26
- def : XXEvalPatterns<
+ def : XXEvalPattern<
Vt, (vselect Vt:$vA, (VNot Vt:$vC), (VAnd Vt:$vB, Vt:$vC)), 26>;
// Pattern: A ? NOT(B) : AND(B,C) XXEVAL immediate value: 28
- def : XXEvalPatterns<
+ def : XXEvalPattern<
Vt, (vselect Vt:$vA, (VNot Vt:$vB), (VAnd Vt:$vB, Vt:$vC)), 28>;
}
@@ -2299,83 +2296,83 @@ let Predicates = [PrefixInstrs, HasP10Vector] in {
// Anonymous patterns for XXEVAL
// AND
// and(A, B, C)
- def : XXEvalPattern<(and v4i32:$vA, (and v4i32:$vB, v4i32:$vC)), 1>;
+ def : XXEvalPattern<v4i32, (and v4i32:$vA, (and v4i32:$vB, v4i32:$vC)), 1>;
// and(A, xor(B, C))
- def : XXEvalPattern<(and v4i32:$vA, (xor v4i32:$vB, v4i32:$vC)), 6>;
+ def : XXEvalPattern<v4i32, (and v4i32:$vA, (xor v4i32:$vB, v4i32:$vC)), 6>;
// and(A, or(B, C))
- def : XXEvalPattern<(and v4i32:$vA, (or v4i32:$vB, v4i32:$vC)), 7>;
+ def : XXEvalPattern<v4i32, (and v4i32:$vA, (or v4i32:$vB, v4i32:$vC)), 7>;
// and(A, nor(B, C))
- def : XXEvalPattern<(and v4i32:$vA, (vnot (or v4i32:$vB, v4i32:$vC))), 8>;
+ def : XXEvalPattern<v4i32, (and v4i32:$vA, (vnot (or v4i32:$vB, v4i32:$vC))), 8>;
// and(A, eqv(B, C))
- def : XXEvalPattern<(and v4i32:$vA, (vnot (xor v4i32:$vB, v4i32:$vC))), 9>;
+ def : XXEvalPattern<v4i32, (and v4i32:$vA, (vnot (xor v4i32:$vB, v4i32:$vC))), 9>;
// and(A, nand(B, C))
- def : XXEvalPattern<(and v4i32:$vA, (vnot (and v4i32:$vB, v4i32:$vC))), 14>;
+ def : XXEvalPattern<v4i32, (and v4i32:$vA, (vnot (and v4i32:$vB, v4i32:$vC))), 14>;
// NAND
// nand(A, B, C)
- def : XXEvalPattern<(vnot (and v4i32:$vA, (and v4i32:$vB, v4i32:$vC))),
+ def : XXEvalPattern<v4i32, (vnot (and v4i32:$vA, (and v4i32:$vB, v4i32:$vC))),
!sub(255, 1)>;
// nand(A, xor(B, C))
- def : XXEvalPattern<(vnot (and v4i32:$vA, (xor v4i32:$vB, v4i32:$vC))),
+ def : XXEvalPattern<v4i32, (vnot (and v4i32:$vA, (xor v4i32:$vB, v4i32:$vC))),
!sub(255, 6)>;
// nand(A, or(B, C))
- def : XXEvalPattern<(vnot (and v4i32:$vA, (or v4i32:$vB, v4i32:$vC))),
+ def : XXEvalPattern<v4i32, (vnot (and v4i32:$vA, (or v4i32:$vB, v4i32:$vC))),
!sub(255, 7)>;
// nand(A, nor(B, C))
- def : XXEvalPattern<(or (vnot v4i32:$vA), (or v4i32:$vB, v4i32:$vC)),
+ def : XXEvalPattern<v4i32, (or (vnot v4i32:$vA), (or v4i32:$vB, v4i32:$vC)),
!sub(255, 8)>;
// nand(A, eqv(B, C))
- def : XXEvalPattern<(or (vnot v4i32:$vA), (xor v4i32:$vB, v4i32:$vC)),
+ def : XXEvalPattern<v4i32, (or (vnot v4i32:$vA), (xor v4i32:$vB, v4i32:$vC)),
!sub(255, 9)>;
// nand(A, nand(B, C))
- def : XXEvalPattern<(or (vnot v4i32:$vA), (and v4i32:$vB, v4i32:$vC)),
+ def : XXEvalPattern<v4i32, (or (vnot v4i32:$vA), (and v4i32:$vB, v4i32:$vC)),
!sub(255, 14)>;
// EQV
// (eqv A, B, C)
- def : XXEvalPattern<(or (and v4i32:$vA, (and v4i32:$vB, v4i32:$vC)),
+ def : XXEvalPattern<v4i32, (or (and v4i32:$vA, (and v4i32:$vB, v4i32:$vC)),
(vnot (or v4i32:$vA, (or v4i32:$vB, v4i32:$vC)))),
150>;
// (eqv A, (and B, C))
- def : XXEvalPattern<(vnot (xor v4i32:$vA, (and v4i32:$vB, v4i32:$vC))), 225>;
+ def : XXEvalPattern<v4i32, (vnot (xor v4i32:$vA, (and v4i32:$vB, v4i32:$vC))), 225>;
// (eqv A, (or B, C))
- def : XXEvalPattern<(vnot (xor v4i32:$vA, (or v4i32:$vB, v4i32:$vC))), 135>;
+ def : XXEvalPattern<v4i32, (vnot (xor v4i32:$vA, (or v4i32:$vB, v4i32:$vC))), 135>;
// NOR
// (nor A, B, C)
- def : XXEvalPattern<(vnot (or v4i32:$vA, (or v4i32:$vB, v4i32:$vC))), 128>;
+ def : XXEvalPattern<v4i32, (vnot (or v4i32:$vA, (or v4i32:$vB, v4i32:$vC))), 128>;
// (nor A, (and B, C))
- def : XXEvalPattern<(vnot (or v4i32:$vA, (and v4i32:$vB, v4i32:$vC))), 224>;
+ def : XXEvalPattern<v4i32, (vnot (or v4i32:$vA, (and v4i32:$vB, v4i32:$vC))), 224>;
// (nor A, (eqv B, C))
- def : XXEvalPattern<(and (vnot v4i32:$vA), (xor v4i32:$vB, v4i32:$vC)), 96>;
+ def : XXEvalPattern<v4i32, (and (vnot v4i32:$vA), (xor v4i32:$vB, v4i32:$vC)), 96>;
// (nor A, (nand B, C))
- def : XXEvalPattern<(and (vnot v4i32:$vA), (and v4i32:$vB, v4i32:$vC)), 16>;
+ def : XXEvalPattern<v4i32, (and (vnot v4i32:$vA), (and v4i32:$vB, v4i32:$vC)), 16>;
// (nor A, (nor B, C))
- def : XXEvalPattern<(and (vnot v4i32:$vA), (or v4i32:$vB, v4i32:$vC)), 112>;
+ def : XXEvalPattern<v4i32, (and (vnot v4i32:$vA), (or v4i32:$vB, v4i32:$vC)), 112>;
// (nor A, (xor B, C))
- def : XXEvalPattern<(vnot (or v4i32:$vA, (xor v4i32:$vB, v4i32:$vC))), 144>;
+ def : XXEvalPattern<v4i32, (vnot (or v4i32:$vA, (xor v4i32:$vB, v4i32:$vC))), 144>;
// OR
// (or A, B, C)
- def : XXEvalPattern<(or v4i32:$vA, (or v4i32:$vB, v4i32:$vC)), 127>;
+ def : XXEvalPattern<v4i32, (or v4i32:$vA, (or v4i32:$vB, v4i32:$vC)), 127>;
// (or A, (and B, C))
- def : XXEvalPattern<(or v4i32:$vA, (and v4i32:$vB, v4i32:$vC)), 31>;
+ def : XXEvalPattern<v4i32, (or v4i32:$vA, (and v4i32:$vB, v4i32:$vC)), 31>;
// (or A, (eqv B, C))
- def : XXEvalPattern<(or v4i32:$vA, (vnot (xor v4i32:$vB, v4i32:$vC))), 159>;
+ def : XXEvalPattern<v4i32, (or v4i32:$vA, (vnot (xor v4i32:$vB, v4i32:$vC))), 159>;
// (or A, (nand B, C))
- def : XXEvalPattern<(or v4i32:$vA, (vnot (and v4i32:$vB, v4i32:$vC))), 239>;
+ def : XXEvalPattern<v4i32, (or v4i32:$vA, (vnot (and v4i32:$vB, v4i32:$vC))), 239>;
// (or A, (nor B, C))
- def : XXEvalPattern<(or v4i32:$vA, (vnot (or v4i32:$vB, v4i32:$vC))), 143>;
+ def : XXEvalPattern<v4i32, (or v4i32:$vA, (vnot (or v4i32:$vB, v4i32:$vC))), 143>;
// (or A, (xor B, C))
- def : XXEvalPattern<(or v4i32:$vA, (xor v4i32:$vB, v4i32:$vC)), 111>;
+ def : XXEvalPattern<v4i32, (or v4i32:$vA, (xor v4i32:$vB, v4i32:$vC)), 111>;
// XOR
// (xor A, B, C)
- def : XXEvalPattern<(xor v4i32:$vA, (xor v4i32:$vB, v4i32:$vC)), 105>;
+ def : XXEvalPattern<v4i32, (xor v4i32:$vA, (xor v4i32:$vB, v4i32:$vC)), 105>;
// (xor A, (and B, C))
- def : XXEvalPattern<(xor v4i32:$vA, (and v4i32:$vB, v4i32:$vC)), 30>;
+ def : XXEvalPattern<v4i32, (xor v4i32:$vA, (and v4i32:$vB, v4i32:$vC)), 30>;
// (xor A, (or B, C))
- def : XXEvalPattern<(xor v4i32:$vA, (or v4i32:$vB, v4i32:$vC)), 120>;
+ def : XXEvalPattern<v4i32, (xor v4i32:$vA, (or v4i32:$vB, v4i32:$vC)), 120>;
// XXEval Patterns for ternary Operations.
foreach Ty = [v4i32, v2i64, v8i16, v16i8] in {
>From 8a0643611545d0540b4267668c66ca92d3ff6e70 Mon Sep 17 00:00:00 2001
From: Ricardo Jesus <rjj at nvidia.com>
Date: Wed, 6 Aug 2025 13:54:41 +0100
Subject: [PATCH 069/171] [AArch64] Update test from #151180. (#152299)
---
llvm/test/CodeGen/AArch64/midpoint-int.ll | 22 ++++++++++------------
1 file changed, 10 insertions(+), 12 deletions(-)
diff --git a/llvm/test/CodeGen/AArch64/midpoint-int.ll b/llvm/test/CodeGen/AArch64/midpoint-int.ll
index d881e998db2bc..79bba5363188b 100644
--- a/llvm/test/CodeGen/AArch64/midpoint-int.ll
+++ b/llvm/test/CodeGen/AArch64/midpoint-int.ll
@@ -301,14 +301,13 @@ define i16 @scalar_i16_unsigned_reg_reg(i16 %a1, i16 %a2) nounwind {
define i16 @scalar_i16_signed_mem_reg(ptr %a1_addr, i16 %a2) nounwind {
; CHECK-LABEL: scalar_i16_signed_mem_reg:
; CHECK: // %bb.0:
-; CHECK-NEXT: sxth w9, w1
-; CHECK-NEXT: ldrsh w10, [x0]
+; CHECK-NEXT: ldrsh w9, [x0]
; CHECK-NEXT: mov w8, #-1 // =0xffffffff
-; CHECK-NEXT: subs w9, w10, w9
-; CHECK-NEXT: cneg w9, w9, mi
+; CHECK-NEXT: subs w10, w9, w1, sxth
; CHECK-NEXT: cneg w8, w8, le
-; CHECK-NEXT: lsr w9, w9, #1
-; CHECK-NEXT: madd w0, w9, w8, w10
+; CHECK-NEXT: cneg w10, w10, mi
+; CHECK-NEXT: lsr w10, w10, #1
+; CHECK-NEXT: madd w0, w10, w8, w9
; CHECK-NEXT: ret
%a1 = load i16, ptr %a1_addr
%t3 = icmp sgt i16 %a1, %a2 ; signed
@@ -426,14 +425,13 @@ define i8 @scalar_i8_unsigned_reg_reg(i8 %a1, i8 %a2) nounwind {
define i8 @scalar_i8_signed_mem_reg(ptr %a1_addr, i8 %a2) nounwind {
; CHECK-LABEL: scalar_i8_signed_mem_reg:
; CHECK: // %bb.0:
-; CHECK-NEXT: sxtb w9, w1
-; CHECK-NEXT: ldrsb w10, [x0]
+; CHECK-NEXT: ldrsb w9, [x0]
; CHECK-NEXT: mov w8, #-1 // =0xffffffff
-; CHECK-NEXT: subs w9, w10, w9
-; CHECK-NEXT: cneg w9, w9, mi
+; CHECK-NEXT: subs w10, w9, w1, sxtb
; CHECK-NEXT: cneg w8, w8, le
-; CHECK-NEXT: lsr w9, w9, #1
-; CHECK-NEXT: madd w0, w9, w8, w10
+; CHECK-NEXT: cneg w10, w10, mi
+; CHECK-NEXT: lsr w10, w10, #1
+; CHECK-NEXT: madd w0, w10, w8, w9
; CHECK-NEXT: ret
%a1 = load i8, ptr %a1_addr
%t3 = icmp sgt i8 %a1, %a2 ; signed
>From 2c2a368276487da6a3b97c78968258ce61fcc73b Mon Sep 17 00:00:00 2001
From: Michael Kruse <llvm-project at meinersbur.de>
Date: Wed, 6 Aug 2025 15:10:22 +0200
Subject: [PATCH 070/171] Revert "[flang][cmake] Re-apply "Fix bbc dependencies
(#125822)""
This reverts commit 97cf061e83a0d0cada38d8a2d75c8a840b01ba26.
---
flang/tools/bbc/CMakeLists.txt | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/flang/tools/bbc/CMakeLists.txt b/flang/tools/bbc/CMakeLists.txt
index 5fb2c55933b09..469266cc81558 100644
--- a/flang/tools/bbc/CMakeLists.txt
+++ b/flang/tools/bbc/CMakeLists.txt
@@ -30,11 +30,6 @@ target_link_libraries(bbc PRIVATE
flangFrontend
flangPasses
FlangOpenMPTransforms
- FortranCommon
- FortranParser
- FortranEvaluate
- FortranSemantics
- FortranLower
)
mlir_target_link_libraries(bbc PRIVATE
@@ -42,4 +37,9 @@ mlir_target_link_libraries(bbc PRIVATE
${extension_libs}
MLIRAffineToStandard
MLIRSCFToControlFlow
+ FortranSupport
+ FortranParser
+ FortranEvaluate
+ FortranSemantics
+ FortranLower
)
>From 0d7c8691fb5831807b76d89eb397659c6f132011 Mon Sep 17 00:00:00 2001
From: Aiden Grossman <aidengrossman at google.com>
Date: Wed, 6 Aug 2025 06:19:01 -0700
Subject: [PATCH 071/171] [CI] Make monolithic-* scripts only run Github steps
on Github
We run a couple of GitHub-only steps in the monolithic-* scripts, like
writing to $GITHUB_STEP_SUMMARY and denoting log groups. These can throw
errors when running the scripts locally or as part of other CI systems
(like buildbot). This patch makes it so that we only run these commands
when in the relevant environment. We also add a $POSTCOMMIT_CI check for
groups so that the annotated builder in buildbot works properly with
these scripts.
Reviewers: cmtice, ldionne, Endilll, gburgessiv, Keenuts, dschuff, lnihlen
Pull Request: https://github.com/llvm/llvm-project/pull/152197
---
.ci/monolithic-linux.sh | 42 +++++++++++++++++++++------------------
.ci/monolithic-windows.sh | 27 +++++++++++++++++--------
2 files changed, 42 insertions(+), 27 deletions(-)
diff --git a/.ci/monolithic-linux.sh b/.ci/monolithic-linux.sh
index 6db24d894eb73..fffd8fef21382 100755
--- a/.ci/monolithic-linux.sh
+++ b/.ci/monolithic-linux.sh
@@ -38,11 +38,25 @@ function at-exit {
# If building fails there will be no results files.
shopt -s nullglob
- python3 "${MONOREPO_ROOT}"/.ci/generate_test_report_github.py ":penguin: Linux x64 Test Results" \
- $retcode "${BUILD_DIR}"/test-results.*.xml >> $GITHUB_STEP_SUMMARY
+ if [[ "$GITHUB_STEP_SUMMARY" != "" ]]; then
+ python3 "${MONOREPO_ROOT}"/.ci/generate_test_report_github.py ":penguin: Linux x64 Test Results" \
+ $retcode "${BUILD_DIR}"/test-results.*.xml >> $GITHUB_STEP_SUMMARY
+ fi
}
trap at-exit EXIT
+function start-group {
+ groupname=$1
+ if [[ "$GITHUB_ACTIONS" != "" ]]; then
+ echo "::endgroup"
+ echo "::group::$groupname"
+ elif [[ "$POSTCOMMIT_CI" != "" ]]; then
+ echo "@@@$STEP@@@"
+ else
+ echo "Starting $groupname"
+ fi
+}
+
projects="${1}"
targets="${2}"
runtimes="${3}"
@@ -52,7 +66,7 @@ enable_cir="${6}"
lit_args="-v --xunit-xml-output ${BUILD_DIR}/test-results.xml --use-unique-output-file-name --timeout=1200 --time-tests"
-echo "::group::cmake"
+start-group "CMake"
export PIP_BREAK_SYSTEM_PACKAGES=1
pip install -q -r "${MONOREPO_ROOT}"/.ci/all_requirements.txt
@@ -83,49 +97,39 @@ cmake -S "${MONOREPO_ROOT}"/llvm -B "${BUILD_DIR}" \
-D LLDB_ENFORCE_STRICT_TEST_REQUIREMENTS=ON \
-D CMAKE_INSTALL_PREFIX="${INSTALL_DIR}"
-echo "::endgroup::"
-echo "::group::ninja"
+start-group "ninja"
# Targets are not escaped as they are passed as separate arguments.
ninja -C "${BUILD_DIR}" -k 0 ${targets}
-echo "::endgroup::"
-
if [[ "${runtime_targets}" != "" ]]; then
- echo "::group::ninja runtimes"
+ start-group "ninja Runtimes"
ninja -C "${BUILD_DIR}" ${runtime_targets}
-
- echo "::endgroup::"
fi
# Compiling runtimes with just-built Clang and running their tests
# as an additional testing for Clang.
if [[ "${runtime_targets_needs_reconfig}" != "" ]]; then
- echo "::group::cmake runtimes C++26"
+ start-group "CMake Runtimes C++26"
cmake \
-D LIBCXX_TEST_PARAMS="std=c++26" \
-D LIBCXXABI_TEST_PARAMS="std=c++26" \
"${BUILD_DIR}"
- echo "::endgroup::"
- echo "::group::ninja runtimes C++26"
+ start-group "ninja Runtimes C++26"
ninja -C "${BUILD_DIR}" ${runtime_targets_needs_reconfig}
- echo "::endgroup::"
- echo "::group::cmake runtimes clang modules"
+ start-group "CMake Runtimes Clang Modules"
cmake \
-D LIBCXX_TEST_PARAMS="enable_modules=clang" \
-D LIBCXXABI_TEST_PARAMS="enable_modules=clang" \
"${BUILD_DIR}"
- echo "::endgroup::"
- echo "::group::ninja runtimes clang modules"
+ start-group "ninja Runtimes Clang Modules"
ninja -C "${BUILD_DIR}" ${runtime_targets_needs_reconfig}
-
- echo "::endgroup::"
fi
diff --git a/.ci/monolithic-windows.sh b/.ci/monolithic-windows.sh
index 50a741677d734..da150829783df 100755
--- a/.ci/monolithic-windows.sh
+++ b/.ci/monolithic-windows.sh
@@ -32,16 +32,30 @@ function at-exit {
# If building fails there will be no results files.
shopt -s nullglob
-
- python "${MONOREPO_ROOT}"/.ci/generate_test_report_github.py ":window: Windows x64 Test Results" \
- $retcode "${BUILD_DIR}"/test-results.*.xml >> $GITHUB_STEP_SUMMARY
+
+ if [[ "$GITHUB_STEP_SUMMARY" != "" ]]; then
+ python "${MONOREPO_ROOT}"/.ci/generate_test_report_github.py ":window: Windows x64 Test Results" \
+ $retcode "${BUILD_DIR}"/test-results.*.xml >> $GITHUB_STEP_SUMMARY
+ fi
}
trap at-exit EXIT
+function start-group {
+ groupname=$1
+ if [[ "$GITHUB_ACTIONS" != "" ]]; then
+ echo "::endgroup"
+ echo "::group::$groupname"
+ elif [[ "$POSTCOMMIT_CI" != "" ]]; then
+ echo "@@@$STEP@@@"
+ else
+ echo "Starting $groupname"
+ fi
+}
+
projects="${1}"
targets="${2}"
-echo "::group::cmake"
+start-group "CMake"
pip install -q -r "${MONOREPO_ROOT}"/.ci/all_requirements.txt
export CC=cl
@@ -71,10 +85,7 @@ cmake -S "${MONOREPO_ROOT}"/llvm -B "${BUILD_DIR}" \
-D CMAKE_MODULE_LINKER_FLAGS="/MANIFEST:NO" \
-D CMAKE_SHARED_LINKER_FLAGS="/MANIFEST:NO"
-echo "::endgroup::"
-echo "::group::ninja"
+start-group "ninja"
# Targets are not escaped as they are passed as separate arguments.
ninja -C "${BUILD_DIR}" -k 0 ${targets}
-
-echo "::endgroup"
>From 79253cfe6bf20e4bd7e3e15fec3014f90056a718 Mon Sep 17 00:00:00 2001
From: Joseph Huber <huberjn at outlook.com>
Date: Wed, 6 Aug 2025 08:36:06 -0500
Subject: [PATCH 072/171] [libc] Fix integration tests on w64 amdgpu targets
(#152303)
Summary:
We need the static buffer backing `malloc` to be larger now that
allocations are properly aligned and many threads allocate from it. Also,
the `match_any` test was wrong because it assumed a 32-bit lanemask.
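For reference, the corrected expectation can be derived in a standalone
sketch (plain C++, not the libc test harness; the helper name is made up):
with a 64-lane wavefront the lanes with id >= 16 no longer form the 32-bit
mask 0xffff0000, so the expected value must depend on the lane count.
```cpp
#include <cstdint>
#include <cstdio>

// Mask of active lanes with id >= 16, for a wavefront of LaneCount lanes.
static uint64_t upperGroupMask(unsigned LaneCount) {
  uint64_t Full = LaneCount > 32 ? 0xffffffffffffffffull : 0xffffffffull;
  return Full & ~0xffffull; // drop lanes 0..15
}

int main() {
  printf("%llx\n", (unsigned long long)upperGroupMask(32)); // ffff0000
  printf("%llx\n", (unsigned long long)upperGroupMask(64)); // ffffffffffff0000
}
```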
---
libc/test/IntegrationTest/test.cpp | 6 +++---
libc/test/integration/src/__support/GPU/match.cpp | 4 +++-
2 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/libc/test/IntegrationTest/test.cpp b/libc/test/IntegrationTest/test.cpp
index 0e4825202a9e2..19eb255e57b04 100644
--- a/libc/test/IntegrationTest/test.cpp
+++ b/libc/test/IntegrationTest/test.cpp
@@ -63,8 +63,8 @@ int atexit(void (*func)(void)) { return LIBC_NAMESPACE::atexit(func); }
// which just hands out continuous blocks from a statically allocated chunk of
// memory.
-static constexpr uint64_t ALIGNMENT = alignof(long double);
-static constexpr uint64_t MEMORY_SIZE = 65336;
+static constexpr uint64_t ALIGNMENT = alignof(double);
+static constexpr uint64_t MEMORY_SIZE = 256 * 1024 /* 256 KiB */;
alignas(ALIGNMENT) static uint8_t memory[MEMORY_SIZE];
static size_t ptr = 0;
@@ -75,7 +75,7 @@ void *malloc(size_t size) {
size = (size + ALIGNMENT - 1) & ~(ALIGNMENT - 1);
size_t old_ptr =
ref.fetch_add(size, LIBC_NAMESPACE::cpp::MemoryOrder::RELAXED);
- if (static_cast<size_t>(old_ptr + size) > MEMORY_SIZE)
+ if (static_cast<size_t>(old_ptr + size) >= MEMORY_SIZE)
return nullptr;
return &memory[old_ptr];
}
diff --git a/libc/test/integration/src/__support/GPU/match.cpp b/libc/test/integration/src/__support/GPU/match.cpp
index 2d314c2805158..70c22b7f3a91b 100644
--- a/libc/test/integration/src/__support/GPU/match.cpp
+++ b/libc/test/integration/src/__support/GPU/match.cpp
@@ -21,7 +21,9 @@ static void test_match() {
gpu::match_any(mask, gpu::get_lane_id()));
EXPECT_EQ(mask, gpu::match_any(mask, 1));
- uint64_t expected = gpu::get_lane_id() < 16 ? 0xffff : 0xffff0000;
+ uint64_t full_mask =
+ gpu::get_lane_size() > 32 ? 0xffffffffffffffff : 0xffffffff;
+ uint64_t expected = gpu::get_lane_id() < 16 ? 0xffff : full_mask & ~0xffff;
EXPECT_EQ(expected, gpu::match_any(mask, gpu::get_lane_id() < 16));
EXPECT_EQ(mask, gpu::match_all(mask, 1));
EXPECT_EQ(0ull, gpu::match_all(mask, gpu::get_lane_id()));
>From 23b320311364f1bc1249500c7542d077d70098bf Mon Sep 17 00:00:00 2001
From: zhijian lin <zhijian at ca.ibm.com>
Date: Wed, 6 Aug 2025 09:36:37 -0400
Subject: [PATCH 073/171] [POWERPC] Fixes an error in the handling of the
MTVSRBMI instruction for big-endian (#151565)
This patch fixes a bug introduced by the patch [[PowerPC] using MTVSRBMI
instruction instead of constant pool in
power10+](https://github.com/llvm/llvm-project/pull/144084#top).
The issue arose because the layout of vector register elements differs
between little-endian and big-endian modes — specifically, the elements
appear in reverse order. This led to incorrect behavior when loading
constants using MTVSRBMI in big-endian configurations.
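The remapping is easy to see in a standalone sketch (made-up helper, not the
patched LLVM code): each element's constant bits are inserted at BitPos for
little-endian but at the mirrored position VTSize - EltWidth - BitPos for
big-endian, which is why the same v16i8 { 0xFF, 0, ... } constant yields
immediate 1 on little-endian and 32768 (mask bit 15) on big-endian in the
tests below.
```cpp
#include <cstdio>

static unsigned insertPos(bool IsLittleEndian, unsigned VTSize,
                          unsigned EltWidth, unsigned BitPos) {
  return IsLittleEndian ? BitPos : VTSize - EltWidth - BitPos;
}

int main() {
  // Element 0 of v16i8 { 0xFF, 0, ... }: VTSize = 128, EltWidth = 8.
  unsigned LE = insertPos(true, 128, 8, 0);  // bit 0: mask bit 0, imm 1
  unsigned BE = insertPos(false, 128, 8, 0); // bit 120: mask bit 15, imm 32768
  printf("LE bit %u, BE bit %u\n", LE, BE);
}
```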
---
llvm/lib/Target/PowerPC/PPCISelLowering.cpp | 15 ++--
llvm/test/CodeGen/PowerPC/mtvsrbmi.ll | 87 ++++++++++++++++++---
2 files changed, 86 insertions(+), 16 deletions(-)
diff --git a/llvm/lib/Target/PowerPC/PPCISelLowering.cpp b/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
index 30b5fd6ad1005..47f003801fa40 100644
--- a/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
+++ b/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
@@ -9593,12 +9593,14 @@ static bool isValidSplatLoad(const PPCSubtarget &Subtarget, const SDValue &Op,
return false;
}
-bool isValidMtVsrBmi(APInt &BitMask, BuildVectorSDNode &BVN) {
+bool isValidMtVsrBmi(APInt &BitMask, BuildVectorSDNode &BVN,
+ bool IsLittleEndian) {
assert(BVN.getNumOperands() > 0 && "Unexpected 0-size build vector");
BitMask.clearAllBits();
EVT VT = BVN.getValueType(0);
- APInt ConstValue(VT.getSizeInBits(), 0);
+ unsigned VTSize = VT.getSizeInBits();
+ APInt ConstValue(VTSize, 0);
unsigned EltWidth = VT.getScalarSizeInBits();
@@ -9608,8 +9610,10 @@ bool isValidMtVsrBmi(APInt &BitMask, BuildVectorSDNode &BVN) {
if (!CN)
return false;
-
- ConstValue.insertBits(CN->getAPIntValue().zextOrTrunc(EltWidth), BitPos);
+ // The elements in a vector register are ordered in reverse byte order
+ // between little-endian and big-endian modes.
+ ConstValue.insertBits(CN->getAPIntValue().zextOrTrunc(EltWidth),
+ IsLittleEndian ? BitPos : VTSize - EltWidth - BitPos);
BitPos += EltWidth;
}
@@ -9640,7 +9644,8 @@ SDValue PPCTargetLowering::LowerBUILD_VECTOR(SDValue Op,
// we do not convert it to MTVSRBMI.
// The xxleqv instruction sets a vector with all ones.
// The xxlxor instruction sets a vector with all zeros.
- if (isValidMtVsrBmi(BitMask, *BVN) && BitMask != 0 && BitMask != 0xffff) {
+ if (isValidMtVsrBmi(BitMask, *BVN, Subtarget.isLittleEndian()) &&
+ BitMask != 0 && BitMask != 0xffff) {
SDValue SDConstant = DAG.getTargetConstant(BitMask, dl, MVT::i32);
MachineSDNode *MSDNode =
DAG.getMachineNode(PPC::MTVSRBMI, dl, MVT::v16i8, SDConstant);
diff --git a/llvm/test/CodeGen/PowerPC/mtvsrbmi.ll b/llvm/test/CodeGen/PowerPC/mtvsrbmi.ll
index 232014db9a012..a9503f77c3090 100644
--- a/llvm/test/CodeGen/PowerPC/mtvsrbmi.ll
+++ b/llvm/test/CodeGen/PowerPC/mtvsrbmi.ll
@@ -2,22 +2,87 @@
; Verify whether the generated assembly for the following function includes the mtvsrbmi instruction.
; vector unsigned char v00FF()
; {
-; vector unsigned char x = { 0xFF, 0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0 };
-; return x;
+; vector unsigned char x = { 0xFF, 0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0 };
+; return x;
+; }
+; vector unsigned short short00FF()
+; {
+; vector unsigned short x = { 0xFF, 0,0,0, 0,0,0,0};
+; return x;
+; }
+; vector unsigned int int00FF()
+; {
+; vector unsigned int x = { 0xFF, 0,0,0};
+; return x;
+; }
+; vector unsigned long long longlong00FF()
+; {
+; vector unsigned long long x = { 0xFF, 0};
+; return x;
; }
; RUN: llc < %s -ppc-asm-full-reg-names -mtriple=powerpc-ibm-aix -mcpu=pwr10 -verify-machineinstrs \
-; RUN: | FileCheck %s --check-prefix=CHECK
+; RUN: | FileCheck %s --check-prefixes=CHECK,CHECK-BE
+
+; RUN: llc < %s -ppc-asm-full-reg-names -mtriple=powerpc64le-unknown-gnu-linux -mcpu=pwr10 -verify-machineinstrs \
+; RUN: | FileCheck %s --check-prefixes=CHECK,CHECK-LE
+
+; CHECK-NOT: .byte 255
+; CHECK-NOT: .byte 0
define dso_local noundef range(i8 -1, 1) <16 x i8> @_Z5v00FFv() {
-; CHECK-NOT: L..CPI0_0:
-; CHECK-NOT: .byte 255 # 0xff
-; CHECK-NOT: .byte 0 # 0x0
-
-; CHECK-LABEL: _Z5v00FFv:
-; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: mtvsrbmi v2, 1
-; CHECK-NEXT: blr
+; CHECK-BE-LABEL: _Z5v00FFv:
+; CHECK-BE: # %bb.0: # %entry
+; CHECK-BE-NEXT: mtvsrbmi v2, 32768
+; CHECK-BE-NEXT: blr
+;
+; CHECK-LE-LABEL: _Z5v00FFv:
+; CHECK-LE: # %bb.0: # %entry
+; CHECK-LE-NEXT: mtvsrbmi v2, 1
+; CHECK-LE-NEXT: blr
+
entry:
ret <16 x i8> <i8 -1, i8 0, i8 0, i8 0, i8 0, i8 0, i8 0, i8 0, i8 0, i8 0, i8 0, i8 0, i8 0, i8 0, i8 0, i8 0>
}
+
+define dso_local noundef range(i16 0, 256) <8 x i16> @_Z9short00FFv() {
+; CHECK-BE-LABEL: _Z9short00FFv:
+; CHECK-BE: # %bb.0: # %entry
+; CHECK-BE-NEXT: mtvsrbmi v2, 16384
+; CHECK-BE-NEXT: blr
+;
+; CHECK-LE-LABEL: _Z9short00FFv:
+; CHECK-LE: # %bb.0: # %entry
+; CHECK-LE-NEXT: mtvsrbmi v2, 1
+; CHECK-LE-NEXT: blr
+entry:
+ ret <8 x i16> <i16 255, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0>
+}
+
+define dso_local noundef range(i32 0, 256) <4 x i32> @_Z7int00FFv() {
+; CHECK-BE-LABEL: _Z7int00FFv:
+; CHECK-BE: # %bb.0: # %entry
+; CHECK-BE-NEXT: mtvsrbmi v2, 4096
+; CHECK-BE-NEXT: blr
+;
+; CHECK-LE-LABEL: _Z7int00FFv:
+; CHECK-LE: # %bb.0: # %entry
+; CHECK-LE-NEXT: mtvsrbmi v2, 1
+; CHECK-LE-NEXT: blr
+entry:
+ ret <4 x i32> <i32 255, i32 0, i32 0, i32 0>
+}
+
+define dso_local noundef range(i64 0, 256) <2 x i64> @_Z12longlong00FFv() {
+; CHECK-BE-LABEL: _Z12longlong00FFv:
+; CHECK-BE: # %bb.0: # %entry
+; CHECK-BE-NEXT: mtvsrbmi v2, 256
+; CHECK-BE-NEXT: blr
+;
+; CHECK-LE-LABEL: _Z12longlong00FFv:
+; CHECK-LE: # %bb.0: # %entry
+; CHECK-LE-NEXT: mtvsrbmi v2, 1
+; CHECK-LE-NEXT: blr
+entry:
+ ret <2 x i64> <i64 255, i64 0>
+}
>From 3f6f5b9121ce2d61c1ba5ae41e1fe8c1e15ca49b Mon Sep 17 00:00:00 2001
From: David Spickett <david.spickett at linaro.org>
Date: Wed, 6 Aug 2025 13:35:43 +0000
Subject: [PATCH 074/171] [llvm][docs] Tell people how to list runtime names
This list was already out of date, so I think it's better to show people
how to get CMake to print the list itself.
---
llvm/docs/CMake.rst | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/llvm/docs/CMake.rst b/llvm/docs/CMake.rst
index 17be41b20a12a..365365c74d65b 100644
--- a/llvm/docs/CMake.rst
+++ b/llvm/docs/CMake.rst
@@ -615,11 +615,11 @@ enabled sub-projects. Nearly all of these variable names begin with
.. note::
The list should not have duplicates with ``LLVM_ENABLE_PROJECTS``.
- The full list is:
-
- ``libc;libunwind;libcxxabi;libcxx;compiler-rt;openmp;llvm-libgcc;offload``
+ To list all possible runtimes, include an invalid name. For example
+ ``-DLLVM_ENABLE_RUNTIMES=notaruntime``. The resulting CMake error will list
+ the possible runtime names.
- To enable all of them, use:
+ To enable all of the runtimes, use:
``LLVM_ENABLE_RUNTIMES=all``
>From fa91dcbefd7d9f16b2c6d624b14a1c023745b78a Mon Sep 17 00:00:00 2001
From: Igor Wodiany <igor.wodiany at imgtec.com>
Date: Wed, 6 Aug 2025 14:40:03 +0100
Subject: [PATCH 075/171] [mlir][spirv] Add support for Invariant and Patch
decorations (#152301)
New tests were validated with `spirv-val`.
---
.../Target/SPIRV/Deserialization/Deserializer.cpp | 2 ++
mlir/lib/Target/SPIRV/Serialization/Serializer.cpp | 2 ++
mlir/test/Target/SPIRV/decorations.mlir | 14 ++++++++++++++
3 files changed, 18 insertions(+)
diff --git a/mlir/lib/Target/SPIRV/Deserialization/Deserializer.cpp b/mlir/lib/Target/SPIRV/Deserialization/Deserializer.cpp
index 750821833224f..c967e863554fc 100644
--- a/mlir/lib/Target/SPIRV/Deserialization/Deserializer.cpp
+++ b/mlir/lib/Target/SPIRV/Deserialization/Deserializer.cpp
@@ -344,6 +344,8 @@ LogicalResult spirv::Deserializer::processDecoration(ArrayRef<uint32_t> words) {
case spirv::Decoration::RestrictPointer:
case spirv::Decoration::NoContraction:
case spirv::Decoration::Constant:
+ case spirv::Decoration::Invariant:
+ case spirv::Decoration::Patch:
if (words.size() != 2) {
return emitError(unknownLoc, "OpDecoration with ")
<< decorationName << "needs a single target <id>";
diff --git a/mlir/lib/Target/SPIRV/Serialization/Serializer.cpp b/mlir/lib/Target/SPIRV/Serialization/Serializer.cpp
index 30536638b56f7..c049574fbc9e3 100644
--- a/mlir/lib/Target/SPIRV/Serialization/Serializer.cpp
+++ b/mlir/lib/Target/SPIRV/Serialization/Serializer.cpp
@@ -338,6 +338,8 @@ LogicalResult Serializer::processDecorationAttr(Location loc, uint32_t resultID,
case spirv::Decoration::NoContraction:
case spirv::Decoration::Constant:
case spirv::Decoration::Block:
+ case spirv::Decoration::Invariant:
+ case spirv::Decoration::Patch:
// For unit attributes and decoration attributes, the args list
// has no values so we do nothing.
if (isa<UnitAttr, DecorationAttr>(attr))
diff --git a/mlir/test/Target/SPIRV/decorations.mlir b/mlir/test/Target/SPIRV/decorations.mlir
index ee7ad814ea0cb..90ba690e50b73 100644
--- a/mlir/test/Target/SPIRV/decorations.mlir
+++ b/mlir/test/Target/SPIRV/decorations.mlir
@@ -58,6 +58,20 @@ spirv.module Logical GLSL450 requires #spirv.vce<v1.0, [Shader], []> {
// -----
+spirv.module Logical GLSL450 requires #spirv.vce<v1.0, [Shader, Tessellation, Linkage], []> {
+ // CHECK: patch
+ spirv.GlobalVariable @var {patch} : !spirv.ptr<vector<4xf32>, Input>
+}
+
+// -----
+
+spirv.module Logical GLSL450 requires #spirv.vce<v1.0, [Shader, Linkage], []> {
+ // CHECK: invariant
+ spirv.GlobalVariable @var {invariant} : !spirv.ptr<vector<2xf32>, Output>
+}
+
+// -----
+
spirv.module Logical GLSL450 requires #spirv.vce<v1.0, [Shader, Linkage], []> {
// CHECK: linkage_attributes = #spirv.linkage_attributes<linkage_name = "outSideGlobalVar1", linkage_type = <Import>>
spirv.GlobalVariable @var1 {
>From e80e7e717e8c2efe2c1712dc3bb4f25ba7ed11f6 Mon Sep 17 00:00:00 2001
From: Florian Hahn <flo at fhahn.com>
Date: Wed, 6 Aug 2025 14:43:03 +0100
Subject: [PATCH 076/171] [VPlan] Use scalar VPPhi instead of VPWidenPHIRecipe
in createPlainCFG. (#150847)
The initial VPlan closely reflects the original scalar loop, so using
VPWidenPHIRecipe here is premature. Widened phi recipes should only be
introduced together with other widened recipes.
PR: https://github.com/llvm/llvm-project/pull/150847
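As a side note, the extra `classof` overloads added to `VPPhi` in VPlan.h are what let `cast`/`dyn_cast` resolve from the various base-pointer types used elsewhere in the patch (e.g. `cast<VPPhi>(&R)` in VPlanPredicator.cpp). A minimal sketch of the idiom; the caller function is illustrative, not from the patch:

```cpp
// Sketch: VPRecipeBase derives from VPUser, so the classof(const VPUser *)
// overload lets us pattern-match scalar phis directly on recipes.
void visitPhis(VPBasicBlock *VPBB) {
  for (VPRecipeBase &R : VPBB->phis()) {
    auto *PhiR = dyn_cast<VPPhi>(&R); // resolves via classof(const VPUser *)
    if (!PhiR)
      continue;
    // PhiR carries one incoming operand per predecessor; header phis get
    // their operands filled in later by fixHeaderPhis().
  }
}
```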
---
.../Transforms/Vectorize/LoopVectorize.cpp | 3 ++-
llvm/lib/Transforms/Vectorize/VPlan.h | 18 ++++++++++++---
.../Vectorize/VPlanConstruction.cpp | 16 ++++++-------
.../Transforms/Vectorize/VPlanPredicator.cpp | 6 ++---
.../Transforms/Vectorize/VPlanTransforms.cpp | 23 +++++++++++--------
.../outer-loop-vec-phi-predecessor-order.ll | 2 +-
.../vplan-printing-outer-loop.ll | 4 ++--
.../Transforms/Vectorize/VPlanHCFGTest.cpp | 6 ++---
8 files changed, 46 insertions(+), 32 deletions(-)
diff --git a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
index 307b5cf1a2c50..9667b506e594f 100644
--- a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
+++ b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
@@ -8301,7 +8301,7 @@ VPRecipeBase *VPRecipeBuilder::tryToCreateWidenRecipe(VPSingleDefRecipe *R,
VPRecipeBase *Recipe;
Instruction *Instr = R->getUnderlyingInstr();
SmallVector<VPValue *, 4> Operands(R->operands());
- if (auto *PhiR = dyn_cast<VPWidenPHIRecipe>(R)) {
+ if (auto *PhiR = dyn_cast<VPPhi>(R)) {
VPBasicBlock *Parent = PhiR->getParent();
[[maybe_unused]] VPRegionBlock *LoopRegionOf =
Parent->getEnclosingLoopRegion();
@@ -8339,6 +8339,7 @@ VPRecipeBase *VPRecipeBuilder::tryToCreateWidenRecipe(VPSingleDefRecipe *R,
PhiRecipe->addOperand(Operands[1]);
return PhiRecipe;
}
+ assert(!R->isPhi() && "only VPPhi nodes expected at this point");
if (isa<TruncInst>(Instr) && (Recipe = tryToOptimizeInductionTruncate(
cast<TruncInst>(Instr), Operands, Range)))
diff --git a/llvm/lib/Transforms/Vectorize/VPlan.h b/llvm/lib/Transforms/Vectorize/VPlan.h
index 46dd11425aa2c..c42cdd5365108 100644
--- a/llvm/lib/Transforms/Vectorize/VPlan.h
+++ b/llvm/lib/Transforms/Vectorize/VPlan.h
@@ -1242,12 +1242,24 @@ struct LLVM_ABI_FOR_TEST VPPhi : public VPInstruction, public VPPhiAccessors {
: VPInstruction(Instruction::PHI, Operands, DL, Name) {}
static inline bool classof(const VPUser *U) {
- auto *R = dyn_cast<VPInstruction>(U);
- return R && R->getOpcode() == Instruction::PHI;
+ auto *VPI = dyn_cast<VPInstruction>(U);
+ return VPI && VPI->getOpcode() == Instruction::PHI;
+ }
+
+ static inline bool classof(const VPValue *V) {
+ auto *VPI = dyn_cast<VPInstruction>(V);
+ return VPI && VPI->getOpcode() == Instruction::PHI;
+ }
+
+ static inline bool classof(const VPSingleDefRecipe *SDR) {
+ auto *VPI = dyn_cast<VPInstruction>(SDR);
+ return VPI && VPI->getOpcode() == Instruction::PHI;
}
VPPhi *clone() override {
- return new VPPhi(operands(), getDebugLoc(), getName());
+ auto *PhiR = new VPPhi(operands(), getDebugLoc(), getName());
+ PhiR->setUnderlyingValue(getUnderlyingValue());
+ return PhiR;
}
void execute(VPTransformState &State) override;
diff --git a/llvm/lib/Transforms/Vectorize/VPlanConstruction.cpp b/llvm/lib/Transforms/Vectorize/VPlanConstruction.cpp
index 1b91901e25d08..7e8eff31c1fd3 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanConstruction.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlanConstruction.cpp
@@ -91,17 +91,15 @@ void PlainCFGBuilder::fixHeaderPhis() {
for (auto *Phi : PhisToFix) {
assert(IRDef2VPValue.count(Phi) && "Missing VPInstruction for PHINode.");
VPValue *VPVal = IRDef2VPValue[Phi];
- assert(isa<VPWidenPHIRecipe>(VPVal) &&
- "Expected WidenPHIRecipe for phi node.");
- auto *VPPhi = cast<VPWidenPHIRecipe>(VPVal);
- assert(VPPhi->getNumOperands() == 0 &&
- "Expected VPInstruction with no operands.");
+ assert(isa<VPPhi>(VPVal) && "Expected VPPhi for phi node.");
+ auto *PhiR = cast<VPPhi>(VPVal);
+ assert(PhiR->getNumOperands() == 0 && "Expected VPPhi with no operands.");
assert(isHeaderBB(Phi->getParent(), LI->getLoopFor(Phi->getParent())) &&
"Expected Phi in header block.");
assert(Phi->getNumOperands() == 2 &&
"header phi must have exactly 2 operands");
for (BasicBlock *Pred : predecessors(Phi->getParent()))
- VPPhi->addOperand(
+ PhiR->addOperand(
getOrCreateVPOperand(Phi->getIncomingValueForBlock(Pred)));
}
}
@@ -204,11 +202,11 @@ void PlainCFGBuilder::createVPInstructionsForVPBB(VPBasicBlock *VPBB,
VPSingleDefRecipe *NewR;
if (auto *Phi = dyn_cast<PHINode>(Inst)) {
- // Phi node's operands may have not been visited at this point. We create
+ // Phi node's operands may not have been visited at this point. We create
// an empty VPInstruction that we will fix once the whole plain CFG has
// been built.
- NewR = new VPWidenPHIRecipe(Phi, nullptr, Phi->getDebugLoc(), "vec.phi");
- VPBB->appendRecipe(NewR);
+ NewR = VPIRBuilder.createScalarPhi({}, Phi->getDebugLoc(), "vec.phi");
+ NewR->setUnderlyingValue(Phi);
if (isHeaderBB(Phi->getParent(), LI->getLoopFor(Phi->getParent()))) {
// Header phis need to be fixed after the VPBB for the latch has been
// created.
diff --git a/llvm/lib/Transforms/Vectorize/VPlanPredicator.cpp b/llvm/lib/Transforms/Vectorize/VPlanPredicator.cpp
index 3b3bbc312402e..862b9301e8ca5 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanPredicator.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlanPredicator.cpp
@@ -227,10 +227,10 @@ void VPPredicator::createSwitchEdgeMasks(VPInstruction *SI) {
}
void VPPredicator::convertPhisToBlends(VPBasicBlock *VPBB) {
- SmallVector<VPWidenPHIRecipe *> Phis;
+ SmallVector<VPPhi *> Phis;
for (VPRecipeBase &R : VPBB->phis())
- Phis.push_back(cast<VPWidenPHIRecipe>(&R));
- for (VPWidenPHIRecipe *PhiR : Phis) {
+ Phis.push_back(cast<VPPhi>(&R));
+ for (VPPhi *PhiR : Phis) {
// The non-header Phi is converted into a Blend recipe below,
// so we don't have to worry about the insertion order and we can just use
// the builder. At this point we generate the predication tree. There may
diff --git a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
index 57f8e32029cbf..1a71a75776380 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
@@ -63,17 +63,20 @@ bool VPlanTransforms::tryToConvertVPInstructionsToVPRecipes(
Instruction *Inst = cast<Instruction>(VPV->getUnderlyingValue());
VPRecipeBase *NewRecipe = nullptr;
- if (auto *VPPhi = dyn_cast<VPWidenPHIRecipe>(&Ingredient)) {
- auto *Phi = cast<PHINode>(VPPhi->getUnderlyingValue());
+ if (auto *PhiR = dyn_cast<VPPhi>(&Ingredient)) {
+ auto *Phi = cast<PHINode>(PhiR->getUnderlyingValue());
const auto *II = GetIntOrFpInductionDescriptor(Phi);
- if (!II)
- continue;
-
- VPValue *Start = Plan->getOrAddLiveIn(II->getStartValue());
- VPValue *Step =
- vputils::getOrCreateVPValueForSCEVExpr(*Plan, II->getStep(), SE);
- NewRecipe = new VPWidenIntOrFpInductionRecipe(
- Phi, Start, Step, &Plan->getVF(), *II, Ingredient.getDebugLoc());
+ if (!II) {
+ NewRecipe = new VPWidenPHIRecipe(Phi, nullptr, PhiR->getDebugLoc());
+ for (VPValue *Op : PhiR->operands())
+ NewRecipe->addOperand(Op);
+ } else {
+ VPValue *Start = Plan->getOrAddLiveIn(II->getStartValue());
+ VPValue *Step =
+ vputils::getOrCreateVPValueForSCEVExpr(*Plan, II->getStep(), SE);
+ NewRecipe = new VPWidenIntOrFpInductionRecipe(
+ Phi, Start, Step, &Plan->getVF(), *II, Ingredient.getDebugLoc());
+ }
} else {
assert(isa<VPInstruction>(&Ingredient) &&
"only VPInstructions expected here");
diff --git a/llvm/test/Transforms/LoopVectorize/outer-loop-vec-phi-predecessor-order.ll b/llvm/test/Transforms/LoopVectorize/outer-loop-vec-phi-predecessor-order.ll
index 1cf410c359f06..32b1fc4455d37 100644
--- a/llvm/test/Transforms/LoopVectorize/outer-loop-vec-phi-predecessor-order.ll
+++ b/llvm/test/Transforms/LoopVectorize/outer-loop-vec-phi-predecessor-order.ll
@@ -35,7 +35,7 @@ define void @test(ptr %src, i64 %n) {
; CHECK-NEXT: [[TMP3:%.*]] = icmp eq <4 x i64> [[TMP2]], [[BROADCAST_SPLAT]]
; CHECK-NEXT: [[TMP4:%.*]] = extractelement <4 x i1> [[TMP3]], i32 0
; CHECK-NEXT: br i1 [[TMP4]], label [[LOOP_2_LATCH4]], label [[LOOP_32]]
-; CHECK: loop.2.latch4:
+; CHECK: loop.2.latch3:
; CHECK-NEXT: [[TMP5]] = add nuw nsw <4 x i64> [[VEC_PHI]], splat (i64 1)
; CHECK-NEXT: [[TMP6:%.*]] = icmp eq <4 x i64> [[TMP5]], [[BROADCAST_SPLAT]]
; CHECK-NEXT: [[TMP7:%.*]] = extractelement <4 x i1> [[TMP6]], i32 0
diff --git a/llvm/test/Transforms/LoopVectorize/vplan-printing-outer-loop.ll b/llvm/test/Transforms/LoopVectorize/vplan-printing-outer-loop.ll
index 6804817c402bd..20676f3702294 100644
--- a/llvm/test/Transforms/LoopVectorize/vplan-printing-outer-loop.ll
+++ b/llvm/test/Transforms/LoopVectorize/vplan-printing-outer-loop.ll
@@ -13,14 +13,14 @@ define void @foo(i64 %n) {
; CHECK-NEXT: Successor(s): outer.header
; CHECK-EMPTY:
; CHECK-NEXT: outer.header:
-; CHECK-NEXT: WIDEN-PHI ir<%outer.iv> = phi [ ir<%outer.iv.next>, outer.latch ], [ ir<0>, ir-bb<entry> ]
+; CHECK-NEXT: EMIT-SCALAR ir<%outer.iv> = phi [ ir<%outer.iv.next>, outer.latch ], [ ir<0>, ir-bb<entry> ]
; CHECK-NEXT: EMIT ir<%gep.1> = getelementptr ir<@arr2>, ir<0>, ir<%outer.iv>
; CHECK-NEXT: EMIT store ir<%outer.iv>, ir<%gep.1>
; CHECK-NEXT: EMIT ir<%add> = add ir<%outer.iv>, ir<%n>
; CHECK-NEXT: Successor(s): inner
; CHECK-EMPTY:
; CHECK-NEXT: inner:
-; CHECK-NEXT: WIDEN-PHI ir<%inner.iv> = phi [ ir<%inner.iv.next>, inner ], [ ir<0>, outer.header ]
+; CHECK-NEXT: EMIT-SCALAR ir<%inner.iv> = phi [ ir<%inner.iv.next>, inner ], [ ir<0>, outer.header ]
; CHECK-NEXT: EMIT ir<%gep.2> = getelementptr ir<@arr>, ir<0>, ir<%inner.iv>, ir<%outer.iv>
; CHECK-NEXT: EMIT store ir<%add>, ir<%gep.2>
; CHECK-NEXT: EMIT ir<%inner.iv.next> = add ir<%inner.iv>, ir<1>
diff --git a/llvm/unittests/Transforms/Vectorize/VPlanHCFGTest.cpp b/llvm/unittests/Transforms/Vectorize/VPlanHCFGTest.cpp
index 123f6de823162..e743939754010 100644
--- a/llvm/unittests/Transforms/Vectorize/VPlanHCFGTest.cpp
+++ b/llvm/unittests/Transforms/Vectorize/VPlanHCFGTest.cpp
@@ -59,7 +59,7 @@ TEST_F(VPlanHCFGTest, testBuildHCFGInnerLoop) {
auto Iter = VecBB->begin();
auto *CanIV = dyn_cast<VPCanonicalIVPHIRecipe>(&*Iter++);
EXPECT_NE(nullptr, CanIV);
- VPWidenPHIRecipe *Phi = dyn_cast<VPWidenPHIRecipe>(&*Iter++);
+ auto *Phi = dyn_cast<VPPhi>(&*Iter++);
EXPECT_NE(nullptr, Phi);
VPInstruction *Idx = dyn_cast<VPInstruction>(&*Iter++);
@@ -138,7 +138,7 @@ compound=true
N4 [label =
"vector.body:\l" +
" EMIT vp\<%2\> = CANONICAL-INDUCTION ir\<0\>, vp\<%index.next\>\l" +
- " WIDEN-PHI ir\<%indvars.iv\> = phi [ ir\<0\>, vector.ph ], [ ir\<%indvars.iv.next\>, vector.body ]\l" +
+ " EMIT-SCALAR ir\<%indvars.iv\> = phi [ ir\<0\>, vector.ph ], [ ir\<%indvars.iv.next\>, vector.body ]\l" +
" EMIT ir\<%arr.idx\> = getelementptr ir\<%A\>, ir\<%indvars.iv\>\l" +
" EMIT ir\<%l1\> = load ir\<%arr.idx\>\l" +
" EMIT ir\<%res\> = add ir\<%l1\>, ir\<10\>\l" +
@@ -304,7 +304,7 @@ compound=true
N4 [label =
"vector.body:\l" +
" EMIT vp\<%2\> = CANONICAL-INDUCTION ir\<0\>, vp\<%index.next\>\l" +
- " WIDEN-PHI ir\<%iv\> = phi [ ir\<0\>, vector.ph ], [ ir\<%iv.next\>, loop.latch ]\l" +
+ " EMIT-SCALAR ir\<%iv\> = phi [ ir\<0\>, vector.ph ], [ ir\<%iv.next\>, loop.latch ]\l" +
" EMIT ir\<%arr.idx\> = getelementptr ir\<%A\>, ir\<%iv\>\l" +
" EMIT ir\<%l1\> = load ir\<%arr.idx\>\l" +
" EMIT ir\<%c\> = icmp ir\<%l1\>, ir\<0\>\l" +
>From ded1f3ec96bc3de2951094dff5a8d2e12f7402c8 Mon Sep 17 00:00:00 2001
From: Yussur Mustafa Oraji <yussur.oraji at tu-darmstadt.de>
Date: Wed, 6 Aug 2025 15:47:33 +0200
Subject: [PATCH 077/171] [TSan] Add option to ignore capturing behavior when
instrumenting (#148156)
While not needed for most applications, some tools such as
[MUST](https://www.i12.rwth-aachen.de/cms/i12/forschung/forschungsschwerpunkte/lehrstuhl-fuer-hochleistungsrechnen/~nrbe/must/)
depend on the instrumentation being present.
MUST uses the ThreadSanitizer annotation interface to detect data races
in MPI programs; there, capture tracking is detrimental because it has no
bearing on MPI data races and therefore leads to missed races.
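For illustration, a rough C++ sketch of the kind of access the default behavior elides. The function is hypothetical, and the clang spelling of the flag is an assumption based on the usual `-mllvm` plumbing for hidden `cl::opt` options:

```cpp
// With the default -tsan-omit-by-pointer-capturing=1, TSan emits no
// __tsan_write call for the store below, because `local` is an alloca
// whose address never escapes. Building with
//   clang -fsanitize=thread -mllvm -tsan-omit-by-pointer-capturing=0 ...
// keeps the instrumentation that annotation-driven tools like MUST need.
void leaf() {
  int local = 0; // address never captured
  local = 42;    // plain store: normally skipped by TSan
}
```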
---
.../Instrumentation/ThreadSanitizer.cpp | 7 +-
.../ThreadSanitizer/capture-no-omit.ll | 92 +++++++++++++++++++
.../ThreadSanitizer/capture.ll | 2 +
3 files changed, 100 insertions(+), 1 deletion(-)
create mode 100644 llvm/test/Instrumentation/ThreadSanitizer/capture-no-omit.ll
diff --git a/llvm/lib/Transforms/Instrumentation/ThreadSanitizer.cpp b/llvm/lib/Transforms/Instrumentation/ThreadSanitizer.cpp
index 5485998164f1e..0d48a350254ee 100644
--- a/llvm/lib/Transforms/Instrumentation/ThreadSanitizer.cpp
+++ b/llvm/lib/Transforms/Instrumentation/ThreadSanitizer.cpp
@@ -80,6 +80,10 @@ static cl::opt<bool> ClCompoundReadBeforeWrite(
"tsan-compound-read-before-write", cl::init(false),
cl::desc("Emit special compound instrumentation for reads-before-writes"),
cl::Hidden);
+static cl::opt<bool>
+ ClOmitNonCaptured("tsan-omit-by-pointer-capturing", cl::init(true),
+ cl::desc("Omit accesses due to pointer capturing"),
+ cl::Hidden);
STATISTIC(NumInstrumentedReads, "Number of instrumented reads");
STATISTIC(NumInstrumentedWrites, "Number of instrumented writes");
@@ -450,7 +454,8 @@ void ThreadSanitizer::chooseInstructionsToInstrument(
const AllocaInst *AI = findAllocaForValue(Addr);
// Instead of Addr, we should check whether its base pointer is captured.
- if (AI && !PointerMayBeCaptured(AI, /*ReturnCaptures=*/true)) {
+ if (AI && !PointerMayBeCaptured(AI, /*ReturnCaptures=*/true) &&
+ ClOmitNonCaptured) {
// The variable is addressable but not captured, so it cannot be
// referenced from a different thread and participate in a data race
// (see llvm/Analysis/CaptureTracking.h for details).
diff --git a/llvm/test/Instrumentation/ThreadSanitizer/capture-no-omit.ll b/llvm/test/Instrumentation/ThreadSanitizer/capture-no-omit.ll
new file mode 100644
index 0000000000000..cae04936002cd
--- /dev/null
+++ b/llvm/test/Instrumentation/ThreadSanitizer/capture-no-omit.ll
@@ -0,0 +1,92 @@
+; RUN: opt < %s -passes=tsan -tsan-omit-by-pointer-capturing=0 -S | FileCheck %s
+
+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"
+
+declare void @escape(ptr)
+
+ at sink = global ptr null, align 4
+
+
+define void @captured2() nounwind uwtable sanitize_thread {
+entry:
+ %ptr = alloca i32, align 4
+ %tmp = alloca ptr, align 8
+ ; transitive escape
+ store ptr %ptr, ptr %tmp, align 8
+ %0 = load ptr, ptr %tmp, align 8
+ store ptr %0, ptr @sink, align 8
+ store i32 42, ptr %ptr, align 4
+ ret void
+}
+; CHECK-LABEL: define void @captured2
+; CHECK: __tsan_write
+; CHECK: __tsan_read
+; CHECK: __tsan_write
+; CHECK: __tsan_write
+; CHECK: ret void
+
+define void @captured3() nounwind uwtable sanitize_thread {
+entry:
+ %stkobj = alloca [2 x i32], align 8
+ ; escapes due to store into global
+ store ptr %stkobj, ptr @sink, align 8
+ ; derived is captured as its base object is captured
+ %derived = getelementptr inbounds i32, ptr %stkobj, i64 1
+ store i32 42, ptr %derived, align 4
+ ret void
+}
+; CHECK-LABEL: define void @captured3
+; CHECK: __tsan_write
+; CHECK: __tsan_write
+; CHECK: ret void
+
+define void @notcaptured2() nounwind uwtable sanitize_thread {
+entry:
+ %ptr = alloca i32, align 4
+ %tmp = alloca ptr, align 8
+ store i32 42, ptr %ptr, align 4
+ ; transitive escape
+ store ptr %ptr, ptr %tmp, align 8
+ %0 = load ptr, ptr %tmp, align 8
+ store ptr %0, ptr @sink, align 8
+ ret void
+}
+; CHECK-LABEL: define void @notcaptured2
+; CHECK: __tsan_write
+; CHECK: __tsan_write
+; CHECK: __tsan_read
+; CHECK: __tsan_write
+; CHECK: ret void
+
+define void @notcaptured3(i1 %cond) nounwind uwtable sanitize_thread {
+entry:
+ %stkobj = alloca [2 x i32], align 8
+ %derived = getelementptr inbounds i32, ptr %stkobj, i64 1
+ %ptr = select i1 %cond, ptr %derived, ptr %stkobj
+ store i32 42, ptr %ptr, align 4
+ ret void
+}
+; CHECK-LABEL: define void @notcaptured3
+; CHECK: __tsan_write
+; CHECK: ret void
+
+define void @notcaptured4() nounwind uwtable sanitize_thread {
+entry:
+ %stkobj = alloca [10 x i8], align 1
+ br label %loop
+
+exit:
+ ret void
+
+loop:
+ %count = phi i32 [ 0, %entry ], [ %addone, %loop ]
+ %derived = phi ptr [ %stkobj, %entry ], [ %ptraddone, %loop ]
+ store i32 %count, ptr %derived, align 4
+ %ptraddone = getelementptr inbounds i32, ptr %derived, i64 1
+ %addone = add nuw nsw i32 %count, 1
+ %eq10 = icmp eq i32 %addone, 10
+ br i1 %eq10, label %exit, label %loop
+}
+; CHECK-LABEL: define void @notcaptured4
+; CHECK: ret void
+; CHECK: __tsan_write
diff --git a/llvm/test/Instrumentation/ThreadSanitizer/capture.ll b/llvm/test/Instrumentation/ThreadSanitizer/capture.ll
index e1b9e03b88446..5083c790011c6 100644
--- a/llvm/test/Instrumentation/ThreadSanitizer/capture.ll
+++ b/llvm/test/Instrumentation/ThreadSanitizer/capture.ll
@@ -45,6 +45,7 @@ entry:
; CHECK-LABEL: define void @captured2
; CHECK: __tsan_write
; CHECK: __tsan_write
+; CHECK-NOT: __tsan_write
; CHECK: ret void
define void @captured3() nounwind uwtable sanitize_thread {
@@ -101,6 +102,7 @@ entry:
; CHECK-LABEL: define void @notcaptured2
; CHECK: __tsan_write
; CHECK: __tsan_write
+; CHECK-NOT: __tsan_write
; CHECK: ret void
define void @notcaptured3(i1 %cond) nounwind uwtable sanitize_thread {
>From c4f6d346749cd368ab60fa06d925b15934d0e38a Mon Sep 17 00:00:00 2001
From: Simon Pilgrim <llvm-dev at redking.me.uk>
Date: Wed, 6 Aug 2025 14:55:46 +0100
Subject: [PATCH 078/171] [DAG] getNode - fold (sext (trunc x)) -> x iff the
upper bits are already signbits (#151945)
Similar to what we already do for ZERO_EXTEND/ANY_EXTEND patterns.
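To make the condition concrete, here is a hand-written helper mirroring the check; the helper name and free-standing form are illustrative, not part of the patch:

```cpp
// sext(trunc x) == x when x already has more known sign bits than the
// sign-extension would recreate. For i64 -> trunc to i32 -> sext to i64,
// NumSignExtBits = 64 - 32 = 32, so x needs at least 33 sign bits,
// e.g. any value that is itself a sext from i32 or narrower.
static bool canFoldSextOfTrunc(SelectionDAG &DAG, SDValue X, EVT VT,
                               EVT TruncVT) {
  unsigned NumSignExtBits =
      VT.getScalarSizeInBits() - TruncVT.getScalarSizeInBits();
  return DAG.ComputeNumSignBits(X) > NumSignExtBits;
}
```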
---
.../lib/CodeGen/SelectionDAG/SelectionDAG.cpp | 14 +++++++++++
llvm/test/CodeGen/PowerPC/aix-cc-abi-mir.ll | 24 +++++++++----------
llvm/test/CodeGen/X86/trunc-nsw-nuw.ll | 7 +++---
3 files changed, 30 insertions(+), 15 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
index 71a175dfd7b24..649a3107cc21c 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
@@ -6422,6 +6422,20 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
if (N1.isUndef())
// sext(undef) = 0, because the top bits will all be the same.
return getConstant(0, DL, VT);
+
+ // Skip unnecessary sext_inreg pattern:
+ // (sext (trunc x)) -> x iff the upper bits are all signbits.
+ if (OpOpcode == ISD::TRUNCATE) {
+ SDValue OpOp = N1.getOperand(0);
+ if (OpOp.getValueType() == VT) {
+ unsigned NumSignExtBits =
+ VT.getScalarSizeInBits() - N1.getScalarValueSizeInBits();
+ if (ComputeNumSignBits(OpOp) > NumSignExtBits) {
+ transferDbgValues(N1, OpOp);
+ return OpOp;
+ }
+ }
+ }
break;
case ISD::ZERO_EXTEND:
assert(VT.isInteger() && N1.getValueType().isInteger() &&
diff --git a/llvm/test/CodeGen/PowerPC/aix-cc-abi-mir.ll b/llvm/test/CodeGen/PowerPC/aix-cc-abi-mir.ll
index 9ffb4fd5eae45..258ddf60088c1 100644
--- a/llvm/test/CodeGen/PowerPC/aix-cc-abi-mir.ll
+++ b/llvm/test/CodeGen/PowerPC/aix-cc-abi-mir.ll
@@ -37,9 +37,9 @@ define signext i8 @test_chars(i8 signext %c1, i8 signext %c2, i8 signext %c3, i8
; 32BIT: bb.0.entry:
; 32BIT-NEXT: liveins: $r3, $r4, $r5, $r6
; 32BIT-NEXT: {{ $}}
- ; 32BIT-NEXT: renamable $r3 = ADD4 killed renamable $r3, killed renamable $r4
- ; 32BIT-NEXT: renamable $r3 = ADD4 killed renamable $r3, killed renamable $r5
- ; 32BIT-NEXT: renamable $r3 = ADD4 killed renamable $r3, killed renamable $r6
+ ; 32BIT-NEXT: renamable $r3 = nsw ADD4 killed renamable $r3, killed renamable $r4
+ ; 32BIT-NEXT: renamable $r3 = nsw ADD4 killed renamable $r3, killed renamable $r5
+ ; 32BIT-NEXT: renamable $r3 = nsw ADD4 killed renamable $r3, killed renamable $r6
; 32BIT-NEXT: renamable $r3 = EXTSB killed renamable $r3
; 32BIT-NEXT: BLR implicit $lr, implicit $rm, implicit $r3
;
@@ -47,9 +47,9 @@ define signext i8 @test_chars(i8 signext %c1, i8 signext %c2, i8 signext %c3, i8
; 64BIT: bb.0.entry:
; 64BIT-NEXT: liveins: $x3, $x4, $x5, $x6
; 64BIT-NEXT: {{ $}}
- ; 64BIT-NEXT: renamable $r3 = ADD4 renamable $r3, renamable $r4, implicit killed $x4, implicit killed $x3
- ; 64BIT-NEXT: renamable $r3 = ADD4 killed renamable $r3, renamable $r5, implicit killed $x5
- ; 64BIT-NEXT: renamable $r3 = ADD4 killed renamable $r3, renamable $r6, implicit killed $x6, implicit-def $x3
+ ; 64BIT-NEXT: renamable $r3 = nsw ADD4 renamable $r3, renamable $r4, implicit killed $x4, implicit killed $x3
+ ; 64BIT-NEXT: renamable $r3 = nsw ADD4 killed renamable $r3, renamable $r5, implicit killed $x5
+ ; 64BIT-NEXT: renamable $r3 = nsw ADD4 killed renamable $r3, renamable $r6, implicit killed $x6, implicit-def $x3
; 64BIT-NEXT: renamable $x3 = EXTSB8 killed renamable $x3
; 64BIT-NEXT: BLR8 implicit $lr8, implicit $rm, implicit $x3
entry:
@@ -96,9 +96,9 @@ define signext i8 @test_chars_mix(i8 signext %c1, i8 zeroext %c2, i8 zeroext %c3
; 32BIT: bb.0.entry:
; 32BIT-NEXT: liveins: $r3, $r4, $r5, $r6
; 32BIT-NEXT: {{ $}}
- ; 32BIT-NEXT: renamable $r3 = ADD4 killed renamable $r3, killed renamable $r4
- ; 32BIT-NEXT: renamable $r3 = ADD4 killed renamable $r3, killed renamable $r5
- ; 32BIT-NEXT: renamable $r3 = ADD4 killed renamable $r3, killed renamable $r6
+ ; 32BIT-NEXT: renamable $r3 = nsw ADD4 killed renamable $r3, killed renamable $r4
+ ; 32BIT-NEXT: renamable $r3 = nsw ADD4 killed renamable $r3, killed renamable $r5
+ ; 32BIT-NEXT: renamable $r3 = nsw ADD4 killed renamable $r3, killed renamable $r6
; 32BIT-NEXT: renamable $r3 = EXTSB killed renamable $r3
; 32BIT-NEXT: BLR implicit $lr, implicit $rm, implicit $r3
;
@@ -106,9 +106,9 @@ define signext i8 @test_chars_mix(i8 signext %c1, i8 zeroext %c2, i8 zeroext %c3
; 64BIT: bb.0.entry:
; 64BIT-NEXT: liveins: $x3, $x4, $x5, $x6
; 64BIT-NEXT: {{ $}}
- ; 64BIT-NEXT: renamable $r3 = ADD4 renamable $r3, renamable $r4, implicit killed $x4, implicit killed $x3
- ; 64BIT-NEXT: renamable $r3 = ADD4 killed renamable $r3, renamable $r5, implicit killed $x5
- ; 64BIT-NEXT: renamable $r3 = ADD4 killed renamable $r3, renamable $r6, implicit killed $x6, implicit-def $x3
+ ; 64BIT-NEXT: renamable $r3 = nsw ADD4 renamable $r3, renamable $r4, implicit killed $x4, implicit killed $x3
+ ; 64BIT-NEXT: renamable $r3 = nsw ADD4 killed renamable $r3, renamable $r5, implicit killed $x5
+ ; 64BIT-NEXT: renamable $r3 = nsw ADD4 killed renamable $r3, renamable $r6, implicit killed $x6, implicit-def $x3
; 64BIT-NEXT: renamable $x3 = EXTSB8 killed renamable $x3
; 64BIT-NEXT: BLR8 implicit $lr8, implicit $rm, implicit $x3
entry:
diff --git a/llvm/test/CodeGen/X86/trunc-nsw-nuw.ll b/llvm/test/CodeGen/X86/trunc-nsw-nuw.ll
index 5c5f7045ea030..6b0789127f5f9 100644
--- a/llvm/test/CodeGen/X86/trunc-nsw-nuw.ll
+++ b/llvm/test/CodeGen/X86/trunc-nsw-nuw.ll
@@ -62,10 +62,11 @@ entry:
define i32 @simplify_demanded_bits_drop_flag(i1 zeroext %x, i1 zeroext %y) nounwind {
; CHECK-LABEL: simplify_demanded_bits_drop_flag:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: negl %edi
+; CHECK-NEXT: # kill: def $esi killed $esi def $rsi
; CHECK-NEXT: shll $2, %esi
-; CHECK-NEXT: xorl %edi, %esi
-; CHECK-NEXT: movslq %esi, %rax
+; CHECK-NEXT: movl %edi, %eax
+; CHECK-NEXT: negq %rax
+; CHECK-NEXT: xorq %rsi, %rax
; CHECK-NEXT: imulq $-1634202141, %rax, %rax # imm = 0x9E980DE3
; CHECK-NEXT: movq %rax, %rcx
; CHECK-NEXT: shrq $63, %rcx
>From 0b1639581a14e6e99ffff1a155504e4e866df491 Mon Sep 17 00:00:00 2001
From: William Huynh <William.Huynh at arm.com>
Date: Wed, 6 Aug 2025 15:04:26 +0100
Subject: [PATCH 079/171] [libc] Change LIBC_THREAD_LOCAL to be dependent on
LIBC_THREAD_MODE (#151527)
When single-threaded mode is selected, all instances of the keyword
`LIBC_THREAD_LOCAL` are stubbed out, similar to how it currently works
on the GPU. This allows baremetal builds to avoid using thread_local.
However, libcxx uses shared headers, so we need to be careful there.
Thankfully, there is already an option to disable multithreading in
libcxx, so a flag is added such that single-threaded mode is propagated
down to libc.
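A minimal sketch of the resulting behavior, assuming a translation unit built with `-DLIBC_THREAD_MODE=LIBC_THREAD_MODE_SINGLE`; the variable below is illustrative:

```cpp
#include "src/__support/macros/attributes.h"

// In single-threaded mode LIBC_THREAD_LOCAL expands to nothing, so this
// is a plain global with no TLS machinery behind it; in the platform or
// external modes it expands to `thread_local` and each thread gets its
// own copy.
LIBC_THREAD_LOCAL int rand_state = 12345;
```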
---
cmake/Modules/FindLibcCommonUtils.cmake | 3 +++
libc/src/__support/macros/attributes.h | 27 ++++++++++++++++++++++++-
libc/src/__support/threads/mutex.h | 22 --------------------
3 files changed, 29 insertions(+), 23 deletions(-)
diff --git a/cmake/Modules/FindLibcCommonUtils.cmake b/cmake/Modules/FindLibcCommonUtils.cmake
index 95426c51a6041..81cf74fbd0d41 100644
--- a/cmake/Modules/FindLibcCommonUtils.cmake
+++ b/cmake/Modules/FindLibcCommonUtils.cmake
@@ -12,6 +12,9 @@ if(NOT TARGET llvm-libc-common-utilities)
add_library(llvm-libc-common-utilities INTERFACE)
# TODO: Reorganize the libc shared section so that it can be included without
# adding the root "libc" directory to the include path.
+ if (NOT(LIBCXX_ENABLE_THREADS))
+ target_compile_definitions(llvm-libc-common-utilities INTERFACE LIBC_THREAD_MODE=LIBC_THREAD_MODE_SINGLE)
+ endif()
target_include_directories(llvm-libc-common-utilities INTERFACE ${libc_path})
target_compile_definitions(llvm-libc-common-utilities INTERFACE LIBC_NAMESPACE=__llvm_libc_common_utils)
target_compile_features(llvm-libc-common-utilities INTERFACE cxx_std_17)
diff --git a/libc/src/__support/macros/attributes.h b/libc/src/__support/macros/attributes.h
index c6474673de85a..4ff374b0e4fbd 100644
--- a/libc/src/__support/macros/attributes.h
+++ b/libc/src/__support/macros/attributes.h
@@ -28,7 +28,32 @@
#define LIBC_INLINE_ASM __asm__ __volatile__
#define LIBC_UNUSED __attribute__((unused))
-#ifdef LIBC_TARGET_ARCH_IS_GPU
+// Uses the platform specific specialization
+#define LIBC_THREAD_MODE_PLATFORM 0
+
+// Mutex guards nothing, used in single-threaded implementations
+#define LIBC_THREAD_MODE_SINGLE 1
+
+// Vendor provides implementation
+#define LIBC_THREAD_MODE_EXTERNAL 2
+
+// libcxx doesn't define LIBC_THREAD_MODE, unless that is passed in the command
+// line in the CMake invocation. This defaults to the original implementation
+// (before changes in https://github.com/llvm/llvm-project/pull/145358)
+#ifndef LIBC_THREAD_MODE
+#define LIBC_THREAD_MODE LIBC_THREAD_MODE_PLATFORM
+#endif // LIBC_THREAD_MODE
+
+#if LIBC_THREAD_MODE != LIBC_THREAD_MODE_PLATFORM && \
+ LIBC_THREAD_MODE != LIBC_THREAD_MODE_SINGLE && \
+ LIBC_THREAD_MODE != LIBC_THREAD_MODE_EXTERNAL
+#error LIBC_THREAD_MODE must be one of the following values: \
+LIBC_THREAD_MODE_PLATFORM, \
+LIBC_THREAD_MODE_SINGLE, \
+LIBC_THREAD_MODE_EXTERNAL.
+#endif
+
+#if LIBC_THREAD_MODE == LIBC_THREAD_MODE_SINGLE
#define LIBC_THREAD_LOCAL
#else
#define LIBC_THREAD_LOCAL thread_local
diff --git a/libc/src/__support/threads/mutex.h b/libc/src/__support/threads/mutex.h
index cbef0d00009b2..f64f7e7b40082 100644
--- a/libc/src/__support/threads/mutex.h
+++ b/libc/src/__support/threads/mutex.h
@@ -12,28 +12,6 @@
#include "src/__support/macros/attributes.h"
#include "src/__support/macros/config.h"
-// Uses the platform specific specialization
-#define LIBC_THREAD_MODE_PLATFORM 0
-
-// Mutex guards nothing, used in single-threaded implementations
-#define LIBC_THREAD_MODE_SINGLE 1
-
-// Vendor provides implementation
-#define LIBC_THREAD_MODE_EXTERNAL 2
-
-#if !defined(LIBC_THREAD_MODE)
-#error LIBC_THREAD_MODE is undefined
-#endif // LIBC_THREAD_MODE
-
-#if LIBC_THREAD_MODE != LIBC_THREAD_MODE_PLATFORM && \
- LIBC_THREAD_MODE != LIBC_THREAD_MODE_SINGLE && \
- LIBC_THREAD_MODE != LIBC_THREAD_MODE_EXTERNAL
-#error LIBC_THREAD_MODE must be one of the following values: \
-LIBC_THREAD_MODE_PLATFORM, \
-LIBC_THREAD_MODE_SINGLE, \
-LIBC_THREAD_MODE_EXTERNAL.
-#endif
-
#if LIBC_THREAD_MODE == LIBC_THREAD_MODE_PLATFORM
// Platform independent code will include this header file which pulls
>From 22af0cd6f9ded0aa70eb5a3d6f12bbd15c3ae870 Mon Sep 17 00:00:00 2001
From: Rahul Joshi <rjoshi at nvidia.com>
Date: Wed, 6 Aug 2025 07:09:52 -0700
Subject: [PATCH 080/171] [LLVM][Intrinsics] Reduce stack size for
`Intrinsic::getAttributes` (#152219)
This change fixes a stack size regression that got introduced in
https://github.com/llvm/llvm-project/commit/0de0354aa8dcd6afab625c6833cb0f40309c2961.
That change did 2 independent things:
1. Uniquify argument and function attributes separately so that we
generate a smaller number of unique sets as opposed to uniquifying them
together. This is beneficial for code size.
2. Eliminate the fixed size array `AS` and the `NumAttrs` variable and
instead build the returned AttributeList in each case using an
initializer list.
The second part seems to have caused a regression in the stack size
usage of this function for Windows. This change essentially undoes part
2 and reinstates the use of the fixed size array `AS` which fixes this
stack size regression. The actual measured stack frame size for this
function before/after this change is as follows:
```
Current trunk data for release build (x86_64 builds for Linux, x86 build for Windows):

Compiler                     gcc-13.3.0  clang-18.1.3  MSVC 19.43.34810.0
DLLVM_ENABLE_ASSERTIONS=OFF  0x120       0x110         0x54B0
DLLVM_ENABLE_ASSERTIONS=ON   0x2880      0x110         0x5250

After applying the fix:

Compiler                     gcc-13.3.0  clang-18.1.3  MSVC 19.43.34810.0
DLLVM_ENABLE_ASSERTIONS=OFF  0x120       0x118         0x1240
DLLVM_ENABLE_ASSERTIONS=ON   0x120       0x118         0x1240
```
Note that for Windows builds with assertions disabled, the stack frame
size for this function drops from 21680 to 4672, a 4.6x reduction. The
stack frame size for the GCC build with assertions also improved, and
clang builds are unaffected. The speculation is that clang and gcc
are able to reuse the stack space across these switch cases with the
existing code, but MSVC is not; re-introducing the `AS` variable
forces all cases to use the same local variable, addressing the stack
space regression.
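That speculation can be illustrated with a toy reduction (a hand-written sketch, not the generated code; whether the slots actually coalesce depends on each compiler's optimizer):

```cpp
// (a) per-case temporaries: some compilers, MSVC in particular, give each
// braced array its own stack slot, so the frame grows with the case count.
int perCase(int id) {
  switch (id) {
  case 0: { int t0[4] = {1, 2, 3, 4}; return t0[0]; }
  case 1: { int t1[4] = {5, 6, 7, 8}; return t1[1]; }
  default: return 0;
  }
}

// (b) one named array declared before the switch: every case reuses the
// same storage, which is the shape the emitter now generates via `AS`.
int shared(int id) {
  int t[4] = {};
  switch (id) {
  case 0: t[0] = 1; return t[0];
  case 1: t[1] = 6; return t[1];
  default: return 0;
  }
}
```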
---
llvm/test/TableGen/intrinsic-attrs.td | 15 +++---
.../utils/TableGen/Basic/IntrinsicEmitter.cpp | 52 +++++++++++--------
2 files changed, 37 insertions(+), 30 deletions(-)
diff --git a/llvm/test/TableGen/intrinsic-attrs.td b/llvm/test/TableGen/intrinsic-attrs.td
index 18309d7419994..92a90dcafd48c 100644
--- a/llvm/test/TableGen/intrinsic-attrs.td
+++ b/llvm/test/TableGen/intrinsic-attrs.td
@@ -30,12 +30,11 @@ def int_deref_ptr_ret : Intrinsic<[llvm_ptr_ty], [], [Dereferenceable<RetIndex,
// CHECK: getAttributes(LLVMContext &C, ID id,
// CHECK-NEXT: FunctionType *FT) {
-// CHECK: case 1:
-// CHECK-NEXT: return AttributeList::get(C, {
-// CHECK-NEXT: {AttributeList::FunctionIndex, getIntrinsicFnAttributeSet(C, FnAttrID)}
-// CHECK-NEXT: });
+// CHECK: case 1:
+// CHECK-NEXT: HasFnAttr = true;
+// CHECK-NEXT: break;
// CHECK-NEXT: case 0:
-// CHECK-NEXT: return AttributeList::get(C, {
-// CHECK-NEXT: {0, getIntrinsicArgAttributeSet(C, 0, FT->getContainedType(0))},
-// CHECK-NEXT: {AttributeList::FunctionIndex, getIntrinsicFnAttributeSet(C, FnAttrID)}
-// CHECK-NEXT: });
+// CHECK-NEXT: AS[0] = {0, getIntrinsicArgAttributeSet(C, 0, FT->getContainedType(0))};
+// CHECK-NEXT: HasFnAttr = true;
+// CHECK-NEXT: NumAttrs = 1
+// CHECK-NEXT: break;
diff --git a/llvm/utils/TableGen/Basic/IntrinsicEmitter.cpp b/llvm/utils/TableGen/Basic/IntrinsicEmitter.cpp
index ac5c455ed63ce..49f719453e0bb 100644
--- a/llvm/utils/TableGen/Basic/IntrinsicEmitter.cpp
+++ b/llvm/utils/TableGen/Basic/IntrinsicEmitter.cpp
@@ -652,6 +652,16 @@ static constexpr uint16_t IntrinsicsToAttributesMap[] = {)";
AttributeList Intrinsic::getAttributes(LLVMContext &C, ID id,
FunctionType *FT) {)";
+ // Find the max number of attributes to create the local array.
+ unsigned MaxNumAttrs = 0;
+ for (const auto [IntPtr, UniqueID] : UniqAttributes) {
+ const CodeGenIntrinsic &Int = *IntPtr;
+ unsigned NumAttrs =
+ llvm::count_if(Int.ArgumentAttributes,
+ [](const auto &Attrs) { return !Attrs.empty(); });
+ NumAttrs += hasFnAttributes(Int);
+ MaxNumAttrs = std::max(MaxNumAttrs, NumAttrs);
+ }
OS << formatv(R"(
if (id == 0)
@@ -659,46 +669,44 @@ AttributeList Intrinsic::getAttributes(LLVMContext &C, ID id,
uint16_t PackedID = IntrinsicsToAttributesMap[id - 1];
uint8_t FnAttrID = PackedID >> 8;
+ std::pair<unsigned, AttributeSet> AS[{}];
+ unsigned NumAttrs = 0;
+ bool HasFnAttr = false;
switch(PackedID & 0xFF) {{
default: llvm_unreachable("Invalid attribute number");
-)");
+)",
+ MaxNumAttrs);
for (const auto [IntPtr, UniqueID] : UniqAttributes) {
OS << formatv(" case {}:\n", UniqueID);
const CodeGenIntrinsic &Int = *IntPtr;
- // Keep track of the number of attributes we're writing out.
- unsigned NumAttrs =
- llvm::count_if(Int.ArgumentAttributes,
- [](const auto &Attrs) { return !Attrs.empty(); });
- NumAttrs += hasFnAttributes(Int);
- if (NumAttrs == 0) {
- OS << " return AttributeList();\n";
- continue;
- }
+ unsigned NumAttrs = 0;
- OS << " return AttributeList::get(C, {\n";
- ListSeparator LS(",\n");
for (const auto &[AttrIdx, Attrs] : enumerate(Int.ArgumentAttributes)) {
if (Attrs.empty())
continue;
unsigned ArgAttrID = UniqArgAttributes.find(Attrs)->second;
- OS << LS
- << formatv(" {{{}, getIntrinsicArgAttributeSet(C, {}, "
- "FT->getContainedType({}))}",
- AttrIdx, ArgAttrID, AttrIdx);
+ OS << formatv(" AS[{}] = {{{}, getIntrinsicArgAttributeSet(C, {}, "
+ "FT->getContainedType({}))};\n",
+ NumAttrs++, AttrIdx, ArgAttrID, AttrIdx);
}
- if (hasFnAttributes(Int)) {
- OS << LS
- << " {AttributeList::FunctionIndex, "
- "getIntrinsicFnAttributeSet(C, FnAttrID)}";
- }
- OS << "\n });\n";
+ if (hasFnAttributes(Int))
+ OS << " HasFnAttr = true;\n";
+
+ if (NumAttrs)
+ OS << formatv(" NumAttrs = {};\n", NumAttrs);
+ OS << " break;\n";
}
OS << R"( }
+ if (HasFnAttr) {
+ AS[NumAttrs++] = {AttributeList::FunctionIndex,
+ getIntrinsicFnAttributeSet(C, FnAttrID)};
+ }
+ return AttributeList::get(C, ArrayRef(AS, NumAttrs));
}
#endif // GET_INTRINSIC_ATTRIBUTES
>From cab2edd39a14b817445f4f74c5f2d83cc6a3967a Mon Sep 17 00:00:00 2001
From: Kazu Hirata <kazu at google.com>
Date: Wed, 6 Aug 2025 07:10:40 -0700
Subject: [PATCH 081/171] [libclang] Remove unnecessary casts (NFC) (#152259)
stringVal is already of char *.
---
clang/tools/libclang/CIndex.cpp | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/clang/tools/libclang/CIndex.cpp b/clang/tools/libclang/CIndex.cpp
index 95cda674a5895..8b3d70b25866f 100644
--- a/clang/tools/libclang/CIndex.cpp
+++ b/clang/tools/libclang/CIndex.cpp
@@ -4720,8 +4720,7 @@ static const ExprEvalResult *evaluateExpr(Expr *expr, CXCursor C) {
std::string strRef(StrE->getString().str());
result->EvalData.stringVal = new char[strRef.size() + 1];
- strncpy((char *)result->EvalData.stringVal, strRef.c_str(),
- strRef.size());
+ strncpy(result->EvalData.stringVal, strRef.c_str(), strRef.size());
result->EvalData.stringVal[strRef.size()] = '\0';
return result.release();
}
@@ -4741,7 +4740,7 @@ static const ExprEvalResult *evaluateExpr(Expr *expr, CXCursor C) {
std::string strRef(StrE->getString().str());
result->EvalData.stringVal = new char[strRef.size() + 1];
- strncpy((char *)result->EvalData.stringVal, strRef.c_str(), strRef.size());
+ strncpy(result->EvalData.stringVal, strRef.c_str(), strRef.size());
result->EvalData.stringVal[strRef.size()] = '\0';
return result.release();
}
@@ -4760,7 +4759,7 @@ static const ExprEvalResult *evaluateExpr(Expr *expr, CXCursor C) {
result->EvalType = CXEval_CFStr;
result->EvalData.stringVal = new char[strLiteral.size() + 1];
- strncpy((char *)result->EvalData.stringVal, strLiteral.c_str(),
+ strncpy(result->EvalData.stringVal, strLiteral.c_str(),
strLiteral.size());
result->EvalData.stringVal[strLiteral.size()] = '\0';
return result.release();
@@ -4785,7 +4784,7 @@ static const ExprEvalResult *evaluateExpr(Expr *expr, CXCursor C) {
std::string strLiteral(S->getString().str());
result->EvalType = CXEval_CFStr;
result->EvalData.stringVal = new char[strLiteral.size() + 1];
- strncpy((char *)result->EvalData.stringVal, strLiteral.c_str(),
+ strncpy(result->EvalData.stringVal, strLiteral.c_str(),
strLiteral.size());
result->EvalData.stringVal[strLiteral.size()] = '\0';
return result.release();
>From e9c510b151e6a487141c441f37301c7c368f0fe6 Mon Sep 17 00:00:00 2001
From: Kazu Hirata <kazu at google.com>
Date: Wed, 6 Aug 2025 07:10:49 -0700
Subject: [PATCH 082/171] [AArch64] Remove an unnecessary cast (NFC) (#152260)
Pred is already of CmpInst::Predicate.
---
llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp b/llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp
index d9056926ff249..f3597310172de 100644
--- a/llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp
+++ b/llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp
@@ -1697,7 +1697,7 @@ bool AArch64InstructionSelector::selectCompareBranchFedByFCmp(
emitFPCompare(FCmp.getOperand(2).getReg(), FCmp.getOperand(3).getReg(), MIB,
Pred);
AArch64CC::CondCode CC1, CC2;
- changeFCMPPredToAArch64CC(static_cast<CmpInst::Predicate>(Pred), CC1, CC2);
+ changeFCMPPredToAArch64CC(Pred, CC1, CC2);
MachineBasicBlock *DestMBB = I.getOperand(1).getMBB();
MIB.buildInstr(AArch64::Bcc, {}, {}).addImm(CC1).addMBB(DestMBB);
if (CC2 != AArch64CC::AL)
>From 71b4f4d263a1d66fcf740e736ef5cd657f21cdee Mon Sep 17 00:00:00 2001
From: Kazu Hirata <kazu at google.com>
Date: Wed, 6 Aug 2025 07:10:58 -0700
Subject: [PATCH 083/171] [ObjCopy] Remove unnecessary casts (NFC) (#152261)
getBufferStart() already returns char *.
---
llvm/lib/ObjCopy/MachO/MachOWriter.cpp | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/llvm/lib/ObjCopy/MachO/MachOWriter.cpp b/llvm/lib/ObjCopy/MachO/MachOWriter.cpp
index 89c1df8699298..07514dd2f8d6a 100644
--- a/llvm/lib/ObjCopy/MachO/MachOWriter.cpp
+++ b/llvm/lib/ObjCopy/MachO/MachOWriter.cpp
@@ -301,7 +301,7 @@ void MachOWriter::writeSymbolTable() {
O.LoadCommands[*O.SymTabCommandIndex]
.MachOLoadCommand.symtab_command_data;
- char *SymTable = (char *)Buf->getBufferStart() + SymTabCommand.symoff;
+ char *SymTable = Buf->getBufferStart() + SymTabCommand.symoff;
for (auto &Symbol : O.SymTable.Symbols) {
SymbolEntry *Sym = Symbol.get();
uint32_t Nstrx = LayoutBuilder.getStringTableBuilder().getOffset(Sym->Name);
@@ -319,7 +319,7 @@ void MachOWriter::writeRebaseInfo() {
const MachO::dyld_info_command &DyLdInfoCommand =
O.LoadCommands[*O.DyLdInfoCommandIndex]
.MachOLoadCommand.dyld_info_command_data;
- char *Out = (char *)Buf->getBufferStart() + DyLdInfoCommand.rebase_off;
+ char *Out = Buf->getBufferStart() + DyLdInfoCommand.rebase_off;
assert((DyLdInfoCommand.rebase_size == O.Rebases.Opcodes.size()) &&
"Incorrect rebase opcodes size");
memcpy(Out, O.Rebases.Opcodes.data(), O.Rebases.Opcodes.size());
@@ -331,7 +331,7 @@ void MachOWriter::writeBindInfo() {
const MachO::dyld_info_command &DyLdInfoCommand =
O.LoadCommands[*O.DyLdInfoCommandIndex]
.MachOLoadCommand.dyld_info_command_data;
- char *Out = (char *)Buf->getBufferStart() + DyLdInfoCommand.bind_off;
+ char *Out = Buf->getBufferStart() + DyLdInfoCommand.bind_off;
assert((DyLdInfoCommand.bind_size == O.Binds.Opcodes.size()) &&
"Incorrect bind opcodes size");
memcpy(Out, O.Binds.Opcodes.data(), O.Binds.Opcodes.size());
@@ -343,7 +343,7 @@ void MachOWriter::writeWeakBindInfo() {
const MachO::dyld_info_command &DyLdInfoCommand =
O.LoadCommands[*O.DyLdInfoCommandIndex]
.MachOLoadCommand.dyld_info_command_data;
- char *Out = (char *)Buf->getBufferStart() + DyLdInfoCommand.weak_bind_off;
+ char *Out = Buf->getBufferStart() + DyLdInfoCommand.weak_bind_off;
assert((DyLdInfoCommand.weak_bind_size == O.WeakBinds.Opcodes.size()) &&
"Incorrect weak bind opcodes size");
memcpy(Out, O.WeakBinds.Opcodes.data(), O.WeakBinds.Opcodes.size());
@@ -355,7 +355,7 @@ void MachOWriter::writeLazyBindInfo() {
const MachO::dyld_info_command &DyLdInfoCommand =
O.LoadCommands[*O.DyLdInfoCommandIndex]
.MachOLoadCommand.dyld_info_command_data;
- char *Out = (char *)Buf->getBufferStart() + DyLdInfoCommand.lazy_bind_off;
+ char *Out = Buf->getBufferStart() + DyLdInfoCommand.lazy_bind_off;
assert((DyLdInfoCommand.lazy_bind_size == O.LazyBinds.Opcodes.size()) &&
"Incorrect lazy bind opcodes size");
memcpy(Out, O.LazyBinds.Opcodes.data(), O.LazyBinds.Opcodes.size());
@@ -367,7 +367,7 @@ void MachOWriter::writeExportInfo() {
const MachO::dyld_info_command &DyLdInfoCommand =
O.LoadCommands[*O.DyLdInfoCommandIndex]
.MachOLoadCommand.dyld_info_command_data;
- char *Out = (char *)Buf->getBufferStart() + DyLdInfoCommand.export_off;
+ char *Out = Buf->getBufferStart() + DyLdInfoCommand.export_off;
assert((DyLdInfoCommand.export_size == O.Exports.Trie.size()) &&
"Incorrect export trie size");
memcpy(Out, O.Exports.Trie.data(), O.Exports.Trie.size());
@@ -397,7 +397,7 @@ void MachOWriter::writeLinkData(std::optional<size_t> LCIndex,
return;
const MachO::linkedit_data_command &LinkEditDataCommand =
O.LoadCommands[*LCIndex].MachOLoadCommand.linkedit_data_command_data;
- char *Out = (char *)Buf->getBufferStart() + LinkEditDataCommand.dataoff;
+ char *Out = Buf->getBufferStart() + LinkEditDataCommand.dataoff;
assert((LinkEditDataCommand.datasize == LD.Data.size()) &&
"Incorrect data size");
memcpy(Out, LD.Data.data(), LD.Data.size());
@@ -574,7 +574,7 @@ void MachOWriter::writeExportsTrieData() {
const MachO::linkedit_data_command &ExportsTrieCmd =
O.LoadCommands[*O.ExportsTrieCommandIndex]
.MachOLoadCommand.linkedit_data_command_data;
- char *Out = (char *)Buf->getBufferStart() + ExportsTrieCmd.dataoff;
+ char *Out = Buf->getBufferStart() + ExportsTrieCmd.dataoff;
assert((ExportsTrieCmd.datasize == O.Exports.Trie.size()) &&
"Incorrect export trie size");
memcpy(Out, O.Exports.Trie.data(), O.Exports.Trie.size());
>From 62fc0028bf137137fe8dc1f79dc34178ea688e17 Mon Sep 17 00:00:00 2001
From: Kazu Hirata <kazu at google.com>
Date: Wed, 6 Aug 2025 07:11:07 -0700
Subject: [PATCH 084/171] [Target] Remove unnecessary casts (NFC) (#152262)
value() already returns uint64_t.
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 9 ++++-----
llvm/lib/Target/ARM/ARMISelLowering.cpp | 5 ++---
llvm/lib/Target/RISCV/RISCVISelLowering.cpp | 2 +-
3 files changed, 7 insertions(+), 9 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 9f6e552ea8916..bad7ccde2e1e0 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -16289,9 +16289,8 @@ AArch64TargetLowering::LowerWindowsDYNAMIC_STACKALLOC(SDValue Op,
Chain = SP.getValue(1);
SP = DAG.getNode(ISD::SUB, DL, MVT::i64, SP, Size);
if (Align)
- SP =
- DAG.getNode(ISD::AND, DL, VT, SP.getValue(0),
- DAG.getSignedConstant(-(uint64_t)Align->value(), DL, VT));
+ SP = DAG.getNode(ISD::AND, DL, VT, SP.getValue(0),
+ DAG.getSignedConstant(-Align->value(), DL, VT));
Chain = DAG.getCopyToReg(Chain, DL, AArch64::SP, SP);
SDValue Ops[2] = {SP, Chain};
return DAG.getMergeValues(Ops, DL);
@@ -16328,7 +16327,7 @@ AArch64TargetLowering::LowerWindowsDYNAMIC_STACKALLOC(SDValue Op,
SP = DAG.getNode(ISD::SUB, DL, MVT::i64, SP, Size);
if (Align)
SP = DAG.getNode(ISD::AND, DL, VT, SP.getValue(0),
- DAG.getSignedConstant(-(uint64_t)Align->value(), DL, VT));
+ DAG.getSignedConstant(-Align->value(), DL, VT));
Chain = DAG.getCopyToReg(Chain, DL, AArch64::SP, SP);
Chain = DAG.getCALLSEQ_END(Chain, 0, 0, SDValue(), DL);
@@ -16356,7 +16355,7 @@ AArch64TargetLowering::LowerInlineDYNAMIC_STACKALLOC(SDValue Op,
SP = DAG.getNode(ISD::SUB, DL, MVT::i64, SP, Size);
if (Align)
SP = DAG.getNode(ISD::AND, DL, VT, SP.getValue(0),
- DAG.getSignedConstant(-(uint64_t)Align->value(), DL, VT));
+ DAG.getSignedConstant(-Align->value(), DL, VT));
// Set the real SP to the new value with a probing loop.
Chain = DAG.getNode(AArch64ISD::PROBED_ALLOCA, DL, MVT::Other, Chain, SP);
diff --git a/llvm/lib/Target/ARM/ARMISelLowering.cpp b/llvm/lib/Target/ARM/ARMISelLowering.cpp
index c5bae19e0e02e..ea99cc4424aea 100644
--- a/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -20784,9 +20784,8 @@ ARMTargetLowering::LowerDYNAMIC_STACKALLOC(SDValue Op, SelectionDAG &DAG) const
Chain = SP.getValue(1);
SP = DAG.getNode(ISD::SUB, DL, MVT::i32, SP, Size);
if (Align)
- SP = DAG.getNode(
- ISD::AND, DL, MVT::i32, SP.getValue(0),
- DAG.getSignedConstant(-(uint64_t)Align->value(), DL, MVT::i32));
+ SP = DAG.getNode(ISD::AND, DL, MVT::i32, SP.getValue(0),
+ DAG.getSignedConstant(-Align->value(), DL, MVT::i32));
Chain = DAG.getCopyToReg(Chain, DL, ARM::SP, SP);
SDValue Ops[2] = { SP, Chain };
return DAG.getMergeValues(Ops, DL);
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 0077ecf59dd6d..03e54b3d395e3 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -24691,7 +24691,7 @@ SDValue RISCVTargetLowering::lowerDYNAMIC_STACKALLOC(SDValue Op,
SP = DAG.getNode(ISD::SUB, dl, XLenVT, SP, Size);
if (Align)
SP = DAG.getNode(ISD::AND, dl, VT, SP.getValue(0),
- DAG.getSignedConstant(-(uint64_t)Align->value(), dl, VT));
+ DAG.getSignedConstant(-Align->value(), dl, VT));
// Set the real SP to the new value with a probing loop.
Chain = DAG.getNode(RISCVISD::PROBED_ALLOCA, dl, MVT::Other, Chain, SP);
>From 5342c33f1dfd2588d586b795c402840102b627ea Mon Sep 17 00:00:00 2001
From: Kazu Hirata <kazu at google.com>
Date: Wed, 6 Aug 2025 07:11:16 -0700
Subject: [PATCH 085/171] [llvm] Proofread MIRLangRef.rst (#152263)
---
llvm/docs/MIRLangRef.rst | 48 ++++++++++++++++++++--------------------
1 file changed, 24 insertions(+), 24 deletions(-)
diff --git a/llvm/docs/MIRLangRef.rst b/llvm/docs/MIRLangRef.rst
index a505c1ea4b0aa..3f4c3cde9b3aa 100644
--- a/llvm/docs/MIRLangRef.rst
+++ b/llvm/docs/MIRLangRef.rst
@@ -27,7 +27,7 @@ data serialization language, and the full YAML language spec can be read at
`yaml.org
<http://www.yaml.org/spec/1.2/spec.html#Introduction>`_.
-A MIR file is split up into a series of `YAML documents`_. The first document
+A MIR file is split into a series of `YAML documents`_. The first document
can contain an optional embedded LLVM IR module, and the rest of the documents
contain the serialized machine functions.
@@ -65,22 +65,22 @@ after the name with a comma.
``llc -stop-after=dead-mi-elimination,1 bug-trigger.ll -o test.mir``
-After generating the input MIR file, you'll have to add a run line that uses
+After generating the input MIR file, you'll have to add a ``RUN`` line that uses
the ``-run-pass`` option to it. In order to test the post register allocation
pseudo instruction expansion pass on X86-64, a run line like the one shown
below can be used:
``# RUN: llc -o - %s -mtriple=x86_64-- -run-pass=postrapseudos | FileCheck %s``
-The MIR files are target dependent, so they have to be placed in the target
-specific test directories (``lib/CodeGen/TARGETNAME``). They also need to
-specify a target triple or a target architecture either in the run line or in
+The MIR files are target dependent, so they have to be placed in the
+target-specific test directories (``lib/CodeGen/TARGETNAME``). They also need to
+specify a target triple or a target architecture either in the ``RUN`` line or in
the embedded LLVM IR module.
Simplifying MIR files
^^^^^^^^^^^^^^^^^^^^^
-The MIR code coming out of ``-stop-after``/``-stop-before`` is very verbose;
+The MIR code coming out of ``-stop-after``/``-stop-before`` is very verbose.
Tests are more accessible and future proof when simplified:
- Use the ``-simplify-mir`` option with llc.
@@ -113,12 +113,12 @@ Tests are more accessible and future proof when simplified:
If the test doesn't depend on (good) alias analysis the references can be
dropped: `:: (load 8)`
-- MIR blocks can reference IR blocks for debug printing, profile information
+- MIR blocks can reference IR blocks for debug printing, profile information,
or debug locations. Example: `bb.42.myblock` in MIR references the IR block
`myblock`. It is usually possible to drop the `.myblock` reference and simply
use `bb.42`.
-- If there are no memory operands or blocks referencing the IR then the
+- If there are no memory operands or blocks referencing the IR, then the
IR function can be replaced by a parameterless dummy function like
`define @func() { ret void }`.
@@ -143,7 +143,7 @@ can serialize:
- The ``MCSymbol`` machine operands don't support temporary or local symbols.
- A lot of the state in ``MachineModuleInfo`` isn't serialized - only the CFI
- instructions and the variable debug information from MMI is serialized right
+ instructions and the variable debug information from MMI are serialized right
now.
These limitations impose restrictions on what you can test with the MIR format.
@@ -182,7 +182,7 @@ Machine Functions
-----------------
The remaining YAML documents contain the machine functions. This is an example
-of such YAML document:
+of such a YAML document:
.. code-block:: text
@@ -299,7 +299,7 @@ instructions:
bb.2.else:
<instructions>
-The branch weights can be specified in brackets after the successor blocks.
+The branch weights can be specified in parentheses after the successor blocks.
The example below defines a block that has two successors with branch weights
of 32 and 16:
@@ -314,7 +314,7 @@ Live In Registers
^^^^^^^^^^^^^^^^^
The machine basic block's live in registers have to be specified before any of
-the instructions:
+its instructions:
.. code-block:: text
@@ -322,14 +322,14 @@ the instructions:
liveins: $edi, $esi
The list of live in registers and successors can be empty. The language also
-allows multiple live in register and successor lists - they are combined into
+allows multiple live in register and successor lists; they are combined into
one list by the parser.
Miscellaneous Attributes
^^^^^^^^^^^^^^^^^^^^^^^^
The attributes ``IsAddressTaken``, ``IsLandingPad``,
-``IsInlineAsmBrIndirectTarget`` and ``Alignment`` can be specified in brackets
+``IsInlineAsmBrIndirectTarget`` and ``Alignment`` can be specified in parentheses
after the block's definition:
.. code-block:: text
@@ -417,7 +417,7 @@ and ``}`` are bundled with the first instruction.
Registers
---------
-Registers are one of the key primitives in the machine instructions
+Registers are one of the key primitives in the machine instruction
serialization language. They are primarily used in the
:ref:`register machine operands <register-operands>`,
but they can also be used in a number of other places, like the
@@ -503,9 +503,9 @@ will be printed as ``%subreg.sub_32``:
%1:gpr64 = SUBREG_TO_REG 0, %0, %subreg.sub_32
-For integers > 64bit, we use a special machine operand, ``MO_CImmediate``,
+For integers > 64 bits, we use a special machine operand, ``MO_CImmediate``,
which stores the immediate in a ``ConstantInt`` using an ``APInt`` (LLVM's
-arbitrary precision integers).
+arbitrary-precision integers).
.. TODO: Describe the FPIMM immediate operands.
@@ -626,7 +626,7 @@ For a CPI with the index 0 and offset -12:
%1:gr64 = MOV64ri %const.0 - 12
A constant pool entry is bound to a LLVM IR ``Constant`` or a target-specific
-``MachineConstantPoolValue``. When serializing all the function's constants the
+``MachineConstantPoolValue``. When serializing all the function's constants, the
following format is used:
.. code-block:: text
@@ -695,7 +695,7 @@ and the offset 8:
Jump-table Index Operands
^^^^^^^^^^^^^^^^^^^^^^^^^
-A jump-table index operand with the index 0 is printed as following:
+A jump-table index operand with the index 0 is printed as follows:
.. code-block:: text
@@ -711,7 +711,7 @@ A machine jump-table entry contains a list of ``MachineBasicBlocks``. When seria
- id: <index>
blocks: [ <bbreference>, <bbreference>, ... ]
-where ``<kind>`` is describing how the jump table is represented and emitted (plain address, relocations, PIC, etc.), and each ``<index>`` is a 32-bit unsigned integer and ``blocks`` contains a list of :ref:`machine basic block references <block-references>`.
+where ``<kind>`` describes how the jump table is represented and emitted (plain address, relocations, PIC, etc.), and each ``<index>`` is a 32-bit unsigned integer and ``blocks`` contains a list of :ref:`machine basic block references <block-references>`.
Example:
@@ -741,7 +741,7 @@ Example:
MCSymbol Operands
^^^^^^^^^^^^^^^^^
-A MCSymbol operand is holding a pointer to a ``MCSymbol``. For the limitations
+A MCSymbol operand holds a pointer to a ``MCSymbol``. For the limitations
of this operand in MIR, see :ref:`limitations <limitations>`.
The syntax is:
@@ -754,7 +754,7 @@ Debug Instruction Reference Operands
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A debug instruction reference operand is a pair of indices, referring to an
-instruction and an operand within that instruction respectively; see
+instruction and an operand within that instruction, respectively; see
:ref:`Instruction referencing locations <instruction-referencing-locations>`.
The example below uses a reference to Instruction 1, Operand 0:
@@ -766,7 +766,7 @@ The example below uses a reference to Instruction 1, Operand 0:
CFIIndex Operands
^^^^^^^^^^^^^^^^^
-A CFI Index operand is holding an index into a per-function side-table,
+A CFI Index operand holds an index into a per-function side-table,
``MachineFunction::getFrameInstructions()``, which references all the frame
instructions in a ``MachineFunction``. A ``CFI_INSTRUCTION`` may look like it
contains multiple operands, but the only operand it contains is the CFI Index.
@@ -842,7 +842,7 @@ Comments can be added or customized by overriding InstrInfo's hook
Debug-Info constructs
---------------------
-Most of the debugging information in a MIR file is to be found in the metadata
+Most of the debugging information in a MIR file is found in the metadata
of the embedded module. Within a machine function, that metadata is referred to
by various constructs to describe source locations and variable locations.
>From ca13c44bbc13118d215b4e1e02911157e4d809a8 Mon Sep 17 00:00:00 2001
From: Ross Brunton <ross at codeplay.com>
Date: Wed, 6 Aug 2025 15:34:31 +0100
Subject: [PATCH 086/171] [NFC][Offload] Clarify `olDestroyQueue` (#152132)
This has no code changes.
---
offload/liboffload/API/Queue.td | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/offload/liboffload/API/Queue.td b/offload/liboffload/API/Queue.td
index 6e86b2445c73d..7e4c5b8c68be2 100644
--- a/offload/liboffload/API/Queue.td
+++ b/offload/liboffload/API/Queue.td
@@ -24,7 +24,9 @@ def : Function {
def : Function {
let name = "olDestroyQueue";
let desc = "Destroy the queue and free all underlying resources.";
- let details = [];
+ let details = [
+ "Any work previously enqueued to the queue is still performed and any events generated for this queue remain valid."
+ ];
let params = [
Param<"ol_queue_handle_t", "Queue", "handle of the queue", PARAM_IN>
];
>From 66d1c37eb69c8ab38af6e61a38b7605f5f05d75b Mon Sep 17 00:00:00 2001
From: Alex Duran <alejandro.duran at intel.com>
Date: Wed, 6 Aug 2025 23:34:39 +0900
Subject: [PATCH 087/171] [OFFLOAD][OPENMP] 6.0 compatible interop interface
(#143491)
This patch introduces a new interop interface implementation with the
following characteristics:
* It supports the new OpenMP 6.0 prefer_type specification.
* It supports both explicit objects (from interop constructs) and
implicit objects (from variant calls).
* It implements a per-thread reuse mechanism for implicit objects to
reduce overheads.
* It provides a plugin interface that allows selecting the supported
interop types and managing all the backend-related interop operations
(init, sync, ...).
* It enables cooperation with the OpenMP runtime to allow progress on
OpenMP synchronizations.
* It cleans up some vendor/fr_id mismatches in the current query
routines.
* It supports an extension to define interop callbacks for library
cleanup. A user-level sketch follows this list.
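A user-level sketch of what this enables, hedged: it uses only standard
OpenMP interop directives and query routines, which the compiler lowers to
the new __tgt_interop_get/__tgt_interop_release entry points introduced
below; the device number and preference list are illustrative.

#include <omp.h>
#include <cstdio>

void use_foreign_stream() {
  omp_interop_t iop = omp_interop_none;

  // Explicit interop object with an OpenMP 6.0 prefer_type preference
  // list; lowered to the new __tgt_interop_get entry point.
#pragma omp interop init(prefer_type("cuda", "hip"), targetsync : iop) device(0)

  int err;
  // Query the foreign runtime that was actually selected and its
  // synchronization object (e.g., a CUDA or HIP stream).
  const char *fr = omp_get_interop_str(iop, omp_ipr_fr_name, &err);
  void *stream = omp_get_interop_ptr(iop, omp_ipr_targetsync, &err);
  std::printf("foreign runtime: %s, stream: %p\n", fr, stream);

  // ... hand `stream` to the foreign runtime here ...

  // Lowered to __tgt_interop_release; outstanding work is synchronized.
#pragma omp interop destroy(iop)
}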
---
offload/include/OpenMP/InteropAPI.h | 150 ++++++-
offload/include/OpenMP/omp.h | 51 ++-
offload/include/PerThreadTable.h | 114 ++++++
offload/include/PluginManager.h | 8 +-
offload/include/Shared/APITypes.h | 1 +
offload/libomptarget/OffloadRTL.cpp | 12 +
offload/libomptarget/OpenMP/API.cpp | 19 +
offload/libomptarget/OpenMP/InteropAPI.cpp | 382 +++++++++++++-----
offload/libomptarget/PluginManager.cpp | 7 +
offload/libomptarget/exports | 6 +-
.../common/include/PluginInterface.h | 123 ++++--
.../common/src/PluginInterface.cpp | 68 ++++
openmp/runtime/src/kmp.h | 7 +
openmp/runtime/src/kmp_barrier.cpp | 8 +
openmp/runtime/src/kmp_runtime.cpp | 15 +
openmp/runtime/src/kmp_tasking.cpp | 29 ++
16 files changed, 826 insertions(+), 174 deletions(-)
create mode 100644 offload/include/PerThreadTable.h
diff --git a/offload/include/OpenMP/InteropAPI.h b/offload/include/OpenMP/InteropAPI.h
index 71c78760a3226..53ac4be2e2e98 100644
--- a/offload/include/OpenMP/InteropAPI.h
+++ b/offload/include/OpenMP/InteropAPI.h
@@ -13,17 +13,69 @@
#include "omp.h"
+#include "PerThreadTable.h"
#include "omptarget.h"
extern "C" {
typedef enum kmp_interop_type_t {
kmp_interop_type_unknown = -1,
- kmp_interop_type_platform,
- kmp_interop_type_device,
- kmp_interop_type_tasksync,
+ kmp_interop_type_target,
+ kmp_interop_type_targetsync,
} kmp_interop_type_t;
+struct interop_attrs_t {
+ bool inorder : 1;
+ int reserved : 31;
+
+  /// Check if the supported attributes are compatible with the current
+  /// attributes. The value may be true only if the attribute is supported;
+  /// otherwise it must be false.
+ bool checkSupportedOnly(interop_attrs_t supported) const {
+ return supported.inorder || (!supported.inorder && !inorder);
+ }
+};
+
+struct interop_spec_t {
+ int32_t fr_id;
+ interop_attrs_t attrs; // Common attributes
+ int64_t impl_attrs; // Implementation specific attributes (recognized by each
+ // plugin)
+};
+
+struct interop_flags_t {
+ bool implicit : 1; // dispatch (true) or interop (false)
+ bool nowait : 1; // has nowait flag
+ int reserved : 30;
+};
+
+struct interop_ctx_t {
+ uint32_t version; // version of the interface (current is 0)
+ interop_flags_t flags;
+ int gtid;
+};
+
+struct dep_pack_t {
+ int32_t ndeps;
+ int32_t ndeps_noalias;
+ kmp_depend_info_t *deplist;
+ kmp_depend_info_t *noalias_deplist;
+};
+
+struct omp_interop_val_t;
+
+typedef void ompx_interop_cb_t(omp_interop_val_t *interop, void *data);
+
+struct omp_interop_cb_instance_t {
+ ompx_interop_cb_t *cb;
+ void *data;
+
+ omp_interop_cb_instance_t(ompx_interop_cb_t *cb, void *data)
+ : cb(cb), data(data) {}
+
+ void operator()(omp_interop_val_t *interop) { cb(interop, data); }
+};
+
/// The interop value type, aka. the interop object.
typedef struct omp_interop_val_t {
/// Device and interop-type are determined at construction time and fixed.
@@ -34,10 +86,98 @@ typedef struct omp_interop_val_t {
__tgt_device_info device_info;
const kmp_interop_type_t interop_type;
const intptr_t device_id;
- const omp_foreign_runtime_ids_t vendor_id = cuda;
- const intptr_t backend_type_id = omp_interop_backend_type_cuda_1;
+ omp_vendor_id_t vendor_id = omp_vendor_llvm;
+ tgt_foreign_runtime_id_t fr_id = tgt_fr_none;
+ interop_attrs_t attrs{false, 0}; // Common prefer specification attributes
+ int64_t impl_attrs = 0; // Implementation prefer specification attributes
+
+ // Constants
+ static constexpr int no_owner = -1; // This interop has no current owner
+
+ void *rtl_property = nullptr; // Plugin dependent information
+  // For implicitly created Interop objects (e.g., from a dispatch
+  // construct), tracks who currently owns the object.
+ int owner_gtid = no_owner;
+ // Marks whether the object was requested since the last time it was synced
+ bool clean = true;
+
+ typedef llvm::SmallVector<omp_interop_cb_instance_t> callback_list_t;
+
+ callback_list_t completion_cbs;
+
+ void reset() {
+ owner_gtid = no_owner;
+ markClean();
+ clearCompletionCbs();
+ }
+
+ llvm::Expected<DeviceTy &> getDevice() const;
+
+ bool hasOwner() const { return owner_gtid != no_owner; }
+
+ void setOwner(int gtid) { owner_gtid = gtid; }
+ bool isOwnedBy(int gtid) { return owner_gtid == gtid; }
+ bool isCompatibleWith(int32_t InteropType, const interop_spec_t &Spec);
+ bool isCompatibleWith(int32_t InteropType, const interop_spec_t &Spec,
+ int64_t DeviceNum, int gtid);
+ void markClean() { clean = true; }
+ void markDirty() { clean = false; }
+ bool isClean() const { return clean; }
+
+ int32_t flush(DeviceTy &Device);
+ int32_t sync_barrier(DeviceTy &Device);
+ int32_t async_barrier(DeviceTy &Device);
+ int32_t release(DeviceTy &Device);
+
+ void addCompletionCb(ompx_interop_cb_t *cb, void *data) {
+ completion_cbs.push_back(omp_interop_cb_instance_t(cb, data));
+ }
+
+ int numCompletionCbs() const { return completion_cbs.size(); }
+ void clearCompletionCbs() { completion_cbs.clear(); }
+
+ void runCompletionCbs() {
+ for (auto &cbInstance : completion_cbs)
+ cbInstance(this);
+ clearCompletionCbs();
+ }
} omp_interop_val_t;
} // extern "C"
+struct InteropTableEntry {
+ using ContainerTy = typename std::vector<omp_interop_val_t *>;
+ using iterator = typename ContainerTy::iterator;
+
+ ContainerTy Interops;
+
+ static constexpr int reservedEntriesPerThread =
+ 20; // reserve some entries to avoid reallocation
+
+ void add(omp_interop_val_t *obj) {
+ if (Interops.capacity() == 0)
+ Interops.reserve(reservedEntriesPerThread);
+ Interops.push_back(obj);
+ }
+
+ template <class ClearFuncTy> void clear(ClearFuncTy f) {
+ for (auto &Obj : Interops) {
+ f(Obj);
+ }
+ }
+
+ /// vector interface
+ int size() const { return Interops.size(); }
+ iterator begin() { return Interops.begin(); }
+ iterator end() { return Interops.end(); }
+ iterator erase(iterator it) { return Interops.erase(it); }
+};
+
+struct InteropTblTy
+ : public PerThreadTable<InteropTableEntry, omp_interop_val_t *> {
+ void clear();
+};
+
+void syncImplicitInterops(int gtid, void *event);
+
#endif // OMPTARGET_OPENMP_INTEROP_API_H
diff --git a/offload/include/OpenMP/omp.h b/offload/include/OpenMP/omp.h
index b44c6aff1b289..49d9f1fa75c20 100644
--- a/offload/include/OpenMP/omp.h
+++ b/offload/include/OpenMP/omp.h
@@ -80,15 +80,18 @@ typedef enum omp_interop_rc {
omp_irc_other = -6
} omp_interop_rc_t;
-typedef enum omp_interop_fr {
- omp_ifr_cuda = 1,
- omp_ifr_cuda_driver = 2,
- omp_ifr_opencl = 3,
- omp_ifr_sycl = 4,
- omp_ifr_hip = 5,
- omp_ifr_level_zero = 6,
- omp_ifr_last = 7
-} omp_interop_fr_t;
+/* Foreign runtime values from OpenMP Additional Definitions document v2.1 */
+typedef enum tgt_foreign_runtime_id_t {
+ tgt_fr_none = 0,
+ tgt_fr_cuda = 1,
+ tgt_fr_cuda_driver = 2,
+ tgt_fr_opencl = 3,
+ tgt_fr_sycl = 4,
+ tgt_fr_hip = 5,
+ tgt_fr_level_zero = 6,
+ tgt_fr_hsa = 7,
+ tgt_fr_last = 8
+} tgt_foreign_runtime_id_t;
typedef void *omp_interop_t;
@@ -134,19 +137,23 @@ omp_get_interop_type_desc(const omp_interop_t, omp_interop_property_t);
extern const char *__KAI_KMPC_CONVENTION
omp_get_interop_rc_desc(const omp_interop_t, omp_interop_rc_t);
-typedef enum omp_interop_backend_type_t {
- // reserve 0
- omp_interop_backend_type_cuda_1 = 1,
-} omp_interop_backend_type_t;
-
-typedef enum omp_foreign_runtime_ids {
- cuda = 1,
- cuda_driver = 2,
- opencl = 3,
- sycl = 4,
- hip = 5,
- level_zero = 6,
-} omp_foreign_runtime_ids_t;
+/* Vendor defined values from OpenMP Additional Definitions document v2.1*/
+typedef enum omp_vendor_id {
+ omp_vendor_unknown = 0,
+ omp_vendor_amd = 1,
+ omp_vendor_arm = 2,
+ omp_vendor_bsc = 3,
+ omp_vendor_fujitsu = 4,
+ omp_vendor_gnu = 5,
+ omp_vendor_hpe = 6,
+ omp_vendor_ibm = 7,
+ omp_vendor_intel = 8,
+ omp_vendor_llvm = 9,
+ omp_vendor_nec = 10,
+ omp_vendor_nvidia = 11,
+ omp_vendor_ti = 12,
+ omp_vendor_last = 13
+} omp_vendor_id_t;
///} InteropAPI
diff --git a/offload/include/PerThreadTable.h b/offload/include/PerThreadTable.h
new file mode 100644
index 0000000000000..45b196171b4c8
--- /dev/null
+++ b/offload/include/PerThreadTable.h
@@ -0,0 +1,114 @@
+//===-- PerThreadTable.h -- PerThread Storage Structure ----*- C++ -*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// Table indexed with one entry per thread.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef OFFLOAD_PERTHREADTABLE_H
+#define OFFLOAD_PERTHREADTABLE_H
+
+#include <list>
+#include <memory>
+#include <mutex>
+
+// Using an STL container (such as std::vector) indexed by thread ID has
+// too many race-condition issues, so we store each thread's entry in a
+// thread_local variable.
+// ContainerType is the container each thread uses to store the objects,
+// e.g., std::vector, std::set, etc. ObjectType is the type of the stored
+// objects, e.g., omp_interop_val_t *.
+
+template <typename ContainerType, typename ObjectType> struct PerThreadTable {
+ using iterator = typename ContainerType::iterator;
+
+ struct PerThreadData {
+ size_t NElements = 0;
+ std::unique_ptr<ContainerType> ThEntry;
+ };
+
+ std::mutex Mtx;
+ std::list<std::shared_ptr<PerThreadData>> ThreadDataList;
+
+ // define default constructors, disable copy and move constructors
+ PerThreadTable() = default;
+ PerThreadTable(const PerThreadTable &) = delete;
+ PerThreadTable(PerThreadTable &&) = delete;
+ PerThreadTable &operator=(const PerThreadTable &) = delete;
+ PerThreadTable &operator=(PerThreadTable &&) = delete;
+ ~PerThreadTable() {
+ std::lock_guard<std::mutex> Lock(Mtx);
+ ThreadDataList.clear();
+ }
+
+private:
+ PerThreadData &getThreadData() {
+ static thread_local std::shared_ptr<PerThreadData> ThData = nullptr;
+ if (!ThData) {
+ ThData = std::make_shared<PerThreadData>();
+ std::lock_guard<std::mutex> Lock(Mtx);
+ ThreadDataList.push_back(ThData);
+ }
+ return *ThData;
+ }
+
+protected:
+ ContainerType &getThreadEntry() {
+ auto &ThData = getThreadData();
+ if (ThData.ThEntry)
+ return *ThData.ThEntry;
+ ThData.ThEntry = std::make_unique<ContainerType>();
+ return *ThData.ThEntry;
+ }
+
+ size_t &getThreadNElements() {
+ auto &ThData = getThreadData();
+ return ThData.NElements;
+ }
+
+public:
+ void add(ObjectType obj) {
+ auto &Entry = getThreadEntry();
+ auto &NElements = getThreadNElements();
+ NElements++;
+ Entry.add(obj);
+ }
+
+ iterator erase(iterator it) {
+ auto &Entry = getThreadEntry();
+ auto &NElements = getThreadNElements();
+ NElements--;
+ return Entry.erase(it);
+ }
+
+ size_t size() { return getThreadNElements(); }
+
+ // Iterators to traverse objects owned by
+ // the current thread
+ iterator begin() {
+ auto &Entry = getThreadEntry();
+ return Entry.begin();
+ }
+ iterator end() {
+ auto &Entry = getThreadEntry();
+ return Entry.end();
+ }
+
+ template <class F> void clear(F f) {
+ std::lock_guard<std::mutex> Lock(Mtx);
+ for (auto ThData : ThreadDataList) {
+ if (!ThData->ThEntry || ThData->NElements == 0)
+ continue;
+ ThData->ThEntry->clear(f);
+ ThData->NElements = 0;
+ }
+ ThreadDataList.clear();
+ }
+};
+
+#endif
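Not part of the patch, but useful orientation: the ContainerType plugged
into PerThreadTable only needs add(), a templated clear(), and vector-style
iteration. A minimal sketch with an illustrative entry type (the real user
in this patch is InteropTableEntry):

#include "PerThreadTable.h"
#include <vector>

struct IntPtrEntry {
  using ContainerTy = std::vector<int *>;
  using iterator = ContainerTy::iterator;
  ContainerTy Items;

  void add(int *P) { Items.push_back(P); } // used by PerThreadTable::add
  template <class F> void clear(F Fn) {    // used by PerThreadTable::clear
    for (int *P : Items)
      Fn(P);
    Items.clear();
  }
  iterator begin() { return Items.begin(); }
  iterator end() { return Items.end(); }
  iterator erase(iterator It) { return Items.erase(It); }
};

// Each thread lazily gets its own IntPtrEntry; Table.clear(f) visits
// every thread's objects under the table mutex.
PerThreadTable<IntPtrEntry, int *> Table;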
diff --git a/offload/include/PluginManager.h b/offload/include/PluginManager.h
index ec3adadf0819b..6c6fdebe76dff 100644
--- a/offload/include/PluginManager.h
+++ b/offload/include/PluginManager.h
@@ -35,6 +35,8 @@
#include <mutex>
#include <string>
+#include "OpenMP/InteropAPI.h"
+
using GenericPluginTy = llvm::omp::target::plugin::GenericPluginTy;
/// Struct for the data required to handle plugins
@@ -88,6 +90,9 @@ struct PluginManager {
HostPtrToTableMapTy HostPtrToTableMap;
std::mutex TblMapMtx; ///< For HostPtrToTableMap
+ /// Table of cached implicit interop objects
+ InteropTblTy InteropTbl;
+
// Work around for plugins that call dlopen on shared libraries that call
// tgt_register_lib during their initialisation. Stash the pointers in a
// vector until the plugins are all initialised and then register them.
@@ -185,5 +190,6 @@ void initRuntime();
void deinitRuntime();
extern PluginManager *PM;
-
+extern std::atomic<bool> RTLAlive; // Indicates if the RTL has been initialized
+extern std::atomic<int> RTLOngoingSyncs; // Counts ongoing external syncs
#endif // OMPTARGET_PLUGIN_MANAGER_H
diff --git a/offload/include/Shared/APITypes.h b/offload/include/Shared/APITypes.h
index 978b53d5d69b9..f376c7dc861f9 100644
--- a/offload/include/Shared/APITypes.h
+++ b/offload/include/Shared/APITypes.h
@@ -36,6 +36,7 @@ struct __tgt_device_image {
struct __tgt_device_info {
void *Context = nullptr;
void *Device = nullptr;
+ void *Platform = nullptr;
};
/// This struct is a record of all the host code that may be offloaded to a
diff --git a/offload/libomptarget/OffloadRTL.cpp b/offload/libomptarget/OffloadRTL.cpp
index 29b573a27d087..04bd21ec91a49 100644
--- a/offload/libomptarget/OffloadRTL.cpp
+++ b/offload/libomptarget/OffloadRTL.cpp
@@ -22,6 +22,8 @@ extern void llvm::omp::target::ompt::connectLibrary();
static std::mutex PluginMtx;
static uint32_t RefCount = 0;
+std::atomic<bool> RTLAlive{false};
+std::atomic<int> RTLOngoingSyncs{0};
void initRuntime() {
std::scoped_lock<decltype(PluginMtx)> Lock(PluginMtx);
@@ -41,6 +43,9 @@ void initRuntime() {
PM->init();
PM->registerDelayedLibraries();
+
+ // RTL initialization is complete
+ RTLAlive = true;
}
}
@@ -50,6 +55,13 @@ void deinitRuntime() {
if (RefCount == 1) {
DP("Deinit offload library!\n");
+ // RTL deinitialization has started
+ RTLAlive = false;
+ while (RTLOngoingSyncs > 0) {
+ DP("Waiting for ongoing syncs to finish, count: %d\n",
+ RTLOngoingSyncs.load());
+ std::this_thread::sleep_for(std::chrono::milliseconds(100));
+ }
PM->deinit();
delete PM;
PM = nullptr;
diff --git a/offload/libomptarget/OpenMP/API.cpp b/offload/libomptarget/OpenMP/API.cpp
index 4576f9bd06121..b0f0573833713 100644
--- a/offload/libomptarget/OpenMP/API.cpp
+++ b/offload/libomptarget/OpenMP/API.cpp
@@ -16,6 +16,7 @@
#include "rtl.h"
#include "OpenMP/InternalTypes.h"
+#include "OpenMP/InteropAPI.h"
#include "OpenMP/Mapping.h"
#include "OpenMP/OMPT/Interface.h"
#include "OpenMP/omp.h"
@@ -683,3 +684,21 @@ EXTERN void *omp_get_mapped_ptr(const void *Ptr, int DeviceNum) {
return TPR.TargetPointer;
}
+
+// This routine gets called from the Host RTL at sync points (taskwait, barrier,
+// ...) so we can synchronize the necessary objects from the offload side.
+EXTERN void __tgt_target_sync(ident_t *loc_ref, int gtid, void *current_task,
+ void *event) {
+ if (!RTLAlive)
+ return;
+
+ RTLOngoingSyncs++;
+ if (!RTLAlive) {
+ RTLOngoingSyncs--;
+ return;
+ }
+
+ syncImplicitInterops(gtid, event);
+
+ RTLOngoingSyncs--;
+}
diff --git a/offload/libomptarget/OpenMP/InteropAPI.cpp b/offload/libomptarget/OpenMP/InteropAPI.cpp
index bdbc440c64a2c..eb5425ecbf062 100644
--- a/offload/libomptarget/OpenMP/InteropAPI.cpp
+++ b/offload/libomptarget/OpenMP/InteropAPI.cpp
@@ -10,6 +10,7 @@
#include "OpenMP/InternalTypes.h"
#include "OpenMP/omp.h"
+#include "OffloadPolicy.h"
#include "PluginManager.h"
#include "device.h"
#include "omptarget.h"
@@ -56,22 +57,22 @@ void getTypeMismatch(omp_interop_property_t Property, int *Err) {
*Err = getPropertyErrorType(Property);
}
-const char *getVendorIdToStr(const omp_foreign_runtime_ids_t VendorId) {
- switch (VendorId) {
- case cuda:
- return ("cuda");
- case cuda_driver:
- return ("cuda_driver");
- case opencl:
- return ("opencl");
- case sycl:
- return ("sycl");
- case hip:
- return ("hip");
- case level_zero:
- return ("level_zero");
- }
- return ("unknown");
+static const char *VendorStrTbl[] = {
+ "unknown", "amd", "arm", "bsc", "fujitsu", "gnu", "hpe",
+ "ibm", "intel", "llvm", "nec", "nvidia", "ti"};
+const char *getVendorIdToStr(const omp_vendor_id_t VendorId) {
+ if (VendorId < omp_vendor_unknown || VendorId >= omp_vendor_last)
+ return ("unknown");
+ return VendorStrTbl[VendorId];
+}
+
+static const char *ForeignRuntimeStrTbl[] = {
+ "none", "cuda", "cuda_driver", "opencl",
+ "sycl", "hip", "level_zero", "hsa"};
+const char *getForeignRuntimeIdToStr(const tgt_foreign_runtime_id_t FrId) {
+ if (FrId < tgt_fr_none || FrId >= tgt_fr_last)
+ return ("unknown");
+ return ForeignRuntimeStrTbl[FrId];
}
template <typename PropertyTy>
@@ -83,7 +84,7 @@ intptr_t getProperty<intptr_t>(omp_interop_val_t &InteropVal,
omp_interop_property_t Property, int *Err) {
switch (Property) {
case omp_ipr_fr_id:
- return InteropVal.backend_type_id;
+ return InteropVal.fr_id;
case omp_ipr_vendor:
return InteropVal.vendor_id;
case omp_ipr_device_num:
@@ -99,10 +100,8 @@ const char *getProperty<const char *>(omp_interop_val_t &InteropVal,
omp_interop_property_t Property,
int *Err) {
switch (Property) {
- case omp_ipr_fr_id:
- return InteropVal.interop_type == kmp_interop_type_tasksync
- ? "tasksync"
- : "device+context";
+ case omp_ipr_fr_name:
+ return getForeignRuntimeIdToStr(InteropVal.fr_id);
case omp_ipr_vendor_name:
return getVendorIdToStr(InteropVal.vendor_id);
default:
@@ -120,6 +119,8 @@ void *getProperty<void *>(omp_interop_val_t &InteropVal,
return InteropVal.device_info.Device;
*Err = omp_irc_no_value;
return const_cast<char *>(InteropVal.err_str);
+ case omp_ipr_platform:
+ return InteropVal.device_info.Platform;
case omp_ipr_device_context:
return InteropVal.device_info.Context;
case omp_ipr_targetsync:
@@ -145,13 +146,13 @@ bool getPropertyCheck(omp_interop_val_t **InteropPtr,
return false;
}
if (Property == omp_ipr_targetsync &&
- (*InteropPtr)->interop_type != kmp_interop_type_tasksync) {
+ (*InteropPtr)->interop_type != kmp_interop_type_targetsync) {
if (Err)
*Err = omp_irc_other;
return false;
}
if ((Property == omp_ipr_device || Property == omp_ipr_device_context) &&
- (*InteropPtr)->interop_type == kmp_interop_type_tasksync) {
+ (*InteropPtr)->interop_type == kmp_interop_type_targetsync) {
if (Err)
*Err = omp_irc_other;
return false;
@@ -166,7 +167,7 @@ bool getPropertyCheck(omp_interop_val_t **InteropPtr,
omp_interop_property_t property_id, \
int *err) { \
omp_interop_val_t *interop_val = (omp_interop_val_t *)interop; \
- assert((interop_val)->interop_type == kmp_interop_type_tasksync); \
+ assert((interop_val)->interop_type == kmp_interop_type_targetsync); \
if (!getPropertyCheck(&interop_val, property_id, err)) { \
return (RETURN_TYPE)(0); \
} \
@@ -193,119 +194,278 @@ __OMP_GET_INTEROP_TY3(const char *, type_desc)
__OMP_GET_INTEROP_TY3(const char *, rc_desc)
#undef __OMP_GET_INTEROP_TY3
-static const char *copyErrorString(llvm::Error &&Err) {
- // TODO: Use the error string while avoiding leaks.
- std::string ErrMsg = llvm::toString(std::move(Err));
- char *UsrMsg = reinterpret_cast<char *>(malloc(ErrMsg.size() + 1));
- strcpy(UsrMsg, ErrMsg.c_str());
- return UsrMsg;
-}
-
extern "C" {
-void __tgt_interop_init(ident_t *LocRef, int32_t Gtid,
- omp_interop_val_t *&InteropPtr,
- kmp_interop_type_t InteropType, int32_t DeviceId,
- int32_t Ndeps, kmp_depend_info_t *DepList,
- int32_t HaveNowait) {
- int32_t NdepsNoalias = 0;
- kmp_depend_info_t *NoaliasDepList = NULL;
- assert(InteropType != kmp_interop_type_unknown &&
- "Cannot initialize with unknown interop_type!");
- if (DeviceId == -1) {
- DeviceId = omp_get_default_device();
+omp_interop_val_t *__tgt_interop_get(ident_t *LocRef, int32_t InteropType,
+ int64_t DeviceNum, int32_t NumPrefers,
+ interop_spec_t *Prefers,
+ interop_ctx_t *Ctx, dep_pack_t *Deps) {
+
+ DP("Call to %s with device_num %" PRId64 ", interop type %" PRId32
+ ", number of preferred specs %" PRId32 "%s%s\n",
+ __func__, DeviceNum, InteropType, NumPrefers,
+ Ctx->flags.implicit ? " (implicit)" : "",
+ Ctx->flags.nowait ? " (nowait)" : "");
+
+ if (OffloadPolicy::get(*PM).Kind == OffloadPolicy::DISABLED)
+ return omp_interop_none;
+
+ // Now, try to create an interop with device_num.
+ if (DeviceNum == OFFLOAD_DEVICE_DEFAULT)
+ DeviceNum = omp_get_default_device();
+
+ auto gtid = Ctx->gtid;
+
+ if (InteropType == kmp_interop_type_targetsync) {
+ if (Ctx->flags.nowait)
+ DP("Warning: nowait flag on interop creation not supported yet. "
+ "Ignored\n");
+ if (Deps)
+ __kmpc_omp_wait_deps(LocRef, gtid, Deps->ndeps, Deps->deplist,
+ Deps->ndeps_noalias, Deps->noalias_deplist);
}
- if (InteropType == kmp_interop_type_tasksync) {
- __kmpc_omp_wait_deps(LocRef, Gtid, Ndeps, DepList, NdepsNoalias,
- NoaliasDepList);
+ auto DeviceOrErr = PM->getDevice(DeviceNum);
+ if (!DeviceOrErr) {
+ DP("Couldn't find device %" PRId64
+ " while constructing interop object: %s\n",
+ DeviceNum, toString(DeviceOrErr.takeError()).c_str());
+ return omp_interop_none;
+ }
+ auto &Device = *DeviceOrErr;
+ omp_interop_val_t *Interop = omp_interop_none;
+ auto InteropSpec = Device.RTL->select_interop_preference(
+ DeviceNum, InteropType, NumPrefers, Prefers);
+ if (InteropSpec.fr_id == tgt_fr_none) {
+ DP("Interop request not supported by device %" PRId64 "\n", DeviceNum);
+ return omp_interop_none;
+ }
+ DP("Selected interop preference is fr_id=%s%s impl_attrs=%" PRId64 "\n",
+ getForeignRuntimeIdToStr((tgt_foreign_runtime_id_t)InteropSpec.fr_id),
+ InteropSpec.attrs.inorder ? " inorder" : "", InteropSpec.impl_attrs);
+
+ if (Ctx->flags.implicit) {
+ // This is a request for an RTL managed interop object.
+ // Get it from the InteropTbl if possible
+ for (auto iop : PM->InteropTbl) {
+ if (iop->isCompatibleWith(InteropType, InteropSpec, DeviceNum, gtid)) {
+ Interop = iop;
+ Interop->markDirty();
+ DP("Reused interop " DPxMOD " from device number %" PRId64
+ " for gtid %" PRId32 "\n",
+ DPxPTR(Interop), DeviceNum, gtid);
+ return Interop;
+ }
+ }
}
- InteropPtr = new omp_interop_val_t(DeviceId, InteropType);
+ Interop = Device.RTL->create_interop(DeviceNum, InteropType, &InteropSpec);
+ DP("Created an interop " DPxMOD " from device number %" PRId64 "\n",
+ DPxPTR(Interop), DeviceNum);
+
+ if (Ctx->flags.implicit) {
+ // register the new implicit interop in the RTL
+ Interop->setOwner(gtid);
+ Interop->markDirty();
+ PM->InteropTbl.add(Interop);
+ } else {
+ Interop->setOwner(omp_interop_val_t::no_owner);
+ }
- auto DeviceOrErr = PM->getDevice(DeviceId);
- if (!DeviceOrErr) {
- InteropPtr->err_str = copyErrorString(DeviceOrErr.takeError());
- return;
+ return Interop;
+}
+
+int __tgt_interop_use(ident_t *LocRef, omp_interop_val_t *Interop,
+ interop_ctx_t *Ctx, dep_pack_t *Deps) {
+ bool Nowait = Ctx->flags.nowait;
+ DP("Call to %s with interop " DPxMOD ", nowait %" PRId32 "\n", __func__,
+ DPxPTR(Interop), Nowait);
+ if (OffloadPolicy::get(*PM).Kind == OffloadPolicy::DISABLED || !Interop)
+ return OFFLOAD_FAIL;
+
+ if (Interop->interop_type == kmp_interop_type_targetsync) {
+ if (Deps) {
+ if (Nowait) {
+ DP("Warning: nowait flag on interop use with dependences not supported"
+ "yet. Ignored\n");
+ Nowait = false;
+ }
+
+ __kmpc_omp_wait_deps(LocRef, Ctx->gtid, Deps->ndeps, Deps->deplist,
+ Deps->ndeps_noalias, Deps->noalias_deplist);
+ }
}
- DeviceTy &Device = *DeviceOrErr;
- if (!Device.RTL ||
- Device.RTL->init_device_info(DeviceId, &(InteropPtr)->device_info,
- &(InteropPtr)->err_str)) {
- delete InteropPtr;
- InteropPtr = omp_interop_none;
+ auto DeviceOrErr = Interop->getDevice();
+ if (!DeviceOrErr) {
+ REPORT("Failed to get device for interop " DPxMOD ": %s\n", DPxPTR(Interop),
+ toString(DeviceOrErr.takeError()).c_str());
+ return OFFLOAD_FAIL;
}
- if (InteropType == kmp_interop_type_tasksync) {
- if (!Device.RTL ||
- Device.RTL->init_async_info(DeviceId, &(InteropPtr)->async_info)) {
- delete InteropPtr;
- InteropPtr = omp_interop_none;
+ auto &IOPDevice = *DeviceOrErr;
+
+ if (Interop->async_info && Interop->async_info->Queue) {
+ if (Nowait)
+ Interop->async_barrier(IOPDevice);
+ else {
+ Interop->flush(IOPDevice);
+ Interop->sync_barrier(IOPDevice);
+ Interop->markClean();
}
}
+
+ return OFFLOAD_SUCCESS;
}
-void __tgt_interop_use(ident_t *LocRef, int32_t Gtid,
- omp_interop_val_t *&InteropPtr, int32_t DeviceId,
- int32_t Ndeps, kmp_depend_info_t *DepList,
- int32_t HaveNowait) {
- int32_t NdepsNoalias = 0;
- kmp_depend_info_t *NoaliasDepList = NULL;
- assert(InteropPtr && "Cannot use nullptr!");
- omp_interop_val_t *InteropVal = InteropPtr;
- if (DeviceId == -1) {
- DeviceId = omp_get_default_device();
+int __tgt_interop_release(ident_t *LocRef, omp_interop_val_t *Interop,
+ interop_ctx_t *Ctx, dep_pack_t *Deps) {
+ DP("Call to %s with interop " DPxMOD "\n", __func__, DPxPTR(Interop));
+
+ if (OffloadPolicy::get(*PM).Kind == OffloadPolicy::DISABLED || !Interop)
+ return OFFLOAD_FAIL;
+
+ if (Interop->interop_type == kmp_interop_type_targetsync) {
+ if (Ctx->flags.nowait)
+ DP("Warning: nowait flag on interop destroy not supported "
+ "yet. Ignored\n");
+ if (Deps) {
+ __kmpc_omp_wait_deps(LocRef, Ctx->gtid, Deps->ndeps, Deps->deplist,
+ Deps->ndeps_noalias, Deps->noalias_deplist);
+ }
}
- assert(InteropVal != omp_interop_none &&
- "Cannot use uninitialized interop_ptr!");
- assert((DeviceId == -1 || InteropVal->device_id == DeviceId) &&
- "Inconsistent device-id usage!");
- auto DeviceOrErr = PM->getDevice(DeviceId);
+ auto DeviceOrErr = Interop->getDevice();
if (!DeviceOrErr) {
- InteropPtr->err_str = copyErrorString(DeviceOrErr.takeError());
- return;
+ REPORT("Failed to get device for interop " DPxMOD ": %s\n", DPxPTR(Interop),
+ toString(DeviceOrErr.takeError()).c_str());
+ return OFFLOAD_FAIL;
}
- if (InteropVal->interop_type == kmp_interop_type_tasksync) {
- __kmpc_omp_wait_deps(LocRef, Gtid, Ndeps, DepList, NdepsNoalias,
- NoaliasDepList);
- }
- // TODO Flush the queue associated with the interop through the plugin
+ return Interop->release(*DeviceOrErr);
}
-void __tgt_interop_destroy(ident_t *LocRef, int32_t Gtid,
- omp_interop_val_t *&InteropPtr, int32_t DeviceId,
- int32_t Ndeps, kmp_depend_info_t *DepList,
- int32_t HaveNowait) {
- int32_t NdepsNoalias = 0;
- kmp_depend_info_t *NoaliasDepList = NULL;
- assert(InteropPtr && "Cannot use nullptr!");
- omp_interop_val_t *InteropVal = InteropPtr;
- if (DeviceId == -1) {
- DeviceId = omp_get_default_device();
- }
+EXTERN int ompx_interop_add_completion_callback(omp_interop_val_t *Interop,
+ ompx_interop_cb_t *CB,
+ void *Data) {
+ DP("Call to %s with interop " DPxMOD ", property callback " DPxMOD
+ "and data " DPxMOD "\n",
+ __func__, DPxPTR(Interop), DPxPTR(CB), DPxPTR(Data));
- if (InteropVal == omp_interop_none)
- return;
+ if (OffloadPolicy::get(*PM).Kind == OffloadPolicy::DISABLED || !Interop)
+ return omp_irc_other;
- assert((DeviceId == -1 || InteropVal->device_id == DeviceId) &&
- "Inconsistent device-id usage!");
- auto DeviceOrErr = PM->getDevice(DeviceId);
- if (!DeviceOrErr) {
- InteropPtr->err_str = copyErrorString(DeviceOrErr.takeError());
- return;
+ Interop->addCompletionCb(CB, Data);
+
+ return omp_irc_success;
+}
+
+} // extern "C"
+
+llvm::Expected<DeviceTy &> omp_interop_val_t::getDevice() const {
+ return PM->getDevice(device_id);
+}
+
+bool omp_interop_val_t::isCompatibleWith(int32_t InteropType,
+ const interop_spec_t &Spec) {
+ if (interop_type != InteropType)
+ return false;
+ if (Spec.fr_id != fr_id)
+ return false;
+ if (Spec.attrs.inorder != attrs.inorder)
+ return false;
+ if (Spec.impl_attrs != impl_attrs)
+ return false;
+
+ return true;
+}
+
+bool omp_interop_val_t::isCompatibleWith(int32_t InteropType,
+ const interop_spec_t &Spec,
+ int64_t DeviceNum, int GTID) {
+ if (device_id != DeviceNum)
+ return false;
+
+ if (GTID != owner_gtid)
+ return false;
+
+ return isCompatibleWith(InteropType, Spec);
+}
+
+int32_t omp_interop_val_t::flush(DeviceTy &Device) {
+ return Device.RTL->flush_queue(this);
+}
+
+int32_t omp_interop_val_t::sync_barrier(DeviceTy &Device) {
+ if (Device.RTL->sync_barrier(this) != OFFLOAD_SUCCESS) {
+ FATAL_MESSAGE(device_id, "Interop sync barrier failed for %p object\n",
+ this);
}
+ DP("Calling completion callbacks for " DPxMOD "\n", DPxPTR(this));
+ runCompletionCbs();
+ return OFFLOAD_SUCCESS;
+}
+
+int32_t omp_interop_val_t::async_barrier(DeviceTy &Device) {
+ return Device.RTL->async_barrier(this);
+}
- if (InteropVal->interop_type == kmp_interop_type_tasksync) {
- __kmpc_omp_wait_deps(LocRef, Gtid, Ndeps, DepList, NdepsNoalias,
- NoaliasDepList);
+int32_t omp_interop_val_t::release(DeviceTy &Device) {
+ if (async_info != nullptr && (!hasOwner() || !isClean())) {
+ flush(Device);
+ sync_barrier(Device);
}
- // TODO Flush the queue associated with the interop through the plugin
- // TODO Signal out dependences
+ return Device.RTL->release_interop(device_id, this);
+}
- delete InteropPtr;
- InteropPtr = omp_interop_none;
+void syncImplicitInterops(int Gtid, void *Event) {
+ if (PM->InteropTbl.size() == 0)
+ return;
+
+ DP("target_sync: syncing interops for gtid %" PRId32 ", event " DPxMOD "\n",
+ Gtid, DPxPTR(Event));
+
+ for (auto iop : PM->InteropTbl) {
+ if (iop->async_info && iop->async_info->Queue && iop->isOwnedBy(Gtid) &&
+ !iop->isClean()) {
+
+ auto DeviceOrErr = iop->getDevice();
+ if (!DeviceOrErr) {
+ REPORT("Failed to get device for interop " DPxMOD ": %s\n", DPxPTR(iop),
+ toString(DeviceOrErr.takeError()).c_str());
+ continue;
+ }
+ auto &IOPDevice = *DeviceOrErr;
+
+ iop->flush(IOPDevice);
+ iop->sync_barrier(IOPDevice);
+ iop->markClean();
+
+ // Alternate implementation option in case using barriers is not
+ // efficient enough:
+ //
+ // Instead of using a synchronous barrier, queue an asynchronous
+ // barrier and create a proxy task associated to the event to handle
+ // OpenMP synchronizations.
+ // When the event is completed, fulfill the proxy task to notify the
+ // OpenMP runtime.
+ // event = iop->asyncBarrier();
+ // ptask = createProxyTask();
+ // Events->add(event,ptask);
+ }
+ }
+ // This would be needed for the alternate implementation
+ // processEvents();
}
-} // extern "C"
+void InteropTblTy::clear() {
+ DP("Clearing Interop Table\n");
+ PerThreadTable::clear([](auto &IOP) {
+ auto DeviceOrErr = IOP->getDevice();
+ if (!DeviceOrErr) {
+ REPORT("Failed to get device for interop " DPxMOD ": %s\n", DPxPTR(IOP),
+ toString(DeviceOrErr.takeError()).c_str());
+ return;
+ }
+ IOP->release(*DeviceOrErr);
+ });
+}
diff --git a/offload/libomptarget/PluginManager.cpp b/offload/libomptarget/PluginManager.cpp
index c4d99dfa9f10c..b57a2f815cba6 100644
--- a/offload/libomptarget/PluginManager.cpp
+++ b/offload/libomptarget/PluginManager.cpp
@@ -128,6 +128,13 @@ void PluginManager::initializeAllDevices() {
initializeDevice(Plugin, DeviceId);
}
}
+ // After all plugins are initialized, register atExit cleanup handlers
+ std::atexit([]() {
+    // Interop cleanup should be done before the plugins are deinitialized,
+    // as the backend libraries may already be unloaded.
+ if (PM)
+ PM->InteropTbl.clear();
+ });
}
// Returns a pointer to the binary descriptor, upgrading from a legacy format if
diff --git a/offload/libomptarget/exports b/offload/libomptarget/exports
index 2406776c1fb5f..8e2db6ba8bba4 100644
--- a/offload/libomptarget/exports
+++ b/offload/libomptarget/exports
@@ -36,6 +36,7 @@ VERS1.0 {
__kmpc_push_target_tripcount;
__kmpc_push_target_tripcount_mapper;
ompx_dump_mapping_tables;
+ ompx_interop_add_completion_callback;
omp_get_mapped_ptr;
omp_get_num_devices;
omp_get_device_num;
@@ -67,9 +68,10 @@ VERS1.0 {
omp_get_interop_int;
omp_get_interop_name;
omp_get_interop_type_desc;
- __tgt_interop_init;
+ __tgt_interop_get;
__tgt_interop_use;
- __tgt_interop_destroy;
+ __tgt_interop_release;
+ __tgt_target_sync;
__llvmPushCallConfiguration;
__llvmPopCallConfiguration;
llvmLaunchKernel;
diff --git a/offload/plugins-nextgen/common/include/PluginInterface.h b/offload/plugins-nextgen/common/include/PluginInterface.h
index 8c17a2ee07047..21dc1d1fdfb17 100644
--- a/offload/plugins-nextgen/common/include/PluginInterface.h
+++ b/offload/plugins-nextgen/common/include/PluginInterface.h
@@ -21,6 +21,7 @@
#include <vector>
#include "ExclusiveAccess.h"
+#include "OpenMP/InteropAPI.h"
#include "Shared/APITypes.h"
#include "Shared/Debug.h"
#include "Shared/Environment.h"
@@ -60,6 +61,39 @@ struct GenericKernelTy;
struct GenericDeviceTy;
struct RecordReplayTy;
+namespace Plugin {
+/// Create a success error. This is the same as calling Error::success(), but
+/// it is recommended to use this one for consistency with Plugin::error() and
+/// Plugin::check().
+static inline Error success() { return Error::success(); }
+
+/// Create an Offload error.
+template <typename... ArgsTy>
+static Error error(error::ErrorCode Code, const char *ErrFmt, ArgsTy... Args) {
+ return error::createOffloadError(Code, ErrFmt, Args...);
+}
+
+inline Error error(error::ErrorCode Code, const char *S) {
+ return make_error<error::OffloadError>(Code, S);
+}
+
+inline Error error(error::ErrorCode Code, Error &&OtherError,
+ const char *Context) {
+ return error::createOffloadError(Code, std::move(OtherError), Context);
+}
+
+/// Check the plugin-specific error code and return an error or success
+/// accordingly. In case of an error, create a string error with the error
+/// description. The ErrFmt should follow the format:
+/// "Error in <function name>[<optional info>]: %s"
+/// The last format specifier "%s" is mandatory and will be used to place the
+/// error code's description. Notice this function should be only called from
+/// the plugin-specific code.
+/// TODO: Refactor this, must be defined individually by each plugin.
+template <typename... ArgsTy>
+static Error check(int32_t ErrorCode, const char *ErrFmt, ArgsTy... Args);
+} // namespace Plugin
+
/// Class that wraps the __tgt_async_info to simplify its usage. In case the
/// object is constructed without a valid __tgt_async_info, the object will use
/// an internal one and will synchronize the current thread with the pending
@@ -1003,6 +1037,21 @@ struct GenericDeviceTy : public DeviceAllocatorTy {
bool useAutoZeroCopy();
virtual bool useAutoZeroCopyImpl() { return false; }
+ virtual Expected<omp_interop_val_t *>
+ createInterop(int32_t InteropType, interop_spec_t &InteropSpec) {
+ return nullptr;
+ }
+
+ virtual Error releaseInterop(omp_interop_val_t *Interop) {
+ return Plugin::success();
+ }
+
+ virtual interop_spec_t selectInteropPreference(int32_t InteropType,
+ int32_t NumPrefers,
+ interop_spec_t *Prefers) {
+ return interop_spec_t{tgt_fr_none, {false, 0}, 0};
+ }
+
/// Allocate and construct a kernel object.
virtual Expected<GenericKernelTy &> constructKernel(const char *Name) = 0;
@@ -1271,6 +1320,20 @@ struct GenericPluginTy {
virtual Expected<bool> isELFCompatible(uint32_t DeviceID,
StringRef Image) const = 0;
+ virtual Error flushQueueImpl(omp_interop_val_t *Interop) {
+ return Plugin::success();
+ }
+
+ virtual Error syncBarrierImpl(omp_interop_val_t *Interop) {
+ return Plugin::error(error::ErrorCode::UNSUPPORTED,
+ "sync_barrier not supported");
+ }
+
+ virtual Error asyncBarrierImpl(omp_interop_val_t *Interop) {
+ return Plugin::error(error::ErrorCode::UNSUPPORTED,
+ "async_barrier not supported");
+ }
+
protected:
/// Indicate whether a device id is valid.
bool isValidDeviceId(int32_t DeviceId) const {
@@ -1410,6 +1473,33 @@ struct GenericPluginTy {
int32_t get_function(__tgt_device_binary Binary, const char *Name,
void **KernelPtr);
+  /// Return the interop specification that the plugin supports.
+  /// It might not be one of the user-specified ones.
+ interop_spec_t select_interop_preference(int32_t ID, int32_t InteropType,
+ int32_t NumPrefers,
+ interop_spec_t *Prefers) {
+ auto &Device = getDevice(ID);
+ return Device.selectInteropPreference(InteropType, NumPrefers, Prefers);
+ }
+
+ /// Create OpenMP interop with the given interop context
+ omp_interop_val_t *create_interop(int32_t ID, int32_t InteropContext,
+ interop_spec_t *InteropSpec);
+
+ /// Release OpenMP interop object
+ int32_t release_interop(int32_t ID, omp_interop_val_t *Interop);
+
+ /// Flush the queue associated with the interop object if necessary
+ int32_t flush_queue(omp_interop_val_t *Interop);
+
+ /// Perform a host synchronization with the queue associated with the interop
+ /// object and wait for it to complete.
+ int32_t sync_barrier(omp_interop_val_t *Interop);
+
+ /// Queue an asynchronous barrier in the queue associated with the interop
+ /// object and return immediately.
+ int32_t async_barrier(omp_interop_val_t *Interop);
+
private:
/// Indicates if the platform runtime has been fully initialized.
bool Initialized = false;
@@ -1442,39 +1532,6 @@ struct GenericPluginTy {
RecordReplayTy *RecordReplay;
};
-namespace Plugin {
-/// Create a success error. This is the same as calling Error::success(), but
-/// it is recommended to use this one for consistency with Plugin::error() and
-/// Plugin::check().
-static inline Error success() { return Error::success(); }
-
-/// Create an Offload error.
-template <typename... ArgsTy>
-static Error error(error::ErrorCode Code, const char *ErrFmt, ArgsTy... Args) {
- return error::createOffloadError(Code, ErrFmt, Args...);
-}
-
-inline Error error(error::ErrorCode Code, const char *S) {
- return make_error<error::OffloadError>(Code, S);
-}
-
-inline Error error(error::ErrorCode Code, Error &&OtherError,
- const char *Context) {
- return error::createOffloadError(Code, std::move(OtherError), Context);
-}
-
-/// Check the plugin-specific error code and return an error or success
-/// accordingly. In case of an error, create a string error with the error
-/// description. The ErrFmt should follow the format:
-/// "Error in <function name>[<optional info>]: %s"
-/// The last format specifier "%s" is mandatory and will be used to place the
-/// error code's description. Notice this function should be only called from
-/// the plugin-specific code.
-/// TODO: Refactor this, must be defined individually by each plugin.
-template <typename... ArgsTy>
-static Error check(int32_t ErrorCode, const char *ErrFmt, ArgsTy... Args);
-} // namespace Plugin
-
/// Auxiliary interface class for GenericDeviceResourceManagerTy. This class
/// acts as a reference to a device resource, such as a stream, and requires
/// some basic functions to be implemented. The derived class should define an
diff --git a/offload/plugins-nextgen/common/src/PluginInterface.cpp b/offload/plugins-nextgen/common/src/PluginInterface.cpp
index 94a050b559efe..0a7f9fa5970ee 100644
--- a/offload/plugins-nextgen/common/src/PluginInterface.cpp
+++ b/offload/plugins-nextgen/common/src/PluginInterface.cpp
@@ -2231,3 +2231,71 @@ int32_t GenericPluginTy::get_function(__tgt_device_binary Binary,
*KernelPtr = &Kernel;
return OFFLOAD_SUCCESS;
}
+
+/// Create OpenMP interop with the given interop context
+omp_interop_val_t *
+GenericPluginTy::create_interop(int32_t ID, int32_t InteropContext,
+ interop_spec_t *InteropSpec) {
+ assert(InteropSpec && "Interop spec is null");
+ auto &Device = getDevice(ID);
+ auto InteropOrErr = Device.createInterop(InteropContext, *InteropSpec);
+ if (!InteropOrErr) {
+ REPORT("Failure to create interop object for device " DPxMOD ": %s\n",
+ DPxPTR(InteropSpec), toString(InteropOrErr.takeError()).c_str());
+ return nullptr;
+ }
+ return *InteropOrErr;
+}
+
+/// Release OpenMP interop object
+int32_t GenericPluginTy::release_interop(int32_t ID,
+ omp_interop_val_t *Interop) {
+ assert(Interop && "Interop is null");
+ assert(Interop->DeviceId == ID && "Interop does not match device id");
+ auto &Device = getDevice(ID);
+ auto Err = Device.releaseInterop(Interop);
+ if (Err) {
+ REPORT("Failure to release interop object " DPxMOD ": %s\n",
+ DPxPTR(Interop), toString(std::move(Err)).c_str());
+ return OFFLOAD_FAIL;
+ }
+ return OFFLOAD_SUCCESS;
+}
+
+/// Flush the queue associated with the interop object if necessary
+int32_t GenericPluginTy::flush_queue(omp_interop_val_t *Interop) {
+ assert(Interop && "Interop is null");
+ auto Err = flushQueueImpl(Interop);
+ if (Err) {
+ REPORT("Failure to flush interop object " DPxMOD " queue: %s\n",
+ DPxPTR(Interop), toString(std::move(Err)).c_str());
+ return OFFLOAD_FAIL;
+ }
+ return OFFLOAD_SUCCESS;
+}
+
+/// Perform a host synchronization with the queue associated with the interop
+/// object and wait for it to complete.
+int32_t GenericPluginTy::sync_barrier(omp_interop_val_t *Interop) {
+ assert(Interop && "Interop is null");
+ auto Err = syncBarrierImpl(Interop);
+ if (Err) {
+ REPORT("Failure to synchronize interop object " DPxMOD ": %s\n",
+ DPxPTR(Interop), toString(std::move(Err)).c_str());
+ return OFFLOAD_FAIL;
+ }
+ return OFFLOAD_SUCCESS;
+}
+
+/// Queue an asynchronous barrier in the queue associated with the interop
+/// object and return immediately.
+int32_t GenericPluginTy::async_barrier(omp_interop_val_t *Interop) {
+ assert(Interop && "Interop is null");
+ auto Err = asyncBarrierImpl(Interop);
+ if (Err) {
+ REPORT("Failure to queue barrier in interop object " DPxMOD ": %s\n",
+ DPxPTR(Interop), toString(std::move(Err)).c_str());
+ return OFFLOAD_FAIL;
+ }
+ return OFFLOAD_SUCCESS;
+}
diff --git a/openmp/runtime/src/kmp.h b/openmp/runtime/src/kmp.h
index 818edf9060ad1..83afc0e83f231 100644
--- a/openmp/runtime/src/kmp.h
+++ b/openmp/runtime/src/kmp.h
@@ -4621,6 +4621,13 @@ static inline int __kmp_adjust_gtid_for_hidden_helpers(int gtid) {
return adjusted_gtid;
}
+#if ENABLE_LIBOMPTARGET
+// Pointers to callbacks registered by the offload library to be notified of
+// task progress.
+extern void (*kmp_target_sync_cb)(ident_t *loc_ref, int gtid,
+ void *current_task, void *event);
+#endif // ENABLE_LIBOMPTARGET
+
// Support for error directive
typedef enum kmp_severity_t {
severity_warning = 1,
diff --git a/openmp/runtime/src/kmp_barrier.cpp b/openmp/runtime/src/kmp_barrier.cpp
index 88a5cbb69ba87..4d7989d1ce5eb 100644
--- a/openmp/runtime/src/kmp_barrier.cpp
+++ b/openmp/runtime/src/kmp_barrier.cpp
@@ -1853,6 +1853,14 @@ static int __kmp_barrier_template(enum barrier_type bt, int gtid, int is_split,
}
#endif
+#if ENABLE_LIBOMPTARGET
+ // Give an opportunity to the offload runtime to make progress and create
+ // proxy tasks if necessary
+ if (UNLIKELY(kmp_target_sync_cb != NULL))
+ (*kmp_target_sync_cb)(
+ NULL, gtid, KMP_TASKDATA_TO_TASK(this_thr->th.th_current_task), NULL);
+#endif
+
if (!team->t.t_serialized) {
#if USE_ITT_BUILD
// This value will be used in itt notify events below.
diff --git a/openmp/runtime/src/kmp_runtime.cpp b/openmp/runtime/src/kmp_runtime.cpp
index 39b7834d358af..48e29c9f9fe45 100644
--- a/openmp/runtime/src/kmp_runtime.cpp
+++ b/openmp/runtime/src/kmp_runtime.cpp
@@ -93,6 +93,9 @@ static void __kmp_partition_places(kmp_team_t *team,
int update_master_only = 0);
#endif
static void __kmp_do_serial_initialize(void);
+#if ENABLE_LIBOMPTARGET
+static void __kmp_target_init(void);
+#endif // ENABLE_LIBOMPTARGET
void __kmp_fork_barrier(int gtid, int tid);
void __kmp_join_barrier(int gtid);
void __kmp_setup_icv_copy(kmp_team_t *team, int new_nproc,
@@ -7129,6 +7132,9 @@ static void __kmp_do_serial_initialize(void) {
#if KMP_MIC_SUPPORTED
__kmp_check_mic_type();
#endif
+#if ENABLE_LIBOMPTARGET
+ __kmp_target_init();
+#endif /* ENABLE_LIBOMPTARGET */
// Some global variable initialization moved here from kmp_env_initialize()
#ifdef KMP_DEBUG
@@ -9355,6 +9361,15 @@ void __kmp_set_nesting_mode_threads() {
set__max_active_levels(thread, __kmp_nesting_mode_nlevels);
}
+#if ENABLE_LIBOMPTARGET
+void (*kmp_target_sync_cb)(ident_t *loc_ref, int gtid, void *current_task,
+ void *event) = NULL;
+void __kmp_target_init() {
+ // Look for hooks in the libomptarget library
+ *(void **)(&kmp_target_sync_cb) = KMP_DLSYM("__tgt_target_sync");
+}
+#endif // ENABLE_LIBOMPTARGET
+
// Empty symbols to export (see exports_so.txt) when feature is disabled
extern "C" {
#if !KMP_STATS_ENABLED
diff --git a/openmp/runtime/src/kmp_tasking.cpp b/openmp/runtime/src/kmp_tasking.cpp
index e4d92a78fd6b9..37836fb457537 100644
--- a/openmp/runtime/src/kmp_tasking.cpp
+++ b/openmp/runtime/src/kmp_tasking.cpp
@@ -1149,6 +1149,13 @@ void __kmp_init_implicit_task(ident_t *loc_ref, kmp_info_t *this_thr,
// thread: thread data structure corresponding to implicit task
void __kmp_finish_implicit_task(kmp_info_t *thread) {
kmp_taskdata_t *task = thread->th.th_current_task;
+#if ENABLE_LIBOMPTARGET
+ // Give an opportunity to the offload runtime to synchronize any unfinished
+ // target async regions before finishing the implicit task
+ if (UNLIKELY(kmp_target_sync_cb != NULL))
+ (*kmp_target_sync_cb)(NULL, thread->th.th_info.ds.ds_gtid,
+ KMP_TASKDATA_TO_TASK(task), NULL);
+#endif // ENABLE_LIBOMPTARGET
if (task->td_dephash) {
int children;
task->td_flags.complete = 1;
@@ -2020,6 +2027,14 @@ static kmp_int32 __kmpc_omp_taskwait_template(ident_t *loc_ref, kmp_int32 gtid,
}
#endif // OMPT_SUPPORT && OMPT_OPTIONAL
+#if ENABLE_LIBOMPTARGET
+ // Give an opportunity to the offload runtime to make progress and create
+ // any necessary proxy tasks
+ if (UNLIKELY(kmp_target_sync_cb))
+ (*kmp_target_sync_cb)(loc_ref, gtid, KMP_TASKDATA_TO_TASK(taskdata),
+ NULL);
+#endif // ENABLE_LIBOMPTARGET
+
// Debugger: The taskwait is active. Store location and thread encountered the
// taskwait.
#if USE_ITT_BUILD
@@ -2719,6 +2734,13 @@ void __kmpc_end_taskgroup(ident_t *loc, int gtid) {
}
#endif
+#if ENABLE_LIBOMPTARGET
+ // Give an opportunity to the offload runtime to make progress and create
+ // any necessary proxy tasks
+ if (UNLIKELY(kmp_target_sync_cb))
+ (*kmp_target_sync_cb)(loc, gtid, KMP_TASKDATA_TO_TASK(taskdata), NULL);
+#endif // ENABLE_LIBOMPTARGET
+
if (!taskdata->td_flags.team_serial ||
(thread->th.th_task_team != NULL &&
(thread->th.th_task_team->tt.tt_found_proxy_tasks ||
@@ -3162,6 +3184,13 @@ static inline int __kmp_execute_tasks_template(
while (1) { // Outer loop keeps trying to find tasks in case of single thread
// getting tasks from target constructs
while (1) { // Inner loop to find a task and execute it
+#if ENABLE_LIBOMPTARGET
+ // Give an opportunity to the offload runtime to make progress
+ if (UNLIKELY(kmp_target_sync_cb))
+ (*kmp_target_sync_cb)(NULL, gtid, KMP_TASKDATA_TO_TASK(current_task),
+ NULL);
+#endif // ENABLE_LIBOMPTARGET
+
task = NULL;
if (task_team->tt.tt_num_task_pri) { // get priority task first
task = __kmp_get_priority_task(gtid, task_team, is_constrained);
>From cd4ae911eb4fa3050ada5d0016ec822345fab2ee Mon Sep 17 00:00:00 2001
From: Shay Kleiman <42376404+shay-kl at users.noreply.github.com>
Date: Wed, 6 Aug 2025 17:37:18 +0300
Subject: [PATCH 088/171] [mlir] Specify namespace in td file pred (#152258)
Lack of llvm namespace here caused an "'ArrayRef' was not declared in
this scope" error when using the ShapedTypeWithNthDimOfSize pred in a td
file. Explicitly specifying it fixes it.
Co-authored-by: Shay Kleiman <shay.kleiman at mobileye.com>
---
mlir/include/mlir/IR/CommonTypeConstraints.td | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mlir/include/mlir/IR/CommonTypeConstraints.td b/mlir/include/mlir/IR/CommonTypeConstraints.td
index 45ec1846580f2..b682f4c025a46 100644
--- a/mlir/include/mlir/IR/CommonTypeConstraints.td
+++ b/mlir/include/mlir/IR/CommonTypeConstraints.td
@@ -576,7 +576,7 @@ class NormalizeIndex<int value> {
class IsNthDimSizeIsOneOfPred<int n, list<int> allowedSizes>
: And<[
CPred<"::llvm::cast<::mlir::ShapedType>($_self).getRank() >= " # NormalizeIndex<n>.ret>,
- CPred<"::llvm::is_contained(ArrayRef<int64_t>({" # !interleave(allowedSizes, ", ") # "}), "
+ CPred<"::llvm::is_contained(::llvm::ArrayRef<int64_t>({" # !interleave(allowedSizes, ", ") # "}), "
# "::llvm::cast<::mlir::ShapedType>($_self).getDimSize("
# !if(!lt(n, 0),
"::llvm::cast<::mlir::ShapedType>($_self).getRank() + " # n,
@@ -585,7 +585,7 @@ class IsNthDimSizeIsOneOfPred<int n, list<int> allowedSizes>
// Whether the shape of a vector matches the given `shape` list.
class IsVectorOfShape<list<int> shape>
- : CPred<"::llvm::cast<::mlir::VectorType>($_self).getShape() == ArrayRef<int64_t>({" # !interleave(shape, ", ") # "})">;
+ : CPred<"::llvm::cast<::mlir::VectorType>($_self).getShape() == ::llvm::ArrayRef<int64_t>({" # !interleave(shape, ", ") # "})">;
// Any vector where the number of elements is from the given
// `allowedLengths` list
>From 2bb23d444767540c59c32ccdb86f7ef6e35fd96e Mon Sep 17 00:00:00 2001
From: Michael Kruse <llvm-project at meinersbur.de>
Date: Wed, 6 Aug 2025 16:39:03 +0200
Subject: [PATCH 089/171] [flang][cmake] Fix bbc dependencies (#152306)
Re-apply "[flang][cmake] Fix bcc dependencies (#125822)"
It was overwritten due to an automatic merge gone wrong for #124416.
`git cherry-pick f9af5c145f40480d46874b643ca2b1237e9fbb2a` applied in
97cf061e83a0d0cada38d8a2d75c8a840b01ba26 ignored the renaming of
FortranCommon into FortranSupport after #125822.
Original commit message:
The Fortran libraries are not part of MLIR, so they should use
target_link_libraries() rather than mlir_target_link_libraries().
This fixes an issue introduced in #20966.
---
flang/tools/bbc/CMakeLists.txt | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/flang/tools/bbc/CMakeLists.txt b/flang/tools/bbc/CMakeLists.txt
index 469266cc81558..7516157731b5e 100644
--- a/flang/tools/bbc/CMakeLists.txt
+++ b/flang/tools/bbc/CMakeLists.txt
@@ -30,6 +30,11 @@ target_link_libraries(bbc PRIVATE
flangFrontend
flangPasses
FlangOpenMPTransforms
+ FortranSupport
+ FortranParser
+ FortranEvaluate
+ FortranSemantics
+ FortranLower
)
mlir_target_link_libraries(bbc PRIVATE
@@ -37,9 +42,4 @@ mlir_target_link_libraries(bbc PRIVATE
${extension_libs}
MLIRAffineToStandard
MLIRSCFToControlFlow
- FortranSupport
- FortranParser
- FortranEvaluate
- FortranSemantics
- FortranLower
)
>From a4ff76e819965b58083bab6c42c66acb6a2d1224 Mon Sep 17 00:00:00 2001
From: Krishna Pandey <kpandey81930 at gmail.com>
Date: Wed, 6 Aug 2025 20:12:55 +0530
Subject: [PATCH 090/171] [libc][math][c++23] Implement basic arithmetic
operations for BFloat16 (#151228)
This PR implements addition, subtraction, multiplication and division
operations for BFloat16.
---------
Signed-off-by: krishna2803 <kpandey81930 at gmail.com>
Signed-off-by: Krishna Pandey <kpandey81930 at gmail.com>
Co-authored-by: OverMighty <its.overmighty at gmail.com>
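A hedged sketch of what the new operators permit. Only the arithmetic and
unary-minus operators come from this PR; the float conversion constructor
used below is an assumption about the surrounding BFloat16 class.

#include "src/__support/FPUtil/bfloat16.h"

using LIBC_NAMESPACE::fputil::BFloat16;

bool demo() {
  BFloat16 a(1.5f);  // assumed float -> bfloat16 conversion constructor
  BFloat16 b(0.25f); // every value here is exactly representable
  BFloat16 sum = a + b;     // fputil::generic::add, correctly rounded
  BFloat16 diff = sum - b;  // fputil::generic::sub; recovers a exactly
  BFloat16 prod = a * b;    // fputil::generic::mul
  BFloat16 quot = prod / b; // fputil::generic::div; recovers a exactly
  return (diff >= quot) && ((-a) <= b); // comparisons + new unary minus
}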
---
libc/src/__support/FPUtil/CMakeLists.txt | 3 +
libc/src/__support/FPUtil/bfloat16.h | 25 +++++++
libc/src/__support/FPUtil/cast.h | 66 ++++++++---------
libc/src/__support/FPUtil/dyadic_float.h | 2 +-
.../__support/FPUtil/generic/CMakeLists.txt | 2 +
libc/src/__support/FPUtil/generic/add_sub.h | 23 ++++--
libc/src/__support/FPUtil/generic/div.h | 6 +-
libc/test/src/math/exhaustive/CMakeLists.txt | 72 +++++++++++++++++++
.../src/math/exhaustive/bfloat16_add_test.cpp | 65 +++++++++++++++++
.../src/math/exhaustive/bfloat16_div_test.cpp | 65 +++++++++++++++++
.../src/math/exhaustive/bfloat16_mul_test.cpp | 65 +++++++++++++++++
.../src/math/exhaustive/bfloat16_sub_test.cpp | 65 +++++++++++++++++
.../src/math/exhaustive/exhaustive_test.h | 2 +-
libc/test/src/math/smoke/CMakeLists.txt | 63 ++++++++++++++++
libc/test/src/math/smoke/MulTest.h | 10 +--
.../test/src/math/smoke/bfloat16_add_test.cpp | 15 ++++
.../test/src/math/smoke/bfloat16_div_test.cpp | 15 ++++
.../test/src/math/smoke/bfloat16_mul_test.cpp | 15 ++++
.../test/src/math/smoke/bfloat16_sub_test.cpp | 15 ++++
libc/utils/MPFRWrapper/CMakeLists.txt | 1 +
libc/utils/MPFRWrapper/MPFRUtils.cpp | 8 ++-
21 files changed, 554 insertions(+), 49 deletions(-)
create mode 100644 libc/test/src/math/exhaustive/bfloat16_add_test.cpp
create mode 100644 libc/test/src/math/exhaustive/bfloat16_div_test.cpp
create mode 100644 libc/test/src/math/exhaustive/bfloat16_mul_test.cpp
create mode 100644 libc/test/src/math/exhaustive/bfloat16_sub_test.cpp
create mode 100644 libc/test/src/math/smoke/bfloat16_add_test.cpp
create mode 100644 libc/test/src/math/smoke/bfloat16_div_test.cpp
create mode 100644 libc/test/src/math/smoke/bfloat16_mul_test.cpp
create mode 100644 libc/test/src/math/smoke/bfloat16_sub_test.cpp
diff --git a/libc/src/__support/FPUtil/CMakeLists.txt b/libc/src/__support/FPUtil/CMakeLists.txt
index 6e447fcd47368..37520eadba005 100644
--- a/libc/src/__support/FPUtil/CMakeLists.txt
+++ b/libc/src/__support/FPUtil/CMakeLists.txt
@@ -285,6 +285,9 @@ add_header_library(
libc.hdr.stdint_proxy
libc.src.__support.CPP.bit
libc.src.__support.CPP.type_traits
+ libc.src.__support.FPUtil.generic.add_sub
+ libc.src.__support.FPUtil.generic.div
+ libc.src.__support.FPUtil.generic.mul
libc.src.__support.macros.config
libc.src.__support.macros.properties.types
)
diff --git a/libc/src/__support/FPUtil/bfloat16.h b/libc/src/__support/FPUtil/bfloat16.h
index fa45d73fba6c1..3fab2b80317de 100644
--- a/libc/src/__support/FPUtil/bfloat16.h
+++ b/libc/src/__support/FPUtil/bfloat16.h
@@ -15,6 +15,9 @@
#include "src/__support/FPUtil/cast.h"
#include "src/__support/FPUtil/comparison_operations.h"
#include "src/__support/FPUtil/dyadic_float.h"
+#include "src/__support/FPUtil/generic/add_sub.h"
+#include "src/__support/FPUtil/generic/div.h"
+#include "src/__support/FPUtil/generic/mul.h"
#include "src/__support/macros/config.h"
#include "src/__support/macros/properties/types.h"
@@ -81,6 +84,28 @@ struct BFloat16 {
LIBC_INLINE bool operator>=(BFloat16 other) const {
return fputil::greater_than_or_equals(*this, other);
}
+
+ LIBC_INLINE constexpr BFloat16 operator-() const {
+ fputil::FPBits<bfloat16> result(*this);
+ result.set_sign(result.is_pos() ? Sign::NEG : Sign::POS);
+ return result.get_val();
+ }
+
+ LIBC_INLINE BFloat16 operator+(BFloat16 other) const {
+ return fputil::generic::add<BFloat16>(*this, other);
+ }
+
+ LIBC_INLINE BFloat16 operator-(BFloat16 other) const {
+ return fputil::generic::sub<BFloat16>(*this, other);
+ }
+
+ LIBC_INLINE BFloat16 operator*(BFloat16 other) const {
+ return fputil::generic::mul<bfloat16>(*this, other);
+ }
+
+ LIBC_INLINE BFloat16 operator/(BFloat16 other) const {
+ return fputil::generic::div<bfloat16>(*this, other);
+ }
}; // struct BFloat16
} // namespace fputil
diff --git a/libc/src/__support/FPUtil/cast.h b/libc/src/__support/FPUtil/cast.h
index e999ece37871a..54c80e862523a 100644
--- a/libc/src/__support/FPUtil/cast.h
+++ b/libc/src/__support/FPUtil/cast.h
@@ -27,47 +27,47 @@ LIBC_INLINE constexpr cpp::enable_if_t<cpp::is_floating_point_v<OutType> &&
OutType>
cast(InType x) {
// Casting to the same type is a no-op.
- if constexpr (cpp::is_same_v<InType, OutType>)
+ if constexpr (cpp::is_same_v<InType, OutType>) {
return x;
-
- // bfloat16 is always defined (for now)
- if constexpr (cpp::is_same_v<OutType, bfloat16> ||
- cpp::is_same_v<InType, bfloat16>
+ } else {
+ if constexpr (cpp::is_same_v<OutType, bfloat16> ||
+ cpp::is_same_v<InType, bfloat16>
#if defined(LIBC_TYPES_HAS_FLOAT16) && !defined(__LIBC_USE_FLOAT16_CONVERSION)
- || cpp::is_same_v<OutType, float16> ||
- cpp::is_same_v<InType, float16>
+ || cpp::is_same_v<OutType, float16> ||
+ cpp::is_same_v<InType, float16>
#endif
- ) {
- using InFPBits = FPBits<InType>;
- using InStorageType = typename InFPBits::StorageType;
- using OutFPBits = FPBits<OutType>;
- using OutStorageType = typename OutFPBits::StorageType;
+ ) {
+ using InFPBits = FPBits<InType>;
+ using InStorageType = typename InFPBits::StorageType;
+ using OutFPBits = FPBits<OutType>;
+ using OutStorageType = typename OutFPBits::StorageType;
- InFPBits x_bits(x);
+ InFPBits x_bits(x);
- if (x_bits.is_nan()) {
- if (x_bits.is_signaling_nan()) {
- raise_except_if_required(FE_INVALID);
- return OutFPBits::quiet_nan().get_val();
- }
+ if (x_bits.is_nan()) {
+ if (x_bits.is_signaling_nan()) {
+ raise_except_if_required(FE_INVALID);
+ return OutFPBits::quiet_nan().get_val();
+ }
- InStorageType x_mant = x_bits.get_mantissa();
- if (InFPBits::FRACTION_LEN > OutFPBits::FRACTION_LEN)
- x_mant >>= InFPBits::FRACTION_LEN - OutFPBits::FRACTION_LEN;
- return OutFPBits::quiet_nan(x_bits.sign(),
- static_cast<OutStorageType>(x_mant))
- .get_val();
- }
+ InStorageType x_mant = x_bits.get_mantissa();
+ if (InFPBits::FRACTION_LEN > OutFPBits::FRACTION_LEN)
+ x_mant >>= InFPBits::FRACTION_LEN - OutFPBits::FRACTION_LEN;
+ return OutFPBits::quiet_nan(x_bits.sign(),
+ static_cast<OutStorageType>(x_mant))
+ .get_val();
+ }
- if (x_bits.is_inf())
- return OutFPBits::inf(x_bits.sign()).get_val();
+ if (x_bits.is_inf())
+ return OutFPBits::inf(x_bits.sign()).get_val();
- constexpr size_t MAX_FRACTION_LEN =
- cpp::max(OutFPBits::FRACTION_LEN, InFPBits::FRACTION_LEN);
- DyadicFloat<cpp::bit_ceil(MAX_FRACTION_LEN)> xd(x);
- return xd.template as<OutType, /*ShouldSignalExceptions=*/true>();
- } else {
- return static_cast<OutType>(x);
+ constexpr size_t MAX_FRACTION_LEN =
+ cpp::max(OutFPBits::FRACTION_LEN, InFPBits::FRACTION_LEN);
+ DyadicFloat<cpp::bit_ceil(MAX_FRACTION_LEN)> xd(x);
+ return xd.template as<OutType, /*ShouldSignalExceptions=*/true>();
+ } else {
+ return static_cast<OutType>(x);
+ }
}
}
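
When the NaN branch above narrows a payload, it shifts the input mantissa
right by the difference of the two fraction lengths. A worked standalone
example of that shift for float -> bfloat16 (assuming the usual widths of 23
and 7 fraction bits; the payload value is made up for illustration):

  #include <cstdint>
  #include <cstdio>

  int main() {
    const std::uint32_t float_payload = 0x00412345U; // hypothetical NaN payload
    const int shift = 23 - 7; // InFPBits::FRACTION_LEN - OutFPBits::FRACTION_LEN
    const std::uint16_t bf16_payload =
        static_cast<std::uint16_t>(float_payload >> shift);
    // 0x00412345 >> 16 keeps only the top bits: the bfloat16 payload is 0x41.
    std::printf("payload 0x%06x -> 0x%02x\n", float_payload,
                static_cast<unsigned>(bf16_payload));
    return 0;
  }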
diff --git a/libc/src/__support/FPUtil/dyadic_float.h b/libc/src/__support/FPUtil/dyadic_float.h
index 3464e4aa9423f..cc0710fbf7b02 100644
--- a/libc/src/__support/FPUtil/dyadic_float.h
+++ b/libc/src/__support/FPUtil/dyadic_float.h
@@ -576,7 +576,7 @@ LIBC_INLINE constexpr DyadicFloat<Bits> quick_mul(const DyadicFloat<Bits> &a,
// Check the leading bit directly, should be faster than using clz in
// normalize().
if (result.mantissa.val[DyadicFloat<Bits>::MantissaType::WORD_COUNT - 1] >>
- 63 ==
+ (DyadicFloat<Bits>::MantissaType::WORD_SIZE - 1) ==
0)
result.shift_left(1);
} else {
diff --git a/libc/src/__support/FPUtil/generic/CMakeLists.txt b/libc/src/__support/FPUtil/generic/CMakeLists.txt
index 117213fc2c59f..b75efc8eb2fe4 100644
--- a/libc/src/__support/FPUtil/generic/CMakeLists.txt
+++ b/libc/src/__support/FPUtil/generic/CMakeLists.txt
@@ -68,6 +68,7 @@ add_header_library(
libc.src.__support.FPUtil.rounding_mode
libc.src.__support.macros.attributes
libc.src.__support.macros.optimization
+ libc.src.__support.macros.properties.types
)
add_header_library(
@@ -77,6 +78,7 @@ add_header_library(
DEPENDS
libc.hdr.errno_macros
libc.hdr.fenv_macros
+ libc.src.__support.CPP.algorithm
libc.src.__support.CPP.bit
libc.src.__support.CPP.type_traits
libc.src.__support.FPUtil.basic_operations
diff --git a/libc/src/__support/FPUtil/generic/add_sub.h b/libc/src/__support/FPUtil/generic/add_sub.h
index d4a4129496640..bcc5647b3cdd5 100644
--- a/libc/src/__support/FPUtil/generic/add_sub.h
+++ b/libc/src/__support/FPUtil/generic/add_sub.h
@@ -104,13 +104,22 @@ add_or_sub(InType x, InType y) {
}
}
- // volatile prevents Clang from converting tmp to OutType and then
- // immediately back to InType before negating it, resulting in double
- // rounding.
- volatile InType tmp = y;
- if constexpr (IsSub)
- tmp = -tmp;
- return cast<OutType>(tmp);
+ if constexpr (cpp::is_same_v<InType, bfloat16> &&
+ cpp::is_same_v<OutType, bfloat16>) {
+ OutFPBits y_bits(y);
+ if constexpr (IsSub)
+ y_bits.set_sign(y_bits.sign().negate());
+ return y_bits.get_val();
+ } else {
+
+ // volatile prevents Clang from converting tmp to OutType and then
+ // immediately back to InType before negating it, resulting in double
+ // rounding.
+ volatile InType tmp = y;
+ if constexpr (IsSub)
+ tmp = -tmp;
+ return cast<OutType>(tmp);
+ }
}
if (y_bits.is_zero())
diff --git a/libc/src/__support/FPUtil/generic/div.h b/libc/src/__support/FPUtil/generic/div.h
index 0891ae010ce20..bf7d0b7112ca9 100644
--- a/libc/src/__support/FPUtil/generic/div.h
+++ b/libc/src/__support/FPUtil/generic/div.h
@@ -11,6 +11,7 @@
#include "hdr/errno_macros.h"
#include "hdr/fenv_macros.h"
+#include "src/__support/CPP/algorithm.h"
#include "src/__support/CPP/bit.h"
#include "src/__support/CPP/type_traits.h"
#include "src/__support/FPUtil/BasicOperations.h"
@@ -34,8 +35,9 @@ div(InType x, InType y) {
using OutStorageType = typename OutFPBits::StorageType;
using InFPBits = FPBits<InType>;
using InStorageType = typename InFPBits::StorageType;
- using DyadicFloat =
- DyadicFloat<cpp::bit_ceil(static_cast<size_t>(InFPBits::SIG_LEN + 1))>;
+ using DyadicFloat = DyadicFloat<cpp::max(
+ static_cast<size_t>(16),
+ cpp::bit_ceil(static_cast<size_t>(InFPBits::SIG_LEN + 1)))>;
InFPBits x_bits(x);
InFPBits y_bits(y);
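
The new cpp::max guard matters for bfloat16: assuming SIG_LEN is 7 there (16
bits minus one sign and eight exponent bits), cpp::bit_ceil(SIG_LEN + 1) is
only 8, and the max(...) floors the DyadicFloat width at a supported 16 bits.
A standalone check of that arithmetic (std::bit_ceil standing in for
cpp::bit_ceil):

  #include <bit>
  #include <cstddef>

  constexpr std::size_t SIG_LEN = 7;                      // bfloat16 fraction bits
  constexpr std::size_t RAW = std::bit_ceil(SIG_LEN + 1); // = 8
  constexpr std::size_t WIDTH = RAW < 16 ? 16 : RAW;      // cpp::max(16, RAW)
  static_assert(WIDTH == 16, "bfloat16 div sizes its DyadicFloat to 16 bits");

  int main() { return 0; }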
diff --git a/libc/test/src/math/exhaustive/CMakeLists.txt b/libc/test/src/math/exhaustive/CMakeLists.txt
index 5246337dab64b..e40972ea9a879 100644
--- a/libc/test/src/math/exhaustive/CMakeLists.txt
+++ b/libc/test/src/math/exhaustive/CMakeLists.txt
@@ -567,3 +567,75 @@ add_fp_unittest(
LINK_LIBRARIES
-lpthread
)
+
+add_fp_unittest(
+ bfloat16_add_test
+ NO_RUN_POSTBUILD
+ NEED_MPFR
+ SUITE
+ libc_math_exhaustive_tests
+ SRCS
+ bfloat16_add_test.cpp
+ COMPILE_OPTIONS
+ ${libc_opt_high_flag}
+ DEPENDS
+ .exhaustive_test
+ libc.src.__support.FPUtil.bfloat16
+ libc.src.__support.FPUtil.fp_bits
+ LINK_LIBRARIES
+ -lpthread
+)
+
+add_fp_unittest(
+ bfloat16_div_test
+ NO_RUN_POSTBUILD
+ NEED_MPFR
+ SUITE
+ libc_math_exhaustive_tests
+ SRCS
+ bfloat16_div_test.cpp
+ COMPILE_OPTIONS
+ ${libc_opt_high_flag}
+ DEPENDS
+ .exhaustive_test
+ libc.src.__support.FPUtil.bfloat16
+ libc.src.__support.FPUtil.fp_bits
+ LINK_LIBRARIES
+ -lpthread
+)
+
+add_fp_unittest(
+ bfloat16_mul_test
+ NO_RUN_POSTBUILD
+ NEED_MPFR
+ SUITE
+ libc_math_exhaustive_tests
+ SRCS
+ bfloat16_mul_test.cpp
+ COMPILE_OPTIONS
+ ${libc_opt_high_flag}
+ DEPENDS
+ .exhaustive_test
+ libc.src.__support.FPUtil.bfloat16
+ libc.src.__support.FPUtil.fp_bits
+ LINK_LIBRARIES
+ -lpthread
+)
+
+add_fp_unittest(
+ bfloat16_sub_test
+ NO_RUN_POSTBUILD
+ NEED_MPFR
+ SUITE
+ libc_math_exhaustive_tests
+ SRCS
+ bfloat16_sub_test.cpp
+ COMPILE_OPTIONS
+ ${libc_opt_high_flag}
+ DEPENDS
+ .exhaustive_test
+ libc.src.__support.FPUtil.bfloat16
+ libc.src.__support.FPUtil.fp_bits
+ LINK_LIBRARIES
+ -lpthread
+)
diff --git a/libc/test/src/math/exhaustive/bfloat16_add_test.cpp b/libc/test/src/math/exhaustive/bfloat16_add_test.cpp
new file mode 100644
index 0000000000000..3f4c77978a93d
--- /dev/null
+++ b/libc/test/src/math/exhaustive/bfloat16_add_test.cpp
@@ -0,0 +1,65 @@
+//===-- Exhaustive tests for bfloat16 addition ----------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "exhaustive_test.h"
+#include "src/__support/FPUtil/FPBits.h"
+#include "src/__support/FPUtil/bfloat16.h"
+#include "test/UnitTest/FPMatcher.h"
+#include "utils/MPFRWrapper/MPCommon.h"
+#include "utils/MPFRWrapper/MPFRUtils.h"
+
+namespace mpfr = LIBC_NAMESPACE::testing::mpfr;
+using LIBC_NAMESPACE::fputil::BFloat16;
+
+static BFloat16 add_func(BFloat16 x, BFloat16 y) { return x + y; }
+
+struct Bfloat16AddChecker : public virtual LIBC_NAMESPACE::testing::Test {
+ using FloatType = BFloat16;
+ using FPBits = LIBC_NAMESPACE::fputil::FPBits<bfloat16>;
+ using StorageType = typename FPBits::StorageType;
+
+ uint64_t check(uint16_t x_start, uint16_t x_stop, uint16_t y_start,
+ uint16_t y_stop, mpfr::RoundingMode rounding) {
+ mpfr::ForceRoundingMode r(rounding);
+ if (!r.success)
+ return true;
+ uint16_t xbits = x_start;
+ uint64_t failed = 0;
+ do {
+ BFloat16 x = FPBits(xbits).get_val();
+ uint16_t ybits = xbits;
+ do {
+ BFloat16 y = FPBits(ybits).get_val();
+ mpfr::BinaryInput<BFloat16> input{x, y};
+ bool correct = TEST_MPFR_MATCH_ROUNDING_SILENTLY(
+ mpfr::Operation::Add, input, add_func(x, y), 0.5, rounding);
+ failed += (!correct);
+ } while (ybits++ < y_stop);
+ } while (xbits++ < x_stop);
+ return failed;
+ }
+};
+
+using LlvmLibcBfloat16ExhaustiveAddTest =
+ LlvmLibcExhaustiveMathTest<Bfloat16AddChecker, 1 << 2>;
+
+// range: [0, inf]
+static constexpr uint16_t POS_START = 0x0000U;
+static constexpr uint16_t POS_STOP = 0x7f80U;
+
+// range: [-0, -inf]
+static constexpr uint16_t NEG_START = 0x8000U;
+static constexpr uint16_t NEG_STOP = 0xff80U;
+
+TEST_F(LlvmLibcBfloat16ExhaustiveAddTest, PositiveRange) {
+ test_full_range_all_roundings(POS_START, POS_STOP, POS_START, POS_STOP);
+}
+
+TEST_F(LlvmLibcBfloat16ExhaustiveAddTest, NegativeRange) {
+ test_full_range_all_roundings(NEG_START, NEG_STOP, NEG_START, NEG_STOP);
+}
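
The post-increment in the do/while conditions is what lets check() visit
x_stop and y_stop themselves without an infinite loop when a stop value is
the maximal uint16_t. A standalone sketch of that inclusive-range idiom:

  #include <cstdint>
  #include <cstdio>

  int main() {
    std::uint16_t bits = 0xfffe;
    const std::uint16_t stop = 0xffff;
    do {
      // Visits 0xfffe and 0xffff exactly once each.
      std::printf("0x%04x\n", static_cast<unsigned>(bits));
    } while (bits++ < stop); // false once bits == stop; bits then wraps to 0
    return 0;
  }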
diff --git a/libc/test/src/math/exhaustive/bfloat16_div_test.cpp b/libc/test/src/math/exhaustive/bfloat16_div_test.cpp
new file mode 100644
index 0000000000000..2648d5f775af5
--- /dev/null
+++ b/libc/test/src/math/exhaustive/bfloat16_div_test.cpp
@@ -0,0 +1,65 @@
+//===-- Exhaustive tests for bfloat16 division ----------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "exhaustive_test.h"
+#include "src/__support/FPUtil/FPBits.h"
+#include "src/__support/FPUtil/bfloat16.h"
+#include "test/UnitTest/FPMatcher.h"
+#include "utils/MPFRWrapper/MPCommon.h"
+#include "utils/MPFRWrapper/MPFRUtils.h"
+
+namespace mpfr = LIBC_NAMESPACE::testing::mpfr;
+using LIBC_NAMESPACE::fputil::BFloat16;
+
+static BFloat16 div_func(BFloat16 x, BFloat16 y) { return x / y; }
+
+struct Bfloat16DivChecker : public virtual LIBC_NAMESPACE::testing::Test {
+ using FloatType = BFloat16;
+ using FPBits = LIBC_NAMESPACE::fputil::FPBits<bfloat16>;
+ using StorageType = typename FPBits::StorageType;
+
+ uint64_t check(uint16_t x_start, uint16_t x_stop, uint16_t y_start,
+ uint16_t y_stop, mpfr::RoundingMode rounding) {
+ mpfr::ForceRoundingMode r(rounding);
+ if (!r.success)
+ return true;
+ uint16_t xbits = x_start;
+ uint64_t failed = 0;
+ do {
+ BFloat16 x = FPBits(xbits).get_val();
+ uint16_t ybits = xbits;
+ do {
+ BFloat16 y = FPBits(ybits).get_val();
+ mpfr::BinaryInput<BFloat16> input{x, y};
+ bool correct = TEST_MPFR_MATCH_ROUNDING_SILENTLY(
+ mpfr::Operation::Div, input, div_func(x, y), 0.5, rounding);
+ failed += (!correct);
+ } while (ybits++ < y_stop);
+ } while (xbits++ < x_stop);
+ return failed;
+ }
+};
+
+using LlvmLibcBfloat16ExhaustiveDivTest =
+ LlvmLibcExhaustiveMathTest<Bfloat16DivChecker, 1 << 2>;
+
+// range: [0, inf]
+static constexpr uint16_t POS_START = 0x0000U;
+static constexpr uint16_t POS_STOP = 0x7f80U;
+
+// range: [-0, -inf]
+static constexpr uint16_t NEG_START = 0x8000U;
+static constexpr uint16_t NEG_STOP = 0xff80U;
+
+TEST_F(LlvmLibcBfloat16ExhaustiveDivTest, PositiveRange) {
+ test_full_range_all_roundings(POS_START, POS_STOP, POS_START, POS_STOP);
+}
+
+TEST_F(LlvmLibcBfloat16ExhaustiveDivTest, NegativeRange) {
+ test_full_range_all_roundings(NEG_START, NEG_STOP, NEG_START, NEG_STOP);
+}
diff --git a/libc/test/src/math/exhaustive/bfloat16_mul_test.cpp b/libc/test/src/math/exhaustive/bfloat16_mul_test.cpp
new file mode 100644
index 0000000000000..3cbbcb500a4fb
--- /dev/null
+++ b/libc/test/src/math/exhaustive/bfloat16_mul_test.cpp
@@ -0,0 +1,65 @@
+//===-- Exhaustive tests for bfloat16 multiplication ----------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "exhaustive_test.h"
+#include "src/__support/FPUtil/FPBits.h"
+#include "src/__support/FPUtil/bfloat16.h"
+#include "test/UnitTest/FPMatcher.h"
+#include "utils/MPFRWrapper/MPCommon.h"
+#include "utils/MPFRWrapper/MPFRUtils.h"
+
+namespace mpfr = LIBC_NAMESPACE::testing::mpfr;
+using LIBC_NAMESPACE::fputil::BFloat16;
+
+static BFloat16 mul_func(BFloat16 x, BFloat16 y) { return x * y; }
+
+struct Bfloat16MulChecker : public virtual LIBC_NAMESPACE::testing::Test {
+ using FloatType = BFloat16;
+ using FPBits = LIBC_NAMESPACE::fputil::FPBits<bfloat16>;
+ using StorageType = typename FPBits::StorageType;
+
+ uint64_t check(uint16_t x_start, uint16_t x_stop, uint16_t y_start,
+ uint16_t y_stop, mpfr::RoundingMode rounding) {
+ mpfr::ForceRoundingMode r(rounding);
+ if (!r.success)
+ return true;
+ uint16_t xbits = x_start;
+ uint64_t failed = 0;
+ do {
+ BFloat16 x = FPBits(xbits).get_val();
+ uint16_t ybits = xbits;
+ do {
+ BFloat16 y = FPBits(ybits).get_val();
+ mpfr::BinaryInput<BFloat16> input{x, y};
+ bool correct = TEST_MPFR_MATCH_ROUNDING_SILENTLY(
+ mpfr::Operation::Mul, input, mul_func(x, y), 0.5, rounding);
+ failed += (!correct);
+ } while (ybits++ < y_stop);
+ } while (xbits++ < x_stop);
+ return failed;
+ }
+};
+
+using LlvmLibcBfloat16ExhaustiveMulTest =
+ LlvmLibcExhaustiveMathTest<Bfloat16MulChecker, 1 << 2>;
+
+// range: [0, inf]
+static constexpr uint16_t POS_START = 0x0000U;
+static constexpr uint16_t POS_STOP = 0x7f80U;
+
+// range: [-0, -inf]
+static constexpr uint16_t NEG_START = 0x8000U;
+static constexpr uint16_t NEG_STOP = 0xff80U;
+
+TEST_F(LlvmLibcBfloat16ExhaustiveMulTest, PositiveRange) {
+ test_full_range_all_roundings(POS_START, POS_STOP, POS_START, POS_STOP);
+}
+
+TEST_F(LlvmLibcBfloat16ExhaustiveMulTest, NegativeRange) {
+ test_full_range_all_roundings(NEG_START, NEG_STOP, NEG_START, NEG_STOP);
+}
diff --git a/libc/test/src/math/exhaustive/bfloat16_sub_test.cpp b/libc/test/src/math/exhaustive/bfloat16_sub_test.cpp
new file mode 100644
index 0000000000000..11bc6f59dd294
--- /dev/null
+++ b/libc/test/src/math/exhaustive/bfloat16_sub_test.cpp
@@ -0,0 +1,65 @@
+//===-- Exhaustive tests for bfloat16 subtraction -------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "exhaustive_test.h"
+#include "src/__support/FPUtil/FPBits.h"
+#include "src/__support/FPUtil/bfloat16.h"
+#include "test/UnitTest/FPMatcher.h"
+#include "utils/MPFRWrapper/MPCommon.h"
+#include "utils/MPFRWrapper/MPFRUtils.h"
+
+namespace mpfr = LIBC_NAMESPACE::testing::mpfr;
+using LIBC_NAMESPACE::fputil::BFloat16;
+
+static BFloat16 sub_func(BFloat16 x, BFloat16 y) { return x - y; }
+
+struct Bfloat16SubChecker : public virtual LIBC_NAMESPACE::testing::Test {
+ using FloatType = BFloat16;
+ using FPBits = LIBC_NAMESPACE::fputil::FPBits<bfloat16>;
+ using StorageType = typename FPBits::StorageType;
+
+ uint64_t check(uint16_t x_start, uint16_t x_stop, uint16_t y_start,
+ uint16_t y_stop, mpfr::RoundingMode rounding) {
+ mpfr::ForceRoundingMode r(rounding);
+ if (!r.success)
+ return true;
+ uint16_t xbits = x_start;
+ uint64_t failed = 0;
+ do {
+ BFloat16 x = FPBits(xbits).get_val();
+ uint16_t ybits = xbits;
+ do {
+ BFloat16 y = FPBits(ybits).get_val();
+ mpfr::BinaryInput<BFloat16> input{x, y};
+ bool correct = TEST_MPFR_MATCH_ROUNDING_SILENTLY(
+ mpfr::Operation::Sub, input, sub_func(x, y), 0.5, rounding);
+ failed += (!correct);
+ } while (ybits++ < y_stop);
+ } while (xbits++ < x_stop);
+ return failed;
+ }
+};
+
+using LlvmLibcBfloat16ExhaustiveSubTest =
+ LlvmLibcExhaustiveMathTest<Bfloat16SubChecker, 1 << 2>;
+
+// range: [0, inf]
+static constexpr uint16_t POS_START = 0x0000U;
+static constexpr uint16_t POS_STOP = 0x7f80U;
+
+// range: [-0, -inf]
+static constexpr uint16_t NEG_START = 0x8000U;
+static constexpr uint16_t NEG_STOP = 0xff80U;
+
+TEST_F(LlvmLibcBfloat16ExhaustiveSubTest, PositiveRange) {
+ test_full_range_all_roundings(POS_START, POS_STOP, POS_START, POS_STOP);
+}
+
+TEST_F(LlvmLibcBfloat16ExhaustiveSubTest, NegativeRange) {
+ test_full_range_all_roundings(NEG_START, NEG_STOP, NEG_START, NEG_STOP);
+}
diff --git a/libc/test/src/math/exhaustive/exhaustive_test.h b/libc/test/src/math/exhaustive/exhaustive_test.h
index cdf459c23bb9d..8be65ba69f725 100644
--- a/libc/test/src/math/exhaustive/exhaustive_test.h
+++ b/libc/test/src/math/exhaustive/exhaustive_test.h
@@ -164,7 +164,7 @@ struct LlvmLibcExhaustiveMathTest
range_begin = current_value;
if (stop >= Increment && stop - Increment >= current_value) {
- range_end = current_value + Increment;
+ range_end = static_cast<StorageType>(current_value + Increment);
} else {
range_end = stop;
}
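
The cast is needed once StorageType can be uint16_t: integer promotion turns
current_value + Increment into an int, and assigning that back to a 16-bit
variable is an implicit narrowing conversion that conversion warnings flag.
A minimal sketch of the same pattern:

  #include <cstdint>

  std::uint16_t next_range_end(std::uint16_t current_value,
                               std::uint16_t increment) {
    // The sum promotes to int; cast back to the storage type explicitly.
    return static_cast<std::uint16_t>(current_value + increment);
  }

  int main() { return 0; }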
diff --git a/libc/test/src/math/smoke/CMakeLists.txt b/libc/test/src/math/smoke/CMakeLists.txt
index 40b7a342e0c1a..a722f612af817 100644
--- a/libc/test/src/math/smoke/CMakeLists.txt
+++ b/libc/test/src/math/smoke/CMakeLists.txt
@@ -5306,6 +5306,69 @@ add_fp_unittest(
libc.src.math.ddivf128
)
+add_fp_unittest(
+ bfloat16_add_test
+ SUITE
+ libc-math-smoke-tests
+ SRCS
+ bfloat16_add_test.cpp
+ HDRS
+ AddTest.h
+ DEPENDS
+ libc.src.__support.FPUtil.bfloat16
+ libc.src.__support.FPUtil.generic.add_sub
+ libc.src.__support.macros.properties.os
+ libc.src.__support.macros.properties.types
+ libc.hdr.errno_macros
+ libc.hdr.fenv_macros
+)
+
+add_fp_unittest(
+ bfloat16_div_test
+ SUITE
+ libc-math-smoke-tests
+ SRCS
+ bfloat16_div_test.cpp
+ HDRS
+ DivTest.h
+ DEPENDS
+ libc.src.__support.FPUtil.bfloat16
+ libc.hdr.errno_macros
+ libc.hdr.fenv_macros
+)
+
+add_fp_unittest(
+ bfloat16_mul_test
+ SUITE
+ libc-math-smoke-tests
+ SRCS
+ bfloat16_mul_test.cpp
+ HDRS
+ MulTest.h
+ DEPENDS
+ libc.src.__support.FPUtil.basic_operations
+ libc.src.__support.FPUtil.bfloat16
+ libc.hdr.errno_macros
+ libc.hdr.fenv_macros
+)
+
+add_fp_unittest(
+ bfloat16_sub_test
+ SUITE
+ libc-math-smoke-tests
+ SRCS
+ bfloat16_sub_test.cpp
+ HDRS
+ SubTest.h
+ DEPENDS
+ libc.src.__support.FPUtil.bfloat16
+ libc.src.__support.FPUtil.generic.add_sub
+ libc.src.__support.macros.properties.os
+ libc.src.__support.macros.properties.types
+ libc.hdr.errno_macros
+ libc.hdr.fenv_macros
+)
+
add_fp_unittest(
add_same_type_test
SUITE
diff --git a/libc/test/src/math/smoke/MulTest.h b/libc/test/src/math/smoke/MulTest.h
index cf7f41abf5500..a45f422416f68 100644
--- a/libc/test/src/math/smoke/MulTest.h
+++ b/libc/test/src/math/smoke/MulTest.h
@@ -53,10 +53,12 @@ class MulTest : public LIBC_NAMESPACE::testing::FEnvSafeTest {
EXPECT_FP_EQ_ALL_ROUNDING(neg_zero, func(in.zero, in.neg_zero));
EXPECT_FP_EQ_ALL_ROUNDING(neg_zero, func(in.neg_zero, in.zero));
- EXPECT_FP_EQ_ALL_ROUNDING(OutType(1.0), func(1.0, 1.0));
- EXPECT_FP_EQ_ALL_ROUNDING(OutType(15.0), func(3.0, 5.0));
- EXPECT_FP_EQ_ALL_ROUNDING(OutType(0x1.0p-13), func(0x1.0p+1, 0x1.0p-14));
- EXPECT_FP_EQ_ALL_ROUNDING(OutType(0x1.0p-10), func(0x1.0p+2, 0x1.0p-12));
+ EXPECT_FP_EQ_ALL_ROUNDING(InType(1.0), func(InType(1.0), InType(1.0)));
+ EXPECT_FP_EQ_ALL_ROUNDING(InType(15.0), func(InType(3.0), InType(5.0)));
+ EXPECT_FP_EQ_ALL_ROUNDING(OutType(0x1.0p-13),
+ func(InType(0x1.0p+1), InType(0x1.0p-14)));
+ EXPECT_FP_EQ_ALL_ROUNDING(OutType(0x1.0p-10),
+ func(InType(0x1.0p+2), InType(0x1.0p-12)));
}
void test_invalid_operations(MulFunc func) {
diff --git a/libc/test/src/math/smoke/bfloat16_add_test.cpp b/libc/test/src/math/smoke/bfloat16_add_test.cpp
new file mode 100644
index 0000000000000..1db65a9be2b84
--- /dev/null
+++ b/libc/test/src/math/smoke/bfloat16_add_test.cpp
@@ -0,0 +1,15 @@
+//===-- Unittests for bfloat16 addition -----------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "AddTest.h"
+
+#include "src/__support/FPUtil/bfloat16.h"
+
+static bfloat16 add_func(bfloat16 x, bfloat16 y) { return x + y; }
+
+LIST_ADD_TESTS(bfloat16, bfloat16, add_func)
diff --git a/libc/test/src/math/smoke/bfloat16_div_test.cpp b/libc/test/src/math/smoke/bfloat16_div_test.cpp
new file mode 100644
index 0000000000000..c1ab598df0608
--- /dev/null
+++ b/libc/test/src/math/smoke/bfloat16_div_test.cpp
@@ -0,0 +1,15 @@
+//===-- Unittests for bfloat16 division -----------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "DivTest.h"
+
+#include "src/__support/FPUtil/bfloat16.h"
+
+static bfloat16 div_func(bfloat16 x, bfloat16 y) { return x / y; }
+
+LIST_DIV_TESTS(bfloat16, bfloat16, div_func)
diff --git a/libc/test/src/math/smoke/bfloat16_mul_test.cpp b/libc/test/src/math/smoke/bfloat16_mul_test.cpp
new file mode 100644
index 0000000000000..03e38d8b8e97a
--- /dev/null
+++ b/libc/test/src/math/smoke/bfloat16_mul_test.cpp
@@ -0,0 +1,15 @@
+//===-- Unittests for bfloat16 multiplication -----------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "MulTest.h"
+
+#include "src/__support/FPUtil/bfloat16.h"
+
+static bfloat16 mul_func(bfloat16 x, bfloat16 y) { return x * y; }
+
+LIST_MUL_TESTS(bfloat16, bfloat16, mul_func)
diff --git a/libc/test/src/math/smoke/bfloat16_sub_test.cpp b/libc/test/src/math/smoke/bfloat16_sub_test.cpp
new file mode 100644
index 0000000000000..5eb4a9b7d062a
--- /dev/null
+++ b/libc/test/src/math/smoke/bfloat16_sub_test.cpp
@@ -0,0 +1,15 @@
+//===-- Unittests for bfloat16 subtraction --------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "SubTest.h"
+
+#include "src/__support/FPUtil/bfloat16.h"
+
+static bfloat16 sub_func(bfloat16 x, bfloat16 y) { return x - y; }
+
+LIST_SUB_TESTS(bfloat16, bfloat16, sub_func)
diff --git a/libc/utils/MPFRWrapper/CMakeLists.txt b/libc/utils/MPFRWrapper/CMakeLists.txt
index 2669b23ba43f9..73151c61a7fcc 100644
--- a/libc/utils/MPFRWrapper/CMakeLists.txt
+++ b/libc/utils/MPFRWrapper/CMakeLists.txt
@@ -43,6 +43,7 @@ if(LIBC_TESTS_CAN_USE_MPFR)
libc.hdr.stdint_proxy
libc.src.__support.CPP.array
libc.src.__support.CPP.stringstream
+ libc.src.__support.FPUtil.bfloat16
libc.src.__support.FPUtil.fp_bits
libc.src.__support.FPUtil.fpbits_str
LibcTest.unit
diff --git a/libc/utils/MPFRWrapper/MPFRUtils.cpp b/libc/utils/MPFRWrapper/MPFRUtils.cpp
index 8853f96ef8f92..ae12a83e9faa1 100644
--- a/libc/utils/MPFRWrapper/MPFRUtils.cpp
+++ b/libc/utils/MPFRWrapper/MPFRUtils.cpp
@@ -11,6 +11,7 @@
#include "src/__support/CPP/array.h"
#include "src/__support/CPP/stringstream.h"
+#include "src/__support/FPUtil/bfloat16.h"
#include "src/__support/FPUtil/fpbits_str.h"
#include "src/__support/macros/config.h"
#include "src/__support/macros/properties/types.h"
@@ -408,6 +409,8 @@ template void explain_binary_operation_one_output_error(
template void explain_binary_operation_one_output_error(
Operation, const BinaryInput<float128> &, float128, double, RoundingMode);
#endif
+template void explain_binary_operation_one_output_error(
+ Operation, const BinaryInput<bfloat16> &, bfloat16, double, RoundingMode);
template <typename InputType, typename OutputType>
void explain_ternary_operation_one_output_error(
@@ -641,7 +644,10 @@ template bool compare_binary_operation_one_output(Operation,
float128, double,
RoundingMode);
#endif
-
+template bool compare_binary_operation_one_output(Operation,
+ const BinaryInput<bfloat16> &,
+ bfloat16, double,
+ RoundingMode);
template <typename InputType, typename OutputType>
bool compare_ternary_operation_one_output(Operation op,
const TernaryInput<InputType> &input,
>From 3ffaaf6db68ff6ad4d8224a83ee6601df8cfb1fd Mon Sep 17 00:00:00 2001
From: Andrei Safronov <andrei.safronov at espressif.com>
Date: Wed, 6 Aug 2025 17:43:27 +0300
Subject: [PATCH 091/171] [Xtensa] Implement Xtensa S32C1I Option and atomics
lowering. (#137134)
Implement the Xtensa S32C1I option and lower the atomic_cmp_swap_32 operation
through the s32c1i instruction. The remaining atomic operations are lowered by
combining atomic_cmp_swap_32 with the AtomicExpand pass.
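
For readers unfamiliar with AtomicExpansionKind::CmpXChg: AtomicExpand
rewrites every atomicrmw into a compare-exchange retry loop around the one
primitive the target provides. A rough C++ analogue of the loop's shape (a
sketch, not the pass output):

  #include <atomic>

  int fetch_add_via_cas(std::atomic<int> &v, int amount) {
    int expected = v.load(std::memory_order_relaxed);
    // compare_exchange_weak refreshes `expected` on failure; retry until
    // memory still holds the value the sum was computed from.
    while (!v.compare_exchange_weak(expected, expected + amount))
      ;
    return expected; // the old value, as atomicrmw returns
  }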
---
.../Disassembler/XtensaDisassembler.cpp | 67 +-
.../MCTargetDesc/XtensaMCTargetDesc.cpp | 3 +
llvm/lib/Target/Xtensa/XtensaFeatures.td | 16 +
llvm/lib/Target/Xtensa/XtensaISelLowering.cpp | 31 +
llvm/lib/Target/Xtensa/XtensaISelLowering.h | 6 +
llvm/lib/Target/Xtensa/XtensaInstrInfo.td | 42 +
llvm/lib/Target/Xtensa/XtensaRegisterInfo.td | 10 +-
llvm/lib/Target/Xtensa/XtensaSubtarget.h | 2 +
.../lib/Target/Xtensa/XtensaTargetMachine.cpp | 6 +
llvm/test/CodeGen/Xtensa/atomic-load-store.ll | 498 +
llvm/test/CodeGen/Xtensa/atomic-rmw.ll | 10298 ++++++++++++++++
llvm/test/CodeGen/Xtensa/forced-atomics.ll | 1426 +++
.../AtomicExpand/Xtensa/atomicrmw-expand.ll | 2643 ++++
.../AtomicExpand/Xtensa/lit.local.cfg | 2 +
14 files changed, 15015 insertions(+), 35 deletions(-)
create mode 100644 llvm/test/CodeGen/Xtensa/atomic-load-store.ll
create mode 100644 llvm/test/CodeGen/Xtensa/atomic-rmw.ll
create mode 100644 llvm/test/CodeGen/Xtensa/forced-atomics.ll
create mode 100644 llvm/test/Transforms/AtomicExpand/Xtensa/atomicrmw-expand.ll
create mode 100644 llvm/test/Transforms/AtomicExpand/Xtensa/lit.local.cfg
diff --git a/llvm/lib/Target/Xtensa/Disassembler/XtensaDisassembler.cpp b/llvm/lib/Target/Xtensa/Disassembler/XtensaDisassembler.cpp
index 2f92f8606fb48..39bec4785c61c 100644
--- a/llvm/lib/Target/Xtensa/Disassembler/XtensaDisassembler.cpp
+++ b/llvm/lib/Target/Xtensa/Disassembler/XtensaDisassembler.cpp
@@ -145,39 +145,40 @@ struct DecodeRegister {
};
const DecodeRegister SRDecoderTable[] = {
- {Xtensa::LBEG, 0}, {Xtensa::LEND, 1},
- {Xtensa::LCOUNT, 2}, {Xtensa::SAR, 3},
- {Xtensa::BREG, 4}, {Xtensa::LITBASE, 5},
- {Xtensa::ACCLO, 16}, {Xtensa::ACCHI, 17},
- {Xtensa::M0, 32}, {Xtensa::M1, 33},
- {Xtensa::M2, 34}, {Xtensa::M3, 35},
- {Xtensa::WINDOWBASE, 72}, {Xtensa::WINDOWSTART, 73},
- {Xtensa::IBREAKENABLE, 96}, {Xtensa::MEMCTL, 97},
- {Xtensa::DDR, 104}, {Xtensa::IBREAKA0, 128},
- {Xtensa::IBREAKA1, 129}, {Xtensa::DBREAKA0, 144},
- {Xtensa::DBREAKA1, 145}, {Xtensa::DBREAKC0, 160},
- {Xtensa::DBREAKC1, 161}, {Xtensa::CONFIGID0, 176},
- {Xtensa::EPC1, 177}, {Xtensa::EPC2, 178},
- {Xtensa::EPC3, 179}, {Xtensa::EPC4, 180},
- {Xtensa::EPC5, 181}, {Xtensa::EPC6, 182},
- {Xtensa::EPC7, 183}, {Xtensa::DEPC, 192},
- {Xtensa::EPS2, 194}, {Xtensa::EPS3, 195},
- {Xtensa::EPS4, 196}, {Xtensa::EPS5, 197},
- {Xtensa::EPS6, 198}, {Xtensa::EPS7, 199},
- {Xtensa::CONFIGID1, 208}, {Xtensa::EXCSAVE1, 209},
- {Xtensa::EXCSAVE2, 210}, {Xtensa::EXCSAVE3, 211},
- {Xtensa::EXCSAVE4, 212}, {Xtensa::EXCSAVE5, 213},
- {Xtensa::EXCSAVE6, 214}, {Xtensa::EXCSAVE7, 215},
- {Xtensa::CPENABLE, 224}, {Xtensa::INTERRUPT, 226},
- {Xtensa::INTCLEAR, 227}, {Xtensa::INTENABLE, 228},
- {Xtensa::PS, 230}, {Xtensa::VECBASE, 231},
- {Xtensa::EXCCAUSE, 232}, {Xtensa::DEBUGCAUSE, 233},
- {Xtensa::CCOUNT, 234}, {Xtensa::PRID, 235},
- {Xtensa::ICOUNT, 236}, {Xtensa::ICOUNTLEVEL, 237},
- {Xtensa::EXCVADDR, 238}, {Xtensa::CCOMPARE0, 240},
- {Xtensa::CCOMPARE1, 241}, {Xtensa::CCOMPARE2, 242},
- {Xtensa::MISC0, 244}, {Xtensa::MISC1, 245},
- {Xtensa::MISC2, 246}, {Xtensa::MISC3, 247}};
+ {Xtensa::LBEG, 0}, {Xtensa::LEND, 1},
+ {Xtensa::LCOUNT, 2}, {Xtensa::SAR, 3},
+ {Xtensa::BREG, 4}, {Xtensa::LITBASE, 5},
+ {Xtensa::SCOMPARE1, 12}, {Xtensa::ACCLO, 16},
+ {Xtensa::ACCHI, 17}, {Xtensa::M0, 32},
+ {Xtensa::M1, 33}, {Xtensa::M2, 34},
+ {Xtensa::M3, 35}, {Xtensa::WINDOWBASE, 72},
+ {Xtensa::WINDOWSTART, 73}, {Xtensa::IBREAKENABLE, 96},
+ {Xtensa::MEMCTL, 97}, {Xtensa::ATOMCTL, 99},
+ {Xtensa::DDR, 104}, {Xtensa::IBREAKA0, 128},
+ {Xtensa::IBREAKA1, 129}, {Xtensa::DBREAKA0, 144},
+ {Xtensa::DBREAKA1, 145}, {Xtensa::DBREAKC0, 160},
+ {Xtensa::DBREAKC1, 161}, {Xtensa::CONFIGID0, 176},
+ {Xtensa::EPC1, 177}, {Xtensa::EPC2, 178},
+ {Xtensa::EPC3, 179}, {Xtensa::EPC4, 180},
+ {Xtensa::EPC5, 181}, {Xtensa::EPC6, 182},
+ {Xtensa::EPC7, 183}, {Xtensa::DEPC, 192},
+ {Xtensa::EPS2, 194}, {Xtensa::EPS3, 195},
+ {Xtensa::EPS4, 196}, {Xtensa::EPS5, 197},
+ {Xtensa::EPS6, 198}, {Xtensa::EPS7, 199},
+ {Xtensa::CONFIGID1, 208}, {Xtensa::EXCSAVE1, 209},
+ {Xtensa::EXCSAVE2, 210}, {Xtensa::EXCSAVE3, 211},
+ {Xtensa::EXCSAVE4, 212}, {Xtensa::EXCSAVE5, 213},
+ {Xtensa::EXCSAVE6, 214}, {Xtensa::EXCSAVE7, 215},
+ {Xtensa::CPENABLE, 224}, {Xtensa::INTERRUPT, 226},
+ {Xtensa::INTCLEAR, 227}, {Xtensa::INTENABLE, 228},
+ {Xtensa::PS, 230}, {Xtensa::VECBASE, 231},
+ {Xtensa::EXCCAUSE, 232}, {Xtensa::DEBUGCAUSE, 233},
+ {Xtensa::CCOUNT, 234}, {Xtensa::PRID, 235},
+ {Xtensa::ICOUNT, 236}, {Xtensa::ICOUNTLEVEL, 237},
+ {Xtensa::EXCVADDR, 238}, {Xtensa::CCOMPARE0, 240},
+ {Xtensa::CCOMPARE1, 241}, {Xtensa::CCOMPARE2, 242},
+ {Xtensa::MISC0, 244}, {Xtensa::MISC1, 245},
+ {Xtensa::MISC2, 246}, {Xtensa::MISC3, 247}};
static DecodeStatus DecodeSRRegisterClass(MCInst &Inst, uint64_t RegNo,
uint64_t Address,
diff --git a/llvm/lib/Target/Xtensa/MCTargetDesc/XtensaMCTargetDesc.cpp b/llvm/lib/Target/Xtensa/MCTargetDesc/XtensaMCTargetDesc.cpp
index 821cba0fc25c2..080a9c0bdd9e0 100644
--- a/llvm/lib/Target/Xtensa/MCTargetDesc/XtensaMCTargetDesc.cpp
+++ b/llvm/lib/Target/Xtensa/MCTargetDesc/XtensaMCTargetDesc.cpp
@@ -200,6 +200,9 @@ bool Xtensa::checkRegister(MCRegister RegNo, const FeatureBitset &FeatureBits,
case Xtensa::WINDOWBASE:
case Xtensa::WINDOWSTART:
return FeatureBits[Xtensa::FeatureWindowed];
+ case Xtensa::ATOMCTL:
+ case Xtensa::SCOMPARE1:
+    return FeatureBits[Xtensa::FeatureS32C1I];
case Xtensa::NoRegister:
return false;
}
diff --git a/llvm/lib/Target/Xtensa/XtensaFeatures.td b/llvm/lib/Target/Xtensa/XtensaFeatures.td
index 97d5472f3e96c..d6f3ef0f15e32 100644
--- a/llvm/lib/Target/Xtensa/XtensaFeatures.td
+++ b/llvm/lib/Target/Xtensa/XtensaFeatures.td
@@ -73,6 +73,22 @@ def FeatureDiv32 : SubtargetFeature<"div32", "HasDiv32", "true",
def HasDiv32 : Predicate<"Subtarget->hasDiv32()">,
AssemblerPredicate<(all_of FeatureDiv32)>;
+def FeatureS32C1I : SubtargetFeature<"s32c1i", "HasS32C1I", "true",
+ "Enable Xtensa S32C1I option">;
+def HasS32C1I : Predicate<"Subtarget->hasS32C1I()">,
+ AssemblerPredicate<(all_of FeatureS32C1I)>;
+
+// Assume that lock-free native-width atomics are available, even if the target
+// and operating system combination would not usually provide them. The user
+// is responsible for providing any necessary __sync implementations. Code
+// built with this feature is not ABI-compatible with code built without this
+// feature, if atomic variables are exposed across the ABI boundary.
+def FeatureForcedAtomics : SubtargetFeature<"forced-atomics", "HasForcedAtomics", "true",
+ "Assume that lock-free native-width atomics are available">;
+def HasForcedAtomics : Predicate<"Subtarget->hasForcedAtomics()">,
+ AssemblerPredicate<(all_of FeatureForcedAtomics)>;
+def HasAtomicLdSt : Predicate<"Subtarget->hasS32C1I() || Subtarget->hasForcedAtomics()">;
+
def FeatureRegionProtection : SubtargetFeature<"regprotect", "HasRegionProtection", "true",
"Enable Xtensa Region Protection option">;
def HasRegionProtection : Predicate<"Subtarget->hasRegionProtection()">,
diff --git a/llvm/lib/Target/Xtensa/XtensaISelLowering.cpp b/llvm/lib/Target/Xtensa/XtensaISelLowering.cpp
index fd42fd2e010ba..6a07bd865a15a 100644
--- a/llvm/lib/Target/Xtensa/XtensaISelLowering.cpp
+++ b/llvm/lib/Target/Xtensa/XtensaISelLowering.cpp
@@ -250,6 +250,15 @@ XtensaTargetLowering::XtensaTargetLowering(const TargetMachine &TM,
// Floating-point truncation and stores need to be done separately.
setTruncStoreAction(MVT::f64, MVT::f32, Expand);
+ if (Subtarget.hasS32C1I()) {
+ setMaxAtomicSizeInBitsSupported(32);
+ setMinCmpXchgSizeInBits(32);
+ } else if (Subtarget.hasForcedAtomics()) {
+ setMaxAtomicSizeInBitsSupported(32);
+ } else {
+ setMaxAtomicSizeInBitsSupported(0);
+ }
+
// Compute derived properties from the register classes
computeRegisterProperties(STI.getRegisterInfo());
}
@@ -1548,6 +1557,11 @@ const char *XtensaTargetLowering::getTargetNodeName(unsigned Opcode) const {
return nullptr;
}
+TargetLowering::AtomicExpansionKind
+XtensaTargetLowering::shouldExpandAtomicRMWInIR(AtomicRMWInst *AI) const {
+ return AtomicExpansionKind::CmpXChg;
+}
+
//===----------------------------------------------------------------------===//
// Custom insertion
//===----------------------------------------------------------------------===//
@@ -1696,6 +1710,23 @@ MachineBasicBlock *XtensaTargetLowering::EmitInstrWithCustomInserter(
return MBB;
}
+ case Xtensa::ATOMIC_CMP_SWAP_32_P: {
+ MachineOperand &R = MI.getOperand(0);
+ MachineOperand &Addr = MI.getOperand(1);
+ MachineOperand &Cmp = MI.getOperand(2);
+ MachineOperand &Swap = MI.getOperand(3);
+
+ BuildMI(*MBB, MI, DL, TII.get(Xtensa::WSR), Xtensa::SCOMPARE1)
+ .addReg(Cmp.getReg());
+
+ BuildMI(*MBB, MI, DL, TII.get(Xtensa::S32C1I), R.getReg())
+ .addReg(Swap.getReg())
+ .addReg(Addr.getReg())
+ .addImm(0);
+
+ MI.eraseFromParent();
+ return MBB;
+ }
default:
llvm_unreachable("Unexpected instr type to insert");
}
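
The inserter above materializes the expected value in SCOMPARE1 via WSR and
then issues S32C1I. A plain C++ model of the instruction's architectural
effect (sequential semantics only: the hardware performs the sequence
atomically, and ATOMCTL-dependent behavior is ignored here):

  #include <cstdint>

  std::uint32_t s32c1i_model(std::uint32_t *addr, std::uint32_t scompare1,
                             std::uint32_t swap) {
    std::uint32_t observed = *addr; // value currently in memory
    if (observed == scompare1)
      *addr = swap;                 // store only on a match
    return observed;                // always returned in the $t register
  }

  int main() { return 0; }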
diff --git a/llvm/lib/Target/Xtensa/XtensaISelLowering.h b/llvm/lib/Target/Xtensa/XtensaISelLowering.h
index e6ddf9864932a..d84cbdb6afcef 100644
--- a/llvm/lib/Target/Xtensa/XtensaISelLowering.h
+++ b/llvm/lib/Target/Xtensa/XtensaISelLowering.h
@@ -145,6 +145,12 @@ class XtensaTargetLowering : public TargetLowering {
const SmallVectorImpl<SDValue> &OutVals, const SDLoc &DL,
SelectionDAG &DAG) const override;
+ bool shouldInsertFencesForAtomic(const Instruction *I) const override {
+ return true;
+ }
+
+ AtomicExpansionKind shouldExpandAtomicRMWInIR(AtomicRMWInst *) const override;
+
bool decomposeMulByConstant(LLVMContext &Context, EVT VT,
SDValue C) const override;
diff --git a/llvm/lib/Target/Xtensa/XtensaInstrInfo.td b/llvm/lib/Target/Xtensa/XtensaInstrInfo.td
index 31608f4659365..edcf2473d45cd 100644
--- a/llvm/lib/Target/Xtensa/XtensaInstrInfo.td
+++ b/llvm/lib/Target/Xtensa/XtensaInstrInfo.td
@@ -496,6 +496,8 @@ def EXTW : RRR_Inst<0x00, 0x00, 0x00, (outs), (ins),
let hasSideEffects = 1;
}
+def : Pat<(atomic_fence timm, timm), (MEMW)>;
+
//===----------------------------------------------------------------------===//
// Illegal instructions
//===----------------------------------------------------------------------===//
@@ -1498,6 +1500,46 @@ def RFI : RRR_Inst<0x00, 0x00, 0x00, (outs), (ins uimm4:$imm),
let t = 0x1;
}
+//===----------------------------------------------------------------------===//
+// S32C1I
+//===----------------------------------------------------------------------===//
+
+let mayStore = 1, mayLoad = 1, Predicates = [HasS32C1I] in {
+ def S32C1I : RRI8_Inst<0x02, (outs AR:$a), (ins AR:$t, mem32:$addr),
+ "s32c1i\t$t, $addr", []> {
+ bits<12> addr;
+
+ let r = 0x0e;
+ let Uses = [SCOMPARE1];
+ let Constraints = "$a = $t";
+ let imm8{7-0} = addr{11-4};
+ let s{3-0} = addr{3-0};
+ }
+}
+
+//===----------------------------------------------------------------------===//
+// Atomic patterns
+//===----------------------------------------------------------------------===//
+
+// Atomic loads and stores are available under both +s32c1i and
+// +forced-atomics. Fences will be inserted for atomic loads/stores according
+// to the logic in XtensaTargetLowering.
+let Predicates = [HasAtomicLdSt] in {
+ def : Pat<(i32 (atomic_load_8 addr_ish1:$addr)), (L8UI addr_ish1:$addr)>;
+ def : Pat<(i32 (atomic_load_16 addr_ish2:$addr)), (L16UI addr_ish2:$addr)>;
+ def : Pat<(i32 (atomic_load_32 addr_ish4:$addr)), (L32I addr_ish4:$addr)>;
+
+ def : Pat<(atomic_store_8 AR:$t, addr_ish1:$addr), (S8I AR:$t, addr_ish1:$addr)>;
+ def : Pat<(atomic_store_16 AR:$t, addr_ish2:$addr), (S16I AR:$t, addr_ish2:$addr)>;
+ def : Pat<(atomic_store_32 AR:$t, addr_ish4:$addr), (S32I AR:$t, addr_ish4:$addr)>;
+}
+
+let usesCustomInserter = 1, Predicates = [HasS32C1I] in {
+ def ATOMIC_CMP_SWAP_32_P : Pseudo<(outs AR:$dst), (ins AR:$ptr, AR:$cmp, AR:$swap),
+ "!atomic_cmp_swap_32_p, $dst, $ptr, $cmp, $swap",
+ [(set AR:$dst, (atomic_cmp_swap_i32 AR:$ptr, AR:$cmp, AR:$swap))]>;
+}
+
//===----------------------------------------------------------------------===//
// DSP Instructions
//===----------------------------------------------------------------------===//
diff --git a/llvm/lib/Target/Xtensa/XtensaRegisterInfo.td b/llvm/lib/Target/Xtensa/XtensaRegisterInfo.td
index 596c4105c1118..d1f2c6b8e43a3 100644
--- a/llvm/lib/Target/Xtensa/XtensaRegisterInfo.td
+++ b/llvm/lib/Target/Xtensa/XtensaRegisterInfo.td
@@ -84,6 +84,9 @@ def SAR : SRReg<3, "sar", ["SAR","3"]>;
// Boolean Register
def BREG : SRReg<4, "br", ["BR","4"]>;
+// Expected data value for S32C1I operation
+def SCOMPARE1 : SRReg<12, "scompare1", ["SCOMPARE1", "12"]>;
+
// Literal base
def LITBASE : SRReg<5, "litbase", ["LITBASE", "5"]>;
@@ -97,6 +100,9 @@ def IBREAKENABLE : SRReg<96, "ibreakenable", ["IBREAKENABLE", "96"]>;
// Memory Control Register
def MEMCTL : SRReg<97, "memctl", ["MEMCTL", "97"]>;
+// Atomic Operation Control
+def ATOMCTL : SRReg<99, "atomctl", ["ATOMCTL", "99"]>;
+
def DDR : SRReg<104, "ddr", ["DDR", "104"]>;
// Instruction break address register 0
@@ -218,8 +224,8 @@ def MR23 : RegisterClass<"Xtensa", [i32], 32, (add M2, M3)>;
def MR : RegisterClass<"Xtensa", [i32], 32, (add MR01, MR23)>;
def SR : RegisterClass<"Xtensa", [i32], 32, (add
- LBEG, LEND, LCOUNT, SAR, BREG, LITBASE, ACCLO, ACCHI, MR,
- WINDOWBASE, WINDOWSTART, IBREAKENABLE, MEMCTL, DDR, IBREAKA0, IBREAKA1,
+ LBEG, LEND, LCOUNT, SAR, BREG, SCOMPARE1, LITBASE, ACCLO, ACCHI, MR,
+ WINDOWBASE, WINDOWSTART, IBREAKENABLE, MEMCTL, ATOMCTL, DDR, IBREAKA0, IBREAKA1,
DBREAKA0, DBREAKA1, DBREAKC0, DBREAKC1, CONFIGID0, EPC1, EPC2, EPC3, EPC4, EPC5,
EPC6, EPC7, DEPC, EPS2, EPS3, EPS4, EPS5, EPS6, EPS7, CONFIGID1, EXCSAVE1, EXCSAVE2,
EXCSAVE3, EXCSAVE4, EXCSAVE5, EXCSAVE6, EXCSAVE7, CPENABLE, INTERRUPT, INTSET, INTCLEAR, INTENABLE,
diff --git a/llvm/lib/Target/Xtensa/XtensaSubtarget.h b/llvm/lib/Target/Xtensa/XtensaSubtarget.h
index fd677a451f3fd..b406534a0ec77 100644
--- a/llvm/lib/Target/Xtensa/XtensaSubtarget.h
+++ b/llvm/lib/Target/Xtensa/XtensaSubtarget.h
@@ -77,6 +77,8 @@ class XtensaSubtarget : public XtensaGenSubtargetInfo {
bool hasMul32() const { return HasMul32; }
bool hasMul32High() const { return HasMul32High; }
bool hasDiv32() const { return HasDiv32; }
+ bool hasS32C1I() const { return HasS32C1I; }
+ bool hasForcedAtomics() const { return HasForcedAtomics; }
bool hasSingleFloat() const { return HasSingleFloat; }
bool hasRegionProtection() const { return HasRegionProtection; }
bool hasRelocatableVector() const { return HasRelocatableVector; }
diff --git a/llvm/lib/Target/Xtensa/XtensaTargetMachine.cpp b/llvm/lib/Target/Xtensa/XtensaTargetMachine.cpp
index 8d2dca6c23721..c9f1ca8b46dab 100644
--- a/llvm/lib/Target/Xtensa/XtensaTargetMachine.cpp
+++ b/llvm/lib/Target/Xtensa/XtensaTargetMachine.cpp
@@ -107,6 +107,7 @@ class XtensaPassConfig : public TargetPassConfig {
}
bool addInstSelector() override;
+ void addIRPasses() override;
void addPreEmitPass() override;
};
} // end anonymous namespace
@@ -116,6 +117,11 @@ bool XtensaPassConfig::addInstSelector() {
return false;
}
+void XtensaPassConfig::addIRPasses() {
+ addPass(createAtomicExpandLegacyPass());
+ TargetPassConfig::addIRPasses();
+}
+
void XtensaPassConfig::addPreEmitPass() { addPass(&BranchRelaxationPassID); }
TargetPassConfig *XtensaTargetMachine::createPassConfig(PassManagerBase &PM) {
diff --git a/llvm/test/CodeGen/Xtensa/atomic-load-store.ll b/llvm/test/CodeGen/Xtensa/atomic-load-store.ll
new file mode 100644
index 0000000000000..bd843a353da2a
--- /dev/null
+++ b/llvm/test/CodeGen/Xtensa/atomic-load-store.ll
@@ -0,0 +1,498 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=xtensa -mattr=+windowed < %s | FileCheck %s --check-prefixes=XTENSA
+; RUN: llc -mtriple=xtensa -mattr=+windowed,+s32c1i < %s | FileCheck %s --check-prefixes=XTENSA-ATOMIC
+
+define i8 @atomic_load_i8_unordered(ptr %a) nounwind {
+; XTENSA-LABEL: atomic_load_i8_unordered:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: l32r a8, .LCPI0_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_load_i8_unordered:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l8ui a2, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ %1 = load atomic i8, ptr %a unordered, align 1
+ ret i8 %1
+}
+
+define i8 @atomic_load_i8_monotonic(ptr %a) nounwind {
+; XTENSA-LABEL: atomic_load_i8_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: l32r a8, .LCPI1_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_load_i8_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l8ui a2, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ %1 = load atomic i8, ptr %a monotonic, align 1
+ ret i8 %1
+}
+
+define i8 @atomic_load_i8_acquire(ptr %a) nounwind {
+; XTENSA-LABEL: atomic_load_i8_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 2
+; XTENSA-NEXT: l32r a8, .LCPI2_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_load_i8_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l8ui a2, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %1 = load atomic i8, ptr %a acquire, align 1
+ ret i8 %1
+}
+
+define i8 @atomic_load_i8_seq_cst(ptr %a) nounwind {
+; XTENSA-LABEL: atomic_load_i8_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 5
+; XTENSA-NEXT: l32r a8, .LCPI3_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_load_i8_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l8ui a2, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %1 = load atomic i8, ptr %a seq_cst, align 1
+ ret i8 %1
+}
+
+define i16 @atomic_load_i16_unordered(ptr %a) nounwind {
+; XTENSA-LABEL: atomic_load_i16_unordered:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: l32r a8, .LCPI4_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_load_i16_unordered:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l16ui a2, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ %1 = load atomic i16, ptr %a unordered, align 2
+ ret i16 %1
+}
+
+define i16 @atomic_load_i16_monotonic(ptr %a) nounwind {
+; XTENSA-LABEL: atomic_load_i16_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: l32r a8, .LCPI5_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_load_i16_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l16ui a2, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ %1 = load atomic i16, ptr %a monotonic, align 2
+ ret i16 %1
+}
+
+define i16 @atomic_load_i16_acquire(ptr %a) nounwind {
+; XTENSA-LABEL: atomic_load_i16_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 2
+; XTENSA-NEXT: l32r a8, .LCPI6_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_load_i16_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l16ui a2, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %1 = load atomic i16, ptr %a acquire, align 2
+ ret i16 %1
+}
+
+define i16 @atomic_load_i16_seq_cst(ptr %a) nounwind {
+; XTENSA-LABEL: atomic_load_i16_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 5
+; XTENSA-NEXT: l32r a8, .LCPI7_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_load_i16_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l16ui a2, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %1 = load atomic i16, ptr %a seq_cst, align 2
+ ret i16 %1
+}
+
+define i32 @atomic_load_i32_unordered(ptr %a) nounwind {
+; XTENSA-LABEL: atomic_load_i32_unordered:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: l32r a8, .LCPI8_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_load_i32_unordered:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a2, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ %1 = load atomic i32, ptr %a unordered, align 4
+ ret i32 %1
+}
+
+define i32 @atomic_load_i32_monotonic(ptr %a) nounwind {
+; XTENSA-LABEL: atomic_load_i32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: l32r a8, .LCPI9_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_load_i32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a2, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ %1 = load atomic i32, ptr %a monotonic, align 4
+ ret i32 %1
+}
+
+define i32 @atomic_load_i32_acquire(ptr %a) nounwind {
+; XTENSA-LABEL: atomic_load_i32_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 2
+; XTENSA-NEXT: l32r a8, .LCPI10_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_load_i32_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a2, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %1 = load atomic i32, ptr %a acquire, align 4
+ ret i32 %1
+}
+
+define i32 @atomic_load_i32_seq_cst(ptr %a) nounwind {
+; XTENSA-LABEL: atomic_load_i32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 5
+; XTENSA-NEXT: l32r a8, .LCPI11_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_load_i32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a2, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %1 = load atomic i32, ptr %a seq_cst, align 4
+ ret i32 %1
+}
+
+define void @atomic_store_i8_unordered(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomic_store_i8_unordered:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI12_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_store_i8_unordered:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: s8i a3, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i8 %b, ptr %a unordered, align 1
+ ret void
+}
+
+define void @atomic_store_i8_monotonic(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomic_store_i8_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI13_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_store_i8_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: s8i a3, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i8 %b, ptr %a monotonic, align 1
+ ret void
+}
+
+define void @atomic_store_i8_release(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomic_store_i8_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI14_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_store_i8_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: s8i a3, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i8 %b, ptr %a release, align 1
+ ret void
+}
+
+define void @atomic_store_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomic_store_i8_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI15_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_store_i8_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: s8i a3, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i8 %b, ptr %a seq_cst, align 1
+ ret void
+}
+
+define void @atomic_store_i16_unordered(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomic_store_i16_unordered:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI16_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_store_i16_unordered:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: s16i a3, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i16 %b, ptr %a unordered, align 2
+ ret void
+}
+
+define void @atomic_store_i16_monotonic(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomic_store_i16_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI17_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_store_i16_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: s16i a3, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i16 %b, ptr %a monotonic, align 2
+ ret void
+}
+
+define void @atomic_store_i16_release(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomic_store_i16_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI18_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_store_i16_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: s16i a3, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i16 %b, ptr %a release, align 2
+ ret void
+}
+
+define void @atomic_store_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomic_store_i16_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI19_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_store_i16_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: s16i a3, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i16 %b, ptr %a seq_cst, align 2
+ ret void
+}
+
+define void @atomic_store_i32_unordered(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomic_store_i32_unordered:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI20_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_store_i32_unordered:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: s32i a3, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i32 %b, ptr %a unordered, align 4
+ ret void
+}
+
+define void @atomic_store_i32_monotonic(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomic_store_i32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI21_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_store_i32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: s32i a3, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i32 %b, ptr %a monotonic, align 4
+ ret void
+}
+
+define void @atomic_store_i32_release(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomic_store_i32_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI22_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_store_i32_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: s32i a3, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i32 %b, ptr %a release, align 4
+ ret void
+}
+
+define void @atomic_store_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomic_store_i32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI23_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomic_store_i32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: s32i a3, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i32 %b, ptr %a seq_cst, align 4
+ ret void
+}
diff --git a/llvm/test/CodeGen/Xtensa/atomic-rmw.ll b/llvm/test/CodeGen/Xtensa/atomic-rmw.ll
new file mode 100644
index 0000000000000..81cb2dd5e8184
--- /dev/null
+++ b/llvm/test/CodeGen/Xtensa/atomic-rmw.ll
@@ -0,0 +1,10298 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 4
+; RUN: llc -mtriple=xtensa -mattr=+windowed < %s | FileCheck %s --check-prefixes=XTENSA
+; RUN: llc -mtriple=xtensa -mattr=+windowed,+s32c1i < %s | FileCheck %s --check-prefixes=XTENSA-ATOMIC
+
+define i8 @atomicrmw_xchg_i8_monotonic(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i8_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI0_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i8_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB0_2
+; XTENSA-ATOMIC-NEXT: .LBB0_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB0_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB0_4
+; XTENSA-ATOMIC-NEXT: .LBB0_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a15, a10
+; XTENSA-ATOMIC-NEXT: or a14, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a11, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a14, a15, .LBB0_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB0_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB0_1
+; XTENSA-ATOMIC-NEXT: .LBB0_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xchg_i8_acquire(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i8_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI1_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i8_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB1_2
+; XTENSA-ATOMIC-NEXT: .LBB1_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB1_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB1_4
+; XTENSA-ATOMIC-NEXT: .LBB1_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a15, a10
+; XTENSA-ATOMIC-NEXT: or a14, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a11, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a14, a15, .LBB1_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB1_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB1_1
+; XTENSA-ATOMIC-NEXT: .LBB1_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xchg_i8_release(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i8_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI2_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i8_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB2_2
+; XTENSA-ATOMIC-NEXT: .LBB2_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB2_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB2_4
+; XTENSA-ATOMIC-NEXT: .LBB2_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a15, a10
+; XTENSA-ATOMIC-NEXT: or a14, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a11, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a14, a15, .LBB2_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB2_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB2_1
+; XTENSA-ATOMIC-NEXT: .LBB2_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xchg_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i8_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI3_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i8_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB3_2
+; XTENSA-ATOMIC-NEXT: .LBB3_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB3_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB3_4
+; XTENSA-ATOMIC-NEXT: .LBB3_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a15, a10
+; XTENSA-ATOMIC-NEXT: or a14, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a11, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a14, a15, .LBB3_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB3_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB3_1
+; XTENSA-ATOMIC-NEXT: .LBB3_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xchg_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i8_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI4_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i8_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB4_2
+; XTENSA-ATOMIC-NEXT: .LBB4_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB4_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB4_4
+; XTENSA-ATOMIC-NEXT: .LBB4_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a15, a10
+; XTENSA-ATOMIC-NEXT: or a14, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a11, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a14, a15, .LBB4_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB4_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB4_1
+; XTENSA-ATOMIC-NEXT: .LBB4_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_add_i8_monotonic(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i8_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI5_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i8_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB5_2
+; XTENSA-ATOMIC-NEXT: .LBB5_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB5_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB5_4
+; XTENSA-ATOMIC-NEXT: .LBB5_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: add a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB5_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB5_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB5_1
+; XTENSA-ATOMIC-NEXT: .LBB5_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_add_i8_acquire(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i8_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI6_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i8_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB6_2
+; XTENSA-ATOMIC-NEXT: .LBB6_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB6_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB6_4
+; XTENSA-ATOMIC-NEXT: .LBB6_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: add a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB6_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB6_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB6_1
+; XTENSA-ATOMIC-NEXT: .LBB6_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_add_i8_release(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i8_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI7_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i8_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB7_2
+; XTENSA-ATOMIC-NEXT: .LBB7_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB7_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB7_4
+; XTENSA-ATOMIC-NEXT: .LBB7_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: add a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB7_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB7_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB7_1
+; XTENSA-ATOMIC-NEXT: .LBB7_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_add_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i8_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI8_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i8_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB8_2
+; XTENSA-ATOMIC-NEXT: .LBB8_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB8_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB8_4
+; XTENSA-ATOMIC-NEXT: .LBB8_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: add a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB8_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB8_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB8_1
+; XTENSA-ATOMIC-NEXT: .LBB8_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_add_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i8_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI9_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i8_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB9_2
+; XTENSA-ATOMIC-NEXT: .LBB9_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB9_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB9_4
+; XTENSA-ATOMIC-NEXT: .LBB9_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: add a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB9_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB9_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB9_1
+; XTENSA-ATOMIC-NEXT: .LBB9_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_sub_i8_monotonic(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i8_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI10_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i8_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB10_2
+; XTENSA-ATOMIC-NEXT: .LBB10_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB10_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB10_4
+; XTENSA-ATOMIC-NEXT: .LBB10_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: sub a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB10_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB10_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB10_1
+; XTENSA-ATOMIC-NEXT: .LBB10_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_sub_i8_acquire(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i8_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI11_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i8_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB11_2
+; XTENSA-ATOMIC-NEXT: .LBB11_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB11_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB11_4
+; XTENSA-ATOMIC-NEXT: .LBB11_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: sub a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB11_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB11_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB11_1
+; XTENSA-ATOMIC-NEXT: .LBB11_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_sub_i8_release(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i8_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI12_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i8_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB12_2
+; XTENSA-ATOMIC-NEXT: .LBB12_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB12_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB12_4
+; XTENSA-ATOMIC-NEXT: .LBB12_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: sub a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB12_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB12_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB12_1
+; XTENSA-ATOMIC-NEXT: .LBB12_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_sub_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i8_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI13_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i8_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB13_2
+; XTENSA-ATOMIC-NEXT: .LBB13_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB13_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB13_4
+; XTENSA-ATOMIC-NEXT: .LBB13_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: sub a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB13_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB13_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB13_1
+; XTENSA-ATOMIC-NEXT: .LBB13_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_sub_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i8_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI14_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i8_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB14_2
+; XTENSA-ATOMIC-NEXT: .LBB14_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB14_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB14_4
+; XTENSA-ATOMIC-NEXT: .LBB14_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: sub a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB14_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB14_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB14_1
+; XTENSA-ATOMIC-NEXT: .LBB14_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_and_i8_monotonic(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i8_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI15_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i8_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: and a10, a3, a9
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a11
+; XTENSA-ATOMIC-NEXT: or a9, a10, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB15_2
+; XTENSA-ATOMIC-NEXT: .LBB15_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB15_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB15_4
+; XTENSA-ATOMIC-NEXT: .LBB15_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB15_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB15_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB15_1
+; XTENSA-ATOMIC-NEXT: .LBB15_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_and_i8_acquire(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i8_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI16_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i8_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: and a10, a3, a9
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a11
+; XTENSA-ATOMIC-NEXT: or a9, a10, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB16_2
+; XTENSA-ATOMIC-NEXT: .LBB16_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB16_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB16_4
+; XTENSA-ATOMIC-NEXT: .LBB16_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB16_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB16_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB16_1
+; XTENSA-ATOMIC-NEXT: .LBB16_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_and_i8_release(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i8_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI17_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i8_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: and a10, a3, a9
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a11
+; XTENSA-ATOMIC-NEXT: or a9, a10, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB17_2
+; XTENSA-ATOMIC-NEXT: .LBB17_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB17_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB17_4
+; XTENSA-ATOMIC-NEXT: .LBB17_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB17_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB17_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB17_1
+; XTENSA-ATOMIC-NEXT: .LBB17_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_and_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i8_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI18_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i8_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: and a10, a3, a9
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a11
+; XTENSA-ATOMIC-NEXT: or a9, a10, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB18_2
+; XTENSA-ATOMIC-NEXT: .LBB18_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB18_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB18_4
+; XTENSA-ATOMIC-NEXT: .LBB18_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB18_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB18_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB18_1
+; XTENSA-ATOMIC-NEXT: .LBB18_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_and_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i8_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI19_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i8_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: and a10, a3, a9
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a11
+; XTENSA-ATOMIC-NEXT: or a9, a10, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB19_2
+; XTENSA-ATOMIC-NEXT: .LBB19_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB19_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB19_4
+; XTENSA-ATOMIC-NEXT: .LBB19_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB19_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB19_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB19_1
+; XTENSA-ATOMIC-NEXT: .LBB19_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_nand_i8_monotonic(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_nand_i8_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI20_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_nand_i8_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a12, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a13, -4
+; XTENSA-ATOMIC-NEXT: and a13, a2, a13
+; XTENSA-ATOMIC-NEXT: l32i a7, a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 0
+; XTENSA-ATOMIC-NEXT: movi a15, 1
+; XTENSA-ATOMIC-NEXT: j .LBB20_2
+; XTENSA-ATOMIC-NEXT: .LBB20_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB20_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a6, a6
+; XTENSA-ATOMIC-NEXT: beqi a5, 1, .LBB20_4
+; XTENSA-ATOMIC-NEXT: .LBB20_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a6, a7, a12
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: xor a5, a5, a11
+; XTENSA-ATOMIC-NEXT: and a5, a5, a10
+; XTENSA-ATOMIC-NEXT: or a6, a6, a5
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a6, a13, 0
+; XTENSA-ATOMIC-NEXT: or a5, a15, a15
+; XTENSA-ATOMIC-NEXT: beq a6, a7, .LBB20_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB20_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a5, a14, a14
+; XTENSA-ATOMIC-NEXT: j .LBB20_1
+; XTENSA-ATOMIC-NEXT: .LBB20_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a6
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw nand ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_nand_i8_acquire(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_nand_i8_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI21_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_nand_i8_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a12, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a13, -4
+; XTENSA-ATOMIC-NEXT: and a13, a2, a13
+; XTENSA-ATOMIC-NEXT: l32i a7, a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 0
+; XTENSA-ATOMIC-NEXT: movi a15, 1
+; XTENSA-ATOMIC-NEXT: j .LBB21_2
+; XTENSA-ATOMIC-NEXT: .LBB21_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB21_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a6, a6
+; XTENSA-ATOMIC-NEXT: beqi a5, 1, .LBB21_4
+; XTENSA-ATOMIC-NEXT: .LBB21_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a6, a7, a12
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: xor a5, a5, a11
+; XTENSA-ATOMIC-NEXT: and a5, a5, a10
+; XTENSA-ATOMIC-NEXT: or a6, a6, a5
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a6, a13, 0
+; XTENSA-ATOMIC-NEXT: or a5, a15, a15
+; XTENSA-ATOMIC-NEXT: beq a6, a7, .LBB21_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB21_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a5, a14, a14
+; XTENSA-ATOMIC-NEXT: j .LBB21_1
+; XTENSA-ATOMIC-NEXT: .LBB21_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a6
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw nand ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_nand_i8_release(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_nand_i8_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI22_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_nand_i8_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a12, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a13, -4
+; XTENSA-ATOMIC-NEXT: and a13, a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a7, a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 0
+; XTENSA-ATOMIC-NEXT: movi a15, 1
+; XTENSA-ATOMIC-NEXT: j .LBB22_2
+; XTENSA-ATOMIC-NEXT: .LBB22_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB22_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a6, a6
+; XTENSA-ATOMIC-NEXT: beqi a5, 1, .LBB22_4
+; XTENSA-ATOMIC-NEXT: .LBB22_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a6, a7, a12
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: xor a5, a5, a11
+; XTENSA-ATOMIC-NEXT: and a5, a5, a10
+; XTENSA-ATOMIC-NEXT: or a6, a6, a5
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a6, a13, 0
+; XTENSA-ATOMIC-NEXT: or a5, a15, a15
+; XTENSA-ATOMIC-NEXT: beq a6, a7, .LBB22_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB22_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a5, a14, a14
+; XTENSA-ATOMIC-NEXT: j .LBB22_1
+; XTENSA-ATOMIC-NEXT: .LBB22_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a6
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw nand ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_nand_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_nand_i8_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI23_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_nand_i8_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a12, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a13, -4
+; XTENSA-ATOMIC-NEXT: and a13, a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a7, a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 0
+; XTENSA-ATOMIC-NEXT: movi a15, 1
+; XTENSA-ATOMIC-NEXT: j .LBB23_2
+; XTENSA-ATOMIC-NEXT: .LBB23_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB23_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a6, a6
+; XTENSA-ATOMIC-NEXT: beqi a5, 1, .LBB23_4
+; XTENSA-ATOMIC-NEXT: .LBB23_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a6, a7, a12
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: xor a5, a5, a11
+; XTENSA-ATOMIC-NEXT: and a5, a5, a10
+; XTENSA-ATOMIC-NEXT: or a6, a6, a5
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a6, a13, 0
+; XTENSA-ATOMIC-NEXT: or a5, a15, a15
+; XTENSA-ATOMIC-NEXT: beq a6, a7, .LBB23_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB23_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a5, a14, a14
+; XTENSA-ATOMIC-NEXT: j .LBB23_1
+; XTENSA-ATOMIC-NEXT: .LBB23_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a6
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw nand ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_nand_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_nand_i8_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI24_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_nand_i8_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a10, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a12, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a13, -4
+; XTENSA-ATOMIC-NEXT: and a13, a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a7, a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 0
+; XTENSA-ATOMIC-NEXT: movi a15, 1
+; XTENSA-ATOMIC-NEXT: j .LBB24_2
+; XTENSA-ATOMIC-NEXT: .LBB24_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB24_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a6, a6
+; XTENSA-ATOMIC-NEXT: beqi a5, 1, .LBB24_4
+; XTENSA-ATOMIC-NEXT: .LBB24_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a6, a7, a12
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: xor a5, a5, a11
+; XTENSA-ATOMIC-NEXT: and a5, a5, a10
+; XTENSA-ATOMIC-NEXT: or a6, a6, a5
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a6, a13, 0
+; XTENSA-ATOMIC-NEXT: or a5, a15, a15
+; XTENSA-ATOMIC-NEXT: beq a6, a7, .LBB24_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB24_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a5, a14, a14
+; XTENSA-ATOMIC-NEXT: j .LBB24_1
+; XTENSA-ATOMIC-NEXT: .LBB24_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a6
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw nand ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_or_i8_monotonic(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i8_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI25_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i8_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a8, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB25_2
+; XTENSA-ATOMIC-NEXT: .LBB25_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB25_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB25_4
+; XTENSA-ATOMIC-NEXT: .LBB25_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB25_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB25_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB25_1
+; XTENSA-ATOMIC-NEXT: .LBB25_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_or_i8_acquire(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i8_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI26_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i8_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a8, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB26_2
+; XTENSA-ATOMIC-NEXT: .LBB26_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB26_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB26_4
+; XTENSA-ATOMIC-NEXT: .LBB26_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB26_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB26_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB26_1
+; XTENSA-ATOMIC-NEXT: .LBB26_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_or_i8_release(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i8_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI27_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i8_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a8, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB27_2
+; XTENSA-ATOMIC-NEXT: .LBB27_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB27_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB27_4
+; XTENSA-ATOMIC-NEXT: .LBB27_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB27_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB27_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB27_1
+; XTENSA-ATOMIC-NEXT: .LBB27_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_or_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i8_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI28_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i8_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a8, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB28_2
+; XTENSA-ATOMIC-NEXT: .LBB28_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB28_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB28_4
+; XTENSA-ATOMIC-NEXT: .LBB28_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB28_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB28_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB28_1
+; XTENSA-ATOMIC-NEXT: .LBB28_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_or_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i8_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI29_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i8_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a8, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB29_2
+; XTENSA-ATOMIC-NEXT: .LBB29_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB29_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB29_4
+; XTENSA-ATOMIC-NEXT: .LBB29_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB29_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB29_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB29_1
+; XTENSA-ATOMIC-NEXT: .LBB29_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xor_i8_monotonic(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i8_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI30_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i8_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a8, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB30_2
+; XTENSA-ATOMIC-NEXT: .LBB30_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB30_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB30_4
+; XTENSA-ATOMIC-NEXT: .LBB30_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB30_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB30_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB30_1
+; XTENSA-ATOMIC-NEXT: .LBB30_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xor_i8_acquire(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i8_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI31_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i8_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a8, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB31_2
+; XTENSA-ATOMIC-NEXT: .LBB31_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB31_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB31_4
+; XTENSA-ATOMIC-NEXT: .LBB31_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB31_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB31_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB31_1
+; XTENSA-ATOMIC-NEXT: .LBB31_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xor_i8_release(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i8_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI32_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i8_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a8, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB32_2
+; XTENSA-ATOMIC-NEXT: .LBB32_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB32_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB32_4
+; XTENSA-ATOMIC-NEXT: .LBB32_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB32_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB32_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB32_1
+; XTENSA-ATOMIC-NEXT: .LBB32_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xor_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i8_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI33_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i8_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a8, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB33_2
+; XTENSA-ATOMIC-NEXT: .LBB33_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB33_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB33_4
+; XTENSA-ATOMIC-NEXT: .LBB33_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB33_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB33_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB33_1
+; XTENSA-ATOMIC-NEXT: .LBB33_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xor_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i8_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI34_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i8_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a8, 255
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB34_2
+; XTENSA-ATOMIC-NEXT: .LBB34_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB34_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB34_4
+; XTENSA-ATOMIC-NEXT: .LBB34_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB34_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB34_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB34_1
+; XTENSA-ATOMIC-NEXT: .LBB34_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
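All of the XTENSA-ATOMIC bodies above share one expansion: the i8 operand is positioned inside its containing naturally aligned word, and the update is retried with the S32C1I conditional store until it lands. A minimal C sketch of that loop, assuming a little-endian 32-bit target (the helper name and the use of C11 atomics are illustrative, not the backend's actual code):

    #include <stdint.h>
    #include <stdatomic.h>

    /* Sketch of the word-level CAS loop the XTENSA-ATOMIC checks encode
       for `atomicrmw or i8`; the xor tests differ only in the update line. */
    static uint8_t atomic_or_u8_sketch(uint8_t *p, uint8_t v) {
      uintptr_t addr = (uintptr_t)p;
      _Atomic uint32_t *word =
          (_Atomic uint32_t *)(addr & ~(uintptr_t)3);  /* movi -4; and   */
      unsigned shift = (addr & 3) * 8;                 /* slli 3; and 24 */
      uint32_t op = (uint32_t)v << shift;              /* and 255; ssl; sll */
      uint32_t old = atomic_load_explicit(word, memory_order_relaxed); /* l32i */
      uint32_t desired;
      do {
        desired = old | op;            /* the `or`; s32c1i retries on races */
      } while (!atomic_compare_exchange_weak_explicit(
          word, &old, desired,         /* wsr scompare1 + s32c1i */
          memory_order_relaxed, memory_order_relaxed));
      return (uint8_t)(old >> shift);  /* ssr; srl recover the old byte */
    }

Because OR and XOR only touch bits inside the target byte's lane (the operand is masked with 255 before shifting), no inverse mask over the rest of the word is needed; the min/max expansions below do need one. The orderings map directly onto MEMW placement in the checks: monotonic emits no fence, acquire emits MEMW after the loop, release emits it before the initial L32I, and acq_rel/seq_cst emit both.
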
+define i8 @atomicrmw_max_i8_monotonic(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i8_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l8ui a2, a6, 0
+; XTENSA-NEXT: slli a8, a3, 24
+; XTENSA-NEXT: srai a5, a8, 24
+; XTENSA-NEXT: movi a7, 0
+; XTENSA-NEXT: l32r a4, .LCPI35_0
+; XTENSA-NEXT: j .LBB35_2
+; XTENSA-NEXT: .LBB35_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB35_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l8ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB35_4
+; XTENSA-NEXT: .LBB35_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 0
+; XTENSA-NEXT: slli a8, a2, 24
+; XTENSA-NEXT: srai a8, a8, 24
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bge a5, a8, .LBB35_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB35_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB35_1
+; XTENSA-NEXT: .LBB35_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i8_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: slli a12, a3, 24
+; XTENSA-ATOMIC-NEXT: srai a12, a12, 24
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB35_2
+; XTENSA-ATOMIC-NEXT: .LBB35_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB35_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB35_6
+; XTENSA-ATOMIC-NEXT: .LBB35_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: slli a6, a7, 24
+; XTENSA-ATOMIC-NEXT: srai a5, a6, 24
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a12, a5, .LBB35_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB35_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB35_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB35_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB35_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB35_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB35_1
+; XTENSA-ATOMIC-NEXT: .LBB35_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
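The max expansion adds a select and a field merge on top of the same loop: the current byte is extracted with SSR/SRL, sign-extended with the slli/srai 24 pair, compared against the sign-extended operand, and the winner is re-inserted into the word through the inverted lane mask built from 255 << shift. A sketch under the same assumptions as the previous one (min flips the comparison: the checks use BGE here and BLT in the min tests):

    #include <stdint.h>
    #include <stdatomic.h>

    /* Sketch of the signed-max variant of the word-level CAS loop. */
    static uint8_t atomic_max_i8_sketch(uint8_t *p, int8_t v) {
      uintptr_t addr = (uintptr_t)p;
      _Atomic uint32_t *word = (_Atomic uint32_t *)(addr & ~(uintptr_t)3);
      unsigned shift = (addr & 3) * 8;
      uint32_t mask = (uint32_t)0xff << shift;       /* ssl; sll of 255 */
      uint32_t old = atomic_load_explicit(word, memory_order_relaxed);
      uint32_t desired;
      do {
        int8_t cur = (int8_t)(old >> shift);         /* srl; slli/srai 24 */
        int8_t winner = (v >= cur) ? v : cur;        /* bge */
        desired = (old & ~mask)                      /* xor with -1; and  */
                | (((uint32_t)(uint8_t)winner) << shift); /* and 255; sll; or */
      } while (!atomic_compare_exchange_weak_explicit(
          word, &old, desired, memory_order_relaxed, memory_order_relaxed));
      return (uint8_t)(old >> shift);
    }

The plain XTENSA bodies for max/min also take a different route from or/xor: rather than a single fetch-and-op call, they loop around a compare-exchange helper, spilling the expected byte to a stack slot (the s8i/l8ui and addi a11, a1, N traffic in the checks) and passing the success and failure orderings in a13/a14, e.g. 4 and 2 in the acq_rel test.
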
+define i8 @atomicrmw_max_i8_acquire(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i8_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l8ui a2, a6, 0
+; XTENSA-NEXT: slli a8, a3, 24
+; XTENSA-NEXT: srai a5, a8, 24
+; XTENSA-NEXT: movi a7, 2
+; XTENSA-NEXT: l32r a4, .LCPI36_0
+; XTENSA-NEXT: j .LBB36_2
+; XTENSA-NEXT: .LBB36_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB36_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l8ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB36_4
+; XTENSA-NEXT: .LBB36_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 0
+; XTENSA-NEXT: slli a8, a2, 24
+; XTENSA-NEXT: srai a8, a8, 24
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bge a5, a8, .LBB36_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB36_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB36_1
+; XTENSA-NEXT: .LBB36_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i8_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: slli a12, a3, 24
+; XTENSA-ATOMIC-NEXT: srai a12, a12, 24
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB36_2
+; XTENSA-ATOMIC-NEXT: .LBB36_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB36_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB36_6
+; XTENSA-ATOMIC-NEXT: .LBB36_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: slli a6, a7, 24
+; XTENSA-ATOMIC-NEXT: srai a5, a6, 24
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a12, a5, .LBB36_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB36_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB36_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB36_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB36_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB36_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB36_1
+; XTENSA-ATOMIC-NEXT: .LBB36_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_max_i8_release(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i8_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a9, a2, a2
+; XTENSA-NEXT: l8ui a2, a9, 0
+; XTENSA-NEXT: s32i a3, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: slli a8, a3, 24
+; XTENSA-NEXT: or a3, a9, a9
+; XTENSA-NEXT: srai a4, a8, 24
+; XTENSA-NEXT: movi a7, 3
+; XTENSA-NEXT: movi a6, 0
+; XTENSA-NEXT: l32r a5, .LCPI37_0
+; XTENSA-NEXT: j .LBB37_2
+; XTENSA-NEXT: .LBB37_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB37_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: or a10, a3, a3
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l8ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB37_4
+; XTENSA-NEXT: .LBB37_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 4
+; XTENSA-NEXT: slli a8, a2, 24
+; XTENSA-NEXT: srai a8, a8, 24
+; XTENSA-NEXT: l32i a12, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: bge a4, a8, .LBB37_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB37_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB37_1
+; XTENSA-NEXT: .LBB37_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i8_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: slli a12, a3, 24
+; XTENSA-ATOMIC-NEXT: srai a12, a12, 24
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB37_2
+; XTENSA-ATOMIC-NEXT: .LBB37_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB37_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB37_6
+; XTENSA-ATOMIC-NEXT: .LBB37_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: slli a6, a7, 24
+; XTENSA-ATOMIC-NEXT: srai a5, a6, 24
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a12, a5, .LBB37_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB37_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB37_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB37_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB37_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB37_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB37_1
+; XTENSA-ATOMIC-NEXT: .LBB37_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_max_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i8_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a9, a2, a2
+; XTENSA-NEXT: l8ui a2, a9, 0
+; XTENSA-NEXT: s32i a3, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: slli a8, a3, 24
+; XTENSA-NEXT: or a3, a9, a9
+; XTENSA-NEXT: srai a4, a8, 24
+; XTENSA-NEXT: movi a7, 4
+; XTENSA-NEXT: movi a6, 2
+; XTENSA-NEXT: l32r a5, .LCPI38_0
+; XTENSA-NEXT: j .LBB38_2
+; XTENSA-NEXT: .LBB38_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB38_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: or a10, a3, a3
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l8ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB38_4
+; XTENSA-NEXT: .LBB38_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 4
+; XTENSA-NEXT: slli a8, a2, 24
+; XTENSA-NEXT: srai a8, a8, 24
+; XTENSA-NEXT: l32i a12, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: bge a4, a8, .LBB38_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB38_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB38_1
+; XTENSA-NEXT: .LBB38_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i8_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: slli a12, a3, 24
+; XTENSA-ATOMIC-NEXT: srai a12, a12, 24
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB38_2
+; XTENSA-ATOMIC-NEXT: .LBB38_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB38_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB38_6
+; XTENSA-ATOMIC-NEXT: .LBB38_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: slli a6, a7, 24
+; XTENSA-ATOMIC-NEXT: srai a5, a6, 24
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a12, a5, .LBB38_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB38_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB38_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB38_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB38_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB38_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB38_1
+; XTENSA-ATOMIC-NEXT: .LBB38_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_max_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i8_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l8ui a2, a6, 0
+; XTENSA-NEXT: slli a8, a3, 24
+; XTENSA-NEXT: srai a5, a8, 24
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a4, .LCPI39_0
+; XTENSA-NEXT: j .LBB39_2
+; XTENSA-NEXT: .LBB39_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB39_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l8ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB39_4
+; XTENSA-NEXT: .LBB39_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 0
+; XTENSA-NEXT: slli a8, a2, 24
+; XTENSA-NEXT: srai a8, a8, 24
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bge a5, a8, .LBB39_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB39_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB39_1
+; XTENSA-NEXT: .LBB39_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i8_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: slli a12, a3, 24
+; XTENSA-ATOMIC-NEXT: srai a12, a12, 24
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB39_2
+; XTENSA-ATOMIC-NEXT: .LBB39_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB39_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB39_6
+; XTENSA-ATOMIC-NEXT: .LBB39_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: slli a6, a7, 24
+; XTENSA-ATOMIC-NEXT: srai a5, a6, 24
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a12, a5, .LBB39_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB39_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB39_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB39_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB39_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB39_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB39_1
+; XTENSA-ATOMIC-NEXT: .LBB39_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_min_i8_monotonic(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i8_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l8ui a2, a6, 0
+; XTENSA-NEXT: slli a8, a3, 24
+; XTENSA-NEXT: srai a5, a8, 24
+; XTENSA-NEXT: movi a7, 0
+; XTENSA-NEXT: l32r a4, .LCPI40_0
+; XTENSA-NEXT: j .LBB40_2
+; XTENSA-NEXT: .LBB40_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB40_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l8ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB40_4
+; XTENSA-NEXT: .LBB40_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 0
+; XTENSA-NEXT: slli a8, a2, 24
+; XTENSA-NEXT: srai a8, a8, 24
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: blt a5, a8, .LBB40_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB40_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB40_1
+; XTENSA-NEXT: .LBB40_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i8_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: slli a12, a3, 24
+; XTENSA-ATOMIC-NEXT: srai a12, a12, 24
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB40_2
+; XTENSA-ATOMIC-NEXT: .LBB40_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB40_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB40_6
+; XTENSA-ATOMIC-NEXT: .LBB40_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: slli a6, a7, 24
+; XTENSA-ATOMIC-NEXT: srai a5, a6, 24
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a12, a5, .LBB40_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB40_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB40_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB40_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB40_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB40_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB40_1
+; XTENSA-ATOMIC-NEXT: .LBB40_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_min_i8_acquire(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i8_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l8ui a2, a6, 0
+; XTENSA-NEXT: slli a8, a3, 24
+; XTENSA-NEXT: srai a5, a8, 24
+; XTENSA-NEXT: movi a7, 2
+; XTENSA-NEXT: l32r a4, .LCPI41_0
+; XTENSA-NEXT: j .LBB41_2
+; XTENSA-NEXT: .LBB41_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB41_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l8ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB41_4
+; XTENSA-NEXT: .LBB41_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 0
+; XTENSA-NEXT: slli a8, a2, 24
+; XTENSA-NEXT: srai a8, a8, 24
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: blt a5, a8, .LBB41_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB41_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB41_1
+; XTENSA-NEXT: .LBB41_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i8_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: slli a12, a3, 24
+; XTENSA-ATOMIC-NEXT: srai a12, a12, 24
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB41_2
+; XTENSA-ATOMIC-NEXT: .LBB41_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB41_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB41_6
+; XTENSA-ATOMIC-NEXT: .LBB41_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: slli a6, a7, 24
+; XTENSA-ATOMIC-NEXT: srai a5, a6, 24
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a12, a5, .LBB41_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB41_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB41_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB41_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB41_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB41_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB41_1
+; XTENSA-ATOMIC-NEXT: .LBB41_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_min_i8_release(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i8_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a9, a2, a2
+; XTENSA-NEXT: l8ui a2, a9, 0
+; XTENSA-NEXT: s32i a3, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: slli a8, a3, 24
+; XTENSA-NEXT: or a3, a9, a9
+; XTENSA-NEXT: srai a4, a8, 24
+; XTENSA-NEXT: movi a7, 3
+; XTENSA-NEXT: movi a6, 0
+; XTENSA-NEXT: l32r a5, .LCPI42_0
+; XTENSA-NEXT: j .LBB42_2
+; XTENSA-NEXT: .LBB42_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB42_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: or a10, a3, a3
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l8ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB42_4
+; XTENSA-NEXT: .LBB42_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 4
+; XTENSA-NEXT: slli a8, a2, 24
+; XTENSA-NEXT: srai a8, a8, 24
+; XTENSA-NEXT: l32i a12, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: blt a4, a8, .LBB42_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB42_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB42_1
+; XTENSA-NEXT: .LBB42_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i8_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: slli a12, a3, 24
+; XTENSA-ATOMIC-NEXT: srai a12, a12, 24
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB42_2
+; XTENSA-ATOMIC-NEXT: .LBB42_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB42_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB42_6
+; XTENSA-ATOMIC-NEXT: .LBB42_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: slli a6, a7, 24
+; XTENSA-ATOMIC-NEXT: srai a5, a6, 24
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a12, a5, .LBB42_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB42_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB42_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB42_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB42_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB42_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB42_1
+; XTENSA-ATOMIC-NEXT: .LBB42_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_min_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i8_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a9, a2, a2
+; XTENSA-NEXT: l8ui a2, a9, 0
+; XTENSA-NEXT: s32i a3, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: slli a8, a3, 24
+; XTENSA-NEXT: or a3, a9, a9
+; XTENSA-NEXT: srai a4, a8, 24
+; XTENSA-NEXT: movi a7, 4
+; XTENSA-NEXT: movi a6, 2
+; XTENSA-NEXT: l32r a5, .LCPI43_0
+; XTENSA-NEXT: j .LBB43_2
+; XTENSA-NEXT: .LBB43_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB43_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: or a10, a3, a3
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l8ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB43_4
+; XTENSA-NEXT: .LBB43_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 4
+; XTENSA-NEXT: slli a8, a2, 24
+; XTENSA-NEXT: srai a8, a8, 24
+; XTENSA-NEXT: l32i a12, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: blt a4, a8, .LBB43_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB43_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB43_1
+; XTENSA-NEXT: .LBB43_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i8_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: slli a12, a3, 24
+; XTENSA-ATOMIC-NEXT: srai a12, a12, 24
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB43_2
+; XTENSA-ATOMIC-NEXT: .LBB43_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB43_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB43_6
+; XTENSA-ATOMIC-NEXT: .LBB43_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: slli a6, a7, 24
+; XTENSA-ATOMIC-NEXT: srai a5, a6, 24
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a12, a5, .LBB43_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB43_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB43_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB43_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB43_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB43_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB43_1
+; XTENSA-ATOMIC-NEXT: .LBB43_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_min_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i8_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l8ui a2, a6, 0
+; XTENSA-NEXT: slli a8, a3, 24
+; XTENSA-NEXT: srai a5, a8, 24
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a4, .LCPI44_0
+; XTENSA-NEXT: j .LBB44_2
+; XTENSA-NEXT: .LBB44_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB44_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l8ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB44_4
+; XTENSA-NEXT: .LBB44_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 0
+; XTENSA-NEXT: slli a8, a2, 24
+; XTENSA-NEXT: srai a8, a8, 24
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: blt a5, a8, .LBB44_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB44_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB44_1
+; XTENSA-NEXT: .LBB44_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i8_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: slli a12, a3, 24
+; XTENSA-ATOMIC-NEXT: srai a12, a12, 24
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB44_2
+; XTENSA-ATOMIC-NEXT: .LBB44_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB44_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB44_6
+; XTENSA-ATOMIC-NEXT: .LBB44_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: slli a6, a7, 24
+; XTENSA-ATOMIC-NEXT: srai a5, a6, 24
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a12, a5, .LBB44_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB44_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB44_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB44_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB44_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB44_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB44_1
+; XTENSA-ATOMIC-NEXT: .LBB44_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umax_i8_monotonic(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i8_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a8, a3, a3
+; XTENSA-NEXT: s32i a2, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: l8ui a2, a2, 0
+; XTENSA-NEXT: movi a5, 255
+; XTENSA-NEXT: and a4, a8, a5
+; XTENSA-NEXT: movi a7, 0
+; XTENSA-NEXT: l32r a6, .LCPI45_0
+; XTENSA-NEXT: j .LBB45_2
+; XTENSA-NEXT: .LBB45_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB45_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: l32i a10, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a6
+; XTENSA-NEXT: l8ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB45_4
+; XTENSA-NEXT: .LBB45_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 4
+; XTENSA-NEXT: and a8, a2, a5
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bgeu a4, a8, .LBB45_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB45_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB45_1
+; XTENSA-NEXT: .LBB45_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i8_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: and a12, a3, a9
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB45_2
+; XTENSA-ATOMIC-NEXT: .LBB45_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB45_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB45_6
+; XTENSA-ATOMIC-NEXT: .LBB45_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a12, a5, .LBB45_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB45_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB45_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB45_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB45_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB45_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB45_1
+; XTENSA-ATOMIC-NEXT: .LBB45_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
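The umax (and later umin) expansions differ from the signed ones only in how the byte is widened for the comparison: both the operand and the current byte are zero-extended with `and 255` and compared with BGEU, instead of the slli/srai 24 sign-extension and BGE/BLT used above. The surrounding CAS-loop and merge structure is otherwise identical.
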
+define i8 @atomicrmw_umax_i8_acquire(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i8_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a8, a3, a3
+; XTENSA-NEXT: s32i a2, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: l8ui a2, a2, 0
+; XTENSA-NEXT: movi a5, 255
+; XTENSA-NEXT: and a4, a8, a5
+; XTENSA-NEXT: movi a7, 2
+; XTENSA-NEXT: l32r a6, .LCPI46_0
+; XTENSA-NEXT: j .LBB46_2
+; XTENSA-NEXT: .LBB46_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB46_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: l32i a10, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a6
+; XTENSA-NEXT: l8ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB46_4
+; XTENSA-NEXT: .LBB46_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 4
+; XTENSA-NEXT: and a8, a2, a5
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bgeu a4, a8, .LBB46_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB46_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB46_1
+; XTENSA-NEXT: .LBB46_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i8_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: and a12, a3, a9
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB46_2
+; XTENSA-ATOMIC-NEXT: .LBB46_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB46_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB46_6
+; XTENSA-ATOMIC-NEXT: .LBB46_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a12, a5, .LBB46_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB46_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB46_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB46_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB46_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB46_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB46_1
+; XTENSA-ATOMIC-NEXT: .LBB46_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umax_i8_release(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i8_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: s32i a2, a1, 4 # 4-byte Folded Spill
+; XTENSA-NEXT: l8ui a2, a2, 0
+; XTENSA-NEXT: movi a4, 255
+; XTENSA-NEXT: or a5, a3, a3
+; XTENSA-NEXT: and a8, a3, a4
+; XTENSA-NEXT: s32i a8, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: movi a7, 3
+; XTENSA-NEXT: movi a6, 0
+; XTENSA-NEXT: l32r a3, .LCPI47_0
+; XTENSA-NEXT: j .LBB47_2
+; XTENSA-NEXT: .LBB47_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB47_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 8
+; XTENSA-NEXT: l32i a10, a1, 4 # 4-byte Folded Reload
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a3
+; XTENSA-NEXT: l8ui a2, a1, 8
+; XTENSA-NEXT: bnez a10, .LBB47_4
+; XTENSA-NEXT: .LBB47_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 8
+; XTENSA-NEXT: and a8, a2, a4
+; XTENSA-NEXT: or a12, a5, a5
+; XTENSA-NEXT: l32i a9, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: bgeu a9, a8, .LBB47_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB47_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB47_1
+; XTENSA-NEXT: .LBB47_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i8_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: and a12, a3, a9
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB47_2
+; XTENSA-ATOMIC-NEXT: .LBB47_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB47_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB47_6
+; XTENSA-ATOMIC-NEXT: .LBB47_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a12, a5, .LBB47_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB47_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB47_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB47_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB47_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB47_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB47_1
+; XTENSA-ATOMIC-NEXT: .LBB47_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umax_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i8_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: s32i a2, a1, 4 # 4-byte Folded Spill
+; XTENSA-NEXT: l8ui a2, a2, 0
+; XTENSA-NEXT: movi a4, 255
+; XTENSA-NEXT: or a5, a3, a3
+; XTENSA-NEXT: and a8, a3, a4
+; XTENSA-NEXT: s32i a8, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: movi a7, 4
+; XTENSA-NEXT: movi a6, 2
+; XTENSA-NEXT: l32r a3, .LCPI48_0
+; XTENSA-NEXT: j .LBB48_2
+; XTENSA-NEXT: .LBB48_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB48_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 8
+; XTENSA-NEXT: l32i a10, a1, 4 # 4-byte Folded Reload
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a3
+; XTENSA-NEXT: l8ui a2, a1, 8
+; XTENSA-NEXT: bnez a10, .LBB48_4
+; XTENSA-NEXT: .LBB48_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 8
+; XTENSA-NEXT: and a8, a2, a4
+; XTENSA-NEXT: or a12, a5, a5
+; XTENSA-NEXT: l32i a9, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: bgeu a9, a8, .LBB48_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB48_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB48_1
+; XTENSA-NEXT: .LBB48_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i8_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: and a12, a3, a9
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB48_2
+; XTENSA-ATOMIC-NEXT: .LBB48_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB48_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB48_6
+; XTENSA-ATOMIC-NEXT: .LBB48_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a12, a5, .LBB48_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB48_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB48_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB48_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB48_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB48_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB48_1
+; XTENSA-ATOMIC-NEXT: .LBB48_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umax_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i8_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a8, a3, a3
+; XTENSA-NEXT: s32i a2, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: l8ui a2, a2, 0
+; XTENSA-NEXT: movi a5, 255
+; XTENSA-NEXT: and a4, a8, a5
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a6, .LCPI49_0
+; XTENSA-NEXT: j .LBB49_2
+; XTENSA-NEXT: .LBB49_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB49_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: l32i a10, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a6
+; XTENSA-NEXT: l8ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB49_4
+; XTENSA-NEXT: .LBB49_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 4
+; XTENSA-NEXT: and a8, a2, a5
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bgeu a4, a8, .LBB49_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB49_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB49_1
+; XTENSA-NEXT: .LBB49_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i8_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: and a12, a3, a9
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB49_2
+; XTENSA-ATOMIC-NEXT: .LBB49_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB49_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB49_6
+; XTENSA-ATOMIC-NEXT: .LBB49_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a12, a5, .LBB49_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB49_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB49_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB49_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB49_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB49_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB49_1
+; XTENSA-ATOMIC-NEXT: .LBB49_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umin_i8_monotonic(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i8_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a8, a3, a3
+; XTENSA-NEXT: s32i a2, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: l8ui a2, a2, 0
+; XTENSA-NEXT: movi a5, 255
+; XTENSA-NEXT: and a4, a8, a5
+; XTENSA-NEXT: movi a7, 0
+; XTENSA-NEXT: l32r a6, .LCPI50_0
+; XTENSA-NEXT: j .LBB50_2
+; XTENSA-NEXT: .LBB50_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB50_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: l32i a10, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a6
+; XTENSA-NEXT: l8ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB50_4
+; XTENSA-NEXT: .LBB50_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 4
+; XTENSA-NEXT: and a8, a2, a5
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bltu a4, a8, .LBB50_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB50_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB50_1
+; XTENSA-NEXT: .LBB50_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i8_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: and a12, a3, a9
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB50_2
+; XTENSA-ATOMIC-NEXT: .LBB50_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB50_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB50_6
+; XTENSA-ATOMIC-NEXT: .LBB50_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a12, a5, .LBB50_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB50_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB50_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB50_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB50_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB50_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB50_1
+; XTENSA-ATOMIC-NEXT: .LBB50_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umin_i8_acquire(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i8_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a8, a3, a3
+; XTENSA-NEXT: s32i a2, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: l8ui a2, a2, 0
+; XTENSA-NEXT: movi a5, 255
+; XTENSA-NEXT: and a4, a8, a5
+; XTENSA-NEXT: movi a7, 2
+; XTENSA-NEXT: l32r a6, .LCPI51_0
+; XTENSA-NEXT: j .LBB51_2
+; XTENSA-NEXT: .LBB51_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB51_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: l32i a10, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a6
+; XTENSA-NEXT: l8ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB51_4
+; XTENSA-NEXT: .LBB51_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 4
+; XTENSA-NEXT: and a8, a2, a5
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bltu a4, a8, .LBB51_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB51_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB51_1
+; XTENSA-NEXT: .LBB51_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i8_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: and a12, a3, a9
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB51_2
+; XTENSA-ATOMIC-NEXT: .LBB51_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB51_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB51_6
+; XTENSA-ATOMIC-NEXT: .LBB51_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a12, a5, .LBB51_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB51_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB51_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB51_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB51_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB51_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB51_1
+; XTENSA-ATOMIC-NEXT: .LBB51_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umin_i8_release(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i8_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: s32i a2, a1, 4 # 4-byte Folded Spill
+; XTENSA-NEXT: l8ui a2, a2, 0
+; XTENSA-NEXT: movi a4, 255
+; XTENSA-NEXT: or a5, a3, a3
+; XTENSA-NEXT: and a8, a3, a4
+; XTENSA-NEXT: s32i a8, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: movi a7, 3
+; XTENSA-NEXT: movi a6, 0
+; XTENSA-NEXT: l32r a3, .LCPI52_0
+; XTENSA-NEXT: j .LBB52_2
+; XTENSA-NEXT: .LBB52_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB52_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 8
+; XTENSA-NEXT: l32i a10, a1, 4 # 4-byte Folded Reload
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a3
+; XTENSA-NEXT: l8ui a2, a1, 8
+; XTENSA-NEXT: bnez a10, .LBB52_4
+; XTENSA-NEXT: .LBB52_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 8
+; XTENSA-NEXT: and a8, a2, a4
+; XTENSA-NEXT: or a12, a5, a5
+; XTENSA-NEXT: l32i a9, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: bltu a9, a8, .LBB52_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB52_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB52_1
+; XTENSA-NEXT: .LBB52_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i8_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: and a12, a3, a9
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB52_2
+; XTENSA-ATOMIC-NEXT: .LBB52_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB52_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB52_6
+; XTENSA-ATOMIC-NEXT: .LBB52_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a12, a5, .LBB52_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB52_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB52_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB52_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB52_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB52_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB52_1
+; XTENSA-ATOMIC-NEXT: .LBB52_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umin_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i8_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: s32i a2, a1, 4 # 4-byte Folded Spill
+; XTENSA-NEXT: l8ui a2, a2, 0
+; XTENSA-NEXT: movi a4, 255
+; XTENSA-NEXT: or a5, a3, a3
+; XTENSA-NEXT: and a8, a3, a4
+; XTENSA-NEXT: s32i a8, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: movi a7, 4
+; XTENSA-NEXT: movi a6, 2
+; XTENSA-NEXT: l32r a3, .LCPI53_0
+; XTENSA-NEXT: j .LBB53_2
+; XTENSA-NEXT: .LBB53_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB53_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 8
+; XTENSA-NEXT: l32i a10, a1, 4 # 4-byte Folded Reload
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a3
+; XTENSA-NEXT: l8ui a2, a1, 8
+; XTENSA-NEXT: bnez a10, .LBB53_4
+; XTENSA-NEXT: .LBB53_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 8
+; XTENSA-NEXT: and a8, a2, a4
+; XTENSA-NEXT: or a12, a5, a5
+; XTENSA-NEXT: l32i a9, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: bltu a9, a8, .LBB53_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB53_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB53_1
+; XTENSA-NEXT: .LBB53_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i8_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: and a12, a3, a9
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB53_2
+; XTENSA-ATOMIC-NEXT: .LBB53_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB53_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB53_6
+; XTENSA-ATOMIC-NEXT: .LBB53_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a12, a5, .LBB53_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB53_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB53_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB53_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB53_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB53_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB53_1
+; XTENSA-ATOMIC-NEXT: .LBB53_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umin_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i8_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a8, a3, a3
+; XTENSA-NEXT: s32i a2, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: l8ui a2, a2, 0
+; XTENSA-NEXT: movi a5, 255
+; XTENSA-NEXT: and a4, a8, a5
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a6, .LCPI54_0
+; XTENSA-NEXT: j .LBB54_2
+; XTENSA-NEXT: .LBB54_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB54_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: l32i a10, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a6
+; XTENSA-NEXT: l8ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB54_4
+; XTENSA-NEXT: .LBB54_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s8i a2, a1, 4
+; XTENSA-NEXT: and a8, a2, a5
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bltu a4, a8, .LBB54_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB54_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB54_1
+; XTENSA-NEXT: .LBB54_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i8_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: and a12, a3, a9
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB54_2
+; XTENSA-ATOMIC-NEXT: .LBB54_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB54_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB54_6
+; XTENSA-ATOMIC-NEXT: .LBB54_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a7, a15
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: or a6, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a12, a5, .LBB54_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB54_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a7, a7
+; XTENSA-ATOMIC-NEXT: .LBB54_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB54_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a6, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a7, a7
+; XTENSA-ATOMIC-NEXT: and a6, a15, a10
+; XTENSA-ATOMIC-NEXT: or a7, a6, a7
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a11, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB54_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB54_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB54_1
+; XTENSA-ATOMIC-NEXT: .LBB54_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i16 @atomicrmw_xchg_i16_monotonic(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i16_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI55_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i16_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI55_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB55_2
+; XTENSA-ATOMIC-NEXT: .LBB55_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB55_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB55_4
+; XTENSA-ATOMIC-NEXT: .LBB55_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a15, a10
+; XTENSA-ATOMIC-NEXT: or a14, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a11, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a14, a15, .LBB55_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB55_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB55_1
+; XTENSA-ATOMIC-NEXT: .LBB55_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xchg_i16_acquire(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i16_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI56_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i16_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI56_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB56_2
+; XTENSA-ATOMIC-NEXT: .LBB56_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB56_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB56_4
+; XTENSA-ATOMIC-NEXT: .LBB56_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a15, a10
+; XTENSA-ATOMIC-NEXT: or a14, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a11, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a14, a15, .LBB56_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB56_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB56_1
+; XTENSA-ATOMIC-NEXT: .LBB56_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xchg_i16_release(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i16_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI57_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i16_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI57_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB57_2
+; XTENSA-ATOMIC-NEXT: .LBB57_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB57_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB57_4
+; XTENSA-ATOMIC-NEXT: .LBB57_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a15, a10
+; XTENSA-ATOMIC-NEXT: or a14, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a11, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a14, a15, .LBB57_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB57_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB57_1
+; XTENSA-ATOMIC-NEXT: .LBB57_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xchg_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i16_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI58_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i16_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI58_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB58_2
+; XTENSA-ATOMIC-NEXT: .LBB58_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB58_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB58_4
+; XTENSA-ATOMIC-NEXT: .LBB58_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a15, a10
+; XTENSA-ATOMIC-NEXT: or a14, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a11, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a14, a15, .LBB58_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB58_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB58_1
+; XTENSA-ATOMIC-NEXT: .LBB58_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xchg_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i16_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI59_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i16_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI59_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a10, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a11, -4
+; XTENSA-ATOMIC-NEXT: and a11, a2, a11
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB59_2
+; XTENSA-ATOMIC-NEXT: .LBB59_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB59_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB59_4
+; XTENSA-ATOMIC-NEXT: .LBB59_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a15, a10
+; XTENSA-ATOMIC-NEXT: or a14, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a11, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a14, a15, .LBB59_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB59_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB59_1
+; XTENSA-ATOMIC-NEXT: .LBB59_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_add_i16_monotonic(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i16_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI60_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i16_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI60_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB60_2
+; XTENSA-ATOMIC-NEXT: .LBB60_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB60_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB60_4
+; XTENSA-ATOMIC-NEXT: .LBB60_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: add a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB60_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB60_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB60_1
+; XTENSA-ATOMIC-NEXT: .LBB60_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_add_i16_acquire(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i16_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI61_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i16_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI61_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB61_2
+; XTENSA-ATOMIC-NEXT: .LBB61_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB61_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB61_4
+; XTENSA-ATOMIC-NEXT: .LBB61_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: add a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB61_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB61_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB61_1
+; XTENSA-ATOMIC-NEXT: .LBB61_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_add_i16_release(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i16_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI62_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i16_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI62_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB62_2
+; XTENSA-ATOMIC-NEXT: .LBB62_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB62_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB62_4
+; XTENSA-ATOMIC-NEXT: .LBB62_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: add a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB62_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB62_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB62_1
+; XTENSA-ATOMIC-NEXT: .LBB62_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_add_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i16_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI63_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i16_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI63_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB63_2
+; XTENSA-ATOMIC-NEXT: .LBB63_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB63_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB63_4
+; XTENSA-ATOMIC-NEXT: .LBB63_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: add a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB63_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB63_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB63_1
+; XTENSA-ATOMIC-NEXT: .LBB63_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_add_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i16_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI64_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i16_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI64_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB64_2
+; XTENSA-ATOMIC-NEXT: .LBB64_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB64_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB64_4
+; XTENSA-ATOMIC-NEXT: .LBB64_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: add a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB64_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB64_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB64_1
+; XTENSA-ATOMIC-NEXT: .LBB64_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_sub_i16_monotonic(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i16_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI65_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i16_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI65_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB65_2
+; XTENSA-ATOMIC-NEXT: .LBB65_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB65_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB65_4
+; XTENSA-ATOMIC-NEXT: .LBB65_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: sub a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB65_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB65_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB65_1
+; XTENSA-ATOMIC-NEXT: .LBB65_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_sub_i16_acquire(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i16_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI66_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i16_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI66_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB66_2
+; XTENSA-ATOMIC-NEXT: .LBB66_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB66_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB66_4
+; XTENSA-ATOMIC-NEXT: .LBB66_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: sub a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB66_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB66_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB66_1
+; XTENSA-ATOMIC-NEXT: .LBB66_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_sub_i16_release(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i16_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI67_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i16_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI67_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB67_2
+; XTENSA-ATOMIC-NEXT: .LBB67_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB67_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB67_4
+; XTENSA-ATOMIC-NEXT: .LBB67_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: sub a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB67_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB67_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB67_1
+; XTENSA-ATOMIC-NEXT: .LBB67_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_sub_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i16_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI68_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i16_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI68_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB68_2
+; XTENSA-ATOMIC-NEXT: .LBB68_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB68_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB68_4
+; XTENSA-ATOMIC-NEXT: .LBB68_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: sub a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB68_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB68_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB68_1
+; XTENSA-ATOMIC-NEXT: .LBB68_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_sub_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i16_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI69_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i16_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI69_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a11, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -4
+; XTENSA-ATOMIC-NEXT: and a12, a2, a12
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 1
+; XTENSA-ATOMIC-NEXT: j .LBB69_2
+; XTENSA-ATOMIC-NEXT: .LBB69_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB69_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB69_4
+; XTENSA-ATOMIC-NEXT: .LBB69_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a11
+; XTENSA-ATOMIC-NEXT: sub a6, a15, a9
+; XTENSA-ATOMIC-NEXT: and a6, a6, a10
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a12, 0
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB69_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB69_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a13, a13
+; XTENSA-ATOMIC-NEXT: j .LBB69_1
+; XTENSA-ATOMIC-NEXT: .LBB69_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_and_i16_monotonic(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i16_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI70_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i16_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI70_0
+; XTENSA-ATOMIC-NEXT: and a10, a3, a9
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a11
+; XTENSA-ATOMIC-NEXT: or a9, a10, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB70_2
+; XTENSA-ATOMIC-NEXT: .LBB70_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB70_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB70_4
+; XTENSA-ATOMIC-NEXT: .LBB70_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB70_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB70_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB70_1
+; XTENSA-ATOMIC-NEXT: .LBB70_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_and_i16_acquire(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i16_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI71_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i16_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI71_0
+; XTENSA-ATOMIC-NEXT: and a10, a3, a9
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a11
+; XTENSA-ATOMIC-NEXT: or a9, a10, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB71_2
+; XTENSA-ATOMIC-NEXT: .LBB71_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB71_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB71_4
+; XTENSA-ATOMIC-NEXT: .LBB71_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB71_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB71_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB71_1
+; XTENSA-ATOMIC-NEXT: .LBB71_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_and_i16_release(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i16_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI72_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i16_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI72_0
+; XTENSA-ATOMIC-NEXT: and a10, a3, a9
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a11
+; XTENSA-ATOMIC-NEXT: or a9, a10, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB72_2
+; XTENSA-ATOMIC-NEXT: .LBB72_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB72_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB72_4
+; XTENSA-ATOMIC-NEXT: .LBB72_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB72_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB72_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB72_1
+; XTENSA-ATOMIC-NEXT: .LBB72_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_and_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i16_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI73_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i16_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI73_0
+; XTENSA-ATOMIC-NEXT: and a10, a3, a9
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a11
+; XTENSA-ATOMIC-NEXT: or a9, a10, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB73_2
+; XTENSA-ATOMIC-NEXT: .LBB73_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB73_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB73_4
+; XTENSA-ATOMIC-NEXT: .LBB73_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB73_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB73_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB73_1
+; XTENSA-ATOMIC-NEXT: .LBB73_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_and_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i16_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI74_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i16_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI74_0
+; XTENSA-ATOMIC-NEXT: and a10, a3, a9
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a11
+; XTENSA-ATOMIC-NEXT: or a9, a10, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB74_2
+; XTENSA-ATOMIC-NEXT: .LBB74_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB74_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB74_4
+; XTENSA-ATOMIC-NEXT: .LBB74_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB74_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB74_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB74_1
+; XTENSA-ATOMIC-NEXT: .LBB74_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_nand_i16_monotonic(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_nand_i16_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI75_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_nand_i16_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI75_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a12, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a13, -4
+; XTENSA-ATOMIC-NEXT: and a13, a2, a13
+; XTENSA-ATOMIC-NEXT: l32i a7, a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 0
+; XTENSA-ATOMIC-NEXT: movi a15, 1
+; XTENSA-ATOMIC-NEXT: j .LBB75_2
+; XTENSA-ATOMIC-NEXT: .LBB75_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB75_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a6, a6
+; XTENSA-ATOMIC-NEXT: beqi a5, 1, .LBB75_4
+; XTENSA-ATOMIC-NEXT: .LBB75_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a6, a7, a12
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: xor a5, a5, a11
+; XTENSA-ATOMIC-NEXT: and a5, a5, a10
+; XTENSA-ATOMIC-NEXT: or a6, a6, a5
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a6, a13, 0
+; XTENSA-ATOMIC-NEXT: or a5, a15, a15
+; XTENSA-ATOMIC-NEXT: beq a6, a7, .LBB75_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB75_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a5, a14, a14
+; XTENSA-ATOMIC-NEXT: j .LBB75_1
+; XTENSA-ATOMIC-NEXT: .LBB75_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a6
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw nand ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_nand_i16_acquire(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_nand_i16_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI76_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_nand_i16_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI76_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a12, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a13, -4
+; XTENSA-ATOMIC-NEXT: and a13, a2, a13
+; XTENSA-ATOMIC-NEXT: l32i a7, a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 0
+; XTENSA-ATOMIC-NEXT: movi a15, 1
+; XTENSA-ATOMIC-NEXT: j .LBB76_2
+; XTENSA-ATOMIC-NEXT: .LBB76_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB76_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a6, a6
+; XTENSA-ATOMIC-NEXT: beqi a5, 1, .LBB76_4
+; XTENSA-ATOMIC-NEXT: .LBB76_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a6, a7, a12
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: xor a5, a5, a11
+; XTENSA-ATOMIC-NEXT: and a5, a5, a10
+; XTENSA-ATOMIC-NEXT: or a6, a6, a5
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a6, a13, 0
+; XTENSA-ATOMIC-NEXT: or a5, a15, a15
+; XTENSA-ATOMIC-NEXT: beq a6, a7, .LBB76_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB76_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a5, a14, a14
+; XTENSA-ATOMIC-NEXT: j .LBB76_1
+; XTENSA-ATOMIC-NEXT: .LBB76_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a6
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw nand ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_nand_i16_release(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_nand_i16_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI77_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_nand_i16_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI77_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a12, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a13, -4
+; XTENSA-ATOMIC-NEXT: and a13, a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a7, a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 0
+; XTENSA-ATOMIC-NEXT: movi a15, 1
+; XTENSA-ATOMIC-NEXT: j .LBB77_2
+; XTENSA-ATOMIC-NEXT: .LBB77_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB77_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a6, a6
+; XTENSA-ATOMIC-NEXT: beqi a5, 1, .LBB77_4
+; XTENSA-ATOMIC-NEXT: .LBB77_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a6, a7, a12
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: xor a5, a5, a11
+; XTENSA-ATOMIC-NEXT: and a5, a5, a10
+; XTENSA-ATOMIC-NEXT: or a6, a6, a5
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a6, a13, 0
+; XTENSA-ATOMIC-NEXT: or a5, a15, a15
+; XTENSA-ATOMIC-NEXT: beq a6, a7, .LBB77_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB77_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a5, a14, a14
+; XTENSA-ATOMIC-NEXT: j .LBB77_1
+; XTENSA-ATOMIC-NEXT: .LBB77_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a6
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw nand ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_nand_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_nand_i16_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI78_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_nand_i16_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI78_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a12, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a13, -4
+; XTENSA-ATOMIC-NEXT: and a13, a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a7, a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 0
+; XTENSA-ATOMIC-NEXT: movi a15, 1
+; XTENSA-ATOMIC-NEXT: j .LBB78_2
+; XTENSA-ATOMIC-NEXT: .LBB78_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB78_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a6, a6
+; XTENSA-ATOMIC-NEXT: beqi a5, 1, .LBB78_4
+; XTENSA-ATOMIC-NEXT: .LBB78_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a6, a7, a12
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: xor a5, a5, a11
+; XTENSA-ATOMIC-NEXT: and a5, a5, a10
+; XTENSA-ATOMIC-NEXT: or a6, a6, a5
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a6, a13, 0
+; XTENSA-ATOMIC-NEXT: or a5, a15, a15
+; XTENSA-ATOMIC-NEXT: beq a6, a7, .LBB78_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB78_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a5, a14, a14
+; XTENSA-ATOMIC-NEXT: j .LBB78_1
+; XTENSA-ATOMIC-NEXT: .LBB78_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a6
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw nand ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_nand_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_nand_i16_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI79_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_nand_i16_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a10, .LCPI79_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a10
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a11, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a11
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a10
+; XTENSA-ATOMIC-NEXT: movi a11, -1
+; XTENSA-ATOMIC-NEXT: xor a12, a10, a11
+; XTENSA-ATOMIC-NEXT: movi a13, -4
+; XTENSA-ATOMIC-NEXT: and a13, a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a7, a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 0
+; XTENSA-ATOMIC-NEXT: movi a15, 1
+; XTENSA-ATOMIC-NEXT: j .LBB79_2
+; XTENSA-ATOMIC-NEXT: .LBB79_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB79_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a6, a6
+; XTENSA-ATOMIC-NEXT: beqi a5, 1, .LBB79_4
+; XTENSA-ATOMIC-NEXT: .LBB79_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a6, a7, a12
+; XTENSA-ATOMIC-NEXT: and a5, a7, a9
+; XTENSA-ATOMIC-NEXT: xor a5, a5, a11
+; XTENSA-ATOMIC-NEXT: and a5, a5, a10
+; XTENSA-ATOMIC-NEXT: or a6, a6, a5
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a6, a13, 0
+; XTENSA-ATOMIC-NEXT: or a5, a15, a15
+; XTENSA-ATOMIC-NEXT: beq a6, a7, .LBB79_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB79_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a5, a14, a14
+; XTENSA-ATOMIC-NEXT: j .LBB79_1
+; XTENSA-ATOMIC-NEXT: .LBB79_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a6
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw nand ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
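The nand loops above (BB75-BB79) are the one bitwise case that cannot be
folded into a single precomputed operand: the and-lowering (BB70-BB74)
hoists `shifted_b | ~mask` out of the loop and the body is one `and`,
but for nand the complement of `old & b` must be clipped to the halfword
inside the loop before being merged back. In the notation of the sketch
after the `sub` tests, the loop body computes (hypothetical helper, same
setup assumed):

#include <stdint.h>

/* Mirrors `and a5, a7, a9; xor a5, a5, a11; and a5, a5, a10;
   or a6, a6, a5` in the BB75-BB79 loop bodies above. */
static uint32_t nand_word(uint32_t old, uint32_t op, uint32_t mask) {
  return (old & ~mask) | (~(old & op) & mask);
}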
+define i16 @atomicrmw_or_i16_monotonic(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i16_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI80_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i16_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a8, .LCPI80_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB80_2
+; XTENSA-ATOMIC-NEXT: .LBB80_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB80_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB80_4
+; XTENSA-ATOMIC-NEXT: .LBB80_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB80_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB80_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB80_1
+; XTENSA-ATOMIC-NEXT: .LBB80_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_or_i16_acquire(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i16_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI81_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i16_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a8, .LCPI81_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB81_2
+; XTENSA-ATOMIC-NEXT: .LBB81_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB81_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB81_4
+; XTENSA-ATOMIC-NEXT: .LBB81_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB81_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB81_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB81_1
+; XTENSA-ATOMIC-NEXT: .LBB81_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_or_i16_release(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i16_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI82_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i16_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a8, .LCPI82_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB82_2
+; XTENSA-ATOMIC-NEXT: .LBB82_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB82_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB82_4
+; XTENSA-ATOMIC-NEXT: .LBB82_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB82_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB82_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB82_1
+; XTENSA-ATOMIC-NEXT: .LBB82_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_or_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i16_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI83_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i16_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a8, .LCPI83_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB83_2
+; XTENSA-ATOMIC-NEXT: .LBB83_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB83_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB83_4
+; XTENSA-ATOMIC-NEXT: .LBB83_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB83_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB83_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB83_1
+; XTENSA-ATOMIC-NEXT: .LBB83_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_or_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i16_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI84_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i16_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a8, .LCPI84_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB84_2
+; XTENSA-ATOMIC-NEXT: .LBB84_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB84_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB84_4
+; XTENSA-ATOMIC-NEXT: .LBB84_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB84_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB84_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB84_1
+; XTENSA-ATOMIC-NEXT: .LBB84_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xor_i16_monotonic(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i16_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI85_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i16_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a8, .LCPI85_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB85_2
+; XTENSA-ATOMIC-NEXT: .LBB85_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB85_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB85_4
+; XTENSA-ATOMIC-NEXT: .LBB85_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB85_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB85_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB85_1
+; XTENSA-ATOMIC-NEXT: .LBB85_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xor_i16_acquire(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i16_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI86_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i16_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a8, .LCPI86_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB86_2
+; XTENSA-ATOMIC-NEXT: .LBB86_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB86_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB86_4
+; XTENSA-ATOMIC-NEXT: .LBB86_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB86_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB86_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB86_1
+; XTENSA-ATOMIC-NEXT: .LBB86_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xor_i16_release(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i16_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI87_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i16_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a8, .LCPI87_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB87_2
+; XTENSA-ATOMIC-NEXT: .LBB87_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB87_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB87_4
+; XTENSA-ATOMIC-NEXT: .LBB87_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB87_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB87_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB87_1
+; XTENSA-ATOMIC-NEXT: .LBB87_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xor_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i16_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI88_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i16_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a8, .LCPI88_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB88_2
+; XTENSA-ATOMIC-NEXT: .LBB88_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB88_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB88_4
+; XTENSA-ATOMIC-NEXT: .LBB88_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB88_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB88_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB88_1
+; XTENSA-ATOMIC-NEXT: .LBB88_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xor_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i16_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI89_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i16_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32r a8, .LCPI89_0
+; XTENSA-ATOMIC-NEXT: and a9, a3, a8
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a10, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a10
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB89_2
+; XTENSA-ATOMIC-NEXT: .LBB89_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB89_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a13, a13
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB89_4
+; XTENSA-ATOMIC-NEXT: .LBB89_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a13, a14, a9
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a13, a14, .LBB89_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB89_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB89_1
+; XTENSA-ATOMIC-NEXT: .LBB89_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
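By contrast, the or/xor loops just above (BB80-BB89) need no keep-mask
at all, since both operations are the identity on every bit where the
shifted operand is zero -- hence the shorter setup (no `movi aN, -1` /
`xor` mask inversion) in those checks. As a one-line reading of the loop
bodies (hypothetical helpers):

#include <stdint.h>

static uint32_t or_word (uint32_t old, uint32_t op) { return old | op; } /* or  a13, a14, a9 */
static uint32_t xor_word(uint32_t old, uint32_t op) { return old ^ op; } /* xor a13, a14, a9 */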
+define i16 @atomicrmw_max_i16_monotonic(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i16_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l16ui a2, a6, 0
+; XTENSA-NEXT: slli a8, a3, 16
+; XTENSA-NEXT: srai a5, a8, 16
+; XTENSA-NEXT: movi a7, 0
+; XTENSA-NEXT: l32r a4, .LCPI90_0
+; XTENSA-NEXT: j .LBB90_2
+; XTENSA-NEXT: .LBB90_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB90_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB90_4
+; XTENSA-NEXT: .LBB90_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: slli a8, a2, 16
+; XTENSA-NEXT: srai a8, a8, 16
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bge a5, a8, .LBB90_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB90_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB90_1
+; XTENSA-NEXT: .LBB90_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i16_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI90_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: slli a11, a3, 16
+; XTENSA-ATOMIC-NEXT: srai a11, a11, 16
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB90_2
+; XTENSA-ATOMIC-NEXT: .LBB90_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB90_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a15, a15
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB90_6
+; XTENSA-ATOMIC-NEXT: .LBB90_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a14
+; XTENSA-ATOMIC-NEXT: slli a7, a15, 16
+; XTENSA-ATOMIC-NEXT: srai a6, a7, 16
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a11, a6, .LBB90_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB90_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB90_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB90_2 Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a15, .LCPI90_0
+; XTENSA-ATOMIC-NEXT: and a15, a7, a15
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a15, a15
+; XTENSA-ATOMIC-NEXT: and a7, a14, a9
+; XTENSA-ATOMIC-NEXT: or a15, a7, a15
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a15, a10, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a15, a14, .LBB90_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB90_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB90_1
+; XTENSA-ATOMIC-NEXT: .LBB90_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a15
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_max_i16_acquire(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i16_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l16ui a2, a6, 0
+; XTENSA-NEXT: slli a8, a3, 16
+; XTENSA-NEXT: srai a5, a8, 16
+; XTENSA-NEXT: movi a7, 2
+; XTENSA-NEXT: l32r a4, .LCPI91_0
+; XTENSA-NEXT: j .LBB91_2
+; XTENSA-NEXT: .LBB91_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB91_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB91_4
+; XTENSA-NEXT: .LBB91_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: slli a8, a2, 16
+; XTENSA-NEXT: srai a8, a8, 16
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bge a5, a8, .LBB91_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB91_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB91_1
+; XTENSA-NEXT: .LBB91_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i16_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI91_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: slli a11, a3, 16
+; XTENSA-ATOMIC-NEXT: srai a11, a11, 16
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB91_2
+; XTENSA-ATOMIC-NEXT: .LBB91_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB91_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a15, a15
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB91_6
+; XTENSA-ATOMIC-NEXT: .LBB91_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a14
+; XTENSA-ATOMIC-NEXT: slli a7, a15, 16
+; XTENSA-ATOMIC-NEXT: srai a6, a7, 16
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a11, a6, .LBB91_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB91_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB91_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB91_2 Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a15, .LCPI91_0
+; XTENSA-ATOMIC-NEXT: and a15, a7, a15
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a15, a15
+; XTENSA-ATOMIC-NEXT: and a7, a14, a9
+; XTENSA-ATOMIC-NEXT: or a15, a7, a15
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a15, a10, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a15, a14, .LBB91_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB91_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB91_1
+; XTENSA-ATOMIC-NEXT: .LBB91_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a15
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_max_i16_release(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i16_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a9, a2, a2
+; XTENSA-NEXT: l16ui a2, a9, 0
+; XTENSA-NEXT: s32i a3, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: slli a8, a3, 16
+; XTENSA-NEXT: or a3, a9, a9
+; XTENSA-NEXT: srai a4, a8, 16
+; XTENSA-NEXT: movi a7, 3
+; XTENSA-NEXT: movi a6, 0
+; XTENSA-NEXT: l32r a5, .LCPI92_0
+; XTENSA-NEXT: j .LBB92_2
+; XTENSA-NEXT: .LBB92_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB92_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: or a10, a3, a3
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l16ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB92_4
+; XTENSA-NEXT: .LBB92_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s16i a2, a1, 4
+; XTENSA-NEXT: slli a8, a2, 16
+; XTENSA-NEXT: srai a8, a8, 16
+; XTENSA-NEXT: l32i a12, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: bge a4, a8, .LBB92_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB92_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB92_1
+; XTENSA-NEXT: .LBB92_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i16_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI92_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: slli a11, a3, 16
+; XTENSA-ATOMIC-NEXT: srai a11, a11, 16
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB92_2
+; XTENSA-ATOMIC-NEXT: .LBB92_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB92_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a15, a15
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB92_6
+; XTENSA-ATOMIC-NEXT: .LBB92_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a14
+; XTENSA-ATOMIC-NEXT: slli a7, a15, 16
+; XTENSA-ATOMIC-NEXT: srai a6, a7, 16
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a11, a6, .LBB92_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB92_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB92_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB92_2 Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a15, .LCPI92_0
+; XTENSA-ATOMIC-NEXT: and a15, a7, a15
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a15, a15
+; XTENSA-ATOMIC-NEXT: and a7, a14, a9
+; XTENSA-ATOMIC-NEXT: or a15, a7, a15
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a15, a10, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a15, a14, .LBB92_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB92_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB92_1
+; XTENSA-ATOMIC-NEXT: .LBB92_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a15
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_max_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i16_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a9, a2, a2
+; XTENSA-NEXT: l16ui a2, a9, 0
+; XTENSA-NEXT: s32i a3, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: slli a8, a3, 16
+; XTENSA-NEXT: or a3, a9, a9
+; XTENSA-NEXT: srai a4, a8, 16
+; XTENSA-NEXT: movi a7, 4
+; XTENSA-NEXT: movi a6, 2
+; XTENSA-NEXT: l32r a5, .LCPI93_0
+; XTENSA-NEXT: j .LBB93_2
+; XTENSA-NEXT: .LBB93_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB93_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: or a10, a3, a3
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l16ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB93_4
+; XTENSA-NEXT: .LBB93_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s16i a2, a1, 4
+; XTENSA-NEXT: slli a8, a2, 16
+; XTENSA-NEXT: srai a8, a8, 16
+; XTENSA-NEXT: l32i a12, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: bge a4, a8, .LBB93_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB93_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB93_1
+; XTENSA-NEXT: .LBB93_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i16_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI93_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: slli a11, a3, 16
+; XTENSA-ATOMIC-NEXT: srai a11, a11, 16
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB93_2
+; XTENSA-ATOMIC-NEXT: .LBB93_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB93_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a15, a15
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB93_6
+; XTENSA-ATOMIC-NEXT: .LBB93_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a14
+; XTENSA-ATOMIC-NEXT: slli a7, a15, 16
+; XTENSA-ATOMIC-NEXT: srai a6, a7, 16
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a11, a6, .LBB93_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB93_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB93_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB93_2 Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a15, .LCPI93_0
+; XTENSA-ATOMIC-NEXT: and a15, a7, a15
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a15, a15
+; XTENSA-ATOMIC-NEXT: and a7, a14, a9
+; XTENSA-ATOMIC-NEXT: or a15, a7, a15
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a15, a10, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a15, a14, .LBB93_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB93_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB93_1
+; XTENSA-ATOMIC-NEXT: .LBB93_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a15
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_max_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i16_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l16ui a2, a6, 0
+; XTENSA-NEXT: slli a8, a3, 16
+; XTENSA-NEXT: srai a5, a8, 16
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a4, .LCPI94_0
+; XTENSA-NEXT: j .LBB94_2
+; XTENSA-NEXT: .LBB94_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB94_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB94_4
+; XTENSA-NEXT: .LBB94_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: slli a8, a2, 16
+; XTENSA-NEXT: srai a8, a8, 16
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bge a5, a8, .LBB94_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB94_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB94_1
+; XTENSA-NEXT: .LBB94_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i16_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI94_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: slli a11, a3, 16
+; XTENSA-ATOMIC-NEXT: srai a11, a11, 16
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB94_2
+; XTENSA-ATOMIC-NEXT: .LBB94_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB94_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a15, a15
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB94_6
+; XTENSA-ATOMIC-NEXT: .LBB94_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a14
+; XTENSA-ATOMIC-NEXT: slli a7, a15, 16
+; XTENSA-ATOMIC-NEXT: srai a6, a7, 16
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a11, a6, .LBB94_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB94_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB94_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB94_2 Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a15, .LCPI94_0
+; XTENSA-ATOMIC-NEXT: and a15, a7, a15
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a15, a15
+; XTENSA-ATOMIC-NEXT: and a7, a14, a9
+; XTENSA-ATOMIC-NEXT: or a15, a7, a15
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a15, a10, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a15, a14, .LBB94_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB94_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB94_1
+; XTENSA-ATOMIC-NEXT: .LBB94_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a15
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
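+; The i16 min tests below exercise the same two lowerings as max above: the
+; plain XTENSA configuration expands atomicrmw to a cmpxchg libcall loop
+; (presumably __atomic_compare_exchange_2, loaded from the constant pool),
+; while XTENSA-ATOMIC inlines an s32c1i compare-and-swap loop on the
+; containing 32-bit word. A sketch of the subword merge used by the inlined
+; path, assuming the .LCPIn_0 pool entries hold the 0xffff halfword mask:
+;   shift = (ptr * 8) & 24             ; bit offset of the halfword in its word
+;   mask  = 0xffff << shift
+;   word  = *(ptr & ~3)
+;   word' = (word & ~mask) | ((new & 0xffff) << shift)
+; with wsr scompare1 / s32c1i retrying until the containing word is unchanged.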
+define i16 @atomicrmw_min_i16_monotonic(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i16_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l16ui a2, a6, 0
+; XTENSA-NEXT: slli a8, a3, 16
+; XTENSA-NEXT: srai a5, a8, 16
+; XTENSA-NEXT: movi a7, 0
+; XTENSA-NEXT: l32r a4, .LCPI95_0
+; XTENSA-NEXT: j .LBB95_2
+; XTENSA-NEXT: .LBB95_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB95_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB95_4
+; XTENSA-NEXT: .LBB95_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: slli a8, a2, 16
+; XTENSA-NEXT: srai a8, a8, 16
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: blt a5, a8, .LBB95_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB95_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB95_1
+; XTENSA-NEXT: .LBB95_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i16_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI95_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: slli a11, a3, 16
+; XTENSA-ATOMIC-NEXT: srai a11, a11, 16
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB95_2
+; XTENSA-ATOMIC-NEXT: .LBB95_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB95_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a15, a15
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB95_6
+; XTENSA-ATOMIC-NEXT: .LBB95_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a14
+; XTENSA-ATOMIC-NEXT: slli a7, a15, 16
+; XTENSA-ATOMIC-NEXT: srai a6, a7, 16
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a11, a6, .LBB95_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB95_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB95_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB95_2 Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a15, .LCPI95_0
+; XTENSA-ATOMIC-NEXT: and a15, a7, a15
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a15, a15
+; XTENSA-ATOMIC-NEXT: and a7, a14, a9
+; XTENSA-ATOMIC-NEXT: or a15, a7, a15
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a15, a10, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a15, a14, .LBB95_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB95_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB95_1
+; XTENSA-ATOMIC-NEXT: .LBB95_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a15
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_min_i16_acquire(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i16_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l16ui a2, a6, 0
+; XTENSA-NEXT: slli a8, a3, 16
+; XTENSA-NEXT: srai a5, a8, 16
+; XTENSA-NEXT: movi a7, 2
+; XTENSA-NEXT: l32r a4, .LCPI96_0
+; XTENSA-NEXT: j .LBB96_2
+; XTENSA-NEXT: .LBB96_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB96_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB96_4
+; XTENSA-NEXT: .LBB96_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: slli a8, a2, 16
+; XTENSA-NEXT: srai a8, a8, 16
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: blt a5, a8, .LBB96_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB96_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB96_1
+; XTENSA-NEXT: .LBB96_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i16_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI96_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: slli a11, a3, 16
+; XTENSA-ATOMIC-NEXT: srai a11, a11, 16
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB96_2
+; XTENSA-ATOMIC-NEXT: .LBB96_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB96_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a15, a15
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB96_6
+; XTENSA-ATOMIC-NEXT: .LBB96_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a14
+; XTENSA-ATOMIC-NEXT: slli a7, a15, 16
+; XTENSA-ATOMIC-NEXT: srai a6, a7, 16
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a11, a6, .LBB96_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB96_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB96_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB96_2 Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a15, .LCPI96_0
+; XTENSA-ATOMIC-NEXT: and a15, a7, a15
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a15, a15
+; XTENSA-ATOMIC-NEXT: and a7, a14, a9
+; XTENSA-ATOMIC-NEXT: or a15, a7, a15
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a15, a10, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a15, a14, .LBB96_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB96_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB96_1
+; XTENSA-ATOMIC-NEXT: .LBB96_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a15
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_min_i16_release(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i16_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a9, a2, a2
+; XTENSA-NEXT: l16ui a2, a9, 0
+; XTENSA-NEXT: s32i a3, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: slli a8, a3, 16
+; XTENSA-NEXT: or a3, a9, a9
+; XTENSA-NEXT: srai a4, a8, 16
+; XTENSA-NEXT: movi a7, 3
+; XTENSA-NEXT: movi a6, 0
+; XTENSA-NEXT: l32r a5, .LCPI97_0
+; XTENSA-NEXT: j .LBB97_2
+; XTENSA-NEXT: .LBB97_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB97_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: or a10, a3, a3
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l16ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB97_4
+; XTENSA-NEXT: .LBB97_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s16i a2, a1, 4
+; XTENSA-NEXT: slli a8, a2, 16
+; XTENSA-NEXT: srai a8, a8, 16
+; XTENSA-NEXT: l32i a12, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: blt a4, a8, .LBB97_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB97_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB97_1
+; XTENSA-NEXT: .LBB97_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i16_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI97_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: slli a11, a3, 16
+; XTENSA-ATOMIC-NEXT: srai a11, a11, 16
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB97_2
+; XTENSA-ATOMIC-NEXT: .LBB97_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB97_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a15, a15
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB97_6
+; XTENSA-ATOMIC-NEXT: .LBB97_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a14
+; XTENSA-ATOMIC-NEXT: slli a7, a15, 16
+; XTENSA-ATOMIC-NEXT: srai a6, a7, 16
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a11, a6, .LBB97_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB97_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB97_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB97_2 Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a15, .LCPI97_0
+; XTENSA-ATOMIC-NEXT: and a15, a7, a15
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a15, a15
+; XTENSA-ATOMIC-NEXT: and a7, a14, a9
+; XTENSA-ATOMIC-NEXT: or a15, a7, a15
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a15, a10, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a15, a14, .LBB97_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB97_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB97_1
+; XTENSA-ATOMIC-NEXT: .LBB97_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a15
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_min_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i16_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a9, a2, a2
+; XTENSA-NEXT: l16ui a2, a9, 0
+; XTENSA-NEXT: s32i a3, a1, 0 # 4-byte Folded Spill
+; XTENSA-NEXT: slli a8, a3, 16
+; XTENSA-NEXT: or a3, a9, a9
+; XTENSA-NEXT: srai a4, a8, 16
+; XTENSA-NEXT: movi a7, 4
+; XTENSA-NEXT: movi a6, 2
+; XTENSA-NEXT: l32r a5, .LCPI98_0
+; XTENSA-NEXT: j .LBB98_2
+; XTENSA-NEXT: .LBB98_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB98_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 4
+; XTENSA-NEXT: or a10, a3, a3
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l16ui a2, a1, 4
+; XTENSA-NEXT: bnez a10, .LBB98_4
+; XTENSA-NEXT: .LBB98_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s16i a2, a1, 4
+; XTENSA-NEXT: slli a8, a2, 16
+; XTENSA-NEXT: srai a8, a8, 16
+; XTENSA-NEXT: l32i a12, a1, 0 # 4-byte Folded Reload
+; XTENSA-NEXT: blt a4, a8, .LBB98_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB98_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB98_1
+; XTENSA-NEXT: .LBB98_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i16_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI98_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: slli a11, a3, 16
+; XTENSA-ATOMIC-NEXT: srai a11, a11, 16
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB98_2
+; XTENSA-ATOMIC-NEXT: .LBB98_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB98_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a15, a15
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB98_6
+; XTENSA-ATOMIC-NEXT: .LBB98_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a14
+; XTENSA-ATOMIC-NEXT: slli a7, a15, 16
+; XTENSA-ATOMIC-NEXT: srai a6, a7, 16
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a11, a6, .LBB98_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB98_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB98_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB98_2 Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a15, .LCPI98_0
+; XTENSA-ATOMIC-NEXT: and a15, a7, a15
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a15, a15
+; XTENSA-ATOMIC-NEXT: and a7, a14, a9
+; XTENSA-ATOMIC-NEXT: or a15, a7, a15
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a15, a10, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a15, a14, .LBB98_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB98_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB98_1
+; XTENSA-ATOMIC-NEXT: .LBB98_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a15
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_min_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i16_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l16ui a2, a6, 0
+; XTENSA-NEXT: slli a8, a3, 16
+; XTENSA-NEXT: srai a5, a8, 16
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a4, .LCPI99_0
+; XTENSA-NEXT: j .LBB99_2
+; XTENSA-NEXT: .LBB99_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB99_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB99_4
+; XTENSA-NEXT: .LBB99_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: slli a8, a2, 16
+; XTENSA-NEXT: srai a8, a8, 16
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: blt a5, a8, .LBB99_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB99_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB99_1
+; XTENSA-NEXT: .LBB99_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i16_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI99_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: slli a11, a3, 16
+; XTENSA-ATOMIC-NEXT: srai a11, a11, 16
+; XTENSA-ATOMIC-NEXT: movi a12, 0
+; XTENSA-ATOMIC-NEXT: movi a13, 1
+; XTENSA-ATOMIC-NEXT: j .LBB99_2
+; XTENSA-ATOMIC-NEXT: .LBB99_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB99_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a15, a15
+; XTENSA-ATOMIC-NEXT: beqi a7, 1, .LBB99_6
+; XTENSA-ATOMIC-NEXT: .LBB99_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a14
+; XTENSA-ATOMIC-NEXT: slli a7, a15, 16
+; XTENSA-ATOMIC-NEXT: srai a6, a7, 16
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a11, a6, .LBB99_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB99_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB99_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB99_2 Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a15, .LCPI99_0
+; XTENSA-ATOMIC-NEXT: and a15, a7, a15
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a15, a15
+; XTENSA-ATOMIC-NEXT: and a7, a14, a9
+; XTENSA-ATOMIC-NEXT: or a15, a7, a15
+; XTENSA-ATOMIC-NEXT: wsr a14, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a15, a10, 0
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: beq a15, a14, .LBB99_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB99_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB99_1
+; XTENSA-ATOMIC-NEXT: .LBB99_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a15
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
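+; atomicrmw umax i16: same structure as the signed variants, but both operands
+; are zero-extended by masking with the 0xffff pool constant and compared with
+; bgeu, instead of the slli/srai sign extension and bge used by max.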
+define i16 @atomicrmw_umax_i16_monotonic(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i16_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l16ui a2, a6, 0
+; XTENSA-NEXT: movi a7, 0
+; XTENSA-NEXT: l32r a5, .LCPI100_1
+; XTENSA-NEXT: j .LBB100_2
+; XTENSA-NEXT: .LBB100_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB100_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB100_4
+; XTENSA-NEXT: .LBB100_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: l32r a8, .LCPI100_0
+; XTENSA-NEXT: and a9, a3, a8
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: and a8, a2, a8
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bgeu a9, a8, .LBB100_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB100_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB100_1
+; XTENSA-NEXT: .LBB100_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i16_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI100_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB100_2
+; XTENSA-ATOMIC-NEXT: .LBB100_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB100_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB100_6
+; XTENSA-ATOMIC-NEXT: .LBB100_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a14, .LCPI100_0
+; XTENSA-ATOMIC-NEXT: and a6, a3, a14
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a13
+; XTENSA-ATOMIC-NEXT: and a5, a15, a14
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a6, a5, .LBB100_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB100_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB100_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB100_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a7, a14
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a14, a14
+; XTENSA-ATOMIC-NEXT: and a15, a13, a9
+; XTENSA-ATOMIC-NEXT: or a14, a15, a14
+; XTENSA-ATOMIC-NEXT: wsr a13, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a14, a13, .LBB100_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB100_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB100_1
+; XTENSA-ATOMIC-NEXT: .LBB100_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umax_i16_acquire(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i16_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l16ui a2, a6, 0
+; XTENSA-NEXT: movi a7, 2
+; XTENSA-NEXT: l32r a5, .LCPI101_1
+; XTENSA-NEXT: j .LBB101_2
+; XTENSA-NEXT: .LBB101_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB101_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB101_4
+; XTENSA-NEXT: .LBB101_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: l32r a8, .LCPI101_0
+; XTENSA-NEXT: and a9, a3, a8
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: and a8, a2, a8
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bgeu a9, a8, .LBB101_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB101_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB101_1
+; XTENSA-NEXT: .LBB101_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i16_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI101_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB101_2
+; XTENSA-ATOMIC-NEXT: .LBB101_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB101_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB101_6
+; XTENSA-ATOMIC-NEXT: .LBB101_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a14, .LCPI101_0
+; XTENSA-ATOMIC-NEXT: and a6, a3, a14
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a13
+; XTENSA-ATOMIC-NEXT: and a5, a15, a14
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a6, a5, .LBB101_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB101_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB101_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB101_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a7, a14
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a14, a14
+; XTENSA-ATOMIC-NEXT: and a15, a13, a9
+; XTENSA-ATOMIC-NEXT: or a14, a15, a14
+; XTENSA-ATOMIC-NEXT: wsr a13, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a14, a13, .LBB101_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB101_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB101_1
+; XTENSA-ATOMIC-NEXT: .LBB101_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umax_i16_release(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i16_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a5, a2, a2
+; XTENSA-NEXT: l16ui a2, a5, 0
+; XTENSA-NEXT: movi a7, 3
+; XTENSA-NEXT: movi a6, 0
+; XTENSA-NEXT: l32r a4, .LCPI102_1
+; XTENSA-NEXT: j .LBB102_2
+; XTENSA-NEXT: .LBB102_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB102_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a5, a5
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB102_4
+; XTENSA-NEXT: .LBB102_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: l32r a8, .LCPI102_0
+; XTENSA-NEXT: and a9, a3, a8
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: and a8, a2, a8
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bgeu a9, a8, .LBB102_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB102_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB102_1
+; XTENSA-NEXT: .LBB102_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i16_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI102_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB102_2
+; XTENSA-ATOMIC-NEXT: .LBB102_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB102_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB102_6
+; XTENSA-ATOMIC-NEXT: .LBB102_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a14, .LCPI102_0
+; XTENSA-ATOMIC-NEXT: and a6, a3, a14
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a13
+; XTENSA-ATOMIC-NEXT: and a5, a15, a14
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a6, a5, .LBB102_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB102_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB102_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB102_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a7, a14
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a14, a14
+; XTENSA-ATOMIC-NEXT: and a15, a13, a9
+; XTENSA-ATOMIC-NEXT: or a14, a15, a14
+; XTENSA-ATOMIC-NEXT: wsr a13, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a14, a13, .LBB102_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB102_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB102_1
+; XTENSA-ATOMIC-NEXT: .LBB102_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umax_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i16_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a5, a2, a2
+; XTENSA-NEXT: l16ui a2, a5, 0
+; XTENSA-NEXT: movi a7, 4
+; XTENSA-NEXT: movi a6, 2
+; XTENSA-NEXT: l32r a4, .LCPI103_1
+; XTENSA-NEXT: j .LBB103_2
+; XTENSA-NEXT: .LBB103_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB103_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a5, a5
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB103_4
+; XTENSA-NEXT: .LBB103_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: l32r a8, .LCPI103_0
+; XTENSA-NEXT: and a9, a3, a8
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: and a8, a2, a8
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bgeu a9, a8, .LBB103_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB103_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB103_1
+; XTENSA-NEXT: .LBB103_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i16_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI103_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB103_2
+; XTENSA-ATOMIC-NEXT: .LBB103_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB103_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB103_6
+; XTENSA-ATOMIC-NEXT: .LBB103_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a14, .LCPI103_0
+; XTENSA-ATOMIC-NEXT: and a6, a3, a14
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a13
+; XTENSA-ATOMIC-NEXT: and a5, a15, a14
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a6, a5, .LBB103_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB103_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB103_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB103_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a7, a14
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a14, a14
+; XTENSA-ATOMIC-NEXT: and a15, a13, a9
+; XTENSA-ATOMIC-NEXT: or a14, a15, a14
+; XTENSA-ATOMIC-NEXT: wsr a13, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a14, a13, .LBB103_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB103_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB103_1
+; XTENSA-ATOMIC-NEXT: .LBB103_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umax_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i16_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l16ui a2, a6, 0
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a5, .LCPI104_1
+; XTENSA-NEXT: j .LBB104_2
+; XTENSA-NEXT: .LBB104_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB104_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB104_4
+; XTENSA-NEXT: .LBB104_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: l32r a8, .LCPI104_0
+; XTENSA-NEXT: and a9, a3, a8
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: and a8, a2, a8
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bgeu a9, a8, .LBB104_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB104_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB104_1
+; XTENSA-NEXT: .LBB104_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i16_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI104_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB104_2
+; XTENSA-ATOMIC-NEXT: .LBB104_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB104_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB104_6
+; XTENSA-ATOMIC-NEXT: .LBB104_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a14, .LCPI104_0
+; XTENSA-ATOMIC-NEXT: and a6, a3, a14
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a13
+; XTENSA-ATOMIC-NEXT: and a5, a15, a14
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a6, a5, .LBB104_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB104_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB104_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB104_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a7, a14
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a14, a14
+; XTENSA-ATOMIC-NEXT: and a15, a13, a9
+; XTENSA-ATOMIC-NEXT: or a14, a15, a14
+; XTENSA-ATOMIC-NEXT: wsr a13, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a14, a13, .LBB104_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB104_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB104_1
+; XTENSA-ATOMIC-NEXT: .LBB104_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
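+; atomicrmw umin i16: identical to umax except the branch sense is inverted
+; (bltu rather than bgeu) when choosing between the loaded halfword and %b.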
+define i16 @atomicrmw_umin_i16_monotonic(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i16_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l16ui a2, a6, 0
+; XTENSA-NEXT: movi a7, 0
+; XTENSA-NEXT: l32r a5, .LCPI105_1
+; XTENSA-NEXT: j .LBB105_2
+; XTENSA-NEXT: .LBB105_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB105_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB105_4
+; XTENSA-NEXT: .LBB105_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: l32r a8, .LCPI105_0
+; XTENSA-NEXT: and a9, a3, a8
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: and a8, a2, a8
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bltu a9, a8, .LBB105_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB105_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB105_1
+; XTENSA-NEXT: .LBB105_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i16_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI105_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB105_2
+; XTENSA-ATOMIC-NEXT: .LBB105_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB105_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB105_6
+; XTENSA-ATOMIC-NEXT: .LBB105_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a14, .LCPI105_0
+; XTENSA-ATOMIC-NEXT: and a6, a3, a14
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a13
+; XTENSA-ATOMIC-NEXT: and a5, a15, a14
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a6, a5, .LBB105_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB105_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB105_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB105_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a7, a14
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a14, a14
+; XTENSA-ATOMIC-NEXT: and a15, a13, a9
+; XTENSA-ATOMIC-NEXT: or a14, a15, a14
+; XTENSA-ATOMIC-NEXT: wsr a13, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a14, a13, .LBB105_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB105_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB105_1
+; XTENSA-ATOMIC-NEXT: .LBB105_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umin_i16_acquire(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i16_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l16ui a2, a6, 0
+; XTENSA-NEXT: movi a7, 2
+; XTENSA-NEXT: l32r a5, .LCPI106_1
+; XTENSA-NEXT: j .LBB106_2
+; XTENSA-NEXT: .LBB106_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB106_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB106_4
+; XTENSA-NEXT: .LBB106_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: l32r a8, .LCPI106_0
+; XTENSA-NEXT: and a9, a3, a8
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: and a8, a2, a8
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bltu a9, a8, .LBB106_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB106_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB106_1
+; XTENSA-NEXT: .LBB106_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i16_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI106_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: l32i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB106_2
+; XTENSA-ATOMIC-NEXT: .LBB106_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB106_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB106_6
+; XTENSA-ATOMIC-NEXT: .LBB106_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a14, .LCPI106_0
+; XTENSA-ATOMIC-NEXT: and a6, a3, a14
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a13
+; XTENSA-ATOMIC-NEXT: and a5, a15, a14
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a6, a5, .LBB106_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB106_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB106_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB106_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a7, a14
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a14, a14
+; XTENSA-ATOMIC-NEXT: and a15, a13, a9
+; XTENSA-ATOMIC-NEXT: or a14, a15, a14
+; XTENSA-ATOMIC-NEXT: wsr a13, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a14, a13, .LBB106_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB106_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB106_1
+; XTENSA-ATOMIC-NEXT: .LBB106_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umin_i16_release(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i16_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a5, a2, a2
+; XTENSA-NEXT: l16ui a2, a5, 0
+; XTENSA-NEXT: movi a7, 3
+; XTENSA-NEXT: movi a6, 0
+; XTENSA-NEXT: l32r a4, .LCPI107_1
+; XTENSA-NEXT: j .LBB107_2
+; XTENSA-NEXT: .LBB107_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB107_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a5, a5
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB107_4
+; XTENSA-NEXT: .LBB107_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: l32r a8, .LCPI107_0
+; XTENSA-NEXT: and a9, a3, a8
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: and a8, a2, a8
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bltu a9, a8, .LBB107_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB107_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB107_1
+; XTENSA-NEXT: .LBB107_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i16_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI107_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB107_2
+; XTENSA-ATOMIC-NEXT: .LBB107_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB107_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB107_6
+; XTENSA-ATOMIC-NEXT: .LBB107_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a14, .LCPI107_0
+; XTENSA-ATOMIC-NEXT: and a6, a3, a14
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a13
+; XTENSA-ATOMIC-NEXT: and a5, a15, a14
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a6, a5, .LBB107_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB107_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB107_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB107_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a7, a14
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a14, a14
+; XTENSA-ATOMIC-NEXT: and a15, a13, a9
+; XTENSA-ATOMIC-NEXT: or a14, a15, a14
+; XTENSA-ATOMIC-NEXT: wsr a13, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a14, a13, .LBB107_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB107_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB107_1
+; XTENSA-ATOMIC-NEXT: .LBB107_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umin_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i16_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a5, a2, a2
+; XTENSA-NEXT: l16ui a2, a5, 0
+; XTENSA-NEXT: movi a7, 4
+; XTENSA-NEXT: movi a6, 2
+; XTENSA-NEXT: l32r a4, .LCPI108_1
+; XTENSA-NEXT: j .LBB108_2
+; XTENSA-NEXT: .LBB108_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB108_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a5, a5
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB108_4
+; XTENSA-NEXT: .LBB108_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: l32r a8, .LCPI108_0
+; XTENSA-NEXT: and a9, a3, a8
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: and a8, a2, a8
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bltu a9, a8, .LBB108_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB108_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB108_1
+; XTENSA-NEXT: .LBB108_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i16_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI108_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB108_2
+; XTENSA-ATOMIC-NEXT: .LBB108_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB108_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB108_6
+; XTENSA-ATOMIC-NEXT: .LBB108_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a14, .LCPI108_0
+; XTENSA-ATOMIC-NEXT: and a6, a3, a14
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a13
+; XTENSA-ATOMIC-NEXT: and a5, a15, a14
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a6, a5, .LBB108_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB108_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB108_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB108_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a7, a14
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a14, a14
+; XTENSA-ATOMIC-NEXT: and a15, a13, a9
+; XTENSA-ATOMIC-NEXT: or a14, a15, a14
+; XTENSA-ATOMIC-NEXT: wsr a13, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a14, a13, .LBB108_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB108_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB108_1
+; XTENSA-ATOMIC-NEXT: .LBB108_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umin_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i16_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l16ui a2, a6, 0
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a5, .LCPI109_1
+; XTENSA-NEXT: j .LBB109_2
+; XTENSA-NEXT: .LBB109_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB109_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB109_4
+; XTENSA-NEXT: .LBB109_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: l32r a8, .LCPI109_0
+; XTENSA-NEXT: and a9, a3, a8
+; XTENSA-NEXT: s16i a2, a1, 0
+; XTENSA-NEXT: and a8, a2, a8
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bltu a9, a8, .LBB109_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB109_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB109_1
+; XTENSA-NEXT: .LBB109_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i16_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI109_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a13, a10, 0
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB109_2
+; XTENSA-ATOMIC-NEXT: .LBB109_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB109_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a14, a14
+; XTENSA-ATOMIC-NEXT: beqi a15, 1, .LBB109_6
+; XTENSA-ATOMIC-NEXT: .LBB109_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a14, .LCPI109_0
+; XTENSA-ATOMIC-NEXT: and a6, a3, a14
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a15, a13
+; XTENSA-ATOMIC-NEXT: and a5, a15, a14
+; XTENSA-ATOMIC-NEXT: or a7, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a6, a5, .LBB109_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB109_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a15, a15
+; XTENSA-ATOMIC-NEXT: .LBB109_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB109_2 Depth=1
+; XTENSA-ATOMIC-NEXT: and a14, a7, a14
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a14, a14
+; XTENSA-ATOMIC-NEXT: and a15, a13, a9
+; XTENSA-ATOMIC-NEXT: or a14, a15, a14
+; XTENSA-ATOMIC-NEXT: wsr a13, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: or a15, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a14, a13, .LBB109_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB109_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB109_1
+; XTENSA-ATOMIC-NEXT: .LBB109_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
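+; i32 operations need no subword masking, so the XTENSA-ATOMIC xchg lowering
+; reduces to a bare wsr scompare1 / s32c1i loop storing %b directly, with memw
+; barriers placed per ordering: before the initial load for release and
+; stronger, after the loop for acquire and stronger.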
+define i32 @atomicrmw_xchg_i32_monotonic(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI110_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB110_2
+; XTENSA-ATOMIC-NEXT: .LBB110_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB110_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB110_4
+; XTENSA-ATOMIC-NEXT: .LBB110_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB110_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB110_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB110_1
+; XTENSA-ATOMIC-NEXT: .LBB110_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xchg_i32_acquire(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i32_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI111_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i32_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB111_2
+; XTENSA-ATOMIC-NEXT: .LBB111_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB111_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB111_4
+; XTENSA-ATOMIC-NEXT: .LBB111_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB111_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB111_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB111_1
+; XTENSA-ATOMIC-NEXT: .LBB111_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xchg_i32_release(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i32_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI112_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i32_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB112_2
+; XTENSA-ATOMIC-NEXT: .LBB112_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB112_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB112_4
+; XTENSA-ATOMIC-NEXT: .LBB112_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB112_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB112_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB112_1
+; XTENSA-ATOMIC-NEXT: .LBB112_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xchg_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i32_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI113_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i32_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB113_2
+; XTENSA-ATOMIC-NEXT: .LBB113_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB113_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB113_4
+; XTENSA-ATOMIC-NEXT: .LBB113_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB113_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB113_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB113_1
+; XTENSA-ATOMIC-NEXT: .LBB113_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xchg_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xchg_i32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI114_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xchg_i32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB114_2
+; XTENSA-ATOMIC-NEXT: .LBB114_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB114_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB114_4
+; XTENSA-ATOMIC-NEXT: .LBB114_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB114_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB114_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB114_1
+; XTENSA-ATOMIC-NEXT: .LBB114_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xchg ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_add_i32_monotonic(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI115_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB115_2
+; XTENSA-ATOMIC-NEXT: .LBB115_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB115_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB115_4
+; XTENSA-ATOMIC-NEXT: .LBB115_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: add a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB115_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB115_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB115_1
+; XTENSA-ATOMIC-NEXT: .LBB115_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_add_i32_acquire(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i32_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI116_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i32_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB116_2
+; XTENSA-ATOMIC-NEXT: .LBB116_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB116_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB116_4
+; XTENSA-ATOMIC-NEXT: .LBB116_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: add a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB116_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB116_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB116_1
+; XTENSA-ATOMIC-NEXT: .LBB116_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_add_i32_release(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i32_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI117_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i32_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB117_2
+; XTENSA-ATOMIC-NEXT: .LBB117_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB117_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB117_4
+; XTENSA-ATOMIC-NEXT: .LBB117_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: add a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB117_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB117_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB117_1
+; XTENSA-ATOMIC-NEXT: .LBB117_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_add_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i32_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI118_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i32_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB118_2
+; XTENSA-ATOMIC-NEXT: .LBB118_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB118_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB118_4
+; XTENSA-ATOMIC-NEXT: .LBB118_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: add a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB118_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB118_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB118_1
+; XTENSA-ATOMIC-NEXT: .LBB118_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_add_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_add_i32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI119_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_add_i32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB119_2
+; XTENSA-ATOMIC-NEXT: .LBB119_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB119_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB119_4
+; XTENSA-ATOMIC-NEXT: .LBB119_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: add a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB119_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB119_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB119_1
+; XTENSA-ATOMIC-NEXT: .LBB119_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw add ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_sub_i32_monotonic(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI120_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB120_2
+; XTENSA-ATOMIC-NEXT: .LBB120_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB120_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB120_4
+; XTENSA-ATOMIC-NEXT: .LBB120_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: sub a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB120_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB120_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB120_1
+; XTENSA-ATOMIC-NEXT: .LBB120_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_sub_i32_acquire(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i32_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI121_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i32_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB121_2
+; XTENSA-ATOMIC-NEXT: .LBB121_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB121_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB121_4
+; XTENSA-ATOMIC-NEXT: .LBB121_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: sub a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB121_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB121_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB121_1
+; XTENSA-ATOMIC-NEXT: .LBB121_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_sub_i32_release(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i32_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI122_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i32_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB122_2
+; XTENSA-ATOMIC-NEXT: .LBB122_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB122_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB122_4
+; XTENSA-ATOMIC-NEXT: .LBB122_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: sub a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB122_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB122_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB122_1
+; XTENSA-ATOMIC-NEXT: .LBB122_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_sub_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i32_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI123_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i32_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB123_2
+; XTENSA-ATOMIC-NEXT: .LBB123_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB123_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB123_4
+; XTENSA-ATOMIC-NEXT: .LBB123_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: sub a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB123_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB123_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB123_1
+; XTENSA-ATOMIC-NEXT: .LBB123_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_sub_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_sub_i32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI124_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_sub_i32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB124_2
+; XTENSA-ATOMIC-NEXT: .LBB124_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB124_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB124_4
+; XTENSA-ATOMIC-NEXT: .LBB124_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: sub a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB124_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB124_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB124_1
+; XTENSA-ATOMIC-NEXT: .LBB124_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw sub ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_and_i32_monotonic(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI125_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB125_2
+; XTENSA-ATOMIC-NEXT: .LBB125_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB125_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB125_4
+; XTENSA-ATOMIC-NEXT: .LBB125_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB125_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB125_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB125_1
+; XTENSA-ATOMIC-NEXT: .LBB125_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_and_i32_acquire(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i32_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI126_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i32_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB126_2
+; XTENSA-ATOMIC-NEXT: .LBB126_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB126_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB126_4
+; XTENSA-ATOMIC-NEXT: .LBB126_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB126_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB126_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB126_1
+; XTENSA-ATOMIC-NEXT: .LBB126_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_and_i32_release(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i32_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI127_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i32_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB127_2
+; XTENSA-ATOMIC-NEXT: .LBB127_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB127_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB127_4
+; XTENSA-ATOMIC-NEXT: .LBB127_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB127_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB127_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB127_1
+; XTENSA-ATOMIC-NEXT: .LBB127_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_and_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i32_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI128_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i32_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB128_2
+; XTENSA-ATOMIC-NEXT: .LBB128_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB128_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB128_4
+; XTENSA-ATOMIC-NEXT: .LBB128_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB128_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB128_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB128_1
+; XTENSA-ATOMIC-NEXT: .LBB128_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_and_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_and_i32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI129_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_and_i32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB129_2
+; XTENSA-ATOMIC-NEXT: .LBB129_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB129_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB129_4
+; XTENSA-ATOMIC-NEXT: .LBB129_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB129_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB129_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB129_1
+; XTENSA-ATOMIC-NEXT: .LBB129_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw and ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+;define i32 @atomicrmw_nand_i32_monotonic(ptr %a, i32 %b) nounwind {
+; %res = atomicrmw nand ptr %a, i32 %b monotonic
+; ret i32 %res
+;}
+;
+;define i32 @atomicrmw_nand_i32_acquire(ptr %a, i32 %b) nounwind {
+; %res = atomicrmw nand ptr %a, i32 %b acquire
+; ret i32 %res
+;}
+;
+;define i32 @atomicrmw_nand_i32_release(ptr %a, i32 %b) nounwind {
+; %res = atomicrmw nand ptr %a, i32 %b release
+; ret i32 %res
+;}
+;
+;define i32 @atomicrmw_nand_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; %res = atomicrmw nand ptr %a, i32 %b acq_rel
+; ret i32 %res
+;}
+;
+;define i32 @atomicrmw_nand_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; %res = atomicrmw nand ptr %a, i32 %b seq_cst
+; ret i32 %res
+;}
+
+define i32 @atomicrmw_or_i32_monotonic(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI130_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB130_2
+; XTENSA-ATOMIC-NEXT: .LBB130_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB130_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB130_4
+; XTENSA-ATOMIC-NEXT: .LBB130_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB130_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB130_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB130_1
+; XTENSA-ATOMIC-NEXT: .LBB130_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_or_i32_acquire(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i32_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI131_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i32_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB131_2
+; XTENSA-ATOMIC-NEXT: .LBB131_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB131_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB131_4
+; XTENSA-ATOMIC-NEXT: .LBB131_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB131_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB131_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB131_1
+; XTENSA-ATOMIC-NEXT: .LBB131_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_or_i32_release(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i32_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI132_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i32_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB132_2
+; XTENSA-ATOMIC-NEXT: .LBB132_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB132_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB132_4
+; XTENSA-ATOMIC-NEXT: .LBB132_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB132_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB132_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB132_1
+; XTENSA-ATOMIC-NEXT: .LBB132_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_or_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i32_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI133_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i32_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB133_2
+; XTENSA-ATOMIC-NEXT: .LBB133_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB133_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB133_4
+; XTENSA-ATOMIC-NEXT: .LBB133_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB133_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB133_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB133_1
+; XTENSA-ATOMIC-NEXT: .LBB133_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_or_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_or_i32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI134_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_or_i32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB134_2
+; XTENSA-ATOMIC-NEXT: .LBB134_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB134_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB134_4
+; XTENSA-ATOMIC-NEXT: .LBB134_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB134_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB134_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB134_1
+; XTENSA-ATOMIC-NEXT: .LBB134_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw or ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xor_i32_monotonic(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI135_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB135_2
+; XTENSA-ATOMIC-NEXT: .LBB135_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB135_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB135_4
+; XTENSA-ATOMIC-NEXT: .LBB135_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB135_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB135_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB135_1
+; XTENSA-ATOMIC-NEXT: .LBB135_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xor_i32_acquire(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i32_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 2
+; XTENSA-NEXT: l32r a8, .LCPI136_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i32_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB136_2
+; XTENSA-ATOMIC-NEXT: .LBB136_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB136_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB136_4
+; XTENSA-ATOMIC-NEXT: .LBB136_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB136_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB136_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB136_1
+; XTENSA-ATOMIC-NEXT: .LBB136_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xor_i32_release(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i32_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI137_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i32_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB137_2
+; XTENSA-ATOMIC-NEXT: .LBB137_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB137_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB137_4
+; XTENSA-ATOMIC-NEXT: .LBB137_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB137_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB137_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB137_1
+; XTENSA-ATOMIC-NEXT: .LBB137_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xor_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i32_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 4
+; XTENSA-NEXT: l32r a8, .LCPI138_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i32_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB138_2
+; XTENSA-ATOMIC-NEXT: .LBB138_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB138_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB138_4
+; XTENSA-ATOMIC-NEXT: .LBB138_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB138_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB138_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB138_1
+; XTENSA-ATOMIC-NEXT: .LBB138_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xor_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_xor_i32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a11, a3, a3
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI139_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_xor_i32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB139_2
+; XTENSA-ATOMIC-NEXT: .LBB139_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB139_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB139_4
+; XTENSA-ATOMIC-NEXT: .LBB139_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a8, a11, a3
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB139_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB139_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB139_1
+; XTENSA-ATOMIC-NEXT: .LBB139_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw xor ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_max_i32_monotonic(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a7, 0
+; XTENSA-NEXT: l32r a5, .LCPI140_0
+; XTENSA-NEXT: j .LBB140_2
+; XTENSA-NEXT: .LBB140_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB140_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB140_4
+; XTENSA-NEXT: .LBB140_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bge a3, a2, .LBB140_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB140_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB140_1
+; XTENSA-NEXT: .LBB140_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB140_2
+; XTENSA-ATOMIC-NEXT: .LBB140_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB140_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB140_6
+; XTENSA-ATOMIC-NEXT: .LBB140_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a3, a11, .LBB140_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB140_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB140_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB140_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB140_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB140_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB140_1
+; XTENSA-ATOMIC-NEXT: .LBB140_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_max_i32_acquire(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i32_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a7, 2
+; XTENSA-NEXT: l32r a5, .LCPI141_0
+; XTENSA-NEXT: j .LBB141_2
+; XTENSA-NEXT: .LBB141_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB141_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB141_4
+; XTENSA-NEXT: .LBB141_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bge a3, a2, .LBB141_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB141_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB141_1
+; XTENSA-NEXT: .LBB141_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i32_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB141_2
+; XTENSA-ATOMIC-NEXT: .LBB141_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB141_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB141_6
+; XTENSA-ATOMIC-NEXT: .LBB141_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a3, a11, .LBB141_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB141_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB141_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB141_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB141_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB141_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB141_1
+; XTENSA-ATOMIC-NEXT: .LBB141_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_max_i32_release(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i32_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a5, a2, a2
+; XTENSA-NEXT: l32i a2, a5, 0
+; XTENSA-NEXT: movi a7, 3
+; XTENSA-NEXT: movi a6, 0
+; XTENSA-NEXT: l32r a4, .LCPI142_0
+; XTENSA-NEXT: j .LBB142_2
+; XTENSA-NEXT: .LBB142_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB142_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a5, a5
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB142_4
+; XTENSA-NEXT: .LBB142_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bge a3, a2, .LBB142_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB142_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB142_1
+; XTENSA-NEXT: .LBB142_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i32_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB142_2
+; XTENSA-ATOMIC-NEXT: .LBB142_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB142_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB142_6
+; XTENSA-ATOMIC-NEXT: .LBB142_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a3, a11, .LBB142_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB142_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB142_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB142_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB142_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB142_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB142_1
+; XTENSA-ATOMIC-NEXT: .LBB142_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_max_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i32_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a5, a2, a2
+; XTENSA-NEXT: l32i a2, a5, 0
+; XTENSA-NEXT: movi a7, 4
+; XTENSA-NEXT: movi a6, 2
+; XTENSA-NEXT: l32r a4, .LCPI143_0
+; XTENSA-NEXT: j .LBB143_2
+; XTENSA-NEXT: .LBB143_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB143_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a5, a5
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB143_4
+; XTENSA-NEXT: .LBB143_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bge a3, a2, .LBB143_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB143_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB143_1
+; XTENSA-NEXT: .LBB143_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i32_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB143_2
+; XTENSA-ATOMIC-NEXT: .LBB143_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB143_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB143_6
+; XTENSA-ATOMIC-NEXT: .LBB143_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a3, a11, .LBB143_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB143_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB143_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB143_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB143_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB143_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB143_1
+; XTENSA-ATOMIC-NEXT: .LBB143_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_max_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_max_i32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a5, .LCPI144_0
+; XTENSA-NEXT: j .LBB144_2
+; XTENSA-NEXT: .LBB144_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB144_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB144_4
+; XTENSA-NEXT: .LBB144_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bge a3, a2, .LBB144_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB144_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB144_1
+; XTENSA-NEXT: .LBB144_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_max_i32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB144_2
+; XTENSA-ATOMIC-NEXT: .LBB144_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB144_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB144_6
+; XTENSA-ATOMIC-NEXT: .LBB144_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bge a3, a11, .LBB144_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB144_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB144_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB144_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB144_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB144_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB144_1
+; XTENSA-ATOMIC-NEXT: .LBB144_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw max ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_min_i32_monotonic(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a7, 0
+; XTENSA-NEXT: l32r a5, .LCPI145_0
+; XTENSA-NEXT: j .LBB145_2
+; XTENSA-NEXT: .LBB145_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB145_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB145_4
+; XTENSA-NEXT: .LBB145_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: blt a3, a2, .LBB145_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB145_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB145_1
+; XTENSA-NEXT: .LBB145_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB145_2
+; XTENSA-ATOMIC-NEXT: .LBB145_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB145_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB145_6
+; XTENSA-ATOMIC-NEXT: .LBB145_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a3, a11, .LBB145_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB145_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB145_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB145_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB145_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB145_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB145_1
+; XTENSA-ATOMIC-NEXT: .LBB145_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_min_i32_acquire(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i32_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a7, 2
+; XTENSA-NEXT: l32r a5, .LCPI146_0
+; XTENSA-NEXT: j .LBB146_2
+; XTENSA-NEXT: .LBB146_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB146_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB146_4
+; XTENSA-NEXT: .LBB146_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: blt a3, a2, .LBB146_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB146_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB146_1
+; XTENSA-NEXT: .LBB146_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i32_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB146_2
+; XTENSA-ATOMIC-NEXT: .LBB146_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB146_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB146_6
+; XTENSA-ATOMIC-NEXT: .LBB146_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a3, a11, .LBB146_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB146_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB146_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB146_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB146_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB146_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB146_1
+; XTENSA-ATOMIC-NEXT: .LBB146_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_min_i32_release(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i32_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a5, a2, a2
+; XTENSA-NEXT: l32i a2, a5, 0
+; XTENSA-NEXT: movi a7, 3
+; XTENSA-NEXT: movi a6, 0
+; XTENSA-NEXT: l32r a4, .LCPI147_0
+; XTENSA-NEXT: j .LBB147_2
+; XTENSA-NEXT: .LBB147_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB147_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a5, a5
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB147_4
+; XTENSA-NEXT: .LBB147_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: blt a3, a2, .LBB147_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB147_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB147_1
+; XTENSA-NEXT: .LBB147_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i32_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB147_2
+; XTENSA-ATOMIC-NEXT: .LBB147_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB147_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB147_6
+; XTENSA-ATOMIC-NEXT: .LBB147_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a3, a11, .LBB147_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB147_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB147_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB147_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB147_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB147_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB147_1
+; XTENSA-ATOMIC-NEXT: .LBB147_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_min_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i32_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a5, a2, a2
+; XTENSA-NEXT: l32i a2, a5, 0
+; XTENSA-NEXT: movi a7, 4
+; XTENSA-NEXT: movi a6, 2
+; XTENSA-NEXT: l32r a4, .LCPI148_0
+; XTENSA-NEXT: j .LBB148_2
+; XTENSA-NEXT: .LBB148_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB148_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a5, a5
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB148_4
+; XTENSA-NEXT: .LBB148_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: blt a3, a2, .LBB148_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB148_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB148_1
+; XTENSA-NEXT: .LBB148_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i32_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB148_2
+; XTENSA-ATOMIC-NEXT: .LBB148_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB148_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB148_6
+; XTENSA-ATOMIC-NEXT: .LBB148_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a3, a11, .LBB148_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB148_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB148_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB148_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB148_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB148_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB148_1
+; XTENSA-ATOMIC-NEXT: .LBB148_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_min_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_min_i32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a5, .LCPI149_0
+; XTENSA-NEXT: j .LBB149_2
+; XTENSA-NEXT: .LBB149_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB149_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB149_4
+; XTENSA-NEXT: .LBB149_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: blt a3, a2, .LBB149_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB149_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB149_1
+; XTENSA-NEXT: .LBB149_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_min_i32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB149_2
+; XTENSA-ATOMIC-NEXT: .LBB149_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB149_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB149_6
+; XTENSA-ATOMIC-NEXT: .LBB149_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: blt a3, a11, .LBB149_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB149_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB149_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB149_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB149_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB149_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB149_1
+; XTENSA-ATOMIC-NEXT: .LBB149_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw min ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umax_i32_monotonic(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a7, 0
+; XTENSA-NEXT: l32r a5, .LCPI150_0
+; XTENSA-NEXT: j .LBB150_2
+; XTENSA-NEXT: .LBB150_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB150_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB150_4
+; XTENSA-NEXT: .LBB150_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bgeu a3, a2, .LBB150_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB150_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB150_1
+; XTENSA-NEXT: .LBB150_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB150_2
+; XTENSA-ATOMIC-NEXT: .LBB150_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB150_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB150_6
+; XTENSA-ATOMIC-NEXT: .LBB150_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a3, a11, .LBB150_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB150_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB150_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB150_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB150_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB150_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB150_1
+; XTENSA-ATOMIC-NEXT: .LBB150_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umax_i32_acquire(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i32_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a7, 2
+; XTENSA-NEXT: l32r a5, .LCPI151_0
+; XTENSA-NEXT: j .LBB151_2
+; XTENSA-NEXT: .LBB151_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB151_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB151_4
+; XTENSA-NEXT: .LBB151_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bgeu a3, a2, .LBB151_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB151_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB151_1
+; XTENSA-NEXT: .LBB151_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i32_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB151_2
+; XTENSA-ATOMIC-NEXT: .LBB151_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB151_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB151_6
+; XTENSA-ATOMIC-NEXT: .LBB151_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a3, a11, .LBB151_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB151_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB151_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB151_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB151_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB151_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB151_1
+; XTENSA-ATOMIC-NEXT: .LBB151_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umax_i32_release(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i32_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a5, a2, a2
+; XTENSA-NEXT: l32i a2, a5, 0
+; XTENSA-NEXT: movi a7, 3
+; XTENSA-NEXT: movi a6, 0
+; XTENSA-NEXT: l32r a4, .LCPI152_0
+; XTENSA-NEXT: j .LBB152_2
+; XTENSA-NEXT: .LBB152_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB152_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a5, a5
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB152_4
+; XTENSA-NEXT: .LBB152_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bgeu a3, a2, .LBB152_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB152_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB152_1
+; XTENSA-NEXT: .LBB152_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i32_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB152_2
+; XTENSA-ATOMIC-NEXT: .LBB152_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB152_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB152_6
+; XTENSA-ATOMIC-NEXT: .LBB152_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a3, a11, .LBB152_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB152_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB152_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB152_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB152_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB152_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB152_1
+; XTENSA-ATOMIC-NEXT: .LBB152_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umax_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i32_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a5, a2, a2
+; XTENSA-NEXT: l32i a2, a5, 0
+; XTENSA-NEXT: movi a7, 4
+; XTENSA-NEXT: movi a6, 2
+; XTENSA-NEXT: l32r a4, .LCPI153_0
+; XTENSA-NEXT: j .LBB153_2
+; XTENSA-NEXT: .LBB153_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB153_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a5, a5
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB153_4
+; XTENSA-NEXT: .LBB153_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bgeu a3, a2, .LBB153_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB153_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB153_1
+; XTENSA-NEXT: .LBB153_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i32_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB153_2
+; XTENSA-ATOMIC-NEXT: .LBB153_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB153_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB153_6
+; XTENSA-ATOMIC-NEXT: .LBB153_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a3, a11, .LBB153_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB153_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB153_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB153_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB153_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB153_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB153_1
+; XTENSA-ATOMIC-NEXT: .LBB153_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umax_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umax_i32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a5, .LCPI154_0
+; XTENSA-NEXT: j .LBB154_2
+; XTENSA-NEXT: .LBB154_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB154_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB154_4
+; XTENSA-NEXT: .LBB154_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bgeu a3, a2, .LBB154_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB154_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB154_1
+; XTENSA-NEXT: .LBB154_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umax_i32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB154_2
+; XTENSA-ATOMIC-NEXT: .LBB154_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB154_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB154_6
+; XTENSA-ATOMIC-NEXT: .LBB154_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bgeu a3, a11, .LBB154_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB154_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB154_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB154_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB154_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB154_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB154_1
+; XTENSA-ATOMIC-NEXT: .LBB154_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umax ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umin_i32_monotonic(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a7, 0
+; XTENSA-NEXT: l32r a5, .LCPI155_0
+; XTENSA-NEXT: j .LBB155_2
+; XTENSA-NEXT: .LBB155_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB155_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB155_4
+; XTENSA-NEXT: .LBB155_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bltu a3, a2, .LBB155_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB155_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB155_1
+; XTENSA-NEXT: .LBB155_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB155_2
+; XTENSA-ATOMIC-NEXT: .LBB155_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB155_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB155_6
+; XTENSA-ATOMIC-NEXT: .LBB155_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a3, a11, .LBB155_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB155_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB155_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB155_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB155_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB155_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB155_1
+; XTENSA-ATOMIC-NEXT: .LBB155_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umin_i32_acquire(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i32_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a7, 2
+; XTENSA-NEXT: l32r a5, .LCPI156_0
+; XTENSA-NEXT: j .LBB156_2
+; XTENSA-NEXT: .LBB156_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB156_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB156_4
+; XTENSA-NEXT: .LBB156_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bltu a3, a2, .LBB156_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB156_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB156_1
+; XTENSA-NEXT: .LBB156_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i32_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB156_2
+; XTENSA-ATOMIC-NEXT: .LBB156_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB156_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB156_6
+; XTENSA-ATOMIC-NEXT: .LBB156_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a3, a11, .LBB156_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB156_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB156_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB156_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB156_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB156_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB156_1
+; XTENSA-ATOMIC-NEXT: .LBB156_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umin_i32_release(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i32_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a5, a2, a2
+; XTENSA-NEXT: l32i a2, a5, 0
+; XTENSA-NEXT: movi a7, 3
+; XTENSA-NEXT: movi a6, 0
+; XTENSA-NEXT: l32r a4, .LCPI157_0
+; XTENSA-NEXT: j .LBB157_2
+; XTENSA-NEXT: .LBB157_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB157_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a5, a5
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB157_4
+; XTENSA-NEXT: .LBB157_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bltu a3, a2, .LBB157_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB157_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB157_1
+; XTENSA-NEXT: .LBB157_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i32_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB157_2
+; XTENSA-ATOMIC-NEXT: .LBB157_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB157_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB157_6
+; XTENSA-ATOMIC-NEXT: .LBB157_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a3, a11, .LBB157_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB157_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB157_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB157_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB157_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB157_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB157_1
+; XTENSA-ATOMIC-NEXT: .LBB157_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umin_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i32_acq_rel:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a5, a2, a2
+; XTENSA-NEXT: l32i a2, a5, 0
+; XTENSA-NEXT: movi a7, 4
+; XTENSA-NEXT: movi a6, 2
+; XTENSA-NEXT: l32r a4, .LCPI158_0
+; XTENSA-NEXT: j .LBB158_2
+; XTENSA-NEXT: .LBB158_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB158_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a5, a5
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a6, a6
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB158_4
+; XTENSA-NEXT: .LBB158_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bltu a3, a2, .LBB158_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB158_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB158_1
+; XTENSA-NEXT: .LBB158_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i32_acq_rel:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB158_2
+; XTENSA-ATOMIC-NEXT: .LBB158_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB158_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB158_6
+; XTENSA-ATOMIC-NEXT: .LBB158_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a3, a11, .LBB158_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB158_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB158_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB158_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB158_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB158_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB158_1
+; XTENSA-ATOMIC-NEXT: .LBB158_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umin_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; XTENSA-LABEL: atomicrmw_umin_i32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a5, .LCPI159_0
+; XTENSA-NEXT: j .LBB159_2
+; XTENSA-NEXT: .LBB159_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB159_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB159_4
+; XTENSA-NEXT: .LBB159_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a3, a3
+; XTENSA-NEXT: bltu a3, a2, .LBB159_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB159_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB159_1
+; XTENSA-NEXT: .LBB159_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: atomicrmw_umin_i32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB159_2
+; XTENSA-ATOMIC-NEXT: .LBB159_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB159_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB159_6
+; XTENSA-ATOMIC-NEXT: .LBB159_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a3, a3
+; XTENSA-ATOMIC-NEXT: bltu a3, a11, .LBB159_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB159_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB159_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB159_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB159_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB159_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB159_1
+; XTENSA-ATOMIC-NEXT: .LBB159_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = atomicrmw umin ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
diff --git a/llvm/test/CodeGen/Xtensa/forced-atomics.ll b/llvm/test/CodeGen/Xtensa/forced-atomics.ll
new file mode 100644
index 0000000000000..eeec87b7ab132
--- /dev/null
+++ b/llvm/test/CodeGen/Xtensa/forced-atomics.ll
@@ -0,0 +1,1426 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 4
+; RUN: llc -mtriple=xtensa -mattr=+windowed < %s | FileCheck %s --check-prefixes=XTENSA
+; RUN: llc -mtriple=xtensa -mattr=+windowed,+s32c1i -mattr=+forced-atomics < %s | FileCheck %s --check-prefixes=XTENSA-ATOMIC
+
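+; The XTENSA configuration lowers every atomic operation to a runtime
+; library call reached through callx8 (the callee address comes from the
+; constant pool), while XTENSA-ATOMIC expands atomics inline using plain
+; loads/stores, memw fences, and s32c1i compare-and-swap loops.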
+define i8 @load8(ptr %p) nounwind {
+; XTENSA-LABEL: load8:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 5
+; XTENSA-NEXT: l32r a8, .LCPI0_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: load8:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l8ui a2, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %v = load atomic i8, ptr %p seq_cst, align 1
+ ret i8 %v
+}
+
+define void @store8(ptr %p) nounwind {
+; XTENSA-LABEL: store8:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI1_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: store8:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: movi a8, 0
+; XTENSA-ATOMIC-NEXT: s8i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i8 0, ptr %p seq_cst, align 1
+ ret void
+}
+
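+; i8 read-modify-write has no native support, so it is widened: the add
+; is done on the containing aligned 32-bit word (pointer rounded down
+; with -4), the byte lane is isolated with a shift/mask derived from the
+; low pointer bits, and the word is committed with an s32c1i retry loop.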
+define i8 @rmw8(ptr %p) nounwind {
+; XTENSA-LABEL: rmw8:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 1
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI2_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw8:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 1
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: movi a11, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a11, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -1
+; XTENSA-ATOMIC-NEXT: xor a12, a11, a12
+; XTENSA-ATOMIC-NEXT: movi a13, -4
+; XTENSA-ATOMIC-NEXT: and a13, a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 0
+; XTENSA-ATOMIC-NEXT: j .LBB2_2
+; XTENSA-ATOMIC-NEXT: .LBB2_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB2_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB2_4
+; XTENSA-ATOMIC-NEXT: .LBB2_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a12
+; XTENSA-ATOMIC-NEXT: add a6, a15, a10
+; XTENSA-ATOMIC-NEXT: and a6, a6, a11
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a13, 0
+; XTENSA-ATOMIC-NEXT: or a6, a9, a9
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB2_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB2_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: j .LBB2_1
+; XTENSA-ATOMIC-NEXT: .LBB2_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw add ptr %p, i8 1 seq_cst, align 1
+ ret i8 %v
+}
+
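+; Partword cmpxchg uses the same widening; the s32c1i loop retries only
+; when bits outside the i8 lane changed underneath it (the
+; %partword.cmpxchg.failure block), and the result byte is shifted back
+; down with ssr/srl.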
+define i8 @cmpxchg8(ptr %p) nounwind {
+; XTENSA-LABEL: cmpxchg8:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a8, 0
+; XTENSA-NEXT: s8i a8, a1, 0
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: movi a12, 1
+; XTENSA-NEXT: movi a13, 5
+; XTENSA-NEXT: l32r a8, .LCPI3_0
+; XTENSA-NEXT: or a14, a13, a13
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: l8ui a2, a1, 0
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: cmpxchg8:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 255
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a10, 0
+; XTENSA-ATOMIC-NEXT: and a7, a11, a9
+; XTENSA-ATOMIC-NEXT: movi a11, 1
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a12, a11
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: .LBB3_1: # %partword.cmpxchg.loop
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: or a14, a15, a12
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: or a7, a11, a11
+; XTENSA-ATOMIC-NEXT: beq a14, a15, .LBB3_3
+; XTENSA-ATOMIC-NEXT: # %bb.2: # %partword.cmpxchg.loop
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB3_1 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: .LBB3_3: # %partword.cmpxchg.loop
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB3_1 Depth=1
+; XTENSA-ATOMIC-NEXT: bnez a7, .LBB3_5
+; XTENSA-ATOMIC-NEXT: # %bb.4: # %partword.cmpxchg.failure
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB3_1 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a14, a9
+; XTENSA-ATOMIC-NEXT: bne a15, a7, .LBB3_1
+; XTENSA-ATOMIC-NEXT: .LBB3_5: # %partword.cmpxchg.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = cmpxchg ptr %p, i8 0, i8 1 seq_cst seq_cst
+ %res.0 = extractvalue { i8, i1 } %res, 0
+ ret i8 %res.0
+}
+
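+; The i16 tests mirror the i8 ones; the only material difference is the
+; lane mask, which no longer fits a movi immediate and is loaded from
+; the constant pool instead (.LCPI6_0/.LCPI7_0, presumably 65535).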
+define i16 @load16(ptr %p) nounwind {
+; XTENSA-LABEL: load16:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 5
+; XTENSA-NEXT: l32r a8, .LCPI4_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: load16:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l16ui a2, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %v = load atomic i16, ptr %p seq_cst, align 2
+ ret i16 %v
+}
+
+define void @store16(ptr %p) nounwind {
+; XTENSA-LABEL: store16:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI5_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: store16:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: movi a8, 0
+; XTENSA-ATOMIC-NEXT: s16i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i16 0, ptr %p seq_cst, align 2
+ ret void
+}
+
+define i16 @rmw16(ptr %p) nounwind {
+; XTENSA-LABEL: rmw16:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 1
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI6_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw16:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: movi a9, 1
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a10, a9
+; XTENSA-ATOMIC-NEXT: l32r a11, .LCPI6_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a11, a11
+; XTENSA-ATOMIC-NEXT: movi a12, -1
+; XTENSA-ATOMIC-NEXT: xor a12, a11, a12
+; XTENSA-ATOMIC-NEXT: movi a13, -4
+; XTENSA-ATOMIC-NEXT: and a13, a2, a13
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a15, a13, 0
+; XTENSA-ATOMIC-NEXT: movi a14, 0
+; XTENSA-ATOMIC-NEXT: j .LBB6_2
+; XTENSA-ATOMIC-NEXT: .LBB6_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB6_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: beqi a6, 1, .LBB6_4
+; XTENSA-ATOMIC-NEXT: .LBB6_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a15, a12
+; XTENSA-ATOMIC-NEXT: add a6, a15, a10
+; XTENSA-ATOMIC-NEXT: and a6, a6, a11
+; XTENSA-ATOMIC-NEXT: or a7, a7, a6
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a7, a13, 0
+; XTENSA-ATOMIC-NEXT: or a6, a9, a9
+; XTENSA-ATOMIC-NEXT: beq a7, a15, .LBB6_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB6_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a6, a14, a14
+; XTENSA-ATOMIC-NEXT: j .LBB6_1
+; XTENSA-ATOMIC-NEXT: .LBB6_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a7
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw add ptr %p, i16 1 seq_cst, align 2
+ ret i16 %v
+}
+
+define i16 @cmpxchg16(ptr %p) nounwind {
+; XTENSA-LABEL: cmpxchg16:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a8, 0
+; XTENSA-NEXT: s16i a8, a1, 0
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: movi a12, 1
+; XTENSA-NEXT: movi a13, 5
+; XTENSA-NEXT: l32r a8, .LCPI7_0
+; XTENSA-NEXT: or a14, a13, a13
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: l16ui a2, a1, 0
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: cmpxchg16:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: slli a8, a2, 3
+; XTENSA-ATOMIC-NEXT: movi a9, 24
+; XTENSA-ATOMIC-NEXT: and a8, a8, a9
+; XTENSA-ATOMIC-NEXT: l32r a9, .LCPI7_0
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a9, a9
+; XTENSA-ATOMIC-NEXT: movi a10, -1
+; XTENSA-ATOMIC-NEXT: xor a9, a9, a10
+; XTENSA-ATOMIC-NEXT: movi a10, -4
+; XTENSA-ATOMIC-NEXT: and a10, a2, a10
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a10, 0
+; XTENSA-ATOMIC-NEXT: and a7, a11, a9
+; XTENSA-ATOMIC-NEXT: movi a11, 1
+; XTENSA-ATOMIC-NEXT: ssl a8
+; XTENSA-ATOMIC-NEXT: sll a12, a11
+; XTENSA-ATOMIC-NEXT: movi a13, 0
+; XTENSA-ATOMIC-NEXT: .LBB7_1: # %partword.cmpxchg.loop
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a15, a7, a7
+; XTENSA-ATOMIC-NEXT: or a14, a15, a12
+; XTENSA-ATOMIC-NEXT: wsr a15, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a14, a10, 0
+; XTENSA-ATOMIC-NEXT: or a7, a11, a11
+; XTENSA-ATOMIC-NEXT: beq a14, a15, .LBB7_3
+; XTENSA-ATOMIC-NEXT: # %bb.2: # %partword.cmpxchg.loop
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB7_1 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a13, a13
+; XTENSA-ATOMIC-NEXT: .LBB7_3: # %partword.cmpxchg.loop
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB7_1 Depth=1
+; XTENSA-ATOMIC-NEXT: bnez a7, .LBB7_5
+; XTENSA-ATOMIC-NEXT: # %bb.4: # %partword.cmpxchg.failure
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB7_1 Depth=1
+; XTENSA-ATOMIC-NEXT: and a7, a14, a9
+; XTENSA-ATOMIC-NEXT: bne a15, a7, .LBB7_1
+; XTENSA-ATOMIC-NEXT: .LBB7_5: # %partword.cmpxchg.end
+; XTENSA-ATOMIC-NEXT: ssr a8
+; XTENSA-ATOMIC-NEXT: srl a2, a14
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %res = cmpxchg ptr %p, i16 0, i16 1 seq_cst seq_cst
+ %res.0 = extractvalue { i16, i1 } %res, 0
+ ret i16 %res.0
+}
+
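+; 32-bit loads need no CAS loop: XTENSA-ATOMIC emits a bare l32i for
+; unordered/monotonic and adds a trailing memw for acquire and seq_cst.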
+define i32 @load32_unordered(ptr %p) nounwind {
+; XTENSA-LABEL: load32_unordered:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: l32r a8, .LCPI8_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: load32_unordered:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a2, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ %v = load atomic i32, ptr %p unordered, align 4
+ ret i32 %v
+}
+
+define i32 @load32_monotonic(ptr %p) nounwind {
+; XTENSA-LABEL: load32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: l32r a8, .LCPI9_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: load32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a2, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ %v = load atomic i32, ptr %p monotonic, align 4
+ ret i32 %v
+}
+
+define i32 @load32_acquire(ptr %p) nounwind {
+; XTENSA-LABEL: load32_acquire:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 2
+; XTENSA-NEXT: l32r a8, .LCPI10_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: load32_acquire:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a2, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %v = load atomic i32, ptr %p acquire, align 4
+ ret i32 %v
+}
+
+define i32 @load32_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: load32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 5
+; XTENSA-NEXT: l32r a8, .LCPI11_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: load32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a2, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ %v = load atomic i32, ptr %p seq_cst, align 4
+ ret i32 %v
+}
+
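+; Atomic 32-bit stores likewise map to a plain s32i; release fences
+; before the store and seq_cst fences on both sides.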
+define void @store32_unordered(ptr %p) nounwind {
+; XTENSA-LABEL: store32_unordered:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: l32r a8, .LCPI12_0
+; XTENSA-NEXT: or a12, a11, a11
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: store32_unordered:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a8, 0
+; XTENSA-ATOMIC-NEXT: s32i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i32 0, ptr %p unordered, align 4
+ ret void
+}
+
+define void @store32_monotonic(ptr %p) nounwind {
+; XTENSA-LABEL: store32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: l32r a8, .LCPI13_0
+; XTENSA-NEXT: or a12, a11, a11
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: store32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a8, 0
+; XTENSA-ATOMIC-NEXT: s32i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i32 0, ptr %p monotonic, align 4
+ ret void
+}
+
+define void @store32_release(ptr %p) nounwind {
+; XTENSA-LABEL: store32_release:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: movi a12, 3
+; XTENSA-NEXT: l32r a8, .LCPI14_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: store32_release:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: movi a8, 0
+; XTENSA-ATOMIC-NEXT: s32i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i32 0, ptr %p release, align 4
+ ret void
+}
+
+define void @store32_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: store32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 0
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI15_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: store32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: movi a8, 0
+; XTENSA-ATOMIC-NEXT: s32i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: retw
+ store atomic i32 0, ptr %p seq_cst, align 4
+ ret void
+}
+
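+; Full-width RMW expands to the canonical s32c1i loop: load the old
+; value, compute the update, put the expected value in scompare1 with
+; wsr, and retry until the conditional store sees no intervening write.
+; The seq_cst variants bracket the loop with memw.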
+define i32 @rmw32_add_monotonic(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_add_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 1
+; XTENSA-NEXT: movi a12, 0
+; XTENSA-NEXT: l32r a8, .LCPI16_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_add_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB16_2
+; XTENSA-ATOMIC-NEXT: .LBB16_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB16_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB16_4
+; XTENSA-ATOMIC-NEXT: .LBB16_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: addi a8, a11, 1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB16_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB16_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB16_1
+; XTENSA-ATOMIC-NEXT: .LBB16_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw add ptr %p, i32 1 monotonic, align 4
+ ret i32 %v
+}
+
+define i32 @rmw32_add_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_add_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 1
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI17_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_add_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB17_2
+; XTENSA-ATOMIC-NEXT: .LBB17_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB17_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB17_4
+; XTENSA-ATOMIC-NEXT: .LBB17_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: addi a8, a11, 1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB17_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB17_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB17_1
+; XTENSA-ATOMIC-NEXT: .LBB17_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw add ptr %p, i32 1 seq_cst, align 4
+ ret i32 %v
+}
+
+define i32 @rmw32_sub_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_sub_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 1
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI18_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_sub_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: movi a10, 1
+; XTENSA-ATOMIC-NEXT: j .LBB18_2
+; XTENSA-ATOMIC-NEXT: .LBB18_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB18_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB18_4
+; XTENSA-ATOMIC-NEXT: .LBB18_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: addi a8, a11, -1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB18_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB18_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: j .LBB18_1
+; XTENSA-ATOMIC-NEXT: .LBB18_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw sub ptr %p, i32 1 seq_cst, align 4
+ ret i32 %v
+}
+
+define i32 @rmw32_and_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_and_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 1
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI19_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_and_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 1
+; XTENSA-ATOMIC-NEXT: movi a10, 0
+; XTENSA-ATOMIC-NEXT: j .LBB19_2
+; XTENSA-ATOMIC-NEXT: .LBB19_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB19_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB19_4
+; XTENSA-ATOMIC-NEXT: .LBB19_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: and a8, a11, a9
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB19_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB19_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: j .LBB19_1
+; XTENSA-ATOMIC-NEXT: .LBB19_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw and ptr %p, i32 1 seq_cst, align 4
+ ret i32 %v
+}
+
+define i32 @rmw32_nand_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_nand_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 1
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI20_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_nand_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a13, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, -1
+; XTENSA-ATOMIC-NEXT: movi a10, -2
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: movi a12, 1
+; XTENSA-ATOMIC-NEXT: j .LBB20_2
+; XTENSA-ATOMIC-NEXT: .LBB20_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB20_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a14, 1, .LBB20_4
+; XTENSA-ATOMIC-NEXT: .LBB20_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a8, a13, a9
+; XTENSA-ATOMIC-NEXT: or a8, a8, a10
+; XTENSA-ATOMIC-NEXT: wsr a13, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a14, a12, a12
+; XTENSA-ATOMIC-NEXT: beq a8, a13, .LBB20_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB20_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a14, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB20_1
+; XTENSA-ATOMIC-NEXT: .LBB20_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw nand ptr %p, i32 1 seq_cst, align 4
+ ret i32 %v
+}
+
+define i32 @rmw32_or_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_or_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 1
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI21_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_or_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 1
+; XTENSA-ATOMIC-NEXT: movi a10, 0
+; XTENSA-ATOMIC-NEXT: j .LBB21_2
+; XTENSA-ATOMIC-NEXT: .LBB21_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB21_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB21_4
+; XTENSA-ATOMIC-NEXT: .LBB21_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a9
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB21_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB21_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: j .LBB21_1
+; XTENSA-ATOMIC-NEXT: .LBB21_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw or ptr %p, i32 1 seq_cst, align 4
+ ret i32 %v
+}
+
+define i32 @rmw32_xor_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_xor_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 1
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI22_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_xor_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 1
+; XTENSA-ATOMIC-NEXT: movi a10, 0
+; XTENSA-ATOMIC-NEXT: j .LBB22_2
+; XTENSA-ATOMIC-NEXT: .LBB22_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB22_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB22_4
+; XTENSA-ATOMIC-NEXT: .LBB22_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: xor a8, a11, a9
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB22_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB22_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: j .LBB22_1
+; XTENSA-ATOMIC-NEXT: .LBB22_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw xor ptr %p, i32 1 seq_cst, align 4
+ ret i32 %v
+}
+
+define i32 @rmw32_max_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_max_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a5, 1
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a4, .LCPI23_0
+; XTENSA-NEXT: j .LBB23_2
+; XTENSA-NEXT: .LBB23_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB23_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB23_4
+; XTENSA-NEXT: .LBB23_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a5, a5
+; XTENSA-NEXT: bge a5, a2, .LBB23_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB23_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB23_1
+; XTENSA-NEXT: .LBB23_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_max_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 1
+; XTENSA-ATOMIC-NEXT: movi a10, 0
+; XTENSA-ATOMIC-NEXT: j .LBB23_2
+; XTENSA-ATOMIC-NEXT: .LBB23_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB23_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB23_6
+; XTENSA-ATOMIC-NEXT: .LBB23_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a9, a9
+; XTENSA-ATOMIC-NEXT: bge a9, a11, .LBB23_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB23_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB23_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB23_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB23_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB23_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: j .LBB23_1
+; XTENSA-ATOMIC-NEXT: .LBB23_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw max ptr %p, i32 1 seq_cst, align 4
+ ret i32 %v
+}
+
+define i32 @rmw32_min_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_min_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: l32i a12, a2, 0
+; XTENSA-NEXT: movi a6, 1
+; XTENSA-NEXT: movi a5, 2
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a4, .LCPI24_0
+; XTENSA-NEXT: j .LBB24_2
+; XTENSA-NEXT: .LBB24_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB24_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l32i a12, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB24_4
+; XTENSA-NEXT: .LBB24_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a12, a1, 0
+; XTENSA-NEXT: blt a12, a5, .LBB24_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB24_2 Depth=1
+; XTENSA-NEXT: or a12, a6, a6
+; XTENSA-NEXT: j .LBB24_1
+; XTENSA-NEXT: .LBB24_4: # %atomicrmw.end
+; XTENSA-NEXT: or a2, a12, a12
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_min_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a12, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 1
+; XTENSA-ATOMIC-NEXT: movi a10, 2
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: or a8, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB24_2
+; XTENSA-ATOMIC-NEXT: .LBB24_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB24_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a13, 1, .LBB24_6
+; XTENSA-ATOMIC-NEXT: .LBB24_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: blt a12, a10, .LBB24_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB24_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a9, a9
+; XTENSA-ATOMIC-NEXT: .LBB24_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB24_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a12, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a13, a9, a9
+; XTENSA-ATOMIC-NEXT: beq a8, a12, .LBB24_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB24_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB24_1
+; XTENSA-ATOMIC-NEXT: .LBB24_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw min ptr %p, i32 1 seq_cst, align 4
+ ret i32 %v
+}
+
+define i32 @rmw32_umax_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_umax_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a6, a2, a2
+; XTENSA-NEXT: l32i a2, a6, 0
+; XTENSA-NEXT: movi a5, 1
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a4, .LCPI25_0
+; XTENSA-NEXT: j .LBB25_2
+; XTENSA-NEXT: .LBB25_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB25_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a6, a6
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB25_4
+; XTENSA-NEXT: .LBB25_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a2, a1, 0
+; XTENSA-NEXT: or a12, a5, a5
+; XTENSA-NEXT: bgeu a5, a2, .LBB25_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB25_2 Depth=1
+; XTENSA-NEXT: or a12, a2, a2
+; XTENSA-NEXT: j .LBB25_1
+; XTENSA-NEXT: .LBB25_4: # %atomicrmw.end
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_umax_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 1
+; XTENSA-ATOMIC-NEXT: movi a10, 0
+; XTENSA-ATOMIC-NEXT: j .LBB25_2
+; XTENSA-ATOMIC-NEXT: .LBB25_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB25_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB25_6
+; XTENSA-ATOMIC-NEXT: .LBB25_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a9, a9
+; XTENSA-ATOMIC-NEXT: bgeu a9, a11, .LBB25_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB25_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a11, a11
+; XTENSA-ATOMIC-NEXT: .LBB25_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB25_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB25_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB25_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: j .LBB25_1
+; XTENSA-ATOMIC-NEXT: .LBB25_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw umax ptr %p, i32 1 seq_cst, align 4
+ ret i32 %v
+}
+
+define i32 @rmw32_umin_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_umin_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: l32i a12, a2, 0
+; XTENSA-NEXT: movi a6, 1
+; XTENSA-NEXT: movi a5, 2
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a4, .LCPI26_0
+; XTENSA-NEXT: j .LBB26_2
+; XTENSA-NEXT: .LBB26_1: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB26_2 Depth=1
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a4
+; XTENSA-NEXT: l32i a12, a1, 0
+; XTENSA-NEXT: bnez a10, .LBB26_4
+; XTENSA-NEXT: .LBB26_2: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a12, a1, 0
+; XTENSA-NEXT: bltu a12, a5, .LBB26_1
+; XTENSA-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-NEXT: # in Loop: Header=BB26_2 Depth=1
+; XTENSA-NEXT: or a12, a6, a6
+; XTENSA-NEXT: j .LBB26_1
+; XTENSA-NEXT: .LBB26_4: # %atomicrmw.end
+; XTENSA-NEXT: or a2, a12, a12
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_umin_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a12, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 1
+; XTENSA-ATOMIC-NEXT: movi a10, 2
+; XTENSA-ATOMIC-NEXT: movi a11, 0
+; XTENSA-ATOMIC-NEXT: or a8, a12, a12
+; XTENSA-ATOMIC-NEXT: j .LBB26_2
+; XTENSA-ATOMIC-NEXT: .LBB26_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB26_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a13, 1, .LBB26_6
+; XTENSA-ATOMIC-NEXT: .LBB26_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: bltu a12, a10, .LBB26_4
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB26_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a9, a9
+; XTENSA-ATOMIC-NEXT: .LBB26_4: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB26_2 Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a12, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a13, a9, a9
+; XTENSA-ATOMIC-NEXT: beq a8, a12, .LBB26_1
+; XTENSA-ATOMIC-NEXT: # %bb.5: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB26_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a13, a11, a11
+; XTENSA-ATOMIC-NEXT: j .LBB26_1
+; XTENSA-ATOMIC-NEXT: .LBB26_6: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw umin ptr %p, i32 1 seq_cst, align 4
+ ret i32 %v
+}
+
+define i32 @rmw32_xchg_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_xchg_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 32
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a11, 1
+; XTENSA-NEXT: movi a12, 5
+; XTENSA-NEXT: l32r a8, .LCPI27_0
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_xchg_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a11, a2, 0
+; XTENSA-ATOMIC-NEXT: movi a9, 1
+; XTENSA-ATOMIC-NEXT: movi a10, 0
+; XTENSA-ATOMIC-NEXT: j .LBB27_2
+; XTENSA-ATOMIC-NEXT: .LBB27_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB27_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a11, a8, a8
+; XTENSA-ATOMIC-NEXT: beqi a12, 1, .LBB27_4
+; XTENSA-ATOMIC-NEXT: .LBB27_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: wsr a11, scompare1
+; XTENSA-ATOMIC-NEXT: or a8, a9, a9
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a12, a9, a9
+; XTENSA-ATOMIC-NEXT: beq a8, a11, .LBB27_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB27_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a12, a10, a10
+; XTENSA-ATOMIC-NEXT: j .LBB27_1
+; XTENSA-ATOMIC-NEXT: .LBB27_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw xchg ptr %p, i32 1 seq_cst, align 4
+ ret i32 %v
+}
+
+define float @rmw32_fadd_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_fadd_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: l32i a10, a2, 0
+; XTENSA-NEXT: l32r a6, .LCPI28_1
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a5, .LCPI28_2
+; XTENSA-NEXT: .LBB28_1: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a10, a1, 0
+; XTENSA-NEXT: l32r a11, .LCPI28_0
+; XTENSA-NEXT: callx8 a6
+; XTENSA-NEXT: or a12, a10, a10
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: or a8, a10, a10
+; XTENSA-NEXT: l32i a10, a1, 0
+; XTENSA-NEXT: beqz a8, .LBB28_1
+; XTENSA-NEXT: # %bb.2: # %atomicrmw.end
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_fadd_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a7, a2, 0
+; XTENSA-ATOMIC-NEXT: l32r a6, .LCPI28_1
+; XTENSA-ATOMIC-NEXT: movi a5, 0
+; XTENSA-ATOMIC-NEXT: movi a4, 1
+; XTENSA-ATOMIC-NEXT: j .LBB28_2
+; XTENSA-ATOMIC-NEXT: .LBB28_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB28_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a10, a10
+; XTENSA-ATOMIC-NEXT: beqi a8, 1, .LBB28_4
+; XTENSA-ATOMIC-NEXT: .LBB28_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a11, .LCPI28_0
+; XTENSA-ATOMIC-NEXT: or a10, a7, a7
+; XTENSA-ATOMIC-NEXT: callx8 a6
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a10, a2, 0
+; XTENSA-ATOMIC-NEXT: or a8, a4, a4
+; XTENSA-ATOMIC-NEXT: beq a10, a7, .LBB28_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB28_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a5, a5
+; XTENSA-ATOMIC-NEXT: j .LBB28_1
+; XTENSA-ATOMIC-NEXT: .LBB28_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a10, a10
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw fadd ptr %p, float 1.0 seq_cst, align 4
+ ret float %v
+}
+
+define float @rmw32_fsub_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_fsub_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: l32i a10, a2, 0
+; XTENSA-NEXT: l32r a6, .LCPI29_1
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a5, .LCPI29_2
+; XTENSA-NEXT: .LBB29_1: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a10, a1, 0
+; XTENSA-NEXT: l32r a11, .LCPI29_0
+; XTENSA-NEXT: callx8 a6
+; XTENSA-NEXT: or a12, a10, a10
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: or a8, a10, a10
+; XTENSA-NEXT: l32i a10, a1, 0
+; XTENSA-NEXT: beqz a8, .LBB29_1
+; XTENSA-NEXT: # %bb.2: # %atomicrmw.end
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_fsub_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a7, a2, 0
+; XTENSA-ATOMIC-NEXT: l32r a6, .LCPI29_1
+; XTENSA-ATOMIC-NEXT: movi a5, 0
+; XTENSA-ATOMIC-NEXT: movi a4, 1
+; XTENSA-ATOMIC-NEXT: j .LBB29_2
+; XTENSA-ATOMIC-NEXT: .LBB29_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB29_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a10, a10
+; XTENSA-ATOMIC-NEXT: beqi a8, 1, .LBB29_4
+; XTENSA-ATOMIC-NEXT: .LBB29_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a11, .LCPI29_0
+; XTENSA-ATOMIC-NEXT: or a10, a7, a7
+; XTENSA-ATOMIC-NEXT: callx8 a6
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a10, a2, 0
+; XTENSA-ATOMIC-NEXT: or a8, a4, a4
+; XTENSA-ATOMIC-NEXT: beq a10, a7, .LBB29_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB29_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a5, a5
+; XTENSA-ATOMIC-NEXT: j .LBB29_1
+; XTENSA-ATOMIC-NEXT: .LBB29_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a10, a10
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw fsub ptr %p, float 1.0 seq_cst, align 4
+ ret float %v
+}
+
+define float @rmw32_fmin_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_fmin_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: l32i a10, a2, 0
+; XTENSA-NEXT: l32r a6, .LCPI30_1
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a5, .LCPI30_2
+; XTENSA-NEXT: .LBB30_1: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a10, a1, 0
+; XTENSA-NEXT: l32r a11, .LCPI30_0
+; XTENSA-NEXT: callx8 a6
+; XTENSA-NEXT: or a12, a10, a10
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: or a8, a10, a10
+; XTENSA-NEXT: l32i a10, a1, 0
+; XTENSA-NEXT: beqz a8, .LBB30_1
+; XTENSA-NEXT: # %bb.2: # %atomicrmw.end
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_fmin_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a7, a2, 0
+; XTENSA-ATOMIC-NEXT: l32r a6, .LCPI30_1
+; XTENSA-ATOMIC-NEXT: movi a5, 0
+; XTENSA-ATOMIC-NEXT: movi a4, 1
+; XTENSA-ATOMIC-NEXT: j .LBB30_2
+; XTENSA-ATOMIC-NEXT: .LBB30_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB30_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a10, a10
+; XTENSA-ATOMIC-NEXT: beqi a8, 1, .LBB30_4
+; XTENSA-ATOMIC-NEXT: .LBB30_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a11, .LCPI30_0
+; XTENSA-ATOMIC-NEXT: or a10, a7, a7
+; XTENSA-ATOMIC-NEXT: callx8 a6
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a10, a2, 0
+; XTENSA-ATOMIC-NEXT: or a8, a4, a4
+; XTENSA-ATOMIC-NEXT: beq a10, a7, .LBB30_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB30_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a5, a5
+; XTENSA-ATOMIC-NEXT: j .LBB30_1
+; XTENSA-ATOMIC-NEXT: .LBB30_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a10, a10
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw fmin ptr %p, float 1.0 seq_cst, align 4
+ ret float %v
+}
+
+define float @rmw32_fmax_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: rmw32_fmax_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: l32i a10, a2, 0
+; XTENSA-NEXT: l32r a6, .LCPI31_1
+; XTENSA-NEXT: movi a7, 5
+; XTENSA-NEXT: l32r a5, .LCPI31_2
+; XTENSA-NEXT: .LBB31_1: # %atomicrmw.start
+; XTENSA-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-NEXT: s32i a10, a1, 0
+; XTENSA-NEXT: l32r a11, .LCPI31_0
+; XTENSA-NEXT: callx8 a6
+; XTENSA-NEXT: or a12, a10, a10
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: or a13, a7, a7
+; XTENSA-NEXT: or a14, a7, a7
+; XTENSA-NEXT: callx8 a5
+; XTENSA-NEXT: or a8, a10, a10
+; XTENSA-NEXT: l32i a10, a1, 0
+; XTENSA-NEXT: beqz a8, .LBB31_1
+; XTENSA-NEXT: # %bb.2: # %atomicrmw.end
+; XTENSA-NEXT: or a2, a10, a10
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: rmw32_fmax_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: l32i a7, a2, 0
+; XTENSA-ATOMIC-NEXT: l32r a6, .LCPI31_1
+; XTENSA-ATOMIC-NEXT: movi a5, 0
+; XTENSA-ATOMIC-NEXT: movi a4, 1
+; XTENSA-ATOMIC-NEXT: j .LBB31_2
+; XTENSA-ATOMIC-NEXT: .LBB31_1: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB31_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a7, a10, a10
+; XTENSA-ATOMIC-NEXT: beqi a8, 1, .LBB31_4
+; XTENSA-ATOMIC-NEXT: .LBB31_2: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # =>This Inner Loop Header: Depth=1
+; XTENSA-ATOMIC-NEXT: l32r a11, .LCPI31_0
+; XTENSA-ATOMIC-NEXT: or a10, a7, a7
+; XTENSA-ATOMIC-NEXT: callx8 a6
+; XTENSA-ATOMIC-NEXT: wsr a7, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a10, a2, 0
+; XTENSA-ATOMIC-NEXT: or a8, a4, a4
+; XTENSA-ATOMIC-NEXT: beq a10, a7, .LBB31_1
+; XTENSA-ATOMIC-NEXT: # %bb.3: # %atomicrmw.start
+; XTENSA-ATOMIC-NEXT: # in Loop: Header=BB31_2 Depth=1
+; XTENSA-ATOMIC-NEXT: or a8, a5, a5
+; XTENSA-ATOMIC-NEXT: j .LBB31_1
+; XTENSA-ATOMIC-NEXT: .LBB31_4: # %atomicrmw.end
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a10, a10
+; XTENSA-ATOMIC-NEXT: retw
+ %v = atomicrmw fmax ptr %p, float 1.0 seq_cst, align 4
+ ret float %v
+}
+
+define i32 @cmpxchg32_monotonic(ptr %p) nounwind {
+; XTENSA-LABEL: cmpxchg32_monotonic:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a13, 0
+; XTENSA-NEXT: s32i a13, a1, 0
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: movi a12, 1
+; XTENSA-NEXT: l32r a8, .LCPI32_0
+; XTENSA-NEXT: or a14, a13, a13
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: cmpxchg32_monotonic:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: movi a8, 1
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: wsr a9, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = cmpxchg ptr %p, i32 0, i32 1 monotonic monotonic
+ %res.0 = extractvalue { i32, i1 } %res, 0
+ ret i32 %res.0
+}
+
+define i32 @cmpxchg32_seq_cst(ptr %p) nounwind {
+; XTENSA-LABEL: cmpxchg32_seq_cst:
+; XTENSA: # %bb.0:
+; XTENSA-NEXT: entry a1, 48
+; XTENSA-NEXT: or a10, a2, a2
+; XTENSA-NEXT: movi a8, 0
+; XTENSA-NEXT: s32i a8, a1, 0
+; XTENSA-NEXT: addi a11, a1, 0
+; XTENSA-NEXT: movi a12, 1
+; XTENSA-NEXT: movi a13, 5
+; XTENSA-NEXT: l32r a8, .LCPI33_0
+; XTENSA-NEXT: or a14, a13, a13
+; XTENSA-NEXT: callx8 a8
+; XTENSA-NEXT: l32i a2, a1, 0
+; XTENSA-NEXT: retw
+;
+; XTENSA-ATOMIC-LABEL: cmpxchg32_seq_cst:
+; XTENSA-ATOMIC: # %bb.0:
+; XTENSA-ATOMIC-NEXT: entry a1, 32
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: movi a8, 1
+; XTENSA-ATOMIC-NEXT: movi a9, 0
+; XTENSA-ATOMIC-NEXT: wsr a9, scompare1
+; XTENSA-ATOMIC-NEXT: s32c1i a8, a2, 0
+; XTENSA-ATOMIC-NEXT: memw
+; XTENSA-ATOMIC-NEXT: or a2, a8, a8
+; XTENSA-ATOMIC-NEXT: retw
+ %res = cmpxchg ptr %p, i32 0, i32 1 seq_cst seq_cst
+ %res.0 = extractvalue { i32, i1 } %res, 0
+ ret i32 %res.0
+}
diff --git a/llvm/test/Transforms/AtomicExpand/Xtensa/atomicrmw-expand.ll b/llvm/test/Transforms/AtomicExpand/Xtensa/atomicrmw-expand.ll
new file mode 100644
index 0000000000000..59916cc9e8e91
--- /dev/null
+++ b/llvm/test/Transforms/AtomicExpand/Xtensa/atomicrmw-expand.ll
@@ -0,0 +1,2643 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 5
+; RUN: opt -S -mtriple=xtensa-- -passes=atomic-expand %s | FileCheck %s
+
+define i8 @atomicrmw_xchg_i8_monotonic(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_xchg_i8_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0:[0-9]+]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_exchange_1(ptr [[A]], i8 [[B]], i32 0)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xchg_i8_acquire(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_xchg_i8_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_exchange_1(ptr [[A]], i8 [[B]], i32 2)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xchg_i8_release(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_xchg_i8_release(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_exchange_1(ptr [[A]], i8 [[B]], i32 3)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xchg_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_xchg_i8_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_exchange_1(ptr [[A]], i8 [[B]], i32 4)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xchg_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_xchg_i8_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_exchange_1(ptr [[A]], i8 [[B]], i32 5)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_add_i8_monotonic(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_add_i8_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_add_1(ptr [[A]], i8 [[B]], i32 0)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_add_i8_acquire(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_add_i8_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_add_1(ptr [[A]], i8 [[B]], i32 2)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_add_i8_release(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_add_i8_release(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_add_1(ptr [[A]], i8 [[B]], i32 3)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_add_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_add_i8_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_add_1(ptr [[A]], i8 [[B]], i32 4)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_add_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_add_i8_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_add_1(ptr [[A]], i8 [[B]], i32 5)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_sub_i8_monotonic(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_sub_i8_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_sub_1(ptr [[A]], i8 [[B]], i32 0)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_sub_i8_acquire(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_sub_i8_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_sub_1(ptr [[A]], i8 [[B]], i32 2)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_sub_i8_release(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_sub_i8_release(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_sub_1(ptr [[A]], i8 [[B]], i32 3)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_sub_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_sub_i8_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_sub_1(ptr [[A]], i8 [[B]], i32 4)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_sub_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_sub_i8_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_sub_1(ptr [[A]], i8 [[B]], i32 5)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_and_i8_monotonic(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_and_i8_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_and_1(ptr [[A]], i8 [[B]], i32 0)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_and_i8_acquire(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_and_i8_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_and_1(ptr [[A]], i8 [[B]], i32 2)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_and_i8_release(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_and_i8_release(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_and_1(ptr [[A]], i8 [[B]], i32 3)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_and_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_and_i8_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_and_1(ptr [[A]], i8 [[B]], i32 4)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_and_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_and_i8_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_and_1(ptr [[A]], i8 [[B]], i32 5)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_nand_i8_monotonic(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_nand_i8_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_nand_1(ptr [[A]], i8 [[B]], i32 0)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw nand ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_nand_i8_acquire(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_nand_i8_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_nand_1(ptr [[A]], i8 [[B]], i32 2)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw nand ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_nand_i8_release(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_nand_i8_release(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_nand_1(ptr [[A]], i8 [[B]], i32 3)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw nand ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_nand_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_nand_i8_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_nand_1(ptr [[A]], i8 [[B]], i32 4)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw nand ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_nand_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_nand_i8_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_nand_1(ptr [[A]], i8 [[B]], i32 5)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw nand ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_or_i8_monotonic(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_or_i8_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_or_1(ptr [[A]], i8 [[B]], i32 0)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_or_i8_acquire(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_or_i8_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_or_1(ptr [[A]], i8 [[B]], i32 2)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_or_i8_release(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_or_i8_release(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_or_1(ptr [[A]], i8 [[B]], i32 3)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_or_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_or_i8_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_or_1(ptr [[A]], i8 [[B]], i32 4)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_or_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_or_i8_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_or_1(ptr [[A]], i8 [[B]], i32 5)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xor_i8_monotonic(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_xor_i8_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_xor_1(ptr [[A]], i8 [[B]], i32 0)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xor_i8_acquire(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_xor_i8_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_xor_1(ptr [[A]], i8 [[B]], i32 2)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xor_i8_release(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_xor_i8_release(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_xor_1(ptr [[A]], i8 [[B]], i32 3)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xor_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_xor_i8_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_xor_1(ptr [[A]], i8 [[B]], i32 4)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_xor_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_xor_i8_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i8 @__atomic_fetch_xor_1(ptr [[A]], i8 [[B]], i32 5)
+; CHECK-NEXT: ret i8 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_max_i8_monotonic(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_max_i8_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 0, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_max_i8_acquire(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_max_i8_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 2, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_max_i8_release(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_max_i8_release(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 3, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_max_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_max_i8_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 4, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_max_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_max_i8_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 5, i32 5)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_min_i8_monotonic(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_min_i8_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 0, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_min_i8_acquire(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_min_i8_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 2, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_min_i8_release(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_min_i8_release(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 3, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_min_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_min_i8_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 4, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_min_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_min_i8_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 5, i32 5)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umax_i8_monotonic(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_umax_i8_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 0, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umax_i8_acquire(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_umax_i8_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 2, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umax_i8_release(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_umax_i8_release(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 3, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umax_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_umax_i8_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 4, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umax_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_umax_i8_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 5, i32 5)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umin_i8_monotonic(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_umin_i8_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 0, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i8 %b monotonic
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umin_i8_acquire(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_umin_i8_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 2, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i8 %b acquire
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umin_i8_release(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_umin_i8_release(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 3, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i8 %b release
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umin_i8_acq_rel(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_umin_i8_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 4, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i8 %b acq_rel
+ ret i8 %res
+}
+
+define i8 @atomicrmw_umin_i8_seq_cst(ptr %a, i8 %b) nounwind {
+; CHECK-LABEL: define i8 @atomicrmw_umin_i8_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i8, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr [[A]], align 1
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i8 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i8 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i8 [[LOADED]], i8 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: store i8 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_1(ptr [[A]], ptr [[TMP1]], i8 [[NEW]], i32 5, i32 5)
+; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 1, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i8, i1 } poison, i8 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i8, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i8, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i8, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i8 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i8 %b seq_cst
+ ret i8 %res
+}
+
+define i16 @atomicrmw_xchg_i16_monotonic(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_xchg_i16_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_exchange_2(ptr [[A]], i16 [[B]], i32 0)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xchg_i16_acquire(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_xchg_i16_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_exchange_2(ptr [[A]], i16 [[B]], i32 2)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xchg_i16_release(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_xchg_i16_release(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_exchange_2(ptr [[A]], i16 [[B]], i32 3)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xchg_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_xchg_i16_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_exchange_2(ptr [[A]], i16 [[B]], i32 4)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xchg_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_xchg_i16_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_exchange_2(ptr [[A]], i16 [[B]], i32 5)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_add_i16_monotonic(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_add_i16_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_add_2(ptr [[A]], i16 [[B]], i32 0)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_add_i16_acquire(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_add_i16_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_add_2(ptr [[A]], i16 [[B]], i32 2)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_add_i16_release(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_add_i16_release(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_add_2(ptr [[A]], i16 [[B]], i32 3)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_add_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_add_i16_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_add_2(ptr [[A]], i16 [[B]], i32 4)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_add_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_add_i16_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_add_2(ptr [[A]], i16 [[B]], i32 5)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_sub_i16_monotonic(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_sub_i16_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_sub_2(ptr [[A]], i16 [[B]], i32 0)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_sub_i16_acquire(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_sub_i16_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_sub_2(ptr [[A]], i16 [[B]], i32 2)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_sub_i16_release(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_sub_i16_release(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_sub_2(ptr [[A]], i16 [[B]], i32 3)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_sub_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_sub_i16_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_sub_2(ptr [[A]], i16 [[B]], i32 4)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_sub_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_sub_i16_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_sub_2(ptr [[A]], i16 [[B]], i32 5)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_and_i16_monotonic(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_and_i16_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_and_2(ptr [[A]], i16 [[B]], i32 0)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_and_i16_acquire(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_and_i16_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_and_2(ptr [[A]], i16 [[B]], i32 2)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_and_i16_release(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_and_i16_release(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_and_2(ptr [[A]], i16 [[B]], i32 3)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_and_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_and_i16_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_and_2(ptr [[A]], i16 [[B]], i32 4)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_and_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_and_i16_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_and_2(ptr [[A]], i16 [[B]], i32 5)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_nand_i16_monotonic(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_nand_i16_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_nand_2(ptr [[A]], i16 [[B]], i32 0)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw nand ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_nand_i16_acquire(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_nand_i16_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_nand_2(ptr [[A]], i16 [[B]], i32 2)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw nand ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_nand_i16_release(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_nand_i16_release(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_nand_2(ptr [[A]], i16 [[B]], i32 3)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw nand ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_nand_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_nand_i16_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_nand_2(ptr [[A]], i16 [[B]], i32 4)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw nand ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_nand_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_nand_i16_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_nand_2(ptr [[A]], i16 [[B]], i32 5)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw nand ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_or_i16_monotonic(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_or_i16_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_or_2(ptr [[A]], i16 [[B]], i32 0)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_or_i16_acquire(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_or_i16_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_or_2(ptr [[A]], i16 [[B]], i32 2)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_or_i16_release(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_or_i16_release(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_or_2(ptr [[A]], i16 [[B]], i32 3)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_or_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_or_i16_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_or_2(ptr [[A]], i16 [[B]], i32 4)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_or_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_or_i16_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_or_2(ptr [[A]], i16 [[B]], i32 5)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xor_i16_monotonic(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_xor_i16_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_xor_2(ptr [[A]], i16 [[B]], i32 0)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xor_i16_acquire(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_xor_i16_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_xor_2(ptr [[A]], i16 [[B]], i32 2)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xor_i16_release(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_xor_i16_release(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_xor_2(ptr [[A]], i16 [[B]], i32 3)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xor_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_xor_i16_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_xor_2(ptr [[A]], i16 [[B]], i32 4)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_xor_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_xor_i16_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i16 @__atomic_fetch_xor_2(ptr [[A]], i16 [[B]], i32 5)
+; CHECK-NEXT: ret i16 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_max_i16_monotonic(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_max_i16_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 0, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_max_i16_acquire(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_max_i16_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 2, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_max_i16_release(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_max_i16_release(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 3, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_max_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_max_i16_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 4, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_max_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_max_i16_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 5, i32 5)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_min_i16_monotonic(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_min_i16_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 0, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_min_i16_acquire(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_min_i16_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 2, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_min_i16_release(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_min_i16_release(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 3, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_min_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_min_i16_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 4, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_min_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_min_i16_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 5, i32 5)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umax_i16_monotonic(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_umax_i16_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 0, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umax_i16_acquire(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_umax_i16_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 2, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umax_i16_release(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_umax_i16_release(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 3, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umax_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_umax_i16_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 4, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umax_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_umax_i16_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 5, i32 5)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umin_i16_monotonic(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_umin_i16_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 0, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i16 %b monotonic
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umin_i16_acquire(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_umin_i16_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 2, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i16 %b acquire
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umin_i16_release(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_umin_i16_release(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 3, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i16 %b release
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umin_i16_acq_rel(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_umin_i16_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 4, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i16 %b acq_rel
+ ret i16 %res
+}
+
+define i16 @atomicrmw_umin_i16_seq_cst(ptr %a, i16 %b) nounwind {
+; CHECK-LABEL: define i16 @atomicrmw_umin_i16_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i16, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr [[A]], align 2
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i16 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i16 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i16 [[LOADED]], i16 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: store i16 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_2(ptr [[A]], ptr [[TMP1]], i16 [[NEW]], i32 5, i32 5)
+; CHECK-NEXT: [[TMP5:%.*]] = load i16, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 2, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i16, i1 } poison, i16 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i16, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i16, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i16, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i16 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i16 %b seq_cst
+ ret i16 %res
+}
+
+define i32 @atomicrmw_xchg_i32_monotonic(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_xchg_i32_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_exchange_4(ptr [[A]], i32 [[B]], i32 0)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xchg_i32_acquire(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_xchg_i32_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_exchange_4(ptr [[A]], i32 [[B]], i32 2)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xchg_i32_release(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_xchg_i32_release(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_exchange_4(ptr [[A]], i32 [[B]], i32 3)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xchg_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_xchg_i32_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_exchange_4(ptr [[A]], i32 [[B]], i32 4)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xchg_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_xchg_i32_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_exchange_4(ptr [[A]], i32 [[B]], i32 5)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw xchg ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_add_i32_monotonic(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_add_i32_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_add_4(ptr [[A]], i32 [[B]], i32 0)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_add_i32_acquire(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_add_i32_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_add_4(ptr [[A]], i32 [[B]], i32 2)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_add_i32_release(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_add_i32_release(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_add_4(ptr [[A]], i32 [[B]], i32 3)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_add_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_add_i32_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_add_4(ptr [[A]], i32 [[B]], i32 4)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_add_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_add_i32_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_add_4(ptr [[A]], i32 [[B]], i32 5)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw add ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_sub_i32_monotonic(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_sub_i32_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_sub_4(ptr [[A]], i32 [[B]], i32 0)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_sub_i32_acquire(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_sub_i32_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_sub_4(ptr [[A]], i32 [[B]], i32 2)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_sub_i32_release(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_sub_i32_release(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_sub_4(ptr [[A]], i32 [[B]], i32 3)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_sub_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_sub_i32_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_sub_4(ptr [[A]], i32 [[B]], i32 4)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_sub_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_sub_i32_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_sub_4(ptr [[A]], i32 [[B]], i32 5)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw sub ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_and_i32_monotonic(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_and_i32_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_and_4(ptr [[A]], i32 [[B]], i32 0)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_and_i32_acquire(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_and_i32_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_and_4(ptr [[A]], i32 [[B]], i32 2)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_and_i32_release(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_and_i32_release(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_and_4(ptr [[A]], i32 [[B]], i32 3)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_and_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_and_i32_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_and_4(ptr [[A]], i32 [[B]], i32 4)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_and_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_and_i32_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_and_4(ptr [[A]], i32 [[B]], i32 5)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw and ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+;define i32 @atomicrmw_nand_i32_monotonic(ptr %a, i32 %b) nounwind {
+; %res = atomicrmw nand ptr %a, i32 %b monotonic
+; ret i32 %res
+;}
+;define i32 @atomicrmw_nand_i32_acquire(ptr %a, i32 %b) nounwind {
+; %res = atomicrmw nand ptr %a, i32 %b acquire
+; ret i32 %res
+;}
+;define i32 @atomicrmw_nand_i32_release(ptr %a, i32 %b) nounwind {
+; %res = atomicrmw nand ptr %a, i32 %b release
+; ret i32 %res
+;}
+;define i32 @atomicrmw_nand_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; %res = atomicrmw nand ptr %a, i32 %b acq_rel
+; ret i32 %res
+;}
+;define i32 @atomicrmw_nand_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; %res = atomicrmw nand ptr %a, i32 %b seq_cst
+; ret i32 %res
+;}
+
+define i32 @atomicrmw_or_i32_monotonic(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_or_i32_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_or_4(ptr [[A]], i32 [[B]], i32 0)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_or_i32_acquire(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_or_i32_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_or_4(ptr [[A]], i32 [[B]], i32 2)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_or_i32_release(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_or_i32_release(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_or_4(ptr [[A]], i32 [[B]], i32 3)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_or_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_or_i32_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_or_4(ptr [[A]], i32 [[B]], i32 4)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_or_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_or_i32_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_or_4(ptr [[A]], i32 [[B]], i32 5)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw or ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xor_i32_monotonic(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_xor_i32_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_xor_4(ptr [[A]], i32 [[B]], i32 0)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xor_i32_acquire(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_xor_i32_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_xor_4(ptr [[A]], i32 [[B]], i32 2)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xor_i32_release(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_xor_i32_release(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_xor_4(ptr [[A]], i32 [[B]], i32 3)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xor_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_xor_i32_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_xor_4(ptr [[A]], i32 [[B]], i32 4)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_xor_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_xor_i32_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = call i32 @__atomic_fetch_xor_4(ptr [[A]], i32 [[B]], i32 5)
+; CHECK-NEXT: ret i32 [[TMP1]]
+;
+ %res = atomicrmw xor ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_max_i32_monotonic(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_max_i32_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 0, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_max_i32_acquire(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_max_i32_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 2, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_max_i32_release(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_max_i32_release(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 3, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_max_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_max_i32_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 4, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_max_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_max_i32_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sgt i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 5, i32 5)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw max ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_min_i32_monotonic(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_min_i32_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 0, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_min_i32_acquire(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_min_i32_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 2, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_min_i32_release(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_min_i32_release(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 3, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_min_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_min_i32_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 4, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_min_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_min_i32_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sle i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 5, i32 5)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw min ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umax_i32_monotonic(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_umax_i32_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 0, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umax_i32_acquire(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_umax_i32_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 2, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umax_i32_release(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_umax_i32_release(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 3, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umax_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_umax_i32_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 4, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umax_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_umax_i32_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ugt i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 5, i32 5)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw umax ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umin_i32_monotonic(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_umin_i32_monotonic(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 0, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i32 %b monotonic
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umin_i32_acquire(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_umin_i32_acquire(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 2, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i32 %b acquire
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umin_i32_release(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_umin_i32_release(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 3, i32 0)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i32 %b release
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umin_i32_acq_rel(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_umin_i32_acq_rel(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 4, i32 2)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i32 %b acq_rel
+ ret i32 %res
+}
+
+define i32 @atomicrmw_umin_i32_seq_cst(ptr %a, i32 %b) nounwind {
+; CHECK-LABEL: define i32 @atomicrmw_umin_i32_seq_cst(
+; CHECK-SAME: ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[TMP1:%.*]] = alloca i32, align 4
+; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr [[A]], align 4
+; CHECK-NEXT: br label %[[ATOMICRMW_START:.*]]
+; CHECK: [[ATOMICRMW_START]]:
+; CHECK-NEXT: [[LOADED:%.*]] = phi i32 [ [[TMP2]], [[TMP0:%.*]] ], [ [[NEWLOADED:%.*]], %[[ATOMICRMW_START]] ]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ule i32 [[LOADED]], [[B]]
+; CHECK-NEXT: [[NEW:%.*]] = select i1 [[TMP3]], i32 [[LOADED]], i32 [[B]]
+; CHECK-NEXT: call void @llvm.lifetime.start.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: store i32 [[LOADED]], ptr [[TMP1]], align 4
+; CHECK-NEXT: [[TMP4:%.*]] = call zeroext i1 @__atomic_compare_exchange_4(ptr [[A]], ptr [[TMP1]], i32 [[NEW]], i32 5, i32 5)
+; CHECK-NEXT: [[TMP5:%.*]] = load i32, ptr [[TMP1]], align 4
+; CHECK-NEXT: call void @llvm.lifetime.end.p0(i64 4, ptr [[TMP1]])
+; CHECK-NEXT: [[TMP6:%.*]] = insertvalue { i32, i1 } poison, i32 [[TMP5]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = insertvalue { i32, i1 } [[TMP6]], i1 [[TMP4]], 1
+; CHECK-NEXT: [[SUCCESS:%.*]] = extractvalue { i32, i1 } [[TMP7]], 1
+; CHECK-NEXT: [[NEWLOADED]] = extractvalue { i32, i1 } [[TMP7]], 0
+; CHECK-NEXT: br i1 [[SUCCESS]], label %[[ATOMICRMW_END:.*]], label %[[ATOMICRMW_START]]
+; CHECK: [[ATOMICRMW_END]]:
+; CHECK-NEXT: ret i32 [[NEWLOADED]]
+;
+ %res = atomicrmw umin ptr %a, i32 %b seq_cst
+ ret i32 %res
+}
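
A note on the shape of the expansions above: operations the target
cannot lower natively become either a sized `__atomic_*` libcall
(`xchg`, `add`, `sub`, `and`, `or`, `xor`) or, for the
`min`/`max`/`umin`/`umax` flavours, a compare-exchange loop around
`__atomic_compare_exchange_N`. A minimal C++ sketch of that loop,
written with the GCC/Clang `__atomic` builtins (the function name and
the fixed seq_cst ordering are illustrative assumptions, not taken
from the tests):

```
#include <cstdint>

// Sketch of the `atomicrmw umin` expansion on a 32-bit value: retry a
// strong compare-exchange until memory holds the unsigned minimum of
// the old value and the operand, then return the old value.
std::uint32_t atomicrmw_umin_u32(std::uint32_t *a, std::uint32_t b) {
  std::uint32_t loaded = __atomic_load_n(a, __ATOMIC_RELAXED);
  for (;;) {
    // `icmp ule` + `select` from the CHECK lines, i.e. unsigned min.
    std::uint32_t desired = loaded <= b ? loaded : b;
    // On failure the builtin rewrites `loaded` with the value currently
    // in memory, playing the role of the [[NEWLOADED]] phi above.
    if (__atomic_compare_exchange_n(a, &loaded, desired, /*weak=*/false,
                                    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST))
      return loaded; // the value observed before the successful update
  }
}
```

The trailing integer arguments of the libcalls follow the `__ATOMIC_*`
encoding (0 monotonic/relaxed, 2 acquire, 3 release, 4 acq_rel,
5 seq_cst), which is why the release tests pass `i32 3, i32 0` and the
acq_rel tests pass `i32 4, i32 2`: the failure ordering drops any
release component.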
diff --git a/llvm/test/Transforms/AtomicExpand/Xtensa/lit.local.cfg b/llvm/test/Transforms/AtomicExpand/Xtensa/lit.local.cfg
new file mode 100644
index 0000000000000..e81bfa773f36a
--- /dev/null
+++ b/llvm/test/Transforms/AtomicExpand/Xtensa/lit.local.cfg
@@ -0,0 +1,2 @@
+if not "Xtensa" in config.root.targets:
+ config.unsupported = True
From 4f166513df06447888b692cbffbd6cf9db69ec9b Mon Sep 17 00:00:00 2001
From: Morris Hafner <mmha at users.noreply.github.com>
Date: Wed, 6 Aug 2025 16:43:42 +0200
Subject: [PATCH 092/171] [CIR] Upstream support for variable template
specializations (#151069)
---
clang/lib/CIR/CodeGen/CIRGenModule.cpp | 5 ++-
.../variable-template-specialization.cpp | 40 +++++++++++++++++++
2 files changed, 44 insertions(+), 1 deletion(-)
create mode 100644 clang/test/CIR/CodeGen/variable-template-specialization.cpp
diff --git a/clang/lib/CIR/CodeGen/CIRGenModule.cpp b/clang/lib/CIR/CodeGen/CIRGenModule.cpp
index b143682e435d2..425250db87da6 100644
--- a/clang/lib/CIR/CodeGen/CIRGenModule.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenModule.cpp
@@ -1307,7 +1307,8 @@ void CIRGenModule::emitTopLevelDecl(Decl *decl) {
}
case Decl::Var:
- case Decl::Decomposition: {
+ case Decl::Decomposition:
+ case Decl::VarTemplateSpecialization: {
auto *vd = cast<VarDecl>(decl);
if (isa<DecompositionDecl>(decl)) {
errorNYI(decl->getSourceRange(), "global variable decompositions");
@@ -1342,6 +1343,8 @@ void CIRGenModule::emitTopLevelDecl(Decl *decl) {
case Decl::StaticAssert:
case Decl::TypeAliasTemplate:
case Decl::UsingShadow:
+ case Decl::VarTemplate:
+ case Decl::VarTemplatePartialSpecialization:
break;
case Decl::CXXConstructor:
diff --git a/clang/test/CIR/CodeGen/variable-template-specialization.cpp b/clang/test/CIR/CodeGen/variable-template-specialization.cpp
new file mode 100644
index 0000000000000..c13ab51ec64c9
--- /dev/null
+++ b/clang/test/CIR/CodeGen/variable-template-specialization.cpp
@@ -0,0 +1,40 @@
+// RUN: %clang_cc1 -std=c++14 -triple nvptx64-unknown-unknown -fclangir -emit-cir %s -o %t.cir
+// RUN: FileCheck --check-prefix=CIR --input-file=%t.cir %s
+// RUN: %clang_cc1 -std=c++14 -triple nvptx64-unknown-unknown -fclangir -emit-llvm %s -o %t-cir.ll
+// RUN: FileCheck --check-prefix=LLVM --input-file=%t-cir.ll %s
+// RUN: %clang_cc1 -std=c++14 -triple nvptx64-unknown-unknown -emit-llvm %s -o %t.ll
+// RUN: FileCheck --check-prefix=OGCG --input-file=%t.ll %s
+
+struct some_struct {
+ int x;
+};
+
+template<int I>
+int var_template;
+
+template<> int var_template<0>;
+template<> int var_template<1> = 1;
+template<> some_struct var_template<2>;
+
+// CIR: !rec_some_struct = !cir.record<struct "some_struct" {!s32i}>
+// CIR: cir.global external @_Z12var_templateILi0EE = #cir.int<0> : !s32i
+// CIR: cir.global external @_Z12var_templateILi1EE = #cir.int<1> : !s32i
+// CIR: cir.global external @_Z12var_templateILi2EE = #cir.zero : !rec_some_struct
+
+// LLVM: %[[STRUCT_TYPE:.+]] = type { i32 }
+// LLVM: @_Z12var_templateILi0EE = global i32 0
+// LLVM: @_Z12var_templateILi1EE = global i32 1
+// LLVM: @_Z12var_templateILi2EE = global %[[STRUCT_TYPE]] zeroinitializer
+
+// OGCG: %[[STRUCT_TYPE:.+]] = type { i32 }
+// OGCG: @_Z12var_templateILi0EE = global i32 0
+// OGCG: @_Z12var_templateILi1EE = global i32 1
+// OGCG: @_Z12var_templateILi2EE = global %[[STRUCT_TYPE]] zeroinitializer
+
+template<typename T, int I> int partial_var_template_specialization_shouldnt_hit_codegen;
+template<typename T> int partial_var_template_specialization_shouldnt_hit_codegen<T, 123>;
+template<int I> float partial_var_template_specialization_shouldnt_hit_codegen<float, I>;
+
+// CIR-NOT: partial_var_template_specialization_shouldnt_hit_codegen
+// LLVM-NOT: partial_var_template_specialization_shouldnt_hit_codegen
+// OGCG-NOT: partial_var_template_specialization_shouldnt_hit_codegen
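
A note on why the `CIR-NOT`/`LLVM-NOT`/`OGCG-NOT` checks above hold: an
explicit specialization of a variable template is itself a variable
definition, so it reaches codegen directly, whereas a partial
specialization is only a pattern and produces no symbol until something
instantiates it. A small standalone sketch with hypothetical names,
separate from the test above:

```
template <int I> int vt = I;   // primary template: a pattern, no symbol
template <> int vt<1> = 42;    // explicit specialization: a real global

template <typename T, int I> int pvt; // primary pattern
template <typename T> int pvt<T, 0>;  // partial specialization: still
                                      // only a pattern, no symbol

int use() { return vt<1> + vt<2>; }   // vt<2> instantiated from primary
```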
From c3d24217bf00f97155b145ec3d29d272707981df Mon Sep 17 00:00:00 2001
From: Jonathan Thackray <jonathan.thackray at arm.com>
Date: Wed, 6 Aug 2025 15:44:15 +0100
Subject: [PATCH 093/171] [AArch64][llvm] Fix disassembly of `ldt{add,set,clr}`
instructions using `xzr/wzr` (#152292)
The current disassembly of `ldt{add,set,clr}` instructions when using
`xzr/wzr` is incorrect. The Armv9.6-A Memory Systems specification says:
```
For each of LDT{ADD|SET|CLR}{L}, there is the corresponding STT{ADD|SET|CLR}{L}
alias, for the case where the register selected by the Rt field is XZR or WZR
```
and:
```
LDT{ADD|SET|CLR}{A}{L} is equivalent to LD{ADD|SET|CLR}{A}{L} except that: <..conditions..>
```
The Arm ARM specifies the preferred form of disassembly for these
aliases:
```
STADD <Xs>, [<Xn|SP>]
is equivalent to
LDADD <Xs>, XZR, [<Xn|SP>]
and is always the preferred disassembly.
```
(ref: DDI 0487L.b C6-2317)
This means that `sttadd` is the preferred disassembly for `ldtadd w0,
wzr, [x2]` whenever the register in the Rt field is `xzr` or `wzr`.
Mechanically, the fix drops the explicit emit priority of `0` from the
`BaseSTOPregisterLSUI` `InstAlias`, which had made the `stt*` aliases
parse-only; with the default priority the instruction printer now
prefers them.
This change also aligns llvm disassembly with GNU binutils, as shown by
the following examples:
llvm before this change:
```
% cat test.s
stadd w0, [sp]
sttadd w0, [sp]
ldadd w0, wzr, [sp]
ldtadd w0, wzr, [sp]
% llvm-mc-20 -triple aarch64 -mattr=+lse,+lsui test.s
stadd w0, [sp]
ldtadd w0, wzr, [sp]
stadd w0, [sp]
ldtadd w0, wzr, [sp]
```
llvm after this change:
```
% llvm-mc -triple aarch64 -mattr=+lse,+lsui test.s
stadd w0, [sp]
sttadd w0, [sp]
stadd w0, [sp]
sttadd w0, [sp]
```
GNU binutils (GCC 15 toolchain) test, using gas/objdump:
```
% gas test.s -march=armv8-a+lsui+lse -o test.o
% objdump -dr test.o
0: b82003ff stadd w0, [sp]
4: 192007ff sttadd w0, [sp]
8: b82003ff stadd w0, [sp]
c: 192007ff sttadd w0, [sp]
```
Many thanks to Ezra Sitorus and Alice Carlotti for reporting and
confirming this issue.
---
.../lib/Target/AArch64/AArch64InstrFormats.td | 2 +-
llvm/test/MC/AArch64/armv9.6a-lsui.s | 72 +++++------
.../MC/Disassembler/AArch64/armv9.6a-lsui.txt | 120 +++++++++---------
3 files changed, 97 insertions(+), 97 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64InstrFormats.td b/llvm/lib/Target/AArch64/AArch64InstrFormats.td
index 5a537f227760f..d068a12c7f7d5 100644
--- a/llvm/lib/Target/AArch64/AArch64InstrFormats.td
+++ b/llvm/lib/Target/AArch64/AArch64InstrFormats.td
@@ -12564,7 +12564,7 @@ multiclass STOPregister<string asm, string instr> {
let Predicates = [HasLSUI] in
class BaseSTOPregisterLSUI<string asm, RegisterClass OP, Register Reg,
Instruction inst> :
- InstAlias<asm # "\t$Rs, [$Rn]", (inst Reg, OP:$Rs, GPR64sp:$Rn), 0>;
+ InstAlias<asm # "\t$Rs, [$Rn]", (inst Reg, OP:$Rs, GPR64sp:$Rn)>;
multiclass STOPregisterLSUI<string asm, string instr> {
def : BaseSTOPregisterLSUI<asm # "l", GPR32, WZR,
diff --git a/llvm/test/MC/AArch64/armv9.6a-lsui.s b/llvm/test/MC/AArch64/armv9.6a-lsui.s
index d4a5e1f980560..dcd2693d0a021 100644
--- a/llvm/test/MC/AArch64/armv9.6a-lsui.s
+++ b/llvm/test/MC/AArch64/armv9.6a-lsui.s
@@ -212,10 +212,10 @@ _func:
//------------------------------------------------------------------------------
ldtadd w7, wzr, [x5]
-// CHECK: ldtadd w7, wzr, [x5] // encoding: [0xbf,0x04,0x27,0x19]
+// CHECK: sttadd w7, [x5] // encoding: [0xbf,0x04,0x27,0x19]
// ERROR: instruction requires: lsui
ldtadd x9, xzr, [sp]
-// CHECK: ldtadd x9, xzr, [sp] // encoding: [0xff,0x07,0x29,0x59]
+// CHECK: sttadd x9, [sp] // encoding: [0xff,0x07,0x29,0x59]
// ERROR: instruction requires: lsui
ldtadda w7, wzr, [x5]
@@ -226,10 +226,10 @@ _func:
// ERROR: instruction requires: lsui
ldtaddl w7, wzr, [x5]
-// CHECK: ldtaddl w7, wzr, [x5] // encoding: [0xbf,0x04,0x67,0x19]
+// CHECK: sttaddl w7, [x5] // encoding: [0xbf,0x04,0x67,0x19]
// ERROR: instruction requires: lsui
ldtaddl x9, xzr, [sp]
-// CHECK: ldtaddl x9, xzr, [sp] // encoding: [0xff,0x07,0x69,0x59]
+// CHECK: sttaddl x9, [sp] // encoding: [0xff,0x07,0x69,0x59]
// ERROR: instruction requires: lsui
ldtaddal w7, wzr, [x5]
@@ -240,17 +240,17 @@ _func:
// ERROR: instruction requires: lsui
ldtclr w7, wzr, [x5]
-// CHECK: ldtclr w7, wzr, [x5] // encoding: [0xbf,0x14,0x27,0x19]
+// CHECK: sttclr w7, [x5] // encoding: [0xbf,0x14,0x27,0x19]
// ERROR: instruction requires: lsui
ldtclr x9, xzr, [sp]
-// CHECK: ldtclr x9, xzr, [sp] // encoding: [0xff,0x17,0x29,0x59]
+// CHECK: sttclr x9, [sp] // encoding: [0xff,0x17,0x29,0x59]
// ERROR: instruction requires: lsui
ldtclrl w7, wzr, [x5]
-// CHECK: ldtclrl w7, wzr, [x5] // encoding: [0xbf,0x14,0x67,0x19]
+// CHECK: sttclrl w7, [x5] // encoding: [0xbf,0x14,0x67,0x19]
// ERROR: instruction requires: lsui
ldtclrl x9, xzr, [sp]
-// CHECK: ldtclrl x9, xzr, [sp] // encoding: [0xff,0x17,0x69,0x59]
+// CHECK: sttclrl x9, [sp] // encoding: [0xff,0x17,0x69,0x59]
// ERROR: instruction requires: lsui
ldtclra w7, wzr, [x5]
@@ -268,17 +268,17 @@ _func:
// ERROR: instruction requires: lsui
ldtset w7, wzr, [x5]
-// CHECK: ldtset w7, wzr, [x5] // encoding: [0xbf,0x34,0x27,0x19]
+// CHECK: sttset w7, [x5] // encoding: [0xbf,0x34,0x27,0x19]
// ERROR: instruction requires: lsui
ldtset x9, xzr, [sp]
-// CHECK: ldtset x9, xzr, [sp] // encoding: [0xff,0x37,0x29,0x59]
+// CHECK: sttset x9, [sp] // encoding: [0xff,0x37,0x29,0x59]
// ERROR: instruction requires: lsui
ldtsetl w7, wzr, [x5]
-// CHECK: ldtsetl w7, wzr, [x5] // encoding: [0xbf,0x34,0x67,0x19]
+// CHECK: sttsetl w7, [x5] // encoding: [0xbf,0x34,0x67,0x19]
// ERROR: instruction requires: lsui
ldtsetl x9, xzr, [sp]
-// CHECK: ldtsetl x9, xzr, [sp] // encoding: [0xff,0x37,0x69,0x59]
+// CHECK: sttsetl x9, [sp] // encoding: [0xff,0x37,0x69,0x59]
// ERROR: instruction requires: lsui
ldtseta w7, wzr, [x5]
@@ -300,81 +300,81 @@ _func:
//------------------------------------------------------------------------------
sttadd w0, [x2]
-// CHECK: ldtadd w0, wzr, [x2] // encoding: [0x5f,0x04,0x20,0x19]
+// CHECK: sttadd w0, [x2] // encoding: [0x5f,0x04,0x20,0x19]
// ERROR: instruction requires: lsui
sttadd w2, [sp]
-// CHECK: ldtadd w2, wzr, [sp] // encoding: [0xff,0x07,0x22,0x19]
+// CHECK: sttadd w2, [sp] // encoding: [0xff,0x07,0x22,0x19]
// ERROR: instruction requires: lsui
sttadd x0, [x2]
-// CHECK: ldtadd x0, xzr, [x2] // encoding: [0x5f,0x04,0x20,0x59]
+// CHECK: sttadd x0, [x2] // encoding: [0x5f,0x04,0x20,0x59]
// ERROR: instruction requires: lsui
sttadd x2, [sp]
-// CHECK: ldtadd x2, xzr, [sp] // encoding: [0xff,0x07,0x22,0x59]
+// CHECK: sttadd x2, [sp] // encoding: [0xff,0x07,0x22,0x59]
// ERROR: instruction requires: lsui
sttaddl w0, [x2]
-// CHECK: ldtaddl w0, wzr, [x2] // encoding: [0x5f,0x04,0x60,0x19]
+// CHECK: sttaddl w0, [x2] // encoding: [0x5f,0x04,0x60,0x19]
// ERROR: instruction requires: lsui
sttaddl w2, [sp]
-// CHECK: ldtaddl w2, wzr, [sp] // encoding: [0xff,0x07,0x62,0x19]
+// CHECK: sttaddl w2, [sp] // encoding: [0xff,0x07,0x62,0x19]
// ERROR: instruction requires: lsui
sttaddl x0, [x2]
-// CHECK: ldtaddl x0, xzr, [x2] // encoding: [0x5f,0x04,0x60,0x59]
+// CHECK: sttaddl x0, [x2] // encoding: [0x5f,0x04,0x60,0x59]
// ERROR: instruction requires: lsui
sttaddl x2, [sp]
-// CHECK: ldtaddl x2, xzr, [sp] // encoding: [0xff,0x07,0x62,0x59]
+// CHECK: sttaddl x2, [sp] // encoding: [0xff,0x07,0x62,0x59]
// ERROR: instruction requires: lsui
sttclr w0, [x2]
-// CHECK: ldtclr w0, wzr, [x2] // encoding: [0x5f,0x14,0x20,0x19]
+// CHECK: sttclr w0, [x2] // encoding: [0x5f,0x14,0x20,0x19]
// ERROR: instruction requires: lsui
sttclr w2, [sp]
-// CHECK: ldtclr w2, wzr, [sp] // encoding: [0xff,0x17,0x22,0x19]
+// CHECK: sttclr w2, [sp] // encoding: [0xff,0x17,0x22,0x19]
// ERROR: instruction requires: lsui
sttclr x0, [x2]
-// CHECK: ldtclr x0, xzr, [x2] // encoding: [0x5f,0x14,0x20,0x59]
+// CHECK: sttclr x0, [x2] // encoding: [0x5f,0x14,0x20,0x59]
// ERROR: instruction requires: lsui
sttclr x2, [sp]
-// CHECK: ldtclr x2, xzr, [sp] // encoding: [0xff,0x17,0x22,0x59]
+// CHECK: sttclr x2, [sp] // encoding: [0xff,0x17,0x22,0x59]
// ERROR: instruction requires: lsui
sttclrl w0, [x2]
-// CHECK: ldtclrl w0, wzr, [x2] // encoding: [0x5f,0x14,0x60,0x19]
+// CHECK: sttclrl w0, [x2] // encoding: [0x5f,0x14,0x60,0x19]
// ERROR: instruction requires: lsui
sttclrl w2, [sp]
-// CHECK: ldtclrl w2, wzr, [sp] // encoding: [0xff,0x17,0x62,0x19]
+// CHECK: sttclrl w2, [sp] // encoding: [0xff,0x17,0x62,0x19]
// ERROR: instruction requires: lsui
sttclrl x0, [x2]
-// CHECK: ldtclrl x0, xzr, [x2] // encoding: [0x5f,0x14,0x60,0x59]
+// CHECK: sttclrl x0, [x2] // encoding: [0x5f,0x14,0x60,0x59]
// ERROR: instruction requires: lsui
sttclrl x2, [sp]
-// CHECK: ldtclrl x2, xzr, [sp] // encoding: [0xff,0x17,0x62,0x59]
+// CHECK: sttclrl x2, [sp] // encoding: [0xff,0x17,0x62,0x59]
// ERROR: instruction requires: lsui
sttset w0, [x2]
-// CHECK: ldtset w0, wzr, [x2] // encoding: [0x5f,0x34,0x20,0x19]
+// CHECK: sttset w0, [x2] // encoding: [0x5f,0x34,0x20,0x19]
// ERROR: instruction requires: lsui
sttset w2, [sp]
-// CHECK: ldtset w2, wzr, [sp] // encoding: [0xff,0x37,0x22,0x19]
+// CHECK: sttset w2, [sp] // encoding: [0xff,0x37,0x22,0x19]
// ERROR: instruction requires: lsui
sttset x0, [x2]
-// CHECK: ldtset x0, xzr, [x2] // encoding: [0x5f,0x34,0x20,0x59]
+// CHECK: sttset x0, [x2] // encoding: [0x5f,0x34,0x20,0x59]
// ERROR: instruction requires: lsui
sttset x2, [sp]
-// CHECK: ldtset x2, xzr, [sp] // encoding: [0xff,0x37,0x22,0x59]
+// CHECK: sttset x2, [sp] // encoding: [0xff,0x37,0x22,0x59]
// ERROR: instruction requires: lsui
sttsetl w0, [x2]
-// CHECK: ldtsetl w0, wzr, [x2] // encoding: [0x5f,0x34,0x60,0x19]
+// CHECK: sttsetl w0, [x2] // encoding: [0x5f,0x34,0x60,0x19]
// ERROR: instruction requires: lsui
sttsetl w2, [sp]
-// CHECK: ldtsetl w2, wzr, [sp] // encoding: [0xff,0x37,0x62,0x19]
+// CHECK: sttsetl w2, [sp] // encoding: [0xff,0x37,0x62,0x19]
// ERROR: instruction requires: lsui
sttsetl x0, [x2]
-// CHECK: ldtsetl x0, xzr, [x2] // encoding: [0x5f,0x34,0x60,0x59]
+// CHECK: sttsetl x0, [x2] // encoding: [0x5f,0x34,0x60,0x59]
// ERROR: instruction requires: lsui
sttsetl x2, [sp]
-// CHECK: ldtsetl x2, xzr, [sp] // encoding: [0xff,0x37,0x62,0x59]
+// CHECK: sttsetl x2, [sp] // encoding: [0xff,0x37,0x62,0x59]
// ERROR: instruction requires: lsui
//------------------------------------------------------------------------------
diff --git a/llvm/test/MC/Disassembler/AArch64/armv9.6a-lsui.txt b/llvm/test/MC/Disassembler/AArch64/armv9.6a-lsui.txt
index 4cde11f38dde1..dc53a0bfc30e4 100644
--- a/llvm/test/MC/Disassembler/AArch64/armv9.6a-lsui.txt
+++ b/llvm/test/MC/Disassembler/AArch64/armv9.6a-lsui.txt
@@ -249,75 +249,75 @@
# CHECK-NEXT: casplt x0, x1, x2, x3, [sp]
# CHECK-NEXT: caspalt x0, x1, x2, x3, [x4]
# CHECK-NEXT: caspalt x0, x1, x2, x3, [sp]
-# CHECK-NEXT: ldtadd w7, wzr, [x5]
-# CHECK-NEXT: ldtadd x9, xzr, [sp]
+# CHECK-NEXT: sttadd w7, [x5]
+# CHECK-NEXT: sttadd x9, [sp]
# CHECK-NEXT: ldtadda w7, wzr, [x5]
# CHECK-NEXT: ldtadda x9, xzr, [sp]
-# CHECK-NEXT: ldtaddl w7, wzr, [x5]
-# CHECK-NEXT: ldtaddl x9, xzr, [sp]
+# CHECK-NEXT: sttaddl w7, [x5]
+# CHECK-NEXT: sttaddl x9, [sp]
# CHECK-NEXT: ldtaddal w7, wzr, [x5]
# CHECK-NEXT: ldtaddal x9, xzr, [sp]
-# CHECK-NEXT: ldtclr w7, wzr, [x5]
-# CHECK-NEXT: ldtclr x9, xzr, [sp]
-# CHECK-NEXT: ldtclrl w7, wzr, [x5]
-# CHECK-NEXT: ldtclrl x9, xzr, [sp]
+# CHECK-NEXT: sttclr w7, [x5]
+# CHECK-NEXT: sttclr x9, [sp]
+# CHECK-NEXT: sttclrl w7, [x5]
+# CHECK-NEXT: sttclrl x9, [sp]
# CHECK-NEXT: ldtclra w7, wzr, [x5]
# CHECK-NEXT: ldtclra x9, xzr, [sp]
# CHECK-NEXT: ldtclral w7, wzr, [x5]
# CHECK-NEXT: ldtclral x9, xzr, [sp]
-# CHECK-NEXT: ldtset w7, wzr, [x5]
-# CHECK-NEXT: ldtset x9, xzr, [sp]
-# CHECK-NEXT: ldtsetl w7, wzr, [x5]
-# CHECK-NEXT: ldtsetl x9, xzr, [sp]
+# CHECK-NEXT: sttset w7, [x5]
+# CHECK-NEXT: sttset x9, [sp]
+# CHECK-NEXT: sttsetl w7, [x5]
+# CHECK-NEXT: sttsetl x9, [sp]
# CHECK-NEXT: ldtseta w7, wzr, [x5]
# CHECK-NEXT: ldtseta x9, xzr, [sp]
# CHECK-NEXT: ldtsetal w7, wzr, [x5]
# CHECK-NEXT: ldtsetal x9, xzr, [sp]
-# CHECK-NEXT: ldtadd w0, wzr, [x2]
-# CHECK-NEXT: ldtadd w2, wzr, [sp]
-# CHECK-NEXT: ldtadd x0, xzr, [x2]
-# CHECK-NEXT: ldtadd x2, xzr, [sp]
-# CHECK-NEXT: ldtadd w0, wzr, [x2]
-# CHECK-NEXT: ldtadd w2, wzr, [sp]
-# CHECK-NEXT: ldtadd x0, xzr, [x2]
-# CHECK-NEXT: ldtadd x2, xzr, [sp]
-# CHECK-NEXT: ldtadd w0, wzr, [x2]
-# CHECK-NEXT: ldtadd w2, wzr, [sp]
-# CHECK-NEXT: ldtadd x0, xzr, [x2]
-# CHECK-NEXT: ldtadd x2, xzr, [sp]
-# CHECK-NEXT: ldtadd w0, wzr, [x2]
-# CHECK-NEXT: ldtadd w2, wzr, [sp]
-# CHECK-NEXT: ldtadd x0, xzr, [x2]
-# CHECK-NEXT: ldtadd x2, xzr, [sp]
-# CHECK-NEXT: ldtclr w0, wzr, [x2]
-# CHECK-NEXT: ldtclr w2, wzr, [sp]
-# CHECK-NEXT: ldtclr x0, xzr, [x2]
-# CHECK-NEXT: ldtclr x2, xzr, [sp]
-# CHECK-NEXT: ldtclr w0, wzr, [x2]
-# CHECK-NEXT: ldtclr w2, wzr, [sp]
-# CHECK-NEXT: ldtclr x0, xzr, [x2]
-# CHECK-NEXT: ldtclr x2, xzr, [sp]
-# CHECK-NEXT: ldtclr w0, wzr, [x2]
-# CHECK-NEXT: ldtclr w2, wzr, [sp]
-# CHECK-NEXT: ldtclr x0, xzr, [x2]
-# CHECK-NEXT: ldtclr x2, xzr, [sp]
-# CHECK-NEXT: ldtclr w0, wzr, [x2]
-# CHECK-NEXT: ldtclr x2, xzr, [sp]
-# CHECK-NEXT: ldtclr x0, xzr, [x2]
-# CHECK-NEXT: ldtclr x2, xzr, [sp]
-# CHECK-NEXT: ldtset w0, wzr, [x2]
-# CHECK-NEXT: ldtset w2, wzr, [sp]
-# CHECK-NEXT: ldtset x0, xzr, [x2]
-# CHECK-NEXT: ldtset x2, xzr, [sp]
-# CHECK-NEXT: ldtset w0, wzr, [x2]
-# CHECK-NEXT: ldtset w2, wzr, [sp]
-# CHECK-NEXT: ldtset x0, xzr, [x2]
-# CHECK-NEXT: ldtset x2, xzr, [sp]
-# CHECK-NEXT: ldtset w0, wzr, [x2]
-# CHECK-NEXT: ldtset w2, wzr, [sp]
-# CHECK-NEXT: ldtset x0, xzr, [x2]
-# CHECK-NEXT: ldtset x2, xzr, [sp]
-# CHECK-NEXT: ldtset w0, wzr, [x2]
-# CHECK-NEXT: ldtset x2, xzr, [sp]
-# CHECK-NEXT: ldtset x0, xzr, [x2]
-# CHECK-NEXT: ldtset x2, xzr, [sp]
+# CHECK-NEXT: sttadd w0, [x2]
+# CHECK-NEXT: sttadd w2, [sp]
+# CHECK-NEXT: sttadd x0, [x2]
+# CHECK-NEXT: sttadd x2, [sp]
+# CHECK-NEXT: sttadd w0, [x2]
+# CHECK-NEXT: sttadd w2, [sp]
+# CHECK-NEXT: sttadd x0, [x2]
+# CHECK-NEXT: sttadd x2, [sp]
+# CHECK-NEXT: sttadd w0, [x2]
+# CHECK-NEXT: sttadd w2, [sp]
+# CHECK-NEXT: sttadd x0, [x2]
+# CHECK-NEXT: sttadd x2, [sp]
+# CHECK-NEXT: sttadd w0, [x2]
+# CHECK-NEXT: sttadd w2, [sp]
+# CHECK-NEXT: sttadd x0, [x2]
+# CHECK-NEXT: sttadd x2, [sp]
+# CHECK-NEXT: sttclr w0, [x2]
+# CHECK-NEXT: sttclr w2, [sp]
+# CHECK-NEXT: sttclr x0, [x2]
+# CHECK-NEXT: sttclr x2, [sp]
+# CHECK-NEXT: sttclr w0, [x2]
+# CHECK-NEXT: sttclr w2, [sp]
+# CHECK-NEXT: sttclr x0, [x2]
+# CHECK-NEXT: sttclr x2, [sp]
+# CHECK-NEXT: sttclr w0, [x2]
+# CHECK-NEXT: sttclr w2, [sp]
+# CHECK-NEXT: sttclr x0, [x2]
+# CHECK-NEXT: sttclr x2, [sp]
+# CHECK-NEXT: sttclr w0, [x2]
+# CHECK-NEXT: sttclr x2, [sp]
+# CHECK-NEXT: sttclr x0, [x2]
+# CHECK-NEXT: sttclr x2, [sp]
+# CHECK-NEXT: sttset w0, [x2]
+# CHECK-NEXT: sttset w2, [sp]
+# CHECK-NEXT: sttset x0, [x2]
+# CHECK-NEXT: sttset x2, [sp]
+# CHECK-NEXT: sttset w0, [x2]
+# CHECK-NEXT: sttset w2, [sp]
+# CHECK-NEXT: sttset x0, [x2]
+# CHECK-NEXT: sttset x2, [sp]
+# CHECK-NEXT: sttset w0, [x2]
+# CHECK-NEXT: sttset w2, [sp]
+# CHECK-NEXT: sttset x0, [x2]
+# CHECK-NEXT: sttset x2, [sp]
+# CHECK-NEXT: sttset w0, [x2]
+# CHECK-NEXT: sttset x2, [sp]
+# CHECK-NEXT: sttset x0, [x2]
+# CHECK-NEXT: sttset x2, [sp]
>From a50a01358172d1dbf0fb9f723116a1f57efd2b4b Mon Sep 17 00:00:00 2001
From: Nico Weber <thakis at chromium.org>
Date: Wed, 6 Aug 2025 10:44:22 -0400
Subject: [PATCH 094/171] [gn] port 41841e625db8 better
---
llvm/utils/gn/secondary/lldb/source/Host/BUILD.gn | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/llvm/utils/gn/secondary/lldb/source/Host/BUILD.gn b/llvm/utils/gn/secondary/lldb/source/Host/BUILD.gn
index b00442d8e1ebb..a232ea4cebe1b 100644
--- a/llvm/utils/gn/secondary/lldb/source/Host/BUILD.gn
+++ b/llvm/utils/gn/secondary/lldb/source/Host/BUILD.gn
@@ -16,7 +16,6 @@ static_library("Host") {
]
public_deps = [ "//llvm/utils/gn/build/libs/xml" ]
sources = [
- "aix/Support.cpp",
"common/File.cpp",
"common/FileAction.cpp",
"common/FileCache.cpp",
@@ -96,6 +95,7 @@ static_library("Host") {
sources += [
"aix/Host.cpp",
"aix/HostInfoAIX.cpp",
+ "aix/Support.cpp",
]
}
>From 75838b818bdbc14fc41f901b4e60821647009cf3 Mon Sep 17 00:00:00 2001
From: Nico Weber <thakis at chromium.org>
Date: Wed, 6 Aug 2025 10:44:50 -0400
Subject: [PATCH 095/171] [gn] port 12dee9d3cd76 better
---
llvm/utils/gn/secondary/lldb/source/Host/BUILD.gn | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/llvm/utils/gn/secondary/lldb/source/Host/BUILD.gn b/llvm/utils/gn/secondary/lldb/source/Host/BUILD.gn
index a232ea4cebe1b..10c5f95edf5fd 100644
--- a/llvm/utils/gn/secondary/lldb/source/Host/BUILD.gn
+++ b/llvm/utils/gn/secondary/lldb/source/Host/BUILD.gn
@@ -50,7 +50,6 @@ static_library("Host") {
"common/UDPSocket.cpp",
"common/XML.cpp",
"common/ZipFileResolver.cpp",
- "posix/Support.cpp",
]
if (lldb_enable_libedit) {
@@ -88,6 +87,7 @@ static_library("Host") {
"posix/MainLoopPosix.cpp",
"posix/PipePosix.cpp",
"posix/ProcessLauncherPosixFork.cpp",
+ "posix/Support.cpp",
]
}
>From 8704ca0fb8fb3c659a4e98e9362cd56d453dcb4b Mon Sep 17 00:00:00 2001
From: Timm Baeder <tbaeder at redhat.com>
Date: Wed, 6 Aug 2025 16:49:00 +0200
Subject: [PATCH 096/171] [clang][ExprConst] Consider integer pointers of value
0 nullptr (#150164)
When casting a 0 to a pointer type, the IsNullPtr flag was always set to
false, leading to weird results like a pointer with value 0 that isn't a
null pointer.
This caused
```c++
struct B { const int *p;};
template<B> void f() {}
template void f<B{nullptr}>();
template void f<B{fold(reinterpret_cast<int*>(0))}>();
```
to be valid code, since nullptr and (int*)0 weren't considered equal. This
seems weird, and GCC doesn't behave like this.
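For illustration, a minimal sketch of the new behavior (assuming Clang's
__builtin_constant_p folding extension, C++17, and a target whose null
pointer value is 0; on targets with a nonzero null value, such as the
amdgcn private address space, the comparison goes the other way):
```c++
#define fold(x) (__builtin_constant_p(x) ? (x) : (x))
// With this change, an integer 0 cast to a pointer is treated as a null
// pointer during constant evaluation, matching GCC.
static_assert(nullptr == fold(reinterpret_cast<int *>(0)));
```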
---
clang/lib/AST/ExprConstant.cpp | 14 +++++++++-----
clang/test/AST/ByteCode/functions.cpp | 2 +-
clang/test/CodeGenCXX/mangle-class-nttp.cpp | 14 ++++++++------
clang/test/SemaOpenCLCXX/amdgpu-nullptr.clcpp | 9 +++++++++
4 files changed, 27 insertions(+), 12 deletions(-)
create mode 100644 clang/test/SemaOpenCLCXX/amdgpu-nullptr.clcpp
diff --git a/clang/lib/AST/ExprConstant.cpp b/clang/lib/AST/ExprConstant.cpp
index 34af9ccd2e96e..3679327da7b0c 100644
--- a/clang/lib/AST/ExprConstant.cpp
+++ b/clang/lib/AST/ExprConstant.cpp
@@ -9860,11 +9860,15 @@ bool PointerExprEvaluator::VisitCastExpr(const CastExpr *E) {
if (Value.isInt()) {
unsigned Size = Info.Ctx.getTypeSize(E->getType());
uint64_t N = Value.getInt().extOrTrunc(Size).getZExtValue();
- Result.Base = (Expr*)nullptr;
- Result.InvalidBase = false;
- Result.Offset = CharUnits::fromQuantity(N);
- Result.Designator.setInvalid();
- Result.IsNullPtr = false;
+ if (N == Info.Ctx.getTargetNullPointerValue(E->getType())) {
+ Result.setNull(Info.Ctx, E->getType());
+ } else {
+ Result.Base = (Expr *)nullptr;
+ Result.InvalidBase = false;
+ Result.Offset = CharUnits::fromQuantity(N);
+ Result.Designator.setInvalid();
+ Result.IsNullPtr = false;
+ }
return true;
} else {
// In rare instances, the value isn't an lvalue.
diff --git a/clang/test/AST/ByteCode/functions.cpp b/clang/test/AST/ByteCode/functions.cpp
index 363b6a58ac54d..36e7bb32b2d86 100644
--- a/clang/test/AST/ByteCode/functions.cpp
+++ b/clang/test/AST/ByteCode/functions.cpp
@@ -622,7 +622,7 @@ namespace FromIntegral {
int a[(int)DoubleFn((void*)-1)()]; // both-error {{not allowed at file scope}} \
// both-warning {{variable length arrays}}
int b[(int)DoubleFn((void*)(-1 + 1))()]; // both-error {{not allowed at file scope}} \
- // expected-note {{evaluates to a null function pointer}} \
+ // both-note {{evaluates to a null function pointer}} \
// both-warning {{variable length arrays}}
#endif
}
diff --git a/clang/test/CodeGenCXX/mangle-class-nttp.cpp b/clang/test/CodeGenCXX/mangle-class-nttp.cpp
index 12c81f2ba0514..536592c6a9308 100644
--- a/clang/test/CodeGenCXX/mangle-class-nttp.cpp
+++ b/clang/test/CodeGenCXX/mangle-class-nttp.cpp
@@ -27,12 +27,14 @@ template void f<B{nullptr}>();
// CHECK: define weak_odr void @_Z1fIXtl1BLPKi32EEEEvv(
// MSABI: define {{.*}} @"??$f@$2UB@@PEBH0CA at H0A@@@@YAXXZ"
template void f<B{fold((int*)32)}>();
-#ifndef _WIN32
-// FIXME: On MS ABI, we mangle this the same as nullptr, despite considering a
-// null pointer and zero bitcast to a pointer to be distinct pointer values.
-// CHECK: define weak_odr void @_Z1fIXtl1BrcPKiLi0EEEEvv(
-template void f<B{fold(reinterpret_cast<int*>(0))}>();
-#endif
+
+// CHECK: define weak_odr void @_Z1fIXtl1BLPKi0ELi2EEEEvv(
+// MSABI: define {{.*}} @"??$f@$2UB@@PEBH0A at H01@@@YAXXZ"(
+template void f<B{fold(reinterpret_cast<int*>(0)), 2}>();
+
+// CHECK: define weak_odr void @_Z1fIXtl1BLPKi12EEEEvv(
+// MSABI: define {{.*}} @"??$f@$2UB@@PEBH0M at H0A@@@@YAXXZ"(
+template void f<B{fold(reinterpret_cast<int*>(12))}>();
// Pointers to subobjects.
struct Nested { union { int k; int arr[2]; }; } nested[2];
diff --git a/clang/test/SemaOpenCLCXX/amdgpu-nullptr.clcpp b/clang/test/SemaOpenCLCXX/amdgpu-nullptr.clcpp
new file mode 100644
index 0000000000000..5bc147c2deba7
--- /dev/null
+++ b/clang/test/SemaOpenCLCXX/amdgpu-nullptr.clcpp
@@ -0,0 +1,9 @@
+// RUN: %clang_cc1 -triple amdgcn -cl-std=clc++ -verify %s
+
+// expected-no-diagnostics
+
+#define fold(x) (__builtin_constant_p(x) ? (x) : (x))
+static_assert(nullptr != fold(reinterpret_cast<private int*>(0)));
+
+static_assert(nullptr == (private int *)0);
+
>From dfd506b9480fd99c6ee5498ee2bc7a239b255467 Mon Sep 17 00:00:00 2001
From: Marcos Maronas <marcos.maronas at intel.com>
Date: Wed, 6 Aug 2025 15:50:00 +0100
Subject: [PATCH 097/171] [SPIRV] Fix code quality issues. (#152005)
Fix code quality issues reported by a static analysis tool, such as:
- Rule of Three/Five.
- Dereference after null check.
- Unchecked return value.
- Variable copied when it could be moved.
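As a hedged illustration of the Rule of Three/Five item (a minimal sketch of
the pattern applied to move-only types such as ConvergenceRegion, not the
actual class):
```c++
// A move-only resource owner: the destructor releases memory, so the
// implicitly generated copy operations would double-free. The Rule of Five
// says to define or delete all special members once one of them is defined.
struct RegionHolder {
  int *Data;
  explicit RegionHolder(int *D) : Data(D) {}
  ~RegionHolder() { delete Data; }
  RegionHolder(RegionHolder &&Other) : Data(Other.Data) { Other.Data = nullptr; }
  RegionHolder(const RegionHolder &) = delete;
  RegionHolder &operator=(const RegionHolder &) = delete;
  RegionHolder &operator=(RegionHolder &&) = delete;
};
```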
---
.../Analysis/SPIRVConvergenceRegionAnalysis.h | 7 +++++++
.../SPIRV/MCTargetDesc/SPIRVInstPrinter.cpp | 3 ++-
llvm/lib/Target/SPIRV/SPIRVAPI.cpp | 2 +-
llvm/lib/Target/SPIRV/SPIRVAsmPrinter.cpp | 7 +++++--
llvm/lib/Target/SPIRV/SPIRVBuiltins.cpp | 3 ++-
llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp | 19 +++++++++----------
.../Target/SPIRV/SPIRVEmitNonSemanticDI.cpp | 1 +
llvm/lib/Target/SPIRV/SPIRVGlobalRegistry.cpp | 14 ++++++++------
.../Target/SPIRV/SPIRVInstructionSelector.cpp | 3 ++-
llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.cpp | 6 +++---
llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.h | 7 ++++---
llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp | 2 +-
llvm/lib/Target/SPIRV/SPIRVPreLegalizer.cpp | 1 +
.../Target/SPIRV/SPIRVPrepareFunctions.cpp | 2 +-
llvm/lib/Target/SPIRV/SPIRVUtils.cpp | 8 +++++---
15 files changed, 52 insertions(+), 33 deletions(-)
diff --git a/llvm/lib/Target/SPIRV/Analysis/SPIRVConvergenceRegionAnalysis.h b/llvm/lib/Target/SPIRV/Analysis/SPIRVConvergenceRegionAnalysis.h
index 78a066bef8abc..ed0a1e10562a8 100644
--- a/llvm/lib/Target/SPIRV/Analysis/SPIRVConvergenceRegionAnalysis.h
+++ b/llvm/lib/Target/SPIRV/Analysis/SPIRVConvergenceRegionAnalysis.h
@@ -73,7 +73,11 @@ class ConvergenceRegion {
Entry(std::move(CR.Entry)), Exits(std::move(CR.Exits)),
Blocks(std::move(CR.Blocks)) {}
+ ~ConvergenceRegion() { releaseMemory(); }
+
+ ConvergenceRegion &operator=(ConvergenceRegion &&CR) = delete;
ConvergenceRegion(const ConvergenceRegion &other) = delete;
+ ConvergenceRegion &operator=(const ConvergenceRegion &other) = delete;
// Returns true if the given basic block belongs to this region, or to one of
// its subregion.
@@ -101,6 +105,9 @@ class ConvergenceRegionInfo {
~ConvergenceRegionInfo() { releaseMemory(); }
+ ConvergenceRegionInfo(const ConvergenceRegionInfo &LHS) = delete;
+ ConvergenceRegionInfo &operator=(const ConvergenceRegionInfo &LHS) = delete;
+
ConvergenceRegionInfo(ConvergenceRegionInfo &&LHS)
: TopLevelRegion(LHS.TopLevelRegion) {
if (TopLevelRegion != LHS.TopLevelRegion) {
diff --git a/llvm/lib/Target/SPIRV/MCTargetDesc/SPIRVInstPrinter.cpp b/llvm/lib/Target/SPIRV/MCTargetDesc/SPIRVInstPrinter.cpp
index 64d301e5ff174..4ec31bf193d5f 100644
--- a/llvm/lib/Target/SPIRV/MCTargetDesc/SPIRVInstPrinter.cpp
+++ b/llvm/lib/Target/SPIRV/MCTargetDesc/SPIRVInstPrinter.cpp
@@ -96,7 +96,7 @@ void SPIRVInstPrinter::printOpConstantVarOps(const MCInst *MI,
void SPIRVInstPrinter::recordOpExtInstImport(const MCInst *MI) {
MCRegister Reg = MI->getOperand(0).getReg();
auto Name = getSPIRVStringOperand(*MI, 1);
- auto Set = getExtInstSetFromString(Name);
+ auto Set = getExtInstSetFromString(std::move(Name));
ExtInstSetIDs.insert({Reg, Set});
}
@@ -210,6 +210,7 @@ void SPIRVInstPrinter::printInst(const MCInst *MI, uint64_t Address,
case SPIRV::OpConstantF:
// The last fixed operand along with any variadic operands that follow
// are part of the variable value.
+ assert(NumFixedOps > 0 && "Expected at least one fixed operand");
printOpConstantVarOps(MI, NumFixedOps - 1, OS);
break;
case SPIRV::OpCooperativeMatrixMulAddKHR: {
diff --git a/llvm/lib/Target/SPIRV/SPIRVAPI.cpp b/llvm/lib/Target/SPIRV/SPIRVAPI.cpp
index cfe7ef486381d..d6581b274e00f 100644
--- a/llvm/lib/Target/SPIRV/SPIRVAPI.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVAPI.cpp
@@ -156,7 +156,7 @@ SPIRVTranslateModule(Module *M, std::string &SpirvObj, std::string &ErrMsg,
}
}
return SPIRVTranslate(M, SpirvObj, ErrMsg, AllowExtNames, OLevel,
- TargetTriple);
+ std::move(TargetTriple));
}
} // namespace llvm
diff --git a/llvm/lib/Target/SPIRV/SPIRVAsmPrinter.cpp b/llvm/lib/Target/SPIRV/SPIRVAsmPrinter.cpp
index 1ebfde2a603b9..c2a6e51913a0a 100644
--- a/llvm/lib/Target/SPIRV/SPIRVAsmPrinter.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVAsmPrinter.cpp
@@ -50,7 +50,8 @@ class SPIRVAsmPrinter : public AsmPrinter {
public:
explicit SPIRVAsmPrinter(TargetMachine &TM,
std::unique_ptr<MCStreamer> Streamer)
- : AsmPrinter(TM, std::move(Streamer), ID), ST(nullptr), TII(nullptr) {}
+ : AsmPrinter(TM, std::move(Streamer), ID), ModuleSectionsEmitted(false),
+ ST(nullptr), TII(nullptr), MAI(nullptr) {}
static char ID;
bool ModuleSectionsEmitted;
const SPIRVSubtarget *ST;
@@ -591,7 +592,9 @@ void SPIRVAsmPrinter::outputAnnotations(const Module &M) {
cast<GlobalVariable>(CS->getOperand(1)->stripPointerCasts());
StringRef AnnotationString;
- getConstantStringInfo(GV, AnnotationString);
+ [[maybe_unused]] bool Success =
+ getConstantStringInfo(GV, AnnotationString);
+ assert(Success && "Failed to get annotation string");
MCInst Inst;
Inst.setOpcode(SPIRV::OpDecorate);
Inst.addOperand(MCOperand::createReg(Reg));
diff --git a/llvm/lib/Target/SPIRV/SPIRVBuiltins.cpp b/llvm/lib/Target/SPIRV/SPIRVBuiltins.cpp
index 25cdf72a658a8..e6e86b71b2dcc 100644
--- a/llvm/lib/Target/SPIRV/SPIRVBuiltins.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVBuiltins.cpp
@@ -51,7 +51,7 @@ struct IncomingCall {
IncomingCall(const std::string BuiltinName, const DemangledBuiltin *Builtin,
const Register ReturnRegister, const SPIRVType *ReturnType,
const SmallVectorImpl<Register> &Arguments)
- : BuiltinName(BuiltinName), Builtin(Builtin),
+ : BuiltinName(std::move(BuiltinName)), Builtin(Builtin),
ReturnRegister(ReturnRegister), ReturnType(ReturnType),
Arguments(Arguments) {}
@@ -2619,6 +2619,7 @@ static bool generateConvertInst(const StringRef DemangledCall,
GR->getSPIRVTypeID(Call->ReturnType));
}
+ assert(Builtin && "Conversion builtin not found.");
if (Builtin->IsSaturated)
buildOpDecorate(Call->ReturnRegister, MIRBuilder,
SPIRV::Decoration::SaturatedConversion, {});
diff --git a/llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp b/llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp
index 2c3e0876b757d..f5a49e2b47363 100644
--- a/llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp
@@ -499,7 +499,7 @@ void SPIRVEmitIntrinsics::propagateElemTypeRec(
std::unordered_set<Value *> Visited;
DenseMap<Function *, CallInst *> Ptrcasts;
propagateElemTypeRec(Op, PtrElemTy, CastElemTy, VisitedSubst, Visited,
- Ptrcasts);
+ std::move(Ptrcasts));
}
void SPIRVEmitIntrinsics::propagateElemTypeRec(
@@ -897,17 +897,16 @@ Type *SPIRVEmitIntrinsics::deduceNestedTypeHelper(
bool Change = false;
for (unsigned i = 0; i < U->getNumOperands(); ++i) {
Value *Op = U->getOperand(i);
+ assert(Op && "Operands should not be null.");
Type *OpTy = Op->getType();
Type *Ty = OpTy;
- if (Op) {
- if (auto *PtrTy = dyn_cast<PointerType>(OpTy)) {
- if (Type *NestedTy =
- deduceElementTypeHelper(Op, Visited, UnknownElemTypeI8))
- Ty = getTypedPointerWrapper(NestedTy, PtrTy->getAddressSpace());
- } else {
- Ty = deduceNestedTypeHelper(dyn_cast<User>(Op), OpTy, Visited,
- UnknownElemTypeI8);
- }
+ if (auto *PtrTy = dyn_cast<PointerType>(OpTy)) {
+ if (Type *NestedTy =
+ deduceElementTypeHelper(Op, Visited, UnknownElemTypeI8))
+ Ty = getTypedPointerWrapper(NestedTy, PtrTy->getAddressSpace());
+ } else {
+ Ty = deduceNestedTypeHelper(dyn_cast<User>(Op), OpTy, Visited,
+ UnknownElemTypeI8);
}
Tys.push_back(Ty);
Change |= Ty != OpTy;
diff --git a/llvm/lib/Target/SPIRV/SPIRVEmitNonSemanticDI.cpp b/llvm/lib/Target/SPIRV/SPIRVEmitNonSemanticDI.cpp
index 7f0d636577869..275463ef4c527 100644
--- a/llvm/lib/Target/SPIRV/SPIRVEmitNonSemanticDI.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVEmitNonSemanticDI.cpp
@@ -116,6 +116,7 @@ bool SPIRVEmitNonSemanticDI::emitGlobalDI(MachineFunction &MF) {
}
}
const NamedMDNode *ModuleFlags = M->getNamedMetadata("llvm.module.flags");
+ assert(ModuleFlags && "Expected llvm.module.flags metadata to be present");
for (const auto *Op : ModuleFlags->operands()) {
const MDOperand &MaybeStrOp = Op->getOperand(1);
if (MaybeStrOp.equalsStr("Dwarf Version"))
diff --git a/llvm/lib/Target/SPIRV/SPIRVGlobalRegistry.cpp b/llvm/lib/Target/SPIRV/SPIRVGlobalRegistry.cpp
index f1436d5b3c047..cfe24c84941a9 100644
--- a/llvm/lib/Target/SPIRV/SPIRVGlobalRegistry.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVGlobalRegistry.cpp
@@ -87,7 +87,7 @@ storageClassRequiresExplictLayout(SPIRV::StorageClass::StorageClass SC) {
}
SPIRVGlobalRegistry::SPIRVGlobalRegistry(unsigned PointerSize)
- : PointerSize(PointerSize), Bound(0) {}
+ : PointerSize(PointerSize), Bound(0), CurMF(nullptr) {}
SPIRVType *SPIRVGlobalRegistry::assignIntTypeToVReg(unsigned BitWidth,
Register VReg,
@@ -474,8 +474,8 @@ Register SPIRVGlobalRegistry::getOrCreateBaseRegister(
}
if (Type->getOpcode() == SPIRV::OpTypeFloat) {
SPIRVType *SpvBaseType = getOrCreateSPIRVFloatType(BitWidth, I, TII);
- return getOrCreateConstFP(dyn_cast<ConstantFP>(Val)->getValue(), I,
- SpvBaseType, TII, ZeroAsNull);
+ return getOrCreateConstFP(cast<ConstantFP>(Val)->getValue(), I, SpvBaseType,
+ TII, ZeroAsNull);
}
assert(Type->getOpcode() == SPIRV::OpTypeInt);
SPIRVType *SpvBaseType = getOrCreateSPIRVIntegerType(BitWidth, I, TII);
@@ -1069,7 +1069,8 @@ SPIRVType *SPIRVGlobalRegistry::createSPIRVType(
MIRBuilder);
};
}
- return getOpTypeStruct(SType, MIRBuilder, AccQual, Decorator, EmitIR);
+ return getOpTypeStruct(SType, MIRBuilder, AccQual, std::move(Decorator),
+ EmitIR);
}
if (auto FType = dyn_cast<FunctionType>(Ty)) {
SPIRVType *RetTy = findSPIRVType(FType->getReturnType(), MIRBuilder,
@@ -1406,8 +1407,9 @@ SPIRVType *SPIRVGlobalRegistry::getOrCreateLayoutType(
// We need a new OpTypeStruct instruction because decorations will be
// different from a struct with an explicit layout created from a different
// entry point.
- SPIRVType *SPIRVStructType = getOpTypeStruct(
- ST, MIRBuilder, SPIRV::AccessQualifier::None, Decorator, EmitIr);
+ SPIRVType *SPIRVStructType =
+ getOpTypeStruct(ST, MIRBuilder, SPIRV::AccessQualifier::None,
+ std::move(Decorator), EmitIr);
add(Key, SPIRVStructType);
return SPIRVStructType;
}
diff --git a/llvm/lib/Target/SPIRV/SPIRVInstructionSelector.cpp b/llvm/lib/Target/SPIRV/SPIRVInstructionSelector.cpp
index e9f5ffa23e220..5259db1ff2dd7 100644
--- a/llvm/lib/Target/SPIRV/SPIRVInstructionSelector.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVInstructionSelector.cpp
@@ -362,6 +362,7 @@ SPIRVInstructionSelector::SPIRVInstructionSelector(const SPIRVTargetMachine &TM,
const RegisterBankInfo &RBI)
: InstructionSelector(), STI(ST), TII(*ST.getInstrInfo()),
TRI(*ST.getRegisterInfo()), RBI(RBI), GR(*ST.getSPIRVGlobalRegistry()),
+ MRI(nullptr),
#define GET_GLOBALISEL_PREDICATES_INIT
#include "SPIRVGenGlobalISel.inc"
#undef GET_GLOBALISEL_PREDICATES_INIT
@@ -3574,7 +3575,7 @@ bool SPIRVInstructionSelector::selectFirstBitSet64Overflow(
// Join all the resulting registers back into the return type in order
// (ie i32x2, i32x2, i32x1 -> i32x5)
- return selectOpWithSrcs(ResVReg, ResType, I, PartialRegs,
+ return selectOpWithSrcs(ResVReg, ResType, I, std::move(PartialRegs),
SPIRV::OpCompositeConstruct);
}
diff --git a/llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.cpp b/llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.cpp
index ab06fc0b5ff31..8039cf0c432fa 100644
--- a/llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.cpp
@@ -93,7 +93,7 @@ getSymbolicOperandRequirements(SPIRV::OperandCategory::OperandCategory Category,
if (Reqs.isCapabilityAvailable(Cap)) {
ReqExts.append(getSymbolicOperandExtensions(
SPIRV::OperandCategory::CapabilityOperand, Cap));
- return {true, {Cap}, ReqExts, ReqMinVer, ReqMaxVer};
+ return {true, {Cap}, std::move(ReqExts), ReqMinVer, ReqMaxVer};
}
} else {
// By SPIR-V specification: "If an instruction, enumerant, or other
@@ -111,7 +111,7 @@ getSymbolicOperandRequirements(SPIRV::OperandCategory::OperandCategory Category,
if (i == Sz - 1 || !AvoidCaps.S.contains(Cap)) {
ReqExts.append(getSymbolicOperandExtensions(
SPIRV::OperandCategory::CapabilityOperand, Cap));
- return {true, {Cap}, ReqExts, ReqMinVer, ReqMaxVer};
+ return {true, {Cap}, std::move(ReqExts), ReqMinVer, ReqMaxVer};
}
}
}
@@ -558,7 +558,7 @@ static void collectOtherInstr(MachineInstr &MI, SPIRV::ModuleAnalysisInfo &MAI,
bool Append = true) {
MAI.setSkipEmission(&MI);
InstrSignature MISign = instrToSignature(MI, MAI, true);
- auto FoundMI = IS.insert(MISign);
+ auto FoundMI = IS.insert(std::move(MISign));
if (!FoundMI.second)
return; // insert failed, so we found a duplicate; don't add it to MAI.MS
// No duplicates, so add it.
diff --git a/llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.h b/llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.h
index a0d47cb052b42..41c792a98534f 100644
--- a/llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.h
+++ b/llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.h
@@ -54,8 +54,8 @@ struct Requirements {
std::optional<Capability::Capability> Cap = {},
ExtensionList Exts = {}, VersionTuple MinVer = VersionTuple(),
VersionTuple MaxVer = VersionTuple())
- : IsSatisfiable(IsSatisfiable), Cap(Cap), Exts(Exts), MinVer(MinVer),
- MaxVer(MaxVer) {}
+ : IsSatisfiable(IsSatisfiable), Cap(Cap), Exts(std::move(Exts)),
+ MinVer(MinVer), MaxVer(MaxVer) {}
Requirements(Capability::Capability Cap) : Requirements(true, {Cap}) {}
};
@@ -217,7 +217,8 @@ struct SPIRVModuleAnalysis : public ModulePass {
static char ID;
public:
- SPIRVModuleAnalysis() : ModulePass(ID) {}
+ SPIRVModuleAnalysis()
+ : ModulePass(ID), ST(nullptr), GR(nullptr), TII(nullptr), MMI(nullptr) {}
bool runOnModule(Module &M) override;
void getAnalysisUsage(AnalysisUsage &AU) const override;
diff --git a/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp b/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
index 1d38244feeae7..d17528dd882bf 100644
--- a/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
@@ -147,7 +147,7 @@ void visit(MachineFunction &MF, MachineBasicBlock &Start,
// Do a preorder traversal of the CFG starting from the given function's entry
// point. Calls |op| on each basic block encountered during the traversal.
void visit(MachineFunction &MF, std::function<void(MachineBasicBlock *)> op) {
- visit(MF, *MF.begin(), op);
+ visit(MF, *MF.begin(), std::move(op));
}
bool SPIRVPostLegalizer::runOnMachineFunction(MachineFunction &MF) {
diff --git a/llvm/lib/Target/SPIRV/SPIRVPreLegalizer.cpp b/llvm/lib/Target/SPIRV/SPIRVPreLegalizer.cpp
index f4b4846f70d7d..b62db7fd62b2e 100644
--- a/llvm/lib/Target/SPIRV/SPIRVPreLegalizer.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVPreLegalizer.cpp
@@ -99,6 +99,7 @@ addConstantsToTrack(MachineFunction &MF, SPIRVGlobalRegistry *GR,
SPIRVType *ExtType = GR->getOrCreateSPIRVType(
Const->getType(), MIB, SPIRV::AccessQualifier::ReadWrite,
true);
+ assert(SrcMI && "Expected source instruction to be valid");
SrcMI->setDesc(STI.getInstrInfo()->get(SPIRV::OpConstantNull));
SrcMI->addOperand(MachineOperand::CreateReg(
GR->getSPIRVTypeID(ExtType), false));
diff --git a/llvm/lib/Target/SPIRV/SPIRVPrepareFunctions.cpp b/llvm/lib/Target/SPIRV/SPIRVPrepareFunctions.cpp
index 595424b999439..74aec4fd9b358 100644
--- a/llvm/lib/Target/SPIRV/SPIRVPrepareFunctions.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVPrepareFunctions.cpp
@@ -234,7 +234,7 @@ static SmallVector<Metadata *> parseAnnotation(Value *I,
return SmallVector<Metadata *>{};
MDs.push_back(MDNode::get(Ctx, MDsItem));
}
- return Pos == static_cast<int>(Anno.length()) ? MDs
+ return Pos == static_cast<int>(Anno.length()) ? std::move(MDs)
: SmallVector<Metadata *>{};
}
diff --git a/llvm/lib/Target/SPIRV/SPIRVUtils.cpp b/llvm/lib/Target/SPIRV/SPIRVUtils.cpp
index 416d811ba4e65..820e56b362edc 100644
--- a/llvm/lib/Target/SPIRV/SPIRVUtils.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVUtils.cpp
@@ -463,8 +463,10 @@ std::string getOclOrSpirvBuiltinDemangledName(StringRef Name) {
DemangledNameLenStart = NameSpaceStart + 11;
}
Start = Name.find_first_not_of("0123456789", DemangledNameLenStart);
- Name.substr(DemangledNameLenStart, Start - DemangledNameLenStart)
- .getAsInteger(10, Len);
+ [[maybe_unused]] bool Error =
+ Name.substr(DemangledNameLenStart, Start - DemangledNameLenStart)
+ .getAsInteger(10, Len);
+ assert(!Error && "Failed to parse demangled name length");
return Name.substr(Start, Len).str();
}
@@ -756,7 +758,7 @@ bool getVacantFunctionName(Module &M, std::string &Name) {
for (unsigned I = 0; I < MaxIters; ++I) {
std::string OrdName = Name + Twine(I).str();
if (!M.getFunction(OrdName)) {
- Name = OrdName;
+ Name = std::move(OrdName);
return true;
}
}
>From df0325a3e580613b53803ac03ab1e2bda00f744f Mon Sep 17 00:00:00 2001
From: Schrodinger ZHU Yifan <yifanzhu at rochester.edu>
Date: Wed, 6 Aug 2025 10:55:41 -0400
Subject: [PATCH 098/171] [libc] add getrandom vDSO symbol (#151630)
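For context, a hedged sketch of the calling convention under which the new
symbol is dispatched (the function-pointer type is taken from the diff; the
~0 probing behavior is described in the added test):
```c++
#include <cstddef>

// Signature under which the getrandom vDSO entry point is dispatched.
using GetRandomFn = int (*)(void *buf, std::size_t count, unsigned int flags,
                            void *opaque_states,
                            std::size_t size_of_opaque_states);
// Calling the symbol with size_of_opaque_states == ~0 does not produce
// random bytes; instead the kernel fills the passed structure with the
// allocation parameters for per-thread opaque state (see vdso_test.cpp).
```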
---
.../src/__support/OSUtil/linux/aarch64/vdso.h | 2 ++
libc/src/__support/OSUtil/linux/vdso_sym.h | 6 +++-
libc/src/__support/OSUtil/linux/x86_64/vdso.h | 2 ++
.../src/__support/OSUtil/linux/vdso_test.cpp | 30 +++++++++++++++++--
4 files changed, 37 insertions(+), 3 deletions(-)
diff --git a/libc/src/__support/OSUtil/linux/aarch64/vdso.h b/libc/src/__support/OSUtil/linux/aarch64/vdso.h
index 3c4c6205071da..ee5777ad67f6d 100644
--- a/libc/src/__support/OSUtil/linux/aarch64/vdso.h
+++ b/libc/src/__support/OSUtil/linux/aarch64/vdso.h
@@ -23,6 +23,8 @@ LIBC_INLINE constexpr cpp::string_view symbol_name(VDSOSym sym) {
return "__kernel_clock_gettime";
case VDSOSym::ClockGetRes:
return "__kernel_clock_getres";
+ case VDSOSym::GetRandom:
+ return "__kernel_getrandom";
default:
return "";
}
diff --git a/libc/src/__support/OSUtil/linux/vdso_sym.h b/libc/src/__support/OSUtil/linux/vdso_sym.h
index 968e1536c4d27..01f0b72a4ed9c 100644
--- a/libc/src/__support/OSUtil/linux/vdso_sym.h
+++ b/libc/src/__support/OSUtil/linux/vdso_sym.h
@@ -35,7 +35,8 @@ enum class VDSOSym {
RTSigReturn,
FlushICache,
RiscvHwProbe,
- VDSOSymCount
+ GetRandom,
+ VDSOSymCount,
};
template <VDSOSym sym> LIBC_INLINE constexpr auto dispatcher() {
@@ -60,6 +61,9 @@ template <VDSOSym sym> LIBC_INLINE constexpr auto dispatcher() {
else if constexpr (sym == VDSOSym::RiscvHwProbe)
return static_cast<int (*)(riscv_hwprobe *, size_t, size_t, cpu_set_t *,
unsigned)>(nullptr);
+ else if constexpr (sym == VDSOSym::GetRandom)
+ return static_cast<int (*)(void *, size_t, unsigned int, void *, size_t)>(
+ nullptr);
else
return static_cast<void *>(nullptr);
}
diff --git a/libc/src/__support/OSUtil/linux/x86_64/vdso.h b/libc/src/__support/OSUtil/linux/x86_64/vdso.h
index abe7c33e07cfa..f46fcb038f2e6 100644
--- a/libc/src/__support/OSUtil/linux/x86_64/vdso.h
+++ b/libc/src/__support/OSUtil/linux/x86_64/vdso.h
@@ -29,6 +29,8 @@ LIBC_INLINE constexpr cpp::string_view symbol_name(VDSOSym sym) {
return "__vdso_time";
case VDSOSym::ClockGetRes:
return "__vdso_clock_getres";
+ case VDSOSym::GetRandom:
+ return "__vdso_getrandom";
default:
return "";
}
diff --git a/libc/test/src/__support/OSUtil/linux/vdso_test.cpp b/libc/test/src/__support/OSUtil/linux/vdso_test.cpp
index 2f68470e8f678..71892a055daba 100644
--- a/libc/test/src/__support/OSUtil/linux/vdso_test.cpp
+++ b/libc/test/src/__support/OSUtil/linux/vdso_test.cpp
@@ -110,8 +110,8 @@ TEST(LlvmLibcOSUtilVDSOTest, RtSigReturn) {
using namespace testing::ErrnoSetterMatcher;
// must use struct since there is a function of the same name in the same
// scope.
- struct sigaction sa {};
- struct sigaction old_sa {};
+ struct sigaction sa{};
+ struct sigaction old_sa{};
sa.sa_handler = sigprof_handler;
sa.sa_flags = SA_RESTORER;
vdso::TypedSymbol<vdso::VDSOSym::RTSigReturn> symbol;
@@ -158,4 +158,30 @@ TEST(LlvmLibcOSUtilVDSOTest, RiscvHwProbe) {
}
}
+TEST(LlvmLibcOSUtilVDSOTest, GetRandom) {
+ using namespace testing::ErrnoSetterMatcher;
+ vdso::TypedSymbol<vdso::VDSOSym::GetRandom> symbol;
+ if (!symbol)
+ return;
+ // This structure exists in the kernel UAPI header, but we define it on our
+ // own to make sure we can test it even on platforms without support.
+ struct VGetrandomOpaqueParams {
+ uint32_t size_of_opaque_states;
+ uint32_t mmap_prot;
+ uint32_t mmap_flags;
+ uint32_t reserved[13];
+ };
+ VGetrandomOpaqueParams param{0, 0, 0, {}};
+ // When the getrandom vDSO symbol is called with special parameters (~0 for
+ // the state size), it populates the desired configuration into VGetrandomOpaqueParams.
+ int res = symbol(
+ /*buf=*/nullptr, /*count=*/0, /*flags=*/0,
+ /*opaque_states=*/¶m,
+ /*size_of_opaque_states=*/~0);
+ // Test that the size of the states is correctly populated after a
+ // successful call.
+ EXPECT_EQ(res, 0);
+ EXPECT_GT(param.size_of_opaque_states, 0u);
+}
+
} // namespace LIBC_NAMESPACE_DECL
>From 8de481913353a1e37264687d5cc73db0de19e6cc Mon Sep 17 00:00:00 2001
From: Michael Kruse <llvm-project at meinersbur.de>
Date: Wed, 6 Aug 2025 16:58:08 +0200
Subject: [PATCH 099/171] [Flang] Search flang_rt in clang_rt path (#151954)
The clang/flang driver has two separate systems for finding the location of
clang_rt (simplified):
* `getCompilerRTPath()`, e.g. `../lib/clang/22/lib/windows`,
used when `LLVM_ENABLE_PER_TARGET_RUNTIME_DIR=0`
* `getRuntimePath()`, e.g. `../lib/clang/22/lib/x86_64-pc-windows-msvc`,
used when `LLVM_ENABLE_PER_TARGET_RUNTIME_DIR=1`
To simplify the search path, Flang-RT normally assumes only
`getRuntimePath()`, i.e. ignoring `LLVM_ENABLE_PER_TARGET_RUNTIME_DIR`
and always using the `LLVM_ENABLE_PER_TARGET_RUNTIME_DIR=1` mechanism.
There is an exception for Apple Darwin triples where `getRuntimePath()`
returns nothing. The flang-rt/compiler-rt CMake code for library
location also ignores `LLVM_ENABLE_PER_TARGET_RUNTIME_DIR` but uses the
`LLVM_ENABLE_PER_TARGET_RUNTIME_DIR=0` path instead. Since only
`getRuntimePath()` is automatically added to the linker command line,
this patch explicitly adds `getCompilerRTPath()` to the path when
linking flang_rt.
Fixes #151031
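For illustration, the two layouts look roughly like this (example paths,
assuming LLVM 22 targeting x86_64-pc-windows-msvc; exact names depend on the
build configuration):
```c++
// LLVM_ENABLE_PER_TARGET_RUNTIME_DIR=1 -> getRuntimePath():
//   lib/clang/22/lib/x86_64-pc-windows-msvc/flang_rt.runtime.lib
// LLVM_ENABLE_PER_TARGET_RUNTIME_DIR=0 -> getCompilerRTPath():
//   lib/clang/22/lib/windows/flang_rt.runtime.lib
```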
---
clang/lib/Driver/ToolChain.cpp | 29 +++++++++++++++++++++--------
1 file changed, 21 insertions(+), 8 deletions(-)
diff --git a/clang/lib/Driver/ToolChain.cpp b/clang/lib/Driver/ToolChain.cpp
index 25c6b5a486fd5..7667dbddb0ca2 100644
--- a/clang/lib/Driver/ToolChain.cpp
+++ b/clang/lib/Driver/ToolChain.cpp
@@ -855,17 +855,30 @@ void ToolChain::addFortranRuntimeLibs(const ArgList &Args,
void ToolChain::addFortranRuntimeLibraryPath(const llvm::opt::ArgList &Args,
ArgStringList &CmdArgs) const {
- // Default to the <driver-path>/../lib directory. This works fine on the
- // platforms that we have tested so far. We will probably have to re-fine
- // this in the future. In particular, on some platforms, we may need to use
- // lib64 instead of lib.
+ auto AddLibSearchPathIfExists = [&](const Twine &Path) {
+ // Linker may emit warnings about non-existing directories
+ if (!llvm::sys::fs::is_directory(Path))
+ return;
+
+ if (getTriple().isKnownWindowsMSVCEnvironment())
+ CmdArgs.push_back(Args.MakeArgString("-libpath:" + Path));
+ else
+ CmdArgs.push_back(Args.MakeArgString("-L" + Path));
+ };
+
+ // Search for flang_rt.* at the same location as clang_rt.* with
+ // LLVM_ENABLE_PER_TARGET_RUNTIME_DIR=0. On most platforms, flang_rt is
+ // located at the path returned by getRuntimePath() which is already added to
+ // the library search path. This exception is for Apple-Darwin.
+ AddLibSearchPathIfExists(getCompilerRTPath());
+
+ // Fall back to the non-resource directory <driver-path>/../lib. We will
+ // probably have to refine this in the future. In particular, on some
+ // platforms, we may need to use lib64 instead of lib.
SmallString<256> DefaultLibPath =
llvm::sys::path::parent_path(getDriver().Dir);
llvm::sys::path::append(DefaultLibPath, "lib");
- if (getTriple().isKnownWindowsMSVCEnvironment())
- CmdArgs.push_back(Args.MakeArgString("-libpath:" + DefaultLibPath));
- else
- CmdArgs.push_back(Args.MakeArgString("-L" + DefaultLibPath));
+ AddLibSearchPathIfExists(DefaultLibPath);
}
void ToolChain::addFlangRTLibPath(const ArgList &Args,
>From f092b820d174f2b17713cf336ac98108e59d334a Mon Sep 17 00:00:00 2001
From: Alex Duran <alejandro.duran at intel.com>
Date: Thu, 7 Aug 2025 00:01:47 +0900
Subject: [PATCH 100/171] [OFFLOAD] Fix typo in assert (#152316)
Fixes an issue introduced by PR https://github.com/llvm/llvm-project/pull/143491.
---
offload/plugins-nextgen/common/src/PluginInterface.cpp | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/offload/plugins-nextgen/common/src/PluginInterface.cpp b/offload/plugins-nextgen/common/src/PluginInterface.cpp
index 0a7f9fa5970ee..ed79af909674e 100644
--- a/offload/plugins-nextgen/common/src/PluginInterface.cpp
+++ b/offload/plugins-nextgen/common/src/PluginInterface.cpp
@@ -2251,7 +2251,7 @@ GenericPluginTy::create_interop(int32_t ID, int32_t InteropContext,
int32_t GenericPluginTy::release_interop(int32_t ID,
omp_interop_val_t *Interop) {
assert(Interop && "Interop is null");
- assert(Interop->DeviceId == ID && "Interop does not match device id");
+ assert(Interop->device_id == ID && "Interop does not match device id");
auto &Device = getDevice(ID);
auto Err = Device.releaseInterop(Interop);
if (Err) {
>From a9dacb10fe7db908f1305757820497de65626a46 Mon Sep 17 00:00:00 2001
From: parabola94 <heavybaby5000 at toki.waseda.jp>
Date: Thu, 7 Aug 2025 00:04:32 +0900
Subject: [PATCH 101/171] [flang] Add -O flag to tco (#151869)
At the moment, there is no established way to emit LLVM IR before
optimization in LLVM. tco can partially do so, but the optimization level
is fixed at 2. This patch adds an -O flag to tco.
Note that this is not completely equivalent to the frontend option: tco
does not accept the -O flag without a number, and the default optimization
level is 2, not 0.
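For example (a hypothetical invocation), `tco -O0 foo.fir` now runs the O0
pipeline, while a bare `tco foo.fir` keeps the previous O2 default.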
---
flang/tools/tco/tco.cpp | 32 +++++++++++++++++++++++++++++++-
1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/flang/tools/tco/tco.cpp b/flang/tools/tco/tco.cpp
index d8daf8769cb21..36939802f55a6 100644
--- a/flang/tools/tco/tco.cpp
+++ b/flang/tools/tco/tco.cpp
@@ -51,6 +51,12 @@ static cl::opt<bool> emitFir("emit-fir",
cl::desc("Parse and pretty-print the input"),
cl::init(false));
+static cl::opt<unsigned>
+ OptLevel("O",
+ cl::desc("Optimization level. [-O0, -O1, -O2, or -O3] "
+ "(default = '-O2')"),
+ cl::Prefix, cl::init(2));
+
static cl::opt<std::string> targetTriple("target",
cl::desc("specify a target triple"),
cl::init("native"));
@@ -96,6 +102,22 @@ static void printModule(mlir::ModuleOp mod, raw_ostream &output) {
output << mod << '\n';
}
+static std::optional<llvm::OptimizationLevel>
+getOptimizationLevel(unsigned level) {
+ switch (level) {
+ default:
+ return std::nullopt;
+ case 0:
+ return llvm::OptimizationLevel::O0;
+ case 1:
+ return llvm::OptimizationLevel::O1;
+ case 2:
+ return llvm::OptimizationLevel::O2;
+ case 3:
+ return llvm::OptimizationLevel::O3;
+ }
+}
+
// compile a .fir file
static llvm::LogicalResult
compileFIR(const mlir::PassPipelineCLParser &passPipeline) {
@@ -157,9 +179,17 @@ compileFIR(const mlir::PassPipelineCLParser &passPipeline) {
if (mlir::failed(passPipeline.addToPipeline(pm, errorHandler)))
return mlir::failure();
} else {
- MLIRToLLVMPassPipelineConfig config(llvm::OptimizationLevel::O2);
+ std::optional<llvm::OptimizationLevel> level =
+ getOptimizationLevel(OptLevel);
+ if (!level) {
+ errs() << "Error invalid optimization level\n";
+ return mlir::failure();
+ }
+ MLIRToLLVMPassPipelineConfig config(*level);
+ // TODO: config.StackArrays should be set here?
config.EnableOpenMP = true; // assume the input contains OpenMP
config.AliasAnalysis = enableAliasAnalysis && !testGeneratorMode;
+ config.LoopVersioning = OptLevel > 2;
if (codeGenLLVM) {
// Run only CodeGen passes.
fir::createDefaultFIRCodeGenPassPipeline(pm, config);
>From 1d23005b8e2ef3494d9beb1812205bd1386e77fc Mon Sep 17 00:00:00 2001
From: Krzysztof Parzyszek <Krzysztof.Parzyszek at amd.com>
Date: Wed, 6 Aug 2025 10:16:43 -0500
Subject: [PATCH 102/171] [flang][OpenMP] Insert CRITICAL construct names into
global scope (#152004)
Previously, they were inserted into the current scope.
OpenMP spec (all versions):
The names of critical constructs are global entities of the program. If
a name conflicts with any other entity, the behavior of the program is
unspecified.
---
flang/lib/Semantics/resolve-directives.cpp | 9 -----
flang/lib/Semantics/resolve-names.cpp | 36 +++++++++++++++++++
.../OpenMP/critical-global-conflict.f90 | 15 ++++++++
.../OpenMP/critical_within_default.f90 | 7 +++-
4 files changed, 57 insertions(+), 10 deletions(-)
create mode 100644 flang/test/Semantics/OpenMP/critical-global-conflict.f90
diff --git a/flang/lib/Semantics/resolve-directives.cpp b/flang/lib/Semantics/resolve-directives.cpp
index f08c77352f567..0557b0870de2f 100644
--- a/flang/lib/Semantics/resolve-directives.cpp
+++ b/flang/lib/Semantics/resolve-directives.cpp
@@ -2140,17 +2140,8 @@ bool OmpAttributeVisitor::Pre(const parser::OpenMPSectionConstruct &x) {
bool OmpAttributeVisitor::Pre(const parser::OpenMPCriticalConstruct &x) {
const auto &beginCriticalDir{std::get<parser::OmpCriticalDirective>(x.t)};
- const auto &endCriticalDir{std::get<parser::OmpEndCriticalDirective>(x.t)};
PushContext(beginCriticalDir.source, llvm::omp::Directive::OMPD_critical);
GetContext().withinConstruct = true;
- if (const auto &criticalName{
- std::get<std::optional<parser::Name>>(beginCriticalDir.t)}) {
- ResolveOmpName(*criticalName, Symbol::Flag::OmpCriticalLock);
- }
- if (const auto &endCriticalName{
- std::get<std::optional<parser::Name>>(endCriticalDir.t)}) {
- ResolveOmpName(*endCriticalName, Symbol::Flag::OmpCriticalLock);
- }
return true;
}
diff --git a/flang/lib/Semantics/resolve-names.cpp b/flang/lib/Semantics/resolve-names.cpp
index 25b13700cd3ab..66a45dd3d07fa 100644
--- a/flang/lib/Semantics/resolve-names.cpp
+++ b/flang/lib/Semantics/resolve-names.cpp
@@ -1593,6 +1593,14 @@ class OmpVisitor : public virtual DeclarationVisitor {
}
bool Pre(const parser::OmpCriticalDirective &x) {
AddOmpSourceRange(x.source);
+ // Manually resolve names in CRITICAL directives. This is because these
+ // names do not denote Fortran objects, and the CRITICAL directive causes
+ // them to be "auto-declared", i.e. inserted into the global scope.
+ // More specifically, they are not expected to have explicit declarations,
+ // and if they do, the behavior is unspecified.
+ if (auto &maybeName{std::get<std::optional<parser::Name>>(x.t)}) {
+ ResolveCriticalName(*maybeName);
+ }
return true;
}
void Post(const parser::OmpCriticalDirective &) {
@@ -1600,6 +1608,10 @@ class OmpVisitor : public virtual DeclarationVisitor {
}
bool Pre(const parser::OmpEndCriticalDirective &x) {
AddOmpSourceRange(x.source);
+ // Manually resolve names in CRITICAL directives.
+ if (auto &maybeName{std::get<std::optional<parser::Name>>(x.t)}) {
+ ResolveCriticalName(*maybeName);
+ }
return true;
}
void Post(const parser::OmpEndCriticalDirective &) {
@@ -1720,6 +1732,8 @@ class OmpVisitor : public virtual DeclarationVisitor {
const std::optional<parser::OmpClauseList> &clauses,
const T &wholeConstruct);
+ void ResolveCriticalName(const parser::Name &name);
+
int metaLevel_{0};
const parser::OmpMetadirectiveDirective *metaDirective_{nullptr};
};
@@ -1947,6 +1961,28 @@ void OmpVisitor::ProcessReductionSpecifier(
}
}
+void OmpVisitor::ResolveCriticalName(const parser::Name &name) {
+ auto &globalScope{[&]() -> Scope & {
+ for (Scope *s{&currScope()};; s = &s->parent()) {
+ if (s->IsTopLevel()) {
+ return *s;
+ }
+ }
+ llvm_unreachable("Cannot find global scope");
+ }()};
+
+ if (auto *symbol{FindInScope(globalScope, name)}) {
+ if (!symbol->test(Symbol::Flag::OmpCriticalLock)) {
+ SayWithDecl(name, *symbol,
+ "CRITICAL construct name '%s' conflicts with a previous declaration"_warn_en_US,
+ name.ToString());
+ }
+ } else {
+ name.symbol = &MakeSymbol(globalScope, name.source, Attrs{});
+ name.symbol->set(Symbol::Flag::OmpCriticalLock);
+ }
+}
+
bool OmpVisitor::Pre(const parser::OmpDirectiveSpecification &x) {
AddOmpSourceRange(x.source);
if (metaLevel_ == 0) {
diff --git a/flang/test/Semantics/OpenMP/critical-global-conflict.f90 b/flang/test/Semantics/OpenMP/critical-global-conflict.f90
new file mode 100644
index 0000000000000..2546b68748d93
--- /dev/null
+++ b/flang/test/Semantics/OpenMP/critical-global-conflict.f90
@@ -0,0 +1,15 @@
+! RUN: %python %S/../test_errors.py %s %flang_fc1 -fopenmp -Werror
+
+subroutine g
+end
+
+subroutine f(x)
+ implicit none
+ integer :: x
+
+!ERROR: CRITICAL construct name 'g' conflicts with a previous declaration
+ !$omp critical(g)
+ x = 0
+!ERROR: CRITICAL construct name 'g' conflicts with a previous declaration
+ !$omp end critical(g)
+end
diff --git a/flang/test/Semantics/OpenMP/critical_within_default.f90 b/flang/test/Semantics/OpenMP/critical_within_default.f90
index a5fe30eeb7de0..70353e8e4b585 100644
--- a/flang/test/Semantics/OpenMP/critical_within_default.f90
+++ b/flang/test/Semantics/OpenMP/critical_within_default.f90
@@ -1,11 +1,16 @@
! RUN: %flang_fc1 -fopenmp -fdebug-dump-symbols %s | FileCheck %s
! Test that we do not make a private copy of the critical name
+!CHECK: Global scope:
+!CHECK-NEXT: MN: MainProgram
+!CHECK-NEXT: k2 (OmpCriticalLock): Unknown
+
!CHECK: MainProgram scope: MN
!CHECK-NEXT: j size=4 offset=0: ObjectEntity type: INTEGER(4)
!CHECK-NEXT: OtherConstruct scope:
!CHECK-NEXT: j (OmpPrivate): HostAssoc
-!CHECK-NEXT: k2 (OmpCriticalLock): Unknown
+!CHECK-NOT: k2
+
program mn
integer :: j
j=2
>From bd9117c569678e7af042074cbcaba860ab6eefb3 Mon Sep 17 00:00:00 2001
From: Aiden Grossman <aidengrossman at google.com>
Date: Wed, 6 Aug 2025 08:17:48 -0700
Subject: [PATCH 103/171] [CI] Move platform specific test report titles to
python
This patch moves the platform-specific test report titles into the
generate_test_report_github.py script. This means the functions in the
monolithic-* scripts are now identical and can be factored into a single
script shared between the two.
Reviewers: DavidSpickett, cmtice, lnihlen, dschuff, Keenuts, gburgessiv
Reviewed By: DavidSpickett
Pull Request: https://github.com/llvm/llvm-project/pull/152198
---
.ci/generate_test_report_github.py | 11 +++++++----
.ci/monolithic-linux.sh | 2 +-
.ci/monolithic-windows.sh | 2 +-
3 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/.ci/generate_test_report_github.py b/.ci/generate_test_report_github.py
index be43c4b4f693f..4b7f3a2367d50 100644
--- a/.ci/generate_test_report_github.py
+++ b/.ci/generate_test_report_github.py
@@ -4,20 +4,23 @@
"""Script to generate a build report for Github."""
import argparse
+import platform
import generate_test_report_lib
+PLATFORM_TITLES = {
+ "Windows": ":window: Windows x64 Test Results",
+ "Linux": ":penguin: Linux x64 Test Results",
+}
+
if __name__ == "__main__":
parser = argparse.ArgumentParser()
- parser.add_argument(
- "title", help="Title of the test report, without Markdown formatting."
- )
parser.add_argument("return_code", help="The build's return code.", type=int)
parser.add_argument("junit_files", help="Paths to JUnit report files.", nargs="*")
args = parser.parse_args()
report = generate_test_report_lib.generate_report_from_files(
- args.title, args.return_code, args.junit_files
+ PLATFORM_TITLES[platform.system()], args.return_code, args.junit_files
)
print(report)
diff --git a/.ci/monolithic-linux.sh b/.ci/monolithic-linux.sh
index fffd8fef21382..da21e09809aee 100755
--- a/.ci/monolithic-linux.sh
+++ b/.ci/monolithic-linux.sh
@@ -39,7 +39,7 @@ function at-exit {
shopt -s nullglob
if [[ "$GITHUB_STEP_SUMMARY" != "" ]]; then
- python3 "${MONOREPO_ROOT}"/.ci/generate_test_report_github.py ":penguin: Linux x64 Test Results" \
+ python3 "${MONOREPO_ROOT}"/.ci/generate_test_report_github.py \
$retcode "${BUILD_DIR}"/test-results.*.xml >> $GITHUB_STEP_SUMMARY
fi
}
diff --git a/.ci/monolithic-windows.sh b/.ci/monolithic-windows.sh
index da150829783df..71646ce047d4f 100755
--- a/.ci/monolithic-windows.sh
+++ b/.ci/monolithic-windows.sh
@@ -34,7 +34,7 @@ function at-exit {
shopt -s nullglob
if [[ "$GITHUB_STEP_SUMMARY" != "" ]]; then
- python "${MONOREPO_ROOT}"/.ci/generate_test_report_github.py ":window: Windows x64 Test Results" \
+ python "${MONOREPO_ROOT}"/.ci/generate_test_report_github.py \
$retcode "${BUILD_DIR}"/test-results.*.xml >> $GITHUB_STEP_SUMMARY
fi
}
>From 71dbf1492bada3a93d49f681c6a8b71072a61d71 Mon Sep 17 00:00:00 2001
From: Timm Baeder <tbaeder at redhat.com>
Date: Wed, 6 Aug 2025 17:41:53 +0200
Subject: [PATCH 104/171] [clang][bytecode][NFC] Remove Descriptor::MoveFn
(#152317)
We don't use this anymore.
---
clang/lib/AST/ByteCode/Descriptor.cpp | 90 ++-------------------------
clang/lib/AST/ByteCode/Descriptor.h | 9 ---
2 files changed, 6 insertions(+), 93 deletions(-)
diff --git a/clang/lib/AST/ByteCode/Descriptor.cpp b/clang/lib/AST/ByteCode/Descriptor.cpp
index 7403e90ce53db..629c1ffa3602e 100644
--- a/clang/lib/AST/ByteCode/Descriptor.cpp
+++ b/clang/lib/AST/ByteCode/Descriptor.cpp
@@ -153,28 +153,6 @@ static void dtorArrayDesc(Block *B, std::byte *Ptr, const Descriptor *D) {
}
}
-static void moveArrayDesc(Block *B, std::byte *Src, std::byte *Dst,
- const Descriptor *D) {
- const unsigned NumElems = D->getNumElems();
- const unsigned ElemSize =
- D->ElemDesc->getAllocSize() + sizeof(InlineDescriptor);
-
- unsigned ElemOffset = 0;
- for (unsigned I = 0; I < NumElems; ++I, ElemOffset += ElemSize) {
- auto *SrcPtr = Src + ElemOffset;
- auto *DstPtr = Dst + ElemOffset;
-
- auto *SrcDesc = reinterpret_cast<InlineDescriptor *>(SrcPtr);
- auto *SrcElemLoc = reinterpret_cast<std::byte *>(SrcDesc + 1);
- auto *DstDesc = reinterpret_cast<InlineDescriptor *>(DstPtr);
- auto *DstElemLoc = reinterpret_cast<std::byte *>(DstDesc + 1);
-
- *DstDesc = *SrcDesc;
- if (auto Fn = D->ElemDesc->MoveFn)
- Fn(B, SrcElemLoc, DstElemLoc, D->ElemDesc);
- }
-}
-
static void initField(Block *B, std::byte *Ptr, bool IsConst, bool IsMutable,
bool IsVolatile, bool IsActive, bool IsUnionField,
bool InUnion, const Descriptor *D, unsigned FieldOffset) {
@@ -268,45 +246,6 @@ static void dtorRecord(Block *B, std::byte *Ptr, const Descriptor *D) {
destroyBase(B, Ptr, F.Desc, F.Offset);
}
-static void moveRecord(Block *B, std::byte *Src, std::byte *Dst,
- const Descriptor *D) {
- assert(D);
- assert(D->ElemRecord);
-
- // FIXME: Code duplication.
- for (const auto &F : D->ElemRecord->fields()) {
- auto FieldOffset = F.Offset;
- const auto *SrcDesc =
- reinterpret_cast<const InlineDescriptor *>(Src + FieldOffset) - 1;
- auto *DestDesc =
- reinterpret_cast<InlineDescriptor *>(Dst + FieldOffset) - 1;
- std::memcpy(DestDesc, SrcDesc, sizeof(InlineDescriptor));
-
- if (auto Fn = F.Desc->MoveFn)
- Fn(B, Src + FieldOffset, Dst + FieldOffset, F.Desc);
- }
-
- for (const auto &Base : D->ElemRecord->bases()) {
- auto BaseOffset = Base.Offset;
- const auto *SrcDesc =
- reinterpret_cast<const InlineDescriptor *>(Src + BaseOffset) - 1;
- auto *DestDesc = reinterpret_cast<InlineDescriptor *>(Dst + BaseOffset) - 1;
- std::memcpy(DestDesc, SrcDesc, sizeof(InlineDescriptor));
-
- if (auto Fn = Base.Desc->MoveFn)
- Fn(B, Src + BaseOffset, Dst + BaseOffset, Base.Desc);
- }
-
- for (const auto &VBase : D->ElemRecord->virtual_bases()) {
- auto VBaseOffset = VBase.Offset;
- const auto *SrcDesc =
- reinterpret_cast<const InlineDescriptor *>(Src + VBaseOffset) - 1;
- auto *DestDesc =
- reinterpret_cast<InlineDescriptor *>(Dst + VBaseOffset) - 1;
- std::memcpy(DestDesc, SrcDesc, sizeof(InlineDescriptor));
- }
-}
-
static BlockCtorFn getCtorPrim(PrimType Type) {
// Floating types are special. They are primitives, but need their
// constructor called.
@@ -337,18 +276,6 @@ static BlockDtorFn getDtorPrim(PrimType Type) {
COMPOSITE_TYPE_SWITCH(Type, return dtorTy<T>, return nullptr);
}
-static BlockMoveFn getMovePrim(PrimType Type) {
- if (Type == PT_Float)
- return moveTy<PrimConv<PT_Float>::T>;
- if (Type == PT_IntAP)
- return moveTy<PrimConv<PT_IntAP>::T>;
- if (Type == PT_IntAPS)
- return moveTy<PrimConv<PT_IntAPS>::T>;
- if (Type == PT_MemberPtr)
- return moveTy<PrimConv<PT_MemberPtr>::T>;
- COMPOSITE_TYPE_SWITCH(Type, return moveTy<T>, return nullptr);
-}
-
static BlockCtorFn getCtorArrayPrim(PrimType Type) {
TYPE_SWITCH(Type, return ctorArrayTy<T>);
llvm_unreachable("unknown Expr");
@@ -359,11 +286,6 @@ static BlockDtorFn getDtorArrayPrim(PrimType Type) {
llvm_unreachable("unknown Expr");
}
-static BlockMoveFn getMoveArrayPrim(PrimType Type) {
- TYPE_SWITCH(Type, return moveArrayTy<T>);
- llvm_unreachable("unknown Expr");
-}
-
/// Primitives.
Descriptor::Descriptor(const DeclTy &D, const Type *SourceTy, PrimType Type,
MetadataSize MD, bool IsConst, bool IsTemporary,
@@ -372,7 +294,7 @@ Descriptor::Descriptor(const DeclTy &D, const Type *SourceTy, PrimType Type,
MDSize(MD.value_or(0)), AllocSize(align(Size + MDSize)), PrimT(Type),
IsConst(IsConst), IsMutable(IsMutable), IsTemporary(IsTemporary),
IsVolatile(IsVolatile), CtorFn(getCtorPrim(Type)),
- DtorFn(getDtorPrim(Type)), MoveFn(getMovePrim(Type)) {
+ DtorFn(getDtorPrim(Type)) {
assert(AllocSize >= Size);
assert(Source && "Missing source");
}
@@ -386,7 +308,7 @@ Descriptor::Descriptor(const DeclTy &D, PrimType Type, MetadataSize MD,
AllocSize(align(MDSize) + align(Size) + sizeof(InitMapPtr)), PrimT(Type),
IsConst(IsConst), IsMutable(IsMutable), IsTemporary(IsTemporary),
IsArray(true), CtorFn(getCtorArrayPrim(Type)),
- DtorFn(getDtorArrayPrim(Type)), MoveFn(getMoveArrayPrim(Type)) {
+ DtorFn(getDtorArrayPrim(Type)) {
assert(Source && "Missing source");
assert(NumElems <= (MaxArrayElemBytes / ElemSize));
}
@@ -399,7 +321,7 @@ Descriptor::Descriptor(const DeclTy &D, PrimType Type, MetadataSize MD,
AllocSize(MDSize + sizeof(InitMapPtr) + alignof(void *)), PrimT(Type),
IsConst(IsConst), IsMutable(false), IsTemporary(IsTemporary),
IsArray(true), CtorFn(getCtorArrayPrim(Type)),
- DtorFn(getDtorArrayPrim(Type)), MoveFn(getMoveArrayPrim(Type)) {
+ DtorFn(getDtorArrayPrim(Type)) {
assert(Source && "Missing source");
}
@@ -414,7 +336,7 @@ Descriptor::Descriptor(const DeclTy &D, const Type *SourceTy,
AllocSize(std::max<size_t>(alignof(void *), Size) + MDSize),
ElemDesc(Elem), IsConst(IsConst), IsMutable(IsMutable),
IsTemporary(IsTemporary), IsArray(true), CtorFn(ctorArrayDesc),
- DtorFn(dtorArrayDesc), MoveFn(moveArrayDesc) {
+ DtorFn(dtorArrayDesc) {
assert(Source && "Missing source");
}
@@ -425,7 +347,7 @@ Descriptor::Descriptor(const DeclTy &D, const Descriptor *Elem, MetadataSize MD,
Size(UnknownSizeMark), MDSize(MD.value_or(0)),
AllocSize(MDSize + alignof(void *)), ElemDesc(Elem), IsConst(true),
IsMutable(false), IsTemporary(IsTemporary), IsArray(true),
- CtorFn(ctorArrayDesc), DtorFn(dtorArrayDesc), MoveFn(moveArrayDesc) {
+ CtorFn(ctorArrayDesc), DtorFn(dtorArrayDesc) {
assert(Source && "Missing source");
}
@@ -437,7 +359,7 @@ Descriptor::Descriptor(const DeclTy &D, const Record *R, MetadataSize MD,
Size(ElemSize), MDSize(MD.value_or(0)), AllocSize(Size + MDSize),
ElemRecord(R), IsConst(IsConst), IsMutable(IsMutable),
IsTemporary(IsTemporary), IsVolatile(IsVolatile), CtorFn(ctorRecord),
- DtorFn(dtorRecord), MoveFn(moveRecord) {
+ DtorFn(dtorRecord) {
assert(Source && "Missing source");
}
diff --git a/clang/lib/AST/ByteCode/Descriptor.h b/clang/lib/AST/ByteCode/Descriptor.h
index 4c925f6f0af1e..cd34e11a67151 100644
--- a/clang/lib/AST/ByteCode/Descriptor.h
+++ b/clang/lib/AST/ByteCode/Descriptor.h
@@ -41,14 +41,6 @@ using BlockCtorFn = void (*)(Block *Storage, std::byte *FieldPtr, bool IsConst,
using BlockDtorFn = void (*)(Block *Storage, std::byte *FieldPtr,
const Descriptor *FieldDesc);
-/// Invoked when a block with pointers referencing it goes out of scope. Such
-/// blocks are persisted: the move function copies all inline descriptors and
-/// non-trivial fields, as existing pointers might need to reference those
-/// descriptors. Data is not copied since it cannot be legally read.
-using BlockMoveFn = void (*)(Block *Storage, std::byte *SrcFieldPtr,
- std::byte *DstFieldPtr,
- const Descriptor *FieldDesc);
-
enum class GlobalInitState {
Initialized,
NoInitializer,
@@ -181,7 +173,6 @@ struct Descriptor final {
/// Storage management methods.
const BlockCtorFn CtorFn = nullptr;
const BlockDtorFn DtorFn = nullptr;
- const BlockMoveFn MoveFn = nullptr;
/// Allocates a descriptor for a primitive.
Descriptor(const DeclTy &D, const Type *SourceTy, PrimType Type,
>From c9eff91ef1c7af02e68a4897c476dab0afbfff77 Mon Sep 17 00:00:00 2001
From: Aiden Grossman <aidengrossman at google.com>
Date: Wed, 6 Aug 2025 15:49:17 +0000
Subject: [PATCH 105/171] [CI][Github] Only run CI Checks Workflow on Push for
Main
Currently the check-ci workflow also runs on the push event regardless of
the branch, which means the workflow runs twice on stacked PRs. Not a big
deal, but it is a bit weird to see the same workflow running twice in a PR.
---
.github/workflows/check-ci.yml | 2 ++
1 file changed, 2 insertions(+)
diff --git a/.github/workflows/check-ci.yml b/.github/workflows/check-ci.yml
index befea2093f908..ec2615dcf20d0 100644
--- a/.github/workflows/check-ci.yml
+++ b/.github/workflows/check-ci.yml
@@ -5,6 +5,8 @@ permissions:
on:
push:
+ branches:
+ - main
paths:
- '.ci/**'
- '.github/workflows/check-ci.yml'
>From b1482aa91b16e46cfed46cb4c4c41cfe34dfd434 Mon Sep 17 00:00:00 2001
From: Nikolas Klauser <nikolasklauser at berlin.de>
Date: Wed, 6 Aug 2025 17:50:53 +0200
Subject: [PATCH 106/171] [libc++] Fix incorrect down cast in __tree::operator=
(#152285)
This was introduced by #151304. The problem is diagnosed by UBSan with
optimizations enabled. Since we currently run UBSan only with optimizations
disabled, this isn't caught in our CI. We should look into enabling UBSan
with optimizations enabled to catch these sorts of issues before landing a
patch.
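A minimal reproducer sketch (an assumption based on the description above,
not a confirmed failing case from the patch):
```c++
#include <map>

int main() {
  std::map<int, int> a, b;
  // Copy-assigning from an empty map previously down-cast the end node even
  // when the tree had no root, which UBSan diagnoses with optimizations on.
  a = b;
}
```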
---
libcxx/include/__tree | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/libcxx/include/__tree b/libcxx/include/__tree
index 6ca1a623536f2..74b20d5b6f814 100644
--- a/libcxx/include/__tree
+++ b/libcxx/include/__tree
@@ -1388,8 +1388,9 @@ __tree<_Tp, _Compare, _Allocator>& __tree<_Tp, _Compare, _Allocator>::operator=(
if (__root())
__root()->__parent_ = __end_node();
}
- __begin_node_ = static_cast<__end_node_pointer>(std::__tree_min(static_cast<__node_base_pointer>(__end_node())));
- __size_ = __t.size();
+ __begin_node_ =
+ __end_node()->__left_ ? static_cast<__end_node_pointer>(std::__tree_min(__end_node()->__left_)) : __end_node();
+ __size_ = __t.size();
return *this;
}
>From 0e3a17c70d3f8bec3495a3331e134e361bb00928 Mon Sep 17 00:00:00 2001
From: Krishna Pandey <kpandey81930 at gmail.com>
Date: Wed, 6 Aug 2025 21:23:21 +0530
Subject: [PATCH 107/171] [libc][math] Fix gcc buildbot failure (#152320)
Signed-off-by: Krishna Pandey <kpandey81930 at gmail.com>
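The commit message is terse, so a hedged reading of the fix below: GCC
(unlike Clang in this configuration) appears to reject the implicit
conversion when the shift result is assigned back to the narrower storage
type under -Werror, and the explicit cast silences it. A minimal,
self-contained sketch of the pattern, with `Storage` as a stand-in for
`InStorageType`:

```cpp
// Hedged sketch: when Storage is narrower than int, `mant >> n` promotes
// to int, and assigning it back implicitly narrows -- which GCC can
// diagnose under -Werror. The explicit static_cast mirrors the fix.
#include <algorithm>
#include <cassert>
#include <cstdint>

using Storage = uint16_t; // stand-in for InStorageType

Storage align_mantissa(Storage mant, int alignment, int result_len) {
  return static_cast<Storage>(mant >> std::min(alignment, result_len));
}

int main() {
  assert(align_mantissa(0x8000, 3, 16) == 0x1000);
}
```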
---
libc/src/__support/FPUtil/generic/add_sub.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/libc/src/__support/FPUtil/generic/add_sub.h b/libc/src/__support/FPUtil/generic/add_sub.h
index bcc5647b3cdd5..b2e9d81f4774c 100644
--- a/libc/src/__support/FPUtil/generic/add_sub.h
+++ b/libc/src/__support/FPUtil/generic/add_sub.h
@@ -174,8 +174,8 @@ add_or_sub(InType x, InType y) {
int alignment = (max_bits.get_biased_exponent() - max_bits.is_normal()) -
(min_bits.get_biased_exponent() - min_bits.is_normal());
- InStorageType aligned_min_mant =
- min_mant >> cpp::min(alignment, RESULT_MANTISSA_LEN);
+ InStorageType aligned_min_mant = static_cast<InStorageType>(
+ min_mant >> cpp::min(alignment, RESULT_MANTISSA_LEN));
bool aligned_min_mant_sticky;
if (alignment <= GUARD_BITS_LEN)
>From 832ceda0c05f6db440a220860b3006f967f3bfd0 Mon Sep 17 00:00:00 2001
From: Dominik Steenken <dost at de.ibm.com>
Date: Wed, 6 Aug 2025 17:54:44 +0200
Subject: [PATCH 108/171] [SystemZ] Avoid modifying IR in mcount
instrumentation. (#152298)
This PR changes how the call to `mcount` is inserted in `emitPrologue`.
It is now emitted as an external symbol rather than a global variable,
preventing potentially unexpected IR modification.
Fixes: https://github.com/llvm/llvm-project/issues/152238
---
llvm/lib/Target/SystemZ/SystemZFrameLowering.cpp | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/llvm/lib/Target/SystemZ/SystemZFrameLowering.cpp b/llvm/lib/Target/SystemZ/SystemZFrameLowering.cpp
index 6297916310803..5ee66e3dfa7a8 100644
--- a/llvm/lib/Target/SystemZ/SystemZFrameLowering.cpp
+++ b/llvm/lib/Target/SystemZ/SystemZFrameLowering.cpp
@@ -574,13 +574,11 @@ void SystemZELFFrameLowering::emitPrologue(MachineFunction &MF,
// Call mcount (Regmask from CC AnyReg since mcount preserves all normal
// argument registers).
- FunctionCallee FC = MF.getFunction().getParent()->getOrInsertFunction(
- "mcount", Type::getVoidTy(MF.getFunction().getContext()));
const uint32_t *Mask = MF.getSubtarget<SystemZSubtarget>()
.getSpecialRegisters()
->getCallPreservedMask(MF, CallingConv::AnyReg);
BuildMI(MBB, MBBI, DL, ZII->get(SystemZ::CallBRASL))
- .addGlobalAddress(dyn_cast<Function>(FC.getCallee()))
+ .addExternalSymbol("mcount")
.addRegMask(Mask);
// Reload return address from 8 bytes above stack pointer.
>From d8c43e6236087ba754a5466203e80d3b59389119 Mon Sep 17 00:00:00 2001
From: Aiden Grossman <aidengrossman at google.com>
Date: Wed, 6 Aug 2025 15:54:35 +0000
Subject: [PATCH 109/171] [Github][CI] Add python-is-python3 to CI container
This patch adds the python-is-python3 package to the CI container.
Windows by default uses python instead of python3, which prevents
code sharing without additional hackery. This should fix that and
allow #152199 to land.
---
.github/workflows/containers/github-action-ci/Dockerfile | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/.github/workflows/containers/github-action-ci/Dockerfile b/.github/workflows/containers/github-action-ci/Dockerfile
index 0444f617e3711..4b0d5e2aedc0f 100644
--- a/.github/workflows/containers/github-action-ci/Dockerfile
+++ b/.github/workflows/containers/github-action-ci/Dockerfile
@@ -59,9 +59,12 @@ RUN apt-get update && \
sudo \
# These are needed by the premerge pipeline. Pip is used to install
# dependent python packages. File and tzdata are used for tests.
+ # Having a symlink from python to python3 enables code sharing between
+ # the Linux and Windows pipelines.
python3-pip \
file \
- tzdata && \
+ tzdata \
+ python-is-python3 && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
>From 4784ce9ebcf60274985b8fe9bf4d4eb8c734ce38 Mon Sep 17 00:00:00 2001
From: Alexey Bataev <a.bataev at outlook.com>
Date: Wed, 6 Aug 2025 08:55:52 -0700
Subject: [PATCH 110/171] [SLP][NFC]Check an external user before trying to
address it in debug dump, NFC
---
llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp b/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
index 62ab3f522bb6f..5d0e2f9518a51 100644
--- a/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
+++ b/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
@@ -15097,7 +15097,8 @@ InstructionCost BoUpSLP::getTreeCost(ArrayRef<Value *> VectorizedVals,
for (ExternalUser &EU : ExternalUses) {
LLVM_DEBUG(dbgs() << "SLP: Computing cost for external use of TreeEntry "
<< EU.E.Idx << " in lane " << EU.Lane << "\n");
- LLVM_DEBUG(dbgs() << " User:" << *EU.User << "\n");
+ LLVM_DEBUG(if (EU.User) dbgs() << " User:" << *EU.User << "\n";
+ else dbgs() << " User: nullptr\n");
LLVM_DEBUG(dbgs() << " Use: " << EU.Scalar->getNameOrAsOperand() << "\n");
// Uses by ephemeral values are free (because the ephemeral value will be
>From 056608a2821e34752b1e47f73a8d544b5c9ad787 Mon Sep 17 00:00:00 2001
From: erichkeane <ekeane at nvidia.com>
Date: Wed, 6 Aug 2025 09:05:14 -0700
Subject: [PATCH 111/171] [OpenACC][NFC] Remove temporary assert from CIndex
OpenACCBindClause
This was left over from the implementation and shouldn't have been left in,
but in the end 'bind' doesn't require any additional work here, so this
patch removes the assert.
---
clang/tools/libclang/CIndex.cpp | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/clang/tools/libclang/CIndex.cpp b/clang/tools/libclang/CIndex.cpp
index 8b3d70b25866f..bf0c0aab2270f 100644
--- a/clang/tools/libclang/CIndex.cpp
+++ b/clang/tools/libclang/CIndex.cpp
@@ -2976,9 +2976,7 @@ void OpenACCClauseEnqueue::VisitIndependentClause(
const OpenACCIndependentClause &C) {}
void OpenACCClauseEnqueue::VisitSeqClause(const OpenACCSeqClause &C) {}
void OpenACCClauseEnqueue::VisitNoHostClause(const OpenACCNoHostClause &C) {}
-void OpenACCClauseEnqueue::VisitBindClause(const OpenACCBindClause &C) {
- assert(false && "TODO ERICH");
-}
+void OpenACCClauseEnqueue::VisitBindClause(const OpenACCBindClause &C) { }
void OpenACCClauseEnqueue::VisitFinalizeClause(const OpenACCFinalizeClause &C) {
}
void OpenACCClauseEnqueue::VisitIfPresentClause(
>From 3686e5b52f2a02c1c19050479d1dd0fd9d1dd4f8 Mon Sep 17 00:00:00 2001
From: Jonas Devlieghere <jonas at devlieghere.com>
Date: Wed, 6 Aug 2025 09:06:54 -0700
Subject: [PATCH 112/171] [lldb] Eliminate (_)Py_IsFinalizing (NFC) (#152226)
Looking at the implementation of `pylifecycle.c` in CPython, the finalizing
and initialized flags are set at the same time. Therefore we can eliminate the
call to `Py_IsFinalizing` and only check `Py_IsInitialized`, which is
part of the stable API.
I converted the check to an assert and confirmed that during my test
suite runs, we never got into the if block. Because we check before
taking the lock, there is an opportunity for a race, but that exact same
race exists with the original code.
---
.../Python/PythonDataObjects.cpp | 19 +++----------------
1 file changed, 3 insertions(+), 16 deletions(-)
diff --git a/lldb/source/Plugins/ScriptInterpreter/Python/PythonDataObjects.cpp b/lldb/source/Plugins/ScriptInterpreter/Python/PythonDataObjects.cpp
index 82aa022bbae0b..27ac54322165e 100644
--- a/lldb/source/Plugins/ScriptInterpreter/Python/PythonDataObjects.cpp
+++ b/lldb/source/Plugins/ScriptInterpreter/Python/PythonDataObjects.cpp
@@ -71,24 +71,11 @@ Expected<std::string> python::As<std::string>(Expected<PythonObject> &&obj) {
return std::string(utf8.get());
}
-static bool python_is_finalizing() {
-#if PY_VERSION_HEX >= 0x030d0000
- return Py_IsFinalizing();
-#else
- return _Py_IsFinalizing();
-#endif
-}
-
void PythonObject::Reset() {
if (m_py_obj && Py_IsInitialized()) {
- if (python_is_finalizing()) {
- // Leak m_py_obj rather than crashing the process.
- // https://docs.python.org/3/c-api/init.html#c.PyGILState_Ensure
- } else {
- PyGILState_STATE state = PyGILState_Ensure();
- Py_DECREF(m_py_obj);
- PyGILState_Release(state);
- }
+ PyGILState_STATE state = PyGILState_Ensure();
+ Py_DECREF(m_py_obj);
+ PyGILState_Release(state);
}
m_py_obj = nullptr;
}
>From 8e57689c34f0b0af70f9aaf009c3be0e85d90dda Mon Sep 17 00:00:00 2001
From: Daniel Henrique Barboza <dbarboza at ventanamicro.com>
Date: Wed, 6 Aug 2025 13:08:25 -0300
Subject: [PATCH 113/171] [RISCV] add load/store misched/PostRA subtarget
features (#149409)
Some processors benefit more from store clustering than load clustering,
and vice versa, depending on factors exclusive to each one (e.g. the
macrofusions implemented).
Likewise, certain optimizations benefit more from misched clustering
than postRA clustering. Macrofusions are again an example: on a
processor with store pair macrofusions, like the veyron-v1, misched
clustering is observed to increase the number of macrofusions more than
postRA clustering does. This of course isn't necessarily true for other
processors, but it shows that processors can benefit from more
fine-grained control of clustering mutations, and each one is able to do
it differently.
Add 4 new subtarget features that deprecate the existing
riscv-misched-load-store-clustering and
riscv-postmisched-load-store-clustering options:
- disable-misched-load-clustering and disable-misched-store-clustering:
disable load/store clustering during misched;
- disable-postmisched-load-clustering and
disable-postmisched-store-clustering:
disable load/store clustering during PostRA.
Note that the new subtarget features disable specific stages of the
default clustering settings. The default per se (load and store
clustering for both misched and PostRA) is left untouched.
Disable all clustering but misched-store-clustering for the veyron-v1
processor using the new features.
---
llvm/lib/Target/RISCV/RISCVFeatures.td | 12 +++
llvm/lib/Target/RISCV/RISCVProcessors.td | 3 +
llvm/lib/Target/RISCV/RISCVTargetMachine.cpp | 25 +++---
llvm/test/CodeGen/RISCV/features-info.ll | 4 +
.../CodeGen/RISCV/misched-load-clustering.ll | 47 ++++++++++-
.../CodeGen/RISCV/misched-mem-clustering.mir | 6 +-
.../CodeGen/RISCV/misched-store-clustering.ll | 83 +++++++++++++++++++
7 files changed, 160 insertions(+), 20 deletions(-)
create mode 100644 llvm/test/CodeGen/RISCV/misched-store-clustering.ll
diff --git a/llvm/lib/Target/RISCV/RISCVFeatures.td b/llvm/lib/Target/RISCV/RISCVFeatures.td
index 171940e149815..a7329d201f880 100644
--- a/llvm/lib/Target/RISCV/RISCVFeatures.td
+++ b/llvm/lib/Target/RISCV/RISCVFeatures.td
@@ -1700,6 +1700,18 @@ def TuneNLogNVRGather
def TunePostRAScheduler : SubtargetFeature<"use-postra-scheduler",
"UsePostRAScheduler", "true", "Schedule again after register allocation">;
+def TuneDisableMISchedLoadClustering : SubtargetFeature<"disable-misched-load-clustering",
+ "EnableMISchedLoadClustering", "false", "Disable load clustering in the machine scheduler">;
+
+def TuneDisableMISchedStoreClustering : SubtargetFeature<"disable-misched-store-clustering",
+ "EnableMISchedStoreClustering", "false", "Disable store clustering in the machine scheduler">;
+
+def TuneDisablePostMISchedLoadClustering : SubtargetFeature<"disable-postmisched-load-clustering",
+ "EnablePostMISchedLoadClustering", "false", "Disable PostRA load clustering in the machine scheduler">;
+
+def TuneDisablePostMISchedStoreClustering : SubtargetFeature<"disable-postmisched-store-clustering",
+ "EnablePostMISchedStoreClustering", "false", "Disable PostRA store clustering in the machine scheduler">;
+
def TuneDisableLatencySchedHeuristic
: SubtargetFeature<"disable-latency-sched-heuristic", "DisableLatencySchedHeuristic", "true",
"Disable latency scheduling heuristic">;
diff --git a/llvm/lib/Target/RISCV/RISCVProcessors.td b/llvm/lib/Target/RISCV/RISCVProcessors.td
index 838edf6c57250..8445730446dd9 100644
--- a/llvm/lib/Target/RISCV/RISCVProcessors.td
+++ b/llvm/lib/Target/RISCV/RISCVProcessors.td
@@ -590,6 +590,9 @@ def VENTANA_VEYRON_V1 : RISCVProcessorModel<"veyron-v1",
FeatureStdExtZicboz,
FeatureVendorXVentanaCondOps],
[TuneVentanaVeyron,
+ TuneDisableMISchedLoadClustering,
+ TuneDisablePostMISchedLoadClustering,
+ TuneDisablePostMISchedStoreClustering,
TuneLUIADDIFusion,
TuneAUIPCADDIFusion,
TuneZExtHFusion,
diff --git a/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp b/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp
index 3f2a83f8ce98c..66ce13428267c 100644
--- a/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp
+++ b/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp
@@ -94,16 +94,6 @@ static cl::opt<bool>
cl::desc("Enable the loop data prefetch pass"),
cl::init(true));
-static cl::opt<bool> EnableMISchedLoadStoreClustering(
- "riscv-misched-load-store-clustering", cl::Hidden,
- cl::desc("Enable load and store clustering in the machine scheduler"),
- cl::init(true));
-
-static cl::opt<bool> EnablePostMISchedLoadStoreClustering(
- "riscv-postmisched-load-store-clustering", cl::Hidden,
- cl::desc("Enable PostRA load and store clustering in the machine scheduler"),
- cl::init(true));
-
static cl::opt<bool> DisableVectorMaskMutation(
"riscv-disable-vector-mask-mutation",
cl::desc("Disable the vector mask scheduling mutation"), cl::init(false),
@@ -294,15 +284,17 @@ bool RISCVTargetMachine::isNoopAddrSpaceCast(unsigned SrcAS,
ScheduleDAGInstrs *
RISCVTargetMachine::createMachineScheduler(MachineSchedContext *C) const {
+ const RISCVSubtarget &ST = C->MF->getSubtarget<RISCVSubtarget>();
ScheduleDAGMILive *DAG = createSchedLive(C);
- if (EnableMISchedLoadStoreClustering) {
+
+ if (ST.enableMISchedLoadClustering())
DAG->addMutation(createLoadClusterDAGMutation(
DAG->TII, DAG->TRI, /*ReorderWhileClustering=*/true));
+
+ if (ST.enableMISchedStoreClustering())
DAG->addMutation(createStoreClusterDAGMutation(
DAG->TII, DAG->TRI, /*ReorderWhileClustering=*/true));
- }
- const RISCVSubtarget &ST = C->MF->getSubtarget<RISCVSubtarget>();
if (!DisableVectorMaskMutation && ST.hasVInstructions())
DAG->addMutation(createRISCVVectorMaskDAGMutation(DAG->TRI));
@@ -311,13 +303,16 @@ RISCVTargetMachine::createMachineScheduler(MachineSchedContext *C) const {
ScheduleDAGInstrs *
RISCVTargetMachine::createPostMachineScheduler(MachineSchedContext *C) const {
+ const RISCVSubtarget &ST = C->MF->getSubtarget<RISCVSubtarget>();
ScheduleDAGMI *DAG = createSchedPostRA(C);
- if (EnablePostMISchedLoadStoreClustering) {
+
+ if (ST.enablePostMISchedLoadClustering())
DAG->addMutation(createLoadClusterDAGMutation(
DAG->TII, DAG->TRI, /*ReorderWhileClustering=*/true));
+
+ if (ST.enablePostMISchedStoreClustering())
DAG->addMutation(createStoreClusterDAGMutation(
DAG->TII, DAG->TRI, /*ReorderWhileClustering=*/true));
- }
return DAG;
}
diff --git a/llvm/test/CodeGen/RISCV/features-info.ll b/llvm/test/CodeGen/RISCV/features-info.ll
index b94665b718ae7..a5ee41281607b 100644
--- a/llvm/test/CodeGen/RISCV/features-info.ll
+++ b/llvm/test/CodeGen/RISCV/features-info.ll
@@ -13,6 +13,10 @@
; CHECK-NEXT: conditional-cmv-fusion - Enable branch+c.mv fusion.
; CHECK-NEXT: d - 'D' (Double-Precision Floating-Point).
; CHECK-NEXT: disable-latency-sched-heuristic - Disable latency scheduling heuristic.
+; CHECK-NEXT: disable-misched-load-clustering - Disable load clustering in the machine scheduler.
+; CHECK-NEXT: disable-misched-store-clustering - Disable store clustering in the machine scheduler.
+; CHECK-NEXT: disable-postmisched-load-clustering - Disable PostRA load clustering in the machine scheduler.
+; CHECK-NEXT: disable-postmisched-store-clustering - Disable PostRA store clustering in the machine scheduler.
; CHECK-NEXT: dlen-factor-2 - Vector unit DLEN(data path width) is half of VLEN.
; CHECK-NEXT: e - 'E' (Embedded Instruction Set with 16 GPRs).
; CHECK-NEXT: exact-asm - Enable Exact Assembly (Disables Compression and Relaxation).
diff --git a/llvm/test/CodeGen/RISCV/misched-load-clustering.ll b/llvm/test/CodeGen/RISCV/misched-load-clustering.ll
index 160f0aefa36a7..abdc1bad787ae 100644
--- a/llvm/test/CodeGen/RISCV/misched-load-clustering.ll
+++ b/llvm/test/CodeGen/RISCV/misched-load-clustering.ll
@@ -1,17 +1,42 @@
; REQUIRES: asserts
-; RUN: llc -mtriple=riscv32 -verify-misched -riscv-misched-load-store-clustering=false \
+;
+; Disable all misched clustering
+; RUN: llc -mtriple=riscv32 -verify-misched \
+; RUN: -mattr=+disable-misched-load-clustering,+disable-misched-store-clustering \
; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
; RUN: | FileCheck -check-prefix=NOCLUSTER %s
-; RUN: llc -mtriple=riscv64 -verify-misched -riscv-misched-load-store-clustering=false \
+; RUN: llc -mtriple=riscv64 -verify-misched \
+; RUN: -mattr=+disable-misched-load-clustering,+disable-misched-store-clustering \
; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
; RUN: | FileCheck -check-prefix=NOCLUSTER %s
+;
+; ST misched clustering only
+; RUN: llc -mtriple=riscv32 -verify-misched \
+; RUN: -mattr=+disable-misched-load-clustering \
+; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
+; RUN: | FileCheck -check-prefix=STCLUSTER %s
+; RUN: llc -mtriple=riscv64 -verify-misched \
+; RUN: -mattr=+disable-misched-load-clustering \
+; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
+; RUN: | FileCheck -check-prefix=STCLUSTER %s
+;
+; LD misched clustering only
; RUN: llc -mtriple=riscv32 -verify-misched \
+; RUN: -mattr=+disable-misched-store-clustering \
; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
; RUN: | FileCheck -check-prefix=LDCLUSTER %s
; RUN: llc -mtriple=riscv64 -verify-misched \
+; RUN: -mattr=+disable-misched-store-clustering \
; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
; RUN: | FileCheck -check-prefix=LDCLUSTER %s
-
+;
+; Default misched cluster settings (i.e. both LD and ST clustering)
+; RUN: llc -mtriple=riscv32 -verify-misched \
+; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
+; RUN: | FileCheck -check-prefix=DEFAULTCLUSTER %s
+; RUN: llc -mtriple=riscv64 -verify-misched \
+; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
+; RUN: | FileCheck -check-prefix=DEFAULTCLUSTER %s
define i32 @load_clustering_1(ptr nocapture %p) {
; NOCLUSTER: ********** MI Scheduling **********
@@ -22,6 +47,14 @@ define i32 @load_clustering_1(ptr nocapture %p) {
; NOCLUSTER: SU(4): %4:gpr = LW %0:gpr, 4
; NOCLUSTER: SU(5): %6:gpr = LW %0:gpr, 16
;
+; STCLUSTER: ********** MI Scheduling **********
+; STCLUSTER-LABEL: load_clustering_1:%bb.0
+; STCLUSTER: *** Final schedule for %bb.0 ***
+; STCLUSTER: SU(1): %1:gpr = LW %0:gpr, 12
+; STCLUSTER: SU(2): %2:gpr = LW %0:gpr, 8
+; STCLUSTER: SU(4): %4:gpr = LW %0:gpr, 4
+; STCLUSTER: SU(5): %6:gpr = LW %0:gpr, 16
+;
; LDCLUSTER: ********** MI Scheduling **********
; LDCLUSTER-LABEL: load_clustering_1:%bb.0
; LDCLUSTER: *** Final schedule for %bb.0 ***
@@ -29,6 +62,14 @@ define i32 @load_clustering_1(ptr nocapture %p) {
; LDCLUSTER: SU(2): %2:gpr = LW %0:gpr, 8
; LDCLUSTER: SU(1): %1:gpr = LW %0:gpr, 12
; LDCLUSTER: SU(5): %6:gpr = LW %0:gpr, 16
+;
+; DEFAULTCLUSTER: ********** MI Scheduling **********
+; DEFAULTCLUSTER-LABEL: load_clustering_1:%bb.0
+; DEFAULTCLUSTER: *** Final schedule for %bb.0 ***
+; DEFAULTCLUSTER: SU(4): %4:gpr = LW %0:gpr, 4
+; DEFAULTCLUSTER: SU(2): %2:gpr = LW %0:gpr, 8
+; DEFAULTCLUSTER: SU(1): %1:gpr = LW %0:gpr, 12
+; DEFAULTCLUSTER: SU(5): %6:gpr = LW %0:gpr, 16
entry:
%arrayidx0 = getelementptr inbounds i32, ptr %p, i32 3
%val0 = load i32, ptr %arrayidx0
diff --git a/llvm/test/CodeGen/RISCV/misched-mem-clustering.mir b/llvm/test/CodeGen/RISCV/misched-mem-clustering.mir
index 21398d315ec93..01960f9d99a8b 100644
--- a/llvm/test/CodeGen/RISCV/misched-mem-clustering.mir
+++ b/llvm/test/CodeGen/RISCV/misched-mem-clustering.mir
@@ -1,10 +1,12 @@
# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
# RUN: llc -mtriple=riscv64 -x mir -mcpu=sifive-p470 -verify-misched -enable-post-misched=false \
-# RUN: -riscv-postmisched-load-store-clustering=false -debug-only=machine-scheduler \
+# RUN: -mattr=+disable-postmisched-load-clustering \
+# RUN: -mattr=+disable-postmisched-store-clustering -debug-only=machine-scheduler \
# RUN: -start-before=machine-scheduler -stop-after=postmisched -misched-regpressure=false -o - 2>&1 < %s \
# RUN: | FileCheck -check-prefix=NOPOSTMISCHED %s
# RUN: llc -mtriple=riscv64 -x mir -mcpu=sifive-p470 -mattr=+use-postra-scheduler -verify-misched -enable-post-misched=true \
-# RUN: -riscv-postmisched-load-store-clustering=false -debug-only=machine-scheduler \
+# RUN: -mattr=+disable-postmisched-load-clustering \
+# RUN: -mattr=+disable-postmisched-store-clustering -debug-only=machine-scheduler \
# RUN: -start-before=machine-scheduler -stop-after=postmisched -misched-regpressure=false -o - 2>&1 < %s \
# RUN: | FileCheck -check-prefix=NOCLUSTER %s
# RUN: llc -mtriple=riscv64 -x mir -mcpu=sifive-p470 -mattr=+use-postra-scheduler -verify-misched -enable-post-misched=true \
diff --git a/llvm/test/CodeGen/RISCV/misched-store-clustering.ll b/llvm/test/CodeGen/RISCV/misched-store-clustering.ll
new file mode 100644
index 0000000000000..02e853d2217cc
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/misched-store-clustering.ll
@@ -0,0 +1,83 @@
+; REQUIRES: asserts
+;
+; Disable all misched clustering
+; RUN: llc -mtriple=riscv32 -verify-misched \
+; RUN: -mattr=+disable-misched-load-clustering,+disable-misched-store-clustering \
+; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
+; RUN: | FileCheck -check-prefix=NOCLUSTER %s
+; RUN: llc -mtriple=riscv64 -verify-misched \
+; RUN: -mattr=+disable-misched-load-clustering,+disable-misched-store-clustering \
+; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
+; RUN: | FileCheck -check-prefix=NOCLUSTER %s
+;
+; ST misched clustering only
+; RUN: llc -mtriple=riscv32 -verify-misched \
+; RUN: -mattr=+disable-misched-load-clustering \
+; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
+; RUN: | FileCheck -check-prefix=STCLUSTER %s
+; RUN: llc -mtriple=riscv64 -verify-misched \
+; RUN: -mattr=+disable-misched-load-clustering \
+; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
+; RUN: | FileCheck -check-prefix=STCLUSTER %s
+;
+; LD misched clustering only
+; RUN: llc -mtriple=riscv32 -verify-misched \
+; RUN: -mattr=+disable-misched-store-clustering \
+; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
+; RUN: | FileCheck -check-prefix=LDCLUSTER %s
+; RUN: llc -mtriple=riscv64 -verify-misched \
+; RUN: -mattr=+disable-misched-store-clustering \
+; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
+; RUN: | FileCheck -check-prefix=LDCLUSTER %s
+;
+; Default misched cluster settings (i.e. both LD and ST clustering)
+; RUN: llc -mtriple=riscv32 -verify-misched \
+; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
+; RUN: | FileCheck -check-prefix=DEFAULTCLUSTER %s
+; RUN: llc -mtriple=riscv64 -verify-misched \
+; RUN: -debug-only=machine-scheduler -o - 2>&1 < %s \
+; RUN: | FileCheck -check-prefix=DEFAULTCLUSTER %s
+
+define i32 @store_clustering_1(ptr nocapture %p, i32 %v) {
+; NOCLUSTER: ********** MI Scheduling **********
+; NOCLUSTER-LABEL: store_clustering_1:%bb.0
+; NOCLUSTER: *** Final schedule for %bb.0 ***
+; NOCLUSTER: SU(2): SW %1:gpr, %0:gpr, 12 :: (store (s32) into %ir.arrayidx0)
+; NOCLUSTER: SU(3): SW %1:gpr, %0:gpr, 8 :: (store (s32) into %ir.arrayidx1)
+; NOCLUSTER: SU(4): SW %1:gpr, %0:gpr, 4 :: (store (s32) into %ir.arrayidx2)
+; NOCLUSTER: SU(5): SW %1:gpr, %0:gpr, 16 :: (store (s32) into %ir.arrayidx3)
+;
+; STCLUSTER: ********** MI Scheduling **********
+; STCLUSTER-LABEL: store_clustering_1:%bb.0
+; STCLUSTER: *** Final schedule for %bb.0 ***
+; STCLUSTER: SU(4): SW %1:gpr, %0:gpr, 4 :: (store (s32) into %ir.arrayidx2)
+; STCLUSTER: SU(3): SW %1:gpr, %0:gpr, 8 :: (store (s32) into %ir.arrayidx1)
+; STCLUSTER: SU(2): SW %1:gpr, %0:gpr, 12 :: (store (s32) into %ir.arrayidx0)
+; STCLUSTER: SU(5): SW %1:gpr, %0:gpr, 16 :: (store (s32) into %ir.arrayidx3)
+;
+; LDCLUSTER: ********** MI Scheduling **********
+; LDCLUSTER-LABEL: store_clustering_1:%bb.0
+; LDCLUSTER: *** Final schedule for %bb.0 ***
+; LDCLUSTER: SU(2): SW %1:gpr, %0:gpr, 12 :: (store (s32) into %ir.arrayidx0)
+; LDCLUSTER: SU(3): SW %1:gpr, %0:gpr, 8 :: (store (s32) into %ir.arrayidx1)
+; LDCLUSTER: SU(4): SW %1:gpr, %0:gpr, 4 :: (store (s32) into %ir.arrayidx2)
+; LDCLUSTER: SU(5): SW %1:gpr, %0:gpr, 16 :: (store (s32) into %ir.arrayidx3)
+;
+; DEFAULTCLUSTER: ********** MI Scheduling **********
+; DEFAULTCLUSTER-LABEL: store_clustering_1:%bb.0
+; DEFAULTCLUSTER: *** Final schedule for %bb.0 ***
+; DEFAULTCLUSTER: SU(4): SW %1:gpr, %0:gpr, 4 :: (store (s32) into %ir.arrayidx2)
+; DEFAULTCLUSTER: SU(3): SW %1:gpr, %0:gpr, 8 :: (store (s32) into %ir.arrayidx1)
+; DEFAULTCLUSTER: SU(2): SW %1:gpr, %0:gpr, 12 :: (store (s32) into %ir.arrayidx0)
+; DEFAULTCLUSTER: SU(5): SW %1:gpr, %0:gpr, 16 :: (store (s32) into %ir.arrayidx3)
+entry:
+ %arrayidx0 = getelementptr inbounds i32, ptr %p, i32 3
+ store i32 %v, ptr %arrayidx0
+ %arrayidx1 = getelementptr inbounds i32, ptr %p, i32 2
+ store i32 %v, ptr %arrayidx1
+ %arrayidx2 = getelementptr inbounds i32, ptr %p, i32 1
+ store i32 %v, ptr %arrayidx2
+ %arrayidx3 = getelementptr inbounds i32, ptr %p, i32 4
+ store i32 %v, ptr %arrayidx3
+ ret i32 %v
+}
>From ef9834c5e846f794ddee4c0a530e280da0c8ad67 Mon Sep 17 00:00:00 2001
From: Daniel Rodríguez Troitiño <danielrodriguez at meta.com>
Date: Wed, 6 Aug 2025 09:08:56 -0700
Subject: [PATCH 114/171] [llvm-objdump] Fix typo in error messages (#152234)
Some error messages were spelling "children" as "childern".
---
llvm/lib/Object/MachOObjectFile.cpp | 4 ++--
llvm/test/tools/llvm-objdump/MachO/bad-trie.test | 6 +++---
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/llvm/lib/Object/MachOObjectFile.cpp b/llvm/lib/Object/MachOObjectFile.cpp
index 5db264207ffb7..e09dc947c2779 100644
--- a/llvm/lib/Object/MachOObjectFile.cpp
+++ b/llvm/lib/Object/MachOObjectFile.cpp
@@ -3115,7 +3115,7 @@ void ExportEntry::pushNode(uint64_t offset) {
}
State.ChildCount = *Children;
if (State.ChildCount != 0 && Children + 1 >= Trie.end()) {
- *E = malformedError("byte for count of childern in export trie data at "
+ *E = malformedError("byte for count of children in export trie data at "
"node: 0x" +
Twine::utohexstr(offset) +
" extends past end of trie data");
@@ -3157,7 +3157,7 @@ void ExportEntry::pushDownUntilBottom() {
}
for (const NodeState &node : nodes()) {
if (node.Start == Trie.begin() + childNodeIndex){
- *E = malformedError("loop in childern in export trie data at node: 0x" +
+ *E = malformedError("loop in children in export trie data at node: 0x" +
Twine::utohexstr(Top.Start - Trie.begin()) +
" back to node: 0x" +
Twine::utohexstr(childNodeIndex));
diff --git a/llvm/test/tools/llvm-objdump/MachO/bad-trie.test b/llvm/test/tools/llvm-objdump/MachO/bad-trie.test
index 8b29d30ef0618..e4d0ed58744f9 100644
--- a/llvm/test/tools/llvm-objdump/MachO/bad-trie.test
+++ b/llvm/test/tools/llvm-objdump/MachO/bad-trie.test
@@ -11,7 +11,7 @@ RUN: not llvm-objdump --macho --exports-trie %p/Inputs/macho-trie-export-info-si
EXPORT_INFO_SIZE_TOO_BIG: macho-trie-export-info-size-too-big': truncated or malformed object (export info size: 0x1234 in export trie data at node: 0x33 too big and extends past end of trie data)
RUN: not llvm-objdump --macho --exports-trie %p/Inputs/macho-trie-children-count-byte 2>&1 | FileCheck --check-prefix CHILDREN_COUNT_BYTE %s
-CHILDREN_COUNT_BYTE: macho-trie-children-count-byte': truncated or malformed object (byte for count of childern in export trie data at node: 0x5 extends past end of trie data)
+CHILDREN_COUNT_BYTE: macho-trie-children-count-byte': truncated or malformed object (byte for count of children in export trie data at node: 0x5 extends past end of trie data)
RUN: not llvm-objdump --macho --exports-trie %p/Inputs/macho-trie-import-name-start 2>&1 | FileCheck --check-prefix IMPORT_NAME_START %s
IMPORT_NAME_START: macho-trie-import-name-start': truncated or malformed object (import name of re-export in export trie data at node: 0x33 starts past end of trie data)
@@ -25,8 +25,8 @@ EDGE_STRING_END: macho-trie-edge-string-end': truncated or malformed object (edg
RUN: not llvm-objdump --macho --exports-trie %p/Inputs/macho-trie-not-export-node 2>&1 | FileCheck --check-prefix NOT_EXPORT_NODE %s
NOT_EXPORT_NODE: macho-trie-not-export-node': truncated or malformed object (node is not an export node in export trie data at node: 0x5a)
-RUN: not llvm-objdump --macho --exports-trie %p/Inputs/macho-trie-node-loop 2>&1 | FileCheck --check-prefix LOOP_OF_CHILDERN %s
-LOOP_OF_CHILDERN: macho-trie-node-loop': truncated or malformed object (loop in childern in export trie data at node: 0x42 back to node: 0x5)
+RUN: not llvm-objdump --macho --exports-trie %p/Inputs/macho-trie-node-loop 2>&1 | FileCheck --check-prefix LOOP_OF_CHILDREN %s
+LOOP_OF_CHILDREN: macho-trie-node-loop': truncated or malformed object (loop in children in export trie data at node: 0x42 back to node: 0x5)
RUN: not llvm-objdump --macho --exports-trie %p/Inputs/macho-trie-bad-library-ordinal 2>&1 | FileCheck --check-prefix BAD_LIBRARY_ORDINAL %s
BAD_LIBRARY_ORDINAL: macho-trie-bad-library-ordinal': truncated or malformed object (bad library ordinal: 69 (max 3) in export trie data at node: 0x33)
>From e232f05dfd0e2bc0eb060c8a7e6f5c946395358d Mon Sep 17 00:00:00 2001
From: Craig Topper <craig.topper at sifive.com>
Date: Wed, 6 Aug 2025 09:09:39 -0700
Subject: [PATCH 115/171] [RISCV] Add packw+packh isel pattern for unaligned
loads on RV64. (#152159)
This is similar to an existing pattern from RV32, with the
simplification proposed by #152045. Instead of pack we need to use
packw, and we need to know that the upper 32 bits are ignored, since
packw sign extends from bit 31.
The use of binop_allwusers prevents tablegen from automatically
reassociating the pattern, so we need to do it manually. Tablegen is
still able to commute operands, though.
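For readers unfamiliar with Zbkb, here is a hedged illustration of the
instruction semantics the new patterns rely on, written as plain C++
stand-ins for the ISA definitions (not LLVM code): packh packs the two
source bytes into bits [15:0] with zero extension, and on RV64 packw packs
two low halfwords and sign extends the 32-bit result from bit 31, which is
why the patterns are gated on all users ignoring the upper 32 bits.

```cpp
// Hedged model of RV64 Zbkb packh/packw semantics (illustrative only).
#include <cassert>
#include <cstdint>

// packh: bits [7:0] of rs1 and rs2 packed into bits [15:0], zero-extended.
uint64_t packh(uint64_t rs1, uint64_t rs2) {
  return (rs1 & 0xff) | ((rs2 & 0xff) << 8);
}

// packw (RV64): bits [15:0] of rs1 and rs2 packed into a 32-bit value,
// then sign-extended from bit 31 -- the reason binop_allwusers must prove
// that the upper 32 bits of the result are ignored.
uint64_t packw(uint64_t rs1, uint64_t rs2) {
  uint32_t lo = static_cast<uint16_t>(rs1);
  uint32_t hi = static_cast<uint32_t>(static_cast<uint16_t>(rs2)) << 16;
  return static_cast<uint64_t>(
      static_cast<int64_t>(static_cast<int32_t>(lo | hi)));
}

int main() {
  // Four bytes assembled as in the unaligned i32 load in the tests.
  uint64_t b0 = 0x11, b1 = 0x22, b2 = 0x33, b3 = 0x84;
  uint64_t word = packw(packh(b0, b1), packh(b2, b3));
  assert(static_cast<uint32_t>(word) == 0x84332211u);
  assert(word == 0xffffffff84332211ull); // sign-extended from bit 31
}
```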
---
llvm/lib/Target/RISCV/RISCVInstrInfoZb.td | 21 +++
llvm/test/CodeGen/RISCV/rv64zbkb.ll | 122 ++++++++++++++++++
.../CodeGen/RISCV/unaligned-load-store.ll | 20 ++-
3 files changed, 152 insertions(+), 11 deletions(-)
diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td b/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td
index 04ffb05c513f4..27ad10ad7f17e 100644
--- a/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td
+++ b/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td
@@ -663,6 +663,27 @@ def : Pat<(binop_allwusers<or> (shl GPR:$rs2, (i64 16)),
def : Pat<(i64 (or (sext_inreg (shl GPR:$rs2, (i64 16)), i32),
(zexti16 (i64 GPR:$rs1)))),
(PACKW GPR:$rs1, GPR:$rs2)>;
+
+// Match a pattern of 2 bytes being inserted into bits [31:16], with bits
+// bits [15:0] coming from a zero extended value, and bits [63:32] being
+// ignored. We can use packw with packh for bits [31:16]. If bits [15:0] can
+// also be a packh, it can be matched separately.
+def : Pat<(binop_allwusers<or>
+ (or (shl (zexti8 (XLenVT GPR:$op1rs2)), (XLenVT 24)),
+ (shl (zexti8 (XLenVT GPR:$op1rs1)), (XLenVT 16))),
+ (zexti16 (XLenVT GPR:$rs1))),
+ (PACKW GPR:$rs1, (XLenVT (PACKH GPR:$op1rs1, GPR:$op1rs2)))>;
+// We need to manually reassociate the patterns because of the binop_allwusers.
+def : Pat<(binop_allwusers<or>
+ (or (zexti16 (XLenVT GPR:$rs1)),
+ (shl (zexti8 (XLenVT GPR:$op1rs1)), (XLenVT 16))),
+ (shl (zexti8 (XLenVT GPR:$op1rs2)), (XLenVT 24))),
+ (PACKW GPR:$rs1, (XLenVT (PACKH GPR:$op1rs1, GPR:$op1rs2)))>;
+def : Pat<(binop_allwusers<or>
+ (or (zexti16 (XLenVT GPR:$rs1)),
+ (shl (zexti8 (XLenVT GPR:$op1rs1)), (XLenVT 24))),
+ (shl (zexti8 (XLenVT GPR:$op1rs2)), (XLenVT 16))),
+ (PACKW GPR:$rs1, (XLenVT (PACKH GPR:$op1rs1, GPR:$op1rs2)))>;
} // Predicates = [HasStdExtZbkb, IsRV64]
let Predicates = [HasStdExtZbb, IsRV32] in
diff --git a/llvm/test/CodeGen/RISCV/rv64zbkb.ll b/llvm/test/CodeGen/RISCV/rv64zbkb.ll
index 818ea723ca2e1..37c9eaea6f70b 100644
--- a/llvm/test/CodeGen/RISCV/rv64zbkb.ll
+++ b/llvm/test/CodeGen/RISCV/rv64zbkb.ll
@@ -392,3 +392,125 @@ define i64 @zext_i16_to_i64(i16 %a) nounwind {
%1 = zext i16 %a to i64
ret i64 %1
}
+
+define void @pack_lo_packh_hi_packh(i8 zeroext %0, i8 zeroext %1, i8 zeroext %2, i8 zeroext %3, ptr %p) nounwind {
+; RV64I-LABEL: pack_lo_packh_hi_packh:
+; RV64I: # %bb.0:
+; RV64I-NEXT: slli a1, a1, 8
+; RV64I-NEXT: slli a2, a2, 16
+; RV64I-NEXT: slli a3, a3, 24
+; RV64I-NEXT: or a0, a0, a1
+; RV64I-NEXT: or a2, a2, a3
+; RV64I-NEXT: or a0, a0, a2
+; RV64I-NEXT: sw a0, 0(a4)
+; RV64I-NEXT: ret
+;
+; RV64ZBKB-LABEL: pack_lo_packh_hi_packh:
+; RV64ZBKB: # %bb.0:
+; RV64ZBKB-NEXT: packh a0, a0, a1
+; RV64ZBKB-NEXT: packh a1, a2, a3
+; RV64ZBKB-NEXT: packw a0, a0, a1
+; RV64ZBKB-NEXT: sw a0, 0(a4)
+; RV64ZBKB-NEXT: ret
+ %a = zext i8 %0 to i32
+ %b = zext i8 %1 to i32
+ %c = zext i8 %2 to i32
+ %d = zext i8 %3 to i32
+ %e = shl i32 %b, 8
+ %f = shl i32 %c, 16
+ %g = shl i32 %d, 24
+ %h = or i32 %a, %e
+ %i = or i32 %h, %f
+ %j = or i32 %i, %g
+ store i32 %j, ptr %p
+ ret void
+}
+
+define void @pack_lo_packh_hi_packh_2(i8 zeroext %0, i8 zeroext %1, i8 zeroext %2, i8 zeroext %3, ptr %p) nounwind {
+; RV64I-LABEL: pack_lo_packh_hi_packh_2:
+; RV64I: # %bb.0:
+; RV64I-NEXT: slli a1, a1, 8
+; RV64I-NEXT: slli a2, a2, 16
+; RV64I-NEXT: slli a3, a3, 24
+; RV64I-NEXT: or a0, a0, a1
+; RV64I-NEXT: or a2, a2, a3
+; RV64I-NEXT: or a0, a2, a0
+; RV64I-NEXT: sw a0, 0(a4)
+; RV64I-NEXT: ret
+;
+; RV64ZBKB-LABEL: pack_lo_packh_hi_packh_2:
+; RV64ZBKB: # %bb.0:
+; RV64ZBKB-NEXT: packh a0, a0, a1
+; RV64ZBKB-NEXT: packh a1, a3, a2
+; RV64ZBKB-NEXT: packw a0, a0, a1
+; RV64ZBKB-NEXT: sw a0, 0(a4)
+; RV64ZBKB-NEXT: ret
+ %a = zext i8 %0 to i32
+ %b = zext i8 %1 to i32
+ %c = zext i8 %2 to i32
+ %d = zext i8 %3 to i32
+ %e = shl i32 %b, 8
+ %f = shl i32 %c, 16
+ %g = shl i32 %d, 24
+ %h = or i32 %a, %e
+ %i = or i32 %g, %h
+ %j = or i32 %f, %i
+ store i32 %j, ptr %p
+ ret void
+}
+
+define void @pack_lo_zext_hi_packh(i16 zeroext %0, i8 zeroext %1, i8 zeroext %2, ptr %p) nounwind {
+; RV64I-LABEL: pack_lo_zext_hi_packh:
+; RV64I: # %bb.0:
+; RV64I-NEXT: slli a1, a2, 16
+; RV64I-NEXT: slli a2, a2, 24
+; RV64I-NEXT: or a1, a2, a1
+; RV64I-NEXT: or a0, a1, a0
+; RV64I-NEXT: sw a0, 0(a3)
+; RV64I-NEXT: ret
+;
+; RV64ZBKB-LABEL: pack_lo_zext_hi_packh:
+; RV64ZBKB: # %bb.0:
+; RV64ZBKB-NEXT: packh a1, a2, a2
+; RV64ZBKB-NEXT: packw a0, a0, a1
+; RV64ZBKB-NEXT: sw a0, 0(a3)
+; RV64ZBKB-NEXT: ret
+ %a = zext i16 %0 to i32
+ %b = zext i8 %1 to i32
+ %c = zext i8 %2 to i32
+ %d = shl i32 %c, 8
+ %e = or i32 %c, %d
+ %f = shl i32 %e, 16
+ %g = or i32 %f, %a
+ store i32 %g, ptr %p
+ ret void
+}
+
+; Negative test, %a isn't extended so we can't use packw for the outer or, but
+; we can use packh for the high half.
+define void @pack_lo_noext_hi_packh(i32 %a, i8 zeroext %1, i8 zeroext %2, ptr %p) nounwind {
+; RV64I-LABEL: pack_lo_noext_hi_packh:
+; RV64I: # %bb.0:
+; RV64I-NEXT: slli a1, a2, 16
+; RV64I-NEXT: slli a2, a2, 24
+; RV64I-NEXT: or a1, a2, a1
+; RV64I-NEXT: or a0, a1, a0
+; RV64I-NEXT: sw a0, 0(a3)
+; RV64I-NEXT: ret
+;
+; RV64ZBKB-LABEL: pack_lo_noext_hi_packh:
+; RV64ZBKB: # %bb.0:
+; RV64ZBKB-NEXT: packh a1, a2, a2
+; RV64ZBKB-NEXT: slli a1, a1, 16
+; RV64ZBKB-NEXT: or a0, a1, a0
+; RV64ZBKB-NEXT: sw a0, 0(a3)
+; RV64ZBKB-NEXT: ret
+ %b = zext i8 %1 to i32
+ %c = zext i8 %2 to i32
+ %d = shl i32 %c, 8
+ %e = or i32 %c, %d
+ %f = shl i32 %e, 16
+ %g = or i32 %f, %a
+ store i32 %g, ptr %p
+ ret void
+}
diff --git a/llvm/test/CodeGen/RISCV/unaligned-load-store.ll b/llvm/test/CodeGen/RISCV/unaligned-load-store.ll
index c9c49e8f7f532..cb046cdaae75c 100644
--- a/llvm/test/CodeGen/RISCV/unaligned-load-store.ll
+++ b/llvm/test/CodeGen/RISCV/unaligned-load-store.ll
@@ -204,18 +204,16 @@ define i64 @load_i64(ptr %p) {
; RV64IZBKB-NEXT: lbu a2, 5(a0)
; RV64IZBKB-NEXT: lbu a3, 6(a0)
; RV64IZBKB-NEXT: lbu a4, 7(a0)
-; RV64IZBKB-NEXT: lbu a5, 0(a0)
-; RV64IZBKB-NEXT: lbu a6, 1(a0)
-; RV64IZBKB-NEXT: lbu a7, 2(a0)
-; RV64IZBKB-NEXT: lbu a0, 3(a0)
+; RV64IZBKB-NEXT: lbu a5, 1(a0)
+; RV64IZBKB-NEXT: lbu a6, 2(a0)
+; RV64IZBKB-NEXT: lbu a7, 3(a0)
+; RV64IZBKB-NEXT: lbu a0, 0(a0)
+; RV64IZBKB-NEXT: packh a3, a3, a4
; RV64IZBKB-NEXT: packh a1, a1, a2
-; RV64IZBKB-NEXT: packh a2, a3, a4
-; RV64IZBKB-NEXT: packh a3, a5, a6
-; RV64IZBKB-NEXT: packh a0, a7, a0
-; RV64IZBKB-NEXT: slli a2, a2, 16
-; RV64IZBKB-NEXT: slli a0, a0, 16
-; RV64IZBKB-NEXT: or a1, a2, a1
-; RV64IZBKB-NEXT: or a0, a0, a3
+; RV64IZBKB-NEXT: packh a2, a6, a7
+; RV64IZBKB-NEXT: packh a0, a0, a5
+; RV64IZBKB-NEXT: packw a1, a1, a3
+; RV64IZBKB-NEXT: packw a0, a0, a2
; RV64IZBKB-NEXT: pack a0, a0, a1
; RV64IZBKB-NEXT: ret
;
>From 57045a137f97fe4e05d5d9581c0b2e4fd6c729e1 Mon Sep 17 00:00:00 2001
From: Craig Topper <craig.topper at sifive.com>
Date: Wed, 6 Aug 2025 09:10:02 -0700
Subject: [PATCH 116/171] [DAGCombiner] Avoid repeated calls to
WideVT.getScalarSizeInBits() in DAGCombiner::mergeTruncStores. NFC (#152231)
We already have a variable, WideNumBits, that contains the same
information. Use it and delay the creation of WideVT until we really
need it.
---
llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
index d70e96938ed9a..734191447d67f 100644
--- a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
@@ -9390,8 +9390,7 @@ SDValue DAGCombiner::mergeTruncStores(StoreSDNode *N) {
LLVMContext &Context = *DAG.getContext();
unsigned NumStores = Stores.size();
unsigned WideNumBits = NumStores * NarrowNumBits;
- EVT WideVT = EVT::getIntegerVT(Context, WideNumBits);
- if (WideVT != MVT::i16 && WideVT != MVT::i32 && WideVT != MVT::i64)
+ if (WideNumBits != 16 && WideNumBits != 32 && WideNumBits != 64)
return SDValue();
// Check if all bytes of the source value that we are looking at are stored
@@ -9445,7 +9444,7 @@ SDValue DAGCombiner::mergeTruncStores(StoreSDNode *N) {
SourceValue = WideVal;
// Give up if the source value type is smaller than the store size.
- if (SourceValue.getScalarValueSizeInBits() < WideVT.getScalarSizeInBits())
+ if (SourceValue.getScalarValueSizeInBits() < WideNumBits)
return SDValue();
}
@@ -9469,6 +9468,8 @@ SDValue DAGCombiner::mergeTruncStores(StoreSDNode *N) {
OffsetMap[Offset] = ByteOffsetFromBase;
}
+ EVT WideVT = EVT::getIntegerVT(Context, WideNumBits);
+
assert(FirstOffset != INT64_MAX && "First byte offset must be set");
assert(FirstStore && "First store must be set");
>From 09146a21a5a7bbea19b5203d585682de519a213c Mon Sep 17 00:00:00 2001
From: Steven Wu <stevenwu at apple.com>
Date: Wed, 6 Aug 2025 09:13:36 -0700
Subject: [PATCH 117/171] [LegacyLTO] Emit the error message that was silently
dropped (#152172)
Use the LLVMContext to emit the error from `TargetRegistry::lookupTarget`
that was previously silently ignored and not propagated. This allows users
to better identify the kind of error that occurred.
rdar://157542119
---
llvm/lib/LTO/LTOModule.cpp | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/llvm/lib/LTO/LTOModule.cpp b/llvm/lib/LTO/LTOModule.cpp
index e0a975806a31d..7dd06118e2a57 100644
--- a/llvm/lib/LTO/LTOModule.cpp
+++ b/llvm/lib/LTO/LTOModule.cpp
@@ -203,8 +203,10 @@ LTOModule::makeLTOModule(MemoryBufferRef Buffer, const TargetOptions &options,
// find machine architecture for this module
std::string errMsg;
const Target *march = TargetRegistry::lookupTarget(Triple, errMsg);
- if (!march)
+ if (!march) {
+ Context.emitError(errMsg);
return make_error_code(object::object_error::arch_not_found);
+ }
// construct LTOModule, hand over ownership of module and target
SubtargetFeatures Features;
>From 4c9bb656393559437d72ac6d0b17c421dd5463bb Mon Sep 17 00:00:00 2001
From: Amr Hesham <amr96 at programmer.net>
Date: Wed, 6 Aug 2025 18:13:48 +0200
Subject: [PATCH 118/171] [CIR] Plus & Minus CompoundAssignment support for
ComplexType (#150759)
This change adds support for the plus and minus compound assignment
operators on ComplexType.
https://github.com/llvm/llvm-project/issues/141365
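As a hedged source-level illustration of what this enables (the operator
routing follows the getComplexOp mapping added in this patch; the snippet
itself is not taken from the test suite):

```cpp
// Illustrative only. BO_AddAssign/BO_SubAssign are routed to
// emitBinAdd/emitBinSub via getComplexOp; a real floating-point RHS is
// first wrapped into a complex value with a zero imaginary part.
void f(float _Complex &a, float _Complex b, float c) {
  a += b; // complex += complex
  a -= b; // complex -= complex
  a += c; // complex += real: RHS promoted to a complex value first
}

int main() {
  float _Complex a = 1.0f; // imaginary part is zero
  float _Complex b = 2.0f;
  f(a, b, 3.0f);
}
```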
---
clang/lib/CIR/CodeGen/CIRGenExprComplex.cpp | 157 +++++++++-
clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp | 23 ++
clang/lib/CIR/CodeGen/CIRGenFunction.cpp | 5 +-
clang/lib/CIR/CodeGen/CIRGenFunction.h | 8 +
clang/test/CIR/CodeGen/complex-arithmetic.cpp | 201 ++++++------
.../CodeGen/complex-compound-assignment.cpp | 288 ++++++++++++++++++
6 files changed, 578 insertions(+), 104 deletions(-)
create mode 100644 clang/test/CIR/CodeGen/complex-compound-assignment.cpp
diff --git a/clang/lib/CIR/CodeGen/CIRGenExprComplex.cpp b/clang/lib/CIR/CodeGen/CIRGenExprComplex.cpp
index 3aa170eff8606..c22cf607df7d8 100644
--- a/clang/lib/CIR/CodeGen/CIRGenExprComplex.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenExprComplex.cpp
@@ -116,6 +116,15 @@ class ComplexExprEmitter : public StmtVisitor<ComplexExprEmitter, mlir::Value> {
mlir::Value emitPromotedComplexOperand(const Expr *e, QualType promotionTy);
+ LValue emitCompoundAssignLValue(
+ const CompoundAssignOperator *e,
+ mlir::Value (ComplexExprEmitter::*func)(const BinOpInfo &),
+ RValue &value);
+
+ mlir::Value emitCompoundAssign(
+ const CompoundAssignOperator *e,
+ mlir::Value (ComplexExprEmitter::*func)(const BinOpInfo &));
+
mlir::Value emitBinAdd(const BinOpInfo &op);
mlir::Value emitBinSub(const BinOpInfo &op);
mlir::Value emitBinMul(const BinOpInfo &op);
@@ -153,6 +162,15 @@ class ComplexExprEmitter : public StmtVisitor<ComplexExprEmitter, mlir::Value> {
HANDLEBINOP(Sub)
HANDLEBINOP(Mul)
#undef HANDLEBINOP
+
+ // Compound assignments.
+ mlir::Value VisitBinAddAssign(const CompoundAssignOperator *e) {
+ return emitCompoundAssign(e, &ComplexExprEmitter::emitBinAdd);
+ }
+
+ mlir::Value VisitBinSubAssign(const CompoundAssignOperator *e) {
+ return emitCompoundAssign(e, &ComplexExprEmitter::emitBinSub);
+ }
};
} // namespace
@@ -166,6 +184,12 @@ static const ComplexType *getComplexType(QualType type) {
}
#endif // NDEBUG
+static mlir::Value createComplexFromReal(CIRGenBuilderTy &builder,
+ mlir::Location loc, mlir::Value real) {
+ mlir::Value imag = builder.getNullValue(real.getType(), loc);
+ return builder.createComplexCreate(loc, real, imag);
+}
+
LValue ComplexExprEmitter::emitBinAssignLValue(const BinaryOperator *e,
mlir::Value &value) {
assert(cgf.getContext().hasSameUnqualifiedType(e->getLHS()->getType(),
@@ -602,7 +626,7 @@ mlir::Value ComplexExprEmitter::emitPromoted(const Expr *e,
mlir::Value result = Visit(const_cast<Expr *>(e));
if (!promotionTy.isNull())
- cgf.cgm.errorNYI("emitPromoted emitPromotedValue");
+ return cgf.emitPromotedValue(result, promotionTy);
return result;
}
@@ -630,6 +654,104 @@ ComplexExprEmitter::emitBinOps(const BinaryOperator *e, QualType promotionTy) {
return binOpInfo;
}
+LValue ComplexExprEmitter::emitCompoundAssignLValue(
+ const CompoundAssignOperator *e,
+ mlir::Value (ComplexExprEmitter::*func)(const BinOpInfo &), RValue &value) {
+ QualType lhsTy = e->getLHS()->getType();
+ QualType rhsTy = e->getRHS()->getType();
+ SourceLocation exprLoc = e->getExprLoc();
+ mlir::Location loc = cgf.getLoc(exprLoc);
+
+ if (lhsTy->getAs<AtomicType>()) {
+ cgf.cgm.errorNYI("emitCompoundAssignLValue AtmoicType");
+ return {};
+ }
+
+ BinOpInfo opInfo{loc};
+ opInfo.fpFeatures = e->getFPFeaturesInEffect(cgf.getLangOpts());
+
+ assert(!cir::MissingFeatures::cgFPOptionsRAII());
+
+ // Load the RHS and LHS operands.
+ // __block variables need to have the rhs evaluated first, plus this should
+ // improve codegen a little.
+ QualType promotionTypeCR = getPromotionType(e->getComputationResultType());
+ opInfo.ty = promotionTypeCR.isNull() ? e->getComputationResultType()
+ : promotionTypeCR;
+
+ QualType complexElementTy =
+ opInfo.ty->castAs<ComplexType>()->getElementType();
+ QualType promotionTypeRHS = getPromotionType(rhsTy);
+
+ // The RHS should have been converted to the computation type.
+ if (e->getRHS()->getType()->isRealFloatingType()) {
+ if (!promotionTypeRHS.isNull()) {
+ opInfo.rhs = createComplexFromReal(
+ cgf.getBuilder(), loc,
+ cgf.emitPromotedScalarExpr(e->getRHS(), promotionTypeRHS));
+ } else {
+ assert(cgf.getContext().hasSameUnqualifiedType(complexElementTy, rhsTy));
+ opInfo.rhs = createComplexFromReal(cgf.getBuilder(), loc,
+ cgf.emitScalarExpr(e->getRHS()));
+ }
+ } else {
+ if (!promotionTypeRHS.isNull()) {
+ opInfo.rhs = cgf.emitPromotedComplexExpr(e->getRHS(), promotionTypeRHS);
+ } else {
+ assert(cgf.getContext().hasSameUnqualifiedType(opInfo.ty, rhsTy));
+ opInfo.rhs = Visit(e->getRHS());
+ }
+ }
+
+ LValue lhs = cgf.emitLValue(e->getLHS());
+
+ // Load from the l-value and convert it.
+ QualType promotionTypeLHS = getPromotionType(e->getComputationLHSType());
+ if (lhsTy->isAnyComplexType()) {
+ mlir::Value lhsValue = emitLoadOfLValue(lhs, exprLoc);
+ QualType destTy = promotionTypeLHS.isNull() ? opInfo.ty : promotionTypeLHS;
+ opInfo.lhs = emitComplexToComplexCast(lhsValue, lhsTy, destTy, exprLoc);
+ } else {
+ cgf.cgm.errorNYI("emitCompoundAssignLValue emitLoadOfScalar");
+ return {};
+ }
+
+ // Expand the binary operator.
+ mlir::Value result = (this->*func)(opInfo);
+
+ // Truncate the result and store it into the LHS lvalue.
+ if (lhsTy->isAnyComplexType()) {
+ mlir::Value resultValue =
+ emitComplexToComplexCast(result, opInfo.ty, lhsTy, exprLoc);
+ emitStoreOfComplex(loc, resultValue, lhs, /*isInit*/ false);
+ value = RValue::getComplex(resultValue);
+ } else {
+ mlir::Value resultValue =
+ cgf.emitComplexToScalarConversion(result, opInfo.ty, lhsTy, exprLoc);
+ cgf.emitStoreOfScalar(resultValue, lhs, /*isInit*/ false);
+ value = RValue::get(resultValue);
+ }
+
+ return lhs;
+}
+
+mlir::Value ComplexExprEmitter::emitCompoundAssign(
+ const CompoundAssignOperator *e,
+ mlir::Value (ComplexExprEmitter::*func)(const BinOpInfo &)) {
+ RValue val;
+ LValue lv = emitCompoundAssignLValue(e, func, val);
+
+ // The result of an assignment in C is the assigned r-value.
+ if (!cgf.getLangOpts().CPlusPlus)
+ return val.getComplexValue();
+
+ // If the lvalue is non-volatile, return the computed value of the assignment.
+ if (!lv.isVolatileQualified())
+ return val.getComplexValue();
+
+ return emitLoadOfLValue(lv, e->getExprLoc());
+}
+
mlir::Value ComplexExprEmitter::emitBinAdd(const BinOpInfo &op) {
assert(!cir::MissingFeatures::fastMathFlags());
assert(!cir::MissingFeatures::cgFPOptionsRAII());
@@ -685,6 +807,31 @@ mlir::Value CIRGenFunction::emitComplexExpr(const Expr *e) {
return ComplexExprEmitter(*this).Visit(const_cast<Expr *>(e));
}
+using CompoundFunc =
+ mlir::Value (ComplexExprEmitter::*)(const ComplexExprEmitter::BinOpInfo &);
+
+static CompoundFunc getComplexOp(BinaryOperatorKind op) {
+ switch (op) {
+ case BO_MulAssign:
+ llvm_unreachable("getComplexOp: BO_MulAssign");
+ case BO_DivAssign:
+ llvm_unreachable("getComplexOp: BO_DivAssign");
+ case BO_SubAssign:
+ return &ComplexExprEmitter::emitBinSub;
+ case BO_AddAssign:
+ return &ComplexExprEmitter::emitBinAdd;
+ default:
+ llvm_unreachable("unexpected complex compound assignment");
+ }
+}
+
+LValue CIRGenFunction::emitComplexCompoundAssignmentLValue(
+ const CompoundAssignOperator *e) {
+ CompoundFunc op = getComplexOp(e->getOpcode());
+ RValue val;
+ return ComplexExprEmitter(*this).emitCompoundAssignLValue(e, op, val);
+}
+
mlir::Value CIRGenFunction::emitComplexPrePostIncDec(const UnaryOperator *e,
LValue lv,
cir::UnaryOpKind op,
@@ -729,3 +876,11 @@ mlir::Value CIRGenFunction::emitPromotedComplexExpr(const Expr *e,
QualType promotionType) {
return ComplexExprEmitter(*this).emitPromoted(e, promotionType);
}
+
+mlir::Value CIRGenFunction::emitPromotedValue(mlir::Value result,
+ QualType promotionType) {
+ assert(!mlir::cast<cir::ComplexType>(result.getType()).isIntegerComplex() &&
+ "integral complex will never be promoted");
+ return builder.createCast(cir::CastKind::float_complex, result,
+ convertType(promotionType));
+}
diff --git a/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp b/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp
index 32c1c1ada3c53..3e06513bb6cf0 100644
--- a/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp
@@ -1955,6 +1955,29 @@ mlir::Value CIRGenFunction::emitScalarConversion(mlir::Value src,
.emitScalarConversion(src, srcTy, dstTy, loc);
}
+mlir::Value CIRGenFunction::emitComplexToScalarConversion(mlir::Value src,
+ QualType srcTy,
+ QualType dstTy,
+ SourceLocation loc) {
+ assert(srcTy->isAnyComplexType() && hasScalarEvaluationKind(dstTy) &&
+ "Invalid complex -> scalar conversion");
+
+ QualType complexElemTy = srcTy->castAs<ComplexType>()->getElementType();
+ if (dstTy->isBooleanType()) {
+ auto kind = complexElemTy->isFloatingType()
+ ? cir::CastKind::float_complex_to_bool
+ : cir::CastKind::int_complex_to_bool;
+ return builder.createCast(getLoc(loc), kind, src, convertType(dstTy));
+ }
+
+ auto kind = complexElemTy->isFloatingType()
+ ? cir::CastKind::float_complex_to_real
+ : cir::CastKind::int_complex_to_real;
+ mlir::Value real =
+ builder.createCast(getLoc(loc), kind, src, convertType(complexElemTy));
+ return emitScalarConversion(real, complexElemTy, dstTy, loc);
+}
+
mlir::Value ScalarExprEmitter::VisitUnaryLNot(const UnaryOperator *e) {
// Perform vector logical not on comparison with zero vector.
if (e->getType()->isVectorType() &&
diff --git a/clang/lib/CIR/CodeGen/CIRGenFunction.cpp b/clang/lib/CIR/CodeGen/CIRGenFunction.cpp
index eb05c93d52891..e93dc0bee31c5 100644
--- a/clang/lib/CIR/CodeGen/CIRGenFunction.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenFunction.cpp
@@ -785,9 +785,8 @@ LValue CIRGenFunction::emitLValue(const Expr *e) {
}
if (!ty->isAnyComplexType())
return emitCompoundAssignmentLValue(cast<CompoundAssignOperator>(e));
- cgm.errorNYI(e->getSourceRange(),
- "CompoundAssignOperator with ComplexType");
- return LValue();
+
+ return emitComplexCompoundAssignmentLValue(cast<CompoundAssignOperator>(e));
}
case Expr::CallExprClass:
case Expr::CXXMemberCallExprClass:
diff --git a/clang/lib/CIR/CodeGen/CIRGenFunction.h b/clang/lib/CIR/CodeGen/CIRGenFunction.h
index 3d92545a4ae3c..2e60cfc8211e6 100644
--- a/clang/lib/CIR/CodeGen/CIRGenFunction.h
+++ b/clang/lib/CIR/CodeGen/CIRGenFunction.h
@@ -944,6 +944,11 @@ class CIRGenFunction : public CIRGenTypeCache {
/// sanitizer is enabled, a runtime check is also emitted.
mlir::Value emitCheckedArgForAssume(const Expr *e);
+ /// Emit a conversion from the specified complex type to the specified
+ /// destination type, where the destination type is an LLVM scalar type.
+ mlir::Value emitComplexToScalarConversion(mlir::Value src, QualType srcTy,
+ QualType dstTy, SourceLocation loc);
+
LValue emitCompoundAssignmentLValue(const clang::CompoundAssignOperator *e);
LValue emitCompoundLiteralLValue(const CompoundLiteralExpr *e);
@@ -1047,6 +1052,8 @@ class CIRGenFunction : public CIRGenTypeCache {
mlir::Value emitPromotedScalarExpr(const Expr *e, QualType promotionType);
+ mlir::Value emitPromotedValue(mlir::Value result, QualType promotionType);
+
/// Emit the computation of the specified expression of scalar type.
mlir::Value emitScalarExpr(const clang::Expr *e);
@@ -1076,6 +1083,7 @@ class CIRGenFunction : public CIRGenTypeCache {
cir::UnaryOpKind op, bool isPre);
LValue emitComplexAssignmentLValue(const BinaryOperator *e);
+ LValue emitComplexCompoundAssignmentLValue(const CompoundAssignOperator *e);
void emitCompoundStmt(const clang::CompoundStmt &s);
diff --git a/clang/test/CIR/CodeGen/complex-arithmetic.cpp b/clang/test/CIR/CodeGen/complex-arithmetic.cpp
index 5e384cd34ebfd..fbb0335106f88 100644
--- a/clang/test/CIR/CodeGen/complex-arithmetic.cpp
+++ b/clang/test/CIR/CodeGen/complex-arithmetic.cpp
@@ -11,16 +11,16 @@ void foo() {
int _Complex c = a + b;
}
-// CIR: %[[COMPLEX_A:.*]] = cir.alloca !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>, ["a"]
-// CIR: %[[COMPLEX_B:.*]] = cir.alloca !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>, ["b"]
-// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[COMPLEX_A]] : !cir.ptr<!cir.complex<!s32i>>, !cir.complex<!s32i>
-// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[COMPLEX_B]] : !cir.ptr<!cir.complex<!s32i>>, !cir.complex<!s32i>
+// CIR: %[[A_ADDR:.*]] = cir.alloca !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>, ["a"]
+// CIR: %[[B_ADDR:.*]] = cir.alloca !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>, ["b"]
+// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[A_ADDR]] : !cir.ptr<!cir.complex<!s32i>>, !cir.complex<!s32i>
+// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[B_ADDR]] : !cir.ptr<!cir.complex<!s32i>>, !cir.complex<!s32i>
// CIR: %[[ADD:.*]] = cir.complex.add %[[TMP_A]], %[[TMP_B]] : !cir.complex<!s32i>
-// LLVM: %[[COMPLEX_A:.*]] = alloca { i32, i32 }, i64 1, align 4
-// LLVM: %[[COMPLEX_B:.*]] = alloca { i32, i32 }, i64 1, align 4
-// LLVM: %[[TMP_A:.*]] = load { i32, i32 }, ptr %[[COMPLEX_A]], align 4
-// LLVM: %[[TMP_B:.*]] = load { i32, i32 }, ptr %[[COMPLEX_B]], align 4
+// LLVM: %[[A_ADDR:.*]] = alloca { i32, i32 }, i64 1, align 4
+// LLVM: %[[B_ADDR:.*]] = alloca { i32, i32 }, i64 1, align 4
+// LLVM: %[[TMP_A:.*]] = load { i32, i32 }, ptr %[[A_ADDR]], align 4
+// LLVM: %[[TMP_B:.*]] = load { i32, i32 }, ptr %[[B_ADDR]], align 4
// LLVM: %[[A_REAL:.*]] = extractvalue { i32, i32 } %[[TMP_A]], 0
// LLVM: %[[A_IMAG:.*]] = extractvalue { i32, i32 } %[[TMP_A]], 1
// LLVM: %[[B_REAL:.*]] = extractvalue { i32, i32 } %[[TMP_B]], 0
@@ -30,16 +30,16 @@ void foo() {
// LLVM: %[[RESULT:.*]] = insertvalue { i32, i32 } poison, i32 %[[ADD_REAL]], 0
// LLVM: %[[RESULT_2:.*]] = insertvalue { i32, i32 } %[[RESULT]], i32 %[[ADD_IMAG]], 1
-// OGCG: %[[COMPLEX_A:.*]] = alloca { i32, i32 }, align 4
-// OGCG: %[[COMPLEX_B:.*]] = alloca { i32, i32 }, align 4
+// OGCG: %[[A_ADDR:.*]] = alloca { i32, i32 }, align 4
+// OGCG: %[[B_ADDR:.*]] = alloca { i32, i32 }, align 4
// OGCG: %[[RESULT:.*]] = alloca { i32, i32 }, align 4
-// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[COMPLEX_A]], i32 0, i32 0
+// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[A_ADDR]], i32 0, i32 0
// OGCG: %[[A_REAL:.*]] = load i32, ptr %[[A_REAL_PTR]], align 4
-// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[COMPLEX_A]], i32 0, i32 1
+// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[A_ADDR]], i32 0, i32 1
// OGCG: %[[A_IMAG:.*]] = load i32, ptr %[[A_IMAG_PTR]], align 4
-// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[COMPLEX_B]], i32 0, i32 0
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 0
// OGCG: %[[B_REAL:.*]] = load i32, ptr %[[B_REAL_PTR]], align 4
-// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[COMPLEX_B]], i32 0, i32 1
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 1
// OGCG: %[[B_IMAG:.*]] = load i32, ptr %[[B_IMAG_PTR]], align 4
// OGCG: %[[ADD_REAL:.*]] = add i32 %[[A_REAL]], %[[B_REAL]]
// OGCG: %[[ADD_IMAG:.*]] = add i32 %[[A_IMAG]], %[[B_IMAG]]
@@ -54,16 +54,16 @@ void foo2() {
float _Complex c = a + b;
}
-// CIR: %[[COMPLEX_A:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["a"]
-// CIR: %[[COMPLEX_B:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["b"]
-// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[COMPLEX_A]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
-// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[COMPLEX_B]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[A_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["a"]
+// CIR: %[[B_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["b"]
+// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[A_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[B_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
// CIR: %[[ADD:.*]] = cir.complex.add %[[TMP_A]], %[[TMP_B]] : !cir.complex<!cir.float>
-// LLVM: %[[COMPLEX_A:.*]] = alloca { float, float }, i64 1, align 4
-// LLVM: %[[COMPLEX_B:.*]] = alloca { float, float }, i64 1, align 4
-// LLVM: %[[TMP_A:.*]] = load { float, float }, ptr %[[COMPLEX_A]], align 4
-// LLVM: %[[TMP_B:.*]] = load { float, float }, ptr %[[COMPLEX_B]], align 4
+// LLVM: %[[A_ADDR:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[B_ADDR:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[TMP_A:.*]] = load { float, float }, ptr %[[A_ADDR]], align 4
+// LLVM: %[[TMP_B:.*]] = load { float, float }, ptr %[[B_ADDR]], align 4
// LLVM: %[[A_REAL:.*]] = extractvalue { float, float } %[[TMP_A]], 0
// LLVM: %[[A_IMAG:.*]] = extractvalue { float, float } %[[TMP_A]], 1
// LLVM: %[[B_REAL:.*]] = extractvalue { float, float } %[[TMP_B]], 0
@@ -73,16 +73,16 @@ void foo2() {
// LLVM: %[[RESULT:.*]] = insertvalue { float, float } poison, float %[[ADD_REAL]], 0
// LLVM: %[[RESULT_2:.*]] = insertvalue { float, float } %[[RESULT]], float %[[ADD_IMAG]], 1
-// OGCG: %[[COMPLEX_A:.*]] = alloca { float, float }, align 4
-// OGCG: %[[COMPLEX_B:.*]] = alloca { float, float }, align 4
+// OGCG: %[[A_ADDR:.*]] = alloca { float, float }, align 4
+// OGCG: %[[B_ADDR:.*]] = alloca { float, float }, align 4
// OGCG: %[[RESULT:.*]] = alloca { float, float }, align 4
-// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_A]], i32 0, i32 0
+// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[A_ADDR]], i32 0, i32 0
// OGCG: %[[A_REAL:.*]] = load float, ptr %[[A_REAL_PTR]], align 4
-// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_A]], i32 0, i32 1
+// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[A_ADDR]], i32 0, i32 1
// OGCG: %[[A_IMAG:.*]] = load float, ptr %[[A_IMAG_PTR]], align 4
-// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_B]], i32 0, i32 0
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 0
// OGCG: %[[B_REAL:.*]] = load float, ptr %[[B_REAL_PTR]], align 4
-// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_B]], i32 0, i32 1
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 1
// OGCG: %[[B_IMAG:.*]] = load float, ptr %[[B_IMAG_PTR]], align 4
// OGCG: %[[ADD_REAL:.*]] = fadd float %[[A_REAL]], %[[B_REAL]]
// OGCG: %[[ADD_IMAG:.*]] = fadd float %[[A_IMAG]], %[[B_IMAG]]
@@ -98,23 +98,23 @@ void foo3() {
float _Complex d = (a + b) + c;
}
-// CIR: %[[COMPLEX_A:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["a"]
-// CIR: %[[COMPLEX_B:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["b"]
-// CIR: %[[COMPLEX_C:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["c"]
+// CIR: %[[A_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["a"]
+// CIR: %[[B_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["b"]
+// CIR: %[[C_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["c"]
// CIR: %[[RESULT:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["d", init]
-// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[COMPLEX_A]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
-// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[COMPLEX_B]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[A_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[B_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
// CIR: %[[ADD_A_B:.*]] = cir.complex.add %[[TMP_A]], %[[TMP_B]] : !cir.complex<!cir.float>
-// CIR: %[[TMP_C:.*]] = cir.load{{.*}} %[[COMPLEX_C]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[TMP_C:.*]] = cir.load{{.*}} %[[C_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
// CIR: %[[ADD_A_B_C:.*]] = cir.complex.add %[[ADD_A_B]], %[[TMP_C]] : !cir.complex<!cir.float>
// CIR: cir.store{{.*}} %[[ADD_A_B_C]], %[[RESULT]] : !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>
-// LLVM: %[[COMPLEX_A:.*]] = alloca { float, float }, i64 1, align 4
-// LLVM: %[[COMPLEX_B:.*]] = alloca { float, float }, i64 1, align 4
-// LLVM: %[[COMPLEX_C:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[A_ADDR:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[B_ADDR:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[C_ADDR:.*]] = alloca { float, float }, i64 1, align 4
// LLVM: %[[RESULT:.*]] = alloca { float, float }, i64 1, align 4
-// LLVM: %[[TMP_A:.*]] = load { float, float }, ptr %[[COMPLEX_A]], align 4
-// LLVM: %[[TMP_B:.*]] = load { float, float }, ptr %[[COMPLEX_B]], align 4
+// LLVM: %[[TMP_A:.*]] = load { float, float }, ptr %[[A_ADDR]], align 4
+// LLVM: %[[TMP_B:.*]] = load { float, float }, ptr %[[B_ADDR]], align 4
// LLVM: %[[A_REAL:.*]] = extractvalue { float, float } %[[TMP_A]], 0
// LLVM: %[[A_IMAG:.*]] = extractvalue { float, float } %[[TMP_A]], 1
// LLVM: %[[B_REAL:.*]] = extractvalue { float, float } %[[TMP_B]], 0
@@ -123,7 +123,7 @@ void foo3() {
// LLVM: %[[ADD_IMAG_A_B:.*]] = fadd float %[[A_IMAG]], %[[B_IMAG]]
// LLVM: %[[A_B:.*]] = insertvalue { float, float } poison, float %[[ADD_REAL_A_B]], 0
// LLVM: %[[TMP_A_B:.*]] = insertvalue { float, float } %[[A_B]], float %[[ADD_IMAG_A_B]], 1
-// LLVM: %[[TMP_C:.*]] = load { float, float }, ptr %[[COMPLEX_C]], align 4
+// LLVM: %[[TMP_C:.*]] = load { float, float }, ptr %[[C_ADDR]], align 4
// LLVM: %[[A_B_REAL:.*]] = extractvalue { float, float } %[[TMP_A_B]], 0
// LLVM: %[[A_B_IMAG:.*]] = extractvalue { float, float } %[[TMP_A_B]], 1
// LLVM: %[[C_REAL:.*]] = extractvalue { float, float } %[[TMP_C]], 0
@@ -134,23 +134,23 @@ void foo3() {
// LLVM: %[[TMP_A_B_C:.*]] = insertvalue { float, float } %[[A_B_C]], float %[[ADD_IMAG_A_B_C]], 1
// LLVM: store { float, float } %[[TMP_A_B_C]], ptr %[[RESULT]], align 4
-// OGCG: %[[COMPLEX_A:.*]] = alloca { float, float }, align 4
-// OGCG: %[[COMPLEX_B:.*]] = alloca { float, float }, align 4
-// OGCG: %[[COMPLEX_C:.*]] = alloca { float, float }, align 4
+// OGCG: %[[A_ADDR:.*]] = alloca { float, float }, align 4
+// OGCG: %[[B_ADDR:.*]] = alloca { float, float }, align 4
+// OGCG: %[[C_ADDR:.*]] = alloca { float, float }, align 4
// OGCG: %[[RESULT:.*]] = alloca { float, float }, align 4
-// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_A]], i32 0, i32 0
+// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[A_ADDR]], i32 0, i32 0
// OGCG: %[[A_REAL:.*]] = load float, ptr %[[A_REAL_PTR]], align 4
-// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_A]], i32 0, i32 1
+// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[A_ADDR]], i32 0, i32 1
// OGCG: %[[A_IMAG:.*]] = load float, ptr %[[A_IMAG_PTR]], align 4
-// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_B]], i32 0, i32 0
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 0
// OGCG: %[[B_REAL:.*]] = load float, ptr %[[B_REAL_PTR]], align 4
-// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_B]], i32 0, i32 1
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 1
// OGCG: %[[B_IMAG:.*]] = load float, ptr %[[B_IMAG_PTR]], align 4
// OGCG: %[[ADD_REAL_A_B:.*]] = fadd float %[[A_REAL]], %[[B_REAL]]
// OGCG: %[[ADD_IMAG_A_B:.*]] = fadd float %[[A_IMAG]], %[[B_IMAG]]
-// OGCG: %[[C_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_C]], i32 0, i32 0
+// OGCG: %[[C_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[C_ADDR]], i32 0, i32 0
// OGCG: %[[C_REAL:.*]] = load float, ptr %[[C_REAL_PTR]], align 4
-// OGCG: %[[C_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_C]], i32 0, i32 1
+// OGCG: %[[C_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[C_ADDR]], i32 0, i32 1
// OGCG: %[[C_IMAG:.*]] = load float, ptr %[[C_IMAG_PTR]], align 4
// OGCG: %[[ADD_REAL_A_B_C:.*]] = fadd float %[[ADD_REAL_A_B]], %[[C_REAL]]
// OGCG: %[[ADD_IMAG_A_B_C:.*]] = fadd float %[[ADD_IMAG_A_B]], %[[C_IMAG]]
@@ -165,17 +165,17 @@ void foo4() {
int _Complex c = a - b;
}
-// CIR: %[[COMPLEX_A:.*]] = cir.alloca !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>, ["a"]
-// CIR: %[[COMPLEX_B:.*]] = cir.alloca !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>, ["b"]
-// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[COMPLEX_A]] : !cir.ptr<!cir.complex<!s32i>>, !cir.complex<!s32i>
-// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[COMPLEX_B]] : !cir.ptr<!cir.complex<!s32i>>, !cir.complex<!s32i>
+// CIR: %[[A_ADDR:.*]] = cir.alloca !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>, ["a"]
+// CIR: %[[B_ADDR:.*]] = cir.alloca !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>, ["b"]
+// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[A_ADDR]] : !cir.ptr<!cir.complex<!s32i>>, !cir.complex<!s32i>
+// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[B_ADDR]] : !cir.ptr<!cir.complex<!s32i>>, !cir.complex<!s32i>
// CIR: %[[SUB:.*]] = cir.complex.sub %[[TMP_A]], %[[TMP_B]] : !cir.complex<!s32i>
-// LLVM: %[[COMPLEX_A:.*]] = alloca { i32, i32 }, i64 1, align 4
-// LLVM: %[[COMPLEX_B:.*]] = alloca { i32, i32 }, i64 1, align 4
-// LLVM: %[[COMPLEX_C:.*]] = alloca { i32, i32 }, i64 1, align 4
-// LLVM: %[[TMP_A:.*]] = load { i32, i32 }, ptr %[[COMPLEX_A]], align 4
-// LLVM: %[[TMP_B:.*]] = load { i32, i32 }, ptr %[[COMPLEX_B]], align 4
+// LLVM: %[[A_ADDR:.*]] = alloca { i32, i32 }, i64 1, align 4
+// LLVM: %[[B_ADDR:.*]] = alloca { i32, i32 }, i64 1, align 4
+// LLVM: %[[C_ADDR:.*]] = alloca { i32, i32 }, i64 1, align 4
+// LLVM: %[[TMP_A:.*]] = load { i32, i32 }, ptr %[[A_ADDR]], align 4
+// LLVM: %[[TMP_B:.*]] = load { i32, i32 }, ptr %[[B_ADDR]], align 4
// LLVM: %[[A_REAL:.*]] = extractvalue { i32, i32 } %[[TMP_A]], 0
// LLVM: %[[A_IMAG:.*]] = extractvalue { i32, i32 } %[[TMP_A]], 1
// LLVM: %[[B_REAL:.*]] = extractvalue { i32, i32 } %[[TMP_B]], 0
@@ -184,18 +184,18 @@ void foo4() {
// LLVM: %[[SUB_IMAG:.*]] = sub i32 %[[A_IMAG]], %[[B_IMAG]]
// LLVM: %[[RESULT:.*]] = insertvalue { i32, i32 } poison, i32 %[[SUB_REAL]], 0
// LLVM: %[[RESULT_2:.*]] = insertvalue { i32, i32 } %[[RESULT]], i32 %[[SUB_IMAG]], 1
-// LLVM: store { i32, i32 } %[[RESULT_2]], ptr %[[COMPLEX_C]], align 4
+// LLVM: store { i32, i32 } %[[RESULT_2]], ptr %[[C_ADDR]], align 4
-// OGCG: %[[COMPLEX_A:.*]] = alloca { i32, i32 }, align 4
-// OGCG: %[[COMPLEX_B:.*]] = alloca { i32, i32 }, align 4
+// OGCG: %[[A_ADDR:.*]] = alloca { i32, i32 }, align 4
+// OGCG: %[[B_ADDR:.*]] = alloca { i32, i32 }, align 4
// OGCG: %[[RESULT:.*]] = alloca { i32, i32 }, align 4
-// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[COMPLEX_A]], i32 0, i32 0
+// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[A_ADDR]], i32 0, i32 0
// OGCG: %[[A_REAL:.*]] = load i32, ptr %[[A_REAL_PTR]], align 4
-// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[COMPLEX_A]], i32 0, i32 1
+// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[A_ADDR]], i32 0, i32 1
// OGCG: %[[A_IMAG:.*]] = load i32, ptr %[[A_IMAG_PTR]], align 4
-// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[COMPLEX_B]], i32 0, i32 0
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 0
// OGCG: %[[B_REAL:.*]] = load i32, ptr %[[B_REAL_PTR]], align 4
-// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[COMPLEX_B]], i32 0, i32 1
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 1
// OGCG: %[[B_IMAG:.*]] = load i32, ptr %[[B_IMAG_PTR]], align 4
// OGCG: %[[SUB_REAL:.*]] = sub i32 %[[A_REAL]], %[[B_REAL]]
// OGCG: %[[SUB_IMAG:.*]] = sub i32 %[[A_IMAG]], %[[B_IMAG]]
@@ -210,16 +210,16 @@ void foo5() {
float _Complex c = a - b;
}
-// CIR: %[[COMPLEX_A:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["a"]
-// CIR: %[[COMPLEX_B:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["b"]
-// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[COMPLEX_A]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
-// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[COMPLEX_B]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[A_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["a"]
+// CIR: %[[B_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["b"]
+// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[A_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[B_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
// CIR: %[[SUB:.*]] = cir.complex.sub %[[TMP_A]], %[[TMP_B]] : !cir.complex<!cir.float>
-// LLVM: %[[COMPLEX_A:.*]] = alloca { float, float }, i64 1, align 4
-// LLVM: %[[COMPLEX_B:.*]] = alloca { float, float }, i64 1, align 4
-// LLVM: %[[TMP_A:.*]] = load { float, float }, ptr %[[COMPLEX_A]], align 4
-// LLVM: %[[TMP_B:.*]] = load { float, float }, ptr %[[COMPLEX_B]], align 4
+// LLVM: %[[A_ADDR:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[B_ADDR:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[TMP_A:.*]] = load { float, float }, ptr %[[A_ADDR]], align 4
+// LLVM: %[[TMP_B:.*]] = load { float, float }, ptr %[[B_ADDR]], align 4
// LLVM: %[[A_REAL:.*]] = extractvalue { float, float } %[[TMP_A]], 0
// LLVM: %[[A_IMAG:.*]] = extractvalue { float, float } %[[TMP_A]], 1
// LLVM: %[[B_REAL:.*]] = extractvalue { float, float } %[[TMP_B]], 0
@@ -229,16 +229,16 @@ void foo5() {
// LLVM: %[[RESULT:.*]] = insertvalue { float, float } poison, float %[[SUB_REAL]], 0
// LLVM: %[[RESULT_2:.*]] = insertvalue { float, float } %[[RESULT]], float %[[SUB_IMAG]], 1
-// OGCG: %[[COMPLEX_A:.*]] = alloca { float, float }, align 4
-// OGCG: %[[COMPLEX_B:.*]] = alloca { float, float }, align 4
+// OGCG: %[[A_ADDR:.*]] = alloca { float, float }, align 4
+// OGCG: %[[B_ADDR:.*]] = alloca { float, float }, align 4
// OGCG: %[[RESULT:.*]] = alloca { float, float }, align 4
-// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_A]], i32 0, i32 0
+// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[A_ADDR]], i32 0, i32 0
// OGCG: %[[A_REAL:.*]] = load float, ptr %[[A_REAL_PTR]], align 4
-// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_A]], i32 0, i32 1
+// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[A_ADDR]], i32 0, i32 1
// OGCG: %[[A_IMAG:.*]] = load float, ptr %[[A_IMAG_PTR]], align 4
-// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_B]], i32 0, i32 0
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 0
// OGCG: %[[B_REAL:.*]] = load float, ptr %[[B_REAL_PTR]], align 4
-// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_B]], i32 0, i32 1
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 1
// OGCG: %[[B_IMAG:.*]] = load float, ptr %[[B_IMAG_PTR]], align 4
// OGCG: %[[SUB_REAL:.*]] = fsub float %[[A_REAL]], %[[B_REAL]]
// OGCG: %[[SUB_IMAG:.*]] = fsub float %[[A_IMAG]], %[[B_IMAG]]
@@ -254,23 +254,23 @@ void foo6() {
float _Complex d = (a - b) - c;
}
-// CIR: %[[COMPLEX_A:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["a"]
-// CIR: %[[COMPLEX_B:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["b"]
-// CIR: %[[COMPLEX_C:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["c"]
+// CIR: %[[A_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["a"]
+// CIR: %[[B_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["b"]
+// CIR: %[[C_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["c"]
// CIR: %[[RESULT:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["d", init]
-// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[COMPLEX_A]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
-// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[COMPLEX_B]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[A_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[B_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
// CIR: %[[SUB_A_B:.*]] = cir.complex.sub %[[TMP_A]], %[[TMP_B]] : !cir.complex<!cir.float>
-// CIR: %[[TMP_C:.*]] = cir.load{{.*}} %[[COMPLEX_C]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[TMP_C:.*]] = cir.load{{.*}} %[[C_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
// CIR: %[[SUB_A_B_C:.*]] = cir.complex.sub %[[SUB_A_B]], %[[TMP_C]] : !cir.complex<!cir.float>
// CIR: cir.store{{.*}} %[[SUB_A_B_C]], %[[RESULT]] : !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>
-// LLVM: %[[COMPLEX_A:.*]] = alloca { float, float }, i64 1, align 4
-// LLVM: %[[COMPLEX_B:.*]] = alloca { float, float }, i64 1, align 4
-// LLVM: %[[COMPLEX_C:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[A_ADDR:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[B_ADDR:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[C_ADDR:.*]] = alloca { float, float }, i64 1, align 4
// LLVM: %[[RESULT:.*]] = alloca { float, float }, i64 1, align 4
-// LLVM: %[[TMP_A:.*]] = load { float, float }, ptr %[[COMPLEX_A]], align 4
-// LLVM: %[[TMP_B:.*]] = load { float, float }, ptr %[[COMPLEX_B]], align 4
+// LLVM: %[[TMP_A:.*]] = load { float, float }, ptr %[[A_ADDR]], align 4
+// LLVM: %[[TMP_B:.*]] = load { float, float }, ptr %[[B_ADDR]], align 4
// LLVM: %[[A_REAL:.*]] = extractvalue { float, float } %[[TMP_A]], 0
// LLVM: %[[A_IMAG:.*]] = extractvalue { float, float } %[[TMP_A]], 1
// LLVM: %[[B_REAL:.*]] = extractvalue { float, float } %[[TMP_B]], 0
@@ -279,7 +279,7 @@ void foo6() {
// LLVM: %[[SUB_IMAG_A_B:.*]] = fsub float %[[A_IMAG]], %[[B_IMAG]]
// LLVM: %[[A_B:.*]] = insertvalue { float, float } poison, float %[[SUB_REAL_A_B]], 0
// LLVM: %[[TMP_A_B:.*]] = insertvalue { float, float } %[[A_B]], float %[[SUB_IMAG_A_B]], 1
-// LLVM: %[[TMP_C:.*]] = load { float, float }, ptr %[[COMPLEX_C]], align 4
+// LLVM: %[[TMP_C:.*]] = load { float, float }, ptr %[[C_ADDR]], align 4
// LLVM: %[[A_B_REAL:.*]] = extractvalue { float, float } %[[TMP_A_B]], 0
// LLVM: %[[A_B_IMAG:.*]] = extractvalue { float, float } %[[TMP_A_B]], 1
// LLVM: %[[C_REAL:.*]] = extractvalue { float, float } %[[TMP_C]], 0
@@ -290,23 +290,23 @@ void foo6() {
// LLVM: %[[TMP_A_B_C:.*]] = insertvalue { float, float } %[[A_B_C]], float %[[SUB_IMAG_A_B_C]], 1
// LLVM: store { float, float } %[[TMP_A_B_C]], ptr %[[RESULT]], align 4
-// OGCG: %[[COMPLEX_A:.*]] = alloca { float, float }, align 4
-// OGCG: %[[COMPLEX_B:.*]] = alloca { float, float }, align 4
-// OGCG: %[[COMPLEX_C:.*]] = alloca { float, float }, align 4
+// OGCG: %[[A_ADDR:.*]] = alloca { float, float }, align 4
+// OGCG: %[[B_ADDR:.*]] = alloca { float, float }, align 4
+// OGCG: %[[C_ADDR:.*]] = alloca { float, float }, align 4
// OGCG: %[[RESULT:.*]] = alloca { float, float }, align 4
-// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_A]], i32 0, i32 0
+// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[A_ADDR]], i32 0, i32 0
// OGCG: %[[A_REAL:.*]] = load float, ptr %[[A_REAL_PTR]], align 4
-// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_A]], i32 0, i32 1
+// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[A_ADDR]], i32 0, i32 1
// OGCG: %[[A_IMAG:.*]] = load float, ptr %[[A_IMAG_PTR]], align 4
-// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_B]], i32 0, i32 0
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 0
// OGCG: %[[B_REAL:.*]] = load float, ptr %[[B_REAL_PTR]], align 4
-// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_B]], i32 0, i32 1
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 1
// OGCG: %[[B_IMAG:.*]] = load float, ptr %[[B_IMAG_PTR]], align 4
// OGCG: %[[SUB_REAL_A_B:.*]] = fsub float %[[A_REAL]], %[[B_REAL]]
// OGCG: %[[SUB_IMAG_A_B:.*]] = fsub float %[[A_IMAG]], %[[B_IMAG]]
-// OGCG: %[[C_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_C]], i32 0, i32 0
+// OGCG: %[[C_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[C_ADDR]], i32 0, i32 0
// OGCG: %[[C_REAL:.*]] = load float, ptr %[[C_REAL_PTR]], align 4
-// OGCG: %[[C_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[COMPLEX_C]], i32 0, i32 1
+// OGCG: %[[C_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[C_ADDR]], i32 0, i32 1
// OGCG: %[[C_IMAG:.*]] = load float, ptr %[[C_IMAG_PTR]], align 4
// OGCG: %[[SUB_REAL_A_B_C:.*]] = fsub float %[[SUB_REAL_A_B]], %[[C_REAL]]
// OGCG: %[[SUB_IMAG_A_B_C:.*]] = fsub float %[[SUB_IMAG_A_B]], %[[C_IMAG]]
@@ -314,3 +314,4 @@ void foo6() {
// OGCG: %[[RESULT_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[RESULT]], i32 0, i32 1
// OGCG: store float %[[SUB_REAL_A_B_C]], ptr %[[RESULT_REAL_PTR]], align 4
// OGCG: store float %[[SUB_IMAG_A_B_C]], ptr %[[RESULT_IMAG_PTR]], align 4
+
diff --git a/clang/test/CIR/CodeGen/complex-compound-assignment.cpp b/clang/test/CIR/CodeGen/complex-compound-assignment.cpp
new file mode 100644
index 0000000000000..35a8aa693f8ed
--- /dev/null
+++ b/clang/test/CIR/CodeGen/complex-compound-assignment.cpp
@@ -0,0 +1,288 @@
+// RUN: %clang_cc1 -std=c++20 -triple x86_64-unknown-linux-gnu -Wno-unused-value -fclangir -emit-cir %s -o %t.cir
+// RUN: FileCheck --input-file=%t.cir %s --check-prefixes=CIR,CXX_CIR
+// RUN: %clang_cc1 -x c -triple x86_64-unknown-linux-gnu -Wno-unused-value -fclangir -emit-cir %s -o %t.cir
+// RUN: FileCheck --input-file=%t.cir %s -check-prefix=CIR
+// RUN: %clang_cc1 -std=c++20 -triple x86_64-unknown-linux-gnu -Wno-unused-value -fclangir -emit-llvm %s -o %t-cir.ll
+// RUN: FileCheck --input-file=%t-cir.ll %s --check-prefixes=LLVM,CXX_LLVM
+// RUN: %clang_cc1 -x c -triple x86_64-unknown-linux-gnu -Wno-unused-value -fclangir -emit-llvm %s -o %t-cir.ll
+// RUN: FileCheck --input-file=%t-cir.ll %s -check-prefix=LLVM
+// RUN: %clang_cc1 -std=c++20 -triple x86_64-unknown-linux-gnu -Wno-unused-value -emit-llvm %s -o %t.ll
+// RUN: FileCheck --input-file=%t.ll %s --check-prefixes=OGCG,CXX_OGCG
+// RUN: %clang_cc1 -x c -triple x86_64-unknown-linux-gnu -Wno-unused-value -emit-llvm %s -o %t.ll
+// RUN: FileCheck --input-file=%t.ll %s -check-prefix=OGCG
+
+void foo() {
+ float _Complex a;
+ float _Complex b;
+ b += a;
+}
+
+// CIR: %[[A_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["a"]
+// CIR: %[[B_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["b"]
+// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[A_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[B_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[RESULT:.*]] = cir.complex.add %[[TMP_B]], %[[TMP_A]] : !cir.complex<!cir.float>
+// CIR: cir.store{{.*}} %[[RESULT]], %[[B_ADDR]] : !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>
+
+// LLVM: %[[A_ADDR:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[B_ADDR:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[TMP_A:.*]] = load { float, float }, ptr %[[A_ADDR]], align 4
+// LLVM: %[[TMP_B:.*]] = load { float, float }, ptr %[[B_ADDR]], align 4
+// LLVM: %[[B_REAL:.*]] = extractvalue { float, float } %[[TMP_B]], 0
+// LLVM: %[[B_IMAG:.*]] = extractvalue { float, float } %[[TMP_B]], 1
+// LLVM: %[[A_REAL:.*]] = extractvalue { float, float } %[[TMP_A]], 0
+// LLVM: %[[A_IMAG:.*]] = extractvalue { float, float } %[[TMP_A]], 1
+// LLVM: %[[ADD_REAL_A_B:.*]] = fadd float %[[B_REAL]], %[[A_REAL]]
+// LLVM: %[[ADD_IMAG_A_B:.*]] = fadd float %[[B_IMAG]], %[[A_IMAG]]
+// LLVM: %[[ADD_A_B:.*]] = insertvalue { float, float } poison, float %[[ADD_REAL_A_B]], 0
+// LLVM: %[[RESULT:.*]] = insertvalue { float, float } %[[ADD_A_B]], float %[[ADD_IMAG_A_B]], 1
+// LLVM: store { float, float } %[[RESULT]], ptr %[[B_ADDR]], align 4
+
+// OGCG: %[[A_ADDR:.*]] = alloca { float, float }, align 4
+// OGCG: %[[B_ADDR:.*]] = alloca { float, float }, align 4
+// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[A_ADDR]], i32 0, i32 0
+// OGCG: %[[A_REAL:.*]] = load float, ptr %[[A_REAL_PTR]], align 4
+// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[A_ADDR]], i32 0, i32 1
+// OGCG: %[[A_IMAG:.*]] = load float, ptr %[[A_IMAG_PTR]], align 4
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 0
+// OGCG: %[[B_REAL:.*]] = load float, ptr %[[B_REAL_PTR]], align 4
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 1
+// OGCG: %[[B_IMAG:.*]] = load float, ptr %[[B_IMAG_PTR]], align 4
+// OGCG: %[[ADD_REAL:.*]] = fadd float %[[B_REAL]], %[[A_REAL]]
+// OGCG: %[[ADD_IMAG:.*]] = fadd float %[[B_IMAG]], %[[A_IMAG]]
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 0
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 1
+// OGCG: store float %[[ADD_REAL]], ptr %[[B_REAL_PTR]], align 4
+// OGCG: store float %[[ADD_IMAG]], ptr %[[B_IMAG_PTR]], align 4
+
+void foo1() {
+ float _Complex a;
+ float _Complex b;
+ b -= a;
+}
+
+// CIR: %[[A_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["a"]
+// CIR: %[[B_ADDR:.*]] = cir.alloca !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>, ["b"]
+// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[A_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[B_ADDR]] : !cir.ptr<!cir.complex<!cir.float>>, !cir.complex<!cir.float>
+// CIR: %[[RESULT:.*]] = cir.complex.sub %[[TMP_B]], %[[TMP_A]] : !cir.complex<!cir.float>
+// CIR: cir.store{{.*}} %[[RESULT]], %[[B_ADDR]] : !cir.complex<!cir.float>, !cir.ptr<!cir.complex<!cir.float>>
+
+// LLVM: %[[A_ADDR:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[B_ADDR:.*]] = alloca { float, float }, i64 1, align 4
+// LLVM: %[[TMP_A:.*]] = load { float, float }, ptr %[[A_ADDR]], align 4
+// LLVM: %[[TMP_B:.*]] = load { float, float }, ptr %[[B_ADDR]], align 4
+// LLVM: %[[B_REAL:.*]] = extractvalue { float, float } %[[TMP_B]], 0
+// LLVM: %[[B_IMAG:.*]] = extractvalue { float, float } %[[TMP_B]], 1
+// LLVM: %[[A_REAL:.*]] = extractvalue { float, float } %[[TMP_A]], 0
+// LLVM: %[[A_IMAG:.*]] = extractvalue { float, float } %[[TMP_A]], 1
+// LLVM: %[[SUB_REAL_A_B:.*]] = fsub float %[[B_REAL]], %[[A_REAL]]
+// LLVM: %[[SUB_IMAG_A_B:.*]] = fsub float %[[B_IMAG]], %[[A_IMAG]]
+// LLVM: %[[SUB_A_B:.*]] = insertvalue { float, float } poison, float %[[SUB_REAL_A_B]], 0
+// LLVM: %[[RESULT:.*]] = insertvalue { float, float } %[[SUB_A_B]], float %[[SUB_IMAG_A_B]], 1
+// LLVM: store { float, float } %[[RESULT]], ptr %[[B_ADDR]], align 4
+
+// OGCG: %[[A_ADDR:.*]] = alloca { float, float }, align 4
+// OGCG: %[[B_ADDR:.*]] = alloca { float, float }, align 4
+// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[A_ADDR]], i32 0, i32 0
+// OGCG: %[[A_REAL:.*]] = load float, ptr %[[A_REAL_PTR]], align 4
+// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[A_ADDR]], i32 0, i32 1
+// OGCG: %[[A_IMAG:.*]] = load float, ptr %[[A_IMAG_PTR]], align 4
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 0
+// OGCG: %[[B_REAL:.*]] = load float, ptr %[[B_REAL_PTR]], align 4
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 1
+// OGCG: %[[B_IMAG:.*]] = load float, ptr %[[B_IMAG_PTR]], align 4
+// OGCG: %[[SUB_REAL:.*]] = fsub float %[[B_REAL]], %[[A_REAL]]
+// OGCG: %[[SUB_IMAG:.*]] = fsub float %[[B_IMAG]], %[[A_IMAG]]
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 0
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { float, float }, ptr %[[B_ADDR]], i32 0, i32 1
+// OGCG: store float %[[SUB_REAL]], ptr %[[B_REAL_PTR]], align 4
+// OGCG: store float %[[SUB_IMAG]], ptr %[[B_IMAG_PTR]], align 4
+
+void foo2() {
+ int _Complex a;
+ int _Complex b;
+ b += a;
+}
+
+// CIR: %[[A_ADDR:.*]] = cir.alloca !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>, ["a"]
+// CIR: %[[B_ADDR:.*]] = cir.alloca !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>, ["b"]
+// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[A_ADDR]] : !cir.ptr<!cir.complex<!s32i>>, !cir.complex<!s32i>
+// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[B_ADDR]] : !cir.ptr<!cir.complex<!s32i>>, !cir.complex<!s32i>
+// CIR: %[[RESULT:.*]] = cir.complex.add %[[TMP_B]], %[[TMP_A]] : !cir.complex<!s32i>
+// CIR: cir.store{{.*}} %[[RESULT]], %[[B_ADDR]] : !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>
+
+// LLVM: %[[A_ADDR:.*]] = alloca { i32, i32 }, i64 1, align 4
+// LLVM: %[[B_ADDR:.*]] = alloca { i32, i32 }, i64 1, align 4
+// LLVM: %[[TMP_A:.*]] = load { i32, i32 }, ptr %[[A_ADDR]], align 4
+// LLVM: %[[TMP_B:.*]] = load { i32, i32 }, ptr %[[B_ADDR]], align 4
+// LLVM: %[[B_REAL:.*]] = extractvalue { i32, i32 } %[[TMP_B]], 0
+// LLVM: %[[B_IMAG:.*]] = extractvalue { i32, i32 } %[[TMP_B]], 1
+// LLVM: %[[A_REAL:.*]] = extractvalue { i32, i32 } %[[TMP_A]], 0
+// LLVM: %[[A_IMAG:.*]] = extractvalue { i32, i32 } %[[TMP_A]], 1
+// LLVM: %[[ADD_REAL_A_B:.*]] = add i32 %[[B_REAL]], %[[A_REAL]]
+// LLVM: %[[ADD_IMAG_A_B:.*]] = add i32 %[[B_IMAG]], %[[A_IMAG]]
+// LLVM: %[[ADD_A_B:.*]] = insertvalue { i32, i32 } poison, i32 %[[ADD_REAL_A_B]], 0
+// LLVM: %[[RESULT:.*]] = insertvalue { i32, i32 } %[[ADD_A_B]], i32 %[[ADD_IMAG_A_B]], 1
+// LLVM: store { i32, i32 } %[[RESULT]], ptr %[[B_ADDR]], align 4
+
+// OGCG: %[[A_ADDR:.*]] = alloca { i32, i32 }, align 4
+// OGCG: %[[B_ADDR:.*]] = alloca { i32, i32 }, align 4
+// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[A_ADDR]], i32 0, i32 0
+// OGCG: %[[A_REAL:.*]] = load i32, ptr %[[A_REAL_PTR]], align 4
+// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[A_ADDR]], i32 0, i32 1
+// OGCG: %[[A_IMAG:.*]] = load i32, ptr %[[A_IMAG_PTR]], align 4
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 0
+// OGCG: %[[B_REAL:.*]] = load i32, ptr %[[B_REAL_PTR]], align 4
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 1
+// OGCG: %[[B_IMAG:.*]] = load i32, ptr %[[B_IMAG_PTR]], align 4
+// OGCG: %[[ADD_REAL:.*]] = add i32 %[[B_REAL]], %[[A_REAL]]
+// OGCG: %[[ADD_IMAG:.*]] = add i32 %[[B_IMAG]], %[[A_IMAG]]
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 0
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 1
+// OGCG: store i32 %[[ADD_REAL]], ptr %[[B_REAL_PTR]], align 4
+// OGCG: store i32 %[[ADD_IMAG]], ptr %[[B_IMAG_PTR]], align 4
+
+void foo3() {
+ _Float16 _Complex a;
+ _Float16 _Complex b;
+ b += a;
+}
+
+// CIR: %[[A_ADDR:.*]] = cir.alloca !cir.complex<!cir.f16>, !cir.ptr<!cir.complex<!cir.f16>>, ["a"]
+// CIR: %[[B_ADDR:.*]] = cir.alloca !cir.complex<!cir.f16>, !cir.ptr<!cir.complex<!cir.f16>>, ["b"]
+// CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[A_ADDR]] : !cir.ptr<!cir.complex<!cir.f16>>, !cir.complex<!cir.f16>
+// CIR: %[[A_REAL:.*]] = cir.complex.real %[[TMP_A]] : !cir.complex<!cir.f16> -> !cir.f16
+// CIR: %[[A_IMAG:.*]] = cir.complex.imag %[[TMP_A]] : !cir.complex<!cir.f16> -> !cir.f16
+// CIR: %[[A_REAL_F32:.*]] = cir.cast(floating, %[[A_REAL]] : !cir.f16), !cir.float
+// CIR: %[[A_IMAG_F32:.*]] = cir.cast(floating, %[[A_IMAG]] : !cir.f16), !cir.float
+// CIR: %[[A_COMPLEX_F32:.*]] = cir.complex.create %[[A_REAL_F32]], %[[A_IMAG_F32]] : !cir.float -> !cir.complex<!cir.float>
+// CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[B_ADDR]] : !cir.ptr<!cir.complex<!cir.f16>>, !cir.complex<!cir.f16>
+// CIR: %[[B_REAL:.*]] = cir.complex.real %[[TMP_B]] : !cir.complex<!cir.f16> -> !cir.f16
+// CIR: %[[B_IMAG:.*]] = cir.complex.imag %[[TMP_B]] : !cir.complex<!cir.f16> -> !cir.f16
+// CIR: %[[B_REAL_F32:.*]] = cir.cast(floating, %[[B_REAL]] : !cir.f16), !cir.float
+// CIR: %[[B_IMAG_F32:.*]] = cir.cast(floating, %[[B_IMAG]] : !cir.f16), !cir.float
+// CIR: %[[B_COMPLEX_F32:.*]] = cir.complex.create %[[B_REAL_F32]], %[[B_IMAG_F32]] : !cir.float -> !cir.complex<!cir.float>
+// CIR: %[[ADD_A_B:.*]] = cir.complex.add %[[B_COMPLEX_F32]], %[[A_COMPLEX_F32]] : !cir.complex<!cir.float>
+// CIR: %[[ADD_REAL:.*]] = cir.complex.real %[[ADD_A_B]] : !cir.complex<!cir.float> -> !cir.float
+// CIR: %[[ADD_IMAG:.*]] = cir.complex.imag %[[ADD_A_B]] : !cir.complex<!cir.float> -> !cir.float
+// CIR: %[[ADD_REAL_F16:.*]] = cir.cast(floating, %[[ADD_REAL]] : !cir.float), !cir.f16
+// CIR: %[[ADD_IMAG_F16:.*]] = cir.cast(floating, %[[ADD_IMAG]] : !cir.float), !cir.f16
+// CIR: %[[RESULT:.*]] = cir.complex.create %[[ADD_REAL_F16]], %[[ADD_IMAG_F16]] : !cir.f16 -> !cir.complex<!cir.f16>
+// CIR: cir.store{{.*}} %[[RESULT]], %[[B_ADDR]] : !cir.complex<!cir.f16>, !cir.ptr<!cir.complex<!cir.f16>>
+
+// LLVM: %[[A_ADDR:.*]] = alloca { half, half }, i64 1, align 2
+// LLVM: %[[B_ADDR:.*]] = alloca { half, half }, i64 1, align 2
+// LLVM: %[[TMP_A:.*]] = load { half, half }, ptr %[[A_ADDR]], align 2
+// LLVM: %[[A_REAL:.*]] = extractvalue { half, half } %[[TMP_A]], 0
+// LLVM: %[[A_IMAG:.*]] = extractvalue { half, half } %[[TMP_A]], 1
+// LLVM: %[[A_REAL_F32:.*]] = fpext half %[[A_REAL]] to float
+// LLVM: %[[A_IMAG_F32:.*]] = fpext half %[[A_IMAG]] to float
+// LLVM: %[[TMP_A_COMPLEX_F32:.*]] = insertvalue { float, float } {{.*}}, float %[[A_REAL_F32]], 0
+// LLVM: %[[A_COMPLEX_F32:.*]] = insertvalue { float, float } %[[TMP_A_COMPLEX_F32]], float %[[A_IMAG_F32]], 1
+// LLVM: %[[TMP_B:.*]] = load { half, half }, ptr %[[B_ADDR]], align 2
+// LLVM: %[[B_REAL:.*]] = extractvalue { half, half } %[[TMP_B]], 0
+// LLVM: %[[B_IMAG:.*]] = extractvalue { half, half } %[[TMP_B]], 1
+// LLVM: %[[B_REAL_F32:.*]] = fpext half %[[B_REAL]] to float
+// LLVM: %[[B_IMAG_F32:.*]] = fpext half %[[B_IMAG]] to float
+// LLVM: %[[TMP_B_COMPLEX_F32:.*]] = insertvalue { float, float } {{.*}}, float %[[B_REAL_F32]], 0
+// LLVM: %[[B_COMPLEX_F32:.*]] = insertvalue { float, float } %[[TMP_B_COMPLEX_F32]], float %[[B_IMAG_F32]], 1
+// LLVM: %[[B_REAL:.*]] = extractvalue { float, float } %[[B_COMPLEX_F32]], 0
+// LLVM: %[[B_IMAG:.*]] = extractvalue { float, float } %[[B_COMPLEX_F32]], 1
+// LLVM: %[[A_REAL:.*]] = extractvalue { float, float } %[[A_COMPLEX_F32]], 0
+// LLVM: %[[A_IMAG:.*]] = extractvalue { float, float } %[[A_COMPLEX_F32]], 1
+// LLVM: %[[ADD_REAL:.*]] = fadd float %[[B_REAL]], %[[A_REAL]]
+// LLVM: %[[ADD_IMAG:.*]] = fadd float %[[B_IMAG]], %[[A_IMAG]]
+// LLVM: %[[TMP_RESULT:.*]] = insertvalue { float, float } poison, float %[[ADD_REAL]], 0
+// LLVM: %[[RESULT:.*]] = insertvalue { float, float } %[[TMP_RESULT]], float %[[ADD_IMAG]], 1
+// LLVM: %[[RESULT_REAL:.*]] = extractvalue { float, float } %[[RESULT]], 0
+// LLVM: %[[RESULT_IMAG:.*]] = extractvalue { float, float } %[[RESULT]], 1
+// LLVM: %[[RESULT_REAL_F16:.*]] = fptrunc float %[[RESULT_REAL]] to half
+// LLVM: %[[RESULT_IMAG_F16:.*]] = fptrunc float %[[RESULT_IMAG]] to half
+// LLVM: %[[TMP_RESULT_F16:.*]] = insertvalue { half, half } undef, half %[[RESULT_REAL_F16]], 0
+// LLVM: %[[RESULT_F16:.*]] = insertvalue { half, half } %[[TMP_RESULT_F16]], half %[[RESULT_IMAG_F16]], 1
+// LLVM: store { half, half } %[[RESULT_F16]], ptr %[[B_ADDR]], align 2
+
+// OGCG: %[[A_ADDR:.*]] = alloca { half, half }, align 2
+// OGCG: %[[B_ADDR:.*]] = alloca { half, half }, align 2
+// OGCG: %[[A_REAL_PTR:.*]] = getelementptr inbounds nuw { half, half }, ptr %[[A_ADDR]], i32 0, i32 0
+// OGCG: %[[A_REAL:.*]] = load half, ptr %[[A_REAL_PTR]], align 2
+// OGCG: %[[A_IMAG_PTR:.*]] = getelementptr inbounds nuw { half, half }, ptr %[[A_ADDR]], i32 0, i32 1
+// OGCG: %[[A_IMAG:.*]] = load half, ptr %[[A_IMAG_PTR]], align 2
+// OGCG: %[[A_REAL_F32:.*]] = fpext half %[[A_REAL]] to float
+// OGCG: %[[A_IMAG_F32:.*]] = fpext half %[[A_IMAG]] to float
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { half, half }, ptr %[[B_ADDR]], i32 0, i32 0
+// OGCG: %[[B_REAL:.*]] = load half, ptr %[[B_REAL_PTR]], align 2
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { half, half }, ptr %[[B_ADDR]], i32 0, i32 1
+// OGCG: %[[B_IMAG:.*]] = load half, ptr %[[B_IMAG_PTR]], align 2
+// OGCG: %[[B_REAL_F32:.*]] = fpext half %[[B_REAL]] to float
+// OGCG: %[[B_IMAG_F32:.*]] = fpext half %[[B_IMAG]] to float
+// OGCG: %[[ADD_REAL:.*]] = fadd float %[[B_REAL_F32]], %[[A_REAL_F32]]
+// OGCG: %[[ADD_IMAG:.*]] = fadd float %[[B_IMAG_F32]], %[[A_IMAG_F32]]
+// OGCG: %[[ADD_REAL_F16:.*]] = fptrunc float %[[ADD_REAL]] to half
+// OGCG: %[[ADD_IMAG_F16:.*]] = fptrunc float %[[ADD_IMAG]] to half
+// OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { half, half }, ptr %[[B_ADDR]], i32 0, i32 0
+// OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { half, half }, ptr %[[B_ADDR]], i32 0, i32 1
+// OGCG: store half %[[ADD_REAL_F16]], ptr %[[B_REAL_PTR]], align 2
+// OGCG: store half %[[ADD_IMAG_F16]], ptr %[[B_IMAG_PTR]], align 2
+
+#ifdef __cplusplus
+void foo4() {
+ volatile _Complex int a;
+ volatile _Complex int b;
+ int _Complex c = b += a;
+}
+#endif
+
+// CXX_CIR: %[[A_ADDR:.*]] = cir.alloca !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>, ["a"]
+// CXX_CIR: %[[B_ADDR:.*]] = cir.alloca !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>, ["b"]
+// CXX_CIR: %[[C_ADDR:.*]] = cir.alloca !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>, ["c", init]
+// CXX_CIR: %[[TMP_A:.*]] = cir.load{{.*}} %[[A_ADDR]] : !cir.ptr<!cir.complex<!s32i>>, !cir.complex<!s32i>
+// CXX_CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[B_ADDR]] : !cir.ptr<!cir.complex<!s32i>>, !cir.complex<!s32i>
+// CXX_CIR: %[[RESULT:.*]] = cir.complex.add %[[TMP_B]], %[[TMP_A]] : !cir.complex<!s32i>
+// CXX_CIR: cir.store{{.*}} %[[RESULT]], %[[B_ADDR]] : !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>
+// CXX_CIR: %[[TMP_B:.*]] = cir.load{{.*}} %[[B_ADDR]] : !cir.ptr<!cir.complex<!s32i>>, !cir.complex<!s32i>
+// CXX_CIR: cir.store{{.*}} %[[TMP_B]], %[[C_ADDR]] : !cir.complex<!s32i>, !cir.ptr<!cir.complex<!s32i>>
+
+// CXX_LLVM: %[[A_ADDR:.*]] = alloca { i32, i32 }, i64 1, align 4
+// CXX_LLVM: %[[B_ADDR:.*]] = alloca { i32, i32 }, i64 1, align 4
+// CXX_LLVM: %[[C_ADDR:.*]] = alloca { i32, i32 }, i64 1, align 4
+// CXX_LLVM: %[[TMP_A:.*]] = load { i32, i32 }, ptr %[[A_ADDR]], align 4
+// CXX_LLVM: %[[TMP_B:.*]] = load { i32, i32 }, ptr %[[B_ADDR]], align 4
+// CXX_LLVM: %[[B_REAL:.*]] = extractvalue { i32, i32 } %[[TMP_B]], 0
+// CXX_LLVM: %[[B_IMAG:.*]] = extractvalue { i32, i32 } %[[TMP_B]], 1
+// CXX_LLVM: %[[A_REAL:.*]] = extractvalue { i32, i32 } %[[TMP_A]], 0
+// CXX_LLVM: %[[A_IMAG:.*]] = extractvalue { i32, i32 } %[[TMP_A]], 1
+// CXX_LLVM: %[[ADD_REAL:.*]] = add i32 %[[B_REAL]], %[[A_REAL]]
+// CXX_LLVM: %[[ADD_IMAG:.*]] = add i32 %[[B_IMAG]], %[[A_IMAG]]
+// CXX_LLVM: %[[TMP_RESULT:.*]] = insertvalue { i32, i32 } poison, i32 %[[ADD_REAL]], 0
+// CXX_LLVM: %[[RESULT:.*]] = insertvalue { i32, i32 } %[[TMP_RESULT]], i32 %[[ADD_IMAG]], 1
+// CXX_LLVM: store { i32, i32 } %[[RESULT]], ptr %[[B_ADDR]], align 4
+// CXX_LLVM: %[[TMP_B:.*]] = load { i32, i32 }, ptr %[[B_ADDR]], align 4
+// CXX_LLVM: store { i32, i32 } %[[TMP_B]], ptr %[[C_ADDR]], align 4
+
+// CXX_OGCG: %[[A_ADDR:.*]] = alloca { i32, i32 }, align 4
+// CXX_OGCG: %[[B_ADDR:.*]] = alloca { i32, i32 }, align 4
+// CXX_OGCG: %[[C_ADDR:.*]] = alloca { i32, i32 }, align 4
+// CXX_OGCG: %a.realp = getelementptr inbounds nuw { i32, i32 }, ptr %[[A_ADDR]], i32 0, i32 0
+// CXX_OGCG: %a.real = load volatile i32, ptr %a.realp, align 4
+// CXX_OGCG: %a.imagp = getelementptr inbounds nuw { i32, i32 }, ptr %[[A_ADDR]], i32 0, i32 1
+// CXX_OGCG: %a.imag = load volatile i32, ptr %a.imagp, align 4
+// CXX_OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 0
+// CXX_OGCG: %[[B_REAL:.*]] = load volatile i32, ptr %[[B_REAL_PTR]], align 4
+// CXX_OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 1
+// CXX_OGCG: %[[B_IMAG:.*]] = load volatile i32, ptr %[[B_IMAG_PTR]], align 4
+// CXX_OGCG: %[[ADD_REAL:.*]] = add i32 %[[B_REAL]], %[[A_REAL]]
+// CXX_OGCG: %[[ADD_IMAG:.*]] = add i32 %[[B_IMAG]], %[[A_IMAG]]
+// CXX_OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 0
+// CXX_OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 1
+// CXX_OGCG: store volatile i32 %[[ADD_REAL]], ptr %[[B_REAL_PTR]], align 4
+// CXX_OGCG: store volatile i32 %[[ADD_IMAG]], ptr %[[B_IMAG_PTR]], align 4
+// CXX_OGCG: %[[B_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 0
+// CXX_OGCG: %[[B_REAL:.*]] = load volatile i32, ptr %[[B_REAL_PTR]], align 4
+// CXX_OGCG: %[[B_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[B_ADDR]], i32 0, i32 1
+// CXX_OGCG: %[[B_IMAG:.*]] = load volatile i32, ptr %[[B_IMAG_PTR]], align 4
+// CXX_OGCG: %[[C_REAL_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[C_ADDR]], i32 0, i32 0
+// CXX_OGCG: %[[C_IMAG_PTR:.*]] = getelementptr inbounds nuw { i32, i32 }, ptr %[[C_ADDR]], i32 0, i32 1
+// CXX_OGCG: store i32 %[[B_REAL]], ptr %[[C_REAL_PTR]], align 4
+// CXX_OGCG: store i32 %[[B_IMAG]], ptr %[[C_IMAG_PTR]], align 4
>From ab4090981012d4be142d0291aae7ed31ecf3ca9b Mon Sep 17 00:00:00 2001
From: Sean Fertile <sd.fertile at gmail.com>
Date: Wed, 6 Aug 2025 12:15:27 -0400
Subject: [PATCH 119/171] Implement the trampoline intrinsics and nest
parameter for AIX. (#149388)
We can expand the init intrinsic to create a descriptor for the nested
procedure by combining the entry point and TOC pointer from the global
descriptor with the nest argument. The normal indirect call sequence
then calls the nested procedure through the descriptor like all other
calls. The patch also implements support for a nest parameter by
mapping it to GPR 11.
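For reference, a minimal sketch of the three-slot descriptor this
lowering fills in (assuming the usual AIX descriptor layout; the struct
and field names below are illustrative, not taken from the patch):

  // Trampoline buffer contents after llvm.init.trampoline; the slots
  // sit at offsets 0, 1 * PointerSize and 2 * PointerSize, where
  // PointerSize is 4 on 32-bit AIX and 8 on 64-bit AIX.
  struct FunctionDescriptor {
    void *EntryPoint; // copied from the global descriptor
    void *TOCPointer; // copied from the global descriptor
    void *EnvPointer; // the nest argument
  };

The indirect call sequence then loads the entry point and TOC from this
descriptor exactly as it does for any other function descriptor.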
---
llvm/lib/Target/PowerPC/PPCISelLowering.cpp | 76 ++++++++++++++++++---
llvm/test/CodeGen/PowerPC/aix-nest-param.ll | 11 ++-
llvm/test/CodeGen/PowerPC/aix-trampoline.ll | 22 ++++--
3 files changed, 93 insertions(+), 16 deletions(-)
diff --git a/llvm/lib/Target/PowerPC/PPCISelLowering.cpp b/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
index 47f003801fa40..196574e13de7c 100644
--- a/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
+++ b/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
@@ -3925,9 +3925,6 @@ SDValue PPCTargetLowering::LowerVACOPY(SDValue Op, SelectionDAG &DAG) const {
SDValue PPCTargetLowering::LowerADJUST_TRAMPOLINE(SDValue Op,
SelectionDAG &DAG) const {
- if (Subtarget.isAIXABI())
- report_fatal_error("ADJUST_TRAMPOLINE operation is not supported on AIX.");
-
return Op.getOperand(0);
}
@@ -3984,9 +3981,6 @@ SDValue PPCTargetLowering::LowerINLINEASM(SDValue Op, SelectionDAG &DAG) const {
SDValue PPCTargetLowering::LowerINIT_TRAMPOLINE(SDValue Op,
SelectionDAG &DAG) const {
- if (Subtarget.isAIXABI())
- report_fatal_error("INIT_TRAMPOLINE operation is not supported on AIX.");
-
SDValue Chain = Op.getOperand(0);
SDValue Trmp = Op.getOperand(1); // trampoline
SDValue FPtr = Op.getOperand(2); // nested function
@@ -3994,6 +3988,65 @@ SDValue PPCTargetLowering::LowerINIT_TRAMPOLINE(SDValue Op,
SDLoc dl(Op);
EVT PtrVT = getPointerTy(DAG.getDataLayout());
+
+ if (Subtarget.isAIXABI()) {
+ // On AIX we create a trampoline descriptor by combining the
+ // entry point and TOC from the global descriptor (FPtr) with the
+ // nest argument as the environment pointer.
+ uint64_t PointerSize = Subtarget.isPPC64() ? 8 : 4;
+ MaybeAlign PointerAlign(PointerSize);
+ auto MMOFlags = Subtarget.hasInvariantFunctionDescriptors()
+ ? (MachineMemOperand::MODereferenceable |
+ MachineMemOperand::MOInvariant)
+ : MachineMemOperand::MONone;
+
+ uint64_t TOCPointerOffset = 1 * PointerSize;
+ uint64_t EnvPointerOffset = 2 * PointerSize;
+ SDValue SDTOCPtrOffset = DAG.getConstant(TOCPointerOffset, dl, PtrVT);
+ SDValue SDEnvPtrOffset = DAG.getConstant(EnvPointerOffset, dl, PtrVT);
+
+ const Value *TrampolineAddr =
+ cast<SrcValueSDNode>(Op.getOperand(4))->getValue();
+ const Function *Func =
+ cast<Function>(cast<SrcValueSDNode>(Op.getOperand(5))->getValue());
+
+ SDValue OutChains[3];
+
+ // Copy the entry point address from the global descriptor to the
+ // trampoline buffer.
+ SDValue LoadEntryPoint =
+ DAG.getLoad(PtrVT, dl, Chain, FPtr, MachinePointerInfo(Func, 0),
+ PointerAlign, MMOFlags);
+ SDValue EPLoadChain = LoadEntryPoint.getValue(1);
+ OutChains[0] = DAG.getStore(EPLoadChain, dl, LoadEntryPoint, Trmp,
+ MachinePointerInfo(TrampolineAddr, 0));
+
+ // Copy the TOC pointer from the global descriptor to the trampoline
+ // buffer.
+ SDValue TOCFromDescriptorPtr =
+ DAG.getNode(ISD::ADD, dl, PtrVT, FPtr, SDTOCPtrOffset);
+ SDValue TOCReg = DAG.getLoad(PtrVT, dl, Chain, TOCFromDescriptorPtr,
+ MachinePointerInfo(Func, TOCPointerOffset),
+ PointerAlign, MMOFlags);
+ SDValue TrampolineTOCPointer =
+ DAG.getNode(ISD::ADD, dl, PtrVT, Trmp, SDTOCPtrOffset);
+ SDValue TOCLoadChain = TOCReg.getValue(1);
+ OutChains[1] =
+ DAG.getStore(TOCLoadChain, dl, TOCReg, TrampolineTOCPointer,
+ MachinePointerInfo(TrampolineAddr, TOCPointerOffset));
+
+ // Store the nest argument into the environment pointer in the trampoline
+ // buffer.
+ SDValue EnvPointer = DAG.getNode(ISD::ADD, dl, PtrVT, Trmp, SDEnvPtrOffset);
+ OutChains[2] =
+ DAG.getStore(Chain, dl, Nest, EnvPointer,
+ MachinePointerInfo(TrampolineAddr, EnvPointerOffset));
+
+ SDValue TokenFactor =
+ DAG.getNode(ISD::TokenFactor, dl, MVT::Other, OutChains);
+ return TokenFactor;
+ }
+
bool isPPC64 = (PtrVT == MVT::i64);
Type *IntPtrTy = DAG.getDataLayout().getIntPtrType(*DAG.getContext());
@@ -6865,9 +6918,6 @@ static bool CC_AIX(unsigned ValNo, MVT ValVT, MVT LocVT,
if (ValVT == MVT::f128)
report_fatal_error("f128 is unimplemented on AIX.");
- if (ArgFlags.isNest())
- report_fatal_error("Nest arguments are unimplemented.");
-
static const MCPhysReg GPR_32[] = {// 32-bit registers.
PPC::R3, PPC::R4, PPC::R5, PPC::R6,
PPC::R7, PPC::R8, PPC::R9, PPC::R10};
@@ -6882,6 +6932,14 @@ static bool CC_AIX(unsigned ValNo, MVT ValVT, MVT LocVT,
const ArrayRef<MCPhysReg> GPRs = IsPPC64 ? GPR_64 : GPR_32;
+ if (ArgFlags.isNest()) {
+ MCRegister EnvReg = State.AllocateReg(IsPPC64 ? PPC::X11 : PPC::R11);
+ if (!EnvReg)
+      report_fatal_error("More than one nest argument.");
+ State.addLoc(CCValAssign::getReg(ValNo, ValVT, EnvReg, RegVT, LocInfo));
+ return false;
+ }
+
if (ArgFlags.isByVal()) {
const Align ByValAlign(ArgFlags.getNonZeroByValAlign());
if (ByValAlign > StackAlign)
diff --git a/llvm/test/CodeGen/PowerPC/aix-nest-param.ll b/llvm/test/CodeGen/PowerPC/aix-nest-param.ll
index 1863eaf999f14..bfc7fbb374f1b 100644
--- a/llvm/test/CodeGen/PowerPC/aix-nest-param.ll
+++ b/llvm/test/CodeGen/PowerPC/aix-nest-param.ll
@@ -1,5 +1,5 @@
-; RUN: not --crash llc -mtriple powerpc-ibm-aix-xcoff < %s 2>&1 | FileCheck %s
-; RUN: not --crash llc -mtriple powerpc64-ibm-aix-xcoff < %s 2>&1 | FileCheck %s
+; RUN: llc -mtriple powerpc-ibm-aix-xcoff < %s 2>&1 | FileCheck %s
+; RUN: llc -mtriple powerpc64-ibm-aix-xcoff < %s 2>&1 | FileCheck %s
define ptr @nest_receiver(ptr nest %arg) nounwind {
ret ptr %arg
@@ -9,5 +9,10 @@ define ptr @nest_caller(ptr %arg) nounwind {
%result = call ptr @nest_receiver(ptr nest %arg)
ret ptr %result
}
+; CHECK-LABEL: .nest_receiver:
+; CHECK: mr 3, 11
+; CHECK: blr
-; CHECK: LLVM ERROR: Nest arguments are unimplemented.
+; CHECK-LABEL: .nest_caller:
+; CHECK: mr 11, 3
+; CHECK: bl .nest_receiver
diff --git a/llvm/test/CodeGen/PowerPC/aix-trampoline.ll b/llvm/test/CodeGen/PowerPC/aix-trampoline.ll
index b71f6b54587c5..19df220178e3b 100644
--- a/llvm/test/CodeGen/PowerPC/aix-trampoline.ll
+++ b/llvm/test/CodeGen/PowerPC/aix-trampoline.ll
@@ -1,7 +1,7 @@
-; RUN: not --crash llc -mtriple powerpc-ibm-aix-xcoff < %s 2>&1 | FileCheck %s
-; RUN: not --crash llc -mtriple powerpc64-ibm-aix-xcoff < %s 2>&1 | FileCheck %s
-
-; CHECK: LLVM ERROR: INIT_TRAMPOLINE operation is not supported on AIX.
+; RUN: llc -mtriple powerpc-ibm-aix-xcoff < %s 2>&1 | \
+; RUN: FileCheck %s --check-prefix=32BIT
+; RUN: llc -mtriple powerpc64-ibm-aix-xcoff < %s 2>&1 -mattr=-altivec | \
+; RUN: FileCheck %s --check-prefix=64BIT
define void @create_trampoline(ptr %buffer, ptr %nval) nounwind {
entry:
@@ -12,3 +12,17 @@ entry:
declare i32 @nested(i32);
declare void @llvm.init.trampoline(ptr, ptr, ptr) nounwind
+
+; 32BIT: stw 4, 8(3)
+; 32BIT: lwz [[FuncDesc:[0-9]+]], L..C0(2)
+; 32BIT-DAG: lwz [[SCRATCH1:[0-9]+]], 0([[FuncDesc]])
+; 32BIT-DAG: lwz [[SCRATCH2:[0-9]+]], 4([[FuncDesc]])
+; 32BIT-DAG: stw [[SCRATCH1]], 0(3)
+; 32BIT-DAG: stw [[SCRATCH2]], 4(3)
+
+; 64BIT: std 4, 16(3)
+; 64BIT-DAG: ld [[FuncDesc:[0-9]+]], L..C0(2)
+; 64BIT-DAG: ld [[SCRATCH1:[0-9]+]], 0([[FuncDesc]])
+; 64BIT-DAG: ld [[SCRATCH2:[0-9]+]], 8([[FuncDesc]])
+; 64BIT-DAG: std [[SCRATCH1]], 0(3)
+; 64BIT-DAG: std [[SCRATCH2]], 8(3)
>From 180d162ca2b08f24702235cb6ce4b8be714efad8 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Don=C3=A1t=20Nagy?= <donat.nagy at ericsson.com>
Date: Wed, 6 Aug 2025 18:41:07 +0200
Subject: [PATCH 120/171] [analyzer] Remove impossible BugType from
CStringChecker (#152163)
CStringChecker had an AdditionOverflow bug type which was intended for a
situation where the analyzer concludes that the addition of two
size/length values overflows `size_t`.
I strongly suspect that the analyzer could emit bugs of this type in
certain complex corner cases (e.g. due to inaccurate cast modeling), but
these reports would all be false positives, because in the real world
the sum of two size/length values is always far below SIZE_MAX. (Note
that there was no test where the analyzer emitted a bug with this
type.)
To simplify the code (and perhaps eliminate false positives), I
eliminated this bug type and replaced the code that emitted it with a
simple `addSink()` call (we still want to get rid of the execution
paths where the analyzer has made invalid assumptions).
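As a rough illustration of where the guarded sum arises, consider the
modeling of strcat (a hedged sketch; the function name is made up and
the comments state assumptions about checker internals, not verified
behavior):

  #include <cstring>

  void append(char *dst, const char *src) {
    // Modeling this call computes strlen(dst) + strlen(src) as the new
    // string length; checkAdditionOverflow() guards that sum, and a
    // path on which it would exceed SIZE_MAX is now sunk rather than
    // reported as an AdditionOverflow bug.
    std::strcat(dst, src);
  }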
---
.../Checkers/CStringChecker.cpp | 40 +++++--------------
1 file changed, 11 insertions(+), 29 deletions(-)
diff --git a/clang/lib/StaticAnalyzer/Checkers/CStringChecker.cpp b/clang/lib/StaticAnalyzer/Checkers/CStringChecker.cpp
index fd0a3982bf461..0e5fc0a074938 100644
--- a/clang/lib/StaticAnalyzer/Checkers/CStringChecker.cpp
+++ b/clang/lib/StaticAnalyzer/Checkers/CStringChecker.cpp
@@ -97,10 +97,6 @@ class CStringChecker
CheckerFrontendWithBugType UninitializedRead{
"Accessing unitialized/garbage values"};
- // FIXME: This bug type should be removed because it is only emitted in a
- // situation that is practically impossible.
- const BugType AdditionOverflow{&OutOfBounds, "API"};
-
StringRef getDebugTag() const override { return "MallocChecker"; }
static void *getTag() { static int tag; return &tag; }
@@ -330,7 +326,6 @@ class CStringChecker
const Stmt *S, StringRef WarningMsg) const;
void emitNotCStringBug(CheckerContext &C, ProgramStateRef State,
const Stmt *S, StringRef WarningMsg) const;
- void emitAdditionOverflowBug(CheckerContext &C, ProgramStateRef State) const;
void emitUninitializedReadBug(CheckerContext &C, ProgramStateRef State,
const Expr *E, const MemRegion *R,
StringRef Msg) const;
@@ -843,22 +838,6 @@ void CStringChecker::emitNotCStringBug(CheckerContext &C, ProgramStateRef State,
}
}
-void CStringChecker::emitAdditionOverflowBug(CheckerContext &C,
- ProgramStateRef State) const {
- if (ExplodedNode *N = C.generateErrorNode(State)) {
- // This isn't a great error message, but this should never occur in real
- // code anyway -- you'd have to create a buffer longer than a size_t can
- // represent, which is sort of a contradiction.
- const char *WarningMsg =
- "This expression will create a string whose length is too big to "
- "be represented as a size_t";
-
- auto Report = std::make_unique<PathSensitiveBugReport>(AdditionOverflow,
- WarningMsg, N);
- C.emitReport(std::move(Report));
- }
-}
-
ProgramStateRef CStringChecker::checkAdditionOverflow(CheckerContext &C,
ProgramStateRef state,
NonLoc left,
@@ -896,19 +875,22 @@ ProgramStateRef CStringChecker::checkAdditionOverflow(CheckerContext &C,
SVal willOverflow = svalBuilder.evalBinOpNN(state, BO_GT, left,
*maxMinusRightNL, cmpTy);
- ProgramStateRef stateOverflow, stateOkay;
- std::tie(stateOverflow, stateOkay) =
- state->assume(willOverflow.castAs<DefinedOrUnknownSVal>());
+ auto [StateOverflow, StateOkay] =
+ state->assume(willOverflow.castAs<DefinedOrUnknownSVal>());
- if (stateOverflow && !stateOkay) {
- // We have an overflow. Emit a bug report.
- emitAdditionOverflowBug(C, stateOverflow);
+ if (StateOverflow && !StateOkay) {
+ // On this path the analyzer is convinced that the addition of these two
+ // values would overflow `size_t` which must be caused by the inaccuracy
+ // of our modeling because this method is called in situations where the
+ // summands are size/length values which are much less than SIZE_MAX. To
+ // avoid false positives let's just sink this invalid path.
+ C.addSink(StateOverflow);
return nullptr;
}
// From now on, assume an overflow didn't occur.
- assert(stateOkay);
- state = stateOkay;
+ assert(StateOkay);
+ state = StateOkay;
}
return state;
>From 01b0fe3104d633d67d33a963142a54cd500e6b3c Mon Sep 17 00:00:00 2001
From: Aiden Grossman <aidengrossman at google.com>
Date: Wed, 6 Aug 2025 09:42:21 -0700
Subject: [PATCH 121/171] [CI][NFC] Explicitly add libcxx/libcxxabi/libunwind
to excludes (#152210)
This patch adds libcxx/libcxxabi/libunwind to the excludes list in
compute_projects.py for both Windows and macOS. Neither of these
platforms has ever built the runtimes, as the scripts do not support
doing so. Explicitly disable them so that compute_projects_test.py is
more consistent.
---
.ci/compute_projects.py | 6 ++++++
.ci/compute_projects_test.py | 25 ++++++++-----------------
2 files changed, 14 insertions(+), 17 deletions(-)
diff --git a/.ci/compute_projects.py b/.ci/compute_projects.py
index 2dc5629328457..5cb01ea705df5 100644
--- a/.ci/compute_projects.py
+++ b/.ci/compute_projects.py
@@ -100,6 +100,9 @@
"libc", # No Windows Support.
"lldb", # TODO(issues/132800): Needs environment setup.
"bolt", # No Windows Support.
+ "libcxx",
+ "libcxxabi",
+ "libunwind",
}
# These are projects that we should test if the project itself is changed but
@@ -118,6 +121,9 @@
"lldb",
"openmp",
"polly",
+ "libcxx",
+ "libcxxabi",
+ "libunwind",
}
PROJECT_CHECK_TARGETS = {
diff --git a/.ci/compute_projects_test.py b/.ci/compute_projects_test.py
index bd761198300a8..8485b3dc56673 100644
--- a/.ci/compute_projects_test.py
+++ b/.ci/compute_projects_test.py
@@ -45,16 +45,14 @@ def test_llvm_windows(self):
env_variables["project_check_targets"],
"check-clang check-clang-tools check-lld check-llvm check-mlir check-polly",
)
- self.assertEqual(
- env_variables["runtimes_to_build"], "libcxx;libcxxabi;libunwind"
- )
+ self.assertEqual(env_variables["runtimes_to_build"], "")
self.assertEqual(
env_variables["runtimes_check_targets"],
"",
)
self.assertEqual(
env_variables["runtimes_check_targets_needs_reconfig"],
- "check-cxx check-cxxabi check-unwind",
+ "",
)
def test_llvm_mac(self):
@@ -69,16 +67,14 @@ def test_llvm_mac(self):
env_variables["project_check_targets"],
"check-clang check-clang-tools check-lld check-llvm check-mlir",
)
- self.assertEqual(
- env_variables["runtimes_to_build"], "libcxx;libcxxabi;libunwind"
- )
+ self.assertEqual(env_variables["runtimes_to_build"], "")
self.assertEqual(
env_variables["runtimes_check_targets"],
"",
)
self.assertEqual(
env_variables["runtimes_check_targets_needs_reconfig"],
- "check-cxx check-cxxabi check-unwind",
+ "",
)
def test_clang(self):
@@ -119,16 +115,14 @@ def test_clang_windows(self):
self.assertEqual(
env_variables["project_check_targets"], "check-clang check-clang-tools"
)
- self.assertEqual(
- env_variables["runtimes_to_build"], "libcxx;libcxxabi;libunwind"
- )
+ self.assertEqual(env_variables["runtimes_to_build"], "")
self.assertEqual(
env_variables["runtimes_check_targets"],
"",
)
self.assertEqual(
env_variables["runtimes_check_targets_needs_reconfig"],
- "check-cxx check-cxxabi check-unwind",
+ "",
)
self.assertEqual(env_variables["enable_cir"], "OFF")
@@ -298,18 +292,15 @@ def test_windows_ci(self):
)
self.assertEqual(
env_variables["runtimes_to_build"],
- "libcxx;libcxxabi;libunwind",
+ "",
)
self.assertEqual(
env_variables["runtimes_check_targets"],
"",
)
- # TODO(boomanaiden154): We should not be emitting these on Windows.
- # It does not currently impact anything because we do not build the
- # runtimes on Windows though.
self.assertEqual(
env_variables["runtimes_check_targets_needs_reconfig"],
- "check-cxx check-cxxabi check-unwind",
+ "",
)
def test_lldb(self):
>From c548c47476ee3f4578db2ca4f82e097a728b5bff Mon Sep 17 00:00:00 2001
From: Oliver Hunt <oliver at apple.com>
Date: Wed, 6 Aug 2025 09:52:34 -0700
Subject: [PATCH 122/171] [clang] Fix crash in dynamic_cast final class
optimization (#152076)
This corrects the codegen for the final class optimization to
correctly handle the case where there is no path to perform the
cast, and also corrects the codegen to handle ptrauth-protected
vtable pointers.
As part of this fix we separate out the path computation, as
that makes it easier to reason about the failure code paths
and, more importantly, means we know the type of the `this`
object during the cast.
This allows us to use the GetVTablePtr interface, which
correctly performs the authentication operations required when
pointer authentication is enabled. This still leaves incorrect
authentication behavior in the multiple inheritance case, but
currently the optimization is disabled entirely if pointer
authentication is enabled.
Fixes #137518
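For illustration, a minimal reproducer distilled from the GH137518 test
added below: since `base` is a private base of `test`, there is no
public inheritance path and the cast can never succeed, so the
optimized codegen must fold it to a null result instead of crashing.

    class base {
      virtual void fn() = 0;
    };

    class test final : base { // private inheritance: no public path to 'base'
      void fn() override {}
    };

    // With the final-class optimization enabled, this now folds to
    // 'return nullptr' rather than crashing the compiler.
    test *cast(base *b) { return dynamic_cast<test *>(b); }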
---
clang/docs/ReleaseNotes.rst | 2 +
clang/lib/CodeGen/CGCXXABI.h | 20 +++--
clang/lib/CodeGen/CGExprCXX.cpp | 15 +++-
clang/lib/CodeGen/ItaniumCXXABI.cpp | 87 ++++++++++++-------
clang/lib/CodeGen/MicrosoftCXXABI.cpp | 6 ++
.../dynamic-cast-exact-disabled.cpp | 16 ++++
clang/test/CodeGenCXX/dynamic-cast-exact.cpp | 42 ++++++++-
7 files changed, 147 insertions(+), 41 deletions(-)
diff --git a/clang/docs/ReleaseNotes.rst b/clang/docs/ReleaseNotes.rst
index 2a95d1e2736a3..0e9fcaa5fac6a 100644
--- a/clang/docs/ReleaseNotes.rst
+++ b/clang/docs/ReleaseNotes.rst
@@ -182,6 +182,8 @@ Bug Fixes to C++ Support
- Fix a crash when deleting a pointer to an incomplete array (#GH150359).
- Fix an assertion failure when expression in assumption attribute
(``[[assume(expr)]]``) creates temporary objects.
+- Fix the dynamic_cast to final class optimization to correctly handle
+ casts that are guaranteed to fail (#GH137518).
Bug Fixes to AST Handling
^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/clang/lib/CodeGen/CGCXXABI.h b/clang/lib/CodeGen/CGCXXABI.h
index 96fe04661944d..2dd320dbda976 100644
--- a/clang/lib/CodeGen/CGCXXABI.h
+++ b/clang/lib/CodeGen/CGCXXABI.h
@@ -294,14 +294,22 @@ class CGCXXABI {
Address Value,
QualType SrcRecordTy) = 0;
+ struct ExactDynamicCastInfo {
+ bool RequiresCastToPrimaryBase;
+ CharUnits Offset;
+ };
+
+ virtual std::optional<ExactDynamicCastInfo>
+ getExactDynamicCastInfo(QualType SrcRecordTy, QualType DestTy,
+ QualType DestRecordTy) = 0;
+
/// Emit a dynamic_cast from SrcRecordTy to DestRecordTy. The cast fails if
/// the dynamic type of Value is not exactly DestRecordTy.
- virtual llvm::Value *emitExactDynamicCast(CodeGenFunction &CGF, Address Value,
- QualType SrcRecordTy,
- QualType DestTy,
- QualType DestRecordTy,
- llvm::BasicBlock *CastSuccess,
- llvm::BasicBlock *CastFail) = 0;
+ virtual llvm::Value *emitExactDynamicCast(
+ CodeGenFunction &CGF, Address Value, QualType SrcRecordTy,
+ QualType DestTy, QualType DestRecordTy,
+ const ExactDynamicCastInfo &CastInfo, llvm::BasicBlock *CastSuccess,
+ llvm::BasicBlock *CastFail) = 0;
virtual bool EmitBadCastCall(CodeGenFunction &CGF) = 0;
diff --git a/clang/lib/CodeGen/CGExprCXX.cpp b/clang/lib/CodeGen/CGExprCXX.cpp
index fef1baf25d1d2..49d5d8acbe331 100644
--- a/clang/lib/CodeGen/CGExprCXX.cpp
+++ b/clang/lib/CodeGen/CGExprCXX.cpp
@@ -2295,6 +2295,18 @@ llvm::Value *CodeGenFunction::EmitDynamicCast(Address ThisAddr,
CGM.getCXXABI().shouldEmitExactDynamicCast(DestRecordTy) &&
!getLangOpts().PointerAuthCalls;
+ std::optional<CGCXXABI::ExactDynamicCastInfo> ExactCastInfo;
+ if (IsExact) {
+ ExactCastInfo = CGM.getCXXABI().getExactDynamicCastInfo(SrcRecordTy, DestTy,
+ DestRecordTy);
+ if (!ExactCastInfo) {
+ llvm::Value *NullValue = EmitDynamicCastToNull(*this, DestTy);
+ if (!Builder.GetInsertBlock())
+ EmitBlock(createBasicBlock("dynamic_cast.unreachable"));
+ return NullValue;
+ }
+ }
+
// C++ [expr.dynamic.cast]p4:
// If the value of v is a null pointer value in the pointer case, the result
// is the null pointer value of type T.
@@ -2322,7 +2334,8 @@ llvm::Value *CodeGenFunction::EmitDynamicCast(Address ThisAddr,
// If the destination type is effectively final, this pointer points to the
// right type if and only if its vptr has the right value.
Value = CGM.getCXXABI().emitExactDynamicCast(
- *this, ThisAddr, SrcRecordTy, DestTy, DestRecordTy, CastEnd, CastNull);
+ *this, ThisAddr, SrcRecordTy, DestTy, DestRecordTy, *ExactCastInfo,
+ CastEnd, CastNull);
} else {
assert(DestRecordTy->isRecordType() &&
"destination type must be a record type!");
diff --git a/clang/lib/CodeGen/ItaniumCXXABI.cpp b/clang/lib/CodeGen/ItaniumCXXABI.cpp
index aae1481444067..5ffc1edb9986f 100644
--- a/clang/lib/CodeGen/ItaniumCXXABI.cpp
+++ b/clang/lib/CodeGen/ItaniumCXXABI.cpp
@@ -226,6 +226,10 @@ class ItaniumCXXABI : public CodeGen::CGCXXABI {
return hasUniqueVTablePointer(DestRecordTy);
}
+ std::optional<ExactDynamicCastInfo>
+ getExactDynamicCastInfo(QualType SrcRecordTy, QualType DestTy,
+ QualType DestRecordTy) override;
+
llvm::Value *emitDynamicCastCall(CodeGenFunction &CGF, Address Value,
QualType SrcRecordTy, QualType DestTy,
QualType DestRecordTy,
@@ -234,6 +238,7 @@ class ItaniumCXXABI : public CodeGen::CGCXXABI {
llvm::Value *emitExactDynamicCast(CodeGenFunction &CGF, Address ThisAddr,
QualType SrcRecordTy, QualType DestTy,
QualType DestRecordTy,
+ const ExactDynamicCastInfo &CastInfo,
llvm::BasicBlock *CastSuccess,
llvm::BasicBlock *CastFail) override;
@@ -1681,10 +1686,11 @@ llvm::Value *ItaniumCXXABI::emitDynamicCastCall(
return Value;
}
-llvm::Value *ItaniumCXXABI::emitExactDynamicCast(
- CodeGenFunction &CGF, Address ThisAddr, QualType SrcRecordTy,
- QualType DestTy, QualType DestRecordTy, llvm::BasicBlock *CastSuccess,
- llvm::BasicBlock *CastFail) {
+std::optional<CGCXXABI::ExactDynamicCastInfo>
+ItaniumCXXABI::getExactDynamicCastInfo(QualType SrcRecordTy, QualType DestTy,
+ QualType DestRecordTy) {
+ assert(shouldEmitExactDynamicCast(DestRecordTy));
+
ASTContext &Context = getContext();
// Find all the inheritance paths.
@@ -1722,41 +1728,56 @@ llvm::Value *ItaniumCXXABI::emitExactDynamicCast(
if (!Offset)
Offset = PathOffset;
else if (Offset != PathOffset) {
- // Base appears in at least two different places. Find the most-derived
- // object and see if it's a DestDecl. Note that the most-derived object
- // must be at least as aligned as this base class subobject, and must
- // have a vptr at offset 0.
- ThisAddr = Address(emitDynamicCastToVoid(CGF, ThisAddr, SrcRecordTy),
- CGF.VoidPtrTy, ThisAddr.getAlignment());
- SrcDecl = DestDecl;
- Offset = CharUnits::Zero();
- break;
+ // Base appears in at least two different places.
+ return ExactDynamicCastInfo{/*RequiresCastToPrimaryBase=*/true,
+ CharUnits::Zero()};
}
}
+ if (!Offset)
+ return std::nullopt;
+ return ExactDynamicCastInfo{/*RequiresCastToPrimaryBase=*/false, *Offset};
+}
- if (!Offset) {
- // If there are no public inheritance paths, the cast always fails.
- CGF.EmitBranch(CastFail);
- return llvm::PoisonValue::get(CGF.VoidPtrTy);
- }
+llvm::Value *ItaniumCXXABI::emitExactDynamicCast(
+ CodeGenFunction &CGF, Address ThisAddr, QualType SrcRecordTy,
+ QualType DestTy, QualType DestRecordTy,
+ const ExactDynamicCastInfo &ExactCastInfo, llvm::BasicBlock *CastSuccess,
+ llvm::BasicBlock *CastFail) {
+ const CXXRecordDecl *SrcDecl = SrcRecordTy->getAsCXXRecordDecl();
+ const CXXRecordDecl *DestDecl = DestRecordTy->getAsCXXRecordDecl();
+
+ llvm::Value *VTable = nullptr;
+ if (ExactCastInfo.RequiresCastToPrimaryBase) {
+ // Base appears in at least two different places. Find the most-derived
+ // object and see if it's a DestDecl. Note that the most-derived object
+ // must be at least as aligned as this base class subobject, and must
+ // have a vptr at offset 0.
+ llvm::Value *PrimaryBase =
+ emitDynamicCastToVoid(CGF, ThisAddr, SrcRecordTy);
+ ThisAddr = Address(PrimaryBase, CGF.VoidPtrTy, ThisAddr.getAlignment());
+ SrcDecl = DestDecl;
+ Address VTablePtrPtr = ThisAddr.withElementType(CGF.VoidPtrPtrTy);
+ VTable = CGF.Builder.CreateLoad(VTablePtrPtr, "vtable");
+ } else
+ VTable = CGF.GetVTablePtr(ThisAddr, CGF.UnqualPtrTy, SrcDecl);
// Compare the vptr against the expected vptr for the destination type at
- // this offset. Note that we do not know what type ThisAddr points to in
- // the case where the derived class multiply inherits from the base class
- // so we can't use GetVTablePtr, so we load the vptr directly instead.
- llvm::Instruction *VPtr = CGF.Builder.CreateLoad(
- ThisAddr.withElementType(CGF.VoidPtrPtrTy), "vtable");
- CGM.DecorateInstructionWithTBAA(
- VPtr, CGM.getTBAAVTablePtrAccessInfo(CGF.VoidPtrPtrTy));
- llvm::Value *Success = CGF.Builder.CreateICmpEQ(
- VPtr, getVTableAddressPoint(BaseSubobject(SrcDecl, *Offset), DestDecl));
- llvm::Value *Result = ThisAddr.emitRawPointer(CGF);
- if (!Offset->isZero())
- Result = CGF.Builder.CreateInBoundsGEP(
- CGF.CharTy, Result,
- {llvm::ConstantInt::get(CGF.PtrDiffTy, -Offset->getQuantity())});
+ // this offset.
+ llvm::Constant *ExpectedVTable = getVTableAddressPoint(
+ BaseSubobject(SrcDecl, ExactCastInfo.Offset), DestDecl);
+ llvm::Value *Success = CGF.Builder.CreateICmpEQ(VTable, ExpectedVTable);
+ llvm::Value *AdjustedThisPtr = ThisAddr.emitRawPointer(CGF);
+
+ if (!ExactCastInfo.Offset.isZero()) {
+ CharUnits::QuantityType Offset = ExactCastInfo.Offset.getQuantity();
+ llvm::Constant *OffsetConstant =
+ llvm::ConstantInt::get(CGF.PtrDiffTy, -Offset);
+ AdjustedThisPtr = CGF.Builder.CreateInBoundsGEP(CGF.CharTy, AdjustedThisPtr,
+ OffsetConstant);
+ }
+
CGF.Builder.CreateCondBr(Success, CastSuccess, CastFail);
- return Result;
+ return AdjustedThisPtr;
}
llvm::Value *ItaniumCXXABI::emitDynamicCastToVoid(CodeGenFunction &CGF,
diff --git a/clang/lib/CodeGen/MicrosoftCXXABI.cpp b/clang/lib/CodeGen/MicrosoftCXXABI.cpp
index 700ffa4beffc1..e8d2451a7dd5c 100644
--- a/clang/lib/CodeGen/MicrosoftCXXABI.cpp
+++ b/clang/lib/CodeGen/MicrosoftCXXABI.cpp
@@ -158,9 +158,15 @@ class MicrosoftCXXABI : public CGCXXABI {
// TODO: Add support for exact dynamic_casts.
return false;
}
+ std::optional<ExactDynamicCastInfo>
+ getExactDynamicCastInfo(QualType SrcRecordTy, QualType DestTy,
+ QualType DestRecordTy) override {
+ llvm_unreachable("unsupported");
+ }
llvm::Value *emitExactDynamicCast(CodeGenFunction &CGF, Address Value,
QualType SrcRecordTy, QualType DestTy,
QualType DestRecordTy,
+ const ExactDynamicCastInfo &CastInfo,
llvm::BasicBlock *CastSuccess,
llvm::BasicBlock *CastFail) override {
llvm_unreachable("unsupported");
diff --git a/clang/test/CodeGenCXX/dynamic-cast-exact-disabled.cpp b/clang/test/CodeGenCXX/dynamic-cast-exact-disabled.cpp
index 19c2a9bd0497e..3156e1bb21b1e 100644
--- a/clang/test/CodeGenCXX/dynamic-cast-exact-disabled.cpp
+++ b/clang/test/CodeGenCXX/dynamic-cast-exact-disabled.cpp
@@ -14,3 +14,19 @@ B *exact(A *a) {
// EXACT-NOT: call {{.*}} @__dynamic_cast
return dynamic_cast<B*>(a);
}
+
+struct C {
+ virtual ~C();
+};
+
+struct D final : private C {
+
+};
+
+// CHECK-LABEL: @_Z5exactP1C
+D *exact(C *a) {
+ // INEXACT: call {{.*}} @__dynamic_cast
+ // EXACT: entry:
+ // EXACT-NEXT: ret ptr null
+ return dynamic_cast<D*>(a);
+}
diff --git a/clang/test/CodeGenCXX/dynamic-cast-exact.cpp b/clang/test/CodeGenCXX/dynamic-cast-exact.cpp
index 86e1965b4ba68..588d80844a2fa 100644
--- a/clang/test/CodeGenCXX/dynamic-cast-exact.cpp
+++ b/clang/test/CodeGenCXX/dynamic-cast-exact.cpp
@@ -9,6 +9,7 @@ struct E : A { int e; };
struct F : virtual A { int f; };
struct G : virtual A { int g; };
struct H final : C, D, E, F, G { int h; };
+struct H1 final: C, private D { int h1; };
// CHECK-LABEL: @_Z7inexactP1A
C *inexact(A *a) {
@@ -77,10 +78,49 @@ H *exact_multi(A *a) {
return dynamic_cast<H*>(a);
}
+// CHECK-LABEL: @_Z19exact_invalid_multiP1D
+H1 *exact_invalid_multi(D* d) {
+ // CHECK: entry:
+ // CHECK-NEXT: %d.addr = alloca ptr
+ // CHECK-NEXT: store ptr %d, ptr %d.addr
+ // CHECK-NEXT: load ptr, ptr %d.addr
+ // CHECK-NEXT: ret ptr null
+ return dynamic_cast<H1*>(d);
+}
+
+// CHECK-LABEL: @_Z19exact_invalid_multiR1D
+H1 &exact_invalid_multi(D& d) {
+ // CHECK: entry:
+ // CHECK-NEXT: %d.addr = alloca ptr
+ // CHECK-NEXT: store ptr %d, ptr %d.addr
+ // CHECK-NEXT: load ptr, ptr %d.addr
+ // CHECK-NEXT: call void @__cxa_bad_cast()
+ // CHECK-NEXT: unreachable
+ // CHECK: dynamic_cast.unreachable:
+ // CHECK-NEXT: ret ptr poison
+ return dynamic_cast<H1&>(d);
+}
+
+namespace GH137518 {
+ class base { virtual void fn() = 0; };
+ class test final : base { virtual void fn() { } };
+ test* new_test() { return new test(); }
+
+ // CHECK-LABEL: @_ZN8GH1375184castEPNS_4baseE(
+ test* cast(base* b) {
+ // CHECK: entry:
+ // CHECK-NEXT: %b.addr = alloca ptr
+ // CHECK-NEXT: store ptr %b, ptr %b.addr
+ // CHECK-NEXT: load ptr, ptr %b.addr
+ // CHECK-NEXT: ret ptr null
+ return dynamic_cast<test*>(b);
+ }
+}
+
namespace GH64088 {
// Ensure we mark the B vtable as used here, because we're going to emit a
// reference to it.
- // CHECK: define {{.*}} @_ZN7GH640881BD0
+ // CHECK: define {{.*}} void @_ZN7GH640881BD0Ev(
struct A { virtual ~A(); };
struct B final : A { virtual ~B() = default; };
B *cast(A *p) { return dynamic_cast<B*>(p); }
>From f538f1ad972bd472104873869fd986469a53c57e Mon Sep 17 00:00:00 2001
From: Michael Jones <michaelrj at google.com>
Date: Wed, 6 Aug 2025 09:53:23 -0700
Subject: [PATCH 123/171] [libc] warn when depending on public entrypoints
(#146163)
Add a cmake warning when an entrypoint or object library depends on a
public entrypoint.
---
libc/cmake/modules/LLVMLibCObjectRules.cmake | 58 ++++++++++++++++++--
1 file changed, 54 insertions(+), 4 deletions(-)
diff --git a/libc/cmake/modules/LLVMLibCObjectRules.cmake b/libc/cmake/modules/LLVMLibCObjectRules.cmake
index 805da91284ce8..030157bf64585 100644
--- a/libc/cmake/modules/LLVMLibCObjectRules.cmake
+++ b/libc/cmake/modules/LLVMLibCObjectRules.cmake
@@ -1,4 +1,33 @@
set(OBJECT_LIBRARY_TARGET_TYPE "OBJECT_LIBRARY")
+set(ENTRYPOINT_OBJ_TARGET_TYPE "ENTRYPOINT_OBJ")
+set(ENTRYPOINT_EXT_TARGET_TYPE "ENTRYPOINT_EXT")
+
+# Rule to check if a list of dependencies contains any entrypoint objects. Returns a list in entrypoint_deps.
+function(check_entrypoint_deps entrypoint_deps)
+ set(PUBLIC_DEPS "")
+ set(fq_deps_list "")
+ list(APPEND fq_deps_list ${ARGN})
+
+ #don't warn for deps that are allowed, such as errno
+ set(ALLOWED_DEPS
+ "libc.src.errno.errno"
+ "libc.src.setjmp.longjmp"
+ )
+ list(REMOVE_ITEM fq_deps_list ${ALLOWED_DEPS})
+
+ foreach(dep IN LISTS fq_deps_list)
+ if(NOT TARGET ${dep})
+ continue()
+ endif()
+
+ get_target_property(target_type ${dep} "TARGET_TYPE")
+ if(${target_type} STREQUAL ${ENTRYPOINT_OBJ_TARGET_TYPE})
+ list(APPEND PUBLIC_DEPS ${dep})
+ endif()
+ endforeach()
+ set(${entrypoint_deps} ${PUBLIC_DEPS} PARENT_SCOPE)
+endfunction()
+
# Rule which is essentially a wrapper over add_library to compile a set of
# sources to object files.
@@ -65,6 +94,18 @@ function(create_object_library fq_target_name)
target_include_directories(${fq_target_name} PRIVATE ${LIBC_SOURCE_DIR})
target_compile_options(${fq_target_name} PRIVATE ${compile_options})
+ #loop through the deps, check if any have the TARGET_TYPE of ENTRYPOINT_OBJ_TARGET_TYPE, and print a warning if they do.
+ if(LIBC_CMAKE_VERBOSE_LOGGING)
+ set(entrypoint_deps "")
+ if(NOT "${fq_deps_list}" STREQUAL "")
+ check_entrypoint_deps(entrypoint_deps ${fq_deps_list})
+ endif()
+ if(NOT "${entrypoint_deps}" STREQUAL "")
+ message(WARNING "Object ${fq_target_name} depends on public entrypoint(s) ${entrypoint_deps}.
+ Depending on public entrypoints is not allowed in internal code.")
+ endif()
+ endif()
+
if(SHOW_INTERMEDIATE_OBJECTS)
message(STATUS "Adding object library ${fq_target_name}")
if(${SHOW_INTERMEDIATE_OBJECTS} STREQUAL "DEPS")
@@ -110,7 +151,6 @@ function(add_object_library target_name)
${ARGN})
endfunction(add_object_library)
-set(ENTRYPOINT_OBJ_TARGET_TYPE "ENTRYPOINT_OBJ")
# A rule for entrypoint object targets.
# Usage:
@@ -179,7 +219,6 @@ function(create_entrypoint_object fq_target_name)
get_target_property(obj_type ${fq_dep_name} "TARGET_TYPE")
if((NOT obj_type) OR (NOT ${obj_type} STREQUAL ${ENTRYPOINT_OBJ_TARGET_TYPE}))
-
message(FATAL_ERROR "The aliasee of an entrypoint alias should be an entrypoint.")
endif()
@@ -230,6 +269,19 @@ function(create_entrypoint_object fq_target_name)
_get_common_compile_options(common_compile_options "${ADD_ENTRYPOINT_OBJ_FLAGS}")
list(APPEND common_compile_options ${ADD_ENTRYPOINT_OBJ_COMPILE_OPTIONS})
get_fq_deps_list(fq_deps_list ${ADD_ENTRYPOINT_OBJ_DEPENDS})
+
+ #loop through the deps, check if any have the TARGET_TYPE of entrypoint_target_type, and print a warning if they do.
+ if(LIBC_CMAKE_VERBOSE_LOGGING)
+ set(entrypoint_deps "")
+ if(NOT "${fq_deps_list}" STREQUAL "")
+ check_entrypoint_deps(entrypoint_deps ${fq_deps_list})
+ endif()
+ if(NOT "${entrypoint_deps}" STREQUAL "")
+ message(WARNING "Entrypoint ${fq_target_name} depends on public entrypoint(s) ${entrypoint_deps}.
+ Depending on public entrypoints is not allowed in internal code.")
+ endif()
+ endif()
+
set(full_deps_list ${fq_deps_list} libc.src.__support.common)
if(SHOW_INTERMEDIATE_OBJECTS)
@@ -390,8 +442,6 @@ function(add_entrypoint_object target_name)
)
endfunction(add_entrypoint_object)
-set(ENTRYPOINT_EXT_TARGET_TYPE "ENTRYPOINT_EXT")
-
# A rule for external entrypoint targets.
# Usage:
# add_entrypoint_external(
>From 25bf86fedeb0dd46f4d3b6a434ef02e4e37c89f5 Mon Sep 17 00:00:00 2001
From: Steven Perron <stevenperron at google.com>
Date: Wed, 6 Aug 2025 13:10:55 -0400
Subject: [PATCH 124/171] [SPIRV] Add pass to replace
gethandlefromimplicitbinding (#146756)
The HLSL frontend generates calls to the intrinsic
@llvm.spv.resource.handlefromimplicitbinding to be able to access a
resource where the set and binding were not explicitly given in the
source code. Determining the correct set and binding cannot be done
during Clang's codegen or earlier because, in DXIL, resources that are
not accessed must first be removed before binding locations are
assigned to the resources without an explicit binding.
We will follow their lead.
This is a change from DXC, where implicit bindings for SPIR-V are
assigned before optimizations.
See https://github.com/llvm/wg-hlsl/pull/309
---
llvm/lib/Target/SPIRV/CMakeLists.txt | 1 +
llvm/lib/Target/SPIRV/SPIRV.h | 2 +
.../SPIRV/SPIRVLegalizeImplicitBinding.cpp | 159 ++++++++++++++++++
llvm/lib/Target/SPIRV/SPIRVTargetMachine.cpp | 1 +
.../SPIRV/hlsl-resources/ImplicitBinding.ll | 75 +++++++++
5 files changed, 238 insertions(+)
create mode 100644 llvm/lib/Target/SPIRV/SPIRVLegalizeImplicitBinding.cpp
create mode 100644 llvm/test/CodeGen/SPIRV/hlsl-resources/ImplicitBinding.ll
diff --git a/llvm/lib/Target/SPIRV/CMakeLists.txt b/llvm/lib/Target/SPIRV/CMakeLists.txt
index ba09451104479..6660de995e954 100644
--- a/llvm/lib/Target/SPIRV/CMakeLists.txt
+++ b/llvm/lib/Target/SPIRV/CMakeLists.txt
@@ -26,6 +26,7 @@ add_llvm_target(SPIRVCodeGen
SPIRVGlobalRegistry.cpp
SPIRVInstrInfo.cpp
SPIRVInstructionSelector.cpp
+ SPIRVLegalizeImplicitBinding.cpp
SPIRVStripConvergentIntrinsics.cpp
SPIRVLegalizePointerCast.cpp
SPIRVMergeRegionExitTargets.cpp
diff --git a/llvm/lib/Target/SPIRV/SPIRV.h b/llvm/lib/Target/SPIRV/SPIRV.h
index 1688fa32ce436..1934e98ca512f 100644
--- a/llvm/lib/Target/SPIRV/SPIRV.h
+++ b/llvm/lib/Target/SPIRV/SPIRV.h
@@ -23,6 +23,7 @@ ModulePass *createSPIRVPrepareFunctionsPass(const SPIRVTargetMachine &TM);
FunctionPass *createSPIRVStructurizerPass();
FunctionPass *createSPIRVMergeRegionExitTargetsPass();
FunctionPass *createSPIRVStripConvergenceIntrinsicsPass();
+ModulePass *createSPIRVLegalizeImplicitBindingPass();
FunctionPass *createSPIRVLegalizePointerCastPass(SPIRVTargetMachine *TM);
FunctionPass *createSPIRVRegularizerPass();
FunctionPass *createSPIRVPreLegalizerCombiner();
@@ -49,6 +50,7 @@ void initializeSPIRVRegularizerPass(PassRegistry &);
void initializeSPIRVMergeRegionExitTargetsPass(PassRegistry &);
void initializeSPIRVPrepareFunctionsPass(PassRegistry &);
void initializeSPIRVStripConvergentIntrinsicsPass(PassRegistry &);
+void initializeSPIRVLegalizeImplicitBindingPass(PassRegistry &);
} // namespace llvm
#endif // LLVM_LIB_TARGET_SPIRV_SPIRV_H
diff --git a/llvm/lib/Target/SPIRV/SPIRVLegalizeImplicitBinding.cpp b/llvm/lib/Target/SPIRV/SPIRVLegalizeImplicitBinding.cpp
new file mode 100644
index 0000000000000..0398e52895790
--- /dev/null
+++ b/llvm/lib/Target/SPIRV/SPIRVLegalizeImplicitBinding.cpp
@@ -0,0 +1,159 @@
+//===- SPIRVLegalizeImplicitBinding.cpp - Legalize implicit bindings ----*- C++
+//-*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// This pass legalizes the @llvm.spv.resource.handlefromimplicitbinding
+// intrinsic by replacing it with a call to
+// @llvm.spv.resource.handlefrombinding.
+//
+//===----------------------------------------------------------------------===//
+
+#include "SPIRV.h"
+#include "llvm/ADT/BitVector.h"
+#include "llvm/ADT/DenseMap.h"
+#include "llvm/ADT/SmallVector.h"
+#include "llvm/IR/IRBuilder.h"
+#include "llvm/IR/InstVisitor.h"
+#include "llvm/IR/Intrinsics.h"
+#include "llvm/IR/IntrinsicsSPIRV.h"
+#include "llvm/IR/Module.h"
+#include "llvm/Pass.h"
+#include <algorithm>
+#include <vector>
+
+using namespace llvm;
+
+namespace {
+class SPIRVLegalizeImplicitBinding : public ModulePass {
+public:
+ static char ID;
+ SPIRVLegalizeImplicitBinding() : ModulePass(ID) {}
+
+ bool runOnModule(Module &M) override;
+
+private:
+ void collectBindingInfo(Module &M);
+ uint32_t getAndReserveFirstUnusedBinding(uint32_t DescSet);
+ void replaceImplicitBindingCalls(Module &M);
+
+ // A map from descriptor set to a bit vector of used binding numbers.
+ std::vector<BitVector> UsedBindings;
+ // A list of all implicit binding calls, to be sorted by order ID.
+ SmallVector<CallInst *, 16> ImplicitBindingCalls;
+};
+
+struct BindingInfoCollector : public InstVisitor<BindingInfoCollector> {
+ std::vector<BitVector> &UsedBindings;
+ SmallVector<CallInst *, 16> &ImplicitBindingCalls;
+
+ BindingInfoCollector(std::vector<BitVector> &UsedBindings,
+ SmallVector<CallInst *, 16> &ImplicitBindingCalls)
+ : UsedBindings(UsedBindings), ImplicitBindingCalls(ImplicitBindingCalls) {
+ }
+
+ void visitCallInst(CallInst &CI) {
+ if (CI.getIntrinsicID() == Intrinsic::spv_resource_handlefrombinding) {
+ const uint32_t DescSet =
+ cast<ConstantInt>(CI.getArgOperand(0))->getZExtValue();
+ const uint32_t Binding =
+ cast<ConstantInt>(CI.getArgOperand(1))->getZExtValue();
+
+ if (UsedBindings.size() <= DescSet) {
+ UsedBindings.resize(DescSet + 1);
+ UsedBindings[DescSet].resize(64);
+ }
+ if (UsedBindings[DescSet].size() <= Binding) {
+ UsedBindings[DescSet].resize(2 * Binding + 1);
+ }
+ UsedBindings[DescSet].set(Binding);
+ } else if (CI.getIntrinsicID() ==
+ Intrinsic::spv_resource_handlefromimplicitbinding) {
+ ImplicitBindingCalls.push_back(&CI);
+ }
+ }
+};
+
+void SPIRVLegalizeImplicitBinding::collectBindingInfo(Module &M) {
+ BindingInfoCollector InfoCollector(UsedBindings, ImplicitBindingCalls);
+ InfoCollector.visit(M);
+
+ // Sort the collected calls by their order ID.
+ std::sort(
+ ImplicitBindingCalls.begin(), ImplicitBindingCalls.end(),
+ [](const CallInst *A, const CallInst *B) {
+ const uint32_t OrderIdArgIdx = 0;
+ const uint32_t OrderA =
+ cast<ConstantInt>(A->getArgOperand(OrderIdArgIdx))->getZExtValue();
+ const uint32_t OrderB =
+ cast<ConstantInt>(B->getArgOperand(OrderIdArgIdx))->getZExtValue();
+ return OrderA < OrderB;
+ });
+}
+
+uint32_t SPIRVLegalizeImplicitBinding::getAndReserveFirstUnusedBinding(
+ uint32_t DescSet) {
+ if (UsedBindings.size() <= DescSet) {
+ UsedBindings.resize(DescSet + 1);
+ UsedBindings[DescSet].resize(64);
+ }
+
+ int NewBinding = UsedBindings[DescSet].find_first_unset();
+ if (NewBinding == -1) {
+ NewBinding = UsedBindings[DescSet].size();
+ UsedBindings[DescSet].resize(2 * NewBinding + 1);
+ }
+
+ UsedBindings[DescSet].set(NewBinding);
+ return NewBinding;
+}
+
+void SPIRVLegalizeImplicitBinding::replaceImplicitBindingCalls(Module &M) {
+ for (CallInst *OldCI : ImplicitBindingCalls) {
+ IRBuilder<> Builder(OldCI);
+ const uint32_t DescSet =
+ cast<ConstantInt>(OldCI->getArgOperand(1))->getZExtValue();
+ const uint32_t NewBinding = getAndReserveFirstUnusedBinding(DescSet);
+
+ SmallVector<Value *, 8> Args;
+ Args.push_back(Builder.getInt32(DescSet));
+ Args.push_back(Builder.getInt32(NewBinding));
+
+ // Copy the remaining arguments from the old call.
+ for (uint32_t i = 2; i < OldCI->arg_size(); ++i) {
+ Args.push_back(OldCI->getArgOperand(i));
+ }
+
+ Function *NewFunc = Intrinsic::getOrInsertDeclaration(
+ &M, Intrinsic::spv_resource_handlefrombinding, OldCI->getType());
+ CallInst *NewCI = Builder.CreateCall(NewFunc, Args);
+ NewCI->setCallingConv(OldCI->getCallingConv());
+
+ OldCI->replaceAllUsesWith(NewCI);
+ OldCI->eraseFromParent();
+ }
+}
+
+bool SPIRVLegalizeImplicitBinding::runOnModule(Module &M) {
+ collectBindingInfo(M);
+ if (ImplicitBindingCalls.empty()) {
+ return false;
+ }
+
+ replaceImplicitBindingCalls(M);
+ return true;
+}
+} // namespace
+
+char SPIRVLegalizeImplicitBinding::ID = 0;
+
+INITIALIZE_PASS(SPIRVLegalizeImplicitBinding, "legalize-spirv-implicit-binding",
+ "Legalize SPIR-V implicit bindings", false, false)
+
+ModulePass *llvm::createSPIRVLegalizeImplicitBindingPass() {
+ return new SPIRVLegalizeImplicitBinding();
+}
\ No newline at end of file
diff --git a/llvm/lib/Target/SPIRV/SPIRVTargetMachine.cpp b/llvm/lib/Target/SPIRV/SPIRVTargetMachine.cpp
index d7cf211ba84dc..e0bfb77f4b530 100644
--- a/llvm/lib/Target/SPIRV/SPIRVTargetMachine.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVTargetMachine.cpp
@@ -226,6 +226,7 @@ void SPIRVPassConfig::addIRPasses() {
}
void SPIRVPassConfig::addISelPrepare() {
+ addPass(createSPIRVLegalizeImplicitBindingPass());
addPass(createSPIRVEmitIntrinsicsPass(&getTM<SPIRVTargetMachine>()));
if (TM.getSubtargetImpl()->isLogicalSPIRV())
addPass(createSPIRVLegalizePointerCastPass(&getTM<SPIRVTargetMachine>()));
diff --git a/llvm/test/CodeGen/SPIRV/hlsl-resources/ImplicitBinding.ll b/llvm/test/CodeGen/SPIRV/hlsl-resources/ImplicitBinding.ll
new file mode 100644
index 0000000000000..00e9185822ad5
--- /dev/null
+++ b/llvm/test/CodeGen/SPIRV/hlsl-resources/ImplicitBinding.ll
@@ -0,0 +1,75 @@
+; RUN: llc -O0 -verify-machineinstrs -mtriple=spirv1.6-vulkan1.3-library %s -o - | FileCheck %s
+; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv1.6-vulkan1.3-library %s -o - -filetype=obj | spirv-val --target-env vulkan1.3 %}
+
+ at .str = private unnamed_addr constant [2 x i8] c"b\00", align 1
+ at .str.2 = private unnamed_addr constant [2 x i8] c"c\00", align 1
+ at .str.4 = private unnamed_addr constant [2 x i8] c"d\00", align 1
+ at .str.6 = private unnamed_addr constant [2 x i8] c"e\00", align 1
+ at .str.8 = private unnamed_addr constant [2 x i8] c"f\00", align 1
+ at .str.10 = private unnamed_addr constant [2 x i8] c"g\00", align 1
+ at .str.12 = private unnamed_addr constant [2 x i8] c"h\00", align 1
+ at .str.14 = private unnamed_addr constant [2 x i8] c"i\00", align 1
+
+; CHECK-DAG: OpName [[b:%[0-9]+]] "b"
+; CHECK-DAG: OpName [[c:%[0-9]+]] "c"
+; CHECK-DAG: OpName [[d:%[0-9]+]] "d"
+; CHECK-DAG: OpName [[e:%[0-9]+]] "e"
+; CHECK-DAG: OpName [[f:%[0-9]+]] "f"
+; CHECK-DAG: OpName [[g:%[0-9]+]] "g"
+; CHECK-DAG: OpName [[h:%[0-9]+]] "h"
+; CHECK-DAG: OpName [[i:%[0-9]+]] "i"
+; CHECK-DAG: OpDecorate [[b]] DescriptorSet 0
+; CHECK-DAG: OpDecorate [[b]] Binding 1
+; CHECK-DAG: OpDecorate [[c]] DescriptorSet 0
+; CHECK-DAG: OpDecorate [[c]] Binding 0
+; CHECK-DAG: OpDecorate [[d]] DescriptorSet 0
+; CHECK-DAG: OpDecorate [[d]] Binding 3
+; CHECK-DAG: OpDecorate [[e]] DescriptorSet 0
+; CHECK-DAG: OpDecorate [[e]] Binding 2
+; CHECK-DAG: OpDecorate [[f]] DescriptorSet 10
+; CHECK-DAG: OpDecorate [[f]] Binding 1
+; CHECK-DAG: OpDecorate [[g]] DescriptorSet 10
+; CHECK-DAG: OpDecorate [[g]] Binding 0
+; CHECK-DAG: OpDecorate [[h]] DescriptorSet 10
+; CHECK-DAG: OpDecorate [[h]] Binding 3
+; CHECK-DAG: OpDecorate [[i]] DescriptorSet 10
+; CHECK-DAG: OpDecorate [[i]] Binding 2
+
+
+define void @main() local_unnamed_addr #0 {
+entry:
+ %0 = tail call target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) @llvm.spv.resource.handlefromimplicitbinding.tspirv.SignedImage_i32_5_2_0_0_2_0t(i32 0, i32 0, i32 1, i32 0, i1 false, ptr nonnull @.str)
+ %1 = tail call target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) @llvm.spv.resource.handlefrombinding.tspirv.SignedImage_i32_5_2_0_0_2_0t(i32 0, i32 0, i32 1, i32 0, i1 false, ptr nonnull @.str.2)
+ %2 = tail call target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) @llvm.spv.resource.handlefromimplicitbinding.tspirv.SignedImage_i32_5_2_0_0_2_0t(i32 1, i32 0, i32 1, i32 0, i1 false, ptr nonnull @.str.4)
+ %3 = tail call target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) @llvm.spv.resource.handlefrombinding.tspirv.SignedImage_i32_5_2_0_0_2_0t(i32 0, i32 2, i32 1, i32 0, i1 false, ptr nonnull @.str.6)
+ %4 = tail call target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) @llvm.spv.resource.handlefrombinding.tspirv.SignedImage_i32_5_2_0_0_2_0t(i32 10, i32 1, i32 1, i32 0, i1 false, ptr nonnull @.str.8)
+ %5 = tail call target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) @llvm.spv.resource.handlefromimplicitbinding.tspirv.SignedImage_i32_5_2_0_0_2_0t(i32 2, i32 10, i32 1, i32 0, i1 false, ptr nonnull @.str.10)
+ %6 = tail call target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) @llvm.spv.resource.handlefromimplicitbinding.tspirv.SignedImage_i32_5_2_0_0_2_0t(i32 3, i32 10, i32 1, i32 0, i1 false, ptr nonnull @.str.12)
+ %7 = tail call target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) @llvm.spv.resource.handlefrombinding.tspirv.SignedImage_i32_5_2_0_0_2_0t(i32 10, i32 2, i32 1, i32 0, i1 false, ptr nonnull @.str.14)
+ %8 = tail call noundef align 4 dereferenceable(4) ptr addrspace(11) @llvm.spv.resource.getpointer.p11.tspirv.SignedImage_i32_5_2_0_0_2_0t(target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) %1, i32 0)
+ %9 = load i32, ptr addrspace(11) %8, align 4
+ %10 = tail call noundef align 4 dereferenceable(4) ptr addrspace(11) @llvm.spv.resource.getpointer.p11.tspirv.SignedImage_i32_5_2_0_0_2_0t(target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) %2, i32 0)
+ %11 = load i32, ptr addrspace(11) %10, align 4
+ %add.i = add nsw i32 %11, %9
+ %12 = tail call noundef align 4 dereferenceable(4) ptr addrspace(11) @llvm.spv.resource.getpointer.p11.tspirv.SignedImage_i32_5_2_0_0_2_0t(target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) %3, i32 0)
+ %13 = load i32, ptr addrspace(11) %12, align 4
+ %add4.i = add nsw i32 %add.i, %13
+ %14 = tail call noundef align 4 dereferenceable(4) ptr addrspace(11) @llvm.spv.resource.getpointer.p11.tspirv.SignedImage_i32_5_2_0_0_2_0t(target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) %4, i32 0)
+ %15 = load i32, ptr addrspace(11) %14, align 4
+ %add6.i = add nsw i32 %add4.i, %15
+ %16 = tail call noundef align 4 dereferenceable(4) ptr addrspace(11) @llvm.spv.resource.getpointer.p11.tspirv.SignedImage_i32_5_2_0_0_2_0t(target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) %5, i32 0)
+ %17 = load i32, ptr addrspace(11) %16, align 4
+ %add8.i = add nsw i32 %add6.i, %17
+ %18 = tail call noundef align 4 dereferenceable(4) ptr addrspace(11) @llvm.spv.resource.getpointer.p11.tspirv.SignedImage_i32_5_2_0_0_2_0t(target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) %6, i32 0)
+ %19 = load i32, ptr addrspace(11) %18, align 4
+ %add10.i = add nsw i32 %add8.i, %19
+ %20 = tail call noundef align 4 dereferenceable(4) ptr addrspace(11) @llvm.spv.resource.getpointer.p11.tspirv.SignedImage_i32_5_2_0_0_2_0t(target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) %7, i32 0)
+ %21 = load i32, ptr addrspace(11) %20, align 4
+ %add12.i = add nsw i32 %add10.i, %21
+ %22 = tail call noundef align 4 dereferenceable(4) ptr addrspace(11) @llvm.spv.resource.getpointer.p11.tspirv.SignedImage_i32_5_2_0_0_2_0t(target("spirv.SignedImage", i32, 5, 2, 0, 0, 2, 0) %0, i32 0)
+ store i32 %add12.i, ptr addrspace(11) %22, align 4
+ ret void
+}
+
+
+attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" }
\ No newline at end of file
>From b291d02a93bcd24c34cdc0febc327270dc9ceb0c Mon Sep 17 00:00:00 2001
From: erichkeane <ekeane at nvidia.com>
Date: Wed, 6 Aug 2025 10:17:31 -0700
Subject: [PATCH 125/171] [OpenACC][NFCI] Add extra data to firstprivate recipe
AST node
During implementation I found that I needed some additional data in the
AST node for codegen, so this patch adds a second declaration
reference.
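For context, a hedged sketch of the semantics that motivate the extra
declaration (example mine, not from the patch): with 'firstprivate',
each private copy of the variable is copy-initialized from the
original, so codegen needs both the private recipe and a temporary to
copy from (hence RecipeDecl plus InitFromTemporary).

    void f(int n) {
      int x = 42;
    #pragma acc parallel firstprivate(x)
      {
        // 'x' here is a fresh private copy, copy-initialized from the
        // original 'x' outside the construct.
        x += n;
      }
    }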
---
clang/include/clang/AST/OpenACCClause.h | 32 +++++++++++++++++--------
clang/include/clang/Sema/SemaOpenACC.h | 6 ++++-
clang/lib/AST/OpenACCClause.cpp | 7 +++---
clang/lib/AST/StmtProfile.cpp | 6 +++--
clang/lib/Sema/SemaOpenACC.cpp | 9 +++----
clang/lib/Sema/SemaOpenACCClause.cpp | 4 ++--
clang/lib/Sema/TreeTransform.h | 11 +++++----
clang/lib/Serialization/ASTReader.cpp | 9 ++++---
clang/lib/Serialization/ASTWriter.cpp | 6 +++--
clang/tools/libclang/CIndex.cpp | 6 +++--
10 files changed, 63 insertions(+), 33 deletions(-)
diff --git a/clang/include/clang/AST/OpenACCClause.h b/clang/include/clang/AST/OpenACCClause.h
index 8c848a10d40ad..b52f7167736fe 100644
--- a/clang/include/clang/AST/OpenACCClause.h
+++ b/clang/include/clang/AST/OpenACCClause.h
@@ -876,21 +876,32 @@ class OpenACCPrivateClause final
}
};
+// A 'pair' to stand in for the recipe. RecipeDecl is the main declaration, and
+// InitFromTemporary is the 'temp' declaration we put in to be 'copied from'.
+struct OpenACCFirstPrivateRecipe {
+ VarDecl *RecipeDecl, *InitFromTemporary;
+ OpenACCFirstPrivateRecipe(VarDecl *R, VarDecl *T)
+ : RecipeDecl(R), InitFromTemporary(T) {}
+ OpenACCFirstPrivateRecipe(std::pair<VarDecl *, VarDecl *> p)
+ : RecipeDecl(p.first), InitFromTemporary(p.second) {}
+};
+
class OpenACCFirstPrivateClause final
: public OpenACCClauseWithVarList,
private llvm::TrailingObjects<OpenACCFirstPrivateClause, Expr *,
- VarDecl *> {
+ OpenACCFirstPrivateRecipe> {
friend TrailingObjects;
OpenACCFirstPrivateClause(SourceLocation BeginLoc, SourceLocation LParenLoc,
ArrayRef<Expr *> VarList,
- ArrayRef<VarDecl *> InitRecipes,
+ ArrayRef<OpenACCFirstPrivateRecipe> InitRecipes,
SourceLocation EndLoc)
: OpenACCClauseWithVarList(OpenACCClauseKind::FirstPrivate, BeginLoc,
LParenLoc, EndLoc) {
assert(VarList.size() == InitRecipes.size());
setExprs(getTrailingObjects<Expr *>(VarList.size()), VarList);
- llvm::uninitialized_copy(InitRecipes, getTrailingObjects<VarDecl *>());
+ llvm::uninitialized_copy(InitRecipes,
+ getTrailingObjects<OpenACCFirstPrivateRecipe>());
}
public:
@@ -900,19 +911,20 @@ class OpenACCFirstPrivateClause final
// Gets a list of 'made up' `VarDecl` objects that can be used by codegen to
// ensure that we properly initialize each of these variables.
- ArrayRef<VarDecl *> getInitRecipes() {
- return ArrayRef<VarDecl *>{getTrailingObjects<VarDecl *>(),
- getExprs().size()};
+ ArrayRef<OpenACCFirstPrivateRecipe> getInitRecipes() {
+ return ArrayRef<OpenACCFirstPrivateRecipe>{
+ getTrailingObjects<OpenACCFirstPrivateRecipe>(), getExprs().size()};
}
- ArrayRef<VarDecl *> getInitRecipes() const {
- return ArrayRef<VarDecl *>{getTrailingObjects<VarDecl *>(),
- getExprs().size()};
+ ArrayRef<OpenACCFirstPrivateRecipe> getInitRecipes() const {
+ return ArrayRef<OpenACCFirstPrivateRecipe>{
+ getTrailingObjects<OpenACCFirstPrivateRecipe>(), getExprs().size()};
}
static OpenACCFirstPrivateClause *
Create(const ASTContext &C, SourceLocation BeginLoc, SourceLocation LParenLoc,
- ArrayRef<Expr *> VarList, ArrayRef<VarDecl *> InitRecipes,
+ ArrayRef<Expr *> VarList,
+ ArrayRef<OpenACCFirstPrivateRecipe> InitRecipes,
SourceLocation EndLoc);
size_t numTrailingObjects(OverloadToken<Expr *>) const {
diff --git a/clang/include/clang/Sema/SemaOpenACC.h b/clang/include/clang/Sema/SemaOpenACC.h
index 964749c2fd627..d078de5244393 100644
--- a/clang/include/clang/Sema/SemaOpenACC.h
+++ b/clang/include/clang/Sema/SemaOpenACC.h
@@ -240,7 +240,11 @@ class SemaOpenACC : public SemaBase {
// Creates a VarDecl with a proper default init for the purposes of a
// `private`/'firstprivate'/'reduction' clause, so it can be used to generate
// a recipe later.
- VarDecl *CreateInitRecipe(OpenACCClauseKind CK, const Expr *VarExpr);
+ // The first entry is the recipe itself, the second is any required
+ // 'temporary' created for the init (in the case of a copy), such as with
+ // firstprivate.
+ std::pair<VarDecl *, VarDecl *> CreateInitRecipe(OpenACCClauseKind CK,
+ const Expr *VarExpr);
public:
ComputeConstructInfo &getActiveComputeConstructInfo() {
diff --git a/clang/lib/AST/OpenACCClause.cpp b/clang/lib/AST/OpenACCClause.cpp
index f7a98bd92190b..fe20004de6bea 100644
--- a/clang/lib/AST/OpenACCClause.cpp
+++ b/clang/lib/AST/OpenACCClause.cpp
@@ -329,10 +329,11 @@ OpenACCPrivateClause::Create(const ASTContext &C, SourceLocation BeginLoc,
OpenACCFirstPrivateClause *OpenACCFirstPrivateClause::Create(
const ASTContext &C, SourceLocation BeginLoc, SourceLocation LParenLoc,
- ArrayRef<Expr *> VarList, ArrayRef<VarDecl *> InitRecipes,
+ ArrayRef<Expr *> VarList, ArrayRef<OpenACCFirstPrivateRecipe> InitRecipes,
SourceLocation EndLoc) {
- void *Mem =
- C.Allocate(OpenACCFirstPrivateClause::totalSizeToAlloc<Expr *, VarDecl *>(
+ void *Mem = C.Allocate(
+ OpenACCFirstPrivateClause::totalSizeToAlloc<Expr *,
+ OpenACCFirstPrivateRecipe>(
VarList.size(), InitRecipes.size()));
return new (Mem) OpenACCFirstPrivateClause(BeginLoc, LParenLoc, VarList,
InitRecipes, EndLoc);
diff --git a/clang/lib/AST/StmtProfile.cpp b/clang/lib/AST/StmtProfile.cpp
index 4c36f2473ce25..0297f9c38dee3 100644
--- a/clang/lib/AST/StmtProfile.cpp
+++ b/clang/lib/AST/StmtProfile.cpp
@@ -2645,8 +2645,10 @@ void OpenACCClauseProfiler::VisitFirstPrivateClause(
const OpenACCFirstPrivateClause &Clause) {
VisitClauseWithVarList(Clause);
- for (auto *VD : Clause.getInitRecipes())
- Profiler.VisitDecl(VD);
+ for (auto &Recipe : Clause.getInitRecipes()) {
+ Profiler.VisitDecl(Recipe.RecipeDecl);
+ Profiler.VisitDecl(Recipe.InitFromTemporary);
+ }
}
void OpenACCClauseProfiler::VisitAttachClause(
diff --git a/clang/lib/Sema/SemaOpenACC.cpp b/clang/lib/Sema/SemaOpenACC.cpp
index 62fe3d1836ab5..c3695f0bb0f9d 100644
--- a/clang/lib/Sema/SemaOpenACC.cpp
+++ b/clang/lib/Sema/SemaOpenACC.cpp
@@ -2575,8 +2575,8 @@ SemaOpenACC::ActOnOpenACCAsteriskSizeExpr(SourceLocation AsteriskLoc) {
return BuildOpenACCAsteriskSizeExpr(AsteriskLoc);
}
-VarDecl *SemaOpenACC::CreateInitRecipe(OpenACCClauseKind CK,
- const Expr *VarExpr) {
+std::pair<VarDecl *, VarDecl *>
+SemaOpenACC::CreateInitRecipe(OpenACCClauseKind CK, const Expr *VarExpr) {
// Strip off any array subscripts/array section exprs to get to the type of
// the variable.
while (isa_and_present<ArraySectionExpr, ArraySubscriptExpr>(VarExpr)) {
@@ -2590,7 +2590,7 @@ VarDecl *SemaOpenACC::CreateInitRecipe(OpenACCClauseKind CK,
// fill in with nullptr. We'll count on TreeTransform to make this if
// necessary.
if (!VarExpr || VarExpr->getType()->isDependentType())
- return nullptr;
+ return {nullptr, nullptr};
QualType VarTy =
VarExpr->getType().getNonReferenceType().getUnqualifiedType();
@@ -2602,6 +2602,7 @@ VarDecl *SemaOpenACC::CreateInitRecipe(OpenACCClauseKind CK,
getASTContext().getTrivialTypeSourceInfo(VarTy), SC_Auto);
ExprResult Init;
+ VarDecl *Temporary = nullptr;
if (CK == OpenACCClauseKind::Private) {
// Trap errors so we don't get weird ones here. If we can't init, we'll just
@@ -2626,5 +2627,5 @@ VarDecl *SemaOpenACC::CreateInitRecipe(OpenACCClauseKind CK,
Recipe->setInitStyle(VarDecl::CallInit);
}
- return Recipe;
+ return {Recipe, Temporary};
}
diff --git a/clang/lib/Sema/SemaOpenACCClause.cpp b/clang/lib/Sema/SemaOpenACCClause.cpp
index 88d217f2c8d25..e8a18243e6db1 100644
--- a/clang/lib/Sema/SemaOpenACCClause.cpp
+++ b/clang/lib/Sema/SemaOpenACCClause.cpp
@@ -800,7 +800,7 @@ OpenACCClause *SemaOpenACCClauseVisitor::VisitPrivateClause(
// Assemble the recipes list.
for (const Expr *VarExpr : Clause.getVarList())
InitRecipes.push_back(
- SemaRef.CreateInitRecipe(OpenACCClauseKind::Private, VarExpr));
+ SemaRef.CreateInitRecipe(OpenACCClauseKind::Private, VarExpr).first);
return OpenACCPrivateClause::Create(
Ctx, Clause.getBeginLoc(), Clause.getLParenLoc(), Clause.getVarList(),
@@ -813,7 +813,7 @@ OpenACCClause *SemaOpenACCClauseVisitor::VisitFirstPrivateClause(
// really isn't anything to do here. GCC does some duplicate-finding, though
// it isn't apparent in the standard where this is justified.
- llvm::SmallVector<VarDecl *> InitRecipes;
+ llvm::SmallVector<OpenACCFirstPrivateRecipe> InitRecipes;
// Assemble the recipes list.
for (const Expr *VarExpr : Clause.getVarList())
diff --git a/clang/lib/Sema/TreeTransform.h b/clang/lib/Sema/TreeTransform.h
index 6ce5535c7f3f7..0030946301a93 100644
--- a/clang/lib/Sema/TreeTransform.h
+++ b/clang/lib/Sema/TreeTransform.h
@@ -11901,8 +11901,11 @@ void OpenACCClauseTransform<Derived>::VisitPrivateClause(
if (InitRecipe)
InitRecipes.push_back(InitRecipe);
else
- InitRecipes.push_back(Self.getSema().OpenACC().CreateInitRecipe(
- OpenACCClauseKind::Private, VarRef.get()));
+ InitRecipes.push_back(
+ Self.getSema()
+ .OpenACC()
+ .CreateInitRecipe(OpenACCClauseKind::Private, VarRef.get())
+ .first);
}
}
ParsedClause.setVarListDetails(InstantiatedVarList,
@@ -11942,7 +11945,7 @@ template <typename Derived>
void OpenACCClauseTransform<Derived>::VisitFirstPrivateClause(
const OpenACCFirstPrivateClause &C) {
llvm::SmallVector<Expr *> InstantiatedVarList;
- llvm::SmallVector<VarDecl *> InitRecipes;
+ llvm::SmallVector<OpenACCFirstPrivateRecipe> InitRecipes;
for (const auto [RefExpr, InitRecipe] :
llvm::zip(C.getVarList(), C.getInitRecipes())) {
@@ -11953,7 +11956,7 @@ void OpenACCClauseTransform<Derived>::VisitFirstPrivateClause(
// We only have to create a new one if it is dependent, and Sema won't
// make one of these unless the type is non-dependent.
- if (InitRecipe)
+ if (InitRecipe.RecipeDecl)
InitRecipes.push_back(InitRecipe);
else
InitRecipes.push_back(Self.getSema().OpenACC().CreateInitRecipe(
diff --git a/clang/lib/Serialization/ASTReader.cpp b/clang/lib/Serialization/ASTReader.cpp
index 1402f40894ab8..ed0ec9e357618 100644
--- a/clang/lib/Serialization/ASTReader.cpp
+++ b/clang/lib/Serialization/ASTReader.cpp
@@ -12877,9 +12877,12 @@ OpenACCClause *ASTRecordReader::readOpenACCClause() {
case OpenACCClauseKind::FirstPrivate: {
SourceLocation LParenLoc = readSourceLocation();
llvm::SmallVector<Expr *> VarList = readOpenACCVarList();
- llvm::SmallVector<VarDecl *> RecipeList;
- for (unsigned I = 0; I < VarList.size(); ++I)
- RecipeList.push_back(readDeclAs<VarDecl>());
+ llvm::SmallVector<OpenACCFirstPrivateRecipe> RecipeList;
+ for (unsigned I = 0; I < VarList.size(); ++I) {
+ VarDecl *Recipe = readDeclAs<VarDecl>();
+ VarDecl *RecipeTemp = readDeclAs<VarDecl>();
+ RecipeList.push_back({Recipe, RecipeTemp});
+ }
return OpenACCFirstPrivateClause::Create(getContext(), BeginLoc, LParenLoc,
VarList, RecipeList, EndLoc);
diff --git a/clang/lib/Serialization/ASTWriter.cpp b/clang/lib/Serialization/ASTWriter.cpp
index c038d4dd5c942..f144cd25eca14 100644
--- a/clang/lib/Serialization/ASTWriter.cpp
+++ b/clang/lib/Serialization/ASTWriter.cpp
@@ -8762,8 +8762,10 @@ void ASTRecordWriter::writeOpenACCClause(const OpenACCClause *C) {
writeSourceLocation(FPC->getLParenLoc());
writeOpenACCVarList(FPC);
- for (VarDecl *VD : FPC->getInitRecipes())
- AddDeclRef(VD);
+ for (const OpenACCFirstPrivateRecipe &R : FPC->getInitRecipes()) {
+ AddDeclRef(R.RecipeDecl);
+ AddDeclRef(R.InitFromTemporary);
+ }
return;
}
case OpenACCClauseKind::Attach: {
diff --git a/clang/tools/libclang/CIndex.cpp b/clang/tools/libclang/CIndex.cpp
index bf0c0aab2270f..9493edfa9c6ba 100644
--- a/clang/tools/libclang/CIndex.cpp
+++ b/clang/tools/libclang/CIndex.cpp
@@ -2895,8 +2895,10 @@ void OpenACCClauseEnqueue::VisitDeviceClause(const OpenACCDeviceClause &C) {
void OpenACCClauseEnqueue::VisitFirstPrivateClause(
const OpenACCFirstPrivateClause &C) {
VisitVarList(C);
- for (VarDecl *V : C.getInitRecipes())
- Visitor.AddDecl(V);
+ for (const OpenACCFirstPrivateRecipe &R : C.getInitRecipes()) {
+ Visitor.AddDecl(R.RecipeDecl);
+ Visitor.AddDecl(R.InitFromTemporary);
+ }
}
void OpenACCClauseEnqueue::VisitPresentClause(const OpenACCPresentClause &C) {
>From bb2642fab70fb4d59e431e01e319dcf6b90d88d8 Mon Sep 17 00:00:00 2001
From: royitaqi <royitaqi at users.noreply.github.com>
Date: Wed, 6 Aug 2025 10:24:09 -0700
Subject: [PATCH 126/171] [vscode-lldb] Fix race condition when changing
lldb-dap arguments (#151828)
# Problem
When the user changes lldb-dap's arguments (e.g. its path), there is a
race condition where the new lldb-dap process can start first and set
the extension's `serverProcess` and `serverInfo` fields for the new
process, after which the old lldb-dap process exits and wipes out these
two fields.
Consequences:
1. This causes `getServerProcess()` to return `undefined` when it should
return the new process.
2. It also causes wrong behavior when starting the next debug session:
a new lldb-dap process is started while the old one is neither reused
nor killed.
# Fix
When wiping the two fields, check whether `serverProcess` equals the
process captured by the handler. If it does, wipe the fields. If not,
the fields have already been updated (either a new process has started,
or another handler already wiped them), so the wiping should be
skipped.
---
lldb/tools/lldb-dap/src-ts/lldb-dap-server.ts | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/lldb/tools/lldb-dap/src-ts/lldb-dap-server.ts b/lldb/tools/lldb-dap/src-ts/lldb-dap-server.ts
index 03a0fc9d0fef3..f360fa3adfa80 100644
--- a/lldb/tools/lldb-dap/src-ts/lldb-dap-server.ts
+++ b/lldb/tools/lldb-dap/src-ts/lldb-dap-server.ts
@@ -39,8 +39,7 @@ export class LLDBDapServer implements vscode.Disposable {
const process = child_process.spawn(dapPath, dapArgs, options);
process.on("error", (error) => {
reject(error);
- this.serverProcess = undefined;
- this.serverInfo = undefined;
+ this.cleanUp(process);
});
process.on("exit", (code, signal) => {
let errorMessage = "Server process exited early";
@@ -50,8 +49,7 @@ export class LLDBDapServer implements vscode.Disposable {
errorMessage += ` due to signal ${signal}`;
}
reject(new Error(errorMessage));
- this.serverProcess = undefined;
- this.serverInfo = undefined;
+ this.cleanUp(process);
});
process.stdout.setEncoding("utf8").on("data", (data) => {
const connection = /connection:\/\/\[([^\]]+)\]:(\d+)/.exec(
@@ -126,7 +124,16 @@ Restarting the server will interrupt any existing debug sessions and start a new
return;
}
this.serverProcess.kill();
- this.serverProcess = undefined;
- this.serverInfo = undefined;
+ this.cleanUp(this.serverProcess);
+ }
+
+ cleanUp(process: child_process.ChildProcessWithoutNullStreams) {
+ // If the following don't equal, then the fields have already been updated
+ // (either a new process has started, or the fields were already cleaned
+ // up), and so the cleanup should be skipped.
+ if (this.serverProcess === process) {
+ this.serverProcess = undefined;
+ this.serverInfo = undefined;
+ }
}
}
>From ec138c7fa71d8b6d3894bc6d1c384a4cc3fa506e Mon Sep 17 00:00:00 2001
From: Douglas <Douglas.Gliner at sony.com>
Date: Wed, 6 Aug 2025 17:25:59 +0000
Subject: [PATCH 127/171] [OpenACC] Improve C++20 compatibility of
private/firstprivate test (#152233)
Under C++17, `DeletedCopy` and `DefaultedCopy` in the test
`clang/test/SemaOpenACC/private_firstprivate_reduction_required_ops.cpp`
are eligible for aggregate initialization. Under C++20 and up, that is
no longer the case. Therefore, a default constructor must be explicitly
declared to avoid additional diagnostics that the test does not expect.
This test was failing in our downstream testing where our toolchain uses
C++20 as the default dialect.
See this reduced example for more details:
https://godbolt.org/z/46oErYT43
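A minimal sketch of the dialect difference (example mine, distilled
from the reduced example above):

    struct DeletedCopy {
      DeletedCopy(const DeletedCopy&) = delete;
    };

    // C++17: 'DeletedCopy' is still an aggregate (a deleted constructor
    // is user-declared but not user-provided), so this is aggregate
    // initialization and compiles.
    // C++20: any user-declared constructor disqualifies the class from
    // being an aggregate, so this instead tries to call the (suppressed)
    // default constructor and is ill-formed.
    DeletedCopy d{};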
---
.../SemaOpenACC/private_firstprivate_reduction_required_ops.cpp | 2 ++
1 file changed, 2 insertions(+)
diff --git a/clang/test/SemaOpenACC/private_firstprivate_reduction_required_ops.cpp b/clang/test/SemaOpenACC/private_firstprivate_reduction_required_ops.cpp
index 4644bdeeacc0e..ce941ad4255da 100644
--- a/clang/test/SemaOpenACC/private_firstprivate_reduction_required_ops.cpp
+++ b/clang/test/SemaOpenACC/private_firstprivate_reduction_required_ops.cpp
@@ -37,10 +37,12 @@ struct ImplicitDelDtor {
};
struct DeletedCopy {
+ DeletedCopy();
DeletedCopy(const DeletedCopy&) = delete;
};
struct DefaultedCopy {
+ DefaultedCopy();
DefaultedCopy(const DefaultedCopy&) = default;
};
struct UserCopy {
>From 3f0c180ca07faf536d2ae0d69ec044fcd5a78716 Mon Sep 17 00:00:00 2001
From: Jann Horn <jannh at google.com>
Date: Wed, 6 Aug 2025 19:34:58 +0200
Subject: [PATCH 128/171] [DebugInfo][DWARF] Add heapallocsite information
(#132073)
LLVM currently stores heapallocsite information in CodeView debuginfo,
but not in DWARF debuginfo. Plumb it into DWARF as an LLVM-specific
extension.
heapallocsite debug information is useful when it is combined with
allocator instrumentation that stores caller addresses; I've used a
previous version of this patch for:
- analyzing memory usage by object type
- analyzing the distributions of values of class members
Other possible uses might be:
- attributing memory access profiles (for example, on Intel CPUs, from
PEBS records with Linear Data Address) to object types or specific
object members
- adding type information to crash/ASAN reports
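As a hedged illustration (mirroring the test added below), for a source
function like the following, clang attaches !heapallocsite metadata to
the allocation call, and with this patch the DWARF call-site entry for
that call gains a DW_AT_LLVM_alloc_type attribute referring to the DIE
for 'int':

    // Compiled by clang++, the call to operator new carries
    // !heapallocsite metadata; the resulting DW_TAG_call_site DIE now
    // also carries DW_AT_LLVM_alloc_type pointing at the 'int' type DIE.
    int *alloc_int() { return new int; }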
---
llvm/include/llvm/BinaryFormat/Dwarf.def | 3 ++
.../CodeGen/AsmPrinter/DwarfCompileUnit.cpp | 15 ++++----
.../lib/CodeGen/AsmPrinter/DwarfCompileUnit.h | 3 +-
llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp | 22 ++++++------
llvm/lib/DWARFLinker/Classic/DWARFLinker.cpp | 1 +
llvm/test/DebugInfo/X86/DW_AT_alloc_type.ll | 34 +++++++++++++++++++
6 files changed, 60 insertions(+), 18 deletions(-)
create mode 100644 llvm/test/DebugInfo/X86/DW_AT_alloc_type.ll
diff --git a/llvm/include/llvm/BinaryFormat/Dwarf.def b/llvm/include/llvm/BinaryFormat/Dwarf.def
index 48b33478d5045..b561125fe37a4 100644
--- a/llvm/include/llvm/BinaryFormat/Dwarf.def
+++ b/llvm/include/llvm/BinaryFormat/Dwarf.def
@@ -625,6 +625,9 @@ HANDLE_DW_AT(0x3e0a, LLVM_ptrauth_authentication_mode, 0, LLVM)
HANDLE_DW_AT(0x3e0b, LLVM_num_extra_inhabitants, 0, LLVM)
HANDLE_DW_AT(0x3e0c, LLVM_stmt_sequence, 0, LLVM)
HANDLE_DW_AT(0x3e0d, LLVM_coro_suspend_idx, 0, LLVM)
+// The DWARF v6 working draft defines DW_AT_alloc_type; use this LLVM-private ID
+// until that is released as an official standard.
+HANDLE_DW_AT(0x3e0e, LLVM_alloc_type, 0, LLVM)
// Apple extensions.
diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.cpp b/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.cpp
index f9d7e763e8890..67f526fe91464 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.cpp
@@ -1292,12 +1292,10 @@ DwarfCompileUnit::getDwarf5OrGNULocationAtom(dwarf::LocationAtom Loc) const {
}
}
-DIE &DwarfCompileUnit::constructCallSiteEntryDIE(DIE &ScopeDIE,
- const DISubprogram *CalleeSP,
- bool IsTail,
- const MCSymbol *PCAddr,
- const MCSymbol *CallAddr,
- unsigned CallReg) {
+DIE &DwarfCompileUnit::constructCallSiteEntryDIE(
+ DIE &ScopeDIE, const DISubprogram *CalleeSP, bool IsTail,
+ const MCSymbol *PCAddr, const MCSymbol *CallAddr, unsigned CallReg,
+ DIType *AllocSiteTy) {
// Insert a call site entry DIE within ScopeDIE.
DIE &CallSiteDIE = createAndAddDIE(getDwarf5OrGNUTag(dwarf::DW_TAG_call_site),
ScopeDIE, nullptr);
@@ -1306,7 +1304,7 @@ DIE &DwarfCompileUnit::constructCallSiteEntryDIE(DIE &ScopeDIE,
// Indirect call.
addAddress(CallSiteDIE, getDwarf5OrGNUAttr(dwarf::DW_AT_call_target),
MachineLocation(CallReg));
- } else {
+ } else if (CalleeSP) {
DIE *CalleeDIE = getOrCreateSubprogramDIE(CalleeSP);
assert(CalleeDIE && "Could not create DIE for call site entry origin");
if (AddLinkageNamesToDeclCallOriginsForTuning(DD) &&
@@ -1351,6 +1349,9 @@ DIE &DwarfCompileUnit::constructCallSiteEntryDIE(DIE &ScopeDIE,
getDwarf5OrGNUAttr(dwarf::DW_AT_call_return_pc), PCAddr);
}
+ if (AllocSiteTy)
+ addType(CallSiteDIE, AllocSiteTy, dwarf::DW_AT_LLVM_alloc_type);
+
return CallSiteDIE;
}
diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.h b/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.h
index 09be22ce35e36..c2f6ca0913818 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.h
+++ b/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.h
@@ -289,7 +289,8 @@ class DwarfCompileUnit final : public DwarfUnit {
/// the \p CallReg is set to 0.
DIE &constructCallSiteEntryDIE(DIE &ScopeDIE, const DISubprogram *CalleeSP,
bool IsTail, const MCSymbol *PCAddr,
- const MCSymbol *CallAddr, unsigned CallReg);
+ const MCSymbol *CallAddr, unsigned CallReg,
+ DIType *AllocSiteTy);
/// Construct call site parameter DIEs for the \p CallSiteDIE. The \p Params
/// were collected by the \ref collectCallSiteParameters.
/// Note: The order of parameters does not matter, since debuggers recognize
diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp b/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
index 5ae2d2a3958bd..c27f100775625 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
@@ -936,6 +936,8 @@ void DwarfDebug::constructCallSiteEntryDIEs(const DISubprogram &SP,
if (MI.hasDelaySlot() && !delaySlotSupported(*&MI))
return;
+ DIType *AllocSiteTy = dyn_cast_or_null<DIType>(MI.getHeapAllocMarker());
+
// If this is a direct call, find the callee's subprogram.
// In the case of an indirect call find the register that holds
// the callee.
@@ -950,23 +952,23 @@ void DwarfDebug::constructCallSiteEntryDIEs(const DISubprogram &SP,
PhysRegCalleeOperand =
PhysRegCalleeOperand && MCOI.OperandType == MCOI::OPERAND_REGISTER;
}
- if (!CalleeOp.isGlobal() && !PhysRegCalleeOperand)
- continue;
unsigned CallReg = 0;
const DISubprogram *CalleeSP = nullptr;
const Function *CalleeDecl = nullptr;
if (PhysRegCalleeOperand) {
- CallReg = CalleeOp.getReg();
- if (!CallReg)
- continue;
- } else {
+ CallReg = CalleeOp.getReg(); // might be zero
+ } else if (CalleeOp.isGlobal()) {
CalleeDecl = dyn_cast<Function>(CalleeOp.getGlobal());
- if (!CalleeDecl || !CalleeDecl->getSubprogram())
- continue;
- CalleeSP = CalleeDecl->getSubprogram();
+ if (CalleeDecl)
+ CalleeSP = CalleeDecl->getSubprogram(); // might be nullptr
}
+ // Omit DIE if we can't tell where the call goes *and* we don't want to
+ // add metadata to it.
+ if (CalleeSP == nullptr && CallReg == 0 && AllocSiteTy == nullptr)
+ continue;
+
// TODO: Omit call site entries for runtime calls (objc_msgSend, etc).
bool IsTail = TII->isTailCall(MI);
@@ -1000,7 +1002,7 @@ void DwarfDebug::constructCallSiteEntryDIEs(const DISubprogram &SP,
<< (IsTail ? " [IsTail]" : "") << "\n");
DIE &CallSiteDIE = CU.constructCallSiteEntryDIE(
- ScopeDIE, CalleeSP, IsTail, PCAddr, CallAddr, CallReg);
+ ScopeDIE, CalleeSP, IsTail, PCAddr, CallAddr, CallReg, AllocSiteTy);
// Optionally emit call-site-param debug info.
if (emitDebugEntryValues()) {
diff --git a/llvm/lib/DWARFLinker/Classic/DWARFLinker.cpp b/llvm/lib/DWARFLinker/Classic/DWARFLinker.cpp
index 6ddb12ba04349..8052773812a2c 100644
--- a/llvm/lib/DWARFLinker/Classic/DWARFLinker.cpp
+++ b/llvm/lib/DWARFLinker/Classic/DWARFLinker.cpp
@@ -109,6 +109,7 @@ static bool isODRAttribute(uint16_t Attr) {
case dwarf::DW_AT_specification:
case dwarf::DW_AT_abstract_origin:
case dwarf::DW_AT_import:
+ case dwarf::DW_AT_LLVM_alloc_type:
return true;
}
llvm_unreachable("Improper attribute.");
diff --git a/llvm/test/DebugInfo/X86/DW_AT_alloc_type.ll b/llvm/test/DebugInfo/X86/DW_AT_alloc_type.ll
new file mode 100644
index 0000000000000..33028f2234ceb
--- /dev/null
+++ b/llvm/test/DebugInfo/X86/DW_AT_alloc_type.ll
@@ -0,0 +1,34 @@
+; RUN: llc -O3 -o %t -filetype=obj %s
+; RUN: llvm-dwarfdump %t | FileCheck %s
+
+; based on clang++ output for `int *alloc_int() { return new int; }`
+
+
+target triple = "x86_64-unknown-linux-gnu"
+
+define dso_local ptr @alloc_int() !dbg !3 {
+; CHECK: DW_TAG_subprogram
+entry:
+ %call = call ptr @alloc(i64 noundef 4), !heapallocsite !7
+; CHECK: DW_TAG_call_site
+; CHECK: DW_AT_LLVM_alloc_type ([[ALLOCSITE:.*]])
+ ret ptr %call
+}
+
+; CHECK: {{.*}}[[ALLOCSITE]]: DW_TAG_base_type
+; CHECK: DW_AT_name ("int")
+
+declare dso_local ptr @alloc(i64 noundef)
+
+!llvm.dbg.cu = !{!0}
+!llvm.module.flags = !{!2,!8}
+
+!0 = distinct !DICompileUnit(language: DW_LANG_C_plus_plus_14, file: !1, emissionKind: FullDebug)
+!1 = !DIFile(filename: "a.cpp", directory: "/")
+!2 = !{i32 2, !"Debug Info Version", i32 3}
+!3 = distinct !DISubprogram(name: "alloc_int", scope: !1, file: !1, line: 1, type: !4, scopeLine: 1, flags: DIFlagPrototyped | DIFlagAllCallsDescribed, spFlags: DISPFlagDefinition, unit: !0)
+!4 = !DISubroutineType(types: !5)
+!5 = !{!6}
+!6 = !DIDerivedType(tag: DW_TAG_pointer_type, baseType: !7, size: 64)
+!7 = !DIBasicType(name: "int", size: 32, encoding: DW_ATE_signed)
+!8 = !{i32 2, !"Dwarf Version", i32 5}
>From 51e825dbfbe67ebad88c1912452db8fac0489439 Mon Sep 17 00:00:00 2001
From: Dave Lee <davelee.com at gmail.com>
Date: Wed, 6 Aug 2025 10:40:10 -0700
Subject: [PATCH 129/171] [lldb] Use const ref for looping over frame
recognizers (NFC) (#152334)
---
lldb/source/Target/StackFrameRecognizer.cpp | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lldb/source/Target/StackFrameRecognizer.cpp b/lldb/source/Target/StackFrameRecognizer.cpp
index 9d5116cb909d1..d9dbed8f2d434 100644
--- a/lldb/source/Target/StackFrameRecognizer.cpp
+++ b/lldb/source/Target/StackFrameRecognizer.cpp
@@ -152,7 +152,7 @@ StackFrameRecognizerManager::GetRecognizerForFrame(StackFrameSP frame) {
Address start_addr = symbol->GetAddress();
Address current_addr = frame->GetFrameCodeAddress();
- for (auto entry : m_recognizers) {
+ for (const auto &entry : m_recognizers) {
if (!entry.enabled)
continue;
>From 0a23b22d1d43b2816e67aa8c328365b9bf114c74 Mon Sep 17 00:00:00 2001
From: David Majnemer <david.majnemer at gmail.com>
Date: Wed, 6 Aug 2025 10:23:25 -0700
Subject: [PATCH 130/171] [APFloat] Properly implement
DoubleAPFloat::roundToIntegral
The previous implementation did not correctly handle double-doubles such as
0x1p100 + 0x1p1, where representing both components exactly would need more
than a 106-bit significand.
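
As a sanity check (not part of this patch), here is a minimal sketch of
exercising the new path, assuming an LLVM build; the double-double is
assembled the same way the unit tests below do it, and since the input is
already integral, rounding must leave it unchanged:
```
#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/APInt.h"
using namespace llvm;

int main() {
  // hi = 0x1p100, lo = 0x1p1: the value from the commit message.
  APFloat Hi(0x1p100), Lo(0x1p1);
  APInt Bits = Lo.bitcastToAPInt().concat(Hi.bitcastToAPInt());
  APFloat DD(APFloat::PPCDoubleDouble(), Bits);

  APFloat Rounded = DD;
  Rounded.roundToIntegral(APFloat::rmNearestTiesToEven);
  // The input is an integer, so rounding must be the identity.
  return Rounded.compare(DD) == APFloat::cmpEqual ? 0 : 1;
}
```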
---
llvm/lib/Support/APFloat.cpp | 91 ++++-
llvm/unittests/ADT/APFloatTest.cpp | 569 +++++++++++++++++++++++++----
2 files changed, 595 insertions(+), 65 deletions(-)
diff --git a/llvm/lib/Support/APFloat.cpp b/llvm/lib/Support/APFloat.cpp
index 46084c5b7fb92..3d688a109cdee 100644
--- a/llvm/lib/Support/APFloat.cpp
+++ b/llvm/lib/Support/APFloat.cpp
@@ -4949,6 +4949,21 @@ DoubleAPFloat &DoubleAPFloat::operator=(const DoubleAPFloat &RHS) {
return *this;
}
+// Returns a result such that:
+// 1. abs(Lo) <= ulp(Hi)/2
+// 2. Hi == RTNE(Hi + Lo)
+// 3. Hi + Lo == X + Y
+//
+// Requires that log2(X) >= log2(Y).
+static std::pair<APFloat, APFloat> fastTwoSum(APFloat X, APFloat Y) {
+ if (!X.isFinite())
+ return {X, APFloat::getZero(X.getSemantics(), /*Negative=*/false)};
+ APFloat Hi = X + Y;
+ APFloat Delta = Hi - X;
+ APFloat Lo = Y - Delta;
+ return {Hi, Lo};
+}
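+
+// Worked example (illustrative): fastTwoSum(1.0, 0x1p-53) computes
+// Hi = RTNE(1.0 + 0x1p-53) = 1.0 (halfway, ties to even), Delta = 0.0, and
+// Lo = 0x1p-53, so Hi + Lo preserves the exact sum and |Lo| == ulp(Hi)/2.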
+
// Implement addition, subtraction, multiplication and division based on:
// "Software for Doubled-Precision Floating-Point Computations",
// by Seppo Linnainmaa, ACM TOMS vol 7 no 3, September 1981, pages 272-283.
@@ -5218,10 +5233,78 @@ DoubleAPFloat::fusedMultiplyAdd(const DoubleAPFloat &Multiplicand,
APFloat::opStatus DoubleAPFloat::roundToIntegral(APFloat::roundingMode RM) {
assert(Semantics == &semPPCDoubleDouble && "Unexpected Semantics");
- APFloat Tmp(semPPCDoubleDoubleLegacy, bitcastToAPInt());
- auto Ret = Tmp.roundToIntegral(RM);
- *this = DoubleAPFloat(semPPCDoubleDouble, Tmp.bitcastToAPInt());
- return Ret;
+ const APFloat &Hi = getFirst();
+ const APFloat &Lo = getSecond();
+
+ APFloat RoundedHi = Hi;
+ const opStatus HiStatus = RoundedHi.roundToIntegral(RM);
+
+ // We can reduce the problem to just the high part if the input:
+ // 1. Represents a non-finite value.
+ // 2. Has a component which is zero.
+ if (!Hi.isFiniteNonZero() || Lo.isZero()) {
+ Floats[0] = std::move(RoundedHi);
+ Floats[1].makeZero(/*Neg=*/false);
+ return HiStatus;
+ }
+
+ // Adjust `Rounded` in the direction of `TieBreaker` if `ToRound` was at a
+ // halfway point.
+ auto RoundToNearestHelper = [](APFloat ToRound, APFloat Rounded,
+ APFloat TieBreaker) {
+ // RoundingError tells us which direction we rounded:
+ // - RoundingError > 0: we rounded up.
+ // - RoundingError < 0: we rounded down.
+ // Sterbenz' lemma ensures that RoundingError is exact.
+ const APFloat RoundingError = Rounded - ToRound;
+ if (TieBreaker.isNonZero() &&
+ TieBreaker.isNegative() != RoundingError.isNegative() &&
+ abs(RoundingError).isExactlyValue(0.5))
+ Rounded.add(
+ APFloat::getOne(Rounded.getSemantics(), TieBreaker.isNegative()),
+ rmNearestTiesToEven);
+ return Rounded;
+ };
+
+ // Case 1: Hi is not an integer.
+ // Special cases are for rounding modes that are sensitive to ties.
+ if (RoundedHi != Hi) {
+ // We need to consider the case where Hi was between two integers and the
+ // rounding mode broke the tie when, in fact, Lo may have had a different
+ // sign than Hi.
+ if (RM == rmNearestTiesToAway || RM == rmNearestTiesToEven)
+ RoundedHi = RoundToNearestHelper(Hi, RoundedHi, Lo);
+
+ Floats[0] = std::move(RoundedHi);
+ Floats[1].makeZero(/*Neg=*/false);
+ return HiStatus;
+ }
+
+ // Case 2: Hi is an integer.
+  // Special cases are for rounding modes that round toward or away from
+  // zero.
+ RoundingMode LoRoundingMode;
+ if (RM == rmTowardZero)
+ // When our input is positive, we want the Lo component rounded toward
+ // negative infinity to get the smallest result magnitude. Likewise,
+ // negative inputs want the Lo component rounded toward positive infinity.
+ LoRoundingMode = isNegative() ? rmTowardPositive : rmTowardNegative;
+ else
+ LoRoundingMode = RM;
+
+ APFloat RoundedLo = Lo;
+ const opStatus LoStatus = RoundedLo.roundToIntegral(LoRoundingMode);
+ if (LoRoundingMode == rmNearestTiesToAway)
+ // We need to consider the case where Lo was between two integers and the
+ // rounding mode broke the tie when, in fact, Hi may have had a different
+ // sign than Lo.
+ RoundedLo = RoundToNearestHelper(Lo, RoundedLo, Hi);
+
+  // We must ensure that the final result has no overlap between the two
+  // APFloat values.
+ std::tie(RoundedHi, RoundedLo) = fastTwoSum(RoundedHi, RoundedLo);
+
+ Floats[0] = std::move(RoundedHi);
+ Floats[1] = std::move(RoundedLo);
+ return LoStatus;
}
void DoubleAPFloat::changeSign() {
diff --git a/llvm/unittests/ADT/APFloatTest.cpp b/llvm/unittests/ADT/APFloatTest.cpp
index 9609e8e22a3ed..a35594d4afed4 100644
--- a/llvm/unittests/ADT/APFloatTest.cpp
+++ b/llvm/unittests/ADT/APFloatTest.cpp
@@ -16,9 +16,11 @@
#include "llvm/Support/FormatVariadic.h"
#include "gtest/gtest.h"
#include <cmath>
+#include <limits>
#include <ostream>
#include <string>
#include <tuple>
+#include <type_traits>
using namespace llvm;
@@ -2661,6 +2663,39 @@ TEST(APFloatTest, Float8UZConvert) {
}
}
+struct DD {
+ double Hi;
+ double Lo;
+};
+
+template <typename T, typename U>
+static APFloat makeDoubleAPFloat(T Hi, U Lo) {
+ APFloat HiFloat{APFloat::IEEEdouble(), APFloat::uninitialized};
+ if constexpr (std::is_same_v<decltype(Hi), APFloat>) {
+ HiFloat = Hi;
+ } else if constexpr (std::is_same_v<decltype(Hi), double>) {
+ HiFloat = APFloat{Hi};
+ } else {
+ HiFloat = {APFloat::IEEEdouble(), Hi};
+ }
+
+ APFloat LoFloat{APFloat::IEEEdouble(), APFloat::uninitialized};
+ if constexpr (std::is_same_v<decltype(Lo), APFloat>) {
+ LoFloat = Lo;
+ } else if constexpr (std::is_same_v<decltype(Lo), double>) {
+ LoFloat = APFloat{Lo};
+ } else {
+ LoFloat = {APFloat::IEEEdouble(), Lo};
+ }
+
+ APInt Bits = LoFloat.bitcastToAPInt().concat(HiFloat.bitcastToAPInt());
+ return APFloat(APFloat::PPCDoubleDouble(), Bits);
+}
+
+static APFloat makeDoubleAPFloat(DD X) {
+ return makeDoubleAPFloat(X.Hi, X.Lo);
+}
+
TEST(APFloatTest, PPCDoubleDouble) {
APFloat test(APFloat::PPCDoubleDouble(), "1.0");
EXPECT_EQ(0x3ff0000000000000ull, test.bitcastToAPInt().getRawData()[0]);
@@ -5315,18 +5350,452 @@ TEST(APFloatTest, PPCDoubleDoubleFMA) {
APFloat(APFloat::PPCDoubleDouble(), "10").compare(A));
}
-TEST(APFloatTest, PPCDoubleDoubleRoundToIntegral) {
- {
- APFloat A(APFloat::PPCDoubleDouble(), "1.5");
- A.roundToIntegral(APFloat::rmNearestTiesToEven);
- EXPECT_EQ(APFloat::cmpEqual,
- APFloat(APFloat::PPCDoubleDouble(), "2").compare(A));
+struct PPCDoubleDoubleRoundToIntegralTestCase {
+ DD Input;
+ DD Rounded[5] = {};
+ constexpr PPCDoubleDoubleRoundToIntegralTestCase &
+ withRounded(DD R, APFloat::roundingMode RM) {
+ Rounded[static_cast<std::underlying_type_t<APFloat::roundingMode>>(RM)] = R;
+ return *this;
}
- {
- APFloat A(APFloat::PPCDoubleDouble(), "2.5");
- A.roundToIntegral(APFloat::rmNearestTiesToEven);
- EXPECT_EQ(APFloat::cmpEqual,
- APFloat(APFloat::PPCDoubleDouble(), "2").compare(A));
+};
+
+auto ppcDoubleDoubleRoundToIntegralTests() {
+ constexpr double Eps = std::numeric_limits<double>::epsilon();
+ constexpr double HalfEps = Eps / 2.0;
+ constexpr double QuarterEps = Eps / 4.0;
+ constexpr double SmallestNormal = std::numeric_limits<double>::min();
+ constexpr double EvenIntegerThreshold{uint64_t{1}
+ << std::numeric_limits<double>::digits};
+ constexpr double Inf = std::numeric_limits<double>::infinity();
+ constexpr double QNaN = std::numeric_limits<double>::quiet_NaN();
+ using TestCase = PPCDoubleDoubleRoundToIntegralTestCase;
+ static constexpr auto TestCases = std::array{
+ // 1. Zeros and Basic Integers
+ // Input: Positive Zero (0.0, 0.0)
+ TestCase({{0.0, 0.0}})
+ .withRounded({0.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({0.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({0.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({0.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({0.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Negative Zero (-0.0, 0.0)
+ TestCase({{-0.0, 0.0}})
+ .withRounded({-0.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({-0.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({-0.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({-0.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({-0.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Positive Even (2.0, 0.0)
+ TestCase({{2.0, 0.0}})
+ .withRounded({2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Positive Odd (3.0, 0.0)
+ TestCase({{3.0, 0.0}})
+ .withRounded({3.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({3.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({3.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({3.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({3.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Negative Even (-2.0, 0.0)
+ TestCase({{-2.0, 0.0}})
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({-2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({-2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // 2. General Fractions (Non-Ties)
+ // Input: 2.3
+ TestCase({{2.3, 0.0}})
+ .withRounded({2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({3.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: 2.7
+ TestCase({{2.7, 0.0}})
+ .withRounded({2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({3.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({3.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({3.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: -2.3
+ TestCase({{-2.3, 0.0}})
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({-3.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({-2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({-2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: -2.7
+ TestCase({{-2.7, 0.0}})
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({-3.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({-3.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({-3.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: 2.3 + Tiny
+ TestCase({{2.3, SmallestNormal}})
+ .withRounded({2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({3.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // 3. Exact Midpoints (Ties at N.5)
+ // Input: 0.5
+ TestCase({{0.5, 0.0}})
+ .withRounded({0.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({0.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({1.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({1.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({0.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: 1.5 (Odd base)
+ TestCase({{1.5, 0.0}})
+ .withRounded({1.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({1.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: 2.5 (Even base)
+ TestCase({{2.5, 0.0}})
+ .withRounded({2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({3.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({3.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: -0.5
+ TestCase({{-0.5, 0.0}})
+ .withRounded({-0.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({-1.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({-0.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({-1.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({-0.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: -1.5 (Odd base)
+ TestCase({{-1.5, 0.0}})
+ .withRounded({-1.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({-1.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({-2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({-2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: -2.5 (Even base)
+ TestCase({{-2.5, 0.0}})
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({-3.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({-3.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({-2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // 4. Near Midpoints (lo breaks the tie)
+ // Input: Slightly > 2.5
+ TestCase({{2.5, SmallestNormal}})
+ .withRounded({2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({3.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({3.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({3.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Slightly < 2.5
+ TestCase({{2.5, -SmallestNormal}})
+ .withRounded({2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({3.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Slightly > 1.5
+ TestCase({{1.5, SmallestNormal}})
+ .withRounded({1.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({1.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Slightly < 1.5
+ TestCase({{1.5, -SmallestNormal}})
+ .withRounded({1.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({1.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({1.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({1.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Slightly > -2.5 (closer to 0)
+ TestCase({{-2.5, SmallestNormal}})
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({-3.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({-2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({-2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Slightly < -2.5 (further from 0)
+ TestCase({{-2.5, -SmallestNormal}})
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({-3.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({-3.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({-3.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // 5. Near Integers (lo crosses the integer boundary)
+ // Input: Slightly > 2.0
+ TestCase({{2.0, SmallestNormal}})
+ .withRounded({2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({3.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Slightly < 2.0 (1.99...)
+ TestCase({{2.0, -SmallestNormal}})
+ .withRounded({1.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({1.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Slightly > -2.0 (-1.99...)
+ TestCase({{-2.0, SmallestNormal}})
+ .withRounded({-1.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({-1.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({-2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({-2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Slightly < -2.0
+ TestCase({{-2.0, -SmallestNormal}})
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({-3.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({-2.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({-2.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({-2.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Slightly > 0.0
+ TestCase({{SmallestNormal, 0.0}})
+ .withRounded({0.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({0.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({1.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({0.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({0.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: Slightly < 0.0
+ TestCase({{-SmallestNormal, 0.0}})
+ .withRounded({-0.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({-1.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({-0.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({-0.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({-0.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // 6. Boundary of Canonicalization (Maximum lo)
+ // Input: 1.0 + Max lo (1 + 2^-53)
+ TestCase({{1.0, HalfEps}})
+ .withRounded({1.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({1.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({2.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({1.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({1.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: 1.0 - Max lo (1 - 2^-54)
+ TestCase({{1.0, -QuarterEps}})
+ .withRounded({0.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({0.0, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({1.0, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({1.0, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({1.0, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // 7. Large Magnitudes (Beyond 2^53). N = EvenIntegerThreshold (Even)
+ // Input: EvenIntegerThreshold (Exact)
+ TestCase({{EvenIntegerThreshold, 0.0}})
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardZero)
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({EvenIntegerThreshold, 0.0},
+ APFloat::rmNearestTiesToAway)
+ .withRounded({EvenIntegerThreshold, 0.0},
+ APFloat::rmNearestTiesToEven),
+
+ // Input: EvenIntegerThreshold+1 (Exact)
+ TestCase({{EvenIntegerThreshold, 1.0}})
+ .withRounded({EvenIntegerThreshold, 1.0}, APFloat::rmTowardZero)
+ .withRounded({EvenIntegerThreshold, 1.0}, APFloat::rmTowardNegative)
+ .withRounded({EvenIntegerThreshold, 1.0}, APFloat::rmTowardPositive)
+ .withRounded({EvenIntegerThreshold, 1.0},
+ APFloat::rmNearestTiesToAway)
+ .withRounded({EvenIntegerThreshold, 1.0},
+ APFloat::rmNearestTiesToEven),
+
+ // Fractions
+ // Input: EvenIntegerThreshold+0.25
+ TestCase({{EvenIntegerThreshold, 0.25}})
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardZero)
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({EvenIntegerThreshold, 1.0}, APFloat::rmTowardPositive)
+ .withRounded({EvenIntegerThreshold, 0.0},
+ APFloat::rmNearestTiesToAway)
+ .withRounded({EvenIntegerThreshold, 0.0},
+ APFloat::rmNearestTiesToEven),
+
+ // Input: EvenIntegerThreshold+0.75
+ TestCase({{EvenIntegerThreshold, 0.75}})
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardZero)
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({EvenIntegerThreshold, 1.0}, APFloat::rmTowardPositive)
+ .withRounded({EvenIntegerThreshold, 1.0},
+ APFloat::rmNearestTiesToAway)
+ .withRounded({EvenIntegerThreshold, 1.0},
+ APFloat::rmNearestTiesToEven),
+
+ // Ties (Midpoints)
+ // Input: EvenIntegerThreshold-0.5
+ TestCase({{EvenIntegerThreshold, -0.5}})
+ .withRounded({EvenIntegerThreshold - 1.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({EvenIntegerThreshold - 1.0, 0.0},
+ APFloat::rmTowardNegative)
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({EvenIntegerThreshold, 0.0},
+ APFloat::rmNearestTiesToAway)
+ .withRounded({EvenIntegerThreshold, 0.0},
+ APFloat::rmNearestTiesToEven),
+
+ // Input: EvenIntegerThreshold+0.5
+ TestCase({{EvenIntegerThreshold, 0.5}})
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardZero)
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({EvenIntegerThreshold, 1.0}, APFloat::rmTowardPositive)
+ .withRounded({EvenIntegerThreshold, 1.0},
+ APFloat::rmNearestTiesToAway)
+ .withRounded({EvenIntegerThreshold, 0.0},
+ APFloat::rmNearestTiesToEven),
+
+ // Input: EvenIntegerThreshold+1.5
+ TestCase({{EvenIntegerThreshold + 2.0, -0.5}})
+ .withRounded({EvenIntegerThreshold, 1.0}, APFloat::rmTowardZero)
+ .withRounded({EvenIntegerThreshold, 1.0}, APFloat::rmTowardNegative)
+ .withRounded({EvenIntegerThreshold + 2.0, 0.0},
+ APFloat::rmTowardPositive)
+ .withRounded({EvenIntegerThreshold + 2.0, 0.0},
+ APFloat::rmNearestTiesToAway)
+ .withRounded({EvenIntegerThreshold + 2.0, 0.0},
+ APFloat::rmNearestTiesToEven),
+
+ // Input: EvenIntegerThreshold+2.5
+ TestCase({{EvenIntegerThreshold + 2.0, 0.5}})
+ .withRounded({EvenIntegerThreshold + 2.0, 0.0}, APFloat::rmTowardZero)
+ .withRounded({EvenIntegerThreshold + 2.0, 0.0},
+ APFloat::rmTowardNegative)
+ .withRounded({EvenIntegerThreshold + 4.0, -1.0},
+ APFloat::rmTowardPositive)
+ .withRounded({EvenIntegerThreshold + 4.0, -1.0},
+ APFloat::rmNearestTiesToAway)
+ .withRounded({EvenIntegerThreshold + 2.0, 0.0},
+ APFloat::rmNearestTiesToEven),
+
+ // Near Ties
+ // Input: EvenIntegerThreshold+0.5+HalfEps
+ TestCase({{EvenIntegerThreshold, 0.5 + HalfEps}})
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardZero)
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({EvenIntegerThreshold, 1.0}, APFloat::rmTowardPositive)
+ .withRounded({EvenIntegerThreshold, 1.0},
+ APFloat::rmNearestTiesToAway)
+ .withRounded({EvenIntegerThreshold, 1.0},
+ APFloat::rmNearestTiesToEven),
+
+ // Input: EvenIntegerThreshold+0.5-QuarterEps
+ TestCase({{EvenIntegerThreshold, 0.5 - QuarterEps}})
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardZero)
+ .withRounded({EvenIntegerThreshold, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({EvenIntegerThreshold, 1.0}, APFloat::rmTowardPositive)
+ .withRounded({EvenIntegerThreshold, 0.0},
+ APFloat::rmNearestTiesToAway)
+ .withRounded({EvenIntegerThreshold, 0.0},
+ APFloat::rmNearestTiesToEven),
+
+ // Canonical Boundary (Max lo for EvenIntegerThreshold is 1.0)
+ // Input: EvenIntegerThreshold+1.0
+ TestCase({{EvenIntegerThreshold, 1.0}})
+ .withRounded({EvenIntegerThreshold, 1.0}, APFloat::rmTowardZero)
+ .withRounded({EvenIntegerThreshold, 1.0}, APFloat::rmTowardNegative)
+ .withRounded({EvenIntegerThreshold, 1.0}, APFloat::rmTowardPositive)
+ .withRounded({EvenIntegerThreshold, 1.0},
+ APFloat::rmNearestTiesToAway)
+ .withRounded({EvenIntegerThreshold, 1.0},
+ APFloat::rmNearestTiesToEven),
+
+ // 8. Special Values
+ // Input: +Inf
+ TestCase({{Inf, 0.0}})
+ .withRounded({Inf, 0.0}, APFloat::rmTowardZero)
+ .withRounded({Inf, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({Inf, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({Inf, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({Inf, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: -Inf
+ TestCase({{-Inf, 0.0}})
+ .withRounded({-Inf, 0.0}, APFloat::rmTowardZero)
+ .withRounded({-Inf, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({-Inf, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({-Inf, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({-Inf, 0.0}, APFloat::rmNearestTiesToEven),
+
+ // Input: NaN input hi. Expected output canonical (NaN, 0.0).
+ TestCase({{QNaN, 0.0}})
+ .withRounded({QNaN, 0.0}, APFloat::rmTowardZero)
+ .withRounded({QNaN, 0.0}, APFloat::rmTowardNegative)
+ .withRounded({QNaN, 0.0}, APFloat::rmTowardPositive)
+ .withRounded({QNaN, 0.0}, APFloat::rmNearestTiesToAway)
+ .withRounded({QNaN, 0.0}, APFloat::rmNearestTiesToEven),
+ };
+ return TestCases;
+}
+
+class PPCDoubleDoubleRoundToIntegralValueTest
+ : public testing::Test,
+ public ::testing::WithParamInterface<
+ PPCDoubleDoubleRoundToIntegralTestCase> {};
+
+INSTANTIATE_TEST_SUITE_P(
+ PPCDoubleDoubleRoundToIntegralValueParamTests,
+ PPCDoubleDoubleRoundToIntegralValueTest,
+ ::testing::ValuesIn(ppcDoubleDoubleRoundToIntegralTests()));
+
+TEST_P(PPCDoubleDoubleRoundToIntegralValueTest,
+ PPCDoubleDoubleRoundToIntegral) {
+ const PPCDoubleDoubleRoundToIntegralTestCase TestCase = GetParam();
+ const APFloat Input = makeDoubleAPFloat(TestCase.Input);
+ EXPECT_FALSE(Input.isDenormal())
+ << TestCase.Input.Hi << " + " << TestCase.Input.Lo;
+ for (size_t I = 0, E = std::size(TestCase.Rounded); I != E; ++I) {
+ const auto RM = static_cast<APFloat::roundingMode>(I);
+ const APFloat Expected = makeDoubleAPFloat(TestCase.Rounded[I]);
+ EXPECT_FALSE(Expected.isDenormal())
+        << TestCase.Rounded[I].Hi << " + " << TestCase.Rounded[I].Lo;
+ APFloat Actual = Input;
+ Actual.roundToIntegral(RM);
+    if (Actual.isNaN())
+      // NaNs compare unordered, so just require Expected to be NaN too.
+      EXPECT_TRUE(Expected.isNaN());
+ else
+ EXPECT_EQ(Actual.compare(Expected), APFloat::cmpEqual)
+ << "RM: " << RM << " Input.Hi: " << TestCase.Input.Hi
+ << " Input.Lo: " << TestCase.Input.Lo << " Actual: " << Actual
+ << " Expected.Hi: " << TestCase.Rounded[I].Hi
+ << " Expected.Lo: " << TestCase.Rounded[I].Lo
+ << " Expected: " << Expected;
}
}
@@ -5551,13 +6020,9 @@ TEST(APFloatTest, PPCDoubleDoubleNext) {
return X;
};
- auto Zero = [] {
- return APFloat::getZero(APFloat::IEEEdouble());
- };
+ auto Zero = [] { return APFloat::getZero(APFloat::IEEEdouble()); };
- auto One = [] {
- return APFloat::getOne(APFloat::IEEEdouble());
- };
+ auto One = [] { return APFloat::getOne(APFloat::IEEEdouble()); };
// 0x1p-1074
auto MinSubnormal = [] {
@@ -5574,24 +6039,6 @@ TEST(APFloatTest, PPCDoubleDoubleNext) {
// 2^-53
auto EpsNeg = [&] { return scalbn(Eps(), -1, APFloat::rmNearestTiesToEven); };
- auto MakeDoubleAPFloat = [](auto Hi, auto Lo) {
- APFloat HiFloat{APFloat::IEEEdouble(), APFloat::uninitialized};
- if constexpr (std::is_same_v<decltype(Hi), APFloat>) {
- HiFloat = Hi;
- } else {
- HiFloat = {APFloat::IEEEdouble(), Hi};
- }
-
- APFloat LoFloat{APFloat::IEEEdouble(), APFloat::uninitialized};
- if constexpr (std::is_same_v<decltype(Lo), APFloat>) {
- LoFloat = Lo;
- } else {
- LoFloat = {APFloat::IEEEdouble(), Lo};
- }
-
- APInt Bits = LoFloat.bitcastToAPInt().concat(HiFloat.bitcastToAPInt());
- return APFloat(APFloat::PPCDoubleDouble(), Bits);
- };
APFloat Test(APFloat::PPCDoubleDouble(), APFloat::uninitialized);
APFloat Expected(APFloat::PPCDoubleDouble(), APFloat::uninitialized);
@@ -5719,55 +6166,55 @@ TEST(APFloatTest, PPCDoubleDoubleNext) {
// 2b. |hi| >= 2*DBL_MIN_NORMAL (DD precision > D precision)
// Test at hi = 1.0, lo = 0.
- Test = MakeDoubleAPFloat(One(), Zero());
- Expected = MakeDoubleAPFloat(One(), MinSubnormal());
+ Test = makeDoubleAPFloat(One(), Zero());
+ Expected = makeDoubleAPFloat(One(), MinSubnormal());
EXPECT_EQ(Test.next(false), APFloat::opOK);
EXPECT_TRUE(Test.bitwiseIsEqual(Expected));
// Test at hi = -1.0. delta = 2^-1074 (positive, moving towards +Inf).
- Test = MakeDoubleAPFloat(-One(), Zero());
- Expected = MakeDoubleAPFloat(-One(), MinSubnormal());
+ Test = makeDoubleAPFloat(-One(), Zero());
+ Expected = makeDoubleAPFloat(-One(), MinSubnormal());
EXPECT_EQ(Test.next(false), APFloat::opOK);
EXPECT_TRUE(Test.bitwiseIsEqual(Expected));
// Testing the boundary where calculated delta equals DBL_TRUE_MIN.
// Requires ilogb(hi) = E = -968.
// delta = 2^(-968 - 106) = 2^-1074 = DBL_TRUE_MIN.
- Test = MakeDoubleAPFloat("0x1p-968", Zero());
- Expected = MakeDoubleAPFloat("0x1p-968", MinSubnormal());
+ Test = makeDoubleAPFloat("0x1p-968", Zero());
+ Expected = makeDoubleAPFloat("0x1p-968", MinSubnormal());
EXPECT_EQ(Test.next(false), APFloat::opOK);
EXPECT_TRUE(Test.bitwiseIsEqual(Expected));
// Testing below the boundary (E < -968). Delta clamps to DBL_TRUE_MIN.
- Test = MakeDoubleAPFloat("0x1p-969", Zero());
- Expected = MakeDoubleAPFloat("0x1p-969", MinSubnormal());
+ Test = makeDoubleAPFloat("0x1p-969", Zero());
+ Expected = makeDoubleAPFloat("0x1p-969", MinSubnormal());
EXPECT_EQ(Test.next(false), APFloat::opOK);
EXPECT_TRUE(Test.bitwiseIsEqual(Expected));
// 3. Standard Increment (No rollover)
// hi=1.0, lo=2^-1074.
- Test = MakeDoubleAPFloat(One(), MinSubnormal());
- Expected = MakeDoubleAPFloat(One(), NextUp(MinSubnormal()));
+ Test = makeDoubleAPFloat(One(), MinSubnormal());
+ Expected = makeDoubleAPFloat(One(), NextUp(MinSubnormal()));
EXPECT_EQ(Test.next(false), APFloat::opOK);
EXPECT_TRUE(Test.bitwiseIsEqual(Expected));
// Incrementing negative lo.
- Test = MakeDoubleAPFloat(One(), -MinSubnormal());
- Expected = MakeDoubleAPFloat(One(), Zero());
+ Test = makeDoubleAPFloat(One(), -MinSubnormal());
+ Expected = makeDoubleAPFloat(One(), Zero());
EXPECT_EQ(Test.next(false), APFloat::opOK);
EXPECT_EQ(Test.compare(Expected), APFloat::cmpEqual);
// Crossing lo=0.
- Test = MakeDoubleAPFloat(One(), -MinSubnormal());
- Expected = MakeDoubleAPFloat(One(), Zero());
+ Test = makeDoubleAPFloat(One(), -MinSubnormal());
+ Expected = makeDoubleAPFloat(One(), Zero());
EXPECT_EQ(Test.next(false), APFloat::opOK);
EXPECT_EQ(Test.compare(Expected), APFloat::cmpEqual);
// 4. Rollover Cases around 1.0 (Positive hi)
// hi=1.0, lo=nextDown(2^-53).
- Test = MakeDoubleAPFloat(One(), NextDown(EpsNeg()));
+ Test = makeDoubleAPFloat(One(), NextDown(EpsNeg()));
EXPECT_FALSE(Test.isDenormal());
- Expected = MakeDoubleAPFloat(One(), EpsNeg());
+ Expected = makeDoubleAPFloat(One(), EpsNeg());
EXPECT_FALSE(Test.isDenormal());
EXPECT_EQ(Test.next(false), APFloat::opOK);
EXPECT_TRUE(Test.bitwiseIsEqual(Expected));
@@ -5778,17 +6225,17 @@ TEST(APFloatTest, PPCDoubleDoubleNext) {
// Can't naively TwoSum(0x1p+0, nextUp(0x1p-53)):
// It gives {nextUp(0x1p+0), nextUp(nextUp(-0x1p-53))} but the next
// number should be {nextUp(0x1p+0), nextUp(-0x1p-53)}.
- Test = MakeDoubleAPFloat(One(), EpsNeg());
+ Test = makeDoubleAPFloat(One(), EpsNeg());
EXPECT_FALSE(Test.isDenormal());
- Expected = MakeDoubleAPFloat(NextUp(One()), NextUp(-EpsNeg()));
+ Expected = makeDoubleAPFloat(NextUp(One()), NextUp(-EpsNeg()));
EXPECT_EQ(Test.next(false), APFloat::opOK);
EXPECT_TRUE(Test.bitwiseIsEqual(Expected));
EXPECT_FALSE(Test.isDenormal());
// hi = nextDown(1), lo = nextDown(0x1p-54)
- Test = MakeDoubleAPFloat(NextDown(One()), NextDown(APFloat(0x1p-54)));
+ Test = makeDoubleAPFloat(NextDown(One()), NextDown(APFloat(0x1p-54)));
EXPECT_FALSE(Test.isDenormal());
- Expected = MakeDoubleAPFloat(One(), APFloat(-0x1p-54));
+ Expected = makeDoubleAPFloat(One(), APFloat(-0x1p-54));
EXPECT_EQ(Test.next(false), APFloat::opOK);
EXPECT_TRUE(Test.bitwiseIsEqual(Expected));
EXPECT_FALSE(Test.isDenormal());
@@ -5796,26 +6243,26 @@ TEST(APFloatTest, PPCDoubleDoubleNext) {
// 5. Negative Rollover (Moving towards Zero / +Inf)
// hi = -1, lo = nextDown(0x1p-54)
- Test = MakeDoubleAPFloat(APFloat(-1.0), NextDown(APFloat(0x1p-54)));
+ Test = makeDoubleAPFloat(APFloat(-1.0), NextDown(APFloat(0x1p-54)));
EXPECT_FALSE(Test.isDenormal());
- Expected = MakeDoubleAPFloat(APFloat(-1.0), APFloat(0x1p-54));
+ Expected = makeDoubleAPFloat(APFloat(-1.0), APFloat(0x1p-54));
EXPECT_EQ(Test.next(false), APFloat::opOK);
EXPECT_TRUE(Test.bitwiseIsEqual(Expected));
EXPECT_FALSE(Test.isDenormal());
// hi = -1, lo = 0x1p-54
- Test = MakeDoubleAPFloat(APFloat(-1.0), APFloat(0x1p-54));
+ Test = makeDoubleAPFloat(APFloat(-1.0), APFloat(0x1p-54));
EXPECT_FALSE(Test.isDenormal());
Expected =
- MakeDoubleAPFloat(NextUp(APFloat(-1.0)), NextUp(APFloat(-0x1p-54)));
+ makeDoubleAPFloat(NextUp(APFloat(-1.0)), NextUp(APFloat(-0x1p-54)));
EXPECT_EQ(Test.next(false), APFloat::opOK);
EXPECT_TRUE(Test.bitwiseIsEqual(Expected));
EXPECT_FALSE(Test.isDenormal());
// 6. Rollover across Power of 2 boundary (Exponent change)
- Test = MakeDoubleAPFloat(NextDown(APFloat(2.0)), NextDown(EpsNeg()));
+ Test = makeDoubleAPFloat(NextDown(APFloat(2.0)), NextDown(EpsNeg()));
EXPECT_FALSE(Test.isDenormal());
- Expected = MakeDoubleAPFloat(APFloat(2.0), -EpsNeg());
+ Expected = makeDoubleAPFloat(APFloat(2.0), -EpsNeg());
EXPECT_EQ(Test.next(false), APFloat::opOK);
EXPECT_TRUE(Test.bitwiseIsEqual(Expected));
EXPECT_FALSE(Test.isDenormal());
>From 477a65a051ce151895193f8dede1262fdc251132 Mon Sep 17 00:00:00 2001
From: Cyndy Ishida <cyndy_ishida at apple.com>
Date: Wed, 6 Aug 2025 10:44:00 -0700
Subject: [PATCH 131/171] [TextAPI] Separate out Arch name from enum label,
NFCI (#152332)
The enum label, e.g. `AK_arm64`, is often the same as the architecture
name, but doesn't have to be. Separate it out.
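
A hedged sketch of what the extra column permits; the entry below is invented
for illustration and does not exist upstream:
```
// Hypothetical: the enum label and the user-facing name may now diverge.
// Generates AK_arm64custom while printing/parsing as "arm64-custom".
ARCHINFO(arm64custom, arm64-custom, MachO::CPU_TYPE_ARM64, MachO::CPU_SUBTYPE_ARM64_ALL, 64)
```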
---
llvm/include/llvm/TextAPI/Architecture.def | 30 +++++++++++-----------
llvm/include/llvm/TextAPI/Architecture.h | 2 +-
llvm/lib/TextAPI/Architecture.cpp | 12 ++++-----
llvm/lib/TextAPI/TextStubCommon.cpp | 2 +-
4 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/llvm/include/llvm/TextAPI/Architecture.def b/llvm/include/llvm/TextAPI/Architecture.def
index 58ef31b25fe0b..5d310c7982444 100644
--- a/llvm/include/llvm/TextAPI/Architecture.def
+++ b/llvm/include/llvm/TextAPI/Architecture.def
@@ -13,33 +13,33 @@
///
/// X86 architectures sorted by cpu type and sub type id.
///
-ARCHINFO(i386, MachO::CPU_TYPE_I386, MachO::CPU_SUBTYPE_I386_ALL, 32)
-ARCHINFO(x86_64, MachO::CPU_TYPE_X86_64, MachO::CPU_SUBTYPE_X86_64_ALL, 64)
-ARCHINFO(x86_64h, MachO::CPU_TYPE_X86_64, MachO::CPU_SUBTYPE_X86_64_H, 64)
+ARCHINFO(i386, i386, MachO::CPU_TYPE_I386, MachO::CPU_SUBTYPE_I386_ALL, 32)
+ARCHINFO(x86_64, x86_64, MachO::CPU_TYPE_X86_64, MachO::CPU_SUBTYPE_X86_64_ALL, 64)
+ARCHINFO(x86_64h, x86_64h, MachO::CPU_TYPE_X86_64, MachO::CPU_SUBTYPE_X86_64_H, 64)
///
/// ARM architectures sorted by cpu sub type id.
///
-ARCHINFO(armv4t, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V4T, 32)
-ARCHINFO(armv6, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V6, 32)
-ARCHINFO(armv5, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V5TEJ, 32)
-ARCHINFO(armv7, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V7, 32)
-ARCHINFO(armv7s, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V7S, 32)
-ARCHINFO(armv7k, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V7K, 32)
-ARCHINFO(armv6m, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V6M, 32)
-ARCHINFO(armv7m, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V7M, 32)
-ARCHINFO(armv7em, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V7EM, 32)
+ARCHINFO(armv4t, armv4t, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V4T, 32)
+ARCHINFO(armv6, armv6, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V6, 32)
+ARCHINFO(armv5, armv5, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V5TEJ, 32)
+ARCHINFO(armv7, armv7, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V7, 32)
+ARCHINFO(armv7s, armv7s, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V7S, 32)
+ARCHINFO(armv7k, armv7k, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V7K, 32)
+ARCHINFO(armv6m, armv6m, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V6M, 32)
+ARCHINFO(armv7m, armv7m, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V7M, 32)
+ARCHINFO(armv7em, armv7em, MachO::CPU_TYPE_ARM, MachO::CPU_SUBTYPE_ARM_V7EM, 32)
///
/// ARM64 architectures sorted by cpu sub type id.
///
-ARCHINFO(arm64, MachO::CPU_TYPE_ARM64, MachO::CPU_SUBTYPE_ARM64_ALL, 64)
-ARCHINFO(arm64e, MachO::CPU_TYPE_ARM64, MachO::CPU_SUBTYPE_ARM64E, 64)
+ARCHINFO(arm64, arm64, MachO::CPU_TYPE_ARM64, MachO::CPU_SUBTYPE_ARM64_ALL, 64)
+ARCHINFO(arm64e, arm64e, MachO::CPU_TYPE_ARM64, MachO::CPU_SUBTYPE_ARM64E, 64)
///
/// ARM64_32 architectures sorted by cpu sub type id
///
-ARCHINFO(arm64_32, MachO::CPU_TYPE_ARM64_32, MachO::CPU_SUBTYPE_ARM64_32_V8, 32)
+ARCHINFO(arm64_32, arm64_32, MachO::CPU_TYPE_ARM64_32, MachO::CPU_SUBTYPE_ARM64_32_V8, 32)
diff --git a/llvm/include/llvm/TextAPI/Architecture.h b/llvm/include/llvm/TextAPI/Architecture.h
index 7a7f5416fe7c7..2ca199489f1d3 100644
--- a/llvm/include/llvm/TextAPI/Architecture.h
+++ b/llvm/include/llvm/TextAPI/Architecture.h
@@ -26,7 +26,7 @@ namespace MachO {
/// Defines the architecture slices that are supported by Text-based Stub files.
enum Architecture : uint8_t {
-#define ARCHINFO(Arch, Type, SubType, NumBits) AK_##Arch,
+#define ARCHINFO(Arch, Name, Type, SubType, NumBits) AK_##Arch,
#include "llvm/TextAPI/Architecture.def"
#undef ARCHINFO
AK_unknown, // this has to go last.
diff --git a/llvm/lib/TextAPI/Architecture.cpp b/llvm/lib/TextAPI/Architecture.cpp
index 51ca91db13008..3b5306746e1c5 100644
--- a/llvm/lib/TextAPI/Architecture.cpp
+++ b/llvm/lib/TextAPI/Architecture.cpp
@@ -21,7 +21,7 @@ namespace llvm {
namespace MachO {
Architecture getArchitectureFromCpuType(uint32_t CPUType, uint32_t CPUSubType) {
-#define ARCHINFO(Arch, Type, Subtype, NumBits) \
+#define ARCHINFO(Arch, Name, Type, Subtype, NumBits) \
if (CPUType == (Type) && \
(CPUSubType & ~MachO::CPU_SUBTYPE_MASK) == (Subtype)) \
return AK_##Arch;
@@ -33,7 +33,7 @@ Architecture getArchitectureFromCpuType(uint32_t CPUType, uint32_t CPUSubType) {
Architecture getArchitectureFromName(StringRef Name) {
return StringSwitch<Architecture>(Name)
-#define ARCHINFO(Arch, Type, Subtype, NumBits) .Case(#Arch, AK_##Arch)
+#define ARCHINFO(Arch, Name, Type, Subtype, NumBits) .Case(#Name, AK_##Arch)
#include "llvm/TextAPI/Architecture.def"
#undef ARCHINFO
.Default(AK_unknown);
@@ -41,9 +41,9 @@ Architecture getArchitectureFromName(StringRef Name) {
StringRef getArchitectureName(Architecture Arch) {
switch (Arch) {
-#define ARCHINFO(Arch, Type, Subtype, NumBits) \
+#define ARCHINFO(Arch, Name, Type, Subtype, NumBits) \
case AK_##Arch: \
- return #Arch;
+ return #Name;
#include "llvm/TextAPI/Architecture.def"
#undef ARCHINFO
case AK_unknown:
@@ -57,7 +57,7 @@ StringRef getArchitectureName(Architecture Arch) {
std::pair<uint32_t, uint32_t> getCPUTypeFromArchitecture(Architecture Arch) {
switch (Arch) {
-#define ARCHINFO(Arch, Type, Subtype, NumBits) \
+#define ARCHINFO(Arch, Name, Type, Subtype, NumBits) \
case AK_##Arch: \
return std::make_pair(Type, Subtype);
#include "llvm/TextAPI/Architecture.def"
@@ -77,7 +77,7 @@ Architecture mapToArchitecture(const Triple &Target) {
bool is64Bit(Architecture Arch) {
switch (Arch) {
-#define ARCHINFO(Arch, Type, Subtype, NumBits) \
+#define ARCHINFO(Arch, Name, Type, Subtype, NumBits) \
case AK_##Arch: \
return NumBits == 64;
#include "llvm/TextAPI/Architecture.def"
diff --git a/llvm/lib/TextAPI/TextStubCommon.cpp b/llvm/lib/TextAPI/TextStubCommon.cpp
index 0b710b0790b3d..7bf1f9ab4c939 100644
--- a/llvm/lib/TextAPI/TextStubCommon.cpp
+++ b/llvm/lib/TextAPI/TextStubCommon.cpp
@@ -133,7 +133,7 @@ QuotingType ScalarTraits<PlatformSet>::mustQuote(StringRef) {
void ScalarBitSetTraits<ArchitectureSet>::bitset(IO &IO,
ArchitectureSet &Archs) {
-#define ARCHINFO(arch, type, subtype, numbits) \
+#define ARCHINFO(arch, name, type, subtype, numbits) \
IO.bitSetCase(Archs, #arch, 1U << static_cast<int>(AK_##arch));
#include "llvm/TextAPI/Architecture.def"
#undef ARCHINFO
>From 503c0908c3450d228debd64baecf41df8f58476e Mon Sep 17 00:00:00 2001
From: Michael Jones <michaelrj at google.com>
Date: Wed, 6 Aug 2025 11:05:08 -0700
Subject: [PATCH 132/171] [libc] fix iswalpha signature and test (#152343)
The iswalpha function currently only works for characters in the ASCII
range, which isn't ideal. It also returned a bool when the function is
supposed to return an int.
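
For reference, a minimal sketch of the contract being restored, in terms of
the plain C `<wctype.h>` interface (not LLVM-libc internals):
```
#include <wctype.h>

// iswalpha returns an int: nonzero for alphabetic wide characters and
// zero otherwise; it is not specified to return exactly 0 or 1.
int demo(void) {
  return iswalpha(L'a') != 0 && iswalpha(L'3') == 0;
}
```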
---
libc/src/wctype/iswalpha.cpp | 2 +-
libc/src/wctype/iswalpha.h | 2 +-
libc/test/src/wctype/iswalpha_test.cpp | 78 +++++++++++++-------------
3 files changed, 42 insertions(+), 40 deletions(-)
diff --git a/libc/src/wctype/iswalpha.cpp b/libc/src/wctype/iswalpha.cpp
index e18f29370fbd0..09f55d391dbff 100644
--- a/libc/src/wctype/iswalpha.cpp
+++ b/libc/src/wctype/iswalpha.cpp
@@ -14,6 +14,6 @@
namespace LIBC_NAMESPACE_DECL {
-LLVM_LIBC_FUNCTION(bool, iswalpha, (wint_t c)) { return internal::iswalpha(c); }
+LLVM_LIBC_FUNCTION(int, iswalpha, (wint_t c)) { return internal::iswalpha(c); }
} // namespace LIBC_NAMESPACE_DECL
diff --git a/libc/src/wctype/iswalpha.h b/libc/src/wctype/iswalpha.h
index 681fc6ba79a54..0353388607b60 100644
--- a/libc/src/wctype/iswalpha.h
+++ b/libc/src/wctype/iswalpha.h
@@ -14,7 +14,7 @@
namespace LIBC_NAMESPACE_DECL {
-bool iswalpha(wint_t c);
+int iswalpha(wint_t c);
} // namespace LIBC_NAMESPACE_DECL
diff --git a/libc/test/src/wctype/iswalpha_test.cpp b/libc/test/src/wctype/iswalpha_test.cpp
index f3f75f4dc7aa5..a82c0057c1807 100644
--- a/libc/test/src/wctype/iswalpha_test.cpp
+++ b/libc/test/src/wctype/iswalpha_test.cpp
@@ -9,46 +9,48 @@
#include "src/__support/CPP/span.h"
#include "src/wctype/iswalpha.h"
-#include "test/UnitTest/LibcTest.h"
#include "test/UnitTest/Test.h"
-namespace {
-
-// TODO: Merge the wctype tests using this framework.
-constexpr char WALPHA_ARRAY[] = {
- 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
- 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z',
- 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M',
- 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z',
-};
-
-bool in_span(int ch, LIBC_NAMESPACE::cpp::span<const char> arr) {
- for (size_t i = 0; i < arr.size(); ++i)
- if (static_cast<int>(arr[i]) == ch)
- return true;
- return false;
-}
-
-} // namespace
-
TEST(LlvmLibciswalpha, SimpleTest) {
- EXPECT_TRUE(LIBC_NAMESPACE::iswalpha('a'));
- EXPECT_TRUE(LIBC_NAMESPACE::iswalpha('B'));
-
- EXPECT_FALSE(LIBC_NAMESPACE::iswalpha('3'));
- EXPECT_FALSE(LIBC_NAMESPACE::iswalpha(' '));
- EXPECT_FALSE(LIBC_NAMESPACE::iswalpha('?'));
- EXPECT_FALSE(LIBC_NAMESPACE::iswalpha('\0'));
- EXPECT_FALSE(LIBC_NAMESPACE::iswalpha(-1));
+ EXPECT_NE(LIBC_NAMESPACE::iswalpha('a'), 0);
+ EXPECT_NE(LIBC_NAMESPACE::iswalpha('B'), 0);
+
+ EXPECT_EQ(LIBC_NAMESPACE::iswalpha('3'), 0);
+ EXPECT_EQ(LIBC_NAMESPACE::iswalpha(' '), 0);
+ EXPECT_EQ(LIBC_NAMESPACE::iswalpha('?'), 0);
+ EXPECT_EQ(LIBC_NAMESPACE::iswalpha('\0'), 0);
+ EXPECT_EQ(LIBC_NAMESPACE::iswalpha(-1), 0);
}
-TEST(LlvmLibciswalpha, DefaultLocale) {
- // Loops through all characters, verifying that letters return
- // true and everything else returns false.
- for (int ch = -255; ch < 255; ++ch) {
- if (in_span(ch, WALPHA_ARRAY))
- EXPECT_TRUE(LIBC_NAMESPACE::iswalpha(ch));
- else
- EXPECT_FALSE(LIBC_NAMESPACE::iswalpha(ch));
- }
-}
+// TODO: once iswalpha supports more than just ASCII-range characters, add a
+// proper test.
+
+// namespace {
+
+// // TODO: Merge the wctype tests using this framework.
+// constexpr char WALPHA_ARRAY[] = {
+// 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
+// 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z',
+// 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M',
+// 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z',
+// };
+
+// bool in_span(int ch, LIBC_NAMESPACE::cpp::span<const char> arr) {
+// for (size_t i = 0; i < arr.size(); ++i)
+// if (static_cast<int>(arr[i]) == ch)
+// return true;
+// return false;
+// }
+
+// } // namespace
+
+// TEST(LlvmLibciswalpha, DefaultLocale) {
+// // Loops through all characters, verifying that letters return
+// // true and everything else returns false.
+// for (int ch = -255; ch < 255; ++ch) {
+// if (in_span(ch, WALPHA_ARRAY))
+// EXPECT_TRUE(LIBC_NAMESPACE::iswalpha(ch));
+// else
+// EXPECT_FALSE(LIBC_NAMESPACE::iswalpha(ch));
+// }
+// }
>From 266a1a819a4ed8b2a12cb47f088197e4dd038a54 Mon Sep 17 00:00:00 2001
From: Arthur Eubanks <aeubanks at google.com>
Date: Wed, 6 Aug 2025 11:07:09 -0700
Subject: [PATCH 133/171] [clang-tools-extra][test] Fix missed %T removals from
#151538 (#152345)
These tests are failing on our bots presumably due to these missing %T
replacements.
---
clang-tools-extra/test/clang-tidy/infrastructure/diagnostic.cpp | 2 +-
.../test/clang-tidy/infrastructure/export-relpath.cpp | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/clang-tools-extra/test/clang-tidy/infrastructure/diagnostic.cpp b/clang-tools-extra/test/clang-tidy/infrastructure/diagnostic.cpp
index 610d1dadfdb56..a9d140bc9318e 100644
--- a/clang-tools-extra/test/clang-tidy/infrastructure/diagnostic.cpp
+++ b/clang-tools-extra/test/clang-tidy/infrastructure/diagnostic.cpp
@@ -15,7 +15,7 @@
// use it after failing to parse commands from the command line:
//
// RUN: mkdir -p %t.dir/diagnostics/
-// RUN: echo '[{"directory": "%/t.dir/diagnostics/","command": "clang++ -fan-option-from-compilation-database -c %/T/diagnostics/input.cpp", "file": "%/T/diagnostics/input.cpp"}]' > %t.dir/diagnostics/compile_commands.json
+// RUN: echo '[{"directory": "%/t.dir/diagnostics/","command": "clang++ -fan-option-from-compilation-database -c %/t.dir/diagnostics/input.cpp", "file": "%/t.dir/diagnostics/input.cpp"}]' > %t.dir/diagnostics/compile_commands.json
// RUN: cat %s > %t.dir/diagnostics/input.cpp
// RUN: not clang-tidy -checks='-*,modernize-use-override' %t.dir/diagnostics/nonexistent.cpp -- 2>&1 | FileCheck -check-prefix=CHECK1 -implicit-check-not='{{warning:|error:}}' %s
// RUN: not clang-tidy -checks='-*,clang-diagnostic-*,google-explicit-constructor' %t.dir/diagnostics/input.cpp -- -fan-unknown-option 2>&1 | FileCheck -check-prefix=CHECK2 -implicit-check-not='{{warning:|error:}}' %s
diff --git a/clang-tools-extra/test/clang-tidy/infrastructure/export-relpath.cpp b/clang-tools-extra/test/clang-tidy/infrastructure/export-relpath.cpp
index 5fd7303173e1c..051f008ad3f3a 100644
--- a/clang-tools-extra/test/clang-tidy/infrastructure/export-relpath.cpp
+++ b/clang-tools-extra/test/clang-tidy/infrastructure/export-relpath.cpp
@@ -2,7 +2,7 @@
// RUN: rm -rf %t.dir/clang-tidy/export-relpath
// RUN: mkdir -p %t.dir/clang-tidy/export-relpath/subdir
// RUN: cp %s %t.dir/clang-tidy/export-relpath/subdir/source.cpp
-// RUN: echo '[{ "directory": "%/t.dir/clang-tidy/export-relpath/subdir", "command": "clang++ source.cpp", "file": "%/T/clang-tidy/export-relpath/subdir/source.cpp"}]' > %t.dir/clang-tidy/export-relpath/subdir/compile_commands.json
+// RUN: echo '[{ "directory": "%/t.dir/clang-tidy/export-relpath/subdir", "command": "clang++ source.cpp", "file": "%/t.dir/clang-tidy/export-relpath/subdir/source.cpp"}]' > %t.dir/clang-tidy/export-relpath/subdir/compile_commands.json
//
// Check that running clang-tidy in './subdir' and storing results
// in './fixes.yaml' works as expected.
>From a1209d868632b8aea10450cd2323848ab0b6776a Mon Sep 17 00:00:00 2001
From: Andrew Lazarev <alazarev at google.com>
Date: Wed, 6 Aug 2025 11:21:49 -0700
Subject: [PATCH 134/171] [ubsan_minimal] Allow UBSan handler from Minimal
runtime to accept arguments (#152192)
+ Changed the type_mismatch minimal handler to accept and print the pointer.
This makes it possible to distinguish null-pointer use, misalignment, and
incorrect object size.
The change increases binary size by ~1% and has almost no performance
impact.
Fixes #149943
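
A hedged sketch of how a program could observe the new argument by overriding
the runtime's weak report hook; the signature matches the diff below, while
the body is illustrative:
```
#include <cstdint>

// The minimal runtime defines this hook weakly, so a strong definition
// in the program takes precedence over the default message printer.
extern "C" void __ubsan_report_error(const char *kind, uintptr_t caller,
                                     const uintptr_t *address) {
  // `address` is null for checks that carry no pointer operand;
  // forward `kind`, `caller`, and `*address` to a custom logger here.
}
```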
---
clang/lib/CodeGen/CGExpr.cpp | 59 ++++++++++++-------
.../ubsan_minimal/ubsan_minimal_handlers.cpp | 40 ++++++++++---
.../ubsan_minimal/TestCases/misalignment.cpp | 13 ++++
.../test/ubsan_minimal/TestCases/null.cpp | 11 ++++
4 files changed, 93 insertions(+), 30 deletions(-)
create mode 100644 compiler-rt/test/ubsan_minimal/TestCases/misalignment.cpp
create mode 100644 compiler-rt/test/ubsan_minimal/TestCases/null.cpp
diff --git a/clang/lib/CodeGen/CGExpr.cpp b/clang/lib/CodeGen/CGExpr.cpp
index 5a3d4e447b229..ed35a055d8a7f 100644
--- a/clang/lib/CodeGen/CGExpr.cpp
+++ b/clang/lib/CodeGen/CGExpr.cpp
@@ -3789,33 +3789,50 @@ void CodeGenFunction::EmitCheck(
Branch->setMetadata(llvm::LLVMContext::MD_prof, Node);
EmitBlock(Handlers);
+ // Clear arguments for the MinimalRuntime handler.
+ if (CGM.getCodeGenOpts().SanitizeMinimalRuntime) {
+ switch (CheckHandler) {
+ case SanitizerHandler::TypeMismatch:
+ // Pass value pointer only. It adds minimal overhead.
+ StaticArgs = {};
+ assert(DynamicArgs.size() == 1);
+ break;
+ default:
+ // No arguments for other checks.
+ StaticArgs = {};
+ DynamicArgs = {};
+ break;
+ }
+ }
+
// Handler functions take an i8* pointing to the (handler-specific) static
// information block, followed by a sequence of intptr_t arguments
// representing operand values.
SmallVector<llvm::Value *, 4> Args;
SmallVector<llvm::Type *, 4> ArgTypes;
- if (!CGM.getCodeGenOpts().SanitizeMinimalRuntime) {
- Args.reserve(DynamicArgs.size() + 1);
- ArgTypes.reserve(DynamicArgs.size() + 1);
-
- // Emit handler arguments and create handler function type.
- if (!StaticArgs.empty()) {
- llvm::Constant *Info = llvm::ConstantStruct::getAnon(StaticArgs);
- auto *InfoPtr = new llvm::GlobalVariable(
- CGM.getModule(), Info->getType(), false,
- llvm::GlobalVariable::PrivateLinkage, Info, "", nullptr,
- llvm::GlobalVariable::NotThreadLocal,
- CGM.getDataLayout().getDefaultGlobalsAddressSpace());
- InfoPtr->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::Global);
- CGM.getSanitizerMetadata()->disableSanitizerForGlobal(InfoPtr);
- Args.push_back(InfoPtr);
- ArgTypes.push_back(Args.back()->getType());
- }
- for (llvm::Value *DynamicArg : DynamicArgs) {
- Args.push_back(EmitCheckValue(DynamicArg));
- ArgTypes.push_back(IntPtrTy);
- }
+ Args.reserve(DynamicArgs.size() + 1);
+ ArgTypes.reserve(DynamicArgs.size() + 1);
+
+ // Emit handler arguments and create handler function type.
+ if (!StaticArgs.empty()) {
+ llvm::Constant *Info = llvm::ConstantStruct::getAnon(StaticArgs);
+ auto *InfoPtr = new llvm::GlobalVariable(
+ CGM.getModule(), Info->getType(),
+ // Non-constant global is used in a handler to deduplicate reports.
+ // TODO: change deduplication logic and make it constant.
+ /*isConstant=*/false, llvm::GlobalVariable::PrivateLinkage, Info, "",
+ nullptr, llvm::GlobalVariable::NotThreadLocal,
+ CGM.getDataLayout().getDefaultGlobalsAddressSpace());
+ InfoPtr->setUnnamedAddr(llvm::GlobalValue::UnnamedAddr::Global);
+ CGM.getSanitizerMetadata()->disableSanitizerForGlobal(InfoPtr);
+ Args.push_back(InfoPtr);
+ ArgTypes.push_back(Args.back()->getType());
+ }
+
+ for (llvm::Value *DynamicArg : DynamicArgs) {
+ Args.push_back(EmitCheckValue(DynamicArg));
+ ArgTypes.push_back(IntPtrTy);
}
llvm::FunctionType *FnType =
diff --git a/compiler-rt/lib/ubsan_minimal/ubsan_minimal_handlers.cpp b/compiler-rt/lib/ubsan_minimal/ubsan_minimal_handlers.cpp
index ebc36a8583e05..b1f4eab26de0e 100644
--- a/compiler-rt/lib/ubsan_minimal/ubsan_minimal_handlers.cpp
+++ b/compiler-rt/lib/ubsan_minimal/ubsan_minimal_handlers.cpp
@@ -34,12 +34,16 @@ static char *append_hex(uintptr_t d, char *buf, const char *end) {
return buf;
}
-static void format_msg(const char *kind, uintptr_t caller, char *buf,
- const char *end) {
+static void format_msg(const char *kind, uintptr_t caller,
+ const uintptr_t *address, char *buf, const char *end) {
buf = append_str("ubsan: ", buf, end);
buf = append_str(kind, buf, end);
buf = append_str(" by 0x", buf, end);
buf = append_hex(caller, buf, end);
+ if (address) {
+ buf = append_str(" address 0x", buf, end);
+ buf = append_hex(*address, buf, end);
+ }
buf = append_str("\n", buf, end);
if (buf == end)
--buf; // Make sure we don't cause a buffer overflow.
@@ -47,7 +51,7 @@ static void format_msg(const char *kind, uintptr_t caller, char *buf,
}
SANITIZER_INTERFACE_WEAK_DEF(void, __ubsan_report_error, const char *kind,
- uintptr_t caller) {
+ uintptr_t caller, const uintptr_t *address) {
if (caller == 0)
return;
while (true) {
@@ -80,15 +84,15 @@ SANITIZER_INTERFACE_WEAK_DEF(void, __ubsan_report_error, const char *kind,
__sanitizer::atomic_store_relaxed(&caller_pcs[sz], caller);
char msg_buf[128];
- format_msg(kind, caller, msg_buf, msg_buf + sizeof(msg_buf));
+ format_msg(kind, caller, address, msg_buf, msg_buf + sizeof(msg_buf));
message(msg_buf);
}
}
SANITIZER_INTERFACE_WEAK_DEF(void, __ubsan_report_error_fatal, const char *kind,
- uintptr_t caller) {
+ uintptr_t caller, const uintptr_t *address) {
// Use the other handler, in case it's already overridden.
- __ubsan_report_error(kind, caller);
+ __ubsan_report_error(kind, caller, address);
}
#if defined(__ANDROID__)
@@ -121,13 +125,13 @@ void NORETURN CheckFailed(const char *file, int, const char *cond, u64, u64) {
#define HANDLER_RECOVER(name, kind) \
INTERFACE void __ubsan_handle_##name##_minimal() { \
- __ubsan_report_error(kind, GET_CALLER_PC()); \
+ __ubsan_report_error(kind, GET_CALLER_PC(), nullptr); \
}
#define HANDLER_NORECOVER(name, kind) \
INTERFACE void __ubsan_handle_##name##_minimal_abort() { \
uintptr_t caller = GET_CALLER_PC(); \
- __ubsan_report_error_fatal(kind, caller); \
+ __ubsan_report_error_fatal(kind, caller, nullptr); \
abort_with_message(kind, caller); \
}
@@ -135,7 +139,25 @@ void NORETURN CheckFailed(const char *file, int, const char *cond, u64, u64) {
HANDLER_RECOVER(name, kind) \
HANDLER_NORECOVER(name, kind)
-HANDLER(type_mismatch, "type-mismatch")
+#define HANDLER_RECOVER_PTR(name, kind) \
+ INTERFACE void __ubsan_handle_##name##_minimal(const uintptr_t address) { \
+ __ubsan_report_error(kind, GET_CALLER_PC(), &address); \
+ }
+
+#define HANDLER_NORECOVER_PTR(name, kind) \
+ INTERFACE void __ubsan_handle_##name##_minimal_abort( \
+ const uintptr_t address) { \
+ uintptr_t caller = GET_CALLER_PC(); \
+ __ubsan_report_error_fatal(kind, caller, &address); \
+ abort_with_message(kind, caller); \
+ }
+
+// A version of a handler that takes a pointer to a value.
+#define HANDLER_PTR(name, kind) \
+ HANDLER_RECOVER_PTR(name, kind) \
+ HANDLER_NORECOVER_PTR(name, kind)
+
+HANDLER_PTR(type_mismatch, "type-mismatch")
HANDLER(alignment_assumption, "alignment-assumption")
HANDLER(add_overflow, "add-overflow")
HANDLER(sub_overflow, "sub-overflow")
diff --git a/compiler-rt/test/ubsan_minimal/TestCases/misalignment.cpp b/compiler-rt/test/ubsan_minimal/TestCases/misalignment.cpp
new file mode 100644
index 0000000000000..cf828792a324f
--- /dev/null
+++ b/compiler-rt/test/ubsan_minimal/TestCases/misalignment.cpp
@@ -0,0 +1,13 @@
+// RUN: %clang_min_runtime -fsanitize=alignment %s -o %t && %run %t 2>&1 | FileCheck %s --check-prefixes=CHECK
+
+void f(int &n) {}
+
+int *t;
+
+int main() {
+ int r;
+ t = (int *)(((char *)&r) + 1);
+ // CHECK: ubsan: type-mismatch by 0x{{[[:xdigit:]]+}} address 0x{{[[:xdigit:]]+$}}
+ // CHECK-NOT: type-mismatch
+ f(*t);
+}
diff --git a/compiler-rt/test/ubsan_minimal/TestCases/null.cpp b/compiler-rt/test/ubsan_minimal/TestCases/null.cpp
new file mode 100644
index 0000000000000..c97527b72cd0c
--- /dev/null
+++ b/compiler-rt/test/ubsan_minimal/TestCases/null.cpp
@@ -0,0 +1,11 @@
+// RUN: %clang_min_runtime -fsanitize=null %s -o %t && %run %t 2>&1 | FileCheck %s --check-prefixes=CHECK
+
+void f(int &n) {}
+
+int *t;
+
+int main() {
+ // CHECK: ubsan: type-mismatch by 0x{{[[:xdigit:]]+}} address 0x{{[[:xdigit:]]+$}}
+ // CHECK-NOT: type-mismatch
+ f(*t);
+}
>From a418fa7cdcf4f334a1955389fcec483ea2c77c44 Mon Sep 17 00:00:00 2001
From: Daniel Paoliello <danpao at microsoft.com>
Date: Wed, 6 Aug 2025 11:39:41 -0700
Subject: [PATCH 135/171] [win][aarch64] Add support for detecting the Host CPU
on Arm64 Windows (#151596)
Uses the `CP 4000` registry values under
`HKLM\HARDWARE\DESCRIPTION\System\CentralProcessor\*` to get the
Implementer and Part, which are then provided to a modified form of
`getHostCPUNameForARM` to map them to a CPU name.
On my local Surface Pro 11 `llc --version` reports:
```
> .\build\bin\llc.exe --version
LLVM (http://llvm.org/):
LLVM version 22.0.0git
Optimized build with assertions.
Default target: aarch64-pc-windows-msvc
Host CPU: oryon-1
```
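For context, here is a minimal, self-contained sketch of the field decoding
the patch performs on those cached register values. The bit layout matches
the `MIDR_EL1` union in the diff below; the helper name and the printed
formatting are illustrative only.

```cpp
#include <cstdint>
#include <cstdio>

// Fields of MIDR_EL1 as cached by Windows in the "CP 4000" registry value.
struct MIDRFields {
  unsigned Revision;     // bits [3:0]
  unsigned PartNum;      // bits [15:4]
  unsigned Architecture; // bits [19:16]
  unsigned Variant;      // bits [23:20]
  unsigned Implementer;  // bits [31:24]
};

static MIDRFields decodeMIDR(uint64_t Raw) {
  return {static_cast<unsigned>(Raw & 0xF),
          static_cast<unsigned>((Raw >> 4) & 0xFFF),
          static_cast<unsigned>((Raw >> 16) & 0xF),
          static_cast<unsigned>((Raw >> 20) & 0xF),
          static_cast<unsigned>((Raw >> 24) & 0xFF)};
}

int main() {
  // 0x4100d850: implementer 0x41 (Arm Ltd.), part 0xd85, matching one of
  // the unit-test values added below (mapped to cortex-x925).
  MIDRFields F = decodeMIDR(0x4100d850);
  std::printf("implementer 0x%02x part 0x%03x variant 0x%x\n",
              F.Implementer, F.PartNum, F.Variant);
}
```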
---
llvm/include/llvm/TargetParser/Host.h | 3 +
llvm/lib/TargetParser/Host.cpp | 200 +++++++++++++++++++++-----
llvm/unittests/TargetParser/Host.cpp | 27 ++++
3 files changed, 191 insertions(+), 39 deletions(-)
diff --git a/llvm/include/llvm/TargetParser/Host.h b/llvm/include/llvm/TargetParser/Host.h
index 40a9b6cc13902..b44b9b9a4d069 100644
--- a/llvm/include/llvm/TargetParser/Host.h
+++ b/llvm/include/llvm/TargetParser/Host.h
@@ -13,6 +13,7 @@
#ifndef LLVM_TARGETPARSER_HOST_H
#define LLVM_TARGETPARSER_HOST_H
+#include "llvm/ADT/ArrayRef.h"
#include "llvm/Support/Compiler.h"
#include <string>
@@ -63,6 +64,8 @@ namespace detail {
/// Helper functions to extract HostCPUName from /proc/cpuinfo on linux.
LLVM_ABI StringRef getHostCPUNameForPowerPC(StringRef ProcCpuinfoContent);
LLVM_ABI StringRef getHostCPUNameForARM(StringRef ProcCpuinfoContent);
+LLVM_ABI StringRef getHostCPUNameForARM(uint64_t PrimaryCpuInfo,
+ ArrayRef<uint64_t> UniqueCpuInfos);
LLVM_ABI StringRef getHostCPUNameForS390x(StringRef ProcCpuinfoContent);
LLVM_ABI StringRef getHostCPUNameForRISCV(StringRef ProcCpuinfoContent);
LLVM_ABI StringRef getHostCPUNameForSPARC(StringRef ProcCpuinfoContent);
diff --git a/llvm/lib/TargetParser/Host.cpp b/llvm/lib/TargetParser/Host.cpp
index 7e09d30bf3d55..79c40c34a9dae 100644
--- a/llvm/lib/TargetParser/Host.cpp
+++ b/llvm/lib/TargetParser/Host.cpp
@@ -11,7 +11,9 @@
//===----------------------------------------------------------------------===//
#include "llvm/TargetParser/Host.h"
+#include "llvm/ADT/STLFunctionalExtras.h"
#include "llvm/ADT/SmallVector.h"
+#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringMap.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/StringSwitch.h"
@@ -167,35 +169,10 @@ StringRef sys::detail::getHostCPUNameForPowerPC(StringRef ProcCpuinfoContent) {
.Default(generic);
}
-StringRef sys::detail::getHostCPUNameForARM(StringRef ProcCpuinfoContent) {
- // The cpuid register on arm is not accessible from user space. On Linux,
- // it is exposed through the /proc/cpuinfo file.
-
- // Read 32 lines from /proc/cpuinfo, which should contain the CPU part line
- // in all cases.
- SmallVector<StringRef, 32> Lines;
- ProcCpuinfoContent.split(Lines, '\n');
-
- // Look for the CPU implementer and hardware lines, and store the CPU part
- // numbers found.
- StringRef Implementer;
- StringRef Hardware;
- SmallVector<StringRef, 32> Parts;
- for (StringRef Line : Lines) {
- if (Line.consume_front("CPU implementer"))
- Implementer = Line.ltrim("\t :");
- else if (Line.consume_front("Hardware"))
- Hardware = Line.ltrim("\t :");
- else if (Line.consume_front("CPU part"))
- Parts.emplace_back(Line.ltrim("\t :"));
- }
-
- // Last `Part' seen, in case we don't analyse all `Parts' parsed.
- StringRef Part = Parts.empty() ? StringRef() : Parts.back();
-
- // Remove duplicate `Parts'.
- llvm::sort(Parts);
- Parts.erase(llvm::unique(Parts), Parts.end());
+StringRef
+getHostCPUNameForARMFromComponents(StringRef Implementer, StringRef Hardware,
+ StringRef Part, ArrayRef<StringRef> Parts,
+ function_ref<unsigned()> GetVariant) {
auto MatchBigLittle = [](auto const &Parts, StringRef Big, StringRef Little) {
if (Parts.size() == 2)
@@ -343,21 +320,17 @@ StringRef sys::detail::getHostCPUNameForARM(StringRef ProcCpuinfoContent) {
if (Implementer == "0x53") { // Samsung Electronics Co., Ltd.
// The Exynos chips have a convoluted ID scheme that doesn't seem to follow
// any predictive pattern across variants and parts.
- unsigned Variant = 0, Part = 0;
// Look for the CPU variant line, whose value is a 1 digit hexadecimal
// number, corresponding to the Variant bits in the CP15/C0 register.
- for (auto I : Lines)
- if (I.consume_front("CPU variant"))
- I.ltrim("\t :").getAsInteger(0, Variant);
+ unsigned Variant = GetVariant();
- // Look for the CPU part line, whose value is a 3 digit hexadecimal
- // number, corresponding to the PartNum bits in the CP15/C0 register.
- for (auto I : Lines)
- if (I.consume_front("CPU part"))
- I.ltrim("\t :").getAsInteger(0, Part);
+ // Convert the CPU part line, whose value is a 3 digit hexadecimal number,
+ // corresponding to the PartNum bits in the CP15/C0 register.
+ unsigned PartAsInt;
+ Part.getAsInteger(0, PartAsInt);
- unsigned Exynos = (Variant << 12) | Part;
+ unsigned Exynos = (Variant << 12) | PartAsInt;
switch (Exynos) {
default:
// Default by falling through to Exynos M3.
@@ -416,6 +389,86 @@ StringRef sys::detail::getHostCPUNameForARM(StringRef ProcCpuinfoContent) {
return "generic";
}
+StringRef sys::detail::getHostCPUNameForARM(StringRef ProcCpuinfoContent) {
+ // The cpuid register on arm is not accessible from user space. On Linux,
+ // it is exposed through the /proc/cpuinfo file.
+
+ // Read 32 lines from /proc/cpuinfo, which should contain the CPU part line
+ // in all cases.
+ SmallVector<StringRef, 32> Lines;
+ ProcCpuinfoContent.split(Lines, '\n');
+
+ // Look for the CPU implementer and hardware lines, and store the CPU part
+ // numbers found.
+ StringRef Implementer;
+ StringRef Hardware;
+ SmallVector<StringRef, 32> Parts;
+ for (StringRef Line : Lines) {
+ if (Line.consume_front("CPU implementer"))
+ Implementer = Line.ltrim("\t :");
+ else if (Line.consume_front("Hardware"))
+ Hardware = Line.ltrim("\t :");
+ else if (Line.consume_front("CPU part"))
+ Parts.emplace_back(Line.ltrim("\t :"));
+ }
+
+ // Last `Part' seen, in case we don't analyse all `Parts' parsed.
+ StringRef Part = Parts.empty() ? StringRef() : Parts.back();
+
+ // Remove duplicate `Parts'.
+ llvm::sort(Parts);
+ Parts.erase(llvm::unique(Parts), Parts.end());
+
+ auto GetVariant = [&]() {
+ unsigned Variant = 0;
+ for (auto I : Lines)
+ if (I.consume_front("CPU variant"))
+ I.ltrim("\t :").getAsInteger(0, Variant);
+ return Variant;
+ };
+
+ return getHostCPUNameForARMFromComponents(Implementer, Hardware, Part, Parts,
+ GetVariant);
+}
+
+StringRef sys::detail::getHostCPUNameForARM(uint64_t PrimaryCpuInfo,
+ ArrayRef<uint64_t> UniqueCpuInfos) {
+ // On Windows, the registry provides cached copies of the MIDR_EL1 register.
+ union MIDR_EL1 {
+ uint64_t Raw;
+ struct _Components {
+ uint64_t Revision : 4;
+ uint64_t Partnum : 12;
+ uint64_t Architecture : 4;
+ uint64_t Variant : 4;
+ uint64_t Implementer : 8;
+ uint64_t Reserved : 32;
+ } Components;
+ };
+
+ SmallVector<std::string> PartsHolder;
+ PartsHolder.reserve(UniqueCpuInfos.size());
+ for (auto Info : UniqueCpuInfos)
+ PartsHolder.push_back("0x" + utohexstr(MIDR_EL1{Info}.Components.Partnum,
+ /*LowerCase*/ true,
+ /*Width*/ 3));
+
+ SmallVector<StringRef> Parts;
+ Parts.reserve(PartsHolder.size());
+ for (const auto &Part : PartsHolder)
+ Parts.push_back(Part);
+
+ return getHostCPUNameForARMFromComponents(
+ "0x" + utohexstr(MIDR_EL1{PrimaryCpuInfo}.Components.Implementer,
+ /*LowerCase*/ true,
+ /*Width*/ 2),
+ /*Hardware*/ "",
+ "0x" + utohexstr(MIDR_EL1{PrimaryCpuInfo}.Components.Partnum,
+ /*LowerCase*/ true,
+ /*Width*/ 3),
+ Parts, [=]() { return MIDR_EL1{PrimaryCpuInfo}.Components.Variant; });
+}
+
namespace {
StringRef getCPUNameFromS390Model(unsigned int Id, bool HaveVectorSupport) {
switch (Id) {
@@ -1450,6 +1503,75 @@ StringRef sys::getHostCPUName() {
return "generic";
}
+#elif defined(_M_ARM64) || defined(_M_ARM64EC)
+
+StringRef sys::getHostCPUName() {
+ constexpr char CentralProcessorKeyName[] =
+ "HARDWARE\\DESCRIPTION\\System\\CentralProcessor";
+ // Sub key names are simple numbers ("0", "1", etc.), so 10 chars should be
+ // enough for the slash and name.
+ constexpr size_t SubKeyNameMaxSize = ARRAYSIZE(CentralProcessorKeyName) + 10;
+
+ SmallVector<uint64_t> Values;
+ uint64_t PrimaryCpuInfo;
+ char PrimaryPartKeyName[SubKeyNameMaxSize];
+ DWORD PrimaryPartKeyNameSize = 0;
+ HKEY CentralProcessorKey;
+ if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, CentralProcessorKeyName, 0, KEY_READ,
+ &CentralProcessorKey) == ERROR_SUCCESS) {
+ for (unsigned Index = 0; Index < UINT32_MAX; ++Index) {
+ char SubKeyName[SubKeyNameMaxSize];
+ DWORD SubKeySize = SubKeyNameMaxSize;
+ HKEY SubKey;
+ if ((RegEnumKeyExA(CentralProcessorKey, Index, SubKeyName, &SubKeySize,
+ nullptr, nullptr, nullptr,
+ nullptr) == ERROR_SUCCESS) &&
+ (RegOpenKeyExA(CentralProcessorKey, SubKeyName, 0, KEY_READ,
+ &SubKey) == ERROR_SUCCESS)) {
+ // The "CP 4000" registry key contains a cached copy of the MIDR_EL1
+ // register.
+ uint64_t RegValue;
+ DWORD ActualType;
+ DWORD RegValueSize = sizeof(RegValue);
+ if ((RegQueryValueExA(SubKey, "CP 4000", nullptr, &ActualType,
+ (PBYTE)&RegValue,
+ &RegValueSize) == ERROR_SUCCESS) &&
+ (ActualType == REG_QWORD) && RegValueSize == sizeof(RegValue)) {
+ // Assume that the part with the "highest" reg key name is the primary
+ // part (to match the way that Linux's cpuinfo is written). Win32
+ // makes no guarantees about the order of sub keys, so we have to
+ // compare the names.
+ if (PrimaryPartKeyNameSize < SubKeySize ||
+ (PrimaryPartKeyNameSize == SubKeySize &&
+ ::memcmp(SubKeyName, PrimaryPartKeyName, SubKeySize) > 0)) {
+ PrimaryCpuInfo = RegValue;
+ ::memcpy(PrimaryPartKeyName, SubKeyName, SubKeySize + 1);
+ PrimaryPartKeyNameSize = SubKeySize;
+ }
+ if (!llvm::is_contained(Values, RegValue)) {
+ Values.push_back(RegValue);
+ }
+ }
+ RegCloseKey(SubKey);
+ } else {
+ // No more sub keys.
+ break;
+ }
+ }
+ RegCloseKey(CentralProcessorKey);
+ }
+
+ if (Values.empty()) {
+ return "generic";
+ }
+
+ // Win32 makes no guarantees about the order of sub keys, so sort to ensure
+ // reproducibility.
+ llvm::sort(Values);
+
+ return detail::getHostCPUNameForARM(PrimaryCpuInfo, Values);
+}
+
#elif defined(__APPLE__) && defined(__powerpc__)
StringRef sys::getHostCPUName() {
host_basic_info_data_t hostInfo;
diff --git a/llvm/unittests/TargetParser/Host.cpp b/llvm/unittests/TargetParser/Host.cpp
index 0a9ac9bb0596d..be8548ebf8551 100644
--- a/llvm/unittests/TargetParser/Host.cpp
+++ b/llvm/unittests/TargetParser/Host.cpp
@@ -59,16 +59,28 @@ Serial : 0000000000000000
EXPECT_EQ(sys::detail::getHostCPUNameForARM(CortexA9ProcCpuinfo),
"cortex-a9");
+ EXPECT_EQ(sys::detail::getHostCPUNameForARM(
+ 0x4100c090, ArrayRef<uint64_t>{0x4100c090, 0x4100c090}),
+ "cortex-a9");
EXPECT_EQ(sys::detail::getHostCPUNameForARM("CPU implementer : 0x41\n"
"CPU part : 0xc0f"),
"cortex-a15");
+ EXPECT_EQ(sys::detail::getHostCPUNameForARM(0x4100c0f0,
+ ArrayRef<uint64_t>{0x4100c0f0}),
+ "cortex-a15");
// Verify that both CPU implementer and CPU part are checked:
EXPECT_EQ(sys::detail::getHostCPUNameForARM("CPU implementer : 0x40\n"
"CPU part : 0xc0f"),
"generic");
+ EXPECT_EQ(sys::detail::getHostCPUNameForARM(0x4000c0f0,
+ ArrayRef<uint64_t>{0x4000c0f0}),
+ "generic");
EXPECT_EQ(sys::detail::getHostCPUNameForARM("CPU implementer : 0x51\n"
"CPU part : 0x06f"),
"krait");
+ EXPECT_EQ(sys::detail::getHostCPUNameForARM(0x510006f0,
+ ArrayRef<uint64_t>{0x510006f0}),
+ "krait");
}
TEST(getLinuxHostCPUName, AArch64) {
@@ -126,10 +138,16 @@ TEST(getLinuxHostCPUName, AArch64) {
"CPU part : 0xd85\n"
"CPU part : 0xd87"),
"cortex-x925");
+ EXPECT_EQ(sys::detail::getHostCPUNameForARM(
+ 0x4100d850, ArrayRef<uint64_t>{0x4100d850, 0x4100d870}),
+ "cortex-x925");
EXPECT_EQ(sys::detail::getHostCPUNameForARM("CPU implementer : 0x41\n"
"CPU part : 0xd87\n"
"CPU part : 0xd85"),
"cortex-x925");
+ EXPECT_EQ(sys::detail::getHostCPUNameForARM(
+ 0x4100d870, ArrayRef<uint64_t>{0x4100d870, 0x4100d850}),
+ "cortex-x925");
EXPECT_EQ(sys::detail::getHostCPUNameForARM("CPU implementer : 0x51\n"
"CPU part : 0xc00"),
"falkor");
@@ -200,16 +218,25 @@ CPU architecture: 8
"CPU variant : 0xc\n"
"CPU part : 0xafe"),
"exynos-m3");
+ EXPECT_EQ(sys::detail::getHostCPUNameForARM(
+ 0x53c0afe0, ArrayRef<uint64_t>{0x53c0afe0, 0x5300d050}),
+ "exynos-m3");
// Verify Exynos M3.
EXPECT_EQ(sys::detail::getHostCPUNameForARM(ExynosProcCpuInfo +
"CPU variant : 0x1\n"
"CPU part : 0x002"),
"exynos-m3");
+ EXPECT_EQ(sys::detail::getHostCPUNameForARM(
+ 0x53100020, ArrayRef<uint64_t>{0x53100020, 0x5300d050}),
+ "exynos-m3");
// Verify Exynos M4.
EXPECT_EQ(sys::detail::getHostCPUNameForARM(ExynosProcCpuInfo +
"CPU variant : 0x1\n"
"CPU part : 0x003"),
"exynos-m4");
+ EXPECT_EQ(sys::detail::getHostCPUNameForARM(
+ 0x53100030, ArrayRef<uint64_t>{0x53100030, 0x5300d050}),
+ "exynos-m4");
const std::string ThunderX2T99ProcCpuInfo = R"(
processor : 0
>From 0e0ea714f3079341b9e11eb478eb85400423ee7c Mon Sep 17 00:00:00 2001
From: royitaqi <royitaqi at users.noreply.github.com>
Date: Wed, 6 Aug 2025 11:42:21 -0700
Subject: [PATCH 136/171] [vscode-lldb] Add VS Code commands for high level
debug workflow (#151827)
This allows other debugger extensions to leverage the `lldb-dap`
extension's settings and logic (e.g. "Server Mode").
Other debugger extensions can invoke these commands to resolve debug
configurations, create an adapter descriptor, and get the `lldb-dap` server
process for state tracking, additional interaction, and telemetry.
VS Code commands added:
* `lldb-dap.resolveDebugConfiguration`
* `lldb-dap.resolveDebugConfigurationWithSubstitutedVariables`
* `lldb-dap.createDebugAdapterDescriptor`
* `lldb-dap.getServerProcess`
---
lldb/docs/resources/lldbdap.md | 23 ++++++++++++++++++
.../lldb-dap/src-ts/debug-adapter-factory.ts | 10 +++++++-
.../src-ts/debug-configuration-provider.ts | 24 ++++++++++++++++++-
lldb/tools/lldb-dap/src-ts/lldb-dap-server.ts | 7 ++++++
4 files changed, 62 insertions(+), 2 deletions(-)
diff --git a/lldb/docs/resources/lldbdap.md b/lldb/docs/resources/lldbdap.md
index 955713dd1b594..3838c82ab5dfb 100644
--- a/lldb/docs/resources/lldbdap.md
+++ b/lldb/docs/resources/lldbdap.md
@@ -170,3 +170,26 @@ This is also very simple, just run:
```bash
npm run format
```
+
+## Working with the VS Code extension from another extension
+
+The VS Code extension exposes the following [VS Code
+commands](https://code.visualstudio.com/api/extension-guides/command),
+which can be invoked by other debugger extensions to leverage this extension's
+settings and logic. The commands help resolve debug configurations, create
+an adapter descriptor, and get the lldb-dap server process for state
+tracking, additional interaction, and telemetry.
+
+```ts
+// Resolve debug configuration
+const resolvedConfiguration = await vscode.commands.executeCommand("lldb-dap.resolveDebugConfiguration", folder, configuration, token);
+
+// Resolve debug configuration with substituted variables
+const resolvedConfigurationWithSubstitutedVariables = await vscode.commands.executeCommand("lldb-dap.resolveDebugConfigurationWithSubstitutedVariables", folder, configuration, token);
+
+// Create debug adapter descriptor
+const adapterDescriptor = await vscode.commands.executeCommand("lldb-dap.createDebugAdapterDescriptor", session, executable);
+
+// Get DAP server process
+const process = await vscode.commands.executeCommand("lldb-dap.getServerProcess");
+```
diff --git a/lldb/tools/lldb-dap/src-ts/debug-adapter-factory.ts b/lldb/tools/lldb-dap/src-ts/debug-adapter-factory.ts
index 6e94400b09155..157aa2ac76a1f 100644
--- a/lldb/tools/lldb-dap/src-ts/debug-adapter-factory.ts
+++ b/lldb/tools/lldb-dap/src-ts/debug-adapter-factory.ts
@@ -217,7 +217,15 @@ export class LLDBDapDescriptorFactory
constructor(
private readonly logger: vscode.LogOutputChannel,
private logFilePath: LogFilePathProvider,
- ) {}
+ ) {
+ vscode.commands.registerCommand(
+ "lldb-dap.createDebugAdapterDescriptor",
+ (
+ session: vscode.DebugSession,
+ executable: vscode.DebugAdapterExecutable | undefined,
+ ) => this.createDebugAdapterDescriptor(session, executable),
+ );
+ }
async createDebugAdapterDescriptor(
session: vscode.DebugSession,
diff --git a/lldb/tools/lldb-dap/src-ts/debug-configuration-provider.ts b/lldb/tools/lldb-dap/src-ts/debug-configuration-provider.ts
index 8c04ec2bdc9d3..1e16dac031125 100644
--- a/lldb/tools/lldb-dap/src-ts/debug-configuration-provider.ts
+++ b/lldb/tools/lldb-dap/src-ts/debug-configuration-provider.ts
@@ -76,7 +76,29 @@ export class LLDBDapConfigurationProvider
private readonly server: LLDBDapServer,
private readonly logger: vscode.LogOutputChannel,
private readonly logFilePath: LogFilePathProvider,
- ) {}
+ ) {
+ vscode.commands.registerCommand(
+ "lldb-dap.resolveDebugConfiguration",
+ (
+ folder: vscode.WorkspaceFolder | undefined,
+ debugConfiguration: vscode.DebugConfiguration,
+ token?: vscode.CancellationToken,
+ ) => this.resolveDebugConfiguration(folder, debugConfiguration, token),
+ );
+ vscode.commands.registerCommand(
+ "lldb-dap.resolveDebugConfigurationWithSubstitutedVariables",
+ (
+ folder: vscode.WorkspaceFolder | undefined,
+ debugConfiguration: vscode.DebugConfiguration,
+ token?: vscode.CancellationToken,
+ ) =>
+ this.resolveDebugConfigurationWithSubstitutedVariables(
+ folder,
+ debugConfiguration,
+ token,
+ ),
+ );
+ }
async resolveDebugConfiguration(
folder: vscode.WorkspaceFolder | undefined,
diff --git a/lldb/tools/lldb-dap/src-ts/lldb-dap-server.ts b/lldb/tools/lldb-dap/src-ts/lldb-dap-server.ts
index f360fa3adfa80..5f9d8efdcb3a3 100644
--- a/lldb/tools/lldb-dap/src-ts/lldb-dap-server.ts
+++ b/lldb/tools/lldb-dap/src-ts/lldb-dap-server.ts
@@ -12,6 +12,13 @@ export class LLDBDapServer implements vscode.Disposable {
private serverProcess?: child_process.ChildProcessWithoutNullStreams;
private serverInfo?: Promise<{ host: string; port: number }>;
+ constructor() {
+ vscode.commands.registerCommand(
+ "lldb-dap.getServerProcess",
+ () => this.serverProcess,
+ );
+ }
+
/**
* Starts the server with the provided options. The server will be restarted or reused as
* necessary.
>From 35bd40d321ccb2e646c112418ef32318dd0e040b Mon Sep 17 00:00:00 2001
From: Daniel Henrique Barboza <dbarboza at ventanamicro.com>
Date: Wed, 6 Aug 2025 15:46:34 -0300
Subject: [PATCH 137/171] [RISCV] add more generic macrofusions (#151140)
These are some macrofusions that are used internally in Ventana in an
yet not upstreamed processor. Figured it would be good to contribute
them ahead of the processor to allow the community to also use them in
their own processors, while also alleaviting our own downstream upkeep.
The macrofusions being added are, considering load =
lb,lh,lw,ld,lbu,lhu,lwu:
- bfext (slli+srli)
- auipc+load
- lui+load
- add(.uw)+load
- addi+load
- shXadd(.uw)+load, where X=1,2,3
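For readers unfamiliar with the bfext pairing: slli followed by srli
implements an unsigned bitfield extract. A minimal C++ model of what the
fused sequence computes (the shift amounts and sample value here are
illustrative, not tied to any fusion constraint):

```cpp
#include <cstdint>
#include <cstdio>

// Model of the slli+srli "bfext" pair on RV64:
//   slli rd, rs1, A   ; shift the field up to the top bits
//   srli rd, rd,  B   ; shift it back down, zero-filling
// For B >= A this extracts the (64 - B)-bit field of X starting at
// bit (B - A).
static uint64_t bfextModel(uint64_t X, unsigned A, unsigned B) {
  return (X << A) >> B; // both shifts are logical on uint64_t
}

int main() {
  // Extract bits [15:8] of 0xAABBCCDD: A = 48, B = 56 -> 0xCC.
  std::printf("0x%llx\n",
              (unsigned long long)bfextModel(0xAABBCCDD, 48, 56));
}
```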
---
llvm/lib/Target/RISCV/RISCVMacroFusion.td | 56 +
llvm/lib/Target/RISCV/RISCVProcessors.td | 4 +-
llvm/test/CodeGen/RISCV/features-info.ll | 6 +
llvm/test/CodeGen/RISCV/macro-fusions.mir | 1376 +++++++++++++++++++++
4 files changed, 1441 insertions(+), 1 deletion(-)
diff --git a/llvm/lib/Target/RISCV/RISCVMacroFusion.td b/llvm/lib/Target/RISCV/RISCVMacroFusion.td
index 875a93d09a2c6..39e099bc947b2 100644
--- a/llvm/lib/Target/RISCV/RISCVMacroFusion.td
+++ b/llvm/lib/Target/RISCV/RISCVMacroFusion.td
@@ -91,3 +91,59 @@ def TuneLDADDFusion
CheckIsImmOperand<2>,
CheckImmOperand<2, 0>
]>>;
+
+defvar Load = [LB, LH, LW, LD, LBU, LHU, LWU];
+
+// Fuse add(.uw) followed by a load (lb, lh, lw, ld, lbu, lhu, lwu):
+// add(.uw) rd, rs1, rs2
+// load rd, imm12(rd)
+def TuneADDLoadFusion
+ : SimpleFusion<"add-load-fusion", "HasADDLoadFusion", "Enable ADD(.UW) + load macrofusion",
+ CheckOpcode<[ADD, ADD_UW]>,
+ CheckOpcode<Load>>;
+
+// Fuse AUIPC followed by a load (lb, lh, lw, ld, lbu, lhu, lwu)
+// auipc rd, imm20
+// load rd, imm12(rd)
+def TuneAUIPCLoadFusion
+ : SimpleFusion<"auipc-load-fusion", "HasAUIPCLoadFusion",
+ "Enable AUIPC + load macrofusion",
+ CheckOpcode<[AUIPC]>,
+ CheckOpcode<Load>>;
+
+// Fuse LUI followed by a load (lb, lh, lw, ld, lbu, lhu, lwu)
+// lui rd, imm[31:12]
+// load rd, imm12(rd)
+def TuneLUILoadFusion
+ : SimpleFusion<"lui-load-fusion", "HasLUILoadFusion",
+ "Enable LUI + load macrofusion",
+ CheckOpcode<[LUI]>,
+ CheckOpcode<Load>>;
+
+// Bitfield extract fusion: similar to TuneShiftedZExtWFusion
+// but without the immediate restriction
+// slli rd, rs1, imm12
+// srli rd, rd, imm12
+def TuneBFExtFusion
+ : SimpleFusion<"bfext-fusion", "HasBFExtFusion",
+ "Enable SLLI+SRLI (bitfield extract) macrofusion",
+ CheckOpcode<[SLLI]>,
+ CheckOpcode<[SRLI]>>;
+
+// Fuse ADDI followed by a load (lb, lh, lw, ld, lbu, lhu, lwu)
+// addi rd, rs1, imm12
+// load rd, imm12(rd)
+def TuneADDILoadFusion
+ : SimpleFusion<"addi-load-fusion", "HasADDILoadFusion",
+ "Enable ADDI + load macrofusion",
+ CheckOpcode<[ADDI]>,
+ CheckOpcode<Load>>;
+
+// Fuse shXadd(.uw) followed by a load (lb, lh, lw, ld, lbu, lhu, lwu)
+// shXadd(.uw) rd, rs1, rs2
+// load rd, imm12(rd)
+def TuneSHXADDLoadFusion
+ : SimpleFusion<"shxadd-load-fusion", "HasSHXADDLoadFusion",
+ "Enable SH(1|2|3)ADD(.UW) + load macrofusion",
+ CheckOpcode<[SH1ADD, SH2ADD, SH3ADD, SH1ADD_UW, SH2ADD_UW, SH3ADD_UW]>,
+ CheckOpcode<Load>>;
diff --git a/llvm/lib/Target/RISCV/RISCVProcessors.td b/llvm/lib/Target/RISCV/RISCVProcessors.td
index 8445730446dd9..31d2b3a10db53 100644
--- a/llvm/lib/Target/RISCV/RISCVProcessors.td
+++ b/llvm/lib/Target/RISCV/RISCVProcessors.td
@@ -598,7 +598,9 @@ def VENTANA_VEYRON_V1 : RISCVProcessorModel<"veyron-v1",
TuneZExtHFusion,
TuneZExtWFusion,
TuneShiftedZExtWFusion,
- TuneLDADDFusion]> {
+ TuneADDLoadFusion,
+ TuneAUIPCLoadFusion,
+ TuneLUILoadFusion]> {
let MVendorID = 0x61f;
let MArchID = 0x8000000000010000;
let MImpID = 0x111;
diff --git a/llvm/test/CodeGen/RISCV/features-info.ll b/llvm/test/CodeGen/RISCV/features-info.ll
index a5ee41281607b..fb539211fcc31 100644
--- a/llvm/test/CodeGen/RISCV/features-info.ll
+++ b/llvm/test/CodeGen/RISCV/features-info.ll
@@ -6,9 +6,13 @@
; CHECK-NEXT: 32bit - Implements RV32.
; CHECK-NEXT: 64bit - Implements RV64.
; CHECK-NEXT: a - 'A' (Atomic Instructions).
+; CHECK-NEXT: add-load-fusion - Enable ADD(.UW) + load macrofusion.
+; CHECK-NEXT: addi-load-fusion - Enable ADDI + load macrofusion.
; CHECK-NEXT: andes45 - Andes 45-Series processors.
; CHECK-NEXT: auipc-addi-fusion - Enable AUIPC+ADDI macrofusion.
+; CHECK-NEXT: auipc-load-fusion - Enable AUIPC + load macrofusion.
; CHECK-NEXT: b - 'B' (the collection of the Zba, Zbb, Zbs extensions).
+; CHECK-NEXT: bfext-fusion - Enable SLLI+SRLI (bitfield extract) macrofusion.
; CHECK-NEXT: c - 'C' (Compressed Instructions).
; CHECK-NEXT: conditional-cmv-fusion - Enable branch+c.mv fusion.
; CHECK-NEXT: d - 'D' (Double-Precision Floating-Point).
@@ -62,6 +66,7 @@
; CHECK-NEXT: ld-add-fusion - Enable LD+ADD macrofusion.
; CHECK-NEXT: log-vrgather - Has vrgather.vv with LMUL*log2(LMUL) latency
; CHECK-NEXT: lui-addi-fusion - Enable LUI+ADDI macro fusion.
+; CHECK-NEXT: lui-load-fusion - Enable LUI + load macrofusion.
; CHECK-NEXT: m - 'M' (Integer Multiplication and Division).
; CHECK-NEXT: mips-p8700 - MIPS p8700 processor.
; CHECK-NEXT: no-default-unroll - Disable default unroll preference..
@@ -134,6 +139,7 @@
; CHECK-NEXT: shvsatpa - 'Shvsatpa' (vsatp supports all modes supported by satp).
; CHECK-NEXT: shvstvala - 'Shvstvala' (vstval provides all needed values).
; CHECK-NEXT: shvstvecd - 'Shvstvecd' (vstvec supports Direct mode).
+; CHECK-NEXT: shxadd-load-fusion - Enable SH(1|2|3)ADD(.UW) + load macrofusion.
; CHECK-NEXT: sifive7 - SiFive 7-Series processors.
; CHECK-NEXT: smaia - 'Smaia' (Advanced Interrupt Architecture Machine Level).
; CHECK-NEXT: smcdeleg - 'Smcdeleg' (Counter Delegation Machine Level).
diff --git a/llvm/test/CodeGen/RISCV/macro-fusions.mir b/llvm/test/CodeGen/RISCV/macro-fusions.mir
index 13464141ce27e..ae5b52da2ac16 100644
--- a/llvm/test/CodeGen/RISCV/macro-fusions.mir
+++ b/llvm/test/CodeGen/RISCV/macro-fusions.mir
@@ -2,7 +2,12 @@
# RUN: llc -mtriple=riscv64-linux-gnu -x=mir < %s \
# RUN: -debug-only=machine-scheduler -start-before=machine-scheduler 2>&1 \
# RUN: -mattr=+lui-addi-fusion,+auipc-addi-fusion,+zexth-fusion,+zextw-fusion,+shifted-zextw-fusion,+ld-add-fusion \
+# RUN: -mattr=+add-load-fusion,+auipc-load-fusion,+lui-load-fusion,+addi-load-fusion \
+# RUN: -mattr=+zba,+shxadd-load-fusion \
# RUN: | FileCheck %s
+# RUN: llc -mtriple=riscv64-linux-gnu -x=mir < %s \
+# RUN: -debug-only=machine-scheduler -start-before=machine-scheduler 2>&1 \
+# RUN: -mattr=+zba,+bfext-fusion | FileCheck --check-prefixes=CHECK-BFEXT %s
# CHECK: lui_addi:%bb.0
# CHECK: Macro fuse: {{.*}}LUI - ADDI
@@ -174,3 +179,1374 @@ body: |
$x11 = COPY %5
PseudoRET
...
+
+# CHECK: add_lb
+# CHECK: Macro fuse: {{.*}}ADD - LB
+---
+name: add_lb
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LB %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: add_lh
+# CHECK: Macro fuse: {{.*}}ADD - LH
+---
+name: add_lh
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LH %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: add_lw
+# CHECK: Macro fuse: {{.*}}ADD - LW
+---
+name: add_lw
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LW %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: add_lbu
+# CHECK: Macro fuse: {{.*}}ADD - LBU
+---
+name: add_lbu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LBU %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: add_lhu
+# CHECK: Macro fuse: {{.*}}ADD - LHU
+---
+name: add_lhu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LHU %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: add_lwu
+# CHECK: Macro fuse: {{.*}}ADD - LWU
+---
+name: add_lwu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LWU %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: auipc_lb
+# CHECK: Macro fuse: {{.*}}AUIPC - LB
+---
+name: auipc_lb
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = AUIPC 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LB %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: auipc_lh
+# CHECK: Macro fuse: {{.*}}AUIPC - LH
+---
+name: auipc_lh
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = AUIPC 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LH %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: auipc_lw
+# CHECK: Macro fuse: {{.*}}AUIPC - LW
+---
+name: auipc_lw
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = AUIPC 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LW %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: auipc_ld
+# CHECK: Macro fuse: {{.*}}AUIPC - LD
+---
+name: auipc_ld
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = AUIPC 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LD %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: auipc_lbu
+# CHECK: Macro fuse: {{.*}}AUIPC - LBU
+---
+name: auipc_lbu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = AUIPC 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LBU %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: auipc_lhu
+# CHECK: Macro fuse: {{.*}}AUIPC - LHU
+---
+name: auipc_lhu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = AUIPC 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LHU %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: auipc_lwu
+# CHECK: Macro fuse: {{.*}}AUIPC - LWU
+---
+name: auipc_lwu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = AUIPC 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LWU %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: lui_lb
+# CHECK: Macro fuse: {{.*}}LUI - LB
+---
+name: lui_lb
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = LUI 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LB %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: lui_lh
+# CHECK: Macro fuse: {{.*}}LUI - LH
+---
+name: lui_lh
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = LUI 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LH %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: lui_lw
+# CHECK: Macro fuse: {{.*}}LUI - LW
+---
+name: lui_lw
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = LUI 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LW %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: lui_ld
+# CHECK: Macro fuse: {{.*}}LUI - LD
+---
+name: lui_ld
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = LUI 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LD %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: lui_lbu
+# CHECK: Macro fuse: {{.*}}LUI - LBU
+---
+name: lui_lbu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = LUI 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LBU %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: lui_lhu
+# CHECK: Macro fuse: {{.*}}LUI - LHU
+---
+name: lui_lhu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = LUI 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LHU %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: lui_lwu
+# CHECK: Macro fuse: {{.*}}LUI - LWU
+---
+name: lui_lwu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = LUI 1
+ %3:gpr = XORI %1, 2
+ %4:gpr = LWU %2, 4
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK-BFEXT: bitfield_extract
+# CHECK-BFEXT: Macro fuse: {{.*}}SLLI - SRLI
+---
+name: bitfield_extract
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10
+ %1:gpr = COPY $x10
+ %2:gpr = SLLI %1, 31
+ %3:gpr = XORI %1, 3
+ %4:gpr = SRLI %2, 48
+ $x10 = COPY %3
+ $x11 = COPY %4
+ PseudoRET
+...
+
+# CHECK: addi_lb
+# CHECK: Macro fuse: {{.*}}ADDI - LB
+---
+name: addi_lb
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADDI %1, 8
+ %4:gpr = XORI %2, 3
+ %5:gpr = LB %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: addi_lh
+# CHECK: Macro fuse: {{.*}}ADDI - LH
+---
+name: addi_lh
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADDI %1, 8
+ %4:gpr = XORI %2, 3
+ %5:gpr = LH %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: addi_lw
+# CHECK: Macro fuse: {{.*}}ADDI - LW
+---
+name: addi_lw
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADDI %1, 8
+ %4:gpr = XORI %2, 3
+ %5:gpr = LW %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: addi_ld
+# CHECK: Macro fuse: {{.*}}ADDI - LD
+---
+name: addi_ld
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADDI %1, 8
+ %4:gpr = XORI %2, 3
+ %5:gpr = LD %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: addi_lbu
+# CHECK: Macro fuse: {{.*}}ADDI - LBU
+---
+name: addi_lbu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADDI %1, 8
+ %4:gpr = XORI %2, 3
+ %5:gpr = LBU %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: addi_lhu
+# CHECK: Macro fuse: {{.*}}ADDI - LHU
+---
+name: addi_lhu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADDI %1, 8
+ %4:gpr = XORI %2, 3
+ %5:gpr = LHU %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: addi_lwu
+# CHECK: Macro fuse: {{.*}}ADDI - LWU
+---
+name: addi_lwu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADDI %1, 8
+ %4:gpr = XORI %2, 3
+ %5:gpr = LWU %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: adduw_lb
+# CHECK: Macro fuse: {{.*}}ADD_UW - LB
+---
+name: adduw_lb
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LB %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: adduw_lh
+# CHECK: Macro fuse: {{.*}}ADD_UW - LH
+---
+name: adduw_lh
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LH %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: adduw_lw
+# CHECK: Macro fuse: {{.*}}ADD_UW - LW
+---
+name: adduw_lw
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LW %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: adduw_ld
+# CHECK: Macro fuse: {{.*}}ADD_UW - LD
+---
+name: adduw_ld
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LD %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: adduw_lbu
+# CHECK: Macro fuse: {{.*}}ADD_UW - LBU
+---
+name: adduw_lbu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LBU %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: adduw_lhu
+# CHECK: Macro fuse: {{.*}}ADD_UW - LHU
+---
+name: adduw_lhu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LHU %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: adduw_lwu
+# CHECK: Macro fuse: {{.*}}ADD_UW - LWU
+---
+name: adduw_lwu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LWU %3, 0
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1add_lb
+# CHECK: Macro fuse: {{.*}}SH1ADD - LB
+---
+name: sh1add_lb
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LB %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2add_lb
+# CHECK: Macro fuse: {{.*}}SH2ADD - LB
+---
+name: sh2add_lb
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LB %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3add_lb
+# CHECK: Macro fuse: {{.*}}SH3ADD - LB
+---
+name: sh3add_lb
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LB %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1add_lh
+# CHECK: Macro fuse: {{.*}}SH1ADD - LH
+---
+name: sh1add_lh
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LH %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2add_lh
+# CHECK: Macro fuse: {{.*}}SH2ADD - LH
+---
+name: sh2add_lh
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LH %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3add_lh
+# CHECK: Macro fuse: {{.*}}SH3ADD - LH
+---
+name: sh3add_lh
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LH %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1add_lw
+# CHECK: Macro fuse: {{.*}}SH1ADD - LW
+---
+name: sh1add_lw
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LW %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2add_lw
+# CHECK: Macro fuse: {{.*}}SH2ADD - LW
+---
+name: sh2add_lw
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LW %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3add_lw
+# CHECK: Macro fuse: {{.*}}SH3ADD - LW
+---
+name: sh3add_lw
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LW %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1add_ld
+# CHECK: Macro fuse: {{.*}}SH1ADD - LD
+---
+name: sh1add_ld
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LD %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2add_ld
+# CHECK: Macro fuse: {{.*}}SH2ADD - LD
+---
+name: sh2add_ld
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LD %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3add_ld
+# CHECK: Macro fuse: {{.*}}SH3ADD - LD
+---
+name: sh3add_ld
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LD %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1add_lbu
+# CHECK: Macro fuse: {{.*}}SH1ADD - LBU
+---
+name: sh1add_lbu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LBU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2add_lbu
+# CHECK: Macro fuse: {{.*}}SH2ADD - LBU
+---
+name: sh2add_lbu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LBU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3add_lbu
+# CHECK: Macro fuse: {{.*}}SH3ADD - LBU
+---
+name: sh3add_lbu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LBU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1add_lhu
+# CHECK: Macro fuse: {{.*}}SH1ADD - LHU
+---
+name: sh1add_lhu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LHU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2add_lhu
+# CHECK: Macro fuse: {{.*}}SH2ADD - LHU
+---
+name: sh2add_lhu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LHU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3add_lhu
+# CHECK: Macro fuse: {{.*}}SH3ADD - LHU
+---
+name: sh3add_lhu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LHU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1add_lwu
+# CHECK: Macro fuse: {{.*}}SH1ADD - LWU
+---
+name: sh1add_lwu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LWU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2add_lwu
+# CHECK: Macro fuse: {{.*}}SH2ADD - LWU
+---
+name: sh2add_lwu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LWU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3add_lwu
+# CHECK: Macro fuse: {{.*}}SH3ADD - LWU
+---
+name: sh3add_lwu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LWU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1adduw_lb
+# CHECK: Macro fuse: {{.*}}SH1ADD_UW - LB
+---
+name: sh1adduw_lb
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LB %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2adduw_lb
+# CHECK: Macro fuse: {{.*}}SH2ADD_UW - LB
+---
+name: sh2adduw_lb
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LB %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3adduw_lb
+# CHECK: Macro fuse: {{.*}}SH3ADD_UW - LB
+---
+name: sh3adduw_lb
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LB %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1adduw_lh
+# CHECK: Macro fuse: {{.*}}SH1ADD_UW - LH
+---
+name: sh1adduw_lh
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LH %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2adduw_lh
+# CHECK: Macro fuse: {{.*}}SH2ADD_UW - LH
+---
+name: sh2adduw_lh
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LH %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3adduw_lh
+# CHECK: Macro fuse: {{.*}}SH3ADD_UW - LH
+---
+name: sh3adduw_lh
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LH %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1adduw_lw
+# CHECK: Macro fuse: {{.*}}SH1ADD_UW - LW
+---
+name: sh1adduw_lw
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LW %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2adduw_lw
+# CHECK: Macro fuse: {{.*}}SH2ADD_UW - LW
+---
+name: sh2adduw_lw
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LW %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3adduw_lw
+# CHECK: Macro fuse: {{.*}}SH3ADD_UW - LW
+---
+name: sh3adduw_lw
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LW %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1adduw_ld
+# CHECK: Macro fuse: {{.*}}SH1ADD_UW - LD
+---
+name: sh1adduw_ld
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LD %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2adduw_ld
+# CHECK: Macro fuse: {{.*}}SH2ADD_UW - LD
+---
+name: sh2adduw_ld
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LD %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3adduw_ld
+# CHECK: Macro fuse: {{.*}}SH3ADD_UW - LD
+---
+name: sh3adduw_ld
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LD %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1adduw_lbu
+# CHECK: Macro fuse: {{.*}}SH1ADD_UW - LBU
+---
+name: sh1adduw_lbu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LBU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2adduw_lbu
+# CHECK: Macro fuse: {{.*}}SH2ADD_UW - LBU
+---
+name: sh2adduw_lbu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LBU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3adduw_lbu
+# CHECK: Macro fuse: {{.*}}SH3ADD_UW - LBU
+---
+name: sh3adduw_lbu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LBU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1adduw_lhu
+# CHECK: Macro fuse: {{.*}}SH1ADD_UW - LHU
+---
+name: sh1adduw_lhu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LHU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2adduw_lhu
+# CHECK: Macro fuse: {{.*}}SH2ADD_UW - LHU
+---
+name: sh2adduw_lhu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LHU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3adduw_lhu
+# CHECK: Macro fuse: {{.*}}SH3ADD_UW - LHU
+---
+name: sh3adduw_lhu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LHU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh1adduw_lwu
+# CHECK: Macro fuse: {{.*}}SH1ADD_UW - LWU
+---
+name: sh1adduw_lwu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH1ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LWU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh2adduw_lwu
+# CHECK: Macro fuse: {{.*}}SH2ADD_UW - LWU
+---
+name: sh2adduw_lwu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH2ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LWU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
+
+# CHECK: sh3adduw_lwu
+# CHECK: Macro fuse: {{.*}}SH3ADD_UW - LWU
+---
+name: sh3adduw_lwu
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $x10, $x11
+ %1:gpr = COPY $x10
+ %2:gpr = COPY $x11
+ %3:gpr = SH3ADD_UW %1, %2
+ %4:gpr = XORI %2, 3
+ %5:gpr = LWU %3, 8
+ $x10 = COPY %4
+ $x11 = COPY %5
+ PseudoRET
+...
>From 32161e9de36a747dde22a06c1c99a6091eb2920b Mon Sep 17 00:00:00 2001
From: Changpeng Fang <changpeng.fang at amd.com>
Date: Wed, 6 Aug 2025 11:47:37 -0700
Subject: [PATCH 138/171] [AMDGPU] Do not fold an immediate into instructions
with frame indexes (#151263)
Do not fold an immediate into an instruction that already has a frame
index operand, since the frame index may later be resolved to another
immediate that the instruction cannot encode.
Fixes: SWDEV-536263
---------
Co-authored-by: Matt Arsenault <arsenm2 at gmail.com>
---
llvm/lib/Target/AMDGPU/SIInstrInfo.cpp | 9 +-
.../CodeGen/AMDGPU/GlobalISel/flat-scratch.ll | 60 +++++++---
llvm/test/CodeGen/AMDGPU/flat-scratch.ll | 31 +++--
.../AMDGPU/fold-operands-frame-index.mir | 3 +-
.../CodeGen/AMDGPU/fold-sgpr-multi-imm.mir | 3 +-
.../CodeGen/AMDGPU/frame-index-elimination.ll | 3 +-
.../issue130120-eliminate-frame-index.ll | 26 +++--
.../local-stack-alloc-block-sp-reference.ll | 27 +++--
.../AMDGPU/no-folding-imm-to-inst-with-fi.ll | 108 ++++++++++++++++++
9 files changed, 220 insertions(+), 50 deletions(-)
create mode 100644 llvm/test/CodeGen/AMDGPU/no-folding-imm-to-inst-with-fi.ll
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
index 3f61bbd1d6e85..5f498a3f5a421 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
@@ -6122,10 +6122,11 @@ bool SIInstrInfo::isOperandLegal(const MachineInstr &MI, unsigned OpIdx,
!Op.isIdenticalTo(*MO))
return false;
- // Do not fold a frame index into an instruction that already has a frame
- // index. The frame index handling code doesn't handle fixing up operand
- // constraints if there are multiple indexes.
- if (Op.isFI() && MO->isFI())
+ // Do not fold a non-inlineable, non-register operand into an
+ // instruction that already has a frame index. The frame index
+ // handling code does not cope with a frame index coexisting with
+ // another non-register operand, unless it is an inlineable immediate.
+ if (Op.isFI())
return false;
}
} else if (IsInlineConst && ST.hasNoF16PseudoScalarTransInlineConstants() &&
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/flat-scratch.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/flat-scratch.ll
index a066b15f84d6b..e6a8baceee020 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/flat-scratch.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/flat-scratch.ll
@@ -1917,8 +1917,9 @@ define amdgpu_kernel void @store_load_large_imm_offset_kernel() {
; GFX9-NEXT: s_mov_b32 s0, 0
; GFX9-NEXT: scratch_store_dword off, v0, s0 offset:4
; GFX9-NEXT: s_waitcnt vmcnt(0)
+; GFX9-NEXT: s_movk_i32 s0, 0x3e80
; GFX9-NEXT: v_mov_b32_e32 v0, 15
-; GFX9-NEXT: s_movk_i32 s0, 0x3e84
+; GFX9-NEXT: s_add_i32 s0, s0, 4
; GFX9-NEXT: scratch_store_dword off, v0, s0
; GFX9-NEXT: s_waitcnt vmcnt(0)
; GFX9-NEXT: scratch_load_dword v0, off, s0 glc
@@ -1933,7 +1934,8 @@ define amdgpu_kernel void @store_load_large_imm_offset_kernel() {
; GFX10-NEXT: s_setreg_b32 hwreg(HW_REG_FLAT_SCR_HI), s9
; GFX10-NEXT: v_mov_b32_e32 v0, 13
; GFX10-NEXT: v_mov_b32_e32 v1, 15
-; GFX10-NEXT: s_movk_i32 s0, 0x3e84
+; GFX10-NEXT: s_movk_i32 s0, 0x3e80
+; GFX10-NEXT: s_add_i32 s0, s0, 4
; GFX10-NEXT: scratch_store_dword off, v0, off offset:4
; GFX10-NEXT: s_waitcnt_vscnt null, 0x0
; GFX10-NEXT: scratch_store_dword off, v1, s0
@@ -1945,10 +1947,11 @@ define amdgpu_kernel void @store_load_large_imm_offset_kernel() {
; GFX942-LABEL: store_load_large_imm_offset_kernel:
; GFX942: ; %bb.0: ; %bb
; GFX942-NEXT: v_mov_b32_e32 v0, 13
+; GFX942-NEXT: s_movk_i32 s0, 0x3e80
; GFX942-NEXT: scratch_store_dword off, v0, off offset:4 sc0 sc1
; GFX942-NEXT: s_waitcnt vmcnt(0)
; GFX942-NEXT: v_mov_b32_e32 v0, 15
-; GFX942-NEXT: s_movk_i32 s0, 0x3e84
+; GFX942-NEXT: s_add_i32 s0, s0, 4
; GFX942-NEXT: scratch_store_dword off, v0, s0 sc0 sc1
; GFX942-NEXT: s_waitcnt vmcnt(0)
; GFX942-NEXT: scratch_load_dword v0, off, s0 sc0 sc1
@@ -1958,7 +1961,9 @@ define amdgpu_kernel void @store_load_large_imm_offset_kernel() {
; GFX11-LABEL: store_load_large_imm_offset_kernel:
; GFX11: ; %bb.0: ; %bb
; GFX11-NEXT: v_dual_mov_b32 v0, 13 :: v_dual_mov_b32 v1, 15
-; GFX11-NEXT: s_movk_i32 s0, 0x3e84
+; GFX11-NEXT: s_movk_i32 s0, 0x3e80
+; GFX11-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX11-NEXT: s_add_i32 s0, s0, 4
; GFX11-NEXT: scratch_store_b32 off, v0, off offset:4 dlc
; GFX11-NEXT: s_waitcnt_vscnt null, 0x0
; GFX11-NEXT: scratch_store_b32 off, v1, s0 dlc
@@ -1986,8 +1991,9 @@ define amdgpu_kernel void @store_load_large_imm_offset_kernel() {
; UNALIGNED_GFX9-NEXT: s_mov_b32 s0, 0
; UNALIGNED_GFX9-NEXT: scratch_store_dword off, v0, s0 offset:4
; UNALIGNED_GFX9-NEXT: s_waitcnt vmcnt(0)
+; UNALIGNED_GFX9-NEXT: s_movk_i32 s0, 0x3e80
; UNALIGNED_GFX9-NEXT: v_mov_b32_e32 v0, 15
-; UNALIGNED_GFX9-NEXT: s_movk_i32 s0, 0x3e84
+; UNALIGNED_GFX9-NEXT: s_add_i32 s0, s0, 4
; UNALIGNED_GFX9-NEXT: scratch_store_dword off, v0, s0
; UNALIGNED_GFX9-NEXT: s_waitcnt vmcnt(0)
; UNALIGNED_GFX9-NEXT: scratch_load_dword v0, off, s0 glc
@@ -2002,7 +2008,8 @@ define amdgpu_kernel void @store_load_large_imm_offset_kernel() {
; UNALIGNED_GFX10-NEXT: s_setreg_b32 hwreg(HW_REG_FLAT_SCR_HI), s9
; UNALIGNED_GFX10-NEXT: v_mov_b32_e32 v0, 13
; UNALIGNED_GFX10-NEXT: v_mov_b32_e32 v1, 15
-; UNALIGNED_GFX10-NEXT: s_movk_i32 s0, 0x3e84
+; UNALIGNED_GFX10-NEXT: s_movk_i32 s0, 0x3e80
+; UNALIGNED_GFX10-NEXT: s_add_i32 s0, s0, 4
; UNALIGNED_GFX10-NEXT: scratch_store_dword off, v0, off offset:4
; UNALIGNED_GFX10-NEXT: s_waitcnt_vscnt null, 0x0
; UNALIGNED_GFX10-NEXT: scratch_store_dword off, v1, s0
@@ -2014,10 +2021,11 @@ define amdgpu_kernel void @store_load_large_imm_offset_kernel() {
; UNALIGNED_GFX942-LABEL: store_load_large_imm_offset_kernel:
; UNALIGNED_GFX942: ; %bb.0: ; %bb
; UNALIGNED_GFX942-NEXT: v_mov_b32_e32 v0, 13
+; UNALIGNED_GFX942-NEXT: s_movk_i32 s0, 0x3e80
; UNALIGNED_GFX942-NEXT: scratch_store_dword off, v0, off offset:4 sc0 sc1
; UNALIGNED_GFX942-NEXT: s_waitcnt vmcnt(0)
; UNALIGNED_GFX942-NEXT: v_mov_b32_e32 v0, 15
-; UNALIGNED_GFX942-NEXT: s_movk_i32 s0, 0x3e84
+; UNALIGNED_GFX942-NEXT: s_add_i32 s0, s0, 4
; UNALIGNED_GFX942-NEXT: scratch_store_dword off, v0, s0 sc0 sc1
; UNALIGNED_GFX942-NEXT: s_waitcnt vmcnt(0)
; UNALIGNED_GFX942-NEXT: scratch_load_dword v0, off, s0 sc0 sc1
@@ -2027,7 +2035,9 @@ define amdgpu_kernel void @store_load_large_imm_offset_kernel() {
; UNALIGNED_GFX11-LABEL: store_load_large_imm_offset_kernel:
; UNALIGNED_GFX11: ; %bb.0: ; %bb
; UNALIGNED_GFX11-NEXT: v_dual_mov_b32 v0, 13 :: v_dual_mov_b32 v1, 15
-; UNALIGNED_GFX11-NEXT: s_movk_i32 s0, 0x3e84
+; UNALIGNED_GFX11-NEXT: s_movk_i32 s0, 0x3e80
+; UNALIGNED_GFX11-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; UNALIGNED_GFX11-NEXT: s_add_i32 s0, s0, 4
; UNALIGNED_GFX11-NEXT: scratch_store_b32 off, v0, off offset:4 dlc
; UNALIGNED_GFX11-NEXT: s_waitcnt_vscnt null, 0x0
; UNALIGNED_GFX11-NEXT: scratch_store_b32 off, v1, s0 dlc
@@ -2061,11 +2071,13 @@ define void @store_load_large_imm_offset_foo() {
; GFX9-LABEL: store_load_large_imm_offset_foo:
; GFX9: ; %bb.0: ; %bb
; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT: s_movk_i32 s0, 0x3e80
; GFX9-NEXT: v_mov_b32_e32 v0, 13
+; GFX9-NEXT: s_add_i32 s1, s32, s0
; GFX9-NEXT: scratch_store_dword off, v0, s32 offset:4
; GFX9-NEXT: s_waitcnt vmcnt(0)
; GFX9-NEXT: v_mov_b32_e32 v0, 15
-; GFX9-NEXT: s_add_i32 s0, s32, 0x3e84
+; GFX9-NEXT: s_add_i32 s0, s1, 4
; GFX9-NEXT: scratch_store_dword off, v0, s0
; GFX9-NEXT: s_waitcnt vmcnt(0)
; GFX9-NEXT: scratch_load_dword v0, off, s0 glc
@@ -2076,8 +2088,10 @@ define void @store_load_large_imm_offset_foo() {
; GFX10: ; %bb.0: ; %bb
; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX10-NEXT: v_mov_b32_e32 v0, 13
+; GFX10-NEXT: s_movk_i32 s0, 0x3e80
; GFX10-NEXT: v_mov_b32_e32 v1, 15
-; GFX10-NEXT: s_add_i32 s0, s32, 0x3e84
+; GFX10-NEXT: s_add_i32 s1, s32, s0
+; GFX10-NEXT: s_add_i32 s0, s1, 4
; GFX10-NEXT: scratch_store_dword off, v0, s32 offset:4
; GFX10-NEXT: s_waitcnt_vscnt null, 0x0
; GFX10-NEXT: scratch_store_dword off, v1, s0
@@ -2089,11 +2103,13 @@ define void @store_load_large_imm_offset_foo() {
; GFX942-LABEL: store_load_large_imm_offset_foo:
; GFX942: ; %bb.0: ; %bb
; GFX942-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX942-NEXT: s_movk_i32 s0, 0x3e80
; GFX942-NEXT: v_mov_b32_e32 v0, 13
+; GFX942-NEXT: s_add_i32 s1, s32, s0
; GFX942-NEXT: scratch_store_dword off, v0, s32 offset:4 sc0 sc1
; GFX942-NEXT: s_waitcnt vmcnt(0)
; GFX942-NEXT: v_mov_b32_e32 v0, 15
-; GFX942-NEXT: s_add_i32 s0, s32, 0x3e84
+; GFX942-NEXT: s_add_i32 s0, s1, 4
; GFX942-NEXT: scratch_store_dword off, v0, s0 sc0 sc1
; GFX942-NEXT: s_waitcnt vmcnt(0)
; GFX942-NEXT: scratch_load_dword v0, off, s0 sc0 sc1
@@ -2104,7 +2120,10 @@ define void @store_load_large_imm_offset_foo() {
; GFX11: ; %bb.0: ; %bb
; GFX11-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX11-NEXT: v_dual_mov_b32 v0, 13 :: v_dual_mov_b32 v1, 15
-; GFX11-NEXT: s_add_i32 s0, s32, 0x3e84
+; GFX11-NEXT: s_movk_i32 s0, 0x3e80
+; GFX11-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(SALU_CYCLE_1)
+; GFX11-NEXT: s_add_i32 s1, s32, s0
+; GFX11-NEXT: s_add_i32 s0, s1, 4
; GFX11-NEXT: scratch_store_b32 off, v0, s32 offset:4 dlc
; GFX11-NEXT: s_waitcnt_vscnt null, 0x0
; GFX11-NEXT: scratch_store_b32 off, v1, s0 dlc
@@ -2133,11 +2152,13 @@ define void @store_load_large_imm_offset_foo() {
; UNALIGNED_GFX9-LABEL: store_load_large_imm_offset_foo:
; UNALIGNED_GFX9: ; %bb.0: ; %bb
; UNALIGNED_GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; UNALIGNED_GFX9-NEXT: s_movk_i32 s0, 0x3e80
; UNALIGNED_GFX9-NEXT: v_mov_b32_e32 v0, 13
+; UNALIGNED_GFX9-NEXT: s_add_i32 s1, s32, s0
; UNALIGNED_GFX9-NEXT: scratch_store_dword off, v0, s32 offset:4
; UNALIGNED_GFX9-NEXT: s_waitcnt vmcnt(0)
; UNALIGNED_GFX9-NEXT: v_mov_b32_e32 v0, 15
-; UNALIGNED_GFX9-NEXT: s_add_i32 s0, s32, 0x3e84
+; UNALIGNED_GFX9-NEXT: s_add_i32 s0, s1, 4
; UNALIGNED_GFX9-NEXT: scratch_store_dword off, v0, s0
; UNALIGNED_GFX9-NEXT: s_waitcnt vmcnt(0)
; UNALIGNED_GFX9-NEXT: scratch_load_dword v0, off, s0 glc
@@ -2148,8 +2169,10 @@ define void @store_load_large_imm_offset_foo() {
; UNALIGNED_GFX10: ; %bb.0: ; %bb
; UNALIGNED_GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; UNALIGNED_GFX10-NEXT: v_mov_b32_e32 v0, 13
+; UNALIGNED_GFX10-NEXT: s_movk_i32 s0, 0x3e80
; UNALIGNED_GFX10-NEXT: v_mov_b32_e32 v1, 15
-; UNALIGNED_GFX10-NEXT: s_add_i32 s0, s32, 0x3e84
+; UNALIGNED_GFX10-NEXT: s_add_i32 s1, s32, s0
+; UNALIGNED_GFX10-NEXT: s_add_i32 s0, s1, 4
; UNALIGNED_GFX10-NEXT: scratch_store_dword off, v0, s32 offset:4
; UNALIGNED_GFX10-NEXT: s_waitcnt_vscnt null, 0x0
; UNALIGNED_GFX10-NEXT: scratch_store_dword off, v1, s0
@@ -2161,11 +2184,13 @@ define void @store_load_large_imm_offset_foo() {
; UNALIGNED_GFX942-LABEL: store_load_large_imm_offset_foo:
; UNALIGNED_GFX942: ; %bb.0: ; %bb
; UNALIGNED_GFX942-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; UNALIGNED_GFX942-NEXT: s_movk_i32 s0, 0x3e80
; UNALIGNED_GFX942-NEXT: v_mov_b32_e32 v0, 13
+; UNALIGNED_GFX942-NEXT: s_add_i32 s1, s32, s0
; UNALIGNED_GFX942-NEXT: scratch_store_dword off, v0, s32 offset:4 sc0 sc1
; UNALIGNED_GFX942-NEXT: s_waitcnt vmcnt(0)
; UNALIGNED_GFX942-NEXT: v_mov_b32_e32 v0, 15
-; UNALIGNED_GFX942-NEXT: s_add_i32 s0, s32, 0x3e84
+; UNALIGNED_GFX942-NEXT: s_add_i32 s0, s1, 4
; UNALIGNED_GFX942-NEXT: scratch_store_dword off, v0, s0 sc0 sc1
; UNALIGNED_GFX942-NEXT: s_waitcnt vmcnt(0)
; UNALIGNED_GFX942-NEXT: scratch_load_dword v0, off, s0 sc0 sc1
@@ -2176,7 +2201,10 @@ define void @store_load_large_imm_offset_foo() {
; UNALIGNED_GFX11: ; %bb.0: ; %bb
; UNALIGNED_GFX11-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; UNALIGNED_GFX11-NEXT: v_dual_mov_b32 v0, 13 :: v_dual_mov_b32 v1, 15
-; UNALIGNED_GFX11-NEXT: s_add_i32 s0, s32, 0x3e84
+; UNALIGNED_GFX11-NEXT: s_movk_i32 s0, 0x3e80
+; UNALIGNED_GFX11-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(SALU_CYCLE_1)
+; UNALIGNED_GFX11-NEXT: s_add_i32 s1, s32, s0
+; UNALIGNED_GFX11-NEXT: s_add_i32 s0, s1, 4
; UNALIGNED_GFX11-NEXT: scratch_store_b32 off, v0, s32 offset:4 dlc
; UNALIGNED_GFX11-NEXT: s_waitcnt_vscnt null, 0x0
; UNALIGNED_GFX11-NEXT: scratch_store_b32 off, v1, s0 dlc
diff --git a/llvm/test/CodeGen/AMDGPU/flat-scratch.ll b/llvm/test/CodeGen/AMDGPU/flat-scratch.ll
index b25d9b245f5f6..fc8883924dfbc 100644
--- a/llvm/test/CodeGen/AMDGPU/flat-scratch.ll
+++ b/llvm/test/CodeGen/AMDGPU/flat-scratch.ll
@@ -3621,7 +3621,8 @@ define amdgpu_kernel void @store_load_large_imm_offset_kernel() {
; GFX9-NEXT: s_mov_b32 s0, 0
; GFX9-NEXT: scratch_store_dword off, v0, s0 offset:4
; GFX9-NEXT: s_waitcnt vmcnt(0)
-; GFX9-NEXT: s_movk_i32 s0, 0x3004
+; GFX9-NEXT: s_movk_i32 s0, 0x3000
+; GFX9-NEXT: s_add_i32 s0, s0, 4
; GFX9-NEXT: v_mov_b32_e32 v0, 15
; GFX9-NEXT: scratch_store_dword off, v0, s0 offset:3712
; GFX9-NEXT: s_waitcnt vmcnt(0)
@@ -3637,7 +3638,8 @@ define amdgpu_kernel void @store_load_large_imm_offset_kernel() {
; GFX10-NEXT: s_setreg_b32 hwreg(HW_REG_FLAT_SCR_HI), s9
; GFX10-NEXT: v_mov_b32_e32 v0, 13
; GFX10-NEXT: v_mov_b32_e32 v1, 15
-; GFX10-NEXT: s_movk_i32 s0, 0x3804
+; GFX10-NEXT: s_movk_i32 s0, 0x3800
+; GFX10-NEXT: s_add_i32 s0, s0, 4
; GFX10-NEXT: scratch_store_dword off, v0, off offset:4
; GFX10-NEXT: s_waitcnt_vscnt null, 0x0
; GFX10-NEXT: scratch_store_dword off, v1, s0 offset:1664
@@ -3682,7 +3684,8 @@ define amdgpu_kernel void @store_load_large_imm_offset_kernel() {
; GFX9-PAL-NEXT: s_addc_u32 flat_scratch_hi, s13, 0
; GFX9-PAL-NEXT: scratch_store_dword off, v0, s0 offset:4
; GFX9-PAL-NEXT: s_waitcnt vmcnt(0)
-; GFX9-PAL-NEXT: s_movk_i32 s0, 0x3004
+; GFX9-PAL-NEXT: s_movk_i32 s0, 0x3000
+; GFX9-PAL-NEXT: s_add_i32 s0, s0, 4
; GFX9-PAL-NEXT: v_mov_b32_e32 v0, 15
; GFX9-PAL-NEXT: scratch_store_dword off, v0, s0 offset:3712
; GFX9-PAL-NEXT: s_waitcnt vmcnt(0)
@@ -3716,8 +3719,9 @@ define amdgpu_kernel void @store_load_large_imm_offset_kernel() {
; GFX1010-PAL-NEXT: s_setreg_b32 hwreg(HW_REG_FLAT_SCR_HI), s13
; GFX1010-PAL-NEXT: v_mov_b32_e32 v0, 13
; GFX1010-PAL-NEXT: v_mov_b32_e32 v1, 15
+; GFX1010-PAL-NEXT: s_movk_i32 s0, 0x3800
; GFX1010-PAL-NEXT: s_mov_b32 s1, 0
-; GFX1010-PAL-NEXT: s_movk_i32 s0, 0x3804
+; GFX1010-PAL-NEXT: s_add_i32 s0, s0, 4
; GFX1010-PAL-NEXT: scratch_store_dword off, v0, s1 offset:4
; GFX1010-PAL-NEXT: s_waitcnt_vscnt null, 0x0
; GFX1010-PAL-NEXT: scratch_store_dword off, v1, s0 offset:1664
@@ -3739,7 +3743,8 @@ define amdgpu_kernel void @store_load_large_imm_offset_kernel() {
; GFX1030-PAL-NEXT: s_setreg_b32 hwreg(HW_REG_FLAT_SCR_HI), s13
; GFX1030-PAL-NEXT: v_mov_b32_e32 v0, 13
; GFX1030-PAL-NEXT: v_mov_b32_e32 v1, 15
-; GFX1030-PAL-NEXT: s_movk_i32 s0, 0x3804
+; GFX1030-PAL-NEXT: s_movk_i32 s0, 0x3800
+; GFX1030-PAL-NEXT: s_add_i32 s0, s0, 4
; GFX1030-PAL-NEXT: scratch_store_dword off, v0, off offset:4
; GFX1030-PAL-NEXT: s_waitcnt_vscnt null, 0x0
; GFX1030-PAL-NEXT: scratch_store_dword off, v1, s0 offset:1664
@@ -3785,10 +3790,12 @@ define void @store_load_large_imm_offset_foo() {
; GFX9-LABEL: store_load_large_imm_offset_foo:
; GFX9: ; %bb.0: ; %bb
; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT: s_movk_i32 s0, 0x3000
; GFX9-NEXT: v_mov_b32_e32 v0, 13
+; GFX9-NEXT: s_add_i32 s1, s32, s0
; GFX9-NEXT: scratch_store_dword off, v0, s32 offset:4
; GFX9-NEXT: s_waitcnt vmcnt(0)
-; GFX9-NEXT: s_add_i32 s0, s32, 0x3004
+; GFX9-NEXT: s_add_i32 s0, s1, 4
; GFX9-NEXT: v_mov_b32_e32 v0, 15
; GFX9-NEXT: scratch_store_dword off, v0, s0 offset:3712
; GFX9-NEXT: s_waitcnt vmcnt(0)
@@ -3800,8 +3807,10 @@ define void @store_load_large_imm_offset_foo() {
; GFX10: ; %bb.0: ; %bb
; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX10-NEXT: v_mov_b32_e32 v0, 13
+; GFX10-NEXT: s_movk_i32 s0, 0x3800
; GFX10-NEXT: v_mov_b32_e32 v1, 15
-; GFX10-NEXT: s_add_i32 s0, s32, 0x3804
+; GFX10-NEXT: s_add_i32 s1, s32, s0
+; GFX10-NEXT: s_add_i32 s0, s1, 4
; GFX10-NEXT: scratch_store_dword off, v0, s32 offset:4
; GFX10-NEXT: s_waitcnt_vscnt null, 0x0
; GFX10-NEXT: scratch_store_dword off, v1, s0 offset:1664
@@ -3843,10 +3852,12 @@ define void @store_load_large_imm_offset_foo() {
; GFX9-PAL-LABEL: store_load_large_imm_offset_foo:
; GFX9-PAL: ; %bb.0: ; %bb
; GFX9-PAL-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-PAL-NEXT: s_movk_i32 s0, 0x3000
; GFX9-PAL-NEXT: v_mov_b32_e32 v0, 13
+; GFX9-PAL-NEXT: s_add_i32 s1, s32, s0
; GFX9-PAL-NEXT: scratch_store_dword off, v0, s32 offset:4
; GFX9-PAL-NEXT: s_waitcnt vmcnt(0)
-; GFX9-PAL-NEXT: s_add_i32 s0, s32, 0x3004
+; GFX9-PAL-NEXT: s_add_i32 s0, s1, 4
; GFX9-PAL-NEXT: v_mov_b32_e32 v0, 15
; GFX9-PAL-NEXT: scratch_store_dword off, v0, s0 offset:3712
; GFX9-PAL-NEXT: s_waitcnt vmcnt(0)
@@ -3872,8 +3883,10 @@ define void @store_load_large_imm_offset_foo() {
; GFX10-PAL: ; %bb.0: ; %bb
; GFX10-PAL-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX10-PAL-NEXT: v_mov_b32_e32 v0, 13
+; GFX10-PAL-NEXT: s_movk_i32 s0, 0x3800
; GFX10-PAL-NEXT: v_mov_b32_e32 v1, 15
-; GFX10-PAL-NEXT: s_add_i32 s0, s32, 0x3804
+; GFX10-PAL-NEXT: s_add_i32 s1, s32, s0
+; GFX10-PAL-NEXT: s_add_i32 s0, s1, 4
; GFX10-PAL-NEXT: scratch_store_dword off, v0, s32 offset:4
; GFX10-PAL-NEXT: s_waitcnt_vscnt null, 0x0
; GFX10-PAL-NEXT: scratch_store_dword off, v1, s0 offset:1664
diff --git a/llvm/test/CodeGen/AMDGPU/fold-operands-frame-index.mir b/llvm/test/CodeGen/AMDGPU/fold-operands-frame-index.mir
index 7fad2f466bc9f..a88b1ecc40cc9 100644
--- a/llvm/test/CodeGen/AMDGPU/fold-operands-frame-index.mir
+++ b/llvm/test/CodeGen/AMDGPU/fold-operands-frame-index.mir
@@ -75,7 +75,8 @@ stack:
body: |
bb.0:
; CHECK-LABEL: name: fold_frame_index__s_add_i32__fi_materializedconst_0
- ; CHECK: [[S_ADD_I32_:%[0-9]+]]:sreg_32 = S_ADD_I32 %stack.0, 256, implicit-def $scc
+ ; CHECK: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 256
+ ; CHECK-NEXT: [[S_ADD_I32_:%[0-9]+]]:sreg_32 = S_ADD_I32 %stack.0, [[S_MOV_B32_]], implicit-def $scc
; CHECK-NEXT: $sgpr4 = COPY [[S_ADD_I32_]]
; CHECK-NEXT: SI_RETURN implicit $sgpr4
%0:sreg_32 = S_MOV_B32 %stack.0
diff --git a/llvm/test/CodeGen/AMDGPU/fold-sgpr-multi-imm.mir b/llvm/test/CodeGen/AMDGPU/fold-sgpr-multi-imm.mir
index cc4314263bcba..2f2d727ee2c59 100644
--- a/llvm/test/CodeGen/AMDGPU/fold-sgpr-multi-imm.mir
+++ b/llvm/test/CodeGen/AMDGPU/fold-sgpr-multi-imm.mir
@@ -46,7 +46,8 @@ body: |
%2:sreg_32 = S_LSHL2_ADD_U32 %0, %1, implicit-def $scc
...
# GCN-LABEL: name: test_frameindex{{$}}
-# GCN: %1:sreg_32 = S_ADD_I32 %stack.0, 70
+# GCN: [[S_MOV_B32_:%[0-9]+]]:sreg_32 = S_MOV_B32 70
+# GCN-NEXT: %1:sreg_32 = S_ADD_I32 %stack.0, [[S_MOV_B32_]]
---
name: test_frameindex
tracksRegLiveness: true
diff --git a/llvm/test/CodeGen/AMDGPU/frame-index-elimination.ll b/llvm/test/CodeGen/AMDGPU/frame-index-elimination.ll
index 15cda622b902d..f2fe61f5376e4 100644
--- a/llvm/test/CodeGen/AMDGPU/frame-index-elimination.ll
+++ b/llvm/test/CodeGen/AMDGPU/frame-index-elimination.ll
@@ -360,7 +360,8 @@ entry:
; s_add_i32.
; GCN-LABEL: {{^}}fi_sop2_s_add_u32_literal_error:
-; GCN: s_add_u32 [[ADD_LO:s[0-9]+]], 0, 0x2010
+; GCN: s_movk_i32 [[S_MOVK_I32_:s[0-9]+]], 0x1000
+; GCN: s_add_u32 [[ADD_LO:s[0-9]+]], 0x1010, [[S_MOVK_I32_]]
; GCN: s_addc_u32 [[ADD_HI:s[0-9]+]], s{{[0-9]+}}, 0
define amdgpu_kernel void @fi_sop2_s_add_u32_literal_error() #0 {
entry:
diff --git a/llvm/test/CodeGen/AMDGPU/issue130120-eliminate-frame-index.ll b/llvm/test/CodeGen/AMDGPU/issue130120-eliminate-frame-index.ll
index 1c298014e33e7..300124848c1aa 100644
--- a/llvm/test/CodeGen/AMDGPU/issue130120-eliminate-frame-index.ll
+++ b/llvm/test/CodeGen/AMDGPU/issue130120-eliminate-frame-index.ll
@@ -6,16 +6,24 @@ define amdgpu_gfx [13 x i32] @issue130120() {
; CHECK: ; %bb.0: ; %bb
; CHECK-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; CHECK-NEXT: v_mov_b32_e32 v0, 0
-; CHECK-NEXT: s_add_i32 s0, s32, 0xf0
-; CHECK-NEXT: s_add_i32 s1, s32, 0xf4
-; CHECK-NEXT: s_add_i32 s2, s32, 0xf8
-; CHECK-NEXT: s_add_i32 s3, s32, 0xfc
+; CHECK-NEXT: s_movk_i32 s1, 0xf4
+; CHECK-NEXT: s_movk_i32 s2, 0xf8
+; CHECK-NEXT: s_movk_i32 s3, 0xfc
+; CHECK-NEXT: s_movk_i32 s34, 0x100
; CHECK-NEXT: v_mov_b32_e32 v1, v0
-; CHECK-NEXT: s_add_i32 s34, s32, 0x100
-; CHECK-NEXT: s_add_i32 s35, s32, 0x104
-; CHECK-NEXT: s_add_i32 s36, s32, 0x108
-; CHECK-NEXT: s_add_i32 s37, s32, 0x110
-; CHECK-NEXT: s_add_i32 s38, s32, 0x120
+; CHECK-NEXT: s_movk_i32 s35, 0x104
+; CHECK-NEXT: s_movk_i32 s36, 0x108
+; CHECK-NEXT: s_movk_i32 s37, 0x110
+; CHECK-NEXT: s_movk_i32 s38, 0x120
+; CHECK-NEXT: s_add_i32 s0, s32, 0xf0
+; CHECK-NEXT: s_add_i32 s1, s32, s1
+; CHECK-NEXT: s_add_i32 s2, s32, s2
+; CHECK-NEXT: s_add_i32 s3, s32, s3
+; CHECK-NEXT: s_add_i32 s34, s32, s34
+; CHECK-NEXT: s_add_i32 s35, s32, s35
+; CHECK-NEXT: s_add_i32 s36, s32, s36
+; CHECK-NEXT: s_add_i32 s37, s32, s37
+; CHECK-NEXT: s_add_i32 s38, s32, s38
; CHECK-NEXT: s_or_b32 s39, s32, 4
; CHECK-NEXT: s_or_b32 s40, s32, 8
; CHECK-NEXT: s_or_b32 s41, s32, 12
diff --git a/llvm/test/CodeGen/AMDGPU/local-stack-alloc-block-sp-reference.ll b/llvm/test/CodeGen/AMDGPU/local-stack-alloc-block-sp-reference.ll
index a3ebaec4811a9..5f0ca7bc42ae0 100644
--- a/llvm/test/CodeGen/AMDGPU/local-stack-alloc-block-sp-reference.ll
+++ b/llvm/test/CodeGen/AMDGPU/local-stack-alloc-block-sp-reference.ll
@@ -74,7 +74,8 @@ define amdgpu_kernel void @local_stack_offset_uses_sp(ptr addrspace(1) %out) {
; FLATSCR-NEXT: s_waitcnt vmcnt(0)
; FLATSCR-NEXT: s_cbranch_scc1 .LBB0_1
; FLATSCR-NEXT: ; %bb.2: ; %split
-; FLATSCR-NEXT: s_movk_i32 s0, 0x5000
+; FLATSCR-NEXT: s_movk_i32 s0, 0x2000
+; FLATSCR-NEXT: s_addk_i32 s0, 0x3000
; FLATSCR-NEXT: scratch_load_dwordx2 v[0:1], off, s0 offset:208 glc
; FLATSCR-NEXT: s_waitcnt vmcnt(0)
; FLATSCR-NEXT: s_movk_i32 s0, 0x3000
@@ -175,7 +176,9 @@ define void @func_local_stack_offset_uses_sp(ptr addrspace(1) %out) {
; FLATSCR-NEXT: s_waitcnt vmcnt(0)
; FLATSCR-NEXT: s_cbranch_scc1 .LBB1_1
; FLATSCR-NEXT: ; %bb.2: ; %split
-; FLATSCR-NEXT: s_add_i32 s0, s33, 0x5000
+; FLATSCR-NEXT: s_movk_i32 s0, 0x2000
+; FLATSCR-NEXT: s_add_i32 s1, s33, s0
+; FLATSCR-NEXT: s_add_i32 s0, s1, 0x3000
; FLATSCR-NEXT: scratch_load_dwordx2 v[2:3], off, s0 offset:208 glc
; FLATSCR-NEXT: s_waitcnt vmcnt(0)
; FLATSCR-NEXT: s_add_i32 s0, s33, 0x3000
@@ -223,30 +226,35 @@ define amdgpu_kernel void @local_stack_offset_uses_sp_flat(ptr addrspace(1) %out
; MUBUF-NEXT: s_waitcnt vmcnt(0)
; MUBUF-NEXT: s_cbranch_scc1 .LBB2_1
; MUBUF-NEXT: ; %bb.2: ; %split
+; MUBUF-NEXT: s_movk_i32 s5, 0x12d4
; MUBUF-NEXT: v_mov_b32_e32 v1, 0x4000
-; MUBUF-NEXT: v_or_b32_e32 v0, 0x12d4, v1
+; MUBUF-NEXT: v_or_b32_e32 v0, s5, v1
+; MUBUF-NEXT: s_movk_i32 s5, 0x12d0
; MUBUF-NEXT: v_mov_b32_e32 v1, 0x4000
; MUBUF-NEXT: s_movk_i32 s4, 0x4000
; MUBUF-NEXT: buffer_load_dword v5, v0, s[0:3], 0 offen glc
; MUBUF-NEXT: s_waitcnt vmcnt(0)
-; MUBUF-NEXT: v_or_b32_e32 v0, 0x12d0, v1
+; MUBUF-NEXT: v_or_b32_e32 v0, s5, v1
+; MUBUF-NEXT: s_movk_i32 s5, 0x12c4
; MUBUF-NEXT: v_mov_b32_e32 v1, 0x4000
; MUBUF-NEXT: s_or_b32 s4, s4, 0x12c0
; MUBUF-NEXT: buffer_load_dword v4, v0, s[0:3], 0 offen glc
; MUBUF-NEXT: s_waitcnt vmcnt(0)
-; MUBUF-NEXT: v_or_b32_e32 v0, 0x12c4, v1
-; MUBUF-NEXT: v_mov_b32_e32 v3, 0x4000
+; MUBUF-NEXT: v_or_b32_e32 v0, s5, v1
; MUBUF-NEXT: buffer_load_dword v1, v0, s[0:3], 0 offen glc
; MUBUF-NEXT: s_waitcnt vmcnt(0)
; MUBUF-NEXT: v_mov_b32_e32 v0, s4
-; MUBUF-NEXT: v_or_b32_e32 v2, 0x12cc, v3
+; MUBUF-NEXT: s_movk_i32 s4, 0x12cc
+; MUBUF-NEXT: v_mov_b32_e32 v3, 0x4000
+; MUBUF-NEXT: v_or_b32_e32 v2, s4, v3
+; MUBUF-NEXT: s_movk_i32 s4, 0x12c8
; MUBUF-NEXT: v_mov_b32_e32 v6, 0x4000
; MUBUF-NEXT: buffer_load_dword v0, v0, s[0:3], 0 offen glc
; MUBUF-NEXT: s_waitcnt vmcnt(0)
; MUBUF-NEXT: v_mov_b32_e32 v7, 0x4000
; MUBUF-NEXT: buffer_load_dword v3, v2, s[0:3], 0 offen glc
; MUBUF-NEXT: s_waitcnt vmcnt(0)
-; MUBUF-NEXT: v_or_b32_e32 v2, 0x12c8, v6
+; MUBUF-NEXT: v_or_b32_e32 v2, s4, v6
; MUBUF-NEXT: v_mov_b32_e32 v8, 0x4000
; MUBUF-NEXT: v_mov_b32_e32 v9, 0x4000
; MUBUF-NEXT: buffer_load_dword v2, v2, s[0:3], 0 offen glc
@@ -298,7 +306,8 @@ define amdgpu_kernel void @local_stack_offset_uses_sp_flat(ptr addrspace(1) %out
; FLATSCR-NEXT: s_waitcnt vmcnt(0)
; FLATSCR-NEXT: s_cbranch_scc1 .LBB2_1
; FLATSCR-NEXT: ; %bb.2: ; %split
-; FLATSCR-NEXT: s_movk_i32 s0, 0x3000
+; FLATSCR-NEXT: s_movk_i32 s0, 0x1000
+; FLATSCR-NEXT: s_addk_i32 s0, 0x2000
; FLATSCR-NEXT: scratch_load_dwordx2 v[8:9], off, s0 offset:720 glc
; FLATSCR-NEXT: s_waitcnt vmcnt(0)
; FLATSCR-NEXT: scratch_load_dwordx4 v[0:3], off, s0 offset:704 glc
diff --git a/llvm/test/CodeGen/AMDGPU/no-folding-imm-to-inst-with-fi.ll b/llvm/test/CodeGen/AMDGPU/no-folding-imm-to-inst-with-fi.ll
new file mode 100644
index 0000000000000..6d0aa1e784530
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/no-folding-imm-to-inst-with-fi.ll
@@ -0,0 +1,108 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=amdgcn -mcpu=gfx1200 < %s | FileCheck %s
+
+define protected amdgpu_kernel void @no_folding_imm_to_inst_with_fi(<4 x i64> %val4, <16 x i64> %val16) {
+; CHECK-LABEL: no_folding_imm_to_inst_with_fi:
+; CHECK: ; %bb.0: ; %bb
+; CHECK-NEXT: s_clause 0x2
+; CHECK-NEXT: s_load_b256 s[36:43], s[4:5], 0x24
+; CHECK-NEXT: s_load_b512 s[16:31], s[4:5], 0xe4
+; CHECK-NEXT: s_load_b512 s[0:15], s[4:5], 0xa4
+; CHECK-NEXT: s_mov_b64 s[34:35], src_private_base
+; CHECK-NEXT: s_movk_i32 s33, 0x70
+; CHECK-NEXT: s_movk_i32 s34, 0x60
+; CHECK-NEXT: s_or_b32 s44, 0x80, s33
+; CHECK-NEXT: s_mov_b32 s45, s35
+; CHECK-NEXT: s_or_b32 s46, 0x80, s34
+; CHECK-NEXT: s_mov_b32 s47, s35
+; CHECK-NEXT: v_dual_mov_b32 v20, s44 :: v_dual_mov_b32 v21, s45
+; CHECK-NEXT: v_dual_mov_b32 v22, s46 :: v_dual_mov_b32 v23, s47
+; CHECK-NEXT: s_movk_i32 s34, 0x80
+; CHECK-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; CHECK-NEXT: v_dual_mov_b32 v34, s34 :: v_dual_mov_b32 v35, s35
+; CHECK-NEXT: s_wait_kmcnt 0x0
+; CHECK-NEXT: v_dual_mov_b32 v0, s40 :: v_dual_mov_b32 v1, s41
+; CHECK-NEXT: v_dual_mov_b32 v2, s42 :: v_dual_mov_b32 v3, s43
+; CHECK-NEXT: v_dual_mov_b32 v4, s36 :: v_dual_mov_b32 v5, s37
+; CHECK-NEXT: v_dual_mov_b32 v6, s38 :: v_dual_mov_b32 v7, s39
+; CHECK-NEXT: scratch_store_b128 off, v[0:3], off offset:16 scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_storecnt 0x0
+; CHECK-NEXT: v_dual_mov_b32 v0, s20 :: v_dual_mov_b32 v1, s21
+; CHECK-NEXT: s_movk_i32 s20, 0x50
+; CHECK-NEXT: v_dual_mov_b32 v8, s28 :: v_dual_mov_b32 v9, s29
+; CHECK-NEXT: v_dual_mov_b32 v10, s30 :: v_dual_mov_b32 v11, s31
+; CHECK-NEXT: s_wait_alu 0xfffe
+; CHECK-NEXT: s_or_b32 s20, 0x80, s20
+; CHECK-NEXT: s_mov_b32 s21, s35
+; CHECK-NEXT: v_dual_mov_b32 v12, s24 :: v_dual_mov_b32 v13, s25
+; CHECK-NEXT: v_dual_mov_b32 v14, s26 :: v_dual_mov_b32 v15, s27
+; CHECK-NEXT: v_dual_mov_b32 v2, s22 :: v_dual_mov_b32 v3, s23
+; CHECK-NEXT: s_wait_alu 0xfffe
+; CHECK-NEXT: v_dual_mov_b32 v25, s21 :: v_dual_mov_b32 v24, s20
+; CHECK-NEXT: scratch_store_b128 off, v[4:7], off scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_storecnt 0x0
+; CHECK-NEXT: flat_store_b128 v[20:21], v[8:11] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_storecnt 0x0
+; CHECK-NEXT: flat_store_b128 v[22:23], v[12:15] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_storecnt 0x0
+; CHECK-NEXT: flat_store_b128 v[24:25], v[0:3] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_storecnt 0x0
+; CHECK-NEXT: v_dual_mov_b32 v0, s16 :: v_dual_mov_b32 v1, s17
+; CHECK-NEXT: s_or_b32 s16, 0x80, 64
+; CHECK-NEXT: s_mov_b32 s17, s35
+; CHECK-NEXT: v_dual_mov_b32 v4, s12 :: v_dual_mov_b32 v5, s13
+; CHECK-NEXT: s_or_b32 s12, 0x80, 48
+; CHECK-NEXT: s_mov_b32 s13, s35
+; CHECK-NEXT: v_dual_mov_b32 v8, s8 :: v_dual_mov_b32 v9, s9
+; CHECK-NEXT: s_or_b32 s8, 0x80, 32
+; CHECK-NEXT: s_mov_b32 s9, s35
+; CHECK-NEXT: v_dual_mov_b32 v12, s4 :: v_dual_mov_b32 v13, s5
+; CHECK-NEXT: s_or_b32 s4, 0x80, 16
+; CHECK-NEXT: s_mov_b32 s5, s35
+; CHECK-NEXT: v_dual_mov_b32 v2, s18 :: v_dual_mov_b32 v3, s19
+; CHECK-NEXT: s_wait_alu 0xfffe
+; CHECK-NEXT: v_dual_mov_b32 v27, s17 :: v_dual_mov_b32 v26, s16
+; CHECK-NEXT: v_dual_mov_b32 v6, s14 :: v_dual_mov_b32 v7, s15
+; CHECK-NEXT: v_dual_mov_b32 v29, s13 :: v_dual_mov_b32 v28, s12
+; CHECK-NEXT: v_dual_mov_b32 v31, s9 :: v_dual_mov_b32 v30, s8
+; CHECK-NEXT: v_dual_mov_b32 v33, s5 :: v_dual_mov_b32 v32, s4
+; CHECK-NEXT: v_dual_mov_b32 v10, s10 :: v_dual_mov_b32 v11, s11
+; CHECK-NEXT: v_dual_mov_b32 v14, s6 :: v_dual_mov_b32 v15, s7
+; CHECK-NEXT: v_dual_mov_b32 v16, s0 :: v_dual_mov_b32 v17, s1
+; CHECK-NEXT: v_dual_mov_b32 v18, s2 :: v_dual_mov_b32 v19, s3
+; CHECK-NEXT: flat_store_b128 v[26:27], v[0:3] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_storecnt 0x0
+; CHECK-NEXT: flat_store_b128 v[28:29], v[4:7] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_storecnt 0x0
+; CHECK-NEXT: flat_store_b128 v[30:31], v[8:11] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_storecnt 0x0
+; CHECK-NEXT: flat_store_b128 v[32:33], v[12:15] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_storecnt 0x0
+; CHECK-NEXT: flat_store_b128 v[34:35], v[16:19] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_storecnt 0x0
+; CHECK-NEXT: flat_load_b128 v[0:3], v[22:23] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_loadcnt_dscnt 0x0
+; CHECK-NEXT: flat_load_b128 v[0:3], v[20:21] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_loadcnt_dscnt 0x0
+; CHECK-NEXT: flat_load_b128 v[0:3], v[26:27] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_loadcnt_dscnt 0x0
+; CHECK-NEXT: flat_load_b128 v[0:3], v[24:25] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_loadcnt_dscnt 0x0
+; CHECK-NEXT: flat_load_b128 v[0:3], v[30:31] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_loadcnt_dscnt 0x0
+; CHECK-NEXT: flat_load_b128 v[0:3], v[28:29] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_loadcnt_dscnt 0x0
+; CHECK-NEXT: flat_load_b128 v[0:3], v[34:35] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_loadcnt_dscnt 0x0
+; CHECK-NEXT: flat_load_b128 v[0:3], v[32:33] scope:SCOPE_SYS
+; CHECK-NEXT: s_wait_loadcnt 0x0
+; CHECK-NEXT: s_endpgm
+bb:
+ %alloca = alloca <4 x i64>, align 32, addrspace(5)
+ %alloca1 = alloca <16 x i64>, align 128, addrspace(5)
+ store volatile <4 x i64> %val4, ptr addrspace(5) %alloca
+ %ascast = addrspacecast ptr addrspace(5) %alloca1 to ptr
+ store volatile <16 x i64> %val16, ptr %ascast
+ %load = load volatile <16 x i64>, ptr %ascast
+ ret void
+}
From 3404c0b01302f1c4df0f9e7e61d22dc6f44674db Mon Sep 17 00:00:00 2001
From: Mikhail Gudim <mgudim at gmail.com>
Date: Wed, 6 Aug 2025 14:54:50 -0400
Subject: [PATCH 139/171] Slp basic test (#152355)
Add basic tests for the SLPVectorizer to make sure that upcoming
refactoring patches don't break anything. Also, record tests for a
missed opportunity: groups of contiguous loads that could be widened
into a single strided load.
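
For reference, the transformation these tests exercise is sketched
below. This is an illustrative fragment only, not part of the patch:
the function name @sketch is hypothetical, but the intrinsic call is
the same one the CHECK lines in the new test expect, where sixteen
strided scalar i8 loads collapse into one vp.strided.load.

  declare <16 x i8> @llvm.experimental.vp.strided.load.v16i8.p0.i64(ptr, i64, <16 x i1>, i32)

  define void @sketch(ptr %pl, i64 %stride, ptr %ps) {
    ; One strided VP load replaces 16 scalar loads at %pl + i*%stride.
    %v = call <16 x i8> @llvm.experimental.vp.strided.load.v16i8.p0.i64(ptr align 16 %pl, i64 %stride, <16 x i1> splat (i1 true), i32 16)
    ; The gathered lanes are then stored contiguously as one vector.
    store <16 x i8> %v, ptr %ps, align 16
    ret void
  }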
---
.../RISCV/basic-strided-loads.ll | 741 ++++++++++++++++++
1 file changed, 741 insertions(+)
create mode 100644 llvm/test/Transforms/SLPVectorizer/RISCV/basic-strided-loads.ll
diff --git a/llvm/test/Transforms/SLPVectorizer/RISCV/basic-strided-loads.ll b/llvm/test/Transforms/SLPVectorizer/RISCV/basic-strided-loads.ll
new file mode 100644
index 0000000000000..645dbc49269f0
--- /dev/null
+++ b/llvm/test/Transforms/SLPVectorizer/RISCV/basic-strided-loads.ll
@@ -0,0 +1,741 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 5
+
+; RUN: opt -mtriple=riscv64 -mattr=+m,+v -passes=slp-vectorizer -S < %s | FileCheck %s
+
+define void @const_stride_1_no_reordering(ptr %pl, ptr %ps) {
+; CHECK-LABEL: define void @const_stride_1_no_reordering(
+; CHECK-SAME: ptr [[PL:%.*]], ptr [[PS:%.*]]) #[[ATTR0:[0-9]+]] {
+; CHECK-NEXT: [[GEP_L0:%.*]] = getelementptr inbounds i8, ptr [[PL]], i64 0
+; CHECK-NEXT: [[GEP_S0:%.*]] = getelementptr inbounds i8, ptr [[PS]], i64 0
+; CHECK-NEXT: [[TMP1:%.*]] = load <16 x i8>, ptr [[GEP_L0]], align 16
+; CHECK-NEXT: store <16 x i8> [[TMP1]], ptr [[GEP_S0]], align 16
+; CHECK-NEXT: ret void
+;
+ %gep_l0 = getelementptr inbounds i8, ptr %pl, i64 0
+ %gep_l1 = getelementptr inbounds i8, ptr %pl, i64 1
+ %gep_l2 = getelementptr inbounds i8, ptr %pl, i64 2
+ %gep_l3 = getelementptr inbounds i8, ptr %pl, i64 3
+ %gep_l4 = getelementptr inbounds i8, ptr %pl, i64 4
+ %gep_l5 = getelementptr inbounds i8, ptr %pl, i64 5
+ %gep_l6 = getelementptr inbounds i8, ptr %pl, i64 6
+ %gep_l7 = getelementptr inbounds i8, ptr %pl, i64 7
+ %gep_l8 = getelementptr inbounds i8, ptr %pl, i64 8
+ %gep_l9 = getelementptr inbounds i8, ptr %pl, i64 9
+ %gep_l10 = getelementptr inbounds i8, ptr %pl, i64 10
+ %gep_l11 = getelementptr inbounds i8, ptr %pl, i64 11
+ %gep_l12 = getelementptr inbounds i8, ptr %pl, i64 12
+ %gep_l13 = getelementptr inbounds i8, ptr %pl, i64 13
+ %gep_l14 = getelementptr inbounds i8, ptr %pl, i64 14
+ %gep_l15 = getelementptr inbounds i8, ptr %pl, i64 15
+
+ %load0 = load i8, ptr %gep_l0 , align 16
+ %load1 = load i8, ptr %gep_l1 , align 16
+ %load2 = load i8, ptr %gep_l2 , align 16
+ %load3 = load i8, ptr %gep_l3 , align 16
+ %load4 = load i8, ptr %gep_l4 , align 16
+ %load5 = load i8, ptr %gep_l5 , align 16
+ %load6 = load i8, ptr %gep_l6 , align 16
+ %load7 = load i8, ptr %gep_l7 , align 16
+ %load8 = load i8, ptr %gep_l8 , align 16
+ %load9 = load i8, ptr %gep_l9 , align 16
+ %load10 = load i8, ptr %gep_l10, align 16
+ %load11 = load i8, ptr %gep_l11, align 16
+ %load12 = load i8, ptr %gep_l12, align 16
+ %load13 = load i8, ptr %gep_l13, align 16
+ %load14 = load i8, ptr %gep_l14, align 16
+ %load15 = load i8, ptr %gep_l15, align 16
+
+ %gep_s0 = getelementptr inbounds i8, ptr %ps, i64 0
+ %gep_s1 = getelementptr inbounds i8, ptr %ps, i64 1
+ %gep_s2 = getelementptr inbounds i8, ptr %ps, i64 2
+ %gep_s3 = getelementptr inbounds i8, ptr %ps, i64 3
+ %gep_s4 = getelementptr inbounds i8, ptr %ps, i64 4
+ %gep_s5 = getelementptr inbounds i8, ptr %ps, i64 5
+ %gep_s6 = getelementptr inbounds i8, ptr %ps, i64 6
+ %gep_s7 = getelementptr inbounds i8, ptr %ps, i64 7
+ %gep_s8 = getelementptr inbounds i8, ptr %ps, i64 8
+ %gep_s9 = getelementptr inbounds i8, ptr %ps, i64 9
+ %gep_s10 = getelementptr inbounds i8, ptr %ps, i64 10
+ %gep_s11 = getelementptr inbounds i8, ptr %ps, i64 11
+ %gep_s12 = getelementptr inbounds i8, ptr %ps, i64 12
+ %gep_s13 = getelementptr inbounds i8, ptr %ps, i64 13
+ %gep_s14 = getelementptr inbounds i8, ptr %ps, i64 14
+ %gep_s15 = getelementptr inbounds i8, ptr %ps, i64 15
+
+ store i8 %load0, ptr %gep_s0, align 16
+ store i8 %load1, ptr %gep_s1, align 16
+ store i8 %load2, ptr %gep_s2, align 16
+ store i8 %load3, ptr %gep_s3, align 16
+ store i8 %load4, ptr %gep_s4, align 16
+ store i8 %load5, ptr %gep_s5, align 16
+ store i8 %load6, ptr %gep_s6, align 16
+ store i8 %load7, ptr %gep_s7, align 16
+ store i8 %load8, ptr %gep_s8, align 16
+ store i8 %load9, ptr %gep_s9, align 16
+ store i8 %load10, ptr %gep_s10, align 16
+ store i8 %load11, ptr %gep_s11, align 16
+ store i8 %load12, ptr %gep_s12, align 16
+ store i8 %load13, ptr %gep_s13, align 16
+ store i8 %load14, ptr %gep_s14, align 16
+ store i8 %load15, ptr %gep_s15, align 16
+
+ ret void
+}
+
+define void @const_stride_1_with_reordering(ptr %pl, ptr %ps) {
+; CHECK-LABEL: define void @const_stride_1_with_reordering(
+; CHECK-SAME: ptr [[PL:%.*]], ptr [[PS:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[GEP_L0:%.*]] = getelementptr inbounds i8, ptr [[PL]], i64 0
+; CHECK-NEXT: [[GEP_S0:%.*]] = getelementptr inbounds i8, ptr [[PS]], i64 0
+; CHECK-NEXT: [[TMP1:%.*]] = load <16 x i8>, ptr [[GEP_L0]], align 16
+; CHECK-NEXT: [[TMP2:%.*]] = shufflevector <16 x i8> [[TMP1]], <16 x i8> poison, <16 x i32> <i32 1, i32 0, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
+; CHECK-NEXT: store <16 x i8> [[TMP2]], ptr [[GEP_S0]], align 16
+; CHECK-NEXT: ret void
+;
+ %gep_l0 = getelementptr inbounds i8, ptr %pl, i64 0
+ %gep_l1 = getelementptr inbounds i8, ptr %pl, i64 1
+ %gep_l2 = getelementptr inbounds i8, ptr %pl, i64 2
+ %gep_l3 = getelementptr inbounds i8, ptr %pl, i64 3
+ %gep_l4 = getelementptr inbounds i8, ptr %pl, i64 4
+ %gep_l5 = getelementptr inbounds i8, ptr %pl, i64 5
+ %gep_l6 = getelementptr inbounds i8, ptr %pl, i64 6
+ %gep_l7 = getelementptr inbounds i8, ptr %pl, i64 7
+ %gep_l8 = getelementptr inbounds i8, ptr %pl, i64 8
+ %gep_l9 = getelementptr inbounds i8, ptr %pl, i64 9
+ %gep_l10 = getelementptr inbounds i8, ptr %pl, i64 10
+ %gep_l11 = getelementptr inbounds i8, ptr %pl, i64 11
+ %gep_l12 = getelementptr inbounds i8, ptr %pl, i64 12
+ %gep_l13 = getelementptr inbounds i8, ptr %pl, i64 13
+ %gep_l14 = getelementptr inbounds i8, ptr %pl, i64 14
+ %gep_l15 = getelementptr inbounds i8, ptr %pl, i64 15
+
+ %load0 = load i8, ptr %gep_l0 , align 16
+ %load1 = load i8, ptr %gep_l1 , align 16
+ %load2 = load i8, ptr %gep_l2 , align 16
+ %load3 = load i8, ptr %gep_l3 , align 16
+ %load4 = load i8, ptr %gep_l4 , align 16
+ %load5 = load i8, ptr %gep_l5 , align 16
+ %load6 = load i8, ptr %gep_l6 , align 16
+ %load7 = load i8, ptr %gep_l7 , align 16
+ %load8 = load i8, ptr %gep_l8 , align 16
+ %load9 = load i8, ptr %gep_l9 , align 16
+ %load10 = load i8, ptr %gep_l10, align 16
+ %load11 = load i8, ptr %gep_l11, align 16
+ %load12 = load i8, ptr %gep_l12, align 16
+ %load13 = load i8, ptr %gep_l13, align 16
+ %load14 = load i8, ptr %gep_l14, align 16
+ %load15 = load i8, ptr %gep_l15, align 16
+
+ %gep_s0 = getelementptr inbounds i8, ptr %ps, i64 0
+ %gep_s1 = getelementptr inbounds i8, ptr %ps, i64 1
+ %gep_s2 = getelementptr inbounds i8, ptr %ps, i64 2
+ %gep_s3 = getelementptr inbounds i8, ptr %ps, i64 3
+ %gep_s4 = getelementptr inbounds i8, ptr %ps, i64 4
+ %gep_s5 = getelementptr inbounds i8, ptr %ps, i64 5
+ %gep_s6 = getelementptr inbounds i8, ptr %ps, i64 6
+ %gep_s7 = getelementptr inbounds i8, ptr %ps, i64 7
+ %gep_s8 = getelementptr inbounds i8, ptr %ps, i64 8
+ %gep_s9 = getelementptr inbounds i8, ptr %ps, i64 9
+ %gep_s10 = getelementptr inbounds i8, ptr %ps, i64 10
+ %gep_s11 = getelementptr inbounds i8, ptr %ps, i64 11
+ %gep_s12 = getelementptr inbounds i8, ptr %ps, i64 12
+ %gep_s13 = getelementptr inbounds i8, ptr %ps, i64 13
+ %gep_s14 = getelementptr inbounds i8, ptr %ps, i64 14
+ %gep_s15 = getelementptr inbounds i8, ptr %ps, i64 15
+
+ ; NOTE: value from %load1 is stored in %gep_s0
+ store i8 %load1, ptr %gep_s0, align 16
+ store i8 %load0, ptr %gep_s1, align 16
+ store i8 %load2, ptr %gep_s2, align 16
+ store i8 %load3, ptr %gep_s3, align 16
+ store i8 %load4, ptr %gep_s4, align 16
+ store i8 %load5, ptr %gep_s5, align 16
+ store i8 %load6, ptr %gep_s6, align 16
+ store i8 %load7, ptr %gep_s7, align 16
+ store i8 %load8, ptr %gep_s8, align 16
+ store i8 %load9, ptr %gep_s9, align 16
+ store i8 %load10, ptr %gep_s10, align 16
+ store i8 %load11, ptr %gep_s11, align 16
+ store i8 %load12, ptr %gep_s12, align 16
+ store i8 %load13, ptr %gep_s13, align 16
+ store i8 %load14, ptr %gep_s14, align 16
+ store i8 %load15, ptr %gep_s15, align 16
+
+ ret void
+}
+
+
+define void @const_stride_2_no_reordering(ptr %pl, ptr %ps) {
+; CHECK-LABEL: define void @const_stride_2_no_reordering(
+; CHECK-SAME: ptr [[PL:%.*]], ptr [[PS:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[GEP_L0:%.*]] = getelementptr inbounds i8, ptr [[PL]], i64 0
+; CHECK-NEXT: [[GEP_S0:%.*]] = getelementptr inbounds i8, ptr [[PS]], i64 0
+; CHECK-NEXT: [[TMP2:%.*]] = call <31 x i8> @llvm.masked.load.v31i8.p0(ptr [[GEP_L0]], i32 16, <31 x i1> <i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true>, <31 x i8> poison)
+; CHECK-NEXT: [[TMP1:%.*]] = shufflevector <31 x i8> [[TMP2]], <31 x i8> poison, <16 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14, i32 16, i32 18, i32 20, i32 22, i32 24, i32 26, i32 28, i32 30>
+; CHECK-NEXT: store <16 x i8> [[TMP1]], ptr [[GEP_S0]], align 16
+; CHECK-NEXT: ret void
+;
+ %gep_l0 = getelementptr inbounds i8, ptr %pl, i64 0
+ %gep_l1 = getelementptr inbounds i8, ptr %pl, i64 2
+ %gep_l2 = getelementptr inbounds i8, ptr %pl, i64 4
+ %gep_l3 = getelementptr inbounds i8, ptr %pl, i64 6
+ %gep_l4 = getelementptr inbounds i8, ptr %pl, i64 8
+ %gep_l5 = getelementptr inbounds i8, ptr %pl, i64 10
+ %gep_l6 = getelementptr inbounds i8, ptr %pl, i64 12
+ %gep_l7 = getelementptr inbounds i8, ptr %pl, i64 14
+ %gep_l8 = getelementptr inbounds i8, ptr %pl, i64 16
+ %gep_l9 = getelementptr inbounds i8, ptr %pl, i64 18
+ %gep_l10 = getelementptr inbounds i8, ptr %pl, i64 20
+ %gep_l11 = getelementptr inbounds i8, ptr %pl, i64 22
+ %gep_l12 = getelementptr inbounds i8, ptr %pl, i64 24
+ %gep_l13 = getelementptr inbounds i8, ptr %pl, i64 26
+ %gep_l14 = getelementptr inbounds i8, ptr %pl, i64 28
+ %gep_l15 = getelementptr inbounds i8, ptr %pl, i64 30
+
+ %load0 = load i8, ptr %gep_l0 , align 16
+ %load1 = load i8, ptr %gep_l1 , align 16
+ %load2 = load i8, ptr %gep_l2 , align 16
+ %load3 = load i8, ptr %gep_l3 , align 16
+ %load4 = load i8, ptr %gep_l4 , align 16
+ %load5 = load i8, ptr %gep_l5 , align 16
+ %load6 = load i8, ptr %gep_l6 , align 16
+ %load7 = load i8, ptr %gep_l7 , align 16
+ %load8 = load i8, ptr %gep_l8 , align 16
+ %load9 = load i8, ptr %gep_l9 , align 16
+ %load10 = load i8, ptr %gep_l10, align 16
+ %load11 = load i8, ptr %gep_l11, align 16
+ %load12 = load i8, ptr %gep_l12, align 16
+ %load13 = load i8, ptr %gep_l13, align 16
+ %load14 = load i8, ptr %gep_l14, align 16
+ %load15 = load i8, ptr %gep_l15, align 16
+
+ %gep_s0 = getelementptr inbounds i8, ptr %ps, i64 0
+ %gep_s1 = getelementptr inbounds i8, ptr %ps, i64 1
+ %gep_s2 = getelementptr inbounds i8, ptr %ps, i64 2
+ %gep_s3 = getelementptr inbounds i8, ptr %ps, i64 3
+ %gep_s4 = getelementptr inbounds i8, ptr %ps, i64 4
+ %gep_s5 = getelementptr inbounds i8, ptr %ps, i64 5
+ %gep_s6 = getelementptr inbounds i8, ptr %ps, i64 6
+ %gep_s7 = getelementptr inbounds i8, ptr %ps, i64 7
+ %gep_s8 = getelementptr inbounds i8, ptr %ps, i64 8
+ %gep_s9 = getelementptr inbounds i8, ptr %ps, i64 9
+ %gep_s10 = getelementptr inbounds i8, ptr %ps, i64 10
+ %gep_s11 = getelementptr inbounds i8, ptr %ps, i64 11
+ %gep_s12 = getelementptr inbounds i8, ptr %ps, i64 12
+ %gep_s13 = getelementptr inbounds i8, ptr %ps, i64 13
+ %gep_s14 = getelementptr inbounds i8, ptr %ps, i64 14
+ %gep_s15 = getelementptr inbounds i8, ptr %ps, i64 15
+
+ store i8 %load0, ptr %gep_s0, align 16
+ store i8 %load1, ptr %gep_s1, align 16
+ store i8 %load2, ptr %gep_s2, align 16
+ store i8 %load3, ptr %gep_s3, align 16
+ store i8 %load4, ptr %gep_s4, align 16
+ store i8 %load5, ptr %gep_s5, align 16
+ store i8 %load6, ptr %gep_s6, align 16
+ store i8 %load7, ptr %gep_s7, align 16
+ store i8 %load8, ptr %gep_s8, align 16
+ store i8 %load9, ptr %gep_s9, align 16
+ store i8 %load10, ptr %gep_s10, align 16
+ store i8 %load11, ptr %gep_s11, align 16
+ store i8 %load12, ptr %gep_s12, align 16
+ store i8 %load13, ptr %gep_s13, align 16
+ store i8 %load14, ptr %gep_s14, align 16
+ store i8 %load15, ptr %gep_s15, align 16
+
+ ret void
+}
+
+define void @const_stride_2_with_reordering(ptr %pl, ptr %ps) {
+; CHECK-LABEL: define void @const_stride_2_with_reordering(
+; CHECK-SAME: ptr [[PL:%.*]], ptr [[PS:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[GEP_L0:%.*]] = getelementptr inbounds i8, ptr [[PL]], i64 0
+; CHECK-NEXT: [[GEP_S0:%.*]] = getelementptr inbounds i8, ptr [[PS]], i64 0
+; CHECK-NEXT: [[TMP1:%.*]] = call <31 x i8> @llvm.masked.load.v31i8.p0(ptr [[GEP_L0]], i32 16, <31 x i1> <i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true>, <31 x i8> poison)
+; CHECK-NEXT: [[TMP3:%.*]] = shufflevector <31 x i8> [[TMP1]], <31 x i8> poison, <16 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14, i32 16, i32 18, i32 20, i32 22, i32 24, i32 26, i32 28, i32 30>
+; CHECK-NEXT: [[TMP2:%.*]] = shufflevector <31 x i8> [[TMP1]], <31 x i8> poison, <16 x i32> <i32 2, i32 0, i32 4, i32 6, i32 8, i32 10, i32 12, i32 14, i32 16, i32 18, i32 20, i32 22, i32 24, i32 26, i32 28, i32 30>
+; CHECK-NEXT: store <16 x i8> [[TMP2]], ptr [[GEP_S0]], align 16
+; CHECK-NEXT: ret void
+;
+ %gep_l0 = getelementptr inbounds i8, ptr %pl, i64 0
+ %gep_l1 = getelementptr inbounds i8, ptr %pl, i64 2
+ %gep_l2 = getelementptr inbounds i8, ptr %pl, i64 4
+ %gep_l3 = getelementptr inbounds i8, ptr %pl, i64 6
+ %gep_l4 = getelementptr inbounds i8, ptr %pl, i64 8
+ %gep_l5 = getelementptr inbounds i8, ptr %pl, i64 10
+ %gep_l6 = getelementptr inbounds i8, ptr %pl, i64 12
+ %gep_l7 = getelementptr inbounds i8, ptr %pl, i64 14
+ %gep_l8 = getelementptr inbounds i8, ptr %pl, i64 16
+ %gep_l9 = getelementptr inbounds i8, ptr %pl, i64 18
+ %gep_l10 = getelementptr inbounds i8, ptr %pl, i64 20
+ %gep_l11 = getelementptr inbounds i8, ptr %pl, i64 22
+ %gep_l12 = getelementptr inbounds i8, ptr %pl, i64 24
+ %gep_l13 = getelementptr inbounds i8, ptr %pl, i64 26
+ %gep_l14 = getelementptr inbounds i8, ptr %pl, i64 28
+ %gep_l15 = getelementptr inbounds i8, ptr %pl, i64 30
+
+ %load0 = load i8, ptr %gep_l0 , align 16
+ %load1 = load i8, ptr %gep_l1 , align 16
+ %load2 = load i8, ptr %gep_l2 , align 16
+ %load3 = load i8, ptr %gep_l3 , align 16
+ %load4 = load i8, ptr %gep_l4 , align 16
+ %load5 = load i8, ptr %gep_l5 , align 16
+ %load6 = load i8, ptr %gep_l6 , align 16
+ %load7 = load i8, ptr %gep_l7 , align 16
+ %load8 = load i8, ptr %gep_l8 , align 16
+ %load9 = load i8, ptr %gep_l9 , align 16
+ %load10 = load i8, ptr %gep_l10, align 16
+ %load11 = load i8, ptr %gep_l11, align 16
+ %load12 = load i8, ptr %gep_l12, align 16
+ %load13 = load i8, ptr %gep_l13, align 16
+ %load14 = load i8, ptr %gep_l14, align 16
+ %load15 = load i8, ptr %gep_l15, align 16
+
+ %gep_s0 = getelementptr inbounds i8, ptr %ps, i64 0
+ %gep_s1 = getelementptr inbounds i8, ptr %ps, i64 1
+ %gep_s2 = getelementptr inbounds i8, ptr %ps, i64 2
+ %gep_s3 = getelementptr inbounds i8, ptr %ps, i64 3
+ %gep_s4 = getelementptr inbounds i8, ptr %ps, i64 4
+ %gep_s5 = getelementptr inbounds i8, ptr %ps, i64 5
+ %gep_s6 = getelementptr inbounds i8, ptr %ps, i64 6
+ %gep_s7 = getelementptr inbounds i8, ptr %ps, i64 7
+ %gep_s8 = getelementptr inbounds i8, ptr %ps, i64 8
+ %gep_s9 = getelementptr inbounds i8, ptr %ps, i64 9
+ %gep_s10 = getelementptr inbounds i8, ptr %ps, i64 10
+ %gep_s11 = getelementptr inbounds i8, ptr %ps, i64 11
+ %gep_s12 = getelementptr inbounds i8, ptr %ps, i64 12
+ %gep_s13 = getelementptr inbounds i8, ptr %ps, i64 13
+ %gep_s14 = getelementptr inbounds i8, ptr %ps, i64 14
+ %gep_s15 = getelementptr inbounds i8, ptr %ps, i64 15
+
+ store i8 %load1, ptr %gep_s0, align 16
+ store i8 %load0, ptr %gep_s1, align 16
+ store i8 %load2, ptr %gep_s2, align 16
+ store i8 %load3, ptr %gep_s3, align 16
+ store i8 %load4, ptr %gep_s4, align 16
+ store i8 %load5, ptr %gep_s5, align 16
+ store i8 %load6, ptr %gep_s6, align 16
+ store i8 %load7, ptr %gep_s7, align 16
+ store i8 %load8, ptr %gep_s8, align 16
+ store i8 %load9, ptr %gep_s9, align 16
+ store i8 %load10, ptr %gep_s10, align 16
+ store i8 %load11, ptr %gep_s11, align 16
+ store i8 %load12, ptr %gep_s12, align 16
+ store i8 %load13, ptr %gep_s13, align 16
+ store i8 %load14, ptr %gep_s14, align 16
+ store i8 %load15, ptr %gep_s15, align 16
+
+ ret void
+}
+
+define void @rt_stride_1_no_reordering(ptr %pl, i64 %stride, ptr %ps) {
+; CHECK-LABEL: define void @rt_stride_1_no_reordering(
+; CHECK-SAME: ptr [[PL:%.*]], i64 [[STRIDE:%.*]], ptr [[PS:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[STRIDE0:%.*]] = mul nsw i64 [[STRIDE]], 0
+; CHECK-NEXT: [[GEP_L0:%.*]] = getelementptr inbounds i8, ptr [[PL]], i64 [[STRIDE0]]
+; CHECK-NEXT: [[GEP_S0:%.*]] = getelementptr inbounds i8, ptr [[PS]], i64 0
+; CHECK-NEXT: [[TMP1:%.*]] = mul i64 [[STRIDE]], 1
+; CHECK-NEXT: [[TMP2:%.*]] = call <16 x i8> @llvm.experimental.vp.strided.load.v16i8.p0.i64(ptr align 16 [[GEP_L0]], i64 [[TMP1]], <16 x i1> splat (i1 true), i32 16)
+; CHECK-NEXT: store <16 x i8> [[TMP2]], ptr [[GEP_S0]], align 16
+; CHECK-NEXT: ret void
+;
+ %stride0 = mul nsw i64 %stride, 0
+ %stride1 = mul nsw i64 %stride, 1
+ %stride2 = mul nsw i64 %stride, 2
+ %stride3 = mul nsw i64 %stride, 3
+ %stride4 = mul nsw i64 %stride, 4
+ %stride5 = mul nsw i64 %stride, 5
+ %stride6 = mul nsw i64 %stride, 6
+ %stride7 = mul nsw i64 %stride, 7
+ %stride8 = mul nsw i64 %stride, 8
+ %stride9 = mul nsw i64 %stride, 9
+ %stride10 = mul nsw i64 %stride, 10
+ %stride11 = mul nsw i64 %stride, 11
+ %stride12 = mul nsw i64 %stride, 12
+ %stride13 = mul nsw i64 %stride, 13
+ %stride14 = mul nsw i64 %stride, 14
+ %stride15 = mul nsw i64 %stride, 15
+
+ %gep_l0 = getelementptr inbounds i8, ptr %pl, i64 %stride0
+ %gep_l1 = getelementptr inbounds i8, ptr %pl, i64 %stride1
+ %gep_l2 = getelementptr inbounds i8, ptr %pl, i64 %stride2
+ %gep_l3 = getelementptr inbounds i8, ptr %pl, i64 %stride3
+ %gep_l4 = getelementptr inbounds i8, ptr %pl, i64 %stride4
+ %gep_l5 = getelementptr inbounds i8, ptr %pl, i64 %stride5
+ %gep_l6 = getelementptr inbounds i8, ptr %pl, i64 %stride6
+ %gep_l7 = getelementptr inbounds i8, ptr %pl, i64 %stride7
+ %gep_l8 = getelementptr inbounds i8, ptr %pl, i64 %stride8
+ %gep_l9 = getelementptr inbounds i8, ptr %pl, i64 %stride9
+ %gep_l10 = getelementptr inbounds i8, ptr %pl, i64 %stride10
+ %gep_l11 = getelementptr inbounds i8, ptr %pl, i64 %stride11
+ %gep_l12 = getelementptr inbounds i8, ptr %pl, i64 %stride12
+ %gep_l13 = getelementptr inbounds i8, ptr %pl, i64 %stride13
+ %gep_l14 = getelementptr inbounds i8, ptr %pl, i64 %stride14
+ %gep_l15 = getelementptr inbounds i8, ptr %pl, i64 %stride15
+
+ %load0 = load i8, ptr %gep_l0 , align 16
+ %load1 = load i8, ptr %gep_l1 , align 16
+ %load2 = load i8, ptr %gep_l2 , align 16
+ %load3 = load i8, ptr %gep_l3 , align 16
+ %load4 = load i8, ptr %gep_l4 , align 16
+ %load5 = load i8, ptr %gep_l5 , align 16
+ %load6 = load i8, ptr %gep_l6 , align 16
+ %load7 = load i8, ptr %gep_l7 , align 16
+ %load8 = load i8, ptr %gep_l8 , align 16
+ %load9 = load i8, ptr %gep_l9 , align 16
+ %load10 = load i8, ptr %gep_l10, align 16
+ %load11 = load i8, ptr %gep_l11, align 16
+ %load12 = load i8, ptr %gep_l12, align 16
+ %load13 = load i8, ptr %gep_l13, align 16
+ %load14 = load i8, ptr %gep_l14, align 16
+ %load15 = load i8, ptr %gep_l15, align 16
+
+ %gep_s0 = getelementptr inbounds i8, ptr %ps, i64 0
+ %gep_s1 = getelementptr inbounds i8, ptr %ps, i64 1
+ %gep_s2 = getelementptr inbounds i8, ptr %ps, i64 2
+ %gep_s3 = getelementptr inbounds i8, ptr %ps, i64 3
+ %gep_s4 = getelementptr inbounds i8, ptr %ps, i64 4
+ %gep_s5 = getelementptr inbounds i8, ptr %ps, i64 5
+ %gep_s6 = getelementptr inbounds i8, ptr %ps, i64 6
+ %gep_s7 = getelementptr inbounds i8, ptr %ps, i64 7
+ %gep_s8 = getelementptr inbounds i8, ptr %ps, i64 8
+ %gep_s9 = getelementptr inbounds i8, ptr %ps, i64 9
+ %gep_s10 = getelementptr inbounds i8, ptr %ps, i64 10
+ %gep_s11 = getelementptr inbounds i8, ptr %ps, i64 11
+ %gep_s12 = getelementptr inbounds i8, ptr %ps, i64 12
+ %gep_s13 = getelementptr inbounds i8, ptr %ps, i64 13
+ %gep_s14 = getelementptr inbounds i8, ptr %ps, i64 14
+ %gep_s15 = getelementptr inbounds i8, ptr %ps, i64 15
+
+ store i8 %load0, ptr %gep_s0, align 16
+ store i8 %load1, ptr %gep_s1, align 16
+ store i8 %load2, ptr %gep_s2, align 16
+ store i8 %load3, ptr %gep_s3, align 16
+ store i8 %load4, ptr %gep_s4, align 16
+ store i8 %load5, ptr %gep_s5, align 16
+ store i8 %load6, ptr %gep_s6, align 16
+ store i8 %load7, ptr %gep_s7, align 16
+ store i8 %load8, ptr %gep_s8, align 16
+ store i8 %load9, ptr %gep_s9, align 16
+ store i8 %load10, ptr %gep_s10, align 16
+ store i8 %load11, ptr %gep_s11, align 16
+ store i8 %load12, ptr %gep_s12, align 16
+ store i8 %load13, ptr %gep_s13, align 16
+ store i8 %load14, ptr %gep_s14, align 16
+ store i8 %load15, ptr %gep_s15, align 16
+
+ ret void
+}
+
+define void @rt_stride_1_with_reordering(ptr %pl, i64 %stride, ptr %ps) {
+; CHECK-LABEL: define void @rt_stride_1_with_reordering(
+; CHECK-SAME: ptr [[PL:%.*]], i64 [[STRIDE:%.*]], ptr [[PS:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[STRIDE0:%.*]] = mul nsw i64 [[STRIDE]], 0
+; CHECK-NEXT: [[GEP_L0:%.*]] = getelementptr inbounds i8, ptr [[PL]], i64 [[STRIDE0]]
+; CHECK-NEXT: [[GEP_S0:%.*]] = getelementptr inbounds i8, ptr [[PS]], i64 0
+; CHECK-NEXT: [[TMP1:%.*]] = mul i64 [[STRIDE]], 1
+; CHECK-NEXT: [[TMP2:%.*]] = call <16 x i8> @llvm.experimental.vp.strided.load.v16i8.p0.i64(ptr align 16 [[GEP_L0]], i64 [[TMP1]], <16 x i1> splat (i1 true), i32 16)
+; CHECK-NEXT: [[TMP3:%.*]] = shufflevector <16 x i8> [[TMP2]], <16 x i8> poison, <16 x i32> <i32 1, i32 0, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
+; CHECK-NEXT: store <16 x i8> [[TMP3]], ptr [[GEP_S0]], align 16
+; CHECK-NEXT: ret void
+;
+ %stride0 = mul nsw i64 %stride, 0
+ %stride1 = mul nsw i64 %stride, 1
+ %stride2 = mul nsw i64 %stride, 2
+ %stride3 = mul nsw i64 %stride, 3
+ %stride4 = mul nsw i64 %stride, 4
+ %stride5 = mul nsw i64 %stride, 5
+ %stride6 = mul nsw i64 %stride, 6
+ %stride7 = mul nsw i64 %stride, 7
+ %stride8 = mul nsw i64 %stride, 8
+ %stride9 = mul nsw i64 %stride, 9
+ %stride10 = mul nsw i64 %stride, 10
+ %stride11 = mul nsw i64 %stride, 11
+ %stride12 = mul nsw i64 %stride, 12
+ %stride13 = mul nsw i64 %stride, 13
+ %stride14 = mul nsw i64 %stride, 14
+ %stride15 = mul nsw i64 %stride, 15
+
+ %gep_l0 = getelementptr inbounds i8, ptr %pl, i64 %stride0
+ %gep_l1 = getelementptr inbounds i8, ptr %pl, i64 %stride1
+ %gep_l2 = getelementptr inbounds i8, ptr %pl, i64 %stride2
+ %gep_l3 = getelementptr inbounds i8, ptr %pl, i64 %stride3
+ %gep_l4 = getelementptr inbounds i8, ptr %pl, i64 %stride4
+ %gep_l5 = getelementptr inbounds i8, ptr %pl, i64 %stride5
+ %gep_l6 = getelementptr inbounds i8, ptr %pl, i64 %stride6
+ %gep_l7 = getelementptr inbounds i8, ptr %pl, i64 %stride7
+ %gep_l8 = getelementptr inbounds i8, ptr %pl, i64 %stride8
+ %gep_l9 = getelementptr inbounds i8, ptr %pl, i64 %stride9
+ %gep_l10 = getelementptr inbounds i8, ptr %pl, i64 %stride10
+ %gep_l11 = getelementptr inbounds i8, ptr %pl, i64 %stride11
+ %gep_l12 = getelementptr inbounds i8, ptr %pl, i64 %stride12
+ %gep_l13 = getelementptr inbounds i8, ptr %pl, i64 %stride13
+ %gep_l14 = getelementptr inbounds i8, ptr %pl, i64 %stride14
+ %gep_l15 = getelementptr inbounds i8, ptr %pl, i64 %stride15
+
+ %load0 = load i8, ptr %gep_l0 , align 16
+ %load1 = load i8, ptr %gep_l1 , align 16
+ %load2 = load i8, ptr %gep_l2 , align 16
+ %load3 = load i8, ptr %gep_l3 , align 16
+ %load4 = load i8, ptr %gep_l4 , align 16
+ %load5 = load i8, ptr %gep_l5 , align 16
+ %load6 = load i8, ptr %gep_l6 , align 16
+ %load7 = load i8, ptr %gep_l7 , align 16
+ %load8 = load i8, ptr %gep_l8 , align 16
+ %load9 = load i8, ptr %gep_l9 , align 16
+ %load10 = load i8, ptr %gep_l10, align 16
+ %load11 = load i8, ptr %gep_l11, align 16
+ %load12 = load i8, ptr %gep_l12, align 16
+ %load13 = load i8, ptr %gep_l13, align 16
+ %load14 = load i8, ptr %gep_l14, align 16
+ %load15 = load i8, ptr %gep_l15, align 16
+
+ %gep_s0 = getelementptr inbounds i8, ptr %ps, i64 0
+ %gep_s1 = getelementptr inbounds i8, ptr %ps, i64 1
+ %gep_s2 = getelementptr inbounds i8, ptr %ps, i64 2
+ %gep_s3 = getelementptr inbounds i8, ptr %ps, i64 3
+ %gep_s4 = getelementptr inbounds i8, ptr %ps, i64 4
+ %gep_s5 = getelementptr inbounds i8, ptr %ps, i64 5
+ %gep_s6 = getelementptr inbounds i8, ptr %ps, i64 6
+ %gep_s7 = getelementptr inbounds i8, ptr %ps, i64 7
+ %gep_s8 = getelementptr inbounds i8, ptr %ps, i64 8
+ %gep_s9 = getelementptr inbounds i8, ptr %ps, i64 9
+ %gep_s10 = getelementptr inbounds i8, ptr %ps, i64 10
+ %gep_s11 = getelementptr inbounds i8, ptr %ps, i64 11
+ %gep_s12 = getelementptr inbounds i8, ptr %ps, i64 12
+ %gep_s13 = getelementptr inbounds i8, ptr %ps, i64 13
+ %gep_s14 = getelementptr inbounds i8, ptr %ps, i64 14
+ %gep_s15 = getelementptr inbounds i8, ptr %ps, i64 15
+
+ store i8 %load1, ptr %gep_s0, align 16
+ store i8 %load0, ptr %gep_s1, align 16
+ store i8 %load2, ptr %gep_s2, align 16
+ store i8 %load3, ptr %gep_s3, align 16
+ store i8 %load4, ptr %gep_s4, align 16
+ store i8 %load5, ptr %gep_s5, align 16
+ store i8 %load6, ptr %gep_s6, align 16
+ store i8 %load7, ptr %gep_s7, align 16
+ store i8 %load8, ptr %gep_s8, align 16
+ store i8 %load9, ptr %gep_s9, align 16
+ store i8 %load10, ptr %gep_s10, align 16
+ store i8 %load11, ptr %gep_s11, align 16
+ store i8 %load12, ptr %gep_s12, align 16
+ store i8 %load13, ptr %gep_s13, align 16
+ store i8 %load14, ptr %gep_s14, align 16
+ store i8 %load15, ptr %gep_s15, align 16
+
+ ret void
+}
+
+; TODO: We want to generate this code:
+; define void @constant_stride_widen_no_reordering(ptr %pl, i64 %stride, ptr %ps) {
+; %gep_l0 = getelementptr inbounds i8, ptr %pl, i64 0
+; %gep_s0 = getelementptr inbounds i8, ptr %ps, i64 0
+; %strided_load = call <4 x i32> @llvm.experimental.vp.strided.load.v4i32.p0.i64(ptr align 16 %gep_l0, i64 8, <4 x i1> splat (i1 true), i32 4)
+; %bitcast_ = bitcast <4 x i32> %strided_load to <16 x i8>
+; store <16 x i8> %bitcast_, ptr %gep_s0, align 16
+; ret void
+; }
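+; (In the widened form, each group of four contiguous i8 lanes is
+; reinterpreted as one i32, so a single <4 x i32> strided load with an
+; 8-byte stride plus a bitcast replaces the sixteen scalar i8 loads.)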
+define void @constant_stride_widen_no_reordering(ptr %pl, i64 %stride, ptr %ps) {
+; CHECK-LABEL: define void @constant_stride_widen_no_reordering(
+; CHECK-SAME: ptr [[PL:%.*]], i64 [[STRIDE:%.*]], ptr [[PS:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[GEP_L0:%.*]] = getelementptr inbounds i8, ptr [[PL]], i64 0
+; CHECK-NEXT: [[GEP_S0:%.*]] = getelementptr inbounds i8, ptr [[PS]], i64 0
+; CHECK-NEXT: [[TMP1:%.*]] = call <28 x i8> @llvm.masked.load.v28i8.p0(ptr [[GEP_L0]], i32 16, <28 x i1> <i1 true, i1 true, i1 true, i1 true, i1 false, i1 false, i1 false, i1 false, i1 true, i1 true, i1 true, i1 true, i1 false, i1 false, i1 false, i1 false, i1 true, i1 true, i1 true, i1 true, i1 false, i1 false, i1 false, i1 false, i1 true, i1 true, i1 true, i1 true>, <28 x i8> poison)
+; CHECK-NEXT: [[TMP8:%.*]] = shufflevector <28 x i8> [[TMP1]], <28 x i8> poison, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 8, i32 9, i32 10, i32 11, i32 16, i32 17, i32 18, i32 19, i32 24, i32 25, i32 26, i32 27>
+; CHECK-NEXT: store <16 x i8> [[TMP8]], ptr [[GEP_S0]], align 16
+; CHECK-NEXT: ret void
+;
+ %gep_l0 = getelementptr inbounds i8, ptr %pl, i64 0
+ %gep_l1 = getelementptr inbounds i8, ptr %pl, i64 1
+ %gep_l2 = getelementptr inbounds i8, ptr %pl, i64 2
+ %gep_l3 = getelementptr inbounds i8, ptr %pl, i64 3
+ %gep_l4 = getelementptr inbounds i8, ptr %pl, i64 8
+ %gep_l5 = getelementptr inbounds i8, ptr %pl, i64 9
+ %gep_l6 = getelementptr inbounds i8, ptr %pl, i64 10
+ %gep_l7 = getelementptr inbounds i8, ptr %pl, i64 11
+ %gep_l8 = getelementptr inbounds i8, ptr %pl, i64 16
+ %gep_l9 = getelementptr inbounds i8, ptr %pl, i64 17
+ %gep_l10 = getelementptr inbounds i8, ptr %pl, i64 18
+ %gep_l11 = getelementptr inbounds i8, ptr %pl, i64 19
+ %gep_l12 = getelementptr inbounds i8, ptr %pl, i64 24
+ %gep_l13 = getelementptr inbounds i8, ptr %pl, i64 25
+ %gep_l14 = getelementptr inbounds i8, ptr %pl, i64 26
+ %gep_l15 = getelementptr inbounds i8, ptr %pl, i64 27
+
+ %load0 = load i8, ptr %gep_l0 , align 16
+ %load1 = load i8, ptr %gep_l1 , align 16
+ %load2 = load i8, ptr %gep_l2 , align 16
+ %load3 = load i8, ptr %gep_l3 , align 16
+ %load4 = load i8, ptr %gep_l4 , align 16
+ %load5 = load i8, ptr %gep_l5 , align 16
+ %load6 = load i8, ptr %gep_l6 , align 16
+ %load7 = load i8, ptr %gep_l7 , align 16
+ %load8 = load i8, ptr %gep_l8 , align 16
+ %load9 = load i8, ptr %gep_l9 , align 16
+ %load10 = load i8, ptr %gep_l10, align 16
+ %load11 = load i8, ptr %gep_l11, align 16
+ %load12 = load i8, ptr %gep_l12, align 16
+ %load13 = load i8, ptr %gep_l13, align 16
+ %load14 = load i8, ptr %gep_l14, align 16
+ %load15 = load i8, ptr %gep_l15, align 16
+
+ %gep_s0 = getelementptr inbounds i8, ptr %ps, i64 0
+ %gep_s1 = getelementptr inbounds i8, ptr %ps, i64 1
+ %gep_s2 = getelementptr inbounds i8, ptr %ps, i64 2
+ %gep_s3 = getelementptr inbounds i8, ptr %ps, i64 3
+ %gep_s4 = getelementptr inbounds i8, ptr %ps, i64 4
+ %gep_s5 = getelementptr inbounds i8, ptr %ps, i64 5
+ %gep_s6 = getelementptr inbounds i8, ptr %ps, i64 6
+ %gep_s7 = getelementptr inbounds i8, ptr %ps, i64 7
+ %gep_s8 = getelementptr inbounds i8, ptr %ps, i64 8
+ %gep_s9 = getelementptr inbounds i8, ptr %ps, i64 9
+ %gep_s10 = getelementptr inbounds i8, ptr %ps, i64 10
+ %gep_s11 = getelementptr inbounds i8, ptr %ps, i64 11
+ %gep_s12 = getelementptr inbounds i8, ptr %ps, i64 12
+ %gep_s13 = getelementptr inbounds i8, ptr %ps, i64 13
+ %gep_s14 = getelementptr inbounds i8, ptr %ps, i64 14
+ %gep_s15 = getelementptr inbounds i8, ptr %ps, i64 15
+
+ store i8 %load0, ptr %gep_s0, align 16
+ store i8 %load1, ptr %gep_s1, align 16
+ store i8 %load2, ptr %gep_s2, align 16
+ store i8 %load3, ptr %gep_s3, align 16
+ store i8 %load4, ptr %gep_s4, align 16
+ store i8 %load5, ptr %gep_s5, align 16
+ store i8 %load6, ptr %gep_s6, align 16
+ store i8 %load7, ptr %gep_s7, align 16
+ store i8 %load8, ptr %gep_s8, align 16
+ store i8 %load9, ptr %gep_s9, align 16
+ store i8 %load10, ptr %gep_s10, align 16
+ store i8 %load11, ptr %gep_s11, align 16
+ store i8 %load12, ptr %gep_s12, align 16
+ store i8 %load13, ptr %gep_s13, align 16
+ store i8 %load14, ptr %gep_s14, align 16
+ store i8 %load15, ptr %gep_s15, align 16
+
+ ret void
+}
+
+; TODO: We want to generate this code:
+; define void @rt_stride_widen_no_reordering(ptr %pl, i64 %stride, ptr %ps) {
+; %gep_l0 = getelementptr inbounds i8, ptr %pl, i64 0
+; %gep_s0 = getelementptr inbounds i8, ptr %ps, i64 0
+; %strided_load = call <4 x i32> @llvm.experimental.vp.strided.load.v4i32.p0.i64(ptr align 16 %gep_l0, i64 %stride, <4 x i1> splat (i1 true), i32 4)
+; %bitcast_ = bitcast <4 x i32> %strided_load to <16 x i8>
+; store <16 x i8> %bitcast_, ptr %gep_s0, align 16
+; ret void
+; }
+define void @rt_stride_widen_no_reordering(ptr %pl, i64 %stride, ptr %ps) {
+; CHECK-LABEL: define void @rt_stride_widen_no_reordering(
+; CHECK-SAME: ptr [[PL:%.*]], i64 [[STRIDE:%.*]], ptr [[PS:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: [[OFFSET0:%.*]] = mul nsw i64 [[STRIDE]], 0
+; CHECK-NEXT: [[OFFSET4:%.*]] = mul nsw i64 [[STRIDE]], 1
+; CHECK-NEXT: [[OFFSET8:%.*]] = mul nsw i64 [[STRIDE]], 2
+; CHECK-NEXT: [[OFFSET12:%.*]] = mul nsw i64 [[STRIDE]], 3
+; CHECK-NEXT: [[GEP_L0:%.*]] = getelementptr inbounds i8, ptr [[PL]], i64 [[OFFSET0]]
+; CHECK-NEXT: [[GEP_L4:%.*]] = getelementptr inbounds i8, ptr [[PL]], i64 [[OFFSET4]]
+; CHECK-NEXT: [[GEP_L8:%.*]] = getelementptr inbounds i8, ptr [[PL]], i64 [[OFFSET8]]
+; CHECK-NEXT: [[GEP_L12:%.*]] = getelementptr inbounds i8, ptr [[PL]], i64 [[OFFSET12]]
+; CHECK-NEXT: [[GEP_S0:%.*]] = getelementptr inbounds i8, ptr [[PS]], i64 0
+; CHECK-NEXT: [[TMP1:%.*]] = load <4 x i8>, ptr [[GEP_L0]], align 16
+; CHECK-NEXT: [[TMP2:%.*]] = load <4 x i8>, ptr [[GEP_L4]], align 16
+; CHECK-NEXT: [[TMP3:%.*]] = load <4 x i8>, ptr [[GEP_L8]], align 16
+; CHECK-NEXT: [[TMP4:%.*]] = load <4 x i8>, ptr [[GEP_L12]], align 16
+; CHECK-NEXT: [[TMP5:%.*]] = shufflevector <4 x i8> [[TMP1]], <4 x i8> poison, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
+; CHECK-NEXT: [[TMP6:%.*]] = shufflevector <4 x i8> [[TMP2]], <4 x i8> poison, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
+; CHECK-NEXT: [[TMP7:%.*]] = shufflevector <4 x i8> [[TMP1]], <4 x i8> [[TMP2]], <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
+; CHECK-NEXT: [[TMP11:%.*]] = shufflevector <4 x i8> [[TMP3]], <4 x i8> poison, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
+; CHECK-NEXT: [[TMP9:%.*]] = shufflevector <16 x i8> [[TMP7]], <16 x i8> [[TMP11]], <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 16, i32 17, i32 18, i32 19, i32 poison, i32 poison, i32 poison, i32 poison>
+; CHECK-NEXT: [[TMP10:%.*]] = shufflevector <4 x i8> [[TMP4]], <4 x i8> poison, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
+; CHECK-NEXT: [[TMP8:%.*]] = shufflevector <16 x i8> [[TMP9]], <16 x i8> [[TMP10]], <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 16, i32 17, i32 18, i32 19>
+; CHECK-NEXT: store <16 x i8> [[TMP8]], ptr [[GEP_S0]], align 16
+; CHECK-NEXT: ret void
+;
+ %offset0 = mul nsw i64 %stride, 0
+ %offset1 = add nsw i64 %offset0, 1
+ %offset2 = add nsw i64 %offset0, 2
+ %offset3 = add nsw i64 %offset0, 3
+ %offset4 = mul nsw i64 %stride, 1
+ %offset5 = add nsw i64 %offset4, 1
+ %offset6 = add nsw i64 %offset4, 2
+ %offset7 = add nsw i64 %offset4, 3
+ %offset8 = mul nsw i64 %stride, 2
+ %offset9 = add nsw i64 %offset8, 1
+ %offset10 = add nsw i64 %offset8, 2
+ %offset11 = add nsw i64 %offset8, 3
+ %offset12 = mul nsw i64 %stride, 3
+ %offset13 = add nsw i64 %offset12, 1
+ %offset14 = add nsw i64 %offset12, 2
+ %offset15 = add nsw i64 %offset12, 3
+
+ %gep_l0 = getelementptr inbounds i8, ptr %pl, i64 %offset0
+ %gep_l1 = getelementptr inbounds i8, ptr %pl, i64 %offset1
+ %gep_l2 = getelementptr inbounds i8, ptr %pl, i64 %offset2
+ %gep_l3 = getelementptr inbounds i8, ptr %pl, i64 %offset3
+ %gep_l4 = getelementptr inbounds i8, ptr %pl, i64 %offset4
+ %gep_l5 = getelementptr inbounds i8, ptr %pl, i64 %offset5
+ %gep_l6 = getelementptr inbounds i8, ptr %pl, i64 %offset6
+ %gep_l7 = getelementptr inbounds i8, ptr %pl, i64 %offset7
+ %gep_l8 = getelementptr inbounds i8, ptr %pl, i64 %offset8
+ %gep_l9 = getelementptr inbounds i8, ptr %pl, i64 %offset9
+ %gep_l10 = getelementptr inbounds i8, ptr %pl, i64 %offset10
+ %gep_l11 = getelementptr inbounds i8, ptr %pl, i64 %offset11
+ %gep_l12 = getelementptr inbounds i8, ptr %pl, i64 %offset12
+ %gep_l13 = getelementptr inbounds i8, ptr %pl, i64 %offset13
+ %gep_l14 = getelementptr inbounds i8, ptr %pl, i64 %offset14
+ %gep_l15 = getelementptr inbounds i8, ptr %pl, i64 %offset15
+
+ %load0 = load i8, ptr %gep_l0 , align 16
+ %load1 = load i8, ptr %gep_l1 , align 16
+ %load2 = load i8, ptr %gep_l2 , align 16
+ %load3 = load i8, ptr %gep_l3 , align 16
+ %load4 = load i8, ptr %gep_l4 , align 16
+ %load5 = load i8, ptr %gep_l5 , align 16
+ %load6 = load i8, ptr %gep_l6 , align 16
+ %load7 = load i8, ptr %gep_l7 , align 16
+ %load8 = load i8, ptr %gep_l8 , align 16
+ %load9 = load i8, ptr %gep_l9 , align 16
+ %load10 = load i8, ptr %gep_l10, align 16
+ %load11 = load i8, ptr %gep_l11, align 16
+ %load12 = load i8, ptr %gep_l12, align 16
+ %load13 = load i8, ptr %gep_l13, align 16
+ %load14 = load i8, ptr %gep_l14, align 16
+ %load15 = load i8, ptr %gep_l15, align 16
+
+ %gep_s0 = getelementptr inbounds i8, ptr %ps, i64 0
+ %gep_s1 = getelementptr inbounds i8, ptr %ps, i64 1
+ %gep_s2 = getelementptr inbounds i8, ptr %ps, i64 2
+ %gep_s3 = getelementptr inbounds i8, ptr %ps, i64 3
+ %gep_s4 = getelementptr inbounds i8, ptr %ps, i64 4
+ %gep_s5 = getelementptr inbounds i8, ptr %ps, i64 5
+ %gep_s6 = getelementptr inbounds i8, ptr %ps, i64 6
+ %gep_s7 = getelementptr inbounds i8, ptr %ps, i64 7
+ %gep_s8 = getelementptr inbounds i8, ptr %ps, i64 8
+ %gep_s9 = getelementptr inbounds i8, ptr %ps, i64 9
+ %gep_s10 = getelementptr inbounds i8, ptr %ps, i64 10
+ %gep_s11 = getelementptr inbounds i8, ptr %ps, i64 11
+ %gep_s12 = getelementptr inbounds i8, ptr %ps, i64 12
+ %gep_s13 = getelementptr inbounds i8, ptr %ps, i64 13
+ %gep_s14 = getelementptr inbounds i8, ptr %ps, i64 14
+ %gep_s15 = getelementptr inbounds i8, ptr %ps, i64 15
+
+ store i8 %load0, ptr %gep_s0, align 16
+ store i8 %load1, ptr %gep_s1, align 16
+ store i8 %load2, ptr %gep_s2, align 16
+ store i8 %load3, ptr %gep_s3, align 16
+ store i8 %load4, ptr %gep_s4, align 16
+ store i8 %load5, ptr %gep_s5, align 16
+ store i8 %load6, ptr %gep_s6, align 16
+ store i8 %load7, ptr %gep_s7, align 16
+ store i8 %load8, ptr %gep_s8, align 16
+ store i8 %load9, ptr %gep_s9, align 16
+ store i8 %load10, ptr %gep_s10, align 16
+ store i8 %load11, ptr %gep_s11, align 16
+ store i8 %load12, ptr %gep_s12, align 16
+ store i8 %load13, ptr %gep_s13, align 16
+ store i8 %load14, ptr %gep_s14, align 16
+ store i8 %load15, ptr %gep_s15, align 16
+
+ ret void
+}
>From 80cf436a27be535b93fa453faa11c983ea59d444 Mon Sep 17 00:00:00 2001
From: lntue <lntue at google.com>
Date: Wed, 6 Aug 2025 19:01:28 +0000
Subject: [PATCH 140/171] [libc] Fix undefined behavior in BitsFxTest.h
(#152347)
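The undefined behavior here is a left shift of a negative `long long`: before C++20 that is UB, so the fix performs the shift in the unsigned domain and casts back. A minimal standalone sketch of the pattern (the helper name and values are illustrative, not the test's actual code):

```cpp
#include <cstdint>

// Shifting a negative signed value left is UB before C++20; shifting the
// unsigned representation and converting back yields the intended
// two's-complement result (modular conversion is well-defined since C++20
// and matches mainstream compilers before that).
constexpr int64_t shiftedNegative(int64_t v, unsigned amount) {
  return static_cast<int64_t>(static_cast<uint64_t>(v) << amount);
}

static_assert(shiftedNegative(-1372, 3) == -10976, "equals -1372 * 8");
```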
---
libc/test/src/stdfix/BitsFxTest.h | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/libc/test/src/stdfix/BitsFxTest.h b/libc/test/src/stdfix/BitsFxTest.h
index eca6ab1bb28ab..cdb363f07053b 100644
--- a/libc/test/src/stdfix/BitsFxTest.h
+++ b/libc/test/src/stdfix/BitsFxTest.h
@@ -54,7 +54,7 @@ class BitsFxTest : public LIBC_NAMESPACE::testing::Test {
if (max >= 11 && FXRep::FRACTION_LEN >= kMinFbits) {
// (10.71875)_10 = (1010.1011100)_2
- constexpr long long kExpected = 1372;
+ constexpr int64_t kExpected = 1372;
EXPECT_EQ(
static_cast<XType>(kExpected << (FXRep::FRACTION_LEN - kMinFbits)),
func(special_num_t));
@@ -63,9 +63,11 @@ class BitsFxTest : public LIBC_NAMESPACE::testing::Test {
if constexpr (FXRep::SIGN_LEN > 0) {
if (min <= -11 && FXRep::FRACTION_LEN >= kMinFbits) {
// (-10.71875)_10 = (-1010.1011100)_2
- constexpr long long kExpected = -1372;
- EXPECT_EQ(static_cast<XType>(kExpected
- << (FXRep::FRACTION_LEN - kMinFbits)),
+ constexpr int64_t kExpected =
+ static_cast<int64_t>(static_cast<uint64_t>(-1372)
+ << (FXRep::FRACTION_LEN - kMinFbits));
+
+ EXPECT_EQ(static_cast<XType>(kExpected),
func(negative_special_num_t));
}
}
>From 25d1285eecbab731eaf418c8aab44e4eb5f9e538 Mon Sep 17 00:00:00 2001
From: Florian Hahn <flo at fhahn.com>
Date: Wed, 6 Aug 2025 20:03:31 +0100
Subject: [PATCH 141/171] [VPlan] Replace single-entry VPPhis with their
incoming values.
Replace trivial, single-entry VPPhis with their incoming values.
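A phi that merges exactly one incoming value is equal to that value, so every use can be rewired to the operand and the phi becomes dead; the many test diffs below show `[[BC_RESUME_VAL]]`-style phi operands folding to their constants. A minimal sketch of the idea, using hypothetical stand-in types rather than the real VPlan classes:

```cpp
#include <cassert>
#include <vector>

// Hypothetical, simplified stand-ins for VPlan's value/recipe classes.
struct Value {
  std::vector<Value *> Operands; // incoming values (for phis)
  std::vector<Value *> Users;    // values that read this one
  void replaceAllUsesWith(Value *Repl) {
    for (Value *U : Users)
      for (Value *&Op : U->Operands)
        if (Op == this)
          Op = Repl;
    Users.clear();
  }
};

// The transform: a single-entry phi is just its one incoming value.
void simplifySingleEntryPhi(Value &Phi) {
  if (Phi.Operands.size() == 1)
    Phi.replaceAllUsesWith(Phi.Operands[0]);
}

int main() {
  Value In, Phi, Use;
  Phi.Operands = {&In};
  Use.Operands = {&Phi};
  Phi.Users = {&Use};
  simplifySingleEntryPhi(Phi);
  assert(Use.Operands[0] == &In); // use now reads the incoming value directly
  return 0;
}
```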
---
.../Transforms/Vectorize/VPlanTransforms.cpp | 6 ++
.../AArch64/clamped-trip-count.ll | 8 +--
.../AArch64/conditional-branches-cost.ll | 12 ++--
.../first-order-recurrence-fold-tail.ll | 4 +-
.../LoopVectorize/AArch64/optsize_minsize.ll | 8 +--
.../LoopVectorize/AArch64/pr73894.ll | 4 +-
.../AArch64/reduction-recurrence-costs-sve.ll | 12 ++--
.../AArch64/scalable-strict-fadd.ll | 30 +++++-----
.../LoopVectorize/AArch64/store-costs-sve.ll | 2 +-
.../AArch64/sve-tail-folding-reductions.ll | 24 ++++----
.../AArch64/tail-folding-styles.ll | 8 +--
...eave-to-widen-memory-remove-loop-region.ll | 2 +-
.../ARM/mve-gather-scatter-tailpred.ll | 6 +-
.../LoopVectorize/ARM/mve-reduction-types.ll | 36 ++++++------
.../LoopVectorize/ARM/optsize_minsize.ll | 2 +-
.../RISCV/evl-compatible-loops.ll | 2 +-
.../LoopVectorize/RISCV/inloop-reduction.ll | 16 +++---
.../LoopVectorize/RISCV/low-trip-count.ll | 6 +-
.../Transforms/LoopVectorize/RISCV/pr88802.ll | 2 +-
.../LoopVectorize/RISCV/scalable-tailfold.ll | 14 ++---
.../RISCV/tail-folding-cast-intrinsics.ll | 2 +-
.../RISCV/tail-folding-cond-reduction.ll | 32 +++++------
.../LoopVectorize/RISCV/tail-folding-div.ll | 8 +--
.../tail-folding-fixed-order-recurrence.ll | 22 ++++----
.../RISCV/tail-folding-gather-scatter.ll | 2 +-
.../RISCV/tail-folding-inloop-reduction.ll | 56 +++++++++----------
.../RISCV/tail-folding-interleave.ll | 2 +-
.../LoopVectorize/RISCV/tail-folding-iv32.ll | 2 +-
.../RISCV/tail-folding-known-no-overflow.ll | 6 +-
.../RISCV/tail-folding-masked-loadstore.ll | 2 +-
.../RISCV/tail-folding-ordered-reduction.ll | 4 +-
.../RISCV/tail-folding-reduction.ll | 56 +++++++++----------
.../RISCV/tail-folding-reverse-load-store.ll | 10 ++--
.../RISCV/tail-folding-safe-dep-distance.ll | 6 +-
.../RISCV/tail-folding-uniform-store.ll | 2 +-
.../truncate-to-minimal-bitwidth-cost.ll | 2 +-
.../truncate-to-minimal-bitwidth-evl-crash.ll | 2 +-
.../LoopVectorize/RISCV/uniform-load-store.ll | 14 ++---
...ctor-loop-backedge-elimination-with-evl.ll | 2 +-
.../RISCV/vectorize-vp-intrinsics.ll | 2 +-
.../SystemZ/force-target-instruction-cost.ll | 2 +-
.../LoopVectorize/SystemZ/pr47665.ll | 2 +-
.../predicated-first-order-recurrence.ll | 4 +-
.../LoopVectorize/X86/constant-fold.ll | 4 +-
.../LoopVectorize/X86/cost-model.ll | 4 +-
...bounds-flags-for-reverse-vector-pointer.ll | 4 +-
.../X86/fixed-order-recurrence.ll | 4 +-
.../LoopVectorize/X86/induction-costs.ll | 2 +-
.../Transforms/LoopVectorize/X86/optsize.ll | 12 ++--
.../Transforms/LoopVectorize/X86/pr81872.ll | 2 +-
.../X86/scev-checks-unprofitable.ll | 2 +-
.../LoopVectorize/X86/tail_loop_folding.ll | 8 +--
.../X86/vect.omp.force.small-tc.ll | 2 +-
.../X86/vectorize-force-tail-with-evl.ll | 2 +-
.../X86/vectorize-interleaved-accesses-gap.ll | 2 +-
.../LoopVectorize/dead_instructions.ll | 6 +-
.../dont-fold-tail-for-divisible-TC.ll | 2 +-
.../LoopVectorize/first-order-recurrence.ll | 42 +++++++-------
.../LoopVectorize/iv-select-cmp-decreasing.ll | 8 +--
.../Transforms/LoopVectorize/loop-form.ll | 2 +-
.../LoopVectorize/memdep-fold-tail.ll | 2 +-
llvm/test/Transforms/LoopVectorize/optsize.ll | 7 ++-
.../pr45679-fold-tail-by-masking.ll | 12 ++--
.../pr46525-expander-insertpoint.ll | 2 +-
.../pr51614-fold-tail-by-masking.ll | 4 +-
.../predicatedinst-loop-invariant.ll | 8 +--
.../LoopVectorize/scalable-predication.ll | 2 +-
.../LoopVectorize/select-reduction.ll | 8 +--
...e-reduction-results-in-tail-folded-loop.ll | 4 +-
.../strict-fadd-interleave-only.ll | 24 ++++----
.../tail-folding-alloca-in-loop.ll | 2 +-
...folding-optimize-vector-induction-width.ll | 18 +++---
.../LoopVectorize/tail-folding-switch.ll | 2 +-
.../tail-folding-vectorization-factor-1.ll | 4 +-
.../Transforms/LoopVectorize/uniform-blend.ll | 2 +-
.../vector-loop-backedge-elimination.ll | 12 ++--
76 files changed, 337 insertions(+), 330 deletions(-)
diff --git a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
index 1a71a75776380..d99d6940c1c91 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
@@ -1164,6 +1164,12 @@ static void simplifyRecipe(VPRecipeBase &R, VPTypeAnalysis &TypeInfo) {
return;
}
+ if (auto *Phi = dyn_cast<VPPhi>(Def)) {
+ if (Phi->getNumOperands() == 1)
+ Phi->replaceAllUsesWith(Phi->getOperand(0));
+ return;
+ }
+
// Some simplifications can only be applied after unrolling. Perform them
// below.
if (!Plan->isUnrolled())
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/clamped-trip-count.ll b/llvm/test/Transforms/LoopVectorize/AArch64/clamped-trip-count.ll
index 795de3d978e74..a8d9a0c89dec5 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/clamped-trip-count.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/clamped-trip-count.ll
@@ -45,8 +45,8 @@ define void @clamped_tc_8(ptr nocapture %dst, i32 %n, i64 %val) vscale_range(1,1
; CHECK-NEXT: [[BC_RESUME_VAL1:%.*]] = phi ptr [ [[DST]], [[ENTRY]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ]
-; CHECK-NEXT: [[P_OUT_TAIL_09:%.*]] = phi ptr [ [[BC_RESUME_VAL1]], [[SCALAR_PH]] ], [ [[INCDEC_PTR:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[P_OUT_TAIL_09:%.*]] = phi ptr [ [[DST]], [[SCALAR_PH]] ], [ [[INCDEC_PTR:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[TMP19:%.*]] = shl nuw nsw i64 [[INDVARS_IV]], 3
; CHECK-NEXT: [[SHR3:%.*]] = lshr i64 [[VAL]], [[TMP19]]
; CHECK-NEXT: [[CONV4:%.*]] = trunc i64 [[SHR3]] to i8
@@ -128,8 +128,8 @@ define void @clamped_tc_max_8(ptr nocapture %dst, i32 %n, i64 %val) vscale_range
; CHECK-NEXT: [[BC_RESUME_VAL1:%.*]] = phi ptr [ [[DST]], [[FOR_BODY_PREHEADER]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ]
-; CHECK-NEXT: [[P_OUT_TAIL_09:%.*]] = phi ptr [ [[BC_RESUME_VAL1]], [[SCALAR_PH]] ], [ [[INCDEC_PTR:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[P_OUT_TAIL_09:%.*]] = phi ptr [ [[DST]], [[SCALAR_PH]] ], [ [[INCDEC_PTR:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[TMP19:%.*]] = shl nuw nsw i64 [[INDVARS_IV]], 3
; CHECK-NEXT: [[SHR3:%.*]] = lshr i64 [[VAL]], [[TMP19]]
; CHECK-NEXT: [[CONV4:%.*]] = trunc i64 [[SHR3]] to i8
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/conditional-branches-cost.ll b/llvm/test/Transforms/LoopVectorize/AArch64/conditional-branches-cost.ll
index 0232d88347d0a..4b895ae0c4b5b 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/conditional-branches-cost.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/conditional-branches-cost.ll
@@ -459,7 +459,7 @@ define void @latch_branch_cost(ptr %dst) {
; PRED-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; PRED-NEXT: br label %[[LOOP:.*]]
; PRED: [[LOOP]]:
-; PRED-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; PRED-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; PRED-NEXT: [[GEP:%.*]] = getelementptr i8, ptr [[DST]], i64 [[IV]]
; PRED-NEXT: store i8 0, ptr [[GEP]], align 1
; PRED-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1
@@ -738,8 +738,8 @@ define void @multiple_exit_conditions(ptr %src, ptr noalias %dst) #1 {
; PRED-NEXT: [[BC_RESUME_VAL1:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; PRED-NEXT: br label %[[LOOP:.*]]
; PRED: [[LOOP]]:
-; PRED-NEXT: [[PTR_IV:%.*]] = phi ptr [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[PTR_IV_NEXT:%.*]], %[[LOOP]] ]
-; PRED-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL1]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; PRED-NEXT: [[PTR_IV:%.*]] = phi ptr [ [[DST]], %[[SCALAR_PH]] ], [ [[PTR_IV_NEXT:%.*]], %[[LOOP]] ]
+; PRED-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; PRED-NEXT: [[L:%.*]] = load i16, ptr [[SRC]], align 2
; PRED-NEXT: [[O:%.*]] = or i16 [[L]], 1
; PRED-NEXT: [[CONV:%.*]] = uitofp i16 [[O]] to double
@@ -865,7 +865,7 @@ define void @low_trip_count_fold_tail_scalarized_store(ptr %dst) {
; DEFAULT-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; DEFAULT-NEXT: br label %[[LOOP:.*]]
; DEFAULT: [[LOOP]]:
-; DEFAULT-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; DEFAULT-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; DEFAULT-NEXT: [[IV_TRUNC:%.*]] = trunc i64 [[IV]] to i8
; DEFAULT-NEXT: [[GEP:%.*]] = getelementptr i8, ptr [[DST]], i64 [[IV]]
; DEFAULT-NEXT: store i8 [[IV_TRUNC]], ptr [[GEP]], align 1
@@ -967,7 +967,7 @@ define void @low_trip_count_fold_tail_scalarized_store(ptr %dst) {
; PRED-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; PRED-NEXT: br label %[[LOOP:.*]]
; PRED: [[LOOP]]:
-; PRED-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; PRED-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; PRED-NEXT: [[IV_TRUNC:%.*]] = trunc i64 [[IV]] to i8
; PRED-NEXT: [[GEP:%.*]] = getelementptr i8, ptr [[DST]], i64 [[IV]]
; PRED-NEXT: store i8 [[IV_TRUNC]], ptr [[GEP]], align 1
@@ -1554,7 +1554,7 @@ define void @redundant_branch_and_tail_folding(ptr %dst, i1 %c) {
; PRED-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; PRED-NEXT: br label %[[LOOP_HEADER:.*]]
; PRED: [[LOOP_HEADER]]:
-; PRED-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ]
+; PRED-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ]
; PRED-NEXT: br i1 [[C]], label %[[LOOP_LATCH]], label %[[THEN:.*]]
; PRED: [[THEN]]:
; PRED-NEXT: br label %[[LOOP_LATCH]]
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/first-order-recurrence-fold-tail.ll b/llvm/test/Transforms/LoopVectorize/AArch64/first-order-recurrence-fold-tail.ll
index fff99f1498ae7..41a624b05482f 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/first-order-recurrence-fold-tail.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/first-order-recurrence-fold-tail.ll
@@ -75,8 +75,8 @@ define i32 @test_phi_iterator_invalidation(ptr %A, ptr noalias %B) {
; CHECK-NEXT: [[SCALAR_RECUR_INIT:%.*]] = phi i16 [ 0, [[ENTRY]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
-; CHECK-NEXT: [[SCALAR_RECUR:%.*]] = phi i16 [ [[SCALAR_RECUR_INIT]], [[SCALAR_PH]] ], [ [[FOR_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[SCALAR_RECUR:%.*]] = phi i16 [ 0, [[SCALAR_PH]] ], [ [[FOR_NEXT:%.*]], [[LOOP]] ]
; CHECK-NEXT: [[SEXT:%.*]] = sext i16 [[SCALAR_RECUR]] to i32
; CHECK-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1
; CHECK-NEXT: [[GEP_A:%.*]] = getelementptr i32, ptr [[A]], i64 [[IV_NEXT]]
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/optsize_minsize.ll b/llvm/test/Transforms/LoopVectorize/AArch64/optsize_minsize.ll
index 1471896f99329..cc36cdb5f425e 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/optsize_minsize.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/optsize_minsize.ll
@@ -397,7 +397,7 @@ define void @tail_predicate_without_optsize(ptr %p, i8 %a, i8 %b, i8 %c, i32 %n)
; DEFAULT-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; DEFAULT-NEXT: br label %[[FOR_BODY:.*]]
; DEFAULT: [[FOR_BODY]]:
-; DEFAULT-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; DEFAULT-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
; DEFAULT-NEXT: [[TMP72:%.*]] = trunc nuw nsw i64 [[INDVARS_IV]] to i8
; DEFAULT-NEXT: [[MUL:%.*]] = mul i8 [[A]], [[TMP72]]
; DEFAULT-NEXT: [[SHR:%.*]] = lshr i8 [[TMP72]], 1
@@ -546,7 +546,7 @@ define void @sve_tail_predicate_without_minsize(ptr %p, i8 %a, i8 %b, i8 %c, i32
; DEFAULT-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; DEFAULT-NEXT: br label %[[FOR_BODY:.*]]
; DEFAULT: [[FOR_BODY]]:
-; DEFAULT-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; DEFAULT-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
; DEFAULT-NEXT: [[TMP26:%.*]] = trunc nuw nsw i64 [[IV]] to i8
; DEFAULT-NEXT: [[MUL:%.*]] = mul i8 [[A]], [[TMP26]]
; DEFAULT-NEXT: [[SHR:%.*]] = lshr i8 [[TMP26]], 1
@@ -621,7 +621,7 @@ define void @sve_tail_predicate_without_minsize(ptr %p, i8 %a, i8 %b, i8 %c, i32
; OPTSIZE-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; OPTSIZE-NEXT: br label %[[FOR_BODY:.*]]
; OPTSIZE: [[FOR_BODY]]:
-; OPTSIZE-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; OPTSIZE-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
; OPTSIZE-NEXT: [[TMP26:%.*]] = trunc nuw nsw i64 [[IV]] to i8
; OPTSIZE-NEXT: [[MUL:%.*]] = mul i8 [[A]], [[TMP26]]
; OPTSIZE-NEXT: [[SHR:%.*]] = lshr i8 [[TMP26]], 1
@@ -696,7 +696,7 @@ define void @sve_tail_predicate_without_minsize(ptr %p, i8 %a, i8 %b, i8 %c, i32
; MINSIZE-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; MINSIZE-NEXT: br label %[[FOR_BODY:.*]]
; MINSIZE: [[FOR_BODY]]:
-; MINSIZE-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; MINSIZE-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
; MINSIZE-NEXT: [[TMP26:%.*]] = trunc nuw nsw i64 [[IV]] to i8
; MINSIZE-NEXT: [[MUL:%.*]] = mul i8 [[A]], [[TMP26]]
; MINSIZE-NEXT: [[SHR:%.*]] = lshr i8 [[TMP26]], 1
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/pr73894.ll b/llvm/test/Transforms/LoopVectorize/AArch64/pr73894.ll
index d9a3a71141540..830e7da4c802e 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/pr73894.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/pr73894.ll
@@ -59,8 +59,8 @@ define i32 @pr70988(ptr %src, i32 %n) {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[ENTRY]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[INDUC:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INDUC_NEXT:%.*]], [[LOOP]] ]
-; CHECK-NEXT: [[MAX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[TMP24:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[INDUC:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[INDUC_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[MAX:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[TMP24:%.*]], [[LOOP]] ]
; CHECK-NEXT: [[GEP:%.*]] = getelementptr i32, ptr [[SRC]], i64 [[INDUC]]
; CHECK-NEXT: [[TMP22:%.*]] = load ptr, ptr [[GEP]], align 8
; CHECK-NEXT: [[TMP23:%.*]] = load i32, ptr [[TMP22]], align 4
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/reduction-recurrence-costs-sve.ll b/llvm/test/Transforms/LoopVectorize/AArch64/reduction-recurrence-costs-sve.ll
index 08d35f71e7cc3..381d2e15799d1 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/reduction-recurrence-costs-sve.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/reduction-recurrence-costs-sve.ll
@@ -256,10 +256,10 @@ define i32 @chained_recurrences(i32 %x, i64 %y, ptr %src.1, i32 %z, ptr %src.2)
; PRED-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, %[[ENTRY]] ]
; PRED-NEXT: br label %[[LOOP:.*]]
; PRED: [[LOOP]]:
-; PRED-NEXT: [[TMP45:%.*]] = phi i32 [ [[SCALAR_RECUR_INIT]], %[[SCALAR_PH]] ], [ [[TMP53:%.*]], %[[LOOP]] ]
-; PRED-NEXT: [[SCALAR_RECUR10:%.*]] = phi i32 [ [[SCALAR_RECUR_INIT8]], %[[SCALAR_PH]] ], [ [[TMP45]], %[[LOOP]] ]
-; PRED-NEXT: [[IV1:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT1:%.*]], %[[LOOP]] ]
-; PRED-NEXT: [[SUM_RED:%.*]] = phi i32 [ [[BC_MERGE_RDX]], %[[SCALAR_PH]] ], [ [[RED_2:%.*]], %[[LOOP]] ]
+; PRED-NEXT: [[TMP45:%.*]] = phi i32 [ 0, %[[SCALAR_PH]] ], [ [[TMP53:%.*]], %[[LOOP]] ]
+; PRED-NEXT: [[SCALAR_RECUR10:%.*]] = phi i32 [ 0, %[[SCALAR_PH]] ], [ [[TMP45]], %[[LOOP]] ]
+; PRED-NEXT: [[IV1:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT1:%.*]], %[[LOOP]] ]
+; PRED-NEXT: [[SUM_RED:%.*]] = phi i32 [ 0, %[[SCALAR_PH]] ], [ [[RED_2:%.*]], %[[LOOP]] ]
; PRED-NEXT: [[TMP52:%.*]] = add i64 [[Y]], 1
; PRED-NEXT: [[GEP_1:%.*]] = getelementptr i32, ptr [[SRC_1]], i64 [[TMP52]]
; PRED-NEXT: [[TMP53]] = load i32, ptr [[GEP_1]], align 4
@@ -491,8 +491,8 @@ define i16 @reduce_udiv(ptr %src, i16 %x, i64 %N) #0 {
; PRED-NEXT: [[BC_MERGE_RDX:%.*]] = phi i16 [ 0, %[[ENTRY]] ]
; PRED-NEXT: br label %[[LOOP:.*]]
; PRED: [[LOOP]]:
-; PRED-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
-; PRED-NEXT: [[RED:%.*]] = phi i16 [ [[BC_MERGE_RDX]], %[[SCALAR_PH]] ], [ [[RED_NEXT:%.*]], %[[LOOP]] ]
+; PRED-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; PRED-NEXT: [[RED:%.*]] = phi i16 [ 0, %[[SCALAR_PH]] ], [ [[RED_NEXT:%.*]], %[[LOOP]] ]
; PRED-NEXT: [[GEP:%.*]] = getelementptr i16, ptr [[SRC]], i64 [[IV]]
; PRED-NEXT: [[L:%.*]] = load i16, ptr [[GEP]], align 2
; PRED-NEXT: [[DIV:%.*]] = udiv i16 [[L]], [[X]]
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/scalable-strict-fadd.ll b/llvm/test/Transforms/LoopVectorize/AArch64/scalable-strict-fadd.ll
index a60d35d407fb0..0cad05314da55 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/scalable-strict-fadd.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/scalable-strict-fadd.ll
@@ -159,8 +159,8 @@ define float @fadd_strict(ptr noalias nocapture readonly %a, i64 %n) #0 {
; CHECK-ORDERED-TF-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[ENTRY]] ]
; CHECK-ORDERED-TF-NEXT: br label [[FOR_BODY:%.*]]
; CHECK-ORDERED-TF: for.body:
-; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; CHECK-ORDERED-TF-NEXT: [[SUM_07:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
+; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-ORDERED-TF-NEXT: [[SUM_07:%.*]] = phi float [ 0.000000e+00, [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
; CHECK-ORDERED-TF-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; CHECK-ORDERED-TF-NEXT: [[TMP15:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; CHECK-ORDERED-TF-NEXT: [[ADD]] = fadd float [[TMP15]], [[SUM_07]]
@@ -420,8 +420,8 @@ define float @fadd_strict_unroll(ptr noalias nocapture readonly %a, i64 %n) #0 {
; CHECK-ORDERED-TF-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[ENTRY]] ]
; CHECK-ORDERED-TF-NEXT: br label [[FOR_BODY:%.*]]
; CHECK-ORDERED-TF: for.body:
-; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; CHECK-ORDERED-TF-NEXT: [[SUM_07:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
+; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-ORDERED-TF-NEXT: [[SUM_07:%.*]] = phi float [ 0.000000e+00, [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
; CHECK-ORDERED-TF-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; CHECK-ORDERED-TF-NEXT: [[TMP45:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; CHECK-ORDERED-TF-NEXT: [[ADD]] = fadd float [[TMP45]], [[SUM_07]]
@@ -673,9 +673,9 @@ define void @fadd_strict_interleave(ptr noalias nocapture readonly %a, ptr noali
; CHECK-ORDERED-TF-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY]] ]
; CHECK-ORDERED-TF-NEXT: br label [[FOR_BODY:%.*]]
; CHECK-ORDERED-TF: for.body:
-; CHECK-ORDERED-TF-NEXT: [[ADD_PHI1:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ADD2:%.*]], [[FOR_BODY]] ]
-; CHECK-ORDERED-TF-NEXT: [[ADD_PHI2:%.*]] = phi float [ [[BC_MERGE_RDX2]], [[SCALAR_PH]] ], [ [[ADD1:%.*]], [[FOR_BODY]] ]
-; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-ORDERED-TF-NEXT: [[ADD_PHI1:%.*]] = phi float [ [[A2]], [[SCALAR_PH]] ], [ [[ADD2:%.*]], [[FOR_BODY]] ]
+; CHECK-ORDERED-TF-NEXT: [[ADD_PHI2:%.*]] = phi float [ [[A1]], [[SCALAR_PH]] ], [ [[ADD1:%.*]], [[FOR_BODY]] ]
+; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-ORDERED-TF-NEXT: [[ARRAYIDXB1:%.*]] = getelementptr inbounds float, ptr [[B]], i64 [[IV]]
; CHECK-ORDERED-TF-NEXT: [[TMP22:%.*]] = load float, ptr [[ARRAYIDXB1]], align 4
; CHECK-ORDERED-TF-NEXT: [[ADD1]] = fadd float [[TMP22]], [[ADD_PHI2]]
@@ -918,8 +918,8 @@ define float @fadd_of_sum(ptr noalias nocapture readonly %a, ptr noalias nocaptu
; CHECK-ORDERED-TF-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[FOR_BODY_PREHEADER]] ]
; CHECK-ORDERED-TF-NEXT: br label [[FOR_BODY:%.*]]
; CHECK-ORDERED-TF: for.body:
-; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-ORDERED-TF-NEXT: [[RES_014:%.*]] = phi float [ [[RDX:%.*]], [[FOR_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-ORDERED-TF-NEXT: [[RES_014:%.*]] = phi float [ [[RDX:%.*]], [[FOR_BODY]] ], [ 0.000000e+00, [[SCALAR_PH]] ]
; CHECK-ORDERED-TF-NEXT: [[ARRAYIDX2:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; CHECK-ORDERED-TF-NEXT: [[TMP18:%.*]] = load float, ptr [[ARRAYIDX2]], align 4
; CHECK-ORDERED-TF-NEXT: [[ARRAYIDX4:%.*]] = getelementptr inbounds float, ptr [[B]], i64 [[IV]]
@@ -1148,8 +1148,8 @@ define float @fadd_conditional(ptr noalias nocapture readonly %a, ptr noalias no
; CHECK-ORDERED-TF-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 1.000000e+00, [[ENTRY]] ]
; CHECK-ORDERED-TF-NEXT: br label [[FOR_BODY:%.*]]
; CHECK-ORDERED-TF: for.body:
-; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_INC:%.*]] ]
-; CHECK-ORDERED-TF-NEXT: [[RES:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[FADD:%.*]], [[FOR_INC]] ]
+; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_INC:%.*]] ]
+; CHECK-ORDERED-TF-NEXT: [[RES:%.*]] = phi float [ 1.000000e+00, [[SCALAR_PH]] ], [ [[FADD:%.*]], [[FOR_INC]] ]
; CHECK-ORDERED-TF-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[B]], i64 [[IV]]
; CHECK-ORDERED-TF-NEXT: [[TMP18:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; CHECK-ORDERED-TF-NEXT: [[TOBOOL:%.*]] = fcmp une float [[TMP18]], 0.000000e+00
@@ -1623,8 +1623,8 @@ define float @fmuladd_strict(ptr %a, ptr %b, i64 %n) #0 {
; CHECK-ORDERED-TF-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[ENTRY]] ]
; CHECK-ORDERED-TF-NEXT: br label [[FOR_BODY:%.*]]
; CHECK-ORDERED-TF: for.body:
-; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; CHECK-ORDERED-TF-NEXT: [[SUM_07:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[MULADD:%.*]], [[FOR_BODY]] ]
+; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-ORDERED-TF-NEXT: [[SUM_07:%.*]] = phi float [ 0.000000e+00, [[SCALAR_PH]] ], [ [[MULADD:%.*]], [[FOR_BODY]] ]
; CHECK-ORDERED-TF-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; CHECK-ORDERED-TF-NEXT: [[TMP59:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; CHECK-ORDERED-TF-NEXT: [[ARRAYIDX2:%.*]] = getelementptr inbounds float, ptr [[B]], i64 [[IV]]
@@ -1945,8 +1945,8 @@ define float @fmuladd_strict_fmf(ptr %a, ptr %b, i64 %n) #0 {
; CHECK-ORDERED-TF-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[ENTRY]] ]
; CHECK-ORDERED-TF-NEXT: br label [[FOR_BODY:%.*]]
; CHECK-ORDERED-TF: for.body:
-; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; CHECK-ORDERED-TF-NEXT: [[SUM_07:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[MULADD:%.*]], [[FOR_BODY]] ]
+; CHECK-ORDERED-TF-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-ORDERED-TF-NEXT: [[SUM_07:%.*]] = phi float [ 0.000000e+00, [[SCALAR_PH]] ], [ [[MULADD:%.*]], [[FOR_BODY]] ]
; CHECK-ORDERED-TF-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; CHECK-ORDERED-TF-NEXT: [[TMP59:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; CHECK-ORDERED-TF-NEXT: [[ARRAYIDX2:%.*]] = getelementptr inbounds float, ptr [[B]], i64 [[IV]]
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/store-costs-sve.ll b/llvm/test/Transforms/LoopVectorize/AArch64/store-costs-sve.ll
index 51efbe96f83b8..d32b89829fdd1 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/store-costs-sve.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/store-costs-sve.ll
@@ -114,7 +114,7 @@ define void @cost_store_i8(ptr %dst) #0 {
; PRED-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; PRED-NEXT: br label [[LOOP:%.*]]
; PRED: loop:
-; PRED-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
+; PRED-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
; PRED-NEXT: [[GEP:%.*]] = getelementptr i8, ptr [[DST]], i64 [[IV]]
; PRED-NEXT: store i8 0, ptr [[GEP]], align 1
; PRED-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/sve-tail-folding-reductions.ll b/llvm/test/Transforms/LoopVectorize/AArch64/sve-tail-folding-reductions.ll
index f4982e602bcdc..d6f8b8e13b9f3 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/sve-tail-folding-reductions.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/sve-tail-folding-reductions.ll
@@ -48,8 +48,8 @@ define i32 @add_reduction_i32(ptr %ptr, i64 %n) #0 {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[ENTRY]] ]
; CHECK-NEXT: br label [[WHILE_BODY:%.*]]
; CHECK: while.body:
-; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-NEXT: [[RED:%.*]] = phi i32 [ [[RED_NEXT:%.*]], [[WHILE_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-NEXT: [[RED:%.*]] = phi i32 [ [[RED_NEXT:%.*]], [[WHILE_BODY]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: [[GEP:%.*]] = getelementptr i32, ptr [[PTR]], i64 [[INDEX]]
; CHECK-NEXT: [[VAL:%.*]] = load i32, ptr [[GEP]], align 4
; CHECK-NEXT: [[RED_NEXT]] = add i32 [[RED]], [[VAL]]
@@ -101,8 +101,8 @@ define i32 @add_reduction_i32(ptr %ptr, i64 %n) #0 {
; CHECK-IN-LOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[ENTRY]] ]
; CHECK-IN-LOOP-NEXT: br label [[WHILE_BODY:%.*]]
; CHECK-IN-LOOP: while.body:
-; CHECK-IN-LOOP-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-IN-LOOP-NEXT: [[RED:%.*]] = phi i32 [ [[RED_NEXT:%.*]], [[WHILE_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-IN-LOOP-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-IN-LOOP-NEXT: [[RED:%.*]] = phi i32 [ [[RED_NEXT:%.*]], [[WHILE_BODY]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-IN-LOOP-NEXT: [[GEP:%.*]] = getelementptr i32, ptr [[PTR]], i64 [[INDEX]]
; CHECK-IN-LOOP-NEXT: [[VAL:%.*]] = load i32, ptr [[GEP]], align 4
; CHECK-IN-LOOP-NEXT: [[RED_NEXT]] = add i32 [[RED]], [[VAL]]
@@ -171,8 +171,8 @@ define float @add_reduction_f32(ptr %ptr, i64 %n) #0 {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[ENTRY]] ]
; CHECK-NEXT: br label [[WHILE_BODY:%.*]]
; CHECK: while.body:
-; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-NEXT: [[RED:%.*]] = phi float [ [[RED_NEXT:%.*]], [[WHILE_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-NEXT: [[RED:%.*]] = phi float [ [[RED_NEXT:%.*]], [[WHILE_BODY]] ], [ 0.000000e+00, [[SCALAR_PH]] ]
; CHECK-NEXT: [[GEP:%.*]] = getelementptr float, ptr [[PTR]], i64 [[INDEX]]
; CHECK-NEXT: [[VAL:%.*]] = load float, ptr [[GEP]], align 4
; CHECK-NEXT: [[RED_NEXT]] = fadd float [[RED]], [[VAL]]
@@ -223,8 +223,8 @@ define float @add_reduction_f32(ptr %ptr, i64 %n) #0 {
; CHECK-IN-LOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[ENTRY]] ]
; CHECK-IN-LOOP-NEXT: br label [[WHILE_BODY:%.*]]
; CHECK-IN-LOOP: while.body:
-; CHECK-IN-LOOP-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-IN-LOOP-NEXT: [[RED:%.*]] = phi float [ [[RED_NEXT:%.*]], [[WHILE_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-IN-LOOP-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-IN-LOOP-NEXT: [[RED:%.*]] = phi float [ [[RED_NEXT:%.*]], [[WHILE_BODY]] ], [ 0.000000e+00, [[SCALAR_PH]] ]
; CHECK-IN-LOOP-NEXT: [[GEP:%.*]] = getelementptr float, ptr [[PTR]], i64 [[INDEX]]
; CHECK-IN-LOOP-NEXT: [[VAL:%.*]] = load float, ptr [[GEP]], align 4
; CHECK-IN-LOOP-NEXT: [[RED_NEXT]] = fadd float [[RED]], [[VAL]]
@@ -298,8 +298,8 @@ define i32 @cond_xor_reduction(ptr noalias %a, ptr noalias %cond, i64 %N) #0 {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 7, [[ENTRY]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_INC:%.*]] ]
-; CHECK-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[RES:%.*]], [[FOR_INC]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_INC:%.*]] ]
+; CHECK-NEXT: [[RDX:%.*]] = phi i32 [ 7, [[SCALAR_PH]] ], [ [[RES:%.*]], [[FOR_INC]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[COND]], i64 [[IV]]
; CHECK-NEXT: [[TMP26:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; CHECK-NEXT: [[TOBOOL:%.*]] = icmp eq i32 [[TMP26]], 5
@@ -362,8 +362,8 @@ define i32 @cond_xor_reduction(ptr noalias %a, ptr noalias %cond, i64 %N) #0 {
; CHECK-IN-LOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 7, [[ENTRY]] ]
; CHECK-IN-LOOP-NEXT: br label [[FOR_BODY:%.*]]
; CHECK-IN-LOOP: for.body:
-; CHECK-IN-LOOP-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_INC:%.*]] ]
-; CHECK-IN-LOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[RES:%.*]], [[FOR_INC]] ]
+; CHECK-IN-LOOP-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_INC:%.*]] ]
+; CHECK-IN-LOOP-NEXT: [[RDX:%.*]] = phi i32 [ 7, [[SCALAR_PH]] ], [ [[RES:%.*]], [[FOR_INC]] ]
; CHECK-IN-LOOP-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[COND]], i64 [[IV]]
; CHECK-IN-LOOP-NEXT: [[TMP24:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; CHECK-IN-LOOP-NEXT: [[TOBOOL:%.*]] = icmp eq i32 [[TMP24]], 5
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/tail-folding-styles.ll b/llvm/test/Transforms/LoopVectorize/AArch64/tail-folding-styles.ll
index a11896a288000..124abc6da5f6d 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/tail-folding-styles.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/tail-folding-styles.ll
@@ -80,7 +80,7 @@ define void @simple_memset_tailfold(i32 %val, ptr %ptr, i64 %n) "target-features
; DATA-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; DATA-NEXT: br label [[WHILE_BODY:%.*]]
; DATA: while.body:
-; DATA-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; DATA-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ 0, [[SCALAR_PH]] ]
; DATA-NEXT: [[GEP:%.*]] = getelementptr i32, ptr [[PTR]], i64 [[INDEX]]
; DATA-NEXT: store i32 [[VAL]], ptr [[GEP]], align 4
; DATA-NEXT: [[INDEX_NEXT]] = add nsw i64 [[INDEX]], 1
@@ -127,7 +127,7 @@ define void @simple_memset_tailfold(i32 %val, ptr %ptr, i64 %n) "target-features
; DATA_NO_LANEMASK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; DATA_NO_LANEMASK-NEXT: br label [[WHILE_BODY:%.*]]
; DATA_NO_LANEMASK: while.body:
-; DATA_NO_LANEMASK-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; DATA_NO_LANEMASK-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ 0, [[SCALAR_PH]] ]
; DATA_NO_LANEMASK-NEXT: [[GEP:%.*]] = getelementptr i32, ptr [[PTR]], i64 [[INDEX]]
; DATA_NO_LANEMASK-NEXT: store i32 [[VAL]], ptr [[GEP]], align 4
; DATA_NO_LANEMASK-NEXT: [[INDEX_NEXT]] = add nsw i64 [[INDEX]], 1
@@ -169,7 +169,7 @@ define void @simple_memset_tailfold(i32 %val, ptr %ptr, i64 %n) "target-features
; DATA_AND_CONTROL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; DATA_AND_CONTROL-NEXT: br label [[WHILE_BODY:%.*]]
; DATA_AND_CONTROL: while.body:
-; DATA_AND_CONTROL-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; DATA_AND_CONTROL-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ 0, [[SCALAR_PH]] ]
; DATA_AND_CONTROL-NEXT: [[GEP:%.*]] = getelementptr i32, ptr [[PTR]], i64 [[INDEX]]
; DATA_AND_CONTROL-NEXT: store i32 [[VAL]], ptr [[GEP]], align 4
; DATA_AND_CONTROL-NEXT: [[INDEX_NEXT]] = add nsw i64 [[INDEX]], 1
@@ -216,7 +216,7 @@ define void @simple_memset_tailfold(i32 %val, ptr %ptr, i64 %n) "target-features
; DATA_AND_CONTROL_NO_RT_CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; DATA_AND_CONTROL_NO_RT_CHECK-NEXT: br label [[WHILE_BODY:%.*]]
; DATA_AND_CONTROL_NO_RT_CHECK: while.body:
-; DATA_AND_CONTROL_NO_RT_CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; DATA_AND_CONTROL_NO_RT_CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ 0, [[SCALAR_PH]] ]
; DATA_AND_CONTROL_NO_RT_CHECK-NEXT: [[GEP:%.*]] = getelementptr i32, ptr [[PTR]], i64 [[INDEX]]
; DATA_AND_CONTROL_NO_RT_CHECK-NEXT: store i32 [[VAL]], ptr [[GEP]], align 4
; DATA_AND_CONTROL_NO_RT_CHECK-NEXT: [[INDEX_NEXT]] = add nsw i64 [[INDEX]], 1
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/transform-narrow-interleave-to-widen-memory-remove-loop-region.ll b/llvm/test/Transforms/LoopVectorize/AArch64/transform-narrow-interleave-to-widen-memory-remove-loop-region.ll
index d0ea8288ea126..bd6a027a048e5 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/transform-narrow-interleave-to-widen-memory-remove-loop-region.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/transform-narrow-interleave-to-widen-memory-remove-loop-region.ll
@@ -116,7 +116,7 @@ define void @load_store_interleave_group_tc_2(ptr noalias %data) {
; VF4-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; VF4-NEXT: br label %[[LOOP:.*]]
; VF4: [[LOOP]]:
-; VF4-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; VF4-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; VF4-NEXT: [[MUL_2:%.*]] = shl nsw i64 [[IV]], 1
; VF4-NEXT: [[DATA_0:%.*]] = getelementptr inbounds i64, ptr [[DATA]], i64 [[MUL_2]]
; VF4-NEXT: [[L_0:%.*]] = load i64, ptr [[DATA_0]], align 8
diff --git a/llvm/test/Transforms/LoopVectorize/ARM/mve-gather-scatter-tailpred.ll b/llvm/test/Transforms/LoopVectorize/ARM/mve-gather-scatter-tailpred.ll
index 66bb80bbe21aa..59e65f7672230 100644
--- a/llvm/test/Transforms/LoopVectorize/ARM/mve-gather-scatter-tailpred.ll
+++ b/llvm/test/Transforms/LoopVectorize/ARM/mve-gather-scatter-tailpred.ll
@@ -30,7 +30,7 @@ define void @test_stride1_4i32(ptr readonly %data, ptr noalias nocapture %dst, i
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_023:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[I_023:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: [[MUL:%.*]] = mul nuw nsw i32 [[I_023]], 1
; CHECK-NEXT: [[ADD5:%.*]] = add nuw nsw i32 [[MUL]], 2
; CHECK-NEXT: [[ARRAYIDX6:%.*]] = getelementptr inbounds i32, ptr [[DATA]], i32 [[ADD5]]
@@ -218,7 +218,7 @@ define void @test_stride3_4i32(ptr readonly %data, ptr noalias nocapture %dst, i
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_023:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[I_023:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: [[MUL:%.*]] = mul nuw nsw i32 [[I_023]], 3
; CHECK-NEXT: [[ADD5:%.*]] = add nuw nsw i32 [[MUL]], 2
; CHECK-NEXT: [[ARRAYIDX6:%.*]] = getelementptr inbounds i32, ptr [[DATA]], i32 [[ADD5]]
@@ -280,7 +280,7 @@ define void @test_stride4_4i32(ptr readonly %data, ptr noalias nocapture %dst, i
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_023:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[I_023:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: [[MUL:%.*]] = mul nuw nsw i32 [[I_023]], 4
; CHECK-NEXT: [[ADD5:%.*]] = add nuw nsw i32 [[MUL]], 2
; CHECK-NEXT: [[ARRAYIDX6:%.*]] = getelementptr inbounds i32, ptr [[DATA]], i32 [[ADD5]]
diff --git a/llvm/test/Transforms/LoopVectorize/ARM/mve-reduction-types.ll b/llvm/test/Transforms/LoopVectorize/ARM/mve-reduction-types.ll
index 83cb3250fe87b..fd946735ff82a 100644
--- a/llvm/test/Transforms/LoopVectorize/ARM/mve-reduction-types.ll
+++ b/llvm/test/Transforms/LoopVectorize/ARM/mve-reduction-types.ll
@@ -40,8 +40,8 @@ define i32 @mla_i32(ptr noalias nocapture readonly %A, ptr noalias nocapture rea
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[FOR_BODY_PREHEADER]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_011:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-NEXT: [[RES_010:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[I_011:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-NEXT: [[RES_010:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i8, ptr [[A]], i32 [[I_011]]
; CHECK-NEXT: [[TMP12:%.*]] = load i8, ptr [[ARRAYIDX]], align 1
; CHECK-NEXT: [[CONV:%.*]] = sext i8 [[TMP12]] to i32
@@ -120,8 +120,8 @@ define i32 @mla_i8(ptr noalias nocapture readonly %A, ptr noalias nocapture read
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[FOR_BODY_PREHEADER]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_011:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-NEXT: [[RES_010:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[I_011:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-NEXT: [[RES_010:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i8, ptr [[A]], i32 [[I_011]]
; CHECK-NEXT: [[TMP12:%.*]] = load i8, ptr [[ARRAYIDX]], align 1
; CHECK-NEXT: [[CONV:%.*]] = sext i8 [[TMP12]] to i32
@@ -195,8 +195,8 @@ define i32 @add_i32(ptr nocapture readonly %x, i32 %n) #0 {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[FOR_BODY_PREHEADER]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[X]], i32 [[I_08]]
; CHECK-NEXT: [[TMP7:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; CHECK-NEXT: [[ADD]] = add nsw i32 [[TMP7]], [[R_07]]
@@ -260,8 +260,8 @@ define i32 @mul_i32(ptr nocapture readonly %x, i32 %n) #0 {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 1, [[FOR_BODY_PREHEADER]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ 1, [[SCALAR_PH]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[X]], i32 [[I_08]]
; CHECK-NEXT: [[TMP7:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; CHECK-NEXT: [[ADD]] = mul nsw i32 [[TMP7]], [[R_07]]
@@ -325,8 +325,8 @@ define i32 @and_i32(ptr nocapture readonly %x, i32 %n) #0 {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ -1, [[FOR_BODY_PREHEADER]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ -1, [[SCALAR_PH]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[X]], i32 [[I_08]]
; CHECK-NEXT: [[TMP7:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; CHECK-NEXT: [[ADD]] = and i32 [[TMP7]], [[R_07]]
@@ -390,8 +390,8 @@ define i32 @or_i32(ptr nocapture readonly %x, i32 %n) #0 {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[FOR_BODY_PREHEADER]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[X]], i32 [[I_08]]
; CHECK-NEXT: [[TMP7:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; CHECK-NEXT: [[ADD]] = or i32 [[TMP7]], [[R_07]]
@@ -455,8 +455,8 @@ define i32 @xor_i32(ptr nocapture readonly %x, i32 %n) #0 {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[FOR_BODY_PREHEADER]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[X]], i32 [[I_08]]
; CHECK-NEXT: [[TMP7:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; CHECK-NEXT: [[ADD]] = xor i32 [[TMP7]], [[R_07]]
@@ -520,8 +520,8 @@ define float @fadd_f32(ptr nocapture readonly %x, i32 %n) #0 {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[FOR_BODY_PREHEADER]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-NEXT: [[R_07:%.*]] = phi float [ [[ADD:%.*]], [[FOR_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-NEXT: [[R_07:%.*]] = phi float [ [[ADD:%.*]], [[FOR_BODY]] ], [ 0.000000e+00, [[SCALAR_PH]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[X]], i32 [[I_08]]
; CHECK-NEXT: [[TMP7:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; CHECK-NEXT: [[ADD]] = fadd fast float [[TMP7]], [[R_07]]
@@ -585,8 +585,8 @@ define float @fmul_f32(ptr nocapture readonly %x, i32 %n) #0 {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 1.000000e+00, [[FOR_BODY_PREHEADER]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-NEXT: [[R_07:%.*]] = phi float [ [[ADD:%.*]], [[FOR_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-NEXT: [[R_07:%.*]] = phi float [ [[ADD:%.*]], [[FOR_BODY]] ], [ 1.000000e+00, [[SCALAR_PH]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[X]], i32 [[I_08]]
; CHECK-NEXT: [[TMP7:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; CHECK-NEXT: [[ADD]] = fmul fast float [[TMP7]], [[R_07]]
diff --git a/llvm/test/Transforms/LoopVectorize/ARM/optsize_minsize.ll b/llvm/test/Transforms/LoopVectorize/ARM/optsize_minsize.ll
index 0f4d40f202759..8fbeff5e91e75 100644
--- a/llvm/test/Transforms/LoopVectorize/ARM/optsize_minsize.ll
+++ b/llvm/test/Transforms/LoopVectorize/ARM/optsize_minsize.ll
@@ -393,7 +393,7 @@ define void @tail_predicate_without_optsize(ptr %p, i8 %a, i8 %b, i8 %c, i32 %n)
; DEFAULT-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; DEFAULT-NEXT: br label %[[FOR_BODY:.*]]
; DEFAULT: [[FOR_BODY]]:
-; DEFAULT-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; DEFAULT-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
; DEFAULT-NEXT: [[TMP72:%.*]] = trunc nuw nsw i64 [[INDVARS_IV]] to i8
; DEFAULT-NEXT: [[MUL:%.*]] = mul i8 [[A]], [[TMP72]]
; DEFAULT-NEXT: [[SHR:%.*]] = lshr i8 [[TMP72]], 1
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/evl-compatible-loops.ll b/llvm/test/Transforms/LoopVectorize/RISCV/evl-compatible-loops.ll
index 5f13089ff17fd..00521b1fd8546 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/evl-compatible-loops.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/evl-compatible-loops.ll
@@ -45,7 +45,7 @@ define void @test_wide_integer_induction(ptr noalias %a, i64 %N) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY1:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY1:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[IV1:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT1:%.*]], [[FOR_BODY1]] ]
+; CHECK-NEXT: [[IV1:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT1:%.*]], [[FOR_BODY1]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV1]]
; CHECK-NEXT: store i64 [[IV1]], ptr [[ARRAYIDX]], align 8
; CHECK-NEXT: [[IV_NEXT1]] = add nuw nsw i64 [[IV1]], 1
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/inloop-reduction.ll b/llvm/test/Transforms/LoopVectorize/RISCV/inloop-reduction.ll
index 6e2434aefce9d..1addff68c72e4 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/inloop-reduction.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/inloop-reduction.ll
@@ -151,8 +151,8 @@ define i32 @add_i16_i32(ptr nocapture readonly %x, i32 %n) {
; IF-EVL-OUTLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[FOR_BODY_PREHEADER]] ]
; IF-EVL-OUTLOOP-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL-OUTLOOP: for.body:
-; IF-EVL-OUTLOOP-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; IF-EVL-OUTLOOP-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; IF-EVL-OUTLOOP-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; IF-EVL-OUTLOOP-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
; IF-EVL-OUTLOOP-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i16, ptr [[X]], i32 [[I_08]]
; IF-EVL-OUTLOOP-NEXT: [[TMP13:%.*]] = load i16, ptr [[ARRAYIDX]], align 2
; IF-EVL-OUTLOOP-NEXT: [[CONV:%.*]] = sext i16 [[TMP13]] to i32
@@ -204,8 +204,8 @@ define i32 @add_i16_i32(ptr nocapture readonly %x, i32 %n) {
; IF-EVL-INLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[FOR_BODY_PREHEADER]] ]
; IF-EVL-INLOOP-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL-INLOOP: for.body:
-; IF-EVL-INLOOP-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; IF-EVL-INLOOP-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; IF-EVL-INLOOP-NEXT: [[I_08:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; IF-EVL-INLOOP-NEXT: [[R_07:%.*]] = phi i32 [ [[ADD:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
; IF-EVL-INLOOP-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i16, ptr [[X]], i32 [[I_08]]
; IF-EVL-INLOOP-NEXT: [[TMP13:%.*]] = load i16, ptr [[ARRAYIDX]], align 2
; IF-EVL-INLOOP-NEXT: [[CONV:%.*]] = sext i16 [[TMP13]] to i32
@@ -372,8 +372,8 @@ define i32 @smin(ptr %a, i64 %n, i32 %start) {
; IF-EVL-OUTLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-OUTLOOP-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL-OUTLOOP: for.body:
-; IF-EVL-OUTLOOP-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-OUTLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[SMIN:%.*]], [[FOR_BODY]] ]
+; IF-EVL-OUTLOOP-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-OUTLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[SMIN:%.*]], [[FOR_BODY]] ]
; IF-EVL-OUTLOOP-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-OUTLOOP-NEXT: [[TMP19:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-OUTLOOP-NEXT: [[CMP_I:%.*]] = icmp slt i32 [[TMP19]], [[RDX]]
@@ -419,8 +419,8 @@ define i32 @smin(ptr %a, i64 %n, i32 %start) {
; IF-EVL-INLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-INLOOP-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL-INLOOP: for.body:
-; IF-EVL-INLOOP-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-INLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[SMIN:%.*]], [[FOR_BODY]] ]
+; IF-EVL-INLOOP-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-INLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[SMIN:%.*]], [[FOR_BODY]] ]
; IF-EVL-INLOOP-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-INLOOP-NEXT: [[TMP16:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-INLOOP-NEXT: [[CMP_I:%.*]] = icmp slt i32 [[TMP16]], [[RDX]]
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/low-trip-count.ll b/llvm/test/Transforms/LoopVectorize/RISCV/low-trip-count.ll
index 0a872578f70b5..32cb4261622bf 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/low-trip-count.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/low-trip-count.ll
@@ -146,7 +146,7 @@ define void @trip8_i8(ptr noalias nocapture noundef %dst, ptr noalias nocapture
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_08:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[I_08:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i8, ptr [[TMP9]], i64 [[I_08]]
; CHECK-NEXT: [[TMP15:%.*]] = load i8, ptr [[ARRAYIDX]], align 1
; CHECK-NEXT: [[MUL:%.*]] = shl i8 [[TMP15]], 1
@@ -379,8 +379,8 @@ define i8 @mul_non_pow_2_low_trip_count(ptr noalias %a) {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i8 [ 2, [[ENTRY]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; CHECK-NEXT: [[RDX:%.*]] = phi i8 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[MUL:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[RDX:%.*]] = phi i8 [ 2, [[SCALAR_PH]] ], [ [[MUL:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[GEP:%.*]] = getelementptr i8, ptr [[A]], i64 [[IV]]
; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr [[GEP]], align 1
; CHECK-NEXT: [[MUL]] = mul i8 [[TMP5]], [[RDX]]
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/pr88802.ll b/llvm/test/Transforms/LoopVectorize/RISCV/pr88802.ll
index 01df43618aad0..d41d47a6448be 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/pr88802.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/pr88802.ll
@@ -58,7 +58,7 @@ define void @test(ptr %p, i64 %a, i8 %b) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_COND1:%.*]]
; CHECK: for.cond:
-; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH1]] ], [ [[ADD:%.*]], [[FOR_BODY:%.*]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, [[SCALAR_PH1]] ], [ [[ADD:%.*]], [[FOR_BODY:%.*]] ]
; CHECK-NEXT: [[ADD]] = add i32 [[IV]], 1
; CHECK-NEXT: [[CMP_SLT:%.*]] = icmp slt i32 [[IV]], 2
; CHECK-NEXT: [[SHL:%.*]] = shl i64 [[A]], 48
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/scalable-tailfold.ll b/llvm/test/Transforms/LoopVectorize/RISCV/scalable-tailfold.ll
index ed507961ef825..c037b707d0a18 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/scalable-tailfold.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/scalable-tailfold.ll
@@ -41,7 +41,7 @@ define void @vector_add(ptr noalias nocapture %a, i64 %v, i64 %n) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV]]
; CHECK-NEXT: [[ELEM:%.*]] = load i64, ptr [[ARRAYIDX]], align 8
; CHECK-NEXT: [[ADD:%.*]] = add i64 [[ELEM]], [[V]]
@@ -106,7 +106,7 @@ define void @indexed_store(ptr noalias nocapture %a, ptr noalias nocapture %b, i
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[BADDR:%.*]] = getelementptr inbounds i64, ptr [[B]], i64 [[IV]]
; CHECK-NEXT: [[AIDX:%.*]] = load i64, ptr [[BADDR]], align 8
; CHECK-NEXT: [[AADDR:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[AIDX]]
@@ -172,8 +172,8 @@ define i64 @indexed_load(ptr noalias nocapture %a, ptr noalias nocapture %b, i64
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i64 [ 0, [[ENTRY]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; CHECK-NEXT: [[SUM:%.*]] = phi i64 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[SUM_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[SUM:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[SUM_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[BADDR:%.*]] = getelementptr inbounds i64, ptr [[B]], i64 [[IV]]
; CHECK-NEXT: [[AIDX:%.*]] = load i64, ptr [[BADDR]], align 8
; CHECK-NEXT: [[AADDR:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[AIDX]]
@@ -238,7 +238,7 @@ define void @splat_int(ptr noalias nocapture %a, i64 %v, i64 %n) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV]]
; CHECK-NEXT: store i64 [[V]], ptr [[ARRAYIDX]], align 8
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
@@ -296,7 +296,7 @@ define void @uniform_store(ptr noalias nocapture %a, ptr noalias nocapture %b, i
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: store i64 [[V]], ptr [[B]], align 8
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV]]
; CHECK-NEXT: store i64 [[V]], ptr [[ARRAYIDX]], align 8
@@ -417,7 +417,7 @@ define void @vector_add_trip1024(ptr noalias nocapture %a, i64 %v, i64 %n) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV]]
; CHECK-NEXT: [[ELEM:%.*]] = load i64, ptr [[ARRAYIDX]], align 8
; CHECK-NEXT: [[ADD:%.*]] = add i64 [[ELEM]], [[V]]
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-cast-intrinsics.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-cast-intrinsics.ll
index ce2b790cdbd4f..2be74e51f1c90 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-cast-intrinsics.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-cast-intrinsics.ll
@@ -1330,7 +1330,7 @@ define void @vp_ptrtoint(ptr %a, ptr %b, i64 %N) {
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; IF-EVL-NEXT: br label %[[LOOP:.*]]
; IF-EVL: [[LOOP]]:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; IF-EVL-NEXT: [[GEP:%.*]] = getelementptr inbounds i32, ptr [[B]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[GEP]] to i64
; IF-EVL-NEXT: [[GEP2:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV]]
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-cond-reduction.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-cond-reduction.ll
index d02d53b8e1207..76a830ad3e97c 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-cond-reduction.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-cond-reduction.ll
@@ -57,8 +57,8 @@ define i32 @cond_add(ptr %a, i64 %n, i32 %start) {
; IF-EVL-OUTLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-OUTLOOP-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL-OUTLOOP: for.body:
-; IF-EVL-OUTLOOP-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-OUTLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
+; IF-EVL-OUTLOOP-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-OUTLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
; IF-EVL-OUTLOOP-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-OUTLOOP-NEXT: [[TMP27:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-OUTLOOP-NEXT: [[CMP:%.*]] = icmp sgt i32 [[TMP27]], 3
@@ -108,8 +108,8 @@ define i32 @cond_add(ptr %a, i64 %n, i32 %start) {
; IF-EVL-INLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-INLOOP-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL-INLOOP: for.body:
-; IF-EVL-INLOOP-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-INLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
+; IF-EVL-INLOOP-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-INLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
; IF-EVL-INLOOP-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-INLOOP-NEXT: [[TMP25:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-INLOOP-NEXT: [[CMP:%.*]] = icmp sgt i32 [[TMP25]], 3
@@ -285,8 +285,8 @@ define i32 @cond_add_pred(ptr %a, i64 %n, i32 %start) {
; IF-EVL-OUTLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-OUTLOOP-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL-OUTLOOP: for.body:
-; IF-EVL-OUTLOOP-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_INC:%.*]] ]
-; IF-EVL-OUTLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[RDX_ADD:%.*]], [[FOR_INC]] ]
+; IF-EVL-OUTLOOP-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_INC:%.*]] ]
+; IF-EVL-OUTLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[RDX_ADD:%.*]], [[FOR_INC]] ]
; IF-EVL-OUTLOOP-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-OUTLOOP-NEXT: [[TMP28:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-OUTLOOP-NEXT: [[CMP:%.*]] = icmp sgt i32 [[TMP28]], 3
@@ -339,8 +339,8 @@ define i32 @cond_add_pred(ptr %a, i64 %n, i32 %start) {
; IF-EVL-INLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-INLOOP-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL-INLOOP: for.body:
-; IF-EVL-INLOOP-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_INC:%.*]] ]
-; IF-EVL-INLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[RDX_ADD:%.*]], [[FOR_INC]] ]
+; IF-EVL-INLOOP-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_INC:%.*]] ]
+; IF-EVL-INLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[RDX_ADD:%.*]], [[FOR_INC]] ]
; IF-EVL-INLOOP-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-INLOOP-NEXT: [[TMP25:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-INLOOP-NEXT: [[CMP:%.*]] = icmp sgt i32 [[TMP25]], 3
@@ -537,8 +537,8 @@ define i32 @step_cond_add(ptr %a, i64 %n, i32 %start) {
; IF-EVL-OUTLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-OUTLOOP-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL-OUTLOOP: for.body:
-; IF-EVL-OUTLOOP-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-OUTLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
+; IF-EVL-OUTLOOP-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-OUTLOOP-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
; IF-EVL-OUTLOOP-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-OUTLOOP-NEXT: [[TMP37:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-OUTLOOP-NEXT: [[IV_TRUNC:%.*]] = trunc i64 [[IV]] to i32
@@ -597,8 +597,8 @@ define i32 @step_cond_add(ptr %a, i64 %n, i32 %start) {
; IF-EVL-INLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-INLOOP-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL-INLOOP: for.body:
-; IF-EVL-INLOOP-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-INLOOP-NEXT: [[RDX1:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ADD1:%.*]], [[FOR_BODY]] ]
+; IF-EVL-INLOOP-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-INLOOP-NEXT: [[RDX1:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[ADD1:%.*]], [[FOR_BODY]] ]
; IF-EVL-INLOOP-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-INLOOP-NEXT: [[TMP28:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-INLOOP-NEXT: [[IV_TRUNC:%.*]] = trunc i64 [[IV]] to i32
@@ -804,8 +804,8 @@ define i32 @step_cond_add_pred(ptr %a, i64 %n, i32 %start) {
; IF-EVL-OUTLOOP-NEXT: [[BC_MERGE_RDX1:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-OUTLOOP-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL-OUTLOOP: for.body:
-; IF-EVL-OUTLOOP-NEXT: [[IV1:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[MIDDLE_BLOCK:%.*]] ]
-; IF-EVL-OUTLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX1]], [[SCALAR_PH]] ], [ [[RDX_ADD:%.*]], [[MIDDLE_BLOCK]] ]
+; IF-EVL-OUTLOOP-NEXT: [[IV1:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[MIDDLE_BLOCK:%.*]] ]
+; IF-EVL-OUTLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[RDX_ADD:%.*]], [[MIDDLE_BLOCK]] ]
; IF-EVL-OUTLOOP-NEXT: [[ARRAYIDX1:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV1]]
; IF-EVL-OUTLOOP-NEXT: [[TMP38:%.*]] = load i32, ptr [[ARRAYIDX1]], align 4
; IF-EVL-OUTLOOP-NEXT: [[IV_TRUNC:%.*]] = trunc i64 [[IV1]] to i32
@@ -867,8 +867,8 @@ define i32 @step_cond_add_pred(ptr %a, i64 %n, i32 %start) {
; IF-EVL-INLOOP-NEXT: [[BC_MERGE_RDX1:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-INLOOP-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL-INLOOP: for.body:
-; IF-EVL-INLOOP-NEXT: [[IV1:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[MIDDLE_BLOCK:%.*]] ]
-; IF-EVL-INLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX1]], [[SCALAR_PH]] ], [ [[RDX_ADD:%.*]], [[MIDDLE_BLOCK]] ]
+; IF-EVL-INLOOP-NEXT: [[IV1:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[MIDDLE_BLOCK:%.*]] ]
+; IF-EVL-INLOOP-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[RDX_ADD:%.*]], [[MIDDLE_BLOCK]] ]
; IF-EVL-INLOOP-NEXT: [[ARRAYIDX1:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV1]]
; IF-EVL-INLOOP-NEXT: [[TMP35:%.*]] = load i32, ptr [[ARRAYIDX1]], align 4
; IF-EVL-INLOOP-NEXT: [[IV_TRUNC:%.*]] = trunc i64 [[IV1]] to i32
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-div.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-div.ll
index ae047f5f63476..a216aa8642429 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-div.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-div.ll
@@ -45,7 +45,7 @@ define void @test_sdiv(ptr noalias %a, ptr noalias %b, ptr noalias %c) {
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[LOOP_PREHEADER]] ]
; IF-EVL-NEXT: br label %[[LOOP:.*]]
; IF-EVL: [[LOOP]]:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], %[[LOOP]] ], [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], %[[LOOP]] ], [ 0, %[[SCALAR_PH]] ]
; IF-EVL-NEXT: [[A_GEP:%.*]] = getelementptr i64, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP16:%.*]] = load i64, ptr [[A_GEP]], align 8
; IF-EVL-NEXT: [[B_GEP:%.*]] = getelementptr i64, ptr [[B]], i64 [[IV]]
@@ -166,7 +166,7 @@ define void @test_udiv(ptr noalias %a, ptr noalias %b, ptr noalias %c) {
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[LOOP_PREHEADER]] ]
; IF-EVL-NEXT: br label %[[LOOP:.*]]
; IF-EVL: [[LOOP]]:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], %[[LOOP]] ], [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], %[[LOOP]] ], [ 0, %[[SCALAR_PH]] ]
; IF-EVL-NEXT: [[A_GEP:%.*]] = getelementptr i64, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP16:%.*]] = load i64, ptr [[A_GEP]], align 8
; IF-EVL-NEXT: [[B_GEP:%.*]] = getelementptr i64, ptr [[B]], i64 [[IV]]
@@ -286,7 +286,7 @@ define void @test_srem(ptr noalias %a, ptr noalias %b, ptr noalias %c) {
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[LOOP_PREHEADER]] ]
; IF-EVL-NEXT: br label %[[LOOP:.*]]
; IF-EVL: [[LOOP]]:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], %[[LOOP]] ], [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], %[[LOOP]] ], [ 0, %[[SCALAR_PH]] ]
; IF-EVL-NEXT: [[A_GEP:%.*]] = getelementptr i64, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP16:%.*]] = load i64, ptr [[A_GEP]], align 8
; IF-EVL-NEXT: [[B_GEP:%.*]] = getelementptr i64, ptr [[B]], i64 [[IV]]
@@ -406,7 +406,7 @@ define void @test_urem(ptr noalias %a, ptr noalias %b, ptr noalias %c) {
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[LOOP_PREHEADER]] ]
; IF-EVL-NEXT: br label %[[LOOP:.*]]
; IF-EVL: [[LOOP]]:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], %[[LOOP]] ], [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], %[[LOOP]] ], [ 0, %[[SCALAR_PH]] ]
; IF-EVL-NEXT: [[A_GEP:%.*]] = getelementptr i64, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP16:%.*]] = load i64, ptr [[A_GEP]], align 8
; IF-EVL-NEXT: [[B_GEP:%.*]] = getelementptr i64, ptr [[B]], i64 [[IV]]
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-fixed-order-recurrence.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-fixed-order-recurrence.ll
index 987f9460c2172..f92bf5a98cdd5 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-fixed-order-recurrence.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-fixed-order-recurrence.ll
@@ -53,8 +53,8 @@ define void @first_order_recurrence(ptr noalias %A, ptr noalias %B, i64 %TC) {
; IF-EVL-NEXT: [[SCALAR_RECUR_INIT:%.*]] = phi i32 [ 33, %[[ENTRY]] ]
; IF-EVL-NEXT: br label %[[FOR_BODY:.*]]
; IF-EVL: [[FOR_BODY]]:
-; IF-EVL-NEXT: [[INDVARS:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[INDVARS_NEXT:%.*]], %[[FOR_BODY]] ]
-; IF-EVL-NEXT: [[FOR1:%.*]] = phi i32 [ [[SCALAR_RECUR_INIT]], %[[SCALAR_PH]] ], [ [[TMP24:%.*]], %[[FOR_BODY]] ]
+; IF-EVL-NEXT: [[INDVARS:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[INDVARS_NEXT:%.*]], %[[FOR_BODY]] ]
+; IF-EVL-NEXT: [[FOR1:%.*]] = phi i32 [ 33, %[[SCALAR_PH]] ], [ [[TMP24:%.*]], %[[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds nuw i32, ptr [[A]], i64 [[INDVARS]]
; IF-EVL-NEXT: [[TMP24]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ADD:%.*]] = add nsw i32 [[FOR1]], [[TMP24]]
@@ -192,9 +192,9 @@ define void @second_order_recurrence(ptr noalias %A, ptr noalias %B, i64 %TC) {
; IF-EVL-NEXT: [[SCALAR_RECUR_INIT3:%.*]] = phi i32 [ 22, %[[ENTRY]] ]
; IF-EVL-NEXT: br label %[[FOR_BODY:.*]]
; IF-EVL: [[FOR_BODY]]:
-; IF-EVL-NEXT: [[INDVARS:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[INDVARS_NEXT:%.*]], %[[FOR_BODY]] ]
-; IF-EVL-NEXT: [[FOR1:%.*]] = phi i32 [ [[SCALAR_RECUR_INIT]], %[[SCALAR_PH]] ], [ [[TMP31:%.*]], %[[FOR_BODY]] ]
-; IF-EVL-NEXT: [[FOR2:%.*]] = phi i32 [ [[SCALAR_RECUR_INIT3]], %[[SCALAR_PH]] ], [ [[FOR1]], %[[FOR_BODY]] ]
+; IF-EVL-NEXT: [[INDVARS:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[INDVARS_NEXT:%.*]], %[[FOR_BODY]] ]
+; IF-EVL-NEXT: [[FOR1:%.*]] = phi i32 [ 33, %[[SCALAR_PH]] ], [ [[TMP31:%.*]], %[[FOR_BODY]] ]
+; IF-EVL-NEXT: [[FOR2:%.*]] = phi i32 [ 22, %[[SCALAR_PH]] ], [ [[FOR1]], %[[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds nuw i32, ptr [[A]], i64 [[INDVARS]]
; IF-EVL-NEXT: [[TMP31]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ADD:%.*]] = add nsw i32 [[FOR1]], [[FOR2]]
@@ -353,10 +353,10 @@ define void @third_order_recurrence(ptr noalias %A, ptr noalias %B, i64 %TC) {
; IF-EVL-NEXT: [[SCALAR_RECUR_INIT6:%.*]] = phi i32 [ 11, %[[ENTRY]] ]
; IF-EVL-NEXT: br label %[[FOR_BODY:.*]]
; IF-EVL: [[FOR_BODY]]:
-; IF-EVL-NEXT: [[INDVARS:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[INDVARS_NEXT:%.*]], %[[FOR_BODY]] ]
-; IF-EVL-NEXT: [[FOR1:%.*]] = phi i32 [ [[SCALAR_RECUR_INIT]], %[[SCALAR_PH]] ], [ [[TMP38:%.*]], %[[FOR_BODY]] ]
-; IF-EVL-NEXT: [[FOR2:%.*]] = phi i32 [ [[SCALAR_RECUR_INIT5]], %[[SCALAR_PH]] ], [ [[FOR1]], %[[FOR_BODY]] ]
-; IF-EVL-NEXT: [[FOR3:%.*]] = phi i32 [ [[SCALAR_RECUR_INIT6]], %[[SCALAR_PH]] ], [ [[FOR2]], %[[FOR_BODY]] ]
+; IF-EVL-NEXT: [[INDVARS:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[INDVARS_NEXT:%.*]], %[[FOR_BODY]] ]
+; IF-EVL-NEXT: [[FOR1:%.*]] = phi i32 [ 33, %[[SCALAR_PH]] ], [ [[TMP38:%.*]], %[[FOR_BODY]] ]
+; IF-EVL-NEXT: [[FOR2:%.*]] = phi i32 [ 22, %[[SCALAR_PH]] ], [ [[FOR1]], %[[FOR_BODY]] ]
+; IF-EVL-NEXT: [[FOR3:%.*]] = phi i32 [ 11, %[[SCALAR_PH]] ], [ [[FOR2]], %[[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds nuw i32, ptr [[A]], i64 [[INDVARS]]
; IF-EVL-NEXT: [[TMP38]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ADD:%.*]] = add nsw i32 [[FOR2]], [[FOR3]]
@@ -666,8 +666,8 @@ define void @first_order_recurrence_indvar(ptr noalias %A, i64 %TC) {
; IF-EVL-NEXT: [[SCALAR_RECUR_INIT:%.*]] = phi i64 [ 33, %[[ENTRY]] ]
; IF-EVL-NEXT: br label %[[FOR_BODY:.*]]
; IF-EVL: [[FOR_BODY]]:
-; IF-EVL-NEXT: [[IV1:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV1_NEXT:%.*]], %[[FOR_BODY]] ]
-; IF-EVL-NEXT: [[FOR1:%.*]] = phi i64 [ [[SCALAR_RECUR_INIT]], %[[SCALAR_PH]] ], [ [[TMP14:%.*]], %[[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV1:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV1_NEXT:%.*]], %[[FOR_BODY]] ]
+; IF-EVL-NEXT: [[FOR1:%.*]] = phi i64 [ 33, %[[SCALAR_PH]] ], [ [[TMP14:%.*]], %[[FOR_BODY]] ]
; IF-EVL-NEXT: [[TMP14]] = add i64 [[IV1]], 42
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds nuw i64, ptr [[A]], i64 [[IV1]]
; IF-EVL-NEXT: store i64 [[FOR1]], ptr [[ARRAYIDX]], align 8
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-gather-scatter.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-gather-scatter.ll
index 2aeb1d0b25b5d..da5aed943bc75 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-gather-scatter.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-gather-scatter.ll
@@ -51,7 +51,7 @@ define void @gather_scatter(ptr noalias %in, ptr noalias %out, ptr noalias %inde
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY1:%.*]] ]
; IF-EVL-NEXT: br label [[FOR_BODY1:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[INDVARS_IV1:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT1:%.*]], [[FOR_BODY1]] ]
+; IF-EVL-NEXT: [[INDVARS_IV1:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT1:%.*]], [[FOR_BODY1]] ]
; IF-EVL-NEXT: [[ARRAYIDX3:%.*]] = getelementptr inbounds i32, ptr [[INDEX]], i64 [[INDVARS_IV1]]
; IF-EVL-NEXT: [[TMP0:%.*]] = load i64, ptr [[ARRAYIDX3]], align 8
; IF-EVL-NEXT: [[ARRAYIDX5:%.*]] = getelementptr inbounds float, ptr [[IN]], i64 [[TMP0]]
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-inloop-reduction.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-inloop-reduction.ll
index 3e23df78e0b68..433d1e4c3c824 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-inloop-reduction.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-inloop-reduction.ll
@@ -44,8 +44,8 @@ define i32 @add(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP18:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ADD]] = add nsw i32 [[TMP18]], [[RDX]]
@@ -259,8 +259,8 @@ define i32 @or(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[OR:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[OR:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP18:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[OR]] = or i32 [[TMP18]], [[RDX]]
@@ -367,8 +367,8 @@ define i32 @and(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[AND:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[AND:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP18:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[AND]] = and i32 [[TMP18]], [[RDX]]
@@ -475,8 +475,8 @@ define i32 @xor(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[XOR:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[XOR:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP18:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[XOR]] = xor i32 [[TMP18]], [[RDX]]
@@ -583,8 +583,8 @@ define i32 @smin(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[SMIN:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[SMIN:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP17:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP_I:%.*]] = icmp slt i32 [[TMP17]], [[RDX]]
@@ -694,8 +694,8 @@ define i32 @smax(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[SMAX:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[SMAX:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP17:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP_I:%.*]] = icmp sgt i32 [[TMP17]], [[RDX]]
@@ -805,8 +805,8 @@ define i32 @umin(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[UMIN:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[UMIN:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP17:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP_I:%.*]] = icmp ult i32 [[TMP17]], [[RDX]]
@@ -916,8 +916,8 @@ define i32 @umax(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[UMAX:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[UMAX:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP17:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP_I:%.*]] = icmp ugt i32 [[TMP17]], [[RDX]]
@@ -1027,8 +1027,8 @@ define float @fadd(ptr %a, i64 %n, float %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[START]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP18:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ADD]] = fadd reassoc float [[TMP18]], [[RDX]]
@@ -1243,8 +1243,8 @@ define float @fmin(ptr %a, i64 %n, float %start) #0 {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[MIN:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[START]], [[SCALAR_PH]] ], [ [[MIN:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP17:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP:%.*]] = fcmp fast olt float [[TMP17]], [[RDX]]
@@ -1356,8 +1356,8 @@ define float @fmax(ptr %a, i64 %n, float %start) #0 {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[MAX:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[START]], [[SCALAR_PH]] ], [ [[MAX:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP17:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP:%.*]] = fcmp fast ogt float [[TMP17]], [[RDX]]
@@ -1687,8 +1687,8 @@ define float @fmuladd(ptr %a, ptr %b, i64 %n, float %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[MULADD:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[START]], [[SCALAR_PH]] ], [ [[MULADD:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP21:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ARRAYIDX2:%.*]] = getelementptr inbounds float, ptr [[B]], i64 [[IV]]
@@ -1807,8 +1807,8 @@ define i32 @anyof_icmp(ptr %a, i64 %n, i32 %start, i32 %inv) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ANYOF:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[ANYOF:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP21:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP_I:%.*]] = icmp slt i32 [[TMP21]], 3
@@ -1924,8 +1924,8 @@ define i32 @anyof_fcmp(ptr %a, i64 %n, i32 %start, i32 %inv) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ANYOF:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[ANYOF:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP21:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP_I:%.*]] = fcmp fast olt float [[TMP21]], 3.000000e+00
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-interleave.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-interleave.ll
index 8d987a94d383d..c5d2739d0c087 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-interleave.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-interleave.ll
@@ -50,7 +50,7 @@ define void @interleave(ptr noalias %a, ptr noalias %b, i64 %N) {
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds [2 x i32], ptr [[B]], i64 [[IV]], i32 0
; IF-EVL-NEXT: [[TMP12:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ARRAYIDX2:%.*]] = getelementptr inbounds [2 x i32], ptr [[B]], i64 [[IV]], i32 1
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-iv32.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-iv32.ll
index d474a03b90ee9..62a4f7373d85f 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-iv32.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-iv32.ll
@@ -39,7 +39,7 @@ define void @iv32(ptr noalias %a, ptr noalias %b, i32 %N) {
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY1:%.*]] ]
; IF-EVL-NEXT: br label [[FOR_BODY1:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV1:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT1:%.*]], [[FOR_BODY1]] ]
+; IF-EVL-NEXT: [[IV1:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT1:%.*]], [[FOR_BODY1]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[B]], i32 [[IV1]]
; IF-EVL-NEXT: [[TMP0:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ARRAYIDX4:%.*]] = getelementptr inbounds i32, ptr [[A]], i32 [[IV1]]
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-known-no-overflow.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-known-no-overflow.ll
index 06c6bfe64dd2f..296405d71ea87 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-known-no-overflow.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-known-no-overflow.ll
@@ -44,7 +44,7 @@ define void @trip_count_max_1024(ptr %p, i64 %tc) vscale_range(2, 1024) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[LOOP_PREHEADER]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[I:%.*]] = phi i64 [ [[I_NEXT:%.*]], %[[LOOP]] ], [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ]
+; CHECK-NEXT: [[I:%.*]] = phi i64 [ [[I_NEXT:%.*]], %[[LOOP]] ], [ 0, %[[SCALAR_PH]] ]
; CHECK-NEXT: [[GEP:%.*]] = getelementptr i64, ptr [[P]], i64 [[I]]
; CHECK-NEXT: [[X:%.*]] = load i64, ptr [[GEP]], align 8
; CHECK-NEXT: [[Y:%.*]] = add i64 [[X]], 1
@@ -113,7 +113,7 @@ define void @overflow_at_0(ptr %p, i64 %tc) vscale_range(2, 1024) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[LOOP_PREHEADER]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[I:%.*]] = phi i64 [ [[I_NEXT:%.*]], %[[LOOP]] ], [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ]
+; CHECK-NEXT: [[I:%.*]] = phi i64 [ [[I_NEXT:%.*]], %[[LOOP]] ], [ 0, %[[SCALAR_PH]] ]
; CHECK-NEXT: [[GEP:%.*]] = getelementptr i64, ptr [[P]], i64 [[I]]
; CHECK-NEXT: [[X:%.*]] = load i64, ptr [[GEP]], align 8
; CHECK-NEXT: [[Y:%.*]] = add i64 [[X]], 1
@@ -182,7 +182,7 @@ define void @no_overflow_at_0(ptr %p, i64 %tc) vscale_range(2, 1024) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[LOOP_PREHEADER]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[I:%.*]] = phi i64 [ [[I_NEXT:%.*]], %[[LOOP]] ], [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ]
+; CHECK-NEXT: [[I:%.*]] = phi i64 [ [[I_NEXT:%.*]], %[[LOOP]] ], [ 0, %[[SCALAR_PH]] ]
; CHECK-NEXT: [[GEP:%.*]] = getelementptr i64, ptr [[P]], i64 [[I]]
; CHECK-NEXT: [[X:%.*]] = load i64, ptr [[GEP]], align 8
; CHECK-NEXT: [[Y:%.*]] = add i64 [[X]], 1
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-masked-loadstore.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-masked-loadstore.ll
index 5f407fcca259d..e06bbe947671f 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-masked-loadstore.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-masked-loadstore.ll
@@ -43,7 +43,7 @@ define void @masked_loadstore(ptr noalias %a, ptr noalias %b, i64 %n) {
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[I_011:%.*]] = phi i64 [ [[INC:%.*]], [[FOR_INC:%.*]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; IF-EVL-NEXT: [[I_011:%.*]] = phi i64 [ [[INC:%.*]], [[FOR_INC:%.*]] ], [ 0, [[SCALAR_PH]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[B]], i64 [[I_011]]
; IF-EVL-NEXT: [[TMP23:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP1:%.*]] = icmp ne i32 [[TMP23]], 0
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-ordered-reduction.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-ordered-reduction.ll
index 59d137011d288..775d9cad49683 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-ordered-reduction.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-ordered-reduction.ll
@@ -43,8 +43,8 @@ define float @fadd(ptr noalias nocapture readonly %a, i64 %n) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[SUM_07:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[SUM_07:%.*]] = phi float [ 0.000000e+00, [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP17:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ADD]] = fadd float [[TMP17]], [[SUM_07]]
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-reduction.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-reduction.ll
index 2d5718b24cb30..464667df73627 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-reduction.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-reduction.ll
@@ -44,8 +44,8 @@ define i32 @add(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP18:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ADD]] = add nsw i32 [[TMP18]], [[RDX]]
@@ -262,8 +262,8 @@ define i32 @or(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[OR:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[OR:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP18:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[OR]] = or i32 [[TMP18]], [[RDX]]
@@ -373,8 +373,8 @@ define i32 @and(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[AND:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[AND:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP18:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[AND]] = and i32 [[TMP18]], [[RDX]]
@@ -484,8 +484,8 @@ define i32 @xor(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[XOR:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[XOR:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP18:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[XOR]] = xor i32 [[TMP18]], [[RDX]]
@@ -597,8 +597,8 @@ define i32 @smin(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[SMIN:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[SMIN:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP19:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP_I:%.*]] = icmp slt i32 [[TMP19]], [[RDX]]
@@ -715,8 +715,8 @@ define i32 @smax(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[SMAX:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[SMAX:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP19:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP_I:%.*]] = icmp sgt i32 [[TMP19]], [[RDX]]
@@ -833,8 +833,8 @@ define i32 @umin(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[UMIN:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[UMIN:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP19:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP_I:%.*]] = icmp ult i32 [[TMP19]], [[RDX]]
@@ -951,8 +951,8 @@ define i32 @umax(ptr %a, i64 %n, i32 %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[UMAX:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[UMAX:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP19:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP_I:%.*]] = icmp ugt i32 [[TMP19]], [[RDX]]
@@ -1067,8 +1067,8 @@ define float @fadd(ptr %a, i64 %n, float %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[START]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP18:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ADD]] = fadd reassoc float [[TMP18]], [[RDX]]
@@ -1287,8 +1287,8 @@ define float @fmin(ptr %a, i64 %n, float %start) #0 {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[MIN:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[START]], [[SCALAR_PH]] ], [ [[MIN:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP19:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP:%.*]] = fcmp fast olt float [[TMP19]], [[RDX]]
@@ -1405,8 +1405,8 @@ define float @fmax(ptr %a, i64 %n, float %start) #0 {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[MAX:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[START]], [[SCALAR_PH]] ], [ [[MAX:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP19:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP:%.*]] = fcmp fast ogt float [[TMP19]], [[RDX]]
@@ -1739,8 +1739,8 @@ define float @fmuladd(ptr %a, ptr %b, i64 %n, float %start) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[MULADD:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi float [ [[START]], [[SCALAR_PH]] ], [ [[MULADD:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP21:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ARRAYIDX2:%.*]] = getelementptr inbounds float, ptr [[B]], i64 [[IV]]
@@ -1859,8 +1859,8 @@ define i32 @anyof_icmp(ptr %a, i64 %n, i32 %start, i32 %inv) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ANYOF:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[ANYOF:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP20:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP_I:%.*]] = icmp slt i32 [[TMP20]], 3
@@ -1976,8 +1976,8 @@ define i32 @anyof_fcmp(ptr %a, i64 %n, i32 %start, i32 %inv) {
; IF-EVL-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[START]], [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[ANYOF:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[RDX:%.*]] = phi i32 [ [[START]], [[SCALAR_PH]] ], [ [[ANYOF:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP20:%.*]] = load float, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[CMP_I:%.*]] = fcmp fast olt float [[TMP20]], 3.000000e+00
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-reverse-load-store.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-reverse-load-store.ll
index e2db28d54ac58..397cb95c25662 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-reverse-load-store.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-reverse-load-store.ll
@@ -57,8 +57,8 @@ define void @reverse_load_store(i64 %startval, ptr noalias %ptr, ptr noalias %pt
; IF-EVL-NEXT: [[BC_RESUME_VAL2:%.*]] = phi i32 [ 0, [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[ADD_PHI:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
-; IF-EVL-NEXT: [[I:%.*]] = phi i32 [ [[BC_RESUME_VAL2]], [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[ADD_PHI:%.*]] = phi i64 [ [[STARTVAL]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[I:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ADD]] = add i64 [[ADD_PHI]], -1
; IF-EVL-NEXT: [[GEPL:%.*]] = getelementptr inbounds i32, ptr [[PTR]], i64 [[ADD]]
; IF-EVL-NEXT: [[TMP:%.*]] = load i32, ptr [[GEPL]], align 4
@@ -205,8 +205,8 @@ define void @reverse_load_store_masked(i64 %startval, ptr noalias %ptr, ptr noal
; IF-EVL-NEXT: [[BC_RESUME_VAL2:%.*]] = phi i32 [ 0, [[ENTRY]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[ADD_PHI:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_INC:%.*]] ]
-; IF-EVL-NEXT: [[I:%.*]] = phi i32 [ [[BC_RESUME_VAL2]], [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_INC]] ]
+; IF-EVL-NEXT: [[ADD_PHI:%.*]] = phi i64 [ [[STARTVAL]], [[SCALAR_PH]] ], [ [[ADD:%.*]], [[FOR_INC:%.*]] ]
+; IF-EVL-NEXT: [[I:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_INC]] ]
; IF-EVL-NEXT: [[ADD]] = add i64 [[ADD_PHI]], -1
; IF-EVL-NEXT: [[GEPL:%.*]] = getelementptr inbounds i32, ptr [[PTR]], i32 [[I]]
; IF-EVL-NEXT: [[TMP:%.*]] = load i32, ptr [[GEPL]], align 4
@@ -388,7 +388,7 @@ define void @multiple_reverse_vector_pointer(ptr noalias %a, ptr noalias %b, ptr
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 1024, [[ENTRY:%.*]] ]
; IF-EVL-NEXT: br label [[LOOP:%.*]]
; IF-EVL: loop:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 1024, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
; IF-EVL-NEXT: [[GEP_A:%.*]] = getelementptr i8, ptr [[A]], i64 [[IV]]
; IF-EVL-NEXT: [[X:%.*]] = load i8, ptr [[GEP_A]], align 1
; IF-EVL-NEXT: [[GEP_B:%.*]] = getelementptr i8, ptr [[B]], i8 [[X]]
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-safe-dep-distance.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-safe-dep-distance.ll
index 1c78b25e114c4..2ec23b91f4b02 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-safe-dep-distance.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-safe-dep-distance.ll
@@ -44,7 +44,7 @@ define void @test(ptr %p) {
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; IF-EVL-NEXT: br label [[LOOP:%.*]]
; IF-EVL: loop:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
; IF-EVL-NEXT: [[A1:%.*]] = getelementptr i64, ptr [[P]], i64 [[IV]]
; IF-EVL-NEXT: [[V:%.*]] = load i64, ptr [[A1]], align 8
; IF-EVL-NEXT: [[OFFSET:%.*]] = add i64 [[IV]], 200
@@ -375,7 +375,7 @@ define void @trivial_due_max_vscale(ptr %p) {
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; IF-EVL-NEXT: br label [[LOOP:%.*]]
; IF-EVL: loop:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
; IF-EVL-NEXT: [[A1:%.*]] = getelementptr i64, ptr [[P]], i64 [[IV]]
; IF-EVL-NEXT: [[V:%.*]] = load i64, ptr [[A1]], align 32
; IF-EVL-NEXT: [[OFFSET:%.*]] = add i64 [[IV]], 8192
@@ -483,7 +483,7 @@ define void @no_high_lmul_or_interleave(ptr %p) {
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; IF-EVL-NEXT: br label [[LOOP:%.*]]
; IF-EVL: loop:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
; IF-EVL-NEXT: [[A1:%.*]] = getelementptr i64, ptr [[P]], i64 [[IV]]
; IF-EVL-NEXT: [[V:%.*]] = load i64, ptr [[A1]], align 32
; IF-EVL-NEXT: [[OFFSET:%.*]] = add i64 [[IV]], 1024
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-uniform-store.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-uniform-store.ll
index 687a2e7bf9312..ab05166a8b894 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-uniform-store.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-uniform-store.ll
@@ -50,7 +50,7 @@ define void @lshift_significand(i32 %n, ptr nocapture writeonly %dst) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ [[SPEC_SELECT]], %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV1:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV1:%.*]] = phi i64 [ [[SPEC_SELECT]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[TMP22:%.*]] = sub nuw nsw i64 1, [[IV1]]
; CHECK-NEXT: [[ARRAYIDX14:%.*]] = getelementptr i64, ptr [[DST]], i64 [[TMP22]]
; CHECK-NEXT: store i64 0, ptr [[ARRAYIDX14]], align 8
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/truncate-to-minimal-bitwidth-cost.ll b/llvm/test/Transforms/LoopVectorize/RISCV/truncate-to-minimal-bitwidth-cost.ll
index 24649729f43ba..034b76741bab6 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/truncate-to-minimal-bitwidth-cost.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/truncate-to-minimal-bitwidth-cost.ll
@@ -179,7 +179,7 @@ define void @truncate_to_i1_used_by_branch(i8 %x, ptr %dst) #0 {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i8 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP_HEADER:.*]]
; CHECK: [[LOOP_HEADER]]:
-; CHECK-NEXT: [[F_039:%.*]] = phi i8 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[ADD:%.*]], %[[LOOP_LATCH:.*]] ]
+; CHECK-NEXT: [[F_039:%.*]] = phi i8 [ 0, %[[SCALAR_PH]] ], [ [[ADD:%.*]], %[[LOOP_LATCH:.*]] ]
; CHECK-NEXT: [[TMP4:%.*]] = or i8 23, [[X]]
; CHECK-NEXT: [[EXTRACT_T:%.*]] = trunc i8 [[TMP4]] to i1
; CHECK-NEXT: br i1 [[EXTRACT_T]], label %[[THEN:.*]], label %[[LOOP_LATCH]]
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/truncate-to-minimal-bitwidth-evl-crash.ll b/llvm/test/Transforms/LoopVectorize/RISCV/truncate-to-minimal-bitwidth-evl-crash.ll
index dfdc893570810..01edeed803878 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/truncate-to-minimal-bitwidth-evl-crash.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/truncate-to-minimal-bitwidth-evl-crash.ll
@@ -36,7 +36,7 @@ define void @truncate_to_minimal_bitwidths_widen_cast_recipe(ptr %src) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV1:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV1:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[GEP_SRC1:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[IV1]]
; CHECK-NEXT: [[TMP11:%.*]] = load i8, ptr [[GEP_SRC1]], align 1
; CHECK-NEXT: [[CONV:%.*]] = zext i8 [[TMP11]] to i32
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/uniform-load-store.ll b/llvm/test/Transforms/LoopVectorize/RISCV/uniform-load-store.ll
index 568aa953de511..d97e93d892a06 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/uniform-load-store.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/uniform-load-store.ll
@@ -117,7 +117,7 @@ define void @uniform_load(ptr noalias nocapture %a, ptr noalias nocapture %b, i6
; TF-SCALABLE-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; TF-SCALABLE-NEXT: br label %[[FOR_BODY:.*]]
; TF-SCALABLE: [[FOR_BODY]]:
-; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
; TF-SCALABLE-NEXT: [[V:%.*]] = load i64, ptr [[B]], align 8
; TF-SCALABLE-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV]]
; TF-SCALABLE-NEXT: store i64 [[V]], ptr [[ARRAYIDX]], align 8
@@ -439,7 +439,7 @@ define void @conditional_uniform_load(ptr noalias nocapture %a, ptr noalias noca
; TF-SCALABLE-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; TF-SCALABLE-NEXT: br label %[[FOR_BODY:.*]]
; TF-SCALABLE: [[FOR_BODY]]:
-; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LATCH:.*]] ]
+; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LATCH:.*]] ]
; TF-SCALABLE-NEXT: [[CMP:%.*]] = icmp ugt i64 [[IV]], 10
; TF-SCALABLE-NEXT: br i1 [[CMP]], label %[[DO_LOAD:.*]], label %[[LATCH]]
; TF-SCALABLE: [[DO_LOAD]]:
@@ -589,7 +589,7 @@ define void @uniform_load_unaligned(ptr noalias nocapture %a, ptr noalias nocapt
; TF-SCALABLE-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; TF-SCALABLE-NEXT: br label %[[FOR_BODY:.*]]
; TF-SCALABLE: [[FOR_BODY]]:
-; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
; TF-SCALABLE-NEXT: [[V:%.*]] = load i64, ptr [[B]], align 1
; TF-SCALABLE-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV]]
; TF-SCALABLE-NEXT: store i64 [[V]], ptr [[ARRAYIDX]], align 8
@@ -726,7 +726,7 @@ define void @uniform_store(ptr noalias nocapture %a, ptr noalias nocapture %b, i
; TF-SCALABLE-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; TF-SCALABLE-NEXT: br label %[[FOR_BODY:.*]]
; TF-SCALABLE: [[FOR_BODY]]:
-; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
; TF-SCALABLE-NEXT: store i64 [[V]], ptr [[B]], align 8
; TF-SCALABLE-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV]]
; TF-SCALABLE-NEXT: store i64 [[V]], ptr [[ARRAYIDX]], align 8
@@ -890,7 +890,7 @@ define void @uniform_store_of_loop_varying(ptr noalias nocapture %a, ptr noalias
; TF-SCALABLE-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; TF-SCALABLE-NEXT: br label %[[FOR_BODY:.*]]
; TF-SCALABLE: [[FOR_BODY]]:
-; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
; TF-SCALABLE-NEXT: store i64 [[IV]], ptr [[B]], align 8
; TF-SCALABLE-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV]]
; TF-SCALABLE-NEXT: store i64 [[V]], ptr [[ARRAYIDX]], align 8
@@ -1068,7 +1068,7 @@ define void @conditional_uniform_store(ptr noalias nocapture %a, ptr noalias noc
; TF-SCALABLE-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; TF-SCALABLE-NEXT: br label %[[FOR_BODY:.*]]
; TF-SCALABLE: [[FOR_BODY]]:
-; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LATCH:.*]] ]
+; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LATCH:.*]] ]
; TF-SCALABLE-NEXT: [[CMP:%.*]] = icmp ugt i64 [[IV]], 10
; TF-SCALABLE-NEXT: br i1 [[CMP]], label %[[DO_STORE:.*]], label %[[LATCH]]
; TF-SCALABLE: [[DO_STORE]]:
@@ -1216,7 +1216,7 @@ define void @uniform_store_unaligned(ptr noalias nocapture %a, ptr noalias nocap
; TF-SCALABLE-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; TF-SCALABLE-NEXT: br label %[[FOR_BODY:.*]]
; TF-SCALABLE: [[FOR_BODY]]:
-; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; TF-SCALABLE-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ]
; TF-SCALABLE-NEXT: store i64 [[V]], ptr [[B]], align 1
; TF-SCALABLE-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV]]
; TF-SCALABLE-NEXT: store i64 [[V]], ptr [[ARRAYIDX]], align 8
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/vector-loop-backedge-elimination-with-evl.ll b/llvm/test/Transforms/LoopVectorize/RISCV/vector-loop-backedge-elimination-with-evl.ll
index 7c1ec9ab6c5fd..d93a5c0c38098 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/vector-loop-backedge-elimination-with-evl.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/vector-loop-backedge-elimination-with-evl.ll
@@ -27,7 +27,7 @@ define void @foo(ptr %arg) #0 {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[GEP:%.*]] = getelementptr [3 x i64], ptr [[ARG]], i64 0, i64 [[IV]]
; CHECK-NEXT: store i64 0, ptr [[GEP]], align 8
; CHECK-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/vectorize-vp-intrinsics.ll b/llvm/test/Transforms/LoopVectorize/RISCV/vectorize-vp-intrinsics.ll
index 85116feab6a33..d3c3c6b958998 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/vectorize-vp-intrinsics.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/vectorize-vp-intrinsics.ll
@@ -43,7 +43,7 @@ define void @foo(ptr noalias %a, ptr noalias %b, ptr noalias %c, i64 %N) {
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[B]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP22:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ARRAYIDX2:%.*]] = getelementptr inbounds i32, ptr [[C]], i64 [[IV]]
diff --git a/llvm/test/Transforms/LoopVectorize/SystemZ/force-target-instruction-cost.ll b/llvm/test/Transforms/LoopVectorize/SystemZ/force-target-instruction-cost.ll
index 082e3266e7c8f..0fb46550d043f 100644
--- a/llvm/test/Transforms/LoopVectorize/SystemZ/force-target-instruction-cost.ll
+++ b/llvm/test/Transforms/LoopVectorize/SystemZ/force-target-instruction-cost.ll
@@ -42,7 +42,7 @@ define void @test_scalar_steps_target_instruction_cost(ptr %dst) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[GEP:%.*]] = getelementptr inbounds i64, ptr [[DST]], i64 [[IV]]
; CHECK-NEXT: store i64 [[IV]], ptr [[GEP]], align 8
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 3
diff --git a/llvm/test/Transforms/LoopVectorize/SystemZ/pr47665.ll b/llvm/test/Transforms/LoopVectorize/SystemZ/pr47665.ll
index 02a876a3fda67..d7cc6f00af44c 100644
--- a/llvm/test/Transforms/LoopVectorize/SystemZ/pr47665.ll
+++ b/llvm/test/Transforms/LoopVectorize/SystemZ/pr47665.ll
@@ -96,7 +96,7 @@ define void @test(ptr %p, i40 %a) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[SHL:%.*]] = shl i40 [[A]], 24
; CHECK-NEXT: [[ASHR:%.*]] = ashr i40 [[SHL]], 28
; CHECK-NEXT: [[TRUNC:%.*]] = trunc i40 [[ASHR]] to i32
diff --git a/llvm/test/Transforms/LoopVectorize/SystemZ/predicated-first-order-recurrence.ll b/llvm/test/Transforms/LoopVectorize/SystemZ/predicated-first-order-recurrence.ll
index e0fc73f669946..4e46a29821e9b 100644
--- a/llvm/test/Transforms/LoopVectorize/SystemZ/predicated-first-order-recurrence.ll
+++ b/llvm/test/Transforms/LoopVectorize/SystemZ/predicated-first-order-recurrence.ll
@@ -69,8 +69,8 @@ define void @func_21() {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[SCALAR_RECUR:%.*]] = phi i32 [ [[SCALAR_RECUR_INIT]], [[SCALAR_PH]] ], [ [[LV:%.*]], [[LOOP]] ]
-; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[SCALAR_RECUR:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[LV:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[LOOP]] ]
; CHECK-NEXT: [[A_PTR:%.*]] = getelementptr inbounds [5 x i32], ptr @A, i64 0, i64 [[INDVARS_IV]]
; CHECK-NEXT: [[LV]] = load i32, ptr [[A_PTR]], align 4
; CHECK-NEXT: [[B_PTR:%.*]] = getelementptr inbounds [5 x i32], ptr @B, i64 0, i64 [[INDVARS_IV]]
diff --git a/llvm/test/Transforms/LoopVectorize/X86/constant-fold.ll b/llvm/test/Transforms/LoopVectorize/X86/constant-fold.ll
index c61b1b90f3dfe..37493d1d47c9f 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/constant-fold.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/constant-fold.ll
@@ -117,7 +117,7 @@ define void @redundant_or_1(ptr %dst, i1 %c.0, i1 %c.1) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[LOOP_HEADER:%.*]]
; CHECK: loop.header:
-; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP_LATCH:%.*]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP_LATCH:%.*]] ]
; CHECK-NEXT: br i1 [[C_0]], label [[LOOP_LATCH]], label [[THEN_1:%.*]]
; CHECK: then.1:
; CHECK-NEXT: [[CMP:%.*]] = icmp eq i32 [[IV]], 2
@@ -220,7 +220,7 @@ define void @redundant_or_2(ptr %dst, i1 %c.0, i1 %c.1) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[LOOP_HEADER:%.*]]
; CHECK: loop.header:
-; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP_LATCH:%.*]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP_LATCH:%.*]] ]
; CHECK-NEXT: br i1 [[C_1]], label [[LOOP_LATCH]], label [[THEN_1:%.*]]
; CHECK: then.1:
; CHECK-NEXT: [[CMP:%.*]] = icmp eq i32 [[IV]], 2
diff --git a/llvm/test/Transforms/LoopVectorize/X86/cost-model.ll b/llvm/test/Transforms/LoopVectorize/X86/cost-model.ll
index 85b475c996c79..1a3ff6c65fc10 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/cost-model.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/cost-model.ll
@@ -1055,8 +1055,8 @@ define i64 @live_in_known_1_via_scev() {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i64 [ 3, [[PH]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
-; CHECK-NEXT: [[RED:%.*]] = phi i64 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[RED_MUL:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[RED:%.*]] = phi i64 [ 3, [[SCALAR_PH]] ], [ [[RED_MUL:%.*]], [[LOOP]] ]
; CHECK-NEXT: [[RED_MUL]] = mul nsw i64 [[RED]], [[P_EXT]]
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i32 [[IV]], 1
; CHECK-NEXT: [[EC:%.*]] = icmp eq i32 [[IV_NEXT]], [[N]]
diff --git a/llvm/test/Transforms/LoopVectorize/X86/drop-inbounds-flags-for-reverse-vector-pointer.ll b/llvm/test/Transforms/LoopVectorize/X86/drop-inbounds-flags-for-reverse-vector-pointer.ll
index 1249df4af62ad..ee85e0ec179e0 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/drop-inbounds-flags-for-reverse-vector-pointer.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/drop-inbounds-flags-for-reverse-vector-pointer.ll
@@ -46,8 +46,8 @@ define i1 @fn(ptr %nno) #0 {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[ENTRY]] ]
; CHECK-NEXT: br label [[FOR_BODY20:%.*]]
; CHECK: loop.header:
-; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_INC35:%.*]] ]
-; CHECK-NEXT: [[SUM_01:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[SUM_1:%.*]], [[FOR_INC35]] ]
+; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ 10, [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_INC35:%.*]] ]
+; CHECK-NEXT: [[SUM_01:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[SUM_1:%.*]], [[FOR_INC35]] ]
; CHECK-NEXT: [[REM4:%.*]] = and i64 [[INDVARS_IV]], 1
; CHECK-NEXT: [[CMP21:%.*]] = icmp eq i64 [[REM4]], 0
; CHECK-NEXT: [[GEP:%.*]] = getelementptr inbounds nuw i32, ptr [[NNO]], i64 [[INDVARS_IV]]
diff --git a/llvm/test/Transforms/LoopVectorize/X86/fixed-order-recurrence.ll b/llvm/test/Transforms/LoopVectorize/X86/fixed-order-recurrence.ll
index fe2ad661967e6..07b130bff662c 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/fixed-order-recurrence.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/fixed-order-recurrence.ll
@@ -507,8 +507,8 @@ define void @test_first_order_recurrence_tried_to_scalarized(ptr %dst, i1 %c, i3
; CHECK-NEXT: [[SCALAR_RECUR_INIT:%.*]] = phi i32 [ 4, [[ENTRY]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
-; CHECK-NEXT: [[FOR:%.*]] = phi i32 [ [[SCALAR_RECUR_INIT]], [[SCALAR_PH]] ], [ [[IV]], [[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[FOR:%.*]] = phi i32 [ 4, [[SCALAR_PH]] ], [ [[IV]], [[LOOP]] ]
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i32 [[IV]], 1
; CHECK-NEXT: [[SUB:%.*]] = sub nsw i32 10, [[FOR]]
; CHECK-NEXT: [[GEP_DST:%.*]] = getelementptr inbounds nuw i32, ptr [[DST]], i32 [[IV]]
diff --git a/llvm/test/Transforms/LoopVectorize/X86/induction-costs.ll b/llvm/test/Transforms/LoopVectorize/X86/induction-costs.ll
index fcd94f444e8a5..a66800c7a3e06 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/induction-costs.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/induction-costs.ll
@@ -623,7 +623,7 @@ define void @wide_iv_trunc(ptr %dst, i64 %N) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[LOOP_PREHEADER]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], [[LOOP]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], [[LOOP]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: [[IV_TRUNC:%.*]] = trunc i64 [[IV]] to i32
; CHECK-NEXT: store i32 [[IV_TRUNC]], ptr [[DST]], align 4
; CHECK-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1
diff --git a/llvm/test/Transforms/LoopVectorize/X86/optsize.ll b/llvm/test/Transforms/LoopVectorize/X86/optsize.ll
index 07e2df360e249..c5ac0ae8b8daa 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/optsize.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/optsize.ll
@@ -35,7 +35,7 @@ define i32 @foo_optsize() #0 {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds [32 x i8], ptr @tab, i32 0, i32 [[I_08]]
; CHECK-NEXT: [[TMP7:%.*]] = load i8, ptr [[ARRAYIDX]], align 1
; CHECK-NEXT: [[CMP1:%.*]] = icmp eq i8 [[TMP7]], 0
@@ -72,7 +72,7 @@ define i32 @foo_optsize() #0 {
; AUTOVF-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; AUTOVF-NEXT: br label [[FOR_BODY:%.*]]
; AUTOVF: for.body:
-; AUTOVF-NEXT: [[I_08:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_BODY]] ]
+; AUTOVF-NEXT: [[I_08:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_BODY]] ]
; AUTOVF-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds [32 x i8], ptr @tab, i32 0, i32 [[I_08]]
; AUTOVF-NEXT: [[TMP7:%.*]] = load i8, ptr [[ARRAYIDX]], align 1
; AUTOVF-NEXT: [[CMP1:%.*]] = icmp eq i8 [[TMP7]], 0
@@ -131,7 +131,7 @@ define i32 @foo_minsize() #1 {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[I_08:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds [32 x i8], ptr @tab, i32 0, i32 [[I_08]]
; CHECK-NEXT: [[TMP7:%.*]] = load i8, ptr [[ARRAYIDX]], align 1
; CHECK-NEXT: [[CMP1:%.*]] = icmp eq i8 [[TMP7]], 0
@@ -168,7 +168,7 @@ define i32 @foo_minsize() #1 {
; AUTOVF-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; AUTOVF-NEXT: br label [[FOR_BODY:%.*]]
; AUTOVF: for.body:
-; AUTOVF-NEXT: [[I_08:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_BODY]] ]
+; AUTOVF-NEXT: [[I_08:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_BODY]] ]
; AUTOVF-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds [32 x i8], ptr @tab, i32 0, i32 [[I_08]]
; AUTOVF-NEXT: [[TMP7:%.*]] = load i8, ptr [[ARRAYIDX]], align 1
; AUTOVF-NEXT: [[CMP1:%.*]] = icmp eq i8 [[TMP7]], 0
@@ -379,7 +379,7 @@ define void @tail_folded_store_avx512(ptr %start, ptr %end) #3 {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi ptr [ [[START]], [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[PTR_IV:%.*]] = phi ptr [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[PTR_IV_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[PTR_IV:%.*]] = phi ptr [ [[START]], [[SCALAR_PH]] ], [ [[PTR_IV_NEXT:%.*]], [[LOOP]] ]
; CHECK-NEXT: [[PTR_IV_NEXT]] = getelementptr nusw i8, ptr [[PTR_IV]], i64 -72
; CHECK-NEXT: store ptr null, ptr [[PTR_IV]], align 8
; CHECK-NEXT: [[EC:%.*]] = icmp eq ptr [[PTR_IV_NEXT]], [[END]]
@@ -423,7 +423,7 @@ define void @tail_folded_store_avx512(ptr %start, ptr %end) #3 {
; AUTOVF-NEXT: [[BC_RESUME_VAL:%.*]] = phi ptr [ [[START]], [[ENTRY:%.*]] ]
; AUTOVF-NEXT: br label [[LOOP:%.*]]
; AUTOVF: loop:
-; AUTOVF-NEXT: [[PTR_IV:%.*]] = phi ptr [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[PTR_IV_NEXT:%.*]], [[LOOP]] ]
+; AUTOVF-NEXT: [[PTR_IV:%.*]] = phi ptr [ [[START]], [[SCALAR_PH]] ], [ [[PTR_IV_NEXT:%.*]], [[LOOP]] ]
; AUTOVF-NEXT: [[PTR_IV_NEXT]] = getelementptr nusw i8, ptr [[PTR_IV]], i64 -72
; AUTOVF-NEXT: store ptr null, ptr [[PTR_IV]], align 8
; AUTOVF-NEXT: [[EC:%.*]] = icmp eq ptr [[PTR_IV_NEXT]], [[END]]
diff --git a/llvm/test/Transforms/LoopVectorize/X86/pr81872.ll b/llvm/test/Transforms/LoopVectorize/X86/pr81872.ll
index 08adfdd4793eb..11c5e3906d436 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/pr81872.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/pr81872.ll
@@ -44,7 +44,7 @@ define void @test(ptr noundef align 8 dereferenceable_or_null(16) %arr) #0 {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 99, [[BB5:%.*]] ]
; CHECK-NEXT: br label [[LOOP_HEADER:%.*]]
; CHECK: loop.header:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP_LATCH:%.*]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 99, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP_LATCH:%.*]] ]
; CHECK-NEXT: [[AND:%.*]] = and i64 [[IV]], 1
; CHECK-NEXT: [[ICMP17:%.*]] = icmp eq i64 [[AND]], 0
; CHECK-NEXT: br i1 [[ICMP17]], label [[BB18:%.*]], label [[LOOP_LATCH]], !prof [[PROF5:![0-9]+]]
diff --git a/llvm/test/Transforms/LoopVectorize/X86/scev-checks-unprofitable.ll b/llvm/test/Transforms/LoopVectorize/X86/scev-checks-unprofitable.ll
index 440f6e1dfeeff..4145967c11679 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/scev-checks-unprofitable.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/scev-checks-unprofitable.ll
@@ -53,7 +53,7 @@ define void @value_defined_in_loop1_used_for_trip_counts(i32 %start, i1 %c, ptr
; CHECK-NEXT: [[EC_2:%.*]] = icmp ult i64 [[IV_2]], [[IV_1_LCSSA]]
; CHECK-NEXT: br i1 [[EC_2]], label %[[LOOP_2]], label %[[EXIT_1_LOOPEXIT:.*]]
; CHECK: [[LOOP_3]]:
-; CHECK-NEXT: [[IV_4:%.*]] = phi i64 [ [[IV_4_NEXT:%.*]], %[[LOOP_3]] ], [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ]
+; CHECK-NEXT: [[IV_4:%.*]] = phi i64 [ [[IV_4_NEXT:%.*]], %[[LOOP_3]] ], [ 0, %[[SCALAR_PH]] ]
; CHECK-NEXT: [[GEP_DST_2:%.*]] = getelementptr i8, ptr [[DST]], i64 [[IV_4]]
; CHECK-NEXT: store i8 0, ptr [[GEP_DST_2]], align 1
; CHECK-NEXT: [[IV_4_NEXT]] = add i64 [[IV_4]], 1
diff --git a/llvm/test/Transforms/LoopVectorize/X86/tail_loop_folding.ll b/llvm/test/Transforms/LoopVectorize/X86/tail_loop_folding.ll
index 5e35c4ae1f404..9a81fae4f61f1 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/tail_loop_folding.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/tail_loop_folding.ll
@@ -35,7 +35,7 @@ define dso_local void @tail_folding_enabled(ptr noalias nocapture %A, ptr noalia
; CHECK: for.cond.cleanup:
; CHECK-NEXT: ret void
; CHECK: for.body:
-; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[B]], i64 [[INDVARS_IV]]
; CHECK-NEXT: [[TMP10:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; CHECK-NEXT: [[ARRAYIDX2:%.*]] = getelementptr inbounds i32, ptr [[C]], i64 [[INDVARS_IV]]
@@ -99,7 +99,7 @@ define dso_local void @tail_folding_disabled(ptr noalias nocapture %A, ptr noali
; CHECK: for.cond.cleanup:
; CHECK-NEXT: ret void
; CHECK: for.body:
-; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[B]], i64 [[INDVARS_IV]]
; CHECK-NEXT: [[TMP10:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; CHECK-NEXT: [[ARRAYIDX2:%.*]] = getelementptr inbounds i32, ptr [[C]], i64 [[INDVARS_IV]]
@@ -181,8 +181,8 @@ define i32 @reduction_i32(ptr nocapture readonly %A, ptr nocapture readonly %B,
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[ENTRY]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; CHECK-NEXT: [[SUM_0:%.*]] = phi i32 [ [[SUM_1:%.*]], [[FOR_BODY]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-NEXT: [[SUM_0:%.*]] = phi i32 [ [[SUM_1:%.*]], [[FOR_BODY]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
; CHECK-NEXT: [[ARRAYIDXA:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[INDVARS_IV]]
; CHECK-NEXT: [[TMP14:%.*]] = load i32, ptr [[ARRAYIDXA]], align 4
diff --git a/llvm/test/Transforms/LoopVectorize/X86/vect.omp.force.small-tc.ll b/llvm/test/Transforms/LoopVectorize/X86/vect.omp.force.small-tc.ll
index f7eba42edaf57..a926ff4bc560c 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/vect.omp.force.small-tc.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/vect.omp.force.small-tc.ll
@@ -146,7 +146,7 @@ define void @vectorized1(ptr noalias nocapture %A, ptr noalias nocapture readonl
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds float, ptr [[B]], i64 [[INDVARS_IV]]
; CHECK-NEXT: [[TMP8:%.*]] = load float, ptr [[ARRAYIDX]], align 4, !llvm.access.group [[ACC_GRP7]]
; CHECK-NEXT: [[ARRAYIDX2:%.*]] = getelementptr inbounds float, ptr [[A]], i64 [[INDVARS_IV]]
diff --git a/llvm/test/Transforms/LoopVectorize/X86/vectorize-force-tail-with-evl.ll b/llvm/test/Transforms/LoopVectorize/X86/vectorize-force-tail-with-evl.ll
index 59f2925d01fa2..e7fa6559ba818 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/vectorize-force-tail-with-evl.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/vectorize-force-tail-with-evl.ll
@@ -43,7 +43,7 @@ define void @foo(ptr noalias %a, ptr noalias %b, ptr noalias %c, i64 %N) {
; IF-EVL-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; IF-EVL-NEXT: br label [[FOR_BODY:%.*]]
; IF-EVL: for.body:
-; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; IF-EVL-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; IF-EVL-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[B]], i64 [[IV]]
; IF-EVL-NEXT: [[TMP10:%.*]] = load i32, ptr [[ARRAYIDX]], align 4
; IF-EVL-NEXT: [[ARRAYIDX2:%.*]] = getelementptr inbounds i32, ptr [[C]], i64 [[IV]]
diff --git a/llvm/test/Transforms/LoopVectorize/X86/vectorize-interleaved-accesses-gap.ll b/llvm/test/Transforms/LoopVectorize/X86/vectorize-interleaved-accesses-gap.ll
index e9d85c26d26e3..f4fe1208eebac 100644
--- a/llvm/test/Transforms/LoopVectorize/X86/vectorize-interleaved-accesses-gap.ll
+++ b/llvm/test/Transforms/LoopVectorize/X86/vectorize-interleaved-accesses-gap.ll
@@ -79,7 +79,7 @@ define void @test_pr59090(ptr %l_out, ptr noalias %b) #0 {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
; CHECK-NEXT: [[IV_MUL:%.*]] = mul nuw i64 [[IV]], 6
; CHECK-NEXT: [[L:%.*]] = load i8, ptr [[B]], align 1, !llvm.access.group [[ACC_GRP0]]
; CHECK-NEXT: store i8 [[L]], ptr [[B]], align 1, !llvm.access.group [[ACC_GRP0]]
diff --git a/llvm/test/Transforms/LoopVectorize/dead_instructions.ll b/llvm/test/Transforms/LoopVectorize/dead_instructions.ll
index 42d45bda9d7d2..8ac33a1b869f4 100644
--- a/llvm/test/Transforms/LoopVectorize/dead_instructions.ll
+++ b/llvm/test/Transforms/LoopVectorize/dead_instructions.ll
@@ -102,9 +102,9 @@ define void @pr47390(ptr %a) {
; CHECK: [[EXIT]]:
; CHECK-NEXT: ret void
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[PRIMARY:%.*]] = phi i32 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[PRIMARY_ADD:%.*]], %[[LOOP]] ]
-; CHECK-NEXT: [[USE_PRIMARY:%.*]] = phi i32 [ [[BC_RESUME_VAL1]], %[[SCALAR_PH]] ], [ [[PRIMARY]], %[[LOOP]] ]
-; CHECK-NEXT: [[SECONDARY:%.*]] = phi i32 [ [[BC_RESUME_VAL2]], %[[SCALAR_PH]] ], [ [[SECONDARY_ADD:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[PRIMARY:%.*]] = phi i32 [ 0, %[[SCALAR_PH]] ], [ [[PRIMARY_ADD:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[USE_PRIMARY:%.*]] = phi i32 [ -1, %[[SCALAR_PH]] ], [ [[PRIMARY]], %[[LOOP]] ]
+; CHECK-NEXT: [[SECONDARY:%.*]] = phi i32 [ 1, %[[SCALAR_PH]] ], [ [[SECONDARY_ADD:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[PRIMARY_ADD]] = add i32 [[PRIMARY]], 1
; CHECK-NEXT: [[SECONDARY_ADD]] = add i32 [[SECONDARY]], 1
; CHECK-NEXT: [[GEP:%.*]] = getelementptr inbounds i32, ptr [[A]], i32 [[SECONDARY]]
diff --git a/llvm/test/Transforms/LoopVectorize/dont-fold-tail-for-divisible-TC.ll b/llvm/test/Transforms/LoopVectorize/dont-fold-tail-for-divisible-TC.ll
index 1936b409bc150..d66648718f942 100644
--- a/llvm/test/Transforms/LoopVectorize/dont-fold-tail-for-divisible-TC.ll
+++ b/llvm/test/Transforms/LoopVectorize/dont-fold-tail-for-divisible-TC.ll
@@ -203,7 +203,7 @@ define dso_local void @cannotProveAlignedTC(ptr noalias nocapture %A, i32 %p, i3
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[LOOP_PREHEADER]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[RIV:%.*]] = phi i32 [ [[RIVPLUS1:%.*]], [[LOOP]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[RIV:%.*]] = phi i32 [ [[RIVPLUS1:%.*]], [[LOOP]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i32 [[RIV]]
; CHECK-NEXT: store i32 13, ptr [[ARRAYIDX]], align 1
; CHECK-NEXT: [[RIVPLUS1]] = add nuw nsw i32 [[RIV]], 1
diff --git a/llvm/test/Transforms/LoopVectorize/first-order-recurrence.ll b/llvm/test/Transforms/LoopVectorize/first-order-recurrence.ll
index 3adfcf53e4564..db97bdf4041a1 100644
--- a/llvm/test/Transforms/LoopVectorize/first-order-recurrence.ll
+++ b/llvm/test/Transforms/LoopVectorize/first-order-recurrence.ll
@@ -2750,9 +2750,9 @@ define i32 @sink_into_replication_region(i32 %y) {
; UNROLL-NO-IC-NEXT: [[VAR:%.*]] = phi i32 [ [[VAR6:%.*]], [[BB2]] ], [ [[TMP51]], [[MIDDLE_BLOCK]] ]
; UNROLL-NO-IC-NEXT: ret i32 [[VAR]]
; UNROLL-NO-IC: bb2:
-; UNROLL-NO-IC-NEXT: [[VAR3:%.*]] = phi i32 [ [[VAR8:%.*]], [[BB2]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; UNROLL-NO-IC-NEXT: [[VAR4:%.*]] = phi i32 [ [[VAR7:%.*]], [[BB2]] ], [ [[SCALAR_RECUR_INIT]], [[SCALAR_PH]] ]
-; UNROLL-NO-IC-NEXT: [[VAR5:%.*]] = phi i32 [ [[VAR6]], [[BB2]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; UNROLL-NO-IC-NEXT: [[VAR3:%.*]] = phi i32 [ [[VAR8:%.*]], [[BB2]] ], [ [[Y]], [[SCALAR_PH]] ]
+; UNROLL-NO-IC-NEXT: [[VAR4:%.*]] = phi i32 [ [[VAR7:%.*]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
+; UNROLL-NO-IC-NEXT: [[VAR5:%.*]] = phi i32 [ [[VAR6]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
; UNROLL-NO-IC-NEXT: [[VAR6]] = add i32 [[VAR5]], [[VAR4]]
; UNROLL-NO-IC-NEXT: [[VAR7]] = udiv i32 219220132, [[VAR3]]
; UNROLL-NO-IC-NEXT: [[VAR8]] = add nsw i32 [[VAR3]], -1
@@ -2813,9 +2813,9 @@ define i32 @sink_into_replication_region(i32 %y) {
; UNROLL-NO-VF-NEXT: [[VAR:%.*]] = phi i32 [ [[VAR6:%.*]], [[BB2]] ], [ [[BIN_RDX]], [[MIDDLE_BLOCK]] ]
; UNROLL-NO-VF-NEXT: ret i32 [[VAR]]
; UNROLL-NO-VF: bb2:
-; UNROLL-NO-VF-NEXT: [[VAR3:%.*]] = phi i32 [ [[VAR8:%.*]], [[BB2]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; UNROLL-NO-VF-NEXT: [[VAR4:%.*]] = phi i32 [ [[VAR7:%.*]], [[BB2]] ], [ [[SCALAR_RECUR_INIT]], [[SCALAR_PH]] ]
-; UNROLL-NO-VF-NEXT: [[VAR5:%.*]] = phi i32 [ [[VAR6]], [[BB2]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; UNROLL-NO-VF-NEXT: [[VAR3:%.*]] = phi i32 [ [[VAR8:%.*]], [[BB2]] ], [ [[Y]], [[SCALAR_PH]] ]
+; UNROLL-NO-VF-NEXT: [[VAR4:%.*]] = phi i32 [ [[VAR7:%.*]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
+; UNROLL-NO-VF-NEXT: [[VAR5:%.*]] = phi i32 [ [[VAR6]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
; UNROLL-NO-VF-NEXT: [[VAR6]] = add i32 [[VAR5]], [[VAR4]]
; UNROLL-NO-VF-NEXT: [[VAR7]] = udiv i32 219220132, [[VAR3]]
; UNROLL-NO-VF-NEXT: [[VAR8]] = add nsw i32 [[VAR3]], -1
@@ -2899,9 +2899,9 @@ define i32 @sink_into_replication_region(i32 %y) {
; SINK-AFTER-NEXT: [[VAR:%.*]] = phi i32 [ [[VAR6:%.*]], [[BB2]] ], [ [[TMP27]], [[MIDDLE_BLOCK]] ]
; SINK-AFTER-NEXT: ret i32 [[VAR]]
; SINK-AFTER: bb2:
-; SINK-AFTER-NEXT: [[VAR3:%.*]] = phi i32 [ [[VAR8:%.*]], [[BB2]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; SINK-AFTER-NEXT: [[VAR4:%.*]] = phi i32 [ [[VAR7:%.*]], [[BB2]] ], [ [[SCALAR_RECUR_INIT]], [[SCALAR_PH]] ]
-; SINK-AFTER-NEXT: [[VAR5:%.*]] = phi i32 [ [[VAR6]], [[BB2]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; SINK-AFTER-NEXT: [[VAR3:%.*]] = phi i32 [ [[VAR8:%.*]], [[BB2]] ], [ [[Y]], [[SCALAR_PH]] ]
+; SINK-AFTER-NEXT: [[VAR4:%.*]] = phi i32 [ [[VAR7:%.*]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
+; SINK-AFTER-NEXT: [[VAR5:%.*]] = phi i32 [ [[VAR6]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
; SINK-AFTER-NEXT: [[VAR6]] = add i32 [[VAR5]], [[VAR4]]
; SINK-AFTER-NEXT: [[VAR7]] = udiv i32 219220132, [[VAR3]]
; SINK-AFTER-NEXT: [[VAR8]] = add nsw i32 [[VAR3]], -1
@@ -3113,10 +3113,10 @@ define i32 @sink_into_replication_region_multiple(ptr %x, i32 %y) {
; UNROLL-NO-IC-NEXT: [[VAR:%.*]] = phi i32 [ [[VAR6:%.*]], [[BB2]] ], [ [[TMP75]], [[MIDDLE_BLOCK]] ]
; UNROLL-NO-IC-NEXT: ret i32 [[VAR]]
; UNROLL-NO-IC: bb2:
-; UNROLL-NO-IC-NEXT: [[VAR3:%.*]] = phi i32 [ [[VAR8:%.*]], [[BB2]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; UNROLL-NO-IC-NEXT: [[IV:%.*]] = phi i32 [ [[IV_NEXT:%.*]], [[BB2]] ], [ [[BC_RESUME_VAL1]], [[SCALAR_PH]] ]
-; UNROLL-NO-IC-NEXT: [[VAR4:%.*]] = phi i32 [ [[VAR7:%.*]], [[BB2]] ], [ [[SCALAR_RECUR_INIT]], [[SCALAR_PH]] ]
-; UNROLL-NO-IC-NEXT: [[VAR5:%.*]] = phi i32 [ [[VAR6]], [[BB2]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; UNROLL-NO-IC-NEXT: [[VAR3:%.*]] = phi i32 [ [[VAR8:%.*]], [[BB2]] ], [ [[Y]], [[SCALAR_PH]] ]
+; UNROLL-NO-IC-NEXT: [[IV:%.*]] = phi i32 [ [[IV_NEXT:%.*]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
+; UNROLL-NO-IC-NEXT: [[VAR4:%.*]] = phi i32 [ [[VAR7:%.*]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
+; UNROLL-NO-IC-NEXT: [[VAR5:%.*]] = phi i32 [ [[VAR6]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
; UNROLL-NO-IC-NEXT: [[G:%.*]] = getelementptr inbounds i32, ptr [[X]], i32 [[IV]]
; UNROLL-NO-IC-NEXT: [[VAR6]] = add i32 [[VAR5]], [[VAR4]]
; UNROLL-NO-IC-NEXT: [[VAR7]] = udiv i32 219220132, [[VAR3]]
@@ -3194,10 +3194,10 @@ define i32 @sink_into_replication_region_multiple(ptr %x, i32 %y) {
; UNROLL-NO-VF-NEXT: [[VAR:%.*]] = phi i32 [ [[VAR6:%.*]], [[BB2]] ], [ [[BIN_RDX]], [[MIDDLE_BLOCK]] ]
; UNROLL-NO-VF-NEXT: ret i32 [[VAR]]
; UNROLL-NO-VF: bb2:
-; UNROLL-NO-VF-NEXT: [[VAR3:%.*]] = phi i32 [ [[VAR8:%.*]], [[BB2]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; UNROLL-NO-VF-NEXT: [[IV:%.*]] = phi i32 [ [[IV_NEXT:%.*]], [[BB2]] ], [ [[BC_RESUME_VAL1]], [[SCALAR_PH]] ]
-; UNROLL-NO-VF-NEXT: [[VAR4:%.*]] = phi i32 [ [[VAR7:%.*]], [[BB2]] ], [ [[SCALAR_RECUR_INIT]], [[SCALAR_PH]] ]
-; UNROLL-NO-VF-NEXT: [[VAR5:%.*]] = phi i32 [ [[VAR6]], [[BB2]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; UNROLL-NO-VF-NEXT: [[VAR3:%.*]] = phi i32 [ [[VAR8:%.*]], [[BB2]] ], [ [[Y]], [[SCALAR_PH]] ]
+; UNROLL-NO-VF-NEXT: [[IV:%.*]] = phi i32 [ [[IV_NEXT:%.*]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
+; UNROLL-NO-VF-NEXT: [[VAR4:%.*]] = phi i32 [ [[VAR7:%.*]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
+; UNROLL-NO-VF-NEXT: [[VAR5:%.*]] = phi i32 [ [[VAR6]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
; UNROLL-NO-VF-NEXT: [[G:%.*]] = getelementptr inbounds i32, ptr [[X]], i32 [[IV]]
; UNROLL-NO-VF-NEXT: [[VAR6]] = add i32 [[VAR5]], [[VAR4]]
; UNROLL-NO-VF-NEXT: [[VAR7]] = udiv i32 219220132, [[VAR3]]
@@ -3316,10 +3316,10 @@ define i32 @sink_into_replication_region_multiple(ptr %x, i32 %y) {
; SINK-AFTER-NEXT: [[VAR:%.*]] = phi i32 [ [[VAR6:%.*]], [[BB2]] ], [ [[TMP39]], [[MIDDLE_BLOCK]] ]
; SINK-AFTER-NEXT: ret i32 [[VAR]]
; SINK-AFTER: bb2:
-; SINK-AFTER-NEXT: [[VAR3:%.*]] = phi i32 [ [[VAR8:%.*]], [[BB2]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
-; SINK-AFTER-NEXT: [[IV:%.*]] = phi i32 [ [[IV_NEXT:%.*]], [[BB2]] ], [ [[BC_RESUME_VAL1]], [[SCALAR_PH]] ]
-; SINK-AFTER-NEXT: [[VAR4:%.*]] = phi i32 [ [[VAR7:%.*]], [[BB2]] ], [ [[SCALAR_RECUR_INIT]], [[SCALAR_PH]] ]
-; SINK-AFTER-NEXT: [[VAR5:%.*]] = phi i32 [ [[VAR6]], [[BB2]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
+; SINK-AFTER-NEXT: [[VAR3:%.*]] = phi i32 [ [[VAR8:%.*]], [[BB2]] ], [ [[Y]], [[SCALAR_PH]] ]
+; SINK-AFTER-NEXT: [[IV:%.*]] = phi i32 [ [[IV_NEXT:%.*]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
+; SINK-AFTER-NEXT: [[VAR4:%.*]] = phi i32 [ [[VAR7:%.*]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
+; SINK-AFTER-NEXT: [[VAR5:%.*]] = phi i32 [ [[VAR6]], [[BB2]] ], [ 0, [[SCALAR_PH]] ]
; SINK-AFTER-NEXT: [[G:%.*]] = getelementptr inbounds i32, ptr [[X]], i32 [[IV]]
; SINK-AFTER-NEXT: [[VAR6]] = add i32 [[VAR5]], [[VAR4]]
; SINK-AFTER-NEXT: [[VAR7]] = udiv i32 219220132, [[VAR3]]
diff --git a/llvm/test/Transforms/LoopVectorize/iv-select-cmp-decreasing.ll b/llvm/test/Transforms/LoopVectorize/iv-select-cmp-decreasing.ll
index a0068f0f6cab3..d6acba54da4fa 100644
--- a/llvm/test/Transforms/LoopVectorize/iv-select-cmp-decreasing.ll
+++ b/llvm/test/Transforms/LoopVectorize/iv-select-cmp-decreasing.ll
@@ -473,8 +473,8 @@ define i16 @select_decreasing_induction_icmp_table_i16(i16 noundef %val) {
; IC4VF4-NEXT: [[BC_MERGE_RDX:%.*]] = phi i16 [ 0, %[[ENTRY]] ]
; IC4VF4-NEXT: br label %[[LOOP:.*]]
; IC4VF4: [[LOOP]]:
-; IC4VF4-NEXT: [[IV:%.*]] = phi i16 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
-; IC4VF4-NEXT: [[RDX:%.*]] = phi i16 [ [[BC_MERGE_RDX]], %[[SCALAR_PH]] ], [ [[SPEC_SELECT:%.*]], %[[LOOP]] ]
+; IC4VF4-NEXT: [[IV:%.*]] = phi i16 [ 12, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; IC4VF4-NEXT: [[RDX:%.*]] = phi i16 [ 0, %[[SCALAR_PH]] ], [ [[SPEC_SELECT:%.*]], %[[LOOP]] ]
; IC4VF4-NEXT: [[GEP_TABLE_IV:%.*]] = getelementptr inbounds [13 x i16], ptr @table, i16 0, i16 [[IV]]
; IC4VF4-NEXT: [[LD_TABLE:%.*]] = load i16, ptr [[GEP_TABLE_IV]], align 1
; IC4VF4-NEXT: [[CMP_TABLE_VAL:%.*]] = icmp ugt i16 [[LD_TABLE]], [[VAL]]
@@ -844,8 +844,8 @@ define i16 @select_decreasing_induction_icmp_table_half(half noundef %val) {
; IC4VF4-NEXT: [[BC_MERGE_RDX:%.*]] = phi i16 [ 0, %[[ENTRY]] ]
; IC4VF4-NEXT: br label %[[LOOP:.*]]
; IC4VF4: [[LOOP]]:
-; IC4VF4-NEXT: [[IV:%.*]] = phi i16 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
-; IC4VF4-NEXT: [[RDX:%.*]] = phi i16 [ [[BC_MERGE_RDX]], %[[SCALAR_PH]] ], [ [[SPEC_SELECT:%.*]], %[[LOOP]] ]
+; IC4VF4-NEXT: [[IV:%.*]] = phi i16 [ 12, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; IC4VF4-NEXT: [[RDX:%.*]] = phi i16 [ 0, %[[SCALAR_PH]] ], [ [[SPEC_SELECT:%.*]], %[[LOOP]] ]
; IC4VF4-NEXT: [[GEP_TABLE_IV:%.*]] = getelementptr inbounds [13 x i16], ptr @table, i16 0, i16 [[IV]]
; IC4VF4-NEXT: [[LD_TABLE:%.*]] = load half, ptr [[GEP_TABLE_IV]], align 1
; IC4VF4-NEXT: [[CMP_TABLE_VAL:%.*]] = fcmp ugt half [[LD_TABLE]], [[VAL]]
diff --git a/llvm/test/Transforms/LoopVectorize/loop-form.ll b/llvm/test/Transforms/LoopVectorize/loop-form.ll
index 10b2e704cb891..22ebf920087b6 100644
--- a/llvm/test/Transforms/LoopVectorize/loop-form.ll
+++ b/llvm/test/Transforms/LoopVectorize/loop-form.ll
@@ -84,7 +84,7 @@ define void @bottom_tested(ptr %p, i32 %n) {
; TAILFOLD-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; TAILFOLD-NEXT: br label [[FOR_COND:%.*]]
; TAILFOLD: for.cond:
-; TAILFOLD-NEXT: [[I:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_COND]] ]
+; TAILFOLD-NEXT: [[I:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[INC:%.*]], [[FOR_COND]] ]
; TAILFOLD-NEXT: [[IPROM:%.*]] = sext i32 [[I]] to i64
; TAILFOLD-NEXT: [[B:%.*]] = getelementptr inbounds i16, ptr [[P]], i64 [[IPROM]]
; TAILFOLD-NEXT: store i16 0, ptr [[B]], align 4
diff --git a/llvm/test/Transforms/LoopVectorize/memdep-fold-tail.ll b/llvm/test/Transforms/LoopVectorize/memdep-fold-tail.ll
index c9066f22c5592..72bc1816178bf 100644
--- a/llvm/test/Transforms/LoopVectorize/memdep-fold-tail.ll
+++ b/llvm/test/Transforms/LoopVectorize/memdep-fold-tail.ll
@@ -74,7 +74,7 @@ define void @maxvf3() {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[J:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[J_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[J:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[J_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[AJ:%.*]] = getelementptr inbounds [18 x i8], ptr @a, i32 0, i32 [[J]]
; CHECK-NEXT: store i8 69, ptr [[AJ]], align 8
; CHECK-NEXT: [[JP3:%.*]] = add nuw nsw i32 3, [[J]]
diff --git a/llvm/test/Transforms/LoopVectorize/optsize.ll b/llvm/test/Transforms/LoopVectorize/optsize.ll
index f0d026b322e24..b9ee09e77090d 100644
--- a/llvm/test/Transforms/LoopVectorize/optsize.ll
+++ b/llvm/test/Transforms/LoopVectorize/optsize.ll
@@ -626,6 +626,7 @@ define i32 @pr45526_pgso() !prof !14 {
; NPGSO-NEXT: br i1 [[TMP1]], label %[[MIDDLE_BLOCK:.*]], label %[[VECTOR_BODY]], !llvm.loop [[LOOP23:![0-9]+]]
; NPGSO: [[MIDDLE_BLOCK]]:
; NPGSO-NEXT: [[VECTOR_RECUR_EXTRACT:%.*]] = extractelement <4 x i32> [[TMP0]], i32 3
+; NPGSO-NEXT: br label %[[SCALAR_PH]]
; NPGSO: [[SCALAR_PH]]:
; NPGSO-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 508, %[[MIDDLE_BLOCK]] ], [ 0, %[[ENTRY]] ]
; NPGSO-NEXT: [[SCALAR_RECUR_INIT:%.*]] = phi i32 [ [[VECTOR_RECUR_EXTRACT]], %[[MIDDLE_BLOCK]] ], [ 5, %[[ENTRY]] ]
@@ -698,7 +699,7 @@ define void @stride1(ptr noalias %B, i32 %BStride) optsize {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[FOR_BODY:.*]]
; CHECK: [[FOR_BODY]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ], [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ], [ 0, %[[SCALAR_PH]] ]
; CHECK-NEXT: [[MULB:%.*]] = mul nsw i32 [[IV]], [[BSTRIDE]]
; CHECK-NEXT: [[GEPOFB:%.*]] = getelementptr inbounds i16, ptr [[B]], i32 [[MULB]]
; CHECK-NEXT: store i16 42, ptr [[GEPOFB]], align 4
@@ -747,7 +748,7 @@ define void @stride1(ptr noalias %B, i32 %BStride) optsize {
; PGSO-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, %[[ENTRY]] ]
; PGSO-NEXT: br label %[[FOR_BODY:.*]]
; PGSO: [[FOR_BODY]]:
-; PGSO-NEXT: [[IV:%.*]] = phi i32 [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ], [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ]
+; PGSO-NEXT: [[IV:%.*]] = phi i32 [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ], [ 0, %[[SCALAR_PH]] ]
; PGSO-NEXT: [[MULB:%.*]] = mul nsw i32 [[IV]], [[BSTRIDE]]
; PGSO-NEXT: [[GEPOFB:%.*]] = getelementptr inbounds i16, ptr [[B]], i32 [[MULB]]
; PGSO-NEXT: store i16 42, ptr [[GEPOFB]], align 4
@@ -796,7 +797,7 @@ define void @stride1(ptr noalias %B, i32 %BStride) optsize {
; NPGSO-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, %[[ENTRY]] ]
; NPGSO-NEXT: br label %[[FOR_BODY:.*]]
; NPGSO: [[FOR_BODY]]:
-; NPGSO-NEXT: [[IV:%.*]] = phi i32 [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ], [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ]
+; NPGSO-NEXT: [[IV:%.*]] = phi i32 [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ], [ 0, %[[SCALAR_PH]] ]
; NPGSO-NEXT: [[MULB:%.*]] = mul nsw i32 [[IV]], [[BSTRIDE]]
; NPGSO-NEXT: [[GEPOFB:%.*]] = getelementptr inbounds i16, ptr [[B]], i32 [[MULB]]
; NPGSO-NEXT: store i16 42, ptr [[GEPOFB]], align 4
diff --git a/llvm/test/Transforms/LoopVectorize/pr45679-fold-tail-by-masking.ll b/llvm/test/Transforms/LoopVectorize/pr45679-fold-tail-by-masking.ll
index c044cc0edf0a3..bda91bae2b8a4 100644
--- a/llvm/test/Transforms/LoopVectorize/pr45679-fold-tail-by-masking.ll
+++ b/llvm/test/Transforms/LoopVectorize/pr45679-fold-tail-by-masking.ll
@@ -62,7 +62,7 @@ define void @pr45679(ptr %A) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[RIV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[RIVPLUS1:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[RIV:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[RIVPLUS1:%.*]], [[LOOP]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i32 [[RIV]]
; CHECK-NEXT: store i32 13, ptr [[ARRAYIDX]], align 1
; CHECK-NEXT: [[RIVPLUS1]] = add nuw nsw i32 [[RIV]], 1
@@ -124,7 +124,7 @@ define void @pr45679(ptr %A) {
; VF2UF2-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; VF2UF2-NEXT: br label [[LOOP:%.*]]
; VF2UF2: loop:
-; VF2UF2-NEXT: [[RIV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[RIVPLUS1:%.*]], [[LOOP]] ]
+; VF2UF2-NEXT: [[RIV:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[RIVPLUS1:%.*]], [[LOOP]] ]
; VF2UF2-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i32 [[RIV]]
; VF2UF2-NEXT: store i32 13, ptr [[ARRAYIDX]], align 1
; VF2UF2-NEXT: [[RIVPLUS1]] = add nuw nsw i32 [[RIV]], 1
@@ -181,7 +181,7 @@ define void @pr45679(ptr %A) {
; VF1UF4-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, [[ENTRY:%.*]] ]
; VF1UF4-NEXT: br label [[LOOP:%.*]]
; VF1UF4: loop:
-; VF1UF4-NEXT: [[RIV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[RIVPLUS1:%.*]], [[LOOP]] ]
+; VF1UF4-NEXT: [[RIV:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[RIVPLUS1:%.*]], [[LOOP]] ]
; VF1UF4-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i32, ptr [[A]], i32 [[RIV]]
; VF1UF4-NEXT: store i32 13, ptr [[ARRAYIDX]], align 1
; VF1UF4-NEXT: [[RIVPLUS1]] = add nuw nsw i32 [[RIV]], 1
@@ -261,7 +261,7 @@ define void @load_variant(ptr noalias %a, ptr noalias %b) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[FOR_BODY:%.*]]
; CHECK: for.body:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV]]
; CHECK-NEXT: [[V:%.*]] = load i64, ptr [[ARRAYIDX]], align 8
; CHECK-NEXT: store i64 [[V]], ptr [[B]], align 8
@@ -328,7 +328,7 @@ define void @load_variant(ptr noalias %a, ptr noalias %b) {
; VF2UF2-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; VF2UF2-NEXT: br label [[FOR_BODY:%.*]]
; VF2UF2: for.body:
-; VF2UF2-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; VF2UF2-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; VF2UF2-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV]]
; VF2UF2-NEXT: [[V:%.*]] = load i64, ptr [[ARRAYIDX]], align 8
; VF2UF2-NEXT: store i64 [[V]], ptr [[B]], align 8
@@ -390,7 +390,7 @@ define void @load_variant(ptr noalias %a, ptr noalias %b) {
; VF1UF4-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; VF1UF4-NEXT: br label [[FOR_BODY:%.*]]
; VF1UF4: for.body:
-; VF1UF4-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
+; VF1UF4-NEXT: [[IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[FOR_BODY]] ]
; VF1UF4-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 [[IV]]
; VF1UF4-NEXT: [[V:%.*]] = load i64, ptr [[ARRAYIDX]], align 8
; VF1UF4-NEXT: store i64 [[V]], ptr [[B]], align 8
diff --git a/llvm/test/Transforms/LoopVectorize/pr46525-expander-insertpoint.ll b/llvm/test/Transforms/LoopVectorize/pr46525-expander-insertpoint.ll
index d4a6aed472832..7d6667c716fa7 100644
--- a/llvm/test/Transforms/LoopVectorize/pr46525-expander-insertpoint.ll
+++ b/llvm/test/Transforms/LoopVectorize/pr46525-expander-insertpoint.ll
@@ -36,7 +36,7 @@ define void @test(i16 %x, i64 %y, ptr %ptr) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[LOOP_PREHEADER]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], [[LOOP]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], [[LOOP]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: store i32 0, ptr [[PTR]], align 4
; CHECK-NEXT: [[V2:%.*]] = trunc i64 [[IV]] to i8
; CHECK-NEXT: [[V3:%.*]] = add i8 [[V2]], 1
diff --git a/llvm/test/Transforms/LoopVectorize/pr51614-fold-tail-by-masking.ll b/llvm/test/Transforms/LoopVectorize/pr51614-fold-tail-by-masking.ll
index 77794dcb9369d..19c9ccc2f534a 100644
--- a/llvm/test/Transforms/LoopVectorize/pr51614-fold-tail-by-masking.ll
+++ b/llvm/test/Transforms/LoopVectorize/pr51614-fold-tail-by-masking.ll
@@ -67,8 +67,8 @@ define dso_local i16 @reverse_interleave_load_fold_mask() optsize {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i16 [ 0, [[ENTRY]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[IV:%.*]] = phi i16 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IVMINUS1:%.*]], [[LOOP]] ]
-; CHECK-NEXT: [[SUM:%.*]] = phi i16 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[PREVSUM:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i16 [ 41, [[SCALAR_PH]] ], [ [[IVMINUS1:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[SUM:%.*]] = phi i16 [ 0, [[SCALAR_PH]] ], [ [[PREVSUM:%.*]], [[LOOP]] ]
; CHECK-NEXT: [[IVMINUS1]] = add nsw i16 [[IV]], -1
; CHECK-NEXT: [[GEPA0:%.*]] = getelementptr inbounds [40 x [4 x i16]], ptr @A, i16 0, i16 [[IVMINUS1]], i16 0
; CHECK-NEXT: [[TMP29:%.*]] = load i16, ptr [[GEPA0]], align 1
diff --git a/llvm/test/Transforms/LoopVectorize/predicatedinst-loop-invariant.ll b/llvm/test/Transforms/LoopVectorize/predicatedinst-loop-invariant.ll
index ffe118b047064..90caee321ed71 100644
--- a/llvm/test/Transforms/LoopVectorize/predicatedinst-loop-invariant.ll
+++ b/llvm/test/Transforms/LoopVectorize/predicatedinst-loop-invariant.ll
@@ -63,7 +63,7 @@ define void @loop_invariant_store(ptr %p, i64 %a, i8 %b) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP_HEADER:.*]]
; CHECK: [[LOOP_HEADER]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[ADD:%.*]], %[[LOOP_LATCH:.*]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, %[[SCALAR_PH]] ], [ [[ADD:%.*]], %[[LOOP_LATCH:.*]] ]
; CHECK-NEXT: [[ADD]] = add i32 [[IV]], 1
; CHECK-NEXT: [[CMP_SLT:%.*]] = icmp slt i32 [[IV]], 2
; CHECK-NEXT: [[SHL:%.*]] = shl i64 [[A]], 48
@@ -181,7 +181,7 @@ define void @loop_invariant_srem(ptr %p, i64 %a, i8 %b) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i8 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP_HEADER:.*]]
; CHECK: [[LOOP_HEADER]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i8 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i8 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ]
; CHECK-NEXT: [[IV_NEXT]] = add i8 [[IV]], 1
; CHECK-NEXT: [[CMP_SLT:%.*]] = icmp slt i8 [[IV]], 2
; CHECK-NEXT: [[SHL:%.*]] = shl i64 [[A]], 48
@@ -253,7 +253,7 @@ define void @loop_invariant_float_store(ptr %p, i32 %a) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP_HEADER:.*]]
; CHECK: [[LOOP_HEADER]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ]
; CHECK-NEXT: [[IV_NEXT]] = add i32 [[IV]], 1
; CHECK-NEXT: [[CMP_SLT:%.*]] = icmp slt i32 [[IV]], 2
; CHECK-NEXT: br i1 [[CMP_SLT]], label %[[COND_FALSE:.*]], label %[[LOOP_LATCH]]
@@ -324,7 +324,7 @@ define void @test_store_to_invariant_address_needs_mask_due_to_low_trip_count(pt
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i16 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP_HEADER:.*]]
; CHECK: [[LOOP_HEADER]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i16 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i16 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ]
; CHECK-NEXT: br i1 true, label %[[LOOP_LATCH]], label %[[ELSE:.*]]
; CHECK: [[ELSE]]:
; CHECK-NEXT: br label %[[LOOP_LATCH]]
diff --git a/llvm/test/Transforms/LoopVectorize/scalable-predication.ll b/llvm/test/Transforms/LoopVectorize/scalable-predication.ll
index 8e272debb299d..a3a4c29b92781 100644
--- a/llvm/test/Transforms/LoopVectorize/scalable-predication.ll
+++ b/llvm/test/Transforms/LoopVectorize/scalable-predication.ll
@@ -34,7 +34,7 @@ define void @foo(i32 %val, ptr dereferenceable(1024) %ptr) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: br label [[WHILE_BODY:%.*]]
; CHECK: while.body:
-; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ [[INDEX_NEXT:%.*]], [[WHILE_BODY]] ], [ 0, [[SCALAR_PH]] ]
; CHECK-NEXT: [[GEP:%.*]] = getelementptr i32, ptr [[PTR:%.*]], i64 [[INDEX]]
; CHECK-NEXT: [[LD1:%.*]] = load i32, ptr [[GEP]], align 4
; CHECK-NEXT: [[INDEX_NEXT]] = add nsw i64 [[INDEX]], 1
diff --git a/llvm/test/Transforms/LoopVectorize/select-reduction.ll b/llvm/test/Transforms/LoopVectorize/select-reduction.ll
index cfc9bb25a9208..03b3ff2746ae2 100644
--- a/llvm/test/Transforms/LoopVectorize/select-reduction.ll
+++ b/llvm/test/Transforms/LoopVectorize/select-reduction.ll
@@ -42,8 +42,8 @@ define i32 @test(i64 %N, i32 %x) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ [[EXTRA_ITER]], [[LOOP_PREHEADER]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[NEXT:%.*]] = phi i32 [ [[SEL:%.*]], [[LOOP]] ], [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ]
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], [[LOOP]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[NEXT:%.*]] = phi i32 [ [[SEL:%.*]], [[LOOP]] ], [ 0, [[SCALAR_PH]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], [[LOOP]] ], [ [[EXTRA_ITER]], [[SCALAR_PH]] ]
; CHECK-NEXT: [[SEL_COND:%.*]] = icmp sgt i32 [[NEXT]], 10
; CHECK-NEXT: [[SEL]] = select i1 [[SEL_COND]], i32 [[NEXT]], i32 10
; CHECK-NEXT: [[IV_NEXT]] = add nsw i64 [[IV]], -1
@@ -98,8 +98,8 @@ define i32 @pr66895_tail_fold_reduction_exit_inst_gets_simplified(i32 %n) {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[ENTRY]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
-; CHECK-NEXT: [[RED:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[RED_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 12, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[RED:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[RED_NEXT:%.*]], [[LOOP]] ]
; CHECK-NEXT: [[IV_NEXT]] = add i32 [[IV]], -1
; CHECK-NEXT: [[RED_NEXT]] = mul i32 [[RED]], 1
; CHECK-NEXT: [[EC:%.*]] = icmp eq i32 [[IV]], 0
diff --git a/llvm/test/Transforms/LoopVectorize/store-reduction-results-in-tail-folded-loop.ll b/llvm/test/Transforms/LoopVectorize/store-reduction-results-in-tail-folded-loop.ll
index bf86cbd601f44..60522247ed840 100644
--- a/llvm/test/Transforms/LoopVectorize/store-reduction-results-in-tail-folded-loop.ll
+++ b/llvm/test/Transforms/LoopVectorize/store-reduction-results-in-tail-folded-loop.ll
@@ -47,8 +47,8 @@ define void @pr75298_store_reduction_value_in_folded_loop(i64 %iv.start) optsize
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ 0, [[PH]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
-; CHECK-NEXT: [[RED:%.*]] = phi i32 [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[RED_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[IV_START]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[RED:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[RED_NEXT:%.*]], [[LOOP]] ]
; CHECK-NEXT: [[L:%.*]] = load i32, ptr @c, align 4
; CHECK-NEXT: [[RED_NEXT]] = xor i32 [[RED]], [[L]]
; CHECK-NEXT: store i32 [[RED_NEXT]], ptr @a, align 4
diff --git a/llvm/test/Transforms/LoopVectorize/strict-fadd-interleave-only.ll b/llvm/test/Transforms/LoopVectorize/strict-fadd-interleave-only.ll
index eefa3da97a4bc..e7b243e01f01c 100644
--- a/llvm/test/Transforms/LoopVectorize/strict-fadd-interleave-only.ll
+++ b/llvm/test/Transforms/LoopVectorize/strict-fadd-interleave-only.ll
@@ -29,8 +29,8 @@ define float @pr70988() {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[ENTRY]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[INDEX:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INDEX_NEXT:%.*]], [[LOOP]] ]
-; CHECK-NEXT: [[RDX:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[RDX_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[INDEX:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[INDEX_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[RDX:%.*]] = phi float [ 0.000000e+00, [[SCALAR_PH]] ], [ [[RDX_NEXT:%.*]], [[LOOP]] ]
; CHECK-NEXT: [[RDX_NEXT]] = fadd contract float [[RDX]], 1.000000e+00
; CHECK-NEXT: [[INDEX_NEXT]] = add nuw nsw i32 [[INDEX]], 1
; CHECK-NEXT: [[COND:%.*]] = icmp ult i32 [[INDEX_NEXT]], 1021
@@ -64,8 +64,8 @@ define float @pr70988() {
; CHECK-ALM-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[ENTRY]] ]
; CHECK-ALM-NEXT: br label [[LOOP:%.*]]
; CHECK-ALM: loop:
-; CHECK-ALM-NEXT: [[INDEX:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INDEX_NEXT:%.*]], [[LOOP]] ]
-; CHECK-ALM-NEXT: [[RDX:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[RDX_NEXT:%.*]], [[LOOP]] ]
+; CHECK-ALM-NEXT: [[INDEX:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[INDEX_NEXT:%.*]], [[LOOP]] ]
+; CHECK-ALM-NEXT: [[RDX:%.*]] = phi float [ 0.000000e+00, [[SCALAR_PH]] ], [ [[RDX_NEXT:%.*]], [[LOOP]] ]
; CHECK-ALM-NEXT: [[RDX_NEXT]] = fadd contract float [[RDX]], 1.000000e+00
; CHECK-ALM-NEXT: [[INDEX_NEXT]] = add nuw nsw i32 [[INDEX]], 1
; CHECK-ALM-NEXT: [[COND:%.*]] = icmp ult i32 [[INDEX_NEXT]], 1021
@@ -133,8 +133,8 @@ define float @pr72720reduction_using_active_lane_mask(ptr %src) {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[ENTRY]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[NARROW:%.*]], [[LOOP]] ]
-; CHECK-NEXT: [[RDX:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[RDX_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[NARROW:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[RDX:%.*]] = phi float [ 0.000000e+00, [[SCALAR_PH]] ], [ [[RDX_NEXT:%.*]], [[LOOP]] ]
; CHECK-NEXT: [[NARROW]] = add nuw nsw i32 [[IV]], 1
; CHECK-NEXT: [[GEP:%.*]] = getelementptr float, ptr [[SRC]], i32 [[IV]]
; CHECK-NEXT: [[L:%.*]] = load float, ptr [[GEP]], align 4
@@ -185,8 +185,8 @@ define float @pr72720reduction_using_active_lane_mask(ptr %src) {
; CHECK-ALM-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[ENTRY]] ]
; CHECK-ALM-NEXT: br label [[LOOP:%.*]]
; CHECK-ALM: loop:
-; CHECK-ALM-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[NARROW:%.*]], [[LOOP]] ]
-; CHECK-ALM-NEXT: [[RDX:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[RDX_NEXT:%.*]], [[LOOP]] ]
+; CHECK-ALM-NEXT: [[IV:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[NARROW:%.*]], [[LOOP]] ]
+; CHECK-ALM-NEXT: [[RDX:%.*]] = phi float [ 0.000000e+00, [[SCALAR_PH]] ], [ [[RDX_NEXT:%.*]], [[LOOP]] ]
; CHECK-ALM-NEXT: [[NARROW]] = add nuw nsw i32 [[IV]], 1
; CHECK-ALM-NEXT: [[GEP:%.*]] = getelementptr float, ptr [[SRC]], i32 [[IV]]
; CHECK-ALM-NEXT: [[L:%.*]] = load float, ptr [[GEP]], align 4
@@ -243,8 +243,8 @@ define float @fadd_reduction_with_live_in(float %inc) {
; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[ENTRY]] ]
; CHECK-NEXT: br label [[LOOP:%.*]]
; CHECK: loop:
-; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
-; CHECK-NEXT: [[SUM:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[SUM_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
+; CHECK-NEXT: [[SUM:%.*]] = phi float [ 0.000000e+00, [[SCALAR_PH]] ], [ [[SUM_NEXT:%.*]], [[LOOP]] ]
; CHECK-NEXT: [[SUM_NEXT]] = fadd float [[SUM]], [[INC]]
; CHECK-NEXT: [[IV_NEXT]] = add i32 [[IV]], 1
; CHECK-NEXT: [[EC:%.*]] = icmp eq i32 [[IV]], 1000
@@ -279,8 +279,8 @@ define float @fadd_reduction_with_live_in(float %inc) {
; CHECK-ALM-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ 0.000000e+00, [[ENTRY]] ]
; CHECK-ALM-NEXT: br label [[LOOP:%.*]]
; CHECK-ALM: loop:
-; CHECK-ALM-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
-; CHECK-ALM-NEXT: [[SUM:%.*]] = phi float [ [[BC_MERGE_RDX]], [[SCALAR_PH]] ], [ [[SUM_NEXT:%.*]], [[LOOP]] ]
+; CHECK-ALM-NEXT: [[IV:%.*]] = phi i32 [ 0, [[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], [[LOOP]] ]
+; CHECK-ALM-NEXT: [[SUM:%.*]] = phi float [ 0.000000e+00, [[SCALAR_PH]] ], [ [[SUM_NEXT:%.*]], [[LOOP]] ]
; CHECK-ALM-NEXT: [[SUM_NEXT]] = fadd float [[SUM]], [[INC]]
; CHECK-ALM-NEXT: [[IV_NEXT]] = add i32 [[IV]], 1
; CHECK-ALM-NEXT: [[EC:%.*]] = icmp eq i32 [[IV]], 1000
diff --git a/llvm/test/Transforms/LoopVectorize/tail-folding-alloca-in-loop.ll b/llvm/test/Transforms/LoopVectorize/tail-folding-alloca-in-loop.ll
index 3cf8b3f4bf2b7..9f33db87129f8 100644
--- a/llvm/test/Transforms/LoopVectorize/tail-folding-alloca-in-loop.ll
+++ b/llvm/test/Transforms/LoopVectorize/tail-folding-alloca-in-loop.ll
@@ -58,7 +58,7 @@ define i32 @test(ptr %vf1, i64 %n) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[FOR_BODY:.*]]
; CHECK: [[FOR_BODY]]:
-; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
; CHECK-NEXT: [[TMP18:%.*]] = alloca i8, i64 [[N]], align 16
; CHECK-NEXT: [[ARRAYIDX:%.*]] = getelementptr inbounds ptr, ptr [[VF1]], i64 [[INDVARS_IV]]
; CHECK-NEXT: store ptr [[TMP18]], ptr [[ARRAYIDX]], align 8
diff --git a/llvm/test/Transforms/LoopVectorize/tail-folding-optimize-vector-induction-width.ll b/llvm/test/Transforms/LoopVectorize/tail-folding-optimize-vector-induction-width.ll
index efc2b8d819431..ac15787515986 100644
--- a/llvm/test/Transforms/LoopVectorize/tail-folding-optimize-vector-induction-width.ll
+++ b/llvm/test/Transforms/LoopVectorize/tail-folding-optimize-vector-induction-width.ll
@@ -38,7 +38,7 @@ define void @canonical_small_tc_i8(ptr nocapture noundef writeonly %p) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[P_IV:%.*]] = getelementptr inbounds i16, ptr [[P]], i64 [[IV]]
; CHECK-NEXT: store i16 1, ptr [[P_IV]], align 2
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
@@ -99,7 +99,7 @@ define void @canonical_upper_limit_i8(ptr nocapture noundef writeonly %p) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[P_IV:%.*]] = getelementptr inbounds i16, ptr [[P]], i64 [[IV]]
; CHECK-NEXT: store i16 1, ptr [[P_IV]], align 2
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
@@ -160,7 +160,7 @@ define void @canonical_lower_limit_i16(ptr nocapture noundef writeonly %p) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[P_IV:%.*]] = getelementptr inbounds i16, ptr [[P]], i64 [[IV]]
; CHECK-NEXT: store i16 1, ptr [[P_IV]], align 2
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
@@ -221,7 +221,7 @@ define void @canonical_upper_limit_i16(ptr nocapture noundef writeonly %p) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[P_IV:%.*]] = getelementptr inbounds i16, ptr [[P]], i64 [[IV]]
; CHECK-NEXT: store i16 1, ptr [[P_IV]], align 2
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
@@ -282,7 +282,7 @@ define void @canonical_lower_limit_i32(ptr nocapture noundef writeonly %p) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[P_IV:%.*]] = getelementptr inbounds i16, ptr [[P]], i64 [[IV]]
; CHECK-NEXT: store i16 1, ptr [[P_IV]], align 2
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
@@ -343,7 +343,7 @@ define void @canonical_upper_limit_i32(ptr nocapture noundef writeonly %p) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[P_IV:%.*]] = getelementptr inbounds i16, ptr [[P]], i64 [[IV]]
; CHECK-NEXT: store i16 1, ptr [[P_IV]], align 2
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
@@ -404,7 +404,7 @@ define void @canonical_lower_limit_i64(ptr nocapture noundef writeonly %p) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[P_IV:%.*]] = getelementptr inbounds i16, ptr [[P]], i64 [[IV]]
; CHECK-NEXT: store i16 1, ptr [[P_IV]], align 2
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
@@ -465,7 +465,7 @@ define void @canonical_upper_limit_i64(ptr nocapture noundef writeonly %p) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[P_IV:%.*]] = getelementptr inbounds i16, ptr [[P]], i64 [[IV]]
; CHECK-NEXT: store i16 1, ptr [[P_IV]], align 2
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
@@ -526,7 +526,7 @@ define void @canonical_lower_limit_i128(ptr nocapture noundef writeonly %p) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i256 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i256 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i256 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; CHECK-NEXT: [[P_IV:%.*]] = getelementptr inbounds i16, ptr [[P]], i256 [[IV]]
; CHECK-NEXT: store i16 1, ptr [[P_IV]], align 2
; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i256 [[IV]], 1
diff --git a/llvm/test/Transforms/LoopVectorize/tail-folding-switch.ll b/llvm/test/Transforms/LoopVectorize/tail-folding-switch.ll
index 222c1eeb6e443..6f4bb1dcebfc6 100644
--- a/llvm/test/Transforms/LoopVectorize/tail-folding-switch.ll
+++ b/llvm/test/Transforms/LoopVectorize/tail-folding-switch.ll
@@ -59,7 +59,7 @@ define void @tail_fold_switch(ptr %dst, i32 %0) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP_HEADER:.*]]
; CHECK: [[LOOP_HEADER]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ]
; CHECK-NEXT: switch i32 [[TMP0]], label %[[LOOP_LATCH]] [
; CHECK-NEXT: i32 0, label %[[LOOP_LATCH]]
; CHECK-NEXT: i32 1, label %[[IF_THEN:.*]]
diff --git a/llvm/test/Transforms/LoopVectorize/tail-folding-vectorization-factor-1.ll b/llvm/test/Transforms/LoopVectorize/tail-folding-vectorization-factor-1.ll
index 13d5be1b94d15..d39a146b9fd8a 100644
--- a/llvm/test/Transforms/LoopVectorize/tail-folding-vectorization-factor-1.ll
+++ b/llvm/test/Transforms/LoopVectorize/tail-folding-vectorization-factor-1.ll
@@ -60,7 +60,7 @@ define void @VF1-VPlanExe(ptr %dst) {
; CHECK: for.cond.cleanup:
; CHECK-NEXT: ret void
; CHECK: for.body:
-; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ]
+; CHECK-NEXT: [[INDVARS_IV:%.*]] = phi i64 [ 0, [[SCALAR_PH]] ], [ [[INDVARS_IV_NEXT:%.*]], [[FOR_BODY]] ]
; CHECK-NEXT: [[DST_PTR:%.*]] = getelementptr inbounds i32, ptr [[DST]], i64 [[INDVARS_IV]]
; CHECK-NEXT: store i32 0, ptr [[DST_PTR]], align 4
; CHECK-NEXT: [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
@@ -140,7 +140,7 @@ define void @VF1-VPWidenCanonicalIVRecipeExe(ptr %ptr1) {
; CHECK: for.cond.cleanup:
; CHECK-NEXT: ret void
; CHECK: for.body:
-; CHECK-NEXT: [[ADDR:%.*]] = phi ptr [ [[PTR:%.*]], [[FOR_BODY]] ], [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ]
+; CHECK-NEXT: [[ADDR:%.*]] = phi ptr [ [[PTR:%.*]], [[FOR_BODY]] ], [ [[PTR1]], [[SCALAR_PH]] ]
; CHECK-NEXT: store double 0.000000e+00, ptr [[ADDR]], align 8
; CHECK-NEXT: [[PTR]] = getelementptr inbounds double, ptr [[ADDR]], i64 1
; CHECK-NEXT: [[COND:%.*]] = icmp eq ptr [[PTR]], [[PTR2]]
diff --git a/llvm/test/Transforms/LoopVectorize/uniform-blend.ll b/llvm/test/Transforms/LoopVectorize/uniform-blend.ll
index 85cf925669feb..a35e763acdb0d 100644
--- a/llvm/test/Transforms/LoopVectorize/uniform-blend.ll
+++ b/llvm/test/Transforms/LoopVectorize/uniform-blend.ll
@@ -302,7 +302,7 @@ define void @redundant_branch_and_blends_without_mask(ptr %A) {
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP_HEADER:.*]]
; CHECK: [[LOOP_HEADER]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ]
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ]
; CHECK-NEXT: [[GEP_IV:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
; CHECK-NEXT: [[L:%.*]] = load i32, ptr [[GEP_IV]], align 4
; CHECK-NEXT: [[ADD:%.*]] = add i32 [[L]], 10
diff --git a/llvm/test/Transforms/LoopVectorize/vector-loop-backedge-elimination.ll b/llvm/test/Transforms/LoopVectorize/vector-loop-backedge-elimination.ll
index 59c76aefbb90f..983f32728d1a9 100644
--- a/llvm/test/Transforms/LoopVectorize/vector-loop-backedge-elimination.ll
+++ b/llvm/test/Transforms/LoopVectorize/vector-loop-backedge-elimination.ll
@@ -224,7 +224,7 @@ define void @remove_loop_region_with_replicate_recipe(ptr %dst, i64 range(i64 5,
; VF8UF1-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 2, %[[ENTRY]] ]
; VF8UF1-NEXT: br label %[[LOOP:.*]]
; VF8UF1: [[LOOP]]:
-; VF8UF1-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; VF8UF1-NEXT: [[IV:%.*]] = phi i64 [ 2, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; VF8UF1-NEXT: [[GEP_DST:%.*]] = getelementptr i16, ptr [[DST]], i64 [[IV]]
; VF8UF1-NEXT: store i16 0, ptr [[GEP_DST]], align 2
; VF8UF1-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1
@@ -368,7 +368,7 @@ define void @remove_loop_region_with_replicate_recipe(ptr %dst, i64 range(i64 5,
; VF8UF2-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 2, %[[ENTRY]] ]
; VF8UF2-NEXT: br label %[[LOOP:.*]]
; VF8UF2: [[LOOP]]:
-; VF8UF2-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; VF8UF2-NEXT: [[IV:%.*]] = phi i64 [ 2, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; VF8UF2-NEXT: [[GEP_DST:%.*]] = getelementptr i16, ptr [[DST]], i64 [[IV]]
; VF8UF2-NEXT: store i16 0, ptr [[GEP_DST]], align 2
; VF8UF2-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1
@@ -511,7 +511,7 @@ define void @remove_loop_region_with_replicate_recipe(ptr %dst, i64 range(i64 5,
; VF16UF1-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 2, %[[ENTRY]] ]
; VF16UF1-NEXT: br label %[[LOOP:.*]]
; VF16UF1: [[LOOP]]:
-; VF16UF1-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; VF16UF1-NEXT: [[IV:%.*]] = phi i64 [ 2, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; VF16UF1-NEXT: [[GEP_DST:%.*]] = getelementptr i16, ptr [[DST]], i64 [[IV]]
; VF16UF1-NEXT: store i16 0, ptr [[GEP_DST]], align 2
; VF16UF1-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1
@@ -797,7 +797,7 @@ define void @scev_expand_step(i64 %x, ptr %dst) {
; VF8UF1-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; VF8UF1-NEXT: br label %[[LOOP:.*]]
; VF8UF1: [[LOOP]]:
-; VF8UF1-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; VF8UF1-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; VF8UF1-NEXT: [[IV_NEXT]] = add i64 [[IV]], [[STEP]]
; VF8UF1-NEXT: [[GEP_DST:%.*]] = getelementptr i8, ptr [[DST]], i64 [[IV_NEXT]]
; VF8UF1-NEXT: store i8 0, ptr [[GEP_DST]], align 1
@@ -994,7 +994,7 @@ define void @scev_expand_step(i64 %x, ptr %dst) {
; VF8UF2-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; VF8UF2-NEXT: br label %[[LOOP:.*]]
; VF8UF2: [[LOOP]]:
-; VF8UF2-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; VF8UF2-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; VF8UF2-NEXT: [[IV_NEXT]] = add i64 [[IV]], [[STEP]]
; VF8UF2-NEXT: [[GEP_DST:%.*]] = getelementptr i8, ptr [[DST]], i64 [[IV_NEXT]]
; VF8UF2-NEXT: store i8 0, ptr [[GEP_DST]], align 1
@@ -1190,7 +1190,7 @@ define void @scev_expand_step(i64 %x, ptr %dst) {
; VF16UF1-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ 0, %[[ENTRY]] ]
; VF16UF1-NEXT: br label %[[LOOP:.*]]
; VF16UF1: [[LOOP]]:
-; VF16UF1-NEXT: [[IV:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; VF16UF1-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
; VF16UF1-NEXT: [[IV_NEXT]] = add i64 [[IV]], [[STEP]]
; VF16UF1-NEXT: [[GEP_DST:%.*]] = getelementptr i8, ptr [[DST]], i64 [[IV_NEXT]]
; VF16UF1-NEXT: store i8 0, ptr [[GEP_DST]], align 1
>From 092388171f3f6c40b9244cc2ffbc89ce266680fd Mon Sep 17 00:00:00 2001
From: Ramkumar Ramachandra <ramkumar.ramachandra at codasip.com>
Date: Wed, 6 Aug 2025 20:35:35 +0100
Subject: [PATCH 142/171] [VPlan] Introduce m_[Specific]ICmp matcher (#151540)
---
.../Transforms/Vectorize/VPlanPatternMatch.h | 60 +++++++++++++++++++
.../Transforms/Vectorize/VPlanTransforms.cpp | 24 ++++----
2 files changed, 70 insertions(+), 14 deletions(-)
diff --git a/llvm/lib/Transforms/Vectorize/VPlanPatternMatch.h b/llvm/lib/Transforms/Vectorize/VPlanPatternMatch.h
index d133610ef4f75..8818843a30625 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanPatternMatch.h
+++ b/llvm/lib/Transforms/Vectorize/VPlanPatternMatch.h
@@ -461,6 +461,66 @@ m_c_BinaryOr(const Op0_t &Op0, const Op1_t &Op1) {
return m_BinaryOr<Op0_t, Op1_t, /*Commutative*/ true>(Op0, Op1);
}
+/// ICmp_match is a variant of BinaryRecipe_match that also binds the comparison
+/// predicate.
+template <typename Op0_t, typename Op1_t> struct ICmp_match {
+ CmpPredicate *Predicate = nullptr;
+ Op0_t Op0;
+ Op1_t Op1;
+
+ ICmp_match(CmpPredicate &Pred, const Op0_t &Op0, const Op1_t &Op1)
+ : Predicate(&Pred), Op0(Op0), Op1(Op1) {}
+ ICmp_match(const Op0_t &Op0, const Op1_t &Op1) : Op0(Op0), Op1(Op1) {}
+
+ bool match(const VPValue *V) const {
+ auto *DefR = V->getDefiningRecipe();
+ return DefR && match(DefR);
+ }
+
+ bool match(const VPRecipeBase *V) const {
+ if (m_Binary<Instruction::ICmp>(Op0, Op1).match(V)) {
+ if (Predicate)
+ *Predicate = cast<VPRecipeWithIRFlags>(V)->getPredicate();
+ return true;
+ }
+ return false;
+ }
+};
+
+/// SpecificICmp_match is a variant of ICmp_match that matches the comparison
+/// predicate, instead of binding it.
+template <typename Op0_t, typename Op1_t> struct SpecificICmp_match {
+ const CmpPredicate Predicate;
+ Op0_t Op0;
+ Op1_t Op1;
+
+ SpecificICmp_match(CmpPredicate Pred, const Op0_t &LHS, const Op1_t &RHS)
+ : Predicate(Pred), Op0(LHS), Op1(RHS) {}
+
+ bool match(const VPValue *V) const {
+ CmpPredicate CurrentPred;
+ return ICmp_match<Op0_t, Op1_t>(CurrentPred, Op0, Op1).match(V) &&
+ CmpPredicate::getMatching(CurrentPred, Predicate);
+ }
+};
+
+template <typename Op0_t, typename Op1_t>
+inline ICmp_match<Op0_t, Op1_t> m_ICmp(const Op0_t &Op0, const Op1_t &Op1) {
+ return ICmp_match<Op0_t, Op1_t>(Op0, Op1);
+}
+
+template <typename Op0_t, typename Op1_t>
+inline ICmp_match<Op0_t, Op1_t> m_ICmp(CmpPredicate &Pred, const Op0_t &Op0,
+ const Op1_t &Op1) {
+ return ICmp_match<Op0_t, Op1_t>(Pred, Op0, Op1);
+}
+
+template <typename Op0_t, typename Op1_t>
+inline SpecificICmp_match<Op0_t, Op1_t>
+m_SpecificICmp(CmpPredicate MatchPred, const Op0_t &Op0, const Op1_t &Op1) {
+ return SpecificICmp_match<Op0_t, Op1_t>(MatchPred, Op0, Op1);
+}
+
template <typename Op0_t, typename Op1_t>
using GEPLikeRecipe_match =
BinaryRecipe_match<Op0_t, Op1_t, Instruction::GetElementPtr, false,
diff --git a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
index d99d6940c1c91..0cb704c85ba40 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
@@ -1391,11 +1391,10 @@ static bool optimizeVectorInductionWidthForTCAndVFUF(VPlan &Plan,
// Currently only handle cases where the single user is a header-mask
// comparison with the backedge-taken-count.
- if (!match(
- *WideIV->user_begin(),
- m_Binary<Instruction::ICmp>(
- m_Specific(WideIV),
- m_Broadcast(m_Specific(Plan.getOrCreateBackedgeTakenCount())))))
+ if (!match(*WideIV->user_begin(),
+ m_ICmp(m_Specific(WideIV),
+ m_Broadcast(
+ m_Specific(Plan.getOrCreateBackedgeTakenCount())))))
continue;
// Update IV operands and comparison bound to use new narrower type.
@@ -1428,11 +1427,9 @@ static bool isConditionTrueViaVFAndUF(VPValue *Cond, VPlan &Plan,
});
auto *CanIV = Plan.getCanonicalIV();
- if (!match(Cond, m_Binary<Instruction::ICmp>(
- m_Specific(CanIV->getBackedgeValue()),
- m_Specific(&Plan.getVectorTripCount()))) ||
- cast<VPRecipeWithIRFlags>(Cond->getDefiningRecipe())->getPredicate() !=
- CmpInst::ICMP_EQ)
+ if (!match(Cond, m_SpecificICmp(CmpInst::ICMP_EQ,
+ m_Specific(CanIV->getBackedgeValue()),
+ m_Specific(&Plan.getVectorTripCount()))))
return false;
// The compare checks CanIV + VFxUF == vector trip count. The vector trip
@@ -1841,7 +1838,7 @@ void VPlanTransforms::truncateToMinimalBitwidths(
VPW->dropPoisonGeneratingFlags();
if (OldResSizeInBits != NewResSizeInBits &&
- !match(&R, m_Binary<Instruction::ICmp>(m_VPValue(), m_VPValue()))) {
+ !match(&R, m_ICmp(m_VPValue(), m_VPValue()))) {
// Extend result to original width.
auto *Ext =
new VPWidenCastRecipe(Instruction::ZExt, ResultVPV, OldResTy);
@@ -1850,9 +1847,8 @@ void VPlanTransforms::truncateToMinimalBitwidths(
Ext->setOperand(0, ResultVPV);
assert(OldResSizeInBits > NewResSizeInBits && "Nothing to shrink?");
} else {
- assert(
- match(&R, m_Binary<Instruction::ICmp>(m_VPValue(), m_VPValue())) &&
- "Only ICmps should not need extending the result.");
+ assert(match(&R, m_ICmp(m_VPValue(), m_VPValue())) &&
+ "Only ICmps should not need extending the result.");
}
assert(!isa<VPWidenStoreRecipe>(&R) && "stores cannot be narrowed");
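Before the next patch, a minimal usage sketch of the matchers introduced above, assuming it sits inside llvm/lib/Transforms/Vectorize (the helper function and its arguments are hypothetical; only the m_ICmp/m_SpecificICmp/m_VPValue/m_Specific calls come from this patch):

  #include "VPlanPatternMatch.h"
  using namespace llvm;
  using namespace llvm::VPlanPatternMatch;

  // Illustrative helper showing the three new entry points.
  static void sketchMatchers(VPValue *V, VPValue *X) {
    VPValue *A, *B;
    // Match any ICmp recipe, ignoring the predicate; binds both operands.
    if (match(V, m_ICmp(m_VPValue(A), m_VPValue(B)))) {
      // A and B now point at the compare's operands.
    }
    // Match any ICmp recipe and additionally bind its predicate.
    CmpPredicate Pred;
    if (match(V, m_ICmp(Pred, m_VPValue(A), m_VPValue(B)))) {
      // Pred holds the recipe's comparison predicate.
    }
    // Match only a compare with a specific predicate and first operand.
    if (match(V, m_SpecificICmp(CmpInst::ICMP_EQ, m_Specific(X), m_VPValue(B)))) {
      // Succeeds only for predicates compatible with ICMP_EQ.
    }
  }

Note that SpecificICmp_match compares predicates via CmpPredicate::getMatching rather than plain equality, so samesign-compatible predicates also match.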
>From 3692c73ce415618351640497232cc07a8780d4c2 Mon Sep 17 00:00:00 2001
From: Andrzej Warzyński <andrzej.warzynski at arm.com>
Date: Wed, 6 Aug 2025 20:37:50 +0100
Subject: [PATCH 143/171] [mlir][linalg] Enable scalable vectorization of
linalg.unpack (#149293)
This patch updates `vectorizeAsTensorUnpackOp` to support scalable
vectorization by requiring user-specified vector sizes for the _read_ operation
(rather than the _write_ operation) in `linalg.unpack`.
Conceptually, `linalg.unpack` consists of these high-level steps:
* **Read** from the source tensor using `vector.transfer_read`.
* **Transpose** the read value according to the permutation in the
`linalg.unpack` op (via `vector.transpose`).
* **Re-associate** dimensions of the transposed value, as specified by the op
(via `vector.shape_cast`).
* **Write** the result into the destination tensor via
`vector.transfer_write`.
Previously, the vector sizes provided by the user were interpreted as
write-vector sizes. These were used to:
* Infer read-vector sizes using the `inner_tiles` attribute of the unpack op.
* Deduce vector sizes for the transpose and shape cast operations.
* Ultimately determine the vector shape for the write.
However, this logic breaks when one or more tile sizes are dynamic. In such
cases, `vectorizeUnPackOpPrecondition` fails, and vectorization is rejected.
This patch switches the contract: users now directly specify the
"read-vector-sizes", which inherently encode all inner tile sizes - including
dynamic ones. It becomes the user's responsibility to provide valid sizes.
In practice, since `linalg.unpack` is typically constructed, tiled, and
vectorized by the same transformation pipeline, the necessary
"read-vector-sizes" should be recoverable.
---
.../mlir/Dialect/Vector/Utils/VectorUtils.h | 2 +-
.../Linalg/Transforms/Vectorization.cpp | 188 +++++++++---------
mlir/lib/Dialect/Vector/Utils/VectorUtils.cpp | 25 +--
.../Linalg/vectorization/linalg-ops.mlir | 117 ++++++++---
4 files changed, 201 insertions(+), 131 deletions(-)
diff --git a/mlir/include/mlir/Dialect/Vector/Utils/VectorUtils.h b/mlir/include/mlir/Dialect/Vector/Utils/VectorUtils.h
index 7cd70e42d363c..8bd54cf31b893 100644
--- a/mlir/include/mlir/Dialect/Vector/Utils/VectorUtils.h
+++ b/mlir/include/mlir/Dialect/Vector/Utils/VectorUtils.h
@@ -228,7 +228,7 @@ bool isLinearizableVector(VectorType type);
Value createReadOrMaskedRead(OpBuilder &builder, Location loc, Value source,
ArrayRef<int64_t> inputVectorSizes, Value padValue,
bool useInBoundsInsteadOfMasking = false,
- ArrayRef<bool> scalableDims = {});
+ ArrayRef<bool> inputScalableVecDims = {});
/// Returns success if `inputVectorSizes` is a valid masking configuration for
/// given `shape`, i.e., it meets:
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
index 0860ceafa0270..cf65e673a5c44 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
@@ -1805,7 +1805,8 @@ vectorizeAsTensorPackOp(RewriterBase &rewriter, linalg::PackOp packOp,
inputShape[innerDimsPos[idx]] *= size;
auto maskedRead = vector::createReadOrMaskedRead(
rewriter, loc, packOp.getSource(), inputShape, padValue,
- useInBoundsInsteadOfMasking);
+ useInBoundsInsteadOfMasking,
+ /*inputScalableVecSizes=*/{});
// Create ShapeCastOp.
SmallVector<int64_t> destShape(inputVectorSizes);
@@ -1878,19 +1879,46 @@ static VectorType getCollapsedVecType(VectorType type,
return VectorType::get(newShape, type.getElementType(), newScalableFlags);
}
-/// Vectorize a `linalg::UnPackOp` to these 4 Ops:
-/// Vector::TransferReadOp - Reads a vector from the source tensor
-/// vector::TransposeOp - Transpose the Source tensor
-/// ShapeCastOp - Reshape the data based on the target.
-/// vector::TransferWriteOp. - Write the result vector back to the destination
-/// tensor.
-/// If the vector sizes are not provided:
-/// * the vector sizes are determined by the input operand and attributes,
-/// * update the inBounds attribute instead of masking.
+/// Vectorize `linalg.unpack` as:
+/// * xfer_read -> vector.transpose -> vector.shape_cast -> xfer_write
+///
+/// The input-vector-sizes specify the read vector sizes (i.e. the vector sizes
+/// for the xfer_read operation). This is sufficient to infer the other vector
+/// sizes required here.
+///
+/// If the vector sizes are not provided:
+/// * the vector sizes are determined from the input tensor static shape.
+/// * the inBounds attribute is used instead of masking.
+///
+/// EXAMPLE (no vector sizes):
+/// ```
+/// %unpack = linalg.unpack %src
+/// inner_dims_pos = [0, 1]
+/// inner_tiles = [8, 8]
+/// into %dest : tensor<1x1x8x8xf32> -> tensor<8x8xf32>
+/// ```
+/// is vectorized as:
+/// ```
+/// %read = vector.transfer_read %src
+/// : tensor<1x1x8x8xf32>, vector<1x1x8x8xf32>
+/// %tr = vector.transpose %read, [0, 2, 1, 3]
+/// : vector<1x1x8x8xf32> to vector<1x8x1x8xf32>
+/// %sc = vector.shape_cast %tr
+/// : vector<1x8x1x8xf32> to vector<8x8xf32>
+/// %vector = vector.transfer_write %sc into %dest
+/// : vector<8x8xf32>, tensor<8x8xf32>
+/// ```
static LogicalResult
vectorizeAsTensorUnpackOp(RewriterBase &rewriter, linalg::UnPackOp unpackOp,
ArrayRef<int64_t> inputVectorSizes,
+ ArrayRef<bool> inputScalableVecDims,
SmallVectorImpl<Value> &newResults) {
+ if (!inputVectorSizes.empty()) {
+ assert(inputVectorSizes.size() == unpackOp.getSourceRank() &&
+ "Invalid number of input vector sizes!");
+ assert(inputVectorSizes.size() == inputScalableVecDims.size() &&
+ "Incompatible number of vector sizes and vector scalable flags!");
+ }
// TODO: Introduce a parent class that will handle the insertion point update.
OpBuilder::InsertionGuard g(rewriter);
@@ -1898,88 +1926,40 @@ vectorizeAsTensorUnpackOp(RewriterBase &rewriter, linalg::UnPackOp unpackOp,
RankedTensorType unpackTensorType = unpackOp.getSourceType();
- ArrayRef<int64_t> innerDimPos = unpackOp.getInnerDimsPos();
- ArrayRef<int64_t> innerTiles = unpackOp.getStaticInnerTiles();
ArrayRef<int64_t> sourceShape = unpackTensorType.getShape();
bool useInBoundsInsteadOfMasking = false;
- ArrayRef<int64_t> outerDimsPerm = unpackOp.getOuterDimsPerm();
-
- auto destSize = unpackOp.getDestRank();
-
- if (!inputVectorSizes.empty())
- assert(inputVectorSizes.size() == destSize &&
- "Incorrect number of input vector sizes");
-
- // vectorSizes is the shape of the vector that will be used to do final
- // write on the destination tensor. It is set like this: Let's say the
- // source tensor is rank 'M' and the dest tensor rank 'N', where N <= M.
- // Thus:
- // 1. vectorSizes = sourceShape.take_front(N)
- // 2. if outer_dims_perms is present: do that permutation on vectorSizes.
- // 3. multiply all the locations in vectorSize pointed by innerDimPos by the
- // innerTiles attribute value.
- SmallVector<int64_t> vectorSizes(inputVectorSizes);
- if (vectorSizes.empty()) {
- llvm::append_range(vectorSizes, sourceShape.take_front(destSize));
- if (!outerDimsPerm.empty())
- applyPermutationToVector(vectorSizes, outerDimsPerm);
- for (auto [i, pos] : llvm::enumerate(innerDimPos))
- vectorSizes[pos] *= innerTiles[i];
- useInBoundsInsteadOfMasking = true;
- }
+ Location loc = unpackOp->getLoc();
- // readVectorSizes is the size of tensor used to read and apply mask. It is
- // set like this: Let's say the vectorSize (VS) array is size 'N' and
- // the sourceShape(SS) is 'M' where M >= N and InnerTileSizes (IT) of
- // size M-N
- // Thus:
- // - initially: readVectorSizes = vectorInputSizes
- // - Divide all the readMaskShape locations pointed by innerDimPos
- // by the innerTileSize attribute value.
- // - if outer_dims_perms is present: do that permutation on readVectorSizes.
- // - Append the remaining shape from SS
- // E.g. let's say let's say unpackTensorType.getShape() = <8x8x32x16>
- // inner Dim Pos = [0, 1] and Inner Tiles = [32, 16], vector_sizes are [512,
- // 128] and outer_dims_perm is [1, 0] then read shape is:
- // ReadVectorSizes(initial): [512, 128]
- // Final Value(after innerDim Adjustment): [512/32, 128/16]
- // = [16, 8]
- // After applying outer_dims_perm: [8, 16]
- // After appending the rest of the sourceShape: [8, 16, 32, 16]
-
- SmallVector<int64_t> readVectorSizes(vectorSizes.begin(), vectorSizes.end());
-
- for (auto [index, size] : enumerate(innerTiles)) {
- readVectorSizes[innerDimPos[index]] =
- llvm::divideCeil(readVectorSizes[innerDimPos[index]], size);
- }
- if (!outerDimsPerm.empty()) {
- applyPermutationToVector(readVectorSizes, outerDimsPerm);
- }
- readVectorSizes.append(sourceShape.begin() + vectorSizes.size(),
- sourceShape.end());
+ // Obtain vector sizes for the read operation.
+ SmallVector<int64_t> readVectorSizes(inputVectorSizes);
+ SmallVector<bool> readScalableVectorFlags(inputScalableVecDims);
- Location loc = unpackOp->getLoc();
+ // In the absence of input-vector-sizes, use the _static_ input tensor shape.
+ if (inputVectorSizes.empty()) {
+ if (ShapedType::isDynamicShape(sourceShape))
+ return failure();
+
+ readVectorSizes.assign(sourceShape.begin(), sourceShape.end());
+ useInBoundsInsteadOfMasking = true;
+ }
+ // -- Generate the read operation --
auto padValue = arith::ConstantOp::create(
rewriter, loc,
rewriter.getZeroAttr(unpackOp.getSourceType().getElementType()));
-
- // Read result, mask if necessary. If transferReadOp shape is not equal
- // to shape of source, then a mask is necessary.
Value readResult = vector::createReadOrMaskedRead(
rewriter, loc, unpackOp.getSource(), readVectorSizes, padValue,
- /*useInBoundsInsteadOfMasking=*/false);
+ useInBoundsInsteadOfMasking, readScalableVectorFlags);
+ // -- Generate the transpose operation --
PackingMetadata packMetadata;
SmallVector<int64_t> lastDimToInsertPosPerm =
getUnPackInverseSrcPerm(unpackOp, packMetadata);
- // Transpose the appropriate rows to match output.
vector::TransposeOp transposeOp = vector::TransposeOp::create(
rewriter, loc, readResult, lastDimToInsertPosPerm);
- // Collapse the vector to the size required by result.
+ // -- Generate the shape_cast operation --
VectorType collapsedVecType = getCollapsedVecType(
transposeOp.getType(),
getSymbolLessAffineMaps(convertReassociationIndicesToExprs(
@@ -1987,9 +1967,11 @@ vectorizeAsTensorUnpackOp(RewriterBase &rewriter, linalg::UnPackOp unpackOp,
vector::ShapeCastOp shapeCastOp = vector::ShapeCastOp::create(
rewriter, loc, collapsedVecType, transposeOp->getResult(0));
+ // -- Generate the write operation --
Operation *write = createWriteOrMaskedWrite(
rewriter, loc, shapeCastOp.getResult(), unpackOp.getDest(),
/*writeIndices=*/{}, useInBoundsInsteadOfMasking);
+
newResults.push_back(write->getResult(0));
return success();
}
@@ -2016,7 +1998,7 @@ vectorizeAsTensorPadOp(RewriterBase &rewriter, tensor::PadOp padOp,
assert(succeeded(status) && "failed to reify result shapes");
auto maskedRead = vector::createReadOrMaskedRead(
rewriter, loc, padOp.getSource(), inputVectorSizes, padValue,
- /*useInBoundsInsteadOfMasking=*/false);
+ /*useInBoundsInsteadOfMasking=*/false, /*inputScalableVecSizes=*/{});
// Create Xfer write Op
Value dest = tensor::EmptyOp::create(rewriter, loc, reifiedReturnShapes[0],
@@ -2095,24 +2077,34 @@ vectorizeDynamicLinalgOpPrecondition(linalg::LinalgOp op,
return success();
}
-/// Need to check if the inner-tiles are static/constant.
+/// This hook considers two cases:
+/// (1) If the input-vector-sizes are empty, then the vector sizes will be
+/// inferred. This is only possible when all shapes are static.
+/// (2) If the input-vector-sizes are non-empty (i.e. user-provided), then
+/// carry out basic sanity-checking.
static LogicalResult
vectorizeUnPackOpPrecondition(linalg::UnPackOp unpackOp,
ArrayRef<int64_t> inputVectorSizes) {
+ // If there are no input vector sizes and all shapes are static, there is
+ // nothing left to check.
+ if (inputVectorSizes.empty() && unpackOp.getDestType().hasStaticShape() &&
+ unpackOp.getSourceType().hasStaticShape())
+ return success();
- if (llvm::any_of(unpackOp.getInnerTiles(), [](OpFoldResult res) {
- return !getConstantIntValue(res).has_value();
- })) {
- LDBG() << "Inner-tiles must be constant: " << unpackOp;
+ // The number of input vector sizes must be equal to:
+ // * read-vector-rank
+ if (!inputVectorSizes.empty() &&
+ (inputVectorSizes.size() != unpackOp.getSourceRank())) {
+ LDBG() << "Incorrect number of input vector sizes";
return failure();
}
- ArrayRef<int64_t> resultShape = unpackOp.getDestType().getShape();
- bool satisfyEmptyCond = inputVectorSizes.empty() &&
- unpackOp.getDestType().hasStaticShape() &&
- unpackOp.getSourceType().hasStaticShape();
- if (!satisfyEmptyCond &&
- failed(vector::isValidMaskedInputVector(resultShape, inputVectorSizes)))
+
+ // Check the vector sizes for the read operation.
+ if (failed(vector::isValidMaskedInputVector(
+ unpackOp.getSourceType().getShape(), inputVectorSizes))) {
+ LDBG() << "Invalid vector sizes for the read operation";
return failure();
+ }
return success();
}
@@ -2436,6 +2428,7 @@ vectorizePackOpPrecondition(linalg::PackOp packOp,
LDBG() << "pad value is not constant: " << packOp;
return failure();
}
+
ArrayRef<int64_t> resultTensorShape = packOp.getDestType().getShape();
bool satisfyEmptyCond = true;
if (inputVectorSizes.empty()) {
@@ -2499,8 +2492,12 @@ vectorizePadOpPrecondition(tensor::PadOp padOp,
return success();
}
-/// Preconditions for scalable vectors. This is quite restrictive - it models
-/// the fact that in practice we would only make selected dimensions scalable.
+/// Preconditions for scalable vectors.
+///
+/// For Ops implementing the LinalgOp interface, this is quite restrictive - it
+/// models the fact that in practice we would only make selected dimensions
+/// scalable. For other Ops (e.g. `linalg.unpack`), this will succeed
+/// unconditionally - we have yet to identify meaningful conditions.
static LogicalResult
vectorizeScalableVectorPrecondition(Operation *op,
ArrayRef<int64_t> inputVectorSizes,
@@ -2516,10 +2513,11 @@ vectorizeScalableVectorPrecondition(Operation *op,
auto linalgOp = dyn_cast<LinalgOp>(op);
- // Cond 1: There's been no need for scalable vectorisation of
- // non-linalg Ops so far
- if (!linalgOp)
- return failure();
+ // Cond 1: Reject Ops that don't implement the LinalgOp interface, with the
+ // exception of UnpackOp for which there is a dedicated hook.
+ if (!linalgOp) {
+ return success(isa<linalg::UnPackOp>(op));
+ }
// Cond 2: There's been no need for more than 2 scalable dims so far
if (numOfScalableDims > 2)
@@ -2750,7 +2748,8 @@ FailureOr<VectorizationResult> mlir::linalg::vectorize(
})
.Case<linalg::UnPackOp>([&](auto unpackOp) {
return vectorizeAsTensorUnpackOp(rewriter, unpackOp,
- inputVectorSizes, results);
+ inputVectorSizes,
+ inputScalableVecDims, results);
})
.Case<tensor::InsertSliceOp>([&](auto sliceOp) {
return vectorizeAsInsertSliceOp(rewriter, sliceOp, inputVectorSizes,
@@ -3142,7 +3141,8 @@ vectorizeAsInsertSliceOp(RewriterBase &rewriter, tensor::InsertSliceOp sliceOp,
vecType.getRank(), arith::ConstantIndexOp::create(rewriter, loc, 0));
Value read = mlir::vector::createReadOrMaskedRead(
rewriter, loc, source, vecType.getShape(), padValue,
- /*useInBoundsInsteadOfMasking=*/inputVectorSizes.empty());
+ /*useInBoundsInsteadOfMasking=*/inputVectorSizes.empty(),
+ /*inputScalableVecSizes=*/{});
// Create write
auto writeIndices =
diff --git a/mlir/lib/Dialect/Vector/Utils/VectorUtils.cpp b/mlir/lib/Dialect/Vector/Utils/VectorUtils.cpp
index 10ed2bcfb35a3..6e2fa35e1279a 100644
--- a/mlir/lib/Dialect/Vector/Utils/VectorUtils.cpp
+++ b/mlir/lib/Dialect/Vector/Utils/VectorUtils.cpp
@@ -279,14 +279,16 @@ vector::createUnrollIterator(VectorType vType, int64_t targetRank) {
// Attempt to unroll until targetRank or the first scalable dimension (which
// cannot be unrolled).
auto shapeToUnroll = vType.getShape().drop_back(targetRank);
- auto scalableDimsToUnroll = vType.getScalableDims().drop_back(targetRank);
- auto it = llvm::find(scalableDimsToUnroll, true);
- auto firstScalableDim = it - scalableDimsToUnroll.begin();
+ auto inputScalableVecDimsToUnroll =
+ vType.getScalableDims().drop_back(targetRank);
+ auto it = llvm::find(inputScalableVecDimsToUnroll, true);
+ auto firstScalableDim = it - inputScalableVecDimsToUnroll.begin();
if (firstScalableDim == 0)
return {};
// All scalable dimensions should be removed now.
- scalableDimsToUnroll = scalableDimsToUnroll.slice(0, firstScalableDim);
- assert(!llvm::is_contained(scalableDimsToUnroll, true) &&
+ inputScalableVecDimsToUnroll =
+ inputScalableVecDimsToUnroll.slice(0, firstScalableDim);
+ assert(!llvm::is_contained(inputScalableVecDimsToUnroll, true) &&
"unexpected leading scalable dimension");
// Create an unroll iterator for leading dimensions.
shapeToUnroll = shapeToUnroll.slice(0, firstScalableDim);
@@ -319,15 +321,15 @@ Value vector::createReadOrMaskedRead(OpBuilder &builder, Location loc,
ArrayRef<int64_t> inputVectorSizes,
Value padValue,
bool useInBoundsInsteadOfMasking,
- ArrayRef<bool> scalableDims) {
+ ArrayRef<bool> inputScalableVecDims) {
assert(!llvm::is_contained(inputVectorSizes, ShapedType::kDynamic) &&
"invalid input vector sizes");
auto sourceShapedType = cast<ShapedType>(source.getType());
auto sourceShape = sourceShapedType.getShape();
assert(sourceShape.size() == inputVectorSizes.size() &&
"expected same ranks.");
- auto vectorType =
- VectorType::get(inputVectorSizes, padValue.getType(), scalableDims);
+ auto vectorType = VectorType::get(inputVectorSizes, padValue.getType(),
+ inputScalableVecDims);
assert(padValue.getType() == sourceShapedType.getElementType() &&
"expected same pad element type to match source element type");
int64_t readRank = inputVectorSizes.size();
@@ -356,8 +358,8 @@ Value vector::createReadOrMaskedRead(OpBuilder &builder, Location loc,
? memref::getMixedSizes(builder, loc, source)
: tensor::getMixedSizes(builder, loc, source);
- auto maskType =
- VectorType::get(inputVectorSizes, builder.getI1Type(), scalableDims);
+ auto maskType = VectorType::get(inputVectorSizes, builder.getI1Type(),
+ inputScalableVecDims);
Value mask =
vector::CreateMaskOp::create(builder, loc, maskType, mixedSourceDims);
return mlir::vector::maskOperation(builder, transferReadOp, mask)
@@ -385,8 +387,7 @@ vector::isValidMaskedInputVector(ArrayRef<int64_t> shape,
staticSize <= inputSize;
})) {
LDBG() << "Input vector sizes must be greater than or equal to iteration "
- "space "
- "static sizes";
+ "space static sizes";
return failure();
}
return success();
diff --git a/mlir/test/Dialect/Linalg/vectorization/linalg-ops.mlir b/mlir/test/Dialect/Linalg/vectorization/linalg-ops.mlir
index d41d86117793b..095810fe0451e 100644
--- a/mlir/test/Dialect/Linalg/vectorization/linalg-ops.mlir
+++ b/mlir/test/Dialect/Linalg/vectorization/linalg-ops.mlir
@@ -940,31 +940,100 @@ module attributes {transform.with_named_sequence} {
///----------------------------------------------------------------------------------------
// CHECK-LABEL: func @test_vectorize_dynamic_shapes_unpack
-// CHECK-SAME: %[[ARG_0:.*]]: tensor<?x?xf32>,
-// CHECK-SAME: %[[ARG_1:.*]]: tensor<?x?x16x2xf32>
-func.func @test_vectorize_dynamic_shapes_unpack(%arg0: tensor<?x?xf32>, %arg1: tensor<?x?x16x2xf32>) -> tensor<?x?xf32> {
-// CHECK: %[[C0:.*]] = arith.constant 0
-// CHECK: %[[C01:.*]] = arith.constant 0
-// CHECK: %[[C02:.*]] = arith.constant 0
-// CHECK: %[[DIM_0:.*]] = tensor.dim %[[ARG_1]], %[[C02]] : tensor<?x?x16x2xf32>
-// CHECK: %[[C1:.*]] = arith.constant 1
-// CHECK: %[[DIM6:.*]] = tensor.dim %[[ARG_1]], %[[C1]] : tensor<?x?x16x2xf32>
-// CHECK: %[[CNST16:.*]] = arith.constant 16 : index
-// CHECK: %[[CNST2:.*]] = arith.constant 2 : index
-// CHECK: %[[readMsk0:.*]] = vector.create_mask %[[DIM_0]], %[[DIM6]], %[[CNST16]], %[[CNST2]] : vector<2x1x16x2xi1>
-// CHECK: %[[read0:.*]] = vector.mask %[[readMsk0]] {{.*}} vector.transfer_read %{{.*}} : tensor<?x?x16x2xf32>, vector<2x1x16x2xf32> } : vector<2x1x16x2xi1> -> vector<2x1x16x2xf32>
-// CHECK: %[[trans0:.*]] = vector.transpose %[[read0]], [0, 3, 1, 2] : vector<2x1x16x2xf32> to vector<2x2x1x16xf32>
-// CHECK: %[[sc0:.*]] = vector.shape_cast %[[trans0]] : vector<2x2x1x16xf32> to vector<4x16xf32>
-// CHECK: %[[writeMsk0:.*]] = vector.create_mask {{.*}} : vector<4x16xi1>
-// CHECK: %[[write0:.*]] = vector.mask %[[writeMsk0:.*]] {{.*}} vector.transfer_write %[[sc0]], %[[ARG_0]]
-// CHECK: return %[[write0]]
- %ret = linalg.unpack %arg1 inner_dims_pos = [1, 0] inner_tiles = [16, 2] into %arg0 : tensor<?x?x16x2xf32> -> tensor<?x?xf32>
- return %ret : tensor<?x?xf32>
+// CHECK-SAME: %[[DEST:.*]]: tensor<?x?xf32>,
+// CHECK-SAME: %[[SRC:.*]]: tensor<?x?x16x2xf32>
+func.func @test_vectorize_dynamic_shapes_unpack(%dest: tensor<?x?xf32>, %src: tensor<?x?x16x2xf32>) -> tensor<?x?xf32> {
+ // CHECK: %[[C0:.*]] = arith.constant 0 : index
+ // CHECK: %[[C0_1:.*]] = arith.constant 0 : index
+ // CHECK: %[[DIM_0:.*]] = tensor.dim %[[SRC]], %[[C0_1]] : tensor<?x?x16x2xf32>
+ // CHECK: %[[C1:.*]] = arith.constant 1
+ // CHECK: %[[DIM6:.*]] = tensor.dim %[[SRC]], %[[C1]] : tensor<?x?x16x2xf32>
+ // CHECK: %[[CNST16:.*]] = arith.constant 16 : index
+ // CHECK: %[[CNST2:.*]] = arith.constant 2 : index
+ // CHECK: %[[MASK_READ:.*]] = vector.create_mask %[[DIM_0]], %[[DIM6]], %[[CNST16]], %[[CNST2]] : vector<2x1x16x2xi1>
+ // CHECK: %[[READ:.*]] = vector.mask %[[MASK_READ]] {{.*}} vector.transfer_read %{{.*}} : tensor<?x?x16x2xf32>, vector<2x1x16x2xf32> } : vector<2x1x16x2xi1> -> vector<2x1x16x2xf32>
+ // CHECK: %[[TR:.*]] = vector.transpose %[[READ]], [0, 3, 1, 2] : vector<2x1x16x2xf32> to vector<2x2x1x16xf32>
+ // CHECK: %[[SC:.*]] = vector.shape_cast %[[TR]] : vector<2x2x1x16xf32> to vector<4x16xf32>
+ // CHECK: %[[MASK_WRITE:.*]] = vector.create_mask {{.*}} : vector<4x16xi1>
+ // CHECK: %[[WRITE:.*]] = vector.mask %[[MASK_WRITE:.*]] {{.*}} vector.transfer_write %[[SC]], %[[DEST]]
+ // CHECK: return %[[WRITE]]
+ %ret = linalg.unpack %src inner_dims_pos = [1, 0] inner_tiles = [16, 2] into %dest : tensor<?x?x16x2xf32> -> tensor<?x?xf32>
+ return %ret : tensor<?x?xf32>
+}
+module attributes {transform.with_named_sequence} {
+ transform.named_sequence @__transform_main(%arg0: !transform.any_op {transform.readonly}) {
+ %0 = transform.structured.match ops{["linalg.unpack"]} in %arg0 : (!transform.any_op) -> !transform.any_op
+ transform.structured.vectorize %0 vector_sizes [2, 1, 16, 2] : !transform.any_op
+ transform.yield
+ }
+}
+
+// -----
+
+// CHECK-LABEL: func @test_vectorize_dynamic_shapes_unpack_scalable_vec
+// CHECK-SAME: %[[DEST:.*]]: tensor<?x?xf32>,
+// CHECK-SAME: %[[SRC:.*]]: tensor<?x?x16x2xf32>
+func.func @test_vectorize_dynamic_shapes_unpack_scalable_vec(%dest: tensor<?x?xf32>, %src: tensor<?x?x16x2xf32>) -> tensor<?x?xf32> {
+ // CHECK: %[[CST:.*]] = arith.constant 0.000000e+00
+ // CHECK: %[[C01:.*]] = arith.constant 0
+ // CHECK: %[[C02:.*]] = arith.constant 0
+ // CHECK: %[[DIM4:.*]] = tensor.dim %[[SRC]], %[[C02]] : tensor<?x?x16x2xf32>
+ // CHECK: %[[CNST14:.*]] = arith.constant 1
+ // CHECK: %[[DIM6:.*]] = tensor.dim %[[SRC]], %[[CNST14]] : tensor<?x?x16x2xf32>
+ // CHECK: %[[CNST16:.*]] = arith.constant 16 : index
+ // CHECK: %[[CNST2:.*]] = arith.constant 2 : index
+ // CHECK: %[[MASK_READ:.*]] = vector.create_mask %[[DIM4]], %[[DIM6]], %[[CNST16]], %[[CNST2]] : vector<2x1x[16]x2xi1>
+ // CHECK: %[[READ:.*]] = vector.mask %[[MASK_READ]] {{.*}} vector.transfer_read %{{.*}} : tensor<?x?x16x2xf32>, vector<2x1x[16]x2xf32> } : vector<2x1x[16]x2xi1> -> vector<2x1x[16]x2xf32>
+ // CHECK: %[[TR:.*]] = vector.transpose %[[READ]], [0, 3, 1, 2] : vector<2x1x[16]x2xf32> to vector<2x2x1x[16]xf32>
+ // CHECK: %[[SC:.*]] = vector.shape_cast %[[TR]] : vector<2x2x1x[16]xf32> to vector<4x[16]xf32>
+ // CHECK: %[[MASK_WRITE:.*]] = vector.create_mask {{.*}} : vector<4x[16]xi1>
+ // CHECK: %[[WRITE:.*]] = vector.mask %[[MASK_WRITE:.*]] {{.*}} vector.transfer_write %[[SC]], %[[DEST]]
+ // CHECK: return %[[WRITE]]
+ %ret = linalg.unpack %src inner_dims_pos = [1, 0] inner_tiles = [16, 2] into %dest : tensor<?x?x16x2xf32> -> tensor<?x?xf32>
+ return %ret : tensor<?x?xf32>
+}
+module attributes {transform.with_named_sequence} {
+ transform.named_sequence @__transform_main(%arg0: !transform.any_op {transform.readonly}) {
+ %0 = transform.structured.match ops{["linalg.unpack"]} in %arg0 : (!transform.any_op) -> !transform.any_op
+ transform.structured.vectorize %0 vector_sizes [2, 1, [16], 2] : !transform.any_op
+ transform.yield
+ }
+}
+
+// -----
+
+// CHECK-LABEL: func @test_vectorize_dynamic_shapes_unpack_scalable_vec_and_tile_size
+// CHECK-SAME: %[[DEST:.*]]: tensor<?x?xf32>,
+// CHECK-SAME: %[[SRC:.*]]: tensor<?x?x?x2xf32>
+func.func @test_vectorize_dynamic_shapes_unpack_scalable_vec_and_tile_size(%dest: tensor<?x?xf32>, %src: tensor<?x?x?x2xf32>) -> tensor<?x?xf32> {
+ // CHECK: %[[CST:.*]] = arith.constant 0.000000e+00
+ // CHECK: %[[C01:.*]] = arith.constant 0
+ // CHECK: %[[C02:.*]] = arith.constant 0
+ // CHECK: %[[DIM4:.*]] = tensor.dim %[[SRC]], %[[C02]] : tensor<?x?x?x2xf32>
+ // CHECK: %[[C1_2:.*]] = arith.constant 1
+ // CHECK: %[[DIM6:.*]] = tensor.dim %[[SRC]], %[[C1_2]] : tensor<?x?x?x2xf32>
+ // CHECK: %[[C2:.*]] = arith.constant 2 : index
+ // CHECK: %[[DIM_2:.*]] = tensor.dim %[[SRC]], %[[C2]] : tensor<?x?x?x2xf32>
+ // CHECK: %[[C2_1:.*]] = arith.constant 2 : index
+ // CHECK: %[[MASK_READ:.*]] = vector.create_mask %[[DIM4]], %[[DIM6]], %[[DIM_2]], %[[C2_1]] : vector<2x1x[16]x2xi1>
+ // CHECK: %[[READ:.*]] = vector.mask %[[MASK_READ]] {{.*}} vector.transfer_read %{{.*}} : tensor<?x?x?x2xf32>, vector<2x1x[16]x2xf32> } : vector<2x1x[16]x2xi1> -> vector<2x1x[16]x2xf32>
+ // CHECK: %[[TR:.*]] = vector.transpose %[[READ]], [0, 3, 1, 2] : vector<2x1x[16]x2xf32> to vector<2x2x1x[16]xf32>
+ // CHECK: %[[SC:.*]] = vector.shape_cast %[[TR]] : vector<2x2x1x[16]xf32> to vector<4x[16]xf32>
+ // CHECK: %[[MASK_WRITE:.*]] = vector.create_mask {{.*}} : vector<4x[16]xi1>
+ // CHECK: %[[WRITE:.*]] = vector.mask %[[MASK_WRITE:.*]] {{.*}} vector.transfer_write %[[SC]], %[[DEST]]
+ // CHECK: return %[[WRITE]]
+
+ %vs = vector.vscale
+ %c16 = arith.constant 16 : index
+ %tile_size = arith.muli %vs, %c16 : index
+
+ %ret = linalg.unpack %src inner_dims_pos = [1, 0] inner_tiles = [%tile_size, 2] into %dest : tensor<?x?x?x2xf32> -> tensor<?x?xf32>
+ return %ret : tensor<?x?xf32>
}
module attributes {transform.with_named_sequence} {
transform.named_sequence @__transform_main(%arg0: !transform.any_op {transform.readonly}) {
%0 = transform.structured.match ops{["linalg.unpack"]} in %arg0 : (!transform.any_op) -> !transform.any_op
- transform.structured.vectorize %0 vector_sizes [4, 16] : !transform.any_op
+ transform.structured.vectorize %0 vector_sizes [2, 1, [16], 2] : !transform.any_op
transform.yield
}
}
@@ -997,7 +1066,7 @@ func.func @test_vectorize_unpack(%source: tensor<8x8x32x16xf32>, %dest: tensor<2
module attributes {transform.with_named_sequence} {
transform.named_sequence @__transform_main(%arg0: !transform.any_op {transform.readonly}) {
%0 = transform.structured.match ops{["linalg.unpack"]} in %arg0 : (!transform.any_op) -> !transform.any_op
- transform.structured.vectorize %0 vector_sizes [512, 128] : !transform.any_op
+ transform.structured.vectorize %0 vector_sizes [16, 8, 32, 16] : !transform.any_op
transform.yield
}
}
@@ -1022,7 +1091,7 @@ func.func @test_vectorize_unpack_no_masks(%source: tensor<8x8x32x16xf32>, %dest:
module attributes {transform.with_named_sequence} {
transform.named_sequence @__transform_main(%arg0: !transform.any_op {transform.readonly}) {
%0 = transform.structured.match ops{["linalg.unpack"]} in %arg0 : (!transform.any_op) -> !transform.any_op
- transform.structured.vectorize %0 vector_sizes [256, 128] : !transform.any_op
+ transform.structured.vectorize %0 vector_sizes [8, 8, 32, 16] : !transform.any_op
transform.yield
}
}
@@ -1047,7 +1116,7 @@ func.func @test_vectorize_unpack_no_masks(%source: tensor<8x8x32x16xf32>, %dest:
module attributes {transform.with_named_sequence} {
transform.named_sequence @__transform_main(%arg0: !transform.any_op {transform.readonly}) {
%0 = transform.structured.match ops{["linalg.unpack"]} in %arg0 : (!transform.any_op) -> !transform.any_op
- transform.structured.vectorize %0 vector_sizes [256, 128] : !transform.any_op
+ transform.structured.vectorize %0 vector_sizes [8, 8, 32, 16] : !transform.any_op
transform.yield
}
}
>From 59231115b084474287fa85c8fc20697646373cc3 Mon Sep 17 00:00:00 2001
From: Anna Thomas <anna at azul.com>
Date: Wed, 6 Aug 2025 15:37:38 -0400
Subject: [PATCH 144/171] [Loads] Precommit tests for #149551. NFC
Add these tests, which currently require predicated loads due to a variable
start SCEV.
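For reference, a stripped-down sketch of the loop shape these tests cover (names and constants are illustrative; the actual tests below also carry the dereferenceable assumptions):

  define void @sketch(ptr noalias %a, ptr noalias %b, ptr noalias %c,
                      i64 %iv.start, i64 %n) {
  entry:
    br label %loop.header

  loop.header:
    ; The IV starts at a variable rather than a constant, so the start
    ; access is a SCEV with a non-constant offset.
    %iv = phi i64 [ %iv.start, %entry ], [ %iv.next, %loop.latch ]
    %gep.b = getelementptr inbounds i32, ptr %b, i64 %iv
    %l.b = load i32, ptr %gep.b, align 1
    %cmp = icmp sge i32 %l.b, 0
    br i1 %cmp, label %loop.latch, label %if.else

  if.else:
    ; This conditional load is what currently has to be predicated.
    %gep.a = getelementptr inbounds i32, ptr %a, i64 %iv
    %l.a = load i32, ptr %gep.a, align 1
    br label %loop.latch

  loop.latch:
    %merge = phi i32 [ %l.b, %loop.header ], [ %l.a, %if.else ]
    %gep.c = getelementptr inbounds i32, ptr %c, i64 %iv
    store i32 %merge, ptr %gep.c, align 1
    %iv.next = add nuw nsw i64 %iv, 1
    %ec = icmp eq i64 %iv.next, %n
    br i1 %ec, label %exit, label %loop.header

  exit:
    ret void
  }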
---
...able-info-from-assumption-variable-size.ll | 224 ++++++++++++++++++
1 file changed, 224 insertions(+)
diff --git a/llvm/test/Transforms/LoopVectorize/dereferenceable-info-from-assumption-variable-size.ll b/llvm/test/Transforms/LoopVectorize/dereferenceable-info-from-assumption-variable-size.ll
index c8cf2ad8198a9..9852f538c6f74 100644
--- a/llvm/test/Transforms/LoopVectorize/dereferenceable-info-from-assumption-variable-size.ll
+++ b/llvm/test/Transforms/LoopVectorize/dereferenceable-info-from-assumption-variable-size.ll
@@ -540,3 +540,227 @@ loop.latch:
exit:
ret void
}
+
+; The start access is a SCEV with a non-constant offset because the IV starts
+; at the variable `iv.start`.
+define void @deref_assumption_loop_access_start_variable(i8 %v, ptr noundef %P, i64 range(i64 0, 2000) %N, ptr noalias %b, ptr noalias %c, i64 range(i64 0, 2000) %iv.start) nofree nosync {
+; CHECK-LABEL: define void @deref_assumption_loop_access_start_variable(
+; CHECK-SAME: i8 [[V:%.*]], ptr noundef [[P:%.*]], i64 range(i64 0, 2000) [[N:%.*]], ptr noalias [[B:%.*]], ptr noalias [[C:%.*]], i64 range(i64 0, 2000) [[IV_START:%.*]]) #[[ATTR1]] {
+; CHECK-NEXT: [[ENTRY:.*]]:
+; CHECK-NEXT: [[A:%.*]] = getelementptr i8, ptr [[P]], i64 16
+; CHECK-NEXT: [[CMP:%.*]] = icmp slt i64 [[IV_START]], [[N]]
+; CHECK-NEXT: call void @llvm.assume(i1 [[CMP]])
+; CHECK-NEXT: [[MUL:%.*]] = mul i64 [[N]], 4
+; CHECK-NEXT: [[ADD:%.*]] = add i64 [[MUL]], 16
+; CHECK-NEXT: call void @llvm.assume(i1 true) [ "dereferenceable"(ptr [[P]], i64 [[ADD]]) ]
+; CHECK-NEXT: [[TMP3:%.*]] = sub i64 [[N]], [[IV_START]]
+; CHECK-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 [[TMP3]], 2
+; CHECK-NEXT: br i1 [[MIN_ITERS_CHECK]], label %[[SCALAR_PH:.*]], label %[[VECTOR_PH:.*]]
+; CHECK: [[VECTOR_PH]]:
+; CHECK-NEXT: [[N_MOD_VF:%.*]] = urem i64 [[TMP3]], 2
+; CHECK-NEXT: [[N_VEC:%.*]] = sub i64 [[TMP3]], [[N_MOD_VF]]
+; CHECK-NEXT: [[TMP1:%.*]] = add i64 [[IV_START]], [[N_VEC]]
+; CHECK-NEXT: br label %[[VECTOR_BODY:.*]]
+; CHECK: [[VECTOR_BODY]]:
+; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[PRED_LOAD_CONTINUE2:.*]] ]
+; CHECK-NEXT: [[OFFSET_IDX:%.*]] = add i64 [[IV_START]], [[INDEX]]
+; CHECK-NEXT: [[TMP6:%.*]] = getelementptr inbounds i32, ptr [[B]], i64 [[OFFSET_IDX]]
+; CHECK-NEXT: [[WIDE_LOAD:%.*]] = load <2 x i32>, ptr [[TMP6]], align 1
+; CHECK-NEXT: [[TMP8:%.*]] = icmp sge <2 x i32> [[WIDE_LOAD]], zeroinitializer
+; CHECK-NEXT: [[TMP4:%.*]] = xor <2 x i1> [[TMP8]], splat (i1 true)
+; CHECK-NEXT: [[TMP5:%.*]] = extractelement <2 x i1> [[TMP4]], i32 0
+; CHECK-NEXT: br i1 [[TMP5]], label %[[PRED_LOAD_IF:.*]], label %[[PRED_LOAD_CONTINUE:.*]]
+; CHECK: [[PRED_LOAD_IF]]:
+; CHECK-NEXT: [[TMP16:%.*]] = add i64 [[OFFSET_IDX]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[TMP16]]
+; CHECK-NEXT: [[TMP19:%.*]] = load i32, ptr [[TMP7]], align 1
+; CHECK-NEXT: [[TMP9:%.*]] = insertelement <2 x i32> poison, i32 [[TMP19]], i32 0
+; CHECK-NEXT: br label %[[PRED_LOAD_CONTINUE]]
+; CHECK: [[PRED_LOAD_CONTINUE]]:
+; CHECK-NEXT: [[TMP10:%.*]] = phi <2 x i32> [ poison, %[[VECTOR_BODY]] ], [ [[TMP9]], %[[PRED_LOAD_IF]] ]
+; CHECK-NEXT: [[TMP11:%.*]] = extractelement <2 x i1> [[TMP4]], i32 1
+; CHECK-NEXT: br i1 [[TMP11]], label %[[PRED_LOAD_IF1:.*]], label %[[PRED_LOAD_CONTINUE2]]
+; CHECK: [[PRED_LOAD_IF1]]:
+; CHECK-NEXT: [[TMP12:%.*]] = add i64 [[OFFSET_IDX]], 1
+; CHECK-NEXT: [[TMP13:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[TMP12]]
+; CHECK-NEXT: [[TMP14:%.*]] = load i32, ptr [[TMP13]], align 1
+; CHECK-NEXT: [[TMP15:%.*]] = insertelement <2 x i32> [[TMP10]], i32 [[TMP14]], i32 1
+; CHECK-NEXT: br label %[[PRED_LOAD_CONTINUE2]]
+; CHECK: [[PRED_LOAD_CONTINUE2]]:
+; CHECK-NEXT: [[WIDE_LOAD1:%.*]] = phi <2 x i32> [ [[TMP10]], %[[PRED_LOAD_CONTINUE]] ], [ [[TMP15]], %[[PRED_LOAD_IF1]] ]
+; CHECK-NEXT: [[PREDPHI:%.*]] = select <2 x i1> [[TMP8]], <2 x i32> [[WIDE_LOAD]], <2 x i32> [[WIDE_LOAD1]]
+; CHECK-NEXT: [[TMP17:%.*]] = getelementptr inbounds i32, ptr [[C]], i64 [[OFFSET_IDX]]
+; CHECK-NEXT: store <2 x i32> [[PREDPHI]], ptr [[TMP17]], align 1
+; CHECK-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 2
+; CHECK-NEXT: [[TMP18:%.*]] = icmp eq i64 [[INDEX_NEXT]], [[N_VEC]]
+; CHECK-NEXT: br i1 [[TMP18]], label %[[MIDDLE_BLOCK:.*]], label %[[VECTOR_BODY]], !llvm.loop [[LOOP14:![0-9]+]]
+; CHECK: [[MIDDLE_BLOCK]]:
+; CHECK-NEXT: [[CMP_N:%.*]] = icmp eq i64 [[TMP3]], [[N_VEC]]
+; CHECK-NEXT: br i1 [[CMP_N]], label %[[EXIT:.*]], label %[[SCALAR_PH]]
+; CHECK: [[SCALAR_PH]]:
+; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ [[TMP1]], %[[MIDDLE_BLOCK]] ], [ [[IV_START]], %[[ENTRY]] ]
+; CHECK-NEXT: br label %[[LOOP:.*]]
+; CHECK: [[LOOP]]:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ], [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ]
+; CHECK-NEXT: [[GEP_A:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
+; CHECK-NEXT: [[GEP_B:%.*]] = getelementptr inbounds i32, ptr [[B]], i64 [[IV]]
+; CHECK-NEXT: [[L_B:%.*]] = load i32, ptr [[GEP_B]], align 1
+; CHECK-NEXT: [[C_1:%.*]] = icmp sge i32 [[L_B]], 0
+; CHECK-NEXT: br i1 [[C_1]], label %[[LOOP_LATCH]], label %[[LOOP_THEN:.*]]
+; CHECK: [[LOOP_THEN]]:
+; CHECK-NEXT: [[L_A:%.*]] = load i32, ptr [[GEP_A]], align 1
+; CHECK-NEXT: br label %[[LOOP_LATCH]]
+; CHECK: [[LOOP_LATCH]]:
+; CHECK-NEXT: [[MERGE:%.*]] = phi i32 [ [[L_A]], %[[LOOP_THEN]] ], [ [[L_B]], %[[LOOP]] ]
+; CHECK-NEXT: [[GEP_C:%.*]] = getelementptr inbounds i32, ptr [[C]], i64 [[IV]]
+; CHECK-NEXT: store i32 [[MERGE]], ptr [[GEP_C]], align 1
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[TERM_COND:%.*]] = icmp slt i64 [[IV_NEXT]], [[N]]
+; CHECK-NEXT: br i1 [[TERM_COND]], label %[[LOOP]], label %[[EXIT]], !llvm.loop [[LOOP15:![0-9]+]]
+; CHECK: [[EXIT]]:
+; CHECK-NEXT: ret void
+;
+
+entry:
+ %a = getelementptr i8, ptr %P, i64 16
+ %cmp = icmp slt i64 %iv.start, %N
+ call void @llvm.assume(i1 %cmp)
+ %mul = mul i64 %N, 4
+ %add = add i64 %mul, 16
+ call void @llvm.assume(i1 true) [ "dereferenceable"(ptr %P, i64 %add) ]
+ br label %loop
+
+loop: ; preds = %entry, %loop.latch
+ %iv = phi i64 [ %iv.next, %loop.latch ], [ %iv.start, %entry ]
+ %gep.a = getelementptr inbounds i32, ptr %a, i64 %iv
+ %gep.b = getelementptr inbounds i32, ptr %b, i64 %iv
+ %l.b = load i32, ptr %gep.b, align 1
+ %c.1 = icmp sge i32 %l.b, 0
+ br i1 %c.1, label %loop.latch, label %loop.then
+
+loop.then: ; preds = %loop
+ %l.a = load i32, ptr %gep.a, align 1
+ br label %loop.latch
+
+loop.latch: ; preds = %loop.then, %loop
+ %merge = phi i32 [ %l.a, %loop.then ], [ %l.b, %loop ]
+ %gep.c = getelementptr inbounds i32, ptr %c, i64 %iv
+ store i32 %merge, ptr %gep.c, align 1
+ %iv.next = add nuw nsw i64 %iv, 1
+ %term.cond = icmp slt i64 %iv.next, %N
+ br i1 %term.cond, label %loop, label %exit
+
+exit:
+ ret void
+}
+
+; Same as the previous test, but `iv.start` is not known to be nonnegative.
+define void @deref_assumption_loop_access_start_variable_unknown_range(i8 %v, ptr noundef %P, i64 range(i64 0, 2000) %N, ptr noalias %b, ptr noalias %c, i64 %iv.start) nofree nosync {
+; CHECK-LABEL: define void @deref_assumption_loop_access_start_variable_unknown_range(
+; CHECK-SAME: i8 [[V:%.*]], ptr noundef [[P:%.*]], i64 range(i64 0, 2000) [[N:%.*]], ptr noalias [[B:%.*]], ptr noalias [[C:%.*]], i64 [[IV_START:%.*]]) #[[ATTR1]] {
+; CHECK-NEXT: [[ENTRY:.*]]:
+; CHECK-NEXT: [[A:%.*]] = getelementptr i8, ptr [[P]], i64 16
+; CHECK-NEXT: [[CMP:%.*]] = icmp slt i64 [[IV_START]], [[N]]
+; CHECK-NEXT: call void @llvm.assume(i1 [[CMP]])
+; CHECK-NEXT: [[MUL:%.*]] = mul i64 [[N]], 4
+; CHECK-NEXT: [[ADD:%.*]] = add i64 [[MUL]], 16
+; CHECK-NEXT: call void @llvm.assume(i1 true) [ "dereferenceable"(ptr [[P]], i64 [[ADD]]) ]
+; CHECK-NEXT: [[TMP0:%.*]] = sub i64 [[N]], [[IV_START]]
+; CHECK-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 [[TMP0]], 2
+; CHECK-NEXT: br i1 [[MIN_ITERS_CHECK]], label %[[SCALAR_PH:.*]], label %[[VECTOR_PH:.*]]
+; CHECK: [[VECTOR_PH]]:
+; CHECK-NEXT: [[N_MOD_VF:%.*]] = urem i64 [[TMP0]], 2
+; CHECK-NEXT: [[N_VEC:%.*]] = sub i64 [[TMP0]], [[N_MOD_VF]]
+; CHECK-NEXT: [[TMP1:%.*]] = add i64 [[IV_START]], [[N_VEC]]
+; CHECK-NEXT: br label %[[VECTOR_BODY:.*]]
+; CHECK: [[VECTOR_BODY]]:
+; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[PRED_LOAD_CONTINUE2:.*]] ]
+; CHECK-NEXT: [[OFFSET_IDX:%.*]] = add i64 [[IV_START]], [[INDEX]]
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr inbounds i32, ptr [[B]], i64 [[OFFSET_IDX]]
+; CHECK-NEXT: [[WIDE_LOAD:%.*]] = load <2 x i32>, ptr [[TMP2]], align 1
+; CHECK-NEXT: [[TMP3:%.*]] = icmp sge <2 x i32> [[WIDE_LOAD]], zeroinitializer
+; CHECK-NEXT: [[TMP4:%.*]] = xor <2 x i1> [[TMP3]], splat (i1 true)
+; CHECK-NEXT: [[TMP5:%.*]] = extractelement <2 x i1> [[TMP4]], i32 0
+; CHECK-NEXT: br i1 [[TMP5]], label %[[PRED_LOAD_IF:.*]], label %[[PRED_LOAD_CONTINUE:.*]]
+; CHECK: [[PRED_LOAD_IF]]:
+; CHECK-NEXT: [[TMP6:%.*]] = add i64 [[OFFSET_IDX]], 0
+; CHECK-NEXT: [[TMP7:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[TMP6]]
+; CHECK-NEXT: [[TMP8:%.*]] = load i32, ptr [[TMP7]], align 1
+; CHECK-NEXT: [[TMP9:%.*]] = insertelement <2 x i32> poison, i32 [[TMP8]], i32 0
+; CHECK-NEXT: br label %[[PRED_LOAD_CONTINUE]]
+; CHECK: [[PRED_LOAD_CONTINUE]]:
+; CHECK-NEXT: [[TMP10:%.*]] = phi <2 x i32> [ poison, %[[VECTOR_BODY]] ], [ [[TMP9]], %[[PRED_LOAD_IF]] ]
+; CHECK-NEXT: [[TMP11:%.*]] = extractelement <2 x i1> [[TMP4]], i32 1
+; CHECK-NEXT: br i1 [[TMP11]], label %[[PRED_LOAD_IF1:.*]], label %[[PRED_LOAD_CONTINUE2]]
+; CHECK: [[PRED_LOAD_IF1]]:
+; CHECK-NEXT: [[TMP12:%.*]] = add i64 [[OFFSET_IDX]], 1
+; CHECK-NEXT: [[TMP13:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[TMP12]]
+; CHECK-NEXT: [[TMP14:%.*]] = load i32, ptr [[TMP13]], align 1
+; CHECK-NEXT: [[TMP15:%.*]] = insertelement <2 x i32> [[TMP10]], i32 [[TMP14]], i32 1
+; CHECK-NEXT: br label %[[PRED_LOAD_CONTINUE2]]
+; CHECK: [[PRED_LOAD_CONTINUE2]]:
+; CHECK-NEXT: [[TMP16:%.*]] = phi <2 x i32> [ [[TMP10]], %[[PRED_LOAD_CONTINUE]] ], [ [[TMP15]], %[[PRED_LOAD_IF1]] ]
+; CHECK-NEXT: [[PREDPHI:%.*]] = select <2 x i1> [[TMP3]], <2 x i32> [[WIDE_LOAD]], <2 x i32> [[TMP16]]
+; CHECK-NEXT: [[TMP17:%.*]] = getelementptr inbounds i32, ptr [[C]], i64 [[OFFSET_IDX]]
+; CHECK-NEXT: store <2 x i32> [[PREDPHI]], ptr [[TMP17]], align 1
+; CHECK-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 2
+; CHECK-NEXT: [[TMP18:%.*]] = icmp eq i64 [[INDEX_NEXT]], [[N_VEC]]
+; CHECK-NEXT: br i1 [[TMP18]], label %[[MIDDLE_BLOCK:.*]], label %[[VECTOR_BODY]], !llvm.loop [[LOOP16:![0-9]+]]
+; CHECK: [[MIDDLE_BLOCK]]:
+; CHECK-NEXT: [[CMP_N:%.*]] = icmp eq i64 [[TMP0]], [[N_VEC]]
+; CHECK-NEXT: br i1 [[CMP_N]], label %[[EXIT:.*]], label %[[SCALAR_PH]]
+; CHECK: [[SCALAR_PH]]:
+; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ [[TMP1]], %[[MIDDLE_BLOCK]] ], [ [[IV_START]], %[[ENTRY]] ]
+; CHECK-NEXT: br label %[[LOOP:.*]]
+; CHECK: [[LOOP]]:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ], [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ]
+; CHECK-NEXT: [[GEP_A:%.*]] = getelementptr inbounds i32, ptr [[A]], i64 [[IV]]
+; CHECK-NEXT: [[GEP_B:%.*]] = getelementptr inbounds i32, ptr [[B]], i64 [[IV]]
+; CHECK-NEXT: [[L_B:%.*]] = load i32, ptr [[GEP_B]], align 1
+; CHECK-NEXT: [[C_1:%.*]] = icmp sge i32 [[L_B]], 0
+; CHECK-NEXT: br i1 [[C_1]], label %[[LOOP_LATCH]], label %[[LOOP_THEN:.*]]
+; CHECK: [[LOOP_THEN]]:
+; CHECK-NEXT: [[L_A:%.*]] = load i32, ptr [[GEP_A]], align 1
+; CHECK-NEXT: br label %[[LOOP_LATCH]]
+; CHECK: [[LOOP_LATCH]]:
+; CHECK-NEXT: [[MERGE:%.*]] = phi i32 [ [[L_A]], %[[LOOP_THEN]] ], [ [[L_B]], %[[LOOP]] ]
+; CHECK-NEXT: [[GEP_C:%.*]] = getelementptr inbounds i32, ptr [[C]], i64 [[IV]]
+; CHECK-NEXT: store i32 [[MERGE]], ptr [[GEP_C]], align 1
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[TERM_COND:%.*]] = icmp slt i64 [[IV_NEXT]], [[N]]
+; CHECK-NEXT: br i1 [[TERM_COND]], label %[[LOOP]], label %[[EXIT]], !llvm.loop [[LOOP17:![0-9]+]]
+; CHECK: [[EXIT]]:
+; CHECK-NEXT: ret void
+;
+entry:
+ %a = getelementptr i8, ptr %P, i64 16
+ %cmp = icmp slt i64 %iv.start, %N
+ call void @llvm.assume(i1 %cmp)
+ %mul = mul i64 %N, 4
+ %add = add i64 %mul, 16
+ call void @llvm.assume(i1 true) [ "dereferenceable"(ptr %P, i64 %add) ]
+ br label %loop
+
+loop: ; preds = %entry, %loop.latch
+ %iv = phi i64 [ %iv.next, %loop.latch ], [ %iv.start, %entry ]
+ %gep.a = getelementptr inbounds i32, ptr %a, i64 %iv
+ %gep.b = getelementptr inbounds i32, ptr %b, i64 %iv
+ %l.b = load i32, ptr %gep.b, align 1
+ %c.1 = icmp sge i32 %l.b, 0
+ br i1 %c.1, label %loop.latch, label %loop.then
+
+loop.then: ; preds = %loop
+ %l.a = load i32, ptr %gep.a, align 1
+ br label %loop.latch
+
+loop.latch: ; preds = %loop.then, %loop
+ %merge = phi i32 [ %l.a, %loop.then ], [ %l.b, %loop ]
+ %gep.c = getelementptr inbounds i32, ptr %c, i64 %iv
+ store i32 %merge, ptr %gep.c, align 1
+ %iv.next = add nuw nsw i64 %iv, 1
+ %term.cond = icmp slt i64 %iv.next, %N
+ br i1 %term.cond, label %loop, label %exit
+
+exit:
+ ret void
+}
>From 0491d8bda73f88f5faff8523613f3dce19080b15 Mon Sep 17 00:00:00 2001
From: David Green <david.green at arm.com>
Date: Wed, 6 Aug 2025 20:54:11 +0100
Subject: [PATCH 145/171] [AArch64] Treat single-vector ext as legal shuffle
masks. (#151909)
We can generate an ext instruction from single-source shuffles such as
<2, 3, 0, 1>. Add handling in isShuffleMaskLegal so that DAG combines can
optimize to it.
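For context, a minimal standalone sketch (hypothetical helper, not the actual `isSingletonEXTMask` implementation) of the property being checked: a single-source shuffle mask maps to an AArch64 EXT of a vector with itself exactly when its defined lanes are consecutive modulo the vector length, i.e. the mask is a rotation of `{0, ..., N-1}`. `<2, 3, 0, 1>` on a 4-lane vector is a rotation by 2.

```cpp
#include <vector>

// Returns true if Mask is a rotation of {0, 1, ..., N-1}; undef lanes are
// encoded as negative values and may match anything.
static bool isSingleSourceRotation(const std::vector<int> &Mask,
                                   unsigned &RotateAmt) {
  const unsigned N = static_cast<unsigned>(Mask.size());
  unsigned First = 0;
  while (First < N && Mask[First] < 0)
    ++First; // skip leading undef lanes
  if (First == N)
    return false; // all-undef mask: nothing to infer
  const unsigned Rot = (static_cast<unsigned>(Mask[First]) + N - First) % N;
  for (unsigned I = First + 1; I < N; ++I) {
    if (Mask[I] < 0)
      continue; // undef lane, matches anything
    if (static_cast<unsigned>(Mask[I]) != (I + Rot) % N)
      return false;
  }
  RotateAmt = Rot;
  return true;
}
```

The `Elt != -1` to `Elt >= 0` change in the diff below has the same flavor: any negative sentinel is treated as an undef lane, not just -1.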
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 5 +++--
llvm/test/CodeGen/AArch64/arm64-ext.ll | 10 ++++------
2 files changed, 7 insertions(+), 8 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index bad7ccde2e1e0..1f5ff4eb5b6d3 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -13482,7 +13482,7 @@ static bool isEXTMask(ArrayRef<int> M, EVT VT, bool &ReverseEXT,
// Look for the first non-undef element.
const int *FirstRealElt = find_if(M, [](int Elt) { return Elt >= 0; });
- // Benefit form APInt to handle overflow when calculating expected element.
+ // Benefit from APInt to handle overflow when calculating expected element.
unsigned NumElts = VT.getVectorNumElements();
unsigned MaskBits = APInt(32, NumElts * 2).logBase2();
APInt ExpectedElt = APInt(MaskBits, *FirstRealElt + 1, /*isSigned=*/false,
@@ -13490,7 +13490,7 @@ static bool isEXTMask(ArrayRef<int> M, EVT VT, bool &ReverseEXT,
// The following shuffle indices must be the successive elements after the
// first real element.
bool FoundWrongElt = std::any_of(FirstRealElt + 1, M.end(), [&](int Elt) {
- return Elt != ExpectedElt++ && Elt != -1;
+ return Elt != ExpectedElt++ && Elt >= 0;
});
if (FoundWrongElt)
return false;
@@ -15777,6 +15777,7 @@ bool AArch64TargetLowering::isShuffleMaskLegal(ArrayRef<int> M, EVT VT) const {
isREVMask(M, EltSize, NumElts, 32) ||
isREVMask(M, EltSize, NumElts, 16) ||
isEXTMask(M, VT, DummyBool, DummyUnsigned) ||
+ isSingletonEXTMask(M, VT, DummyUnsigned) ||
isTRNMask(M, NumElts, DummyUnsigned) ||
isUZPMask(M, NumElts, DummyUnsigned) ||
isZIPMask(M, NumElts, DummyUnsigned) ||
diff --git a/llvm/test/CodeGen/AArch64/arm64-ext.ll b/llvm/test/CodeGen/AArch64/arm64-ext.ll
index 8bf2b826d7101..c3670579c9148 100644
--- a/llvm/test/CodeGen/AArch64/arm64-ext.ll
+++ b/llvm/test/CodeGen/AArch64/arm64-ext.ll
@@ -139,9 +139,8 @@ define <2 x ptr> @test_v2p0(<2 x ptr> %a, <2 x ptr> %b) {
define <16 x i8> @reverse_vector_s8x16b(<16 x i8> noundef %x) {
; CHECK-SD-LABEL: reverse_vector_s8x16b:
; CHECK-SD: // %bb.0: // %entry
-; CHECK-SD-NEXT: rev64 v1.16b, v0.16b
-; CHECK-SD-NEXT: ext v0.16b, v1.16b, v1.16b, #8
-; CHECK-SD-NEXT: mov v0.d[1], v1.d[0]
+; CHECK-SD-NEXT: rev64 v0.16b, v0.16b
+; CHECK-SD-NEXT: ext v0.16b, v0.16b, v0.16b, #8
; CHECK-SD-NEXT: ret
;
; CHECK-GI-LABEL: reverse_vector_s8x16b:
@@ -161,9 +160,8 @@ entry:
define <8 x i16> @reverse_vector_s16x8b(<8 x i16> noundef %x) {
; CHECK-SD-LABEL: reverse_vector_s16x8b:
; CHECK-SD: // %bb.0: // %entry
-; CHECK-SD-NEXT: rev64 v1.8h, v0.8h
-; CHECK-SD-NEXT: ext v0.16b, v1.16b, v1.16b, #8
-; CHECK-SD-NEXT: mov v0.d[1], v1.d[0]
+; CHECK-SD-NEXT: rev64 v0.8h, v0.8h
+; CHECK-SD-NEXT: ext v0.16b, v0.16b, v0.16b, #8
; CHECK-SD-NEXT: ret
;
; CHECK-GI-LABEL: reverse_vector_s16x8b:
>From cae7bebcaa41e4c459e973b9688215f5a57bcb56 Mon Sep 17 00:00:00 2001
From: Eugene Epshteyn <eepshteyn at nvidia.com>
Date: Wed, 6 Aug 2025 16:02:27 -0400
Subject: [PATCH 146/171] [flang-rt] Runtime implementation of extended
intrinsic function SECNDS() (#152021)
Until the compiler part is fully hooked up via
https://github.com/llvm/llvm-project/pull/151878, this was tested by declaring
SECNDS as `external`:
```
external secnds
real s1, s2
s1 = secnds(0.0)
print *, "Seconds from midnight:", s1
call sleep(2)
s2 = secnds(s1)
print *, "Seconds from s1", s2
print *, "Seconds from midnight:", secnds(0.0)
end
```
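The implementation caches the midnight reference in a `std::atomic<std::time_t>` so that concurrent first callers agree on a single starting point; a float can only represent about 2^24 seconds precisely (roughly 194 days), which is why a nearby anchor is needed at all. A minimal sketch of that one-time-initialization pattern, with hypothetical names (the real code is in flang-rt/lib/runtime/extensions.cpp):

```cpp
#include <atomic>
#include <ctime>

// Threads race to publish a starting point; every thread adopts the value
// stored by whichever thread wins the compare-and-swap.
static std::time_t adoptStartingPoint(std::time_t candidate) {
  static std::atomic<std::time_t> startingPoint{0}; // 0 == uninitialized
  std::time_t expected = 0;
  if (startingPoint.compare_exchange_strong(expected, candidate,
                                            std::memory_order_acq_rel,
                                            std::memory_order_acquire))
    return candidate; // we won: our midnight value is now the shared one
  return expected;    // we lost: `expected` was loaded with the winner's value
}
```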
---
flang-rt/lib/runtime/extensions.cpp | 85 +++++++++++++++++++++++-
flang/include/flang/Runtime/extensions.h | 4 ++
2 files changed, 86 insertions(+), 3 deletions(-)
diff --git a/flang-rt/lib/runtime/extensions.cpp b/flang-rt/lib/runtime/extensions.cpp
index f6c39468d5655..a24810b4f344a 100644
--- a/flang-rt/lib/runtime/extensions.cpp
+++ b/flang-rt/lib/runtime/extensions.cpp
@@ -18,6 +18,7 @@
#include "flang/Runtime/entry-names.h"
#include "flang/Runtime/io-api.h"
#include "flang/Runtime/iostat-consts.h"
+#include <atomic>
#include <chrono>
#include <cstdio>
#include <cstring>
@@ -57,10 +58,76 @@ inline void CtimeBuffer(char *buffer, size_t bufsize, const time_t cur_time,
#include <direct.h>
#endif
-extern "C" {
-
namespace Fortran::runtime {
+// Common implementation that could be used for either SECNDS() or SECNDSD(),
+// which are defined for float or double.
+template <typename T> T SecndsImpl(T *refTime) {
+ static_assert(std::is_same<T, float>::value || std::is_same<T, double>::value,
+ "T must be float or double");
+ constexpr T FAIL_SECNDS{T{-1.0}}; // Failure code for this function
+ // Failure code for time functions that return std::time_t
+ constexpr std::time_t FAIL_TIME{std::time_t{-1}};
+ constexpr std::time_t TIME_UNINITIALIZED{std::time_t{0}};
+ if (!refTime) {
+ return FAIL_SECNDS;
+ }
+ std::time_t now{std::time(nullptr)};
+ if (now == FAIL_TIME) {
+ return FAIL_SECNDS;
+ }
+  // When the result is a float, we can only represent about 2^24 seconds
+  // precisely, which comes out to about 194 days. Thus, we need to pick
+  // a starting point that keeps the time diffs as precise as possible.
+  // Given the description of this function, midnight of the current day
+  // is the best starting point.
+ static std::atomic<std::time_t> startingPoint{TIME_UNINITIALIZED};
+ // "Acquire" will give us writes from other threads.
+ std::time_t localStartingPoint{startingPoint.load(std::memory_order_acquire)};
+ // Initialize startingPoint if we haven't initialized it yet or
+ // if we were passed 0.0, which indicates to compute seconds from
+ // current day's midnight.
+ if (localStartingPoint == TIME_UNINITIALIZED || *refTime == 0.0) {
+ // Compute midnight in the current timezone and try to initialize
+ // startingPoint with it. If there are any errors during computation,
+ // exit with error and hope that the other threads have better luck
+ // (or the user retries the call).
+ struct tm timeInfo;
+#ifdef _WIN32
+ if (localtime_s(&timeInfo, &now)) {
+#else
+ if (!localtime_r(&now, &timeInfo)) {
+#endif
+ return FAIL_SECNDS;
+ }
+ // Back to midnight
+ timeInfo.tm_hour = 0;
+ timeInfo.tm_min = 0;
+ timeInfo.tm_sec = 0;
+ localStartingPoint = std::mktime(&timeInfo);
+ if (localStartingPoint == FAIL_TIME) {
+ return FAIL_SECNDS;
+ }
+ INTERNAL_CHECK(localStartingPoint > TIME_UNINITIALIZED);
+ // Attempt to atomically set startingPoint to localStartingPoint
+ std::time_t expected{TIME_UNINITIALIZED};
+ if (startingPoint.compare_exchange_strong(expected, localStartingPoint,
+ std::memory_order_acq_rel, // "Acquire and release" on success
+ std::memory_order_acquire)) { // "Acquire" on failure
+ // startingPoint was set to localStartingPoint
+ } else {
+ // startingPoint was already initialized and its value was loaded
+ // into `expected`. Discard our precomputed midnight value in favor
+ // of the one from startingPoint.
+ localStartingPoint = expected;
+ }
+ }
+ double diffStartingPoint{std::difftime(now, localStartingPoint)};
+ return static_cast<T>(diffStartingPoint) - *refTime;
+}
+
+extern "C" {
+
gid_t RTNAME(GetGID)() {
#ifdef _WIN32
// Group IDs don't exist on Windows, return 1 to avoid errors
@@ -303,6 +370,17 @@ void FORTRAN_PROCEDURE_NAME(qsort)(int *array, int *len, int *isize,
// PERROR(STRING)
void RTNAME(Perror)(const char *str) { perror(str); }
+// GNU extension function SECNDS(refTime)
+float FORTRAN_PROCEDURE_NAME(secnds)(float *refTime) {
+ return SecndsImpl(refTime);
+}
+
+float RTNAME(Secnds)(float *refTime, const char *sourceFile, int line) {
+ Terminator terminator{sourceFile, line};
+ RUNTIME_CHECK(terminator, refTime != nullptr);
+ return FORTRAN_PROCEDURE_NAME(secnds)(refTime);
+}
+
// GNU extension function TIME()
std::int64_t RTNAME(time)() { return time(nullptr); }
@@ -337,5 +415,6 @@ std::int64_t RTNAME(Ftell)(int unitNumber) {
}
} // namespace io
-} // namespace Fortran::runtime
} // extern "C"
+
+} // namespace Fortran::runtime
diff --git a/flang/include/flang/Runtime/extensions.h b/flang/include/flang/Runtime/extensions.h
index b350204714431..9a100cec9e6b9 100644
--- a/flang/include/flang/Runtime/extensions.h
+++ b/flang/include/flang/Runtime/extensions.h
@@ -90,5 +90,9 @@ void RTNAME(Perror)(const char *str);
// MCLOCK -- returns accumulated time in ticks
int FORTRAN_PROCEDURE_NAME(mclock)();
+// GNU extension function SECNDS(refTime)
+float FORTRAN_PROCEDURE_NAME(secnds)(float *refTime);
+float RTNAME(Secnds)(float *refTime, const char *sourceFile, int line);
+
} // extern "C"
#endif // FORTRAN_RUNTIME_EXTENSIONS_H_
>From 334d0be2d496af6c511d2efb183b862e7d911329 Mon Sep 17 00:00:00 2001
From: Stanislav Mekhanoshin <Stanislav.Mekhanoshin at amd.com>
Date: Wed, 6 Aug 2025 13:07:56 -0700
Subject: [PATCH 147/171] [AMDGPU] Support 64-bit LDS atomic fadd on gfx1250
(#152368)
---
llvm/lib/Target/AMDGPU/GCNSubtarget.h | 2 +-
.../AMDGPU/GlobalISel/fp64-atomics-gfx90a.ll | 120 ++++----------
.../CodeGen/AMDGPU/fp64-atomics-gfx90a.ll | 151 +++---------------
3 files changed, 53 insertions(+), 220 deletions(-)
diff --git a/llvm/lib/Target/AMDGPU/GCNSubtarget.h b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
index 5530886831cae..9114f249c92a7 100644
--- a/llvm/lib/Target/AMDGPU/GCNSubtarget.h
+++ b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
@@ -1081,7 +1081,7 @@ class GCNSubtarget final : public AMDGPUGenSubtargetInfo,
}
bool hasLDSFPAtomicAddF32() const { return GFX8Insts; }
- bool hasLDSFPAtomicAddF64() const { return GFX90AInsts; }
+ bool hasLDSFPAtomicAddF64() const { return GFX90AInsts || GFX1250Insts; }
/// \returns true if the subtarget has the v_permlanex16_b32 instruction.
bool hasPermLaneX16() const { return getGeneration() >= GFX10; }
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/fp64-atomics-gfx90a.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/fp64-atomics-gfx90a.ll
index 2785b78da99e2..481a2540eacb7 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/fp64-atomics-gfx90a.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/fp64-atomics-gfx90a.ll
@@ -2243,36 +2243,22 @@ define amdgpu_kernel void @local_atomic_fadd_f64_noret_pat(ptr addrspace(3) %ptr
;
; GFX1250-LABEL: local_atomic_fadd_f64_noret_pat:
; GFX1250: ; %bb.0: ; %main_body
+; GFX1250-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-NEXT: s_mov_b32 s1, exec_lo
-; GFX1250-NEXT: s_mov_b32 s0, 0
-; GFX1250-NEXT: v_mbcnt_lo_u32_b32 v0, s1, 0
-; GFX1250-NEXT: s_mov_b32 s2, exec_lo
+; GFX1250-NEXT: v_mbcnt_lo_u32_b32 v0, s0, 0
; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
; GFX1250-NEXT: v_cmpx_eq_u32_e32 0, v0
-; GFX1250-NEXT: s_cbranch_execz .LBB51_3
+; GFX1250-NEXT: s_cbranch_execz .LBB51_2
; GFX1250-NEXT: ; %bb.1:
-; GFX1250-NEXT: s_bcnt1_i32_b32 s1, s1
-; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
-; GFX1250-NEXT: v_cvt_f64_u32_e32 v[0:1], s1
-; GFX1250-NEXT: s_load_b32 s1, s[4:5], 0x24
+; GFX1250-NEXT: s_bcnt1_i32_b32 s0, s0
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_2) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_cvt_f64_u32_e32 v[0:1], s0
+; GFX1250-NEXT: s_load_b32 s0, s[4:5], 0x24
; GFX1250-NEXT: s_wait_kmcnt 0x0
-; GFX1250-NEXT: v_mov_b32_e32 v4, s1
-; GFX1250-NEXT: ds_load_b64 v[2:3], v4
-; GFX1250-NEXT: v_mul_f64_e32 v[0:1], 4.0, v[0:1]
-; GFX1250-NEXT: .LBB51_2: ; %atomicrmw.start
-; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
-; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-NEXT: v_add_f64_e32 v[6:7], v[2:3], v[0:1]
-; GFX1250-NEXT: ds_cmpstore_rtn_b64 v[6:7], v4, v[6:7], v[2:3]
+; GFX1250-NEXT: v_dual_mul_f64 v[0:1], 4.0, v[0:1] :: v_dual_mov_b32 v2, s0
+; GFX1250-NEXT: ds_add_f64 v2, v[0:1]
; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[6:7], v[2:3]
-; GFX1250-NEXT: v_mov_b64_e32 v[2:3], v[6:7]
-; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
-; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
-; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
-; GFX1250-NEXT: s_cbranch_execnz .LBB51_2
-; GFX1250-NEXT: .LBB51_3:
+; GFX1250-NEXT: .LBB51_2:
; GFX1250-NEXT: s_endpgm
main_body:
%ret = atomicrmw fadd ptr addrspace(3) %ptr, double 4.0 seq_cst, !amdgpu.no.fine.grained.memory !0
@@ -2322,36 +2308,22 @@ define amdgpu_kernel void @local_atomic_fadd_f64_noret_pat_flush(ptr addrspace(3
;
; GFX1250-LABEL: local_atomic_fadd_f64_noret_pat_flush:
; GFX1250: ; %bb.0: ; %main_body
+; GFX1250-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-NEXT: s_mov_b32 s1, exec_lo
-; GFX1250-NEXT: s_mov_b32 s0, 0
-; GFX1250-NEXT: v_mbcnt_lo_u32_b32 v0, s1, 0
-; GFX1250-NEXT: s_mov_b32 s2, exec_lo
+; GFX1250-NEXT: v_mbcnt_lo_u32_b32 v0, s0, 0
; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
; GFX1250-NEXT: v_cmpx_eq_u32_e32 0, v0
-; GFX1250-NEXT: s_cbranch_execz .LBB52_3
+; GFX1250-NEXT: s_cbranch_execz .LBB52_2
; GFX1250-NEXT: ; %bb.1:
-; GFX1250-NEXT: s_bcnt1_i32_b32 s1, s1
-; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
-; GFX1250-NEXT: v_cvt_f64_u32_e32 v[0:1], s1
-; GFX1250-NEXT: s_load_b32 s1, s[4:5], 0x24
+; GFX1250-NEXT: s_bcnt1_i32_b32 s0, s0
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_2) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_cvt_f64_u32_e32 v[0:1], s0
+; GFX1250-NEXT: s_load_b32 s0, s[4:5], 0x24
; GFX1250-NEXT: s_wait_kmcnt 0x0
-; GFX1250-NEXT: v_mov_b32_e32 v4, s1
-; GFX1250-NEXT: ds_load_b64 v[2:3], v4
-; GFX1250-NEXT: v_mul_f64_e32 v[0:1], 4.0, v[0:1]
-; GFX1250-NEXT: .LBB52_2: ; %atomicrmw.start
-; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
+; GFX1250-NEXT: v_dual_mul_f64 v[0:1], 4.0, v[0:1] :: v_dual_mov_b32 v2, s0
+; GFX1250-NEXT: ds_add_f64 v2, v[0:1]
; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-NEXT: v_add_f64_e32 v[6:7], v[2:3], v[0:1]
-; GFX1250-NEXT: ds_cmpstore_rtn_b64 v[6:7], v4, v[6:7], v[2:3]
-; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[6:7], v[2:3]
-; GFX1250-NEXT: v_mov_b64_e32 v[2:3], v[6:7]
-; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
-; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
-; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
-; GFX1250-NEXT: s_cbranch_execnz .LBB52_2
-; GFX1250-NEXT: .LBB52_3:
+; GFX1250-NEXT: .LBB52_2:
; GFX1250-NEXT: s_endpgm
main_body:
%ret = atomicrmw fadd ptr addrspace(3) %ptr, double 4.0 seq_cst, !amdgpu.no.fine.grained.memory !0
@@ -2401,36 +2373,22 @@ define amdgpu_kernel void @local_atomic_fadd_f64_noret_pat_flush_safe(ptr addrsp
;
; GFX1250-LABEL: local_atomic_fadd_f64_noret_pat_flush_safe:
; GFX1250: ; %bb.0: ; %main_body
+; GFX1250-NEXT: s_mov_b32 s0, exec_lo
; GFX1250-NEXT: s_mov_b32 s1, exec_lo
-; GFX1250-NEXT: s_mov_b32 s0, 0
-; GFX1250-NEXT: v_mbcnt_lo_u32_b32 v0, s1, 0
-; GFX1250-NEXT: s_mov_b32 s2, exec_lo
+; GFX1250-NEXT: v_mbcnt_lo_u32_b32 v0, s0, 0
; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
; GFX1250-NEXT: v_cmpx_eq_u32_e32 0, v0
-; GFX1250-NEXT: s_cbranch_execz .LBB53_3
+; GFX1250-NEXT: s_cbranch_execz .LBB53_2
; GFX1250-NEXT: ; %bb.1:
-; GFX1250-NEXT: s_bcnt1_i32_b32 s1, s1
-; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
-; GFX1250-NEXT: v_cvt_f64_u32_e32 v[0:1], s1
-; GFX1250-NEXT: s_load_b32 s1, s[4:5], 0x24
+; GFX1250-NEXT: s_bcnt1_i32_b32 s0, s0
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_2) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_cvt_f64_u32_e32 v[0:1], s0
+; GFX1250-NEXT: s_load_b32 s0, s[4:5], 0x24
; GFX1250-NEXT: s_wait_kmcnt 0x0
-; GFX1250-NEXT: v_mov_b32_e32 v4, s1
-; GFX1250-NEXT: ds_load_b64 v[2:3], v4
-; GFX1250-NEXT: v_mul_f64_e32 v[0:1], 4.0, v[0:1]
-; GFX1250-NEXT: .LBB53_2: ; %atomicrmw.start
-; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
-; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-NEXT: v_add_f64_e32 v[6:7], v[2:3], v[0:1]
-; GFX1250-NEXT: ds_cmpstore_rtn_b64 v[6:7], v4, v[6:7], v[2:3]
+; GFX1250-NEXT: v_dual_mul_f64 v[0:1], 4.0, v[0:1] :: v_dual_mov_b32 v2, s0
+; GFX1250-NEXT: ds_add_f64 v2, v[0:1]
; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[6:7], v[2:3]
-; GFX1250-NEXT: v_mov_b64_e32 v[2:3], v[6:7]
-; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
-; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
-; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
-; GFX1250-NEXT: s_cbranch_execnz .LBB53_2
-; GFX1250-NEXT: .LBB53_3:
+; GFX1250-NEXT: .LBB53_2:
; GFX1250-NEXT: s_endpgm
main_body:
%ret = atomicrmw fadd ptr addrspace(3) %ptr, double 4.0 seq_cst, !amdgpu.no.fine.grained.memory !0
@@ -2459,23 +2417,9 @@ define double @local_atomic_fadd_f64_rtn_pat(ptr addrspace(3) %ptr, double %data
; GFX1250: ; %bb.0: ; %main_body
; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-NEXT: s_wait_kmcnt 0x0
-; GFX1250-NEXT: v_mov_b32_e32 v2, v0
-; GFX1250-NEXT: ds_load_b64 v[0:1], v0
-; GFX1250-NEXT: s_mov_b32 s0, 0
-; GFX1250-NEXT: .LBB54_1: ; %atomicrmw.start
-; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
-; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_mov_b64_e32 v[4:5], v[0:1]
-; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_4) | instid1(SALU_CYCLE_1)
-; GFX1250-NEXT: v_add_f64_e32 v[0:1], 4.0, v[4:5]
-; GFX1250-NEXT: ds_cmpstore_rtn_b64 v[0:1], v2, v[0:1], v[4:5]
+; GFX1250-NEXT: v_mov_b64_e32 v[2:3], 4.0
+; GFX1250-NEXT: ds_add_rtn_f64 v[0:1], v0, v[2:3]
; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[0:1], v[4:5]
-; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
-; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
-; GFX1250-NEXT: s_cbranch_execnz .LBB54_1
-; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
-; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
; GFX1250-NEXT: s_set_pc_i64 s[30:31]
main_body:
%ret = atomicrmw fadd ptr addrspace(3) %ptr, double 4.0 seq_cst
diff --git a/llvm/test/CodeGen/AMDGPU/fp64-atomics-gfx90a.ll b/llvm/test/CodeGen/AMDGPU/fp64-atomics-gfx90a.ll
index f9a24fee59692..0cb2b0b7df3d2 100644
--- a/llvm/test/CodeGen/AMDGPU/fp64-atomics-gfx90a.ll
+++ b/llvm/test/CodeGen/AMDGPU/fp64-atomics-gfx90a.ll
@@ -2102,23 +2102,10 @@ define amdgpu_kernel void @local_atomic_fadd_f64_noret(ptr addrspace(3) %ptr, do
; GFX1250-NEXT: s_load_b32 s2, s[4:5], 0x24
; GFX1250-NEXT: s_load_b64 s[0:1], s[4:5], 0x2c
; GFX1250-NEXT: s_wait_kmcnt 0x0
-; GFX1250-NEXT: v_dual_mov_b32 v0, s2 :: v_dual_mov_b32 v2, s2
-; GFX1250-NEXT: s_mov_b32 s2, 0
-; GFX1250-NEXT: ds_load_b64 v[0:1], v0
-; GFX1250-NEXT: .LBB51_1: ; %atomicrmw.start
-; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
+; GFX1250-NEXT: v_mov_b32_e32 v2, s2
+; GFX1250-NEXT: v_mov_b64_e32 v[0:1], s[0:1]
+; GFX1250-NEXT: ds_add_f64 v2, v[0:1]
; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-NEXT: v_add_f64_e32 v[4:5], s[0:1], v[0:1]
-; GFX1250-NEXT: ds_cmpstore_rtn_b64 v[4:5], v2, v[4:5], v[0:1]
-; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[4:5], v[0:1]
-; GFX1250-NEXT: v_mov_b64_e32 v[0:1], v[4:5]
-; GFX1250-NEXT: s_or_b32 s2, vcc_lo, s2
-; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
-; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s2
-; GFX1250-NEXT: s_cbranch_execnz .LBB51_1
-; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
; GFX1250-NEXT: s_endpgm
main_body:
%ret = call double @llvm.amdgcn.ds.fadd.f64(ptr addrspace(3) %ptr, double %data, i32 0, i32 0, i1 0)
@@ -2148,24 +2135,9 @@ define double @local_atomic_fadd_f64_rtn(ptr addrspace(3) %ptr, double %data) {
; GFX1250: ; %bb.0: ; %main_body
; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-NEXT: s_wait_kmcnt 0x0
-; GFX1250-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_mov_b32 v2, v0
-; GFX1250-NEXT: v_mov_b32_e32 v4, v1
-; GFX1250-NEXT: ds_load_b64 v[0:1], v0
-; GFX1250-NEXT: s_mov_b32 s0, 0
-; GFX1250-NEXT: .LBB52_1: ; %atomicrmw.start
-; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
-; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_mov_b64_e32 v[6:7], v[0:1]
-; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_4) | instid1(SALU_CYCLE_1)
-; GFX1250-NEXT: v_add_f64_e32 v[0:1], v[6:7], v[4:5]
-; GFX1250-NEXT: ds_cmpstore_rtn_b64 v[0:1], v2, v[0:1], v[6:7]
+; GFX1250-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-NEXT: ds_add_rtn_f64 v[0:1], v0, v[2:3]
; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[0:1], v[6:7]
-; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
-; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
-; GFX1250-NEXT: s_cbranch_execnz .LBB52_1
-; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
-; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
; GFX1250-NEXT: s_set_pc_i64 s[30:31]
main_body:
%ret = call double @llvm.amdgcn.ds.fadd.f64(ptr addrspace(3) %ptr, double %data, i32 0, i32 0, i1 0)
@@ -2197,24 +2169,11 @@ define amdgpu_kernel void @local_atomic_fadd_f64_noret_pat(ptr addrspace(3) %ptr
; GFX1250-LABEL: local_atomic_fadd_f64_noret_pat:
; GFX1250: ; %bb.0: ; %main_body
; GFX1250-NEXT: s_load_b32 s0, s[4:5], 0x24
+; GFX1250-NEXT: v_mov_b64_e32 v[0:1], 4.0
; GFX1250-NEXT: s_wait_kmcnt 0x0
-; GFX1250-NEXT: v_dual_mov_b32 v0, s0 :: v_dual_mov_b32 v2, s0
-; GFX1250-NEXT: s_mov_b32 s0, 0
-; GFX1250-NEXT: ds_load_b64 v[0:1], v0
-; GFX1250-NEXT: .LBB53_1: ; %atomicrmw.start
-; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
+; GFX1250-NEXT: v_mov_b32_e32 v2, s0
+; GFX1250-NEXT: ds_add_f64 v2, v[0:1]
; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-NEXT: v_add_f64_e32 v[4:5], 4.0, v[0:1]
-; GFX1250-NEXT: ds_cmpstore_rtn_b64 v[4:5], v2, v[4:5], v[0:1]
-; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[4:5], v[0:1]
-; GFX1250-NEXT: v_mov_b64_e32 v[0:1], v[4:5]
-; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
-; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
-; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
-; GFX1250-NEXT: s_cbranch_execnz .LBB53_1
-; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
; GFX1250-NEXT: s_endpgm
main_body:
%ret = atomicrmw fadd ptr addrspace(3) %ptr, double 4.0 seq_cst, !amdgpu.no.fine.grained.memory !0
@@ -2246,24 +2205,11 @@ define amdgpu_kernel void @local_atomic_fadd_f64_noret_pat_flush(ptr addrspace(3
; GFX1250-LABEL: local_atomic_fadd_f64_noret_pat_flush:
; GFX1250: ; %bb.0: ; %main_body
; GFX1250-NEXT: s_load_b32 s0, s[4:5], 0x24
+; GFX1250-NEXT: v_mov_b64_e32 v[0:1], 4.0
; GFX1250-NEXT: s_wait_kmcnt 0x0
-; GFX1250-NEXT: v_dual_mov_b32 v0, s0 :: v_dual_mov_b32 v2, s0
-; GFX1250-NEXT: s_mov_b32 s0, 0
-; GFX1250-NEXT: ds_load_b64 v[0:1], v0
-; GFX1250-NEXT: .LBB54_1: ; %atomicrmw.start
-; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
-; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-NEXT: v_add_f64_e32 v[4:5], 4.0, v[0:1]
-; GFX1250-NEXT: ds_cmpstore_rtn_b64 v[4:5], v2, v[4:5], v[0:1]
+; GFX1250-NEXT: v_mov_b32_e32 v2, s0
+; GFX1250-NEXT: ds_add_f64 v2, v[0:1]
; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[4:5], v[0:1]
-; GFX1250-NEXT: v_mov_b64_e32 v[0:1], v[4:5]
-; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
-; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
-; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
-; GFX1250-NEXT: s_cbranch_execnz .LBB54_1
-; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
; GFX1250-NEXT: s_endpgm
main_body:
%ret = atomicrmw fadd ptr addrspace(3) %ptr, double 4.0 seq_cst, !amdgpu.no.fine.grained.memory !0
@@ -2295,24 +2241,11 @@ define amdgpu_kernel void @local_atomic_fadd_f64_noret_pat_flush_safe(ptr addrsp
; GFX1250-LABEL: local_atomic_fadd_f64_noret_pat_flush_safe:
; GFX1250: ; %bb.0: ; %main_body
; GFX1250-NEXT: s_load_b32 s0, s[4:5], 0x24
+; GFX1250-NEXT: v_mov_b64_e32 v[0:1], 4.0
; GFX1250-NEXT: s_wait_kmcnt 0x0
-; GFX1250-NEXT: v_dual_mov_b32 v0, s0 :: v_dual_mov_b32 v2, s0
-; GFX1250-NEXT: s_mov_b32 s0, 0
-; GFX1250-NEXT: ds_load_b64 v[0:1], v0
-; GFX1250-NEXT: .LBB55_1: ; %atomicrmw.start
-; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
-; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX1250-NEXT: v_add_f64_e32 v[4:5], 4.0, v[0:1]
-; GFX1250-NEXT: ds_cmpstore_rtn_b64 v[4:5], v2, v[4:5], v[0:1]
+; GFX1250-NEXT: v_mov_b32_e32 v2, s0
+; GFX1250-NEXT: ds_add_f64 v2, v[0:1]
; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[4:5], v[0:1]
-; GFX1250-NEXT: v_mov_b64_e32 v[0:1], v[4:5]
-; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
-; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
-; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
-; GFX1250-NEXT: s_cbranch_execnz .LBB55_1
-; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
; GFX1250-NEXT: s_endpgm
main_body:
%ret = atomicrmw fadd ptr addrspace(3) %ptr, double 4.0 seq_cst
@@ -2341,23 +2274,9 @@ define double @local_atomic_fadd_f64_rtn_pat(ptr addrspace(3) %ptr, double %data
; GFX1250: ; %bb.0: ; %main_body
; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-NEXT: s_wait_kmcnt 0x0
-; GFX1250-NEXT: v_mov_b32_e32 v2, v0
-; GFX1250-NEXT: ds_load_b64 v[0:1], v0
-; GFX1250-NEXT: s_mov_b32 s0, 0
-; GFX1250-NEXT: .LBB56_1: ; %atomicrmw.start
-; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
-; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_mov_b64_e32 v[4:5], v[0:1]
-; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_4) | instid1(SALU_CYCLE_1)
-; GFX1250-NEXT: v_add_f64_e32 v[0:1], 4.0, v[4:5]
-; GFX1250-NEXT: ds_cmpstore_rtn_b64 v[0:1], v2, v[0:1], v[4:5]
+; GFX1250-NEXT: v_mov_b64_e32 v[2:3], 4.0
+; GFX1250-NEXT: ds_add_rtn_f64 v[0:1], v0, v[2:3]
; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[0:1], v[4:5]
-; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
-; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
-; GFX1250-NEXT: s_cbranch_execnz .LBB56_1
-; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
-; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
; GFX1250-NEXT: s_set_pc_i64 s[30:31]
main_body:
%ret = atomicrmw fadd ptr addrspace(3) %ptr, double 4.0 seq_cst, !amdgpu.no.fine.grained.memory !0
@@ -2387,24 +2306,9 @@ define double @local_atomic_fadd_f64_rtn_ieee_unsafe(ptr addrspace(3) %ptr, doub
; GFX1250: ; %bb.0: ; %main_body
; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-NEXT: s_wait_kmcnt 0x0
-; GFX1250-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_mov_b32 v2, v0
-; GFX1250-NEXT: v_mov_b32_e32 v4, v1
-; GFX1250-NEXT: ds_load_b64 v[0:1], v0
-; GFX1250-NEXT: s_mov_b32 s0, 0
-; GFX1250-NEXT: .LBB57_1: ; %atomicrmw.start
-; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
-; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_mov_b64_e32 v[6:7], v[0:1]
-; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_4) | instid1(SALU_CYCLE_1)
-; GFX1250-NEXT: v_add_f64_e32 v[0:1], v[6:7], v[4:5]
-; GFX1250-NEXT: ds_cmpstore_rtn_b64 v[0:1], v2, v[0:1], v[6:7]
+; GFX1250-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-NEXT: ds_add_rtn_f64 v[0:1], v0, v[2:3]
; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[0:1], v[6:7]
-; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
-; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
-; GFX1250-NEXT: s_cbranch_execnz .LBB57_1
-; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
-; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
; GFX1250-NEXT: s_set_pc_i64 s[30:31]
main_body:
%ret = call double @llvm.amdgcn.ds.fadd.f64(ptr addrspace(3) %ptr, double %data, i32 0, i32 0, i1 0)
@@ -2434,24 +2338,9 @@ define double @local_atomic_fadd_f64_rtn_ieee_safe(ptr addrspace(3) %ptr, double
; GFX1250: ; %bb.0: ; %main_body
; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
; GFX1250-NEXT: s_wait_kmcnt 0x0
-; GFX1250-NEXT: v_dual_mov_b32 v5, v2 :: v_dual_mov_b32 v2, v0
-; GFX1250-NEXT: v_mov_b32_e32 v4, v1
-; GFX1250-NEXT: ds_load_b64 v[0:1], v0
-; GFX1250-NEXT: s_mov_b32 s0, 0
-; GFX1250-NEXT: .LBB58_1: ; %atomicrmw.start
-; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
-; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_mov_b64_e32 v[6:7], v[0:1]
-; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_4) | instid1(SALU_CYCLE_1)
-; GFX1250-NEXT: v_add_f64_e32 v[0:1], v[6:7], v[4:5]
-; GFX1250-NEXT: ds_cmpstore_rtn_b64 v[0:1], v2, v[0:1], v[6:7]
+; GFX1250-NEXT: v_dual_mov_b32 v3, v2 :: v_dual_mov_b32 v2, v1
+; GFX1250-NEXT: ds_add_rtn_f64 v[0:1], v0, v[2:3]
; GFX1250-NEXT: s_wait_dscnt 0x0
-; GFX1250-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[0:1], v[6:7]
-; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
-; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
-; GFX1250-NEXT: s_cbranch_execnz .LBB58_1
-; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
-; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
; GFX1250-NEXT: s_set_pc_i64 s[30:31]
main_body:
%ret = call double @llvm.amdgcn.ds.fadd.f64(ptr addrspace(3) %ptr, double %data, i32 0, i32 0, i1 0)
>From c2eddec4ff42eca8a93e3f8a0531dfb6e60a61ca Mon Sep 17 00:00:00 2001
From: Stanislav Mekhanoshin <Stanislav.Mekhanoshin at amd.com>
Date: Wed, 6 Aug 2025 13:08:12 -0700
Subject: [PATCH 148/171] [AMDGPU] System scope atomics are emulated over PCIe
in gfx1250 (#152369)
HW will emulate unsupported PCIe atomics via a CAS loop, so we no longer
need to expand them.
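In practice this flips one decision in atomic expansion: system-scope integer min/max atomics, which previously always expanded to a CAS loop, can stay as native instructions on subtargets with the new feature. A simplified sketch of that decision (illustrative only, not the actual SIISelLowering code):

```cpp
// Mirrors the shape of the change in shouldExpandAtomicRMWInIR below.
enum class AtomicExpansionKind { None, CmpXChg };

struct SubtargetInfo {
  bool HasEmulatedSystemScopeAtomics = false;
  bool hasEmulatedSystemScopeAtomics() const {
    return HasEmulatedSystemScopeAtomics;
  }
};

AtomicExpansionKind shouldExpandMinMax(const SubtargetInfo &ST,
                                       bool HasSystemScope) {
  // Before: system scope alone forced a CmpXChg expansion.
  // After: subtargets whose HW emulates the unsupported PCIe atomics keep
  // the native atomic min/max instruction.
  if (HasSystemScope && !ST.hasEmulatedSystemScopeAtomics())
    return AtomicExpansionKind::CmpXChg;
  return AtomicExpansionKind::None;
}
```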
---
llvm/lib/Target/AMDGPU/AMDGPU.td | 9 +
llvm/lib/Target/AMDGPU/GCNSubtarget.h | 7 +
llvm/lib/Target/AMDGPU/SIISelLowering.cpp | 5 +-
.../CodeGen/AMDGPU/atomics-system-scope.ll | 1486 +++++++++++++++++
llvm/test/CodeGen/AMDGPU/literal64.ll | 20 +-
5 files changed, 1507 insertions(+), 20 deletions(-)
create mode 100644 llvm/test/CodeGen/AMDGPU/atomics-system-scope.ll
diff --git a/llvm/lib/Target/AMDGPU/AMDGPU.td b/llvm/lib/Target/AMDGPU/AMDGPU.td
index d84f512f4976d..ddeca07e51103 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPU.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPU.td
@@ -1013,6 +1013,14 @@ def FeatureAgentScopeFineGrainedRemoteMemoryAtomics
"device memory."
>;
+def FeatureEmulatedSystemScopeAtomics
+ : SubtargetFeature<"emulated-system-scope-atomics",
+ "HasEmulatedSystemScopeAtomics",
+ "true",
+  "System scope atomics unsupported by PCIe are emulated in HW via a CAS "
+  "loop and are functional."
+>;
+
def FeatureDefaultComponentZero : SubtargetFeature<"default-component-zero",
"HasDefaultComponentZero",
"true",
@@ -2062,6 +2070,7 @@ def FeatureISAVersion12_50 : FeatureSet<
FeatureAtomicFMinFMaxF64FlatInsts,
FeatureFlatBufferGlobalAtomicFaddF64Inst,
FeatureMemoryAtomicFAddF32DenormalSupport,
+ FeatureEmulatedSystemScopeAtomics,
FeatureGloballyAddressableScratch,
FeatureKernargPreload,
FeatureVmemPrefInsts,
diff --git a/llvm/lib/Target/AMDGPU/GCNSubtarget.h b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
index 9114f249c92a7..1c3749d81eec8 100644
--- a/llvm/lib/Target/AMDGPU/GCNSubtarget.h
+++ b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
@@ -187,6 +187,7 @@ class GCNSubtarget final : public AMDGPUGenSubtargetInfo,
bool HasFlatBufferGlobalAtomicFaddF64Inst = false;
bool HasDefaultComponentZero = false;
bool HasAgentScopeFineGrainedRemoteMemoryAtomics = false;
+ bool HasEmulatedSystemScopeAtomics = false;
bool HasDefaultComponentBroadcast = false;
bool HasXF32Insts = false;
/// The maximum number of instructions that may be placed within an S_CLAUSE,
@@ -950,6 +951,12 @@ class GCNSubtarget final : public AMDGPUGenSubtargetInfo,
return HasAgentScopeFineGrainedRemoteMemoryAtomics;
}
+  /// \return true if HW emulates system scope atomics unsupported by PCIe
+  /// via a CAS loop.
+ bool hasEmulatedSystemScopeAtomics() const {
+ return HasEmulatedSystemScopeAtomics;
+ }
+
bool hasDefaultComponentZero() const { return HasDefaultComponentZero; }
bool hasDefaultComponentBroadcast() const {
diff --git a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
index 63826b782a377..8f44c03d95b43 100644
--- a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
@@ -17695,6 +17695,8 @@ static bool globalMemoryFPAtomicIsLegal(const GCNSubtarget &Subtarget,
if (Subtarget.supportsAgentScopeFineGrainedRemoteMemoryAtomics() &&
RMW->hasMetadata("amdgpu.no.remote.memory"))
return true;
+ if (Subtarget.hasEmulatedSystemScopeAtomics())
+ return true;
} else if (Subtarget.supportsAgentScopeFineGrainedRemoteMemoryAtomics())
return true;
@@ -17942,8 +17944,7 @@ SITargetLowering::shouldExpandAtomicRMWInIR(AtomicRMWInst *RMW) const {
case AtomicRMWInst::UMax: {
if (AMDGPU::isFlatGlobalAddrSpace(AS) ||
AS == AMDGPUAS::BUFFER_FAT_POINTER) {
- // Always expand system scope min/max atomics.
- if (HasSystemScope)
+ if (HasSystemScope && !Subtarget->hasEmulatedSystemScopeAtomics())
return AtomicExpansionKind::CmpXChg;
}
diff --git a/llvm/test/CodeGen/AMDGPU/atomics-system-scope.ll b/llvm/test/CodeGen/AMDGPU/atomics-system-scope.ll
new file mode 100644
index 0000000000000..5fc9f4a0f8038
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/atomics-system-scope.ll
@@ -0,0 +1,1486 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=amdgcn -mcpu=gfx1250 < %s | FileCheck --check-prefix=GFX1250 %s
+
+define float @global_system_atomic_fadd_f32(ptr addrspace(1) %ptr, float %val) {
+; GFX1250-LABEL: global_system_atomic_fadd_f32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_add_f32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fadd ptr addrspace(1) %ptr, float %val monotonic
+ ret float %result
+}
+
+define float @global_one_as_atomic_fadd_f32(ptr addrspace(1) %ptr, float %val) {
+; GFX1250-LABEL: global_one_as_atomic_fadd_f32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_add_f32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fadd ptr addrspace(1) %ptr, float %val syncscope("one-as") monotonic
+ ret float %result
+}
+
+define double @global_system_atomic_fadd_f64(ptr addrspace(1) %ptr, double %val) {
+; GFX1250-LABEL: global_system_atomic_fadd_f64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_add_f64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fadd ptr addrspace(1) %ptr, double %val monotonic
+ ret double %result
+}
+
+define double @global_one_as_atomic_fadd_f64(ptr addrspace(1) %ptr, double %val) {
+; GFX1250-LABEL: global_one_as_atomic_fadd_f64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_add_f64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fadd ptr addrspace(1) %ptr, double %val syncscope("one-as") monotonic
+ ret double %result
+}
+
+define float @global_system_atomic_fmin_f32(ptr addrspace(1) %ptr, float %val) {
+; GFX1250-LABEL: global_system_atomic_fmin_f32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_min_num_f32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmin ptr addrspace(1) %ptr, float %val monotonic
+ ret float %result
+}
+
+define float @global_one_as_atomic_fmin_f32(ptr addrspace(1) %ptr, float %val) {
+; GFX1250-LABEL: global_one_as_atomic_fmin_f32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_min_num_f32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmin ptr addrspace(1) %ptr, float %val syncscope("one-as") monotonic
+ ret float %result
+}
+
+define double @global_system_atomic_fmin_f64(ptr addrspace(1) %ptr, double %val) {
+; GFX1250-LABEL: global_system_atomic_fmin_f64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_min_num_f64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmin ptr addrspace(1) %ptr, double %val monotonic
+ ret double %result
+}
+
+define double @global_one_as_atomic_fmin_f64(ptr addrspace(1) %ptr, double %val) {
+; GFX1250-LABEL: global_one_as_atomic_fmin_f64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_min_num_f64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmin ptr addrspace(1) %ptr, double %val syncscope("one-as") monotonic
+ ret double %result
+}
+
+define float @global_system_atomic_fmax_f32(ptr addrspace(1) %ptr, float %val) {
+; GFX1250-LABEL: global_system_atomic_fmax_f32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_max_num_f32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmax ptr addrspace(1) %ptr, float %val monotonic
+ ret float %result
+}
+
+define float @global_one_as_atomic_fmax_f32(ptr addrspace(1) %ptr, float %val) {
+; GFX1250-LABEL: global_one_as_atomic_fmax_f32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_max_num_f32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmax ptr addrspace(1) %ptr, float %val syncscope("one-as") monotonic
+ ret float %result
+}
+
+define double @global_system_atomic_fmax_f64(ptr addrspace(1) %ptr, double %val) {
+; GFX1250-LABEL: global_system_atomic_fmax_f64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_max_num_f64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmax ptr addrspace(1) %ptr, double %val monotonic
+ ret double %result
+}
+
+define double @global_one_as_atomic_fmax_f64(ptr addrspace(1) %ptr, double %val) {
+; GFX1250-LABEL: global_one_as_atomic_fmax_f64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_max_num_f64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmax ptr addrspace(1) %ptr, double %val syncscope("one-as") monotonic
+ ret double %result
+}
+
+define i32 @global_one_as_atomic_min_i32(ptr addrspace(1) %ptr, i32 %val) {
+; GFX1250-LABEL: global_one_as_atomic_min_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_min_i32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw min ptr addrspace(1) %ptr, i32 %val syncscope("one-as") monotonic
+ ret i32 %result
+}
+
+define i32 @global_system_atomic_min_i32(ptr addrspace(1) %ptr, i32 %val) {
+; GFX1250-LABEL: global_system_atomic_min_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_min_i32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw min ptr addrspace(1) %ptr, i32 %val monotonic
+ ret i32 %result
+}
+
+define i32 @global_one_as_atomic_max_i32(ptr addrspace(1) %ptr, i32 %val) {
+; GFX1250-LABEL: global_one_as_atomic_max_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_max_i32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw max ptr addrspace(1) %ptr, i32 %val syncscope("one-as") monotonic
+ ret i32 %result
+}
+
+define i32 @global_system_atomic_max_i32(ptr addrspace(1) %ptr, i32 %val) {
+; GFX1250-LABEL: global_system_atomic_max_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_max_i32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw max ptr addrspace(1) %ptr, i32 %val monotonic
+ ret i32 %result
+}
+
+define i32 @global_one_as_atomic_umin_i32(ptr addrspace(1) %ptr, i32 %val) {
+; GFX1250-LABEL: global_one_as_atomic_umin_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_min_u32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umin ptr addrspace(1) %ptr, i32 %val syncscope("one-as") monotonic
+ ret i32 %result
+}
+
+define i32 @global_system_atomic_umin_i32(ptr addrspace(1) %ptr, i32 %val) {
+; GFX1250-LABEL: global_system_atomic_umin_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_min_u32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umin ptr addrspace(1) %ptr, i32 %val monotonic
+ ret i32 %result
+}
+
+define i32 @global_one_as_atomic_umax_i32(ptr addrspace(1) %ptr, i32 %val) {
+; GFX1250-LABEL: global_one_as_atomic_umax_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_max_u32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umax ptr addrspace(1) %ptr, i32 %val syncscope("one-as") monotonic
+ ret i32 %result
+}
+
+define i32 @global_system_atomic_umax_i32(ptr addrspace(1) %ptr, i32 %val) {
+; GFX1250-LABEL: global_system_atomic_umax_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_max_u32 v0, v[0:1], v2, off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umax ptr addrspace(1) %ptr, i32 %val monotonic
+ ret i32 %result
+}
+
+define i64 @global_one_as_atomic_min_i64(ptr addrspace(1) %ptr, i64 %val) {
+; GFX1250-LABEL: global_one_as_atomic_min_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_min_i64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw min ptr addrspace(1) %ptr, i64 %val syncscope("one-as") monotonic
+ ret i64 %result
+}
+
+define i64 @global_system_atomic_min_i64(ptr addrspace(1) %ptr, i64 %val) {
+; GFX1250-LABEL: global_system_atomic_min_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_min_i64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw min ptr addrspace(1) %ptr, i64 %val monotonic
+ ret i64 %result
+}
+
+define i64 @global_one_as_atomic_max_i64(ptr addrspace(1) %ptr, i64 %val) {
+; GFX1250-LABEL: global_one_as_atomic_max_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_max_i64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw max ptr addrspace(1) %ptr, i64 %val syncscope("one-as") monotonic
+ ret i64 %result
+}
+
+define i64 @global_system_atomic_max_i64(ptr addrspace(1) %ptr, i64 %val) {
+; GFX1250-LABEL: global_system_atomic_max_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_max_i64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw max ptr addrspace(1) %ptr, i64 %val monotonic
+ ret i64 %result
+}
+
+define i64 @global_one_as_atomic_umin_i64(ptr addrspace(1) %ptr, i64 %val) {
+; GFX1250-LABEL: global_one_as_atomic_umin_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_min_u64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umin ptr addrspace(1) %ptr, i64 %val syncscope("one-as") monotonic
+ ret i64 %result
+}
+
+define i64 @global_system_atomic_umin_i64(ptr addrspace(1) %ptr, i64 %val) {
+; GFX1250-LABEL: global_system_atomic_umin_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_min_u64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umin ptr addrspace(1) %ptr, i64 %val monotonic
+ ret i64 %result
+}
+
+define i64 @global_one_as_atomic_umax_i64(ptr addrspace(1) %ptr, i64 %val) {
+; GFX1250-LABEL: global_one_as_atomic_umax_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_max_u64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umax ptr addrspace(1) %ptr, i64 %val syncscope("one-as") monotonic
+ ret i64 %result
+}
+
+define i64 @global_system_atomic_umax_i64(ptr addrspace(1) %ptr, i64 %val) {
+; GFX1250-LABEL: global_system_atomic_umax_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: global_atomic_max_u64 v[0:1], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umax ptr addrspace(1) %ptr, i64 %val monotonic
+ ret i64 %result
+}
+
+define i16 @global_one_as_atomic_min_i16(ptr addrspace(1) %ptr, i16 %val) {
+; GFX1250-LABEL: global_one_as_atomic_min_i16:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v3, v0
+; GFX1250-NEXT: s_mov_b32 s0, 0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v0, -4, v3
+; GFX1250-NEXT: v_and_b32_e32 v3, 3, v3
+; GFX1250-NEXT: v_lshlrev_b32_e32 v3, 3, v3
+; GFX1250-NEXT: global_load_b32 v5, v[0:1], off
+; GFX1250-NEXT: v_lshlrev_b32_e64 v4, v3, 0xffff
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_not_b32_e32 v4, v4
+; GFX1250-NEXT: .LBB28_1: ; %atomicrmw.start
+; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v7, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_lshrrev_b32_e32 v5, v3, v7
+; GFX1250-NEXT: v_min_i16 v5, v5, v2
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v5, 0xffff, v5
+; GFX1250-NEXT: v_lshlrev_b32_e32 v5, v3, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_and_or_b32 v6, v7, v4, v5
+; GFX1250-NEXT: global_atomic_cmpswap_b32 v5, v[0:1], v[6:7], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_cmp_eq_u32_e32 vcc_lo, v5, v7
+; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execnz .LBB28_1
+; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: v_lshrrev_b32_e32 v0, v3, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw min ptr addrspace(1) %ptr, i16 %val syncscope("one-as") monotonic
+ ret i16 %result
+}
+
+define i16 @global_one_as_atomic_umin_i16(ptr addrspace(1) %ptr, i16 %val) {
+; GFX1250-LABEL: global_one_as_atomic_umin_i16:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v3, v0
+; GFX1250-NEXT: s_mov_b32 s0, 0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v0, -4, v3
+; GFX1250-NEXT: v_and_b32_e32 v3, 3, v3
+; GFX1250-NEXT: v_lshlrev_b32_e32 v3, 3, v3
+; GFX1250-NEXT: global_load_b32 v5, v[0:1], off
+; GFX1250-NEXT: v_lshlrev_b32_e64 v4, v3, 0xffff
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_not_b32_e32 v4, v4
+; GFX1250-NEXT: .LBB29_1: ; %atomicrmw.start
+; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v7, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_lshrrev_b32_e32 v5, v3, v7
+; GFX1250-NEXT: v_min_u16 v5, v5, v2
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v5, 0xffff, v5
+; GFX1250-NEXT: v_lshlrev_b32_e32 v5, v3, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_and_or_b32 v6, v7, v4, v5
+; GFX1250-NEXT: global_atomic_cmpswap_b32 v5, v[0:1], v[6:7], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_cmp_eq_u32_e32 vcc_lo, v5, v7
+; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execnz .LBB29_1
+; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: v_lshrrev_b32_e32 v0, v3, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umin ptr addrspace(1) %ptr, i16 %val syncscope("one-as") monotonic
+ ret i16 %result
+}
+
+define i16 @global_one_as_atomic_max_i16(ptr addrspace(1) %ptr, i16 %val) {
+; GFX1250-LABEL: global_one_as_atomic_max_i16:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v3, v0
+; GFX1250-NEXT: s_mov_b32 s0, 0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v0, -4, v3
+; GFX1250-NEXT: v_and_b32_e32 v3, 3, v3
+; GFX1250-NEXT: v_lshlrev_b32_e32 v3, 3, v3
+; GFX1250-NEXT: global_load_b32 v5, v[0:1], off
+; GFX1250-NEXT: v_lshlrev_b32_e64 v4, v3, 0xffff
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_not_b32_e32 v4, v4
+; GFX1250-NEXT: .LBB30_1: ; %atomicrmw.start
+; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v7, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_lshrrev_b32_e32 v5, v3, v7
+; GFX1250-NEXT: v_max_i16 v5, v5, v2
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v5, 0xffff, v5
+; GFX1250-NEXT: v_lshlrev_b32_e32 v5, v3, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_and_or_b32 v6, v7, v4, v5
+; GFX1250-NEXT: global_atomic_cmpswap_b32 v5, v[0:1], v[6:7], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_cmp_eq_u32_e32 vcc_lo, v5, v7
+; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execnz .LBB30_1
+; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: v_lshrrev_b32_e32 v0, v3, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw max ptr addrspace(1) %ptr, i16 %val syncscope("one-as") monotonic
+ ret i16 %result
+}
+
+define i16 @global_one_as_atomic_umax_i16(ptr addrspace(1) %ptr, i16 %val) {
+; GFX1250-LABEL: global_one_as_atomic_umax_i16:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v3, v0
+; GFX1250-NEXT: s_mov_b32 s0, 0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v0, -4, v3
+; GFX1250-NEXT: v_and_b32_e32 v3, 3, v3
+; GFX1250-NEXT: v_lshlrev_b32_e32 v3, 3, v3
+; GFX1250-NEXT: global_load_b32 v5, v[0:1], off
+; GFX1250-NEXT: v_lshlrev_b32_e64 v4, v3, 0xffff
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_not_b32_e32 v4, v4
+; GFX1250-NEXT: .LBB31_1: ; %atomicrmw.start
+; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v7, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_lshrrev_b32_e32 v5, v3, v7
+; GFX1250-NEXT: v_max_u16 v5, v5, v2
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v5, 0xffff, v5
+; GFX1250-NEXT: v_lshlrev_b32_e32 v5, v3, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_and_or_b32 v6, v7, v4, v5
+; GFX1250-NEXT: global_atomic_cmpswap_b32 v5, v[0:1], v[6:7], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_cmp_eq_u32_e32 vcc_lo, v5, v7
+; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execnz .LBB31_1
+; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: v_lshrrev_b32_e32 v0, v3, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umax ptr addrspace(1) %ptr, i16 %val syncscope("one-as") monotonic
+ ret i16 %result
+}
+
+define float @flat_system_atomic_fadd_f32(ptr %ptr, float %val) {
+; GFX1250-LABEL: flat_system_atomic_fadd_f32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_add_f32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fadd ptr %ptr, float %val monotonic
+ ret float %result
+}
+
+define float @flat_one_as_atomic_fadd_f32(ptr %ptr, float %val) {
+; GFX1250-LABEL: flat_one_as_atomic_fadd_f32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_add_f32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fadd ptr %ptr, float %val syncscope("one-as") monotonic
+ ret float %result
+}
+
+define double @flat_system_atomic_fadd_f64(ptr %ptr, double %val) {
+; GFX1250-LABEL: flat_system_atomic_fadd_f64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b64 s[0:1], src_shared_base
+; GFX1250-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB34_6
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.check.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s1, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s1, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s1, exec_lo, s1
+; GFX1250-NEXT: s_cbranch_execz .LBB34_3
+; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.global
+; GFX1250-NEXT: global_atomic_add_f64 v[4:5], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB34_3: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s1, s1
+; GFX1250-NEXT: s_cbranch_execz .LBB34_5
+; GFX1250-NEXT: ; %bb.4: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s2, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s2, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_add_f64_e32 v[0:1], v[4:5], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB34_5: ; %Flow1
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s1
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB34_6: ; %Flow2
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB34_8
+; GFX1250-NEXT: ; %bb.7: ; %atomicrmw.shared
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: v_cndmask_b32_e32 v0, -1, v0, vcc_lo
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: ds_add_rtn_f64 v[4:5], v0, v[2:3]
+; GFX1250-NEXT: .LBB34_8: ; %atomicrmw.phi
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fadd ptr %ptr, double %val monotonic
+ ret double %result
+}
+
+define double @flat_one_as_atomic_fadd_f64(ptr %ptr, double %val) {
+; GFX1250-LABEL: flat_one_as_atomic_fadd_f64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b64 s[0:1], src_shared_base
+; GFX1250-NEXT: s_mov_b32 s0, exec_lo
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: v_cmpx_ne_u32_e64 s1, v1
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB35_6
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.check.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s1, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s1, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s1, exec_lo, s1
+; GFX1250-NEXT: s_cbranch_execz .LBB35_3
+; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.global
+; GFX1250-NEXT: global_atomic_add_f64 v[4:5], v[0:1], v[2:3], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB35_3: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s1, s1
+; GFX1250-NEXT: s_cbranch_execz .LBB35_5
+; GFX1250-NEXT: ; %bb.4: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s2, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s2, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_add_f64_e32 v[0:1], v[4:5], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB35_5: ; %Flow1
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s1
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB35_6: ; %Flow2
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB35_8
+; GFX1250-NEXT: ; %bb.7: ; %atomicrmw.shared
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: v_cndmask_b32_e32 v0, -1, v0, vcc_lo
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: ds_add_rtn_f64 v[4:5], v0, v[2:3]
+; GFX1250-NEXT: .LBB35_8: ; %atomicrmw.phi
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fadd ptr %ptr, double %val syncscope("one-as") monotonic
+ ret double %result
+}
+
+define float @flat_system_atomic_fmin_f32(ptr %ptr, float %val) {
+; GFX1250-LABEL: flat_system_atomic_fmin_f32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_min_num_f32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmin ptr %ptr, float %val monotonic
+ ret float %result
+}
+
+define float @flat_one_as_atomic_fmin_f32(ptr %ptr, float %val) {
+; GFX1250-LABEL: flat_one_as_atomic_fmin_f32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_min_num_f32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmin ptr %ptr, float %val syncscope("one-as") monotonic
+ ret float %result
+}
+
+define double @flat_system_atomic_fmin_f64(ptr %ptr, double %val) {
+; GFX1250-LABEL: flat_system_atomic_fmin_f64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB38_2
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.global
+; GFX1250-NEXT: flat_atomic_min_num_f64 v[4:5], v[0:1], v[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB38_2: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB38_4
+; GFX1250-NEXT: ; %bb.3: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s1, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_3) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_dual_max_num_f64 v[2:3], v[2:3], v[2:3] :: v_dual_cndmask_b32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_max_num_f64_e32 v[0:1], v[4:5], v[4:5]
+; GFX1250-NEXT: v_min_num_f64_e32 v[0:1], v[0:1], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB38_4: ; %atomicrmw.phi
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmin ptr %ptr, double %val monotonic
+ ret double %result
+}
+
+define double @flat_one_as_atomic_fmin_f64(ptr %ptr, double %val) {
+; GFX1250-LABEL: flat_one_as_atomic_fmin_f64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB39_2
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.global
+; GFX1250-NEXT: flat_atomic_min_num_f64 v[4:5], v[0:1], v[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB39_2: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB39_4
+; GFX1250-NEXT: ; %bb.3: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s1, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_3) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_dual_max_num_f64 v[2:3], v[2:3], v[2:3] :: v_dual_cndmask_b32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_max_num_f64_e32 v[0:1], v[4:5], v[4:5]
+; GFX1250-NEXT: v_min_num_f64_e32 v[0:1], v[0:1], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB39_4: ; %atomicrmw.phi
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmin ptr %ptr, double %val syncscope("one-as") monotonic
+ ret double %result
+}
+
+define float @flat_system_atomic_fmax_f32(ptr %ptr, float %val) {
+; GFX1250-LABEL: flat_system_atomic_fmax_f32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_max_num_f32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmax ptr %ptr, float %val monotonic
+ ret float %result
+}
+
+define float @flat_one_as_atomic_fmax_f32(ptr %ptr, float %val) {
+; GFX1250-LABEL: flat_one_as_atomic_fmax_f32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_max_num_f32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmax ptr %ptr, float %val syncscope("one-as") monotonic
+ ret float %result
+}
+
+define double @flat_system_atomic_fmax_f64(ptr %ptr, double %val) {
+; GFX1250-LABEL: flat_system_atomic_fmax_f64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB42_2
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.global
+; GFX1250-NEXT: flat_atomic_max_num_f64 v[4:5], v[0:1], v[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB42_2: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB42_4
+; GFX1250-NEXT: ; %bb.3: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s1, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_3) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_dual_max_num_f64 v[2:3], v[2:3], v[2:3] :: v_dual_cndmask_b32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_max_num_f64_e32 v[0:1], v[4:5], v[4:5]
+; GFX1250-NEXT: v_max_num_f64_e32 v[0:1], v[0:1], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB42_4: ; %atomicrmw.phi
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmax ptr %ptr, double %val monotonic
+ ret double %result
+}
+
+define double @flat_one_as_atomic_fmax_f64(ptr %ptr, double %val) {
+; GFX1250-LABEL: flat_one_as_atomic_fmax_f64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB43_2
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.global
+; GFX1250-NEXT: flat_atomic_max_num_f64 v[4:5], v[0:1], v[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB43_2: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB43_4
+; GFX1250-NEXT: ; %bb.3: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s1, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_3) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_dual_max_num_f64 v[2:3], v[2:3], v[2:3] :: v_dual_cndmask_b32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_max_num_f64_e32 v[0:1], v[4:5], v[4:5]
+; GFX1250-NEXT: v_max_num_f64_e32 v[0:1], v[0:1], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB43_4: ; %atomicrmw.phi
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw fmax ptr %ptr, double %val syncscope("one-as") monotonic
+ ret double %result
+}
+
+define i32 @flat_one_as_atomic_min_i32(ptr %ptr, i32 %val) {
+; GFX1250-LABEL: flat_one_as_atomic_min_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_min_i32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw min ptr %ptr, i32 %val syncscope("one-as") monotonic
+ ret i32 %result
+}
+
+define i32 @flat_system_atomic_min_i32(ptr %ptr, i32 %val) {
+; GFX1250-LABEL: flat_system_atomic_min_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_min_i32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw min ptr %ptr, i32 %val monotonic
+ ret i32 %result
+}
+
+define i32 @flat_one_as_atomic_max_i32(ptr %ptr, i32 %val) {
+; GFX1250-LABEL: flat_one_as_atomic_max_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_max_i32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw max ptr %ptr, i32 %val syncscope("one-as") monotonic
+ ret i32 %result
+}
+
+define i32 @flat_system_atomic_max_i32(ptr %ptr, i32 %val) {
+; GFX1250-LABEL: flat_system_atomic_max_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_max_i32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw max ptr %ptr, i32 %val monotonic
+ ret i32 %result
+}
+
+define i32 @flat_one_as_atomic_umin_i32(ptr %ptr, i32 %val) {
+; GFX1250-LABEL: flat_one_as_atomic_umin_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_min_u32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umin ptr %ptr, i32 %val syncscope("one-as") monotonic
+ ret i32 %result
+}
+
+define i32 @flat_system_atomic_umin_i32(ptr %ptr, i32 %val) {
+; GFX1250-LABEL: flat_system_atomic_umin_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_min_u32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umin ptr %ptr, i32 %val monotonic
+ ret i32 %result
+}
+
+define i32 @flat_one_as_atomic_umax_i32(ptr %ptr, i32 %val) {
+; GFX1250-LABEL: flat_one_as_atomic_umax_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_max_u32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umax ptr %ptr, i32 %val syncscope("one-as") monotonic
+ ret i32 %result
+}
+
+define i32 @flat_system_atomic_umax_i32(ptr %ptr, i32 %val) {
+; GFX1250-LABEL: flat_system_atomic_umax_i32:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: flat_atomic_max_u32 v0, v[0:1], v2 th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umax ptr %ptr, i32 %val monotonic
+ ret i32 %result
+}
+
+define i64 @flat_one_as_atomic_min_i64(ptr %ptr, i64 %val) {
+; GFX1250-LABEL: flat_one_as_atomic_min_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB52_2
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.global
+; GFX1250-NEXT: flat_atomic_min_i64 v[4:5], v[0:1], v[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB52_2: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB52_4
+; GFX1250-NEXT: ; %bb.3: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s1, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_min_i64 v[0:1], v[4:5], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB52_4: ; %atomicrmw.phi
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw min ptr %ptr, i64 %val syncscope("one-as") monotonic
+ ret i64 %result
+}
+
+define i64 @flat_system_atomic_min_i64(ptr %ptr, i64 %val) {
+; GFX1250-LABEL: flat_system_atomic_min_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB53_2
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.global
+; GFX1250-NEXT: flat_atomic_min_i64 v[4:5], v[0:1], v[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB53_2: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB53_4
+; GFX1250-NEXT: ; %bb.3: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s1, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_min_i64 v[0:1], v[4:5], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB53_4: ; %atomicrmw.phi
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw min ptr %ptr, i64 %val monotonic
+ ret i64 %result
+}
+
+define i64 @flat_one_as_atomic_max_i64(ptr %ptr, i64 %val) {
+; GFX1250-LABEL: flat_one_as_atomic_max_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB54_2
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.global
+; GFX1250-NEXT: flat_atomic_max_i64 v[4:5], v[0:1], v[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB54_2: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB54_4
+; GFX1250-NEXT: ; %bb.3: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s1, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_max_i64 v[0:1], v[4:5], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB54_4: ; %atomicrmw.phi
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw max ptr %ptr, i64 %val syncscope("one-as") monotonic
+ ret i64 %result
+}
+
+define i64 @flat_system_atomic_max_i64(ptr %ptr, i64 %val) {
+; GFX1250-LABEL: flat_system_atomic_max_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB55_2
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.global
+; GFX1250-NEXT: flat_atomic_max_i64 v[4:5], v[0:1], v[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB55_2: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB55_4
+; GFX1250-NEXT: ; %bb.3: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s1, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_max_i64 v[0:1], v[4:5], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB55_4: ; %atomicrmw.phi
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw max ptr %ptr, i64 %val monotonic
+ ret i64 %result
+}
+
+define i64 @flat_one_as_atomic_umin_i64(ptr %ptr, i64 %val) {
+; GFX1250-LABEL: flat_one_as_atomic_umin_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB56_2
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.global
+; GFX1250-NEXT: flat_atomic_min_u64 v[4:5], v[0:1], v[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB56_2: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB56_4
+; GFX1250-NEXT: ; %bb.3: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s1, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_min_u64 v[0:1], v[4:5], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB56_4: ; %atomicrmw.phi
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umin ptr %ptr, i64 %val syncscope("one-as") monotonic
+ ret i64 %result
+}
+
+define i64 @flat_system_atomic_umin_i64(ptr %ptr, i64 %val) {
+; GFX1250-LABEL: flat_system_atomic_umin_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB57_2
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.global
+; GFX1250-NEXT: flat_atomic_min_u64 v[4:5], v[0:1], v[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB57_2: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB57_4
+; GFX1250-NEXT: ; %bb.3: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s1, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_min_u64 v[0:1], v[4:5], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB57_4: ; %atomicrmw.phi
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umin ptr %ptr, i64 %val monotonic
+ ret i64 %result
+}
+
+define i64 @flat_one_as_atomic_umax_i64(ptr %ptr, i64 %val) {
+; GFX1250-LABEL: flat_one_as_atomic_umax_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB58_2
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.global
+; GFX1250-NEXT: flat_atomic_max_u64 v[4:5], v[0:1], v[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB58_2: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB58_4
+; GFX1250-NEXT: ; %bb.3: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s1, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_max_u64 v[0:1], v[4:5], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB58_4: ; %atomicrmw.phi
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umax ptr %ptr, i64 %val syncscope("one-as") monotonic
+ ret i64 %result
+}
+
+define i64 @flat_system_atomic_umax_i64(ptr %ptr, i64 %val) {
+; GFX1250-LABEL: flat_system_atomic_umax_i64:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: s_mov_b32 s0, src_flat_scratch_base_hi
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_xor_b32_e32 v4, s0, v1
+; GFX1250-NEXT: v_cmp_lt_u32_e32 vcc_lo, 0x3ffffff, v4
+; GFX1250-NEXT: ; implicit-def: $vgpr4_vgpr5
+; GFX1250-NEXT: s_and_saveexec_b32 s0, vcc_lo
+; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
+; GFX1250-NEXT: s_xor_b32 s0, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB59_2
+; GFX1250-NEXT: ; %bb.1: ; %atomicrmw.global
+; GFX1250-NEXT: flat_atomic_max_u64 v[4:5], v[0:1], v[2:3] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: ; implicit-def: $vgpr0_vgpr1
+; GFX1250-NEXT: ; implicit-def: $vgpr2_vgpr3
+; GFX1250-NEXT: .LBB59_2: ; %Flow
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_saveexec_b32 s0, s0
+; GFX1250-NEXT: s_cbranch_execz .LBB59_4
+; GFX1250-NEXT: ; %bb.3: ; %atomicrmw.private
+; GFX1250-NEXT: s_mov_b32 s1, src_flat_scratch_base_lo
+; GFX1250-NEXT: v_cmp_ne_u64_e32 vcc_lo, 0, v[0:1]
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_subrev_nc_u32_e32 v4, s1, v0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_cndmask_b32_e32 v6, -1, v4, vcc_lo
+; GFX1250-NEXT: scratch_load_b64 v[4:5], v6, off
+; GFX1250-NEXT: s_wait_loadcnt 0x0
+; GFX1250-NEXT: v_max_u64 v[0:1], v[4:5], v[2:3]
+; GFX1250-NEXT: scratch_store_b64 v6, v[0:1], off scope:SCOPE_SE
+; GFX1250-NEXT: .LBB59_4: ; %atomicrmw.phi
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_dual_mov_b32 v0, v4 :: v_dual_mov_b32 v1, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umax ptr %ptr, i64 %val monotonic
+ ret i64 %result
+}
+
+define i16 @flat_one_as_atomic_min_i16(ptr %ptr, i16 %val) {
+; GFX1250-LABEL: flat_one_as_atomic_min_i16:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v3, v0
+; GFX1250-NEXT: s_mov_b32 s0, 0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v0, -4, v3
+; GFX1250-NEXT: v_and_b32_e32 v3, 3, v3
+; GFX1250-NEXT: v_lshlrev_b32_e32 v3, 3, v3
+; GFX1250-NEXT: flat_load_b32 v5, v[0:1]
+; GFX1250-NEXT: v_lshlrev_b32_e64 v4, v3, 0xffff
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_not_b32_e32 v4, v4
+; GFX1250-NEXT: .LBB60_1: ; %atomicrmw.start
+; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v7, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_lshrrev_b32_e32 v5, v3, v7
+; GFX1250-NEXT: v_min_i16 v5, v5, v2
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v5, 0xffff, v5
+; GFX1250-NEXT: v_lshlrev_b32_e32 v5, v3, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_and_or_b32 v6, v7, v4, v5
+; GFX1250-NEXT: flat_atomic_cmpswap_b32 v5, v[0:1], v[6:7] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_cmp_eq_u32_e32 vcc_lo, v5, v7
+; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execnz .LBB60_1
+; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: v_lshrrev_b32_e32 v0, v3, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw min ptr %ptr, i16 %val syncscope("one-as") monotonic
+ ret i16 %result
+}
+
+define i16 @flat_one_as_atomic_umin_i16(ptr %ptr, i16 %val) {
+; GFX1250-LABEL: flat_one_as_atomic_umin_i16:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v3, v0
+; GFX1250-NEXT: s_mov_b32 s0, 0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v0, -4, v3
+; GFX1250-NEXT: v_and_b32_e32 v3, 3, v3
+; GFX1250-NEXT: v_lshlrev_b32_e32 v3, 3, v3
+; GFX1250-NEXT: flat_load_b32 v5, v[0:1]
+; GFX1250-NEXT: v_lshlrev_b32_e64 v4, v3, 0xffff
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_not_b32_e32 v4, v4
+; GFX1250-NEXT: .LBB61_1: ; %atomicrmw.start
+; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v7, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_lshrrev_b32_e32 v5, v3, v7
+; GFX1250-NEXT: v_min_u16 v5, v5, v2
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v5, 0xffff, v5
+; GFX1250-NEXT: v_lshlrev_b32_e32 v5, v3, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_and_or_b32 v6, v7, v4, v5
+; GFX1250-NEXT: flat_atomic_cmpswap_b32 v5, v[0:1], v[6:7] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_cmp_eq_u32_e32 vcc_lo, v5, v7
+; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execnz .LBB61_1
+; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: v_lshrrev_b32_e32 v0, v3, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umin ptr %ptr, i16 %val syncscope("one-as") monotonic
+ ret i16 %result
+}
+
+define i16 @flat_one_as_atomic_max_i16(ptr %ptr, i16 %val) {
+; GFX1250-LABEL: flat_one_as_atomic_max_i16:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v3, v0
+; GFX1250-NEXT: s_mov_b32 s0, 0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v0, -4, v3
+; GFX1250-NEXT: v_and_b32_e32 v3, 3, v3
+; GFX1250-NEXT: v_lshlrev_b32_e32 v3, 3, v3
+; GFX1250-NEXT: flat_load_b32 v5, v[0:1]
+; GFX1250-NEXT: v_lshlrev_b32_e64 v4, v3, 0xffff
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_not_b32_e32 v4, v4
+; GFX1250-NEXT: .LBB62_1: ; %atomicrmw.start
+; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v7, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_lshrrev_b32_e32 v5, v3, v7
+; GFX1250-NEXT: v_max_i16 v5, v5, v2
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v5, 0xffff, v5
+; GFX1250-NEXT: v_lshlrev_b32_e32 v5, v3, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_and_or_b32 v6, v7, v4, v5
+; GFX1250-NEXT: flat_atomic_cmpswap_b32 v5, v[0:1], v[6:7] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_cmp_eq_u32_e32 vcc_lo, v5, v7
+; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execnz .LBB62_1
+; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: v_lshrrev_b32_e32 v0, v3, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw max ptr %ptr, i16 %val syncscope("one-as") monotonic
+ ret i16 %result
+}
+
+define i16 @flat_one_as_atomic_umax_i16(ptr %ptr, i16 %val) {
+; GFX1250-LABEL: flat_one_as_atomic_umax_i16:
+; GFX1250: ; %bb.0:
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: s_wait_kmcnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v3, v0
+; GFX1250-NEXT: s_mov_b32 s0, 0
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v0, -4, v3
+; GFX1250-NEXT: v_and_b32_e32 v3, 3, v3
+; GFX1250-NEXT: v_lshlrev_b32_e32 v3, 3, v3
+; GFX1250-NEXT: flat_load_b32 v5, v[0:1]
+; GFX1250-NEXT: v_lshlrev_b32_e64 v4, v3, 0xffff
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_not_b32_e32 v4, v4
+; GFX1250-NEXT: .LBB63_1: ; %atomicrmw.start
+; GFX1250-NEXT: ; =>This Inner Loop Header: Depth=1
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_mov_b32_e32 v7, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_lshrrev_b32_e32 v5, v3, v7
+; GFX1250-NEXT: v_max_u16 v5, v5, v2
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX1250-NEXT: v_and_b32_e32 v5, 0xffff, v5
+; GFX1250-NEXT: v_lshlrev_b32_e32 v5, v3, v5
+; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX1250-NEXT: v_and_or_b32 v6, v7, v4, v5
+; GFX1250-NEXT: flat_atomic_cmpswap_b32 v5, v[0:1], v[6:7] th:TH_ATOMIC_RETURN scope:SCOPE_SYS
+; GFX1250-NEXT: s_wait_loadcnt_dscnt 0x0
+; GFX1250-NEXT: v_cmp_eq_u32_e32 vcc_lo, v5, v7
+; GFX1250-NEXT: s_or_b32 s0, vcc_lo, s0
+; GFX1250-NEXT: s_wait_xcnt 0x0
+; GFX1250-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: s_cbranch_execnz .LBB63_1
+; GFX1250-NEXT: ; %bb.2: ; %atomicrmw.end
+; GFX1250-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GFX1250-NEXT: v_lshrrev_b32_e32 v0, v3, v5
+; GFX1250-NEXT: s_set_pc_i64 s[30:31]
+ %result = atomicrmw umax ptr %ptr, i16 %val syncscope("one-as") monotonic
+ ret i16 %result
+}
diff --git a/llvm/test/CodeGen/AMDGPU/literal64.ll b/llvm/test/CodeGen/AMDGPU/literal64.ll
index 768c9728554df..98691d394abb3 100644
--- a/llvm/test/CodeGen/AMDGPU/literal64.ll
+++ b/llvm/test/CodeGen/AMDGPU/literal64.ll
@@ -67,24 +67,8 @@ define void @v_mov_b64_double(ptr addrspace(1) %ptr) {
; GCN: ; %bb.0:
; GCN-NEXT: s_wait_loadcnt_dscnt 0x0
; GCN-NEXT: s_wait_kmcnt 0x0
-; GCN-NEXT: global_load_b64 v[4:5], v[0:1], off
-; GCN-NEXT: s_mov_b32 s0, 0
-; GCN-NEXT: .LBB6_1: ; %atomicrmw.start
-; GCN-NEXT: ; =>This Inner Loop Header: Depth=1
-; GCN-NEXT: s_wait_loadcnt 0x0
-; GCN-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GCN-NEXT: v_add_f64_e32 v[2:3], lit64(0x4063233333333333), v[4:5]
-; GCN-NEXT: global_atomic_cmpswap_b64 v[2:3], v[0:1], v[2:5], off th:TH_ATOMIC_RETURN scope:SCOPE_SYS
-; GCN-NEXT: s_wait_loadcnt 0x0
-; GCN-NEXT: v_cmp_eq_u64_e32 vcc_lo, v[2:3], v[4:5]
-; GCN-NEXT: s_wait_xcnt 0x0
-; GCN-NEXT: v_mov_b64_e32 v[4:5], v[2:3]
-; GCN-NEXT: s_or_b32 s0, vcc_lo, s0
-; GCN-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
-; GCN-NEXT: s_and_not1_b32 exec_lo, exec_lo, s0
-; GCN-NEXT: s_cbranch_execnz .LBB6_1
-; GCN-NEXT: ; %bb.2: ; %atomicrmw.end
-; GCN-NEXT: s_or_b32 exec_lo, exec_lo, s0
+; GCN-NEXT: v_mov_b64_e32 v[2:3], lit64(0x4063233333333333)
+; GCN-NEXT: global_atomic_add_f64 v[0:1], v[2:3], off scope:SCOPE_SYS
; GCN-NEXT: s_set_pc_i64 s[30:31]
%result = atomicrmw fadd ptr addrspace(1) %ptr, double 153.1 monotonic
ret void
From 26dde15ed4f1310fa5df3baf03d802ea1cf009b8 Mon Sep 17 00:00:00 2001
From: erichkeane <ekeane at nvidia.com>
Date: Wed, 6 Aug 2025 12:39:41 -0700
Subject: [PATCH 149/171] [OpenACC] Add warning for VLAs in a
private/firstprivate clause
private/firstprivate clauses typically perform copy operations; however,
copying a VLA isn't really possible. This patch introduces a warning to alert
the user that this copy isn't happening correctly.
As a future direction, we MIGHT consider doing additional work to make
sure they are initialized/copied/deleted/etc correctly.
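
As a minimal illustration (hypothetical code, not from the patch; the function
name is invented), this is the pattern the new warning targets:

  // With this patch, clang warns under -Wopenacc-var-non-const-array: the VLA
  // decays to a pointer, so no element-wise copy of 'buf' takes place.
  void use_vla(int n) {
    float buf[n]; // variable-length array: bounds not known at compile time
  #pragma acc parallel firstprivate(buf)
    ;
  }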
---
.../include/clang/Basic/DiagnosticSemaKinds.td | 5 +++++
clang/lib/Sema/SemaOpenACC.cpp | 11 ++++++++++-
...vate_firstprivate_reduction_required_ops.cpp | 17 +++++++++++++++++
clang/test/SemaOpenACC/sub-array.cpp | 8 ++++++--
4 files changed, 38 insertions(+), 3 deletions(-)
diff --git a/clang/include/clang/Basic/DiagnosticSemaKinds.td b/clang/include/clang/Basic/DiagnosticSemaKinds.td
index f903b7f4dacd0..cf23594201143 100644
--- a/clang/include/clang/Basic/DiagnosticSemaKinds.td
+++ b/clang/include/clang/Basic/DiagnosticSemaKinds.td
@@ -13531,6 +13531,11 @@ def err_acc_invalid_default_type
def err_acc_device_type_multiple_archs
: Error<"OpenACC 'device_type' clause on a 'set' construct only permits "
"one architecture">;
+def warn_acc_var_referenced_non_const_array
+ : Warning<"variable of array type %0 referenced in OpenACC '%1' clause "
+ "does not have constant bounds; initialization will happen after "
+ "decay to pointer">,
+ InGroup<DiagGroup<"openacc-var-non-const-array">>;
def warn_acc_var_referenced_lacks_op
: Warning<"variable of type %0 referenced in OpenACC '%1' clause does not "
"have a %enum_select<AccVarReferencedReason>{%DefCtor{default "
diff --git a/clang/lib/Sema/SemaOpenACC.cpp b/clang/lib/Sema/SemaOpenACC.cpp
index c3695f0bb0f9d..4d58b4a5f4079 100644
--- a/clang/lib/Sema/SemaOpenACC.cpp
+++ b/clang/lib/Sema/SemaOpenACC.cpp
@@ -646,8 +646,17 @@ ExprResult CheckVarType(SemaOpenACC &S, OpenACCClauseKind CK, Expr *VarExpr,
if (auto *RefTy = InnerTy->getAs<ReferenceType>())
InnerTy = RefTy->getPointeeType();
- if (auto *ArrTy = InnerTy->getAsArrayTypeUnsafe())
+ if (auto *ArrTy = InnerTy->getAsArrayTypeUnsafe()) {
+    // Non-constant arrays decay to 'pointer', so warn and return that we're
+ // successful.
+ if (!ArrTy->isConstantArrayType()) {
+ S.Diag(InnerLoc, clang::diag::warn_acc_var_referenced_non_const_array)
+ << InnerTy << CK;
+ return VarExpr;
+ }
+
return CheckVarType(S, CK, VarExpr, InnerLoc, ArrTy->getElementType());
+ }
auto *RD = InnerTy->getAsCXXRecordDecl();
diff --git a/clang/test/SemaOpenACC/private_firstprivate_reduction_required_ops.cpp b/clang/test/SemaOpenACC/private_firstprivate_reduction_required_ops.cpp
index ce941ad4255da..38eb4e43813a8 100644
--- a/clang/test/SemaOpenACC/private_firstprivate_reduction_required_ops.cpp
+++ b/clang/test/SemaOpenACC/private_firstprivate_reduction_required_ops.cpp
@@ -275,3 +275,20 @@ void firstprivate_arrays() {
#pragma acc parallel firstprivate(UDCopyArr)
;
}
+
+template<unsigned I>
+void non_const_array_templ() {
+ int CArr[I];
+
+#pragma acc parallel firstprivate(CArr)
+ ;
+}
+
+void non_const_arrays(int I) {
+ non_const_array_templ<5>();
+
+ int NCArr[I];
+ // expected-warning at +1{{variable of array type 'int[I]' referenced in OpenACC 'firstprivate' clause does not have constant bounds; initialization will happen after decay to pointer}}
+#pragma acc parallel firstprivate(NCArr)
+ ;
+}
diff --git a/clang/test/SemaOpenACC/sub-array.cpp b/clang/test/SemaOpenACC/sub-array.cpp
index 355ac5ef1d3ce..ec929759b6d56 100644
--- a/clang/test/SemaOpenACC/sub-array.cpp
+++ b/clang/test/SemaOpenACC/sub-array.cpp
@@ -61,13 +61,15 @@ void Func(int i, int j) {
#pragma acc parallel private(ptr[3:])
while (true);
- // expected-error at +1{{OpenACC sub-array length is unspecified and cannot be inferred because the subscripted value is an array of unknown bound}}
+ // expected-error at +2{{OpenACC sub-array length is unspecified and cannot be inferred because the subscripted value is an array of unknown bound}}
+ // expected-warning at +1{{variable of array type 'int[i]' referenced in OpenACC 'private' clause does not have constant bounds; initialization will happen after decay to pointer}}
#pragma acc parallel private(VLA[3:])
while (true);
#pragma acc parallel private(ptr[:3])
while (true);
+ // expected-warning at +1{{variable of array type 'int[i]' referenced in OpenACC 'private' clause does not have constant bounds; initialization will happen after decay to pointer}}
#pragma acc parallel private(VLA[:3])
while (true);
@@ -159,11 +161,13 @@ void Templ(int i){
// expected-error at +1{{OpenACC sub-array length is unspecified and cannot be inferred because the subscripted value is not an array}}
#pragma acc parallel private(ptr[Conv:])
while (true);
- // expected-error at +1{{OpenACC sub-array length is unspecified and cannot be inferred because the subscripted value is an array of unknown bound}}
+ // expected-error at +2{{OpenACC sub-array length is unspecified and cannot be inferred because the subscripted value is an array of unknown bound}}
+ // expected-warning at +1{{variable of array type 'int[i]' referenced in OpenACC 'private' clause does not have constant bounds; initialization will happen after decay to pointer}}
#pragma acc parallel private(VLA[Conv:])
while (true);
#pragma acc parallel private(ptr[:Conv])
while (true);
+ // expected-warning at +1{{variable of array type 'int[i]' referenced in OpenACC 'private' clause does not have constant bounds; initialization will happen after decay to pointer}}
#pragma acc parallel private(VLA[:Conv])
while (true);
>From 5a47a1828abeefe72c82f732b446cc319ef65a31 Mon Sep 17 00:00:00 2001
From: cmtice <cmtice at google.com>
Date: Wed, 6 Aug 2025 13:14:47 -0700
Subject: [PATCH 150/171] [libcxx] Update testing documentation about CI
container images. (#149192)
Add information to the libcxx testing documentation about the names of
the new CI libcxx runner sets, their current values, and how to change
the values or the runner set being used.
---
libcxx/docs/Contributing.rst | 89 ++++++++++++++++++++++++++++++++----
1 file changed, 79 insertions(+), 10 deletions(-)
diff --git a/libcxx/docs/Contributing.rst b/libcxx/docs/Contributing.rst
index 6aaa70764c2fa..ac856195ad68e 100644
--- a/libcxx/docs/Contributing.rst
+++ b/libcxx/docs/Contributing.rst
@@ -174,10 +174,11 @@ Pre-commit CI
Introduction
------------
-Unlike most parts of the LLVM project, libc++ uses a pre-commit CI [#]_. This
-CI is hosted on `Buildkite <https://buildkite.com/llvm-project/libcxx-ci>`__ and
-the build results are visible in the review on GitHub. Please make sure
-the CI is green before committing a patch.
+Unlike most parts of the LLVM project, libc++ uses a pre-commit CI [#]_. Some of
+this CI is hosted on `Buildkite <https://buildkite.com/llvm-project/libcxx-ci>`__,
+but some has migrated to the LLVM CI infrastructure. The build results are
+visible in the review on GitHub. Please make sure the CI is green before
+committing a patch.
The CI tests libc++ for all :ref:`supported platforms <SupportedPlatforms>`.
The build is started for every commit added to a Pull Request. A complete CI
@@ -246,21 +247,89 @@ Below is a short description of the most interesting CI builds [#]_:
Infrastructure
--------------
-All files of the CI infrastructure are in the directory ``libcxx/utils/ci``.
-Note that quite a bit of this infrastructure is heavily Linux focused. This is
-the platform used by most of libc++'s Buildkite runners and developers.
+The files for the CI infrastructure are split between the llvm-project
+and the llvm-zorg repositories. All files of the CI infrastructure in
+the llvm-project are in the directory ``libcxx/utils/ci``. Note that
+quite a bit of this infrastructure is heavily Linux focused. This is
+the platform used by most of libc++'s Buildkite runners and
+developers.
-Dockerfile
-~~~~~~~~~~
+Dockerfile/Container Images
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
Contains the Docker image for the Ubuntu CI. Because the same Docker image is
used for the ``main`` and ``release`` branch, it should contain no hard-coded
-versions. It contains the used versions of Clang, various clang-tools,
+versions. It contains the used versions of Clang, various clang-tools,
GCC, and CMake.
.. note:: This image is pulled from Docker hub and not rebuild when changing
the Dockerfile.
+Updating the CI testing container images
+----------------------------------------
+
+The libcxx Linux premerge testing can run on one of three runner sets:
+"llvm-premerge-libcxx-runners", "llvm-premerge-libcxx-release-runners", and
+"llvm-premerge-libcxx-next-runners". Which runner set is used is controlled
+by the contents of
+https://github.com/llvm/llvm-project/blob/main/.github/workflows/libcxx-build-and-test.yaml.
+By default, it uses "llvm-premerge-libcxx-runners". To switch to one of the
+other runner sets, replace all uses of "llvm-premerge-libcxx-runners" in the
+yaml file with the desired runner set.
+
+The container image used by each of these runner sets is controlled by
+the variable values in
+https://github.com/llvm/llvm-zorg/blob/main/premerge/premerge_resources/variables.tf.
+The table below shows the variable names and
+the runner sets to which they correspond. To see their values, follow the
+link above (to variables.tf in llvm-zorg).
+
++------------------------------------+---------------------------+
+|Runner Set |Variable |
++====================================+===========================+
+|llvm-premerge-libcxx-runners |libcxx_runner_image |
++------------------------------------+---------------------------+
+|llvm-premerge-libcxx-release-runners|libcxx_release_runner_image|
++------------------------------------+---------------------------+
+|llvm-premerge-libcxx-next-runners |libcxx_next_runner_image |
++------------------------------------+---------------------------+
+
+
+When updating the container image, you can either update just the
+runner binary (the part that connects to GitHub), or you can update
+everything (tools, etc.). Whether to update just the runner or to update
+everything is controlled by the value of ``ACTIONS_BASE_IMAGE``, under
+``actions-builder`` in ``libcxx/utils/ci/docker-compose.yml``.
+
+To update just the runner binary, change the value of ``ACTIONS_BASE_IMAGE``
+to be a modified version of one of the libcxx runner variable images from
+https://github.com/llvm/llvm-zorg/blob/main/premerge/premerge_resources/variables.tf,
+as follows: Find the libcxx runner image name you want to use from the
+variables.tf file. The name will be something like
+``ghcr.io/llvm/libcxx-linux-builder:<some-commit-SHA>``. Replace
+``libcxx-linux-builder`` with ``libcxx-linux-builder-base``. Use this new image
+name as the value you assign to ``ACTIONS_BASE_IMAGE``.
+
+To update the entire container image, set the value of ``ACTIONS_BASE_IMAGE``
+to ``builder-base``. If the value is already ``builder-base`` (there
+have been no just-the-runner updates since the last complete update), then you
+need to find the line containing ``RUN echo "Last forced update executed on``
+in ``libcxx/utils/ci/Dockerfile`` and update the date to be the current date.
+
+Once you have created and merged a PR with those changes, a new image
+will be created, and a link to it can be found at
+https://github.com/llvm/llvm-project/pkgs/container/libcxx-linux-builder,
+where the actual image name should be
+``ghcr.io/llvm/libcxx-linux-builder:<SHA-of-committed-change-from-PR>``.
+
+Lastly, you need to create a PR in the llvm-zorg repository,
+updating the value of the appropriate libcxx runner variable in
+the variables.tf file mentioned above to the name of your newly created
+image (see above paragraph about finding the image name). Once that change
+has been merged, an LLVM premerge maintainer (a Google employee) must use
+terraform to apply the change to the running GKE cluster.
+
+
run-buildbot-container
~~~~~~~~~~~~~~~~~~~~~~
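
For orientation, a hedged sketch of the ``actions-builder`` entry these docs
refer to (the exact field nesting in ``libcxx/utils/ci/docker-compose.yml`` is
an assumption here; the SHA is a placeholder to be taken from variables.tf):

  # Sketch only: runner-binary-only update of the CI container image.
  services:
    actions-builder:
      build:
        context: .
        args:
          # Swap libcxx-linux-builder for libcxx-linux-builder-base and use
          # the SHA from the chosen variable in variables.tf.
          ACTIONS_BASE_IMAGE: ghcr.io/llvm/libcxx-linux-builder-base:<some-commit-SHA>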
>From d1b6ce50dffcc70cd1610515527b4645b1136d1c Mon Sep 17 00:00:00 2001
From: Stanislav Mekhanoshin <Stanislav.Mekhanoshin at amd.com>
Date: Wed, 6 Aug 2025 13:32:26 -0700
Subject: [PATCH 151/171] [AMDGPU] gfx1250 has fixed GETPC bug and also
extended VA to 57 bits (#152373)
---
llvm/lib/Target/AMDGPU/GCNSubtarget.h | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/llvm/lib/Target/AMDGPU/GCNSubtarget.h b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
index 1c3749d81eec8..c5bdd28314642 100644
--- a/llvm/lib/Target/AMDGPU/GCNSubtarget.h
+++ b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
@@ -1566,8 +1566,9 @@ class GCNSubtarget final : public AMDGPUGenSubtargetInfo,
bool hasSetPrioIncWgInst() const { return HasSetPrioIncWgInst; }
// \returns true if S_GETPC_B64 zero-extends the result from 48 bits instead
- // of sign-extending.
- bool hasGetPCZeroExtension() const { return GFX12Insts; }
+ // of sign-extending. Note that GFX1250 has not only fixed the bug but also
+ // extended VA to 57 bits.
+ bool hasGetPCZeroExtension() const { return GFX12Insts && !GFX1250Insts; }
/// \returns SGPR allocation granularity supported by the subtarget.
unsigned getSGPRAllocGranule() const {
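
For intuition, a standalone C++ sketch (not LLVM code) of the extension modes
this predicate distinguishes; the 57-bit width for gfx1250 comes from the
comment above, everything else is illustrative:

  #include <cstdint>

  // Zero-extend a 48-bit program counter (the GFX12 S_GETPC_B64 behavior that
  // hasGetPCZeroExtension() reports).
  uint64_t zext48(uint64_t pc) { return pc & ((1ULL << 48) - 1); }

  // Sign-extend from an arbitrary VA width (48 bits pre-GFX12, 57 on gfx1250).
  int64_t sext_from(uint64_t pc, unsigned bits) {
    return (int64_t)(pc << (64 - bits)) >> (64 - bits);
  }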
>From 381623eb11cefd3ac21a36d028ba4832643010ef Mon Sep 17 00:00:00 2001
From: Jordan Rupprecht <rupprecht at google.com>
Date: Wed, 6 Aug 2025 15:35:03 -0500
Subject: [PATCH 152/171] [bazel] Port #151228: BFloat16 (#152377)
---
.../llvm-project-overlay/libc/BUILD.bazel | 51 ++++++++++---------
1 file changed, 26 insertions(+), 25 deletions(-)
diff --git a/utils/bazel/llvm-project-overlay/libc/BUILD.bazel b/utils/bazel/llvm-project-overlay/libc/BUILD.bazel
index b46f334512979..6a9bd09a2ed56 100644
--- a/utils/bazel/llvm-project-overlay/libc/BUILD.bazel
+++ b/utils/bazel/llvm-project-overlay/libc/BUILD.bazel
@@ -1109,6 +1109,7 @@ libc_support_library(
deps = [
":__support_cpp_bit",
":__support_cpp_type_traits",
+ ":__support_fputil_basic_operations",
":__support_fputil_cast",
":__support_fputil_comparison_operations",
":__support_fputil_dyadic_float",
@@ -2205,7 +2206,6 @@ libc_support_library(
name = "__support_math_asin",
hdrs = ["src/__support/math/asin.h"],
deps = [
- ":__support_math_asin_utils",
":__support_fputil_double_double",
":__support_fputil_dyadic_float",
":__support_fputil_fenv_impl",
@@ -2215,6 +2215,7 @@ libc_support_library(
":__support_fputil_sqrt",
":__support_macros_optimization",
":__support_macros_properties_cpu_features",
+ ":__support_math_asin_utils",
],
)
@@ -2222,13 +2223,13 @@ libc_support_library(
name = "__support_math_asinhf",
hdrs = ["src/__support/math/asinhf.h"],
deps = [
- ":__support_math_acoshf_utils",
":__support_fputil_fp_bits",
- ":__support_fputil_polyeval",
":__support_fputil_multiply_add",
+ ":__support_fputil_polyeval",
":__support_fputil_sqrt",
":__support_macros_config",
":__support_macros_optimization",
+ ":__support_math_acoshf_utils",
],
)
@@ -2236,17 +2237,17 @@ libc_support_library(
name = "__support_math_asinhf16",
hdrs = ["src/__support/math/asinhf16.h"],
deps = [
- ":__support_math_acoshf_utils",
- ":__support_fputil_fenv_impl",
- ":__support_fputil_fp_bits",
- ":__support_fputil_polyeval",
":__support_fputil_cast",
":__support_fputil_except_value_utils",
+ ":__support_fputil_fenv_impl",
+ ":__support_fputil_fp_bits",
":__support_fputil_multiply_add",
+ ":__support_fputil_polyeval",
":__support_fputil_rounding_mode",
":__support_fputil_sqrt",
":__support_macros_config",
":__support_macros_optimization",
+ ":__support_math_acoshf_utils",
],
)
@@ -2288,12 +2289,12 @@ libc_support_library(
name = "__support_math_atan2f",
hdrs = ["src/__support/math/atan2f.h"],
deps = [
+ ":__support_fputil_double_double",
":__support_fputil_fenv_impl",
":__support_fputil_fp_bits",
- ":__support_fputil_polyeval",
- ":__support_fputil_double_double",
":__support_fputil_multiply_add",
":__support_fputil_nearest_integer",
+ ":__support_fputil_polyeval",
":__support_macros_config",
":__support_macros_optimization",
":__support_math_inv_trigf_utils",
@@ -2304,13 +2305,13 @@ libc_support_library(
name = "__support_math_atan2f128",
hdrs = ["src/__support/math/atan2f128.h"],
deps = [
- ":__support_math_atan_utils",
- ":__support_fputil_fp_bits",
":__support_fputil_dyadic_float",
+ ":__support_fputil_fp_bits",
":__support_fputil_nearest_integer",
":__support_integer_literals",
":__support_macros_config",
":__support_macros_optimization",
+ ":__support_math_atan_utils",
":__support_uint128",
],
)
@@ -2337,8 +2338,8 @@ libc_support_library(
":__support_fputil_except_value_utils",
":__support_fputil_fenv_impl",
":__support_fputil_fp_bits",
- ":__support_fputil_polyeval",
":__support_fputil_multiply_add",
+ ":__support_fputil_polyeval",
":__support_fputil_sqrt",
":__support_macros_optimization",
],
@@ -2348,15 +2349,15 @@ libc_support_library(
name = "__support_math_asinf",
hdrs = ["src/__support/math/asinf.h"],
deps = [
- ":__support_math_inv_trigf_utils",
+ ":__support_fputil_except_value_utils",
":__support_fputil_fenv_impl",
":__support_fputil_fp_bits",
- ":__support_fputil_except_value_utils",
":__support_fputil_multiply_add",
":__support_fputil_sqrt",
":__support_macros_config",
":__support_macros_optimization",
":__support_macros_properties_cpu_features",
+ ":__support_math_inv_trigf_utils",
],
)
@@ -2366,8 +2367,8 @@ libc_support_library(
deps = [
":__support_fputil_fenv_impl",
":__support_fputil_fp_bits",
- ":__support_fputil_polyeval",
":__support_fputil_multiply_add",
+ ":__support_fputil_polyeval",
":__support_fputil_sqrt",
":__support_macros_optimization",
],
@@ -2377,11 +2378,11 @@ libc_support_library(
name = "__support_math_atanhf",
hdrs = ["src/__support/math/atanhf.h"],
deps = [
- ":__support_math_acoshf_utils",
":__support_fputil_fenv_impl",
":__support_fputil_fp_bits",
":__support_macros_config",
":__support_macros_optimization",
+ ":__support_math_acoshf_utils",
],
)
@@ -2389,12 +2390,12 @@ libc_support_library(
name = "__support_math_atanhf16",
hdrs = ["src/__support/math/atanhf16.h"],
deps = [
- ":__support_fputil_fenv_impl",
- ":__support_fputil_fp_bits",
- ":__support_fputil_polyeval",
":__support_fputil_cast",
":__support_fputil_except_value_utils",
+ ":__support_fputil_fenv_impl",
+ ":__support_fputil_fp_bits",
":__support_fputil_multiply_add",
+ ":__support_fputil_polyeval",
":__support_macros_config",
":__support_macros_optimization",
],
@@ -2405,9 +2406,9 @@ libc_support_library(
hdrs = ["src/__support/math/cbrt.h"],
deps = [
":__support_fputil_double_double",
- ":__support_fputil_polyeval",
- ":__support_fputil_fenv_impl",
":__support_fputil_dyadic_float",
+ ":__support_fputil_fenv_impl",
+ ":__support_fputil_polyeval",
":__support_integer_literals",
],
)
@@ -2995,21 +2996,21 @@ libc_math_function(
libc_math_function(
name = "atanf",
additional_deps = [
- ":__support_math_atanf"
+ ":__support_math_atanf",
],
)
libc_math_function(
name = "atanf16",
additional_deps = [
- ":__support_math_atanf16"
+ ":__support_math_atanf16",
],
)
libc_math_function(
name = "atan",
additional_deps = [
- ":__support_math_atan"
+ ":__support_math_atan",
],
)
@@ -3030,7 +3031,7 @@ libc_math_function(
libc_math_function(
name = "atan2",
additional_deps = [
- ":__support_math_atan2",
+ ":__support_math_atan2",
],
)
>From 351b38f266718d862aa122e56667d6582625c918 Mon Sep 17 00:00:00 2001
From: Shilei Tian <i at tianshilei.me>
Date: Wed, 6 Aug 2025 17:08:56 -0400
Subject: [PATCH 153/171] [AMDGPU] Mark address space cast from private to flat
as divergent if target supports globally addressable scratch (#152376)
Globally addressable scratch is a new feature introduced in gfx1250. This
feature changes how scratch space is mapped into the flat aperture, making
address space casts from private to flat no longer uniform.
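
As a hedged illustration of the divergence (the address composition below is
an assumption for exposition, not the documented hardware mapping):

  #include <cstdint>

  // Sketch: with globally addressable scratch, the flat address derived from
  // a private (scratch) offset folds in per-lane state, so lanes casting the
  // same private pointer can observe different flat addresses.
  uint64_t private_to_flat(uint32_t scratch_offset, uint64_t aperture_base,
                           uint64_t per_lane_base /* varies per lane */) {
    return aperture_base + per_lane_base + scratch_offset;
  }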
---
.../AMDGPU/AMDGPUTargetTransformInfo.cpp | 26 ++++++++++++--
llvm/lib/Target/AMDGPU/SIInstrInfo.cpp | 25 +++++++++++++
.../AMDGPU/MIR/addrspacecast.mir | 35 +++++++++++++++++++
.../AMDGPU/addrspacecast.ll | 14 ++++++++
4 files changed, 97 insertions(+), 3 deletions(-)
create mode 100644 llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/addrspacecast.mir
create mode 100644 llvm/test/Analysis/UniformityAnalysis/AMDGPU/addrspacecast.ll
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUTargetTransformInfo.cpp b/llvm/lib/Target/AMDGPU/AMDGPUTargetTransformInfo.cpp
index a0c99b0ef0491..846a0b6280f19 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUTargetTransformInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUTargetTransformInfo.cpp
@@ -991,10 +991,21 @@ bool GCNTTIImpl::isSourceOfDivergence(const Value *V) const {
return true;
if (const IntrinsicInst *Intrinsic = dyn_cast<IntrinsicInst>(V)) {
- if (Intrinsic->getIntrinsicID() == Intrinsic::read_register)
+ Intrinsic::ID IID = Intrinsic->getIntrinsicID();
+ switch (IID) {
+ case Intrinsic::read_register:
return isReadRegisterSourceOfDivergence(Intrinsic);
-
- return AMDGPU::isIntrinsicSourceOfDivergence(Intrinsic->getIntrinsicID());
+ case Intrinsic::amdgcn_addrspacecast_nonnull: {
+ unsigned SrcAS =
+ Intrinsic->getOperand(0)->getType()->getPointerAddressSpace();
+ unsigned DstAS = Intrinsic->getType()->getPointerAddressSpace();
+ return SrcAS == AMDGPUAS::PRIVATE_ADDRESS &&
+ DstAS == AMDGPUAS::FLAT_ADDRESS &&
+ ST->hasGloballyAddressableScratch();
+ }
+ default:
+ return AMDGPU::isIntrinsicSourceOfDivergence(IID);
+ }
}
// Assume all function calls are a source of divergence.
@@ -1008,6 +1019,15 @@ bool GCNTTIImpl::isSourceOfDivergence(const Value *V) const {
if (isa<InvokeInst>(V))
return true;
+ // If the target supports globally addressable scratch, the mapping from
+  // scratch memory to the flat aperture changes; therefore an address space cast
+ // is no longer uniform.
+ if (auto *CastI = dyn_cast<AddrSpaceCastInst>(V)) {
+ return CastI->getSrcAddressSpace() == AMDGPUAS::PRIVATE_ADDRESS &&
+ CastI->getDestAddressSpace() == AMDGPUAS::FLAT_ADDRESS &&
+ ST->hasGloballyAddressableScratch();
+ }
+
return false;
}
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
index 5f498a3f5a421..f20b22d14c984 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
@@ -10074,7 +10074,30 @@ unsigned SIInstrInfo::getInstrLatency(const InstrItineraryData *ItinData,
InstructionUniformity
SIInstrInfo::getGenericInstructionUniformity(const MachineInstr &MI) const {
+ const MachineRegisterInfo &MRI = MI.getMF()->getRegInfo();
unsigned opcode = MI.getOpcode();
+
+ auto HandleAddrSpaceCast = [this, &MRI](const MachineInstr &MI) {
+ Register Dst = MI.getOperand(0).getReg();
+ Register Src = isa<GIntrinsic>(MI) ? MI.getOperand(2).getReg()
+ : MI.getOperand(1).getReg();
+ LLT DstTy = MRI.getType(Dst);
+ LLT SrcTy = MRI.getType(Src);
+ unsigned DstAS = DstTy.getAddressSpace();
+ unsigned SrcAS = SrcTy.getAddressSpace();
+ return SrcAS == AMDGPUAS::PRIVATE_ADDRESS &&
+ DstAS == AMDGPUAS::FLAT_ADDRESS &&
+ ST.hasGloballyAddressableScratch()
+ ? InstructionUniformity::NeverUniform
+ : InstructionUniformity::Default;
+ };
+
+ // If the target supports globally addressable scratch, the mapping from
+  // scratch memory to the flat aperture changes; therefore an address space cast
+ // is no longer uniform.
+ if (opcode == TargetOpcode::G_ADDRSPACE_CAST)
+ return HandleAddrSpaceCast(MI);
+
if (auto *GI = dyn_cast<GIntrinsic>(&MI)) {
auto IID = GI->getIntrinsicID();
if (AMDGPU::isIntrinsicSourceOfDivergence(IID))
@@ -10083,6 +10106,8 @@ SIInstrInfo::getGenericInstructionUniformity(const MachineInstr &MI) const {
return InstructionUniformity::AlwaysUniform;
switch (IID) {
+ case Intrinsic::amdgcn_addrspacecast_nonnull:
+ return HandleAddrSpaceCast(MI);
case Intrinsic::amdgcn_if:
case Intrinsic::amdgcn_else:
// FIXME: Uniform if second result
diff --git a/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/addrspacecast.mir b/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/addrspacecast.mir
new file mode 100644
index 0000000000000..612f7b7ef4ec4
--- /dev/null
+++ b/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/addrspacecast.mir
@@ -0,0 +1,35 @@
+# NOTE: This file is a Generic MIR translation of the llvm/test/Analysis/UniformityAnalysis/AMDGPU/addrspacecast.ll test file
+# RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx900 -run-pass=print-machine-uniformity -filetype=null %s 2>&1 | FileCheck %s --check-prefix=UNI
+# RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1250 -run-pass=print-machine-uniformity -filetype=null %s 2>&1 | FileCheck %s --check-prefix=DIV
+
+# UNI: ALL VALUES UNIFORM
+# DIV: DIVERGENT: %3: %3:_(p0) = G_ADDRSPACE_CAST %2:_(p5)
+# DIV: DIVERGENT: %4: %4:_(p0) = G_INTRINSIC intrinsic(@llvm.amdgcn.addrspacecast.nonnull), %2:_(p5)
+
+--- |
+ define void @foo() {
+ %alloca = alloca i32, align 4, addrspace(5)
+ %cast = addrspacecast ptr addrspace(5) %alloca to ptr
+ store i32 1, ptr %cast, align 4
+ %cast.1 = call ptr @llvm.amdgcn.addrspacecast.nonnull.p0.p5(ptr addrspace(5) %alloca)
+ store i32 2, ptr %cast.1, align 4
+ ret void
+ }
+...
+---
+name: foo
+stack:
+ - { id: 0, name: alloca, type: default, offset: 0, size: 4, alignment: 4,
+ stack-id: default, callee-saved-register: '', callee-saved-restored: true,
+ debug-info-variable: '', debug-info-expression: '', debug-info-location: '' }
+body: |
+ bb.1 (%ir-block.0):
+ %10:_(s32) = G_CONSTANT i32 1
+ %12:_(s32) = G_CONSTANT i32 2
+ %8:_(p5) = G_FRAME_INDEX %stack.0.alloca
+ %9:_(p0) = G_ADDRSPACE_CAST %8(p5)
+ G_STORE %10(s32), %9(p0) :: (store (s32) into %ir.cast)
+ %11:_(p0) = G_INTRINSIC intrinsic(@llvm.amdgcn.addrspacecast.nonnull), %8(p5)
+ G_STORE %12(s32), %11(p0) :: (store (s32) into %ir.cast.1)
+ SI_RETURN
+...
diff --git a/llvm/test/Analysis/UniformityAnalysis/AMDGPU/addrspacecast.ll b/llvm/test/Analysis/UniformityAnalysis/AMDGPU/addrspacecast.ll
new file mode 100644
index 0000000000000..e6808448651c8
--- /dev/null
+++ b/llvm/test/Analysis/UniformityAnalysis/AMDGPU/addrspacecast.ll
@@ -0,0 +1,14 @@
+; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=gfx900 -passes='print<uniformity>' -disable-output %s 2>&1 | FileCheck %s --check-prefix=UNI
+; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1250 -passes='print<uniformity>' -disable-output %s 2>&1 | FileCheck %s --check-prefix=DIV
+
+; UNI: ALL VALUES UNIFORM
+; DIV: DIVERGENT: %cast = addrspacecast ptr addrspace(5) %alloca to ptr
+; DIV: DIVERGENT: %cast.1 = call ptr @llvm.amdgcn.addrspacecast.nonnull.p0.p5(ptr addrspace(5) %alloca)
+define void @foo() {
+ %alloca = alloca i32, align 4, addrspace(5)
+ %cast = addrspacecast ptr addrspace(5) %alloca to ptr
+ store i32 1, ptr %cast
+ %cast.1 = call ptr @llvm.amdgcn.addrspacecast.nonnull.p0.p5(ptr addrspace(5) %alloca)
+ store i32 2, ptr %cast.1
+ ret void
+}
>From 184821b63d769e48d8b89f70e8f7a5adbe429fae Mon Sep 17 00:00:00 2001
From: Stanislav Mekhanoshin <Stanislav.Mekhanoshin at amd.com>
Date: Wed, 6 Aug 2025 14:15:35 -0700
Subject: [PATCH 154/171] [AMDGPU] Add gfx1250 DS MC tests. NFC. (#152378)
---
llvm/test/MC/AMDGPU/gfx1250_asm_ds.s | 1911 +++++++++++++++++
.../Disassembler/AMDGPU/gfx1250_dasm_ds.txt | 1104 ++++++++++
2 files changed, 3015 insertions(+)
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_ds.s b/llvm/test/MC/AMDGPU/gfx1250_asm_ds.s
index f1641fc693b1c..b46189b15d9dc 100644
--- a/llvm/test/MC/AMDGPU/gfx1250_asm_ds.s
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_ds.s
@@ -1,6 +1,1917 @@
// RUN: llvm-mc -triple=amdgcn -mcpu=gfx1250 -show-encoding %s | FileCheck --check-prefixes=GFX1250 %s
// RUN: not llvm-mc -triple=amdgcn -mcpu=gfx1200 -show-encoding %s 2>&1 | FileCheck --check-prefix=GFX12-ERR %s
+ds_nop
+// GFX1250: ds_nop ; encoding: [0x00,0x00,0x50,0xd8,0x00,0x00,0x00,0x00]
+
+ds_add_f32 v1, v2
+// GFX1250: ds_add_f32 v1, v2 ; encoding: [0x00,0x00,0x54,0xd8,0x01,0x02,0x00,0x00]
+
+ds_add_f32 v1, v2 offset:65535
+// GFX1250: ds_add_f32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x54,0xd8,0x01,0x02,0x00,0x00]
+
+ds_add_f32 v1, v2 offset:0
+// GFX1250: ds_add_f32 v1, v2 ; encoding: [0x00,0x00,0x54,0xd8,0x01,0x02,0x00,0x00]
+
+ds_add_f32 v255, v255 offset:4
+// GFX1250: ds_add_f32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x54,0xd8,0xff,0xff,0x00,0x00]
+
+ds_add_rtn_f32 v5, v1, v2
+// GFX1250: ds_add_rtn_f32 v5, v1, v2 ; encoding: [0x00,0x00,0xe4,0xd9,0x01,0x02,0x00,0x05]
+
+ds_add_rtn_f32 v5, v1, v2 offset:65535
+// GFX1250: ds_add_rtn_f32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xe4,0xd9,0x01,0x02,0x00,0x05]
+
+ds_add_rtn_f32 v5, v1, v2 offset:0
+// GFX1250: ds_add_rtn_f32 v5, v1, v2 ; encoding: [0x00,0x00,0xe4,0xd9,0x01,0x02,0x00,0x05]
+
+ds_add_rtn_f32 v255, v255, v255 offset:4
+// GFX1250: ds_add_rtn_f32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xe4,0xd9,0xff,0xff,0x00,0xff]
+
+ds_add_rtn_u32 v5, v1, v2
+// GFX1250: ds_add_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x80,0xd8,0x01,0x02,0x00,0x05]
+
+ds_add_rtn_u32 v5, v1, v2 offset:65535
+// GFX1250: ds_add_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x80,0xd8,0x01,0x02,0x00,0x05]
+
+ds_add_rtn_u32 v5, v1, v2 offset:0
+// GFX1250: ds_add_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x80,0xd8,0x01,0x02,0x00,0x05]
+
+ds_add_rtn_u32 v255, v255, v255 offset:4
+// GFX1250: ds_add_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x80,0xd8,0xff,0xff,0x00,0xff]
+
+ds_add_rtn_u64 v[6:7], v1, v[2:3]
+// GFX1250: ds_add_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x80,0xd9,0x01,0x02,0x00,0x06]
+
+ds_add_rtn_u64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_add_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x80,0xd9,0x01,0x02,0x00,0x06]
+
+ds_add_rtn_u64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_add_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x80,0xd9,0x01,0x02,0x00,0x06]
+
+ds_add_rtn_u64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_add_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x80,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_add_u32 v1, v2
+// GFX1250: ds_add_u32 v1, v2 ; encoding: [0x00,0x00,0x00,0xd8,0x01,0x02,0x00,0x00]
+
+ds_add_u32 v1, v2 offset:65535
+// GFX1250: ds_add_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x00,0xd8,0x01,0x02,0x00,0x00]
+
+ds_add_u32 v1, v2 offset:0
+// GFX1250: ds_add_u32 v1, v2 ; encoding: [0x00,0x00,0x00,0xd8,0x01,0x02,0x00,0x00]
+
+ds_add_u32 v255, v255 offset:4
+// GFX1250: ds_add_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x00,0xd8,0xff,0xff,0x00,0x00]
+
+ds_add_u64 v1, v[2:3]
+// GFX1250: ds_add_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x00,0xd9,0x01,0x02,0x00,0x00]
+
+ds_add_u64 v1, v[2:3] offset:65535
+// GFX1250: ds_add_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x00,0xd9,0x01,0x02,0x00,0x00]
+
+ds_add_u64 v1, v[2:3] offset:0
+// GFX1250: ds_add_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x00,0xd9,0x01,0x02,0x00,0x00]
+
+ds_add_u64 v255, v[254:255] offset:4
+// GFX1250: ds_add_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x00,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_and_b32 v1, v2
+// GFX1250: ds_and_b32 v1, v2 ; encoding: [0x00,0x00,0x24,0xd8,0x01,0x02,0x00,0x00]
+
+ds_and_b32 v1, v2 offset:65535
+// GFX1250: ds_and_b32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x24,0xd8,0x01,0x02,0x00,0x00]
+
+ds_and_b32 v1, v2 offset:0
+// GFX1250: ds_and_b32 v1, v2 ; encoding: [0x00,0x00,0x24,0xd8,0x01,0x02,0x00,0x00]
+
+ds_and_b32 v255, v255 offset:4
+// GFX1250: ds_and_b32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x24,0xd8,0xff,0xff,0x00,0x00]
+
+ds_and_b64 v1, v[2:3]
+// GFX1250: ds_and_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x24,0xd9,0x01,0x02,0x00,0x00]
+
+ds_and_b64 v1, v[2:3] offset:65535
+// GFX1250: ds_and_b64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x24,0xd9,0x01,0x02,0x00,0x00]
+
+ds_and_b64 v1, v[2:3] offset:0
+// GFX1250: ds_and_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x24,0xd9,0x01,0x02,0x00,0x00]
+
+ds_and_b64 v255, v[254:255] offset:4
+// GFX1250: ds_and_b64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x24,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_and_rtn_b32 v5, v1, v2
+// GFX1250: ds_and_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xa4,0xd8,0x01,0x02,0x00,0x05]
+
+ds_and_rtn_b32 v5, v1, v2 offset:65535
+// GFX1250: ds_and_rtn_b32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xa4,0xd8,0x01,0x02,0x00,0x05]
+
+ds_and_rtn_b32 v5, v1, v2 offset:0
+// GFX1250: ds_and_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xa4,0xd8,0x01,0x02,0x00,0x05]
+
+ds_and_rtn_b32 v255, v255, v255 offset:4
+// GFX1250: ds_and_rtn_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xa4,0xd8,0xff,0xff,0x00,0xff]
+
+ds_and_rtn_b64 v[6:7], v1, v[2:3]
+// GFX1250: ds_and_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xa4,0xd9,0x01,0x02,0x00,0x06]
+
+ds_and_rtn_b64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_and_rtn_b64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xa4,0xd9,0x01,0x02,0x00,0x06]
+
+ds_and_rtn_b64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_and_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xa4,0xd9,0x01,0x02,0x00,0x06]
+
+ds_and_rtn_b64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_and_rtn_b64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xa4,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_append v5
+// GFX1250: ds_append v5 ; encoding: [0x00,0x00,0xf8,0xd8,0x00,0x00,0x00,0x05]
+
+ds_append v5 offset:65535
+// GFX1250: ds_append v5 offset:65535 ; encoding: [0xff,0xff,0xf8,0xd8,0x00,0x00,0x00,0x05]
+
+ds_append v5 offset:0
+// GFX1250: ds_append v5 ; encoding: [0x00,0x00,0xf8,0xd8,0x00,0x00,0x00,0x05]
+
+ds_append v255 offset:4
+// GFX1250: ds_append v255 offset:4 ; encoding: [0x04,0x00,0xf8,0xd8,0x00,0x00,0x00,0xff]
+
+ds_bpermute_b32 v5, v1, v2
+// GFX1250: ds_bpermute_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xcc,0xda,0x01,0x02,0x00,0x05]
+
+ds_bpermute_b32 v5, v1, v2 offset:65535
+// GFX1250: ds_bpermute_b32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xcc,0xda,0x01,0x02,0x00,0x05]
+
+ds_bpermute_b32 v5, v1, v2 offset:0
+// GFX1250: ds_bpermute_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xcc,0xda,0x01,0x02,0x00,0x05]
+
+ds_bpermute_b32 v255, v255, v255 offset:4
+// GFX1250: ds_bpermute_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xcc,0xda,0xff,0xff,0x00,0xff]
+
+ds_cmpstore_b32 v1, v2, v3
+// GFX1250: ds_cmpstore_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x40,0xd8,0x01,0x02,0x03,0x00]
+
+ds_cmpstore_b32 v1, v2, v3 offset:65535
+// GFX1250: ds_cmpstore_b32 v1, v2, v3 offset:65535 ; encoding: [0xff,0xff,0x40,0xd8,0x01,0x02,0x03,0x00]
+
+ds_cmpstore_b32 v1, v2, v3 offset:0
+// GFX1250: ds_cmpstore_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x40,0xd8,0x01,0x02,0x03,0x00]
+
+ds_cmpstore_b32 v255, v255, v255 offset:4
+// GFX1250: ds_cmpstore_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x40,0xd8,0xff,0xff,0xff,0x00]
+
+ds_cmpstore_b64 v1, v[2:3], v[4:5]
+// GFX1250: ds_cmpstore_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x40,0xd9,0x01,0x02,0x04,0x00]
+
+ds_cmpstore_b64 v1, v[2:3], v[4:5] offset:65535
+// GFX1250: ds_cmpstore_b64 v1, v[2:3], v[4:5] offset:65535 ; encoding: [0xff,0xff,0x40,0xd9,0x01,0x02,0x04,0x00]
+
+ds_cmpstore_b64 v1, v[2:3], v[4:5] offset:0
+// GFX1250: ds_cmpstore_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x40,0xd9,0x01,0x02,0x04,0x00]
+
+ds_cmpstore_b64 v255, v[254:255], v[254:255] offset:4
+// GFX1250: ds_cmpstore_b64 v255, v[254:255], v[254:255] offset:4 ; encoding: [0x04,0x00,0x40,0xd9,0xff,0xfe,0xfe,0x00]
+
+ds_cmpstore_rtn_b32 v5, v1, v2, v3
+// GFX1250: ds_cmpstore_rtn_b32 v5, v1, v2, v3 ; encoding: [0x00,0x00,0xc0,0xd8,0x01,0x02,0x03,0x05]
+
+ds_cmpstore_rtn_b32 v5, v1, v2, v3 offset:65535
+// GFX1250: ds_cmpstore_rtn_b32 v5, v1, v2, v3 offset:65535 ; encoding: [0xff,0xff,0xc0,0xd8,0x01,0x02,0x03,0x05]
+
+ds_cmpstore_rtn_b32 v5, v1, v2, v3 offset:0
+// GFX1250: ds_cmpstore_rtn_b32 v5, v1, v2, v3 ; encoding: [0x00,0x00,0xc0,0xd8,0x01,0x02,0x03,0x05]
+
+ds_cmpstore_rtn_b32 v255, v255, v255, v255 offset:4
+// GFX1250: ds_cmpstore_rtn_b32 v255, v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xc0,0xd8,0xff,0xff,0xff,0xff]
+
+ds_cmpstore_rtn_b64 v[6:7], v1, v[2:3], v[4:5]
+// GFX1250: ds_cmpstore_rtn_b64 v[6:7], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xc0,0xd9,0x01,0x02,0x04,0x06]
+
+ds_cmpstore_rtn_b64 v[6:7], v1, v[2:3], v[4:5] offset:65535
+// GFX1250: ds_cmpstore_rtn_b64 v[6:7], v1, v[2:3], v[4:5] offset:65535 ; encoding: [0xff,0xff,0xc0,0xd9,0x01,0x02,0x04,0x06]
+
+ds_cmpstore_rtn_b64 v[6:7], v1, v[2:3], v[4:5] offset:0
+// GFX1250: ds_cmpstore_rtn_b64 v[6:7], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xc0,0xd9,0x01,0x02,0x04,0x06]
+
+ds_cmpstore_rtn_b64 v[254:255], v255, v[254:255], v[254:255] offset:4
+// GFX1250: ds_cmpstore_rtn_b64 v[254:255], v255, v[254:255], v[254:255] offset:4 ; encoding: [0x04,0x00,0xc0,0xd9,0xff,0xfe,0xfe,0xfe]
+
+ds_condxchg32_rtn_b64 v[6:7], v1, v[2:3]
+// GFX1250: ds_condxchg32_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xf8,0xd9,0x01,0x02,0x00,0x06]
+
+ds_condxchg32_rtn_b64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_condxchg32_rtn_b64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xf8,0xd9,0x01,0x02,0x00,0x06]
+
+ds_condxchg32_rtn_b64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_condxchg32_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xf8,0xd9,0x01,0x02,0x00,0x06]
+
+ds_condxchg32_rtn_b64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_condxchg32_rtn_b64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xf8,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_consume v5
+// GFX1250: ds_consume v5 ; encoding: [0x00,0x00,0xf4,0xd8,0x00,0x00,0x00,0x05]
+
+ds_consume v5 offset:65535
+// GFX1250: ds_consume v5 offset:65535 ; encoding: [0xff,0xff,0xf4,0xd8,0x00,0x00,0x00,0x05]
+
+ds_consume v5 offset:0
+// GFX1250: ds_consume v5 ; encoding: [0x00,0x00,0xf4,0xd8,0x00,0x00,0x00,0x05]
+
+ds_consume v255 offset:4
+// GFX1250: ds_consume v255 offset:4 ; encoding: [0x04,0x00,0xf4,0xd8,0x00,0x00,0x00,0xff]
+
+ds_dec_rtn_u32 v5, v1, v2
+// GFX1250: ds_dec_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x90,0xd8,0x01,0x02,0x00,0x05]
+
+ds_dec_rtn_u32 v5, v1, v2 offset:65535
+// GFX1250: ds_dec_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x90,0xd8,0x01,0x02,0x00,0x05]
+
+ds_dec_rtn_u32 v5, v1, v2 offset:0
+// GFX1250: ds_dec_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x90,0xd8,0x01,0x02,0x00,0x05]
+
+ds_dec_rtn_u32 v255, v255, v255 offset:4
+// GFX1250: ds_dec_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x90,0xd8,0xff,0xff,0x00,0xff]
+
+ds_dec_rtn_u64 v[6:7], v1, v[2:3]
+// GFX1250: ds_dec_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x90,0xd9,0x01,0x02,0x00,0x06]
+
+ds_dec_rtn_u64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_dec_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x90,0xd9,0x01,0x02,0x00,0x06]
+
+ds_dec_rtn_u64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_dec_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x90,0xd9,0x01,0x02,0x00,0x06]
+
+ds_dec_rtn_u64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_dec_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x90,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_dec_u32 v1, v2
+// GFX1250: ds_dec_u32 v1, v2 ; encoding: [0x00,0x00,0x10,0xd8,0x01,0x02,0x00,0x00]
+
+ds_dec_u32 v1, v2 offset:65535
+// GFX1250: ds_dec_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x10,0xd8,0x01,0x02,0x00,0x00]
+
+ds_dec_u32 v1, v2 offset:0
+// GFX1250: ds_dec_u32 v1, v2 ; encoding: [0x00,0x00,0x10,0xd8,0x01,0x02,0x00,0x00]
+
+ds_dec_u32 v255, v255 offset:4
+// GFX1250: ds_dec_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x10,0xd8,0xff,0xff,0x00,0x00]
+
+ds_dec_u64 v1, v[2:3]
+// GFX1250: ds_dec_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x10,0xd9,0x01,0x02,0x00,0x00]
+
+ds_dec_u64 v1, v[2:3] offset:65535
+// GFX1250: ds_dec_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x10,0xd9,0x01,0x02,0x00,0x00]
+
+ds_dec_u64 v1, v[2:3] offset:0
+// GFX1250: ds_dec_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x10,0xd9,0x01,0x02,0x00,0x00]
+
+ds_dec_u64 v255, v[254:255] offset:4
+// GFX1250: ds_dec_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x10,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_inc_rtn_u32 v5, v1, v2
+// GFX1250: ds_inc_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x8c,0xd8,0x01,0x02,0x00,0x05]
+
+ds_inc_rtn_u32 v5, v1, v2 offset:65535
+// GFX1250: ds_inc_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x8c,0xd8,0x01,0x02,0x00,0x05]
+
+ds_inc_rtn_u32 v5, v1, v2 offset:0
+// GFX1250: ds_inc_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x8c,0xd8,0x01,0x02,0x00,0x05]
+
+ds_inc_rtn_u32 v255, v255, v255 offset:4
+// GFX1250: ds_inc_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x8c,0xd8,0xff,0xff,0x00,0xff]
+
+ds_inc_rtn_u64 v[6:7], v1, v[2:3]
+// GFX1250: ds_inc_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x8c,0xd9,0x01,0x02,0x00,0x06]
+
+ds_inc_rtn_u64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_inc_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x8c,0xd9,0x01,0x02,0x00,0x06]
+
+ds_inc_rtn_u64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_inc_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x8c,0xd9,0x01,0x02,0x00,0x06]
+
+ds_inc_rtn_u64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_inc_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x8c,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_inc_u32 v1, v2
+// GFX1250: ds_inc_u32 v1, v2 ; encoding: [0x00,0x00,0x0c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_inc_u32 v1, v2 offset:65535
+// GFX1250: ds_inc_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x0c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_inc_u32 v1, v2 offset:0
+// GFX1250: ds_inc_u32 v1, v2 ; encoding: [0x00,0x00,0x0c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_inc_u32 v255, v255 offset:4
+// GFX1250: ds_inc_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x0c,0xd8,0xff,0xff,0x00,0x00]
+
+ds_inc_u64 v1, v[2:3]
+// GFX1250: ds_inc_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x0c,0xd9,0x01,0x02,0x00,0x00]
+
+ds_inc_u64 v1, v[2:3] offset:65535
+// GFX1250: ds_inc_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x0c,0xd9,0x01,0x02,0x00,0x00]
+
+ds_inc_u64 v1, v[2:3] offset:0
+// GFX1250: ds_inc_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x0c,0xd9,0x01,0x02,0x00,0x00]
+
+ds_inc_u64 v255, v[254:255] offset:4
+// GFX1250: ds_inc_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x0c,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_load_2addr_b32 v[6:7], v1
+// GFX1250: ds_load_2addr_b32 v[6:7], v1 ; encoding: [0x00,0x00,0xdc,0xd8,0x01,0x00,0x00,0x06]
+
+ds_load_2addr_b32 v[6:7], v1 offset0:127 offset1:255
+// GFX1250: ds_load_2addr_b32 v[6:7], v1 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xdc,0xd8,0x01,0x00,0x00,0x06]
+
+ds_load_2addr_b32 v[6:7], v1 offset0:0 offset1:0
+// GFX1250: ds_load_2addr_b32 v[6:7], v1 ; encoding: [0x00,0x00,0xdc,0xd8,0x01,0x00,0x00,0x06]
+
+ds_load_2addr_b32 v[254:255], v255 offset0:16 offset1:1
+// GFX1250: ds_load_2addr_b32 v[254:255], v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xdc,0xd8,0xff,0x00,0x00,0xfe]
+
+ds_load_2addr_b64 v[6:9], v1
+// GFX1250: ds_load_2addr_b64 v[6:9], v1 ; encoding: [0x00,0x00,0xdc,0xd9,0x01,0x00,0x00,0x06]
+
+ds_load_2addr_b64 v[6:9], v1 offset0:127 offset1:255
+// GFX1250: ds_load_2addr_b64 v[6:9], v1 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xdc,0xd9,0x01,0x00,0x00,0x06]
+
+ds_load_2addr_b64 v[6:9], v1 offset0:0 offset1:0
+// GFX1250: ds_load_2addr_b64 v[6:9], v1 ; encoding: [0x00,0x00,0xdc,0xd9,0x01,0x00,0x00,0x06]
+
+ds_load_2addr_b64 v[252:255], v255 offset0:16 offset1:1
+// GFX1250: ds_load_2addr_b64 v[252:255], v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xdc,0xd9,0xff,0x00,0x00,0xfc]
+
+ds_load_2addr_stride64_b32 v[6:7], v1
+// GFX1250: ds_load_2addr_stride64_b32 v[6:7], v1 ; encoding: [0x00,0x00,0xe0,0xd8,0x01,0x00,0x00,0x06]
+
+ds_load_2addr_stride64_b32 v[6:7], v1 offset0:127 offset1:255
+// GFX1250: ds_load_2addr_stride64_b32 v[6:7], v1 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xe0,0xd8,0x01,0x00,0x00,0x06]
+
+ds_load_2addr_stride64_b32 v[6:7], v1 offset0:0 offset1:0
+// GFX1250: ds_load_2addr_stride64_b32 v[6:7], v1 ; encoding: [0x00,0x00,0xe0,0xd8,0x01,0x00,0x00,0x06]
+
+ds_load_2addr_stride64_b32 v[254:255], v255 offset0:16 offset1:1
+// GFX1250: ds_load_2addr_stride64_b32 v[254:255], v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xe0,0xd8,0xff,0x00,0x00,0xfe]
+
+ds_load_2addr_stride64_b64 v[6:9], v1
+// GFX1250: ds_load_2addr_stride64_b64 v[6:9], v1 ; encoding: [0x00,0x00,0xe0,0xd9,0x01,0x00,0x00,0x06]
+
+ds_load_2addr_stride64_b64 v[6:9], v1 offset0:127 offset1:255
+// GFX1250: ds_load_2addr_stride64_b64 v[6:9], v1 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xe0,0xd9,0x01,0x00,0x00,0x06]
+
+ds_load_2addr_stride64_b64 v[6:9], v1 offset0:0 offset1:0
+// GFX1250: ds_load_2addr_stride64_b64 v[6:9], v1 ; encoding: [0x00,0x00,0xe0,0xd9,0x01,0x00,0x00,0x06]
+
+ds_load_2addr_stride64_b64 v[252:255], v255 offset0:16 offset1:1
+// GFX1250: ds_load_2addr_stride64_b64 v[252:255], v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xe0,0xd9,0xff,0x00,0x00,0xfc]
+
+ds_load_addtid_b32 v5
+// GFX1250: ds_load_addtid_b32 v5 ; encoding: [0x00,0x00,0xc4,0xda,0x00,0x00,0x00,0x05]
+
+ds_load_addtid_b32 v5 offset:65535
+// GFX1250: ds_load_addtid_b32 v5 offset:65535 ; encoding: [0xff,0xff,0xc4,0xda,0x00,0x00,0x00,0x05]
+
+ds_load_addtid_b32 v5 offset:0
+// GFX1250: ds_load_addtid_b32 v5 ; encoding: [0x00,0x00,0xc4,0xda,0x00,0x00,0x00,0x05]
+
+ds_load_addtid_b32 v255 offset:4
+// GFX1250: ds_load_addtid_b32 v255 offset:4 ; encoding: [0x04,0x00,0xc4,0xda,0x00,0x00,0x00,0xff]
+
+ds_load_b128 v[6:9], v1
+// GFX1250: ds_load_b128 v[6:9], v1 ; encoding: [0x00,0x00,0xfc,0xdb,0x01,0x00,0x00,0x06]
+
+ds_load_b128 v[6:9], v1 offset:65535
+// GFX1250: ds_load_b128 v[6:9], v1 offset:65535 ; encoding: [0xff,0xff,0xfc,0xdb,0x01,0x00,0x00,0x06]
+
+ds_load_b128 v[6:9], v1 offset:0
+// GFX1250: ds_load_b128 v[6:9], v1 ; encoding: [0x00,0x00,0xfc,0xdb,0x01,0x00,0x00,0x06]
+
+ds_load_b128 v[252:255], v255 offset:4
+// GFX1250: ds_load_b128 v[252:255], v255 offset:4 ; encoding: [0x04,0x00,0xfc,0xdb,0xff,0x00,0x00,0xfc]
+
+ds_load_b32 v5, v1
+// GFX1250: ds_load_b32 v5, v1 ; encoding: [0x00,0x00,0xd8,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_b32 v5, v1 offset:65535
+// GFX1250: ds_load_b32 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xd8,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_b32 v5, v1 offset:0
+// GFX1250: ds_load_b32 v5, v1 ; encoding: [0x00,0x00,0xd8,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_b32 v255, v255 offset:4
+// GFX1250: ds_load_b32 v255, v255 offset:4 ; encoding: [0x04,0x00,0xd8,0xd8,0xff,0x00,0x00,0xff]
+
+ds_load_b64 v[6:7], v1
+// GFX1250: ds_load_b64 v[6:7], v1 ; encoding: [0x00,0x00,0xd8,0xd9,0x01,0x00,0x00,0x06]
+
+ds_load_b64 v[6:7], v1 offset:65535
+// GFX1250: ds_load_b64 v[6:7], v1 offset:65535 ; encoding: [0xff,0xff,0xd8,0xd9,0x01,0x00,0x00,0x06]
+
+ds_load_b64 v[6:7], v1 offset:0
+// GFX1250: ds_load_b64 v[6:7], v1 ; encoding: [0x00,0x00,0xd8,0xd9,0x01,0x00,0x00,0x06]
+
+ds_load_b64 v[254:255], v255 offset:4
+// GFX1250: ds_load_b64 v[254:255], v255 offset:4 ; encoding: [0x04,0x00,0xd8,0xd9,0xff,0x00,0x00,0xfe]
+
+ds_load_b96 v[6:8], v1
+// GFX1250: ds_load_b96 v[6:8], v1 ; encoding: [0x00,0x00,0xf8,0xdb,0x01,0x00,0x00,0x06]
+
+ds_load_b96 v[6:8], v1 offset:65535
+// GFX1250: ds_load_b96 v[6:8], v1 offset:65535 ; encoding: [0xff,0xff,0xf8,0xdb,0x01,0x00,0x00,0x06]
+
+ds_load_b96 v[6:8], v1 offset:0
+// GFX1250: ds_load_b96 v[6:8], v1 ; encoding: [0x00,0x00,0xf8,0xdb,0x01,0x00,0x00,0x06]
+
+ds_load_b96 v[252:254], v255 offset:4
+// GFX1250: ds_load_b96 v[252:254], v255 offset:4 ; encoding: [0x04,0x00,0xf8,0xdb,0xff,0x00,0x00,0xfc]
+
+ds_load_i16 v5, v1
+// GFX1250: ds_load_i16 v5, v1 ; encoding: [0x00,0x00,0xec,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_i16 v5, v1 offset:65535
+// GFX1250: ds_load_i16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xec,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_i16 v5, v1 offset:0
+// GFX1250: ds_load_i16 v5, v1 ; encoding: [0x00,0x00,0xec,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_i16 v255, v255 offset:4
+// GFX1250: ds_load_i16 v255, v255 offset:4 ; encoding: [0x04,0x00,0xec,0xd8,0xff,0x00,0x00,0xff]
+
+ds_load_i8 v5, v1
+// GFX1250: ds_load_i8 v5, v1 ; encoding: [0x00,0x00,0xe4,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_i8 v5, v1 offset:65535
+// GFX1250: ds_load_i8 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xe4,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_i8 v5, v1 offset:0
+// GFX1250: ds_load_i8 v5, v1 ; encoding: [0x00,0x00,0xe4,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_i8 v255, v255 offset:4
+// GFX1250: ds_load_i8 v255, v255 offset:4 ; encoding: [0x04,0x00,0xe4,0xd8,0xff,0x00,0x00,0xff]
+
+ds_load_i8_d16 v5, v1
+// GFX1250: ds_load_i8_d16 v5, v1 ; encoding: [0x00,0x00,0x90,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_i8_d16 v5, v1 offset:65535
+// GFX1250: ds_load_i8_d16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0x90,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_i8_d16 v5, v1 offset:0
+// GFX1250: ds_load_i8_d16 v5, v1 ; encoding: [0x00,0x00,0x90,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_i8_d16 v255, v255 offset:4
+// GFX1250: ds_load_i8_d16 v255, v255 offset:4 ; encoding: [0x04,0x00,0x90,0xda,0xff,0x00,0x00,0xff]
+
+ds_load_i8_d16_hi v5, v1
+// GFX1250: ds_load_i8_d16_hi v5, v1 ; encoding: [0x00,0x00,0x94,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_i8_d16_hi v5, v1 offset:65535
+// GFX1250: ds_load_i8_d16_hi v5, v1 offset:65535 ; encoding: [0xff,0xff,0x94,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_i8_d16_hi v5, v1 offset:0
+// GFX1250: ds_load_i8_d16_hi v5, v1 ; encoding: [0x00,0x00,0x94,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_i8_d16_hi v255, v255 offset:4
+// GFX1250: ds_load_i8_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x94,0xda,0xff,0x00,0x00,0xff]
+
+ds_load_u16 v5, v1
+// GFX1250: ds_load_u16 v5, v1 ; encoding: [0x00,0x00,0xf0,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_u16 v5, v1 offset:65535
+// GFX1250: ds_load_u16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xf0,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_u16 v5, v1 offset:0
+// GFX1250: ds_load_u16 v5, v1 ; encoding: [0x00,0x00,0xf0,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_u16 v255, v255 offset:4
+// GFX1250: ds_load_u16 v255, v255 offset:4 ; encoding: [0x04,0x00,0xf0,0xd8,0xff,0x00,0x00,0xff]
+
+ds_load_u16_d16 v5, v1
+// GFX1250: ds_load_u16_d16 v5, v1 ; encoding: [0x00,0x00,0x98,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_u16_d16 v5, v1 offset:65535
+// GFX1250: ds_load_u16_d16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0x98,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_u16_d16 v5, v1 offset:0
+// GFX1250: ds_load_u16_d16 v5, v1 ; encoding: [0x00,0x00,0x98,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_u16_d16 v255, v255 offset:4
+// GFX1250: ds_load_u16_d16 v255, v255 offset:4 ; encoding: [0x04,0x00,0x98,0xda,0xff,0x00,0x00,0xff]
+
+ds_load_u16_d16_hi v5, v1
+// GFX1250: ds_load_u16_d16_hi v5, v1 ; encoding: [0x00,0x00,0x9c,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_u16_d16_hi v5, v1 offset:65535
+// GFX1250: ds_load_u16_d16_hi v5, v1 offset:65535 ; encoding: [0xff,0xff,0x9c,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_u16_d16_hi v5, v1 offset:0
+// GFX1250: ds_load_u16_d16_hi v5, v1 ; encoding: [0x00,0x00,0x9c,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_u16_d16_hi v255, v255 offset:4
+// GFX1250: ds_load_u16_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x9c,0xda,0xff,0x00,0x00,0xff]
+
+ds_load_u8 v5, v1
+// GFX1250: ds_load_u8 v5, v1 ; encoding: [0x00,0x00,0xe8,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_u8 v5, v1 offset:65535
+// GFX1250: ds_load_u8 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xe8,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_u8 v5, v1 offset:0
+// GFX1250: ds_load_u8 v5, v1 ; encoding: [0x00,0x00,0xe8,0xd8,0x01,0x00,0x00,0x05]
+
+ds_load_u8 v255, v255 offset:4
+// GFX1250: ds_load_u8 v255, v255 offset:4 ; encoding: [0x04,0x00,0xe8,0xd8,0xff,0x00,0x00,0xff]
+
+ds_load_u8_d16 v5, v1
+// GFX1250: ds_load_u8_d16 v5, v1 ; encoding: [0x00,0x00,0x88,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_u8_d16 v5, v1 offset:65535
+// GFX1250: ds_load_u8_d16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0x88,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_u8_d16 v5, v1 offset:0
+// GFX1250: ds_load_u8_d16 v5, v1 ; encoding: [0x00,0x00,0x88,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_u8_d16 v255, v255 offset:4
+// GFX1250: ds_load_u8_d16 v255, v255 offset:4 ; encoding: [0x04,0x00,0x88,0xda,0xff,0x00,0x00,0xff]
+
+ds_load_u8_d16_hi v5, v1
+// GFX1250: ds_load_u8_d16_hi v5, v1 ; encoding: [0x00,0x00,0x8c,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_u8_d16_hi v5, v1 offset:65535
+// GFX1250: ds_load_u8_d16_hi v5, v1 offset:65535 ; encoding: [0xff,0xff,0x8c,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_u8_d16_hi v5, v1 offset:0
+// GFX1250: ds_load_u8_d16_hi v5, v1 ; encoding: [0x00,0x00,0x8c,0xda,0x01,0x00,0x00,0x05]
+
+ds_load_u8_d16_hi v255, v255 offset:4
+// GFX1250: ds_load_u8_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x8c,0xda,0xff,0x00,0x00,0xff]
+
+ds_max_num_f32 v1, v2
+// GFX1250: ds_max_num_f32 v1, v2 ; encoding: [0x00,0x00,0x4c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_max_num_f32 v1, v2 offset:65535
+// GFX1250: ds_max_num_f32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x4c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_max_num_f32 v1, v2 offset:0
+// GFX1250: ds_max_num_f32 v1, v2 ; encoding: [0x00,0x00,0x4c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_max_num_f32 v255, v255 offset:4
+// GFX1250: ds_max_num_f32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x4c,0xd8,0xff,0xff,0x00,0x00]
+
+ds_max_num_f64 v1, v[2:3]
+// GFX1250: ds_max_num_f64 v1, v[2:3] ; encoding: [0x00,0x00,0x4c,0xd9,0x01,0x02,0x00,0x00]
+
+ds_max_num_f64 v1, v[2:3] offset:65535
+// GFX1250: ds_max_num_f64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x4c,0xd9,0x01,0x02,0x00,0x00]
+
+ds_max_num_f64 v1, v[2:3] offset:0
+// GFX1250: ds_max_num_f64 v1, v[2:3] ; encoding: [0x00,0x00,0x4c,0xd9,0x01,0x02,0x00,0x00]
+
+ds_max_num_f64 v255, v[254:255] offset:4
+// GFX1250: ds_max_num_f64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x4c,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_max_i32 v1, v2
+// GFX1250: ds_max_i32 v1, v2 ; encoding: [0x00,0x00,0x18,0xd8,0x01,0x02,0x00,0x00]
+
+ds_max_i32 v1, v2 offset:65535
+// GFX1250: ds_max_i32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x18,0xd8,0x01,0x02,0x00,0x00]
+
+ds_max_i32 v1, v2 offset:0
+// GFX1250: ds_max_i32 v1, v2 ; encoding: [0x00,0x00,0x18,0xd8,0x01,0x02,0x00,0x00]
+
+ds_max_i32 v255, v255 offset:4
+// GFX1250: ds_max_i32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x18,0xd8,0xff,0xff,0x00,0x00]
+
+ds_max_i64 v1, v[2:3]
+// GFX1250: ds_max_i64 v1, v[2:3] ; encoding: [0x00,0x00,0x18,0xd9,0x01,0x02,0x00,0x00]
+
+ds_max_i64 v1, v[2:3] offset:65535
+// GFX1250: ds_max_i64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x18,0xd9,0x01,0x02,0x00,0x00]
+
+ds_max_i64 v1, v[2:3] offset:0
+// GFX1250: ds_max_i64 v1, v[2:3] ; encoding: [0x00,0x00,0x18,0xd9,0x01,0x02,0x00,0x00]
+
+ds_max_i64 v255, v[254:255] offset:4
+// GFX1250: ds_max_i64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x18,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_max_num_rtn_f32 v5, v1, v2
+// GFX1250: ds_max_num_rtn_f32 v5, v1, v2 ; encoding: [0x00,0x00,0xcc,0xd8,0x01,0x02,0x00,0x05]
+
+ds_max_num_rtn_f32 v5, v1, v2 offset:65535
+// GFX1250: ds_max_num_rtn_f32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xcc,0xd8,0x01,0x02,0x00,0x05]
+
+ds_max_num_rtn_f32 v5, v1, v2 offset:0
+// GFX1250: ds_max_num_rtn_f32 v5, v1, v2 ; encoding: [0x00,0x00,0xcc,0xd8,0x01,0x02,0x00,0x05]
+
+ds_max_num_rtn_f32 v255, v255, v255 offset:4
+// GFX1250: ds_max_num_rtn_f32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xcc,0xd8,0xff,0xff,0x00,0xff]
+
+ds_max_num_rtn_f64 v[6:7], v1, v[2:3]
+// GFX1250: ds_max_num_rtn_f64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xcc,0xd9,0x01,0x02,0x00,0x06]
+
+ds_max_num_rtn_f64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_max_num_rtn_f64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xcc,0xd9,0x01,0x02,0x00,0x06]
+
+ds_max_num_rtn_f64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_max_num_rtn_f64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xcc,0xd9,0x01,0x02,0x00,0x06]
+
+ds_max_num_rtn_f64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_max_num_rtn_f64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xcc,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_max_rtn_i32 v5, v1, v2
+// GFX1250: ds_max_rtn_i32 v5, v1, v2 ; encoding: [0x00,0x00,0x98,0xd8,0x01,0x02,0x00,0x05]
+
+ds_max_rtn_i32 v5, v1, v2 offset:65535
+// GFX1250: ds_max_rtn_i32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x98,0xd8,0x01,0x02,0x00,0x05]
+
+ds_max_rtn_i32 v5, v1, v2 offset:0
+// GFX1250: ds_max_rtn_i32 v5, v1, v2 ; encoding: [0x00,0x00,0x98,0xd8,0x01,0x02,0x00,0x05]
+
+ds_max_rtn_i32 v255, v255, v255 offset:4
+// GFX1250: ds_max_rtn_i32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x98,0xd8,0xff,0xff,0x00,0xff]
+
+ds_max_rtn_i64 v[6:7], v1, v[2:3]
+// GFX1250: ds_max_rtn_i64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x98,0xd9,0x01,0x02,0x00,0x06]
+
+ds_max_rtn_i64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_max_rtn_i64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x98,0xd9,0x01,0x02,0x00,0x06]
+
+ds_max_rtn_i64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_max_rtn_i64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x98,0xd9,0x01,0x02,0x00,0x06]
+
+ds_max_rtn_i64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_max_rtn_i64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x98,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_max_rtn_u32 v5, v1, v2
+// GFX1250: ds_max_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0xa0,0xd8,0x01,0x02,0x00,0x05]
+
+ds_max_rtn_u32 v5, v1, v2 offset:65535
+// GFX1250: ds_max_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xa0,0xd8,0x01,0x02,0x00,0x05]
+
+ds_max_rtn_u32 v5, v1, v2 offset:0
+// GFX1250: ds_max_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0xa0,0xd8,0x01,0x02,0x00,0x05]
+
+ds_max_rtn_u32 v255, v255, v255 offset:4
+// GFX1250: ds_max_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xa0,0xd8,0xff,0xff,0x00,0xff]
+
+ds_max_rtn_u64 v[6:7], v1, v[2:3]
+// GFX1250: ds_max_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xa0,0xd9,0x01,0x02,0x00,0x06]
+
+ds_max_rtn_u64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_max_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xa0,0xd9,0x01,0x02,0x00,0x06]
+
+ds_max_rtn_u64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_max_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xa0,0xd9,0x01,0x02,0x00,0x06]
+
+ds_max_rtn_u64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_max_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xa0,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_max_u32 v1, v2
+// GFX1250: ds_max_u32 v1, v2 ; encoding: [0x00,0x00,0x20,0xd8,0x01,0x02,0x00,0x00]
+
+ds_max_u32 v1, v2 offset:65535
+// GFX1250: ds_max_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x20,0xd8,0x01,0x02,0x00,0x00]
+
+ds_max_u32 v1, v2 offset:0
+// GFX1250: ds_max_u32 v1, v2 ; encoding: [0x00,0x00,0x20,0xd8,0x01,0x02,0x00,0x00]
+
+ds_max_u32 v255, v255 offset:4
+// GFX1250: ds_max_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x20,0xd8,0xff,0xff,0x00,0x00]
+
+ds_max_u64 v1, v[2:3]
+// GFX1250: ds_max_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x20,0xd9,0x01,0x02,0x00,0x00]
+
+ds_max_u64 v1, v[2:3] offset:65535
+// GFX1250: ds_max_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x20,0xd9,0x01,0x02,0x00,0x00]
+
+ds_max_u64 v1, v[2:3] offset:0
+// GFX1250: ds_max_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x20,0xd9,0x01,0x02,0x00,0x00]
+
+ds_max_u64 v255, v[254:255] offset:4
+// GFX1250: ds_max_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x20,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_min_num_f32 v1, v2
+// GFX1250: ds_min_num_f32 v1, v2 ; encoding: [0x00,0x00,0x48,0xd8,0x01,0x02,0x00,0x00]
+
+ds_min_num_f32 v1, v2 offset:65535
+// GFX1250: ds_min_num_f32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x48,0xd8,0x01,0x02,0x00,0x00]
+
+ds_min_num_f32 v1, v2 offset:0
+// GFX1250: ds_min_num_f32 v1, v2 ; encoding: [0x00,0x00,0x48,0xd8,0x01,0x02,0x00,0x00]
+
+ds_min_num_f32 v255, v255 offset:4
+// GFX1250: ds_min_num_f32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x48,0xd8,0xff,0xff,0x00,0x00]
+
+ds_min_num_f64 v1, v[2:3]
+// GFX1250: ds_min_num_f64 v1, v[2:3] ; encoding: [0x00,0x00,0x48,0xd9,0x01,0x02,0x00,0x00]
+
+ds_min_num_f64 v1, v[2:3] offset:65535
+// GFX1250: ds_min_num_f64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x48,0xd9,0x01,0x02,0x00,0x00]
+
+ds_min_num_f64 v1, v[2:3] offset:0
+// GFX1250: ds_min_num_f64 v1, v[2:3] ; encoding: [0x00,0x00,0x48,0xd9,0x01,0x02,0x00,0x00]
+
+ds_min_num_f64 v255, v[254:255] offset:4
+// GFX1250: ds_min_num_f64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x48,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_min_i32 v1, v2
+// GFX1250: ds_min_i32 v1, v2 ; encoding: [0x00,0x00,0x14,0xd8,0x01,0x02,0x00,0x00]
+
+ds_min_i32 v1, v2 offset:65535
+// GFX1250: ds_min_i32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x14,0xd8,0x01,0x02,0x00,0x00]
+
+ds_min_i32 v1, v2 offset:0
+// GFX1250: ds_min_i32 v1, v2 ; encoding: [0x00,0x00,0x14,0xd8,0x01,0x02,0x00,0x00]
+
+ds_min_i32 v255, v255 offset:4
+// GFX1250: ds_min_i32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x14,0xd8,0xff,0xff,0x00,0x00]
+
+ds_min_i64 v1, v[2:3]
+// GFX1250: ds_min_i64 v1, v[2:3] ; encoding: [0x00,0x00,0x14,0xd9,0x01,0x02,0x00,0x00]
+
+ds_min_i64 v1, v[2:3] offset:65535
+// GFX1250: ds_min_i64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x14,0xd9,0x01,0x02,0x00,0x00]
+
+ds_min_i64 v1, v[2:3] offset:0
+// GFX1250: ds_min_i64 v1, v[2:3] ; encoding: [0x00,0x00,0x14,0xd9,0x01,0x02,0x00,0x00]
+
+ds_min_i64 v255, v[254:255] offset:4
+// GFX1250: ds_min_i64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x14,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_min_num_rtn_f32 v5, v1, v2
+// GFX1250: ds_min_num_rtn_f32 v5, v1, v2 ; encoding: [0x00,0x00,0xc8,0xd8,0x01,0x02,0x00,0x05]
+
+ds_min_num_rtn_f32 v5, v1, v2 offset:65535
+// GFX1250: ds_min_num_rtn_f32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xc8,0xd8,0x01,0x02,0x00,0x05]
+
+ds_min_num_rtn_f32 v5, v1, v2 offset:0
+// GFX1250: ds_min_num_rtn_f32 v5, v1, v2 ; encoding: [0x00,0x00,0xc8,0xd8,0x01,0x02,0x00,0x05]
+
+ds_min_num_rtn_f32 v255, v255, v255 offset:4
+// GFX1250: ds_min_num_rtn_f32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xc8,0xd8,0xff,0xff,0x00,0xff]
+
+ds_min_num_rtn_f64 v[6:7], v1, v[2:3]
+// GFX1250: ds_min_num_rtn_f64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xc8,0xd9,0x01,0x02,0x00,0x06]
+
+ds_min_num_rtn_f64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_min_num_rtn_f64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xc8,0xd9,0x01,0x02,0x00,0x06]
+
+ds_min_num_rtn_f64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_min_num_rtn_f64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xc8,0xd9,0x01,0x02,0x00,0x06]
+
+ds_min_num_rtn_f64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_min_num_rtn_f64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xc8,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_min_rtn_i32 v5, v1, v2
+// GFX1250: ds_min_rtn_i32 v5, v1, v2 ; encoding: [0x00,0x00,0x94,0xd8,0x01,0x02,0x00,0x05]
+
+ds_min_rtn_i32 v5, v1, v2 offset:65535
+// GFX1250: ds_min_rtn_i32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x94,0xd8,0x01,0x02,0x00,0x05]
+
+ds_min_rtn_i32 v5, v1, v2 offset:0
+// GFX1250: ds_min_rtn_i32 v5, v1, v2 ; encoding: [0x00,0x00,0x94,0xd8,0x01,0x02,0x00,0x05]
+
+ds_min_rtn_i32 v255, v255, v255 offset:4
+// GFX1250: ds_min_rtn_i32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x94,0xd8,0xff,0xff,0x00,0xff]
+
+ds_min_rtn_i64 v[6:7], v1, v[2:3]
+// GFX1250: ds_min_rtn_i64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x94,0xd9,0x01,0x02,0x00,0x06]
+
+ds_min_rtn_i64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_min_rtn_i64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x94,0xd9,0x01,0x02,0x00,0x06]
+
+ds_min_rtn_i64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_min_rtn_i64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x94,0xd9,0x01,0x02,0x00,0x06]
+
+ds_min_rtn_i64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_min_rtn_i64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x94,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_min_rtn_u32 v5, v1, v2
+// GFX1250: ds_min_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x9c,0xd8,0x01,0x02,0x00,0x05]
+
+ds_min_rtn_u32 v5, v1, v2 offset:65535
+// GFX1250: ds_min_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x9c,0xd8,0x01,0x02,0x00,0x05]
+
+ds_min_rtn_u32 v5, v1, v2 offset:0
+// GFX1250: ds_min_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x9c,0xd8,0x01,0x02,0x00,0x05]
+
+ds_min_rtn_u32 v255, v255, v255 offset:4
+// GFX1250: ds_min_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x9c,0xd8,0xff,0xff,0x00,0xff]
+
+ds_min_rtn_u64 v[6:7], v1, v[2:3]
+// GFX1250: ds_min_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x9c,0xd9,0x01,0x02,0x00,0x06]
+
+ds_min_rtn_u64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_min_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x9c,0xd9,0x01,0x02,0x00,0x06]
+
+ds_min_rtn_u64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_min_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x9c,0xd9,0x01,0x02,0x00,0x06]
+
+ds_min_rtn_u64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_min_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x9c,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_min_u32 v1, v2
+// GFX1250: ds_min_u32 v1, v2 ; encoding: [0x00,0x00,0x1c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_min_u32 v1, v2 offset:65535
+// GFX1250: ds_min_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x1c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_min_u32 v1, v2 offset:0
+// GFX1250: ds_min_u32 v1, v2 ; encoding: [0x00,0x00,0x1c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_min_u32 v255, v255 offset:4
+// GFX1250: ds_min_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x1c,0xd8,0xff,0xff,0x00,0x00]
+
+ds_min_u64 v1, v[2:3]
+// GFX1250: ds_min_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x1c,0xd9,0x01,0x02,0x00,0x00]
+
+ds_min_u64 v1, v[2:3] offset:65535
+// GFX1250: ds_min_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x1c,0xd9,0x01,0x02,0x00,0x00]
+
+ds_min_u64 v1, v[2:3] offset:0
+// GFX1250: ds_min_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x1c,0xd9,0x01,0x02,0x00,0x00]
+
+ds_min_u64 v255, v[254:255] offset:4
+// GFX1250: ds_min_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x1c,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_mskor_b32 v1, v2, v3
+// GFX1250: ds_mskor_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x30,0xd8,0x01,0x02,0x03,0x00]
+
+ds_mskor_b32 v1, v2, v3 offset:65535
+// GFX1250: ds_mskor_b32 v1, v2, v3 offset:65535 ; encoding: [0xff,0xff,0x30,0xd8,0x01,0x02,0x03,0x00]
+
+ds_mskor_b32 v1, v2, v3 offset:0
+// GFX1250: ds_mskor_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x30,0xd8,0x01,0x02,0x03,0x00]
+
+ds_mskor_b32 v255, v255, v255 offset:4
+// GFX1250: ds_mskor_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x30,0xd8,0xff,0xff,0xff,0x00]
+
+ds_mskor_b64 v1, v[2:3], v[4:5]
+// GFX1250: ds_mskor_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x30,0xd9,0x01,0x02,0x04,0x00]
+
+ds_mskor_b64 v1, v[2:3], v[4:5] offset:65535
+// GFX1250: ds_mskor_b64 v1, v[2:3], v[4:5] offset:65535 ; encoding: [0xff,0xff,0x30,0xd9,0x01,0x02,0x04,0x00]
+
+ds_mskor_b64 v1, v[2:3], v[4:5] offset:0
+// GFX1250: ds_mskor_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x30,0xd9,0x01,0x02,0x04,0x00]
+
+ds_mskor_b64 v255, v[254:255], v[254:255] offset:4
+// GFX1250: ds_mskor_b64 v255, v[254:255], v[254:255] offset:4 ; encoding: [0x04,0x00,0x30,0xd9,0xff,0xfe,0xfe,0x00]
+
+ds_mskor_rtn_b32 v5, v1, v2, v3
+// GFX1250: ds_mskor_rtn_b32 v5, v1, v2, v3 ; encoding: [0x00,0x00,0xb0,0xd8,0x01,0x02,0x03,0x05]
+
+ds_mskor_rtn_b32 v5, v1, v2, v3 offset:65535
+// GFX1250: ds_mskor_rtn_b32 v5, v1, v2, v3 offset:65535 ; encoding: [0xff,0xff,0xb0,0xd8,0x01,0x02,0x03,0x05]
+
+ds_mskor_rtn_b32 v5, v1, v2, v3 offset:0
+// GFX1250: ds_mskor_rtn_b32 v5, v1, v2, v3 ; encoding: [0x00,0x00,0xb0,0xd8,0x01,0x02,0x03,0x05]
+
+ds_mskor_rtn_b32 v255, v255, v255, v255 offset:4
+// GFX1250: ds_mskor_rtn_b32 v255, v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xb0,0xd8,0xff,0xff,0xff,0xff]
+
+ds_mskor_rtn_b64 v[6:7], v1, v[2:3], v[4:5]
+// GFX1250: ds_mskor_rtn_b64 v[6:7], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xb0,0xd9,0x01,0x02,0x04,0x06]
+
+ds_mskor_rtn_b64 v[6:7], v1, v[2:3], v[4:5] offset:65535
+// GFX1250: ds_mskor_rtn_b64 v[6:7], v1, v[2:3], v[4:5] offset:65535 ; encoding: [0xff,0xff,0xb0,0xd9,0x01,0x02,0x04,0x06]
+
+ds_mskor_rtn_b64 v[6:7], v1, v[2:3], v[4:5] offset:0
+// GFX1250: ds_mskor_rtn_b64 v[6:7], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xb0,0xd9,0x01,0x02,0x04,0x06]
+
+ds_mskor_rtn_b64 v[254:255], v255, v[254:255], v[254:255] offset:4
+// GFX1250: ds_mskor_rtn_b64 v[254:255], v255, v[254:255], v[254:255] offset:4 ; encoding: [0x04,0x00,0xb0,0xd9,0xff,0xfe,0xfe,0xfe]
+
+ds_or_b32 v1, v2
+// GFX1250: ds_or_b32 v1, v2 ; encoding: [0x00,0x00,0x28,0xd8,0x01,0x02,0x00,0x00]
+
+ds_or_b32 v1, v2 offset:65535
+// GFX1250: ds_or_b32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x28,0xd8,0x01,0x02,0x00,0x00]
+
+ds_or_b32 v1, v2 offset:0
+// GFX1250: ds_or_b32 v1, v2 ; encoding: [0x00,0x00,0x28,0xd8,0x01,0x02,0x00,0x00]
+
+ds_or_b32 v255, v255 offset:4
+// GFX1250: ds_or_b32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x28,0xd8,0xff,0xff,0x00,0x00]
+
+ds_or_b64 v1, v[2:3]
+// GFX1250: ds_or_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x28,0xd9,0x01,0x02,0x00,0x00]
+
+ds_or_b64 v1, v[2:3] offset:65535
+// GFX1250: ds_or_b64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x28,0xd9,0x01,0x02,0x00,0x00]
+
+ds_or_b64 v1, v[2:3] offset:0
+// GFX1250: ds_or_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x28,0xd9,0x01,0x02,0x00,0x00]
+
+ds_or_b64 v255, v[254:255] offset:4
+// GFX1250: ds_or_b64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x28,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_or_rtn_b32 v5, v1, v2
+// GFX1250: ds_or_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xa8,0xd8,0x01,0x02,0x00,0x05]
+
+ds_or_rtn_b32 v5, v1, v2 offset:65535
+// GFX1250: ds_or_rtn_b32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xa8,0xd8,0x01,0x02,0x00,0x05]
+
+ds_or_rtn_b32 v5, v1, v2 offset:0
+// GFX1250: ds_or_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xa8,0xd8,0x01,0x02,0x00,0x05]
+
+ds_or_rtn_b32 v255, v255, v255 offset:4
+// GFX1250: ds_or_rtn_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xa8,0xd8,0xff,0xff,0x00,0xff]
+
+ds_or_rtn_b64 v[6:7], v1, v[2:3]
+// GFX1250: ds_or_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xa8,0xd9,0x01,0x02,0x00,0x06]
+
+ds_or_rtn_b64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_or_rtn_b64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xa8,0xd9,0x01,0x02,0x00,0x06]
+
+ds_or_rtn_b64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_or_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xa8,0xd9,0x01,0x02,0x00,0x06]
+
+ds_or_rtn_b64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_or_rtn_b64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xa8,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_permute_b32 v5, v1, v2
+// GFX1250: ds_permute_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xc8,0xda,0x01,0x02,0x00,0x05]
+
+ds_permute_b32 v5, v1, v2 offset:65535
+// GFX1250: ds_permute_b32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xc8,0xda,0x01,0x02,0x00,0x05]
+
+ds_permute_b32 v5, v1, v2 offset:0
+// GFX1250: ds_permute_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xc8,0xda,0x01,0x02,0x00,0x05]
+
+ds_permute_b32 v255, v255, v255 offset:4
+// GFX1250: ds_permute_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xc8,0xda,0xff,0xff,0x00,0xff]
+
+ds_pk_add_f16 v2, v1
+// GFX1250: ds_pk_add_f16 v2, v1 ; encoding: [0x00,0x00,0x68,0xda,0x02,0x01,0x00,0x00]
+
+ds_pk_add_f16 v2, v1 offset:0
+// GFX1250: ds_pk_add_f16 v2, v1 ; encoding: [0x00,0x00,0x68,0xda,0x02,0x01,0x00,0x00]
+
+ds_pk_add_f16 v2, v1 offset:4660
+// GFX1250: ds_pk_add_f16 v2, v1 offset:4660 ; encoding: [0x34,0x12,0x68,0xda,0x02,0x01,0x00,0x00]
+
+ds_pk_add_f16 v2, v1 offset:65535
+// GFX1250: ds_pk_add_f16 v2, v1 offset:65535 ; encoding: [0xff,0xff,0x68,0xda,0x02,0x01,0x00,0x00]
+
+ds_pk_add_f16 v255, v255
+// GFX1250: ds_pk_add_f16 v255, v255 ; encoding: [0x00,0x00,0x68,0xda,0xff,0xff,0x00,0x00]
+
+ds_pk_add_f16 v255, v255 offset:0
+// GFX1250: ds_pk_add_f16 v255, v255 ; encoding: [0x00,0x00,0x68,0xda,0xff,0xff,0x00,0x00]
+
+ds_pk_add_f16 v255, v255 offset:4660
+// GFX1250: ds_pk_add_f16 v255, v255 offset:4660 ; encoding: [0x34,0x12,0x68,0xda,0xff,0xff,0x00,0x00]
+
+ds_pk_add_f16 v255, v255 offset:65535
+// GFX1250: ds_pk_add_f16 v255, v255 offset:65535 ; encoding: [0xff,0xff,0x68,0xda,0xff,0xff,0x00,0x00]
+
+ds_pk_add_f16 v0, v0
+// GFX1250: ds_pk_add_f16 v0, v0 ; encoding: [0x00,0x00,0x68,0xda,0x00,0x00,0x00,0x00]
+
+ds_pk_add_bf16 v2, v1
+// GFX1250: ds_pk_add_bf16 v2, v1 ; encoding: [0x00,0x00,0x6c,0xda,0x02,0x01,0x00,0x00]
+
+ds_pk_add_bf16 v2, v1 offset:0
+// GFX1250: ds_pk_add_bf16 v2, v1 ; encoding: [0x00,0x00,0x6c,0xda,0x02,0x01,0x00,0x00]
+
+ds_pk_add_bf16 v255, v255
+// GFX1250: ds_pk_add_bf16 v255, v255 ; encoding: [0x00,0x00,0x6c,0xda,0xff,0xff,0x00,0x00]
+
+ds_pk_add_bf16 v255, v255 offset:4660
+// GFX1250: ds_pk_add_bf16 v255, v255 offset:4660 ; encoding: [0x34,0x12,0x6c,0xda,0xff,0xff,0x00,0x00]
+
+ds_pk_add_bf16 v0, v0
+// GFX1250: ds_pk_add_bf16 v0, v0 ; encoding: [0x00,0x00,0x6c,0xda,0x00,0x00,0x00,0x00]
+
+ds_pk_add_bf16 v0, v0 offset:65535
+// GFX1250: ds_pk_add_bf16 v0, v0 offset:65535 ; encoding: [0xff,0xff,0x6c,0xda,0x00,0x00,0x00,0x00]
+
+ds_pk_add_rtn_f16 v3, v2, v1
+// GFX1250: ds_pk_add_rtn_f16 v3, v2, v1 ; encoding: [0x00,0x00,0xa8,0xda,0x02,0x01,0x00,0x03]
+
+ds_pk_add_rtn_f16 v3, v2, v1 offset:4660
+// GFX1250: ds_pk_add_rtn_f16 v3, v2, v1 offset:4660 ; encoding: [0x34,0x12,0xa8,0xda,0x02,0x01,0x00,0x03]
+
+ds_pk_add_rtn_f16 v255, v0, v200
+// GFX1250: ds_pk_add_rtn_f16 v255, v0, v200 ; encoding: [0x00,0x00,0xa8,0xda,0x00,0xc8,0x00,0xff]
+
+ds_pk_add_rtn_f16 v255, v0, v200 offset:65535
+// GFX1250: ds_pk_add_rtn_f16 v255, v0, v200 offset:65535 ; encoding: [0xff,0xff,0xa8,0xda,0x00,0xc8,0x00,0xff]
+
+ds_pk_add_rtn_f16 v255, v255, v255
+// GFX1250: ds_pk_add_rtn_f16 v255, v255, v255 ; encoding: [0x00,0x00,0xa8,0xda,0xff,0xff,0x00,0xff]
+
+ds_pk_add_rtn_bf16 v3, v2, v1
+// GFX1250: ds_pk_add_rtn_bf16 v3, v2, v1 ; encoding: [0x00,0x00,0xac,0xda,0x02,0x01,0x00,0x03]
+
+ds_pk_add_rtn_bf16 v3, v2, v1 offset:4660
+// GFX1250: ds_pk_add_rtn_bf16 v3, v2, v1 offset:4660 ; encoding: [0x34,0x12,0xac,0xda,0x02,0x01,0x00,0x03]
+
+ds_pk_add_rtn_bf16 v255, v0, v200
+// GFX1250: ds_pk_add_rtn_bf16 v255, v0, v200 ; encoding: [0x00,0x00,0xac,0xda,0x00,0xc8,0x00,0xff]
+
+ds_pk_add_rtn_bf16 v255, v255, v255
+// GFX1250: ds_pk_add_rtn_bf16 v255, v255, v255 ; encoding: [0x00,0x00,0xac,0xda,0xff,0xff,0x00,0xff]
+
+ds_pk_add_rtn_bf16 v255, v255, v255 offset:65535
+// GFX1250: ds_pk_add_rtn_bf16 v255, v255, v255 offset:65535 ; encoding: [0xff,0xff,0xac,0xda,0xff,0xff,0x00,0xff]
+
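+// The ds_read* mnemonics below are accepted as aliases: as the GFX1250
+// check lines show, they assemble to the canonical ds_load* names with
+// the same encodings as the ds_load* tests elsewhere in this file.
+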
+ds_read2_b32 v[6:7], v1
+// GFX1250: ds_load_2addr_b32 v[6:7], v1 ; encoding: [0x00,0x00,0xdc,0xd8,0x01,0x00,0x00,0x06]
+
+ds_read2_b32 v[6:7], v1 offset0:127 offset1:255
+// GFX1250: ds_load_2addr_b32 v[6:7], v1 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xdc,0xd8,0x01,0x00,0x00,0x06]
+
+ds_read2_b32 v[6:7], v1 offset0:0 offset1:0
+// GFX1250: ds_load_2addr_b32 v[6:7], v1 ; encoding: [0x00,0x00,0xdc,0xd8,0x01,0x00,0x00,0x06]
+
+ds_read2_b32 v[254:255], v255 offset0:16 offset1:1
+// GFX1250: ds_load_2addr_b32 v[254:255], v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xdc,0xd8,0xff,0x00,0x00,0xfe]
+
+ds_read2_b64 v[6:9], v1
+// GFX1250: ds_load_2addr_b64 v[6:9], v1 ; encoding: [0x00,0x00,0xdc,0xd9,0x01,0x00,0x00,0x06]
+
+ds_read2_b64 v[6:9], v1 offset0:127 offset1:255
+// GFX1250: ds_load_2addr_b64 v[6:9], v1 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xdc,0xd9,0x01,0x00,0x00,0x06]
+
+ds_read2_b64 v[6:9], v1 offset0:0 offset1:0
+// GFX1250: ds_load_2addr_b64 v[6:9], v1 ; encoding: [0x00,0x00,0xdc,0xd9,0x01,0x00,0x00,0x06]
+
+ds_read2_b64 v[252:255], v255 offset0:16 offset1:1
+// GFX1250: ds_load_2addr_b64 v[252:255], v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xdc,0xd9,0xff,0x00,0x00,0xfc]
+
+ds_read2st64_b32 v[6:7], v1
+// GFX1250: ds_load_2addr_stride64_b32 v[6:7], v1 ; encoding: [0x00,0x00,0xe0,0xd8,0x01,0x00,0x00,0x06]
+
+ds_read2st64_b32 v[6:7], v1 offset0:127 offset1:255
+// GFX1250: ds_load_2addr_stride64_b32 v[6:7], v1 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xe0,0xd8,0x01,0x00,0x00,0x06]
+
+ds_read2st64_b32 v[6:7], v1 offset0:0 offset1:0
+// GFX1250: ds_load_2addr_stride64_b32 v[6:7], v1 ; encoding: [0x00,0x00,0xe0,0xd8,0x01,0x00,0x00,0x06]
+
+ds_read2st64_b32 v[254:255], v255 offset0:16 offset1:1
+// GFX1250: ds_load_2addr_stride64_b32 v[254:255], v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xe0,0xd8,0xff,0x00,0x00,0xfe]
+
+ds_read2st64_b64 v[6:9], v1
+// GFX1250: ds_load_2addr_stride64_b64 v[6:9], v1 ; encoding: [0x00,0x00,0xe0,0xd9,0x01,0x00,0x00,0x06]
+
+ds_read2st64_b64 v[6:9], v1 offset0:127 offset1:255
+// GFX1250: ds_load_2addr_stride64_b64 v[6:9], v1 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xe0,0xd9,0x01,0x00,0x00,0x06]
+
+ds_read2st64_b64 v[6:9], v1 offset0:0 offset1:0
+// GFX1250: ds_load_2addr_stride64_b64 v[6:9], v1 ; encoding: [0x00,0x00,0xe0,0xd9,0x01,0x00,0x00,0x06]
+
+ds_read2st64_b64 v[252:255], v255 offset0:16 offset1:1
+// GFX1250: ds_load_2addr_stride64_b64 v[252:255], v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xe0,0xd9,0xff,0x00,0x00,0xfc]
+
+ds_read_addtid_b32 v5
+// GFX1250: ds_load_addtid_b32 v5 ; encoding: [0x00,0x00,0xc4,0xda,0x00,0x00,0x00,0x05]
+
+ds_read_addtid_b32 v5 offset:65535
+// GFX1250: ds_load_addtid_b32 v5 offset:65535 ; encoding: [0xff,0xff,0xc4,0xda,0x00,0x00,0x00,0x05]
+
+ds_read_addtid_b32 v5 offset:0
+// GFX1250: ds_load_addtid_b32 v5 ; encoding: [0x00,0x00,0xc4,0xda,0x00,0x00,0x00,0x05]
+
+ds_read_addtid_b32 v255 offset:4
+// GFX1250: ds_load_addtid_b32 v255 offset:4 ; encoding: [0x04,0x00,0xc4,0xda,0x00,0x00,0x00,0xff]
+
+ds_read_b128 v[6:9], v1
+// GFX1250: ds_load_b128 v[6:9], v1 ; encoding: [0x00,0x00,0xfc,0xdb,0x01,0x00,0x00,0x06]
+
+ds_read_b128 v[6:9], v1 offset:65535
+// GFX1250: ds_load_b128 v[6:9], v1 offset:65535 ; encoding: [0xff,0xff,0xfc,0xdb,0x01,0x00,0x00,0x06]
+
+ds_read_b128 v[6:9], v1 offset:0
+// GFX1250: ds_load_b128 v[6:9], v1 ; encoding: [0x00,0x00,0xfc,0xdb,0x01,0x00,0x00,0x06]
+
+ds_read_b128 v[252:255], v255 offset:4
+// GFX1250: ds_load_b128 v[252:255], v255 offset:4 ; encoding: [0x04,0x00,0xfc,0xdb,0xff,0x00,0x00,0xfc]
+
+ds_read_b32 v5, v1
+// GFX1250: ds_load_b32 v5, v1 ; encoding: [0x00,0x00,0xd8,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_b32 v5, v1 offset:65535
+// GFX1250: ds_load_b32 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xd8,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_b32 v5, v1 offset:0
+// GFX1250: ds_load_b32 v5, v1 ; encoding: [0x00,0x00,0xd8,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_b32 v255, v255 offset:4
+// GFX1250: ds_load_b32 v255, v255 offset:4 ; encoding: [0x04,0x00,0xd8,0xd8,0xff,0x00,0x00,0xff]
+
+ds_read_b64 v[6:7], v1
+// GFX1250: ds_load_b64 v[6:7], v1 ; encoding: [0x00,0x00,0xd8,0xd9,0x01,0x00,0x00,0x06]
+
+ds_read_b64 v[6:7], v1 offset:65535
+// GFX1250: ds_load_b64 v[6:7], v1 offset:65535 ; encoding: [0xff,0xff,0xd8,0xd9,0x01,0x00,0x00,0x06]
+
+ds_read_b64 v[6:7], v1 offset:0
+// GFX1250: ds_load_b64 v[6:7], v1 ; encoding: [0x00,0x00,0xd8,0xd9,0x01,0x00,0x00,0x06]
+
+ds_read_b64 v[254:255], v255 offset:4
+// GFX1250: ds_load_b64 v[254:255], v255 offset:4 ; encoding: [0x04,0x00,0xd8,0xd9,0xff,0x00,0x00,0xfe]
+
+ds_read_b96 v[6:8], v1
+// GFX1250: ds_load_b96 v[6:8], v1 ; encoding: [0x00,0x00,0xf8,0xdb,0x01,0x00,0x00,0x06]
+
+ds_read_b96 v[6:8], v1 offset:65535
+// GFX1250: ds_load_b96 v[6:8], v1 offset:65535 ; encoding: [0xff,0xff,0xf8,0xdb,0x01,0x00,0x00,0x06]
+
+ds_read_b96 v[6:8], v1 offset:0
+// GFX1250: ds_load_b96 v[6:8], v1 ; encoding: [0x00,0x00,0xf8,0xdb,0x01,0x00,0x00,0x06]
+
+ds_read_b96 v[252:254], v255 offset:4
+// GFX1250: ds_load_b96 v[252:254], v255 offset:4 ; encoding: [0x04,0x00,0xf8,0xdb,0xff,0x00,0x00,0xfc]
+
+ds_read_i16 v5, v1
+// GFX1250: ds_load_i16 v5, v1 ; encoding: [0x00,0x00,0xec,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_i16 v5, v1 offset:65535
+// GFX1250: ds_load_i16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xec,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_i16 v5, v1 offset:0
+// GFX1250: ds_load_i16 v5, v1 ; encoding: [0x00,0x00,0xec,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_i16 v255, v255 offset:4
+// GFX1250: ds_load_i16 v255, v255 offset:4 ; encoding: [0x04,0x00,0xec,0xd8,0xff,0x00,0x00,0xff]
+
+ds_read_i8 v5, v1
+// GFX1250: ds_load_i8 v5, v1 ; encoding: [0x00,0x00,0xe4,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_i8 v5, v1 offset:65535
+// GFX1250: ds_load_i8 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xe4,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_i8 v5, v1 offset:0
+// GFX1250: ds_load_i8 v5, v1 ; encoding: [0x00,0x00,0xe4,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_i8 v255, v255 offset:4
+// GFX1250: ds_load_i8 v255, v255 offset:4 ; encoding: [0x04,0x00,0xe4,0xd8,0xff,0x00,0x00,0xff]
+
+ds_read_i8_d16 v5, v1
+// GFX1250: ds_load_i8_d16 v5, v1 ; encoding: [0x00,0x00,0x90,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_i8_d16 v5, v1 offset:65535
+// GFX1250: ds_load_i8_d16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0x90,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_i8_d16 v5, v1 offset:0
+// GFX1250: ds_load_i8_d16 v5, v1 ; encoding: [0x00,0x00,0x90,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_i8_d16 v255, v255 offset:4
+// GFX1250: ds_load_i8_d16 v255, v255 offset:4 ; encoding: [0x04,0x00,0x90,0xda,0xff,0x00,0x00,0xff]
+
+ds_read_i8_d16_hi v5, v1
+// GFX1250: ds_load_i8_d16_hi v5, v1 ; encoding: [0x00,0x00,0x94,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_i8_d16_hi v5, v1 offset:65535
+// GFX1250: ds_load_i8_d16_hi v5, v1 offset:65535 ; encoding: [0xff,0xff,0x94,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_i8_d16_hi v5, v1 offset:0
+// GFX1250: ds_load_i8_d16_hi v5, v1 ; encoding: [0x00,0x00,0x94,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_i8_d16_hi v255, v255 offset:4
+// GFX1250: ds_load_i8_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x94,0xda,0xff,0x00,0x00,0xff]
+
+ds_read_u16 v5, v1
+// GFX1250: ds_load_u16 v5, v1 ; encoding: [0x00,0x00,0xf0,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_u16 v5, v1 offset:65535
+// GFX1250: ds_load_u16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xf0,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_u16 v5, v1 offset:0
+// GFX1250: ds_load_u16 v5, v1 ; encoding: [0x00,0x00,0xf0,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_u16 v255, v255 offset:4
+// GFX1250: ds_load_u16 v255, v255 offset:4 ; encoding: [0x04,0x00,0xf0,0xd8,0xff,0x00,0x00,0xff]
+
+ds_read_u16_d16 v5, v1
+// GFX1250: ds_load_u16_d16 v5, v1 ; encoding: [0x00,0x00,0x98,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_u16_d16 v5, v1 offset:65535
+// GFX1250: ds_load_u16_d16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0x98,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_u16_d16 v5, v1 offset:0
+// GFX1250: ds_load_u16_d16 v5, v1 ; encoding: [0x00,0x00,0x98,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_u16_d16 v255, v255 offset:4
+// GFX1250: ds_load_u16_d16 v255, v255 offset:4 ; encoding: [0x04,0x00,0x98,0xda,0xff,0x00,0x00,0xff]
+
+ds_read_u16_d16_hi v5, v1
+// GFX1250: ds_load_u16_d16_hi v5, v1 ; encoding: [0x00,0x00,0x9c,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_u16_d16_hi v5, v1 offset:65535
+// GFX1250: ds_load_u16_d16_hi v5, v1 offset:65535 ; encoding: [0xff,0xff,0x9c,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_u16_d16_hi v5, v1 offset:0
+// GFX1250: ds_load_u16_d16_hi v5, v1 ; encoding: [0x00,0x00,0x9c,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_u16_d16_hi v255, v255 offset:4
+// GFX1250: ds_load_u16_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x9c,0xda,0xff,0x00,0x00,0xff]
+
+ds_read_u8 v5, v1
+// GFX1250: ds_load_u8 v5, v1 ; encoding: [0x00,0x00,0xe8,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_u8 v5, v1 offset:65535
+// GFX1250: ds_load_u8 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xe8,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_u8 v5, v1 offset:0
+// GFX1250: ds_load_u8 v5, v1 ; encoding: [0x00,0x00,0xe8,0xd8,0x01,0x00,0x00,0x05]
+
+ds_read_u8 v255, v255 offset:4
+// GFX1250: ds_load_u8 v255, v255 offset:4 ; encoding: [0x04,0x00,0xe8,0xd8,0xff,0x00,0x00,0xff]
+
+ds_read_u8_d16 v5, v1
+// GFX1250: ds_load_u8_d16 v5, v1 ; encoding: [0x00,0x00,0x88,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_u8_d16 v5, v1 offset:65535
+// GFX1250: ds_load_u8_d16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0x88,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_u8_d16 v5, v1 offset:0
+// GFX1250: ds_load_u8_d16 v5, v1 ; encoding: [0x00,0x00,0x88,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_u8_d16 v255, v255 offset:4
+// GFX1250: ds_load_u8_d16 v255, v255 offset:4 ; encoding: [0x04,0x00,0x88,0xda,0xff,0x00,0x00,0xff]
+
+ds_read_u8_d16_hi v5, v1
+// GFX1250: ds_load_u8_d16_hi v5, v1 ; encoding: [0x00,0x00,0x8c,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_u8_d16_hi v5, v1 offset:65535
+// GFX1250: ds_load_u8_d16_hi v5, v1 offset:65535 ; encoding: [0xff,0xff,0x8c,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_u8_d16_hi v5, v1 offset:0
+// GFX1250: ds_load_u8_d16_hi v5, v1 ; encoding: [0x00,0x00,0x8c,0xda,0x01,0x00,0x00,0x05]
+
+ds_read_u8_d16_hi v255, v255 offset:4
+// GFX1250: ds_load_u8_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x8c,0xda,0xff,0x00,0x00,0xff]
+
+ds_rsub_rtn_u32 v5, v1, v2
+// GFX1250: ds_rsub_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x88,0xd8,0x01,0x02,0x00,0x05]
+
+ds_rsub_rtn_u32 v5, v1, v2 offset:65535
+// GFX1250: ds_rsub_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x88,0xd8,0x01,0x02,0x00,0x05]
+
+ds_rsub_rtn_u32 v5, v1, v2 offset:0
+// GFX1250: ds_rsub_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x88,0xd8,0x01,0x02,0x00,0x05]
+
+ds_rsub_rtn_u32 v255, v255, v255 offset:4
+// GFX1250: ds_rsub_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x88,0xd8,0xff,0xff,0x00,0xff]
+
+ds_rsub_rtn_u64 v[6:7], v1, v[2:3]
+// GFX1250: ds_rsub_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x88,0xd9,0x01,0x02,0x00,0x06]
+
+ds_rsub_rtn_u64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_rsub_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x88,0xd9,0x01,0x02,0x00,0x06]
+
+ds_rsub_rtn_u64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_rsub_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x88,0xd9,0x01,0x02,0x00,0x06]
+
+ds_rsub_rtn_u64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_rsub_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x88,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_rsub_u32 v1, v2
+// GFX1250: ds_rsub_u32 v1, v2 ; encoding: [0x00,0x00,0x08,0xd8,0x01,0x02,0x00,0x00]
+
+ds_rsub_u32 v1, v2 offset:65535
+// GFX1250: ds_rsub_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x08,0xd8,0x01,0x02,0x00,0x00]
+
+ds_rsub_u32 v1, v2 offset:0
+// GFX1250: ds_rsub_u32 v1, v2 ; encoding: [0x00,0x00,0x08,0xd8,0x01,0x02,0x00,0x00]
+
+ds_rsub_u32 v255, v255 offset:4
+// GFX1250: ds_rsub_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x08,0xd8,0xff,0xff,0x00,0x00]
+
+ds_rsub_u64 v1, v[2:3]
+// GFX1250: ds_rsub_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x08,0xd9,0x01,0x02,0x00,0x00]
+
+ds_rsub_u64 v1, v[2:3] offset:65535
+// GFX1250: ds_rsub_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x08,0xd9,0x01,0x02,0x00,0x00]
+
+ds_rsub_u64 v1, v[2:3] offset:0
+// GFX1250: ds_rsub_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x08,0xd9,0x01,0x02,0x00,0x00]
+
+ds_rsub_u64 v255, v[254:255] offset:4
+// GFX1250: ds_rsub_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x08,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_store_2addr_b32 v1, v2, v3
+// GFX1250: ds_store_2addr_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x38,0xd8,0x01,0x02,0x03,0x00]
+
+ds_store_2addr_b32 v1, v2, v3 offset0:127 offset1:255
+// GFX1250: ds_store_2addr_b32 v1, v2, v3 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0x38,0xd8,0x01,0x02,0x03,0x00]
+
+ds_store_2addr_b32 v1, v2, v3 offset0:0 offset1:0
+// GFX1250: ds_store_2addr_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x38,0xd8,0x01,0x02,0x03,0x00]
+
+ds_store_2addr_b32 v255, v255, v255 offset0:16 offset1:1
+// GFX1250: ds_store_2addr_b32 v255, v255, v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0x38,0xd8,0xff,0xff,0xff,0x00]
+
+ds_store_2addr_b64 v1, v[2:3], v[4:5]
+// GFX1250: ds_store_2addr_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x38,0xd9,0x01,0x02,0x04,0x00]
+
+ds_store_2addr_b64 v1, v[2:3], v[4:5] offset0:127 offset1:255
+// GFX1250: ds_store_2addr_b64 v1, v[2:3], v[4:5] offset0:127 offset1:255 ; encoding: [0x7f,0xff,0x38,0xd9,0x01,0x02,0x04,0x00]
+
+ds_store_2addr_b64 v1, v[2:3], v[4:5] offset0:0 offset1:0
+// GFX1250: ds_store_2addr_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x38,0xd9,0x01,0x02,0x04,0x00]
+
+ds_store_2addr_b64 v255, v[254:255], v[254:255] offset0:16 offset1:1
+// GFX1250: ds_store_2addr_b64 v255, v[254:255], v[254:255] offset0:16 offset1:1 ; encoding: [0x10,0x01,0x38,0xd9,0xff,0xfe,0xfe,0x00]
+
+ds_store_2addr_stride64_b32 v1, v2, v3
+// GFX1250: ds_store_2addr_stride64_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x3c,0xd8,0x01,0x02,0x03,0x00]
+
+ds_store_2addr_stride64_b32 v1, v2, v3 offset0:127 offset1:255
+// GFX1250: ds_store_2addr_stride64_b32 v1, v2, v3 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0x3c,0xd8,0x01,0x02,0x03,0x00]
+
+ds_store_2addr_stride64_b32 v1, v2, v3 offset0:0 offset1:0
+// GFX1250: ds_store_2addr_stride64_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x3c,0xd8,0x01,0x02,0x03,0x00]
+
+ds_store_2addr_stride64_b32 v255, v255, v255 offset0:16 offset1:1
+// GFX1250: ds_store_2addr_stride64_b32 v255, v255, v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0x3c,0xd8,0xff,0xff,0xff,0x00]
+
+ds_store_2addr_stride64_b64 v1, v[2:3], v[4:5]
+// GFX1250: ds_store_2addr_stride64_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x3c,0xd9,0x01,0x02,0x04,0x00]
+
+ds_store_2addr_stride64_b64 v1, v[2:3], v[4:5] offset0:127 offset1:255
+// GFX1250: ds_store_2addr_stride64_b64 v1, v[2:3], v[4:5] offset0:127 offset1:255 ; encoding: [0x7f,0xff,0x3c,0xd9,0x01,0x02,0x04,0x00]
+
+ds_store_2addr_stride64_b64 v1, v[2:3], v[4:5] offset0:0 offset1:0
+// GFX1250: ds_store_2addr_stride64_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x3c,0xd9,0x01,0x02,0x04,0x00]
+
+ds_store_2addr_stride64_b64 v255, v[254:255], v[254:255] offset0:16 offset1:1
+// GFX1250: ds_store_2addr_stride64_b64 v255, v[254:255], v[254:255] offset0:16 offset1:1 ; encoding: [0x10,0x01,0x3c,0xd9,0xff,0xfe,0xfe,0x00]
+
+ds_store_addtid_b32 v1
+// GFX1250: ds_store_addtid_b32 v1 ; encoding: [0x00,0x00,0xc0,0xda,0x00,0x01,0x00,0x00]
+
+ds_store_addtid_b32 v1 offset:65535
+// GFX1250: ds_store_addtid_b32 v1 offset:65535 ; encoding: [0xff,0xff,0xc0,0xda,0x00,0x01,0x00,0x00]
+
+ds_store_addtid_b32 v1 offset:0
+// GFX1250: ds_store_addtid_b32 v1 ; encoding: [0x00,0x00,0xc0,0xda,0x00,0x01,0x00,0x00]
+
+ds_store_addtid_b32 v255 offset:4
+// GFX1250: ds_store_addtid_b32 v255 offset:4 ; encoding: [0x04,0x00,0xc0,0xda,0x00,0xff,0x00,0x00]
+
+ds_store_b128 v1, v[2:5]
+// GFX1250: ds_store_b128 v1, v[2:5] ; encoding: [0x00,0x00,0x7c,0xdb,0x01,0x02,0x00,0x00]
+
+ds_store_b128 v1, v[2:5] offset:65535
+// GFX1250: ds_store_b128 v1, v[2:5] offset:65535 ; encoding: [0xff,0xff,0x7c,0xdb,0x01,0x02,0x00,0x00]
+
+ds_store_b128 v1, v[2:5] offset:0
+// GFX1250: ds_store_b128 v1, v[2:5] ; encoding: [0x00,0x00,0x7c,0xdb,0x01,0x02,0x00,0x00]
+
+ds_store_b128 v255, v[252:255] offset:4
+// GFX1250: ds_store_b128 v255, v[252:255] offset:4 ; encoding: [0x04,0x00,0x7c,0xdb,0xff,0xfc,0x00,0x00]
+
+ds_store_b16 v1, v2
+// GFX1250: ds_store_b16 v1, v2 ; encoding: [0x00,0x00,0x7c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_store_b16 v1, v2 offset:65535
+// GFX1250: ds_store_b16 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x7c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_store_b16 v1, v2 offset:0
+// GFX1250: ds_store_b16 v1, v2 ; encoding: [0x00,0x00,0x7c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_store_b16 v255, v255 offset:4
+// GFX1250: ds_store_b16 v255, v255 offset:4 ; encoding: [0x04,0x00,0x7c,0xd8,0xff,0xff,0x00,0x00]
+
+ds_store_b16_d16_hi v1, v2
+// GFX1250: ds_store_b16_d16_hi v1, v2 ; encoding: [0x00,0x00,0x84,0xda,0x01,0x02,0x00,0x00]
+
+ds_store_b16_d16_hi v1, v2 offset:65535
+// GFX1250: ds_store_b16_d16_hi v1, v2 offset:65535 ; encoding: [0xff,0xff,0x84,0xda,0x01,0x02,0x00,0x00]
+
+ds_store_b16_d16_hi v1, v2 offset:0
+// GFX1250: ds_store_b16_d16_hi v1, v2 ; encoding: [0x00,0x00,0x84,0xda,0x01,0x02,0x00,0x00]
+
+ds_store_b16_d16_hi v255, v255 offset:4
+// GFX1250: ds_store_b16_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x84,0xda,0xff,0xff,0x00,0x00]
+
+ds_store_b32 v1, v2
+// GFX1250: ds_store_b32 v1, v2 ; encoding: [0x00,0x00,0x34,0xd8,0x01,0x02,0x00,0x00]
+
+ds_store_b32 v1, v2 offset:65535
+// GFX1250: ds_store_b32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x34,0xd8,0x01,0x02,0x00,0x00]
+
+ds_store_b32 v1, v2 offset:0
+// GFX1250: ds_store_b32 v1, v2 ; encoding: [0x00,0x00,0x34,0xd8,0x01,0x02,0x00,0x00]
+
+ds_store_b32 v255, v255 offset:4
+// GFX1250: ds_store_b32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x34,0xd8,0xff,0xff,0x00,0x00]
+
+ds_store_b64 v1, v[2:3]
+// GFX1250: ds_store_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x34,0xd9,0x01,0x02,0x00,0x00]
+
+ds_store_b64 v1, v[2:3] offset:65535
+// GFX1250: ds_store_b64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x34,0xd9,0x01,0x02,0x00,0x00]
+
+ds_store_b64 v1, v[2:3] offset:0
+// GFX1250: ds_store_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x34,0xd9,0x01,0x02,0x00,0x00]
+
+ds_store_b64 v255, v[254:255] offset:4
+// GFX1250: ds_store_b64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x34,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_store_b8 v1, v2
+// GFX1250: ds_store_b8 v1, v2 ; encoding: [0x00,0x00,0x78,0xd8,0x01,0x02,0x00,0x00]
+
+ds_store_b8 v1, v2 offset:65535
+// GFX1250: ds_store_b8 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x78,0xd8,0x01,0x02,0x00,0x00]
+
+ds_store_b8 v1, v2 offset:0
+// GFX1250: ds_store_b8 v1, v2 ; encoding: [0x00,0x00,0x78,0xd8,0x01,0x02,0x00,0x00]
+
+ds_store_b8 v255, v255 offset:4
+// GFX1250: ds_store_b8 v255, v255 offset:4 ; encoding: [0x04,0x00,0x78,0xd8,0xff,0xff,0x00,0x00]
+
+ds_store_b8_d16_hi v1, v2
+// GFX1250: ds_store_b8_d16_hi v1, v2 ; encoding: [0x00,0x00,0x80,0xda,0x01,0x02,0x00,0x00]
+
+ds_store_b8_d16_hi v1, v2 offset:65535
+// GFX1250: ds_store_b8_d16_hi v1, v2 offset:65535 ; encoding: [0xff,0xff,0x80,0xda,0x01,0x02,0x00,0x00]
+
+ds_store_b8_d16_hi v1, v2 offset:0
+// GFX1250: ds_store_b8_d16_hi v1, v2 ; encoding: [0x00,0x00,0x80,0xda,0x01,0x02,0x00,0x00]
+
+ds_store_b8_d16_hi v255, v255 offset:4
+// GFX1250: ds_store_b8_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x80,0xda,0xff,0xff,0x00,0x00]
+
+ds_store_b96 v1, v[2:4]
+// GFX1250: ds_store_b96 v1, v[2:4] ; encoding: [0x00,0x00,0x78,0xdb,0x01,0x02,0x00,0x00]
+
+ds_store_b96 v1, v[2:4] offset:65535
+// GFX1250: ds_store_b96 v1, v[2:4] offset:65535 ; encoding: [0xff,0xff,0x78,0xdb,0x01,0x02,0x00,0x00]
+
+ds_store_b96 v1, v[2:4] offset:0
+// GFX1250: ds_store_b96 v1, v[2:4] ; encoding: [0x00,0x00,0x78,0xdb,0x01,0x02,0x00,0x00]
+
+ds_store_b96 v255, v[252:254] offset:4
+// GFX1250: ds_store_b96 v255, v[252:254] offset:4 ; encoding: [0x04,0x00,0x78,0xdb,0xff,0xfc,0x00,0x00]
+
+ds_storexchg_2addr_rtn_b32 v[6:7], v1, v2, v3
+// GFX1250: ds_storexchg_2addr_rtn_b32 v[6:7], v1, v2, v3 ; encoding: [0x00,0x00,0xb8,0xd8,0x01,0x02,0x03,0x06]
+
+ds_storexchg_2addr_rtn_b32 v[6:7], v1, v2, v3 offset0:127 offset1:255
+// GFX1250: ds_storexchg_2addr_rtn_b32 v[6:7], v1, v2, v3 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xb8,0xd8,0x01,0x02,0x03,0x06]
+
+ds_storexchg_2addr_rtn_b32 v[6:7], v1, v2, v3 offset0:0 offset1:0
+// GFX1250: ds_storexchg_2addr_rtn_b32 v[6:7], v1, v2, v3 ; encoding: [0x00,0x00,0xb8,0xd8,0x01,0x02,0x03,0x06]
+
+ds_storexchg_2addr_rtn_b32 v[254:255], v255, v255, v255 offset0:16 offset1:1
+// GFX1250: ds_storexchg_2addr_rtn_b32 v[254:255], v255, v255, v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xb8,0xd8,0xff,0xff,0xff,0xfe]
+
+ds_storexchg_2addr_rtn_b64 v[6:9], v1, v[2:3], v[4:5]
+// GFX1250: ds_storexchg_2addr_rtn_b64 v[6:9], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xb8,0xd9,0x01,0x02,0x04,0x06]
+
+ds_storexchg_2addr_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:127 offset1:255
+// GFX1250: ds_storexchg_2addr_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xb8,0xd9,0x01,0x02,0x04,0x06]
+
+ds_storexchg_2addr_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:0 offset1:0
+// GFX1250: ds_storexchg_2addr_rtn_b64 v[6:9], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xb8,0xd9,0x01,0x02,0x04,0x06]
+
+ds_storexchg_2addr_rtn_b64 v[252:255], v255, v[254:255], v[254:255] offset0:16 offset1:1
+// GFX1250: ds_storexchg_2addr_rtn_b64 v[252:255], v255, v[254:255], v[254:255] offset0:16 offset1:1 ; encoding: [0x10,0x01,0xb8,0xd9,0xff,0xfe,0xfe,0xfc]
+
+ds_storexchg_2addr_stride64_rtn_b32 v[6:7], v1, v2, v3
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b32 v[6:7], v1, v2, v3 ; encoding: [0x00,0x00,0xbc,0xd8,0x01,0x02,0x03,0x06]
+
+ds_storexchg_2addr_stride64_rtn_b32 v[6:7], v1, v2, v3 offset0:127 offset1:255
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b32 v[6:7], v1, v2, v3 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xbc,0xd8,0x01,0x02,0x03,0x06]
+
+ds_storexchg_2addr_stride64_rtn_b32 v[6:7], v1, v2, v3 offset0:0 offset1:0
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b32 v[6:7], v1, v2, v3 ; encoding: [0x00,0x00,0xbc,0xd8,0x01,0x02,0x03,0x06]
+
+ds_storexchg_2addr_stride64_rtn_b32 v[254:255], v255, v255, v255 offset0:16 offset1:1
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b32 v[254:255], v255, v255, v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xbc,0xd8,0xff,0xff,0xff,0xfe]
+
+ds_storexchg_2addr_stride64_rtn_b64 v[6:9], v1, v[2:3], v[4:5]
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b64 v[6:9], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xbc,0xd9,0x01,0x02,0x04,0x06]
+
+ds_storexchg_2addr_stride64_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:127 offset1:255
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xbc,0xd9,0x01,0x02,0x04,0x06]
+
+ds_storexchg_2addr_stride64_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:0 offset1:0
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b64 v[6:9], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xbc,0xd9,0x01,0x02,0x04,0x06]
+
+ds_storexchg_2addr_stride64_rtn_b64 v[252:255], v255, v[254:255], v[254:255] offset0:16 offset1:1
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b64 v[252:255], v255, v[254:255], v[254:255] offset0:16 offset1:1 ; encoding: [0x10,0x01,0xbc,0xd9,0xff,0xfe,0xfe,0xfc]
+
+ds_storexchg_rtn_b32 v5, v1, v2
+// GFX1250: ds_storexchg_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xb4,0xd8,0x01,0x02,0x00,0x05]
+
+ds_storexchg_rtn_b32 v5, v1, v2 offset:65535
+// GFX1250: ds_storexchg_rtn_b32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xb4,0xd8,0x01,0x02,0x00,0x05]
+
+ds_storexchg_rtn_b32 v5, v1, v2 offset:0
+// GFX1250: ds_storexchg_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xb4,0xd8,0x01,0x02,0x00,0x05]
+
+ds_storexchg_rtn_b32 v255, v255, v255 offset:4
+// GFX1250: ds_storexchg_rtn_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xb4,0xd8,0xff,0xff,0x00,0xff]
+
+ds_storexchg_rtn_b64 v[6:7], v1, v[2:3]
+// GFX1250: ds_storexchg_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xb4,0xd9,0x01,0x02,0x00,0x06]
+
+ds_storexchg_rtn_b64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_storexchg_rtn_b64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xb4,0xd9,0x01,0x02,0x00,0x06]
+
+ds_storexchg_rtn_b64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_storexchg_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xb4,0xd9,0x01,0x02,0x00,0x06]
+
+ds_storexchg_rtn_b64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_storexchg_rtn_b64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xb4,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_cond_sub_rtn_u32 v5, v1, v2
+// GFX1250: ds_cond_sub_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0xa0,0xda,0x01,0x02,0x00,0x05]
+
+ds_cond_sub_rtn_u32 v5, v1, v2 offset:65535
+// GFX1250: ds_cond_sub_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xa0,0xda,0x01,0x02,0x00,0x05]
+
+ds_cond_sub_rtn_u32 v5, v1, v2 offset:0
+// GFX1250: ds_cond_sub_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0xa0,0xda,0x01,0x02,0x00,0x05]
+
+ds_cond_sub_u32 v1, v2
+// GFX1250: ds_cond_sub_u32 v1, v2 ; encoding: [0x00,0x00,0x60,0xda,0x01,0x02,0x00,0x00]
+
+ds_cond_sub_u32 v1, v2 offset:65535
+// GFX1250: ds_cond_sub_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x60,0xda,0x01,0x02,0x00,0x00]
+
+ds_cond_sub_u32 v1, v2 offset:0
+// GFX1250: ds_cond_sub_u32 v1, v2 ; encoding: [0x00,0x00,0x60,0xda,0x01,0x02,0x00,0x00]
+
+ds_sub_clamp_rtn_u32 v5, v1, v2
+// GFX1250: ds_sub_clamp_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0xa4,0xda,0x01,0x02,0x00,0x05]
+
+ds_sub_clamp_rtn_u32 v5, v1, v2 offset:65535
+// GFX1250: ds_sub_clamp_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xa4,0xda,0x01,0x02,0x00,0x05]
+
+ds_sub_clamp_rtn_u32 v5, v1, v2 offset:0
+// GFX1250: ds_sub_clamp_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0xa4,0xda,0x01,0x02,0x00,0x05]
+
+ds_sub_clamp_rtn_u32 v255, v255, v255 offset:4
+// GFX1250: ds_sub_clamp_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xa4,0xda,0xff,0xff,0x00,0xff]
+
+ds_sub_clamp_u32 v1, v2
+// GFX1250: ds_sub_clamp_u32 v1, v2 ; encoding: [0x00,0x00,0x64,0xda,0x01,0x02,0x00,0x00]
+
+ds_sub_clamp_u32 v1, v2 offset:65535
+// GFX1250: ds_sub_clamp_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x64,0xda,0x01,0x02,0x00,0x00]
+
+ds_sub_clamp_u32 v1, v2 offset:0
+// GFX1250: ds_sub_clamp_u32 v1, v2 ; encoding: [0x00,0x00,0x64,0xda,0x01,0x02,0x00,0x00]
+
+ds_sub_clamp_u32 v255, v255 offset:4
+// GFX1250: ds_sub_clamp_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x64,0xda,0xff,0xff,0x00,0x00]
+
+ds_sub_rtn_u32 v5, v1, v2
+// GFX1250: ds_sub_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x84,0xd8,0x01,0x02,0x00,0x05]
+
+ds_sub_rtn_u32 v5, v1, v2 offset:65535
+// GFX1250: ds_sub_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x84,0xd8,0x01,0x02,0x00,0x05]
+
+ds_sub_rtn_u32 v5, v1, v2 offset:0
+// GFX1250: ds_sub_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x84,0xd8,0x01,0x02,0x00,0x05]
+
+ds_sub_rtn_u32 v255, v255, v255 offset:4
+// GFX1250: ds_sub_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x84,0xd8,0xff,0xff,0x00,0xff]
+
+ds_sub_rtn_u64 v[6:7], v1, v[2:3]
+// GFX1250: ds_sub_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x84,0xd9,0x01,0x02,0x00,0x06]
+
+ds_sub_rtn_u64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_sub_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x84,0xd9,0x01,0x02,0x00,0x06]
+
+ds_sub_rtn_u64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_sub_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x84,0xd9,0x01,0x02,0x00,0x06]
+
+ds_sub_rtn_u64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_sub_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x84,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_sub_u32 v1, v2
+// GFX1250: ds_sub_u32 v1, v2 ; encoding: [0x00,0x00,0x04,0xd8,0x01,0x02,0x00,0x00]
+
+ds_sub_u32 v1, v2 offset:65535
+// GFX1250: ds_sub_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x04,0xd8,0x01,0x02,0x00,0x00]
+
+ds_sub_u32 v1, v2 offset:0
+// GFX1250: ds_sub_u32 v1, v2 ; encoding: [0x00,0x00,0x04,0xd8,0x01,0x02,0x00,0x00]
+
+ds_sub_u32 v255, v255 offset:4
+// GFX1250: ds_sub_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x04,0xd8,0xff,0xff,0x00,0x00]
+
+ds_sub_u64 v1, v[2:3]
+// GFX1250: ds_sub_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x04,0xd9,0x01,0x02,0x00,0x00]
+
+ds_sub_u64 v1, v[2:3] offset:65535
+// GFX1250: ds_sub_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x04,0xd9,0x01,0x02,0x00,0x00]
+
+ds_sub_u64 v1, v[2:3] offset:0
+// GFX1250: ds_sub_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x04,0xd9,0x01,0x02,0x00,0x00]
+
+ds_sub_u64 v255, v[254:255] offset:4
+// GFX1250: ds_sub_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x04,0xd9,0xff,0xfe,0x00,0x00]
+
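+// Likewise, the legacy ds_write* and ds_wrxchg* mnemonics below assemble
+// to the canonical ds_store* and ds_storexchg* names; the check lines
+// show encodings identical to the canonical tests above.
+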
+ds_write2_b32 v1, v2, v3
+// GFX1250: ds_store_2addr_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x38,0xd8,0x01,0x02,0x03,0x00]
+
+ds_write2_b32 v1, v2, v3 offset0:127 offset1:255
+// GFX1250: ds_store_2addr_b32 v1, v2, v3 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0x38,0xd8,0x01,0x02,0x03,0x00]
+
+ds_write2_b32 v1, v2, v3 offset0:0 offset1:0
+// GFX1250: ds_store_2addr_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x38,0xd8,0x01,0x02,0x03,0x00]
+
+ds_write2_b32 v255, v255, v255 offset0:16 offset1:1
+// GFX1250: ds_store_2addr_b32 v255, v255, v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0x38,0xd8,0xff,0xff,0xff,0x00]
+
+ds_write2_b64 v1, v[2:3], v[4:5]
+// GFX1250: ds_store_2addr_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x38,0xd9,0x01,0x02,0x04,0x00]
+
+ds_write2_b64 v1, v[2:3], v[4:5] offset0:127 offset1:255
+// GFX1250: ds_store_2addr_b64 v1, v[2:3], v[4:5] offset0:127 offset1:255 ; encoding: [0x7f,0xff,0x38,0xd9,0x01,0x02,0x04,0x00]
+
+ds_write2_b64 v1, v[2:3], v[4:5] offset0:0 offset1:0
+// GFX1250: ds_store_2addr_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x38,0xd9,0x01,0x02,0x04,0x00]
+
+ds_write2_b64 v255, v[254:255], v[254:255] offset0:16 offset1:1
+// GFX1250: ds_store_2addr_b64 v255, v[254:255], v[254:255] offset0:16 offset1:1 ; encoding: [0x10,0x01,0x38,0xd9,0xff,0xfe,0xfe,0x00]
+
+ds_write2st64_b32 v1, v2, v3
+// GFX1250: ds_store_2addr_stride64_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x3c,0xd8,0x01,0x02,0x03,0x00]
+
+ds_write2st64_b32 v1, v2, v3 offset0:127 offset1:255
+// GFX1250: ds_store_2addr_stride64_b32 v1, v2, v3 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0x3c,0xd8,0x01,0x02,0x03,0x00]
+
+ds_write2st64_b32 v1, v2, v3 offset0:0 offset1:0
+// GFX1250: ds_store_2addr_stride64_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x3c,0xd8,0x01,0x02,0x03,0x00]
+
+ds_write2st64_b32 v255, v255, v255 offset0:16 offset1:1
+// GFX1250: ds_store_2addr_stride64_b32 v255, v255, v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0x3c,0xd8,0xff,0xff,0xff,0x00]
+
+ds_write2st64_b64 v1, v[2:3], v[4:5]
+// GFX1250: ds_store_2addr_stride64_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x3c,0xd9,0x01,0x02,0x04,0x00]
+
+ds_write2st64_b64 v1, v[2:3], v[4:5] offset0:127 offset1:255
+// GFX1250: ds_store_2addr_stride64_b64 v1, v[2:3], v[4:5] offset0:127 offset1:255 ; encoding: [0x7f,0xff,0x3c,0xd9,0x01,0x02,0x04,0x00]
+
+ds_write2st64_b64 v1, v[2:3], v[4:5] offset0:0 offset1:0
+// GFX1250: ds_store_2addr_stride64_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x3c,0xd9,0x01,0x02,0x04,0x00]
+
+ds_write2st64_b64 v255, v[254:255], v[254:255] offset0:16 offset1:1
+// GFX1250: ds_store_2addr_stride64_b64 v255, v[254:255], v[254:255] offset0:16 offset1:1 ; encoding: [0x10,0x01,0x3c,0xd9,0xff,0xfe,0xfe,0x00]
+
+ds_write_addtid_b32 v1
+// GFX1250: ds_store_addtid_b32 v1 ; encoding: [0x00,0x00,0xc0,0xda,0x00,0x01,0x00,0x00]
+
+ds_write_addtid_b32 v1 offset:65535
+// GFX1250: ds_store_addtid_b32 v1 offset:65535 ; encoding: [0xff,0xff,0xc0,0xda,0x00,0x01,0x00,0x00]
+
+ds_write_addtid_b32 v1 offset:0
+// GFX1250: ds_store_addtid_b32 v1 ; encoding: [0x00,0x00,0xc0,0xda,0x00,0x01,0x00,0x00]
+
+ds_write_addtid_b32 v255 offset:4
+// GFX1250: ds_store_addtid_b32 v255 offset:4 ; encoding: [0x04,0x00,0xc0,0xda,0x00,0xff,0x00,0x00]
+
+ds_write_b128 v1, v[2:5]
+// GFX1250: ds_store_b128 v1, v[2:5] ; encoding: [0x00,0x00,0x7c,0xdb,0x01,0x02,0x00,0x00]
+
+ds_write_b128 v1, v[2:5] offset:65535
+// GFX1250: ds_store_b128 v1, v[2:5] offset:65535 ; encoding: [0xff,0xff,0x7c,0xdb,0x01,0x02,0x00,0x00]
+
+ds_write_b128 v1, v[2:5] offset:0
+// GFX1250: ds_store_b128 v1, v[2:5] ; encoding: [0x00,0x00,0x7c,0xdb,0x01,0x02,0x00,0x00]
+
+ds_write_b128 v255, v[252:255] offset:4
+// GFX1250: ds_store_b128 v255, v[252:255] offset:4 ; encoding: [0x04,0x00,0x7c,0xdb,0xff,0xfc,0x00,0x00]
+
+ds_write_b16 v1, v2
+// GFX1250: ds_store_b16 v1, v2 ; encoding: [0x00,0x00,0x7c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_write_b16 v1, v2 offset:65535
+// GFX1250: ds_store_b16 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x7c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_write_b16 v1, v2 offset:0
+// GFX1250: ds_store_b16 v1, v2 ; encoding: [0x00,0x00,0x7c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_write_b16 v255, v255 offset:4
+// GFX1250: ds_store_b16 v255, v255 offset:4 ; encoding: [0x04,0x00,0x7c,0xd8,0xff,0xff,0x00,0x00]
+
+ds_write_b16_d16_hi v1, v2
+// GFX1250: ds_store_b16_d16_hi v1, v2 ; encoding: [0x00,0x00,0x84,0xda,0x01,0x02,0x00,0x00]
+
+ds_write_b16_d16_hi v1, v2 offset:65535
+// GFX1250: ds_store_b16_d16_hi v1, v2 offset:65535 ; encoding: [0xff,0xff,0x84,0xda,0x01,0x02,0x00,0x00]
+
+ds_write_b16_d16_hi v1, v2 offset:0
+// GFX1250: ds_store_b16_d16_hi v1, v2 ; encoding: [0x00,0x00,0x84,0xda,0x01,0x02,0x00,0x00]
+
+ds_write_b16_d16_hi v255, v255 offset:4
+// GFX1250: ds_store_b16_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x84,0xda,0xff,0xff,0x00,0x00]
+
+ds_write_b32 v1, v2
+// GFX1250: ds_store_b32 v1, v2 ; encoding: [0x00,0x00,0x34,0xd8,0x01,0x02,0x00,0x00]
+
+ds_write_b32 v1, v2 offset:65535
+// GFX1250: ds_store_b32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x34,0xd8,0x01,0x02,0x00,0x00]
+
+ds_write_b32 v1, v2 offset:0
+// GFX1250: ds_store_b32 v1, v2 ; encoding: [0x00,0x00,0x34,0xd8,0x01,0x02,0x00,0x00]
+
+ds_write_b32 v255, v255 offset:4
+// GFX1250: ds_store_b32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x34,0xd8,0xff,0xff,0x00,0x00]
+
+ds_write_b64 v1, v[2:3]
+// GFX1250: ds_store_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x34,0xd9,0x01,0x02,0x00,0x00]
+
+ds_write_b64 v1, v[2:3] offset:65535
+// GFX1250: ds_store_b64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x34,0xd9,0x01,0x02,0x00,0x00]
+
+ds_write_b64 v1, v[2:3] offset:0
+// GFX1250: ds_store_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x34,0xd9,0x01,0x02,0x00,0x00]
+
+ds_write_b64 v255, v[254:255] offset:4
+// GFX1250: ds_store_b64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x34,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_write_b8 v1, v2
+// GFX1250: ds_store_b8 v1, v2 ; encoding: [0x00,0x00,0x78,0xd8,0x01,0x02,0x00,0x00]
+
+ds_write_b8 v1, v2 offset:65535
+// GFX1250: ds_store_b8 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x78,0xd8,0x01,0x02,0x00,0x00]
+
+ds_write_b8 v1, v2 offset:0
+// GFX1250: ds_store_b8 v1, v2 ; encoding: [0x00,0x00,0x78,0xd8,0x01,0x02,0x00,0x00]
+
+ds_write_b8 v255, v255 offset:4
+// GFX1250: ds_store_b8 v255, v255 offset:4 ; encoding: [0x04,0x00,0x78,0xd8,0xff,0xff,0x00,0x00]
+
+ds_write_b8_d16_hi v1, v2
+// GFX1250: ds_store_b8_d16_hi v1, v2 ; encoding: [0x00,0x00,0x80,0xda,0x01,0x02,0x00,0x00]
+
+ds_write_b8_d16_hi v1, v2 offset:65535
+// GFX1250: ds_store_b8_d16_hi v1, v2 offset:65535 ; encoding: [0xff,0xff,0x80,0xda,0x01,0x02,0x00,0x00]
+
+ds_write_b8_d16_hi v1, v2 offset:0
+// GFX1250: ds_store_b8_d16_hi v1, v2 ; encoding: [0x00,0x00,0x80,0xda,0x01,0x02,0x00,0x00]
+
+ds_write_b8_d16_hi v255, v255 offset:4
+// GFX1250: ds_store_b8_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x80,0xda,0xff,0xff,0x00,0x00]
+
+ds_write_b96 v1, v[2:4]
+// GFX1250: ds_store_b96 v1, v[2:4] ; encoding: [0x00,0x00,0x78,0xdb,0x01,0x02,0x00,0x00]
+
+ds_write_b96 v1, v[2:4] offset:65535
+// GFX1250: ds_store_b96 v1, v[2:4] offset:65535 ; encoding: [0xff,0xff,0x78,0xdb,0x01,0x02,0x00,0x00]
+
+ds_write_b96 v1, v[2:4] offset:0
+// GFX1250: ds_store_b96 v1, v[2:4] ; encoding: [0x00,0x00,0x78,0xdb,0x01,0x02,0x00,0x00]
+
+ds_write_b96 v255, v[252:254] offset:4
+// GFX1250: ds_store_b96 v255, v[252:254] offset:4 ; encoding: [0x04,0x00,0x78,0xdb,0xff,0xfc,0x00,0x00]
+
+ds_wrxchg2_rtn_b32 v[6:7], v1, v2, v3
+// GFX1250: ds_storexchg_2addr_rtn_b32 v[6:7], v1, v2, v3 ; encoding: [0x00,0x00,0xb8,0xd8,0x01,0x02,0x03,0x06]
+
+ds_wrxchg2_rtn_b32 v[6:7], v1, v2, v3 offset0:127 offset1:255
+// GFX1250: ds_storexchg_2addr_rtn_b32 v[6:7], v1, v2, v3 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xb8,0xd8,0x01,0x02,0x03,0x06]
+
+ds_wrxchg2_rtn_b32 v[6:7], v1, v2, v3 offset0:0 offset1:0
+// GFX1250: ds_storexchg_2addr_rtn_b32 v[6:7], v1, v2, v3 ; encoding: [0x00,0x00,0xb8,0xd8,0x01,0x02,0x03,0x06]
+
+ds_wrxchg2_rtn_b32 v[254:255], v255, v255, v255 offset0:16 offset1:1
+// GFX1250: ds_storexchg_2addr_rtn_b32 v[254:255], v255, v255, v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xb8,0xd8,0xff,0xff,0xff,0xfe]
+
+ds_wrxchg2_rtn_b64 v[6:9], v1, v[2:3], v[4:5]
+// GFX1250: ds_storexchg_2addr_rtn_b64 v[6:9], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xb8,0xd9,0x01,0x02,0x04,0x06]
+
+ds_wrxchg2_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:127 offset1:255
+// GFX1250: ds_storexchg_2addr_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xb8,0xd9,0x01,0x02,0x04,0x06]
+
+ds_wrxchg2_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:0 offset1:0
+// GFX1250: ds_storexchg_2addr_rtn_b64 v[6:9], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xb8,0xd9,0x01,0x02,0x04,0x06]
+
+ds_wrxchg2_rtn_b64 v[252:255], v255, v[254:255], v[254:255] offset0:16 offset1:1
+// GFX1250: ds_storexchg_2addr_rtn_b64 v[252:255], v255, v[254:255], v[254:255] offset0:16 offset1:1 ; encoding: [0x10,0x01,0xb8,0xd9,0xff,0xfe,0xfe,0xfc]
+
+ds_wrxchg2st64_rtn_b32 v[6:7], v1, v2, v3
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b32 v[6:7], v1, v2, v3 ; encoding: [0x00,0x00,0xbc,0xd8,0x01,0x02,0x03,0x06]
+
+ds_wrxchg2st64_rtn_b32 v[6:7], v1, v2, v3 offset0:127 offset1:255
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b32 v[6:7], v1, v2, v3 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xbc,0xd8,0x01,0x02,0x03,0x06]
+
+ds_wrxchg2st64_rtn_b32 v[6:7], v1, v2, v3 offset0:0 offset1:0
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b32 v[6:7], v1, v2, v3 ; encoding: [0x00,0x00,0xbc,0xd8,0x01,0x02,0x03,0x06]
+
+ds_wrxchg2st64_rtn_b32 v[254:255], v255, v255, v255 offset0:16 offset1:1
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b32 v[254:255], v255, v255, v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xbc,0xd8,0xff,0xff,0xff,0xfe]
+
+ds_wrxchg2st64_rtn_b64 v[6:9], v1, v[2:3], v[4:5]
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b64 v[6:9], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xbc,0xd9,0x01,0x02,0x04,0x06]
+
+ds_wrxchg2st64_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:127 offset1:255
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xbc,0xd9,0x01,0x02,0x04,0x06]
+
+ds_wrxchg2st64_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:0 offset1:0
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b64 v[6:9], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xbc,0xd9,0x01,0x02,0x04,0x06]
+
+ds_wrxchg2st64_rtn_b64 v[252:255], v255, v[254:255], v[254:255] offset0:16 offset1:1
+// GFX1250: ds_storexchg_2addr_stride64_rtn_b64 v[252:255], v255, v[254:255], v[254:255] offset0:16 offset1:1 ; encoding: [0x10,0x01,0xbc,0xd9,0xff,0xfe,0xfe,0xfc]
+
+ds_wrxchg_rtn_b32 v5, v1, v2
+// GFX1250: ds_storexchg_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xb4,0xd8,0x01,0x02,0x00,0x05]
+
+ds_wrxchg_rtn_b32 v5, v1, v2 offset:65535
+// GFX1250: ds_storexchg_rtn_b32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xb4,0xd8,0x01,0x02,0x00,0x05]
+
+ds_wrxchg_rtn_b32 v5, v1, v2 offset:0
+// GFX1250: ds_storexchg_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xb4,0xd8,0x01,0x02,0x00,0x05]
+
+ds_wrxchg_rtn_b32 v255, v255, v255 offset:4
+// GFX1250: ds_storexchg_rtn_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xb4,0xd8,0xff,0xff,0x00,0xff]
+
+ds_wrxchg_rtn_b64 v[6:7], v1, v[2:3]
+// GFX1250: ds_storexchg_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xb4,0xd9,0x01,0x02,0x00,0x06]
+
+ds_wrxchg_rtn_b64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_storexchg_rtn_b64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xb4,0xd9,0x01,0x02,0x00,0x06]
+
+ds_wrxchg_rtn_b64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_storexchg_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xb4,0xd9,0x01,0x02,0x00,0x06]
+
+ds_wrxchg_rtn_b64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_storexchg_rtn_b64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xb4,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_xor_b32 v1, v2
+// GFX1250: ds_xor_b32 v1, v2 ; encoding: [0x00,0x00,0x2c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_xor_b32 v1, v2 offset:65535
+// GFX1250: ds_xor_b32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x2c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_xor_b32 v1, v2 offset:0
+// GFX1250: ds_xor_b32 v1, v2 ; encoding: [0x00,0x00,0x2c,0xd8,0x01,0x02,0x00,0x00]
+
+ds_xor_b32 v255, v255 offset:4
+// GFX1250: ds_xor_b32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x2c,0xd8,0xff,0xff,0x00,0x00]
+
+ds_xor_b64 v1, v[2:3]
+// GFX1250: ds_xor_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x2c,0xd9,0x01,0x02,0x00,0x00]
+
+ds_xor_b64 v1, v[2:3] offset:65535
+// GFX1250: ds_xor_b64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x2c,0xd9,0x01,0x02,0x00,0x00]
+
+ds_xor_b64 v1, v[2:3] offset:0
+// GFX1250: ds_xor_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x2c,0xd9,0x01,0x02,0x00,0x00]
+
+ds_xor_b64 v255, v[254:255] offset:4
+// GFX1250: ds_xor_b64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x2c,0xd9,0xff,0xfe,0x00,0x00]
+
+ds_xor_rtn_b32 v5, v1, v2
+// GFX1250: ds_xor_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xac,0xd8,0x01,0x02,0x00,0x05]
+
+ds_xor_rtn_b32 v5, v1, v2 offset:65535
+// GFX1250: ds_xor_rtn_b32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xac,0xd8,0x01,0x02,0x00,0x05]
+
+ds_xor_rtn_b32 v5, v1, v2 offset:0
+// GFX1250: ds_xor_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xac,0xd8,0x01,0x02,0x00,0x05]
+
+ds_xor_rtn_b32 v255, v255, v255 offset:4
+// GFX1250: ds_xor_rtn_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xac,0xd8,0xff,0xff,0x00,0xff]
+
+ds_xor_rtn_b64 v[6:7], v1, v[2:3]
+// GFX1250: ds_xor_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xac,0xd9,0x01,0x02,0x00,0x06]
+
+ds_xor_rtn_b64 v[6:7], v1, v[2:3] offset:65535
+// GFX1250: ds_xor_rtn_b64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xac,0xd9,0x01,0x02,0x00,0x06]
+
+ds_xor_rtn_b64 v[6:7], v1, v[2:3] offset:0
+// GFX1250: ds_xor_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xac,0xd9,0x01,0x02,0x00,0x06]
+
+ds_xor_rtn_b64 v[254:255], v255, v[254:255] offset:4
+// GFX1250: ds_xor_rtn_b64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xac,0xd9,0xff,0xfe,0x00,0xfe]
+
+ds_swizzle_b32 v8, v2
+// GFX1250: ds_swizzle_b32 v8, v2 ; encoding: [0x00,0x00,0xd4,0xd8,0x02,0x00,0x00,0x08]
+
+ds_swizzle_b32 v8, v2 offset:0
+// GFX1250: ds_swizzle_b32 v8, v2 ; encoding: [0x00,0x00,0xd4,0xd8,0x02,0x00,0x00,0x08]
+
+ds_swizzle_b32 v8, v2 offset:0xFFFF
+// GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(FFT,31) ; encoding: [0xff,0xff,0xd4,0xd8,0x02,0x00,0x00,0x08]
+
+ds_swizzle_b32 v8, v2 offset:swizzle(QUAD_PERM, 0, 1, 2, 3)
+// GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(QUAD_PERM,0,1,2,3) ; encoding: [0xe4,0x80,0xd4,0xd8,0x02,0x00,0x00,0x08]
+
+ds_swizzle_b32 v8, v2 offset:swizzle(SWAP,16)
+// GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(SWAP,16) ; encoding: [0x1f,0x40,0xd4,0xd8,0x02,0x00,0x00,0x08]
+
+ds_swizzle_b32 v8, v2 offset:swizzle(REVERSE,8)
+// GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(REVERSE,8) ; encoding: [0x1f,0x1c,0xd4,0xd8,0x02,0x00,0x00,0x08]
+
+ds_swizzle_b32 v8, v2 offset:swizzle(BROADCAST,4,1)
+// GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(BROADCAST,4,1) ; encoding: [0x3c,0x00,0xd4,0xd8,0x02,0x00,0x00,0x08]
+
+ds_swizzle_b32 v8, v2 offset:swizzle(BROADCAST,8,7)
+// GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(BROADCAST,8,7) ; encoding: [0xf8,0x00,0xd4,0xd8,0x02,0x00,0x00,0x08]
+
+ds_swizzle_b32 v8, v2 offset:swizzle(BITMASK_PERM, "01pip")
+// GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(BITMASK_PERM,"01pip") ; encoding: [0x07,0x09,0xd4,0xd8,0x02,0x00,0x00,0x08]
+
ds_atomic_async_barrier_arrive_b64 v1 offset:65407
// GFX1250: ds_atomic_async_barrier_arrive_b64 v1 offset:65407 ; encoding: [0x7f,0xff,0x58,0xd9,0x01,0x00,0x00,0x00]
// GFX12-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU
diff --git a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_ds.txt b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_ds.txt
index 0870aa7ba3dc2..13440a06032be 100644
--- a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_ds.txt
+++ b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_ds.txt
@@ -1,5 +1,1109 @@
# RUN: llvm-mc -triple=amdgcn -mcpu=gfx1250 -disassemble -show-encoding < %s | FileCheck -strict-whitespace -check-prefix=GFX1250 %s
+# GFX1250: ds_add_f32 v1, v2 ; encoding: [0x00,0x00,0x54,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x54,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_add_f32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x54,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x54,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_add_f32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x54,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x54,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_add_rtn_f32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xe4,0xd9,0xff,0xff,0x00,0xff]
+0x04,0x00,0xe4,0xd9,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_add_rtn_f32 v5, v1, v2 ; encoding: [0x00,0x00,0xe4,0xd9,0x01,0x02,0x00,0x05]
+0x00,0x00,0xe4,0xd9,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_add_rtn_f32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xe4,0xd9,0x01,0x02,0x00,0x05]
+0xff,0xff,0xe4,0xd9,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_add_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x80,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0x80,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_add_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x80,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0x80,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_add_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x80,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0x80,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_add_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x80,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0x80,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_add_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x80,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0x80,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_add_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x80,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0x80,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_add_u32 v1, v2 ; encoding: [0x00,0x00,0x00,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x00,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_add_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x00,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x00,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_add_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x00,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x00,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_add_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x00,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x00,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_add_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x00,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x00,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_add_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x00,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x00,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_and_b32 v1, v2 ; encoding: [0x00,0x00,0x24,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x24,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_and_b32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x24,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x24,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_and_b32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x24,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x24,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_and_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x24,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x24,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_and_b64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x24,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x24,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_and_b64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x24,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x24,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_and_rtn_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xa4,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0xa4,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_and_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xa4,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0xa4,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_and_rtn_b32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xa4,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0xa4,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_and_rtn_b64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xa4,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0xa4,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_and_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xa4,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0xa4,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_and_rtn_b64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xa4,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0xa4,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_append v255 offset:4 ; encoding: [0x04,0x00,0xf8,0xd8,0x00,0x00,0x00,0xff]
+0x04,0x00,0xf8,0xd8,0x00,0x00,0x00,0xff
+
+# GFX1250: ds_append v5 ; encoding: [0x00,0x00,0xf8,0xd8,0x00,0x00,0x00,0x05]
+0x00,0x00,0xf8,0xd8,0x00,0x00,0x00,0x05
+
+# GFX1250: ds_append v5 offset:65535 ; encoding: [0xff,0xff,0xf8,0xd8,0x00,0x00,0x00,0x05]
+0xff,0xff,0xf8,0xd8,0x00,0x00,0x00,0x05
+
+# GFX1250: ds_bpermute_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xcc,0xda,0xff,0xff,0x00,0xff]
+0x04,0x00,0xcc,0xda,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_bpermute_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xcc,0xda,0x01,0x02,0x00,0x05]
+0x00,0x00,0xcc,0xda,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_bpermute_b32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xcc,0xda,0x01,0x02,0x00,0x05]
+0xff,0xff,0xcc,0xda,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_cmpstore_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x40,0xd8,0x01,0x02,0x03,0x00]
+0x00,0x00,0x40,0xd8,0x01,0x02,0x03,0x00
+
+# GFX1250: ds_cmpstore_b32 v1, v2, v3 offset:65535 ; encoding: [0xff,0xff,0x40,0xd8,0x01,0x02,0x03,0x00]
+0xff,0xff,0x40,0xd8,0x01,0x02,0x03,0x00
+
+# GFX1250: ds_cmpstore_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x40,0xd8,0xff,0xff,0xff,0x00]
+0x04,0x00,0x40,0xd8,0xff,0xff,0xff,0x00
+
+# GFX1250: ds_cmpstore_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x40,0xd9,0x01,0x02,0x04,0x00]
+0x00,0x00,0x40,0xd9,0x01,0x02,0x04,0x00
+
+# GFX1250: ds_cmpstore_b64 v1, v[2:3], v[4:5] offset:65535 ; encoding: [0xff,0xff,0x40,0xd9,0x01,0x02,0x04,0x00]
+0xff,0xff,0x40,0xd9,0x01,0x02,0x04,0x00
+
+# GFX1250: ds_cmpstore_b64 v255, v[254:255], v[254:255] offset:4 ; encoding: [0x04,0x00,0x40,0xd9,0xff,0xfe,0xfe,0x00]
+0x04,0x00,0x40,0xd9,0xff,0xfe,0xfe,0x00
+
+# GFX1250: ds_cmpstore_rtn_b32 v255, v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xc0,0xd8,0xff,0xff,0xff,0xff]
+0x04,0x00,0xc0,0xd8,0xff,0xff,0xff,0xff
+
+# GFX1250: ds_cmpstore_rtn_b32 v5, v1, v2, v3 ; encoding: [0x00,0x00,0xc0,0xd8,0x01,0x02,0x03,0x05]
+0x00,0x00,0xc0,0xd8,0x01,0x02,0x03,0x05
+
+# GFX1250: ds_cmpstore_rtn_b32 v5, v1, v2, v3 offset:65535 ; encoding: [0xff,0xff,0xc0,0xd8,0x01,0x02,0x03,0x05]
+0xff,0xff,0xc0,0xd8,0x01,0x02,0x03,0x05
+
+# GFX1250: ds_cmpstore_rtn_b64 v[254:255], v255, v[254:255], v[254:255] offset:4 ; encoding: [0x04,0x00,0xc0,0xd9,0xff,0xfe,0xfe,0xfe]
+0x04,0x00,0xc0,0xd9,0xff,0xfe,0xfe,0xfe
+
+# GFX1250: ds_cmpstore_rtn_b64 v[6:7], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xc0,0xd9,0x01,0x02,0x04,0x06]
+0x00,0x00,0xc0,0xd9,0x01,0x02,0x04,0x06
+
+# GFX1250: ds_cmpstore_rtn_b64 v[6:7], v1, v[2:3], v[4:5] offset:65535 ; encoding: [0xff,0xff,0xc0,0xd9,0x01,0x02,0x04,0x06]
+0xff,0xff,0xc0,0xd9,0x01,0x02,0x04,0x06
+
+# GFX1250: ds_cond_sub_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0xa0,0xda,0x01,0x02,0x00,0x05]
+0x00,0x00,0xa0,0xda,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_cond_sub_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xa0,0xda,0x01,0x02,0x00,0x05]
+0xff,0xff,0xa0,0xda,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_cond_sub_u32 v1, v2 ; encoding: [0x00,0x00,0x60,0xda,0x01,0x02,0x00,0x00]
+0x00,0x00,0x60,0xda,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_cond_sub_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x60,0xda,0x01,0x02,0x00,0x00]
+0xff,0xff,0x60,0xda,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_condxchg32_rtn_b64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xf8,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0xf8,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_condxchg32_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xf8,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0xf8,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_condxchg32_rtn_b64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xf8,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0xf8,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_consume v255 offset:4 ; encoding: [0x04,0x00,0xf4,0xd8,0x00,0x00,0x00,0xff]
+0x04,0x00,0xf4,0xd8,0x00,0x00,0x00,0xff
+
+# GFX1250: ds_consume v5 ; encoding: [0x00,0x00,0xf4,0xd8,0x00,0x00,0x00,0x05]
+0x00,0x00,0xf4,0xd8,0x00,0x00,0x00,0x05
+
+# GFX1250: ds_consume v5 offset:65535 ; encoding: [0xff,0xff,0xf4,0xd8,0x00,0x00,0x00,0x05]
+0xff,0xff,0xf4,0xd8,0x00,0x00,0x00,0x05
+
+# GFX1250: ds_dec_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x90,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0x90,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_dec_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x90,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0x90,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_dec_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x90,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0x90,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_dec_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x90,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0x90,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_dec_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x90,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0x90,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_dec_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x90,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0x90,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_dec_u32 v1, v2 ; encoding: [0x00,0x00,0x10,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x10,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_dec_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x10,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x10,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_dec_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x10,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x10,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_dec_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x10,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x10,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_dec_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x10,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x10,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_dec_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x10,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x10,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_inc_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x8c,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0x8c,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_inc_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x8c,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0x8c,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_inc_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x8c,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0x8c,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_inc_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x8c,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0x8c,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_inc_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x8c,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0x8c,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_inc_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x8c,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0x8c,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_inc_u32 v1, v2 ; encoding: [0x00,0x00,0x0c,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x0c,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_inc_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x0c,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x0c,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_inc_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x0c,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x0c,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_inc_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x0c,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x0c,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_inc_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x0c,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x0c,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_inc_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x0c,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x0c,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_load_2addr_b32 v[254:255], v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xdc,0xd8,0xff,0x00,0x00,0xfe]
+0x10,0x01,0xdc,0xd8,0xff,0x00,0x00,0xfe
+
+# GFX1250: ds_load_2addr_b32 v[6:7], v1 ; encoding: [0x00,0x00,0xdc,0xd8,0x01,0x00,0x00,0x06]
+0x00,0x00,0xdc,0xd8,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_2addr_b32 v[6:7], v1 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xdc,0xd8,0x01,0x00,0x00,0x06]
+0x7f,0xff,0xdc,0xd8,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_2addr_b64 v[252:255], v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xdc,0xd9,0xff,0x00,0x00,0xfc]
+0x10,0x01,0xdc,0xd9,0xff,0x00,0x00,0xfc
+
+# GFX1250: ds_load_2addr_b64 v[6:9], v1 ; encoding: [0x00,0x00,0xdc,0xd9,0x01,0x00,0x00,0x06]
+0x00,0x00,0xdc,0xd9,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_2addr_b64 v[6:9], v1 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xdc,0xd9,0x01,0x00,0x00,0x06]
+0x7f,0xff,0xdc,0xd9,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_2addr_stride64_b32 v[254:255], v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xe0,0xd8,0xff,0x00,0x00,0xfe]
+0x10,0x01,0xe0,0xd8,0xff,0x00,0x00,0xfe
+
+# GFX1250: ds_load_2addr_stride64_b32 v[6:7], v1 ; encoding: [0x00,0x00,0xe0,0xd8,0x01,0x00,0x00,0x06]
+0x00,0x00,0xe0,0xd8,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_2addr_stride64_b32 v[6:7], v1 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xe0,0xd8,0x01,0x00,0x00,0x06]
+0x7f,0xff,0xe0,0xd8,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_2addr_stride64_b64 v[252:255], v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xe0,0xd9,0xff,0x00,0x00,0xfc]
+0x10,0x01,0xe0,0xd9,0xff,0x00,0x00,0xfc
+
+# GFX1250: ds_load_2addr_stride64_b64 v[6:9], v1 ; encoding: [0x00,0x00,0xe0,0xd9,0x01,0x00,0x00,0x06]
+0x00,0x00,0xe0,0xd9,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_2addr_stride64_b64 v[6:9], v1 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xe0,0xd9,0x01,0x00,0x00,0x06]
+0x7f,0xff,0xe0,0xd9,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_addtid_b32 v255 offset:4 ; encoding: [0x04,0x00,0xc4,0xda,0x00,0x00,0x00,0xff]
+0x04,0x00,0xc4,0xda,0x00,0x00,0x00,0xff
+
+# GFX1250: ds_load_addtid_b32 v5 ; encoding: [0x00,0x00,0xc4,0xda,0x00,0x00,0x00,0x05]
+0x00,0x00,0xc4,0xda,0x00,0x00,0x00,0x05
+
+# GFX1250: ds_load_addtid_b32 v5 offset:65535 ; encoding: [0xff,0xff,0xc4,0xda,0x00,0x00,0x00,0x05]
+0xff,0xff,0xc4,0xda,0x00,0x00,0x00,0x05
+
+# GFX1250: ds_load_b128 v[252:255], v255 offset:4 ; encoding: [0x04,0x00,0xfc,0xdb,0xff,0x00,0x00,0xfc]
+0x04,0x00,0xfc,0xdb,0xff,0x00,0x00,0xfc
+
+# GFX1250: ds_load_b128 v[6:9], v1 ; encoding: [0x00,0x00,0xfc,0xdb,0x01,0x00,0x00,0x06]
+0x00,0x00,0xfc,0xdb,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_b128 v[6:9], v1 offset:65535 ; encoding: [0xff,0xff,0xfc,0xdb,0x01,0x00,0x00,0x06]
+0xff,0xff,0xfc,0xdb,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_b32 v255, v255 offset:4 ; encoding: [0x04,0x00,0xd8,0xd8,0xff,0x00,0x00,0xff]
+0x04,0x00,0xd8,0xd8,0xff,0x00,0x00,0xff
+
+# GFX1250: ds_load_b32 v5, v1 ; encoding: [0x00,0x00,0xd8,0xd8,0x01,0x00,0x00,0x05]
+0x00,0x00,0xd8,0xd8,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_b32 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xd8,0xd8,0x01,0x00,0x00,0x05]
+0xff,0xff,0xd8,0xd8,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_b64 v[254:255], v255 offset:4 ; encoding: [0x04,0x00,0xd8,0xd9,0xff,0x00,0x00,0xfe]
+0x04,0x00,0xd8,0xd9,0xff,0x00,0x00,0xfe
+
+# GFX1250: ds_load_b64 v[6:7], v1 ; encoding: [0x00,0x00,0xd8,0xd9,0x01,0x00,0x00,0x06]
+0x00,0x00,0xd8,0xd9,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_b64 v[6:7], v1 offset:65535 ; encoding: [0xff,0xff,0xd8,0xd9,0x01,0x00,0x00,0x06]
+0xff,0xff,0xd8,0xd9,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_b96 v[252:254], v255 offset:4 ; encoding: [0x04,0x00,0xf8,0xdb,0xff,0x00,0x00,0xfc]
+0x04,0x00,0xf8,0xdb,0xff,0x00,0x00,0xfc
+
+# GFX1250: ds_load_b96 v[6:8], v1 ; encoding: [0x00,0x00,0xf8,0xdb,0x01,0x00,0x00,0x06]
+0x00,0x00,0xf8,0xdb,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_b96 v[6:8], v1 offset:65535 ; encoding: [0xff,0xff,0xf8,0xdb,0x01,0x00,0x00,0x06]
+0xff,0xff,0xf8,0xdb,0x01,0x00,0x00,0x06
+
+# GFX1250: ds_load_i16 v255, v255 offset:4 ; encoding: [0x04,0x00,0xec,0xd8,0xff,0x00,0x00,0xff]
+0x04,0x00,0xec,0xd8,0xff,0x00,0x00,0xff
+
+# GFX1250: ds_load_i16 v5, v1 ; encoding: [0x00,0x00,0xec,0xd8,0x01,0x00,0x00,0x05]
+0x00,0x00,0xec,0xd8,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_i16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xec,0xd8,0x01,0x00,0x00,0x05]
+0xff,0xff,0xec,0xd8,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_i8 v255, v255 offset:4 ; encoding: [0x04,0x00,0xe4,0xd8,0xff,0x00,0x00,0xff]
+0x04,0x00,0xe4,0xd8,0xff,0x00,0x00,0xff
+
+# GFX1250: ds_load_i8 v5, v1 ; encoding: [0x00,0x00,0xe4,0xd8,0x01,0x00,0x00,0x05]
+0x00,0x00,0xe4,0xd8,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_i8 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xe4,0xd8,0x01,0x00,0x00,0x05]
+0xff,0xff,0xe4,0xd8,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_i8_d16 v255, v255 offset:4 ; encoding: [0x04,0x00,0x90,0xda,0xff,0x00,0x00,0xff]
+0x04,0x00,0x90,0xda,0xff,0x00,0x00,0xff
+
+# GFX1250: ds_load_i8_d16 v5, v1 ; encoding: [0x00,0x00,0x90,0xda,0x01,0x00,0x00,0x05]
+0x00,0x00,0x90,0xda,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_i8_d16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0x90,0xda,0x01,0x00,0x00,0x05]
+0xff,0xff,0x90,0xda,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_i8_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x94,0xda,0xff,0x00,0x00,0xff]
+0x04,0x00,0x94,0xda,0xff,0x00,0x00,0xff
+
+# GFX1250: ds_load_i8_d16_hi v5, v1 ; encoding: [0x00,0x00,0x94,0xda,0x01,0x00,0x00,0x05]
+0x00,0x00,0x94,0xda,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_i8_d16_hi v5, v1 offset:65535 ; encoding: [0xff,0xff,0x94,0xda,0x01,0x00,0x00,0x05]
+0xff,0xff,0x94,0xda,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_u16 v255, v255 offset:4 ; encoding: [0x04,0x00,0xf0,0xd8,0xff,0x00,0x00,0xff]
+0x04,0x00,0xf0,0xd8,0xff,0x00,0x00,0xff
+
+# GFX1250: ds_load_u16 v5, v1 ; encoding: [0x00,0x00,0xf0,0xd8,0x01,0x00,0x00,0x05]
+0x00,0x00,0xf0,0xd8,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_u16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xf0,0xd8,0x01,0x00,0x00,0x05]
+0xff,0xff,0xf0,0xd8,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_u16_d16 v255, v255 offset:4 ; encoding: [0x04,0x00,0x98,0xda,0xff,0x00,0x00,0xff]
+0x04,0x00,0x98,0xda,0xff,0x00,0x00,0xff
+
+# GFX1250: ds_load_u16_d16 v5, v1 ; encoding: [0x00,0x00,0x98,0xda,0x01,0x00,0x00,0x05]
+0x00,0x00,0x98,0xda,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_u16_d16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0x98,0xda,0x01,0x00,0x00,0x05]
+0xff,0xff,0x98,0xda,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_u16_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x9c,0xda,0xff,0x00,0x00,0xff]
+0x04,0x00,0x9c,0xda,0xff,0x00,0x00,0xff
+
+# GFX1250: ds_load_u16_d16_hi v5, v1 ; encoding: [0x00,0x00,0x9c,0xda,0x01,0x00,0x00,0x05]
+0x00,0x00,0x9c,0xda,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_u16_d16_hi v5, v1 offset:65535 ; encoding: [0xff,0xff,0x9c,0xda,0x01,0x00,0x00,0x05]
+0xff,0xff,0x9c,0xda,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_u8 v255, v255 offset:4 ; encoding: [0x04,0x00,0xe8,0xd8,0xff,0x00,0x00,0xff]
+0x04,0x00,0xe8,0xd8,0xff,0x00,0x00,0xff
+
+# GFX1250: ds_load_u8 v5, v1 ; encoding: [0x00,0x00,0xe8,0xd8,0x01,0x00,0x00,0x05]
+0x00,0x00,0xe8,0xd8,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_u8 v5, v1 offset:65535 ; encoding: [0xff,0xff,0xe8,0xd8,0x01,0x00,0x00,0x05]
+0xff,0xff,0xe8,0xd8,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_u8_d16 v255, v255 offset:4 ; encoding: [0x04,0x00,0x88,0xda,0xff,0x00,0x00,0xff]
+0x04,0x00,0x88,0xda,0xff,0x00,0x00,0xff
+
+# GFX1250: ds_load_u8_d16 v5, v1 ; encoding: [0x00,0x00,0x88,0xda,0x01,0x00,0x00,0x05]
+0x00,0x00,0x88,0xda,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_u8_d16 v5, v1 offset:65535 ; encoding: [0xff,0xff,0x88,0xda,0x01,0x00,0x00,0x05]
+0xff,0xff,0x88,0xda,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_u8_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x8c,0xda,0xff,0x00,0x00,0xff]
+0x04,0x00,0x8c,0xda,0xff,0x00,0x00,0xff
+
+# GFX1250: ds_load_u8_d16_hi v5, v1 ; encoding: [0x00,0x00,0x8c,0xda,0x01,0x00,0x00,0x05]
+0x00,0x00,0x8c,0xda,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_load_u8_d16_hi v5, v1 offset:65535 ; encoding: [0xff,0xff,0x8c,0xda,0x01,0x00,0x00,0x05]
+0xff,0xff,0x8c,0xda,0x01,0x00,0x00,0x05
+
+# GFX1250: ds_max_i32 v1, v2 ; encoding: [0x00,0x00,0x18,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x18,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_max_i32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x18,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x18,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_max_i32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x18,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x18,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_max_i64 v1, v[2:3] ; encoding: [0x00,0x00,0x18,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x18,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_max_i64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x18,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x18,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_max_i64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x18,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x18,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_max_num_f32 v1, v2 ; encoding: [0x00,0x00,0x4c,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x4c,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_max_num_f32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x4c,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x4c,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_max_num_f32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x4c,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x4c,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_max_num_f64 v1, v[2:3] ; encoding: [0x00,0x00,0x4c,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x4c,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_max_num_f64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x4c,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x4c,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_max_num_f64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x4c,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x4c,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_max_num_rtn_f32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xcc,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0xcc,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_max_num_rtn_f32 v5, v1, v2 ; encoding: [0x00,0x00,0xcc,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0xcc,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_max_num_rtn_f32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xcc,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0xcc,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_max_num_rtn_f64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xcc,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0xcc,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_max_num_rtn_f64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xcc,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0xcc,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_max_num_rtn_f64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xcc,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0xcc,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_max_rtn_i32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x98,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0x98,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_max_rtn_i32 v5, v1, v2 ; encoding: [0x00,0x00,0x98,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0x98,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_max_rtn_i32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x98,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0x98,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_max_rtn_i64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x98,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0x98,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_max_rtn_i64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x98,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0x98,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_max_rtn_i64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x98,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0x98,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_max_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xa0,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0xa0,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_max_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0xa0,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0xa0,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_max_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xa0,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0xa0,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_max_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xa0,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0xa0,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_max_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xa0,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0xa0,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_max_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xa0,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0xa0,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_max_u32 v1, v2 ; encoding: [0x00,0x00,0x20,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x20,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_max_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x20,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x20,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_max_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x20,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x20,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_max_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x20,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x20,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_max_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x20,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x20,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_max_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x20,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x20,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_min_i32 v1, v2 ; encoding: [0x00,0x00,0x14,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x14,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_min_i32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x14,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x14,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_min_i32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x14,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x14,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_min_i64 v1, v[2:3] ; encoding: [0x00,0x00,0x14,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x14,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_min_i64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x14,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x14,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_min_i64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x14,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x14,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_min_num_f32 v1, v2 ; encoding: [0x00,0x00,0x48,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x48,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_min_num_f32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x48,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x48,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_min_num_f32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x48,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x48,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_min_num_f64 v1, v[2:3] ; encoding: [0x00,0x00,0x48,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x48,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_min_num_f64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x48,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x48,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_min_num_f64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x48,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x48,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_min_num_rtn_f32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xc8,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0xc8,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_min_num_rtn_f32 v5, v1, v2 ; encoding: [0x00,0x00,0xc8,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0xc8,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_min_num_rtn_f32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xc8,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0xc8,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_min_num_rtn_f64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xc8,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0xc8,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_min_num_rtn_f64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xc8,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0xc8,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_min_num_rtn_f64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xc8,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0xc8,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_min_rtn_i32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x94,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0x94,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_min_rtn_i32 v5, v1, v2 ; encoding: [0x00,0x00,0x94,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0x94,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_min_rtn_i32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x94,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0x94,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_min_rtn_i64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x94,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0x94,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_min_rtn_i64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x94,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0x94,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_min_rtn_i64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x94,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0x94,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_min_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x9c,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0x9c,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_min_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x9c,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0x9c,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_min_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x9c,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0x9c,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_min_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x9c,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0x9c,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_min_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x9c,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0x9c,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_min_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x9c,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0x9c,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_min_u32 v1, v2 ; encoding: [0x00,0x00,0x1c,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x1c,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_min_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x1c,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x1c,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_min_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x1c,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x1c,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_min_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x1c,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x1c,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_min_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x1c,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x1c,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_min_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x1c,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x1c,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_mskor_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x30,0xd8,0x01,0x02,0x03,0x00]
+0x00,0x00,0x30,0xd8,0x01,0x02,0x03,0x00
+
+# GFX1250: ds_mskor_b32 v1, v2, v3 offset:65535 ; encoding: [0xff,0xff,0x30,0xd8,0x01,0x02,0x03,0x00]
+0xff,0xff,0x30,0xd8,0x01,0x02,0x03,0x00
+
+# GFX1250: ds_mskor_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x30,0xd8,0xff,0xff,0xff,0x00]
+0x04,0x00,0x30,0xd8,0xff,0xff,0xff,0x00
+
+# GFX1250: ds_mskor_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x30,0xd9,0x01,0x02,0x04,0x00]
+0x00,0x00,0x30,0xd9,0x01,0x02,0x04,0x00
+
+# GFX1250: ds_mskor_b64 v1, v[2:3], v[4:5] offset:65535 ; encoding: [0xff,0xff,0x30,0xd9,0x01,0x02,0x04,0x00]
+0xff,0xff,0x30,0xd9,0x01,0x02,0x04,0x00
+
+# GFX1250: ds_mskor_b64 v255, v[254:255], v[254:255] offset:4 ; encoding: [0x04,0x00,0x30,0xd9,0xff,0xfe,0xfe,0x00]
+0x04,0x00,0x30,0xd9,0xff,0xfe,0xfe,0x00
+
+# GFX1250: ds_mskor_rtn_b32 v255, v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xb0,0xd8,0xff,0xff,0xff,0xff]
+0x04,0x00,0xb0,0xd8,0xff,0xff,0xff,0xff
+
+# GFX1250: ds_mskor_rtn_b32 v5, v1, v2, v3 ; encoding: [0x00,0x00,0xb0,0xd8,0x01,0x02,0x03,0x05]
+0x00,0x00,0xb0,0xd8,0x01,0x02,0x03,0x05
+
+# GFX1250: ds_mskor_rtn_b32 v5, v1, v2, v3 offset:65535 ; encoding: [0xff,0xff,0xb0,0xd8,0x01,0x02,0x03,0x05]
+0xff,0xff,0xb0,0xd8,0x01,0x02,0x03,0x05
+
+# GFX1250: ds_mskor_rtn_b64 v[254:255], v255, v[254:255], v[254:255] offset:4 ; encoding: [0x04,0x00,0xb0,0xd9,0xff,0xfe,0xfe,0xfe]
+0x04,0x00,0xb0,0xd9,0xff,0xfe,0xfe,0xfe
+
+# GFX1250: ds_mskor_rtn_b64 v[6:7], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xb0,0xd9,0x01,0x02,0x04,0x06]
+0x00,0x00,0xb0,0xd9,0x01,0x02,0x04,0x06
+
+# GFX1250: ds_mskor_rtn_b64 v[6:7], v1, v[2:3], v[4:5] offset:65535 ; encoding: [0xff,0xff,0xb0,0xd9,0x01,0x02,0x04,0x06]
+0xff,0xff,0xb0,0xd9,0x01,0x02,0x04,0x06
+
+# GFX1250: ds_nop ; encoding: [0x00,0x00,0x50,0xd8,0x00,0x00,0x00,0x00]
+0x00,0x00,0x50,0xd8,0x00,0x00,0x00,0x00
+
+# GFX1250: ds_or_b32 v1, v2 ; encoding: [0x00,0x00,0x28,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x28,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_or_b32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x28,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x28,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_or_b32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x28,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x28,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_or_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x28,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x28,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_or_b64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x28,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x28,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_or_b64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x28,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x28,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_or_rtn_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xa8,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0xa8,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_or_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xa8,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0xa8,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_or_rtn_b32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xa8,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0xa8,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_or_rtn_b64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xa8,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0xa8,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_or_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xa8,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0xa8,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_or_rtn_b64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xa8,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0xa8,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_permute_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xc8,0xda,0xff,0xff,0x00,0xff]
+0x04,0x00,0xc8,0xda,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_permute_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xc8,0xda,0x01,0x02,0x00,0x05]
+0x00,0x00,0xc8,0xda,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_permute_b32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xc8,0xda,0x01,0x02,0x00,0x05]
+0xff,0xff,0xc8,0xda,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_pk_add_bf16 v0, v0 ; encoding: [0x00,0x00,0x6c,0xda,0x00,0x00,0x00,0x00]
+0x00,0x00,0x6c,0xda,0x00,0x00,0x00,0x00
+
+# GFX1250: ds_pk_add_bf16 v0, v0 offset:65535 ; encoding: [0xff,0xff,0x6c,0xda,0x00,0x00,0x00,0x00]
+0xff,0xff,0x6c,0xda,0x00,0x00,0x00,0x00
+
+# GFX1250: ds_pk_add_bf16 v2, v1 ; encoding: [0x00,0x00,0x6c,0xda,0x02,0x01,0x00,0x00]
+0x00,0x00,0x6c,0xda,0x02,0x01,0x00,0x00
+
+# GFX1250: ds_pk_add_bf16 v255, v255 ; encoding: [0x00,0x00,0x6c,0xda,0xff,0xff,0x00,0x00]
+0x00,0x00,0x6c,0xda,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_pk_add_bf16 v255, v255 offset:4660 ; encoding: [0x34,0x12,0x6c,0xda,0xff,0xff,0x00,0x00]
+0x34,0x12,0x6c,0xda,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_pk_add_f16 v0, v0 ; encoding: [0x00,0x00,0x68,0xda,0x00,0x00,0x00,0x00]
+0x00,0x00,0x68,0xda,0x00,0x00,0x00,0x00
+
+# GFX1250: ds_pk_add_f16 v2, v1 ; encoding: [0x00,0x00,0x68,0xda,0x02,0x01,0x00,0x00]
+0x00,0x00,0x68,0xda,0x02,0x01,0x00,0x00
+
+# GFX1250: ds_pk_add_f16 v2, v1 offset:4660 ; encoding: [0x34,0x12,0x68,0xda,0x02,0x01,0x00,0x00]
+0x34,0x12,0x68,0xda,0x02,0x01,0x00,0x00
+
+# GFX1250: ds_pk_add_f16 v2, v1 offset:65535 ; encoding: [0xff,0xff,0x68,0xda,0x02,0x01,0x00,0x00]
+0xff,0xff,0x68,0xda,0x02,0x01,0x00,0x00
+
+# GFX1250: ds_pk_add_f16 v255, v255 ; encoding: [0x00,0x00,0x68,0xda,0xff,0xff,0x00,0x00]
+0x00,0x00,0x68,0xda,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_pk_add_f16 v255, v255 offset:4660 ; encoding: [0x34,0x12,0x68,0xda,0xff,0xff,0x00,0x00]
+0x34,0x12,0x68,0xda,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_pk_add_f16 v255, v255 offset:65535 ; encoding: [0xff,0xff,0x68,0xda,0xff,0xff,0x00,0x00]
+0xff,0xff,0x68,0xda,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_pk_add_rtn_bf16 v255, v0, v200 ; encoding: [0x00,0x00,0xac,0xda,0x00,0xc8,0x00,0xff]
+0x00,0x00,0xac,0xda,0x00,0xc8,0x00,0xff
+
+# GFX1250: ds_pk_add_rtn_bf16 v255, v255, v255 ; encoding: [0x00,0x00,0xac,0xda,0xff,0xff,0x00,0xff]
+0x00,0x00,0xac,0xda,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_pk_add_rtn_bf16 v255, v255, v255 offset:65535 ; encoding: [0xff,0xff,0xac,0xda,0xff,0xff,0x00,0xff]
+0xff,0xff,0xac,0xda,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_pk_add_rtn_bf16 v3, v2, v1 ; encoding: [0x00,0x00,0xac,0xda,0x02,0x01,0x00,0x03]
+0x00,0x00,0xac,0xda,0x02,0x01,0x00,0x03
+
+# GFX1250: ds_pk_add_rtn_bf16 v3, v2, v1 offset:4660 ; encoding: [0x34,0x12,0xac,0xda,0x02,0x01,0x00,0x03]
+0x34,0x12,0xac,0xda,0x02,0x01,0x00,0x03
+
+# GFX1250: ds_pk_add_rtn_f16 v255, v0, v200 ; encoding: [0x00,0x00,0xa8,0xda,0x00,0xc8,0x00,0xff]
+0x00,0x00,0xa8,0xda,0x00,0xc8,0x00,0xff
+
+# GFX1250: ds_pk_add_rtn_f16 v255, v0, v200 offset:65535 ; encoding: [0xff,0xff,0xa8,0xda,0x00,0xc8,0x00,0xff]
+0xff,0xff,0xa8,0xda,0x00,0xc8,0x00,0xff
+
+# GFX1250: ds_pk_add_rtn_f16 v255, v255, v255 ; encoding: [0x00,0x00,0xa8,0xda,0xff,0xff,0x00,0xff]
+0x00,0x00,0xa8,0xda,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_pk_add_rtn_f16 v3, v2, v1 ; encoding: [0x00,0x00,0xa8,0xda,0x02,0x01,0x00,0x03]
+0x00,0x00,0xa8,0xda,0x02,0x01,0x00,0x03
+
+# GFX1250: ds_pk_add_rtn_f16 v3, v2, v1 offset:4660 ; encoding: [0x34,0x12,0xa8,0xda,0x02,0x01,0x00,0x03]
+0x34,0x12,0xa8,0xda,0x02,0x01,0x00,0x03
+
+# GFX1250: ds_rsub_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x88,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0x88,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_rsub_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x88,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0x88,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_rsub_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x88,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0x88,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_rsub_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x88,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0x88,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_rsub_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x88,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0x88,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_rsub_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x88,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0x88,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_rsub_u32 v1, v2 ; encoding: [0x00,0x00,0x08,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x08,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_rsub_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x08,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x08,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_rsub_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x08,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x08,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_rsub_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x08,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x08,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_rsub_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x08,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x08,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_rsub_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x08,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x08,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_store_2addr_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x38,0xd8,0x01,0x02,0x03,0x00]
+0x00,0x00,0x38,0xd8,0x01,0x02,0x03,0x00
+
+# GFX1250: ds_store_2addr_b32 v1, v2, v3 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0x38,0xd8,0x01,0x02,0x03,0x00]
+0x7f,0xff,0x38,0xd8,0x01,0x02,0x03,0x00
+
+# GFX1250: ds_store_2addr_b32 v255, v255, v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0x38,0xd8,0xff,0xff,0xff,0x00]
+0x10,0x01,0x38,0xd8,0xff,0xff,0xff,0x00
+
+# GFX1250: ds_store_2addr_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x38,0xd9,0x01,0x02,0x04,0x00]
+0x00,0x00,0x38,0xd9,0x01,0x02,0x04,0x00
+
+# GFX1250: ds_store_2addr_b64 v1, v[2:3], v[4:5] offset0:127 offset1:255 ; encoding: [0x7f,0xff,0x38,0xd9,0x01,0x02,0x04,0x00]
+0x7f,0xff,0x38,0xd9,0x01,0x02,0x04,0x00
+
+# GFX1250: ds_store_2addr_b64 v255, v[254:255], v[254:255] offset0:16 offset1:1 ; encoding: [0x10,0x01,0x38,0xd9,0xff,0xfe,0xfe,0x00]
+0x10,0x01,0x38,0xd9,0xff,0xfe,0xfe,0x00
+
+# GFX1250: ds_store_2addr_stride64_b32 v1, v2, v3 ; encoding: [0x00,0x00,0x3c,0xd8,0x01,0x02,0x03,0x00]
+0x00,0x00,0x3c,0xd8,0x01,0x02,0x03,0x00
+
+# GFX1250: ds_store_2addr_stride64_b32 v1, v2, v3 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0x3c,0xd8,0x01,0x02,0x03,0x00]
+0x7f,0xff,0x3c,0xd8,0x01,0x02,0x03,0x00
+
+# GFX1250: ds_store_2addr_stride64_b32 v255, v255, v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0x3c,0xd8,0xff,0xff,0xff,0x00]
+0x10,0x01,0x3c,0xd8,0xff,0xff,0xff,0x00
+
+# GFX1250: ds_store_2addr_stride64_b64 v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0x3c,0xd9,0x01,0x02,0x04,0x00]
+0x00,0x00,0x3c,0xd9,0x01,0x02,0x04,0x00
+
+# GFX1250: ds_store_2addr_stride64_b64 v1, v[2:3], v[4:5] offset0:127 offset1:255 ; encoding: [0x7f,0xff,0x3c,0xd9,0x01,0x02,0x04,0x00]
+0x7f,0xff,0x3c,0xd9,0x01,0x02,0x04,0x00
+
+# GFX1250: ds_store_2addr_stride64_b64 v255, v[254:255], v[254:255] offset0:16 offset1:1 ; encoding: [0x10,0x01,0x3c,0xd9,0xff,0xfe,0xfe,0x00]
+0x10,0x01,0x3c,0xd9,0xff,0xfe,0xfe,0x00
+
+# GFX1250: ds_store_addtid_b32 v1 ; encoding: [0x00,0x00,0xc0,0xda,0x00,0x01,0x00,0x00]
+0x00,0x00,0xc0,0xda,0x00,0x01,0x00,0x00
+
+# GFX1250: ds_store_addtid_b32 v1 offset:65535 ; encoding: [0xff,0xff,0xc0,0xda,0x00,0x01,0x00,0x00]
+0xff,0xff,0xc0,0xda,0x00,0x01,0x00,0x00
+
+# GFX1250: ds_store_addtid_b32 v255 offset:4 ; encoding: [0x04,0x00,0xc0,0xda,0x00,0xff,0x00,0x00]
+0x04,0x00,0xc0,0xda,0x00,0xff,0x00,0x00
+
+# GFX1250: ds_store_b128 v1, v[2:5] ; encoding: [0x00,0x00,0x7c,0xdb,0x01,0x02,0x00,0x00]
+0x00,0x00,0x7c,0xdb,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b128 v1, v[2:5] offset:65535 ; encoding: [0xff,0xff,0x7c,0xdb,0x01,0x02,0x00,0x00]
+0xff,0xff,0x7c,0xdb,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b128 v255, v[252:255] offset:4 ; encoding: [0x04,0x00,0x7c,0xdb,0xff,0xfc,0x00,0x00]
+0x04,0x00,0x7c,0xdb,0xff,0xfc,0x00,0x00
+
+# GFX1250: ds_store_b16 v1, v2 ; encoding: [0x00,0x00,0x7c,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x7c,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b16 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x7c,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x7c,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b16 v255, v255 offset:4 ; encoding: [0x04,0x00,0x7c,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x7c,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_store_b16_d16_hi v1, v2 ; encoding: [0x00,0x00,0x84,0xda,0x01,0x02,0x00,0x00]
+0x00,0x00,0x84,0xda,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b16_d16_hi v1, v2 offset:65535 ; encoding: [0xff,0xff,0x84,0xda,0x01,0x02,0x00,0x00]
+0xff,0xff,0x84,0xda,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b16_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x84,0xda,0xff,0xff,0x00,0x00]
+0x04,0x00,0x84,0xda,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_store_b32 v1, v2 ; encoding: [0x00,0x00,0x34,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x34,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x34,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x34,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x34,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x34,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_store_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x34,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x34,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x34,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x34,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x34,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x34,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_store_b8 v1, v2 ; encoding: [0x00,0x00,0x78,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x78,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b8 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x78,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x78,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b8 v255, v255 offset:4 ; encoding: [0x04,0x00,0x78,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x78,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_store_b8_d16_hi v1, v2 ; encoding: [0x00,0x00,0x80,0xda,0x01,0x02,0x00,0x00]
+0x00,0x00,0x80,0xda,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b8_d16_hi v1, v2 offset:65535 ; encoding: [0xff,0xff,0x80,0xda,0x01,0x02,0x00,0x00]
+0xff,0xff,0x80,0xda,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b8_d16_hi v255, v255 offset:4 ; encoding: [0x04,0x00,0x80,0xda,0xff,0xff,0x00,0x00]
+0x04,0x00,0x80,0xda,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_store_b96 v1, v[2:4] ; encoding: [0x00,0x00,0x78,0xdb,0x01,0x02,0x00,0x00]
+0x00,0x00,0x78,0xdb,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b96 v1, v[2:4] offset:65535 ; encoding: [0xff,0xff,0x78,0xdb,0x01,0x02,0x00,0x00]
+0xff,0xff,0x78,0xdb,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_store_b96 v255, v[252:254] offset:4 ; encoding: [0x04,0x00,0x78,0xdb,0xff,0xfc,0x00,0x00]
+0x04,0x00,0x78,0xdb,0xff,0xfc,0x00,0x00
+
+# GFX1250: ds_storexchg_2addr_rtn_b32 v[254:255], v255, v255, v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xb8,0xd8,0xff,0xff,0xff,0xfe]
+0x10,0x01,0xb8,0xd8,0xff,0xff,0xff,0xfe
+
+# GFX1250: ds_storexchg_2addr_rtn_b32 v[6:7], v1, v2, v3 ; encoding: [0x00,0x00,0xb8,0xd8,0x01,0x02,0x03,0x06]
+0x00,0x00,0xb8,0xd8,0x01,0x02,0x03,0x06
+
+# GFX1250: ds_storexchg_2addr_rtn_b32 v[6:7], v1, v2, v3 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xb8,0xd8,0x01,0x02,0x03,0x06]
+0x7f,0xff,0xb8,0xd8,0x01,0x02,0x03,0x06
+
+# GFX1250: ds_storexchg_2addr_rtn_b64 v[252:255], v255, v[254:255], v[254:255] offset0:16 offset1:1 ; encoding: [0x10,0x01,0xb8,0xd9,0xff,0xfe,0xfe,0xfc]
+0x10,0x01,0xb8,0xd9,0xff,0xfe,0xfe,0xfc
+
+# GFX1250: ds_storexchg_2addr_rtn_b64 v[6:9], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xb8,0xd9,0x01,0x02,0x04,0x06]
+0x00,0x00,0xb8,0xd9,0x01,0x02,0x04,0x06
+
+# GFX1250: ds_storexchg_2addr_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xb8,0xd9,0x01,0x02,0x04,0x06]
+0x7f,0xff,0xb8,0xd9,0x01,0x02,0x04,0x06
+
+# GFX1250: ds_storexchg_2addr_stride64_rtn_b32 v[254:255], v255, v255, v255 offset0:16 offset1:1 ; encoding: [0x10,0x01,0xbc,0xd8,0xff,0xff,0xff,0xfe]
+0x10,0x01,0xbc,0xd8,0xff,0xff,0xff,0xfe
+
+# GFX1250: ds_storexchg_2addr_stride64_rtn_b32 v[6:7], v1, v2, v3 ; encoding: [0x00,0x00,0xbc,0xd8,0x01,0x02,0x03,0x06]
+0x00,0x00,0xbc,0xd8,0x01,0x02,0x03,0x06
+
+# GFX1250: ds_storexchg_2addr_stride64_rtn_b32 v[6:7], v1, v2, v3 offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xbc,0xd8,0x01,0x02,0x03,0x06]
+0x7f,0xff,0xbc,0xd8,0x01,0x02,0x03,0x06
+
+# GFX1250: ds_storexchg_2addr_stride64_rtn_b64 v[252:255], v255, v[254:255], v[254:255] offset0:16 offset1:1 ; encoding: [0x10,0x01,0xbc,0xd9,0xff,0xfe,0xfe,0xfc]
+0x10,0x01,0xbc,0xd9,0xff,0xfe,0xfe,0xfc
+
+# GFX1250: ds_storexchg_2addr_stride64_rtn_b64 v[6:9], v1, v[2:3], v[4:5] ; encoding: [0x00,0x00,0xbc,0xd9,0x01,0x02,0x04,0x06]
+0x00,0x00,0xbc,0xd9,0x01,0x02,0x04,0x06
+
+# GFX1250: ds_storexchg_2addr_stride64_rtn_b64 v[6:9], v1, v[2:3], v[4:5] offset0:127 offset1:255 ; encoding: [0x7f,0xff,0xbc,0xd9,0x01,0x02,0x04,0x06]
+0x7f,0xff,0xbc,0xd9,0x01,0x02,0x04,0x06
+
+# GFX1250: ds_storexchg_rtn_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xb4,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0xb4,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_storexchg_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xb4,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0xb4,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_storexchg_rtn_b32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xb4,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0xb4,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_storexchg_rtn_b64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xb4,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0xb4,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_storexchg_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xb4,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0xb4,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_storexchg_rtn_b64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xb4,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0xb4,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_sub_clamp_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xa4,0xda,0xff,0xff,0x00,0xff]
+0x04,0x00,0xa4,0xda,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_sub_clamp_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0xa4,0xda,0x01,0x02,0x00,0x05]
+0x00,0x00,0xa4,0xda,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_sub_clamp_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xa4,0xda,0x01,0x02,0x00,0x05]
+0xff,0xff,0xa4,0xda,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_sub_clamp_u32 v1, v2 ; encoding: [0x00,0x00,0x64,0xda,0x01,0x02,0x00,0x00]
+0x00,0x00,0x64,0xda,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_sub_clamp_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x64,0xda,0x01,0x02,0x00,0x00]
+0xff,0xff,0x64,0xda,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_sub_clamp_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x64,0xda,0xff,0xff,0x00,0x00]
+0x04,0x00,0x64,0xda,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_sub_rtn_u32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0x84,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0x84,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_sub_rtn_u32 v5, v1, v2 ; encoding: [0x00,0x00,0x84,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0x84,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_sub_rtn_u32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0x84,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0x84,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_sub_rtn_u64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x84,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0x84,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_sub_rtn_u64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0x84,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0x84,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_sub_rtn_u64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x84,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0x84,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_sub_u32 v1, v2 ; encoding: [0x00,0x00,0x04,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x04,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_sub_u32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x04,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x04,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_sub_u32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x04,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x04,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_sub_u64 v1, v[2:3] ; encoding: [0x00,0x00,0x04,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x04,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_sub_u64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x04,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x04,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_sub_u64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x04,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x04,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_swizzle_b32 v8, v2 ; encoding: [0x00,0x00,0xd4,0xd8,0x02,0x00,0x00,0x08]
+0x00,0x00,0xd4,0xd8,0x02,0x00,0x00,0x08
+
+# GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(FFT,31) ; encoding: [0xff,0xff,0xd4,0xd8,0x02,0x00,0x00,0x08]
+0xff,0xff,0xd4,0xd8,0x02,0x00,0x00,0x08
+
+# GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(BITMASK_PERM,"01pip") ; encoding: [0x07,0x09,0xd4,0xd8,0x02,0x00,0x00,0x08]
+0x07,0x09,0xd4,0xd8,0x02,0x00,0x00,0x08
+
+# GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(BROADCAST,4,1) ; encoding: [0x3c,0x00,0xd4,0xd8,0x02,0x00,0x00,0x08]
+0x3c,0x00,0xd4,0xd8,0x02,0x00,0x00,0x08
+
+# GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(BROADCAST,8,7) ; encoding: [0xf8,0x00,0xd4,0xd8,0x02,0x00,0x00,0x08]
+0xf8,0x00,0xd4,0xd8,0x02,0x00,0x00,0x08
+
+# GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(QUAD_PERM,0,1,2,3) ; encoding: [0xe4,0x80,0xd4,0xd8,0x02,0x00,0x00,0x08]
+0xe4,0x80,0xd4,0xd8,0x02,0x00,0x00,0x08
+
+# GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(REVERSE,8) ; encoding: [0x1f,0x1c,0xd4,0xd8,0x02,0x00,0x00,0x08]
+0x1f,0x1c,0xd4,0xd8,0x02,0x00,0x00,0x08
+
+# GFX1250: ds_swizzle_b32 v8, v2 offset:swizzle(SWAP,16) ; encoding: [0x1f,0x40,0xd4,0xd8,0x02,0x00,0x00,0x08]
+0x1f,0x40,0xd4,0xd8,0x02,0x00,0x00,0x08
+
+# GFX1250: ds_xor_b32 v1, v2 ; encoding: [0x00,0x00,0x2c,0xd8,0x01,0x02,0x00,0x00]
+0x00,0x00,0x2c,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_xor_b32 v1, v2 offset:65535 ; encoding: [0xff,0xff,0x2c,0xd8,0x01,0x02,0x00,0x00]
+0xff,0xff,0x2c,0xd8,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_xor_b32 v255, v255 offset:4 ; encoding: [0x04,0x00,0x2c,0xd8,0xff,0xff,0x00,0x00]
+0x04,0x00,0x2c,0xd8,0xff,0xff,0x00,0x00
+
+# GFX1250: ds_xor_b64 v1, v[2:3] ; encoding: [0x00,0x00,0x2c,0xd9,0x01,0x02,0x00,0x00]
+0x00,0x00,0x2c,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_xor_b64 v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0x2c,0xd9,0x01,0x02,0x00,0x00]
+0xff,0xff,0x2c,0xd9,0x01,0x02,0x00,0x00
+
+# GFX1250: ds_xor_b64 v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0x2c,0xd9,0xff,0xfe,0x00,0x00]
+0x04,0x00,0x2c,0xd9,0xff,0xfe,0x00,0x00
+
+# GFX1250: ds_xor_rtn_b32 v255, v255, v255 offset:4 ; encoding: [0x04,0x00,0xac,0xd8,0xff,0xff,0x00,0xff]
+0x04,0x00,0xac,0xd8,0xff,0xff,0x00,0xff
+
+# GFX1250: ds_xor_rtn_b32 v5, v1, v2 ; encoding: [0x00,0x00,0xac,0xd8,0x01,0x02,0x00,0x05]
+0x00,0x00,0xac,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_xor_rtn_b32 v5, v1, v2 offset:65535 ; encoding: [0xff,0xff,0xac,0xd8,0x01,0x02,0x00,0x05]
+0xff,0xff,0xac,0xd8,0x01,0x02,0x00,0x05
+
+# GFX1250: ds_xor_rtn_b64 v[254:255], v255, v[254:255] offset:4 ; encoding: [0x04,0x00,0xac,0xd9,0xff,0xfe,0x00,0xfe]
+0x04,0x00,0xac,0xd9,0xff,0xfe,0x00,0xfe
+
+# GFX1250: ds_xor_rtn_b64 v[6:7], v1, v[2:3] ; encoding: [0x00,0x00,0xac,0xd9,0x01,0x02,0x00,0x06]
+0x00,0x00,0xac,0xd9,0x01,0x02,0x00,0x06
+
+# GFX1250: ds_xor_rtn_b64 v[6:7], v1, v[2:3] offset:65535 ; encoding: [0xff,0xff,0xac,0xd9,0x01,0x02,0x00,0x06]
+0xff,0xff,0xac,0xd9,0x01,0x02,0x00,0x06
+
 # GFX1250: ds_atomic_async_barrier_arrive_b64 v1 offset:65407 ; encoding: [0x7f,0xff,0x58,0xd9,0x01,0x00,0x00,0x00]
 0x7f,0xff,0x58,0xd9,0x01,0x00,0x00,0x00
>From 87404eaf0445c7e67091e4e71d6c1cfa6fd0edd4 Mon Sep 17 00:00:00 2001
From: Jonas Devlieghere <jonas at devlieghere.com>
Date: Wed, 6 Aug 2025 14:31:56 -0700
Subject: [PATCH 155/171] [lldb] Fix undefined behavior in DWARFExpressionTest

RegisterInfo is a trivial class, so default-initializing it leaves its
members with indeterminate values. Thanks, Alex, for getting to the
bottom of this.
---
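(Note for readers, not part of the patch: "trivial" here is meant in the
C++ sense: no user-provided constructors and no default member
initializers, so a plain declaration leaves the members holding
indeterminate values. A minimal stand-alone sketch, using a hypothetical
Trivial struct as a stand-in for the real RegisterInfo:

  #include <cstdio>

  // Hypothetical stand-in for RegisterInfo: a trivial class.
  struct Trivial {
    int value; // no initializer, no constructor
  };

  int main() {
    Trivial a;   // default-initialization: a.value is indeterminate;
                 // reading it would be undefined behavior
    Trivial b{}; // value-initialization: b.value is zero-initialized
    (void)a;     // silence unused-variable warnings
    std::printf("%d\n", b.value); // well-defined, prints 0
  }

Appending {} to the member declarations below works the same way: the
default member initializer value-initializes each member, so its fields
start out zeroed instead of indeterminate.)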
lldb/unittests/Expression/DWARFExpressionTest.cpp | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/lldb/unittests/Expression/DWARFExpressionTest.cpp b/lldb/unittests/Expression/DWARFExpressionTest.cpp
index 3008fc0d9ca17..5a5d3aba0e207 100644
--- a/lldb/unittests/Expression/DWARFExpressionTest.cpp
+++ b/lldb/unittests/Expression/DWARFExpressionTest.cpp
@@ -125,8 +125,8 @@ class MockRegisterContext : public RegisterContext {
   }
 
 private:
-  RegisterInfo m_reg_info;
-  RegisterValue m_reg_value;
+  RegisterInfo m_reg_info{};
+  RegisterValue m_reg_value{};
 };
 
 } // namespace
>From c3103068b713dbed8d8ac75b165086a1a19c89e9 Mon Sep 17 00:00:00 2001
From: Stanislav Mekhanoshin <Stanislav.Mekhanoshin at amd.com>
Date: Wed, 6 Aug 2025 14:38:28 -0700
Subject: [PATCH 156/171] [AMDGPU] Add more gfx1250 MC tests. NFC. (#152388)

These are already working, but had been left downstream.
---
llvm/test/MC/AMDGPU/gfx1250_asm_features.s | 32 +
llvm/test/MC/AMDGPU/gfx1250_asm_unsupported.s | 14 +
.../MC/AMDGPU/gfx1250_asm_vbuffer_mubuf.s | 2304 +++++++++++
llvm/test/MC/AMDGPU/gfx1250_asm_vop3_err.s | 165 +
.../AMDGPU/gfx1250_asm_vop3_from_vop2_err.s | 13 +
llvm/test/MC/AMDGPU/gfx1250_asm_vop3cx.s | 3413 +++++++++++++++++
llvm/test/MC/AMDGPU/gfx1250_asm_vop3p_dpp16.s | 14 +
llvm/test/MC/AMDGPU/gfx1250_asm_vop3p_dpp8.s | 26 +
llvm/test/MC/AMDGPU/gfx1250_asm_vsample_err.s | 175 +
llvm/test/MC/AMDGPU/gfx1250_err.s | 30 +
.../AMDGPU/gfx1250_dasm_vbuffer_mubuf.txt | 2133 ++++++++++
.../AMDGPU/gfx1250_dasm_vop3cx.txt | 3413 +++++++++++++++++
.../AMDGPU/gfx1250_dasm_vop3p_dpp16.txt | 10 +
.../AMDGPU/gfx1250_dasm_vop3p_dpp8.txt | 19 +
14 files changed, 11761 insertions(+)
create mode 100644 llvm/test/MC/AMDGPU/gfx1250_asm_features.s
create mode 100644 llvm/test/MC/AMDGPU/gfx1250_asm_vop3_from_vop2_err.s
create mode 100644 llvm/test/MC/AMDGPU/gfx1250_asm_vop3cx.s
create mode 100644 llvm/test/MC/AMDGPU/gfx1250_asm_vop3p_dpp16.s
create mode 100644 llvm/test/MC/AMDGPU/gfx1250_asm_vop3p_dpp8.s
create mode 100644 llvm/test/MC/AMDGPU/gfx1250_asm_vsample_err.s
create mode 100644 llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vop3cx.txt
create mode 100644 llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vop3p_dpp16.txt
create mode 100644 llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vop3p_dpp8.txt
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_features.s b/llvm/test/MC/AMDGPU/gfx1250_asm_features.s
new file mode 100644
index 0000000000000..013b790b2b4b6
--- /dev/null
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_features.s
@@ -0,0 +1,32 @@
+// RUN: llvm-mc -triple=amdgcn -mcpu=gfx1250 -show-encoding %s | FileCheck --check-prefixes=GFX1250 %s
+
+//
+// Elements of CPol operand can be given in any order
+//
+
+s_load_b32 s4, s[2:3], 10 th:TH_LOAD_NT scope:SCOPE_SE nv
+// GFX1250: encoding: [0x01,0x01,0xb0,0xf4,0x0a,0x00,0x00,0xf8]
+
+s_load_b32 s4, s[2:3], 10 scope:SCOPE_SE nv th:TH_LOAD_NT
+// GFX1250: encoding: [0x01,0x01,0xb0,0xf4,0x0a,0x00,0x00,0xf8]
+
+s_load_b32 s4, s[2:3], 10 nv scope:SCOPE_SE th:TH_LOAD_NT
+// GFX1250: encoding: [0x01,0x01,0xb0,0xf4,0x0a,0x00,0x00,0xf8]
+
+buffer_load_b32 v5, v1, s[8:11], s3 offen offset:4095 th:TH_LOAD_NT scope:SCOPE_SE nv
+// GFX1250: encoding: [0x83,0x00,0x05,0xc4,0x05,0x10,0x94,0x40,0x01,0xff,0x0f,0x00]
+
+buffer_load_b32 v5, v1, s[8:11], s3 offen offset:4095 scope:SCOPE_SE nv th:TH_LOAD_NT
+// GFX1250: encoding: [0x83,0x00,0x05,0xc4,0x05,0x10,0x94,0x40,0x01,0xff,0x0f,0x00]
+
+buffer_load_b32 v5, v1, s[8:11], s3 offen offset:4095 nv scope:SCOPE_SE th:TH_LOAD_NT
+// GFX1250: encoding: [0x83,0x00,0x05,0xc4,0x05,0x10,0x94,0x40,0x01,0xff,0x0f,0x00]
+
+global_load_b32 v0, v[2:3], off th:TH_LOAD_NT scope:SCOPE_SE nv
+// GFX1250: encoding: [0xfc,0x00,0x05,0xee,0x00,0x00,0x14,0x00,0x02,0x00,0x00,0x00]
+
+global_load_b32 v0, v[2:3], off scope:SCOPE_SE nv th:TH_LOAD_NT
+// GFX1250: encoding: [0xfc,0x00,0x05,0xee,0x00,0x00,0x14,0x00,0x02,0x00,0x00,0x00]
+
+global_load_b32 v0, v[2:3], off nv scope:SCOPE_SE th:TH_LOAD_NT
+// GFX1250: encoding: [0xfc,0x00,0x05,0xee,0x00,0x00,0x14,0x00,0x02,0x00,0x00,0x00]
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_unsupported.s b/llvm/test/MC/AMDGPU/gfx1250_asm_unsupported.s
index 89bd507942a22..7681a32dce7dc 100644
--- a/llvm/test/MC/AMDGPU/gfx1250_asm_unsupported.s
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_unsupported.s
@@ -97,6 +97,20 @@ v_interp_p10_rtz_f16_f32 v0, v1, v2, v3
 v_interp_p2_rtz_f16_f32 v0, v1, v2, v3
 // GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
 
+;; *xf32
+
+v_mfma_f32_16x16x8_xf32 a[0:3], v[2:3], v[4:5], a[2:5]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+v_mfma_f32_16x16x8xf32 a[0:3], v[2:3], v[4:5], a[2:5]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+v_mfma_f32_32x32x4_xf32 a[0:15], v[2:3], v[4:5], a[18:33]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+v_mfma_f32_32x32x4xf32 a[0:15], v[2:3], v[4:5], a[18:33]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
 ;; Export, S_WAIT_EXPCNT and S_WAIT_EVENT
 
 export mrt0 off, off, off, off
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_vbuffer_mubuf.s b/llvm/test/MC/AMDGPU/gfx1250_asm_vbuffer_mubuf.s
index 7a4da255b5594..0b8f190a7ae0d 100644
--- a/llvm/test/MC/AMDGPU/gfx1250_asm_vbuffer_mubuf.s
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_vbuffer_mubuf.s
@@ -1,6 +1,2310 @@
 // RUN: llvm-mc -triple=amdgcn -mcpu=gfx1250 -show-encoding %s | FileCheck --check-prefix=GFX1250 %s
 // RUN: not llvm-mc -triple=amdgcn -mcpu=gfx1200 -show-encoding %s 2>&1 | FileCheck --check-prefix=GFX12-ERR --implicit-check-not=error: --strict-whitespace %s
+buffer_load_b32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_b32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_b32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x05,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_b32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_b32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_b32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_b32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_b32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_b32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_b32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_b32 v5, off, s[8:11], s3
+// GFX1250: buffer_load_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_b32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_b32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_b32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b64 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_b64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b64 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_b64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x05,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b64 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_b64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b64 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_b64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b64 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_b64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b64 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_b64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_b64 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_load_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_b64 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_b64 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_b64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b96 v[6:8], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_b96 v[6:8], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b96 v[252:254], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_b96 v[252:254], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x05,0xc4,0xfc,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b96 v[6:8], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_b96 v[6:8], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b96 v[6:8], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_b96 v[6:8], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b96 v[6:8], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_b96 v[6:8], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b96 v[6:8], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_b96 v[6:8], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b96 v[6:8], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_b96 v[6:8], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_b96 v[6:8], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_b96 v[6:8], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_b96 v[6:8], off, s[8:11], s3
+// GFX1250: buffer_load_b96 v[6:8], off, s[8:11], s3 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_b96 v[6:8], off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_b96 v[6:8], off, s[8:11], s3 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_b96 v[6:8], off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_b96 v[6:8], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_b96 v[6:8], off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_b96 v[6:8], off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b96 v[6:8], off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_b96 v[6:8], off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b128 v[6:9], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_b128 v[6:9], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b128 v[252:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_b128 v[252:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x05,0xc4,0xfc,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b128 v[6:9], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_b128 v[6:9], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b128 v[6:9], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_b128 v[6:9], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b128 v[6:9], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_b128 v[6:9], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b128 v[6:9], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_b128 v[6:9], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b128 v[6:9], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_b128 v[6:9], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_b128 v[6:9], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_b128 v[6:9], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_b128 v[6:9], off, s[8:11], s3
+// GFX1250: buffer_load_b128 v[6:9], off, s[8:11], s3 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_b128 v[6:9], off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_b128 v[6:9], off, s[8:11], s3 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_b128 v[6:9], off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_b128 v[6:9], off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_b128 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_b128 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_b128 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_b128 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_b16 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_d16_b16 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_b16 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_d16_b16 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x08,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_b16 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_d16_b16 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_b16 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_d16_b16 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_b16 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_d16_b16 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_b16 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_d16_b16 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_b16 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_d16_b16 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_b16 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_d16_b16 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_b16 v5, off, s[8:11], s3
+// GFX1250: buffer_load_d16_b16 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_d16_b16 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_d16_b16 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_d16_b16 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_d16_b16 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_d16_b16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_d16_b16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_b16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_d16_b16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_b16 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_b16 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_d16_hi_b16 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x08,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_b16 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_d16_hi_b16 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_b16 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_d16_hi_b16 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_b16 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_b16 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_b16 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_d16_hi_b16 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_b16 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_d16_hi_b16 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_b16 v5, off, s[8:11], s3
+// GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_d16_hi_b16 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_d16_hi_b16 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_d16_hi_b16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_b16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_i8 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_i8 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_d16_hi_i8 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x08,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_i8 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_d16_hi_i8 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_i8 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_d16_hi_i8 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_i8 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_i8 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_i8 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_d16_hi_i8 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_i8 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_d16_hi_i8 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_i8 v5, off, s[8:11], s3
+// GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_d16_hi_i8 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_d16_hi_i8 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_d16_hi_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_u8 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_u8 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_d16_hi_u8 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x08,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_u8 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_d16_hi_u8 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_u8 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_d16_hi_u8 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_u8 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_u8 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_u8 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_d16_hi_u8 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_u8 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_d16_hi_u8 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_u8 v5, off, s[8:11], s3
+// GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_d16_hi_u8 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_d16_hi_u8 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_d16_hi_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_hi_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_i8 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_d16_i8 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_i8 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_d16_i8 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x07,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_i8 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_d16_i8 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_i8 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_d16_i8 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_i8 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_d16_i8 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_i8 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_d16_i8 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_i8 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_d16_i8 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_i8 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_d16_i8 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_i8 v5, off, s[8:11], s3
+// GFX1250: buffer_load_d16_i8 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_d16_i8 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_d16_i8 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_d16_i8 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_d16_i8 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_d16_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_d16_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_d16_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_u8 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_d16_u8 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_u8 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_d16_u8 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x07,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_u8 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_d16_u8 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_u8 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_d16_u8 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_u8 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_d16_u8 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_u8 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_d16_u8 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_u8 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_d16_u8 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_u8 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_d16_u8 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_u8 v5, off, s[8:11], s3
+// GFX1250: buffer_load_d16_u8 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_d16_u8 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_d16_u8 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_d16_u8 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_d16_u8 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_d16_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_d16_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_d16_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_d16_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i8 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_i8 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i8 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_i8 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x04,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i8 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_i8 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i8 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_i8 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i8 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_i8 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i8 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_i8 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i8 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_i8 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_i8 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_i8 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_i8 v5, off, s[8:11], s3
+// GFX1250: buffer_load_i8 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_i8 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_i8 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_i8 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_i8 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i16 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_i16 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i16 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_i16 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x04,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i16 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_i16 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i16 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_i16 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i16 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_i16 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i16 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_i16 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i16 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_i16 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_i16 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_i16 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_i16 v5, off, s[8:11], s3
+// GFX1250: buffer_load_i16 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_i16 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_i16 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_i16 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_i16 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_i16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_i16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_i16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_i16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u8 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_u8 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u8 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_u8 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x04,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u8 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_u8 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u8 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_u8 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u8 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_u8 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u8 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_u8 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u8 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_u8 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_u8 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_u8 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_u8 v5, off, s[8:11], s3
+// GFX1250: buffer_load_u8 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_u8 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_u8 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_u8 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_u8 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u16 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_u16 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u16 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_load_u16 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x04,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u16 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_load_u16 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u16 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_load_u16 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u16 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_load_u16 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u16 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_load_u16 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u16 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_load_u16 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_load_u16 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_load_u16 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_load_u16 v5, off, s[8:11], s3
+// GFX1250: buffer_load_u16 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_u16 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_load_u16 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_load_u16 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_load_u16 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_load_u16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_load_u16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_load_u16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_load_u16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b8 v1, off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_b8 v1, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b8 v255, off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_b8 v255, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x00,0x06,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b8 v1, off, s[16:19], s4 offset:8388607
+// GFX1250: buffer_store_b8 v1, off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b8 v1, off, s[96:99], s4 offset:8388607
+// GFX1250: buffer_store_b8 v1, off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b8 v1, off, s[12:15], s101 offset:8388607
+// GFX1250: buffer_store_b8 v1, off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b8 v1, off, s[12:15], m0 offset:8388607
+// GFX1250: buffer_store_b8 v1, off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b8 v1, v0, s[12:15], s4 idxen offset:8388607
+// GFX1250: buffer_store_b8 v1, v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_store_b8 v1, v0, s[12:15], s4 offen offset:8388607
+// GFX1250: buffer_store_b8 v1, v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_store_b8 v1, off, s[12:15], s4
+// GFX1250: buffer_store_b8 v1, off, s[12:15], s4 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_b8 v1, off, s[12:15], s4 offset:0
+// GFX1250: buffer_store_b8 v1, off, s[12:15], s4 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_b8 v1, off, s[12:15], s4 offset:7
+// GFX1250: buffer_store_b8 v1, off, s[12:15], s4 offset:7 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_store_b8 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_store_b8 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b8 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_store_b8 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b16 v1, off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_b16 v1, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b16 v255, off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_b16 v255, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x40,0x06,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b16 v1, off, s[16:19], s4 offset:8388607
+// GFX1250: buffer_store_b16 v1, off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b16 v1, off, s[96:99], s4 offset:8388607
+// GFX1250: buffer_store_b16 v1, off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b16 v1, off, s[12:15], s101 offset:8388607
+// GFX1250: buffer_store_b16 v1, off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b16 v1, off, s[12:15], m0 offset:8388607
+// GFX1250: buffer_store_b16 v1, off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b16 v1, v0, s[12:15], s4 idxen offset:8388607
+// GFX1250: buffer_store_b16 v1, v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_store_b16 v1, v0, s[12:15], s4 offen offset:8388607
+// GFX1250: buffer_store_b16 v1, v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_store_b16 v1, off, s[12:15], s4
+// GFX1250: buffer_store_b16 v1, off, s[12:15], s4 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_b16 v1, off, s[12:15], s4 offset:0
+// GFX1250: buffer_store_b16 v1, off, s[12:15], s4 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_b16 v1, off, s[12:15], s4 offset:7
+// GFX1250: buffer_store_b16 v1, off, s[12:15], s4 offset:7 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_store_b16 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_store_b16 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b16 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_store_b16 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b32 v1, off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_b32 v1, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b32 v255, off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_b32 v255, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x80,0x06,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b32 v1, off, s[16:19], s4 offset:8388607
+// GFX1250: buffer_store_b32 v1, off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b32 v1, off, s[96:99], s4 offset:8388607
+// GFX1250: buffer_store_b32 v1, off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b32 v1, off, s[12:15], s101 offset:8388607
+// GFX1250: buffer_store_b32 v1, off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b32 v1, off, s[12:15], m0 offset:8388607
+// GFX1250: buffer_store_b32 v1, off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b32 v1, v0, s[12:15], s4 idxen offset:8388607
+// GFX1250: buffer_store_b32 v1, v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_store_b32 v1, v0, s[12:15], s4 offen offset:8388607
+// GFX1250: buffer_store_b32 v1, v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_store_b32 v1, off, s[12:15], s4
+// GFX1250: buffer_store_b32 v1, off, s[12:15], s4 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_b32 v1, off, s[12:15], s4 offset:0
+// GFX1250: buffer_store_b32 v1, off, s[12:15], s4 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_b32 v1, off, s[12:15], s4 offset:7
+// GFX1250: buffer_store_b32 v1, off, s[12:15], s4 offset:7 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_store_b32 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_store_b32 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b32 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_store_b32 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b64 v[2:3], off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_b64 v[2:3], off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b64 v[254:255], off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_b64 v[254:255], off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0xc0,0x06,0xc4,0xfe,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b64 v[2:3], off, s[16:19], s4 offset:8388607
+// GFX1250: buffer_store_b64 v[2:3], off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b64 v[2:3], off, s[96:99], s4 offset:8388607
+// GFX1250: buffer_store_b64 v[2:3], off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b64 v[2:3], off, s[12:15], s101 offset:8388607
+// GFX1250: buffer_store_b64 v[2:3], off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b64 v[2:3], off, s[12:15], m0 offset:8388607
+// GFX1250: buffer_store_b64 v[2:3], off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b64 v[2:3], v0, s[12:15], s4 idxen offset:8388607
+// GFX1250: buffer_store_b64 v[2:3], v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_store_b64 v[2:3], v0, s[12:15], s4 offen offset:8388607
+// GFX1250: buffer_store_b64 v[2:3], v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_store_b64 v[2:3], off, s[12:15], s4
+// GFX1250: buffer_store_b64 v[2:3], off, s[12:15], s4 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_b64 v[2:3], off, s[12:15], s4 offset:0
+// GFX1250: buffer_store_b64 v[2:3], off, s[12:15], s4 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_b64 v[2:3], off, s[12:15], s4 offset:7
+// GFX1250: buffer_store_b64 v[2:3], off, s[12:15], s4 offset:7 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_store_b64 v[2:3], off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_store_b64 v[2:3], off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b64 v[2:3], off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_store_b64 v[2:3], off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b96 v[2:4], off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_b96 v[2:4], off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b96 v[252:254], off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_b96 v[252:254], off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x00,0x07,0xc4,0xfc,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b96 v[2:4], off, s[16:19], s4 offset:8388607
+// GFX1250: buffer_store_b96 v[2:4], off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b96 v[2:4], off, s[96:99], s4 offset:8388607
+// GFX1250: buffer_store_b96 v[2:4], off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b96 v[2:4], off, s[12:15], s101 offset:8388607
+// GFX1250: buffer_store_b96 v[2:4], off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b96 v[2:4], off, s[12:15], m0 offset:8388607
+// GFX1250: buffer_store_b96 v[2:4], off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b96 v[2:4], v0, s[12:15], s4 idxen offset:8388607
+// GFX1250: buffer_store_b96 v[2:4], v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_store_b96 v[2:4], v0, s[12:15], s4 offen offset:8388607
+// GFX1250: buffer_store_b96 v[2:4], v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_store_b96 v[2:4], off, s[12:15], s4
+// GFX1250: buffer_store_b96 v[2:4], off, s[12:15], s4 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_b96 v[2:4], off, s[12:15], s4 offset:0
+// GFX1250: buffer_store_b96 v[2:4], off, s[12:15], s4 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_b96 v[2:4], off, s[12:15], s4 offset:7
+// GFX1250: buffer_store_b96 v[2:4], off, s[12:15], s4 offset:7 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_store_b96 v[2:4], off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_store_b96 v[2:4], off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b96 v[2:4], off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_store_b96 v[2:4], off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b128 v[2:5], off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_b128 v[2:5], off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b128 v[252:255], off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_b128 v[252:255], off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x40,0x07,0xc4,0xfc,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b128 v[2:5], off, s[16:19], s4 offset:8388607
+// GFX1250: buffer_store_b128 v[2:5], off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b128 v[2:5], off, s[96:99], s4 offset:8388607
+// GFX1250: buffer_store_b128 v[2:5], off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b128 v[2:5], off, s[12:15], s101 offset:8388607
+// GFX1250: buffer_store_b128 v[2:5], off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b128 v[2:5], off, s[12:15], m0 offset:8388607
+// GFX1250: buffer_store_b128 v[2:5], off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b128 v[2:5], v0, s[12:15], s4 idxen offset:8388607
+// GFX1250: buffer_store_b128 v[2:5], v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_store_b128 v[2:5], v0, s[12:15], s4 offen offset:8388607
+// GFX1250: buffer_store_b128 v[2:5], v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_store_b128 v[2:5], off, s[12:15], s4
+// GFX1250: buffer_store_b128 v[2:5], off, s[12:15], s4 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_b128 v[2:5], off, s[12:15], s4 offset:0
+// GFX1250: buffer_store_b128 v[2:5], off, s[12:15], s4 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_b128 v[2:5], off, s[12:15], s4 offset:7
+// GFX1250: buffer_store_b128 v[2:5], off, s[12:15], s4 offset:7 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_store_b128 v[2:5], off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_store_b128 v[2:5], off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_b128 v[2:5], off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_store_b128 v[2:5], off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b8 v1, off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b8 v255, off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_d16_hi_b8 v255, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x00,0x09,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b8 v1, off, s[16:19], s4 offset:8388607
+// GFX1250: buffer_store_d16_hi_b8 v1, off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b8 v1, off, s[96:99], s4 offset:8388607
+// GFX1250: buffer_store_d16_hi_b8 v1, off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b8 v1, off, s[12:15], s101 offset:8388607
+// GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b8 v1, off, s[12:15], m0 offset:8388607
+// GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b8 v1, v0, s[12:15], s4 idxen offset:8388607
+// GFX1250: buffer_store_d16_hi_b8 v1, v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b8 v1, v0, s[12:15], s4 offen offset:8388607
+// GFX1250: buffer_store_d16_hi_b8 v1, v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b8 v1, off, s[12:15], s4
+// GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], s4 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_d16_hi_b8 v1, off, s[12:15], s4 offset:0
+// GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], s4 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_d16_hi_b8 v1, off, s[12:15], s4 offset:7
+// GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], s4 offset:7 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_store_d16_hi_b8 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b8 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b16 v1, off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b16 v255, off, s[12:15], s4 offset:8388607
+// GFX1250: buffer_store_d16_hi_b16 v255, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x40,0x09,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b16 v1, off, s[16:19], s4 offset:8388607
+// GFX1250: buffer_store_d16_hi_b16 v1, off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b16 v1, off, s[96:99], s4 offset:8388607
+// GFX1250: buffer_store_d16_hi_b16 v1, off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b16 v1, off, s[12:15], s101 offset:8388607
+// GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b16 v1, off, s[12:15], m0 offset:8388607
+// GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b16 v1, v0, s[12:15], s4 idxen offset:8388607
+// GFX1250: buffer_store_d16_hi_b16 v1, v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b16 v1, v0, s[12:15], s4 offen offset:8388607
+// GFX1250: buffer_store_d16_hi_b16 v1, v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b16 v1, off, s[12:15], s4
+// GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], s4 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_d16_hi_b16 v1, off, s[12:15], s4 offset:0
+// GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], s4 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_store_d16_hi_b16 v1, off, s[12:15], s4 offset:7
+// GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], s4 offset:7 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_store_d16_hi_b16 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV
+// GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_store_d16_hi_b16 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS
+// GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_f16 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_pk_add_f16 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x16,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_f16 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_pk_add_f16 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_f16 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_pk_add_f16 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_f16 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_f16 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_f16 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_pk_add_f16 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_f16 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_pk_add_f16 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_f16 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_bf16 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_pk_add_bf16 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x16,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_bf16 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_bf16 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_bf16 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_bf16 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_bf16 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_pk_add_bf16 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_bf16 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_pk_add_bf16 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_f32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_add_f32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x15,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_f32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_add_f32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_f32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_add_f32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_f32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_f32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_f32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_add_f32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_f32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_add_f32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_f32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_add_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x0d,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_add_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_add_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_add_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_add_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u64 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_add_u64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x10,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u64 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_add_u64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u64 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_add_u64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u64 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u64 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_add_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_add_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u64 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_and_b32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x0f,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_and_b32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_and_b32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_and_b32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_and_b32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b64 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_and_b64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x12,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b64 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_and_b64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b64 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_and_b64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b64 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b64 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_and_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_and_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b64 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b32 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b32 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x0d,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b32 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b32 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b32 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b32 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b32 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b32 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b64 v[252:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b64 v[252:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x10,0xc4,0xfc,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b64 v[6:9], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b64 v[6:9], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b64 v[6:9], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b64 v[6:9], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b64 v[6:9], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_cmpswap_b64 v[6:9], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3
+// GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v255, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_sub_clamp_u32 v255, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0xff,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v255, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_sub_clamp_u32 v255, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0xff,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v255, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_sub_clamp_u32 v255, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0xff,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[12:15], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[12:15], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x18,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[12:15], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[12:15], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x18,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[12:15], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[12:15], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[96:99], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[96:99], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0xc0,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[96:99], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[96:99], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0xc0,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[96:99], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[96:99], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0xc0,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s101 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s101 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x65,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s101 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s101 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x65,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s101 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s101 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x65,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], m0 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], m0 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x7d,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], m0 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], m0 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x7d,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], m0 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], m0 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x7d,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 idxen offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 idxen offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 idxen offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 idxen offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 idxen offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 idxen offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 offen offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 offen offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 offen offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 offen offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 offen offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 offen offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:0 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:0 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:0 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:7 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:7 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:7 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:7 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:7 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:7 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cond_sub_u32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_cond_sub_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x14,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cond_sub_u32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cond_sub_u32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cond_sub_u32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cond_sub_u32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cond_sub_u32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_cond_sub_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cond_sub_u32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_cond_sub_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_dec_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x10,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_dec_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_dec_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_dec_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_dec_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u64 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_dec_u64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x13,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u64 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u64 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u64 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u64 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_dec_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_dec_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_inc_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0f,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_inc_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_inc_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_inc_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_inc_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u64 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_inc_u64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x13,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u64 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u64 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u64 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u64 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_inc_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_inc_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_num_f32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_max_num_f32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x14,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_num_f32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_max_num_f32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_num_f32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_max_num_f32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_num_f32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_num_f32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_num_f32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_max_num_f32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_num_f32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_max_num_f32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_num_f32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_max_i32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x0e,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_max_i32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_max_i32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_max_i32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_max_i32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i64 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_max_i64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x11,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i64 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_max_i64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i64 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_max_i64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i64 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i64 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i64 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_max_i64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i64 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_max_i64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i64 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_max_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0e,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_max_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_max_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_max_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_max_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u64 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_max_u64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x12,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u64 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_max_u64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u64 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_max_u64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u64 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u64 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_max_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_max_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u64 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_num_f32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_min_num_f32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x14,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_num_f32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_min_num_f32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_num_f32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_min_num_f32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_num_f32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_num_f32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_num_f32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_min_num_f32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_num_f32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_min_num_f32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_num_f32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_min_i32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x0e,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_min_i32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_min_i32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_min_i32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_min_i32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i64 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_min_i64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x11,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i64 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_min_i64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i64 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_min_i64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i64 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i64 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i64 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_min_i64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i64 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_min_i64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i64 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_min_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x0e,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_min_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_min_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_min_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_min_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u64 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_min_u64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x11,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u64 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_min_u64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u64 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_min_u64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u64 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u64 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_min_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_min_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u64 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_or_b32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x0f,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_or_b32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_or_b32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_or_b32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_or_b32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b64 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_or_b64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x12,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b64 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_or_b64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b64 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_or_b64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b64 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b64 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_or_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_or_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b64 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_sub_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x0d,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_sub_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_sub_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_sub_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_sub_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u64 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_sub_u64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x11,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u64 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u64 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u64 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u64 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_sub_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_sub_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_swap_b32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0c,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_swap_b32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_swap_b32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_swap_b32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_swap_b32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b64 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_swap_b64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x10,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b64 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b64 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b64 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b64 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_swap_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_swap_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b32 v255, off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_xor_b32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x0f,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b32 v5, off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_xor_b32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b32 v5, off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_xor_b32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b32 v5, off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b32 v5, off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b32 v5, v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_xor_b32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b32 v5, v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_xor_b32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b32 v5, off, s[8:11], s3
+// GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b64 v[254:255], off, s[8:11], s3 offset:8388607
+// GFX1250: buffer_atomic_xor_b64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x12,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b64 v[6:7], off, s[12:15], s3 offset:8388607
+// GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b64 v[6:7], off, s[96:99], s3 offset:8388607
+// GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b64 v[6:7], off, s[8:11], s101 offset:8388607
+// GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b64 v[6:7], off, s[8:11], m0 offset:8388607
+// GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607
+// GFX1250: buffer_atomic_xor_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607
+// GFX1250: buffer_atomic_xor_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3
+// GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:0
+// GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+
+buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:7
+// GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+
+buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN
+// GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RT_RETURN scope:SCOPE_SE
+// GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+
+buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV
+// GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+
buffer_load_b32 v5, v1, s[8:11], s3 offen offset:4095 nv
// GFX1250: buffer_load_b32 v5, v1, s[8:11], s3 offen offset:4095 nv ; encoding: [0x83,0x00,0x05,0xc4,0x05,0x10,0x80,0x40,0x01,0xff,0x0f,0x00]
// GFX12-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: nv is not supported on this GPU
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_vop3_err.s b/llvm/test/MC/AMDGPU/gfx1250_asm_vop3_err.s
index c5bd00c004a43..e87943224e8f5 100644
--- a/llvm/test/MC/AMDGPU/gfx1250_asm_vop3_err.s
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_vop3_err.s
@@ -5,6 +5,76 @@ v_lshl_add_u64 v[2:3], v[4:5], v7, v[8:9] dpp8:[7,6,5,4,3,2,1,0]
// GFX125X-ERR-NEXT:{{^}}v_lshl_add_u64 v[2:3], v[4:5], v7, v[8:9] dpp8:[7,6,5,4,3,2,1,0]
// GFX125X-ERR-NEXT:{{^}} ^
+v_fma_f64 v[4:5], v[2:3], v[6:7], v[8:9] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_fma_f64 v[4:5], v[2:3], v[6:7], v[8:9] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_div_fixup_f64 v[4:5], v[2:3], v[6:7], v[8:9] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_div_fixup_f64 v[4:5], v[2:3], v[6:7], v[8:9] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_div_fmas_f64 v[4:5], v[2:3], v[6:7], v[8:9] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_div_fmas_f64 v[4:5], v[2:3], v[6:7], v[8:9] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_div_scale_f64 v[4:5], s2, v[2:3], v[6:7], v[8:9] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_div_scale_f64 v[4:5], s2, v[2:3], v[6:7], v[8:9] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_mad_co_u64_u32 v[4:5], s2, v2, v6, v[8:9] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_mad_co_u64_u32 v[4:5], s2, v2, v6, v[8:9] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_mad_co_i64_i32 v[4:5], s2, v2, v6, v[8:9] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_mad_co_i64_i32 v[4:5], s2, v2, v6, v[8:9] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_minimum_f64 v[4:5], v[2:3], v[6:7] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_minimum_f64 v[4:5], v[2:3], v[6:7] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_maximum_f64 v[4:5], v[2:3], v[6:7] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_maximum_f64 v[4:5], v[2:3], v[6:7] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_ldexp_f64 v[4:5], v[2:3], v6 dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_ldexp_f64 v[4:5], v[2:3], v6 dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_mul_lo_u32 v4, v2, v6 dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_mul_lo_u32 v4, v2, v6 dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_mul_hi_u32 v4, v2, v6 dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_mul_hi_u32 v4, v2, v6 dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_mul_hi_i32 v4, v2, v6 dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_mul_hi_i32 v4, v2, v6 dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_lshrrev_b64 v[4:5], v2, v[6:7] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_lshrrev_b64 v[4:5], v2, v[6:7] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_ashrrev_i64 v[4:5], v2, v[6:7] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_ashrrev_i64 v[4:5], v2, v[6:7] dpp8:[7,6,5,4,3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
v_mad_u32 v2, v4, v7, v8 dpp8:[7,6,5,4,3,2,1,0]
// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
// GFX125X-ERR-NEXT:{{^}}v_mad_u32 v2, v4, v7, v8 dpp8:[7,6,5,4,3,2,1,0]
@@ -42,9 +112,94 @@ v_mad_nc_i64_i32 v[4:5], v2, v5, v[6:7] dpp8:[7,6,5,4,3,2,1,0]
v_lshl_add_u64 v[2:3], v[4:5], v7, v[8:9] quad_perm:[3,2,1,0]
// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
// GFX125X-ERR-NEXT:{{^}}v_lshl_add_u64 v[2:3], v[4:5], v7, v[8:9] quad_perm:[3,2,1,0]
// GFX125X-ERR-NEXT:{{^}} ^
+v_fma_f64 v[4:5], v[2:3], v[6:7], v[8:9] quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_fma_f64 v[4:5], v[2:3], v[6:7], v[8:9] quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_div_fixup_f64 v[4:5], v[2:3], v[6:7], v[8:9] quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_div_fixup_f64 v[4:5], v[2:3], v[6:7], v[8:9] quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_div_fmas_f64 v[4:5], v[2:3], v[6:7], v[8:9] quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_div_fmas_f64 v[4:5], v[2:3], v[6:7], v[8:9] quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_div_scale_f64 v[4:5], s2, v[2:3], v[6:7], v[8:9] quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_div_scale_f64 v[4:5], s2, v[2:3], v[6:7], v[8:9] quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_mad_co_u64_u32 v[4:5], s2, v2, v6, v[8:9] quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_mad_co_u64_u32 v[4:5], s2, v2, v6, v[8:9] quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_mad_co_i64_i32 v[4:5], s2, v2, v6, v[8:9] quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_mad_co_i64_i32 v[4:5], s2, v2, v6, v[8:9] quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_minimum_f64 v[4:5], v[2:3], v[6:7] quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_minimum_f64 v[4:5], v[2:3], v[6:7] quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_maximum_f64 v[4:5], v[2:3], v[6:7] quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_maximum_f64 v[4:5], v[2:3], v[6:7] quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_ldexp_f64 v[4:5], v[2:3], v6 quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_ldexp_f64 v[4:5], v[2:3], v6 quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_mul_lo_u32 v4, v2, v6 quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_mul_lo_u32 v4, v2, v6 quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_mul_hi_u32 v4, v2, v6 quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_mul_hi_u32 v4, v2, v6 quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_mul_hi_i32 v4, v2, v6 quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_mul_hi_i32 v4, v2, v6 quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_lshrrev_b64 v[4:5], v2, v[6:7] quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_lshrrev_b64 v[4:5], v2, v[6:7] quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
+v_ashrrev_i64 v[4:5], v2, v[6:7] quad_perm:[3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
+// GFX125X-ERR-NEXT:{{^}}v_ashrrev_i64 v[4:5], v2, v[6:7] quad_perm:[3,2,1,0]
+// GFX125X-ERR-NEXT:{{^}} ^
+
v_mad_u32 v2, v4, v7, v8 quad_perm:[3,2,1,0]
// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
// GFX1251-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: DP ALU dpp only supports row_share
@@ -87,6 +242,11 @@ v_mad_nc_i64_i32 v[4:5], v2, v5, v[6:7] quad_perm:[3,2,1,0]
// GFX125X-ERR-NEXT:{{^}}v_mad_nc_i64_i32 v[4:5], v2, v5, v[6:7] quad_perm:[3,2,1,0]
// GFX125X-ERR-NEXT:{{^}} ^
+v_trig_preop_f64 v[4:5], v[8:9], v2 row_share:1
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX125X-ERR-NEXT:{{^}}v_trig_preop_f64 v[4:5], v[8:9], v2 row_share:1
+// GFX125X-ERR-NEXT:{{^}} ^
+
v_ashr_pk_i8_i32 v1, v2, v3, v4 clamp
// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid operand for instruction
// GFX125X-ERR-NEXT:{{^}}v_ashr_pk_i8_i32 v1, v2, v3, v4 clamp
@@ -161,3 +321,8 @@ v_cvt_scale_pk8_f32_fp4 v[10:17], s20, v8
// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid operand for instruction
// GFX125X-ERR-NEXT:{{^}}v_cvt_scale_pk8_f32_fp4 v[10:17], s20, v8
// GFX125X-ERR-NEXT:{{^}} ^
+
+v_cvt_scale_pk16_bf16_bf6 v[10:17], s[20:22], 0xcf00
+// GFX125X-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid operand for instruction
+// GFX125X-ERR-NEXT:{{^}}v_cvt_scale_pk16_bf16_bf6 v[10:17], s[20:22], 0xcf00
+// GFX125X-ERR-NEXT:{{^}} ^
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_vop3_from_vop2_err.s b/llvm/test/MC/AMDGPU/gfx1250_asm_vop3_from_vop2_err.s
new file mode 100644
index 0000000000000..157b4d6af9c05
--- /dev/null
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_vop3_from_vop2_err.s
@@ -0,0 +1,13 @@
+// RUN: not llvm-mc -triple=amdgcn -mcpu=gfx1250 -show-encoding %s 2>&1 | FileCheck --check-prefix=GFX1250-ERR --implicit-check-not=error: --strict-whitespace %s
+
+v_fmaak_f32_e64_dpp v4, v2, v6, 3 row_share:1
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: e64_dpp variant of this instruction is not supported
+
+v_fmamk_f32_e64_dpp v4, v2, 3, v6 row_share:1
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: e64_dpp variant of this instruction is not supported
+
+v_fmaak_f16_e64_dpp v4, v2, v6, 3 row_share:1
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: e64_dpp variant of this instruction is not supported
+
+v_fmamk_f16_e64_dpp v4, v2, 3, v6 row_share:1
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: e64_dpp variant of this instruction is not supported
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_vop3cx.s b/llvm/test/MC/AMDGPU/gfx1250_asm_vop3cx.s
new file mode 100644
index 0000000000000..4aea7b32f13e5
--- /dev/null
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_vop3cx.s
@@ -0,0 +1,3413 @@
+// NOTE: Assertions have been autogenerated by utils/update_mc_test_checks.py UTC_ARGS: --version 5
+// RUN: llvm-mc -triple=amdgcn -mcpu=gfx1250 -show-encoding < %s | FileCheck --check-prefix=GFX1250 %s
+
+v_cmpx_class_f16_e64 v1, v2
+// GFX1250: v_cmpx_class_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0xfd,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_class_f16_e64 v255, v2
+// GFX1250: v_cmpx_class_f16_e64 v255, v2 ; encoding: [0x7e,0x00,0xfd,0xd4,0xff,0x05,0x02,0x00]
+
+v_cmpx_class_f16_e64 s1, v2
+// GFX1250: v_cmpx_class_f16_e64 s1, v2 ; encoding: [0x7e,0x00,0xfd,0xd4,0x01,0x04,0x02,0x00]
+
+v_cmpx_class_f16_e64 s105, v255
+// GFX1250: v_cmpx_class_f16_e64 s105, v255 ; encoding: [0x7e,0x00,0xfd,0xd4,0x69,0xfe,0x03,0x00]
+
+v_cmpx_class_f16_e64 vcc_lo, s2
+// GFX1250: v_cmpx_class_f16_e64 vcc_lo, s2 ; encoding: [0x7e,0x00,0xfd,0xd4,0x6a,0x04,0x00,0x00]
+
+v_cmpx_class_f16_e64 vcc_hi, s105
+// GFX1250: v_cmpx_class_f16_e64 vcc_hi, s105 ; encoding: [0x7e,0x00,0xfd,0xd4,0x6b,0xd2,0x00,0x00]
+
+v_cmpx_class_f16_e64 ttmp15, ttmp15
+// GFX1250: v_cmpx_class_f16_e64 ttmp15, ttmp15 ; encoding: [0x7e,0x00,0xfd,0xd4,0x7b,0xf6,0x00,0x00]
+
+v_cmpx_class_f16_e64 m0, src_scc
+// GFX1250: v_cmpx_class_f16_e64 m0, src_scc ; encoding: [0x7e,0x00,0xfd,0xd4,0x7d,0xfa,0x01,0x00]
+
+v_cmpx_class_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_class_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xfd,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_class_f16_e64 exec_hi, null
+// GFX1250: v_cmpx_class_f16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xfd,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_class_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_class_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xfd,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_class_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_class_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xfd,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_class_f16_e64 0.5, m0
+// GFX1250: v_cmpx_class_f16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xfd,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_class_f16_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_class_f16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xfd,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_class_f16_e64 -|0xfe0b|, vcc_hi
+// GFX1250: v_cmpx_class_f16_e64 -|0xfe0b|, vcc_hi ; encoding: [0x7e,0x01,0xfd,0xd4,0xff,0xd6,0x00,0x20,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_class_f32_e64 v1, v2
+// GFX1250: v_cmpx_class_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0xfe,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_class_f32_e64 v255, v255
+// GFX1250: v_cmpx_class_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0xfe,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_class_f32_e64 s1, s2
+// GFX1250: v_cmpx_class_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0xfe,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_class_f32_e64 s105, s105
+// GFX1250: v_cmpx_class_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0xfe,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_class_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_class_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xfe,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_class_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_class_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xfe,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_class_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_class_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xfe,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_class_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_class_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xfe,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_class_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_class_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xfe,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_class_f32_e64 exec_hi, null
+// GFX1250: v_cmpx_class_f32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xfe,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_class_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_class_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xfe,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_class_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_class_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xfe,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_class_f32_e64 0.5, m0
+// GFX1250: v_cmpx_class_f32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xfe,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_class_f32_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_class_f32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xfe,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_class_f32_e64 -|0xaf123456|, vcc_hi
+// GFX1250: v_cmpx_class_f32_e64 -|0xaf123456|, vcc_hi ; encoding: [0x7e,0x01,0xfe,0xd4,0xff,0xd6,0x00,0x20,0x56,0x34,0x12,0xaf]
+
+v_cmpx_class_f64_e64 v[2:3], v2
+// GFX1250: v_cmpx_class_f64_e64 v[2:3], v2 ; encoding: [0x7e,0x00,0xff,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_class_f64_e64 v[2:3], v255
+// GFX1250: v_cmpx_class_f64_e64 v[2:3], v255 ; encoding: [0x7e,0x00,0xff,0xd4,0x02,0xff,0x03,0x00]
+
+v_cmpx_class_f64_e64 v[2:3], s2
+// GFX1250: v_cmpx_class_f64_e64 v[2:3], s2 ; encoding: [0x7e,0x00,0xff,0xd4,0x02,0x05,0x00,0x00]
+
+v_cmpx_class_f64_e64 v[2:3], s105
+// GFX1250: v_cmpx_class_f64_e64 v[2:3], s105 ; encoding: [0x7e,0x00,0xff,0xd4,0x02,0xd3,0x00,0x00]
+
+v_cmpx_class_f64_e64 v[254:255], ttmp15
+// GFX1250: v_cmpx_class_f64_e64 v[254:255], ttmp15 ; encoding: [0x7e,0x00,0xff,0xd4,0xfe,0xf7,0x00,0x00]
+
+v_cmpx_class_f64_e64 s[2:3], vcc_hi
+// GFX1250: v_cmpx_class_f64_e64 s[2:3], vcc_hi ; encoding: [0x7e,0x00,0xff,0xd4,0x02,0xd6,0x00,0x00]
+
+v_cmpx_class_f64_e64 s[104:105], vcc_lo
+// GFX1250: v_cmpx_class_f64_e64 s[104:105], vcc_lo ; encoding: [0x7e,0x00,0xff,0xd4,0x68,0xd4,0x00,0x00]
+
+v_cmpx_class_f64_e64 vcc, m0
+// GFX1250: v_cmpx_class_f64_e64 vcc, m0 ; encoding: [0x7e,0x00,0xff,0xd4,0x6a,0xfa,0x00,0x00]
+
+v_cmpx_class_f64_e64 ttmp[14:15], exec_hi
+// GFX1250: v_cmpx_class_f64_e64 ttmp[14:15], exec_hi ; encoding: [0x7e,0x00,0xff,0xd4,0x7a,0xfe,0x00,0x00]
+
+v_cmpx_class_f64_e64 exec, exec_lo
+// GFX1250: v_cmpx_class_f64_e64 exec, exec_lo ; encoding: [0x7e,0x00,0xff,0xd4,0x7e,0xfc,0x00,0x00]
+
+v_cmpx_class_f64_e64 null, null
+// GFX1250: v_cmpx_class_f64_e64 null, null ; encoding: [0x7e,0x00,0xff,0xd4,0x7c,0xf8,0x00,0x00]
+
+v_cmpx_class_f64_e64 -1, -1
+// GFX1250: v_cmpx_class_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xff,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_class_f64_e64 0.5, 0.5
+// GFX1250: v_cmpx_class_f64_e64 0.5, 0.5 ; encoding: [0x7e,0x00,0xff,0xd4,0xf0,0xe0,0x01,0x00]
+
+v_cmpx_class_f64_e64 -|src_scc|, src_scc
+// GFX1250: v_cmpx_class_f64_e64 -|src_scc|, src_scc ; encoding: [0x7e,0x01,0xff,0xd4,0xfd,0xfa,0x01,0x20]
+
+v_cmpx_class_f64_e64 0xaf123456, 0xaf123456
+// GFX1250: v_cmpx_class_f64_e64 0xaf123456, 0xaf123456 ; encoding: [0x7e,0x00,0xff,0xd4,0xff,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_eq_f16_e64 v1, v2
+// GFX1250: v_cmpx_eq_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x82,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_eq_f16_e64 v255, v255
+// GFX1250: v_cmpx_eq_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x82,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_eq_f16_e64 s1, s2
+// GFX1250: v_cmpx_eq_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x82,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_eq_f16_e64 s105, s105
+// GFX1250: v_cmpx_eq_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x82,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_eq_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_eq_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x82,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_eq_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_eq_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x82,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_eq_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_eq_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x82,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_eq_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_eq_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x82,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_eq_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_eq_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x82,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_eq_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_eq_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x82,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_eq_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_eq_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x82,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_eq_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_eq_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x82,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_eq_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_eq_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x82,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_eq_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_eq_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x82,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_eq_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_eq_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x82,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_eq_f32_e64 v1, v2
+// GFX1250: v_cmpx_eq_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x92,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_eq_f32_e64 v255, v255
+// GFX1250: v_cmpx_eq_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x92,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_eq_f32_e64 s1, s2
+// GFX1250: v_cmpx_eq_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x92,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_eq_f32_e64 s105, s105
+// GFX1250: v_cmpx_eq_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x92,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_eq_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_eq_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x92,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_eq_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_eq_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x92,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_eq_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_eq_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x92,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_eq_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_eq_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x92,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_eq_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_eq_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x92,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_eq_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_eq_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x92,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_eq_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_eq_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x92,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_eq_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_eq_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x92,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_eq_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_eq_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x92,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_eq_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_eq_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x92,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_eq_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_eq_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x92,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_eq_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_eq_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa2,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_eq_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_eq_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa2,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_eq_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_eq_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa2,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_eq_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_eq_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa2,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_eq_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_eq_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa2,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_eq_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_eq_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa2,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_eq_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_eq_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa2,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_eq_f64_e64 null, 0.5
+// GFX1250: v_cmpx_eq_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa2,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_eq_f64_e64 -1, -1
+// GFX1250: v_cmpx_eq_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa2,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_eq_f64_e64 0.5, null
+// GFX1250: v_cmpx_eq_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa2,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_eq_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_eq_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa2,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_eq_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_eq_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa2,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+v_cmpx_eq_i16_e64 v1, v2
+// GFX1250: v_cmpx_eq_i16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb2,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_eq_i16_e64 v255, v255
+// GFX1250: v_cmpx_eq_i16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb2,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_eq_i16_e64 s1, s2
+// GFX1250: v_cmpx_eq_i16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb2,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_eq_i16_e64 s105, s105
+// GFX1250: v_cmpx_eq_i16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb2,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_eq_i16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_eq_i16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb2,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_eq_i16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_eq_i16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb2,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_eq_i16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_eq_i16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb2,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_eq_i16_e64 m0, 0.5
+// GFX1250: v_cmpx_eq_i16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xb2,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_eq_i16_e64 exec_lo, -1
+// GFX1250: v_cmpx_eq_i16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb2,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_eq_i16_e64 exec_hi, null
+// GFX1250: v_cmpx_eq_i16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb2,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_eq_i16_e64 null, exec_lo
+// GFX1250: v_cmpx_eq_i16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb2,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_eq_i16_e64 -1, exec_hi
+// GFX1250: v_cmpx_eq_i16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb2,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_eq_i16_e64 0.5, m0
+// GFX1250: v_cmpx_eq_i16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xb2,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_eq_i16_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_eq_i16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb2,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_eq_i16_e64 0xfe0b, vcc_hi
+// GFX1250: v_cmpx_eq_i16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb2,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_eq_i32_e64 v1, v2
+// GFX1250: v_cmpx_eq_i32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc2,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_eq_i32_e64 v255, v255
+// GFX1250: v_cmpx_eq_i32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc2,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_eq_i32_e64 s1, s2
+// GFX1250: v_cmpx_eq_i32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc2,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_eq_i32_e64 s105, s105
+// GFX1250: v_cmpx_eq_i32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc2,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_eq_i32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_eq_i32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc2,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_eq_i32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_eq_i32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc2,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_eq_i32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_eq_i32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc2,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_eq_i32_e64 m0, 0.5
+// GFX1250: v_cmpx_eq_i32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc2,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_eq_i32_e64 exec_lo, -1
+// GFX1250: v_cmpx_eq_i32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc2,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_eq_i32_e64 exec_hi, null
+// GFX1250: v_cmpx_eq_i32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc2,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_eq_i32_e64 null, exec_lo
+// GFX1250: v_cmpx_eq_i32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc2,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_eq_i32_e64 -1, exec_hi
+// GFX1250: v_cmpx_eq_i32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc2,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_eq_i32_e64 0.5, m0
+// GFX1250: v_cmpx_eq_i32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc2,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_eq_i32_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_eq_i32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc2,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_eq_i32_e64 0xaf123456, vcc_hi
+// GFX1250: v_cmpx_eq_i32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc2,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_eq_i64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_eq_i64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd2,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_eq_i64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_eq_i64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd2,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_eq_i64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_eq_i64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd2,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_eq_i64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_eq_i64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd2,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_eq_i64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_eq_i64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd2,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_eq_i64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_eq_i64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd2,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_eq_i64_e64 exec, src_scc
+// GFX1250: v_cmpx_eq_i64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd2,0xd4,0x7e,0xfa,0x01,0x00]
+
+v_cmpx_eq_i64_e64 null, 0.5
+// GFX1250: v_cmpx_eq_i64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd2,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_eq_i64_e64 -1, -1
+// GFX1250: v_cmpx_eq_i64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd2,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_eq_i64_e64 0.5, null
+// GFX1250: v_cmpx_eq_i64_e64 0.5, null ; encoding: [0x7e,0x00,0xd2,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_eq_i64_e64 src_scc, exec
+// GFX1250: v_cmpx_eq_i64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd2,0xd4,0xfd,0xfc,0x00,0x00]
+
+v_cmpx_eq_i64_e64 0xaf123456, vcc
+// GFX1250: v_cmpx_eq_i64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd2,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_eq_u16_e64 v1, v2
+// GFX1250: v_cmpx_eq_u16_e64 v1, v2 ; encoding: [0x7e,0x00,0xba,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_eq_u16_e64 v255, v255
+// GFX1250: v_cmpx_eq_u16_e64 v255, v255 ; encoding: [0x7e,0x00,0xba,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_eq_u16_e64 s1, s2
+// GFX1250: v_cmpx_eq_u16_e64 s1, s2 ; encoding: [0x7e,0x00,0xba,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_eq_u16_e64 s105, s105
+// GFX1250: v_cmpx_eq_u16_e64 s105, s105 ; encoding: [0x7e,0x00,0xba,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_eq_u16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_eq_u16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xba,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_eq_u16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_eq_u16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xba,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_eq_u16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_eq_u16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xba,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_eq_u16_e64 m0, 0.5
+// GFX1250: v_cmpx_eq_u16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xba,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_eq_u16_e64 exec_lo, -1
+// GFX1250: v_cmpx_eq_u16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xba,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_eq_u16_e64 exec_hi, null
+// GFX1250: v_cmpx_eq_u16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xba,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_eq_u16_e64 null, exec_lo
+// GFX1250: v_cmpx_eq_u16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xba,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_eq_u16_e64 -1, exec_hi
+// GFX1250: v_cmpx_eq_u16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xba,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_eq_u16_e64 0.5, m0
+// GFX1250: v_cmpx_eq_u16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xba,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_eq_u16_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_eq_u16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xba,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_eq_u16_e64 0xfe0b, vcc_hi
+// GFX1250: v_cmpx_eq_u16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xba,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_eq_u32_e64 v1, v2
+// GFX1250: v_cmpx_eq_u32_e64 v1, v2 ; encoding: [0x7e,0x00,0xca,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_eq_u32_e64 v255, v255
+// GFX1250: v_cmpx_eq_u32_e64 v255, v255 ; encoding: [0x7e,0x00,0xca,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_eq_u32_e64 s1, s2
+// GFX1250: v_cmpx_eq_u32_e64 s1, s2 ; encoding: [0x7e,0x00,0xca,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_eq_u32_e64 s105, s105
+// GFX1250: v_cmpx_eq_u32_e64 s105, s105 ; encoding: [0x7e,0x00,0xca,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_eq_u32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_eq_u32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xca,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_eq_u32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_eq_u32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xca,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_eq_u32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_eq_u32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xca,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_eq_u32_e64 m0, 0.5
+// GFX1250: v_cmpx_eq_u32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xca,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_eq_u32_e64 exec_lo, -1
+// GFX1250: v_cmpx_eq_u32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xca,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_eq_u32_e64 exec_hi, null
+// GFX1250: v_cmpx_eq_u32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xca,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_eq_u32_e64 null, exec_lo
+// GFX1250: v_cmpx_eq_u32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xca,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_eq_u32_e64 -1, exec_hi
+// GFX1250: v_cmpx_eq_u32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xca,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_eq_u32_e64 0.5, m0
+// GFX1250: v_cmpx_eq_u32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xca,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_eq_u32_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_eq_u32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xca,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_eq_u32_e64 0xaf123456, vcc_hi
+// GFX1250: v_cmpx_eq_u32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xca,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_eq_u64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_eq_u64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xda,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_eq_u64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_eq_u64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xda,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_eq_u64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_eq_u64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xda,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_eq_u64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_eq_u64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xda,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_eq_u64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_eq_u64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xda,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_eq_u64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_eq_u64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xda,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_eq_u64_e64 exec, src_scc
+// GFX1250: v_cmpx_eq_u64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xda,0xd4,0x7e,0xfa,0x01,0x00]
+
+v_cmpx_eq_u64_e64 null, 0.5
+// GFX1250: v_cmpx_eq_u64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xda,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_eq_u64_e64 -1, -1
+// GFX1250: v_cmpx_eq_u64_e64 -1, -1 ; encoding: [0x7e,0x00,0xda,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_eq_u64_e64 0.5, null
+// GFX1250: v_cmpx_eq_u64_e64 0.5, null ; encoding: [0x7e,0x00,0xda,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_eq_u64_e64 src_scc, exec
+// GFX1250: v_cmpx_eq_u64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xda,0xd4,0xfd,0xfc,0x00,0x00]
+
+v_cmpx_eq_u64_e64 0xaf123456, vcc
+// GFX1250: v_cmpx_eq_u64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xda,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_ge_f16_e64 v1, v2
+// GFX1250: v_cmpx_ge_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x86,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_ge_f16_e64 v255, v255
+// GFX1250: v_cmpx_ge_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x86,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_ge_f16_e64 s1, s2
+// GFX1250: v_cmpx_ge_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x86,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_ge_f16_e64 s105, s105
+// GFX1250: v_cmpx_ge_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x86,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_ge_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_ge_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x86,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_ge_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_ge_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x86,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_ge_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_ge_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x86,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_ge_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_ge_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x86,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_ge_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_ge_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x86,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_ge_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_ge_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x86,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_ge_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_ge_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x86,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_ge_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_ge_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x86,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_ge_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_ge_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x86,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_ge_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_ge_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x86,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_ge_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_ge_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x86,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_ge_f32_e64 v1, v2
+// GFX1250: v_cmpx_ge_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x96,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_ge_f32_e64 v255, v255
+// GFX1250: v_cmpx_ge_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x96,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_ge_f32_e64 s1, s2
+// GFX1250: v_cmpx_ge_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x96,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_ge_f32_e64 s105, s105
+// GFX1250: v_cmpx_ge_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x96,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_ge_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_ge_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x96,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_ge_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_ge_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x96,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ge_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_ge_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x96,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_ge_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_ge_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x96,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_ge_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_ge_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x96,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_ge_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_ge_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x96,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_ge_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_ge_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x96,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_ge_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_ge_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x96,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_ge_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_ge_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x96,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_ge_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_ge_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x96,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_ge_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_ge_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x96,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ge_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_ge_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa6,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_ge_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_ge_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa6,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_ge_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_ge_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa6,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_ge_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_ge_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa6,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_ge_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_ge_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa6,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_ge_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_ge_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa6,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ge_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_ge_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa6,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_ge_f64_e64 null, 0.5
+// GFX1250: v_cmpx_ge_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa6,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_ge_f64_e64 -1, -1
+// GFX1250: v_cmpx_ge_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa6,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_ge_f64_e64 0.5, null
+// GFX1250: v_cmpx_ge_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa6,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_ge_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_ge_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa6,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_ge_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_ge_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa6,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ge_i16_e64 v1, v2
+// GFX1250: v_cmpx_ge_i16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb6,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_ge_i16_e64 v255, v255
+// GFX1250: v_cmpx_ge_i16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb6,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_ge_i16_e64 s1, s2
+// GFX1250: v_cmpx_ge_i16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb6,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_ge_i16_e64 s105, s105
+// GFX1250: v_cmpx_ge_i16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb6,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_ge_i16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_ge_i16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb6,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_ge_i16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_ge_i16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb6,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_ge_i16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_ge_i16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb6,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_ge_i16_e64 m0, 0.5
+// GFX1250: v_cmpx_ge_i16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xb6,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_ge_i16_e64 exec_lo, -1
+// GFX1250: v_cmpx_ge_i16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb6,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_ge_i16_e64 exec_hi, null
+// GFX1250: v_cmpx_ge_i16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb6,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_ge_i16_e64 null, exec_lo
+// GFX1250: v_cmpx_ge_i16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb6,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_ge_i16_e64 -1, exec_hi
+// GFX1250: v_cmpx_ge_i16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb6,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_ge_i16_e64 0.5, m0
+// GFX1250: v_cmpx_ge_i16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xb6,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_ge_i16_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_ge_i16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb6,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_ge_i16_e64 0xfe0b, vcc_hi
+// GFX1250: v_cmpx_ge_i16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb6,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_ge_i32_e64 v1, v2
+// GFX1250: v_cmpx_ge_i32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc6,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_ge_i32_e64 v255, v255
+// GFX1250: v_cmpx_ge_i32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc6,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_ge_i32_e64 s1, s2
+// GFX1250: v_cmpx_ge_i32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc6,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_ge_i32_e64 s105, s105
+// GFX1250: v_cmpx_ge_i32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc6,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_ge_i32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_ge_i32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc6,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_ge_i32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_ge_i32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc6,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ge_i32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_ge_i32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc6,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_ge_i32_e64 m0, 0.5
+// GFX1250: v_cmpx_ge_i32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc6,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_ge_i32_e64 exec_lo, -1
+// GFX1250: v_cmpx_ge_i32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc6,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_ge_i32_e64 exec_hi, null
+// GFX1250: v_cmpx_ge_i32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc6,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_ge_i32_e64 null, exec_lo
+// GFX1250: v_cmpx_ge_i32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc6,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_ge_i32_e64 -1, exec_hi
+// GFX1250: v_cmpx_ge_i32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc6,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_ge_i32_e64 0.5, m0
+// GFX1250: v_cmpx_ge_i32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc6,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_ge_i32_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_ge_i32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc6,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_ge_i32_e64 0xaf123456, vcc_hi
+// GFX1250: v_cmpx_ge_i32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc6,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ge_i64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_ge_i64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd6,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_ge_i64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_ge_i64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd6,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_ge_i64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_ge_i64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd6,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_ge_i64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_ge_i64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd6,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_ge_i64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_ge_i64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd6,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_ge_i64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_ge_i64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd6,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_ge_i64_e64 exec, src_scc
+// GFX1250: v_cmpx_ge_i64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd6,0xd4,0x7e,0xfa,0x01,0x00]
+
+v_cmpx_ge_i64_e64 null, 0.5
+// GFX1250: v_cmpx_ge_i64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd6,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_ge_i64_e64 -1, -1
+// GFX1250: v_cmpx_ge_i64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd6,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_ge_i64_e64 0.5, null
+// GFX1250: v_cmpx_ge_i64_e64 0.5, null ; encoding: [0x7e,0x00,0xd6,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_ge_i64_e64 src_scc, exec
+// GFX1250: v_cmpx_ge_i64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd6,0xd4,0xfd,0xfc,0x00,0x00]
+
+v_cmpx_ge_i64_e64 0xaf123456, vcc
+// GFX1250: v_cmpx_ge_i64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd6,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_ge_u16_e64 v1, v2
+// GFX1250: v_cmpx_ge_u16_e64 v1, v2 ; encoding: [0x7e,0x00,0xbe,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_ge_u16_e64 v255, v255
+// GFX1250: v_cmpx_ge_u16_e64 v255, v255 ; encoding: [0x7e,0x00,0xbe,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_ge_u16_e64 s1, s2
+// GFX1250: v_cmpx_ge_u16_e64 s1, s2 ; encoding: [0x7e,0x00,0xbe,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_ge_u16_e64 s105, s105
+// GFX1250: v_cmpx_ge_u16_e64 s105, s105 ; encoding: [0x7e,0x00,0xbe,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_ge_u16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_ge_u16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xbe,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_ge_u16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_ge_u16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xbe,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_ge_u16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_ge_u16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xbe,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_ge_u16_e64 m0, 0.5
+// GFX1250: v_cmpx_ge_u16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xbe,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_ge_u16_e64 exec_lo, -1
+// GFX1250: v_cmpx_ge_u16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xbe,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_ge_u16_e64 exec_hi, null
+// GFX1250: v_cmpx_ge_u16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xbe,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_ge_u16_e64 null, exec_lo
+// GFX1250: v_cmpx_ge_u16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xbe,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_ge_u16_e64 -1, exec_hi
+// GFX1250: v_cmpx_ge_u16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xbe,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_ge_u16_e64 0.5, m0
+// GFX1250: v_cmpx_ge_u16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xbe,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_ge_u16_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_ge_u16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xbe,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_ge_u16_e64 0xfe0b, vcc_hi
+// GFX1250: v_cmpx_ge_u16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xbe,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_ge_u32_e64 v1, v2
+// GFX1250: v_cmpx_ge_u32_e64 v1, v2 ; encoding: [0x7e,0x00,0xce,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_ge_u32_e64 v255, v255
+// GFX1250: v_cmpx_ge_u32_e64 v255, v255 ; encoding: [0x7e,0x00,0xce,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_ge_u32_e64 s1, s2
+// GFX1250: v_cmpx_ge_u32_e64 s1, s2 ; encoding: [0x7e,0x00,0xce,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_ge_u32_e64 s105, s105
+// GFX1250: v_cmpx_ge_u32_e64 s105, s105 ; encoding: [0x7e,0x00,0xce,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_ge_u32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_ge_u32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xce,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_ge_u32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_ge_u32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xce,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ge_u32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_ge_u32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xce,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_ge_u32_e64 m0, 0.5
+// GFX1250: v_cmpx_ge_u32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xce,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_ge_u32_e64 exec_lo, -1
+// GFX1250: v_cmpx_ge_u32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xce,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_ge_u32_e64 exec_hi, null
+// GFX1250: v_cmpx_ge_u32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xce,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_ge_u32_e64 null, exec_lo
+// GFX1250: v_cmpx_ge_u32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xce,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_ge_u32_e64 -1, exec_hi
+// GFX1250: v_cmpx_ge_u32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xce,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_ge_u32_e64 0.5, m0
+// GFX1250: v_cmpx_ge_u32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xce,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_ge_u32_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_ge_u32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xce,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_ge_u32_e64 0xaf123456, vcc_hi
+// GFX1250: v_cmpx_ge_u32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xce,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ge_u64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_ge_u64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xde,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_ge_u64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_ge_u64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xde,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_ge_u64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_ge_u64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xde,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_ge_u64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_ge_u64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xde,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_ge_u64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_ge_u64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xde,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_ge_u64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_ge_u64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xde,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_ge_u64_e64 exec, src_scc
+// GFX1250: v_cmpx_ge_u64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xde,0xd4,0x7e,0xfa,0x01,0x00]
+
+v_cmpx_ge_u64_e64 null, 0.5
+// GFX1250: v_cmpx_ge_u64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xde,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_ge_u64_e64 -1, -1
+// GFX1250: v_cmpx_ge_u64_e64 -1, -1 ; encoding: [0x7e,0x00,0xde,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_ge_u64_e64 0.5, null
+// GFX1250: v_cmpx_ge_u64_e64 0.5, null ; encoding: [0x7e,0x00,0xde,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_ge_u64_e64 src_scc, exec
+// GFX1250: v_cmpx_ge_u64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xde,0xd4,0xfd,0xfc,0x00,0x00]
+
+v_cmpx_ge_u64_e64 0xaf123456, vcc
+// GFX1250: v_cmpx_ge_u64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xde,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_gt_f16_e64 v1, v2
+// GFX1250: v_cmpx_gt_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x84,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_gt_f16_e64 v255, v255
+// GFX1250: v_cmpx_gt_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x84,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_gt_f16_e64 s1, s2
+// GFX1250: v_cmpx_gt_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x84,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_gt_f16_e64 s105, s105
+// GFX1250: v_cmpx_gt_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x84,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_gt_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_gt_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x84,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_gt_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_gt_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x84,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_gt_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_gt_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x84,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_gt_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_gt_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x84,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_gt_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_gt_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x84,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_gt_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_gt_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x84,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_gt_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_gt_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x84,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_gt_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_gt_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x84,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_gt_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_gt_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x84,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_gt_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_gt_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x84,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_gt_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_gt_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x84,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
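+// Note (an observation from the checks above): in these VOP3 encodings the
+// abs modifiers set bits 0-1 of the second byte (|src0| gives 0x01, |src1|
+// gives 0x02), clamp sets bit 7 of that same byte, and the neg modifiers land
+// in the high bits of the final operand byte (-src0 gives 0x20, -src1 gives
+// 0x40), as seen in the gt_f16 cases above.
+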
+v_cmpx_gt_f32_e64 v1, v2
+// GFX1250: v_cmpx_gt_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x94,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_gt_f32_e64 v255, v255
+// GFX1250: v_cmpx_gt_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x94,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_gt_f32_e64 s1, s2
+// GFX1250: v_cmpx_gt_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x94,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_gt_f32_e64 s105, s105
+// GFX1250: v_cmpx_gt_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x94,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_gt_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_gt_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x94,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_gt_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_gt_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x94,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_gt_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_gt_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x94,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_gt_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_gt_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x94,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_gt_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_gt_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x94,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_gt_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_gt_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x94,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_gt_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_gt_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x94,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_gt_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_gt_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x94,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_gt_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_gt_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x94,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_gt_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_gt_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x94,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_gt_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_gt_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x94,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_gt_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_gt_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa4,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_gt_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_gt_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa4,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_gt_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_gt_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa4,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_gt_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_gt_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa4,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_gt_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_gt_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa4,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_gt_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_gt_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa4,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_gt_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_gt_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa4,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_gt_f64_e64 null, 0.5
+// GFX1250: v_cmpx_gt_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa4,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_gt_f64_e64 -1, -1
+// GFX1250: v_cmpx_gt_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa4,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_gt_f64_e64 0.5, null
+// GFX1250: v_cmpx_gt_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa4,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_gt_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_gt_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa4,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_gt_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_gt_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa4,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
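+// Note (an observation from the checks above): unlike the i64/u64 compares,
+// the f64 cases keep the plain 32-bit literal form, encoded with source
+// selector 0xff and a four-byte payload, so 0xaf123456 is not promoted to
+// lit64 here.
+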
+v_cmpx_gt_i16_e64 v1, v2
+// GFX1250: v_cmpx_gt_i16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb4,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_gt_i16_e64 v255, v255
+// GFX1250: v_cmpx_gt_i16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb4,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_gt_i16_e64 s1, s2
+// GFX1250: v_cmpx_gt_i16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb4,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_gt_i16_e64 s105, s105
+// GFX1250: v_cmpx_gt_i16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb4,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_gt_i16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_gt_i16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb4,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_gt_i16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_gt_i16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb4,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_gt_i16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_gt_i16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb4,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_gt_i16_e64 m0, 0.5
+// GFX1250: v_cmpx_gt_i16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xb4,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_gt_i16_e64 exec_lo, -1
+// GFX1250: v_cmpx_gt_i16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb4,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_gt_i16_e64 exec_hi, null
+// GFX1250: v_cmpx_gt_i16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb4,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_gt_i16_e64 null, exec_lo
+// GFX1250: v_cmpx_gt_i16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb4,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_gt_i16_e64 -1, exec_hi
+// GFX1250: v_cmpx_gt_i16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb4,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_gt_i16_e64 0.5, m0
+// GFX1250: v_cmpx_gt_i16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xb4,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_gt_i16_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_gt_i16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb4,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_gt_i16_e64 0xfe0b, vcc_hi
+// GFX1250: v_cmpx_gt_i16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb4,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_gt_i32_e64 v1, v2
+// GFX1250: v_cmpx_gt_i32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc4,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_gt_i32_e64 v255, v255
+// GFX1250: v_cmpx_gt_i32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc4,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_gt_i32_e64 s1, s2
+// GFX1250: v_cmpx_gt_i32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc4,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_gt_i32_e64 s105, s105
+// GFX1250: v_cmpx_gt_i32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc4,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_gt_i32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_gt_i32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc4,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_gt_i32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_gt_i32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc4,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_gt_i32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_gt_i32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc4,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_gt_i32_e64 m0, 0.5
+// GFX1250: v_cmpx_gt_i32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc4,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_gt_i32_e64 exec_lo, -1
+// GFX1250: v_cmpx_gt_i32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc4,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_gt_i32_e64 exec_hi, null
+// GFX1250: v_cmpx_gt_i32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc4,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_gt_i32_e64 null, exec_lo
+// GFX1250: v_cmpx_gt_i32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc4,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_gt_i32_e64 -1, exec_hi
+// GFX1250: v_cmpx_gt_i32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc4,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_gt_i32_e64 0.5, m0
+// GFX1250: v_cmpx_gt_i32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc4,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_gt_i32_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_gt_i32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc4,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_gt_i32_e64 0xaf123456, vcc_hi
+// GFX1250: v_cmpx_gt_i32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc4,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_gt_i64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_gt_i64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd4,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_gt_i64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_gt_i64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd4,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_gt_i64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_gt_i64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd4,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_gt_i64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_gt_i64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd4,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_gt_i64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_gt_i64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd4,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_gt_i64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_gt_i64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd4,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_gt_i64_e64 exec, src_scc
+// GFX1250: v_cmpx_gt_i64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd4,0xd4,0x7e,0xfa,0x01,0x00]
+
+v_cmpx_gt_i64_e64 null, 0.5
+// GFX1250: v_cmpx_gt_i64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd4,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_gt_i64_e64 -1, -1
+// GFX1250: v_cmpx_gt_i64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd4,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_gt_i64_e64 0.5, null
+// GFX1250: v_cmpx_gt_i64_e64 0.5, null ; encoding: [0x7e,0x00,0xd4,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_gt_i64_e64 src_scc, exec
+// GFX1250: v_cmpx_gt_i64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd4,0xd4,0xfd,0xfc,0x00,0x00]
+
+v_cmpx_gt_i64_e64 0xaf123456, vcc
+// GFX1250: v_cmpx_gt_i64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd4,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_gt_u16_e64 v1, v2
+// GFX1250: v_cmpx_gt_u16_e64 v1, v2 ; encoding: [0x7e,0x00,0xbc,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_gt_u16_e64 v255, v255
+// GFX1250: v_cmpx_gt_u16_e64 v255, v255 ; encoding: [0x7e,0x00,0xbc,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_gt_u16_e64 s1, s2
+// GFX1250: v_cmpx_gt_u16_e64 s1, s2 ; encoding: [0x7e,0x00,0xbc,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_gt_u16_e64 s105, s105
+// GFX1250: v_cmpx_gt_u16_e64 s105, s105 ; encoding: [0x7e,0x00,0xbc,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_gt_u16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_gt_u16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xbc,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_gt_u16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_gt_u16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xbc,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_gt_u16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_gt_u16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xbc,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_gt_u16_e64 m0, 0.5
+// GFX1250: v_cmpx_gt_u16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xbc,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_gt_u16_e64 exec_lo, -1
+// GFX1250: v_cmpx_gt_u16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xbc,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_gt_u16_e64 exec_hi, null
+// GFX1250: v_cmpx_gt_u16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xbc,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_gt_u16_e64 null, exec_lo
+// GFX1250: v_cmpx_gt_u16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xbc,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_gt_u16_e64 -1, exec_hi
+// GFX1250: v_cmpx_gt_u16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xbc,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_gt_u16_e64 0.5, m0
+// GFX1250: v_cmpx_gt_u16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xbc,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_gt_u16_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_gt_u16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xbc,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_gt_u16_e64 0xfe0b, vcc_hi
+// GFX1250: v_cmpx_gt_u16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xbc,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_gt_u32_e64 v1, v2
+// GFX1250: v_cmpx_gt_u32_e64 v1, v2 ; encoding: [0x7e,0x00,0xcc,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_gt_u32_e64 v255, v255
+// GFX1250: v_cmpx_gt_u32_e64 v255, v255 ; encoding: [0x7e,0x00,0xcc,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_gt_u32_e64 s1, s2
+// GFX1250: v_cmpx_gt_u32_e64 s1, s2 ; encoding: [0x7e,0x00,0xcc,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_gt_u32_e64 s105, s105
+// GFX1250: v_cmpx_gt_u32_e64 s105, s105 ; encoding: [0x7e,0x00,0xcc,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_gt_u32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_gt_u32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xcc,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_gt_u32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_gt_u32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xcc,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_gt_u32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_gt_u32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xcc,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_gt_u32_e64 m0, 0.5
+// GFX1250: v_cmpx_gt_u32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xcc,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_gt_u32_e64 exec_lo, -1
+// GFX1250: v_cmpx_gt_u32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xcc,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_gt_u32_e64 exec_hi, null
+// GFX1250: v_cmpx_gt_u32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xcc,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_gt_u32_e64 null, exec_lo
+// GFX1250: v_cmpx_gt_u32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xcc,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_gt_u32_e64 -1, exec_hi
+// GFX1250: v_cmpx_gt_u32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xcc,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_gt_u32_e64 0.5, m0
+// GFX1250: v_cmpx_gt_u32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xcc,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_gt_u32_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_gt_u32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xcc,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_gt_u32_e64 0xaf123456, vcc_hi
+// GFX1250: v_cmpx_gt_u32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xcc,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_gt_u64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_gt_u64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xdc,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_gt_u64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_gt_u64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xdc,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_gt_u64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_gt_u64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xdc,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_gt_u64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_gt_u64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xdc,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_gt_u64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_gt_u64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xdc,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_gt_u64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_gt_u64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xdc,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_gt_u64_e64 exec, src_scc
+// GFX1250: v_cmpx_gt_u64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xdc,0xd4,0x7e,0xfa,0x01,0x00]
+
+v_cmpx_gt_u64_e64 null, 0.5
+// GFX1250: v_cmpx_gt_u64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xdc,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_gt_u64_e64 -1, -1
+// GFX1250: v_cmpx_gt_u64_e64 -1, -1 ; encoding: [0x7e,0x00,0xdc,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_gt_u64_e64 0.5, null
+// GFX1250: v_cmpx_gt_u64_e64 0.5, null ; encoding: [0x7e,0x00,0xdc,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_gt_u64_e64 src_scc, exec
+// GFX1250: v_cmpx_gt_u64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xdc,0xd4,0xfd,0xfc,0x00,0x00]
+
+v_cmpx_gt_u64_e64 0xaf123456, vcc
+// GFX1250: v_cmpx_gt_u64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xdc,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_le_f16_e64 v1, v2
+// GFX1250: v_cmpx_le_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x83,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_le_f16_e64 v255, v255
+// GFX1250: v_cmpx_le_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x83,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_le_f16_e64 s1, s2
+// GFX1250: v_cmpx_le_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x83,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_le_f16_e64 s105, s105
+// GFX1250: v_cmpx_le_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x83,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_le_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_le_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x83,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_le_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_le_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x83,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_le_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_le_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x83,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_le_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_le_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x83,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_le_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_le_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x83,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_le_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_le_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x83,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_le_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_le_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x83,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_le_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_le_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x83,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_le_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_le_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x83,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_le_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_le_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x83,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_le_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_le_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x83,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_le_f32_e64 v1, v2
+// GFX1250: v_cmpx_le_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x93,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_le_f32_e64 v255, v255
+// GFX1250: v_cmpx_le_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x93,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_le_f32_e64 s1, s2
+// GFX1250: v_cmpx_le_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x93,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_le_f32_e64 s105, s105
+// GFX1250: v_cmpx_le_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x93,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_le_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_le_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x93,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_le_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_le_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x93,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_le_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_le_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x93,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_le_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_le_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x93,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_le_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_le_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x93,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_le_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_le_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x93,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_le_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_le_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x93,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_le_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_le_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x93,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_le_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_le_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x93,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_le_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_le_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x93,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_le_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_le_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x93,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_le_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_le_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa3,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_le_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_le_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa3,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_le_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_le_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa3,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_le_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_le_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa3,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_le_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_le_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa3,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_le_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_le_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa3,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_le_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_le_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa3,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_le_f64_e64 null, 0.5
+// GFX1250: v_cmpx_le_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa3,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_le_f64_e64 -1, -1
+// GFX1250: v_cmpx_le_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa3,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_le_f64_e64 0.5, null
+// GFX1250: v_cmpx_le_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa3,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_le_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_le_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa3,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_le_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_le_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa3,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+v_cmpx_le_i16_e64 v1, v2
+// GFX1250: v_cmpx_le_i16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb3,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_le_i16_e64 v255, v255
+// GFX1250: v_cmpx_le_i16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb3,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_le_i16_e64 s1, s2
+// GFX1250: v_cmpx_le_i16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb3,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_le_i16_e64 s105, s105
+// GFX1250: v_cmpx_le_i16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb3,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_le_i16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_le_i16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb3,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_le_i16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_le_i16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb3,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_le_i16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_le_i16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb3,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_le_i16_e64 m0, 0.5
+// GFX1250: v_cmpx_le_i16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xb3,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_le_i16_e64 exec_lo, -1
+// GFX1250: v_cmpx_le_i16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb3,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_le_i16_e64 exec_hi, null
+// GFX1250: v_cmpx_le_i16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb3,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_le_i16_e64 null, exec_lo
+// GFX1250: v_cmpx_le_i16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb3,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_le_i16_e64 -1, exec_hi
+// GFX1250: v_cmpx_le_i16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb3,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_le_i16_e64 0.5, m0
+// GFX1250: v_cmpx_le_i16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xb3,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_le_i16_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_le_i16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb3,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_le_i16_e64 0xfe0b, vcc_hi
+// GFX1250: v_cmpx_le_i16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb3,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_le_i32_e64 v1, v2
+// GFX1250: v_cmpx_le_i32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc3,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_le_i32_e64 v255, v255
+// GFX1250: v_cmpx_le_i32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc3,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_le_i32_e64 s1, s2
+// GFX1250: v_cmpx_le_i32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc3,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_le_i32_e64 s105, s105
+// GFX1250: v_cmpx_le_i32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc3,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_le_i32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_le_i32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc3,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_le_i32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_le_i32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc3,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_le_i32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_le_i32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc3,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_le_i32_e64 m0, 0.5
+// GFX1250: v_cmpx_le_i32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc3,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_le_i32_e64 exec_lo, -1
+// GFX1250: v_cmpx_le_i32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc3,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_le_i32_e64 exec_hi, null
+// GFX1250: v_cmpx_le_i32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc3,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_le_i32_e64 null, exec_lo
+// GFX1250: v_cmpx_le_i32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc3,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_le_i32_e64 -1, exec_hi
+// GFX1250: v_cmpx_le_i32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc3,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_le_i32_e64 0.5, m0
+// GFX1250: v_cmpx_le_i32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc3,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_le_i32_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_le_i32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc3,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_le_i32_e64 0xaf123456, vcc_hi
+// GFX1250: v_cmpx_le_i32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc3,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_le_i64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_le_i64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd3,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_le_i64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_le_i64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd3,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_le_i64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_le_i64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd3,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_le_i64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_le_i64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd3,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_le_i64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_le_i64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd3,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_le_i64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_le_i64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd3,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_le_i64_e64 exec, src_scc
+// GFX1250: v_cmpx_le_i64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd3,0xd4,0x7e,0xfa,0x01,0x00]
+
+v_cmpx_le_i64_e64 null, 0.5
+// GFX1250: v_cmpx_le_i64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd3,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_le_i64_e64 -1, -1
+// GFX1250: v_cmpx_le_i64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd3,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_le_i64_e64 0.5, null
+// GFX1250: v_cmpx_le_i64_e64 0.5, null ; encoding: [0x7e,0x00,0xd3,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_le_i64_e64 src_scc, exec
+// GFX1250: v_cmpx_le_i64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd3,0xd4,0xfd,0xfc,0x00,0x00]
+
+v_cmpx_le_i64_e64 0xaf123456, vcc
+// GFX1250: v_cmpx_le_i64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd3,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_le_u16_e64 v1, v2
+// GFX1250: v_cmpx_le_u16_e64 v1, v2 ; encoding: [0x7e,0x00,0xbb,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_le_u16_e64 v255, v255
+// GFX1250: v_cmpx_le_u16_e64 v255, v255 ; encoding: [0x7e,0x00,0xbb,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_le_u16_e64 s1, s2
+// GFX1250: v_cmpx_le_u16_e64 s1, s2 ; encoding: [0x7e,0x00,0xbb,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_le_u16_e64 s105, s105
+// GFX1250: v_cmpx_le_u16_e64 s105, s105 ; encoding: [0x7e,0x00,0xbb,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_le_u16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_le_u16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xbb,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_le_u16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_le_u16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xbb,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_le_u16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_le_u16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xbb,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_le_u16_e64 m0, 0.5
+// GFX1250: v_cmpx_le_u16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xbb,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_le_u16_e64 exec_lo, -1
+// GFX1250: v_cmpx_le_u16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xbb,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_le_u16_e64 exec_hi, null
+// GFX1250: v_cmpx_le_u16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xbb,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_le_u16_e64 null, exec_lo
+// GFX1250: v_cmpx_le_u16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xbb,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_le_u16_e64 -1, exec_hi
+// GFX1250: v_cmpx_le_u16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xbb,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_le_u16_e64 0.5, m0
+// GFX1250: v_cmpx_le_u16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xbb,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_le_u16_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_le_u16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xbb,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_le_u16_e64 0xfe0b, vcc_hi
+// GFX1250: v_cmpx_le_u16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xbb,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_le_u32_e64 v1, v2
+// GFX1250: v_cmpx_le_u32_e64 v1, v2 ; encoding: [0x7e,0x00,0xcb,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_le_u32_e64 v255, v255
+// GFX1250: v_cmpx_le_u32_e64 v255, v255 ; encoding: [0x7e,0x00,0xcb,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_le_u32_e64 s1, s2
+// GFX1250: v_cmpx_le_u32_e64 s1, s2 ; encoding: [0x7e,0x00,0xcb,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_le_u32_e64 s105, s105
+// GFX1250: v_cmpx_le_u32_e64 s105, s105 ; encoding: [0x7e,0x00,0xcb,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_le_u32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_le_u32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xcb,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_le_u32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_le_u32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xcb,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_le_u32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_le_u32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xcb,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_le_u32_e64 m0, 0.5
+// GFX1250: v_cmpx_le_u32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xcb,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_le_u32_e64 exec_lo, -1
+// GFX1250: v_cmpx_le_u32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xcb,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_le_u32_e64 exec_hi, null
+// GFX1250: v_cmpx_le_u32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xcb,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_le_u32_e64 null, exec_lo
+// GFX1250: v_cmpx_le_u32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xcb,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_le_u32_e64 -1, exec_hi
+// GFX1250: v_cmpx_le_u32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xcb,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_le_u32_e64 0.5, m0
+// GFX1250: v_cmpx_le_u32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xcb,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_le_u32_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_le_u32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xcb,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_le_u32_e64 0xaf123456, vcc_hi
+// GFX1250: v_cmpx_le_u32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xcb,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_le_u64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_le_u64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xdb,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_le_u64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_le_u64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xdb,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_le_u64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_le_u64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xdb,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_le_u64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_le_u64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xdb,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_le_u64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_le_u64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xdb,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_le_u64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_le_u64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xdb,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_le_u64_e64 exec, src_scc
+// GFX1250: v_cmpx_le_u64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xdb,0xd4,0x7e,0xfa,0x01,0x00]
+
+v_cmpx_le_u64_e64 null, 0.5
+// GFX1250: v_cmpx_le_u64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xdb,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_le_u64_e64 -1, -1
+// GFX1250: v_cmpx_le_u64_e64 -1, -1 ; encoding: [0x7e,0x00,0xdb,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_le_u64_e64 0.5, null
+// GFX1250: v_cmpx_le_u64_e64 0.5, null ; encoding: [0x7e,0x00,0xdb,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_le_u64_e64 src_scc, exec
+// GFX1250: v_cmpx_le_u64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xdb,0xd4,0xfd,0xfc,0x00,0x00]
+
+v_cmpx_le_u64_e64 0xaf123456, vcc
+// GFX1250: v_cmpx_le_u64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xdb,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_lg_f16_e64 v1, v2
+// GFX1250: v_cmpx_lg_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x85,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_lg_f16_e64 v255, v255
+// GFX1250: v_cmpx_lg_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x85,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_lg_f16_e64 s1, s2
+// GFX1250: v_cmpx_lg_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x85,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_lg_f16_e64 s105, s105
+// GFX1250: v_cmpx_lg_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x85,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_lg_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_lg_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x85,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_lg_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_lg_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x85,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_lg_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_lg_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x85,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_lg_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_lg_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x85,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_lg_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_lg_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x85,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_lg_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_lg_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x85,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_lg_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_lg_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x85,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_lg_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_lg_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x85,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_lg_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_lg_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x85,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_lg_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_lg_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x85,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_lg_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_lg_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x85,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_lg_f32_e64 v1, v2
+// GFX1250: v_cmpx_lg_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x95,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_lg_f32_e64 v255, v255
+// GFX1250: v_cmpx_lg_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x95,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_lg_f32_e64 s1, s2
+// GFX1250: v_cmpx_lg_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x95,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_lg_f32_e64 s105, s105
+// GFX1250: v_cmpx_lg_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x95,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_lg_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_lg_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x95,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_lg_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_lg_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x95,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_lg_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_lg_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x95,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_lg_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_lg_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x95,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_lg_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_lg_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x95,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_lg_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_lg_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x95,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_lg_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_lg_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x95,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_lg_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_lg_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x95,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_lg_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_lg_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x95,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_lg_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_lg_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x95,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_lg_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_lg_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x95,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_lg_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_lg_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa5,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_lg_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_lg_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa5,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_lg_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_lg_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa5,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_lg_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_lg_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa5,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_lg_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_lg_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa5,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_lg_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_lg_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa5,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_lg_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_lg_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa5,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_lg_f64_e64 null, 0.5
+// GFX1250: v_cmpx_lg_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa5,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_lg_f64_e64 -1, -1
+// GFX1250: v_cmpx_lg_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa5,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_lg_f64_e64 0.5, null
+// GFX1250: v_cmpx_lg_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa5,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_lg_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_lg_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa5,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_lg_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_lg_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa5,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+v_cmpx_lt_f16_e64 v1, v2
+// GFX1250: v_cmpx_lt_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x81,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_lt_f16_e64 v255, v255
+// GFX1250: v_cmpx_lt_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x81,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_lt_f16_e64 s1, s2
+// GFX1250: v_cmpx_lt_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x81,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_lt_f16_e64 s105, s105
+// GFX1250: v_cmpx_lt_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x81,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_lt_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_lt_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x81,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_lt_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_lt_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x81,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_lt_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_lt_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x81,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_lt_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_lt_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x81,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_lt_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_lt_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x81,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_lt_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_lt_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x81,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_lt_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_lt_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x81,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_lt_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_lt_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x81,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_lt_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_lt_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x81,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_lt_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_lt_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x81,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_lt_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_lt_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x81,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_lt_f32_e64 v1, v2
+// GFX1250: v_cmpx_lt_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x91,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_lt_f32_e64 v255, v255
+// GFX1250: v_cmpx_lt_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x91,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_lt_f32_e64 s1, s2
+// GFX1250: v_cmpx_lt_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x91,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_lt_f32_e64 s105, s105
+// GFX1250: v_cmpx_lt_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x91,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_lt_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_lt_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x91,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_lt_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_lt_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x91,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_lt_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_lt_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x91,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_lt_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_lt_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x91,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_lt_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_lt_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x91,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_lt_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_lt_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x91,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_lt_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_lt_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x91,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_lt_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_lt_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x91,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_lt_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_lt_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x91,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_lt_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_lt_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x91,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_lt_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_lt_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x91,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_lt_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_lt_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa1,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_lt_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_lt_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa1,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_lt_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_lt_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa1,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_lt_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_lt_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa1,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_lt_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_lt_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa1,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_lt_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_lt_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa1,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_lt_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_lt_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa1,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_lt_f64_e64 null, 0.5
+// GFX1250: v_cmpx_lt_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa1,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_lt_f64_e64 -1, -1
+// GFX1250: v_cmpx_lt_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa1,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_lt_f64_e64 0.5, null
+// GFX1250: v_cmpx_lt_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa1,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_lt_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_lt_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa1,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_lt_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_lt_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa1,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+v_cmpx_lt_i16_e64 v1, v2
+// GFX1250: v_cmpx_lt_i16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb1,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_lt_i16_e64 v255, v255
+// GFX1250: v_cmpx_lt_i16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb1,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_lt_i16_e64 s1, s2
+// GFX1250: v_cmpx_lt_i16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb1,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_lt_i16_e64 s105, s105
+// GFX1250: v_cmpx_lt_i16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb1,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_lt_i16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_lt_i16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb1,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_lt_i16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_lt_i16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb1,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_lt_i16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_lt_i16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb1,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_lt_i16_e64 m0, 0.5
+// GFX1250: v_cmpx_lt_i16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xb1,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_lt_i16_e64 exec_lo, -1
+// GFX1250: v_cmpx_lt_i16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb1,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_lt_i16_e64 exec_hi, null
+// GFX1250: v_cmpx_lt_i16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb1,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_lt_i16_e64 null, exec_lo
+// GFX1250: v_cmpx_lt_i16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb1,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_lt_i16_e64 -1, exec_hi
+// GFX1250: v_cmpx_lt_i16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb1,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_lt_i16_e64 0.5, m0
+// GFX1250: v_cmpx_lt_i16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xb1,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_lt_i16_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_lt_i16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb1,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_lt_i16_e64 0xfe0b, vcc_hi
+// GFX1250: v_cmpx_lt_i16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb1,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_lt_i32_e64 v1, v2
+// GFX1250: v_cmpx_lt_i32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc1,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_lt_i32_e64 v255, v255
+// GFX1250: v_cmpx_lt_i32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc1,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_lt_i32_e64 s1, s2
+// GFX1250: v_cmpx_lt_i32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc1,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_lt_i32_e64 s105, s105
+// GFX1250: v_cmpx_lt_i32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc1,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_lt_i32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_lt_i32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc1,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_lt_i32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_lt_i32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc1,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_lt_i32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_lt_i32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc1,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_lt_i32_e64 m0, 0.5
+// GFX1250: v_cmpx_lt_i32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc1,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_lt_i32_e64 exec_lo, -1
+// GFX1250: v_cmpx_lt_i32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc1,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_lt_i32_e64 exec_hi, null
+// GFX1250: v_cmpx_lt_i32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc1,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_lt_i32_e64 null, exec_lo
+// GFX1250: v_cmpx_lt_i32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc1,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_lt_i32_e64 -1, exec_hi
+// GFX1250: v_cmpx_lt_i32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc1,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_lt_i32_e64 0.5, m0
+// GFX1250: v_cmpx_lt_i32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc1,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_lt_i32_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_lt_i32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc1,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_lt_i32_e64 0xaf123456, vcc_hi
+// GFX1250: v_cmpx_lt_i32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc1,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_lt_i64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_lt_i64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd1,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_lt_i64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_lt_i64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd1,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_lt_i64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_lt_i64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd1,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_lt_i64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_lt_i64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd1,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_lt_i64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_lt_i64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd1,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_lt_i64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_lt_i64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd1,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_lt_i64_e64 exec, src_scc
+// GFX1250: v_cmpx_lt_i64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd1,0xd4,0x7e,0xfa,0x01,0x00]
+
+v_cmpx_lt_i64_e64 null, 0.5
+// GFX1250: v_cmpx_lt_i64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd1,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_lt_i64_e64 -1, -1
+// GFX1250: v_cmpx_lt_i64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd1,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_lt_i64_e64 0.5, null
+// GFX1250: v_cmpx_lt_i64_e64 0.5, null ; encoding: [0x7e,0x00,0xd1,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_lt_i64_e64 src_scc, exec
+// GFX1250: v_cmpx_lt_i64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd1,0xd4,0xfd,0xfc,0x00,0x00]
+
+v_cmpx_lt_i64_e64 0xaf123456, vcc
+// GFX1250: v_cmpx_lt_i64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd1,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_lt_u16_e64 v1, v2
+// GFX1250: v_cmpx_lt_u16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb9,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_lt_u16_e64 v255, v255
+// GFX1250: v_cmpx_lt_u16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb9,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_lt_u16_e64 s1, s2
+// GFX1250: v_cmpx_lt_u16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb9,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_lt_u16_e64 s105, s105
+// GFX1250: v_cmpx_lt_u16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb9,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_lt_u16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_lt_u16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb9,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_lt_u16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_lt_u16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb9,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_lt_u16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_lt_u16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb9,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_lt_u16_e64 m0, 0.5
+// GFX1250: v_cmpx_lt_u16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xb9,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_lt_u16_e64 exec_lo, -1
+// GFX1250: v_cmpx_lt_u16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb9,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_lt_u16_e64 exec_hi, null
+// GFX1250: v_cmpx_lt_u16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb9,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_lt_u16_e64 null, exec_lo
+// GFX1250: v_cmpx_lt_u16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb9,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_lt_u16_e64 -1, exec_hi
+// GFX1250: v_cmpx_lt_u16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb9,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_lt_u16_e64 0.5, m0
+// GFX1250: v_cmpx_lt_u16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xb9,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_lt_u16_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_lt_u16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb9,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_lt_u16_e64 0xfe0b, vcc_hi
+// GFX1250: v_cmpx_lt_u16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb9,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_lt_u32_e64 v1, v2
+// GFX1250: v_cmpx_lt_u32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc9,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_lt_u32_e64 v255, v255
+// GFX1250: v_cmpx_lt_u32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc9,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_lt_u32_e64 s1, s2
+// GFX1250: v_cmpx_lt_u32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc9,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_lt_u32_e64 s105, s105
+// GFX1250: v_cmpx_lt_u32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc9,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_lt_u32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_lt_u32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc9,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_lt_u32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_lt_u32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc9,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_lt_u32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_lt_u32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc9,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_lt_u32_e64 m0, 0.5
+// GFX1250: v_cmpx_lt_u32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc9,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_lt_u32_e64 exec_lo, -1
+// GFX1250: v_cmpx_lt_u32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc9,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_lt_u32_e64 exec_hi, null
+// GFX1250: v_cmpx_lt_u32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc9,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_lt_u32_e64 null, exec_lo
+// GFX1250: v_cmpx_lt_u32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc9,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_lt_u32_e64 -1, exec_hi
+// GFX1250: v_cmpx_lt_u32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc9,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_lt_u32_e64 0.5, m0
+// GFX1250: v_cmpx_lt_u32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc9,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_lt_u32_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_lt_u32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc9,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_lt_u32_e64 0xaf123456, vcc_hi
+// GFX1250: v_cmpx_lt_u32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc9,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_lt_u64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_lt_u64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd9,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_lt_u64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_lt_u64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd9,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_lt_u64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_lt_u64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd9,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_lt_u64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_lt_u64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd9,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_lt_u64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_lt_u64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd9,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_lt_u64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_lt_u64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd9,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_lt_u64_e64 exec, src_scc
+// GFX1250: v_cmpx_lt_u64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd9,0xd4,0x7e,0xfa,0x01,0x00]
+
+v_cmpx_lt_u64_e64 null, 0.5
+// GFX1250: v_cmpx_lt_u64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd9,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_lt_u64_e64 -1, -1
+// GFX1250: v_cmpx_lt_u64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd9,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_lt_u64_e64 0.5, null
+// GFX1250: v_cmpx_lt_u64_e64 0.5, null ; encoding: [0x7e,0x00,0xd9,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_lt_u64_e64 src_scc, exec
+// GFX1250: v_cmpx_lt_u64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd9,0xd4,0xfd,0xfc,0x00,0x00]
+
+v_cmpx_lt_u64_e64 0xaf123456, vcc
+// GFX1250: v_cmpx_lt_u64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd9,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_ne_i16_e64 v1, v2
+// GFX1250: v_cmpx_ne_i16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb5,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_ne_i16_e64 v255, v255
+// GFX1250: v_cmpx_ne_i16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb5,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_ne_i16_e64 s1, s2
+// GFX1250: v_cmpx_ne_i16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb5,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_ne_i16_e64 s105, s105
+// GFX1250: v_cmpx_ne_i16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb5,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_ne_i16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_ne_i16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb5,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_ne_i16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_ne_i16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb5,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_ne_i16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_ne_i16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb5,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_ne_i16_e64 m0, 0.5
+// GFX1250: v_cmpx_ne_i16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xb5,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_ne_i16_e64 exec_lo, -1
+// GFX1250: v_cmpx_ne_i16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb5,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_ne_i16_e64 exec_hi, null
+// GFX1250: v_cmpx_ne_i16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb5,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_ne_i16_e64 null, exec_lo
+// GFX1250: v_cmpx_ne_i16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb5,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_ne_i16_e64 -1, exec_hi
+// GFX1250: v_cmpx_ne_i16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb5,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_ne_i16_e64 0.5, m0
+// GFX1250: v_cmpx_ne_i16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xb5,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_ne_i16_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_ne_i16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb5,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_ne_i16_e64 0xfe0b, vcc_hi
+// GFX1250: v_cmpx_ne_i16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb5,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_ne_i32_e64 v1, v2
+// GFX1250: v_cmpx_ne_i32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc5,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_ne_i32_e64 v255, v255
+// GFX1250: v_cmpx_ne_i32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc5,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_ne_i32_e64 s1, s2
+// GFX1250: v_cmpx_ne_i32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc5,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_ne_i32_e64 s105, s105
+// GFX1250: v_cmpx_ne_i32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc5,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_ne_i32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_ne_i32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc5,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_ne_i32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_ne_i32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc5,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ne_i32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_ne_i32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc5,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_ne_i32_e64 m0, 0.5
+// GFX1250: v_cmpx_ne_i32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc5,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_ne_i32_e64 exec_lo, -1
+// GFX1250: v_cmpx_ne_i32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc5,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_ne_i32_e64 exec_hi, null
+// GFX1250: v_cmpx_ne_i32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc5,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_ne_i32_e64 null, exec_lo
+// GFX1250: v_cmpx_ne_i32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc5,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_ne_i32_e64 -1, exec_hi
+// GFX1250: v_cmpx_ne_i32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc5,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_ne_i32_e64 0.5, m0
+// GFX1250: v_cmpx_ne_i32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc5,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_ne_i32_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_ne_i32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc5,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_ne_i32_e64 0xaf123456, vcc_hi
+// GFX1250: v_cmpx_ne_i32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc5,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ne_i64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_ne_i64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd5,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_ne_i64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_ne_i64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd5,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_ne_i64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_ne_i64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd5,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_ne_i64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_ne_i64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd5,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_ne_i64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_ne_i64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd5,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_ne_i64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_ne_i64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd5,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_ne_i64_e64 exec, src_scc
+// GFX1250: v_cmpx_ne_i64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd5,0xd4,0x7e,0xfa,0x01,0x00]
+
+v_cmpx_ne_i64_e64 null, 0.5
+// GFX1250: v_cmpx_ne_i64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd5,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_ne_i64_e64 -1, -1
+// GFX1250: v_cmpx_ne_i64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd5,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_ne_i64_e64 0.5, null
+// GFX1250: v_cmpx_ne_i64_e64 0.5, null ; encoding: [0x7e,0x00,0xd5,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_ne_i64_e64 src_scc, exec
+// GFX1250: v_cmpx_ne_i64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd5,0xd4,0xfd,0xfc,0x00,0x00]
+
+v_cmpx_ne_i64_e64 0xaf123456, vcc
+// GFX1250: v_cmpx_ne_i64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd5,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_ne_u16_e64 v1, v2
+// GFX1250: v_cmpx_ne_u16_e64 v1, v2 ; encoding: [0x7e,0x00,0xbd,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_ne_u16_e64 v255, v255
+// GFX1250: v_cmpx_ne_u16_e64 v255, v255 ; encoding: [0x7e,0x00,0xbd,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_ne_u16_e64 s1, s2
+// GFX1250: v_cmpx_ne_u16_e64 s1, s2 ; encoding: [0x7e,0x00,0xbd,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_ne_u16_e64 s105, s105
+// GFX1250: v_cmpx_ne_u16_e64 s105, s105 ; encoding: [0x7e,0x00,0xbd,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_ne_u16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_ne_u16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xbd,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_ne_u16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_ne_u16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xbd,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_ne_u16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_ne_u16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xbd,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_ne_u16_e64 m0, 0.5
+// GFX1250: v_cmpx_ne_u16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xbd,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_ne_u16_e64 exec_lo, -1
+// GFX1250: v_cmpx_ne_u16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xbd,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_ne_u16_e64 exec_hi, null
+// GFX1250: v_cmpx_ne_u16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xbd,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_ne_u16_e64 null, exec_lo
+// GFX1250: v_cmpx_ne_u16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xbd,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_ne_u16_e64 -1, exec_hi
+// GFX1250: v_cmpx_ne_u16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xbd,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_ne_u16_e64 0.5, m0
+// GFX1250: v_cmpx_ne_u16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xbd,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_ne_u16_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_ne_u16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xbd,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_ne_u16_e64 0xfe0b, vcc_hi
+// GFX1250: v_cmpx_ne_u16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xbd,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_ne_u32_e64 v1, v2
+// GFX1250: v_cmpx_ne_u32_e64 v1, v2 ; encoding: [0x7e,0x00,0xcd,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_ne_u32_e64 v255, v255
+// GFX1250: v_cmpx_ne_u32_e64 v255, v255 ; encoding: [0x7e,0x00,0xcd,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_ne_u32_e64 s1, s2
+// GFX1250: v_cmpx_ne_u32_e64 s1, s2 ; encoding: [0x7e,0x00,0xcd,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_ne_u32_e64 s105, s105
+// GFX1250: v_cmpx_ne_u32_e64 s105, s105 ; encoding: [0x7e,0x00,0xcd,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_ne_u32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_ne_u32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xcd,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_ne_u32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_ne_u32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xcd,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ne_u32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_ne_u32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xcd,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_ne_u32_e64 m0, 0.5
+// GFX1250: v_cmpx_ne_u32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xcd,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_ne_u32_e64 exec_lo, -1
+// GFX1250: v_cmpx_ne_u32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xcd,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_ne_u32_e64 exec_hi, null
+// GFX1250: v_cmpx_ne_u32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xcd,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_ne_u32_e64 null, exec_lo
+// GFX1250: v_cmpx_ne_u32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xcd,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_ne_u32_e64 -1, exec_hi
+// GFX1250: v_cmpx_ne_u32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xcd,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_ne_u32_e64 0.5, m0
+// GFX1250: v_cmpx_ne_u32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xcd,0xd4,0xf0,0xfa,0x00,0x00]
+
+v_cmpx_ne_u32_e64 src_scc, vcc_lo
+// GFX1250: v_cmpx_ne_u32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xcd,0xd4,0xfd,0xd4,0x00,0x00]
+
+v_cmpx_ne_u32_e64 0xaf123456, vcc_hi
+// GFX1250: v_cmpx_ne_u32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xcd,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ne_u64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_ne_u64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xdd,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_ne_u64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_ne_u64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xdd,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_ne_u64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_ne_u64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xdd,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_ne_u64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_ne_u64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xdd,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_ne_u64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_ne_u64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xdd,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_ne_u64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_ne_u64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xdd,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_ne_u64_e64 exec, src_scc
+// GFX1250: v_cmpx_ne_u64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xdd,0xd4,0x7e,0xfa,0x01,0x00]
+
+v_cmpx_ne_u64_e64 null, 0.5
+// GFX1250: v_cmpx_ne_u64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xdd,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_ne_u64_e64 -1, -1
+// GFX1250: v_cmpx_ne_u64_e64 -1, -1 ; encoding: [0x7e,0x00,0xdd,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_ne_u64_e64 0.5, null
+// GFX1250: v_cmpx_ne_u64_e64 0.5, null ; encoding: [0x7e,0x00,0xdd,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_ne_u64_e64 src_scc, exec
+// GFX1250: v_cmpx_ne_u64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xdd,0xd4,0xfd,0xfc,0x00,0x00]
+
+v_cmpx_ne_u64_e64 0xaf123456, vcc
+// GFX1250: v_cmpx_ne_u64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xdd,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+v_cmpx_neq_f16_e64 v1, v2
+// GFX1250: v_cmpx_neq_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x8d,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_neq_f16_e64 v255, v255
+// GFX1250: v_cmpx_neq_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x8d,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_neq_f16_e64 s1, s2
+// GFX1250: v_cmpx_neq_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x8d,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_neq_f16_e64 s105, s105
+// GFX1250: v_cmpx_neq_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x8d,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_neq_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_neq_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x8d,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_neq_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_neq_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x8d,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_neq_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_neq_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x8d,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_neq_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_neq_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x8d,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_neq_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_neq_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x8d,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_neq_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_neq_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x8d,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_neq_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_neq_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x8d,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_neq_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_neq_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x8d,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_neq_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_neq_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x8d,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_neq_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_neq_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x8d,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_neq_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_neq_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x8d,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_neq_f32_e64 v1, v2
+// GFX1250: v_cmpx_neq_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x9d,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_neq_f32_e64 v255, v255
+// GFX1250: v_cmpx_neq_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x9d,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_neq_f32_e64 s1, s2
+// GFX1250: v_cmpx_neq_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x9d,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_neq_f32_e64 s105, s105
+// GFX1250: v_cmpx_neq_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x9d,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_neq_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_neq_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x9d,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_neq_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_neq_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x9d,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_neq_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_neq_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x9d,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_neq_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_neq_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x9d,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_neq_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_neq_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x9d,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_neq_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_neq_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x9d,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_neq_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_neq_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x9d,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_neq_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_neq_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x9d,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_neq_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_neq_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x9d,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_neq_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_neq_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x9d,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_neq_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_neq_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x9d,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_neq_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_neq_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xad,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_neq_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_neq_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xad,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_neq_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_neq_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xad,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_neq_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_neq_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xad,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_neq_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_neq_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xad,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_neq_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_neq_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xad,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_neq_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_neq_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xad,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_neq_f64_e64 null, 0.5
+// GFX1250: v_cmpx_neq_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xad,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_neq_f64_e64 -1, -1
+// GFX1250: v_cmpx_neq_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xad,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_neq_f64_e64 0.5, null
+// GFX1250: v_cmpx_neq_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xad,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_neq_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_neq_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xad,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_neq_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_neq_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xad,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nge_f16_e64 v1, v2
+// GFX1250: v_cmpx_nge_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x89,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_nge_f16_e64 v255, v255
+// GFX1250: v_cmpx_nge_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x89,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_nge_f16_e64 s1, s2
+// GFX1250: v_cmpx_nge_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x89,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_nge_f16_e64 s105, s105
+// GFX1250: v_cmpx_nge_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x89,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_nge_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_nge_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x89,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_nge_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_nge_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x89,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_nge_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_nge_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x89,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_nge_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_nge_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x89,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_nge_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_nge_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x89,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_nge_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_nge_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x89,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_nge_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_nge_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x89,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_nge_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_nge_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x89,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_nge_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_nge_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x89,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_nge_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_nge_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x89,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_nge_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_nge_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x89,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_nge_f32_e64 v1, v2
+// GFX1250: v_cmpx_nge_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x99,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_nge_f32_e64 v255, v255
+// GFX1250: v_cmpx_nge_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x99,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_nge_f32_e64 s1, s2
+// GFX1250: v_cmpx_nge_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x99,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_nge_f32_e64 s105, s105
+// GFX1250: v_cmpx_nge_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x99,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_nge_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_nge_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x99,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_nge_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_nge_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x99,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nge_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_nge_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x99,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_nge_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_nge_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x99,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_nge_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_nge_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x99,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_nge_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_nge_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x99,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_nge_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_nge_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x99,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_nge_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_nge_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x99,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_nge_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_nge_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x99,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_nge_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_nge_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x99,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_nge_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_nge_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x99,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nge_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_nge_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa9,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_nge_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_nge_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa9,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_nge_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_nge_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa9,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_nge_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_nge_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa9,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_nge_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_nge_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa9,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_nge_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_nge_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa9,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nge_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_nge_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa9,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_nge_f64_e64 null, 0.5
+// GFX1250: v_cmpx_nge_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa9,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_nge_f64_e64 -1, -1
+// GFX1250: v_cmpx_nge_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa9,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_nge_f64_e64 0.5, null
+// GFX1250: v_cmpx_nge_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa9,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_nge_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_nge_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa9,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_nge_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_nge_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa9,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ngt_f16_e64 v1, v2
+// GFX1250: v_cmpx_ngt_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x8b,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_ngt_f16_e64 v255, v255
+// GFX1250: v_cmpx_ngt_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x8b,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_ngt_f16_e64 s1, s2
+// GFX1250: v_cmpx_ngt_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x8b,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_ngt_f16_e64 s105, s105
+// GFX1250: v_cmpx_ngt_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x8b,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_ngt_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_ngt_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x8b,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_ngt_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_ngt_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x8b,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_ngt_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_ngt_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x8b,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_ngt_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_ngt_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x8b,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_ngt_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_ngt_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x8b,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_ngt_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_ngt_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x8b,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_ngt_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_ngt_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x8b,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_ngt_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_ngt_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x8b,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_ngt_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_ngt_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x8b,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_ngt_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_ngt_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x8b,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_ngt_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_ngt_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x8b,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_ngt_f32_e64 v1, v2
+// GFX1250: v_cmpx_ngt_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x9b,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_ngt_f32_e64 v255, v255
+// GFX1250: v_cmpx_ngt_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x9b,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_ngt_f32_e64 s1, s2
+// GFX1250: v_cmpx_ngt_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x9b,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_ngt_f32_e64 s105, s105
+// GFX1250: v_cmpx_ngt_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x9b,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_ngt_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_ngt_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x9b,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_ngt_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_ngt_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x9b,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ngt_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_ngt_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x9b,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_ngt_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_ngt_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x9b,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_ngt_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_ngt_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x9b,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_ngt_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_ngt_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x9b,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_ngt_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_ngt_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x9b,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_ngt_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_ngt_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x9b,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_ngt_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_ngt_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x9b,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_ngt_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_ngt_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x9b,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_ngt_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_ngt_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x9b,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ngt_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_ngt_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xab,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_ngt_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_ngt_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xab,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_ngt_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_ngt_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xab,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_ngt_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_ngt_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xab,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_ngt_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_ngt_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xab,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_ngt_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_ngt_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xab,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_ngt_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_ngt_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xab,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_ngt_f64_e64 null, 0.5
+// GFX1250: v_cmpx_ngt_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xab,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_ngt_f64_e64 -1, -1
+// GFX1250: v_cmpx_ngt_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xab,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_ngt_f64_e64 0.5, null
+// GFX1250: v_cmpx_ngt_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xab,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_ngt_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_ngt_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xab,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_ngt_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_ngt_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xab,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nle_f16_e64 v1, v2
+// GFX1250: v_cmpx_nle_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x8c,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_nle_f16_e64 v255, v255
+// GFX1250: v_cmpx_nle_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x8c,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_nle_f16_e64 s1, s2
+// GFX1250: v_cmpx_nle_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x8c,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_nle_f16_e64 s105, s105
+// GFX1250: v_cmpx_nle_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x8c,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_nle_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_nle_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x8c,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_nle_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_nle_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x8c,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_nle_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_nle_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x8c,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_nle_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_nle_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x8c,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_nle_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_nle_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x8c,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_nle_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_nle_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x8c,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_nle_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_nle_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x8c,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_nle_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_nle_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x8c,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_nle_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_nle_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x8c,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_nle_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_nle_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x8c,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_nle_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_nle_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x8c,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_nle_f32_e64 v1, v2
+// GFX1250: v_cmpx_nle_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x9c,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_nle_f32_e64 v255, v255
+// GFX1250: v_cmpx_nle_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x9c,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_nle_f32_e64 s1, s2
+// GFX1250: v_cmpx_nle_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x9c,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_nle_f32_e64 s105, s105
+// GFX1250: v_cmpx_nle_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x9c,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_nle_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_nle_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x9c,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_nle_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_nle_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x9c,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nle_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_nle_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x9c,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_nle_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_nle_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x9c,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_nle_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_nle_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x9c,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_nle_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_nle_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x9c,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_nle_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_nle_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x9c,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_nle_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_nle_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x9c,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_nle_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_nle_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x9c,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_nle_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_nle_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x9c,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_nle_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_nle_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x9c,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nle_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_nle_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xac,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_nle_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_nle_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xac,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_nle_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_nle_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xac,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_nle_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_nle_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xac,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_nle_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_nle_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xac,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_nle_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_nle_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xac,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nle_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_nle_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xac,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_nle_f64_e64 null, 0.5
+// GFX1250: v_cmpx_nle_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xac,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_nle_f64_e64 -1, -1
+// GFX1250: v_cmpx_nle_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xac,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_nle_f64_e64 0.5, null
+// GFX1250: v_cmpx_nle_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xac,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_nle_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_nle_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xac,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_nle_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_nle_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xac,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nlg_f16_e64 v1, v2
+// GFX1250: v_cmpx_nlg_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x8a,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_nlg_f16_e64 v255, v255
+// GFX1250: v_cmpx_nlg_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x8a,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_nlg_f16_e64 s1, s2
+// GFX1250: v_cmpx_nlg_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x8a,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_nlg_f16_e64 s105, s105
+// GFX1250: v_cmpx_nlg_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x8a,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_nlg_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_nlg_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x8a,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_nlg_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_nlg_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x8a,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_nlg_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_nlg_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x8a,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_nlg_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_nlg_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x8a,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_nlg_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_nlg_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x8a,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_nlg_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_nlg_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x8a,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_nlg_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_nlg_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x8a,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_nlg_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_nlg_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x8a,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_nlg_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_nlg_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x8a,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_nlg_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_nlg_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x8a,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_nlg_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_nlg_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x8a,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_nlg_f32_e64 v1, v2
+// GFX1250: v_cmpx_nlg_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x9a,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_nlg_f32_e64 v255, v255
+// GFX1250: v_cmpx_nlg_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x9a,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_nlg_f32_e64 s1, s2
+// GFX1250: v_cmpx_nlg_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x9a,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_nlg_f32_e64 s105, s105
+// GFX1250: v_cmpx_nlg_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x9a,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_nlg_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_nlg_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x9a,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_nlg_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_nlg_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x9a,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nlg_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_nlg_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x9a,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_nlg_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_nlg_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x9a,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_nlg_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_nlg_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x9a,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_nlg_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_nlg_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x9a,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_nlg_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_nlg_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x9a,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_nlg_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_nlg_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x9a,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_nlg_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_nlg_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x9a,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_nlg_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_nlg_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x9a,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_nlg_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_nlg_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x9a,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nlg_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_nlg_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xaa,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_nlg_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_nlg_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xaa,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_nlg_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_nlg_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xaa,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_nlg_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_nlg_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xaa,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_nlg_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_nlg_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xaa,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_nlg_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_nlg_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xaa,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nlg_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_nlg_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xaa,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_nlg_f64_e64 null, 0.5
+// GFX1250: v_cmpx_nlg_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xaa,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_nlg_f64_e64 -1, -1
+// GFX1250: v_cmpx_nlg_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xaa,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_nlg_f64_e64 0.5, null
+// GFX1250: v_cmpx_nlg_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xaa,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_nlg_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_nlg_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xaa,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_nlg_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_nlg_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xaa,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nlt_f16_e64 v1, v2
+// GFX1250: v_cmpx_nlt_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x8e,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_nlt_f16_e64 v255, v255
+// GFX1250: v_cmpx_nlt_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x8e,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_nlt_f16_e64 s1, s2
+// GFX1250: v_cmpx_nlt_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x8e,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_nlt_f16_e64 s105, s105
+// GFX1250: v_cmpx_nlt_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x8e,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_nlt_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_nlt_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x8e,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_nlt_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_nlt_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x8e,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_nlt_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_nlt_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x8e,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_nlt_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_nlt_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x8e,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_nlt_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_nlt_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x8e,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_nlt_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_nlt_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x8e,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_nlt_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_nlt_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x8e,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_nlt_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_nlt_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x8e,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_nlt_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_nlt_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x8e,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_nlt_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_nlt_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x8e,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_nlt_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_nlt_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x8e,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_nlt_f32_e64 v1, v2
+// GFX1250: v_cmpx_nlt_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x9e,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_nlt_f32_e64 v255, v255
+// GFX1250: v_cmpx_nlt_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x9e,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_nlt_f32_e64 s1, s2
+// GFX1250: v_cmpx_nlt_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x9e,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_nlt_f32_e64 s105, s105
+// GFX1250: v_cmpx_nlt_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x9e,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_nlt_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_nlt_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x9e,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_nlt_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_nlt_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x9e,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nlt_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_nlt_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x9e,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_nlt_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_nlt_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x9e,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_nlt_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_nlt_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x9e,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_nlt_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_nlt_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x9e,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_nlt_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_nlt_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x9e,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_nlt_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_nlt_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x9e,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_nlt_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_nlt_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x9e,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_nlt_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_nlt_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x9e,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_nlt_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_nlt_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x9e,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nlt_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_nlt_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xae,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_nlt_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_nlt_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xae,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_nlt_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_nlt_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xae,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_nlt_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_nlt_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xae,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_nlt_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_nlt_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xae,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_nlt_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_nlt_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xae,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_nlt_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_nlt_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xae,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_nlt_f64_e64 null, 0.5
+// GFX1250: v_cmpx_nlt_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xae,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_nlt_f64_e64 -1, -1
+// GFX1250: v_cmpx_nlt_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xae,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_nlt_f64_e64 0.5, null
+// GFX1250: v_cmpx_nlt_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xae,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_nlt_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_nlt_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xae,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_nlt_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_nlt_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xae,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+v_cmpx_o_f16_e64 v1, v2
+// GFX1250: v_cmpx_o_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x87,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_o_f16_e64 v255, v255
+// GFX1250: v_cmpx_o_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x87,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_o_f16_e64 s1, s2
+// GFX1250: v_cmpx_o_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x87,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_o_f16_e64 s105, s105
+// GFX1250: v_cmpx_o_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x87,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_o_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_o_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x87,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_o_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_o_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x87,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_o_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_o_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x87,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_o_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_o_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x87,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_o_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_o_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x87,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_o_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_o_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x87,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_o_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_o_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x87,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_o_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_o_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x87,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_o_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_o_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x87,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_o_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_o_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x87,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_o_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_o_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x87,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_o_f32_e64 v1, v2
+// GFX1250: v_cmpx_o_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x97,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_o_f32_e64 v255, v255
+// GFX1250: v_cmpx_o_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x97,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_o_f32_e64 s1, s2
+// GFX1250: v_cmpx_o_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x97,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_o_f32_e64 s105, s105
+// GFX1250: v_cmpx_o_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x97,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_o_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_o_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x97,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_o_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_o_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x97,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_o_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_o_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x97,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_o_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_o_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x97,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_o_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_o_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x97,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_o_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_o_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x97,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_o_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_o_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x97,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_o_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_o_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x97,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_o_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_o_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x97,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_o_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_o_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x97,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_o_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_o_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x97,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_o_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_o_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa7,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_o_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_o_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa7,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_o_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_o_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa7,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_o_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_o_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa7,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_o_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_o_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa7,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_o_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_o_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa7,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_o_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_o_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa7,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_o_f64_e64 null, 0.5
+// GFX1250: v_cmpx_o_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa7,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_o_f64_e64 -1, -1
+// GFX1250: v_cmpx_o_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa7,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_o_f64_e64 0.5, null
+// GFX1250: v_cmpx_o_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa7,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_o_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_o_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa7,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_o_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_o_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa7,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+v_cmpx_u_f16_e64 v1, v2
+// GFX1250: v_cmpx_u_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x88,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_u_f16_e64 v255, v255
+// GFX1250: v_cmpx_u_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x88,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_u_f16_e64 s1, s2
+// GFX1250: v_cmpx_u_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x88,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_u_f16_e64 s105, s105
+// GFX1250: v_cmpx_u_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x88,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_u_f16_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_u_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x88,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_u_f16_e64 vcc_hi, 0xfe0b
+// GFX1250: v_cmpx_u_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x88,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_u_f16_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_u_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x88,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_u_f16_e64 m0, 0.5
+// GFX1250: v_cmpx_u_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x88,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_u_f16_e64 exec_lo, -1
+// GFX1250: v_cmpx_u_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x88,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_u_f16_e64 |exec_hi|, null
+// GFX1250: v_cmpx_u_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x88,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_u_f16_e64 null, exec_lo
+// GFX1250: v_cmpx_u_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x88,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_u_f16_e64 -1, exec_hi
+// GFX1250: v_cmpx_u_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x88,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_u_f16_e64 0.5, -m0
+// GFX1250: v_cmpx_u_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x88,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_u_f16_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_u_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x88,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_u_f16_e64 -|0xfe0b|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_u_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x88,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+v_cmpx_u_f32_e64 v1, v2
+// GFX1250: v_cmpx_u_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x98,0xd4,0x01,0x05,0x02,0x00]
+
+v_cmpx_u_f32_e64 v255, v255
+// GFX1250: v_cmpx_u_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x98,0xd4,0xff,0xff,0x03,0x00]
+
+v_cmpx_u_f32_e64 s1, s2
+// GFX1250: v_cmpx_u_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x98,0xd4,0x01,0x04,0x00,0x00]
+
+v_cmpx_u_f32_e64 s105, s105
+// GFX1250: v_cmpx_u_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x98,0xd4,0x69,0xd2,0x00,0x00]
+
+v_cmpx_u_f32_e64 vcc_lo, ttmp15
+// GFX1250: v_cmpx_u_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x98,0xd4,0x6a,0xf6,0x00,0x00]
+
+v_cmpx_u_f32_e64 vcc_hi, 0xaf123456
+// GFX1250: v_cmpx_u_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x98,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_u_f32_e64 ttmp15, src_scc
+// GFX1250: v_cmpx_u_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x98,0xd4,0x7b,0xfa,0x01,0x00]
+
+v_cmpx_u_f32_e64 m0, 0.5
+// GFX1250: v_cmpx_u_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x98,0xd4,0x7d,0xe0,0x01,0x00]
+
+v_cmpx_u_f32_e64 exec_lo, -1
+// GFX1250: v_cmpx_u_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x98,0xd4,0x7e,0x82,0x01,0x00]
+
+v_cmpx_u_f32_e64 |exec_hi|, null
+// GFX1250: v_cmpx_u_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x98,0xd4,0x7f,0xf8,0x00,0x00]
+
+v_cmpx_u_f32_e64 null, exec_lo
+// GFX1250: v_cmpx_u_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x98,0xd4,0x7c,0xfc,0x00,0x00]
+
+v_cmpx_u_f32_e64 -1, exec_hi
+// GFX1250: v_cmpx_u_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x98,0xd4,0xc1,0xfe,0x00,0x00]
+
+v_cmpx_u_f32_e64 0.5, -m0
+// GFX1250: v_cmpx_u_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x98,0xd4,0xf0,0xfa,0x00,0x40]
+
+v_cmpx_u_f32_e64 -src_scc, |vcc_lo|
+// GFX1250: v_cmpx_u_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x98,0xd4,0xfd,0xd4,0x00,0x20]
+
+v_cmpx_u_f32_e64 -|0xaf123456|, -|vcc_hi| clamp
+// GFX1250: v_cmpx_u_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x98,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+v_cmpx_u_f64_e64 v[2:3], v[2:3]
+// GFX1250: v_cmpx_u_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa8,0xd4,0x02,0x05,0x02,0x00]
+
+v_cmpx_u_f64_e64 v[254:255], v[254:255]
+// GFX1250: v_cmpx_u_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa8,0xd4,0xfe,0xfd,0x03,0x00]
+
+v_cmpx_u_f64_e64 s[2:3], s[4:5]
+// GFX1250: v_cmpx_u_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa8,0xd4,0x02,0x08,0x00,0x00]
+
+v_cmpx_u_f64_e64 s[104:105], s[104:105]
+// GFX1250: v_cmpx_u_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa8,0xd4,0x68,0xd0,0x00,0x00]
+
+v_cmpx_u_f64_e64 vcc, ttmp[14:15]
+// GFX1250: v_cmpx_u_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa8,0xd4,0x6a,0xf4,0x00,0x00]
+
+v_cmpx_u_f64_e64 ttmp[14:15], 0xaf123456
+// GFX1250: v_cmpx_u_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa8,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+v_cmpx_u_f64_e64 -|exec|, src_scc
+// GFX1250: v_cmpx_u_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa8,0xd4,0x7e,0xfa,0x01,0x20]
+
+v_cmpx_u_f64_e64 null, 0.5
+// GFX1250: v_cmpx_u_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa8,0xd4,0x7c,0xe0,0x01,0x00]
+
+v_cmpx_u_f64_e64 -1, -1
+// GFX1250: v_cmpx_u_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa8,0xd4,0xc1,0x82,0x01,0x00]
+
+v_cmpx_u_f64_e64 0.5, null
+// GFX1250: v_cmpx_u_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa8,0xd4,0xf0,0xf8,0x00,0x00]
+
+v_cmpx_u_f64_e64 -|src_scc|, -|exec|
+// GFX1250: v_cmpx_u_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa8,0xd4,0xfd,0xfc,0x00,0x60]
+
+v_cmpx_u_f64_e64 0xaf123456, -|vcc| clamp
+// GFX1250: v_cmpx_u_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa8,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_vop3p_dpp16.s b/llvm/test/MC/AMDGPU/gfx1250_asm_vop3p_dpp16.s
new file mode 100644
index 0000000000000..2875d3ec8e069
--- /dev/null
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_vop3p_dpp16.s
@@ -0,0 +1,14 @@
+// RUN: llvm-mc -triple=amdgcn -mcpu=gfx1250 -show-encoding < %s | FileCheck --check-prefix=GFX1250 %s
+// RUN: not llvm-mc -triple=amdgcn -mcpu=gfx1200 -show-encoding %s 2>&1 | FileCheck --check-prefix=GFX12-ERR --implicit-check-not=error: --strict-whitespace %s
+
+v_fma_mix_f32_bf16 v0, v1, v2, v3 op_sel:[0,0,0] row_ror:7 bank_mask:0x1 bound_ctrl:0
+// GFX1250: v_fma_mix_f32_bf16_e64_dpp v0, v1, v2, v3 row_ror:7 row_mask:0xf bank_mask:0x1 ; encoding: [0x00,0x00,0x3d,0xcc,0xfa,0x04,0x0e,0x04,0x01,0x27,0x01,0xf1]
+// GFX12-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+v_fma_mixlo_bf16 v0, v1, v2, v3 op_sel_hi:[1,1,1] clamp quad_perm:[0,2,3,1] row_mask:0x0
+// GFX1250: v_fma_mixlo_bf16_e64_dpp v0, v1, v2, v3 op_sel_hi:[1,1,1] clamp quad_perm:[0,2,3,1] row_mask:0x0 bank_mask:0xf ; encoding: [0x00,0xc0,0x3e,0xcc,0xfa,0x04,0x0e,0x1c,0x01,0x78,0x00,0x0f]
+// GFX12-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+v_fma_mixhi_bf16 v0, v1, v2, v3 op_sel_hi:[1,1,1] clamp quad_perm:[0,2,3,1] row_mask:0x0
+// GFX1250: v_fma_mixhi_bf16_e64_dpp v0, v1, v2, v3 op_sel_hi:[1,1,1] clamp quad_perm:[0,2,3,1] row_mask:0x0 bank_mask:0xf ; encoding: [0x00,0xc0,0x3f,0xcc,0xfa,0x04,0x0e,0x1c,0x01,0x78,0x00,0x0f]
+// GFX12-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU
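
A note on how these files are exercised: the two RUN lines at the top drive the whole file through llvm-mc and hand the output to FileCheck, with the test file doubling as the check file. A minimal sketch of reproducing the positive check by hand from a build tree, assuming bin/llvm-mc and bin/FileCheck exist there and the in-tree test path below is unchanged:

  # Assemble the new VOP3P DPP16 tests for gfx1250 and verify the
  # GFX1250-prefixed encoding expectations embedded in the same file.
  bin/llvm-mc -triple=amdgcn -mcpu=gfx1250 -show-encoding \
      < llvm/test/MC/AMDGPU/gfx1250_asm_vop3p_dpp16.s \
    | bin/FileCheck --check-prefix=GFX1250 \
        llvm/test/MC/AMDGPU/gfx1250_asm_vop3p_dpp16.s

The second RUN line is the negative half: on gfx1200 the same input must be rejected, so llvm-mc runs under `not` and the GFX12-ERR patterns match diagnostics instead of encodings.
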
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_vop3p_dpp8.s b/llvm/test/MC/AMDGPU/gfx1250_asm_vop3p_dpp8.s
new file mode 100644
index 0000000000000..13b8e211c821e
--- /dev/null
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_vop3p_dpp8.s
@@ -0,0 +1,26 @@
+// RUN: llvm-mc -triple=amdgcn -mcpu=gfx1250 -show-encoding < %s | FileCheck --check-prefix=GFX1250 %s
+// RUN: not llvm-mc -triple=amdgcn -mcpu=gfx1200 -show-encoding %s 2>&1 | FileCheck --check-prefix=GFX12-ERR --implicit-check-not=error: --strict-whitespace %s
+
+v_fma_mix_f32_bf16 v0, v1, v2, v3 dpp8:[2,2,2,2,4,4,4,4]
+// GFX1250: v_fma_mix_f32_bf16_e64_dpp v0, v1, v2, v3 dpp8:[2,2,2,2,4,4,4,4] ; encoding: [0x00,0x00,0x3d,0xcc,0xe9,0x04,0x0e,0x04,0x01,0x92,0x44,0x92]
+// GFX12-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+v_fma_mix_f32_bf16 v0, v1, v2, v3 clamp dpp8:[2,2,2,2,4,4,4,4] fi:1
+// GFX1250: v_fma_mix_f32_bf16_e64_dpp v0, v1, v2, v3 clamp dpp8:[2,2,2,2,4,4,4,4] fi:1 ; encoding: [0x00,0x80,0x3d,0xcc,0xea,0x04,0x0e,0x04,0x01,0x92,0x44,0x92]
+// GFX12-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+v_fma_mixlo_bf16 v0, abs(v1), -v2, abs(v3) dpp8:[2,2,2,2,4,4,4,4]
+// GFX1250: v_fma_mixlo_bf16_e64_dpp v0, |v1|, -v2, |v3| dpp8:[2,2,2,2,4,4,4,4] ; encoding: [0x00,0x05,0x3e,0xcc,0xe9,0x04,0x0e,0x44,0x01,0x92,0x44,0x92]
+// GFX12-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+v_fma_mixlo_bf16 v0, abs(v1), -v2, abs(v3) op_sel:[1,0,0] op_sel_hi:[1,0,0] dpp8:[2,2,2,2,4,4,4,4]
+// GFX1250: v_fma_mixlo_bf16_e64_dpp v0, |v1|, -v2, |v3| op_sel:[1,0,0] op_sel_hi:[1,0,0] dpp8:[2,2,2,2,4,4,4,4] ; encoding: [0x00,0x0d,0x3e,0xcc,0xe9,0x04,0x0e,0x4c,0x01,0x92,0x44,0x92]
+// GFX12-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+v_fma_mixhi_bf16 v0, abs(v1), -v2, abs(v3) dpp8:[2,2,2,2,4,4,4,4]
+// GFX1250: v_fma_mixhi_bf16_e64_dpp v0, |v1|, -v2, |v3| dpp8:[2,2,2,2,4,4,4,4] ; encoding: [0x00,0x05,0x3f,0xcc,0xe9,0x04,0x0e,0x44,0x01,0x92,0x44,0x92]
+// GFX12-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+v_fma_mixhi_bf16 v0, abs(v1), -v2, abs(v3) op_sel:[1,0,0] op_sel_hi:[1,0,0] dpp8:[2,2,2,2,4,4,4,4]
+// GFX1250: v_fma_mixhi_bf16_e64_dpp v0, |v1|, -v2, |v3| op_sel:[1,0,0] op_sel_hi:[1,0,0] dpp8:[2,2,2,2,4,4,4,4] ; encoding: [0x00,0x0d,0x3f,0xcc,0xe9,0x04,0x0e,0x4c,0x01,0x92,0x44,0x92]
+// GFX12-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU
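
These encodings can also be cross-checked in the opposite direction, since the disassembler tests (see the gfx1250_dasm_* file further down in this patch) consume the same comma-separated byte lists. A minimal sketch, assuming the same build-tree binary; the bytes are the first expectation from this file:

  # Feed the expected encoding back through the gfx1250 disassembler; it
  # should print v_fma_mix_f32_bf16_e64_dpp v0, v1, v2, v3 dpp8:[2,2,2,2,4,4,4,4].
  echo "0x00,0x00,0x3d,0xcc,0xe9,0x04,0x0e,0x04,0x01,0x92,0x44,0x92" \
    | bin/llvm-mc -triple=amdgcn -mcpu=gfx1250 -disassemble -show-encoding
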
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_vsample_err.s b/llvm/test/MC/AMDGPU/gfx1250_asm_vsample_err.s
new file mode 100644
index 0000000000000..50766f13fd226
--- /dev/null
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_vsample_err.s
@@ -0,0 +1,175 @@
+; RUN: not llvm-mc -triple=amdgcn -mcpu=gfx1250 -show-encoding %s 2>&1 | FileCheck --check-prefix=GFX1250-ERR --implicit-check-not=error: --strict-whitespace %s
+
+image_sample v64, v32, s[4:11], s[100:103] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_d v64, [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_l v64, [v32, v33], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_b v64, [v32, v33], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_lz v64, v32, s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c v64, [v32, v33], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_d v64, [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_l v64, [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_b v64, [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_lz v64, [v32, v33], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_o v64, [v32, v33], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_d_o v64, [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_l_o v64, [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_b_o v64, [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_lz_o v64, [v32, v33], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_o v64, [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_d_o v64, [v32, v33, v34, v[35:36]], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_l_o v64, [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_b_o v64, [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_lz_o v64, [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4 v[64:67], [v32, v33], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_l v[64:67], [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_b v[64:67], [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_lz v[64:67], [v32, v33], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_c v[64:67], [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_c_lz v[64:67], [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_o v[64:67], [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_lz_o v[64:67], [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_c_lz_o v[64:67], [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_get_lod v64, v32, s[4:11], s[100:103] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_d_g16 v64, [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_d_g16 v64, [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_d_o_g16 v64, [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_d_o_g16 v64, [v32, v33, v34, v[35:36]], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_cl v64, [v32, v33], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_d_cl v64, [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_b_cl v64, [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_cl v64, [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_d_cl v64, [v32, v33, v34, v[35:36]], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_b_cl v64, [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_cl_o v64, [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_d_cl_o v64, [v32, v33, v34, v[35:36]], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_b_cl_o v64, [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_cl_o v64, [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_d_cl_o v64, [v32, v33, v34, v[35:37]], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_b_cl_o v64, [v32, v33, v34, v[35:36]], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_d_cl_g16 v64, [v32, v33, v34, v[35:36]], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_d_cl_o_g16 v64, [v32, v33, v34, v[35:36]], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_c_d_cl_o_g16 v64, [v32, v33, v34, v[35:37]], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_sample_d_cl_g16 v64, [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_1D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_cl v[64:67], [v32, v33, v34], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_b_cl v[64:67], [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_c_cl v[64:67], [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_c_l v[64:67], [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_c_b v[64:67], [v32, v33, v34, v35], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4_c_b_cl v[64:67], [v32, v33, v34, v[35:36]], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_gather4h v[64:67], [v32, v33], s[4:11], s[4:7] dmask:0x1 dim:SQ_RSRC_IMG_2D
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
+image_msaa_load v[1:4], [v5, v6, v7], s[8:15] dmask:0x1 dim:SQ_RSRC_IMG_2D_MSAA
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
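
Every expectation in this file leans on the same FileCheck idiom: [[@LINE-1]] expands to the number of the line directly above the check line, pinning each diagnostic to its instruction, {{[0-9]+}} leaves the column unconstrained, and --implicit-check-not=error: rejects any diagnostic beyond the ones listed. A hypothetical pair, assuming the instruction happens to sit on line 42 of the file:

  image_sample v64, v32, s[4:11], s[100:103] dmask:0x1 dim:SQ_RSRC_IMG_1D
  // GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: instruction not supported on this GPU
  // FileCheck rewrites the pattern above to ":42:<column>: error: ..." before matching.
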
diff --git a/llvm/test/MC/AMDGPU/gfx1250_err.s b/llvm/test/MC/AMDGPU/gfx1250_err.s
index e4598fe91a00b..676eb48cc5a7f 100644
--- a/llvm/test/MC/AMDGPU/gfx1250_err.s
+++ b/llvm/test/MC/AMDGPU/gfx1250_err.s
@@ -1,5 +1,30 @@
// RUN: not llvm-mc -triple=amdgcn -mcpu=gfx1250 -show-encoding %s 2>&1 | FileCheck --check-prefixes=GFX1250-ERR --implicit-check-not=error: -strict-whitespace %s

+s_load_b32 s4, s[2:3], 10 th:TH_LOAD_NT th:TH_LOAD_NT
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid operand for instruction
+// GFX1250-ERR: s_load_b32 s4, s[2:3], 10 th:TH_LOAD_NT th:TH_LOAD_NT
+// GFX1250-ERR: ^
+
+s_load_b32 s4, s[2:3], 10 scope:SCOPE_SE scope:SCOPE_SE
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid operand for instruction
+// GFX1250-ERR: s_load_b32 s4, s[2:3], 10 scope:SCOPE_SE scope:SCOPE_SE
+// GFX1250-ERR: ^
+
+s_load_b32 s4, s[2:3], 10 nv nv
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid operand for instruction
+// GFX1250-ERR: s_load_b32 s4, s[2:3], 10 nv nv
+// GFX1250-ERR: ^
+
+v_mov_b64 v[4:5], v[2:3] quad_perm:[1,1,1,1]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1250-ERR: v_mov_b64 v[4:5], v[2:3] quad_perm:[1,1,1,1]
+// GFX1250-ERR: ^
+
+v_mov_b64 v[4:5], v[2:3] dpp8:[7,6,5,4,3,2,1,0]
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1250-ERR: v_mov_b64 v[4:5], v[2:3] dpp8:[7,6,5,4,3,2,1,0]
+// GFX1250-ERR: ^
+
// For v_dual_cndmask_b32 use of the explicit src2 forces VOPD3 form even if it is vcc_lo.
// If src2 is omitted then it forces VOPD form. As a result a proper form of the instruction
// has to be used if the other component of the dual instruction cannot be used if that
@@ -137,6 +162,11 @@ v_fmaak_f64 v[4:5], 0x7e8, v[8:9], lit64(0x7e8)
// GFX1250-ERR: v_fmaak_f64 v[4:5], 0x7e8, v[8:9], lit64(0x7e8)
// GFX1250-ERR: ^

+v_pk_add_min_i16 v10, |v1|, v2, v3
+// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
+// GFX1250-ERR: v_pk_add_min_i16 v10, |v1|, v2, v3
+// GFX1250-ERR: ^
+
v_pk_add_min_i16 v10, -v1, v2, v3
// GFX1250-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: not a valid operand.
// GFX1250-ERR: v_pk_add_min_i16 v10, -v1, v2, v3
diff --git a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vbuffer_mubuf.txt b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vbuffer_mubuf.txt
index 2499225626acc..ddf779adc9e15 100644
--- a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vbuffer_mubuf.txt
+++ b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vbuffer_mubuf.txt
@@ -1,5 +1,2138 @@
# RUN: llvm-mc -triple=amdgcn -mcpu=gfx1250 -disassemble -show-encoding < %s | FileCheck -check-prefixes=GFX1250 %s

+# GFX1250: buffer_atomic_add_f32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x15,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x15,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_f32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x15,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x15,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x15,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x15,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_f32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x15,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_f32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_f32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x15,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x0d,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0d,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0d,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0d,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0d,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x10,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x10,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x10,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x10,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x10,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x10,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x10,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_add_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x10,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x0f,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0f,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0f,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0f,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0f,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0f,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0f,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0f,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x12,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x12,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x12,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x12,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x12,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x12,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x12,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_and_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x12,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b32 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x0d,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0d,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0d,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0d,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0d,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0d,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b32 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0d,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b32 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b32 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0d,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b64 v[252:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x10,0xc4,0xfc,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x10,0xc4,0xfc,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x10,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x10,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x10,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x10,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b64 v[6:9], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x10,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b64 v[6:9], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cmpswap_b64 v[6:9], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x10,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cond_sub_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x14,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x14,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x14,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x14,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x14,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x14,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cond_sub_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x14,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cond_sub_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_cond_sub_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x14,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x10,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x10,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x10,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x10,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x10,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x10,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x10,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x10,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x13,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x13,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x13,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x13,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x13,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x13,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x13,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_dec_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x13,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0f,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0f,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0f,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0f,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0f,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0f,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0f,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0f,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x13,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x13,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x13,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x13,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x13,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x13,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x13,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_inc_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x13,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x0e,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0e,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0e,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0e,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0e,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0e,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0e,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0e,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x11,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x11,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x11,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x11,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x11,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x11,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x11,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_i64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x11,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_num_f32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x14,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x14,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_num_f32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x14,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x14,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x14,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x14,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_num_f32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x14,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_num_f32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_num_f32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x14,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0e,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0e,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0e,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0e,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0e,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0e,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0e,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0e,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x12,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x12,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x12,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x12,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x12,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x12,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x12,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_max_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x12,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x0e,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0e,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0e,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0e,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0e,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0e,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0e,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x0e,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x11,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x11,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x11,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x11,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x11,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x11,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x11,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_i64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x11,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_num_f32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x14,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x14,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_num_f32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x14,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x14,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x14,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_num_f32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x14,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_num_f32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x14,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_num_f32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_num_f32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x14,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x0e,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0e,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0e,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0e,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0e,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0e,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0e,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0e,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x11,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x11,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x11,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x11,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x11,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x11,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x11,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_min_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x11,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x0f,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0f,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0f,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0f,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0f,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0f,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0f,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x0f,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x12,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x12,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x12,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x12,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x12,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x12,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x12,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_or_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x12,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_bf16 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x16,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x16,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x16,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x16,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x16,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x16,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_bf16 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x16,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_bf16 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_bf16 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x16,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_f16 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x16,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x16,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_f16 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x16,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x16,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x16,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_f16 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x16,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_f16 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x16,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_f16 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_pk_add_f16 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x16,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v255, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0xff,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0xff,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v255, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0xff,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0xff,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v255, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0xff,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0xff,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[12:15], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[12:15], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x18,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0x18,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[12:15], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x18,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0x18,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], m0 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x7d,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], m0 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x7d,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], m0 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x7d,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s101 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x65,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s101 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x65,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s101 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x65,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:7 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0x07,0x00,0x00]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:7 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0x07,0x00,0x00]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:7 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0x07,0x00,0x00]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0x00,0x00,0x00]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0x00,0x00,0x00]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[8:11], s3 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0x00,0x00,0x00]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[96:99], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0xc0,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0xc0,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[96:99], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0xc0,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0xc0,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, off, s[96:99], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0xc0,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0xc0,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 idxen offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x80,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 idxen offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x80,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 idxen offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x80,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 offen offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x40,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0xe8,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 offen offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x40,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0x90,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_clamp_u32 v5, v0, s[8:11], s3 offen offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x40,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0d,0xc4,0x05,0x10,0x94,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x0d,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0d,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0d,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0d,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0d,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0d,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0d,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0d,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x11,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x11,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x11,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x11,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x11,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x11,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x11,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_sub_u64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x11,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0c,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0c,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0c,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0c,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0c,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0c,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0c,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x0c,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x10,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x10,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x10,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x10,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x10,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x10,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x10,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_swap_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x10,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x0f,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0f,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0f,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0f,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0f,0xc4,0x05,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0f,0xc4,0x05,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0f,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x0f,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x12,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x12,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x12,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_CASCADE_NT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x12,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x12,0xc4,0x06,0x10,0x90,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_ATOMIC_RETURN scope:SCOPE_SE ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x12,0xc4,0x06,0x10,0x94,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x12,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_atomic_xor_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x12,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b128 v[252:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x05,0xc4,0xfc,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x05,0xc4,0xfc,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b128 v[6:9], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x05,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b128 v[6:9], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b128 v[6:9], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b128 v[6:9], off, s[8:11], s3 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_b128 v[6:9], off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_b128 v[6:9], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b128 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x05,0xc4,0x06,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b128 v[6:9], off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x05,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b128 v[6:9], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x05,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b128 v[6:9], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b128 v[6:9], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x05,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b32 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x05,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x05,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b32 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x05,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b32 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b32 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b32 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_b32 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_b32 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x05,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b32 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x05,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b32 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x05,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b32 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b32 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x05,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b64 v[254:255], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x05,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x05,0xc4,0xfe,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b64 v[6:7], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x05,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b64 v[6:7], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b64 v[6:7], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b64 v[6:7], off, s[8:11], s3 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_b64 v[6:7], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_b64 v[6:7], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x05,0xc4,0x06,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b64 v[6:7], off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x05,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b64 v[6:7], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x05,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b64 v[6:7], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b64 v[6:7], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x05,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b96 v[252:254], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x05,0xc4,0xfc,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x05,0xc4,0xfc,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b96 v[6:8], off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x05,0xc4,0x06,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b96 v[6:8], off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b96 v[6:8], off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b96 v[6:8], off, s[8:11], s3 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_b96 v[6:8], off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_b96 v[6:8], off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b96 v[6:8], off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x05,0xc4,0x06,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b96 v[6:8], off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x05,0xc4,0x06,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b96 v[6:8], off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x05,0xc4,0x06,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b96 v[6:8], v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_b96 v[6:8], v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x05,0xc4,0x06,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_b16 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x08,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x08,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_b16 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x08,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_b16 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_b16 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_b16 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_d16_b16 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_d16_b16 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_b16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x08,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_b16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x08,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_b16 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x08,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_b16 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_b16 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x08,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_b16 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x08,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x08,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_b16 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x08,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x08,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_b16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x08,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_b16 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x08,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_b16 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_b16 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x08,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_i8 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x08,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x08,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_i8 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x08,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x08,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x08,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_i8 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x08,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_i8 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_i8 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x08,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_u8 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x08,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x08,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_u8 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x08,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x08,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x08,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_u8 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x08,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_u8 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_hi_u8 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x08,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_i8 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x07,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x07,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_i8 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x07,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_i8 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_i8 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_i8 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_d16_i8 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_d16_i8 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x07,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x07,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_i8 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x07,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_i8 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_i8 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x07,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_u8 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x07,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x07,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_u8 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x07,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_u8 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_u8 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_u8 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_d16_u8 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_d16_u8 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x07,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x07,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_u8 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x07,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_u8 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_d16_u8 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x07,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i16 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x04,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x04,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i16 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x04,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i16 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i16 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i16 v5, off, s[8:11], s3 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_i16 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_i16 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x04,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x04,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i16 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x04,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i16 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i16 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0xc0,0x04,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i8 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x04,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x04,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i8 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x04,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i8 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i8 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i8 v5, off, s[8:11], s3 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_i8 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_i8 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x04,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x04,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i8 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x04,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i8 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_i8 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x40,0x04,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u16 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x04,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x04,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u16 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x04,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u16 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u16 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u16 v5, off, s[8:11], s3 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_u16 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_u16 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x04,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u16 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x04,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u16 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x04,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u16 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u16 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x80,0x04,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u8 v255, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x04,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x04,0xc4,0xff,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u8 v5, off, s[12:15], s3 offset:8388607 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x04,0xc4,0x05,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u8 v5, off, s[8:11], m0 offset:8388607 ; encoding: [0x7d,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u8 v5, off, s[8:11], s101 offset:8388607 ; encoding: [0x65,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u8 v5, off, s[8:11], s3 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00]
+0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_load_u8 v5, off, s[8:11], s3 offset:7 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00]
+0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_load_u8 v5, off, s[8:11], s3 offset:8388607 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_BYPASS scope:SCOPE_SYS ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x04,0xc4,0x05,0x10,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u8 v5, off, s[8:11], s3 offset:8388607 th:TH_LOAD_NT_HT scope:SCOPE_DEV ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x04,0xc4,0x05,0x10,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u8 v5, off, s[96:99], s3 offset:8388607 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x04,0xc4,0x05,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u8 v5, v0, s[8:11], s3 idxen offset:8388607 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_load_u8 v5, v0, s[8:11], s3 offen offset:8388607 ; encoding: [0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x03,0x00,0x04,0xc4,0x05,0x10,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b128 v[252:255], off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x40,0x07,0xc4,0xfc,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x07,0xc4,0xfc,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b128 v[2:5], off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b128 v[2:5], off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b128 v[2:5], off, s[12:15], s4 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_store_b128 v[2:5], off, s[12:15], s4 offset:7 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_store_b128 v[2:5], off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b128 v[2:5], off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x07,0xc4,0x02,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b128 v[2:5], off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x07,0xc4,0x02,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b128 v[2:5], off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x07,0xc4,0x02,0x20,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b128 v[2:5], off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x07,0xc4,0x02,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b128 v[2:5], v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b128 v[2:5], v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x07,0xc4,0x02,0x18,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b16 v1, off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b16 v1, off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b16 v1, off, s[12:15], s4 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_store_b16 v1, off, s[12:15], s4 offset:7 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_store_b16 v1, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b16 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x06,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b16 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x06,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b16 v1, off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x06,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b16 v1, off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x06,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b16 v1, v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b16 v1, v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x06,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b16 v255, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x40,0x06,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x06,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b32 v1, off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b32 v1, off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b32 v1, off, s[12:15], s4 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_store_b32 v1, off, s[12:15], s4 offset:7 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_store_b32 v1, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b32 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x80,0x06,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b32 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x80,0x06,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b32 v1, off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x80,0x06,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b32 v1, off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x80,0x06,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b32 v1, v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b32 v1, v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x04,0x80,0x06,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b32 v255, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x80,0x06,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x80,0x06,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b64 v[254:255], off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0xc0,0x06,0xc4,0xfe,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0xc0,0x06,0xc4,0xfe,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b64 v[2:3], off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b64 v[2:3], off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b64 v[2:3], off, s[12:15], s4 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_store_b64 v[2:3], off, s[12:15], s4 offset:7 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_store_b64 v[2:3], off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b64 v[2:3], off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x04,0xc0,0x06,0xc4,0x02,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b64 v[2:3], off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x04,0xc0,0x06,0xc4,0x02,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b64 v[2:3], off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0xc0,0x06,0xc4,0x02,0x20,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b64 v[2:3], off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0xc0,0x06,0xc4,0x02,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b64 v[2:3], v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b64 v[2:3], v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x04,0xc0,0x06,0xc4,0x02,0x18,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b8 v1, off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b8 v1, off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b8 v1, off, s[12:15], s4 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_store_b8 v1, off, s[12:15], s4 offset:7 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_store_b8 v1, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b8 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x06,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b8 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x06,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b8 v1, off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x06,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b8 v1, off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x06,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b8 v1, v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b8 v1, v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x06,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b8 v255, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x00,0x06,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x06,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b96 v[252:254], off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x00,0x07,0xc4,0xfc,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x07,0xc4,0xfc,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b96 v[2:4], off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b96 v[2:4], off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b96 v[2:4], off, s[12:15], s4 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_store_b96 v[2:4], off, s[12:15], s4 offset:7 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_store_b96 v[2:4], off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b96 v[2:4], off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x07,0xc4,0x02,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b96 v[2:4], off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x07,0xc4,0x02,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b96 v[2:4], off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x07,0xc4,0x02,0x20,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b96 v[2:4], off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x07,0xc4,0x02,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b96 v[2:4], v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_b96 v[2:4], v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x07,0xc4,0x02,0x18,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], s4 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], s4 offset:7 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x09,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b16 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x09,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b16 v1, off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x09,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b16 v1, off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x09,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b16 v1, v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b16 v1, v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x09,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b16 v255, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x40,0x09,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x40,0x09,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], m0 offset:8388607 ; encoding: [0x7d,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x7d,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], s101 offset:8388607 ; encoding: [0x65,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x65,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], s4 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00]
+0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x00,0x00,0x00
+
+# GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], s4 offset:7 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00]
+0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0x07,0x00,0x00
+
+# GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_BYPASS scope:SCOPE_SYS ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x09,0xc4,0x01,0x18,0xbc,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b8 v1, off, s[12:15], s4 offset:8388607 th:TH_STORE_NT_HT scope:SCOPE_DEV ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x09,0xc4,0x01,0x18,0xe8,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b8 v1, off, s[16:19], s4 offset:8388607 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x09,0xc4,0x01,0x20,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b8 v1, off, s[96:99], s4 offset:8388607 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x09,0xc4,0x01,0xc0,0x80,0x00,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b8 v1, v0, s[12:15], s4 idxen offset:8388607 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x80,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b8 v1, v0, s[12:15], s4 offen offset:8388607 ; encoding: [0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x09,0xc4,0x01,0x18,0x80,0x40,0x00,0xff,0xff,0x7f
+
+# GFX1250: buffer_store_d16_hi_b8 v255, off, s[12:15], s4 offset:8388607 ; encoding: [0x04,0x00,0x09,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f]
+0x04,0x00,0x09,0xc4,0xff,0x18,0x80,0x00,0x00,0xff,0xff,0x7f
+
# GFX1250: buffer_atomic_and_b32 v5, v1, s[8:11], s3 offen offset:4095 nv ; encoding: [0x83,0x00,0x0f,0xc4,0x05,0x10,0x80,0x40,0x01,0xff,0x0f,0x00]
0x83,0x00,0x0f,0xc4,0x05,0x10,0x80,0x40,0x01,0xff,0x0f,0x00
diff --git a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vop3cx.txt b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vop3cx.txt
new file mode 100644
index 0000000000000..e419e4583acfc
--- /dev/null
+++ b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vop3cx.txt
@@ -0,0 +1,3413 @@
+# NOTE: Assertions have been autogenerated by utils/update_mc_test_checks.py UTC_ARGS: --version 5
+# RUN: llvm-mc -triple=amdgcn -mcpu=gfx1250 -disassemble -show-encoding < %s | FileCheck -check-prefixes=GFX1250 %s
+
+0x7e,0x00,0xfd,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_class_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xfd,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x01,0xfd,0xd4,0xff,0xd6,0x00,0x20,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_class_f16_e64 -|0xfe0b|, vcc_hi ; encoding: [0x7e,0x01,0xfd,0xd4,0xff,0xd6,0x00,0x20,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xfd,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_class_f16_e64 0.5, m0 ; encoding: [0x7e,0x00,0xfd,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xfd,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_class_f16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xfd,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xfd,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_class_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xfd,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xfd,0xd4,0x7d,0xfa,0x01,0x00
+# GFX1250: v_cmpx_class_f16_e64 m0, src_scc ; encoding: [0x7e,0x00,0xfd,0xd4,0x7d,0xfa,0x01,0x00]
+
+0x7e,0x00,0xfd,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_class_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xfd,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xfd,0xd4,0x01,0x04,0x02,0x00
+# GFX1250: v_cmpx_class_f16_e64 s1, v2 ; encoding: [0x7e,0x00,0xfd,0xd4,0x01,0x04,0x02,0x00]
+
+0x7e,0x00,0xfd,0xd4,0x69,0xfe,0x03,0x00
+# GFX1250: v_cmpx_class_f16_e64 s105, v255 ; encoding: [0x7e,0x00,0xfd,0xd4,0x69,0xfe,0x03,0x00]
+
+0x7e,0x00,0xfd,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_class_f16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xfd,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xfd,0xd4,0x7b,0xf6,0x00,0x00
+# GFX1250: v_cmpx_class_f16_e64 ttmp15, ttmp15 ; encoding: [0x7e,0x00,0xfd,0xd4,0x7b,0xf6,0x00,0x00]
+
+0x7e,0x00,0xfd,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_class_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0xfd,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xfd,0xd4,0xff,0x05,0x02,0x00
+# GFX1250: v_cmpx_class_f16_e64 v255, v2 ; encoding: [0x7e,0x00,0xfd,0xd4,0xff,0x05,0x02,0x00]
+
+0x7e,0x00,0xfd,0xd4,0x6b,0xd2,0x00,0x00
+# GFX1250: v_cmpx_class_f16_e64 vcc_hi, s105 ; encoding: [0x7e,0x00,0xfd,0xd4,0x6b,0xd2,0x00,0x00]
+
+0x7e,0x00,0xfd,0xd4,0x6a,0x04,0x00,0x00
+# GFX1250: v_cmpx_class_f16_e64 vcc_lo, s2 ; encoding: [0x7e,0x00,0xfd,0xd4,0x6a,0x04,0x00,0x00]
+
+0x7e,0x00,0xfe,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_class_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xfe,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x01,0xfe,0xd4,0xff,0xd6,0x00,0x20,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_class_f32_e64 -|0xaf123456|, vcc_hi ; encoding: [0x7e,0x01,0xfe,0xd4,0xff,0xd6,0x00,0x20,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xfe,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_class_f32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xfe,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xfe,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_class_f32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xfe,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xfe,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_class_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xfe,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xfe,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_class_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xfe,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0xfe,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_class_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xfe,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xfe,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_class_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0xfe,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xfe,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_class_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0xfe,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xfe,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_class_f32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xfe,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xfe,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_class_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xfe,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xfe,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_class_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0xfe,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xfe,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_class_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0xfe,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xfe,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_class_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xfe,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xfe,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_class_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xfe,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xff,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_class_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xff,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xff,0xd4,0xfd,0xfa,0x01,0x20
+# GFX1250: v_cmpx_class_f64_e64 -|src_scc|, src_scc ; encoding: [0x7e,0x01,0xff,0xd4,0xfd,0xfa,0x01,0x20]
+
+0x7e,0x00,0xff,0xd4,0xf0,0xe0,0x01,0x00
+# GFX1250: v_cmpx_class_f64_e64 0.5, 0.5 ; encoding: [0x7e,0x00,0xff,0xd4,0xf0,0xe0,0x01,0x00]
+
+0x7e,0x00,0xff,0xd4,0xff,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_class_f64_e64 0xaf123456, 0xaf123456 ; encoding: [0x7e,0x00,0xff,0xd4,0xff,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xff,0xd4,0x7e,0xfc,0x00,0x00
+# GFX1250: v_cmpx_class_f64_e64 exec, exec_lo ; encoding: [0x7e,0x00,0xff,0xd4,0x7e,0xfc,0x00,0x00]
+
+0x7e,0x00,0xff,0xd4,0x7c,0xf8,0x00,0x00
+# GFX1250: v_cmpx_class_f64_e64 null, null ; encoding: [0x7e,0x00,0xff,0xd4,0x7c,0xf8,0x00,0x00]
+
+0x7e,0x00,0xff,0xd4,0x68,0xd4,0x00,0x00
+# GFX1250: v_cmpx_class_f64_e64 s[104:105], vcc_lo ; encoding: [0x7e,0x00,0xff,0xd4,0x68,0xd4,0x00,0x00]
+
+0x7e,0x00,0xff,0xd4,0x02,0xd6,0x00,0x00
+# GFX1250: v_cmpx_class_f64_e64 s[2:3], vcc_hi ; encoding: [0x7e,0x00,0xff,0xd4,0x02,0xd6,0x00,0x00]
+
+0x7e,0x00,0xff,0xd4,0x7a,0xfe,0x00,0x00
+# GFX1250: v_cmpx_class_f64_e64 ttmp[14:15], exec_hi ; encoding: [0x7e,0x00,0xff,0xd4,0x7a,0xfe,0x00,0x00]
+
+0x7e,0x00,0xff,0xd4,0xfe,0xf7,0x00,0x00
+# GFX1250: v_cmpx_class_f64_e64 v[254:255], ttmp15 ; encoding: [0x7e,0x00,0xff,0xd4,0xfe,0xf7,0x00,0x00]
+
+0x7e,0x00,0xff,0xd4,0x02,0xd3,0x00,0x00
+# GFX1250: v_cmpx_class_f64_e64 v[2:3], s105 ; encoding: [0x7e,0x00,0xff,0xd4,0x02,0xd3,0x00,0x00]
+
+0x7e,0x00,0xff,0xd4,0x02,0x05,0x00,0x00
+# GFX1250: v_cmpx_class_f64_e64 v[2:3], s2 ; encoding: [0x7e,0x00,0xff,0xd4,0x02,0x05,0x00,0x00]
+
+0x7e,0x00,0xff,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_class_f64_e64 v[2:3], v2 ; encoding: [0x7e,0x00,0xff,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xff,0xd4,0x02,0xff,0x03,0x00
+# GFX1250: v_cmpx_class_f64_e64 v[2:3], v255 ; encoding: [0x7e,0x00,0xff,0xd4,0x02,0xff,0x03,0x00]
+
+0x7e,0x00,0xff,0xd4,0x6a,0xfa,0x00,0x00
+# GFX1250: v_cmpx_class_f64_e64 vcc, m0 ; encoding: [0x7e,0x00,0xff,0xd4,0x6a,0xfa,0x00,0x00]
+
+0x7e,0x00,0x82,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_eq_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x82,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x82,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_eq_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x82,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x82,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_eq_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x82,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x82,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_eq_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x82,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x82,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_eq_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x82,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x82,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_eq_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x82,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x82,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_eq_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x82,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x82,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_eq_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x82,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x82,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_eq_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x82,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x82,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_eq_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x82,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x82,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_eq_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x82,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x82,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_eq_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x82,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x82,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_eq_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x82,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x82,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_eq_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x82,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x82,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_eq_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x82,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x92,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_eq_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x92,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x92,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_eq_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x92,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x92,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_eq_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x92,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x92,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_eq_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x92,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x92,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_eq_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x92,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x92,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_eq_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x92,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x92,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_eq_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x92,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x92,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_eq_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x92,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x92,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_eq_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x92,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x92,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_eq_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x92,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x92,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_eq_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x92,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x92,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_eq_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x92,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x92,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_eq_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x92,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x92,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_eq_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x92,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x92,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_eq_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x92,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xa2,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_eq_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa2,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xa2,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_eq_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa2,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xa2,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_eq_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa2,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xa2,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_eq_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa2,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xa2,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_eq_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa2,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa2,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_eq_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa2,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xa2,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_eq_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa2,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xa2,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_eq_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa2,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xa2,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_eq_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa2,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa2,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_eq_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa2,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xa2,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_eq_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa2,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xa2,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_eq_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa2,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0xb2,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_eq_i16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb2,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb2,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_eq_i16_e64 0x3800, m0 ; encoding: [0x7e,0x00,0xb2,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb2,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_eq_i16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb2,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb2,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_eq_i16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb2,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xb2,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_eq_i16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb2,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xb2,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_eq_i16_e64 m0, 0x3800 ; encoding: [0x7e,0x00,0xb2,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb2,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_eq_i16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb2,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xb2,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_eq_i16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb2,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xb2,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_eq_i16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb2,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xb2,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_eq_i16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb2,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xb2,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_eq_i16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb2,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xb2,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_eq_i16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb2,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xb2,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_eq_i16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb2,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xb2,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_eq_i16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb2,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb2,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_eq_i16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb2,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xc2,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_eq_i32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc2,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xc2,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_eq_i32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc2,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xc2,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_eq_i32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc2,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc2,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_eq_i32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc2,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xc2,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_eq_i32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc2,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xc2,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_eq_i32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc2,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0xc2,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_eq_i32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc2,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xc2,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_eq_i32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc2,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xc2,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_eq_i32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc2,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xc2,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_eq_i32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc2,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xc2,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_eq_i32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc2,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xc2,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_eq_i32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc2,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xc2,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_eq_i32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc2,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xc2,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_eq_i32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc2,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc2,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_eq_i32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc2,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xd2,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_eq_i64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd2,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x00,0xd2,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_eq_i64_e64 0.5, null ; encoding: [0x7e,0x00,0xd2,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x00,0xd2,0xd4,0xff,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_eq_i64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd2,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd2,0xd4,0x7e,0xfa,0x01,0x00
+# GFX1250: v_cmpx_eq_i64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd2,0xd4,0x7e,0xfa,0x01,0x00]
+
+0x7e,0x00,0xd2,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_eq_i64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd2,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xd2,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_eq_i64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd2,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xd2,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_eq_i64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd2,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xd2,0xd4,0xfd,0xfc,0x00,0x00
+# GFX1250: v_cmpx_eq_i64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd2,0xd4,0xfd,0xfc,0x00,0x00]
+
+0x7e,0x00,0xd2,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_eq_i64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd2,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd2,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_eq_i64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd2,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xd2,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_eq_i64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd2,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xd2,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_eq_i64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd2,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0xba,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_eq_u16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xba,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xba,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_eq_u16_e64 0x3800, m0 ; encoding: [0x7e,0x00,0xba,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xba,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_eq_u16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xba,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xba,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_eq_u16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xba,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xba,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_eq_u16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xba,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xba,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_eq_u16_e64 m0, 0x3800 ; encoding: [0x7e,0x00,0xba,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xba,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_eq_u16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xba,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xba,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_eq_u16_e64 s1, s2 ; encoding: [0x7e,0x00,0xba,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xba,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_eq_u16_e64 s105, s105 ; encoding: [0x7e,0x00,0xba,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xba,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_eq_u16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xba,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xba,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_eq_u16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xba,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xba,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_eq_u16_e64 v1, v2 ; encoding: [0x7e,0x00,0xba,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xba,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_eq_u16_e64 v255, v255 ; encoding: [0x7e,0x00,0xba,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xba,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_eq_u16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xba,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xba,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_eq_u16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xba,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xca,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_eq_u32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xca,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xca,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_eq_u32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xca,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xca,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_eq_u32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xca,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xca,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_eq_u32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xca,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xca,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_eq_u32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xca,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xca,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_eq_u32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xca,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0xca,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_eq_u32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xca,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xca,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_eq_u32_e64 s1, s2 ; encoding: [0x7e,0x00,0xca,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xca,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_eq_u32_e64 s105, s105 ; encoding: [0x7e,0x00,0xca,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xca,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_eq_u32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xca,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xca,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_eq_u32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xca,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xca,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_eq_u32_e64 v1, v2 ; encoding: [0x7e,0x00,0xca,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xca,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_eq_u32_e64 v255, v255 ; encoding: [0x7e,0x00,0xca,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xca,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_eq_u32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xca,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xca,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_eq_u32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xca,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xda,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_eq_u64_e64 -1, -1 ; encoding: [0x7e,0x00,0xda,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x00,0xda,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_eq_u64_e64 0.5, null ; encoding: [0x7e,0x00,0xda,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x00,0xda,0xd4,0xff,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_eq_u64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xda,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xda,0xd4,0x7e,0xfa,0x01,0x00
+# GFX1250: v_cmpx_eq_u64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xda,0xd4,0x7e,0xfa,0x01,0x00]
+
+0x7e,0x00,0xda,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_eq_u64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xda,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xda,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_eq_u64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xda,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xda,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_eq_u64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xda,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xda,0xd4,0xfd,0xfc,0x00,0x00
+# GFX1250: v_cmpx_eq_u64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xda,0xd4,0xfd,0xfc,0x00,0x00]
+
+0x7e,0x00,0xda,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_eq_u64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xda,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xda,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_eq_u64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xda,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xda,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_eq_u64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xda,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xda,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_eq_u64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xda,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0x86,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ge_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x86,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x86,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_ge_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x86,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x86,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ge_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x86,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x86,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_ge_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x86,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x86,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_ge_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x86,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x86,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ge_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x86,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x86,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ge_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x86,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x86,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_ge_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x86,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x86,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_ge_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x86,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x86,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ge_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x86,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x86,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_ge_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x86,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x86,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_ge_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x86,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x86,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ge_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x86,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x86,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_ge_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x86,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x86,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ge_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x86,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x96,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ge_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x96,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x96,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_ge_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x96,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x96,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ge_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x96,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x96,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_ge_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x96,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x96,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_ge_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x96,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x96,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ge_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x96,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x96,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ge_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x96,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x96,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_ge_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x96,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x96,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_ge_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x96,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x96,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ge_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x96,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x96,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_ge_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x96,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x96,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_ge_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x96,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x96,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ge_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x96,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x96,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_ge_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x96,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x96,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ge_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x96,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xa6,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_ge_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa6,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xa6,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_ge_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa6,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xa6,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_ge_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa6,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xa6,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ge_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa6,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xa6,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ge_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa6,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa6,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ge_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa6,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xa6,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_ge_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa6,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xa6,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_ge_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa6,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xa6,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ge_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa6,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa6,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_ge_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa6,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xa6,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_ge_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa6,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xa6,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_ge_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa6,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0xb6,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ge_i16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb6,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb6,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_ge_i16_e64 0x3800, m0 ; encoding: [0x7e,0x00,0xb6,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb6,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ge_i16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb6,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb6,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ge_i16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb6,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xb6,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_ge_i16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb6,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xb6,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_ge_i16_e64 m0, 0x3800 ; encoding: [0x7e,0x00,0xb6,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb6,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ge_i16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb6,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xb6,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_ge_i16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb6,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xb6,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_ge_i16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb6,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xb6,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_ge_i16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb6,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xb6,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ge_i16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb6,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xb6,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_ge_i16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb6,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xb6,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_ge_i16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb6,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xb6,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ge_i16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb6,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb6,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_ge_i16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb6,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xc6,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ge_i32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc6,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xc6,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_ge_i32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc6,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xc6,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ge_i32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc6,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc6,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ge_i32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc6,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xc6,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_ge_i32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc6,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xc6,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ge_i32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc6,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0xc6,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ge_i32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc6,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xc6,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_ge_i32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc6,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xc6,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_ge_i32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc6,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xc6,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_ge_i32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc6,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xc6,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ge_i32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc6,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xc6,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_ge_i32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc6,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xc6,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_ge_i32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc6,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xc6,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ge_i32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc6,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc6,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_ge_i32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc6,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xd6,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_ge_i64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd6,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x00,0xd6,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ge_i64_e64 0.5, null ; encoding: [0x7e,0x00,0xd6,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x00,0xd6,0xd4,0xff,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ge_i64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd6,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd6,0xd4,0x7e,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ge_i64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd6,0xd4,0x7e,0xfa,0x01,0x00]
+
+0x7e,0x00,0xd6,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ge_i64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd6,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xd6,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_ge_i64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd6,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xd6,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_ge_i64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd6,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xd6,0xd4,0xfd,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ge_i64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd6,0xd4,0xfd,0xfc,0x00,0x00]
+
+0x7e,0x00,0xd6,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ge_i64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd6,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd6,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_ge_i64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd6,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xd6,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_ge_i64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd6,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xd6,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_ge_i64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd6,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0xbe,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ge_u16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xbe,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xbe,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_ge_u16_e64 0x3800, m0 ; encoding: [0x7e,0x00,0xbe,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xbe,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ge_u16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xbe,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xbe,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ge_u16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xbe,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xbe,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_ge_u16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xbe,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xbe,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_ge_u16_e64 m0, 0x3800 ; encoding: [0x7e,0x00,0xbe,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xbe,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ge_u16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xbe,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xbe,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_ge_u16_e64 s1, s2 ; encoding: [0x7e,0x00,0xbe,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xbe,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_ge_u16_e64 s105, s105 ; encoding: [0x7e,0x00,0xbe,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xbe,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_ge_u16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xbe,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xbe,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ge_u16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xbe,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xbe,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_ge_u16_e64 v1, v2 ; encoding: [0x7e,0x00,0xbe,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xbe,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_ge_u16_e64 v255, v255 ; encoding: [0x7e,0x00,0xbe,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xbe,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ge_u16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xbe,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xbe,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_ge_u16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xbe,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xce,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ge_u32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xce,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xce,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_ge_u32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xce,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xce,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ge_u32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xce,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xce,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ge_u32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xce,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xce,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_ge_u32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xce,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xce,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ge_u32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xce,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0xce,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ge_u32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xce,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xce,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_ge_u32_e64 s1, s2 ; encoding: [0x7e,0x00,0xce,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xce,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_ge_u32_e64 s105, s105 ; encoding: [0x7e,0x00,0xce,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xce,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_ge_u32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xce,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xce,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ge_u32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xce,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xce,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_ge_u32_e64 v1, v2 ; encoding: [0x7e,0x00,0xce,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xce,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_ge_u32_e64 v255, v255 ; encoding: [0x7e,0x00,0xce,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xce,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ge_u32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xce,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xce,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_ge_u32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xce,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xde,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_ge_u64_e64 -1, -1 ; encoding: [0x7e,0x00,0xde,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x00,0xde,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ge_u64_e64 0.5, null ; encoding: [0x7e,0x00,0xde,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x00,0xde,0xd4,0xff,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ge_u64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xde,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xde,0xd4,0x7e,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ge_u64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xde,0xd4,0x7e,0xfa,0x01,0x00]
+
+0x7e,0x00,0xde,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ge_u64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xde,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xde,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_ge_u64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xde,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xde,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_ge_u64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xde,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xde,0xd4,0xfd,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ge_u64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xde,0xd4,0xfd,0xfc,0x00,0x00]
+
+0x7e,0x00,0xde,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ge_u64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xde,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xde,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_ge_u64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xde,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xde,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_ge_u64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xde,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xde,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_ge_u64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xde,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0x84,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_gt_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x84,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x84,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_gt_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x84,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x84,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_gt_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x84,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x84,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_gt_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x84,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x84,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_gt_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x84,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x84,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_gt_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x84,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x84,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_gt_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x84,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x84,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_gt_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x84,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x84,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_gt_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x84,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x84,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_gt_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x84,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x84,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_gt_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x84,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x84,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_gt_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x84,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x84,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_gt_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x84,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x84,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_gt_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x84,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x84,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_gt_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x84,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x94,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_gt_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x94,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x94,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_gt_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x94,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x94,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_gt_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x94,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x94,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_gt_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x94,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x94,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_gt_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x94,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x94,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_gt_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x94,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x94,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_gt_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x94,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x94,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_gt_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x94,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x94,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_gt_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x94,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x94,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_gt_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x94,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x94,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_gt_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x94,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x94,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_gt_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x94,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x94,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_gt_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x94,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x94,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_gt_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x94,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x94,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_gt_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x94,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xa4,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_gt_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa4,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xa4,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_gt_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa4,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xa4,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_gt_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa4,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xa4,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_gt_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa4,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xa4,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_gt_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa4,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa4,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_gt_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa4,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xa4,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_gt_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa4,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xa4,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_gt_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa4,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xa4,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_gt_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa4,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa4,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_gt_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa4,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xa4,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_gt_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa4,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xa4,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_gt_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa4,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0xb4,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_gt_i16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb4,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb4,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_gt_i16_e64 0x3800, m0 ; encoding: [0x7e,0x00,0xb4,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb4,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_gt_i16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb4,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb4,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_gt_i16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb4,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xb4,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_gt_i16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb4,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xb4,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_gt_i16_e64 m0, 0x3800 ; encoding: [0x7e,0x00,0xb4,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb4,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_gt_i16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb4,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xb4,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_gt_i16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb4,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xb4,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_gt_i16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb4,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xb4,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_gt_i16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb4,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xb4,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_gt_i16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb4,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xb4,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_gt_i16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb4,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xb4,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_gt_i16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb4,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xb4,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_gt_i16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb4,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb4,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_gt_i16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb4,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xc4,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_gt_i32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc4,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xc4,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_gt_i32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc4,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xc4,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_gt_i32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc4,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc4,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_gt_i32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc4,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xc4,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_gt_i32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc4,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xc4,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_gt_i32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc4,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0xc4,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_gt_i32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc4,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xc4,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_gt_i32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc4,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xc4,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_gt_i32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc4,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xc4,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_gt_i32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc4,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xc4,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_gt_i32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc4,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xc4,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_gt_i32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc4,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xc4,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_gt_i32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc4,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xc4,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_gt_i32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc4,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc4,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_gt_i32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc4,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xd4,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_gt_i64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd4,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x00,0xd4,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_gt_i64_e64 0.5, null ; encoding: [0x7e,0x00,0xd4,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x00,0xd4,0xd4,0xff,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_gt_i64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd4,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd4,0xd4,0x7e,0xfa,0x01,0x00
+# GFX1250: v_cmpx_gt_i64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd4,0xd4,0x7e,0xfa,0x01,0x00]
+
+0x7e,0x00,0xd4,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_gt_i64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd4,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xd4,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_gt_i64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd4,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xd4,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_gt_i64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd4,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xd4,0xd4,0xfd,0xfc,0x00,0x00
+# GFX1250: v_cmpx_gt_i64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd4,0xd4,0xfd,0xfc,0x00,0x00]
+
+0x7e,0x00,0xd4,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_gt_i64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd4,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd4,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_gt_i64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd4,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xd4,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_gt_i64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd4,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xd4,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_gt_i64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd4,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0xbc,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_gt_u16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xbc,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xbc,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_gt_u16_e64 0x3800, m0 ; encoding: [0x7e,0x00,0xbc,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xbc,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_gt_u16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xbc,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xbc,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_gt_u16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xbc,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xbc,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_gt_u16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xbc,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xbc,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_gt_u16_e64 m0, 0x3800 ; encoding: [0x7e,0x00,0xbc,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xbc,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_gt_u16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xbc,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xbc,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_gt_u16_e64 s1, s2 ; encoding: [0x7e,0x00,0xbc,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xbc,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_gt_u16_e64 s105, s105 ; encoding: [0x7e,0x00,0xbc,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xbc,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_gt_u16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xbc,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xbc,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_gt_u16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xbc,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xbc,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_gt_u16_e64 v1, v2 ; encoding: [0x7e,0x00,0xbc,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xbc,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_gt_u16_e64 v255, v255 ; encoding: [0x7e,0x00,0xbc,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xbc,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_gt_u16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xbc,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xbc,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_gt_u16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xbc,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xcc,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_gt_u32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xcc,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xcc,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_gt_u32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xcc,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xcc,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_gt_u32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xcc,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xcc,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_gt_u32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xcc,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xcc,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_gt_u32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xcc,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xcc,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_gt_u32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xcc,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0xcc,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_gt_u32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xcc,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xcc,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_gt_u32_e64 s1, s2 ; encoding: [0x7e,0x00,0xcc,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xcc,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_gt_u32_e64 s105, s105 ; encoding: [0x7e,0x00,0xcc,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xcc,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_gt_u32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xcc,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xcc,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_gt_u32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xcc,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xcc,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_gt_u32_e64 v1, v2 ; encoding: [0x7e,0x00,0xcc,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xcc,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_gt_u32_e64 v255, v255 ; encoding: [0x7e,0x00,0xcc,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xcc,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_gt_u32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xcc,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xcc,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_gt_u32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xcc,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xdc,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_gt_u64_e64 -1, -1 ; encoding: [0x7e,0x00,0xdc,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x00,0xdc,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_gt_u64_e64 0.5, null ; encoding: [0x7e,0x00,0xdc,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x00,0xdc,0xd4,0xff,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_gt_u64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xdc,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xdc,0xd4,0x7e,0xfa,0x01,0x00
+# GFX1250: v_cmpx_gt_u64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xdc,0xd4,0x7e,0xfa,0x01,0x00]
+
+0x7e,0x00,0xdc,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_gt_u64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xdc,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xdc,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_gt_u64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xdc,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xdc,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_gt_u64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xdc,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xdc,0xd4,0xfd,0xfc,0x00,0x00
+# GFX1250: v_cmpx_gt_u64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xdc,0xd4,0xfd,0xfc,0x00,0x00]
+
+0x7e,0x00,0xdc,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_gt_u64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xdc,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xdc,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_gt_u64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xdc,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xdc,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_gt_u64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xdc,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xdc,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_gt_u64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xdc,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0x83,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_le_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x83,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x83,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_le_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x83,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x83,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_le_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x83,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x83,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_le_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x83,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x83,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_le_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x83,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x83,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_le_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x83,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x83,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_le_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x83,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x83,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_le_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x83,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x83,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_le_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x83,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x83,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_le_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x83,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x83,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_le_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x83,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x83,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_le_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x83,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x83,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_le_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x83,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x83,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_le_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x83,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x83,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_le_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x83,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x93,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_le_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x93,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x93,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_le_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x93,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x93,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_le_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x93,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x93,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_le_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x93,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x93,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_le_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x93,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x93,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_le_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x93,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x93,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_le_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x93,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x93,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_le_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x93,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x93,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_le_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x93,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x93,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_le_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x93,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x93,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_le_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x93,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x93,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_le_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x93,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x93,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_le_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x93,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x93,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_le_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x93,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x93,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_le_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x93,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xa3,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_le_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa3,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xa3,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_le_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa3,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xa3,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_le_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa3,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xa3,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_le_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa3,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xa3,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_le_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa3,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa3,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_le_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa3,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xa3,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_le_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa3,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xa3,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_le_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa3,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xa3,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_le_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa3,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa3,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_le_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa3,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xa3,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_le_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa3,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xa3,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_le_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa3,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0xb3,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_le_i16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb3,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb3,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_le_i16_e64 0x3800, m0 ; encoding: [0x7e,0x00,0xb3,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb3,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_le_i16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb3,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb3,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_le_i16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb3,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xb3,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_le_i16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb3,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xb3,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_le_i16_e64 m0, 0x3800 ; encoding: [0x7e,0x00,0xb3,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb3,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_le_i16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb3,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xb3,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_le_i16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb3,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xb3,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_le_i16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb3,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xb3,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_le_i16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb3,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xb3,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_le_i16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb3,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xb3,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_le_i16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb3,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xb3,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_le_i16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb3,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xb3,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_le_i16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb3,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb3,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_le_i16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb3,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xc3,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_le_i32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc3,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xc3,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_le_i32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc3,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xc3,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_le_i32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc3,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc3,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_le_i32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc3,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xc3,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_le_i32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc3,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xc3,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_le_i32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc3,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0xc3,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_le_i32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc3,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xc3,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_le_i32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc3,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xc3,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_le_i32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc3,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xc3,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_le_i32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc3,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xc3,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_le_i32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc3,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xc3,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_le_i32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc3,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xc3,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_le_i32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc3,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xc3,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_le_i32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc3,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc3,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_le_i32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc3,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xd3,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_le_i64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd3,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x00,0xd3,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_le_i64_e64 0.5, null ; encoding: [0x7e,0x00,0xd3,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x00,0xd3,0xd4,0xff,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_le_i64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd3,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd3,0xd4,0x7e,0xfa,0x01,0x00
+# GFX1250: v_cmpx_le_i64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd3,0xd4,0x7e,0xfa,0x01,0x00]
+
+0x7e,0x00,0xd3,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_le_i64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd3,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xd3,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_le_i64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd3,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xd3,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_le_i64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd3,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xd3,0xd4,0xfd,0xfc,0x00,0x00
+# GFX1250: v_cmpx_le_i64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd3,0xd4,0xfd,0xfc,0x00,0x00]
+
+0x7e,0x00,0xd3,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_le_i64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd3,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd3,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_le_i64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd3,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xd3,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_le_i64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd3,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xd3,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_le_i64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd3,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0xbb,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_le_u16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xbb,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xbb,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_le_u16_e64 0x3800, m0 ; encoding: [0x7e,0x00,0xbb,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xbb,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_le_u16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xbb,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xbb,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_le_u16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xbb,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xbb,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_le_u16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xbb,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xbb,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_le_u16_e64 m0, 0x3800 ; encoding: [0x7e,0x00,0xbb,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xbb,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_le_u16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xbb,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xbb,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_le_u16_e64 s1, s2 ; encoding: [0x7e,0x00,0xbb,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xbb,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_le_u16_e64 s105, s105 ; encoding: [0x7e,0x00,0xbb,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xbb,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_le_u16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xbb,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xbb,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_le_u16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xbb,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xbb,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_le_u16_e64 v1, v2 ; encoding: [0x7e,0x00,0xbb,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xbb,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_le_u16_e64 v255, v255 ; encoding: [0x7e,0x00,0xbb,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xbb,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_le_u16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xbb,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xbb,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_le_u16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xbb,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xcb,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_le_u32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xcb,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xcb,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_le_u32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xcb,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xcb,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_le_u32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xcb,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xcb,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_le_u32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xcb,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xcb,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_le_u32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xcb,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xcb,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_le_u32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xcb,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0xcb,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_le_u32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xcb,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xcb,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_le_u32_e64 s1, s2 ; encoding: [0x7e,0x00,0xcb,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xcb,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_le_u32_e64 s105, s105 ; encoding: [0x7e,0x00,0xcb,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xcb,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_le_u32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xcb,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xcb,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_le_u32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xcb,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xcb,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_le_u32_e64 v1, v2 ; encoding: [0x7e,0x00,0xcb,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xcb,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_le_u32_e64 v255, v255 ; encoding: [0x7e,0x00,0xcb,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xcb,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_le_u32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xcb,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xcb,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_le_u32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xcb,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xdb,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_le_u64_e64 -1, -1 ; encoding: [0x7e,0x00,0xdb,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x00,0xdb,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_le_u64_e64 0.5, null ; encoding: [0x7e,0x00,0xdb,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x00,0xdb,0xd4,0xff,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_le_u64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xdb,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xdb,0xd4,0x7e,0xfa,0x01,0x00
+# GFX1250: v_cmpx_le_u64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xdb,0xd4,0x7e,0xfa,0x01,0x00]
+
+0x7e,0x00,0xdb,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_le_u64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xdb,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xdb,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_le_u64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xdb,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xdb,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_le_u64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xdb,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xdb,0xd4,0xfd,0xfc,0x00,0x00
+# GFX1250: v_cmpx_le_u64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xdb,0xd4,0xfd,0xfc,0x00,0x00]
+
+0x7e,0x00,0xdb,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_le_u64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xdb,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xdb,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_le_u64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xdb,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xdb,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_le_u64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xdb,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xdb,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_le_u64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xdb,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0x85,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lg_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x85,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x85,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_lg_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x85,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x85,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lg_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x85,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x85,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_lg_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x85,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x85,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_lg_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x85,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x85,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_lg_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x85,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x85,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_lg_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x85,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x85,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_lg_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x85,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x85,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_lg_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x85,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x85,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_lg_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x85,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x85,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_lg_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x85,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x85,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_lg_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x85,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x85,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lg_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x85,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x85,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_lg_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x85,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x85,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_lg_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x85,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x95,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lg_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x95,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x95,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_lg_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x95,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x95,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lg_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x95,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x95,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_lg_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x95,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x95,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_lg_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x95,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x95,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_lg_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x95,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x95,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_lg_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x95,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x95,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_lg_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x95,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x95,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_lg_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x95,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x95,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_lg_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x95,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x95,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_lg_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x95,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x95,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_lg_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x95,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x95,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lg_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x95,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x95,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_lg_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x95,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x95,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_lg_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x95,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xa5,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_lg_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa5,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xa5,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_lg_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa5,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xa5,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_lg_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa5,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xa5,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_lg_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa5,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xa5,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lg_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa5,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa5,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_lg_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa5,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xa5,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_lg_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa5,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xa5,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_lg_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa5,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xa5,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lg_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa5,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa5,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_lg_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa5,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xa5,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_lg_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa5,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xa5,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_lg_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa5,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0x81,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lt_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x81,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x81,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_lt_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x81,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x81,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lt_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x81,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x81,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_lt_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x81,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x81,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_lt_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x81,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x81,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_lt_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x81,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x81,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_lt_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x81,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x81,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_lt_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x81,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x81,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_lt_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x81,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x81,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_lt_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x81,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x81,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_lt_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x81,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x81,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_lt_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x81,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x81,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lt_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x81,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x81,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_lt_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x81,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x81,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_lt_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x81,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x91,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lt_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x91,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x91,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_lt_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x91,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x91,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lt_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x91,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x91,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_lt_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x91,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x91,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_lt_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x91,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x91,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_lt_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x91,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x91,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_lt_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x91,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x91,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_lt_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x91,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x91,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_lt_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x91,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x91,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_lt_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x91,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x91,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_lt_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x91,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x91,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_lt_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x91,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x91,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lt_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x91,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x91,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_lt_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x91,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x91,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_lt_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x91,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xa1,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_lt_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa1,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xa1,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_lt_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa1,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xa1,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_lt_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa1,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xa1,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_lt_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa1,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xa1,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lt_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa1,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa1,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_lt_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa1,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xa1,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_lt_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa1,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xa1,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_lt_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa1,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xa1,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lt_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa1,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa1,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_lt_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa1,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xa1,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_lt_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa1,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xa1,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_lt_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa1,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0xb1,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lt_i16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb1,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb1,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_lt_i16_e64 0x3800, m0 ; encoding: [0x7e,0x00,0xb1,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb1,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lt_i16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb1,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb1,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_lt_i16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb1,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xb1,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_lt_i16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb1,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xb1,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_lt_i16_e64 m0, 0x3800 ; encoding: [0x7e,0x00,0xb1,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb1,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_lt_i16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb1,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xb1,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_lt_i16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb1,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xb1,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_lt_i16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb1,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xb1,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_lt_i16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb1,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xb1,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_lt_i16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb1,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xb1,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_lt_i16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb1,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xb1,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_lt_i16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb1,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xb1,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lt_i16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb1,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb1,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_lt_i16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb1,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xc1,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lt_i32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc1,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xc1,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_lt_i32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc1,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xc1,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lt_i32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc1,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc1,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_lt_i32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc1,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xc1,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_lt_i32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc1,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xc1,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_lt_i32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc1,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0xc1,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_lt_i32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc1,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xc1,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_lt_i32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc1,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xc1,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_lt_i32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc1,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xc1,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_lt_i32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc1,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xc1,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_lt_i32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc1,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xc1,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_lt_i32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc1,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xc1,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_lt_i32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc1,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xc1,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lt_i32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc1,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc1,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_lt_i32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc1,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xd1,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_lt_i64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd1,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x00,0xd1,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_lt_i64_e64 0.5, null ; encoding: [0x7e,0x00,0xd1,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x00,0xd1,0xd4,0xff,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lt_i64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd1,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd1,0xd4,0x7e,0xfa,0x01,0x00
+# GFX1250: v_cmpx_lt_i64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd1,0xd4,0x7e,0xfa,0x01,0x00]
+
+0x7e,0x00,0xd1,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_lt_i64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd1,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xd1,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_lt_i64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd1,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xd1,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_lt_i64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd1,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xd1,0xd4,0xfd,0xfc,0x00,0x00
+# GFX1250: v_cmpx_lt_i64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd1,0xd4,0xfd,0xfc,0x00,0x00]
+
+0x7e,0x00,0xd1,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lt_i64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd1,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd1,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_lt_i64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd1,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xd1,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_lt_i64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd1,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xd1,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_lt_i64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd1,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0xb9,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lt_u16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb9,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb9,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_lt_u16_e64 0x3800, m0 ; encoding: [0x7e,0x00,0xb9,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb9,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lt_u16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb9,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb9,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_lt_u16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb9,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xb9,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_lt_u16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb9,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xb9,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_lt_u16_e64 m0, 0x3800 ; encoding: [0x7e,0x00,0xb9,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb9,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_lt_u16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb9,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xb9,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_lt_u16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb9,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xb9,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_lt_u16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb9,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xb9,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_lt_u16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb9,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xb9,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_lt_u16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb9,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xb9,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_lt_u16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb9,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xb9,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_lt_u16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb9,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xb9,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lt_u16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb9,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb9,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_lt_u16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb9,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xc9,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_lt_u32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc9,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xc9,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_lt_u32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc9,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xc9,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lt_u32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc9,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc9,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_lt_u32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc9,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xc9,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_lt_u32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc9,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xc9,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_lt_u32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc9,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0xc9,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_lt_u32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc9,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xc9,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_lt_u32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc9,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xc9,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_lt_u32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc9,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xc9,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_lt_u32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc9,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xc9,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_lt_u32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc9,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xc9,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_lt_u32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc9,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xc9,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_lt_u32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc9,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xc9,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lt_u32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc9,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc9,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_lt_u32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc9,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xd9,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_lt_u64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd9,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x00,0xd9,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_lt_u64_e64 0.5, null ; encoding: [0x7e,0x00,0xd9,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x00,0xd9,0xd4,0xff,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lt_u64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd9,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd9,0xd4,0x7e,0xfa,0x01,0x00
+# GFX1250: v_cmpx_lt_u64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd9,0xd4,0x7e,0xfa,0x01,0x00]
+
+0x7e,0x00,0xd9,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_lt_u64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd9,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xd9,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_lt_u64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd9,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xd9,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_lt_u64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd9,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xd9,0xd4,0xfd,0xfc,0x00,0x00
+# GFX1250: v_cmpx_lt_u64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd9,0xd4,0xfd,0xfc,0x00,0x00]
+
+0x7e,0x00,0xd9,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_lt_u64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd9,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd9,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_lt_u64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd9,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xd9,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_lt_u64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd9,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xd9,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_lt_u64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd9,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0xb5,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ne_i16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xb5,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb5,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_ne_i16_e64 0x3800, m0 ; encoding: [0x7e,0x00,0xb5,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb5,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ne_i16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xb5,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb5,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ne_i16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xb5,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xb5,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_ne_i16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xb5,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xb5,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_ne_i16_e64 m0, 0x3800 ; encoding: [0x7e,0x00,0xb5,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xb5,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ne_i16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xb5,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xb5,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_ne_i16_e64 s1, s2 ; encoding: [0x7e,0x00,0xb5,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xb5,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_ne_i16_e64 s105, s105 ; encoding: [0x7e,0x00,0xb5,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xb5,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_ne_i16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xb5,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xb5,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ne_i16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xb5,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xb5,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_ne_i16_e64 v1, v2 ; encoding: [0x7e,0x00,0xb5,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xb5,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_ne_i16_e64 v255, v255 ; encoding: [0x7e,0x00,0xb5,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xb5,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ne_i16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xb5,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xb5,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_ne_i16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xb5,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xc5,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ne_i32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xc5,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xc5,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_ne_i32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xc5,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xc5,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ne_i32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xc5,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc5,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ne_i32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xc5,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xc5,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_ne_i32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xc5,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xc5,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ne_i32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xc5,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0xc5,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ne_i32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xc5,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xc5,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_ne_i32_e64 s1, s2 ; encoding: [0x7e,0x00,0xc5,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xc5,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_ne_i32_e64 s105, s105 ; encoding: [0x7e,0x00,0xc5,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xc5,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_ne_i32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xc5,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xc5,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ne_i32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xc5,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xc5,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_ne_i32_e64 v1, v2 ; encoding: [0x7e,0x00,0xc5,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xc5,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_ne_i32_e64 v255, v255 ; encoding: [0x7e,0x00,0xc5,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xc5,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ne_i32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xc5,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xc5,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_ne_i32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xc5,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xd5,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_ne_i64_e64 -1, -1 ; encoding: [0x7e,0x00,0xd5,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x00,0xd5,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ne_i64_e64 0.5, null ; encoding: [0x7e,0x00,0xd5,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x00,0xd5,0xd4,0xff,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ne_i64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xd5,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd5,0xd4,0x7e,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ne_i64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xd5,0xd4,0x7e,0xfa,0x01,0x00]
+
+0x7e,0x00,0xd5,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ne_i64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xd5,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xd5,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_ne_i64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xd5,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xd5,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_ne_i64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xd5,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xd5,0xd4,0xfd,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ne_i64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xd5,0xd4,0xfd,0xfc,0x00,0x00]
+
+0x7e,0x00,0xd5,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ne_i64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xd5,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xd5,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_ne_i64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xd5,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xd5,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_ne_i64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xd5,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xd5,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_ne_i64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xd5,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0xbd,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ne_u16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xbd,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xbd,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_ne_u16_e64 0x3800, m0 ; encoding: [0x7e,0x00,0xbd,0xd4,0xff,0xfa,0x00,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xbd,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ne_u16_e64 0xfe0b, vcc_hi ; encoding: [0x7e,0x00,0xbd,0xd4,0xff,0xd6,0x00,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xbd,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ne_u16_e64 exec_hi, null ; encoding: [0x7e,0x00,0xbd,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xbd,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_ne_u16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xbd,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xbd,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00
+# GFX1250: v_cmpx_ne_u16_e64 m0, 0x3800 ; encoding: [0x7e,0x00,0xbd,0xd4,0x7d,0xfe,0x01,0x00,0x00,0x38,0x00,0x00]
+
+0x7e,0x00,0xbd,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ne_u16_e64 null, exec_lo ; encoding: [0x7e,0x00,0xbd,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xbd,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_ne_u16_e64 s1, s2 ; encoding: [0x7e,0x00,0xbd,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xbd,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_ne_u16_e64 s105, s105 ; encoding: [0x7e,0x00,0xbd,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xbd,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_ne_u16_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xbd,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xbd,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ne_u16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xbd,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xbd,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_ne_u16_e64 v1, v2 ; encoding: [0x7e,0x00,0xbd,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xbd,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_ne_u16_e64 v255, v255 ; encoding: [0x7e,0x00,0xbd,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xbd,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ne_u16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0xbd,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0xbd,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_ne_u16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xbd,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xcd,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ne_u32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0xcd,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x00,0xcd,0xd4,0xf0,0xfa,0x00,0x00
+# GFX1250: v_cmpx_ne_u32_e64 0.5, m0 ; encoding: [0x7e,0x00,0xcd,0xd4,0xf0,0xfa,0x00,0x00]
+
+0x7e,0x00,0xcd,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ne_u32_e64 0xaf123456, vcc_hi ; encoding: [0x7e,0x00,0xcd,0xd4,0xff,0xd6,0x00,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xcd,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ne_u32_e64 exec_hi, null ; encoding: [0x7e,0x00,0xcd,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xcd,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_ne_u32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0xcd,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0xcd,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ne_u32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0xcd,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0xcd,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ne_u32_e64 null, exec_lo ; encoding: [0x7e,0x00,0xcd,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0xcd,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_ne_u32_e64 s1, s2 ; encoding: [0x7e,0x00,0xcd,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0xcd,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_ne_u32_e64 s105, s105 ; encoding: [0x7e,0x00,0xcd,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0xcd,0xd4,0xfd,0xd4,0x00,0x00
+# GFX1250: v_cmpx_ne_u32_e64 src_scc, vcc_lo ; encoding: [0x7e,0x00,0xcd,0xd4,0xfd,0xd4,0x00,0x00]
+
+0x7e,0x00,0xcd,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ne_u32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0xcd,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0xcd,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_ne_u32_e64 v1, v2 ; encoding: [0x7e,0x00,0xcd,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0xcd,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_ne_u32_e64 v255, v255 ; encoding: [0x7e,0x00,0xcd,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0xcd,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ne_u32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0xcd,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xcd,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_ne_u32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0xcd,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x00,0xdd,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_ne_u64_e64 -1, -1 ; encoding: [0x7e,0x00,0xdd,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x00,0xdd,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ne_u64_e64 0.5, null ; encoding: [0x7e,0x00,0xdd,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x00,0xdd,0xd4,0xff,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ne_u64_e64 lit64(0xaf123456), vcc ; encoding: [0x7e,0x00,0xdd,0xd4,0xfe,0xd4,0x00,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xdd,0xd4,0x7e,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ne_u64_e64 exec, src_scc ; encoding: [0x7e,0x00,0xdd,0xd4,0x7e,0xfa,0x01,0x00]
+
+0x7e,0x00,0xdd,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ne_u64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xdd,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xdd,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_ne_u64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xdd,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xdd,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_ne_u64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xdd,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xdd,0xd4,0xfd,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ne_u64_e64 src_scc, exec ; encoding: [0x7e,0x00,0xdd,0xd4,0xfd,0xfc,0x00,0x00]
+
+0x7e,0x00,0xdd,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ne_u64_e64 ttmp[14:15], lit64(0xaf123456) ; encoding: [0x7e,0x00,0xdd,0xd4,0x7a,0xfc,0x01,0x00,0x56,0x34,0x12,0xaf,0x00,0x00,0x00,0x00]
+
+0x7e,0x00,0xdd,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_ne_u64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xdd,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xdd,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_ne_u64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xdd,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xdd,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_ne_u64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xdd,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0x8d,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_neq_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x8d,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x8d,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_neq_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x8d,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x8d,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_neq_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x8d,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x8d,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_neq_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x8d,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x8d,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_neq_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x8d,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x8d,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_neq_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x8d,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x8d,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_neq_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x8d,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x8d,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_neq_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x8d,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x8d,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_neq_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x8d,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x8d,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_neq_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x8d,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x8d,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_neq_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x8d,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x8d,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_neq_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x8d,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x8d,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_neq_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x8d,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x8d,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_neq_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x8d,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x8d,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_neq_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x8d,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x9d,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_neq_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x9d,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x9d,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_neq_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x9d,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x9d,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_neq_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x9d,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x9d,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_neq_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x9d,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x9d,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_neq_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x9d,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x9d,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_neq_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x9d,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x9d,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_neq_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x9d,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x9d,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_neq_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x9d,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x9d,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_neq_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x9d,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x9d,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_neq_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x9d,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x9d,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_neq_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x9d,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x9d,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_neq_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x9d,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x9d,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_neq_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x9d,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x9d,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_neq_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x9d,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x9d,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_neq_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x9d,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xad,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_neq_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xad,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xad,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_neq_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xad,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xad,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_neq_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xad,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xad,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_neq_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xad,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xad,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_neq_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xad,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xad,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_neq_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xad,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xad,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_neq_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xad,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xad,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_neq_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xad,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xad,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_neq_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xad,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xad,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_neq_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xad,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xad,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_neq_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xad,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xad,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_neq_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xad,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0x89,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nge_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x89,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x89,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_nge_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x89,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x89,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nge_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x89,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x89,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_nge_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x89,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x89,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_nge_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x89,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x89,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_nge_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x89,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x89,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_nge_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x89,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x89,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_nge_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x89,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x89,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_nge_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x89,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x89,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_nge_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x89,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x89,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_nge_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x89,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x89,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_nge_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x89,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x89,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nge_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x89,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x89,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_nge_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x89,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x89,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_nge_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x89,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x99,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nge_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x99,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x99,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_nge_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x99,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x99,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nge_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x99,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x99,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_nge_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x99,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x99,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_nge_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x99,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x99,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_nge_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x99,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x99,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_nge_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x99,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x99,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_nge_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x99,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x99,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_nge_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x99,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x99,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_nge_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x99,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x99,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_nge_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x99,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x99,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_nge_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x99,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x99,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nge_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x99,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x99,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_nge_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x99,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x99,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_nge_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x99,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xa9,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_nge_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa9,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xa9,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_nge_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa9,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xa9,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_nge_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa9,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xa9,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_nge_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa9,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xa9,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nge_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa9,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa9,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_nge_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa9,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xa9,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_nge_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa9,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xa9,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_nge_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa9,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xa9,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nge_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa9,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa9,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_nge_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa9,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xa9,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_nge_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa9,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xa9,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_nge_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa9,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0x8b,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ngt_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x8b,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x8b,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_ngt_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x8b,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x8b,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ngt_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x8b,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x8b,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_ngt_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x8b,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x8b,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_ngt_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x8b,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x8b,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ngt_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x8b,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x8b,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ngt_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x8b,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x8b,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_ngt_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x8b,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x8b,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_ngt_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x8b,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x8b,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ngt_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x8b,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x8b,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_ngt_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x8b,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x8b,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_ngt_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x8b,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x8b,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ngt_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x8b,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x8b,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_ngt_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x8b,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x8b,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ngt_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x8b,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x9b,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_ngt_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x9b,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x9b,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_ngt_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x9b,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x9b,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ngt_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x9b,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x9b,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_ngt_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x9b,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x9b,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_ngt_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x9b,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x9b,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ngt_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x9b,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x9b,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_ngt_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x9b,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x9b,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_ngt_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x9b,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x9b,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_ngt_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x9b,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x9b,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_ngt_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x9b,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x9b,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_ngt_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x9b,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x9b,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_ngt_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x9b,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x9b,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ngt_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x9b,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x9b,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_ngt_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x9b,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x9b,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ngt_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x9b,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xab,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_ngt_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xab,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xab,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_ngt_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xab,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xab,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_ngt_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xab,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xab,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_ngt_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xab,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xab,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ngt_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xab,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xab,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_ngt_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xab,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xab,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_ngt_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xab,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xab,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_ngt_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xab,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xab,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_ngt_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xab,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xab,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_ngt_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xab,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xab,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_ngt_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xab,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xab,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_ngt_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xab,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0x8c,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nle_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x8c,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x8c,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_nle_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x8c,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x8c,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nle_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x8c,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x8c,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_nle_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x8c,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x8c,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_nle_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x8c,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x8c,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_nle_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x8c,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x8c,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_nle_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x8c,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x8c,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_nle_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x8c,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x8c,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_nle_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x8c,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x8c,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_nle_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x8c,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x8c,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_nle_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x8c,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x8c,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_nle_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x8c,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x8c,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nle_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x8c,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x8c,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_nle_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x8c,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x8c,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_nle_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x8c,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x9c,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nle_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x9c,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x9c,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_nle_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x9c,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x9c,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nle_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x9c,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x9c,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_nle_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x9c,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x9c,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_nle_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x9c,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x9c,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_nle_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x9c,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x9c,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_nle_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x9c,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x9c,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_nle_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x9c,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x9c,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_nle_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x9c,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x9c,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_nle_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x9c,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x9c,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_nle_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x9c,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x9c,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_nle_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x9c,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x9c,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nle_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x9c,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x9c,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_nle_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x9c,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x9c,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_nle_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x9c,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xac,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_nle_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xac,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xac,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_nle_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xac,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xac,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_nle_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xac,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xac,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_nle_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xac,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xac,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nle_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xac,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xac,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_nle_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xac,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xac,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_nle_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xac,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xac,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_nle_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xac,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xac,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nle_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xac,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xac,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_nle_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xac,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xac,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_nle_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xac,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xac,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_nle_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xac,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0x8a,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nlg_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x8a,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x8a,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_nlg_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x8a,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x8a,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nlg_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x8a,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x8a,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_nlg_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x8a,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x8a,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_nlg_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x8a,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x8a,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_nlg_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x8a,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x8a,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_nlg_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x8a,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x8a,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_nlg_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x8a,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x8a,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_nlg_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x8a,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x8a,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_nlg_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x8a,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x8a,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_nlg_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x8a,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x8a,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_nlg_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x8a,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x8a,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nlg_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x8a,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x8a,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_nlg_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x8a,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x8a,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_nlg_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x8a,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x9a,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nlg_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x9a,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x9a,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_nlg_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x9a,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x9a,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nlg_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x9a,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x9a,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_nlg_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x9a,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x9a,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_nlg_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x9a,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x9a,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_nlg_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x9a,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x9a,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_nlg_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x9a,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x9a,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_nlg_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x9a,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x9a,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_nlg_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x9a,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x9a,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_nlg_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x9a,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x9a,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_nlg_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x9a,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x9a,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_nlg_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x9a,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x9a,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nlg_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x9a,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x9a,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_nlg_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x9a,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x9a,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_nlg_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x9a,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xaa,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_nlg_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xaa,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xaa,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_nlg_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xaa,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xaa,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_nlg_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xaa,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xaa,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_nlg_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xaa,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xaa,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nlg_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xaa,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xaa,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_nlg_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xaa,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xaa,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_nlg_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xaa,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xaa,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_nlg_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xaa,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xaa,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nlg_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xaa,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xaa,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_nlg_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xaa,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xaa,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_nlg_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xaa,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xaa,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_nlg_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xaa,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0x8e,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nlt_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x8e,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x8e,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_nlt_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x8e,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x8e,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nlt_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x8e,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x8e,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_nlt_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x8e,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x8e,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_nlt_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x8e,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x8e,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_nlt_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x8e,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x8e,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_nlt_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x8e,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x8e,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_nlt_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x8e,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x8e,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_nlt_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x8e,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x8e,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_nlt_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x8e,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x8e,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_nlt_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x8e,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x8e,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_nlt_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x8e,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x8e,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nlt_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x8e,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x8e,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_nlt_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x8e,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x8e,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_nlt_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x8e,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x9e,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_nlt_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x9e,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x9e,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_nlt_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x9e,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x9e,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nlt_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x9e,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x9e,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_nlt_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x9e,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x9e,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_nlt_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x9e,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x9e,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_nlt_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x9e,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x9e,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_nlt_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x9e,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x9e,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_nlt_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x9e,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x9e,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_nlt_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x9e,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x9e,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_nlt_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x9e,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x9e,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_nlt_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x9e,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x9e,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_nlt_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x9e,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x9e,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nlt_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x9e,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x9e,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_nlt_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x9e,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x9e,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_nlt_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x9e,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xae,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_nlt_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xae,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xae,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_nlt_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xae,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xae,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_nlt_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xae,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xae,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_nlt_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xae,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xae,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nlt_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xae,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xae,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_nlt_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xae,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xae,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_nlt_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xae,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xae,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_nlt_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xae,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xae,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_nlt_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xae,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xae,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_nlt_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xae,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xae,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_nlt_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xae,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xae,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_nlt_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xae,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0x87,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_o_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x87,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x87,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_o_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x87,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x87,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_o_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x87,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x87,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_o_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x87,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x87,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_o_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x87,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x87,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_o_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x87,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x87,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_o_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x87,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x87,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_o_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x87,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x87,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_o_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x87,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x87,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_o_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x87,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x87,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_o_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x87,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x87,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_o_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x87,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x87,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_o_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x87,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x87,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_o_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x87,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x87,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_o_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x87,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x97,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_o_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x97,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x97,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_o_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x97,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x97,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_o_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x97,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x97,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_o_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x97,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x97,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_o_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x97,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x97,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_o_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x97,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x97,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_o_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x97,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x97,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_o_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x97,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x97,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_o_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x97,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x97,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_o_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x97,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x97,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_o_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x97,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x97,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_o_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x97,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x97,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_o_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x97,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x97,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_o_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x97,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x97,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_o_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x97,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xa7,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_o_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa7,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xa7,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_o_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa7,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xa7,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_o_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa7,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xa7,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_o_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa7,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xa7,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_o_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa7,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa7,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_o_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa7,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xa7,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_o_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa7,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xa7,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_o_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa7,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xa7,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_o_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa7,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa7,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_o_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa7,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xa7,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_o_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa7,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xa7,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_o_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa7,0xd4,0x6a,0xf4,0x00,0x00]
+
+0x7e,0x00,0x88,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_u_f16_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x88,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x88,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_u_f16_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x88,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x88,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_u_f16_e64 -|0xfe0b|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x88,0xd4,0xff,0xd6,0x00,0x60,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x88,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_u_f16_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x88,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x88,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_u_f16_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x88,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x88,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_u_f16_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x88,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x88,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_u_f16_e64 null, exec_lo ; encoding: [0x7e,0x00,0x88,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x88,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_u_f16_e64 s1, s2 ; encoding: [0x7e,0x00,0x88,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x88,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_u_f16_e64 s105, s105 ; encoding: [0x7e,0x00,0x88,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x88,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_u_f16_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x88,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x88,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_u_f16_e64 v1, v2 ; encoding: [0x7e,0x00,0x88,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x88,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_u_f16_e64 v255, v255 ; encoding: [0x7e,0x00,0x88,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x88,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00
+# GFX1250: v_cmpx_u_f16_e64 vcc_hi, 0xfe0b ; encoding: [0x7e,0x00,0x88,0xd4,0x6b,0xfe,0x01,0x00,0x0b,0xfe,0x00,0x00]
+
+0x7e,0x00,0x88,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_u_f16_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x88,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x88,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_u_f16_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x88,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0x98,0xd4,0xc1,0xfe,0x00,0x00
+# GFX1250: v_cmpx_u_f32_e64 -1, exec_hi ; encoding: [0x7e,0x00,0x98,0xd4,0xc1,0xfe,0x00,0x00]
+
+0x7e,0x02,0x98,0xd4,0xfd,0xd4,0x00,0x20
+# GFX1250: v_cmpx_u_f32_e64 -src_scc, |vcc_lo| ; encoding: [0x7e,0x02,0x98,0xd4,0xfd,0xd4,0x00,0x20]
+
+0x7e,0x83,0x98,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_u_f32_e64 -|0xaf123456|, -|vcc_hi| clamp ; encoding: [0x7e,0x83,0x98,0xd4,0xff,0xd6,0x00,0x60,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x98,0xd4,0xf0,0xfa,0x00,0x40
+# GFX1250: v_cmpx_u_f32_e64 0.5, -m0 ; encoding: [0x7e,0x00,0x98,0xd4,0xf0,0xfa,0x00,0x40]
+
+0x7e,0x00,0x98,0xd4,0x7e,0x82,0x01,0x00
+# GFX1250: v_cmpx_u_f32_e64 exec_lo, -1 ; encoding: [0x7e,0x00,0x98,0xd4,0x7e,0x82,0x01,0x00]
+
+0x7e,0x00,0x98,0xd4,0x7d,0xe0,0x01,0x00
+# GFX1250: v_cmpx_u_f32_e64 m0, 0.5 ; encoding: [0x7e,0x00,0x98,0xd4,0x7d,0xe0,0x01,0x00]
+
+0x7e,0x00,0x98,0xd4,0x7c,0xfc,0x00,0x00
+# GFX1250: v_cmpx_u_f32_e64 null, exec_lo ; encoding: [0x7e,0x00,0x98,0xd4,0x7c,0xfc,0x00,0x00]
+
+0x7e,0x00,0x98,0xd4,0x01,0x04,0x00,0x00
+# GFX1250: v_cmpx_u_f32_e64 s1, s2 ; encoding: [0x7e,0x00,0x98,0xd4,0x01,0x04,0x00,0x00]
+
+0x7e,0x00,0x98,0xd4,0x69,0xd2,0x00,0x00
+# GFX1250: v_cmpx_u_f32_e64 s105, s105 ; encoding: [0x7e,0x00,0x98,0xd4,0x69,0xd2,0x00,0x00]
+
+0x7e,0x00,0x98,0xd4,0x7b,0xfa,0x01,0x00
+# GFX1250: v_cmpx_u_f32_e64 ttmp15, src_scc ; encoding: [0x7e,0x00,0x98,0xd4,0x7b,0xfa,0x01,0x00]
+
+0x7e,0x00,0x98,0xd4,0x01,0x05,0x02,0x00
+# GFX1250: v_cmpx_u_f32_e64 v1, v2 ; encoding: [0x7e,0x00,0x98,0xd4,0x01,0x05,0x02,0x00]
+
+0x7e,0x00,0x98,0xd4,0xff,0xff,0x03,0x00
+# GFX1250: v_cmpx_u_f32_e64 v255, v255 ; encoding: [0x7e,0x00,0x98,0xd4,0xff,0xff,0x03,0x00]
+
+0x7e,0x00,0x98,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_u_f32_e64 vcc_hi, 0xaf123456 ; encoding: [0x7e,0x00,0x98,0xd4,0x6b,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0x98,0xd4,0x6a,0xf6,0x00,0x00
+# GFX1250: v_cmpx_u_f32_e64 vcc_lo, ttmp15 ; encoding: [0x7e,0x00,0x98,0xd4,0x6a,0xf6,0x00,0x00]
+
+0x7e,0x01,0x98,0xd4,0x7f,0xf8,0x00,0x00
+# GFX1250: v_cmpx_u_f32_e64 |exec_hi|, null ; encoding: [0x7e,0x01,0x98,0xd4,0x7f,0xf8,0x00,0x00]
+
+0x7e,0x00,0xa8,0xd4,0xc1,0x82,0x01,0x00
+# GFX1250: v_cmpx_u_f64_e64 -1, -1 ; encoding: [0x7e,0x00,0xa8,0xd4,0xc1,0x82,0x01,0x00]
+
+0x7e,0x01,0xa8,0xd4,0x7e,0xfa,0x01,0x20
+# GFX1250: v_cmpx_u_f64_e64 -|exec|, src_scc ; encoding: [0x7e,0x01,0xa8,0xd4,0x7e,0xfa,0x01,0x20]
+
+0x7e,0x03,0xa8,0xd4,0xfd,0xfc,0x00,0x60
+# GFX1250: v_cmpx_u_f64_e64 -|src_scc|, -|exec| ; encoding: [0x7e,0x03,0xa8,0xd4,0xfd,0xfc,0x00,0x60]
+
+0x7e,0x00,0xa8,0xd4,0xf0,0xf8,0x00,0x00
+# GFX1250: v_cmpx_u_f64_e64 0.5, null ; encoding: [0x7e,0x00,0xa8,0xd4,0xf0,0xf8,0x00,0x00]
+
+0x7e,0x82,0xa8,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_u_f64_e64 0xaf123456, -|vcc| clamp ; encoding: [0x7e,0x82,0xa8,0xd4,0xff,0xd4,0x00,0x40,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa8,0xd4,0x7c,0xe0,0x01,0x00
+# GFX1250: v_cmpx_u_f64_e64 null, 0.5 ; encoding: [0x7e,0x00,0xa8,0xd4,0x7c,0xe0,0x01,0x00]
+
+0x7e,0x00,0xa8,0xd4,0x68,0xd0,0x00,0x00
+# GFX1250: v_cmpx_u_f64_e64 s[104:105], s[104:105] ; encoding: [0x7e,0x00,0xa8,0xd4,0x68,0xd0,0x00,0x00]
+
+0x7e,0x00,0xa8,0xd4,0x02,0x08,0x00,0x00
+# GFX1250: v_cmpx_u_f64_e64 s[2:3], s[4:5] ; encoding: [0x7e,0x00,0xa8,0xd4,0x02,0x08,0x00,0x00]
+
+0x7e,0x00,0xa8,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf
+# GFX1250: v_cmpx_u_f64_e64 ttmp[14:15], 0xaf123456 ; encoding: [0x7e,0x00,0xa8,0xd4,0x7a,0xfe,0x01,0x00,0x56,0x34,0x12,0xaf]
+
+0x7e,0x00,0xa8,0xd4,0xfe,0xfd,0x03,0x00
+# GFX1250: v_cmpx_u_f64_e64 v[254:255], v[254:255] ; encoding: [0x7e,0x00,0xa8,0xd4,0xfe,0xfd,0x03,0x00]
+
+0x7e,0x00,0xa8,0xd4,0x02,0x05,0x02,0x00
+# GFX1250: v_cmpx_u_f64_e64 v[2:3], v[2:3] ; encoding: [0x7e,0x00,0xa8,0xd4,0x02,0x05,0x02,0x00]
+
+0x7e,0x00,0xa8,0xd4,0x6a,0xf4,0x00,0x00
+# GFX1250: v_cmpx_u_f64_e64 vcc, ttmp[14:15] ; encoding: [0x7e,0x00,0xa8,0xd4,0x6a,0xf4,0x00,0x00]
diff --git a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vop3p_dpp16.txt b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vop3p_dpp16.txt
new file mode 100644
index 0000000000000..73e9d731646ba
--- /dev/null
+++ b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vop3p_dpp16.txt
@@ -0,0 +1,10 @@
+# RUN: llvm-mc -triple=amdgcn -mcpu=gfx1250 -disassemble -show-encoding < %s | FileCheck -check-prefixes=GFX1250 %s
+
+# GFX1250: v_fma_mix_f32_bf16_e64_dpp v0, v1, v2, v3 row_ror:7 row_mask:0xf bank_mask:0x1 ; encoding: [0x00,0x00,0x3d,0xcc,0xfa,0x04,0x0e,0x04,0x01,0x27,0x01,0xf1]
+0x00,0x00,0x3d,0xcc,0xfa,0x04,0x0e,0x04,0x01,0x27,0x01,0xf1
+
+# GFX1250: v_fma_mixlo_bf16_e64_dpp v0, v1, v2, v3 op_sel_hi:[1,1,1] clamp quad_perm:[0,2,3,1] row_mask:0x0 bank_mask:0xf ; encoding: [0x00,0xc0,0x3e,0xcc,0xfa,0x04,0x0e,0x1c,0x01,0x78,0x00,0x0f]
+0x00,0xc0,0x3e,0xcc,0xfa,0x04,0x0e,0x1c,0x01,0x78,0x00,0x0f
+
+# GFX1250: v_fma_mixhi_bf16_e64_dpp v0, v1, v2, v3 op_sel_hi:[1,1,1] clamp quad_perm:[0,2,3,1] row_mask:0x0 bank_mask:0xf ; encoding: [0x00,0xc0,0x3f,0xcc,0xfa,0x04,0x0e,0x1c,0x01,0x78,0x00,0x0f]
+0x00,0xc0,0x3f,0xcc,0xfa,0x04,0x0e,0x1c,0x01,0x78,0x00,0x0f
diff --git a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vop3p_dpp8.txt b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vop3p_dpp8.txt
new file mode 100644
index 0000000000000..27e7025978315
--- /dev/null
+++ b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_vop3p_dpp8.txt
@@ -0,0 +1,19 @@
+# RUN: llvm-mc -triple=amdgcn -mcpu=gfx1250 -disassemble -show-encoding < %s | FileCheck -check-prefixes=GFX1250 %s
+
+# GFX1250: v_fma_mix_f32_bf16_e64_dpp v0, v1, v2, v3 clamp dpp8:[2,2,2,2,4,4,4,4] fi:1 ; encoding: [0x00,0x80,0x3d,0xcc,0xea,0x04,0x0e,0x04,0x01,0x92,0x44,0x92]
+0x00,0x80,0x3d,0xcc,0xea,0x04,0x0e,0x04,0x01,0x92,0x44,0x92
+
+# GFX1250: v_fma_mix_f32_bf16_e64_dpp v0, v1, v2, v3 dpp8:[2,2,2,2,4,4,4,4] ; encoding: [0x00,0x00,0x3d,0xcc,0xe9,0x04,0x0e,0x04,0x01,0x92,0x44,0x92]
+0x00,0x00,0x3d,0xcc,0xe9,0x04,0x0e,0x04,0x01,0x92,0x44,0x92
+
+# GFX1250: v_fma_mixlo_bf16_e64_dpp v0, |v1|, -v2, |v3| dpp8:[2,2,2,2,4,4,4,4] ; encoding: [0x00,0x05,0x3e,0xcc,0xe9,0x04,0x0e,0x44,0x01,0x92,0x44,0x92]
+0x00,0x05,0x3e,0xcc,0xe9,0x04,0x0e,0x44,0x01,0x92,0x44,0x92
+
+# GFX1250: v_fma_mixlo_bf16_e64_dpp v0, |v1|, -v2, |v3| op_sel:[1,0,0] op_sel_hi:[1,0,0] dpp8:[2,2,2,2,4,4,4,4] ; encoding: [0x00,0x0d,0x3e,0xcc,0xe9,0x04,0x0e,0x4c,0x01,0x92,0x44,0x92]
+0x00,0x0d,0x3e,0xcc,0xe9,0x04,0x0e,0x4c,0x01,0x92,0x44,0x92
+
+# GFX1250: v_fma_mixhi_bf16_e64_dpp v0, |v1|, -v2, |v3| dpp8:[2,2,2,2,4,4,4,4] ; encoding: [0x00,0x05,0x3f,0xcc,0xe9,0x04,0x0e,0x44,0x01,0x92,0x44,0x92]
+0x00,0x05,0x3f,0xcc,0xe9,0x04,0x0e,0x44,0x01,0x92,0x44,0x92
+
+# GFX1250: v_fma_mixhi_bf16_e64_dpp v0, |v1|, -v2, |v3| op_sel:[1,0,0] op_sel_hi:[1,0,0] dpp8:[2,2,2,2,4,4,4,4] ; encoding: [0x00,0x0d,0x3f,0xcc,0xe9,0x04,0x0e,0x4c,0x01,0x92,0x44,0x92]
+0x00,0x0d,0x3f,0xcc,0xe9,0x04,0x0e,0x4c,0x01,0x92,0x44,0x92
>From 83e5a99ff6a5662b6e7fd6a0f9f21d70458022c2 Mon Sep 17 00:00:00 2001
From: hidekisaito <hidekido at amd.com>
Date: Wed, 6 Aug 2025 14:41:20 -0700
Subject: [PATCH 157/171] [AMDGPU][Offload] Enable memory manager use for up to
~3GB allocation size in omp_target_alloc (#151882)
Enables AMD data-center-class GPUs to use memory-manager pooling for
allocations of up to ~3 GB by default, up from the "1 << 13" (8 KiB)
threshold that all plugin-nextgen devices use.
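In effect, the threshold selection becomes the following (a condensed sketch
of the PluginInterface.cpp change below; pickThreshold is a hypothetical
helper, not part of the patch):
#include <cstdlib>
// An explicit LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD still wins; the
// per-device default is consulted only when no threshold is set.
static size_t pickThreshold(size_t deviceDefault) {
  if (const char *Env = std::getenv("LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD"))
    if (size_t FromEnv = std::strtoull(Env, nullptr, 10))
      return FromEnv;
  return deviceDefault; // 3ul * 1024 * 1024 * 1024 on large-memory AMD GPUs
}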
---
offload/plugins-nextgen/amdgpu/src/rtl.cpp | 34 +++++++++++++++++
.../common/include/PluginInterface.h | 3 ++
.../common/src/PluginInterface.cpp | 5 ++-
offload/test/lit.cfg | 11 ++++--
offload/test/sanitizer/use_after_free_2.c | 3 ++
offload/test/sanitizer/use_after_free_3.c | 37 +++++++++++++++++++
6 files changed, 89 insertions(+), 4 deletions(-)
create mode 100644 offload/test/sanitizer/use_after_free_3.c
diff --git a/offload/plugins-nextgen/amdgpu/src/rtl.cpp b/offload/plugins-nextgen/amdgpu/src/rtl.cpp
index b7bfa89fc9ea6..82deeb4b1af2c 100644
--- a/offload/plugins-nextgen/amdgpu/src/rtl.cpp
+++ b/offload/plugins-nextgen/amdgpu/src/rtl.cpp
@@ -2945,6 +2945,40 @@ struct AMDGPUDeviceTy : public GenericDeviceTy, AMDGenericDeviceTy {
return Plugin::success();
}
+ bool checkIfCoarseGrainMemoryNearOrAbove64GB() {
+ for (AMDGPUMemoryPoolTy *Pool : AllMemoryPools) {
+ if (!Pool->isGlobal() || !Pool->isCoarseGrained())
+ continue;
+ uint64_t Value;
+ hsa_status_t Status =
+ Pool->getAttrRaw(HSA_AMD_MEMORY_POOL_INFO_SIZE, Value);
+ if (Status != HSA_STATUS_SUCCESS)
+ continue;
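+    // 0xFF0000000 bytes is 63.75 GiB: slightly under a nominal 64 GiB, so
+    // pools that report a little less than their marketed capacity still
+    // qualify (assumed intent of the threshold).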
+ constexpr uint64_t Almost64Gig = 0xFF0000000;
+ if (Value >= Almost64Gig)
+ return true;
+ }
+  return false; // No coarse-grained global pool of ~64 GiB or more was found.
+ }
+
+ size_t getMemoryManagerSizeThreshold() override {
+ // Targeting high memory capacity GPUs such as
+ // data center GPUs.
+ if (checkIfCoarseGrainMemoryNearOrAbove64GB()) {
+ // Set GenericDeviceTy::MemoryManager's Threshold to 3GiB,
+ // if threshold is not already set by ENV var
+ // LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD.
+ // This MemoryManager is used for omp_target_alloc(), OpenMP
+ // (non-usm) map clause, etc.
+ //
+ // Ideally, this kind of pooling is best performed at
+ // a common level (e.g, user side of HSA) between OpenMP and HIP
+ // but that feature does not exist (yet).
+ return 3ul * 1024 * 1024 * 1024 /* 3 GiB */;
+ }
+ return 0;
+ }
+
/// Envar for controlling the number of HSA queues per device. High number of
/// queues may degrade performance.
UInt32Envar OMPX_NumQueues;
diff --git a/offload/plugins-nextgen/common/include/PluginInterface.h b/offload/plugins-nextgen/common/include/PluginInterface.h
index 21dc1d1fdfb17..4b7d410d699a8 100644
--- a/offload/plugins-nextgen/common/include/PluginInterface.h
+++ b/offload/plugins-nextgen/common/include/PluginInterface.h
@@ -1139,6 +1139,9 @@ struct GenericDeviceTy : public DeviceAllocatorTy {
/// Pointer to the memory manager or nullptr if not available.
MemoryManagerTy *MemoryManager;
+ /// Per device setting of MemoryManager's Threshold
+ virtual size_t getMemoryManagerSizeThreshold() { return 0; }
+
/// Environment variables defined by the OpenMP standard.
Int32Envar OMP_TeamLimit;
Int32Envar OMP_NumTeams;
diff --git a/offload/plugins-nextgen/common/src/PluginInterface.cpp b/offload/plugins-nextgen/common/src/PluginInterface.cpp
index ed79af909674e..0a570469600f9 100644
--- a/offload/plugins-nextgen/common/src/PluginInterface.cpp
+++ b/offload/plugins-nextgen/common/src/PluginInterface.cpp
@@ -815,8 +815,11 @@ Error GenericDeviceTy::init(GenericPluginTy &Plugin) {
// Enable the memory manager if required.
auto [ThresholdMM, EnableMM] = MemoryManagerTy::getSizeThresholdFromEnv();
- if (EnableMM)
+ if (EnableMM) {
+ if (ThresholdMM == 0)
+ ThresholdMM = getMemoryManagerSizeThreshold();
MemoryManager = new MemoryManagerTy(*this, ThresholdMM);
+ }
return Plugin::success();
}
diff --git a/offload/test/lit.cfg b/offload/test/lit.cfg
index 800a63bc0ee32..f3e8e9a66685e 100644
--- a/offload/test/lit.cfg
+++ b/offload/test/lit.cfg
@@ -121,6 +121,7 @@ if config.libomptarget_test_pgo:
# For all other targets, we currently assume it is.
supports_unified_shared_memory = True
supports_apu = False
+supports_large_allocation_memory_pool = False
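+# Set to True below for AMD data-center GPUs (gfx90a/gfx942/gfx950); the
+# sanitizer tests key off this feature via REQUIRES:/UNSUPPORTED: directives.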
if config.libomptarget_current_target.startswith('nvptx'):
try:
cuda_arch = int(config.cuda_test_arch[:3])
@@ -132,9 +133,11 @@ if config.libomptarget_current_target.startswith('nvptx'):
elif config.libomptarget_current_target.startswith('amdgcn'):
# amdgpu_test_arch contains a list of AMD GPUs in the system
# only check the first one assuming that we will run the test on it.
- if not (config.amdgpu_test_arch.startswith("gfx90a") or
- config.amdgpu_test_arch.startswith("gfx942") or
- config.amdgpu_test_arch.startswith("gfx950")):
+ if (config.amdgpu_test_arch.startswith("gfx90a") or
+ config.amdgpu_test_arch.startswith("gfx942") or
+ config.amdgpu_test_arch.startswith("gfx950")):
+ supports_large_allocation_memory_pool = True
+ else:
supports_unified_shared_memory = False
# check if AMD architecture is an APU:
if ((config.amdgpu_test_arch.startswith("gfx942") and
@@ -144,6 +147,8 @@ if supports_unified_shared_memory:
config.available_features.add('unified_shared_memory')
if supports_apu:
config.available_features.add('apu')
+if supports_large_allocation_memory_pool:
+ config.available_features.add('large_allocation_memory_pool')
# Setup environment to find dynamic library at runtime
if config.operating_system == 'Windows':
diff --git a/offload/test/sanitizer/use_after_free_2.c b/offload/test/sanitizer/use_after_free_2.c
index 02aa453d0a975..1c1e09744a750 100644
--- a/offload/test/sanitizer/use_after_free_2.c
+++ b/offload/test/sanitizer/use_after_free_2.c
@@ -10,6 +10,9 @@
// UNSUPPORTED: s390x-ibm-linux-gnu
// UNSUPPORTED: s390x-ibm-linux-gnu-LTO
+// If offload memory pooling is enabled for a large allocation, the reuse
+// error is not detected. UNSUPPORTED: large_allocation_memory_pool
+
#include <omp.h>
int main() {
diff --git a/offload/test/sanitizer/use_after_free_3.c b/offload/test/sanitizer/use_after_free_3.c
new file mode 100644
index 0000000000000..9d8861433e7e5
--- /dev/null
+++ b/offload/test/sanitizer/use_after_free_3.c
@@ -0,0 +1,37 @@
+// clang-format off
+// RUN: %libomptarget-compileopt-generic
+// RUN: %not --crash env -u LLVM_DISABLE_SYMBOLIZATION OFFLOAD_TRACK_ALLOCATION_TRACES=1 LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD=1024 %libomptarget-run-generic 2>&1 | %fcheck-generic --check-prefixes=CHECK
+// RUN: %libomptarget-run-generic 2>&1 | %fcheck-generic --check-prefixes=CHECK-PASS
+// clang-format on
+
+// If offload memory pooling is enabled for a large allocation, the reuse
+// error is not detected. Run the test with and without the environment
+// variable override of the pooling threshold. REQUIRES: large_allocation_memory_pool
+
+#include <omp.h>
+#include <stdio.h>
+
+int main() {
+ int N = (1 << 30);
+ char *A = (char *)malloc(N);
+ char *P;
+#pragma omp target map(A[ : N]) map(from : P)
+ {
+ P = &A[N / 2];
+ *P = 3;
+ }
+ // clang-format off
+// CHECK: OFFLOAD ERROR: memory access fault by GPU {{.*}} (agent 0x{{.*}}) at virtual address [[PTR:0x[0-9a-z]*]]. Reasons: {{.*}}
+// CHECK: Device pointer [[PTR]] points into prior host-issued allocation:
+// CHECK: Last deallocation:
+// CHECK: Last allocation of size 1073741824
+// clang-format on
+#pragma omp target
+ {
+ *P = 5;
+ }
+
+ // CHECK-PASS: PASS
+ printf("PASS\n");
+ return 0;
+}
>From 66392a8d8d81e66ec09452d35c85147dafb07571 Mon Sep 17 00:00:00 2001
From: Stanislav Mekhanoshin <Stanislav.Mekhanoshin at amd.com>
Date: Wed, 6 Aug 2025 14:44:17 -0700
Subject: [PATCH 158/171] [AMDGPU] Add XNACK_STATE_PRIV and _MASK gfx1250
registers (#152374)
Co-authored-by: Pierre Vanhoutryve <pierre.vanhoutryve at amd.com>
Co-authored-by: Pierre Vanhoutryve <pierre.vanhoutryve at amd.com>
---
llvm/lib/Target/AMDGPU/SIDefines.h | 4 +++
.../Target/AMDGPU/Utils/AMDGPUAsmUtils.cpp | 4 +++
llvm/test/MC/AMDGPU/gfx1250_asm_operands.s | 25 +++++++++++++++++++
.../AMDGPU/gfx1250_dasm_operands.txt | 12 +++++++++
4 files changed, 45 insertions(+)
diff --git a/llvm/lib/Target/AMDGPU/SIDefines.h b/llvm/lib/Target/AMDGPU/SIDefines.h
index deadb7aed0f69..2d0102fffe5ea 100644
--- a/llvm/lib/Target/AMDGPU/SIDefines.h
+++ b/llvm/lib/Target/AMDGPU/SIDefines.h
@@ -536,6 +536,10 @@ enum Id { // HwRegCode, (6) [5:0]
ID_SQ_PERF_SNAPSHOT_DATA1 = 22,
ID_SQ_PERF_SNAPSHOT_PC_LO = 23,
ID_SQ_PERF_SNAPSHOT_PC_HI = 24,
+
+ // GFX1250
+ ID_XNACK_STATE_PRIV = 33,
+ ID_XNACK_MASK_gfx1250 = 34,
};
enum Offset : unsigned { // Offset, (5) [10:6]
diff --git a/llvm/lib/Target/AMDGPU/Utils/AMDGPUAsmUtils.cpp b/llvm/lib/Target/AMDGPU/Utils/AMDGPUAsmUtils.cpp
index e433b85489e6e..3d9455fc51a39 100644
--- a/llvm/lib/Target/AMDGPU/Utils/AMDGPUAsmUtils.cpp
+++ b/llvm/lib/Target/AMDGPU/Utils/AMDGPUAsmUtils.cpp
@@ -223,6 +223,10 @@ static constexpr CustomOperand Operands[] = {
{{"HW_REG_SQ_PERF_SNAPSHOT_PC_LO"}, ID_SQ_PERF_SNAPSHOT_PC_LO, isGFX940},
{{"HW_REG_SQ_PERF_SNAPSHOT_PC_HI"}, ID_SQ_PERF_SNAPSHOT_PC_HI, isGFX940},
+ // GFX1250
+ {{"HW_REG_XNACK_STATE_PRIV"}, ID_XNACK_STATE_PRIV, isGFX1250},
+ {{"HW_REG_XNACK_MASK"}, ID_XNACK_MASK_gfx1250, isGFX1250},
+
// Aliases
{{"HW_REG_HW_ID"}, ID_HW_ID1, isGFX10},
};
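For reference, the byte patterns in the updated tests follow from the simm16
packing used by s_getreg_b32/s_setreg_b32: id in bits [5:0], offset in [10:6],
width-1 in [15:11]. A sketch, assuming the default offset=0 and size=32:
constexpr unsigned packHwreg(unsigned Id, unsigned Offset = 0,
                             unsigned Size = 32) {
  return Id | (Offset << 6) | ((Size - 1) << 11);
}
static_assert(packHwreg(33) == 0xf821, ""); // bytes 0x21,0xf8 (little-endian)
static_assert(packHwreg(34) == 0xf822, ""); // bytes 0x22,0xf8 (little-endian)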
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_operands.s b/llvm/test/MC/AMDGPU/gfx1250_asm_operands.s
index 8b7465b5df574..100fc981c4f81 100644
--- a/llvm/test/MC/AMDGPU/gfx1250_asm_operands.s
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_operands.s
@@ -27,3 +27,28 @@ s_mov_b64 s[0:1], src_shared_limit
s_getreg_b32 s1, hwreg(33)
// GFX1250: encoding: [0x21,0xf8,0x81,0xb8]
+
+s_getreg_b32 s1, hwreg(HW_REG_XNACK_STATE_PRIV)
+// GFX1200-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid hardware register: not supported on this GPU
+// GFX1250: encoding: [0x21,0xf8,0x81,0xb8]
+
+s_getreg_b32 s1, hwreg(34)
+// GFX1250: encoding: [0x22,0xf8,0x81,0xb8]
+
+s_getreg_b32 s1, hwreg(HW_REG_XNACK_MASK)
+// GFX1200-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid hardware register: not supported on this GPU
+// GFX1250: encoding: [0x22,0xf8,0x81,0xb8]
+
+s_setreg_b32 hwreg(33), s1
+// GFX1250: encoding: [0x21,0xf8,0x01,0xb9]
+
+s_setreg_b32 hwreg(HW_REG_XNACK_STATE_PRIV), s1
+// GFX1200-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid hardware register: not supported on this GPU
+// GFX1250: encoding: [0x21,0xf8,0x01,0xb9]
+
+s_setreg_b32 hwreg(34), s1
+// GFX1250: encoding: [0x22,0xf8,0x01,0xb9]
+
+s_setreg_b32 hwreg(HW_REG_XNACK_MASK), s1
+// GFX1200-ERR: :[[@LINE-1]]:{{[0-9]+}}: error: invalid hardware register: not supported on this GPU
+// GFX1250: encoding: [0x22,0xf8,0x01,0xb9]
diff --git a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_operands.txt b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_operands.txt
index a3e7e570ab9b6..d72009bc017f4 100644
--- a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_operands.txt
+++ b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_operands.txt
@@ -20,3 +20,15 @@
# GFX1250: s_mov_b64 s[0:1], src_shared_limit ; encoding: [0xec,0x01,0x80,0xbe]
0xec,0x01,0x80,0xbe
+
+# GFX1250: s_getreg_b32 s1, hwreg(HW_REG_XNACK_STATE_PRIV) ; encoding: [0x21,0xf8,0x81,0xb8]
+0x21,0xf8,0x81,0xb8
+
+# GFX1250: s_getreg_b32 s1, hwreg(HW_REG_XNACK_MASK) ; encoding: [0x22,0xf8,0x81,0xb8]
+0x22,0xf8,0x81,0xb8
+
+# GFX1250: s_setreg_b32 hwreg(HW_REG_XNACK_STATE_PRIV), s1 ; encoding: [0x21,0xf8,0x01,0xb9]
+0x21,0xf8,0x01,0xb9
+
+# GFX1250: s_setreg_b32 hwreg(HW_REG_XNACK_MASK), s1 ; encoding: [0x22,0xf8,0x01,0xb9]
+0x22,0xf8,0x01,0xb9
>From 281e6d2cc498d05f3ca601e3b1d595420e7ed827 Mon Sep 17 00:00:00 2001
From: Md Abdullah Shahneous Bari <98356296+mshahneo at users.noreply.github.com>
Date: Wed, 6 Aug 2025 16:48:59 -0500
Subject: [PATCH 159/171] [mlir][ExecutionEngine] Add LevelZeroRuntimeWrapper.
(#151038)
Adds LevelZeroRuntime wrapper and tests.
Co-authored-by: Artem Kroviakov <artem.kroviakov at intel.com>
Co-authored-by: Nishant Patel <nishant.b.patel at intel.com>
---------
Co-authored-by: Artem Kroviakov <artem.kroviakov at intel.com>
Co-authored-by: Nishant Patel <nishant.b.patel at intel.com>
---
mlir/CMakeLists.txt | 1 +
...lZero.cmake => FindLevelZeroRuntime.cmake} | 108 ++--
mlir/lib/ExecutionEngine/CMakeLists.txt | 39 +-
.../LevelZeroRuntimeWrappers.cpp | 573 ++++++++++++++++++
mlir/test/CMakeLists.txt | 4 +
.../GPU/LevelZero/gpu-addf32-to-spirv.mlir | 59 ++
.../GPU/LevelZero/gpu-addi64-to-spirv.mlir | 57 ++
.../LevelZero/gpu-memcpy-addf32-to-spirv.mlir | 56 ++
.../GPU/LevelZero/gpu-reluf32-to-spirv.mlir | 86 +++
.../Integration/GPU/LevelZero/lit.local.cfg | 2 +
mlir/test/lit.cfg.py | 3 +
mlir/test/lit.site.cfg.py.in | 1 +
12 files changed, 928 insertions(+), 61 deletions(-)
rename mlir/cmake/modules/{FindLevelZero.cmake => FindLevelZeroRuntime.cmake} (66%)
create mode 100644 mlir/lib/ExecutionEngine/LevelZeroRuntimeWrappers.cpp
create mode 100644 mlir/test/Integration/GPU/LevelZero/gpu-addf32-to-spirv.mlir
create mode 100644 mlir/test/Integration/GPU/LevelZero/gpu-addi64-to-spirv.mlir
create mode 100644 mlir/test/Integration/GPU/LevelZero/gpu-memcpy-addf32-to-spirv.mlir
create mode 100644 mlir/test/Integration/GPU/LevelZero/gpu-reluf32-to-spirv.mlir
create mode 100644 mlir/test/Integration/GPU/LevelZero/lit.local.cfg
diff --git a/mlir/CMakeLists.txt b/mlir/CMakeLists.txt
index a1ad81f625cd6..a9414eb324d09 100644
--- a/mlir/CMakeLists.txt
+++ b/mlir/CMakeLists.txt
@@ -140,6 +140,7 @@ endif()
set(MLIR_ENABLE_CUDA_RUNNER 0 CACHE BOOL "Enable building the MLIR CUDA runner")
set(MLIR_ENABLE_ROCM_RUNNER 0 CACHE BOOL "Enable building the MLIR ROCm runner")
set(MLIR_ENABLE_SYCL_RUNNER 0 CACHE BOOL "Enable building the MLIR SYCL runner")
+set(MLIR_ENABLE_LEVELZERO_RUNNER 0 CACHE BOOL "Enable building the MLIR LevelZero runner")
set(MLIR_ENABLE_SPIRV_CPU_RUNNER 0 CACHE BOOL "Enable building the MLIR SPIR-V cpu runner")
set(MLIR_ENABLE_VULKAN_RUNNER 0 CACHE BOOL "Enable building the MLIR Vulkan runner")
set(MLIR_ENABLE_NVPTXCOMPILER 0 CACHE BOOL
diff --git a/mlir/cmake/modules/FindLevelZero.cmake b/mlir/cmake/modules/FindLevelZeroRuntime.cmake
similarity index 66%
rename from mlir/cmake/modules/FindLevelZero.cmake
rename to mlir/cmake/modules/FindLevelZeroRuntime.cmake
index 012187f0afc0b..2a8fb3a16d16f 100644
--- a/mlir/cmake/modules/FindLevelZero.cmake
+++ b/mlir/cmake/modules/FindLevelZeroRuntime.cmake
@@ -20,7 +20,6 @@ include(FindPackageHandleStandardArgs)
# Search path priority
# 1. CMake Variable LEVEL_ZERO_DIR
# 2. Environment Variable LEVEL_ZERO_DIR
-
if(NOT LEVEL_ZERO_DIR)
if(DEFINED ENV{LEVEL_ZERO_DIR})
set(LEVEL_ZERO_DIR "$ENV{LEVEL_ZERO_DIR}")
@@ -28,32 +27,32 @@ if(NOT LEVEL_ZERO_DIR)
endif()
if(LEVEL_ZERO_DIR)
- find_path(LevelZero_INCLUDE_DIR
+ find_path(LevelZeroRuntime_INCLUDE_DIR
NAMES level_zero/ze_api.h
PATHS ${LEVEL_ZERO_DIR}/include
NO_DEFAULT_PATH
)
if(LINUX)
- find_library(LevelZero_LIBRARY
+ find_library(LevelZeroRuntime_LIBRARY
NAMES ze_loader
PATHS ${LEVEL_ZERO_DIR}/lib
- ${LEVEL_ZERO_DIR}/lib/x86_64-linux-gnu
+ ${LEVEL_ZERO_DIR}/lib/x86_64-linux-gnu
NO_DEFAULT_PATH
)
else()
- find_library(LevelZero_LIBRARY
+ find_library(LevelZeroRuntime_LIBRARY
NAMES ze_loader
PATHS ${LEVEL_ZERO_DIR}/lib
NO_DEFAULT_PATH
)
endif()
else()
- find_path(LevelZero_INCLUDE_DIR
+ find_path(LevelZeroRuntime_INCLUDE_DIR
NAMES level_zero/ze_api.h
)
- find_library(LevelZero_LIBRARY
+ find_library(LevelZeroRuntime_LIBRARY
NAMES ze_loader
)
endif()
@@ -64,12 +63,14 @@ endif()
# lists of equal lengths, with the shorter string getting zero-padded.
function(compare_versions VERSION_STR1 VERSION_STR2 OUTPUT)
# Convert the strings to list
- string(REPLACE "." ";" VL1 ${VERSION_STR1})
- string(REPLACE "." ";" VL2 ${VERSION_STR2})
+ string(REPLACE "." ";" VL1 ${VERSION_STR1})
+ string(REPLACE "." ";" VL2 ${VERSION_STR2})
+
# get lengths of both lists
list(LENGTH VL1 VL1_LEN)
list(LENGTH VL2 VL2_LEN)
set(LEN ${VL1_LEN})
+
# If they differ in size pad the shorter list with 0s
if(VL1_LEN GREATER VL2_LEN)
math(EXPR DIFF "${VL1_LEN} - ${VL2_LEN}" OUTPUT_FORMAT DECIMAL)
@@ -98,12 +99,10 @@ function(compare_versions VERSION_STR1 VERSION_STR2 OUTPUT)
set(${OUTPUT} TRUE PARENT_SCOPE)
endif()
endforeach()
-
- endfunction(compare_versions)
+endfunction(compare_versions)
# Creates a small function to run and extract the LevelZero loader version.
function(get_l0_loader_version)
-
set(L0_VERSIONEER_SRC
[====[
#include <iostream>
@@ -142,19 +141,20 @@ function(get_l0_loader_version)
# We need both the directories in the include path as ze_loader.h
# includes "ze_api.h" and not "level_zero/ze_api.h".
- list(APPEND INCLUDE_DIRS ${LevelZero_INCLUDE_DIR})
- list(APPEND INCLUDE_DIRS ${LevelZero_INCLUDE_DIR}/level_zero)
+ list(APPEND INCLUDE_DIRS ${LevelZeroRuntime_INCLUDE_DIR})
+ list(APPEND INCLUDE_DIRS ${LevelZeroRuntime_INCLUDE_DIR}/level_zero)
list(JOIN INCLUDE_DIRS ";" INCLUDE_DIRS_STR)
try_run(L0_VERSIONEER_RUN L0_VERSIONEER_COMPILE
- "${CMAKE_BINARY_DIR}"
- "${L0_VERSIONEER_FILE}"
- LINK_LIBRARIES ${LevelZero_LIBRARY}
- CMAKE_FLAGS
- "-DINCLUDE_DIRECTORIES=${INCLUDE_DIRS_STR}"
- RUN_OUTPUT_VARIABLE L0_VERSION
+ "${CMAKE_BINARY_DIR}"
+ "${L0_VERSIONEER_FILE}"
+ LINK_LIBRARIES ${LevelZeroRuntime_LIBRARY}
+ CMAKE_FLAGS
+ "-DINCLUDE_DIRECTORIES=${INCLUDE_DIRS_STR}"
+ RUN_OUTPUT_VARIABLE L0_VERSION
)
- if(${L0_VERSIONEER_COMPILE} AND (DEFINED L0_VERSIONEER_RUN))
- set(LevelZero_VERSION ${L0_VERSION} PARENT_SCOPE)
+
+  if(${L0_VERSIONEER_COMPILE} AND (DEFINED L0_VERSIONEER_RUN))
+ set(LevelZeroRuntime_VERSION ${L0_VERSION} PARENT_SCOPE)
message(STATUS "Found Level Zero of version: ${L0_VERSION}")
else()
message(FATAL_ERROR
@@ -163,59 +163,61 @@ function(get_l0_loader_version)
endif()
endfunction(get_l0_loader_version)
-if(LevelZero_INCLUDE_DIR AND LevelZero_LIBRARY)
- list(APPEND LevelZero_LIBRARIES "${LevelZero_LIBRARY}")
- list(APPEND LevelZero_INCLUDE_DIRS ${LevelZero_INCLUDE_DIR})
+if(LevelZeroRuntime_INCLUDE_DIR AND LevelZeroRuntime_LIBRARY)
+ list(APPEND LevelZeroRuntime_LIBRARIES "${LevelZeroRuntime_LIBRARY}")
+ list(APPEND LevelZeroRuntime_INCLUDE_DIRS ${LevelZeroRuntime_INCLUDE_DIR})
+
if(OpenCL_FOUND)
- list(APPEND LevelZero_INCLUDE_DIRS ${OpenCL_INCLUDE_DIRS})
+ list(APPEND LevelZeroRuntime_INCLUDE_DIRS ${OpenCL_INCLUDE_DIRS})
endif()
- cmake_path(GET LevelZero_LIBRARY PARENT_PATH LevelZero_LIBRARIES_PATH)
- set(LevelZero_LIBRARIES_DIR ${LevelZero_LIBRARIES_PATH})
-
- if(NOT TARGET LevelZero::LevelZero)
- add_library(LevelZero::LevelZero INTERFACE IMPORTED)
- set_target_properties(LevelZero::LevelZero
- PROPERTIES INTERFACE_LINK_LIBRARIES "${LevelZero_LIBRARIES}"
- )
- set_target_properties(LevelZero::LevelZero
- PROPERTIES INTERFACE_INCLUDE_DIRECTORIES "${LevelZero_INCLUDE_DIRS}"
- )
+ cmake_path(GET LevelZeroRuntime_LIBRARY PARENT_PATH LevelZeroRuntime_LIBRARIES_PATH)
+ set(LevelZeroRuntime_LIBRARIES_DIR ${LevelZeroRuntime_LIBRARIES_PATH})
+
+ if(NOT TARGET LevelZeroRuntime::LevelZeroRuntime)
+ add_library(LevelZeroRuntime::LevelZeroRuntime INTERFACE IMPORTED)
+ set_target_properties(LevelZeroRuntime::LevelZeroRuntime
+ PROPERTIES INTERFACE_LINK_LIBRARIES "${LevelZeroRuntime_LIBRARIES}"
+ )
+ set_target_properties(LevelZeroRuntime::LevelZeroRuntime
+ PROPERTIES INTERFACE_INCLUDE_DIRECTORIES "${LevelZeroRuntime_INCLUDE_DIRS}"
+ )
endif()
endif()
# Check if a specific version of Level Zero is required
-if(LevelZero_FIND_VERSION)
+if(LevelZeroRuntime_FIND_VERSION)
get_l0_loader_version()
set(VERSION_GT_FIND_VERSION FALSE)
compare_versions(
- ${LevelZero_VERSION}
- ${LevelZero_FIND_VERSION}
+ ${LevelZeroRuntime_VERSION}
+ ${LevelZeroRuntime_FIND_VERSION}
VERSION_GT_FIND_VERSION
)
+
if(${VERSION_GT_FIND_VERSION})
- set(LevelZero_FOUND TRUE)
+ set(LevelZeroRuntime_FOUND TRUE)
else()
- set(LevelZero_FOUND FALSE)
+ set(LevelZeroRuntime_FOUND FALSE)
endif()
else()
- set(LevelZero_FOUND TRUE)
+ set(LevelZeroRuntime_FOUND TRUE)
endif()
-find_package_handle_standard_args(LevelZero
+find_package_handle_standard_args(LevelZeroRuntime
REQUIRED_VARS
- LevelZero_FOUND
- LevelZero_INCLUDE_DIRS
- LevelZero_LIBRARY
- LevelZero_LIBRARIES_DIR
+ LevelZeroRuntime_FOUND
+ LevelZeroRuntime_INCLUDE_DIRS
+ LevelZeroRuntime_LIBRARY
+ LevelZeroRuntime_LIBRARIES_DIR
HANDLE_COMPONENTS
)
-mark_as_advanced(LevelZero_LIBRARY LevelZero_INCLUDE_DIRS)
+mark_as_advanced(LevelZeroRuntime_LIBRARY LevelZeroRuntime_INCLUDE_DIRS)
-if(LevelZero_FOUND)
- find_package_message(LevelZero "Found LevelZero: ${LevelZero_LIBRARY}"
- "(found version ${LevelZero_VERSION})"
+if(LevelZeroRuntime_FOUND)
+ find_package_message(LevelZeroRuntime "Found LevelZero: ${LevelZeroRuntime_LIBRARY}"
+ "(found version ${LevelZeroRuntime_VERSION})"
)
else()
- find_package_message(LevelZero "Could not find LevelZero" "")
+ find_package_message(LevelZeroRuntime "Could not find LevelZero" "")
endif()
diff --git a/mlir/lib/ExecutionEngine/CMakeLists.txt b/mlir/lib/ExecutionEngine/CMakeLists.txt
index dd2ac75b88798..fdeb4dacf9278 100644
--- a/mlir/lib/ExecutionEngine/CMakeLists.txt
+++ b/mlir/lib/ExecutionEngine/CMakeLists.txt
@@ -14,6 +14,7 @@ set(LLVM_OPTIONAL_SOURCES
RunnerUtils.cpp
OptUtils.cpp
JitRunner.cpp
+ LevelZeroRuntimeWrappers.cpp
SpirvCpuRuntimeWrappers.cpp
SyclRuntimeWrappers.cpp
VulkanRuntimeWrappers.cpp
@@ -374,6 +375,15 @@ if(LLVM_ENABLE_PIC)
)
endif()
+ if(MLIR_ENABLE_SYCL_RUNNER OR MLIR_ENABLE_LEVELZERO_RUNNER)
+ # Both runtimes require LevelZero, so we can find it once.
+ find_package(LevelZeroRuntime)
+
+ if(NOT LevelZeroRuntime_FOUND)
+ message(FATAL_ERROR "LevelZero not found. Please set LEVEL_ZERO_DIR.")
+ endif()
+ endif()
+
if(MLIR_ENABLE_SYCL_RUNNER)
find_package(SyclRuntime)
@@ -381,12 +391,6 @@ if(LLVM_ENABLE_PIC)
message(FATAL_ERROR "syclRuntime not found. Please set check oneapi installation and run setvars.sh.")
endif()
- find_package(LevelZero)
-
- if(NOT LevelZero_FOUND)
- message(FATAL_ERROR "LevelZero not found. Please set LEVEL_ZERO_DIR.")
- endif()
-
add_mlir_library(mlir_sycl_runtime
SHARED
SyclRuntimeWrappers.cpp
@@ -404,9 +408,28 @@ if(LLVM_ENABLE_PIC)
${MLIR_INCLUDE_DIRS}
)
- target_link_libraries(mlir_sycl_runtime PRIVATE LevelZero::LevelZero SyclRuntime::SyclRuntime)
+ target_link_libraries(mlir_sycl_runtime PRIVATE LevelZeroRuntime::LevelZeroRuntime SyclRuntime::SyclRuntime)
+
+ set_property(TARGET mlir_sycl_runtime APPEND PROPERTY BUILD_RPATH "${LevelZeroRuntime_LIBRARIES_DIR}" "${SyclRuntime_LIBRARIES_DIR}")
+ endif()
+
+ if(MLIR_ENABLE_LEVELZERO_RUNNER)
+ add_mlir_library(mlir_levelzero_runtime
+ SHARED
+ LevelZeroRuntimeWrappers.cpp
+
+ EXCLUDE_FROM_LIBMLIR
+ )
+
+ target_compile_options(mlir_levelzero_runtime PUBLIC -fexceptions -frtti)
+
+ target_include_directories(mlir_levelzero_runtime PRIVATE
+ ${MLIR_INCLUDE_DIRS}
+ )
+
+ target_link_libraries(mlir_levelzero_runtime PRIVATE LevelZeroRuntime::LevelZeroRuntime)
- set_property(TARGET mlir_sycl_runtime APPEND PROPERTY BUILD_RPATH "${LevelZero_LIBRARIES_DIR}" "${SyclRuntime_LIBRARIES_DIR}")
+ set_property(TARGET mlir_levelzero_runtime APPEND PROPERTY BUILD_RPATH "${LevelZeroRuntime_LIBRARIES_DIR}")
endif()
if(MLIR_ENABLE_SPIRV_CPU_RUNNER)
diff --git a/mlir/lib/ExecutionEngine/LevelZeroRuntimeWrappers.cpp b/mlir/lib/ExecutionEngine/LevelZeroRuntimeWrappers.cpp
new file mode 100644
index 0000000000000..21eaf28c9f214
--- /dev/null
+++ b/mlir/lib/ExecutionEngine/LevelZeroRuntimeWrappers.cpp
@@ -0,0 +1,573 @@
+//===- LevelZeroRuntimeWrappers.cpp - MLIR Level Zero (L0) wrapper library-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// Implements wrappers around the Level Zero (L0) runtime library with C linkage
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/ADT/Twine.h"
+
+#include "level_zero/ze_api.h"
+#include <cassert>
+#include <deque>
+#include <exception>
+#include <functional>
+#include <iostream>
+#include <limits>
+#include <unordered_set>
+#include <vector>
+
+namespace {
+template <typename F>
+auto catchAll(F &&func) {
+ try {
+ return func();
+ } catch (const std::exception &e) {
+ std::cerr << "An exception was thrown: " << e.what() << std::endl;
+ std::abort();
+ } catch (...) {
+ std::cerr << "An unknown exception was thrown." << std::endl;
+ std::abort();
+ }
+}
+
+#define L0_SAFE_CALL(call) \
+ { \
+ ze_result_t status = (call); \
+ if (status != ZE_RESULT_SUCCESS) { \
+ const char *errorString; \
+ zeDriverGetLastErrorDescription(NULL, &errorString); \
+ std::cerr << "L0 error " << status << ": " << errorString << std::endl; \
+ std::abort(); \
+ } \
+ }
+} // namespace
+
+//===----------------------------------------------------------------------===//
+// L0 RT context & device setters
+//===----------------------------------------------------------------------===//
+
+// Returns the L0 driver handle for the given index. Default index is 0
+// (i.e., returns the first driver handle of the available drivers).
+
+static ze_driver_handle_t getDriver(uint32_t idx = 0) {
+ ze_init_driver_type_desc_t driver_type = {};
+ driver_type.stype = ZE_STRUCTURE_TYPE_INIT_DRIVER_TYPE_DESC;
+ driver_type.flags = ZE_INIT_DRIVER_TYPE_FLAG_GPU;
+ driver_type.pNext = nullptr;
+ uint32_t driverCount{0};
+ thread_local static std::vector<ze_driver_handle_t> drivers;
+ thread_local static bool isDriverInitialised{false};
+ if (isDriverInitialised && idx < drivers.size())
+ return drivers[idx];
+ L0_SAFE_CALL(zeInitDrivers(&driverCount, nullptr, &driver_type));
+ if (!driverCount)
+ throw std::runtime_error("No L0 drivers found.");
+ drivers.resize(driverCount);
+ L0_SAFE_CALL(zeInitDrivers(&driverCount, drivers.data(), &driver_type));
+ if (idx >= driverCount)
+    throw std::runtime_error((llvm::Twine("Requested driver idx out of bounds, "
+                                           "number of available drivers: ") +
+ std::to_string(driverCount))
+ .str());
+ isDriverInitialised = true;
+ return drivers[idx];
+}
+
+static ze_device_handle_t getDevice(const uint32_t driverIdx = 0,
+ const int32_t devIdx = 0) {
+ thread_local static ze_device_handle_t l0Device;
+ thread_local int32_t currDevIdx{-1};
+ thread_local uint32_t currDriverIdx{0};
+ if (currDriverIdx == driverIdx && currDevIdx == devIdx)
+ return l0Device;
+ auto driver = getDriver(driverIdx);
+ uint32_t deviceCount{0};
+ L0_SAFE_CALL(zeDeviceGet(driver, &deviceCount, nullptr));
+ if (!deviceCount)
+ throw std::runtime_error("getDevice failed: did not find L0 device.");
+ if (static_cast<int>(deviceCount) < devIdx + 1)
+ throw std::runtime_error("getDevice failed: devIdx out-of-bounds.");
+ std::vector<ze_device_handle_t> devices(deviceCount);
+ L0_SAFE_CALL(zeDeviceGet(driver, &deviceCount, devices.data()));
+ l0Device = devices[devIdx];
+ currDriverIdx = driverIdx;
+ currDevIdx = devIdx;
+ return l0Device;
+}
+
+// Returns the default L0 context of the default driver.
+static ze_context_handle_t getContext(ze_driver_handle_t driver) {
+ thread_local static ze_context_handle_t context;
+ thread_local static bool isContextInitialised{false};
+ if (isContextInitialised)
+ return context;
+ ze_context_desc_t ctxtDesc = {ZE_STRUCTURE_TYPE_CONTEXT_DESC, nullptr, 0};
+ L0_SAFE_CALL(zeContextCreate(driver, &ctxtDesc, &context));
+ isContextInitialised = true;
+ return context;
+}
+
+//===----------------------------------------------------------------------===//
+// L0 RT helper structs
+//===----------------------------------------------------------------------===//
+
+struct ZeContextDeleter {
+ void operator()(ze_context_handle_t ctx) const {
+ if (ctx)
+ L0_SAFE_CALL(zeContextDestroy(ctx));
+ }
+};
+
+struct ZeCommandListDeleter {
+ void operator()(ze_command_list_handle_t cmdList) const {
+ if (cmdList)
+ L0_SAFE_CALL(zeCommandListDestroy(cmdList));
+ }
+};
+using UniqueZeContext =
+ std::unique_ptr<std::remove_pointer<ze_context_handle_t>::type,
+ ZeContextDeleter>;
+using UniqueZeCommandList =
+ std::unique_ptr<std::remove_pointer<ze_command_list_handle_t>::type,
+ ZeCommandListDeleter>;
+struct L0RTContextWrapper {
+ ze_driver_handle_t driver{nullptr};
+ ze_device_handle_t device{nullptr};
+ UniqueZeContext context;
+ // Usually, one immediate command list with ordinal 0 suffices for
+ // both copy and compute ops, but leaves HW underutilized.
+ UniqueZeCommandList immCmdListCompute;
+ // Copy engines can be used for both memcpy and memset, but
+ // they have limitations for memset pattern size (e.g., 1 byte).
+ UniqueZeCommandList immCmdListCopy;
+ uint32_t copyEngineMaxMemoryFillPatternSize{-1u};
+
+ L0RTContextWrapper() = default;
+ L0RTContextWrapper(const uint32_t driverIdx = 0, const int32_t devIdx = 0)
+      : driver(getDriver(driverIdx)), device(getDevice(driverIdx, devIdx)) {
+ // Create context
+ ze_context_handle_t ctx = getContext(driver);
+ context.reset(ctx);
+
+ // Determine ordinals
+ uint32_t computeEngineOrdinal = -1u, copyEngineOrdinal = -1u;
+ ze_device_properties_t deviceProperties{};
+ L0_SAFE_CALL(zeDeviceGetProperties(device, &deviceProperties));
+ uint32_t queueGroupCount = 0;
+ L0_SAFE_CALL(zeDeviceGetCommandQueueGroupProperties(
+ device, &queueGroupCount, nullptr));
+ std::vector<ze_command_queue_group_properties_t> queueGroupProperties(
+ queueGroupCount);
+ L0_SAFE_CALL(zeDeviceGetCommandQueueGroupProperties(
+ device, &queueGroupCount, queueGroupProperties.data()));
+
+ for (uint32_t queueGroupIdx = 0; queueGroupIdx < queueGroupCount;
+ ++queueGroupIdx) {
+ const auto &group = queueGroupProperties[queueGroupIdx];
+ if (group.flags & ZE_COMMAND_QUEUE_GROUP_PROPERTY_FLAG_COMPUTE)
+ computeEngineOrdinal = queueGroupIdx;
+ else if (group.flags & ZE_COMMAND_QUEUE_GROUP_PROPERTY_FLAG_COPY) {
+ copyEngineOrdinal = queueGroupIdx;
+ copyEngineMaxMemoryFillPatternSize = group.maxMemoryFillPatternSize;
+ }
+ if (copyEngineOrdinal != -1u && computeEngineOrdinal != -1u)
+ break;
+ }
+
+ // Fallback to the default queue if no dedicated copy queue is available.
+ if (copyEngineOrdinal == -1u)
+ copyEngineOrdinal = computeEngineOrdinal;
+
+ assert(copyEngineOrdinal != -1u && computeEngineOrdinal != -1u &&
+ "Expected two engines to be available.");
+
+ // Create copy command list
+ ze_command_queue_desc_t cmdQueueDesc{
+ ZE_STRUCTURE_TYPE_COMMAND_QUEUE_DESC,
+ nullptr,
+ copyEngineOrdinal, // ordinal
+ 0, // index (assume one physical engine in the group)
+ 0, // flags
+ ZE_COMMAND_QUEUE_MODE_ASYNCHRONOUS,
+ ZE_COMMAND_QUEUE_PRIORITY_NORMAL};
+
+ ze_command_list_handle_t rawCmdListCopy = nullptr;
+ L0_SAFE_CALL(zeCommandListCreateImmediate(context.get(), device,
+ &cmdQueueDesc, &rawCmdListCopy));
+ immCmdListCopy.reset(rawCmdListCopy);
+
+ // Create compute command list
+ cmdQueueDesc.ordinal = computeEngineOrdinal;
+ ze_command_list_handle_t rawCmdListCompute = nullptr;
+ L0_SAFE_CALL(zeCommandListCreateImmediate(
+ context.get(), device, &cmdQueueDesc, &rawCmdListCompute));
+ immCmdListCompute.reset(rawCmdListCompute);
+ }
+ L0RTContextWrapper(const L0RTContextWrapper &) = delete;
+ L0RTContextWrapper &operator=(const L0RTContextWrapper &) = delete;
+ // Allow move
+ L0RTContextWrapper(L0RTContextWrapper &&) noexcept = default;
+ L0RTContextWrapper &operator=(L0RTContextWrapper &&) noexcept = default;
+ ~L0RTContextWrapper() = default;
+};
+
+struct ZeEventDeleter {
+ void operator()(ze_event_handle_t event) const {
+ if (event)
+ L0_SAFE_CALL(zeEventDestroy(event));
+ }
+};
+
+struct ZeEventPoolDeleter {
+ void operator()(ze_event_pool_handle_t pool) const {
+ if (pool)
+ L0_SAFE_CALL(zeEventPoolDestroy(pool));
+ }
+};
+
+using UniqueZeEvent =
+ std::unique_ptr<std::remove_pointer<ze_event_handle_t>::type,
+ ZeEventDeleter>;
+using UniqueZeEventPool =
+ std::unique_ptr<std::remove_pointer<ze_event_pool_handle_t>::type,
+ ZeEventPoolDeleter>;
+
+// L0 only supports pre-determined sizes of event pools,
+// implement a runtime data structure to avoid running out of events.
+
+struct DynamicEventPool {
+ constexpr static size_t numEventsPerPool{128};
+
+ std::vector<UniqueZeEventPool> eventPools;
+ std::vector<UniqueZeEvent> availableEvents;
+ std::unordered_map<ze_event_handle_t, UniqueZeEvent> takenEvents;
+
+ // Limit the number of events to avoid running out of memory.
+ // The limit is set to 32K events, which should be sufficient for most use
+ // cases.
+ size_t maxEventsCount{32768}; // 32K events
+ size_t currentEventsLimit{0};
+ size_t currentEventsCnt{0};
+ L0RTContextWrapper *rtCtx;
+
+ DynamicEventPool(L0RTContextWrapper *rtCtx) : rtCtx(rtCtx) {
+ createNewPool(numEventsPerPool);
+ }
+
+ DynamicEventPool(const DynamicEventPool &) = delete;
+ DynamicEventPool &operator=(const DynamicEventPool &) = delete;
+
+ // Allow move
+ DynamicEventPool(DynamicEventPool &&) noexcept = default;
+ DynamicEventPool &operator=(DynamicEventPool &&) noexcept = default;
+
+ ~DynamicEventPool() {
+ assert(takenEvents.empty() && "Some events were not released");
+ }
+
+ void createNewPool(size_t numEvents) {
+ ze_event_pool_desc_t eventPoolDesc = {};
+ eventPoolDesc.flags = ZE_EVENT_POOL_FLAG_HOST_VISIBLE;
+ eventPoolDesc.count = numEvents;
+
+ ze_event_pool_handle_t rawPool = nullptr;
+ L0_SAFE_CALL(zeEventPoolCreate(rtCtx->context.get(), &eventPoolDesc, 1,
+ &rtCtx->device, &rawPool));
+
+ eventPools.emplace_back(UniqueZeEventPool(rawPool));
+ currentEventsLimit += numEvents;
+ }
+
+ ze_event_handle_t takeEvent() {
+ ze_event_handle_t rawEvent = nullptr;
+
+ if (!availableEvents.empty()) {
+ // Reuse one
+ auto uniqueEvent = std::move(availableEvents.back());
+ availableEvents.pop_back();
+ rawEvent = uniqueEvent.get();
+ takenEvents[rawEvent] = std::move(uniqueEvent);
+ } else {
+ if (currentEventsCnt >= maxEventsCount) {
+ throw std::runtime_error("DynamicEventPool: reached max events limit");
+ }
+ if (currentEventsCnt == currentEventsLimit)
+ createNewPool(numEventsPerPool);
+
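+      // Events are only ever created in order, so the slot index within the
+      // newest pool is the running event count modulo the pool size.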
+ ze_event_desc_t eventDesc = {
+ ZE_STRUCTURE_TYPE_EVENT_DESC, nullptr,
+ static_cast<uint32_t>(currentEventsCnt % numEventsPerPool),
+ ZE_EVENT_SCOPE_FLAG_DEVICE, ZE_EVENT_SCOPE_FLAG_HOST};
+
+ ze_event_handle_t newEvent = nullptr;
+ L0_SAFE_CALL(
+ zeEventCreate(eventPools.back().get(), &eventDesc, &newEvent));
+
+ takenEvents[newEvent] = UniqueZeEvent(newEvent);
+ rawEvent = newEvent;
+ currentEventsCnt++;
+ }
+
+ return rawEvent;
+ }
+
+ void releaseEvent(ze_event_handle_t event) {
+ auto it = takenEvents.find(event);
+ assert(it != takenEvents.end() &&
+ "Attempting to release unknown or already released event");
+
+ L0_SAFE_CALL(zeEventHostReset(event));
+ availableEvents.emplace_back(std::move(it->second));
+ takenEvents.erase(it);
+ }
+};
+
+L0RTContextWrapper &getRtContext() {
+ thread_local static L0RTContextWrapper rtContext(0);
+ return rtContext;
+}
+
+DynamicEventPool &getDynamicEventPool() {
+ thread_local static DynamicEventPool dynEventPool{&getRtContext()};
+ return dynEventPool;
+}
+
+struct StreamWrapper {
+  // Use a std::deque so that pointers handed out by getLastImplicitEventPtr()
+  // stay valid while new events are enqueued.
+ std::deque<ze_event_handle_t> implicitEventStack;
+ DynamicEventPool &dynEventPool;
+
+ StreamWrapper(DynamicEventPool &dynEventPool) : dynEventPool(dynEventPool) {}
+ ~StreamWrapper() { sync(); }
+
+ ze_event_handle_t *getLastImplicitEventPtr() {
+ // Assume current implicit events will not be used after `sync`.
+ return implicitEventStack.size() ? &implicitEventStack.back() : nullptr;
+ }
+
+ void sync(ze_event_handle_t explicitEvent = nullptr) {
+ ze_event_handle_t syncEvent{nullptr};
+ if (!explicitEvent) {
+ ze_event_handle_t *lastImplicitEventPtr = getLastImplicitEventPtr();
+ syncEvent = lastImplicitEventPtr ? *lastImplicitEventPtr : nullptr;
+ } else {
+ syncEvent = explicitEvent;
+ }
+ if (syncEvent)
+ L0_SAFE_CALL(zeEventHostSynchronize(
+ syncEvent, std::numeric_limits<uint64_t>::max()));
+    // All of the "implicit" events have been signaled and are of no further
+    // use; release them. An "explicit" event must be released via
+    // mgpuEventDestroy.
+ for (auto event : implicitEventStack)
+ dynEventPool.releaseEvent(event);
+ implicitEventStack.clear();
+ }
+
+ template <typename Func>
+ void enqueueOp(Func &&op) {
+ ze_event_handle_t newImplicitEvent = dynEventPool.takeEvent();
+ ze_event_handle_t *lastImplicitEventPtr = getLastImplicitEventPtr();
+ const uint32_t numWaitEvents = lastImplicitEventPtr ? 1 : 0;
+ std::forward<Func>(op)(newImplicitEvent, numWaitEvents,
+ lastImplicitEventPtr);
+ implicitEventStack.push_back(newImplicitEvent);
+ }
+};
+
+static ze_module_handle_t loadModule(const void *data, size_t dataSize) {
+ assert(data);
+ ze_module_handle_t zeModule;
+ ze_module_desc_t desc = {ZE_STRUCTURE_TYPE_MODULE_DESC,
+ nullptr,
+ ZE_MODULE_FORMAT_IL_SPIRV,
+ dataSize,
+ (const uint8_t *)data,
+ nullptr,
+ nullptr};
+ ze_module_build_log_handle_t buildLogHandle;
+ ze_result_t result =
+ zeModuleCreate(getRtContext().context.get(), getRtContext().device, &desc,
+ &zeModule, &buildLogHandle);
+ if (result != ZE_RESULT_SUCCESS) {
+ std::cerr << "Error creating module, error code: " << result << std::endl;
+ size_t logSize = 0;
+ L0_SAFE_CALL(zeModuleBuildLogGetString(buildLogHandle, &logSize, nullptr));
+    std::string buildLog(logSize, ' ');
+ L0_SAFE_CALL(
+ zeModuleBuildLogGetString(buildLogHandle, &logSize, buildLog.data()));
+ std::cerr << "Build log:\n" << buildLog << std::endl;
+ std::abort();
+ }
+ return zeModule;
+}
+
+//===----------------------------------------------------------------------===//
+// L0 Wrappers definition
+//===----------------------------------------------------------------------===//
+
+extern "C" StreamWrapper *mgpuStreamCreate() {
+ return new StreamWrapper(getDynamicEventPool());
+}
+
+extern "C" void mgpuStreamSynchronize(StreamWrapper *stream) {
+ if (stream)
+ stream->sync();
+}
+
+extern "C" void mgpuStreamDestroy(StreamWrapper *stream) { delete stream; }
+
+extern "C" void mgpuStreamWaitEvent(StreamWrapper *stream,
+ ze_event_handle_t event) {
+ assert(stream && "Invalid stream");
+ assert(event && "Invalid event");
+ stream->sync(event);
+}
+
+extern "C" ze_event_handle_t mgpuEventCreate() {
+ return getDynamicEventPool().takeEvent();
+}
+
+extern "C" void mgpuEventDestroy(ze_event_handle_t event) {
+ return getDynamicEventPool().releaseEvent(event);
+}
+
+extern "C" void mgpuEventSynchronize(ze_event_handle_t event) {
+ L0_SAFE_CALL(
+ zeEventHostSynchronize(event, std::numeric_limits<uint64_t>::max()));
+ L0_SAFE_CALL(zeEventHostReset(event));
+}
+
+extern "C" void mgpuEventRecord(ze_event_handle_t event,
+ StreamWrapper *stream) {
+ L0_SAFE_CALL(zeCommandListAppendSignalEvent(
+ getRtContext().immCmdListCopy.get(), event));
+ L0_SAFE_CALL(zeCommandListAppendSignalEvent(
+ getRtContext().immCmdListCompute.get(), event));
+}
+
+extern "C" void *mgpuMemAlloc(uint64_t size, StreamWrapper *stream,
+ bool isShared) {
+ return catchAll([&]() {
+ void *memPtr = nullptr;
+ constexpr size_t alignment{64};
+ ze_device_mem_alloc_desc_t deviceDesc = {};
+ deviceDesc.stype = ZE_STRUCTURE_TYPE_DEVICE_MEM_ALLOC_DESC;
+ if (isShared) {
+ ze_host_mem_alloc_desc_t hostDesc = {};
+ hostDesc.stype = ZE_STRUCTURE_TYPE_HOST_MEM_ALLOC_DESC;
+ L0_SAFE_CALL(zeMemAllocShared(getRtContext().context.get(), &deviceDesc,
+ &hostDesc, size, alignment,
+ getRtContext().device, &memPtr));
+ } else {
+ L0_SAFE_CALL(zeMemAllocDevice(getRtContext().context.get(), &deviceDesc,
+ size, alignment, getRtContext().device,
+ &memPtr));
+ }
+ if (!memPtr)
+ throw std::runtime_error("mem allocation failed!");
+ return memPtr;
+ });
+}
+
+extern "C" void mgpuMemFree(void *ptr, StreamWrapper *stream) {
+ stream->sync();
+ if (ptr)
+ L0_SAFE_CALL(zeMemFree(getRtContext().context.get(), ptr));
+}
+
+extern "C" void mgpuMemcpy(void *dst, void *src, size_t sizeBytes,
+ StreamWrapper *stream) {
+ stream->enqueueOp([&](ze_event_handle_t newEvent, uint32_t numWaitEvents,
+ ze_event_handle_t *waitEvents) {
+ L0_SAFE_CALL(zeCommandListAppendMemoryCopy(
+ getRtContext().immCmdListCopy.get(), dst, src, sizeBytes, newEvent,
+ numWaitEvents, waitEvents));
+ });
+}
+
+template <typename PATTERN_TYPE>
+void mgpuMemset(void *dst, PATTERN_TYPE value, size_t count,
+ StreamWrapper *stream) {
+ L0RTContextWrapper &rtContext = getRtContext();
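+  // Use the copy engine only if it supports this fill-pattern size; copy
+  // engines may be limited to tiny patterns (e.g., 1 byte), in which case
+  // the memset goes to the compute engine's command list instead.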
+ auto listType =
+ rtContext.copyEngineMaxMemoryFillPatternSize >= sizeof(PATTERN_TYPE)
+ ? rtContext.immCmdListCopy.get()
+ : rtContext.immCmdListCompute.get();
+ stream->enqueueOp([&](ze_event_handle_t newEvent, uint32_t numWaitEvents,
+ ze_event_handle_t *waitEvents) {
+ L0_SAFE_CALL(zeCommandListAppendMemoryFill(
+ listType, dst, &value, sizeof(PATTERN_TYPE),
+ count * sizeof(PATTERN_TYPE), newEvent, numWaitEvents, waitEvents));
+ });
+}
+extern "C" void mgpuMemset32(void *dst, unsigned int value, size_t count,
+ StreamWrapper *stream) {
+ mgpuMemset<unsigned int>(dst, value, count, stream);
+}
+
+extern "C" void mgpuMemset16(void *dst, unsigned short value, size_t count,
+ StreamWrapper *stream) {
+ mgpuMemset<unsigned short>(dst, value, count, stream);
+}
+
+extern "C" ze_module_handle_t mgpuModuleLoad(const void *data,
+ size_t gpuBlobSize) {
+ return catchAll([&]() { return loadModule(data, gpuBlobSize); });
+}
+
+extern "C" ze_kernel_handle_t mgpuModuleGetFunction(ze_module_handle_t module,
+ const char *name) {
+ assert(module && name);
+ ze_kernel_handle_t zeKernel;
+ ze_kernel_desc_t desc = {};
+ desc.pKernelName = name;
+ L0_SAFE_CALL(zeKernelCreate(module, &desc, &zeKernel));
+ return zeKernel;
+}
+
+extern "C" void mgpuLaunchKernel(ze_kernel_handle_t kernel, size_t gridX,
+ size_t gridY, size_t gridZ, size_t blockX,
+ size_t blockY, size_t blockZ,
+ size_t sharedMemBytes, StreamWrapper *stream,
+ void **params, void ** /*extra*/,
+ size_t paramsCount) {
+
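+  // Assumed contract with the MLIR lowering: when dynamic shared memory is
+  // requested, the kernel's trailing parameter is a shared-local-memory
+  // buffer, and binding it with a null value plus a byte size makes Level
+  // Zero reserve that much SLM for the argument.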
+ if (sharedMemBytes > 0) {
+ paramsCount = paramsCount - 1; // Last param is shared memory size
+ L0_SAFE_CALL(
+ zeKernelSetArgumentValue(kernel, paramsCount, sharedMemBytes, nullptr));
+ }
+ for (size_t i = 0; i < paramsCount; ++i)
+ L0_SAFE_CALL(zeKernelSetArgumentValue(kernel, static_cast<uint32_t>(i),
+ sizeof(void *), params[i]));
+ L0_SAFE_CALL(zeKernelSetGroupSize(kernel, blockX, blockY, blockZ));
+ ze_group_count_t dispatch;
+ dispatch.groupCountX = static_cast<uint32_t>(gridX);
+ dispatch.groupCountY = static_cast<uint32_t>(gridY);
+ dispatch.groupCountZ = static_cast<uint32_t>(gridZ);
+ stream->enqueueOp([&](ze_event_handle_t newEvent, uint32_t numWaitEvents,
+ ze_event_handle_t *waitEvents) {
+ L0_SAFE_CALL(zeCommandListAppendLaunchKernel(
+ getRtContext().immCmdListCompute.get(), kernel, &dispatch, newEvent,
+ numWaitEvents, waitEvents));
+ });
+}
+
+extern "C" void mgpuModuleUnload(ze_module_handle_t module) {
+ L0_SAFE_CALL(zeModuleDestroy(module));
+}
+
+extern "C" void mgpuSetDefaultDevice(int32_t devIdx) {
+ catchAll([&]() {
+ // For now, a user must ensure that streams and events complete
+ // and are destroyed before switching a device.
+    getRtContext() = L0RTContextWrapper(/*driverIdx=*/0, devIdx);
+ getDynamicEventPool() = DynamicEventPool(&getRtContext());
+ });
+}
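For orientation, a minimal host-side sequence over the wrappers above. In
practice mlir-runner emits these calls from the lowered IR; this hand-written
sketch only assumes linking against mlir_levelzero_runtime:
#include <cstddef>
#include <cstdint>
extern "C" {
struct StreamWrapper;
StreamWrapper *mgpuStreamCreate();
void *mgpuMemAlloc(uint64_t size, StreamWrapper *stream, bool isShared);
void mgpuMemset32(void *dst, unsigned int value, size_t count,
                  StreamWrapper *stream);
void mgpuStreamSynchronize(StreamWrapper *stream);
void mgpuMemFree(void *ptr, StreamWrapper *stream);
void mgpuStreamDestroy(StreamWrapper *stream);
}
int main() {
  StreamWrapper *stream = mgpuStreamCreate();
  // Host-visible (shared) allocation of 256 uint32 elements.
  void *buf = mgpuMemAlloc(256 * sizeof(unsigned int), stream, /*isShared=*/true);
  mgpuMemset32(buf, 42u, 256, stream); // enqueued; chained via implicit events
  mgpuStreamSynchronize(stream);       // waits on the last implicit event
  mgpuMemFree(buf, stream);            // syncs internally before zeMemFree
  mgpuStreamDestroy(stream);           // the destructor also syncs
  return 0;
}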
diff --git a/mlir/test/CMakeLists.txt b/mlir/test/CMakeLists.txt
index 89568e7766ae5..a4a942de3c9a7 100644
--- a/mlir/test/CMakeLists.txt
+++ b/mlir/test/CMakeLists.txt
@@ -167,6 +167,10 @@ if(MLIR_ENABLE_SYCL_RUNNER)
list(APPEND MLIR_TEST_DEPENDS mlir_sycl_runtime)
endif()
+if(MLIR_ENABLE_LEVELZERO_RUNNER)
+ list(APPEND MLIR_TEST_DEPENDS mlir_levelzero_runtime)
+endif()
+
if (MLIR_RUN_ARM_SME_TESTS AND NOT ARM_SME_ABI_ROUTINES_SHLIB)
list(APPEND MLIR_TEST_DEPENDS mlir_arm_sme_abi_stubs)
endif()
diff --git a/mlir/test/Integration/GPU/LevelZero/gpu-addf32-to-spirv.mlir b/mlir/test/Integration/GPU/LevelZero/gpu-addf32-to-spirv.mlir
new file mode 100644
index 0000000000000..7e66dee0272f6
--- /dev/null
+++ b/mlir/test/Integration/GPU/LevelZero/gpu-addf32-to-spirv.mlir
@@ -0,0 +1,59 @@
+// RUN: mlir-opt %s -pass-pipeline='builtin.module(spirv-attach-target{ver=v1.0 caps=Addresses,Int64,Kernel},convert-gpu-to-spirv{use-64bit-index=true},gpu.module(spirv.module(spirv-lower-abi-attrs,spirv-update-vce)),func.func(llvm-request-c-wrappers),convert-scf-to-cf,convert-to-llvm,gpu-to-llvm{use-bare-pointers-for-kernels=true},gpu-module-to-binary,expand-strided-metadata,lower-affine,reconcile-unrealized-casts)' \
+// RUN: | mlir-runner \
+// RUN: --shared-libs=%mlir_levelzero_runtime \
+// RUN: --shared-libs=%mlir_runner_utils \
+// RUN: --entry-point-result=void \
+// RUN: | FileCheck %s
+
+module @add attributes {gpu.container_module} {
+ memref.global "private" constant @__constant_2x2x2xf32_0 : memref<2x2x2xf32> = dense<[[[1.1, 2.2], [3.3, 4.4]], [[5.5, 6.6], [7.7, 8.8 ]]]>
+ memref.global "private" constant @__constant_2x2x2xf32 : memref<2x2x2xf32> = dense<[[[1.2, 2.3], [4.5, 5.8]], [[7.2, 8.3], [10.5, 11.8]]]>
+ func.func @main() {
+ %0 = memref.get_global @__constant_2x2x2xf32 : memref<2x2x2xf32>
+ %1 = memref.get_global @__constant_2x2x2xf32_0 : memref<2x2x2xf32>
+ %2 = call @test(%0, %1) : (memref<2x2x2xf32>, memref<2x2x2xf32>) -> memref<2x2x2xf32>
+ %cast = memref.cast %2 : memref<2x2x2xf32> to memref<*xf32>
+ call @printMemrefF32(%cast) : (memref<*xf32>) -> ()
+ return
+ }
+ func.func private @printMemrefF32(memref<*xf32>)
+ func.func @test(%arg0: memref<2x2x2xf32>, %arg1: memref<2x2x2xf32>) -> memref<2x2x2xf32> {
+ %c2 = arith.constant 2 : index
+ %c1 = arith.constant 1 : index
+ %mem = gpu.alloc host_shared () : memref<2x2x2xf32>
+ memref.copy %arg1, %mem : memref<2x2x2xf32> to memref<2x2x2xf32>
+ %memref_0 = gpu.alloc host_shared () : memref<2x2x2xf32>
+ memref.copy %arg0, %memref_0 : memref<2x2x2xf32> to memref<2x2x2xf32>
+ %memref_2 = gpu.alloc host_shared () : memref<2x2x2xf32>
+ %2 = gpu.wait async
+ %3 = gpu.launch_func async [%2] @test_kernel::@test_kernel blocks in (%c2, %c2, %c2) threads in (%c1, %c1, %c1)
+ args(%memref_0 : memref<2x2x2xf32>, %mem : memref<2x2x2xf32>, %memref_2 : memref<2x2x2xf32>)
+ gpu.wait [%3]
+ %alloc = memref.alloc() : memref<2x2x2xf32>
+ memref.copy %memref_2, %alloc : memref<2x2x2xf32> to memref<2x2x2xf32>
+ %4 = gpu.wait async
+ %5 = gpu.dealloc async [%4] %memref_2 : memref<2x2x2xf32>
+ %6 = gpu.dealloc async [%5] %memref_0 : memref<2x2x2xf32>
+ %7 = gpu.dealloc async [%6] %mem : memref<2x2x2xf32>
+ gpu.wait [%7]
+ return %alloc : memref<2x2x2xf32>
+ }
+ gpu.module @test_kernel
+ attributes {spirv.target_env = #spirv.target_env<#spirv.vce<v1.0, [Addresses, Int64, Kernel], []>, api=OpenCL, #spirv.resource_limits<>>} {
+ gpu.func @test_kernel(%arg0: memref<2x2x2xf32>, %arg1: memref<2x2x2xf32>, %arg2: memref<2x2x2xf32>) kernel
+ attributes {gpu.known_block_size = array<i32: 1, 1, 1>, gpu.known_grid_size = array<i32: 2, 2, 2>, spirv.entry_point_abi = #spirv.entry_point_abi<>} {
+ %0 = gpu.block_id x
+ %1 = gpu.block_id y
+ %2 = gpu.block_id z
+ %3 = memref.load %arg0[%0, %1, %2] : memref<2x2x2xf32>
+ %4 = memref.load %arg1[%0, %1, %2] : memref<2x2x2xf32>
+ %5 = arith.addf %3, %4 : f32
+ memref.store %5, %arg2[%0, %1, %2] : memref<2x2x2xf32>
+ gpu.return
+ }
+ }
+ // CHECK: [2.3, 4.5]
+ // CHECK: [7.8, 10.2]
+ // CHECK: [12.7, 14.9]
+ // CHECK: [18.2, 20.6]
+}
diff --git a/mlir/test/Integration/GPU/LevelZero/gpu-addi64-to-spirv.mlir b/mlir/test/Integration/GPU/LevelZero/gpu-addi64-to-spirv.mlir
new file mode 100644
index 0000000000000..df8fbe4d86d9c
--- /dev/null
+++ b/mlir/test/Integration/GPU/LevelZero/gpu-addi64-to-spirv.mlir
@@ -0,0 +1,57 @@
+// RUN: mlir-opt %s -pass-pipeline='builtin.module(spirv-attach-target{ver=v1.0 caps=Addresses,Int64,Kernel},convert-gpu-to-spirv{use-64bit-index=true},gpu.module(spirv.module(spirv-lower-abi-attrs,spirv-update-vce)),func.func(llvm-request-c-wrappers),convert-scf-to-cf,convert-to-llvm,gpu-to-llvm{use-bare-pointers-for-kernels=true},gpu-module-to-binary,expand-strided-metadata,lower-affine,reconcile-unrealized-casts)' \
+// RUN: | mlir-runner \
+// RUN: --shared-libs=%mlir_levelzero_runtime \
+// RUN: --shared-libs=%mlir_runner_utils \
+// RUN: --entry-point-result=void \
+// RUN: | FileCheck %s
+
+module @add attributes {gpu.container_module} {
+ memref.global "private" constant @__constant_3x3xi64_0 : memref<3x3xi64> = dense<[[1, 4098, 3], [16777220, 5, 4294967302], [7, 1099511627784, 9]]>
+ memref.global "private" constant @__constant_3x3xi64 : memref<3x3xi64> = dense<[[1, 2, 3], [4, 5, 4102], [16777223, 4294967304, 1099511627785]]>
+ func.func @main() {
+ %0 = memref.get_global @__constant_3x3xi64 : memref<3x3xi64>
+ %1 = memref.get_global @__constant_3x3xi64_0 : memref<3x3xi64>
+ %2 = call @test(%0, %1) : (memref<3x3xi64>, memref<3x3xi64>) -> memref<3x3xi64>
+ %cast = memref.cast %2 : memref<3x3xi64> to memref<*xi64>
+ call @printMemrefI64(%cast) : (memref<*xi64>) -> ()
+ return
+ }
+ func.func private @printMemrefI64(memref<*xi64>)
+ func.func @test(%arg0: memref<3x3xi64>, %arg1: memref<3x3xi64>) -> memref<3x3xi64> {
+ %c3 = arith.constant 3 : index
+ %c1 = arith.constant 1 : index
+ %mem = gpu.alloc host_shared () : memref<3x3xi64>
+ memref.copy %arg1, %mem : memref<3x3xi64> to memref<3x3xi64>
+ %memref_0 = gpu.alloc host_shared () : memref<3x3xi64>
+ memref.copy %arg0, %memref_0 : memref<3x3xi64> to memref<3x3xi64>
+ %memref_2 = gpu.alloc host_shared () : memref<3x3xi64>
+ %2 = gpu.wait async
+ %3 = gpu.launch_func async [%2] @test_kernel::@test_kernel blocks in (%c3, %c3, %c1) threads in (%c1, %c1, %c1)
+ args(%memref_0 : memref<3x3xi64>, %mem : memref<3x3xi64>, %memref_2 : memref<3x3xi64>)
+ gpu.wait [%3]
+ %alloc = memref.alloc() : memref<3x3xi64>
+ memref.copy %memref_2, %alloc : memref<3x3xi64> to memref<3x3xi64>
+ %4 = gpu.wait async
+ %5 = gpu.dealloc async [%4] %memref_2 : memref<3x3xi64>
+ %6 = gpu.dealloc async [%5] %memref_0 : memref<3x3xi64>
+ %7 = gpu.dealloc async [%6] %mem : memref<3x3xi64>
+ gpu.wait [%7]
+ return %alloc : memref<3x3xi64>
+ }
+ gpu.module @test_kernel
+ attributes {spirv.target_env = #spirv.target_env<#spirv.vce<v1.0, [Addresses, Int64, Kernel], []>, api=OpenCL, #spirv.resource_limits<>>} {
+ gpu.func @test_kernel(%arg0: memref<3x3xi64>, %arg1: memref<3x3xi64>, %arg2: memref<3x3xi64>) kernel
+ attributes {gpu.known_block_size = array<i32: 1, 1, 1>, gpu.known_grid_size = array<i32: 3, 3, 1>, spirv.entry_point_abi = #spirv.entry_point_abi<>} {
+ %0 = gpu.block_id x
+ %1 = gpu.block_id y
+ %2 = memref.load %arg0[%0, %1] : memref<3x3xi64>
+ %3 = memref.load %arg1[%0, %1] : memref<3x3xi64>
+ %4 = arith.addi %2, %3 : i64
+ memref.store %4, %arg2[%0, %1] : memref<3x3xi64>
+ gpu.return
+ }
+ }
+ // CHECK: [2, 4100, 6],
+ // CHECK: [16777224, 10, 4294971404],
+ // CHECK: [16777230, 1103806595088, 1099511627794]
+}
diff --git a/mlir/test/Integration/GPU/LevelZero/gpu-memcpy-addf32-to-spirv.mlir b/mlir/test/Integration/GPU/LevelZero/gpu-memcpy-addf32-to-spirv.mlir
new file mode 100644
index 0000000000000..cd99f2c70dc6e
--- /dev/null
+++ b/mlir/test/Integration/GPU/LevelZero/gpu-memcpy-addf32-to-spirv.mlir
@@ -0,0 +1,56 @@
+// RUN: mlir-opt %s -pass-pipeline='builtin.module(func.func(gpu-async-region),spirv-attach-target{ver=v1.0 caps=Addresses,Int64,Kernel},convert-gpu-to-spirv{use-64bit-index=true},gpu.module(spirv.module(spirv-lower-abi-attrs,spirv-update-vce)),func.func(llvm-request-c-wrappers),convert-scf-to-cf,convert-to-llvm,gpu-to-llvm{use-bare-pointers-for-kernels=true},gpu-module-to-binary,expand-strided-metadata,lower-affine,reconcile-unrealized-casts)' \
+// RUN: | mlir-runner \
+// RUN: --shared-libs=%mlir_levelzero_runtime \
+// RUN: --shared-libs=%mlir_runner_utils \
+// RUN: --entry-point-result=void \
+// RUN: | FileCheck %s
+
+module @add attributes {gpu.container_module} {
+ memref.global "private" constant @__constant_2x2x2xf32_0 : memref<2x2x2xf32> = dense<[[[1.1, 2.2], [3.3, 4.4]], [[5.5, 6.6], [7.7, 8.8 ]]]>
+ memref.global "private" constant @__constant_2x2x2xf32 : memref<2x2x2xf32> = dense<[[[1.2, 2.3], [4.5, 5.8]], [[7.2, 8.3], [10.5, 11.8]]]>
+ func.func @main() {
+ %0 = memref.get_global @__constant_2x2x2xf32 : memref<2x2x2xf32>
+ %1 = memref.get_global @__constant_2x2x2xf32_0 : memref<2x2x2xf32>
+ %2 = call @test(%0, %1) : (memref<2x2x2xf32>, memref<2x2x2xf32>) -> memref<2x2x2xf32>
+ %cast = memref.cast %2 : memref<2x2x2xf32> to memref<*xf32>
+ call @printMemrefF32(%cast) : (memref<*xf32>) -> ()
+ memref.dealloc %2 : memref<2x2x2xf32>
+ return
+ }
+ func.func private @printMemrefF32(memref<*xf32>)
+ func.func @test(%arg0: memref<2x2x2xf32>, %arg1: memref<2x2x2xf32>) -> memref<2x2x2xf32> {
+ %c2 = arith.constant 2 : index
+ %c1 = arith.constant 1 : index
+ %memref = gpu.alloc () : memref<2x2x2xf32>
+ gpu.memcpy %memref, %arg0 : memref<2x2x2xf32>, memref<2x2x2xf32>
+ %memref_0 = gpu.alloc () : memref<2x2x2xf32>
+ gpu.memcpy %memref_0, %arg1 : memref<2x2x2xf32>, memref<2x2x2xf32>
+ %memref_1 = gpu.alloc () : memref<2x2x2xf32>
+ gpu.launch_func @test_kernel::@test_kernel blocks in (%c2, %c2, %c2) threads in (%c1, %c1, %c1)
+ args(%memref : memref<2x2x2xf32>, %memref_0 : memref<2x2x2xf32>, %memref_1 : memref<2x2x2xf32>)
+ %alloc = memref.alloc() : memref<2x2x2xf32>
+ gpu.memcpy %alloc, %memref_1 : memref<2x2x2xf32>, memref<2x2x2xf32>
+ gpu.dealloc %memref_1 : memref<2x2x2xf32>
+ gpu.dealloc %memref_0 : memref<2x2x2xf32>
+ gpu.dealloc %memref : memref<2x2x2xf32>
+ return %alloc : memref<2x2x2xf32>
+ }
+ gpu.module @test_kernel
+ attributes {spirv.target_env = #spirv.target_env<#spirv.vce<v1.0, [Addresses, Int64, Kernel], []>, api=OpenCL, #spirv.resource_limits<>>} {
+ gpu.func @test_kernel(%arg0: memref<2x2x2xf32>, %arg1: memref<2x2x2xf32>, %arg2: memref<2x2x2xf32>) kernel
+ attributes {gpu.known_block_size = array<i32: 1, 1, 1>, gpu.known_grid_size = array<i32: 2, 2, 2>, spirv.entry_point_abi = #spirv.entry_point_abi<>} {
+ %0 = gpu.block_id x
+ %1 = gpu.block_id y
+ %2 = gpu.block_id z
+ %3 = memref.load %arg0[%0, %1, %2] : memref<2x2x2xf32>
+ %4 = memref.load %arg1[%0, %1, %2] : memref<2x2x2xf32>
+ %5 = arith.addf %3, %4 : f32
+ memref.store %5, %arg2[%0, %1, %2] : memref<2x2x2xf32>
+ gpu.return
+ }
+ }
+ // CHECK: [2.3, 4.5]
+ // CHECK: [7.8, 10.2]
+ // CHECK: [12.7, 14.9]
+ // CHECK: [18.2, 20.6]
+}
diff --git a/mlir/test/Integration/GPU/LevelZero/gpu-reluf32-to-spirv.mlir b/mlir/test/Integration/GPU/LevelZero/gpu-reluf32-to-spirv.mlir
new file mode 100644
index 0000000000000..8d022ac1cf277
--- /dev/null
+++ b/mlir/test/Integration/GPU/LevelZero/gpu-reluf32-to-spirv.mlir
@@ -0,0 +1,86 @@
+// RUN: mlir-opt %s -pass-pipeline='builtin.module(spirv-attach-target{ver=v1.0 caps=Addresses,Int64,Kernel},convert-gpu-to-spirv{use-64bit-index=true},gpu.module(spirv.module(spirv-lower-abi-attrs,spirv-update-vce)),func.func(llvm-request-c-wrappers),convert-scf-to-cf,convert-to-llvm,gpu-to-llvm{use-bare-pointers-for-kernels=true},gpu-module-to-binary,expand-strided-metadata,lower-affine,reconcile-unrealized-casts)' \
+// RUN: | mlir-runner \
+// RUN: --shared-libs=%mlir_levelzero_runtime \
+// RUN: --shared-libs=%mlir_runner_utils \
+// RUN: --entry-point-result=void \
+// RUN: | FileCheck %s
+
+module @relu attributes {gpu.container_module} {
+ memref.global "private" constant @__constant_4x5xf32 : memref<4x5xf32> = dense<[
+ [-1.000000e-01, -2.000000e-01, -3.000000e-01, 4.000000e-01, 5.000000e-01],
+ [1.000000e-01, -2.000000e-01, 3.000000e-01, -4.000000e-01, 5.000000e-01],
+ [1.000000e-01, 2.000000e-01, 3.000000e-01, -4.000000e-01, -5.000000e-01],
+ [1.000000e-01, 2.000000e-01, 3.000000e-01, 4.000000e-01, 5.000000e-01]
+ ]>
+
+ func.func @main() {
+ %c1 = arith.constant 1 : index
+ %c100 = arith.constant 100 : index
+ %c0 = arith.constant 0 : index
+ %0 = memref.get_global @__constant_4x5xf32 : memref<4x5xf32>
+
+ scf.for %arg0 = %c0 to %c100 step %c1 {
+ %1 = func.call @test(%0) : (memref<4x5xf32>) -> memref<4x5xf32>
+ %cast = memref.cast %1 : memref<4x5xf32> to memref<*xf32>
+ func.call @printMemrefF32(%cast) : (memref<*xf32>) -> ()
+ // CHECK: [0, 0, 0, 0.4, 0.5],
+ // CHECK: [0.1, 0, 0.3, 0, 0.5],
+ // CHECK: [0.1, 0.2, 0.3, 0, 0],
+ // CHECK: [0.1, 0.2, 0.3, 0.4, 0.5]
+ }
+ return
+ }
+
+ func.func private @printMemrefF32(memref<*xf32>)
+ func.func @test(%arg0: memref<4x5xf32>) -> memref<4x5xf32> {
+ %c5 = arith.constant 5 : index
+ %c4 = arith.constant 4 : index
+ %cst = arith.constant 0.000000e+00 : f32
+ %c1 = arith.constant 1 : index
+ %memref = gpu.alloc host_shared () : memref<4x5xf32>
+ memref.copy %arg0, %memref : memref<4x5xf32> to memref<4x5xf32>
+ %memref_0 = gpu.alloc host_shared () : memref<4x5xi1>
+ %2 = gpu.wait async
+ %3 = gpu.launch_func async [%2] @test_kernel::@test_kernel blocks in (%c4, %c5, %c1) threads in (%c1, %c1, %c1)
+ args(%memref : memref<4x5xf32>, %cst : f32, %memref_0 : memref<4x5xi1>)
+ gpu.wait [%3]
+ %memref_1 = gpu.alloc host_shared () : memref<4x5xf32>
+ %4 = gpu.wait async
+ %5 = gpu.launch_func async [%4] @test_kernel_0::@test_kernel blocks in (%c4, %c5, %c1) threads in (%c1, %c1, %c1)
+ args(%memref_0 : memref<4x5xi1>, %memref : memref<4x5xf32>, %cst : f32,
+ %memref_1 : memref<4x5xf32>)
+ gpu.wait [%5]
+ %alloc = memref.alloc() : memref<4x5xf32>
+ memref.copy %memref_1, %alloc : memref<4x5xf32> to memref<4x5xf32>
+ %6 = gpu.wait async
+ %7 = gpu.dealloc async [%6] %memref_1 : memref<4x5xf32>
+ %8 = gpu.dealloc async [%7] %memref_0 : memref<4x5xi1>
+ %9 = gpu.dealloc async [%8] %memref : memref<4x5xf32>
+ return %alloc : memref<4x5xf32>
+ }
+ gpu.module @test_kernel
+ attributes {spirv.target_env = #spirv.target_env<#spirv.vce<v1.0, [Addresses, Int64, Int8, Kernel], []>, api=OpenCL, #spirv.resource_limits<>>} {
+ gpu.func @test_kernel(%arg0: memref<4x5xf32>, %arg1: f32, %arg2: memref<4x5xi1>) kernel
+ attributes {gpu.known_block_size = array<i32: 1, 1, 1>, gpu.known_grid_size = array<i32: 4, 5, 1>, spirv.entry_point_abi = #spirv.entry_point_abi<>} {
+ %0 = gpu.block_id x
+ %1 = gpu.block_id y
+ %2 = memref.load %arg0[%0, %1] : memref<4x5xf32>
+ %3 = arith.cmpf olt, %2, %arg1 : f32
+ memref.store %3, %arg2[%0, %1] : memref<4x5xi1>
+ gpu.return
+ }
+ }
+ gpu.module @test_kernel_0
+ attributes {spirv.target_env = #spirv.target_env<#spirv.vce<v1.0, [Addresses, Int64, Int8, Kernel], []>, api=OpenCL, #spirv.resource_limits<>>} {
+ gpu.func @test_kernel(%arg0: memref<4x5xi1>, %arg1: memref<4x5xf32>, %arg2: f32, %arg3: memref<4x5xf32>) kernel
+ attributes {gpu.known_block_size = array<i32: 1, 1, 1>, gpu.known_grid_size = array<i32: 4, 5, 1>, spirv.entry_point_abi = #spirv.entry_point_abi<>} {
+ %0 = gpu.block_id x
+ %1 = gpu.block_id y
+ %2 = memref.load %arg0[%0, %1] : memref<4x5xi1>
+ %3 = memref.load %arg1[%0, %1] : memref<4x5xf32>
+ %4 = arith.select %2, %arg2, %3 : f32
+ memref.store %4, %arg3[%0, %1] : memref<4x5xf32>
+ gpu.return
+ }
+ }
+}
diff --git a/mlir/test/Integration/GPU/LevelZero/lit.local.cfg b/mlir/test/Integration/GPU/LevelZero/lit.local.cfg
new file mode 100644
index 0000000000000..36c7ad5f57c7e
--- /dev/null
+++ b/mlir/test/Integration/GPU/LevelZero/lit.local.cfg
@@ -0,0 +1,2 @@
+if not config.enable_levelzero_runner:
+ config.unsupported = True
diff --git a/mlir/test/lit.cfg.py b/mlir/test/lit.cfg.py
index feaf5fb852a1d..f392bdacadd3c 100644
--- a/mlir/test/lit.cfg.py
+++ b/mlir/test/lit.cfg.py
@@ -224,6 +224,9 @@ def find_real_python_interpreter():
if config.enable_sycl_runner:
tools.extend([add_runtime("mlir_sycl_runtime")])
+if config.enable_levelzero_runner:
+ tools.extend([add_runtime("mlir_levelzero_runtime")])
+
if config.enable_spirv_cpu_runner:
tools.extend([add_runtime("mlir_spirv_cpu_runtime")])
diff --git a/mlir/test/lit.site.cfg.py.in b/mlir/test/lit.site.cfg.py.in
index b1185e19d86e8..d904780af4224 100644
--- a/mlir/test/lit.site.cfg.py.in
+++ b/mlir/test/lit.site.cfg.py.in
@@ -34,6 +34,7 @@ config.enable_rocm_runner = @MLIR_ENABLE_ROCM_RUNNER@
config.gpu_compilation_format = "@MLIR_GPU_COMPILATION_TEST_FORMAT@"
config.rocm_test_chipset = "@ROCM_TEST_CHIPSET@"
config.enable_sycl_runner = @MLIR_ENABLE_SYCL_RUNNER@
+config.enable_levelzero_runner = @MLIR_ENABLE_LEVELZERO_RUNNER@
config.enable_spirv_cpu_runner = @MLIR_ENABLE_SPIRV_CPU_RUNNER@
config.enable_vulkan_runner = @MLIR_ENABLE_VULKAN_RUNNER@
config.enable_bindings_python = @MLIR_ENABLE_BINDINGS_PYTHON@
>From d54aa36146297ddfb764394c4f70b0758b75becd Mon Sep 17 00:00:00 2001
From: Aiden Grossman <aidengrossman at google.com>
Date: Wed, 6 Aug 2025 15:00:51 -0700
Subject: [PATCH 160/171] [CI] Refactor monolithic-* scripts to use common
utils.sh
This patch refactors big chunks of the common functionality shared
between monolithic-linux.sh and monolithic-windows.sh into a separate
script, utils.sh, that is then sourced by both files. This makes the
scripts a bit easier to maintain.
Platform differences should not matter much for the setup here, as we
use bash as the shell on both Linux and Windows.
Reviewers:
lnihlen, gburgessiv, Keenuts, DavidSpickett, dschuff, cmtice, Endilll
Reviewed By: DavidSpickett, cmtice
Pull Request: https://github.com/llvm/llvm-project/pull/152199
---
.ci/monolithic-linux.sh | 39 ++----------------------------
.ci/monolithic-windows.sh | 39 +-----------------------------
.ci/utils.sh | 51 +++++++++++++++++++++++++++++++++++++++
3 files changed, 54 insertions(+), 75 deletions(-)
create mode 100644 .ci/utils.sh
diff --git a/.ci/monolithic-linux.sh b/.ci/monolithic-linux.sh
index da21e09809aee..b9afc7a57b21c 100755
--- a/.ci/monolithic-linux.sh
+++ b/.ci/monolithic-linux.sh
@@ -13,50 +13,15 @@
# run only the relevant tests.
#
-set -ex
-set -o pipefail
+source .ci/utils.sh
-MONOREPO_ROOT="${MONOREPO_ROOT:="$(git rev-parse --show-toplevel)"}"
-BUILD_DIR="${BUILD_DIR:=${MONOREPO_ROOT}/build}"
INSTALL_DIR="${BUILD_DIR}/install"
-rm -rf "${BUILD_DIR}"
-
-sccache --zero-stats
mkdir -p artifacts/reproducers
-# Make sure any clang reproducers will end up as artifacts.
+# Make sure any clang reproducers will end up as artifacts
export CLANG_CRASH_DIAGNOSTICS_DIR=`realpath artifacts/reproducers`
-function at-exit {
- retcode=$?
-
- sccache --show-stats > artifacts/sccache_stats.txt
- cp "${BUILD_DIR}"/.ninja_log artifacts/.ninja_log
- cp "${BUILD_DIR}"/test-results.*.xml artifacts/ || :
-
- # If building fails there will be no results files.
- shopt -s nullglob
-
- if [[ "$GITHUB_STEP_SUMMARY" != "" ]]; then
- python3 "${MONOREPO_ROOT}"/.ci/generate_test_report_github.py \
- $retcode "${BUILD_DIR}"/test-results.*.xml >> $GITHUB_STEP_SUMMARY
- fi
-}
-trap at-exit EXIT
-
-function start-group {
- groupname=$1
- if [[ "$GITHUB_ACTIONS" != "" ]]; then
- echo "::endgroup"
- echo "::group::$groupname"
- elif [[ "$POSTCOMMIT_CI" != "" ]]; then
- echo "@@@$STEP@@@"
- else
- echo "Starting $groupname"
- fi
-}
-
projects="${1}"
targets="${2}"
runtimes="${3}"
diff --git a/.ci/monolithic-windows.sh b/.ci/monolithic-windows.sh
index 71646ce047d4f..2e81e27bd7d0f 100755
--- a/.ci/monolithic-windows.sh
+++ b/.ci/monolithic-windows.sh
@@ -13,44 +13,7 @@
# run only the relevant tests.
#
-set -ex
-set -o pipefail
-
-MONOREPO_ROOT="${MONOREPO_ROOT:="$(git rev-parse --show-toplevel)"}"
-BUILD_DIR="${BUILD_DIR:=${MONOREPO_ROOT}/build}"
-
-rm -rf "${BUILD_DIR}"
-
-sccache --zero-stats
-function at-exit {
- retcode=$?
-
- mkdir -p artifacts
- sccache --show-stats >> artifacts/sccache_stats.txt
- cp "${BUILD_DIR}"/.ninja_log artifacts/.ninja_log
- cp "${BUILD_DIR}"/test-results.*.xml artifacts/ || :
-
- # If building fails there will be no results files.
- shopt -s nullglob
-
- if [[ "$GITHUB_STEP_SUMMARY" != "" ]]; then
- python "${MONOREPO_ROOT}"/.ci/generate_test_report_github.py \
- $retcode "${BUILD_DIR}"/test-results.*.xml >> $GITHUB_STEP_SUMMARY
- fi
-}
-trap at-exit EXIT
-
-function start-group {
- groupname=$1
- if [[ "$GITHUB_ACTIONS" != "" ]]; then
- echo "::endgroup"
- echo "::group::$groupname"
- elif [[ "$POSTCOMMIT_CI" != "" ]]; then
- echo "@@@$STEP@@@"
- else
- echo "Starting $groupname"
- fi
-}
+source .ci/utils.sh
projects="${1}"
targets="${2}"
diff --git a/.ci/utils.sh b/.ci/utils.sh
new file mode 100644
index 0000000000000..6656ffe06666a
--- /dev/null
+++ b/.ci/utils.sh
@@ -0,0 +1,51 @@
+#!/usr/bin/env bash
+#===----------------------------------------------------------------------===##
+#
+# Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+# See https://llvm.org/LICENSE.txt for license information.
+# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+#
+#===----------------------------------------------------------------------===##
+
+# This script performs some setup and contains some utilities used in the
+# monolithic-linux.sh and monolithic-windows.sh scripts.
+
+set -ex
+set -o pipefail
+
+MONOREPO_ROOT="${MONOREPO_ROOT:="$(git rev-parse --show-toplevel)"}"
+BUILD_DIR="${BUILD_DIR:=${MONOREPO_ROOT}/build}"
+
+rm -rf "${BUILD_DIR}"
+
+sccache --zero-stats
+
+function at-exit {
+ retcode=$?
+
+ mkdir -p artifacts
+ sccache --show-stats >> artifacts/sccache_stats.txt
+ cp "${BUILD_DIR}"/.ninja_log artifacts/.ninja_log
+ cp "${BUILD_DIR}"/test-results.*.xml artifacts/ || :
+
+ # If building fails there will be no results files.
+ shopt -s nullglob
+
+ if [[ "$GITHUB_STEP_SUMMARY" != "" ]]; then
+ python "${MONOREPO_ROOT}"/.ci/generate_test_report_github.py \
+ $retcode "${BUILD_DIR}"/test-results.*.xml >> $GITHUB_STEP_SUMMARY
+ fi
+}
+trap at-exit EXIT
+
+function start-group {
+ groupname=$1
+ if [[ "$GITHUB_ACTIONS" != "" ]]; then
+ echo "::endgroup"
+ echo "::group::$groupname"
+ elif [[ "$POSTCOMMIT_CI" != "" ]]; then
+ echo "@@@$STEP@@@"
+ else
+ echo "Starting $groupname"
+ fi
+}
>From 885ddf4a3a4948b67ce5e792a97bf5148e8b479e Mon Sep 17 00:00:00 2001
From: lntue <lntue at google.com>
Date: Wed, 6 Aug 2025 22:05:12 +0000
Subject: [PATCH 161/171] [libc] Fix constexpr FPUtils rounding_mode.h
functions. (#152342)
---
libc/src/__support/FPUtil/rounding_mode.h | 93 ++++++++++++++---------
1 file changed, 59 insertions(+), 34 deletions(-)
diff --git a/libc/src/__support/FPUtil/rounding_mode.h b/libc/src/__support/FPUtil/rounding_mode.h
index 4ee0a0b0490fc..fdc84986a4781 100644
--- a/libc/src/__support/FPUtil/rounding_mode.h
+++ b/libc/src/__support/FPUtil/rounding_mode.h
@@ -17,30 +17,24 @@
namespace LIBC_NAMESPACE_DECL {
namespace fputil {
+namespace generic {
+
// Quick free-standing test whether fegetround() == FE_UPWARD.
// Using the following observation:
// 1.0f + 2^-25 = 1.0f for FE_TONEAREST, FE_DOWNWARD, FE_TOWARDZERO
// = 0x1.000002f for FE_UPWARD.
-LIBC_INLINE static constexpr bool fenv_is_round_up() {
- if (cpp::is_constant_evaluated()) {
- return false;
- } else {
- volatile float x = 0x1.0p-25f;
- return (1.0f + x != 1.0f);
- }
+LIBC_INLINE bool fenv_is_round_up() {
+ static volatile float x = 0x1.0p-25f;
+ return (1.0f + x != 1.0f);
}
// Quick free-standing test whether fegetround() == FE_DOWNWARD.
// Using the following observation:
// -1.0f - 2^-25 = -1.0f for FE_TONEAREST, FE_UPWARD, FE_TOWARDZERO
// = -0x1.000002f for FE_DOWNWARD.
-LIBC_INLINE static constexpr bool fenv_is_round_down() {
- if (cpp::is_constant_evaluated()) {
- return false;
- } else {
- volatile float x = 0x1.0p-25f;
- return (-1.0f - x != -1.0f);
- }
+LIBC_INLINE bool fenv_is_round_down() {
+ static volatile float x = 0x1.0p-25f;
+ return (-1.0f - x != -1.0f);
}
// Quick free-standing test whether fegetround() == FE_TONEAREST.
@@ -49,14 +43,10 @@ LIBC_INLINE static constexpr bool fenv_is_round_down() {
// = 0x1.100002p0f for FE_UPWARD,
// 1.5f - 2^-24 = 1.5f for FE_TONEAREST, FE_UPWARD
// = 0x1.0ffffep-1f for FE_DOWNWARD, FE_TOWARDZERO
-LIBC_INLINE static constexpr bool fenv_is_round_to_nearest() {
- if (cpp::is_constant_evaluated()) {
- return true;
- } else {
- volatile float x = 0x1.0p-24f;
- float y = 1.5f + x;
- return (y == 1.5f - x);
- }
+LIBC_INLINE bool fenv_is_round_to_nearest() {
+ static volatile float x = 0x1.0p-24f;
+ float y = 1.5f + x;
+ return (y == 1.5f - x);
}
// Quick free-standing test whether fegetround() == FE_TOWARDZERO.
@@ -69,13 +59,56 @@ LIBC_INLINE static constexpr bool fenv_is_round_to_nearest() {
// (0x1.000002p0f + 2^-24) + (-1.0f - 2^-24) = 2^-23 for FE_TOWARDZERO
// = 2^-22 for FE_TONEAREST, FE_UPWARD
// = 0 for FE_DOWNWARD
+LIBC_INLINE bool fenv_is_round_to_zero() {
+ static volatile float x = 0x1.0p-24f;
+ float y = x;
+ return ((0x1.000002p0f + y) + (-1.0f - y) == 0x1.0p-23f);
+}
+
+// Quick free-standing rounding-mode query based on the above observations.
+LIBC_INLINE int quick_get_round() {
+ static volatile float x = 0x1.0p-24f;
+ float y = x;
+ float z = (0x1.000002p0f + y) + (-1.0f - y);
+
+ if (z == 0.0f)
+ return FE_DOWNWARD;
+ if (z == 0x1.0p-23f)
+ return FE_TOWARDZERO;
+ return (2.0f + y == 2.0f) ? FE_TONEAREST : FE_UPWARD;
+}
+
+} // namespace generic
+
+LIBC_INLINE static constexpr bool fenv_is_round_up() {
+ if (cpp::is_constant_evaluated()) {
+ return false;
+ } else {
+ return generic::fenv_is_round_up();
+ }
+}
+
+LIBC_INLINE static constexpr bool fenv_is_round_down() {
+ if (cpp::is_constant_evaluated()) {
+ return false;
+ } else {
+ return generic::fenv_is_round_down();
+ }
+}
+
+LIBC_INLINE static constexpr bool fenv_is_round_to_nearest() {
+ if (cpp::is_constant_evaluated()) {
+ return true;
+ } else {
+ return generic::fenv_is_round_to_nearest();
+ }
+}
+
LIBC_INLINE static constexpr bool fenv_is_round_to_zero() {
if (cpp::is_constant_evaluated()) {
return false;
} else {
- volatile float x = 0x1.0p-24f;
- volatile float y = 0x1.000002p0f + x;
- return (y + (-1.0f - x) == 0x1.0p-23f);
+ return generic::fenv_is_round_to_zero();
}
}
@@ -84,15 +117,7 @@ LIBC_INLINE static constexpr int quick_get_round() {
if (cpp::is_constant_evaluated()) {
return FE_TONEAREST;
} else {
- volatile float x = 0x1.0p-24f;
- volatile float y = 0x1.000002p0f + x;
- float z = y + (-1.0f - x);
-
- if (z == 0.0f)
- return FE_DOWNWARD;
- if (z == 0x1.0p-23f)
- return FE_TOWARDZERO;
- return (2.0f + x == 2.0f) ? FE_TONEAREST : FE_UPWARD;
+ return generic::quick_get_round();
}
}
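The refactor above follows a common pattern: keep the runtime probe (which
depends on volatile float arithmetic and the dynamic rounding mode) in a
separate namespace, and dispatch to it from a constexpr wrapper that returns
the default-environment answer during constant evaluation. A minimal,
self-contained sketch in standard C++20, using std::is_constant_evaluated
instead of libc's cpp:: wrapper:

  #include <type_traits>

  namespace generic {
  // Runtime probe: under FE_UPWARD, 1.0f + 2^-25 rounds up, so the sum
  // differs from 1.0f; under the other modes it rounds back to 1.0f.
  inline bool fenv_is_round_up() {
    static volatile float x = 0x1.0p-25f;
    return (1.0f + x != 1.0f);
  }
  } // namespace generic

  constexpr bool fenv_is_round_up() {
    if (std::is_constant_evaluated())
      return false; // constant evaluation assumes the default FE_TONEAREST
    return generic::fenv_is_round_up();
  }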
>From d897355876287e410d35f1f0ac74d79955d50dd4 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Valentin=20Clement=20=28=E3=83=90=E3=83=AC=E3=83=B3?=
=?UTF-8?q?=E3=82=BF=E3=82=A4=E3=83=B3=20=E3=82=AF=E3=83=AC=E3=83=A1?=
=?UTF-8?q?=E3=83=B3=29?= <clementval at gmail.com>
Date: Wed, 6 Aug 2025 15:14:00 -0700
Subject: [PATCH 162/171] [flang][cuda] Set the allocator of derived type
component after allocation (#152379)
- Move the allocator index setup after the allocate statement; otherwise
the derived type descriptor is not yet allocated.
- Support arrays of derived types with a device component
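As a rough sketch of the ordering this establishes (function names taken
from the Allocatable.cpp diff below; control flow condensed and partly
hypothetical):

  // The allocator index of a device component lives in that component's
  // descriptor, which only exists once the enclosing object is allocated.
  void genAllocation(const Allocation &alloc, const fir::MutableBoxValue &box) {
    genInlinedAllocation(alloc, box);  // descriptor is materialized here
    postAllocationAction(alloc, box);  // now safe to emit cuf.set_allocator_idx
  }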
---
flang/include/flang/Lower/{Cuda.h => CUDA.h} | 12 +-
flang/lib/Lower/Allocatable.cpp | 14 +-
flang/lib/Lower/Bridge.cpp | 2 +-
flang/lib/Lower/CMakeLists.txt | 1 +
flang/lib/Lower/CUDA.cpp | 148 +++++++++++++++++++
flang/lib/Lower/ConvertVariable.cpp | 75 ++--------
flang/test/Lower/CUDA/cuda-set-allocator.cuf | 42 ++++--
7 files changed, 205 insertions(+), 89 deletions(-)
rename flang/include/flang/Lower/{Cuda.h => CUDA.h} (74%)
create mode 100644 flang/lib/Lower/CUDA.cpp
diff --git a/flang/include/flang/Lower/Cuda.h b/flang/include/flang/Lower/CUDA.h
similarity index 74%
rename from flang/include/flang/Lower/Cuda.h
rename to flang/include/flang/Lower/CUDA.h
index b6f849e3d63f0..6aad06e150ad1 100644
--- a/flang/include/flang/Lower/Cuda.h
+++ b/flang/include/flang/Lower/CUDA.h
@@ -1,4 +1,4 @@
-//===-- Lower/Cuda.h -- Cuda Fortran utilities ------------------*- C++ -*-===//
+//===-- Lower/CUDA.h -- CUDA Fortran utilities ------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
@@ -15,6 +15,7 @@
#include "flang/Optimizer/Builder/FIRBuilder.h"
#include "flang/Optimizer/Dialect/CUF/CUFOps.h"
+#include "flang/Runtime/allocator-registry-consts.h"
#include "flang/Semantics/tools.h"
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Dialect/OpenACC/OpenACC.h"
@@ -37,6 +38,15 @@ static inline unsigned getAllocatorIdx(const Fortran::semantics::Symbol &sym) {
return kDefaultAllocator;
}
+void initializeDeviceComponentAllocator(
+ Fortran::lower::AbstractConverter &converter,
+ const Fortran::semantics::Symbol &sym, const fir::MutableBoxValue &box);
+
+mlir::Type gatherDeviceComponentCoordinatesAndType(
+ fir::FirOpBuilder &builder, mlir::Location loc,
+ const Fortran::semantics::Symbol &sym, fir::RecordType recTy,
+ llvm::SmallVector<mlir::Value> &coordinates);
+
} // end namespace Fortran::lower
#endif // FORTRAN_LOWER_CUDA_H
diff --git a/flang/lib/Lower/Allocatable.cpp b/flang/lib/Lower/Allocatable.cpp
index 219f9205f45d5..ce9d8944387e1 100644
--- a/flang/lib/Lower/Allocatable.cpp
+++ b/flang/lib/Lower/Allocatable.cpp
@@ -13,9 +13,9 @@
#include "flang/Lower/Allocatable.h"
#include "flang/Evaluate/tools.h"
#include "flang/Lower/AbstractConverter.h"
+#include "flang/Lower/CUDA.h"
#include "flang/Lower/ConvertType.h"
#include "flang/Lower/ConvertVariable.h"
-#include "flang/Lower/Cuda.h"
#include "flang/Lower/IterationSpace.h"
#include "flang/Lower/Mangler.h"
#include "flang/Lower/OpenACC.h"
@@ -445,10 +445,14 @@ class AllocateStmtHelper {
/*mustBeHeap=*/true);
}
- void postAllocationAction(const Allocation &alloc) {
+ void postAllocationAction(const Allocation &alloc,
+ const fir::MutableBoxValue &box) {
if (alloc.getSymbol().test(Fortran::semantics::Symbol::Flag::AccDeclare))
Fortran::lower::attachDeclarePostAllocAction(converter, builder,
alloc.getSymbol());
+ if (Fortran::semantics::HasCUDAComponent(alloc.getSymbol()))
+ Fortran::lower::initializeDeviceComponentAllocator(
+ converter, alloc.getSymbol(), box);
}
void setPinnedToFalse() {
@@ -481,7 +485,7 @@ class AllocateStmtHelper {
// Pointers must use PointerAllocate so that their deallocations
// can be validated.
genInlinedAllocation(alloc, box);
- postAllocationAction(alloc);
+ postAllocationAction(alloc, box);
setPinnedToFalse();
return;
}
@@ -504,7 +508,7 @@ class AllocateStmtHelper {
genCudaAllocate(builder, loc, box, errorManager, alloc.getSymbol());
}
fir::factory::syncMutableBoxFromIRBox(builder, loc, box);
- postAllocationAction(alloc);
+ postAllocationAction(alloc, box);
errorManager.assignStat(builder, loc, stat);
}
@@ -647,7 +651,7 @@ class AllocateStmtHelper {
setPinnedToFalse();
}
fir::factory::syncMutableBoxFromIRBox(builder, loc, box);
- postAllocationAction(alloc);
+ postAllocationAction(alloc, box);
errorManager.assignStat(builder, loc, stat);
}
diff --git a/flang/lib/Lower/Bridge.cpp b/flang/lib/Lower/Bridge.cpp
index 6b7efe6b57db3..1bad46f80612b 100644
--- a/flang/lib/Lower/Bridge.cpp
+++ b/flang/lib/Lower/Bridge.cpp
@@ -13,6 +13,7 @@
#include "flang/Lower/Bridge.h"
#include "flang/Lower/Allocatable.h"
+#include "flang/Lower/CUDA.h"
#include "flang/Lower/CallInterface.h"
#include "flang/Lower/Coarray.h"
#include "flang/Lower/ConvertCall.h"
@@ -20,7 +21,6 @@
#include "flang/Lower/ConvertExprToHLFIR.h"
#include "flang/Lower/ConvertType.h"
#include "flang/Lower/ConvertVariable.h"
-#include "flang/Lower/Cuda.h"
#include "flang/Lower/DirectivesCommon.h"
#include "flang/Lower/HostAssociations.h"
#include "flang/Lower/IO.h"
diff --git a/flang/lib/Lower/CMakeLists.txt b/flang/lib/Lower/CMakeLists.txt
index 8e20abf0e9f2d..1d1c7ddda8e9b 100644
--- a/flang/lib/Lower/CMakeLists.txt
+++ b/flang/lib/Lower/CMakeLists.txt
@@ -15,6 +15,7 @@ add_flang_library(FortranLower
ConvertProcedureDesignator.cpp
ConvertType.cpp
ConvertVariable.cpp
+ CUDA.cpp
CustomIntrinsicCall.cpp
HlfirIntrinsics.cpp
HostAssociations.cpp
diff --git a/flang/lib/Lower/CUDA.cpp b/flang/lib/Lower/CUDA.cpp
new file mode 100644
index 0000000000000..60b33bb5680c2
--- /dev/null
+++ b/flang/lib/Lower/CUDA.cpp
@@ -0,0 +1,148 @@
+//===-- CUDA.cpp -- CUDA Fortran specific lowering ------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// Coding style: https://mlir.llvm.org/getting_started/DeveloperGuide/
+//
+//===----------------------------------------------------------------------===//
+
+#include "flang/Lower/CUDA.h"
+
+#define DEBUG_TYPE "flang-lower-cuda"
+
+void Fortran::lower::initializeDeviceComponentAllocator(
+ Fortran::lower::AbstractConverter &converter,
+ const Fortran::semantics::Symbol &sym, const fir::MutableBoxValue &box) {
+ if (const auto *details{
+ sym.GetUltimate()
+ .detailsIf<Fortran::semantics::ObjectEntityDetails>()}) {
+ const Fortran::semantics::DeclTypeSpec *type{details->type()};
+ const Fortran::semantics::DerivedTypeSpec *derived{type ? type->AsDerived()
+ : nullptr};
+ if (derived) {
+ if (!FindCUDADeviceAllocatableUltimateComponent(*derived))
+ return; // No device components.
+
+ fir::FirOpBuilder &builder = converter.getFirOpBuilder();
+ mlir::Location loc = converter.getCurrentLocation();
+
+ mlir::Type baseTy = fir::unwrapRefType(box.getAddr().getType());
+
+ // Only pointer and allocatable types need post-allocation
+ // initialization of component descriptors.
+ if (!fir::isAllocatableType(baseTy) && !fir::isPointerType(baseTy))
+ return;
+
+ // Extract the derived type.
+ mlir::Type ty = fir::getDerivedType(baseTy);
+ auto recTy = mlir::dyn_cast<fir::RecordType>(ty);
+ assert(recTy && "expected fir::RecordType");
+
+ if (auto boxTy = mlir::dyn_cast<fir::BaseBoxType>(baseTy))
+ baseTy = boxTy.getEleTy();
+ baseTy = fir::unwrapRefType(baseTy);
+
+ Fortran::semantics::UltimateComponentIterator components{*derived};
+ mlir::Value loadedBox = fir::LoadOp::create(builder, loc, box.getAddr());
+ mlir::Value addr;
+ if (auto seqTy = mlir::dyn_cast<fir::SequenceType>(baseTy)) {
+ mlir::Type idxTy = builder.getIndexType();
+ mlir::Value one = builder.createIntegerConstant(loc, idxTy, 1);
+ mlir::Value zero = builder.createIntegerConstant(loc, idxTy, 0);
+ llvm::SmallVector<fir::DoLoopOp> loops;
+ llvm::SmallVector<mlir::Value> indices;
+ llvm::SmallVector<mlir::Value> extents;
+ for (unsigned i = 0; i < seqTy.getDimension(); ++i) {
+ mlir::Value dim = builder.createIntegerConstant(loc, idxTy, i);
+ auto dimInfo = fir::BoxDimsOp::create(builder, loc, idxTy, idxTy,
+ idxTy, loadedBox, dim);
+ mlir::Value lbub = mlir::arith::AddIOp::create(
+ builder, loc, dimInfo.getResult(0), dimInfo.getResult(1));
+ mlir::Value ext =
+ mlir::arith::SubIOp::create(builder, loc, lbub, one);
+ mlir::Value cmp = mlir::arith::CmpIOp::create(
+ builder, loc, mlir::arith::CmpIPredicate::sgt, ext, zero);
+ ext = mlir::arith::SelectOp::create(builder, loc, cmp, ext, zero);
+ extents.push_back(ext);
+
+ auto loop = fir::DoLoopOp::create(
+ builder, loc, dimInfo.getResult(0), dimInfo.getResult(1),
+ dimInfo.getResult(2), /*isUnordered=*/true,
+ /*finalCount=*/false, mlir::ValueRange{});
+ loops.push_back(loop);
+ indices.push_back(loop.getInductionVar());
+ builder.setInsertionPointToStart(loop.getBody());
+ }
+ mlir::Value boxAddr = fir::BoxAddrOp::create(builder, loc, loadedBox);
+ auto shape = fir::ShapeOp::create(builder, loc, extents);
+ addr = fir::ArrayCoorOp::create(
+ builder, loc, fir::ReferenceType::get(recTy), boxAddr, shape,
+ /*slice=*/mlir::Value{}, indices, /*typeparms=*/mlir::ValueRange{});
+ } else {
+ addr = fir::BoxAddrOp::create(builder, loc, loadedBox);
+ }
+ for (const auto &compSym : components) {
+ if (Fortran::semantics::IsDeviceAllocatable(compSym)) {
+ llvm::SmallVector<mlir::Value> coord;
+ mlir::Type fieldTy = gatherDeviceComponentCoordinatesAndType(
+ builder, loc, compSym, recTy, coord);
+ assert(coord.size() == 1 && "expect one coordinate");
+ mlir::Value comp = fir::CoordinateOp::create(
+ builder, loc, builder.getRefType(fieldTy), addr, coord[0]);
+ cuf::DataAttributeAttr dataAttr =
+ Fortran::lower::translateSymbolCUFDataAttribute(
+ builder.getContext(), compSym);
+ cuf::SetAllocatorIndexOp::create(builder, loc, comp, dataAttr);
+ }
+ }
+ }
+ }
+}
+
+mlir::Type Fortran::lower::gatherDeviceComponentCoordinatesAndType(
+ fir::FirOpBuilder &builder, mlir::Location loc,
+ const Fortran::semantics::Symbol &sym, fir::RecordType recTy,
+ llvm::SmallVector<mlir::Value> &coordinates) {
+ unsigned fieldIdx = recTy.getFieldIndex(sym.name().ToString());
+ mlir::Type fieldTy;
+ if (fieldIdx != std::numeric_limits<unsigned>::max()) {
+ // Field found in the base record type.
+ auto fieldName = recTy.getTypeList()[fieldIdx].first;
+ fieldTy = recTy.getTypeList()[fieldIdx].second;
+ mlir::Value fieldIndex = fir::FieldIndexOp::create(
+ builder, loc, fir::FieldType::get(fieldTy.getContext()), fieldName,
+ recTy,
+ /*typeParams=*/mlir::ValueRange{});
+ coordinates.push_back(fieldIndex);
+ } else {
+ // Field not found in base record type, search in potential
+ // record type components.
+ for (auto component : recTy.getTypeList()) {
+ if (auto childRecTy = mlir::dyn_cast<fir::RecordType>(component.second)) {
+ fieldIdx = childRecTy.getFieldIndex(sym.name().ToString());
+ if (fieldIdx != std::numeric_limits<unsigned>::max()) {
+ mlir::Value parentFieldIndex = fir::FieldIndexOp::create(
+ builder, loc, fir::FieldType::get(childRecTy.getContext()),
+ component.first, recTy,
+ /*typeParams=*/mlir::ValueRange{});
+ coordinates.push_back(parentFieldIndex);
+ auto fieldName = childRecTy.getTypeList()[fieldIdx].first;
+ fieldTy = childRecTy.getTypeList()[fieldIdx].second;
+ mlir::Value childFieldIndex = fir::FieldIndexOp::create(
+ builder, loc, fir::FieldType::get(fieldTy.getContext()),
+ fieldName, childRecTy,
+ /*typeParams=*/mlir::ValueRange{});
+ coordinates.push_back(childFieldIndex);
+ break;
+ }
+ }
+ }
+ }
+ if (coordinates.empty())
+ TODO(loc, "device resident component in complex derived-type hierarchy");
+ return fieldTy;
+}
diff --git a/flang/lib/Lower/ConvertVariable.cpp b/flang/lib/Lower/ConvertVariable.cpp
index a4a8a697e02ae..6a64320965095 100644
--- a/flang/lib/Lower/ConvertVariable.cpp
+++ b/flang/lib/Lower/ConvertVariable.cpp
@@ -14,12 +14,12 @@
#include "flang/Lower/AbstractConverter.h"
#include "flang/Lower/Allocatable.h"
#include "flang/Lower/BoxAnalyzer.h"
+#include "flang/Lower/CUDA.h"
#include "flang/Lower/CallInterface.h"
#include "flang/Lower/ConvertConstant.h"
#include "flang/Lower/ConvertExpr.h"
#include "flang/Lower/ConvertExprToHLFIR.h"
#include "flang/Lower/ConvertProcedureDesignator.h"
-#include "flang/Lower/Cuda.h"
#include "flang/Lower/Mangler.h"
#include "flang/Lower/PFTBuilder.h"
#include "flang/Lower/StatementContext.h"
@@ -814,81 +814,24 @@ initializeDeviceComponentAllocator(Fortran::lower::AbstractConverter &converter,
baseTy = boxTy.getEleTy();
baseTy = fir::unwrapRefType(baseTy);
- if (mlir::isa<fir::SequenceType>(baseTy) &&
- (fir::isAllocatableType(fir::getBase(exv).getType()) ||
- fir::isPointerType(fir::getBase(exv).getType())))
+ if (fir::isAllocatableType(fir::getBase(exv).getType()) ||
+ fir::isPointerType(fir::getBase(exv).getType()))
return; // Allocator index needs to be set after allocation.
auto recTy =
mlir::dyn_cast<fir::RecordType>(fir::unwrapSequenceType(baseTy));
assert(recTy && "expected fir::RecordType");
- llvm::SmallVector<mlir::Value> coordinates;
Fortran::semantics::UltimateComponentIterator components{*derived};
for (const auto &sym : components) {
if (Fortran::semantics::IsDeviceAllocatable(sym)) {
- unsigned fieldIdx = recTy.getFieldIndex(sym.name().ToString());
- mlir::Type fieldTy;
- llvm::SmallVector<mlir::Value> coordinates;
-
- if (fieldIdx != std::numeric_limits<unsigned>::max()) {
- // Field found in the base record type.
- auto fieldName = recTy.getTypeList()[fieldIdx].first;
- fieldTy = recTy.getTypeList()[fieldIdx].second;
- mlir::Value fieldIndex = fir::FieldIndexOp::create(
- builder, loc, fir::FieldType::get(fieldTy.getContext()),
- fieldName, recTy,
- /*typeParams=*/mlir::ValueRange{});
- coordinates.push_back(fieldIndex);
- } else {
- // Field not found in base record type, search in potential
- // record type components.
- for (auto component : recTy.getTypeList()) {
- if (auto childRecTy =
- mlir::dyn_cast<fir::RecordType>(component.second)) {
- fieldIdx = childRecTy.getFieldIndex(sym.name().ToString());
- if (fieldIdx != std::numeric_limits<unsigned>::max()) {
- mlir::Value parentFieldIndex = fir::FieldIndexOp::create(
- builder, loc,
- fir::FieldType::get(childRecTy.getContext()),
- component.first, recTy,
- /*typeParams=*/mlir::ValueRange{});
- coordinates.push_back(parentFieldIndex);
- auto fieldName = childRecTy.getTypeList()[fieldIdx].first;
- fieldTy = childRecTy.getTypeList()[fieldIdx].second;
- mlir::Value childFieldIndex = fir::FieldIndexOp::create(
- builder, loc, fir::FieldType::get(fieldTy.getContext()),
- fieldName, childRecTy,
- /*typeParams=*/mlir::ValueRange{});
- coordinates.push_back(childFieldIndex);
- break;
- }
- }
- }
- }
-
- if (coordinates.empty())
- TODO(loc, "device resident component in complex derived-type "
- "hierarchy");
-
+ llvm::SmallVector<mlir::Value> coord;
+ mlir::Type fieldTy =
+ Fortran::lower::gatherDeviceComponentCoordinatesAndType(
+ builder, loc, sym, recTy, coord);
mlir::Value base = fir::getBase(exv);
- mlir::Value comp;
- if (mlir::isa<fir::BaseBoxType>(fir::unwrapRefType(base.getType()))) {
- mlir::Value box = fir::LoadOp::create(builder, loc, base);
- mlir::Value addr = fir::BoxAddrOp::create(builder, loc, box);
- llvm::SmallVector<mlir::Value> lenParams;
- assert(coordinates.size() == 1 && "expect one coordinate");
- auto field = mlir::dyn_cast<fir::FieldIndexOp>(
- coordinates[0].getDefiningOp());
- comp = hlfir::DesignateOp::create(
- builder, loc, builder.getRefType(fieldTy), addr,
- /*component=*/field.getFieldName(),
- /*componentShape=*/mlir::Value{},
- hlfir::DesignateOp::Subscripts{});
- } else {
- comp = fir::CoordinateOp::create(
- builder, loc, builder.getRefType(fieldTy), base, coordinates);
- }
+ mlir::Value comp = fir::CoordinateOp::create(
+ builder, loc, builder.getRefType(fieldTy), base, coord);
cuf::DataAttributeAttr dataAttr =
Fortran::lower::translateSymbolCUFDataAttribute(
builder.getContext(), sym);
diff --git a/flang/test/Lower/CUDA/cuda-set-allocator.cuf b/flang/test/Lower/CUDA/cuda-set-allocator.cuf
index e3bb181f65398..d783f340fe9a4 100644
--- a/flang/test/Lower/CUDA/cuda-set-allocator.cuf
+++ b/flang/test/Lower/CUDA/cuda-set-allocator.cuf
@@ -23,34 +23,44 @@ contains
subroutine sub2()
type(ty_device), pointer :: d1
+ allocate(d1)
end subroutine
! CHECK-LABEL: func.func @_QMm1Psub2()
! CHECK: %[[ALLOC:.*]] = cuf.alloc !fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>> {bindc_name = "d1", data_attr = #cuf.cuda<managed>, uniq_name = "_QMm1Fsub2Ed1"} -> !fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
! CHECK: %[[DECL:.*]]:2 = hlfir.declare %[[ALLOC]] {data_attr = #cuf.cuda<managed>, fortran_attrs = #fir.var_attrs<pointer>, uniq_name = "_QMm1Fsub2Ed1"} : (!fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>) -> (!fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>, !fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>)
-! CHECK: %[[LOAD1:.*]] = fir.load %[[DECL]]#0 : !fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
-! CHECK: %[[ADDR1:.*]] = fir.box_addr %[[LOAD1]] : (!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>) -> !fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>
-! CHECK: %[[DESIGNATE1:.*]] = hlfir.designate %[[ADDR1]]{"x"} : (!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
-! CHECK: cuf.set_allocator_idx %[[DESIGNATE1]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
-! CHECK: %[[LOAD2:.*]] = fir.load %[[DECL]]#0 : !fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
-! CHECK: %[[ADDR2:.*]] = fir.box_addr %[[LOAD2]] : (!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>) -> !fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>
-! CHECK: %[[DESIGNATE2:.*]] = hlfir.designate %[[ADDR2]]{"z"} : (!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
-! CHECK: cuf.set_allocator_idx %[[DESIGNATE2]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
+! CHECK: cuf.allocate
+! CHECK: %[[LOAD:.*]] = fir.load %[[DECL]]#0 : !fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
+! CHECK: %[[ADDR:.*]] = fir.box_addr %[[LOAD]] : (!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>) -> !fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>
+! CHECK: %[[COORD1:.*]] = fir.coordinate_of %[[ADDR]], x : (!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
+! CHECK: cuf.set_allocator_idx %[[COORD1]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
+! CHECK: %[[COORD2:.*]] = fir.coordinate_of %[[ADDR]], z : (!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
+! CHECK: cuf.set_allocator_idx %[[COORD2]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
subroutine sub3()
type(ty_device), allocatable :: d1
+ allocate(d1)
end subroutine
! CHECK-LABEL: func.func @_QMm1Psub3()
! CHECK: %[[ALLOC:.*]] = cuf.alloc !fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>> {bindc_name = "d1", data_attr = #cuf.cuda<managed>, uniq_name = "_QMm1Fsub3Ed1"} -> !fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
! CHECK: %[[DECL:.*]]:2 = hlfir.declare %[[ALLOC]] {data_attr = #cuf.cuda<managed>, fortran_attrs = #fir.var_attrs<allocatable>, uniq_name = "_QMm1Fsub3Ed1"} : (!fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>) -> (!fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>, !fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>)
-! CHECK: %[[LOAD1:.*]] = fir.load %[[DECL]]#0 : !fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
-! CHECK: %[[ADDR1:.*]] = fir.box_addr %[[LOAD1]] : (!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>) -> !fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>
-! CHECK: %[[DESIGNATE1:.*]] = hlfir.designate %[[ADDR1]]{"x"} : (!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
-! CHECK: cuf.set_allocator_idx %[[DESIGNATE1]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
-! CHECK: %[[LOAD2:.*]] = fir.load %[[DECL]]#0 : !fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
-! CHECK: %[[ADDR2:.*]] = fir.box_addr %[[LOAD2]] : (!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>) -> !fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>
-! CHECK: %[[DESIGNATE2:.*]] = hlfir.designate %[[ADDR2]]{"z"} : (!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
-! CHECK: cuf.set_allocator_idx %[[DESIGNATE2]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
+! CHECK: cuf.allocate
+! CHECK: %[[LOAD:.*]] = fir.load %[[DECL]]#0 : !fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
+! CHECK: %[[ADDR:.*]] = fir.box_addr %[[LOAD]] : (!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>) -> !fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>
+! CHECK: %[[COORD1:.*]] = fir.coordinate_of %[[ADDR]], x : (!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
+! CHECK: cuf.set_allocator_idx %[[COORD1]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
+! CHECK: %[[COORD2:.*]] = fir.coordinate_of %[[ADDR]], z : (!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
+! CHECK: cuf.set_allocator_idx %[[COORD2]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
+
+ subroutine sub4()
+ type(ty_device), allocatable :: d1(:,:)
+ allocate(d1(10, 10))
+ end subroutine
+
+! CHECK-LABEL: func.func @_QMm1Psub4()
+! CHECK: cuf.allocate
+! CHECK-COUNT-2: fir.do_loop
+! CHECK-COUNT-2: cuf.set_allocator_idx
end module
>From f61526971f9c62118090443c8b97fab07ae9499f Mon Sep 17 00:00:00 2001
From: Andrew Lazarev <alazarev at google.com>
Date: Wed, 6 Aug 2025 15:16:19 -0700
Subject: [PATCH 163/171] Revert "[WebAssembly] Constant fold wasm.dot"
(#152382)
Reverts llvm/llvm-project#149619
It breaks the UBSan bot:
https://lab.llvm.org/buildbot/#/builders/25/builds/10523
Earlier today the failure was hidden by another breakage that has since
been fixed.
---
llvm/lib/Analysis/ConstantFolding.cpp | 25 ---------
.../InstSimplify/ConstProp/WebAssembly/dot.ll | 56 -------------------
2 files changed, 81 deletions(-)
delete mode 100644 llvm/test/Transforms/InstSimplify/ConstProp/WebAssembly/dot.ll
diff --git a/llvm/lib/Analysis/ConstantFolding.cpp b/llvm/lib/Analysis/ConstantFolding.cpp
index 4969528a1b29b..dd98b62baca33 100644
--- a/llvm/lib/Analysis/ConstantFolding.cpp
+++ b/llvm/lib/Analysis/ConstantFolding.cpp
@@ -1659,7 +1659,6 @@ bool llvm::canConstantFoldCallTo(const CallBase *Call, const Function *F) {
case Intrinsic::aarch64_sve_convert_from_svbool:
case Intrinsic::wasm_alltrue:
case Intrinsic::wasm_anytrue:
- case Intrinsic::wasm_dot:
// WebAssembly float semantics are always known
case Intrinsic::wasm_trunc_signed:
case Intrinsic::wasm_trunc_unsigned:
@@ -3990,30 +3989,6 @@ static Constant *ConstantFoldFixedVectorCall(
}
return ConstantVector::get(Result);
}
- case Intrinsic::wasm_dot: {
- unsigned NumElements =
- cast<FixedVectorType>(Operands[0]->getType())->getNumElements();
-
- assert(NumElements == 8 && Result.size() == 4 &&
- "wasm dot takes i16x8 and produces i32x4");
- assert(Ty->isIntegerTy());
- int32_t MulVector[8];
-
- for (unsigned I = 0; I < NumElements; ++I) {
- ConstantInt *Elt0 =
- cast<ConstantInt>(Operands[0]->getAggregateElement(I));
- ConstantInt *Elt1 =
- cast<ConstantInt>(Operands[1]->getAggregateElement(I));
-
- MulVector[I] = Elt0->getSExtValue() * Elt1->getSExtValue();
- }
- for (unsigned I = 0; I < Result.size(); I++) {
- int32_t IAdd = MulVector[I * 2] + MulVector[I * 2 + 1];
- Result[I] = ConstantInt::get(Ty, IAdd);
- }
-
- return ConstantVector::get(Result);
- }
default:
break;
}
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/WebAssembly/dot.ll b/llvm/test/Transforms/InstSimplify/ConstProp/WebAssembly/dot.ll
deleted file mode 100644
index b537b7bccf861..0000000000000
--- a/llvm/test/Transforms/InstSimplify/ConstProp/WebAssembly/dot.ll
+++ /dev/null
@@ -1,56 +0,0 @@
-; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 5
-
-; RUN: opt -passes=instsimplify -S < %s | FileCheck %s
-
-; Test that intrinsics wasm dot call are constant folded
-
-target triple = "wasm32-unknown-unknown"
-
-
-define <4 x i32> @dot_zero() {
-; CHECK-LABEL: define <4 x i32> @dot_zero() {
-; CHECK-NEXT: ret <4 x i32> zeroinitializer
-;
- %res = tail call <4 x i32> @llvm.wasm.dot(<8 x i16> zeroinitializer, <8 x i16> zeroinitializer)
- ret <4 x i32> %res
-}
-
-; a = 1 2 3 4 5 6 7 8
-; b = 1 2 3 4 5 6 7 8
-; k1|k2 = a * b = 1 4 9 16 25 36 49 64
-; k1 + k2 = (1+4) | (9 + 16) | (25 + 36) | (49 + 64)
-; result = 5 | 25 | 61 | 113
-define <4 x i32> @dot_nonzero() {
-; CHECK-LABEL: define <4 x i32> @dot_nonzero() {
-; CHECK-NEXT: ret <4 x i32> <i32 5, i32 25, i32 61, i32 113>
-;
- %res = tail call <4 x i32> @llvm.wasm.dot(<8 x i16> <i16 1, i16 2, i16 3, i16 4, i16 5, i16 6, i16 7, i16 8>, <8 x i16> <i16 1, i16 2, i16 3, i16 4, i16 5, i16 6, i16 7, i16 8>)
- ret <4 x i32> %res
-}
-
-define <4 x i32> @dot_doubly_negative() {
-; CHECK-LABEL: define <4 x i32> @dot_doubly_negative() {
-; CHECK-NEXT: ret <4 x i32> splat (i32 2)
-;
- %res = tail call <4 x i32> @llvm.wasm.dot(<8 x i16> <i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1>, <8 x i16> <i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1>)
- ret <4 x i32> %res
-}
-
-; Tests that i16 max signed values fit in i32
-define <4 x i32> @dot_follow_modulo_spec_1() {
-; CHECK-LABEL: define <4 x i32> @dot_follow_modulo_spec_1() {
-; CHECK-NEXT: ret <4 x i32> <i32 2147352578, i32 0, i32 0, i32 0>
-;
- %res = tail call <4 x i32> @llvm.wasm.dot(<8 x i16> <i16 32767, i16 32767, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0>, <8 x i16> <i16 32767, i16 32767, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0>)
- ret <4 x i32> %res
-}
-
-; Tests that i16 min signed values fit in i32
-define <4 x i32> @dot_follow_modulo_spec_2() {
-; CHECK-LABEL: define <4 x i32> @dot_follow_modulo_spec_2() {
-; CHECK-NEXT: ret <4 x i32> <i32 -2147483648, i32 0, i32 0, i32 0>
-;
- %res = tail call <4 x i32> @llvm.wasm.dot(<8 x i16> <i16 -32768, i16 -32768, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0>, <8 x i16> <i16 -32768, i16 -32768, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0>)
- ret <4 x i32> %res
-}
-
>From b296ea9c14af60f9b4faa26a39ecc52c1762c794 Mon Sep 17 00:00:00 2001
From: Stanislav Mekhanoshin <Stanislav.Mekhanoshin at amd.com>
Date: Wed, 6 Aug 2025 15:32:28 -0700
Subject: [PATCH 164/171] [AMDGPU] s_get_shader_cycles_u64 gfx1250 instruction
(#152390)
It is the same as reading SHADER_CYCLES_LO and SHADER_CYCLES_HI
but with a single instruction.
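For reference, the instruction is reached from IR via llvm.readcyclecounter
(matched by the new SOP1 pattern below); a minimal C++ way to exercise it,
assuming a clang gfx1250 target, is:

  #include <cstdint>

  // __builtin_readcyclecounter lowers to llvm.readcyclecounter, which on
  // gfx1250 should now select the single s_get_shader_cycles_u64 instruction.
  uint64_t shader_cycles() { return __builtin_readcyclecounter(); }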
---
llvm/lib/Target/AMDGPU/AMDGPU.td | 4 ++++
llvm/lib/Target/AMDGPU/GCNSubtarget.h | 3 +++
llvm/lib/Target/AMDGPU/SOPInstructions.td | 7 +++++++
llvm/test/CodeGen/AMDGPU/readcyclecounter.ll | 4 ++++
llvm/test/MC/AMDGPU/gfx1250_asm_sop1.s | 4 ++++
llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_sop1.txt | 3 +++
6 files changed, 25 insertions(+)
diff --git a/llvm/lib/Target/AMDGPU/AMDGPU.td b/llvm/lib/Target/AMDGPU/AMDGPU.td
index ddeca07e51103..f26639847be75 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPU.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPU.td
@@ -2612,6 +2612,10 @@ def HasPkMinMax3Insts :
Predicate<"Subtarget->hasPkMinMax3Insts()">,
AssemblerPredicate<(any_of FeatureGFX1250Insts)>;
+def HasSGetShaderCyclesInst :
+ Predicate<"Subtarget->hasSGetShaderCyclesInst()">,
+ AssemblerPredicate<(any_of FeatureGFX1250Insts)>;
+
def HasImageInsts : Predicate<"Subtarget->hasImageInsts()">,
AssemblerPredicate<(all_of FeatureImageInsts)>;
diff --git a/llvm/lib/Target/AMDGPU/GCNSubtarget.h b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
index c5bdd28314642..f47ddf5d93ec3 100644
--- a/llvm/lib/Target/AMDGPU/GCNSubtarget.h
+++ b/llvm/lib/Target/AMDGPU/GCNSubtarget.h
@@ -1562,6 +1562,9 @@ class GCNSubtarget final : public AMDGPUGenSubtargetInfo,
// \returns true if the target has V_PK_{MIN|MAX}3_{I|U}16 instructions.
bool hasPkMinMax3Insts() const { return GFX1250Insts; }
+ // \returns ture if target has S_GET_SHADER_CYCLES_U64 instruction.
+ bool hasSGetShaderCyclesInst() const { return GFX1250Insts; }
+
// \returns true if target has S_SETPRIO_INC_WG instruction.
bool hasSetPrioIncWgInst() const { return HasSetPrioIncWgInst; }
diff --git a/llvm/lib/Target/AMDGPU/SOPInstructions.td b/llvm/lib/Target/AMDGPU/SOPInstructions.td
index 8303410115f93..431d73b9a95b5 100644
--- a/llvm/lib/Target/AMDGPU/SOPInstructions.td
+++ b/llvm/lib/Target/AMDGPU/SOPInstructions.td
@@ -1653,6 +1653,12 @@ def S_SETPRIO_INC_WG : SOPP_Pseudo <"s_setprio_inc_wg", (ins i16imm:$simm16), "$
let SubtargetPredicate = HasSetPrioIncWgInst;
}
+def S_GET_SHADER_CYCLES_U64 : SOP1_64_0 <"s_get_shader_cycles_u64",
+ [(set i64:$sdst, (readcyclecounter))]> {
+ let SubtargetPredicate = HasSGetShaderCyclesInst;
+ let hasSideEffects = 1;
+}
+
let Uses = [EXEC, M0] in {
def S_SENDMSG : SOPP_Pseudo <"s_sendmsg" , (ins SendMsg:$simm16), "$simm16",
[(int_amdgcn_s_sendmsg (i32 timm:$simm16), M0)]> {
@@ -2145,6 +2151,7 @@ defm S_ALLOC_VGPR : SOP1_Real_gfx12<0x053>;
defm S_SLEEP_VAR : SOP1_IMM_Real_gfx12<0x058>;
// GFX1250
+defm S_GET_SHADER_CYCLES_U64 : SOP1_Real_gfx12<0x06>;
defm S_ADD_PC_I64 : SOP1_Real_gfx12<0x04b>;
//===----------------------------------------------------------------------===//
diff --git a/llvm/test/CodeGen/AMDGPU/readcyclecounter.ll b/llvm/test/CodeGen/AMDGPU/readcyclecounter.ll
index 131c5f31585d8..f67cbe381bfad 100644
--- a/llvm/test/CodeGen/AMDGPU/readcyclecounter.ll
+++ b/llvm/test/CodeGen/AMDGPU/readcyclecounter.ll
@@ -10,6 +10,8 @@
; RUN: llc -global-isel=1 -mtriple=amdgcn -mcpu=gfx1100 -amdgpu-enable-vopd=0 < %s | FileCheck -check-prefixes=GETREG,GETREG-GISEL -check-prefix=GCN %s
; RUN: llc -global-isel=0 -mtriple=amdgcn -mcpu=gfx1200 < %s | FileCheck -check-prefixes=GCN,GFX12 %s
; RUN: llc -global-isel=1 -mtriple=amdgcn -mcpu=gfx1200 < %s | FileCheck -check-prefixes=GCN,GFX12 %s
+; RUN: llc -global-isel=0 -mtriple=amdgcn -mcpu=gfx1250 < %s | FileCheck -check-prefixes=GCN,GFX1250 %s
+; RUN: llc -global-isel=1 -mtriple=amdgcn -mcpu=gfx1250 < %s | FileCheck -check-prefixes=GCN,GFX1250 %s
declare i64 @llvm.readcyclecounter() #0
@@ -21,6 +23,7 @@ declare i64 @llvm.readcyclecounter() #0
; GFX12: s_getreg_b32 [[HI2:s[0-9]+]], hwreg(HW_REG_SHADER_CYCLES_HI)
; GFX12: s_cmp_eq_u32 [[HI1]], [[HI2]]
; GFX12: s_cselect_b32 {{s[0-9]+}}, [[LO1]], 0
+; GFX1250: s_get_shader_cycles_u64 s{{\[[0-9]+:[0-9]+\]}}
; GCN-DAG: kmcnt
; MEMTIME: store_dwordx2
; SIVI-NOT: kmcnt
@@ -53,6 +56,7 @@ define amdgpu_kernel void @test_readcyclecounter(ptr addrspace(1) %out) #0 {
; GFX12: s_getreg_b32 [[HI1:s[0-9]+]], hwreg(HW_REG_SHADER_CYCLES_HI)
; GFX12: s_getreg_b32 [[LO1:s[0-9]+]], hwreg(HW_REG_SHADER_CYCLES_LO)
; GFX12: s_getreg_b32 [[HI2:s[0-9]+]], hwreg(HW_REG_SHADER_CYCLES_HI)
+; GFX1250: s_get_shader_cycles_u64 s{{\[[0-9]+:[0-9]+\]}}
; GCN-DAG: s_load_{{dword|b32|b64}}
; GETREG-DAG: s_getreg_b32 s{{[0-9]+}}, hwreg(HW_REG_SHADER_CYCLES, 0, 20)
; GFX12: s_cmp_eq_u32 [[HI1]], [[HI2]]
diff --git a/llvm/test/MC/AMDGPU/gfx1250_asm_sop1.s b/llvm/test/MC/AMDGPU/gfx1250_asm_sop1.s
index 41b6e93357a3f..aab8d9a2fcbfd 100644
--- a/llvm/test/MC/AMDGPU/gfx1250_asm_sop1.s
+++ b/llvm/test/MC/AMDGPU/gfx1250_asm_sop1.s
@@ -45,6 +45,10 @@ s_rfe_i64 s[2:3]
s_rfe_b64 s[2:3]
// GFX1250: s_rfe_i64 s[2:3] ; encoding: [0x02,0x4a,0x80,0xbe]
+s_get_shader_cycles_u64 s[2:3]
+// GFX1250: s_get_shader_cycles_u64 s[2:3] ; encoding: [0x00,0x06,0x82,0xbe]
+// GFX12-ERR: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU
+
s_barrier_signal -3
// GFX1250: s_barrier_signal -3 ; encoding: [0xc3,0x4e,0x80,0xbe]
diff --git a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_sop1.txt b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_sop1.txt
index 83fa647696d6c..07aca1e40b071 100644
--- a/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_sop1.txt
+++ b/llvm/test/MC/Disassembler/AMDGPU/gfx1250_dasm_sop1.txt
@@ -12,6 +12,9 @@
# GFX1250: s_add_pc_i64 s[2:3] ; encoding: [0x02,0x4b,0x80,0xbe]
0x02,0x4b,0x80,0xbe
+# GFX1250: s_get_shader_cycles_u64 s[2:3] ; encoding: [0x00,0x06,0x82,0xbe]
+0x00,0x06,0x82,0xbe
+
# GFX1250: s_barrier_signal -3 ; encoding: [0xc3,0x4e,0x80,0xbe]
0xc3,0x4e,0x80,0xbe
>From 09dbdf651470bb4c9e5b81986a47f7c495285fbe Mon Sep 17 00:00:00 2001
From: Qiongsi Wu <qiongsiwu at gmail.com>
Date: Wed, 6 Aug 2025 15:39:37 -0700
Subject: [PATCH 165/171] [clang][Dependency Scanning] Move Module Timestamp
Update After Compilation Finishes (#151774)
When two threads access the same `pcm` file, the reading thread may observe
the timestamp update before the file on disk has actually been updated.
This PR moves the timestamp update from `writeAST` to
`compileModuleAndReadASTImpl`, so we only update the timestamp after the
file has been committed to disk.
rdar://152097193
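In condensed pseudocode, the reordering looks like this (hypothetical names
and control flow; the real logic is in compileModuleAndReadASTImpl below):

  // Previously writeAST() touched the timestamp while the pcm could still be
  // in flight; now the timestamp is updated only after compilation succeeded
  // and the file is committed to disk, and only then is the AST read back.
  if (!compileModule(ImportingInstance, /*...*/))
    return false;
  if (headerSearchOpts.ModulesValidateOncePerBuildSession)
    moduleCache.updateModuleTimestamp(ModuleFileName);
  return readASTAfterCompileModule(/*...*/);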
---
clang/lib/Frontend/CompilerInstance.cpp | 8 ++++++++
clang/lib/Serialization/ASTWriter.cpp | 5 -----
2 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/clang/lib/Frontend/CompilerInstance.cpp b/clang/lib/Frontend/CompilerInstance.cpp
index d64290f5c4673..9f99edadbb70f 100644
--- a/clang/lib/Frontend/CompilerInstance.cpp
+++ b/clang/lib/Frontend/CompilerInstance.cpp
@@ -1485,6 +1485,14 @@ static bool compileModuleAndReadASTImpl(CompilerInstance &ImportingInstance,
return false;
}
+ // The module is built successfully, we can update its timestamp now.
+ if (ImportingInstance.getPreprocessor()
+ .getHeaderSearchInfo()
+ .getHeaderSearchOpts()
+ .ModulesValidateOncePerBuildSession) {
+ ImportingInstance.getModuleCache().updateModuleTimestamp(ModuleFileName);
+ }
+
return readASTAfterCompileModule(ImportingInstance, ImportLoc, ModuleNameLoc,
Module, ModuleFileName,
/*OutOfDate=*/nullptr, /*Missing=*/nullptr);
diff --git a/clang/lib/Serialization/ASTWriter.cpp b/clang/lib/Serialization/ASTWriter.cpp
index f144cd25eca14..c072acd4b0420 100644
--- a/clang/lib/Serialization/ASTWriter.cpp
+++ b/clang/lib/Serialization/ASTWriter.cpp
@@ -5461,11 +5461,6 @@ ASTWriter::WriteAST(llvm::PointerUnion<Sema *, Preprocessor *> Subject,
WritingAST = false;
- if (WritingModule && PPRef.getHeaderSearchInfo()
- .getHeaderSearchOpts()
- .ModulesValidateOncePerBuildSession)
- ModCache.updateModuleTimestamp(OutputFile);
-
if (ShouldCacheASTInMemory) {
// Construct MemoryBuffer and update buffer manager.
ModCache.getInMemoryModuleCache().addBuiltPCM(
>From e83abd774a4f7c09db26b886f8c686cdb373d1f7 Mon Sep 17 00:00:00 2001
From: Uzair Nawaz <uzairnawaz at google.com>
Date: Wed, 6 Aug 2025 15:46:41 -0700
Subject: [PATCH 166/171] [libc] Template StringConverter pop function to avoid
duplicate code (#152204)
Addressed a TODO to template the StringConverter pop functions so there
is a single implementation (combining popUTF8 and popUTF32 into one
templated pop function).
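A minimal sketch of the refactor pattern, with hypothetical types standing
in for the libc internals (requires C++20 for char8_t):

  #include <iostream>

  struct CharacterConverter {
    // One templated pop replaces the popUTF8/popUTF32 pair, so the shared
    // bookkeeping (destination limits, null-terminator handling) is
    // written once in the caller.
    template <typename CharType> CharType pop();
  };

  // Per-encoding logic lives in explicit specializations.
  template <> char8_t CharacterConverter::pop<char8_t>() { return u8'A'; }
  template <> char32_t CharacterConverter::pop<char32_t>() { return U'A'; }

  template <typename CharType> CharType popOne(CharacterConverter &CC) {
    return CC.pop<CharType>(); // single shared implementation for all callers
  }

  int main() {
    CharacterConverter CC;
    std::cout << static_cast<unsigned>(popOne<char8_t>(CC)) << ' '
              << static_cast<unsigned>(popOne<char32_t>(CC)) << '\n';
  }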
---
.../__support/wchar/character_converter.cpp | 6 -
.../src/__support/wchar/character_converter.h | 9 +-
libc/src/__support/wchar/mbsnrtowcs.h | 4 +-
libc/src/__support/wchar/string_converter.h | 38 +-----
libc/src/__support/wchar/wcsnrtombs.h | 4 +-
.../__support/wchar/string_converter_test.cpp | 110 +++++++++---------
6 files changed, 72 insertions(+), 99 deletions(-)
diff --git a/libc/src/__support/wchar/character_converter.cpp b/libc/src/__support/wchar/character_converter.cpp
index 15d0f478a18a9..278248c5c4c4a 100644
--- a/libc/src/__support/wchar/character_converter.cpp
+++ b/libc/src/__support/wchar/character_converter.cpp
@@ -132,12 +132,6 @@ ErrorOr<char32_t> CharacterConverter::pop_utf32() {
return utf32;
}
-size_t CharacterConverter::sizeAsUTF32() {
- return 1; // a single utf-32 value can fit an entire character
-}
-
-size_t CharacterConverter::sizeAsUTF8() { return state->total_bytes; }
-
ErrorOr<char8_t> CharacterConverter::pop_utf8() {
if (isEmpty())
return Error(-1);
diff --git a/libc/src/__support/wchar/character_converter.h b/libc/src/__support/wchar/character_converter.h
index b6d918f2d2edc..fef30f7ce43fa 100644
--- a/libc/src/__support/wchar/character_converter.h
+++ b/libc/src/__support/wchar/character_converter.h
@@ -12,6 +12,7 @@
#include "hdr/types/char32_t.h"
#include "hdr/types/char8_t.h"
#include "hdr/types/size_t.h"
+#include "src/__support/CPP/type_traits.h"
#include "src/__support/common.h"
#include "src/__support/error_or.h"
#include "src/__support/wchar/mbstate.h"
@@ -31,14 +32,18 @@ class CharacterConverter {
bool isEmpty();
bool isValidState();
- size_t sizeAsUTF32();
- size_t sizeAsUTF8();
+ template <typename CharType> size_t sizeAs();
+ template <> size_t sizeAs<char8_t>() { return state->total_bytes; }
+ template <> size_t sizeAs<char32_t>() { return 1; }
int push(char8_t utf8_byte);
int push(char32_t utf32);
ErrorOr<char8_t> pop_utf8();
ErrorOr<char32_t> pop_utf32();
+ template <typename CharType> ErrorOr<CharType> pop();
+ template <> ErrorOr<char8_t> pop() { return pop_utf8(); }
+ template <> ErrorOr<char32_t> pop() { return pop_utf32(); }
};
} // namespace internal
diff --git a/libc/src/__support/wchar/mbsnrtowcs.h b/libc/src/__support/wchar/mbsnrtowcs.h
index 54e315210d95c..6abb836635772 100644
--- a/libc/src/__support/wchar/mbsnrtowcs.h
+++ b/libc/src/__support/wchar/mbsnrtowcs.h
@@ -36,7 +36,7 @@ LIBC_INLINE static ErrorOr<size_t> mbsnrtowcs(wchar_t *__restrict dst,
StringConverter<char8_t> str_conv(reinterpret_cast<const char8_t *>(*src), ps,
len, nmc);
size_t dst_idx = 0;
- ErrorOr<char32_t> converted = str_conv.popUTF32();
+ ErrorOr<char32_t> converted = str_conv.pop<char32_t>();
while (converted.has_value()) {
if (dst != nullptr)
dst[dst_idx] = converted.value();
@@ -47,7 +47,7 @@ LIBC_INLINE static ErrorOr<size_t> mbsnrtowcs(wchar_t *__restrict dst,
return dst_idx;
}
dst_idx++;
- converted = str_conv.popUTF32();
+ converted = str_conv.pop<char32_t>();
}
if (converted.error() == -1) { // if we hit conversion limit
diff --git a/libc/src/__support/wchar/string_converter.h b/libc/src/__support/wchar/string_converter.h
index 869ebdfc8b390..ba628bd34cdc0 100644
--- a/libc/src/__support/wchar/string_converter.h
+++ b/libc/src/__support/wchar/string_converter.h
@@ -12,6 +12,7 @@
#include "hdr/types/char32_t.h"
#include "hdr/types/char8_t.h"
#include "hdr/types/size_t.h"
+#include "src/__support/CPP/type_traits.h"
#include "src/__support/common.h"
#include "src/__support/error_or.h"
#include "src/__support/wchar/character_converter.h"
@@ -53,9 +54,7 @@ template <typename T> class StringConverter {
size_t srclen = SIZE_MAX)
: cr(ps), src(s), src_len(srclen), src_idx(0), num_to_write(dstlen) {}
- // TODO: following functions are almost identical
- // look into templating CharacterConverter pop functions
- ErrorOr<char32_t> popUTF32() {
+ template <typename CharType> ErrorOr<CharType> pop() {
if (num_to_write == 0)
return Error(-1);
@@ -64,7 +63,7 @@ template <typename T> class StringConverter {
if (!src_elements_read.has_value())
return Error(src_elements_read.error());
- if (cr.sizeAsUTF32() > num_to_write) {
+ if (cr.sizeAs<CharType>() > num_to_write) {
cr.clear();
return Error(-1);
}
@@ -72,34 +71,9 @@ template <typename T> class StringConverter {
src_idx += src_elements_read.value();
}
- auto out = cr.pop_utf32();
- if (out.has_value() && out.value() == L'\0')
- src_len = src_idx;
-
- num_to_write--;
-
- return out;
- }
-
- ErrorOr<char8_t> popUTF8() {
- if (num_to_write == 0)
- return Error(-1);
-
- if (cr.isEmpty() || src_idx == 0) {
- auto src_elements_read = pushFullCharacter();
- if (!src_elements_read.has_value())
- return Error(src_elements_read.error());
-
- if (cr.sizeAsUTF8() > num_to_write) {
- cr.clear();
- return Error(-1);
- }
-
- src_idx += src_elements_read.value();
- }
-
- auto out = cr.pop_utf8();
- if (out.has_value() && out.value() == '\0')
+ ErrorOr<CharType> out = cr.pop<CharType>();
+ // if out is the null terminator (and not an error), record the end of src
+ if (out.has_value() && out.value() == 0)
src_len = src_idx;
num_to_write--;
diff --git a/libc/src/__support/wchar/wcsnrtombs.h b/libc/src/__support/wchar/wcsnrtombs.h
index 433097c937a42..f593a0e0dba87 100644
--- a/libc/src/__support/wchar/wcsnrtombs.h
+++ b/libc/src/__support/wchar/wcsnrtombs.h
@@ -39,7 +39,7 @@ wcsnrtombs(char *__restrict dest, const wchar_t **__restrict ptr_to_src,
reinterpret_cast<const char32_t *>(*ptr_to_src), ps, dest_len,
num_src_widechars);
size_t dst_idx = 0;
- ErrorOr<char8_t> converted = str_conv.popUTF8();
+ ErrorOr<char8_t> converted = str_conv.pop<char8_t>();
while (converted.has_value()) {
if (dest != nullptr)
dest[dst_idx] = converted.value();
@@ -51,7 +51,7 @@ wcsnrtombs(char *__restrict dest, const wchar_t **__restrict ptr_to_src,
}
dst_idx++;
- converted = str_conv.popUTF8();
+ converted = str_conv.pop<char8_t>();
}
if (dest != nullptr)
diff --git a/libc/test/src/__support/wchar/string_converter_test.cpp b/libc/test/src/__support/wchar/string_converter_test.cpp
index d514df9317852..e45358ddc68c4 100644
--- a/libc/test/src/__support/wchar/string_converter_test.cpp
+++ b/libc/test/src/__support/wchar/string_converter_test.cpp
@@ -34,32 +34,32 @@ TEST(LlvmLibcStringConverterTest, UTF8To32) {
LIBC_NAMESPACE::internal::StringConverter<char8_t> sc(
reinterpret_cast<const char8_t *>(src), &state, SIZE_MAX);
- auto res = sc.popUTF32();
+ auto res = sc.pop<char32_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0x1f921);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 4);
- res = sc.popUTF32();
+ res = sc.pop<char32_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0x2211);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 7);
- res = sc.popUTF32();
+ res = sc.pop<char32_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xff);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 9);
- res = sc.popUTF32();
+ res = sc.pop<char32_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0x41);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 10);
- res = sc.popUTF32();
+ res = sc.pop<char32_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 11);
- res = sc.popUTF32();
+ res = sc.pop<char32_t>();
ASSERT_FALSE(res.has_value());
ASSERT_EQ(res.error(), -1);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 11);
@@ -75,66 +75,66 @@ TEST(LlvmLibcStringConverterTest, UTF32To8) {
LIBC_NAMESPACE::internal::StringConverter<char32_t> sc(
reinterpret_cast<const char32_t *>(src), &state, SIZE_MAX);
- auto res = sc.popUTF8();
+ auto res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xF0);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0x9F);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xA4);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xA1);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
// end of clown emoji, sigma symbol begins
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xE2);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 2);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0x88);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 2);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0x91);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 2);
// end of sigma symbol, y with diaeresis begins
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xC3);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 3);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xBF);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 3);
// end of y with diaeresis, letter A begins
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0x41);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 4);
// null byte
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 5);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_FALSE(res.has_value());
ASSERT_EQ(res.error(), -1);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 5);
@@ -148,28 +148,28 @@ TEST(LlvmLibcStringConverterTest, UTF32To8PartialRead) {
LIBC_NAMESPACE::internal::StringConverter<char32_t> sc(
reinterpret_cast<const char32_t *>(src), &state, SIZE_MAX, 1);
- auto res = sc.popUTF8();
+ auto res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xF0);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0x9F);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xA4);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xA1);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
// can only read 1 character from source string, so error on next pop
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_FALSE(res.has_value());
ASSERT_EQ(res.error(), -1);
}
@@ -181,12 +181,12 @@ TEST(LlvmLibcStringConverterTest, UTF8To32PartialRead) {
LIBC_NAMESPACE::internal::StringConverter<char8_t> sc(
reinterpret_cast<const char8_t *>(src), &state, SIZE_MAX, 5);
- auto res = sc.popUTF32();
+ auto res = sc.pop<char32_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0x1f921);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 4);
- res = sc.popUTF32();
+ res = sc.pop<char32_t>();
ASSERT_FALSE(res.has_value());
ASSERT_EQ(static_cast<int>(res.error()), -1);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 5);
@@ -200,27 +200,27 @@ TEST(LlvmLibcStringConverterTest, UTF32To8ErrorHandling) {
LIBC_NAMESPACE::internal::StringConverter<char32_t> sc(
reinterpret_cast<const char32_t *>(src), &state, SIZE_MAX);
- auto res = sc.popUTF8();
+ auto res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xF0);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0x9F);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xA4);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xA1);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_FALSE(res.has_value());
ASSERT_EQ(static_cast<int>(res.error()), EILSEQ);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
@@ -234,12 +234,12 @@ TEST(LlvmLibcStringConverterTest, UTF8To32ErrorHandling) {
LIBC_NAMESPACE::internal::StringConverter<char8_t> sc(
reinterpret_cast<const char8_t *>(src), &state, SIZE_MAX);
- auto res = sc.popUTF32();
+ auto res = sc.pop<char32_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0x1f921);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 4);
- res = sc.popUTF32();
+ res = sc.pop<char32_t>();
ASSERT_FALSE(res.has_value());
ASSERT_EQ(static_cast<int>(res.error()), EILSEQ);
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 4);
@@ -257,12 +257,12 @@ TEST(LlvmLibcStringConverterTest, InvalidCharacterOutsideBounds) {
LIBC_NAMESPACE::internal::StringConverter<char8_t> sc1(
reinterpret_cast<const char8_t *>(src1), &ps1, 1);
- auto res1 = sc1.popUTF32();
+ auto res1 = sc1.pop<char32_t>();
ASSERT_TRUE(res1.has_value());
ASSERT_EQ(static_cast<int>(res1.value()), 0x1f921);
ASSERT_EQ(static_cast<int>(sc1.getSourceIndex()), 4);
- res1 = sc1.popUTF32();
+ res1 = sc1.pop<char32_t>();
ASSERT_FALSE(res1.has_value());
// no space to write error NOT invalid character error (EILSEQ)
ASSERT_EQ(static_cast<int>(res1.error()), -1);
@@ -275,27 +275,27 @@ TEST(LlvmLibcStringConverterTest, InvalidCharacterOutsideBounds) {
LIBC_NAMESPACE::internal::StringConverter<char32_t> sc2(
reinterpret_cast<const char32_t *>(src2), &ps2, 4);
- auto res2 = sc2.popUTF8();
+ auto res2 = sc2.pop<char8_t>();
ASSERT_TRUE(res2.has_value());
ASSERT_EQ(static_cast<int>(res2.value()), 0xF0);
ASSERT_EQ(static_cast<int>(sc2.getSourceIndex()), 1);
- res2 = sc2.popUTF8();
+ res2 = sc2.pop<char8_t>();
ASSERT_TRUE(res2.has_value());
ASSERT_EQ(static_cast<int>(res2.value()), 0x9F);
ASSERT_EQ(static_cast<int>(sc2.getSourceIndex()), 1);
- res2 = sc2.popUTF8();
+ res2 = sc2.pop<char8_t>();
ASSERT_TRUE(res2.has_value());
ASSERT_EQ(static_cast<int>(res2.value()), 0xA4);
ASSERT_EQ(static_cast<int>(sc2.getSourceIndex()), 1);
- res2 = sc2.popUTF8();
+ res2 = sc2.pop<char8_t>();
ASSERT_TRUE(res2.has_value());
ASSERT_EQ(static_cast<int>(res2.value()), 0xA1);
ASSERT_EQ(static_cast<int>(sc2.getSourceIndex()), 1);
- res2 = sc2.popUTF8();
+ res2 = sc2.pop<char8_t>();
ASSERT_FALSE(res2.has_value());
// no space to write error NOT invalid character error (EILSEQ)
ASSERT_EQ(static_cast<int>(res2.error()), -1);
@@ -315,22 +315,22 @@ TEST(LlvmLibcStringConverterTest, MultipleStringConverters32To8) {
LIBC_NAMESPACE::internal::StringConverter<char32_t> sc1(
reinterpret_cast<const char32_t *>(src), &state, SIZE_MAX, 1);
- auto res = sc1.popUTF8();
+ auto res = sc1.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xF0);
ASSERT_EQ(static_cast<int>(sc1.getSourceIndex()), 1);
- res = sc1.popUTF8();
+ res = sc1.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0x9F);
ASSERT_EQ(static_cast<int>(sc1.getSourceIndex()), 1);
- res = sc1.popUTF8();
+ res = sc1.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xA4);
ASSERT_EQ(static_cast<int>(sc1.getSourceIndex()), 1);
- res = sc1.popUTF8();
+ res = sc1.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xA1);
ASSERT_EQ(static_cast<int>(sc1.getSourceIndex()), 1);
@@ -340,12 +340,12 @@ TEST(LlvmLibcStringConverterTest, MultipleStringConverters32To8) {
reinterpret_cast<const char32_t *>(src) + sc1.getSourceIndex(), &state,
SIZE_MAX, 1);
- res = sc2.popUTF8();
+ res = sc2.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xC3);
ASSERT_EQ(static_cast<int>(sc2.getSourceIndex()), 1);
- res = sc2.popUTF8();
+ res = sc2.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0xBF);
ASSERT_EQ(static_cast<int>(sc2.getSourceIndex()), 1);
@@ -357,7 +357,7 @@ TEST(LlvmLibcStringConverterTest, MultipleStringConverters8To32) {
LIBC_NAMESPACE::internal::StringConverter<char8_t> sc1(
reinterpret_cast<const char8_t *>(src), &state, SIZE_MAX, 2);
- auto res = sc1.popUTF32();
+ auto res = sc1.pop<char32_t>();
ASSERT_FALSE(res.has_value());
ASSERT_EQ(static_cast<int>(res.error()), -1);
ASSERT_EQ(static_cast<int>(sc1.getSourceIndex()), 2);
@@ -367,12 +367,12 @@ TEST(LlvmLibcStringConverterTest, MultipleStringConverters8To32) {
reinterpret_cast<const char8_t *>(src) + sc1.getSourceIndex(), &state,
SIZE_MAX, 3);
- res = sc2.popUTF32();
+ res = sc2.pop<char32_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0x1f921);
ASSERT_EQ(static_cast<int>(sc2.getSourceIndex()), 2);
- res = sc2.popUTF32();
+ res = sc2.pop<char32_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(res.value()), 0);
ASSERT_EQ(static_cast<int>(sc2.getSourceIndex()), 3);
@@ -384,11 +384,11 @@ TEST(LlvmLibcStringConverterTest, DestLimitUTF8To32) {
LIBC_NAMESPACE::internal::StringConverter<char8_t> sc(
reinterpret_cast<const char8_t *>(src), &state, 1);
- auto res = sc.popUTF32();
+ auto res = sc.pop<char32_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 4);
- res = sc.popUTF32(); // no space to pop this into
+ res = sc.pop<char32_t>(); // no space to pop this into
ASSERT_FALSE(res.has_value());
}
@@ -399,23 +399,23 @@ TEST(LlvmLibcStringConverterTest, DestLimitUTF32To8) {
LIBC_NAMESPACE::internal::StringConverter<char32_t> sc(
reinterpret_cast<const char32_t *>(src), &state, 5);
- auto res = sc.popUTF8();
+ auto res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_TRUE(res.has_value());
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
- res = sc.popUTF8();
+ res = sc.pop<char8_t>();
ASSERT_FALSE(res.has_value());
ASSERT_EQ(static_cast<int>(sc.getSourceIndex()), 1);
}
>From 7d3134f6cc59f47460646a13abcf824bae05d772 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Valentin=20Clement=20=28=E3=83=90=E3=83=AC=E3=83=B3?=
=?UTF-8?q?=E3=82=BF=E3=82=A4=E3=83=B3=20=E3=82=AF=E3=83=AC=E3=83=A1?=
=?UTF-8?q?=E3=83=B3=29?= <clementval at gmail.com>
Date: Wed, 6 Aug 2025 15:55:53 -0700
Subject: [PATCH 167/171] Revert "[flang][cuda] Set the allocator of derived
type component after allocation" (#152402)
Reverts llvm/llvm-project#152379
Buildbot failure
https://lab.llvm.org/buildbot/#/builders/207/builds/4905
---
flang/include/flang/Lower/{CUDA.h => Cuda.h} | 12 +-
flang/lib/Lower/Allocatable.cpp | 14 +-
flang/lib/Lower/Bridge.cpp | 2 +-
flang/lib/Lower/CMakeLists.txt | 1 -
flang/lib/Lower/CUDA.cpp | 148 -------------------
flang/lib/Lower/ConvertVariable.cpp | 75 ++++++++--
flang/test/Lower/CUDA/cuda-set-allocator.cuf | 42 ++----
7 files changed, 89 insertions(+), 205 deletions(-)
rename flang/include/flang/Lower/{CUDA.h => Cuda.h} (74%)
delete mode 100644 flang/lib/Lower/CUDA.cpp
diff --git a/flang/include/flang/Lower/CUDA.h b/flang/include/flang/Lower/Cuda.h
similarity index 74%
rename from flang/include/flang/Lower/CUDA.h
rename to flang/include/flang/Lower/Cuda.h
index 6aad06e150ad1..b6f849e3d63f0 100644
--- a/flang/include/flang/Lower/CUDA.h
+++ b/flang/include/flang/Lower/Cuda.h
@@ -1,4 +1,4 @@
-//===-- Lower/CUDA.h -- CUDA Fortran utilities ------------------*- C++ -*-===//
+//===-- Lower/Cuda.h -- Cuda Fortran utilities ------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
@@ -15,7 +15,6 @@
#include "flang/Optimizer/Builder/FIRBuilder.h"
#include "flang/Optimizer/Dialect/CUF/CUFOps.h"
-#include "flang/Runtime/allocator-registry-consts.h"
#include "flang/Semantics/tools.h"
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Dialect/OpenACC/OpenACC.h"
@@ -38,15 +37,6 @@ static inline unsigned getAllocatorIdx(const Fortran::semantics::Symbol &sym) {
return kDefaultAllocator;
}
-void initializeDeviceComponentAllocator(
- Fortran::lower::AbstractConverter &converter,
- const Fortran::semantics::Symbol &sym, const fir::MutableBoxValue &box);
-
-mlir::Type gatherDeviceComponentCoordinatesAndType(
- fir::FirOpBuilder &builder, mlir::Location loc,
- const Fortran::semantics::Symbol &sym, fir::RecordType recTy,
- llvm::SmallVector<mlir::Value> &coordinates);
-
} // end namespace Fortran::lower
#endif // FORTRAN_LOWER_CUDA_H
diff --git a/flang/lib/Lower/Allocatable.cpp b/flang/lib/Lower/Allocatable.cpp
index ce9d8944387e1..219f9205f45d5 100644
--- a/flang/lib/Lower/Allocatable.cpp
+++ b/flang/lib/Lower/Allocatable.cpp
@@ -13,9 +13,9 @@
#include "flang/Lower/Allocatable.h"
#include "flang/Evaluate/tools.h"
#include "flang/Lower/AbstractConverter.h"
-#include "flang/Lower/CUDA.h"
#include "flang/Lower/ConvertType.h"
#include "flang/Lower/ConvertVariable.h"
+#include "flang/Lower/Cuda.h"
#include "flang/Lower/IterationSpace.h"
#include "flang/Lower/Mangler.h"
#include "flang/Lower/OpenACC.h"
@@ -445,14 +445,10 @@ class AllocateStmtHelper {
/*mustBeHeap=*/true);
}
- void postAllocationAction(const Allocation &alloc,
- const fir::MutableBoxValue &box) {
+ void postAllocationAction(const Allocation &alloc) {
if (alloc.getSymbol().test(Fortran::semantics::Symbol::Flag::AccDeclare))
Fortran::lower::attachDeclarePostAllocAction(converter, builder,
alloc.getSymbol());
- if (Fortran::semantics::HasCUDAComponent(alloc.getSymbol()))
- Fortran::lower::initializeDeviceComponentAllocator(
- converter, alloc.getSymbol(), box);
}
void setPinnedToFalse() {
@@ -485,7 +481,7 @@ class AllocateStmtHelper {
// Pointers must use PointerAllocate so that their deallocations
// can be validated.
genInlinedAllocation(alloc, box);
- postAllocationAction(alloc, box);
+ postAllocationAction(alloc);
setPinnedToFalse();
return;
}
@@ -508,7 +504,7 @@ class AllocateStmtHelper {
genCudaAllocate(builder, loc, box, errorManager, alloc.getSymbol());
}
fir::factory::syncMutableBoxFromIRBox(builder, loc, box);
- postAllocationAction(alloc, box);
+ postAllocationAction(alloc);
errorManager.assignStat(builder, loc, stat);
}
@@ -651,7 +647,7 @@ class AllocateStmtHelper {
setPinnedToFalse();
}
fir::factory::syncMutableBoxFromIRBox(builder, loc, box);
- postAllocationAction(alloc, box);
+ postAllocationAction(alloc);
errorManager.assignStat(builder, loc, stat);
}
diff --git a/flang/lib/Lower/Bridge.cpp b/flang/lib/Lower/Bridge.cpp
index 1bad46f80612b..6b7efe6b57db3 100644
--- a/flang/lib/Lower/Bridge.cpp
+++ b/flang/lib/Lower/Bridge.cpp
@@ -13,7 +13,6 @@
#include "flang/Lower/Bridge.h"
#include "flang/Lower/Allocatable.h"
-#include "flang/Lower/CUDA.h"
#include "flang/Lower/CallInterface.h"
#include "flang/Lower/Coarray.h"
#include "flang/Lower/ConvertCall.h"
@@ -21,6 +20,7 @@
#include "flang/Lower/ConvertExprToHLFIR.h"
#include "flang/Lower/ConvertType.h"
#include "flang/Lower/ConvertVariable.h"
+#include "flang/Lower/Cuda.h"
#include "flang/Lower/DirectivesCommon.h"
#include "flang/Lower/HostAssociations.h"
#include "flang/Lower/IO.h"
diff --git a/flang/lib/Lower/CMakeLists.txt b/flang/lib/Lower/CMakeLists.txt
index 1d1c7ddda8e9b..8e20abf0e9f2d 100644
--- a/flang/lib/Lower/CMakeLists.txt
+++ b/flang/lib/Lower/CMakeLists.txt
@@ -15,7 +15,6 @@ add_flang_library(FortranLower
ConvertProcedureDesignator.cpp
ConvertType.cpp
ConvertVariable.cpp
- CUDA.cpp
CustomIntrinsicCall.cpp
HlfirIntrinsics.cpp
HostAssociations.cpp
diff --git a/flang/lib/Lower/CUDA.cpp b/flang/lib/Lower/CUDA.cpp
deleted file mode 100644
index 60b33bb5680c2..0000000000000
--- a/flang/lib/Lower/CUDA.cpp
+++ /dev/null
@@ -1,148 +0,0 @@
-//===-- CUDA.cpp -- CUDA Fortran specific lowering ------------------------===//
-//
-// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
-// See https://llvm.org/LICENSE.txt for license information.
-// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
-//
-//===----------------------------------------------------------------------===//
-//
-// Coding style: https://mlir.llvm.org/getting_started/DeveloperGuide/
-//
-//===----------------------------------------------------------------------===//
-
-#include "flang/Lower/CUDA.h"
-
-#define DEBUG_TYPE "flang-lower-cuda"
-
-void Fortran::lower::initializeDeviceComponentAllocator(
- Fortran::lower::AbstractConverter &converter,
- const Fortran::semantics::Symbol &sym, const fir::MutableBoxValue &box) {
- if (const auto *details{
- sym.GetUltimate()
- .detailsIf<Fortran::semantics::ObjectEntityDetails>()}) {
- const Fortran::semantics::DeclTypeSpec *type{details->type()};
- const Fortran::semantics::DerivedTypeSpec *derived{type ? type->AsDerived()
- : nullptr};
- if (derived) {
- if (!FindCUDADeviceAllocatableUltimateComponent(*derived))
- return; // No device components.
-
- fir::FirOpBuilder &builder = converter.getFirOpBuilder();
- mlir::Location loc = converter.getCurrentLocation();
-
- mlir::Type baseTy = fir::unwrapRefType(box.getAddr().getType());
-
- // Only pointer and allocatable needs post allocation initialization
- // of components descriptors.
- if (!fir::isAllocatableType(baseTy) && !fir::isPointerType(baseTy))
- return;
-
- // Extract the derived type.
- mlir::Type ty = fir::getDerivedType(baseTy);
- auto recTy = mlir::dyn_cast<fir::RecordType>(ty);
- assert(recTy && "expected fir::RecordType");
-
- if (auto boxTy = mlir::dyn_cast<fir::BaseBoxType>(baseTy))
- baseTy = boxTy.getEleTy();
- baseTy = fir::unwrapRefType(baseTy);
-
- Fortran::semantics::UltimateComponentIterator components{*derived};
- mlir::Value loadedBox = fir::LoadOp::create(builder, loc, box.getAddr());
- mlir::Value addr;
- if (auto seqTy = mlir::dyn_cast<fir::SequenceType>(baseTy)) {
- mlir::Type idxTy = builder.getIndexType();
- mlir::Value one = builder.createIntegerConstant(loc, idxTy, 1);
- mlir::Value zero = builder.createIntegerConstant(loc, idxTy, 0);
- llvm::SmallVector<fir::DoLoopOp> loops;
- llvm::SmallVector<mlir::Value> indices;
- llvm::SmallVector<mlir::Value> extents;
- for (unsigned i = 0; i < seqTy.getDimension(); ++i) {
- mlir::Value dim = builder.createIntegerConstant(loc, idxTy, i);
- auto dimInfo = fir::BoxDimsOp::create(builder, loc, idxTy, idxTy,
- idxTy, loadedBox, dim);
- mlir::Value lbub = mlir::arith::AddIOp::create(
- builder, loc, dimInfo.getResult(0), dimInfo.getResult(1));
- mlir::Value ext =
- mlir::arith::SubIOp::create(builder, loc, lbub, one);
- mlir::Value cmp = mlir::arith::CmpIOp::create(
- builder, loc, mlir::arith::CmpIPredicate::sgt, ext, zero);
- ext = mlir::arith::SelectOp::create(builder, loc, cmp, ext, zero);
- extents.push_back(ext);
-
- auto loop = fir::DoLoopOp::create(
- builder, loc, dimInfo.getResult(0), dimInfo.getResult(1),
- dimInfo.getResult(2), /*isUnordered=*/true,
- /*finalCount=*/false, mlir::ValueRange{});
- loops.push_back(loop);
- indices.push_back(loop.getInductionVar());
- builder.setInsertionPointToStart(loop.getBody());
- }
- mlir::Value boxAddr = fir::BoxAddrOp::create(builder, loc, loadedBox);
- auto shape = fir::ShapeOp::create(builder, loc, extents);
- addr = fir::ArrayCoorOp::create(
- builder, loc, fir::ReferenceType::get(recTy), boxAddr, shape,
- /*slice=*/mlir::Value{}, indices, /*typeparms=*/mlir::ValueRange{});
- } else {
- addr = fir::BoxAddrOp::create(builder, loc, loadedBox);
- }
- for (const auto &compSym : components) {
- if (Fortran::semantics::IsDeviceAllocatable(compSym)) {
- llvm::SmallVector<mlir::Value> coord;
- mlir::Type fieldTy = gatherDeviceComponentCoordinatesAndType(
- builder, loc, compSym, recTy, coord);
- assert(coord.size() == 1 && "expect one coordinate");
- mlir::Value comp = fir::CoordinateOp::create(
- builder, loc, builder.getRefType(fieldTy), addr, coord[0]);
- cuf::DataAttributeAttr dataAttr =
- Fortran::lower::translateSymbolCUFDataAttribute(
- builder.getContext(), compSym);
- cuf::SetAllocatorIndexOp::create(builder, loc, comp, dataAttr);
- }
- }
- }
- }
-}
-
-mlir::Type Fortran::lower::gatherDeviceComponentCoordinatesAndType(
- fir::FirOpBuilder &builder, mlir::Location loc,
- const Fortran::semantics::Symbol &sym, fir::RecordType recTy,
- llvm::SmallVector<mlir::Value> &coordinates) {
- unsigned fieldIdx = recTy.getFieldIndex(sym.name().ToString());
- mlir::Type fieldTy;
- if (fieldIdx != std::numeric_limits<unsigned>::max()) {
- // Field found in the base record type.
- auto fieldName = recTy.getTypeList()[fieldIdx].first;
- fieldTy = recTy.getTypeList()[fieldIdx].second;
- mlir::Value fieldIndex = fir::FieldIndexOp::create(
- builder, loc, fir::FieldType::get(fieldTy.getContext()), fieldName,
- recTy,
- /*typeParams=*/mlir::ValueRange{});
- coordinates.push_back(fieldIndex);
- } else {
- // Field not found in base record type, search in potential
- // record type components.
- for (auto component : recTy.getTypeList()) {
- if (auto childRecTy = mlir::dyn_cast<fir::RecordType>(component.second)) {
- fieldIdx = childRecTy.getFieldIndex(sym.name().ToString());
- if (fieldIdx != std::numeric_limits<unsigned>::max()) {
- mlir::Value parentFieldIndex = fir::FieldIndexOp::create(
- builder, loc, fir::FieldType::get(childRecTy.getContext()),
- component.first, recTy,
- /*typeParams=*/mlir::ValueRange{});
- coordinates.push_back(parentFieldIndex);
- auto fieldName = childRecTy.getTypeList()[fieldIdx].first;
- fieldTy = childRecTy.getTypeList()[fieldIdx].second;
- mlir::Value childFieldIndex = fir::FieldIndexOp::create(
- builder, loc, fir::FieldType::get(fieldTy.getContext()),
- fieldName, childRecTy,
- /*typeParams=*/mlir::ValueRange{});
- coordinates.push_back(childFieldIndex);
- break;
- }
- }
- }
- }
- if (coordinates.empty())
- TODO(loc, "device resident component in complex derived-type hierarchy");
- return fieldTy;
-}
diff --git a/flang/lib/Lower/ConvertVariable.cpp b/flang/lib/Lower/ConvertVariable.cpp
index 6a64320965095..a4a8a697e02ae 100644
--- a/flang/lib/Lower/ConvertVariable.cpp
+++ b/flang/lib/Lower/ConvertVariable.cpp
@@ -14,12 +14,12 @@
#include "flang/Lower/AbstractConverter.h"
#include "flang/Lower/Allocatable.h"
#include "flang/Lower/BoxAnalyzer.h"
-#include "flang/Lower/CUDA.h"
#include "flang/Lower/CallInterface.h"
#include "flang/Lower/ConvertConstant.h"
#include "flang/Lower/ConvertExpr.h"
#include "flang/Lower/ConvertExprToHLFIR.h"
#include "flang/Lower/ConvertProcedureDesignator.h"
+#include "flang/Lower/Cuda.h"
#include "flang/Lower/Mangler.h"
#include "flang/Lower/PFTBuilder.h"
#include "flang/Lower/StatementContext.h"
@@ -814,24 +814,81 @@ initializeDeviceComponentAllocator(Fortran::lower::AbstractConverter &converter,
baseTy = boxTy.getEleTy();
baseTy = fir::unwrapRefType(baseTy);
- if (fir::isAllocatableType(fir::getBase(exv).getType()) ||
- fir::isPointerType(fir::getBase(exv).getType()))
+ if (mlir::isa<fir::SequenceType>(baseTy) &&
+ (fir::isAllocatableType(fir::getBase(exv).getType()) ||
+ fir::isPointerType(fir::getBase(exv).getType())))
return; // Allocator index need to be set after allocation.
auto recTy =
mlir::dyn_cast<fir::RecordType>(fir::unwrapSequenceType(baseTy));
assert(recTy && "expected fir::RecordType");
+ llvm::SmallVector<mlir::Value> coordinates;
Fortran::semantics::UltimateComponentIterator components{*derived};
for (const auto &sym : components) {
if (Fortran::semantics::IsDeviceAllocatable(sym)) {
- llvm::SmallVector<mlir::Value> coord;
- mlir::Type fieldTy =
- Fortran::lower::gatherDeviceComponentCoordinatesAndType(
- builder, loc, sym, recTy, coord);
+ unsigned fieldIdx = recTy.getFieldIndex(sym.name().ToString());
+ mlir::Type fieldTy;
+ llvm::SmallVector<mlir::Value> coordinates;
+
+ if (fieldIdx != std::numeric_limits<unsigned>::max()) {
+ // Field found in the base record type.
+ auto fieldName = recTy.getTypeList()[fieldIdx].first;
+ fieldTy = recTy.getTypeList()[fieldIdx].second;
+ mlir::Value fieldIndex = fir::FieldIndexOp::create(
+ builder, loc, fir::FieldType::get(fieldTy.getContext()),
+ fieldName, recTy,
+ /*typeParams=*/mlir::ValueRange{});
+ coordinates.push_back(fieldIndex);
+ } else {
+ // Field not found in base record type, search in potential
+ // record type components.
+ for (auto component : recTy.getTypeList()) {
+ if (auto childRecTy =
+ mlir::dyn_cast<fir::RecordType>(component.second)) {
+ fieldIdx = childRecTy.getFieldIndex(sym.name().ToString());
+ if (fieldIdx != std::numeric_limits<unsigned>::max()) {
+ mlir::Value parentFieldIndex = fir::FieldIndexOp::create(
+ builder, loc,
+ fir::FieldType::get(childRecTy.getContext()),
+ component.first, recTy,
+ /*typeParams=*/mlir::ValueRange{});
+ coordinates.push_back(parentFieldIndex);
+ auto fieldName = childRecTy.getTypeList()[fieldIdx].first;
+ fieldTy = childRecTy.getTypeList()[fieldIdx].second;
+ mlir::Value childFieldIndex = fir::FieldIndexOp::create(
+ builder, loc, fir::FieldType::get(fieldTy.getContext()),
+ fieldName, childRecTy,
+ /*typeParams=*/mlir::ValueRange{});
+ coordinates.push_back(childFieldIndex);
+ break;
+ }
+ }
+ }
+ }
+
+ if (coordinates.empty())
+ TODO(loc, "device resident component in complex derived-type "
+ "hierarchy");
+
mlir::Value base = fir::getBase(exv);
- mlir::Value comp = fir::CoordinateOp::create(
- builder, loc, builder.getRefType(fieldTy), base, coord);
+ mlir::Value comp;
+ if (mlir::isa<fir::BaseBoxType>(fir::unwrapRefType(base.getType()))) {
+ mlir::Value box = fir::LoadOp::create(builder, loc, base);
+ mlir::Value addr = fir::BoxAddrOp::create(builder, loc, box);
+ llvm::SmallVector<mlir::Value> lenParams;
+ assert(coordinates.size() == 1 && "expect one coordinate");
+ auto field = mlir::dyn_cast<fir::FieldIndexOp>(
+ coordinates[0].getDefiningOp());
+ comp = hlfir::DesignateOp::create(
+ builder, loc, builder.getRefType(fieldTy), addr,
+ /*component=*/field.getFieldName(),
+ /*componentShape=*/mlir::Value{},
+ hlfir::DesignateOp::Subscripts{});
+ } else {
+ comp = fir::CoordinateOp::create(
+ builder, loc, builder.getRefType(fieldTy), base, coordinates);
+ }
cuf::DataAttributeAttr dataAttr =
Fortran::lower::translateSymbolCUFDataAttribute(
builder.getContext(), sym);
diff --git a/flang/test/Lower/CUDA/cuda-set-allocator.cuf b/flang/test/Lower/CUDA/cuda-set-allocator.cuf
index d783f340fe9a4..e3bb181f65398 100644
--- a/flang/test/Lower/CUDA/cuda-set-allocator.cuf
+++ b/flang/test/Lower/CUDA/cuda-set-allocator.cuf
@@ -23,44 +23,34 @@ contains
subroutine sub2()
type(ty_device), pointer :: d1
- allocate(d1)
end subroutine
! CHECK-LABEL: func.func @_QMm1Psub2()
! CHECK: %[[ALLOC:.*]] = cuf.alloc !fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>> {bindc_name = "d1", data_attr = #cuf.cuda<managed>, uniq_name = "_QMm1Fsub2Ed1"} -> !fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
! CHECK: %[[DECL:.*]]:2 = hlfir.declare %[[ALLOC]] {data_attr = #cuf.cuda<managed>, fortran_attrs = #fir.var_attrs<pointer>, uniq_name = "_QMm1Fsub2Ed1"} : (!fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>) -> (!fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>, !fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>)
-! CHECK: cuf.allocate
-! CHECK: %[[LOAD:.*]] = fir.load %[[DECL]]#0 : !fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
-! CHECK: %[[ADDR:.*]] = fir.box_addr %[[LOAD]] : (!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>) -> !fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>
-! CHECK: %[[COORD1:.*]] = fir.coordinate_of %[[ADDR]], x : (!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
-! CHECK: cuf.set_allocator_idx %[[COORD1]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
-! CHECK: %[[COORD2:.*]] = fir.coordinate_of %[[ADDR]], z : (!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
-! CHECK: cuf.set_allocator_idx %[[COORD2]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
+! CHECK: %[[LOAD1:.*]] = fir.load %[[DECL]]#0 : !fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
+! CHECK: %[[ADDR1:.*]] = fir.box_addr %[[LOAD1]] : (!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>) -> !fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>
+! CHECK: %[[DESIGNATE1:.*]] = hlfir.designate %[[ADDR1]]{"x"} : (!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
+! CHECK: cuf.set_allocator_idx %[[DESIGNATE1]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
+! CHECK: %[[LOAD2:.*]] = fir.load %[[DECL]]#0 : !fir.ref<!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
+! CHECK: %[[ADDR2:.*]] = fir.box_addr %[[LOAD2]] : (!fir.box<!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>) -> !fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>
+! CHECK: %[[DESIGNATE2:.*]] = hlfir.designate %[[ADDR2]]{"z"} : (!fir.ptr<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
+! CHECK: cuf.set_allocator_idx %[[DESIGNATE2]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
subroutine sub3()
type(ty_device), allocatable :: d1
- allocate(d1)
end subroutine
! CHECK-LABEL: func.func @_QMm1Psub3()
! CHECK: %[[ALLOC:.*]] = cuf.alloc !fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>> {bindc_name = "d1", data_attr = #cuf.cuda<managed>, uniq_name = "_QMm1Fsub3Ed1"} -> !fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
! CHECK: %[[DECL:.*]]:2 = hlfir.declare %[[ALLOC]] {data_attr = #cuf.cuda<managed>, fortran_attrs = #fir.var_attrs<allocatable>, uniq_name = "_QMm1Fsub3Ed1"} : (!fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>) -> (!fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>, !fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>)
-! CHECK: cuf.allocate
-! CHECK: %[[LOAD:.*]] = fir.load %[[DECL]]#0 : !fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
-! CHECK: %[[ADDR:.*]] = fir.box_addr %[[LOAD]] : (!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>) -> !fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>
-! CHECK: %[[COORD1:.*]] = fir.coordinate_of %[[ADDR]], x : (!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
-! CHECK: cuf.set_allocator_idx %[[COORD1]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
-! CHECK: %[[COORD2:.*]] = fir.coordinate_of %[[ADDR]], z : (!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
-! CHECK: cuf.set_allocator_idx %[[COORD2]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
-
- subroutine sub4()
- type(ty_device), allocatable :: d1(:,:)
- allocate(d1(10, 10))
- end subroutine
-
-! CHECK-LABEL: func.func @_QMm1Psub4()
-! CHECK: cuf.allocate
-! CHECK-COUNT-2: fir.do_loop
-! CHECK-COUNT-2: cuf.set_allocator_idx
+! CHECK: %[[LOAD1:.*]] = fir.load %[[DECL]]#0 : !fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
+! CHECK: %[[ADDR1:.*]] = fir.box_addr %[[LOAD1]] : (!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>) -> !fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>
+! CHECK: %[[DESIGNATE1:.*]] = hlfir.designate %[[ADDR1]]{"x"} : (!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
+! CHECK: cuf.set_allocator_idx %[[DESIGNATE1]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
+! CHECK: %[[LOAD2:.*]] = fir.load %[[DECL]]#0 : !fir.ref<!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>>
+! CHECK: %[[ADDR2:.*]] = fir.box_addr %[[LOAD2]] : (!fir.box<!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>>) -> !fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>
+! CHECK: %[[DESIGNATE2:.*]] = hlfir.designate %[[ADDR2]]{"z"} : (!fir.heap<!fir.type<_QMm1Tty_device{x:!fir.box<!fir.heap<!fir.array<?xi32>>>,y:i32,z:!fir.box<!fir.heap<!fir.array<?xi32>>>}>>) -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>>
+! CHECK: cuf.set_allocator_idx %[[DESIGNATE2]] : !fir.ref<!fir.box<!fir.heap<!fir.array<?xi32>>>> {data_attr = #cuf.cuda<device>}
end module
>From a7f1702f2c5d4601de962cde14af35c313c16902 Mon Sep 17 00:00:00 2001
From: Florian Mayer <fmayer at google.com>
Date: Wed, 6 Aug 2025 16:07:40 -0700
Subject: [PATCH 168/171] [NFC] [CFI] correct comment in test (#152399)
The comment incorrectly stated that `const char**` gets generalized to
ptr, when it should say that `char**` does.
---
clang/test/CodeGen/cfi-icall-generalize.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/clang/test/CodeGen/cfi-icall-generalize.c b/clang/test/CodeGen/cfi-icall-generalize.c
index 2011889fcb400..0af17e5760cc6 100644
--- a/clang/test/CodeGen/cfi-icall-generalize.c
+++ b/clang/test/CodeGen/cfi-icall-generalize.c
@@ -1,7 +1,7 @@
// RUN: %clang_cc1 -triple x86_64-unknown-linux -fsanitize=cfi-icall -fsanitize-trap=cfi-icall -emit-llvm -o - %s | FileCheck --check-prefix=CHECK --check-prefix=UNGENERALIZED %s
// RUN: %clang_cc1 -triple x86_64-unknown-linux -fsanitize=cfi-icall -fsanitize-trap=cfi-icall -fsanitize-cfi-icall-generalize-pointers -emit-llvm -o - %s | FileCheck --check-prefix=CHECK --check-prefix=GENERALIZED %s
-// Test that const char* is generalized to const ptr and that const char** is
+// Test that const char* is generalized to const ptr and that char** is
// generalized to ptr
// CHECK: define{{.*}} ptr @f({{.*}} !type [[TYPE:![0-9]+]] !type [[TYPE_GENERALIZED:![0-9]+]]
>From c4846d29cdefc5fb6858ccf0378a8103b659016b Mon Sep 17 00:00:00 2001
From: Uzair Nawaz <uzairnawaz at google.com>
Date: Wed, 6 Aug 2025 16:32:23 -0700
Subject: [PATCH 169/171] [libc] Move CharacterConverter template
specialization to cpp file (#152405)
Fixes build errors caused by #152204
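For context, the fix follows the usual portability pattern: declare only
the primary member function template in the header and define the explicit
specializations at namespace scope in the .cpp, since some compilers
reject explicit specializations in class scope. A single-file sketch with
hypothetical names (C++20 for char8_t):

  // character_converter.h (sketch): declare only the primary template.
  struct Converter {
    template <typename CharType> CharType pop();
  };

  // character_converter.cpp (sketch): define the explicit specializations
  // at namespace scope, which every compiler accepts.
  template <> char8_t Converter::pop<char8_t>() { return u8'x'; }
  template <> char32_t Converter::pop<char32_t>() { return U'x'; }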
---
libc/src/__support/wchar/character_converter.cpp | 8 ++++++++
libc/src/__support/wchar/character_converter.h | 4 ----
2 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/libc/src/__support/wchar/character_converter.cpp b/libc/src/__support/wchar/character_converter.cpp
index 278248c5c4c4a..26672884d7b16 100644
--- a/libc/src/__support/wchar/character_converter.cpp
+++ b/libc/src/__support/wchar/character_converter.cpp
@@ -164,5 +164,13 @@ ErrorOr<char8_t> CharacterConverter::pop_utf8() {
return static_cast<char8_t>(output);
}
+template <> ErrorOr<char8_t> CharacterConverter::pop() { return pop_utf8(); }
+template <> ErrorOr<char32_t> CharacterConverter::pop() { return pop_utf32(); }
+
+template <> size_t CharacterConverter::sizeAs<char8_t>() {
+ return state->total_bytes;
+}
+template <> size_t CharacterConverter::sizeAs<char32_t>() { return 1; }
+
} // namespace internal
} // namespace LIBC_NAMESPACE_DECL
diff --git a/libc/src/__support/wchar/character_converter.h b/libc/src/__support/wchar/character_converter.h
index fef30f7ce43fa..2cc28abf2772a 100644
--- a/libc/src/__support/wchar/character_converter.h
+++ b/libc/src/__support/wchar/character_converter.h
@@ -33,8 +33,6 @@ class CharacterConverter {
bool isValidState();
template <typename CharType> size_t sizeAs();
- template <> size_t sizeAs<char8_t>() { return state->total_bytes; }
- template <> size_t sizeAs<char32_t>() { return 1; }
int push(char8_t utf8_byte);
int push(char32_t utf32);
@@ -42,8 +40,6 @@ class CharacterConverter {
ErrorOr<char8_t> pop_utf8();
ErrorOr<char32_t> pop_utf32();
template <typename CharType> ErrorOr<CharType> pop();
- template <> ErrorOr<char8_t> pop() { return pop_utf8(); }
- template <> ErrorOr<char32_t> pop() { return pop_utf32(); }
};
} // namespace internal
>From acb5d0c211f72ba370bfeea7e5bf3b108f84895a Mon Sep 17 00:00:00 2001
From: Finn Plummer <mail at inbelic.dev>
Date: Wed, 6 Aug 2025 16:35:16 -0700
Subject: [PATCH 170/171] [NFC][HLSL] Replace uses of
`getResourceName`/`printEnum` (#152211)
Introduce an `enumToStringRef` function into `ScopedPrinter.h` that
replicates the `enumToString` behaviour, except that instead of returning
a hex value string for an invalid value, it returns an empty string. This
lets us return a StringRef and easily check whether an invalid enum was
provided by testing if the StringRef is empty.
This patch then uses `enumToStringRef` to remove the redundant
`getResourceName` and `printEnum` functions.
Resolves: https://github.com/llvm/llvm-project/issues/151200.
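A standalone mirror of the pattern, with std::string_view standing in for
llvm::StringRef (the real template also takes an ArrayRef of EnumEntry and
returns the AltName):

  #include <iostream>
  #include <string_view>

  enum class ResourceClass { SRV, UAV, CBuffer, Sampler };

  struct EnumEntry {
    std::string_view Name;
    ResourceClass Value;
  };

  constexpr EnumEntry ResourceClassNames[] = {
      {"SRV", ResourceClass::SRV},
      {"UAV", ResourceClass::UAV},
      {"CBV", ResourceClass::CBuffer},
      {"Sampler", ResourceClass::Sampler},
  };

  // Linear-scan the name table; an empty string signals an invalid value,
  // so callers can test validity with .empty() instead of parsing a hex
  // fallback string.
  std::string_view enumToStringRefSketch(ResourceClass V) {
    for (const EnumEntry &E : ResourceClassNames)
      if (E.Value == V)
        return E.Name;
    return {};
  }

  int main() {
    std::string_view Name = enumToStringRefSketch(ResourceClass::CBuffer);
    if (!Name.empty())
      std::cout << Name << '\n'; // prints "CBV"
  }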
---
llvm/include/llvm/Support/ScopedPrinter.h | 11 +++++
llvm/lib/Frontend/HLSL/HLSLRootSignature.cpp | 41 +++++--------------
.../Frontend/HLSL/RootSignatureMetadata.cpp | 25 +++++------
3 files changed, 32 insertions(+), 45 deletions(-)
diff --git a/llvm/include/llvm/Support/ScopedPrinter.h b/llvm/include/llvm/Support/ScopedPrinter.h
index e6c4cc4a4ea13..a08cc8fd31fd2 100644
--- a/llvm/include/llvm/Support/ScopedPrinter.h
+++ b/llvm/include/llvm/Support/ScopedPrinter.h
@@ -107,6 +107,17 @@ std::string enumToString(T Value, ArrayRef<EnumEntry<TEnum>> EnumValues) {
return utohexstr(Value, true);
}
+/// Retrieves the Value's enum name.
+///
+/// Returns an empty StringRef when an invalid value is provided.
+template <typename T, typename TEnum>
+StringRef enumToStringRef(T Value, ArrayRef<EnumEntry<TEnum>> EnumValues) {
+ for (const EnumEntry<TEnum> &EnumItem : EnumValues)
+ if (EnumItem.Value == Value)
+ return EnumItem.AltName;
+ return "";
+}
+
class LLVM_ABI ScopedPrinter {
public:
enum class ScopedPrinterKind {
diff --git a/llvm/lib/Frontend/HLSL/HLSLRootSignature.cpp b/llvm/lib/Frontend/HLSL/HLSLRootSignature.cpp
index 78c20a6c5c9ff..79904fc19eb45 100644
--- a/llvm/lib/Frontend/HLSL/HLSLRootSignature.cpp
+++ b/llvm/lib/Frontend/HLSL/HLSLRootSignature.cpp
@@ -17,24 +17,6 @@ namespace llvm {
namespace hlsl {
namespace rootsig {
-template <typename T>
-static std::optional<StringRef> getEnumName(const T Value,
- ArrayRef<EnumEntry<T>> Enums) {
- for (const auto &EnumItem : Enums)
- if (EnumItem.Value == Value)
- return EnumItem.Name;
- return std::nullopt;
-}
-
-template <typename T>
-static raw_ostream &printEnum(raw_ostream &OS, const T Value,
- ArrayRef<EnumEntry<T>> Enums) {
- auto MaybeName = getEnumName(Value, Enums);
- if (MaybeName)
- OS << *MaybeName;
- return OS;
-}
-
template <typename T>
static raw_ostream &printFlags(raw_ostream &OS, const T Value,
ArrayRef<EnumEntry<T>> Flags) {
@@ -46,9 +28,9 @@ static raw_ostream &printFlags(raw_ostream &OS, const T Value,
if (FlagSet)
OS << " | ";
- auto MaybeFlag = getEnumName(T(Bit), Flags);
- if (MaybeFlag)
- OS << *MaybeFlag;
+ StringRef MaybeFlag = enumToStringRef(T(Bit), Flags);
+ if (!MaybeFlag.empty())
+ OS << MaybeFlag;
else
OS << "invalid: " << Bit;
@@ -70,43 +52,42 @@ static const EnumEntry<RegisterType> RegisterNames[] = {
};
static raw_ostream &operator<<(raw_ostream &OS, const Register &Reg) {
- printEnum(OS, Reg.ViewType, ArrayRef(RegisterNames));
- OS << Reg.Number;
+ OS << enumToStringRef(Reg.ViewType, ArrayRef(RegisterNames)) << Reg.Number;
return OS;
}
static raw_ostream &operator<<(raw_ostream &OS,
const llvm::dxbc::ShaderVisibility &Visibility) {
- printEnum(OS, Visibility, dxbc::getShaderVisibility());
+ OS << enumToStringRef(Visibility, dxbc::getShaderVisibility());
return OS;
}
static raw_ostream &operator<<(raw_ostream &OS,
const llvm::dxbc::SamplerFilter &Filter) {
- printEnum(OS, Filter, dxbc::getSamplerFilters());
+ OS << enumToStringRef(Filter, dxbc::getSamplerFilters());
return OS;
}
static raw_ostream &operator<<(raw_ostream &OS,
const dxbc::TextureAddressMode &Address) {
- printEnum(OS, Address, dxbc::getTextureAddressModes());
+ OS << enumToStringRef(Address, dxbc::getTextureAddressModes());
return OS;
}
static raw_ostream &operator<<(raw_ostream &OS,
const dxbc::ComparisonFunc &CompFunc) {
- printEnum(OS, CompFunc, dxbc::getComparisonFuncs());
+ OS << enumToStringRef(CompFunc, dxbc::getComparisonFuncs());
return OS;
}
static raw_ostream &operator<<(raw_ostream &OS,
const dxbc::StaticBorderColor &BorderColor) {
- printEnum(OS, BorderColor, dxbc::getStaticBorderColors());
+ OS << enumToStringRef(BorderColor, dxbc::getStaticBorderColors());
return OS;
}
@@ -119,8 +100,8 @@ static const EnumEntry<dxil::ResourceClass> ResourceClassNames[] = {
};
static raw_ostream &operator<<(raw_ostream &OS, const ClauseType &Type) {
- printEnum(OS, dxil::ResourceClass(llvm::to_underlying(Type)),
- ArrayRef(ResourceClassNames));
+ OS << enumToStringRef(dxil::ResourceClass(llvm::to_underlying(Type)),
+ ArrayRef(ResourceClassNames));
return OS;
}
diff --git a/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp b/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp
index 6d89fa7b1222c..9cf4ed1fb104c 100644
--- a/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp
+++ b/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp
@@ -58,13 +58,6 @@ static const EnumEntry<dxil::ResourceClass> ResourceClassNames[] = {
{"Sampler", dxil::ResourceClass::Sampler},
};
-static std::optional<StringRef> getResourceName(dxil::ResourceClass Class) {
- for (const auto &ClassEnum : ResourceClassNames)
- if (ClassEnum.Value == Class)
- return ClassEnum.Name;
- return std::nullopt;
-}
-
namespace {
// We use the OverloadVisit with std::visit to ensure the compiler catches if a
@@ -133,10 +126,11 @@ MDNode *MetadataBuilder::BuildRootConstants(const RootConstants &Constants) {
MDNode *MetadataBuilder::BuildRootDescriptor(const RootDescriptor &Descriptor) {
IRBuilder<> Builder(Ctx);
- std::optional<StringRef> ResName =
- getResourceName(dxil::ResourceClass(to_underlying(Descriptor.Type)));
- assert(ResName && "Provided an invalid Resource Class");
- SmallString<7> Name({"Root", *ResName});
+ StringRef ResName =
+ enumToStringRef(dxil::ResourceClass(to_underlying(Descriptor.Type)),
+ ArrayRef(ResourceClassNames));
+ assert(!ResName.empty() && "Provided an invalid Resource Class");
+ SmallString<7> Name({"Root", ResName});
Metadata *Operands[] = {
MDString::get(Ctx, Name),
ConstantAsMetadata::get(
@@ -174,11 +168,12 @@ MDNode *MetadataBuilder::BuildDescriptorTable(const DescriptorTable &Table) {
MDNode *MetadataBuilder::BuildDescriptorTableClause(
const DescriptorTableClause &Clause) {
IRBuilder<> Builder(Ctx);
- std::optional<StringRef> ResName =
- getResourceName(dxil::ResourceClass(to_underlying(Clause.Type)));
- assert(ResName && "Provided an invalid Resource Class");
+ StringRef ResName =
+ enumToStringRef(dxil::ResourceClass(to_underlying(Clause.Type)),
+ ArrayRef(ResourceClassNames));
+ assert(!ResName.empty() && "Provided an invalid Resource Class");
Metadata *Operands[] = {
- MDString::get(Ctx, *ResName),
+ MDString::get(Ctx, ResName),
ConstantAsMetadata::get(Builder.getInt32(Clause.NumDescriptors)),
ConstantAsMetadata::get(Builder.getInt32(Clause.Reg.Number)),
ConstantAsMetadata::get(Builder.getInt32(Clause.Space)),
>From 1746d2f6a059228cab9833843d8a2d75f9e6b562 Mon Sep 17 00:00:00 2001
From: Finn Plummer <canadienfinn at gmail.com>
Date: Tue, 5 Aug 2025 22:12:47 +0000
Subject: [PATCH 171/171] [NFC][HLSL][DirectX] Consolidate `ResourceClassNames`
During the split of the various `Frontend/HLSL` libraries, the
`ResourceClassNames` definitions were inadvertently duplicated. This
commit consolidates the definitions into `DXContainer.h` as
`getResourceClasses`.
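A minimal sketch of the consolidation pattern, with std::span and
std::string_view standing in for llvm::ArrayRef and llvm::StringRef, and a
hypothetical namespace:

  #include <iostream>
  #include <span>
  #include <string_view>

  namespace dxbc_sketch {
  struct Entry {
    std::string_view Name;
    int Value;
  };

  // One table in one translation unit, handed out through a single
  // accessor, so client libraries can no longer drift out of sync.
  std::span<const Entry> getResourceClasses() {
    static const Entry Table[] = {
        {"SRV", 0}, {"UAV", 1}, {"CBV", 2}, {"Sampler", 3}};
    return Table;
  }
  } // namespace dxbc_sketch

  int main() {
    for (const auto &E : dxbc_sketch::getResourceClasses())
      std::cout << E.Name << '\n';
  }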
---
llvm/include/llvm/BinaryFormat/DXContainer.h | 3 +++
llvm/lib/BinaryFormat/DXContainer.cpp | 11 +++++++++++
llvm/lib/Frontend/HLSL/HLSLRootSignature.cpp | 9 +--------
llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp | 11 ++---------
4 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/llvm/include/llvm/BinaryFormat/DXContainer.h b/llvm/include/llvm/BinaryFormat/DXContainer.h
index 89abca02efefa..cc4af3d9be8d7 100644
--- a/llvm/include/llvm/BinaryFormat/DXContainer.h
+++ b/llvm/include/llvm/BinaryFormat/DXContainer.h
@@ -16,6 +16,7 @@
#include "llvm/ADT/BitmaskEnum.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Compiler.h"
+#include "llvm/Support/DXILABI.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/SwapByteOrder.h"
#include "llvm/TargetParser/Triple.h"
@@ -157,6 +158,8 @@ enum class FeatureFlags : uint64_t {
static_assert((uint64_t)FeatureFlags::NextUnusedBit <= 1ull << 63,
"Shader flag bits exceed enum size.");
+LLVM_ABI ArrayRef<EnumEntry<llvm::dxil::ResourceClass>> getResourceClasses();
+
#define ROOT_SIGNATURE_FLAG(Num, Val) Val = Num,
enum class RootFlags : uint32_t {
#include "DXContainerConstants.def"
diff --git a/llvm/lib/BinaryFormat/DXContainer.cpp b/llvm/lib/BinaryFormat/DXContainer.cpp
index 36d10d0b63078..eb83945c9c42f 100644
--- a/llvm/lib/BinaryFormat/DXContainer.cpp
+++ b/llvm/lib/BinaryFormat/DXContainer.cpp
@@ -60,6 +60,17 @@ ArrayRef<EnumEntry<SigComponentType>> dxbc::getSigComponentTypes() {
return ArrayRef(SigComponentTypes);
}
+static const EnumEntry<dxil::ResourceClass> ResourceClassNames[] = {
+ {"SRV", llvm::dxil::ResourceClass::SRV},
+ {"UAV", llvm::dxil::ResourceClass::UAV},
+ {"CBV", llvm::dxil::ResourceClass::CBuffer},
+ {"Sampler", llvm::dxil::ResourceClass::Sampler},
+};
+
+ArrayRef<EnumEntry<llvm::dxil::ResourceClass>> dxbc::getResourceClasses() {
+ return ArrayRef(ResourceClassNames);
+}
+
static const EnumEntry<RootFlags> RootFlagNames[] = {
#define ROOT_SIGNATURE_FLAG(Val, Enum) {#Enum, RootFlags::Enum},
#include "llvm/BinaryFormat/DXContainerConstants.def"
diff --git a/llvm/lib/Frontend/HLSL/HLSLRootSignature.cpp b/llvm/lib/Frontend/HLSL/HLSLRootSignature.cpp
index 79904fc19eb45..574883e0d7fd7 100644
--- a/llvm/lib/Frontend/HLSL/HLSLRootSignature.cpp
+++ b/llvm/lib/Frontend/HLSL/HLSLRootSignature.cpp
@@ -92,16 +92,9 @@ static raw_ostream &operator<<(raw_ostream &OS,
return OS;
}
-static const EnumEntry<dxil::ResourceClass> ResourceClassNames[] = {
- {"CBV", dxil::ResourceClass::CBuffer},
- {"SRV", dxil::ResourceClass::SRV},
- {"UAV", dxil::ResourceClass::UAV},
- {"Sampler", dxil::ResourceClass::Sampler},
-};
-
static raw_ostream &operator<<(raw_ostream &OS, const ClauseType &Type) {
OS << enumToStringRef(dxil::ResourceClass(llvm::to_underlying(Type)),
- ArrayRef(ResourceClassNames));
+ dxbc::getResourceClasses());
return OS;
}
diff --git a/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp b/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp
index 9cf4ed1fb104c..1cda3080442b2 100644
--- a/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp
+++ b/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp
@@ -51,13 +51,6 @@ static std::optional<StringRef> extractMdStringValue(MDNode *Node,
return NodeText->getString();
}
-static const EnumEntry<dxil::ResourceClass> ResourceClassNames[] = {
- {"CBV", dxil::ResourceClass::CBuffer},
- {"SRV", dxil::ResourceClass::SRV},
- {"UAV", dxil::ResourceClass::UAV},
- {"Sampler", dxil::ResourceClass::Sampler},
-};
-
namespace {
// We use the OverloadVisit with std::visit to ensure the compiler catches if a
@@ -128,7 +121,7 @@ MDNode *MetadataBuilder::BuildRootDescriptor(const RootDescriptor &Descriptor) {
IRBuilder<> Builder(Ctx);
StringRef ResName =
enumToStringRef(dxil::ResourceClass(to_underlying(Descriptor.Type)),
- ArrayRef(ResourceClassNames));
+ dxbc::getResourceClasses());
assert(!ResName.empty() && "Provided an invalid Resource Class");
SmallString<7> Name({"Root", ResName});
Metadata *Operands[] = {
@@ -170,7 +163,7 @@ MDNode *MetadataBuilder::BuildDescriptorTableClause(
IRBuilder<> Builder(Ctx);
StringRef ResName =
enumToStringRef(dxil::ResourceClass(to_underlying(Clause.Type)),
- ArrayRef(ResourceClassNames));
+ dxbc::getResourceClasses());
assert(!ResName.empty() && "Provided an invalid Resource Class");
Metadata *Operands[] = {
MDString::get(Ctx, ResName),