[llvm] [RegAllocFast] Ensure live-in vregs get reloaded after INLINEASM_BR spills (PR #131350)
Antonio Frighetto via llvm-commits
llvm-commits at lists.llvm.org
Fri Mar 14 09:15:18 PDT 2025
https://github.com/antoniofrighetto created https://github.com/llvm/llvm-project/pull/131350
We have already ensured in 9cec2b246e719533723562950e56c292fe5dd5ad that `INLINEASM_BR` output operands get spilled onto the stack, both on the fallthrough path and in the indirect targets. Since reloads of live-in values into physical registers are emitted only after all MIR instructions (and their operands) have been visited, make sure such reloads are placed at the start of the block, but after prologues and `INLINEASM_BR` spills, as otherwise stale values may be read from the stack.
Fixes: #74483, #110251.
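Conceptually, the change makes the reload insertion point at the top of a block skip not only prologue instructions but also these `INLINEASM_BR` spill stores (and, when displacing a physical register, moves the reload past such spills as well). A simplified, non-authoritative sketch of that skip loop, reusing the `isInlineAsmBrSpill` helper introduced in the patch below (the iterator name is illustrative only):

```c++
// Sketch only: advance the reload insertion point past the block prologue
// and past any INLINEASM_BR spill stores that save live-in registers.
MachineBasicBlock::iterator InsertPt = MBB.begin();
while (InsertPt != MBB.end() &&
       (TII->isBasicBlockPrologue(*InsertPt) || isInlineAsmBrSpill(*InsertPt)))
  ++InsertPt;
// Live-in reloads inserted at InsertPt now read the freshly spilled values
// rather than stale stack contents.
```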
From 23cd1393e7274919d7d35b06e9edb4275a2e4b4a Mon Sep 17 00:00:00 2001
From: Antonio Frighetto <me at antoniofrighetto.com>
Date: Thu, 13 Mar 2025 16:25:15 +0100
Subject: [PATCH 1/2] [RegAllocFast] Introduce test for PR131350 (NFC)
---
...locfast-callbr-asm-spills-after-reload.mir | 68 +++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 llvm/test/CodeGen/X86/regallocfast-callbr-asm-spills-after-reload.mir
diff --git a/llvm/test/CodeGen/X86/regallocfast-callbr-asm-spills-after-reload.mir b/llvm/test/CodeGen/X86/regallocfast-callbr-asm-spills-after-reload.mir
new file mode 100644
index 0000000000000..ffc4f8510c8b4
--- /dev/null
+++ b/llvm/test/CodeGen/X86/regallocfast-callbr-asm-spills-after-reload.mir
@@ -0,0 +1,68 @@
+# NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+# RUN: llc -mtriple=x86_64-- -run-pass=regallocfast -o - %s | FileCheck %s
+# RUN: llc -mtriple=x86_64-- -passes=regallocfast -o - %s | FileCheck %s
+
+...
+---
+name: callbr-asm-spills-after-reload
+alignment: 16
+tracksRegLiveness: true
+registers:
+ - { id: 0, class: gr64, preferred-register: '', flags: [ ] }
+ - { id: 1, class: gr32, preferred-register: '', flags: [ ] }
+ - { id: 2, class: gr64, preferred-register: '', flags: [ ] }
+liveins:
+ - { reg: '$rdi', virtual-reg: '%2' }
+frameInfo:
+ isFrameAddressTaken: false
+ stackSize: 0
+ offsetAdjustment: 0
+ maxAlignment: 8
+ adjustsStack: false
+ hasCalls: false
+fixedStack: []
+stack:
+ - { id: 0, type: default, offset: 0, size: 8, alignment: 8,
+ stack-id: default, callee-saved-register: '', callee-saved-restored: true }
+body: |
+ bb.0.entry:
+ successors: %bb.1(0x40000000), %bb.3(0x40000000)
+ liveins: $rdi
+
+ %2:gr64 = COPY $rdi
+ %3:gr64 = COPY killed %2
+ MOV64mr %stack.0, 1, $noreg, 0, $noreg, %3 :: (store (s64))
+ %0:gr64 = MOV64rm %stack.0, 1, $noreg, 0, $noreg :: (dereferenceable load (s64))
+ %6:gr32 = MOV32rm %0, 1, $noreg, 0, $noreg :: (load (s32))
+ %5:gr32_norex2 = COPY %6
+ %4:gr32_norex2 = COPY %5
+ INLINEASM_BR &" subl $$11, $0; cmpl $$11, $0; je ${2:l};", 0 /* attdialect */, 2686986 /* regdef:GR32_NOREX2 */, def %4, 2147483657 /* reguse tiedto:$0 */, %4(tied-def 3), 13 /* imm */, %bb.3, 12 /* clobber */, implicit-def early-clobber $df, 12 /* clobber */, implicit-def early-clobber $fpsw, 12 /* clobber */, implicit-def early-clobber $eflags
+ %1:gr32 = COPY %4
+ JMP_1 %bb.1
+
+ bb.1:
+ successors: %bb.2(0x80000000)
+
+ ; CHECK: $rax = MOV64rm %stack.3, 1, $noreg, 0, $noreg :: (load (s64) from %stack.3)
+ ; CHECK-NEXT: $ecx = MOV32rm %stack.1, 1, $noreg, 0, $noreg :: (load (s32) from %stack.1)
+ ; CHECK-NEXT: MOV32mr renamable $rax, 1, $noreg, 0, $noreg, renamable $ecx :: (store (s32))
+ MOV32mr %0, 1, $noreg, 0, $noreg, %1 :: (store (s32))
+
+ bb.2:
+ RET64
+
+ bb.3 (machine-block-address-taken, inlineasm-br-indirect-target):
+ successors: %bb.2(0x80000000)
+
+ ; FIXME: This is a miscompilation, as, despite spilling the value modified by the inlineasm_br,
+ ; the reload emitted still reads from an uninitialized stack slot.
+ ; CHECK: $ecx = MOV32rm %stack.2, 1, $noreg, 0, $noreg :: (load (s32) from %stack.2)
+ ; CHECK-NEXT: MOV32mr %stack.2, 1, $noreg, 0, $noreg, $eax :: (store (s32) into %stack.2)
+ ; CHECK-NEXT: $rax = MOV64rm %stack.3, 1, $noreg, 0, $noreg :: (load (s64) from %stack.3)
+ ; CHECK-NEXT: MOV32mr renamable $rax, 1, $noreg, 0, $noreg, killed renamable $ecx :: (store (s32))
+ ; CHECK-NEXT: JMP_1 %bb.2
+ %7:gr32 = COPY %4
+ MOV32mr %0, 1, $noreg, 0, $noreg, %7 :: (store (s32))
+ JMP_1 %bb.2
+
+...
From d364a3356ce21233e52d2b668b25be6c6ef64291 Mon Sep 17 00:00:00 2001
From: Antonio Frighetto <me at antoniofrighetto.com>
Date: Thu, 13 Mar 2025 16:26:34 +0100
Subject: [PATCH 2/2] [RegAllocFast] Ensure live-in vregs get reloaded after INLINEASM_BR spills
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
We have already ensured in 9cec2b246e719533723562950e56c292fe5dd5ad
that `INLINEASM_BR` output operands get spilled onto the stack, both
on the fallthrough path and in the indirect targets. Since reloads of
live-in values into physical registers are emitted only after all MIR
instructions (and their operands) have been visited, make sure such
reloads are placed at the start of the block, but after prologues and
`INLINEASM_BR` spills, as otherwise stale values may be read from the
stack.
Fixes: #74483, #110251.
---
llvm/lib/CodeGen/RegAllocFast.cpp | 22 +++++++++++++++++--
...locfast-callbr-asm-spills-after-reload.mir | 4 ++--
2 files changed, 22 insertions(+), 4 deletions(-)
diff --git a/llvm/lib/CodeGen/RegAllocFast.cpp b/llvm/lib/CodeGen/RegAllocFast.cpp
index fb960711d4ae0..7ba362d82f26e 100644
--- a/llvm/lib/CodeGen/RegAllocFast.cpp
+++ b/llvm/lib/CodeGen/RegAllocFast.cpp
@@ -391,6 +391,8 @@ class RegAllocFastImpl {
bool mayLiveOut(Register VirtReg);
bool mayLiveIn(Register VirtReg);
+ bool isInlineAsmBrSpill(const MachineInstr &MI) const;
+
void dumpState() const;
};
@@ -491,6 +493,20 @@ static bool dominates(InstrPosIndexes &PosIndexes, const MachineInstr &A,
return IndexA < IndexB;
}
+bool RegAllocFastImpl::isInlineAsmBrSpill(const MachineInstr &MI) const {
+ int FI;
+ auto *MBB = MI.getParent();
+ if (MBB->isInlineAsmBrIndirectTarget() && TII->isStoreToStackSlot(MI, FI) &&
+ MFI->isSpillSlotObjectIndex(FI)) {
+ for (const auto &Op : MI.operands())
+ if (Op.isReg() && any_of(MBB->liveins(), [&](const auto &RegP) {
+ return Op.getReg() == RegP.PhysReg;
+ }))
+ return true;
+ }
+ return false;
+}
+
/// Returns false if \p VirtReg is known to not live out of the current block.
bool RegAllocFastImpl::mayLiveOut(Register VirtReg) {
if (MayLiveAcrossBlocks.test(VirtReg.virtRegIndex())) {
@@ -648,8 +664,8 @@ MachineBasicBlock::iterator RegAllocFastImpl::getMBBBeginInsertionPoint(
continue;
}
- // Most reloads should be inserted after prolog instructions.
- if (!TII->isBasicBlockPrologue(*I))
+ // Skip prologues and inlineasm_br spills to place reloads afterwards.
+ if (!TII->isBasicBlockPrologue(*I) && !isInlineAsmBrSpill(*I))
break;
// However if a prolog instruction reads a register that needs to be
@@ -736,6 +752,8 @@ bool RegAllocFastImpl::displacePhysReg(MachineInstr &MI, MCRegister PhysReg) {
assert(LRI != LiveVirtRegs.end() && "datastructures in sync");
MachineBasicBlock::iterator ReloadBefore =
std::next((MachineBasicBlock::iterator)MI.getIterator());
+ while (isInlineAsmBrSpill(*ReloadBefore))
+ ++ReloadBefore;
reload(ReloadBefore, VirtReg, LRI->PhysReg);
setPhysRegState(LRI->PhysReg, regFree);
diff --git a/llvm/test/CodeGen/X86/regallocfast-callbr-asm-spills-after-reload.mir b/llvm/test/CodeGen/X86/regallocfast-callbr-asm-spills-after-reload.mir
index ffc4f8510c8b4..41aca5524b590 100644
--- a/llvm/test/CodeGen/X86/regallocfast-callbr-asm-spills-after-reload.mir
+++ b/llvm/test/CodeGen/X86/regallocfast-callbr-asm-spills-after-reload.mir
@@ -56,8 +56,8 @@ body: |
; FIXME: This is a miscompilation, as, despite spilling the value modified by the inlineasm_br,
; the reload emitted still reads from an uninitialized stack slot.
- ; CHECK: $ecx = MOV32rm %stack.2, 1, $noreg, 0, $noreg :: (load (s32) from %stack.2)
- ; CHECK-NEXT: MOV32mr %stack.2, 1, $noreg, 0, $noreg, $eax :: (store (s32) into %stack.2)
+ ; CHECK: MOV32mr %stack.2, 1, $noreg, 0, $noreg, $eax :: (store (s32) into %stack.2)
+ ; CHECK-NEXT: $ecx = MOV32rm %stack.2, 1, $noreg, 0, $noreg :: (load (s32) from %stack.2)
; CHECK-NEXT: $rax = MOV64rm %stack.3, 1, $noreg, 0, $noreg :: (load (s64) from %stack.3)
; CHECK-NEXT: MOV32mr renamable $rax, 1, $noreg, 0, $noreg, killed renamable $ecx :: (store (s32))
; CHECK-NEXT: JMP_1 %bb.2