[llvm] dc170c7 - AMDGPU: Special case align requirement for AV_MOV_B64_IMM_PSEUDO

Matt Arsenault via llvm-commits llvm-commits at lists.llvm.org
Wed Sep 3 17:55:50 PDT 2025


Author: Matt Arsenault
Date: 2025-09-04T09:55:39+09:00
New Revision: dc170c7e315ee3f6a194ba81d044d7e8784b0221

URL: https://github.com/llvm/llvm-project/commit/dc170c7e315ee3f6a194ba81d044d7e8784b0221
DIFF: https://github.com/llvm/llvm-project/commit/dc170c7e315ee3f6a194ba81d044d7e8784b0221.diff

LOG: AMDGPU: Special case align requirement for AV_MOV_B64_IMM_PSEUDO

This instruction should not require aligned registers. Fixes an
expensive_checks test failure. I don't see a better way to handle this
until the new system for specifying alignment per register class is done.

Added: 
    

Modified: 
    llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
    llvm/test/CodeGen/AMDGPU/av_movimm_pseudo_expansion.mir

Removed: 
    


################################################################################
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
index 2b187c641da1c..946917f675318 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
@@ -5034,7 +5034,7 @@ bool SIInstrInfo::verifyInstruction(const MachineInstr &MI,
     // aligned register constraint.
     // FIXME: We do not verify inline asm operands, but custom inline asm
     // verification is broken anyway
-    if (ST.needsAlignedVGPRs()) {
+    if (ST.needsAlignedVGPRs() && Opcode != AMDGPU::AV_MOV_B64_IMM_PSEUDO) {
       const TargetRegisterClass *RC = RI.getRegClassForReg(MRI, Reg);
       if (RI.hasVectorRegisters(RC) && MO.getSubReg()) {
         if (const TargetRegisterClass *SubRC =
@@ -6013,7 +6013,11 @@ const TargetRegisterClass *SIInstrInfo::getRegClass(const MCInstrDesc &TID,
       IsAllocatable = VDstIdx != -1 || AMDGPU::hasNamedOperand(
                                            TID.Opcode, AMDGPU::OpName::data1);
     }
+  } else if (TID.getOpcode() == AMDGPU::AV_MOV_B64_IMM_PSEUDO) {
+    // Special pseudos have no alignment requirement
+    return RI.getRegClass(RegClass);
   }
+
   return adjustAllocatableRegClass(ST, RI, TID, RegClass, IsAllocatable);
 }
 

diff --git a/llvm/test/CodeGen/AMDGPU/av_movimm_pseudo_expansion.mir b/llvm/test/CodeGen/AMDGPU/av_movimm_pseudo_expansion.mir
index 272997cf1a347..c52347b680371 100644
--- a/llvm/test/CodeGen/AMDGPU/av_movimm_pseudo_expansion.mir
+++ b/llvm/test/CodeGen/AMDGPU/av_movimm_pseudo_expansion.mir
@@ -186,3 +186,25 @@ body: |
     $vgpr2_vgpr3 = AV_MOV_B64_IMM_PSEUDO 4477415320595726336, implicit $exec
     $vgpr4_vgpr5 = AV_MOV_B64_IMM_PSEUDO 4477415321638205827, implicit $exec
 ...
+
+---
+name: av_mov_b64_imm_pseudo_unaligned_agpr
+tracksRegLiveness: true
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: av_mov_b64_imm_pseudo_unaligned_agpr
+    ; CHECK: $agpr1 = V_ACCVGPR_WRITE_B32_e64 9, implicit $exec, implicit-def $agpr1_agpr2
+    ; CHECK-NEXT: $agpr2 = V_ACCVGPR_WRITE_B32_e64 -16, implicit $exec, implicit-def $agpr1_agpr2
+    $agpr1_agpr2 = AV_MOV_B64_IMM_PSEUDO 18446744004990074889, implicit $exec
+...
+
+---
+name: av_mov_b64_imm_pseudo_unaligned_vgpr
+tracksRegLiveness: true
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: av_mov_b64_imm_pseudo_unaligned_vgpr
+    ; CHECK: $vgpr1 = V_MOV_B32_e32 9, implicit $exec, implicit-def $vgpr1_vgpr2
+    ; CHECK-NEXT: $vgpr2 = V_MOV_B32_e32 -16, implicit $exec, implicit-def $vgpr1_vgpr2
+    $vgpr1_vgpr2 = AV_MOV_B64_IMM_PSEUDO 18446744004990074889, implicit $exec
+...
More information about the llvm-commits mailing list