[llvm-branch-commits] [llvm] [InlineSpiller][AMDGPU] Implement subreg reload during RA spill (PR #175002)
Diana Picus via llvm-branch-commits
llvm-branch-commits at lists.llvm.org
Fri Jan 9 04:15:42 PST 2026
================
@@ -1248,18 +1249,62 @@ void InlineSpiller::spillAroundUses(Register Reg) {
// Create a new virtual register for spill/fill.
// FIXME: Infer regclass from instruction alone.
- Register NewVReg = Edit->createFrom(Reg);
+
+ unsigned SubReg = 0;
+ LaneBitmask CoveringLanes = LaneBitmask::getNone();
+ // If subreg liveness is enabled, identify the subreg use(s) to try a
+ // subreg reload. Skip if the instruction also defines the register.
+ // For copy bundles, get the covering lane masks.
+ if (MRI.subRegLivenessEnabled() && !RI.Writes) {
+ for (auto [MI, OpIdx] : Ops) {
+ const MachineOperand &MO = MI->getOperand(OpIdx);
+ assert(MO.isReg() && MO.getReg() == Reg);
+ if (MO.isUse()) {
+ SubReg = MO.getSubReg();
+ if (SubReg)
+ CoveringLanes |= TRI.getSubRegIndexLaneMask(SubReg);
+ }
+ }
+ }
+
+ if (MI.isBundled() && CoveringLanes.any()) {
+ CoveringLanes = LaneBitmask(bit_ceil(CoveringLanes.getAsInteger()) - 1);
+ // Obtain the covering subregister index, including any missing indices
+ // within the identified small range. Although this may be suboptimal due
+ // to gaps in the subregisters that are not part of the copy bundle, it is
+ beneficial when components outside this range of the original tuple can
+ // be completely skipped from the reload.
+ SubReg = TRI.getSubRegIdxFromLaneMask(CoveringLanes);
+ }
+
+ // If the target doesn't support subreg reload, fall back to restoring the
+ // full tuple.
+ if (SubReg && !TRI.shouldEnableSubRegReload(SubReg))
----------------
rovka wrote:
Check this before going through all the trouble of computing SubReg.
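As an aside on the covering-lanes computation in the quoted hunk: below is a minimal standalone sketch of the same rounding, assuming plain C++20 (std::bit_ceil in place of LLVM's bit_ceil, uint64_t standing in for LaneBitmask, and made-up lane values purely for illustration):

#include <bit>
#include <cstdint>
#include <cstdio>

// Round a sparse lane mask up to a contiguous range starting at lane 0,
// mirroring the hunk's LaneBitmask(bit_ceil(CoveringLanes.getAsInteger()) - 1).
static uint64_t coverLanes(uint64_t SparseMask) {
  return std::bit_ceil(SparseMask) - 1;
}

int main() {
  // Lanes 0 and 2 used by the copy bundle: mask 0b0101.
  // bit_ceil(0b0101) == 0b1000; subtracting 1 yields 0b0111, i.e. lanes
  // 0..2, filling the gap at lane 1 but still excluding lane 3 and above.
  std::printf("%#llx\n", (unsigned long long)coverLanes(0b0101)); // prints 0x7
  return 0;
}

Rounding up to the next power of two minus one trades precision for a contiguous range, which presumably makes it easier for TRI.getSubRegIdxFromLaneMask to find a single covering subregister index, at the cost of also reloading any unused lanes inside that range.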
https://github.com/llvm/llvm-project/pull/175002
More information about the llvm-branch-commits mailing list