[llvm] [Draft] Support save/restore point splitting in shrink-wrap (PR #119359)

Elizaveta Noskova via llvm-commits llvm-commits at lists.llvm.org
Wed Nov 19 02:13:10 PST 2025


================
@@ -811,9 +975,86 @@ static bool giveUpWithRemarks(MachineOptimizationRemarkEmitter *ORE,
   return false;
 }
 
+void ShrinkWrap::setupSaveRestorePoints(MachineFunction &MF) {
+  for (unsigned Reg : getTargetCSRList(MF)) {
+    if (auto It = SavedRegs.find(Reg); It != SavedRegs.end()) {
+      auto [Save, Restore] = It->second;
+      if (Save && Restore)
+        continue;
+    }
+
+    SavePoints.insertReg(Reg, &MF.front(), SaveBlocks);
+    for (MachineBasicBlock &MBB : MF) {
+      if (MBB.isEHFuncletEntry())
+        SavePoints.insertReg(Reg, &MBB, SaveBlocks);
+      if (MBB.isReturnBlock())
+        RestorePoints.insertReg(Reg, &MBB, RestoreBlocks);
+    }
+  }
+
+  for (auto [Reg, SaveRestoreBlocks] : SavedRegs) {
+    auto [Save, Restore] = SaveRestoreBlocks;
+    if (Save && Restore) {
+      SavePoints.insertReg(Reg, Save, SaveBlocks);
+      if (!Restore->succ_empty() || Restore->isReturnBlock())
+        RestorePoints.insertReg(Reg, Restore, RestoreBlocks);
+      else
+        RestorePoints.insertReg(Reg, Restore, std::nullopt);
+    }
+  }
+}
+
+bool ShrinkWrap::canSplitSaveRestorePoints(
+    const ReversePostOrderTraversal<MachineBasicBlock *> &RPOT,
+    RegScavenger *RS) {
+  for (MachineBasicBlock *MBB : RPOT) {
+    if (MBB->isEHPad() || MBB->isInlineAsmBrIndirectTarget())
+      return false;
+
+    // Check if we found any stack accesses in the predecessors. We are not
+    // doing a full dataflow analysis here to keep things simple but just
+    // rely on a reverse postorder traversal (RPOT) to guarantee predecessors
+    // are already processed except for loops (and accept the conservative
+    // result for loops).
+    bool StackAddressUsed = any_of(MBB->predecessors(), [&](auto *Pred) {
+      return StackAddressUsedBlockInfo.test(Pred->getNumber());
+    });
+
+    for (const MachineInstr &MI : *MBB) {
+      if (useOrDefFI(MI, RS, StackAddressUsed))
+        return false;
+
+      if (useOrDefCSR(MI, RS, nullptr))
+        StackAddressUsed = true;
+    }
+
+    StackAddressUsedBlockInfo[MBB->getNumber()] = StackAddressUsed;
+  }
+  return true;
+}
+
+void ShrinkWrap::performSimpleShrinkWrap(RegScavenger *RS,
+                                         MachineBasicBlock &SavePoint) {
+  auto MF = SavePoint.getParent();
+  auto CSRs = getTargetCSRList(*MF);
----------------
enoskova-sc wrote:

@dongjianqiang2, my series of patches does not cause the original shrink wrapping to regress.

Currently, shrink wrap stops searching for a prologue location as soon as a basic block touches a CSR or a frame-index instruction is encountered (https://github.com/llvm/llvm-project/blob/915e9adbe5d1c577a21ac8b495b7c54c465460fd/llvm/lib/CodeGen/ShrinkWrap.cpp#L859).

My approach falls back to the current shrink-wrap algorithm in simple cases (https://github.com/llvm/llvm-project/blob/ea5e86fe0905c79c339f4bf9788a38429c935488/llvm/lib/CodeGen/ShrinkWrap.cpp#L1034). In more advanced cases (when splitting is enabled and possible), it tries to avoid performing all the saves and restores in the prologue/epilogue.

I agree that my approach is currently overconservative, but I plan to relax it in follow-up patches.

Please clarify your concerns if you still think that my approach makes the current shrink wrap worse.



https://github.com/llvm/llvm-project/pull/119359


More information about the llvm-commits mailing list