[llvm] [X86] Speed up checking clobbered FP/BP (PR #102365)

via llvm-commits llvm-commits at lists.llvm.org
Wed Aug 7 13:37:25 PDT 2024


llvmbot wrote:



@llvm/pr-subscribers-backend-x86

Author: None (weiguozhi)

<details>
<summary>Changes</summary>

Most functions don't clobber the frame pointer (FP) or the base pointer (BP); when they do, it is usually because of inline asm or a function call. So we can record at call-lowering time whether a call clobbers FP/BP, and later check the recorded information in `spillFPBP` and return early.
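
For reference, the clobber query used in the patch, `MachineOperand::clobbersPhysReg`, only tests one bit in the call's preserved-register mask, so recording the flags at lowering time is cheap. A minimal sketch of that regmask convention (the helper name here is hypothetical; it stands in for the real LLVM API): a set bit means the register is preserved across the call, a clear bit means it is clobbered.

```c++
#include <cstdint>

// Sketch of the regmask convention behind MachineOperand::clobbersPhysReg:
// the mask has one bit per physical register; a set bit means "preserved
// across the call", so a clear bit means the call clobbers that register.
static bool regIsClobberedByMask(const uint32_t *RegMask, unsigned PhysReg) {
  return !(RegMask[PhysReg / 32] & (1u << (PhysReg % 32)));
}
```

Because the mask is fixed by the callee's calling convention, two such bit tests per call at lowering time are enough to set the flags, and `spillFPBP` no longer has to scan the function when neither flag is set and there is no inline asm.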

---
Full diff: https://github.com/llvm/llvm-project/pull/102365.diff


3 Files Affected:

- (modified) llvm/lib/Target/X86/X86FrameLowering.cpp (+10) 
- (modified) llvm/lib/Target/X86/X86ISelLoweringCall.cpp (+5) 
- (modified) llvm/lib/Target/X86/X86MachineFunctionInfo.h (+10) 


``````````diff
diff --git a/llvm/lib/Target/X86/X86FrameLowering.cpp b/llvm/lib/Target/X86/X86FrameLowering.cpp
index 77dac1197f85e9..8404f2231680d6 100644
--- a/llvm/lib/Target/X86/X86FrameLowering.cpp
+++ b/llvm/lib/Target/X86/X86FrameLowering.cpp
@@ -4458,6 +4458,16 @@ void X86FrameLowering::spillFPBP(MachineFunction &MF) const {
     FP = TRI->getFrameRegister(MF);
   if (TRI->hasBasePointer(MF))
     BP = TRI->getBaseRegister();
+
+  // Currently only inline asm and function calls can clobber FP/BP. So we can
+  // do a quick check and return early.
+  if (!MF.hasInlineAsm()) {
+    X86MachineFunctionInfo *X86FI = MF.getInfo<X86MachineFunctionInfo>();
+    if (!X86FI->getFPClobberedByCall())
+      FP = 0;
+    if (!X86FI->getBPClobberedByCall())
+      BP = 0;
+  }
   if (!FP && !BP)
     return;
 
diff --git a/llvm/lib/Target/X86/X86ISelLoweringCall.cpp b/llvm/lib/Target/X86/X86ISelLoweringCall.cpp
index f659c168b86e0e..1e609a84673a3c 100644
--- a/llvm/lib/Target/X86/X86ISelLoweringCall.cpp
+++ b/llvm/lib/Target/X86/X86ISelLoweringCall.cpp
@@ -2450,6 +2450,11 @@ X86TargetLowering::LowerCall(TargetLowering::CallLoweringInfo &CLI,
   }();
   assert(Mask && "Missing call preserved mask for calling convention");
 
+  if (MachineOperand::clobbersPhysReg(Mask, RegInfo->getFrameRegister(MF)))
+    X86Info->setFPClobberedByCall(true);
+  if (MachineOperand::clobbersPhysReg(Mask, RegInfo->getBaseRegister()))
+    X86Info->setBPClobberedByCall(true);
+
   // If this is an invoke in a 32-bit function using a funclet-based
   // personality, assume the function clobbers all registers. If an exception
   // is thrown, the runtime will not restore CSRs.
diff --git a/llvm/lib/Target/X86/X86MachineFunctionInfo.h b/llvm/lib/Target/X86/X86MachineFunctionInfo.h
index 315aeef65d28c8..13d57c2fa9dfbc 100644
--- a/llvm/lib/Target/X86/X86MachineFunctionInfo.h
+++ b/llvm/lib/Target/X86/X86MachineFunctionInfo.h
@@ -170,6 +170,10 @@ class X86MachineFunctionInfo : public MachineFunctionInfo {
   SmallVector<size_t, 0> PreallocatedStackSizes;
   SmallVector<SmallVector<size_t, 4>, 0> PreallocatedArgOffsets;
 
+  // True if a call in this function clobbers FP/BP according to its calling
+  bool FPClobberedByCall = false;
+  bool BPClobberedByCall = false;
+
 private:
   /// ForwardedMustTailRegParms - A list of virtual and physical registers
   /// that must be forwarded to every musttail call.
@@ -328,6 +332,12 @@ class X86MachineFunctionInfo : public MachineFunctionInfo {
     assert(!PreallocatedArgOffsets[Id].empty() && "arg offsets not set");
     return PreallocatedArgOffsets[Id];
   }
+
+  bool getFPClobberedByCall() const { return FPClobberedByCall; }
+  void setFPClobberedByCall(bool C) { FPClobberedByCall = C; }
+
+  bool getBPClobberedByCall() const { return BPClobberedByCall; }
+  void setBPClobberedByCall(bool C) { BPClobberedByCall = C; }
 };
 
 } // End llvm namespace

``````````

</details>


https://github.com/llvm/llvm-project/pull/102365


More information about the llvm-commits mailing list