[llvm] [AMDGPU][SIPreEmitPeephole] mustRetainExeczBranch: use BranchProbability and TargetSchedmodel (PR #109818)

Juan Manuel Martinez Caamaño via llvm-commits llvm-commits at lists.llvm.org
Wed Sep 25 01:26:29 PDT 2024


================
@@ -305,10 +310,53 @@ bool SIPreEmitPeephole::getBlockDestinations(
 }
 
 bool SIPreEmitPeephole::mustRetainExeczBranch(
-    const MachineBasicBlock &From, const MachineBasicBlock &To) const {
+    const MachineBasicBlock &Head, const MachineBasicBlock &From,
+    const MachineBasicBlock &To) const {
+
+  auto FromIt = find(Head.successors(), &From);
+  assert(FromIt != Head.succ_end());
+  BranchProbability ExecNZProb = Head.getSuccProbability(FromIt);
+
   unsigned NumInstr = 0;
-  const MachineFunction *MF = From.getParent();
 
+  unsigned long ExecNZBranchCost = 0;
+  unsigned long UnconditionalBranchCost = 0;
+  unsigned long N = 0;
+  unsigned long D = 0;
+  unsigned long ThenCyclesCost = 0;
+
+  std::function<bool(const MachineInstr &)> IsProfitable =
+      [&](const MachineInstr &MI) {
+        ++NumInstr;
+        if (NumInstr >= SkipThreshold)
+          return false;
+        // These instructions are potentially expensive even if EXEC = 0.
+        if (TII->isSMRD(MI) || TII->isVMEM(MI) || TII->isFLAT(MI) ||
+            TII->isDS(MI) || TII->isWaitcnt(MI.getOpcode()))
+          return false;
+        return true;
+      };
+
+  auto &SchedModel = TII->getSchedModel();
+  if (SchedModel.hasInstrSchedModel() && !ExecNZProb.isUnknown()) {
+    ExecNZBranchCost = SchedModel.computeInstrLatency(AMDGPU::S_CBRANCH_EXECZ);
+    UnconditionalBranchCost = SchedModel.computeInstrLatency(AMDGPU::S_BRANCH);
+    N = ExecNZProb.getNumerator();
+    D = ExecNZProb.getDenominator();
+
+    IsProfitable = [&](const MachineInstr &MI) {
+      ThenCyclesCost += SchedModel.computeInstrLatency(&MI, false);
----------------
jmmartinez wrote:

The last mention of "Itinerary" is in a 2020 commit (it first appeared in `WritingAnLLVMBackend.rst` in 2013). "SchedModel" has commits from 2023/2024 (and it appeared in `WritingAnLLVMBackend.rst` in 2017).

I guess the SchedModel is the one we should be using.

https://github.com/llvm/llvm-project/pull/109818


More information about the llvm-commits mailing list