[llvm] MTM: improve operand latency when missing sched info (PR #101389)
Ramkumar Ramachandra via llvm-commits
llvm-commits at lists.llvm.org
Tue Aug 20 07:22:52 PDT 2024
================
@@ -761,6 +762,64 @@ static void updatePhysDepsDownwards(const MachineInstr *UseMI,
}
}
+/// Estimates the number of cycles elapsed between DefMI and UseMI if they're
+/// non-null and in the same BasicBlock. Returns std::nullopt when UseMI is in a
+/// different MBB than DefMI.
+static std::optional<unsigned>
+estimateDefUseCycles(const TargetSchedModel &Sched, const MachineInstr *DefMI,
+                     const MachineInstr *UseMI) {
+  if (!DefMI || !UseMI || DefMI == UseMI)
+    return 0;
+  const MachineBasicBlock *ParentBB = DefMI->getParent();
+  if (ParentBB != UseMI->getParent())
+    return std::nullopt;
+
+  const auto DefIt =
+      llvm::find_if(ParentBB->instrs(),
+                    [DefMI](const MachineInstr &MI) { return DefMI == &MI; });
+  const auto UseIt =
+      llvm::find_if(ParentBB->instrs(),
+                    [UseMI](const MachineInstr &MI) { return UseMI == &MI; });
+
+  unsigned NumMicroOps = 0;
+  for (auto It = DefIt; It != UseIt; ++It) {
+    // In cases where the UseMI is a PHI at the beginning of the MBB, compute
+    // MicroOps until the end of the MBB.
+    if (It.isEnd())
+      break;
+
+    NumMicroOps += Sched.getNumMicroOps(&*It);
+  }
+  return NumMicroOps / Sched.getIssueWidth();
+}
+
+/// Wraps Sched.computeOperandLatency, accounting for the case when
+/// InstrSchedModel and InstrItineraries are not available: in this case,
+/// Sched.computeOperandLatency returns DefaultDefLatency, which is a very
+/// rough approximation; to improve on it, offset it by the estimated number
+/// of cycles elapsed from DefMI to UseMI (this cannot be known exactly, since
+/// the scheduler may re-order the MIs and we don't have that information).
+/// When scheduling information is available,
----------------
artagnon wrote:
Okay, this is a subtle point, and I don't have a good answer to it. The scheduler can also re-order instructions, depending on the scheduling information (which also tells us whether the core is in-order or OOO). I only have some practical answers, and would be happy if someone has better ones:
1. Do we even know whether the core is in-order or OOO without the scheduling model?
2. I did the benchmarking on OOO cores, and we got positive test changes and speedups.
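
On question 1, for reference: the usual proxy for in-order vs. OOO is MCSchedModel::MicroOpBufferSize, and MCSchedModel::isOutOfOrder() is just a check that this buffer size is greater than 1; the field only carries a meaningful value when the target actually provides a scheduling model. A rough sketch of the query (isKnownOutOfOrder is a hypothetical helper, not an existing API, and not part of this patch):

    #include "llvm/CodeGen/TargetSchedModel.h"
    #include "llvm/MC/MCSchedule.h"

    // Answers "is this core known to be out-of-order?". Without a populated
    // scheduling model we cannot tell, so conservatively answer false.
    static bool isKnownOutOfOrder(const llvm::TargetSchedModel &Sched) {
      if (!Sched.hasInstrSchedModel())
        return false; // No per-instruction model: MicroOpBufferSize is a default.
      // isOutOfOrder() is defined as MicroOpBufferSize > 1.
      return Sched.getMCSchedModel()->isOutOfOrder();
    }

So without a scheduling model the best we can do is a conservative assumption, which is essentially what Sched.computeOperandLatency already does by returning DefaultDefLatency.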
https://github.com/llvm/llvm-project/pull/101389
More information about the llvm-commits
mailing list