[llvm] [MachinePipeliner] Add an abstract layer to manipulate Data Dependenc… (PR #109918)
Ryotaro Kasuga via llvm-commits
llvm-commits at lists.llvm.org
Wed Feb 5 00:20:40 PST 2025
================
@@ -114,10 +115,123 @@ class MachinePipeliner : public MachineFunctionPass {
bool useWindowScheduler(bool Changed);
};
+/// Represents a dependence between two instructions.
+class SwingSchedulerDDGEdge {
+ SUnit *Dst = nullptr;
+ SDep Pred;
+ unsigned Distance = 0;
+
+public:
+ /// Creates an edge corresponding to an edge represented by \p PredOrSucc and
+ /// \p Dep in the original DAG. This pair has no information about the
+ /// direction of the edge, so we need to pass an additional argument \p
+ /// IsSucc.
+ SwingSchedulerDDGEdge(SUnit *PredOrSucc, const SDep &Dep, bool IsSucc)
+ : Dst(PredOrSucc), Pred(Dep), Distance(0u) {
+ SUnit *Src = Dep.getSUnit();
+
+ if (IsSucc) {
+ std::swap(Src, Dst);
+ Pred.setSUnit(Src);
+ }
+
+ // An anti-dependence to PHI means loop-carried dependence.
+ if (Pred.getKind() == SDep::Anti && Src->getInstr()->isPHI()) {
+ Distance = 1;
+ std::swap(Src, Dst);
+ auto Reg = Pred.getReg();
+ Pred = SDep(Src, SDep::Kind::Data, Reg);
----------------
kasuga-fj wrote:
The anti dependence from `some_instr` to `phi` becomes the data dependence from `phi` to `some_instr`, as I intended. And now, the vector returned by something like `DDG->getOutEdges(phi)` should contain that edge. That is, in your case, `some_instr` should be appended to `Succs` in the for loop at line 1921.
https://github.com/kasuga-fj/llvm-project/blob/f6f6c72322bfef93e21e3b043b4c4fc4f8a13fba/llvm/lib/CodeGen/MachinePipeliner.cpp#L1921
If your change (adding the distance check) fixes the problem, I suspect that the `DDG` is not constructed as I intended. I think there is another issue that we should really address.
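To make sure we're talking about the same thing, roughly this is what I expect to happen at that loop (just a sketch, not the actual code; `getDst()` and the exact loop shape are my assumptions):
```c++
// Sketch only: the rewritten edge phi -> some_instr (Distance == 1) should
// show up among the out edges of the PHI, so some_instr gets collected as a
// successor here.
for (const auto &OutEdge : DDG->getOutEdges(Phi)) {
  SUnit *Succ = OutEdge.getDst(); // getDst() is assumed, not verified
  Succs.insert(Succ);
}
```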
> And, pred_L. Probably other cases too.
Yes, I agree with this.
> I think the check for isAntiDep() can be replaced by getDistance(). That is, with the check for getDistance(), isAntiDep() isn't needed anymore.
I don't think so. Before this patch was merged, `pred_L` was defined as follows:
```c++
static bool pred_L(SetVector<SUnit *> &NodeOrder,
                   SmallSetVector<SUnit *, 8> &Preds,
                   const NodeSet *S = nullptr) {
  Preds.clear();
  for (const SUnit *SU : NodeOrder) {
    for (const SDep &Pred : SU->Preds) {
      if (S && S->count(Pred.getSUnit()) == 0)
        continue;
      if (ignoreDependence(Pred, true))
        continue;
      if (NodeOrder.count(Pred.getSUnit()) == 0)
        Preds.insert(Pred.getSUnit());
    }
    // Back-edges are predecessors with an anti-dependence.
    for (const SDep &Succ : SU->Succs) {
      if (Succ.getKind() != SDep::Anti)
        continue;
      if (S && S->count(Succ.getSUnit()) == 0)
        continue;
      if (NodeOrder.count(Succ.getSUnit()) == 0)
        Preds.insert(Succ.getSUnit());
    }
  }
  return !Preds.empty();
}
```
The inline comment says that the second loop handles back-edges, but the loop body actually only checks whether the edge is an anti dependence. That is, in `pred_L`, forward anti-dependence edges (I think many of them are WAR dependencies on some physical register) are also treated as predecessors. The same goes for `succ_L`. I suspect this is a mishandling of back-edges, but to keep the original behavior, the anti-dependence check is still needed (see the sketch below for what I mean).
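Roughly, this is what I have in mind for the second loop if it is rewritten on top of the DDG (again just a sketch; `getDst()` and the exact shape are assumptions on my part):
```c++
// Sketch only: both checks are still needed, because a forward WAR anti edge
// on a physical register has getDistance() == 0 and would be dropped by a
// distance-only check, changing the original behavior.
for (const auto &OE : DDG->getOutEdges(SU)) {
  bool IsLoopCarried = OE.getDistance() > 0; // rewritten back-edge
  bool IsForwardAnti = OE.isAntiDep();       // e.g. WAR on a physical register
  if (!IsLoopCarried && !IsForwardAnti)
    continue;
  if (NodeOrder.count(OE.getDst()) == 0)
    Preds.insert(OE.getDst());
}
```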
Anyway, I'd also like to see some more details about this case.
https://github.com/llvm/llvm-project/pull/109918