[PATCH] D106859: [Sink] allow sinking convergent operations across uniform branches
Sameer Sahasrabuddhe via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Sun Aug 1 04:14:46 PDT 2021
sameerds added inline comments.
================
Comment at: llvm/lib/Transforms/Scalar/Sink.cpp:57-62
  if (auto *Call = dyn_cast<CallBase>(Inst)) {
-    // Convergent operations cannot be made control-dependent on additional
-    // values.
-    if (Call->isConvergent())
-      return false;
+    // Convergent operations cannot be sunk across divergent branches.
+    if (Call->isConvergent()) {
+      if (DI.hasDivergence())
+        return false;
+    }
----------------
arsenm wrote:
> This is reinterpreting the IR semantics based on target information. There is no defined link between divergence and convergent, and I'm not sure there should even be one. There are multiple target-defined concepts of uniformity, and not all of them matter for convergent on AMDGPU. I think any kind of transform that wants to make this kind of assumption needs to be a specific divergence-aware control flow optimizer.
There is nothing target-specific about this change; the optimization treats the convergent operation exactly as it should. And yes, there is a link between divergence and convergent operations. The current "definition" of convergent operations in the LangRef does not amount to much, and the actual implementation clearly reflects the link with divergence, as the sketch below illustrates.
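To make that link concrete, here is a rough CUDA sketch (a hypothetical kernel, not taken from this patch; `__activemask()` stands in for any convergent operation whose result depends on the set of lanes executing it together):

```
// Hypothetical CUDA kernel, illustration only. __activemask() returns
// the set of lanes executing it together, so its value depends on
// where it executes -- the defining property the convergent attribute
// is meant to protect.
__global__ void divergent_case(unsigned *out) {
  int lane = threadIdx.x & 31;
  unsigned mask = __activemask(); // whole warp active: 0xffffffff
  if (lane < 16) {                // divergent: condition varies per lane
    // If the compiler sank the __activemask() call down to its only
    // use here, only lanes 0..15 would be active at the call and the
    // stored value would change to 0x0000ffff. Sinking across this
    // branch is illegal, which is exactly what the new
    // DI.hasDivergence() check conservatively rules out.
    out[threadIdx.x] = mask;
  }
}
```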
The link between convergent operations and divergence is being formalized here: https://reviews.llvm.org/D85603, and further made explicit here: https://reviews.llvm.org/D104504.
There may be multiple concepts of uniformity, but supporting them is an entirely separate enhancement. However many scopes threads can diverge at (wave, work-group, grid, whatever), convergent operations remain linked to the fact that threads diverge at all.
**Every control flow transform** in the optimizer should be aware of divergence. Divergence and convergent operations are two sides of the same coin, and the reviews linked above expose them everywhere in LLVM. There is no reason to sequester these concepts into a corner called a "divergence-aware optimizer"; no such separate thing needs to exist.
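For completeness, here is the uniform counterpart to the earlier sketch, the case this patch is meant to enable (again a hypothetical kernel):

```
// Hypothetical CUDA kernel, illustration only. Every lane reads the
// same address, so the branch condition is uniform across the warp.
__global__ void uniform_case(const int *flag, unsigned *out) {
  unsigned mask = __activemask(); // whole warp active: 0xffffffff
  if (*flag != 0) {               // uniform: same outcome for all lanes
    // Sinking the __activemask() call to this use is harmless: either
    // every lane takes the branch or none does, so the set of active
    // lanes at the call site does not change. This is the sinking the
    // patch allows when divergence analysis reports no divergence.
    out[threadIdx.x] = mask;
  }
}
```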
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D106859/new/
https://reviews.llvm.org/D106859