[PATCH] D146823: [GVN] Avoid replacing uniforms with non-uniforms in propagateEquality
Sameer Sahasrabuddhe via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Mon Mar 27 04:11:40 PDT 2023
sameerds added a comment.
In D146823#4223421 <https://reviews.llvm.org/D146823#4223421>, @Pierre-vh wrote:
> In D146823#4223352 <https://reviews.llvm.org/D146823#4223352>, @foad wrote:
>
>> If we're going to do this, I think we ought to do it properly, by querying UniformityAnalysis. TTI->isAlwaysUniform will only catch a small fraction of the values that are provably uniform.
>
> I agree, but most targets don't care about uniformity, so I don't think we can make GVN depend on UA unless we find a way to avoid running the analysis for those targets. No?
> Does UA bail out early for non-GPU targets? (It'd be nice if it did that and just returned an empty UniformityInfo instance for those targets; we could then use UA in more places in the middle-end without worrying about performance.)
When UA or DA initialize, they query TTI for sources of divergence. On a target with no divergence they won't find any, so the analysis is trivial anyway; the only cost is one O(N) pass over the entire function. We could in principle eliminate even that by first querying TTI.hasBranchDivergence(), but there have been philosophical objections to taking that approach in the past. In the longer term, we would like to move to an implementation where sources of divergence are inferred from IR semantics rather than from TTI.
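To make the direction concrete, here is a minimal sketch (not the code in this patch) of what a UniformityInfo-based check in GVN could look like. The helper name isSafeReplacement and its shape are hypothetical; the patch under review only consults TTI->isAlwaysUniform:

  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/Analysis/UniformityAnalysis.h"

  using namespace llvm;

  // Hypothetical helper: on a divergent target, refuse to replace a
  // uniform value with a potentially divergent one, since that can
  // push values out of scalar registers into vector registers.
  static bool isSafeReplacement(const Value *From, const Value *To,
                                const TargetTransformInfo &TTI,
                                const UniformityInfo &UI) {
    // On targets without branch divergence every value is effectively
    // uniform, so any replacement is safe. This check is also where the
    // cost of running UA could be avoided entirely on such targets.
    if (!TTI.hasBranchDivergence())
      return true;
    // The only case to reject is replacing a uniform value with a
    // divergent one; everything else preserves uniformity.
    return !(UI.isUniform(From) && UI.isDivergent(To));
  }

This also illustrates the trade-off discussed above: gating on TTI.hasBranchDivergence() sidesteps the O(N) initialization pass on non-GPU targets, at the cost of baking a TTI query into the transform rather than inferring divergence from IR semantics.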
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D146823/new/
https://reviews.llvm.org/D146823