[PATCH] D35267: Pass Divergence Analysis data to selection DAG to drive divergence dependent instruction selection

Alexander via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon Jan 29 05:11:02 PST 2018


alex-t added a comment.

One more item that should be discussed is the target-specific exceptions to the common divergence modeling algorithm.
For instance, the AMDGPU target has the amdgcn.readfirstlane/readlane intrinsics, which take a vector register and return the value of the first (or a specified) lane.
So both accept a naturally divergent VGPR but return a scalar value.
Following the common divergence computation rule ("the divergence of an operation's result is the superposition of its operands' divergence"), we would mark

  %scalar = tail call i32 @llvm.amdgcn.readfirstlane(i32 %tid)

as divergent, which is not true.
In the IR form of divergence-driven selection we rely on the TargetTransformInfo::isAlwaysUniform hook, which was added to that interface for this purpose.
It allows a target to declare an arbitrary set of target operations as "always uniform", so that the analysis ignores the divergence of their operands.

To meet this design we would have to add a similar hook to the TargetLoweringInfo interface. Is this feasible?


https://reviews.llvm.org/D35267
