[llvm] [TTI] Introduce getInstructionUniformity API for flexible uniformity analysis (PR #137639)

via llvm-commits llvm-commits at lists.llvm.org
Mon Nov 17 02:27:07 PST 2025


ruiling wrote:

Thank you for working on this. Sorry, I only just noticed it. Let me share my thinking.
 
Currently, uniformity analysis works like this: it identifies divergent sources (through `isSourceOfDivergence`), then propagates the divergence property mostly along the data-flow chain (there are also sync dependencies). It assumes that the divergence of a source value always propagates to the destination value (unless the instruction is marked `isAlwaysUniform`). That is not always true; in some cases the propagation behavior is **the result is divergent only if a specific source operand is divergent, or follows some other, more complex rule**.

The immediate solution in my mind would be something like `bool shouldPropagateDivergence(Instruction *Inst, Value *divergentOp)`. During divergence propagation, we would query this hook before marking a user instruction divergent, and the hook would decide whether `divergentOp` makes the result value divergent. I think this should be enough for the cases we care about. We might need a more complex hook if we have to consider different uniformity among the result operands in MIR; in that case we could ask the hook to return the set of values that become divergent, e.g. `SmallVector<Register> propagateDivergence(MachineInstr *Inst, Register divergentOp)`. I am not sure whether we really need this.
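To make the idea concrete, here is a minimal sketch of what I have in mind. The hook name `shouldPropagateDivergence`, the `DivergencePropagationInfo` struct, and the `propagateFrom` helper are all hypothetical names for illustration, not existing LLVM APIs; the point is only where the query would sit relative to the propagation step.

```cpp
// Hypothetical sketch -- none of these names exist in LLVM today.
#include "llvm/ADT/DenseSet.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Value.h"

using namespace llvm;

// Target hook: decide whether divergence of `DivergentOp` propagates to the
// result of `Inst`. Defaulting to true preserves today's behavior, where any
// divergent operand makes the result divergent.
struct DivergencePropagationInfo {
  virtual ~DivergencePropagationInfo() = default;
  virtual bool shouldPropagateDivergence(const Instruction *Inst,
                                         const Value *DivergentOp) const {
    return true;
  }
};

// Sketch of one propagation step: before marking a user of a divergent value
// as divergent, ask the hook whether this particular operand actually makes
// the result divergent.
static void propagateFrom(const Value *DivergentVal,
                          const DivergencePropagationInfo &DPI,
                          DenseSet<const Instruction *> &Divergent) {
  for (const User *U : DivergentVal->users()) {
    const auto *UserInst = dyn_cast<Instruction>(U);
    if (!UserInst || Divergent.contains(UserInst))
      continue;
    if (DPI.shouldPropagateDivergence(UserInst, DivergentVal))
      Divergent.insert(UserInst); // would normally also go on the worklist
  }
}
```

A target could then override the hook for the few instructions where only certain operands carry divergence into the result, while everything else keeps the default propagation.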
 
It would be easy to switch `isAlwaysUniform` over to this new hook, but I don't think we need to do that in one step: we can add the new hook first, get it tested, and then deprecate `isAlwaysUniform`. `isSourceOfDivergence` acts as the starting point of divergence propagation; we can still keep it, since it does not seem like a burden.

Does this make sense?
 

https://github.com/llvm/llvm-project/pull/137639
