[llvm] [SDAG][NVPTX] Add TLI check for preferring custom FP_TO_SINT operations to FP_TO_UINT (PR #132470)

Artem Belevich via llvm-commits llvm-commits at lists.llvm.org
Mon Mar 24 15:06:01 PDT 2025


================
@@ -0,0 +1,134 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
----------------
Artem-B wrote:

Does it matter in practice? 

The change affects only conversions to `u8`, and since we always actually convert to a 16-bit integer, for the valid input range `[0f .. 255f]` the low 8 bits of the result will be correct regardless of whether we used a signed or unsigned conversion. For inputs outside the valid range, LLVM IR considers the result to be `poison`.

If the goal is to have the upper bits of the result always be 0, even for out-of-range inputs, then this patch is insufficient. It will still fill in the upper bits for some out-of-range input values, because we're converting to a 16-bit int. E.g. converting `65535.0` will give us `0xffff`. If all-zero MSBs are indeed the goal here, we'd need to explicitly mask off all but the low 8 bits after conversion instead. Considering that this code has been in place in its current shape for a very long time, I do not think this is necessary in practice.

Perhaps I'm still missing something. Can you elaborate on what motivates this change and what exactly the issue is that it is intended to solve? Simply using a PTX instruction with a better-matching name but no effect on valid inputs is not worth plumbing a special case into generic LLVM code, IMO.


https://github.com/llvm/llvm-project/pull/132470

