[llvm-dev] nnan, ninf, and poison
Kaylor, Andrew via llvm-dev
llvm-dev at lists.llvm.org
Thu Sep 24 16:19:47 PDT 2020
Hi everyone,
I'd like to bring up a decision that was made a couple of years ago for reconsideration. In this review (https://reviews.llvm.org/D47963), it was decided that if an instruction with the nnan or ninf flag set has an argument or a result that is a NaN or +/-inf, respectively, it produces a poison value. I came across something today that convinced me that this was the wrong decision.
Here's a reduced version of my test case:
define double @f(double %x) {
entry:
%t1 = fadd fast double %x, 0xFFE5555555555555
%t2 = fadd fast double %x, 0xFFE5555555555555
%result = fadd fast double %t1, %t2
ret double %result
}
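(In case anyone wants to reproduce this, running the reduced case through InstCombine by itself is enough; "reduced.ll" below is just a placeholder name for a file containing the function above.)
opt -S -instcombine reduced.ll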
To my surprise, InstCombine turns that into this:
define double @f(double %x) {
ret double undef
}
What happened? InstCombine made a series of optimizations that are each fully allowed by the fast-math flags, and it ended up with an intermediate value that was -inf. Because I used a setting which promised my program would have no infinite inputs or outputs, InstCombine declared the result of the entire computation to be poison. Here is the series of unfortunate events that led to this outcome:
%t1 = fadd fast double %x, 0xFFE5555555555555 ; t1 = x + -1.1984620899082105e+308
%t2 = fadd fast double %x, 0xFFE5555555555555 ; t2 = x + -1.1984620899082105e+308
%result = fadd fast double %t1, %t2 ; result = t1 + t2
->
%t1 = fadd fast double %x, 0xFFE5555555555555 ; t1 = x + -1.1984620899082105e+308
%result = fadd fast double %t1, %t1 ; result = t1 + t1
->
%t1 = fadd fast double %x, 0xFFE5555555555555 ; t1 = x + -1.1984620899082105e+308
%result = fmul fast double %t1, 2.000000e+00 ; result = t1 * 2.0
->
%t1 = fmul fast double %x, 2.000000e+00 ; t1 = x * 2.0
%result = fadd fast double %t1, 0xFFF0000000000000 ; result = t1 + (-inf)
->
undef ; Poison!
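To spell out where the infinity comes from (this arithmetic is my own gloss, for anyone following along): the reassociation step constant-folds 2.0 * -1.1984620899082105e+308, and the exact product (about -2.397e+308) is larger in magnitude than the largest finite double (about 1.7976931348623157e+308), so it rounds to -inf. A standalone sketch of just that fold:
define double @fold_only() {
  ; 2.0 * -1.1984620899082105e+308 exceeds the finite range of a double,
  ; so constant folding yields -inf (0xFFF0000000000000).
  %p = fmul fast double 2.000000e+00, 0xFFE5555555555555
  ret double %p
}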
I fully understand that by using fast-math flags I am giving the compiler permission to make optimizations that can change the numeric results and that in the worst case, particularly in the presence of extreme values, this can result in complete loss of accuracy. However, I do not think this should result in the complete elimination of my computation (which would make it very hard to debug).
The thing that particularly bothers me about this example is that from a user's perspective I would understand the ninf flag to mean that my data doesn't have any infinite values and my algorithm will not produce them. I wouldn't expect to be responsible for infinite values that are the result of things that the compiler did. I expect this option to allow transformations like 'x * 0 -> 0'. I don't expect it to allow '2 * (x + VERY_LARGE_FINITE_VALUE) -> poison'. The clang documentation says -ffinite-math-only will "Allow floating-point optimizations that assume arguments and results are not NaNs or +-Inf." It seems like a significant leap from this to saying that NaN and Inf values may be treated as having introduced undefined behavior.
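For contrast, here is a sketch (mine, not taken from any existing test) of the kind of rewrite I do expect these flags to license; in practice nnan and nsz are needed alongside ninf to cover NaN inputs and the sign of zero:
define double @mul_by_zero(double %x) {
  ; With nnan (no NaNs involved) and nsz (the sign of the zero result does
  ; not matter), x * 0.0 can simply be folded, and this function can be
  ; simplified to return 0.0.
  %r = fmul nnan ninf nsz double %x, 0.000000e+00
  ret double %r
}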
I would prefer the definition that Eli originally proposed in D47963:
``ninf``
No Infs - Allow optimizations to assume the arguments and result are not
+/-Inf. Such optimizations are required to retain defined behavior over
+/-Inf, but the value of the result is unspecified. The unspecified result
may not be the same each time the expression is evaluated.
Somewhat tangentially, it seems like we should avoid transformations that introduce non-finite values. When I talked to Craig Topper about this issue, he sent me a patch which does this (at least in InstCombine).
Thoughts? Opinions?
Thanks,
Andy