[llvm-dev] nnan, ninf, and poison
Sanjay Patel via llvm-dev
llvm-dev at lists.llvm.org
Fri Sep 25 06:26:56 PDT 2020
I agree that the root cause is the lack of a formal definition for
-ffast-math and friends. AFAIK, nobody has nailed that down:
1. gcc: "This mode enables optimizations that allow arbitrary
reassociations and transformations with no accuracy guarantees" [1]
2. icc: "fast[=1|2] Enables more aggressive optimizations on floating-point
data" [2]
3. xlc: "some IEEE floating-point rules are violated in ways that can
improve performance but might affect program correctness" [3]
All of these are effectively: "Use at your own risk. If you don't like the
result, then turn down the optimizations."
As suggested, we can hack the offending transforms to avoid producing
inf/nan at compile-time, but we can't prevent that at run-time. We already
have similar hacks for denorm constants.
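
To make that concrete, here is a sketch (mine, not an actual patch) of the
kind of fold such a hack would suppress, using the constants from Andy's
example below:

  ; candidate fold: (x + C) * 2.0 -> (x * 2.0) + (C * 2.0)
  %t = fadd fast double %x, 0xFFE5555555555555
  %r = fmul fast double %t, 2.000000e+00
  ; C * 2.0 overflows to -inf, so the guarded transform would leave the
  ; IR as-is rather than materialize an infinite constant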
[1] https://gcc.gnu.org/wiki/FloatingPointMath
[2] https://software.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/compiler-reference/compiler-options/compiler-option-details/floating-point-options/fp-model-fp.html#fp-model-fp
[3] https://www.ibm.com/support/knowledgecenter/en/SSGH3R_13.1.3/com.ibm.compilers.aix.doc/proguide.pdf
On Thu, Sep 24, 2020 at 11:57 PM Eli Friedman <efriedma at quicinc.com> wrote:
> Fundamentally, I think the problem is the way that "reassoc" is defined.
> Or rather, the fact that it doesn't really have a definition that can be
> formalized. We just vaguely promise to produce assembly where the
> computation roughly resembles the mathematical formula written in the
> source code. So really, we don't have any grounds to judge whether a given
> transform is legal or not; it's just based on the gut judgement of a few
> developers and whatever bugs people file. I wish we had some better way to
> deal with this, but I don't know of one.
>
> The advantage of the sort of definition discussed in D47963 is that it
> would restrict the damage caused by ninf/nnan transforms: it could eat
> floating-point computations, but not integer computations that depend on
> them, like float=>string conversions.
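>
> For example (my sketch, not taken from the review):
>
>   %d = fadd ninf double %x, %y        ; may still be +inf at run time
>   %bits = bitcast double %d to i64    ; integer use that depends on it
>
> Under the current poison semantics, %bits is poison as well; under the
> D47963 wording, %d is merely an unspecified double, so %bits is an
> ordinary i64.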
>
> Now that we have freeze instructions, we could use them instead: the
> definition of freeze is basically the same. Not sure if that's better,
> though; the extra instructions involved would add some overhead, and be a
> bit more complicated to reason about.
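>
> Concretely, under the freeze-based reading, the collapsed function from
> Andy's example below would come out roughly as:
>
>   define double @f(double %x) {
>     %v = freeze double undef   ; an arbitrary double; every use of %v
>     ret double %v              ; sees the same value
>   }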
>
> Either way, that doesn't help if the goal is to ensure floating-point
> computations don't "evaporate", i.e. that they still exist in the IR. If that's your
> goal, producing "freeze double undef" isn't really any better than
> producing "double undef". Probably you could accomplish this by hacking
> various transforms to refuse to produce inf/nan literals, so reassoc
> transforms can't make a whole function collapse due to ninf/nnan. Of
> course, that doesn't say anything about the value produced at runtime, so
> I'm not sure how helpful this is.
>
> -Eli
>
> -----Original Message-----
> From: llvm-dev <llvm-dev-bounces at lists.llvm.org> On Behalf Of Kaylor,
> Andrew via llvm-dev
> Sent: Thursday, September 24, 2020 4:20 PM
> To: llvm-dev at lists.llvm.org; Friedman, Eli <efriedma at codeaurora.org>;
> hfinkel at anl.gov; Sanjay Patel <spatel at rotateright.com>; scanon at apple.com;
> dag at hpe.com; Arsenault, Matthew <Matthew.Arsenault at amd.com>; Cameron
> McInally <cameron.mcinally at nyu.edu>; Kevin Neal <Kevin.Neal at sas.com>;
> Ulrich Weigand <Ulrich.Weigand at de.ibm.com>
> Subject: [EXT] [llvm-dev] nnan, ninf, and poison
>
> Hi everyone,
>
> I'd like to bring up for reconsideration a decision that was made a couple
> of years ago. In this review (https://reviews.llvm.org/D47963), it was
> decided that if an instruction with the nnan or ninf flag set has an
> argument or a result that is a NaN or +/-inf, respectively, it produces a
> poison value. I came across something today that convinced me that this was
> the wrong decision.
>
> Here's a reduced version of my test case:
>
> define double @f(double %x) {
> entry:
>   %t1 = fadd fast double %x, 0xFFE5555555555555
>   %t2 = fadd fast double %x, 0xFFE5555555555555
>   %result = fadd fast double %t1, %t2
>   ret double %result
> }
>
> To my surprise, InstCombine turns that into this:
>
> define double @f(double %x) {
>   ret double undef
> }
>
> What happened? InstCombine made a series of optimizations that are each
> fully allowed by the fast math flag and ended up with an intermediate value
> that was -inf, and because I used a setting which promised my program would
> have no infinite inputs or outputs, InstCombine declared the result of the
> entire computation to be poison. Here is the series of unfortunate events
> that led to this outcome:
>
>
> %t1 = fadd fast double %x, 0xFFE5555555555555      ; t1 = x + -1.1984620899082105e+308
> %t2 = fadd fast double %x, 0xFFE5555555555555      ; t2 = x + -1.1984620899082105e+308
> %result = fadd fast double %t1, %t2                ; result = t1 + t2
> ->
> %t1 = fadd fast double %x, 0xFFE5555555555555      ; t1 = x + -1.1984620899082105e+308
> %result = fadd fast double %t1, %t1                ; result = t1 + t1
> ->
> %t1 = fadd fast double %x, 0xFFE5555555555555      ; t1 = x + -1.1984620899082105e+308
> %result = fmul fast double %t1, 2.000000e+00       ; result = t1 * 2.0
> ->
> %t1 = fmul fast double %x, 2.000000e+00            ; t1 = x * 2.0
> %result = fadd fast double %t1, 0xFFF0000000000000 ; result = t1 + (-inf)
> ->
> undef                                              ; Poison!
>
>
> I fully understand that by using fast-math flags I am giving the compiler
> permission to make optimizations that can change the numeric results and
> that in the worst case, particularly in the presence of extreme values,
> this can result in complete loss of accuracy. However, I do not think this
> should result in the complete elimination of my computation (which would
> make it very hard to debug).
>
> The thing that particularly bothers me about this example is that from a
> user's perspective I would understand the ninf flag to mean that my data
> doesn't have any infinite values and my algorithm will not produce them. I
> wouldn't expect to be responsible for infinite values that are the result
> of things that the compiler did. I expect this option to allow
> transformations like 'x * 0 -> 0'. I don't expect it to allow '2 * (x +
> VERY_LARGE_FINITE_VALUE) -> poison'. The clang documentation says
> -ffinite-math-only will "Allow floating-point optimizations that assume
> arguments and results are not NaNs or +-Inf." It seems like a significant
> leap from this to saying that NaN and Inf values may be treated as having
> introduced undefined behavior.
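>
> In IR terms, the transform I expect these flags to enable looks like
> this (my sketch; this particular fold also needs nsz to pin down the
> sign of the zero):
>
>   %y = fmul nnan nsz double %x, 0.000000e+00
>   ; can fold to 0.0: nnan rules out a NaN result, and nsz forgives
>   ; the -0.0 that would otherwise be produced when %x is negative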
>
> I would prefer the definition that Eli originally proposed in D47963:
>
> ``ninf``
>   No Infs - Allow optimizations to assume the arguments and result are
>   not +/-Inf. Such optimizations are required to retain defined behavior
>   over +/-Inf, but the value of the result is unspecified. The unspecified
>   result may not be the same each time the expression is evaluated.
>
> Somewhat tangentially, it seems like we should avoid transformations that
> introduce non-finite values in the first place. When I talked to Craig
> Topper about this issue, he sent me a patch that does this (at least in
> InstCombine).
>
> Thoughts? Opinions?
>
> Thanks,
> Andy