[LLVMdev] Bug 16257 - fmul of undef ConstantExpr not folded to undef

Stephen Canon scanon at apple.com
Tue Sep 16 10:40:03 PDT 2014


It’s important to keep in mind the following:

• The IEEE-754 NaN propagation rules are “shoulds”, not “shalls”.
• If the default NaN bit is set in FPSCR, the arithmetic is already not supporting strict NaN propagation, regardless of what the compiler does.

– Steve
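
A minimal C sketch of that second point (the helper names are purely illustrative, and it assumes IEEE-754 single precision with the usual quiet-NaN encoding): it builds a qNaN with a recognizable payload and pushes it through x + (-0.0f), the C-level analogue of the fadd under discussion.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <inttypes.h>

/* Build a single-precision quiet NaN carrying a recognizable payload.
   0x7FC00000 is the quiet-NaN prefix (and also ARM's default NaN). */
static float qnan_with_payload(uint32_t payload)
{
    uint32_t bits = 0x7FC00000u | (payload & 0x003FFFFFu);
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

static uint32_t bits_of(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return bits;
}

int main(void)
{
    volatile float x = qnan_with_payload(0x1234);
    volatile float r = x + (-0.0f);  /* the operation under discussion */

    printf("input  bits: 0x%08" PRIX32 "\n", bits_of(x));
    printf("result bits: 0x%08" PRIX32 "\n", bits_of(r));
    /* On a host whose FPU propagates payloads, both lines print 0x7FC01234.
       On an ARM core with FPSCR.DN (bit 25) set, the add returns the default
       NaN 0x7FC00000 instead -- unless the compiler has already folded the
       add away to x, which is precisely the fold being discussed. */
    return 0;
}

In other words, once FPSCR.DN is set the hardware itself stops propagating payloads, so strict payload preservation cannot be guaranteed regardless of whether the compiler folds.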

> On Sep 16, 2014, at 1:30 PM, Oleg Ranevskyy <llvm.mail.list at gmail.com> wrote:
> 
> Hi Duncan,
> 
> I reread everything we've discussed so far and would like to pay closer attention to the ARM FPSCR register mentioned by Stephen.
> On ARM systems it is really possible for floating point operations with one or more qNaN operands to return a NaN that differs from any of the operands, i.e. the operand NaN is not propagated. This happens when the "default NaN" (DN) flag is set in the FPSCR (floating point status and control register); the result is then some fixed default NaN value.
> 
> This means "fadd %x, -0.0", which is currently folded to %x by InstructionSimplify, might produce a different result at run time if %x is a NaN. This default NaN behavior breaks the NaN propagation rules the IEEE standard establishes and significantly reduces the folding opportunities for FP operations.
> 
> This also applies to "fadd undef, undef" and "fadd %x, undef". We can't rely on getting an arbitrary NaN here on ARM.
> 
> Would you be able to confirm this please?
> 
> Thank you in advance for your time!
> 
> Kind regards,
> Oleg
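
As a rough sketch of why folding "fadd %x, undef" to some NaN constant is defensible (same illustrative helpers as in the sketch above; the names are not from the thread), the following C program adds two quiet NaNs with different payloads. IEEE-754 deliberately leaves unspecified which payload the result carries, so a folded constant is no less conformant than whatever a particular FPU happens to pick.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <inttypes.h>

/* Illustrative helpers, as in the earlier sketch. */
static float qnan_with_payload(uint32_t payload)
{
    uint32_t bits = 0x7FC00000u | (payload & 0x003FFFFFu);
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

static uint32_t bits_of(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return bits;
}

int main(void)
{
    volatile float a = qnan_with_payload(0xAAAA);
    volatile float b = qnan_with_payload(0xBBBB);
    volatile float r = a + b;

    /* IEEE-754 only recommends that the result carry the payload of one of
       the inputs and does not say which; different FPUs (or a constant
       produced by a compiler fold) may legitimately disagree here. */
    printf("a + b bits: 0x%08" PRIX32 "\n", bits_of(r));
    return 0;
}
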
> 
> On 10.09.2014 22:50, Duncan Sands wrote:
>> Hi Oleg,
>> 
>> On 01/09/14 18:46, Oleg Ranevskyy wrote:
>>> Hi Duncan,
>>> 
>>> I looked through the IEEE standard and here is what I found:
>>> 
>>> *6.2 Operations with NaNs*
>>> "For an operation with quiet NaN inputs, other than maximum and minimum
>>> operations, if a floating-point result is to be delivered the result shall be a
>>> quiet NaN which should be one of the input NaNs".
>>> 
>>> *6.2.3 NaN propagation*
>>> "An operation that propagates a NaN operand to its result and has a single NaN
>>> as an input should produce a NaN with the payload of the input NaN if
>>> representable in the destination format."
>> 
>> thanks for finding this out.
>> 
>>> 
>>> Floating point add propagates a NaN, and there is no destination-format
>>> conversion in the context of LLVM's fadd. So, if %x in "fadd %x, -0.0" is a
>>> NaN, the result is also a NaN with the same payload.
>> 
>> Yes, folding "fadd %x, -0.0" to "%x" is correct.  This implies that "fadd undef, undef" can be folded to "undef".
>> 
>>> 
>>> As regards "fadd %x, undef", where %x might be a NaN and undef might be chosen
>>> to be (probably some different) NaN, and a possibility to fold this to a
>>> constant (NaN), the standard says:
>>> "If two or more inputs are NaN, then the payload of the resulting NaN should be
>>> identical to the payload of one of the input NaNs if representable in the
>>> destination format. *This standard does not specify which of the input NaNs will
>>> provide the payload*".
>>> 
>>> Thus, this makes it possible to fold "fadd %x, undef" to a NaN. Is this right?
>> 
>> Yes, I agree.
>> 
>> Ciao, Duncan.
>> 
>>> 
>>> Oleg
>>> 
>>> On 01.09.2014 10:04, Duncan Sands wrote:
>>>> Hi Oleg,
>>>> 
>>>> On 01/09/14 15:42, Oleg Ranevskyy wrote:
>>>>> Hi,
>>>>> 
>>>>> Thank you for your comment, Owen.
>>>>> My LLVM expertise is certainly not enough to make such decisions yet.
>>>>> Duncan, do you have any comments on this or do you know anyone else who can
>>>>> decide about preserving NaN payloads?
>>>> 
>>>> my take is that the first thing to do is to see what the IEEE standard says
>>>> about NaNs.  Consider for example "fadd x, -0.0". Does the standard specify
>>>> the exact NaN bit pattern produced as output when a particular NaN x is
>>>> input?  Or does it just say that the output is a NaN?  If the standard doesn't
>>>> care exactly which NaN is output, I think it is reasonable for LLVM to assume
>>>> it is whatever NaN is most convenient for LLVM; in this case that means using
>>>> x itself as the output.
>>>> 
>>>> However this approach does implicitly mean that we may end up not folding
>>>> floating point operations completely deterministically: depending on the
>>>> optimization that kicks in, in one case we might fold to NaN A, and in some
>>>> different optimization we might fold the same expression to NaN B.  I think
>>>> this is pretty reasonable, but it is something to be aware of.
>>>> 
>>>> Ciao, Duncan.
>>> 
>> 
> 




