[LLVMdev] ConstantFoldBinaryFP and cross compilation

Dan Gohman dan433584 at gmail.com
Fri Apr 26 14:04:11 PDT 2013


Hi Sergei,

The degree to which LLVM actually makes any guarantees about IEEE
arithmetic precision is ambiguous. LangRef, for one, doesn't even mention
it (it mentions formats, but nothing else). The de-facto way of
interpreting holes in LangRef is to consider how the IR is used by clang
and follow the path up into the C and/or C++ standards and then work from
there. C describes a binding to IEC 60559, but it is optional, and clang
doesn't opt in. C++ doesn't even have the option. So from an official
perspective, it's not clear that you have any basis to complain ;-).

I mention all this not to dismiss your concern, but to put it in context.
Right or wrong, much of the C/C++ software world is not keenly
concerned with these matters. This includes LLVM in some respects. The
folding of floating-point library routines in LLVM that you point out is
one example of this.

One idea for addressing this would be to teach LLVM's TargetLibraryInfo to
carry information about how precise the target's library functions are.
Then, you could either implement soft-float versions of the affected
library functions within LLVM itself, or you could disable folding for
those functions that are not precise enough on the host (in non-fast-math
mode).

Another idea for addressing this would be to convince the LLVM community
that LLVM shouldn't constant-fold floating-point library functions at all
(in non-fast-math mode). I think you could make a reasonable argument for
this. There are ways to do this without losing much optimization -- such
expressions are still constant, after all, so they can be hoisted out of
any loop. They could even be hoisted out to main if you want. It's also
worth noting that this problem predates the implementation of fast-math
mode in LLVM's optimizer. Now that fast-math mode is available, it may be
easier to convince people to make the non-fast-math mode more conservative.
I don't know that everyone will accept this, but it's worth considering.
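To make the hoisting alternative concrete, here is a hand-written IR sketch (not the output of any actual pass): rather than folding the call on the host, a pass could move the constant-argument call out of the loop, so the target's own library routine still produces the value, just once.

```llvm
; Before: constant-argument libcall re-evaluated on every iteration.
loop:
  %v = call double @sin(double 1.0)
  ; ... uses of %v ...
  br i1 %cond, label %loop, label %exit

; After hoisting (no host-side folding; the target's sin runs exactly once):
preheader:
  %v = call double @sin(double 1.0)
  br label %loop
loop:
  ; ... uses of %v ...
  br i1 %cond, label %loop, label %exit
```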

Dan

On Fri, Apr 26, 2013 at 12:44 PM, Sergei Larin <slarin at codeaurora.org> wrote:

> Dan, and anyone else interested…
>
>   I am not sure if this has been discussed before, but I do have a case
> when the following logic fails to work:
>
> lib/Analysis/ConstantFolding.cpp
>
> static Constant *ConstantFoldBinaryFP(double (*NativeFP)(double, double),
>                                       double V, double W, Type *Ty) {
>   sys::llvm_fenv_clearexcept();
>   V = NativeFP(V, W);
>   if (sys::llvm_fenv_testexcept()) {
>     sys::llvm_fenv_clearexcept();
>     return 0;
>   }
>
> ….
>
> This fragment seems to assume that host and target behave in exactly the
> same way with regard to FP exception handling. In some ways I understand
> it, but… On some cross-compilation platforms this might not always be
> true. In the case of Hexagon, for example, our FP math handling is
> apparently more precise than the “stock” one on an x86 host. A specific
> (but not the best) example would be computing sqrtf(1.000001). The result
> is 1 with FE_INEXACT set. My current Linux x86 host fails the inexact
> part… resulting in wrong code being emitted.
>
>   Once again, my question is not about this specific example, but rather
> about the assumption of identical behavior on completely different
> systems. What if my target’s “objective” is to exceed IEEE precision?
> …and I happen to have a set of tests to verify that I do :)
>
> Thank you for any comment.
>
> Sergei
>
> ---
>
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted
> by The Linux Foundation
>

