[LLVMdev] Representing -ffast-math at the IR level

Harris, Kevin Kevin.Harris at unisys.com
Tue Apr 17 08:29:45 PDT 2012


Duncan,
        Your effort to improve the control of floating point optimizations in LLVM is noble and commendable.  I'd like to make two points that appear not to have been raised in the discussion of your proposal to date:

1)      Most compiler and back-end control of floating point behavior appears to be motivated by controlling the loss or gain of a few low bits of precision on a whole-module scale.  In practice, these concerns are usually insignificant for programmers of floating-point-intensive applications.  The inputs to most floating point computations have far lower significance than the computations themselves, so there is precision to burn, and the vast majority of such app developers would happily trade precision for performance, even as the default behavior.  However, the place where trouble DOES occur is with overflow and underflow behavior at critical points.  Changing the order of operations, or combining operations, can cause overflows or underflows that would not otherwise occur, and vice versa.  Sometimes this is beneficial, but it is almost always unexpected.  Underflows may sound less important in this regard, but they can be worse than overflows, because they can silently eliminate most or all of the significant bits, leaving the entire computation worthless.  Much of numerical analysis, especially in the writing of floating point library functions, concerns the precise control of overflow and loss of significance in specific operations.  Optimizations that make such control difficult or impossible can render a compiler or back-end unusable for that purpose.
2)      While the use of metadata for control of LLVM behavior is attractive for its simplicity and power, the philosophy that it can be safely ignored or even removed by some optimization passes would seem to doom its effectiveness for controlling floating point optimizations.  For anyone trying to use source-language and compiler-option mechanisms to control fp overflow and underflow, this approach seems ill-conceived.  To give a front-end developer a solid platform for supporting fp-intensive programming, the primary requirement is that the front-end be able to precisely control, at all optimization levels, any optimization that can change the fp intermediate results of each individual fp operation specified in the IR.  The vast majority of such usage can and should default to high-performance behavior.  But it should be possible for the front-end to precisely control IR re-ordering, operation combining (including exploitation of mul-add hardware support), and reactions to overflow and underflow conditions (using exception-handling mechanisms and the underlying hardware support).  Providing this power in the IR allows a front-end developer to reliably map source-language mechanisms (e.g. use of parentheses) and front-end-recognized compiler options (e.g. for fp exception handling) onto the needs of the source-language programmer for fp-intensive applications.

        It should be possible to define one or more attribute flags on FP operations in the IR with semantics that guarantee either allowance or suppression of optimizations that might create or eliminate overflow, underflow, or significant precision loss.  Implementing such semantics in the existing optimization passes might take a fair amount of work, I admit.  But that is exactly what front-end developers and their source-language programmers would most benefit from.
        -Kevin



