[LLVMdev] Floating-point range checks

Robison, Arch arch.robison at intel.com
Thu Jan 8 13:58:13 PST 2015


Yes, the modeling of floating point is trickier.  The wrap-around trick used by ConstantRange seems less applicable, and there are the unordered NaNs.  Still, in all cases the key abstraction is a lattice of values, so an instance of FPRange should be thought of as a point on a lattice, not an interval.  The lattice needs to be complicated enough to cover the cases of interest, but not so complicated that it gobbles up excessive time and space.  My guess is that most of the cases of practical interest are eliminating domain checks, and enabling optimizations that need to know that NaN/Inf are not in play.
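(A minimal illustrative sketch of such a lattice — hypothetical Python, not LLVM code, and all names are invented.  A lattice point is a bitmask over the eight classes proposed below in the thread, join is set union, and sqrt's `x < 0` domain check is provably dead when no strictly-negative class is possible.)

```python
import math

# A lattice point is a subset of the eight-element set
# {-inf, <0, -0, +0, >0, +inf, -nan, +nan}, encoded as a bitmask.
NEG_INF, NEG, NEG_ZERO, POS_ZERO, POS, POS_INF, NEG_NAN, POS_NAN = (
    1 << i for i in range(8)
)
BOTTOM = 0              # unreachable / no possible values
TOP = (1 << 8) - 1      # could be any value

def classify(x: float) -> int:
    """Map a concrete double to its lattice element."""
    if math.isnan(x):
        return NEG_NAN if math.copysign(1.0, x) < 0 else POS_NAN
    if math.isinf(x):
        return NEG_INF if x < 0 else POS_INF
    if x == 0.0:
        return NEG_ZERO if math.copysign(1.0, x) < 0 else POS_ZERO
    return NEG if x < 0 else POS

def join(a: int, b: int) -> int:
    """Lattice join: union of the possible value classes."""
    return a | b

def sqrt_domain_check_needed(arg: int) -> bool:
    """Julia's guard is `x < 0`, which fires only for finite or
    infinite negatives (NaN compares unordered, hence false)."""
    return bool(arg & (NEG | NEG_INF))
```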

- Arch

-----Original Message-----
From: Hal Finkel [mailto:hfinkel at anl.gov] 
Sent: Thursday, January 8, 2015 3:24 PM
To: Philip Reames
Cc: llvmdev at cs.uiuc.edu; Robison, Arch
Subject: Re: [LLVMdev] Floating-point range checks

----- Original Message -----
> From: "Philip Reames" <listmail at philipreames.com>
> To: "Hal Finkel" <hfinkel at anl.gov>, "Arch Robison" 
> <arch.robison at intel.com>
> Cc: llvmdev at cs.uiuc.edu
> Sent: Thursday, January 8, 2015 3:04:01 PM
> Subject: Re: [LLVMdev] Floating-point range checks
> 
> With floating point, won't you need to model a partial order for the 
> end points?  I thought there were pairs of floating point values which 
> are incomparable.  Not sure if partial order is the right abstraction, 
> but I'm pretty sure a total order isn't.
> 
> This may make implementing the range comparisons (which are themselves 
> partially ordered) a bit tricky...
> 
> Philip
> (Who knows just enough about floating point to know he doesn't really 
> know what he's talking about.)

I don't think that's quite the right way to think about it. First, there are FP values that fall outside of the number line (NaN, etc.), and those need to be represented separately. Second, you need to make sure you're representing your range with sufficient (but perhaps not too much) precision. What I mean is the following. Let's say we have some double-precision value, x, known to be in [-1.0,1.0]. And then we have this code:

 y = x + 1e-38;
 z = (y > 1.0);

Now, what happens to the range, and is z known to be false? One answer is that the range becomes [-1.0 + 1e-38, 1.0 + 1e-38], and z might be true. Another answer is that z is always false, because for no double-precision number in [-1.0,1.0] does the addition produce a y greater than exactly 1.0 (the tiny addend is absorbed by rounding).

So I think the important part is to use exactly the same precision for the range that is used by the values being modeled. APFloat should serve us well in this, so hopefully it will be straightforward.
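(A quick check of the claim — illustrative Python, relying on CPython floats being IEEE-754 doubles:)

```python
import math

# x is known to be in [-1.0, 1.0]; the worst case for
# `y = x + 1e-38; z = (y > 1.0)` is x == 1.0.
x = 1.0
y = x + 1e-38
assert y == 1.0          # the tiny addend is absorbed by rounding
assert not (y > 1.0)     # so z is always false

# Why: the gap between 1.0 and the next double is 2**-52 (~2.2e-16),
# which dwarfs 1e-38.
ulp = math.nextafter(1.0, 2.0) - 1.0
assert ulp == 2.0 ** -52
assert 1e-38 < ulp
```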

 -Hal

> 
> 
> On 01/08/2015 11:45 AM, Hal Finkel wrote:
> > ----- Original Message -----
> >> From: "Arch Robison" <arch.robison at intel.com>
> >> To: "Hal Finkel" <hfinkel at anl.gov>
> >> Cc: "Philip Reames" <listmail at philipreames.com>, 
> >> llvmdev at cs.uiuc.edu
> >> Sent: Thursday, January 8, 2015 1:26:41 PM
> >> Subject: RE: [LLVMdev] Floating-point range checks
> >>
> >>> Checks against 1.0 are also common. Why not just add an FP range 
> >>> class, like our constant range, and go from there?
> >> That's certainly another way to go.  My worry is that a more 
> >> complicated lattice gets us deeper into rounding-mode issues and 
> >> considerably more work for smaller gain.  I like the idea of 
> >> creating an FPRange class.  We could start with a simple one and 
> >> extend it as experience warrants.
> > I worry about that too ;) -- I think creating a simple one and 
> > extending from there makes sense.
> >
> >   -Hal
> >
> >> - Arch
> >>
> >> -----Original Message-----
> >> From: Hal Finkel [mailto:hfinkel at anl.gov]
> >> Sent: Thursday, January 8, 2015 1:03 PM
> >> To: Robison, Arch
> >> Cc: Philip Reames; llvmdev at cs.uiuc.edu
> >> Subject: Re: [LLVMdev] Floating-point range checks
> >>
> >> ----- Original Message -----
> >>> From: "Arch Robison" <arch.robison at intel.com>
> >>> To: "Philip Reames" <listmail at philipreames.com>, 
> >>> llvmdev at cs.uiuc.edu
> >>> Sent: Thursday, January 8, 2015 12:54:32 PM
> >>> Subject: Re: [LLVMdev] Floating-point range checks
> >>>
> >>>
> >>> Thanks for the pointers. Looks like LazyValueInfo has the sort of 
> >>> infrastructure I had in mind. LVILatticeVal could be extended to 
> >>> floating point. (The comment “this can be made a lot more rich in 
> >>> the future” is an invitation :-). I’m thinking a simple lattice 
> >>> would address most cases of interest for floating-point checks. 
> >>> The lattice points for floating-point could be all subsets of the 
> >>> eight-element
> >>> set:
> >>>
> >>> {-inf,<0,-0,+0,>0,+inf,-nan,+nan},
> >>>
> >>> where <0 and >0 denote finite negative/positive numbers 
> >>> respectively.
> >>>
> >> I'm not sure. Checks against 1.0 are also common. Why not just add 
> >> an FP range class, like our constant range, and go from there?
> >>
> >>   -Hal
> >>
> >>>
> >>> - Arch
> >>>
> >>> From: Philip Reames [mailto:listmail at philipreames.com]
> >>> Sent: Wednesday, January 7, 2015 6:03 PM
> >>> To: Robison, Arch; llvmdev at cs.uiuc.edu
> >>> Subject: Re: [LLVMdev] Floating-point range checks
> >>>
> >>> I don't believe we have much in this area currently.
> >>>
> >>> Generally, something like this would exist in InstCombine and 
> >>> ValueTracking.
> >>>
> >>> Take a look at ComputeSignBit in ValueTracking.cpp. This doesn't 
> >>> apply
> >>> (?) to floating point numbers, but we'd need something equivalent 
> >>> for them. It looks like there may already be a start in the form 
> >>> of:
> >>> CannotBeNegativeZero
> >>>
> >>> Other places to look would be SimplifyFAdd and 
> >>> InstCombine::visitFAdd.
> >>>
> >>> For this particular example, you're probably going to want a 
> >>> pattern in SimplifyFCmp of the form:
> >>>   matcher: sqrt_call(fadd(Value(X), SpecificValue(X)),
> >>>                      fadd(Value(Y), SpecificValue(Y)))
> >>>            && CannotBeNegativeZero(X) && CannotBeNegativeZero(Y)
> >>>
> >>> You might also look at LazyValueInfo, but that's probably of 
> >>> secondary interest. It's purely in terms of constant integer 
> >>> ranges currently.
> >>>
> >>> Philip
> >>>
> >>> On 01/07/2015 02:13 PM, Robison, Arch wrote:
> >>>
> >>> The Julia language implements sqrt(x) with a conditional branch 
> >>> taken if x < 0. Alas, this prevents vectorization of loops with 
> >>> sqrt. Often the argument can be proven to be non-negative, e.g., 
> >>> sqrt(x*x + y*y). Is there an existing LLVM pass or analysis that 
> >>> does floating-point range propagation to eliminate such 
> >>> unnecessary checks?
> >>>
> >>> Arch D. Robison
> >>>
> >>> Intel Corporation
> >>>
> >>> _______________________________________________
> >>> LLVM Developers mailing list
> >>> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
> >>> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
> >>>
> >> --
> >> Hal Finkel
> >> Assistant Computational Scientist
> >> Leadership Computing Facility
> >> Argonne National Laboratory
> >>
> 
> 

--
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory
