[libc-commits] [libc] [libc][math][c23] Fix X86_Binary80 special cases for canonicalize functions. (PR #86924)

Shourya Goel via libc-commits libc-commits at lists.llvm.org
Thu Mar 28 12:23:57 PDT 2024


Sh0g0-1758 wrote:

> > Fixed tests. Ran into a rather interesting observation. Why is it that
> > ```
> > A = (UInt128(0x0000) << 64) + UInt128(0x8000000000000000)
> > B = (UInt128(0x0001) << 64) + UInt128(0x8000000000000000)
> > ```
> > 
> > Both have a `get_biased_exponent()` value of 1?
> 
> `get_biased_exponent` follows the IEEE 754 convention, and is used to compute the real values of the bit patterns. So once you added a hidden bit to your mantissa, the real value is simply
> 
> ```
>   2^(get_biased_exponent() - EXPONENT_BIAS) * m.
> ```
> 
> Another way to think about it is that:
> 
> ```
>   2^(get_biased_exponent() - EXPONENT_BIAS - FRACTION_LEN) is the value of the least significant bit.
> ```

Thanks for the explanation. Replacing the tests with _u128 strings also gave me added clarity, since I played around with the bits a lot while testing locally. Thanks for the suggestion.
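As a side note for anyone following along, the convention described above can be sketched in a few lines of Python. The names `EXPONENT_BIAS`, `FRACTION_LEN`, and `get_biased_exponent` mirror the libc `FPBits` interface for x86 Binary80, but this is an illustrative sketch of the decoding rule, not the actual implementation:

```python
from fractions import Fraction

# x86 Binary80: 1 sign bit, 15-bit exponent field, 64-bit mantissa with
# an explicit integer bit. Constants below are the standard values for
# this format (sketch only, not the real libc code).
EXPONENT_BIAS = 16383
FRACTION_LEN = 63

def get_biased_exponent(exp_field: int) -> int:
    # IEEE 754 convention: a zero exponent field (subnormals, and x86
    # pseudo-denormals like A above) is decoded with the same effective
    # exponent as the minimum normal, namely 1. That is why both A and B
    # report a biased exponent of 1.
    return exp_field if exp_field != 0 else 1

def real_value(exp_field: int, mantissa: int) -> Fraction:
    # 2^(get_biased_exponent() - EXPONENT_BIAS - FRACTION_LEN) is the
    # weight of the mantissa's least significant bit; exact arithmetic
    # via Fraction avoids double-precision underflow at 2^-16445.
    e = get_biased_exponent(exp_field)
    return Fraction(mantissa) * Fraction(2) ** (e - EXPONENT_BIAS - FRACTION_LEN)

# A (exponent field 0x0000) and B (exponent field 0x0001), both with the
# integer bit 0x8000000000000000 set, decode to the same biased exponent
# and therefore the same value, 2^-16382.
assert get_biased_exponent(0x0000) == get_biased_exponent(0x0001) == 1
assert real_value(0x0000, 1 << 63) == real_value(0x0001, 1 << 63)
```

In other words, the zero exponent field is not "one less" than exponent field 1; it is an alternate encoding that reuses the minimum normal's exponent so the least-significant-bit weight stays continuous across the subnormal/normal boundary.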

https://github.com/llvm/llvm-project/pull/86924


More information about the libc-commits mailing list