[cfe-dev] new bfloat IR type for C bfloat type mapping

Ties Stuij via cfe-dev cfe-dev at lists.llvm.org
Fri Mar 20 09:06:44 PDT 2020


Hi all,

At Arm we have started upstreaming support for various Armv8.6-A features [1]. As part of this effort, we are upstreaming support for the Brain floating-point format (bfloat16) C type [2].

As the name suggests, it's a 16-bit floating-point format, but with the same number of exponent bits (8) as an IEEE 754 float32; the extra exponent bits come at the expense of the mantissa, which is only 7 bits. It otherwise behaves much like an IEEE 754 type. For more info, see [3] and [4].

In our original patch [2], we mapped the C bfloat type to either the float or int32 LLVM type, just as we do for _Float16 and __fp16. However, Craig Topper was quite surprised that we would pass it as half, and John McCall and JF Bastien suggested mapping it to a new bfloat IR type instead, since, again, bfloat is distinct from float16, and doing so should be straightforward. Sjoerd Meijer also concurred, but suggested we poll the mailing list before forging ahead.

Our initial thought was that a separate IR type wasn't needed: the architecture has no support for naked bfloat operations, and bfloat would only be used in an ML context through intrinsics. But it is a distinct type, and it does make sense to treat it as such. Also, several architectures have added or announced support for bf16, and there are proposals in flight to add it to the C++ standard.

Thoughts?


Best regards,
/Ties Stuij


links:
[1] https://reviews.llvm.org/D76062
[2] https://reviews.llvm.org/D76077
[3] https://community.arm.com/developer/ip-products/processors/b/ml-ip-blog/posts/bfloat16-processing-for-neural-networks-on-armv8_2d00_a
[4] https://static.docs.arm.com/ddi0487/fa/DDI0487F_a_armv8_arm.pdf

