[PATCH] D136311: [CUDA,NVPTX] Implement __bf16 support for NVPTX.

Artem Belevich via Phabricator via cfe-commits cfe-commits at lists.llvm.org
Tue Oct 25 09:55:20 PDT 2022


tra added a comment.

In D136311#3882748 <https://reviews.llvm.org/D136311#3882748>, @yaxunl wrote:

> LGTM. Thanks.
>
> Do you plan to support arithmetic operators for bf16 or implement the FMA instruction support?

Yes. sm_90 has introduced a handful of new bf16 operations that will eventually be implemented.



================
Comment at: llvm/lib/Target/NVPTX/NVPTXInstrInfo.td:186
+     !eq(name, "v2f16"): Float16x2Regs,
+     !eq(name, "bf16"): Float16Regs,
+     !eq(name, "v2bf16"): Float16x2Regs,
----------------
Allen wrote:
> Sorry for a basic question: what's the difference between bf16 and f16?
In short: bf16 keeps fp32's 8-bit exponent (and thus its dynamic range) but has only 7 mantissa bits, while IEEE f16 has a 5-bit exponent and 10 mantissa bits. See https://en.wikipedia.org/wiki/Bfloat16_floating-point_format



Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D136311/new/

https://reviews.llvm.org/D136311
