[clang] [llvm] [AMDGPU] Use `bf16` instead of `i16` for bfloat (PR #80908)
Shilei Tian via cfe-commits
cfe-commits at lists.llvm.org
Fri Feb 16 09:16:55 PST 2024
================
@@ -157,6 +157,27 @@ static uint32_t getLit16Encoding(uint16_t Val, const MCSubtargetInfo &STI) {
   return 255;
 }
 
+static uint32_t getLitBF16Encoding(uint16_t Val) {
+  uint16_t IntImm = getIntInlineImmEncoding(static_cast<int16_t>(Val));
+  if (IntImm != 0)
+    return IntImm;
+
+  // clang-format off
+  switch (Val) {
----------------
shiltian wrote:
In theory, yes, but for now we can't, because `getInlineEncodingV2BF16` can't handle some cases (that I haven't dug into yet). It looks like something in the conversion between `uint16_t` and `uint32_t` makes some test cases fail. IMO we need to unify them (not only for 16-bit) in one place instead of having almost the same logic in at least three places.
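For what it's worth, here is one plausible failure mode of that kind (a minimal sketch of the pitfall, not a diagnosis of the actual bug): implicitly converting a signed 16-bit value to `uint32_t` sign-extends first, so a negative bf16 bit pattern no longer matches what a 32-bit checker like `getInlineEncodingV2BF16` expects in its low half.

```cpp
#include <cinttypes>
#include <cstdint>
#include <cstdio>

int main() {
  // bf16 bit pattern of -1.0; negative when viewed as int16_t.
  int16_t Val = static_cast<int16_t>(0xBF80);

  // Implicit conversion promotes to int (sign-extending), then to uint32_t.
  uint32_t Widened = Val;                        // 0xFFFFBF80
  // Going through uint16_t first zero-extends, which is what we want here.
  uint32_t ZeroExt = static_cast<uint16_t>(Val); // 0x0000BF80

  printf("widened=%08" PRIX32 " zero-extended=%08" PRIX32 "\n",
         Widened, ZeroExt);
  return 0;
}
```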
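And a rough sketch of what "unify them in one place" could look like: a single table keyed by format instead of per-format switch statements. The names and structure below are hypothetical, not the existing LLVM API; the bit patterns are the standard AMDGPU inline FP constants expressed in IEEE half and bfloat16.

```cpp
#include <cstdint>
#include <optional>

enum class Lit16Kind { F16, BF16 };

// Hypothetical unified lookup: one table shared by the scalar f16 and bf16
// paths (and reusable per-half by the packed v2f16/v2bf16 paths).
static std::optional<uint32_t> getFPLit16Encoding(Lit16Kind Kind,
                                                  uint16_t Val) {
  struct Entry { uint16_t F16, BF16; uint32_t Enc; };
  static constexpr Entry Table[] = {
      {0x3800, 0x3F00, 240}, // 0.5
      {0xB800, 0xBF00, 241}, // -0.5
      {0x3C00, 0x3F80, 242}, // 1.0
      {0xBC00, 0xBF80, 243}, // -1.0
      {0x4000, 0x4000, 244}, // 2.0
      {0xC000, 0xC000, 245}, // -2.0
      {0x4400, 0x4080, 246}, // 4.0
      {0xC400, 0xC080, 247}, // -4.0
      {0x3118, 0x3E22, 248}, // 1/(2*pi)
  };
  for (const Entry &E : Table)
    if (Val == (Kind == Lit16Kind::F16 ? E.F16 : E.BF16))
      return E.Enc;
  return std::nullopt;
}
```

With something along these lines, the per-format switches collapse into one call each, and a bit-pattern fix only has to land in one place.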
https://github.com/llvm/llvm-project/pull/80908