[llvm] [AMDGPU] Fix wrong operand value when floating-point value is used as operand of type i16 (PR #84106)

Jay Foad via llvm-commits llvm-commits at lists.llvm.org
Wed Mar 6 01:37:07 PST 2024


================
@@ -11189,10 +11189,10 @@ v_cvt_f16_u16_e32 v5, -1
 // GFX10: encoding: [0xc1,0xa0,0x0a,0x7e]
 
 v_cvt_f16_u16_e32 v5, 0.5
-// GFX10: encoding: [0xff,0xa0,0x0a,0x7e,0x00,0x38,0x00,0x00]
+// GFX10: encoding: [0x80,0xa0,0x0a,0x7e]
 
 v_cvt_f16_u16_e32 v5, -4.0
-// GFX10: encoding: [0xff,0xa0,0x0a,0x7e,0x00,0xc4,0x00,0x00]
+// GFX10: encoding: [0x80,0xa0,0x0a,0x7e]
----------------
jayfoad wrote:

Note these two instructions now assemble to the same binary, since the low 16 bits of f32 0.5 and f32 -4.0 are identical (both zero). Can you add tests with a more interesting literal, like inv2pi, which has non-zero low 16 bits?
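The bit-pattern claim above can be checked directly. A quick sketch (not part of the thread; `f32_bits` is a helper name introduced here) showing that the f32 encodings of 0.5 and -4.0 both have all-zero low 16 bits, while inv2pi does not:

```python
import math
import struct

def f32_bits(x):
    # Raw IEEE-754 single-precision (binary32) bit pattern of x.
    return struct.unpack("<I", struct.pack("<f", x))[0]

# The two literals from the test: both truncate to 0 in their low 16 bits,
# which is why the assembler now emits the same inline-constant encoding.
assert f32_bits(0.5) == 0x3F000000
assert f32_bits(-4.0) == 0xC0800000
assert f32_bits(0.5) & 0xFFFF == 0
assert f32_bits(-4.0) & 0xFFFF == 0

# inv2pi (1/(2*pi)) has non-zero low 16 bits, so it would distinguish
# the truncated-operand paths in a test.
inv2pi = 1.0 / (2.0 * math.pi)
print(hex(f32_bits(inv2pi)))           # 0x3e22f983
print(hex(f32_bits(inv2pi) & 0xFFFF))  # 0xf983 (non-zero)
```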

https://github.com/llvm/llvm-project/pull/84106
