[llvm] [AMDGPU] Prevent folding of the negative i32 literals as i64 (PR #70274)

Stanislav Mekhanoshin via llvm-commits llvm-commits at lists.llvm.org
Thu Oct 26 09:38:00 PDT 2023


================
@@ -0,0 +1,20 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 3
+# RUN: llc -march=amdgcn -mcpu=gfx900 -verify-machineinstrs -run-pass=si-fold-operands -o - %s | FileCheck -check-prefix=GCN %s
+
+# The constant is 0xffffffff80000000. It is a negative 64-bit constant, and it passes the
+# isInt<32>() test. Nonetheless it is not a legal literal for a binary or unsigned operand
+# and cannot be used directly in the shift, because the HW will zero-extend it.
+
+---
+name:            imm64_shift_int32_const
+body: |
+  bb.0:
+    ; GCN-LABEL: name: imm64_shift_int32_const
+    ; GCN: [[S_MOV_B:%[0-9]+]]:sreg_64 = S_MOV_B64_IMM_PSEUDO -2147483648
+    ; GCN-NEXT: [[S_LSHL_B64_:%[0-9]+]]:sreg_64 = S_LSHL_B64 [[S_MOV_B]], 1, implicit-def $scc
+    ; GCN-NEXT: S_ENDPGM 0, implicit [[S_LSHL_B64_]]
+    %0:sreg_64 = S_MOV_B64_IMM_PSEUDO 18446744071562067968
+    %1:sreg_64 = S_LSHL_B64 %0, 1, implicit-def $scc
+    S_ENDPGM 0, implicit %1
+
----------------
rampitec wrote:

I will add negative (or I guess positive) tests here, but an end-to-end test is not currently possible. It would require us to select this mov, which we do not do yet. The point of this patch series is to prepare for that selection, and that is how I hit the problem: I have a patch to select it and am fixing the problems that arise.
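For reference, a minimal standalone C++ sketch (not part of the patch, just an illustration) of the zero-extension issue the test comment describes: the value passes a signed-32-bit check like isInt<32>(), but truncating it to a 32-bit literal that the HW then zero-extends yields a different 64-bit value than the sign-extended one the program needs.

```c++
// Standalone illustration (not LLVM code) of why 0xffffffff80000000 cannot be
// folded as a 32-bit literal into a 64-bit operand, even though it passes an
// isInt<32>()-style check.
#include <cstdint>
#include <cstdio>

int main() {
  // The constant from the test: 0xffffffff80000000, i.e. -2147483648 as i64.
  const int64_t Imm = static_cast<int64_t>(0xffffffff80000000ULL);

  // It fits in a signed 32-bit immediate, so isInt<32>(Imm) would be true.
  const bool FitsSigned32 = Imm >= INT32_MIN && Imm <= INT32_MAX;

  // But encoded as a 32-bit literal on a 64-bit operand, the HW zero-extends
  // it, so the shift would operate on 0x0000000080000000 ...
  const uint64_t ZeroExtended = static_cast<uint32_t>(Imm);

  // ... instead of the sign-extended value the program actually needs.
  const uint64_t SignExtended = static_cast<uint64_t>(Imm);

  std::printf("fits isInt<32>  : %d\n", FitsSigned32);
  std::printf("HW zero-extends : 0x%016llx\n", (unsigned long long)ZeroExtended);
  std::printf("value we need   : 0x%016llx\n", (unsigned long long)SignExtended);
  return 0;
}
```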

https://github.com/llvm/llvm-project/pull/70274
