[llvm] 44211f2 - AMDGPU: Optimize copies to exec with other insts after exec def
Matt Arsenault via llvm-commits
llvm-commits at lists.llvm.org
Tue Jul 28 18:35:45 PDT 2020
Author: Matt Arsenault
Date: 2020-07-28T21:34:50-04:00
New Revision: 44211f20a82418b20a69df9de6291d923dcbf709
URL: https://github.com/llvm/llvm-project/commit/44211f20a82418b20a69df9de6291d923dcbf709
DIFF: https://github.com/llvm/llvm-project/commit/44211f20a82418b20a69df9de6291d923dcbf709.diff
LOG: AMDGPU: Optimize copies to exec with other insts after exec def
It's possible to have other terminator instructions after the write to exec,
so skip over them to find the copy to exec.
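For illustration, this is the terminator sequence from the new MIR test added
below; the copy to exec ($exec = S_MOV_B64_term ...) is followed by another
terminator copy and the branch:

    $exec = S_MOV_B64_term killed renamable $sgpr2_sgpr3
    renamable $sgpr0_sgpr1 = S_MOV_B64_term killed renamable $sgpr0_sgpr1, implicit $exec
    S_CBRANCH_EXECZ %bb.2, implicit $exec

Instead of bailing out when the first candidate instruction it inspects is not
the copy to exec, the pass now scans over up to SearchLimit (5) instructions
to find it.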
Added:
llvm/test/CodeGen/AMDGPU/optimize-exec-copies-extra-insts-after-copy.mir
Modified:
llvm/lib/Target/AMDGPU/SIOptimizeExecMasking.cpp
Removed:
################################################################################
diff --git a/llvm/lib/Target/AMDGPU/SIOptimizeExecMasking.cpp b/llvm/lib/Target/AMDGPU/SIOptimizeExecMasking.cpp
index 2994d1954d71..449b6287a87b 100644
--- a/llvm/lib/Target/AMDGPU/SIOptimizeExecMasking.cpp
+++ b/llvm/lib/Target/AMDGPU/SIOptimizeExecMasking.cpp
@@ -300,9 +300,20 @@ bool SIOptimizeExecMasking::runOnMachineFunction(MachineFunction &MF) {
     if (I == E)
       continue;
 
-    // TODO: It's possible to see other terminator copies after the exec copy.
-    Register CopyToExec = isCopyToExec(*I, ST);
-    if (!CopyToExec.isValid())
+    // It's possible to see other terminator copies after the exec copy. This
+    // can happen if control flow pseudos had their outputs used by phis.
+    Register CopyToExec;
+
+    unsigned SearchCount = 0;
+    const unsigned SearchLimit = 5;
+    while (I != E && SearchCount++ < SearchLimit) {
+      CopyToExec = isCopyToExec(*I, ST);
+      if (CopyToExec)
+        break;
+      ++I;
+    }
+
+    if (!CopyToExec)
       continue;
 
     // Scan backwards to find the def.
diff --git a/llvm/test/CodeGen/AMDGPU/optimize-exec-copies-extra-insts-after-copy.mir b/llvm/test/CodeGen/AMDGPU/optimize-exec-copies-extra-insts-after-copy.mir
new file mode 100644
index 000000000000..b0e67034a403
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/optimize-exec-copies-extra-insts-after-copy.mir
@@ -0,0 +1,51 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
+# RUN: llc -march=amdgcn -mcpu=fiji -verify-machineinstrs -run-pass=si-optimize-exec-masking -o - %s | FileCheck %s
+
+# Make sure we can still optimize writes to exec when there are
+# additional terminators after the exec write. This can happen with
+# phi users of control flow intrinsics.
+
+---
+name: instructions_after_copy_to_exec
+tracksRegLiveness: true
+body: |
+  ; CHECK-LABEL: name: instructions_after_copy_to_exec
+  ; CHECK: bb.0:
+  ; CHECK:   successors: %bb.2(0x40000000), %bb.1(0x40000000)
+  ; CHECK:   liveins: $vgpr0
+  ; CHECK:   renamable $vgpr1 = V_MOV_B32_e32 0, implicit $exec
+  ; CHECK:   renamable $vcc = V_CMP_EQ_U32_e64 0, killed $vgpr0, implicit $exec
+  ; CHECK:   $sgpr0_sgpr1 = S_AND_SAVEEXEC_B64 $vcc, implicit-def $exec, implicit-def $scc, implicit $exec
+  ; CHECK:   renamable $sgpr0_sgpr1 = S_XOR_B64 $exec, killed renamable $sgpr0_sgpr1, implicit-def dead $scc
+  ; CHECK:   renamable $sgpr0_sgpr1 = COPY killed renamable $sgpr0_sgpr1, implicit $exec
+  ; CHECK:   S_CBRANCH_EXECZ %bb.2, implicit $exec
+  ; CHECK: bb.1:
+  ; CHECK:   successors: %bb.2(0x80000000)
+  ; CHECK:   liveins: $sgpr0_sgpr1
+  ; CHECK:   S_NOP 0, implicit $sgpr0_sgpr1
+  ; CHECK: bb.2:
+  ; CHECK:   liveins: $sgpr0_sgpr1
+  ; CHECK:   S_NOP 0, implicit $sgpr0_sgpr1
+  bb.0:
+    liveins: $vgpr0
+
+    renamable $vgpr1 = V_MOV_B32_e32 0, implicit $exec
+    renamable $vcc = V_CMP_EQ_U32_e64 0, killed $vgpr0, implicit $exec
+    renamable $sgpr0_sgpr1 = COPY $exec, implicit-def $exec
+    renamable $sgpr2_sgpr3 = S_AND_B64 renamable $sgpr0_sgpr1, renamable $vcc, implicit-def dead $scc
+    renamable $sgpr0_sgpr1 = S_XOR_B64 renamable $sgpr2_sgpr3, killed renamable $sgpr0_sgpr1, implicit-def dead $scc
+    $exec = S_MOV_B64_term killed renamable $sgpr2_sgpr3
+    renamable $sgpr0_sgpr1 = S_MOV_B64_term killed renamable $sgpr0_sgpr1, implicit $exec
+    S_CBRANCH_EXECZ %bb.2, implicit $exec
+
+  bb.1:
+    liveins: $sgpr0_sgpr1
+
+    S_NOP 0, implicit $sgpr0_sgpr1
+
+  bb.2:
+    liveins: $sgpr0_sgpr1
+
+    S_NOP 0, implicit $sgpr0_sgpr1
+
+...