[llvm] 21308c2 - [AArch64][GlobalISel] Check if G_SELECT has been optimized when folding binops

Jessica Paquette via llvm-commits llvm-commits at lists.llvm.org
Tue Dec 8 13:48:05 PST 2020


Author: Jessica Paquette
Date: 2020-12-08T13:47:08-08:00
New Revision: 21308c2b4c9dda8d44d2792f1359fc6b54984b87

URL: https://github.com/llvm/llvm-project/commit/21308c2b4c9dda8d44d2792f1359fc6b54984b87
DIFF: https://github.com/llvm/llvm-project/commit/21308c2b4c9dda8d44d2792f1359fc6b54984b87.diff

LOG: [AArch64][GlobalISel] Check if G_SELECT has been optimized when folding binops

`TryFoldBinOpIntoSelect` didn't check `Optimized`, so a select could be folded
twice (e.g. a select with a G_ADD on the true side and a G_SUB on the false
side).

Add in the missing `if` and a test.

Added: 
    

Modified: 
    llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp
    llvm/test/CodeGen/AArch64/GlobalISel/select-select.mir

Removed: 
    


################################################################################
diff  --git a/llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp b/llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp
index 0c1e23820c90..92da0be099f3 100644
--- a/llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp
+++ b/llvm/lib/Target/AArch64/GISel/AArch64InstructionSelector.cpp
@@ -1033,8 +1033,11 @@ AArch64InstructionSelector::emitSelect(Register Dst, Register True,
   // By default, we'll try and emit a CSEL.
   unsigned Opc = Is32Bit ? AArch64::CSELWr : AArch64::CSELXr;
   bool Optimized = false;
-  auto TryFoldBinOpIntoSelect = [&Opc, Is32Bit, &CC, &MRI](Register &Reg,
-                                                           bool Invert) {
+  auto TryFoldBinOpIntoSelect = [&Opc, Is32Bit, &CC, &MRI,
+                                 &Optimized](Register &Reg, bool Invert) {
+    if (Optimized)
+      return false;
+
     // Attempt to fold:
     //
     // %sub = G_SUB 0, %x

diff  --git a/llvm/test/CodeGen/AArch64/GlobalISel/select-select.mir b/llvm/test/CodeGen/AArch64/GlobalISel/select-select.mir
index 336e59b03a0f..afc972cde3ee 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/select-select.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/select-select.mir
@@ -650,3 +650,35 @@ body:             |
     %select:gpr(s32) = G_SELECT %cond(s1), %add, %f
     $w0 = COPY %select(s32)
     RET_ReallyLR implicit $w0
+
+...
+---
+name:            binop_dont_optimize_twice
+legalized:       true
+regBankSelected: true
+tracksRegLiveness: true
+body:             |
+  bb.0:
+    liveins: $w0, $w1, $w2
+    ; CHECK-LABEL: name: binop_dont_optimize_twice
+    ; CHECK: liveins: $w0, $w1, $w2
+    ; CHECK: %reg0:gpr32 = COPY $w0
+    ; CHECK: %reg1:gpr32 = COPY $w1
+    ; CHECK: %reg2:gpr32 = COPY $w2
+    ; CHECK: %xor:gpr32 = ORNWrr $wzr, %reg1
+    ; CHECK: [[ANDSWri:%[0-9]+]]:gpr32 = ANDSWri %reg0, 0, implicit-def $nzcv
+    ; CHECK: %select:gpr32 = CSNEGWr %xor, %reg2, 1, implicit $nzcv
+    ; CHECK: $w0 = COPY %select
+    ; CHECK: RET_ReallyLR implicit $w0
+    %reg0:gpr(s32) = COPY $w0
+    %reg1:gpr(s32) = COPY $w1
+    %reg2:gpr(s32) = COPY $w2
+    %cond:gpr(s1) = G_TRUNC %reg0(s32)
+    %f:gpr(s32) = COPY $w2
+    %negative_one:gpr(s32) = G_CONSTANT i32 -1
+    %xor:gpr(s32) = G_XOR %reg1(s32), %negative_one
+    %zero:gpr(s32) = G_CONSTANT i32 0
+    %sub:gpr(s32) = G_SUB %zero(s32), %reg2
+    %select:gpr(s32) = G_SELECT %cond(s1), %xor, %sub
+    $w0 = COPY %select(s32)
+    RET_ReallyLR implicit $w0

More information about the llvm-commits mailing list