[llvm] GlobalISel: neg (and x, 1) --> SIGN_EXTEND_INREG x, 1 (PR #131367)

via llvm-commits llvm-commits at lists.llvm.org
Fri Mar 14 10:47:33 PDT 2025


llvmbot wrote:



@llvm/pr-subscribers-backend-aarch64

Author: Matthias Braun (MatzeB)

<details>
<summary>Changes</summary>

The pattern
```LLVM
%shl = shl i32 %x, 31
%ashr = ashr i32 %shl, 31
```
is combined to `G_SEXT_INREG %x, 1` by GlobalISel. However, InstCombine normalizes this pattern to:
```LLVM
%and = and i32 %x, 1
%neg = sub i32 0, %and
```
Negating a value masked to a single bit yields either `0` or `-1`, which is exactly the sign-extension of that bit, so the two forms are equivalent. This patch adds a combine rule for the normalized variant as well.
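
For illustration, a minimal IR reproducer of the normalized pattern (the function name is made up for this example, not taken from the patch):
```LLVM
define i32 @neg_and_one(i32 %x) {
  ; InstCombine-normalized form of the shl/ashr-by-31 pattern above
  %and = and i32 %x, 1
  %neg = sub i32 0, %and
  ret i32 %neg
}
```
With the new rule, the pre-legalizer combiner should fold the resulting `G_AND`/`G_SUB` pair into a single `G_SEXT_INREG %x, 1`, matching what the existing `shl_ashr_to_sext_inreg` rule produces for the shift form.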

---
Full diff: https://github.com/llvm/llvm-project/pull/131367.diff


2 Files Affected:

- (modified) llvm/include/llvm/Target/GlobalISel/Combine.td (+11-1) 
- (added) llvm/test/CodeGen/AArch64/GlobalISel/combine-neg-and-one-to-sext-inreg.mir (+20) 


``````````diff
diff --git a/llvm/include/llvm/Target/GlobalISel/Combine.td b/llvm/include/llvm/Target/GlobalISel/Combine.td
index 3590ab221ad44..d43046ec5d282 100644
--- a/llvm/include/llvm/Target/GlobalISel/Combine.td
+++ b/llvm/include/llvm/Target/GlobalISel/Combine.td
@@ -747,6 +747,16 @@ def shl_ashr_to_sext_inreg : GICombineRule<
   (apply [{ Helper.applyAshShlToSextInreg(*${root}, ${info});}])
 >;
 
+// Fold sub 0, (and x, 1) -> sext_inreg x, 1
+def neg_and_one_to_sext_inreg : GICombineRule<
+  (defs root:$dst),
+  (match (G_AND $and, $x, 1),
+         (G_SUB $dst, 0, $and),
+         [{ return Helper.isLegalOrBeforeLegalizer(
+            {TargetOpcode::G_SEXT_INREG, {MRI.getType(${x}.getReg())}}); }]),
+  (apply (G_SEXT_INREG $dst, $x, 1))
+>;
+
 // Fold and(and(x, C1), C2) -> C1&C2 ? and(x, C1&C2) : 0
 def overlapping_and: GICombineRule <
   (defs root:$root, build_fn_matchinfo:$info),
@@ -2013,7 +2023,7 @@ def all_combines : GICombineGroup<[integer_reassoc_combines, trivial_combines,
     undef_combines, identity_combines, phi_combines,
     simplify_add_to_sub, hoist_logic_op_with_same_opcode_hands, shifts_too_big,
     reassocs, ptr_add_immed_chain, cmp_combines,
-    shl_ashr_to_sext_inreg, sext_inreg_of_load,
+    shl_ashr_to_sext_inreg, neg_and_one_to_sext_inreg, sext_inreg_of_load,
     width_reduction_combines, select_combines,
     known_bits_simplifications, trunc_shift,
     not_cmp_fold, opt_brcond_by_inverting_cond,
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-neg-and-one-to-sext-inreg.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-neg-and-one-to-sext-inreg.mir
new file mode 100644
index 0000000000000..aeb7ec3d9b9ee
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-neg-and-one-to-sext-inreg.mir
@@ -0,0 +1,20 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
+# RUN: llc -o - -mtriple aarch64-- -run-pass=aarch64-prelegalizer-combiner %s | FileCheck %s
+---
+name: test_combine_neg_and_one_to_sext_inreg
+body: |
+  bb.1:
+  liveins: $w0
+    ; CHECK-LABEL: name: test_combine_neg_and_one_to_sext_inreg
+    ; CHECK: liveins: $w0
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
+    ; CHECK-NEXT: [[SEXT_INREG:%[0-9]+]]:_(s32) = G_SEXT_INREG [[COPY]], 1
+    ; CHECK-NEXT: $w0 = COPY [[SEXT_INREG]](s32)
+    %0:_(s32) = COPY $w0
+    %1:_(s32) = G_CONSTANT i32 1
+    %3:_(s32) = G_CONSTANT i32 0
+    %2:_(s32) = G_AND %0:_, %1:_
+    %4:_(s32) = G_SUB %3:_, %2:_
+    $w0 = COPY %4:_(s32)
+...

``````````

</details>


https://github.com/llvm/llvm-project/pull/131367

