[PATCH] D138817: [AArch64] Optimize muls with operands having enough sign bits.

Dave Green via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Nov 29 09:13:02 PST 2022


dmgreen added inline comments.


================
Comment at: llvm/lib/Target/AArch64/AArch64InstrInfo.td:1926
+def smullwithsignbits : PatFrag<(ops node:$l, node:$r), (mul node:$l, node:$r), [{
+  return CurDAG->ComputeNumSignBits(N->getOperand(0)) > 32 &&
+         CurDAG->ComputeNumSignBits(N->getOperand(1)) > 32;
----------------
bipmis wrote:
> dmgreen wrote:
> > I think it maybe needs to be 33 bits. Sign bits are always off by one. Can you add some tests for the edge cases?
> 
> I think it has to be 32, as we are checking that the number of sign bits in each mul operand is greater than 32. The test smull_ldrsw_w() fails with a value of 33, as it is an edge case with an i32 operand.
> 
> 
> ```
> define i64 @smull_ldrsw_w(i32* %x0, i32 %x1) {
> ; CHECK-LABEL: smull_ldrsw_w:
> ; CHECK:       // %bb.0: // %entry
> ; CHECK-NEXT:    ldrsw x8, [x0]
> ; CHECK-NEXT:    // kill: def $w1 killed $w1 def $x1
> ; CHECK-NEXT:    sxtw x9, w1
> ; CHECK-NEXT:    mul x0, x8, x9
> ; CHECK-NEXT:    ret
> entry:
>   %ext64 = load i32, i32* %x0
>   %sext = sext i32 %ext64 to i64
>   %sext4 = sext i32 %x1 to i64
>   %mul = mul i64 %sext, %sext4
>   ret i64 %mul
> }
> 
> ```
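
For reference, the off-by-one is easier to see with the raw counts. Below is a minimal sketch in plain C++ (my own hand-rolled helper, not the actual ComputeNumSignBits implementation), assuming the usual definition of the sign-bit count as the number of leading bits equal to the sign bit: a sext from i32 to i64 always has at least 33 of them, so a check for more than 32 sign bits accepts it.
```
#include <cassert>
#include <cstdint>

// Hand-rolled stand-in for the sign-bit count: the number of leading
// bits equal to the sign bit, including the sign bit itself.
static int numSignBits(int64_t v) {
  uint64_t u = (uint64_t)v;
  uint64_t sign = (u >> 63) & 1;
  int bits = 1;
  for (int i = 62; i >= 0 && ((u >> i) & 1) == sign; --i)
    ++bits;
  return bits;
}

int main() {
  // A sext from i32 to i64 copies bit 31 into bits 32..63, so even the
  // worst case (INT32_MIN) has 33 sign bits.
  int64_t sextMin = (int64_t)INT32_MIN;
  assert(numSignBits(sextMin) == 33); // 33 > 32, so the predicate accepts it
  int64_t sextZero = (int64_t)(int32_t)0;
  assert(numSignBits(sextZero) == 64);
  return 0;
}
```
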
Hmm. I don't think this case should transform:
```
define i64 @t31(i32 %a, i64 %b) nounwind {
; CHECK-LABEL: t31:
; CHECK:       // %bb.0: // %entry
; CHECK-NEXT:    asr x8, x1, #31
; CHECK-NEXT:    smull x0, w8, w0
; CHECK-NEXT:    ret
entry:
  %tmp1 = sext i32 %a to i64
  %c = ashr i64 %b, 31
  %tmp3 = mul i64 %tmp1, %c
  ret i64 %tmp3
}
```
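The ashr by 31 only guarantees 32 sign bits, so the result can still need 33 bits and does not fit the 32-bit smull inputs. A rough by-hand check (same caveat as above: plain C++, not LLVM's ComputeNumSignBits):
```
#include <cassert>
#include <cstdint>

// Same stand-in for the sign-bit count as in the sketch above.
static int numSignBits(int64_t v) {
  uint64_t u = (uint64_t)v;
  uint64_t sign = (u >> 63) & 1;
  int bits = 1;
  for (int i = 62; i >= 0 && ((u >> i) & 1) == sign; --i)
    ++bits;
  return bits;
}

int main() {
  // Worst-case results of ashr i64 %b, 31 and ashr i64 %b, 32
  // (i.e. %b = INT64_MIN), written out as constants.
  int64_t ashr31 = -((int64_t)1 << 32); // -2^32: needs 33 bits as a signed value
  int64_t ashr32 = (int64_t)INT32_MIN;  // -2^31: fits in 32 bits

  assert(numSignBits(ashr31) == 32); // not more than 32, so t31 must not use smull
  assert(numSignBits(ashr32) == 33); // more than 32, so an ashr by 32 is fine
  return 0;
}
```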
An ashr by 32 would be OK. Same for the constant in @t10. Could it be that smullwithonesignbits is treated as commutative, and because of that it is matching the 32 sign bits from the sext? This version isn't being transformed:
```
define i64 @t312(i64 %a, i64 %b) nounwind {
; CHECK-LABEL: t312:
; CHECK:       // %bb.0: // %entry
; CHECK-NEXT:    asr x8, x0, #31
; CHECK-NEXT:    asr x9, x1, #31
; CHECK-NEXT:    mul x0, x8, x9
; CHECK-NEXT:    ret
entry:
  %tmp1 = ashr i64 %a, 31
  %c = ashr i64 %b, 31
  %tmp3 = mul i64 %tmp1, %c
  ret i64 %tmp3
}
```
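To make the suspicion concrete (purely a sketch of the hypothesis, with a hypothetical either-operand predicate; not how the TableGen patterns actually match): a check that is satisfied when either operand alone has enough sign bits would fire on t31 because of the sext, a check on both operands like smullwithsignbits would not, and neither would fire on t312.
```
#include <cassert>
#include <cstdint>

// Same stand-in for the sign-bit count as in the sketches above.
static int numSignBits(int64_t v) {
  uint64_t u = (uint64_t)v;
  uint64_t sign = (u >> 63) & 1;
  int bits = 1;
  for (int i = 62; i >= 0 && ((u >> i) & 1) == sign; --i)
    ++bits;
  return bits;
}

// Check both operands, as smullwithsignbits does.
static bool bothHaveSignBits(int64_t l, int64_t r) {
  return numSignBits(l) > 32 && numSignBits(r) > 32;
}

// Hypothetical commuted variant that is satisfied if either operand alone
// has enough sign bits -- only meant to illustrate the hypothesis.
static bool eitherHasSignBits(int64_t l, int64_t r) {
  return numSignBits(l) > 32 || numSignBits(r) > 32;
}

int main() {
  int64_t sextA = (int64_t)INT32_MIN;  // sext i32 %a: 33 sign bits
  int64_t ashrB = -((int64_t)1 << 32); // ashr i64 %b, 31 (worst case): 32 sign bits

  // t31: the either-operand check fires because of the sext,
  // the both-operand check correctly does not.
  assert(eitherHasSignBits(sextA, ashrB));
  assert(!bothHaveSignBits(sextA, ashrB));

  // t312: both operands are ashr-by-31 values, so neither check fires,
  // matching the observation that t312 is not transformed.
  assert(!eitherHasSignBits(ashrB, ashrB));
  assert(!bothHaveSignBits(ashrB, ashrB));
  return 0;
}
```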

If that is the case then all the patterns could use smullwithsignbits, I think.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D138817/new/

https://reviews.llvm.org/D138817


