[PATCH] D138904: [AArch64] Transform shift+and to shift+shift to select more shifted register

chenglin.bi via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Nov 29 19:49:21 PST 2022


bcl5980 marked 5 inline comments as done.
bcl5980 added inline comments.


================
Comment at: llvm/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp:657-658
+
+    if (LHSOpcode == ISD::SRA && (BitWidth != LowZBits + MaskLen))
+      return false;
+
----------------
mingmingl wrote:
> For correctness, a similar check is needed when LHSOpcode is ISD::SRL; meanwhile, for `srl`, the condition `BitWidth == LowZBits + MaskLen + ShiftAmtC` should be sufficient
> 
> For example, https://gcc.godbolt.org/z/YvGG3Pov3 shows the IR at trunk; with the current patch, the optimized code below changes the result (i.e., it is incorrect).
> ```
> 	lsr	x8, x0, #3
> 	add	x0, x1, x8, lsl #1
> 	ret
> ```
I think `BitWidth <= LowZBits + MaskLen + ShiftAmtC` is the better check here, since for `srl` the top `ShiftAmtC` bits of the shifted value are already zero.
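
A minimal sketch of the two bail-out conditions, reusing the names from the patch (`BitWidth`, `LowZBits`, `MaskLen`, `ShiftAmtC`); the helper and the enum are hypothetical and only illustrate the check, not the actual AArch64ISelDAGToDAG.cpp code:

```
// Hypothetical helper; only the variable names and the two conditions mirror
// the patch, the surrounding structure is illustration.
enum class ShiftKind { SRA, SRL };

static bool maskCoversShiftedValue(ShiftKind Kind, unsigned BitWidth,
                                   unsigned LowZBits, unsigned MaskLen,
                                   unsigned ShiftAmtC) {
  if (Kind == ShiftKind::SRA)
    // sra shifts in copies of the sign bit, so the mask must reach the top
    // of the value exactly or the AND clears bits the rewrite would keep.
    return BitWidth == LowZBits + MaskLen;
  // srl shifts in zeros, so it is enough that the mask covers everything
  // below the ShiftAmtC bits that are already known to be zero.
  return BitWidth <= LowZBits + MaskLen + ShiftAmtC;
}
```

The `<=` form also keeps the transform for masks that, together with `ShiftAmtC`, over-cover the value, which is still safe for `srl`.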


================
Comment at: llvm/test/CodeGen/AArch64/shiftregister-from-and.ll:114
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    asr w8, w0, #47
+; CHECK-NEXT:    add w0, w1, w8, lsl #24
----------------
dmgreen wrote:
> This instruction looks wrong, with the shift being more than 32 for w regs.
Fixed the code and renamed the test to @shiftedreg_from_and_negative_andc5.
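
For reference, the constraint being violated is that the shift amount encoded in an AArch64 shifted-register operand must be strictly smaller than the register width (32 for w registers, 64 for x registers). A trivial sketch of that rule, with a hypothetical helper name, not the actual patch code:

```
// Hypothetical guard: shift amounts for shifted-register operands are
// 0..31 on 32-bit (w) registers and 0..63 on 64-bit (x) registers, so an
// instruction like `asr w8, w0, #47` is not encodable.
static bool isEncodableShiftAmount(unsigned RegBitWidth, unsigned ShiftAmt) {
  return ShiftAmt < RegBitWidth;
}
```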


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D138904/new/

https://reviews.llvm.org/D138904


