[all-commits] [llvm/llvm-project] 9f60b8: [InstCombine] canonicalize sign-bit-shift of difference to ext(icmp)

RotateRight via All-commits <all-commits at lists.llvm.org>
Tue Dec 1 06:58:37 PST 2020


  Branch: refs/heads/master
  Home:   https://github.com/llvm/llvm-project
  Commit: 9f60b8b3d2e2cd38b9ae45da7e36a77b3c9dd258
      https://github.com/llvm/llvm-project/commit/9f60b8b3d2e2cd38b9ae45da7e36a77b3c9dd258
  Author: Sanjay Patel <spatel at rotateright.com>
  Date:   2020-12-01 (Tue, 01 Dec 2020)

  Changed paths:
    M llvm/lib/Transforms/InstCombine/InstCombineShifts.cpp
    M llvm/test/Transforms/InstCombine/ashr-lshr.ll
    M llvm/test/Transforms/InstCombine/sub-ashr-and-to-icmp-select.ll
    M llvm/test/Transforms/InstCombine/sub-ashr-or-to-icmp-select.ll

  Log Message:
  -----------
  [InstCombine] canonicalize sign-bit-shift of difference to ext(icmp)

icmp is the preferred spelling in IR because analysis of icmp is
expected to be better than analysis of any equivalent instruction
pattern. This should lead to more follow-on folding potential.
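
As a concrete example of the shape this patch targets, here is a
minimal IR sketch (hypothetical function name, assuming i32, so the
shift amount is 31) of the lshr case before the transform:

  define i32 @sign_bit_lsb(i32 %x, i32 %y) {
    %s = sub nsw i32 %x, %y
    %r = lshr i32 %s, 31   ; move the sign bit of the difference to the LSB
    ret i32 %r
  }

With this patch, instcombine rewrites the body to:

  %c = icmp slt i32 %x, %y
  %r = zext i1 %c to i32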

It's difficult to say exactly what we should do in codegen to
compensate. For example, on AArch64, which of these is preferred:
	sub	w8, w0, w1
	lsr	w0, w8, #31

vs:
	cmp	w0, w1
	cset	w0, lt

If there are perf regressions, then we should deal with those in
codegen on a case-by-case basis.
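
For reference, a reduced reproducer (hypothetical function name) of
the icmp form that can be fed to llc -mtriple=aarch64 to inspect the
cmp/cset lowering shown above:

  define i32 @slt_zext(i32 %x, i32 %y) {
    %c = icmp slt i32 %x, %y    ; canonical form after this patch
    %r = zext i1 %c to i32
    ret i32 %r
  }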

A possible motivating example for better optimization is shown in
https://llvm.org/PR43198, but that will require other transforms
before anything changes there.

Alive proofs:
https://rise4fun.com/Alive/o4E

  Name: sign-bit splat
  Pre: C1 == (width(%x) - 1)
  %s = sub nsw %x, %y
  %r = ashr %s, C1
  =>
  %c = icmp slt %x, %y
  %r = sext %c

  Name: sign-bit LSB
  Pre: C1 == (width(%x) - 1)
  %s = sub nsw %x, %y
  %r = lshr %s, C1
  =>
  %c = icmp slt %x, %y
  %r = zext %c
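
As a concrete instance of the first proof (hypothetical function name,
assuming i32, so C1 = 31), the sign-bit splat case looks like:

  define i32 @sign_bit_splat(i32 %x, i32 %y) {
    %s = sub nsw i32 %x, %y
    %r = ashr i32 %s, 31   ; splat the sign bit: all-ones if %x < %y, else 0
    ret i32 %r
  }

  ; with this patch, instcombine produces:
  ;   %c = icmp slt i32 %x, %y
  ;   %r = sext i1 %c to i32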



