[llvm-bugs] [Bug 42571] New: [x86] forming subo intrinsic thwarts codegen

via llvm-bugs llvm-bugs at lists.llvm.org
Wed Jul 10 14:25:42 PDT 2019


https://bugs.llvm.org/show_bug.cgi?id=42571

            Bug ID: 42571
           Summary: [x86] forming subo intrinsic thwarts codegen
           Product: libraries
           Version: trunk
          Hardware: PC
                OS: All
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: Backend: X86
          Assignee: unassignedbugs at nondot.org
          Reporter: spatel+llvm at rotateright.com
                CC: craig.topper at gmail.com, llvm-bugs at lists.llvm.org,
                    llvm-dev at redking.me.uk, spatel+llvm at rotateright.com

Forking this example from bug 42314 because it's not exactly an "isPowerOf2"
pattern:

int foo(int x, int y) {
  return x ? (x & (x - 1)) : y;
}

Or as optimized IR (if we should create ctpop from this, then this isn't
strictly an x86 bug):

define i32 @foo(i32 %x, i32 %y) {
  %tobool = icmp eq i32 %x, 0
  %sub = add nsw i32 %x, -1
  %and = and i32 %sub, %x
  %cond = select i1 %tobool, i32 %y, i32 %and
  ret i32 %cond
}

CodeGenPrepare (CGP) will transform that to use llvm.usub.with.overflow.i32, and we get:

        movl    %edi, %eax
        subl    $1, %eax
        andl    %edi, %eax
        cmpl    $1, %edi
        cmovbl  %esi, %eax
        retq
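
For reference, the IR after CGP's overflow-intrinsic formation presumably looks
something like this sketch (not captured from an actual run; value names are
made up):

declare { i32, i1 } @llvm.usub.with.overflow.i32(i32, i32)

define i32 @foo(i32 %x, i32 %y) {
  ; The x - 1 and the x == 0 test are merged into one intrinsic: the overflow
  ; bit of the unsigned subtract x - 1 is set exactly when x < 1, i.e. x == 0.
  %uso = call { i32, i1 } @llvm.usub.with.overflow.i32(i32 %x, i32 1)
  %sub = extractvalue { i32, i1 } %uso, 0
  %ov = extractvalue { i32, i1 } %uso, 1
  %and = and i32 %sub, %x
  %cond = select i1 %ov, i32 %y, i32 %and
  ret i32 %cond
}

That overflow bit is what becomes the "cmpl $1, %edi / cmovbl" pair above.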

For generic x86, this seems better:
        leal    -1(%rdi), %eax
        andl    %edi, %eax
        testl   %edi, %edi
        cmove   %esi, %eax

And with BMI ('blsr'):
        blsr    %edi, %eax
        testl   %edi, %edi
        cmove   %esi, %eax


https://godbolt.org/z/Wmgyy9
