[llvm-bugs] [Bug 43381] New: Suboptimal shift+mask code gen with BZHI

via llvm-bugs llvm-bugs at lists.llvm.org
Fri Sep 20 05:21:32 PDT 2019


            Bug ID: 43381
           Summary: Suboptimal shift+mask code gen with BZHI
           Product: libraries
           Version: trunk
          Hardware: PC
                OS: All
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: Backend: X86
          Assignee: unassignedbugs at nondot.org
          Reporter: dave at znu.io
                CC: craig.topper at gmail.com, llvm-bugs at lists.llvm.org,
                    llvm-dev at redking.me.uk, spatel+llvm at rotateright.com

The two C examples at the end of this report should generate the following
assembly when BZHI is available:
        movb    $60, %cl
        bzhiq   %rcx, (%rdi), %rax
        shrq    $23, %rax

In practice they generate:
        movq    (%rdi), %rax
        shrq    $23, %rax
        movb    $37, %cl
        bzhiq   %rcx, %rax, %rax

The example C code:
unsigned long example1(unsigned long *mem) {
        unsigned long temp = *mem & ((1ul << 60) - 1);
        return temp >> 23;
}

unsigned long example2(unsigned long *mem) {
        unsigned long temp = *mem >> 23;
        return temp & ((1ul << 37) - 1);
}

It seems to me that LLVM and/or the X86 backend is biased towards "shift then
mask", but "mask then shift" sometimes generates better code.
