[llvm-bugs] [Bug 32247] New: [x86] lshr <4 x i32> generates bad code

via llvm-bugs llvm-bugs at lists.llvm.org
Sun Mar 12 13:04:34 PDT 2017


https://bugs.llvm.org/show_bug.cgi?id=32247

            Bug ID: 32247
           Summary: [x86] lshr <4 x i32> generates bad code
           Product: new-bugs
           Version: 3.9
          Hardware: PC
                OS: Linux
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: new bugs
          Assignee: unassignedbugs at nondot.org
          Reporter: simonas+llvm.org at kazlauskas.me
                CC: llvm-bugs at lists.llvm.org

Compiling the following IR:

  define <4 x i32> @test(<4 x i32>, <4 x i32>) {
    entry-block:
      %a = lshr <4 x i32> %0, %1
      ret <4 x i32> %a
  }

Results in the following assembly:

test:                                   # @test
        .cfi_startproc
# BB#0:                                 # %entry-block
        movdqa  %xmm1, %xmm2
        psrldq  $12, %xmm2              # xmm2 = xmm2[12,13,14,15],zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero
        movdqa  %xmm0, %xmm3
        psrld   %xmm2, %xmm3
        movdqa  %xmm1, %xmm2
        psrlq   $32, %xmm2
        movdqa  %xmm0, %xmm4
        psrld   %xmm2, %xmm4
        movsd   %xmm4, %xmm3            # xmm3 = xmm4[0],xmm3[1]
        pshufd  $237, %xmm3, %xmm2      # xmm2 = xmm3[1,3,2,3]
        pxor    %xmm3, %xmm3
        movdqa  %xmm1, %xmm4
        punpckhdq       %xmm3, %xmm4    # xmm4 = xmm4[2],xmm3[2],xmm4[3],xmm3[3]
        movdqa  %xmm0, %xmm5
        psrld   %xmm4, %xmm5
        punpckldq       %xmm3, %xmm1    # xmm1 = xmm1[0],xmm3[0],xmm1[1],xmm3[1]
        psrld   %xmm1, %xmm0
        movsd   %xmm0, %xmm5            # xmm5 = xmm0[0],xmm5[1]
        pshufd  $232, %xmm5, %xmm0      # xmm0 = xmm5[0,2,2,3]
        punpckldq       %xmm2, %xmm0    # xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]
        retq

That’s a lot of work being done where two instructions would suffice:

        psrld     %xmm1, %xmm0
        ret

LLVM insists on shifting every component separately, which is not necessary. The issue is also reproducible with a constant shift amount (this generates somewhat less code, but the underlying problem remains):

  define <4 x i32> @test(<4 x i32>) {
    entry-block:
      %a = lshr <4 x i32> %0, <i32 1, i32 2, i32 3, i32 4>
      ret <4 x i32> %a
  }

This is potentially reproducible for the other shift variants (ashr, shl) as well.
