[llvm-bugs] [Bug 25718] New: [X86][SSE] LLVM produces sub-optimal assembly for zext from 8xi8 to 8xi32

via llvm-bugs llvm-bugs at lists.llvm.org
Wed Dec 2 11:04:01 PST 2015


https://llvm.org/bugs/show_bug.cgi?id=25718

            Bug ID: 25718
           Summary: [X86][SSE] LLVM produces sub-optimal assembly for zext
                    from 8xi8 to 8xi32
           Product: libraries
           Version: trunk
          Hardware: PC
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P
         Component: Backend: X86
          Assignee: unassignedbugs at nondot.org
          Reporter: congh at google.com
                CC: llvm-bugs at lists.llvm.org
    Classification: Unclassified

For the following IR:

define void @zext_v8i8_to_v8i32(<8 x i8>* %a) {
  %1 = load <8 x i8>, <8 x i8>* %a
  %2 = zext <8 x i8> %1 to <8 x i32>
  store <8 x i32> %2, <8 x i32>* undef, align 4
  ret void
}
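
For reference, output like the dump below can be reproduced with an llc invocation along these lines (the exact flags and the file name zext.ll are an assumption, not taken from the report):

# assumed reproduction command; zext.ll holds the IR above
llc -mtriple=x86_64-unknown-linux-gnu -mattr=+sse2 zext.ll -o -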

LLVM emits the following assembly:


movq    (%rdi), %xmm0           # xmm0 = mem[0],zero
punpcklbw    %xmm0, %xmm0    # xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]
movdqa    %xmm0, %xmm1
punpcklwd    %xmm0, %xmm1    # xmm1 = xmm1[0],xmm0[0],xmm1[1],xmm0[1],xmm1[2],xmm0[2],xmm1[3],xmm0[3]
movdqa    .LCPI0_0(%rip), %xmm2   # xmm2 = [255,0,0,0,255,0,0,0,255,0,0,0,255,0,0,0]
pand    %xmm2, %xmm1
punpckhwd    %xmm0, %xmm0    # xmm0 = xmm0[4,4,5,5,6,6,7,7]
pand    %xmm2, %xmm0
movdqu    %xmm0, (%rax)
movdqu    %xmm1, (%rax)
retq

This is sub-optimal: we can eliminate the two pand instructions and the movdqa that
loads the constant mask by using a zeroed register as the second operand of every
punpck* instruction, at the cost of one extra pxor to materialize the zero vector.
A sketch of the expected codegen follows.
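
A rough sketch of the improved sequence, assuming a spare register holds the zero
vector (register assignments are illustrative, not taken from an actual llc run):

pxor         %xmm1, %xmm1            # zero vector, reused as the interleave operand
movq         (%rdi), %xmm0           # xmm0 = mem[0],zero
punpcklbw    %xmm1, %xmm0            # bytes -> words; high byte of each word is zero
movdqa       %xmm0, %xmm2
punpcklwd    %xmm1, %xmm2            # low 4 words -> dwords, no pand needed
punpckhwd    %xmm1, %xmm0            # high 4 words -> dwords, no pand needed
movdqu       %xmm0, (%rax)
movdqu       %xmm2, (%rax)
retq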
