[llvm-bugs] [Bug 39456] New: [X86][SSE] Improve insertelement-zero.ll + insertelement-ones.ll codegen

via llvm-bugs llvm-bugs at lists.llvm.org
Fri Oct 26 10:59:16 PDT 2018


https://bugs.llvm.org/show_bug.cgi?id=39456

            Bug ID: 39456
           Summary: [X86][SSE] Improve insertelement-zero.ll +
                    insertelement-ones.ll codegen
           Product: libraries
           Version: trunk
          Hardware: PC
                OS: Windows NT
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: Backend: X86
          Assignee: unassignedbugs at nondot.org
          Reporter: llvm-dev at redking.me.uk
                CC: andrea.dibiagio at gmail.com, craig.topper at gmail.com,
                    llvm-bugs at lists.llvm.org, spatel+llvm at rotateright.com

The insertion of 0/-1 elements into vectors can often be more efficiently
performed as either a blend with a rematerializable 0/-1 vector or with an
AND/OR op with a constant mask:

define <32 x i8> @insert_v32i8_z123456789ABCDEzGHIJKLMNOPQRSTzz(<32 x i8> %a) {
; AVX1-LABEL: insert_v32i8_z123456789ABCDEzGHIJKLMNOPQRSTzz:
; AVX1:       # %bb.0:
; AVX1-NEXT:    xorl %eax, %eax
; AVX1-NEXT:    vpinsrb $0, %eax, %xmm0, %xmm1
; AVX1-NEXT:    vpinsrb $15, %eax, %xmm1, %xmm1
; AVX1-NEXT:    vextractf128 $1, %ymm0, %xmm0
; AVX1-NEXT:    vpxor %xmm2, %xmm2, %xmm2
; AVX1-NEXT:    vpblendw {{.*#+}} xmm0 = xmm0[0,1,2,3,4,5,6],xmm2[7]
; AVX1-NEXT:    vinsertf128 $1, %xmm0, %ymm1, %ymm0
; AVX1-NEXT:    retq
;
; AVX2-SLOW-LABEL: insert_v32i8_z123456789ABCDEzGHIJKLMNOPQRSTzz:
; AVX2-SLOW:       # %bb.0:
; AVX2-SLOW-NEXT:    xorl %eax, %eax
; AVX2-SLOW-NEXT:    vpinsrb $0, %eax, %xmm0, %xmm1
; AVX2-SLOW-NEXT:    vpinsrb $15, %eax, %xmm1, %xmm1
; AVX2-SLOW-NEXT:    vextracti128 $1, %ymm0, %xmm0
; AVX2-SLOW-NEXT:    vpxor %xmm2, %xmm2, %xmm2
; AVX2-SLOW-NEXT:    vpblendw {{.*#+}} xmm0 = xmm0[0,1,2,3,4,5,6],xmm2[7]
; AVX2-SLOW-NEXT:    vinserti128 $1, %xmm0, %ymm1, %ymm0
; AVX2-SLOW-NEXT:    retq
;
; AVX2-FAST-LABEL: insert_v32i8_z123456789ABCDEzGHIJKLMNOPQRSTzz:
; AVX2-FAST:       # %bb.0:
; AVX2-FAST-NEXT:    vpand {{.*}}(%rip), %xmm0, %xmm1
; AVX2-FAST-NEXT:    vextracti128 $1, %ymm0, %xmm0
; AVX2-FAST-NEXT:    vpxor %xmm2, %xmm2, %xmm2
; AVX2-FAST-NEXT:    vpblendw {{.*#+}} xmm0 = xmm0[0,1,2,3,4,5,6],xmm2[7]
; AVX2-FAST-NEXT:    vinserti128 $1, %xmm0, %ymm1, %ymm0
; AVX2-FAST-NEXT:    retq
  %1 = insertelement <32 x i8> %a, i8 0, i32 0
  %2 = insertelement <32 x i8> %1, i8 0, i32 15
  %3 = insertelement <32 x i8> %2, i8 0, i32 30
  %4 = insertelement <32 x i8> %3, i8 0, i32 31
  ret <32 x i8> %4
}

Not even the AVX2-FAST case manages to see that this is just an AND with a
constant mask.
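For reference, inserting zero into lanes 0, 15, 30 and 31 is equivalent to a
single bytewise AND with a constant mask (0x00 in those lanes, 0xFF
everywhere else), i.e. a lone vpand against a memory constant. A minimal
Python sketch of that equivalence (the function names are illustrative, not
part of LLVM):

```python
import os

# Lanes zeroed by the four insertelement ops in the IR above.
ZERO_LANES = {0, 15, 30, 31}

# Constant mask: 0x00 in the zeroed lanes, 0xFF elsewhere.
MASK = bytes(0x00 if i in ZERO_LANES else 0xFF for i in range(32))

def insert_zeros(vec: bytes) -> bytes:
    """Model the four insertelement-of-zero operations."""
    out = bytearray(vec)
    for lane in ZERO_LANES:
        out[lane] = 0
    return bytes(out)

def and_mask(vec: bytes) -> bytes:
    """Model a single AND (vpand) against the constant mask."""
    return bytes(v & m for v, m in zip(vec, MASK))

v = os.urandom(32)
assert insert_zeros(v) == and_mask(v)
```

The same reasoning applies to inserting -1 elements, which reduce to a single
OR with the complementary mask.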
