<p dir="ltr">Maybe just avoid producing the cfi directives (nounwind)? </p>
<div class="gmail_quote">On Jul 16, 2015 1:24 PM, "Simon Pilgrim" <<a href="mailto:llvm-dev@redking.me.uk">llvm-dev@redking.me.uk</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word">Hi Sean,<div><br></div><div>This is from using update_llc_test_checks to regenerate the test script. I have some upcoming improvements to i64 vector shifts (better 32-bit target support and SRA vectorization) that should avoid their creation entirely.</div><div><br></div><div>Cheers, Simon.</div><div><br><div><blockquote type="cite"><div>On 16 Jul 2015, at 00:16, Sean Silva <<a href="mailto:chisophugis@gmail.com" target="_blank">chisophugis@gmail.com</a>> wrote:</div><br><div><div dir="ltr">Did you intend to commit all the cfi directives in some of these?<div><br></div><div>-- Sean Silva</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jul 15, 2015 at 1:04 AM, Simon Pilgrim <span dir="ltr"><<a href="mailto:llvm-dev@redking.me.uk" target="_blank">llvm-dev@redking.me.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Author: rksimon<br>
Date: Wed Jul 15 03:04:07 2015
New Revision: 242273

URL: http://llvm.org/viewvc/llvm-project?rev=242273&view=rev
Log:
[X86][SSE] Added i686/SSE2 vector shift tests.

We were only testing on x86-64, but we should also ensure decent code generation for i64 shifts on 32-bit targets.

Modified:
    llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll
    llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll
    llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll

Modified: llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll
URL: <a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__llvm.org_viewvc_llvm-2Dproject_llvm_trunk_test_CodeGen_X86_vector-2Dshift-2Dashr-2D128.ll-3Frev-3D242273-26r1-3D242272-26r2-3D242273-26view-3Ddiff&d=AwMFAg&c=8hUWFZcy2Z-Za5rBPlktOQ&r=mQ4LZ2PUj9hpadE3cDHZnIdEwhEBrbAstXeMaFoB9tg&m=t0x4uKuotVhCRr7DKeGEwqwiqjw34KP-2c1Wwl9D140&s=p3_P1aMI3kv4eNrLz7s2Q1-3lI01aZ5jIwTGcJ3Vt_w&e=" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll?rev=242273&r1=242272&r2=242273&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-shift-ashr-128.ll Wed Jul 15 03:04:07 2015<br>
@@ -2,6 +2,9 @@<br>
; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mcpu=x86-64 -mattr=+sse4.1 | FileCheck %s --check-prefix=ALL --check-prefix=SSE --check-prefix=SSE41<br>
; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mcpu=x86-64 -mattr=+avx | FileCheck %s --check-prefix=ALL --check-prefix=AVX --check-prefix=AVX1<br>
; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mcpu=x86-64 -mattr=+avx2 | FileCheck %s --check-prefix=ALL --check-prefix=AVX --check-prefix=AVX2<br>
+;<br>
+; Just one 32-bit run to make sure we do reasonable things for i64 shifts.<br>
+; RUN: llc < %s -mtriple=i686-unknown-unknown -mcpu=x86-64 | FileCheck %s --check-prefix=ALL --check-prefix=X32-SSE --check-prefix=X32-SSE2<br>
<br>
;<br>
; Variable Shifts<br>
@@ -49,6 +52,67 @@ define <2 x i64> @var_shift_v2i64(<2 x i<br>
; AVX-NEXT: vmovq %rax, %xmm0<br>
; AVX-NEXT: vpunpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm2[0]<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: var_shift_v2i64:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: pushl %ebp<br>
+; X32-SSE-NEXT: .Ltmp0:<br>
+; X32-SSE-NEXT: .cfi_def_cfa_offset 8<br>
+; X32-SSE-NEXT: pushl %ebx<br>
+; X32-SSE-NEXT: .Ltmp1:<br>
+; X32-SSE-NEXT: .cfi_def_cfa_offset 12<br>
+; X32-SSE-NEXT: pushl %edi<br>
+; X32-SSE-NEXT: .Ltmp2:<br>
+; X32-SSE-NEXT: .cfi_def_cfa_offset 16<br>
+; X32-SSE-NEXT: pushl %esi<br>
+; X32-SSE-NEXT: .Ltmp3:<br>
+; X32-SSE-NEXT: .cfi_def_cfa_offset 20<br>
+; X32-SSE-NEXT: .Ltmp4:<br>
+; X32-SSE-NEXT: .cfi_offset %esi, -20<br>
+; X32-SSE-NEXT: .Ltmp5:<br>
+; X32-SSE-NEXT: .cfi_offset %edi, -16<br>
+; X32-SSE-NEXT: .Ltmp6:<br>
+; X32-SSE-NEXT: .cfi_offset %ebx, -12<br>
+; X32-SSE-NEXT: .Ltmp7:<br>
+; X32-SSE-NEXT: .cfi_offset %ebp, -8<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm0[3,1,2,3]<br>
+; X32-SSE-NEXT: movd %xmm2, %edx<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm0[2,3,0,1]<br>
+; X32-SSE-NEXT: movd %xmm2, %esi<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm1[2,3,0,1]<br>
+; X32-SSE-NEXT: movd %xmm2, %eax<br>
+; X32-SSE-NEXT: movb %al, %cl<br>
+; X32-SSE-NEXT: shrdl %cl, %edx, %esi<br>
+; X32-SSE-NEXT: movd %xmm0, %edi<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm0[1,1,2,3]<br>
+; X32-SSE-NEXT: movd %xmm0, %ebx<br>
+; X32-SSE-NEXT: movd %xmm1, %ecx<br>
+; X32-SSE-NEXT: shrdl %cl, %ebx, %edi<br>
+; X32-SSE-NEXT: movl %ebx, %ebp<br>
+; X32-SSE-NEXT: sarl %cl, %ebp<br>
+; X32-SSE-NEXT: sarl $31, %ebx<br>
+; X32-SSE-NEXT: testb $32, %cl<br>
+; X32-SSE-NEXT: cmovnel %ebp, %edi<br>
+; X32-SSE-NEXT: movd %edi, %xmm0<br>
+; X32-SSE-NEXT: cmovel %ebp, %ebx<br>
+; X32-SSE-NEXT: movl %edx, %edi<br>
+; X32-SSE-NEXT: movb %al, %cl<br>
+; X32-SSE-NEXT: sarl %cl, %edi<br>
+; X32-SSE-NEXT: sarl $31, %edx<br>
+; X32-SSE-NEXT: testb $32, %al<br>
+; X32-SSE-NEXT: cmovnel %edi, %esi<br>
+; X32-SSE-NEXT: movd %esi, %xmm1<br>
+; X32-SSE-NEXT: movd %ebx, %xmm2<br>
+; X32-SSE-NEXT: cmovel %edi, %edx<br>
+; X32-SSE-NEXT: movd %edx, %xmm3<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm3[0],xmm2[1],xmm3[1]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]<br>
+; X32-SSE-NEXT: popl %esi<br>
+; X32-SSE-NEXT: popl %edi<br>
+; X32-SSE-NEXT: popl %ebx<br>
+; X32-SSE-NEXT: popl %ebp<br>
+; X32-SSE-NEXT: retl<br>
%shift = ashr <2 x i64> %a, %b<br>
ret <2 x i64> %shift<br>
}<br>
@@ -119,6 +183,30 @@ define <4 x i32> @var_shift_v4i32(<4 x i<br>
; AVX2: # BB#0:<br>
; AVX2-NEXT: vpsravd %xmm1, %xmm0, %xmm0<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: var_shift_v4i32:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: psrldq {{.*#+}} xmm2 = xmm2[12,13,14,15],zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm3<br>
+; X32-SSE-NEXT: psrad %xmm2, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: psrlq $32, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psrad %xmm2, %xmm4<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm3 = xmm4[0],xmm3[1]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm3[1,3,2,3]<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm4<br>
+; X32-SSE-NEXT: punpckhdq {{.*#+}} xmm4 = xmm4[2],xmm3[2],xmm4[3],xmm3[3]<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm5<br>
+; X32-SSE-NEXT: psrad %xmm4, %xmm5<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm1 = xmm1[0],xmm3[0],xmm1[1],xmm3[1]<br>
+; X32-SSE-NEXT: psrad %xmm1, %xmm0<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm5 = xmm0[0],xmm5[1]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm5[0,2,2,3]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]<br>
+; X32-SSE-NEXT: retl<br>
%shift = ashr <4 x i32> %a, %b<br>
ret <4 x i32> %shift<br>
}<br>
@@ -216,6 +304,41 @@ define <8 x i16> @var_shift_v8i16(<8 x i<br>
; AVX2-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,2,2,3]<br>
; AVX2-NEXT: vzeroupper<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: var_shift_v8i16:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psllw $12, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: psraw $15, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm3<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm3<br>
+; X32-SSE-NEXT: psraw $8, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm3, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm1, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: psraw $15, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm3<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm3<br>
+; X32-SSE-NEXT: psraw $4, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm3, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm1, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: psraw $15, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm3<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm3<br>
+; X32-SSE-NEXT: psraw $2, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm3, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm1, %xmm1<br>
+; X32-SSE-NEXT: psraw $15, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm2<br>
+; X32-SSE-NEXT: psraw $1, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm1, %xmm0<br>
+; X32-SSE-NEXT: por %xmm2, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = ashr <8 x i16> %a, %b<br>
ret <8 x i16> %shift<br>
}<br>
@@ -342,6 +465,64 @@ define <16 x i8> @var_shift_v16i8(<16 x<br>
; AVX-NEXT: vpsrlw $8, %xmm0, %xmm0<br>
; AVX-NEXT: vpackuswb %xmm2, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: var_shift_v16i8:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: punpckhbw {{.*#+}} xmm2 = xmm2[8],xmm0[8],xmm2[9],xmm0[9],xmm2[10],xmm0[10],xmm2[11],xmm0[11],xmm2[12],xmm0[12],xmm2[13],xmm0[13],xmm2[14],xmm0[14],xmm2[15],xmm0[15]<br>
+; X32-SSE-NEXT: psllw $5, %xmm1<br>
+; X32-SSE-NEXT: punpckhbw {{.*#+}} xmm4 = xmm4[8],xmm1[8],xmm4[9],xmm1[9],xmm4[10],xmm1[10],xmm4[11],xmm1[11],xmm4[12],xmm1[12],xmm4[13],xmm1[13],xmm4[14],xmm1[14],xmm4[15],xmm1[15]<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pxor %xmm5, %xmm5<br>
+; X32-SSE-NEXT: pcmpgtw %xmm4, %xmm5<br>
+; X32-SSE-NEXT: movdqa %xmm5, %xmm6<br>
+; X32-SSE-NEXT: pandn %xmm2, %xmm6<br>
+; X32-SSE-NEXT: psraw $4, %xmm2<br>
+; X32-SSE-NEXT: pand %xmm5, %xmm2<br>
+; X32-SSE-NEXT: por %xmm6, %xmm2<br>
+; X32-SSE-NEXT: paddw %xmm4, %xmm4<br>
+; X32-SSE-NEXT: pxor %xmm5, %xmm5<br>
+; X32-SSE-NEXT: pcmpgtw %xmm4, %xmm5<br>
+; X32-SSE-NEXT: movdqa %xmm5, %xmm6<br>
+; X32-SSE-NEXT: pandn %xmm2, %xmm6<br>
+; X32-SSE-NEXT: psraw $2, %xmm2<br>
+; X32-SSE-NEXT: pand %xmm5, %xmm2<br>
+; X32-SSE-NEXT: por %xmm6, %xmm2<br>
+; X32-SSE-NEXT: paddw %xmm4, %xmm4<br>
+; X32-SSE-NEXT: pxor %xmm5, %xmm5<br>
+; X32-SSE-NEXT: pcmpgtw %xmm4, %xmm5<br>
+; X32-SSE-NEXT: movdqa %xmm5, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm2, %xmm4<br>
+; X32-SSE-NEXT: psraw $1, %xmm2<br>
+; X32-SSE-NEXT: pand %xmm5, %xmm2<br>
+; X32-SSE-NEXT: por %xmm4, %xmm2<br>
+; X32-SSE-NEXT: psrlw $8, %xmm2<br>
+; X32-SSE-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
+; X32-SSE-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
+; X32-SSE-NEXT: pxor %xmm4, %xmm4<br>
+; X32-SSE-NEXT: pcmpgtw %xmm1, %xmm4<br>
+; X32-SSE-NEXT: movdqa %xmm4, %xmm5<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm5<br>
+; X32-SSE-NEXT: psraw $4, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm4, %xmm0<br>
+; X32-SSE-NEXT: por %xmm5, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm1, %xmm1<br>
+; X32-SSE-NEXT: pxor %xmm4, %xmm4<br>
+; X32-SSE-NEXT: pcmpgtw %xmm1, %xmm4<br>
+; X32-SSE-NEXT: movdqa %xmm4, %xmm5<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm5<br>
+; X32-SSE-NEXT: psraw $2, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm4, %xmm0<br>
+; X32-SSE-NEXT: por %xmm5, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm1, %xmm1<br>
+; X32-SSE-NEXT: pcmpgtw %xmm1, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm3, %xmm1<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm1<br>
+; X32-SSE-NEXT: psraw $1, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm3, %xmm0<br>
+; X32-SSE-NEXT: por %xmm1, %xmm0<br>
+; X32-SSE-NEXT: psrlw $8, %xmm0<br>
+; X32-SSE-NEXT: packuswb %xmm2, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = ashr <16 x i8> %a, %b<br>
ret <16 x i8> %shift<br>
}<br>
@@ -409,6 +590,68 @@ define <2 x i64> @splatvar_shift_v2i64(<<br>
; AVX2-NEXT: vmovq %rax, %xmm0<br>
; AVX2-NEXT: vpunpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm2[0]<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatvar_shift_v2i64:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: pushl %ebp<br>
+; X32-SSE-NEXT: .Ltmp8:<br>
+; X32-SSE-NEXT: .cfi_def_cfa_offset 8<br>
+; X32-SSE-NEXT: pushl %ebx<br>
+; X32-SSE-NEXT: .Ltmp9:<br>
+; X32-SSE-NEXT: .cfi_def_cfa_offset 12<br>
+; X32-SSE-NEXT: pushl %edi<br>
+; X32-SSE-NEXT: .Ltmp10:<br>
+; X32-SSE-NEXT: .cfi_def_cfa_offset 16<br>
+; X32-SSE-NEXT: pushl %esi<br>
+; X32-SSE-NEXT: .Ltmp11:<br>
+; X32-SSE-NEXT: .cfi_def_cfa_offset 20<br>
+; X32-SSE-NEXT: .Ltmp12:<br>
+; X32-SSE-NEXT: .cfi_offset %esi, -20<br>
+; X32-SSE-NEXT: .Ltmp13:<br>
+; X32-SSE-NEXT: .cfi_offset %edi, -16<br>
+; X32-SSE-NEXT: .Ltmp14:<br>
+; X32-SSE-NEXT: .cfi_offset %ebx, -12<br>
+; X32-SSE-NEXT: .Ltmp15:<br>
+; X32-SSE-NEXT: .cfi_offset %ebp, -8<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[0,1,0,1]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm0[3,1,2,3]<br>
+; X32-SSE-NEXT: movd %xmm2, %edx<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm0[2,3,0,1]<br>
+; X32-SSE-NEXT: movd %xmm2, %esi<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm1[2,3,0,1]<br>
+; X32-SSE-NEXT: movd %xmm2, %eax<br>
+; X32-SSE-NEXT: movb %al, %cl<br>
+; X32-SSE-NEXT: shrdl %cl, %edx, %esi<br>
+; X32-SSE-NEXT: movd %xmm0, %edi<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm0[1,1,2,3]<br>
+; X32-SSE-NEXT: movd %xmm0, %ebx<br>
+; X32-SSE-NEXT: movd %xmm1, %ecx<br>
+; X32-SSE-NEXT: shrdl %cl, %ebx, %edi<br>
+; X32-SSE-NEXT: movl %ebx, %ebp<br>
+; X32-SSE-NEXT: sarl %cl, %ebp<br>
+; X32-SSE-NEXT: sarl $31, %ebx<br>
+; X32-SSE-NEXT: testb $32, %cl<br>
+; X32-SSE-NEXT: cmovnel %ebp, %edi<br>
+; X32-SSE-NEXT: movd %edi, %xmm0<br>
+; X32-SSE-NEXT: cmovel %ebp, %ebx<br>
+; X32-SSE-NEXT: movl %edx, %edi<br>
+; X32-SSE-NEXT: movb %al, %cl<br>
+; X32-SSE-NEXT: sarl %cl, %edi<br>
+; X32-SSE-NEXT: sarl $31, %edx<br>
+; X32-SSE-NEXT: testb $32, %al<br>
+; X32-SSE-NEXT: cmovnel %edi, %esi<br>
+; X32-SSE-NEXT: movd %esi, %xmm1<br>
+; X32-SSE-NEXT: movd %ebx, %xmm2<br>
+; X32-SSE-NEXT: cmovel %edi, %edx<br>
+; X32-SSE-NEXT: movd %edx, %xmm3<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm3[0],xmm2[1],xmm3[1]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]<br>
+; X32-SSE-NEXT: popl %esi<br>
+; X32-SSE-NEXT: popl %edi<br>
+; X32-SSE-NEXT: popl %ebx<br>
+; X32-SSE-NEXT: popl %ebp<br>
+; X32-SSE-NEXT: retl<br>
%splat = shufflevector <2 x i64> %b, <2 x i64> undef, <2 x i32> zeroinitializer<br>
%shift = ashr <2 x i64> %a, %splat<br>
ret <2 x i64> %shift<br>
@@ -435,6 +678,13 @@ define <4 x i32> @splatvar_shift_v4i32(<<br>
; AVX-NEXT: vpblendw {{.*#+}} xmm1 = xmm1[0,1],xmm2[2,3,4,5,6,7]<br>
; AVX-NEXT: vpsrad %xmm1, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatvar_shift_v4i32:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: xorps %xmm2, %xmm2<br>
+; X32-SSE-NEXT: movss {{.*#+}} xmm2 = xmm1[0],xmm2[1,2,3]<br>
+; X32-SSE-NEXT: psrad %xmm2, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%splat = shufflevector <4 x i32> %b, <4 x i32> undef, <4 x i32> zeroinitializer<br>
%shift = ashr <4 x i32> %a, %splat<br>
ret <4 x i32> %shift<br>
@@ -462,6 +712,14 @@ define <8 x i16> @splatvar_shift_v8i16(<<br>
; AVX-NEXT: vpblendw {{.*#+}} xmm1 = xmm1[0],xmm2[1,2,3,4,5,6,7]<br>
; AVX-NEXT: vpsraw %xmm1, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatvar_shift_v8i16:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movd %xmm1, %eax<br>
+; X32-SSE-NEXT: movzwl %ax, %eax<br>
+; X32-SSE-NEXT: movd %eax, %xmm1<br>
+; X32-SSE-NEXT: psraw %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%splat = shufflevector <8 x i16> %b, <8 x i16> undef, <8 x i32> zeroinitializer<br>
%shift = ashr <8 x i16> %a, %splat<br>
ret <8 x i16> %shift<br>
@@ -626,6 +884,68 @@ define <16 x i8> @splatvar_shift_v16i8(<<br>
; AVX2-NEXT: vpsrlw $8, %xmm0, %xmm0<br>
; AVX2-NEXT: vpackuswb %xmm2, %xmm0, %xmm0<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatvar_shift_v16i8:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[0,1,0,3]<br>
+; X32-SSE-NEXT: pshuflw {{.*#+}} xmm1 = xmm1[0,0,0,0,4,5,6,7]<br>
+; X32-SSE-NEXT: pshufhw {{.*#+}} xmm3 = xmm1[0,1,2,3,4,4,4,4]<br>
+; X32-SSE-NEXT: punpckhbw {{.*#+}} xmm1 = xmm1[8],xmm0[8],xmm1[9],xmm0[9],xmm1[10],xmm0[10],xmm1[11],xmm0[11],xmm1[12],xmm0[12],xmm1[13],xmm0[13],xmm1[14],xmm0[14],xmm1[15],xmm0[15]<br>
+; X32-SSE-NEXT: psllw $5, %xmm3<br>
+; X32-SSE-NEXT: punpckhbw {{.*#+}} xmm4 = xmm4[8],xmm3[8],xmm4[9],xmm3[9],xmm4[10],xmm3[10],xmm4[11],xmm3[11],xmm4[12],xmm3[12],xmm4[13],xmm3[13],xmm4[14],xmm3[14],xmm4[15],xmm3[15]<br>
+; X32-SSE-NEXT: pxor %xmm2, %xmm2<br>
+; X32-SSE-NEXT: pxor %xmm5, %xmm5<br>
+; X32-SSE-NEXT: pcmpgtw %xmm4, %xmm5<br>
+; X32-SSE-NEXT: movdqa %xmm5, %xmm6<br>
+; X32-SSE-NEXT: pandn %xmm1, %xmm6<br>
+; X32-SSE-NEXT: psraw $4, %xmm1<br>
+; X32-SSE-NEXT: pand %xmm5, %xmm1<br>
+; X32-SSE-NEXT: por %xmm6, %xmm1<br>
+; X32-SSE-NEXT: paddw %xmm4, %xmm4<br>
+; X32-SSE-NEXT: pxor %xmm5, %xmm5<br>
+; X32-SSE-NEXT: pcmpgtw %xmm4, %xmm5<br>
+; X32-SSE-NEXT: movdqa %xmm5, %xmm6<br>
+; X32-SSE-NEXT: pandn %xmm1, %xmm6<br>
+; X32-SSE-NEXT: psraw $2, %xmm1<br>
+; X32-SSE-NEXT: pand %xmm5, %xmm1<br>
+; X32-SSE-NEXT: por %xmm6, %xmm1<br>
+; X32-SSE-NEXT: paddw %xmm4, %xmm4<br>
+; X32-SSE-NEXT: pxor %xmm5, %xmm5<br>
+; X32-SSE-NEXT: pcmpgtw %xmm4, %xmm5<br>
+; X32-SSE-NEXT: movdqa %xmm5, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm1, %xmm4<br>
+; X32-SSE-NEXT: psraw $1, %xmm1<br>
+; X32-SSE-NEXT: pand %xmm5, %xmm1<br>
+; X32-SSE-NEXT: por %xmm4, %xmm1<br>
+; X32-SSE-NEXT: psrlw $8, %xmm1<br>
+; X32-SSE-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
+; X32-SSE-NEXT: punpcklbw {{.*#+}} xmm3 = xmm3[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
+; X32-SSE-NEXT: pxor %xmm4, %xmm4<br>
+; X32-SSE-NEXT: pcmpgtw %xmm3, %xmm4<br>
+; X32-SSE-NEXT: movdqa %xmm4, %xmm5<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm5<br>
+; X32-SSE-NEXT: psraw $4, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm4, %xmm0<br>
+; X32-SSE-NEXT: por %xmm5, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pxor %xmm4, %xmm4<br>
+; X32-SSE-NEXT: pcmpgtw %xmm3, %xmm4<br>
+; X32-SSE-NEXT: movdqa %xmm4, %xmm5<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm5<br>
+; X32-SSE-NEXT: psraw $2, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm4, %xmm0<br>
+; X32-SSE-NEXT: por %xmm5, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtw %xmm3, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm3<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm3<br>
+; X32-SSE-NEXT: psraw $1, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm3, %xmm0<br>
+; X32-SSE-NEXT: psrlw $8, %xmm0<br>
+; X32-SSE-NEXT: packuswb %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%splat = shufflevector <16 x i8> %b, <16 x i8> undef, <16 x i32> zeroinitializer<br>
%shift = ashr <16 x i8> %a, %splat<br>
ret <16 x i8> %shift<br>
@@ -670,6 +990,28 @@ define <2 x i64> @constant_shift_v2i64(<<br>
; AVX-NEXT: vmovq %rax, %xmm0<br>
; AVX-NEXT: vpunpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm1[0]<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: constant_shift_v2i64:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,0,1]<br>
+; X32-SSE-NEXT: movd %xmm1, %eax<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm0[3,1,2,3]<br>
+; X32-SSE-NEXT: movd %xmm1, %ecx<br>
+; X32-SSE-NEXT: shrdl $7, %ecx, %eax<br>
+; X32-SSE-NEXT: movd %eax, %xmm1<br>
+; X32-SSE-NEXT: movd %xmm0, %eax<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm0[1,1,2,3]<br>
+; X32-SSE-NEXT: movd %xmm0, %edx<br>
+; X32-SSE-NEXT: shrdl $1, %edx, %eax<br>
+; X32-SSE-NEXT: movd %eax, %xmm0<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]<br>
+; X32-SSE-NEXT: sarl $7, %ecx<br>
+; X32-SSE-NEXT: movd %ecx, %xmm1<br>
+; X32-SSE-NEXT: sarl %edx<br>
+; X32-SSE-NEXT: movd %edx, %xmm2<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm1[0],xmm2[1],xmm1[1]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]<br>
+; X32-SSE-NEXT: retl<br>
%shift = ashr <2 x i64> %a, <i64 1, i64 7><br>
ret <2 x i64> %shift<br>
}<br>
@@ -720,6 +1062,22 @@ define <4 x i32> @constant_shift_v4i32(<<br>
; AVX2: # BB#0:<br>
; AVX2-NEXT: vpsravd {{.*}}(%rip), %xmm0, %xmm0<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: constant_shift_v4i32:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm1<br>
+; X32-SSE-NEXT: psrad $7, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm2<br>
+; X32-SSE-NEXT: psrad $5, %xmm2<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm1 = xmm2[0],xmm1[1]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,3,2,3]<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm2<br>
+; X32-SSE-NEXT: psrad $6, %xmm2<br>
+; X32-SSE-NEXT: psrad $4, %xmm0<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm2 = xmm0[0],xmm2[1]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]<br>
+; X32-SSE-NEXT: retl<br>
%shift = ashr <4 x i32> %a, <i32 4, i32 5, i32 6, i32 7><br>
ret <4 x i32> %shift<br>
}<br>
@@ -789,6 +1147,23 @@ define <8 x i16> @constant_shift_v8i16(<<br>
; AVX2-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,2,2,3]<br>
; AVX2-NEXT: vzeroupper<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: constant_shift_v8i16:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm1<br>
+; X32-SSE-NEXT: psraw $4, %xmm1<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm1 = xmm0[0],xmm1[1]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm1[0,2,2,3]<br>
+; X32-SSE-NEXT: psraw $2, %xmm1<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,3,2,3]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm0[0],xmm2[1],xmm0[1]<br>
+; X32-SSE-NEXT: movdqa {{.*#+}} xmm0 = [65535,0,65535,0,65535,0,65535,0]<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm1<br>
+; X32-SSE-NEXT: pand %xmm0, %xmm1<br>
+; X32-SSE-NEXT: psraw $1, %xmm2<br>
+; X32-SSE-NEXT: pandn %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = ashr <8 x i16> %a, <i16 0, i16 1, i16 2, i16 3, i16 4, i16 5, i16 6, i16 7><br>
ret <8 x i16> %shift<br>
}<br>
@@ -918,6 +1293,65 @@ define <16 x i8> @constant_shift_v16i8(<<br>
; AVX-NEXT: vpsrlw $8, %xmm0, %xmm0<br>
; AVX-NEXT: vpackuswb %xmm2, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: constant_shift_v16i8:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: punpckhbw {{.*#+}} xmm1 = xmm1[8],xmm0[8],xmm1[9],xmm0[9],xmm1[10],xmm0[10],xmm1[11],xmm0[11],xmm1[12],xmm0[12],xmm1[13],xmm0[13],xmm1[14],xmm0[14],xmm1[15],xmm0[15]<br>
+; X32-SSE-NEXT: movdqa {{.*#+}} xmm3 = [0,1,2,3,4,5,6,7,7,6,5,4,3,2,1,0]<br>
+; X32-SSE-NEXT: psllw $5, %xmm3<br>
+; X32-SSE-NEXT: punpckhbw {{.*#+}} xmm4 = xmm4[8],xmm3[8],xmm4[9],xmm3[9],xmm4[10],xmm3[10],xmm4[11],xmm3[11],xmm4[12],xmm3[12],xmm4[13],xmm3[13],xmm4[14],xmm3[14],xmm4[15],xmm3[15]<br>
+; X32-SSE-NEXT: pxor %xmm2, %xmm2<br>
+; X32-SSE-NEXT: pxor %xmm5, %xmm5<br>
+; X32-SSE-NEXT: pcmpgtw %xmm4, %xmm5<br>
+; X32-SSE-NEXT: movdqa %xmm5, %xmm6<br>
+; X32-SSE-NEXT: pandn %xmm1, %xmm6<br>
+; X32-SSE-NEXT: psraw $4, %xmm1<br>
+; X32-SSE-NEXT: pand %xmm5, %xmm1<br>
+; X32-SSE-NEXT: por %xmm6, %xmm1<br>
+; X32-SSE-NEXT: paddw %xmm4, %xmm4<br>
+; X32-SSE-NEXT: pxor %xmm5, %xmm5<br>
+; X32-SSE-NEXT: pcmpgtw %xmm4, %xmm5<br>
+; X32-SSE-NEXT: movdqa %xmm5, %xmm6<br>
+; X32-SSE-NEXT: pandn %xmm1, %xmm6<br>
+; X32-SSE-NEXT: psraw $2, %xmm1<br>
+; X32-SSE-NEXT: pand %xmm5, %xmm1<br>
+; X32-SSE-NEXT: por %xmm6, %xmm1<br>
+; X32-SSE-NEXT: paddw %xmm4, %xmm4<br>
+; X32-SSE-NEXT: pxor %xmm5, %xmm5<br>
+; X32-SSE-NEXT: pcmpgtw %xmm4, %xmm5<br>
+; X32-SSE-NEXT: movdqa %xmm5, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm1, %xmm4<br>
+; X32-SSE-NEXT: psraw $1, %xmm1<br>
+; X32-SSE-NEXT: pand %xmm5, %xmm1<br>
+; X32-SSE-NEXT: por %xmm4, %xmm1<br>
+; X32-SSE-NEXT: psrlw $8, %xmm1<br>
+; X32-SSE-NEXT: punpcklbw {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
+; X32-SSE-NEXT: punpcklbw {{.*#+}} xmm3 = xmm3[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
+; X32-SSE-NEXT: pxor %xmm4, %xmm4<br>
+; X32-SSE-NEXT: pcmpgtw %xmm3, %xmm4<br>
+; X32-SSE-NEXT: movdqa %xmm4, %xmm5<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm5<br>
+; X32-SSE-NEXT: psraw $4, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm4, %xmm0<br>
+; X32-SSE-NEXT: por %xmm5, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pxor %xmm4, %xmm4<br>
+; X32-SSE-NEXT: pcmpgtw %xmm3, %xmm4<br>
+; X32-SSE-NEXT: movdqa %xmm4, %xmm5<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm5<br>
+; X32-SSE-NEXT: psraw $2, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm4, %xmm0<br>
+; X32-SSE-NEXT: por %xmm5, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtw %xmm3, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm3<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm3<br>
+; X32-SSE-NEXT: psraw $1, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm3, %xmm0<br>
+; X32-SSE-NEXT: psrlw $8, %xmm0<br>
+; X32-SSE-NEXT: packuswb %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = ashr <16 x i8> %a, <i8 0, i8 1, i8 2, i8 3, i8 4, i8 5, i8 6, i8 7, i8 7, i8 6, i8 5, i8 4, i8 3, i8 2, i8 1, i8 0><br>
ret <16 x i8> %shift<br>
}<br>
@@ -958,6 +1392,16 @@ define <2 x i64> @splatconstant_shift_v2<br>
; AVX2-NEXT: vpsrlq $7, %xmm0, %xmm0<br>
; AVX2-NEXT: vpblendd {{.*#+}} xmm0 = xmm0[0],xmm1[1],xmm0[2],xmm1[3]<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatconstant_shift_v2i64:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm1<br>
+; X32-SSE-NEXT: psrad $7, %xmm1<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,3,2,3]<br>
+; X32-SSE-NEXT: psrlq $7, %xmm0<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]<br>
+; X32-SSE-NEXT: retl<br>
%shift = ashr <2 x i64> %a, <i64 7, i64 7><br>
ret <2 x i64> %shift<br>
}<br>
@@ -972,6 +1416,11 @@ define <4 x i32> @splatconstant_shift_v4<br>
; AVX: # BB#0:<br>
; AVX-NEXT: vpsrad $5, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatconstant_shift_v4i32:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psrad $5, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = ashr <4 x i32> %a, <i32 5, i32 5, i32 5, i32 5><br>
ret <4 x i32> %shift<br>
}<br>
@@ -986,6 +1435,11 @@ define <8 x i16> @splatconstant_shift_v8<br>
; AVX: # BB#0:<br>
; AVX-NEXT: vpsraw $3, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatconstant_shift_v8i16:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psraw $3, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = ashr <8 x i16> %a, <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3><br>
ret <8 x i16> %shift<br>
}<br>
@@ -1008,6 +1462,15 @@ define <16 x i8> @splatconstant_shift_v1<br>
; AVX-NEXT: vpxor %xmm1, %xmm0, %xmm0<br>
; AVX-NEXT: vpsubb %xmm1, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatconstant_shift_v16i8:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psrlw $3, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI15_0, %xmm0<br>
+; X32-SSE-NEXT: movdqa {{.*#+}} xmm1 = [16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
+; X32-SSE-NEXT: pxor %xmm1, %xmm0<br>
+; X32-SSE-NEXT: psubb %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = ashr <16 x i8> %a, <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3><br>
ret <16 x i8> %shift<br>
}<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll?rev=242273&r1=242272&r2=242273&view=diff
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-shift-lshr-128.ll Wed Jul 15 03:04:07 2015<br>
@@ -2,6 +2,9 @@<br>
; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mcpu=x86-64 -mattr=+sse4.1 | FileCheck %s --check-prefix=ALL --check-prefix=SSE --check-prefix=SSE41<br>
; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mcpu=x86-64 -mattr=+avx | FileCheck %s --check-prefix=ALL --check-prefix=AVX --check-prefix=AVX1<br>
; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mcpu=x86-64 -mattr=+avx2 | FileCheck %s --check-prefix=ALL --check-prefix=AVX --check-prefix=AVX2<br>
+;<br>
+; Just one 32-bit run to make sure we do reasonable things for i64 shifts.<br>
+; RUN: llc < %s -mtriple=i686-unknown-unknown -mcpu=x86-64 | FileCheck %s --check-prefix=ALL --check-prefix=X32-SSE --check-prefix=X32-SSE2<br>
<br>
;<br>
; Variable Shifts<br>
@@ -39,6 +42,17 @@ define <2 x i64> @var_shift_v2i64(<2 x i<br>
; AVX2: # BB#0:<br>
; AVX2-NEXT: vpsrlvq %xmm1, %xmm0, %xmm0<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: var_shift_v2i64:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm3 = xmm1[2,3,0,1]<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm2<br>
+; X32-SSE-NEXT: psrlq %xmm3, %xmm2<br>
+; X32-SSE-NEXT: movq {{.*#+}} xmm1 = xmm1[0],zero<br>
+; X32-SSE-NEXT: psrlq %xmm1, %xmm0<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm2 = xmm0[0],xmm2[1]<br>
+; X32-SSE-NEXT: movapd %xmm2, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = lshr <2 x i64> %a, %b<br>
ret <2 x i64> %shift<br>
}<br>
@@ -109,6 +123,30 @@ define <4 x i32> @var_shift_v4i32(<4 x i<br>
; AVX2: # BB#0:<br>
; AVX2-NEXT: vpsrlvd %xmm1, %xmm0, %xmm0<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: var_shift_v4i32:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: psrldq {{.*#+}} xmm2 = xmm2[12,13,14,15],zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm3<br>
+; X32-SSE-NEXT: psrld %xmm2, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: psrlq $32, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psrld %xmm2, %xmm4<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm3 = xmm4[0],xmm3[1]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm3[1,3,2,3]<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm4<br>
+; X32-SSE-NEXT: punpckhdq {{.*#+}} xmm4 = xmm4[2],xmm3[2],xmm4[3],xmm3[3]<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm5<br>
+; X32-SSE-NEXT: psrld %xmm4, %xmm5<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm1 = xmm1[0],xmm3[0],xmm1[1],xmm3[1]<br>
+; X32-SSE-NEXT: psrld %xmm1, %xmm0<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm5 = xmm0[0],xmm5[1]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm5[0,2,2,3]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]<br>
+; X32-SSE-NEXT: retl<br>
%shift = lshr <4 x i32> %a, %b<br>
ret <4 x i32> %shift<br>
}<br>
@@ -206,6 +244,41 @@ define <8 x i16> @var_shift_v8i16(<8 x i<br>
; AVX2-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,2,2,3]<br>
; AVX2-NEXT: vzeroupper<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: var_shift_v8i16:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psllw $12, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: psraw $15, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm3<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm3<br>
+; X32-SSE-NEXT: psrlw $8, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm3, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm1, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: psraw $15, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm3<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm3<br>
+; X32-SSE-NEXT: psrlw $4, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm3, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm1, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: psraw $15, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm3<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm3<br>
+; X32-SSE-NEXT: psrlw $2, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm3, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm1, %xmm1<br>
+; X32-SSE-NEXT: psraw $15, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm2<br>
+; X32-SSE-NEXT: psrlw $1, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm1, %xmm0<br>
+; X32-SSE-NEXT: por %xmm2, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = lshr <8 x i16> %a, %b<br>
ret <8 x i16> %shift<br>
}<br>
@@ -281,6 +354,37 @@ define <16 x i8> @var_shift_v16i8(<16 x<br>
; AVX-NEXT: vpaddb %xmm1, %xmm1, %xmm1<br>
; AVX-NEXT: vpblendvb %xmm1, %xmm2, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: var_shift_v16i8:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psllw $5, %xmm1<br>
+; X32-SSE-NEXT: pxor %xmm2, %xmm2<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtb %xmm1, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm3, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psrlw $4, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI3_0, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm3, %xmm0<br>
+; X32-SSE-NEXT: por %xmm4, %xmm0<br>
+; X32-SSE-NEXT: paddb %xmm1, %xmm1<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtb %xmm1, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm3, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psrlw $2, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI3_1, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm3, %xmm0<br>
+; X32-SSE-NEXT: por %xmm4, %xmm0<br>
+; X32-SSE-NEXT: paddb %xmm1, %xmm1<br>
+; X32-SSE-NEXT: pcmpgtb %xmm1, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm1<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm1<br>
+; X32-SSE-NEXT: psrlw $1, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI3_2, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = lshr <16 x i8> %a, %b<br>
ret <16 x i8> %shift<br>
}<br>
@@ -299,6 +403,12 @@ define <2 x i64> @splatvar_shift_v2i64(<<br>
; AVX: # BB#0:<br>
; AVX-NEXT: vpsrlq %xmm1, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatvar_shift_v2i64:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movq {{.*#+}} xmm1 = xmm1[0],zero<br>
+; X32-SSE-NEXT: psrlq %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%splat = shufflevector <2 x i64> %b, <2 x i64> undef, <2 x i32> zeroinitializer<br>
%shift = lshr <2 x i64> %a, %splat<br>
ret <2 x i64> %shift<br>
@@ -325,6 +435,13 @@ define <4 x i32> @splatvar_shift_v4i32(<<br>
; AVX-NEXT: vpblendw {{.*#+}} xmm1 = xmm1[0,1],xmm2[2,3,4,5,6,7]<br>
; AVX-NEXT: vpsrld %xmm1, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatvar_shift_v4i32:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: xorps %xmm2, %xmm2<br>
+; X32-SSE-NEXT: movss {{.*#+}} xmm2 = xmm1[0],xmm2[1,2,3]<br>
+; X32-SSE-NEXT: psrld %xmm2, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%splat = shufflevector <4 x i32> %b, <4 x i32> undef, <4 x i32> zeroinitializer<br>
%shift = lshr <4 x i32> %a, %splat<br>
ret <4 x i32> %shift<br>
@@ -352,6 +469,14 @@ define <8 x i16> @splatvar_shift_v8i16(<<br>
; AVX-NEXT: vpblendw {{.*#+}} xmm1 = xmm1[0],xmm2[1,2,3,4,5,6,7]<br>
; AVX-NEXT: vpsrlw %xmm1, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatvar_shift_v8i16:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movd %xmm1, %eax<br>
+; X32-SSE-NEXT: movzwl %ax, %eax<br>
+; X32-SSE-NEXT: movd %eax, %xmm1<br>
+; X32-SSE-NEXT: psrlw %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%splat = shufflevector <8 x i16> %b, <8 x i16> undef, <8 x i32> zeroinitializer<br>
%shift = lshr <8 x i16> %a, %splat<br>
ret <8 x i16> %shift<br>
@@ -454,6 +579,41 @@ define <16 x i8> @splatvar_shift_v16i8(<<br>
; AVX2-NEXT: vpaddb %xmm1, %xmm1, %xmm1<br>
; AVX2-NEXT: vpblendvb %xmm1, %xmm2, %xmm0, %xmm0<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatvar_shift_v16i8:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[0,1,0,3]<br>
+; X32-SSE-NEXT: pshuflw {{.*#+}} xmm1 = xmm1[0,0,0,0,4,5,6,7]<br>
+; X32-SSE-NEXT: pshufhw {{.*#+}} xmm2 = xmm1[0,1,2,3,4,4,4,4]<br>
+; X32-SSE-NEXT: psllw $5, %xmm2<br>
+; X32-SSE-NEXT: pxor %xmm1, %xmm1<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtb %xmm2, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm3, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psrlw $4, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI7_0, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm3, %xmm0<br>
+; X32-SSE-NEXT: por %xmm4, %xmm0<br>
+; X32-SSE-NEXT: paddb %xmm2, %xmm2<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtb %xmm2, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm3, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psrlw $2, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI7_1, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm3, %xmm0<br>
+; X32-SSE-NEXT: por %xmm4, %xmm0<br>
+; X32-SSE-NEXT: paddb %xmm2, %xmm2<br>
+; X32-SSE-NEXT: pcmpgtb %xmm2, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm2<br>
+; X32-SSE-NEXT: psrlw $1, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI7_2, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm1, %xmm0<br>
+; X32-SSE-NEXT: por %xmm2, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%splat = shufflevector <16 x i8> %b, <16 x i8> undef, <16 x i32> zeroinitializer<br>
%shift = lshr <16 x i8> %a, %splat<br>
ret <16 x i8> %shift<br>
@@ -492,6 +652,19 @@ define <2 x i64> @constant_shift_v2i64(<<br>
; AVX2: # BB#0:<br>
; AVX2-NEXT: vpsrlvq {{.*}}(%rip), %xmm0, %xmm0<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: constant_shift_v2i64:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movl $7, %eax<br>
+; X32-SSE-NEXT: movd %eax, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm1<br>
+; X32-SSE-NEXT: psrlq %xmm2, %xmm1<br>
+; X32-SSE-NEXT: movl $1, %eax<br>
+; X32-SSE-NEXT: movd %eax, %xmm2<br>
+; X32-SSE-NEXT: psrlq %xmm2, %xmm0<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm1 = xmm0[0],xmm1[1]<br>
+; X32-SSE-NEXT: movapd %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = lshr <2 x i64> %a, <i64 1, i64 7><br>
ret <2 x i64> %shift<br>
}<br>
@@ -499,49 +672,65 @@ define <2 x i64> @constant_shift_v2i64(<<br>
define <4 x i32> @constant_shift_v4i32(<4 x i32> %a) {<br>
; SSE2-LABEL: constant_shift_v4i32:<br>
; SSE2: # BB#0:<br>
-; SSE2-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE2-NEXT: psrld $7, %xmm1<br>
-; SSE2-NEXT: movdqa %xmm0, %xmm2<br>
-; SSE2-NEXT: psrld $5, %xmm2<br>
-; SSE2-NEXT: movsd {{.*#+}} xmm1 = xmm2[0],xmm1[1]<br>
-; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,3,2,3]<br>
-; SSE2-NEXT: movdqa %xmm0, %xmm2<br>
-; SSE2-NEXT: psrld $6, %xmm2<br>
-; SSE2-NEXT: psrld $4, %xmm0<br>
-; SSE2-NEXT: movsd {{.*#+}} xmm2 = xmm0[0],xmm2[1]<br>
-; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]<br>
-; SSE2-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]<br>
-; SSE2-NEXT: retq<br>
+; SSE2-NEXT: movdqa %xmm0, %xmm1<br>
+; SSE2-NEXT: psrld $7, %xmm1<br>
+; SSE2-NEXT: movdqa %xmm0, %xmm2<br>
+; SSE2-NEXT: psrld $5, %xmm2<br>
+; SSE2-NEXT: movsd {{.*#+}} xmm1 = xmm2[0],xmm1[1]<br>
+; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,3,2,3]<br>
+; SSE2-NEXT: movdqa %xmm0, %xmm2<br>
+; SSE2-NEXT: psrld $6, %xmm2<br>
+; SSE2-NEXT: psrld $4, %xmm0<br>
+; SSE2-NEXT: movsd {{.*#+}} xmm2 = xmm0[0],xmm2[1]<br>
+; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]<br>
+; SSE2-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]<br>
+; SSE2-NEXT: retq<br>
;<br>
; SSE41-LABEL: constant_shift_v4i32:<br>
-; SSE41: # BB#0:<br>
-; SSE41-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE41-NEXT: psrld $7, %xmm1<br>
-; SSE41-NEXT: movdqa %xmm0, %xmm2<br>
-; SSE41-NEXT: psrld $5, %xmm2<br>
-; SSE41-NEXT: pblendw {{.*#+}} xmm2 = xmm2[0,1,2,3],xmm1[4,5,6,7]<br>
-; SSE41-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE41-NEXT: psrld $6, %xmm1<br>
-; SSE41-NEXT: psrld $4, %xmm0<br>
-; SSE41-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1,2,3],xmm1[4,5,6,7]<br>
-; SSE41-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]<br>
-; SSE41-NEXT: retq<br>
+; SSE41: # BB#0:<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm1<br>
+; SSE41-NEXT: psrld $7, %xmm1<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm2<br>
+; SSE41-NEXT: psrld $5, %xmm2<br>
+; SSE41-NEXT: pblendw {{.*#+}} xmm2 = xmm2[0,1,2,3],xmm1[4,5,6,7]<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm1<br>
+; SSE41-NEXT: psrld $6, %xmm1<br>
+; SSE41-NEXT: psrld $4, %xmm0<br>
+; SSE41-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1,2,3],xmm1[4,5,6,7]<br>
+; SSE41-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]<br>
+; SSE41-NEXT: retq<br>
;<br>
; AVX1-LABEL: constant_shift_v4i32:<br>
-; AVX1: # BB#0:<br>
-; AVX1-NEXT: vpsrld $7, %xmm0, %xmm1<br>
-; AVX1-NEXT: vpsrld $5, %xmm0, %xmm2<br>
-; AVX1-NEXT: vpblendw {{.*#+}} xmm1 = xmm2[0,1,2,3],xmm1[4,5,6,7]<br>
-; AVX1-NEXT: vpsrld $6, %xmm0, %xmm2<br>
-; AVX1-NEXT: vpsrld $4, %xmm0, %xmm0<br>
-; AVX1-NEXT: vpblendw {{.*#+}} xmm0 = xmm0[0,1,2,3],xmm2[4,5,6,7]<br>
-; AVX1-NEXT: vpblendw {{.*#+}} xmm0 = xmm0[0,1],xmm1[2,3],xmm0[4,5],xmm1[6,7]<br>
-; AVX1-NEXT: retq<br>
+; AVX1: # BB#0:<br>
+; AVX1-NEXT: vpsrld $7, %xmm0, %xmm1<br>
+; AVX1-NEXT: vpsrld $5, %xmm0, %xmm2<br>
+; AVX1-NEXT: vpblendw {{.*#+}} xmm1 = xmm2[0,1,2,3],xmm1[4,5,6,7]<br>
+; AVX1-NEXT: vpsrld $6, %xmm0, %xmm2<br>
+; AVX1-NEXT: vpsrld $4, %xmm0, %xmm0<br>
+; AVX1-NEXT: vpblendw {{.*#+}} xmm0 = xmm0[0,1,2,3],xmm2[4,5,6,7]<br>
+; AVX1-NEXT: vpblendw {{.*#+}} xmm0 = xmm0[0,1],xmm1[2,3],xmm0[4,5],xmm1[6,7]<br>
+; AVX1-NEXT: retq<br>
;<br>
; AVX2-LABEL: constant_shift_v4i32:<br>
; AVX2: # BB#0:<br>
; AVX2-NEXT: vpsrlvd {{.*}}(%rip), %xmm0, %xmm0<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: constant_shift_v4i32:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm1<br>
+; X32-SSE-NEXT: psrld $7, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm2<br>
+; X32-SSE-NEXT: psrld $5, %xmm2<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm1 = xmm2[0],xmm1[1]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,3,2,3]<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm2<br>
+; X32-SSE-NEXT: psrld $6, %xmm2<br>
+; X32-SSE-NEXT: psrld $4, %xmm0<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm2 = xmm0[0],xmm2[1]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]<br>
+; X32-SSE-NEXT: retl<br>
%shift = lshr <4 x i32> %a, <i32 4, i32 5, i32 6, i32 7><br>
ret <4 x i32> %shift<br>
}<br>
@@ -611,6 +800,23 @@ define <8 x i16> @constant_shift_v8i16(<<br>
; AVX2-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,2,2,3]<br>
; AVX2-NEXT: vzeroupper<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: constant_shift_v8i16:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm1<br>
+; X32-SSE-NEXT: psrlw $4, %xmm1<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm1 = xmm0[0],xmm1[1]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm1[0,2,2,3]<br>
+; X32-SSE-NEXT: psrlw $2, %xmm1<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,3,2,3]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm0[0],xmm2[1],xmm0[1]<br>
+; X32-SSE-NEXT: movdqa {{.*#+}} xmm0 = [65535,0,65535,0,65535,0,65535,0]<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm1<br>
+; X32-SSE-NEXT: pand %xmm0, %xmm1<br>
+; X32-SSE-NEXT: psrlw $1, %xmm2<br>
+; X32-SSE-NEXT: pandn %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = lshr <8 x i16> %a, <i16 0, i16 1, i16 2, i16 3, i16 4, i16 5, i16 6, i16 7><br>
ret <8 x i16> %shift<br>
}<br>
@@ -686,6 +892,38 @@ define <16 x i8> @constant_shift_v16i8(<<br>
; AVX-NEXT: vpaddb %xmm1, %xmm1, %xmm1<br>
; AVX-NEXT: vpblendvb %xmm1, %xmm2, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: constant_shift_v16i8:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movdqa {{.*#+}} xmm2 = [0,1,2,3,4,5,6,7,7,6,5,4,3,2,1,0]<br>
+; X32-SSE-NEXT: psllw $5, %xmm2<br>
+; X32-SSE-NEXT: pxor %xmm1, %xmm1<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtb %xmm2, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm3, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psrlw $4, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI11_1, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm3, %xmm0<br>
+; X32-SSE-NEXT: por %xmm4, %xmm0<br>
+; X32-SSE-NEXT: paddb %xmm2, %xmm2<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtb %xmm2, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm3, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psrlw $2, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI11_2, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm3, %xmm0<br>
+; X32-SSE-NEXT: por %xmm4, %xmm0<br>
+; X32-SSE-NEXT: paddb %xmm2, %xmm2<br>
+; X32-SSE-NEXT: pcmpgtb %xmm2, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm2<br>
+; X32-SSE-NEXT: psrlw $1, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI11_3, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm1, %xmm0<br>
+; X32-SSE-NEXT: por %xmm2, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = lshr <16 x i8> %a, <i8 0, i8 1, i8 2, i8 3, i8 4, i8 5, i8 6, i8 7, i8 7, i8 6, i8 5, i8 4, i8 3, i8 2, i8 1, i8 0><br>
ret <16 x i8> %shift<br>
}<br>
@@ -704,6 +942,11 @@ define <2 x i64> @splatconstant_shift_v2<br>
; AVX: # BB#0:<br>
; AVX-NEXT: vpsrlq $7, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatconstant_shift_v2i64:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psrlq $7, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = lshr <2 x i64> %a, <i64 7, i64 7><br>
ret <2 x i64> %shift<br>
}<br>
@@ -718,6 +961,11 @@ define <4 x i32> @splatconstant_shift_v4<br>
; AVX: # BB#0:<br>
; AVX-NEXT: vpsrld $5, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatconstant_shift_v4i32:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psrld $5, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = lshr <4 x i32> %a, <i32 5, i32 5, i32 5, i32 5><br>
ret <4 x i32> %shift<br>
}<br>
@@ -732,6 +980,11 @@ define <8 x i16> @splatconstant_shift_v8<br>
; AVX: # BB#0:<br>
; AVX-NEXT: vpsrlw $3, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatconstant_shift_v8i16:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psrlw $3, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = lshr <8 x i16> %a, <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3><br>
ret <8 x i16> %shift<br>
}<br>
@@ -748,6 +1001,12 @@ define <16 x i8> @splatconstant_shift_v1<br>
; AVX-NEXT: vpsrlw $3, %xmm0, %xmm0<br>
; AVX-NEXT: vpand {{.*}}(%rip), %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatconstant_shift_v16i8:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psrlw $3, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI15_0, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = lshr <16 x i8> %a, <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3><br>
ret <16 x i8> %shift<br>
}<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll?rev=242273&r1=242272&r2=242273&view=diff
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-shift-shl-128.ll Wed Jul 15 03:04:07 2015<br>
@@ -2,6 +2,9 @@<br>
; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mcpu=x86-64 -mattr=+sse4.1 | FileCheck %s --check-prefix=ALL --check-prefix=SSE --check-prefix=SSE41<br>
; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mcpu=x86-64 -mattr=+avx | FileCheck %s --check-prefix=ALL --check-prefix=AVX --check-prefix=AVX1<br>
; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mcpu=x86-64 -mattr=+avx2 | FileCheck %s --check-prefix=ALL --check-prefix=AVX --check-prefix=AVX2<br>
+;<br>
+; Just one 32-bit run to make sure we do reasonable things for i64 shifts.<br>
+; RUN: llc < %s -mtriple=i686-unknown-unknown -mcpu=x86-64 | FileCheck %s --check-prefix=ALL --check-prefix=X32-SSE --check-prefix=X32-SSE2<br>
<br>
;<br>
; Variable Shifts<br>
@@ -39,6 +42,17 @@ define <2 x i64> @var_shift_v2i64(<2 x i<br>
; AVX2: # BB#0:<br>
; AVX2-NEXT: vpsllvq %xmm1, %xmm0, %xmm0<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: var_shift_v2i64:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm3 = xmm1[2,3,0,1]<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm2<br>
+; X32-SSE-NEXT: psllq %xmm3, %xmm2<br>
+; X32-SSE-NEXT: movq {{.*#+}} xmm1 = xmm1[0],zero<br>
+; X32-SSE-NEXT: psllq %xmm1, %xmm0<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm2 = xmm0[0],xmm2[1]<br>
+; X32-SSE-NEXT: movapd %xmm2, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = shl <2 x i64> %a, %b<br>
ret <2 x i64> %shift<br>
}<br>
@@ -79,6 +93,21 @@ define <4 x i32> @var_shift_v4i32(<4 x i<br>
; AVX2: # BB#0:<br>
; AVX2-NEXT: vpsllvd %xmm1, %xmm0, %xmm0<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: var_shift_v4i32:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: pslld $23, %xmm1<br>
+; X32-SSE-NEXT: paddd .LCPI1_0, %xmm1<br>
+; X32-SSE-NEXT: cvttps2dq %xmm1, %xmm1<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm1[1,1,3,3]<br>
+; X32-SSE-NEXT: pmuludq %xmm0, %xmm1<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm0[1,1,3,3]<br>
+; X32-SSE-NEXT: pmuludq %xmm2, %xmm0<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm1 = xmm1[0],xmm0[0],xmm1[1],xmm0[1]<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = shl <4 x i32> %a, %b<br>
ret <4 x i32> %shift<br>
}<br>
@@ -176,6 +205,41 @@ define <8 x i16> @var_shift_v8i16(<8 x i<br>
; AVX2-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,2,2,3]<br>
; AVX2-NEXT: vzeroupper<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: var_shift_v8i16:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psllw $12, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: psraw $15, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm3<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm3<br>
+; X32-SSE-NEXT: psllw $8, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm3, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm1, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: psraw $15, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm3<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm3<br>
+; X32-SSE-NEXT: psllw $4, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm3, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm1, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: psraw $15, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm3<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm3<br>
+; X32-SSE-NEXT: psllw $2, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm3, %xmm0<br>
+; X32-SSE-NEXT: paddw %xmm1, %xmm1<br>
+; X32-SSE-NEXT: psraw $15, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm2<br>
+; X32-SSE-NEXT: psllw $1, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm1, %xmm0<br>
+; X32-SSE-NEXT: por %xmm2, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = shl <8 x i16> %a, %b<br>
ret <8 x i16> %shift<br>
}<br>
@@ -248,6 +312,36 @@ define <16 x i8> @var_shift_v16i8(<16 x<br>
; AVX-NEXT: vpaddb %xmm1, %xmm1, %xmm1<br>
; AVX-NEXT: vpblendvb %xmm1, %xmm2, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: var_shift_v16i8:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psllw $5, %xmm1<br>
+; X32-SSE-NEXT: pxor %xmm2, %xmm2<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtb %xmm1, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm3, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psllw $4, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI3_0, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm3, %xmm0<br>
+; X32-SSE-NEXT: por %xmm4, %xmm0<br>
+; X32-SSE-NEXT: paddb %xmm1, %xmm1<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtb %xmm1, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm3, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psllw $2, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI3_1, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm3, %xmm0<br>
+; X32-SSE-NEXT: por %xmm4, %xmm0<br>
+; X32-SSE-NEXT: paddb %xmm1, %xmm1<br>
+; X32-SSE-NEXT: pcmpgtb %xmm1, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm2, %xmm1<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm1<br>
+; X32-SSE-NEXT: paddb %xmm0, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm2, %xmm0<br>
+; X32-SSE-NEXT: por %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = shl <16 x i8> %a, %b<br>
ret <16 x i8> %shift<br>
}<br>
@@ -266,6 +360,12 @@ define <2 x i64> @splatvar_shift_v2i64(<<br>
; AVX: # BB#0:<br>
; AVX-NEXT: vpsllq %xmm1, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatvar_shift_v2i64:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movq {{.*#+}} xmm1 = xmm1[0],zero<br>
+; X32-SSE-NEXT: psllq %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%splat = shufflevector <2 x i64> %b, <2 x i64> undef, <2 x i32> zeroinitializer<br>
%shift = shl <2 x i64> %a, %splat<br>
ret <2 x i64> %shift<br>
@@ -292,6 +392,13 @@ define <4 x i32> @splatvar_shift_v4i32(<<br>
; AVX-NEXT: vpblendw {{.*#+}} xmm1 = xmm1[0,1],xmm2[2,3,4,5,6,7]<br>
; AVX-NEXT: vpslld %xmm1, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatvar_shift_v4i32:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: xorps %xmm2, %xmm2<br>
+; X32-SSE-NEXT: movss {{.*#+}} xmm2 = xmm1[0],xmm2[1,2,3]<br>
+; X32-SSE-NEXT: pslld %xmm2, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%splat = shufflevector <4 x i32> %b, <4 x i32> undef, <4 x i32> zeroinitializer<br>
%shift = shl <4 x i32> %a, %splat<br>
ret <4 x i32> %shift<br>
@@ -319,6 +426,14 @@ define <8 x i16> @splatvar_shift_v8i16(<<br>
; AVX-NEXT: vpblendw {{.*#+}} xmm1 = xmm1[0],xmm2[1,2,3,4,5,6,7]<br>
; AVX-NEXT: vpsllw %xmm1, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatvar_shift_v8i16:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movd %xmm1, %eax<br>
+; X32-SSE-NEXT: movzwl %ax, %eax<br>
+; X32-SSE-NEXT: movd %eax, %xmm1<br>
+; X32-SSE-NEXT: psllw %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%splat = shufflevector <8 x i16> %b, <8 x i16> undef, <8 x i32> zeroinitializer<br>
%shift = shl <8 x i16> %a, %splat<br>
ret <8 x i16> %shift<br>
@@ -417,6 +532,40 @@ define <16 x i8> @splatvar_shift_v16i8(<<br>
; AVX2-NEXT: vpaddb %xmm1, %xmm1, %xmm1<br>
; AVX2-NEXT: vpblendvb %xmm1, %xmm2, %xmm0, %xmm0<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatvar_shift_v16i8:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: punpcklbw {{.*#+}} xmm1 = xmm1[0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[0,1,0,3]<br>
+; X32-SSE-NEXT: pshuflw {{.*#+}} xmm1 = xmm1[0,0,0,0,4,5,6,7]<br>
+; X32-SSE-NEXT: pshufhw {{.*#+}} xmm2 = xmm1[0,1,2,3,4,4,4,4]<br>
+; X32-SSE-NEXT: psllw $5, %xmm2<br>
+; X32-SSE-NEXT: pxor %xmm1, %xmm1<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtb %xmm2, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm3, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psllw $4, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI7_0, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm3, %xmm0<br>
+; X32-SSE-NEXT: por %xmm4, %xmm0<br>
+; X32-SSE-NEXT: paddb %xmm2, %xmm2<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtb %xmm2, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm3, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psllw $2, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI7_1, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm3, %xmm0<br>
+; X32-SSE-NEXT: por %xmm4, %xmm0<br>
+; X32-SSE-NEXT: paddb %xmm2, %xmm2<br>
+; X32-SSE-NEXT: pcmpgtb %xmm2, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm2<br>
+; X32-SSE-NEXT: paddb %xmm0, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm1, %xmm0<br>
+; X32-SSE-NEXT: por %xmm2, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%splat = shufflevector <16 x i8> %b, <16 x i8> undef, <16 x i32> zeroinitializer<br>
%shift = shl <16 x i8> %a, %splat<br>
ret <16 x i8> %shift<br>
@@ -455,6 +604,19 @@ define <2 x i64> @constant_shift_v2i64(<<br>
; AVX2: # BB#0:<br>
; AVX2-NEXT: vpsllvq {{.*}}(%rip), %xmm0, %xmm0<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: constant_shift_v2i64:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movl $7, %eax<br>
+; X32-SSE-NEXT: movd %eax, %xmm2<br>
+; X32-SSE-NEXT: movdqa %xmm0, %xmm1<br>
+; X32-SSE-NEXT: psllq %xmm2, %xmm1<br>
+; X32-SSE-NEXT: movl $1, %eax<br>
+; X32-SSE-NEXT: movd %eax, %xmm2<br>
+; X32-SSE-NEXT: psllq %xmm2, %xmm0<br>
+; X32-SSE-NEXT: movsd {{.*#+}} xmm1 = xmm0[0],xmm1[1]<br>
+; X32-SSE-NEXT: movapd %xmm1, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = shl <2 x i64> %a, <i64 1, i64 7><br>
ret <2 x i64> %shift<br>
}<br>
@@ -486,6 +648,18 @@ define <4 x i32> @constant_shift_v4i32(<<br>
; AVX2: # BB#0:<br>
; AVX2-NEXT: vpsllvd {{.*}}(%rip), %xmm0, %xmm0<br>
; AVX2-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: constant_shift_v4i32:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movdqa {{.*#+}} xmm1 = [16,32,64,128]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm0[1,1,3,3]<br>
+; X32-SSE-NEXT: pmuludq %xmm1, %xmm0<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,1,3,3]<br>
+; X32-SSE-NEXT: pmuludq %xmm2, %xmm1<br>
+; X32-SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]<br>
+; X32-SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]<br>
+; X32-SSE-NEXT: retl<br>
%shift = shl <4 x i32> %a, <i32 4, i32 5, i32 6, i32 7><br>
ret <4 x i32> %shift<br>
}<br>
@@ -500,6 +674,11 @@ define <8 x i16> @constant_shift_v8i16(<<br>
; AVX: # BB#0:<br>
; AVX-NEXT: vpmullw {{.*}}(%rip), %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: constant_shift_v8i16:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: pmullw .LCPI10_0, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = shl <8 x i16> %a, <i16 0, i16 1, i16 2, i16 3, i16 4, i16 5, i16 6, i16 7><br>
ret <8 x i16> %shift<br>
}<br>
@@ -572,6 +751,37 @@ define <16 x i8> @constant_shift_v16i8(<<br>
; AVX-NEXT: vpaddb %xmm1, %xmm1, %xmm1<br>
; AVX-NEXT: vpblendvb %xmm1, %xmm2, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: constant_shift_v16i8:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: movdqa {{.*#+}} xmm2 = [0,1,2,3,4,5,6,7,7,6,5,4,3,2,1,0]<br>
+; X32-SSE-NEXT: psllw $5, %xmm2<br>
+; X32-SSE-NEXT: pxor %xmm1, %xmm1<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtb %xmm2, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm3, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psllw $4, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI11_1, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm3, %xmm0<br>
+; X32-SSE-NEXT: por %xmm4, %xmm0<br>
+; X32-SSE-NEXT: paddb %xmm2, %xmm2<br>
+; X32-SSE-NEXT: pxor %xmm3, %xmm3<br>
+; X32-SSE-NEXT: pcmpgtb %xmm2, %xmm3<br>
+; X32-SSE-NEXT: movdqa %xmm3, %xmm4<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm4<br>
+; X32-SSE-NEXT: psllw $2, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI11_2, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm3, %xmm0<br>
+; X32-SSE-NEXT: por %xmm4, %xmm0<br>
+; X32-SSE-NEXT: paddb %xmm2, %xmm2<br>
+; X32-SSE-NEXT: pcmpgtb %xmm2, %xmm1<br>
+; X32-SSE-NEXT: movdqa %xmm1, %xmm2<br>
+; X32-SSE-NEXT: pandn %xmm0, %xmm2<br>
+; X32-SSE-NEXT: paddb %xmm0, %xmm0<br>
+; X32-SSE-NEXT: pand %xmm1, %xmm0<br>
+; X32-SSE-NEXT: por %xmm2, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = shl <16 x i8> %a, <i8 0, i8 1, i8 2, i8 3, i8 4, i8 5, i8 6, i8 7, i8 7, i8 6, i8 5, i8 4, i8 3, i8 2, i8 1, i8 0><br>
ret <16 x i8> %shift<br>
}<br>
@@ -590,6 +800,11 @@ define <2 x i64> @splatconstant_shift_v2<br>
; AVX: # BB#0:<br>
; AVX-NEXT: vpsllq $7, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatconstant_shift_v2i64:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psllq $7, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = shl <2 x i64> %a, <i64 7, i64 7><br>
ret <2 x i64> %shift<br>
}<br>
@@ -604,6 +819,11 @@ define <4 x i32> @splatconstant_shift_v4<br>
; AVX: # BB#0:<br>
; AVX-NEXT: vpslld $5, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatconstant_shift_v4i32:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: pslld $5, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = shl <4 x i32> %a, <i32 5, i32 5, i32 5, i32 5><br>
ret <4 x i32> %shift<br>
}<br>
@@ -618,6 +838,11 @@ define <8 x i16> @splatconstant_shift_v8<br>
; AVX: # BB#0:<br>
; AVX-NEXT: vpsllw $3, %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatconstant_shift_v8i16:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psllw $3, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = shl <8 x i16> %a, <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3><br>
ret <8 x i16> %shift<br>
}<br>
@@ -634,6 +859,12 @@ define <16 x i8> @splatconstant_shift_v1<br>
; AVX-NEXT: vpsllw $3, %xmm0, %xmm0<br>
; AVX-NEXT: vpand {{.*}}(%rip), %xmm0, %xmm0<br>
; AVX-NEXT: retq<br>
+;<br>
+; X32-SSE-LABEL: splatconstant_shift_v16i8:<br>
+; X32-SSE: # BB#0:<br>
+; X32-SSE-NEXT: psllw $3, %xmm0<br>
+; X32-SSE-NEXT: pand .LCPI15_0, %xmm0<br>
+; X32-SSE-NEXT: retl<br>
%shift = shl <16 x i8> %a, <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3><br>
ret <16 x i8> %shift<br>
}<br>
<br>
<br>
_______________________________________________
llvm-commits mailing list
llvm-commits@cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits