<div dir="ltr">Oops, I was building with assertions off.<div><br></div><div>Have managed to reproduce now: </div><div>1) build clang with -DCMAKE_BUILD_TYPE=release -DCMAKE_CXX_FLAGS_RELEASE="-O3 -DNDEBUG -msse4.2" -DCMAKE_C_FLAGS_RELEASE="-O3 -DNDEBUG -msse4.2" -DLLVM_ENABLE_ASSERTIONS=Off</div><div>2) use that to build another copy of clang with the same options</div><div>3) stage2/bin/clang -c foo.c (where foo.c is the reproducer above)</div><div><br></div><div>Going to revert now.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Feb 11, 2019 at 1:07 PM Sam McCall <<a href="mailto:sammccall@google.com">sammccall@google.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr">Have also seen the problems David mentions. The nature of the patch and the error makes me think a miscompile is likely.<div><br><div>Unfortunately so far I have only managed to reproduce this in our internal CI, not with a plain release build from upstream. This is probably a "hold it just right to reproduce" problem. I'll try to find the right set of optimization flags to trigger it.</div></div><div><br></div><div>My minimal reproducer (just "clang foo.c" will reproduce, add -Werror for a failure exit):</div><div>---</div><div><div>int format(const char *, ...) 
__attribute__((__format__(__printf__, 1, 2)));</div><div>#define PASTE_AND_FORMAT(a, b) format(#a #b)</div><div>#define MACRO2(x) PASTE_AND_FORMAT(a, x)</div><div>void run() { MACRO2(a + b + c); }</div></div><div>---</div><div><div>/usr/local/google/home/sammccall/elfcore.c:4:14: warning: format string contains '\0' within the string body [-Wformat]</div><div>void run() { MACRO2(a + b + c); }</div><div> ^~~~~~~~~~~~~~~~~</div><div>/usr/local/google/home/sammccall/elfcore.c:3:19: note: expanded from macro 'MACRO2'</div><div>#define MACRO2(x) PASTE_AND_FORMAT(a, x)</div><div> ^~~~~~~~~~~~~~~~~~~~~~</div><div>/usr/local/google/home/sammccall/elfcore.c:2:42: note: expanded from macro 'PASTE_AND_FORMAT'</div><div>#define PASTE_AND_FORMAT(a, b) format(#a #b)</div><div> ~~~^~</div><div><scratch space>:3:8: note: expanded from here</div><div>"a P b <U+0000> c"</div><div> ^</div><div>1 warning generated.</div></div><div><br></div><div><br></div><div><br></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Feb 10, 2019 at 7:27 AM David Jones via llvm-commits <<a href="mailto:llvm-commits@lists.llvm.org" target="_blank">llvm-commits@lists.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr">FYI, I'm seeing some stage2 miscompiles with this change (i.e., stage2 cannot recompile itself).<div><br></div><div>Being the weekend, I haven't had time to synthesize a reproducer, but the error I first saw looks like this:</div><div><br></div><div><div><redacted loc>: error: format string contains '\0' within the string body [-Werror,-Wformat]</div><div> CHECK_EQ(*xxx_yyyy, sizeof(AaaB(Cddd)) + eee_ffff * sizeof(gggg));</div><div> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~</div><div><redacted loc>: note: expanded from macro 'CHECK_EQ'</div><div> { 
CHECK_OP(val1, ==, val2); }</div><div> ^~~~~~~~~~~~~~~~~~~~~~~~</div><div><redacted loc>: note: expanded from macro 'CHECK_OP'</div><div> DebugLog(g_debug_fd, "%s:%d Expected " #val1 " " #op " " #val2, __FILE__, \</div><div> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~</div><div><scratch space>:70:32: note: expanded from here</div><div>"sizeof(AaaB_Cddd) { eee_ffff <U+0000> sizeof(gggg)"</div><div> ^</div></div><div><br></div><div>Since it's not clear: there are several layers of tokens being pasted here, but the final pasted string (in scratch space) has flipped some bits:</div><div>+ 0b0010'1011 --> { 0b0111'1011<br></div><div>* 0b0010'1010 --> \0</div><div><br></div></div></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Feb 9, 2019 at 5:13 AM Simon Pilgrim via llvm-commits <<a href="mailto:llvm-commits@lists.llvm.org" target="_blank">llvm-commits@lists.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Author: rksimon<br>
Date: Sat Feb 9 05:13:59 2019<br>
New Revision: 353610<br>
<br>
URL: <a href="http://llvm.org/viewvc/llvm-project?rev=353610&view=rev" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project?rev=353610&view=rev</a><br>
Log:<br>
[X86][SSE] Generalize X86ISD::BLENDI support to more value types<br>
<br>
D42042 gave the ExecutionDomainFixPass the ability to easily switch between BLENDPD/BLENDPS/PBLENDW as the execution domain requires.<br>
<br>
With this ability, we can avoid most of the bitcasting/scaling in the DAG that occurred during X86ISD::BLENDI lowering/combining, blend the vXi32/vXi64 vectors directly, and use isel patterns to lower to the equivalent float vector instructions.<br>
<br>
This helps shuffle combining and SimplifyDemandedVectorElts be more aggressive, as we lose track of fewer UNDEF elements than when we convert up/down through bitcasts.<br>
<br>
I've introduced a basic blend(bitcast(x),bitcast(y)) -> bitcast(blend(x,y)) fold; there are more generalizations to do there (e.g. widening/scaling and handling the tricky v16i16 repeated-mask case).<br>
<br>
The vector-reduce-smin/smax regressions will be fixed in a future improvement to SimplifyDemandedBits to peek through bitcasts and support X86ISD::BLENDV.<br>
<br>
Differential Revision: <a href="https://reviews.llvm.org/D57888" rel="noreferrer" target="_blank">https://reviews.llvm.org/D57888</a><br>
<br>
Modified:<br>
llvm/trunk/lib/Target/X86/X86ISelLowering.cpp<br>
llvm/trunk/lib/Target/X86/X86InstrSSE.td<br>
llvm/trunk/test/CodeGen/X86/avx512-shuffles/partial_permute.ll<br>
llvm/trunk/test/CodeGen/X86/combine-sdiv.ll<br>
llvm/trunk/test/CodeGen/X86/insertelement-ones.ll<br>
llvm/trunk/test/CodeGen/X86/known-signbits-vector.ll<br>
llvm/trunk/test/CodeGen/X86/masked_load.ll<br>
llvm/trunk/test/CodeGen/X86/masked_store.ll<br>
llvm/trunk/test/CodeGen/X86/oddshuffles.ll<br>
llvm/trunk/test/CodeGen/X86/packss.ll<br>
llvm/trunk/test/CodeGen/X86/pr34592.ll<br>
llvm/trunk/test/CodeGen/X86/prefer-avx256-mask-shuffle.ll<br>
llvm/trunk/test/CodeGen/X86/sse2.ll<br>
llvm/trunk/test/CodeGen/X86/vector-reduce-smax.ll<br>
llvm/trunk/test/CodeGen/X86/vector-reduce-smin.ll<br>
llvm/trunk/test/CodeGen/X86/vector-shift-ashr-256.ll<br>
llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v8.ll<br>
llvm/trunk/test/CodeGen/X86/vector-shuffle-256-v16.ll<br>
llvm/trunk/test/CodeGen/X86/vector-shuffle-256-v32.ll<br>
<br>
Modified: llvm/trunk/lib/Target/X86/X86ISelLowering.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.cpp?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.cpp?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/Target/X86/X86ISelLowering.cpp (original)<br>
+++ llvm/trunk/lib/Target/X86/X86ISelLowering.cpp Sat Feb 9 05:13:59 2019<br>
@@ -10408,45 +10408,24 @@ static SDValue lowerShuffleAsBlend(const<br>
V2 = getZeroVector(VT, Subtarget, DAG, DL);<br>
<br>
switch (VT.SimpleTy) {<br>
- case MVT::v2f64:<br>
- case MVT::v4f32:<br>
- case MVT::v4f64:<br>
- case MVT::v8f32:<br>
- return DAG.getNode(X86ISD::BLENDI, DL, VT, V1, V2,<br>
- DAG.getConstant(BlendMask, DL, MVT::i8));<br>
case MVT::v4i64:<br>
case MVT::v8i32:<br>
assert(Subtarget.hasAVX2() && "256-bit integer blends require AVX2!");<br>
LLVM_FALLTHROUGH;<br>
+ case MVT::v4f64:<br>
+ case MVT::v8f32:<br>
+ assert(Subtarget.hasAVX() && "256-bit float blends require AVX!");<br>
+ LLVM_FALLTHROUGH;<br>
+ case MVT::v2f64:<br>
case MVT::v2i64:<br>
+ case MVT::v4f32:<br>
case MVT::v4i32:<br>
- // If we have AVX2 it is faster to use VPBLENDD when the shuffle fits into<br>
- // that instruction.<br>
- if (Subtarget.hasAVX2()) {<br>
- // Scale the blend by the number of 32-bit dwords per element.<br>
- int Scale = VT.getScalarSizeInBits() / 32;<br>
- BlendMask = scaleVectorShuffleBlendMask(BlendMask, Mask.size(), Scale);<br>
- MVT BlendVT = VT.getSizeInBits() > 128 ? MVT::v8i32 : MVT::v4i32;<br>
- V1 = DAG.getBitcast(BlendVT, V1);<br>
- V2 = DAG.getBitcast(BlendVT, V2);<br>
- return DAG.getBitcast(<br>
- VT, DAG.getNode(X86ISD::BLENDI, DL, BlendVT, V1, V2,<br>
- DAG.getConstant(BlendMask, DL, MVT::i8)));<br>
- }<br>
- LLVM_FALLTHROUGH;<br>
- case MVT::v8i16: {<br>
- // For integer shuffles we need to expand the mask and cast the inputs to<br>
- // v8i16s prior to blending.<br>
- int Scale = 8 / VT.getVectorNumElements();<br>
- BlendMask = scaleVectorShuffleBlendMask(BlendMask, Mask.size(), Scale);<br>
- V1 = DAG.getBitcast(MVT::v8i16, V1);<br>
- V2 = DAG.getBitcast(MVT::v8i16, V2);<br>
- return DAG.getBitcast(VT,<br>
- DAG.getNode(X86ISD::BLENDI, DL, MVT::v8i16, V1, V2,<br>
- DAG.getConstant(BlendMask, DL, MVT::i8)));<br>
- }<br>
+ case MVT::v8i16:<br>
+ assert(Subtarget.hasSSE41() && "128-bit blends require SSE41!");<br>
+ return DAG.getNode(X86ISD::BLENDI, DL, VT, V1, V2,<br>
+ DAG.getConstant(BlendMask, DL, MVT::i8));<br>
case MVT::v16i16: {<br>
- assert(Subtarget.hasAVX2() && "256-bit integer blends require AVX2!");<br>
+ assert(Subtarget.hasAVX2() && "v16i16 blends require AVX2!");<br>
SmallVector<int, 8> RepeatedMask;<br>
if (is128BitLaneRepeatedShuffleMask(MVT::v16i16, Mask, RepeatedMask)) {<br>
// We can lower these with PBLENDW which is mirrored across 128-bit lanes.<br>
@@ -10474,10 +10453,11 @@ static SDValue lowerShuffleAsBlend(const<br>
}<br>
LLVM_FALLTHROUGH;<br>
}<br>
- case MVT::v16i8:<br>
- case MVT::v32i8: {<br>
- assert((VT.is128BitVector() || Subtarget.hasAVX2()) &&<br>
- "256-bit byte-blends require AVX2 support!");<br>
+ case MVT::v32i8:<br>
+ assert(Subtarget.hasAVX2() && "256-bit byte-blends require AVX2!");<br>
+ LLVM_FALLTHROUGH;<br>
+ case MVT::v16i8: {<br>
+ assert(Subtarget.hasSSE41() && "128-bit byte-blends require SSE41!");<br>
<br>
// Attempt to lower to a bitmask if we can. VPAND is faster than VPBLENDVB.<br>
if (SDValue Masked = lowerShuffleAsBitMask(DL, VT, V1, V2, Mask, Zeroable,<br>
@@ -30973,34 +30953,11 @@ static bool matchBinaryPermuteShuffle(<br>
return true;<br>
}<br>
} else {<br>
- // Determine a type compatible with X86ISD::BLENDI.<br>
- ShuffleVT = MaskVT;<br>
- if (Subtarget.hasAVX2()) {<br>
- if (ShuffleVT == MVT::v4i64)<br>
- ShuffleVT = MVT::v8i32;<br>
- else if (ShuffleVT == MVT::v2i64)<br>
- ShuffleVT = MVT::v4i32;<br>
- } else {<br>
- if (ShuffleVT == MVT::v2i64 || ShuffleVT == MVT::v4i32)<br>
- ShuffleVT = MVT::v8i16;<br>
- else if (ShuffleVT == MVT::v4i64)<br>
- ShuffleVT = MVT::v4f64;<br>
- else if (ShuffleVT == MVT::v8i32)<br>
- ShuffleVT = MVT::v8f32;<br>
- }<br>
-<br>
- if (!ShuffleVT.isFloatingPoint()) {<br>
- int Scale = EltSizeInBits / ShuffleVT.getScalarSizeInBits();<br>
- BlendMask =<br>
- scaleVectorShuffleBlendMask(BlendMask, NumMaskElts, Scale);<br>
- ShuffleVT = MVT::getIntegerVT(EltSizeInBits / Scale);<br>
- ShuffleVT = MVT::getVectorVT(ShuffleVT, NumMaskElts * Scale);<br>
- }<br>
-<br>
V1 = ForceV1Zero ? getZeroVector(MaskVT, Subtarget, DAG, DL) : V1;<br>
V2 = ForceV2Zero ? getZeroVector(MaskVT, Subtarget, DAG, DL) : V2;<br>
PermuteImm = (unsigned)BlendMask;<br>
Shuffle = X86ISD::BLENDI;<br>
+ ShuffleVT = MaskVT;<br>
return true;<br>
}<br>
}<br>
@@ -32165,6 +32122,29 @@ static SDValue combineTargetShuffle(SDVa<br>
<br>
return SDValue();<br>
}<br>
+ case X86ISD::BLENDI: {<br>
+ SDValue N0 = N.getOperand(0);<br>
+ SDValue N1 = N.getOperand(1);<br>
+<br>
+ // blend(bitcast(x),bitcast(y)) -> bitcast(blend(x,y)) to narrower types.<br>
+ // TODO: Handle MVT::v16i16 repeated blend mask.<br>
+ if (N0.getOpcode() == ISD::BITCAST && N1.getOpcode() == ISD::BITCAST &&<br>
+ N0.getOperand(0).getValueType() == N1.getOperand(0).getValueType()) {<br>
+ MVT SrcVT = N0.getOperand(0).getSimpleValueType();<br>
+ if ((VT.getScalarSizeInBits() % SrcVT.getScalarSizeInBits()) == 0 &&<br>
+ SrcVT.getScalarSizeInBits() >= 32) {<br>
+ unsigned Mask = N.getConstantOperandVal(2);<br>
+ unsigned Size = VT.getVectorNumElements();<br>
+ unsigned Scale = VT.getScalarSizeInBits() / SrcVT.getScalarSizeInBits();<br>
+ unsigned ScaleMask = scaleVectorShuffleBlendMask(Mask, Size, Scale);<br>
+ return DAG.getBitcast(<br>
+ VT, DAG.getNode(X86ISD::BLENDI, DL, SrcVT, N0.getOperand(0),<br>
+ N1.getOperand(0),<br>
+ DAG.getConstant(ScaleMask, DL, MVT::i8)));<br>
+ }<br>
+ }<br>
+ return SDValue();<br>
+ }<br>
case X86ISD::PSHUFD:<br>
case X86ISD::PSHUFLW:<br>
case X86ISD::PSHUFHW:<br>
<br>
Modified: llvm/trunk/lib/Target/X86/X86InstrSSE.td<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrSSE.td?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrSSE.td?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/Target/X86/X86InstrSSE.td (original)<br>
+++ llvm/trunk/lib/Target/X86/X86InstrSSE.td Sat Feb 9 05:13:59 2019<br>
@@ -6513,6 +6513,22 @@ let Predicates = [HasAVX2] in {<br>
VEX_4V, VEX_L, VEX_WIG;<br>
}<br>
<br>
+// Emulate vXi32/vXi64 blends with vXf32/vXf64.<br>
+// ExecutionDomainFixPass will cleanup domains later on.<br>
+let Predicates = [HasAVX] in {<br>
+def : Pat<(X86Blendi (v4i64 VR256:$src1), (v4i64 VR256:$src2), (iPTR imm:$src3)),<br>
+ (VBLENDPDYrri VR256:$src1, VR256:$src2, imm:$src3)>;<br>
+def : Pat<(X86Blendi (v2i64 VR128:$src1), (v2i64 VR128:$src2), (iPTR imm:$src3)),<br>
+ (VBLENDPDrri VR128:$src1, VR128:$src2, imm:$src3)>;<br>
+}<br>
+<br>
+let Predicates = [HasAVX1Only] in {<br>
+def : Pat<(X86Blendi (v8i32 VR256:$src1), (v8i32 VR256:$src2), (iPTR imm:$src3)),<br>
+ (VBLENDPSYrri VR256:$src1, VR256:$src2, imm:$src3)>;<br>
+def : Pat<(X86Blendi (v4i32 VR128:$src1), (v4i32 VR128:$src2), (iPTR imm:$src3)),<br>
+ (VBLENDPSrri VR128:$src1, VR128:$src2, imm:$src3)>;<br>
+}<br>
+<br>
defm BLENDPS : SS41I_blend_rmi<0x0C, "blendps", X86Blendi, v4f32,<br>
VR128, memop, f128mem, 1, SSEPackedSingle,<br>
SchedWriteFBlend.XMM, BlendCommuteImm4>;<br>
@@ -6523,6 +6539,13 @@ defm PBLENDW : SS41I_blend_rmi<0x0E, "pb<br>
VR128, memop, i128mem, 1, SSEPackedInt,<br>
SchedWriteBlend.XMM, BlendCommuteImm8>;<br>
<br>
+let Predicates = [UseSSE41] in {<br>
+def : Pat<(X86Blendi (v2i64 VR128:$src1), (v2i64 VR128:$src2), (iPTR imm:$src3)),<br>
+ (BLENDPDrri VR128:$src1, VR128:$src2, imm:$src3)>;<br>
+def : Pat<(X86Blendi (v4i32 VR128:$src1), (v4i32 VR128:$src2), (iPTR imm:$src3)),<br>
+ (BLENDPSrri VR128:$src1, VR128:$src2, imm:$src3)>;<br>
+}<br>
+<br>
// For insertion into the zero index (low half) of a 256-bit vector, it is<br>
// more efficient to generate a blend with immediate instead of an insert*128.<br>
let Predicates = [HasAVX] in {<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/avx512-shuffles/partial_permute.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-shuffles/partial_permute.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-shuffles/partial_permute.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/avx512-shuffles/partial_permute.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/avx512-shuffles/partial_permute.ll Sat Feb 9 05:13:59 2019<br>
@@ -1912,8 +1912,8 @@ define <2 x i64> @test_masked_z_4xi64_to<br>
define <2 x i64> @test_masked_4xi64_to_2xi64_perm_mem_mask1(<4 x i64>* %vp, <2 x i64> %vec2, <2 x i64> %mask) {<br>
; CHECK-LABEL: test_masked_4xi64_to_2xi64_perm_mem_mask1:<br>
; CHECK: # %bb.0:<br>
-; CHECK-NEXT: vmovdqa 16(%rdi), %xmm2<br>
-; CHECK-NEXT: vpblendd {{.*#+}} xmm2 = xmm2[0,1],mem[2,3]<br>
+; CHECK-NEXT: vmovdqa (%rdi), %xmm2<br>
+; CHECK-NEXT: vpblendd {{.*#+}} xmm2 = mem[0,1],xmm2[2,3]<br>
; CHECK-NEXT: vptestnmq %xmm1, %xmm1, %k1<br>
; CHECK-NEXT: vmovdqa64 %xmm2, %xmm0 {%k1}<br>
; CHECK-NEXT: retq<br>
@@ -1927,8 +1927,8 @@ define <2 x i64> @test_masked_4xi64_to_2<br>
define <2 x i64> @test_masked_z_4xi64_to_2xi64_perm_mem_mask1(<4 x i64>* %vp, <2 x i64> %mask) {<br>
; CHECK-LABEL: test_masked_z_4xi64_to_2xi64_perm_mem_mask1:<br>
; CHECK: # %bb.0:<br>
-; CHECK-NEXT: vmovdqa 16(%rdi), %xmm1<br>
-; CHECK-NEXT: vpblendd {{.*#+}} xmm1 = xmm1[0,1],mem[2,3]<br>
+; CHECK-NEXT: vmovdqa (%rdi), %xmm1<br>
+; CHECK-NEXT: vpblendd {{.*#+}} xmm1 = mem[0,1],xmm1[2,3]<br>
; CHECK-NEXT: vptestnmq %xmm0, %xmm0, %k1<br>
; CHECK-NEXT: vmovdqa64 %xmm1, %xmm0 {%k1} {z}<br>
; CHECK-NEXT: retq<br>
@@ -2553,9 +2553,8 @@ define <4 x i64> @test_masked_z_8xi64_to<br>
define <2 x i64> @test_8xi64_to_2xi64_perm_mem_mask0(<8 x i64>* %vp) {<br>
; CHECK-LABEL: test_8xi64_to_2xi64_perm_mem_mask0:<br>
; CHECK: # %bb.0:<br>
-; CHECK-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero<br>
-; CHECK-NEXT: vmovaps 32(%rdi), %xmm1<br>
-; CHECK-NEXT: vmovlhps {{.*#+}} xmm0 = xmm1[0],xmm0[0]<br>
+; CHECK-NEXT: vmovaps (%rdi), %xmm0<br>
+; CHECK-NEXT: vblendps {{.*#+}} xmm0 = mem[0,1],xmm0[2,3]<br>
; CHECK-NEXT: retq<br>
%vec = load <8 x i64>, <8 x i64>* %vp<br>
%res = shufflevector <8 x i64> %vec, <8 x i64> undef, <2 x i32> <i32 4, i32 1><br>
@@ -2564,10 +2563,10 @@ define <2 x i64> @test_8xi64_to_2xi64_pe<br>
define <2 x i64> @test_masked_8xi64_to_2xi64_perm_mem_mask0(<8 x i64>* %vp, <2 x i64> %vec2, <2 x i64> %mask) {<br>
; CHECK-LABEL: test_masked_8xi64_to_2xi64_perm_mem_mask0:<br>
; CHECK: # %bb.0:<br>
-; CHECK-NEXT: vmovq {{.*#+}} xmm2 = mem[0],zero<br>
-; CHECK-NEXT: vmovdqa 32(%rdi), %xmm3<br>
+; CHECK-NEXT: vmovdqa (%rdi), %xmm2<br>
+; CHECK-NEXT: vpblendd {{.*#+}} xmm2 = mem[0,1],xmm2[2,3]<br>
; CHECK-NEXT: vptestnmq %xmm1, %xmm1, %k1<br>
-; CHECK-NEXT: vpunpcklqdq {{.*#+}} xmm0 {%k1} = xmm3[0],xmm2[0]<br>
+; CHECK-NEXT: vmovdqa64 %xmm2, %xmm0 {%k1}<br>
; CHECK-NEXT: retq<br>
%vec = load <8 x i64>, <8 x i64>* %vp<br>
%shuf = shufflevector <8 x i64> %vec, <8 x i64> undef, <2 x i32> <i32 4, i32 1><br>
@@ -2579,10 +2578,10 @@ define <2 x i64> @test_masked_8xi64_to_2<br>
define <2 x i64> @test_masked_z_8xi64_to_2xi64_perm_mem_mask0(<8 x i64>* %vp, <2 x i64> %mask) {<br>
; CHECK-LABEL: test_masked_z_8xi64_to_2xi64_perm_mem_mask0:<br>
; CHECK: # %bb.0:<br>
-; CHECK-NEXT: vmovq {{.*#+}} xmm1 = mem[0],zero<br>
-; CHECK-NEXT: vmovdqa 32(%rdi), %xmm2<br>
+; CHECK-NEXT: vmovdqa (%rdi), %xmm1<br>
+; CHECK-NEXT: vpblendd {{.*#+}} xmm1 = mem[0,1],xmm1[2,3]<br>
; CHECK-NEXT: vptestnmq %xmm0, %xmm0, %k1<br>
-; CHECK-NEXT: vpunpcklqdq {{.*#+}} xmm0 {%k1} {z} = xmm2[0],xmm1[0]<br>
+; CHECK-NEXT: vmovdqa64 %xmm1, %xmm0 {%k1} {z}<br>
; CHECK-NEXT: retq<br>
%vec = load <8 x i64>, <8 x i64>* %vp<br>
%shuf = shufflevector <8 x i64> %vec, <8 x i64> undef, <2 x i32> <i32 4, i32 1><br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/combine-sdiv.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/combine-sdiv.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/combine-sdiv.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/combine-sdiv.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/combine-sdiv.ll Sat Feb 9 05:13:59 2019<br>
@@ -1701,8 +1701,7 @@ define <4 x i64> @combine_vec_sdiv_by_po<br>
; AVX1-NEXT: vpcmpgtq %xmm0, %xmm2, %xmm2<br>
; AVX1-NEXT: vpsrlq $62, %xmm2, %xmm2<br>
; AVX1-NEXT: vpaddq %xmm2, %xmm0, %xmm2<br>
-; AVX1-NEXT: vpsrlq $2, %xmm2, %xmm3<br>
-; AVX1-NEXT: vpblendw {{.*#+}} xmm2 = xmm2[0,1,2,3],xmm3[4,5,6,7]<br>
+; AVX1-NEXT: vpsrlq $2, %xmm2, %xmm2<br>
; AVX1-NEXT: vmovdqa {{.*#+}} xmm3 = [9223372036854775808,2305843009213693952]<br>
; AVX1-NEXT: vpxor %xmm3, %xmm2, %xmm2<br>
; AVX1-NEXT: vpsubq %xmm3, %xmm2, %xmm2<br>
@@ -1890,8 +1889,7 @@ define <8 x i64> @combine_vec_sdiv_by_po<br>
; AVX1-NEXT: vpcmpgtq %xmm0, %xmm2, %xmm5<br>
; AVX1-NEXT: vpsrlq $62, %xmm5, %xmm5<br>
; AVX1-NEXT: vpaddq %xmm5, %xmm0, %xmm5<br>
-; AVX1-NEXT: vpsrlq $2, %xmm5, %xmm6<br>
-; AVX1-NEXT: vpblendw {{.*#+}} xmm5 = xmm5[0,1,2,3],xmm6[4,5,6,7]<br>
+; AVX1-NEXT: vpsrlq $2, %xmm5, %xmm5<br>
; AVX1-NEXT: vmovdqa {{.*#+}} xmm6 = [9223372036854775808,2305843009213693952]<br>
; AVX1-NEXT: vpxor %xmm6, %xmm5, %xmm5<br>
; AVX1-NEXT: vpsubq %xmm6, %xmm5, %xmm5<br>
@@ -1911,8 +1909,7 @@ define <8 x i64> @combine_vec_sdiv_by_po<br>
; AVX1-NEXT: vpcmpgtq %xmm1, %xmm2, %xmm2<br>
; AVX1-NEXT: vpsrlq $62, %xmm2, %xmm2<br>
; AVX1-NEXT: vpaddq %xmm2, %xmm1, %xmm2<br>
-; AVX1-NEXT: vpsrlq $2, %xmm2, %xmm4<br>
-; AVX1-NEXT: vpblendw {{.*#+}} xmm2 = xmm2[0,1,2,3],xmm4[4,5,6,7]<br>
+; AVX1-NEXT: vpsrlq $2, %xmm2, %xmm2<br>
; AVX1-NEXT: vpxor %xmm6, %xmm2, %xmm2<br>
; AVX1-NEXT: vpsubq %xmm6, %xmm2, %xmm2<br>
; AVX1-NEXT: vinsertf128 $1, %xmm3, %ymm2, %ymm2<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/insertelement-ones.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/insertelement-ones.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/insertelement-ones.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/insertelement-ones.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/insertelement-ones.ll Sat Feb 9 05:13:59 2019<br>
@@ -291,10 +291,7 @@ define <16 x i16> @insert_v16i16_x12345x<br>
; AVX2-LABEL: insert_v16i16_x12345x789ABCDEx:<br>
; AVX2: # %bb.0:<br>
; AVX2-NEXT: vpcmpeqd %ymm1, %ymm1, %ymm1<br>
-; AVX2-NEXT: vpblendw {{.*#+}} ymm2 = ymm1[0],ymm0[1,2,3,4,5,6,7],ymm1[8],ymm0[9,10,11,12,13,14,15]<br>
-; AVX2-NEXT: vpblendd {{.*#+}} ymm2 = ymm2[0,1,2,3],ymm0[4,5,6,7]<br>
-; AVX2-NEXT: vpblendw {{.*#+}} ymm2 = ymm2[0,1,2,3,4,5],ymm1[6],ymm2[7,8,9,10,11,12,13],ymm1[14],ymm2[15]<br>
-; AVX2-NEXT: vpblendd {{.*#+}} ymm0 = ymm2[0,1,2,3],ymm0[4,5,6,7]<br>
+; AVX2-NEXT: vpblendw {{.*#+}} ymm2 = ymm1[0],ymm0[1,2,3,4,5],ymm1[6],ymm0[7],ymm1[8],ymm0[9,10,11,12,13],ymm1[14],ymm0[15]<br>
; AVX2-NEXT: vpblendw {{.*#+}} ymm0 = ymm0[0,1,2,3,4,5,6],ymm1[7],ymm0[8,9,10,11,12,13,14],ymm1[15]<br>
; AVX2-NEXT: vpblendd {{.*#+}} ymm0 = ymm2[0,1,2,3],ymm0[4,5,6,7]<br>
; AVX2-NEXT: retq<br>
@@ -302,10 +299,7 @@ define <16 x i16> @insert_v16i16_x12345x<br>
; AVX512-LABEL: insert_v16i16_x12345x789ABCDEx:<br>
; AVX512: # %bb.0:<br>
; AVX512-NEXT: vpcmpeqd %ymm1, %ymm1, %ymm1<br>
-; AVX512-NEXT: vpblendw {{.*#+}} ymm2 = ymm1[0],ymm0[1,2,3,4,5,6,7],ymm1[8],ymm0[9,10,11,12,13,14,15]<br>
-; AVX512-NEXT: vpblendd {{.*#+}} ymm2 = ymm2[0,1,2,3],ymm0[4,5,6,7]<br>
-; AVX512-NEXT: vpblendw {{.*#+}} ymm2 = ymm2[0,1,2,3,4,5],ymm1[6],ymm2[7,8,9,10,11,12,13],ymm1[14],ymm2[15]<br>
-; AVX512-NEXT: vpblendd {{.*#+}} ymm0 = ymm2[0,1,2,3],ymm0[4,5,6,7]<br>
+; AVX512-NEXT: vpblendw {{.*#+}} ymm2 = ymm1[0],ymm0[1,2,3,4,5],ymm1[6],ymm0[7],ymm1[8],ymm0[9,10,11,12,13],ymm1[14],ymm0[15]<br>
; AVX512-NEXT: vpblendw {{.*#+}} ymm0 = ymm0[0,1,2,3,4,5,6],ymm1[7],ymm0[8,9,10,11,12,13,14],ymm1[15]<br>
; AVX512-NEXT: vpblendd {{.*#+}} ymm0 = ymm2[0,1,2,3],ymm0[4,5,6,7]<br>
; AVX512-NEXT: retq<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/known-signbits-vector.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/known-signbits-vector.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/known-signbits-vector.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/known-signbits-vector.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/known-signbits-vector.ll Sat Feb 9 05:13:59 2019<br>
@@ -89,7 +89,7 @@ define float @signbits_ashr_extract_sito<br>
; X32: # %bb.0:<br>
; X32-NEXT: pushl %eax<br>
; X32-NEXT: vpsrlq $32, %xmm0, %xmm0<br>
-; X32-NEXT: vmovdqa {{.*#+}} xmm1 = [0,32768,0,0,1,0,0,0]<br>
+; X32-NEXT: vmovdqa {{.*#+}} xmm1 = [2147483648,0,1,0]<br>
; X32-NEXT: vpxor %xmm1, %xmm0, %xmm0<br>
; X32-NEXT: vpsubq %xmm1, %xmm0, %xmm0<br>
; X32-NEXT: vcvtdq2ps %xmm0, %xmm0<br>
@@ -115,7 +115,7 @@ define float @signbits_ashr_shl_extract_<br>
; X32: # %bb.0:<br>
; X32-NEXT: pushl %eax<br>
; X32-NEXT: vpsrlq $61, %xmm0, %xmm0<br>
-; X32-NEXT: vmovdqa {{.*#+}} xmm1 = [4,0,0,0,8,0,0,0]<br>
+; X32-NEXT: vmovdqa {{.*#+}} xmm1 = [4,0,8,0]<br>
; X32-NEXT: vpxor %xmm1, %xmm0, %xmm0<br>
; X32-NEXT: vpsubq %xmm1, %xmm0, %xmm0<br>
; X32-NEXT: vpsllq $20, %xmm0, %xmm0<br>
@@ -231,7 +231,7 @@ define float @signbits_ashr_sext_sextinr<br>
; X32: # %bb.0:<br>
; X32-NEXT: pushl %eax<br>
; X32-NEXT: vpsrlq $61, %xmm0, %xmm0<br>
-; X32-NEXT: vmovdqa {{.*#+}} xmm1 = [4,0,0,0,8,0,0,0]<br>
+; X32-NEXT: vmovdqa {{.*#+}} xmm1 = [4,0,8,0]<br>
; X32-NEXT: vpxor %xmm1, %xmm0, %xmm0<br>
; X32-NEXT: vpsubq %xmm1, %xmm0, %xmm0<br>
; X32-NEXT: vmovd {{.*#+}} xmm1 = mem[0],zero,zero,zero<br>
@@ -272,7 +272,7 @@ define float @signbits_ashr_sextvecinreg<br>
; X32-NEXT: vpsrlq $60, %xmm0, %xmm2<br>
; X32-NEXT: vpsrlq $61, %xmm0, %xmm0<br>
; X32-NEXT: vpblendw {{.*#+}} xmm0 = xmm0[0,1,2,3],xmm2[4,5,6,7]<br>
-; X32-NEXT: vmovdqa {{.*#+}} xmm2 = [4,0,0,0,8,0,0,0]<br>
+; X32-NEXT: vmovdqa {{.*#+}} xmm2 = [4,0,8,0]<br>
; X32-NEXT: vpxor %xmm2, %xmm0, %xmm0<br>
; X32-NEXT: vpsubq %xmm2, %xmm0, %xmm0<br>
; X32-NEXT: vpmovsxdq %xmm1, %xmm1<br>
@@ -322,7 +322,7 @@ define <4 x float> @signbits_ashr_sext_s<br>
; X32-NEXT: vpmovsxdq 8(%ebp), %xmm4<br>
; X32-NEXT: vextractf128 $1, %ymm2, %xmm5<br>
; X32-NEXT: vpsrlq $33, %xmm5, %xmm5<br>
-; X32-NEXT: vmovdqa {{.*#+}} xmm6 = [0,16384,0,0,1,0,0,0]<br>
+; X32-NEXT: vmovdqa {{.*#+}} xmm6 = [1073741824,0,1,0]<br>
; X32-NEXT: vpxor %xmm6, %xmm5, %xmm5<br>
; X32-NEXT: vpsubq %xmm6, %xmm5, %xmm5<br>
; X32-NEXT: vpsrlq $33, %xmm2, %xmm2<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/masked_load.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/masked_load.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/masked_load.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/masked_load.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/masked_load.ll Sat Feb 9 05:13:59 2019<br>
@@ -1261,18 +1261,15 @@ define <2 x float> @load_v2f32_v2i32(<2<br>
; SSE42-LABEL: load_v2f32_v2i32:<br>
; SSE42: ## %bb.0:<br>
; SSE42-NEXT: pxor %xmm2, %xmm2<br>
-; SSE42-NEXT: movdqa %xmm0, %xmm3<br>
-; SSE42-NEXT: pblendw {{.*#+}} xmm3 = xmm3[0,1],xmm2[2,3],xmm3[4,5],xmm2[6,7]<br>
-; SSE42-NEXT: pcmpeqq %xmm2, %xmm3<br>
-; SSE42-NEXT: pextrb $0, %xmm3, %eax<br>
+; SSE42-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]<br>
+; SSE42-NEXT: pcmpeqq %xmm2, %xmm0<br>
+; SSE42-NEXT: pextrb $0, %xmm0, %eax<br>
; SSE42-NEXT: testb $1, %al<br>
; SSE42-NEXT: je LBB10_2<br>
; SSE42-NEXT: ## %bb.1: ## %cond.load<br>
-; SSE42-NEXT: movd {{.*#+}} xmm3 = mem[0],zero,zero,zero<br>
-; SSE42-NEXT: pblendw {{.*#+}} xmm1 = xmm3[0,1],xmm1[2,3,4,5,6,7]<br>
+; SSE42-NEXT: movd {{.*#+}} xmm2 = mem[0],zero,zero,zero<br>
+; SSE42-NEXT: pblendw {{.*#+}} xmm1 = xmm2[0,1],xmm1[2,3,4,5,6,7]<br>
; SSE42-NEXT: LBB10_2: ## %else<br>
-; SSE42-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]<br>
-; SSE42-NEXT: pcmpeqq %xmm2, %xmm0<br>
; SSE42-NEXT: pextrb $8, %xmm0, %eax<br>
; SSE42-NEXT: testb $1, %al<br>
; SSE42-NEXT: je LBB10_4<br>
@@ -1357,18 +1354,15 @@ define <2 x i32> @load_v2i32_v2i32(<2 x<br>
; SSE42-LABEL: load_v2i32_v2i32:<br>
; SSE42: ## %bb.0:<br>
; SSE42-NEXT: pxor %xmm2, %xmm2<br>
-; SSE42-NEXT: movdqa %xmm0, %xmm3<br>
-; SSE42-NEXT: pblendw {{.*#+}} xmm3 = xmm3[0,1],xmm2[2,3],xmm3[4,5],xmm2[6,7]<br>
-; SSE42-NEXT: pcmpeqq %xmm2, %xmm3<br>
-; SSE42-NEXT: pextrb $0, %xmm3, %eax<br>
+; SSE42-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]<br>
+; SSE42-NEXT: pcmpeqq %xmm2, %xmm0<br>
+; SSE42-NEXT: pextrb $0, %xmm0, %eax<br>
; SSE42-NEXT: testb $1, %al<br>
; SSE42-NEXT: je LBB11_2<br>
; SSE42-NEXT: ## %bb.1: ## %cond.load<br>
; SSE42-NEXT: movl (%rdi), %eax<br>
; SSE42-NEXT: pinsrq $0, %rax, %xmm1<br>
; SSE42-NEXT: LBB11_2: ## %else<br>
-; SSE42-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]<br>
-; SSE42-NEXT: pcmpeqq %xmm2, %xmm0<br>
; SSE42-NEXT: pextrb $8, %xmm0, %eax<br>
; SSE42-NEXT: testb $1, %al<br>
; SSE42-NEXT: je LBB11_4<br>
@@ -1459,18 +1453,16 @@ define <2 x float> @load_undef_v2f32_v2i<br>
; SSE42-LABEL: load_undef_v2f32_v2i32:<br>
; SSE42: ## %bb.0:<br>
; SSE42-NEXT: movdqa %xmm0, %xmm1<br>
-; SSE42-NEXT: pxor %xmm2, %xmm2<br>
-; SSE42-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]<br>
-; SSE42-NEXT: pcmpeqq %xmm2, %xmm0<br>
-; SSE42-NEXT: pextrb $0, %xmm0, %eax<br>
+; SSE42-NEXT: pxor %xmm0, %xmm0<br>
+; SSE42-NEXT: pblendw {{.*#+}} xmm1 = xmm1[0,1],xmm0[2,3],xmm1[4,5],xmm0[6,7]<br>
+; SSE42-NEXT: pcmpeqq %xmm0, %xmm1<br>
+; SSE42-NEXT: pextrb $0, %xmm1, %eax<br>
; SSE42-NEXT: testb $1, %al<br>
; SSE42-NEXT: ## implicit-def: $xmm0<br>
; SSE42-NEXT: je LBB12_2<br>
; SSE42-NEXT: ## %bb.1: ## %cond.load<br>
; SSE42-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero<br>
; SSE42-NEXT: LBB12_2: ## %else<br>
-; SSE42-NEXT: pblendw {{.*#+}} xmm1 = xmm1[0,1],xmm2[2,3],xmm1[4,5],xmm2[6,7]<br>
-; SSE42-NEXT: pcmpeqq %xmm2, %xmm1<br>
; SSE42-NEXT: pextrb $8, %xmm1, %eax<br>
; SSE42-NEXT: testb $1, %al<br>
; SSE42-NEXT: je LBB12_4<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/masked_store.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/masked_store.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/masked_store.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/masked_store.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/masked_store.ll Sat Feb 9 05:13:59 2019<br>
@@ -330,17 +330,14 @@ define void @store_v2f32_v2i32(<2 x i32><br>
; SSE4-LABEL: store_v2f32_v2i32:<br>
; SSE4: ## %bb.0:<br>
; SSE4-NEXT: pxor %xmm2, %xmm2<br>
-; SSE4-NEXT: movdqa %xmm0, %xmm3<br>
-; SSE4-NEXT: pblendw {{.*#+}} xmm3 = xmm3[0,1],xmm2[2,3],xmm3[4,5],xmm2[6,7]<br>
-; SSE4-NEXT: pcmpeqq %xmm2, %xmm3<br>
-; SSE4-NEXT: pextrb $0, %xmm3, %eax<br>
+; SSE4-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]<br>
+; SSE4-NEXT: pcmpeqq %xmm2, %xmm0<br>
+; SSE4-NEXT: pextrb $0, %xmm0, %eax<br>
; SSE4-NEXT: testb $1, %al<br>
; SSE4-NEXT: je LBB3_2<br>
; SSE4-NEXT: ## %bb.1: ## %cond.store<br>
; SSE4-NEXT: movss %xmm1, (%rdi)<br>
; SSE4-NEXT: LBB3_2: ## %else<br>
-; SSE4-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]<br>
-; SSE4-NEXT: pcmpeqq %xmm2, %xmm0<br>
; SSE4-NEXT: pextrb $8, %xmm0, %eax<br>
; SSE4-NEXT: testb $1, %al<br>
; SSE4-NEXT: je LBB3_4<br>
@@ -417,17 +414,14 @@ define void @store_v2i32_v2i32(<2 x i32><br>
; SSE4-LABEL: store_v2i32_v2i32:<br>
; SSE4: ## %bb.0:<br>
; SSE4-NEXT: pxor %xmm2, %xmm2<br>
-; SSE4-NEXT: movdqa %xmm0, %xmm3<br>
-; SSE4-NEXT: pblendw {{.*#+}} xmm3 = xmm3[0,1],xmm2[2,3],xmm3[4,5],xmm2[6,7]<br>
-; SSE4-NEXT: pcmpeqq %xmm2, %xmm3<br>
-; SSE4-NEXT: pextrb $0, %xmm3, %eax<br>
+; SSE4-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]<br>
+; SSE4-NEXT: pcmpeqq %xmm2, %xmm0<br>
+; SSE4-NEXT: pextrb $0, %xmm0, %eax<br>
; SSE4-NEXT: testb $1, %al<br>
; SSE4-NEXT: je LBB4_2<br>
; SSE4-NEXT: ## %bb.1: ## %cond.store<br>
; SSE4-NEXT: movss %xmm1, (%rdi)<br>
; SSE4-NEXT: LBB4_2: ## %else<br>
-; SSE4-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]<br>
-; SSE4-NEXT: pcmpeqq %xmm2, %xmm0<br>
; SSE4-NEXT: pextrb $8, %xmm0, %eax<br>
; SSE4-NEXT: testb $1, %al<br>
; SSE4-NEXT: je LBB4_4<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/oddshuffles.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/oddshuffles.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/oddshuffles.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/oddshuffles.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/oddshuffles.ll Sat Feb 9 05:13:59 2019<br>
@@ -1036,7 +1036,7 @@ define void @interleave_24i16_out(<24 x<br>
; SSE42-NEXT: pshufhw {{.*#+}} xmm3 = xmm3[0,1,2,3,4,5,6,5]<br>
; SSE42-NEXT: movdqa %xmm0, %xmm4<br>
; SSE42-NEXT: pblendw {{.*#+}} xmm4 = xmm4[0],xmm1[1],xmm4[2,3],xmm1[4],xmm4[5,6],xmm1[7]<br>
-; SSE42-NEXT: pshufb {{.*#+}} xmm4 = xmm4[0,1,6,7,12,13,2,3,8,9,14,15,u,u,u,u]<br>
+; SSE42-NEXT: pshufb {{.*#+}} xmm4 = xmm4[0,1,6,7,12,13,2,3,8,9,14,15,12,13,14,15]<br>
; SSE42-NEXT: pblendw {{.*#+}} xmm4 = xmm4[0,1,2,3,4,5],xmm3[6,7]<br>
; SSE42-NEXT: movdqa %xmm2, %xmm3<br>
; SSE42-NEXT: pshufb {{.*#+}} xmm3 = xmm3[0,1,6,7,4,5,6,7,0,1,0,1,6,7,12,13]<br>
@@ -1061,7 +1061,7 @@ define void @interleave_24i16_out(<24 x<br>
; AVX1-NEXT: vpshufd {{.*#+}} xmm3 = xmm2[0,1,2,1]<br>
; AVX1-NEXT: vpshufhw {{.*#+}} xmm3 = xmm3[0,1,2,3,4,5,6,5]<br>
; AVX1-NEXT: vpblendw {{.*#+}} xmm4 = xmm0[0],xmm1[1],xmm0[2,3],xmm1[4],xmm0[5,6],xmm1[7]<br>
-; AVX1-NEXT: vpshufb {{.*#+}} xmm4 = xmm4[0,1,6,7,12,13,2,3,8,9,14,15,u,u,u,u]<br>
+; AVX1-NEXT: vpshufb {{.*#+}} xmm4 = xmm4[0,1,6,7,12,13,2,3,8,9,14,15,12,13,14,15]<br>
; AVX1-NEXT: vpblendw {{.*#+}} xmm3 = xmm4[0,1,2,3,4,5],xmm3[6,7]<br>
; AVX1-NEXT: vpshufb {{.*#+}} xmm4 = xmm2[0,1,6,7,4,5,6,7,0,1,0,1,6,7,12,13]<br>
; AVX1-NEXT: vpblendw {{.*#+}} xmm5 = xmm0[0,1],xmm1[2],xmm0[3,4],xmm1[5],xmm0[6,7]<br>
@@ -1583,25 +1583,25 @@ define void @interleave_24i32_in(<24 x i<br>
; AVX1: # %bb.0:<br>
; AVX1-NEXT: vmovupd (%rsi), %ymm0<br>
; AVX1-NEXT: vmovupd (%rcx), %ymm1<br>
-; AVX1-NEXT: vmovups 16(%rcx), %xmm2<br>
-; AVX1-NEXT: vmovups (%rdx), %xmm3<br>
-; AVX1-NEXT: vmovups 16(%rdx), %xmm4<br>
-; AVX1-NEXT: vshufps {{.*#+}} xmm5 = xmm4[3,0],xmm2[3,0]<br>
-; AVX1-NEXT: vshufps {{.*#+}} xmm5 = xmm2[2,1],xmm5[0,2]<br>
-; AVX1-NEXT: vshufps {{.*#+}} xmm2 = xmm2[1,0],xmm4[1,0]<br>
-; AVX1-NEXT: vshufps {{.*#+}} xmm2 = xmm2[2,0],xmm4[2,2]<br>
-; AVX1-NEXT: vinsertf128 $1, %xmm5, %ymm2, %ymm2<br>
-; AVX1-NEXT: vpermilpd {{.*#+}} ymm4 = ymm0[1,1,3,3]<br>
-; AVX1-NEXT: vperm2f128 {{.*#+}} ymm4 = ymm4[2,3,2,3]<br>
-; AVX1-NEXT: vblendps {{.*#+}} ymm2 = ymm2[0,1],ymm4[2],ymm2[3,4],ymm4[5],ymm2[6,7]<br>
+; AVX1-NEXT: vmovups (%rdx), %xmm2<br>
+; AVX1-NEXT: vmovups 16(%rdx), %xmm3<br>
; AVX1-NEXT: vmovups (%rsi), %xmm4<br>
-; AVX1-NEXT: vshufps {{.*#+}} xmm5 = xmm4[2,0],xmm3[2,0]<br>
-; AVX1-NEXT: vshufps {{.*#+}} xmm5 = xmm3[1,1],xmm5[0,2]<br>
-; AVX1-NEXT: vshufps {{.*#+}} xmm3 = xmm3[0,0],xmm4[0,0]<br>
-; AVX1-NEXT: vshufps {{.*#+}} xmm3 = xmm3[2,0],xmm4[2,1]<br>
-; AVX1-NEXT: vinsertf128 $1, %xmm5, %ymm3, %ymm3<br>
+; AVX1-NEXT: vshufps {{.*#+}} xmm5 = xmm4[2,0],xmm2[2,0]<br>
+; AVX1-NEXT: vshufps {{.*#+}} xmm5 = xmm2[1,1],xmm5[0,2]<br>
+; AVX1-NEXT: vshufps {{.*#+}} xmm2 = xmm2[0,0],xmm4[0,0]<br>
+; AVX1-NEXT: vshufps {{.*#+}} xmm2 = xmm2[2,0],xmm4[2,1]<br>
+; AVX1-NEXT: vinsertf128 $1, %xmm5, %ymm2, %ymm2<br>
; AVX1-NEXT: vmovddup {{.*#+}} xmm4 = xmm1[0,0]<br>
; AVX1-NEXT: vinsertf128 $1, %xmm4, %ymm4, %ymm4<br>
+; AVX1-NEXT: vblendps {{.*#+}} ymm2 = ymm2[0,1],ymm4[2],ymm2[3,4],ymm4[5],ymm2[6,7]<br>
+; AVX1-NEXT: vmovups 16(%rcx), %xmm4<br>
+; AVX1-NEXT: vshufps {{.*#+}} xmm5 = xmm3[3,0],xmm4[3,0]<br>
+; AVX1-NEXT: vshufps {{.*#+}} xmm5 = xmm4[2,1],xmm5[0,2]<br>
+; AVX1-NEXT: vshufps {{.*#+}} xmm4 = xmm4[1,0],xmm3[1,0]<br>
+; AVX1-NEXT: vshufps {{.*#+}} xmm3 = xmm4[2,0],xmm3[2,2]<br>
+; AVX1-NEXT: vinsertf128 $1, %xmm5, %ymm3, %ymm3<br>
+; AVX1-NEXT: vpermilpd {{.*#+}} ymm4 = ymm0[1,1,3,3]<br>
+; AVX1-NEXT: vperm2f128 {{.*#+}} ymm4 = ymm4[2,3,2,3]<br>
; AVX1-NEXT: vblendps {{.*#+}} ymm3 = ymm3[0,1],ymm4[2],ymm3[3,4],ymm4[5],ymm3[6,7]<br>
; AVX1-NEXT: vpermilpd {{.*#+}} ymm0 = ymm0[1,0,2,2]<br>
; AVX1-NEXT: vpermilpd {{.*#+}} ymm1 = ymm1[1,1,2,2]<br>
@@ -1609,8 +1609,8 @@ define void @interleave_24i32_in(<24 x i<br>
; AVX1-NEXT: vpermilps {{.*#+}} ymm1 = mem[0,0,3,3,4,4,7,7]<br>
; AVX1-NEXT: vblendps {{.*#+}} ymm0 = ymm0[0,1],ymm1[2],ymm0[3,4],ymm1[5],ymm0[6,7]<br>
; AVX1-NEXT: vmovups %ymm0, 32(%rdi)<br>
-; AVX1-NEXT: vmovups %ymm3, (%rdi)<br>
-; AVX1-NEXT: vmovups %ymm2, 64(%rdi)<br>
+; AVX1-NEXT: vmovups %ymm3, 64(%rdi)<br>
+; AVX1-NEXT: vmovups %ymm2, (%rdi)<br>
; AVX1-NEXT: vzeroupper<br>
; AVX1-NEXT: retq<br>
;<br>
@@ -1674,32 +1674,32 @@ define void @interleave_24i32_in(<24 x i<br>
; XOP: # %bb.0:<br>
; XOP-NEXT: vmovupd (%rsi), %ymm0<br>
; XOP-NEXT: vmovups (%rcx), %ymm1<br>
-; XOP-NEXT: vmovups 16(%rcx), %xmm2<br>
-; XOP-NEXT: vmovups (%rdx), %xmm3<br>
-; XOP-NEXT: vmovups 16(%rdx), %xmm4<br>
-; XOP-NEXT: vshufps {{.*#+}} xmm5 = xmm4[3,0],xmm2[3,0]<br>
-; XOP-NEXT: vshufps {{.*#+}} xmm5 = xmm2[2,1],xmm5[0,2]<br>
-; XOP-NEXT: vshufps {{.*#+}} xmm2 = xmm2[1,0],xmm4[1,0]<br>
-; XOP-NEXT: vshufps {{.*#+}} xmm2 = xmm2[2,0],xmm4[2,2]<br>
-; XOP-NEXT: vinsertf128 $1, %xmm5, %ymm2, %ymm2<br>
-; XOP-NEXT: vpermilpd {{.*#+}} ymm4 = ymm0[1,1,3,3]<br>
-; XOP-NEXT: vperm2f128 {{.*#+}} ymm4 = ymm4[2,3,2,3]<br>
-; XOP-NEXT: vblendps {{.*#+}} ymm2 = ymm2[0,1],ymm4[2],ymm2[3,4],ymm4[5],ymm2[6,7]<br>
+; XOP-NEXT: vmovups (%rdx), %xmm2<br>
+; XOP-NEXT: vmovups 16(%rdx), %xmm3<br>
; XOP-NEXT: vmovups (%rsi), %xmm4<br>
-; XOP-NEXT: vshufps {{.*#+}} xmm5 = xmm4[2,0],xmm3[2,0]<br>
-; XOP-NEXT: vshufps {{.*#+}} xmm5 = xmm3[1,1],xmm5[0,2]<br>
-; XOP-NEXT: vshufps {{.*#+}} xmm3 = xmm3[0,0],xmm4[0,0]<br>
-; XOP-NEXT: vshufps {{.*#+}} xmm3 = xmm3[2,0],xmm4[2,1]<br>
-; XOP-NEXT: vinsertf128 $1, %xmm5, %ymm3, %ymm3<br>
+; XOP-NEXT: vshufps {{.*#+}} xmm5 = xmm4[2,0],xmm2[2,0]<br>
+; XOP-NEXT: vshufps {{.*#+}} xmm5 = xmm2[1,1],xmm5[0,2]<br>
+; XOP-NEXT: vshufps {{.*#+}} xmm2 = xmm2[0,0],xmm4[0,0]<br>
+; XOP-NEXT: vshufps {{.*#+}} xmm2 = xmm2[2,0],xmm4[2,1]<br>
+; XOP-NEXT: vinsertf128 $1, %xmm5, %ymm2, %ymm2<br>
; XOP-NEXT: vmovddup {{.*#+}} xmm4 = xmm1[0,0]<br>
; XOP-NEXT: vinsertf128 $1, %xmm4, %ymm4, %ymm4<br>
+; XOP-NEXT: vblendps {{.*#+}} ymm2 = ymm2[0,1],ymm4[2],ymm2[3,4],ymm4[5],ymm2[6,7]<br>
+; XOP-NEXT: vmovups 16(%rcx), %xmm4<br>
+; XOP-NEXT: vshufps {{.*#+}} xmm5 = xmm3[3,0],xmm4[3,0]<br>
+; XOP-NEXT: vshufps {{.*#+}} xmm5 = xmm4[2,1],xmm5[0,2]<br>
+; XOP-NEXT: vshufps {{.*#+}} xmm4 = xmm4[1,0],xmm3[1,0]<br>
+; XOP-NEXT: vshufps {{.*#+}} xmm3 = xmm4[2,0],xmm3[2,2]<br>
+; XOP-NEXT: vinsertf128 $1, %xmm5, %ymm3, %ymm3<br>
+; XOP-NEXT: vpermilpd {{.*#+}} ymm4 = ymm0[1,1,3,3]<br>
+; XOP-NEXT: vperm2f128 {{.*#+}} ymm4 = ymm4[2,3,2,3]<br>
; XOP-NEXT: vblendps {{.*#+}} ymm3 = ymm3[0,1],ymm4[2],ymm3[3,4],ymm4[5],ymm3[6,7]<br>
; XOP-NEXT: vpermil2ps {{.*#+}} ymm0 = ymm1[2],ymm0[3],ymm1[2,3],ymm0[4],ymm1[5,4],ymm0[5]<br>
; XOP-NEXT: vpermilps {{.*#+}} ymm1 = mem[0,0,3,3,4,4,7,7]<br>
; XOP-NEXT: vblendps {{.*#+}} ymm0 = ymm0[0,1],ymm1[2],ymm0[3,4],ymm1[5],ymm0[6,7]<br>
; XOP-NEXT: vmovups %ymm0, 32(%rdi)<br>
-; XOP-NEXT: vmovups %ymm3, (%rdi)<br>
-; XOP-NEXT: vmovups %ymm2, 64(%rdi)<br>
+; XOP-NEXT: vmovups %ymm3, 64(%rdi)<br>
+; XOP-NEXT: vmovups %ymm2, (%rdi)<br>
; XOP-NEXT: vzeroupper<br>
; XOP-NEXT: retq<br>
%s1 = load <8 x i32>, <8 x i32>* %q1, align 4<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/packss.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/packss.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/packss.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/packss.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/packss.ll Sat Feb 9 05:13:59 2019<br>
@@ -172,19 +172,19 @@ define <8 x i16> @trunc_ashr_v4i64_deman<br>
;<br>
; X86-AVX1-LABEL: trunc_ashr_v4i64_demandedelts:<br>
; X86-AVX1: # %bb.0:<br>
-; X86-AVX1-NEXT: vpsllq $63, %xmm0, %xmm1<br>
-; X86-AVX1-NEXT: vextractf128 $1, %ymm0, %xmm2<br>
-; X86-AVX1-NEXT: vpsllq $63, %xmm2, %xmm3<br>
-; X86-AVX1-NEXT: vpsrlq $63, %xmm3, %xmm3<br>
-; X86-AVX1-NEXT: vpblendw {{.*#+}} xmm2 = xmm3[0,1,2,3],xmm2[4,5,6,7]<br>
-; X86-AVX1-NEXT: vmovdqa {{.*#+}} xmm3 = [1,0,0,0,0,0,0,32768]<br>
-; X86-AVX1-NEXT: vpxor %xmm3, %xmm2, %xmm2<br>
-; X86-AVX1-NEXT: vpsubq %xmm3, %xmm2, %xmm2<br>
+; X86-AVX1-NEXT: vextractf128 $1, %ymm0, %xmm1<br>
+; X86-AVX1-NEXT: vpsllq $63, %xmm1, %xmm2<br>
+; X86-AVX1-NEXT: vpblendw {{.*#+}} xmm1 = xmm2[0,1,2,3],xmm1[4,5,6,7]<br>
+; X86-AVX1-NEXT: vpsllq $63, %xmm0, %xmm2<br>
; X86-AVX1-NEXT: vpsrlq $63, %xmm1, %xmm1<br>
-; X86-AVX1-NEXT: vpblendw {{.*#+}} xmm0 = xmm1[0,1,2,3],xmm0[4,5,6,7]<br>
+; X86-AVX1-NEXT: vmovdqa {{.*#+}} xmm3 = [1,0,0,2147483648]<br>
+; X86-AVX1-NEXT: vpxor %xmm3, %xmm1, %xmm1<br>
+; X86-AVX1-NEXT: vpsubq %xmm3, %xmm1, %xmm1<br>
+; X86-AVX1-NEXT: vpsrlq $63, %xmm2, %xmm2<br>
+; X86-AVX1-NEXT: vpblendw {{.*#+}} xmm0 = xmm2[0,1,2,3],xmm0[4,5,6,7]<br>
; X86-AVX1-NEXT: vpxor %xmm3, %xmm0, %xmm0<br>
; X86-AVX1-NEXT: vpsubq %xmm3, %xmm0, %xmm0<br>
-; X86-AVX1-NEXT: vinsertf128 $1, %xmm2, %ymm0, %ymm0<br>
+; X86-AVX1-NEXT: vinsertf128 $1, %xmm1, %ymm0, %ymm0<br>
; X86-AVX1-NEXT: vpermilps {{.*#+}} ymm0 = ymm0[0,0,0,0,4,4,4,4]<br>
; X86-AVX1-NEXT: vextractf128 $1, %ymm0, %xmm1<br>
; X86-AVX1-NEXT: vpackssdw %xmm1, %xmm0, %xmm0<br>
@@ -224,19 +224,19 @@ define <8 x i16> @trunc_ashr_v4i64_deman<br>
;<br>
; X64-AVX1-LABEL: trunc_ashr_v4i64_demandedelts:<br>
; X64-AVX1: # %bb.0:<br>
-; X64-AVX1-NEXT: vpsllq $63, %xmm0, %xmm1<br>
-; X64-AVX1-NEXT: vextractf128 $1, %ymm0, %xmm2<br>
-; X64-AVX1-NEXT: vpsllq $63, %xmm2, %xmm3<br>
-; X64-AVX1-NEXT: vpsrlq $63, %xmm3, %xmm3<br>
-; X64-AVX1-NEXT: vpblendw {{.*#+}} xmm2 = xmm3[0,1,2,3],xmm2[4,5,6,7]<br>
-; X64-AVX1-NEXT: vmovdqa {{.*#+}} xmm3 = [1,9223372036854775808]<br>
-; X64-AVX1-NEXT: vpxor %xmm3, %xmm2, %xmm2<br>
-; X64-AVX1-NEXT: vpsubq %xmm3, %xmm2, %xmm2<br>
+; X64-AVX1-NEXT: vextractf128 $1, %ymm0, %xmm1<br>
+; X64-AVX1-NEXT: vpsllq $63, %xmm1, %xmm2<br>
+; X64-AVX1-NEXT: vpblendw {{.*#+}} xmm1 = xmm2[0,1,2,3],xmm1[4,5,6,7]<br>
+; X64-AVX1-NEXT: vpsllq $63, %xmm0, %xmm2<br>
; X64-AVX1-NEXT: vpsrlq $63, %xmm1, %xmm1<br>
-; X64-AVX1-NEXT: vpblendw {{.*#+}} xmm0 = xmm1[0,1,2,3],xmm0[4,5,6,7]<br>
+; X64-AVX1-NEXT: vmovdqa {{.*#+}} xmm3 = [1,9223372036854775808]<br>
+; X64-AVX1-NEXT: vpxor %xmm3, %xmm1, %xmm1<br>
+; X64-AVX1-NEXT: vpsubq %xmm3, %xmm1, %xmm1<br>
+; X64-AVX1-NEXT: vpsrlq $63, %xmm2, %xmm2<br>
+; X64-AVX1-NEXT: vpblendw {{.*#+}} xmm0 = xmm2[0,1,2,3],xmm0[4,5,6,7]<br>
; X64-AVX1-NEXT: vpxor %xmm3, %xmm0, %xmm0<br>
; X64-AVX1-NEXT: vpsubq %xmm3, %xmm0, %xmm0<br>
-; X64-AVX1-NEXT: vinsertf128 $1, %xmm2, %ymm0, %ymm0<br>
+; X64-AVX1-NEXT: vinsertf128 $1, %xmm1, %ymm0, %ymm0<br>
; X64-AVX1-NEXT: vpermilps {{.*#+}} ymm0 = ymm0[0,0,0,0,4,4,4,4]<br>
; X64-AVX1-NEXT: vextractf128 $1, %ymm0, %xmm1<br>
; X64-AVX1-NEXT: vpackssdw %xmm1, %xmm0, %xmm0<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/pr34592.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pr34592.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pr34592.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/pr34592.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/pr34592.ll Sat Feb 9 05:13:59 2019<br>
@@ -19,16 +19,14 @@ define <16 x i64> @pluto(<16 x i64> %arg<br>
; CHECK-NEXT: vmovaps 80(%rbp), %ymm13<br>
; CHECK-NEXT: vmovaps 48(%rbp), %ymm14<br>
; CHECK-NEXT: vmovaps 16(%rbp), %ymm15<br>
-; CHECK-NEXT: vpblendd {{.*#+}} ymm2 = ymm6[0,1,2,3,4,5],ymm2[6,7]<br>
+; CHECK-NEXT: vblendpd {{.*#+}} ymm2 = ymm6[0,1,2],ymm2[3]<br>
; CHECK-NEXT: vmovaps %xmm9, %xmm6<br>
-; CHECK-NEXT: vmovdqa %xmm6, %xmm9<br>
-; CHECK-NEXT: # kill: def $ymm9 killed $xmm9<br>
; CHECK-NEXT: vmovaps %ymm0, {{[-0-9]+}}(%r{{[sb]}}p) # 32-byte Spill<br>
; CHECK-NEXT: # implicit-def: $ymm0<br>
; CHECK-NEXT: vinserti128 $1, %xmm6, %ymm0, %ymm0<br>
; CHECK-NEXT: vpalignr {{.*#+}} ymm11 = ymm2[8,9,10,11,12,13,14,15],ymm11[0,1,2,3,4,5,6,7],ymm2[24,25,26,27,28,29,30,31],ymm11[16,17,18,19,20,21,22,23]<br>
; CHECK-NEXT: vpermq {{.*#+}} ymm11 = ymm11[2,3,2,0]<br>
-; CHECK-NEXT: vpblendd {{.*#+}} ymm0 = ymm11[0,1,2,3],ymm0[4,5],ymm11[6,7]<br>
+; CHECK-NEXT: vblendpd {{.*#+}} ymm0 = ymm11[0,1],ymm0[2],ymm11[3]<br>
; CHECK-NEXT: vmovaps %xmm2, %xmm6<br>
; CHECK-NEXT: # implicit-def: $ymm2<br>
; CHECK-NEXT: vinserti128 $1, %xmm6, %ymm2, %ymm2<br>
@@ -36,18 +34,18 @@ define <16 x i64> @pluto(<16 x i64> %arg<br>
; CHECK-NEXT: vmovq {{.*#+}} xmm6 = xmm6[0],zero<br>
; CHECK-NEXT: # implicit-def: $ymm11<br>
; CHECK-NEXT: vmovaps %xmm6, %xmm11<br>
-; CHECK-NEXT: vpblendd {{.*#+}} ymm2 = ymm11[0,1,2,3],ymm2[4,5,6,7]<br>
+; CHECK-NEXT: vblendpd {{.*#+}} ymm2 = ymm11[0,1],ymm2[2,3]<br>
; CHECK-NEXT: vmovaps %xmm7, %xmm6<br>
; CHECK-NEXT: vpslldq {{.*#+}} xmm6 = zero,zero,zero,zero,zero,zero,zero,zero,xmm6[0,1,2,3,4,5,6,7]<br>
; CHECK-NEXT: # implicit-def: $ymm11<br>
; CHECK-NEXT: vmovaps %xmm6, %xmm11<br>
; CHECK-NEXT: vpalignr {{.*#+}} ymm9 = ymm9[8,9,10,11,12,13,14,15],ymm5[0,1,2,3,4,5,6,7],ymm9[24,25,26,27,28,29,30,31],ymm5[16,17,18,19,20,21,22,23]<br>
; CHECK-NEXT: vpermq {{.*#+}} ymm9 = ymm9[0,1,0,3]<br>
-; CHECK-NEXT: vpblendd {{.*#+}} ymm9 = ymm11[0,1,2,3],ymm9[4,5,6,7]<br>
-; CHECK-NEXT: vpblendd {{.*#+}} ymm7 = ymm7[0,1],ymm8[2,3],ymm7[4,5,6,7]<br>
+; CHECK-NEXT: vblendpd {{.*#+}} ymm9 = ymm11[0,1],ymm9[2,3]<br>
+; CHECK-NEXT: vblendpd {{.*#+}} ymm7 = ymm7[0],ymm8[1],ymm7[2,3]<br>
; CHECK-NEXT: vpermq {{.*#+}} ymm7 = ymm7[2,1,1,3]<br>
; CHECK-NEXT: vpshufd {{.*#+}} ymm5 = ymm5[0,1,0,1,4,5,4,5]<br>
-; CHECK-NEXT: vpblendd {{.*#+}} ymm5 = ymm7[0,1,2,3,4,5],ymm5[6,7]<br>
+; CHECK-NEXT: vblendpd {{.*#+}} ymm5 = ymm7[0,1,2],ymm5[3]<br>
; CHECK-NEXT: vmovaps %ymm1, {{[-0-9]+}}(%r{{[sb]}}p) # 32-byte Spill<br>
; CHECK-NEXT: vmovaps %ymm5, %ymm1<br>
; CHECK-NEXT: vmovaps %ymm3, {{[-0-9]+}}(%r{{[sb]}}p) # 32-byte Spill<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/prefer-avx256-mask-shuffle.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/prefer-avx256-mask-shuffle.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/prefer-avx256-mask-shuffle.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/prefer-avx256-mask-shuffle.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/prefer-avx256-mask-shuffle.ll Sat Feb 9 05:13:59 2019<br>
@@ -196,14 +196,13 @@ define <32 x i1> @shuf32i1_3_6_22_12_3_7<br>
; AVX256VLBW: # %bb.0:<br>
; AVX256VLBW-NEXT: vptestnmb %ymm0, %ymm0, %k0<br>
; AVX256VLBW-NEXT: vpmovm2b %k0, %ymm0<br>
-; AVX256VLBW-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
-; AVX256VLBW-NEXT: vpblendd {{.*#+}} ymm2 = ymm1[0,1,2,3],ymm0[4,5,6,7]<br>
-; AVX256VLBW-NEXT: vpshufd {{.*#+}} ymm2 = ymm2[1,1,2,1,5,5,6,5]<br>
-; AVX256VLBW-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1,2,3],ymm1[4,5,6,7]<br>
+; AVX256VLBW-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,2,3]<br>
+; AVX256VLBW-NEXT: vpshufd {{.*#+}} ymm1 = ymm1[1,1,2,1,5,5,6,5]<br>
+; AVX256VLBW-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,1,0,1]<br>
; AVX256VLBW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[3,6,u,12,3,7,7,0,3,6,1,13,3,u,7,0,19,22,u,28,19,23,23,16,19,22,17,29,19,u,23,16]<br>
; AVX256VLBW-NEXT: movl $537141252, %eax # imm = 0x20042004<br>
; AVX256VLBW-NEXT: kmovd %eax, %k1<br>
-; AVX256VLBW-NEXT: vmovdqu8 %ymm2, %ymm0 {%k1}<br>
+; AVX256VLBW-NEXT: vmovdqu8 %ymm1, %ymm0 {%k1}<br>
; AVX256VLBW-NEXT: vpmovb2m %ymm0, %k0<br>
; AVX256VLBW-NEXT: vpmovm2b %k0, %ymm0<br>
; AVX256VLBW-NEXT: retq<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/sse2.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse2.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse2.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/sse2.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/sse2.ll Sat Feb 9 05:13:59 2019<br>
@@ -709,8 +709,7 @@ define <4 x i32> @PR19721(<4 x i32> %i)<br>
; X64-AVX512-NEXT: movabsq $-4294967296, %rcx # imm = 0xFFFFFFFF00000000<br>
; X64-AVX512-NEXT: andq %rax, %rcx<br>
; X64-AVX512-NEXT: vmovq %rcx, %xmm1<br>
-; X64-AVX512-NEXT: vpshufd {{.*#+}} xmm0 = xmm0[2,3,0,1]<br>
-; X64-AVX512-NEXT: vpunpcklqdq {{.*#+}} xmm0 = xmm1[0],xmm0[0]<br>
+; X64-AVX512-NEXT: vpblendd {{.*#+}} xmm0 = xmm1[0,1],xmm0[2,3]<br>
; X64-AVX512-NEXT: retq<br>
%bc = bitcast <4 x i32> %i to i128<br>
%insert = and i128 %bc, -4294967296<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-reduce-smax.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-reduce-smax.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-reduce-smax.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-reduce-smax.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-reduce-smax.ll Sat Feb 9 05:13:59 2019<br>
@@ -709,16 +709,17 @@ define i32 @test_v2i32(<2 x i32> %a0) {<br>
; SSE41-NEXT: pshufd {{.*#+}} xmm3 = xmm0[0,2,2,3]<br>
; SSE41-NEXT: psrad $31, %xmm3<br>
; SSE41-NEXT: pblendw {{.*#+}} xmm3 = xmm2[0,1],xmm3[2,3],xmm2[4,5],xmm3[6,7]<br>
-; SSE41-NEXT: movdqa {{.*#+}} xmm2 = [2147483648,2147483648]<br>
-; SSE41-NEXT: movdqa %xmm3, %xmm0<br>
-; SSE41-NEXT: pxor %xmm2, %xmm0<br>
-; SSE41-NEXT: pxor %xmm1, %xmm2<br>
-; SSE41-NEXT: movdqa %xmm2, %xmm4<br>
-; SSE41-NEXT: pcmpeqd %xmm0, %xmm4<br>
-; SSE41-NEXT: pcmpgtd %xmm0, %xmm2<br>
-; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,0,2,2]<br>
-; SSE41-NEXT: pand %xmm4, %xmm0<br>
-; SSE41-NEXT: por %xmm2, %xmm0<br>
+; SSE41-NEXT: movdqa {{.*#+}} xmm0 = [2147483648,2147483648]<br>
+; SSE41-NEXT: movdqa %xmm3, %xmm2<br>
+; SSE41-NEXT: pxor %xmm0, %xmm2<br>
+; SSE41-NEXT: pxor %xmm1, %xmm0<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm4<br>
+; SSE41-NEXT: pcmpgtd %xmm2, %xmm4<br>
+; SSE41-NEXT: pshufd {{.*#+}} xmm5 = xmm4[0,0,2,2]<br>
+; SSE41-NEXT: pcmpeqd %xmm2, %xmm0<br>
+; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm0[1,1,3,3]<br>
+; SSE41-NEXT: pand %xmm5, %xmm0<br>
+; SSE41-NEXT: por %xmm4, %xmm0<br>
; SSE41-NEXT: blendvpd %xmm0, %xmm1, %xmm3<br>
; SSE41-NEXT: movd %xmm3, %eax<br>
; SSE41-NEXT: retq<br>
@@ -1170,11 +1171,12 @@ define i16 @test_v2i16(<2 x i16> %a0) {<br>
; SSE41-NEXT: pxor %xmm0, %xmm2<br>
; SSE41-NEXT: pxor %xmm1, %xmm0<br>
; SSE41-NEXT: movdqa %xmm2, %xmm4<br>
-; SSE41-NEXT: pcmpeqd %xmm0, %xmm4<br>
-; SSE41-NEXT: pcmpgtd %xmm0, %xmm2<br>
-; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,0,2,2]<br>
-; SSE41-NEXT: pand %xmm4, %xmm0<br>
-; SSE41-NEXT: por %xmm2, %xmm0<br>
+; SSE41-NEXT: pcmpgtd %xmm0, %xmm4<br>
+; SSE41-NEXT: pshufd {{.*#+}} xmm5 = xmm4[0,0,2,2]<br>
+; SSE41-NEXT: pcmpeqd %xmm2, %xmm0<br>
+; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm0[1,1,3,3]<br>
+; SSE41-NEXT: pand %xmm5, %xmm0<br>
+; SSE41-NEXT: por %xmm4, %xmm0<br>
; SSE41-NEXT: blendvpd %xmm0, %xmm3, %xmm1<br>
; SSE41-NEXT: movd %xmm1, %eax<br>
; SSE41-NEXT: # kill: def $ax killed $ax killed $eax<br>
@@ -1656,11 +1658,12 @@ define i8 @test_v2i8(<2 x i8> %a0) {<br>
; SSE41-NEXT: pxor %xmm0, %xmm2<br>
; SSE41-NEXT: pxor %xmm1, %xmm0<br>
; SSE41-NEXT: movdqa %xmm2, %xmm4<br>
-; SSE41-NEXT: pcmpeqd %xmm0, %xmm4<br>
-; SSE41-NEXT: pcmpgtd %xmm0, %xmm2<br>
-; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,0,2,2]<br>
-; SSE41-NEXT: pand %xmm4, %xmm0<br>
-; SSE41-NEXT: por %xmm2, %xmm0<br>
+; SSE41-NEXT: pcmpgtd %xmm0, %xmm4<br>
+; SSE41-NEXT: pshufd {{.*#+}} xmm5 = xmm4[0,0,2,2]<br>
+; SSE41-NEXT: pcmpeqd %xmm2, %xmm0<br>
+; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm0[1,1,3,3]<br>
+; SSE41-NEXT: pand %xmm5, %xmm0<br>
+; SSE41-NEXT: por %xmm4, %xmm0<br>
; SSE41-NEXT: blendvpd %xmm0, %xmm3, %xmm1<br>
; SSE41-NEXT: pextrb $0, %xmm1, %eax<br>
; SSE41-NEXT: # kill: def $al killed $al killed $eax<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-reduce-smin.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-reduce-smin.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-reduce-smin.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-reduce-smin.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-reduce-smin.ll Sat Feb 9 05:13:59 2019<br>
@@ -708,16 +708,17 @@ define i32 @test_v2i32(<2 x i32> %a0) {<br>
; SSE41-NEXT: pshufd {{.*#+}} xmm3 = xmm0[0,2,2,3]<br>
; SSE41-NEXT: psrad $31, %xmm3<br>
; SSE41-NEXT: pblendw {{.*#+}} xmm3 = xmm2[0,1],xmm3[2,3],xmm2[4,5],xmm3[6,7]<br>
-; SSE41-NEXT: movdqa {{.*#+}} xmm2 = [2147483648,2147483648]<br>
-; SSE41-NEXT: movdqa %xmm1, %xmm0<br>
-; SSE41-NEXT: pxor %xmm2, %xmm0<br>
-; SSE41-NEXT: pxor %xmm3, %xmm2<br>
-; SSE41-NEXT: movdqa %xmm2, %xmm4<br>
-; SSE41-NEXT: pcmpeqd %xmm0, %xmm4<br>
-; SSE41-NEXT: pcmpgtd %xmm0, %xmm2<br>
-; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,0,2,2]<br>
-; SSE41-NEXT: pand %xmm4, %xmm0<br>
-; SSE41-NEXT: por %xmm2, %xmm0<br>
+; SSE41-NEXT: movdqa {{.*#+}} xmm0 = [2147483648,2147483648]<br>
+; SSE41-NEXT: movdqa %xmm1, %xmm2<br>
+; SSE41-NEXT: pxor %xmm0, %xmm2<br>
+; SSE41-NEXT: pxor %xmm3, %xmm0<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm4<br>
+; SSE41-NEXT: pcmpgtd %xmm2, %xmm4<br>
+; SSE41-NEXT: pshufd {{.*#+}} xmm5 = xmm4[0,0,2,2]<br>
+; SSE41-NEXT: pcmpeqd %xmm2, %xmm0<br>
+; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm0[1,1,3,3]<br>
+; SSE41-NEXT: pand %xmm5, %xmm0<br>
+; SSE41-NEXT: por %xmm4, %xmm0<br>
; SSE41-NEXT: blendvpd %xmm0, %xmm1, %xmm3<br>
; SSE41-NEXT: movd %xmm3, %eax<br>
; SSE41-NEXT: retq<br>
@@ -1164,16 +1165,17 @@ define i16 @test_v2i16(<2 x i16> %a0) {<br>
; SSE41-NEXT: psrad $16, %xmm1<br>
; SSE41-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,1,3,3]<br>
; SSE41-NEXT: pblendw {{.*#+}} xmm1 = xmm1[0,1],xmm0[2,3],xmm1[4,5],xmm0[6,7]<br>
-; SSE41-NEXT: movdqa {{.*#+}} xmm2 = [2147483648,2147483648]<br>
-; SSE41-NEXT: movdqa %xmm3, %xmm0<br>
-; SSE41-NEXT: pxor %xmm2, %xmm0<br>
-; SSE41-NEXT: pxor %xmm1, %xmm2<br>
-; SSE41-NEXT: movdqa %xmm2, %xmm4<br>
-; SSE41-NEXT: pcmpeqd %xmm0, %xmm4<br>
-; SSE41-NEXT: pcmpgtd %xmm0, %xmm2<br>
-; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,0,2,2]<br>
-; SSE41-NEXT: pand %xmm4, %xmm0<br>
-; SSE41-NEXT: por %xmm2, %xmm0<br>
+; SSE41-NEXT: movdqa {{.*#+}} xmm0 = [2147483648,2147483648]<br>
+; SSE41-NEXT: movdqa %xmm3, %xmm2<br>
+; SSE41-NEXT: pxor %xmm0, %xmm2<br>
+; SSE41-NEXT: pxor %xmm1, %xmm0<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm4<br>
+; SSE41-NEXT: pcmpgtd %xmm2, %xmm4<br>
+; SSE41-NEXT: pshufd {{.*#+}} xmm5 = xmm4[0,0,2,2]<br>
+; SSE41-NEXT: pcmpeqd %xmm2, %xmm0<br>
+; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm0[1,1,3,3]<br>
+; SSE41-NEXT: pand %xmm5, %xmm0<br>
+; SSE41-NEXT: por %xmm4, %xmm0<br>
; SSE41-NEXT: blendvpd %xmm0, %xmm3, %xmm1<br>
; SSE41-NEXT: movd %xmm1, %eax<br>
; SSE41-NEXT: # kill: def $ax killed $ax killed $eax<br>
@@ -1650,16 +1652,17 @@ define i8 @test_v2i8(<2 x i8> %a0) {<br>
; SSE41-NEXT: psrad $24, %xmm1<br>
; SSE41-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,1,3,3]<br>
; SSE41-NEXT: pblendw {{.*#+}} xmm1 = xmm1[0,1],xmm0[2,3],xmm1[4,5],xmm0[6,7]<br>
-; SSE41-NEXT: movdqa {{.*#+}} xmm2 = [2147483648,2147483648]<br>
-; SSE41-NEXT: movdqa %xmm3, %xmm0<br>
-; SSE41-NEXT: pxor %xmm2, %xmm0<br>
-; SSE41-NEXT: pxor %xmm1, %xmm2<br>
-; SSE41-NEXT: movdqa %xmm2, %xmm4<br>
-; SSE41-NEXT: pcmpeqd %xmm0, %xmm4<br>
-; SSE41-NEXT: pcmpgtd %xmm0, %xmm2<br>
-; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,0,2,2]<br>
-; SSE41-NEXT: pand %xmm4, %xmm0<br>
-; SSE41-NEXT: por %xmm2, %xmm0<br>
+; SSE41-NEXT: movdqa {{.*#+}} xmm0 = [2147483648,2147483648]<br>
+; SSE41-NEXT: movdqa %xmm3, %xmm2<br>
+; SSE41-NEXT: pxor %xmm0, %xmm2<br>
+; SSE41-NEXT: pxor %xmm1, %xmm0<br>
+; SSE41-NEXT: movdqa %xmm0, %xmm4<br>
+; SSE41-NEXT: pcmpgtd %xmm2, %xmm4<br>
+; SSE41-NEXT: pshufd {{.*#+}} xmm5 = xmm4[0,0,2,2]<br>
+; SSE41-NEXT: pcmpeqd %xmm2, %xmm0<br>
+; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm0[1,1,3,3]<br>
+; SSE41-NEXT: pand %xmm5, %xmm0<br>
+; SSE41-NEXT: por %xmm4, %xmm0<br>
; SSE41-NEXT: blendvpd %xmm0, %xmm3, %xmm1<br>
; SSE41-NEXT: pextrb $0, %xmm1, %eax<br>
; SSE41-NEXT: # kill: def $al killed $al killed $eax<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-shift-ashr-256.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-ashr-256.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shift-ashr-256.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-shift-ashr-256.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-shift-ashr-256.ll Sat Feb 9 05:13:59 2019<br>
@@ -1070,13 +1070,13 @@ define <4 x i64> @constant_shift_v4i64(<<br>
; X32-AVX1-NEXT: vpsrlq $62, %xmm1, %xmm2<br>
; X32-AVX1-NEXT: vpsrlq $31, %xmm1, %xmm1<br>
; X32-AVX1-NEXT: vpblendw {{.*#+}} xmm1 = xmm1[0,1,2,3],xmm2[4,5,6,7]<br>
-; X32-AVX1-NEXT: vmovdqa {{.*#+}} xmm2 = [0,0,1,0,2,0,0,0]<br>
+; X32-AVX1-NEXT: vmovdqa {{.*#+}} xmm2 = [0,1,2,0]<br>
; X32-AVX1-NEXT: vpxor %xmm2, %xmm1, %xmm1<br>
; X32-AVX1-NEXT: vpsubq %xmm2, %xmm1, %xmm1<br>
; X32-AVX1-NEXT: vpsrlq $7, %xmm0, %xmm2<br>
; X32-AVX1-NEXT: vpsrlq $1, %xmm0, %xmm0<br>
; X32-AVX1-NEXT: vpblendw {{.*#+}} xmm0 = xmm0[0,1,2,3],xmm2[4,5,6,7]<br>
-; X32-AVX1-NEXT: vmovdqa {{.*#+}} xmm2 = [0,0,0,16384,0,0,0,256]<br>
+; X32-AVX1-NEXT: vmovdqa {{.*#+}} xmm2 = [0,1073741824,0,16777216]<br>
; X32-AVX1-NEXT: vpxor %xmm2, %xmm0, %xmm0<br>
; X32-AVX1-NEXT: vpsubq %xmm2, %xmm0, %xmm0<br>
; X32-AVX1-NEXT: vinsertf128 $1, %xmm1, %ymm0, %ymm0<br>
@@ -1184,7 +1184,6 @@ define <16 x i16> @constant_shift_v16i16<br>
; AVX2: # %bb.0:<br>
; AVX2-NEXT: vpmulhw {{.*}}(%rip), %ymm0, %ymm1<br>
; AVX2-NEXT: vpblendw {{.*#+}} ymm2 = ymm0[0],ymm1[1,2,3,4,5,6,7],ymm0[8],ymm1[9,10,11,12,13,14,15]<br>
-; AVX2-NEXT: vpblendd {{.*#+}} ymm2 = ymm2[0,1,2,3],ymm1[4,5,6,7]<br>
; AVX2-NEXT: vpsraw $1, %ymm0, %ymm0<br>
; AVX2-NEXT: vpblendw {{.*#+}} ymm0 = ymm2[0],ymm0[1],ymm2[2,3,4,5,6,7,8],ymm0[9],ymm2[10,11,12,13,14,15]<br>
; AVX2-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1,2,3],ymm1[4,5,6,7]<br>
@@ -1248,7 +1247,6 @@ define <16 x i16> @constant_shift_v16i16<br>
; X32-AVX2: # %bb.0:<br>
; X32-AVX2-NEXT: vpmulhw {{\.LCPI.*}}, %ymm0, %ymm1<br>
; X32-AVX2-NEXT: vpblendw {{.*#+}} ymm2 = ymm0[0],ymm1[1,2,3,4,5,6,7],ymm0[8],ymm1[9,10,11,12,13,14,15]<br>
-; X32-AVX2-NEXT: vpblendd {{.*#+}} ymm2 = ymm2[0,1,2,3],ymm1[4,5,6,7]<br>
; X32-AVX2-NEXT: vpsraw $1, %ymm0, %ymm0<br>
; X32-AVX2-NEXT: vpblendw {{.*#+}} ymm0 = ymm2[0],ymm0[1],ymm2[2,3,4,5,6,7,8],ymm0[9],ymm2[10,11,12,13,14,15]<br>
; X32-AVX2-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1,2,3],ymm1[4,5,6,7]<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v8.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v8.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v8.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v8.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v8.ll Sat Feb 9 05:13:59 2019<br>
@@ -1167,10 +1167,9 @@ define <8 x i16> @shuffle_v8i16_0213cedf<br>
;<br>
; AVX512VL-SLOW-LABEL: shuffle_v8i16_0213cedf:<br>
; AVX512VL-SLOW: # %bb.0:<br>
-; AVX512VL-SLOW-NEXT: vpshuflw {{.*#+}} xmm0 = xmm0[0,2,1,3,4,5,6,7]<br>
; AVX512VL-SLOW-NEXT: vpshufhw {{.*#+}} xmm1 = xmm1[0,1,2,3,4,6,5,7]<br>
-; AVX512VL-SLOW-NEXT: vpshufd {{.*#+}} xmm1 = xmm1[2,3,2,3]<br>
-; AVX512VL-SLOW-NEXT: vpunpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm1[0]<br>
+; AVX512VL-SLOW-NEXT: vpshuflw {{.*#+}} xmm0 = xmm0[0,2,1,3,4,5,6,7]<br>
+; AVX512VL-SLOW-NEXT: vpblendd {{.*#+}} xmm0 = xmm0[0,1],xmm1[2,3]<br>
; AVX512VL-SLOW-NEXT: retq<br>
;<br>
; AVX512VL-FAST-LABEL: shuffle_v8i16_0213cedf:<br>
@@ -1557,14 +1556,14 @@ define <8 x i16> @shuffle_v8i16_XX4X8acX<br>
;<br>
; SSE41-LABEL: shuffle_v8i16_XX4X8acX:<br>
; SSE41: # %bb.0:<br>
-; SSE41-NEXT: pshufb {{.*#+}} xmm1 = xmm1[u,u,u,u,u,u,u,u,0,1,4,5,8,9,4,5]<br>
+; SSE41-NEXT: pshufb {{.*#+}} xmm1 = xmm1[0,1,4,5,4,5,6,7,0,1,4,5,8,9,4,5]<br>
; SSE41-NEXT: pshufd {{.*#+}} xmm0 = xmm0[2,2,3,3]<br>
; SSE41-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0,1,2,3],xmm1[4,5,6,7]<br>
; SSE41-NEXT: retq<br>
;<br>
; AVX1-LABEL: shuffle_v8i16_XX4X8acX:<br>
; AVX1: # %bb.0:<br>
-; AVX1-NEXT: vpshufb {{.*#+}} xmm1 = xmm1[u,u,u,u,u,u,u,u,0,1,4,5,8,9,4,5]<br>
+; AVX1-NEXT: vpshufb {{.*#+}} xmm1 = xmm1[0,1,4,5,4,5,6,7,0,1,4,5,8,9,4,5]<br>
; AVX1-NEXT: vpshufd {{.*#+}} xmm0 = xmm0[2,2,3,3]<br>
; AVX1-NEXT: vpblendw {{.*#+}} xmm0 = xmm0[0,1,2,3],xmm1[4,5,6,7]<br>
; AVX1-NEXT: retq<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-shuffle-256-v16.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shuffle-256-v16.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shuffle-256-v16.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-shuffle-256-v16.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-shuffle-256-v16.ll Sat Feb 9 05:13:59 2019<br>
@@ -313,24 +313,13 @@ define <16 x i16> @shuffle_v16i16_00_00_<br>
; AVX1-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0<br>
; AVX1-NEXT: retq<br>
;<br>
-; AVX2-SLOW-LABEL: shuffle_v16i16_00_00_00_00_00_00_00_08_00_00_00_00_00_00_00_00:<br>
-; AVX2-SLOW: # %bb.0:<br>
-; AVX2-SLOW-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
-; AVX2-SLOW-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1,2,3],ymm1[4,5,6,7]<br>
-; AVX2-SLOW-NEXT: vpshuflw {{.*#+}} ymm0 = ymm0[0,0,2,3,4,5,6,7,8,8,10,11,12,13,14,15]<br>
-; AVX2-SLOW-NEXT: vpshufd {{.*#+}} ymm0 = ymm0[0,0,0,0,4,4,4,4]<br>
-; AVX2-SLOW-NEXT: vpslldq {{.*#+}} ymm1 = zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,ymm1[0,1],zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,ymm1[16,17]<br>
-; AVX2-SLOW-NEXT: vpblendw {{.*#+}} ymm0 = ymm0[0,1,2,3,4,5,6],ymm1[7],ymm0[8,9,10,11,12,13,14],ymm1[15]<br>
-; AVX2-SLOW-NEXT: retq<br>
-;<br>
-; AVX2-FAST-LABEL: shuffle_v16i16_00_00_00_00_00_00_00_08_00_00_00_00_00_00_00_00:<br>
-; AVX2-FAST: # %bb.0:<br>
-; AVX2-FAST-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
-; AVX2-FAST-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1,2,3],ymm1[4,5,6,7]<br>
-; AVX2-FAST-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,16,17,16,17,16,17,16,17,16,17,16,17,16,17,16,17]<br>
-; AVX2-FAST-NEXT: vpslldq {{.*#+}} ymm1 = zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,ymm1[0,1],zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,ymm1[16,17]<br>
-; AVX2-FAST-NEXT: vpblendw {{.*#+}} ymm0 = ymm0[0,1,2,3,4,5,6],ymm1[7],ymm0[8,9,10,11,12,13,14],ymm1[15]<br>
-; AVX2-FAST-NEXT: retq<br>
+; AVX2-LABEL: shuffle_v16i16_00_00_00_00_00_00_00_08_00_00_00_00_00_00_00_00:<br>
+; AVX2: # %bb.0:<br>
+; AVX2-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
+; AVX2-NEXT: vpslldq {{.*#+}} ymm1 = zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,ymm1[0,1],zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,ymm1[16,17]<br>
+; AVX2-NEXT: vpbroadcastw %xmm0, %ymm0<br>
+; AVX2-NEXT: vpblendw {{.*#+}} ymm0 = ymm0[0,1,2,3,4,5,6],ymm1[7],ymm0[8,9,10,11,12,13,14],ymm1[15]<br>
+; AVX2-NEXT: retq<br>
;<br>
; AVX512VL-LABEL: shuffle_v16i16_00_00_00_00_00_00_00_08_00_00_00_00_00_00_00_00:<br>
; AVX512VL: # %bb.0:<br>
@@ -3908,7 +3897,7 @@ define <16 x i16> @shuffle_v16i16_uu_uu_<br>
; AVX1-LABEL: shuffle_v16i16_uu_uu_04_uu_16_18_20_uu_uu_uu_12_uu_24_26_28_uu:<br>
; AVX1: # %bb.0:<br>
; AVX1-NEXT: vextractf128 $1, %ymm1, %xmm2<br>
-; AVX1-NEXT: vmovdqa {{.*#+}} xmm3 = <u,u,u,u,u,u,u,u,0,1,4,5,8,9,4,5><br>
+; AVX1-NEXT: vmovdqa {{.*#+}} xmm3 = [0,1,4,5,4,5,6,7,0,1,4,5,8,9,4,5]<br>
; AVX1-NEXT: vpshufb %xmm3, %xmm2, %xmm2<br>
; AVX1-NEXT: vextractf128 $1, %ymm0, %xmm4<br>
; AVX1-NEXT: vpshufd {{.*#+}} xmm4 = xmm4[2,2,3,3]<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/vector-shuffle-256-v32.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shuffle-256-v32.ll?rev=353610&r1=353609&r2=353610&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shuffle-256-v32.ll?rev=353610&r1=353609&r2=353610&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/vector-shuffle-256-v32.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/vector-shuffle-256-v32.ll Sat Feb 9 05:13:59 2019<br>
@@ -578,22 +578,18 @@ define <32 x i8> @shuffle_v32i8_00_00_00<br>
;<br>
; AVX2-LABEL: shuffle_v32i8_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_16_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
; AVX2: # %bb.0:<br>
-; AVX2-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
-; AVX2-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1,2,3],ymm1[4,5,6,7]<br>
-; AVX2-NEXT: vpxor %xmm2, %xmm2, %xmm2<br>
-; AVX2-NEXT: vpshufb %ymm2, %ymm0, %ymm0<br>
-; AVX2-NEXT: vpslldq {{.*#+}} ymm1 = zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,ymm1[0],zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,ymm1[16]<br>
+; AVX2-NEXT: vpbroadcastb %xmm0, %ymm1<br>
+; AVX2-NEXT: vpermq {{.*#+}} ymm0 = ymm0[2,3,0,1]<br>
+; AVX2-NEXT: vpslldq {{.*#+}} ymm0 = zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,ymm0[0],zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,ymm0[16]<br>
; AVX2-NEXT: vmovdqa {{.*#+}} ymm2 = [255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,0,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,0]<br>
-; AVX2-NEXT: vpblendvb %ymm2, %ymm0, %ymm1, %ymm0<br>
+; AVX2-NEXT: vpblendvb %ymm2, %ymm1, %ymm0, %ymm0<br>
; AVX2-NEXT: retq<br>
;<br>
; AVX512VLBW-LABEL: shuffle_v32i8_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_16_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
; AVX512VLBW: # %bb.0:<br>
-; AVX512VLBW-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
-; AVX512VLBW-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1,2,3],ymm1[4,5,6,7]<br>
-; AVX512VLBW-NEXT: vpxor %xmm2, %xmm2, %xmm2<br>
-; AVX512VLBW-NEXT: vpshufb %ymm2, %ymm0, %ymm0<br>
+; AVX512VLBW-NEXT: vpermpd {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
; AVX512VLBW-NEXT: vpslldq {{.*#+}} ymm1 = zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,ymm1[0],zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,ymm1[16]<br>
+; AVX512VLBW-NEXT: vpbroadcastb %xmm0, %ymm0<br>
; AVX512VLBW-NEXT: movl $-2147450880, %eax # imm = 0x80008000<br>
; AVX512VLBW-NEXT: kmovd %eax, %k1<br>
; AVX512VLBW-NEXT: vmovdqu8 %ymm1, %ymm0 {%k1}<br>
@@ -924,18 +920,11 @@ define <32 x i8> @shuffle_v32i8_00_00_00<br>
; AVX2-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,0,0,0,8,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
; AVX2-NEXT: retq<br>
;<br>
-; AVX512VLBW-SLOW-LABEL: shuffle_v32i8_00_00_00_00_00_00_00_24_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-SLOW: # %bb.0:<br>
-; AVX512VLBW-SLOW-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
-; AVX512VLBW-SLOW-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1],ymm1[2,3,4,5,6,7]<br>
-; AVX512VLBW-SLOW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,0,0,0,8,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-SLOW-NEXT: retq<br>
-;<br>
-; AVX512VLBW-FAST-LABEL: shuffle_v32i8_00_00_00_00_00_00_00_24_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-FAST: # %bb.0:<br>
-; AVX512VLBW-FAST-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
-; AVX512VLBW-FAST-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,0,0,0,8,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-FAST-NEXT: retq<br>
+; AVX512VLBW-LABEL: shuffle_v32i8_00_00_00_00_00_00_00_24_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
+; AVX512VLBW: # %bb.0:<br>
+; AVX512VLBW-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
+; AVX512VLBW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,0,0,0,8,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
+; AVX512VLBW-NEXT: retq<br>
;<br>
; AVX512VLVBMI-LABEL: shuffle_v32i8_00_00_00_00_00_00_00_24_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
; AVX512VLVBMI: # %bb.0:<br>
@@ -963,18 +952,11 @@ define <32 x i8> @shuffle_v32i8_00_00_00<br>
; AVX2-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,0,0,9,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
; AVX2-NEXT: retq<br>
;<br>
-; AVX512VLBW-SLOW-LABEL: shuffle_v32i8_00_00_00_00_00_00_25_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-SLOW: # %bb.0:<br>
-; AVX512VLBW-SLOW-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
-; AVX512VLBW-SLOW-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1],ymm1[2,3,4,5,6,7]<br>
-; AVX512VLBW-SLOW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,0,0,9,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-SLOW-NEXT: retq<br>
-;<br>
-; AVX512VLBW-FAST-LABEL: shuffle_v32i8_00_00_00_00_00_00_25_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-FAST: # %bb.0:<br>
-; AVX512VLBW-FAST-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
-; AVX512VLBW-FAST-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,0,0,9,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-FAST-NEXT: retq<br>
+; AVX512VLBW-LABEL: shuffle_v32i8_00_00_00_00_00_00_25_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
+; AVX512VLBW: # %bb.0:<br>
+; AVX512VLBW-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
+; AVX512VLBW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,0,0,9,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
+; AVX512VLBW-NEXT: retq<br>
;<br>
; AVX512VLVBMI-LABEL: shuffle_v32i8_00_00_00_00_00_00_25_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
; AVX512VLVBMI: # %bb.0:<br>
@@ -1002,18 +984,11 @@ define <32 x i8> @shuffle_v32i8_00_00_00<br>
; AVX2-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,0,10,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
; AVX2-NEXT: retq<br>
;<br>
-; AVX512VLBW-SLOW-LABEL: shuffle_v32i8_00_00_00_00_00_26_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-SLOW: # %bb.0:<br>
-; AVX512VLBW-SLOW-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
-; AVX512VLBW-SLOW-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1],ymm1[2,3,4,5,6,7]<br>
-; AVX512VLBW-SLOW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,0,10,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-SLOW-NEXT: retq<br>
-;<br>
-; AVX512VLBW-FAST-LABEL: shuffle_v32i8_00_00_00_00_00_26_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-FAST: # %bb.0:<br>
-; AVX512VLBW-FAST-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
-; AVX512VLBW-FAST-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,0,10,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-FAST-NEXT: retq<br>
+; AVX512VLBW-LABEL: shuffle_v32i8_00_00_00_00_00_26_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
+; AVX512VLBW: # %bb.0:<br>
+; AVX512VLBW-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
+; AVX512VLBW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,0,10,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
+; AVX512VLBW-NEXT: retq<br>
;<br>
; AVX512VLVBMI-LABEL: shuffle_v32i8_00_00_00_00_00_26_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
; AVX512VLVBMI: # %bb.0:<br>
@@ -1041,18 +1016,11 @@ define <32 x i8> @shuffle_v32i8_00_00_00<br>
; AVX2-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,11,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
; AVX2-NEXT: retq<br>
;<br>
-; AVX512VLBW-SLOW-LABEL: shuffle_v32i8_00_00_00_00_27_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-SLOW: # %bb.0:<br>
-; AVX512VLBW-SLOW-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
-; AVX512VLBW-SLOW-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1],ymm1[2,3,4,5,6,7]<br>
-; AVX512VLBW-SLOW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,11,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-SLOW-NEXT: retq<br>
-;<br>
-; AVX512VLBW-FAST-LABEL: shuffle_v32i8_00_00_00_00_27_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-FAST: # %bb.0:<br>
-; AVX512VLBW-FAST-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
-; AVX512VLBW-FAST-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,11,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-FAST-NEXT: retq<br>
+; AVX512VLBW-LABEL: shuffle_v32i8_00_00_00_00_27_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
+; AVX512VLBW: # %bb.0:<br>
+; AVX512VLBW-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
+; AVX512VLBW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,0,11,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
+; AVX512VLBW-NEXT: retq<br>
;<br>
; AVX512VLVBMI-LABEL: shuffle_v32i8_00_00_00_00_27_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
; AVX512VLVBMI: # %bb.0:<br>
@@ -1080,18 +1048,11 @@ define <32 x i8> @shuffle_v32i8_00_00_00<br>
; AVX2-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,12,0,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
; AVX2-NEXT: retq<br>
;<br>
-; AVX512VLBW-SLOW-LABEL: shuffle_v32i8_00_00_00_28_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-SLOW: # %bb.0:<br>
-; AVX512VLBW-SLOW-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
-; AVX512VLBW-SLOW-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1],ymm1[2,3,4,5,6,7]<br>
-; AVX512VLBW-SLOW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,12,0,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-SLOW-NEXT: retq<br>
-;<br>
-; AVX512VLBW-FAST-LABEL: shuffle_v32i8_00_00_00_28_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-FAST: # %bb.0:<br>
-; AVX512VLBW-FAST-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
-; AVX512VLBW-FAST-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,12,0,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-FAST-NEXT: retq<br>
+; AVX512VLBW-LABEL: shuffle_v32i8_00_00_00_28_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
+; AVX512VLBW: # %bb.0:<br>
+; AVX512VLBW-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
+; AVX512VLBW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,0,12,0,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
+; AVX512VLBW-NEXT: retq<br>
;<br>
; AVX512VLVBMI-LABEL: shuffle_v32i8_00_00_00_28_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
; AVX512VLVBMI: # %bb.0:<br>
@@ -1119,18 +1080,11 @@ define <32 x i8> @shuffle_v32i8_00_00_29<br>
; AVX2-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,13,0,0,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
; AVX2-NEXT: retq<br>
;<br>
-; AVX512VLBW-SLOW-LABEL: shuffle_v32i8_00_00_29_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-SLOW: # %bb.0:<br>
-; AVX512VLBW-SLOW-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
-; AVX512VLBW-SLOW-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1],ymm1[2,3,4,5,6,7]<br>
-; AVX512VLBW-SLOW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,13,0,0,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-SLOW-NEXT: retq<br>
-;<br>
-; AVX512VLBW-FAST-LABEL: shuffle_v32i8_00_00_29_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-FAST: # %bb.0:<br>
-; AVX512VLBW-FAST-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
-; AVX512VLBW-FAST-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,13,0,0,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-FAST-NEXT: retq<br>
+; AVX512VLBW-LABEL: shuffle_v32i8_00_00_29_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
+; AVX512VLBW: # %bb.0:<br>
+; AVX512VLBW-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
+; AVX512VLBW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,0,13,0,0,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
+; AVX512VLBW-NEXT: retq<br>
;<br>
; AVX512VLVBMI-LABEL: shuffle_v32i8_00_00_29_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
; AVX512VLVBMI: # %bb.0:<br>
@@ -1158,18 +1112,11 @@ define <32 x i8> @shuffle_v32i8_00_30_00<br>
; AVX2-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,14,0,0,0,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
; AVX2-NEXT: retq<br>
;<br>
-; AVX512VLBW-SLOW-LABEL: shuffle_v32i8_00_30_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-SLOW: # %bb.0:<br>
-; AVX512VLBW-SLOW-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
-; AVX512VLBW-SLOW-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1],ymm1[2,3,4,5,6,7]<br>
-; AVX512VLBW-SLOW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,14,0,0,0,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-SLOW-NEXT: retq<br>
-;<br>
-; AVX512VLBW-FAST-LABEL: shuffle_v32i8_00_30_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-FAST: # %bb.0:<br>
-; AVX512VLBW-FAST-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
-; AVX512VLBW-FAST-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,14,0,0,0,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
-; AVX512VLBW-FAST-NEXT: retq<br>
+; AVX512VLBW-LABEL: shuffle_v32i8_00_30_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
+; AVX512VLBW: # %bb.0:<br>
+; AVX512VLBW-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
+; AVX512VLBW-NEXT: vpshufb {{.*#+}} ymm0 = ymm0[0,14,0,0,0,0,0,0,0,0,0,0,0,0,0,0,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]<br>
+; AVX512VLBW-NEXT: retq<br>
;<br>
; AVX512VLVBMI-LABEL: shuffle_v32i8_00_30_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
; AVX512VLVBMI: # %bb.0:<br>
@@ -1199,22 +1146,13 @@ define <32 x i8> @shuffle_v32i8_31_00_00<br>
; AVX2-NEXT: vpshufb %ymm1, %ymm0, %ymm0<br>
; AVX2-NEXT: retq<br>
;<br>
-; AVX512VLBW-SLOW-LABEL: shuffle_v32i8_31_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-SLOW: # %bb.0:<br>
-; AVX512VLBW-SLOW-NEXT: vpermq {{.*#+}} ymm1 = ymm0[2,3,0,1]<br>
-; AVX512VLBW-SLOW-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1],ymm1[2,3,4,5,6,7]<br>
-; AVX512VLBW-SLOW-NEXT: movl $15, %eax<br>
-; AVX512VLBW-SLOW-NEXT: vmovd %eax, %xmm1<br>
-; AVX512VLBW-SLOW-NEXT: vpshufb %ymm1, %ymm0, %ymm0<br>
-; AVX512VLBW-SLOW-NEXT: retq<br>
-;<br>
-; AVX512VLBW-FAST-LABEL: shuffle_v32i8_31_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
-; AVX512VLBW-FAST: # %bb.0:<br>
-; AVX512VLBW-FAST-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
-; AVX512VLBW-FAST-NEXT: movl $15, %eax<br>
-; AVX512VLBW-FAST-NEXT: vmovd %eax, %xmm1<br>
-; AVX512VLBW-FAST-NEXT: vpshufb %ymm1, %ymm0, %ymm0<br>
-; AVX512VLBW-FAST-NEXT: retq<br>
+; AVX512VLBW-LABEL: shuffle_v32i8_31_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
+; AVX512VLBW: # %bb.0:<br>
+; AVX512VLBW-NEXT: vpermq {{.*#+}} ymm0 = ymm0[0,3,0,1]<br>
+; AVX512VLBW-NEXT: movl $15, %eax<br>
+; AVX512VLBW-NEXT: vmovd %eax, %xmm1<br>
+; AVX512VLBW-NEXT: vpshufb %ymm1, %ymm0, %ymm0<br>
+; AVX512VLBW-NEXT: retq<br>
;<br>
; AVX512VLVBMI-LABEL: shuffle_v32i8_31_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00:<br>
; AVX512VLVBMI: # %bb.0:<br>
<br>
<br>
_______________________________________________<br>
llvm-commits mailing list<br>
<a href="mailto:llvm-commits@lists.llvm.org" target="_blank">llvm-commits@lists.llvm.org</a><br>
<a href="https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits" rel="noreferrer" target="_blank">https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits</a><br>
</blockquote></div>
</blockquote></div>
</blockquote></div>