<div dir="ltr">Committed my suggested fix in r345241 after checking with Eric on IRC.<div><br></div></div><br><div class="gmail_quote"><div dir="ltr">On Wed, Oct 24, 2018 at 11:23 PM Eric Christopher <<a href="mailto:echristo@gmail.com">echristo@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Let me see if that fixes my problem real fast... if not, should we revert until then? :)<div><br></div><div>-eric</div></div><br><div class="gmail_quote"><div dir="ltr">On Wed, Oct 24, 2018 at 11:20 PM Craig Topper <<a href="mailto:craig.topper@gmail.com" target="_blank">craig.topper@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">At minimum, this line is incorrect:</div><div dir="ltr"><div><br></div><div>+ APInt DemandedMask = OriginalDemandedBits & APInt::getLowBitsSet(64, 32);</div><div><br></div></div><div dir="ltr"><div>This is treating it as if each bit from the input only affects the corresponding bit of the output, but that's not how multiply works. 
I'm going to change it to just</div><div><br></div><div>+ APInt DemandedMask = APInt::getLowBitsSet(64, 32);</div><div><br></div><div>not sure if we can come up with a better constraint.</div></div><div dir="ltr"><div><br clear="all"><div><div dir="ltr" class="m_-2409648702470413895m_-7111525181470959589gmail_signature" data-smartmail="gmail_signature">~Craig</div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr">On Wed, Oct 24, 2018 at 10:53 PM Eric Christopher via llvm-commits <<a href="mailto:llvm-commits@lists.llvm.org" target="_blank">llvm-commits@lists.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Simon,<div><br></div><div>Pranav and I are separately seeing failures with this patch with ToT Halide and LLVM. Primarily in correctness_argmax, but other failures as well. Working on getting a testcase for it, but this might help you get one as well. :)</div><div><br></div><div>I haven't reverted yet, but we're definitely seeing it in different organizations so reverting might be nice.</div><div><br></div><div>Thanks!</div><div><br></div><div>-eric</div><div><br><div class="gmail_quote"><div dir="ltr">On Wed, Oct 24, 2018 at 12:13 PM Simon Pilgrim via llvm-commits <<a href="mailto:llvm-commits@lists.llvm.org" target="_blank">llvm-commits@lists.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Author: rksimon<br>
Date: Wed Oct 24 12:11:28 2018<br>
New Revision: 345182<br>
<br>
URL: <a href="http://llvm.org/viewvc/llvm-project?rev=345182&view=rev" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project?rev=345182&view=rev</a><br>
Log:<br>
[X86][SSE] Add SimplifyDemandedBitsForTargetNode PMULDQ/PMULUDQ handling<br>
<br>
Add X86 SimplifyDemandedBitsForTargetNode and use it to simplify PMULDQ/PMULUDQ target nodes.<br>
<br>
This enables us to repeatedly simplify the node's arguments after the previous approach had to be reverted due to PR39398.<br>
<br>
Differential Revision: <a href="https://reviews.llvm.org/D53643" rel="noreferrer" target="_blank">https://reviews.llvm.org/D53643</a><br>
<br>
Modified:<br>
llvm/trunk/lib/Target/X86/X86ISelLowering.cpp<br>
llvm/trunk/lib/Target/X86/X86ISelLowering.h<br>
llvm/trunk/test/CodeGen/X86/combine-pmuldq.ll<br>
llvm/trunk/test/CodeGen/X86/urem-seteq-vec-nonsplat.ll<br>
<br>
Modified: llvm/trunk/lib/Target/X86/X86ISelLowering.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.cpp?rev=345182&r1=345181&r2=345182&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.cpp?rev=345182&r1=345181&r2=345182&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/Target/X86/X86ISelLowering.cpp (original)<br>
+++ llvm/trunk/lib/Target/X86/X86ISelLowering.cpp Wed Oct 24 12:11:28 2018<br>
@@ -31870,6 +31870,30 @@ bool X86TargetLowering::SimplifyDemanded<br>
return false;<br>
}<br>
<br>
+bool X86TargetLowering::SimplifyDemandedBitsForTargetNode(<br>
+ SDValue Op, const APInt &OriginalDemandedBits, KnownBits &Known,<br>
+ TargetLoweringOpt &TLO, unsigned Depth) const {<br>
+ unsigned Opc = Op.getOpcode();<br>
+ switch(Opc) {<br>
+ case X86ISD::PMULDQ:<br>
+ case X86ISD::PMULUDQ: {<br>
+ // PMULDQ/PMULUDQ only uses lower 32 bits from each vector element.<br>
+ KnownBits KnownOp;<br>
+ SDValue LHS = Op.getOperand(0);<br>
+ SDValue RHS = Op.getOperand(1);<br>
+ APInt DemandedMask = OriginalDemandedBits & APInt::getLowBitsSet(64, 32);<br>
+ if (SimplifyDemandedBits(LHS, DemandedMask, KnownOp, TLO, Depth + 1))<br>
+ return true;<br>
+ if (SimplifyDemandedBits(RHS, DemandedMask, KnownOp, TLO, Depth + 1))<br>
+ return true;<br>
+ break;<br>
+ }<br>
+ }<br>
+<br>
+ return TargetLowering::SimplifyDemandedBitsForTargetNode(<br>
+ Op, OriginalDemandedBits, Known, TLO, Depth);<br>
+}<br>
+<br>
/// Check if a vector extract from a target-specific shuffle of a load can be<br>
/// folded into a single element load.<br>
/// Similar handling for VECTOR_SHUFFLE is performed by DAGCombiner, but<br>
@@ -40362,13 +40386,9 @@ static SDValue combinePMULDQ(SDNode *N,<br>
if (ISD::isBuildVectorAllZeros(RHS.getNode()))<br>
return RHS;<br>
<br>
+ // PMULDQ/PMULUDQ only uses lower 32 bits from each vector element.<br>
const TargetLowering &TLI = DAG.getTargetLoweringInfo();<br>
- APInt DemandedMask(APInt::getLowBitsSet(64, 32));<br>
-<br>
- // PMULQDQ/PMULUDQ only uses lower 32 bits from each vector element.<br>
- if (TLI.SimplifyDemandedBits(LHS, DemandedMask, DCI))<br>
- return SDValue(N, 0);<br>
- if (TLI.SimplifyDemandedBits(RHS, DemandedMask, DCI))<br>
+ if (TLI.SimplifyDemandedBits(SDValue(N, 0), APInt::getAllOnesValue(64), DCI))<br>
return SDValue(N, 0);<br>
<br>
return SDValue();<br>
<br>
Modified: llvm/trunk/lib/Target/X86/X86ISelLowering.h<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.h?rev=345182&r1=345181&r2=345182&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.h?rev=345182&r1=345181&r2=345182&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/Target/X86/X86ISelLowering.h (original)<br>
+++ llvm/trunk/lib/Target/X86/X86ISelLowering.h Wed Oct 24 12:11:28 2018<br>
@@ -874,6 +874,12 @@ namespace llvm {<br>
TargetLoweringOpt &TLO,<br>
unsigned Depth) const override;<br>
<br>
+ bool SimplifyDemandedBitsForTargetNode(SDValue Op,<br>
+ const APInt &DemandedBits,<br>
+ KnownBits &Known,<br>
+ TargetLoweringOpt &TLO,<br>
+ unsigned Depth) const override;<br>
+<br>
SDValue unwrapAddress(SDValue N) const override;<br>
<br>
bool isGAPlusOffset(SDNode *N, const GlobalValue* &GA,<br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/combine-pmuldq.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/combine-pmuldq.ll?rev=345182&r1=345181&r2=345182&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/combine-pmuldq.ll?rev=345182&r1=345181&r2=345182&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/combine-pmuldq.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/combine-pmuldq.ll Wed Oct 24 12:11:28 2018<br>
@@ -47,26 +47,10 @@ define <2 x i64> @combine_shuffle_zero_p<br>
; SSE-NEXT: pmuludq %xmm1, %xmm0<br>
; SSE-NEXT: retq<br>
;<br>
-; AVX2-LABEL: combine_shuffle_zero_pmuludq:<br>
-; AVX2: # %bb.0:<br>
-; AVX2-NEXT: vpxor %xmm2, %xmm2, %xmm2<br>
-; AVX2-NEXT: vpblendd {{.*#+}} xmm1 = xmm1[0],xmm2[1],xmm1[2],xmm2[3]<br>
-; AVX2-NEXT: vpmuludq %xmm1, %xmm0, %xmm0<br>
-; AVX2-NEXT: retq<br>
-;<br>
-; AVX512VL-LABEL: combine_shuffle_zero_pmuludq:<br>
-; AVX512VL: # %bb.0:<br>
-; AVX512VL-NEXT: vpxor %xmm2, %xmm2, %xmm2<br>
-; AVX512VL-NEXT: vpblendd {{.*#+}} xmm1 = xmm1[0],xmm2[1],xmm1[2],xmm2[3]<br>
-; AVX512VL-NEXT: vpmuludq %xmm1, %xmm0, %xmm0<br>
-; AVX512VL-NEXT: retq<br>
-;<br>
-; AVX512DQVL-LABEL: combine_shuffle_zero_pmuludq:<br>
-; AVX512DQVL: # %bb.0:<br>
-; AVX512DQVL-NEXT: vpxor %xmm2, %xmm2, %xmm2<br>
-; AVX512DQVL-NEXT: vpblendd {{.*#+}} xmm1 = xmm1[0],xmm2[1],xmm1[2],xmm2[3]<br>
-; AVX512DQVL-NEXT: vpmuludq %xmm1, %xmm0, %xmm0<br>
-; AVX512DQVL-NEXT: retq<br>
+; AVX-LABEL: combine_shuffle_zero_pmuludq:<br>
+; AVX: # %bb.0:<br>
+; AVX-NEXT: vpmuludq %xmm1, %xmm0, %xmm0<br>
+; AVX-NEXT: retq<br>
%1 = shufflevector <4 x i32> %a0, <4 x i32> zeroinitializer, <4 x i32> <i32 0, i32 5, i32 2, i32 7><br>
%2 = shufflevector <4 x i32> %a1, <4 x i32> zeroinitializer, <4 x i32> <i32 0, i32 5, i32 2, i32 7><br>
%3 = bitcast <4 x i32> %1 to <2 x i64><br>
@@ -84,22 +68,16 @@ define <4 x i64> @combine_shuffle_zero_p<br>
;<br>
; AVX2-LABEL: combine_shuffle_zero_pmuludq_256:<br>
; AVX2: # %bb.0:<br>
-; AVX2-NEXT: vpxor %xmm2, %xmm2, %xmm2<br>
-; AVX2-NEXT: vpblendd {{.*#+}} ymm1 = ymm1[0],ymm2[1],ymm1[2],ymm2[3],ymm1[4],ymm2[5],ymm1[6],ymm2[7]<br>
; AVX2-NEXT: vpmuludq %ymm1, %ymm0, %ymm0<br>
; AVX2-NEXT: retq<br>
;<br>
; AVX512VL-LABEL: combine_shuffle_zero_pmuludq_256:<br>
; AVX512VL: # %bb.0:<br>
-; AVX512VL-NEXT: vpxor %xmm2, %xmm2, %xmm2<br>
-; AVX512VL-NEXT: vpblendd {{.*#+}} ymm1 = ymm1[0],ymm2[1],ymm1[2],ymm2[3],ymm1[4],ymm2[5],ymm1[6],ymm2[7]<br>
; AVX512VL-NEXT: vpmuludq %ymm1, %ymm0, %ymm0<br>
; AVX512VL-NEXT: retq<br>
;<br>
; AVX512DQVL-LABEL: combine_shuffle_zero_pmuludq_256:<br>
; AVX512DQVL: # %bb.0:<br>
-; AVX512DQVL-NEXT: vpxor %xmm2, %xmm2, %xmm2<br>
-; AVX512DQVL-NEXT: vpblendd {{.*#+}} ymm1 = ymm1[0],ymm2[1],ymm1[2],ymm2[3],ymm1[4],ymm2[5],ymm1[6],ymm2[7]<br>
; AVX512DQVL-NEXT: vpmuludq %ymm1, %ymm0, %ymm0<br>
; AVX512DQVL-NEXT: retq<br>
%1 = shufflevector <8 x i32> %a0, <8 x i32> zeroinitializer, <8 x i32> <i32 0, i32 9, i32 2, i32 11, i32 4, i32 13, i32 6, i32 15><br>
<br>
Modified: llvm/trunk/test/CodeGen/X86/urem-seteq-vec-nonsplat.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/urem-seteq-vec-nonsplat.ll?rev=345182&r1=345181&r2=345182&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/urem-seteq-vec-nonsplat.ll?rev=345182&r1=345181&r2=345182&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/urem-seteq-vec-nonsplat.ll (original)<br>
+++ llvm/trunk/test/CodeGen/X86/urem-seteq-vec-nonsplat.ll Wed Oct 24 12:11:28 2018<br>
@@ -143,31 +143,31 @@ define <4 x i32> @test_urem_odd_div(<4 x<br>
define <4 x i32> @test_urem_even_div(<4 x i32> %X) nounwind readnone {<br>
; CHECK-SSE2-LABEL: test_urem_even_div:<br>
; CHECK-SSE2: # %bb.0:<br>
-; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[0,1,2,0]<br>
-; CHECK-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [2863311531,3435973837,2863311531,2454267027]<br>
-; CHECK-SSE2-NEXT: pmuludq %xmm2, %xmm1<br>
-; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,3,2,3]<br>
+; CHECK-SSE2-NEXT: movdqa {{.*#+}} xmm1 = [2863311531,3435973837,2863311531,2454267027]<br>
+; CHECK-SSE2-NEXT: movdqa %xmm0, %xmm2<br>
+; CHECK-SSE2-NEXT: pmuludq %xmm1, %xmm2<br>
+; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm2 = xmm2[1,3,2,3]<br>
; CHECK-SSE2-NEXT: movdqa %xmm0, %xmm3<br>
; CHECK-SSE2-NEXT: psrld $1, %xmm3<br>
; CHECK-SSE2-NEXT: movdqa %xmm0, %xmm4<br>
; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm4 = xmm4[1,1],xmm3[3,3]<br>
-; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm2 = xmm2[1,1,3,3]<br>
-; CHECK-SSE2-NEXT: pmuludq %xmm4, %xmm2<br>
-; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm2 = xmm2[1,3,2,3]<br>
-; CHECK-SSE2-NEXT: punpckldq {{.*#+}} xmm1 = xmm1[0],xmm2[0],xmm1[1],xmm2[1]<br>
-; CHECK-SSE2-NEXT: movdqa %xmm1, %xmm2<br>
-; CHECK-SSE2-NEXT: psrld $2, %xmm2<br>
-; CHECK-SSE2-NEXT: psrld $3, %xmm1<br>
-; CHECK-SSE2-NEXT: movdqa %xmm1, %xmm3<br>
-; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm3 = xmm3[1,1],xmm2[3,3]<br>
+; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,1,3,3]<br>
+; CHECK-SSE2-NEXT: pmuludq %xmm4, %xmm1<br>
+; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,3,2,3]<br>
+; CHECK-SSE2-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm1[0],xmm2[1],xmm1[1]<br>
+; CHECK-SSE2-NEXT: movdqa %xmm2, %xmm1<br>
+; CHECK-SSE2-NEXT: psrld $2, %xmm1<br>
+; CHECK-SSE2-NEXT: psrld $3, %xmm2<br>
+; CHECK-SSE2-NEXT: movdqa %xmm2, %xmm3<br>
+; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm3 = xmm3[1,1],xmm1[3,3]<br>
; CHECK-SSE2-NEXT: movdqa {{.*#+}} xmm4 = [6,10,12,14]<br>
; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm5 = xmm4[1,1,3,3]<br>
; CHECK-SSE2-NEXT: pmuludq %xmm3, %xmm5<br>
; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm5[0,2,2,3]<br>
-; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm2 = xmm2[0,3],xmm1[1,2]<br>
-; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm2 = xmm2[0,2,3,1]<br>
-; CHECK-SSE2-NEXT: pmuludq %xmm4, %xmm2<br>
-; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm2[0,2,2,3]<br>
+; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm1 = xmm1[0,3],xmm2[1,2]<br>
+; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm1 = xmm1[0,2,3,1]<br>
+; CHECK-SSE2-NEXT: pmuludq %xmm4, %xmm1<br>
+; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]<br>
; CHECK-SSE2-NEXT: punpckldq {{.*#+}} xmm1 = xmm1[0],xmm3[0],xmm1[1],xmm3[1]<br>
; CHECK-SSE2-NEXT: psubd %xmm1, %xmm0<br>
; CHECK-SSE2-NEXT: pxor %xmm1, %xmm1<br>
@@ -377,30 +377,30 @@ define <4 x i32> @test_urem_pow2(<4 x i3<br>
define <4 x i32> @test_urem_one(<4 x i32> %X) nounwind readnone {<br>
; CHECK-SSE2-LABEL: test_urem_one:<br>
; CHECK-SSE2: # %bb.0:<br>
-; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[0,1,2,0]<br>
-; CHECK-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [2863311531,0,2863311531,2454267027]<br>
-; CHECK-SSE2-NEXT: pmuludq %xmm2, %xmm1<br>
-; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,3,2,3]<br>
+; CHECK-SSE2-NEXT: movdqa {{.*#+}} xmm1 = [2863311531,0,2863311531,2454267027]<br>
+; CHECK-SSE2-NEXT: movdqa %xmm0, %xmm2<br>
+; CHECK-SSE2-NEXT: pmuludq %xmm1, %xmm2<br>
+; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm2 = xmm2[1,3,2,3]<br>
; CHECK-SSE2-NEXT: movdqa %xmm0, %xmm3<br>
; CHECK-SSE2-NEXT: psrld $1, %xmm3<br>
; CHECK-SSE2-NEXT: movdqa %xmm0, %xmm4<br>
; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm4 = xmm4[1,1],xmm3[3,3]<br>
-; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm2 = xmm2[1,1,3,3]<br>
-; CHECK-SSE2-NEXT: pmuludq %xmm4, %xmm2<br>
-; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm2 = xmm2[1,3,2,3]<br>
-; CHECK-SSE2-NEXT: punpckldq {{.*#+}} xmm1 = xmm1[0],xmm2[0],xmm1[1],xmm2[1]<br>
-; CHECK-SSE2-NEXT: movdqa %xmm1, %xmm2<br>
-; CHECK-SSE2-NEXT: psrld $2, %xmm2<br>
+; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,1,3,3]<br>
+; CHECK-SSE2-NEXT: pmuludq %xmm4, %xmm1<br>
+; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,3,2,3]<br>
+; CHECK-SSE2-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm1[0],xmm2[1],xmm1[1]<br>
+; CHECK-SSE2-NEXT: movdqa %xmm2, %xmm1<br>
+; CHECK-SSE2-NEXT: psrld $2, %xmm1<br>
; CHECK-SSE2-NEXT: movdqa %xmm0, %xmm3<br>
-; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm3 = xmm3[1,0],xmm2[0,0]<br>
-; CHECK-SSE2-NEXT: psrld $3, %xmm1<br>
-; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm3 = xmm3[2,0],xmm1[2,3]<br>
-; CHECK-SSE2-NEXT: movdqa {{.*#+}} xmm1 = [6,1,12,14]<br>
-; CHECK-SSE2-NEXT: pmuludq %xmm1, %xmm3<br>
+; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm3 = xmm3[1,0],xmm1[0,0]<br>
+; CHECK-SSE2-NEXT: psrld $3, %xmm2<br>
+; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm3 = xmm3[2,0],xmm2[2,3]<br>
+; CHECK-SSE2-NEXT: movdqa {{.*#+}} xmm2 = [6,1,12,14]<br>
+; CHECK-SSE2-NEXT: pmuludq %xmm2, %xmm3<br>
; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm3[0,2,2,3]<br>
; CHECK-SSE2-NEXT: movdqa %xmm0, %xmm4<br>
-; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm4 = xmm4[1,1],xmm2[3,3]<br>
-; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,1,3,3]<br>
+; CHECK-SSE2-NEXT: shufps {{.*#+}} xmm4 = xmm4[1,1],xmm1[3,3]<br>
+; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm2[1,1,3,3]<br>
; CHECK-SSE2-NEXT: pmuludq %xmm4, %xmm1<br>
; CHECK-SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]<br>
; CHECK-SSE2-NEXT: punpckldq {{.*#+}} xmm3 = xmm3[0],xmm1[0],xmm3[1],xmm1[1]<br>
<br>
<br>
_______________________________________________<br>
llvm-commits mailing list<br>
<a href="mailto:llvm-commits@lists.llvm.org" target="_blank">llvm-commits@lists.llvm.org</a><br>
<a href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits" rel="noreferrer" target="_blank">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits</a><br>
</blockquote></div></div></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>