[llvm] r283480 - [DAG] Generalize build_vector -> vector_shuffle combine for more than 2 inputs
Michael Kuperstein via llvm-commits
llvm-commits at lists.llvm.org
Tue Oct 11 15:54:13 PDT 2016
Fixed in r283953.
On Tue, Oct 11, 2016 at 3:26 PM, Andrea Di Biagio <andrea.dibiagio at gmail.com> wrote:
>
>
> On Tue, Oct 11, 2016 at 10:43 PM, Michael Kuperstein <mkuper at google.com> wrote:
>
>> Argh, yes, your workaround was pretty much the original intent - all of
>> the unsupported cases should have fallen through to the final else. (I
>> actually broke it in r281402, not in this commit.)
>> Thanks a lot for the test case!
>>
>
>
> No problem!
>
>
>>
>> Regarding running this only post-legalization, as I said on D24491, what
>> worries me is cases like constructing a single <4 x i32> from two
>> <2 x i32>-s. But maybe I'm just being paranoid.
>> What benefits do you see from running this only post-legalization? I
>> don't think it'll allow us to simplify the code any, since we can get
>> non-factor-of-2 mismatches even post-legalization (in AVX-512 you have
>> both 512-bit and 128-bit legal vectors).
>>
>>
> Ah right... I keep forgetting about the two <2 x i32> case. Nevermind then.
>
> Thanks,
> Andrea
>
>
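For context on the width constraints being discussed above: the combine handles an input vector that is exactly half or exactly twice the width of the output, which is why two <2 x i32> inputs can build a <4 x i32>, while a mismatch that isn't a factor of 2 (e.g. a 512-bit input feeding a 128-bit output when both are legal under AVX-512) is rejected. A minimal standalone sketch of that arithmetic follows; classifyMismatch is an invented helper for illustration only, not LLVM code.

#include <cstdio>

// Invented helper: describes how a build_vector-to-shuffle combine could
// treat an input vector of InBits feeding an output vector of OutBits.
static const char *classifyMismatch(unsigned InBits, unsigned OutBits) {
  if (InBits == OutBits)
    return "same width: shuffle the inputs directly";
  if (InBits * 2 == OutBits)
    return "inputs are half the output width: concat a pair of inputs first";
  if (InBits == OutBits * 2)
    return "input is twice the output width: shuffle wide, then extract a subvector";
  return "mismatch is not a factor of 2: currently unsupported, bail out";
}

int main() {
  // Two <2 x i32> (64-bit) inputs building a <4 x i32> (128-bit) output.
  std::printf("%s\n", classifyMismatch(64, 128));
  // A 512-bit input feeding a 128-bit output (a 4x mismatch).
  std::printf("%s\n", classifyMismatch(512, 128));
  return 0;
}

The regression reported below involves a related shape: a v8f64 (512-bit) input reaching a combine whose output type is v2f64 (128-bit).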
>>
>>
>> On Tue, Oct 11, 2016 at 6:03 AM, Andrea Di Biagio <andrea.dibiagio at gmail.com> wrote:
>>
>>> Hi Michael,
>>>
>>> It looks like your patch introduced a regression (see the reproducer below):
>>>
>>>
>>> ;;;;;
>>> target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
>>> target triple = "x86_64-unknown-linux-gnu"
>>>
>>> define <2 x double> @foo(<4 x double> %A) {
>>> entry:
>>>   %A.addr = alloca <4 x double>, align 32
>>>   %B = alloca <8 x double>, align 64
>>>   %C = alloca <8 x double>, align 64
>>>   store <4 x double> %A, <4 x double>* %A.addr, align 32
>>>   %0 = load <4 x double>, <4 x double>* %A.addr, align 32
>>>   %1 = load <4 x double>, <4 x double>* %A.addr, align 32
>>>   %shuffle = shufflevector <4 x double> %0, <4 x double> %1, <8 x i32> zeroinitializer
>>>   store <8 x double> %shuffle, <8 x double>* %B, align 64
>>>   %2 = load <8 x double>, <8 x double>* %B, align 64
>>>   store <8 x double> %2, <8 x double>* %C, align 64
>>>   %3 = load <8 x double>, <8 x double>* %C, align 64
>>>   %4 = shufflevector <8 x double> %3, <8 x double> undef, <2 x i32> <i32 2, i32 0>
>>>   ret <2 x double> %4
>>> }
>>> ;;;;
>>>
>>> The problem is in DAGCombiner::createBuildVecShuffle: it looks like we
>>> forgot to handle the case where InVT2 is larger than InVT1 (here the size
>>> of InVT1 is already VT.getSizeInBits() * 2). In this particular case we
>>> have:
>>> VT = MVT::v2f64
>>> InVT1 = MVT::v4f64
>>> InVT2 = MVT::v8f64
>>>
>>> I managed to work around this issue by applying the patch below. As you
>>> know, this transform can run even if vector types are not legal, so the
>>> probability of InVT1 and InVT2 having mismatching types is higher. I
>>> still wonder if we should only enable this transform when all the
>>> involved vector types are legal. In this particular (randomly generated)
>>> example, most of the nodes in the DAG will be simplified before we even
>>> run the type legalizer, so the build_vector-to-shuffle canonicalization
>>> may run prematurely here.
>>>
>>> (see patch below).
>>>
>>> Index: SelectionDAG/DAGCombiner.cpp
>>> ===================================================================
>>> --- SelectionDAG/DAGCombiner.cpp (revision 283869)
>>> +++ SelectionDAG/DAGCombiner.cpp (working copy)
>>> @@ -12933,7 +12933,12 @@
>>>            VecIn2 = DAG.getNode(ISD::INSERT_SUBVECTOR, DL, InVT1,
>>>                                 DAG.getUNDEF(InVT1), VecIn2, ZeroIdx);
>>>          ShuffleNumElems = NumElems * 2;
>>> -      }
>>> +      } else
>>> +        // Conservatively bail out if VecIn2 is larger than VecIn1.
>>> +        // TODO: We will be able to replace this case with a simple assert
>>> +        // once we teach reduceBuildVecToShuffle() how to sort input
>>> +        // vectors by descending size.
>>> +        return SDValue();
>>>      } else {
>>>        // TODO: Support cases where the length mismatch isn't exactly by a
>>>        // factor of 2.
>>>
>>> ====
>>>
>>> I hope this helps,
>>> Andrea
>>>
>>> On Thu, Oct 6, 2016 at 7:58 PM, Michael Kuperstein via llvm-commits <llvm-commits at lists.llvm.org> wrote:
>>>
>>>> Author: mkuper
>>>> Date: Thu Oct 6 13:58:24 2016
>>>> New Revision: 283480
>>>>
>>>> URL: http://llvm.org/viewvc/llvm-project?rev=283480&view=rev
>>>> Log:
>>>> [DAG] Generalize build_vector -> vector_shuffle combine for more than 2 inputs
>>>>
>>>> This generalizes the build_vector -> vector_shuffle combine to support any
>>>> number of inputs. The idea is to create a binary tree of shuffles, where
>>>> the first layer performs pairwise shuffles of the input vectors placing each
>>>> input element into the correct lane, and the rest of the tree blends these
>>>> shuffles together.
>>>>
>>>> This doesn't try to be smart and create any sort of "optimal" shuffles.
>>>> The assumption is that even a "poor" shuffle sequence is better than
>>>> extracting and inserting the elements one by one.
>>>>
>>>> Differential Revision: https://reviews.llvm.org/D24683
>>>>
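To make the blend-tree idea in the log above concrete, here is a minimal standalone sketch: plain integer arrays stand in for SDNodes, -1 models an undef lane, and the blend helper is invented for illustration (this is not the DAGCombiner implementation). Each partial shuffle already has its elements in their final output lanes; the partial results are then blended pairwise until a single vector remains.

#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

using Vec = std::vector<int>; // -1 models an undef lane.

// "Blend" two partial results: take whichever side defines each lane.
static Vec blend(const Vec &L, const Vec &R) {
  Vec Out(L.size(), -1);
  for (std::size_t i = 0; i < L.size(); ++i)
    Out[i] = (L[i] != -1) ? L[i] : R[i];
  return Out;
}

int main() {
  // Four partial shuffles, mirroring the t10..t13 shuffles in the blend-tree
  // comment below: each one has its elements in the correct output lanes.
  std::vector<Vec> Shuffles = {
      {10, 20, -1, -1, -1, -1, -1, -1}, // lanes 0-1
      {-1, -1, 30, 40, -1, -1, -1, -1}, // lanes 2-3
      {-1, -1, -1, -1, 50, 60, -1, -1}, // lanes 4-5
      {-1, -1, -1, -1, -1, -1, 70, 80}, // lanes 6-7
  };

  // Binary tree of blends: halve the list until one vector remains.
  while (Shuffles.size() > 1) {
    if (Shuffles.size() % 2) // pad with an all-undef vector if needed
      Shuffles.push_back(Vec(Shuffles[0].size(), -1));
    std::vector<Vec> Next;
    for (std::size_t i = 0; i + 1 < Shuffles.size(); i += 2)
      Next.push_back(blend(Shuffles[i], Shuffles[i + 1]));
    Shuffles = std::move(Next);
  }

  for (int V : Shuffles[0])
    std::printf("%d ", V); // prints: 10 20 30 40 50 60 70 80
  std::printf("\n");
  return 0;
}

With four partial shuffles this takes two blend layers, i.e. roughly log2(N) layers for N inputs, matching the t20/t21/t30 example in the code comments further down.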
>>>>
>>>> Modified:
>>>> llvm/trunk/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
>>>> llvm/trunk/test/CodeGen/PowerPC/pr27078.ll
>>>> llvm/trunk/test/CodeGen/X86/oddshuffles.ll
>>>> llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll
>>>> llvm/trunk/test/CodeGen/X86/vector-trunc.ll
>>>>
>>>> Modified: llvm/trunk/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
>>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/SelectionDAG/DAGCombiner.cpp?rev=283480&r1=283479&r2=283480&view=diff
>>>> ==============================================================================
>>>> --- llvm/trunk/lib/CodeGen/SelectionDAG/DAGCombiner.cpp (original)
>>>> +++ llvm/trunk/lib/CodeGen/SelectionDAG/DAGCombiner.cpp Thu Oct 6 13:58:24 2016
>>>> @@ -379,6 +379,9 @@ namespace {
>>>> SDValue reduceBuildVecExtToExtBuildVec(SDNode *N);
>>>> SDValue reduceBuildVecConvertToConvertBuildVec(SDNode *N);
>>>> SDValue reduceBuildVecToShuffle(SDNode *N);
>>>> +    SDValue createBuildVecShuffle(SDLoc DL, SDNode *N, ArrayRef<int> VectorMask,
>>>> +                                  SDValue VecIn1, SDValue VecIn2,
>>>> +                                  unsigned LeftIdx);
>>>>
>>>> SDValue GetDemandedBits(SDValue V, const APInt &Mask);
>>>>
>>>> @@ -12874,138 +12877,40 @@ SDValue DAGCombiner::reduceBuildVecConve
>>>> return DAG.getNode(Opcode, dl, VT, BV);
>>>> }
>>>>
>>>> -// If Vec holds a reference to a non-null node, return Vec.
>>>> -// Otherwise, return either a zero or an undef node of the appropriate type.
>>>> -static SDValue getRightHandValue(SelectionDAG &DAG, SDLoc DL, SDValue Vec,
>>>> -                                 EVT VT, bool Zero) {
>>>> -  if (Vec.getNode())
>>>> -    return Vec;
>>>> -
>>>> -  if (Zero)
>>>> -    return VT.isInteger() ? DAG.getConstant(0, DL, VT)
>>>> -                          : DAG.getConstantFP(0.0, DL, VT);
>>>> -
>>>> -  return DAG.getUNDEF(VT);
>>>> -}
>>>> +SDValue DAGCombiner::createBuildVecShuffle(SDLoc DL, SDNode *N,
>>>> +                                           ArrayRef<int> VectorMask,
>>>> +                                           SDValue VecIn1, SDValue VecIn2,
>>>> +                                           unsigned LeftIdx) {
>>>> +  MVT IdxTy = TLI.getVectorIdxTy(DAG.getDataLayout());
>>>> +  SDValue ZeroIdx = DAG.getConstant(0, DL, IdxTy);
>>>>
>>>> -// Check to see if this is a BUILD_VECTOR of a bunch of EXTRACT_VECTOR_ELT
>>>> -// operations. If so, and if the EXTRACT_VECTOR_ELT vector inputs come from
>>>> -// at most two distinct vectors, turn this into a shuffle node.
>>>> -// TODO: Support more than two inputs by constructing a tree of shuffles.
>>>> -SDValue DAGCombiner::reduceBuildVecToShuffle(SDNode *N) {
>>>> - SDLoc dl(N);
>>>> EVT VT = N->getValueType(0);
>>>> -
>>>> - // Only type-legal BUILD_VECTOR nodes are converted to shuffle nodes.
>>>> - if (!isTypeLegal(VT))
>>>> - return SDValue();
>>>> -
>>>> - // May only combine to shuffle after legalize if shuffle is legal.
>>>> - if (LegalOperations && !TLI.isOperationLegal(ISD::VECTOR_SHUFFLE,
>>>> VT))
>>>> - return SDValue();
>>>> -
>>>> - SDValue VecIn1, VecIn2;
>>>> - bool UsesZeroVector = false;
>>>> - unsigned NumElems = N->getNumOperands();
>>>> -
>>>> - // Record, for each element of newly built vector, which input it
>>>> uses.
>>>> - // 0 stands for the zero vector, 1 and 2 for the two input vectors,
>>>> and -1
>>>> - // for undef.
>>>> - SmallVector<int, 8> VectorMask;
>>>> - for (unsigned i = 0; i != NumElems; ++i) {
>>>> - SDValue Op = N->getOperand(i);
>>>> -
>>>> - if (Op.isUndef()) {
>>>> - VectorMask.push_back(-1);
>>>> - continue;
>>>> - }
>>>> -
>>>> - // See if we can combine this into a blend with a zero vector.
>>>> - if (!VecIn2.getNode() && (isNullConstant(Op) ||
>>>> isNullFPConstant(Op))) {
>>>> - UsesZeroVector = true;
>>>> - VectorMask.push_back(0);
>>>> - continue;
>>>> - }
>>>> -
>>>> - // Not an undef or zero. If the input is something other than an
>>>> - // EXTRACT_VECTOR_ELT with a constant index, bail out.
>>>> - if (Op.getOpcode() != ISD::EXTRACT_VECTOR_ELT ||
>>>> - !isa<ConstantSDNode>(Op.getOperand(1)))
>>>> - return SDValue();
>>>> -
>>>> - // We only allow up to two distinct input vectors.
>>>> - SDValue ExtractedFromVec = Op.getOperand(0);
>>>> - if (ExtractedFromVec == VecIn1) {
>>>> - VectorMask.push_back(1);
>>>> - continue;
>>>> - }
>>>> - if (ExtractedFromVec == VecIn2) {
>>>> - VectorMask.push_back(2);
>>>> - continue;
>>>> - }
>>>> -
>>>> - if (!VecIn1.getNode()) {
>>>> - VecIn1 = ExtractedFromVec;
>>>> - VectorMask.push_back(1);
>>>> - } else if (!VecIn2.getNode() && !UsesZeroVector) {
>>>> - VecIn2 = ExtractedFromVec;
>>>> - VectorMask.push_back(2);
>>>> - } else {
>>>> - return SDValue();
>>>> - }
>>>> - }
>>>> -
>>>> - // If we didn't find at least one input vector, bail out.
>>>> - if (!VecIn1.getNode())
>>>> - return SDValue();
>>>> -
>>>> EVT InVT1 = VecIn1.getValueType();
>>>> EVT InVT2 = VecIn2.getNode() ? VecIn2.getValueType() : InVT1;
>>>> +
>>>> unsigned Vec2Offset = InVT1.getVectorNumElements();
>>>> + unsigned NumElems = VT.getVectorNumElements();
>>>> unsigned ShuffleNumElems = NumElems;
>>>>
>>>> - MVT IdxTy = TLI.getVectorIdxTy(DAG.getDataLayout());
>>>> - SDValue ZeroIdx = DAG.getConstant(0, dl, IdxTy);
>>>> -
>>>> // We can't generate a shuffle node with mismatched input and output
>>>> types.
>>>> - // Try to make the types match.
>>>> - // TODO: Should this fire if InVT1/InVT2 are not legal types, or
>>>> should
>>>> - // we let legalization run its course first?
>>>> + // Try to make the types match the type of the output.
>>>> if (InVT1 != VT || InVT2 != VT) {
>>>> - // Both inputs and the output must have the same base element type.
>>>> - EVT ElemType = VT.getVectorElementType();
>>>> - if (ElemType != InVT1.getVectorElementType() ||
>>>> - ElemType != InVT2.getVectorElementType())
>>>> - return SDValue();
>>>> -
>>>> - // TODO: Canonicalize this so that if the vectors have different
>>>> lengths,
>>>> - // VecIn1 is always longer.
>>>> -
>>>> - // The element types match, now figure out the lengths.
>>>> if (InVT1.getSizeInBits() * 2 == VT.getSizeInBits() && InVT1 ==
>>>> InVT2) {
>>>> // If both input vectors are exactly half the size of the
>>>> output, concat
>>>> // them. If we have only one (non-zero) input, concat it with
>>>> undef.
>>>> - VecIn1 = DAG.getNode(ISD::CONCAT_VECTORS, dl, VT, VecIn1,
>>>> - getRightHandValue(DAG, dl, VecIn2, InVT1,
>>>> false));
>>>> + VecIn1 = DAG.getNode(ISD::CONCAT_VECTORS, DL, VT, VecIn1,
>>>> + VecIn2.getNode() ? VecIn2 :
>>>> DAG.getUNDEF(InVT1));
>>>> VecIn2 = SDValue();
>>>> - // If we have one "real" input and are blending with zero, we
>>>> need the
>>>> - // zero elements to come from the second input, not the undef
>>>> part of the
>>>> - // first input.
>>>> - if (UsesZeroVector)
>>>> - Vec2Offset = NumElems;
>>>> } else if (InVT1.getSizeInBits() == VT.getSizeInBits() * 2) {
>>>> if (!TLI.isExtractSubvectorCheap(VT, NumElems))
>>>> return SDValue();
>>>>
>>>> - if (UsesZeroVector)
>>>> - return SDValue();
>>>> -
>>>> if (!VecIn2.getNode()) {
>>>> // If we only have one input vector, and it's twice the size
>>>> of the
>>>> - // output, split it in two.
>>>> - VecIn2 = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, VT, VecIn1,
>>>> - DAG.getConstant(NumElems, dl, IdxTy));
>>>> - VecIn1 = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, VT, VecIn1,
>>>> ZeroIdx);
>>>> + // output, split it in two.
>>>> + VecIn2 = DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, VecIn1,
>>>> + DAG.getConstant(NumElems, DL, IdxTy));
>>>> + VecIn1 = DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, VecIn1,
>>>> ZeroIdx);
>>>> // Since we now have shorter input vectors, adjust the offset
>>>> of the
>>>> // second vector's start.
>>>> Vec2Offset = NumElems;
>>>> @@ -13013,20 +12918,21 @@ SDValue DAGCombiner::reduceBuildVecToShu
>>>> // VecIn1 is wider than the output, and we have another,
>>>> possibly
>>>> // smaller input. Pad the smaller input with undefs, shuffle
>>>> at the
>>>> // input vector width, and extract the output.
>>>> -
>>>> // The shuffle type is different than VT, so check legality
>>>> again.
>>>> if (LegalOperations &&
>>>> !TLI.isOperationLegal(ISD::VECTOR_SHUFFLE, InVT1))
>>>> return SDValue();
>>>>
>>>> if (InVT1 != InVT2)
>>>> - VecIn2 = DAG.getNode(ISD::INSERT_SUBVECTOR, dl, InVT1,
>>>> + VecIn2 = DAG.getNode(ISD::INSERT_SUBVECTOR, DL, InVT1,
>>>> DAG.getUNDEF(InVT1), VecIn2, ZeroIdx);
>>>> ShuffleNumElems = NumElems * 2;
>>>> }
>>>> } else {
>>>> // TODO: Support cases where the length mismatch isn't exactly
>>>> by a
>>>> // factor of 2.
>>>> + // TODO: Move this check upwards, so that if we have bad type
>>>> + // mismatches, we don't create any DAG nodes.
>>>> return SDValue();
>>>> }
>>>> }
>>>> @@ -13038,27 +12944,18 @@ SDValue DAGCombiner::reduceBuildVecToShu
>>>> // total number of elements in the shuffle - if we are shuffling a
>>>> wider
>>>> // vector, the high lanes should be set to undef.
>>>> for (unsigned i = 0; i != NumElems; ++i) {
>>>> - if (VectorMask[i] == -1)
>>>> - continue;
>>>> -
>>>> - // If we are trying to blend with zero, we need to take a zero
>>>> from the
>>>> - // correct position in the second input.
>>>> - if (VectorMask[i] == 0) {
>>>> - Mask[i] = Vec2Offset + i;
>>>> + if (VectorMask[i] <= 0)
>>>> continue;
>>>> - }
>>>>
>>>> SDValue Extract = N->getOperand(i);
>>>> unsigned ExtIndex =
>>>> cast<ConstantSDNode>(Extract.getOperand(1))->getZExtValue();
>>>>
>>>> - if (VectorMask[i] == 1) {
>>>> + if (VectorMask[i] == (int)LeftIdx) {
>>>> Mask[i] = ExtIndex;
>>>> - continue;
>>>> + } else if (VectorMask[i] == (int)LeftIdx + 1) {
>>>> + Mask[i] = Vec2Offset + ExtIndex;
>>>> }
>>>> -
>>>> - assert(VectorMask[i] == 2 && "Expected input to be from second
>>>> vector");
>>>> - Mask[i] = Vec2Offset + ExtIndex;
>>>> }
>>>>
>>>> // The type the input vectors may have changed above.
>>>> @@ -13066,20 +12963,179 @@ SDValue DAGCombiner::reduceBuildVecToShu
>>>>
>>>> // If we already have a VecIn2, it should have the same type as
>>>> VecIn1.
>>>> // If we don't, get an undef/zero vector of the appropriate type.
>>>> - VecIn2 = getRightHandValue(DAG, dl, VecIn2, InVT1, UsesZeroVector);
>>>> + VecIn2 = VecIn2.getNode() ? VecIn2 : DAG.getUNDEF(InVT1);
>>>> assert(InVT1 == VecIn2.getValueType() && "Unexpected second input
>>>> type.");
>>>>
>>>> - // Return the new VECTOR_SHUFFLE node.
>>>> - SDValue Ops[2];
>>>> - Ops[0] = VecIn1;
>>>> - Ops[1] = VecIn2;
>>>> - SDValue Shuffle = DAG.getVectorShuffle(InVT1, dl, Ops[0], Ops[1],
>>>> Mask);
>>>> + SDValue Shuffle = DAG.getVectorShuffle(InVT1, DL, VecIn1, VecIn2,
>>>> Mask);
>>>> if (ShuffleNumElems > NumElems)
>>>> - Shuffle = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, VT, Shuffle,
>>>> ZeroIdx);
>>>> + Shuffle = DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, Shuffle,
>>>> ZeroIdx);
>>>>
>>>> return Shuffle;
>>>> }
>>>>
>>>> +// Check to see if this is a BUILD_VECTOR of a bunch of EXTRACT_VECTOR_ELT
>>>> +// operations. If the types of the vectors we're extracting from allow it,
>>>> +// turn this into a vector_shuffle node.
>>>> +SDValue DAGCombiner::reduceBuildVecToShuffle(SDNode *N) {
>>>> +  SDLoc dl(N);
>>>> +  EVT VT = N->getValueType(0);
>>>> +
>>>> +  // Only type-legal BUILD_VECTOR nodes are converted to shuffle nodes.
>>>> +  if (!isTypeLegal(VT))
>>>> +    return SDValue();
>>>> +
>>>> +  // May only combine to shuffle after legalize if shuffle is legal.
>>>> +  if (LegalOperations && !TLI.isOperationLegal(ISD::VECTOR_SHUFFLE, VT))
>>>> +    return SDValue();
>>>> +
>>>> +  bool UsesZeroVector = false;
>>>> +  unsigned NumElems = N->getNumOperands();
>>>> +
>>>> +  // Record, for each element of the newly built vector, which input vector
>>>> +  // that element comes from. -1 stands for undef, 0 for the zero vector,
>>>> +  // and positive values for the input vectors.
>>>> +  // VectorMask maps each element to its vector number, and VecIn maps vector
>>>> +  // numbers to their initial SDValues.
>>>> +
>>>> +  SmallVector<int, 8> VectorMask(NumElems, -1);
>>>> +  SmallVector<SDValue, 8> VecIn;
>>>> +  VecIn.push_back(SDValue());
>>>> +
>>>> +  for (unsigned i = 0; i != NumElems; ++i) {
>>>> +    SDValue Op = N->getOperand(i);
>>>> +
>>>> +    if (Op.isUndef())
>>>> +      continue;
>>>> +
>>>> +    // See if we can use a blend with a zero vector.
>>>> +    // TODO: Should we generalize this to a blend with an arbitrary constant
>>>> +    // vector?
>>>> +    if (isNullConstant(Op) || isNullFPConstant(Op)) {
>>>> +      UsesZeroVector = true;
>>>> +      VectorMask[i] = 0;
>>>> +      continue;
>>>> +    }
>>>> +
>>>> +    // Not an undef or zero. If the input is something other than an
>>>> +    // EXTRACT_VECTOR_ELT with a constant index, bail out.
>>>> +    if (Op.getOpcode() != ISD::EXTRACT_VECTOR_ELT ||
>>>> +        !isa<ConstantSDNode>(Op.getOperand(1)))
>>>> +      return SDValue();
>>>> +
>>>> +    SDValue ExtractedFromVec = Op.getOperand(0);
>>>> +
>>>> +    // All inputs must have the same element type as the output.
>>>> +    if (VT.getVectorElementType() !=
>>>> +        ExtractedFromVec.getValueType().getVectorElementType())
>>>> +      return SDValue();
>>>> +
>>>> +    // Have we seen this input vector before?
>>>> +    // The vectors are expected to be tiny (usually 1 or 2 elements), so using
>>>> +    // a map back from SDValues to numbers isn't worth it.
>>>> +    unsigned Idx = std::distance(
>>>> +        VecIn.begin(), std::find(VecIn.begin(), VecIn.end(), ExtractedFromVec));
>>>> +    if (Idx == VecIn.size())
>>>> +      VecIn.push_back(ExtractedFromVec);
>>>> +
>>>> +    VectorMask[i] = Idx;
>>>> +  }
>>>> +
>>>> +  // If we didn't find at least one input vector, bail out.
>>>> +  if (VecIn.size() < 2)
>>>> +    return SDValue();
>>>> +
>>>> +  // TODO: We want to sort the vectors by descending length, so that adjacent
>>>> +  // pairs have similar length, and the longer vector is always first in the
>>>> +  // pair.
>>>> +
>>>> +  // TODO: Should this fire if some of the input vectors has illegal type (like
>>>> +  // it does now), or should we let legalization run its course first?
>>>> +
>>>> +  // Shuffle phase:
>>>> +  // Take pairs of vectors, and shuffle them so that the result has elements
>>>> +  // from these vectors in the correct places.
>>>> +  // For example, given:
>>>> +  // t10: i32 = extract_vector_elt t1, Constant:i64<0>
>>>> +  // t11: i32 = extract_vector_elt t2, Constant:i64<0>
>>>> +  // t12: i32 = extract_vector_elt t3, Constant:i64<0>
>>>> +  // t13: i32 = extract_vector_elt t1, Constant:i64<1>
>>>> +  // t14: v4i32 = BUILD_VECTOR t10, t11, t12, t13
>>>> +  // We will generate:
>>>> +  // t20: v4i32 = vector_shuffle<0,4,u,1> t1, t2
>>>> +  // t21: v4i32 = vector_shuffle<u,u,0,u> t3, undef
>>>> +  SmallVector<SDValue, 4> Shuffles;
>>>> +  for (unsigned In = 0, Len = (VecIn.size() / 2); In < Len; ++In) {
>>>> +    unsigned LeftIdx = 2 * In + 1;
>>>> +    SDValue VecLeft = VecIn[LeftIdx];
>>>> +    SDValue VecRight =
>>>> +        (LeftIdx + 1) < VecIn.size() ? VecIn[LeftIdx + 1] : SDValue();
>>>> +
>>>> +    if (SDValue Shuffle = createBuildVecShuffle(dl, N, VectorMask, VecLeft,
>>>> +                                                VecRight, LeftIdx))
>>>> +      Shuffles.push_back(Shuffle);
>>>> +    else
>>>> +      return SDValue();
>>>> +  }
>>>> +
>>>> +  // If we need the zero vector as an "ingredient" in the blend tree, add it
>>>> +  // to the list of shuffles.
>>>> +  if (UsesZeroVector)
>>>> +    Shuffles.push_back(VT.isInteger() ? DAG.getConstant(0, dl, VT)
>>>> +                                      : DAG.getConstantFP(0.0, dl, VT));
>>>> +
>>>> +  // If we only have one shuffle, we're done.
>>>> +  if (Shuffles.size() == 1)
>>>> +    return Shuffles[0];
>>>> +
>>>> +  // Update the vector mask to point to the post-shuffle vectors.
>>>> +  for (int &Vec : VectorMask)
>>>> +    if (Vec == 0)
>>>> +      Vec = Shuffles.size() - 1;
>>>> +    else
>>>> +      Vec = (Vec - 1) / 2;
>>>> +
>>>> +  // More than one shuffle. Generate a binary tree of blends, e.g. if from
>>>> +  // the previous step we got the set of shuffles t10, t11, t12, t13, we will
>>>> +  // generate:
>>>> +  // t10: v8i32 = vector_shuffle<0,8,u,u,u,u,u,u> t1, t2
>>>> +  // t11: v8i32 = vector_shuffle<u,u,0,8,u,u,u,u> t3, t4
>>>> +  // t12: v8i32 = vector_shuffle<u,u,u,u,0,8,u,u> t5, t6
>>>> +  // t13: v8i32 = vector_shuffle<u,u,u,u,u,u,0,8> t7, t8
>>>> +  // t20: v8i32 = vector_shuffle<0,1,10,11,u,u,u,u> t10, t11
>>>> +  // t21: v8i32 = vector_shuffle<u,u,u,u,4,5,14,15> t12, t13
>>>> +  // t30: v8i32 = vector_shuffle<0,1,2,3,12,13,14,15> t20, t21
>>>> +
>>>> +  // Make sure the initial size of the shuffle list is even.
>>>> +  if (Shuffles.size() % 2)
>>>> +    Shuffles.push_back(DAG.getUNDEF(VT));
>>>> +
>>>> +  for (unsigned CurSize = Shuffles.size(); CurSize > 1; CurSize /= 2) {
>>>> +    if (CurSize % 2) {
>>>> +      Shuffles[CurSize] = DAG.getUNDEF(VT);
>>>> +      CurSize++;
>>>> +    }
>>>> +    for (unsigned In = 0, Len = CurSize / 2; In < Len; ++In) {
>>>> +      int Left = 2 * In;
>>>> +      int Right = 2 * In + 1;
>>>> +      SmallVector<int, 8> Mask(NumElems, -1);
>>>> +      for (unsigned i = 0; i != NumElems; ++i) {
>>>> +        if (VectorMask[i] == Left) {
>>>> +          Mask[i] = i;
>>>> +          VectorMask[i] = In;
>>>> +        } else if (VectorMask[i] == Right) {
>>>> +          Mask[i] = i + NumElems;
>>>> +          VectorMask[i] = In;
>>>> +        }
>>>> +      }
>>>> +
>>>> +      Shuffles[In] =
>>>> +          DAG.getVectorShuffle(VT, dl, Shuffles[Left], Shuffles[Right], Mask);
>>>> +    }
>>>> +  }
>>>> +
>>>> +  return Shuffles[0];
>>>> +}
>>>> +
>>>> SDValue DAGCombiner::visitBUILD_VECTOR(SDNode *N) {
>>>> EVT VT = N->getValueType(0);
>>>>
>>>>
>>>> Modified: llvm/trunk/test/CodeGen/PowerPC/pr27078.ll
>>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/pr27078.ll?rev=283480&r1=283479&r2=283480&view=diff
>>>> ==============================================================================
>>>> --- llvm/trunk/test/CodeGen/PowerPC/pr27078.ll (original)
>>>> +++ llvm/trunk/test/CodeGen/PowerPC/pr27078.ll Thu Oct 6 13:58:24 2016
>>>> @@ -9,12 +9,11 @@ define <4 x float> @bar(float* %p, float
>>>> %6 = shufflevector <12 x float> %5, <12 x float> undef, <4 x i32>
>>>> <i32 0, i32 3, i32 6, i32 9>
>>>> ret <4 x float> %6
>>>>
>>>> -; CHECK: xxspltw
>>>> -; CHECK-NEXT: xxspltw
>>>> -; CHECK-NEXT: xxspltw
>>>> +; CHECK: vsldoi
>>>> ; CHECK-NEXT: vmrghw
>>>> -; CHECK-NEXT: vmrghw
>>>> -; CHECK-NEXT: xxswapd
>>>> +; CHECK-NEXT: vmrglw
>>>> +; CHECK-NEXT: vsldoi
>>>> +; CHECK-NEXT: vsldoi
>>>> ; CHECK-NEXT: vsldoi
>>>> ; CHECK-NEXT: blr
>>>> }
>>>>
>>>> Modified: llvm/trunk/test/CodeGen/X86/oddshuffles.ll
>>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/oddshuffles.ll?rev=283480&r1=283479&r2=283480&view=diff
>>>> ==============================================================================
>>>> --- llvm/trunk/test/CodeGen/X86/oddshuffles.ll (original)
>>>> +++ llvm/trunk/test/CodeGen/X86/oddshuffles.ll Thu Oct 6 13:58:24 2016
>>>> @@ -495,54 +495,45 @@ define void @v12i16(<8 x i16> %a, <8 x i
>>>> define void @v12i32(<8 x i32> %a, <8 x i32> %b, <12 x i32>* %p)
>>>> nounwind {
>>>> ; SSE2-LABEL: v12i32:
>>>> ; SSE2: # BB#0:
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm0[1,1,2,3]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm4 = xmm1[2,3,0,1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm5 = xmm1[1,1,2,3]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm6 = xmm1[3,1,2,3]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm1 =
>>>> xmm1[0],xmm3[0],xmm1[1],xmm3[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm0[2,3,0,1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm7 = xmm0[3,1,2,3]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm0 =
>>>> xmm0[0],xmm2[0],xmm0[1],xmm2[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm0 =
>>>> xmm0[0],xmm1[0],xmm0[1],xmm1[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm2[1,1,2,3]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm1 =
>>>> xmm1[0],xmm4[0],xmm1[1],xmm4[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm5 =
>>>> xmm5[0],xmm3[0],xmm5[1],xmm3[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm5 =
>>>> xmm5[0],xmm1[0],xmm5[1],xmm1[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm2[3,1,2,3]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm7 =
>>>> xmm7[0],xmm1[0],xmm7[1],xmm1[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm2[2,3,0,1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm1 =
>>>> xmm1[0],xmm6[0],xmm1[1],xmm6[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm1 =
>>>> xmm1[0],xmm7[0],xmm1[1],xmm7[1]
>>>> -; SSE2-NEXT: movdqa %xmm1, 32(%rdi)
>>>> -; SSE2-NEXT: movdqa %xmm5, 16(%rdi)
>>>> -; SSE2-NEXT: movdqa %xmm0, (%rdi)
>>>> +; SSE2-NEXT: movdqa %xmm0, %xmm3
>>>> +; SSE2-NEXT: punpckldq {{.*#+}} xmm3 =
>>>> xmm3[0],xmm1[0],xmm3[1],xmm1[1]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm3[0,1,2,2]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm4 = xmm2[0,1,0,1]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm4 = xmm4[2,0],xmm3[3,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm3 = xmm3[0,1],xmm4[0,2]
>>>> +; SSE2-NEXT: movdqa %xmm2, %xmm4
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm4 = xmm4[1,0],xmm1[1,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm4 = xmm4[2,0],xmm1[2,2]
>>>> +; SSE2-NEXT: punpckhdq {{.*#+}} xmm2 =
>>>> xmm2[2],xmm0[2],xmm2[3],xmm0[3]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm0 = xmm0[2,0],xmm4[3,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm4 = xmm4[0,1],xmm0[0,2]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,3,2,2]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[2,2,3,3]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm1 = xmm1[2,0],xmm0[3,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm0 = xmm0[0,1],xmm1[0,2]
>>>> +; SSE2-NEXT: movaps %xmm0, 32(%rdi)
>>>> +; SSE2-NEXT: movaps %xmm4, 16(%rdi)
>>>> +; SSE2-NEXT: movaps %xmm3, (%rdi)
>>>> ; SSE2-NEXT: retq
>>>> ;
>>>> ; SSE42-LABEL: v12i32:
>>>> ; SSE42: # BB#0:
>>>> -; SSE42-NEXT: movd %xmm1, %eax
>>>> -; SSE42-NEXT: pextrd $1, %xmm0, %ecx
>>>> -; SSE42-NEXT: pextrd $2, %xmm0, %edx
>>>> -; SSE42-NEXT: pextrd $3, %xmm0, %esi
>>>> -; SSE42-NEXT: pinsrd $1, %eax, %xmm0
>>>> -; SSE42-NEXT: movd %xmm2, %eax
>>>> -; SSE42-NEXT: pinsrd $2, %eax, %xmm0
>>>> -; SSE42-NEXT: pinsrd $3, %ecx, %xmm0
>>>> -; SSE42-NEXT: pextrd $1, %xmm2, %eax
>>>> -; SSE42-NEXT: pshufd {{.*#+}} xmm3 = xmm1[1,1,2,3]
>>>> -; SSE42-NEXT: pinsrd $1, %eax, %xmm3
>>>> -; SSE42-NEXT: pinsrd $2, %edx, %xmm3
>>>> -; SSE42-NEXT: pextrd $2, %xmm1, %eax
>>>> -; SSE42-NEXT: pinsrd $3, %eax, %xmm3
>>>> -; SSE42-NEXT: pshufd {{.*#+}} xmm4 = xmm2[2,3,0,1]
>>>> -; SSE42-NEXT: pinsrd $1, %esi, %xmm4
>>>> -; SSE42-NEXT: pextrd $3, %xmm1, %eax
>>>> -; SSE42-NEXT: pinsrd $2, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrd $3, %xmm2, %eax
>>>> -; SSE42-NEXT: pinsrd $3, %eax, %xmm4
>>>> -; SSE42-NEXT: movdqa %xmm4, 32(%rdi)
>>>> -; SSE42-NEXT: movdqa %xmm3, 16(%rdi)
>>>> -; SSE42-NEXT: movdqa %xmm0, (%rdi)
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm3 = xmm1[0,0,1,1]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm4 = xmm0[0,1,0,1]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm4 = xmm4[0,1],xmm3[2,3],xmm4[4,5,6
>>>> ,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm3 = xmm2[0,1,0,1]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm3 = xmm4[0,1,2,3],xmm3[4,5],xmm4[6
>>>> ,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm4 = xmm1[1,1,2,2]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm4 = xmm4[0,1],xmm2[2,3],xmm4[4,5,6
>>>> ,7]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm4 = xmm4[0,1,2,3],xmm0[4,5],xmm4[6
>>>> ,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm0 = xmm0[2,3,0,1]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm2 = xmm2[2,3,2,3]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm2 = xmm2[0,1],xmm0[2,3],xmm2[4,5,6
>>>> ,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm0 = xmm1[2,2,3,3]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm0 = xmm2[0,1,2,3],xmm0[4,5],xmm2[6
>>>> ,7]
>>>> +; SSE42-NEXT: movdqa %xmm0, 32(%rdi)
>>>> +; SSE42-NEXT: movdqa %xmm4, 16(%rdi)
>>>> +; SSE42-NEXT: movdqa %xmm3, (%rdi)
>>>> ; SSE42-NEXT: retq
>>>> ;
>>>> ; AVX1-LABEL: v12i32:
>>>> @@ -850,135 +841,84 @@ define void @interleave_24i8_in(<24 x i8
>>>> define void @interleave_24i16_out(<24 x i16>* %p, <8 x i16>* %q1, <8 x
>>>> i16>* %q2, <8 x i16>* %q3) nounwind {
>>>> ; SSE2-LABEL: interleave_24i16_out:
>>>> ; SSE2: # BB#0:
>>>> -; SSE2-NEXT: movdqu (%rdi), %xmm2
>>>> -; SSE2-NEXT: movdqu 16(%rdi), %xmm0
>>>> -; SSE2-NEXT: movdqu 32(%rdi), %xmm1
>>>> -; SSE2-NEXT: pextrw $5, %xmm1, %eax
>>>> -; SSE2-NEXT: movd %eax, %xmm3
>>>> -; SSE2-NEXT: pextrw $1, %xmm0, %eax
>>>> -; SSE2-NEXT: movd %eax, %xmm4
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm4 =
>>>> xmm4[0],xmm3[0],xmm4[1],xmm3[1],xmm4[2],xmm3[2],xmm4[3],xmm3[3]
>>>> -; SSE2-NEXT: pextrw $7, %xmm0, %eax
>>>> -; SSE2-NEXT: movd %eax, %xmm5
>>>> -; SSE2-NEXT: pextrw $3, %xmm2, %eax
>>>> -; SSE2-NEXT: movd %eax, %xmm3
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm3 =
>>>> xmm3[0],xmm5[0],xmm3[1],xmm5[1],xmm3[2],xmm5[2],xmm3[3],xmm5[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm3 =
>>>> xmm3[0],xmm4[0],xmm3[1],xmm4[1],xmm3[2],xmm4[2],xmm3[3],xmm4[3]
>>>> -; SSE2-NEXT: pextrw $2, %xmm1, %eax
>>>> -; SSE2-NEXT: movd %eax, %xmm5
>>>> -; SSE2-NEXT: pextrw $6, %xmm2, %eax
>>>> -; SSE2-NEXT: movd %eax, %xmm4
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm4 =
>>>> xmm4[0],xmm5[0],xmm4[1],xmm5[1],xmm4[2],xmm5[2],xmm4[3],xmm5[3]
>>>> -; SSE2-NEXT: pextrw $4, %xmm0, %eax
>>>> -; SSE2-NEXT: movd %eax, %xmm5
>>>> -; SSE2-NEXT: pextrw $6, %xmm1, %eax
>>>> -; SSE2-NEXT: movd %eax, %xmm7
>>>> -; SSE2-NEXT: pextrw $2, %xmm0, %eax
>>>> -; SSE2-NEXT: movd %eax, %xmm6
>>>> -; SSE2-NEXT: pextrw $4, %xmm2, %r8d
>>>> -; SSE2-NEXT: pextrw $7, %xmm2, %r9d
>>>> -; SSE2-NEXT: pextrw $1, %xmm2, %r10d
>>>> -; SSE2-NEXT: pextrw $5, %xmm2, %r11d
>>>> -; SSE2-NEXT: pextrw $2, %xmm2, %eax
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm2 =
>>>> xmm2[0],xmm5[0],xmm2[1],xmm5[1],xmm2[2],xmm5[2],xmm2[3],xmm5[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm2 =
>>>> xmm2[0],xmm4[0],xmm2[1],xmm4[1],xmm2[2],xmm4[2],xmm2[3],xmm4[3]
>>>> -; SSE2-NEXT: movd %r8d, %xmm4
>>>> -; SSE2-NEXT: pextrw $3, %xmm1, %edi
>>>> -; SSE2-NEXT: movd %edi, %xmm5
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm2 =
>>>> xmm2[0],xmm3[0],xmm2[1],xmm3[1],xmm2[2],xmm3[2],xmm2[3],xmm3[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm6 =
>>>> xmm6[0],xmm7[0],xmm6[1],xmm7[1],xmm6[2],xmm7[2],xmm6[3],xmm7[3]
>>>> -; SSE2-NEXT: movd %r9d, %xmm3
>>>> -; SSE2-NEXT: pextrw $5, %xmm0, %edi
>>>> -; SSE2-NEXT: movd %edi, %xmm7
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm4 =
>>>> xmm4[0],xmm1[0],xmm4[1],xmm1[1],xmm4[2],xmm1[2],xmm4[3],xmm1[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm4 =
>>>> xmm4[0],xmm6[0],xmm4[1],xmm6[1],xmm4[2],xmm6[2],xmm4[3],xmm6[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm3 =
>>>> xmm3[0],xmm5[0],xmm3[1],xmm5[1],xmm3[2],xmm5[2],xmm3[3],xmm5[3]
>>>> -; SSE2-NEXT: movd %r10d, %xmm5
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm5 =
>>>> xmm5[0],xmm7[0],xmm5[1],xmm7[1],xmm5[2],xmm7[2],xmm5[3],xmm7[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm5 =
>>>> xmm5[0],xmm3[0],xmm5[1],xmm3[1],xmm5[2],xmm3[2],xmm5[3],xmm3[3]
>>>> -; SSE2-NEXT: pextrw $7, %xmm1, %edi
>>>> -; SSE2-NEXT: movd %edi, %xmm3
>>>> -; SSE2-NEXT: pextrw $3, %xmm0, %edi
>>>> -; SSE2-NEXT: movd %edi, %xmm6
>>>> -; SSE2-NEXT: pextrw $1, %xmm1, %edi
>>>> -; SSE2-NEXT: movd %edi, %xmm7
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm5 =
>>>> xmm5[0],xmm4[0],xmm5[1],xmm4[1],xmm5[2],xmm4[2],xmm5[3],xmm4[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm6 =
>>>> xmm6[0],xmm3[0],xmm6[1],xmm3[1],xmm6[2],xmm3[2],xmm6[3],xmm3[3]
>>>> -; SSE2-NEXT: movd %r11d, %xmm3
>>>> -; SSE2-NEXT: pextrw $4, %xmm1, %edi
>>>> -; SSE2-NEXT: movd %edi, %xmm1
>>>> -; SSE2-NEXT: pextrw $6, %xmm0, %edi
>>>> -; SSE2-NEXT: movd %edi, %xmm4
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm3 =
>>>> xmm3[0],xmm7[0],xmm3[1],xmm7[1],xmm3[2],xmm7[2],xmm3[3],xmm7[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm3 =
>>>> xmm3[0],xmm6[0],xmm3[1],xmm6[1],xmm3[2],xmm6[2],xmm3[3],xmm6[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm0 =
>>>> xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3]
>>>> -; SSE2-NEXT: movd %eax, %xmm1
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm1 =
>>>> xmm1[0],xmm4[0],xmm1[1],xmm4[1],xmm1[2],xmm4[2],xmm1[3],xmm4[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm1 =
>>>> xmm1[0],xmm0[0],xmm1[1],xmm0[1],xmm1[2],xmm0[2],xmm1[3],xmm0[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm1 =
>>>> xmm1[0],xmm3[0],xmm1[1],xmm3[1],xmm1[2],xmm3[2],xmm1[3],xmm3[3]
>>>> -; SSE2-NEXT: movdqu %xmm2, (%rsi)
>>>> -; SSE2-NEXT: movdqu %xmm5, (%rdx)
>>>> -; SSE2-NEXT: movdqu %xmm1, (%rcx)
>>>> +; SSE2-NEXT: movdqu (%rdi), %xmm3
>>>> +; SSE2-NEXT: movdqu 16(%rdi), %xmm2
>>>> +; SSE2-NEXT: movdqu 32(%rdi), %xmm8
>>>> +; SSE2-NEXT: movdqa {{.*#+}} xmm1 = [65535,0,65535,65535,0,65535,6
>>>> 5535,0]
>>>> +; SSE2-NEXT: movdqa %xmm3, %xmm4
>>>> +; SSE2-NEXT: pand %xmm1, %xmm4
>>>> +; SSE2-NEXT: pandn %xmm2, %xmm1
>>>> +; SSE2-NEXT: por %xmm4, %xmm1
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[0,2,1,3]
>>>> +; SSE2-NEXT: pshufhw {{.*#+}} xmm1 = xmm1[0,1,2,3,6,5,6,7]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[0,2,1,3]
>>>> +; SSE2-NEXT: pshuflw {{.*#+}} xmm1 = xmm1[0,3,2,1,4,5,6,7]
>>>> +; SSE2-NEXT: pshufhw {{.*#+}} xmm1 = xmm1[0,1,2,3,4,7,6,7]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm4 = xmm8[0,1,2,1]
>>>> +; SSE2-NEXT: pshufhw {{.*#+}} xmm4 = xmm4[0,1,2,3,4,5,6,5]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm4 = xmm4[3,0],xmm1[2,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm1 = xmm1[0,1],xmm4[2,0]
>>>> +; SSE2-NEXT: movdqa {{.*#+}} xmm4 = [65535,65535,0,65535,65535,0,6
>>>> 5535,65535]
>>>> +; SSE2-NEXT: movdqa %xmm4, %xmm5
>>>> +; SSE2-NEXT: pandn %xmm2, %xmm5
>>>> +; SSE2-NEXT: movdqa %xmm3, %xmm6
>>>> +; SSE2-NEXT: pand %xmm4, %xmm6
>>>> +; SSE2-NEXT: por %xmm5, %xmm6
>>>> +; SSE2-NEXT: pshuflw {{.*#+}} xmm5 = xmm6[2,1,2,3,4,5,6,7]
>>>> +; SSE2-NEXT: pshufhw {{.*#+}} xmm5 = xmm5[0,1,2,3,4,5,4,7]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm5 = xmm5[0,3,2,3]
>>>> +; SSE2-NEXT: pshuflw {{.*#+}} xmm5 = xmm5[1,2,3,0,4,5,6,7]
>>>> +; SSE2-NEXT: pshufhw {{.*#+}} xmm5 = xmm5[0,1,2,3,5,5,6,7]
>>>> +; SSE2-NEXT: movdqa {{.*#+}} xmm6 = [65535,65535,65535,65535,65535
>>>> ,0,0,0]
>>>> +; SSE2-NEXT: pand %xmm6, %xmm5
>>>> +; SSE2-NEXT: pshuflw {{.*#+}} xmm7 = xmm8[0,3,2,3,4,5,6,7]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm7 = xmm7[0,1,0,3]
>>>> +; SSE2-NEXT: pshufhw {{.*#+}} xmm7 = xmm7[0,1,2,3,4,4,5,6]
>>>> +; SSE2-NEXT: movdqa %xmm6, %xmm0
>>>> +; SSE2-NEXT: pandn %xmm7, %xmm0
>>>> +; SSE2-NEXT: por %xmm5, %xmm0
>>>> +; SSE2-NEXT: pand %xmm4, %xmm2
>>>> +; SSE2-NEXT: pandn %xmm3, %xmm4
>>>> +; SSE2-NEXT: por %xmm2, %xmm4
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm2 = xmm4[3,1,2,0]
>>>> +; SSE2-NEXT: pshufhw {{.*#+}} xmm2 = xmm2[0,1,2,3,6,5,6,7]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm2 = xmm2[2,1,0,3]
>>>> +; SSE2-NEXT: pshuflw {{.*#+}} xmm2 = xmm2[2,1,0,3,4,5,6,7]
>>>> +; SSE2-NEXT: pand %xmm6, %xmm2
>>>> +; SSE2-NEXT: pshufhw {{.*#+}} xmm3 = xmm8[0,1,2,3,4,7,6,7]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm3[0,1,2,0]
>>>> +; SSE2-NEXT: pshufhw {{.*#+}} xmm3 = xmm3[0,1,2,3,4,7,4,5]
>>>> +; SSE2-NEXT: pandn %xmm3, %xmm6
>>>> +; SSE2-NEXT: por %xmm2, %xmm6
>>>> +; SSE2-NEXT: movups %xmm1, (%rsi)
>>>> +; SSE2-NEXT: movdqu %xmm0, (%rdx)
>>>> +; SSE2-NEXT: movdqu %xmm6, (%rcx)
>>>> ; SSE2-NEXT: retq
>>>> ;
>>>> ; SSE42-LABEL: interleave_24i16_out:
>>>> ; SSE42: # BB#0:
>>>> -; SSE42-NEXT: pushq %rbp
>>>> -; SSE42-NEXT: pushq %rbx
>>>> -; SSE42-NEXT: movdqu (%rdi), %xmm2
>>>> +; SSE42-NEXT: movdqu (%rdi), %xmm0
>>>> ; SSE42-NEXT: movdqu 16(%rdi), %xmm1
>>>> -; SSE42-NEXT: movdqu 32(%rdi), %xmm0
>>>> -; SSE42-NEXT: pextrw $3, %xmm2, %r8d
>>>> -; SSE42-NEXT: pextrw $6, %xmm2, %r9d
>>>> -; SSE42-NEXT: pextrw $4, %xmm2, %r10d
>>>> -; SSE42-NEXT: pextrw $1, %xmm2, %r11d
>>>> -; SSE42-NEXT: pextrw $7, %xmm2, %ebp
>>>> -; SSE42-NEXT: pextrw $5, %xmm2, %edi
>>>> -; SSE42-NEXT: pextrw $2, %xmm2, %ebx
>>>> -; SSE42-NEXT: pinsrw $1, %r8d, %xmm2
>>>> -; SSE42-NEXT: pinsrw $2, %r9d, %xmm2
>>>> -; SSE42-NEXT: pextrw $1, %xmm1, %eax
>>>> -; SSE42-NEXT: pinsrw $3, %eax, %xmm2
>>>> -; SSE42-NEXT: pextrw $4, %xmm1, %eax
>>>> -; SSE42-NEXT: pinsrw $4, %eax, %xmm2
>>>> -; SSE42-NEXT: pextrw $7, %xmm1, %eax
>>>> -; SSE42-NEXT: pinsrw $5, %eax, %xmm2
>>>> -; SSE42-NEXT: pextrw $2, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $6, %eax, %xmm2
>>>> -; SSE42-NEXT: pextrw $5, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $7, %eax, %xmm2
>>>> -; SSE42-NEXT: movd %r11d, %xmm3
>>>> -; SSE42-NEXT: pinsrw $1, %r10d, %xmm3
>>>> -; SSE42-NEXT: pinsrw $2, %ebp, %xmm3
>>>> -; SSE42-NEXT: pextrw $2, %xmm1, %eax
>>>> -; SSE42-NEXT: pinsrw $3, %eax, %xmm3
>>>> -; SSE42-NEXT: pextrw $5, %xmm1, %eax
>>>> -; SSE42-NEXT: pinsrw $4, %eax, %xmm3
>>>> -; SSE42-NEXT: movd %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $5, %eax, %xmm3
>>>> -; SSE42-NEXT: pextrw $3, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $6, %eax, %xmm3
>>>> -; SSE42-NEXT: pextrw $6, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $7, %eax, %xmm3
>>>> -; SSE42-NEXT: movd %ebx, %xmm4
>>>> -; SSE42-NEXT: pinsrw $1, %edi, %xmm4
>>>> -; SSE42-NEXT: movd %xmm1, %eax
>>>> -; SSE42-NEXT: pinsrw $2, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrw $3, %xmm1, %eax
>>>> -; SSE42-NEXT: pinsrw $3, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrw $6, %xmm1, %eax
>>>> -; SSE42-NEXT: pinsrw $4, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrw $1, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $5, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrw $4, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $6, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrw $7, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $7, %eax, %xmm4
>>>> -; SSE42-NEXT: movdqu %xmm2, (%rsi)
>>>> -; SSE42-NEXT: movdqu %xmm3, (%rdx)
>>>> -; SSE42-NEXT: movdqu %xmm4, (%rcx)
>>>> -; SSE42-NEXT: popq %rbx
>>>> -; SSE42-NEXT: popq %rbp
>>>> +; SSE42-NEXT: movdqu 32(%rdi), %xmm2
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm3 = xmm2[0,1,2,1]
>>>> +; SSE42-NEXT: pshufhw {{.*#+}} xmm3 = xmm3[0,1,2,3,4,5,6,5]
>>>> +; SSE42-NEXT: movdqa %xmm0, %xmm4
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm4 = xmm4[0],xmm1[1],xmm4[2,3],xmm1
>>>> [4],xmm4[5,6],xmm1[7]
>>>> +; SSE42-NEXT: pshufb {{.*#+}} xmm4 = xmm4[0,1,6,7,12,13,2,3,8,9,14,
>>>> 15,12,13,14,15]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm4 = xmm4[0,1,2,3,4,5],xmm3[6,7]
>>>> +; SSE42-NEXT: movdqa %xmm2, %xmm3
>>>> +; SSE42-NEXT: pshufb {{.*#+}} xmm3 = xmm3[0,1,6,7,4,5,6,7,0,1,0,1,6
>>>> ,7,12,13]
>>>> +; SSE42-NEXT: movdqa %xmm0, %xmm5
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm5 = xmm5[0,1],xmm1[2],xmm5[3,4],xm
>>>> m1[5],xmm5[6,7]
>>>> +; SSE42-NEXT: pshufb {{.*#+}} xmm5 = xmm5[2,3,8,9,14,15,4,5,10,11,1
>>>> 0,11,8,9,14,15]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm5 = xmm5[0,1,2,3,4],xmm3[5,6,7]
>>>> +; SSE42-NEXT: pshufb {{.*#+}} xmm2 = xmm2[0,1,2,3,4,5,6,7,8,9,2,3,8
>>>> ,9,14,15]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm1 = xmm1[0,1],xmm0[2],xmm1[3,4],xm
>>>> m0[5],xmm1[6,7]
>>>> +; SSE42-NEXT: pshufb {{.*#+}} xmm1 = xmm1[4,5,10,11,0,1,6,7,12,13,1
>>>> 4,15,0,1,2,3]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm1 = xmm1[0,1,2,3,4],xmm2[5,6,7]
>>>> +; SSE42-NEXT: movdqu %xmm4, (%rsi)
>>>> +; SSE42-NEXT: movdqu %xmm5, (%rdx)
>>>> +; SSE42-NEXT: movdqu %xmm1, (%rcx)
>>>> ; SSE42-NEXT: retq
>>>> ;
>>>> ; AVX1-LABEL: interleave_24i16_out:
>>>> @@ -1039,143 +979,66 @@ define void @interleave_24i16_out(<24 x
>>>> define void @interleave_24i16_in(<24 x i16>* %p, <8 x i16>* %q1, <8 x
>>>> i16>* %q2, <8 x i16>* %q3) nounwind {
>>>> ; SSE2-LABEL: interleave_24i16_in:
>>>> ; SSE2: # BB#0:
>>>> -; SSE2-NEXT: pushq %rbp
>>>> -; SSE2-NEXT: pushq %r15
>>>> -; SSE2-NEXT: pushq %r14
>>>> -; SSE2-NEXT: pushq %r13
>>>> -; SSE2-NEXT: pushq %r12
>>>> -; SSE2-NEXT: pushq %rbx
>>>> -; SSE2-NEXT: movdqu (%rsi), %xmm0
>>>> +; SSE2-NEXT: movdqu (%rsi), %xmm3
>>>> ; SSE2-NEXT: movdqu (%rdx), %xmm2
>>>> ; SSE2-NEXT: movdqu (%rcx), %xmm1
>>>> -; SSE2-NEXT: pextrw $2, %xmm2, %eax
>>>> -; SSE2-NEXT: movd %eax, %xmm4
>>>> -; SSE2-NEXT: pextrw $1, %xmm0, %eax
>>>> -; SSE2-NEXT: movd %eax, %xmm3
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm3 =
>>>> xmm3[0],xmm4[0],xmm3[1],xmm4[1],xmm3[2],xmm4[2],xmm3[3],xmm4[3]
>>>> -; SSE2-NEXT: pextrw $1, %xmm1, %eax
>>>> -; SSE2-NEXT: movd %eax, %xmm4
>>>> -; SSE2-NEXT: pextrw $1, %xmm2, %r8d
>>>> -; SSE2-NEXT: pextrw $4, %xmm2, %r9d
>>>> -; SSE2-NEXT: pextrw $3, %xmm2, %r10d
>>>> -; SSE2-NEXT: pextrw $6, %xmm2, %r11d
>>>> -; SSE2-NEXT: pextrw $7, %xmm2, %r14d
>>>> -; SSE2-NEXT: pextrw $5, %xmm2, %r15d
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm2 =
>>>> xmm2[0],xmm4[0],xmm2[1],xmm4[1],xmm2[2],xmm4[2],xmm2[3],xmm4[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm2 =
>>>> xmm2[0],xmm3[0],xmm2[1],xmm3[1],xmm2[2],xmm3[2],xmm2[3],xmm3[3]
>>>> -; SSE2-NEXT: pextrw $2, %xmm0, %edx
>>>> -; SSE2-NEXT: movd %edx, %xmm3
>>>> -; SSE2-NEXT: pextrw $3, %xmm1, %edx
>>>> -; SSE2-NEXT: pextrw $4, %xmm1, %esi
>>>> -; SSE2-NEXT: pextrw $2, %xmm1, %ebx
>>>> -; SSE2-NEXT: pextrw $7, %xmm1, %ebp
>>>> -; SSE2-NEXT: pextrw $5, %xmm1, %r13d
>>>> -; SSE2-NEXT: pextrw $6, %xmm1, %r12d
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm1 =
>>>> xmm1[0],xmm3[0],xmm1[1],xmm3[1],xmm1[2],xmm3[2],xmm1[3],xmm3[3]
>>>> -; SSE2-NEXT: movd %r8d, %xmm3
>>>> -; SSE2-NEXT: pextrw $5, %xmm0, %ecx
>>>> -; SSE2-NEXT: movd %ecx, %xmm4
>>>> -; SSE2-NEXT: movd %edx, %xmm5
>>>> -; SSE2-NEXT: movd %r9d, %xmm6
>>>> -; SSE2-NEXT: pextrw $3, %xmm0, %ecx
>>>> -; SSE2-NEXT: movd %ecx, %xmm7
>>>> -; SSE2-NEXT: pextrw $4, %xmm0, %ecx
>>>> -; SSE2-NEXT: pextrw $7, %xmm0, %edx
>>>> -; SSE2-NEXT: pextrw $6, %xmm0, %eax
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm0 =
>>>> xmm0[0],xmm3[0],xmm0[1],xmm3[1],xmm0[2],xmm3[2],xmm0[3],xmm3[3]
>>>> -; SSE2-NEXT: movd %esi, %xmm3
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm0 =
>>>> xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3]
>>>> -; SSE2-NEXT: movd %r10d, %xmm1
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm0 =
>>>> xmm0[0],xmm2[0],xmm0[1],xmm2[1],xmm0[2],xmm2[2],xmm0[3],xmm2[3]
>>>> -; SSE2-NEXT: movd %ecx, %xmm2
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm5 =
>>>> xmm5[0],xmm4[0],xmm5[1],xmm4[1],xmm5[2],xmm4[2],xmm5[3],xmm4[3]
>>>> -; SSE2-NEXT: movd %ebx, %xmm4
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm7 =
>>>> xmm7[0],xmm6[0],xmm7[1],xmm6[1],xmm7[2],xmm6[2],xmm7[3],xmm6[3]
>>>> -; SSE2-NEXT: movd %ebp, %xmm6
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm7 =
>>>> xmm7[0],xmm5[0],xmm7[1],xmm5[1],xmm7[2],xmm5[2],xmm7[3],xmm5[3]
>>>> -; SSE2-NEXT: movd %r11d, %xmm5
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm1 =
>>>> xmm1[0],xmm3[0],xmm1[1],xmm3[1],xmm1[2],xmm3[2],xmm1[3],xmm3[3]
>>>> -; SSE2-NEXT: movd %edx, %xmm3
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm4 =
>>>> xmm4[0],xmm2[0],xmm4[1],xmm2[1],xmm4[2],xmm2[2],xmm4[3],xmm2[3]
>>>> -; SSE2-NEXT: movd %r13d, %xmm2
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm4 =
>>>> xmm4[0],xmm1[0],xmm4[1],xmm1[1],xmm4[2],xmm1[2],xmm4[3],xmm1[3]
>>>> -; SSE2-NEXT: movd %r14d, %xmm1
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm4 =
>>>> xmm4[0],xmm7[0],xmm4[1],xmm7[1],xmm4[2],xmm7[2],xmm4[3],xmm7[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm5 =
>>>> xmm5[0],xmm6[0],xmm5[1],xmm6[1],xmm5[2],xmm6[2],xmm5[3],xmm6[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm2 =
>>>> xmm2[0],xmm3[0],xmm2[1],xmm3[1],xmm2[2],xmm3[2],xmm2[3],xmm3[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm2 =
>>>> xmm2[0],xmm5[0],xmm2[1],xmm5[1],xmm2[2],xmm5[2],xmm2[3],xmm5[3]
>>>> -; SSE2-NEXT: movd %eax, %xmm3
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm3 =
>>>> xmm3[0],xmm1[0],xmm3[1],xmm1[1],xmm3[2],xmm1[2],xmm3[3],xmm1[3]
>>>> -; SSE2-NEXT: movd %r12d, %xmm1
>>>> -; SSE2-NEXT: movd %r15d, %xmm5
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm5 =
>>>> xmm5[0],xmm1[0],xmm5[1],xmm1[1],xmm5[2],xmm1[2],xmm5[3],xmm1[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm5 =
>>>> xmm5[0],xmm3[0],xmm5[1],xmm3[1],xmm5[2],xmm3[2],xmm5[3],xmm3[3]
>>>> -; SSE2-NEXT: punpcklwd {{.*#+}} xmm5 =
>>>> xmm5[0],xmm2[0],xmm5[1],xmm2[1],xmm5[2],xmm2[2],xmm5[3],xmm2[3]
>>>> -; SSE2-NEXT: movdqu %xmm5, 32(%rdi)
>>>> -; SSE2-NEXT: movdqu %xmm4, 16(%rdi)
>>>> -; SSE2-NEXT: movdqu %xmm0, (%rdi)
>>>> -; SSE2-NEXT: popq %rbx
>>>> -; SSE2-NEXT: popq %r12
>>>> -; SSE2-NEXT: popq %r13
>>>> -; SSE2-NEXT: popq %r14
>>>> -; SSE2-NEXT: popq %r15
>>>> -; SSE2-NEXT: popq %rbp
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm4 = xmm1[0,0,0,3]
>>>> +; SSE2-NEXT: movdqa {{.*#+}} xmm0 = [65535,65535,0,65535,65535,0,6
>>>> 5535,65535]
>>>> +; SSE2-NEXT: movdqa %xmm0, %xmm5
>>>> +; SSE2-NEXT: pandn %xmm4, %xmm5
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm4 = xmm3[0,3,3,3]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm6 = xmm3[1,1,2,2]
>>>> +; SSE2-NEXT: punpcklwd {{.*#+}} xmm3 =
>>>> xmm3[0],xmm2[0],xmm3[1],xmm2[1],xmm3[2],xmm2[2],xmm3[3],xmm2[3]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm3[0,1,2,1]
>>>> +; SSE2-NEXT: pshuflw {{.*#+}} xmm3 = xmm3[0,1,2,2,4,5,6,7]
>>>> +; SSE2-NEXT: pshufhw {{.*#+}} xmm3 = xmm3[0,1,2,3,7,5,4,5]
>>>> +; SSE2-NEXT: pand %xmm0, %xmm3
>>>> +; SSE2-NEXT: por %xmm5, %xmm3
>>>> +; SSE2-NEXT: movdqa %xmm0, %xmm5
>>>> +; SSE2-NEXT: pandn %xmm4, %xmm5
>>>> +; SSE2-NEXT: pshuflw {{.*#+}} xmm4 = xmm2[0,1,3,3,4,5,6,7]
>>>> +; SSE2-NEXT: punpckhwd {{.*#+}} xmm2 =
>>>> xmm2[4],xmm1[4],xmm2[5],xmm1[5],xmm2[6],xmm1[6],xmm2[7],xmm1[7]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm2 = xmm2[2,1,2,3]
>>>> +; SSE2-NEXT: pshuflw {{.*#+}} xmm2 = xmm2[2,3,2,0,4,5,6,7]
>>>> +; SSE2-NEXT: pshufhw {{.*#+}} xmm2 = xmm2[0,1,2,3,5,5,6,7]
>>>> +; SSE2-NEXT: pand %xmm0, %xmm2
>>>> +; SSE2-NEXT: por %xmm5, %xmm2
>>>> +; SSE2-NEXT: movdqa {{.*#+}} xmm5 = [65535,0,65535,65535,0,65535,6
>>>> 5535,0]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,1,2,2]
>>>> +; SSE2-NEXT: pand %xmm5, %xmm1
>>>> +; SSE2-NEXT: pandn %xmm6, %xmm5
>>>> +; SSE2-NEXT: por %xmm1, %xmm5
>>>> +; SSE2-NEXT: pand %xmm0, %xmm5
>>>> +; SSE2-NEXT: pshufhw {{.*#+}} xmm1 = xmm4[0,1,2,3,4,4,6,7]
>>>> +; SSE2-NEXT: pandn %xmm1, %xmm0
>>>> +; SSE2-NEXT: por %xmm5, %xmm0
>>>> +; SSE2-NEXT: movdqu %xmm0, 16(%rdi)
>>>> +; SSE2-NEXT: movdqu %xmm2, 32(%rdi)
>>>> +; SSE2-NEXT: movdqu %xmm3, (%rdi)
>>>> ; SSE2-NEXT: retq
>>>> ;
>>>> ; SSE42-LABEL: interleave_24i16_in:
>>>> ; SSE42: # BB#0:
>>>> -; SSE42-NEXT: movdqu (%rsi), %xmm3
>>>> -; SSE42-NEXT: movdqu (%rdx), %xmm0
>>>> +; SSE42-NEXT: movdqu (%rsi), %xmm0
>>>> +; SSE42-NEXT: movdqu (%rdx), %xmm1
>>>> ; SSE42-NEXT: movdqu (%rcx), %xmm2
>>>> -; SSE42-NEXT: pextrw $3, %xmm3, %eax
>>>> -; SSE42-NEXT: pextrw $2, %xmm2, %ecx
>>>> -; SSE42-NEXT: movd %ecx, %xmm1
>>>> -; SSE42-NEXT: pinsrw $1, %eax, %xmm1
>>>> -; SSE42-NEXT: pextrw $3, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $2, %eax, %xmm1
>>>> -; SSE42-NEXT: pextrw $3, %xmm2, %eax
>>>> -; SSE42-NEXT: pinsrw $3, %eax, %xmm1
>>>> -; SSE42-NEXT: pextrw $4, %xmm3, %eax
>>>> -; SSE42-NEXT: pinsrw $4, %eax, %xmm1
>>>> -; SSE42-NEXT: pextrw $4, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $5, %eax, %xmm1
>>>> -; SSE42-NEXT: pextrw $4, %xmm2, %eax
>>>> -; SSE42-NEXT: pinsrw $6, %eax, %xmm1
>>>> -; SSE42-NEXT: pextrw $5, %xmm3, %eax
>>>> -; SSE42-NEXT: pinsrw $7, %eax, %xmm1
>>>> -; SSE42-NEXT: pextrw $5, %xmm2, %eax
>>>> -; SSE42-NEXT: pextrw $5, %xmm0, %ecx
>>>> -; SSE42-NEXT: movd %ecx, %xmm4
>>>> -; SSE42-NEXT: pinsrw $1, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrw $6, %xmm3, %eax
>>>> -; SSE42-NEXT: pinsrw $2, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrw $6, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $3, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrw $6, %xmm2, %eax
>>>> -; SSE42-NEXT: pinsrw $4, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrw $7, %xmm3, %eax
>>>> -; SSE42-NEXT: pinsrw $5, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrw $7, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $6, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrw $7, %xmm2, %eax
>>>> -; SSE42-NEXT: pinsrw $7, %eax, %xmm4
>>>> -; SSE42-NEXT: movd %xmm0, %eax
>>>> -; SSE42-NEXT: pextrw $1, %xmm3, %ecx
>>>> -; SSE42-NEXT: pextrw $2, %xmm3, %edx
>>>> -; SSE42-NEXT: pinsrw $1, %eax, %xmm3
>>>> -; SSE42-NEXT: movd %xmm2, %eax
>>>> -; SSE42-NEXT: pinsrw $2, %eax, %xmm3
>>>> -; SSE42-NEXT: pinsrw $3, %ecx, %xmm3
>>>> -; SSE42-NEXT: pextrw $1, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $4, %eax, %xmm3
>>>> -; SSE42-NEXT: pextrw $1, %xmm2, %eax
>>>> -; SSE42-NEXT: pinsrw $5, %eax, %xmm3
>>>> -; SSE42-NEXT: pinsrw $6, %edx, %xmm3
>>>> -; SSE42-NEXT: pextrw $2, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrw $7, %eax, %xmm3
>>>> -; SSE42-NEXT: movdqu %xmm3, (%rdi)
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm3 = xmm0[1,1,2,2]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm4 = xmm0[0,3,3,3]
>>>> +; SSE42-NEXT: punpcklwd {{.*#+}} xmm0 =
>>>> xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3]
>>>> +; SSE42-NEXT: pshufb {{.*#+}} xmm0 = xmm0[0,1,2,3,4,5,4,5,6,7,10,11
>>>> ,8,9,10,11]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm5 = xmm2[0,0,0,3]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm5 = xmm0[0,1],xmm5[2],xmm0[3,4],xm
>>>> m5[5],xmm0[6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm0 = xmm2[1,1,2,2]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm0 = xmm0[0],xmm3[1],xmm0[2,3],xmm3
>>>> [4],xmm0[5,6],xmm3[7]
>>>> +; SSE42-NEXT: pshuflw {{.*#+}} xmm3 = xmm1[0,1,3,3,4,5,6,7]
>>>> +; SSE42-NEXT: pshufhw {{.*#+}} xmm3 = xmm3[0,1,2,3,4,4,6,7]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm3 = xmm0[0,1],xmm3[2],xmm0[3,4],xm
>>>> m3[5],xmm0[6,7]
>>>> +; SSE42-NEXT: punpckhwd {{.*#+}} xmm1 =
>>>> xmm1[4],xmm2[4],xmm1[5],xmm2[5],xmm1[6],xmm2[6],xmm1[7],xmm2[7]
>>>> +; SSE42-NEXT: pshufb {{.*#+}} xmm1 = xmm1[4,5,6,7,4,5,8,9,10,11,10,
>>>> 11,12,13,14,15]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm4 = xmm1[0,1],xmm4[2],xmm1[3,4],xm
>>>> m4[5],xmm1[6,7]
>>>> ; SSE42-NEXT: movdqu %xmm4, 32(%rdi)
>>>> -; SSE42-NEXT: movdqu %xmm1, 16(%rdi)
>>>> +; SSE42-NEXT: movdqu %xmm3, 16(%rdi)
>>>> +; SSE42-NEXT: movdqu %xmm5, (%rdi)
>>>> ; SSE42-NEXT: retq
>>>> ;
>>>> ; AVX1-LABEL: interleave_24i16_in:
>>>> @@ -1238,164 +1101,125 @@ define void @interleave_24i16_in(<24 x i
>>>> define void @interleave_24i32_out(<24 x i32>* %p, <8 x i32>* %q1, <8 x
>>>> i32>* %q2, <8 x i32>* %q3) nounwind {
>>>> ; SSE2-LABEL: interleave_24i32_out:
>>>> ; SSE2: # BB#0:
>>>> +; SSE2-NEXT: movdqu 80(%rdi), %xmm8
>>>> ; SSE2-NEXT: movdqu 64(%rdi), %xmm10
>>>> -; SSE2-NEXT: movdqu 80(%rdi), %xmm9
>>>> -; SSE2-NEXT: movdqu (%rdi), %xmm5
>>>> -; SSE2-NEXT: movdqu 16(%rdi), %xmm12
>>>> -; SSE2-NEXT: movdqu 32(%rdi), %xmm11
>>>> -; SSE2-NEXT: movdqu 48(%rdi), %xmm4
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm6 = xmm11[1,1,2,3]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm7 = xmm5[3,1,2,3]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm7 =
>>>> xmm7[0],xmm6[0],xmm7[1],xmm6[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm8 = xmm12[2,3,0,1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm13 = xmm5[1,1,2,3]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm5[2,3,0,1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm5 =
>>>> xmm5[0],xmm8[0],xmm5[1],xmm8[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm5 =
>>>> xmm5[0],xmm7[0],xmm5[1],xmm7[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm7 = xmm9[1,1,2,3]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm4[3,1,2,3]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm1 =
>>>> xmm1[0],xmm7[0],xmm1[1],xmm7[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm8 = xmm10[2,3,0,1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm2 = xmm4[1,1,2,3]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm7 = xmm4[2,3,0,1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm4 =
>>>> xmm4[0],xmm8[0],xmm4[1],xmm8[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm4 =
>>>> xmm4[0],xmm1[0],xmm4[1],xmm1[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm8 = xmm11[2,3,0,1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm12[3,1,2,3]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm12[1,1,2,3]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm12 =
>>>> xmm12[0],xmm8[0],xmm12[1],xmm8[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm13 =
>>>> xmm13[0],xmm3[0],xmm13[1],xmm3[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm13 =
>>>> xmm13[0],xmm12[0],xmm13[1],xmm12[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm12 = xmm9[2,3,0,1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm8 = xmm10[3,1,2,3]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm10[1,1,2,3]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm10 =
>>>> xmm10[0],xmm12[0],xmm10[1],xmm12[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm2 =
>>>> xmm2[0],xmm8[0],xmm2[1],xmm8[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm2 =
>>>> xmm2[0],xmm10[0],xmm2[1],xmm10[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm6 = xmm11[3,1,2,3]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm1 =
>>>> xmm1[0],xmm6[0],xmm1[1],xmm6[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm0 =
>>>> xmm0[0],xmm11[0],xmm0[1],xmm11[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm0 =
>>>> xmm0[0],xmm1[0],xmm0[1],xmm1[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm9[3,1,2,3]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm3 =
>>>> xmm3[0],xmm1[0],xmm3[1],xmm1[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm7 =
>>>> xmm7[0],xmm9[0],xmm7[1],xmm9[1]
>>>> -; SSE2-NEXT: punpckldq {{.*#+}} xmm7 =
>>>> xmm7[0],xmm3[0],xmm7[1],xmm3[1]
>>>> -; SSE2-NEXT: movdqu %xmm4, 16(%rsi)
>>>> -; SSE2-NEXT: movdqu %xmm5, (%rsi)
>>>> -; SSE2-NEXT: movdqu %xmm2, 16(%rdx)
>>>> -; SSE2-NEXT: movdqu %xmm13, (%rdx)
>>>> -; SSE2-NEXT: movdqu %xmm7, 16(%rcx)
>>>> -; SSE2-NEXT: movdqu %xmm0, (%rcx)
>>>> +; SSE2-NEXT: movdqu (%rdi), %xmm0
>>>> +; SSE2-NEXT: movdqu 16(%rdi), %xmm7
>>>> +; SSE2-NEXT: movdqu 32(%rdi), %xmm9
>>>> +; SSE2-NEXT: movdqu 48(%rdi), %xmm2
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm0[0,1,0,3]
>>>> +; SSE2-NEXT: punpckhqdq {{.*#+}} xmm3 = xmm3[1],xmm7[1]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm6 = xmm9[0,1,0,1]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm6 = xmm6[3,0],xmm3[2,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm3 = xmm3[0,1],xmm6[2,0]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm6 = xmm2[0,1,0,3]
>>>> +; SSE2-NEXT: punpckhqdq {{.*#+}} xmm6 = xmm6[1],xmm10[1]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm8[0,1,0,1]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm1 = xmm1[3,0],xmm6[2,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm6 = xmm6[0,1],xmm1[2,0]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm0[2,3,0,1]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm0 = xmm0[1,0],xmm7[0,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm0 = xmm0[0,2],xmm7[3,3]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm5 = xmm9[0,1,2,2]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm5 = xmm5[3,0],xmm0[2,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm0 = xmm0[0,1],xmm5[2,0]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm5 = xmm2[2,3,0,1]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm2 = xmm2[1,0],xmm10[0,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm2 = xmm2[0,2],xmm10[3,3]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm4 = xmm8[0,1,2,2]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm4 = xmm4[3,0],xmm2[2,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm2 = xmm2[0,1],xmm4[2,0]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm4 = xmm7[1,1,2,3]
>>>> +; SSE2-NEXT: punpckldq {{.*#+}} xmm1 =
>>>> xmm1[0],xmm4[0],xmm1[1],xmm4[1]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm4 = xmm9[0,1,0,3]
>>>> +; SSE2-NEXT: movsd {{.*#+}} xmm4 = xmm1[0],xmm4[1]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm10[1,1,2,3]
>>>> +; SSE2-NEXT: punpckldq {{.*#+}} xmm5 =
>>>> xmm5[0],xmm1[0],xmm5[1],xmm1[1]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm8[0,1,0,3]
>>>> +; SSE2-NEXT: movsd {{.*#+}} xmm1 = xmm5[0],xmm1[1]
>>>> +; SSE2-NEXT: movups %xmm6, 16(%rsi)
>>>> +; SSE2-NEXT: movups %xmm3, (%rsi)
>>>> +; SSE2-NEXT: movups %xmm2, 16(%rdx)
>>>> +; SSE2-NEXT: movups %xmm0, (%rdx)
>>>> +; SSE2-NEXT: movupd %xmm1, 16(%rcx)
>>>> +; SSE2-NEXT: movupd %xmm4, (%rcx)
>>>> ; SSE2-NEXT: retq
>>>> ;
>>>> ; SSE42-LABEL: interleave_24i32_out:
>>>> ; SSE42: # BB#0:
>>>> ; SSE42-NEXT: movdqu 80(%rdi), %xmm8
>>>> -; SSE42-NEXT: movdqu 64(%rdi), %xmm9
>>>> -; SSE42-NEXT: movdqu (%rdi), %xmm6
>>>> -; SSE42-NEXT: movdqu 16(%rdi), %xmm3
>>>> +; SSE42-NEXT: movdqu 64(%rdi), %xmm1
>>>> +; SSE42-NEXT: movdqu (%rdi), %xmm5
>>>> +; SSE42-NEXT: movdqu 16(%rdi), %xmm6
>>>> ; SSE42-NEXT: movdqu 32(%rdi), %xmm2
>>>> ; SSE42-NEXT: movdqu 48(%rdi), %xmm4
>>>> -; SSE42-NEXT: pextrd $3, %xmm6, %eax
>>>> -; SSE42-NEXT: pshufd {{.*#+}} xmm5 = xmm6[1,1,2,3]
>>>> -; SSE42-NEXT: pshufd {{.*#+}} xmm7 = xmm6[2,3,0,1]
>>>> -; SSE42-NEXT: pinsrd $1, %eax, %xmm6
>>>> -; SSE42-NEXT: pextrd $2, %xmm3, %eax
>>>> -; SSE42-NEXT: pinsrd $2, %eax, %xmm6
>>>> -; SSE42-NEXT: pextrd $1, %xmm2, %eax
>>>> -; SSE42-NEXT: pinsrd $3, %eax, %xmm6
>>>> -; SSE42-NEXT: pextrd $3, %xmm4, %eax
>>>> -; SSE42-NEXT: pshufd {{.*#+}} xmm0 = xmm4[1,1,2,3]
>>>> -; SSE42-NEXT: pshufd {{.*#+}} xmm1 = xmm4[2,3,0,1]
>>>> -; SSE42-NEXT: pinsrd $1, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrd $2, %xmm9, %eax
>>>> -; SSE42-NEXT: pinsrd $2, %eax, %xmm4
>>>> -; SSE42-NEXT: pextrd $1, %xmm8, %eax
>>>> -; SSE42-NEXT: pinsrd $3, %eax, %xmm4
>>>> -; SSE42-NEXT: movd %xmm3, %eax
>>>> -; SSE42-NEXT: pinsrd $1, %eax, %xmm5
>>>> -; SSE42-NEXT: pextrd $3, %xmm3, %eax
>>>> -; SSE42-NEXT: pinsrd $2, %eax, %xmm5
>>>> -; SSE42-NEXT: pextrd $2, %xmm2, %eax
>>>> -; SSE42-NEXT: pinsrd $3, %eax, %xmm5
>>>> -; SSE42-NEXT: movd %xmm9, %eax
>>>> -; SSE42-NEXT: pinsrd $1, %eax, %xmm0
>>>> -; SSE42-NEXT: pextrd $3, %xmm9, %eax
>>>> -; SSE42-NEXT: pinsrd $2, %eax, %xmm0
>>>> -; SSE42-NEXT: pextrd $2, %xmm8, %eax
>>>> -; SSE42-NEXT: pinsrd $3, %eax, %xmm0
>>>> -; SSE42-NEXT: pextrd $1, %xmm3, %eax
>>>> -; SSE42-NEXT: pinsrd $1, %eax, %xmm7
>>>> -; SSE42-NEXT: movd %xmm2, %eax
>>>> -; SSE42-NEXT: pinsrd $2, %eax, %xmm7
>>>> -; SSE42-NEXT: pextrd $3, %xmm2, %eax
>>>> -; SSE42-NEXT: pinsrd $3, %eax, %xmm7
>>>> -; SSE42-NEXT: pextrd $1, %xmm9, %eax
>>>> -; SSE42-NEXT: pinsrd $1, %eax, %xmm1
>>>> -; SSE42-NEXT: movd %xmm8, %eax
>>>> -; SSE42-NEXT: pinsrd $2, %eax, %xmm1
>>>> -; SSE42-NEXT: pextrd $3, %xmm8, %eax
>>>> -; SSE42-NEXT: pinsrd $3, %eax, %xmm1
>>>> -; SSE42-NEXT: movdqu %xmm4, 16(%rsi)
>>>> -; SSE42-NEXT: movdqu %xmm6, (%rsi)
>>>> -; SSE42-NEXT: movdqu %xmm0, 16(%rdx)
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm7 = xmm5[0,3,2,3]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm7 = xmm7[0,1,2,3],xmm6[4,5,6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm3 = xmm2[0,1,0,1]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm3 = xmm7[0,1,2,3,4,5],xmm3[6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm7 = xmm4[0,3,2,3]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm7 = xmm7[0,1,2,3],xmm1[4,5,6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm0 = xmm8[0,1,0,1]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm0 = xmm7[0,1,2,3,4,5],xmm0[6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm9 = xmm2[0,1,2,2]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm7 = xmm5[2,3,0,1]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm7 = xmm7[0,1],xmm6[2,3],xmm7[4,5,6,7]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm6 = xmm6[0,1],xmm5[2,3],xmm6[4,5,6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm5 = xmm6[1,0,3,3]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm5 = xmm5[0,1,2,3,4,5],xmm9[6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm6 = xmm4[2,3,0,1]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm6 = xmm6[0,1],xmm1[2,3],xmm6[4,5,6,7]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm1 = xmm1[0,1],xmm4[2,3],xmm1[4,5,6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm1 = xmm1[1,0,3,3]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm4 = xmm8[0,1,2,2]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm4 = xmm1[0,1,2,3,4,5],xmm4[6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm1 = xmm2[0,1,0,3]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm1 = xmm7[0,1,2,3],xmm1[4,5,6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm2 = xmm8[0,1,0,3]
>>>> +; SSE42-NEXT: pblendw {{.*#+}} xmm2 = xmm6[0,1,2,3],xmm2[4,5,6,7]
>>>> +; SSE42-NEXT: movdqu %xmm0, 16(%rsi)
>>>> +; SSE42-NEXT: movdqu %xmm3, (%rsi)
>>>> +; SSE42-NEXT: movdqu %xmm4, 16(%rdx)
>>>> ; SSE42-NEXT: movdqu %xmm5, (%rdx)
>>>> -; SSE42-NEXT: movdqu %xmm1, 16(%rcx)
>>>> -; SSE42-NEXT: movdqu %xmm7, (%rcx)
>>>> +; SSE42-NEXT: movdqu %xmm2, 16(%rcx)
>>>> +; SSE42-NEXT: movdqu %xmm1, (%rcx)
>>>> ; SSE42-NEXT: retq
>>>> ;
>>>> ; AVX1-LABEL: interleave_24i32_out:
>>>> ; AVX1: # BB#0:
>>>> -; AVX1-NEXT: vmovdqu (%rdi), %ymm1
>>>> -; AVX1-NEXT: vmovups 32(%rdi), %ymm0
>>>> -; AVX1-NEXT: vmovdqu 64(%rdi), %ymm2
>>>> -; AVX1-NEXT: vextractf128 $1, %ymm0, %xmm3
>>>> -; AVX1-NEXT: vpextrd $3, %xmm3, %eax
>>>> -; AVX1-NEXT: vpinsrd $1, %eax, %xmm3, %xmm4
>>>> -; AVX1-NEXT: vpextrd $2, %xmm2, %eax
>>>> -; AVX1-NEXT: vpinsrd $2, %eax, %xmm4, %xmm5
>>>> -; AVX1-NEXT: vextractf128 $1, %ymm2, %xmm4
>>>> -; AVX1-NEXT: vpextrd $1, %xmm4, %eax
>>>> -; AVX1-NEXT: vpinsrd $3, %eax, %xmm5, %xmm5
>>>> -; AVX1-NEXT: vpextrd $3, %xmm1, %eax
>>>> -; AVX1-NEXT: vpinsrd $1, %eax, %xmm1, %xmm6
>>>> -; AVX1-NEXT: vextractf128 $1, %ymm1, %xmm7
>>>> -; AVX1-NEXT: vpextrd $2, %xmm7, %eax
>>>> -; AVX1-NEXT: vpinsrd $2, %eax, %xmm6, %xmm6
>>>> -; AVX1-NEXT: vpextrd $1, %xmm0, %eax
>>>> -; AVX1-NEXT: vpinsrd $3, %eax, %xmm6, %xmm6
>>>> -; AVX1-NEXT: vinsertf128 $1, %xmm5, %ymm6, %ymm8
>>>> -; AVX1-NEXT: vmovd %xmm2, %eax
>>>> -; AVX1-NEXT: vpshufd {{.*#+}} xmm6 = xmm3[1,1,2,3]
>>>> -; AVX1-NEXT: vpinsrd $1, %eax, %xmm6, %xmm6
>>>> -; AVX1-NEXT: vpextrd $3, %xmm2, %eax
>>>> -; AVX1-NEXT: vpinsrd $2, %eax, %xmm6, %xmm6
>>>> -; AVX1-NEXT: vpextrd $2, %xmm4, %eax
>>>> -; AVX1-NEXT: vpinsrd $3, %eax, %xmm6, %xmm6
>>>> -; AVX1-NEXT: vmovd %xmm7, %eax
>>>> -; AVX1-NEXT: vpshufd {{.*#+}} xmm5 = xmm1[1,1,2,3]
>>>> -; AVX1-NEXT: vpinsrd $1, %eax, %xmm5, %xmm5
>>>> -; AVX1-NEXT: vpextrd $3, %xmm7, %eax
>>>> -; AVX1-NEXT: vpinsrd $2, %eax, %xmm5, %xmm5
>>>> -; AVX1-NEXT: vpextrd $2, %xmm0, %eax
>>>> -; AVX1-NEXT: vpinsrd $3, %eax, %xmm5, %xmm5
>>>> +; AVX1-NEXT: vmovups (%rdi), %ymm0
>>>> +; AVX1-NEXT: vmovups 32(%rdi), %ymm1
>>>> +; AVX1-NEXT: vmovups 64(%rdi), %ymm2
>>>> +; AVX1-NEXT: vextractf128 $1, %ymm2, %xmm3
>>>> +; AVX1-NEXT: vinsertps {{.*#+}} xmm4 = zero,zero,xmm2[2],xmm3[1]
>>>> +; AVX1-NEXT: vinsertf128 $1, %xmm4, %ymm0, %ymm4
>>>> +; AVX1-NEXT:    vblendps {{.*#+}} ymm5 = ymm0[0],ymm1[1],ymm0[2,3],ymm1[4],ymm0[5,6],ymm1[7]
>>>> +; AVX1-NEXT: vextractf128 $1, %ymm5, %xmm6
>>>> +; AVX1-NEXT: vblendps {{.*#+}} xmm5 = xmm5[0,1],xmm6[2],xmm5[3]
>>>> +; AVX1-NEXT: vpermilps {{.*#+}} xmm5 = xmm5[0,3,2,1]
>>>> +; AVX1-NEXT: vpermilps {{.*#+}} xmm6 = xmm6[0,3,2,3]
>>>> ; AVX1-NEXT: vinsertf128 $1, %xmm6, %ymm5, %ymm5
>>>> -; AVX1-NEXT: vpextrd $1, %xmm2, %eax
>>>> -; AVX1-NEXT: vpshufd {{.*#+}} xmm2 = xmm3[2,3,0,1]
>>>> -; AVX1-NEXT: vpinsrd $1, %eax, %xmm2, %xmm2
>>>> -; AVX1-NEXT: vmovd %xmm4, %eax
>>>> -; AVX1-NEXT: vpinsrd $2, %eax, %xmm2, %xmm2
>>>> -; AVX1-NEXT: vpextrd $3, %xmm4, %eax
>>>> -; AVX1-NEXT: vpinsrd $3, %eax, %xmm2, %xmm2
>>>> -; AVX1-NEXT: vpextrd $1, %xmm7, %eax
>>>> -; AVX1-NEXT: vpshufd {{.*#+}} xmm1 = xmm1[2,3,0,1]
>>>> -; AVX1-NEXT: vpinsrd $1, %eax, %xmm1, %xmm1
>>>> -; AVX1-NEXT: vmovd %xmm0, %eax
>>>> -; AVX1-NEXT: vpinsrd $2, %eax, %xmm1, %xmm1
>>>> -; AVX1-NEXT: vpextrd $3, %xmm0, %eax
>>>> -; AVX1-NEXT: vpinsrd $3, %eax, %xmm1, %xmm0
>>>> -; AVX1-NEXT: vinsertf128 $1, %xmm2, %ymm0, %ymm0
>>>> -; AVX1-NEXT: vmovups %ymm8, (%rsi)
>>>> +; AVX1-NEXT: vblendpd {{.*#+}} ymm4 = ymm5[0,1,2],ymm4[3]
>>>> +; AVX1-NEXT: vblendps {{.*#+}} xmm5 = xmm2[0,1],xmm3[2],xmm2[3]
>>>> +; AVX1-NEXT: vpermilps {{.*#+}} xmm5 = xmm5[0,0,3,2]
>>>> +; AVX1-NEXT: vinsertf128 $1, %xmm5, %ymm0, %ymm5
>>>> +; AVX1-NEXT:    vblendps {{.*#+}} ymm6 = ymm0[0,1],ymm1[2],ymm0[3,4],ymm1[5],ymm0[6,7]
>>>> +; AVX1-NEXT: vextractf128 $1, %ymm6, %xmm7
>>>> +; AVX1-NEXT: vblendps {{.*#+}} xmm6 = xmm7[0],xmm6[1,2],xmm7[3]
>>>> +; AVX1-NEXT: vpermilps {{.*#+}} xmm6 = xmm6[1,0,3,2]
>>>> +; AVX1-NEXT: vmovshdup {{.*#+}} xmm7 = xmm7[1,1,3,3]
>>>> +; AVX1-NEXT: vinsertf128 $1, %xmm7, %ymm6, %ymm6
>>>> +; AVX1-NEXT: vblendps {{.*#+}} ymm5 = ymm6[0,1,2,3,4],ymm5[5,6,7]
>>>> +; AVX1-NEXT: vshufps {{.*#+}} xmm2 = xmm2[0,1],xmm3[0,3]
>>>> +; AVX1-NEXT: vinsertf128 $1, %xmm2, %ymm0, %ymm2
>>>> +; AVX1-NEXT:    vblendps {{.*#+}} ymm0 = ymm1[0,1],ymm0[2],ymm1[3,4],ymm0[5],ymm1[6,7]
>>>> +; AVX1-NEXT: vextractf128 $1, %ymm0, %xmm1
>>>> +; AVX1-NEXT: vblendps {{.*#+}} xmm0 = xmm0[0],xmm1[1],xmm0[2,3]
>>>> +; AVX1-NEXT: vpermilps {{.*#+}} xmm0 = xmm0[2,1,0,3]
>>>> +; AVX1-NEXT: vpermilpd {{.*#+}} xmm1 = xmm1[1,0]
>>>> +; AVX1-NEXT: vinsertf128 $1, %xmm1, %ymm0, %ymm0
>>>> +; AVX1-NEXT: vblendps {{.*#+}} ymm0 = ymm0[0,1,2,3,4],ymm2[5,6,7]
>>>> +; AVX1-NEXT: vmovupd %ymm4, (%rsi)
>>>> ; AVX1-NEXT: vmovups %ymm5, (%rdx)
>>>> ; AVX1-NEXT: vmovups %ymm0, (%rcx)
>>>> ; AVX1-NEXT: vzeroupper
>>>> @@ -1403,57 +1227,29 @@ define void @interleave_24i32_out(<24 x
>>>> ;
>>>> ; AVX2-LABEL: interleave_24i32_out:
>>>> ; AVX2: # BB#0:
>>>> -; AVX2-NEXT: vmovdqu (%rdi), %ymm1
>>>> -; AVX2-NEXT: vmovdqu 32(%rdi), %ymm0
>>>> +; AVX2-NEXT: vmovdqu (%rdi), %ymm0
>>>> +; AVX2-NEXT: vmovdqu 32(%rdi), %ymm1
>>>> ; AVX2-NEXT: vmovdqu 64(%rdi), %ymm2
>>>> -; AVX2-NEXT: vextracti128 $1, %ymm0, %xmm3
>>>> -; AVX2-NEXT: vpextrd $3, %xmm3, %eax
>>>> -; AVX2-NEXT: vpinsrd $1, %eax, %xmm3, %xmm4
>>>> -; AVX2-NEXT: vpextrd $2, %xmm2, %eax
>>>> -; AVX2-NEXT: vpinsrd $2, %eax, %xmm4, %xmm5
>>>> -; AVX2-NEXT: vextracti128 $1, %ymm2, %xmm4
>>>> -; AVX2-NEXT: vpextrd $1, %xmm4, %eax
>>>> -; AVX2-NEXT: vpinsrd $3, %eax, %xmm5, %xmm5
>>>> -; AVX2-NEXT: vpextrd $3, %xmm1, %eax
>>>> -; AVX2-NEXT: vpinsrd $1, %eax, %xmm1, %xmm6
>>>> -; AVX2-NEXT: vextracti128 $1, %ymm1, %xmm7
>>>> -; AVX2-NEXT: vpextrd $2, %xmm7, %eax
>>>> -; AVX2-NEXT: vpinsrd $2, %eax, %xmm6, %xmm6
>>>> -; AVX2-NEXT: vpextrd $1, %xmm0, %eax
>>>> -; AVX2-NEXT: vpinsrd $3, %eax, %xmm6, %xmm6
>>>> -; AVX2-NEXT: vinserti128 $1, %xmm5, %ymm6, %ymm8
>>>> -; AVX2-NEXT: vmovd %xmm2, %eax
>>>> -; AVX2-NEXT: vpshufd {{.*#+}} xmm6 = xmm3[1,1,2,3]
>>>> -; AVX2-NEXT: vpinsrd $1, %eax, %xmm6, %xmm6
>>>> -; AVX2-NEXT: vpextrd $3, %xmm2, %eax
>>>> -; AVX2-NEXT: vpinsrd $2, %eax, %xmm6, %xmm6
>>>> -; AVX2-NEXT: vpextrd $2, %xmm4, %eax
>>>> -; AVX2-NEXT: vpinsrd $3, %eax, %xmm6, %xmm6
>>>> -; AVX2-NEXT: vmovd %xmm7, %eax
>>>> -; AVX2-NEXT: vpshufd {{.*#+}} xmm5 = xmm1[1,1,2,3]
>>>> -; AVX2-NEXT: vpinsrd $1, %eax, %xmm5, %xmm5
>>>> -; AVX2-NEXT: vpextrd $3, %xmm7, %eax
>>>> -; AVX2-NEXT: vpinsrd $2, %eax, %xmm5, %xmm5
>>>> -; AVX2-NEXT: vpextrd $2, %xmm0, %eax
>>>> -; AVX2-NEXT: vpinsrd $3, %eax, %xmm5, %xmm5
>>>> -; AVX2-NEXT: vinserti128 $1, %xmm6, %ymm5, %ymm5
>>>> -; AVX2-NEXT: vpextrd $1, %xmm2, %eax
>>>> -; AVX2-NEXT: vpshufd {{.*#+}} xmm2 = xmm3[2,3,0,1]
>>>> -; AVX2-NEXT: vpinsrd $1, %eax, %xmm2, %xmm2
>>>> -; AVX2-NEXT: vmovd %xmm4, %eax
>>>> -; AVX2-NEXT: vpinsrd $2, %eax, %xmm2, %xmm2
>>>> -; AVX2-NEXT: vpextrd $3, %xmm4, %eax
>>>> -; AVX2-NEXT: vpinsrd $3, %eax, %xmm2, %xmm2
>>>> -; AVX2-NEXT: vpextrd $1, %xmm7, %eax
>>>> -; AVX2-NEXT: vpshufd {{.*#+}} xmm1 = xmm1[2,3,0,1]
>>>> -; AVX2-NEXT: vpinsrd $1, %eax, %xmm1, %xmm1
>>>> -; AVX2-NEXT: vmovd %xmm0, %eax
>>>> -; AVX2-NEXT: vpinsrd $2, %eax, %xmm1, %xmm1
>>>> -; AVX2-NEXT: vpextrd $3, %xmm0, %eax
>>>> -; AVX2-NEXT: vpinsrd $3, %eax, %xmm1, %xmm0
>>>> -; AVX2-NEXT: vinserti128 $1, %xmm2, %ymm0, %ymm0
>>>> -; AVX2-NEXT: vmovdqu %ymm8, (%rsi)
>>>> -; AVX2-NEXT: vmovdqu %ymm5, (%rdx)
>>>> +; AVX2-NEXT: vmovdqa {{.*#+}} ymm3 = <u,u,u,u,u,u,2,5>
>>>> +; AVX2-NEXT: vpermd %ymm2, %ymm3, %ymm3
>>>> +; AVX2-NEXT:    vpblendd {{.*#+}} ymm4 = ymm0[0],ymm1[1],ymm0[2,3],ymm1[4],ymm0[5,6],ymm1[7]
>>>> +; AVX2-NEXT: vmovdqa {{.*#+}} ymm5 = <0,3,6,1,4,7,u,u>
>>>> +; AVX2-NEXT: vpermd %ymm4, %ymm5, %ymm4
>>>> +; AVX2-NEXT: vpblendd {{.*#+}} ymm3 = ymm4[0,1,2,3,4,5],ymm3[6,7]
>>>> +; AVX2-NEXT: vmovdqa {{.*#+}} ymm4 = <u,u,u,u,u,0,3,6>
>>>> +; AVX2-NEXT: vpermd %ymm2, %ymm4, %ymm4
>>>> +; AVX2-NEXT:    vpblendd {{.*#+}} ymm5 = ymm0[0,1],ymm1[2],ymm0[3,4],ymm1[5],ymm0[6,7]
>>>> +; AVX2-NEXT: vmovdqa {{.*#+}} ymm6 = <1,4,7,2,5,u,u,u>
>>>> +; AVX2-NEXT: vpermd %ymm5, %ymm6, %ymm5
>>>> +; AVX2-NEXT: vpblendd {{.*#+}} ymm4 = ymm5[0,1,2,3,4],ymm4[5,6,7]
>>>> +; AVX2-NEXT:    vpblendd {{.*#+}} ymm0 = ymm1[0,1],ymm0[2],ymm1[3,4],ymm0[5],ymm1[6,7]
>>>> +; AVX2-NEXT: vmovdqa {{.*#+}} ymm1 = <2,5,0,3,6,u,u,u>
>>>> +; AVX2-NEXT: vpermd %ymm0, %ymm1, %ymm0
>>>> +; AVX2-NEXT: vpshufd {{.*#+}} ymm1 = ymm2[0,1,0,3,4,5,4,7]
>>>> +; AVX2-NEXT: vpermq {{.*#+}} ymm1 = ymm1[0,1,0,3]
>>>> +; AVX2-NEXT: vpblendd {{.*#+}} ymm0 = ymm0[0,1,2,3,4],ymm1[5,6,7]
>>>> +; AVX2-NEXT: vmovdqu %ymm3, (%rsi)
>>>> +; AVX2-NEXT: vmovdqu %ymm4, (%rdx)
>>>> ; AVX2-NEXT: vmovdqu %ymm0, (%rcx)
>>>> ; AVX2-NEXT: vzeroupper
>>>> ; AVX2-NEXT: retq
>>>> @@ -1470,223 +1266,153 @@ define void @interleave_24i32_out(<24 x
>>>> define void @interleave_24i32_in(<24 x i32>* %p, <8 x i32>* %q1, <8 x i32>* %q2, <8 x i32>* %q3) nounwind {
>>>> ; SSE2-LABEL: interleave_24i32_in:
>>>> ; SSE2: # BB#0:
>>>> -; SSE2-NEXT: movdqu (%rsi), %xmm4
>>>> -; SSE2-NEXT: movdqu 16(%rsi), %xmm1
>>>> -; SSE2-NEXT: movdqu (%rdx), %xmm5
>>>> -; SSE2-NEXT: movdqu 16(%rdx), %xmm2
>>>> -; SSE2-NEXT: movdqu (%rcx), %xmm6
>>>> -; SSE2-NEXT: movdqu 16(%rcx), %xmm11
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm8 = xmm4[1,1,2,3]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm9 = xmm5[2,3,0,1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm5[1,1,2,3]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm10 = xmm5[3,1,2,3]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm5 = xmm5[0],xmm8[0],xmm5[1],xmm8[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm7 = xmm4[2,3,0,1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm4[3,1,2,3]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm4 = xmm4[0],xmm6[0],xmm4[1],xmm6[1]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm4 = xmm4[0],xmm5[0],xmm4[1],xmm5[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm5 = xmm6[1,1,2,3]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm5 = xmm5[0],xmm9[0],xmm5[1],xmm9[1]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm3 = xmm3[0],xmm7[0],xmm3[1],xmm7[1]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm3 = xmm3[0],xmm5[0],xmm3[1],xmm5[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm5 = xmm6[3,1,2,3]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm5[0],xmm0[1],xmm5[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm5 = xmm6[2,3,0,1]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm5 = xmm5[0],xmm10[0],xmm5[1],xmm10[1]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm5 = xmm5[0],xmm0[0],xmm5[1],xmm0[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[1,1,2,3]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm8 = xmm2[2,3,0,1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm7 = xmm2[1,1,2,3]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm9 = xmm2[3,1,2,3]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm2 = xmm2[0],xmm0[0],xmm2[1],xmm0[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm1[2,3,0,1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm6 = xmm1[3,1,2,3]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm11[0],xmm1[1],xmm11[1]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm2[0],xmm1[1],xmm2[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm2 = xmm11[1,1,2,3]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm2 = xmm2[0],xmm8[0],xmm2[1],xmm8[1]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm7 = xmm7[0],xmm0[0],xmm7[1],xmm0[1]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm7 = xmm7[0],xmm2[0],xmm7[1],xmm2[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm11[3,1,2,3]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm6 = xmm6[0],xmm0[0],xmm6[1],xmm0[1]
>>>> -; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm11[2,3,0,1]
>>>> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm9[0],xmm0[1],xmm9[1]
>>>> +; SSE2-NEXT: movdqu (%rsi), %xmm5
>>>> +; SSE2-NEXT: movdqu 16(%rsi), %xmm2
>>>> +; SSE2-NEXT: movdqu (%rdx), %xmm6
>>>> +; SSE2-NEXT: movdqu 16(%rdx), %xmm1
>>>> +; SSE2-NEXT: movdqu (%rcx), %xmm7
>>>> +; SSE2-NEXT: movdqu 16(%rcx), %xmm4
>>>> +; SSE2-NEXT: movdqa %xmm5, %xmm0
>>>> ; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm6[0],xmm0[1],xmm6[1]
>>>> -; SSE2-NEXT: movdqu %xmm0, 80(%rdi)
>>>> -; SSE2-NEXT: movdqu %xmm7, 64(%rdi)
>>>> -; SSE2-NEXT: movdqu %xmm1, 48(%rdi)
>>>> -; SSE2-NEXT: movdqu %xmm5, 32(%rdi)
>>>> -; SSE2-NEXT: movdqu %xmm3, 16(%rdi)
>>>> -; SSE2-NEXT: movdqu %xmm4, (%rdi)
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm0 = xmm0[0,1,2,2]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm3 = xmm7[0,1,0,1]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm3 = xmm3[2,0],xmm0[3,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm0 = xmm0[0,1],xmm3[0,2]
>>>> +; SSE2-NEXT: movdqa %xmm7, %xmm3
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm3 = xmm3[1,0],xmm6[1,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm3 = xmm3[2,0],xmm6[2,2]
>>>> +; SSE2-NEXT:    punpckhdq {{.*#+}} xmm7 = xmm7[2],xmm5[2],xmm7[3],xmm5[3]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm5 = xmm5[2,0],xmm3[3,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm3 = xmm3[0,1],xmm5[0,2]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm5 = xmm7[0,3,2,2]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm6 = xmm6[2,2,3,3]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm6 = xmm6[2,0],xmm5[3,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm5 = xmm5[0,1],xmm6[0,2]
>>>> +; SSE2-NEXT: movdqa %xmm2, %xmm6
>>>> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm6 = xmm6[0],xmm1[0],xmm6[1],xmm1[1]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm6 = xmm6[0,1,2,2]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm7 = xmm4[0,1,0,1]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm7 = xmm7[2,0],xmm6[3,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm6 = xmm6[0,1],xmm7[0,2]
>>>> +; SSE2-NEXT: movdqa %xmm4, %xmm7
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm7 = xmm7[1,0],xmm1[1,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm7 = xmm7[2,0],xmm1[2,2]
>>>> +; SSE2-NEXT:    punpckhdq {{.*#+}} xmm4 = xmm4[2],xmm2[2],xmm4[3],xmm2[3]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm2 = xmm2[2,0],xmm7[3,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm7 = xmm7[0,1],xmm2[0,2]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm2 = xmm4[0,3,2,2]
>>>> +; SSE2-NEXT: pshufd {{.*#+}} xmm1 = xmm1[2,2,3,3]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm1 = xmm1[2,0],xmm2[3,0]
>>>> +; SSE2-NEXT: shufps {{.*#+}} xmm2 = xmm2[0,1],xmm1[0,2]
>>>> +; SSE2-NEXT: movups %xmm2, 80(%rdi)
>>>> +; SSE2-NEXT: movups %xmm7, 64(%rdi)
>>>> +; SSE2-NEXT: movups %xmm6, 48(%rdi)
>>>> +; SSE2-NEXT: movups %xmm5, 32(%rdi)
>>>> +; SSE2-NEXT: movups %xmm3, 16(%rdi)
>>>> +; SSE2-NEXT: movups %xmm0, (%rdi)
>>>> ; SSE2-NEXT: retq
>>>> ;
>>>> ; SSE42-LABEL: interleave_24i32_in:
>>>> ; SSE42: # BB#0:
>>>> -; SSE42-NEXT: movdqu (%rsi), %xmm3
>>>> +; SSE42-NEXT: movdqu (%rsi), %xmm5
>>>> ; SSE42-NEXT: movdqu 16(%rsi), %xmm2
>>>> ; SSE42-NEXT: movdqu (%rdx), %xmm6
>>>> -; SSE42-NEXT: movdqu 16(%rdx), %xmm0
>>>> +; SSE42-NEXT: movdqu 16(%rdx), %xmm1
>>>> ; SSE42-NEXT: movdqu (%rcx), %xmm7
>>>> -; SSE42-NEXT: movdqu 16(%rcx), %xmm1
>>>> -; SSE42-NEXT: movd %xmm6, %eax
>>>> -; SSE42-NEXT: pextrd $1, %xmm3, %ecx
>>>> -; SSE42-NEXT: pextrd $2, %xmm3, %edx
>>>> -; SSE42-NEXT: pextrd $3, %xmm3, %esi
>>>> -; SSE42-NEXT: pinsrd $1, %eax, %xmm3
>>>> -; SSE42-NEXT: movd %xmm7, %eax
>>>> -; SSE42-NEXT: pinsrd $2, %eax, %xmm3
>>>> -; SSE42-NEXT: pinsrd $3, %ecx, %xmm3
>>>> -; SSE42-NEXT: pextrd $1, %xmm7, %eax
>>>> -; SSE42-NEXT: pshufd {{.*#+}} xmm4 = xmm6[1,1,2,3]
>>>> -; SSE42-NEXT: pinsrd $1, %eax, %xmm4
>>>> -; SSE42-NEXT: pinsrd $2, %edx, %xmm4
>>>> -; SSE42-NEXT: pextrd $2, %xmm6, %eax
>>>> -; SSE42-NEXT: pinsrd $3, %eax, %xmm4
>>>> -; SSE42-NEXT: pshufd {{.*#+}} xmm5 = xmm7[2,3,0,1]
>>>> -; SSE42-NEXT: pinsrd $1, %esi, %xmm5
>>>> -; SSE42-NEXT: pextrd $3, %xmm6, %eax
>>>> -; SSE42-NEXT: pinsrd $2, %eax, %xmm5
>>>> -; SSE42-NEXT: pextrd $3, %xmm7, %eax
>>>> -; SSE42-NEXT: pinsrd $3, %eax, %xmm5
>>>> -; SSE42-NEXT: movd %xmm0, %eax
>>>> -; SSE42-NEXT: pextrd $1, %xmm2, %ecx
>>>> -; SSE42-NEXT: pextrd $2, %xmm2, %edx
>>>> -; SSE42-NEXT: pextrd $3, %xmm2, %esi
>>>> -; SSE42-NEXT: pinsrd $1, %eax, %xmm2
>>>> -; SSE42-NEXT: movd %xmm1, %eax
>>>> -; SSE42-NEXT: pinsrd $2, %eax, %xmm2
>>>> -; SSE42-NEXT: pinsrd $3, %ecx, %xmm2
>>>> -; SSE42-NEXT: pextrd $1, %xmm1, %eax
>>>> -; SSE42-NEXT: pshufd {{.*#+}} xmm6 = xmm0[1,1,2,3]
>>>> -; SSE42-NEXT: pinsrd $1, %eax, %xmm6
>>>> -; SSE42-NEXT: pinsrd $2, %edx, %xmm6
>>>> -; SSE42-NEXT: pextrd $2, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrd $3, %eax, %xmm6
>>>> -; SSE42-NEXT: pshufd {{.*#+}} xmm7 = xmm1[2,3,0,1]
>>>> -; SSE42-NEXT: pinsrd $1, %esi, %xmm7
>>>> -; SSE42-NEXT: pextrd $3, %xmm0, %eax
>>>> -; SSE42-NEXT: pinsrd $2, %eax, %xmm7
>>>> -; SSE42-NEXT: pextrd $3, %xmm1, %eax
>>>> -; SSE42-NEXT: pinsrd $3, %eax, %xmm7
>>>> -; SSE42-NEXT: movdqu %xmm7, 80(%rdi)
>>>> -; SSE42-NEXT: movdqu %xmm6, 64(%rdi)
>>>> -; SSE42-NEXT: movdqu %xmm2, 48(%rdi)
>>>> +; SSE42-NEXT: movdqu 16(%rcx), %xmm4
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm0 = xmm6[0,0,1,1]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm3 = xmm5[0,1,0,1]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm3 = xmm3[0,1],xmm0[2,3],xmm3[4,5,6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm0 = xmm7[0,1,0,1]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm0 = xmm3[0,1,2,3],xmm0[4,5],xmm3[6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm3 = xmm6[1,1,2,2]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm3 = xmm3[0,1],xmm7[2,3],xmm3[4,5,6,7]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm3 = xmm3[0,1,2,3],xmm5[4,5],xmm3[6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm5 = xmm5[2,3,0,1]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm7 = xmm7[2,3,2,3]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm7 = xmm7[0,1],xmm5[2,3],xmm7[4,5,6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm5 = xmm6[2,2,3,3]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm5 = xmm7[0,1,2,3],xmm5[4,5],xmm7[6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm6 = xmm1[0,0,1,1]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm7 = xmm2[0,1,0,1]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm7 = xmm7[0,1],xmm6[2,3],xmm7[4,5,6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm6 = xmm4[0,1,0,1]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm6 = xmm7[0,1,2,3],xmm6[4,5],xmm7[6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm7 = xmm1[1,1,2,2]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm7 = xmm7[0,1],xmm4[2,3],xmm7[4,5,6,7]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm7 = xmm7[0,1,2,3],xmm2[4,5],xmm7[6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm2 = xmm2[2,3,0,1]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm4 = xmm4[2,3,2,3]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm4 = xmm4[0,1],xmm2[2,3],xmm4[4,5,6,7]
>>>> +; SSE42-NEXT: pshufd {{.*#+}} xmm1 = xmm1[2,2,3,3]
>>>> +; SSE42-NEXT:    pblendw {{.*#+}} xmm1 = xmm4[0,1,2,3],xmm1[4,5],xmm4[6,7]
>>>> +; SSE42-NEXT: movdqu %xmm1, 80(%rdi)
>>>> +; SSE42-NEXT: movdqu %xmm7, 64(%rdi)
>>>> +; SSE42-NEXT: movdqu %xmm6, 48(%rdi)
>>>> ; SSE42-NEXT: movdqu %xmm5, 32(%rdi)
>>>> -; SSE42-NEXT: movdqu %xmm4, 16(%rdi)
>>>> -; SSE42-NEXT: movdqu %xmm3, (%rdi)
>>>> +; SSE42-NEXT: movdqu %xmm3, 16(%rdi)
>>>> +; SSE42-NEXT: movdqu %xmm0, (%rdi)
>>>> ; SSE42-NEXT: retq
>>>> ;
>>>> ; AVX1-LABEL: interleave_24i32_in:
>>>> ; AVX1: # BB#0:
>>>> -; AVX1-NEXT: vmovdqu (%rsi), %ymm2
>>>> -; AVX1-NEXT: vmovdqu (%rdx), %ymm3
>>>> -; AVX1-NEXT: vmovdqu (%rcx), %ymm1
>>>> -; AVX1-NEXT: vpextrd $1, %xmm1, %eax
>>>> -; AVX1-NEXT: vpshufd {{.*#+}} xmm0 = xmm3[1,1,2,3]
>>>> -; AVX1-NEXT: vpinsrd $1, %eax, %xmm0, %xmm0
>>>> -; AVX1-NEXT: vpextrd $2, %xmm2, %eax
>>>> -; AVX1-NEXT: vpinsrd $2, %eax, %xmm0, %xmm0
>>>> -; AVX1-NEXT: vpextrd $2, %xmm3, %eax
>>>> -; AVX1-NEXT: vpinsrd $3, %eax, %xmm0, %xmm0
>>>> -; AVX1-NEXT: vmovd %xmm3, %eax
>>>> -; AVX1-NEXT: vpinsrd $1, %eax, %xmm2, %xmm4
>>>> -; AVX1-NEXT: vmovd %xmm1, %eax
>>>> -; AVX1-NEXT: vpinsrd $2, %eax, %xmm4, %xmm4
>>>> -; AVX1-NEXT: vpextrd $1, %xmm2, %eax
>>>> -; AVX1-NEXT: vpinsrd $3, %eax, %xmm4, %xmm4
>>>> -; AVX1-NEXT: vinsertf128 $1, %xmm0, %ymm4, %ymm0
>>>> +; AVX1-NEXT: vmovups (%rsi), %ymm0
>>>> +; AVX1-NEXT: vmovups (%rdx), %ymm1
>>>> +; AVX1-NEXT: vmovupd (%rcx), %ymm2
>>>> +; AVX1-NEXT: vshufps {{.*#+}} xmm3 = xmm0[2,0],xmm1[2,0]
>>>> +; AVX1-NEXT: vshufps {{.*#+}} xmm3 = xmm1[1,1],xmm3[0,2]
>>>> +; AVX1-NEXT: vshufps {{.*#+}} xmm4 = xmm1[0,0],xmm0[0,0]
>>>> +; AVX1-NEXT: vshufps {{.*#+}} xmm4 = xmm4[2,0],xmm0[2,1]
>>>> +; AVX1-NEXT: vinsertf128 $1, %xmm3, %ymm4, %ymm3
>>>> +; AVX1-NEXT: vmovddup {{.*#+}} xmm4 = xmm2[0,0]
>>>> +; AVX1-NEXT: vinsertf128 $1, %xmm4, %ymm4, %ymm4
>>>> +; AVX1-NEXT:    vblendps {{.*#+}} ymm3 = ymm3[0,1],ymm4[2],ymm3[3,4],ymm4[5],ymm3[6,7]
>>>> ; AVX1-NEXT: vextractf128 $1, %ymm2, %xmm4
>>>> -; AVX1-NEXT: vextractf128 $1, %ymm3, %xmm5
>>>> -; AVX1-NEXT: vmovd %xmm5, %eax
>>>> -; AVX1-NEXT: vpinsrd $1, %eax, %xmm4, %xmm6
>>>> -; AVX1-NEXT: vextractf128 $1, %ymm1, %xmm7
>>>> -; AVX1-NEXT: vmovd %xmm7, %eax
>>>> -; AVX1-NEXT: vpinsrd $2, %eax, %xmm6, %xmm6
>>>> -; AVX1-NEXT: vpextrd $1, %xmm4, %eax
>>>> -; AVX1-NEXT: vpinsrd $3, %eax, %xmm6, %xmm6
>>>> -; AVX1-NEXT: vpextrd $3, %xmm2, %eax
>>>> -; AVX1-NEXT: vpshufd {{.*#+}} xmm2 = xmm1[2,3,0,1]
>>>> -; AVX1-NEXT: vpinsrd $1, %eax, %xmm2, %xmm2
>>>> -; AVX1-NEXT: vpextrd $3, %xmm3, %eax
>>>> -; AVX1-NEXT: vpinsrd $2, %eax, %xmm2, %xmm2
>>>> -; AVX1-NEXT: vpextrd $3, %xmm1, %eax
>>>> -; AVX1-NEXT: vpinsrd $3, %eax, %xmm2, %xmm1
>>>> -; AVX1-NEXT: vinsertf128 $1, %xmm6, %ymm1, %ymm1
>>>> -; AVX1-NEXT: vpextrd $3, %xmm4, %eax
>>>> -; AVX1-NEXT: vpshufd {{.*#+}} xmm2 = xmm7[2,3,0,1]
>>>> -; AVX1-NEXT: vpinsrd $1, %eax, %xmm2, %xmm2
>>>> -; AVX1-NEXT: vpextrd $3, %xmm5, %eax
>>>> -; AVX1-NEXT: vpinsrd $2, %eax, %xmm2, %xmm2
>>>> -; AVX1-NEXT: vpextrd $3, %xmm7, %eax
>>>> -; AVX1-NEXT: vpinsrd $3, %eax, %xmm2, %xmm2
>>>> -; AVX1-NEXT: vpextrd $1, %xmm7, %eax
>>>> -; AVX1-NEXT: vpshufd {{.*#+}} xmm3 = xmm5[1,1,2,3]
>>>> -; AVX1-NEXT: vpinsrd $1, %eax, %xmm3, %xmm3
>>>> -; AVX1-NEXT: vpextrd $2, %xmm4, %eax
>>>> -; AVX1-NEXT: vpinsrd $2, %eax, %xmm3, %xmm3
>>>> -; AVX1-NEXT: vpextrd $2, %xmm5, %eax
>>>> -; AVX1-NEXT: vpinsrd $3, %eax, %xmm3, %xmm3
>>>> -; AVX1-NEXT: vinsertf128 $1, %xmm2, %ymm3, %ymm2
>>>> -; AVX1-NEXT: vmovups %ymm2, 64(%rdi)
>>>> -; AVX1-NEXT: vmovups %ymm1, 32(%rdi)
>>>> -; AVX1-NEXT: vmovups %ymm0, (%rdi)
>>>> +; AVX1-NEXT: vextractf128 $1, %ymm1, %xmm5
>>>> +; AVX1-NEXT: vshufps {{.*#+}} xmm6 = xmm5[3,0],xmm4[3,0]
>>>> +; AVX1-NEXT: vshufps {{.*#+}} xmm6 = xmm4[2,1],xmm6[0,2]
>>>> +; AVX1-NEXT: vshufps {{.*#+}} xmm4 = xmm4[1,0],xmm5[1,0]
>>>> +; AVX1-NEXT: vshufps {{.*#+}} xmm4 = xmm4[2,0],xmm5[2,2]
>>>> +; AVX1-NEXT: vinsertf128 $1, %xmm6, %ymm4, %ymm4
>>>> +; AVX1-NEXT: vpermilpd {{.*#+}} ymm5 = ymm0[1,1,3,3]
>>>> +; AVX1-NEXT: vperm2f128 {{.*#+}} ymm5 = ymm5[2,3,2,3]
>>>> +; AVX1-NEXT:    vblendps {{.*#+}} ymm4 = ymm4[0,1],ymm5[2],ymm4[3,4],ymm5[5],ymm4[6,7]
>>>> +; AVX1-NEXT: vpermilpd {{.*#+}} ymm0 = ymm0[1,0,2,2]
>>>> +; AVX1-NEXT: vpermilpd {{.*#+}} ymm2 = ymm2[1,1,2,2]
>>>> +; AVX1-NEXT:    vblendps {{.*#+}} ymm0 = ymm2[0],ymm0[1],ymm2[2,3],ymm0[4],ymm2[5,6],ymm0[7]
>>>> +; AVX1-NEXT: vpermilps {{.*#+}} ymm1 = ymm1[0,0,3,3,4,4,7,7]
>>>> +; AVX1-NEXT:    vblendps {{.*#+}} ymm0 = ymm0[0,1],ymm1[2],ymm0[3,4],ymm1[5],ymm0[6,7]
>>>> +; AVX1-NEXT: vmovups %ymm0, 32(%rdi)
>>>> +; AVX1-NEXT: vmovups %ymm4, 64(%rdi)
>>>> +; AVX1-NEXT: vmovups %ymm3, (%rdi)
>>>> ; AVX1-NEXT: vzeroupper
>>>> ; AVX1-NEXT: retq
>>>> ;
>>>> ; AVX2-LABEL: interleave_24i32_in:
>>>> ; AVX2: # BB#0:
>>>> -; AVX2-NEXT: vmovdqu (%rsi), %ymm2
>>>> -; AVX2-NEXT: vmovdqu (%rdx), %ymm3
>>>> -; AVX2-NEXT: vmovdqu (%rcx), %ymm1
>>>> -; AVX2-NEXT: vpextrd $1, %xmm1, %eax
>>>> -; AVX2-NEXT: vpshufd {{.*#+}} xmm0 = xmm3[1,1,2,3]
>>>> -; AVX2-NEXT: vpinsrd $1, %eax, %xmm0, %xmm0
>>>> -; AVX2-NEXT: vpextrd $2, %xmm2, %eax
>>>> -; AVX2-NEXT: vpinsrd $2, %eax, %xmm0, %xmm0
>>>> -; AVX2-NEXT: vpextrd $2, %xmm3, %eax
>>>> -; AVX2-NEXT: vpinsrd $3, %eax, %xmm0, %xmm0
>>>> -; AVX2-NEXT: vmovd %xmm3, %eax
>>>> -; AVX2-NEXT: vpinsrd $1, %eax, %xmm2, %xmm4
>>>> -; AVX2-NEXT: vmovd %xmm1, %eax
>>>> -; AVX2-NEXT: vpinsrd $2, %eax, %xmm4, %xmm4
>>>> -; AVX2-NEXT: vpextrd $1, %xmm2, %eax
>>>> -; AVX2-NEXT: vpinsrd $3, %eax, %xmm4, %xmm4
>>>> -; AVX2-NEXT: vinserti128 $1, %xmm0, %ymm4, %ymm0
>>>> -; AVX2-NEXT: vextracti128 $1, %ymm2, %xmm4
>>>> -; AVX2-NEXT: vextracti128 $1, %ymm3, %xmm5
>>>> -; AVX2-NEXT: vmovd %xmm5, %eax
>>>> -; AVX2-NEXT: vpinsrd $1, %eax, %xmm4, %xmm6
>>>> -; AVX2-NEXT: vextracti128 $1, %ymm1, %xmm7
>>>> -; AVX2-NEXT: vmovd %xmm7, %eax
>>>> -; AVX2-NEXT: vpinsrd $2, %eax, %xmm6, %xmm6
>>>> -; AVX2-NEXT: vpextrd $1, %xmm4, %eax
>>>> -; AVX2-NEXT: vpinsrd $3, %eax, %xmm6, %xmm6
>>>> -; AVX2-NEXT: vpextrd $3, %xmm2, %eax
>>>> -; AVX2-NEXT: vpshufd {{.*#+}} xmm2 = xmm1[2,3,0,1]
>>>> -; AVX2-NEXT: vpinsrd $1, %eax, %xmm2, %xmm2
>>>> -; AVX2-NEXT: vpextrd $3, %xmm3, %eax
>>>> -; AVX2-NEXT: vpinsrd $2, %eax, %xmm2, %xmm2
>>>> -; AVX2-NEXT: vpextrd $3, %xmm1, %eax
>>>> -; AVX2-NEXT: vpinsrd $3, %eax, %xmm2, %xmm1
>>>> -; AVX2-NEXT: vinserti128 $1, %xmm6, %ymm1, %ymm1
>>>> -; AVX2-NEXT: vpextrd $3, %xmm4, %eax
>>>> -; AVX2-NEXT: vpshufd {{.*#+}} xmm2 = xmm7[2,3,0,1]
>>>> -; AVX2-NEXT: vpinsrd $1, %eax, %xmm2, %xmm2
>>>> -; AVX2-NEXT: vpextrd $3, %xmm5, %eax
>>>> -; AVX2-NEXT: vpinsrd $2, %eax, %xmm2, %xmm2
>>>> -; AVX2-NEXT: vpextrd $3, %xmm7, %eax
>>>> -; AVX2-NEXT: vpinsrd $3, %eax, %xmm2, %xmm2
>>>> -; AVX2-NEXT: vpextrd $1, %xmm7, %eax
>>>> -; AVX2-NEXT: vpshufd {{.*#+}} xmm3 = xmm5[1,1,2,3]
>>>> -; AVX2-NEXT: vpinsrd $1, %eax, %xmm3, %xmm3
>>>> -; AVX2-NEXT: vpextrd $2, %xmm4, %eax
>>>> -; AVX2-NEXT: vpinsrd $2, %eax, %xmm3, %xmm3
>>>> -; AVX2-NEXT: vpextrd $2, %xmm5, %eax
>>>> -; AVX2-NEXT: vpinsrd $3, %eax, %xmm3, %xmm3
>>>> -; AVX2-NEXT: vinserti128 $1, %xmm2, %ymm3, %ymm2
>>>> -; AVX2-NEXT: vmovdqu %ymm2, 64(%rdi)
>>>> -; AVX2-NEXT: vmovdqu %ymm1, 32(%rdi)
>>>> -; AVX2-NEXT: vmovdqu %ymm0, (%rdi)
>>>> +; AVX2-NEXT: vmovdqu (%rsi), %ymm0
>>>> +; AVX2-NEXT: vmovdqu (%rdx), %ymm1
>>>> +; AVX2-NEXT: vmovdqu (%rcx), %ymm2
>>>> +; AVX2-NEXT: vpshufd {{.*#+}} xmm3 = xmm1[1,0,2,2]
>>>> +; AVX2-NEXT: vperm2i128 {{.*#+}} ymm3 = ymm3[0,1,0,1]
>>>> +; AVX2-NEXT: vpermq {{.*#+}} ymm4 = ymm0[0,0,2,1]
>>>> +; AVX2-NEXT:    vpblendd {{.*#+}} ymm3 = ymm4[0],ymm3[1],ymm4[2,3],ymm3[4],ymm4[5,6],ymm3[7]
>>>> +; AVX2-NEXT: vpbroadcastq %xmm2, %ymm4
>>>> +; AVX2-NEXT:    vpblendd {{.*#+}} ymm3 = ymm3[0,1],ymm4[2],ymm3[3,4],ymm4[5],ymm3[6,7]
>>>> +; AVX2-NEXT: vpermq {{.*#+}} ymm4 = ymm2[2,1,3,3]
>>>> +; AVX2-NEXT: vpshufd {{.*#+}} ymm5 = ymm1[1,2,3,3,5,6,7,7]
>>>> +; AVX2-NEXT: vpermq {{.*#+}} ymm5 = ymm5[2,2,2,3]
>>>> +; AVX2-NEXT:    vpblendd {{.*#+}} ymm4 = ymm5[0],ymm4[1],ymm5[2,3],ymm4[4],ymm5[5,6],ymm4[7]
>>>> +; AVX2-NEXT: vpbroadcastq 24(%rsi), %ymm5
>>>> +; AVX2-NEXT:    vpblendd {{.*#+}} ymm4 = ymm4[0,1],ymm5[2],ymm4[3,4],ymm5[5],ymm4[6,7]
>>>> +; AVX2-NEXT: vpermq {{.*#+}} ymm0 = ymm0[1,1,2,2]
>>>> +; AVX2-NEXT: vpermq {{.*#+}} ymm2 = ymm2[1,1,2,2]
>>>> +; AVX2-NEXT:    vpblendd {{.*#+}} ymm0 = ymm2[0],ymm0[1],ymm2[2,3],ymm0[4],ymm2[5,6],ymm0[7]
>>>> +; AVX2-NEXT: vpshufd {{.*#+}} ymm1 = ymm1[0,0,3,3,4,4,7,7]
>>>> +; AVX2-NEXT:    vpblendd {{.*#+}} ymm0 = ymm0[0,1],ymm1[2],ymm0[3,4],ymm1[5],ymm0[6,7]
>>>> +; AVX2-NEXT: vmovdqu %ymm0, 32(%rdi)
>>>> +; AVX2-NEXT: vmovdqu %ymm4, 64(%rdi)
>>>> +; AVX2-NEXT: vmovdqu %ymm3, (%rdi)
>>>> ; AVX2-NEXT: vzeroupper
>>>> ; AVX2-NEXT: retq
>>>> %s1 = load <8 x i32>, <8 x i32>* %q1, align 4
>>>>
>>>> Modified: llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll
>>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll?rev=283480&r1=283479&r2=283480&view=diff
>>>> ==============================================================================
>>>> --- llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll (original)
>>>> +++ llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll Thu Oct  6 13:58:24 2016
>>>> @@ -54,25 +54,22 @@ define <4 x i32> @trunc_add_v4i64_4i32(<
>>>> define <8 x i16> @trunc_add_v8i64_8i16(<8 x i64> %a0, <8 x i64> %a1) nounwind {
>>>> ; SSE-LABEL: trunc_add_v8i64_8i16:
>>>> ; SSE: # BB#0:
>>>> -; SSE-NEXT: paddq %xmm6, %xmm2
>>>> ; SSE-NEXT: paddq %xmm4, %xmm0
>>>> -; SSE-NEXT: paddq %xmm7, %xmm3
>>>> ; SSE-NEXT: paddq %xmm5, %xmm1
>>>> -; SSE-NEXT: pextrw $4, %xmm1, %eax
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm1 = xmm1[0],xmm3[0],xmm1[1],xmm3[1],xmm1[2],xmm3[2],xmm1[3],xmm3[3]
>>>> -; SSE-NEXT: pextrw $4, %xmm0, %ecx
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1],xmm0[2],xmm2[2],xmm0[3],xmm2[3]
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3]
>>>> -; SSE-NEXT: pextrw $4, %xmm3, %edx
>>>> -; SSE-NEXT: movd %edx, %xmm1
>>>> -; SSE-NEXT: movd %eax, %xmm3
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm3 = xmm3[0],xmm1[0],xmm3[1],xmm1[1],xmm3[2],xmm1[2],xmm3[3],xmm1[3]
>>>> -; SSE-NEXT: pextrw $4, %xmm2, %eax
>>>> -; SSE-NEXT: movd %eax, %xmm1
>>>> -; SSE-NEXT: movd %ecx, %xmm2
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm2 = xmm2[0],xmm1[0],xmm2[1],xmm1[1],xmm2[2],xmm1[2],xmm2[3],xmm1[3]
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm2 = xmm2[0],xmm3[0],xmm2[1],xmm3[1],xmm2[2],xmm3[2],xmm2[3],xmm3[3]
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1],xmm0[2],xmm2[2],xmm0[3],xmm2[3]
>>>> +; SSE-NEXT: paddq %xmm6, %xmm2
>>>> +; SSE-NEXT: paddq %xmm7, %xmm3
>>>> +; SSE-NEXT: pshufd {{.*#+}} xmm3 = xmm3[0,2,2,3]
>>>> +; SSE-NEXT: pshuflw {{.*#+}} xmm3 = xmm3[0,1,0,2,4,5,6,7]
>>>> +; SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm2[0,2,2,3]
>>>> +; SSE-NEXT: pshuflw {{.*#+}} xmm2 = xmm2[0,1,0,2,4,5,6,7]
>>>> +; SSE-NEXT:    punpckldq {{.*#+}} xmm2 = xmm2[0],xmm3[0],xmm2[1],xmm3[1]
>>>> +; SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
>>>> +; SSE-NEXT: pshuflw {{.*#+}} xmm1 = xmm1[0,2,2,3,4,5,6,7]
>>>> +; SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
>>>> +; SSE-NEXT: pshuflw {{.*#+}} xmm0 = xmm0[0,2,2,3,4,5,6,7]
>>>> +; SSE-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
>>>> +; SSE-NEXT: movsd {{.*#+}} xmm2 = xmm0[0],xmm2[1]
>>>> +; SSE-NEXT: movapd %xmm2, %xmm0
>>>> ; SSE-NEXT: retq
>>>> ;
>>>> ; AVX1-LABEL: trunc_add_v8i64_8i16:
>>>> @@ -445,29 +442,24 @@ define <4 x i32> @trunc_add_const_v4i64_
>>>> define <8 x i16> @trunc_add_const_v16i64_v16i16(<8 x i64> %a0) nounwind {
>>>> ; SSE-LABEL: trunc_add_const_v16i64_v16i16:
>>>> ; SSE: # BB#0:
>>>> -; SSE-NEXT: movdqa %xmm0, %xmm4
>>>> ; SSE-NEXT: movl $1, %eax
>>>> -; SSE-NEXT: movd %rax, %xmm0
>>>> -; SSE-NEXT:    pslldq {{.*#+}} xmm0 = zero,zero,zero,zero,zero,zero,zero,zero,xmm0[0,1,2,3,4,5,6,7]
>>>> -; SSE-NEXT: paddq %xmm4, %xmm0
>>>> +; SSE-NEXT: movd %rax, %xmm4
>>>> +; SSE-NEXT:    pslldq {{.*#+}} xmm4 = zero,zero,zero,zero,zero,zero,zero,zero,xmm4[0,1,2,3,4,5,6,7]
>>>> +; SSE-NEXT: paddq %xmm0, %xmm4
>>>> +; SSE-NEXT: paddq {{.*}}(%rip), %xmm1
>>>> ; SSE-NEXT: paddq {{.*}}(%rip), %xmm2
>>>> ; SSE-NEXT: paddq {{.*}}(%rip), %xmm3
>>>> -; SSE-NEXT: paddq {{.*}}(%rip), %xmm1
>>>> -; SSE-NEXT: pextrw $4, %xmm1, %eax
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm1 = xmm1[0],xmm3[0],xmm1[1],xmm3[1],xmm1[2],xmm3[2],xmm1[3],xmm3[3]
>>>> -; SSE-NEXT: pextrw $4, %xmm0, %ecx
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1],xmm0[2],xmm2[2],xmm0[3],xmm2[3]
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3]
>>>> -; SSE-NEXT: pextrw $4, %xmm3, %edx
>>>> -; SSE-NEXT: movd %edx, %xmm1
>>>> -; SSE-NEXT: movd %eax, %xmm3
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm3 = xmm3[0],xmm1[0],xmm3[1],xmm1[1],xmm3[2],xmm1[2],xmm3[3],xmm1[3]
>>>> -; SSE-NEXT: movd %ecx, %xmm1
>>>> -; SSE-NEXT: pextrw $4, %xmm2, %eax
>>>> -; SSE-NEXT: movd %eax, %xmm2
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm1 = xmm1[0],xmm2[0],xmm1[1],xmm2[1],xmm1[2],xmm2[2],xmm1[3],xmm2[3]
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm1 = xmm1[0],xmm3[0],xmm1[1],xmm3[1],xmm1[2],xmm3[2],xmm1[3],xmm3[3]
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3]
>>>> +; SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm3[0,2,2,3]
>>>> +; SSE-NEXT: pshuflw {{.*#+}} xmm3 = xmm0[0,1,0,2,4,5,6,7]
>>>> +; SSE-NEXT: pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]
>>>> +; SSE-NEXT: pshuflw {{.*#+}} xmm0 = xmm0[0,1,0,2,4,5,6,7]
>>>> +; SSE-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1]
>>>> +; SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm4[0,2,2,3]
>>>> +; SSE-NEXT: pshuflw {{.*#+}} xmm2 = xmm2[0,2,2,3,4,5,6,7]
>>>> +; SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
>>>> +; SSE-NEXT: pshuflw {{.*#+}} xmm1 = xmm1[0,2,2,3,4,5,6,7]
>>>> +; SSE-NEXT:    punpckldq {{.*#+}} xmm2 = xmm2[0],xmm1[0],xmm2[1],xmm1[1]
>>>> +; SSE-NEXT: movsd {{.*#+}} xmm0 = xmm2[0],xmm0[1]
>>>> ; SSE-NEXT: retq
>>>> ;
>>>> ; AVX1-LABEL: trunc_add_const_v16i64_v16i16:
>>>> @@ -834,25 +826,22 @@ define <4 x i32> @trunc_sub_v4i64_4i32(<
>>>> define <8 x i16> @trunc_sub_v8i64_8i16(<8 x i64> %a0, <8 x i64> %a1) nounwind {
>>>> ; SSE-LABEL: trunc_sub_v8i64_8i16:
>>>> ; SSE: # BB#0:
>>>> -; SSE-NEXT: psubq %xmm6, %xmm2
>>>> ; SSE-NEXT: psubq %xmm4, %xmm0
>>>> -; SSE-NEXT: psubq %xmm7, %xmm3
>>>> ; SSE-NEXT: psubq %xmm5, %xmm1
>>>> -; SSE-NEXT: pextrw $4, %xmm1, %eax
>>>> -; SSE-NEXT:    punpcklwd {{.*#+}} xmm1 = xmm1[0],xmm3[0],xmm1[1],xmm3[1],xmm1[2],xmm3[2],xmm1[3],xmm3[3]
>>>> -;
>>>
>>> ...
>>>
>>> [Message clipped]
>>
>>
>>
>