[llvm] r343913 - [SelectionDAG] Add SimplifyDemandedBits to SimplifyDemandedVectorElts simplification

Jan Vesely via llvm-commits llvm-commits at lists.llvm.org
Sat Oct 6 22:20:37 PDT 2018


On Sat, 2018-10-06 at 10:20 +0000, Simon Pilgrim via llvm-commits
wrote:
> Author: rksimon
> Date: Sat Oct  6 03:20:04 2018
> New Revision: 343913
> 
> URL: http://llvm.org/viewvc/llvm-project?rev=343913&view=rev
> Log:
> [SelectionDAG] Add SimplifyDemandedBits to SimplifyDemandedVectorElts simplification
> 
> This patch enables SimplifyDemandedBits to call SimplifyDemandedVectorElts in cases where the demanded bits mask covers entire elements of a bitcasted source vector.
> 
> There are a couple of cases here where simplification at a deeper level (such as through bitcasts) prevents further simplification - CommitTargetLoweringOpt only adds the immediate uses/users back to the worklist, whereas we might want to combine the original caller again to see what else it can simplify.
> 
> As well as that, I had to disable handling of bool vectors until SimplifyDemandedVectorElts better supports some of their opcodes (SETCC, shifts, etc.).
> 
> Fixes PR39178
> 
> Differential Revision: https://reviews.llvm.org/D52935
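
[For reference, the element-mask derivation this patch adds can be sketched standalone, as below. This is a simplified illustration in plain C++ with uint64_t standing in for APInt (so masks up to 64 bits only), not the exact LLVM implementation; the function name mirrors the GetDemandedSubMask lambda in the hunk further down.]

#include <cstdint>
#include <optional>

// Given the demanded-bits mask of a scalar that was bitcast from a
// little-endian vector of Scale elements of EltBits bits each, return the
// per-element demanded mask, or nullopt if some element is only partially
// demanded (in which case the vector-elts path cannot be used).
std::optional<uint64_t> getDemandedSubMask(uint64_t DemandedBits,
                                           unsigned Scale, unsigned EltBits) {
  uint64_t DemandedElts = 0;
  for (unsigned i = 0; i != Scale; ++i) {
    uint64_t EltMask =
        (EltBits == 64 ? ~0ULL : (1ULL << EltBits) - 1) << (i * EltBits);
    uint64_t Sub = DemandedBits & EltMask;
    if (Sub == EltMask)
      DemandedElts |= 1ULL << i; // whole element demanded
    else if (Sub != 0)
      return std::nullopt;       // partially demanded element: give up
  }
  return DemandedElts;
}

// e.g. an i64 bitcast from v2i32 with only the high 32 bits demanded yields
// DemandedElts == 0b10, so only element 1 of the source vector needs to be
// kept and simplified.

[The in-tree version does the same thing with APInt::extractBits / isAllOnesValue, as in the TargetLowering.cpp hunk below.]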

This patch regresses (incorrect results) ~56 OpenCL tests on AMD GPUs
(Carrizo, Topaz, Turks). The problems are all related to vectors of
chars (char2, char4); only shuffle and shuffle2 fail with other vector
sizes, or with short (on Turks only).

I've attached the git bisect log, as well as good/bad .ll and .asm files
for the clz (count leading zeros) test on a Carrizo/Topaz machine.
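
[For readers without the attachments: a hypothetical reduction of the char4 clz pattern, in plain C++ rather than OpenCL, is sketched below. This is only a guess at the shape of the failing tests; the names and values are illustrative and not taken from the actual suite. The point is that element-wise ops on sub-dword vectors like this get packed into a scalar during legalization, which is exactly where the new bitcast path applies.]

#include <cstdint>
#include <cstdio>

// Count leading zeros of one byte (8 for a zero byte).
static uint8_t clz8(uint8_t x) {
  uint8_t n = 0;
  for (uint8_t m = 0x80; m && !(x & m); m >>= 1)
    ++n;
  return n;
}

// Element-wise clz over a char4 packed into a u32 (little-endian lanes).
uint32_t clz_char4(uint32_t packed) {
  uint32_t r = 0;
  for (int i = 0; i != 4; ++i) {
    uint8_t elt = (packed >> (8 * i)) & 0xff;  // extract byte lane i
    r |= uint32_t(clz8(elt)) << (8 * i);       // store result in lane i
  }
  return r;
}

int main() {
  // Lanes: 0xff -> 0, 0x01 -> 7, 0x10 -> 3, 0x00 -> 8.
  std::printf("%08x\n", clz_char4(0x001001ffu)); // expect 08030700
}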

thanks,
Jan

> 
> Modified:
>     llvm/trunk/lib/CodeGen/SelectionDAG/TargetLowering.cpp   (contents, props changed)
>     llvm/trunk/test/CodeGen/X86/avx2-intrinsics-fast-isel.ll
>     llvm/trunk/test/CodeGen/X86/avx512-intrinsics-fast-isel.ll
>     llvm/trunk/test/CodeGen/X86/avx512-intrinsics-upgrade.ll
>     llvm/trunk/test/CodeGen/X86/avx512vl-intrinsics-upgrade.ll
>     llvm/trunk/test/CodeGen/X86/combine-pmuldq.ll
>     llvm/trunk/test/CodeGen/X86/combine-shl.ll
>     llvm/trunk/test/CodeGen/X86/mulvi32.ll
>     llvm/trunk/test/CodeGen/X86/pmul.ll
>     llvm/trunk/test/CodeGen/X86/pr35918.ll
>     llvm/trunk/test/CodeGen/X86/shrink_vmul.ll
>     llvm/trunk/test/CodeGen/X86/sse2-intrinsics-fast-isel.ll
>     llvm/trunk/test/CodeGen/X86/sse41-intrinsics-fast-isel.ll
>     llvm/trunk/test/CodeGen/X86/urem-seteq-vec-nonsplat.ll
>     llvm/trunk/test/CodeGen/X86/vector-idiv-v2i32.ll
>     llvm/trunk/test/CodeGen/X86/vector-mul.ll
>     llvm/trunk/test/CodeGen/X86/vector-reduce-mul.ll
>     llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll
> 
> Modified: llvm/trunk/lib/CodeGen/SelectionDAG/TargetLowering.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/SelectionDAG/TargetLowering.cpp?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/lib/CodeGen/SelectionDAG/TargetLowering.cpp (original)
> +++ llvm/trunk/lib/CodeGen/SelectionDAG/TargetLowering.cpp Sat Oct  6 03:20:04 2018
> @@ -1179,29 +1179,64 @@ bool TargetLowering::SimplifyDemandedBit
>      Known.Zero |= ~InMask;
>      break;
>    }
> -  case ISD::BITCAST:
> +  case ISD::BITCAST: {
> +    SDValue Src = Op.getOperand(0);
> +    EVT SrcVT = Src.getValueType();
> +    unsigned NumSrcEltBits = SrcVT.getScalarSizeInBits();
> +
>      // If this is an FP->Int bitcast and if the sign bit is the only
>      // thing demanded, turn this into a FGETSIGN.
> -    if (!TLO.LegalOperations() && !VT.isVector() &&
> -        !Op.getOperand(0).getValueType().isVector() &&
> +    if (!TLO.LegalOperations() && !VT.isVector() && !SrcVT.isVector() &&
>          NewMask == APInt::getSignMask(Op.getValueSizeInBits()) &&
> -        Op.getOperand(0).getValueType().isFloatingPoint()) {
> +        SrcVT.isFloatingPoint()) {
>        bool OpVTLegal = isOperationLegalOrCustom(ISD::FGETSIGN, VT);
> -      bool i32Legal  = isOperationLegalOrCustom(ISD::FGETSIGN, MVT::i32);
> -      if ((OpVTLegal || i32Legal) && VT.isSimple() &&
> -           Op.getOperand(0).getValueType() != MVT::f16 &&
> -           Op.getOperand(0).getValueType() != MVT::f128) {
> +      bool i32Legal = isOperationLegalOrCustom(ISD::FGETSIGN, MVT::i32);
> +      if ((OpVTLegal || i32Legal) && VT.isSimple() && SrcVT != MVT::f16 &&
> +          SrcVT != MVT::f128) {
>          // Cannot eliminate/lower SHL for f128 yet.
>          EVT Ty = OpVTLegal ? VT : MVT::i32;
>          // Make a FGETSIGN + SHL to move the sign bit into the appropriate
>          // place.  We expect the SHL to be eliminated by other optimizations.
> -        SDValue Sign = TLO.DAG.getNode(ISD::FGETSIGN, dl, Ty, Op.getOperand(0));
> +        SDValue Sign = TLO.DAG.getNode(ISD::FGETSIGN, dl, Ty, Src);
>          unsigned OpVTSizeInBits = Op.getValueSizeInBits();
>          if (!OpVTLegal && OpVTSizeInBits > 32)
>            Sign = TLO.DAG.getNode(ISD::ZERO_EXTEND, dl, VT, Sign);
>          unsigned ShVal = Op.getValueSizeInBits() - 1;
>          SDValue ShAmt = TLO.DAG.getConstant(ShVal, dl, VT);
> -        return TLO.CombineTo(Op, TLO.DAG.getNode(ISD::SHL, dl, VT, Sign, ShAmt));
> +        return TLO.CombineTo(Op,
> +                             TLO.DAG.getNode(ISD::SHL, dl, VT, Sign, ShAmt));
> +      }
> +    }
> +    // If bitcast from a vector and the mask covers entire elements, see if we
> +    // can use SimplifyDemandedVectorElts.
> +    // TODO - bigendian once we have test coverage.
> +    // TODO - bool vectors once SimplifyDemandedVectorElts has SETCC support.
> +    if (SrcVT.isVector() && NumSrcEltBits > 1 &&
> +        (BitWidth % NumSrcEltBits) == 0 &&
> +        TLO.DAG.getDataLayout().isLittleEndian()) {
> +      unsigned Scale = BitWidth / NumSrcEltBits;
> +      auto GetDemandedSubMask = [&](APInt &DemandedSubElts) -> bool {
> +        DemandedSubElts = APInt::getNullValue(Scale);
> +        for (unsigned i = 0; i != Scale; ++i) {
> +          unsigned Offset = i * NumSrcEltBits;
> +          APInt Sub = DemandedMask.extractBits(NumSrcEltBits, Offset);
> +          if (Sub.isAllOnesValue())
> +            DemandedSubElts.setBit(i);
> +          else if (!Sub.isNullValue())
> +            return false;
> +        }
> +        return true;
> +      };
> +
> +      APInt DemandedSubElts;
> +      if (GetDemandedSubMask(DemandedSubElts)) {
> +        unsigned NumSrcElts = SrcVT.getVectorNumElements();
> +        APInt DemandedElts = APInt::getSplat(NumSrcElts, DemandedSubElts);
> +
> +        APInt KnownUndef, KnownZero;
> +        if (SimplifyDemandedVectorElts(Src, DemandedElts, KnownUndef, KnownZero,
> +                                       TLO, Depth + 1))
> +          return true;
>        }
>      }
>      // If this is a bitcast, let computeKnownBits handle it.  Only do this on a
> @@ -1211,6 +1246,7 @@ bool TargetLowering::SimplifyDemandedBit
>        return false;
>      }
>      break;
> +  }
>    case ISD::ADD:
>    case ISD::MUL:
>    case ISD::SUB: {
> 
> Propchange: llvm/trunk/lib/CodeGen/SelectionDAG/TargetLowering.cpp
> ------------------------------------------------------------------------------
> --- cvs2svn:cvs-rev (original)
> +++ cvs2svn:cvs-rev (removed)
> @@ -1 +0,0 @@
> -1.124
> 
> Propchange: llvm/trunk/lib/CodeGen/SelectionDAG/TargetLowering.cpp
> ------------------------------------------------------------------------------
> --- svn:keywords (original)
> +++ svn:keywords (removed)
> @@ -1 +0,0 @@
> -Author Date Id Revision
> 
> Modified: llvm/trunk/test/CodeGen/X86/avx2-intrinsics-fast-isel.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx2-intrinsics-fast-isel.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/avx2-intrinsics-fast-isel.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/avx2-intrinsics-fast-isel.ll Sat Oct  6 03:20:04 2018
> @@ -1826,12 +1826,6 @@ declare <16 x i16> @llvm.x86.avx2.mpsadb
>  define <4 x i64> @test_mm256_mul_epi32(<4 x i64> %a0, <4 x i64> %a1) {
>  ; CHECK-LABEL: test_mm256_mul_epi32:
>  ; CHECK:       # %bb.0:
> -; CHECK-NEXT:    vpsllq $32, %ymm0, %ymm2
> -; CHECK-NEXT:    vpsrad $31, %ymm2, %ymm2
> -; CHECK-NEXT:    vpblendd {{.*#+}} ymm0 = ymm0[0],ymm2[1],ymm0[2],ymm2[3],ymm0[4],ymm2[5],ymm0[6],ymm2[7]
> -; CHECK-NEXT:    vpsllq $32, %ymm1, %ymm2
> -; CHECK-NEXT:    vpsrad $31, %ymm2, %ymm2
> -; CHECK-NEXT:    vpblendd {{.*#+}} ymm1 = ymm1[0],ymm2[1],ymm1[2],ymm2[3],ymm1[4],ymm2[5],ymm1[6],ymm2[7]
>  ; CHECK-NEXT:    vpmuldq %ymm1, %ymm0, %ymm0
>  ; CHECK-NEXT:    ret{{[l|q]}}
>    %A = shl <4 x i64> %a0, <i64 32, i64 32, i64 32, i64 32>
> @@ -1846,9 +1840,6 @@ declare <4 x i64> @llvm.x86.avx2.pmul.dq
>  define <4 x i64> @test_mm256_mul_epu32(<4 x i64> %a0, <4 x i64> %a1) {
>  ; CHECK-LABEL: test_mm256_mul_epu32:
>  ; CHECK:       # %bb.0:
> -; CHECK-NEXT:    vpxor %xmm2, %xmm2, %xmm2
> -; CHECK-NEXT:    vpblendd {{.*#+}} ymm0 = ymm0[0],ymm2[1],ymm0[2],ymm2[3],ymm0[4],ymm2[5],ymm0[6],ymm2[7]
> -; CHECK-NEXT:    vpblendd {{.*#+}} ymm1 = ymm1[0],ymm2[1],ymm1[2],ymm2[3],ymm1[4],ymm2[5],ymm1[6],ymm2[7]
>  ; CHECK-NEXT:    vpmuludq %ymm1, %ymm0, %ymm0
>  ; CHECK-NEXT:    ret{{[l|q]}}
>    %A = and <4 x i64> %a0, <i64 4294967295, i64 4294967295, i64 4294967295, i64 4294967295>
> 
> Modified: llvm/trunk/test/CodeGen/X86/avx512-intrinsics-fast-isel.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-intrinsics-fast-isel.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/avx512-intrinsics-fast-isel.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/avx512-intrinsics-fast-isel.ll Sat Oct  6 03:20:04 2018
> @@ -1718,11 +1718,6 @@ entry:
>  define <8 x i64> @test_mm512_mul_epu32(<8 x i64> %__A, <8 x i64> %__B) nounwind {
>  ; CHECK-LABEL: test_mm512_mul_epu32:
>  ; CHECK:       # %bb.0:
> -; CHECK-NEXT:    movw $-21846, %ax # imm = 0xAAAA
> -; CHECK-NEXT:    kmovw %eax, %k0
> -; CHECK-NEXT:    knotw %k0, %k1
> -; CHECK-NEXT:    vmovdqa32 %zmm0, %zmm0 {%k1} {z}
> -; CHECK-NEXT:    vmovdqa32 %zmm1, %zmm1 {%k1} {z}
>  ; CHECK-NEXT:    vpmuludq %zmm0, %zmm1, %zmm0
>  ; CHECK-NEXT:    ret{{[l|q]}}
>    %tmp = and <8 x i64> %__A, <i64 4294967295, i64 4294967295, i64 4294967295, i64 4294967295, i64 4294967295, i64 4294967295, i64 4294967295, i64 4294967295>
> 
> Modified: llvm/trunk/test/CodeGen/X86/avx512-intrinsics-upgrade.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-intrinsics-upgrade.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/avx512-intrinsics-upgrade.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/avx512-intrinsics-upgrade.ll Sat Oct  6 03:20:04 2018
> @@ -4519,9 +4519,7 @@ define <8 x i64> @test_mask_mul_epi32_rm
>  ; X86-LABEL: test_mask_mul_epi32_rmb:
>  ; X86:       ## %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax ## encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm1 ## EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x08]
> -; X86-NEXT:    ## xmm1 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm1, %zmm1 ## encoding: [0x62,0xf2,0xfd,0x48,0x59,0xc9]
> +; X86-NEXT:    vpbroadcastd (%eax), %zmm1 ## encoding: [0x62,0xf2,0x7d,0x48,0x58,0x08]
>  ; X86-NEXT:    vpmuldq %zmm1, %zmm0, %zmm0 ## encoding: [0x62,0xf2,0xfd,0x48,0x28,0xc1]
>  ; X86-NEXT:    retl ## encoding: [0xc3]
>  ;
> @@ -4541,9 +4539,7 @@ define <8 x i64> @test_mask_mul_epi32_rm
>  ; X86-LABEL: test_mask_mul_epi32_rmbk:
>  ; X86:       ## %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax ## encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm2 ## EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x10]
> -; X86-NEXT:    ## xmm2 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm2, %zmm2 ## encoding: [0x62,0xf2,0xfd,0x48,0x59,0xd2]
> +; X86-NEXT:    vpbroadcastd (%eax), %zmm2 ## encoding: [0x62,0xf2,0x7d,0x48,0x58,0x10]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax ## encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 ## encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuldq %zmm2, %zmm0, %zmm1 {%k1} ## encoding: [0x62,0xf2,0xfd,0x49,0x28,0xca]
> @@ -4568,9 +4564,7 @@ define <8 x i64> @test_mask_mul_epi32_rm
>  ; X86-LABEL: test_mask_mul_epi32_rmbkz:
>  ; X86:       ## %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax ## encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm1 ## EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x08]
> -; X86-NEXT:    ## xmm1 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm1, %zmm1 ## encoding: [0x62,0xf2,0xfd,0x48,0x59,0xc9]
> +; X86-NEXT:    vpbroadcastd (%eax), %zmm1 ## encoding: [0x62,0xf2,0x7d,0x48,0x58,0x08]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax ## encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 ## encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuldq %zmm1, %zmm0, %zmm0 {%k1} {z} ## encoding: [0x62,0xf2,0xfd,0xc9,0x28,0xc1]
> @@ -4696,9 +4690,7 @@ define <8 x i64> @test_mask_mul_epu32_rm
>  ; X86-LABEL: test_mask_mul_epu32_rmb:
>  ; X86:       ## %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax ## encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm1 ## EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x08]
> -; X86-NEXT:    ## xmm1 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm1, %zmm1 ## encoding: [0x62,0xf2,0xfd,0x48,0x59,0xc9]
> +; X86-NEXT:    vpbroadcastd (%eax), %zmm1 ## encoding: [0x62,0xf2,0x7d,0x48,0x58,0x08]
>  ; X86-NEXT:    vpmuludq %zmm1, %zmm0, %zmm0 ## encoding: [0x62,0xf1,0xfd,0x48,0xf4,0xc1]
>  ; X86-NEXT:    retl ## encoding: [0xc3]
>  ;
> @@ -4718,9 +4710,7 @@ define <8 x i64> @test_mask_mul_epu32_rm
>  ; X86-LABEL: test_mask_mul_epu32_rmbk:
>  ; X86:       ## %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax ## encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm2 ## EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x10]
> -; X86-NEXT:    ## xmm2 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm2, %zmm2 ## encoding: [0x62,0xf2,0xfd,0x48,0x59,0xd2]
> +; X86-NEXT:    vpbroadcastd (%eax), %zmm2 ## encoding: [0x62,0xf2,0x7d,0x48,0x58,0x10]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax ## encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 ## encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuludq %zmm2, %zmm0, %zmm1 {%k1} ## encoding: [0x62,0xf1,0xfd,0x49,0xf4,0xca]
> @@ -4745,9 +4735,7 @@ define <8 x i64> @test_mask_mul_epu32_rm
>  ; X86-LABEL: test_mask_mul_epu32_rmbkz:
>  ; X86:       ## %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax ## encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm1 ## EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x08]
> -; X86-NEXT:    ## xmm1 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm1, %zmm1 ## encoding: [0x62,0xf2,0xfd,0x48,0x59,0xc9]
> +; X86-NEXT:    vpbroadcastd (%eax), %zmm1 ## encoding: [0x62,0xf2,0x7d,0x48,0x58,0x08]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax ## encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 ## encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuludq %zmm1, %zmm0, %zmm0 {%k1} {z} ## encoding: [0x62,0xf1,0xfd,0xc9,0xf4,0xc1]
> @@ -6160,9 +6148,7 @@ define <8 x i64> @test_mul_epi32_rmb(<16
>  ; X86-LABEL: test_mul_epi32_rmb:
>  ; X86:       ## %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax ## encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm1 ## EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x08]
> -; X86-NEXT:    ## xmm1 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm1, %zmm1 ## encoding: [0x62,0xf2,0xfd,0x48,0x59,0xc9]
> +; X86-NEXT:    vpbroadcastd (%eax), %zmm1 ## encoding: [0x62,0xf2,0x7d,0x48,0x58,0x08]
>  ; X86-NEXT:    vpmuldq %zmm1, %zmm0, %zmm0 ## encoding: [0x62,0xf2,0xfd,0x48,0x28,0xc1]
>  ; X86-NEXT:    retl ## encoding: [0xc3]
>  ;
> @@ -6182,9 +6168,7 @@ define <8 x i64> @test_mul_epi32_rmbk(<1
>  ; X86-LABEL: test_mul_epi32_rmbk:
>  ; X86:       ## %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax ## encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm2 ## EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x10]
> -; X86-NEXT:    ## xmm2 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm2, %zmm2 ## encoding: [0x62,0xf2,0xfd,0x48,0x59,0xd2]
> +; X86-NEXT:    vpbroadcastd (%eax), %zmm2 ## encoding: [0x62,0xf2,0x7d,0x48,0x58,0x10]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax ## encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 ## encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuldq %zmm2, %zmm0, %zmm1 {%k1} ## encoding: [0x62,0xf2,0xfd,0x49,0x28,0xca]
> @@ -6211,9 +6195,7 @@ define <8 x i64> @test_mul_epi32_rmbkz(<
>  ; X86-LABEL: test_mul_epi32_rmbkz:
>  ; X86:       ## %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax ## encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm1 ## EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x08]
> -; X86-NEXT:    ## xmm1 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm1, %zmm1 ## encoding: [0x62,0xf2,0xfd,0x48,0x59,0xc9]
> +; X86-NEXT:    vpbroadcastd (%eax), %zmm1 ## encoding: [0x62,0xf2,0x7d,0x48,0x58,0x08]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax ## encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 ## encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuldq %zmm1, %zmm0, %zmm0 {%k1} {z} ## encoding: [0x62,0xf2,0xfd,0xc9,0x28,0xc1]
> @@ -6349,9 +6331,7 @@ define <8 x i64> @test_mul_epu32_rmb(<16
>  ; X86-LABEL: test_mul_epu32_rmb:
>  ; X86:       ## %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax ## encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm1 ## EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x08]
> -; X86-NEXT:    ## xmm1 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm1, %zmm1 ## encoding: [0x62,0xf2,0xfd,0x48,0x59,0xc9]
> +; X86-NEXT:    vpbroadcastd (%eax), %zmm1 ## encoding: [0x62,0xf2,0x7d,0x48,0x58,0x08]
>  ; X86-NEXT:    vpmuludq %zmm1, %zmm0, %zmm0 ## encoding: [0x62,0xf1,0xfd,0x48,0xf4,0xc1]
>  ; X86-NEXT:    retl ## encoding: [0xc3]
>  ;
> @@ -6371,9 +6351,7 @@ define <8 x i64> @test_mul_epu32_rmbk(<1
>  ; X86-LABEL: test_mul_epu32_rmbk:
>  ; X86:       ## %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax ## encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm2 ## EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x10]
> -; X86-NEXT:    ## xmm2 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm2, %zmm2 ## encoding: [0x62,0xf2,0xfd,0x48,0x59,0xd2]
> +; X86-NEXT:    vpbroadcastd (%eax), %zmm2 ## encoding: [0x62,0xf2,0x7d,0x48,0x58,0x10]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax ## encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 ## encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuludq %zmm2, %zmm0, %zmm1 {%k1} ## encoding: [0x62,0xf1,0xfd,0x49,0xf4,0xca]
> @@ -6400,9 +6378,7 @@ define <8 x i64> @test_mul_epu32_rmbkz(<
>  ; X86-LABEL: test_mul_epu32_rmbkz:
>  ; X86:       ## %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax ## encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm1 ## EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x08]
> -; X86-NEXT:    ## xmm1 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm1, %zmm1 ## encoding: [0x62,0xf2,0xfd,0x48,0x59,0xc9]
> +; X86-NEXT:    vpbroadcastd (%eax), %zmm1 ## encoding: [0x62,0xf2,0x7d,0x48,0x58,0x08]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax ## encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 ## encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuludq %zmm1, %zmm0, %zmm0 {%k1} {z} ## encoding: [0x62,0xf1,0xfd,0xc9,0xf4,0xc1]
> 
> Modified: llvm/trunk/test/CodeGen/X86/avx512vl-intrinsics-upgrade.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512vl-intrinsics-upgrade.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/avx512vl-intrinsics-upgrade.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/avx512vl-intrinsics-upgrade.ll Sat Oct  6 03:20:04 2018
> @@ -9655,7 +9655,7 @@ define < 2 x i64> @test_mask_mul_epi32_r
>  ; X86-LABEL: test_mask_mul_epi32_rmb_128:
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax # encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vpbroadcastq (%eax), %xmm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x79,0x59,0x08]
> +; X86-NEXT:    vpbroadcastd (%eax), %xmm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x79,0x58,0x08]
>  ; X86-NEXT:    vpmuldq %xmm1, %xmm0, %xmm0 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x79,0x28,0xc1]
>  ; X86-NEXT:    retl # encoding: [0xc3]
>  ;
> @@ -9675,7 +9675,7 @@ define < 2 x i64> @test_mask_mul_epi32_r
>  ; X86-LABEL: test_mask_mul_epi32_rmbk_128:
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax # encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vpbroadcastq (%eax), %xmm2 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x79,0x59,0x10]
> +; X86-NEXT:    vpbroadcastd (%eax), %xmm2 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x79,0x58,0x10]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax # encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 # encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuldq %xmm2, %xmm0, %xmm1 {%k1} # encoding: [0x62,0xf2,0xfd,0x09,0x28,0xca]
> @@ -9700,7 +9700,7 @@ define < 2 x i64> @test_mask_mul_epi32_r
>  ; X86-LABEL: test_mask_mul_epi32_rmbkz_128:
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax # encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vpbroadcastq (%eax), %xmm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x79,0x59,0x08]
> +; X86-NEXT:    vpbroadcastd (%eax), %xmm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x79,0x58,0x08]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax # encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 # encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuldq %xmm1, %xmm0, %xmm0 {%k1} {z} # encoding: [0x62,0xf2,0xfd,0x89,0x28,0xc1]
> @@ -9826,9 +9826,7 @@ define < 4 x i64> @test_mask_mul_epi32_r
>  ; X86-LABEL: test_mask_mul_epi32_rmb_256:
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax # encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm1 # EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x08]
> -; X86-NEXT:    # xmm1 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm1, %ymm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x7d,0x59,0xc9]
> +; X86-NEXT:    vpbroadcastd (%eax), %ymm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x7d,0x58,0x08]
>  ; X86-NEXT:    vpmuldq %ymm1, %ymm0, %ymm0 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x7d,0x28,0xc1]
>  ; X86-NEXT:    retl # encoding: [0xc3]
>  ;
> @@ -9848,9 +9846,7 @@ define < 4 x i64> @test_mask_mul_epi32_r
>  ; X86-LABEL: test_mask_mul_epi32_rmbk_256:
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax # encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm2 # EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x10]
> -; X86-NEXT:    # xmm2 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm2, %ymm2 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x7d,0x59,0xd2]
> +; X86-NEXT:    vpbroadcastd (%eax), %ymm2 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x7d,0x58,0x10]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax # encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 # encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuldq %ymm2, %ymm0, %ymm1 {%k1} # encoding: [0x62,0xf2,0xfd,0x29,0x28,0xca]
> @@ -9875,9 +9871,7 @@ define < 4 x i64> @test_mask_mul_epi32_r
>  ; X86-LABEL: test_mask_mul_epi32_rmbkz_256:
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax # encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm1 # EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x08]
> -; X86-NEXT:    # xmm1 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm1, %ymm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x7d,0x59,0xc9]
> +; X86-NEXT:    vpbroadcastd (%eax), %ymm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x7d,0x58,0x08]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax # encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 # encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuldq %ymm1, %ymm0, %ymm0 {%k1} {z} # encoding: [0x62,0xf2,0xfd,0xa9,0x28,0xc1]
> @@ -10003,7 +9997,7 @@ define < 2 x i64> @test_mask_mul_epu32_r
>  ; X86-LABEL: test_mask_mul_epu32_rmb_128:
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax # encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vpbroadcastq (%eax), %xmm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x79,0x59,0x08]
> +; X86-NEXT:    vpbroadcastd (%eax), %xmm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x79,0x58,0x08]
>  ; X86-NEXT:    vpmuludq %xmm1, %xmm0, %xmm0 # EVEX TO VEX Compression encoding: [0xc5,0xf9,0xf4,0xc1]
>  ; X86-NEXT:    retl # encoding: [0xc3]
>  ;
> @@ -10023,7 +10017,7 @@ define < 2 x i64> @test_mask_mul_epu32_r
>  ; X86-LABEL: test_mask_mul_epu32_rmbk_128:
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax # encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vpbroadcastq (%eax), %xmm2 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x79,0x59,0x10]
> +; X86-NEXT:    vpbroadcastd (%eax), %xmm2 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x79,0x58,0x10]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax # encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 # encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuludq %xmm2, %xmm0, %xmm1 {%k1} # encoding: [0x62,0xf1,0xfd,0x09,0xf4,0xca]
> @@ -10048,7 +10042,7 @@ define < 2 x i64> @test_mask_mul_epu32_r
>  ; X86-LABEL: test_mask_mul_epu32_rmbkz_128:
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax # encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vpbroadcastq (%eax), %xmm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x79,0x59,0x08]
> +; X86-NEXT:    vpbroadcastd (%eax), %xmm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x79,0x58,0x08]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax # encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 # encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuludq %xmm1, %xmm0, %xmm0 {%k1} {z} # encoding: [0x62,0xf1,0xfd,0x89,0xf4,0xc1]
> @@ -10174,9 +10168,7 @@ define < 4 x i64> @test_mask_mul_epu32_r
>  ; X86-LABEL: test_mask_mul_epu32_rmb_256:
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax # encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm1 # EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x08]
> -; X86-NEXT:    # xmm1 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm1, %ymm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x7d,0x59,0xc9]
> +; X86-NEXT:    vpbroadcastd (%eax), %ymm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x7d,0x58,0x08]
>  ; X86-NEXT:    vpmuludq %ymm1, %ymm0, %ymm0 # EVEX TO VEX Compression encoding: [0xc5,0xfd,0xf4,0xc1]
>  ; X86-NEXT:    retl # encoding: [0xc3]
>  ;
> @@ -10196,9 +10188,7 @@ define < 4 x i64> @test_mask_mul_epu32_r
>  ; X86-LABEL: test_mask_mul_epu32_rmbk_256:
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax # encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm2 # EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x10]
> -; X86-NEXT:    # xmm2 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm2, %ymm2 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x7d,0x59,0xd2]
> +; X86-NEXT:    vpbroadcastd (%eax), %ymm2 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x7d,0x58,0x10]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax # encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 # encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuludq %ymm2, %ymm0, %ymm1 {%k1} # encoding: [0x62,0xf1,0xfd,0x29,0xf4,0xca]
> @@ -10223,9 +10213,7 @@ define < 4 x i64> @test_mask_mul_epu32_r
>  ; X86-LABEL: test_mask_mul_epu32_rmbkz_256:
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax # encoding: [0x8b,0x44,0x24,0x04]
> -; X86-NEXT:    vmovq (%eax), %xmm1 # EVEX TO VEX Compression encoding: [0xc5,0xfa,0x7e,0x08]
> -; X86-NEXT:    # xmm1 = mem[0],zero
> -; X86-NEXT:    vpbroadcastq %xmm1, %ymm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x7d,0x59,0xc9]
> +; X86-NEXT:    vpbroadcastd (%eax), %ymm1 # EVEX TO VEX Compression encoding: [0xc4,0xe2,0x7d,0x58,0x08]
>  ; X86-NEXT:    movzbl {{[0-9]+}}(%esp), %eax # encoding: [0x0f,0xb6,0x44,0x24,0x08]
>  ; X86-NEXT:    kmovw %eax, %k1 # encoding: [0xc5,0xf8,0x92,0xc8]
>  ; X86-NEXT:    vpmuludq %ymm1, %ymm0, %ymm0 {%k1} {z} # encoding: [0x62,0xf1,0xfd,0xa9,0xf4,0xc1]
> 
> Modified: llvm/trunk/test/CodeGen/X86/combine-pmuldq.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/combine-pmuldq.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/combine-pmuldq.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/combine-pmuldq.ll Sat Oct  6 03:20:04 2018
> @@ -41,20 +41,15 @@ define <2 x i64> @combine_shuffle_zext_p
>    ret <2 x i64> %5
>  }
>  
> -; TODO - blends are superfluous
>  define <2 x i64> @combine_shuffle_zero_pmuludq(<4 x i32> %a0, <4 x i32> %a1) {
>  ; SSE-LABEL: combine_shuffle_zero_pmuludq:
>  ; SSE:       # %bb.0:
> -; SSE-NEXT:    pxor %xmm2, %xmm2
> -; SSE-NEXT:    pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]
> -; SSE-NEXT:    pblendw {{.*#+}} xmm1 = xmm1[0,1],xmm2[2,3],xmm1[4,5],xmm2[6,7]
>  ; SSE-NEXT:    pmuludq %xmm1, %xmm0
>  ; SSE-NEXT:    retq
>  ;
>  ; AVX2-LABEL: combine_shuffle_zero_pmuludq:
>  ; AVX2:       # %bb.0:
>  ; AVX2-NEXT:    vpxor %xmm2, %xmm2, %xmm2
> -; AVX2-NEXT:    vpblendd {{.*#+}} xmm0 = xmm0[0],xmm2[1],xmm0[2],xmm2[3]
>  ; AVX2-NEXT:    vpblendd {{.*#+}} xmm1 = xmm1[0],xmm2[1],xmm1[2],xmm2[3]
>  ; AVX2-NEXT:    vpmuludq %xmm1, %xmm0, %xmm0
>  ; AVX2-NEXT:    retq
> @@ -62,7 +57,6 @@ define <2 x i64> @combine_shuffle_zero_p
>  ; AVX512VL-LABEL: combine_shuffle_zero_pmuludq:
>  ; AVX512VL:       # %bb.0:
>  ; AVX512VL-NEXT:    vpxor %xmm2, %xmm2, %xmm2
> -; AVX512VL-NEXT:    vpblendd {{.*#+}} xmm0 = xmm0[0],xmm2[1],xmm0[2],xmm2[3]
>  ; AVX512VL-NEXT:    vpblendd {{.*#+}} xmm1 = xmm1[0],xmm2[1],xmm1[2],xmm2[3]
>  ; AVX512VL-NEXT:    vpmuludq %xmm1, %xmm0, %xmm0
>  ; AVX512VL-NEXT:    retq
> @@ -70,7 +64,6 @@ define <2 x i64> @combine_shuffle_zero_p
>  ; AVX512DQVL-LABEL: combine_shuffle_zero_pmuludq:
>  ; AVX512DQVL:       # %bb.0:
>  ; AVX512DQVL-NEXT:    vpxor %xmm2, %xmm2, %xmm2
> -; AVX512DQVL-NEXT:    vpblendd {{.*#+}} xmm0 = xmm0[0],xmm2[1],xmm0[2],xmm2[3]
>  ; AVX512DQVL-NEXT:    vpblendd {{.*#+}} xmm1 = xmm1[0],xmm2[1],xmm1[2],xmm2[3]
>  ; AVX512DQVL-NEXT:    vpmuludq %xmm1, %xmm0, %xmm0
>  ; AVX512DQVL-NEXT:    retq
> @@ -82,23 +75,16 @@ define <2 x i64> @combine_shuffle_zero_p
>    ret <2 x i64> %5
>  }
>  
> -; TODO - blends are superfluous
>  define <4 x i64> @combine_shuffle_zero_pmuludq_256(<8 x i32> %a0, <8 x i32> %a1) {
>  ; SSE-LABEL: combine_shuffle_zero_pmuludq_256:
>  ; SSE:       # %bb.0:
> -; SSE-NEXT:    pxor %xmm4, %xmm4
> -; SSE-NEXT:    pblendw {{.*#+}} xmm1 = xmm1[0,1],xmm4[2,3],xmm1[4,5],xmm4[6,7]
> -; SSE-NEXT:    pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm4[2,3],xmm0[4,5],xmm4[6,7]
> -; SSE-NEXT:    pblendw {{.*#+}} xmm3 = xmm3[0,1],xmm4[2,3],xmm3[4,5],xmm4[6,7]
> -; SSE-NEXT:    pmuludq %xmm3, %xmm1
> -; SSE-NEXT:    pblendw {{.*#+}} xmm2 = xmm2[0,1],xmm4[2,3],xmm2[4,5],xmm4[6,7]
>  ; SSE-NEXT:    pmuludq %xmm2, %xmm0
> +; SSE-NEXT:    pmuludq %xmm3, %xmm1
>  ; SSE-NEXT:    retq
>  ;
>  ; AVX2-LABEL: combine_shuffle_zero_pmuludq_256:
>  ; AVX2:       # %bb.0:
>  ; AVX2-NEXT:    vpxor %xmm2, %xmm2, %xmm2
> -; AVX2-NEXT:    vpblendd {{.*#+}} ymm0 = ymm0[0],ymm2[1],ymm0[2],ymm2[3],ymm0[4],ymm2[5],ymm0[6],ymm2[7]
>  ; AVX2-NEXT:    vpblendd {{.*#+}} ymm1 = ymm1[0],ymm2[1],ymm1[2],ymm2[3],ymm1[4],ymm2[5],ymm1[6],ymm2[7]
>  ; AVX2-NEXT:    vpmuludq %ymm1, %ymm0, %ymm0
>  ; AVX2-NEXT:    retq
> @@ -106,7 +92,6 @@ define <4 x i64> @combine_shuffle_zero_p
>  ; AVX512VL-LABEL: combine_shuffle_zero_pmuludq_256:
>  ; AVX512VL:       # %bb.0:
>  ; AVX512VL-NEXT:    vpxor %xmm2, %xmm2, %xmm2
> -; AVX512VL-NEXT:    vpblendd {{.*#+}} ymm0 = ymm0[0],ymm2[1],ymm0[2],ymm2[3],ymm0[4],ymm2[5],ymm0[6],ymm2[7]
>  ; AVX512VL-NEXT:    vpblendd {{.*#+}} ymm1 = ymm1[0],ymm2[1],ymm1[2],ymm2[3],ymm1[4],ymm2[5],ymm1[6],ymm2[7]
>  ; AVX512VL-NEXT:    vpmuludq %ymm1, %ymm0, %ymm0
>  ; AVX512VL-NEXT:    retq
> @@ -114,7 +99,6 @@ define <4 x i64> @combine_shuffle_zero_p
>  ; AVX512DQVL-LABEL: combine_shuffle_zero_pmuludq_256:
>  ; AVX512DQVL:       # %bb.0:
>  ; AVX512DQVL-NEXT:    vpxor %xmm2, %xmm2, %xmm2
> -; AVX512DQVL-NEXT:    vpblendd {{.*#+}} ymm0 = ymm0[0],ymm2[1],ymm0[2],ymm2[3],ymm0[4],ymm2[5],ymm0[6],ymm2[7]
>  ; AVX512DQVL-NEXT:    vpblendd {{.*#+}} ymm1 = ymm1[0],ymm2[1],ymm1[2],ymm2[3],ymm1[4],ymm2[5],ymm1[6],ymm2[7]
>  ; AVX512DQVL-NEXT:    vpmuludq %ymm1, %ymm0, %ymm0
>  ; AVX512DQVL-NEXT:    retq
> 
> Modified: llvm/trunk/test/CodeGen/X86/combine-shl.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/combine-shl.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/combine-shl.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/combine-shl.ll Sat Oct  6 03:20:04 2018
> @@ -402,23 +402,21 @@ define <4 x i32> @combine_vec_shl_ge_ash
>  ; SSE2-LABEL: combine_vec_shl_ge_ashr_extact1:
>  ; SSE2:       # %bb.0:
>  ; SSE2-NEXT:    movdqa %xmm0, %xmm1
> -; SSE2-NEXT:    psrad $8, %xmm1
> +; SSE2-NEXT:    psrad $5, %xmm1
>  ; SSE2-NEXT:    movdqa %xmm0, %xmm2
> -; SSE2-NEXT:    psrad $5, %xmm2
> -; SSE2-NEXT:    punpckhqdq {{.*#+}} xmm2 = xmm2[1],xmm1[1]
> -; SSE2-NEXT:    movdqa %xmm0, %xmm3
> -; SSE2-NEXT:    psrad $4, %xmm3
> -; SSE2-NEXT:    psrad $3, %xmm0
> -; SSE2-NEXT:    punpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm3[0]
> -; SSE2-NEXT:    shufps {{.*#+}} xmm0 = xmm0[0,3],xmm2[0,3]
> -; SSE2-NEXT:    movdqa {{.*#+}} xmm2 = [32,64,128,256]
> -; SSE2-NEXT:    pmuludq %xmm2, %xmm0
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
> -; SSE2-NEXT:    shufps {{.*#+}} xmm3 = xmm3[1,1],xmm1[3,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[1,1,3,3]
> -; SSE2-NEXT:    pmuludq %xmm3, %xmm1
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
> +; SSE2-NEXT:    psrad $3, %xmm2
> +; SSE2-NEXT:    shufps {{.*#+}} xmm2 = xmm2[0,3],xmm1[2,3]
> +; SSE2-NEXT:    movdqa %xmm0, %xmm1
> +; SSE2-NEXT:    psrad $8, %xmm1
> +; SSE2-NEXT:    psrad $4, %xmm0
> +; SSE2-NEXT:    shufps {{.*#+}} xmm0 = xmm0[1,1],xmm1[3,3]
> +; SSE2-NEXT:    movdqa {{.*#+}} xmm1 = [32,64,128,256]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm1[1,1,3,3]
> +; SSE2-NEXT:    pmuludq %xmm0, %xmm3
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm3[0,2,2,3]
> +; SSE2-NEXT:    pmuludq %xmm1, %xmm2
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1]
>  ; SSE2-NEXT:    retq
>  ;
>  ; SSE41-LABEL: combine_vec_shl_ge_ashr_extact1:
> @@ -466,23 +464,21 @@ define <4 x i32> @combine_vec_shl_lt_ash
>  ; SSE2-LABEL: combine_vec_shl_lt_ashr_extact1:
>  ; SSE2:       # %bb.0:
>  ; SSE2-NEXT:    movdqa %xmm0, %xmm1
> -; SSE2-NEXT:    psrad $8, %xmm1
> +; SSE2-NEXT:    psrad $7, %xmm1
>  ; SSE2-NEXT:    movdqa %xmm0, %xmm2
> -; SSE2-NEXT:    psrad $7, %xmm2
> -; SSE2-NEXT:    punpckhqdq {{.*#+}} xmm2 = xmm2[1],xmm1[1]
> -; SSE2-NEXT:    movdqa %xmm0, %xmm3
> -; SSE2-NEXT:    psrad $6, %xmm3
> -; SSE2-NEXT:    psrad $5, %xmm0
> -; SSE2-NEXT:    punpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm3[0]
> -; SSE2-NEXT:    shufps {{.*#+}} xmm0 = xmm0[0,3],xmm2[0,3]
> -; SSE2-NEXT:    movdqa {{.*#+}} xmm2 = [8,16,32,256]
> -; SSE2-NEXT:    pmuludq %xmm2, %xmm0
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
> -; SSE2-NEXT:    shufps {{.*#+}} xmm3 = xmm3[1,1],xmm1[3,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[1,1,3,3]
> -; SSE2-NEXT:    pmuludq %xmm3, %xmm1
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
> +; SSE2-NEXT:    psrad $5, %xmm2
> +; SSE2-NEXT:    shufps {{.*#+}} xmm2 = xmm2[0,3],xmm1[2,3]
> +; SSE2-NEXT:    movdqa %xmm0, %xmm1
> +; SSE2-NEXT:    psrad $8, %xmm1
> +; SSE2-NEXT:    psrad $6, %xmm0
> +; SSE2-NEXT:    shufps {{.*#+}} xmm0 = xmm0[1,1],xmm1[3,3]
> +; SSE2-NEXT:    movdqa {{.*#+}} xmm1 = [8,16,32,256]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm1[1,1,3,3]
> +; SSE2-NEXT:    pmuludq %xmm0, %xmm3
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm3[0,2,2,3]
> +; SSE2-NEXT:    pmuludq %xmm1, %xmm2
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1]
>  ; SSE2-NEXT:    retq
>  ;
>  ; SSE41-LABEL: combine_vec_shl_lt_ashr_extact1:
> @@ -533,23 +529,21 @@ define <4 x i32> @combine_vec_shl_gt_lsh
>  ; SSE2-LABEL: combine_vec_shl_gt_lshr1:
>  ; SSE2:       # %bb.0:
>  ; SSE2-NEXT:    movdqa %xmm0, %xmm1
> -; SSE2-NEXT:    psrld $8, %xmm1
> +; SSE2-NEXT:    psrld $5, %xmm1
>  ; SSE2-NEXT:    movdqa %xmm0, %xmm2
> -; SSE2-NEXT:    psrld $5, %xmm2
> -; SSE2-NEXT:    punpckhqdq {{.*#+}} xmm2 = xmm2[1],xmm1[1]
> -; SSE2-NEXT:    movdqa %xmm0, %xmm3
> -; SSE2-NEXT:    psrld $4, %xmm3
> -; SSE2-NEXT:    psrld $3, %xmm0
> -; SSE2-NEXT:    punpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm3[0]
> -; SSE2-NEXT:    shufps {{.*#+}} xmm0 = xmm0[0,3],xmm2[0,3]
> -; SSE2-NEXT:    movdqa {{.*#+}} xmm2 = [32,64,128,256]
> -; SSE2-NEXT:    pmuludq %xmm2, %xmm0
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
> -; SSE2-NEXT:    shufps {{.*#+}} xmm3 = xmm3[1,1],xmm1[3,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[1,1,3,3]
> -; SSE2-NEXT:    pmuludq %xmm3, %xmm1
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
> +; SSE2-NEXT:    psrld $3, %xmm2
> +; SSE2-NEXT:    shufps {{.*#+}} xmm2 = xmm2[0,3],xmm1[2,3]
> +; SSE2-NEXT:    movdqa %xmm0, %xmm1
> +; SSE2-NEXT:    psrld $8, %xmm1
> +; SSE2-NEXT:    psrld $4, %xmm0
> +; SSE2-NEXT:    shufps {{.*#+}} xmm0 = xmm0[1,1],xmm1[3,3]
> +; SSE2-NEXT:    movdqa {{.*#+}} xmm1 = [32,64,128,256]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm1[1,1,3,3]
> +; SSE2-NEXT:    pmuludq %xmm0, %xmm3
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm3[0,2,2,3]
> +; SSE2-NEXT:    pmuludq %xmm1, %xmm2
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1]
>  ; SSE2-NEXT:    retq
>  ;
>  ; SSE41-LABEL: combine_vec_shl_gt_lshr1:
> @@ -600,23 +594,21 @@ define <4 x i32> @combine_vec_shl_le_lsh
>  ; SSE2-LABEL: combine_vec_shl_le_lshr1:
>  ; SSE2:       # %bb.0:
>  ; SSE2-NEXT:    movdqa %xmm0, %xmm1
> -; SSE2-NEXT:    psrld $8, %xmm1
> +; SSE2-NEXT:    psrld $7, %xmm1
>  ; SSE2-NEXT:    movdqa %xmm0, %xmm2
> -; SSE2-NEXT:    psrld $7, %xmm2
> -; SSE2-NEXT:    punpckhqdq {{.*#+}} xmm2 = xmm2[1],xmm1[1]
> -; SSE2-NEXT:    movdqa %xmm0, %xmm3
> -; SSE2-NEXT:    psrld $6, %xmm3
> -; SSE2-NEXT:    psrld $5, %xmm0
> -; SSE2-NEXT:    punpcklqdq {{.*#+}} xmm0 = xmm0[0],xmm3[0]
> -; SSE2-NEXT:    shufps {{.*#+}} xmm0 = xmm0[0,3],xmm2[0,3]
> -; SSE2-NEXT:    movdqa {{.*#+}} xmm2 = [8,16,32,256]
> -; SSE2-NEXT:    pmuludq %xmm2, %xmm0
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
> -; SSE2-NEXT:    shufps {{.*#+}} xmm3 = xmm3[1,1],xmm1[3,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[1,1,3,3]
> -; SSE2-NEXT:    pmuludq %xmm3, %xmm1
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
> +; SSE2-NEXT:    psrld $5, %xmm2
> +; SSE2-NEXT:    shufps {{.*#+}} xmm2 = xmm2[0,3],xmm1[2,3]
> +; SSE2-NEXT:    movdqa %xmm0, %xmm1
> +; SSE2-NEXT:    psrld $8, %xmm1
> +; SSE2-NEXT:    psrld $6, %xmm0
> +; SSE2-NEXT:    shufps {{.*#+}} xmm0 = xmm0[1,1],xmm1[3,3]
> +; SSE2-NEXT:    movdqa {{.*#+}} xmm1 = [8,16,32,256]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm1[1,1,3,3]
> +; SSE2-NEXT:    pmuludq %xmm0, %xmm3
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm3[0,2,2,3]
> +; SSE2-NEXT:    pmuludq %xmm1, %xmm2
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1]
>  ; SSE2-NEXT:    retq
>  ;
>  ; SSE41-LABEL: combine_vec_shl_le_lshr1:
> 
> Modified: llvm/trunk/test/CodeGen/X86/mulvi32.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/mulvi32.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/mulvi32.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/mulvi32.ll Sat Oct  6 03:20:04 2018
> @@ -39,24 +39,18 @@ define <2 x i32> @_mul2xi32a(<2 x i32>,
>  define <2 x i32> @_mul2xi32b(<2 x i32>, <2 x i32>) {
>  ; SSE2-LABEL: _mul2xi32b:
>  ; SSE2:       # %bb.0:
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
> -; SSE2-NEXT:    pmuludq %xmm0, %xmm1
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm1[0,1,1,3]
> +; SSE2-NEXT:    pmuludq %xmm1, %xmm0
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,1,1,3]
>  ; SSE2-NEXT:    retq
>  ;
>  ; SSE42-LABEL: _mul2xi32b:
>  ; SSE42:       # %bb.0:
> -; SSE42-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
> -; SSE42-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
> -; SSE42-NEXT:    pmuludq %xmm0, %xmm1
> -; SSE42-NEXT:    pmovzxdq {{.*#+}} xmm0 = xmm1[0],zero,xmm1[1],zero
> +; SSE42-NEXT:    pmuludq %xmm1, %xmm0
> +; SSE42-NEXT:    pmovzxdq {{.*#+}} xmm0 = xmm0[0],zero,xmm0[1],zero
>  ; SSE42-NEXT:    retq
>  ;
>  ; AVX-LABEL: _mul2xi32b:
>  ; AVX:       # %bb.0:
> -; AVX-NEXT:    vpshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
> -; AVX-NEXT:    vpshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
>  ; AVX-NEXT:    vpmuludq %xmm1, %xmm0, %xmm0
>  ; AVX-NEXT:    vpmovzxdq {{.*#+}} xmm0 = xmm0[0],zero,xmm0[1],zero
>  ; AVX-NEXT:    retq
> @@ -349,27 +343,13 @@ define <2 x i64> @_mul2xi64toi64a(<2 x i
>  ;
>  ; SSE42-LABEL: _mul2xi64toi64a:
>  ; SSE42:       # %bb.0:
> -; SSE42-NEXT:    pxor %xmm2, %xmm2
> -; SSE42-NEXT:    pblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]
> -; SSE42-NEXT:    pblendw {{.*#+}} xmm1 = xmm1[0,1],xmm2[2,3],xmm1[4,5],xmm2[6,7]
>  ; SSE42-NEXT:    pmuludq %xmm1, %xmm0
>  ; SSE42-NEXT:    retq
>  ;
> -; AVX1-LABEL: _mul2xi64toi64a:
> -; AVX1:       # %bb.0:
> -; AVX1-NEXT:    vpxor %xmm2, %xmm2, %xmm2
> -; AVX1-NEXT:    vpblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]
> -; AVX1-NEXT:    vpblendw {{.*#+}} xmm1 = xmm1[0,1],xmm2[2,3],xmm1[4,5],xmm2[6,7]
> -; AVX1-NEXT:    vpmuludq %xmm1, %xmm0, %xmm0
> -; AVX1-NEXT:    retq
> -;
> -; AVX2-LABEL: _mul2xi64toi64a:
> -; AVX2:       # %bb.0:
> -; AVX2-NEXT:    vpxor %xmm2, %xmm2, %xmm2
> -; AVX2-NEXT:    vpblendd {{.*#+}} xmm0 = xmm0[0],xmm2[1],xmm0[2],xmm2[3]
> -; AVX2-NEXT:    vpblendd {{.*#+}} xmm1 = xmm1[0],xmm2[1],xmm1[2],xmm2[3]
> -; AVX2-NEXT:    vpmuludq %xmm1, %xmm0, %xmm0
> -; AVX2-NEXT:    retq
> +; AVX-LABEL: _mul2xi64toi64a:
> +; AVX:       # %bb.0:
> +; AVX-NEXT:    vpmuludq %xmm1, %xmm0, %xmm0
> +; AVX-NEXT:    retq
>    %f00 = extractelement <2 x i64> %0, i32 0
>    %f01 = extractelement <2 x i64> %0, i32 1
>    %f10 = extractelement <2 x i64> %1, i32 0
> 
> Modified: llvm/trunk/test/CodeGen/X86/pmul.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pmul.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/pmul.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/pmul.ll Sat Oct  6 03:20:04 2018
> @@ -1318,76 +1318,55 @@ entry:
>  define <8 x i64> @mul_v8i64_sext(<8 x i16> %val1, <8 x i32> %val2) {
>  ; SSE2-LABEL: mul_v8i64_sext:
>  ; SSE2:       # %bb.0:
> -; SSE2-NEXT:    movdqa %xmm1, %xmm5
> -; SSE2-NEXT:    movdqa %xmm0, %xmm8
> +; SSE2-NEXT:    movdqa %xmm2, %xmm10
> +; SSE2-NEXT:    movdqa %xmm1, %xmm9
> +; SSE2-NEXT:    movdqa %xmm0, %xmm1
>  ; SSE2-NEXT:    punpckhwd {{.*#+}} xmm6 = xmm6[4],xmm0[4],xmm6[5],xmm0[5],xmm6[6],xmm0[6],xmm6[7],xmm0[7]
> -; SSE2-NEXT:    punpcklwd {{.*#+}} xmm0 = xmm0[0,0,1,1,2,2,3,3]
> -; SSE2-NEXT:    movdqa %xmm0, %xmm3
> -; SSE2-NEXT:    psrad $31, %xmm3
> -; SSE2-NEXT:    psrad $16, %xmm0
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm3[0],xmm0[1],xmm3[1]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[2,3,0,1]
> -; SSE2-NEXT:    movdqa %xmm5, %xmm4
> -; SSE2-NEXT:    psrad $31, %xmm4
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm5 = xmm5[0],xmm4[0],xmm5[1],xmm4[1]
> -; SSE2-NEXT:    pxor %xmm7, %xmm7
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm3 = xmm3[0],xmm7[0],xmm3[1],xmm7[1]
> -; SSE2-NEXT:    pmuludq %xmm5, %xmm3
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm4 = xmm4[0],xmm7[0],xmm4[1],xmm7[1]
> -; SSE2-NEXT:    pmuludq %xmm0, %xmm4
> -; SSE2-NEXT:    paddq %xmm3, %xmm4
> -; SSE2-NEXT:    pshuflw {{.*#+}} xmm3 = xmm8[0,2,2,3,4,5,6,7]
> -; SSE2-NEXT:    pmuludq %xmm5, %xmm0
> -; SSE2-NEXT:    movdqa %xmm3, %xmm5
> -; SSE2-NEXT:    psrad $31, %xmm5
> -; SSE2-NEXT:    psrad $16, %xmm3
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm3 = xmm3[0],xmm5[0],xmm3[1],xmm5[1]
> -; SSE2-NEXT:    psllq $32, %xmm4
> -; SSE2-NEXT:    paddq %xmm4, %xmm0
> -; SSE2-NEXT:    movdqa %xmm1, %xmm4
> -; SSE2-NEXT:    psrad $31, %xmm4
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm5 = xmm5[0],xmm7[0],xmm5[1],xmm7[1]
> -; SSE2-NEXT:    pmuludq %xmm1, %xmm5
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm4 = xmm4[0],xmm7[0],xmm4[1],xmm7[1]
> -; SSE2-NEXT:    pmuludq %xmm3, %xmm4
> -; SSE2-NEXT:    paddq %xmm5, %xmm4
> -; SSE2-NEXT:    movdqa %xmm6, %xmm5
> -; SSE2-NEXT:    psrad $31, %xmm5
> +; SSE2-NEXT:    movdqa %xmm6, %xmm2
> +; SSE2-NEXT:    psrad $31, %xmm2
>  ; SSE2-NEXT:    psrad $16, %xmm6
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm6 = xmm6[0],xmm5[0],xmm6[1],xmm5[1]
> -; SSE2-NEXT:    pmuludq %xmm3, %xmm1
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm2[2,3,0,1]
> -; SSE2-NEXT:    psllq $32, %xmm4
> -; SSE2-NEXT:    paddq %xmm4, %xmm1
> -; SSE2-NEXT:    movdqa %xmm2, %xmm4
> -; SSE2-NEXT:    psrad $31, %xmm4
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm2 = xmm2[0],xmm4[0],xmm2[1],xmm4[1]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm5 = xmm5[0],xmm7[0],xmm5[1],xmm7[1]
> -; SSE2-NEXT:    pmuludq %xmm2, %xmm5
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm4 = xmm4[0],xmm7[0],xmm4[1],xmm7[1]
> -; SSE2-NEXT:    pmuludq %xmm6, %xmm4
> -; SSE2-NEXT:    paddq %xmm5, %xmm4
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm5 = xmm8[2,3,0,1]
> -; SSE2-NEXT:    pshuflw {{.*#+}} xmm5 = xmm5[0,2,2,3,4,5,6,7]
> -; SSE2-NEXT:    pmuludq %xmm6, %xmm2
> -; SSE2-NEXT:    movdqa %xmm5, %xmm6
> -; SSE2-NEXT:    psrad $31, %xmm6
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm6 = xmm6[0],xmm2[0],xmm6[1],xmm2[1]
> +; SSE2-NEXT:    punpcklwd {{.*#+}} xmm4 = xmm4[0],xmm0[0],xmm4[1],xmm0[1],xmm4[2],xmm0[2],xmm4[3],xmm0[3]
> +; SSE2-NEXT:    movdqa %xmm4, %xmm0
> +; SSE2-NEXT:    psrad $31, %xmm0
> +; SSE2-NEXT:    psrad $16, %xmm4
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm4 = xmm4[0],xmm0[0],xmm4[1],xmm0[1]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm1[2,3,0,1]
> +; SSE2-NEXT:    pshuflw {{.*#+}} xmm7 = xmm3[0,2,2,3,4,5,6,7]
> +; SSE2-NEXT:    movdqa %xmm7, %xmm3
> +; SSE2-NEXT:    psrad $31, %xmm3
> +; SSE2-NEXT:    psrad $16, %xmm7
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm7 = xmm7[0],xmm3[0],xmm7[1],xmm3[1]
> +; SSE2-NEXT:    pshuflw {{.*#+}} xmm5 = xmm1[0,2,2,3,4,5,6,7]
> +; SSE2-NEXT:    movdqa %xmm5, %xmm1
> +; SSE2-NEXT:    psrad $31, %xmm1
>  ; SSE2-NEXT:    psrad $16, %xmm5
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm5 = xmm5[0],xmm6[0],xmm5[1],xmm6[1]
> -; SSE2-NEXT:    psllq $32, %xmm4
> -; SSE2-NEXT:    paddq %xmm4, %xmm2
> -; SSE2-NEXT:    movdqa %xmm3, %xmm4
> -; SSE2-NEXT:    psrad $31, %xmm4
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm3 = xmm3[0],xmm4[0],xmm3[1],xmm4[1]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm6 = xmm6[0],xmm7[0],xmm6[1],xmm7[1]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm4 = xmm4[0],xmm7[0],xmm4[1],xmm7[1]
> -; SSE2-NEXT:    pmuludq %xmm3, %xmm6
> -; SSE2-NEXT:    pmuludq %xmm5, %xmm4
> -; SSE2-NEXT:    paddq %xmm6, %xmm4
> -; SSE2-NEXT:    pmuludq %xmm5, %xmm3
> -; SSE2-NEXT:    psllq $32, %xmm4
> -; SSE2-NEXT:    paddq %xmm4, %xmm3
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm5 = xmm5[0],xmm1[0],xmm5[1],xmm1[1]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm8 = xmm10[2,1,3,3]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm10 = xmm10[0,1,1,3]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm11 = xmm9[2,1,3,3]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm9 = xmm9[0,1,1,3]
> +; SSE2-NEXT:    pmuludq %xmm9, %xmm4
> +; SSE2-NEXT:    pxor %xmm12, %xmm12
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm12[0],xmm0[1],xmm12[1]
> +; SSE2-NEXT:    pmuludq %xmm9, %xmm0
> +; SSE2-NEXT:    psllq $32, %xmm0
> +; SSE2-NEXT:    paddq %xmm4, %xmm0
> +; SSE2-NEXT:    pmuludq %xmm11, %xmm5
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm12[0],xmm1[1],xmm12[1]
> +; SSE2-NEXT:    pmuludq %xmm11, %xmm1
> +; SSE2-NEXT:    psllq $32, %xmm1
> +; SSE2-NEXT:    paddq %xmm5, %xmm1
> +; SSE2-NEXT:    pmuludq %xmm10, %xmm6
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm2 = xmm2[0],xmm12[0],xmm2[1],xmm12[1]
> +; SSE2-NEXT:    pmuludq %xmm10, %xmm2
> +; SSE2-NEXT:    psllq $32, %xmm2
> +; SSE2-NEXT:    paddq %xmm6, %xmm2
> +; SSE2-NEXT:    pmuludq %xmm8, %xmm7
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm3 = xmm3[0],xmm12[0],xmm3[1],xmm12[1]
> +; SSE2-NEXT:    pmuludq %xmm8, %xmm3
> +; SSE2-NEXT:    psllq $32, %xmm3
> +; SSE2-NEXT:    paddq %xmm7, %xmm3
>  ; SSE2-NEXT:    retq
>  ;
>  ; SSE41-LABEL: mul_v8i64_sext:
> 
> Modified: llvm/trunk/test/CodeGen/X86/pr35918.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/pr35918.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/pr35918.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/pr35918.ll Sat Oct  6 03:20:04 2018
> @@ -7,7 +7,7 @@
>  define void @fetch_r16g16_snorm_unorm8(<4 x i8>*, i8*, i32, i32, { [2048 x i32], [128 x i64] }*) nounwind {
>  ; X86-SKYLAKE-LABEL: fetch_r16g16_snorm_unorm8:
>  ; X86-SKYLAKE:       # %bb.0: # %entry
> -; X86-SKYLAKE-NEXT:    subl $12, %esp
> +; X86-SKYLAKE-NEXT:    subl $8, %esp
>  ; X86-SKYLAKE-NEXT:    movl {{[0-9]+}}(%esp), %eax
>  ; X86-SKYLAKE-NEXT:    movl {{[0-9]+}}(%esp), %ecx
>  ; X86-SKYLAKE-NEXT:    vmovd {{.*#+}} xmm0 = mem[0],zero,zero,zero
> @@ -21,12 +21,12 @@ define void @fetch_r16g16_snorm_unorm8(<
>  ; X86-SKYLAKE-NEXT:    vmovd %xmm0, %ecx
>  ; X86-SKYLAKE-NEXT:    orl $-16777216, %ecx # imm = 0xFF000000
>  ; X86-SKYLAKE-NEXT:    movl %ecx, (%eax)
> -; X86-SKYLAKE-NEXT:    addl $12, %esp
> +; X86-SKYLAKE-NEXT:    addl $8, %esp
>  ; X86-SKYLAKE-NEXT:    retl
>  ;
>  ; X86-SKX-LABEL: fetch_r16g16_snorm_unorm8:
>  ; X86-SKX:       # %bb.0: # %entry
> -; X86-SKX-NEXT:    subl $12, %esp
> +; X86-SKX-NEXT:    subl $8, %esp
>  ; X86-SKX-NEXT:    movl {{[0-9]+}}(%esp), %eax
>  ; X86-SKX-NEXT:    vmovd {{.*#+}} xmm0 = mem[0],zero,zero,zero
>  ; X86-SKX-NEXT:    vpshufb {{.*#+}} xmm0 = zero,zero,xmm0[0,1],zero,zero,xmm0[2,3],zero,zero,xmm0[u,u],zero,zero,xmm0[u,u]
> @@ -35,19 +35,16 @@ define void @fetch_r16g16_snorm_unorm8(<
>  ; X86-SKX-NEXT:    vpmaxsd %xmm1, %xmm0, %xmm0
>  ; X86-SKX-NEXT:    vpblendw {{.*#+}} xmm0 = xmm0[0],xmm1[1],xmm0[2],xmm1[3],xmm0[4],xmm1[5],xmm0[6],xmm1[7]
>  ; X86-SKX-NEXT:    vpsrld $7, %xmm0, %xmm0
> -; X86-SKX-NEXT:    vpmovzxdq {{.*#+}} xmm1 = xmm0[0],zero,xmm0[1],zero
> -; X86-SKX-NEXT:    vpmovqw %xmm1, {{[0-9]+}}(%esp)
> -; X86-SKX-NEXT:    vpmovzxbd {{.*#+}} xmm1 = mem[0],zero,zero,zero,mem[1],zero,zero,zero,mem[2],zero,zero,zero,mem[3],zero,zero,zero
> -; X86-SKX-NEXT:    vpshufd {{.*#+}} xmm0 = xmm0[2,2,3,3]
> +; X86-SKX-NEXT:    vpmovzxdq {{.*#+}} xmm0 = xmm0[0],zero,xmm0[1],zero
>  ; X86-SKX-NEXT:    vpmovqw %xmm0, {{[0-9]+}}(%esp)
> -; X86-SKX-NEXT:    vpmovzxbd {{.*#+}} xmm0 = mem[0],zero,zero,zero,mem[1],zero,zero,zero,mem[2],zero,zero,zero,mem[3],zero,zero,zero
> -; X86-SKX-NEXT:    vshufps {{.*#+}} xmm0 = xmm1[0,2],xmm0[0,2]
> +; X86-SKX-NEXT:    vmovd {{.*#+}} xmm0 = mem[0],zero,zero,zero
> +; X86-SKX-NEXT:    vpshufb {{.*#+}} xmm0 = xmm0[0],zero,zero,zero,xmm0[2],zero,zero,zero,xmm0[2],zero,zero,zero,xmm0[3],zero,zero,zero
>  ; X86-SKX-NEXT:    vpmovdb %xmm0, (%esp)
>  ; X86-SKX-NEXT:    movl {{[0-9]+}}(%esp), %eax
>  ; X86-SKX-NEXT:    movzwl (%esp), %ecx
>  ; X86-SKX-NEXT:    orl $-16777216, %ecx # imm = 0xFF000000
>  ; X86-SKX-NEXT:    movl %ecx, (%eax)
> -; X86-SKX-NEXT:    addl $12, %esp
> +; X86-SKX-NEXT:    addl $8, %esp
>  ; X86-SKX-NEXT:    retl
>  ;
>  ; X64-SKYLAKE-LABEL: fetch_r16g16_snorm_unorm8:
> @@ -74,13 +71,10 @@ define void @fetch_r16g16_snorm_unorm8(<
>  ; X64-SKX-NEXT:    vpmaxsd %xmm1, %xmm0, %xmm0
>  ; X64-SKX-NEXT:    vpblendw {{.*#+}} xmm0 = xmm0[0],xmm1[1],xmm0[2],xmm1[3],xmm0[4],xmm1[5],xmm0[6],xmm1[7]
>  ; X64-SKX-NEXT:    vpsrld $7, %xmm0, %xmm0
> -; X64-SKX-NEXT:    vpmovzxwd {{.*#+}} xmm1 = xmm0[0],zero,xmm0[1],zero,xmm0[2],zero,xmm0[3],zero
> -; X64-SKX-NEXT:    vpmovqw %xmm1, -{{[0-9]+}}(%rsp)
> -; X64-SKX-NEXT:    vpmovzxbd {{.*#+}} xmm1 = mem[0],zero,zero,zero,mem[1],zero,zero,zero,mem[2],zero,zero,zero,mem[3],zero,zero,zero
> -; X64-SKX-NEXT:    vpshufd {{.*#+}} xmm0 = xmm0[2,2,3,3]
> +; X64-SKX-NEXT:    vpmovzxwd {{.*#+}} xmm0 = xmm0[0],zero,xmm0[1],zero,xmm0[2],zero,xmm0[3],zero
>  ; X64-SKX-NEXT:    vpmovqw %xmm0, -{{[0-9]+}}(%rsp)
> -; X64-SKX-NEXT:    vpmovzxbd {{.*#+}} xmm0 = mem[0],zero,zero,zero,mem[1],zero,zero,zero,mem[2],zero,zero,zero,mem[3],zero,zero,zero
> -; X64-SKX-NEXT:    vshufps {{.*#+}} xmm0 = xmm1[0,2],xmm0[0,2]
> +; X64-SKX-NEXT:    vmovd {{.*#+}} xmm0 = mem[0],zero,zero,zero
> +; X64-SKX-NEXT:    vpshufb {{.*#+}} xmm0 = xmm0[0],zero,zero,zero,xmm0[2],zero,zero,zero,xmm0[2],zero,zero,zero,xmm0[3],zero,zero,zero
>  ; X64-SKX-NEXT:    vpmovdb %xmm0, -{{[0-9]+}}(%rsp)
>  ; X64-SKX-NEXT:    movzwl -{{[0-9]+}}(%rsp), %eax
>  ; X64-SKX-NEXT:    orl $-16777216, %eax # imm = 0xFF000000
> 
> Modified: llvm/trunk/test/CodeGen/X86/shrink_vmul.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/shrink_vmul.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/shrink_vmul.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/shrink_vmul.ll Sat Oct  6 03:20:04 2018
> @@ -1507,7 +1507,7 @@ define void @mul_2xi8_varconst1(i8* noca
>  ; X86-AVX-NEXT:    movl {{[0-9]+}}(%esp), %ecx
>  ; X86-AVX-NEXT:    movl c, %edx
>  ; X86-AVX-NEXT:    vpmovzxbq {{.*#+}} xmm0 = mem[0],zero,zero,zero,zero,zero,zero,zero,mem[1],zero,zero,zero,zero,zero,zero,zero
> -; X86-AVX-NEXT:    vpmaddwd {{\.LCPI.*}}, %xmm0, %xmm0
> +; X86-AVX-NEXT:    vpmulld {{\.LCPI.*}}, %xmm0, %xmm0
>  ; X86-AVX-NEXT:    vpshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
>  ; X86-AVX-NEXT:    vmovq %xmm0, (%edx,%eax,4)
>  ; X86-AVX-NEXT:    retl
> @@ -1643,7 +1643,7 @@ define void @mul_2xi8_varconst3(i8* noca
>  ; X86-AVX-NEXT:    movl {{[0-9]+}}(%esp), %ecx
>  ; X86-AVX-NEXT:    movl c, %edx
>  ; X86-AVX-NEXT:    vpmovzxbq {{.*#+}} xmm0 = mem[0],zero,zero,zero,zero,zero,zero,zero,mem[1],zero,zero,zero,zero,zero,zero,zero
> -; X86-AVX-NEXT:    vpmaddwd {{\.LCPI.*}}, %xmm0, %xmm0
> +; X86-AVX-NEXT:    vpmulld {{\.LCPI.*}}, %xmm0, %xmm0
>  ; X86-AVX-NEXT:    vpshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
>  ; X86-AVX-NEXT:    vmovq %xmm0, (%edx,%eax,4)
>  ; X86-AVX-NEXT:    retl
> @@ -2047,12 +2047,16 @@ define void @mul_2xi16_varconst3(i8* noc
>  ; X86-SSE-NEXT:    pxor %xmm1, %xmm1
>  ; X86-SSE-NEXT:    punpcklwd {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1],xmm0[2],xmm1[2],xmm0[3],xmm1[3]
>  ; X86-SSE-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,1,1,3]
> -; X86-SSE-NEXT:    movdqa {{.*#+}} xmm2 = [0,0,65536,0]
> -; X86-SSE-NEXT:    pmuludq %xmm2, %xmm0
> +; X86-SSE-NEXT:    movdqa {{.*#+}} xmm2 = <0,u,65536,u>
>  ; X86-SSE-NEXT:    pmuludq %xmm2, %xmm1
> -; X86-SSE-NEXT:    psllq $32, %xmm1
> -; X86-SSE-NEXT:    paddq %xmm0, %xmm1
> -; X86-SSE-NEXT:    pshufd {{.*#+}} xmm0 = xmm1[0,2,2,3]
> +; X86-SSE-NEXT:    movdqa %xmm2, %xmm3
> +; X86-SSE-NEXT:    psrlq $32, %xmm3
> +; X86-SSE-NEXT:    pmuludq %xmm0, %xmm3
> +; X86-SSE-NEXT:    paddq %xmm1, %xmm3
> +; X86-SSE-NEXT:    psllq $32, %xmm3
> +; X86-SSE-NEXT:    pmuludq %xmm2, %xmm0
> +; X86-SSE-NEXT:    paddq %xmm3, %xmm0
> +; X86-SSE-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
>  ; X86-SSE-NEXT:    movq %xmm0, (%edx,%eax,4)
>  ; X86-SSE-NEXT:    retl
>  ;
> @@ -2128,13 +2132,17 @@ define void @mul_2xi16_varconst4(i8* noc
>  ; X86-SSE-NEXT:    pshuflw {{.*#+}} xmm0 = xmm0[0,0,2,1,4,5,6,7]
>  ; X86-SSE-NEXT:    psrad $16, %xmm0
>  ; X86-SSE-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,1,1,3]
> -; X86-SSE-NEXT:    movdqa {{.*#+}} xmm1 = [0,0,32768,0]
> -; X86-SSE-NEXT:    pmuludq %xmm1, %xmm0
> +; X86-SSE-NEXT:    movdqa {{.*#+}} xmm1 = <0,u,32768,u>
>  ; X86-SSE-NEXT:    pxor %xmm2, %xmm2
>  ; X86-SSE-NEXT:    pmuludq %xmm1, %xmm2
> -; X86-SSE-NEXT:    psllq $32, %xmm2
> -; X86-SSE-NEXT:    paddq %xmm0, %xmm2
> -; X86-SSE-NEXT:    pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]
> +; X86-SSE-NEXT:    movdqa %xmm1, %xmm3
> +; X86-SSE-NEXT:    psrlq $32, %xmm3
> +; X86-SSE-NEXT:    pmuludq %xmm0, %xmm3
> +; X86-SSE-NEXT:    paddq %xmm2, %xmm3
> +; X86-SSE-NEXT:    psllq $32, %xmm3
> +; X86-SSE-NEXT:    pmuludq %xmm1, %xmm0
> +; X86-SSE-NEXT:    paddq %xmm3, %xmm0
> +; X86-SSE-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
>  ; X86-SSE-NEXT:    movq %xmm0, (%edx,%eax,4)
>  ; X86-SSE-NEXT:    retl
>  ;
> 
> Modified: llvm/trunk/test/CodeGen/X86/sse2-intrinsics-fast-isel.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse2-intrinsics-fast-isel.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/sse2-intrinsics-fast-isel.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/sse2-intrinsics-fast-isel.ll Sat Oct  6 03:20:04 2018
> @@ -2755,23 +2755,13 @@ define i32 @test_mm_movemask_pd(<2 x dou
>  declare i32 @llvm.x86.sse2.movmsk.pd(<2 x double>) nounwind readnone
>  
>  define <2 x i64> @test_mm_mul_epu32(<2 x i64> %a0, <2 x i64> %a1) nounwind {
> -; X86-SSE-LABEL: test_mm_mul_epu32:
> -; X86-SSE:       # %bb.0:
> -; X86-SSE-NEXT:    movdqa {{.*#+}} xmm2 = [4294967295,0,4294967295,0]
> -; X86-SSE-NEXT:    # encoding: [0x66,0x0f,0x6f,0x15,A,A,A,A]
> -; X86-SSE-NEXT:    # fixup A - offset: 4, value: {{\.LCPI.*}}, kind: FK_Data_4
> -; X86-SSE-NEXT:    pand %xmm2, %xmm0 # encoding: [0x66,0x0f,0xdb,0xc2]
> -; X86-SSE-NEXT:    pand %xmm2, %xmm1 # encoding: [0x66,0x0f,0xdb,0xca]
> -; X86-SSE-NEXT:    pmuludq %xmm1, %xmm0 # encoding: [0x66,0x0f,0xf4,0xc1]
> -; X86-SSE-NEXT:    retl # encoding: [0xc3]
> +; SSE-LABEL: test_mm_mul_epu32:
> +; SSE:       # %bb.0:
> +; SSE-NEXT:    pmuludq %xmm1, %xmm0 # encoding: [0x66,0x0f,0xf4,0xc1]
> +; SSE-NEXT:    ret{{[l|q]}} # encoding: [0xc3]
>  ;
>  ; AVX1-LABEL: test_mm_mul_epu32:
>  ; AVX1:       # %bb.0:
> -; AVX1-NEXT:    vpxor %xmm2, %xmm2, %xmm2 # encoding: [0xc5,0xe9,0xef,0xd2]
> -; AVX1-NEXT:    vpblendw $204, %xmm2, %xmm0, %xmm0 # encoding: [0xc4,0xe3,0x79,0x0e,0xc2,0xcc]
> -; AVX1-NEXT:    # xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]
> -; AVX1-NEXT:    vpblendw $204, %xmm2, %xmm1, %xmm1 # encoding: [0xc4,0xe3,0x71,0x0e,0xca,0xcc]
> -; AVX1-NEXT:    # xmm1 = xmm1[0,1],xmm2[2,3],xmm1[4,5],xmm2[6,7]
>  ; AVX1-NEXT:    vpmuludq %xmm1, %xmm0, %xmm0 # encoding: [0xc5,0xf9,0xf4,0xc1]
>  ; AVX1-NEXT:    ret{{[l|q]}} # encoding: [0xc3]
>  ;
> @@ -2784,16 +2774,6 @@ define <2 x i64> @test_mm_mul_epu32(<2 x
>  ; AVX512-NEXT:    # xmm1 = xmm1[0],xmm2[1],xmm1[2],xmm2[3]
>  ; AVX512-NEXT:    vpmullq %xmm1, %xmm0, %xmm0 # encoding: [0x62,0xf2,0xfd,0x08,0x40,0xc1]
>  ; AVX512-NEXT:    ret{{[l|q]}} # encoding: [0xc3]
> -;
> -; X64-SSE-LABEL: test_mm_mul_epu32:
> -; X64-SSE:       # %bb.0:
> -; X64-SSE-NEXT:    movdqa {{.*#+}} xmm2 = [4294967295,0,4294967295,0]
> -; X64-SSE-NEXT:    # encoding: [0x66,0x0f,0x6f,0x15,A,A,A,A]
> -; X64-SSE-NEXT:    # fixup A - offset: 4, value: {{\.LCPI.*}}-4, kind: reloc_riprel_4byte
> -; X64-SSE-NEXT:    pand %xmm2, %xmm0 # encoding: [0x66,0x0f,0xdb,0xc2]
> -; X64-SSE-NEXT:    pand %xmm2, %xmm1 # encoding: [0x66,0x0f,0xdb,0xca]
> -; X64-SSE-NEXT:    pmuludq %xmm1, %xmm0 # encoding: [0x66,0x0f,0xf4,0xc1]
> -; X64-SSE-NEXT:    retq # encoding: [0xc3]
>    %A = and <2 x i64> %a0, <i64 4294967295, i64 4294967295>
>    %B = and <2 x i64> %a1, <i64 4294967295, i64 4294967295>
>    %res = mul nuw <2 x i64> %A, %B
> 
> Modified: llvm/trunk/test/CodeGen/X86/sse41-intrinsics-fast-isel.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse41-intrinsics-fast-isel.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/sse41-intrinsics-fast-isel.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/sse41-intrinsics-fast-isel.ll Sat Oct  6 03:20:04 2018
> @@ -832,26 +832,11 @@ declare <8 x i16> @llvm.x86.sse41.mpsadb
>  define <2 x i64> @test_mm_mul_epi32(<2 x i64> %a0, <2 x i64> %a1) {
>  ; SSE-LABEL: test_mm_mul_epi32:
>  ; SSE:       # %bb.0:
> -; SSE-NEXT:    movdqa %xmm0, %xmm2
> -; SSE-NEXT:    psllq $32, %xmm2
> -; SSE-NEXT:    psrad $31, %xmm2
> -; SSE-NEXT:    pblendw {{.*#+}} xmm2 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]
> -; SSE-NEXT:    movdqa %xmm1, %xmm0
> -; SSE-NEXT:    psllq $32, %xmm0
> -; SSE-NEXT:    psrad $31, %xmm0
> -; SSE-NEXT:    pblendw {{.*#+}} xmm0 = xmm1[0,1],xmm0[2,3],xmm1[4,5],xmm0[6,7]
> -; SSE-NEXT:    pmuldq %xmm0, %xmm2
> -; SSE-NEXT:    movdqa %xmm2, %xmm0
> +; SSE-NEXT:    pmuldq %xmm1, %xmm0
>  ; SSE-NEXT:    ret{{[l|q]}}
>  ;
>  ; AVX1-LABEL: test_mm_mul_epi32:
>  ; AVX1:       # %bb.0:
> -; AVX1-NEXT:    vpsllq $32, %xmm0, %xmm2
> -; AVX1-NEXT:    vpsrad $31, %xmm2, %xmm2
> -; AVX1-NEXT:    vpblendw {{.*#+}} xmm0 = xmm0[0,1],xmm2[2,3],xmm0[4,5],xmm2[6,7]
> -; AVX1-NEXT:    vpsllq $32, %xmm1, %xmm2
> -; AVX1-NEXT:    vpsrad $31, %xmm2, %xmm2
> -; AVX1-NEXT:    vpblendw {{.*#+}} xmm1 = xmm1[0,1],xmm2[2,3],xmm1[4,5],xmm2[6,7]
>  ; AVX1-NEXT:    vpmuldq %xmm1, %xmm0, %xmm0
>  ; AVX1-NEXT:    ret{{[l|q]}}
>  ;
> 
> Modified: llvm/trunk/test/CodeGen/X86/urem-seteq-vec-nonsplat.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/urem-seteq-vec-nonsplat.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/urem-seteq-vec-nonsplat.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/urem-seteq-vec-nonsplat.ll Sat Oct  6 03:20:04 2018
> @@ -143,33 +143,31 @@ define <4 x i32> @test_urem_odd_div_nons
>  define <4 x i32> @test_urem_even_div_nonsplat(<4 x i32> %X) nounwind readnone {
>  ; CHECK-SSE2-LABEL: test_urem_even_div_nonsplat:
>  ; CHECK-SSE2:       # %bb.0:
> -; CHECK-SSE2-NEXT:    movdqa %xmm0, %xmm1
> -; CHECK-SSE2-NEXT:    psrld $1, %xmm1
> +; CHECK-SSE2-NEXT:    movdqa {{.*#+}} xmm1 = [2863311531,3435973837,2863311531,2454267027]
>  ; CHECK-SSE2-NEXT:    movdqa %xmm0, %xmm2
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm2 = xmm2[1,1],xmm1[3,3]
> -; CHECK-SSE2-NEXT:    movdqa {{.*#+}} xmm3 = [2863311531,3435973837,2863311531,2454267027]
> -; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm3[1,1,3,3]
> -; CHECK-SSE2-NEXT:    pmuludq %xmm2, %xmm4
> -; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm4[1,3,2,3]
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm1 = xmm1[3,0],xmm0[2,0]
> -; CHECK-SSE2-NEXT:    movaps %xmm0, %xmm4
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm4 = xmm4[0,1],xmm1[2,0]
> -; CHECK-SSE2-NEXT:    pmuludq %xmm3, %xmm4
> -; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm4[1,3,2,3]
> -; CHECK-SSE2-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm2[0],xmm1[1],xmm2[1]
> -; CHECK-SSE2-NEXT:    movdqa %xmm1, %xmm2
> -; CHECK-SSE2-NEXT:    psrld $2, %xmm2
> -; CHECK-SSE2-NEXT:    psrld $3, %xmm1
> -; CHECK-SSE2-NEXT:    movdqa %xmm1, %xmm3
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm3 = xmm3[1,1],xmm2[3,3]
> +; CHECK-SSE2-NEXT:    pmuludq %xmm1, %xmm2
> +; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm2[1,3,2,3]
> +; CHECK-SSE2-NEXT:    movdqa %xmm0, %xmm3
> +; CHECK-SSE2-NEXT:    psrld $1, %xmm3
> +; CHECK-SSE2-NEXT:    movdqa %xmm0, %xmm4
> +; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm4 = xmm4[1,1],xmm3[3,3]
> +; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[1,1,3,3]
> +; CHECK-SSE2-NEXT:    pmuludq %xmm4, %xmm1
> +; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[1,3,2,3]
> +; CHECK-SSE2-NEXT:    punpckldq {{.*#+}} xmm2 = xmm2[0],xmm1[0],xmm2[1],xmm1[1]
> +; CHECK-SSE2-NEXT:    movdqa %xmm2, %xmm1
> +; CHECK-SSE2-NEXT:    psrld $2, %xmm1
> +; CHECK-SSE2-NEXT:    psrld $3, %xmm2
> +; CHECK-SSE2-NEXT:    movdqa %xmm2, %xmm3
> +; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm3 = xmm3[1,1],xmm1[3,3]
>  ; CHECK-SSE2-NEXT:    movdqa {{.*#+}} xmm4 = [6,10,12,14]
>  ; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm5 = xmm4[1,1,3,3]
>  ; CHECK-SSE2-NEXT:    pmuludq %xmm3, %xmm5
>  ; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm5[0,2,2,3]
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm2 = xmm2[0,3],xmm1[1,2]
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm2 = xmm2[0,2,3,1]
> -; CHECK-SSE2-NEXT:    pmuludq %xmm4, %xmm2
> -; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[0,2,2,3]
> +; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm1 = xmm1[0,3],xmm2[1,2]
> +; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm1 = xmm1[0,2,3,1]
> +; CHECK-SSE2-NEXT:    pmuludq %xmm4, %xmm1
> +; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
>  ; CHECK-SSE2-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm3[0],xmm1[1],xmm3[1]
>  ; CHECK-SSE2-NEXT:    psubd %xmm1, %xmm0
>  ; CHECK-SSE2-NEXT:    pxor %xmm1, %xmm1
> @@ -277,20 +275,17 @@ define <4 x i32> @test_urem_pow2_nonspla
>  ; CHECK-SSE2-NEXT:    movdqa %xmm2, %xmm1
>  ; CHECK-SSE2-NEXT:    psrld $3, %xmm1
>  ; CHECK-SSE2-NEXT:    movdqa %xmm1, %xmm3
> -; CHECK-SSE2-NEXT:    punpckhqdq {{.*#+}} xmm3 = xmm3[1],xmm2[1]
> -; CHECK-SSE2-NEXT:    movdqa %xmm2, %xmm4
> -; CHECK-SSE2-NEXT:    psrld $2, %xmm4
> -; CHECK-SSE2-NEXT:    punpcklqdq {{.*#+}} xmm4 = xmm4[0],xmm1[0]
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm4 = xmm4[0,3],xmm3[0,3]
> -; CHECK-SSE2-NEXT:    movdqa {{.*#+}} xmm3 = [6,10,12,16]
> -; CHECK-SSE2-NEXT:    pmuludq %xmm3, %xmm4
> -; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm4[0,2,2,3]
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm1 = xmm1[1,1],xmm2[3,3]
> -; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm3[1,1,3,3]
> -; CHECK-SSE2-NEXT:    pmuludq %xmm1, %xmm2
> +; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm3 = xmm3[1,1],xmm2[3,3]
> +; CHECK-SSE2-NEXT:    movdqa {{.*#+}} xmm4 = [6,10,12,16]
> +; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm5 = xmm4[1,1,3,3]
> +; CHECK-SSE2-NEXT:    pmuludq %xmm3, %xmm5
> +; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm5[0,2,2,3]
> +; CHECK-SSE2-NEXT:    psrld $2, %xmm2
> +; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm2 = xmm2[0,3],xmm1[2,3]
> +; CHECK-SSE2-NEXT:    pmuludq %xmm4, %xmm2
>  ; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[0,2,2,3]
> -; CHECK-SSE2-NEXT:    punpckldq {{.*#+}} xmm4 = xmm4[0],xmm1[0],xmm4[1],xmm1[1]
> -; CHECK-SSE2-NEXT:    psubd %xmm4, %xmm0
> +; CHECK-SSE2-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm3[0],xmm1[1],xmm3[1]
> +; CHECK-SSE2-NEXT:    psubd %xmm1, %xmm0
>  ; CHECK-SSE2-NEXT:    pxor %xmm1, %xmm1
>  ; CHECK-SSE2-NEXT:    pcmpeqd %xmm1, %xmm0
>  ; CHECK-SSE2-NEXT:    psrld $31, %xmm0
> @@ -382,34 +377,31 @@ define <4 x i32> @test_urem_pow2_nonspla
>  define <4 x i32> @test_urem_one_nonsplat(<4 x i32> %X) nounwind readnone {
>  ; CHECK-SSE2-LABEL: test_urem_one_nonsplat:
>  ; CHECK-SSE2:       # %bb.0:
> -; CHECK-SSE2-NEXT:    movdqa %xmm0, %xmm1
> -; CHECK-SSE2-NEXT:    psrld $1, %xmm1
> +; CHECK-SSE2-NEXT:    movdqa {{.*#+}} xmm1 = [2863311531,0,2863311531,2454267027]
>  ; CHECK-SSE2-NEXT:    movdqa %xmm0, %xmm2
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm2 = xmm2[1,1],xmm1[3,3]
> -; CHECK-SSE2-NEXT:    movdqa {{.*#+}} xmm3 = [2863311531,0,2863311531,2454267027]
> -; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm3[1,1,3,3]
> -; CHECK-SSE2-NEXT:    pmuludq %xmm2, %xmm4
> -; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm4[1,3,2,3]
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm1 = xmm1[3,0],xmm0[2,0]
> -; CHECK-SSE2-NEXT:    movaps %xmm0, %xmm4
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm4 = xmm4[0,1],xmm1[2,0]
> -; CHECK-SSE2-NEXT:    pmuludq %xmm3, %xmm4
> -; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm4[1,3,2,3]
> -; CHECK-SSE2-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm2[0],xmm1[1],xmm2[1]
> -; CHECK-SSE2-NEXT:    movdqa %xmm1, %xmm2
> -; CHECK-SSE2-NEXT:    psrld $2, %xmm2
> -; CHECK-SSE2-NEXT:    psrld $3, %xmm1
> -; CHECK-SSE2-NEXT:    punpckhqdq {{.*#+}} xmm1 = xmm1[1],xmm2[1]
> -; CHECK-SSE2-NEXT:    movaps %xmm0, %xmm3
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm3 = xmm3[1,0],xmm2[0,0]
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm3 = xmm3[2,0],xmm1[0,3]
> -; CHECK-SSE2-NEXT:    movdqa {{.*#+}} xmm1 = [6,1,12,14]
> -; CHECK-SSE2-NEXT:    pmuludq %xmm1, %xmm3
> -; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm3[0,2,2,3]
> -; CHECK-SSE2-NEXT:    movaps %xmm0, %xmm4
> -; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm4 = xmm4[1,1],xmm2[3,3]
> +; CHECK-SSE2-NEXT:    pmuludq %xmm1, %xmm2
> +; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm2[1,3,2,3]
> +; CHECK-SSE2-NEXT:    movdqa %xmm0, %xmm3
> +; CHECK-SSE2-NEXT:    psrld $1, %xmm3
> +; CHECK-SSE2-NEXT:    movdqa %xmm0, %xmm4
> +; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm4 = xmm4[1,1],xmm3[3,3]
>  ; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[1,1,3,3]
>  ; CHECK-SSE2-NEXT:    pmuludq %xmm4, %xmm1
> +; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[1,3,2,3]
> +; CHECK-SSE2-NEXT:    punpckldq {{.*#+}} xmm2 = xmm2[0],xmm1[0],xmm2[1],xmm1[1]
> +; CHECK-SSE2-NEXT:    movdqa %xmm2, %xmm1
> +; CHECK-SSE2-NEXT:    psrld $2, %xmm1
> +; CHECK-SSE2-NEXT:    movdqa %xmm0, %xmm3
> +; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm3 = xmm3[1,0],xmm1[0,0]
> +; CHECK-SSE2-NEXT:    psrld $3, %xmm2
> +; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm3 = xmm3[2,0],xmm2[2,3]
> +; CHECK-SSE2-NEXT:    movdqa {{.*#+}} xmm2 = [6,1,12,14]
> +; CHECK-SSE2-NEXT:    pmuludq %xmm2, %xmm3
> +; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm3[0,2,2,3]
> +; CHECK-SSE2-NEXT:    movdqa %xmm0, %xmm4
> +; CHECK-SSE2-NEXT:    shufps {{.*#+}} xmm4 = xmm4[1,1],xmm1[3,3]
> +; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[1,1,3,3]
> +; CHECK-SSE2-NEXT:    pmuludq %xmm4, %xmm1
>  ; CHECK-SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
>  ; CHECK-SSE2-NEXT:    punpckldq {{.*#+}} xmm3 = xmm3[0],xmm1[0],xmm3[1],xmm1[1]
>  ; CHECK-SSE2-NEXT:    psubd %xmm3, %xmm0
> 
> Modified: llvm/trunk/test/CodeGen/X86/vector-idiv-v2i32.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-idiv-v2i32.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/vector-idiv-v2i32.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/vector-idiv-v2i32.ll Sat Oct  6 03:20:04 2018
> @@ -627,11 +627,10 @@ define void @test_urem_pow2_v2i32(<2 x i
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %eax
>  ; X86-NEXT:    movl {{[0-9]+}}(%esp), %ecx
> -; X86-NEXT:    movd {{.*#+}} xmm0 = mem[0],zero,zero,zero
> -; X86-NEXT:    movd {{.*#+}} xmm1 = mem[0],zero,zero,zero
> -; X86-NEXT:    punpcklqdq {{.*#+}} xmm1 = xmm1[0],xmm0[0]
> -; X86-NEXT:    pand {{\.LCPI.*}}, %xmm1
> -; X86-NEXT:    pshufd {{.*#+}} xmm0 = xmm1[0,2,2,3]
> +; X86-NEXT:    movsd {{.*#+}} xmm0 = mem[0],zero
> +; X86-NEXT:    shufps {{.*#+}} xmm0 = xmm0[0,1,1,3]
> +; X86-NEXT:    andps {{\.LCPI.*}}, %xmm0
> +; X86-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
>  ; X86-NEXT:    movq %xmm0, (%eax)
>  ; X86-NEXT:    retl
>  ;
> 
> Modified: llvm/trunk/test/CodeGen/X86/vector-mul.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-mul.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/vector-mul.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/vector-mul.ll Sat Oct  6 03:20:04 2018
> @@ -460,7 +460,7 @@ define <16 x i8> @mul_v16i8_neg5(<16 x i
>  define <2 x i64> @mul_v2i64_17_65(<2 x i64> %a0) nounwind {
>  ; X86-LABEL: mul_v2i64_17_65:
>  ; X86:       # %bb.0:
> -; X86-NEXT:    movdqa {{.*#+}} xmm1 = [17,0,65,0]
> +; X86-NEXT:    movdqa {{.*#+}} xmm1 = <17,u,65,u>
>  ; X86-NEXT:    movdqa %xmm0, %xmm2
>  ; X86-NEXT:    pmuludq %xmm1, %xmm2
>  ; X86-NEXT:    psrlq $32, %xmm0
> @@ -809,7 +809,7 @@ define <16 x i8> @mul_v16i8_neg15(<16 x
>  define <2 x i64> @mul_v2i64_15_63(<2 x i64> %a0) nounwind {
>  ; X86-LABEL: mul_v2i64_15_63:
>  ; X86:       # %bb.0:
> -; X86-NEXT:    movdqa {{.*#+}} xmm1 = [15,0,63,0]
> +; X86-NEXT:    movdqa {{.*#+}} xmm1 = <15,u,63,u>
>  ; X86-NEXT:    movdqa %xmm0, %xmm2
>  ; X86-NEXT:    pmuludq %xmm1, %xmm2
>  ; X86-NEXT:    psrlq $32, %xmm0
> @@ -845,16 +845,17 @@ define <2 x i64> @mul_v2i64_15_63(<2 x i
>  define <2 x i64> @mul_v2i64_neg_15_63(<2 x i64> %a0) nounwind {
>  ; X86-LABEL: mul_v2i64_neg_15_63:
>  ; X86:       # %bb.0:
> -; X86-NEXT:    movdqa {{.*#+}} xmm1 = [4294967281,4294967295,4294967233,4294967295]
> -; X86-NEXT:    movdqa %xmm0, %xmm2
> -; X86-NEXT:    pmuludq %xmm1, %xmm2
> -; X86-NEXT:    movdqa %xmm0, %xmm3
> +; X86-NEXT:    movdqa %xmm0, %xmm1
> +; X86-NEXT:    psrlq $32, %xmm1
> +; X86-NEXT:    movdqa {{.*#+}} xmm2 = <4294967281,u,4294967233,u>
> +; X86-NEXT:    pmuludq %xmm2, %xmm1
> +; X86-NEXT:    movdqa %xmm2, %xmm3
>  ; X86-NEXT:    psrlq $32, %xmm3
> -; X86-NEXT:    pmuludq %xmm1, %xmm3
> -; X86-NEXT:    pmuludq {{\.LCPI.*}}, %xmm0
> +; X86-NEXT:    pmuludq %xmm0, %xmm3
> +; X86-NEXT:    paddq %xmm1, %xmm3
> +; X86-NEXT:    psllq $32, %xmm3
> +; X86-NEXT:    pmuludq %xmm2, %xmm0
>  ; X86-NEXT:    paddq %xmm3, %xmm0
> -; X86-NEXT:    psllq $32, %xmm0
> -; X86-NEXT:    paddq %xmm2, %xmm0
>  ; X86-NEXT:    retl
>  ;
>  ; X64-LABEL: mul_v2i64_neg_15_63:
> @@ -889,16 +890,17 @@ define <2 x i64> @mul_v2i64_neg_15_63(<2
>  define <2 x i64> @mul_v2i64_neg_17_65(<2 x i64> %a0) nounwind {
>  ; X86-LABEL: mul_v2i64_neg_17_65:
>  ; X86:       # %bb.0:
> -; X86-NEXT:    movdqa {{.*#+}} xmm1 = [4294967279,4294967295,4294967231,4294967295]
> -; X86-NEXT:    movdqa %xmm0, %xmm2
> -; X86-NEXT:    pmuludq %xmm1, %xmm2
> -; X86-NEXT:    movdqa %xmm0, %xmm3
> +; X86-NEXT:    movdqa %xmm0, %xmm1
> +; X86-NEXT:    psrlq $32, %xmm1
> +; X86-NEXT:    movdqa {{.*#+}} xmm2 = <4294967279,u,4294967231,u>
> +; X86-NEXT:    pmuludq %xmm2, %xmm1
> +; X86-NEXT:    movdqa %xmm2, %xmm3
>  ; X86-NEXT:    psrlq $32, %xmm3
> -; X86-NEXT:    pmuludq %xmm1, %xmm3
> -; X86-NEXT:    pmuludq {{\.LCPI.*}}, %xmm0
> +; X86-NEXT:    pmuludq %xmm0, %xmm3
> +; X86-NEXT:    paddq %xmm1, %xmm3
> +; X86-NEXT:    psllq $32, %xmm3
> +; X86-NEXT:    pmuludq %xmm2, %xmm0
>  ; X86-NEXT:    paddq %xmm3, %xmm0
> -; X86-NEXT:    psllq $32, %xmm0
> -; X86-NEXT:    paddq %xmm2, %xmm0
>  ; X86-NEXT:    retl
>  ;
>  ; X64-LABEL: mul_v2i64_neg_17_65:
> @@ -933,7 +935,7 @@ define <2 x i64> @mul_v2i64_neg_17_65(<2
>  define <2 x i64> @mul_v2i64_0_1(<2 x i64> %a0) nounwind {
>  ; X86-LABEL: mul_v2i64_0_1:
>  ; X86:       # %bb.0:
> -; X86-NEXT:    movdqa {{.*#+}} xmm1 = [0,0,1,0]
> +; X86-NEXT:    movdqa {{.*#+}} xmm1 = <0,u,1,u>
>  ; X86-NEXT:    movdqa %xmm0, %xmm2
>  ; X86-NEXT:    pmuludq %xmm1, %xmm2
>  ; X86-NEXT:    psrlq $32, %xmm0
> @@ -975,7 +977,7 @@ define <2 x i64> @mul_v2i64_neg_0_1(<2 x
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movdqa %xmm0, %xmm1
>  ; X86-NEXT:    psrlq $32, %xmm1
> -; X86-NEXT:    movdqa {{.*#+}} xmm2 = [0,0,4294967295,4294967295]
> +; X86-NEXT:    movdqa {{.*#+}} xmm2 = <0,u,4294967295,u>
>  ; X86-NEXT:    pmuludq %xmm2, %xmm1
>  ; X86-NEXT:    movdqa %xmm2, %xmm3
>  ; X86-NEXT:    psrlq $32, %xmm3
> @@ -1029,7 +1031,7 @@ define <2 x i64> @mul_v2i64_15_neg_63(<2
>  ; X86:       # %bb.0:
>  ; X86-NEXT:    movdqa %xmm0, %xmm1
>  ; X86-NEXT:    psrlq $32, %xmm1
> -; X86-NEXT:    movdqa {{.*#+}} xmm2 = [15,0,4294967233,4294967295]
> +; X86-NEXT:    movdqa {{.*#+}} xmm2 = <15,u,4294967233,u>
>  ; X86-NEXT:    pmuludq %xmm2, %xmm1
>  ; X86-NEXT:    movdqa %xmm2, %xmm3
>  ; X86-NEXT:    psrlq $32, %xmm3
> @@ -1172,7 +1174,7 @@ define <16 x i8> @mul_v16i8_0_1_3_7_15_3
>  define <2 x i64> @mul_v2i64_68_132(<2 x i64> %x) nounwind {
>  ; X86-LABEL: mul_v2i64_68_132:
>  ; X86:       # %bb.0:
> -; X86-NEXT:    movdqa {{.*#+}} xmm1 = [68,0,132,0]
> +; X86-NEXT:    movdqa {{.*#+}} xmm1 = <68,u,132,u>
>  ; X86-NEXT:    movdqa %xmm0, %xmm2
>  ; X86-NEXT:    pmuludq %xmm1, %xmm2
>  ; X86-NEXT:    psrlq $32, %xmm0
> @@ -1208,7 +1210,7 @@ define <2 x i64> @mul_v2i64_68_132(<2 x
>  define <2 x i64> @mul_v2i64_60_120(<2 x i64> %x) nounwind {
>  ; X86-LABEL: mul_v2i64_60_120:
>  ; X86:       # %bb.0:
> -; X86-NEXT:    movdqa {{.*#+}} xmm1 = [60,0,124,0]
> +; X86-NEXT:    movdqa {{.*#+}} xmm1 = <60,u,124,u>
>  ; X86-NEXT:    movdqa %xmm0, %xmm2
>  ; X86-NEXT:    pmuludq %xmm1, %xmm2
>  ; X86-NEXT:    psrlq $32, %xmm0
> 
> Modified: llvm/trunk/test/CodeGen/X86/vector-reduce-mul.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-reduce-mul.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/vector-reduce-mul.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/vector-reduce-mul.ll Sat Oct  6 03:20:04 2018
> @@ -835,9 +835,8 @@ define i32 @test_v8i32(<8 x i32> %a0) {
>  ; SSE2-NEXT:    pmuludq %xmm0, %xmm2
>  ; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]
>  ; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm1[2,2,0,0]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,0,2,2]
> -; SSE2-NEXT:    pmuludq %xmm2, %xmm1
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
> +; SSE2-NEXT:    pmuludq %xmm1, %xmm2
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[0,2,2,3]
>  ; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
>  ; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm0[1,1,2,3]
>  ; SSE2-NEXT:    pmuludq %xmm0, %xmm1
> @@ -896,34 +895,25 @@ define i32 @test_v8i32(<8 x i32> %a0) {
>  define i32 @test_v16i32(<16 x i32> %a0) {
>  ; SSE2-LABEL: test_v16i32:
>  ; SSE2:       # %bb.0:
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm1[1,1,3,3]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm3[1,1,3,3]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm5 = xmm1[1,1,3,3]
> +; SSE2-NEXT:    pmuludq %xmm4, %xmm5
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm2[1,1,3,3]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm6 = xmm0[1,1,3,3]
> +; SSE2-NEXT:    pmuludq %xmm4, %xmm6
> +; SSE2-NEXT:    pmuludq %xmm5, %xmm6
>  ; SSE2-NEXT:    pmuludq %xmm3, %xmm1
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm3[1,1,3,3]
> -; SSE2-NEXT:    pmuludq %xmm4, %xmm3
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm3[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm4[0],xmm1[1],xmm4[1]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm0[1,1,3,3]
>  ; SSE2-NEXT:    pmuludq %xmm2, %xmm0
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm2[1,1,3,3]
> -; SSE2-NEXT:    pmuludq %xmm4, %xmm2
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm2[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm4[0],xmm0[1],xmm4[1]
>  ; SSE2-NEXT:    pmuludq %xmm1, %xmm0
>  ; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm3[0,0,2,2]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm2[0,0,2,2]
> -; SSE2-NEXT:    pmuludq %xmm1, %xmm2
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[0,2,2,3]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm6[0,2,2,3]
>  ; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
>  ; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm0[2,3,0,1]
>  ; SSE2-NEXT:    pmuludq %xmm0, %xmm1
>  ; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm1[0,2,2,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[2,2,0,0]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm2[0,0,2,2]
> -; SSE2-NEXT:    pmuludq %xmm1, %xmm2
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[0,2,2,3]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm6[2,2,0,0]
> +; SSE2-NEXT:    pmuludq %xmm6, %xmm1
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
>  ; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
>  ; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm0[1,1,2,3]
>  ; SSE2-NEXT:    pmuludq %xmm0, %xmm1
> @@ -992,64 +982,39 @@ define i32 @test_v32i32(<32 x i32> %a0)
>  ; SSE2:       # %bb.0:
>  ; SSE2-NEXT:    pshufd {{.*#+}} xmm8 = xmm2[1,1,3,3]
>  ; SSE2-NEXT:    pmuludq %xmm6, %xmm2
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm2[0,2,2,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm9 = xmm6[1,1,3,3]
> -; SSE2-NEXT:    pmuludq %xmm8, %xmm9
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm6 = xmm9[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm2 = xmm2[0],xmm6[0],xmm2[1],xmm6[1]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm8 = xmm0[1,1,3,3]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm9 = xmm0[1,1,3,3]
>  ; SSE2-NEXT:    pmuludq %xmm4, %xmm0
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm6 = xmm0[0,2,2,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm4[1,1,3,3]
> -; SSE2-NEXT:    pmuludq %xmm8, %xmm0
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm0[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm6 = xmm6[0],xmm4[0],xmm6[1],xmm4[1]
> -; SSE2-NEXT:    pmuludq %xmm2, %xmm6
> +; SSE2-NEXT:    pmuludq %xmm2, %xmm0
>  ; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm3[1,1,3,3]
>  ; SSE2-NEXT:    pmuludq %xmm7, %xmm3
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm3[0,2,2,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm7[1,1,3,3]
> -; SSE2-NEXT:    pmuludq %xmm2, %xmm4
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm4[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm3 = xmm3[0],xmm2[0],xmm3[1],xmm2[1]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm1[1,1,3,3]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm10 = xmm1[1,1,3,3]
>  ; SSE2-NEXT:    pmuludq %xmm5, %xmm1
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm5 = xmm5[1,1,3,3]
> -; SSE2-NEXT:    pmuludq %xmm2, %xmm5
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm5[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm2[0],xmm1[1],xmm2[1]
>  ; SSE2-NEXT:    pmuludq %xmm3, %xmm1
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm4[0,0,2,2]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm5[0,0,2,2]
> -; SSE2-NEXT:    pmuludq %xmm2, %xmm3
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm3[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm2[0],xmm1[1],xmm2[1]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm6[0,2,2,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm9[0,0,2,2]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,0,2,2]
> -; SSE2-NEXT:    pmuludq %xmm4, %xmm0
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm0[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm2 = xmm2[0],xmm4[0],xmm2[1],xmm4[1]
> -; SSE2-NEXT:    pmuludq %xmm1, %xmm2
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[0,2,2,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm3[0,0,2,2]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,0,2,2]
> +; SSE2-NEXT:    pmuludq %xmm0, %xmm1
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm6[1,1,3,3]
> +; SSE2-NEXT:    pmuludq %xmm8, %xmm0
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm4[1,1,3,3]
> +; SSE2-NEXT:    pmuludq %xmm9, %xmm3
> +; SSE2-NEXT:    pmuludq %xmm0, %xmm3
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm7[1,1,3,3]
>  ; SSE2-NEXT:    pmuludq %xmm2, %xmm0
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm0[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm2[0],xmm1[1],xmm2[1]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm1[2,3,0,1]
> -; SSE2-NEXT:    pmuludq %xmm1, %xmm2
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm5[1,1,3,3]
> +; SSE2-NEXT:    pmuludq %xmm10, %xmm2
> +; SSE2-NEXT:    pmuludq %xmm0, %xmm2
> +; SSE2-NEXT:    pmuludq %xmm3, %xmm2
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm1[0,2,2,3]
>  ; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[0,2,2,3]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm0[2,2,0,0]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,0,2,2]
> -; SSE2-NEXT:    pmuludq %xmm2, %xmm0
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
> -; SSE2-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm0[0],xmm1[1],xmm0[1]
> -; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm1[1,1,2,3]
> -; SSE2-NEXT:    pmuludq %xmm1, %xmm0
> -; SSE2-NEXT:    movd %xmm0, %eax
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm0[2,3,0,1]
> +; SSE2-NEXT:    pmuludq %xmm0, %xmm1
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm1[0,2,2,3]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[2,2,0,0]
> +; SSE2-NEXT:    pmuludq %xmm2, %xmm1
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
> +; SSE2-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
> +; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm0[1,1,2,3]
> +; SSE2-NEXT:    pmuludq %xmm0, %xmm1
> +; SSE2-NEXT:    movd %xmm1, %eax
>  ; SSE2-NEXT:    retq
>  ;
>  ; SSE41-LABEL: test_v32i32:
> 
> Modified: llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll?rev=343913&r1=343912&r2=343913&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/vector-trunc-math.ll Sat Oct  6 03:20:04 2018
> @@ -5595,40 +5595,29 @@ define <4 x i32> @mul_add_const_v4i64_v4
>  define <4 x i32> @mul_add_self_v4i64_v4i32(<4 x i32> %a0, <4 x i32> %a1) nounwind {
>  ; SSE-LABEL: mul_add_self_v4i64_v4i32:
>  ; SSE:       # %bb.0:
> -; SSE-NEXT:    pshufd {{.*#+}} xmm2 = xmm0[2,3,0,1]
> -; SSE-NEXT:    movdqa %xmm2, %xmm3
> -; SSE-NEXT:    psrad $31, %xmm3
> -; SSE-NEXT:    punpckldq {{.*#+}} xmm2 = xmm2[0],xmm3[0],xmm2[1],xmm3[1]
> -; SSE-NEXT:    movdqa %xmm0, %xmm6
> -; SSE-NEXT:    psrad $31, %xmm6
> -; SSE-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm6[0],xmm0[1],xmm6[1]
> -; SSE-NEXT:    pshufd {{.*#+}} xmm4 = xmm1[2,3,0,1]
> -; SSE-NEXT:    movdqa %xmm4, %xmm5
> -; SSE-NEXT:    psrad $31, %xmm5
> -; SSE-NEXT:    punpckldq {{.*#+}} xmm4 = xmm4[0],xmm5[0],xmm4[1],xmm5[1]
> -; SSE-NEXT:    movdqa %xmm1, %xmm7
> -; SSE-NEXT:    psrad $31, %xmm7
> -; SSE-NEXT:    punpckldq {{.*#+}} xmm1 = xmm1[0],xmm7[0],xmm1[1],xmm7[1]
> -; SSE-NEXT:    pxor %xmm8, %xmm8
> -; SSE-NEXT:    punpckldq {{.*#+}} xmm6 = xmm6[0],xmm8[0],xmm6[1],xmm8[1]
> -; SSE-NEXT:    pmuludq %xmm1, %xmm6
> -; SSE-NEXT:    punpckldq {{.*#+}} xmm7 = xmm7[0],xmm8[0],xmm7[1],xmm8[1]
> -; SSE-NEXT:    pmuludq %xmm0, %xmm7
> -; SSE-NEXT:    paddq %xmm6, %xmm7
> -; SSE-NEXT:    psllq $32, %xmm7
> -; SSE-NEXT:    pmuludq %xmm0, %xmm1
> -; SSE-NEXT:    paddq %xmm7, %xmm1
> -; SSE-NEXT:    punpckldq {{.*#+}} xmm3 = xmm3[0],xmm8[0],xmm3[1],xmm8[1]
> -; SSE-NEXT:    pmuludq %xmm4, %xmm3
> -; SSE-NEXT:    punpckldq {{.*#+}} xmm5 = xmm5[0],xmm8[0],xmm5[1],xmm8[1]
> -; SSE-NEXT:    pmuludq %xmm2, %xmm5
> -; SSE-NEXT:    paddq %xmm3, %xmm5
> -; SSE-NEXT:    psllq $32, %xmm5
> -; SSE-NEXT:    pmuludq %xmm2, %xmm4
> -; SSE-NEXT:    paddq %xmm5, %xmm4
> -; SSE-NEXT:    shufps {{.*#+}} xmm1 = xmm1[0,2],xmm4[0,2]
> -; SSE-NEXT:    paddd %xmm1, %xmm1
> -; SSE-NEXT:    movdqa %xmm1, %xmm0
> +; SSE-NEXT:    pshufd {{.*#+}} xmm3 = xmm0[2,3,0,1]
> +; SSE-NEXT:    movdqa %xmm3, %xmm4
> +; SSE-NEXT:    psrad $31, %xmm4
> +; SSE-NEXT:    punpckldq {{.*#+}} xmm3 = xmm3[0],xmm4[0],xmm3[1],xmm4[1]
> +; SSE-NEXT:    movdqa %xmm0, %xmm2
> +; SSE-NEXT:    psrad $31, %xmm2
> +; SSE-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]
> +; SSE-NEXT:    pshufd {{.*#+}} xmm5 = xmm1[2,1,3,3]
> +; SSE-NEXT:    pshufd {{.*#+}} xmm1 = xmm1[0,1,1,3]
> +; SSE-NEXT:    pmuludq %xmm1, %xmm0
> +; SSE-NEXT:    pxor %xmm6, %xmm6
> +; SSE-NEXT:    punpckldq {{.*#+}} xmm2 = xmm2[0],xmm6[0],xmm2[1],xmm6[1]
> +; SSE-NEXT:    pmuludq %xmm1, %xmm2
> +; SSE-NEXT:    psllq $32, %xmm2
> +; SSE-NEXT:    paddq %xmm0, %xmm2
> +; SSE-NEXT:    pmuludq %xmm5, %xmm3
> +; SSE-NEXT:    punpckldq {{.*#+}} xmm4 = xmm4[0],xmm6[0],xmm4[1],xmm6[1]
> +; SSE-NEXT:    pmuludq %xmm5, %xmm4
> +; SSE-NEXT:    psllq $32, %xmm4
> +; SSE-NEXT:    paddq %xmm3, %xmm4
> +; SSE-NEXT:    shufps {{.*#+}} xmm2 = xmm2[0,2],xmm4[0,2]
> +; SSE-NEXT:    paddd %xmm2, %xmm2
> +; SSE-NEXT:    movdqa %xmm2, %xmm0
>  ; SSE-NEXT:    retq
>  ;
>  ; AVX-LABEL: mul_add_self_v4i64_v4i32:
> 
> 

-- 
Jan Vesely <jan.vesely at rutgers.edu>
-------------- attachment: AMDGPU assembly (.asm) output for the clz tests --------------
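For reference, the attached kernels correspond to the OpenCL clz (count leading zeros) tests on char vectors; a minimal sketch of what such a kernel computes is below. The exact test source is not attached, so the signature and parameter names are assumptions, not the actual test code:

	/* Hypothetical reduction of the failing test kernel: each work-item
	   applies the OpenCL clz() builtin to one char4 element. clz() on
	   char operates per 8-bit lane, which is where the miscompiled
	   vector-of-chars codegen shows up. */
	__kernel void test_4_clz_char(__global char4 *out, __global const char4 *in)
	{
	    size_t gid = get_global_id(0);
	    out[gid] = clz(in[gid]);
	}

The test_1/test_2/test_8/test_16 variants differ only in the vector width of the element type.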
	.text
	.section	.AMDGPU.config
	.long	47176
	.long	11468865
	.long	47180
	.long	144
	.long	47200
	.long	0
	.long	4
	.long	0
	.long	8
	.long	0
	.text
	.globl	test_1_clz_char         ; -- Begin function test_1_clz_char
	.p2align	8
	.type	test_1_clz_char,@function
	.amdgpu_hsa_kernel test_1_clz_char
test_1_clz_char:                        ; @test_1_clz_char
	.amd_kernel_code_t
		amd_code_version_major = 1
		amd_code_version_minor = 2
		amd_machine_kind = 1
		amd_machine_version_major = 8
		amd_machine_version_minor = 0
		amd_machine_version_stepping = 1
		kernel_code_entry_byte_offset = 256
		kernel_code_prefetch_byte_size = 0
		granulated_workitem_vgpr_count = 1
		granulated_wavefront_sgpr_count = 1
		priority = 0
		float_mode = 240
		priv = 0
		enable_dx10_clamp = 1
		debug_mode = 0
		enable_ieee_mode = 1
		enable_sgpr_private_segment_wave_byte_offset = 0
		user_sgpr_count = 8
		enable_trap_handler = 0
		enable_sgpr_workgroup_id_x = 1
		enable_sgpr_workgroup_id_y = 0
		enable_sgpr_workgroup_id_z = 0
		enable_sgpr_workgroup_info = 0
		enable_vgpr_workitem_id = 0
		enable_exception_msb = 0
		granulated_lds_size = 0
		enable_exception = 0
		enable_sgpr_private_segment_buffer = 1
		enable_sgpr_dispatch_ptr = 1
		enable_sgpr_queue_ptr = 0
		enable_sgpr_kernarg_segment_ptr = 1
		enable_sgpr_dispatch_id = 0
		enable_sgpr_flat_scratch_init = 0
		enable_sgpr_private_segment_size = 0
		enable_sgpr_grid_workgroup_count_x = 0
		enable_sgpr_grid_workgroup_count_y = 0
		enable_sgpr_grid_workgroup_count_z = 0
		enable_ordered_append_gds = 0
		private_element_size = 1
		is_ptr64 = 1
		is_dynamic_callstack = 0
		is_debug_enabled = 0
		is_xnack_enabled = 1
		workitem_private_segment_byte_size = 0
		workgroup_group_segment_byte_size = 0
		gds_segment_byte_size = 0
		kernarg_segment_byte_size = 32
		workgroup_fbarrier_count = 0
		wavefront_sgpr_count = 15
		workitem_vgpr_count = 5
		reserved_vgpr_first = 0
		reserved_vgpr_count = 0
		reserved_sgpr_first = 0
		reserved_sgpr_count = 0
		debug_wavefront_private_segment_offset_sgpr = 0
		debug_private_segment_buffer_sgpr = 0
		kernarg_segment_alignment = 4
		group_segment_alignment = 4
		private_segment_alignment = 4
		wavefront_size = 6
		call_convention = -1
		runtime_loader_kernel_symbol = 0
	.end_amd_kernel_code_t
; %bb.0:                                ; %entry
	s_load_dword s9, s[6:7], 0x14
	s_load_dword s10, s[4:5], 0x4
	s_load_dwordx4 s[0:3], s[6:7], 0x0
	s_mov_b32 s4, 0xffff
	v_mov_b32_e32 v1, 0
	s_waitcnt lgkmcnt(0)
	v_add_u32_e32 v0, vcc, s9, v0
	s_and_b32 s4, s10, s4
	v_addc_u32_e32 v1, vcc, 0, v1, vcc
	v_mov_b32_e32 v2, s8
	v_mad_u64_u32 v[0:1], s[4:5], s4, v2, v[0:1]
	v_mov_b32_e32 v3, s3
	v_mov_b32_e32 v4, s1
	v_add_u32_e32 v2, vcc, s2, v0
	v_addc_u32_e32 v3, vcc, v3, v1, vcc
	s_nop 0
	s_nop 0
	flat_load_ubyte v2, v[2:3]
	s_waitcnt vmcnt(0) lgkmcnt(0)
	v_ffbh_u32_sdwa v3, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0
	v_add_u32_e32 v3, vcc, -16, v3
	v_add_u16_e32 v3, -8, v3
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_cndmask_b32_e32 v2, 8, v3, vcc
	v_add_u32_e32 v0, vcc, s0, v0
	v_addc_u32_e32 v1, vcc, v4, v1, vcc
	s_nop 0
	s_nop 0
	flat_store_byte v[0:1], v2
	s_endpgm
.Lfunc_end0:
	.size	test_1_clz_char, .Lfunc_end0-test_1_clz_char
                                        ; -- End function
	.section	.AMDGPU.csdata
; Kernel info:
; codeLenInByte = 152
; NumSgprs: 15
; NumVgprs: 5
; ScratchSize: 0
; MemoryBound: 0
; FloatMode: 240
; IeeeMode: 1
; LDSByteSize: 0 bytes/workgroup (compile time only)
; SGPRBlocks: 1
; VGPRBlocks: 1
; NumSGPRsForWavesPerEU: 15
; NumVGPRsForWavesPerEU: 5
; WaveLimiterHint : 1
; COMPUTE_PGM_RSRC2:USER_SGPR: 8
; COMPUTE_PGM_RSRC2:TRAP_HANDLER: 0
; COMPUTE_PGM_RSRC2:TGID_X_EN: 1
; COMPUTE_PGM_RSRC2:TGID_Y_EN: 0
; COMPUTE_PGM_RSRC2:TGID_Z_EN: 0
; COMPUTE_PGM_RSRC2:TIDIG_COMP_CNT: 0
	.section	.AMDGPU.config
	.long	47176
	.long	11468865
	.long	47180
	.long	144
	.long	47200
	.long	0
	.long	4
	.long	0
	.long	8
	.long	0
	.text
	.globl	test_2_clz_char         ; -- Begin function test_2_clz_char
	.p2align	8
	.type	test_2_clz_char,@function
	.amdgpu_hsa_kernel test_2_clz_char
test_2_clz_char:                        ; @test_2_clz_char
	.amd_kernel_code_t
		amd_code_version_major = 1
		amd_code_version_minor = 2
		amd_machine_kind = 1
		amd_machine_version_major = 8
		amd_machine_version_minor = 0
		amd_machine_version_stepping = 1
		kernel_code_entry_byte_offset = 256
		kernel_code_prefetch_byte_size = 0
		granulated_workitem_vgpr_count = 1
		granulated_wavefront_sgpr_count = 1
		priority = 0
		float_mode = 240
		priv = 0
		enable_dx10_clamp = 1
		debug_mode = 0
		enable_ieee_mode = 1
		enable_sgpr_private_segment_wave_byte_offset = 0
		user_sgpr_count = 8
		enable_trap_handler = 0
		enable_sgpr_workgroup_id_x = 1
		enable_sgpr_workgroup_id_y = 0
		enable_sgpr_workgroup_id_z = 0
		enable_sgpr_workgroup_info = 0
		enable_vgpr_workitem_id = 0
		enable_exception_msb = 0
		granulated_lds_size = 0
		enable_exception = 0
		enable_sgpr_private_segment_buffer = 1
		enable_sgpr_dispatch_ptr = 1
		enable_sgpr_queue_ptr = 0
		enable_sgpr_kernarg_segment_ptr = 1
		enable_sgpr_dispatch_id = 0
		enable_sgpr_flat_scratch_init = 0
		enable_sgpr_private_segment_size = 0
		enable_sgpr_grid_workgroup_count_x = 0
		enable_sgpr_grid_workgroup_count_y = 0
		enable_sgpr_grid_workgroup_count_z = 0
		enable_ordered_append_gds = 0
		private_element_size = 1
		is_ptr64 = 1
		is_dynamic_callstack = 0
		is_debug_enabled = 0
		is_xnack_enabled = 1
		workitem_private_segment_byte_size = 0
		workgroup_group_segment_byte_size = 0
		gds_segment_byte_size = 0
		kernarg_segment_byte_size = 32
		workgroup_fbarrier_count = 0
		wavefront_sgpr_count = 15
		workitem_vgpr_count = 7
		reserved_vgpr_first = 0
		reserved_vgpr_count = 0
		reserved_sgpr_first = 0
		reserved_sgpr_count = 0
		debug_wavefront_private_segment_offset_sgpr = 0
		debug_private_segment_buffer_sgpr = 0
		kernarg_segment_alignment = 4
		group_segment_alignment = 4
		private_segment_alignment = 4
		wavefront_size = 6
		call_convention = -1
		runtime_loader_kernel_symbol = 0
	.end_amd_kernel_code_t
; %bb.0:                                ; %entry
	s_load_dword s9, s[6:7], 0x14
	s_load_dword s10, s[4:5], 0x4
	s_load_dwordx4 s[0:3], s[6:7], 0x0
	s_mov_b32 s4, 0xffff
	v_mov_b32_e32 v1, 0
	s_waitcnt lgkmcnt(0)
	v_add_u32_e32 v0, vcc, s9, v0
	s_and_b32 s4, s10, s4
	v_addc_u32_e32 v1, vcc, 0, v1, vcc
	v_mov_b32_e32 v2, s8
	v_mad_u64_u32 v[0:1], s[4:5], s4, v2, v[0:1]
	v_mov_b32_e32 v3, s3
	v_mov_b32_e32 v4, 0
	v_lshlrev_b64 v[0:1], 1, v[0:1]
	v_add_u32_e32 v2, vcc, s2, v0
	v_addc_u32_e32 v3, vcc, v3, v1, vcc
	v_add_u32_e32 v0, vcc, s0, v0
	s_nop 0
	s_nop 0
	flat_load_ubyte v5, v[2:3]
	v_mov_b32_e32 v2, s1
	v_addc_u32_e32 v1, vcc, v2, v1, vcc
	v_add_u32_e32 v2, vcc, 1, v0
	v_addc_u32_e32 v3, vcc, 0, v1, vcc
	s_waitcnt vmcnt(0) lgkmcnt(0)
	v_ffbh_u32_sdwa v6, v5 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0
	v_add_u32_e32 v6, vcc, -16, v6
	v_add_u16_e32 v6, -8, v6
	v_cmp_ne_u16_e32 vcc, 0, v5
	v_cndmask_b32_e32 v5, 8, v6, vcc
	s_nop 0
	s_nop 0
	flat_store_byte v[0:1], v5
	flat_store_byte v[2:3], v4
	s_endpgm
.Lfunc_end1:
	.size	test_2_clz_char, .Lfunc_end1-test_2_clz_char
                                        ; -- End function
	.section	.AMDGPU.csdata
; Kernel info:
; codeLenInByte = 180
; NumSgprs: 15
; NumVgprs: 7
; ScratchSize: 0
; MemoryBound: 0
; FloatMode: 240
; IeeeMode: 1
; LDSByteSize: 0 bytes/workgroup (compile time only)
; SGPRBlocks: 1
; VGPRBlocks: 1
; NumSGPRsForWavesPerEU: 15
; NumVGPRsForWavesPerEU: 7
; WaveLimiterHint : 1
; COMPUTE_PGM_RSRC2:USER_SGPR: 8
; COMPUTE_PGM_RSRC2:TRAP_HANDLER: 0
; COMPUTE_PGM_RSRC2:TGID_X_EN: 1
; COMPUTE_PGM_RSRC2:TGID_Y_EN: 0
; COMPUTE_PGM_RSRC2:TGID_Z_EN: 0
; COMPUTE_PGM_RSRC2:TIDIG_COMP_CNT: 0
	.section	.AMDGPU.config
	.long	47176
	.long	11468869
	.long	47180
	.long	144
	.long	47200
	.long	0
	.long	4
	.long	0
	.long	8
	.long	0
	.text
	.globl	test_4_clz_char         ; -- Begin function test_4_clz_char
	.p2align	8
	.type	test_4_clz_char, at function
	.amdgpu_hsa_kernel test_4_clz_char
test_4_clz_char:                        ; @test_4_clz_char
	.amd_kernel_code_t
		amd_code_version_major = 1
		amd_code_version_minor = 2
		amd_machine_kind = 1
		amd_machine_version_major = 8
		amd_machine_version_minor = 0
		amd_machine_version_stepping = 1
		kernel_code_entry_byte_offset = 256
		kernel_code_prefetch_byte_size = 0
		granulated_workitem_vgpr_count = 5
		granulated_wavefront_sgpr_count = 1
		priority = 0
		float_mode = 240
		priv = 0
		enable_dx10_clamp = 1
		debug_mode = 0
		enable_ieee_mode = 1
		enable_sgpr_private_segment_wave_byte_offset = 0
		user_sgpr_count = 8
		enable_trap_handler = 0
		enable_sgpr_workgroup_id_x = 1
		enable_sgpr_workgroup_id_y = 0
		enable_sgpr_workgroup_id_z = 0
		enable_sgpr_workgroup_info = 0
		enable_vgpr_workitem_id = 0
		enable_exception_msb = 0
		granulated_lds_size = 0
		enable_exception = 0
		enable_sgpr_private_segment_buffer = 1
		enable_sgpr_dispatch_ptr = 1
		enable_sgpr_queue_ptr = 0
		enable_sgpr_kernarg_segment_ptr = 1
		enable_sgpr_dispatch_id = 0
		enable_sgpr_flat_scratch_init = 0
		enable_sgpr_private_segment_size = 0
		enable_sgpr_grid_workgroup_count_x = 0
		enable_sgpr_grid_workgroup_count_y = 0
		enable_sgpr_grid_workgroup_count_z = 0
		enable_ordered_append_gds = 0
		private_element_size = 1
		is_ptr64 = 1
		is_dynamic_callstack = 0
		is_debug_enabled = 0
		is_xnack_enabled = 1
		workitem_private_segment_byte_size = 0
		workgroup_group_segment_byte_size = 0
		gds_segment_byte_size = 0
		kernarg_segment_byte_size = 32
		workgroup_fbarrier_count = 0
		wavefront_sgpr_count = 15
		workitem_vgpr_count = 21
		reserved_vgpr_first = 0
		reserved_vgpr_count = 0
		reserved_sgpr_first = 0
		reserved_sgpr_count = 0
		debug_wavefront_private_segment_offset_sgpr = 0
		debug_private_segment_buffer_sgpr = 0
		kernarg_segment_alignment = 4
		group_segment_alignment = 4
		private_segment_alignment = 4
		wavefront_size = 6
		call_convention = -1
		runtime_loader_kernel_symbol = 0
	.end_amd_kernel_code_t
; %bb.0:                                ; %entry
	s_load_dword s9, s[6:7], 0x14
	s_load_dword s10, s[4:5], 0x4
	s_load_dwordx4 s[0:3], s[6:7], 0x0
	s_mov_b32 s6, 0xffff
	v_mov_b32_e32 v1, 0
	s_waitcnt lgkmcnt(0)
	v_add_u32_e32 v0, vcc, s9, v0
	v_addc_u32_e32 v1, vcc, 0, v1, vcc
	s_and_b32 s4, s10, s6
	v_mov_b32_e32 v2, s8
	v_mad_u64_u32 v[0:1], s[4:5], s4, v2, v[0:1]
	v_mov_b32_e32 v3, s3
	v_mov_b32_e32 v10, s1
	v_mov_b32_e32 v16, 0
	v_lshlrev_b64 v[0:1], 2, v[0:1]
	v_add_u32_e32 v2, vcc, s2, v0
	v_addc_u32_e32 v3, vcc, v3, v1, vcc
	v_add_u32_e32 v4, vcc, 1, v2
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	v_add_u32_e32 v6, vcc, 3, v2
	v_addc_u32_e32 v7, vcc, 0, v3, vcc
	v_add_u32_e32 v8, vcc, 2, v2
	v_addc_u32_e32 v9, vcc, 0, v3, vcc
	v_add_u32_e32 v0, vcc, s0, v0
	v_addc_u32_e32 v1, vcc, v10, v1, vcc
	v_add_u32_e32 v10, vcc, 3, v0
	v_addc_u32_e32 v11, vcc, 0, v1, vcc
	v_add_u32_e32 v12, vcc, 2, v0
	v_addc_u32_e32 v13, vcc, 0, v1, vcc
	s_movk_i32 s2, 0xff
	v_add_u32_e32 v14, vcc, 1, v0
	v_addc_u32_e32 v15, vcc, 0, v1, vcc
	s_nop 0
	flat_load_ubyte v18, v[2:3]
	flat_load_ubyte v20, v[8:9]
	flat_load_ubyte v19, v[6:7]
	flat_load_ubyte v17, v[4:5]
	s_waitcnt vmcnt(1) lgkmcnt(1)
	v_lshlrev_b32_e32 v3, 8, v19
	s_waitcnt vmcnt(0) lgkmcnt(0)
	v_lshlrev_b32_e32 v2, 8, v17
	v_or_b32_e32 v2, v2, v18
	v_or_b32_sdwa v3, v3, v20 dst_sel:WORD_1 dst_unused:UNUSED_PAD src0_sel:DWORD src1_sel:DWORD
	v_or_b32_e32 v3, v3, v2
	v_lshrrev_b32_e32 v5, 8, v3
	v_ffbh_u32_sdwa v3, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_and_b32_e32 v4, s2, v2
	v_ffbh_u32_sdwa v2, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v2, vcc, -16, v2
	v_add_u32_e32 v3, vcc, -16, v3
	v_add_u16_e32 v2, -8, v2
	v_cmp_ne_u16_e32 vcc, 0, v4
	v_and_b32_e32 v5, s2, v5
	v_cndmask_b32_e32 v2, 8, v2, vcc
	v_add_u16_e32 v3, -8, v3
	v_cmp_ne_u16_e32 vcc, 0, v5
	v_cndmask_b32_e32 v3, 8, v3, vcc
	v_lshlrev_b16_e32 v3, 8, v3
	v_or_b32_sdwa v2, v2, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_and_b32_e32 v2, s6, v2
	v_lshrrev_b32_e32 v3, 8, v2
	s_nop 0
	s_nop 0
	flat_store_byte v[0:1], v2
	flat_store_byte v[14:15], v3
	flat_store_byte v[12:13], v16
	flat_store_byte v[10:11], v16
	s_endpgm
.Lfunc_end2:
	.size	test_4_clz_char, .Lfunc_end2-test_4_clz_char
                                        ; -- End function
	.section	.AMDGPU.csdata
; Kernel info:
; codeLenInByte = 344
; NumSgprs: 15
; NumVgprs: 21
; ScratchSize: 0
; MemoryBound: 0
; FloatMode: 240
; IeeeMode: 1
; LDSByteSize: 0 bytes/workgroup (compile time only)
; SGPRBlocks: 1
; VGPRBlocks: 5
; NumSGPRsForWavesPerEU: 15
; NumVGPRsForWavesPerEU: 21
; WaveLimiterHint : 1
; COMPUTE_PGM_RSRC2:USER_SGPR: 8
; COMPUTE_PGM_RSRC2:TRAP_HANDLER: 0
; COMPUTE_PGM_RSRC2:TGID_X_EN: 1
; COMPUTE_PGM_RSRC2:TGID_Y_EN: 0
; COMPUTE_PGM_RSRC2:TGID_Z_EN: 0
; COMPUTE_PGM_RSRC2:TIDIG_COMP_CNT: 0
	.section	.AMDGPU.config
	.long	47176
	.long	11468868
	.long	47180
	.long	144
	.long	47200
	.long	0
	.long	4
	.long	0
	.long	8
	.long	0
	.text
	.globl	test_8_clz_char         ; -- Begin function test_8_clz_char
	.p2align	8
	.type	test_8_clz_char,@function
	.amdgpu_hsa_kernel test_8_clz_char
test_8_clz_char:                        ; @test_8_clz_char
	.amd_kernel_code_t
		amd_code_version_major = 1
		amd_code_version_minor = 2
		amd_machine_kind = 1
		amd_machine_version_major = 8
		amd_machine_version_minor = 0
		amd_machine_version_stepping = 1
		kernel_code_entry_byte_offset = 256
		kernel_code_prefetch_byte_size = 0
		granulated_workitem_vgpr_count = 4
		granulated_wavefront_sgpr_count = 1
		priority = 0
		float_mode = 240
		priv = 0
		enable_dx10_clamp = 1
		debug_mode = 0
		enable_ieee_mode = 1
		enable_sgpr_private_segment_wave_byte_offset = 0
		user_sgpr_count = 8
		enable_trap_handler = 0
		enable_sgpr_workgroup_id_x = 1
		enable_sgpr_workgroup_id_y = 0
		enable_sgpr_workgroup_id_z = 0
		enable_sgpr_workgroup_info = 0
		enable_vgpr_workitem_id = 0
		enable_exception_msb = 0
		granulated_lds_size = 0
		enable_exception = 0
		enable_sgpr_private_segment_buffer = 1
		enable_sgpr_dispatch_ptr = 1
		enable_sgpr_queue_ptr = 0
		enable_sgpr_kernarg_segment_ptr = 1
		enable_sgpr_dispatch_id = 0
		enable_sgpr_flat_scratch_init = 0
		enable_sgpr_private_segment_size = 0
		enable_sgpr_grid_workgroup_count_x = 0
		enable_sgpr_grid_workgroup_count_y = 0
		enable_sgpr_grid_workgroup_count_z = 0
		enable_ordered_append_gds = 0
		private_element_size = 1
		is_ptr64 = 1
		is_dynamic_callstack = 0
		is_debug_enabled = 0
		is_xnack_enabled = 1
		workitem_private_segment_byte_size = 0
		workgroup_group_segment_byte_size = 0
		gds_segment_byte_size = 0
		kernarg_segment_byte_size = 32
		workgroup_fbarrier_count = 0
		wavefront_sgpr_count = 15
		workitem_vgpr_count = 20
		reserved_vgpr_first = 0
		reserved_vgpr_count = 0
		reserved_sgpr_first = 0
		reserved_sgpr_count = 0
		debug_wavefront_private_segment_offset_sgpr = 0
		debug_private_segment_buffer_sgpr = 0
		kernarg_segment_alignment = 4
		group_segment_alignment = 4
		private_segment_alignment = 4
		wavefront_size = 6
		call_convention = -1
		runtime_loader_kernel_symbol = 0
	.end_amd_kernel_code_t
; %bb.0:                                ; %entry
	s_load_dword s10, s[4:5], 0x4
	s_load_dwordx4 s[0:3], s[6:7], 0x0
	s_load_dword s9, s[6:7], 0x14
	s_mov_b32 s6, 0xffff
	v_mov_b32_e32 v1, 0
	s_waitcnt lgkmcnt(0)
	s_and_b32 s4, s10, s6
	v_mov_b32_e32 v2, s8
	v_add_u32_e32 v0, vcc, s9, v0
	v_addc_u32_e32 v1, vcc, 0, v1, vcc
	v_mad_u64_u32 v[0:1], s[4:5], s4, v2, v[0:1]
	v_mov_b32_e32 v3, s3
	v_lshlrev_b64 v[0:1], 3, v[0:1]
	v_add_u32_e32 v2, vcc, s2, v0
	v_addc_u32_e32 v3, vcc, v3, v1, vcc
	v_add_u32_e32 v4, vcc, 5, v2
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	v_add_u32_e32 v6, vcc, 4, v2
	v_addc_u32_e32 v7, vcc, 0, v3, vcc
	v_add_u32_e32 v8, vcc, 7, v2
	v_addc_u32_e32 v9, vcc, 0, v3, vcc
	s_movk_i32 s2, 0xff
	s_nop 0
	s_nop 0
	flat_load_ubyte v14, v[8:9]
	flat_load_ubyte v13, v[6:7]
	flat_load_ubyte v12, v[4:5]
	v_add_u32_e32 v4, vcc, 6, v2
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	v_add_u32_e32 v6, vcc, 1, v2
	v_addc_u32_e32 v7, vcc, 0, v3, vcc
	v_add_u32_e32 v8, vcc, 3, v2
	v_addc_u32_e32 v9, vcc, 0, v3, vcc
	v_add_u32_e32 v10, vcc, 2, v2
	v_addc_u32_e32 v11, vcc, 0, v3, vcc
	s_nop 0
	flat_load_ubyte v17, v[2:3]
	s_nop 0
	flat_load_ubyte v19, v[10:11]
	flat_load_ubyte v18, v[8:9]
	flat_load_ubyte v16, v[6:7]
	flat_load_ubyte v15, v[4:5]
	s_waitcnt vmcnt(7) lgkmcnt(7)
	v_lshlrev_b32_e32 v3, 8, v14
	s_waitcnt vmcnt(5) lgkmcnt(5)
	v_lshlrev_b32_e32 v2, 8, v12
	v_or_b32_e32 v2, v2, v13
	v_and_b32_e32 v12, s2, v2
	s_waitcnt vmcnt(2) lgkmcnt(2)
	v_lshlrev_b32_e32 v5, 8, v18
	s_waitcnt vmcnt(1) lgkmcnt(1)
	v_lshlrev_b32_e32 v4, 8, v16
	v_or_b32_e32 v5, v5, v19
	v_or_b32_e32 v4, v4, v17
	v_lshlrev_b32_e32 v7, 16, v5
	v_ffbh_u32_sdwa v9, v4 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_and_b32_e32 v10, s2, v5
	v_ffbh_u32_sdwa v11, v5 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_bfe_u32 v5, v5, 8, 8
	v_and_b32_e32 v8, s2, v4
	v_or_b32_e32 v4, v7, v4
	v_add_u32_e32 v7, vcc, -16, v9
	v_add_u32_e32 v9, vcc, -16, v11
	v_ffbh_u32_e32 v11, v5
	v_add_u32_e32 v11, vcc, -16, v11
	v_lshrrev_b32_e32 v13, 8, v4
	v_cmp_ne_u16_e32 vcc, 0, v8
	v_add_u16_e32 v7, -8, v7
	v_ffbh_u32_sdwa v4, v4 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_cndmask_b32_e32 v7, 8, v7, vcc
	v_add_u32_e32 v4, vcc, -16, v4
	v_cmp_ne_u16_e32 vcc, 0, v10
	v_add_u16_e32 v9, -8, v9
	v_cndmask_b32_e32 v9, 8, v9, vcc
	v_and_b32_e32 v8, s2, v13
	v_add_u16_e32 v10, -8, v11
	v_cmp_ne_u16_e32 vcc, 0, v5
	v_cndmask_b32_e32 v5, 8, v10, vcc
	s_waitcnt vmcnt(0) lgkmcnt(0)
	v_or_b32_e32 v3, v3, v15
	v_add_u16_e32 v4, -8, v4
	v_cmp_ne_u16_e32 vcc, 0, v8
	v_lshlrev_b32_e32 v6, 16, v3
	v_cndmask_b32_e32 v4, 8, v4, vcc
	v_lshlrev_b16_e32 v4, 8, v4
	v_or_b32_e32 v6, v6, v2
	v_ffbh_u32_sdwa v2, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v2, vcc, -16, v2
	v_or_b32_sdwa v4, v7, v4 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_and_b32_e32 v10, s6, v4
	v_lshrrev_b32_e32 v4, 8, v6
	v_add_u16_e32 v2, -8, v2
	v_cmp_ne_u16_e32 vcc, 0, v12
	v_ffbh_u32_sdwa v6, v6 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_cndmask_b32_e32 v2, 8, v2, vcc
	v_add_u32_e32 v6, vcc, -16, v6
	v_and_b32_e32 v4, s2, v4
	v_add_u16_e32 v6, -8, v6
	v_cmp_ne_u16_e32 vcc, 0, v4
	v_cndmask_b32_e32 v4, 8, v6, vcc
	v_lshlrev_b16_e32 v4, 8, v4
	v_or_b32_sdwa v2, v2, v4 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v4, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v4, vcc, -16, v4
	v_and_b32_e32 v8, s6, v2
	v_and_b32_e32 v2, s2, v3
	v_add_u16_e32 v4, -8, v4
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_bfe_u32 v3, v3, 8, 8
	v_cndmask_b32_e32 v2, 8, v4, vcc
	v_ffbh_u32_e32 v4, v3
	v_add_u32_e32 v4, vcc, -16, v4
	v_add_u16_e32 v4, -8, v4
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_cndmask_b32_e32 v3, 8, v4, vcc
	v_lshlrev_b16_e32 v3, 8, v3
	v_or_b32_sdwa v2, v2, v3 dst_sel:WORD_1 dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_or_b32_e32 v11, v8, v2
	v_mov_b32_e32 v2, s1
	v_add_u32_e32 v0, vcc, s0, v0
	v_addc_u32_e32 v1, vcc, v2, v1, vcc
	v_add_u32_e32 v2, vcc, 2, v0
	v_addc_u32_e32 v3, vcc, 0, v1, vcc
	v_lshlrev_b16_e32 v5, 8, v5
	v_add_u32_e32 v4, vcc, 3, v0
	v_lshrrev_b32_e32 v12, 8, v5
	v_addc_u32_e32 v5, vcc, 0, v1, vcc
	v_add_u32_e32 v6, vcc, 4, v0
	v_addc_u32_e32 v7, vcc, 0, v1, vcc
	v_lshrrev_b32_e32 v13, 24, v11
	v_lshrrev_b32_e32 v14, 16, v11
	v_lshrrev_b32_e32 v11, 8, v11
	s_nop 0
	s_nop 0
	flat_store_byte v[6:7], v8
	flat_store_byte v[4:5], v12
	flat_store_byte v[2:3], v9
	v_add_u32_e32 v2, vcc, 1, v0
	v_addc_u32_e32 v3, vcc, 0, v1, vcc
	v_add_u32_e32 v4, vcc, 7, v0
	v_addc_u32_e32 v5, vcc, 0, v1, vcc
	v_add_u32_e32 v6, vcc, 6, v0
	v_addc_u32_e32 v7, vcc, 0, v1, vcc
	v_add_u32_e32 v8, vcc, 5, v0
	v_addc_u32_e32 v9, vcc, 0, v1, vcc
	v_lshrrev_b32_e32 v12, 8, v10
	s_nop 0
	s_nop 0
	flat_store_byte v[0:1], v10
	flat_store_byte v[8:9], v11
	flat_store_byte v[6:7], v14
	flat_store_byte v[4:5], v13
	flat_store_byte v[2:3], v12
	s_endpgm
.Lfunc_end3:
	.size	test_8_clz_char, .Lfunc_end3-test_8_clz_char
                                        ; -- End function
	.section	.AMDGPU.csdata
; Kernel info:
; codeLenInByte = 748
; NumSgprs: 15
; NumVgprs: 20
; ScratchSize: 0
; MemoryBound: 0
; FloatMode: 240
; IeeeMode: 1
; LDSByteSize: 0 bytes/workgroup (compile time only)
; SGPRBlocks: 1
; VGPRBlocks: 4
; NumSGPRsForWavesPerEU: 15
; NumVGPRsForWavesPerEU: 20
; WaveLimiterHint : 1
; COMPUTE_PGM_RSRC2:USER_SGPR: 8
; COMPUTE_PGM_RSRC2:TRAP_HANDLER: 0
; COMPUTE_PGM_RSRC2:TGID_X_EN: 1
; COMPUTE_PGM_RSRC2:TGID_Y_EN: 0
; COMPUTE_PGM_RSRC2:TGID_Z_EN: 0
; COMPUTE_PGM_RSRC2:TIDIG_COMP_CNT: 0
	.section	.AMDGPU.config
	.long	47176
	.long	11468870
	.long	47180
	.long	144
	.long	47200
	.long	0
	.long	4
	.long	0
	.long	8
	.long	0
	.text
	.globl	test_16_clz_char        ; -- Begin function test_16_clz_char
	.p2align	8
	.type	test_16_clz_char,@function
	.amdgpu_hsa_kernel test_16_clz_char
test_16_clz_char:                       ; @test_16_clz_char
	.amd_kernel_code_t
		amd_code_version_major = 1
		amd_code_version_minor = 2
		amd_machine_kind = 1
		amd_machine_version_major = 8
		amd_machine_version_minor = 0
		amd_machine_version_stepping = 1
		kernel_code_entry_byte_offset = 256
		kernel_code_prefetch_byte_size = 0
		granulated_workitem_vgpr_count = 6
		granulated_wavefront_sgpr_count = 1
		priority = 0
		float_mode = 240
		priv = 0
		enable_dx10_clamp = 1
		debug_mode = 0
		enable_ieee_mode = 1
		enable_sgpr_private_segment_wave_byte_offset = 0
		user_sgpr_count = 8
		enable_trap_handler = 0
		enable_sgpr_workgroup_id_x = 1
		enable_sgpr_workgroup_id_y = 0
		enable_sgpr_workgroup_id_z = 0
		enable_sgpr_workgroup_info = 0
		enable_vgpr_workitem_id = 0
		enable_exception_msb = 0
		granulated_lds_size = 0
		enable_exception = 0
		enable_sgpr_private_segment_buffer = 1
		enable_sgpr_dispatch_ptr = 1
		enable_sgpr_queue_ptr = 0
		enable_sgpr_kernarg_segment_ptr = 1
		enable_sgpr_dispatch_id = 0
		enable_sgpr_flat_scratch_init = 0
		enable_sgpr_private_segment_size = 0
		enable_sgpr_grid_workgroup_count_x = 0
		enable_sgpr_grid_workgroup_count_y = 0
		enable_sgpr_grid_workgroup_count_z = 0
		enable_ordered_append_gds = 0
		private_element_size = 1
		is_ptr64 = 1
		is_dynamic_callstack = 0
		is_debug_enabled = 0
		is_xnack_enabled = 1
		workitem_private_segment_byte_size = 0
		workgroup_group_segment_byte_size = 0
		gds_segment_byte_size = 0
		kernarg_segment_byte_size = 32
		workgroup_fbarrier_count = 0
		wavefront_sgpr_count = 15
		workitem_vgpr_count = 25
		reserved_vgpr_first = 0
		reserved_vgpr_count = 0
		reserved_sgpr_first = 0
		reserved_sgpr_count = 0
		debug_wavefront_private_segment_offset_sgpr = 0
		debug_private_segment_buffer_sgpr = 0
		kernarg_segment_alignment = 4
		group_segment_alignment = 4
		private_segment_alignment = 4
		wavefront_size = 6
		call_convention = -1
		runtime_loader_kernel_symbol = 0
	.end_amd_kernel_code_t
; %bb.0:                                ; %entry
	s_load_dword s10, s[4:5], 0x4
	s_load_dwordx4 s[0:3], s[6:7], 0x0
	s_load_dword s9, s[6:7], 0x14
	s_mov_b32 s6, 0xffff
	v_mov_b32_e32 v1, 0
	s_waitcnt lgkmcnt(0)
	s_and_b32 s4, s10, s6
	v_mov_b32_e32 v2, s8
	v_add_u32_e32 v0, vcc, s9, v0
	v_addc_u32_e32 v1, vcc, 0, v1, vcc
	v_mad_u64_u32 v[0:1], s[4:5], s4, v2, v[0:1]
	v_mov_b32_e32 v3, s3
	v_lshlrev_b64 v[0:1], 4, v[0:1]
	v_add_u32_e32 v2, vcc, s2, v0
	v_addc_u32_e32 v3, vcc, v3, v1, vcc
	v_add_u32_e32 v4, vcc, 13, v2
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	s_movk_i32 s2, 0xff
	s_nop 0
	s_nop 0
	flat_load_ubyte v14, v[4:5]
	v_add_u32_e32 v4, vcc, 12, v2
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	v_add_u32_e32 v6, vcc, 15, v2
	v_addc_u32_e32 v7, vcc, 0, v3, vcc
	v_add_u32_e32 v8, vcc, 14, v2
	v_addc_u32_e32 v9, vcc, 0, v3, vcc
	v_add_u32_e32 v10, vcc, 9, v2
	v_addc_u32_e32 v11, vcc, 0, v3, vcc
	v_add_u32_e32 v12, vcc, 8, v2
	v_addc_u32_e32 v13, vcc, 0, v3, vcc
	s_nop 0
	s_nop 0
	flat_load_ubyte v19, v[12:13]
	flat_load_ubyte v18, v[10:11]
	flat_load_ubyte v17, v[8:9]
	flat_load_ubyte v16, v[6:7]
	flat_load_ubyte v15, v[4:5]
	v_add_u32_e32 v4, vcc, 11, v2
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	v_add_u32_e32 v6, vcc, 10, v2
	v_addc_u32_e32 v7, vcc, 0, v3, vcc
	v_add_u32_e32 v8, vcc, 5, v2
	v_addc_u32_e32 v9, vcc, 0, v3, vcc
	v_add_u32_e32 v10, vcc, 4, v2
	v_addc_u32_e32 v11, vcc, 0, v3, vcc
	v_add_u32_e32 v12, vcc, 7, v2
	v_addc_u32_e32 v13, vcc, 0, v3, vcc
	s_nop 0
	s_nop 0
	flat_load_ubyte v21, v[12:13]
	flat_load_ubyte v20, v[10:11]
	flat_load_ubyte v22, v[8:9]
	v_add_u32_e32 v10, vcc, 6, v2
	v_addc_u32_e32 v11, vcc, 0, v3, vcc
	s_nop 0
	s_nop 0
	flat_load_ubyte v13, v[6:7]
	flat_load_ubyte v12, v[4:5]
	s_waitcnt vmcnt(10) lgkmcnt(10)
	v_lshlrev_b32_e32 v4, 8, v14
	s_waitcnt vmcnt(8) lgkmcnt(8)
	v_lshlrev_b32_e32 v6, 8, v18
	s_waitcnt vmcnt(6) lgkmcnt(6)
	v_lshlrev_b32_e32 v5, 8, v16
	s_waitcnt vmcnt(5) lgkmcnt(5)
	v_or_b32_e32 v14, v4, v15
	v_or_b32_e32 v15, v5, v17
	v_lshlrev_b32_e32 v4, 16, v15
	v_or_b32_e32 v18, v4, v14
	v_or_b32_e32 v16, v6, v19
	v_add_u32_e32 v4, vcc, 1, v2
	s_waitcnt vmcnt(2) lgkmcnt(2)
	v_lshlrev_b32_e32 v8, 8, v22
	v_or_b32_e32 v17, v8, v20
	s_waitcnt vmcnt(0) lgkmcnt(0)
	v_lshlrev_b32_e32 v7, 8, v12
	v_or_b32_e32 v13, v7, v13
	v_lshlrev_b32_e32 v5, 16, v13
	v_or_b32_e32 v19, v5, v16
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	v_add_u32_e32 v6, vcc, 3, v2
	v_addc_u32_e32 v7, vcc, 0, v3, vcc
	v_add_u32_e32 v8, vcc, 2, v2
	v_addc_u32_e32 v9, vcc, 0, v3, vcc
	v_lshlrev_b32_e32 v12, 8, v21
	s_nop 0
	flat_load_ubyte v22, v[2:3]
	s_nop 0
	flat_load_ubyte v24, v[8:9]
	flat_load_ubyte v23, v[6:7]
	flat_load_ubyte v21, v[4:5]
	flat_load_ubyte v20, v[10:11]
	s_waitcnt vmcnt(2) lgkmcnt(2)
	v_lshlrev_b32_e32 v5, 8, v23
	s_waitcnt vmcnt(1) lgkmcnt(1)
	v_lshlrev_b32_e32 v4, 8, v21
	v_or_b32_e32 v5, v5, v24
	v_or_b32_e32 v4, v4, v22
	v_lshlrev_b32_e32 v6, 16, v5
	v_or_b32_e32 v6, v6, v4
	v_and_b32_e32 v7, s2, v4
	v_ffbh_u32_sdwa v4, v4 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v4, vcc, -16, v4
	v_cmp_ne_u16_e32 vcc, 0, v7
	v_lshrrev_b32_e32 v7, 8, v6
	v_add_u16_e32 v4, -8, v4
	v_ffbh_u32_sdwa v6, v6 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_cndmask_b32_e32 v4, 8, v4, vcc
	v_add_u32_e32 v6, vcc, -16, v6
	v_and_b32_e32 v7, s2, v7
	v_add_u16_e32 v6, -8, v6
	v_cmp_ne_u16_e32 vcc, 0, v7
	v_cndmask_b32_e32 v6, 8, v6, vcc
	v_lshlrev_b16_e32 v6, 8, v6
	v_or_b32_sdwa v4, v4, v6 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v6, v5 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	s_waitcnt vmcnt(0) lgkmcnt(0)
	v_or_b32_e32 v2, v12, v20
	v_add_u32_e32 v6, vcc, -16, v6
	v_and_b32_e32 v12, s6, v4
	v_and_b32_e32 v4, s2, v5
	v_add_u16_e32 v6, -8, v6
	v_cmp_ne_u16_e32 vcc, 0, v4
	v_bfe_u32 v5, v5, 8, 8
	v_cndmask_b32_e32 v4, 8, v6, vcc
	v_ffbh_u32_e32 v6, v5
	v_add_u32_e32 v6, vcc, -16, v6
	v_add_u16_e32 v6, -8, v6
	v_cmp_ne_u16_e32 vcc, 0, v5
	v_cndmask_b32_e32 v5, 8, v6, vcc
	v_lshlrev_b16_e32 v5, 8, v5
	v_or_b32_sdwa v4, v4, v5 dst_sel:WORD_1 dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v5, v17 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_lshlrev_b32_e32 v3, 16, v2
	v_add_u32_e32 v5, vcc, -16, v5
	v_or_b32_e32 v8, v12, v4
	v_and_b32_e32 v4, s2, v17
	v_or_b32_e32 v3, v3, v17
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v4
	v_cndmask_b32_e32 v4, 8, v5, vcc
	v_lshrrev_b32_e32 v5, 8, v3
	v_ffbh_u32_sdwa v3, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_add_u32_e32 v3, vcc, -16, v3
	v_and_b32_e32 v5, s2, v5
	v_cmp_ne_u16_e32 vcc, 0, v5
	v_add_u16_e32 v3, -8, v3
	v_cndmask_b32_e32 v3, 8, v3, vcc
	v_lshlrev_b16_e32 v3, 8, v3
	v_or_b32_sdwa v3, v4, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v5, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v5, vcc, -16, v5
	v_and_b32_e32 v4, s6, v3
	v_and_b32_e32 v3, s2, v2
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_bfe_u32 v2, v2, 8, 8
	v_cndmask_b32_e32 v3, 8, v5, vcc
	v_ffbh_u32_e32 v5, v2
	v_add_u32_e32 v5, vcc, -16, v5
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_cndmask_b32_e32 v2, 8, v5, vcc
	v_lshlrev_b16_e32 v2, 8, v2
	v_or_b32_sdwa v2, v3, v2 dst_sel:WORD_1 dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v3, v16 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v3, vcc, -16, v3
	v_or_b32_e32 v10, v4, v2
	v_and_b32_e32 v2, s2, v16
	v_add_u16_e32 v3, -8, v3
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_cndmask_b32_e32 v2, 8, v3, vcc
	v_lshrrev_b32_e32 v3, 8, v19
	v_ffbh_u32_sdwa v5, v19 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_add_u32_e32 v5, vcc, -16, v5
	v_and_b32_e32 v3, s2, v3
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_cndmask_b32_e32 v3, 8, v5, vcc
	v_lshlrev_b16_e32 v3, 8, v3
	v_or_b32_sdwa v2, v2, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v3, v13 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v3, vcc, -16, v3
	v_and_b32_e32 v11, s6, v2
	v_and_b32_e32 v2, s2, v13
	v_add_u16_e32 v3, -8, v3
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_cndmask_b32_e32 v2, 8, v3, vcc
	v_bfe_u32 v3, v13, 8, 8
	v_ffbh_u32_e32 v5, v3
	v_add_u32_e32 v5, vcc, -16, v5
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_cndmask_b32_e32 v3, 8, v5, vcc
	v_lshlrev_b16_e32 v3, 8, v3
	v_or_b32_sdwa v2, v2, v3 dst_sel:WORD_1 dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v3, v14 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_or_b32_e32 v13, v11, v2
	v_add_u32_e32 v3, vcc, -16, v3
	v_and_b32_e32 v2, s2, v14
	v_add_u16_e32 v3, -8, v3
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_cndmask_b32_e32 v2, 8, v3, vcc
	v_lshrrev_b32_e32 v3, 8, v18
	v_ffbh_u32_sdwa v5, v18 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_add_u32_e32 v5, vcc, -16, v5
	v_and_b32_e32 v3, s2, v3
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_cndmask_b32_e32 v3, 8, v5, vcc
	v_lshlrev_b16_e32 v3, 8, v3
	v_or_b32_sdwa v2, v2, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v3, v15 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v3, vcc, -16, v3
	v_and_b32_e32 v14, s6, v2
	v_and_b32_e32 v2, s2, v15
	v_add_u16_e32 v3, -8, v3
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_cndmask_b32_e32 v2, 8, v3, vcc
	v_bfe_u32 v3, v15, 8, 8
	v_ffbh_u32_e32 v5, v3
	v_add_u32_e32 v5, vcc, -16, v5
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_cndmask_b32_e32 v3, 8, v5, vcc
	v_lshlrev_b16_e32 v3, 8, v3
	v_or_b32_sdwa v2, v2, v3 dst_sel:WORD_1 dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_or_b32_e32 v15, v14, v2
	v_mov_b32_e32 v2, s1
	v_add_u32_e32 v0, vcc, s0, v0
	v_addc_u32_e32 v1, vcc, v2, v1, vcc
	v_add_u32_e32 v2, vcc, 4, v0
	v_addc_u32_e32 v3, vcc, 0, v1, vcc
	v_lshrrev_b32_e32 v16, 24, v8
	v_lshrrev_b32_e32 v17, 16, v8
	v_lshrrev_b32_e32 v18, 8, v8
	v_lshrrev_b32_e32 v19, 16, v13
	s_nop 0
	s_nop 0
	flat_store_byte v[2:3], v4
	v_add_u32_e32 v2, vcc, 8, v0
	v_addc_u32_e32 v3, vcc, 0, v1, vcc
	v_add_u32_e32 v4, vcc, 12, v0
	v_addc_u32_e32 v5, vcc, 0, v1, vcc
	v_add_u32_e32 v6, vcc, 3, v0
	v_addc_u32_e32 v7, vcc, 0, v1, vcc
	v_add_u32_e32 v8, vcc, 1, v0
	v_addc_u32_e32 v9, vcc, 0, v1, vcc
	s_nop 0
	s_nop 0
	flat_store_byte v[8:9], v18
	v_add_u32_e32 v8, vcc, 2, v0
	v_addc_u32_e32 v9, vcc, 0, v1, vcc
	v_lshrrev_b32_e32 v18, 24, v13
	s_nop 0
	s_nop 0
	flat_store_byte v[8:9], v17
	flat_store_byte v[6:7], v16
	flat_store_byte v[4:5], v14
	flat_store_byte v[2:3], v11
	v_add_u32_e32 v2, vcc, 7, v0
	v_addc_u32_e32 v3, vcc, 0, v1, vcc
	v_add_u32_e32 v4, vcc, 6, v0
	v_addc_u32_e32 v5, vcc, 0, v1, vcc
	v_add_u32_e32 v6, vcc, 5, v0
	v_addc_u32_e32 v7, vcc, 0, v1, vcc
	v_add_u32_e32 v8, vcc, 11, v0
	v_addc_u32_e32 v9, vcc, 0, v1, vcc
	v_lshrrev_b32_e32 v14, 24, v10
	v_lshrrev_b32_e32 v16, 16, v10
	v_lshrrev_b32_e32 v17, 8, v10
	v_add_u32_e32 v10, vcc, 10, v0
	v_addc_u32_e32 v11, vcc, 0, v1, vcc
	s_nop 0
	s_nop 0
	flat_store_byte v[10:11], v19
	flat_store_byte v[8:9], v18
	flat_store_byte v[6:7], v17
	flat_store_byte v[4:5], v16
	flat_store_byte v[2:3], v14
	v_add_u32_e32 v2, vcc, 9, v0
	v_addc_u32_e32 v3, vcc, 0, v1, vcc
	v_add_u32_e32 v4, vcc, 15, v0
	v_addc_u32_e32 v5, vcc, 0, v1, vcc
	v_add_u32_e32 v6, vcc, 14, v0
	v_addc_u32_e32 v7, vcc, 0, v1, vcc
	v_add_u32_e32 v8, vcc, 13, v0
	v_lshrrev_b32_e32 v10, 8, v13
	v_lshrrev_b32_e32 v11, 24, v15
	v_lshrrev_b32_e32 v13, 16, v15
	v_lshrrev_b32_e32 v14, 8, v15
	v_addc_u32_e32 v9, vcc, 0, v1, vcc
	s_nop 0
	s_nop 0
	flat_store_byte v[0:1], v12
	flat_store_byte v[8:9], v14
	flat_store_byte v[6:7], v13
	flat_store_byte v[4:5], v11
	flat_store_byte v[2:3], v10
	s_endpgm
.Lfunc_end4:
	.size	test_16_clz_char, .Lfunc_end4-test_16_clz_char
                                        ; -- End function
	.section	.AMDGPU.csdata
; Kernel info:
; codeLenInByte = 1452
; NumSgprs: 15
; NumVgprs: 25
; ScratchSize: 0
; MemoryBound: 0
; FloatMode: 240
; IeeeMode: 1
; LDSByteSize: 0 bytes/workgroup (compile time only)
; SGPRBlocks: 1
; VGPRBlocks: 6
; NumSGPRsForWavesPerEU: 15
; NumVGPRsForWavesPerEU: 25
; WaveLimiterHint : 1
; COMPUTE_PGM_RSRC2:USER_SGPR: 8
; COMPUTE_PGM_RSRC2:TRAP_HANDLER: 0
; COMPUTE_PGM_RSRC2:TGID_X_EN: 1
; COMPUTE_PGM_RSRC2:TGID_Y_EN: 0
; COMPUTE_PGM_RSRC2:TGID_Z_EN: 0
; COMPUTE_PGM_RSRC2:TIDIG_COMP_CNT: 0

	.ident	"clang version 8.0.0 (https://git.llvm.org/git/clang.git a04db9e679cb36126e3a246771d0f1323c8a3e20) (https://git.llvm.org/git/llvm.git 0d0e510068bb53271bf95bdcf579bd3c371b30c4)"
	.ident	"clang version 8.0.0 (https://git.llvm.org/git/clang.git 87aefa4312da1e6a80bdbc3828363391714b3142) (https://git.llvm.org/git/llvm.git 26598e549e6344a04bd9c32b53567d9cdb9894da)"
	.ident	"clang version 8.0.0 (https://git.llvm.org/git/clang.git 852234a8798dd5f5401a14b3fd9823cd8b45fbfa) (https://git.llvm.org/git/llvm.git b0c4f7e14ecb82fb629be0263c6bdf1001bf8737)"
	.section	".note.GNU-stack"
	.amd_amdgpu_isa "amdgcn-mesa-mesa3d--gfx801+xnack"
-------------- next part --------------
; ModuleID = 'link'
source_filename = "link"
target datalayout = "e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5"
target triple = "amdgcn-mesa-mesa3d"

; Function Attrs: nounwind
define amdgpu_kernel void @test_1_clz_char(i8 addrspace(1)* nocapture %out, i8 addrspace(1)* nocapture readonly %in0) local_unnamed_addr #0 !kernel_arg_addr_space !0 !kernel_arg_access_qual !6 !kernel_arg_type !7 !kernel_arg_base_type !7 !kernel_arg_type_qual !8 {
entry:
  %0 = tail call i32 @llvm.amdgcn.workgroup.id.x() #2
  %retval.0.i12.i = zext i32 %0 to i64
  %1 = tail call i8 addrspace(4)* @llvm.amdgcn.dispatch.ptr() #2
  %arrayidx.i7.i = getelementptr inbounds i8, i8 addrspace(4)* %1, i64 4
  %2 = bitcast i8 addrspace(4)* %arrayidx.i7.i to i32 addrspace(4)*
  %3 = load i32, i32 addrspace(4)* %2, align 4, !tbaa !9
  %and.i.i = and i32 %3, 65535
  %retval.0.i1121.i = zext i32 %and.i.i to i64
  %mul22.i = mul nuw nsw i64 %retval.0.i1121.i, %retval.0.i12.i
  %4 = tail call i32 @llvm.amdgcn.workitem.id.x() #2, !range !13
  %retval.0.i633.i = zext i32 %4 to i64
  %5 = tail call i8 addrspace(4)* @llvm.amdgcn.implicitarg.ptr() #2
  %arrayidx.i.i = getelementptr inbounds i8, i8 addrspace(4)* %5, i64 4
  %6 = bitcast i8 addrspace(4)* %arrayidx.i.i to i32 addrspace(4)*
  %7 = load i32, i32 addrspace(4)* %6, align 4, !tbaa !9
  %conv.i.i = zext i32 %7 to i64
  %add34.i = add nuw nsw i64 %conv.i.i, %retval.0.i633.i
  %add4.i = add nuw nsw i64 %add34.i, %mul22.i
  %arrayidx = getelementptr inbounds i8, i8 addrspace(1)* %in0, i64 %add4.i
  %8 = load i8, i8 addrspace(1)* %arrayidx, align 1, !tbaa !14
  %conv.i = zext i8 %8 to i16
  %tobool.i.i = icmp eq i8 %8, 0
  %9 = tail call i16 @llvm.ctlz.i16(i16 %conv.i, i1 true) #2, !range !15
  %10 = trunc i16 %9 to i8
  %.op.i = add nsw i8 %10, -8
  %sub.i = select i1 %tobool.i.i, i8 8, i8 %.op.i
  %arrayidx3 = getelementptr inbounds i8, i8 addrspace(1)* %out, i64 %add4.i
  store i8 %sub.i, i8 addrspace(1)* %arrayidx3, align 1, !tbaa !14
  ret void
}

; Function Attrs: nounwind readnone speculatable
declare i32 @llvm.amdgcn.workgroup.id.x() #1

; Function Attrs: nounwind readnone speculatable
declare i8 addrspace(4)* @llvm.amdgcn.dispatch.ptr() #1

; Function Attrs: nounwind readnone speculatable
declare i32 @llvm.amdgcn.workitem.id.x() #1

; Function Attrs: nounwind readnone speculatable
declare i8 addrspace(4)* @llvm.amdgcn.implicitarg.ptr() #1

; Function Attrs: nounwind readnone speculatable
declare i16 @llvm.ctlz.i16(i16, i1) #1

; Function Attrs: nounwind
define amdgpu_kernel void @test_2_clz_char(i8 addrspace(1)* nocapture %out, i8 addrspace(1)* nocapture readonly %in0) local_unnamed_addr #0 !kernel_arg_addr_space !0 !kernel_arg_access_qual !6 !kernel_arg_type !7 !kernel_arg_base_type !7 !kernel_arg_type_qual !8 {
entry:
  %0 = tail call i32 @llvm.amdgcn.workgroup.id.x() #2
  %retval.0.i12.i = zext i32 %0 to i64
  %1 = tail call i8 addrspace(4)* @llvm.amdgcn.dispatch.ptr() #2
  %arrayidx.i7.i = getelementptr inbounds i8, i8 addrspace(4)* %1, i64 4
  %2 = bitcast i8 addrspace(4)* %arrayidx.i7.i to i32 addrspace(4)*
  %3 = load i32, i32 addrspace(4)* %2, align 4, !tbaa !9
  %and.i.i = and i32 %3, 65535
  %retval.0.i1121.i = zext i32 %and.i.i to i64
  %mul22.i = mul nuw nsw i64 %retval.0.i1121.i, %retval.0.i12.i
  %4 = tail call i32 @llvm.amdgcn.workitem.id.x() #2, !range !13
  %retval.0.i633.i = zext i32 %4 to i64
  %5 = tail call i8 addrspace(4)* @llvm.amdgcn.implicitarg.ptr() #2
  %arrayidx.i.i = getelementptr inbounds i8, i8 addrspace(4)* %5, i64 4
  %6 = bitcast i8 addrspace(4)* %arrayidx.i.i to i32 addrspace(4)*
  %7 = load i32, i32 addrspace(4)* %6, align 4, !tbaa !9
  %conv.i.i = zext i32 %7 to i64
  %add34.i = add nuw nsw i64 %conv.i.i, %retval.0.i633.i
  %add4.i = add nuw nsw i64 %add34.i, %mul22.i
  %mul.i5 = shl nuw nsw i64 %add4.i, 1
  %arrayidx.i6 = getelementptr inbounds i8, i8 addrspace(1)* %in0, i64 %mul.i5
  %8 = bitcast i8 addrspace(1)* %arrayidx.i6 to <2 x i8> addrspace(1)*
  %9 = load <2 x i8>, <2 x i8> addrspace(1)* %8, align 1, !tbaa !14
  %10 = extractelement <2 x i8> %9, i64 0
  %conv.i.i4 = zext i8 %10 to i16
  %tobool.i.i.i = icmp eq i8 %10, 0
  %11 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i4, i1 true) #2, !range !15
  %12 = trunc i16 %11 to i8
  %.op.i = add nsw i8 %12, -8
  %sub.i.i = select i1 %tobool.i.i.i, i8 8, i8 %.op.i
  %vecinit.i = insertelement <2 x i8> undef, i8 %sub.i.i, i32 0
  %13 = extractelement <2 x i8> %9, i64 1
  %conv.i4.i = zext i8 %13 to i16
  %tobool.i.i5.i = icmp eq i8 %13, 0
  %14 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i, i1 true) #2, !range !15
  %15 = trunc i16 %14 to i8
  %.op9.i = add nsw i8 %15, -8
  %sub.i8.i = select i1 %tobool.i.i5.i, i8 8, i8 %.op9.i
  %vecinit2.i = insertelement <2 x i8> %vecinit.i, i8 %sub.i8.i, i32 1
  %arrayidx.i = getelementptr inbounds i8, i8 addrspace(1)* %out, i64 %mul.i5
  %16 = bitcast i8 addrspace(1)* %arrayidx.i to <2 x i8> addrspace(1)*
  store <2 x i8> %vecinit2.i, <2 x i8> addrspace(1)* %16, align 1, !tbaa !14
  ret void
}

; Function Attrs: nounwind
define amdgpu_kernel void @test_4_clz_char(i8 addrspace(1)* nocapture %out, i8 addrspace(1)* nocapture readonly %in0) local_unnamed_addr #0 !kernel_arg_addr_space !0 !kernel_arg_access_qual !6 !kernel_arg_type !7 !kernel_arg_base_type !7 !kernel_arg_type_qual !8 {
entry:
  %0 = tail call i32 @llvm.amdgcn.workgroup.id.x() #2
  %retval.0.i12.i = zext i32 %0 to i64
  %1 = tail call i8 addrspace(4)* @llvm.amdgcn.dispatch.ptr() #2
  %arrayidx.i7.i = getelementptr inbounds i8, i8 addrspace(4)* %1, i64 4
  %2 = bitcast i8 addrspace(4)* %arrayidx.i7.i to i32 addrspace(4)*
  %3 = load i32, i32 addrspace(4)* %2, align 4, !tbaa !9
  %and.i.i = and i32 %3, 65535
  %retval.0.i1121.i = zext i32 %and.i.i to i64
  %mul22.i = mul nuw nsw i64 %retval.0.i1121.i, %retval.0.i12.i
  %4 = tail call i32 @llvm.amdgcn.workitem.id.x() #2, !range !13
  %retval.0.i633.i = zext i32 %4 to i64
  %5 = tail call i8 addrspace(4)* @llvm.amdgcn.implicitarg.ptr() #2
  %arrayidx.i.i = getelementptr inbounds i8, i8 addrspace(4)* %5, i64 4
  %6 = bitcast i8 addrspace(4)* %arrayidx.i.i to i32 addrspace(4)*
  %7 = load i32, i32 addrspace(4)* %6, align 4, !tbaa !9
  %conv.i.i = zext i32 %7 to i64
  %add34.i = add nuw nsw i64 %conv.i.i, %retval.0.i633.i
  %add4.i = add nuw nsw i64 %add34.i, %mul22.i
  %mul.i4 = shl nuw nsw i64 %add4.i, 2
  %arrayidx.i5 = getelementptr inbounds i8, i8 addrspace(1)* %in0, i64 %mul.i4
  %8 = bitcast i8 addrspace(1)* %arrayidx.i5 to <4 x i8> addrspace(1)*
  %9 = load <4 x i8>, <4 x i8> addrspace(1)* %8, align 1, !tbaa !14
  %10 = extractelement <4 x i8> %9, i32 0
  %conv.i.i.i = zext i8 %10 to i16
  %tobool.i.i.i.i = icmp eq i8 %10, 0
  %11 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i, i1 true) #2, !range !15
  %12 = trunc i16 %11 to i8
  %.op.i = add nsw i8 %12, -8
  %sub.i.i.i = select i1 %tobool.i.i.i.i, i8 8, i8 %.op.i
  %13 = insertelement <4 x i8> undef, i8 %sub.i.i.i, i32 0
  %14 = extractelement <4 x i8> %9, i32 1
  %conv.i4.i.i = zext i8 %14 to i16
  %tobool.i.i5.i.i = icmp eq i8 %14, 0
  %15 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i, i1 true) #2, !range !15
  %16 = trunc i16 %15 to i8
  %.op17.i = add nsw i8 %16, -8
  %sub.i8.i.i = select i1 %tobool.i.i5.i.i, i8 8, i8 %.op17.i
  %17 = insertelement <4 x i8> %13, i8 %sub.i8.i.i, i32 1
  %18 = extractelement <4 x i8> %9, i32 2
  %conv.i.i5.i = zext i8 %18 to i16
  %tobool.i.i.i6.i = icmp eq i8 %18, 0
  %19 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i, i1 true) #2, !range !15
  %20 = trunc i16 %19 to i8
  %.op18.i = add nsw i8 %20, -8
  %sub.i.i9.i = select i1 %tobool.i.i.i6.i, i8 8, i8 %.op18.i
  %21 = insertelement <4 x i8> undef, i8 %sub.i.i9.i, i32 0
  %22 = extractelement <4 x i8> %9, i32 3
  %conv.i4.i11.i = zext i8 %22 to i16
  %tobool.i.i5.i12.i = icmp eq i8 %22, 0
  %23 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i, i1 true) #2, !range !15
  %24 = trunc i16 %23 to i8
  %.op19.i = add nsw i8 %24, -8
  %sub.i8.i15.i = select i1 %tobool.i.i5.i12.i, i8 8, i8 %.op19.i
  %25 = insertelement <4 x i8> %21, i8 %sub.i8.i15.i, i32 1
  %vecinit3.i = shufflevector <4 x i8> %17, <4 x i8> %25, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %arrayidx.i = getelementptr inbounds i8, i8 addrspace(1)* %out, i64 %mul.i4
  %26 = bitcast i8 addrspace(1)* %arrayidx.i to <4 x i8> addrspace(1)*
  store <4 x i8> %vecinit3.i, <4 x i8> addrspace(1)* %26, align 1, !tbaa !14
  ret void
}

; Function Attrs: nounwind
define amdgpu_kernel void @test_8_clz_char(i8 addrspace(1)* nocapture %out, i8 addrspace(1)* nocapture readonly %in0) local_unnamed_addr #0 !kernel_arg_addr_space !0 !kernel_arg_access_qual !6 !kernel_arg_type !7 !kernel_arg_base_type !7 !kernel_arg_type_qual !8 {
entry:
  %0 = tail call i32 @llvm.amdgcn.workgroup.id.x() #2
  %retval.0.i12.i = zext i32 %0 to i64
  %1 = tail call i8 addrspace(4)* @llvm.amdgcn.dispatch.ptr() #2
  %arrayidx.i7.i = getelementptr inbounds i8, i8 addrspace(4)* %1, i64 4
  %2 = bitcast i8 addrspace(4)* %arrayidx.i7.i to i32 addrspace(4)*
  %3 = load i32, i32 addrspace(4)* %2, align 4, !tbaa !9
  %and.i.i = and i32 %3, 65535
  %retval.0.i1121.i = zext i32 %and.i.i to i64
  %mul22.i = mul nuw nsw i64 %retval.0.i1121.i, %retval.0.i12.i
  %4 = tail call i32 @llvm.amdgcn.workitem.id.x() #2, !range !13
  %retval.0.i633.i = zext i32 %4 to i64
  %5 = tail call i8 addrspace(4)* @llvm.amdgcn.implicitarg.ptr() #2
  %arrayidx.i.i = getelementptr inbounds i8, i8 addrspace(4)* %5, i64 4
  %6 = bitcast i8 addrspace(4)* %arrayidx.i.i to i32 addrspace(4)*
  %7 = load i32, i32 addrspace(4)* %6, align 4, !tbaa !9
  %conv.i.i = zext i32 %7 to i64
  %add34.i = add nuw nsw i64 %conv.i.i, %retval.0.i633.i
  %add4.i = add nuw nsw i64 %add34.i, %mul22.i
  %mul.i4 = shl nuw nsw i64 %add4.i, 3
  %arrayidx.i5 = getelementptr inbounds i8, i8 addrspace(1)* %in0, i64 %mul.i4
  %8 = bitcast i8 addrspace(1)* %arrayidx.i5 to <8 x i8> addrspace(1)*
  %9 = load <8 x i8>, <8 x i8> addrspace(1)* %8, align 1, !tbaa !14
  %10 = extractelement <8 x i8> %9, i32 0
  %conv.i.i.i.i = zext i8 %10 to i16
  %tobool.i.i.i.i.i = icmp eq i8 %10, 0
  %11 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i.i, i1 true) #2, !range !15
  %12 = trunc i16 %11 to i8
  %.op.i = add nsw i8 %12, -8
  %sub.i.i.i.i = select i1 %tobool.i.i.i.i.i, i8 8, i8 %.op.i
  %13 = insertelement <4 x i8> undef, i8 %sub.i.i.i.i, i32 0
  %14 = extractelement <8 x i8> %9, i32 1
  %conv.i4.i.i.i = zext i8 %14 to i16
  %tobool.i.i5.i.i.i = icmp eq i8 %14, 0
  %15 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i.i, i1 true) #2, !range !15
  %16 = trunc i16 %15 to i8
  %.op32.i = add nsw i8 %16, -8
  %sub.i8.i.i.i = select i1 %tobool.i.i5.i.i.i, i8 8, i8 %.op32.i
  %17 = insertelement <4 x i8> %13, i8 %sub.i8.i.i.i, i32 1
  %18 = extractelement <8 x i8> %9, i32 2
  %conv.i.i5.i.i = zext i8 %18 to i16
  %tobool.i.i.i6.i.i = icmp eq i8 %18, 0
  %19 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i.i, i1 true) #2, !range !15
  %20 = trunc i16 %19 to i8
  %.op33.i = add nsw i8 %20, -8
  %sub.i.i9.i.i = select i1 %tobool.i.i.i6.i.i, i8 8, i8 %.op33.i
  %21 = insertelement <4 x i8> undef, i8 %sub.i.i9.i.i, i32 0
  %22 = extractelement <8 x i8> %9, i32 3
  %conv.i4.i11.i.i = zext i8 %22 to i16
  %tobool.i.i5.i12.i.i = icmp eq i8 %22, 0
  %23 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i.i, i1 true) #2, !range !15
  %24 = trunc i16 %23 to i8
  %.op34.i = add nsw i8 %24, -8
  %sub.i8.i15.i.i = select i1 %tobool.i.i5.i12.i.i, i8 8, i8 %.op34.i
  %25 = insertelement <4 x i8> %21, i8 %sub.i8.i15.i.i, i32 1
  %vecinit3.i.i = shufflevector <4 x i8> %17, <4 x i8> %25, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %vext.i = shufflevector <4 x i8> %vecinit3.i.i, <4 x i8> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  %26 = extractelement <8 x i8> %9, i32 4
  %conv.i.i.i5.i = zext i8 %26 to i16
  %tobool.i.i.i.i6.i = icmp eq i8 %26, 0
  %27 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i5.i, i1 true) #2, !range !15
  %28 = trunc i16 %27 to i8
  %.op35.i = add nsw i8 %28, -8
  %sub.i.i.i9.i = select i1 %tobool.i.i.i.i6.i, i8 8, i8 %.op35.i
  %29 = insertelement <4 x i8> undef, i8 %sub.i.i.i9.i, i32 0
  %30 = extractelement <8 x i8> %9, i32 5
  %conv.i4.i.i11.i = zext i8 %30 to i16
  %tobool.i.i5.i.i12.i = icmp eq i8 %30, 0
  %31 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i11.i, i1 true) #2, !range !15
  %32 = trunc i16 %31 to i8
  %.op36.i = add nsw i8 %32, -8
  %sub.i8.i.i15.i = select i1 %tobool.i.i5.i.i12.i, i8 8, i8 %.op36.i
  %33 = insertelement <4 x i8> %29, i8 %sub.i8.i.i15.i, i32 1
  %34 = extractelement <8 x i8> %9, i32 6
  %conv.i.i5.i18.i = zext i8 %34 to i16
  %tobool.i.i.i6.i19.i = icmp eq i8 %34, 0
  %35 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i18.i, i1 true) #2, !range !15
  %36 = trunc i16 %35 to i8
  %.op37.i = add nsw i8 %36, -8
  %sub.i.i9.i22.i = select i1 %tobool.i.i.i6.i19.i, i8 8, i8 %.op37.i
  %37 = insertelement <4 x i8> undef, i8 %sub.i.i9.i22.i, i32 0
  %38 = extractelement <8 x i8> %9, i32 7
  %conv.i4.i11.i24.i = zext i8 %38 to i16
  %tobool.i.i5.i12.i25.i = icmp eq i8 %38, 0
  %39 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i24.i, i1 true) #2, !range !15
  %40 = trunc i16 %39 to i8
  %.op38.i = add nsw i8 %40, -8
  %sub.i8.i15.i28.i = select i1 %tobool.i.i5.i12.i25.i, i8 8, i8 %.op38.i
  %41 = insertelement <4 x i8> %37, i8 %sub.i8.i15.i28.i, i32 1
  %vecinit3.i31.i = shufflevector <4 x i8> %33, <4 x i8> %41, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %vext2.i = shufflevector <4 x i8> %vecinit3.i31.i, <4 x i8> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  %vecinit3.i = shufflevector <8 x i8> %vext.i, <8 x i8> %vext2.i, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 8, i32 9, i32 10, i32 11>
  %arrayidx.i = getelementptr inbounds i8, i8 addrspace(1)* %out, i64 %mul.i4
  %42 = bitcast i8 addrspace(1)* %arrayidx.i to <8 x i8> addrspace(1)*
  store <8 x i8> %vecinit3.i, <8 x i8> addrspace(1)* %42, align 1, !tbaa !14
  ret void
}

; Function Attrs: nounwind
define amdgpu_kernel void @test_16_clz_char(i8 addrspace(1)* nocapture %out, i8 addrspace(1)* nocapture readonly %in0) local_unnamed_addr #0 !kernel_arg_addr_space !0 !kernel_arg_access_qual !6 !kernel_arg_type !7 !kernel_arg_base_type !7 !kernel_arg_type_qual !8 {
entry:
  %0 = tail call i32 @llvm.amdgcn.workgroup.id.x() #2
  %retval.0.i12.i = zext i32 %0 to i64
  %1 = tail call i8 addrspace(4)* @llvm.amdgcn.dispatch.ptr() #2
  %arrayidx.i7.i = getelementptr inbounds i8, i8 addrspace(4)* %1, i64 4
  %2 = bitcast i8 addrspace(4)* %arrayidx.i7.i to i32 addrspace(4)*
  %3 = load i32, i32 addrspace(4)* %2, align 4, !tbaa !9
  %and.i.i = and i32 %3, 65535
  %retval.0.i1121.i = zext i32 %and.i.i to i64
  %mul22.i = mul nuw nsw i64 %retval.0.i1121.i, %retval.0.i12.i
  %4 = tail call i32 @llvm.amdgcn.workitem.id.x() #2, !range !13
  %retval.0.i633.i = zext i32 %4 to i64
  %5 = tail call i8 addrspace(4)* @llvm.amdgcn.implicitarg.ptr() #2
  %arrayidx.i.i = getelementptr inbounds i8, i8 addrspace(4)* %5, i64 4
  %6 = bitcast i8 addrspace(4)* %arrayidx.i.i to i32 addrspace(4)*
  %7 = load i32, i32 addrspace(4)* %6, align 4, !tbaa !9
  %conv.i.i = zext i32 %7 to i64
  %add34.i = add nuw nsw i64 %conv.i.i, %retval.0.i633.i
  %add4.i = add nuw nsw i64 %add34.i, %mul22.i
  %mul.i4 = shl nuw nsw i64 %add4.i, 4
  %arrayidx.i5 = getelementptr inbounds i8, i8 addrspace(1)* %in0, i64 %mul.i4
  %8 = bitcast i8 addrspace(1)* %arrayidx.i5 to <16 x i8> addrspace(1)*
  %9 = load <16 x i8>, <16 x i8> addrspace(1)* %8, align 1, !tbaa !14
  %10 = extractelement <16 x i8> %9, i32 0
  %conv.i.i.i.i.i = zext i8 %10 to i16
  %tobool.i.i.i.i.i.i = icmp eq i8 %10, 0
  %11 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i.i.i, i1 true) #2, !range !15
  %12 = trunc i16 %11 to i8
  %.op.i = add nsw i8 %12, -8
  %sub.i.i.i.i.i = select i1 %tobool.i.i.i.i.i.i, i8 8, i8 %.op.i
  %13 = insertelement <4 x i8> undef, i8 %sub.i.i.i.i.i, i32 0
  %14 = extractelement <16 x i8> %9, i32 1
  %conv.i4.i.i.i.i = zext i8 %14 to i16
  %tobool.i.i5.i.i.i.i = icmp eq i8 %14, 0
  %15 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i.i.i, i1 true) #2, !range !15
  %16 = trunc i16 %15 to i8
  %.op62.i = add nsw i8 %16, -8
  %sub.i8.i.i.i.i = select i1 %tobool.i.i5.i.i.i.i, i8 8, i8 %.op62.i
  %17 = insertelement <4 x i8> %13, i8 %sub.i8.i.i.i.i, i32 1
  %18 = extractelement <16 x i8> %9, i32 2
  %conv.i.i5.i.i.i = zext i8 %18 to i16
  %tobool.i.i.i6.i.i.i = icmp eq i8 %18, 0
  %19 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i.i.i, i1 true) #2, !range !15
  %20 = trunc i16 %19 to i8
  %.op63.i = add nsw i8 %20, -8
  %sub.i.i9.i.i.i = select i1 %tobool.i.i.i6.i.i.i, i8 8, i8 %.op63.i
  %21 = insertelement <4 x i8> undef, i8 %sub.i.i9.i.i.i, i32 0
  %22 = extractelement <16 x i8> %9, i32 3
  %conv.i4.i11.i.i.i = zext i8 %22 to i16
  %tobool.i.i5.i12.i.i.i = icmp eq i8 %22, 0
  %23 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i.i.i, i1 true) #2, !range !15
  %24 = trunc i16 %23 to i8
  %.op64.i = add nsw i8 %24, -8
  %sub.i8.i15.i.i.i = select i1 %tobool.i.i5.i12.i.i.i, i8 8, i8 %.op64.i
  %25 = insertelement <4 x i8> %21, i8 %sub.i8.i15.i.i.i, i32 1
  %vecinit3.i.i.i = shufflevector <4 x i8> %17, <4 x i8> %25, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %vext.i.i = shufflevector <4 x i8> %vecinit3.i.i.i, <4 x i8> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  %26 = extractelement <16 x i8> %9, i32 4
  %conv.i.i.i5.i.i = zext i8 %26 to i16
  %tobool.i.i.i.i6.i.i = icmp eq i8 %26, 0
  %27 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i5.i.i, i1 true) #2, !range !15
  %28 = trunc i16 %27 to i8
  %.op65.i = add nsw i8 %28, -8
  %sub.i.i.i9.i.i = select i1 %tobool.i.i.i.i6.i.i, i8 8, i8 %.op65.i
  %29 = insertelement <4 x i8> undef, i8 %sub.i.i.i9.i.i, i32 0
  %30 = extractelement <16 x i8> %9, i32 5
  %conv.i4.i.i11.i.i = zext i8 %30 to i16
  %tobool.i.i5.i.i12.i.i = icmp eq i8 %30, 0
  %31 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i11.i.i, i1 true) #2, !range !15
  %32 = trunc i16 %31 to i8
  %.op66.i = add nsw i8 %32, -8
  %sub.i8.i.i15.i.i = select i1 %tobool.i.i5.i.i12.i.i, i8 8, i8 %.op66.i
  %33 = insertelement <4 x i8> %29, i8 %sub.i8.i.i15.i.i, i32 1
  %34 = extractelement <16 x i8> %9, i32 6
  %conv.i.i5.i18.i.i = zext i8 %34 to i16
  %tobool.i.i.i6.i19.i.i = icmp eq i8 %34, 0
  %35 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i18.i.i, i1 true) #2, !range !15
  %36 = trunc i16 %35 to i8
  %.op67.i = add nsw i8 %36, -8
  %sub.i.i9.i22.i.i = select i1 %tobool.i.i.i6.i19.i.i, i8 8, i8 %.op67.i
  %37 = insertelement <4 x i8> undef, i8 %sub.i.i9.i22.i.i, i32 0
  %38 = extractelement <16 x i8> %9, i32 7
  %conv.i4.i11.i24.i.i = zext i8 %38 to i16
  %tobool.i.i5.i12.i25.i.i = icmp eq i8 %38, 0
  %39 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i24.i.i, i1 true) #2, !range !15
  %40 = trunc i16 %39 to i8
  %.op68.i = add nsw i8 %40, -8
  %sub.i8.i15.i28.i.i = select i1 %tobool.i.i5.i12.i25.i.i, i8 8, i8 %.op68.i
  %41 = insertelement <4 x i8> %37, i8 %sub.i8.i15.i28.i.i, i32 1
  %vecinit3.i31.i.i = shufflevector <4 x i8> %33, <4 x i8> %41, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %vext2.i.i = shufflevector <4 x i8> %vecinit3.i31.i.i, <4 x i8> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  %vecinit3.i.i = shufflevector <8 x i8> %vext.i.i, <8 x i8> %vext2.i.i, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 8, i32 9, i32 10, i32 11>
  %vext.i = shufflevector <8 x i8> %vecinit3.i.i, <8 x i8> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef>
  %42 = extractelement <16 x i8> %9, i32 8
  %conv.i.i.i.i5.i = zext i8 %42 to i16
  %tobool.i.i.i.i.i6.i = icmp eq i8 %42, 0
  %43 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i.i5.i, i1 true) #2, !range !15
  %44 = trunc i16 %43 to i8
  %.op69.i = add nsw i8 %44, -8
  %sub.i.i.i.i9.i = select i1 %tobool.i.i.i.i.i6.i, i8 8, i8 %.op69.i
  %45 = insertelement <4 x i8> undef, i8 %sub.i.i.i.i9.i, i32 0
  %46 = extractelement <16 x i8> %9, i32 9
  %conv.i4.i.i.i11.i = zext i8 %46 to i16
  %tobool.i.i5.i.i.i12.i = icmp eq i8 %46, 0
  %47 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i.i11.i, i1 true) #2, !range !15
  %48 = trunc i16 %47 to i8
  %.op70.i = add nsw i8 %48, -8
  %sub.i8.i.i.i15.i = select i1 %tobool.i.i5.i.i.i12.i, i8 8, i8 %.op70.i
  %49 = insertelement <4 x i8> %45, i8 %sub.i8.i.i.i15.i, i32 1
  %50 = extractelement <16 x i8> %9, i32 10
  %conv.i.i5.i.i18.i = zext i8 %50 to i16
  %tobool.i.i.i6.i.i19.i = icmp eq i8 %50, 0
  %51 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i.i18.i, i1 true) #2, !range !15
  %52 = trunc i16 %51 to i8
  %.op71.i = add nsw i8 %52, -8
  %sub.i.i9.i.i22.i = select i1 %tobool.i.i.i6.i.i19.i, i8 8, i8 %.op71.i
  %53 = insertelement <4 x i8> undef, i8 %sub.i.i9.i.i22.i, i32 0
  %54 = extractelement <16 x i8> %9, i32 11
  %conv.i4.i11.i.i24.i = zext i8 %54 to i16
  %tobool.i.i5.i12.i.i25.i = icmp eq i8 %54, 0
  %55 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i.i24.i, i1 true) #2, !range !15
  %56 = trunc i16 %55 to i8
  %.op72.i = add nsw i8 %56, -8
  %sub.i8.i15.i.i28.i = select i1 %tobool.i.i5.i12.i.i25.i, i8 8, i8 %.op72.i
  %57 = insertelement <4 x i8> %53, i8 %sub.i8.i15.i.i28.i, i32 1
  %vecinit3.i.i31.i = shufflevector <4 x i8> %49, <4 x i8> %57, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %vext.i32.i = shufflevector <4 x i8> %vecinit3.i.i31.i, <4 x i8> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  %58 = extractelement <16 x i8> %9, i32 12
  %conv.i.i.i5.i33.i = zext i8 %58 to i16
  %tobool.i.i.i.i6.i34.i = icmp eq i8 %58, 0
  %59 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i5.i33.i, i1 true) #2, !range !15
  %60 = trunc i16 %59 to i8
  %.op73.i = add nsw i8 %60, -8
  %sub.i.i.i9.i37.i = select i1 %tobool.i.i.i.i6.i34.i, i8 8, i8 %.op73.i
  %61 = insertelement <4 x i8> undef, i8 %sub.i.i.i9.i37.i, i32 0
  %62 = extractelement <16 x i8> %9, i32 13
  %conv.i4.i.i11.i39.i = zext i8 %62 to i16
  %tobool.i.i5.i.i12.i40.i = icmp eq i8 %62, 0
  %63 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i11.i39.i, i1 true) #2, !range !15
  %64 = trunc i16 %63 to i8
  %.op74.i = add nsw i8 %64, -8
  %sub.i8.i.i15.i43.i = select i1 %tobool.i.i5.i.i12.i40.i, i8 8, i8 %.op74.i
  %65 = insertelement <4 x i8> %61, i8 %sub.i8.i.i15.i43.i, i32 1
  %66 = extractelement <16 x i8> %9, i32 14
  %conv.i.i5.i18.i46.i = zext i8 %66 to i16
  %tobool.i.i.i6.i19.i47.i = icmp eq i8 %66, 0
  %67 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i18.i46.i, i1 true) #2, !range !15
  %68 = trunc i16 %67 to i8
  %.op75.i = add nsw i8 %68, -8
  %sub.i.i9.i22.i50.i = select i1 %tobool.i.i.i6.i19.i47.i, i8 8, i8 %.op75.i
  %69 = insertelement <4 x i8> undef, i8 %sub.i.i9.i22.i50.i, i32 0
  %70 = extractelement <16 x i8> %9, i32 15
  %conv.i4.i11.i24.i52.i = zext i8 %70 to i16
  %tobool.i.i5.i12.i25.i53.i = icmp eq i8 %70, 0
  %71 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i24.i52.i, i1 true) #2, !range !15
  %72 = trunc i16 %71 to i8
  %.op76.i = add nsw i8 %72, -8
  %sub.i8.i15.i28.i56.i = select i1 %tobool.i.i5.i12.i25.i53.i, i8 8, i8 %.op76.i
  %73 = insertelement <4 x i8> %69, i8 %sub.i8.i15.i28.i56.i, i32 1
  %vecinit3.i31.i59.i = shufflevector <4 x i8> %65, <4 x i8> %73, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %vext2.i60.i = shufflevector <4 x i8> %vecinit3.i31.i59.i, <4 x i8> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  %vecinit3.i61.i = shufflevector <8 x i8> %vext.i32.i, <8 x i8> %vext2.i60.i, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 8, i32 9, i32 10, i32 11>
  %vext2.i = shufflevector <8 x i8> %vecinit3.i61.i, <8 x i8> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef>
  %vecinit3.i = shufflevector <16 x i8> %vext.i, <16 x i8> %vext2.i, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23>
  %arrayidx.i = getelementptr inbounds i8, i8 addrspace(1)* %out, i64 %mul.i4
  %74 = bitcast i8 addrspace(1)* %arrayidx.i to <16 x i8> addrspace(1)*
  store <16 x i8> %vecinit3.i, <16 x i8> addrspace(1)* %74, align 1, !tbaa !14
  ret void
}

attributes #0 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "denorms-are-zero"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="false" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="carrizo" "target-features"="+16-bit-insts,+ci-insts,+dpp,+fp32-denormals,+fp64-fp16-denormals,+s-memrealtime,+vi-insts" "uniform-work-group-size"="true" "unsafe-fp-math"="false" "use-soft-float"="false" }
attributes #1 = { nounwind readnone speculatable }
attributes #2 = { nounwind }

!opencl.ocl.version = !{!0}
!llvm.ident = !{!1, !2, !3}
!llvm.module.flags = !{!4, !5}

!0 = !{i32 1, i32 1}
!1 = !{!"clang version 8.0.0 (https://git.llvm.org/git/clang.git a04db9e679cb36126e3a246771d0f1323c8a3e20) (https://git.llvm.org/git/llvm.git 0d0e510068bb53271bf95bdcf579bd3c371b30c4)"}
!2 = !{!"clang version 8.0.0 (https://git.llvm.org/git/clang.git 87aefa4312da1e6a80bdbc3828363391714b3142) (https://git.llvm.org/git/llvm.git 26598e549e6344a04bd9c32b53567d9cdb9894da)"}
!3 = !{!"clang version 8.0.0 (https://git.llvm.org/git/clang.git 852234a8798dd5f5401a14b3fd9823cd8b45fbfa) (https://git.llvm.org/git/llvm.git b0c4f7e14ecb82fb629be0263c6bdf1001bf8737)"}
!4 = !{i32 1, !"wchar_size", i32 4}
!5 = !{i32 7, !"PIC Level", i32 1}
!6 = !{!"none", !"none"}
!7 = !{!"char*", !"char*"}
!8 = !{!"", !""}
!9 = !{!10, !10, i64 0}
!10 = !{!"int", !11, i64 0}
!11 = !{!"omnipotent char", !12, i64 0}
!12 = !{!"Simple C/C++ TBAA"}
!13 = !{i32 0, i32 1024}
!14 = !{!11, !11, i64 0}
!15 = !{i16 8, i16 17}
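[Editorial note: each test_N_clz_char kernel above loads an N-wide char vector, applies the same per-element expansion (zext i8 to i16, llvm.ctlz.i16, subtract 8, select 8 when the input byte is 0 -- i.e. an 8-bit clz, consistent with the !range !{i16 8, i16 17} annotation), and stores the result. The OpenCL source is not attached, so the reconstruction below is a guess at the reference semantics, reusing clz_byte() from the earlier sketch; function and parameter names are hypothetical.]

	/* Reference model of test_N_clz_char for N in {1, 2, 4, 8, 16};
	 * gid mirrors the workgroup_id * workgroup_size + offset +
	 * workitem_id address computation in the IR above. */
	void test_N_clz_char_ref(uint8_t *out, const uint8_t *in0,
	                         size_t gid, int N) {
	    for (int e = 0; e < N; ++e)
	        out[N * gid + e] = clz_byte(in0[N * gid + e]);
	}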
-------------- next part --------------
A non-text attachment was scrubbed...
Name: bisect.log
Type: text/x-log
Size: 1499 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20181007/ceb5cd5e/attachment-0001.bin>
-------------- next part --------------
	.text
	.section	.AMDGPU.config
	.long	47176
	.long	11468865
	.long	47180
	.long	144
	.long	47200
	.long	0
	.long	4
	.long	0
	.long	8
	.long	0
	.text
	.globl	test_1_clz_char         ; -- Begin function test_1_clz_char
	.p2align	8
	.type	test_1_clz_char,@function
	.amdgpu_hsa_kernel test_1_clz_char
test_1_clz_char:                        ; @test_1_clz_char
	.amd_kernel_code_t
		amd_code_version_major = 1
		amd_code_version_minor = 2
		amd_machine_kind = 1
		amd_machine_version_major = 8
		amd_machine_version_minor = 0
		amd_machine_version_stepping = 1
		kernel_code_entry_byte_offset = 256
		kernel_code_prefetch_byte_size = 0
		granulated_workitem_vgpr_count = 1
		granulated_wavefront_sgpr_count = 1
		priority = 0
		float_mode = 240
		priv = 0
		enable_dx10_clamp = 1
		debug_mode = 0
		enable_ieee_mode = 1
		enable_sgpr_private_segment_wave_byte_offset = 0
		user_sgpr_count = 8
		enable_trap_handler = 0
		enable_sgpr_workgroup_id_x = 1
		enable_sgpr_workgroup_id_y = 0
		enable_sgpr_workgroup_id_z = 0
		enable_sgpr_workgroup_info = 0
		enable_vgpr_workitem_id = 0
		enable_exception_msb = 0
		granulated_lds_size = 0
		enable_exception = 0
		enable_sgpr_private_segment_buffer = 1
		enable_sgpr_dispatch_ptr = 1
		enable_sgpr_queue_ptr = 0
		enable_sgpr_kernarg_segment_ptr = 1
		enable_sgpr_dispatch_id = 0
		enable_sgpr_flat_scratch_init = 0
		enable_sgpr_private_segment_size = 0
		enable_sgpr_grid_workgroup_count_x = 0
		enable_sgpr_grid_workgroup_count_y = 0
		enable_sgpr_grid_workgroup_count_z = 0
		enable_ordered_append_gds = 0
		private_element_size = 1
		is_ptr64 = 1
		is_dynamic_callstack = 0
		is_debug_enabled = 0
		is_xnack_enabled = 1
		workitem_private_segment_byte_size = 0
		workgroup_group_segment_byte_size = 0
		gds_segment_byte_size = 0
		kernarg_segment_byte_size = 32
		workgroup_fbarrier_count = 0
		wavefront_sgpr_count = 15
		workitem_vgpr_count = 5
		reserved_vgpr_first = 0
		reserved_vgpr_count = 0
		reserved_sgpr_first = 0
		reserved_sgpr_count = 0
		debug_wavefront_private_segment_offset_sgpr = 0
		debug_private_segment_buffer_sgpr = 0
		kernarg_segment_alignment = 4
		group_segment_alignment = 4
		private_segment_alignment = 4
		wavefront_size = 6
		call_convention = -1
		runtime_loader_kernel_symbol = 0
	.end_amd_kernel_code_t
; %bb.0:                                ; %entry
	s_load_dword s9, s[6:7], 0x14
	s_load_dword s10, s[4:5], 0x4
	s_load_dwordx4 s[0:3], s[6:7], 0x0
	s_mov_b32 s4, 0xffff
	v_mov_b32_e32 v1, 0
	s_waitcnt lgkmcnt(0)
	v_add_u32_e32 v0, vcc, s9, v0
	s_and_b32 s4, s10, s4
	v_addc_u32_e32 v1, vcc, 0, v1, vcc
	v_mov_b32_e32 v2, s8
	v_mad_u64_u32 v[0:1], s[4:5], s4, v2, v[0:1]
	v_mov_b32_e32 v3, s3
	v_mov_b32_e32 v4, s1
	v_add_u32_e32 v2, vcc, s2, v0
	v_addc_u32_e32 v3, vcc, v3, v1, vcc
	s_nop 0
	s_nop 0
	flat_load_ubyte v2, v[2:3]
	s_waitcnt vmcnt(0) lgkmcnt(0)
	v_ffbh_u32_sdwa v3, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0
	v_add_u32_e32 v3, vcc, -16, v3
	v_add_u16_e32 v3, -8, v3
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_cndmask_b32_e32 v2, 8, v3, vcc
	v_add_u32_e32 v0, vcc, s0, v0
	v_addc_u32_e32 v1, vcc, v4, v1, vcc
	s_nop 0
	s_nop 0
	flat_store_byte v[0:1], v2
	s_endpgm
.Lfunc_end0:
	.size	test_1_clz_char, .Lfunc_end0-test_1_clz_char
                                        ; -- End function
	.section	.AMDGPU.csdata
; Kernel info:
; codeLenInByte = 152
; NumSgprs: 15
; NumVgprs: 5
; ScratchSize: 0
; MemoryBound: 0
; FloatMode: 240
; IeeeMode: 1
; LDSByteSize: 0 bytes/workgroup (compile time only)
; SGPRBlocks: 1
; VGPRBlocks: 1
; NumSGPRsForWavesPerEU: 15
; NumVGPRsForWavesPerEU: 5
; WaveLimiterHint : 1
; COMPUTE_PGM_RSRC2:USER_SGPR: 8
; COMPUTE_PGM_RSRC2:TRAP_HANDLER: 0
; COMPUTE_PGM_RSRC2:TGID_X_EN: 1
; COMPUTE_PGM_RSRC2:TGID_Y_EN: 0
; COMPUTE_PGM_RSRC2:TGID_Z_EN: 0
; COMPUTE_PGM_RSRC2:TIDIG_COMP_CNT: 0
	.section	.AMDGPU.config
	.long	47176
	.long	11468866
	.long	47180
	.long	144
	.long	47200
	.long	0
	.long	4
	.long	0
	.long	8
	.long	0
	.text
	.globl	test_2_clz_char         ; -- Begin function test_2_clz_char
	.p2align	8
	.type	test_2_clz_char,@function
	.amdgpu_hsa_kernel test_2_clz_char
test_2_clz_char:                        ; @test_2_clz_char
	.amd_kernel_code_t
		amd_code_version_major = 1
		amd_code_version_minor = 2
		amd_machine_kind = 1
		amd_machine_version_major = 8
		amd_machine_version_minor = 0
		amd_machine_version_stepping = 1
		kernel_code_entry_byte_offset = 256
		kernel_code_prefetch_byte_size = 0
		granulated_workitem_vgpr_count = 2
		granulated_wavefront_sgpr_count = 1
		priority = 0
		float_mode = 240
		priv = 0
		enable_dx10_clamp = 1
		debug_mode = 0
		enable_ieee_mode = 1
		enable_sgpr_private_segment_wave_byte_offset = 0
		user_sgpr_count = 8
		enable_trap_handler = 0
		enable_sgpr_workgroup_id_x = 1
		enable_sgpr_workgroup_id_y = 0
		enable_sgpr_workgroup_id_z = 0
		enable_sgpr_workgroup_info = 0
		enable_vgpr_workitem_id = 0
		enable_exception_msb = 0
		granulated_lds_size = 0
		enable_exception = 0
		enable_sgpr_private_segment_buffer = 1
		enable_sgpr_dispatch_ptr = 1
		enable_sgpr_queue_ptr = 0
		enable_sgpr_kernarg_segment_ptr = 1
		enable_sgpr_dispatch_id = 0
		enable_sgpr_flat_scratch_init = 0
		enable_sgpr_private_segment_size = 0
		enable_sgpr_grid_workgroup_count_x = 0
		enable_sgpr_grid_workgroup_count_y = 0
		enable_sgpr_grid_workgroup_count_z = 0
		enable_ordered_append_gds = 0
		private_element_size = 1
		is_ptr64 = 1
		is_dynamic_callstack = 0
		is_debug_enabled = 0
		is_xnack_enabled = 1
		workitem_private_segment_byte_size = 0
		workgroup_group_segment_byte_size = 0
		gds_segment_byte_size = 0
		kernarg_segment_byte_size = 32
		workgroup_fbarrier_count = 0
		wavefront_sgpr_count = 15
		workitem_vgpr_count = 10
		reserved_vgpr_first = 0
		reserved_vgpr_count = 0
		reserved_sgpr_first = 0
		reserved_sgpr_count = 0
		debug_wavefront_private_segment_offset_sgpr = 0
		debug_private_segment_buffer_sgpr = 0
		kernarg_segment_alignment = 4
		group_segment_alignment = 4
		private_segment_alignment = 4
		wavefront_size = 6
		call_convention = -1
		runtime_loader_kernel_symbol = 0
	.end_amd_kernel_code_t
; %bb.0:                                ; %entry
	s_load_dword s9, s[6:7], 0x14
	s_load_dword s10, s[4:5], 0x4
	s_load_dwordx4 s[0:3], s[6:7], 0x0
	s_mov_b32 s4, 0xffff
	v_mov_b32_e32 v1, 0
	s_waitcnt lgkmcnt(0)
	v_add_u32_e32 v0, vcc, s9, v0
	s_and_b32 s4, s10, s4
	v_addc_u32_e32 v1, vcc, 0, v1, vcc
	v_mov_b32_e32 v2, s8
	v_mad_u64_u32 v[0:1], s[4:5], s4, v2, v[0:1]
	v_mov_b32_e32 v3, s3
	v_mov_b32_e32 v6, s1
	v_lshlrev_b64 v[0:1], 1, v[0:1]
	v_add_u32_e32 v2, vcc, s2, v0
	v_addc_u32_e32 v3, vcc, v3, v1, vcc
	v_add_u32_e32 v4, vcc, 1, v2
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	v_add_u32_e32 v0, vcc, s0, v0
	v_addc_u32_e32 v1, vcc, v6, v1, vcc
	v_add_u32_e32 v6, vcc, 1, v0
	v_addc_u32_e32 v7, vcc, 0, v1, vcc
	s_nop 0
	s_nop 0
	flat_load_ubyte v9, v[2:3]
	flat_load_ubyte v8, v[4:5]
	s_waitcnt vmcnt(0) lgkmcnt(0)
	v_lshlrev_b16_e32 v2, 8, v8
	v_or_b32_e32 v2, v2, v9
	v_and_b32_e32 v3, 0xff, v2
	v_lshrrev_b16_e32 v2, 8, v2
	v_ffbh_u32_sdwa v4, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0
	v_ffbh_u32_e32 v5, v2
	v_add_u32_e32 v4, vcc, -16, v4
	v_add_u32_e32 v5, vcc, -16, v5
	v_add_u16_e32 v4, -8, v4
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_cndmask_b32_e32 v3, 8, v4, vcc
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_cndmask_b32_e32 v2, 8, v5, vcc
	s_nop 0
	s_nop 0
	flat_store_byte v[0:1], v3
	flat_store_byte v[6:7], v2
	s_endpgm
.Lfunc_end1:
	.size	test_2_clz_char, .Lfunc_end1-test_2_clz_char
                                        ; -- End function
	.section	.AMDGPU.csdata
; Kernel info:
; codeLenInByte = 232
; NumSgprs: 15
; NumVgprs: 10
; ScratchSize: 0
; MemoryBound: 0
; FloatMode: 240
; IeeeMode: 1
; LDSByteSize: 0 bytes/workgroup (compile time only)
; SGPRBlocks: 1
; VGPRBlocks: 2
; NumSGPRsForWavesPerEU: 15
; NumVGPRsForWavesPerEU: 10
; WaveLimiterHint : 1
; COMPUTE_PGM_RSRC2:USER_SGPR: 8
; COMPUTE_PGM_RSRC2:TRAP_HANDLER: 0
; COMPUTE_PGM_RSRC2:TGID_X_EN: 1
; COMPUTE_PGM_RSRC2:TGID_Y_EN: 0
; COMPUTE_PGM_RSRC2:TGID_Z_EN: 0
; COMPUTE_PGM_RSRC2:TIDIG_COMP_CNT: 0
	.section	.AMDGPU.config
	.long	47176
	.long	11468868
	.long	47180
	.long	144
	.long	47200
	.long	0
	.long	4
	.long	0
	.long	8
	.long	0
	.text
	.globl	test_4_clz_char         ; -- Begin function test_4_clz_char
	.p2align	8
	.type	test_4_clz_char,@function
	.amdgpu_hsa_kernel test_4_clz_char
test_4_clz_char:                        ; @test_4_clz_char
	.amd_kernel_code_t
		amd_code_version_major = 1
		amd_code_version_minor = 2
		amd_machine_kind = 1
		amd_machine_version_major = 8
		amd_machine_version_minor = 0
		amd_machine_version_stepping = 1
		kernel_code_entry_byte_offset = 256
		kernel_code_prefetch_byte_size = 0
		granulated_workitem_vgpr_count = 4
		granulated_wavefront_sgpr_count = 1
		priority = 0
		float_mode = 240
		priv = 0
		enable_dx10_clamp = 1
		debug_mode = 0
		enable_ieee_mode = 1
		enable_sgpr_private_segment_wave_byte_offset = 0
		user_sgpr_count = 8
		enable_trap_handler = 0
		enable_sgpr_workgroup_id_x = 1
		enable_sgpr_workgroup_id_y = 0
		enable_sgpr_workgroup_id_z = 0
		enable_sgpr_workgroup_info = 0
		enable_vgpr_workitem_id = 0
		enable_exception_msb = 0
		granulated_lds_size = 0
		enable_exception = 0
		enable_sgpr_private_segment_buffer = 1
		enable_sgpr_dispatch_ptr = 1
		enable_sgpr_queue_ptr = 0
		enable_sgpr_kernarg_segment_ptr = 1
		enable_sgpr_dispatch_id = 0
		enable_sgpr_flat_scratch_init = 0
		enable_sgpr_private_segment_size = 0
		enable_sgpr_grid_workgroup_count_x = 0
		enable_sgpr_grid_workgroup_count_y = 0
		enable_sgpr_grid_workgroup_count_z = 0
		enable_ordered_append_gds = 0
		private_element_size = 1
		is_ptr64 = 1
		is_dynamic_callstack = 0
		is_debug_enabled = 0
		is_xnack_enabled = 1
		workitem_private_segment_byte_size = 0
		workgroup_group_segment_byte_size = 0
		gds_segment_byte_size = 0
		kernarg_segment_byte_size = 32
		workgroup_fbarrier_count = 0
		wavefront_sgpr_count = 15
		workitem_vgpr_count = 20
		reserved_vgpr_first = 0
		reserved_vgpr_count = 0
		reserved_sgpr_first = 0
		reserved_sgpr_count = 0
		debug_wavefront_private_segment_offset_sgpr = 0
		debug_private_segment_buffer_sgpr = 0
		kernarg_segment_alignment = 4
		group_segment_alignment = 4
		private_segment_alignment = 4
		wavefront_size = 6
		call_convention = -1
		runtime_loader_kernel_symbol = 0
	.end_amd_kernel_code_t
; %bb.0:                                ; %entry
	s_load_dword s9, s[6:7], 0x14
	s_load_dword s10, s[4:5], 0x4
	s_load_dwordx4 s[0:3], s[6:7], 0x0
	s_mov_b32 s6, 0xffff
	v_mov_b32_e32 v1, 0
	s_waitcnt lgkmcnt(0)
	v_add_u32_e32 v0, vcc, s9, v0
	v_addc_u32_e32 v1, vcc, 0, v1, vcc
	s_and_b32 s4, s10, s6
	v_mov_b32_e32 v2, s8
	v_mad_u64_u32 v[0:1], s[4:5], s4, v2, v[0:1]
	v_mov_b32_e32 v3, s3
	v_mov_b32_e32 v10, s1
	v_lshlrev_b64 v[0:1], 2, v[0:1]
	v_add_u32_e32 v2, vcc, s2, v0
	v_addc_u32_e32 v3, vcc, v3, v1, vcc
	v_add_u32_e32 v4, vcc, 1, v2
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	v_add_u32_e32 v6, vcc, 3, v2
	v_addc_u32_e32 v7, vcc, 0, v3, vcc
	v_add_u32_e32 v8, vcc, 2, v2
	v_addc_u32_e32 v9, vcc, 0, v3, vcc
	v_add_u32_e32 v0, vcc, s0, v0
	v_addc_u32_e32 v1, vcc, v10, v1, vcc
	v_add_u32_e32 v10, vcc, 2, v0
	v_addc_u32_e32 v11, vcc, 0, v1, vcc
	v_add_u32_e32 v12, vcc, 3, v0
	v_addc_u32_e32 v13, vcc, 0, v1, vcc
	s_movk_i32 s2, 0xff
	v_add_u32_e32 v14, vcc, 1, v0
	v_addc_u32_e32 v15, vcc, 0, v1, vcc
	s_nop 0
	flat_load_ubyte v17, v[2:3]
	flat_load_ubyte v19, v[8:9]
	flat_load_ubyte v18, v[6:7]
	flat_load_ubyte v16, v[4:5]
	s_waitcnt vmcnt(1) lgkmcnt(1)
	v_lshlrev_b32_e32 v3, 8, v18
	s_waitcnt vmcnt(0) lgkmcnt(0)
	v_lshlrev_b32_e32 v2, 8, v16
	v_or_b32_e32 v3, v3, v19
	v_or_b32_e32 v2, v2, v17
	v_lshlrev_b32_e32 v4, 16, v3
	v_ffbh_u32_sdwa v6, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_and_b32_e32 v7, s2, v3
	v_ffbh_u32_sdwa v8, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_bfe_u32 v3, v3, 8, 8
	v_and_b32_e32 v5, s2, v2
	v_or_b32_e32 v2, v4, v2
	v_add_u32_e32 v4, vcc, -16, v6
	v_add_u32_e32 v6, vcc, -16, v8
	v_ffbh_u32_e32 v8, v3
	v_add_u32_e32 v8, vcc, -16, v8
	v_lshrrev_b32_e32 v9, 8, v2
	v_cmp_ne_u16_e32 vcc, 0, v5
	v_add_u16_e32 v4, -8, v4
	v_ffbh_u32_sdwa v2, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_cndmask_b32_e32 v4, 8, v4, vcc
	v_add_u32_e32 v2, vcc, -16, v2
	v_cmp_ne_u16_e32 vcc, 0, v7
	v_add_u16_e32 v6, -8, v6
	v_cndmask_b32_e32 v6, 8, v6, vcc
	v_and_b32_e32 v5, s2, v9
	v_add_u16_e32 v7, -8, v8
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_cndmask_b32_e32 v3, 8, v7, vcc
	v_add_u16_e32 v2, -8, v2
	v_cmp_ne_u16_e32 vcc, 0, v5
	v_cndmask_b32_e32 v2, 8, v2, vcc
	v_lshlrev_b16_e32 v2, 8, v2
	v_or_b32_sdwa v2, v4, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_lshlrev_b16_e32 v3, 8, v3
	v_and_b32_e32 v2, s6, v2
	v_lshrrev_b32_e32 v4, 8, v2
	v_lshrrev_b32_e32 v3, 8, v3
	s_nop 0
	s_nop 0
	flat_store_byte v[0:1], v2
	flat_store_byte v[14:15], v4
	flat_store_byte v[12:13], v3
	flat_store_byte v[10:11], v6
	s_endpgm
.Lfunc_end2:
	.size	test_4_clz_char, .Lfunc_end2-test_4_clz_char
                                        ; -- End function
	.section	.AMDGPU.csdata
; Kernel info:
; codeLenInByte = 404
; NumSgprs: 15
; NumVgprs: 20
; ScratchSize: 0
; MemoryBound: 0
; FloatMode: 240
; IeeeMode: 1
; LDSByteSize: 0 bytes/workgroup (compile time only)
; SGPRBlocks: 1
; VGPRBlocks: 4
; NumSGPRsForWavesPerEU: 15
; NumVGPRsForWavesPerEU: 20
; WaveLimiterHint : 1
; COMPUTE_PGM_RSRC2:USER_SGPR: 8
; COMPUTE_PGM_RSRC2:TRAP_HANDLER: 0
; COMPUTE_PGM_RSRC2:TGID_X_EN: 1
; COMPUTE_PGM_RSRC2:TGID_Y_EN: 0
; COMPUTE_PGM_RSRC2:TGID_Z_EN: 0
; COMPUTE_PGM_RSRC2:TIDIG_COMP_CNT: 0
	.section	.AMDGPU.config
	.long	47176
	.long	11468868
	.long	47180
	.long	144
	.long	47200
	.long	0
	.long	4
	.long	0
	.long	8
	.long	0
	.text
	.globl	test_8_clz_char         ; -- Begin function test_8_clz_char
	.p2align	8
	.type	test_8_clz_char,@function
	.amdgpu_hsa_kernel test_8_clz_char
test_8_clz_char:                        ; @test_8_clz_char
	.amd_kernel_code_t
		amd_code_version_major = 1
		amd_code_version_minor = 2
		amd_machine_kind = 1
		amd_machine_version_major = 8
		amd_machine_version_minor = 0
		amd_machine_version_stepping = 1
		kernel_code_entry_byte_offset = 256
		kernel_code_prefetch_byte_size = 0
		granulated_workitem_vgpr_count = 4
		granulated_wavefront_sgpr_count = 1
		priority = 0
		float_mode = 240
		priv = 0
		enable_dx10_clamp = 1
		debug_mode = 0
		enable_ieee_mode = 1
		enable_sgpr_private_segment_wave_byte_offset = 0
		user_sgpr_count = 8
		enable_trap_handler = 0
		enable_sgpr_workgroup_id_x = 1
		enable_sgpr_workgroup_id_y = 0
		enable_sgpr_workgroup_id_z = 0
		enable_sgpr_workgroup_info = 0
		enable_vgpr_workitem_id = 0
		enable_exception_msb = 0
		granulated_lds_size = 0
		enable_exception = 0
		enable_sgpr_private_segment_buffer = 1
		enable_sgpr_dispatch_ptr = 1
		enable_sgpr_queue_ptr = 0
		enable_sgpr_kernarg_segment_ptr = 1
		enable_sgpr_dispatch_id = 0
		enable_sgpr_flat_scratch_init = 0
		enable_sgpr_private_segment_size = 0
		enable_sgpr_grid_workgroup_count_x = 0
		enable_sgpr_grid_workgroup_count_y = 0
		enable_sgpr_grid_workgroup_count_z = 0
		enable_ordered_append_gds = 0
		private_element_size = 1
		is_ptr64 = 1
		is_dynamic_callstack = 0
		is_debug_enabled = 0
		is_xnack_enabled = 1
		workitem_private_segment_byte_size = 0
		workgroup_group_segment_byte_size = 0
		gds_segment_byte_size = 0
		kernarg_segment_byte_size = 32
		workgroup_fbarrier_count = 0
		wavefront_sgpr_count = 15
		workitem_vgpr_count = 20
		reserved_vgpr_first = 0
		reserved_vgpr_count = 0
		reserved_sgpr_first = 0
		reserved_sgpr_count = 0
		debug_wavefront_private_segment_offset_sgpr = 0
		debug_private_segment_buffer_sgpr = 0
		kernarg_segment_alignment = 4
		group_segment_alignment = 4
		private_segment_alignment = 4
		wavefront_size = 6
		call_convention = -1
		runtime_loader_kernel_symbol = 0
	.end_amd_kernel_code_t
; %bb.0:                                ; %entry
	s_load_dword s10, s[4:5], 0x4
	s_load_dwordx4 s[0:3], s[6:7], 0x0
	s_load_dword s9, s[6:7], 0x14
	s_mov_b32 s6, 0xffff
	v_mov_b32_e32 v1, 0
	s_waitcnt lgkmcnt(0)
	s_and_b32 s4, s10, s6
	v_mov_b32_e32 v2, s8
	v_add_u32_e32 v0, vcc, s9, v0
	v_addc_u32_e32 v1, vcc, 0, v1, vcc
	v_mad_u64_u32 v[0:1], s[4:5], s4, v2, v[0:1]
	v_mov_b32_e32 v3, s3
	v_lshlrev_b64 v[0:1], 3, v[0:1]
	v_add_u32_e32 v2, vcc, s2, v0
	v_addc_u32_e32 v3, vcc, v3, v1, vcc
	v_add_u32_e32 v4, vcc, 5, v2
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	v_add_u32_e32 v6, vcc, 4, v2
	v_addc_u32_e32 v7, vcc, 0, v3, vcc
	v_add_u32_e32 v8, vcc, 7, v2
	v_addc_u32_e32 v9, vcc, 0, v3, vcc
	s_movk_i32 s2, 0xff
	s_nop 0
	s_nop 0
	flat_load_ubyte v14, v[8:9]
	flat_load_ubyte v13, v[6:7]
	flat_load_ubyte v12, v[4:5]
	v_add_u32_e32 v4, vcc, 6, v2
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	v_add_u32_e32 v6, vcc, 1, v2
	v_addc_u32_e32 v7, vcc, 0, v3, vcc
	v_add_u32_e32 v8, vcc, 3, v2
	v_addc_u32_e32 v9, vcc, 0, v3, vcc
	v_add_u32_e32 v10, vcc, 2, v2
	v_addc_u32_e32 v11, vcc, 0, v3, vcc
	s_nop 0
	flat_load_ubyte v17, v[2:3]
	s_nop 0
	flat_load_ubyte v19, v[10:11]
	flat_load_ubyte v18, v[8:9]
	flat_load_ubyte v16, v[6:7]
	flat_load_ubyte v15, v[4:5]
	s_waitcnt vmcnt(7) lgkmcnt(7)
	v_lshlrev_b32_e32 v3, 8, v14
	s_waitcnt vmcnt(5) lgkmcnt(5)
	v_lshlrev_b32_e32 v2, 8, v12
	v_or_b32_e32 v2, v2, v13
	v_and_b32_e32 v12, s2, v2
	s_waitcnt vmcnt(2) lgkmcnt(2)
	v_lshlrev_b32_e32 v5, 8, v18
	s_waitcnt vmcnt(1) lgkmcnt(1)
	v_lshlrev_b32_e32 v4, 8, v16
	v_or_b32_e32 v5, v5, v19
	v_or_b32_e32 v4, v4, v17
	v_lshlrev_b32_e32 v7, 16, v5
	v_ffbh_u32_sdwa v9, v4 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_and_b32_e32 v10, s2, v5
	v_ffbh_u32_sdwa v11, v5 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_bfe_u32 v5, v5, 8, 8
	v_and_b32_e32 v8, s2, v4
	v_or_b32_e32 v4, v7, v4
	v_add_u32_e32 v7, vcc, -16, v9
	v_add_u32_e32 v9, vcc, -16, v11
	v_ffbh_u32_e32 v11, v5
	v_add_u32_e32 v11, vcc, -16, v11
	v_lshrrev_b32_e32 v13, 8, v4
	v_cmp_ne_u16_e32 vcc, 0, v8
	v_add_u16_e32 v7, -8, v7
	v_ffbh_u32_sdwa v4, v4 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_cndmask_b32_e32 v7, 8, v7, vcc
	v_add_u32_e32 v4, vcc, -16, v4
	v_cmp_ne_u16_e32 vcc, 0, v10
	v_add_u16_e32 v9, -8, v9
	v_cndmask_b32_e32 v9, 8, v9, vcc
	v_and_b32_e32 v8, s2, v13
	v_add_u16_e32 v10, -8, v11
	v_cmp_ne_u16_e32 vcc, 0, v5
	v_cndmask_b32_e32 v5, 8, v10, vcc
	s_waitcnt vmcnt(0) lgkmcnt(0)
	v_or_b32_e32 v3, v3, v15
	v_add_u16_e32 v4, -8, v4
	v_cmp_ne_u16_e32 vcc, 0, v8
	v_lshlrev_b32_e32 v6, 16, v3
	v_cndmask_b32_e32 v4, 8, v4, vcc
	v_lshlrev_b16_e32 v4, 8, v4
	v_or_b32_e32 v6, v6, v2
	v_ffbh_u32_sdwa v2, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v2, vcc, -16, v2
	v_or_b32_sdwa v4, v7, v4 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_and_b32_e32 v10, s6, v4
	v_lshrrev_b32_e32 v4, 8, v6
	v_add_u16_e32 v2, -8, v2
	v_cmp_ne_u16_e32 vcc, 0, v12
	v_ffbh_u32_sdwa v6, v6 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_cndmask_b32_e32 v2, 8, v2, vcc
	v_add_u32_e32 v6, vcc, -16, v6
	v_and_b32_e32 v4, s2, v4
	v_add_u16_e32 v6, -8, v6
	v_cmp_ne_u16_e32 vcc, 0, v4
	v_cndmask_b32_e32 v4, 8, v6, vcc
	v_lshlrev_b16_e32 v4, 8, v4
	v_or_b32_sdwa v2, v2, v4 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v4, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v4, vcc, -16, v4
	v_and_b32_e32 v8, s6, v2
	v_and_b32_e32 v2, s2, v3
	v_add_u16_e32 v4, -8, v4
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_bfe_u32 v3, v3, 8, 8
	v_cndmask_b32_e32 v2, 8, v4, vcc
	v_ffbh_u32_e32 v4, v3
	v_add_u32_e32 v4, vcc, -16, v4
	v_add_u16_e32 v4, -8, v4
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_cndmask_b32_e32 v3, 8, v4, vcc
	v_lshlrev_b16_e32 v3, 8, v3
	v_or_b32_sdwa v2, v2, v3 dst_sel:WORD_1 dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_or_b32_e32 v11, v8, v2
	v_mov_b32_e32 v2, s1
	v_add_u32_e32 v0, vcc, s0, v0
	v_addc_u32_e32 v1, vcc, v2, v1, vcc
	v_add_u32_e32 v2, vcc, 2, v0
	v_addc_u32_e32 v3, vcc, 0, v1, vcc
	v_lshlrev_b16_e32 v5, 8, v5
	v_add_u32_e32 v4, vcc, 3, v0
	v_lshrrev_b32_e32 v12, 8, v5
	v_addc_u32_e32 v5, vcc, 0, v1, vcc
	v_add_u32_e32 v6, vcc, 4, v0
	v_addc_u32_e32 v7, vcc, 0, v1, vcc
	v_lshrrev_b32_e32 v13, 24, v11
	v_lshrrev_b32_e32 v14, 16, v11
	v_lshrrev_b32_e32 v11, 8, v11
	s_nop 0
	s_nop 0
	flat_store_byte v[6:7], v8
	flat_store_byte v[4:5], v12
	flat_store_byte v[2:3], v9
	v_add_u32_e32 v2, vcc, 1, v0
	v_addc_u32_e32 v3, vcc, 0, v1, vcc
	v_add_u32_e32 v4, vcc, 7, v0
	v_addc_u32_e32 v5, vcc, 0, v1, vcc
	v_add_u32_e32 v6, vcc, 6, v0
	v_addc_u32_e32 v7, vcc, 0, v1, vcc
	v_add_u32_e32 v8, vcc, 5, v0
	v_addc_u32_e32 v9, vcc, 0, v1, vcc
	v_lshrrev_b32_e32 v12, 8, v10
	s_nop 0
	s_nop 0
	flat_store_byte v[0:1], v10
	flat_store_byte v[8:9], v11
	flat_store_byte v[6:7], v14
	flat_store_byte v[4:5], v13
	flat_store_byte v[2:3], v12
	s_endpgm
.Lfunc_end3:
	.size	test_8_clz_char, .Lfunc_end3-test_8_clz_char
                                        ; -- End function
	.section	.AMDGPU.csdata
; Kernel info:
; codeLenInByte = 748
; NumSgprs: 15
; NumVgprs: 20
; ScratchSize: 0
; MemoryBound: 0
; FloatMode: 240
; IeeeMode: 1
; LDSByteSize: 0 bytes/workgroup (compile time only)
; SGPRBlocks: 1
; VGPRBlocks: 4
; NumSGPRsForWavesPerEU: 15
; NumVGPRsForWavesPerEU: 20
; WaveLimiterHint : 1
; COMPUTE_PGM_RSRC2:USER_SGPR: 8
; COMPUTE_PGM_RSRC2:TRAP_HANDLER: 0
; COMPUTE_PGM_RSRC2:TGID_X_EN: 1
; COMPUTE_PGM_RSRC2:TGID_Y_EN: 0
; COMPUTE_PGM_RSRC2:TGID_Z_EN: 0
; COMPUTE_PGM_RSRC2:TIDIG_COMP_CNT: 0
	.section	.AMDGPU.config
	.long	47176
	.long	11468870
	.long	47180
	.long	144
	.long	47200
	.long	0
	.long	4
	.long	0
	.long	8
	.long	0
	.text
	.globl	test_16_clz_char        ; -- Begin function test_16_clz_char
	.p2align	8
	.type	test_16_clz_char,@function
	.amdgpu_hsa_kernel test_16_clz_char
test_16_clz_char:                       ; @test_16_clz_char
	.amd_kernel_code_t
		amd_code_version_major = 1
		amd_code_version_minor = 2
		amd_machine_kind = 1
		amd_machine_version_major = 8
		amd_machine_version_minor = 0
		amd_machine_version_stepping = 1
		kernel_code_entry_byte_offset = 256
		kernel_code_prefetch_byte_size = 0
		granulated_workitem_vgpr_count = 6
		granulated_wavefront_sgpr_count = 1
		priority = 0
		float_mode = 240
		priv = 0
		enable_dx10_clamp = 1
		debug_mode = 0
		enable_ieee_mode = 1
		enable_sgpr_private_segment_wave_byte_offset = 0
		user_sgpr_count = 8
		enable_trap_handler = 0
		enable_sgpr_workgroup_id_x = 1
		enable_sgpr_workgroup_id_y = 0
		enable_sgpr_workgroup_id_z = 0
		enable_sgpr_workgroup_info = 0
		enable_vgpr_workitem_id = 0
		enable_exception_msb = 0
		granulated_lds_size = 0
		enable_exception = 0
		enable_sgpr_private_segment_buffer = 1
		enable_sgpr_dispatch_ptr = 1
		enable_sgpr_queue_ptr = 0
		enable_sgpr_kernarg_segment_ptr = 1
		enable_sgpr_dispatch_id = 0
		enable_sgpr_flat_scratch_init = 0
		enable_sgpr_private_segment_size = 0
		enable_sgpr_grid_workgroup_count_x = 0
		enable_sgpr_grid_workgroup_count_y = 0
		enable_sgpr_grid_workgroup_count_z = 0
		enable_ordered_append_gds = 0
		private_element_size = 1
		is_ptr64 = 1
		is_dynamic_callstack = 0
		is_debug_enabled = 0
		is_xnack_enabled = 1
		workitem_private_segment_byte_size = 0
		workgroup_group_segment_byte_size = 0
		gds_segment_byte_size = 0
		kernarg_segment_byte_size = 32
		workgroup_fbarrier_count = 0
		wavefront_sgpr_count = 15
		workitem_vgpr_count = 25
		reserved_vgpr_first = 0
		reserved_vgpr_count = 0
		reserved_sgpr_first = 0
		reserved_sgpr_count = 0
		debug_wavefront_private_segment_offset_sgpr = 0
		debug_private_segment_buffer_sgpr = 0
		kernarg_segment_alignment = 4
		group_segment_alignment = 4
		private_segment_alignment = 4
		wavefront_size = 6
		call_convention = -1
		runtime_loader_kernel_symbol = 0
	.end_amd_kernel_code_t
; %bb.0:                                ; %entry
	s_load_dword s10, s[4:5], 0x4
	s_load_dwordx4 s[0:3], s[6:7], 0x0
	s_load_dword s9, s[6:7], 0x14
	s_mov_b32 s6, 0xffff
	v_mov_b32_e32 v1, 0
	s_waitcnt lgkmcnt(0)
	s_and_b32 s4, s10, s6
	v_mov_b32_e32 v2, s8
	v_add_u32_e32 v0, vcc, s9, v0
	v_addc_u32_e32 v1, vcc, 0, v1, vcc
	v_mad_u64_u32 v[0:1], s[4:5], s4, v2, v[0:1]
	v_mov_b32_e32 v3, s3
	v_lshlrev_b64 v[0:1], 4, v[0:1]
	v_add_u32_e32 v2, vcc, s2, v0
	v_addc_u32_e32 v3, vcc, v3, v1, vcc
	v_add_u32_e32 v4, vcc, 13, v2
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	s_movk_i32 s2, 0xff
	s_nop 0
	s_nop 0
	flat_load_ubyte v14, v[4:5]
	v_add_u32_e32 v4, vcc, 12, v2
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	v_add_u32_e32 v6, vcc, 15, v2
	v_addc_u32_e32 v7, vcc, 0, v3, vcc
	v_add_u32_e32 v8, vcc, 14, v2
	v_addc_u32_e32 v9, vcc, 0, v3, vcc
	v_add_u32_e32 v10, vcc, 9, v2
	v_addc_u32_e32 v11, vcc, 0, v3, vcc
	v_add_u32_e32 v12, vcc, 8, v2
	v_addc_u32_e32 v13, vcc, 0, v3, vcc
	s_nop 0
	s_nop 0
	flat_load_ubyte v19, v[12:13]
	flat_load_ubyte v18, v[10:11]
	flat_load_ubyte v17, v[8:9]
	flat_load_ubyte v16, v[6:7]
	flat_load_ubyte v15, v[4:5]
	v_add_u32_e32 v4, vcc, 11, v2
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	v_add_u32_e32 v6, vcc, 10, v2
	v_addc_u32_e32 v7, vcc, 0, v3, vcc
	v_add_u32_e32 v8, vcc, 5, v2
	v_addc_u32_e32 v9, vcc, 0, v3, vcc
	v_add_u32_e32 v10, vcc, 4, v2
	v_addc_u32_e32 v11, vcc, 0, v3, vcc
	v_add_u32_e32 v12, vcc, 7, v2
	v_addc_u32_e32 v13, vcc, 0, v3, vcc
	s_nop 0
	s_nop 0
	flat_load_ubyte v21, v[12:13]
	flat_load_ubyte v20, v[10:11]
	flat_load_ubyte v22, v[8:9]
	v_add_u32_e32 v10, vcc, 6, v2
	v_addc_u32_e32 v11, vcc, 0, v3, vcc
	s_nop 0
	s_nop 0
	flat_load_ubyte v13, v[6:7]
	flat_load_ubyte v12, v[4:5]
	s_waitcnt vmcnt(10) lgkmcnt(10)
	v_lshlrev_b32_e32 v4, 8, v14
	s_waitcnt vmcnt(8) lgkmcnt(8)
	v_lshlrev_b32_e32 v6, 8, v18
	s_waitcnt vmcnt(6) lgkmcnt(6)
	v_lshlrev_b32_e32 v5, 8, v16
	s_waitcnt vmcnt(5) lgkmcnt(5)
	v_or_b32_e32 v14, v4, v15
	v_or_b32_e32 v15, v5, v17
	v_lshlrev_b32_e32 v4, 16, v15
	v_or_b32_e32 v18, v4, v14
	v_or_b32_e32 v16, v6, v19
	v_add_u32_e32 v4, vcc, 1, v2
	s_waitcnt vmcnt(2) lgkmcnt(2)
	v_lshlrev_b32_e32 v8, 8, v22
	v_or_b32_e32 v17, v8, v20
	s_waitcnt vmcnt(0) lgkmcnt(0)
	v_lshlrev_b32_e32 v7, 8, v12
	v_or_b32_e32 v13, v7, v13
	v_lshlrev_b32_e32 v5, 16, v13
	v_or_b32_e32 v19, v5, v16
	v_addc_u32_e32 v5, vcc, 0, v3, vcc
	v_add_u32_e32 v6, vcc, 3, v2
	v_addc_u32_e32 v7, vcc, 0, v3, vcc
	v_add_u32_e32 v8, vcc, 2, v2
	v_addc_u32_e32 v9, vcc, 0, v3, vcc
	v_lshlrev_b32_e32 v12, 8, v21
	s_nop 0
	flat_load_ubyte v22, v[2:3]
	s_nop 0
	flat_load_ubyte v24, v[8:9]
	flat_load_ubyte v23, v[6:7]
	flat_load_ubyte v21, v[4:5]
	flat_load_ubyte v20, v[10:11]
	s_waitcnt vmcnt(2) lgkmcnt(2)
	v_lshlrev_b32_e32 v5, 8, v23
	s_waitcnt vmcnt(1) lgkmcnt(1)
	v_lshlrev_b32_e32 v4, 8, v21
	v_or_b32_e32 v5, v5, v24
	v_or_b32_e32 v4, v4, v22
	v_lshlrev_b32_e32 v6, 16, v5
	v_or_b32_e32 v6, v6, v4
	v_and_b32_e32 v7, s2, v4
	v_ffbh_u32_sdwa v4, v4 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v4, vcc, -16, v4
	v_cmp_ne_u16_e32 vcc, 0, v7
	v_lshrrev_b32_e32 v7, 8, v6
	v_add_u16_e32 v4, -8, v4
	v_ffbh_u32_sdwa v6, v6 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_cndmask_b32_e32 v4, 8, v4, vcc
	v_add_u32_e32 v6, vcc, -16, v6
	v_and_b32_e32 v7, s2, v7
	v_add_u16_e32 v6, -8, v6
	v_cmp_ne_u16_e32 vcc, 0, v7
	v_cndmask_b32_e32 v6, 8, v6, vcc
	v_lshlrev_b16_e32 v6, 8, v6
	v_or_b32_sdwa v4, v4, v6 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v6, v5 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	s_waitcnt vmcnt(0) lgkmcnt(0)
	v_or_b32_e32 v2, v12, v20
	v_add_u32_e32 v6, vcc, -16, v6
	v_and_b32_e32 v12, s6, v4
	v_and_b32_e32 v4, s2, v5
	v_add_u16_e32 v6, -8, v6
	v_cmp_ne_u16_e32 vcc, 0, v4
	v_bfe_u32 v5, v5, 8, 8
	v_cndmask_b32_e32 v4, 8, v6, vcc
	v_ffbh_u32_e32 v6, v5
	v_add_u32_e32 v6, vcc, -16, v6
	v_add_u16_e32 v6, -8, v6
	v_cmp_ne_u16_e32 vcc, 0, v5
	v_cndmask_b32_e32 v5, 8, v6, vcc
	v_lshlrev_b16_e32 v5, 8, v5
	v_or_b32_sdwa v4, v4, v5 dst_sel:WORD_1 dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v5, v17 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_lshlrev_b32_e32 v3, 16, v2
	v_add_u32_e32 v5, vcc, -16, v5
	v_or_b32_e32 v8, v12, v4
	v_and_b32_e32 v4, s2, v17
	v_or_b32_e32 v3, v3, v17
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v4
	v_cndmask_b32_e32 v4, 8, v5, vcc
	v_lshrrev_b32_e32 v5, 8, v3
	v_ffbh_u32_sdwa v3, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_add_u32_e32 v3, vcc, -16, v3
	v_and_b32_e32 v5, s2, v5
	v_cmp_ne_u16_e32 vcc, 0, v5
	v_add_u16_e32 v3, -8, v3
	v_cndmask_b32_e32 v3, 8, v3, vcc
	v_lshlrev_b16_e32 v3, 8, v3
	v_or_b32_sdwa v3, v4, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v5, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v5, vcc, -16, v5
	v_and_b32_e32 v4, s6, v3
	v_and_b32_e32 v3, s2, v2
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_bfe_u32 v2, v2, 8, 8
	v_cndmask_b32_e32 v3, 8, v5, vcc
	v_ffbh_u32_e32 v5, v2
	v_add_u32_e32 v5, vcc, -16, v5
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_cndmask_b32_e32 v2, 8, v5, vcc
	v_lshlrev_b16_e32 v2, 8, v2
	v_or_b32_sdwa v2, v3, v2 dst_sel:WORD_1 dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v3, v16 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v3, vcc, -16, v3
	v_or_b32_e32 v10, v4, v2
	v_and_b32_e32 v2, s2, v16
	v_add_u16_e32 v3, -8, v3
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_cndmask_b32_e32 v2, 8, v3, vcc
	v_lshrrev_b32_e32 v3, 8, v19
	v_ffbh_u32_sdwa v5, v19 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_add_u32_e32 v5, vcc, -16, v5
	v_and_b32_e32 v3, s2, v3
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_cndmask_b32_e32 v3, 8, v5, vcc
	v_lshlrev_b16_e32 v3, 8, v3
	v_or_b32_sdwa v2, v2, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v3, v13 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v3, vcc, -16, v3
	v_and_b32_e32 v11, s6, v2
	v_and_b32_e32 v2, s2, v13
	v_add_u16_e32 v3, -8, v3
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_cndmask_b32_e32 v2, 8, v3, vcc
	v_bfe_u32 v3, v13, 8, 8
	v_ffbh_u32_e32 v5, v3
	v_add_u32_e32 v5, vcc, -16, v5
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_cndmask_b32_e32 v3, 8, v5, vcc
	v_lshlrev_b16_e32 v3, 8, v3
	v_or_b32_sdwa v2, v2, v3 dst_sel:WORD_1 dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v3, v14 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_or_b32_e32 v13, v11, v2
	v_add_u32_e32 v3, vcc, -16, v3
	v_and_b32_e32 v2, s2, v14
	v_add_u16_e32 v3, -8, v3
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_cndmask_b32_e32 v2, 8, v3, vcc
	v_lshrrev_b32_e32 v3, 8, v18
	v_ffbh_u32_sdwa v5, v18 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_1
	v_add_u32_e32 v5, vcc, -16, v5
	v_and_b32_e32 v3, s2, v3
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_cndmask_b32_e32 v3, 8, v5, vcc
	v_lshlrev_b16_e32 v3, 8, v3
	v_or_b32_sdwa v2, v2, v3 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_ffbh_u32_sdwa v3, v15 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:BYTE_0
	v_add_u32_e32 v3, vcc, -16, v3
	v_and_b32_e32 v14, s6, v2
	v_and_b32_e32 v2, s2, v15
	v_add_u16_e32 v3, -8, v3
	v_cmp_ne_u16_e32 vcc, 0, v2
	v_cndmask_b32_e32 v2, 8, v3, vcc
	v_bfe_u32 v3, v15, 8, 8
	v_ffbh_u32_e32 v5, v3
	v_add_u32_e32 v5, vcc, -16, v5
	v_add_u16_e32 v5, -8, v5
	v_cmp_ne_u16_e32 vcc, 0, v3
	v_cndmask_b32_e32 v3, 8, v5, vcc
	v_lshlrev_b16_e32 v3, 8, v3
	v_or_b32_sdwa v2, v2, v3 dst_sel:WORD_1 dst_unused:UNUSED_PAD src0_sel:BYTE_0 src1_sel:DWORD
	v_or_b32_e32 v15, v14, v2
	v_mov_b32_e32 v2, s1
	v_add_u32_e32 v0, vcc, s0, v0
	v_addc_u32_e32 v1, vcc, v2, v1, vcc
	v_add_u32_e32 v2, vcc, 4, v0
	v_addc_u32_e32 v3, vcc, 0, v1, vcc
	v_lshrrev_b32_e32 v16, 24, v8
	v_lshrrev_b32_e32 v17, 16, v8
	v_lshrrev_b32_e32 v18, 8, v8
	v_lshrrev_b32_e32 v19, 16, v13
	s_nop 0
	s_nop 0
	flat_store_byte v[2:3], v4
	v_add_u32_e32 v2, vcc, 8, v0
	v_addc_u32_e32 v3, vcc, 0, v1, vcc
	v_add_u32_e32 v4, vcc, 12, v0
	v_addc_u32_e32 v5, vcc, 0, v1, vcc
	v_add_u32_e32 v6, vcc, 3, v0
	v_addc_u32_e32 v7, vcc, 0, v1, vcc
	v_add_u32_e32 v8, vcc, 1, v0
	v_addc_u32_e32 v9, vcc, 0, v1, vcc
	s_nop 0
	s_nop 0
	flat_store_byte v[8:9], v18
	v_add_u32_e32 v8, vcc, 2, v0
	v_addc_u32_e32 v9, vcc, 0, v1, vcc
	v_lshrrev_b32_e32 v18, 24, v13
	s_nop 0
	s_nop 0
	flat_store_byte v[8:9], v17
	flat_store_byte v[6:7], v16
	flat_store_byte v[4:5], v14
	flat_store_byte v[2:3], v11
	v_add_u32_e32 v2, vcc, 7, v0
	v_addc_u32_e32 v3, vcc, 0, v1, vcc
	v_add_u32_e32 v4, vcc, 6, v0
	v_addc_u32_e32 v5, vcc, 0, v1, vcc
	v_add_u32_e32 v6, vcc, 5, v0
	v_addc_u32_e32 v7, vcc, 0, v1, vcc
	v_add_u32_e32 v8, vcc, 11, v0
	v_addc_u32_e32 v9, vcc, 0, v1, vcc
	v_lshrrev_b32_e32 v14, 24, v10
	v_lshrrev_b32_e32 v16, 16, v10
	v_lshrrev_b32_e32 v17, 8, v10
	v_add_u32_e32 v10, vcc, 10, v0
	v_addc_u32_e32 v11, vcc, 0, v1, vcc
	s_nop 0
	s_nop 0
	flat_store_byte v[10:11], v19
	flat_store_byte v[8:9], v18
	flat_store_byte v[6:7], v17
	flat_store_byte v[4:5], v16
	flat_store_byte v[2:3], v14
	v_add_u32_e32 v2, vcc, 9, v0
	v_addc_u32_e32 v3, vcc, 0, v1, vcc
	v_add_u32_e32 v4, vcc, 15, v0
	v_addc_u32_e32 v5, vcc, 0, v1, vcc
	v_add_u32_e32 v6, vcc, 14, v0
	v_addc_u32_e32 v7, vcc, 0, v1, vcc
	v_add_u32_e32 v8, vcc, 13, v0
	v_lshrrev_b32_e32 v10, 8, v13
	v_lshrrev_b32_e32 v11, 24, v15
	v_lshrrev_b32_e32 v13, 16, v15
	v_lshrrev_b32_e32 v14, 8, v15
	v_addc_u32_e32 v9, vcc, 0, v1, vcc
	s_nop 0
	s_nop 0
	flat_store_byte v[0:1], v12
	flat_store_byte v[8:9], v14
	flat_store_byte v[6:7], v13
	flat_store_byte v[4:5], v11
	flat_store_byte v[2:3], v10
	s_endpgm
.Lfunc_end4:
	.size	test_16_clz_char, .Lfunc_end4-test_16_clz_char
                                        ; -- End function
	.section	.AMDGPU.csdata
; Kernel info:
; codeLenInByte = 1452
; NumSgprs: 15
; NumVgprs: 25
; ScratchSize: 0
; MemoryBound: 0
; FloatMode: 240
; IeeeMode: 1
; LDSByteSize: 0 bytes/workgroup (compile time only)
; SGPRBlocks: 1
; VGPRBlocks: 6
; NumSGPRsForWavesPerEU: 15
; NumVGPRsForWavesPerEU: 25
; WaveLimiterHint : 1
; COMPUTE_PGM_RSRC2:USER_SGPR: 8
; COMPUTE_PGM_RSRC2:TRAP_HANDLER: 0
; COMPUTE_PGM_RSRC2:TGID_X_EN: 1
; COMPUTE_PGM_RSRC2:TGID_Y_EN: 0
; COMPUTE_PGM_RSRC2:TGID_Z_EN: 0
; COMPUTE_PGM_RSRC2:TIDIG_COMP_CNT: 0

	.ident	"clang version 8.0.0 (https://git.llvm.org/git/clang.git 3d84c7ca0d8468fc4a275638c891a8d43b79c476) (https://git.llvm.org/git/llvm.git 199c0d32e96b646bd8cf6beeaf0f99f8a434b56a)"
	.ident	"clang version 8.0.0 (https://git.llvm.org/git/clang.git 87aefa4312da1e6a80bdbc3828363391714b3142) (https://git.llvm.org/git/llvm.git 26598e549e6344a04bd9c32b53567d9cdb9894da)"
	.ident	"clang version 8.0.0 (https://git.llvm.org/git/clang.git 852234a8798dd5f5401a14b3fd9823cd8b45fbfa) (https://git.llvm.org/git/llvm.git b0c4f7e14ecb82fb629be0263c6bdf1001bf8737)"
	.section	".note.GNU-stack"
	.amd_amdgpu_isa "amdgcn-mesa-mesa3d--gfx801+xnack"
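
(For reference: in the listings above, each byte lane appears to be handled
with v_ffbh_u32 — count leading zeros of a 32-bit value — applied to an
SDWA-selected byte, then adjusted by -16 and -8 (net -24), with a
v_cndmask_b32 substituting 8 when the byte is zero. A minimal C sketch of
that per-lane computation; the helper name clz8 is mine, not from the test:

    /* Hedged sketch of what each lane appears to compute; clz8 is an
       illustrative name. clz(0) is defined as 8 here, matching the
       v_cndmask_b32 default in the assembly. */
    static unsigned char clz8(unsigned char x) {
        if (x == 0)
            return 8;           /* v_cndmask_b32 selects 8 for a zero byte */
        unsigned char n = 0;
        while (!(x & 0x80)) {   /* shift until the top bit is set */
            x <<= 1;
            ++n;
        }
        return n;               /* equals the v_ffbh_u32 result minus 24 */
    }
)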
-------------- next part --------------
; ModuleID = 'link'
source_filename = "link"
target datalayout = "e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5"
target triple = "amdgcn-mesa-mesa3d"

; Function Attrs: nounwind
define amdgpu_kernel void @test_1_clz_char(i8 addrspace(1)* nocapture %out, i8 addrspace(1)* nocapture readonly %in0) local_unnamed_addr #0 !kernel_arg_addr_space !0 !kernel_arg_access_qual !6 !kernel_arg_type !7 !kernel_arg_base_type !7 !kernel_arg_type_qual !8 {
entry:
  %0 = tail call i32 @llvm.amdgcn.workgroup.id.x() #2
  %retval.0.i12.i = zext i32 %0 to i64
  %1 = tail call i8 addrspace(4)* @llvm.amdgcn.dispatch.ptr() #2
  %arrayidx.i7.i = getelementptr inbounds i8, i8 addrspace(4)* %1, i64 4
  %2 = bitcast i8 addrspace(4)* %arrayidx.i7.i to i32 addrspace(4)*
  %3 = load i32, i32 addrspace(4)* %2, align 4, !tbaa !9
  %and.i.i = and i32 %3, 65535
  %retval.0.i1121.i = zext i32 %and.i.i to i64
  %mul22.i = mul nuw nsw i64 %retval.0.i1121.i, %retval.0.i12.i
  %4 = tail call i32 @llvm.amdgcn.workitem.id.x() #2, !range !13
  %retval.0.i633.i = zext i32 %4 to i64
  %5 = tail call i8 addrspace(4)* @llvm.amdgcn.implicitarg.ptr() #2
  %arrayidx.i.i = getelementptr inbounds i8, i8 addrspace(4)* %5, i64 4
  %6 = bitcast i8 addrspace(4)* %arrayidx.i.i to i32 addrspace(4)*
  %7 = load i32, i32 addrspace(4)* %6, align 4, !tbaa !9
  %conv.i.i = zext i32 %7 to i64
  %add34.i = add nuw nsw i64 %conv.i.i, %retval.0.i633.i
  %add4.i = add nuw nsw i64 %add34.i, %mul22.i
  %arrayidx = getelementptr inbounds i8, i8 addrspace(1)* %in0, i64 %add4.i
  %8 = load i8, i8 addrspace(1)* %arrayidx, align 1, !tbaa !14
  %conv.i = zext i8 %8 to i16
  %tobool.i.i = icmp eq i8 %8, 0
  %9 = tail call i16 @llvm.ctlz.i16(i16 %conv.i, i1 true) #2, !range !15
  %10 = trunc i16 %9 to i8
  %.op.i = add nsw i8 %10, -8
  %sub.i = select i1 %tobool.i.i, i8 8, i8 %.op.i
  %arrayidx3 = getelementptr inbounds i8, i8 addrspace(1)* %out, i64 %add4.i
  store i8 %sub.i, i8 addrspace(1)* %arrayidx3, align 1, !tbaa !14
  ret void
}
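
(The scalar kernel zero-extends the byte to i16, calls llvm.ctlz.i16 with
is_zero_undef=true, subtracts 8, and selects 8 when the input byte is 0 —
i.e. OpenCL clz() semantics for char. A hedged reconstruction of the OpenCL
source; the real test may differ in naming and indexing:

    /* Hedged reconstruction of the scalar test kernel. */
    __kernel void test_1_clz_char(__global char *out,
                                  __global const char *in0) {
        size_t gid = get_global_id(0);
        out[gid] = clz(in0[gid]);   /* clz(char) returns 8 for a zero input */
    }
)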

; Function Attrs: nounwind readnone speculatable
declare i32 @llvm.amdgcn.workgroup.id.x() #1

; Function Attrs: nounwind readnone speculatable
declare i8 addrspace(4)* @llvm.amdgcn.dispatch.ptr() #1

; Function Attrs: nounwind readnone speculatable
declare i32 @llvm.amdgcn.workitem.id.x() #1

; Function Attrs: nounwind readnone speculatable
declare i8 addrspace(4)* @llvm.amdgcn.implicitarg.ptr() #1

; Function Attrs: nounwind readnone speculatable
declare i16 @llvm.ctlz.i16(i16, i1) #1

; Function Attrs: nounwind
define amdgpu_kernel void @test_2_clz_char(i8 addrspace(1)* nocapture %out, i8 addrspace(1)* nocapture readonly %in0) local_unnamed_addr #0 !kernel_arg_addr_space !0 !kernel_arg_access_qual !6 !kernel_arg_type !7 !kernel_arg_base_type !7 !kernel_arg_type_qual !8 {
entry:
  %0 = tail call i32 @llvm.amdgcn.workgroup.id.x() #2
  %retval.0.i12.i = zext i32 %0 to i64
  %1 = tail call i8 addrspace(4)* @llvm.amdgcn.dispatch.ptr() #2
  %arrayidx.i7.i = getelementptr inbounds i8, i8 addrspace(4)* %1, i64 4
  %2 = bitcast i8 addrspace(4)* %arrayidx.i7.i to i32 addrspace(4)*
  %3 = load i32, i32 addrspace(4)* %2, align 4, !tbaa !9
  %and.i.i = and i32 %3, 65535
  %retval.0.i1121.i = zext i32 %and.i.i to i64
  %mul22.i = mul nuw nsw i64 %retval.0.i1121.i, %retval.0.i12.i
  %4 = tail call i32 @llvm.amdgcn.workitem.id.x() #2, !range !13
  %retval.0.i633.i = zext i32 %4 to i64
  %5 = tail call i8 addrspace(4)* @llvm.amdgcn.implicitarg.ptr() #2
  %arrayidx.i.i = getelementptr inbounds i8, i8 addrspace(4)* %5, i64 4
  %6 = bitcast i8 addrspace(4)* %arrayidx.i.i to i32 addrspace(4)*
  %7 = load i32, i32 addrspace(4)* %6, align 4, !tbaa !9
  %conv.i.i = zext i32 %7 to i64
  %add34.i = add nuw nsw i64 %conv.i.i, %retval.0.i633.i
  %add4.i = add nuw nsw i64 %add34.i, %mul22.i
  %mul.i5 = shl nuw nsw i64 %add4.i, 1
  %arrayidx.i6 = getelementptr inbounds i8, i8 addrspace(1)* %in0, i64 %mul.i5
  %8 = bitcast i8 addrspace(1)* %arrayidx.i6 to <2 x i8> addrspace(1)*
  %9 = load <2 x i8>, <2 x i8> addrspace(1)* %8, align 1, !tbaa !14
  %10 = extractelement <2 x i8> %9, i64 0
  %conv.i.i4 = zext i8 %10 to i16
  %tobool.i.i.i = icmp eq i8 %10, 0
  %11 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i4, i1 true) #2, !range !15
  %12 = trunc i16 %11 to i8
  %.op.i = add nsw i8 %12, -8
  %sub.i.i = select i1 %tobool.i.i.i, i8 8, i8 %.op.i
  %vecinit.i = insertelement <2 x i8> undef, i8 %sub.i.i, i32 0
  %13 = extractelement <2 x i8> %9, i64 1
  %conv.i4.i = zext i8 %13 to i16
  %tobool.i.i5.i = icmp eq i8 %13, 0
  %14 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i, i1 true) #2, !range !15
  %15 = trunc i16 %14 to i8
  %.op9.i = add nsw i8 %15, -8
  %sub.i8.i = select i1 %tobool.i.i5.i, i8 8, i8 %.op9.i
  %vecinit2.i = insertelement <2 x i8> %vecinit.i, i8 %sub.i8.i, i32 1
  %arrayidx.i = getelementptr inbounds i8, i8 addrspace(1)* %out, i64 %mul.i5
  %16 = bitcast i8 addrspace(1)* %arrayidx.i to <2 x i8> addrspace(1)*
  store <2 x i8> %vecinit2.i, <2 x i8> addrspace(1)* %16, align 1, !tbaa !14
  ret void
}

; Function Attrs: nounwind
define amdgpu_kernel void @test_4_clz_char(i8 addrspace(1)* nocapture %out, i8 addrspace(1)* nocapture readonly %in0) local_unnamed_addr #0 !kernel_arg_addr_space !0 !kernel_arg_access_qual !6 !kernel_arg_type !7 !kernel_arg_base_type !7 !kernel_arg_type_qual !8 {
entry:
  %0 = tail call i32 @llvm.amdgcn.workgroup.id.x() #2
  %retval.0.i12.i = zext i32 %0 to i64
  %1 = tail call i8 addrspace(4)* @llvm.amdgcn.dispatch.ptr() #2
  %arrayidx.i7.i = getelementptr inbounds i8, i8 addrspace(4)* %1, i64 4
  %2 = bitcast i8 addrspace(4)* %arrayidx.i7.i to i32 addrspace(4)*
  %3 = load i32, i32 addrspace(4)* %2, align 4, !tbaa !9
  %and.i.i = and i32 %3, 65535
  %retval.0.i1121.i = zext i32 %and.i.i to i64
  %mul22.i = mul nuw nsw i64 %retval.0.i1121.i, %retval.0.i12.i
  %4 = tail call i32 @llvm.amdgcn.workitem.id.x() #2, !range !13
  %retval.0.i633.i = zext i32 %4 to i64
  %5 = tail call i8 addrspace(4)* @llvm.amdgcn.implicitarg.ptr() #2
  %arrayidx.i.i = getelementptr inbounds i8, i8 addrspace(4)* %5, i64 4
  %6 = bitcast i8 addrspace(4)* %arrayidx.i.i to i32 addrspace(4)*
  %7 = load i32, i32 addrspace(4)* %6, align 4, !tbaa !9
  %conv.i.i = zext i32 %7 to i64
  %add34.i = add nuw nsw i64 %conv.i.i, %retval.0.i633.i
  %add4.i = add nuw nsw i64 %add34.i, %mul22.i
  %mul.i4 = shl nuw nsw i64 %add4.i, 2
  %arrayidx.i5 = getelementptr inbounds i8, i8 addrspace(1)* %in0, i64 %mul.i4
  %8 = bitcast i8 addrspace(1)* %arrayidx.i5 to <4 x i8> addrspace(1)*
  %9 = load <4 x i8>, <4 x i8> addrspace(1)* %8, align 1, !tbaa !14
  %10 = extractelement <4 x i8> %9, i32 0
  %conv.i.i.i = zext i8 %10 to i16
  %tobool.i.i.i.i = icmp eq i8 %10, 0
  %11 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i, i1 true) #2, !range !15
  %12 = trunc i16 %11 to i8
  %.op.i = add nsw i8 %12, -8
  %sub.i.i.i = select i1 %tobool.i.i.i.i, i8 8, i8 %.op.i
  %13 = insertelement <4 x i8> undef, i8 %sub.i.i.i, i32 0
  %14 = extractelement <4 x i8> %9, i32 1
  %conv.i4.i.i = zext i8 %14 to i16
  %tobool.i.i5.i.i = icmp eq i8 %14, 0
  %15 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i, i1 true) #2, !range !15
  %16 = trunc i16 %15 to i8
  %.op17.i = add nsw i8 %16, -8
  %sub.i8.i.i = select i1 %tobool.i.i5.i.i, i8 8, i8 %.op17.i
  %17 = insertelement <4 x i8> %13, i8 %sub.i8.i.i, i32 1
  %18 = extractelement <4 x i8> %9, i32 2
  %conv.i.i5.i = zext i8 %18 to i16
  %tobool.i.i.i6.i = icmp eq i8 %18, 0
  %19 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i, i1 true) #2, !range !15
  %20 = trunc i16 %19 to i8
  %.op18.i = add nsw i8 %20, -8
  %sub.i.i9.i = select i1 %tobool.i.i.i6.i, i8 8, i8 %.op18.i
  %21 = insertelement <4 x i8> undef, i8 %sub.i.i9.i, i32 0
  %22 = extractelement <4 x i8> %9, i32 3
  %conv.i4.i11.i = zext i8 %22 to i16
  %tobool.i.i5.i12.i = icmp eq i8 %22, 0
  %23 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i, i1 true) #2, !range !15
  %24 = trunc i16 %23 to i8
  %.op19.i = add nsw i8 %24, -8
  %sub.i8.i15.i = select i1 %tobool.i.i5.i12.i, i8 8, i8 %.op19.i
  %25 = insertelement <4 x i8> %21, i8 %sub.i8.i15.i, i32 1
  %vecinit3.i = shufflevector <4 x i8> %17, <4 x i8> %25, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %arrayidx.i = getelementptr inbounds i8, i8 addrspace(1)* %out, i64 %mul.i4
  %26 = bitcast i8 addrspace(1)* %arrayidx.i to <4 x i8> addrspace(1)*
  store <4 x i8> %vecinit3.i, <4 x i8> addrspace(1)* %26, align 1, !tbaa !14
  ret void
}
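
(Note how the <4 x i8> result is assembled as two 2-lane halves —
insertelement pairs — merged by a shufflevector <0,1,4,5>; these byte-vector
shuffles, plus the bitcasts around the loads and stores, are exactly the
patterns touched by the demanded-bits-through-bitcast change. A hedged
reconstruction of the vector variants, shown for char4:

    /* Hedged reconstruction of the charN test kernels (char4 case). */
    __kernel void test_4_clz_char(__global char *out,
                                  __global const char *in0) {
        size_t gid = get_global_id(0);
        char4 v = vload4(gid, in0);      /* <4 x i8> load at in0 + gid*4 */
        vstore4(clz(v), gid, out);       /* element-wise clz on the vector */
    }
)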

; Function Attrs: nounwind
define amdgpu_kernel void @test_8_clz_char(i8 addrspace(1)* nocapture %out, i8 addrspace(1)* nocapture readonly %in0) local_unnamed_addr #0 !kernel_arg_addr_space !0 !kernel_arg_access_qual !6 !kernel_arg_type !7 !kernel_arg_base_type !7 !kernel_arg_type_qual !8 {
entry:
  %0 = tail call i32 @llvm.amdgcn.workgroup.id.x() #2
  %retval.0.i12.i = zext i32 %0 to i64
  %1 = tail call i8 addrspace(4)* @llvm.amdgcn.dispatch.ptr() #2
  %arrayidx.i7.i = getelementptr inbounds i8, i8 addrspace(4)* %1, i64 4
  %2 = bitcast i8 addrspace(4)* %arrayidx.i7.i to i32 addrspace(4)*
  %3 = load i32, i32 addrspace(4)* %2, align 4, !tbaa !9
  %and.i.i = and i32 %3, 65535
  %retval.0.i1121.i = zext i32 %and.i.i to i64
  %mul22.i = mul nuw nsw i64 %retval.0.i1121.i, %retval.0.i12.i
  %4 = tail call i32 @llvm.amdgcn.workitem.id.x() #2, !range !13
  %retval.0.i633.i = zext i32 %4 to i64
  %5 = tail call i8 addrspace(4)* @llvm.amdgcn.implicitarg.ptr() #2
  %arrayidx.i.i = getelementptr inbounds i8, i8 addrspace(4)* %5, i64 4
  %6 = bitcast i8 addrspace(4)* %arrayidx.i.i to i32 addrspace(4)*
  %7 = load i32, i32 addrspace(4)* %6, align 4, !tbaa !9
  %conv.i.i = zext i32 %7 to i64
  %add34.i = add nuw nsw i64 %conv.i.i, %retval.0.i633.i
  %add4.i = add nuw nsw i64 %add34.i, %mul22.i
  %mul.i4 = shl nuw nsw i64 %add4.i, 3
  %arrayidx.i5 = getelementptr inbounds i8, i8 addrspace(1)* %in0, i64 %mul.i4
  %8 = bitcast i8 addrspace(1)* %arrayidx.i5 to <8 x i8> addrspace(1)*
  %9 = load <8 x i8>, <8 x i8> addrspace(1)* %8, align 1, !tbaa !14
  %10 = extractelement <8 x i8> %9, i32 0
  %conv.i.i.i.i = zext i8 %10 to i16
  %tobool.i.i.i.i.i = icmp eq i8 %10, 0
  %11 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i.i, i1 true) #2, !range !15
  %12 = trunc i16 %11 to i8
  %.op.i = add nsw i8 %12, -8
  %sub.i.i.i.i = select i1 %tobool.i.i.i.i.i, i8 8, i8 %.op.i
  %13 = insertelement <4 x i8> undef, i8 %sub.i.i.i.i, i32 0
  %14 = extractelement <8 x i8> %9, i32 1
  %conv.i4.i.i.i = zext i8 %14 to i16
  %tobool.i.i5.i.i.i = icmp eq i8 %14, 0
  %15 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i.i, i1 true) #2, !range !15
  %16 = trunc i16 %15 to i8
  %.op32.i = add nsw i8 %16, -8
  %sub.i8.i.i.i = select i1 %tobool.i.i5.i.i.i, i8 8, i8 %.op32.i
  %17 = insertelement <4 x i8> %13, i8 %sub.i8.i.i.i, i32 1
  %18 = extractelement <8 x i8> %9, i32 2
  %conv.i.i5.i.i = zext i8 %18 to i16
  %tobool.i.i.i6.i.i = icmp eq i8 %18, 0
  %19 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i.i, i1 true) #2, !range !15
  %20 = trunc i16 %19 to i8
  %.op33.i = add nsw i8 %20, -8
  %sub.i.i9.i.i = select i1 %tobool.i.i.i6.i.i, i8 8, i8 %.op33.i
  %21 = insertelement <4 x i8> undef, i8 %sub.i.i9.i.i, i32 0
  %22 = extractelement <8 x i8> %9, i32 3
  %conv.i4.i11.i.i = zext i8 %22 to i16
  %tobool.i.i5.i12.i.i = icmp eq i8 %22, 0
  %23 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i.i, i1 true) #2, !range !15
  %24 = trunc i16 %23 to i8
  %.op34.i = add nsw i8 %24, -8
  %sub.i8.i15.i.i = select i1 %tobool.i.i5.i12.i.i, i8 8, i8 %.op34.i
  %25 = insertelement <4 x i8> %21, i8 %sub.i8.i15.i.i, i32 1
  %vecinit3.i.i = shufflevector <4 x i8> %17, <4 x i8> %25, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %vext.i = shufflevector <4 x i8> %vecinit3.i.i, <4 x i8> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  %26 = extractelement <8 x i8> %9, i32 4
  %conv.i.i.i5.i = zext i8 %26 to i16
  %tobool.i.i.i.i6.i = icmp eq i8 %26, 0
  %27 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i5.i, i1 true) #2, !range !15
  %28 = trunc i16 %27 to i8
  %.op35.i = add nsw i8 %28, -8
  %sub.i.i.i9.i = select i1 %tobool.i.i.i.i6.i, i8 8, i8 %.op35.i
  %29 = insertelement <4 x i8> undef, i8 %sub.i.i.i9.i, i32 0
  %30 = extractelement <8 x i8> %9, i32 5
  %conv.i4.i.i11.i = zext i8 %30 to i16
  %tobool.i.i5.i.i12.i = icmp eq i8 %30, 0
  %31 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i11.i, i1 true) #2, !range !15
  %32 = trunc i16 %31 to i8
  %.op36.i = add nsw i8 %32, -8
  %sub.i8.i.i15.i = select i1 %tobool.i.i5.i.i12.i, i8 8, i8 %.op36.i
  %33 = insertelement <4 x i8> %29, i8 %sub.i8.i.i15.i, i32 1
  %34 = extractelement <8 x i8> %9, i32 6
  %conv.i.i5.i18.i = zext i8 %34 to i16
  %tobool.i.i.i6.i19.i = icmp eq i8 %34, 0
  %35 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i18.i, i1 true) #2, !range !15
  %36 = trunc i16 %35 to i8
  %.op37.i = add nsw i8 %36, -8
  %sub.i.i9.i22.i = select i1 %tobool.i.i.i6.i19.i, i8 8, i8 %.op37.i
  %37 = insertelement <4 x i8> undef, i8 %sub.i.i9.i22.i, i32 0
  %38 = extractelement <8 x i8> %9, i32 7
  %conv.i4.i11.i24.i = zext i8 %38 to i16
  %tobool.i.i5.i12.i25.i = icmp eq i8 %38, 0
  %39 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i24.i, i1 true) #2, !range !15
  %40 = trunc i16 %39 to i8
  %.op38.i = add nsw i8 %40, -8
  %sub.i8.i15.i28.i = select i1 %tobool.i.i5.i12.i25.i, i8 8, i8 %.op38.i
  %41 = insertelement <4 x i8> %37, i8 %sub.i8.i15.i28.i, i32 1
  %vecinit3.i31.i = shufflevector <4 x i8> %33, <4 x i8> %41, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %vext2.i = shufflevector <4 x i8> %vecinit3.i31.i, <4 x i8> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  %vecinit3.i = shufflevector <8 x i8> %vext.i, <8 x i8> %vext2.i, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 8, i32 9, i32 10, i32 11>
  %arrayidx.i = getelementptr inbounds i8, i8 addrspace(1)* %out, i64 %mul.i4
  %42 = bitcast i8 addrspace(1)* %arrayidx.i to <8 x i8> addrspace(1)*
  store <8 x i8> %vecinit3.i, <8 x i8> addrspace(1)* %42, align 1, !tbaa !14
  ret void
}

; Function Attrs: nounwind
define amdgpu_kernel void @test_16_clz_char(i8 addrspace(1)* nocapture %out, i8 addrspace(1)* nocapture readonly %in0) local_unnamed_addr #0 !kernel_arg_addr_space !0 !kernel_arg_access_qual !6 !kernel_arg_type !7 !kernel_arg_base_type !7 !kernel_arg_type_qual !8 {
entry:
  %0 = tail call i32 @llvm.amdgcn.workgroup.id.x() #2
  %retval.0.i12.i = zext i32 %0 to i64
  %1 = tail call i8 addrspace(4)* @llvm.amdgcn.dispatch.ptr() #2
  %arrayidx.i7.i = getelementptr inbounds i8, i8 addrspace(4)* %1, i64 4
  %2 = bitcast i8 addrspace(4)* %arrayidx.i7.i to i32 addrspace(4)*
  %3 = load i32, i32 addrspace(4)* %2, align 4, !tbaa !9
  %and.i.i = and i32 %3, 65535
  %retval.0.i1121.i = zext i32 %and.i.i to i64
  %mul22.i = mul nuw nsw i64 %retval.0.i1121.i, %retval.0.i12.i
  %4 = tail call i32 @llvm.amdgcn.workitem.id.x() #2, !range !13
  %retval.0.i633.i = zext i32 %4 to i64
  %5 = tail call i8 addrspace(4)* @llvm.amdgcn.implicitarg.ptr() #2
  %arrayidx.i.i = getelementptr inbounds i8, i8 addrspace(4)* %5, i64 4
  %6 = bitcast i8 addrspace(4)* %arrayidx.i.i to i32 addrspace(4)*
  %7 = load i32, i32 addrspace(4)* %6, align 4, !tbaa !9
  %conv.i.i = zext i32 %7 to i64
  %add34.i = add nuw nsw i64 %conv.i.i, %retval.0.i633.i
  %add4.i = add nuw nsw i64 %add34.i, %mul22.i
  %mul.i4 = shl nuw nsw i64 %add4.i, 4
  %arrayidx.i5 = getelementptr inbounds i8, i8 addrspace(1)* %in0, i64 %mul.i4
  %8 = bitcast i8 addrspace(1)* %arrayidx.i5 to <16 x i8> addrspace(1)*
  %9 = load <16 x i8>, <16 x i8> addrspace(1)* %8, align 1, !tbaa !14
  %10 = extractelement <16 x i8> %9, i32 0
  %conv.i.i.i.i.i = zext i8 %10 to i16
  %tobool.i.i.i.i.i.i = icmp eq i8 %10, 0
  %11 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i.i.i, i1 true) #2, !range !15
  %12 = trunc i16 %11 to i8
  %.op.i = add nsw i8 %12, -8
  %sub.i.i.i.i.i = select i1 %tobool.i.i.i.i.i.i, i8 8, i8 %.op.i
  %13 = insertelement <4 x i8> undef, i8 %sub.i.i.i.i.i, i32 0
  %14 = extractelement <16 x i8> %9, i32 1
  %conv.i4.i.i.i.i = zext i8 %14 to i16
  %tobool.i.i5.i.i.i.i = icmp eq i8 %14, 0
  %15 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i.i.i, i1 true) #2, !range !15
  %16 = trunc i16 %15 to i8
  %.op62.i = add nsw i8 %16, -8
  %sub.i8.i.i.i.i = select i1 %tobool.i.i5.i.i.i.i, i8 8, i8 %.op62.i
  %17 = insertelement <4 x i8> %13, i8 %sub.i8.i.i.i.i, i32 1
  %18 = extractelement <16 x i8> %9, i32 2
  %conv.i.i5.i.i.i = zext i8 %18 to i16
  %tobool.i.i.i6.i.i.i = icmp eq i8 %18, 0
  %19 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i.i.i, i1 true) #2, !range !15
  %20 = trunc i16 %19 to i8
  %.op63.i = add nsw i8 %20, -8
  %sub.i.i9.i.i.i = select i1 %tobool.i.i.i6.i.i.i, i8 8, i8 %.op63.i
  %21 = insertelement <4 x i8> undef, i8 %sub.i.i9.i.i.i, i32 0
  %22 = extractelement <16 x i8> %9, i32 3
  %conv.i4.i11.i.i.i = zext i8 %22 to i16
  %tobool.i.i5.i12.i.i.i = icmp eq i8 %22, 0
  %23 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i.i.i, i1 true) #2, !range !15
  %24 = trunc i16 %23 to i8
  %.op64.i = add nsw i8 %24, -8
  %sub.i8.i15.i.i.i = select i1 %tobool.i.i5.i12.i.i.i, i8 8, i8 %.op64.i
  %25 = insertelement <4 x i8> %21, i8 %sub.i8.i15.i.i.i, i32 1
  %vecinit3.i.i.i = shufflevector <4 x i8> %17, <4 x i8> %25, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %vext.i.i = shufflevector <4 x i8> %vecinit3.i.i.i, <4 x i8> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  %26 = extractelement <16 x i8> %9, i32 4
  %conv.i.i.i5.i.i = zext i8 %26 to i16
  %tobool.i.i.i.i6.i.i = icmp eq i8 %26, 0
  %27 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i5.i.i, i1 true) #2, !range !15
  %28 = trunc i16 %27 to i8
  %.op65.i = add nsw i8 %28, -8
  %sub.i.i.i9.i.i = select i1 %tobool.i.i.i.i6.i.i, i8 8, i8 %.op65.i
  %29 = insertelement <4 x i8> undef, i8 %sub.i.i.i9.i.i, i32 0
  %30 = extractelement <16 x i8> %9, i32 5
  %conv.i4.i.i11.i.i = zext i8 %30 to i16
  %tobool.i.i5.i.i12.i.i = icmp eq i8 %30, 0
  %31 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i11.i.i, i1 true) #2, !range !15
  %32 = trunc i16 %31 to i8
  %.op66.i = add nsw i8 %32, -8
  %sub.i8.i.i15.i.i = select i1 %tobool.i.i5.i.i12.i.i, i8 8, i8 %.op66.i
  %33 = insertelement <4 x i8> %29, i8 %sub.i8.i.i15.i.i, i32 1
  %34 = extractelement <16 x i8> %9, i32 6
  %conv.i.i5.i18.i.i = zext i8 %34 to i16
  %tobool.i.i.i6.i19.i.i = icmp eq i8 %34, 0
  %35 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i18.i.i, i1 true) #2, !range !15
  %36 = trunc i16 %35 to i8
  %.op67.i = add nsw i8 %36, -8
  %sub.i.i9.i22.i.i = select i1 %tobool.i.i.i6.i19.i.i, i8 8, i8 %.op67.i
  %37 = insertelement <4 x i8> undef, i8 %sub.i.i9.i22.i.i, i32 0
  %38 = extractelement <16 x i8> %9, i32 7
  %conv.i4.i11.i24.i.i = zext i8 %38 to i16
  %tobool.i.i5.i12.i25.i.i = icmp eq i8 %38, 0
  %39 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i24.i.i, i1 true) #2, !range !15
  %40 = trunc i16 %39 to i8
  %.op68.i = add nsw i8 %40, -8
  %sub.i8.i15.i28.i.i = select i1 %tobool.i.i5.i12.i25.i.i, i8 8, i8 %.op68.i
  %41 = insertelement <4 x i8> %37, i8 %sub.i8.i15.i28.i.i, i32 1
  %vecinit3.i31.i.i = shufflevector <4 x i8> %33, <4 x i8> %41, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %vext2.i.i = shufflevector <4 x i8> %vecinit3.i31.i.i, <4 x i8> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  %vecinit3.i.i = shufflevector <8 x i8> %vext.i.i, <8 x i8> %vext2.i.i, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 8, i32 9, i32 10, i32 11>
  %vext.i = shufflevector <8 x i8> %vecinit3.i.i, <8 x i8> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef>
  %42 = extractelement <16 x i8> %9, i32 8
  %conv.i.i.i.i5.i = zext i8 %42 to i16
  %tobool.i.i.i.i.i6.i = icmp eq i8 %42, 0
  %43 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i.i5.i, i1 true) #2, !range !15
  %44 = trunc i16 %43 to i8
  %.op69.i = add nsw i8 %44, -8
  %sub.i.i.i.i9.i = select i1 %tobool.i.i.i.i.i6.i, i8 8, i8 %.op69.i
  %45 = insertelement <4 x i8> undef, i8 %sub.i.i.i.i9.i, i32 0
  %46 = extractelement <16 x i8> %9, i32 9
  %conv.i4.i.i.i11.i = zext i8 %46 to i16
  %tobool.i.i5.i.i.i12.i = icmp eq i8 %46, 0
  %47 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i.i11.i, i1 true) #2, !range !15
  %48 = trunc i16 %47 to i8
  %.op70.i = add nsw i8 %48, -8
  %sub.i8.i.i.i15.i = select i1 %tobool.i.i5.i.i.i12.i, i8 8, i8 %.op70.i
  %49 = insertelement <4 x i8> %45, i8 %sub.i8.i.i.i15.i, i32 1
  %50 = extractelement <16 x i8> %9, i32 10
  %conv.i.i5.i.i18.i = zext i8 %50 to i16
  %tobool.i.i.i6.i.i19.i = icmp eq i8 %50, 0
  %51 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i.i18.i, i1 true) #2, !range !15
  %52 = trunc i16 %51 to i8
  %.op71.i = add nsw i8 %52, -8
  %sub.i.i9.i.i22.i = select i1 %tobool.i.i.i6.i.i19.i, i8 8, i8 %.op71.i
  %53 = insertelement <4 x i8> undef, i8 %sub.i.i9.i.i22.i, i32 0
  %54 = extractelement <16 x i8> %9, i32 11
  %conv.i4.i11.i.i24.i = zext i8 %54 to i16
  %tobool.i.i5.i12.i.i25.i = icmp eq i8 %54, 0
  %55 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i.i24.i, i1 true) #2, !range !15
  %56 = trunc i16 %55 to i8
  %.op72.i = add nsw i8 %56, -8
  %sub.i8.i15.i.i28.i = select i1 %tobool.i.i5.i12.i.i25.i, i8 8, i8 %.op72.i
  %57 = insertelement <4 x i8> %53, i8 %sub.i8.i15.i.i28.i, i32 1
  %vecinit3.i.i31.i = shufflevector <4 x i8> %49, <4 x i8> %57, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %vext.i32.i = shufflevector <4 x i8> %vecinit3.i.i31.i, <4 x i8> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  %58 = extractelement <16 x i8> %9, i32 12
  %conv.i.i.i5.i33.i = zext i8 %58 to i16
  %tobool.i.i.i.i6.i34.i = icmp eq i8 %58, 0
  %59 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i.i5.i33.i, i1 true) #2, !range !15
  %60 = trunc i16 %59 to i8
  %.op73.i = add nsw i8 %60, -8
  %sub.i.i.i9.i37.i = select i1 %tobool.i.i.i.i6.i34.i, i8 8, i8 %.op73.i
  %61 = insertelement <4 x i8> undef, i8 %sub.i.i.i9.i37.i, i32 0
  %62 = extractelement <16 x i8> %9, i32 13
  %conv.i4.i.i11.i39.i = zext i8 %62 to i16
  %tobool.i.i5.i.i12.i40.i = icmp eq i8 %62, 0
  %63 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i.i11.i39.i, i1 true) #2, !range !15
  %64 = trunc i16 %63 to i8
  %.op74.i = add nsw i8 %64, -8
  %sub.i8.i.i15.i43.i = select i1 %tobool.i.i5.i.i12.i40.i, i8 8, i8 %.op74.i
  %65 = insertelement <4 x i8> %61, i8 %sub.i8.i.i15.i43.i, i32 1
  %66 = extractelement <16 x i8> %9, i32 14
  %conv.i.i5.i18.i46.i = zext i8 %66 to i16
  %tobool.i.i.i6.i19.i47.i = icmp eq i8 %66, 0
  %67 = tail call i16 @llvm.ctlz.i16(i16 %conv.i.i5.i18.i46.i, i1 true) #2, !range !15
  %68 = trunc i16 %67 to i8
  %.op75.i = add nsw i8 %68, -8
  %sub.i.i9.i22.i50.i = select i1 %tobool.i.i.i6.i19.i47.i, i8 8, i8 %.op75.i
  %69 = insertelement <4 x i8> undef, i8 %sub.i.i9.i22.i50.i, i32 0
  %70 = extractelement <16 x i8> %9, i32 15
  %conv.i4.i11.i24.i52.i = zext i8 %70 to i16
  %tobool.i.i5.i12.i25.i53.i = icmp eq i8 %70, 0
  %71 = tail call i16 @llvm.ctlz.i16(i16 %conv.i4.i11.i24.i52.i, i1 true) #2, !range !15
  %72 = trunc i16 %71 to i8
  %.op76.i = add nsw i8 %72, -8
  %sub.i8.i15.i28.i56.i = select i1 %tobool.i.i5.i12.i25.i53.i, i8 8, i8 %.op76.i
  %73 = insertelement <4 x i8> %69, i8 %sub.i8.i15.i28.i56.i, i32 1
  %vecinit3.i31.i59.i = shufflevector <4 x i8> %65, <4 x i8> %73, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  %vext2.i60.i = shufflevector <4 x i8> %vecinit3.i31.i59.i, <4 x i8> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  %vecinit3.i61.i = shufflevector <8 x i8> %vext.i32.i, <8 x i8> %vext2.i60.i, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 8, i32 9, i32 10, i32 11>
  %vext2.i = shufflevector <8 x i8> %vecinit3.i61.i, <8 x i8> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef>
  %vecinit3.i = shufflevector <16 x i8> %vext.i, <16 x i8> %vext2.i, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23>
  %arrayidx.i = getelementptr inbounds i8, i8 addrspace(1)* %out, i64 %mul.i4
  %74 = bitcast i8 addrspace(1)* %arrayidx.i to <16 x i8> addrspace(1)*
  store <16 x i8> %vecinit3.i, <16 x i8> addrspace(1)* %74, align 1, !tbaa !14
  ret void
}
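
(All four widths reduce to the same per-byte rule, so the device output can
be checked on the host against a trivial reference. A hedged sketch, reusing
the illustrative clz8 helper sketched after the assembly listing above:

    #include <stddef.h>

    /* Hedged host-side check; clz8 is the made-up helper from earlier. */
    static int check_clz_results(const unsigned char *in,
                                 const unsigned char *out, size_t n) {
        for (size_t i = 0; i < n; ++i)
            if (out[i] != clz8(in[i]))
                return 0;   /* mismatch, as in the regression reported here */
        return 1;
    }
)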

attributes #0 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "denorms-are-zero"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="false" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="carrizo" "target-features"="+16-bit-insts,+ci-insts,+dpp,+fp32-denormals,+fp64-fp16-denormals,+s-memrealtime,+vi-insts" "uniform-work-group-size"="true" "unsafe-fp-math"="false" "use-soft-float"="false" }
attributes #1 = { nounwind readnone speculatable }
attributes #2 = { nounwind }

!opencl.ocl.version = !{!0}
!llvm.ident = !{!1, !2, !3}
!llvm.module.flags = !{!4, !5}

!0 = !{i32 1, i32 1}
!1 = !{!"clang version 8.0.0 (https://git.llvm.org/git/clang.git 3d84c7ca0d8468fc4a275638c891a8d43b79c476) (https://git.llvm.org/git/llvm.git 199c0d32e96b646bd8cf6beeaf0f99f8a434b56a)"}
!2 = !{!"clang version 8.0.0 (https://git.llvm.org/git/clang.git 87aefa4312da1e6a80bdbc3828363391714b3142) (https://git.llvm.org/git/llvm.git 26598e549e6344a04bd9c32b53567d9cdb9894da)"}
!3 = !{!"clang version 8.0.0 (https://git.llvm.org/git/clang.git 852234a8798dd5f5401a14b3fd9823cd8b45fbfa) (https://git.llvm.org/git/llvm.git b0c4f7e14ecb82fb629be0263c6bdf1001bf8737)"}
!4 = !{i32 1, !"wchar_size", i32 4}
!5 = !{i32 7, !"PIC Level", i32 1}
!6 = !{!"none", !"none"}
!7 = !{!"char*", !"char*"}
!8 = !{!"", !""}
!9 = !{!10, !10, i64 0}
!10 = !{!"int", !11, i64 0}
!11 = !{!"omnipotent char", !12, i64 0}
!12 = !{!"Simple C/C++ TBAA"}
!13 = !{i32 0, i32 1024}
!14 = !{!11, !11, i64 0}
!15 = !{i16 8, i16 17}