[llvm] r283037 - [X86][SSE] Enable commutation from MOVSD/MOVSS to BLENDPD/BLENDPS on SSE41+ targets

Mikael Holmén via llvm-commits llvm-commits at lists.llvm.org
Mon Oct 3 22:46:01 PDT 2016


I filed a PR for this:

https://llvm.org/bugs/show_bug.cgi?id=30607
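
For what it's worth, the failure mode is visible in the quoted dump
below: FsFLD0SD defines %vreg2 in FR64, and MOVSDrr accepts an FR64
second source, but the commuted BLENDPDrri form requires both sources
to be VR128, so the verifier rejects the rewritten instruction. A
minimal guard in the new MOVSD/MOVSS commute case (an untested sketch
on my side, not the actual fix) would be to refuse the commute when
the second source isn't already a VR128 virtual register:

  // Untested sketch: MOVSDrr/MOVSSrr accept an FR64/FR32 second source,
  // but BLENDPDrri/BLENDPSrri need full VR128 operands, so bail out of
  // the commute when the register class doesn't match.
  const MachineRegisterInfo &MRI = MI.getParent()->getParent()->getRegInfo();
  unsigned Src2Reg = MI.getOperand(2).getReg();
  if (!TargetRegisterInfo::isVirtualRegister(Src2Reg) ||
      MRI.getRegClass(Src2Reg) != &X86::VR128RegClass)
    return nullptr;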

Regards,
Mikael

On 10/03/16 15:39, Mikael Holmén via llvm-commits wrote:
> Hi Simon,
>
> The attached program makes the verifier unhappy with your commit.
>
> llc -march=x86-64 -mcpu=corei7 -o /dev/null red.ll -verify-coalescing
>
> # Before register coalescing
> ********** INTERVALS **********
> %vreg0 [16r,176r:0)  0 at 16r
> %vreg1 [32r,224r:0)  0 at 32r
> %vreg2 [48r,192r:0)  0 at 48r
> %vreg3 [176r,192r:0)[192r,208r:1)  0 at 176r 1 at 192r
> %vreg5 [208r,224r:0)[224r,352r:1)  0 at 208r 1 at 224r
> %vreg7 [64r,160B:0)  0 at 64r
> %vreg8 EMPTY
> %vreg9 [240r,336B:0)  0 at 240r
> %vreg10 [272r,288r:0)  0 at 272r
> %vreg11 EMPTY
> RegMasks:
> ********** MACHINEINSTRS **********
> # Machine code for function autogen_SD17701: NoPHIs, TracksLiveness
> Constant Pool:
>   cp#0: 0x58C1C66036BF19DE, align=8
>
> 0B      BB#0: derived from LLVM BB %BB
> 16B             %vreg0<def> = V_SET0; VR128:%vreg0
> 32B             %vreg1<def> = COPY %vreg0; VR128:%vreg1,%vreg0
> 48B             %vreg2<def> = FsFLD0SD; FR64:%vreg2
> 64B             %vreg7<def> = MOVSDrm %RIP, 1, %noreg, <cp#0>, %noreg; mem:LD8[ConstantPool] FR64:%vreg7
>             Successors according to CFG: BB#1(?%)
>
> 80B     BB#1: derived from LLVM BB %CF248
>             Predecessors according to CFG: BB#0 BB#1
> 96B             UCOMISDrr %vreg7, %vreg8<undef>, %EFLAGS<imp-def>; FR64:%vreg7,%vreg8
> 112B            JNE_1 <BB#1>, %EFLAGS<imp-use>
> 128B            JNP_1 <BB#2>, %EFLAGS<imp-use,kill>
> 144B            JMP_1 <BB#1>
>             Successors according to CFG: BB#1(0x7c000000 / 0x80000000 = 96.88%) BB#2(0x04000000 / 0x80000000 = 3.12%)
>
> 160B    BB#2: derived from LLVM BB %CF251
>             Predecessors according to CFG: BB#1
> 176B            %vreg3<def> = COPY %vreg0; VR128:%vreg3,%vreg0
> 192B            %vreg3<def,tied1> = BLENDPDrri %vreg3<tied0>, %vreg2, 1; VR128:%vreg3 FR64:%vreg2
> 208B            %vreg5<def> = COPY %vreg3; VR128:%vreg5,%vreg3
> 224B            %vreg5<def,tied1> = UNPCKHPDrr %vreg5<tied0>, %vreg1; VR128:%vreg5,%vreg1
> 240B            %vreg9<def> = MOV32r0 %EFLAGS<imp-def,dead>; GR32:%vreg9
>             Successors according to CFG: BB#3(?%)
>
> 256B    BB#3: derived from LLVM BB %CF265
>             Predecessors according to CFG: BB#2 BB#3
> 272B            %vreg10<def> = COPY %vreg9:sub_8bit; GR8:%vreg10 GR32:%vreg9
> 288B            TEST8rr %vreg10, %vreg10, %EFLAGS<imp-def>; GR8:%vreg10
> 304B            JNE_1 <BB#3>, %EFLAGS<imp-use,kill>
> 320B            JMP_1 <BB#4>
>             Successors according to CFG: BB#3(0x7ffff800 / 0x80000000 = 100.00%) BB#4(0x00000800 / 0x80000000 = 0.00%)
>
> 336B    BB#4: derived from LLVM BB %CF274
>             Predecessors according to CFG: BB#3
> 352B            MOVHPDmr %vreg11<undef>, 1, %noreg, 0, %noreg, %vreg5; mem:ST8[undef] GR64:%vreg11 VR128:%vreg5
>
> # End machine code for function autogen_SD17701.
>
> *** Bad machine code: Illegal virtual register for instruction ***
> - function:    autogen_SD17701
> - basic block: BB#2 CF251 (0x2d92358) [160B;256B)
> - instruction: 192B     %vreg3<def,tied1> = BLENDPDrri
> - operand 2:   %vreg2
> Expected a VR128 register, but got a FR64 register
> LLVM ERROR: Found 1 machine code errors.
>
> Regards,
> Mikael
>
> On 10/01/16 16:26, Simon Pilgrim via llvm-commits wrote:
>> Author: rksimon
>> Date: Sat Oct  1 09:26:11 2016
>> New Revision: 283037
>>
>> URL: http://llvm.org/viewvc/llvm-project?rev=283037&view=rev
>> Log:
>> [X86][SSE] Enable commutation from MOVSD/MOVSS to BLENDPD/BLENDPS on
>> SSE41+ targets
>>
>> Instead of selecting between MOVSD/MOVSS and BLENDPD/BLENDPS during
>> shuffle lowering based on the subtarget, this lets us select the
>> instruction based on actual commutation requirements.
>>
>> We could possibly add BLENDPD/BLENDPS -> MOVSD/MOVSS commutation and
>> MOVSD/MOVSS memory folding using a similar approach if it proves
>> useful.
>>
>> I avoided adding AVX512 handling as I'm not sure when we should be
>> making use of VBLENDPD/VBLENDPS on EVEX targets.
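>>
>> For reference, the masks fall out of the blend semantics (result
>> element i comes from src2 when imm bit i is set, else from src1):
>> MOVSD computes { src2[0], src1[1] }, so once the sources are swapped
>> the equivalent blend takes element 1 from the new second source,
>> i.e. imm 0x02; MOVSS likewise becomes a blend with imm 0x0E
>> (elements 1-3 from the second source). As a worked illustration
>> (not code from the patch):
>>
>>   //   MOVSD   a, b        = { b[0], a[1] }
>>   //   BLENDPD b, a, 0x02  = { b[0], a[1] }              // commuted form
>>   //   MOVSS   a, b        = { b[0], a[1], a[2], a[3] }
>>   //   BLENDPS b, a, 0x0E  = { b[0], a[1], a[2], a[3] }  // commuted form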
>>
>> Modified:
>>     llvm/trunk/lib/Target/X86/X86InstrInfo.cpp
>>     llvm/trunk/lib/Target/X86/X86InstrSSE.td
>>     llvm/trunk/test/CodeGen/X86/f16c-intrinsics-fast-isel.ll
>>     llvm/trunk/test/CodeGen/X86/sse-scalar-fp-arith.ll
>>     llvm/trunk/test/CodeGen/X86/vec_ss_load_fold.ll
>>     llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v2.ll
>>     llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v4.ll
>>
>> Modified: llvm/trunk/lib/Target/X86/X86InstrInfo.cpp
>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrInfo.cpp?rev=283037&r1=283036&r2=283037&view=diff
>> ==============================================================================
>> --- llvm/trunk/lib/Target/X86/X86InstrInfo.cpp (original)
>> +++ llvm/trunk/lib/Target/X86/X86InstrInfo.cpp Sat Oct  1 09:26:11 2016
>> @@ -3543,6 +3543,28 @@ MachineInstr *X86InstrInfo::commuteInstr
>>      return TargetInstrInfo::commuteInstructionImpl(WorkingMI, /*NewMI=*/false,
>>                                                     OpIdx1, OpIdx2);
>>    }
>> +  case X86::MOVSDrr:
>> +  case X86::MOVSSrr:
>> +  case X86::VMOVSDrr:
>> +  case X86::VMOVSSrr:{
>> +    // On SSE41 or later we can commute a MOVSS/MOVSD to a BLENDPS/BLENDPD.
>> +    if (!Subtarget.hasSSE41())
>> +      return nullptr;
>> +
>> +    unsigned Mask, Opc;
>> +    switch (MI.getOpcode()) {
>> +    default: llvm_unreachable("Unreachable!");
>> +    case X86::MOVSDrr:  Opc = X86::BLENDPDrri;  Mask = 0x02; break;
>> +    case X86::MOVSSrr:  Opc = X86::BLENDPSrri;  Mask = 0x0E; break;
>> +    case X86::VMOVSDrr: Opc = X86::VBLENDPDrri; Mask = 0x02; break;
>> +    case X86::VMOVSSrr: Opc = X86::VBLENDPSrri; Mask = 0x0E; break;
>> +    }
>> +    auto &WorkingMI = cloneIfNew(MI);
>> +    WorkingMI.setDesc(get(Opc));
>> +    WorkingMI.addOperand(MachineOperand::CreateImm(Mask));
>> +    return TargetInstrInfo::commuteInstructionImpl(WorkingMI, /*NewMI=*/false,
>> +                                                   OpIdx1, OpIdx2);
>> +  }
>>    case X86::PCLMULQDQrr:
>>    case X86::VPCLMULQDQrr:{
>>      // SRC1 64bits = Imm[0] ? SRC1[127:64] : SRC1[63:0]
>> @@ -3915,6 +3937,14 @@ bool X86InstrInfo::findCommutedOpIndices
>>      }
>>      return false;
>>    }
>> +  case X86::MOVSDrr:
>> +  case X86::MOVSSrr:
>> +  case X86::VMOVSDrr:
>> +  case X86::VMOVSSrr: {
>> +    if (Subtarget.hasSSE41())
>> +      return TargetInstrInfo::findCommutedOpIndices(MI, SrcOpIdx1, SrcOpIdx2);
>> +    return false;
>> +  }
>>    case X86::VPTERNLOGDZrri:      case X86::VPTERNLOGDZrmi:
>>    case X86::VPTERNLOGDZ128rri:   case X86::VPTERNLOGDZ128rmi:
>>    case X86::VPTERNLOGDZ256rri:   case X86::VPTERNLOGDZ256rmi:
>>
>> Modified: llvm/trunk/lib/Target/X86/X86InstrSSE.td
>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrSSE.td?rev=283037&r1=283036&r2=283037&view=diff
>> ==============================================================================
>> --- llvm/trunk/lib/Target/X86/X86InstrSSE.td (original)
>> +++ llvm/trunk/lib/Target/X86/X86InstrSSE.td Sat Oct  1 09:26:11 2016
>> @@ -508,6 +508,7 @@ let isReMaterializable = 1, isAsCheapAsA
>>  multiclass sse12_move_rr<RegisterClass RC, SDNode OpNode, ValueType vt,
>>                           X86MemOperand x86memop, string base_opc,
>>                           string asm_opr, Domain d = GenericDomain> {
>> +  let isCommutable = 1 in
>>    def rr : SI<0x10, MRMSrcReg, (outs VR128:$dst),
>>                (ins VR128:$src1, RC:$src2),
>>                !strconcat(base_opc, asm_opr),
>>
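>> (The isCommutable flag above is what makes the generic machinery
>> consider these opcodes for commutation at all; the X86 overrides
>> earlier in the patch then rewrite the opcode to a blend. Roughly,
>> and assuming the usual TargetInstrInfo entry points, a client such
>> as the two-address pass drives it like this sketch:)
>>
>>   // Assumed call flow, not code from this patch:
>>   unsigned Idx1 = 1, Idx2 = 2;
>>   if (MI.getDesc().isCommutable() &&
>>       TII->findCommutedOpIndices(MI, Idx1, Idx2))  // gated on SSE41
>>     TII->commuteInstruction(MI, /*NewMI=*/false,   // MOVSDrr becomes
>>                             Idx1, Idx2);           // BLENDPDrri
>>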
>> Modified: llvm/trunk/test/CodeGen/X86/f16c-intrinsics-fast-isel.ll
>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/f16c-intrinsics-fast-isel.ll?rev=283037&r1=283036&r2=283037&view=diff
>> ==============================================================================
>> --- llvm/trunk/test/CodeGen/X86/f16c-intrinsics-fast-isel.ll (original)
>> +++ llvm/trunk/test/CodeGen/X86/f16c-intrinsics-fast-isel.ll Sat Oct  1 09:26:11 2016
>> @@ -40,7 +40,7 @@ define i16 @test_cvtss_sh(float %a0) nou
>>  ; X32:       # BB#0:
>>  ; X32-NEXT:    vmovss {{.*#+}} xmm0 = mem[0],zero,zero,zero
>>  ; X32-NEXT:    vxorps %xmm1, %xmm1, %xmm1
>> -; X32-NEXT:    vmovss {{.*#+}} xmm0 = xmm0[0],xmm1[1,2,3]
>> +; X32-NEXT:    vblendps {{.*#+}} xmm0 = xmm0[0],xmm1[1,2,3]
>>  ; X32-NEXT:    vcvtps2ph $0, %xmm0, %xmm0
>>  ; X32-NEXT:    vmovd %xmm0, %eax
>>  ; X32-NEXT:    # kill: %AX<def> %AX<kill> %EAX<kill>
>> @@ -49,7 +49,7 @@ define i16 @test_cvtss_sh(float %a0) nou
>>  ; X64-LABEL: test_cvtss_sh:
>>  ; X64:       # BB#0:
>>  ; X64-NEXT:    vxorps %xmm1, %xmm1, %xmm1
>> -; X64-NEXT:    vmovss {{.*#+}} xmm0 = xmm0[0],xmm1[1,2,3]
>> +; X64-NEXT:    vblendps {{.*#+}} xmm0 = xmm0[0],xmm1[1,2,3]
>>  ; X64-NEXT:    vcvtps2ph $0, %xmm0, %xmm0
>>  ; X64-NEXT:    vmovd %xmm0, %eax
>>  ; X64-NEXT:    # kill: %AX<def> %AX<kill> %EAX<kill>
>>
>> Modified: llvm/trunk/test/CodeGen/X86/sse-scalar-fp-arith.ll
>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sse-scalar-fp-arith.ll?rev=283037&r1=283036&r2=283037&view=diff
>> ==============================================================================
>> --- llvm/trunk/test/CodeGen/X86/sse-scalar-fp-arith.ll (original)
>> +++ llvm/trunk/test/CodeGen/X86/sse-scalar-fp-arith.ll Sat Oct  1 09:26:11 2016
>> @@ -172,17 +172,29 @@ define <2 x double> @test_div_sd(<2 x do
>>  }
>>
>>  define <2 x double> @test_sqrt_sd(<2 x double> %a) {
>> -; SSE-LABEL: test_sqrt_sd:
>> -; SSE:       # BB#0:
>> -; SSE-NEXT:    sqrtsd %xmm0, %xmm1
>> -; SSE-NEXT:    movsd {{.*#+}} xmm0 = xmm1[0],xmm0[1]
>> -; SSE-NEXT:    retq
>> +; SSE2-LABEL: test_sqrt_sd:
>> +; SSE2:       # BB#0:
>> +; SSE2-NEXT:    sqrtsd %xmm0, %xmm1
>> +; SSE2-NEXT:    movsd {{.*#+}} xmm0 = xmm1[0],xmm0[1]
>> +; SSE2-NEXT:    retq
>>  ;
>> -; AVX-LABEL: test_sqrt_sd:
>> -; AVX:       # BB#0:
>> -; AVX-NEXT:    vsqrtsd %xmm0, %xmm0, %xmm1
>> -; AVX-NEXT:    vmovsd {{.*#+}} xmm0 = xmm1[0],xmm0[1]
>> -; AVX-NEXT:    retq
>> +; SSE41-LABEL: test_sqrt_sd:
>> +; SSE41:       # BB#0:
>> +; SSE41-NEXT:    sqrtsd %xmm0, %xmm1
>> +; SSE41-NEXT:    blendpd {{.*#+}} xmm0 = xmm1[0],xmm0[1]
>> +; SSE41-NEXT:    retq
>> +;
>> +; AVX1-LABEL: test_sqrt_sd:
>> +; AVX1:       # BB#0:
>> +; AVX1-NEXT:    vsqrtsd %xmm0, %xmm0, %xmm1
>> +; AVX1-NEXT:    vblendpd {{.*#+}} xmm0 = xmm1[0],xmm0[1]
>> +; AVX1-NEXT:    retq
>> +;
>> +; AVX512-LABEL: test_sqrt_sd:
>> +; AVX512:       # BB#0:
>> +; AVX512-NEXT:    vsqrtsd %xmm0, %xmm0, %xmm1
>> +; AVX512-NEXT:    vmovsd {{.*#+}} xmm0 = xmm1[0],xmm0[1]
>> +; AVX512-NEXT:    retq
>>    %1 = extractelement <2 x double> %a, i32 0
>>    %2 = call double @llvm.sqrt.f64(double %1)
>>    %3 = insertelement <2 x double> %a, double %2, i32 0
>>
>> Modified: llvm/trunk/test/CodeGen/X86/vec_ss_load_fold.ll
>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vec_ss_load_fold.ll?rev=283037&r1=283036&r2=283037&view=diff
>> ==============================================================================
>> --- llvm/trunk/test/CodeGen/X86/vec_ss_load_fold.ll (original)
>> +++ llvm/trunk/test/CodeGen/X86/vec_ss_load_fold.ll Sat Oct  1 09:26:11 2016
>> @@ -1,10 +1,10 @@
>>  ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
>>  ; RUN: llc < %s -mtriple=i686-apple-darwin9 -mattr=+sse,+sse2,+sse4.1 | FileCheck %s --check-prefix=X32
>>  ; RUN: llc < %s -mtriple=x86_64-apple-darwin9 -mattr=+sse,+sse2,+sse4.1 | FileCheck %s --check-prefix=X64
>> -; RUN: llc < %s -mtriple=i686-apple-darwin9 -mattr=+avx | FileCheck %s --check-prefix=X32_AVX
>> -; RUN: llc < %s -mtriple=x86_64-apple-darwin9 -mattr=+avx | FileCheck %s --check-prefix=X64_AVX
>> -; RUN: llc < %s -mtriple=i686-apple-darwin9 -mattr=+avx512f | FileCheck %s --check-prefix=X32_AVX
>> -; RUN: llc < %s -mtriple=x86_64-apple-darwin9 -mattr=+avx512f | FileCheck %s --check-prefix=X64_AVX
>> +; RUN: llc < %s -mtriple=i686-apple-darwin9 -mattr=+avx | FileCheck %s --check-prefix=X32_AVX --check-prefix=X32_AVX1
>> +; RUN: llc < %s -mtriple=x86_64-apple-darwin9 -mattr=+avx | FileCheck %s --check-prefix=X64_AVX --check-prefix=X64_AVX1
>> +; RUN: llc < %s -mtriple=i686-apple-darwin9 -mattr=+avx512f | FileCheck %s --check-prefix=X32_AVX --check-prefix=X32_AVX512
>> +; RUN: llc < %s -mtriple=x86_64-apple-darwin9 -mattr=+avx512f | FileCheck %s --check-prefix=X64_AVX --check-prefix=X64_AVX512
>>
>>  define i16 @test1(float %f) nounwind {
>>  ; X32-LABEL: test1:
>> @@ -43,17 +43,29 @@ define i16 @test1(float %f) nounwind {
>>  ; X32_AVX-NEXT:    ## kill: %AX<def> %AX<kill> %EAX<kill>
>>  ; X32_AVX-NEXT:    retl
>>  ;
>> -; X64_AVX-LABEL: test1:
>> -; X64_AVX:       ## BB#0:
>> -; X64_AVX-NEXT:    vxorps %xmm1, %xmm1, %xmm1
>> -; X64_AVX-NEXT:    vmovss {{.*#+}} xmm0 = xmm0[0],xmm1[1,2,3]
>> -; X64_AVX-NEXT:    vsubss {{.*}}(%rip), %xmm0, %xmm0
>> -; X64_AVX-NEXT:    vmulss {{.*}}(%rip), %xmm0, %xmm0
>> -; X64_AVX-NEXT:    vminss {{.*}}(%rip), %xmm0, %xmm0
>> -; X64_AVX-NEXT:    vmaxss %xmm1, %xmm0, %xmm0
>> -; X64_AVX-NEXT:    vcvttss2si %xmm0, %eax
>> -; X64_AVX-NEXT:    ## kill: %AX<def> %AX<kill> %EAX<kill>
>> -; X64_AVX-NEXT:    retq
>> +; X64_AVX1-LABEL: test1:
>> +; X64_AVX1:       ## BB#0:
>> +; X64_AVX1-NEXT:    vxorps %xmm1, %xmm1, %xmm1
>> +; X64_AVX1-NEXT:    vblendps {{.*#+}} xmm0 = xmm0[0],xmm1[1,2,3]
>> +; X64_AVX1-NEXT:    vsubss {{.*}}(%rip), %xmm0, %xmm0
>> +; X64_AVX1-NEXT:    vmulss {{.*}}(%rip), %xmm0, %xmm0
>> +; X64_AVX1-NEXT:    vminss {{.*}}(%rip), %xmm0, %xmm0
>> +; X64_AVX1-NEXT:    vmaxss %xmm1, %xmm0, %xmm0
>> +; X64_AVX1-NEXT:    vcvttss2si %xmm0, %eax
>> +; X64_AVX1-NEXT:    ## kill: %AX<def> %AX<kill> %EAX<kill>
>> +; X64_AVX1-NEXT:    retq
>> +;
>> +; X64_AVX512-LABEL: test1:
>> +; X64_AVX512:       ## BB#0:
>> +; X64_AVX512-NEXT:    vxorps %xmm1, %xmm1, %xmm1
>> +; X64_AVX512-NEXT:    vmovss {{.*#+}} xmm0 = xmm0[0],xmm1[1,2,3]
>> +; X64_AVX512-NEXT:    vsubss {{.*}}(%rip), %xmm0, %xmm0
>> +; X64_AVX512-NEXT:    vmulss {{.*}}(%rip), %xmm0, %xmm0
>> +; X64_AVX512-NEXT:    vminss {{.*}}(%rip), %xmm0, %xmm0
>> +; X64_AVX512-NEXT:    vmaxss %xmm1, %xmm0, %xmm0
>> +; X64_AVX512-NEXT:    vcvttss2si %xmm0, %eax
>> +; X64_AVX512-NEXT:    ## kill: %AX<def> %AX<kill> %EAX<kill>
>> +; X64_AVX512-NEXT:    retq
>>    %tmp = insertelement <4 x float> undef, float %f, i32 0        ; <<4 x float>> [#uses=1]
>>    %tmp10 = insertelement <4 x float> %tmp, float 0.000000e+00, i32 1        ; <<4 x float>> [#uses=1]
>>    %tmp11 = insertelement <4 x float> %tmp10, float 0.000000e+00, i32 2        ; <<4 x float>> [#uses=1]
>>
>> Modified: llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v2.ll
>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v2.ll?rev=283037&r1=283036&r2=283037&view=diff
>> ==============================================================================
>> --- llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v2.ll (original)
>> +++ llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v2.ll Sat Oct  1 09:26:11 2016
>> @@ -1159,16 +1159,43 @@ define <2 x i64> @insert_mem_hi_v2i64(i6
>>  }
>>
>>  define <2 x double> @insert_reg_lo_v2f64(double %a, <2 x double> %b) {
>> -; SSE-LABEL: insert_reg_lo_v2f64:
>> -; SSE:       # BB#0:
>> -; SSE-NEXT:    movsd {{.*#+}} xmm1 = xmm0[0],xmm1[1]
>> -; SSE-NEXT:    movapd %xmm1, %xmm0
>> -; SSE-NEXT:    retq
>> -;
>> -; AVX-LABEL: insert_reg_lo_v2f64:
>> -; AVX:       # BB#0:
>> -; AVX-NEXT:    vmovsd {{.*#+}} xmm0 = xmm0[0],xmm1[1]
>> -; AVX-NEXT:    retq
>> +; SSE2-LABEL: insert_reg_lo_v2f64:
>> +; SSE2:       # BB#0:
>> +; SSE2-NEXT:    movsd {{.*#+}} xmm1 = xmm0[0],xmm1[1]
>> +; SSE2-NEXT:    movapd %xmm1, %xmm0
>> +; SSE2-NEXT:    retq
>> +;
>> +; SSE3-LABEL: insert_reg_lo_v2f64:
>> +; SSE3:       # BB#0:
>> +; SSE3-NEXT:    movsd {{.*#+}} xmm1 = xmm0[0],xmm1[1]
>> +; SSE3-NEXT:    movapd %xmm1, %xmm0
>> +; SSE3-NEXT:    retq
>> +;
>> +; SSSE3-LABEL: insert_reg_lo_v2f64:
>> +; SSSE3:       # BB#0:
>> +; SSSE3-NEXT:    movsd {{.*#+}} xmm1 = xmm0[0],xmm1[1]
>> +; SSSE3-NEXT:    movapd %xmm1, %xmm0
>> +; SSSE3-NEXT:    retq
>> +;
>> +; SSE41-LABEL: insert_reg_lo_v2f64:
>> +; SSE41:       # BB#0:
>> +; SSE41-NEXT:    blendpd {{.*#+}} xmm0 = xmm0[0],xmm1[1]
>> +; SSE41-NEXT:    retq
>> +;
>> +; AVX1-LABEL: insert_reg_lo_v2f64:
>> +; AVX1:       # BB#0:
>> +; AVX1-NEXT:    vblendpd {{.*#+}} xmm0 = xmm0[0],xmm1[1]
>> +; AVX1-NEXT:    retq
>> +;
>> +; AVX2-LABEL: insert_reg_lo_v2f64:
>> +; AVX2:       # BB#0:
>> +; AVX2-NEXT:    vblendpd {{.*#+}} xmm0 = xmm0[0],xmm1[1]
>> +; AVX2-NEXT:    retq
>> +;
>> +; AVX512VL-LABEL: insert_reg_lo_v2f64:
>> +; AVX512VL:       # BB#0:
>> +; AVX512VL-NEXT:    vmovsd {{.*#+}} xmm0 = xmm0[0],xmm1[1]
>> +; AVX512VL-NEXT:    retq
>>    %v = insertelement <2 x double> undef, double %a, i32 0
>>    %shuffle = shufflevector <2 x double> %v, <2 x double> %b, <2 x i32> <i32 0, i32 3>
>>    ret <2 x double> %shuffle
>>
>> Modified: llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v4.ll
>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v4.ll?rev=283037&r1=283036&r2=283037&view=diff
>> ==============================================================================
>> --- llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v4.ll (original)
>> +++ llvm/trunk/test/CodeGen/X86/vector-shuffle-128-v4.ll Sat Oct  1 09:26:11 2016
>> @@ -2096,11 +2096,17 @@ define <4 x float> @insert_reg_and_zero_
>>  ; SSE41-NEXT:    blendps {{.*#+}} xmm0 = xmm0[0],xmm1[1,2,3]
>>  ; SSE41-NEXT:    retq
>>  ;
>> -; AVX-LABEL: insert_reg_and_zero_v4f32:
>> -; AVX:       # BB#0:
>> -; AVX-NEXT:    vxorps %xmm1, %xmm1, %xmm1
>> -; AVX-NEXT:    vmovss {{.*#+}} xmm0 = xmm0[0],xmm1[1,2,3]
>> -; AVX-NEXT:    retq
>> +; AVX1OR2-LABEL: insert_reg_and_zero_v4f32:
>> +; AVX1OR2:       # BB#0:
>> +; AVX1OR2-NEXT:    vxorps %xmm1, %xmm1, %xmm1
>> +; AVX1OR2-NEXT:    vblendps {{.*#+}} xmm0 = xmm0[0],xmm1[1,2,3]
>> +; AVX1OR2-NEXT:    retq
>> +;
>> +; AVX512VL-LABEL: insert_reg_and_zero_v4f32:
>> +; AVX512VL:       # BB#0:
>> +; AVX512VL-NEXT:    vxorps %xmm1, %xmm1, %xmm1
>> +; AVX512VL-NEXT:    vmovss {{.*#+}} xmm0 = xmm0[0],xmm1[1,2,3]
>> +; AVX512VL-NEXT:    retq
>>    %v = insertelement <4 x float> undef, float %a, i32 0
>>    %shuffle = shufflevector <4 x float> %v, <4 x float> zeroinitializer, <4 x i32> <i32 0, i32 5, i32 6, i32 7>
>>    ret <4 x float> %shuffle
>> @@ -2254,16 +2260,38 @@ define <4 x i32> @insert_mem_hi_v4i32(<2
>>  }
>>
>>  define <4 x float> @insert_reg_lo_v4f32(double %a, <4 x float> %b) {
>> -; SSE-LABEL: insert_reg_lo_v4f32:
>> -; SSE:       # BB#0:
>> -; SSE-NEXT:    movsd {{.*#+}} xmm1 = xmm0[0],xmm1[1]
>> -; SSE-NEXT:    movapd %xmm1, %xmm0
>> -; SSE-NEXT:    retq
>> +; SSE2-LABEL: insert_reg_lo_v4f32:
>> +; SSE2:       # BB#0:
>> +; SSE2-NEXT:    movsd {{.*#+}} xmm1 = xmm0[0],xmm1[1]
>> +; SSE2-NEXT:    movapd %xmm1, %xmm0
>> +; SSE2-NEXT:    retq
>>  ;
>> -; AVX-LABEL: insert_reg_lo_v4f32:
>> -; AVX:       # BB#0:
>> -; AVX-NEXT:    vmovsd {{.*#+}} xmm0 = xmm0[0],xmm1[1]
>> -; AVX-NEXT:    retq
>> +; SSE3-LABEL: insert_reg_lo_v4f32:
>> +; SSE3:       # BB#0:
>> +; SSE3-NEXT:    movsd {{.*#+}} xmm1 = xmm0[0],xmm1[1]
>> +; SSE3-NEXT:    movapd %xmm1, %xmm0
>> +; SSE3-NEXT:    retq
>> +;
>> +; SSSE3-LABEL: insert_reg_lo_v4f32:
>> +; SSSE3:       # BB#0:
>> +; SSSE3-NEXT:    movsd {{.*#+}} xmm1 = xmm0[0],xmm1[1]
>> +; SSSE3-NEXT:    movapd %xmm1, %xmm0
>> +; SSSE3-NEXT:    retq
>> +;
>> +; SSE41-LABEL: insert_reg_lo_v4f32:
>> +; SSE41:       # BB#0:
>> +; SSE41-NEXT:    blendpd {{.*#+}} xmm0 = xmm0[0],xmm1[1]
>> +; SSE41-NEXT:    retq
>> +;
>> +; AVX1OR2-LABEL: insert_reg_lo_v4f32:
>> +; AVX1OR2:       # BB#0:
>> +; AVX1OR2-NEXT:    vblendpd {{.*#+}} xmm0 = xmm0[0],xmm1[1]
>> +; AVX1OR2-NEXT:    retq
>> +;
>> +; AVX512VL-LABEL: insert_reg_lo_v4f32:
>> +; AVX512VL:       # BB#0:
>> +; AVX512VL-NEXT:    vmovsd {{.*#+}} xmm0 = xmm0[0],xmm1[1]
>> +; AVX512VL-NEXT:    retq
>>    %a.cast = bitcast double %a to <2 x float>
>>    %v = shufflevector <2 x float> %a.cast, <2 x float> undef, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
>>    %shuffle = shufflevector <4 x float> %v, <4 x float> %b, <4 x i32> <i32 0, i32 1, i32 6, i32 7>