[PATCH] D140677: [AArch64][DAG] `canCombineShuffleToExtendVectorInreg()`: allow illegal types before legalization

Roman Lebedev via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Jan 4 15:07:44 PST 2023


lebedev.ri added inline comments.


================
Comment at: llvm/test/CodeGen/X86/zero_extend_vector_inreg.ll:2713-2714
+; AVX-NEXT:    vpmovzxbw {{.*#+}} xmm4 = xmm1[0],zero,xmm1[1],zero,xmm1[2],zero,xmm1[3],zero,xmm1[4],zero,xmm1[5],zero,xmm1[6],zero,xmm1[7],zero
+; AVX-NEXT:    vpunpckhbw {{.*#+}} xmm1 = xmm1[8],xmm3[8],xmm1[9],xmm3[9],xmm1[10],xmm3[10],xmm1[11],xmm3[11],xmm1[12],xmm3[12],xmm1[13],xmm3[13],xmm1[14],xmm3[14],xmm1[15],xmm3[15]
+; AVX-NEXT:    vpaddb 48(%rdx), %xmm1, %xmm1
+; AVX-NEXT:    vpaddb 32(%rdx), %xmm4, %xmm3
----------------
Looks like we lost UNDEF knowledge and can no longer eliminate these two redundant operations


================
Comment at: llvm/test/CodeGen/X86/zero_extend_vector_inreg.ll:2931
 ; AVX-NEXT:    vpmovzxbd {{.*#+}} xmm0 = xmm0[0],zero,zero,zero,xmm0[1],zero,zero,zero,xmm0[2],zero,zero,zero,xmm0[3],zero,zero,zero
-; AVX-NEXT:    vpaddb 32(%rdx), %xmm0, %xmm0
+; AVX-NEXT:    vpaddb 48(%rdx), %xmm0, %xmm0
+; AVX-NEXT:    vpaddb 32(%rdx), %xmm3, %xmm3
----------------
Looks like we lost UNDEF knowledge and can no longer eliminate these two redundant operations


================
Comment at: llvm/test/CodeGen/X86/zero_extend_vector_inreg.ll:3147
 ; AVX-NEXT:    vpmovzxbq {{.*#+}} xmm0 = xmm0[0],zero,zero,zero,zero,zero,zero,zero,xmm0[1],zero,zero,zero,zero,zero,zero,zero
-; AVX-NEXT:    vpaddb 32(%rdx), %xmm0, %xmm0
+; AVX-NEXT:    vpaddb 48(%rdx), %xmm0, %xmm0
+; AVX-NEXT:    vpaddb 32(%rdx), %xmm3, %xmm3
----------------
Looks like we lost UNDEF knowledge and can no longer eliminate these two redundant operations




================
Comment at: llvm/test/CodeGen/X86/zero_extend_vector_inreg.ll:3347
 ; AVX-NEXT:    vpsrldq {{.*#+}} xmm0 = xmm0[15],zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero
-; AVX-NEXT:    vpaddb 32(%rdx), %xmm0, %xmm0
+; AVX-NEXT:    vpaddb 48(%rdx), %xmm0, %xmm0
+; AVX-NEXT:    vpaddb 32(%rdx), %xmm3, %xmm3
----------------
Looks like we lost UNDEF knowledge and can no longer eliminate these two redundant operations




================
Comment at: llvm/test/CodeGen/X86/zero_extend_vector_inreg.ll:3361-3366
+; AVX2-SLOW-NEXT:    vpmovzxbq {{.*#+}} xmm1 = xmm0[0],zero,zero,zero,zero,zero,zero,zero,xmm0[1],zero,zero,zero,zero,zero,zero,zero
+; AVX2-SLOW-NEXT:    vpermq {{.*#+}} ymm1 = ymm1[0,1,1,3]
+; AVX2-SLOW-NEXT:    vbroadcasti128 {{.*#+}} ymm2 = [255,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,255,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
+; AVX2-SLOW-NEXT:    # ymm2 = mem[0,1,0,1]
+; AVX2-SLOW-NEXT:    vpand %ymm2, %ymm1, %ymm1
+; AVX2-SLOW-NEXT:    vpsrld $16, %xmm0, %xmm0
----------------
Missing shuffle lowering strategy?


================
Comment at: llvm/test/CodeGen/X86/zero_extend_vector_inreg.ll:3703
+; AVX-NEXT:    vpunpckhwd {{.*#+}} xmm1 = xmm1[4],xmm3[4],xmm1[5],xmm3[5],xmm1[6],xmm3[6],xmm1[7],xmm3[7]
+; AVX-NEXT:    vpaddb 48(%rdx), %xmm1, %xmm1
+; AVX-NEXT:    vpaddb 32(%rdx), %xmm4, %xmm3
----------------
Looks like we lost UNDEF knowledge and can no longer eliminate these two redundant operations




================
Comment at: llvm/test/CodeGen/X86/zero_extend_vector_inreg.ll:4350-4353
+; AVX512F-SLOW-NEXT:    vinserti32x4 $1, %xmm1, %zmm0, %zmm0
+; AVX512F-SLOW-NEXT:    vpshuflw {{.*#+}} ymm1 = ymm0[0,1,1,3,4,5,6,7,8,9,9,11,12,13,14,15]
+; AVX512F-SLOW-NEXT:    vpshufd {{.*#+}} ymm1 = ymm1[0,1,1,3,4,5,5,7]
+; AVX512F-SLOW-NEXT:    vpermq {{.*#+}} ymm1 = ymm1[2,1,3,3]
----------------
What is happening here? This sequence looks like a notable codegen regression.


================
Comment at: llvm/test/CodeGen/X86/zero_extend_vector_inreg.ll:4370-4376
+; AVX512F-FAST-NEXT:    vpshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; AVX512F-FAST-NEXT:    vinserti32x4 $1, %xmm1, %zmm0, %zmm0
+; AVX512F-FAST-NEXT:    vpshuflw {{.*#+}} ymm1 = ymm0[0,1,1,3,4,5,6,7,8,9,9,11,12,13,14,15]
+; AVX512F-FAST-NEXT:    vmovdqa {{.*#+}} ymm2 = <4,u,u,u,5,u,u,u>
+; AVX512F-FAST-NEXT:    vpermd %ymm1, %ymm2, %ymm1
+; AVX512F-FAST-NEXT:    vpxor %xmm2, %xmm2, %xmm2
+; AVX512F-FAST-NEXT:    vpblendw {{.*#+}} ymm1 = ymm1[0],ymm2[1,2,3,4,5,6,7],ymm1[8],ymm2[9,10,11,12,13,14,15]
----------------
What is happening here? This sequence looks like a notable codegen regression.


================
Comment at: llvm/test/CodeGen/X86/zero_extend_vector_inreg.ll:4939
+; AVX-NEXT:    vextractf128 $1, %ymm0, %xmm2
+; AVX-NEXT:    vpaddb 48(%rdx), %xmm2, %xmm2
 ; AVX-NEXT:    vpaddb 32(%rdx), %xmm0, %xmm0
----------------
Looks like we lost UNDEF knowledge and can no longer eliminate these two redundant operations




Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D140677/new/

https://reviews.llvm.org/D140677
