[PATCH] D64551: [X86] EltsFromConsecutiveLoads - support common source loads

Simon Pilgrim via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Sep 5 04:29:39 PDT 2019


RKSimon marked 2 inline comments as done.
RKSimon added inline comments.


================
Comment at: llvm/trunk/test/CodeGen/X86/load-partial.ll:64
 ; AVX:       # %bb.0:
-; AVX-NEXT:    vmovsd {{.*#+}} xmm0 = mem[0],zero
-; AVX-NEXT:    vinsertps {{.*#+}} xmm0 = xmm0[0,1],mem[0],xmm0[3]
+; AVX-NEXT:    vmovups (%rdi), %xmm0
 ; AVX-NEXT:    retq
----------------
yubing wrote:
> This is also not correct according to the IR.
why?
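
For context, the tests under discussion build a <4 x float> from three
scalar loads through a pointer that the function signature declares
dereferenceable(16). A minimal sketch of the shape of that IR (the
function name and exact layout here are illustrative, not copied from
load-partial.ll):

  define <4 x float> @load_float4_float3(<4 x float>* nocapture readonly dereferenceable(16) %p) {
    %p0 = getelementptr inbounds <4 x float>, <4 x float>* %p, i64 0, i64 0
    %p1 = getelementptr inbounds <4 x float>, <4 x float>* %p, i64 0, i64 1
    %p2 = getelementptr inbounds <4 x float>, <4 x float>* %p, i64 0, i64 2
    %ld0 = load float, float* %p0, align 4
    %ld1 = load float, float* %p1, align 4
    %ld2 = load float, float* %p2, align 4
    %r0 = insertelement <4 x float> undef, float %ld0, i32 0
    %r1 = insertelement <4 x float> %r0, float %ld1, i32 1
    %r2 = insertelement <4 x float> %r1, float %ld2, i32 2
    ret <4 x float> %r2
  }

Element 3 of the result is undef, so a single 16-byte load that also
reads the fourth float still produces a valid value for this IR; the
only question is whether the wider load itself is legal to execute.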


================
Comment at: llvm/trunk/test/CodeGen/X86/load-partial.ll:86
 ; AVX:       # %bb.0:
-; AVX-NEXT:    vmovss {{.*#+}} xmm0 = mem[0],zero,zero,zero
-; AVX-NEXT:    vinsertps {{.*#+}} xmm0 = xmm0[0],mem[0],xmm0[2,3]
-; AVX-NEXT:    vinsertps {{.*#+}} xmm0 = xmm0[0,1],mem[0],xmm0[3]
+; AVX-NEXT:    vmovaps (%rdi), %xmm0
 ; AVX-NEXT:    retq
----------------
yubing wrote:
> This is not correct: according to the IR we are only loading 3 floats into %xmm0, not 4.
> Before your patch, the testcase's CHECK lines were correct:
> ; AVX:       # %bb.0:
> ; AVX-NEXT:    vmovss  (%rdi), %xmm0           # xmm0 = mem[0],zero,zero,zero
> ; AVX-NEXT:    vinsertps       $16, 4(%rdi), %xmm0, %xmm0 # xmm0 = xmm0[0],mem[0],xmm0[2,3]
> ; AVX-NEXT:    vinsertps       $32, 8(%rdi), %xmm0, %xmm0 # xmm0 = xmm0[0,1],mem[0],xmm0[3]
> ; AVX-NEXT:    retq
The pointer is 16-byte dereferenceable, so loading all 4 floats is safe.
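
To illustrate the point (a sketch, not taken from the test file): with
dereferenceable(16) on the argument, merging the three scalar loads into
one 16-byte vmovaps/vmovups cannot fault, and the extra float it reads
only feeds the undef lane. By contrast, in a hypothetical variant like
the one below, widening to 16 bytes would not be known safe:

  ; Only 12 bytes are guaranteed readable here, so folding the three
  ; scalar loads into a single 16-byte vector load could read past the
  ; known-dereferenceable region.
  define <4 x float> @load_float4_float3_deref12(float* nocapture readonly dereferenceable(12) %p) {
    %p1 = getelementptr inbounds float, float* %p, i64 1
    %p2 = getelementptr inbounds float, float* %p, i64 2
    %ld0 = load float, float* %p, align 4
    %ld1 = load float, float* %p1, align 4
    %ld2 = load float, float* %p2, align 4
    %r0 = insertelement <4 x float> undef, float %ld0, i32 0
    %r1 = insertelement <4 x float> %r0, float %ld1, i32 1
    %r2 = insertelement <4 x float> %r1, float %ld2, i32 2
    ret <4 x float> %r2
  }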


Repository:
  rL LLVM

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D64551/new/

https://reviews.llvm.org/D64551
