[llvm-bugs] [Bug 43240] New: KNL: inline memcpy uses two ymm loads/stores instead of one zmm load/store

via llvm-bugs llvm-bugs at lists.llvm.org
Fri Sep 6 06:17:21 PDT 2019


https://bugs.llvm.org/show_bug.cgi?id=43240

            Bug ID: 43240
           Summary: KNL: inline memcpy uses two ymm loads/stores instead
                    of one zmm load/store
           Product: libraries
           Version: trunk
          Hardware: PC
                OS: All
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: Backend: X86
          Assignee: unassignedbugs at nondot.org
          Reporter: dave at znu.io
                CC: craig.topper at gmail.com, llvm-bugs at lists.llvm.org,
                    llvm-dev at redking.me.uk, spatel+llvm at rotateright.com

The following code generates two ymm loads/stores instead of a single zmm
load/store. This appears to be specific to knl, since both skylake-avx512 and
icelake-client generate the expected single zmm load/store. And yes, this
example is derived from test/CodeGen/X86/memcpy.ll.

declare void @llvm.memcpy.p0i8.p0i8.i64(i8* nocapture, i8* nocapture, i64, i1) nounwind

define void @test(i8* nocapture %A, i8* nocapture %B) nounwind optsize noredzone {
entry:
  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %A, i8* %B, i64 64, i1 false)
  ret void
}
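For reference, a roughly equivalent C reproducer (my own sketch, not taken from
the original test file) that clang lowers to the same @llvm.memcpy intrinsic:

```c
#include <string.h>

/* Copies exactly 64 bytes. clang expands this call to the
 * @llvm.memcpy.p0i8.p0i8.i64 intrinsic with a constant length of 64,
 * so compiling with something like "clang -O2 -march=knl -S" should
 * exhibit the same two-ymm codegen as the IR above. */
void test(char *A, char *B) {
    memcpy(A, B, 64);
}
```

(The exact clang invocation is an assumption on my part; the IR test case above
is the authoritative reproducer.)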

Expected result (as seen with skylake-avx512 and icelake-client):

        vmovups (%rsi), %zmm0
        vmovups %zmm0, (%rdi)
        retq

Actual result (as seen with knl):

        vmovups (%rsi), %ymm0
        vmovups 32(%rsi), %ymm1
        vmovups %ymm1, 32(%rdi)
        vmovups %ymm0, (%rdi)
        retq
