[LLVMbugs] [Bug 22522] New: AVX code can be slower than SSE code

bugzilla-daemon at llvm.org bugzilla-daemon at llvm.org
Mon Feb 9 10:31:09 PST 2015


http://llvm.org/bugs/show_bug.cgi?id=22522

            Bug ID: 22522
           Summary: AVX code can be slower than SSE code
           Product: new-bugs
           Version: 3.6
          Hardware: PC
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P
         Component: new bugs
          Assignee: unassignedbugs at nondot.org
          Reporter: pitrou at free.fr
                CC: llvmbugs at cs.uiuc.edu
    Classification: Unclassified

Created attachment 13829
  --> http://llvm.org/bugs/attachment.cgi?id=13829&action=edit
IR file

On Sandy Bridge or Ivy Bridge, disabling AVX can lead to faster vectorized
code when the output is unaligned.
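
The attached vect.ll isn't reproduced here, but judging from the vaddpd loop
below it presumably boils down to an elementwise double addition over
possibly-unaligned arrays. A minimal C sketch of that shape (the function
name and signature are my guess, not taken from the attachment):

/* Hypothetical reconstruction of the kind of loop in vect.ll:
 * an elementwise double addition that the loop vectorizer turns
 * into the AVX loop quoted further down. */
void add_arrays(double *out, const double *a, const double *b, long n) {
    for (long i = 0; i < n; i++)
        out[i] = a[i] + b[i];   /* vectorized as vaddpd on 4 doubles at a time */
}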

Attaching an IR file. You can generate the AVX-enabled code using:

opt -mcpu=corei7-avx -S -O3 vect.ll | llc -O3 -mcpu=corei7-avx 

You can generate the AVX-disabled code using:

opt -mcpu=corei7-avx -mattr=-avx -S -O3 vect.ll | llc -O3 -mcpu=corei7-avx -mattr=-avx
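
The report doesn't include the timing harness, but one way to compare the two
variants is to assemble the llc output (e.g. with llc -filetype=obj, or by
passing the generated .s file to cc) and link it against a small driver along
these lines. The function name and signature here are placeholders, not taken
from vect.ll, so adjust them to whatever the IR actually defines:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Assumed name/signature -- replace with the real symbol from vect.ll. */
void add_arrays(double *out, const double *a, const double *b, long n);

int main(void) {
    long n = 1 << 20;
    /* malloc typically returns 16-byte-aligned memory on x86-64, so the
     * +1 element offsets below make all three buffers deliberately
     * misaligned for 32-byte AVX accesses. */
    double *a   = malloc((n + 4) * sizeof(double));
    double *b   = malloc((n + 4) * sizeof(double));
    double *out = malloc((n + 4) * sizeof(double));
    for (long i = 0; i < n; i++) { a[i + 1] = i; b[i + 1] = 2.0 * i; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int rep = 0; rep < 100; rep++)
        add_arrays(out + 1, a + 1, b + 1, n);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    printf("%.3f s (checksum %f)\n", secs, out[1 + n / 2]);

    free(a); free(b); free(out);
    return 0;
}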

The main AVX loop looks like the following:

.LBB0_6:                                # %vector.body
                                        # =>This Inner Loop Header: Depth=1
    vmovupd    -96(%rcx), %xmm0
    vinsertf128    $1, -80(%rcx), %ymm0, %ymm0
    vmovupd    -64(%rcx), %xmm1
    vinsertf128    $1, -48(%rcx), %ymm1, %ymm1
    vmovupd    -32(%rcx), %xmm2
    vinsertf128    $1, -16(%rcx), %ymm2, %ymm2
    vmovupd    (%rcx), %xmm3
    vinsertf128    $1, 16(%rcx), %ymm3, %ymm3
    vmovupd    -96(%rsi), %xmm4
    vinsertf128    $1, -80(%rsi), %ymm4, %ymm4
    vmovupd    -64(%rsi), %xmm5
    vinsertf128    $1, -48(%rsi), %ymm5, %ymm5
    vmovupd    -32(%rsi), %xmm6
    vinsertf128    $1, -16(%rsi), %ymm6, %ymm6
    vmovupd    (%rsi), %xmm7
    vinsertf128    $1, 16(%rsi), %ymm7, %ymm7
    vaddpd    %ymm4, %ymm0, %ymm0
    vaddpd    %ymm5, %ymm1, %ymm1
    vaddpd    %ymm6, %ymm2, %ymm2
    vaddpd    %ymm7, %ymm3, %ymm3
    vextractf128    $1, %ymm0, -80(%rdx)
    vmovupd    %xmm0, -96(%rdx)
    vextractf128    $1, %ymm1, -48(%rdx)
    vmovupd    %xmm1, -64(%rdx)
    vextractf128    $1, %ymm2, -16(%rdx)
    vmovupd    %xmm2, -32(%rdx)
    vextractf128    $1, %ymm3, 16(%rdx)
    vmovupd    %xmm3, (%rdx)
    subq    $-128, %rdx
    subq    $-128, %rsi
    subq    $-128, %rcx
    addq    $-16, %rdi
    jne    .LBB0_6


I'm not an expert, so I can't really say what the cause is, but it looks a bit
strange that the output array is not written sequentially (-80(%rdx), then
-96(%rdx), then -48(%rdx), etc.).

This is with 3.6rc1 as well as 3.5.

-- 
You are receiving this mail because:
You are on the CC list for the bug.