[llvm-bugs] [Bug 51108] New: Poor unrolling prevents vectorization opportunities

via llvm-bugs llvm-bugs at lists.llvm.org
Thu Jul 15 12:05:34 PDT 2021


https://bugs.llvm.org/show_bug.cgi?id=51108

            Bug ID: 51108
           Summary: Poor unrolling prevents vectorization opportunities
           Product: libraries
           Version: trunk
          Hardware: PC
                OS: Windows NT
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: Loop Optimizer
          Assignee: unassignedbugs at nondot.org
          Reporter: llvm-dev at redking.me.uk
                CC: andrea.dibiagio at gmail.com, florian_hahn at apple.com,
                    lebedev.ri at gmail.com, llvm-bugs at lists.llvm.org,
                    ryta1203 at gmail.com, spatel+llvm at rotateright.com

https://godbolt.org/z/jE1e9rT5j
(NOTE: fma disabled on gcc to prevent the fmul+fadd->fma contraction diff)

constexpr int SIZE = 128;
float A[SIZE][16];
float B[SIZE][16];

__attribute__((__noinline__))
float foo()
{
    float sum = 0.0f;
    for (int i = 1; i < 32; ++i)
        for (int j = 0; j < 4; ++j)
            sum += A[i][j] * B[i][j];

    return sum;
}

clang -g0 -O3 -march=znver2

_Z3foov:
        vxorps  %xmm0, %xmm0, %xmm0
        movq    $-1984, %rax                    # imm = 0xF840
.LBB0_1:
        vmovss  A+2048(%rax), %xmm1             # xmm1 = mem[0],zero,zero,zero
        vmovsd  B+2052(%rax), %xmm2             # xmm2 = mem[0],zero
        vmulss  B+2048(%rax), %xmm1, %xmm1
        vaddss  %xmm1, %xmm0, %xmm0
        vmovsd  A+2052(%rax), %xmm1             # xmm1 = mem[0],zero
        vmulps  %xmm2, %xmm1, %xmm1
        vaddss  %xmm1, %xmm0, %xmm0
        vmovshdup       %xmm1, %xmm1            # xmm1 = xmm1[1,1,3,3]
        vaddss  %xmm1, %xmm0, %xmm0
        vmovss  A+2060(%rax), %xmm1             # xmm1 = mem[0],zero,zero,zero
        vmulss  B+2060(%rax), %xmm1, %xmm1
        addq    $64, %rax
        vaddss  %xmm1, %xmm0, %xmm0
        jne     .LBB0_1
        retq

The clang code has several issues:

1 - if we'd used a better indvar we could have avoided the very large offsets
in the address math (keep the base addresses of A and B in registers and use a
better range/increment for %rax) - see sketch 1 below.

2 - GCC recognises that the array is fully dereferenceable, allowing it to use
fewer (vector) loads and then extract/shuffle the elements it requires - see
sketch 2 below.

3 - we fail to ensure the per-loop reduction is in a form that lets us use
HADDPS (on targets where it's fast) - see sketch 3 below.

4 - the LoopMicroOpBufferSize in the znver3 model has a VERY unexpected effect
on unrolling - I'm not sure clang's interpretation of the buffer size matches
what simply copying AMD's hardware spec value into the model implies.
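
Sketch 1 - a rough illustration of the addressing pattern point 1 is asking
for, using the A/B globals from the reproducer above (the function name is
made up, and this is not compiler output): keep the row pointers in registers
and bump them by the 64-byte row stride instead of indexing A+2048(%rax) /
B+2048(%rax) off a large negative counter.

float foo_ptr_sketch()   // hypothetical name, for illustration only
{
    float sum = 0.0f;
    const float *a = &A[1][0];                      // base addresses kept in registers
    const float *b = &B[1][0];
    for (int i = 1; i < 32; ++i, a += 16, b += 16)  // 16 floats == 64-byte row stride
        for (int j = 0; j < 4; ++j)
            sum += a[j] * b[j];
    return sum;
}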
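
Sketch 2 - one way to get the "fewer vector loads + extract/shuffle" shape
from point 2, written with SSE intrinsics (illustrative only, not what either
compiler emits): each row holds 16 floats, so a 16-byte load of the first 4
elements is always dereferenceable, and the scalar add order of the source is
preserved so no reassociation is required.

#include <immintrin.h>

float foo_extract_sketch()   // hypothetical name, for illustration only
{
    float sum = 0.0f;
    for (int i = 1; i < 32; ++i) {
        // one 16-byte load per array per row instead of four scalar loads
        __m128 prod = _mm_mul_ps(_mm_loadu_ps(&A[i][0]), _mm_loadu_ps(&B[i][0]));
        // extract/shuffle the lanes, keeping the original scalar add order
        sum += _mm_cvtss_f32(prod);
        sum += _mm_cvtss_f32(_mm_shuffle_ps(prod, prod, _MM_SHUFFLE(1, 1, 1, 1)));
        sum += _mm_cvtss_f32(_mm_shuffle_ps(prod, prod, _MM_SHUFFLE(2, 2, 2, 2)));
        sum += _mm_cvtss_f32(_mm_shuffle_ps(prod, prod, _MM_SHUFFLE(3, 3, 3, 3)));
    }
    return sum;
}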
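
Sketch 3 - the reduction shape point 3 is after: keep a 4-lane accumulator
inside the loop and do the horizontal work once at the end, where it maps onto
HADDPS (illustrative only; this reassociates the FP reduction, so it assumes
fast-math-style flags).

#include <immintrin.h>

float foo_hadd_sketch()   // hypothetical name, for illustration only
{
    __m128 vsum = _mm_setzero_ps();
    for (int i = 1; i < 32; ++i)
        vsum = _mm_add_ps(vsum, _mm_mul_ps(_mm_loadu_ps(&A[i][0]),
                                           _mm_loadu_ps(&B[i][0])));
    vsum = _mm_hadd_ps(vsum, vsum);   // {s0+s1, s2+s3, s0+s1, s2+s3}
    vsum = _mm_hadd_ps(vsum, vsum);   // lane 0 now holds s0+s1+s2+s3
    return _mm_cvtss_f32(vsum);
}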
