[llvm-bugs] [Bug 36180] New: [x86] scalar FP code runs ~15% slower on Haswell when compiled with -mavx
via llvm-bugs
llvm-bugs at lists.llvm.org
Wed Jan 31 13:45:23 PST 2018
https://bugs.llvm.org/show_bug.cgi?id=36180
Bug ID: 36180
Summary: [x86] scalar FP code runs ~15% slower on Haswell when
compiled with -mavx
Product: libraries
Version: trunk
Hardware: PC
OS: All
Status: NEW
Severity: enhancement
Priority: P
Component: Backend: X86
Assignee: unassignedbugs at nondot.org
Reporter: spatel+llvm at rotateright.com
CC: llvm-bugs at lists.llvm.org
Created attachment 19783
--> https://bugs.llvm.org/attachment.cgi?id=19783&action=edit
himeno.c source file
I have a Haswell perf mystery that I can't explain. The himeno program (see
attachment) is an FP and memory benchmark that plows through large
multi-dimensional arrays doing 32-bit fadd/fsub/fmul.
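For reference, the hot loop is a Jacobi-style stencil over float arrays. Here is a
much-simplified sketch of the shape of that work (based on the publicly available
Himeno benchmark, not copied from the attachment; the real kernel uses several
distinct coefficient arrays and more neighbor terms):

/* Simplified stand-in for himeno's Jacobi kernel: a long chain of scalar
   32-bit multiplies, adds, and subtracts over neighboring elements of a
   flattened [imax][jmax][kmax] float array. */
static float jacobi_sketch(int imax, int jmax, int kmax, float omega,
                           const float *a, const float *c, const float *bnd,
                           const float *p, float *wrk)
{
    float gosa = 0.0f;
    for (int i = 1; i < imax - 1; ++i)
        for (int j = 1; j < jmax - 1; ++j)
            for (int k = 1; k < kmax - 1; ++k) {
                int x = (i * jmax + j) * kmax + k;       /* p[i][j][k]   */
                float s0 = a[x] * p[x + jmax * kmax]     /* p[i+1][j][k] */
                         + a[x] * p[x + kmax]            /* p[i][j+1][k] */
                         + a[x] * p[x + 1]               /* p[i][j][k+1] */
                         + c[x] * p[x - jmax * kmax]     /* p[i-1][j][k] */
                         + c[x] * p[x - kmax]            /* p[i][j-1][k] */
                         + c[x] * p[x - 1];              /* p[i][j][k-1] */
                float ss = (s0 - p[x]) * bnd[x];
                gosa += ss * ss;
                wrk[x] = p[x] + omega * ss;
            }
    return gosa;
}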
To eliminate potentially questionable transforms and variation from the
vectorizers, build it as scalar-ops only like this:
$ ./clang -O2 himeno.c -fno-vectorize -fno-slp-vectorize -o himeno_novec_sse
$ ./clang -O2 himeno.c -fno-vectorize -fno-slp-vectorize -mavx -o himeno_novec_avx
And I'm testing on a 4.0GHz Haswell iMac running macOS 10.13.3:
$ ./himeno_novec_sse
mimax = 257 mjmax = 129 mkmax = 129
imax = 256 jmax = 128 kmax =128
cpu : 13.244777 sec.
Loop executed for 500 times
Gosa : 9.897132e-04
MFLOPS measured : 5175.818966
Score based on MMX Pentium 200MHz : 160.391043
$ ./himeno_novec_avx
mimax = 257 mjmax = 129 mkmax = 129
imax = 256 jmax = 128 kmax =128
cpu : 15.533612 sec.
Loop executed for 500 times
Gosa : 9.897132e-04
MFLOPS measured : 4413.176279
Score based on MMX Pentium 200MHz : 136.757864
There's an unfortunate amount of noise (~5%) in the perf on this system with
this benchmark, but these results are reproducible. I'm consistently seeing
~15% better perf with the non-AVX build.
If we look at the inner-loop asm from the two builds, it is virtually identical
in terms of operations. The SSE code just has a few extra instructions to copy
values because of the destructive two-operand ops, but the loads, stores, and
math are the same.
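To make the "extra copies" point concrete, here is a tiny standalone example
(illustrative only, not taken from the himeno build; register choices and
scheduling in the real loop will differ):

/* Both 'a' and 'b' are still live after the first operation, so the
   destructive two-operand SSE form has to preserve one of them with a
   movaps, while the three-operand AVX form can write to a fresh register.

   SSE (roughly):                    AVX (roughly):
     movaps %xmm0, %xmm2               vaddss %xmm1, %xmm0, %xmm2
     addss  %xmm1, %xmm2               vsubss %xmm1, %xmm0, %xmm0
     subss  %xmm1, %xmm0               vmulss %xmm2, %xmm0, %xmm0
     mulss  %xmm2, %xmm0
*/
float sumdiff(float a, float b) {
    return (a + b) * (a - b);
}

The extra register copies add instructions but no FP work, which matches the
observation above that the loads, stores, and math are the same.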
An IACA analysis of these loops says they should have virtually the same
throughput on HSW:
Block Throughput: 20.89 Cycles       Throughput Bottleneck: Backend
Loop Count: 22
Port Binding In Cycles Per Iteration:
--------------------------------------------------------------------------------------------------
|  Port  |  0  - DV  |  1   |  2  -  D   |  3  -  D   |  4   |  5   |  6   |  7   |
--------------------------------------------------------------------------------------------------
| Cycles | 13.0  0.0 | 21.0 | 12.0  12.0 | 12.0  11.0 | 1.0  | 2.0  | 2.0  | 0.0  |
--------------------------------------------------------------------------------------------------
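For anyone wanting to reproduce this kind of per-loop analysis: one common way is
to wrap the loop of interest with IACA's marker macros and run the analyzer on
the resulting object file. A minimal sketch, assuming a recent IACA and its
iacaMarks.h header (the loop body below is a placeholder, not himeno's kernel,
and the exact analyzer flags depend on the IACA version):

/* Build and analyze, roughly:
     $ clang -O2 -c iaca_demo.c
     $ iaca -arch HSW iaca_demo.o
*/
#include "iacaMarks.h"   /* ships with IACA; defines IACA_START / IACA_END */

void kernel(float *dst, const float *src, int n) {
    for (int i = 1; i < n - 1; ++i) {
        IACA_START                  /* begin the analyzed block */
        dst[i] = 0.5f * (src[i - 1] + src[i + 1]);
    }
    IACA_END                        /* end the analyzed block */
}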