[llvm-bugs] [Bug 34111] New: Compiler no longer optimizing a vector add+shuffle to identity to a horizontal add
via llvm-bugs
llvm-bugs at lists.llvm.org
Mon Aug 7 19:28:16 PDT 2017
https://bugs.llvm.org/show_bug.cgi?id=34111
Bug ID: 34111
Summary: Compiler no longer optimizing a vector add+shuffle to
identity to a horizontal add
Product: new-bugs
Version: trunk
Hardware: PC
OS: Linux
Status: NEW
Severity: normal
Priority: P
Component: new bugs
Assignee: unassignedbugs at nondot.org
Reporter: douglas_yung at playstation.sony.com
CC: llvm-bugs at lists.llvm.org
It appears that commit r309812 broke the compiler's ability to optimize a vector add followed by a shuffle (one that is effectively an identity on its defined lanes) into a single horizontal add instruction. Consider the following code:
#include <immintrin.h>

__m128 add_ps_001(__m128 a, __m128 b) {
  __m128 r = (__m128){ a[0] + a[1], a[2] + a[3], b[0] + b[1], b[2] + b[3] };
  /* Lanes 0 and 1 are undef (-1); lanes 2 and 3 pass through from r. */
  return __builtin_shufflevector(r, a, -1, -1, 2, 3);
}
Prior to commit r309812, when targeting x86 with AVX and optimizations enabled (clang -S
-O2 -mavx), the compiler would generate the following code:
.cfi_startproc
# BB#0: # %entry
vhaddps %xmm1, %xmm0, %xmm0
retq
But starting with commit r309812, the compiler is now generating less efficient
code:
.cfi_startproc
# BB#0: # %entry
vpermilps $232, %xmm1, %xmm0 # xmm0 = xmm1[0,2,2,3]
vpermilps $237, %xmm1, %xmm1 # xmm1 = xmm1[1,3,2,3]
vaddps %xmm1, %xmm0, %xmm0
vmovddup %xmm0, %xmm0 # xmm0 = xmm0[0,0]
retq