[LLVMbugs] [Bug 23349] New: [SSE] scalar intrinsics don't fold loads
bugzilla-daemon at llvm.org
Sat Apr 25 15:13:07 PDT 2015
https://llvm.org/bugs/show_bug.cgi?id=23349
Bug ID: 23349
Summary: [SSE] scalar intrinsics don't fold loads
Product: libraries
Version: trunk
Hardware: PC
OS: All
Status: NEW
Severity: normal
Priority: P
Component: Backend: X86
Assignee: unassignedbugs at nondot.org
Reporter: spatel+llvm at rotateright.com
CC: llvmbugs at cs.uiuc.edu
Classification: Unclassified
C test case:

#include <xmmintrin.h>

__m128 add_fold(__m128 x, float *y) {
  __m128 ld = _mm_load_ss(y);
  return _mm_add_ss(x, ld);
}
Or, as an IR test case for llc:
define <4 x float> @add_fold(<4 x float> %x, float* %y) {
  %0 = load float, float* %y, align 1
  %vecext1.i = extractelement <4 x float> %x, i32 0
  %add.i = fadd float %vecext1.i, %0
  %vecins.i = insertelement <4 x float> %x, float %add.i, i32 0
  ret <4 x float> %vecins.i
}
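
This can be run through llc directly (the file name here is illustrative):

$ llc addfold.ll -o -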
----------------------------------------------------------------------------
The load should be folded into the math op, but:
$ clang -O1 addfold.c -S -o -
...
movss (%rdi), %xmm1 ## xmm1 = mem[0],zero,zero,zero
addss %xmm1, %xmm0
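
The expected output would fold the load into the add, as in the passing cases below:

addss (%rdi), %xmm0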
----------------------------------------------------------------------------
The load does get folded for non-intrinsic C (the IR has no extract/insert in
that case) and for full-vector intrinsic ops:
float add_fold2(float x, float *y) {
  return x + *y;
}

__m128 add_fold3(__m128 x, float *y) {
  __m128 ld = _mm_load_ps(y);
  return _mm_add_ps(x, ld);
}
...
addss (%rdi), %xmm0  ## add_fold2
...
addps (%rdi), %xmm0  ## add_fold3
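
For contrast, here is a sketch of the IR clang typically emits for those two
cases; the value names are illustrative, and the vector load assumes
_mm_load_ps's 16-byte alignment. Note that neither function contains the
extractelement/insertelement pattern that blocks folding in the scalar
intrinsic case:

define float @add_fold2(float %x, float* %y) {
  %ld = load float, float* %y, align 4
  %add = fadd float %x, %ld
  ret float %add
}

define <4 x float> @add_fold3(<4 x float> %x, float* %y) {
  %vy = bitcast float* %y to <4 x float>*
  %ld = load <4 x float>, <4 x float>* %vy, align 16
  %add = fadd <4 x float> %x, %ld
  ret <4 x float> %add
}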