<html>
<head>
<base href="http://llvm.org/bugs/" />
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link
bz_status_NEW "
title="NEW --- - NEON intrinsics prevent removing redundant store and load on armv7"
href="http://llvm.org/bugs/show_bug.cgi?id=21778">21778</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>NEON intrinsics prevent removing redundant store and load on armv7
</td>
</tr>
<tr>
<th>Product</th>
<td>libraries
</td>
</tr>
<tr>
<th>Version</th>
<td>trunk
</td>
</tr>
<tr>
<th>Hardware</th>
<td>Macintosh
</td>
</tr>
<tr>
<th>OS</th>
<td>All
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>normal
</td>
</tr>
<tr>
<th>Priority</th>
<td>P
</td>
</tr>
<tr>
<th>Component</th>
<td>Scalar Optimizations
</td>
</tr>
<tr>
<th>Assignee</th>
<td>unassignedbugs@nondot.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>simontaylor1@ntlworld.com
</td>
</tr>
<tr>
<th>CC</th>
<td>llvmbugs@cs.uiuc.edu
</td>
</tr>
<tr>
<th>Classification</th>
<td>Unclassified
</td>
</tr></table>
<p>
<div>
<pre>I noticed that LLVM was unable to optimize away redundant stores and loads
when I added some NEON intrinsics to my 4x4 matrix multiply function.

I've created a smaller example that shows the same problem. This is using
"Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn)" (from the
latest Xcode bundle with the iOS SDK), targeting armv7, using -O3.
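
Something along these lines should reproduce it (the exact invocation here is
my reconstruction rather than a recorded command):

    xcrun --sdk iphoneos clang++ -arch armv7 -O3 -S test.cpp -o test.s
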
Here is the test code without intrinsics:

struct vec4
{
    float data[4];
};

vec4 operator* (const vec4& a, const vec4& b)
{
    vec4 result;
    for(int i = 0; i < 4; ++i)
        result.data[i] = a.data[i] * b.data[i];
    return result;
}

void TestVec4Multiply(vec4& a, vec4& b, vec4& result)
{
    result = a * b;
}

void TestVec4Multiply3(vec4& a, vec4& b, vec4& c, vec4& result)
{
    result = a * b * c;
}

In this case, the vectorizer actually generates the optimal code:

__Z16TestVec4MultiplyR4vec4S0_S0_:
@ BB#0:
        vld1.32  {d16, d17}, [r1]
        vld1.32  {d18, d19}, [r0]
        vmul.f32 q8, q9, q8
        vst1.32  {d16, d17}, [r2]
        bx       lr

__Z17TestVec4Multiply3R4vec4S0_S0_S0_:
@ BB#0:
        vld1.32  {d16, d17}, [r1]
        vld1.32  {d18, d19}, [r0]
        vmul.f32 q8, q9, q8
        vld1.32  {d18, d19}, [r2]
        vmul.f32 q8, q8, q9
        vst1.32  {d16, d17}, [r3]
        bx       lr

With my actual matrix multiply code the vectorizer is not as successful, so I
wanted to help the compiler along with some intrinsics. Here is operator*
replaced by an implementation using NEON intrinsics:

#include <arm_neon.h>

vec4 operator* (const vec4& a, const vec4& b)
{
    vec4 result;
    float32x4_t result_data = vmulq_f32(vld1q_f32(a.data), vld1q_f32(b.data));
    vst1q_f32(result.data, result_data);
    return result;
}

Unfortunately the generated code now has some redundant stores and loads:

__Z16TestVec4MultiplyR4vec4S0_S0_:
@ BB#0:
        sub      sp, #16
        vld1.32  {d16, d17}, [r1]
        vld1.32  {d18, d19}, [r0]
        mov      r0, sp
        vmul.f32 q8, q9, q8
        vst1.32  {d16, d17}, [r0]
        vld1.32  {d16, d17}, [r0]
        vst1.32  {d16, d17}, [r2]
        add      sp, #16
        bx       lr

__Z17TestVec4Multiply3R4vec4S0_S0_S0_:
@ BB#0:
        sub      sp, #32
        vld1.32  {d16, d17}, [r1]
        vld1.32  {d18, d19}, [r0]
        mov      r0, sp
        vmul.f32 q8, q9, q8
        vst1.32  {d16, d17}, [r0]
        vld1.32  {d16, d17}, [r2]
        vld1.32  {d18, d19}, [r0]
        add      r0, sp, #16
        vmul.f32 q8, q9, q8
        vst1.32  {d16, d17}, [r0]
        vld1.32  {d16, d17}, [r0]
        vst1.32  {d16, d17}, [r3]
        add      sp, #32
        bx       lr

These redundant store/load pairs through the stack temporary seem to be
especially bad news on many ARM cores. See here:
<a href="http://lists.freedesktop.org/archives/pixman/2011-August/001398.html">http://lists.freedesktop.org/archives/pixman/2011-August/001398.html</a>

In my testing of the 4x4 matrix multiply, the version with the temporaries
ends up about 3x slower than code that has them eliminated.</pre>
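<pre>A workaround sketch, based on my assumption (not verified on the exact
toolchain above) that the vst1q_f32/vld1q_f32 calls become target intrinsics
the mid-level optimizer cannot see through as plain stores and loads: keep
intermediate values in float32x4_t so no address of a stack temporary is ever
taken. The vec4q wrapper name and layout here are my own invention:

#include &lt;arm_neon.h&gt;

struct vec4 { float data[4]; };   // same vec4 as above, repeated so the
                                  // snippet is self-contained

// Hypothetical register-resident wrapper around a NEON vector.
struct vec4q { float32x4_t v; };

static inline vec4q operator* (vec4q a, vec4q b)
{
    // Pure value computation: no memory traffic is required here.
    vec4q r;
    r.v = vmulq_f32(a.v, b.v);
    return r;
}

void TestVec4Multiply3(vec4& a, vec4& b, vec4& c, vec4& result)
{
    vec4q qa, qb, qc;
    qa.v = vld1q_f32(a.data);
    qb.v = vld1q_f32(b.data);
    qc.v = vld1q_f32(c.data);
    // Only the inputs and the final result touch memory; the intermediate
    // product stays in q registers.
    vst1q_f32(result.data, (qa * qb * qc).v);
}</pre>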
</div>
</p>
</body>
</html>