<html>
<head>
<base href="https://bugs.llvm.org/">
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link
bz_status_NEW "
title="NEW - [AArch64] vcombine with zero missed optimization"
href="https://bugs.llvm.org/show_bug.cgi?id=45644">45644</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>[AArch64] vcombine with zero missed optimization
</td>
</tr>
<tr>
<th>Product</th>
<td>libraries
</td>
</tr>
<tr>
<th>Version</th>
<td>9.0
</td>
</tr>
<tr>
<th>Hardware</th>
<td>Other
</td>
</tr>
<tr>
<th>OS</th>
<td>Linux
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>enhancement
</td>
</tr>
<tr>
<th>Priority</th>
<td>P
</td>
</tr>
<tr>
<th>Component</th>
<td>Backend: AArch64
</td>
</tr>
<tr>
<th>Assignee</th>
<td>unassignedbugs@nondot.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>husseydevin@gmail.com
</td>
</tr>
<tr>
<th>CC</th>
<td>arnaud.degrandmaison@arm.com, llvm-bugs@lists.llvm.org, smithp352@googlemail.com, Ties.Stuij@arm.com
</td>
</tr></table>
<p>
<div>
<pre>aarch64 zero extends any writes to D registers.
One pattern that should be recognized is vcombine(x, 0).
In LLVM IR, this shows up as
shufflevector <1 x i64> %x, <1 x i64> zeroinitializer, <2 x i32> <i32 0, i32
1>
This shouldn't do anything as long as the register was explicitly written to
(e.g. not vcombine(vget_low(x), 0)).
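
For contrast, a hypothetical counterexample (function name invented for
illustration) where the shuffle is not free: vget_low does not itself write
a D register, so the upper half of the Q register holding x may still
contain live data:

#include <arm_neon.h>

uint64x2_t clear_high_u64(uint64x2_t x)
{
    /* vget_low_u64 is only a subregister read; nothing has written a
       D register here, so the zeroing must be kept. */
    return vcombine_u64(vget_low_u64(x), vdup_n_u64(0));
}
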
So for example:

uint64x2_t _mm_cvtsi64_si128(uint64_t x)
{
    return vcombine_u64(vdup_n_u64(x), vdup_n_u64(0));
}

This becomes the following IR:

define <2 x i64> @_mm_cvtsi64_si128(i64 %x) {
  %vdup = insertelement <1 x i64> undef, i64 %x, i32 0
  %vcombine = shufflevector <1 x i64> %vdup, <1 x i64> zeroinitializer, <2 x i32> <i32 0, i32 1>
  ret <2 x i64> %vcombine
}

The optimal code is the following, which GCC 9.2.0 generates, and which
Clang also generates when the value is a uint64x1_t:

_mm_cvtsi64_si128:
    fmov    d0, x0
    ret

However, Clang generates this:

_mm_cvtsi64_si128:
    movi    v0.2d, #0000000000000000
    mov     v0.d[0], x0
    ret
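
For comparison, a minimal sketch of the uint64x1_t case mentioned above
(hypothetical function, assuming the same intrinsics): because only a D
register is written, the single fmov already comes out:

uint64x1_t cvtsi64_si64(uint64_t x)
{
    /* Writes only d0; the architecture zeroes the upper 64 bits of
       q0 for free, so this compiles to fmov d0, x0 / ret. */
    return vdup_n_u64(x);
}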

Similarly, for vectors:

uint32x4_t vaddq_low_u32(uint32x4_t a, uint32x2_t b)
{
    return vaddq_u32(a, vcombine_u32(b, vdup_n_u32(0)));
}

Expected:

vaddq_low_u32:
    mov     v1.8b, v1.8b            // zero extend
    add     v0.4s, v0.4s, v1.4s
    ret

Actual:

vaddq_low_u32:
    movi    v2.2d, #0000000000000000
    mov     v1.d[1], v2.d[0]
    add     v0.4s, v0.4s, v1.4s
    ret

As the function name hints, this pattern is already recognized by the x86_64
backend: compiling the same IR for x86_64 at -O3 yields the direct
equivalent.

_mm_cvtsi64_si128:
    movq    xmm0, rdi
    ret

It does not catch the second example, however, emitting this instead of a
single movq xmm1, xmm1:

vaddq_low_u32:
    xorps   xmm2, xmm2
    shufps  xmm1, xmm2, 232
    paddq   xmm0, xmm1
    ret
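
For reference, the movq xmm1, xmm1 expected above corresponds to the SSE2
intrinsic _mm_move_epi64; a minimal sketch of that primitive:

#include <emmintrin.h>

__m128i zext_low64(__m128i v)
{
    /* Zeroes the upper 64 bits of v; compiles to a single
       movq xmm0, xmm0. */
    return _mm_move_epi64(v);
}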

That is a different issue, though it would follow the same logic.</pre>
</div>
</p>
</body>
</html>