<html>
<head>
<base href="https://llvm.org/bugs/" />
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link
bz_status_NEW "
title="NEW --- - Lower shuffles with equivalent truncates as good as"
href="https://llvm.org/bugs/show_bug.cgi?id=31551">31551</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>Lower shuffles with equivalent truncates as good as
</td>
</tr>
<tr>
<th>Product</th>
<td>libraries
</td>
</tr>
<tr>
<th>Version</th>
<td>trunk
</td>
</tr>
<tr>
<th>Hardware</th>
<td>PC
</td>
</tr>
<tr>
<th>OS</th>
<td>Windows NT
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>normal
</td>
</tr>
<tr>
<th>Priority</th>
<td>P
</td>
</tr>
<tr>
<th>Component</th>
<td>Backend: X86
</td>
</tr>
<tr>
<th>Assignee</th>
<td>unassignedbugs@nondot.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>zvi.rackover@intel.com
</td>
</tr>
<tr>
<th>CC</th>
<td>llvm-bugs@lists.llvm.org
</td>
</tr>
<tr>
<th>Classification</th>
<td>Unclassified
</td>
</tr></table>
<p>
<div>
<pre>Spinning off from <a class="bz_bug_link
bz_status_NEW "
title="NEW --- - AVX-512 generates sub-optimal shuffles for byte vectors"
href="show_bug.cgi?id=31443">bug 31443</a>:
We should ensure that every 'shufflevector' that has an equivalent 'trunc' form
is lowered at least as well as the trunc.

A random example of two equivalent functions (a standalone check of the
equivalence appears after this report):
define void @shuffle_v16i16_to_v4i16(<16 x i16>* %L, <4 x i16>* %S) nounwind {
  %vec = load <16 x i16>, <16 x i16>* %L
  %strided.vec = shufflevector <16 x i16> %vec, <16 x i16> undef, <4 x i32> <i32 0, i32 4, i32 8, i32 12>
  store <4 x i16> %strided.vec, <4 x i16>* %S
  ret void
}

define void @trunc_v4i64_to_v4i16(<16 x i16>* %L, <4 x i16>* %S) nounwind {
  %vec = load <16 x i16>, <16 x i16>* %L
  %bc = bitcast <16 x i16> %vec to <4 x i64>
  %strided.vec = trunc <4 x i64> %bc to <4 x i16>
  store <4 x i16> %strided.vec, <4 x i16>* %S
  ret void
}

shuffle_v16i16_to_v4i16 is lowered to:

vmovdqa (%rdi), %ymm0
vextracti128 $1, %ymm0, %xmm1
vpshufd {{.*#+}} xmm1 = xmm1[0,2,2,3]
vpshuflw {{.*#+}} xmm1 = xmm1[0,2,2,3,4,5,6,7]
vpshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
vpshuflw {{.*#+}} xmm0 = xmm0[0,2,2,3,4,5,6,7]
vpunpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
vmovq %xmm0, (%rsi)

trunc_v4i64_to_v4i16 is lowered to (and can also be improved):

vmovdqa (%rdi), %ymm0
vpmovqd %zmm0, %ymm0
vpshufb {{.*#+}} xmm0 = xmm0[0,1,4,5,8,9,12,13,8,9,12,13,12,13,14,15]
vmovq %xmm0, (%rsi)</pre>
</div>
</p>
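<p>A minimal standalone sketch of the equivalence relied on above, written with
Clang vector extensions (the type and variable names are illustrative, not from
the report): on a little-endian target such as x86, keeping elements 0, 4, 8 and
12 of a 16 x i16 vector yields the same bytes as reinterpreting it as 4 x i64
and truncating each lane to i16.</p>
<pre>/* Hypothetical check, not part of the report. Build with: clang -O2 check.c */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint16_t v16u16 __attribute__((vector_size(32))); /* 16 x u16 */
typedef uint64_t v4u64  __attribute__((vector_size(32))); /*  4 x u64 */
typedef uint16_t v4u16  __attribute__((vector_size(8)));  /*  4 x u16 */

int main(void) {
  v16u16 vec;
  for (int i = 0; i < 16; ++i)
    vec[i] = (uint16_t)(0x1111u * i + 7);

  /* Shuffle form: keep every fourth element (indices 0, 4, 8, 12). */
  v4u16 shuf = __builtin_shufflevector(vec, vec, 0, 4, 8, 12);

  /* Trunc form: reinterpret the same bits as 4 x u64, then narrow each
     lane to u16, keeping the low 16 bits of every 64-bit element. */
  v4u64 wide;
  memcpy(&wide, &vec, sizeof wide);
  v4u16 trunc = __builtin_convertvector(wide, v4u16);

  puts(memcmp(&shuf, &trunc, sizeof shuf) == 0 ? "equivalent" : "different");
  return 0;
}</pre>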
<hr>
<span>You are receiving this mail because:</span>
<ul>
<li>You are on the CC list for the bug.</li>
</ul>
</body>
</html>