<html>
<head>
<base href="https://bugs.llvm.org/">
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link
bz_status_NEW "
title="NEW - [X86][AVX] Prefer VPSRAV to VPSRA style shifts for known splats"
href="https://bugs.llvm.org/show_bug.cgi?id=40077">40077</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>[X86][AVX] Prefer VPSRAV to VPSRA style shifts for known splats
</td>
</tr>
<tr>
<th>Product</th>
<td>libraries
</td>
</tr>
<tr>
<th>Version</th>
<td>trunk
</td>
</tr>
<tr>
<th>Hardware</th>
<td>PC
</td>
</tr>
<tr>
<th>OS</th>
<td>Windows NT
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>enhancement
</td>
</tr>
<tr>
<th>Priority</th>
<td>P
</td>
</tr>
<tr>
<th>Component</th>
<td>Backend: X86
</td>
</tr>
<tr>
<th>Assignee</th>
<td>unassignedbugs@nondot.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>llvm-dev@redking.me.uk
</td>
</tr>
<tr>
<th>CC</th>
<td>andrea.dibiagio@gmail.com, craig.topper@gmail.com, llvm-bugs@lists.llvm.org, llvm-dev@redking.me.uk, peter@cordes.ca, spatel+llvm@rotateright.com
</td>
</tr></table>
<p>
<div>
<pre>As detailed on <a href="https://reviews.llvm.org/rL340813">https://reviews.llvm.org/rL340813</a>, many recent machines have
better throughput for the 'per-element' variable vector shifts than the old
style 'scalar-count-in-xmm' variable shifts if we know that the shift amount is
already splatted:

Probably the wrong place to report this, but I looked at some other sequences:

; AVX-LABEL: splatvar_shift_v4i32:
; AVX:       # %bb.0:
; AVX-NEXT:    vpmovzxdq {{.*#+}} xmm1 = xmm1[0],zero,xmm1[1],zero  # 1 uop / 1c latency
; AVX-NEXT:    vpsrad %xmm1, %xmm0, %xmm0                           # 2 uops / 2c latency on Intel since Haswell at least
; AVX-NEXT:    retq

For Skylake, variable shifts (vpsravd) are a single uop, but count-in-xmm shifts
are 2 uops; they're probably implemented internally as a broadcast feeding the
SIMD variable-shift hardware. The sequence above is 3 uops / 3c latency on SKL.
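
For reference, a rough source-level equivalent of the current sequence (just an
illustrative sketch using intrinsics, not taken from the testcase; the function
name is made up) would be something like:

    #include &lt;immintrin.h&gt;

    // Current lowering style: zero-extend the low dword of the count and use
    // the count-in-xmm shift (roughly vpmovzxdq + vpsrad).
    __m128i ashr_splatvar_count_in_xmm(__m128i v, __m128i amt) {
      __m128i cnt = _mm_cvtepu32_epi64(amt); // amt[0] zero-extended into the low qword
      return _mm_sra_epi32(v, cnt);          // all lanes shifted by the low 64 bits of cnt
    }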

So for AVX2 Skylake (but not Broadwell or earlier) we want this 2 uop / 2c
latency implementation:

vpbroadcastd %xmm1, %xmm1    # xmm1 = xmm1[0,0,0,0]      1 uop / 1c latency
vpsravd %xmm1, %xmm0, %xmm0  # 1 uop / 1c latency on SKL

(3 / 3 on BDW and earlier.)
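
A matching sketch of the preferred form at the source level (again only
illustrative intrinsics, assuming AVX2 is available):

    #include &lt;immintrin.h&gt;

    // Preferred on SKL and later: broadcast the count and use the per-element
    // shift (roughly vpbroadcastd + vpsravd).
    __m128i ashr_splatvar_variable(__m128i v, __m128i amt) {
      __m128i splat = _mm_broadcastd_epi32(amt); // amt[0] in all 4 dword lanes
      return _mm_srav_epi32(v, splat);           // per-element arithmetic shift
    }
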
Same for SKX AVX512 with vpsravw and so on. There are some test cases where we
use the same shift-count register multiple times, and it would be significantly
better to broadcast it and use variable-shifts instead of
count-from-the-low-element shifts.

But on Ryzen, and on Broadwell and earlier, variable shifts cost more.
(Interestingly, on Ryzen they run on a different execution port from the normal
count-in-xmm shifts; still a single uop (per lane) but 3c latency and not fully
pipelined. Ryzen's count-in-xmm shifts are as efficient as immediate shifts,
unlike Intel, where count-in-xmm is always 2 uops (port5 + shift port).)

KNL is horrible for pslld xmm,xmm (13c throughput/latency), but it has the same
throughput as the immediate form for variable shifts like VPSRLVD z,z,z. I don't
totally trust Agner's numbers for the x,x shifts; maybe he only used the non-VEX
encoding?

Anyway, for AVX512 we should prefer broadcast + variable-shift instead of
pmovzxbq/pmovzxwq + a regular count-in-xmm shift, because it's better on SKX and
at least as good on KNL. This includes 16-bit elements for AVX512BW, unlike
AVX2.
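
For the 16-bit element case the same idea at the source level would be roughly
(hypothetical sketch; _mm_srav_epi16 needs AVX512BW + AVX512VL):

    #include &lt;immintrin.h&gt;

    // Broadcast the word count and use vpsravw instead of pmovzxwq + a
    // count-in-xmm vpsraw.
    __m128i ashr_splatvar_variable_v8i16(__m128i v, __m128i amt) {
      __m128i splat = _mm_broadcastw_epi16(amt); // amt[0] in all 8 word lanes
      return _mm_srav_epi16(v, splat);           // per-element arithmetic shift (vpsravw)
    }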

(With AVX1 we don't have variable shifts, so the earlier implementation with
vpsrad is our best option.)</pre>
</div>
</p>
<hr>
<span>You are receiving this mail because:</span>
<ul>
<li>You are on the CC list for the bug.</li>
</ul>
</body>
</html>