<html>
<head>
<base href="https://bugs.llvm.org/">
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link
bz_status_NEW "
title="NEW - X86ISelLowering prefers vperm2i128 to vinserti128 even when targeting Zen"
href="https://bugs.llvm.org/show_bug.cgi?id=50053">50053</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>X86ISelLowering prefers vperm2i128 to vinserti128 even when targeting Zen
</td>
</tr>
<tr>
<th>Product</th>
<td>libraries
</td>
</tr>
<tr>
<th>Version</th>
<td>11.0
</td>
</tr>
<tr>
<th>Hardware</th>
<td>PC
</td>
</tr>
<tr>
<th>OS</th>
<td>All
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>enhancement
</td>
</tr>
<tr>
<th>Priority</th>
<td>P
</td>
</tr>
<tr>
<th>Component</th>
<td>Backend: X86
</td>
</tr>
<tr>
<th>Assignee</th>
<td>unassignedbugs@nondot.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>tellowkrinkle@gmail.com
</td>
</tr>
<tr>
<th>CC</th>
<td>craig.topper@gmail.com, llvm-bugs@lists.llvm.org, llvm-dev@redking.me.uk, pengfei.wang@intel.com, spatel+llvm@rotateright.com
</td>
</tr></table>
<p>
<div>
<pre>The following LLVM IR:
define dso_local void @perm(<4 x i64>* nocapture %0, <4 x i64>* nocapture readonly %1) local_unnamed_addr #0 {
  %3 = load <4 x i64>, <4 x i64>* %1, align 32
  %4 = getelementptr inbounds <4 x i64>, <4 x i64>* %1, i64 1
  %5 = bitcast <4 x i64>* %4 to <2 x i64>*
  %6 = load <2 x i64>, <2 x i64>* %5, align 16
  %7 = getelementptr inbounds <2 x i64>, <2 x i64>* %5, i64 1
  %8 = load <2 x i64>, <2 x i64>* %7, align 16
  %9 = shufflevector <2 x i64> %6, <2 x i64> poison, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
  %10 = shufflevector <4 x i64> %3, <4 x i64> %9, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  store <4 x i64> %10, <4 x i64>* %0, align 32
  %11 = shufflevector <2 x i64> %8, <2 x i64> poison, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
  %12 = shufflevector <4 x i64> %11, <4 x i64> %3, <4 x i32> <i32 0, i32 1, i32 6, i32 7>
  %13 = getelementptr inbounds <4 x i64>, <4 x i64>* %0, i64 1
  store <4 x i64> %12, <4 x i64>* %13, align 32
  ret void
}
Produces the following assembly:
vmovaps ymm0, ymmword ptr [rsi]
vmovaps xmm1, xmmword ptr [rsi + 32]
vmovaps xmm2, xmmword ptr [rsi + 48]
vperm2f128 ymm1, ymm0, ymm1, 32 # ymm1 = ymm0[0,1],ymm1[0,1]
        vblendps        ymm0, ymm2, ymm0, 240           # ymm0 = ymm2[0,1,2,3],ymm0[4,5,6,7]
vmovaps ymmword ptr [rdi], ymm1
vmovaps ymmword ptr [rdi + 32], ymm0
vzeroupper
ret
Expected:
Instead of the vmovaps + vperm2f128,
vinsertf128 ymm1, ymm0, xmmword ptr [rsi + 32], 1
Godbolt Link (C++): <a href="https://gcc.godbolt.org/z/68M7M5xaG">https://gcc.godbolt.org/z/68M7M5xaG</a>
The choice of vperm2f128 is much slower on Zen/Zen+ because those processors
split 256-bit operations into 128-bit halves internally: vperm2f128 reads all
four 128-bit halves of its two source registers, which those cores handle
poorly, while vinsertf128 only touches two.
The change seems to be due to a check in X86ISelLowering.cpp lowerV2X128Shuffle
with the comment
<span class="quote">&gt; With AVX1, use vperm2f128 (below) to allow load folding. Otherwise,
&gt; this will likely become vinsertf128 which can't fold a 256-bit memop.</span>
Interestingly, in this case that heuristic instead prevented the folding of a 128-bit memop</pre>
</div>
</p>
<hr>
<span>You are receiving this mail because:</span>
<ul>
<li>You are on the CC list for the bug.</li>
</ul>
</body>
</html>