[llvm-bugs] [Bug 50053] New: X86ISelLowering prefers vperm2i128 to vinserti128 even when targeting Zen
via llvm-bugs
llvm-bugs at lists.llvm.org
Wed Apr 21 00:29:45 PDT 2021
https://bugs.llvm.org/show_bug.cgi?id=50053
Bug ID: 50053
Summary: X86ISelLowering prefers vperm2i128 to vinserti128 even
when targeting Zen
Product: libraries
Version: 11.0
Hardware: PC
OS: All
Status: NEW
Severity: enhancement
Priority: P
Component: Backend: X86
Assignee: unassignedbugs at nondot.org
Reporter: tellowkrinkle at gmail.com
CC: craig.topper at gmail.com, llvm-bugs at lists.llvm.org,
llvm-dev at redking.me.uk, pengfei.wang at intel.com,
spatel+llvm at rotateright.com
The following LLVM IR:
define dso_local void @perm(<4 x i64>* nocapture %0, <4 x i64>* nocapture readonly %1) local_unnamed_addr #0 {
  %3 = load <4 x i64>, <4 x i64>* %1, align 32
  %4 = getelementptr inbounds <4 x i64>, <4 x i64>* %1, i64 1
  %5 = bitcast <4 x i64>* %4 to <2 x i64>*
  %6 = load <2 x i64>, <2 x i64>* %5, align 16
  %7 = getelementptr inbounds <2 x i64>, <2 x i64>* %5, i64 1
  %8 = load <2 x i64>, <2 x i64>* %7, align 16
  %9 = shufflevector <2 x i64> %6, <2 x i64> poison, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
  %10 = shufflevector <4 x i64> %3, <4 x i64> %9, <4 x i32> <i32 0, i32 1, i32 4, i32 5>
  store <4 x i64> %10, <4 x i64>* %0, align 32
  %11 = shufflevector <2 x i64> %8, <2 x i64> poison, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
  %12 = shufflevector <4 x i64> %11, <4 x i64> %3, <4 x i32> <i32 0, i32 1, i32 6, i32 7>
  %13 = getelementptr inbounds <4 x i64>, <4 x i64>* %0, i64 1
  store <4 x i64> %12, <4 x i64>* %13, align 32
  ret void
}
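For reference, a C++ intrinsics version that lowers to essentially the same IR (a sketch only; the actual source is behind the Godbolt link below, and the function and variable names here are illustrative):

#include <immintrin.h>

// Sketch: reproduces the IR above. Compile with -mavx2.
void perm(__m256i *out, const __m256i *in) {
    __m256i a = _mm256_load_si256(in);                  // in[0..3]
    const __m128i *half = reinterpret_cast<const __m128i *>(in + 1);
    __m128i lo = _mm_load_si128(half);                  // in[4..5]
    __m128i hi = _mm_load_si128(half + 1);              // in[6..7]
    // out[0] = { a[0], a[1], lo[0], lo[1] }
    _mm256_store_si256(out, _mm256_inserti128_si256(a, lo, 1));
    // out[1] = { hi[0], hi[1], a[2], a[3] }
    __m256i b = _mm256_inserti128_si256(_mm256_castsi128_si256(hi),
                                        _mm256_extracti128_si256(a, 1), 1);
    _mm256_store_si256(out + 1, b);
}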
This IR produces the following assembly:
vmovaps ymm0, ymmword ptr [rsi]
vmovaps xmm1, xmmword ptr [rsi + 32]
vmovaps xmm2, xmmword ptr [rsi + 48]
vperm2f128 ymm1, ymm0, ymm1, 32 # ymm1 = ymm0[0,1],ymm1[0,1]
vblendps ymm0, ymm2, ymm0, 240 # ymm0 = ymm2[0,1,2,3],ymm0[4,5,6,7]
vmovaps ymmword ptr [rdi], ymm1
vmovaps ymmword ptr [rdi + 32], ymm0
vzeroupper
ret
Expected:
Instead of the second vmovaps plus the vperm2f128, a single insert with a folded load:
vinsertf128 ymm1, ymm0, xmmword ptr [rsi + 32], 1
Godbolt Link (C++): https://gcc.godbolt.org/z/68M7M5xaG
The choice of vperm2f128 is much slower on Zen/Zen+ because those processors
execute AVX by splitting 256-bit operations into 128-bit halves internally:
vperm2f128 selects among four 128-bit halves' worth of data, which the
hardware handles poorly (it is microcoded into several micro-ops), whereas
vinsertf128 maps cleanly onto the 128-bit datapath.
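To make the cost concrete, here is a latency-bound microbenchmark sketch (the iteration count, the dependency-chain approach, and the empty GNU inline-asm barrier are illustrative choices, not part of this report); on Zen/Zen+ the vperm2i128 chain should run markedly slower than the vinserti128 chain:

#include <immintrin.h>
#include <chrono>
#include <cstdio>

// Compile with -O2 -mavx2. Each loop is a serial dependency chain,
// so it measures shuffle latency rather than throughput.
int main(int argc, char **) {
    __m256i v = _mm256_setr_epi64x(argc, argc + 1, argc + 2, argc + 3);
    __m128i x = _mm_set1_epi64x(argc);
    const int N = 100000000;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i) {
        v = _mm256_permute2x128_si256(v, v, 0x21); // swap 128-bit halves
        asm volatile("" : "+x"(v));                // keep the chain alive
    }
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i) {
        v = _mm256_inserti128_si256(v, x, 1);      // replace the high half
        asm volatile("" : "+x"(v));
    }
    auto t2 = std::chrono::steady_clock::now();

    printf("sink: %lld\n", (long long)_mm256_extract_epi64(v, 0));
    printf("vperm2i128:  %lld ms\n", (long long)
        std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count());
    printf("vinserti128: %lld ms\n", (long long)
        std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count());
}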
The choice appears to come from a check in lowerV2X128Shuffle in
X86ISelLowering.cpp, with the comment
> With AVX1, use vperm2f128 (below) to allow load folding. Otherwise,
> this will likely become vinsertf128 which can't fold a 256-bit memop.
Interestingly, the heuristic backfires here: the vinsertf128 form would have
folded the 128-bit load from [rsi + 32] directly, so the choice prevented the
folding of a 128-bit memop instead of enabling a 256-bit fold.
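The guard in question looks roughly like this (paraphrased from an LLVM 11-era X86ISelLowering.cpp; not a verbatim excerpt, and the surrounding conditions differ between releases):

// In lowerV2X128Shuffle, roughly:
// With AVX1, use vperm2f128 (below) to allow load folding. Otherwise,
// this will likely become vinsertf128 which can't fold a 256-bit memop.
if (!isa<LoadSDNode>(peekThroughBitcasts(V1))) {
  // ... build an INSERT_SUBVECTOR, which selects to vinsert[fi]128 ...
}
// ... otherwise fall through and emit X86ISD::VPERM2X128 ...

Presumably a fix would also consult the subtarget here, since on Zen the vperm2f128 form costs more than a separate load plus vinsertf128.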