[llvm-bugs] [Bug 46621] New: [X86][AVX] Avoid vpsrlq $32, %xmm, %xmm split in v4u64 uint2fp v4f64
via llvm-bugs
llvm-bugs at lists.llvm.org
Tue Jul 7 07:41:47 PDT 2020
https://bugs.llvm.org/show_bug.cgi?id=46621
Bug ID: 46621
Summary: [X86][AVX] Avoid vpsrlq $32, %xmm, %xmm split in v4u64
uint2fp v4f64
Product: libraries
Version: trunk
Hardware: PC
OS: Windows NT
Status: NEW
Severity: enhancement
Priority: P
Component: Backend: X86
Assignee: unassignedbugs at nondot.org
Reporter: llvm-dev at redking.me.uk
CC: craig.topper at gmail.com, llvm-bugs at lists.llvm.org,
llvm-dev at redking.me.uk, spatel+llvm at rotateright.com
Codegen: https://godbolt.org/z/Kz72eo
For AVX1-only targets, the v4u64 -> v4f64 uint2fp conversion:
define <4 x double> @cvt_v4u64_v4f64(<4 x i64> %0) {
%2 = uitofp <4 x i64> %0 to <4 x double>
ret <4 x double> %2
}
cvt_v4u64_v4f64:
vxorps %xmm1, %xmm1, %xmm1
vblendps $170, %ymm1, %ymm0, %ymm1 # ymm1 = ymm0[0],ymm1[1],ymm0[2],ymm1[3],ymm0[4],ymm1[5],ymm0[6],ymm1[7]
vpsrlq $32, %xmm0, %xmm2
vextractf128 $1, %ymm0, %xmm0
vorps .LCPI0_0(%rip), %ymm1, %ymm1
vpsrlq $32, %xmm0, %xmm0
vinsertf128 $1, %xmm0, %ymm2, %ymm0
vorpd .LCPI0_1(%rip), %ymm0, %ymm0
vsubpd .LCPI0_2(%rip), %ymm0, %ymm0
vaddpd %ymm0, %ymm1, %ymm0
retq
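For reference, the generated sequence implements the usual two-constant
uint64 -> double trick. A minimal scalar sketch in C, assuming the standard
2^52/2^84 magic values (inferred from the standard method, not read back
from the actual .LCPI0_* constant pools):

#include <stdint.h>
#include <string.h>

double u64_to_f64(uint64_t x) {
  /* Plant the low 32 bits in the mantissa of 2^52 and the high 32 bits in
     the mantissa of 2^84; both results are exact doubles. */
  uint64_t lo_bits = (x & 0xFFFFFFFFull) | 0x4330000000000000ull; /* 2^52 + lo      */
  uint64_t hi_bits = (x >> 32)           | 0x4530000000000000ull; /* 2^84 + hi*2^32 */
  double lo, hi;
  memcpy(&lo, &lo_bits, sizeof lo);
  memcpy(&hi, &hi_bits, sizeof hi);
  /* Strip both biases (2^84 + 2^52) and recombine the halves:
     (2^84 + hi*2^32) - (2^84 + 2^52) + (2^52 + lo) == hi*2^32 + lo. */
  return (hi - 0x1.00000001p84) + lo;
}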
we should be able to avoid the vextractf128 + 2*vpsrlq split and instead
extract the high dwords with two 256-bit vshufps, reducing register pressure.
I'm not sure yet whether this is better handled in shuffle combining, in
TargetLowering::expandUINT_TO_FP, or in shift lowering.
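For illustration, a hedged AVX1 intrinsics sketch of the shuffle-based split
(the function name and exact shuffle masks are mine, not a proposed patch):

#include <immintrin.h>

__m256d cvt_v4u64_v4f64_shuf(__m256i v) {
  __m256 x = _mm256_castsi256_ps(v);
  __m256 zero = _mm256_setzero_ps();
  /* Low dwords: blend zeros into the odd dwords, then OR in the 2^52 bias. */
  __m256d lo = _mm256_or_pd(_mm256_castps_pd(_mm256_blend_ps(x, zero, 0xAA)),
                            _mm256_set1_pd(0x1.0p52));
  /* High dwords: per 128-bit lane [a1,a3,0,0] then [a1,0,a3,0] -- two
     256-bit vshufps replace the vextractf128 + 2*vpsrlq sequence. */
  __m256 t = _mm256_shuffle_ps(x, zero, _MM_SHUFFLE(0, 0, 3, 1));
  __m256 h = _mm256_shuffle_ps(t, t, _MM_SHUFFLE(3, 1, 2, 0));
  __m256d hi = _mm256_or_pd(_mm256_castps_pd(h), _mm256_set1_pd(0x1.0p84));
  /* (2^84 + hi*2^32) - (2^84 + 2^52) + (2^52 + lo) == hi*2^32 + lo. */
  hi = _mm256_sub_pd(hi, _mm256_set1_pd(0x1.00000001p84));
  return _mm256_add_pd(hi, lo);
}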