[llvm-bugs] [Bug 38226] New: [X86][AVX] Improve vXi64 to vXf64 ISD::UINT_TO_FP lowering
via llvm-bugs
llvm-bugs at lists.llvm.org
Thu Jul 19 08:49:14 PDT 2018
https://bugs.llvm.org/show_bug.cgi?id=38226
Bug ID: 38226
Summary: [X86][AVX] Improve vXi64 to vXf64 ISD::UINT_TO_FP
lowering
Product: libraries
Version: trunk
Hardware: PC
OS: Windows NT
Status: NEW
Severity: enhancement
Priority: P
Component: Backend: X86
Assignee: unassignedbugs at nondot.org
Reporter: llvm-dev at redking.me.uk
CC: andrea.dibiagio at gmail.com, craig.topper at gmail.com,
lebedev.ri at gmail.com, llvm-bugs at lists.llvm.org,
spatel+llvm at rotateright.com
We support i64 to f64 ISD::UINT_TO_FP lowering, which is performed with packed
SSE instructions (both packed elements are used to convert the lower/upper 32
bits of each i64):
define <2 x double> @uitofp_2i64_to_2f64(<2 x i64> %a) {
; SSE-LABEL: uitofp_2i64_to_2f64:
; SSE: # %bb.0:
; SSE-NEXT: movdqa {{.*#+}} xmm1 = [1127219200,1160773632,0,0]
; SSE-NEXT: pshufd {{.*#+}} xmm2 = xmm0[2,3,0,1]
; SSE-NEXT: punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
; SSE-NEXT: movapd {{.*#+}} xmm3 = [4.503600e+15,1.934281e+25]
; SSE-NEXT: subpd %xmm3, %xmm0
; SSE-NEXT: pshufd {{.*#+}} xmm4 = xmm0[2,3,0,1]
; SSE-NEXT: addpd %xmm4, %xmm0
; SSE-NEXT: punpckldq {{.*#+}} xmm2 = xmm2[0],xmm1[0],xmm2[1],xmm1[1]
; SSE-NEXT: subpd %xmm3, %xmm2
; SSE-NEXT: pshufd {{.*#+}} xmm1 = xmm2[2,3,0,1]
; SSE-NEXT: addpd %xmm2, %xmm1
; SSE-NEXT: unpcklpd {{.*#+}} xmm0 = xmm0[0],xmm1[0]
; SSE-NEXT: retq
;
; VEX-LABEL: uitofp_2i64_to_2f64:
; VEX: # %bb.0:
; VEX-NEXT: vmovapd {{.*#+}} xmm1 = [1127219200,1160773632,0,0]
; VEX-NEXT: vunpcklps {{.*#+}} xmm2 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
; VEX-NEXT: vmovapd {{.*#+}} xmm3 = [4.503600e+15,1.934281e+25]
; VEX-NEXT: vsubpd %xmm3, %xmm2, %xmm2
; VEX-NEXT: vpermilps {{.*#+}} xmm0 = xmm0[2,3,0,1]
; VEX-NEXT: vunpcklps {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
; VEX-NEXT: vsubpd %xmm3, %xmm0, %xmm0
; VEX-NEXT: vhaddpd %xmm0, %xmm2, %xmm0
; VEX-NEXT: retq
%cvt = uitofp <2 x i64> %a to <2 x double>
ret <2 x double> %cvt
}
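For reference, the per-element arithmetic that this lowering performs can be
sketched in scalar C as follows (the helper name is illustrative only, not
part of LLVM):

#include <stdint.h>
#include <string.h>

/* Scalar sketch of the two-constant trick used above: tag the low and high
   32-bit halves with the exponent patterns for 2^52 and 2^84, subtract the
   biases (both subtractions are exact), then add the two partial results
   (the only rounding step). */
static double uint64_to_double_sketch(uint64_t x) {
    uint64_t lo_bits = 0x4330000000000000ULL | (x & 0xFFFFFFFFULL); /* 2^52 + lo      */
    uint64_t hi_bits = 0x4530000000000000ULL | (x >> 32);           /* 2^84 + hi*2^32 */
    double lo, hi;
    memcpy(&lo, &lo_bits, sizeof lo);
    memcpy(&hi, &hi_bits, sizeof hi);
    lo -= 4503599627370496.0;           /* 2^52 */
    hi -= 19342813113834066795298816.0; /* 2^84 */
    return hi + lo;
}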
define <4 x double> @uitofp_4i64_to_4f64(<4 x i64> %a) {
; VEX-LABEL: uitofp_4i64_to_4f64:
; VEX: # %bb.0:
; VEX-NEXT: vextractf128 $1, %ymm0, %xmm1
; VEX-NEXT: vmovapd {{.*#+}} xmm2 = [1127219200,1160773632,0,0]
; VEX-NEXT: vunpcklps {{.*#+}} xmm3 = xmm1[0],xmm2[0],xmm1[1],xmm2[1]
; VEX-NEXT: vmovapd {{.*#+}} xmm4 = [4.503600e+15,1.934281e+25]
; VEX-NEXT: vsubpd %xmm4, %xmm3, %xmm3
; VEX-NEXT: vpermilps {{.*#+}} xmm1 = xmm1[2,3,0,1]
; VEX-NEXT: vunpcklps {{.*#+}} xmm1 = xmm1[0],xmm2[0],xmm1[1],xmm2[1]
; VEX-NEXT: vsubpd %xmm4, %xmm1, %xmm1
; VEX-NEXT: vhaddpd %xmm1, %xmm3, %xmm1
; VEX-NEXT: vunpcklps {{.*#+}} xmm3 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]
; VEX-NEXT: vsubpd %xmm4, %xmm3, %xmm3
; VEX-NEXT: vpermilps {{.*#+}} xmm0 = xmm0[2,3,0,1]
; VEX-NEXT: vunpcklps {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]
; VEX-NEXT: vsubpd %xmm4, %xmm0, %xmm0
; VEX-NEXT: vhaddpd %xmm0, %xmm3, %xmm0
; VEX-NEXT: vinsertf128 $1, %xmm1, %ymm0, %ymm0
; VEX-NEXT: retq
%cvt = uitofp <4 x i64> %a to <4 x double>
ret <4 x double> %cvt
}
We should be able to use YMM registers to perform the v4i64 to v4f64
conversion more efficiently, and possibly even perform v2i64 to v2f64 in YMM
registers as well (one element per 128-bit lane).
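As a rough illustration of the YMM-based approach, something like the sketch
below keeps all four elements in one register (this assumes AVX2 for the
256-bit integer blend/shift/or; with plain AVX the integer half would still
need to be split into XMM halves, and the function name and instruction
selection here are only a sketch, not the proposed codegen):

#include <immintrin.h>

/* Sketch of a v4i64 -> v4f64 conversion kept entirely in YMM registers.
   Same two-constant trick: low halves get the 2^52 exponent, shifted high
   halves get the 2^84 exponent, then one subtract and one add. */
static __m256d uitofp_4i64_to_4f64_sketch(__m256i x) {
    const __m256i lo_exp = _mm256_set1_epi64x(0x4330000000000000LL); /* 2^52 */
    const __m256i hi_exp = _mm256_set1_epi64x(0x4530000000000000LL); /* 2^84 */
    /* Even dwords from x (the low 32 bits of each i64), odd dwords 0x43300000. */
    __m256i lo = _mm256_blend_epi32(x, lo_exp, 0xAA);
    /* High 32 bits of each i64, tagged with the 2^84 exponent. */
    __m256i hi = _mm256_or_si256(_mm256_srli_epi64(x, 32), hi_exp);
    /* Remove the combined bias (2^84 + 2^52), then add the two halves. */
    const __m256d bias = _mm256_set1_pd(19342813118337666422669312.0);
    __m256d f = _mm256_sub_pd(_mm256_castsi256_pd(hi), bias);
    return _mm256_add_pd(f, _mm256_castsi256_pd(lo));
}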