<html>
    <head>
      <base href="https://bugs.llvm.org/">
    </head>
    <body><table border="1" cellspacing="0" cellpadding="8">
        <tr>
          <th>Bug ID</th>
          <td><a class="bz_bug_link 
          bz_status_NEW "
   title="NEW - [X86][AVX] Improve vXi64 to vXf64 ISD::UINT_TO_FP lowering"
   href="https://bugs.llvm.org/show_bug.cgi?id=38226">38226</a>
          </td>
        </tr>

        <tr>
          <th>Summary</th>
          <td>[X86][AVX] Improve vXi64 to vXf64 ISD::UINT_TO_FP lowering
          </td>
        </tr>

        <tr>
          <th>Product</th>
          <td>libraries
          </td>
        </tr>

        <tr>
          <th>Version</th>
          <td>trunk
          </td>
        </tr>

        <tr>
          <th>Hardware</th>
          <td>PC
          </td>
        </tr>

        <tr>
          <th>OS</th>
          <td>Windows NT
          </td>
        </tr>

        <tr>
          <th>Status</th>
          <td>NEW
          </td>
        </tr>

        <tr>
          <th>Severity</th>
          <td>enhancement
          </td>
        </tr>

        <tr>
          <th>Priority</th>
          <td>P
          </td>
        </tr>

        <tr>
          <th>Component</th>
          <td>Backend: X86
          </td>
        </tr>

        <tr>
          <th>Assignee</th>
          <td>unassignedbugs@nondot.org
          </td>
        </tr>

        <tr>
          <th>Reporter</th>
          <td>llvm-dev@redking.me.uk
          </td>
        </tr>

        <tr>
          <th>CC</th>
          <td>andrea.dibiagio@gmail.com, craig.topper@gmail.com, lebedev.ri@gmail.com, llvm-bugs@lists.llvm.org, spatel+llvm@rotateright.com
          </td>
        </tr></table>
      <p>
        <div>
<pre>We support i64 to f64 ISD::UINT_TO_FP lowering, which is performed using packed
SSE instructions (both f64 elements of an XMM register are used to convert the
lower/upper 32 bits of each i64, which are then summed):

define <2 x double> @uitofp_2i64_to_2f64(<2 x i64> %a) {
; SSE-LABEL: uitofp_2i64_to_2f64:
; SSE:       # %bb.0:
; SSE-NEXT:    movdqa {{.*#+}} xmm1 = [1127219200,1160773632,0,0]
; SSE-NEXT:    pshufd {{.*#+}} xmm2 = xmm0[2,3,0,1]
; SSE-NEXT:    punpckldq {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
; SSE-NEXT:    movapd {{.*#+}} xmm3 = [4.503600e+15,1.934281e+25]
; SSE-NEXT:    subpd %xmm3, %xmm0
; SSE-NEXT:    pshufd {{.*#+}} xmm4 = xmm0[2,3,0,1]
; SSE-NEXT:    addpd %xmm4, %xmm0
; SSE-NEXT:    punpckldq {{.*#+}} xmm2 = xmm2[0],xmm1[0],xmm2[1],xmm1[1]
; SSE-NEXT:    subpd %xmm3, %xmm2
; SSE-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[2,3,0,1]
; SSE-NEXT:    addpd %xmm2, %xmm1
; SSE-NEXT:    unpcklpd {{.*#+}} xmm0 = xmm0[0],xmm1[0]
; SSE-NEXT:    retq
;
; VEX-LABEL: uitofp_2i64_to_2f64:
; VEX:       # %bb.0:
; VEX-NEXT:    vmovapd {{.*#+}} xmm1 = [1127219200,1160773632,0,0]
; VEX-NEXT:    vunpcklps {{.*#+}} xmm2 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
; VEX-NEXT:    vmovapd {{.*#+}} xmm3 = [4.503600e+15,1.934281e+25]
; VEX-NEXT:    vsubpd %xmm3, %xmm2, %xmm2
; VEX-NEXT:    vpermilps {{.*#+}} xmm0 = xmm0[2,3,0,1]
; VEX-NEXT:    vunpcklps {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
; VEX-NEXT:    vsubpd %xmm3, %xmm0, %xmm0
; VEX-NEXT:    vhaddpd %xmm0, %xmm2, %xmm0
; VEX-NEXT:    retq
  %cvt = uitofp <2 x i64> %a to <2 x double>
  ret <2 x double> %cvt
}
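
For reference, a scalar C sketch of the bit trick behind the magic constants
above (the 0x43300000/0x45300000 dwords and the 2^52 / 2^84 doubles); the
function name is illustrative only:

#include <stdint.h>
#include <string.h>

/* Illustrative scalar form of the lowering above: bias each 32-bit half
   into the mantissa of a double carrying a 2^52 / 2^84 exponent, subtract
   the magic values back out, then sum the two halves. */
static double uitofp_u64_scalar(uint64_t x) {
  uint64_t lo_bits = 0x4330000000000000ULL | (x & 0xffffffffULL); /* 2^52 + lo32 */
  uint64_t hi_bits = 0x4530000000000000ULL | (x >> 32);           /* 2^84 + hi32*2^32 */
  double lo, hi;
  memcpy(&lo, &lo_bits, sizeof(lo));
  memcpy(&hi, &hi_bits, sizeof(hi));
  return (hi - 0x1.0p84) + (lo - 0x1.0p52); /* == hi32*2^32 + lo32, one rounding */
}

The packed lowering above performs both subtractions with a single subpd and
the final sum with a shuffle + addpd (or vhaddpd).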

define <4 x double> @uitofp_4i64_to_4f64(<4 x i64> %a) {
; VEX-LABEL: uitofp_4i64_to_4f64:
; VEX:       # %bb.0:
; VEX-NEXT:    vextractf128 $1, %ymm0, %xmm1
; VEX-NEXT:    vmovapd {{.*#+}} xmm2 = [1127219200,1160773632,0,0]
; VEX-NEXT:    vunpcklps {{.*#+}} xmm3 = xmm1[0],xmm2[0],xmm1[1],xmm2[1]
; VEX-NEXT:    vmovapd {{.*#+}} xmm4 = [4.503600e+15,1.934281e+25]
; VEX-NEXT:    vsubpd %xmm4, %xmm3, %xmm3
; VEX-NEXT:    vpermilps {{.*#+}} xmm1 = xmm1[2,3,0,1]
; VEX-NEXT:    vunpcklps {{.*#+}} xmm1 = xmm1[0],xmm2[0],xmm1[1],xmm2[1]
; VEX-NEXT:    vsubpd %xmm4, %xmm1, %xmm1
; VEX-NEXT:    vhaddpd %xmm1, %xmm3, %xmm1
; VEX-NEXT:    vunpcklps {{.*#+}} xmm3 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]
; VEX-NEXT:    vsubpd %xmm4, %xmm3, %xmm3
; VEX-NEXT:    vpermilps {{.*#+}} xmm0 = xmm0[2,3,0,1]
; VEX-NEXT:    vunpcklps {{.*#+}} xmm0 = xmm0[0],xmm2[0],xmm0[1],xmm2[1]
; VEX-NEXT:    vsubpd %xmm4, %xmm0, %xmm0
; VEX-NEXT:    vhaddpd %xmm0, %xmm3, %xmm0
; VEX-NEXT:    vinsertf128 $1, %xmm1, %ymm0, %ymm0
; VEX-NEXT:    retq
  %cvt = uitofp <4 x i64> %a to <4 x double>
  ret <4 x double> %cvt
}
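
For comparison, the same magic-constant trick can be kept at full YMM width
with AVX2 intrinsics - an illustration only (the function name and exact
sequence are not from the backend; the 256-bit integer blend/shift/xor
require AVX2):

#include <immintrin.h>

/* Illustrative only: v4i64 -> v4f64 with the same 2^52 / 2^84 biasing
   trick as above, but performed in a single YMM register. */
static __m256d uitofp_4i64_to_4f64_avx2(__m256i v) {
  const __m256i magic_lo  = _mm256_set1_epi64x(0x4330000000000000LL); /* 2^52 */
  const __m256i magic_hi  = _mm256_set1_epi64x(0x4530000000000000LL); /* 2^84 */
  const __m256d magic_all = _mm256_set1_pd(0x1.0p84 + 0x1.0p52);      /* 2^84 + 2^52 */

  /* low 32 bits of each i64 under a 2^52 exponent (vpblendd) */
  __m256i v_lo = _mm256_blend_epi32(magic_lo, v, 0x55);
  /* high 32 bits of each i64 under a 2^84 exponent (vpsrlq + vpxor) */
  __m256i v_hi = _mm256_xor_si256(_mm256_srli_epi64(v, 32), magic_hi);

  /* (2^84 + hi*2^32) - (2^84 + 2^52) == hi*2^32 - 2^52 exactly, and adding
     (2^52 + lo) then gives hi*2^32 + lo with a single rounding. */
  __m256d hi_dbl = _mm256_sub_pd(_mm256_castsi256_pd(v_hi), magic_all);
  return _mm256_add_pd(hi_dbl, _mm256_castsi256_pd(v_lo));
}

This keeps everything in one YMM register and avoids the per-element
extract/unpack/insert round trips seen in the VEX output above.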

We should be able to use YMM registers to perform the v4i64 -> v4f64
conversion more efficiently (e.g. along the lines of the sketch above), and
possibly even perform v2i64 -> v2f64 using YMMs as well (one i64 element per
128-bit lane).</pre>
        </div>
      </p>


    </body>
</html>