<html>
    <head>
      <base href="https://bugs.llvm.org/">
    </head>
    <body><table border="1" cellspacing="0" cellpadding="8">
        <tr>
          <th>Bug ID</th>
          <td><a class="bz_bug_link 
          bz_status_NEW "
   title="NEW - [X86][AVX] Avoid vpsrlq $32, %xmm, %xmm split in v4u64 uint2fp v4f64"
   href="https://bugs.llvm.org/show_bug.cgi?id=46621">46621</a>
          </td>
        </tr>

        <tr>
          <th>Summary</th>
          <td>[X86][AVX] Avoid vpsrlq $32, %xmm, %xmm split in v4u64 uint2fp v4f64
          </td>
        </tr>

        <tr>
          <th>Product</th>
          <td>libraries
          </td>
        </tr>

        <tr>
          <th>Version</th>
          <td>trunk
          </td>
        </tr>

        <tr>
          <th>Hardware</th>
          <td>PC
          </td>
        </tr>

        <tr>
          <th>OS</th>
          <td>Windows NT
          </td>
        </tr>

        <tr>
          <th>Status</th>
          <td>NEW
          </td>
        </tr>

        <tr>
          <th>Severity</th>
          <td>enhancement
          </td>
        </tr>

        <tr>
          <th>Priority</th>
          <td>P
          </td>
        </tr>

        <tr>
          <th>Component</th>
          <td>Backend: X86
          </td>
        </tr>

        <tr>
          <th>Assignee</th>
          <td>unassignedbugs@nondot.org
          </td>
        </tr>

        <tr>
          <th>Reporter</th>
          <td>llvm-dev@redking.me.uk
          </td>
        </tr>

        <tr>
          <th>CC</th>
          <td>craig.topper@gmail.com, llvm-bugs@lists.llvm.org, llvm-dev@redking.me.uk, spatel+llvm@rotateright.com
          </td>
        </tr></table>
      <p>
        <div>
        <pre>Codegen: <a href="https://godbolt.org/z/Kz72eo">https://godbolt.org/z/Kz72eo</a>

For AVX1 only targets, the v4u64 -> v4f64 uint2fp conversion:

define <4 x double> @cvt_v4u64_v4f64(<4 x i64> %0) {
  %2 = uitofp <4 x i64> %0 to <4 x double>
  ret <4 x double> %2
}

cvt_v4u64_v4f64:
  vxorps %xmm1, %xmm1, %xmm1
  vblendps $170, %ymm1, %ymm0, %ymm1 # ymm1 =
ymm0[0],ymm1[1],ymm0[2],ymm1[3],ymm0[4],ymm1[5],ymm0[6],ymm1[7]
  vpsrlq $32, %xmm0, %xmm2
  vextractf128 $1, %ymm0, %xmm0
  vorps .LCPI0_0(%rip), %ymm1, %ymm1
  vpsrlq $32, %xmm0, %xmm0
  vinsertf128 $1, %xmm0, %ymm2, %ymm0
  vorpd .LCPI0_1(%rip), %ymm0, %ymm0
  vsubpd .LCPI0_2(%rip), %ymm0, %ymm0
  vaddpd %ymm0, %ymm1, %ymm0
  retq

We should be able to avoid the vextractf128 + 2*vpsrlq split and perform this with two 256-bit
vshufps shuffles instead, reducing register pressure.

I'm not sure yet whether this is better handled in shuffle combining, in
TargetLowering::expandUINT_TO_FP, or in shift lowering.</pre>
        </div>
      </p>
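<p>For reference, here is a hedged scalar sketch of the double-magic-constant trick that the vector sequence above performs per lane (the constants correspond to the .LCPI0_* loads; the helper name and exact constants are my reconstruction, not taken from the LLVM output):</p>

```c
#include <stdint.h>
#include <string.h>

/* Scalar sketch of the u64 -> f64 conversion trick: split the integer into
 * 32-bit halves, bit-OR each half into the mantissa of a known power-of-two
 * double, then cancel the biases with FP arithmetic. Assumes round-to-nearest. */
double u64_to_f64(uint64_t x) {
    /* Low 32 bits in the mantissa of 2^52: the bit pattern
     * 0x43300000_xxxxxxxx reads back as the double 2^52 + lo32. */
    uint64_t lo_bits = (x & 0xFFFFFFFFULL) | 0x4330000000000000ULL;
    /* High 32 bits in the mantissa of 2^84: the bit pattern
     * 0x45300000_xxxxxxxx reads back as the double 2^84 + hi32 * 2^32. */
    uint64_t hi_bits = (x >> 32) | 0x4530000000000000ULL;
    double lo, hi;
    memcpy(&lo, &lo_bits, sizeof lo); /* bit-cast; no int->fp conversion */
    memcpy(&hi, &hi_bits, sizeof hi);
    /* Subtract both magic biases at once (2^84 + 2^52 == 0x1.00000001p84),
     * then fold in the low part; only the final add can round. */
    return (hi - 0x1.00000001p84) + lo;
}
```

In the AVX1 codegen above, the vblendps+vorps pair builds the low-half doubles, the vpsrlq+vorpd pair builds the high-half doubles, and vsubpd/vaddpd are the two arithmetic steps.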


      <hr>
      <span>You are receiving this mail because:</span>

      <ul>
          <li>You are on the CC list for the bug.</li>
      </ul>
    </body>
</html>