<html>
<head>
<base href="https://bugs.llvm.org/">
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link
bz_status_NEW "
title="NEW - Failure to widen loads from wider dereferenceable pointers"
href="https://bugs.llvm.org/show_bug.cgi?id=51091">51091</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>Failure to widen loads from wider dereferenceable pointers
</td>
</tr>
<tr>
<th>Product</th>
<td>libraries
</td>
</tr>
<tr>
<th>Version</th>
<td>trunk
</td>
</tr>
<tr>
<th>Hardware</th>
<td>PC
</td>
</tr>
<tr>
<th>OS</th>
<td>Windows NT
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>enhancement
</td>
</tr>
<tr>
<th>Priority</th>
<td>P
</td>
</tr>
<tr>
<th>Component</th>
<td>Backend: X86
</td>
</tr>
<tr>
<th>Assignee</th>
<td>unassignedbugs@nondot.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>llvm-dev@redking.me.uk
</td>
</tr>
<tr>
<th>CC</th>
<td>a.bataev@hotmail.com, craig.topper@gmail.com, lebedev.ri@gmail.com, llvm-bugs@lists.llvm.org, llvm-dev@redking.me.uk, pengfei.wang@intel.com, spatel+llvm@rotateright.com
</td>
</tr></table>
<p>
<div>
<pre>If we have generated this:
define float @Dot01(float* dereferenceable(16) %a0, float* dereferenceable(16)
%a1) {
  %bcx01 = bitcast float* %a0 to <2 x float>*
  %bcy01 = bitcast float* %a1 to <2 x float>*
  %x01 = load <2 x float>, <2 x float>* %bcx01, align 4
  %y01 = load <2 x float>, <2 x float>* %bcy01, align 4
  %mul01 = fmul <2 x float> %x01, %y01
  %mul0 = extractelement <2 x float> %mul01, i32 0
  %mul1 = extractelement <2 x float> %mul01, i32 1
  %dot01 = fadd float %mul0, %mul1
  ret float %dot01
}

llc -mcpu=znver2:
Dot01:
vmovsd (%rdi), %xmm0 # xmm0 = mem[0],zero
vmovsd (%rsi), %xmm1 # xmm1 = mem[0],zero
vmulps %xmm1, %xmm0, %xmm0
vmovshdup %xmm0, %xmm1 # xmm1 = xmm0[1,1,3,3]
vaddss %xmm1, %xmm0, %xmm0
retq

Since we know that the pointers at (%rdi) and (%rsi) are both dereferenceable
across the full 16 bytes, why can't we widen these to full <4 x float> loads?
That would let us fold at least one of the loads into the multiply:

Dot01:
vmovups (%rdi), %xmm0
vmulps (%rsi), %xmm0, %xmm0
vmovshdup %xmm0, %xmm1 # xmm1 = xmm0[1,1,3,3]
vaddss %xmm1, %xmm0, %xmm0
retq

(I'm not sure whether we can perform this generally in the DAG or in
VectorCombine, or whether we should just handle it inside x86's
EltsFromConsecutiveLoads or something similar.)</pre>
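<p>For reference, a hand-written sketch (not verified compiler output) of the widened IR such a transform might produce; the function name Dot01Widened is illustrative. Reading the upper lanes is safe because both pointers carry dereferenceable(16), and the extra fmul lanes cannot trap under default LLVM FP semantics:</p>
<pre>define float @Dot01Widened(float* dereferenceable(16) %a0, float* dereferenceable(16) %a1) {
  ; Widen both loads to the full 16 dereferenceable bytes.
  %bcx = bitcast float* %a0 to <4 x float>*
  %bcy = bitcast float* %a1 to <4 x float>*
  %x = load <4 x float>, <4 x float>* %bcx, align 4
  %y = load <4 x float>, <4 x float>* %bcy, align 4
  ; Multiply all four lanes; only lanes 0 and 1 are used below.
  %mul = fmul <4 x float> %x, %y
  %mul0 = extractelement <4 x float> %mul, i32 0
  %mul1 = extractelement <4 x float> %mul, i32 1
  %dot01 = fadd float %mul0, %mul1
  ret float %dot01
}</pre>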
</div>
</p>
<hr>
<span>You are receiving this mail because:</span>
<ul>
<li>You are on the CC list for the bug.</li>
</ul>
</body>
</html>