<html>
<head>
<base href="https://bugs.llvm.org/">
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link
bz_status_NEW "
title="NEW - [X86][SSE] Lower icmp_ugt(x, pow2 - 1) -> icmp_ne(and(x, pow2 - 1), 0)"
href="https://bugs.llvm.org/show_bug.cgi?id=46590">46590</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>[X86][SSE] Lower icmp_ugt(x, pow2 - 1) -> icmp_ne(and(x, pow2 - 1), 0)
</td>
</tr>
<tr>
<th>Product</th>
<td>libraries
</td>
</tr>
<tr>
<th>Version</th>
<td>trunk
</td>
</tr>
<tr>
<th>Hardware</th>
<td>PC
</td>
</tr>
<tr>
<th>OS</th>
<td>Windows NT
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>enhancement
</td>
</tr>
<tr>
<th>Priority</th>
<td>P
</td>
</tr>
<tr>
<th>Component</th>
<td>Backend: X86
</td>
</tr>
<tr>
<th>Assignee</th>
<td>unassignedbugs@nondot.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>llvm-dev@redking.me.uk
</td>
</tr>
<tr>
<th>CC</th>
<td>craig.topper@gmail.com, llvm-bugs@lists.llvm.org, llvm-dev@redking.me.uk, spatel+llvm@rotateright.com
</td>
</tr></table>
<p>
<div>
<pre>Vector unsigned comparisons currently lower to pcmpeq(pmaxu(x,y),x) (or pminu
for ult).
typedef unsigned int __v4su __attribute__((__vector_size__(16)));
__v4su cmp_ugt(__v4su x) {
return x > 0x7f;
}

define <4 x i32> @cmp_ugt(<4 x i32> %0) {
  %2 = icmp ugt <4 x i32> %0, <i32 127, i32 127, i32 127, i32 127>
  %3 = sext <4 x i1> %2 to <4 x i32>
  ret <4 x i32> %3
}

.LCPI0_0:
  .long 128 # 0x80
  .long 128 # 0x80
  .long 128 # 0x80
  .long 128 # 0x80
cmp_ugt:
  vpmaxud .LCPI0_0(%rip), %xmm0, %xmm1
  vpcmpeqd %xmm1, %xmm0, %xmm0
  retq
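
For reference, a minimal scalar sketch (not compiler output; names are
illustrative) of the identity the current lowering relies on:
x >u C  <=>  umax(x, C+1) == x, shown here for C = 127:

#include <assert.h>
static unsigned umax(unsigned a, unsigned b) { return a > b ? a : b; }
/* Illustrative check of the pmaxu-based lowering above. */
static void check_pmaxu_identity(unsigned x) {
  assert((x > 127u) == (umax(x, 128u) == x));
}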

But for cases where we're comparing against a pow2-1 mask we can lower to:

define <4 x i32> @cmp_ugt(<4 x i32> %0) {
  %2 = and <4 x i32> %0, <i32 -128, i32 -128, i32 -128, i32 -128> ; -128 == ~127
  %3 = icmp ne <4 x i32> %2, zeroinitializer
  %4 = sext <4 x i1> %3 to <4 x i32>
  ret <4 x i32> %4
}

.LCPI0_0:
  .long 4294967168 # 0xFFFFFF80
  .long 4294967168 # 0xFFFFFF80
  .long 4294967168 # 0xFFFFFF80
  .long 4294967168 # 0xFFFFFF80
cmp_ugt:
  vpand .LCPI0_0(%rip), %xmm0, %xmm0
  vpxor %xmm1, %xmm1, %xmm1
  vpcmpeqd %xmm1, %xmm0, %xmm0
  vpcmpeqd %xmm1, %xmm1, %xmm1
  vpxor %xmm1, %xmm0, %xmm0
  retq
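
Again a minimal scalar sketch (illustrative, not from the compiler) of the fold
being proposed: x >u (pow2 - 1)  <=>  (x & ~(pow2 - 1)) != 0, shown for pow2 = 128:

#include <assert.h>
/* Illustrative check of the proposed and+icmp_ne form. */
static void check_ugt_pow2m1_fold(unsigned x) {
  assert((x > 127u) == ((x & ~127u) != 0u));
}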

(instcombine already performs the inverse of this fold).

This is definitely better in cases where we can pull out the NOT afterwards:
reductions (movmsk/ptest cases), selections, etc.; a reduction sketch follows
below.
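
Illustrative reduction example (assumed shape, not taken from the report) where
the compare feeds an any-lane test, so the NOT of the cmpeq-with-zero can fold
into the scalar test instead of needing the extra vpcmpeqd/vpxor pair:

typedef unsigned int __v4su __attribute__((__vector_size__(16)));
int any_lane_ugt_127(__v4su x) {
  __v4su m = x > 0x7f;                      /* the icmp ugt discussed above */
  return (m[0] | m[1] | m[2] | m[3]) != 0;  /* movmsk/ptest-style reduction */
}
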
Noticed while looking at [<a class="bz_bug_link
bz_status_NEW "
title="NEW - Loop counting top bits set badly vectorized on X86"
href="show_bug.cgi?id=46053">Bug #46053</a>]</pre>
</div>
</p>
</body>
</html>