[llvm-bugs] [Bug 33276] New: [X86][SSE] Perform unsigned compare as signed when signbits are zero
via llvm-bugs
llvm-bugs at lists.llvm.org
Fri Jun 2 03:36:36 PDT 2017
https://bugs.llvm.org/show_bug.cgi?id=33276
Bug ID: 33276
Summary: [X86][SSE] Perform unsigned compare as signed when
signbits are zero
Product: libraries
Version: trunk
Hardware: PC
OS: Windows NT
Status: NEW
Severity: enhancement
Priority: P
Component: Backend: X86
Assignee: unassignedbugs at nondot.org
Reporter: llvm-dev at redking.me.uk
CC: llvm-bugs at lists.llvm.org, spatel+llvm at rotateright.com
SSE doesn't have an unsigned integer compare, so we have to flip the sign bits of both operands (XOR with 0x80000000) and use the signed compare instructions instead. But if we know that the sign bits of both values are zero, then we should be able to avoid the flip entirely:
llc -mtriple=x86_64-unknown -mcpu=btver2
define <4 x i32> @cmp_ugt_4i32(<4 x i32> %a0, <4 x i32> %a1) {
%1 = lshr <4 x i32> %a0, <i32 1, i32 1, i32 1, i32 1>
%2 = lshr <4 x i32> %a1, <i32 1, i32 1, i32 1, i32 1>
%3 = icmp ugt <4 x i32> %1, %2
%4 = sext <4 x i1> %3 to <4 x i32>
ret <4 x i32> %4
}
cmp_ugt_4i32:
vmovdqa .LCPI0_0(%rip), %xmm2 # xmm2 = [2147483648,2147483648,2147483648,2147483648]
vpsrld $1, %xmm0, %xmm0
vpsrld $1, %xmm1, %xmm1
vpxor %xmm2, %xmm1, %xmm1
vpxor %xmm2, %xmm0, %xmm0
vpcmpgtd %xmm1, %xmm0, %xmm0
retq
Ideally:
cmp_ugt_4i32:
vpsrld $1, %xmm0, %xmm0
vpsrld $1, %xmm1, %xmm1
vpcmpgtd %xmm1, %xmm0, %xmm0
retq
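
For reference, here is a scalar C sketch of the underlying equivalence (illustrative only, not the backend code; the function names are made up). The unsigned compare is normally lowered by XORing both operands with 0x80000000 and doing a signed compare; once the sign bits are known to be zero, the signed compare alone already gives the unsigned result.

#include <assert.h>
#include <stdint.h>

/* Current lowering: SSE only has a signed compare (PCMPGTD), so the
   unsigned compare is done by flipping the sign bit of each operand. */
static int ugt_via_signbit_flip(uint32_t a, uint32_t b) {
    int32_t sa = (int32_t)(a ^ 0x80000000u);
    int32_t sb = (int32_t)(b ^ 0x80000000u);
    return sa > sb;                  /* same result as unsigned a > b */
}

/* Proposed shortcut: after a logical shift right by 1 the sign bit of
   both inputs is zero, so the flip is a no-op and a plain signed
   compare already computes the unsigned result. */
static int ugt_signbits_known_zero(uint32_t a, uint32_t b) {
    uint32_t ha = a >> 1;
    uint32_t hb = b >> 1;
    return (int32_t)ha > (int32_t)hb;
}

int main(void) {
    assert(ugt_via_signbit_flip(7u, 3u) == 1);
    assert(ugt_signbits_known_zero(7u, 3u) == ((7u >> 1) > (3u >> 1)));
    return 0;
}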