[llvm-bugs] [Bug 33879] New: [X86] Missed optimizations for comparison of shift against zero
via llvm-bugs
llvm-bugs at lists.llvm.org
Fri Jul 21 09:34:14 PDT 2017
https://bugs.llvm.org/show_bug.cgi?id=33879
Bug ID: 33879
Summary: [X86] Missed optimizations for comparison of shift
against zero
Product: libraries
Version: trunk
Hardware: PC
OS: Windows NT
Status: NEW
Severity: enhancement
Priority: P
Component: Backend: X86
Assignee: unassignedbugs at nondot.org
Reporter: llvm-dev at redking.me.uk
CC: craig.topper at gmail.com, llvm-bugs at lists.llvm.org,
peter at cordes.ca, spatel+llvm at rotateright.com
int test(unsigned i, unsigned j, int a, int b) {
  if (0 == (i >> j))
    return a;
  return b;
}
define i32 @test(i32, i32, i32, i32) {
  %5 = lshr i32 %0, %1
  %6 = icmp eq i32 %5, 0
  %7 = select i1 %6, i32 %2, i32 %3
  ret i32 %7
}
llc -mcpu=btver2
test(unsigned int, unsigned int, int, int):
        movl    %ecx, %eax
        movl    %esi, %ecx
        shrl    %cl, %edi
        testl   %edi, %edi
        cmovnel %eax, %edx
        movl    %edx, %eax
        retq
1 - we could remove the final move if the CMOVNE were commuted to a CMOVE selecting into %eax directly
2 - why aren't we using the ZF result set by the SHR instead of emitting the extra TEST?
gcc gets (1) right, but still doesn't do (2):
test(unsigned int, unsigned int, int, int):
        movl    %ecx, %eax
        movl    %esi, %ecx
        shrl    %cl, %edi
        testl   %edi, %edi
        cmove   %edx, %eax
        ret
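Combining (1) and (2) would give something like the following. This is a hand-written sketch, not compiler output, and it assumes the ZF set by SHR can be reused; note that is only safe when the variable count is known to be non-zero, since an x86 shift by a count of zero leaves EFLAGS unmodified:

```asm
test(unsigned int, unsigned int, int, int):
        movl    %ecx, %eax        # eax = b (free %ecx for the shift count)
        movl    %esi, %ecx        # cl = j
        shrl    %cl, %edi         # i >> j; sets ZF from the result when cl != 0
        cmovel  %edx, %eax        # if (i >> j) == 0, return a
        retq
```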
Also note that, more recently, we use the BMI2 shifts instead, which don't update
the flags at all.
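For reference, an illustrative sketch (hand-written, not verified llc output) of the BMI2 form; because SHRX never writes EFLAGS, the explicit TEST is genuinely unavoidable there:

```asm
test(unsigned int, unsigned int, int, int):
        shrxl   %esi, %edi, %esi  # esi = i >> j; EFLAGS not modified by SHRX
        movl    %ecx, %eax        # eax = b
        testl   %esi, %esi        # explicit compare required: SHRX sets no flags
        cmovel  %edx, %eax        # if (i >> j) == 0, return a
        retq
```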