<html>
<head>
<base href="http://llvm.org/bugs/" />
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link bz_status_NEW"
title="NEW --- - Inefficient code for load + mask"
href="http://llvm.org/bugs/show_bug.cgi?id=16009">16009</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>Inefficient code for load + mask
</td>
</tr>
<tr>
<th>Product</th>
<td>libraries
</td>
</tr>
<tr>
<th>Version</th>
<td>trunk
</td>
</tr>
<tr>
<th>Hardware</th>
<td>PC
</td>
</tr>
<tr>
<th>OS</th>
<td>Linux
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>normal
</td>
</tr>
<tr>
<th>Priority</th>
<td>P
</td>
</tr>
<tr>
<th>Component</th>
<td>Scalar Optimizations
</td>
</tr>
<tr>
<th>Assignee</th>
<td>unassignedbugs@nondot.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>silvas@purdue.edu
</td>
</tr>
<tr>
<th>CC</th>
<td>llvmbugs@cs.uiuc.edu
</td>
</tr>
<tr>
<th>Classification</th>
<td>Unclassified
</td>
</tr></table>
<p>
<div>
<pre>Both of these examples generate inefficient code. At least a part of the issue
seems to be that we fail to reduce an i32 load masked with 0xFF to a byte load
with zext.
/****** Case 1 ******/
extern void use(int);
void test_pointless_moving_and_masking(int *p) {
use(p[0] & 0xFF);
}
IR:
define void @test_pointless_moving_and_masking(i32* nocapture %p) #0 {
%1 = load i32* %p, align 4, !tbaa !0
%2 = and i32 %1, 255
tail call void @use(i32 %2) #2
ret void
}
Our code is shockingly bad:

0000000000000000 <test_pointless_moving_and_masking>:
   0:  b8 ff 00 00 00           mov    eax,0xff
   5:  23 07                    and    eax,DWORD PTR [rdi]
   7:  89 c7                    mov    edi,eax
   9:  e9 00 00 00 00           jmp    e <test_pointless_moving_and_masking+0xe>

GCC (4.7.3) produces far superior code:

0000000000000000 <test_pointless_moving_and_masking>:
   0:  0f b6 3f                 movzx  edi,BYTE PTR [rdi]
   3:  e9 00 00 00 00           jmp    8 <test_pointless_moving_and_masking+0x8>
/****** Case 2 ******/

extern int *arr[0xFF];
extern void use_intp(int *);
void test_pointless_zext(int *p) {
  use_intp(arr[p[0] & 0xFF]);
}

IR:

define void @test_pointless_zext(i32* nocapture %p) #0 {
  %1 = load i32* %p, align 4, !tbaa !0
  %2 = and i32 %1, 255
  %3 = zext i32 %2 to i64
  %4 = getelementptr inbounds [255 x i32*]* @arr, i64 0, i64 %3
  %5 = load i32** %4, align 8, !tbaa !3
  tail call void @use_intp(i32* %5) #2
  ret void
}
Our code does a full dword load followed by a separate movzx:

0000000000000010 <test_pointless_zext>:
  10:  8b 07                    mov    eax,DWORD PTR [rdi]
  12:  0f b6 c0                 movzx  eax,al
  15:  48 8b 3c c5 00 00 00 00  mov    rdi,QWORD PTR [rax*8+0x0]
  1d:  e9 00 00 00 00           jmp    22 <test_pointless_zext+0x12>

GCC (4.7.3) directly does a movzx with a memory operand rather than a dword load
+ movzx:

0000000000000010 <test_pointless_zext>:
  10:  0f b6 07                 movzx  eax,BYTE PTR [rdi]
  13:  48 8b 3c c5 00 00 00 00  mov    rdi,QWORD PTR [rax*8+0x0]
  1b:  e9 00 00 00 00           jmp    20 <test_pointless_zext+0x10>

Tested with clang built from trunk on Apr 20, 2013 (this is all with -O2
and -O3).</pre>
</div>
</p>
<hr>
<span>You are receiving this mail because:</span>
<ul>
<li>You are on the CC list for the bug.</li>
</ul>
</body>
</html>