<html>
<head>
<base href="http://llvm.org/bugs/" />
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link
bz_status_NEW "
title="NEW --- - LLVM does a poor job of rematerializing address offsets which will fold into the addressing mode"
href="http://llvm.org/bugs/show_bug.cgi?id=22230">22230</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>LLVM does a poor job of rematerializing address offsets which will fold into the addressing mode
</td>
</tr>
<tr>
<th>Product</th>
<td>libraries
</td>
</tr>
<tr>
<th>Version</th>
<td>trunk
</td>
</tr>
<tr>
<th>Hardware</th>
<td>PC
</td>
</tr>
<tr>
<th>OS</th>
<td>All
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>normal
</td>
</tr>
<tr>
<th>Priority</th>
<td>P
</td>
</tr>
<tr>
<th>Component</th>
<td>Common Code Generator Code
</td>
</tr>
<tr>
<th>Assignee</th>
<td>unassignedbugs@nondot.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>djasper@google.com
</td>
</tr>
<tr>
<th>CC</th>
<td>llvmbugs@cs.uiuc.edu
</td>
</tr>
<tr>
<th>Classification</th>
<td>Unclassified
</td>
</tr></table>
<p>
<div>
<pre>A certain code pattern, where a function consists mostly of a large loop around a
switch or if-else chain, results in long live ranges for basic address
computations. The register allocator fails to rematerialize these address
computations into the addressing modes of memory operands and instead spills and
reloads them from the stack, dramatically increasing stack usage and generally
degrading the generated code.

Small reproduction:
struct A {
  unsigned a;
  unsigned b;
  unsigned c;
  unsigned d;
};

void assign(unsigned *val);

void f(unsigned char *input, A *a) {
  for (;;) {
    unsigned char t = (++input)[0];
    if (t == 0) {
      assign(&a->a);
    } else if (t == 1) {
      assign(&a->b);
    } else if (t == 2) {
      assign(&a->c);
    } else if (t == 3) {
      assign(&a->d);
    }
  }
}

Resulting in:
__Z1fPhP1A:                             ## @_Z1fPhP1A
        .cfi_startproc
## BB#0:                                ## %entry
        pushq   %rbp
Ltmp0:
        .cfi_def_cfa_offset 16
Ltmp1:
        .cfi_offset %rbp, -16
        movq    %rsp, %rbp
Ltmp2:
        .cfi_def_cfa_register %rbp
        pushq   %r15
        pushq   %r14
        pushq   %r13
        pushq   %r12
        pushq   %rbx
        subq    $24, %rsp
Ltmp3:
        .cfi_offset %rbx, -56
Ltmp4:
        .cfi_offset %r12, -48
Ltmp5:
        .cfi_offset %r13, -40
Ltmp6:
        .cfi_offset %r14, -32
Ltmp7:
        .cfi_offset %r15, -24
        movq    %rdi, %rbx
        leaq    4(%rsi), %rax
        movq    %rax, -48(%rbp)         ## 8-byte Spill
        leaq    8(%rsi), %rax
        movq    %rax, -56(%rbp)         ## 8-byte Spill
        leaq    12(%rsi), %rax
        movq    %rax, -64(%rbp)         ## 8-byte Spill
        leaq    16(%rsi), %r14
        leaq    20(%rsi), %r15
        movq    %rsi, %r13
        incq    %rbx
        leaq    LJTI0_0(%rip), %r12
        jmp     LBB0_1
[.....]</pre>
</div>
</p>
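<p>
<div>
<pre>For contrast, a hand-written sketch (not actual compiler output) of the code one would
hope for: each spilled value is just the base register plus a small immediate, so the
allocator could rematerialize it with a single lea right at the use instead of storing
and reloading it, e.g. at the call site for &a->b:

        leaq    4(%r13), %rdi           ## recompute &a->b from the base pointer kept in %r13
        callq   __Z6assignPj            ## assign(&a->b)

(the register assignments and mangled callee name above are illustrative only; the point
is that no stack slot is needed for the address).</pre>
</div>
</p>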
<hr>
<span>You are receiving this mail because:</span>
<ul>
<li>You are on the CC list for the bug.</li>
</ul>
</body>
</html>