[LLVMbugs] [Bug 19589] New: [ARM64] Unable to emit UBFX in some cases
bugzilla-daemon at llvm.org
Mon Apr 28 13:58:19 PDT 2014
http://llvm.org/bugs/show_bug.cgi?id=19589
Bug ID: 19589
Summary: [ARM64] Unable to emit UBFX in some cases
Product: new-bugs
Version: trunk
Hardware: PC
OS: All
Status: NEW
Severity: normal
Priority: P
Component: new bugs
Assignee: unassignedbugs at nondot.org
Reporter: weimingz at codeaurora.org
CC: llvmbugs at cs.uiuc.edu
Classification: Unclassified
For example:
Given
@arr = external global [8 x [64 x i64]]
define i64 @fct21(i64 %x) {
entry:
%shr = lshr i64 %x, 4
%and = and i64 %shr, 15
%arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
%0 = load i64* %arrayidx, align 8
ret i64 %0
}
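For reference, a plausible C equivalent of the IR above (the original C source is not shown in the report, so this is a reconstruction) would be:

```c
#include <stdint.h>

/* Hypothetical C source matching the IR: arr is indexed at row 2 with a
 * 4-bit field extracted from bits [7:4] of x (the lshr + and pair). */
uint64_t arr[8][64];

uint64_t fct21(uint64_t x) {
    return arr[2][(x >> 4) & 15];
}
```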
both the ARM64 and AArch64 backends are unable to emit ubfx, which results in one
extra instruction being emitted.
The above code generates:
lsr x8, x0, #1
and x8, x8, #0x78
adrp x9, arr
add x9, x9, :lo12:arr
add x8, x9, x8
ldr x0, [x8, #1024]
ret
Ideally, it should be
ubfx x8, x0, #4, #4
adrp x9, arr
add x9, x9, :lo12:arr
add x8, x9, x8, lsl #3
ldr x0, [x8, #1024]
ret
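For clarity, the semantics of UBFX (unsigned bitfield extract) can be modeled in C as below; this is a sketch of the instruction's behavior, not code from the report:

```c
#include <stdint.h>

/* Sketch of UBFX semantics: extract `width` bits starting at bit `lsb`.
 * Assumes 0 < width and lsb + width <= 64, as the instruction requires. */
uint64_t ubfx(uint64_t xn, unsigned lsb, unsigned width) {
    return (xn >> lsb) & ((1ULL << width) - 1ULL);
}
```

So `ubfx x8, x0, #4, #4` computes `(x0 >> 4) & 0xf` in a single instruction, covering the IR's separate lshr and and.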
This is because for patterns like ((x >> 8) & 0x1f) << 3 + y, InstCombine
changes it to
((x >> (8 - 3)) & (0x1f << 3)) + y
that is:
lsr x8, x0, 5
and x8, x8, 0xf8
add x0, x8, x1 (here we lose the chance to fold the shift into the add)
If ubfx were used instead, it would be:
ubfx x0, x0, 8, 5
add x0, x1, x0, lsl 3
The root cause is that during DAG combining, the target-independent combine folds
the SHL into the bit extraction, which makes it harder for the subsequent pass to
recognize the bit extraction.
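The fold is value-preserving, which a small C check confirms (constants chosen to match the example above); the problem is purely that the pre-fold form exposes the extract-then-shift shape the backend can map to ubfx plus a shifted add operand:

```c
#include <stdint.h>

/* Both forms compute the same value:
 *   ((x >> 8) & 0x1f) << 3  ==  (x >> (8 - 3)) & (0x1f << 3)
 * but only the first keeps the bit extract and the shift separate,
 * letting the shift fold into the add's shifted-operand form. */
uint64_t extract_then_shift(uint64_t x) { return ((x >> 8) & 0x1f) << 3; }
uint64_t shift_then_mask(uint64_t x)    { return (x >> 5) & 0xf8; }
```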