[llvm-bugs] [Bug 45644] New: [AArch64] vcombine with zero missed optimization
via llvm-bugs
llvm-bugs at lists.llvm.org
Wed Apr 22 22:10:46 PDT 2020
https://bugs.llvm.org/show_bug.cgi?id=45644
Bug ID: 45644
Summary: [AArch64] vcombine with zero missed optimization
Product: libraries
Version: 9.0
Hardware: Other
OS: Linux
Status: NEW
Severity: enhancement
Priority: P
Component: Backend: AArch64
Assignee: unassignedbugs at nondot.org
Reporter: husseydevin at gmail.com
CC: arnaud.degrandmaison at arm.com,
llvm-bugs at lists.llvm.org, smithp352 at googlemail.com,
Ties.Stuij at arm.com
AArch64 zero-extends any write to a D register, clearing the upper 64 bits of the corresponding Q register.
One pattern that should be recognized is vcombine(x, 0).
In LLVM IR, this shows up as

    shufflevector <1 x i64> %x, <1 x i64> zeroinitializer, <2 x i32> <i32 0, i32 1>
This shuffle should be a no-op as long as the D register was explicitly written to (e.g. not vcombine(vget_low(x), 0), where the upper half of the source Q register may hold stale data).
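For illustration, here is a hypothetical minimal pair (my names, not from the report): the fold is sound in the first function, where the D register is freshly written, but not in the second:

    #include <arm_neon.h>

    /* Safe to fold: vdup_n_u32 freshly writes a D register, and that
       write already zero-extends into the full Q register, so the
       vcombine with zero should cost nothing. */
    uint32x4_t widen_fresh(uint32_t x)
    {
        return vcombine_u32(vdup_n_u32(x), vdup_n_u32(0));
    }

    /* Not free: vget_low_u32 reuses the low half of a live Q register
       whose upper half may be nonzero, so an explicit zeroing write is
       still required. */
    uint32x4_t widen_reused(uint32x4_t v)
    {
        return vcombine_u32(vget_low_u32(v), vdup_n_u32(0));
    }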
So for example:
    uint64x2_t _mm_cvtsi64_si128(uint64_t x)
    {
        return vcombine_u64(vdup_n_u64(x), vdup_n_u64(0));
    }
    define <2 x i64> @_mm_cvtsi64_si128(i64 %x) {
      %vdup = insertelement <1 x i64> undef, i64 %x, i32 0
      %vcombine = shufflevector <1 x i64> %vdup, <1 x i64> zeroinitializer, <2 x i32> <i32 0, i32 1>
      ret <2 x i64> %vcombine
    }
The optimal code is the following, which is what GCC 9.2.0 generates (and what Clang generates for a plain uint64x1_t):
    _mm_cvtsi64_si128:
        fmov d0, x0
        ret
However, Clang generates this:
    _mm_cvtsi64_si128:
        movi v0.2d, #0000000000000000
        mov  v0.d[0], x0
        ret
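For what it's worth, here is a small standalone harness (mine, not from the report) to sanity-check that any lowering keeps the upper lane zero:

    #include <arm_neon.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    uint64x2_t _mm_cvtsi64_si128(uint64_t x)
    {
        return vcombine_u64(vdup_n_u64(x), vdup_n_u64(0));
    }

    int main(void)
    {
        uint64x2_t v = _mm_cvtsi64_si128(UINT64_C(0x1122334455667788));
        /* Lane 1 must read as zero however the vcombine is lowered. */
        printf("lane0=%016" PRIx64 " lane1=%016" PRIx64 "\n",
               vgetq_lane_u64(v, 0), vgetq_lane_u64(v, 1));
        return 0;
    }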
Similarly, for vectors:
    uint32x4_t vaddq_low_u32(uint32x4_t a, uint32x2_t b)
    {
        return vaddq_u32(a, vcombine_u32(b, vdup_n_u32(0)));
    }
Expected:
    vaddq_low_u32:
        fmov d1, d1              // writing d1 zero-extends into v1
        add  v0.4s, v0.4s, v1.4s
        ret
Actual:
    vaddq_low_u32:
        movi v2.2d, #0000000000000000
        mov  v1.d[1], v2.d[0]
        add  v0.4s, v0.4s, v1.4s
        ret
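Pushing the same point further (a hypothetical example of mine), when the low half is itself freshly computed in a D register, even the zeroing move should be unnecessary:

    #include <arm_neon.h>

    /* vadd_u32 writes a D register, which zero-extends into v1, so the
       vcombine with zero should fold away entirely here. */
    uint32x4_t vaddq_low2_u32(uint32x4_t a, uint32x2_t b, uint32x2_t c)
    {
        return vaddq_u32(a, vcombine_u32(vadd_u32(b, c), vdup_n_u32(0)));
    }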
As hinted by the function name, this pattern is recognized by the x86_64 backend: compiling the same IR for x86_64 at -O3 yields the direct equivalent:
    _mm_cvtsi64_si128:
        movq xmm0, rdi
        ret
It does not catch the second example, however, and emits this instead of a single movq xmm1, xmm1:
    vaddq_low_u32:
        xorps  xmm2, xmm2
        shufps xmm1, xmm2, 232
        paddq  xmm0, xmm1
        ret
That's a separate issue, though it would follow the same logic.
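As an aside (my example, not from the report), SSE2 even has a dedicated intrinsic for the zero-upper-half idiom, so the x86_64 pattern should be straightforward to emit:

    #include <emmintrin.h>

    /* _mm_move_epi64 zeroes the upper 64 bits of a vector; it should
       lower to the single movq xmm, xmm suggested above. */
    __m128i zero_upper_half(__m128i v)
    {
        return _mm_move_epi64(v);
    }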