[llvm-bugs] [Bug 40306] New: [x86] shuffle lowering creates strange shuffle mask
via llvm-bugs
llvm-bugs at lists.llvm.org
Mon Jan 14 08:06:16 PST 2019
https://bugs.llvm.org/show_bug.cgi?id=40306
Bug ID: 40306
Summary: [x86] shuffle lowering creates strange shuffle mask
Product: libraries
Version: trunk
Hardware: PC
OS: All
Status: NEW
Severity: enhancement
Priority: P
Component: Backend: X86
Assignee: unassignedbugs at nondot.org
Reporter: spatel+llvm at rotateright.com
CC: craig.topper at gmail.com, llvm-bugs at lists.llvm.org,
llvm-dev at redking.me.uk, spatel+llvm at rotateright.com
This manifested in:
https://reviews.llvm.org/D56281
...but it's independent of that patch as shown with this example:
define <8 x i16> @shuf_zeros_undefs(<8 x i16> %x) {
  %r = shufflevector <8 x i16> zeroinitializer, <8 x i16> %x,
                     <8 x i32> <i32 9, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
  ret <8 x i16> %r
}
That shuffle is effectively <%x[1], 0, 0, 0, undef, undef, undef, undef>, so the high
elements of the result are clearly undef, but we get this unexpected pshufb lowering:
$ llc -o - weird_shufb.ll -mattr=avx
vpshufb LCPI0_0(%rip), %xmm0, %xmm0 ## xmm0 = xmm0[2,3],zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,zero,xmm0[6,7]
-------------------------------------------------------------------------------
That's not a miscompile, but why are the high elements choosing from the input?
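For reference, a pshufb that didn't read the input for the undef lanes would just zero
them via the mask's sign bit, e.g. a constant like this (a hand-written sketch; the label
and byte values are mine, not compiler output):

LCPI0_0:
        .byte   2, 3, 128, 128, 128, 128, 128, 128
        .byte   128, 128, 128, 128, 128, 128, 128, 128

giving xmm0 = xmm0[2,3],zero,... for all of the remaining bytes.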
Debug output shows that we created this intermediate state:
t12: v8i16 = X86ISD::UNPCKL t17, t2
t19: v8i16 = X86ISD::PSHUFHW t12, Constant:i8<-24>
t20: v4i32 = bitcast t19
t22: v4i32 = X86ISD::PSHUFD t20, Constant:i8<-26>
t23: v8i16 = bitcast t22
t25: v8i16 = X86ISD::PSHUFLW t23, Constant:i8<75>
-------------------------------------------------------------------------------
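If I'm decoding those shuffle immediates correctly (four 2-bit lane selectors, low bits
first), they read as:
  PSHUFHW 0xE8 (-24): high words take lanes <0,2,2,3> of the high half
  PSHUFD  0xE6 (-26): dwords take lanes <2,1,2,3>
  PSHUFLW 0x4B ( 75): low words take lanes <3,2,0,1> of the low half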
So this leads to a question that may not be answerable in the DAG or
statically: what is the ideal x86 lowering for that shuffle?
The pshufb is obviously the smallest code, but should we favor a solution that
doesn't need to load anything?
That could be a shift+blend immediate with zero:
vpsrld $16, %xmm0, %xmm0
vpxor %xmm1, %xmm1, %xmm1
vpblendw $1, %xmm0, %xmm1, %xmm0 ## xmm0 = xmm0[0],xmm1[1,2,3,4,5,6,7]
Or shift+zext:
vpsrld $16, %xmm0, %xmm0
vpmovzxdq %xmm0, %xmm0
Or if we're ok with a load, but just want to avoid pshufb, it could be a
shift+mask:
vpsrld $16, %xmm0, %xmm0
vpand LCPI0_0(%rip), %xmm0, %xmm0
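Here the loaded constant would only need to keep the low word after the shift, something
like this (again a sketch; the label and values are mine, not compiler output):

LCPI0_0:
        .short  65535, 0, 0, 0, 0, 0, 0, 0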