[LLVMbugs] [Bug 21792] New: Changing the scheduling would help removing a copy and improve the throughput by 7%
bugzilla-daemon at llvm.org
Tue Dec 9 10:17:09 PST 2014
http://llvm.org/bugs/show_bug.cgi?id=21792
Bug ID: 21792
Summary: Changing the scheduling would help removing a copy and
improve the throughput by 7%
Product: libraries
Version: trunk
Hardware: PC
OS: All
Status: NEW
Severity: normal
Priority: P
Component: Backend: X86
Assignee: unassignedbugs at nondot.org
Reporter: qcolombet at apple.com
CC: llvmbugs at cs.uiuc.edu
Classification: Unclassified
Created attachment 13445
--> http://llvm.org/bugs/attachment.cgi?id=13445&action=edit
bitcode to reproduce
For the attached bitcode, we generate the following sequence of instructions,
which contains a useless move:
[…]
pshufd $78, %xmm0, %xmm1 ## xmm1 = xmm0[2,3,0,1]
movd %xmm1, %rcx
movd %xmm0, %rdx
movq %rdx, %rax ## <— this move can be removed.
sarq $32, %rax
movslq %edx, %r8
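For context, the low 64-bit lane of the vector is consumed twice: once by the
destructive sarq to extract its upper 32 bits, and once by movslq to
sign-extend its lower 32 bits. That is why the value has to live in two GPRs
and why the extra movq shows up. A rough C++ sketch of the computation (the
function and variable names are made up for illustration, not taken from the
attached bitcode):

#include <cstdint>

// Illustrative only: split a <2 x i64> value the way the code above does.
void split(const int64_t v[2], int64_t &lane1, int64_t &hi, int64_t &lo) {
  lane1 = v[1];                     // pshufd $78 + movd %xmm1, %rcx
  int64_t lane0 = v[0];             // movd %xmm0, %rdx
  hi = lane0 >> 32;                 // sarq $32 (arithmetic shift, destroys its operand)
  lo = (int64_t)(int32_t)lane0;     // movslq (sign-extend the low 32 bits)
}

Since sarq overwrites its source register, lane0 has to be copied before the
shift if it is still needed afterwards; scheduling the sign extension first
avoids that, as shown next.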
By slightly changing the scheduling, we would be able to get rid of that move:
pshufd $78, %xmm0, %xmm1 ## xmm1 = xmm0[2,3,0,1]
movd %xmm1, %rcx
movd %xmm0, %rdx
movq %rdx, %rax ## <— this move can be removed.
movslq %edx, %r8 ## <— move the sign extension before the shift.
sarq $32, %rax
Now, the move can be removed:
pshufd $78, %xmm0, %xmm1 ## xmm1 = xmm0[2,3,0,1]
movd %xmm1, %rcx
movd %xmm0, %rax ## rdx becomes rax (the previous move is coalescable).
movslq %eax, %r8 ## update the sign extension.
sarq $32, %rax
According to IACA, the old sequence has a throughput of 4.7 cycles, whereas the
new one has a throughput of 4.4 cycles, i.e., roughly the 7% throughput
improvement mentioned in the summary.
The related benchmark improves by 2% because of this change.
** To Reproduce **
llc -mtriple=x86_64-apple-macosx < new.ll
Note: The machine IR looks like this:
%vreg2<def> = PSHUFDri %vreg1, 78; VR128:%vreg2,%vreg1
%vreg3<def> = MOVPQIto64rr %vreg2<kill>; GR64:%vreg3 VR128:%vreg2
%vreg4<def> = MOVPQIto64rr %vreg1; GR64:%vreg4 VR128:%vreg1
%vreg5<def,tied1> = SAR64ri %vreg4<tied0>, 32, %EFLAGS<imp-def,dead>; GR64_NOSP:%vreg5 GR64:%vreg4
%vreg6<def> = COPY %vreg4:sub_32bit; GR32:%vreg6 GR64:%vreg4
%vreg7<def> = MOVSX64rr32 %vreg6<kill>; GR64_NOSP:%vreg7 GR32:%vreg6
The copy cannot be coalesced because SAR64ri is a two-address instruction
(%vreg5 is tied to %vreg4) and %vreg4 is still live after the shift, since the
COPY of %vreg4:sub_32bit reads it later, so %vreg4 and %vreg5 interfere. To get
rid of that interference, we should schedule the code like this:
%vreg2<def> = PSHUFDri %vreg1, 78; VR128:%vreg2,%vreg1
%vreg3<def> = MOVPQIto64rr %vreg2<kill>; GR64:%vreg3 VR128:%vreg2
%vreg4<def> = MOVPQIto64rr %vreg1; GR64:%vreg4 VR128:%vreg1
%vreg6<def> = COPY %vreg4:sub_32bit; GR32:%vreg6 GR64:%vreg4
%vreg7<def> = MOVSX64rr32 %vreg6<kill>; GR64_NOSP:%vreg7 GR32:%vreg6
%vreg5<def,tied1> = SAR64ri %vreg4<tied0>, 32, %EFLAGS<imp-def,dead>; GR64_NOSP:%vreg5 GR64:%vreg4
I.e., the SAR64ri that defines %vreg5 is moved after the MOVSX64rr32 that
defines %vreg7.
The fix may need to be done in the machine scheduler, but the X86 backend is a
reasonable place to start.
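One possible shape for such a fix is a ScheduleDAGMutation that orders the
non-destructive readers of a register before a two-address instruction that
consumes the same register, so that the register dies at the two-address
instruction and the tied-def copy can be coalesced. The sketch below is only
illustrative: the class name TwoAddrReaderFirst is made up, and a real patch
would have to restrict this to cases where it is profitable and handle
subregister reads more carefully.

#include "llvm/CodeGen/MachineScheduler.h"
#include "llvm/CodeGen/ScheduleDAG.h"

using namespace llvm;

namespace {
// Hypothetical mutation: when an instruction has a tied use (two-address
// form, e.g. SAR64ri), try to schedule the other readers of that register
// first so the register dies at the two-address instruction.
struct TwoAddrReaderFirst : public ScheduleDAGMutation {
  void apply(ScheduleDAGInstrs *DAG) override {
    for (SUnit &SU : DAG->SUnits) {
      const MachineInstr *MI = SU.getInstr();
      unsigned TiedReg = 0;
      for (const MachineOperand &MO : MI->operands())
        if (MO.isReg() && MO.isUse() && MO.isTied()) {
          TiedReg = MO.getReg();
          break;
        }
      if (!TiedReg)
        continue;
      // Add an artificial edge OtherSU -> SU for every other pure reader of
      // TiedReg; addEdge rejects edges that would create a cycle.
      for (SUnit &OtherSU : DAG->SUnits) {
        const MachineInstr *OtherMI = OtherSU.getInstr();
        if (&OtherSU != &SU && OtherMI->readsRegister(TiedReg, DAG->TRI) &&
            !OtherMI->definesRegister(TiedReg, DAG->TRI))
          DAG->addEdge(&SU, SDep(&OtherSU, SDep::Artificial));
      }
    }
  }
};
} // end anonymous namespace

Such a mutation could then be registered from the X86 machine-scheduler hook
(via ScheduleDAGMILive::addMutation), which would keep the heuristic in the
target while the mechanism stays in the generic scheduler.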
--
You are receiving this mail because:
You are on the CC list for the bug.