[LLVMbugs] [Bug 22329] New: [X86][AVX] consecutive 2 x 128 bit loads are not always merged into a single 256 bit load
bugzilla-daemon at llvm.org
Sun Jan 25 09:13:04 PST 2015
http://llvm.org/bugs/show_bug.cgi?id=22329
Bug ID: 22329
Summary: [X86][AVX] consecutive 2 x 128 bit loads are not
always merged into a single 256 bit load
Product: libraries
Version: trunk
Hardware: PC
OS: All
Status: NEW
Severity: normal
Priority: P
Component: Backend: X86
Assignee: unassignedbugs at nondot.org
Reporter: spatel+llvm at rotateright.com
CC: llvmbugs at cs.uiuc.edu
Classification: Unclassified
The fix for bug 21709 is incomplete; only the first 2 loads got merged:
$ cat load32.ll
declare <8 x float> @llvm.x86.avx.vinsertf128.ps.256(<8 x float>, <4 x float>, i8)
define <8 x float> @foo(<4 x float>* %x) {
%idx1 = getelementptr inbounds <4 x float>* %x, i64 1
%idx2 = getelementptr inbounds <4 x float>* %x, i64 2
%idx3 = getelementptr inbounds <4 x float>* %x, i64 3
%idx4 = getelementptr inbounds <4 x float>* %x, i64 4
%idx5 = getelementptr inbounds <4 x float>* %x, i64 5
%a0 = load <4 x float>* %x, align 16
%a1 = load <4 x float>* %idx1, align 16
%b0 = load <4 x float>* %idx2, align 16
%b1 = load <4 x float>* %idx3, align 16
%c0 = load <4 x float>* %idx4, align 16
%c1 = load <4 x float>* %idx5, align 16
%shuffle1 = shufflevector <4 x float> %a0, <4 x float> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
%shuffle2 = shufflevector <4 x float> %b0, <4 x float> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
%shuffle3 = shufflevector <4 x float> %c0, <4 x float> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef>
%a = tail call <8 x float> @llvm.x86.avx.vinsertf128.ps.256(<8 x float> %shuffle1, <4 x float> %a1, i8 1)
%b = tail call <8 x float> @llvm.x86.avx.vinsertf128.ps.256(<8 x float> %shuffle2, <4 x float> %b1, i8 1)
%c = tail call <8 x float> @llvm.x86.avx.vinsertf128.ps.256(<8 x float> %shuffle3, <4 x float> %c1, i8 1)
%add1 = fadd <8 x float> %a, %b
%add2 = fadd <8 x float> %add1, %c
ret <8 x float> %add2
}
$ ./llc -mattr=+avx load32.ll -o -
...
vmovaps 32(%rdi), %xmm0
vmovaps 64(%rdi), %xmm1
vmovups (%rdi), %ymm2
vinsertf128 $1, 48(%rdi), %ymm0, %ymm0
vinsertf128 $1, 80(%rdi), %ymm1, %ymm1
vaddps %ymm0, %ymm2, %ymm0
vaddps %ymm1, %ymm0, %ymm0
retq
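For comparison, if all three pairs of consecutive 128-bit loads were merged the way the first pair already is (the vmovups of (%rdi)), the expected output would presumably be three unaligned 256-bit loads and two adds. This is a hand-written sketch of the desired codegen, not actual compiler output; register assignments are illustrative:

    vmovups (%rdi), %ymm0       ; merged loads of %a0/%a1
    vmovups 32(%rdi), %ymm1     ; merged loads of %b0/%b1
    vmovups 64(%rdi), %ymm2     ; merged loads of %c0/%c1
    vaddps %ymm1, %ymm0, %ymm0
    vaddps %ymm2, %ymm0, %ymm0
    retq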