[LLVMbugs] [Bug 21709] New: [X86][AVX] separate 2x128bit loads are not being merged into a single 256bit load.

bugzilla-daemon at llvm.org bugzilla-daemon at llvm.org
Tue Dec 2 07:33:00 PST 2014


http://llvm.org/bugs/show_bug.cgi?id=21709

            Bug ID: 21709
           Summary: [X86][AVX] separate 2x128bit loads are not being
                    merged into a single 256bit load.
           Product: libraries
           Version: trunk
          Hardware: PC
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P
         Component: Backend: X86
          Assignee: unassignedbugs at nondot.org
          Reporter: andrea.dibiagio at gmail.com
                CC: llvmbugs at cs.uiuc.edu
    Classification: Unclassified

Example 1.

///
#include <immintrin.h>

__m256 unaligned_loads(const float *ptr) {
  __m128 lo = _mm_loadu_ps( ptr + 0 );
  __m128 hi = _mm_loadu_ps( ptr + 4 );
  return _mm256_insertf128_ps( _mm256_castps128_ps256( lo ), hi, 1);
}
///

clang -march=btver2 -O2 -S -o -

  vmovups  (%rdi), %xmm0
  vinsertf128  $1, 16(%rdi), %ymm0, %ymm0
  retq

Ideally, it should generate a single 32B load:
  vmovups  (%rdi), %ymm0

Basically, the backend should generate a single unaligned 32B load instead of
the vmovups + vinsertf128 (+ folded load) sequence. I think this should be done
for AVX targets that have the 'FastUAMem' feature and do not have
'SlowUAMem32'.
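
For reference, the desired codegen matches what we expect from writing the
32-byte load directly at the source level (a minimal sketch of the equivalent
source; the function name is mine):

///
#include <immintrin.h>

// Equivalent single 32B unaligned load; on AVX targets where unaligned
// 32-byte accesses are fast this should lower to one 'vmovups (%rdi), %ymm0'.
__m256 unaligned_load_256(const float *ptr) {
  return _mm256_loadu_ps(ptr);
}
///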

Also (probably a minor/separate issue?): the folded load in vinsertf128 may
cause an exception if alignment checking is enabled and the current privilege
level is 3.


Example 2.
///
__m256 aligned_loads(const float *ptr) {
  __m128 lo = _mm_load_ps( ptr + 0 );
  __m128 hi = _mm_load_ps( ptr + 4 );
  return _mm256_insertf128_ps( _mm256_castps128_ps256( lo ), hi, 1);
}
///

clang -march=btver2 -O2 -S -o -

  vmovaps  (%rdi), %xmm0
  vinsertf128  $1, 16(%rdi), %ymm0, %ymm0
  retq

Again, this could be folded into:
  vmovaps  (%rdi), %ymm0
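
Note that a 32B vmovaps requires 32-byte alignment, which the two 16-byte
aligned loads above do not by themselves guarantee. If the extra alignment is
known at the source level, the single load can be written directly (a sketch;
the 32-byte alignment assumption is mine):

///
#include <immintrin.h>

// Assumes ptr is 32-byte aligned; otherwise the resulting vmovaps faults.
// With only 16-byte alignment known, merging the two loads would have to
// use vmovups instead (which is fine on FastUAMem targets).
__m256 aligned_load_256(const float *ptr) {
  return _mm256_load_ps(ptr);
}
///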


As a side note, the code from Example 1 is equivalent to the following code:

///
__m256 unaligned_loads_v2(const float *ptr) {
  __m128 lo = _mm_loadu_ps( ptr + 0 );
  __m128 hi = _mm_loadu_ps( ptr + 4 );
  return (__m256) __builtin_shufflevector(lo, hi, 0, 1, 2, 3, 4, 5, 6, 7);
}
///

Here the call to the x86 intrinsic _mm256_insertf128_ps has been replaced with
a __builtin_shufflevector call.
The point is that we could teach the instruction combiner that this particular
call to _mm256_insertf128_ps (inserting into the upper half of a 128-to-256-bit
cast) is equivalent to a shuffle that performs a `concat_vectors`. Basically,
the instruction combiner could replace that call with a shuffle early on,
before we even reach the backend.

The codegen for 'unaligned_loads_v2' is still the same as 'unaligned_loads':
  vmovups  (%rdi), %xmm0
  vinsertf128  $1, 16(%rdi), %ymm0, %ymm0
  retq
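
For what it's worth, here is a quick sanity check that the two variants compute
the same value (a minimal sketch assuming both functions above are in the same
translation unit; the buffer contents are arbitrary):

///
#include <immintrin.h>
#include <stdio.h>
#include <string.h>

int main(void) {
  float buf[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
  __m256 a = unaligned_loads(buf);     // intrinsic version from Example 1
  __m256 b = unaligned_loads_v2(buf);  // __builtin_shufflevector version
  // Both should contain buf[0..7] in order.
  printf("%s\n", memcmp(&a, &b, sizeof(a)) == 0 ? "equal" : "different");
  return 0;
}
///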

-- 
You are receiving this mail because:
You are on the CC list for the bug.