[llvm-bugs] [Bug 42219] New: [X86_64] Variadic functions unconditionally spill %XMM registers

via llvm-bugs llvm-bugs at lists.llvm.org
Mon Jun 10 13:05:02 PDT 2019


            Bug ID: 42219
           Summary: [X86_64] Variadic functions unconditionally spill %XMM
           Product: new-bugs
           Version: 8.0
          Hardware: PC
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P
         Component: new bugs
          Assignee: unassignedbugs at nondot.org
          Reporter: salim.nasser at windriver.com
                CC: htmldeveloper at gmail.com, llvm-bugs at lists.llvm.org

Created attachment 22082
  --> https://bugs.llvm.org/attachment.cgi?id=22082&action=edit

The X86_64 ABI provides a way to determine at runtime whether any "vector"
registers (e.g. SSE XMM registers) have been used to pass arguments to a
variadic function. Specifically, the incoming value of %al is non-zero in this
case.

This allows variadic functions to avoid referencing XMM registers unless the
caller "expects" such a use.

LLVM correctly guards the va_start code that saves %XMM registers with a test
of %al.

However, when compiling without optimization, the va_start setup code is
itself done in several steps:

1. Spill registers to temporary stack slots
2. Reload registers from stack
3. Store reloaded values to varargs area

Unfortunately (1) is done unconditionally, i.e. regardless of the outcome of
the %al test. Here's an extract from the attached variadic.s:

        ##### ABI-defined test: %al is non-zero if any vector registers are
        ##### used for arguments
        testb   %al, %al
        ##### But the following %xmm spills run regardless of the result of
        ##### the previous test!
        movaps  %xmm7, -224(%rbp)       # 16-byte Spill
        movaps  %xmm6, -240(%rbp)       # 16-byte Spill
        ##### Skip the following code if no floating point arguments
        je      .LBB0_2
# %bb.1:
        ##### The following is the "regular" varargs spill code and is only
        ##### run when vector argument registers are used
        ##### Restore original value of %xmm from stack ...
        movaps  -336(%rbp), %xmm0       # 16-byte Reload
        ##### ... and store in varargs area
        movaps  %xmm0, -160(%rbp)
        movaps  -320(%rbp), %xmm1       # 16-byte Reload
        movaps  %xmm1, -144(%rbp)

Specifically, this behavior may become a problem for OS kernel code compiled
with -no-implicit-float once bug 36507 is fixed.

Previously, variadic functions compiled with -no-implicit-float could safely be
used even with the SSE unit disabled. If we fix 36507 without fixing the
present bug, such code will lead to a crash at runtime due to reading XMM
registers.

I understand that it is arguable whether or not that would be a bug. For
example, AArch64 variadic functions always access the vector registers
unconditionally. But that is because the AArch64 ABI does not provide a way to
avoid this, whereas the X86_64 ABI *does*.
