[PATCH] Don't force the use of a base pointer with MS inline asm

Reid Kleckner rnk at google.com
Mon Aug 26 13:20:57 PDT 2013


Jim, can you suggest a reviewer maybe?


On Thu, Aug 22, 2013 at 9:58 AM, Reid Kleckner <rnk at google.com> wrote:

> Ping.
>
>
> On Fri, Aug 16, 2013 at 3:08 PM, Reid Kleckner <rnk at google.com> wrote:
>
>> Jakob, can you take a look at this?
>>
>> Unfortunately, the fatal error I added actually comes up far more often
>> than I expected because LLVM will try to align stack frames for code like:
>>
>> void foo() {
>>   struct { int x; int y; } p;  // LLVM wants this to be 8 byte aligned. Why?
>>   __asm { ... adjust esp and use p }
>> }
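>>
>> For reference, the new .ll test in this patch corresponds roughly to C
>> source like the following (a hand-written approximation, not the exact
>> reproducer; 'bar' and the asm body are made up):
>>
>> int bar() {
>>   __declspec(align(16)) int r = -1;  // over-aligned local forces realignment
>>   __asm {
>>     pusha            // moves esp down by 32 bytes
>>     xor esi, esi
>>     mov r, esi       // references a local while esp is displaced
>>     popa
>>   }
>>   return r;
>> }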
>>
>> I'd prefer it if LLVM failed instead of miscompiling this code, though.
>>
>> I wasn't able to do a complete fix to pick the base register dynamically
>> because I can't figure out how to pull the constraints off inline asm in MI.
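>>
>> (For clobbers specifically, I imagine something like the sketch below --
>> walking the INLINEASM operand groups by their flag words -- but it's
>> untested, written from memory of the InlineAsm flag-word encoding, and it
>> still wouldn't give me the memory constraints:)
>>
>> // Untested sketch (not part of this patch): does any inline asm in MF
>> // list a register clobber that overlaps ESP?  Assumes it lives somewhere
>> // that can see the X86 register enums, e.g. X86RegisterInfo.cpp.
>> #include "llvm/CodeGen/MachineFunction.h"
>> #include "llvm/CodeGen/MachineInstr.h"
>> #include "llvm/IR/InlineAsm.h"
>> #include "llvm/Target/TargetRegisterInfo.h"
>>
>> static bool asmClobbersESP(const MachineFunction &MF,
>>                            const TargetRegisterInfo *TRI) {
>>   for (MachineFunction::const_iterator BB = MF.begin(), BE = MF.end();
>>        BB != BE; ++BB) {
>>     for (MachineBasicBlock::const_iterator MI = BB->begin(), ME = BB->end();
>>          MI != ME; ++MI) {
>>       if (!MI->isInlineAsm())
>>         continue;
>>       // Operand groups start after the asm string and extra-info operands;
>>       // each group is headed by an immediate flag word.
>>       for (unsigned i = InlineAsm::MIOp_FirstOperand,
>>                     e = MI->getNumOperands(); i < e; ) {
>>         const MachineOperand &FlagMO = MI->getOperand(i);
>>         if (!FlagMO.isImm())
>>           break;  // trailing implicit operands; no more groups
>>         unsigned Flag = FlagMO.getImm();
>>         unsigned NumVals = InlineAsm::getNumOperandRegisters(Flag);
>>         if (InlineAsm::getKind(Flag) == InlineAsm::Kind_Clobber)
>>           for (unsigned j = 0; j != NumVals; ++j) {
>>             const MachineOperand &MO = MI->getOperand(i + 1 + j);
>>             if (MO.isReg() && TRI->regsOverlap(MO.getReg(), X86::ESP))
>>               return true;
>>           }
>>         i += 1 + NumVals;
>>       }
>>     }
>>   }
>>   return false;
>> }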
>>
>> I would also reject gcc inline asm that clobbers esp in the presence of
>> stack realignment, if I could see the constraints.
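>>
>> That is, something like this (hypothetical example, not from a real bug
>> report), where the clobber list is the only hint that the asm moves esp:
>>
>> void baz() {
>>   int buf[4] __attribute__((aligned(16)));  // forces stack realignment
>>   __asm__ __volatile__("pushl $0\n\t"       // transiently moves esp...
>>                        "addl $4, %%esp"     // ...and puts it back
>>                        : : "r"(buf) : "esp", "memory");
>> }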
>>
>>
>> On Tue, Aug 13, 2013 at 6:09 PM, Reid Kleckner <rnk at google.com> wrote:
>>
>>>     - Add a fatal error.
>>>
>>> http://llvm-reviews.chandlerc.com/D1317
>>>
>>> CHANGE SINCE LAST DIFF
>>>   http://llvm-reviews.chandlerc.com/D1317?vs=3269&id=3460#toc
>>>
>>> Files:
>>>   lib/Target/X86/X86RegisterInfo.cpp
>>>   test/CodeGen/X86/inline-asm-stack-realign.ll
>>>   test/CodeGen/X86/ms-inline-asm.ll
>>>
>>> Index: lib/Target/X86/X86RegisterInfo.cpp
>>> ===================================================================
>>> --- lib/Target/X86/X86RegisterInfo.cpp
>>> +++ lib/Target/X86/X86RegisterInfo.cpp
>>> @@ -340,6 +340,11 @@
>>>          "Stack realignment in presence of dynamic allocas is not supported with"
>>>          "this calling convention.");
>>>
>>> +    if (MF.hasMSInlineAsm())
>>> +      report_fatal_error(
>>> +        "Stack realignment in the presence of stack-adjusting inline assembly "
>>> +        "is not supported");
>>> +
>>>      for (MCSubRegIterator I(getBaseRegister(), this, /*IncludeSelf=*/true);
>>>           I.isValid(); ++I)
>>>        Reserved.set(*I);
>>> @@ -396,18 +401,16 @@
>>>     if (!EnableBasePointer)
>>>       return false;
>>>
>>> -   // When we need stack realignment and there are dynamic allocas, we can't
>>> -   // reference off of the stack pointer, so we reserve a base pointer.
>>> -   //
>>> -   // This is also true if the function contain MS-style inline assembly.  We
>>> -   // do this because if any stack changes occur in the inline assembly, e.g.,
>>> -   // "pusha", then any C local variable or C argument references in the
>>> -   // inline assembly will be wrong because the SP is not properly tracked.
>>> -   if ((needsStackRealignment(MF) && MFI->hasVarSizedObjects()) ||
>>> -       MF.hasMSInlineAsm())
>>> -     return true;
>>> -
>>> -   return false;
>>> +   // When we need stack realignment, we can't address the stack from the frame
>>> +   // pointer.  When we have dynamic allocas or MS inline asm, we can't address
>>> +   // variables from the stack pointer.  MS inline asm can reference locals
>>> +   // while also adjusting the stack pointer.  When we can't use both the SP and
>>> +   // the FP, we need a separate base pointer register.
>>> +   // FIXME: gcc inline asm can also adjust the stack pointer if it lists esp as
>>> +   // a clobber, but LLVM does not support that.
>>> +   bool CantUseFP = needsStackRealignment(MF);
>>> +   bool CantUseSP = MFI->hasVarSizedObjects() || MF.hasMSInlineAsm();
>>> +   return CantUseFP && CantUseSP;
>>>  }
>>>
>>>  bool X86RegisterInfo::canRealignStack(const MachineFunction &MF) const {
>>> Index: test/CodeGen/X86/inline-asm-stack-realign.ll
>>> ===================================================================
>>> --- /dev/null
>>> +++ test/CodeGen/X86/inline-asm-stack-realign.ll
>>> @@ -0,0 +1,16 @@
>>> +; RUN: not llc -march x86 %s 2>&1 | FileCheck %s
>>> +
>>> +; Realigning the stack and referring to local stack vars through memory in
>>> +; inline asm with this many clobbers is impossible.
>>> +; FIXME: LLVM could do better here by looking at the list of clobbers.
>>> +
>>> +; CHECK: Stack realignment in the presence of stack-adjusting inline assembly is not supported
>>> +
>>> +define i32 @foo() {
>>> +entry:
>>> +  %r = alloca i32, align 16
>>> +  store i32 -1, i32* %r, align 16
>>> +  call void asm sideeffect inteldialect "pusha\0A\09xor eax, eax\0A\09xor ebx, ebx\0A\09xor ecx, ecx\0A\09xor edx, edx\0A\09xor esi, esi\0A\09xor edi, edi\0A\09mov dword ptr $0, esi\0A\09popa", "=*m,~{eax},~{ebx},~{ecx},~{edi},~{edx},~{esi},~{dirflag},~{fpsr},~{flags}"(i32* %r)
>>> +  %0 = load i32* %r, align 16
>>> +  ret i32 %0
>>> +}
>>> Index: test/CodeGen/X86/ms-inline-asm.ll
>>> ===================================================================
>>> --- test/CodeGen/X86/ms-inline-asm.ll
>>> +++ test/CodeGen/X86/ms-inline-asm.ll
>>> @@ -103,8 +103,8 @@
>>>  ; CHECK: {{## InlineAsm End|#NO_APP}}
>>>  ; CHECK: {{## InlineAsm Start|#APP}}
>>>  ; CHECK: .intel_syntax
>>> -; CHECK: mov dword ptr [esi], edi
>>> +; CHECK: mov dword ptr [ebp - 8], edi
>>>  ; CHECK: .att_syntax
>>>  ; CHECK: {{## InlineAsm End|#NO_APP}}
>>> -; CHECK: movl (%esi), %eax
>>> +; CHECK: movl -8(%ebp), %eax
>>>  }
>>>
>>
>>
>