[LLVMdev] MinGW/MSVC++ uses different ABI for sret
Óscar Fuentes
ofv at wanadoo.es
Fri Sep 25 14:41:27 PDT 2009
Let's go directly to the example:
struct S {
    double dummy1;
    double dummy2;
};

S bar();

S foo() {
    return bar();
}
This is the result of g++ -c -S -O2 (focus on the final `ret'):
__Z3foov:
LFB0:
	pushl	%ebp
LCFI0:
	movl	%esp, %ebp
LCFI1:
	pushl	%ebx
LCFI2:
	subl	$20, %esp
LCFI3:
	movl	8(%ebp), %ebx
	movl	%ebx, (%esp)
	call	__Z3barv
	pushl	%eax
	movl	%ebx, %eax
	movl	-4(%ebp), %ebx
	leave
	ret	$4
This is the result of cl -O2 -c -Fa (again, focus on the final `ret'):
PUBLIC	?foo@@YA?AUS@@XZ			; foo
EXTRN	?bar@@YA?AUS@@XZ:PROC			; bar
; Function compile flags: /Ogtpy
;	COMDAT ?foo@@YA?AUS@@XZ
_TEXT	SEGMENT
$T2548 = -16					; size = 16
$T2546 = 8					; size = 4
?foo@@YA?AUS@@XZ PROC				; foo, COMDAT
; File c:\dev\exp\bar.cpp
; Line 8
	sub	esp, 16				; 00000010H
; Line 9
	lea	eax, DWORD PTR $T2548[esp+16]
	push	eax
	call	?bar@@YA?AUS@@XZ		; bar
	mov	ecx, DWORD PTR $T2546[esp+16]
	mov	edx, DWORD PTR [eax]
	mov	DWORD PTR [ecx], edx
	mov	edx, DWORD PTR [eax+4]
	mov	DWORD PTR [ecx+4], edx
	mov	edx, DWORD PTR [eax+8]
	mov	eax, DWORD PTR [eax+12]
	mov	DWORD PTR [ecx+8], edx
	mov	DWORD PTR [ecx+12], eax
	mov	eax, ecx
; Line 10
	add	esp, 20				; 00000014H
	ret	0
?foo@@YA?AUS@@XZ ENDP				; foo
Please note how g++ pops 4 bytes from the stack on return (`ret $4'),
while cl doesn't (`ret 0'). This is reflected in the call to `bar' too,
where each caller assumes the callee follows its own convention for
popping the hidden return-value pointer.
LLVM generates code that follows the gcc behaviour. The result is that
after LLVM-generated code calls a VC++ function that returns a struct,
the stack is corrupted. The "solution" is to never mark external VC++
functions as sret, but this breaks if the external function was actually
compiled by gcc, or if you pass an LLVM callback that returns a struct
to a VC++ function, etc.
I filed a bug yesterday ( http://llvm.org/bugs/show_bug.cgi?id=5046 )
and Anton kindly explained that LLVM is doing the right thing as per the
ABI (the GCC ABI, I'll add).
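For context, this is roughly how the hidden pointer appears in LLVM IR (a sketch in 2.x-era syntax; at this level the IR is the same for both compilers' conventions, and the callee-pops decision only happens later, during call lowering):

```llvm
%struct.S = type { double, double }

declare void @bar(%struct.S* sret)

define void @foo(%struct.S* sret %agg.result) nounwind {
entry:
  ; The hidden result pointer is simply forwarded to bar. Whether bar
  ; pops it on return (gcc: ret $4) or not (cl: ret 0) is invisible here.
  call void @bar(%struct.S* sret %agg.result)
  ret void
}
```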
1. Is there an LLVM way of dealing with this without using separate code
for VC++ and GCC?
2. Is there a document that thoroughly explains the ABI used by VC++?
The documentation on MSDN is quite vague
( http://msdn.microsoft.com/en-us/library/984x0h58.aspx )
3. Is it a bug that LLVM does not distinguish between GCC and VC++ sret
handling?
4. Why the heck do GCC and VC++ follow different ABIs on the same
platform?
The last question is rhetorical.
--
Óscar