[llvm-dev] Deopt operand bundle behavior

Dániel Mihályi via llvm-dev llvm-dev at lists.llvm.org
Wed Apr 5 02:27:13 PDT 2017


Hi!

We have started using deopt operand bundles to make our native stack traces deoptimizable and garbage collectable. We stumbled upon an issue, and we don't know whether it is a problem on our side or within LLVM itself.
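
For context, a deoptimizable call in our setup has the following general shape; the bundle operands in this sketch are made-up stand-ins for the live abstract-frame state the runtime needs, and in the reproducer below the bundle is simply empty:

  ; sketch only: %bci and %local0 are hypothetical values describing
  ; the abstract frame state recorded for deoptimization
  call void @callee() [ "deopt"(i32 %bci, i64 %local0) ]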


For example, for this input:

declare { i8*, i8* } @getCode()

define void @testFunc() {
entry:
 %0 = call { i8*, i8* } @getCode()        ; two pointers, returned in %rax/%rdx per the SysV ABI
 %1 = extractvalue { i8*, i8* } %0, 1     ; the second pointer (the code pointer)
 %2 = bitcast i8* %1 to void ()*
 call void %2() [ "deopt"() ]             ; indirect call with an empty deopt bundle
 ret void
}


We get the following machine code for x86_64:

_testFunc:                              ## @testFunc
   .cfi_startproc
## BB#0:                                ## %entry
   pushq    %rax
Lcfi0:
   .cfi_def_cfa_offset 16
   callq    _getCode
   callq    *%rax
Ltmp0:
   popq    %rax
   retq


Without the deopt operand bundle:

_testFunc:                              ## @testFunc
   .cfi_startproc
## BB#0:                                ## %entry
   pushq    %rax
Lcfi0:
   .cfi_def_cfa_offset 16
   callq    _getCode
   callq    *%rdx
   popq    %rax
   retq
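
In case it helps with reproduction: both listings come from a plain llc invocation, along the lines of the following (we target x86_64 macOS; flags beyond the triple should not matter much):

  llc -O2 -mtriple=x86_64-apple-macosx test.ll -o test.s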


For some reason, with the deopt operand bundle present, the wrong register is used for the second half of the value returned by getCode: %rax instead of %rdx. Under the x86-64 SysV calling convention a { i8*, i8* } return value comes back in the %rax/%rdx pair, so the extracted function pointer should live in %rdx, which is exactly the register the bundle-free version uses.

Is there something about this feature that I am not aware of?

Thanks in advance for your time,
Daniel Mihalyi

