[LLVMdev] Target independency using "opaque"? How to do it else?
Johannes Schaub - litb
schaub.johannes at googlemail.com
Wed Apr 6 06:11:02 PDT 2011
Hello all,
I'm writing a backend for our scripting-language compiler, and I'm currently
writing an IR module for the runtime library that contains some support
routines called by generated code.
The IR module contains calls to "malloc", which depend on the size of
"size_t". Since I don't know the target when writing the IR module for the
runtime library, I thought about using an "opaque" type instance in place of
"size_t". When loading the IR module, I would refine the "opaque" to either
i64 or i32, depending on which target I'm using.
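
Concretely, the load-time fixup I have in mind would look roughly like this
(an untested sketch against the 2.9-era C++ API; resolveSizeT is just a
made-up helper name):

#include "llvm/Module.h"
#include "llvm/DerivedTypes.h"
#include "llvm/Target/TargetData.h"
using namespace llvm;

// Made-up helper: once the TargetData is known, resolve the named
// %sizet_ty placeholder to the target's pointer-sized integer.
void resolveSizeT(Module &M, const TargetData &TD) {
  const Type *Placeholder = M.getTypeByName("sizet_ty");
  if (Placeholder && isa<OpaqueType>(Placeholder)) {
    // refineAbstractTypeTo should rewrite every use of the opaque
    // type in the module to the concrete type (i32 or i64 here).
    const Type *SizeTy = TD.getIntPtrType(M.getContext());
    cast<OpaqueType>(const_cast<Type *>(Placeholder))
        ->refineAbstractTypeTo(SizeTy);
  }
}

The idea would be to call this right after parsing the runtime .ll file,
before linking it with the generated module.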
For example, I currently have:
; these opaque types are replaced at load time by codegen::RuntimeLib
%sizet_ty = type opaque
%intptrt_ty = type opaque
; ... then in a function I do:
; compute sizeof(%value_ty) via the usual gep-on-null idiom
%sizeof_value_ty = ptrtoint %value_ty* getelementptr (%value_ty* null, i32 1) to i32
%numBytesToAlloc = mul i32 %num, %sizeof_value_ty
; convert the byte count to the placeholder type for the call to malloc
%numBytesSizeT = bitcast i32 %numBytesToAlloc to %sizet_ty
%memory = call i8* @malloc(%sizet_ty %numBytesSizeT)
However, compiling this .ll file to bitcode always fails at the bitcast
instruction with:

error: invalid cast opcode for cast from 'i32' to 'opaque'
It seems "opaque" cannot be used as this kind of placeholder. What else can
I do to achieve my goal? Thanks for any help!