[llvm-dev] Emulated TLS on x86_64 (Linux) with the JIT engine
Bastian Hossbach via llvm-dev
llvm-dev at lists.llvm.org
Wed Mar 8 02:59:41 PST 2017
Hi,
I'm trying to JIT-compile a part of an application at runtime using LLVM 3.9.1. The piece of code I want to JIT-compile accesses TLS variables and has been compiled offline into a bitcode file via clang 3.9.1. At runtime, that bitcode file gets loaded into a module. The module then gets passed to the following function:
void jit(llvm::Module *module) {
  // Request emulated TLS so TLS accesses are lowered to
  // __emutls_get_address() calls instead of native TLS relocations.
  llvm::TargetOptions opts;
  opts.EmulatedTLS = true;

  std::string err_str;
  llvm::ExecutionEngine *engine =
      llvm::EngineBuilder(std::unique_ptr<llvm::Module>(module))
          .setTargetOptions(opts)
          .setErrorStr(&err_str)
          .setEngineKind(llvm::EngineKind::JIT)
          .setMCJITMemoryManager(llvm::make_unique<llvm::SectionMemoryManager>())
          .setOptLevel(llvm::CodeGenOpt::Aggressive)
          .create();

  // Compile and apply relocations; JIT-compiled code is ready afterwards.
  engine->finalizeObject();
  ...
}
I already figured out that native TLS on x86_64 is not supported by the JIT and aborts with a corresponding LLVM error, so I switched to emulated TLS. However, this results in a new error: "LLVM ERROR: Program used external function '__emutls_v.xyz_' which could not be resolved!", where xyz is one of the TLS variables. If I understand correctly, every access to a TLS variable should be lowered to a call of the form __emutls_get_address(&__emutls_v.xyz), so those __emutls_v control variables should be defined somewhere rather than remaining unresolved externals.
Am I missing something? Is emulated TLS even supposed to work with the JIT engine on x86_64?
Thanks in advance,
Bastian