[LLVMdev] Dynamically loading native code generated from LLVM IR

Baris Aktemur baris.aktemur at ozyegin.edu.tr
Fri Oct 12 11:28:05 PDT 2012


On 12 Oct 2012, at 21:14, Baris Aktemur wrote:

> 
> On 12 Oct 2012, at 20:00, Jim Grosbach wrote:
> 
>> 
>> On Oct 12, 2012, at 7:07 AM, Baris Aktemur <baris.aktemur at ozyegin.edu.tr> wrote:
>> 
>>> Dear Tim,
>>> 
>>>> 
>>>> The JIT sounds like it does almost exactly what you want. LLVM's JIT
>>>> isn't a classical lightweight, dynamic one like you'd see for
>>>> JavaScript or Java. All it really does is produce a native .o file in
>>>> memory, take care of the relocations for you and then jump into it (or
>>>> provide you with a function-pointer). Is there any other reason you
>>>> want to avoid it?
>>>> 
>>> 
>>> Based on the experiments I ran, the JIT version runs significantly slower than the natively compiled code. But according to your explanation, this shouldn't happen. I wonder why I observed this performance difference.
>>> 
>> 
>> Did you compile the native version with any optimizations enabled?
> 
> 
> Yes. When I dump the IR, I get the same output as "clang -O3". Are the back-end optimizations enabled separately? 
> 

Sorry, I misunderstood the question. 
I compiled the native version with optimizations enabled, using "clang -shared -fPIC -O3".

In the version that uses the JIT, I build the IR, then run over it the same passes that "opt -O3" runs, to obtain optimized IR. After these passes run, I call ExecutionEngine::getPointerToFunction().
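For reference, the pipeline described above can be sketched roughly as below, using the LLVM C++ API of that era (legacy PassManager, JIT ExecutionEngine). This is a hypothetical sketch, not the poster's actual code: the names `M` (the Module) and `F` (the Function) are assumed context. Note that the IR-level passes ("opt -O3") and the back-end code generation level are configured separately; the JIT's codegen level defaults to CodeGenOpt::Default unless set explicitly.

```cpp
// Hypothetical sketch: run "opt -O3"-style IR passes, then JIT-compile
// with an explicitly chosen back-end optimization level.
#include "llvm/PassManager.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/JIT.h"
#include "llvm/Support/TargetSelect.h"

using namespace llvm;

// M and F are assumed to come from the caller (e.g. built via IRBuilder).
void *jitCompile(Module *M, Function *F) {
  InitializeNativeTarget();

  // IR-level optimizations, roughly equivalent to "opt -O3".
  PassManagerBuilder Builder;
  Builder.OptLevel = 3;
  PassManager PM;
  Builder.populateModulePassManager(PM);
  PM.run(*M);

  // Back-end (code generation) optimization is a separate knob:
  // CodeGenOpt::Aggressive corresponds to -O3 codegen.
  std::string Err;
  ExecutionEngine *EE = EngineBuilder(M)
                            .setErrorStr(&Err)
                            .setOptLevel(CodeGenOpt::Aggressive)
                            .create();
  if (!EE) return 0; // Err holds the failure reason.

  return EE->getPointerToFunction(F);
}
```

If the JITed code was built with the default CodeGenOpt level while the native build used "clang -O3", that alone could account for part of the performance gap.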


>> 
>>> Thank you.
>>> 
>>> -Baris Aktemur
>>> 
>>> 
>>> _______________________________________________
>>> LLVM Developers mailing list
>>> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
>>> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
>> 
> 




