[LLVMdev] Dynamically loading native code generated from LLVM IR

Jim Grosbach grosbach at apple.com
Fri Oct 12 11:17:16 PDT 2012


On Oct 12, 2012, at 11:14 AM, Baris Aktemur <baris.aktemur at ozyegin.edu.tr> wrote:

> 
> On 12 Eki 2012, at 20:00, Jim Grosbach wrote:
> 
>> 
>> On Oct 12, 2012, at 7:07 AM, Baris Aktemur <baris.aktemur at ozyegin.edu.tr> wrote:
>> 
>>> Dear Tim,
>>> 
>>>> 
>>>> The JIT sounds like it does almost exactly what you want. LLVM's JIT
>>>> isn't a classical lightweight, dynamic one like you'd see for
>>>> JavaScript or Java. All it really does is produce a native .o file in
>>>> memory, take care of the relocations for you and then jump into it (or
>>>> provide you with a function-pointer). Is there any other reason you
>>>> want to avoid it?
>>>> 
>>> 
>>> Based on the experiments I ran, the JIT version runs significantly slower than the natively compiled code. But according to your explanation, this shouldn't happen. I wonder why I'm seeing this performance difference.
>>> 
>> 
>> Did you compile the native version with any optimizations enabled?
> 
> 
> Yes. When I dump the IR, I get the same output as "clang -O3". Are the back-end optimizations enabled separately? 

Yes, but what I suspect is going on here is more about the code generation model. Specifically, that you're using SelectionDAGISel for static compilation and FastISel for JIT compilation. The latter generates code very quickly, as the name implies, but the quality of that code is generally pretty terrible compared to the static compiler's.
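
For reference, here's roughly what requesting better codegen looks like against the old JIT's EngineBuilder interface (just a sketch, not tested against your tree; the exact headers, default option values, and the "my_kernel" name are placeholders for whatever you actually have):

  #include "llvm/ExecutionEngine/ExecutionEngine.h"
  #include "llvm/ExecutionEngine/JIT.h"        // links in the JIT engine
  #include "llvm/Support/TargetSelect.h"
  using namespace llvm;

  // M is the Module holding your already-optimized IR.
  InitializeNativeTarget();

  TargetOptions Opts;
  Opts.EnableFastISel = false;                 // explicitly ask for the DAG selector

  std::string Err;
  ExecutionEngine *EE = EngineBuilder(M)
      .setEngineKind(EngineKind::JIT)
      .setErrorStr(&Err)
      .setTargetOptions(Opts)
      .setOptLevel(CodeGenOpt::Aggressive)     // codegen -O3, roughly what llc -O3 uses
      .create();

  // Codegen happens lazily, per function, the first time you ask for a pointer.
  void *FPtr = EE->getPointerToFunction(M->getFunction("my_kernel"));

If the gap doesn't close after that, comparing the JIT'd disassembly against what llc emits for the same IR should tell you whether instruction selection really is the difference.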

> 
> 
>> 
>>> Thank you.
>>> 
>>> -Baris Aktemur
>>> 
>>> 
>> 
> 



