[LLVMdev] Dynamically loading native code generated from LLVM IR

Kaylor, Andrew andrew.kaylor at intel.com
Thu Oct 18 17:46:58 PDT 2012


Hi Baris,

The FastISel instruction selector only gets used if the optimization level is set to None.  There's a hidden command line option that will disable it even then (see LLVMTargetMachine.cpp), but I don't think there's a way to get at it if you aren't using the standard command line option handling.
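
That said, one thing you could try (untested on my part) is to hand LLVM's option parser a synthetic argv yourself before creating the execution engine.  A rough sketch, where the function name, the "myjit" program name, and the exact flag spelling are my assumptions:

  #include "llvm/Support/CommandLine.h"

  // Sketch only: push the hidden flag from LLVMTargetMachine.cpp through
  // LLVM's own option parser, even though the host program has no real
  // command line handling of its own.  Call this once, early in main().
  static void disableFastISel() {
    const char *FakeArgv[] = { "myjit", "-fast-isel=false" };
    llvm::cl::ParseCommandLineOptions(2, const_cast<char **>(FakeArgv));
  }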

Anyway, my best guess as to the remaining performance difference you're seeing is that it has to do with the code model being used, though this is only a guess.  MCJIT supports a limited set of code models (because its relocation handling is still incomplete).  I'm not sure how much performance impact that has, but I expect it has some.  On x86 ELF-based systems it will only work with the static code model; I don't know about other architecture/object format combinations.
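
If you want to experiment, EngineBuilder does let you pin the relocation and code models explicitly on the builder from your snippet.  A sketch, with no promise that these particular values are the right ones for your target:

  // Sketch: set the models explicitly so the JIT and the static compile
  // are at least configured the same way.
  engineBuilder.setRelocationModel(llvm::Reloc::Static);
  engineBuilder.setCodeModel(llvm::CodeModel::Default);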

-Andy

-----Original Message-----
From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu] On Behalf Of Baris Aktemur
Sent: Wednesday, October 17, 2012 9:57 AM
To: Jim Grosbach
Cc: llvmdev at cs.uiuc.edu
Subject: Re: [LLVMdev] Dynamically loading native code generated from LLVM IR

Dear Jim,


On 12 Eki 2012, at 21:17, Jim Grosbach wrote:

> 
> On Oct 12, 2012, at 11:14 AM, Baris Aktemur <baris.aktemur at ozyegin.edu.tr> wrote:
> 
>> 
>> On 12 Eki 2012, at 20:00, Jim Grosbach wrote:
>> 
>>> 
>>> On Oct 12, 2012, at 7:07 AM, Baris Aktemur <baris.aktemur at ozyegin.edu.tr> wrote:
>>> 
>>>> Dear Tim,
>>>> 
>>>>> 
>>>>> The JIT sounds like it does almost exactly what you want. LLVM's 
>>>>> JIT isn't a classical lightweight, dynamic one like you'd see for 
>>>>> JavaScript or Java. All it really does is produce a native .o file 
>>>>> in memory, take care of the relocations for you and then jump into 
>>>>> it (or provide you with a function-pointer). Is there any other 
>>>>> reason you want to avoid it?
>>>>> 
>>>> 
>>>> Based on the experiments I ran, the JIT version runs significantly slower than the code compiled to native. But according to your explanation, that shouldn't happen. I wonder why I saw the performance difference.
>>>> 
>>> 
>>> Did you compile the native version with any optimizations enabled?
>> 
>> 
>> Yes. When I dump the IR, I get the same output as "clang -O3". Are the back-end optimizations enabled separately? 
> 
> Yes, but I suspect it's more the code generation path that's at play here. Specifically, you're using SelectionDAGISel for static compilation and FastISel for JIT compilation. The latter generates code very quickly, as the name implies, but the quality of that code is generally pretty terrible compared to the static compiler.
> 

Is there an option that I can pass to the (MC)JITer to force it to use SelectionDAGISel?

I'm also curious which passes/algorithms are used when I set the MCJIT option to true and the opt level to Aggressive. E.g.:

  engineBuilder.setUseMCJIT(true);
  engineBuilder.setOptLevel(llvm::CodeGenOpt::Aggressive);
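
One thing I may try, though I'm not sure it works outside of lli or in every build configuration, is asking the pass manager to print what it actually runs ("mytool" below is just a placeholder name):

  #include "llvm/Support/CommandLine.h"

  // Guess on my part: dump the pass structure the code generator sets up,
  // to see which algorithms run at this optimization level.
  const char *DbgArgv[] = { "mytool", "-debug-pass=Structure" };
  llvm::cl::ParseCommandLineOptions(2, const_cast<char **>(DbgArgv));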

I adapted lli.cpp to use MCJIT in my code. I get better performance now -- close to statically compiled native code, but still not exactly the same (about 10% slower). 
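
For reference, the setup I mean is roughly shaped like the sketch below (simplified, not my exact code; the custom memory manager I took from lli is omitted, and createEngine is just a name I picked):

  #include "llvm/Module.h"
  #include "llvm/ExecutionEngine/ExecutionEngine.h"
  #include "llvm/ExecutionEngine/MCJIT.h"
  #include "llvm/Support/TargetSelect.h"
  #include "llvm/Support/raw_ostream.h"

  // Simplified MCJIT setup adapted from lli.cpp; the custom JIT memory
  // manager that lli installs is left out here.
  llvm::ExecutionEngine *createEngine(llvm::Module *M) {
    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();  // MCJIT emits real object code

    std::string Err;
    llvm::ExecutionEngine *EE = llvm::EngineBuilder(M)
        .setErrorStr(&Err)
        .setUseMCJIT(true)
        .setOptLevel(llvm::CodeGenOpt::Aggressive)
        .create();
    if (!EE)
      llvm::errs() << "EngineBuilder failed: " << Err << "\n";
    return EE;
  }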

Thank you.

-Baris Aktemur

