[LLVMdev] LLVM 3.3 JIT code speed
Eli Friedman
eli.friedman at gmail.com
Thu Jul 18 10:07:23 PDT 2013
On Thu, Jul 18, 2013 at 9:07 AM, Stéphane Letz <letz at grame.fr> wrote:
> Hi,
>
> The LLVM IR emitted by our DSL (already optimized with an -O3-style IR-to-IR pass pipeline; an illustrative sketch of that kind of setup follows below) runs slower when executed with the LLVM 3.3 JIT than it did with LLVM 3.1. What could be the reason?
>
> I tried playing with TargetOptions, without any success…
>
> Here is the kind of code we use to allocate the JIT:
>
> EngineBuilder builder(fResult->fModule);
> builder.setOptLevel(CodeGenOpt::Aggressive);
> builder.setEngineKind(EngineKind::JIT);
> builder.setUseMCJIT(true);
> builder.setCodeModel(CodeModel::JITDefault);
> builder.setMCPU(llvm::sys::getHostCPUName());
>
> TargetOptions targetOptions;
> targetOptions.NoFramePointerElim = true;
> targetOptions.LessPreciseFPMADOption = true;
> targetOptions.UnsafeFPMath = true;
> targetOptions.NoInfsFPMath = true;
> targetOptions.NoNaNsFPMath = true;
> targetOptions.GuaranteedTailCallOpt = true;
>
> builder.setTargetOptions(targetOptions);
>
> TargetMachine* tm = builder.selectTarget();
>
> fJIT = builder.create(tm);
> if (!fJIT) {
> return false;
> }
> ….
>
> Any idea?
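For context, the "-O3-style IR-to-IR pass pipeline" referred to above is usually driven by PassManagerBuilder. Here is a minimal sketch of that setup against the LLVM 3.3 API (the Module* argument name is assumed, not taken from your code, and 275 is the inlining threshold opt uses at -O3):

#include "llvm/IR/Module.h"
#include "llvm/PassManager.h"
#include "llvm/Transforms/IPO.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"

void optimizeModule(llvm::Module* module) {
    llvm::PassManagerBuilder pmb;
    pmb.OptLevel = 3;                                    // -O3-style pipeline
    pmb.Inliner = llvm::createFunctionInliningPass(275); // inlining threshold commonly used at -O3

    llvm::FunctionPassManager fpm(module);
    llvm::PassManager mpm;
    pmb.populateFunctionPassManager(fpm);
    pmb.populateModulePassManager(mpm);

    // Run the per-function passes first, then the module-level passes.
    fpm.doInitialization();
    for (llvm::Module::iterator f = module->begin(), e = module->end(); f != e; ++f)
        fpm.run(*f);
    fpm.doFinalization();
    mpm.run(*module);
}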
It's hard to say much without seeing the specific IR and the code
generated from that IR.
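One way to capture both, given the snippet above, is something along these lines (a rough sketch: 'module' and 'tm' stand for fResult->fModule and the TargetMachine returned by selectTarget(); note that running the codegen passes this way can modify the module, so do it on a copy or before handing it to the JIT):

#include "llvm/IR/Module.h"
#include "llvm/PassManager.h"
#include "llvm/Support/FormattedStream.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Target/TargetMachine.h"

void dumpIRAndAsm(llvm::Module* module, llvm::TargetMachine* tm) {
    // The optimized IR exactly as the JIT sees it.
    module->print(llvm::errs(), 0);

    // Assembly produced by the same TargetMachine the JIT uses.
    llvm::PassManager pm;
    llvm::formatted_raw_ostream out(llvm::errs());
    if (!tm->addPassesToEmitFile(pm, out, llvm::TargetMachine::CGFT_AssemblyFile))
        pm.run(*module);
}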
-Eli