[llvm-dev] ORC v2 question

Dibyendu Majumdar via llvm-dev llvm-dev at lists.llvm.org
Tue Aug 13 15:11:42 PDT 2019


On Tue, 13 Aug 2019 at 22:58, Dibyendu Majumdar <mobile at majumdar.org.uk> wrote:
>
> On Tue, 13 Aug 2019 at 22:03, Lang Hames <lhames at gmail.com> wrote:
> > When you say your code is not getting optimized, do you mean that IR optimizations are not being applied, or that codegen optimizations are not being applied?
> >
> > What do you see if you dump the modules before/after running the pass manager on them, like this:
> >
> > dbgs() << "Before optimization:\n" << *M << "\n";
> > for (auto &F : *M)
> >   FPM->run(F);
> > dbgs() << "Before optimization:\n" << *M << "\n";
> >
> > I expect that output to be the same for both ORC and ORCv2. If not something is going wrong with IR optimization.
>
> Well, for ORCv2 there is no change before and after.

Okay, I had to put the "after" dump after MPM->run(*M).
Now I do get optimized IR.
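Roughly like this (just a sketch of what I ended up with -- FPM and MPM here
are the legacy function and module pass managers from my own setup, not
anything ORC provides):

dbgs() << "Before optimization:\n" << *M << "\n";
for (auto &F : *M)
  FPM->run(F);   // per-function passes (legacy::FunctionPassManager)
MPM->run(*M);    // module-level passes (legacy::PassManager)
dbgs() << "After optimization:\n" << *M << "\n";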

> I also get this message:
>
> JIT session error: Symbols not found: { raise_error }
>

So this must be the real issue: the IR is getting optimized, but then codegen is failing.
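For reference, here is roughly how I would expect raise_error to be made
resolvable in ORCv2 -- just a sketch, assuming raise_error is a plain C
function in the host process and that ES and DL are the ExecutionSession and
DataLayout from the usual custom setup (these names are mine, not ORC's):

// Map the JIT'd name "raise_error" to the address of the host function,
// using the same name mangling as the JIT'd code.
orc::MangleAndInterner Mangle(ES, DL);
orc::SymbolMap Symbols;
Symbols[Mangle("raise_error")] = JITEvaluatedSymbol(
    static_cast<JITTargetAddress>(reinterpret_cast<uintptr_t>(&raise_error)),
    JITSymbolFlags::Exported);
if (auto Err = ES.getMainJITDylib().define(orc::absoluteSymbols(std::move(Symbols))))
  ES.reportError(std::move(Err));

With something like that in place, the "Symbols not found" error for
raise_error should go away.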

> Yes, raise_error and all other extern functions are explicitly added as
> global symbols.
>
> >
> > CodeGen optimization seems a more likely culprit: JITTargetMachineBuilder and ExecutionEngineBuilder have different defaults for their CodeGen opt-level. JITTargetMachineBuilder defaults to CodeGenOpt::None, and ExecutionEngineBuilder defaults to CodeGenOpt::Default.
> >
> > What happens if you make the following modification to your setup?
> >
> > auto JTMB = llvm::orc::JITTargetMachineBuilder::detectHost();
> > JTMB->setCodeGenOptLevel(CodeGenOpt::Default); // <-- Explicitly set Codegen opt level
> > auto dataLayout = JTMB->getDefaultDataLayoutForTarget();
> >
>
> No change.
>
> Regards
