[llvm-dev] Mull JIT Design

Lang Hames via llvm-dev llvm-dev at lists.llvm.org
Mon Sep 17 09:01:13 PDT 2018


Hi Alex,

I'm replying to this on a new thread so as not to take the "LLVMContext:
Threads and Ownership" discussion too far off topic.

If you did want to fit your use case into ORC's model, I think the solution
would be to clone the module before adding it to the compile layer and (if
desired) save a copy of the compiled object and pass a non-owning memory
buffer to the linking layer.
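As a rough standalone sketch of that ownership flow (using plain
standard-library stand-ins for the Module and MemoryBuffer types; none of
these names are real ORC or LLVM APIs): the client keeps the original IR and
the compiled object alive, hands the "compile layer" a clone, and gives the
"linking layer" only a non-owning view:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <string_view>

// Stand-in for llvm::Module; the real type is much richer, of course.
struct Module { std::string IR; };

// Clone before handing the module off, so the original stays pristine
// for later analysis (compilation is a mutating operation).
std::unique_ptr<Module> cloneModule(const Module &M) {
  return std::make_unique<Module>(Module{M.IR});
}

// "Compilation" consumes its module and may mutate it; only the clone
// is affected. Returns an owned object-file blob.
std::string compile(std::unique_ptr<Module> M) {
  M->IR += " [lowered]";
  return "obj(" + M->IR + ")";
}

// The linking layer only ever sees a non-owning view of the object
// (modeling a non-owning MemoryBuffer), so the client can cache the
// object and re-link it any number of times.
void link(std::string_view Obj) { assert(!Obj.empty()); }
```

The client would keep both the original `Module` and the returned object
string alive, calling `link` on the cached object as often as needed while
continuing to analyze the untouched original.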

That said, if you are not using lazy compilation, remote compilation, or
concurrent compilation then using ORC would not buy you much.

> We also parallelize lots of things, but this could also work using Orc
> given that we only use the ObjectLinkingLayer.


In case it is of interest for your tool, here's the short overview of ORC's
new concurrency support: You can now set up a compiler dispatch function
for the JIT that will be called whenever something needs to be compiled,
allowing the compilation task to be run on a new thread. Compilation is
still triggered on symbol lookup as before, and the compile task is still
opaque to the JIT so that clients can supply their own. To deal with the
challenges that arise from this (e.g. what if two threads look up the same
symbol at the same time? Or two threads look up mutually recursive symbols
at the same time?) the new symbol table design guarantees the following
invariants for basic lookups: (1) Each symbol's compiler will be executed
at most once, regardless of the number of concurrent lookups made on it,
and (2) either the lookup will return an error, or if it succeeds then all
pointers returned will be valid (for reading/writing/calling, depending on
the nature of the symbol) regardless of how the compilation work was
dispatched. This means that you can have lookup calls coming in on multiple
threads for interdependent symbols, with compilers dispatched to multiple
threads to maximize performance, and everything will Just Work.
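As a minimal standalone sketch of invariant (1) only (the `SymbolTable`
class below is hypothetical and not the real ORC symbol table): concurrent
lookups of the same symbol run its compiler at most once, with later callers
blocking until the compiling thread publishes the address:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <map>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

class SymbolTable {
public:
  using Compiler = std::function<void *()>;

  void define(const std::string &Name, Compiler C) {
    std::lock_guard<std::mutex> L(M);
    Symbols[Name] = {std::move(C), nullptr, NotCompiled};
  }

  // First lookup triggers compilation; concurrent lookups of the same
  // symbol block until the one compiling thread publishes the address.
  void *lookup(const std::string &Name) {
    std::unique_lock<std::mutex> L(M);
    auto &S = Symbols.at(Name);
    if (S.State == NotCompiled) {
      S.State = Compiling;
      auto C = std::move(S.Compile);
      L.unlock();            // run the (possibly slow) compiler unlocked
      void *Addr = C();
      L.lock();
      S.Addr = Addr;
      S.State = Ready;
      CV.notify_all();       // wake any waiters blocked on this symbol
    } else {
      CV.wait(L, [&] { return S.State == Ready; });
    }
    return S.Addr;
  }

private:
  enum SymState { NotCompiled, Compiling, Ready };
  struct Sym { Compiler Compile; void *Addr; SymState State; };
  std::mutex M;
  std::condition_variable CV;
  std::map<std::string, Sym> Symbols;
};
```

Real ORC additionally handles errors, interdependent lookups, and compiler
dispatch across threads; this sketch only shows the "compiled at most once"
guarantee.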

If that sounds useful, there will be more documentation coming out in the
next few weeks, and I will be giving a talk on the new design at the
developer's meeting.

Cheers,
Lang.

On Mon, Sep 17, 2018 at 12:33 AM Alex Denisov <1101.debian at gmail.com> wrote:

> Hi Lang,
>
> Thank you for clarification.
>
> > Do you have a use-case for manually managing LLVM module lifetimes?
>
> Our[1] initial idea was to feed all the modules to a JIT engine, run some
> code,
> then do further analysis and modification of the modules, feed the new
> versions
> to a JIT engine and run some code again.
>
> It didn't work well with MCJIT because we did not want to give up
> ownership of the modules.
> Moreover, this initial approach did not work properly because, as you
> mentioned, compilation modifies modules, which does not suit our needs.
>
> Eventually, we decided to compile things on our own and communicate with
> the Orc JIT using only object files. Before compilation we make a clone of
> a module (simply by loading it from disk again), using the original module
> only for analysis.
> This way, we actually have two copies of a module in memory, with the
> second copy disposed immediately after compilation.
> It is also worth mentioning that our use case is AOT compilation, not
> really lazy JIT.
> As for the object files, we also cannot give up ownership because we have
> to reuse them many times. At least this is the current state, though I'm
> working on another improvement: loading all object files once and then
> re-executing the program many times.
>
> Also, we wanted to support several versions of LLVM. Because of drastic
> changes in the Orc APIs we decided to hand-craft another JIT engine based
> on your
> work. It is less powerful, but fits our needs very well:
>
>
> https://github.com/mull-project/mull/blob/master/include/Toolchain/JITEngine.h
>
> https://github.com/mull-project/mull/blob/master/lib/Toolchain/JITEngine.cpp
>
> We also parallelize lots of things, but this could also work using Orc
> given that we
> only use the ObjectLinkingLayer.
>
> P.S. Sorry for giving such a chaotic explanation, but I hope it shows our
> use case :)
>
> [1] https://github.com/mull-project/mull
>
> > On 17. Sep 2018, at 02:05, Lang Hames <lhames at gmail.com> wrote:
> >
> > Hi Alex,
> >
> > > We would like to deallocate contexts
> >
> > I did not look at the Orc APIs as of LLVM-6+, but I'm curious where this
> requirement comes from?
> >
> > The older generation of ORC APIs were single-threaded so in common use
> cases the client would create one LLVMContext for all IR created within a
> JIT session. The new generation supports concurrent compilation, so each
> module needs a different LLVMContext (including modules created inside the
> JIT itself, for example when extracting functions in the
> CompileOnDemandLayer). This means creating and managing context lifetimes
> alongside module lifetimes.
> >
> > Does Orc take ownership of the modules that are being JIT'ed? I.e., is
> it the same 'ownership model' that MCJIT has?
> >
> > The latest version of ORC takes ownership of modules until they are
> compiled, at which point it passes the Module's unique-ptr to a
> 'NotifyCompiled' callback. By default this just throws away the pointer,
> deallocating the module. It can be used by clients to retrieve ownership of
> modules if they prefer to manage the Module's lifetime themselves.
> >
> > I think the JIT users should take care of memory
> allocation/deallocation. Also, I believe that you have strong reasons to
> implement things this way :)
> > Could you please tell us more about the topic?
> >
> > Do you have a use-case for manually managing LLVM module lifetimes? I
> would like to hear more about that so I can factor it in to the design.
> >
> > There were two motivations for having the JIT take ownership by default
> (with the NotifyCompiled escape hatch for those who wanted to hold on to
> the module after it is compiled): First, it makes efficient memory
> management the default: Unless you have a reason to hold on to the module
> it is thrown away as soon as possible, freeing up memory. Second, since
> compilation is a mutating operation, it seemed more natural to have the
> "right-to-mutate" flow alongside ownership of the underlying memory:
> whoever owns the Module is allowed to mutate it, rather than clients having
> to be aware of internal JIT state to know when they could or could not
> operate on a module.
> >
> > Cheers,
> > Lang.
> >
> > On Sun, Sep 16, 2018 at 11:06 AM Alex Denisov <1101.debian at gmail.com>
> wrote:
> > Hi Lang,
> >
> > > We would like to deallocate contexts
> >
> >
> > I did not look at the Orc APIs as of LLVM-6+, but I'm curious where this
> requirement comes from?
> > Does Orc take ownership of the modules that are being JIT'ed? I.e., is
> it the same 'ownership model' that MCJIT has?
> >
> > I think the JIT users should take care of memory
> allocation/deallocation. Also, I believe that you have strong reasons to
> implement things this way :)
> > Could you please tell us more about the topic?
> >
> > > On 16. Sep 2018, at 01:14, Lang Hames via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
> > >
> > > Hi All,
> > >
> > > ORC's new concurrent compilation model generates some interesting
> lifetime and thread safety questions around LLVMContext: We need multiple
> LLVMContexts (one per module in the simplest case, but at least one per
> thread), and the lifetime of each context depends on the execution path of
> the JIT'd code. We would like to deallocate contexts once all modules
> associated with them have been compiled, but there is no safe or easy way
> to check that condition at the moment as LLVMContext does not expose how
> many modules are associated with it.
> > >
> > > One way to fix this would be to add a mutex to LLVMContext, and expose
> this and the module count. Then in the IR-compiling layer of the JIT we
> could have something like:
> > >
> > > // Compile finished, time to deallocate the module.
> > > // Explicit deletes used for clarity; we would use unique_ptrs in
> > > // practice.
> > > auto &Ctx = Mod->getContext();
> > > delete Mod;
> > > std::unique_lock<std::mutex> Lock(Ctx.getMutex());
> > > if (Ctx.getNumModules() == 0) {
> > >   Lock.unlock(); // release before destroying the mutex's owner
> > >   delete &Ctx;
> > > }
> > >
> > > Another option would be to invert the ownership model and say that
> each Module shares ownership of its LLVMContext. That way LLVMContexts
> would be automatically deallocated when the last module using them is
> destructed (providing no other shared_ptrs to the context are held
> elsewhere).
> > >
> > > There are other possible approaches (e.g. side tables for the mutex
> and module count) but before I spend too much time on it I wanted to see
> whether anyone else has encountered these issues or has opinions on
> solutions.
> > >
> > > Cheers,
> > > Lang.
> > > _______________________________________________
> > > LLVM Developers mailing list
> > > llvm-dev at lists.llvm.org
> > > http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
> >
>
>