<div dir="ltr"><div dir="ltr">Hi Alex,<div><br></div><div>I'm replying to this on a new thread so as not to take the LLVMContext: Threads and Ownership discussion too far off topic.</div><div><br></div><div>If you did want to fit your use case in to ORC's model I think the solution would be to clone the module before adding it to the compile layer and (if desired) save a copy of the compiled object and pass a non-owning memory buffer to the linking layer.</div><div><br></div><div>That said, if you are not using lazy compilation, remote compilation, or concurrent compilation then using ORC would not buy you much.</div><div><br></div><div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">We also parallelize lots of things, but this could also work using Orc given that we only use the ObjectLinkingLayer.</blockquote></div><div><br></div><div>In case it is of interest for your tool, here's the short overview of ORC's new concurrency support: You can now set up a compiler dispatch function for the JIT that will be called whenever something needs to be compiled, allowing the compilation task to be run on a new thread. Compilation is still triggered on symbol lookup as before, and the compile task is still opaque to the JIT so that clients can supply their own. To deal with the challenges that arise from this (e.g. what if two threads look up the same symbol at the same time? Or two threads look up mutually recursive symbols at the same time?) the new symbol table design guarantees the following invariants for basic lookups: (1) Each symbol's compiler will be executed at most once, regardless of the number of concurrent lookups made on it, and (2) either the lookup will return an error, or if it succeeds then all pointers returned will be valid (for reading/writing/calling, depending on the nature of the symbol) regardless of how the compilation work was dispatched. This means that you can have lookup calls coming in on multiple threads for interdependent symbols, with compilers dispatched to multiple threads to maximize performance, and everything will Just Work.<br></div><div><br></div><div>If that sounds useful, there will be more documentation coming out in the next few weeks, and I will be giving a talk on the new design at the developer's meeting.</div><div><br></div><div>Cheers,</div><div>Lang. </div><div><br></div><div><div class="gmail_quote"><div dir="ltr">On Mon, Sep 17, 2018 at 12:33 AM Alex Denisov <<a href="mailto:1101.debian@gmail.com" target="_blank">1101.debian@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">Hi Lang,<br>

Thank you for the clarification.

> Do you have a use-case for manually managing LLVM module lifetimes?

Our [1] initial idea was to feed all the modules to a JIT engine, run some code,
then do further analysis and modification of the modules, feed the new versions
to a JIT engine, and run some code again.

It didn't work well with MCJIT because we did not want to give up ownership
of the modules.
This initial approach also did not work properly because, as you mentioned,
compilation does modify modules, which does not suit our needs.

Eventually, we decided to compile things on our own and communicate with the
Orc JIT using only object files. Before compilation we make a clone of a module
(simply by loading it from disk again), using the original module only for analysis.
This way, we actually have two copies of a module in memory, with the second copy
disposed of immediately after compilation.
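
For reference, cloning in memory, as Lang suggests above, could look roughly like
the sketch below. It assumes llvm::CloneModule from llvm/Transforms/Utils/Cloning.h
in a recent LLVM, and compileToObject() is a hypothetical stand-in for our own
codegen step:

#include "llvm/IR/Module.h"
#include "llvm/Transforms/Utils/Cloning.h"

#include <memory>

using namespace llvm;

// Hypothetical stand-in for our own TargetMachine-based compilation step.
void compileToObject(Module &M);

// A minimal sketch: keep the original module for analysis and hand a
// throwaway clone to the compiler, so compilation never mutates the
// module we analyze.
void compileWithoutMutating(const Module &Original) {
  std::unique_ptr<Module> Clone = CloneModule(Original);
  compileToObject(*Clone); // The clone is disposed when it goes out of scope.
}
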
It is also worth mentioning that our use case is AOT compilation, not really lazy JIT.
As for the object files, we also cannot give up ownership because we have
to reuse them many times. At least this is the current state, though I'm working on
another improvement: loading all object files once and then re-executing the program
many times.
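
The non-owning buffer Lang mentions could look roughly like this. It assumes LLVM's
MemoryBuffer API; ObjectContents is just a hypothetical name for the buffer we keep
alive across runs:

#include "llvm/Support/MemoryBuffer.h"

#include <memory>

using namespace llvm;

// A minimal sketch: we keep ownership of the compiled object and hand the
// JIT a fresh non-owning view of it for every execution, so the same object
// file can be reused many times without copying it.
std::unique_ptr<MemoryBuffer> makeNonOwningView(const MemoryBuffer &ObjectContents) {
  return MemoryBuffer::getMemBuffer(ObjectContents.getMemBufferRef(),
                                    /*RequiresNullTerminator=*/false);
}
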

Also, we wanted to support several versions of LLVM. Because of the drastic
changes in the Orc APIs, we decided to hand-craft another JIT engine based on your
work. It is less powerful, but it fits our needs very well:

https://github.com/mull-project/mull/blob/master/include/Toolchain/JITEngine.h
https://github.com/mull-project/mull/blob/master/lib/Toolchain/JITEngine.cpp

We also parallelize lots of things, but this could also work using Orc given that we
only use the ObjectLinkingLayer.

P.S. Sorry for giving such a chaotic explanation, but I hope it shows our use case :)

[1] https://github.com/mull-project/mull

> On 17. Sep 2018, at 02:05, Lang Hames <lhames@gmail.com> wrote:
> 
> Hi Alex,
> 
> > We would like to deallocate contexts
> 
> I have not looked at the Orc APIs as of LLVM 6+, but I'm curious where this requirement comes from.
> 
> The older generation of ORC APIs was single-threaded, so in common use cases the client would create one LLVMContext for all IR created within a JIT session. The new generation supports concurrent compilation, so each module needs a different LLVMContext (including modules created inside the JIT itself, for example when extracting functions in the CompileOnDemandLayer). This means creating and managing context lifetimes alongside module lifetimes.
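> 
> (A minimal sketch of that per-module pairing, assuming the IR is parsed from disk with parseIRFile; loadModules and BitcodePaths are just placeholder names:)
> 
> #include "llvm/IR/LLVMContext.h"
> #include "llvm/IR/Module.h"
> #include "llvm/IRReader/IRReader.h"
> #include "llvm/Support/SourceMgr.h"
> 
> #include <memory>
> #include <string>
> #include <vector>
> 
> using namespace llvm;
> 
> // One LLVMContext per module, so each module can later be compiled on its
> // own thread without sharing a context across threads.
> void loadModules(const std::vector<std::string> &BitcodePaths,
>                  std::vector<std::unique_ptr<LLVMContext>> &Contexts,
>                  std::vector<std::unique_ptr<Module>> &Modules) {
>   for (const std::string &Path : BitcodePaths) {
>     auto Ctx = std::make_unique<LLVMContext>();
>     SMDiagnostic Err;
>     if (auto M = parseIRFile(Path, Err, *Ctx)) {
>       Modules.push_back(std::move(M));
>       Contexts.push_back(std::move(Ctx)); // Keep each context alive as long as its module.
>     }
>   }
> }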
> 
> Does Orc take ownership of the modules that are being JIT'ed? I.e., it is the same 'ownership model' as MCJIT has, am I right?
> 
> The latest version of ORC takes ownership of modules until they are compiled, at which point it passes the Module's unique_ptr to a 'NotifyCompiled' callback. By default this just throws away the pointer, deallocating the module. It can be used by clients to retrieve ownership of modules if they prefer to manage the Module's lifetime themselves.
> 
> I think the JIT users should take care of memory allocation/deallocation. Also, I believe that you have strong reasons to implement things this way :)
> Could you please tell us more about the topic?
> 
> Do you have a use-case for manually managing LLVM module lifetimes? I would like to hear more about that so I can factor it into the design.
> 
> There were two motivations for having the JIT take ownership by default (with the NotifyCompiled escape hatch for those who want to hold on to the module after it is compiled): First, it makes efficient memory management the default: unless you have a reason to hold on to the module, it is thrown away as soon as possible, freeing up memory. Second, since compilation is a mutating operation, it seemed more natural to have the "right to mutate" flow alongside ownership of the underlying memory: whoever owns the Module is allowed to mutate it, rather than clients having to be aware of internal JIT state to know when they could or could not operate on a module.
> 
> Cheers,
> Lang.
> 
> On Sun, Sep 16, 2018 at 11:06 AM Alex Denisov <1101.debian@gmail.com> wrote:
> Hi Lang,
> 
> > We would like to deallocate contexts
> 
> 
> I have not looked at the Orc APIs as of LLVM 6+, but I'm curious where this requirement comes from.
> Does Orc take ownership of the modules that are being JIT'ed? I.e., it is the same 'ownership model' as MCJIT has, am I right?
> 
> I think the JIT users should take care of memory allocation/deallocation. Also, I believe that you have strong reasons to implement things this way :)
> Could you please tell us more about the topic?
> 
> > On 16. Sep 2018, at 01:14, Lang Hames via llvm-dev <llvm-dev@lists.llvm.org> wrote:
> >
> > Hi All,
> >
> > ORC's new concurrent compilation model generates some interesting lifetime and thread-safety questions around LLVMContext: We need multiple LLVMContexts (one per module in the simplest case, but at least one per thread), and the lifetime of each context depends on the execution path of the JIT'd code. We would like to deallocate contexts once all modules associated with them have been compiled, but there is no safe or easy way to check that condition at the moment, as LLVMContext does not expose how many modules are associated with it.
> >
> > One way to fix this would be to add a mutex to LLVMContext, and expose it along with the module count. Then in the IR-compiling layer of the JIT we could have something like:
> >
> > // Compile finished, time to deallocate the module.
> > // Explicit deletes used for clarity; we would use unique_ptrs in practice.
> > auto &Ctx = Mod->getContext();
> > delete Mod;
> > std::lock_guard<std::mutex> Lock(Ctx.getMutex());
> > if (Ctx.getNumModules() == 0)
> >   delete &Ctx; // Last module is gone, so the context can go too.
> >
> > Another option would be to invert the ownership model and say that each Module shares ownership of its LLVMContext. That way LLVMContexts would be automatically deallocated when the last module using them is destructed (providing no other shared_ptrs to the context are held elsewhere).
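> >
> > (A rough, hypothetical sketch of that pairing; this is not the current API, since Module today only holds a plain reference to its context:)
> >
> > #include "llvm/IR/LLVMContext.h"
> > #include "llvm/IR/Module.h"
> >
> > #include <memory>
> >
> > // Each module would keep its context alive; the context is destroyed
> > // automatically when the last module referencing it goes away.
> > struct ModuleWithSharedContext {
> >   std::shared_ptr<llvm::LLVMContext> Ctx;
> >   std::unique_ptr<llvm::Module> Mod; // Declared after Ctx so it is destroyed first.
> > };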
> >
> > There are other possible approaches (e.g. side tables for the mutex and module count), but before I spend too much time on it I wanted to see whether anyone else has encountered these issues or has opinions on solutions.
> >
> > Cheers,
> > Lang.
> > _______________________________________________
> > LLVM Developers mailing list
> > llvm-dev@lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
> 