[LLVMdev] Memory clean for applications using LLVM for JIT compilation
d.bussink at gmail.com
Sat Mar 23 08:18:02 PDT 2013
One of the issues I found unintuitive is that when an ExecutionEngine is deallocated, the memory manager's destructor is also called. In my case this meant writing two objects: a per-request JIT memory manager and a global JIT memory manager.
The per-request JIT memory manager gets memory from the global manager, but both ended up implementing (almost) the entire memory manager API, because I couldn't find another way to do it. Also see the code here:
If there are better ways to clean up the code, that would be great, but having to do things like that is why I said the implementation felt dirty.
On Mar 19, 2013, at 01:56 , "Kaylor, Andrew" <andrew.kaylor at intel.com> wrote:
> I'm not sure I see why the delegating memory manager feels wrong to both of you. That's exactly the kind of usage model I would envision for clients that needed to handle multiple ExecutionEngines that had reason to share a memory manager. I'm saying this as an invitation to discussion, not to be argumentative. We certainly could change things if it would make the design better.
> Basically, the intention of the current design is to keep the ExecutionEngine ignorant of memory allocation details. I'm not sure it always works that way, but that's the intent. I believe that the decision to have the ExecutionEngine own the memory manager was made so that it would be self-contained and capable of cleaning up after itself.
> The JIT engine has a rather ad hoc history and is more complicated than anyone would want. It allocates and links together more things than it directly exposes and the lifecycles of different bits of memory aren't always obvious, especially if you get into re-JITing code. I'm not sure there's a lot that can be done in the general case to keep it clean other than to have the engine own everything.
> I don't believe that the JIT engine is designed to handle having its function memory moved, so even if you could work out the relocation issues I don't think it's a good idea.
> The MCJIT engine, on the other hand, is designed to have its memory moved on the fly (though at the granularity of sections, not functions). The MCJIT engine does not work with the PIC relocation model on 32-bit x86 targets, but it will let you reapply relocations with other models after you move things. There's some code in the lli utility that does this if you're interested.
> I'm not sure what the current maintenance model is for the older JIT engine. As near as I can tell, when it grows at all it is driven by people with specific needs making specific contributions, and as such it doesn't have a particularly coherent direction.
> I have more idealistic hopes for the MCJIT engine, and if you're interested in using it I'd be happy to talk to you about future design directions.
> -----Original Message-----
> From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu] On Behalf Of Frank Henigman
> Sent: Sunday, March 17, 2013 2:18 PM
> To: Dirkjan Bussink
> Cc: llvmdev at cs.uiuc.edu
> Subject: Re: [LLVMdev] Memory clean for applications using LLVM for JIT compilation
> Thanks for the reply, nice to have some validation. I thought of another approach which might be preferable:
> generate relocatable code, use a JITEventListener to grab each function and copy it to my own memory, let all the LLVM stuff die normally, then use my copy of the code.
> However, when I call setRelocationModel(Reloc::PIC_) on the engine builder I get code that segfaults.
> The assembly looks plausible at a glance, but I'm not really up to speed on x86 assembly.
> Is PIC supposed to work with JIT on X86-32?
> On Sat, Mar 16, 2013 at 11:35 AM, Dirkjan Bussink <d.bussink at gmail.com> wrote:
>> On Mar 7, 2013, at 20:48 , Frank Henigman <fjhenigman at google.com> wrote:
>>> I derived a class from JITMemoryManager which delegates everything to
>>> an instance made with CreateDefaultMemManager(). ExecutionEngine
>>> destroys the wrapper, but I keep the inner instance which did the
>>> actual work. Works, but seems a bit ugly. Did you find any other way?
>> I ended up implementing the exact same thing, it feels dirty but it has worked great for us so far.
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu http://llvm.cs.uiuc.edu