[LLVMdev] Question about memory allocation in JIT

Reid Kleckner rnk at mit.edu
Wed Jul 1 09:43:34 PDT 2009

On Wed, Jul 1, 2009 at 9:07 AM, Merkulov Aleksey<steel1.0 at mail.ru> wrote:
> Hello!  While working with the LLVM JIT compiler I found a small bug that I'd like to correct.  Namely, on some tests LLVM fails with the message "JIT: Ran out of space for generated machine code!"

I'm working on a patch to fix this, although I've heard DOE may change
the MachineCodeEmitter interface to make this less necessary.


> This error occurs because the test creates a big static array.  Global variables are placed into the memory block of the first function seen using the variable.  Moreover, when memory is allocated for a function, its size is not considered at all: although there is a block of code (in JITEmitter::startFunction) that can calculate the function's size, the allocator (in DefaultJITMemoryManager::startFunctionBody) ignores the predicted size.  So lli fails when a test uses a big static array.

According to the documentation I have read, it is in general not
possible to predict code size, but from that block of code it looks
like size prediction works for someone using a custom memory manager
that does pay attention to size on some platform.  I'm also working on
a patch to separate globals from code, but apparently Apple wants to
support the current behavior, even though it breaks
freeMachineCodeForFunction.
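To make the tradeoff concrete, here is a minimal, purely hypothetical sketch (not the LLVM API; all names are mine) of what "separating globals from code" means: code and global variables come from two distinct pools, so freeing a function's machine code cannot clobber globals that would otherwise have been emitted into the same block.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: a memory manager that keeps global variables in
// a data pool separate from the per-function code pool, so freeing a
// function's machine code leaves globals intact.
class SeparatedPoolManager {
  std::vector<uint8_t> CodePool, DataPool;
  size_t CodeUsed = 0, DataUsed = 0;

public:
  SeparatedPoolManager(size_t CodeSize, size_t DataSize)
      : CodePool(CodeSize), DataPool(DataSize) {}

  // Code allocations come from the (notionally executable) code pool.
  uint8_t *allocateCode(size_t Size) {
    if (CodeUsed + Size > CodePool.size())
      return nullptr; // out of space: caller must grow or retry
    uint8_t *P = CodePool.data() + CodeUsed;
    CodeUsed += Size;
    return P;
  }

  // Globals land in the data pool, which is never executable.
  uint8_t *allocateGlobal(size_t Size) {
    if (DataUsed + Size > DataPool.size())
      return nullptr;
    uint8_t *P = DataPool.data() + DataUsed;
    DataUsed += Size;
    return P;
  }

  // Freeing all emitted code does not disturb the data pool.
  void freeAllCode() { CodeUsed = 0; }
};
```

With the pools merged (as in the current behavior), the equivalent of freeAllCode() would also reclaim the globals, which is why freeMachineCodeForFunction breaks.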


> How would you advise correcting this bug?  It would be possible to allocate global variables in separate memory block(s) (in my opinion, it is not a good idea to place variables into an executable memory area), or to improve the allocator so that it considers the predicted function size.  However, with size prediction, computing the exact function size in the JIT may take time; it might be better to estimate the function size instead of computing it exactly.

Unless there is some DOE change that I'm not aware of that totally
changes the situation, I think the right thing to do is to fully
implement the API.  I don't believe emitting the binary from the
MachineFunction is much of a bottleneck, but I don't have hard data
to support that.
that says so.  All of the target-specific MachineCodeEmitters are
coded to retry on failure.  The problem is that right now the memory
manager doesn't know how to request more space from the OS.
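The retry protocol described above can be sketched as follows.  This is a hypothetical illustration (none of these names are from LLVM): the emitter asks the manager for space, and on overflow the manager grows its arena, standing in for requesting more memory from the OS, before the emitter retries.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: a memory manager whose arena can grow when the
// emitter runs out of space, instead of aborting.
struct GrowableManager {
  std::vector<uint8_t> Arena;
  explicit GrowableManager(size_t N) : Arena(N) {}
  size_t capacity() const { return Arena.size(); }
  // Stand-in for asking the OS for more memory (e.g. another mapping).
  void grow() { Arena.resize(Arena.size() * 2); }
};

// Emit a function of FnSize bytes, retrying on failure as the
// target-specific emitters do.  Returns the number of attempts taken.
int emitWithRetry(GrowableManager &MM, size_t FnSize) {
  int Attempts = 0;
  for (;;) {
    ++Attempts;
    if (FnSize <= MM.capacity())
      return Attempts;  // emission fit: done
    MM.grow();          // ran out of space: grow the arena and retry
  }
}
```

The missing piece in the current DefaultJITMemoryManager is exactly the grow() step: the emitters already retry, but the manager has nowhere to get more space from.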
