[LLVMdev] FP Constants spilling to memory in x86 code generation

Chris Lattner sabre at nondot.org
Mon Jan 10 10:28:57 PST 2005


Oops, sorry for the major delay on this patch, it fell into the vortex 
that is my mailbox.  I've applied it and have some comments below.

On Mon, 3 Jan 2005, Morten Ofstad wrote:
> Chris Lattner wrote:
>> 3. This will fail if the JIT wants to allocate more than 512K of
>>    constants.  Can you just have it allocate another block of memory if it
>>    runs out of space?  Also, it might be useful to start the initial block
>>    much smaller, say 4K of memory, and double it when space is
>>    exhausted, as most programs don't use 1/2 meg of constant pools :)
>
> I just wanted to keep it simple - the allocation of memory for functions is 
> done in the same way, grab a huge block and hope it's enough. I think the 
> whole JITMemoryManager needs to be improved, this is just a temporary 
> solution to the memory leak problem. My philosophy is that if you can't do it 
> properly, at least change as little as possible...

That sounds good.  The reason that I was pointing this out is that the 
previous code would correctly handle an unbounded number of constants, so 
it's technically a regression.  However, it's not going to be hit in 
practice, so I think it's fine.  If you want to keep improving the JIT MM, 
please do! :)
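
For reference, roughly what I had in mind is something like this (just a 
sketch with made-up names, not the actual JITMemoryManager interface): 
start with a small block, and when an allocation doesn't fit, grab another 
block and double the size you'll ask for next time.

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch only -- not the real JITMemoryManager code.
struct GrowableConstantPool {
  std::vector<char *> Blocks;     // every block we've allocated, so they can be freed later
  std::size_t NextBlockSize;      // start at 4K instead of 512K
  char *Cur, *End;

  GrowableConstantPool() : NextBlockSize(4096), Cur(0), End(0) {}

  char *allocate(std::size_t Size, std::size_t Align) {
    char *P = alignPtr(Cur, Align);
    if (!Cur || P + Size > End) {
      // Out of space: grab another block big enough for this request, and
      // double the size we'll ask for the next time we run out.
      while (NextBlockSize < Size + Align)
        NextBlockSize *= 2;
      char *Block = new char[NextBlockSize];
      Blocks.push_back(Block);
      Cur = Block;
      End = Block + NextBlockSize;
      NextBlockSize *= 2;
      P = alignPtr(Cur, Align);
    }
    Cur = P + Size;
    return P;
  }

  static char *alignPtr(char *P, std::size_t Align) {
    std::uintptr_t V = reinterpret_cast<std::uintptr_t>(P);
    return reinterpret_cast<char *>((V + Align - 1) & ~std::uintptr_t(Align - 1));
  }
};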

>>> Later on I'm going to need either a way of freeing memory for 
>>> functions/constant pools or a way of recovering from out of memory, as our 
>>> application is going to run as a server and hopefully be happily JIT'ing 
>>> away for days on end.
>> 
>> There is a (currently unimplemented) method for doing this:
>> ExecutionEngine::freeMachineCodeForFunction.
>> 
>> It should be straight-forward to free the memory for a function, though it 
>> will make the JITEmitter a bit more complex (it will have to track regions 
>> of freed memory) to reallocate them.
>
> I think the reason it's still unimplemented is that it's not at all 
> straightforward. The problem is that the amount of memory needed to compile 
> a function is only known _after_ the function is compiled. The current system 
> just writes the functions one after another into a large block of memory, but 
> if you want to re-use free'd space you need to know in advance that it's 
> large enough to hold your function.

Actually, it is possible.  The target code generator currently emits a big 
blob of bits plus a set of relocations.  It should be possible to move the 
blob of bits wherever we like before applying the relocations.  The code 
doesn't do this yet only because the relocation information didn't exist 
when it was written.
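
In other words, something like this (an illustrative sketch with a made-up 
Reloc type, not the real relocation classes): copy the bits to their final 
home first, and only then resolve the fixups against that address.

#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical types for illustration only.
struct Reloc {
  std::size_t Offset;   // where in the blob the fixup goes
  void *Target;         // what it should point at
};

void *placeFunction(const std::vector<unsigned char> &Bits,
                    const std::vector<Reloc> &Relocs, void *FinalAddr) {
  // 1. Move the emitted bits wherever we decided to put them.
  std::memcpy(FinalAddr, &Bits[0], Bits.size());
  // 2. Only now resolve the relocations, relative to the final address.
  for (std::size_t i = 0, e = Relocs.size(); i != e; ++i) {
    void **Slot = reinterpret_cast<void **>(
        static_cast<unsigned char *>(FinalAddr) + Relocs[i].Offset);
    *Slot = Relocs[i].Target; // absolute fixup; a PC-relative one would
                              // subtract the slot's own address instead
  }
  return FinalAddr;
}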

> Another possible solution is to compile the functions to a buffer and then 
> move them to the smallest free block which is big enough to contain the 
> function when the compilation is finished and you know the size of the 
> function. This approach requires relocation information to be generated as 
> part of the compilation process.

We do have that relocation info, but I think it would be better to follow 
the approach we almost have now.  IOW, we want to emit the code to a spot, 
*hoping* to have enough space for it.  The change would be to say "oops, 
we ran out of space halfway through.  Let's copy what we have somewhere 
larger, then keep going".  The advantage here is that (in the common case) 
no copy is required.
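
Roughly the shape of it (again just a sketch with made-up names, assuming 
one byte emitted at a time and a fresh block per function; the real 
JITEmitter shares blocks between functions and would re-apply the 
relocations against the new address at the end):

#include <cstddef>
#include <cstring>

// Hypothetical sketch, not the real JITEmitter.
struct GrowingEmitter {
  unsigned char *BlockStart, *BlockEnd; // current block of memory
  unsigned char *FnStart, *CurPtr;      // function being emitted / next byte

  GrowingEmitter() : BlockStart(0), BlockEnd(0), FnStart(0), CurPtr(0) {}

  void startFunction(std::size_t SizeGuess) {
    BlockStart = new unsigned char[SizeGuess];
    BlockEnd = BlockStart + SizeGuess;
    FnStart = CurPtr = BlockStart;
  }

  void emitByte(unsigned char B) {
    if (CurPtr == BlockEnd) {
      // "Oops, we ran out of space halfway through": copy the partially
      // emitted function into a larger block and keep going.  In the common
      // case this path is never taken, so no copy happens.
      std::size_t Emitted = CurPtr - FnStart;
      std::size_t NewSize = 2 * (BlockEnd - BlockStart);
      unsigned char *NewBlock = new unsigned char[NewSize];
      std::memcpy(NewBlock, FnStart, Emitted);
      delete[] BlockStart; // sketch only; real code couldn't free a block
                           // that still holds earlier functions
      BlockStart = NewBlock;
      BlockEnd = NewBlock + NewSize;
      FnStart = NewBlock;
      CurPtr = NewBlock + Emitted;
    }
    *CurPtr++ = B;
  }
};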

-Chris

-- 
http://nondot.org/sabre/
http://llvm.cs.uiuc.edu/
