[cfe-dev] ld taking too much memory to link clang

Renato Golin Linaro renato.golin at linaro.org
Tue Jan 22 12:54:58 PST 2013


On 22 January 2013 18:24, Karen Shaeffer <shaeffer at neuralscape.com> wrote:

> The kernel reserves a chunk of memory for itself as working space to dig
> out
> of an out-of-memory situation. So it will swap before running out of
> memory,
> especially if there is a large spike in memory usage at that point where
> out-of-memory is approached.
>

I think the main issue is that it's normally OK to compile using all CPUs
(or even 1.5x the number of CPUs), but it's NOT OK to link with that many
parallel jobs, since each linker instance needs far more memory. I've
looked for a solution to this and haven't yet found a decent way. Maybe
someone here knows...

You could use a wrapper script that only starts ld once no other instance
is running (or something like that), so that even with "make -j16" you'd
only have a handful of ld processes at a time. It works, but it's highly
hacky.

This is a modern problem: in 2000, people rarely had multi-core machines
on their desks, but nowadays even Pandaboards are SMP. It's not surprising
that Make and others don't have a general solution to this, but it would
be an interesting (and very useful) project to allow Make/SCons/Ninja to
accept -jc for compilation and -jl for linking (keeping -j for both and
everything else).
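For what it's worth, Ninja's "pool" feature implements roughly this split: a rule can be assigned to a pool with a fixed depth, capping how many jobs of that rule run concurrently regardless of the global -j value. A sketch of a build.ninja fragment (the pool and rule names are illustrative):

```
pool link_pool
  depth = 2

rule link
  command = clang++ -o $out $in
  pool = link_pool
```

With this, "ninja -j16" still compiles 16-wide, but at most two link commands run at once.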

cheers,
--renato