[cfe-dev] ld taking too much memory to link clang

Karen Shaeffer shaeffer at neuralscape.com
Tue Jan 22 16:30:18 PST 2013


On Tue, Jan 22, 2013 at 08:54:58PM +0000, Renato Golin Linaro wrote:
> On 22 January 2013 18:24, Karen Shaeffer <shaeffer at neuralscape.com> wrote:
> 
> > The kernel reserves a chunk of memory for itself as working space to
> > dig out of an out-of-memory situation. So it will swap before running
> > out of memory, especially if there is a large spike in memory usage at
> > that point where out-of-memory is approached.
> >
> 
> I think the main issue is that it's normally ok to compile using all CPUs
> (or even 1.5x the number of CPUs), but it's NOT ok to link with that many
> parallel jobs. I've looked for a solution to this and have not yet found a
> decent way. Maybe someone here knows...
> 
> You could use a script that only starts ld once no other ld is running (or
> something like that), so that even at "make -j16" you'd only have a handful
> of lds running. It works, but it's highly hacky.
> 
> This is a modern problem: in 2000 people rarely had multi-core machines on
> their desks, but nowadays even Pandaboards are SMP. It's not surprising
> that Make and others don't have a general solution to this, but it would
> be an interesting (and very useful) project to allow Make/SCons/Ninja to
> have -jc for compilation and -jl for linking (keeping -j for both and
> others).

Hi,
I think your ideas are good. You might want to fully generalize the issue
and monitor actual system memory usage and availability when deciding when
to run the linker. You can do that easily in real time:

cat /proc/meminfo
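
A script only needs a few of the fields reported there. Summing MemFree,
Buffers, and Cached gives a rough figure for how much memory a new process
could claim; treating those three fields as "available" is a heuristic on
my part, not an exact measure:

awk '/^(MemFree|Buffers|Cached):/ { sum += $2 }
     END { print sum " kB" }' /proc/meminfo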

You can also monitor an individual process's memory usage through
/proc/<pid>/..., but the system-wide stats are more appropriate here.
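
Combining the two ideas, a wrapper installed in front of the real linker
could both serialize links (so "make -j16" never runs more than one ld at
a time) and block until memory has been reclaimed before starting one. A
rough, untested sketch; the threshold, lock path, and linker path are
placeholders you would tune:

#!/bin/sh
# ld wrapper: one link at a time, and only when enough memory is free.
MIN_KB=2097152              # placeholder: require ~2 GiB before linking
LOCK=/tmp/ld-wrapper.lock   # placeholder lock file

free_kb() {
    awk '/^(MemFree|Buffers|Cached):/ { sum += $2 }
         END { print sum }' /proc/meminfo
}

(
    flock 9                            # serialize via flock(1) from util-linux
    while [ "$(free_kb)" -lt "$MIN_KB" ]; do
        sleep 5                        # poll until memory frees up
    done
    exec /usr/bin/ld "$@"              # placeholder path to the real linker
) 9>"$LOCK"

Holding the lock for the whole link also avoids the race where several lds
each see enough free memory and start at once.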

HTH,
Karen
-- 
Karen Shaeffer
Neuralscape, Mountain View, CA 94040


