[llvm] r211315 - Use lib/LTO directly in the gold plugin.

Rafael Espíndola rafael.espindola at gmail.com
Tue Jul 8 09:16:07 PDT 2014


>> * Don't read the full bitcode until all of symbol resolution is done.
> I don’t know enough about the bitcode format to know how lazy the expansion can
> be, especially the references to undefined symbols.
> Another approach would be to add a new optional chunk to the start of bitcode files
> which is only generated with -flto compiler mode.  The new chunk is just a list of the
> symbols defined and referenced by the bitcode.  The linker just parses the
> optional chunk to get everything it needs.

The format itself allows us to be as lazy as we want. The
limitation is the API we have and what any particular piece of code
expects to have been loaded already. I think it is practical to
handle lazy loading of metadata during linking, at least.
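As a rough sketch of what I mean (using the C++ spellings of the lazy
reader, which have shifted between releases; the helper name and the
error handling are mine):

  #include "llvm/Bitcode/BitcodeReader.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Support/MemoryBuffer.h"

  // Open a bitcode file lazily and materialize a single function
  // body. Everything else (other bodies, most of the metadata)
  // stays on disk until someone asks for it.
  llvm::Expected<std::unique_ptr<llvm::Module>>
  loadOneBody(llvm::MemoryBufferRef Buf, llvm::LLVMContext &Ctx,
              llvm::StringRef Name) {
    auto MOrErr =
        llvm::getLazyBitcodeModule(Buf, Ctx,
                                   /*ShouldLazyLoadMetadata=*/true);
    if (!MOrErr)
      return MOrErr.takeError();
    std::unique_ptr<llvm::Module> M = std::move(*MOrErr);
    if (llvm::Function *F = M->getFunction(Name))
      if (llvm::Error E = F->materialize()) // read just this body
        return std::move(E);
    return std::move(M);
  }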

I would like to avoid having a bitcode format variation. It should
always be possible to do LTO with a hand-written .ll file, for example.

>
>> * Drop duplicated comdats/weak symbols even before passing the modules
>> to lib/Linker.
> Seems like to do this, you need to have already parsed the bitcode into functions
> and then just not add some functions to the merged module.  If we had the
> symbol info chunk I described above, could we have a new variant of the
> bitcode reader that takes a list of names to not extract?

This already works today (I am about to send a patch that uses it).
The only annoyance is that the lazy loading API cannot distinguish
between a function without a body and one whose body has not been
loaded yet, but that is fixable.
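The distinction one wants is roughly this (a sketch; isMaterializable()
does exist on GlobalValue, but the helper names are made up):

  #include "llvm/IR/Function.h"

  // After lazy loading, a Function with no basic blocks can be
  // either a true external declaration or a definition whose body
  // is still sitting unread in the bitcode. isMaterializable() is
  // what tells the two apart.
  bool hasUnreadBody(const llvm::Function &F) {
    return F.empty() && F.isMaterializable();
  }

  bool isTrueDeclaration(const llvm::Function &F) {
    return F.empty() && !F.isMaterializable();
  }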

>
>> * Have at most 2 loaded bitcode files at once.
> I can see wanting to do that to reduce memory growth.  But that is somewhat
> at odds with lld's current parallel reading of ELF/Mach-O/COFF files. If we
> had the symbol chunk, lld could do all its work on just proxy atoms based
> on the symbol chunk.  Then after it has been decided what to keep, lld can
> serially expand each bitcode file, limited to the parts that are actually needed,
> and merge that into the main module.

This is very similar to what I am trying to do with the gold plugin,
actually. We get called back once gold knows what symbols it wants
from every file. The implementation I am doing reads one file at a
time to save memory, but since the decision of what to keep has
already been made, we should get a deterministic result by reading
the modules in parallel and doing a tree merge.
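Something along these lines, assuming lib/Linker's module-level entry
point (the helper name is invented, the per-module reads are what you
would farm out to threads, and error handling is elided):

  #include "llvm/IR/Module.h"
  #include "llvm/Linker/Linker.h"
  #include <memory>
  #include <vector>

  // Merge modules pairwise, tree style. Each round halves the
  // number of live modules, and because every pairwise link sees
  // only its two inputs, the shape of the tree (not the order in
  // which files finished loading) determines the result.
  // Assumes a non-empty input vector.
  std::unique_ptr<llvm::Module>
  treeMerge(std::vector<std::unique_ptr<llvm::Module>> Mods) {
    while (Mods.size() > 1) {
      std::vector<std::unique_ptr<llvm::Module>> Next;
      for (size_t I = 0; I + 1 < Mods.size(); I += 2) {
        // linkModules returns true on error; errors elided here.
        llvm::Linker::linkModules(*Mods[I], std::move(Mods[I + 1]));
        Next.push_back(std::move(Mods[I]));
      }
      if (Mods.size() % 2) // the odd module survives to the next round
        Next.push_back(std::move(Mods.back()));
      Mods = std::move(Next);
    }
    return std::move(Mods.front());
  }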

> For ease of adoption, the ld64 model has always been that you can mix and
> match -flto files with non-LTO (Mach-O) files.

Same for gold. This is a very important feature IMHO.

> But this has made the linker’s job
> much more difficult.  For instance suppose you have a bitcode file which defines
> a bunch of functions.  One of the functions is _foo and _foo calls _bar.  In fact
> it is the only use of _bar in the bitcode.  When ld64 reads the bitcode, it sees
> the use of _bar, so the linker looks and finds _bar defined in a Mach-O archive,
> so the linker loads that member, which in turn drags in more stuff.  Now
> when libLTO code generation happens, it turns out that _foo is not used, so
> libLTO removes _foo.  But now the linker has _bar (and everything it dragged in)
> which is unnecessary, so the linker has to do another pass at dead code
> stripping.  This is all because libLTO provides a very coarse grain view of the
> contents of a bitcode file. But to get a finer grain view, the bitcode needs to be
> fully parsed early on, which slows down the link…

Well, I think that is going to happen in any implementation that
mixes native object files and bitcode files. The LLVM optimizations
expose further optimizations that the linker can do.
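To make the _foo/_bar case concrete, the step that creates it looks
roughly like this on our side (legacy pass manager spelling, whose
signatures have varied across releases; the preservation predicate
here is purely illustrative):

  #include "llvm/IR/LegacyPassManager.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Transforms/IPO.h"

  // Once symbol resolution says nothing outside the merged module
  // needs _foo, internalize marks it internal and GlobalDCE deletes
  // it, taking the only call to the native _bar with it. The archive
  // member the linker already pulled in for _bar is now dead weight,
  // hence the second dead-stripping pass.
  void internalizeAndDCE(llvm::Module &M) {
    llvm::legacy::PassManager PM;
    PM.add(llvm::createInternalizePass(
        [](const llvm::GlobalValue &GV) {
          return GV.getName() == "main"; // keep only the entry point
        }));
    PM.add(llvm::createGlobalDCEPass());
    PM.run(M);
  }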

Cheers,
Rafael



