[llvm-dev] Please dogfood LLD
George Rimar via llvm-dev
llvm-dev at lists.llvm.org
Tue Mar 21 01:17:28 PDT 2017
>> As to the Phoronix benchmark results, I tried linking fftw as an example
>> and got a different result. In my test, ld.bfd, ld.gold and ld.lld are all
>> on par, and that is reasonable because compile time is the dominant factor
>> in building fftw. In the Phoronix tests, LLD was slower than the GNU
>> linkers when linking fftw, and I don't know why. The Phoronix tests
>> measured the entire build time (as opposed to the time that the linker
>> actually consumed), which seems too noisy to me. (Michael, if
>> you are watching this, you probably want to measure only the time spent by
>> the linker?)
>>
>
>Unfortunately, it is not very easy to measure just the link time in a build (or
>just the non-link time). I have had to do this in the past and there is no
>universally applicable solution, so I wouldn't expect Phoronix (which does
>high-level black-box testing) to do this.
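One workaround I can think of is to interpose a wrapper for the linker and let it
log the wall-clock time of every link during a build. A rough sketch (the real
linker path and the log file are placeholders; GNU date and bc are assumed):

#!/bin/sh
# Hypothetical "ld" wrapper: put it in a directory that precedes the real
# linker in PATH (or point the build at it via LD=... / -B), so every link
# goes through it and gets its wall-clock time appended to a log.
REAL_LD=/usr/bin/ld.lld        # the linker actually under test
start=$(date +%s.%N)           # GNU date: seconds with nanoseconds
"$REAL_LD" "$@"
status=$?
end=$(date +%s.%N)
echo "$(echo "$end - $start" | bc) $*" >> /tmp/link-times.log
exit $status

Summing the first column of the log after a full build then gives the time spent
purely in the linker, independent of what the build system reports.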
FWIW, both gold and bfd support the -stats command line option. Example output:
ld.bfd -stats lib/libLLVMMCParser.a -o ooo
ld.bfd: warning: cannot find entry symbol _start; not setting start address
ld.bfd: total time in link: 0.000000
ld.bfd: data size 274432
ld.gold -stats lib/libLLVMMCParser.a -o ooo
ld.gold: initial tasks run time: (user: 0.000000 sys: 0.000000 wall: 0.000000)
ld.gold: middle tasks run time: (user: 0.000000 sys: 0.000000 wall: 0.000000)
ld.gold: final tasks run time: (user: 0.000000 sys: 0.000000 wall: 0.000000)
ld.gold: total run time: (user: 0.000000 sys: 0.000000 wall: 0.000000)
ld.gold: total space allocated by malloc: 606208 bytes
ld.gold: total bytes mapped for read: 9509982
ld.gold: maximum bytes mapped for read at one time: 9509982
ld.gold: archive libraries: 1
ld.gold: total archive members: 9
ld.gold: loaded archive members: 0
ld.gold: lib groups: 0
ld.gold: total lib groups members: 0
ld.gold: loaded lib groups members: 0
ld.gold: output file size: 648 bytes
ld.gold: symbol table entries: 3; buckets: 1031
ld.gold: symbol table stringpool entries: 3; buckets: 1031
ld.gold: symbol table stringpool Stringdata structures: 1
ld.gold: section name pool entries: 4; buckets: 11
ld.gold: section name pool Stringdata structures: 1
ld.gold: output symbol name pool entries: 3; buckets: 5
ld.gold: output symbol name pool Stringdata structures: 0
ld.gold: dynamic name pool entries: 0; buckets: 2
ld.gold: dynamic name pool Stringdata structures: 0
ld.gold: total free lists: 0
ld.gold: total free list nodes: 0
ld.gold: calls to Free_list::remove: 0
ld.gold: nodes visited: 0
ld.gold: calls to Free_list::allocate: 0
ld.gold: nodes visited: 0
Looks like it is possible to see the total run time for both of them.
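I don't know of an equivalent switch in lld, so for it the closest thing is
probably to re-run the exact link command line under /usr/bin/time, e.g.:

/usr/bin/time -v ld.lld lib/libLLVMMCParser.a -o ooo

(-v is GNU time's verbose mode, which also reports the maximum resident set
size, so you get a rough memory comparison against gold's malloc statistics
for free.)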
>That being said, like you
>mentioned, a small part of the overall (clean / full-rebuild) build time is
>spent linking for most projects, so I agree that any significant changes in
>the numbers by switching linkers are suspect (especially between LLD and
>gold; I've seen BFD be "crazy slow" sometimes).
>
>The main thing I can think of is that, for example, if the projects' build
>system detects whether the compiler/linker supports LTO (by actually properly
>testing it, not like Stockfish, which pretty much hardcodes it, causing the
>LLD build to fail in this batch of tests), then the LLD links might end up
>with non-LTO builds and the gold/BFD builds could be doing LTO.
>
>-- Sean Silva
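For what it's worth, probing for LTO support properly is cheap. A minimal
configure-style sketch (the compiler, linker and flags here are placeholders
for whatever the project actually uses):

cat > conftest.c <<'EOF'
int main(void) { return 0; }
EOF
# Only enable -flto if a full compile+link with it actually succeeds.
if clang -flto -fuse-ld=lld conftest.c -o conftest >/dev/null 2>&1; then
  echo "LTO supported"
else
  echo "LTO not supported"
fi
rm -f conftest.c conftest

A build system that does this instead of hardcoding the flag would have fallen
back to a non-LTO build cleanly rather than failing as Stockfish did.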
George.