<div dir="ltr"><div dir="ltr">On Tue, Apr 23, 2019 at 6:12 PM Peter Smith <<a href="mailto:peter.smith@linaro.org">peter.smith@linaro.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Tue, 23 Apr 2019 at 02:50, Rui Ueyama via llvm-dev<br>
<<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>> wrote:<br>
><br>
> On Tue, Apr 23, 2019 at 3:53 AM David Blaikie <<a href="mailto:dblaikie@gmail.com" target="_blank">dblaikie@gmail.com</a>> wrote:<br>
>><br>
>> Sounds pretty good to me. An interesting case that occurs to me is<br>
>> situations where there are many libraries specified, but only a few are<br>
>> used. In the current scheme, the scalability of the resolution<br>
>> algorithm isn't dependent on the total number of libraries, is it? (It<br>
>> stops once all symbols are resolved - potentially not visiting many<br>
>> unused libraries)<br>
><br>
><br>
> The cost of the current algorithm is actually linear in the number of libraries, because we insert all symbols from archive files' symbol tables into the hash table (so that, when we see an undefined symbol, we can tell whether some archive defines a symbol of the same name).<br>
><br>
> That being said, the proposed new algorithm would do a bit more work than the current one. In the new algorithm, we insert all symbols, including undefined ones, into the symbol table, while an archive file's symbol table contains only defined symbols. So, if we have a lot of object files in archives that end up not being used, and if we run the linker on a machine with a small number of cores, the new algorithm could in theory be slower. I'd expect the break-even point to not be that high, so I don't think this is a problem in practice, but that's something I'd like to confirm by experimenting.<br>
<br>
I think that a more likely occurrence than unused libraries is sparse<br>
use of members in libraries. For example, a large utility library could<br>
be included for the sake of only one or two members. Although the<br>
computational overhead may be small, the extra file access time may<br>
dominate in those cases, especially on a first link. Finding a<br>
representative set of programs will be interesting.<br></blockquote><div><br></div><div>Archive files' symbol tables are at the beginning of the files, so parsing them indeed has better locality than parsing each archive member's own symbol table. The performance would also be greatly affected by the characteristics of the file system. If an archive file was just created on the same machine, the entire file is likely in the memory cache, and random access would be very fast. If it is on a cold hard drive, the sparse access pattern would be penalized.</div><div><br></div><div>On the other hand, there's an interesting optimization that becomes possible only by directly parsing each member's symbol table. Currently, we insert all symbols from an archive file's symbol table into the hash table, and when we pull a file out of an archive, we re-insert the symbols read from the object file. So, for each defined symbol in an archive member, we insert the same string into the hash table twice. This is unavoidable: even if we know that an archive file contains an object file defining symbol Foo, we don't know which symbol table entry of that object file defines Foo, so we have to insert the object file's symbol strings into the hash table to find out.</div><div><br></div><div>If we instead use a single symbol table (the member file's own symbol table), we don't have to look up the hash table twice, because we already know the symbol table entry for each symbol string. The number of defined symbols in archive files is not small for typical programs, so depending on the usage pattern, the new scheme might run faster even on a single-core machine.</div><div><br></div><div>I don't know which scheme outperforms the other. It'll be great to see benchmark results once this is implemented.</div><div><br></div>
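<div>To make this concrete, here is a minimal sketch (not lld's actual code; the function names and the plain std::unordered_map are just stand-ins) of where the duplicate hashing comes from today and what the proposed scheme would keep instead:</div><div><br></div><pre>
#include <string>
#include <unordered_map>
#include <vector>

struct Symbol;                       // stand-in for lld's symbol representation

using SymbolTable = std::unordered_map<std::string, Symbol *>;

// Current scheme: the archive index is inserted first, as lazy symbols...
void addArchiveIndex(SymbolTable &SymTab,
                     const std::vector<std::string> &IndexNames) {
  for (const std::string &Name : IndexNames)
    SymTab.try_emplace(Name, nullptr);     // hashes Name (first time)
}

// ...and when a member is pulled out of the archive, its own symbols go
// through the string-keyed lookup again.
void addExtractedMember(SymbolTable &SymTab,
                        const std::vector<std::string> &MemberNames) {
  for (const std::string &Name : MemberNames)
    (void)SymTab.find(Name);               // hashes the same Name a second time
}

// Proposed scheme: parse each member's own symbol table once up front and
// remember the resulting Symbol pointers; later passes only dereference
// pointers and never hash the string a second time.
struct InputFile {
  std::vector<Symbol *> Symbols;           // filled once in the parallel pre-pass
};
</pre><div><br></div><div>The interesting quantity is simply how many strings get hashed; the real symbol replacement logic is of course more involved than this.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">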
>> I wonder if there are cases where many unused libraries are specified<br>
>> (perhaps in a situation where one project builds multiple executables<br>
>> and some of those executables only use a few external libraries, while<br>
>> others use many - but all of the external libraries are written once<br>
>> in a list of external dependencies the project as a whole depends on -<br>
>> or I guess a situation where there are two executables in a project,<br>
>> one uses external library A, the other uses B, but both just get a<br>
>> link line that specifies A and B) - which would result in this<br>
>> strategy doing potentially significantly more work than the current<br>
>> implementation.<br>
>><br>
>> But it'll be great to see the data no doubt - can gauge how much this<br>
>> costs (how many parallel threads is the break-even point for this<br>
>> separation of work, how many unused libraries would be needed to<br>
>> overwhelm the benefits, etc).<br>
>><br>
>> - Dave<br>
>><br>
>> On Mon, Apr 22, 2019 at 1:53 AM Rui Ueyama via llvm-dev<br>
>> <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>> wrote:<br>
>> ><br>
>> > Hi all,<br>
>> ><br>
>> > This is a design change proposal to make lld's symbol resolution phase mostly-concurrent without changing the existing semantics. The aim of this change is to speed up the linker on multi-core machines.<br>
>> ><br>
>> > Background:<br>
>> > Even though many parts of lld are multi-threaded, the symbol resolution phase is single-threaded. In the symbol resolution phase, the linker does the following:<br>
>> ><br>
>> > - Read one symbol at a time from each input file and insert it into a hash table.<br>
>> > - If we find an undefined symbol that can be resolved by an object file in a static archive, immediately pull that file out of the archive (which may transitively pull other files out of other archives).<br>
>> ><br>
>> > The output of this phase is a set of symbols merged by name and a set of object files to be included in an output file.<br>
>> ><br>
>> > We couldn't use threads in this phase because it was hard to guarantee the determinism of the link result. To see why, assume that more than one archive file defines the same symbol (which is not an odd assumption; it is common practice, for example, to write your own libc functions and override the standard ones by placing your file before `-lc` on the command line). If two or more threads read input files simultaneously, it is hard to guarantee the link order. To keep things simple, we chose not to use multi-threading at all in this phase.<br>
>> ><br>
>> > This phase takes roughly 1/3 of the total execution time for typical programs.<br>
>> ><br>
>> > Analysis:<br>
>> ><br>
>> > Doing something for every symbol is not necessarily slow even if the number of symbols is large. lld has many loops that iterate over all symbol objects (lld's internal representation of symbols), and those loops don't take too much time. What is actually slow when reading input files is inserting symbol strings into an in-memory hash table. This is particularly bad when linking large programs: they tend to be written in C++, and C++ symbol names are particularly long due to name mangling. Inserting hundreds of thousands of long strings into a hash table is computationally intensive work.<br>
>> ><br>
>> > Proposal:<br>
>> ><br>
>> > I propose we split the symbol resolution phase into two phases and use multi-threading in the first phase. In the first phase, we visit all files, including the ones inside archive files, to insert all symbol strings into sharded hash tables. Each symbol string is inserted into the symbol table with a placeholder symbol object as its value.<br>
>> ><br>
>> > Each file contains a member `std::vector<Symbol *> Symbols`. Once the first phase is done, file A's N'th symbol and file B's M'th symbol have the same name if and only if `A.Symbols[N] == B.Symbols[M]`. That means symbols are merged by name, but "symbol resolution" in the regular linker's sense is not done yet. This phase is highly parallelizable.<br>
>> ><br>
>> > In the second phase, we visit each input file serially and do name resolution just as we currently do. The only difference is that, for each symbol, instead of looking up the symbol table with a symbol string, we just dereference a pointer to obtain the corresponding symbol object, which should be extremely fast. Since the symbol resolution algorithm is still single-threaded and doesn't change, the output remains the same -- it is deterministic, and if two files define the same symbol, the first file is chosen.<br>
>> ><br>
>> > One caveat is that with this scheme we effectively ignore archive files' symbol tables and instead read symbols directly from archive members. This shouldn't change the semantics, because an archive file's symbol table should be consistent with its member files. But it may be perceived as weird behavior, because lld would work "correctly" even if an archive file's symbol table is inconsistent, corrupted, or missing.<br>
>> ><br>
<br>
If I've understood correctly, you are proposing a pre-pass over the<br>
symbol tables of all the objects that produces a fast lookup from<br>
"Symbol name" to "Symbol Placeholder". I think this could work if all<br>
it is used for is comparing names quickly, but it might cause<br>
differences in behaviour if we try to use any more information about<br>
the symbols in the first pass. For example:<br>
- There can be more than one object in a static library with a<br>
definition of a symbol, and it is not guaranteed that the first object<br>
in the table is selected in all cases (a later object can be loaded<br>
first due to a reference to some other symbol definition).<br>
- Weak definitions aren't taken into account when choosing which<br>
library member to load (the archive symbol table has no concept of weakness).<br></blockquote><div><br></div><div>Right. The first phase exists only to make name comparison in the second phase fast.</div><div><br></div>
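<div>For concreteness, here is a rough sketch of what the two phases could look like, assuming a heavily simplified model of lld's data structures; the sharding scheme and the helper names (insertNames, resolveOne) are hypothetical, not lld's actual code:</div><div><br></div><pre>
#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

struct Symbol {
  std::string Name;
  bool IsDefined = false;             // placeholder symbols start out undefined
};

struct InputFile {
  std::vector<std::string> Names;     // symbol names read from this file
  std::vector<bool> IsDef;            // whether entry I is a definition
  std::vector<Symbol *> Symbols;      // filled in the first phase
};

// First phase (parallelizable across files): merge symbols by name only.
// Each distinct name maps to exactly one placeholder Symbol, so afterwards
// A.Symbols[N] == B.Symbols[M] if and only if the two entries share a name.
constexpr size_t NumShards = 16;
std::unordered_map<std::string, Symbol *> Shards[NumShards];
std::mutex ShardLocks[NumShards];

void insertNames(InputFile &F) {
  for (const std::string &Name : F.Names) {
    size_t S = std::hash<std::string>{}(Name) % NumShards;
    std::lock_guard<std::mutex> Lock(ShardLocks[S]);
    auto [It, Inserted] = Shards[S].try_emplace(Name, nullptr);
    if (Inserted)
      It->second = new Symbol{Name};
    F.Symbols.push_back(It->second);
  }
}

// Second phase (serial, in command-line order): the resolution rules are
// unchanged; no string is hashed here, we only chase pointers, so the output
// stays deterministic and the first definition still wins.
void resolveOne(InputFile &F) {
  for (size_t I = 0; I < F.Symbols.size(); ++I)
    if (F.IsDef[I] && !F.Symbols[I]->IsDefined)
      F.Symbols[I]->IsDefined = true; // replaceSymbol(...) in real lld
}
</pre><div><br></div><div>A real implementation would presumably avoid hashing each name twice (once for shard selection and once inside the map) and would have to handle weak, common, and lazy symbols; the sketch only shows that all string hashing is confined to the parallel first phase.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">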
<br>
Peter<br>
<br>
>> > Implementation details:<br>
>> ><br>
>> > I don't expect too much change, but it looks like I need to move the symbol resolution code (which calls replaceSymbol to replace symbols) from SymbolTable.cpp to somewhere else (perhaps Symbols.cpp), because in the second phase we can determine whether two symbols have the same name without consulting the symbol table. Even in the current architecture, this code doesn't really have to belong to the symbol table, so refactoring it seems generally a good idea. I'd do this first, before implementing this proposal.<br>
>> ><br>
>> > Rui<br>
>> > _______________________________________________<br>
>> > LLVM Developers mailing list<br>
>> > <a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a><br>
>> > <a href="https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" rel="noreferrer" target="_blank">https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br>
><br>
> _______________________________________________<br>
> LLVM Developers mailing list<br>
> <a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a><br>
> <a href="https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" rel="noreferrer" target="_blank">https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br>
</blockquote></div></div>