[llvm-dev] LLD status update and performance chart

Sean Silva via llvm-dev llvm-dev at lists.llvm.org
Tue Dec 13 19:59:37 PST 2016


On Tue, Dec 13, 2016 at 12:08 PM, Rui Ueyama <ruiu at google.com> wrote:

> On Tue, Dec 13, 2016 at 12:01 PM, Mehdi Amini <mehdi.amini at apple.com>
> wrote:
>
>>
>> On Dec 13, 2016, at 11:51 AM, Rui Ueyama <ruiu at google.com> wrote:
>>
>> On Tue, Dec 13, 2016 at 11:37 AM, Mehdi Amini <mehdi.amini at apple.com>
>> wrote:
>>
>>>
>>> On Dec 13, 2016, at 11:30 AM, Rui Ueyama <ruiu at google.com> wrote:
>>>
>>> On Tue, Dec 13, 2016 at 11:23 AM, Mehdi Amini <mehdi.amini at apple.com>
>>> wrote:
>>>
>>>>
>>>> On Dec 13, 2016, at 11:08 AM, Rui Ueyama <ruiu at google.com> wrote:
>>>>
>>>> On Tue, Dec 13, 2016 at 11:02 AM, Mehdi Amini <mehdi.amini at apple.com>
>>>> wrote:
>>>>
>>>>>
>>>>> On Dec 13, 2016, at 10:06 AM, Rui Ueyama <ruiu at google.com> wrote:
>>>>>
>>>>> On Tue, Dec 13, 2016 at 9:28 AM, Mehdi Amini <mehdi.amini at apple.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> > On Dec 13, 2016, at 5:55 AM, Rafael Avila de Espindola via llvm-dev
>>>>>> <llvm-dev at lists.llvm.org> wrote:
>>>>>> >
>>>>>> > Sean Silva via llvm-dev <llvm-dev at lists.llvm.org> writes:
>>>>>> >> This will also greatly facilitate certain measurements I'd like to
>>>>>> do
>>>>>> >> w.r.t. different strategies for avoiding memory costs for input
>>>>>> files (esp.
>>>>>> >> minor faults and dTLB costs). I've almost gotten to the point of
>>>>>> >> implementing this just to do those measurements.
>>>>>> >
>>>>>> > If you do please keep it local. The bare minimum we have of library
>>>>>> > support is already disproportionately painful and prevents easier
>>>>>> sharing
>>>>>> > with COFF. We should really not add more until the linker is done.
>>>>>>
>>>>>> This is so much in contrast with the LLVM development, I find it
>>>>>> quite hard to see this as an acceptable position on llvm-dev.
>>>>>>
>>>>>
>>>>> LLD is a subproject of the LLVM project, but as a product, LLD itself
>>>>> is not LLVM nor Clang, so some technical decisions that make sense for them
>>>>> are not directly applicable, or are even inappropriate. As a person who spent
>>>>> almost two years on the old LLD and 1.5 years on the new LLD, I can say
>>>>> that Rafael's stance of focusing on making a good linker first really makes
>>>>> sense. I can easily imagine that if we hadn't focused on that, we couldn't
>>>>> have made this much progress over the past 1.5 years and would have stagnated
>>>>> at a very basic level. You know that I'm a person who worked really hard on
>>>>> the old (and supposedly "modular", whatever that means) linker. I'm speaking
>>>>> from experience. If you have a concrete idea of how to construct a
>>>>> linker from smaller modules, please tell me. I still don't get what you
>>>>> want. We can discuss concrete proposals, but "making it (more) modular" is
>>>>> too vague and not really a proposal, so it cannot be a productive
>>>>> discussion.
>>>>>
>>>>> That said, I think our current "API" that allows users to call our
>>>>> linker's main function hits the sweet spot. I know at least a few LLVM-based
>>>>> language developers who want to eliminate external dependencies and embed a
>>>>> linker into their compilers. That's a reasonable usage, and I think allowing
>>>>> them to pass a map from filename to MemoryBuffer objects makes sense, too.
>>>>> That could be done without affecting the overall linker architecture. I
>>>>> don't oppose that idea, and if someone wrote a patch, I'd be fine with it.
>>>>>
>>>>>
>>>>> I’m totally willing to believe that it is not possible to write
>>>>> the fastest ELF linker on earth (or in the universe) with a library-based,
>>>>> reusable-components approach. But clang is not the fastest C/C++
>>>>> compiler available, and LLVM is not the fastest compiler framework either!
>>>>>
>>>>> So as a project, it seems to me that LLVM has historically not favored
>>>>> speed/efficiency when it came at the expense of
>>>>> layering/components/modularity/reusability/…
>>>>>
>>>>> Writing the fastest linker possible is a nice goal; I regret that an LLVM
>>>>> subproject is putting this goal above layering/components/modularity/reusability/…
>>>>> though.
>>>>>
>>>>
>>>> I've never mentioned that creating the fastest linker is the only goal.
>>>>
>>>>
>>>> I believe this has clearly been put *ahead of* the other design aspects I
>>>> mentioned, hasn't it?
>>>>
>>>> Mehdi, please tell me how you would *actually* layer linkers with
>>>> fine-grained components.
>>>>
>>>>
>>>> That’s not bait I’m gonna bite.
>>>>
>>>
>>> That's not bait... I guess you are proposing a different architecture,
>>> so you need to explain it.
>>>
>>>
>>> That’s bait in the sense that I don’t have two months to dig into
>>> the ELF lld and write a design document/proposal for the sole purpose of
>>> making a point. And my not doing this does not impact, and is not relevant
>>> to, the discussion at stake.
>>>
>>
>> If it would take you two months to investigate and make a proposal, why
>> are you so confident in the conclusion of that investigation, namely that what
>> you have in mind (I still don't get what it is) is doable? I actually dug into the
>> old LLD for two years (not months) with the hope that there was a good way of
>> making it work, but failed.
>>
>>
>> You can’t be serious. Your bait is asking me to propose a different
>> architecture. I’m not biting.
>> This is *not* the baseline. For instance, you said from the start that
>> you won’t return an Error from APIs and instead call exit(). I don’t need
>> to propose a “linker architecture” for that.
>> The main contention point is how any library design guideline is
>> rejected from the start on the principle that it’ll slow down the linker.
>>
>
> That's simply not true, as you know. I listened to you, and it now returns
> an error instead of exiting. And even in this thread, I mentioned that
> embedding a linker is a reasonable usage.
>

Yes. Rui has bent over backwards every time a real user has come to us and
said "we need X". The historical precedent here is that LLD is open to many
kinds of changes, but not on theoretical grounds.

Admittedly this leads to a somewhat conservative design for the linker
w.r.t. enabling new use cases. However, having been involved directly or
indirectly with LLD development over the last 4 or 5 years, I can say that
the new use cases that have been proposed fall into a very small number of
categories:

- I want to have "main() in a library" for a static linker. LLD/ELF
currently provides this functionality (with some unfortunate caveats w.r.t.
re-entrancy and other stuff, but the core functionality is there, and LLD
has catered to reasonable requests to remove or mitigate the caveats).
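The re-entrancy caveat mentioned above can be made concrete with a toy sketch (hypothetical code, not lld's actual API or internals): a "main() in a library" entry point returns an exit code instead of calling exit(), but any module-level state it mutates survives across calls in the same process.

```python
# Toy sketch (NOT lld's API) of "main() in a library".
# Module-level state like this is exactly the re-entrancy hazard:
# it persists between calls within one process.
_global_symbol_table = {}

def link_main(args):
    """Pretend linker entry point: returns an exit code instead of
    calling sys.exit(), so it can be embedded in another tool."""
    for sym in args:
        if sym in _global_symbol_table:
            return 1  # "duplicate symbol": report via return code
        _global_symbol_table[sym] = True
    return 0
```

Calling `link_main(["foo"])` twice in a row fails the second time, even though each call would succeed in a fresh process; that is the kind of caveat the bullet above refers to.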

- I want to have a "linker server". Usually this is accompanied by a
significant amount of hand-waving w.r.t. the extent to which it could
speed up linking (or, more generally, incremental builds). It's still not
clear exactly how much speedup can be achieved by doing this, or even where
the speedup would come from.
-- Some people focus a lot on the ability to do O(changes in the input)
work. However, just like in the "libObjcopy" case, this is a much harder
problem than it seems at first, and for the same reason. And it's the same
reason that "objcopy" is called "objcopy": really the only way to edit an
object file is to copy it, which intrinsically does O(the output) work. At
the tail end of the linker, there is a phase where the linker must iterate
everything it needs to put in the output object file and assign it an
offset; the offset of all later things depends on the offset of all earlier
things. Then, it needs to iterate every relocation and look up the offset
of the relocation target to do the relocation (LLD/ELF, in its fastest
mode, spends about 60% of its time doing this). Doing this from first
principles incrementally is very, very hard; AFAIK, all existing
"incremental link" approaches are fundamentally based on patching the
previous output object file (which, among other things, does not produce
deterministic outputs and generally feels like a hack). There are also even
harder cases to make incremental, such as when the results of symbol
resolution change or, even worse, the contents of archives end up changing.
-- From a larger perspective, there is a spectrum from fully statically
linked binaries to just doing fine-grained dynamic libraries and letting
the dynamic linker resolve all relocations. The details and the tradeoffs
are what matter, and they are the things that have been least analyzed.
Talking about a linker server generically is not that useful.
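The O(the output) argument above can be illustrated with a few lines of toy code (a sketch, not LLD's implementation): offset assignment is inherently sequential, because every section's offset depends on the sizes of everything placed before it, and then each relocation needs a lookup of its target's final offset.

```python
# Toy illustration (NOT LLD's code) of why final layout is O(output).

def assign_offsets(sections):
    """sections: list of (name, size) pairs. Returns {name: offset}.
    Each offset depends on the total size of everything before it."""
    offsets, cursor = {}, 0
    for name, size in sections:
        offsets[name] = cursor
        cursor += size  # growing any earlier section shifts all later ones
    return offsets

def apply_relocations(relocs, offsets):
    """relocs: list of (site, target_name). Resolve each relocation to
    the target's final offset: one lookup per relocation."""
    return [(site, offsets[target]) for site, target in relocs]

sections = [(".text.a", 16), (".text.b", 32), (".data.c", 8)]
offsets = assign_offsets(sections)
fixed = apply_relocations([(4, ".data.c")], offsets)
```

Changing `.text.a` by even one byte would invalidate every later offset and every relocation pointing past it, which is why existing incremental-link approaches fall back to patching the previous output file in place.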

- I want to share code between JIT stuff in LLVM and the static "linker as
a library". This has been brought up in this thread, but if you look at the
details, the argument is not as strong as the high-level software
engineering spidey sense might initially suggest.

- I actually don't want a linker in a library, but I don't know enough about
linkers to know that. What I actually want is "libObjcopy", i.e., some way
to symbolically read, edit, and write object files. This is actually a much
more complicated problem than the people asking for it think it is, and is
unlikely to be as useful as they think it is. (it is still useful, but not
as much as it might seem at first)

- I want to have a COFF/ELF/MachO linker that shares a bunch of core
functionality, such that each different format is just a "frontend" to the
core linker, and a corresponding "backend".
-- From a practical perspective, the "atom LLD" tried this, and failed for
a variety of reasons, one technical reason being that the ELF and COFF
notion of "section" is a strict superset of its core "atom" abstraction (an
indivisible chunk of data with a 1:1 correspondence with a symbol name).
Therefore the entire design was impossible to use for those formats. In ELF
and COFF, a section is decoupled from symbols, and there can be arbitrarily
many symbols pointing to arbitrary parts of a section (whereas in MachO,
"section" means something different; the analogous concept to an ELF/COFF
section is basically "the thing that you get when you split a MachO section
on symbol boundaries, assuming that the compiler/assembler hasn't relaxed
relocations across those boundaries" which is not as powerful as ELF/COFF's
notion). This resulted in severe contortions that ultimately made it
untenable to keep working in that codebase. In an ideal world, the
abstraction could have been fixed, but for various non-technical reasons
that ended up not happening (I will not go into more detail about that on
the mailing list). Also, interestingly, if you search the mailing list for
"altentry", it seems like MachO has added a feature that breaks with the
"atom" model as well. So there is some hope that further development on the
atom LLD for MachO may fix this. Still, there are other challenges.
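The section-vs-atom mismatch described above can be sketched with hypothetical data shapes (not lld's actual classes): in ELF/COFF, arbitrarily many symbols point at arbitrary offsets inside one section, while an atom is an indivisible chunk with exactly one defining symbol.

```python
# Hypothetical data shapes (NOT lld's classes) contrasting the two models.

# ELF/COFF model: a section is one blob of bytes; symbols are just
# (name, offset-into-section) pairs, decoupled from the section itself.
elf_section = {
    "name": ".text",
    "size": 64,
    "symbols": [("foo", 0), ("foo_cold", 24), ("bar", 40)],
}

def split_into_atoms(section):
    """Atom model: split the section on symbol boundaries so each chunk
    has a 1:1 correspondence with one symbol. This only works if the
    compiler/assembler hasn't relaxed relocations across a boundary."""
    syms = sorted(section["symbols"], key=lambda s: s[1])
    atoms = []
    for i, (name, off) in enumerate(syms):
        end = syms[i + 1][1] if i + 1 < len(syms) else section["size"]
        atoms.append({"symbol": name, "size": end - off})
    return atoms
```

The lossy step is `split_into_atoms`: once the bytes are partitioned per-symbol, the original "many symbols into one section" relationship cannot be represented, which is the superset problem described above.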


The experience from LLD/COFF and LLD/ELF means that we now have enough
linker expertise in the LLVM community to think about these use cases
seriously. However, this also means that we have enough linker expertise to
drill down below the hand-waving, and often this means finding out that
something that seemed reasonable or obvious actually does not make as
much sense as it seemed. It's important not to get upset when this happens.


This is totally orthogonal to the discussion of whether ELF should be
"llvm-like" in its modularity/library-like design, etc. If it were just me
writing the linker, I would have followed a more traditional llvm-like
design, but I am strongly convinced now that what was done was the correct
decision at the time. If we had not mirrored the COFF design, Rui would not
have felt as invested and we would have missed his contributions (I don't
think that Rui has ever officially been working on ELF; what you see from
him is true passion for the program). If we had designed ELF with a
heavyweight traditional llvm library approach, we would have missed out on
Rafael's contributions. Without these two there would not have been the
critical mass needed to get an LLVM ELF linker off the ground and onto a
trajectory leading to production. In its years of history, the biggest
issue with getting LLD off the ground for ELF is that there was never a
critical mass actively working on it, so getting this critical mass was
actually quite important. Obviously there have been many others
contributing, especially Michael; without his expertise there was no way we
would have gotten LLD working for PS4, and failing that would have pulled
Rafael, Michael, Davide, George Rimar, and myself (at the time) off of LLD.
That would have meant that LLD never started showing enough signs of life
for ELF and never got FreeBSD onboard, which is critical to validating LLD
as a production linker in open source.

The reality is that LLD/ELF is here, and it's written the way that it is
written, and it got here the way that it got here. Patches are very
welcome. Users with real use cases are very welcome. Technical discussion
and evaluation of potential use cases is very welcome.

-- Sean Silva
