[LLVMdev] LLD improvement plan
Nick Kledzik
kledzik at apple.com
Thu May 28 18:25:25 PDT 2015
On May 28, 2015, at 5:42 PM, Sean Silva <chisophugis at gmail.com> wrote:
> I guess, looking back at Nick's comment:
>
> "The atom model is a good fit for the llvm compiler model for all architectures. There is a one-to-one mapping between llvm::GlobalObject (e.g. function or global variable) and lld:DefinedAtom."
>
> it seems that the primary issue on the ELF/COFF side is that the LLVM backends currently take the finer-grained atomicity that exists inside LLVM and lose information by converting it to the coarser-grained atomicity of the typical "section" in ELF/COFF.
> But doesn't -ffunction-sections -fdata-sections already fix this, basically?
>
> On the Mach-O side, the issue seems to be that Mach-O's notion of section carries more hard-coded meaning than e.g. ELF, so at the very least another layer of subdivision below what Mach-O calls "section" would be needed to preserve this information; currently symbols are used as a bit of a hack as this "sub-section" layer.
I’m not sure what you mean here.
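Regarding the -ffunction-sections/-fdata-sections question above, a rough illustration of what those flags do may help readers of the archive (the file and symbol names below are made up; the section-naming scheme shown is the usual one for ELF output):

  /* example.c (hypothetical) */
  int global_counter = 1;                    /* -fdata-sections:     placed in .data.global_counter */
  int add(int a, int b) { return a + b; }    /* -ffunction-sections: placed in .text.add */
  int sub(int a, int b) { return a - b; }    /* -ffunction-sections: placed in .text.sub */

  /* Compiled roughly as:
   *   clang -c -ffunction-sections -fdata-sections example.c
   * each function and global gets its own ELF section, so a section-granular
   * linker can drop or re-order them individually (e.g. with --gc-sections),
   * which approximates atom-level granularity on the ELF/COFF side. */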
>
> So the problem seems to be that the transport format between the compiler and linker varies by platform, each one has a different way to represent things, and some apparently can't represent everything we want to do.
Yes!
> BUT it sounds like at least relocatable ELF semantics can, in principle, represent everything that we can imagine an "atom-based file format"/"native format" to want to represent. Just to play devil's advocate here, let's start out with the "native format" being relocatable ELF - on *all platforms*. Relocatable object files are just a transport format between compiler and linker, after all; who cares what we use? If the alternative is a completely new format, then bootstrapping from relocatable ELF is strictly less churn/tooling cost.
>
> People on the "atom side of the fence", what do you think? Is there anything that we cannot achieve by saying "native"="relocatable ELF"?
1) It turns out that .o files are written once but read many times by the linker. Therefore, the design goal for .o files should be that they are as fast as possible for the linker to read and parse. Slowing down the compiler to produce a .o file that is faster for the linker to read is a good trade-off. This is the motivation for the native format - not that it is a universal format. (A rough sketch of what that could look like follows after point 2.)
2) I think the ELF camp still thinks that linkers are “dumb” - that they just collate .o files into executable files. The darwin linker does a lot of processing and optimizing of the content (e.g. Objective-C optimization, dead stripping, function/data re-ordering). This is why atom-level granularity is needed (see the second sketch below).
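To make point 1) concrete, here is a purely hypothetical sketch of the kind of layout a linker-optimized object format could use - fixed-size records that the linker can use directly out of an mmap'd file instead of parsing and re-building symbol and relocation tables. None of these names come from lld; they are illustrative only.

  // Hypothetical illustration only; not lld's actual native format.
  #include <cstdint>

  struct NativeAtomRecord {      // fixed-size, usable in place after mmap()
    uint32_t nameOffset;         // offset into a shared string table
    uint32_t contentOffset;      // offset into a raw content blob
    uint32_t contentSize;        // size of the atom's bytes
    uint32_t firstReference;     // index of first entry in a reference table
    uint32_t referenceCount;     // number of references (relocations) from this atom
    uint32_t alignmentLog2;      // required alignment, as a power of two
  };

  // After mapping the file, the linker could point straight at the table:
  //   const auto *atoms = reinterpret_cast<const NativeAtomRecord *>(
  //       fileBase + header.atomTableOffset);
  // No per-atom parsing or allocation is needed, which is the "fast for the
  // linker to read" property point 1) is arguing for.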
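And for point 2), a minimal sketch of why atom granularity matters for something like dead stripping (again, the types and names are made up for illustration, not lld's real API): liveness is computed per atom by following references from a set of root atoms, so an unreferenced function can be dropped even when its neighbors in the same section are kept.

  // Hypothetical sketch of atom-level dead stripping; not lld's real API.
  #include <unordered_set>
  #include <vector>

  struct Atom {
    std::vector<Atom *> references;  // atoms this atom refers to (calls, data refs)
    bool isRoot = false;             // entry point, exported symbol, or otherwise must-keep
  };

  std::unordered_set<Atom *> findLiveAtoms(const std::vector<Atom *> &allAtoms) {
    std::unordered_set<Atom *> live;
    std::vector<Atom *> worklist;
    for (Atom *a : allAtoms)
      if (a->isRoot)
        worklist.push_back(a);
    while (!worklist.empty()) {
      Atom *a = worklist.back();
      worklist.pop_back();
      if (!live.insert(a).second)
        continue;                    // already marked live
      for (Atom *ref : a->references)
        worklist.push_back(ref);
    }
    return live;                     // atoms not in this set are dead-stripped
  }

  // With coarser, whole-section granularity, one live function keeps every
  // other function sharing its section alive; per-atom (or per-function-
  // section) granularity lets the linker drop them individually.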
For darwin, ELF-based .o files are not interesting. They won’t be faster, and it will take a bunch of effort to figure out how to encode all the mach-o info into ELF. We’d rather wait for a new native format.
-Nick