[LLVMdev] RFC: ThinLTO Implementation Plan

Xinliang David Li xinliangli at gmail.com
Tue May 19 17:20:28 PDT 2015


On Tue, May 19, 2015 at 4:09 PM, Nick Lewycky <nlewycky at google.com> wrote:

> On 13 May 2015 at 11:44, Teresa Johnson <tejohnson at google.com> wrote:
>
>> I've included below an RFC for implementing ThinLTO in LLVM, looking
>> forward to feedback and questions.
>>
>
> Thanks! I have to admit up front that I haven't read through the whole
> thread, but I have a couple comments. Overall this looks like a really nice
> design and unusually thorough RFC!
>
>
>>
> This is different from llvm's current LTO approach ("big bang LTO", where
> we combine all TUs into a single big Module and then optimize and codegen
> it). It sounds like there are two goals here, multi-machine parallelism
> and reduced memory usage (by splitting the Module out to multiple
> machines), and most of the interesting logic goes into deciding where to
> split a Module.
>
> I think ThinLTO was designed under the assumption that we would not be
> able to fit a large program into memory on a single machine (or that even
> if we could, we wouldn't be able to compile quickly enough by employing
> multi-core parallelism). This is in contrast to previously considered
> approaches of improving big bang LTO to handle very large programs through
> changes to the IR, in-memory representation, on-disk representation and
> threading. Starting with the assumption that we will need multiple
> machines, ThinLTO looks like an excellent design. I just wanted to call out
> that design requirement and how it's different from how llvm has thought
> about LTO in the past.
>

ThinLTO is designed to be the Corolla, while LTO will continue to be the
Mercedes :)



>
>> The function index/summary will later be added as a special ELF
>> section alongside the .llvmbc sections.
>>
>
> We've historically pushed back on adding ELF because it doesn't add any
> new information that isn't present in the .bc file, and we care a lot about
> minimizing I/O time (I recall an encoding change in the bitcode format
> shrinking .bc files 10% which led to a big improvement in LTO times for
> Darwin).
>

For LTO, which is already highly stressed, even a small increase in I/O
matters a lot.


> There are a few practical matters about what needs to be in this ELF
> symbol table; what about symbols that we reference, instead of just those
> we define?
>

UNDEF symbols, or match what the plugin does.


> what about the sizes of symbols we define?
>

In the ELF wrapper, the function is 'defined' in the summary section. Its
offset and size are the summary entry's offset and size.
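
To illustrate what I mean (a rough sketch only -- not a committed record
layout): if each function gets a fixed-size record in the summary section,
the wrapper's symtab entry can simply point at that record:

  #include <cstdint>

  // Hypothetical record layout, for illustration only.
  struct FunctionSummaryEntry {
    uint64_t BitcodeOffset; // offset of the function body within .llvmbc
    uint32_t InstCount;     // cheap size metric for the import heuristics
    uint32_t Flags;         // e.g. linkage / hotness bits
  };

  // The wrapper's ELF symbol for foo() would then have st_shndx pointing
  // at the summary section, st_value = the offset of foo's entry, and
  // st_size = sizeof(FunctionSummaryEntry).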

> what about the case where llvm codegen ends up defining (or referencing) a
> function that isn't mentioned in the IR (a common example is emitting a
> call to memcpy for argument lowering)?
>

We don't expect the symtab generated for the IR to match that of the final
object; for instance, dead function elimination can happen, etc.


> If you have a set of tools in mind, we can make the ELF accurate enough to
> work with those tools, but it's not clear to me how to make it work for
> fully general ELF-expecting programs without doing full codegen into the
> file (IIRC, this is what GCC does). Are 'ar', 'nm' and 'ld' the only
> programs?
>

ranlib and objcopy.

GCC always wraps IR into an ELF wrapper even when it does not generate a
fat object. However, GCC's IR-only ELF files have a customized symtab
section.

ICC generates ELF for the IR-only case -- with a full ELF symtab
generated. It supports fat object files too.

HP's aCC generates an ELF wrapper for intermediate files with a full ELF
symtab too.


>
> Finally, suppose you get into a situation where you implement ThinLTO with
> the ELF wrappers and then examine the compile time, memory usage, file size
> and I/O, and find that ThinLTO isn't performing as well as we like. The
> next question is going to be "well, what if we removed that extra I/O time,
> file size (copying time) and memory usage from having that ELF wrapper"?
> That's why I think of a .bc-only version as being the ideal version, and
> that having ELF wrapping is a good idea for supporting legacy programs as
> needed.
>

I'd like the best of both worlds.

>
>
> I have an idea for a future version.
>
> Give passes the ability to write their own summary data at compile time,
> and to read them in the backends. Merge these summaries in the link, then
> after splitting send the merged summaries to each backend regardless of
> whether it imports the function body. For instance, dead argument
> elimination could summarize which functions ignore which arguments (either
> entirely, or locally except for which arguments in which callees).
> Receiving a full graph of this is smaller than the full implementations of
> the functions, and yet would allow each backend to do an analysis of the
> full graph. Function A's body is in this backend, and A calls B whose body
> is not available to this backend. The summary would include that the first
> argument to B is dead, so we can optimize away the chain of computation
> leading to it in A. (I think a more compelling example will be alias
> analysis, but it would make for a messier example.)
>
>
Yes -- that is what we had thought about doing for LIPO -- callgraphs and
whole-program aliases are good candidates. So what you describe is what
we'd like to do for ThinLTO. Those global analyses are more expensive than
the fast indexing, but can be controlled with knobs.
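
For instance (hypothetical sketch, names invented), a dead-argument summary
could be as small as a per-function bitmask that is merged at link time and
handed to every backend, whether or not the body itself is imported:

  #include <cstdint>
  #include <map>
  #include <string>

  // Hypothetical per-function record emitted at compile time by a pass
  // such as dead argument elimination.
  struct DeadArgSummary {
    // Bit i set => argument i is never used by the function or any of its
    // callees, so callers may drop the computation feeding it.
    uint64_t DeadArgMask = 0;
  };

  // Merged during the plugin step and shipped to each backend alongside
  // the function index.
  using CombinedDeadArgSummaries = std::map<std::string, DeadArgSummary>;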

thanks,

David



> Nick
>
>> e. ThinLTO importing support:
>>
>> Support for the mechanics of importing functions from other modules,
>> which can go in gradually as a set of patches since it will be off by
>> default. Separate patches can include:
>>
>> - BitcodeReader changes to use function index to import/deserialize
>> single function of interest (small changes, leverages existing lazy
>> streamer support).
>>
>> - Minor LTOModule changes to pass the ThinLTO function to import and
>> its index into bitcode reader.
>>
>> - Marking of imported functions (for use in ThinLTO-specific symbol
>> linking and global DCE, for example). This can be in-memory initially,
>> but IR support may be required in order to support streaming bitcode
>> out and back in again after importing.
>>
>> - ModuleLinker changes to do ThinLTO-specific symbol linking and
>> static promotion when necessary. The linkage type of imported
>> functions changes to AvailableExternallyLinkage, for example. Statics
>> must be promoted in certain cases, and renamed in consistent ways.
>>
>> - GlobalDCE changes to support removing imported functions that were
>> not inlined (very small changes to existing pass logic).
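
To make the linkage and promotion items above concrete, the fixup after
importing could look roughly like this (hypothetical sketch, not the actual
ModuleLinker patch):

  #include "llvm/ADT/Twine.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  // Imported copies can be inlined but must not be emitted here; the
  // original module still provides the definition.
  static void markImported(Function &F) {
    F.setLinkage(GlobalValue::AvailableExternallyLinkage);
  }

  // Promote a static referenced by an imported function: rename it
  // consistently (e.g. with a source-module suffix) so promoted statics
  // from different modules cannot collide, and give it non-local linkage
  // so the imported copy can still reference it.
  static void promoteStatic(GlobalValue &GV, StringRef SourceModuleId) {
    if (!GV.hasLocalLinkage())
      return;
    GV.setName(GV.getName() + ".llvm." + SourceModuleId);
    GV.setLinkage(GlobalValue::ExternalLinkage);
    GV.setVisibility(GlobalValue::HiddenVisibility);
  }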
>>
>>
>> f. ThinLTO Import Driver SCC pass:
>>
>> Adds Transforms/IPO/ThinLTO.cpp with framework for doing ThinLTO via
>> an SCC pass, enabled only under -fthinlto options. The pass includes
>> utilizing the thin archive (global function index/summary), import
>> decision heuristics, invocation of LTOModule/ModuleLinker routines
>> that perform the import, and any necessary callgraph updates and
>> verification.
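
In skeleton form, something along these lines (hypothetical; the pass name,
heuristic and import helper are invented for illustration, and registration
is omitted):

  #include "llvm/Analysis/CallGraph.h"
  #include "llvm/Analysis/CallGraphSCCPass.h"
  #include "llvm/IR/Function.h"
  using namespace llvm;

  namespace {
  struct ThinLTOImportPass : public CallGraphSCCPass {
    static char ID;
    ThinLTOImportPass() : CallGraphSCCPass(ID) {}

    bool runOnSCC(CallGraphSCC &SCC) override {
      bool Changed = false;
      for (CallGraphNode *Node : SCC) {
        Function *F = Node->getFunction();
        if (!F || F->isDeclaration())
          continue;
        // Walk F's call sites, consult the combined function index, import
        // bodies the heuristic considers worthwhile (e.g. small and/or hot
        // callees), then update the call graph accordingly.
        // Changed |= importCalleesFor(*F);   // hypothetical helper
      }
      return Changed;
    }
  };
  } // end anonymous namespace

  char ThinLTOImportPass::ID = 0;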
>>
>>
>> g. Backend Driver:
>>
>> For a single node build, the gold plugin can simply write a makefile
>> and fork the parallel backend instances directly via parallel make.
>>
>>
>> 3. Stage 3: ThinLTO Tuning and Enhancements
>> ----------------------------------------------------------------
>>
>> This refers to the patches that are not required for ThinLTO to work,
>> but rather to improve compile time, memory, run-time performance and
>> usability.
>>
>>
>> a. Lazy Debug Metadata Linking:
>>
>> The prototype implementation included lazy importing of module-level
>> metadata during the ThinLTO pass finalization (i.e. after all function
>> importing is complete). This actually applies to all module-level
>> metadata, not just debug, although debug metadata is the largest. This
>> can be added as a separate set of patches, with changes to the
>> BitcodeReader, ValueMapper, and ModuleLinker.
>>
>>
>> b. Import Tuning:
>>
>> Tuning the import strategy will be an iterative process that will
>> continue to be refined over time. It involves several different types
>> of changes: adding support for recording additional metrics in the
>> function summary, such as profile data and optional heavier-weight IPA
>> analyses, and tuning the import heuristics based on the summary and
>> callsite context.
>>
>>
>> c. Combined Function Map Pruning:
>>
>> The combined function map can be pruned of functions that are unlikely
>> to benefit from being imported. For example, during the phase-2 thin
>> archive plugin step we can safely omit large and (with profile data)
>> cold functions, which are unlikely to benefit from being inlined.
>> Additionally, all but one copy of comdat functions can be suppressed.
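
The pruning predicate could start out very simple, e.g. (hypothetical
thresholds and field names):

  #include <cstdint>

  // Hypothetical per-function record in the combined map.
  struct CombinedMapEntry {
    uint32_t InstCount;   // rough size metric
    bool     IsCold;      // from profile data, when available
  };

  static bool worthKeeping(const CombinedMapEntry &E, bool HasProfile) {
    if (E.InstCount > 1000)      // too large to be a good inline candidate
      return false;
    if (HasProfile && E.IsCold)  // unlikely to benefit if cold
      return false;
    return true;                 // comdat de-duplication handled separately
  }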
>>
>>
>> d. Distributed Build System Integration:
>>
>> For a distributed build system, the gold plugin should write the
>> parallel backend invocations into a makefile, including the mapping
>> from the IR file to the real object file path, and exit. Additional
>> work needs to be done in the distributed build system itself to
>> distribute and dispatch the parallel backend jobs to the build
>> cluster.
>>
>>
>> e. Dependence Tracking and Incremental Compiles:
>>
>> In order to support build systems that stage from local disks or
>> network storage, the plugin will optionally support computation of
>> dependent sets of IR files that each module may import from. This can
>> be computed from profile data, if it exists, or from the symbol table
>> and heuristics if not. These dependence sets also enable support for
>> incremental backend compiles.
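
Without profile data, a first cut could be a purely symbol-table-based
computation like the following (hypothetical sketch; real heuristics would
also filter using the summary data):

  #include <map>
  #include <set>
  #include <string>
  #include <vector>

  using SymbolName = std::string;
  using ModuleId = std::string;

  // For each module, derive the set of IR files it may import from, given
  // each module's undefined symbols and a map from symbol to defining
  // module.
  std::map<ModuleId, std::set<ModuleId>>
  computeImportDeps(const std::map<ModuleId, std::vector<SymbolName>> &Undefs,
                    const std::map<SymbolName, ModuleId> &Defs) {
    std::map<ModuleId, std::set<ModuleId>> Deps;
    for (const auto &M : Undefs)
      for (const SymbolName &S : M.second) {
        auto It = Defs.find(S);
        if (It != Defs.end() && It->second != M.first)
          Deps[M.first].insert(It->second); // M.first may import from here
      }
    return Deps;
  }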
>>
>>
>>
>> --
>> Teresa Johnson | Software Engineer | tejohnson at google.com | 408-460-2413
>>