[LLVMdev] RFC: ThinLTO Implementation Plan

Teresa Johnson tejohnson at google.com
Sat May 16 00:07:41 PDT 2015


I haven't been able to write up responses to all of your questions yet,
but here are a few answers below.
Thanks,
Teresa

On Fri, May 15, 2015 at 1:42 PM, Eric Christopher <echristo at gmail.com> wrote:
> Hi Teresa,
>
> Very excited to see this work progressing :)

Thanks!

>
>>
>> The second and third implementation stages will initially be very
>> volatile, requiring a lot of iteration and tuning with large apps
>> before they stabilize. Therefore it will be important to do fast
>> commits for these implementation stages.
>>
>
> This sounds interesting. Could use some more description of what you think
> is going to be needed here.
>
>>
>>
>> 2. Stage 2: ThinLTO Infrastructure
>> ----------------------------------------------
>>
>> The next set of patches adds the base implementation of the ThinLTO
>> infrastructure, specifically those required to make ThinLTO functional
>> and generate correct but not necessarily high-performing binaries. It
>> also does not include support to make debug support under -g efficient
>> with ThinLTO.
>>
>
> This is probably something we should give some more thought to up front.
> People will definitely want to be able to at least get decent back traces
> out of their code (functions, file/line/col, arguments maybe) and leaving
> this as an afterthought could cause more efficiency problems down the road.

See my response to Duncan on a similar question. It is covered just
below this section (in the Stage 3 items) and is the first thing that
should be wired up after the rest of the ThinLTO handling is in place.

>
>>
>>
>> a. Clang/LLVM/gold linker options:
>>
>> An early set of clang/llvm patches is needed to provide options to
>> enable ThinLTO (off by default), so that the rest of the
>> implementation can be disabled by default as it is added.
>> Specifically, a new clang option -fthinlto (used instead of -flto) will
>> cause clang to invoke the phase-1 emission of LLVM bitcode and
>> function summary/index on a compile step, and pass the appropriate
>> option to the gold plugin on a link step. The -thinlto option will be
>> added to the gold plugin and llvm-lto tool to launch the phase-2 thin
>> archive step. The -thinlto option will also be added to the ‘opt’ tool
>> to invoke it as a phase-3 parallel backend instance.
>>
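
To make the intended flow concrete, the usage would look roughly like
the following (the option names are from the description above; exactly
how the phase-3 instances are told where to find the combined function
map and where to write their output is still TBD):

  # Phase 1: per-TU compiles, each emitting LLVM bitcode plus the
  # function summary/index.
  clang -fthinlto -c a.cc -o a.o
  clang -fthinlto -c b.cc -o b.o

  # Phase 2: thin archive link; the gold plugin (invoked with -thinlto)
  # only emits the combined function map, it does not do a full link.
  clang -fthinlto a.o b.o

  # Phase 3: parallel backend instances, each importing from other
  # modules as needed via the combined function map.
  opt -thinlto a.o
  opt -thinlto b.o
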
>>
>> b. Thin-archive linking support in Gold plugin and llvm-lto:
>>
>> Under the new plugin option (see above), the plugin needs to perform
>> the phase-2 (thin archive) link, which simply emits a combined function
>> map from the linked modules, without actually performing the normal
>> link. Corresponding support should be added to the standalone llvm-lto
>> tool to enable testing/debugging without involving the linker and
>> plugin.
>>
>
> Have you described thin archives anywhere? I might have missed it, but I'm
> curious how you see this working.

Do you mean the format of the file? David and I discussed some ideas
for representing it as a native object file with a symtab, which I will
include when I update the RFC early next week. This format could
presumably be used even when bitcode is the intermediate representation
for the TUs. It will be consumed by the backend ThinLTO import pass.

>
>>
>>
>> c. ThinLTO backend support:
>>
>> Support for invoking a phase-3 backend (including importing) on a
>> module should be added to the ‘opt’ tool under the new option. The
>> main changes under the option are to instantiate a Linker object to
>> manage linking imported functions into the module, to read the
>> combined function map efficiently, and to enable the ThinLTO import
>> pass.
>
>
> In general the phases that you have here sound interesting, but I'm not sure
> that I've seen the background describing them? Can you describe this sort of
> change here in more detail?
>
>>
>> Each function available for importing from the module has an
>> entry in the module’s function index/summary section and in the
>> resulting combined function map. Each function entry contains that
>> function’s offset within the bitcode file, used to efficiently locate
>> and quickly import just that function. The entry also contains summary
>> information (e.g. basic information determined during parsing such as
>> the number of instructions in the function), that will be used to help
>> guide later import decisions. Because the contents of this section
>> will change frequently during ThinLTO tuning, it should also be marked
>> with a version id for backwards compatibility or version checking.
>>
>
> <Insert bike shed discussion of formatting, versioning, etc>
>
>>
>>
>> e. ThinLTO importing support:
>>
>> Support for the mechanics of importing functions from other modules,
>> which can go in gradually as a set of patches since it will be off by
>> default. Separate patches can include:
>>
>> - BitcodeReader changes to use the function index to import/deserialize
>> a single function of interest (small changes, leveraging the existing
>> lazy streamer support).
>>
>
> Sounds like this is trying to optimize the O(n) (effectively) module scan
> with an AoT computation of offset in a file. Perhaps it might be worth
> adding such a functionality into the module itself anyhow?

Do you think it would be useful in other cases? For each function we
also need summary data to help guide importing decisions. I was
assuming the index and summary data would be stored together somewhere
(in a separate section in the native object wrapper format; TBD in the
bitcode format).
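
To give a rough idea of what I have in mind, an entry might look
something like the sketch below; the exact fields and encoding are not
final and will change during tuning (which is why the section carries a
version id):

  // Rough sketch only -- hypothetical names, field set not final.
  #include <cstdint>
  #include <string>

  struct FunctionSummaryEntry {
    // Offset of the function body within its module's bitcode file,
    // used to seek directly to it and deserialize just that function.
    uint64_t BitcodeOffset;
    // Cheap summary data gathered at parse time to guide import
    // decisions (e.g. instruction count; more metrics can be added).
    unsigned NumInsts;
  };

  // In the combined function map emitted by the phase-2 link, each
  // function would additionally be associated with the module that
  // provides its definition.
  struct CombinedFunctionInfo {
    std::string ModulePath;
    FunctionSummaryEntry Summary;
  };
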

>
>>
>> - Marking of imported functions (for use in ThinLTO-specific symbol
>> linking and global DCE, for example). This can be in-memory initially,
>> but IR support may be required in order to support streaming bitcode
>> out and back in again after importing.
>>
>
> How is this different from the existing linkage facilities?

There is some discussion between Duncan, David Blaikie and myself on this.

>
>>
>> - ModuleLinker changes to do ThinLTO-specific symbol linking and
>> static promotion when necessary. The linkage type of imported
>> functions changes to AvailableExternallyLinkage, for example. Statics
>> must be promoted in certain cases, and renamed in consistent ways.
>>
>
> Ditto.

This is different because during regular LTO linking you don't need
to change the linkage to available_externally, and no static promotion
has to be done.
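
As a rough sketch of the kind of handling involved (hypothetical helper
names, not the actual patch):

  // Sketch only: ThinLTO-specific adjustments made while linking an
  // imported function into the destination module.
  #include "llvm/ADT/Twine.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/GlobalValue.h"
  #include <string>
  using namespace llvm;

  // The imported copy is only kept for inlining; the original module
  // still emits the real definition, so mark it available_externally.
  void markAsImported(Function &F) {
    F.setLinkage(GlobalValue::AvailableExternallyLinkage);
  }

  // A static that may now be referenced across module boundaries must
  // be promoted and renamed consistently in both modules; the suffix
  // scheme here is purely illustrative.
  void promoteStatic(GlobalValue &GV, StringRef ModuleId) {
    std::string NewName = (GV.getName() + ".thinlto." + ModuleId).str();
    GV.setLinkage(GlobalValue::ExternalLinkage);
    GV.setName(NewName);
  }
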

>
>>
>> - GlobalDCE changes to support removing imported functions that were
>> not inlined (very small changes to existing pass logic).
>>
>
> Ditto.
>
> (I think I've seen some discussion here already, if I should go and read
> those threads just feel free to say that :)

Yes, this is the same thread I mentioned above with Duncan and David Blaikie.

>
>>
>>
>> f. ThinLTO Import Driver SCC pass:
>>
>> Adds Transforms/IPO/ThinLTO.cpp with a framework for doing ThinLTO via
>> an SCC pass, enabled only under the -fthinlto options. The pass covers
>> consulting the thin archive (global function index/summary), the import
>> decision heuristics, invocation of the LTOModule/ModuleLinker routines
>> that perform the import, and any necessary callgraph updates and
>> verification.
>>
>
> Would it be worth instead of trying to hook some of this in to clang/opt but
> have a separate driver to prototype this up? This way the functionality and
> the driver could be separate from the rest of the optimization pipeline as
> well as making it (I'd hope) be more testable.

I'm not sure this helps much, if I am understanding the suggestion
correctly. The pass is inserted in the case of a ThinLTO backend
compile, which we would do under an option, and many of the other
changes are sprinkled around other passes/infrastructure (e.g. the
bitcode reader and module linker) which are shared across other tools.
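
For reference, the pass itself is a fairly thin SCC pass skeleton along
these lines (sketch only; the interesting pieces are the import
heuristics and the shared infrastructure changes mentioned above):

  // Very rough skeleton of Transforms/IPO/ThinLTO.cpp -- details TBD.
  #include "llvm/Analysis/CallGraphSCCPass.h"
  using namespace llvm;

  namespace {
  struct ThinLTOImport : public CallGraphSCCPass {
    static char ID;
    ThinLTOImport() : CallGraphSCCPass(ID) {}

    bool runOnSCC(CallGraphSCC &SCC) override {
      bool Changed = false;
      // For each call site in the SCC: consult the combined function
      // map/summary, decide whether the callee is worth importing, use
      // the Linker/ModuleLinker machinery to pull in its body, and
      // update the call graph accordingly.
      return Changed;
    }
  };
  } // end anonymous namespace

  char ThinLTOImport::ID = 0;
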

>
> We could also use that as a way to test the decision making etc ala some of
> the -### stuff out of clang or -debug output. (This description is a bit of
> a stretch, but hopefully my point gets across).
>
>>
>> 3. Stage 3: ThinLTO Tuning and Enhancements
>> ----------------------------------------------------------------
>>
>> This refers to the patches that are not required for ThinLTO to work,
>> but rather to improve compile time, memory, run-time performance and
>> usability.
>>
>>
>> a. Lazy Debug Metadata Linking:
>>
>> The prototype implementation included lazy importing of module-level
>> metadata during the ThinLTO pass finalization (i.e. after all function
>> importing is complete). This actually applies to all module-level
>> metadata, not just debug, although it is the largest. This can be
>> added as a separate set of patches. It involves changes to the
>> BitcodeReader, ValueMapper, and ModuleLinker.
>
>
> Can you describe more of what you've done here? We're trying to optimize a
> lot of these areas for normal LTO as well.

See the earlier detail I sent in response to Duncan's question, and
the additional discussion there. Unfortunately I don't think this
helps normal LTO.

>
>>
>> b. Import Tuning:
>>
>> Tuning the import strategy will be an iterative process that will
>> continue to be refined over time. It involves several different types
>> of changes: adding support for recording additional metrics in the
>> function summary, such as profile data and optional heavier-weight IPA
>> analyses, and tuning the import heuristics based on the summary and
>> callsite context.
>>
>
> How is this different from the existing profile work that Diego has been
> doing? I.e. how are the formats etc going to communicate?

As David and Diego mentioned, ThinLTO is just another consumer of the
profile data.

>
>>
>>
>> c. Combined Function Map Pruning:
>>
>> The combined function map can be pruned of functions that are unlikely
>> to benefit from being imported. For example, during the phase-2 thin
>> archive plugin step we can safely omit large and (with profile data)
>> cold functions, which are unlikely to benefit from being inlined.
>> Additionally, all but one copy of comdat functions can be suppressed.
>>
>
> The comdat function bit will happen with module linking, but perhaps an idea
> would be to make a first pass over the code and:
>
> a) create a new module
> b) move cold functions inside while leaving declarations behind
> c) migrate comdat functions the same sort of way (though perhaps not out of
> line)

Sorry, I didn't follow what you were suggesting here. The pruning
above is applied only to the combined function map; the modules aren't
touched. A function not in the map (i.e. with no associated
index/summary) simply can't be imported.
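
To make it concrete, the pruning would be a simple filter over the
combined map entries, roughly like the sketch below (reusing the
hypothetical summary entry from earlier, extended with a cold bit; the
thresholds and the use of profile data are exactly what gets tuned in
this stage):

  // Sketch only: drop entries that are unlikely to be worth importing.
  // Functions without an entry simply can't be imported; their original
  // definitions are untouched.
  #include <cstdint>
  #include <map>
  #include <string>

  struct FunctionSummaryEntry {
    uint64_t BitcodeOffset;
    unsigned NumInsts;
    bool IsCold;  // e.g. derived from profile data, when available
  };

  using CombinedFunctionMap = std::map<std::string, FunctionSummaryEntry>;

  void pruneCombinedMap(CombinedFunctionMap &Map, unsigned MaxInsts) {
    for (auto I = Map.begin(); I != Map.end();) {
      if (I->second.NumInsts > MaxInsts || I->second.IsCold)
        I = Map.erase(I);  // too large or cold: unlikely to help inlining
      else
        ++I;
    }
  }
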

>
> One random thought is that you'll need to work on the internalize pass to
> handle the distributed information you have.
>
>>
>>
>> d. Distributed Build System Integration:
>>
>> For a distributed build system, the gold plugin should write the
>> parallel backend invocations into a makefile, including the mapping
>> from the IR file to the real object file path, and exit. Additional
>> work needs to be done in the distributed build system itself to
>> distribute and dispatch the parallel backend jobs to the build
>> cluster.
>>
>
> Hmm? I'd love to see you elaborate here, but it's probably just far enough
> in the future that we can hit that when we get there.
>
>>
>>
>> e. Dependence Tracking and Incremental Compiles:
>>
>> In order to support build systems that stage from local disks or
>> network storage, the plugin will optionally support computation of
>> dependent sets of IR files that each module may import from. This can
>> be computed from profile data, if it exists, or from the symbol table
>> and heuristics if not. These dependence sets also enable support for
>> incremental backend compiles.
>>
>>
>
> Ditto.
>
> -eric
>
>>
>>
>> --
>> Teresa Johnson | Software Engineer | tejohnson at google.com | 408-460-2413
>>
>> _______________________________________________
>> LLVM Developers mailing list
>> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
>> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev



-- 
Teresa Johnson | Software Engineer | tejohnson at google.com | 408-460-2413



