[LLVMdev] RFC: ThinLTO Implementation Plan
Eric Christopher
echristo at gmail.com
Fri May 15 13:42:07 PDT 2015
Hi Teresa,
Very excited to see this work progressing :)
> The second and third implementation stages will initially be very
> volatile, requiring a lot of iteration and tuning with large apps to
> stabilize. Therefore it will be important to do fast commits for
> these implementation stages.
>
>
This sounds interesting. Could use some more description of what you think
is going to be needed here.
>
> 2. Stage 2: ThinLTO Infrastructure
> ----------------------------------------------
>
> The next set of patches adds the base implementation of the ThinLTO
> infrastructure, specifically those required to make ThinLTO functional
> and generate correct but not necessarily high-performing binaries. It
> also does not include support to make debug support under -g efficient
> with ThinLTO.
>
>
This is probably something we should give some more thought to up front.
People will definitely want to be able to at least get decent back traces
out of their code (functions, file/line/col, arguments maybe) and leaving
this as an afterthought could cause more efficiency problems down the road.
>
> a. Clang/LLVM/gold linker options:
>
> An early set of clang/llvm patches is needed to provide options to
> enable ThinLTO (off by default), so that the rest of the
> implementation can be disabled by default as it is added.
> Specifically, the clang option -fthinlto (used instead of -flto) will
> cause clang to invoke the phase-1 emission of LLVM bitcode and
> function summary/index on a compile step, and pass the appropriate
> option to the gold plugin on a link step. The -thinlto option will be
> added to the gold plugin and llvm-lto tool to launch the phase-2 thin
> archive step. The -thinlto option will also be added to the ‘opt’ tool
> to invoke it as a phase-3 parallel backend instance.
>
>
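For what it's worth, I'm assuming the opt/llvm-lto side of this is mostly
the usual cl::opt plumbing, e.g. (just a sketch; the option name is from
your proposal, the variable name is whatever you end up picking):

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  // Off by default, like the rest of ThinLTO while it is being brought up.
  static cl::opt<bool> EnableThinLTO(
      "thinlto", cl::init(false),
      cl::desc("Run as a ThinLTO parallel backend instance"));

If that's really all the option plumbing is, those patches should be easy
to review independently of the rest.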
> b. Thin-archive linking support in Gold plugin and llvm-lto:
>
> Under the new plugin option (see above), the plugin needs to perform
> the phase-2 (thin archive) link, which simply emits a combined function
> map from the linked modules, without actually performing the normal
> link. Corresponding support should be added to the standalone llvm-lto
> tool to enable testing/debugging without involving the linker and
> plugin.
>
>
Have you described thin archives anywhere? I might have missed it, but I'm
curious how you see this working.
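From the description, my mental model of what the phase-2 step produces is
roughly a merge of the per-module indices keyed by symbol name, something
like (entirely hypothetical, just to check my understanding):

  #include <cstdint>
  #include <map>
  #include <string>

  // Hypothetical: where to find one importable function.
  struct FunctionLocation {
    std::string ModulePath;  // IR file that holds the function body
    uint64_t BitcodeOffset;  // offset of the function block in that file
  };

  // The phase-2 "link" would then just merge the per-module indices into
  // this, without materializing any IR.
  using CombinedFunctionMap = std::map<std::string, FunctionLocation>;

If that's about right, the llvm-lto hook should make this easy to test in
isolation.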
>
> c. ThinLTO backend support:
>
> Support for a phase-3 backend invocation (including importing) on a
> module should be added to the ‘opt’ tool under the new option. The main
> changes under the option are to instantiate a Linker object to manage
> linking imported functions into the module, to read the combined
> function map efficiently, and to enable the ThinLTO import pass.
>
In general the phases that you have here sound interesting, but I'm not
sure that I've seen the background describing them? Can you describe this
sort of change here in more detail?
> Each function available for importing from the module contains an
> entry in the module’s function index/summary section and in the
> resulting combined function map. Each function entry contains that
> function’s offset within the bitcode file, used to efficiently locate
> and quickly import just that function. The entry also contains summary
> information (e.g. basic information determined during parsing such as
> the number of instructions in the function) that will be used to help
> guide later import decisions. Because the contents of this section
> will change frequently during ThinLTO tuning, it should also be marked
> with a version id for backwards compatibility or version checking.
>
>
<Insert bike shed discussion of formatting, versioning, etc>
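To start that bikeshed off: even a simple versioned header plus fixed-size
per-function records would be enough initially, e.g. (hypothetical layout,
the field set will obviously churn during tuning):

  #include <cstdint>
  #include <vector>

  // Hypothetical function-summary record, purely to make the versioning
  // point.
  struct FunctionSummaryEntry {
    uint64_t BitcodeOffset;  // where the function body starts in the module
    uint32_t InstCount;      // cheap size proxy gathered while parsing
    uint32_t Flags;          // room for future bits (hotness, linkage, ...)
  };

  struct FunctionIndexBlock {
    uint16_t Version;        // bumped whenever the record layout changes
    std::vector<FunctionSummaryEntry> Entries;
  };

As long as readers reject, or at worst ignore, a Version they don't
understand, the summary contents can churn freely during stage 3.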
>
> e. ThinLTO importing support:
>
> Support for the mechanics of importing functions from other modules,
> which can go in gradually as a set of patches since it will be off by
> default. Separate patches can include:
>
> - BitcodeReader changes to use the function index to import/deserialize
> a single function of interest (small changes, leveraging existing lazy
> streamer support).
>
>
Sounds like this is trying to optimize away the (effectively) O(n) module
scan with an ahead-of-time computation of offsets in the file. Perhaps it
might be worth adding such functionality into the module itself anyhow?
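My (possibly wrong) reading is that the existing lazy-materialization path
already gets you most of the way there per function, and the ahead-of-time
offsets are about skipping the per-module scan that builds the
materializer's index in the first place. Roughly (signatures from memory,
so treat the details as approximate; loadFunctionBody is just an
illustrative name):

  #include "llvm/Bitcode/ReaderWriter.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Support/MemoryBuffer.h"
  using namespace llvm;

  // Lazily open the source module, then materialize only the body of the
  // one function we want to import (module ownership/lifetime elided).
  static Function *loadFunctionBody(std::unique_ptr<MemoryBuffer> Buf,
                                    LLVMContext &Ctx, StringRef Name) {
    ErrorOr<Module *> MOrErr = getLazyBitcodeModule(std::move(Buf), Ctx);
    if (std::error_code EC = MOrErr.getError())
      return nullptr;
    Function *F = MOrErr.get()->getFunction(Name);
    if (!F || F->materialize())  // reads just this function's body
      return nullptr;
    return F;
  }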
> - Marking of imported functions (for use in ThinLTO-specific symbol
> linking and global DCE, for example). This can be in-memory initially,
> but IR support may be required in order to support streaming bitcode
> out and back in again after importing.
>
>
How is this different from the existing linkage facilities?
> - ModuleLinker changes to do ThinLTO-specific symbol linking and
> static promotion when necessary. The linkage type of imported
> functions changes to AvailableExternallyLinkage, for example. Statics
> must be promoted in certain cases, and renamed in consistent ways.
>
>
Ditto.
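(Though if the static-promotion piece really is new, the per-symbol
transformation I'm imagining is roughly the below; promoteInternal and the
name suffix are made up for illustration.)

  #include "llvm/ADT/StringRef.h"
  #include "llvm/IR/GlobalValue.h"
  using namespace llvm;

  // Give an internal symbol a module-unique name and non-local linkage so
  // that a function body imported into another module can still refer to
  // it.
  static void promoteInternal(GlobalValue &GV, StringRef ModuleId) {
    if (!GV.hasLocalLinkage())
      return;
    GV.setName(GV.getName() + ".thinlto." + ModuleId);  // consistent rename
    GV.setLinkage(GlobalValue::ExternalLinkage);
    GV.setVisibility(GlobalValue::HiddenVisibility);  // keep it non-preemptible
  }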
> - GlobalDCE changes to support removing imported functions that were
> not inlined (very small changes to existing pass logic).
>
>
Ditto.
(I think I've seen some discussion here already; if I should just go and
read those threads, feel free to say so. :)
>
> f. ThinLTO Import Driver SCC pass:
>
> Adds Transforms/IPO/ThinLTO.cpp with a framework for doing ThinLTO via
> an SCC pass, enabled only under the -fthinlto option. The pass includes
> utilizing the thin archive (global function index/summary), import
> decision heuristics, invocation of LTOModule/ModuleLinker routines
> that perform the import, and any necessary callgraph updates and
> verification.
>
>
Instead of trying to hook some of this into clang/opt, would it be worth
prototyping it with a separate driver? That way the functionality and the
driver could be kept separate from the rest of the optimization pipeline,
which would (I'd hope) also make it more testable.
We could also use that as a way to exercise the import decision making,
à la some of the -### output from clang or -debug output. (This description
is a bit of a stretch, but hopefully my point gets across.)
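Either way, the pass (or driver) shell is presumably just the usual
boilerplate, something along these lines (sketch only; everything
interesting would go where the comment is):

  #include "llvm/Analysis/CallGraph.h"
  #include "llvm/Analysis/CallGraphSCCPass.h"
  #include "llvm/IR/Function.h"
  #include "llvm/Pass.h"
  using namespace llvm;

  namespace {
  struct ThinLTOImport : public CallGraphSCCPass {
    static char ID;
    ThinLTOImport() : CallGraphSCCPass(ID) {}

    bool runOnSCC(CallGraphSCC &SCC) override {
      bool Changed = false;
      for (CallGraphNode *Node : SCC) {
        Function *F = Node->getFunction();
        if (!F || F->isDeclaration())
          continue;
        // Walk F's call sites, consult the combined function map, and
        // import callee bodies that look profitable (setting Changed).
      }
      return Changed;
    }
  };
  } // end anonymous namespace

  char ThinLTOImport::ID = 0;
  static RegisterPass<ThinLTOImport> X("thinlto-import",
                                       "ThinLTO function importing");

Registering it like that would at least let us poke at the import
decisions from opt while things are being tuned.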
> 3. Stage 3: ThinLTO Tuning and Enhancements
> ----------------------------------------------------------------
>
> This refers to the patches that are not required for ThinLTO to work,
> but rather improve compile time, memory usage, run-time performance,
> and usability.
>
>
> a. Lazy Debug Metadata Linking:
>
> The prototype implementation included lazy importing of module-level
> metadata during the ThinLTO pass finalization (i.e. after all function
> importing is complete). This actually applies to all module-level
> metadata, not just debug metadata, although debug is the largest. This
> can be added as a separate set of patches, with changes to
> BitcodeReader, ValueMapper, and ModuleLinker.
>
Can you describe more of what you've done here? We're trying to optimize a
lot of these areas for normal LTO as well.
> b. Import Tuning:
>
> Tuning the import strategy will be an iterative process that will
> continue to be refined over time. It involves several different types
> of changes: adding support for recording additional metrics in the
> function summary, such as profile data and optional heavier-weight IPA
> analyses, and tuning the import heuristics based on the summary and
> callsite context.
>
>
How is this different from the existing profile work that Diego has been
doing? I.e., how are the formats, etc., going to interoperate?
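Whichever format wins, I'm picturing the heuristic layer itself starting
out about this simple, with profile data just moving the thresholds around
(every name and number below is invented):

  #include <cstdint>

  // Hypothetical per-callee summary available at a callsite.
  struct CalleeSummary {
    uint32_t InstCount;      // from the function summary
    bool HasProfile;         // do we have profile data for this edge?
    uint64_t CallsiteCount;  // profiled call count at the importing callsite
  };

  static bool shouldImport(const CalleeSummary &S) {
    const uint32_t ColdThreshold = 20;   // tiny callees: always worth it
    const uint32_t HotThreshold  = 200;  // hot callsites: tolerate more bloat
    if (S.InstCount <= ColdThreshold)
      return true;
    if (S.HasProfile && S.CallsiteCount > 1000)
      return S.InstCount <= HotThreshold;
    return false;
  }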
>
> c. Combined Function Map Pruning:
>
> The combined function map can be pruned of functions that are unlikely
> to benefit from being imported. For example, during the phase-2 thin
> archive plugin step we can safely omit large and (with profile data)
> cold functions, which are unlikely to benefit from being inlined.
> Additionally, all but one copy of comdat functions can be suppressed.
>
>
The comdat function bit will happen with module linking, but perhaps one
idea would be to make a first pass over the code and:
a) create a new module
b) move cold functions into it while leaving declarations behind
c) migrate comdat functions in the same sort of way (though perhaps not
out of line)
One random thought is that you'll need to work on the internalize pass to
handle the distributed information you have.
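On the pruning itself, whichever way the splitting shakes out, the
predicate for the combined map seems simple enough, e.g. (names and
thresholds invented again):

  #include <set>
  #include <string>

  // Hypothetical phase-2 pruning: drop cold/huge entries and keep only the
  // first copy of each comdat.
  static bool keepEntry(const std::string &ComdatKey, bool IsCold,
                        unsigned InstCount,
                        std::set<std::string> &SeenComdats) {
    if (!ComdatKey.empty() && !SeenComdats.insert(ComdatKey).second)
      return false;                 // already kept one copy of this comdat
    if (IsCold || InstCount > 500)  // unlikely to be a profitable import
      return false;
    return true;
  }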
>
> d. Distributed Build System Integration:
>
> For a distributed build system, the gold plugin should write the
> parallel backend invocations into a makefile, including the mapping
> from the IR file to the real object file path, and exit. Additional
> work needs to be done in the distributed build system itself to
> distribute and dispatch the parallel backend jobs to the build
> cluster.
>
>
Hmm? I'd love to see you elaborate here, but it's probably just far enough
in the future that we can hit that when we get there.
>
> e. Dependence Tracking and Incremental Compiles:
>
> In order to support build systems that stage from local disks or
> network storage, the plugin will optionally support computation of
> dependent sets of IR files that each module may import from. This can
> be computed from profile data, if it exists, or from the symbol table
> and heuristics if not. These dependence sets also enable support for
> incremental backend compiles.
>
>
>
Ditto.
-eric
>
> --
> Teresa Johnson | Software Engineer | tejohnson at google.com | 408-460-2413
>