<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On 13 May 2015 at 11:44, Teresa Johnson <span dir="ltr"><<a href="mailto:tejohnson@google.com" target="_blank">tejohnson@google.com</a>></span> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">I've included below an RFC for implementing ThinLTO in LLVM, looking<br>
forward to feedback and questions.<br></blockquote><div><br></div><div>Thanks! I have to admit up front that I haven't read through the whole thread, but I have a couple of comments. Overall this looks like a really nice design and an unusually thorough RFC!</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Thanks!<br>
Teresa<br>
<br>
<br>
<br>
RFC to discuss plans for implementing ThinLTO upstream. Background can<br>
be found in slides from EuroLLVM 2015:<br>
<a href="https://drive.google.com/open?id=0B036uwnWM6RWWER1ZEl5SUNENjQ&authuser=0" target="_blank">https://drive.google.com/open?id=0B036uwnWM6RWWER1ZEl5SUNENjQ&authuser=0</a><br>
As described in the talk, we have a prototype implementation, and<br>
would like to start staging patches upstream. This RFC describes a<br>
breakdown of the major pieces. We would like to commit upstream<br>
gradually in several stages, with all functionality off by default.<br>
The core ThinLTO importing support will require frequent change and<br>
iteration during testing and tuning, and for that part we would like<br>
to commit rapidly (off by default). See the staged implementation<br>
described in the Implementation Plan section below.<br>
<br>
<br>
ThinLTO Overview<br>
==============<br>
<br>
See the talk slides linked above for more details. The following is a<br>
high-level overview of the motivation.<br>
<br>
Cross Module Optimization (CMO) is an effective means for improving<br>
runtime performance, by extending the scope of optimizations across<br>
source module boundaries. Without CMO, the compiler is limited to<br>
optimizing within the scope of single source modules. Two solutions<br>
for enabling CMO are Link-Time Optimization (LTO), which is currently<br>
supported in LLVM and GCC, and Lightweight-Interprocedural<br>
Optimization (LIPO). However, each of these solutions has limitations<br>
that prevent it from being enabled by default. ThinLTO is a new<br>
approach that attempts to address these limitations, with a goal of<br>
being enabled more broadly. ThinLTO is designed around many of the<br>
same principles as LIPO, and therefore shares its advantages, without<br>
its inherent weaknesses. Unlike LIPO, where the module group decision<br>
is made at profile training runtime, ThinLTO makes the decision at<br>
compile time, but in a lazy mode that facilitates large-scale<br>
parallelism. The serial linker plugin phase is designed to be razor<br>
thin and blazingly fast. By default this step only does minimal<br>
preparation work to enable the parallel lazy importing performed<br>
later. ThinLTO aims to be scalable like a regular O2 build, enabling<br>
CMO on machines without large memory configurations, while also<br>
integrating well with distributed build systems. Results from early<br>
prototyping on SPEC cpu2006 C++ benchmarks are in line with<br>
expectations that ThinLTO can scale like O2 while enabling much of the<br>
CMO performed during a full LTO build.<br></blockquote><div><br></div><div>This is different from llvm's current LTO approach ("big bang LTO", where we combine all TUs into a single big Module and then optimize and codegen it). It sounds like there are two goals here, multi-machine parallelism and reduced memory usage (by splitting the Module across multiple machines), with most of the interesting logic going into deciding where to split a Module.</div><div><br></div><div>I think ThinLTO was designed under the assumption that we would not be able to fit a large program into memory on a single machine (or that even if we could, we wouldn't be able to compile quickly enough by employing multi-core parallelism). This is in contrast to previously considered approaches of improving big bang LTO to handle very large programs through changes to the IR, in-memory representation, on-disk representation and threading. Starting with the assumption that we will need multiple machines, ThinLTO looks like an excellent design. I just wanted to call out that design requirement and how it differs from how llvm has thought about LTO in the past.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">A ThinLTO build is divided into 3 phases, which are referred to in the<br>
following implementation plan:<br>
<br>
phase-1: IR and Function Summary Generation (-c compile)<br>
phase-2: Thin Linker Plugin Layer (thin archive linker step)<br>
phase-3: Parallel Backend with Demand-Driven Importing<br>
<br>
<br>
Implementation Plan<br>
================<br>
<br>
This section gives a high-level breakdown of the ThinLTO support that<br>
will be added, in roughly the order that the patches would be staged.<br>
The patches are divided into three stages. The first stage contains a<br>
minimal amount of preparation work that is not ThinLTO-specific. The<br>
second stage contains most of the infrastructure for ThinLTO, which<br>
will be off by default. The third stage includes<br>
enhancements/improvements/tunings that can be performed after the main<br>
ThinLTO infrastructure is in.<br>
<br>
The second and third implementation stages will initially be very<br>
volatile, requiring many iterations of tuning with large apps to<br>
stabilize. It will therefore be important to commit rapidly during<br>
these stages.<br>
<br>
<br>
1. Stage 1: Preparation<br>
-------------------------------<br>
<br>
The first planned sets of patches are enablers for ThinLTO work:<br>
<br>
<br>
a. LTO directory structure:<br>
<br>
Restructure the LTO directory to remove the circular dependence that<br>
arises when the ThinLTO pass is added. Because ThinLTO is being<br>
implemented as an SCC pass within Transforms/IPO, and leverages the<br>
LTOModule class for linking in functions from modules, IPO would then<br>
require the LTO library; since LTO in turn depends on IPO, this<br>
creates a circular dependence between the two. To break it, we need to<br>
split the lib/LTO directory/library into lib/LTO/CodeGen and<br>
lib/LTO/Module, containing LTOCodeGenerator and LTOModule,<br>
respectively. Only LTOCodeGenerator has a dependence on IPO, removing<br>
the circular dependence.<br>
<br>
<br>
b. ELF wrapper generation support:<br>
<br>
Implement an ELF-wrapped bitcode writer. In order to interact more<br>
easily with tools such as $AR, $NM, and “$LD -r”, we plan to emit the<br>
phase-1 bitcode wrapped in ELF via the .llvmbc section, along with a symbol<br>
table. The goal is both to interact with these tools without requiring<br>
a plugin, and also to avoid doing partial LTO/ThinLTO across files<br>
linked with “$LD -r” (i.e. the resulting object file should still<br>
contain ELF-wrapped bitcode to enable ThinLTO at the full link step).<br>
I will send a separate design document for these changes, but the<br>
following is a high-level overview.<br>
<br>
Support was added to LLVM for reading ELF-wrapped bitcode<br>
(<a href="http://reviews.llvm.org/rL218078" target="_blank">http://reviews.llvm.org/rL218078</a>), but there does not yet exist<br>
support in LLVM/Clang for emitting bitcode wrapped in ELF. I plan to<br>
add support for optionally generating bitcode in an ELF file<br>
containing a single .llvmbc section holding the bitcode. Specifically,<br>
the patch would add new options “emit-llvm-bc-elf” (object file) and<br>
corresponding “emit-llvm-elf” (textual assembly code equivalent).<br>
Eventually these would be automatically triggered under “-fthinlto -c”<br>
and “-fthinlto -S”, respectively.<br>
<br>
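To make the reading side concrete, here is a rough sketch (not one of<br>
the planned patches) of how a consumer could recover the embedded<br>
bitcode from an ELF-wrapped object, using the reading support that<br>
already landed in rL218078. Error handling is abbreviated, and the<br>
exact ErrorOr/Expected return types vary across LLVM versions, so<br>
treat this as approximate:<br>
<br>
  #include "llvm/Object/IRObjectFile.h"<br>
  #include "llvm/Object/ObjectFile.h"<br>
  using namespace llvm;<br>
  using namespace llvm::object;<br>
<br>
  // Given a buffer holding an ELF-wrapped bitcode file, parse the<br>
  // container and return the contents of its .llvmbc section.<br>
  static ErrorOr<MemoryBufferRef> getWrappedBitcode(MemoryBufferRef Buf) {<br>
    ErrorOr<std::unique_ptr<ObjectFile>> Obj =<br>
        ObjectFile::createObjectFile(Buf);<br>
    if (std::error_code EC = Obj.getError())<br>
      return EC;<br>
    // findBitcodeInObject locates the .llvmbc section, if present.<br>
    return IRObjectFile::findBitcodeInObject(*Obj->get());<br>
  }<br>
<br>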
Additionally, a symbol table will be generated in the ELF file,<br>
holding the function symbols within the bitcode. This facilitates<br>
handling archives of the ELF-wrapped bitcode created with $AR, since<br>
the archive will have a symbol table as well. The archive symbol table<br>
enables gold to extract and pass to the plugin the constituent<br>
ELF-wrapped bitcode files. To support the concatenated .llvmbc section<br>
generated by “$LD -r”, some handling needs to be added to gold and to<br>
the backend driver to process each original module’s bitcode.<br>
<br>
The function index/summary will later be added as a special ELF<br>
section alongside the .llvmbc sections.<br></blockquote><div><br></div><div>We've historically pushed back on adding ELF because it doesn't add any new information that isn't present in the .bc file, and we care a lot about minimizing I/O time (I recall an encoding change in the bitcode format shrinking .bc files 10%, which led to a big improvement in LTO times for Darwin).</div><div><br></div><div>There are a few practical matters about what needs to be in this ELF symbol table. What about symbols that we reference, instead of just those we define? What about the sizes of symbols we define? What about the case where llvm codegen ends up defining (or referencing) a function that isn't mentioned in the IR (a common example is emitting a call to memcpy for argument lowering)? If you have a set of tools in mind, we can make the ELF accurate enough to work with those tools, but it's not clear to me how to make it work for fully general ELF-expecting programs without doing full codegen into the file (IIRC, this is what GCC does). Are 'ar', 'nm' and 'ld' the only programs?</div><div><br></div><div>Finally, suppose you get into a situation where you implement ThinLTO with the ELF wrappers and then examine the compile time, memory usage, file size and I/O, and find that ThinLTO isn't performing as well as we'd like. The next question is going to be "well, what if we removed that extra I/O time, file size (copying time) and memory usage from having that ELF wrapper"? That's why I think of a .bc-only version as the ideal, and of ELF wrapping as a good idea for supporting legacy programs as needed.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">2. Stage 2: ThinLTO Infrastructure<br>
----------------------------------------------<br>
<br>
The next set of patches adds the base implementation of the ThinLTO<br>
infrastructure, specifically those required to make ThinLTO functional<br>
and generate correct but not necessarily high-performing binaries.<br>
This stage also does not include the support needed to make debug<br>
information under -g efficient with ThinLTO.<br>
<br>
<br>
a. Clang/LLVM/gold linker options:<br>
<br>
An early set of clang/llvm patches is needed to provide options to<br>
enable ThinLTO (off by default), so that the rest of the<br>
implementation can be disabled by default as it is added.<br>
Specifically, the clang option -fthinlto (used instead of -flto) will<br>
cause clang to perform the phase-1 emission of LLVM bitcode and<br>
function summary/index on a compile step, and pass the appropriate<br>
option to the gold plugin on a link step. The -thinlto option will be<br>
added to the gold plugin and llvm-lto tool to launch the phase-2 thin<br>
archive step. The -thinlto option will also be added to the ‘opt’ tool<br>
to invoke it as a phase-3 parallel backend instance.<br>
<br>
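As a small illustration of the tool-side plumbing (the flag name is<br>
from this RFC; everything else is invented), the ‘opt’ change could<br>
start with little more than a new command-line option gating the<br>
phase-3 work:<br>
<br>
  #include "llvm/Support/CommandLine.h"<br>
<br>
  // Off by default; when set, opt runs as a phase-3 parallel backend<br>
  // instance and enables the ThinLTO import pass.<br>
  static llvm::cl::opt<bool> ThinLTO(<br>
      "thinlto", llvm::cl::init(false),<br>
      llvm::cl::desc("Run as a ThinLTO parallel backend instance"));<br>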
<br>
b. Thin-archive linking support in Gold plugin and llvm-lto:<br>
<br>
Under the new plugin option (see above), the plugin needs to perform<br>
the phase-2 (thin archive) link, which simply emits a combined function<br>
map from the linked modules, without actually performing the normal<br>
link. Corresponding support should be added to the standalone llvm-lto<br>
tool to enable testing/debugging without involving the linker and<br>
plugin.<br>
<br>
<br>
c. ThinLTO backend support:<br>
<br>
Support for running a phase-3 backend invocation (including<br>
importing) on a module should be added to the ‘opt’ tool under the new<br>
option. The main changes under the option are to instantiate a Linker<br>
object that manages linking imported functions into the module, to<br>
read the combined function map efficiently, and to enable the ThinLTO<br>
import pass.<br>
<br>
<br>
d. Function index/summary support:<br>
<br>
This includes infrastructure for writing and reading the function<br>
index/summary section. As noted earlier this will be encoded in a<br>
special ELF section within the module, alongside the .llvmbc section<br>
containing the bitcode. The thin archive generated by phase-2 of<br>
ThinLTO simply contains all of the function index/summary sections<br>
across the linked modules, organized for efficient function lookup.<br>
<br>
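As a purely illustrative sketch (none of these names are taken from<br>
the prototype), an entry in the combined function map might look<br>
roughly like the following; the fields are motivated in the next<br>
paragraph:<br>
<br>
  #include <cstdint><br>
  #include <map><br>
  #include <string><br>
  #include <utility><br>
<br>
  // One entry per function available for importing. A format version<br>
  // id would live in the section header rather than in each entry.<br>
  struct FunctionSummaryEntry {<br>
    uint64_t BitcodeOffset; // function offset within its bitcode file<br>
    uint32_t InstCount;     // size summary recorded during parsing<br>
    // ...more summary fields added as import tuning proceeds...<br>
  };<br>
<br>
  // Combined function map produced by phase 2: function name -><br>
  // (source module path, summary), organized for efficient lookup.<br>
  typedef std::map<std::string,<br>
                   std::pair<std::string, FunctionSummaryEntry>><br>
      CombinedFunctionMap;<br>
<br>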
Each function available for importing from the module has an<br>
entry in the module’s function index/summary section and in the<br>
resulting combined function map. Each function entry contains that<br>
function’s offset within the bitcode file, used to efficiently locate<br>
and quickly import just that function. The entry also contains summary<br>
information (e.g. basic information determined during parsing such as<br>
the number of instructions in the function), that will be used to help<br>
guide later import decisions. Because the contents of this section<br>
will change frequently during ThinLTO tuning, it should also be marked<br>
with a version id for backwards compatibility or version checking.<br></blockquote><div><br></div><div>I have an idea for a future version.</div><div><br></div><div>Give passes the ability to write their own summary data at compile time, and to read it back in the backends. Merge these summaries in the link, then after splitting send the merged summaries to each backend regardless of whether it imports the function body. For instance, dead argument elimination could summarize which functions ignore which arguments (either entirely, or locally except for which arguments in which callees). The full graph of such summaries is much smaller than the functions' implementations, yet would allow each backend to run an analysis over the whole program. Say function A's body is in this backend, and A calls B, whose body is not available to this backend. The summary would record that the first argument to B is dead, so we can optimize away the chain of computation feeding it in A. (I think a more compelling example would be alias analysis, but it would make for a messier example.)</div><div><br></div><div>Nick</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">e. ThinLTO importing support:<br>
<br>
Support for the mechanics of importing functions from other modules,<br>
which can go in gradually as a set of patches since it will be off by<br>
default. Separate patches can include:<br>
<br>
- BitcodeReader changes to use the function index to<br>
import/deserialize a single function of interest (small changes,<br>
leveraging the existing lazy<br>
streamer support).<br>
<br>
- Minor LTOModule changes to pass the ThinLTO function to import and<br>
its index into the bitcode reader.<br>
<br>
- Marking of imported functions (for use in ThinLTO-specific symbol<br>
linking and global DCE, for example). This can be in-memory initially,<br>
but IR support may be required in order to support streaming bitcode<br>
out and back in again after importing.<br>
<br>
- ModuleLinker changes to do ThinLTO-specific symbol linking and<br>
static promotion when necessary. The linkage type of imported<br>
functions changes to AvailableExternallyLinkage, for example, and<br>
statics must be promoted in certain cases and renamed in consistent<br>
ways (see the sketch following this list).<br>
<br>
- GlobalDCE changes to support removing imported functions that were<br>
not inlined (very small changes to existing pass logic).<br>
<br>
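Here is an illustrative sketch (not the prototype's code) of the two<br>
ModuleLinker behaviors mentioned above; the ".thinlto.<id>" renaming<br>
scheme is hypothetical:<br>
<br>
  #include "llvm/ADT/Twine.h"<br>
  #include "llvm/IR/Function.h"<br>
  #include "llvm/IR/GlobalValue.h"<br>
  #include <cassert><br>
  #include <string><br>
  using namespace llvm;<br>
<br>
  // Imported copies become available_externally: usable for inlining,<br>
  // then discardable by GlobalDCE if no out-of-line body is needed.<br>
  static void markImported(Function &F) {<br>
    F.setLinkage(GlobalValue::AvailableExternallyLinkage);<br>
  }<br>
<br>
  // Statics referenced across the import boundary are promoted and<br>
  // renamed identically in the source and destination modules so the<br>
  // references still match up.<br>
  static void promoteLocal(GlobalValue &GV, StringRef ModuleId) {<br>
    assert(GV.hasLocalLinkage() && "only statics need promotion");<br>
    std::string Name = (GV.getName() + ".thinlto." + ModuleId).str();<br>
    GV.setName(Name);<br>
    GV.setLinkage(GlobalValue::ExternalLinkage);<br>
    GV.setVisibility(GlobalValue::HiddenVisibility);<br>
  }<br>
<br>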
<br>
f. ThinLTO Import Driver SCC pass:<br>
<br>
Adds Transforms/IPO/ThinLTO.cpp with the framework for doing ThinLTO<br>
via an SCC pass, enabled only under the -fthinlto options. The pass<br>
consults the thin archive (global function index/summary), applies<br>
the import decision heuristics, invokes the LTOModule/ModuleLinker<br>
routines that do the import, and performs any necessary callgraph<br>
updates and verification.<br>
<br>
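A skeleton of what that driver pass could look like under the legacy<br>
pass manager (the pass name and structure here are illustrative):<br>
<br>
  #include "llvm/Analysis/CallGraphSCCPass.h"<br>
  using namespace llvm;<br>
<br>
  namespace {<br>
  // Transforms/IPO/ThinLTO.cpp (sketch): walk each SCC, consult the<br>
  // combined function index, and import profitable callees.<br>
  struct ThinLTOImport : public CallGraphSCCPass {<br>
    static char ID;<br>
    ThinLTOImport() : CallGraphSCCPass(ID) {}<br>
<br>
    bool runOnSCC(CallGraphSCC &SCC) override {<br>
      bool Changed = false;<br>
      // For each call site in the SCC: look up the callee in the<br>
      // combined function map, apply the import heuristics, and<br>
      // invoke the LTOModule/ModuleLinker import path (all elided).<br>
      return Changed;<br>
    }<br>
  };<br>
  } // end anonymous namespace<br>
  char ThinLTOImport::ID = 0;<br>
<br>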
<br>
g. Backend Driver:<br>
<br>
For a single-node build, the gold plugin can simply write a makefile<br>
and fork the parallel backend instances directly via parallel make.<br>
<br>
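A minimal sketch of that single-node driver, assuming per-module<br>
(IR file, object file) path pairs; the -thinlto-index flag passed to<br>
opt is hypothetical:<br>
<br>
  #include <cstdlib><br>
  #include <fstream><br>
  #include <string><br>
  #include <utility><br>
  #include <vector><br>
<br>
  // Emit one make rule per module, then let make provide the<br>
  // backend parallelism.<br>
  static int runBackends(<br>
      const std::vector<std::pair<std::string, std::string>> &Jobs,<br>
      const std::string &IndexPath) {<br>
    std::ofstream MF("backends.mk");<br>
    MF << "all:";<br>
    for (const auto &J : Jobs)<br>
      MF << " " << J.second; // J.second is the object file path<br>
    MF << "\n";<br>
    for (const auto &J : Jobs)<br>
      MF << J.second << ": " << J.first << "\n\t"<br>
         << "opt -thinlto -thinlto-index=" << IndexPath<br>
         << " -o " << J.second << " " << J.first << "\n";<br>
    MF.close();<br>
    return std::system("make -j -f backends.mk all");<br>
  }<br>
<br>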
<br>
3. Stage 3: ThinLTO Tuning and Enhancements<br>
----------------------------------------------------------------<br>
<br>
This stage contains patches that are not required for ThinLTO to<br>
work, but rather improve compile time, memory use, run-time<br>
performance, and usability.<br>
<br>
<br>
a. Lazy Debug Metadata Linking:<br>
<br>
The prototype implementation included lazy importing of module-level<br>
metadata during the ThinLTO pass finalization (i.e. after all function<br>
importing is complete). This applies to all module-level metadata, not<br>
just debug metadata, although that is by far the largest. This can be<br>
added as a separate set of patches, requiring changes to<br>
BitcodeReader, ValueMapper, and ModuleLinker.<br>
<br>
<br>
b. Import Tuning:<br>
<br>
Tuning the import strategy will be an iterative process that will<br>
continue to be refined over time. It involves several different types<br>
of changes: adding support for recording additional metrics in the<br>
function summary, such as profile data and optional heavier-weight IPA<br>
analyses, and tuning the import heuristics based on the summary and<br>
callsite context.<br>
<br>
<br>
c. Combined Function Map Pruning:<br>
<br>
The combined function map can be pruned of functions that are unlikely<br>
to benefit from being imported. For example, during the phase-2 thin<br>
archive plugin step we can safely omit large and (with profile data)<br>
cold functions, which are unlikely to benefit from being inlined.<br>
Additionally, all but one copy of comdat functions can be suppressed.<br>
<br>
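Using the illustrative FunctionSummaryEntry sketched in section 2d<br>
above, the pruning could be a simple predicate applied while building<br>
the combined map; the instruction threshold here is invented:<br>
<br>
  // Drop entries unlikely to be worth importing: too large to inline<br>
  // profitably, or known cold from profile data.<br>
  static bool worthKeeping(const FunctionSummaryEntry &E, bool IsCold) {<br>
    const uint32_t MaxImportInsts = 100; // hypothetical threshold<br>
    return E.InstCount <= MaxImportInsts && !IsCold;<br>
  }<br>
<br>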
<br>
d. Distributed Build System Integration:<br>
<br>
For a distributed build system, the gold plugin should write the<br>
parallel backend invocations into a makefile, including the mapping<br>
from the IR file to the real object file path, and exit. Additional<br>
work needs to be done in the distributed build system itself to<br>
distribute and dispatch the parallel backend jobs to the build<br>
cluster.<br>
<br>
<br>
e. Dependence Tracking and Incremental Compiles:<br>
<br>
In order to support build systems that stage from local disks or<br>
network storage, the plugin will optionally support computation of<br>
dependent sets of IR files that each module may import from. This can<br>
be computed from profile data, if it exists, or from the symbol table<br>
and heuristics if not. These dependence sets also enable support for<br>
incremental backend compiles.<br>
<span class=""><font color="#888888"><br>
<br>
<br>
--<br>
Teresa Johnson | Software Engineer | <a href="mailto:tejohnson@google.com">tejohnson@google.com</a> | <a href="tel:408-460-2413" value="+14084602413">408-460-2413</a><br>
<br>
</font></span></blockquote></div><br></div></div>