[llvm-dev] [RFC] Propeller: A frame work for Post Link Optimizations

Maksim Panchenko via llvm-dev llvm-dev at lists.llvm.org
Wed Oct 2 11:40:51 PDT 2019

Hi Sri and the team,

Thank you for sharing your proposal. It helps bring awareness to the importance
of context-sensitive and link-time optimizations in the compiler toolchain for
boosting the performance of large applications. The proposal also shows that
different approaches are possible to achieve a similar goal.

Putting basic blocks into separate sections is an ambitious approach with
expected challenges. For the first time, the real overheads of this approach
were measured at scale, and the results look quite staggering. I can imagine
how it might work for Google where the overhead of the 2-stage re-compilation
could be masked out by a distributed build system, and dealing with C++
exceptions is not an issue due to software development practices. At the same
time, it should be clear that for users without access to a distributed build
system, the actual build-time overhead will far exceed that of BOLT.
Considering the proposal in its current form, I'm not convinced it's the best
approach even for Google in the long run.

Since the proposal references BOLT in multiple places, and since I’m directly
involved with BOLT and hence potentially biased, I decided to break my feedback
into two parts, with the first part being unrelated to BOLT and hopefully being
as objective as possible.

Here’s a quick summary of my concerns, followed by more detailed feedback.

  * Poor optimization of C++ code bases with exceptions
  * Pessimization/overhead for stack unwinding used by system-wide profilers and
    for exception handling
  * Increased memory footprint at application runtime
  * No clear plan for adding optimizations in the future without the need for
    more RFCs
  * Effectiveness of the relaxation pass
  * Inconsistencies in performance data
  * Debugging overhead

  * Requirement for two different builds and hidden overhead of Propeller
  * Step 1 overheads
  * Enormous increase in object file size
  * Lack of scalability for actual code reordering during the optimization phase

  * Propeller-optimized binary for continuous deployment


*Poor optimization for C++ code bases with exceptions*

From what I understand, your list of large real-world applications (you exclude
SPEC from the list, which I totally agree with) does not include a single one
that uses C++ exceptions. This is based on the assumption that exceptions are
not used at Google, and Clang is compiled without the exceptions support by
default. I consider this a major omission from the evaluation and a drawback
of the proposal.

A lot of exception-handling code is generated by the compiler “behind the scenes”
and is invisible to the user, but is exposed to Propeller. Even if such code is
never executed, i.e. exceptions are not thrown, it is important to be able to
reorder the code in the function including these blocks. The RFC says: “basic
blocks that are involved in exception handling, that is, they either throw or
catch an exception, are grouped together and placed in a single section. Unique
sections are not created for such basic blocks.” Further down the paragraph,
you say: “benchmarks which spend a significant amount of time handling
exceptions might not get the optimization benefits of Propeller.“ The proposal,
intentionally or not, makes it look like applications that are compiled with
exceptions support and that only use them rarely, i.e. for error-handling, are
not affected. Based on the limitations you describe above, I believe the code
reordering, and thus the optimization benefits, will be limited for such
applications as well.

If you cannot find any large real-world benchmark that uses C++ exceptions, at
the very minimum, I would like to see Clang itself being compiled with
exceptions support and optimized by Propeller. Granted, it will not exercise
any exception-handling code, but at least the results will give an idea of how
well Propeller handles optimizing code with the mechanism enabled.

*Pessimization/overhead for stack unwinding used by system-wide profilers and
for exception handling*

Larger CFI programs put an extra burden on unwinding at runtime as more CFI
(and thus native) instructions have to be executed. This will cause more
overhead for any profiler that records stack traces, and, as you correctly note
in the proposal, for any program that heavily uses exceptions.

*Increased application memory footprint*

The increase of the .eh_frame section is up to 17x, which is quite significant.
Since it’s part of the runtime image of the application, it is important to
know how much larger the application memory footprint gets. I.e., in addition
to the “Total” size of the binary in the comparison table, I would like to see
“Total Allocatable” size data.

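To make the request concrete, here is a rough sketch of how the “Total Allocatable” size could be measured: sum the sizes of all SHF_ALLOC sections, i.e. the part of the binary mapped at runtime. It assumes GNU readelf’s `--wide` one-line-per-section output format; the helper name and the `a.out` placeholder are mine.

```shell
# Sketch: sum the sizes of SHF_ALLOC sections (the part of the binary that
# is mapped at runtime). Reads `readelf -S --wide` output on stdin and
# assumes GNU readelf's one-line-per-section format.
alloc_bytes() {
  sed 's/^ *\[ *[0-9]*\] *//' |
    awk '$7 ~ /A/ && $5 ~ /^[0-9a-fA-F]+$/ {
           # convert the hex Size column via the shell printf
           cmd = "printf %d 0x" $5; cmd | getline v; close(cmd); t += v
         }
         END { print t + 0 }'
}
# usage (a.out is a placeholder):
#   readelf -S --wide a.out | alloc_bytes
```

Comparing this number between the baseline and the Propeller build would show how much of the .eh_frame growth actually lands in the runtime image.
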
*No clear plan for adding optimizations in the future without the need for
additional RFCs*

Some optimizations other than basic block reordering are best executed at link
time. One example would be an optimization for macro-op fusion on x86-64 CPUs.
We’ve seen a real-world application, bzip2, regress by 5% in the case of
“unlucky” code placement. The optimization requires an analysis of instructions
(in this case, instruction pairs) and modification of the instruction stream
such as insertion of nops. For the optimization to be performed in the
compiler, it would require functions to be aligned at 64 bytes which is
generally not practical, hence an opportunity for link-time/binary
optimization. What is the plan for Propeller to handle the above and similar
optimizations?

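To illustrate why this needs final addresses: on many x86-64 cores, a cmp+jcc pair is not macro-fused when it straddles a 64-byte cache-line boundary, and the pair’s placement is only known after layout. The hypothetical helper below just does the boundary arithmetic; the function name is mine.

```shell
# Hypothetical helper: does an instruction pair starting at byte offset $1
# with total length $2 straddle a 64-byte boundary? On many x86-64 cores
# such a cmp+jcc pair is not macro-fused, which is why a nop may need to be
# inserted once final addresses are known.
crosses_cache_line() {
  start=$1
  len=$2
  if [ $(( start / 64 )) -ne $(( (start + len - 1) / 64 )) ]; then
    echo yes
  else
    echo no
  fi
}
# e.g. a 6-byte cmp+jcc pair at offset 60 spans bytes 60..65, crossing the
# boundary at 64; padding the block so the pair starts at 64 avoids it:
#   crosses_cache_line 60 6   # -> yes
#   crosses_cache_line 64 6   # -> no
```
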
*Effectiveness of the relaxation pass*

When you describe the relaxation pass, you do not mention whether you run it
until it converges, or run it conservatively, leaving larger-than-necessary
versions of jump instructions.

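To show what is at stake, here is a toy model (bash; all names and the 5-byte rel32 vs. 2-byte rel8 encoding choice are illustrative, not Propeller’s implementation). Shrinking one jump can bring another jump’s target into signed-byte range, so a single conservative pass leaves oversized jumps; iterating until nothing changes reaches the fixpoint. Shrink-only relaxation is safe because distances only decrease.

```shell
#!/usr/bin/env bash
# Toy relaxation model. Each argument is either a fixed byte size (e.g.
# "125") or "J<i>": a jump to the start of item i. Jumps begin as 5-byte
# rel32 and shrink to 2-byte rel8 once their displacement fits in a signed
# byte; repeat until no jump changes size.
relax() {
  local -a items=("$@") sizes off
  local n=${#items[@]} changed=1 i t disp cur
  for ((i = 0; i < n; i++)); do
    if [[ ${items[i]} == J* ]]; then sizes[i]=5; else sizes[i]=${items[i]}; fi
  done
  while ((changed)); do
    changed=0
    # recompute the start offset of every item under the current sizes
    cur=0
    for ((i = 0; i < n; i++)); do
      off[i]=$cur
      cur=$((cur + sizes[i]))
    done
    off[n]=$cur
    for ((i = 0; i < n; i++)); do
      [[ ${items[i]} == J* ]] || continue
      t=${items[i]#J}
      disp=$((off[t] - (off[i] + sizes[i])))  # measured from end of the jump
      if ((sizes[i] == 5 && disp >= -128 && disp <= 127)); then
        sizes[i]=2
        changed=1
      fi
    done
  done
  echo "${sizes[@]}"
}
```

For example, `relax J3 J3 125 0` needs two iterations: the first jump is only in range after the second one shrinks. A one-pass implementation would leave it at 5 bytes.
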
*Inconsistencies in performance data*

Performance data in bar charts indicate that Propeller is not achieving all
possible improvements for Clang, the only open-source benchmark in the
evaluation where code-layout optimizations matter. Cycles, I-Cache, and
branches-not-taken indicate that BOLT is doing better. At the same time, the
summary table shows 9% improvement for all. Which data set is correct?

*Debugging overhead*

As Propeller has to support debug info generation on a per-basic-block basis,
it increases DWARF sections corresponding to address ranges, location lists,
and line number info. How much larger are these sections with Propeller flags?
We are currently pushing the limits of 32-bit DWARF, and I wonder whether you
encountered any related issues given the Propeller bloat.

With larger debug info, the overhead flows to debuggers and tools using debug
information for annotation purposes, e.g. profilers reporting line numbers
corresponding to binary addresses. Depending on their usage of the debug info,
these tools will likely use more memory and spend more time parsing DWARF. Have
you considered/measured this overhead?

By the way, I didn’t see DWARF location lists specifically mentioned in the
proposal. Please make sure you use extra entries for those too.

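A simple way to answer the size questions above would be to diff the DWARF section sizes of the two builds. The sketch below follows the same readelf-parsing approach as elsewhere in this message; the helper name and `a.out` placeholder are mine, and GNU readelf’s `--wide` format is assumed.

```shell
# Sketch: print "<section> <bytes>" for every .debug_* section so a build
# with Propeller flags can be diffed against a baseline build. Reads
# `readelf -S --wide` output on stdin.
debug_section_sizes() {
  sed 's/^ *\[ *[0-9]*\] *//' |
    awk '$1 ~ /^\.debug_/ {
           # convert the hex Size column via the shell printf
           cmd = "printf %d 0x" $5; cmd | getline v; close(cmd); print $1, v
         }'
}
# usage (a.out is a placeholder):
#   readelf -S --wide a.out | debug_section_sizes
```
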

*Requirement for two different builds and hidden overhead of Propeller*

While discussing Memory Overhead and Runtime Overhead in the RFC, you chose to
include the overhead incurred only during link time. However, compared to a
regular highly-optimized AutoFDO+(Thin)LTO build, you require a second full
build of the application with (Thin)LTO. Even with the caching of IR, there’s
an extra cost of generating code not reflected in your evaluation. With tight
integration with the build system, it is possible to hide these costs
partially. In either case, since “all the benchmarks were built for the X86_64
platform on a 18 core Intel machine with 192 G of RAM“, you should be able to
measure extra time for compilation spent on this machine.

*Step 1 overheads*

Furthermore, wouldn’t the LLVM compiler incur extra overhead when compiling
with Propeller flags because of larger CFIs, debug info, etc.? I think it would
be fair to measure and report the resulting compile-time and memory overhead
for both builds (steps 1 and 4) versus a regular build.

Additionally, what are runtime and memory overheads for linking in Step 1? Do
they correspond to the “All” column in the Experiments section? If you choose to
compare just the link-time overhead versus the baseline, it should come from
the combined link times of steps 1 and 4, since you have to link twice for
Propeller.

*Enormous increase in object file size*

Even with the optimized on-demand approach, the size of object files almost
doubles. Since in a distributed build system object files are cached and
shared, I’m not sure it is fair to describe this approach as being “friendly to
distributed build systems” since it increases the existing load.

What is the breakdown of the increase in object file sizes?  As you mention in
the proposal, DWARF natively supports address ranges. However, specifying a
separate range for every basic block will introduce an overhead. Is this
increase included in the total overhead for the “Object File Sizes Bloat”?

*Lack of scalability for actual code reordering during the optimization phase*

When you talk about Propeller being a distributed and scalable system, you
mention that it has “a distributed part and a whole-program part”. Does this
mean that the part where you need to recompile the application is distributed
(if you are using a distributed build system), but the actual link and the code
reordering optimization are happening serially?


Real-world systems built to benefit from profile feedback optimizations, such
as AutoFDO, are designed to consume feedback collected from a previous
revision of the highly optimized binary running in prod. Does Propeller support
profiling binaries optimized by Propeller with the purpose of future
re-optimization?


Now to the BOLT part.

As you know, BOLT was originally designed to handle applications built using an
arbitrary toolchain, without any modification to the toolchain itself or the
build process (other than the “--emit-relocs” linker flag for maximum gains).
Integration with any particular toolchain, such as LLVM+lld, obviously brings
immediate advantages to the design and thus, at least in theory, comparing BOLT
to such a system would not be apples-to-apples. However, I think BOLT stands
fairly competitive even in its current form. With a few enhancements and a
little help from the linker, issues raised in the Propeller proposal could be
addressed without the need for the radical section-per-basic-block approach.

Recently (July 2019), we have parallelized many passes in BOLT and improved its
runtime. If you run experiments using an old version, you should try a more
recent one. For example, optimizing Clang 7.0, with “.text” of ~50MB, takes
less than one minute. Hardly a use case requiring any further optimization and
justifying a need for a distributed system.

One of your main arguments is that Propeller has 20% of the memory footprint of
BOLT. This is achieved by limiting optimization to profiled functions, i.e.
typically less than 10% of all functions. BOLT’s main memory overhead comes
from the intermediate representation of all functions in the binary in the form
of MCPlus. If we decide to keep IR only for functions with profile information,
i.e. less than 10% of all functions, there’s fundamentally no reason why BOLT’s
memory usage wouldn’t get decimated and brought on par with Propeller’s
link-time footprint, perhaps even lower. We never found memory to be a
restriction in our environment and haven’t heard of it being an issue from
anyone other than you.

The following statement about BOLT in your proposal does not sound right: “with
BOLT, even small source changes invalidate the profile information, an error if
the binary’s build id does not match the profile [sic]. Hence, the binary
corresponding to the source change must be re-built and re-profiled.“ Quite the
opposite, BOLT is built to tolerate binary differences, including those
resulting from LTO, and even supports profiling previously BOLTed binaries. The
only place we check for the build ID is during the profile conversion phase
corresponding to Step 3 in your proposal. This is done to avoid any possible
user error, as the collection (Step 2) and conversion often happen on different
machines. It might be a good idea to implement this check in Propeller too.

Comparison on “Search1” includes data gathered with huge pages enabled for
Propeller and disabled for BOLT. For a direct comparison, you should include
lines both with and without huge pages. If you cannot enable huge pages for
BOLT, it is fair to exclude the result.

I should also note that BOLT performs a set of optimizations beyond the code
layout that can benefit from context-sensitive profile information, such as
indirect call promotion, inlining, shrink-wrapping etc. For the full set of
optimizations implemented in BOLT, please check our CGO’19 paper

BOLT’s additional benefits include support for code instrumentation and code
protection/hardening. E.g. spectre/meltdown mitigation including that for
assembly-written code.

Obviously, this is not the right place for an alternative proposal, but I
strongly believe that integrating BOLT with a linker, such as lld, would
address the majority of the issues and provide a seamless solution to the users
of the LLVM toolchain without the need for radical changes to the emitted code
and LLVM codegen. If scalability is indeed a concern, which so far is not
what we’ve seen, then we might consider changing LLVM itself to emit more data
or add integration to a distributed build system. But again, IMHO this would be
a solution to a nonproblem.

On 9/24/19, 4:52 PM, "Sriraman Tallam" <tmsriram at google.com> wrote:

    We, at Google, recently evaluated Facebook’s BOLT, a Post Link Optimizer
    framework, on large google benchmarks and noticed that it improves key
    performance metrics of these benchmarks by 2% to 6%, which is pretty impressive
    as this is over and above a baseline binary already heavily optimized with
    ThinLTO + PGO.  Furthermore, BOLT is also able to improve the performance of
    binaries optimized via Context-Sensitive PGO.     While ThinLTO + PGO is also
    profile guided and does very aggressive performance optimizations, there is
    more room for performance improvements due to profile approximations while
    applying the transformations.  BOLT uses exact profiles from the final binary
    and is able to fill the gaps left by ThinLTO + PGO. The performance
    improvements due to BOLT come from basic block layout, function reordering and
    function splitting.
    While BOLT does an excellent job of squeezing extra performance from highly
    optimized binaries with optimizations such as code layout, it has these major
    issues:
     * It does not take advantage of distributed build systems.
     * It has scalability issues; to rewrite a binary with a ~300M text segment:
       * Memory foot-print is 70G.
       * It takes more than 10 minutes to rewrite the binary.
    Similar to Full LTO, BOLT’s design is monolithic as it disassembles the
    original binary, optimizes and rewrites the final binary in one process.  This
    limits the scalability of BOLT and the memory and time overhead shoots up
    quickly for large binaries.
    Inspired by the performance gains and to address the scalability issue of BOLT,
    we went about designing a scalable infrastructure that can perform BOLT-like
    post-link optimizations. In this RFC, we discuss our system, “Propeller”,
    which can perform profile guided link time binary optimizations in a scalable
    way and is friendly to distributed build systems.  Our system leverages the
    existing capabilities of the compiler tool-chain and is not a stand alone tool.
    Like BOLT, our system boosts the performance of optimized binaries via
    link-time optimizations using accurate profiles of the binary. We discuss the
    Propeller system and show how to do the whole program basic block layout using
    Propeller.
    Propeller does whole program basic block layout at link time via basic block
    sections.  We have added support for having each basic block in its own section
    which allows the linker to do arbitrary reorderings of basic blocks to achieve
    any desired fine-grain code layout which includes block layout, function
    splitting and function reordering.  Our experiments on large real-world
    applications and SPEC with code layout show that Propeller can optimize as
    effectively as BOLT, with just 20% of its memory footprint and time overhead.
    An LLVM branch with propeller patches is available in the git repository here:
    https://github.com/google/llvm-propeller/  We will upload individual patches of
    the various elements for review.  We have attached a google doc describing the
    Propeller system with Experiments in detail,
    Quick Start - How to optimize further with Propeller?
    # git clone and build repo
    $ cd $LLVM_DIR && git clone https://github.com/google/llvm-propeller.git
    $ mkdir $BUILD_DIR && cd $BUILD_DIR
    $ cmake -G Ninja -DLLVM_ENABLE_PROJECTS="clang;lld;compiler-rt" … \
    $ ninja -j25
    $ export PATH=$BUILD_DIR/bin:$PATH
    # Let’s Propeller-optimize the following program:
    # Step 1: Build the peak optimized binary with an additional flag.
    $ clang++ -O2 main.cc callee.cc -fpropeller-label -o a.out.labels -fuse-ld=lld
    # Step 2: Profile the binary, only one side of the branch is executed.
    $ perf record -e cycles:u -j any,u -- ./a.out.labels 1000000 2 >&  /dev/null
    # Step 3: Convert the profiles using the tool provided
    $ $LLVM_DIR/llvm-propeller/create_llvm_prof  --format=propeller \
      --binary=./a.out.labels --profile=perf.data  --out=perf.propeller
    # Step 4: Re-Optimize with Propeller, repeat Step 1 with propeller flag changed.
    $ clang++ -O2 main.cc callee.cc -fpropeller-optimize=perf.propeller -fuse-ld=lld
    In Step 4, the optimized bit code can be used if it is saved in Step 1, as
    Propeller is active only during compile backend and link.  The optimized binary
    has a different layout of the basic blocks in main to keep the executed blocks
    together and split the cold blocks.
    Some of the key points:
    +  Added support for basic block sections, similar to function sections, where
    each basic block can reside in its own section.
    +  Jump instructions need 32-bit relocations and subsequent linker relaxations
    after basic block layout.  We would like to add a new relocation type for jump
    instructions to make it easier for relaxations and guarantee correctness.
    +  Added support in the linker to read profiles (PMU LBR) and discover optimal
    basic block layout using the Ex-TSP algorithm described here:
    +  Added support in the linker to re-order basic block sections in any
    specified sequence (can be done with symbol ordering file).  This requires
    linker relaxations to delete and shrink branches across basic blocks.
    +  Compared our system to BOLT  and have shown that our system can produce
    similar performance improvements as BOLT but with much less memory and time
    overheads.  Our experiments are on very large warehouse-scale benchmarks and
    SPEC 2017.
    +  Listed why this cannot be done as part of PGO itself.  Post Link
    optimizations are able to transform the binary using accurate profiles and PGO
    passes suffer from profile imprecision.
    +  Updated DebugInfo and CFI to account for arbitrary ordering of basic blocks
    via basic block sections.
    +  Discussed CFI handling, which is sub-optimal and bloats object file sizes
    and binary sizes much more than DebugInfo due to lack of support for
    discontiguous address ranges.  We have talked about this and would like to
    make a case to support discontiguous ranges with CFI, which will make basic
    block sections much cheaper.
    Detailed RFC document here:
    Please let us know what you think,
    Sri on behalf of the team.
