[LLVMdev] [Proposal] Parallelize post-IPO stage.

Nick Lewycky nlewycky at google.com
Tue Jul 16 15:32:31 PDT 2013

On 12 July 2013 15:49, Shuxin Yang <shuxin.llvm at gmail.com> wrote:

> Hi, There:
>   This is the proposal for parallelizing post-ipo stage. See the following
> for details.
>   I also attach a toy-grade rudimentary implementation. This
> implementation can be
> used to illustrate some concepts here. This patch is not going to be
> committed.
>   Unfortunately, this weekend I will be too busy to read emails. Please do
> not construe
> delayed response as being rude :-).
> Thanks a lot in advance for your time and insightful comments!
> Shuxin
> The proposal
> ------------
>   It is organized as following:
>    1) background info, if you heard "/usr/bin/ls", please skip it
>    2) the motivation of parallelize post-IPO stage
>    3) how to parallelize post-IPO
>    4) the linker problems.
>    5) the toy-grade rudimentary implementation
>    6) misc
> 1. Some background
> ------------------
>   The Interprocedural-optimization compilation, aka IPO or IPA, typically
> consists of three stages:
>   S1) pre-IPO
>     Each function goes through some analysis and not-very-aggressive
> optimizations.
>     Some information is collected during this stage; this info will be passed
> to the IPO stage. This info is usually called summary info.
>     The result of this stage is "fake objects", which are binary files using
>     some known object format to encapsulate IR as well as summary info
> along with
>     other stuff.
>   S2) IPO:
>     Compiler works with linker to resolve and merge symbols in the
> "fake-objects"
>     Then Interprocedural analyses (IPA) are invoked to perform
> interprocedural
>     analysis either based on the summary-info, or directly on the IR.
>     Interprocedural optimizations (IPO) are called based on the IPA result.
>     In some compilers, IPA and IPO are separated. One reason is that many
> IPAs can
>     be conducted directly on the concise summary info, while many IPOs need
> to load
>     IRs and bulky annotation/metadata into memory.
>   S3) post-IPO:
>     Typically consist of Loop-Nest-Opt, Scalar Opt, Code-Gen etc etc.
> While they
>     are intra-procedural analyses/optimizers, they may directly benefit
> from
>     the info collected in the IPO stages and pass down the road.
>   LLVM collectively calls S2 and S3 "LTO CodeGen", which is very
> confusing.
> 2. Why parallelize post-IPO stage
> ==============================
>   R1) To improve scalability
>     It is quite obvious that we are not able to put everything about a
> monster
>     program in memory at once.
>     Even if you can afford an expensive computer, the address space of a
>     single compiler process cannot accommodate a monster program.
>   R2) to take advantage of ample HW resources to shorten compile time.
>   R3) to make debugging a lot easier.
>      One can triage problems in a much smaller partition rather than
>    the huge monster program.
>   This proposal is not able to achieve goal R1 at this moment, because
> during
> the IPO stage, the compiler currently brings everything into memory at
> once.
> 3. How to parallelize post-IPO stage
> ====================================
>   From 5,000 feet up, the concept is very simple:
>    step 1) divide the merged IR into small pieces,
>    step 2) compile each of these pieces independently,
>    step 3) feed the objects of each piece back to the linker to be linked
>            into an executable or a dynamic lib.
>   Sections 3.1 through 3.3 describe these three steps respectively.

Yes, this is one approach. I think others at Google have looked into this
sort of partitioning with GCC and found that the one thing which really
helps you pick the partitions, is to profile the program and make the
partitions based on the actual paths of functions seen by the profile. I
don't think they saw much improvement without that. See
http://research.google.com/pubs/pub36355.html .

I do want to mention some other things we can do to parallelize. You may
already know of these and have considered them and rejected them before
deciding on the design you emailed out here, but I want to list them since
there's already another thread with a different approach on the mailing
list:
* Use what we have. LTO a partition at a time, directories for instance, on
the premise that LTO'ing them will produce something smaller than the sum
of its inputs. Then when you do the final whole-program step, it will be
receiving smaller inputs than it otherwise would. The change to LLVM here
is to fix the inliner (or other optimizations?) to reduce the chance that
LTO produces an output larger than its input.

* Parallelize the per-function code generator within a single LLVMContext.
CodeGen presently operates on a per-function basis, and is structured as an
analysis over llvm IR. There shouldn't be any global state, and there
shouldn't be any need for locking accesses to the IR since nothing will be
mutating it. This will even speed up clang -O0, and is a solid first step
that gets a thread-creation API into LLVM.
  - What happened to "one LLVMContext per thread"? Okay, that rule is a
little white lie. Always was. LLVMContext allows two libraries to use llvm
under the hood without interfering with each other (even to the point of
separate maps of types to avoid one library from causing slow type lookups
in the other). LLVM also doesn't have locking around accesses to the IR,
and few guarantees how many things a single mutating operation will need to
look at or change, but with those caveats in mind it is possible to share a
context across threads. Because CodeGen is structured as an analysis over
the IR without mutating the IR, it should work. There's probably still
global state in the code generator somewhere, but it's not a structural
problem.
* Parallelize the function passes, and SCCs that are siblings in the call
tree (again within a single LLVMContext). The gnarly part of this is that
globals have shared use-lists which are updated as we modify each function
individually. Those globals either need to have locks on their use-lists, be
replaced with a lockless list, or be removed entirely. Probably both, as
GlobalVariables have use-lists we actually use in the optimizers, but we
don't actually need the use-list for "i32 0".

* Parallelize ModulePasses by splitting them into an analysis phase and an
optimization phase. Make each per-TU build emit the .bc as usual plus an
analysis-file (for instance, call graph, or "which functions read/modify
which globals"). Merge all the analysis-files and farm them back out to be
used as input to the programs optimizing each .bc individually -- but now
they have total knowledge of the whole-program call graph and other AA
information, etc.
  - You can combine this with an RPC layer to give each worker the ability
to ask for the definition of a function from another worker. LLVM already
supports "unmaterialized" functions where the definition is loaded lazily.
The analysis part should arrange to give us enough information to determine
whether we want to do the inlining, then if we decide to materialize the
function we get its body from another worker.

* Parallelize by splitting into different LLVMContexts. This spares us the
difficulty of adding locks or otherwise changing the internals of LLVM,
gets us the ability to spread the load across multiple machines, and if
combined with the RPC idea above you can also get good inlining without
necessarily loading the whole program into memory on a single machine.

I'm not planning to do the work any time soon so count my vote with that in
mind, but if you ask me I think the first step should be to parallelize the
backend within a single LLVMContext first, then to parallelize the function
passes and CGSCC passes (across siblings only of course) second. Removing
the use-list from simple constants is a very interesting thing to do to
decrease lock contention, but we may want to do something smarter than just
remove it -- consider emitting a large constant that is only used by an
inline function. It is possible to emit a table of constants in the same
COMDAT group as the function, then if the inline function is discarded by
the linker the constants are discarded with it. I don't have a concrete
proposal for that.


> 3.1. Partitioning
> -----------------
>   Partitioning is to cut a reasonably-sized chunk from the big merged IR.
> It roughly consists of two steps: 1) determine the partition scheme, which
> is a relatively easy step, and 2) physically scoop the partition out of
> the merged IR, which is much more involved.
> 3.1.1 Figure out Partition scheme
> ----------------------------------
>   We randomly pick some functions and put them in a partition.
> It would be nice to perform some optimization at this moment. One opt
> in my mind is to reorder functions in order to reduce the working set and
> improve locality.
>   Unfortunately, this opt seems to be a bit blind at this time, because
>     - CallGraph is not annotated with estimated or profiled frequency.
>     - some linkers don't respect the order. It seems they just
>       remember the function order of the pristine input obj/fake-obj,
>       and enforce this order at the final link (link-exec/shared-lib) stage.
>   Anyway, I try to ignore all these problems, and try to perform partitioning
> via the following steps. Maybe we will have some luck on some platforms:
>   o. DFS the call-graph, ignoring the self-recursive edges; if freq is
>      available, prioritize the edges (i.e. corresponding to call-sites)
>      such that frequent edges are visited first.
>   o. Cut the DFS spanning tree obtained from the previous step bottom-up.
>      Each cut/partition contains a reasonable # of functions, and the
> aggregate
>      size of the functions of the partition should not exceed a predefined
>      threshold.
>   o. Repeat the previous step until the call-graph's DFS spanning tree
>      is empty.
> 3.1.2 Partition transformation
> ------------------------------
>   This is a bit involved. There are a bunch of problems we have to tackle.
>   1) When the use/def of a symbol are separated into different modules,
>      its attributes, like linkage and visibility, need to be changed
>      as well.
>       [Example 1], if a symbol is flagged as "internal" to the module where
>      it is defined, its linkage needs to be changed into "internal"
>      to the executable/lib being compiled.
>       [Example 2], for compile-time constants, their initialized values
>      need to be cloned to the partitions where they are referenced.
>      The rationale is to let the post-IPO passes take advantage
>      of the initialized values to squeeze out some performance.
>       In order to not bloat the code size, the cloned constants should be
>      marked "don't emit". [end of eg2]
>        Being able to precisely update symbols' attributes is not only
>      vital to correctness, it has a significant impact on
>      performance as well.
>        I have not yet made a thorough investigation of this issue. My
>      rudimentary implementation simply flags a symbol "external" when its
>      use/def are separated into different modules. I believe this is one
>      of the most difficult parts of this work. I guess it is going to
>      take a long time to become stable.
>   2) In order to compile each partition in a separate thread (see
>      Section 3.2), we have to put partitions in distinct LLVMContexts.
>      I could be wrong, but I cannot find code which is able to
>      perform function cloning across LLVMContexts.
>      My workaround in the patch is to perform function cloning in
>     one LLVMContext (but in a different Module, of course), then
>     save the module to a disk file, and load it into memory using a
>     new LLVMContext.
>      It is a bit circuitous and expensive.
>      One random observation:
>        Currently, function-scoped static variables are considered
>      "global variables". When cloning a function with a static variable,
>      the compiler has no idea if the static variable is used only by
>      the function being cloned, and hence separates the function
>      and the variable.
>         I guess it would be nice if we organized symbols by their scope
>      instead of their lifetime; it would be convenient for this situation.
> 3.2 Compile partitions independently
> --------------------------------------
>    There are two camps: one camp advocates compiling partitions via
> multi-process,
> the other favors multi-thread.
>   Inside Apple's compiler teams, I'm the only one belonging to the 1st camp.
> I think
> while multi-proc sounds a bit redneck, it has its advantages for this
> purpose, and
> while multi-thread is certainly more eye-popping, it has its advantages
> as well.
>   The advantages of multi-proc are:
>   1) easier to implement; each process runs in its own address space.
>      We don't need to worry about them interfering with each other.
>   2) huge, if not unlimited, address space.
>    The disadvantage is that it's expensive. But I guess the cost is
>   almost negligible compared to the overall IPO compilation.
>   The advantages of multi-threads I can imagine are:
>    1) sounds fancy
>    2) it is light-weight
>    3) inter-thread communication is easier than IPC.
>   Its disadvantages are:
>    1) Oftentimes we will come across race conditions, and it takes an
>       awfully long time to figure them out. While the code is supposed
>       to be multi-thread safe, we might miss some tricky cases.
>       Troubleshooting race conditions is a nightmare.
>    2) Small address space. This is a big problem if the compiler
>       is built 32-bit. In that case, the compiler is not able to bring
>       lots of stuff into memory even if the HW does
>       provide ample memory.
>    3) The thread-safe run-time lib is more expensive.
>       I once linked a compiler using -lpthread (I did not have to) on a
>       UNIX platform, and saw the compiler slow down by about 1/3.
>     I'm not able to convince the folks in the other camp, neither are they
>  able to convince me. I decided to implement both. Fortunately, this
>  part is not difficult; it seems rather easy to crank out each one within a
> short
>  period of time. It would be interesting to compare them side-by-side,
>  and see which camp loses :-). On the other hand, if we run into
> race-condition
>  problems, we can choose the multi-proc version as a fallback.
>    Regardless of which technique is used to compile partitions
> independently, in order to judiciously and adaptively choose an appropriate
> parallel factor, the compiler certainly needs a lib which is able to
> figure out the load on the entire system. I don't know if there is
> such a magic lib or not.
> 4. the tale of two kinds of linker
> ----------------------------------
>   As far as I can tell, llvm supports two kinds of linkers for its IPO
> compilation,
> and the support is embodied by two sets of APIs/interfaces:
>  o. Interface 1, the stuff named lto_xxxx().
>  o. The GNU gold interface.
>     The compiler interacts with GNU gold via the adapter implemented
>     in tools/gold/gold-plugin.cpp.
>     This adapter calls interface-1 to control the IPO process.
>     It does not have to call the interface APIs; I think it is definitely
>     OK for it to call internal functions.
>   The compiler used to generate a single object file from the merged
> IR; now it will generate multiple of them, one for each partition.
>   So, interface 1 is *NOT* sufficient any more.
>   For gold linker users, it is easy to make them happy just by
> hacking the adapter, informing the linker of the new input object
> files. This is done transparently; the users don't need to install a new ld.
>   For those systems where ld interacts with libLTO.{so,dylib}, the
> library has to accept the new APIs I added to interface-1 in order to
> enable the new functionality. Or maybe we can invoke '/the/path/to/ld -r
> *.o -o merged.o'
> and feed the merged.o to the linker (this will keep the interface
> intact)? Unfortunately, it does not work at all: how can I know the path
> to the ld? libLTO.{so,dylib} is invoked as a plugin; it cannot see the
> argv.
> How about hacking them by adding a nasty flag pointing to the right ld?
> Well, it works. However, I don't believe many people would like to do it this
> way;
> that means I lose the huge number of "QAs" who are working hard for this
> compiler.
>   What's wrong with interface-1? The ld side is more active than
> the compiler side; however, in concept the IPO is driven by the compiler
> side.
> This means this interface is changing over time.
>   In contrast, the gold interface (as I reverse-engineered it from the adapter
> code) is more symbol-centric, taking few IPO things into account.
> That interface is simple and stable.
> 5. the rudimentary implementation
> ---------------------------------
>   I made it work for bzip2 from CPU2000int yesterday. bzip2 is a "tiny"
> program; I intentionally lowered the partition size to get 3 partitions.
> There is no comment in the code, and it definitely needs a rewrite. I
> just checked the correctness (with the ref input), and I didn't measure how
> much it degrades the performance (due to the problem I have not yet had a
> chance to tackle, see section 3.1.2, the symbol attribute stuff).
>   The control flow basically is:
>    1) add a module pass to the IPO pass-manager, and figure
>       out the partition scheme.
>    2) physically partition the merged IR.
>       The IR and the obj of each partition are placed in a new dir, "llvmipo"
> by default:
>       --
>       ls llvmipo/
>       Makefile  merged      part1.bc    part1.o     part2.bc part2.o
> part3.bc    part3.o
>       --
>    3) For demo purposes, I drive the post-IPO stage via a makefile, which
> encapsulates
>       hacks and other nasty stuff.
>       NOTE that the post-IPO pass in my hack contains only the CodeGen pass;
> we need to
>       reorganize PassManagerBuilder::populateLTOPassManager(), which
> intermingles
>       IPO passes with intra-procedural scalar passes; we need to separate them
> and move the intra-procedural
>       scalar passes to the post-IPO stage.
>      .PHONY: all
>
>      BC = part1.bc part2.bc part3.bc
>      OBJ = ${BC:.bc=.o}
>
>      all : merged
>
>      %.o : %.bc
>          $(HOME)/tmp/lto.llc -filetype=obj $+ -o $@
>
>      merged : $(OBJ)
>          /usr/bin/ld $+ -r -o $@
>
>    4) As the Makefile suggests, the *.o files of the partitions are linked
> into a single obj "merged"
>       and fed back to the linker.
> 6) Miscellaneous
> ===========
>    Will partitioning degrade performance in theory? I think it depends on
> the definition of
> performance. If performance means execution time, I guess it does not.
> However, if performance includes code size, I think it may have some
> negative impact.
> The following are a few scenarios:
>    - constants generated by the post-IPO passes are not shared across
> partitions
>    - dead funcs may be detected during the post-IPO stage, and they may not
> be deleted.
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev