[llvm-dev] (Thin)LTO llvm build

Carsten Mattner via llvm-dev llvm-dev at lists.llvm.org
Sun Sep 18 04:12:09 PDT 2016


On Sun, Sep 18, 2016 at 5:45 AM, Xinliang David Li <xinliangli at gmail.com> wrote:
> As Mehdi mentioned, ThinLTO backend processes use very little memory,
> so you may get away without any additional flags (neither
> -Wl,--plugin-opt=jobs=.., nor -Dxxx for cmake to limit link
> parallelism) if your build machine has enough memory. Here is some
> build-time data for linking the 52 binaries of a clang build in
> parallel with ThinLTO (linking parallelism equals ninja parallelism).
> The machine has 32 logical cores and 64GB of memory.
>
> 1) Using the default ninja parallelism, the peak 1-minute load average
>    is 537. The total elapsed time is 9m43s.
> 2) Using ninja -j16, the peak load is 411. The elapsed time is 8m26s.
> 3) ninja -j8: elapsed time is 8m34s.
> 4) ninja -j4: elapsed time is 8m50s.
> 5) ninja -j2: elapsed time is 9m54s.
> 6) ninja -j1: elapsed time is 12m3s.
>
> As you can see, serial ThinLTO linking across multiple binaries does
> not give you the best performance. The build performance peaked at
> -j16 in this configuration. You may need to find your own best
> LLVM_PARALLEL_LINK_JOBS value.

What did you set LLVM_PARALLEL_LINK_JOBS to?
Maybe I should first try leaving it unset and see whether the build
stays within my machine's hardware limits.
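
For reference, this is roughly the configuration I have in mind (a
sketch only; -DLLVM_ENABLE_LTO=Thin and the specific job counts are my
assumptions, not values anyone reported upthread):

    $ cmake -G Ninja /path/to/llvm \
        -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_ENABLE_LTO=Thin \
        -DLLVM_PARALLEL_LINK_JOBS=4    # cap concurrent link steps
    $ ninja -j16                       # overall build parallelism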

> Having said that, there is definitely room for ThinLTO usability
> improvement, so that the ThinLTO parallel backends coordinate well
> with the build system's parallelism and the user does not need to
> figure out the sweet spot by hand.

Definitely. If parallelism can be controlled at multiple layers, an
outer layer's setting ought to influence the inner layers in a
reasonable way; that would make the whole thing more intuitive to use.
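
For a single link step, the per-link knob mentioned upthread would look
something like this (a sketch; the file names are hypothetical and the
job count is illustrative):

    # main.o and util.o are hypothetical bitcode objects compiled with
    # -flto=thin; jobs=8 caps the ThinLTO backend threads for this link.
    $ clang -flto=thin -fuse-ld=gold -Wl,--plugin-opt=jobs=8 \
        -o app main.o util.o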

