[LLVMdev] LTO question
dnovillo at google.com
Fri Dec 12 13:59:38 PST 2014
On 12/12/14 15:56, Adve, Vikram Sadanand wrote:
> I've been asked how LTO in LLVM compares to equivalent capabilities
> in GCC. How do the two compare in terms of scalability? And
> robustness for large applications?
Neither GCC nor LLVM can handle our (Google) large applications. They're
just too massive for the kind of linking done by LTO.
When we built GCC's LTO, we tried to address this by creating a
partitioned model, where the analysis phase and the codegen phase are
split so that they can work on partial callgraphs
(see http://gcc.gnu.org/wiki/LinkTimeOptimization for details).
This allows us to split and parallelize the initial bytecode generation
and the final optimization/codegen. However, the analysis is still
implemented as a single process. We found that we cannot even load
summaries, types and symbols efficiently.
It does allow programs like Firefox to be handled. So, if by "big" you
mean something of that size, this model can do it.
With LLVM, I can't even load the IR for one of our large programs on a
box with 64 GB of RAM.
> Also, are there any ongoing efforts or plans to improve LTO in LLVM
> in the near future?
Yes. We are going to be investing in this area very soon. David and
Teresa (CC'd) will have details.