[llvm-dev] RFC: a practical mechanism for applying Machine Learning for optimization policies in LLVM

Adrian Prantl via llvm-dev llvm-dev at lists.llvm.org
Thu Apr 9 10:47:48 PDT 2020



> On Apr 8, 2020, at 2:04 PM, Mircea Trofin via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 
> TL;DR: We can improve compiler optimizations driven by heuristics by replacing those heuristics with machine-learned policies (ML models). Policies are trained offline and ship as part of the compiler. Determinism is maintained because the models are fixed when the compiler is operating in production. Fine-tuning, or fixing regressions, is handled by incorporating the interesting cases into the ML training set, retraining the model, and redeploying the updated compiler.
> For a first milestone, we chose inlining for size (-Oz) on X86-64. We were able to train an ML model to produce binaries 1.5-6% smaller than tip-of-tree -Oz. The trained model appears to generalize well over a diverse set of binaries. Compile time increases marginally (under 5%). The model also happens to produce slightly better-performing code on SPEC2006, improving the total score by 1.75%. As we only wanted to verify there is no significant regression on SPEC, and given the milestone goals, we haven’t dug any deeper into the speed results.
> 
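To make the mechanism in the quoted TL;DR concrete, here is a minimal C++ sketch, assuming a decision interface behind which the hand-written heuristic and the offline-trained model are interchangeable. The names (InlineAdvisor, CallSiteFeatures, shouldInline) and the features and weights are illustrative assumptions, not the RFC's actual interface.

struct CallSiteFeatures {
  // Illustrative feature set; the real policy would consume whatever
  // call-site/callee properties it was trained on.
  int CalleeBasicBlockCount;
  int NumConstantArgs;
  int CallSiteHeight;
};

class InlineAdvisor {
public:
  virtual ~InlineAdvisor() = default;
  virtual bool shouldInline(const CallSiteFeatures &F) const = 0;
};

// Status quo: a hand-written heuristic.
class HeuristicAdvisor final : public InlineAdvisor {
public:
  bool shouldInline(const CallSiteFeatures &F) const override {
    return F.CalleeBasicBlockCount < 5 || F.NumConstantArgs > 0;
  }
};

// Proposal: the decision comes from a policy trained offline and
// shipped inside the compiler; because the model is fixed in
// production, decisions stay deterministic for a given release.
class MLAdvisor final : public InlineAdvisor {
public:
  bool shouldInline(const CallSiteFeatures &F) const override {
    // Stand-in for evaluating the embedded model; these weights are
    // made up and would come from offline training.
    float Score = 0.3f * float(5 - F.CalleeBasicBlockCount) +
                  0.2f * float(F.NumConstantArgs) -
                  0.1f * float(F.CallSiteHeight);
    return Score > 0.0f;
  }
};

// The inliner sees only the interface, so retraining and redeploying
// swaps the policy without touching the pass itself.
bool decideInlining(const InlineAdvisor &A, const CallSiteFeatures &F) {
  return A.shouldInline(F);
}

Under this pattern, a regression is addressed by adding the offending cases to the training set and rebuilding the model, rather than by patching the heuristic by hand.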

How generally applicable are the results? I find it easy to believe that if, for example, you train a model on SPEC2006, you will get fantastic results when compiling SPEC2006, but does that translate well to other inputs? Or is the idea to train the compiler on the exact input program to be compiled (perhaps in a CI of sorts), so that you end up with a specialized compiler for each input program?

-- adrian

