On 2013-06-12 17:23, Diego Novillo wrote:
> I have started looking at the state of PGO (Profile Guided
> Optimization) in LLVM. I want to discuss my high-level plan and make
> sure I'm not leaving anything interesting out. I appreciate any
> feedback on this, pointers to existing work, patches and anything
> related to PGO in LLVM.

Good grief. A whole lot of fail in my cut-n-paste job. Apologies.

> I will be keeping changes to this plan in this web document:
>
> https://docs.google.com/document/d/1b2XFuOkR2K-Oao4u5fR3a9Ok83IB_W4EJWVmNak4GRE/pub

You can read it from the link above or here:

At a high level, I would like the PGO harness to contain the
following modules:

Profile generators

These modules represent sources of profile information. Mostly, they
work by instrumenting the user program to make it produce profile
data. However, other sources of profile information (e.g., samples,
hardware counters, static predictors) would also be supported.
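
To make the instrumentation idea concrete, a generator pass
essentially plants counter updates in the code it wants to measure.
A minimal sketch against the C++ IRBuilder API (the counter name and
the per-block placement are made up for the example; a real generator
also has to emit the runtime glue that writes the counters out):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Module.h"

  using namespace llvm;

  // Plant "++counter" at the start of BB.
  static void instrumentBlock(Module &M, BasicBlock &BB) {
    Type *I64 = Type::getInt64Ty(M.getContext());
    GlobalVariable *Counter = new GlobalVariable(
        M, I64, /*isConstant=*/false, GlobalValue::InternalLinkage,
        ConstantInt::get(I64, 0), "__pgo_counter");  // name is made up

    IRBuilder<> B(&BB, BB.getFirstInsertionPt());
    Value *Old = B.CreateLoad(Counter, "pgo.count");
    Value *New = B.CreateAdd(Old, ConstantInt::get(I64, 1));
    B.CreateStore(New, Counter);
  }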

Profile Analysis Oracles

Profile information is loaded into the compiler and translated into
analysis data which the optimizers can use. These oracles become the
one and only source of profile information used by transformations.
Direct access to the raw profile data generated externally is not
allowed.

Translation from profile information into analysis can be done by
adding IR metadata or by altering compiler-internal data structures
directly. I prefer IR metadata because it simplifies debugging, unit
testing and bug reproduction.
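
For branch profiles, the metadata route already has a natural home in
the !prof "branch_weights" annotation. A loader would do something
along these lines for each measured two-way branch (a sketch, not the
final loader design):

  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/MDBuilder.h"

  using namespace llvm;

  // Record the measured counts for a conditional branch as
  // "branch_weights" profile metadata.  Everything downstream reads
  // it back through the analyses, never from the raw profile file.
  static void annotateBranch(BranchInst *BI, uint32_t TakenCount,
                             uint32_t NotTakenCount) {
    MDBuilder MDB(BI->getContext());
    BI->setMetadata(LLVMContext::MD_prof,
                    MDB.createBranchWeights(TakenCount, NotTakenCount));
  }

Since the weights then travel with the IR itself, a unit test or a
bug report only needs the .ll file, not a rerun of the original
profiling job.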

Analyses should be narrow in the specific type of information they
provide (e.g., branch probability), and there should not be two
different analyses that provide overlapping information. We could
later provide broader analysis types by aggregating the existing
ones.

Transformations

Transformations should naturally take advantage of profile
information by consulting the analyses. The better the information
they get from the analysis oracles, the better their decisions.
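
For instance, a transformation that only cares about edge hotness
would require BranchProbabilityInfo from the pass manager and ask it
that one narrow question. A rough sketch (the 80% threshold is
arbitrary, purely for illustration):

  #include "llvm/Analysis/BranchProbabilityInfo.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/InstrTypes.h"
  #include "llvm/Support/BranchProbability.h"

  using namespace llvm;

  // BPI would come from getAnalysis<BranchProbabilityInfo>() in the
  // calling pass; the raw profile is never touched here.
  static void visitHotEdges(Function &F, BranchProbabilityInfo &BPI) {
    const BranchProbability Hot(4, 5);  // arbitrary 80% threshold
    for (Function::iterator I = F.begin(), E = F.end(); I != E; ++I) {
      BasicBlock *BB = &*I;
      TerminatorInst *TI = BB->getTerminator();
      for (unsigned s = 0, n = TI->getNumSuccessors(); s != n; ++s)
        if (BPI.getEdgeProbability(BB, TI->getSuccessor(s)) > Hot) {
          // This edge is hot: keep it on the fall-through path,
          // inline more aggressively into it, etc.
        }
    }
  }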

My plan is to start by making sure that the infrastructure exists and
provides the basic analyses. I have two primary goals in this first
phase:

1- Augment the PGO infrastructure where required.
2- Fix existing transformations that are not taking advantage of
profile data.

In evaluating and triaging the existing infrastructure, I will use
test cases taken from GCC’s own testsuite, a collection of Google’s
internal applications and any other code bases folks consider useful.

In using GCC’s testsuite, my goal is not to mimic how GCC does its
work, but to make sure that the two compilers implement functionally
equivalent transformations. That is, to make sure that LLVM is not
leaving optimization opportunities behind.

This may require implementing missing profile functionality. From a
brief inspection of the code, most of the major kinds of profiling
seem to be there (edge, path, block), but I don’t know what state
they are in.

Some of the properties I would like to maintain or add to the current
framework:

* Profile data is never accessed directly by analyses and
transformations. Rather, it is translated into IR metadata.

* Graceful degradation in the presence of stale profiles. Old profile
data should only result in degraded optimization opportunities. It
should neither confuse the compiler nor cause erroneous code
generation.
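
As a trivial example of the kind of defensiveness I mean, a consumer
can refuse to use branch weights that no longer line up with the CFG
they were attached to. A sketch (a real check would also look at the
metadata's tag string):

  #include "llvm/IR/InstrTypes.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Metadata.h"

  using namespace llvm;

  // If the !prof node does not carry exactly one weight per current
  // successor, the profile is stale for this branch: ignore it rather
  // than feed bogus probabilities to the optimizers.
  static bool hasUsableBranchWeights(const TerminatorInst *TI) {
    const MDNode *Prof = TI->getMetadata(LLVMContext::MD_prof);
    if (!Prof)
      return false;
    // Operand 0 is the "branch_weights" tag string.
    return Prof->getNumOperands() == TI->getNumSuccessors() + 1;
  }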

After the basic profile-based transformations are working, I would
like to add new sources of profile. Mainly, I am thinking of
implementing Auto FDO. FDO stands for Feedback Directed Optimization
(PGO and FDO tend to be used interchangeably in the GCC community).
In this scheme, the compiler does not instrument the code. Rather, it
uses an external sample collection tool (e.g., perf) to collect
samples from the program’s execution. These samples are then
converted to the format that the instrumented program would’ve
emitted.
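
The conversion step is conceptually simple: aggregate the raw samples
by source position (using the binary's debug info) and then emit them
in whatever format the instrumentation-based profile reader expects.
Very roughly, with symbolizeAddress() as a placeholder rather than an
existing API:

  #include <cstdint>
  #include <map>
  #include <string>
  #include <utility>

  // (function name, line number) -> accumulated sample count.
  typedef std::map<std::pair<std::string, unsigned>, uint64_t> LineCounts;

  // Placeholder: map a sampled PC to a (function, line) pair using
  // the binary's debug info.
  std::pair<std::string, unsigned> symbolizeAddress(uint64_t PC);

  // Fold a stream of (PC, count) samples, e.g. parsed out of perf's
  // output, into per-line counts that a profile writer can then emit.
  void accumulateSample(LineCounts &Counts, uint64_t PC,
                        uint64_t NumSamples) {
    Counts[symbolizeAddress(PC)] += NumSamples;
  }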

In terms of optimizations, our (Google) experience is that inlining
is the key beneficiary of profile information, particularly in big
C++ applications. I expect to focus most of my attention on the
inliner.