<div dir="ltr"><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Feb 2, 2014 at 9:07 PM, Andrew Trick <span dir="ltr"><<a href="mailto:atrick@apple.com" target="_blank">atrick@apple.com</a>></span> wrote:<br>
> That’s a good point. We could profile misprediction, but we have no way to
> express that either in the current BranchProbability/BlockFrequency API or
> in Chandler’s variant.

But I don't think we *need* to profile misprediction at all. It simply isn't a problem that PGO needs to solve. The problems we're looking to PGO to help with are:

- i-cache locality / general code layout
- spill placement / live range splitting
- loop vectorization and unrolling[1]
- inlining[1]

For all of these, we're really looking at hot or cold region identification and clustering. I think that in these cases it is perfectly reasonable to model "hot" and "cold" (in the branch probability sense) as both heavily biased and high confidence in that bias, and "not-hot" or "not-cold" as either not heavily biased *or* low confidence in any indicated bias.
<div class="gmail_extra"><br></div><div class="gmail_extra">This gives us a simple linear model.</div><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div><div class="gmail_extra">[1]: Note that even these are merely *speculative* use cases based on experience with other systems. I'm not really trying to make sweeping claims about *how* effective PGO will be here, only that it is an area that various folks are looking at improving with PGO based on experience in other system.s</div>