<html><head><meta http-equiv="Content-Type" content="text/html charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div class="">[Resending with smaller image to stay within the size threshold of llvm-dev]</div><br class=""><div class=""><blockquote type="cite" class=""><div class="">On Jul 14, 2017, at 8:21 AM, Davide Italiano via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" class="">llvm-dev@lists.llvm.org</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><span class="" style="float: none; display: inline !important;">On Mon, Jun 19, 2017 at 4:13 PM, Brian Gesiak via llvm-dev</span><br class=""><span class="" style="float: none; display: inline !important;"><</span><a href="mailto:llvm-dev@lists.llvm.org" class="">llvm-dev@lists.llvm.org</a><span class="" style="float: none; display: inline !important;">> wrote:</span><br class=""><blockquote type="cite" class="">Hello all,<br class=""><br class="">In <a href="https://www.youtube.com/watch?v=qq0q1hfzidg" class="">https://www.youtube.com/watch?v=qq0q1hfzidg</a>, Adam Nemet (cc'ed) describes<br class="">optimization remarks and some future plans for the project. I had a few<br class="">follow-up questions:<br class=""><br class="">1. As an example of future work to be done, the talk mentions expanding the<br class="">set of optimization passes that emit remarks. However, the Clang User Manual<br class="">mentions that "optimization remarks do not really make sense outside of the<br class="">major transformations (e.g.: inlining, vectorization, loop optimizations)."<br class="">[1] I am wondering: which passes exist today that are most in need of<br class="">supporting optimization remarks? Should all passes emit optimization<br class="">remarks, or are there indeed passes for which optimization remarks "do not<br class="">make sense"?<br class=""><br class="">2. 
I tried running llvm/utils/opt-viewer/opt-viewer.py to produce an HTML<br class="">dashboard for the optimization remark YAML generated from a large C++<br class="">program. Unfortunately, the Python script does not finish, even after over<br class="">an hour of processing. It appears performance has been brought up before by<br class="">Bob Haarman (cc'ed), and some optimizations have been made since. [2] I<br class="">wonder if I'm passing in bad input (6,000+ YAML files -- too many?), or if<br class="">there's still some room to speed up the opt-viewer.py script? I tried the<br class="">C++ implementation as well, but that never completed either. [3]<br class=""><br class="">Overall I'm excited to make greater use of optimization remarks, and to<br class="">contribute in any way I can. Please let me know if you have any thoughts on<br class="">my questions above!<br class=""><br class=""></blockquote><br class=""><span class="" style="float: none; display: inline !important;">Hi,</span><br class=""><span class="" style="float: none; display: inline !important;">I've been asked at $WORK to take a look at `-opt-remarks`, so here</span><br class=""><span class="" style="float: none; display: inline !important;">are a couple of thoughts.</span><br class=""><br class=""><span class="" style="float: none; display: inline !important;">1) When LTO is on, the output isn't particularly easy to read. 
I guess</span><br class=""><span class="" style="float: none; display: inline !important;">this can be mitigated with some filtering approach; Simon and I</span><br class=""><span class="" style="float: none; display: inline !important;">discussed it offline.</span><br class=""></div></blockquote><div class=""><br class=""></div><div class="">Can you please elaborate?</div><br class=""><blockquote type="cite" class=""><div class=""><br class=""><span class="" style="float: none; display: inline !important;">2) Yes, indeed `opt-viewer` takes forever to process large testcases.</span><br class=""><span class="" style="float: none; display: inline !important;">I think this could lead us to explore a better representation than</span><br class=""><span class="" style="float: none; display: inline !important;">YAML, which is indeed a little slow to parse. To be honest, I'm torn</span><br class=""><span class="" style="float: none; display: inline !important;">about this.</span><br class=""><span class="" style="float: none; display: inline !important;">YAML is definitely convenient, as we already use it elsewhere in tree</span><br class=""><span class="" style="float: none; display: inline !important;">and it has an easy textual representation. OTOH, it doesn't seem to</span><br class=""><span class="" style="float: none; display: inline !important;">scale that nicely.</span><br class=""></div></blockquote><div class=""><br class=""></div><div class="">Agreed. We now have a mitigation strategy with -pass-remarks-hotness-threshold, but this is something that we may have to solve properly in the long run.</div><br class=""><blockquote type="cite" class=""><div class=""><br class=""><span class="" style="float: none; display: inline !important;">3) There are lots of optimizations that are still missing from the</span><br class=""><span class="" style="float: none; display: inline !important;">output, in particular PGO remarks (including, e.g. 
branch</span><br class=""><span class="" style="float: none; display: inline !important;">probabilities, which still use the old API as far as I can tell</span><br class=""><span class="" style="float: none; display: inline !important;">[PGOInstrumentation.cpp])</span><br class=""></div></blockquote><div class=""><br class=""></div><div class="">Yes, how about we file bugs for each pass that still uses the old API (I am looking at ICP today)? Then we can split up the work and finally remove the old API.</div><div class=""><br class=""></div><div class="">Also, on exposing PGO info, I have a patch that adds a pass I call HotnessDecorator. The pass emits a remark for each basic block. opt-viewer is then made aware of these, and the remarks are special-cased to show hotness for a line unless there is already a remark on that line. The idea is that, since we only show hotness as part of a remark, a block that contains no remark would otherwise never have its hotness shown. E.g.:</div><div class=""><br class=""></div><div class=""><img apple-inline="yes" id="026751AC-C30C-464C-B748-EB47FB689CDF" class="" src="cid:766C1080-7CE5-4C29-AB05-D545CF7FD583@apple.com"></div><div class=""><br class=""></div><blockquote type="cite" class=""><div class=""><br class=""><span class="" style="float: none; display: inline !important;">4) `opt-remarks` heavily relies on the fidelity of the DebugLoc</span><br class=""><span class="" style="float: none; display: inline !important;">attached to instructions. Things get a little hairy at -O3 (or with</span><br class=""><span class="" style="float: none; display: inline !important;">-flto) because there are optimization bugs where transformations don't</span><br class=""><span class="" style="float: none; display: inline !important;">preserve debug info. 
This is not entirely orthogonal, but it can</span><br class=""><span class="" style="float: none; display: inline !important;">be worked on in parallel (bonus: this would also help the SamplePGO</span><br class=""><span class="" style="float: none; display: inline !important;">& debuginfo experience). With `-flto` the problem gets amplified,</span><br class=""><span class="" style="float: none; display: inline !important;">as expected.</span><br class=""><br class=""><span class="" style="float: none; display: inline !important;">5) I found a couple of issues when trying the support, but I'm actively</span><br class=""><span class="" style="float: none; display: inline !important;">working on them.</span><br class=""><a href="https://bugs.llvm.org/show_bug.cgi?id=33773" class="">https://bugs.llvm.org/show_bug.cgi?id=33773</a><br class=""><a href="https://bugs.llvm.org/show_bug.cgi?id=33776" class="">https://bugs.llvm.org/show_bug.cgi?id=33776</a><br class=""><br class=""><span class="" style="float: none; display: inline !important;">That said, I think optimization remarks support is coming along nicely.</span><br class=""></div></blockquote><div class=""><br class=""></div><div class="">Yes, I’ve been really happy with the progress. 
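As an aside, for anyone following along: the -pass-remarks-hotness-threshold mitigation mentioned earlier in the thread boils down to dropping remarks whose profile hotness falls below a cutoff, so far fewer records have to be serialized and later parsed by opt-viewer. A minimal sketch of that idea in Python (the remark records below are hypothetical, modeled loosely on the YAML fields; this is not LLVM's actual implementation):

```python
# Sketch of the idea behind -pass-remarks-hotness-threshold: drop remarks
# whose block hotness (profile count) is below a cutoff before emitting
# them, so tools like opt-viewer have far fewer records to parse.
# Hypothetical remark data, loosely modeled on the YAML field names.
remarks = [
    {"Pass": "inline", "Name": "Inlined", "Hotness": 30000},
    {"Pass": "loop-vectorize", "Name": "Vectorized", "Hotness": 120},
    {"Pass": "gvn", "Name": "LoadClobbered", "Hotness": 5},
]

def filter_by_hotness(records, threshold):
    """Keep only remarks whose hotness is at least `threshold`."""
    return [r for r in records if r.get("Hotness", 0) >= threshold]

hot = filter_by_hotness(remarks, 100)
print([r["Pass"] for r in hot])  # → ['inline', 'loop-vectorize']
```

With a threshold of 100, only the two hot remarks survive and the cold one is dropped, which is why this helps the volume problem but, as noted above, doesn't fix YAML parsing cost itself.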
Thanks for all the help from everybody!</div><div class=""><br class=""></div><div class="">Adam</div><br class=""><blockquote type="cite" class=""><div class=""><br class=""><span class="" style="float: none; display: inline !important;">--</span><br class=""><span class="" style="float: none; display: inline !important;">Davide</span><br class=""><span class="" style="float: none; display: inline !important;">_______________________________________________</span><br class=""><span class="" style="float: none; display: inline !important;">LLVM Developers mailing list</span><br class=""><a href="mailto:llvm-dev@lists.llvm.org" class="">llvm-dev@lists.llvm.org</a><br class=""><a href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" class="">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a></div></blockquote></div></body></html>