> On Jun 20, 2017, at 1:13 AM, Brian Gesiak <modocache@gmail.com> wrote:
>
> Hello all,
>
> In https://www.youtube.com/watch?v=qq0q1hfzidg, Adam Nemet (cc'ed) describes optimization remarks and some future plans for the project. I had a few follow-up questions:
>
> 1. As an example of future work to be done, the talk mentions expanding the set of optimization passes that emit remarks. However, the Clang User Manual mentions that "optimization remarks do not really make sense outside of the major transformations (e.g.: inlining, vectorization, loop optimizations)." [1] I am wondering: which passes exist today that are most in need of supporting optimization remarks? Should all passes emit optimization remarks, or are there indeed passes for which optimization remarks "do not make sense"?

I think that we want to report most optimizations. Where we need to be a bit more careful is missed optimizations. For those, we should try to report cases where there is a good chance that the user can take some action to enable the transformation (e.g. a pragma, restrict, a source modification, or a cost-model override).

> 2. I tried running llvm/utils/opt-viewer/opt-viewer.py to produce an HTML dashboard for the optimization remark YAML generated from a large C++ program. Unfortunately, the Python script does not finish, even after over an hour of processing. It appears performance has been brought up before by Bob Haarman (cc'ed), and some optimizations have been made since. [2] I wonder if I'm passing in bad input (6,000+ YAML files -- too many?), or if there's still some room to speed up the opt-viewer.py script? I tried the C++ implementation as well, but that never completed either. [3]

Do you have libYAML installed so that PyYAML can use its C parser? The pure-Python YAML parser is terribly slow; opt-viewer issues a warning if it has to fall back on it.
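For reference, the check is along these lines (a minimal sketch rather than opt-viewer's actual code; the loader names come from PyYAML's public API):

# libyaml_check.py -- illustrative sketch: prefer PyYAML's libYAML-backed
# C loader and warn if only the pure-Python loader is available.
import sys
import yaml

try:
    from yaml import CSafeLoader as Loader   # libYAML C parser, fast
except ImportError:
    from yaml import SafeLoader as Loader    # pure Python, very slow
    print("warning: PyYAML was built without libYAML; parsing will be slow",
          file=sys.stderr)

print("using loader:", Loader.__name__)
print("built with libYAML:", getattr(yaml, "__with_libyaml__", False))

If that last line prints False, installing libYAML and rebuilding PyYAML against it should make a large difference on 6,000+ files.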
We desperately need a progress bar in opt-viewer. Let me know if you want to add it; otherwise I will. I filed http://llvm.org/PR33522 for this.

In terms of improving the performance, I am pretty sure the bottleneck is still YAML parsing, so:

- If PGO is used, we can have a threshold to not even emit remarks on cold code; this should dramatically improve performance: http://llvm.org/PR33523

- I expect that some sort of binary encoding of YAML would speed up parsing, but I haven't researched this topic yet...

- There is a simple tool called opt-stats.py next to opt-viewer which provides statistics on the different types of remarks (sketched below). We can use it to see which ones are overly noisy and try to reduce the false-positive rate. For example, the last time I checked, the inlining remark that reports a missing definition of the callee was the top missed remark. We should not report it for system headers, where there is not much the user can do.
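Roughly, the tally looks something like the sketch below (illustrative only, not opt-stats.py itself; it assumes the serialized format described in the talk: one YAML document per remark, tagged !Passed/!Missed/!Analysis etc., with Pass and Name keys, in files named *.opt.yaml):

# count_remarks.py -- rough sketch: count remarks by (pass, remark name)
# across a directory of *.opt.yaml files passed as the first argument.
import collections
import glob
import sys
import yaml

try:
    from yaml import CSafeLoader as Loader    # libYAML, fast
except ImportError:
    from yaml import SafeLoader as Loader     # pure Python, slow

class RemarkLoader(Loader):
    pass

# Map the remark tags onto plain dicts so the safe loader accepts them.
for tag in ("!Passed", "!Missed", "!Analysis",
            "!AnalysisFPCommute", "!AnalysisAliasing", "!Failure"):
    RemarkLoader.add_constructor(
        tag, lambda loader, node: loader.construct_mapping(node, deep=True))

counts = collections.Counter()
for path in glob.glob(sys.argv[1] + "/*.opt.yaml"):
    with open(path) as f:
        for remark in yaml.load_all(f, Loader=RemarkLoader):
            counts[(remark["Pass"], remark["Name"])] += 1

for (pass_name, remark_name), n in counts.most_common(20):
    print("%8d  %s/%s" % (n, pass_name, remark_name))

Running something like this over the remark output of a build makes the noisy remark types (such as the inliner's missing-definition one) stand out immediately.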
Adam

> Overall I'm excited to make greater use of optimization remarks, and to contribute in any way I can. Please let me know if you have any thoughts on my questions above!
>
> [1] https://clang.llvm.org/docs/UsersManual.html#options-to-emit-optimization-reports
> [2] http://lists.llvm.org/pipermail/llvm-dev/2016-November/107039.html
> [3] https://reviews.llvm.org/D26723
>
> - Brian Gesiak