[llvm-dev] Next steps for optimization remarks?
Adam Nemet via llvm-dev
llvm-dev at lists.llvm.org
Tue Jun 20 01:50:09 PDT 2017
> On Jun 20, 2017, at 1:13 AM, Brian Gesiak <modocache at gmail.com> wrote:
> Hello all,
> In https://www.youtube.com/watch?v=qq0q1hfzidg, Adam Nemet (cc'ed) describes optimization remarks and some future plans for the project. I had a few follow-up questions:
> 1. As an example of future work to be done, the talk mentions expanding the set of optimization passes that emit remarks. However, the Clang User Manual mentions that "optimization remarks do not really make sense outside of the major transformations (e.g.: inlining, vectorization, loop optimizations)." I am wondering: which passes exist today that are most in need of supporting optimization remarks? Should all passes emit optimization remarks, or are there indeed passes for which optimization remarks "do not make sense"?
I think that we want to report most optimizations. Where we need to be a bit more careful is with missed optimizations. For those, we should try to report cases where there is a good chance the user can take some action to enable the transformation (e.g. a pragma, a restrict qualifier, a source modification, or a cost-model override).
> 2. I tried running llvm/utils/opt-viewer/opt-viewer.py to produce an HTML dashboard for the optimization remark YAML generated from a large C++ program. Unfortunately, the Python script does not finish, even after over an hour of processing. It appears performance has been brought up before by Bob Haarman (cc'ed), and some optimizations have been made since.  I wonder if I'm passing in bad input (6,000+ YAML files -- too many?), or if there's still some room to speed up the opt-viewer.py script? I tried the C++ implementation as well, but that never completed either. 
Do you have libYAML installed so that PyYAML uses its C parser? The pure-Python YAML parser is terribly slow. opt-viewer issues a warning if it needs to fall back on the Python parser.
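A quick way to check is whether PyYAML exposes the libYAML-backed loader. This is a minimal sketch (assuming PyYAML is the parser in play, as it is for opt-viewer); the function name is mine, not part of any LLVM tool:

```python
def has_c_yaml_loader():
    """Return True if PyYAML was built against libYAML (fast C parser)."""
    try:
        import yaml  # PyYAML; third-party
    except ImportError:
        return False
    # CSafeLoader only exists when PyYAML was compiled with libYAML support.
    return hasattr(yaml, "CSafeLoader")

if __name__ == "__main__":
    if has_c_yaml_loader():
        print("libYAML C parser available")
    else:
        print("would fall back to the slow pure-Python parser")
```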
We desperately need a progress bar in opt-viewer. Let me know if you want to add it; otherwise I will. I filed llvm.org/PR33522 for this.
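A progress indicator over the input files needs nothing beyond the standard library. A minimal sketch (the helper name is mine, not something in opt-viewer today):

```python
import sys

def iter_with_progress(items, label="files"):
    """Yield items while printing a one-line progress counter to stderr."""
    total = len(items)
    for i, item in enumerate(items, 1):
        sys.stderr.write("\r%s: %d/%d" % (label, i, total))
        sys.stderr.flush()
        yield item
    sys.stderr.write("\n")

# Usage sketch:
#   for path in iter_with_progress(yaml_paths):
#       parse_remarks(path)
```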
In terms of improving the performance, I am pretty sure the bottleneck is still YAML parsing so:
- If PGO is used, we can have a threshold to not even emit remarks on cold code, this should dramatically improve performance, llvm.org/PR33523
- I expect that some sort of binary encoding of YAML would speed up parsing but I haven’t researched this topic yet...
- There is a simple tool called opt-stats.py next to opt-viewer which provides stats on the different types of remarks. We can see which ones are overly noisy and try to reduce the false-positive rate. For example, last time I checked, the inlining remark that reports a missing definition of the callee was the top missed remark. We should not report this for system headers, where there is not much the user can do.
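The last two bullets can be sketched with nothing but the standard library. This assumes the serialized remark format where each record starts with a tag line like `--- !Missed` and carries `Pass:`, `Name:` and (with PGO) `Hotness:` fields; the function below is an illustration of hotness filtering plus opt-stats-style tallying, not code from either tool:

```python
import re
from collections import Counter

# Each remark document begins with "--- !Kind" (Passed, Missed, Analysis).
DOC_RE = re.compile(r"^--- !(\w+)$", re.M)

def remark_stats(text, min_hotness=0):
    """Count (kind, pass, name) triples, skipping remarks colder than min_hotness."""
    counts = Counter()
    parts = DOC_RE.split(text)[1:]  # [kind1, body1, kind2, body2, ...]
    for kind, body in zip(parts[::2], parts[1::2]):
        m = re.search(r"^Hotness:\s*(\d+)", body, re.M)
        hotness = int(m.group(1)) if m else 0
        if hotness < min_hotness:
            continue  # a PGO threshold would drop cold remarks like this
        pass_m = re.search(r"^Pass:\s*(\S+)", body, re.M)
        name_m = re.search(r"^Name:\s*(\S+)", body, re.M)
        counts[(kind,
                pass_m.group(1) if pass_m else "?",
                name_m.group(1) if name_m else "?")] += 1
    return counts

sample = """\
--- !Missed
Pass: inline
Name: NoDefinition
Hotness: 300
...
--- !Passed
Pass: loop-vectorize
Name: Vectorized
Hotness: 2
...
"""
print(remark_stats(sample, min_hotness=10))
```

The same idea applied at emission time (only serialize remarks whose hotness clears a threshold) is what would make the biggest difference, since it shrinks the YAML before any parser sees it.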
> Overall I'm excited to make greater use of optimization remarks, and to contribute in any way I can. Please let me know if you have any thoughts on my questions above!
>  https://clang.llvm.org/docs/UsersManual.html#options-to-emit-optimization-reports
>  http://lists.llvm.org/pipermail/llvm-dev/2016-November/107039.html
>  https://reviews.llvm.org/D26723
> - Brian Gesiak