[llvm-dev] Next steps for optimization remarks?

Davide Italiano via llvm-dev llvm-dev at lists.llvm.org
Fri Jul 14 08:21:50 PDT 2017

On Mon, Jun 19, 2017 at 4:13 PM, Brian Gesiak via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
> Hello all,
> In https://www.youtube.com/watch?v=qq0q1hfzidg, Adam Nemet (cc'ed) describes
> optimization remarks and some future plans for the project. I had a few
> follow-up questions:
> 1. As an example of future work to be done, the talk mentions expanding the
> set of optimization passes that emit remarks. However, the Clang User Manual
> mentions that "optimization remarks do not really make sense outside of the
> major transformations (e.g.: inlining, vectorization, loop optimizations)."
> [1] I am wondering: which passes exist today that are most in need of
> supporting optimization remarks? Should all passes emit optimization
> remarks, or are there indeed passes for which optimization remarks "do not
> make sense"?
> 2. I tried running llvm/utils/opt-viewer/opt-viewer.py to produce an HTML
> dashboard for the optimization remark YAML generated from a large C++
> program. Unfortunately, the Python script does not finish, even after over
> an hour of processing. It appears performance has been brought up before by
> Bob Haarman (cc'ed), and some optimizations have been made since. [2] I
> wonder if I'm passing in bad input (6,000+ YAML files -- too many?), or if
> there's still some room to speed up the opt-viewer.py script? I tried the
> C++ implementation as well, but that never completed either. [3]
> Overall I'm excited to make greater use of optimization remarks, and to
> contribute in any way I can. Please let me know if you have any thoughts on
> my questions above!

I've been asked at $WORK to take a look at `-opt-remarks`, so here
are a couple of thoughts.

1) When LTO is on, the output isn't particularly easy to read. I guess
this can be mitigated with some filtering approach; Simon and I
discussed it offline.
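The kind of filtering I have in mind could look roughly like this (a minimal sketch over already-parsed remark dictionaries; the field names mirror the YAML keys LLVM emits, but the `filter_remarks` helper itself is hypothetical, not anything in-tree):

```python
# Hypothetical post-processing filter: with LTO, remarks from every
# translation unit land in one output stream, so narrowing by pass
# and/or by source file makes the result readable again.
def filter_remarks(remarks, passes=None, files=None):
    for r in remarks:
        if passes and r.get("Pass") not in passes:
            continue
        if files and r.get("DebugLoc", {}).get("File") not in files:
            continue
        yield r

# Toy input standing in for a parsed remark stream.
remarks = [
    {"Pass": "inline", "DebugLoc": {"File": "a.c", "Line": 3}},
    {"Pass": "loop-vectorize", "DebugLoc": {"File": "b.c", "Line": 9}},
]
kept = list(filter_remarks(remarks, passes={"inline"}))
```

Something like this could run either in `opt-viewer` itself or as a standalone step before rendering.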

2) Yes, `opt-viewer` does indeed take forever to process large test
cases. I think this could lead us to explore a better representation
than YAML, which is, indeed, a little slow to parse. To be honest, I'm
torn about this: YAML is really convenient, as we already use it
elsewhere in the tree and it has an easy textual representation. On
the other hand, it doesn't seem to scale that nicely.
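For reference, the remark stream is a sequence of tagged YAML documents, so parsing it from Python needs custom constructors for the `!Passed`/`!Missed`/`!Analysis` tags (a minimal sketch with PyYAML; the sample remarks are made up, and the exact field layout should be checked against real `-fsave-optimization-record` output):

```python
import yaml

# Made-up sample in the shape of an optimization record stream:
# one tagged YAML document per remark.
SAMPLE = """\
--- !Missed
Pass:     inline
Name:     NoDefinition
DebugLoc: { File: main.c, Line: 4, Column: 10 }
Function: main
...
--- !Passed
Pass:     loop-vectorize
Name:     Vectorized
DebugLoc: { File: main.c, Line: 9, Column: 3 }
Function: main
...
"""

class RemarkLoader(yaml.SafeLoader):
    pass

# Map each remark tag to a plain dict (keeping the kind) so safe
# loading accepts the tagged documents.
for kind in ("Passed", "Missed", "Analysis"):
    RemarkLoader.add_constructor(
        "!" + kind,
        lambda loader, node: dict(
            loader.construct_mapping(node, deep=True), Kind=node.tag[1:]
        ),
    )

remarks = list(yaml.load_all(SAMPLE, Loader=RemarkLoader))
by_pass = {}
for r in remarks:
    by_pass.setdefault(r["Pass"], []).append(r)
```

Pure-Python YAML parsing like this is exactly the slow path; `opt-viewer` can use libyaml's C loader when available, which helps but apparently not enough at this scale.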

3) There are lots of optimizations which are still missing from the
output, in particular PGO remarks (including, e.g., branch probability
info, which still uses the old API as far as I can tell).

4) `opt-remarks` heavily relies on the fidelity of the DebugLoc
attached to instructions. Things get a little hairy at -O3 (or with
-flto) because of optimization bugs where transformations don't
preserve debug info. This is not entirely orthogonal, but it's
something that can be worked on in parallel (bonus points: this would
also help the SamplePGO & debug-info experience). With `-flto` the
problem gets amplified even more, as expected.
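One way to quantify that damage would be to count remarks whose location was lost (again a sketch over parsed remark dicts; the `has_source_location` helper is hypothetical, and I'm assuming the usual LLVM convention that line 0 means "unknown location"):

```python
def has_source_location(remark):
    # A remark is only attributable to source if its DebugLoc survived
    # the transformation pipeline; line 0 conventionally means the
    # location was lost or is compiler-generated.
    loc = remark.get("DebugLoc") or {}
    return bool(loc.get("File")) and loc.get("Line", 0) > 0

# Toy input: one well-located remark and two degraded ones.
remarks = [
    {"Pass": "inline", "DebugLoc": {"File": "a.c", "Line": 12}},
    {"Pass": "licm"},  # transformation dropped the location entirely
    {"Pass": "gvn", "DebugLoc": {"File": "a.c", "Line": 0}},
]
lost = [r for r in remarks if not has_source_location(r)]
```

Tracking that ratio across -O2/-O3/-flto builds would give a rough measure of how much the debug-info preservation bugs cost us.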

5) I found a couple of issues when trying out the support, but I'm
actively working on them.

That said, I think optimization remarks support is coming along nicely.

