[llvm-dev] Next steps for optimization remarks?
Simon Whittaker via llvm-dev
llvm-dev at lists.llvm.org
Fri Jul 14 09:50:14 PDT 2017
>process. I think that it could lead to exploring a better representation
>than YAML which is, indeed, a little slow to parse
As a data point, the codebase of a recent PlayStation4 game produces over
10GiB of YAML files; naturally, I tend to run opt-viewer on just a
subset of these to get a reasonable workflow.
Just looking at one of the graphics libraries, which is a reasonable
granularity to examine, we have ~10MiB of object file for x86-64, including
debug info. This produces ~70MiB of YAML which takes 48s to parse
(optrecord.gather_results) and 25s to produce a total of ~70MiB of HTML
(generate_report) on a decent i7 with SSD. Not terrible but probably too
slow for our end-users.
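For anyone wanting to reproduce the measurement, most of the gather time is spent walking a stream of small YAML documents, one per remark. Below is a stdlib-only sketch of that document-by-document walk over a simplified, flat subset of the remark fields (the real files carry nested DebugLoc/Args entries, which optrecord parses with a full YAML loader; this is only an illustration of the file shape, not a replacement parser):

```python
def iter_remark_docs(text):
    """Yield one dict of top-level 'Key: value' fields per YAML document.

    Simplified illustration only: real remark files use tags such as
    '--- !Missed' plus nested mappings, handled by a real YAML parser
    in optrecord.gather_results.
    """
    doc = {}
    for line in text.splitlines():
        if line.startswith("--- "):
            if doc:
                yield doc
            # Record the remark kind carried in the document tag.
            doc = {"Tag": line[4:].strip()}
        elif ":" in line and not line.startswith(" "):
            key, _, value = line.partition(":")
            doc[key.strip()] = value.strip()
    if doc:
        yield doc

# A two-remark sample in the same shape as -fsave-optimization-record output.
sample = """--- !Missed
Pass: inline
Name: NoDefinition
Function: main
--- !Passed
Pass: loop-vectorize
Name: Vectorized
Function: foo
"""
docs = list(iter_remark_docs(sample))
```

Counting documents this way against one of the 70MiB files gives a quick feel for how many per-document parses the 48s figure amortizes over.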
Brian, did you get time to try out some alternative representations?
Although we've not done any finer-grained profiling of the above, we also
suspect a binary representation might improve things. If you've not looked
at this yet we might be able to investigate over the next couple of weeks.
If you already have then I'd be happy to test against the codebase above
and see what the difference is like.
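To make the binary-vs-YAML comparison concrete, here is a rough stdlib-only sketch of a length-prefixed binary encoding of a single remark next to its textual form. The field layout is invented purely for illustration; it is not a proposed format:

```python
import struct

def encode_remark(pass_name, name, function, line, col):
    """Pack one remark as length-prefixed UTF-8 strings plus two u32s."""
    out = bytearray()
    for s in (pass_name, name, function):
        b = s.encode("utf-8")
        out += struct.pack("<H", len(b)) + b
    out += struct.pack("<II", line, col)
    return bytes(out)

def decode_remark(buf):
    """Inverse of encode_remark: three strings, then line and column."""
    fields, off = [], 0
    for _ in range(3):
        (n,) = struct.unpack_from("<H", buf, off)
        off += 2
        fields.append(buf[off:off + n].decode("utf-8"))
        off += n
    line, col = struct.unpack_from("<II", buf, off)
    return (*fields, line, col)

# The equivalent remark as it would appear in the YAML output.
text_form = ("--- !Passed\nPass: loop-vectorize\nName: Vectorized\n"
             "Function: foo\nDebugLoc: { Line: 12, Column: 3 }\n")
blob = encode_remark("loop-vectorize", "Vectorized", "foo", 12, 3)
```

Beyond the size win, fixed-width length prefixes avoid the per-character scanning a YAML parser has to do, which is where we suspect most of the 48s goes.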
To echo Davide, we don't want to sound too negative - the remarks work is
definitely a good direction to be going in and is already useful.
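One more observation: independent of the representation, the gather step is embarrassingly parallel across the per-object YAML files, so fan-out may buy time even before any format change. A minimal sketch with a stub parser (`parse_file` here is a hypothetical stand-in, not the optrecord API):

```python
from concurrent.futures import ThreadPoolExecutor

def parse_file(contents):
    """Hypothetical stand-in for per-file remark parsing; counts documents."""
    return contents.count("--- ")

# Two fake remark files of different sizes.
files = [
    "--- !Missed\nPass: inline\n",
    "--- !Passed\nPass: gvn\n--- !Passed\nPass: licm\n",
]
with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(parse_file, files))
total = sum(counts)
```

For real CPU-bound YAML parsing you would want a process pool rather than threads (Python's GIL serializes thread-level CPU work); threads are used above only to keep the sketch self-contained.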
On Fri, Jul 14, 2017 at 8:21 AM, Davide Italiano via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> On Mon, Jun 19, 2017 at 4:13 PM, Brian Gesiak via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
> > Hello all,
> > In https://www.youtube.com/watch?v=qq0q1hfzidg, Adam Nemet (cc'ed) describes
> > optimization remarks and some future plans for the project. I had a few
> > follow-up questions:
> > 1. As an example of future work to be done, the talk mentions expanding the
> > set of optimization passes that emit remarks. However, the Clang User Manual
> > mentions that "optimization remarks do not really make sense outside of the
> > major transformations (e.g.: inlining, vectorization, loop unrolling)."
> > I am wondering: which passes exist today that are most in need of
> > supporting optimization remarks? Should all passes emit optimization
> > remarks, or are there indeed passes for which optimization remarks "do not
> > make sense"?
> > 2. I tried running llvm/utils/opt-viewer/opt-viewer.py to produce an HTML
> > dashboard for the optimization remark YAML generated from a large C++
> > program. Unfortunately, the Python script does not finish, even after
> > an hour of processing. It appears performance has been brought up before
> > with Bob Haarman (cc'ed), and some optimizations have been made since. I
> > wonder if I'm passing in bad input (6,000+ YAML files -- too many?), or
> > whether there's still some room to speed up the opt-viewer.py script? I tried
> > the C++ implementation as well, but that never completed either.
> > Overall I'm excited to make greater use of optimization remarks, and to
> > contribute in any way I can. Please let me know if you have any thoughts on
> > my questions above!
> I've been asked at $WORK to take a look at `-opt-remarks` , so here
> are a couple of thoughts.
> 1) When LTO is on, the output isn't particularly easy to read. I guess
> this can be mitigated with some filtering approach; Simon and I
> discussed it offline.
> 2) Yes, indeed `opt-viewer` takes forever for large testcases to
> process. I think that it could lead to exploring a better
> representation than YAML which is, indeed, a little slow to parse. To
> be honest, I'm torn about this.
> YAML is definitely really convenient as we already use it somewhere in
> tree, and it has an easy textual repr. OTOH, it doesn't seem to scale
> that nicely.
> 3) There are lots of optimizations which are still missing from the
> output, in particular PGO remarks (including, e.g., branch
> probabilities, which still use the old API as far as I can tell).
> 4) `opt-remarks` heavily relies on the fidelity of the DebugLoc
> attached to instructions. Things get a little hairy at -O3 (or with
> -flto) because there are optimization bugs, so transformations don't
> preserve debug info. This is not entirely orthogonal, but it's something
> that can be worked on in parallel (bonus point: this would also help the
> SamplePGO & debuginfo experience). With `-flto` the problem gets
> amplified, as expected.
> 5) I found a couple of issues when trying out the support, but I'm
> actively working on them.
> That said, I think optimization remarks support is coming along nicely.