<div class="moz-cite-prefix">On 06/19/2017 06:13 PM, Brian Gesiak
via llvm-dev wrote:<br>
</div>
<blockquote
cite="mid:CAN7MxmWpR=UnUcPmAyhyxwV=i2xYCCopYw7euHXcgA7fEdngHw@mail.gmail.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<div dir="ltr">Hello all,
<div><br>
</div>
<div>In <a moz-do-not-send="true"
href="https://www.youtube.com/watch?v=qq0q1hfzidg">https://www.youtube.com/watch?v=qq0q1hfzidg</a>,
Adam Nemet (cc'ed) describes optimization remarks and some
future plans for the project. I had a few follow-up questions:</div>
<div><br>
</div>
<div>1. As an example of future work to be done, the talk
mentions expanding the set of optimization passes that emit
remarks. However, the Clang User Manual mentions that
"optimization remarks do not really make sense outside of the
major transformations (e.g.: inlining, vectorization, loop
optimizations)." [1] I am wondering: which passes exist today
that are most in need of supporting optimization remarks?
Should all passes emit optimization remarks, or are there
indeed passes for which optimization remarks "do not make
sense"?</div>
</div>
</blockquote>
<br>
Obviously there is a continuous spectrum of transformation effects
between "major" and "minor", and moreover, we have different consumers
of the remarks. Remarks that would be too noisy if viewed directly by a
human (because Clang prints them all, for example) might make perfect
sense when interpreted by some tool. llvm-opt-report, for example,
demonstrates how a tool can collect many remarks and aggregate them
into a more succinct form.
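
For what it's worth, the mechanical side of emitting a remark from a
pass is small. Here is a minimal sketch, assuming the pass already has
an OptimizationRemarkEmitter (ORE) available from the pass manager; the
pass name "my-pass", the remark name "Unrolled", and the message are
invented for illustration rather than taken from any existing pass:

  #include "llvm/Analysis/OptimizationRemarkEmitter.h"
  #include "llvm/IR/DiagnosticInfo.h"

  using namespace llvm;

  // Hypothetical helper: report that the loop rooted at instruction I
  // was unrolled by Count. The ore::NV key/value pairs become
  // structured fields in the YAML output that tools such as
  // llvm-opt-report and opt-viewer.py consume.
  static void reportUnrolled(OptimizationRemarkEmitter &ORE,
                             const Instruction *I, unsigned Count) {
    ORE.emit(OptimizationRemark("my-pass", "Unrolled", I)
             << "unrolled loop by a factor of "
             << ore::NV("UnrollCount", Count));
  }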

If you're looking for an area to contribute, I'd recommend looking at
how to better output (and display) the "why not" of transformations
that didn't fire. Memory dependencies that block vectorization or
loop-invariant code motion, for example, would be far more useful if
they were mapped back to source-level constructs for presentation to
the user.
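
To make that concrete, the same interface already supports attaching a
source location to a missed-optimization remark, which is what lets a
viewer map the "why not" back to the user's code. A rough sketch, again
with invented pass and remark names ("my-vectorizer", "UnsafeDep")
rather than the names any in-tree pass uses:

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/Analysis/OptimizationRemarkEmitter.h"
  #include "llvm/IR/DiagnosticInfo.h"

  using namespace llvm;

  // Hypothetical helper: explain why a loop was not vectorized. The
  // DebugLoc from getStartLoc() anchors the remark to the loop's
  // source line so a tool can present it next to the original code.
  static void reportMissedVectorization(OptimizationRemarkEmitter &ORE,
                                        const Loop *L) {
    ORE.emit(OptimizationRemarkMissed("my-vectorizer", "UnsafeDep",
                                      L->getStartLoc(), L->getHeader())
             << "loop not vectorized: unsafe memory dependence");
  }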

-Hal

> 2. I tried running llvm/utils/opt-viewer/opt-viewer.py to produce an
> HTML dashboard from the optimization remark YAML generated for a
> large C++ program. Unfortunately, the Python script does not finish,
> even after over an hour of processing. It appears performance has
> been brought up before by Bob Haarman (cc'ed), and some optimizations
> have been made since. [2] I wonder whether I'm passing in bad input
> (6,000+ YAML files -- too many?), or whether there's still room to
> speed up the opt-viewer.py script. I tried the C++ implementation as
> well, but that never completed either. [3]
>
> Overall, I'm excited to make greater use of optimization remarks and
> to contribute in any way I can. Please let me know if you have any
> thoughts on my questions above!
>
> [1] https://clang.llvm.org/docs/UsersManual.html#options-to-emit-optimization-reports
> [2] http://lists.llvm.org/pipermail/llvm-dev/2016-November/107039.html
> [3] https://reviews.llvm.org/D26723
>
> - Brian Gesiak
<pre class="moz-signature" cols="72">--
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory</pre>