<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<p><br>
</p>
<div class="moz-cite-prefix">On 8/6/20 10:55 AM, Mircea Trofin
wrote:<br>
</div>
<blockquote type="cite" cite="mid:CAM7-JzaZJuR+QAop8jzcnfz3zozeSpqPAnVVZ1Za_0zPv08g8w@mail.gmail.com">
<div dir="ltr">Thanks - it's a llvm-wide feature, not just
llc-only, from what I can tell - which is good (llc-only
wouldn't have helped us much, we don't build with llc :) ). <br>
</div>
</blockquote>
<p><br>
</p>
<p>Exactly. The optimization-remark infrastructure was designed to
be flexible in this regard. Remarks are objects that can be fed
directly to a frontend handler (for programmatic interpretation
and/or display to the user), serialized as YAML and read back
later, and so on.<br>
</p>
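<p>For example, a pass can emit a remark through
OptimizationRemarkEmitter, and the same remark can then be handled
in-process, printed as a diagnostic, or serialized (e.g., with
clang's -fsave-optimization-record). A minimal sketch (the pass and
remark names here are illustrative, not an existing pass):</p>
<pre>#include "llvm/Analysis/OptimizationRemarkEmitter.h"
#include "llvm/IR/DiagnosticInfo.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instruction.h"

using namespace llvm;

// Emit a remark tied to an instruction; whether it reaches a
// frontend handler, a YAML file, or a remarks section is decided
// by the driver/llc options, not by the pass itself.
static void emitExampleRemark(Function &F, Instruction &I) {
  OptimizationRemarkEmitter ORE(&F);
  ORE.emit([&]() {
    return OptimizationRemark("my-pass", "Processed", &I)
           << "processed function "
           << ore::NV("Function", F.getName());
  });
}</pre>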
<p><br>
</p>
<blockquote type="cite" cite="mid:CAM7-JzaZJuR+QAop8jzcnfz3zozeSpqPAnVVZ1Za_0zPv08g8w@mail.gmail.com">
<div dir="ltr">
<div><br>
</div>
<div>We should take a look, indeed.
<div><br>
</div>
<div>I was scanning the documentation and may have missed it -
so that we can better understand the feature, do you happen to
know what the motivating scenarios were for this, and how one
would read the section in the resulting binary (e.g., a custom
binary reader, or llvm-readobj)?</div>
</div>
</div>
</blockquote>
<p><br>
</p>
<p>My recollection is that the motivation was to allow optimization
remarks, in their machine-readable/serialized form, to be
collected, stored in the binary, and later displayed, alongside
profiling information, in tools that help with performance
profiling and analysis.</p>
<p> -Hal</p>
<p><br>
</p>
<blockquote type="cite" cite="mid:CAM7-JzaZJuR+QAop8jzcnfz3zozeSpqPAnVVZ1Za_0zPv08g8w@mail.gmail.com">
<div dir="ltr">
<div><br>
</div>
<div>Thanks!</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, Aug 5, 2020 at 4:23 PM
Hal Finkel <<a href="mailto:hfinkel@anl.gov" moz-do-not-send="true">hfinkel@anl.gov</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I
think that we should consider the relationship between this
<br>
proposed mechanism and the existing mechanism that we have for
emitting <br>
and capturing optimization remarks. In some sense, I feel like
we <br>
already have a lot of this capability (e.g., llc has
-remarks-section).<br>
<br>
-Hal<br>
<br>
On 8/5/20 5:51 PM, Johannes Doerfert via llvm-dev wrote:<br>
> I like the ability, though I'm not sure about the proposed
implementation.<br>
><br>
> Did you consider a flag that redirects `llvm::outs()` and
`llvm::errs()`<br>
><br>
> into sections of the object file instead? So, you'd say:<br>
><br>
><br>
> `clang ... -mllvm -debug-only=inline ... -mllvm
-dump-section=.dump`<br>
><br>
><br>
> and you'd get the regular debug output nicely ordered in
the `.dump` <br>
> section.<br>
><br>
> I mainly want to avoid adding even more output code to the
passes, but also to be able<br>
> to collect at least that information. That doesn't mean we
couldn't add another<br>
> output stream that would always/only redirect into the
sections.<br>
><br>
><br>
> ~ Johannes<br>
><br>
><br>
> On 8/5/20 5:36 PM, Kazu Hirata via llvm-dev wrote:<br>
>> Introduction<br>
>> ============<br>
>><br>
>> This RFC proposes a mechanism to dump arbitrary
messages into object<br>
>> files during compilation and retrieve them from the
final executable.<br>
>><br>
>> Background<br>
>> ==========<br>
>><br>
>> We often need to collect information from all object
files of<br>
>> applications. For example:<br>
>><br>
>> - Mircea Trofin needs to collect information from the
function<br>
>> inlining pass so that he can train a machine-learning
model on it.<br>
>><br>
>> - I sometimes need to dump messages from optimization
passes to see<br>
>> where and how they trigger.<br>
>><br>
>> Now, this process becomes challenging when we build
large applications<br>
>> with a build system that caches and distributes
compilation jobs. If<br>
>> we were to dump messages to stderr, we would have to
be careful not to<br>
>> interleave messages from multiple object files. If
we were to modify<br>
>> a source file, we would have to flush the cache and
rebuild the entire<br>
>> application to collect dump messages from all
relevant object files.<br>
>><br>
>> High Level Design<br>
>> =================<br>
>><br>
>> - LLVM: We provide machinery for individual passes to
dump arbitrary<br>
>> messages into a special ELF section in a
compressed manner.<br>
>><br>
>> - Linker: We simply concatenate the contents of the
special ELF<br>
>> section. No change is needed.<br>
>><br>
>> - llvm-readobj: We add an option to retrieve the
contents of the<br>
>> special ELF section.<br>
>><br>
>> Detailed Design<br>
>> ===============<br>
>><br>
>> DumpAccumulator analysis pass<br>
>> -----------------------------<br>
>><br>
>> We create a new analysis pass called
DumpAccumulator and add it<br>
>> right at the beginning of the pass pipeline. It
holds the dump<br>
>> messages accumulated throughout the pipeline.<br>
>><br>
>> If you would like to dump messages from some pass,
you would obtain<br>
>> the result of DumpAccumulator in the pass:<br>
>><br>
>> DumpAccumulator::Result *DAR = <br>
>> MAMProxy.getCachedResult<DumpAccumulator>(M);<br>
>><br>
>> Then dump messages:<br>
>><br>
>> if (DAR) {<br>
>> DAR->Message += "Processing ";<br>
>> DAR->Message += F.getName();<br>
>> DAR->Message += "\n";<br>
>> }<br>
>><br>
>> AsmPrinter<br>
>> ----------<br>
>><br>
>> We dump the messages from DumpAccumulator into a
section called<br>
>> ".llvm_dump" in a compressed manner. Specifically,
the section<br>
>> contains:<br>
>><br>
>> - LEB128 encoding of the original size in bytes<br>
>> - LEB128 encoding of the compressed size in bytes<br>
>> - the message compressed with zlib::compress<br>
>><br>
>> in that order.<br>
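>><br>
>> For illustration, the writer side could look roughly like this<br>
>> (a sketch built on LLVM's zlib and LEB128 helpers from<br>
>> llvm/Support/Compression.h and llvm/Support/LEB128.h; OS and<br>
>> Message are placeholder names, not the actual patch):<br>
>><br>
>> SmallVector<char, 0> Compressed;<br>
>> cantFail(zlib::compress(Message, Compressed));<br>
>> encodeULEB128(Message.size(), OS); // original size<br>
>> encodeULEB128(Compressed.size(), OS); // compressed size<br>
>> OS << StringRef(Compressed.data(), Compressed.size());<br>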
>><br>
>> llvm-readobj<br>
>> ------------<br>
>><br>
>> We read the .llvm_dump section and dump each chunk of
compressed data,<br>
>> one after another.<br>
>><br>
>> Existing Implementation<br>
>> =======================<br>
>> <a href="https://reviews.llvm.org/D84473" rel="noreferrer" target="_blank" moz-do-not-send="true">https://reviews.llvm.org/D84473</a><br>
>><br>
>> Future Directions<br>
>> =================<br>
>><br>
>> The proposal above does not support the ThinLTO build
flow. To<br>
>> support that, I am considering storing the messages as
metadata in<br>
>> the IR at the prelink stage.<br>
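>><br>
>> A rough sketch of that direction (the metadata name and the<br>
>> surrounding code are hypothetical, not part of the patch):<br>
>><br>
>> NamedMDNode *NMD = M.getOrInsertNamedMetadata("llvm.dump");<br>
>> NMD->addOperand(MDNode::get(Ctx, MDString::get(Ctx, DAR->Message)));<br>
>><br>
>> The postlink stage would then read this named metadata back and<br>
>> emit it into the .llvm_dump section as described above.<br>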
>><br>
>> Thoughts?<br>
>><br>
>><br>
>> _______________________________________________<br>
>> LLVM Developers mailing list<br>
>> <a href="mailto:llvm-dev@lists.llvm.org" target="_blank" moz-do-not-send="true">llvm-dev@lists.llvm.org</a><br>
>> <a href="https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" rel="noreferrer" target="_blank" moz-do-not-send="true">https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br>
> _______________________________________________<br>
> LLVM Developers mailing list<br>
> <a href="mailto:llvm-dev@lists.llvm.org" target="_blank" moz-do-not-send="true">llvm-dev@lists.llvm.org</a><br>
> <a href="https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" rel="noreferrer" target="_blank" moz-do-not-send="true">https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br>
<br>
-- <br>
Hal Finkel<br>
Lead, Compiler Technology and Programming Languages<br>
Leadership Computing Facility<br>
Argonne National Laboratory<br>
<br>
</blockquote>
</div>
</blockquote>
<pre class="moz-signature" cols="72">--
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory</pre>
</body>
</html>