[llvm-dev] [RFC] Introduce Dump Accumulator

Francis Visoiu Mistrih via llvm-dev llvm-dev at lists.llvm.org
Thu Aug 6 12:50:25 PDT 2020


Hi,

This is definitely something that can go through the remarks infrastructure. Here is how it works today on Darwin (Mach-O) platforms:

* the remarks are streamed in a separate file
* the object file will contain a section with metadata and the path to the separate remarks file
* the linker (ld64) ignores the segment that contains the remarks section (__LLVM,__remarks)
* dsymutil (which is in charge of merging debug info from all .o's) will pick the remarks and generate a standalone merged remark file

From there, dsymutil can be told to generate YAML, or we can use libRemarks to work with the binary format. The code in llvm/lib/Remarks should be easy enough to use to write tools for this. A simple one that I always keep handy converts between file formats, or extracts remarks from object files into a standalone file (I should put this upstream someday).

This is all based on the fact that the mach-o linker (ld64) adds a link back to all the object files, which then dsymutil (or any other tool that needs debug info, like lldb) uses to find and merge things together.

On the ELF side, I can imagine that before we go into teaching linkers to understand the sections, we can teach the remark processing tools to understand the concatenation of all the section contents. When using the bitstream-based format, just pointing the parser at the metadata is enough to start looping over all the remarks, whether it's a standalone file or something baked into a section. From there, it's just a matter of adding an outer loop to that.

I’m happy to review any changes there!

Adding support to llvm-readobj? That would also be useful!

What’s the easiest way to read the output today? Feed the YAML file (from dsymutil, or whatever tool was used to merge them together) to llvm/tools/opt-viewer.

One thing that might be worth looking into is better LTO support, as we lose the remarks that get generated before the intermediate bitcode emission (we need to track them in the IR somehow).

— Francis


> On Aug 6, 2020, at 12:45 PM, Hal Finkel via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 
> 
> 
> On 8/6/20 10:55 AM, Mircea Trofin wrote:
>> Thanks - it's an LLVM-wide feature, not just llc-only, from what I can tell - which is good (llc-only wouldn't have helped us much; we don't build with llc :) ). 
> 
> 
> Exactly. The optimization-remark infrastructure was designed to be flexible in this regard. Remarks are classes that can be directly fed to a frontend handler (for direct interpretation and/or for display to the user), can be serialized in YAML and read later, etc.
> 
> 
> 
>> 
>> We should take a look, indeed.
>> 
>> I was scanning the documentation and maybe I missed it - so that we better understand the feature, do you happen to know what the motivating scenarios were for this, and how one would read the section in the resulting binary (i.e. custom binary reader, llvm-readobj?)
> 
> 
> My recollection is that the motivation is so that optimization remarks, in their machine-readable/serialized form, can be collected, stored in the binary, and then displayed, along with profiling information, in tools that help with performance profiling and analysis.
> 
>  -Hal
> 
> 
> 
>> 
>> Thanks!
>> 
>> On Wed, Aug 5, 2020 at 4:23 PM Hal Finkel <hfinkel at anl.gov> wrote:
>> I think that we should think about the relationship between this 
>> proposed mechanism and the existing mechanism that we have for emitting 
>> and capturing optimization remarks. In some sense, I feel like we 
>> already have a lot of this capability (e.g., llc has -remarks-section).
>> 
>>   -Hal
>> 
>> On 8/5/20 5:51 PM, Johannes Doerfert via llvm-dev wrote:
>> > I like the ability, not sure about the proposed implementation though.
>> >
>> > Did you consider a flag that redirects `llvm::outs()` and `llvm::errs()`
>> > into sections of the object file instead? So, you'd say:
>> >
>> >
>> > `clang ... -mllvm -debug-only=inline ... -mllvm -dump-section=.dump`
>> >
>> >
>> > and you'd get the regular debug output nicely ordered in the `.dump` 
>> > section.
>> >
>> > I mainly want to avoid even more output code in the passes but also be
>> > able to collect at least that information. That doesn't mean we couldn't
>> > add another output stream that would always/only redirect into the
>> > sections.
>> >
>> >
>> > ~ Johannes
>> >
>> >
>> > On 8/5/20 5:36 PM, Kazu Hirata via llvm-dev wrote:
>> >> Introduction
>> >> ============
>> >>
>> >> This RFC proposes a mechanism to dump arbitrary messages into object
>> >> files during compilation and retrieve them from the final executable.
>> >>
>> >> Background
>> >> ==========
>> >>
>> >> We often need to collect information from all object files of
>> >> applications.  For example:
>> >>
>> >> - Mircea Trofin needs to collect information from the function
>> >>    inlining pass so that he can train the machine learning model with
>> >>    the information.
>> >>
>> >> - I sometimes need to dump messages from optimization passes to see
>> >>    where and how they trigger.
>> >>
>> >> Now, this process becomes challenging when we build large applications
>> >> with a build system that caches and distributes compilation jobs.  If
>> >> we were to dump messages to stderr, we would have to be careful not to
>> >> interleave messages from multiple object files.  If we were to modify
>> >> a source file, we would have to flush the cache and rebuild the entire
>> >> application to collect dump messages from all relevant object files.
>> >>
>> >> High Level Design
>> >> =================
>> >>
>> >> - LLVM: We provide machinery for individual passes to dump arbitrary
>> >>    messages into a special ELF section in a compressed manner.
>> >>
>> >> - Linker: We simply concatenate the contents of the special ELF
>> >>    section.  No change is needed.
>> >>
>> >> - llvm-readobj: We add an option to retrieve the contents of the
>> >>    special ELF section.
>> >>
>> >> Detailed Design
>> >> ===============
>> >>
>> >> DumpAccumulator analysis pass
>> >> -----------------------------
>> >>
>> >> We create a new analysis pass called DumpAccumulator.  We add the
>> >> analysis pass right at the beginning of the pass pipeline.  The new
>> >> analysis pass holds the dump messages throughout the pass pipeline.
>> >>
>> >> If you would like to dump messages from some pass, you would obtain
>> >> the result of DumpAccumulator in the pass:
>> >>
>> >>    DumpAccumulator::Result *DAR =
>> >>        MAMProxy.getCachedResult<DumpAccumulator>(M);
>> >>
>> >> Then dump messages:
>> >>
>> >>    if (DAR) {
>> >>      DAR->Message += "Processing ";
>> >>      DAR->Message += F.getName();
>> >>      DAR->Message += "\n";
>> >>    }
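>> >> The accumulator pattern can be modeled in a few lines of Python
>> >> (a toy sketch with hypothetical names, not the actual C++ API):

```python
class DumpAccumulator:
    """Toy model of the proposed analysis result: passes append
    free-form text, and the accumulated message is later written
    into the object file by the AsmPrinter."""
    def __init__(self):
        self.message = ""


def run_inliner(dar, functions):
    """A pass that records which functions it visited. The analysis
    result may be absent, so every dump site checks for None first,
    mirroring the `if (DAR)` guard in the snippet above."""
    for name in functions:
        if dar is not None:
            dar.message += "Processing %s\n" % name


dar = DumpAccumulator()
run_inliner(dar, ["main", "foo"])
# dar.message now holds the concatenated per-function log
```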
>> >>
>> >> AsmPrinter
>> >> ----------
>> >>
>> >> We dump the messages from DumpAccumulator into a section called
>> >> ".llvm_dump" in a compressed manner.  Specifically, the section
>> >> contains:
>> >>
>> >> - LEB128 encoding of the original size in bytes
>> >> - LEB128 encoding of the compressed size in bytes
>> >> - the message compressed by zlib::compress
>> >>
>> >> in that order.
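>> >> A minimal sketch of that layout in Python (illustrative only; the
>> >> in-tree code would use LLVM's LEB128 and zlib helpers):

```python
import zlib


def uleb128(value):
    """Encode a non-negative integer as unsigned LEB128 bytes."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)


def encode_chunk(message):
    """Lay out one .llvm_dump chunk in the order described above:
    ULEB128(original size), ULEB128(compressed size), zlib data."""
    raw = message.encode()
    comp = zlib.compress(raw)
    return uleb128(len(raw)) + uleb128(len(comp)) + comp
```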
>> >>
>> >> llvm-readobj
>> >> ------------
>> >>
>> >> We read the .llvm_dump section.  We dump each chunk of compressed data
>> >> one after another.
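>> >> Because each chunk records its own sizes, the reader side is a
>> >> simple loop over the section bytes, and linker-concatenated
>> >> chunks from many object files need no extra framing. A sketch
>> >> (illustrative, not the proposed llvm-readobj code):

```python
import zlib


def read_uleb128(buf, pos):
    """Decode one unsigned LEB128 value starting at pos."""
    result, shift = 0, 0
    while True:
        byte = buf[pos]
        pos += 1
        result |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):
            return result, pos


def iter_dump_section(data):
    """Walk concatenated chunks: ULEB128(original size),
    ULEB128(compressed size), then the zlib-compressed message.
    Keep reading until the section is exhausted."""
    pos = 0
    while pos < len(data):
        orig_size, pos = read_uleb128(data, pos)
        comp_size, pos = read_uleb128(data, pos)
        payload = data[pos:pos + comp_size]
        pos += comp_size
        msg = zlib.decompress(payload)
        assert len(msg) == orig_size  # sanity-check against the header
        yield msg.decode()
```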
>> >>
>> >> Existing Implementation
>> >> =======================
>> >> https://reviews.llvm.org/D84473
>> >>
>> >> Future Directions
>> >> =================
>> >>
>> >> The proposal above does not support the ThinLTO build flow.  To
>> >> support that, I am thinking about putting the message as metadata in
>> >> the IR at the prelink stage.
>> >>
>> >> Thoughts?
>> >>
>> >>
>> >> _______________________________________________
>> >> LLVM Developers mailing list
>> >> llvm-dev at lists.llvm.org
>> >> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>> 
>> -- 
>> Hal Finkel
>> Lead, Compiler Technology and Programming Languages
>> Leadership Computing Facility
>> Argonne National Laboratory
>> 
