[llvm-dev] RFC: Extending optimization reporting

Finkel, Hal J. via llvm-dev llvm-dev at lists.llvm.org
Wed May 8 12:35:19 PDT 2019

>  Also, in terms of design philosophy, I completely agree with your goals of trying to minimize the compile time and memory footprint of the optimization reporting mechanism, but I think that if we want to support something that requires more memory or bigger IR it should be OK to take that hit on an opt-in basis. Do you agree?

I agree.

Also, I'll add that keeping track of the different loop versions is an important idea. Regarding unrolling, I believe that we have a mechanism for encoding this information into DWARF discriminators (see http://llvm.org/viewvc/llvm-project?rev=349973&view=rev). It would be good to have a solution here for loop versioning as well; we want this for both remarks and PGO.


Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory

From: llvm-dev <llvm-dev-bounces at lists.llvm.org> on behalf of Kaylor, Andrew via llvm-dev <llvm-dev at lists.llvm.org>
Sent: Wednesday, May 8, 2019 2:04 PM
To: Francis Visoiu Mistrih
Cc: llvm-dev
Subject: Re: [llvm-dev] RFC: Extending optimization reporting

Thanks, Francis. I actually wasn’t up to date on your latest work. It sounds like you’ve laid some helpful groundwork.

I think generalizing the remark handling interface should be fairly manageable, and that’s probably a good place for me to start getting involved.

My understanding of the very high-level design is that a pass creates an optimization remark object, passes it to an optimization remark emitter, which passes it to the diagnostic handler, which passes it to a remark streamer. All of these stages appear to have pretty clean interfaces. I believe there is even a mechanism to plug in a different remark streamer, though maybe not all of the wiring to connect that to a command line option.
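As a rough sketch of that flow (toy classes standing in for LLVM's actual ones, not its real API), the streamer is the natural plug-in point for new output formats:

```python
# Toy model of the remark pipeline: a pass builds a remark, hands it to
# an emitter, and the emitter forwards it to whichever streamer is
# installed. These are illustrative stand-ins, not LLVM's classes.

class Remark:
    def __init__(self, pass_name, remark_name, args):
        self.pass_name = pass_name
        self.remark_name = remark_name
        self.args = args  # ordered (key, value) pairs; keys may repeat

class RemarkStreamer:
    """Output-format interface: subclass this to emit a new format."""
    def emit(self, remark):
        raise NotImplementedError

class TextStreamer(RemarkStreamer):
    """A streamer producing simple one-line text output."""
    def __init__(self):
        self.lines = []
    def emit(self, remark):
        body = "".join(str(value) for _key, value in remark.args)
        self.lines.append(f"{remark.pass_name}: {body}")

class RemarkEmitter:
    """Passes hand remarks here; it forwards to the installed streamer."""
    def __init__(self, streamer):
        self.streamer = streamer
    def emit(self, remark):
        self.streamer.emit(remark)

streamer = TextStreamer()
emitter = RemarkEmitter(streamer)
emitter.emit(Remark("inline", "Inlined",
                    [("Callee", "bar"),
                     ("String", " inlined into "),
                     ("Caller", "main")]))
```

Swapping `TextStreamer` for a different subclass is the "plug in a new remark streamer" step; everything upstream stays untouched.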

I expect glitches will surface along the way, but supporting something like an IDE-consumable text output format seems like it should be as simple as plugging in a new remark streamer that consumes the existing optimization remark objects and produces the desired text format. But that's at the hand-waving/QED level, right? Since you've started looking into supporting other formats and say there's some infrastructure work to be done, I guess it's not quite that easy. In any event, I'd be happy to work with you on generalizing the infrastructure.

For some of the more involved use cases I want to cover, I was thinking the biggest challenge might be adding extra information to the remark objects to support the new formats, while doing so in a way that doesn't break the YAML streamer or require any significant changes to it.

Also, in terms of design philosophy, I completely agree with your goals of trying to minimize the compile time and memory footprint of the optimization reporting mechanism, but I think that if we want to support something that requires more memory or bigger IR it should be OK to take that hit on an opt-in basis. Do you agree?



From: Francis Visoiu Mistrih <francisvm at yahoo.com>
Sent: Monday, May 06, 2019 10:25 AM
To: Kaylor, Andrew <andrew.kaylor at intel.com>
Cc: llvm-dev <llvm-dev at lists.llvm.org>
Subject: Re: [llvm-dev] RFC: Extending optimization reporting

Hi Andrew,

On Apr 30, 2019, at 2:47 PM, Kaylor, Andrew via llvm-dev <llvm-dev at lists.llvm.org<mailto:llvm-dev at lists.llvm.org>> wrote:

I would like to begin a discussion about updating LLVM's opt-report infrastructure. There are some things I'd like to be able to do with optimization reports that I don't think can be done, or at least aren't natural to do, with the current implementation.

I understand that there is a lot of code in place already to produce optimization remarks, and one of my explicit goals is to minimize the changes required to existing code while still enabling the new features I would like to support. I have some ideas in mind for how to achieve what I'm proposing, but I want to start by just describing the desired results; if we can reach a consensus on these being nice things to have, then we can move on to talking about the best way to get there.

I think the extensions I have in mind can broadly be organized into two categories: (1) ways to support different sorts of output, and (2) ways to create connections between different events represented in the report.

As near as I can tell, the only output support we have in the code base is for YAML. I think I could implement a new RemarkStreamer to produce other formats, but nothing in the LLVM code base does that. Is that correct?

Yes, for now only a YAML output is supported.

The current design is the following:

The passes create a remark diagnostic and call (Machine)OptimizationRemarkEmitter::Emit. That goes through LLVMContext where the RemarkStreamer is used to handle remark diagnostics. Then in the RemarkStreamer we serialize each diagnostic to YAML through the YAMLTraits and immediately write that to the file.
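For reference, a single serialized remark in the current YAML output looks roughly like this (field values are illustrative):

```yaml
--- !Missed
Pass:            inline
Name:            NoDefinition
DebugLoc:        { File: foo.c, Line: 3, Column: 12 }
Function:        main
Args:
  - Callee:          bar
  - String:          ' will not be inlined into '
  - Caller:          main
...
```

Note that Args is an ordered list of one-key maps, so the same key can legitimately appear more than once.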

One of the main ideas behind optimization remarks from the beginning has been to do as little work as possible on the compiler side. We don't want to keep remarks in memory or significantly increase compile time for this. Most of the work is expected to be done on the client side, with help from LLVM libraries where possible.

Then on the other side, I recently added a parsing infrastructure for remarks in lib/Remarks, which parses YAML using the YAMLParser, performs some semantic checks on the remarks and creates a list of remarks::Remark. This does not re-use any code from the generation side for the following reasons:

* The generation is based on LLVM diagnostics, which has its own class hierarchy.

* The diagnostics are deeply coupled with LLVM IR / MIR, and we don’t want to generate dummy IR just for parsing a bunch of remarks and displaying them in a html view (e.g. opt-viewer.py).

* The YAML generated by LLVM can’t be parsed using the YAMLTraits, because we have an unknown number of arguments that can have the same key. We use the YAML parser for this, like tools/llvm-opt-report was doing before I added the remark parser in-tree.

One main issue right now is that we don’t have a way to serialize a remarks::Remark to YAML, and if we can somehow manage to use the same abstraction we use for parsing when we’re generating remarks, that would solve a lot of issues. The main reason I haven’t looked more deeply into this is because it would require making extra copies and extra allocations (especially of the arguments) that we would like to avoid doing during generation.

I'd like to be able to:

- Embed some subset of optimizations remarks as annotations in the generated assembly output

This would require keeping remarks in memory until we reach the asm-printer. Another way to do this is to pipe the output of clang to another tool that adds these annotations based on debug info and the remark file.
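The pipe-through-a-tool variant could be sketched like this (a hypothetical helper, matching remarks to `.loc` directives by file and line only; real debug info is considerably richer):

```python
# Hypothetical post-processing sketch: read the compiler's assembly
# output plus a parsed remark file, and inject remarks as comments
# after the .loc directives that point at their source locations.

def annotate_asm(asm_lines, remarks_by_loc):
    """remarks_by_loc maps (file_id, line) -> list of remark strings."""
    out = []
    for line in asm_lines:
        out.append(line)
        parts = line.split()
        if parts[:1] == [".loc"]:
            loc = (int(parts[1]), int(parts[2]))
            for text in remarks_by_loc.get(loc, []):
                out.append(f"\t# remark: {text}")
    return out

asm = ["\t.loc 1 3 5", "\taddl %esi, %edi"]
remarks = {(1, 3): ["load hoisted out of loop"]}
annotated = annotate_asm(asm, remarks)
```

This keeps all the work out of the compiler itself, in line with the design philosophy above.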

- Embed the remarks in the generated executable in a binary format consumed by the Intel Advisor tool

I recently added -mllvm -remarks-section. See https://llvm.org/docs/CodeGenerator.html#emitting-remark-diagnostics-in-the-object-file for what it contains.

The model we’re planning on using on Darwin is through dsymutil, which will merge all the remark files while processing the debug info, and create a separate file in the final .dSYM bundle with all the remarks.

- Produce text output in a format recognized by Microsoft Visual Studio or other IDEs

This would be very nice! I think right now there is no easy (or clean) way to add a new format, but we should definitely work on making that easier.

I have a few patches coming with a “binary” format that we want to use, so maybe we can work on building an infrastructure that can serve both YAML and the binary format, while leaving room for any new formats.

I tried to make the C API on the parsing side easy to use with any other format. See llvm-c/Remarks.h.

Let me know what you think!




The last of these is probably straightforward, since it's basically a streaming format of the kind the current infrastructure expects. The other two seem like they might be more complicated, since they involve keeping the information around, potentially across LTO, and correlated with the evolving IR until the final machine code or assembly is produced.

This leads me back to my second category of extensions, creating connections between different events in the opt report. My goal here is to be able to produce some kind of coherent report after compilation is complete that lets the user make some sense of how the IR evolved over the course of compilation and what effects that may have had on optimizations. This mostly has to do with the handling of loops, vectorization, and inlining.

Let's say, for example, I've got code like this:

for (...)
    A
    if (lic)
        B
    C
And the loop-unswitch pass turns it into this:

if (lic)
    for (...)
        A; B; C
else
    for (...)
        A; C

Now let's say the vectorizer for some reason is able to vectorize the loop in the else-clause but not the if-clause. (I don't know if this kind of thing is possible with the current phase ordering, but I think this theoretical example illustrates the idea anyway.)

I want some way to produce a report that tells the user about the existence of the two loops that were created when we unswitched the loop so that we can then tell the user in some sensible way that we couldn't vectorize one loop but that we could vectorize the second.

I'm not sure what the opt-viewer would currently do with a case like this, but what I want to avoid is getting stuck where the report we can emit essentially conveys the following not very helpful information.

for (...)
  // Loop was unswitched.
  // Loop could not be vectorized because...
  // Loop was vectorized.
    A
    if (lic)
        B
    C


Instead I'd like to have a way to produce something like this:

for (...)

  // Loop was unswitched for condition (srcloc)

  //   Unswitched loop version #1

  //     Unswitched for IF condition (srcloc)

  //     Loop was not vectorized:

  //   Unswitched loop version #2

  //     Loop was vectorized

The primary thing missing, I think, is a way for the vectorizer to give some indication of which version of the loop it is talking about in its optimization remarks and maybe a way for the opt-viewer to be able to make sense of that.
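One possible shape for this (a purely hypothetical `LoopVersion` argument; no such key exists today) would be an extra argument in the remark identifying which version of the original loop is meant:

```yaml
--- !Missed
Pass:            loop-vectorize
Name:            MissedDetails
DebugLoc:        { File: foo.c, Line: 10, Column: 3 }
Function:        foo
Args:
  - LoopVersion:     'unswitch:1'    # hypothetical version tag from unswitching
  - String:          'loop not vectorized'
...
```

The opt-viewer could then group remarks by this tag instead of collapsing everything onto the original source line.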

Likewise, there are things I want to be able to track with inlining. Let's say we go through the inlining pass pre-LTO and we make some decisions and report them. Then during LTO we go through another round of inlining and possibly make different decisions. I'd like to be able to either produce a report that shows just the inlining from the LTO pass or produce a report that shows a composite of all the inlining decisions that we made.
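A composite report could be as simple as tagging each remark with the phase that produced it and grouping by call site. The sketch below assumes an illustrative (caller, callee, line) key and phase names; this is not an existing LLVM schema:

```python
# Sketch of the "composite report" idea: remarks from each compilation
# phase are tagged with the phase that produced them, then merged and
# grouped by call site, so the final report can show either one phase
# or the whole decision history for a given site.

def composite_inline_report(phases):
    """phases: list of (phase_name, remarks); each remark is a dict
    with 'caller', 'callee', 'line', and 'decision' keys."""
    history = {}
    for phase_name, remarks in phases:
        for r in remarks:
            key = (r["caller"], r["callee"], r["line"])
            history.setdefault(key, []).append((phase_name, r["decision"]))
    return history

pre_lto = [{"caller": "main", "callee": "bar",
            "line": 3, "decision": "not inlined"}]
lto = [{"caller": "main", "callee": "bar",
        "line": 3, "decision": "inlined"}]
report = composite_inline_report([("prelink", pre_lto), ("lto", lto)])
```

Filtering the history to a single phase gives the "just the LTO inlining" view; showing the full list per site gives the composite one.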

We tried something like this with an inlining report before (https://reviews.llvm.org/D19397), but it had the misfortune of being proposed at about the same time that the current opt-viewer mechanism was being developed and we didn’t manage to get aligned with that. I’m hoping that we can correct that now.
