[PATCH] D24587: [RFC] Output optimization remarks in YAML

Adam Nemet via llvm-commits llvm-commits at lists.llvm.org
Wed Sep 14 14:49:03 PDT 2016


anemet created this revision.
anemet added reviewers: hfinkel, rjmccall.
anemet added a subscriber: llvm-commits.

This is an RFC for this feature.  The idea of emitting optimization
remarks in a machine-readable form, to allow various presentations of
this data, was first recommended here[1].

As an example, consider this module:

  1 int foo();
  2 int bar();
  3
  4 int baz() {
  5   return foo() + bar();
  6 }

The inliner generates these missed-optimization remarks today (the
hotness information is pulled from PGO):

  remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
  remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)

Now with -pass-remarks-output=<yaml-file>, we generate this YAML file
instead (in this mode, no optimization remarks are printed):

  --- !Missed
  Pass:            inline
  DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 10 }
  Function:        baz
  Hotness:         30
  Args:
    - value: foo
    - string:  will not be inlined into
    - value: baz
  ...
  --- !Missed
  Pass:            inline
  DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 18 }
  Function:        baz
  Hotness:         30
  Args:
    - value: bar
    - string:  will not be inlined into
    - value: baz
  ...

The patch still needs a bit more cleanup, but I think it makes sense to
solicit feedback at this point.  Here is a summary of the high-level
decisions:

* There is a new streaming interface to emit optimization remarks.
E.g. for the inliner remark above:

  ORE.emit(DiagnosticInfoOptimizationRemarkMissed(DEBUG_TYPE, &I)
           << Callee << " will not be inlined into "
           << CS.getCaller());

I considered providing even more context to the client by naming the
actual arguments, e.g.:

  ORE.emit(DiagnosticInfoOptimizationRemarkMissed(DEBUG_TYPE, &I)
           << "%callee will not be inlined into %caller"
           << Callee << CS.getCaller());

but I am not sure that really adds more context, since the client would
still have to identify the underlying message to do anything useful with
it.  Once the message is identified, the client can figure out that the
callee is the first argument, etc.

* I am using YAML I/O for writing the YAML file.  YAML I/O requires you
to specify reading and writing at once, but reading is highly
non-trivial for some of the more complex LLVM types.  Since it's not
clear that we want to use LLVM to parse this YAML file, the code
supports only writing and asserts if reading is attempted.

On the other hand, I did experiment with mapping the class hierarchy
starting at DiagnosticInfoOptimizationBase back from the YAML generated
here (see D24479).

* I've put the YAML stream into the LLVM context.

* In the example, we could be more specific about the kind of IR value
used, i.e. print "function" rather than "value".

* As before, hotness is computed in the analysis pass rather than in
DiagnosticInfo.  This avoids a layering problem, since BFI is in
Analysis while DiagnosticInfo is in IR.

[1] https://reviews.llvm.org/D19678#419445

https://reviews.llvm.org/D24587

Files:
  include/llvm/Analysis/OptimizationDiagnosticInfo.h
  include/llvm/IR/DiagnosticInfo.h
  include/llvm/IR/LLVMContext.h
  lib/Analysis/OptimizationDiagnosticInfo.cpp
  lib/IR/DiagnosticInfo.cpp
  lib/IR/LLVMContext.cpp
  lib/IR/LLVMContextImpl.h
  lib/Transforms/IPO/Inliner.cpp
  test/Transforms/Inline/optimization-remarks-yaml.ll
  tools/opt/opt.cpp

-------------- next part --------------
A non-text attachment was scrubbed...
Name: D24587.71442.patch
Type: text/x-patch
Size: 17082 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20160914/5499586f/attachment.bin>

