[llvm-dev] [EXTERNAL] Get llvm-mca results inside opt?

Andrea Di Biagio via llvm-dev llvm-dev at lists.llvm.org
Tue Jan 7 06:05:50 PST 2020


On Mon, Jan 6, 2020 at 4:51 PM Lewis, Cannada <canlewi at sandia.gov> wrote:

> Andrea, thanks for the advice.
>
> On Jan 2, 2020, at 8:09 AM, Andrea Di Biagio <andrea.dibiagio at gmail.com>
> wrote:
>
> Hi Lewis,
>
> Basically - if I understand correctly - you want to design a pass that
> uses llvm-mca as a library to compute throughput indicators for your
> outlined functions. You would then use those indicators to classify
> outlined functions.
>
> Yes, basically: the idea is to build a performance model for that
> outlined function.
>
>
> llvm-mca doesn't know how to evaluate branches or instructions that affect
> the control flow. That basically restricts the analysis to single basic
> blocks that are assumed to be hot. I am not sure if this would be a blocker
> for your particular use case.
>
> That would be okay, and is something we would need to work around on our
> end anyway since we don’t know the branch probability.
>
>
> llvm-mca only knows how to analyze/simulate a sequence of
> `mca::Instruction`. So, the expectation is that the input instructions
> have already been lowered into a sequence of mca::Instruction. Currently,
> the only way to obtain an `mca::Instruction` is by calling method
> `mca::InstrBuilder::createInstruction()` [1] on every input instruction
> (see for example how it is done in llvm-mca.cpp [2]).
>
> Unfortunately, method `createInstruction()` only works on `MCInst&`. This
> strongly limits the usability of llvm-mca as a library; the
> expectation/assumption is that instructions have already been lowered to a
> sequence of MCInst.
> Basically, the only supported scenarios are:
>  - We have reached the code emission stage and instructions have already
> been lowered into a sequence of MCInst, or
>
>  - We obtained an MCInst sequence by parsing an assembly code sequence
> with the help of other llvm libraries (this is what the llvm-mca tool does).
>
> It is possible to implement a variant of `createInstruction()` that lowers
> directly from `MachineInstr` to `mca::Instruction`. That would make the mca
> library more usable. In particular, it would make it possible to use mca
> from a post-regalloc pass which runs before code emission. Unfortunately,
> that functionality doesn't exist today (we can definitely implement it
> though; it may unblock other interesting use cases). That being said, I am
> not sure if it could help your particular use case. When would you want to
> run your new pass? Using llvm-mca to analyze LLVM IR is unfortunately not
> possible.
>
> I would want to run my pass as late as possible so that all optimizations
> have been run on the outlined function. The values passed into the capture
> function should never influence the outlined function, so I could in
> principle do something with a Python script like:
>
> 1. Compile to ASM with MCA-enabling comments on the function I care about.
> 2. Run llvm-mca on that region.
> 3. Source-to-source transform the original code with the new MCA
> information.
>
> Since this solution is not awesome, and we may have many functions in a
> given TU that we care about, I was hoping to find a better way.
>

I agree that the current workflow is not great.

Ideally, you would want to add your pass directly before code emission. That
pass would 'run on MachineFunction'; it would only simulate outlined
functions using mca (other functions would just be skipped). Unfortunately,
this solution would require an API to lower from MachineInstr to
mca::Instruction...

There is, however, another approach. It is not simple (and a bit ugly), and I
didn't try it myself, but it should work for your particular use case (see
below):

--

Another approach is to take advantage of the logic already available in
AsmPrinter.

Pass AsmPrinter is a machine function pass responsible for lowering and
emitting machine functions.
The instruction lowering logic is implemented by method
`AsmPrinter::EmitInstruction()`. That method is overridden by the llvm
targets, and it is mainly responsible for a) lowering a MachineInstr into
an MCInst, and b) delegating code emission to an output streamer (a helper
class which derives from MCStreamer).

You could try to reuse the logic already available in AsmPrinter to
implement point a).
You would then need to customize the streamer used in point b). Basically,
you may want to mimic what is done by llvm::mca::CodeRegionGenerator:
you need a custom MCStreamer similar to llvm::mca::MCStreamerWrapper
[1]. Your MCStreamer object would not really emit any code; instead it
would just store each lowered MCInst into a sequence (for example, a
SmallVector which your code can then access through the ArrayRef interface).
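
To give a rough idea, an untested sketch of such a streamer could look like
the code below. The class name MCInstCollector is made up, and the exact set
of MCStreamer virtual methods you have to stub out (and their capitalization)
depends on your LLVM version, so use MCStreamerWrapper [1] as the reference:

  // Sketch: an MCStreamer that never emits anything and simply records every
  // lowered MCInst. Modeled on llvm::mca::MCStreamerWrapper [1]; the class
  // name and members are illustrative.
  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/MC/MCContext.h"
  #include "llvm/MC/MCInst.h"
  #include "llvm/MC/MCStreamer.h"

  class MCInstCollector final : public llvm::MCStreamer {
    llvm::SmallVector<llvm::MCInst, 16> Insts;

  public:
    MCInstCollector(llvm::MCContext &Ctx) : llvm::MCStreamer(Ctx) {}

    // Intercept instruction "emission" and store the MCInst instead.
    void EmitInstruction(const llvm::MCInst &Inst,
                         const llvm::MCSubtargetInfo &STI) override {
      Insts.push_back(Inst);
    }

    // MCStreamer has a few pure virtual methods that must be stubbed out;
    // MCStreamerWrapper [1] shows the exact list for a given LLVM version.
    bool EmitSymbolAttribute(llvm::MCSymbol *, llvm::MCSymbolAttr) override {
      return true;
    }
    void EmitCommonSymbol(llvm::MCSymbol *, uint64_t, unsigned) override {}
    void EmitZerofill(llvm::MCSection *, llvm::MCSymbol *, uint64_t, unsigned,
                      llvm::SMLoc) override {}

    llvm::ArrayRef<llvm::MCInst> instructions() const { return Insts; }
  };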

Once all instructions are lowered into MCInst, you can convert them into
mca::Instruction using method `mca::InstrBuilder::createInstruction(MI)`.
See for example how it is done in llvm-mca.cpp [2].
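
Again as an untested sketch: assuming a helper like the hypothetical
analyzeWithMCA below, and assuming your pass can reach the MC-layer objects
(MCSubtargetInfo, MCInstrInfo, MCRegisterInfo, MCInstrAnalysis), the lowering
plus a default pipeline run could look roughly like this. The exact
constructor arguments of InstrBuilder, PipelineOptions, SourceMgr and
createDefaultPipeline() have changed between releases, so double-check them
against llvm-mca.cpp [2] for your tree:

  // Sketch: lower a captured MCInst sequence to mca::Instruction and run a
  // default mca pipeline, mirroring llvm-mca.cpp [2]. Constructor arguments
  // may differ slightly across LLVM versions; treat them as placeholders.
  #include "llvm/MCA/Context.h"
  #include "llvm/MCA/InstrBuilder.h"
  #include "llvm/MCA/Pipeline.h"
  #include "llvm/MCA/SourceMgr.h"
  #include "llvm/Support/Error.h"
  #include <memory>
  #include <vector>

  static void analyzeWithMCA(llvm::ArrayRef<llvm::MCInst> Insts,
                             const llvm::MCSubtargetInfo &STI,
                             const llvm::MCInstrInfo &MCII,
                             const llvm::MCRegisterInfo &MRI,
                             const llvm::MCInstrAnalysis *MCIA) {
    llvm::mca::InstrBuilder IB(STI, MCII, MRI, MCIA);

    std::vector<std::unique_ptr<llvm::mca::Instruction>> Lowered;
    for (const llvm::MCInst &MCI : Insts) {
      auto Inst = IB.createInstruction(MCI);
      if (!Inst) {
        // The scheduling model does not describe this opcode; give up.
        llvm::consumeError(Inst.takeError());
        return;
      }
      Lowered.emplace_back(std::move(*Inst));
    }

    llvm::mca::Context MCA(MRI, STI);
    // Zeros mean "use the defaults from the scheduling model" in llvm-mca;
    // the parameter list has changed over time, so adapt as needed.
    llvm::mca::PipelineOptions PO(/*MicroOpQueue=*/0, /*DecoderThroughput=*/0,
                                  /*DispatchWidth=*/0, /*RegisterFileSize=*/0,
                                  /*LoadQueueSize=*/0, /*StoreQueueSize=*/0,
                                  /*AssumeNoAlias=*/true);
    llvm::mca::SourceMgr Src(Lowered, /*Iterations=*/100);
    auto Pipeline = MCA.createDefaultPipeline(PO, Src);

    if (llvm::Expected<unsigned> Cycles = Pipeline->run()) {
      // From the total cycle count, the number of instructions and the
      // number of iterations you can derive IPC, uOps-per-cycle and block
      // reverse-throughput, similar to what SummaryView computes.
      (void)*Cycles;
    } else
      llvm::consumeError(Cycles.takeError());
  }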

The trick is to obtain a custom AsmPrinter pass that lowers machine
instructions into MCInst, and then delegates "code emission" to your own
custom MCStreamer class (whose goal is to simply store the lowered MCInsts
somewhere in memory).
Your new pass would literally act as a proxy for the existing AsmPrinter.
Method runOnMachineFunction() would:
a) delegate to AsmPrinter::runOnMachineFunction(), and
b) analyze the MCInst sequence (obtained with the help of your custom
streamer) with mca.
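
A skeleton of that proxy pass (untested, with made-up names matching the
sketches above; the name-based filter for outlined functions is only
illustrative) could be:

  // Sketch of the proxy pass: it owns an AsmPrinter that "emits" into the
  // capturing streamer, delegates lowering to it, then hands the recorded
  // MCInsts to mca.
  #include "llvm/CodeGen/AsmPrinter.h"
  #include "llvm/CodeGen/MachineFunctionPass.h"

  class MCAAnalysisPass : public llvm::MachineFunctionPass {
    static char ID;
    std::unique_ptr<llvm::AsmPrinter> Printer; // built by the addMCAPass() hook
    MCInstCollector &Collector;                // capturing streamer, owned by Printer

  public:
    MCAAnalysisPass(std::unique_ptr<llvm::AsmPrinter> P, MCInstCollector &C)
        : llvm::MachineFunctionPass(ID), Printer(std::move(P)), Collector(C) {}

    bool runOnMachineFunction(llvm::MachineFunction &MF) override {
      // Only analyze the outlined functions; skip everything else.
      if (!MF.getName().contains(".omp_outlined")) // illustrative filter
        return false;

      // a) Delegate lowering/"emission" to the wrapped AsmPrinter; the
      //    capturing streamer records each lowered MCInst instead of
      //    emitting it.
      Printer->runOnMachineFunction(MF);

      // b) Analyze the recorded MCInst sequence with mca (earlier sketch).
      //    The MC-layer objects can be obtained from MF.getTarget(), e.g.
      //    getMCSubtargetInfo(), getMCInstrInfo(), getMCRegisterInfo().
      // analyzeWithMCA(Collector.instructions(), STI, MCII, MRI, MCIA);
      return false;
    }
  };
  char MCAAnalysisPass::ID = 0;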

Pass AsmPrinter is currently added to the pass manager by method
`LLVMTargetMachine::addAsmPrinter()` [3][4].

You could provide another `addMCAPass()` hook to that interface to
instantiate your new pass. That hook would implement logic similar to the
one in method `addAsmPrinter()`. The difference is that instructions would
be "emitted" by your custom MCStreamer. The hook would create an AsmPrinter
which is then used to construct your own pass P. P would own that particular
AsmPrinter instance, and only P would be added to the pass manager, directly
before code emission.
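
Sketched very roughly (addMCAPass(), MCInstCollector and MCAAnalysisPass are
the hypothetical names from above; use the real addAsmPrinter() [3][4] as the
template for the details):

  // Hypothetical hook on LLVMTargetMachine, modeled on addAsmPrinter()
  // [3][4]. It would live in LLVMTargetMachine.cpp, which already has the
  // needed includes, and would need a matching declaration in the header.
  bool llvm::LLVMTargetMachine::addMCAPass(llvm::legacy::PassManagerBase &PM,
                                           llvm::MCContext &Context) {
    // The streamer that records MCInsts instead of emitting them.
    auto Collector = std::make_unique<MCInstCollector>(Context);
    MCInstCollector &CollectorRef = *Collector;

    // Ask the target to build an AsmPrinter that "emits" through our streamer.
    llvm::AsmPrinter *Printer =
        getTarget().createAsmPrinter(*this, std::move(Collector));
    if (!Printer)
      return true;

    // Only the proxy pass P goes into the pass manager, right before
    // emission; P takes ownership of the AsmPrinter instance.
    PM.add(new MCAAnalysisPass(std::unique_ptr<llvm::AsmPrinter>(Printer),
                               CollectorRef));
    return false;
  }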

Not sure if this makes sense.
It may be a bit too complicated for your particular use case. However, I
can't think of any simpler or quicker way to do this.
As I wrote: ideally, in the future we will be able to lower directly from
MachineInstr and solve the problem differently. That would simplify use
cases like yours.

I hope it helps
-Andrea

[1]
https://github.com/llvm-mirror/llvm/blob/master/tools/llvm-mca/CodeRegionGenerator.cpp#L42
[2]
https://github.com/llvm-mirror/llvm/blob/master/tools/llvm-mca/llvm-mca.cpp#L462
[3]
https://github.com/llvm-mirror/llvm/blob/master/lib/CodeGen/LLVMTargetMachine.cpp#L182
[4]
https://github.com/llvm-mirror/llvm/blob/master/lib/CodeGen/LLVMTargetMachine.cpp#L116


> To compute the throughput indicators you would need to implement logic
> similar to the one implemented by class SummaryView [3]. Ideally, most of
> that logic could be factored out into a helper class, in order to support
> your particular use case and possibly avoid code duplication.
>
> Thanks, I’ll look into it.
>
>
> I hope it helps,
> -Andrea
>
> [1]
> https://github.com/llvm-mirror/llvm/blob/master/include/llvm/MCA/InstrBuilder.h
> [2]
> https://github.com/llvm-mirror/llvm/blob/master/tools/llvm-mca/llvm-mca.cpp
> [3]
> https://github.com/llvm-mirror/llvm/blob/master/tools/llvm-mca/Views/SummaryView.h
>
> On Tue, Dec 24, 2019 at 5:08 PM Lewis, Cannada via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
>> Hi,
>>
>> I am trying to generate performance models for specific pieces of code
>> like an omp.outlined function. Let's say I have the following code:
>>
>> start_collect_parallel_for_data(-1.0, -1.0, -1.0, size, "tag for this
>> region");
>> #pragma omp parallel for
>> for(auto i = 0; i < size; ++i){
>>         // ... do work
>> }
>> stop_collecting_parallel_for_data();
>>
>> The omp region will get outlined into a new function, and what I would
>> like to be able to do in opt is compile just that function to assembly
>> for some target that I have chosen, run llvm-mca just on that function, and
>> then replace the -1.0s with uOps Per Cycle, IPC, and Block RThroughput so
>> that my logging code has some estimate of the performance of that region.
>>
>> Is there any reasonable way to do this from inside opt? I already have
>> everything in place to find the start_collect_parallel_for_data calls and
>> find the functions called between start and stop, but I could use some help
>> with the rest of my idea.
>>
>> Thanks
>> -Cannada Lewis
>
>