[llvm-dev] Next steps for optimization remarks?
Adam Nemet via llvm-dev
llvm-dev at lists.llvm.org
Thu Jun 29 10:15:15 PDT 2017
> On Jun 28, 2017, at 8:56 PM, Brian Gesiak <modocache at gmail.com> wrote:
>> On Wed, Jun 28, 2017 at 8:13 AM, Hal Finkel <hfinkel at anl.gov> wrote:
>> I don't object to adding some kind of filtering option, but in general it won't help. An important goal here is to provide analysis (and other) tools to users that present this information at a higher level. The users won't, and shouldn't, know exactly what kinds of messages the tools use. This is already somewhat true for llvm-opt-report, and will be even more true in the future.
> Ah, I see, that makes sense. Thanks!
>> On Tue, Jun 20, 2017 at 1:50 AM, Adam Nemet <anemet at apple.com> wrote:
>> We desperately need a progress bar in opt-viewer. Let me know if you want to add it; otherwise I will. I filed llvm.org/PR33522 for this.
>> In terms of improving the performance, I am pretty sure the bottleneck is still YAML parsing so:
>> - If PGO is used, we can have a threshold to not even emit remarks on cold code, this should dramatically improve performance, llvm.org/PR33523
>> - I expect that some sort of binary encoding of YAML would speed up parsing, but I haven’t researched this topic yet...
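For context, the remarks the tools have to parse are serialized as a stream of YAML documents with custom tags. Below is a minimal sketch of reading such a stream with PyYAML; the sample record is invented (field names and values are hedged from typical `-fsave-optimization-record` output, not copied from a real build):

```python
import yaml

# Hypothetical excerpt of an optimization record; exact fields vary
# by pass, but the overall shape follows this pattern.
SAMPLE = """\
--- !Missed
Pass:            inline
Name:            NoDefinition
Function:        main
Args:
  - Callee:          printf
  - String:          ' will not be inlined'
...
--- !Passed
Pass:            licm
Name:            Hoisted
Function:        main
...
"""

def remark(loader, node):
    # Fold the tag (!Passed / !Missed / !Analysis) into the mapping
    # so downstream code can dispatch on the remark type.
    d = loader.construct_mapping(node, deep=True)
    d["RemarkType"] = node.tag.lstrip("!")
    return d

for tag in ("!Passed", "!Missed", "!Analysis"):
    yaml.SafeLoader.add_constructor(tag, remark)

remarks = list(yaml.load_all(SAMPLE, Loader=yaml.SafeLoader))
```

Every document goes through a full generic-YAML parse like this, which is where the time goes; a binary encoding would bypass that stage entirely.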
> I added progress indicators in https://reviews.llvm.org/D34735, and it
> seems like it takes a while for the Python scripts to read some of the
> larger YAML files produced for my program.
You may want to look at these files with opt-stats and see what types of remarks are at the top of the list. If they are missed remarks, we may need to work harder to remove false positives.
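A rough sketch of the kind of tally opt-stats produces — counting remarks by type and pass so the noisiest sources stand out. Plain dicts stand in here for parsed YAML documents, and the sample data is invented:

```python
from collections import Counter

# Invented sample standing in for parsed remark documents.
remarks = [
    {"RemarkType": "Missed", "Pass": "inline"},
    {"RemarkType": "Missed", "Pass": "inline"},
    {"RemarkType": "Missed", "Pass": "gvn"},
    {"RemarkType": "Passed", "Pass": "licm"},
]

by_type = Counter(r["RemarkType"] for r in remarks)
by_pass = Counter((r["RemarkType"], r["Pass"]) for r in remarks)

# Print the heaviest (type, pass) combinations first.
for (rtype, pname), n in by_pass.most_common():
    print(f"{n:6d} {rtype:8s} {pname}")
```

If one missed-remark source dominates such a tally, that is the place to look for false positives first.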
> I'll try to look into
> binary YAML encoding later this week.
> A threshold preventing remarks from being emitted on cold code sounds
> good to me as well. Hal, do you agree, or is this also something that
> tools at a higher level should be responsible for ignoring?
> - Brian Gesiak
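The cold-code threshold under discussion could also be prototyped on the tool side before touching the compiler. A minimal sketch, assuming remarks carry a `Hotness` field when PGO data is available (the cutoff value is arbitrary):

```python
# Drop remarks whose PGO hotness falls below a cutoff. Remarks
# without profile data (no Hotness field) are treated as cold here,
# though a real tool might choose to keep them instead.
def filter_cold(remarks, min_hotness=50):
    return [r for r in remarks if r.get("Hotness", 0) >= min_hotness]

remarks = [
    {"Pass": "inline", "Hotness": 300},
    {"Pass": "licm", "Hotness": 2},
    {"Pass": "gvn"},  # no profile data
]
hot = filter_cold(remarks)
# hot == [{"Pass": "inline", "Hotness": 300}]
```

Filtering at emission time (PR33523) would additionally shrink the YAML files themselves, which is where the parsing cost comes from.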