[llvm-dev] Performance metrics with LLVM
Matthias Braun via llvm-dev
llvm-dev at lists.llvm.org
Wed Jul 5 16:39:01 PDT 2017
The new schema definition logic landed in r307222/r307223. I'd be happy to get some post-commit reviews.
- The new mechanisms are documented in the importing-data part of the documentation.
- Unfortunately, for now you have to choose whether you define the schema directly in the database ('nts' and 'compile' still use that) or whether you use the new .yaml schema files. This means you cannot simply change the default 'nts' schema. (I'd be happy to transfer it to a .yaml schema; however, that means we lose some flexibility in automatically upgrading old databases, so I cannot simply do that as an NFC commit.)
- I'll convert the 'compile' schema to the new format in the next few days (I just have to synchronize this with our internal jobs so we have at least one real-world example in the public repository).
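For a rough idea of what such a self-describing schema could look like, here is an illustrative fragment; the key names below are a sketch only, not guaranteed to match LNT's actual .yaml syntax, so consult the importing-data documentation for the real format:

```yaml
# Hypothetical sketch of a .yaml test-suite schema. Field names are
# illustrative only -- see the LNT importing-data docs for the real syntax.
format_version: '2'
name: my_suite
metrics:
  - name: exec_time
    type: Real
  - name: code_size
    type: Real
run_fields:
  - name: llvm_project_revision
    order: true
```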
> On Jul 5, 2017, at 2:09 PM, Chris Matthews <chris.matthews at apple.com> wrote:
> The test-suite schema is defined in the DB. It is not hard to extend if you have server access. The docs detail the steps to add a new metric:
> http://llvm.org/docs/lnt/importing_data.html#custom-test-suites <http://llvm.org/docs/lnt/importing_data.html#custom-test-suites>
> I have set up several custom suites. It works.
> Matthias is working on something to make that even easier by having the test suites self-describe their schemas. This will still require server access, but will be less scary than editing the DB directly.
> A lot of this boils down to naming, and how the data is later presented. For instance, in some places we have elected to store the new link-time metric as a differently named test in the compile_time metric (foo.c vs foo.c.link). When you do this, those are presented side by side in the data listing views, which is handy. Each metric is given a section in the run reports; you can imagine what that might look like with 50 metrics. We might need to do some UI redesign to make the run reports sane.
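The naming scheme Chris describes can be sketched as a tiny helper; the function name and the sample values here are made up for illustration:

```python
# Hypothetical illustration of the naming scheme described above: rather than
# adding a new top-level metric, a link-time sample is stored under the
# existing compile_time metric with a ".link" suffix on the test name, so the
# compile and link samples show up side by side in the data listing views.

def sample_name(source_file: str, phase: str) -> str:
    """Return the test name under which a compile_time sample is stored."""
    # Compile samples keep the plain file name; other phases get a suffix.
    return source_file if phase == "compile" else f"{source_file}.{phase}"

# Example samples (values invented), keyed the way they would be listed.
samples = {
    sample_name("foo.c", "compile"): 1.20,  # seconds to compile foo.c
    sample_name("foo.c", "link"): 0.35,     # seconds to link foo.c
}
```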
> I think LNT will have a hard time collecting all the stats right now. There are 2722 source files in the llvm test-suite, and many hundreds of stats. Especially pages like the run reports, which still do inline calculations, are going to be slow. We do cache all the stuff needed to render the pages quickly now, but those pages have not been updated to use it.
>> On Jul 5, 2017, at 9:20 AM, Tobias Grosser via llvm-dev <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
>> On Wed, Jul 5, 2017, at 05:48 PM, Matthias Braun via llvm-dev wrote:
>>>> On Jul 4, 2017, at 2:02 AM, Tobias Grosser <tobias.grosser at inf.ethz.ch <mailto:tobias.grosser at inf.ethz.ch>> wrote:
>>>>> On Tue, Jul 4, 2017, at 09:48 AM, Kristof Beyls wrote:
>>>>> Hi Tobias,
>>>>> The metrics that you can collect in LNT are fixed per "test suite".
>>>>> There are 2 such "test suite"s defined in LNT at the moment: nts and compile.
>>>>> For more details on this, see
>>>>> http://llvm.org/docs/lnt/concepts.html#test-suites <http://llvm.org/docs/lnt/concepts.html#test-suites>.
>>>>> AFAIK, if you need to collect different metrics, you'll need to define a
>>>>> new "test suite". I'm afraid I don't really know what is needed for that.
>>>>> I'm guessing you may need to write some LNT code to do so, but I'm not
>>>>> sure. Hopefully Matthias or Chris will be able to explain how to do that.
>>>>> We probably should investigate how to make it easier to define new
>>>>> "test-suite"s. Or at least make it easier to record different sets of
>>>>> metrics, without having to change the LNT code or a running LNT server
>>>>> instance.
>>>>> The question on recording a different set of metrics has come up on this
>>>>> list before, so it seems like it's an issue people do run into from time
>>>>> to time.
>>>> Hi Kristof,
>>>> thanks for your fast reply. This is a very helpful summary that confirms
>>>> my current understanding in parts. I have never run the "compile" test suite,
>>>> so I am not sure how much of the statistics interface is used by it (if
>>>> at all). I somehow had the feeling something else might exist, as the
>>>> cmake test-suite
>>>> runner dumps some of the statistics to stdout. I would be interested to
>>>> hear whether Chris or Matthias have more insights.
>>> I often run the test-suite without LNT. lit -o dumps the output to a JSON
>>> file. If your goal is just some A/B testing (rather than tracking
>>> continuously with CI systems) then something simple like
>>> test-suite/utils/compare.py is enough to view and compare lit result files.
>> Right. That's what I have been seeing.
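As a rough illustration of that lit-plus-compare.py workflow, a minimal A/B comparison over two `lit -o` JSON files might look like the following. The report layout assumed here (a top-level "tests" list whose entries carry a "metrics" dict) is an assumption about lit's output, so treat this as a sketch, not a drop-in replacement for compare.py:

```python
# Minimal stand-in for test-suite/utils/compare.py: read two JSON files
# produced by `lit -o`, and print the relative change per test for one metric.
import json

def load_metric(path: str, metric: str) -> dict:
    """Map test name -> metric value for every test that reports the metric."""
    with open(path) as f:
        report = json.load(f)
    return {t["name"]: t["metrics"][metric]
            for t in report["tests"] if metric in t.get("metrics", {})}

def compare(base_path: str, exp_path: str, metric: str = "exec_time") -> None:
    """Print baseline vs. experiment values for tests present in both runs."""
    base = load_metric(base_path, metric)
    exp = load_metric(exp_path, metric)
    for name in sorted(base.keys() & exp.keys()):
        delta = (exp[name] - base[name]) / base[name] if base[name] else 0.0
        print(f"{name}: {base[name]:.3f} -> {exp[name]:.3f} ({delta:+.1%})")
```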
>>> For future LNT plans:
>>> You also asked at an interesting moment: I am polishing a commit to LNT
>>> right now that makes it easier to define custom schemas or create new
>>> ones. Though that is only part of the solution, as even with the new
>>> schema the runner needs to be adapted to actually collect/transform all the metrics.
>> Great. Would be glad to follow the patch review.
>>> I think we also will not start collecting all the llvm stats by default
>>> in the current system; with a few thousand runs in the database it's
>>> slightly sluggish already, and I don't think adding 10x more metrics to the
>>> database helps there. Of course once it is easier to modify schemas you
>>> could set up special instances with extended schemas that maybe track
>>> fewer instances/runs.
>> I wonder if the sluggishness comes from trying to display all of these
>> or from storing them. If they would be in a separate table that is only
>> accessed when needed, maybe storing would not have such a large cost.
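Tobias's separate-table idea could be sketched like this; all table and column names below are invented for illustration, and LNT's real schema differs:

```python
# Sketch of storing rarely-viewed statistics in a side table, so the hot
# queries that render the default pages never touch the bulk data.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE runs (id INTEGER PRIMARY KEY, revision TEXT);
    -- Core metrics read by every report page.
    CREATE TABLE samples (run_id INTEGER, test TEXT, exec_time REAL);
    -- Bulk LLVM statistics, only joined in when explicitly requested.
    CREATE TABLE extra_stats (run_id INTEGER, stat TEXT, value REAL);
""")
con.execute("INSERT INTO runs VALUES (1, 'r307222')")
con.execute("INSERT INTO samples VALUES (1, 'foo.c', 1.2)")
con.execute("INSERT INTO extra_stats VALUES (1, 'instcombine.NumCombined', 42)")

# Default pages query only `samples`; `extra_stats` stays opt-in.
rows = con.execute(
    "SELECT test, exec_time FROM samples WHERE run_id = 1").fetchall()
```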
>> LLVM Developers mailing list
>> llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev <http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev>