[llvm-dev] Performance metrics with LLVM

Tobias Grosser via llvm-dev llvm-dev at lists.llvm.org
Wed Jul 5 09:20:32 PDT 2017

On Wed, Jul 5, 2017, at 05:48 PM, Matthias Braun via llvm-dev wrote:
> > On Jul 4, 2017, at 2:02 AM, Tobias Grosser <tobias.grosser at inf.ethz.ch> wrote:
> > 
> >> On Tue, Jul 4, 2017, at 09:48 AM, Kristof Beyls wrote:
> >> Hi Tobias,
> >> 
> >> The metrics that you can collect in LNT are fixed per "test suite".
> >> There are 2 such "test suite"s defined in LNT at the moment: nts and
> >> compile.
> >> For more details on this, see
> >> http://llvm.org/docs/lnt/concepts.html#test-suites.
> >> 
> >> AFAIK, if you need to collect different metrics, you'll need to define a
> >> new "test suite". I'm afraid I don't really know what is needed for that.
> >> I'm guessing you may need to write some LNT code to do so, but I'm not
> >> sure. Hopefully Matthias or Chris will be able to explain how to do that.
> >> 
> >> We should probably investigate how to make it easier to define new
> >> "test-suite"s. Or at least make it easier to record different sets of
> >> metrics without having to change the LNT code or a running LNT server
> >> instance.
> >> The question on recording a different set of metrics has come up on this
> >> list before, so it seems like it's an issue people do run into from time
> >> to time.
> > 
> > Hi Kristof,
> > 
> > thanks for your fast reply. This is a very helpful summary that confirms
> > my current understanding in parts. I never run the "compile" test suite,
> > so I am not sure how much of the statistics interface is used by it (if
> > at all). I somehow had the feeling something else might exist, as the
> > cmake test-suite runner dumps some of the statistics to stdout. Would
> > be interested to hear whether Chris or Matthias have more insights.
> I often run the test-suite without LNT. lit -o dumps the output to a
> JSON file. If your goal is just some A/B testing (rather than tracking
> continuously with CI systems), then something simple like
> test-suite/utils/compare.py is enough to view and compare lit result
> files.

Right. That's what I have been seeing. 
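For the archives, here is a minimal sketch of what such an A/B comparison
over two lit JSON result files could look like. It assumes the layout
`lit -o` writes (a top-level "tests" list whose entries carry a "metrics"
dictionary); the metric name "exec_time" is an assumption here, and
test-suite/utils/compare.py of course does all of this properly:

```python
import json

def load_metrics(path, metric="exec_time"):
    """Map test name -> metric value from a lit JSON result file.

    Assumes the layout `lit -o` emits: a top-level "tests" list whose
    entries carry a "metrics" dictionary. The metric name is a guess.
    """
    with open(path) as f:
        data = json.load(f)
    return {t["name"]: t["metrics"][metric]
            for t in data.get("tests", [])
            if metric in t.get("metrics", {})}

def compare(base, new):
    """Yield (test, baseline, candidate, ratio) for tests in both runs."""
    for name in sorted(base.keys() & new.keys()):
        b, n = base[name], new[name]
        yield name, b, n, (n / b if b else float("inf"))
```

Printing the tuples that compare() yields already gives a usable A/B
report for quick experiments without any LNT instance involved.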
> For future LNT plans:
> You also asked at an interesting moment: I am polishing a commit to LNT
> right now that makes it easier to define custom schemas or create new
> ones. Though that is only part of the solution, as even with the new
> schema the runner needs to be adapted to actually collect/transform all
> values.

Great. Would be glad to follow the patch review.
> I think we also will not start collecting all the LLVM stats by default
> in the current system; with a few thousand runs in the database it's
> slightly sluggish already, and I don't think adding 10x more metrics to
> the database helps there. Of course, once it is easier to modify
> schemas, you could set up special instances with extended schemas that
> maybe track fewer instances/runs.

I wonder if the sluggishness comes from trying to display all of these
metrics or from storing them. If they were kept in a separate table that
is only accessed when needed, storing them might not incur such a large
cost.
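To illustrate that thought, here is a toy sqlite sketch with the per-run
summary kept apart from a bulk metrics table that is only queried when a
specific metric is requested. All table and column names are made up for
illustration; this is not LNT's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Small table consulted on every listing/overview query.
    CREATE TABLE runs (id INTEGER PRIMARY KEY, machine TEXT, stamp TEXT);
    -- Bulk statistics live apart; nothing touches this table unless a
    -- particular metric is explicitly asked for.
    CREATE TABLE run_metrics (
        run_id INTEGER REFERENCES runs(id),
        name   TEXT,
        value  REAL
    );
    CREATE INDEX run_metrics_by_run ON run_metrics(run_id, name);
""")
conn.execute("INSERT INTO runs VALUES (1, 'bot-1', '2017-07-05')")
conn.executemany("INSERT INTO run_metrics VALUES (1, ?, ?)",
                 [("exec_time", 1.5), ("mem_bytes", 4096.0)])

# Listing runs never scans the metrics table ...
runs = conn.execute("SELECT id, machine FROM runs").fetchall()
# ... which is only consulted when a metric is explicitly requested.
exec_time = conn.execute(
    "SELECT value FROM run_metrics WHERE run_id = ? AND name = ?",
    (1, "exec_time")).fetchone()[0]
```

With the index on (run_id, name), the extra rows mostly cost disk space
rather than slowing down the common overview queries.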

