[llvm-dev] RFC: LNT/Test-suite support for custom metrics and test parameterization

James Molloy via llvm-dev llvm-dev at lists.llvm.org
Mon Apr 25 02:43:29 PDT 2016


Hi Sergey, Elena,

Firstly, thanks for this RFC. It's great to see more people actively using
and modifying LNT; the test metrics support in general is rather weak at the
moment.

Metrics
-------

I agree with Daniel and Kristof that your proposed schema changes have the
potential to make many queries extremely slow. Certainly for the metrics
enhancements, I don't see a reason why we need such a radical change in
schema.

To add custom metrics on the fly, we need to change the schema of the
Sample table. Currently this consists of a column for each metric, but we
never actually query those metric values. We never query, for example, for
"all failing tests in a run" - when we do analyses we use the
ComparisonResult class, which reads *all* samples from the database for a
run and performs the analysis entirely in Python.
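
To make that concrete, here is a rough sketch of that access pattern (the
names get_samples and test_name are invented for illustration, they are not
LNT's real API): one bulk fetch per run, with all the aggregation and
comparison done in Python rather than in SQL.

from statistics import median

def compare_runs(db, run_id, baseline_id, metric="exec_time"):
    # One bulk fetch per run; no per-metric WHERE clauses are ever issued.
    current = db.get_samples(run_id)        # hypothetical: list of sample objects
    baseline = db.get_samples(baseline_id)

    def by_test(samples):
        grouped = {}
        for s in samples:
            grouped.setdefault(s.test_name, []).append(getattr(s, metric))
        return grouped

    cur, base = by_test(current), by_test(baseline)

    # All of the analysis happens in Python, much as ComparisonResult does today.
    deltas = {}
    for test, values in cur.items():
        if test in base and median(base[test]) != 0:
            deltas[test] = median(values) / median(base[test]) - 1.0
    return deltas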

Therefore, having a semi-structured format where some fields are
first-class columns and the rest are in a JSON-encoded BLOB (as Daniel
suggests) seems totally acceptable. There is certainly an argument now that
we're using the wrong backend storage solution and that a key-value store
might be more suitable, but that's a very invasive change and I don't think
we've reached the point where we need to force a move from the simplicity
of SQLite.

Adding an extra BLOB column would be easy: there would just need to be
logic in testsuitedb.py for reading and writing it, and the Sample model
class would expose the JSON-encoded fields as normal Python fields, so the
rest of LNT would be isolated from this change.
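
Roughly what I have in mind, sketched with SQLAlchemy (which LNT uses) but
with made-up column names rather than a real patch against testsuitedb.py:

import json

from sqlalchemy import Column, Float, Integer, LargeBinary
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Sample(Base):
    __tablename__ = "sample"

    id = Column(Integer, primary_key=True)
    run_id = Column(Integer)
    test_id = Column(Integer)
    # The existing metrics stay as first-class columns...
    compile_time = Column(Float)
    execution_time = Column(Float)
    # ...and any custom metrics live together in one JSON-encoded BLOB.
    extra_metrics_blob = Column(LargeBinary)

    @property
    def extra_metrics(self):
        """Decode the BLOB lazily; callers just see a plain dict."""
        if not self.extra_metrics_blob:
            return {}
        return json.loads(self.extra_metrics_blob.decode("utf-8"))

    def get_metric(self, name):
        """Look up a metric whether it lives in a real column or in the
        JSON BLOB, so the rest of LNT doesn't care where it is stored."""
        if name in ("compile_time", "execution_time"):
            return getattr(self, name)
        return self.extra_metrics.get(name)

    def set_extra_metric(self, name, value):
        metrics = self.extra_metrics
        metrics[name] = value
        self.extra_metrics_blob = json.dumps(metrics).encode("utf-8")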

But I think this is a small detail compared to the bigger problem of how to
effectively *display* all this new data. Currently every new metric gets
its own separate table in the report/run views, and this does not scale
well at all.

I think we need some more concepts in the metric system to make it
scalable; a rough sketch of what this could look like follows the list:

  * What "attribute" of the test is this metric measuring? For example,
both "exec_time" and "score" measure the same attribute; performance of the
generated code. It's superfluous to have them displayed in separate tables.
However mem_size and compile_time both measure completely different aspects
of the test.
  * Is this metric useful to display at the top level, or should it only be
exposed when more data about a test result is requested?
    * An example of this is the pass statistics. I don't want my daily
report view cluttered by the time spent in register allocation for every
test! OK, this is useful information when debugging a problem, but it
should be available when requested rather than by default.
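
To sketch what I mean by per-metric metadata (every name below is invented
for illustration; none of this exists in LNT today):

from collections import namedtuple

# Which "attribute" of the test a metric measures, and whether it belongs
# in the top-level report at all.
MetricInfo = namedtuple("MetricInfo", ["name", "attribute", "top_level"])

METRICS = [
    # exec_time and score measure the same attribute (performance of the
    # generated code), so the report can fold them into a single table.
    MetricInfo("exec_time", "performance", True),
    MetricInfo("score", "performance", True),
    MetricInfo("mem_size", "code_size", True),
    MetricInfo("compile_time", "compile_time", True),
    # Pass statistics only show up when drilling into a single test result.
    MetricInfo("regalloc_time", "pass_timing", False),
]

def tables_for_report(metrics=METRICS):
    """Group top-level metrics by the attribute they measure, so the daily
    report gets one table per attribute rather than one table per metric."""
    tables = {}
    for m in metrics:
        if m.top_level:
            tables.setdefault(m.attribute, []).append(m.name)
    return tables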

The screenshots in your Google doc are an example of why we need the
above: the last one is incredibly difficult to read and to get useful
information out of.

I'd also suggest that if we're adding many more metrics to a test, we
should create a "test sample information" page that the test link goes to
instead of just the graph. This page could contain all counter/metric data,
historic sparklines, the full graph and profiling links.
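
As a very rough illustration of the shape of that page (the route, template
name and data below are all made up for the sketch; LNT's real Flask views
look nothing like this):

from flask import Flask, render_template

app = Flask(__name__)

def load_sample_info(run_id, test_id):
    # Placeholder data: in LNT this would gather every metric for the test
    # (first-class columns plus any JSON-encoded extras), recent history for
    # the sparklines, and links to any profiles attached to the run.
    return {
        "metrics": {"exec_time": 1.23, "mem_size": 40960, "regalloc_time": 0.07},
        "history": [1.25, 1.24, 1.22, 1.23],
        "profile_links": [],
    }

@app.route("/v4/nts/<int:run_id>/test/<int:test_id>")
def test_sample_info(run_id, test_id):
    info = load_sample_info(run_id, test_id)
    # One page per test result: all counter/metric data, historic sparklines,
    # the full graph and profiling links, rather than jumping straight to a graph.
    return render_template("test_sample_info.html", run_id=run_id,
                           test_id=test_id, **info)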

Cheers,

James

On Fri, 22 Apr 2016 at 10:17 Kristof Beyls via llvm-dev <
llvm-dev at lists.llvm.org> wrote:

>
> On 22 Apr 2016, at 11:14, Mehdi Amini <mehdi.amini at apple.com> wrote:
>
>
> On Apr 22, 2016, at 12:45 AM, Kristof Beyls via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
>
> On 21 Apr 2016, at 17:44, Sergey Yakoushkin <sergey.yakoushkin at gmail.com>
> wrote:
>
> Hi Kristof,
>
>        The way we use LNT, we would run different configurations (e.g. -O3
> vs -Os) as different "machines" in LNT's model.
>
> O2/O3 is indeed a bad example. We're also using different machines for
> Os/O3 - such parameters apply to all tests and we don't propose major
> changes. Elena was only extending the LNT interface a bit to ease
> LLVM-testsuite execution with different compiler or HW flags.
>
>
> Oh I see, this boils down to extending the lnt runtest interface to be
> able to specify a set of configurations, rather than a single configuration
> and making
> sure configurations get submitted under different machine names? We kick
> off the different configuration runs through a script invoking lnt runtest
> multiple
> times. I don't see a big problem with extending the lnt runtest interface
> to do this, assuming it doesn't break the underlying concepts assumed
> throughout
> LNT. Maybe the only downside is that this will add even more command line
> options to lnt runtest, which already has a lot (too many?) command line
> options.
>
> Maybe some changes are required to analyze and compare metrics between
> "machines": e.g. code size/performance between Os/O2/O3.
> Do you perform such comparisons?
>
>
> We typically do these kinds of comparisons when we test our patches
> pre-commit, i.e. comparing, for example, '-O3' with '-O3 -mllvm
> -enable-my-new-pass'.
> To stick with the LNT concepts, tests enabling new passes are stored as a
> different "machine".
> The only way I know to be able to do a comparison between runs on 2
> different "machine"s is to manually edit the URL for run vs run comparison
> and fill in the runids of the 2 runs you want to compare.
> For example, the following URL is a comparison of
> green-dragon-07-x86_64-O3-flto vs green-dragon-06-x86_64-O0-g on the public
>  llvm.org/perf server:
> http://llvm.org/perf/db_default/v4/nts/70644?compare_to=70634
> I had to manually look up and fill in the run ids 70644 and 70634.
> It would be great if there were a better way to do these kinds of
> comparisons - i.e. not having to manually fill in run ids, but having a
> web UI to easily find and pick the runs you want to compare.
> (As an aside: I find it intriguing that the URL above suggests that there
> are quite a few cases where "-O0 -g" produces faster code than "-O3 -flto").
>
>
> Can you be more explicit about which ones? I don't see any regression (other
> than compared to the baseline, or on the compile time).
>
> --
> Mehdi
>
>
> D'Oh! I was misinterpreting the compile time differences as execution time
> differences. Indeed, there is no unexpected result in there.
> Sorry for the noise!
>
> Kristof
>