[llvm-dev] RFC: LNT/Test-suite support for custom metrics and test parameterization

Elena Lepilkina via llvm-dev llvm-dev at lists.llvm.org
Thu Apr 21 06:00:22 PDT 2016


Hi Kristof and Daniel,

Thanks for your answers.

Unfortunately, I hadn't tried scaling up to a large data set before. Today I tried, and the results are quite bad.
So the database schema should be rebuilt. My current thought is to create one sample table per test suite, rather than cloning the whole set of tables for each suite; as far as I can see, the other tables can be shared by all test suites. In other words, if a user runs tests from a new test suite, a new sample table would be created while importing the data from the JSON report, if it doesn't already exist. Are there any problems with this approach? Perhaps I'm missing some details.
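
To make that concrete, here is a rough SQLAlchemy sketch of what I mean by creating a per-test-suite sample table lazily at import time (table and column names are only illustrative, not the real LNT schema):

# Rough illustration only -- table and column names are made up,
# not LNT's actual schema.  One sample table per test suite,
# defined and created lazily when a report for that suite is imported.
from sqlalchemy import Column, Float, Integer, MetaData, Table, create_engine

metadata = MetaData()

def sample_table_for(suite_name):
    """Return the per-suite sample table, defining it if not yet known."""
    table_name = "%s_Sample" % suite_name
    if table_name in metadata.tables:
        return metadata.tables[table_name]
    return Table(
        table_name, metadata,
        Column("id", Integer, primary_key=True),
        Column("run_id", Integer, index=True),
        Column("test_id", Integer, index=True),
        # One column per metric configured for this suite.
        Column("compile_time", Float),
        Column("execution_time", Float),
    )

engine = create_engine("sqlite:///lnt_sketch.db")
table = sample_table_for("nts")
metadata.create_all(engine, tables=[table])  # no-op if the table already exists
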
I also have a question about compile tests. Are they still runnable? There are no compile tests on http://llvm.org/perf; does that mean they are deprecated for now?

Regarding test parameters: as an example, we would like to be able to compare the benchmark results of a test compiled with -O3 against the same test compiled with -Os, within the context of a single run.
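Concretely, a run report could carry the same test twice with different parameter sets, so that the web UI can diff them side by side. A purely hypothetical fragment, written here as a Python dict (the "parameters" field and the numbers are illustrative, not the current submission format):

import json

# Hypothetical report fragment -- not the current LNT submission format.
report = {
    "Run": {"Info": {"tag": "nts", "run_order": "265649"}},
    "Tests": [
        {"Name": "nts.SingleSource/Benchmarks/Dhrystone/dry",
         "Info": {"parameters": {"OPTFLAGS": "-O3"}},
         "Data": {"execution_time": [0.91]}},
        {"Name": "nts.SingleSource/Benchmarks/Dhrystone/dry",
         "Info": {"parameters": {"OPTFLAGS": "-Os"}},
         "Data": {"execution_time": [1.07]}},
    ],
}
print(json.dumps(report, indent=2))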

Elena.

From: daniel.dunbar at gmail.com [mailto:daniel.dunbar at gmail.com] On Behalf Of Daniel Dunbar
Sent: Tuesday, April 19, 2016 6:56 PM
To: Kristof Beyls <Kristof.Beyls at arm.com>
Cc: Elena Lepilkina <Elena.Lepilkina at synopsys.com>; llvm-dev <llvm-dev at lists.llvm.org>; James Molloy <James.Molloy at arm.com>; Chris Matthews <chris.matthews at apple.com>; Matthias Braun <matze at braunis.de>; nd <nd at arm.com>
Subject: Re: RFC: LNT/Test-suite support for custom metrics and test parameterization

Hi Elena,

This is great, I would love to see extensible support for arbitrary metrics.

On Tue, Apr 19, 2016 at 12:39 AM, Kristof Beyls <Kristof.Beyls at arm.com> wrote:
Hi Elena,

Many thanks for working on this!

May I first suggest converting the Google document to email content? That may make it a little easier for more people to review, and it also makes sure the content is archived on the LLVM mail servers.
I'll refrain from making detailed comments until the text is in an email, so that comments remain close to the text they refer to.

Unfortunately, Kristof has already started the ball rolling. :)


From a high-level point-of-view, a few thoughts I had on the custom metrics proposal:
* My understanding is that, to be able to add custom metrics, you suggest changing the database schema to something closer to a key-value way of storing data. Often, storing data as key-value pairs in a relational database can slow down queries a lot, depending on how the data typically gets queried. I think that for LNT usage, the schema you suggest may work well in practice, but you'll need to do query-time and web-page load-time measurements to compare the speed before and after your suggested schema change, on a database with as much real-world data as you can lay your hands on. Ideally, do this for both the sqlite and the postgres database engines.

I agree with Kristof here.

LNT actually did previously use a very key-value-centric model which was extensible, but it became untenable as we scaled our server up to tens of millions of records (for reference, the llvm.org server currently has over 100 million samples). It was also very cumbersome to program reports against, given that we have little-to-no infrastructure for doing background processing of the data to transform it into a more efficient query representation. Given that it sounds like you have already (partially?) implemented this, have you tried scaling up to a large data set?
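Even a toy comparison along these lines (an in-memory sqlite3 database, nothing like LNT's real schema) can give a first idea of how a wide per-metric-column table and a key-value table behave on a typical aggregation query as the row count grows:

# Toy before/after timing, not LNT's schema: a "wide" table with one column
# per metric versus a key-value table with one row per (sample, metric).
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE wide_sample (test_id INT, compile_time REAL, exec_time REAL)")
cur.execute("CREATE TABLE kv_sample (test_id INT, metric TEXT, value REAL)")
for i in range(200000):
    cur.execute("INSERT INTO wide_sample VALUES (?, 0.1, 0.2)", (i % 1000,))
    cur.execute("INSERT INTO kv_sample VALUES (?, 'compile_time', 0.1)", (i % 1000,))
    cur.execute("INSERT INTO kv_sample VALUES (?, 'exec_time', 0.2)", (i % 1000,))
conn.commit()

def timed(query):
    start = time.time()
    cur.execute(query).fetchall()
    return time.time() - start

print("wide:", timed("SELECT test_id, AVG(exec_time) FROM wide_sample GROUP BY test_id"))
print("kv:  ", timed("SELECT test_id, AVG(value) FROM kv_sample"
                     " WHERE metric = 'exec_time' GROUP BY test_id"))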

One of the reasons for the current LNT design (where we instantiate tables in response to a configuration) was so that extensible metrics could be added directly to the underlying SQL schema. The idea was that a user/client would configure a particular set of metrics to be tracked when they create the server (hopefully with features to track new metrics later), and then we would create a corresponding schema that tracked each metric separately.

For true "arbitrarily extensible" data where it doesn't make sense to be present in all samples for a schema, then I envisioned that users would attach additional samples into a BLOB sample field (which could be JSON or MsgPack, etc.) and then build up infrastructure for doing richer (but slow) queries over that data.
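A rough sketch of what I mean (table and field names are purely illustrative): the extensible per-sample fields live in a serialized blob column, and the "richer but slow" queries decode it row by row outside of SQL.

import json
import sqlite3

# Illustrative only: extensible per-sample data serialized into a BLOB column.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sample (id INTEGER PRIMARY KEY, exec_time REAL, extra BLOB)")
cur.execute("INSERT INTO sample (exec_time, extra) VALUES (?, ?)",
            (0.42, json.dumps({"code_size": 11840, "power_mw": 73.5})))
conn.commit()

# "Richer but slow" query: decode the blob and filter in Python.
for row_id, exec_time, extra in cur.execute("SELECT id, exec_time, extra FROM sample"):
    fields = json.loads(extra)
    if fields.get("code_size", 0) > 10000:
        print(row_id, exec_time, fields["code_size"])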

 - Daniel

* Quite a few users of LNT only use the server and web UI, and use a different system to produce the test data in the JSON file format that can be submitted to the LNT server. Therefore, I think it would be useful if you also described how the JSON file structure would change under this proposal.
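Just to illustrate the kind of description I mean (the fields below are hypothetical, not a proposal): for example, how would a per-test record carry a custom metric next to the standard ones?

import json

# Hypothetical only -- not an actual or proposed LNT format.
test_record = {
    "Name": "nts.SingleSource/Benchmarks/Dhrystone/dry",
    "Data": {
        "execution_time": [2.31],
        "code_size": [10432],   # custom metric
    },
}
print(json.dumps(test_record, indent=2))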

For the proposal to add test-suite parameters: I'm not sure I've fully understood the problem you're trying to solve. Maybe a more concrete example use case could help demonstrate the value of having multiple sets of CFLAGS per test program in a single run?
It seems that you're working on a patch that adapts the Makefile structure in the test-suite. Maybe it would be better to switch to the new cmake+lit system to build and run the programs in the test-suite and fix the problem there?

Thanks,

Kristof


On 18 Apr 2016, at 17:16, Elena Lepilkina <Elena.Lepilkina at synopsys.com> wrote:

Greetings everyone,

We would like to improve LNT.
The following RFC describes two LNT enhancements:

  *   Custom (extensible) metrics
  *   Test parameterization

The main idea is described in this document: https://docs.google.com/document/d/1zWWfu_iBQhFaHo73mhqqcL6Z82thHNAoCxaY7BveSf4/edit?usp=sharing.

Thanks,

Elena.

