[llvm-dev] RFC: LNT/Test-suite support for custom metrics and test parameterization

Sergey Yakoushkin via llvm-dev llvm-dev at lists.llvm.org
Thu Apr 21 08:44:17 PDT 2016


Hi Kristof,

       The way we use LNT, we would run different configurations (e.g. -O3
vs -Os) as different "machines" in LNT's model.

O2/O3 is indeed a bad example. We are also using different machines for
Os/O3: such parameters apply to all tests, and we don't propose major
changes there. Elena was only extending the LNT interface a bit to ease
LLVM test-suite execution with different compiler or hardware flags.
Maybe some changes are required to analyze and compare metrics between
"machines", e.g. code size or performance between Os/O2/O3.
Do you perform such comparisons?
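
To illustrate the kind of comparison we mean, here is a minimal sketch,
assuming the JSON report format the test suite produces (a top-level
"Tests" list whose entries carry a "Name" and a "Data" list of samples);
the file names and metric suffix below are only examples:

import json
import sys

def load_samples(path, metric):
    # Map test name -> best (minimum) sample for the given metric suffix.
    with open(path) as f:
        report = json.load(f)
    samples = {}
    for test in report["Tests"]:
        name = test["Name"]
        if name.endswith("." + metric) and test["Data"]:
            samples[name[:-len(metric) - 1]] = min(test["Data"])
    return samples

# usage: compare.py report-Os.json report-O3.json exec
base, other, metric = sys.argv[1:4]
a, b = load_samples(base, metric), load_samples(other, metric)
for name in sorted(set(a) & set(b)):
    if a[name] > 0:
        print("%+7.2f%%  %s" % (100.0 * (b[name] - a[name]) / a[name], name))

The same script would work for any per-test metric both reports share,
e.g. a code-size metric instead of execution time.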


"test parameters" are different, they allow exploring multiple variants of
the same test case. E.g. can be:
* index of input data sets, length of input vector, size of matrix, etc;
* macro that affect source code such as changing 1) static data allocation
to dynamic or 2) constants to variables (compile-time unknown)
* extra sets of internal compilation options that are relevant only for
particular test case

The same parameter can apply to multiple tests with different value sets:

test1: param1={v1,v2,v3}
test2: param1={v2,v4}
test3: (no parameters)

Of course, the original test cases can be duplicated (copied under
different names); that is enough to execute the tests. Explicit "test
parameters", however, make it possible to explore the dependencies
between test parameters and metrics, as in the sketch below.
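
For illustration, a minimal sketch of how such value sets could be
expanded into concrete variants; the test names mirror the example
above, and the -D flags are invented for the sake of the example:

import itertools

PARAMS = {
    "test1": {"param1": ["v1", "v2", "v3"]},
    "test2": {"param1": ["v2", "v4"]},
    "test3": {},  # no parameters: a single plain variant
}

def variants(test, params):
    # Yield (variant name, extra compile flags) per value combination.
    if not params:
        yield test, []
        return
    keys = sorted(params)
    for values in itertools.product(*(params[k] for k in keys)):
        pairs = list(zip(keys, values))
        yield ("%s.%s" % (test, "_".join("%s=%s" % p for p in pairs)),
               ["-D%s=%s" % p for p in pairs])

for test in sorted(PARAMS):
    for name, flags in variants(test, PARAMS[test]):
        print(name, flags)

Recording the parameter values alongside each variant's samples is what
would make it possible to plot a metric against a parameter later.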


On Thu, Apr 21, 2016 at 4:36 PM, Kristof Beyls via llvm-dev <
llvm-dev at lists.llvm.org> wrote:

>
> On 21 Apr 2016, at 15:00, Elena Lepilkina <Elena.Lepilkina at synopsys.com>
> wrote:
>
> Hi Kristof and Daniel,
>
> Thanks for your answers.
>
> Unfortunately I haven't tried scaling up to a large data set before. Today
> I tried, and the results are quite bad.
> So the database schema should be rebuilt. Now I am thinking about creating
> one sample table for each test suite, instead of cloning the whole set of
> tables. As I see it, the other tables can be shared by all test suites. I
> mean that if a user runs tests of a new test suite, a new sample table
> would be created while importing the data from JSON, if it doesn't exist
> yet. Are there any problems with this solution? Maybe I don't know some
> details.
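>
> For instance, a minimal sketch of the idea with SQLAlchemy, which LNT
> uses (the table and column names here are illustrative, not the real
> LNT schema):
>
> import sqlalchemy as sa
>
> metadata = sa.MetaData()
>
> def sample_table(suite):
>     # Define (or look up) the per-suite sample table; the other
>     # tables (runs, tests, ...) stay shared between suites.
>     name = "%s_Sample" % suite
>     if name in metadata.tables:
>         return metadata.tables[name]
>     return sa.Table(
>         name, metadata,
>         sa.Column("ID", sa.Integer, primary_key=True),
>         sa.Column("RunID", sa.Integer),
>         sa.Column("TestID", sa.Integer),
>         sa.Column("Value", sa.Float))
>
> engine = sa.create_engine("sqlite:///:memory:")
> sample_table("nts")
> metadata.create_all(engine)  # only creates tables that do not exist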
>
>
> It's unfortunate to see that performance doesn't scale with the proposed
> initial schema, but it is not entirely surprising. I don't really have
> much feedback on how the schema could be adapted otherwise, as I haven't
> worked much on that part. I hope Daniel will have more insights to share
> here.
>
> Moreover, I have a question about compile tests. Are compile tests
> runnable? On http://llvm.org/perf there are no compile tests. Does that
> mean they are deprecated for now?
>
> About test parameters: for example, we would like to have the opportunity
> to compare the benchmark results of a test compiled with -O3 and with -Os
> in the context of one run.
>
>
> The way we use LNT, we would run different configurations (e.g. -O3 vs
> -Os) as different "machines" in LNT's model. This is also explained in
> LNT's documentation, see
> https://github.com/llvm-mirror/lnt/blob/master/docs/concepts.rst.
> Unfortunately, this version of the documentation hasn't found its way yet
> to http://llvm.org/docs/lnt/contents.html.
> Is there a reason why storing different configurations as different
> "machines" in the LNT model doesn't work for you?
> I suspect that a number of places in LNT's analyses assume that different
> runs coming from the same "machine" are always produced by the same
> configuration. But I'm not entirely sure about that.
>
> Thanks,
>
> Kristof
>