[PATCH] D16318: [LNT] Add a "number of test results" table to the daily report page.
Kristof Beyls via llvm-commits
llvm-commits at lists.llvm.org
Wed Jan 20 02:08:52 PST 2016
On 19/01/2016 17:34, David Blaikie wrote:
> On Tue, Jan 19, 2016 at 6:34 AM, Kristof Beyls via llvm-commits
> <llvm-commits at lists.llvm.org> wrote:
>
> kristof.beyls created this revision.
> kristof.beyls added a reviewer: cmatthews.
> kristof.beyls added a subscriber: llvm-commits.
>
> We've found it useful to have a table on the daily report page which
> reports for each machine how many tests were seen on a given day. It
> has helped us to quickly notice when a certain machine started
> producing fewer test/benchmark results for some reason.
>
> AFAIK, there isn't another way to quickly notice if a particular
> machine starts producing fewer-than-expected benchmark results.
>
>
> I imagine this metric would be confused if there were a day with
> relatively few commits, no? (in that case you wouldn't need to run
> benchmarks because you'd be up to date)
>
> Perhaps a better/alternative metric would be "commits per benchmark
> run"? (this has the opposite effect sometimes, of course - if commit
> rate spikes it'll look like the bot slowed down - so perhaps some
> metric about commit rate?) This would help catch the underlying
> concern (or what I assume is the underlying concern) - when data is
> insufficiently granular for good analysis.
>
> (another alternative might be time taken for a report - this will
> fluctuate if many new tests are added to the repository, but would
> otherwise be independent of commit rate (either too high, or too low))
Hi David,
I probably should've explained the metric better.
The daily report page takes a run for each machine, for each day, and
analyzes those.
If it finds regressions or improvements in the benchmark results, it'll
show those.
It'll also show if a particular benchmark program turned from passing to
failing, or from failing to passing.
However, so far, it doesn't indicate if a machine all of a sudden
stopped producing results for a particular benchmark program altogether.
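
To make that gap concrete, here is a minimal sketch (in Python, with
made-up data; this is not LNT's actual code) of why a pass/fail
comparison between two days can't see a benchmark that simply vanished:

def pass_fail_flips(before, after):
    """Given {benchmark: passed?} dicts for two consecutive days,
    return the benchmarks whose pass/fail status flipped."""
    return {b: (before[b], after[b])
            for b in before.keys() & after.keys()
            if before[b] != after[b]}

yesterday = {"sqlite": True, "bzip2": True, "oggenc": True}
today = {"sqlite": True, "bzip2": False}   # oggenc vanished entirely

print(pass_fail_flips(yesterday, today))   # {'bzip2': (True, False)}
# Note that 'oggenc' doesn't appear at all: a benchmark that stops
# reporting is invisible to a pure pass/fail comparison.
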
The intent of this table is to do a basic sanity check that on
consecutive days, the machines keep on reporting results for the same
number of benchmark programs. When the number of programs in the
test-suite changes, or when you add more proprietary/external
benchmarks, this number will change. But on the vast majority of days,
the number of benchmark programs reported on by a machine should remain
stable.
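
In other words, the table boils down to a per-machine, per-day count of
reported results. A hedged sketch of that idea, again with hypothetical
machine and benchmark names rather than the actual implementation:

from collections import Counter

def results_per_machine_per_day(samples):
    """Count reported benchmark results per (machine, day).
    `samples` is an iterable of (machine, day, benchmark) tuples."""
    return Counter((machine, day) for machine, day, _ in samples)

samples = [
    ("arm-bot", "2016-01-18", "sqlite"),
    ("arm-bot", "2016-01-18", "bzip2"),
    ("arm-bot", "2016-01-19", "sqlite"),   # bzip2 missing on the 19th
]
counts = results_per_machine_per_day(samples)
print(counts[("arm-bot", "2016-01-18")])   # 2
print(counts[("arm-bot", "2016-01-19")])   # 1 <- the drop the table surfaces
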
We're actively using the daily report page as the first thing to look at
to get an impression of how ToT LLVM has evolved on the machines and
benchmarks we track.
We've been using this table for about half a year downstream, and it has
helped us on a few occasions to quickly notice that a particular machine
had a problem and was no longer producing a full set of results.
There is no other easy way to detect this, which is why the table is a
useful addition: a small amount of extra information that makes the
daily report page better at giving a quick overview of today's status
of ToT.
So, this table/metric isn't intended to give an indication of how
quickly a machine gives feedback, but rather whether a machine is still
producing a full set of results.
Having a metric on how quickly a machine runs may also be useful, but
I'm not sure it should be part of the daily report page.
Thanks,
Kristof