[www] r176209 - Add LNT statistics project

Renato Golin renato.golin at linaro.org
Thu Feb 28 09:00:06 PST 2013


On 28 February 2013 16:29, David Blaikie <dblaikie at gmail.com> wrote:

> If I were doing this that's probably where I would want to head.
> Essentially gut the existing test suite running infrastructure (this
> needs to be done sooner or later anyway because the whole thing is
> terribly crufty) & make it smart enough to choose the number of
> executions dynamically to compensate for the noise to some level
> of confidence (& presumably have some cutoff threshold where the test
> would be considered just too noisy to be worthwhile).
>
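
To make sure we're talking about the same thing, here is a rough sketch
of the dynamic scheme you describe (the names and thresholds are made
up for illustration, not a proposal for the actual implementation):

import statistics

def measure(run_once, rel_ci_target=0.01, min_runs=3, max_runs=20):
    """Re-run a test until the 95% confidence interval of the mean is
    within rel_ci_target of the mean, or give up after max_runs."""
    samples = []
    for _ in range(max_runs):
        samples.append(run_once())            # one timed execution
        if len(samples) < min_runs:
            continue
        mean = statistics.mean(samples)
        stderr = statistics.stdev(samples) / len(samples) ** 0.5
        if 1.96 * stderr <= rel_ci_target * mean:   # normal approximation
            return mean, samples, True        # noise compensated
    # too noisy to be worthwhile within this budget
    return statistics.mean(samples), samples, False

The "too noisy" cutoff is the False return at the end.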

While the current system has many problems, it has the minimum set of
features I'd expect from continuous integration + benchmarks. I can do
the analysis off-line for now; that's not a big problem (though having
a JSON interface would be great).
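
For what it's worth, the off-line analysis I have in mind would be
trivial against such an interface. A made-up sketch, with a
hypothetical URL and JSON layout (none of this exists today):

import json
import urllib.request

# Hypothetical endpoint and field names, purely for illustration.
URL = "http://lnt.example.org/api/runs/{}/samples.json"

def fetch_samples(run_id):
    with urllib.request.urlopen(URL.format(run_id)) as resp:
        return json.loads(resp.read().decode("utf-8"))

def report_moves(old_run, new_run, threshold=0.05):
    """Print tests whose mean execution time moved by more than 5%."""
    old = {s["test"]: s["values"] for s in fetch_samples(old_run)}
    new = {s["test"]: s["values"] for s in fetch_samples(new_run)}
    for test in sorted(old.keys() & new.keys()):
        old_mean = sum(old[test]) / len(old[test])
        new_mean = sum(new[test]) / len(new[test])
        delta = (new_mean - old_mean) / old_mean
        if abs(delta) > threshold:
            print("{:<40} {:+6.1f}%".format(test, delta * 100))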

Re-writing the whole system would create other inefficiencies: we'd be
focused on the things the current system fails at, but we'd essentially
have the same amount of time we had before, so the things the current
system does well would go missing. I have seen many test
infrastructures be re-written, and not once has version 2.0 completely
replaced version 1.0.

Also, dynamic benchmarks are not good for regression analysis, as they
can potentially change the number of cycles between runs (the OS being
busy and all that).

So, all in all, the current system is not so bad, and if it could be
extended with a JSON interface (is it?), like the buildbot, we could do
much more with it as it is.

But yes, increasing (statically) the run-time for most applications would
be an easy and good move.

cheers,
--renato