[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
renato.golin at linaro.org
Mon Jul 1 09:41:07 PDT 2013
On 1 July 2013 02:02, Chris Matthews <chris.matthews at apple.com> wrote:
> One thing that LNT is doing to help “smooth” the results for you is by
> presenting the min of the data at a particular revision, which (hopefully)
> is approximating the actual runtime without noise.
That's an interesting idea; as you said, it works if you run multiple times on every revision.
On ARM, every run takes *at least* 1h, other architectures might be a lot
worse. It'd be very important on those architectures if you could extract
point information from group data, and min doesn't fit in that model. You
could take min from a group of runs, but again, that's no different than
moving averages. Though, "moving mins" might make more sense than "moving
averages" for the reasons you described.
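To make the comparison concrete, here is a small sketch (not LNT code; the data and window size are illustrative assumptions) of a "moving min" against a moving average. Since benchmark noise only ever adds time, the min of a window converges to the noise floor, while the average stays biased upward:

```python
# Sketch: "moving min" vs. moving average over noisy per-revision timings.
# Illustrative only -- not actual LNT code; data and window are assumptions.

def moving_average(samples, window):
    """Mean of the last `window` samples at each point."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def moving_min(samples, window):
    """Min of the last `window` samples: tracks the noise floor."""
    return [min(samples[max(0, i - window + 1):i + 1])
            for i in range(len(samples))]

# True runtime is 1.00s; noise only ever adds time, never subtracts it.
timings = [1.00, 1.12, 1.03, 1.25, 1.01, 1.08, 1.30, 1.02]
print(moving_min(timings, 3))      # hugs the 1.0x noise floor
print(moving_average(timings, 3))  # stays biased upward by the noise
```

The moving min gives per-point estimates from grouped runs without needing many samples at any single revision, which is the constraint when one full run takes an hour on ARM.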
Also, on tests that run as fast as the noise (0.010s or less on A15), the
minimum is not relevant, since the timer will flatten everything under
0.010 onto 0.010, making your test always report 0.010, even when there are
real regressions.
I really cannot see how you can statistically enhance data in a scenario
where the measuring rod is larger than the signal. We need to change the
wannabe-benchmarks to behave like proper benchmarks, and move everything
else into "Applications" for correctness and specifically NOT time them.
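The resolution problem can be shown in a few lines (a sketch; the tick value and runtimes are illustrative assumptions, not real A15 measurements). Any runtime shorter than one timer tick rounds up to exactly one tick, so even a 100% regression is invisible to min, mean, or any other statistic:

```python
import math

# The "measuring rod": timer resolution in seconds (illustrative value).
TICK = 0.010

def measured(true_runtime):
    """Round the true runtime up to the next timer tick."""
    return math.ceil(true_runtime / TICK) * TICK

# A 100% regression on a 0.004s test is invisible:
before = measured(0.004)  # reads 0.010
after = measured(0.008)   # also reads 0.010
print(before, after)      # identical -- no statistic can recover the signal
```

No amount of sampling fixes this; the tests themselves have to run long enough to rise above the tick.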
Less is more.
> That works well with a lot of samples per revision, but not for across
> revisions, where we really need the smoothing. One way to explore this is
> to turn
I was really looking forward to hearing the end of that sentence... ;)
> We also lack any way to coordinate or annotate regressions, that is a whole
> separate problem though.
Yup. I'm having visions of tag clouds, bugzilla integration, cross
architectural regression detection, etc. But I'll ignore that for now,
let's solve one big problem at a time. ;)