[www] r176209 - Add LNT statistics project
David Tweed
david.tweed at arm.com
Fri Mar 1 03:47:30 PST 2013
Hi,
everything you've both been saying seems fine. The only thing I'd comment on
is that there seems to be an unconscious expectation that finding the mean
over a data-set with a small variance is how you want to approach doing
statistics. My understanding is that running a computer program is a
situation where you can get samples contaminated by relatively big
extraneous events (e.g., if some limit has ticked over and the CPU decides
to write back dirty pages while your test is running), whereas nothing
(short of a bug) can make a run faster, so you want to actively discard the
big samples and then show that your reduced data set has a small variance.
The trick, of course, is knowing when it's reasonable to discard a big
value: I'm still trying to locate a definitive source on this (if anyone
knows of one...), but my practice tends to be to keep collecting runs until
the "variance of the smallest 80% of the samples wrt the whole data-set
median" is suitably small, though I don't know if that's particularly
optimal... (Things are a bit simpler in the case I mostly look at, namely
whether codebase X is faster or slower than codebase Y, rather than trying
to maintain comparable statistics for a varying codebase over a period of
time.)
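(For concreteness, a rough sketch in Python of the kind of stopping rule I
mean; the function names, the 80% cut-off and the run limits are just my
own conventions, not anything from LNT:)

    import statistics

    def trimmed_variance(samples, keep=0.8):
        # Variance of the smallest `keep` fraction of samples, measured
        # around the median of the *whole* data set, so the discarded big
        # outliers still anchor the centre we compare against.
        med = statistics.median(samples)
        kept = sorted(samples)[:max(1, int(len(samples) * keep))]
        return sum((x - med) ** 2 for x in kept) / len(kept)

    def sample_until_stable(run_once, threshold, min_runs=10, max_runs=100):
        # Keep re-running the benchmark, actively ignoring the large
        # contaminated samples, until the reduced data set is tight enough.
        samples = []
        while len(samples) < max_runs:
            samples.append(run_once())
            if len(samples) >= min_runs and \
               trimmed_variance(samples) < threshold:
                break
        return statistics.median(samples)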
But this is orthogonal to the issue of cutting out prologue/epilogue code.
Cheers,
Dave
-----Original Message-----
From: David Blaikie [mailto:dblaikie at gmail.com]
Sent: 28 February 2013 20:25
To: Renato Golin
Cc: David Tweed; LLVM Commits
Subject: Re: [www] r176209 - Add LNT statistics project
On Thu, Feb 28, 2013 at 12:09 PM, Renato Golin <renato.golin at linaro.org>
wrote:
> On 28 February 2013 19:28, David Blaikie <dblaikie at gmail.com> wrote:
>>
>> I'm still confused as to which things you're talking about. My
>> suggestion is that we can get higher confidence on the performance of
>> tests by running them multiple times.
>
>
> Ok, now I know why I was confused... and ended up confusing you...
>
> My idea was not to run the whole test multiple times to achieve a nice
> mean+sigma, but to increase the number of iterations inside one single run
> of the test.
>
> The big problem I find in benchmarks is that the prologue/epilogue is
> normally more non-deterministic than the test itself, because it's based
> on I/O and setting up memory (thus populating caches, etc.). If you run
> multiple times, you'll end up with a small-enough sigma over the whole
> test, but not small enough over the prologue/epilogue.
>
> So, going back all the way to the beginning: I propose to spot the core of
> each benchmark and force all of them to print times of just that core.
Ah, OK - that's more like what we'll get from using a microbenchmark
suite - one designed for doing timing/etc. in-process over very strictly
scoped sections of code. I think, yes, many of the tests in our current
suite could/should be adapted to such a system. Whether or not every test
would use such infrastructure, I'm not sure - it does require more
maintenance, since we'd have to modify the code of every benchmark rather
than just integrating their build systems.
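To make that concrete, here's a rough sketch of the shape of harness I
mean (in Python just for illustration; a real suite would do this in the
benchmark's own language, and `core` plus the iteration counts are purely
placeholders):

    import time

    def run_benchmark(core, warmup=3, iterations=100):
        # `core` is the working part of the benchmark, pulled out of
        # main() into its own function so it can be called repeatedly.
        for _ in range(warmup):
            core()                   # untimed: warm caches, page in memory
        start = time.perf_counter()  # timing starts only after setup
        for _ in range(iterations):
            core()
        elapsed = time.perf_counter() - start
        return elapsed / iterations  # per-call time of just the core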
> If you need to run a few times to normalize memory/caches, do it before
> starting the timing. If the benchmark is a set of many calls, wrap the
> working part of main into a separate function and call it many times. In
> essence, do everything to make the preparation and shutdown phases not
> relevant.
>
> It's not the same as you're proposing at all, and I see value in both
> propositions. Yours is a bit easier to implement, and may have good
> enough results for the time being. But I think that we should think of a
> library to measure time on all platforms (easier said than done), so we
> can have multiple timing measurements on the same benchmark, and possibly
> simplify the whole infrastructure in the process (by making the
> benchmarks more complex).
>
> Now that I've got what you mean, I think it's a simple idea that should
> be done before my proposition, possibly together with my GSoC project.
>
> Sorry for the confusion, I really got it upside-down. ;)
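(On the portable-timing point, a minimal sketch of the sort of shim I'd
imagine, in Python; the fallback chain here is just illustrative, not
anything LNT does today:)

    import time

    try:
        timer = time.perf_counter     # monotonic, high resolution (3.3+)
    except AttributeError:
        import timeit
        timer = timeit.default_timer  # picks the best timer per platform

    start = timer()
    # ... strictly scoped benchmark core goes here ...
    print("core time: %f s" % (timer() - start))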
No worries - got there in the end.
- David