[LLVMdev] Why is the default LNT aggregation function min instead of mean
Renato Golin
renato.golin at linaro.org
Fri Jan 17 03:09:20 PST 2014
On 17 January 2014 07:58, Chris Matthews <chris.matthews at apple.com> wrote:
> If you have a 0.004s granularity, and you want to identify small (1%)
> changes, you’ll probably need benchmarks running at least 0.8s.
>
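For what it's worth, this is my reading of where the 0.8s comes from (the
factor of two is my own assumption, not Chris's):

    # Back-of-the-envelope: for a 1% change to be visible at all, it should
    # span at least a couple of timer quanta, otherwise it disappears into
    # the rounding of a single tick.
    timer_quantum = 0.004      # seconds, the granularity quoted above
    smallest_change = 0.01     # the 1% regression we want to detect

    min_runtime = 2 * timer_quantum / smallest_change
    print(min_runtime)         # 0.8 seconds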
You also have to remember that this machine is not just running benchmarks:
it has an OS with CPU schedulers, memory managers and daemons, and sometimes
other users and applications running as well.
Granularity only gives you the minimum quantum of the instrument, not the
average precision with which it measures things (which will be some multiple
of the quantum). The same benchmark on the same machine can give you widely
different results depending on the load, the time of day, the phase of the
moon, the alignment of the stars, etc.
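Just to illustrate what I mean (a throwaway sketch; "./bench" stands in for
whatever binary you are measuring), time the same binary a few times and
compare the spread with the quantum:

    import statistics
    import subprocess
    import time

    def time_once(cmd):
        """Wall-clock one run of the command."""
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        return time.perf_counter() - start

    # Each reading is rounded to the instrument's quantum, but the spread
    # across runs (scheduler, load, etc.) is usually much larger than that.
    samples = [time_once(["./bench"]) for _ in range(10)]
    print("min   :", min(samples))
    print("mean  :", statistics.mean(samples))
    print("stddev:", statistics.stdev(samples))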
I have seen results that were 10s slower than the previous run, with a
standard deviation of 1s, only to run the *same* binary again a few days
later and get a result 10s faster, again with a stddev of 1s. The machine had
no other users or GUI, and the CPU governor was set to "performance".
I may be wrong, but I believe the only way to truly understand regressions
in benchmarks is to track the history of each individual benchmark and use
some heuristics to spot oddities, as in the sketch below. Running each
benchmark multiple times per run might sometimes work, but it might just
exaggerate the local instabilities.
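Something along these lines is what I have in mind, purely as a sketch (this
is not what LNT does today, and the use of median/MAD and the threshold of 3
are arbitrary choices of mine):

    import statistics

    def looks_like_regression(history, new_sample, threshold=3.0):
        """Flag new_sample if it sits well outside this benchmark's history.

        history: past timings for this one benchmark.
        threshold: how many MADs above the median counts as 'odd'.
        """
        median = statistics.median(history)
        mad = statistics.median(abs(x - median) for x in history)
        if mad == 0:
            return new_sample > median  # degenerate case: flat history
        return (new_sample - median) / mad > threshold

    # e.g. a benchmark that normally takes ~100s:
    past = [100.2, 99.8, 101.1, 100.5, 99.9, 100.3]
    print(looks_like_regression(past, 100.9))   # False, within normal noise
    print(looks_like_regression(past, 110.0))   # True, clearly odd

The point is that the "normal" spread comes from the benchmark's own history
rather than from a single previous run.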
cheers,
--renato