On 17 January 2014 07:58, Chris Matthews <chris.matthews@apple.com> wrote:
> If you have a 0.004s granularity, and you want to identify small (1% changes) you'll probably need benchmarks running at least 0.8s.
You also have to remember that this machine is not just running benchmarks: it has an OS with CPU schedulers, memory managers, daemons, and sometimes other users and apps running.

Granularity will only give you the minimum quantum of the instrument, not the average precision with which it measures things (which will be a multiple of the quantum). The same benchmark on the same machine can give you widely different results depending on the load, time of day, phase of the moon, alignment of the stars, etc.
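
You can see this on any machine with a quick sketch like the one below. Python's perf_counter is just a stand-in for whatever timer the harness uses, and the workload is a made-up placeholder; the spread you observe is usually many times the reported timer resolution.

    import statistics
    import time

    def workload():
        # Placeholder benchmark body; any short, deterministic loop will do.
        return sum(i * i for i in range(200_000))

    samples = []
    for _ in range(30):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)

    print("timer resolution:", time.get_clock_info("perf_counter").resolution)
    print("min/median/stddev:", min(samples),
          statistics.median(samples), statistics.stdev(samples))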

I have seen results that were 10s slower than the previous run with a standard deviation of 1s, only to run again a few days later with the *same* binary and come out 10s faster, again with a stddev of 1s. The machine had no other users or GUI, and the CPU scheduler was set to "performance".

I may be wrong, but I believe the only way to truly understand regressions in benchmarks is to understand the history of each individual benchmark and use some heuristics to spot oddities (one possible shape for such a heuristic is sketched below). Running each benchmark multiple times per run might work sometimes, but it might just exaggerate the local instabilities.
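
As a strawman only (this is not what LNT actually does, just one way such a heuristic could look): a median/MAD check of the latest result against that one benchmark's own recent history.

    import statistics

    def looks_like_regression(history, latest, window=20, threshold=3.0):
        # `history` is this one benchmark's past timings, oldest first.
        # Median + MAD over a recent window; values far above the median
        # get flagged. Purely illustrative, not LNT's actual logic.
        recent = history[-window:]
        if len(recent) < 5:
            return False  # not enough history to judge yet
        med = statistics.median(recent)
        mad = statistics.median([abs(x - med) for x in recent])
        if mad == 0:
            mad = 1e-9    # perfectly flat history; avoid dividing by zero
        return (latest - med) / mad > threshold

The point is that the decision is made per benchmark against its own history, rather than from the variance of a single noisy run.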

cheers,
--renato