<div dir="ltr">On Wed, May 21, 2014 at 9:21 AM, Tobias Grosser <span dir="ltr"><<a href="mailto:tobias@grosser.es" target="_blank">tobias@grosser.es</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Also, we should modify the value analysis (based on how close the<br>
medians/minimums are) to vary according to the confidence level as<br>
well. However, this analysis is parametric; we need to know how the<br>
data is actually distributed for every test. I don't think there is a<br>
non-parametric test which does the same thing.<br>
</blockquote>
<br></div>
What kind of problem could we get in case we assume a normal distribution and the values are in fact not normally distributed?<br></blockquote><div><br></div><div>I haven't looked at this particular data, but I've done a lot of work in general on trying to detect small changes in performance.</div>
<div><br></div><div>My feeling is that there is usually a "true" execution time PLUS the sum of whatever random delays happened during the run. Nothing random ever makes the code run faster than it should! (That by itself makes the normal distribution completely inappropriate, since a normal distribution always assigns a finite probability to negative values.)</div>
<div><br></div><div>Each individual random thing that might happen in a run probably has a binomial or hypergeometric distribution, but the per-event probability p is so small and the number of opportunities n so large (with p*n roughly constant) that you might as well call it a Poisson distribution.</div>
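<div><br></div><div>A quick sanity check of that approximation (a sketch, not from the thread; the values of n and p below are made up for illustration): with n large, p small, and n*p held fixed, the binomial probabilities are nearly indistinguishable from the Poisson probabilities with the same mean.</div>

```python
import math

def binom_pmf(k, n, p):
    """Probability of exactly k events in n trials, each with probability p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson probability of exactly k events with expected value lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Rare events: huge n, tiny p, with n*p held constant (illustrative numbers).
n, p = 100_000, 3e-5
lam = n * p  # 3.0

for k in range(6):
    b = binom_pmf(k, n, p)
    q = poisson_pmf(k, lam)
    print(f"k={k}  binomial={b:.6f}  poisson={q:.6f}")
```

The two columns agree to several decimal places, which is why the per-source distribution can be treated as Poisson with a single parameter n*p.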
<div><br></div><div>Note that while the sum of a large number of arbitrary independent random variables is approximately normal (Central Limit Theorem), the sum of independent Poisson variables is exactly Poisson! And you only need one number to characterise a Poisson distribution: the expected value (which also happens to equal the variance).</div>
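<div><br></div><div>That closure property is easy to check by simulation (again just an illustrative sketch; the three noise intensities below are invented): summing draws from several independent Poisson sources gives samples whose mean and variance both land near the sum of the individual expected values, and no sample is ever negative.</div>

```python
import math
import random
import statistics

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) sample via Knuth's algorithm:
    count uniform draws until their product falls below e**(-lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(42)
lam_sources = [0.5, 1.2, 2.3]   # hypothetical independent noise sources per run
expected = sum(lam_sources)     # 4.0

# Total noise per run = sum of the three independent Poisson sources.
samples = [sum(poisson_sample(l, rng) for l in lam_sources)
           for _ in range(20_000)]

m = statistics.mean(samples)
v = statistics.pvariance(samples)
print(f"mean={m:.3f}  variance={v:.3f}  (both should be near {expected})")
```

The mean and variance coming out nearly equal is exactly the Poisson signature; a normal model would let the noise go negative, which never happens here.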
<div><br></div></div></div></div>