[LLVMdev] Use perf tool for more accurate time measuring on Linux

Bruce Hoult bruce at hoult.org
Tue May 20 17:44:57 PDT 2014


On Wed, May 21, 2014 at 9:21 AM, Tobias Grosser <tobias at grosser.es> wrote:

>> Also, we should modify the value analysis (based on how close the
>> medians/minimums are) to vary according to the confidence level as
>> well. However, this analysis is parametric; we need to know how the
>> data is actually distributed for every test. I don't think there is a
>> non-parametric test which does the same thing.
>>
>
> What kind of problems could we run into if we assume a normal
> distribution and the values are in fact not normally distributed?
>

I haven't looked at this particular data, but I've done a lot of work in
general trying to detect small changes in performance.

My feeling is that there is usually a "true" execution time PLUS the sum
of some random number of things that happened during the run. Nothing
random ever makes the code run faster than it should! (That by itself
makes the normal distribution completely inappropriate, as it always
assigns a finite probability to negative values.)
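To make that concrete, here is a minimal Python/NumPy sketch (the 100 ms
"true" time and the noise rate of 5 are made-up numbers, purely for
illustration). The mean of the runs is biased upward by the noise, while
the minimum sits essentially on the true time:

    import numpy as np

    rng = np.random.default_rng(0)

    TRUE_TIME = 100.0                        # hypothetical "true" time (ms)
    noise = rng.poisson(lam=5.0, size=1000)  # nonnegative random delays only
    runs = TRUE_TIME + noise                 # nothing ever runs faster

    print("mean:   ", runs.mean())           # biased upward by the noise
    print("median: ", np.median(runs))
    print("minimum:", runs.min())            # closest to the true time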

Each individual random thing that might happen in a run probably actually
has a binomial or hypergeometric distribution, but p is so small and n so
large (and p*n constant) that you might as well call it a Poisson
distribution.
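A quick way to convince yourself of that limit numerically (n and p
below are arbitrary, chosen only so that n*p stays fixed at 5):

    import numpy as np

    rng = np.random.default_rng(0)

    n, p = 1_000_000, 5e-6   # many opportunities, each individually rare
    lam = n * p              # held constant at 5

    binom = rng.binomial(n, p, size=100_000)
    poiss = rng.poisson(lam, size=100_000)

    # The two samples are practically indistinguishable: mean ~ var ~ 5
    print("binomial mean/var:", binom.mean(), binom.var())
    print("poisson  mean/var:", poiss.mean(), poiss.var())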

Note that while the sum of a large number of independent random
variables tends toward a normal distribution (Central Limit Theorem),
the sum of independent Poisson variables is exactly Poisson! And you
only need one number to characterise a Poisson distribution: the
expected value (which also happens to equal the variance).
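That, too, is easy to check numerically (the three rates below are
arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)

    # Three independent Poisson sources with rates summing to 5.0
    total = sum(rng.poisson(lam, size=100_000) for lam in (1.0, 2.5, 1.5))

    # The sum is itself Poisson(5.0): mean and variance both come out ~5
    print("mean:", total.mean(), " variance:", total.var())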