[LLVMdev] Use perf tool for more accurate time measuring on Linux

Yi Kong kongy.dev at gmail.com
Wed May 21 02:45:56 PDT 2014


On 21 May 2014 01:44, Bruce Hoult <bruce at hoult.org> wrote:
> On Wed, May 21, 2014 at 9:21 AM, Tobias Grosser <tobias at grosser.es> wrote:
>>>
>>> Also, we should modify the value analysis (based on how close the
>>> medians/minimums are) to vary according to the confidence level as
>>> well. However, this analysis is parametric; we need to know how the
>>> data is actually distributed for every test. I don't think there is
>>> a non-parametric test which does the same thing.
>>
>>
>> What kind of problems could we get if we assume a normal distribution
>> and the values are in fact not normally distributed?
>
>
> I haven't looked at this particular data, but I've done a lot of work in
> general trying to detect small changes in performance.

I've set up another server.
http://cloud-vm-44-158.doc.ic.ac.uk:55080/
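
On the earlier point about parametric vs. non-parametric analysis: as a
rough sketch, a rank-sum test such as Mann-Whitney U makes no assumption
about the shape of the distribution, although it doesn't vary the
median/minimum closeness threshold the way the current analysis does.
All the sample values below are made up, purely for illustration:

# Sketch only: compare two sets of run times without assuming any
# particular distribution, using the Mann-Whitney U (rank-sum) test.
# The sample values are made up for illustration.
from scipy.stats import mannwhitneyu

baseline = [1.02, 1.01, 1.03, 1.01, 1.05, 1.02]   # seconds
current  = [1.08, 1.06, 1.07, 1.09, 1.06, 1.07]

stat, p_value = mannwhitneyu(baseline, current, alternative='two-sided')

# Only report a change when it is unlikely to be noise at the chosen
# confidence level.
confidence = 0.95
if p_value < 1.0 - confidence:
    print('significant change (p = %.4f)' % p_value)
else:
    print('no significant change (p = %.4f)' % p_value)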

> My feeling is that there is usually a "true" execution time PLUS the sum of
> some random number of things that happened during the run. Nothing random
> ever makes the code run faster than it should! (Which by itself makes the
> normal distribution completely inappropriate, as it always has a finite
> chance of negative values.)

I agree.
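
For what it's worth, here is a tiny simulation of that model (all numbers
made up, purely illustrative): a fixed "true" time plus a sum of
non-negative random delays. The samples end up with a hard lower bound and
a long right tail, so the minimum tracks the true time much better than
the mean does.

# Sketch: timing samples modelled as a fixed true time plus a sum of
# non-negative random delays (all values made up for illustration).
import random

def one_run(true_time=1.0, n_events=10000, p=0.0005, cost=0.001):
    # Each potential interruption fires with small probability p and adds
    # 'cost' seconds; nothing can make the run faster than true_time.
    delays = sum(cost for _ in range(n_events) if random.random() < p)
    return true_time + delays

samples = sorted(one_run() for _ in range(100))
print('min    = %.4f' % samples[0])
print('median = %.4f' % samples[len(samples) // 2])
print('mean   = %.4f' % (sum(samples) / len(samples)))
# The minimum is closest to the underlying 1.0s; the distribution has a
# hard lower bound and a long right tail, so it is clearly not normal.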

> Each individual random thing that might happen in a run probably actually
> has a binomial or hypergeometric distribution, but p is so small and n so
> large (and p*n constant) that you might as well call it a Poisson
> distribution.
>
> Note that while the sum of a large number of arbitrary independent random
> variables is approximately normal (Central Limit Theorem), the sum of
> independent Poisson variables is exactly Poisson! And you only need one
> number to characterise a Poisson distribution: the expected value (which
> also happens to be the same as the variance).

I'll implement it and see how it works out.
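
A first rough sketch of what that could look like (the tick size and the
way lambda is estimated are my own assumptions, not a finished design):
model each sample as true_time + k * tick with k Poisson-distributed, and
use the fact that a Poisson distribution's mean equals its variance as a
sanity check on the fit.

# Sketch: check how well a "true time + Poisson noise" model fits a set
# of timing samples.  The tick size (cost of one noise event) and the way
# lambda is estimated are assumptions for illustration only.
def poisson_fit(samples, tick=0.001):
    true_time = min(samples)              # noise is never negative
    counts = [(s - true_time) / tick for s in samples]
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    # For a Poisson distribution the mean equals the variance, so these
    # two estimates of lambda should agree if the model is reasonable.
    return true_time, mean, var

samples = [1.003, 1.006, 1.004, 1.008, 1.005, 1.002, 1.007]
t0, lam_mean, lam_var = poisson_fit(samples)
print('estimated true time: %.3f s' % t0)
print('lambda from mean: %.2f, from variance: %.2f' % (lam_mean, lam_var))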


