[LLVMdev] Why is the default LNT aggregation function min instead of mean

David Tweed david.tweed at gmail.com
Sat Jan 18 03:02:38 PST 2014


Note that it's very possible to get those kinds of effects from
other sources of computational load on the machine; see the fib35
graphs on

http://www.serpentine.com/blog/2009/09/29/criterion-a-new-benchmarking-library-for-haskell/
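
As a toy illustration of why that matters for the min-vs-mean question (a
made-up simulation, not LNT code, with invented noise parameters): if other
load on the machine only ever makes a run slower, the sample minimum settles
on the true run-time after a handful of samples, while the mean stays
inflated and keeps moving as more noisy samples arrive.

# Made-up simulation: "true" run-time is 1.00s, but other load on the
# machine occasionally adds a one-sided slowdown to a sample.
import random

random.seed(0)

def sample_runtime():
    t = 1.00 + random.gauss(0, 0.005)      # small measurement jitter
    if random.random() < 0.2:              # ~20% of runs hit external load
        t += random.uniform(0.05, 0.3)     # load only ever makes it slower
    return t

samples = [sample_runtime() for _ in range(50)]

for n in (5, 10, 20, 50):
    window = samples[:n]
    print("n=%2d  min=%.3f  mean=%.3f" % (n, min(window), sum(window) / n))

# The min is essentially 1.00 after a few samples; the mean is pulled up by
# whatever background load happened to occur and converges much more slowly.

That one-sidedness of the noise is the usual argument for min as the
aggregation function on a machine that isn't perfectly quiet.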


On Sat, Jan 18, 2014 at 12:57 AM, Tobias Grosser <tobias at grosser.es> wrote:
> On 01/17/2014 08:58 AM, Chris Matthews wrote:
>>
>> Is it the case that you converge on the min faster than the mean?
>
>
> Sorry, I do not fully understand what you mean here. What exactly would I
> need to do to check this? Should I just pick a couple of test/run pairs and
> see after how many samples the min/mean stops changing?
>
> What conclusion can I take from this?
>
>
>> Right now there is no way to set a per-tester aggregation function.
>>
>> I had spent a little time trying to detect regressions using k-means
>> clustering.  It looked promising.  That was outside LNT though.
>
>
> Interesting idea, and that would most likely be helpful in configurations
> where we actually get run-times clustered in different groups. I was
> initially assuming this, but after Chandler's comments I have the feeling we
> actually only have a single cluster, where the measurements are just grouped
> due to the limited resolution of the analysis.
>
> Cheers
>
> Tobias
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
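
On the k-means idea upthread: purely as a sketch of what that might look
like (nothing like this exists in LNT; the sample data and the 5% separation
threshold below are invented), one could cluster the combined before/after
samples into two groups and only flag a regression when the clusters are
clearly separated and line up with the two runs.

# Sketch only: cluster before/after timing samples with k-means (k=2) and
# flag a regression when the clusters are separated and match the runs.
import numpy as np
from sklearn.cluster import KMeans

before = np.array([1.01, 1.00, 1.02, 1.01, 1.00])  # baseline run-times (s), invented
after  = np.array([1.12, 1.11, 1.13, 1.12, 1.11])  # current run-times (s), invented

X = np.concatenate([before, after]).reshape(-1, 1)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

lo_c, hi_c = sorted(float(c) for c in km.cluster_centers_.ravel())
labels_before = set(km.labels_[:len(before)])
labels_after = set(km.labels_[len(before):])

separated = (hi_c - lo_c) / lo_c > 0.05             # invented 5% threshold
split_by_run = (labels_before != labels_after
                and len(labels_before) == 1 and len(labels_after) == 1)
print("possible regression" if separated and split_by_run else "no clear change")

The obvious failure mode is the one Tobias describes: if there is really
only a single cluster, k=2 will still split it, so you need some separation
test like the above before trusting the labels.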



-- 
cheers, dave tweed__________________________
high-performance computing and machine vision expert: david.tweed at gmail.com
"while having code so boring anyone can maintain it, use Python." --
attempted insult seen on slashdot


