[cfe-dev] [LLVMdev] [3.6 Release] RC3 has been tagged

Mehmet Erol Sanliturk m.e.sanliturk at gmail.com
Wed Feb 18 13:16:32 PST 2015


On Wed, Feb 18, 2015 at 12:19 PM, Renato Golin <renato.golin at linaro.org>
wrote:

> On 18 February 2015 at 20:10, Jack Howarth
> <howarth.mailing.lists at gmail.com> wrote:
> > Well, I assumed that llvm releases were supposed to be more than just
> > glorified semi-annual snapshots.
>
> It's that time of the year when we all get together and run tests
> and benchmarks we normally don't run the rest of the year, but we stop
> the analogy short of giving each other presents. :)
>
> So far we've been pretty good at spotting regressions like that and
> fixing them later, and people like Michael will keep us on our toes if
> we ship again with the regression, or if we deviate too much
> from GCC.
>
> cheers,
> --renato



Looking at a few numbers and declaring that "performance increased" or
"performance decreased" is not a sound way to make decisions.

It is necessary to specify a set of programs that forms a representative
sample of the real-life usage population (a sample of more than about 60
programs is needed for the Central Limit Theorem to apply, i.e., for the
sample mean to be approximately normally distributed).
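
For reference, the textbook statement behind that rule of thumb (the
standard Central Limit Theorem, added here for context): for independent
timings $X_1, \dots, X_n$ with mean $\mu$ and variance $\sigma^2$, the
sample mean is approximately

    $\bar{X}_n \sim \mathcal{N}(\mu,\ \sigma^2 / n)$   for large $n$,

which is what justifies the t-based tests described below.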


Then the measurements should be taken in the SAME execution environment
(meaning the same set of programs running on each computer; ideally nothing
should run other than the compilation and the program under test). The
computers themselves may be of any model, from 500 MHz to 5 GHz, with
different operating systems, because the comparison is paired within each
machine.


After obtaining the compilation times, they should be analysed with a
suitable statistical package, testing equality of variances and equality
of means on pair-wise correlated values (a paired test, since each program
is timed under both compiler versions).
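
As a minimal sketch of such an analysis (assuming SciPy is available;
the timing numbers below are invented purely for illustration):

    import numpy as np
    from scipy import stats

    # Hypothetical compile times (seconds) for the same 8 programs,
    # measured under the old and the new compiler on the same machine.
    old = np.array([12.1, 8.4, 30.2, 5.7, 19.9, 44.0, 9.3, 15.6])
    new = np.array([12.5, 8.2, 31.0, 5.9, 20.4, 45.1, 9.1, 16.0])

    # Paired t-test on the per-program differences ("pair-wise
    # correlated values"): tests whether the mean difference is zero.
    t_stat, p_mean = stats.ttest_rel(old, new)
    print("paired t = %.3f, p = %.3f" % (t_stat, p_mean))

    # Levene's test for equality of the two variances.
    w_stat, p_var = stats.levene(old, new)
    print("Levene W = %.3f, p = %.3f" % (w_stat, p_var))

    # A large p-value (say > 0.05) means the difference is not
    # statistically significant, as discussed below.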


If the tests reveal that the differences are not significant, then saying
"there is a performance loss" is not correct.


If the variances are different (the populations are not from the SAME
distribution), then the variability of the computations has either
increased or decreased: a decrease is good, an increase requires inspection.



If the means are different (the populations are not from the SAME
distribution), then the average time of the computations has either
increased or decreased: a decrease is good, an increase requires inspection.
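
To quantify the mean case, the usual paired-analysis interval (a standard
formula, added here for reference) uses the per-program differences
$d_i = t_i^{\mathrm{new}} - t_i^{\mathrm{old}}$ with sample mean $\bar{d}$
and sample standard deviation $s_d$:

    $\bar{d} \pm t_{n-1,\,0.975} \cdot s_d / \sqrt{n}$

is a 95% confidence interval for the true mean difference; its sign shows
whether the average time increased or decreased.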


Repeat the above computations (a timing sketch follows below) for:
- No optimization
- Optimizations applied
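
A minimal sketch of collecting such per-level timings (assumptions: a
Unix-like system with clang on the PATH, and test.c as a stand-in source
file; neither comes from this thread):

    import subprocess, time

    def compile_time(opt_flag, source="test.c", repeats=3):
        """Time one clang invocation at a given optimization level."""
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            subprocess.run(
                ["clang", opt_flag, "-c", source, "-o", "/dev/null"],
                check=True)
            times.append(time.perf_counter() - start)
        # The minimum is the least disturbed by other processes.
        return min(times)

    for flag in ("-O0", "-O2"):
        print(flag, "%.3f s" % compile_time(flag))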


Then the reasons should be checked one by one, for example:

(1) Assume a bug is corrected by introducing a check: is it possible to
remove the correction and keep the bug in order to increase speed?
Obviously NOT.

(2) At some point an optimization ignores a feature; this is not a bug
(the optimization is simply not applied).

(3) During code generation there are redundant parts which could be
eliminated. This is not a bug, but it slows down the computation, because
the computed values are not affected.


(4) Instead of letting the programs crash blindly, new checks are
introduced to prevent such crashes: is it necessary to eliminate those
checks?
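
As a toy illustration of the trade-off in points (1) and (4)
(hypothetical functions, not from this thread; in Python the "crash"
is an uncaught exception standing in for a compiled program's crash):

    import timeit

    def divide_unchecked(a, b):
        return a / b

    def divide_checked(a, b):
        # The safety check costs one comparison per call...
        if b == 0:
            raise ValueError("b must be non-zero")
        # ...but removing it to regain speed would reintroduce the
        # blind failure on b == 0.
        return a / b

    for fn in (divide_unchecked, divide_checked):
        t = timeit.timeit(lambda: fn(10, 3), number=1_000_000)
        print(fn.__name__, "%.3f s" % t)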

Then it can be decided which points need improvement and which will be
kept as they are.


Without such analyses, pursuing a discussion over a handful of differing
raw numbers and making decisions from them will not contribute anything to
the development of Clang/LLVM.


Thank you very much.

Mehmet Erol Sanliturk