<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:12.0pt;
font-family:"Times New Roman",serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
span.EmailStyle17
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:#1F497D;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;
font-family:"Calibri",sans-serif;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D">In the context I think we’re most interested in, we’re trying to measure the quality of the code being generated. As you mention here, that quality is highly target-dependent, hinging on instruction set features, cache sizes, and many other details of the implementation.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D"><br>
The only way I have been able to think of to get extremely repeatable results is to run the generated code on an emulator and collect a synthetic performance measurement of some kind – the number of ticks on the emulator clock, or something like that. (I know that with multiple threads this is too simplistic, and it may not be the only measurement we’re interested in.)<o:p></o:p></span></p>
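<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D">To illustrate the kind of synthetic measurement I mean, here is a toy sketch – the opcode set and per-opcode tick costs are entirely made up – of an interpreter that charges a fixed, deterministic number of ticks per instruction, so identical inputs always yield identical counts:<o:p></o:p></span></p>

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Toy "emulator" that charges a fixed, deterministic tick cost per opcode.
// The opcode set and costs here are hypothetical; a real emulator would have
// to model the actual ISA, pipeline, and memory hierarchy for the numbers to
// mean anything in the real world.
enum class Op { Add, Mul, Load, Store };

uint64_t runAndCountTicks(const std::vector<Op> &program) {
  static const std::map<Op, uint64_t> cost = {
      {Op::Add, 1}, {Op::Mul, 3}, {Op::Load, 4}, {Op::Store, 4}};
  uint64_t ticks = 0;
  for (Op op : program)
    ticks += cost.at(op); // identical input => identical tick count
  return ticks;
}
```

<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D">The point is only that the measurement is a pure function of the program, which is what makes it repeatable.<o:p></o:p></span></p>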
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D">This idea has obvious flaws: emulation is usually slow; the accuracy of the emulation is critical if you want the numbers to mean something in the real world; and you would need good emulators for all of the targets you really care about. I googled a bit and couldn’t find much evidence of people using emulators for performance testing, so maybe this idea just doesn’t work.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D">For one code generator I have worked on, we ended up doing our own synthetic performance analysis based on an analysis of the IR available in the backend, and then instrumenting our output so that at runtime we accumulate the “cost” of the code as it executes. It has flaws, but it’s repeatable, and your test harness can say “this code looks worse – someone needs to look at it.”<o:p></o:p></span></p>
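<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D">To make the instrumentation idea concrete, here is a minimal sketch – the per-block costs and the counter name are hypothetical. The backend assigns each basic block a static cost from its IR, and the emitted code bumps a global accumulator at every block entry, so the accumulated “cost” is a deterministic function of the path the program actually executes:<o:p></o:p></span></p>

```cpp
#include <cstdint>

// Global accumulator that the instrumented program updates as it runs. In a
// real code generator, the backend would emit an add to this counter at the
// entry of every basic block, using a cost computed from the block's IR.
static uint64_t gDynamicCost = 0;

// Hypothetical per-block costs, as a static IR analysis might assign them.
constexpr uint64_t kCostLoopBody = 7;
constexpr uint64_t kCostExit = 2;

// Hand-written stand-in for what instrumented output would look like.
uint64_t sumFirstN(uint64_t n) {
  uint64_t total = 0;
  for (uint64_t i = 1; i <= n; ++i) {
    gDynamicCost += kCostLoopBody; // emitted by the instrumentation pass
    total += i;
  }
  gDynamicCost += kCostExit; // cost of the exit block
  return total;
}
```

<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D">After a run, the harness reads the accumulator and compares it against a baseline; the number is repeatable because no wall clocks are involved.<o:p></o:p></span></p>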
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D">Otherwise, I agree that you need a good statistical model and a tightly controlled hardware/software environment to do meaningful performance testing. It’s a hard problem.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif">From:</span></b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif"> llvmdev-bounces@cs.uiuc.edu [mailto:llvmdev-bounces@cs.uiuc.edu]
<b>On Behalf Of </b>Chandler Carruth<br>
<b>Sent:</b> Thursday, January 08, 2015 3:55 AM<br>
<b>To:</b> Sean Silva<br>
<b>Cc:</b> llvmdev@cs.uiuc.edu<br>
<b>Subject:</b> Re: [LLVMdev] Postponing more passes in LTO<o:p></o:p></span></p>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<p class="MsoNormal">On Thu, Dec 18, 2014 at 2:16 PM, Sean Silva <<a href="mailto:chisophugis@gmail.com" target="_blank">chisophugis@gmail.com</a>> wrote:<o:p></o:p></p>
<blockquote style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0in 0in 0in 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0in;margin-bottom:5.0pt">
<div>
<p class="MsoNormal">Okay, so I've cast doubt on the use of standard deviation (and actually an argument could be made against the mean too for similar reasons). On a more constructive note, what *is* a good small set of parameters to summarize measurements
of program run time? One interesting model that was brought to my attention by Nick Lewycky is based on the following observation: most "perturbations" of the run time of a fixed program binary on a fixed chip and fixed OS are due to externalities that contend
for (or otherwise deprive the program of) execution resources and therefore almost always make it *slower*.<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">The model is then that there is a "true" execution time for the program, which is a completely deterministic function of the program, chip, and OS; intuitively, this is the time observed when the program is, e.g., able to execute by itself, on a single execution unit of the CPU, with no interrupts knocking it off the CPU, and no other threads using any of the execution resources of that CPU. On top of this, we assume that there is a set of perturbations that contend for execution resources and slow the program down.<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">For this model, it makes sense to approximate the "true" value by the minimum of the recorded run times, since the perturbations only ever add time. If we want to use two numbers to summarize the data, it probably makes sense to look for a way of characterizing the extent of the variation from this "true" value. In the absence of a proper characterization of the perturbations (such as the distribution of when they occur and how much effect they have when they do), one simple choice is the maximum. Min and max might be *too* sensitive to the perturbations (especially the disk cache of the program itself on the first run), so a pair of percentiles might be a bit more reliable, such as the 10th and 90th percentiles.<o:p></o:p></p>
</div>
</blockquote>
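<p class="MsoNormal">[Editor's note: the summarization sketched in the quote above – an extreme value as the estimate of the unperturbed time, plus a percentile pair to characterize the spread – might look like the following; the nearest-rank percentile used here is just one of several common conventions.]<o:p></o:p></p>

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct RunTimeSummary {
  double fastest;  // closest observation to the unperturbed run time
  double p10, p90; // percentile pair characterizing the spread
};

// Nearest-rank percentile on an already-sorted sample vector.
static double percentile(const std::vector<double> &sorted, double p) {
  std::size_t rank = static_cast<std::size_t>(std::llround(p * (sorted.size() - 1)));
  return sorted[rank];
}

// Summarize noisy run-time samples: minimum plus 10th/90th percentiles.
RunTimeSummary summarize(std::vector<double> samples) {
  std::sort(samples.begin(), samples.end());
  return {samples.front(), percentile(samples, 0.10), percentile(samples, 0.90)};
}
```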
</div>
<p class="MsoNormal"><br>
I used to subscribe to this thinking. I no longer do, and I think it has serious problems. Let me try to explain with a hopefully entertaining analogy. =D<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">Back in physics class we would analyze the behavior of basic principles of physics as they acted upon a brave cadre of scientifically savvy penguins sliding down slopes under the effect of gravity. Naturally, these slopes were very slick ice, and the physics problems would be solved "neglecting friction"[1]. That made the analysis much more tractable and predictable. It also made the results really inaccurate in the real world.<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">Now, I'm not actually deriding this approach to physics. The results from neglecting friction are *very* informative as to the basic principles at work. In a very similar way, I think the model of a "true" execution time is informative, but much more about the principles at work than about the actual performance of the software.<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">The reality I have observed these days, especially in larger software systems, is that there is inherently no deterministic and representative execution to rely on for finding "true" execution time. And too many of the random fluctuations
are actually by-products of the program itself. Some examples:<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">- If the program runs uninterrupted on a single core, it may cause that core to be throttled due to the thermals generated. This can lower overall performance below what you would see if the program ran *just* under the throttling threshold and avoided repeated drops in frequency.<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">- Different optimizations can change the thermal profile by using different execution units. This is especially true for vector units.<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">- If you fully leverage the AVX unit of some Intel chips, my understanding is that, regardless of the specific thermals, those units run at a lower clock!<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">- Naturally, things like ASLR or other sources of correct but non-deterministic layout of data in memory will cause fluctuations in the rate at which cache misses occur. This is probably the most common cause of non-external fluctuations
in performance. Different generated code, register spill frequencies, etc., can dramatically change these cache collision rates.<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">Now, with all of these factors, we have to assume that in real-world scenarios, programs will not always run at the maximal speed. Instead, there is, much as you describe in your email, some distribution of performance that will be observed in practice. The interesting question is: can generating different code from the compiler shift the distribution in a meaningful way that is not correlated with the fastest measurement (or the N fastest measurements)? And I think the above examples are all places where this can happen. =/<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">As a consequence, I am increasingly in favor of using very simple but aggregated statistics for measuring performance, and acknowledging the lack of precision when it is present. We can still very often precisely measure the localized improvements provided by a specific change, even when they are very small. We just won't see them reflected in many cases when measuring large systems that have a high noise floor. Hopefully, we can at least observe good trends in the large system to confirm we are making progress, and at times we can make large improvements that actually rise above the noise floor.<o:p></o:p></p>
</div>
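<p class="MsoNormal">[Editor's note: one hypothetical version of such "simple but aggregated statistics": compare medians across many runs and treat the interquartile range of the baseline as a crude noise floor, flagging a change only when the median shift clears it. The function names and thresholds here are purely illustrative.]<o:p></o:p></p>

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Order statistic helper: value at the given fractional position of the
// sorted samples (simple index truncation; other conventions exist).
static double sortedAt(std::vector<double> v, double frac) {
  std::sort(v.begin(), v.end());
  return v[static_cast<std::size_t>(frac * (v.size() - 1))];
}

// Report an improvement only when the drop in median run time exceeds the
// baseline's interquartile range, a crude stand-in for the noise floor.
bool improvementAboveNoise(const std::vector<double> &before,
                           const std::vector<double> &after) {
  double medBefore = sortedAt(before, 0.5);
  double medAfter = sortedAt(after, 0.5);
  double noise = sortedAt(before, 0.75) - sortedAt(before, 0.25);
  return (medBefore - medAfter) > noise; // run time dropped more than the IQR
}
```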
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">Anyways, my somewhat rambling 2 cents on this. Sorry for the not entirely structured email. =]<o:p></o:p></p>
</div>
</div>
</div>
</body>
</html>