<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jan 18, 2019 at 9:11 PM Manman Ren <<a href="mailto:manman.ren@gmail.com">manman.ren@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail-m_8266125427570799511gmail_attr">On Fri, Jan 18, 2019 at 4:11 PM Xinliang David Li <<a href="mailto:davidxl@google.com" target="_blank">davidxl@google.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail-m_8266125427570799511gmail-m_-5069364628899195513gmail_attr">On Fri, Jan 18, 2019 at 3:56 PM Manman Ren <<a href="mailto:manman.ren@gmail.com" target="_blank">manman.ren@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Some background information first, then a quick summary of what we have discussed so far!<br>
<br>
Background: The Facebook app is one of the biggest iOS apps. Because of this, we want the instrumentation to be as lightweight as possible in terms of binary size, profile data size, and runtime performance. The plan to improve Facebook app startup time is to (1) implement order file instrumentation that is as lightweight as possible, (2) push the order file instrumentation to internal users first, and then to external beta users if the overhead is low, (3) enable PGO instrumentation to collect information to guide hot/cold splitting, and (4) push PGO instrumentation to internal users.<br>
<br>
There are a few alternatives we have discussed:<br>
(A) What is proposed in the initial email: Log a (module id, function id) pair into a circular buffer in its own profile section when a function is first executed.<br>
<br>
(B) Reuse the existing infrastructure of per-function counters to record a timestamp when a function is first executed.<br>
Compared to option (A), the runtime overhead for option (B) should be higher since we would be fetching a timestamp for each function executed at startup,</div></blockquote><div><br></div><div>The 'timestamp' can just be a global index. Since there is one counter per func, the counter can be initialized to '-1' so that you don't need to use a bitmap to track whether the function has been invoked. In other words, the runtime overhead of B) could be lower :)</div></div></div></blockquote><div><br></div><div>That actually works! We only care about the ordering of the functions. But the concerns about profile data size and binary size still exist :]</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>David</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"> and the binary and the profile data will be larger since it needs one number for each function plus additional overhead in the per-function metadata recorded in llvm_prf_data. The buffer size for option (A) is controllable; it only needs to hold the number of functions executed at startup.<br></div></blockquote></div></div></blockquote><div><br></div><div>Do you have a rough estimate of how much overhead the per-function metadata adds?</div><div><br></div></div></div></blockquote><div><br></div><div>For PGO, it is 8 double words for one function, but 7 of the double words are unnecessary. It is entirely reasonable to emit only *one* double word (reference to name) in the per-function data when only order profiling is turned on (encode this in the profile header version field). 
We can delay support for mixed mode (with PGO instrumentation) until later.</div><div><br></div><div>David</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div></div><div>Manman</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">
<br>
For the Facebook app, we expect that the number of functions executed during startup is 1/3 to 1/2 of all functions in the binary. Profile data size is important since we need to upload the profile data from the device to the server.<br>
<br>
The plus side is reusing the existing infrastructure!<br>
<br>
In terms of integration with PGO instrumentation, both (A) and (B) should work. For (B), we need to increase the number of per-function counters by one. For (A), the order data will be in a separate section.<br>
<br>
(C) XRay<br>
We have not looked into this, but would like to hear more about it!<br>
<br>
(D) -finstrument-functions-after-inlining or -finstrument-function-entry-bare<br>
We are worried about the runtime overhead of calling a separate function during app startup.<br><br>
Thanks,<br>
Manman<br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail-m_8266125427570799511gmail-m_-5069364628899195513gmail-m_-4263256003748926041gmail_attr">On Fri, Jan 18, 2019 at 2:01 PM Chris Bieneman <<a href="mailto:chris.bieneman@me.com" target="_blank">chris.bieneman@me.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I would love to see this kind of order profiling support. Using dtrace to generate function orders is actually really problematic because dtrace made implementation tradeoffs that allow it to ignore probe execution when the performance impact on the system is too great. This can make dtrace non-deterministic, which is not ideal for generating optimization data.<br>
<br>
Additionally, if order generation could be enabled at the same time as PGO generation, that would be a great solution for generating profile data for optimizing clang itself. Clang has some scripts and build-system goop under utils/perf-training that can generate order files using dtrace and PGO data; it would be great to apply this technique to those tools too.<br>
<br>
-Chris<br>
<br>
> On Jan 18, 2019, at 2:43 AM, Hans Wennborg via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>> wrote:<br>
> <br>
> On Thu, Jan 17, 2019 at 7:24 PM Manman Ren via llvm-dev<br>
> <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>> wrote:<br>
>> <br>
>> Order file is used to teach ld64 how to order the functions in a binary. If we put all functions executed during startup together in the right order, we will greatly reduce the page faults during startup.<br>
>> <br>
>> To generate an order file for iOS apps, we usually use dtrace, but some apps have various startup scenarios that we want to capture in the order file. The dtrace approach is not easy to automate, and it is hard to capture the different ways of starting an app without automation. Instrumented builds, however, can be deployed to phones, and profile data can be automatically collected.<br>
>> <br>
>> For the Facebook app, looking at the startup distribution, we expect a big win from the order file instrumentation: from 100ms to 500ms+ in startup time.<br>
>> <br>
>> The basic idea of the pass is to use a circular buffer to log the execution ordering of the functions. We only log the function when it is first executed. Instead of logging the symbol name of the function, we log a pair of integers, with one integer specifying the module id, and the other specifying the function id within the module.<br>
> <br>
> [...]<br>
> <br>
>> clang has '-finstrument-function-entry-bare', which inserts a function call and is not as efficient.<br>
> <br>
> Can you elaborate on why this existing functionality is not efficient<br>
> enough for you?<br>
> <br>
> For Chrome on Windows, we use -finstrument-functions-after-inlining to<br>
> insert calls at function entry (after inlining) that calls a function<br>
> which captures the addresses in a buffer, and later symbolizes and<br>
> dumps them to an order file that we feed the linker. We use a similar<br>
> approach for Chrome on Android, but I'm not as familiar with the<br>
> details there.<br>
> <br>
> Thanks,<br>
> Hans<br>
> _______________________________________________<br>
> LLVM Developers mailing list<br>
> <a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a><br>
> <a href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" rel="noreferrer" target="_blank">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br>
<br>
</blockquote></div>
</blockquote></div></div>
</blockquote></div></div>
</blockquote></div></div>