<div dir="ltr">Justin, it looks like my email caused some misunderstanding. Let me clarify a few points first:<div><br></div><div>1) I am not proposing changing the default profile dumping model used today. The online merging is entirely optional;</div><div>2) the online profile merging does not convert from raw to indexed format. It does very simple raw-to-raw merging using existing runtime APIs;</div><div>3) the change to the existing profile runtime code is just a few lines. All the new functionality is isolated in one new file. This will become clear when the patch is sent out later.</div><div><br></div><div>My inline replies below:</div><div><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Feb 27, 2016 at 11:44 PM, Justin Bogner <span dir="ltr"><<a href="mailto:mail@justinbogner.com" target="_blank">mail@justinbogner.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">Xinliang David Li via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a>> writes:<br>
> One of the main missing features in the Clang/LLVM profile runtime is the lack of<br>
> support for online/in-process profile merging. Profile data collected<br>
> for different workloads of the same executable binary currently has to be<br>
> gathered and merged later by the offline post-processing tool. This limitation makes<br>
> it hard to handle cases where the instrumented binary needs to be run with a<br>
> large number of small workloads, possibly in parallel. For instance, to do<br>
> PGO for clang, we may choose to build a large project with the instrumented<br>
> Clang binary. This is hard because:<br>
><br>
> 1) to avoid profiles from different runs overriding each other, %p<br>
> substitution needs to be specified on the command line or in an<br>
> environment variable, so that each process dumps its profile data<br>
> into its own file, named using the pid.<br>
<br>
</span>... or you can specify a more specific name that describes what's under<br>
test, instead of %p.<br></blockquote><div><br></div><div>Yes -- but the problem still exists: each training process will still need its own copy of the raw profile.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<span class=""><br>
> This will create a huge demand on disk storage. For<br>
> instance, clang's raw profile size is typically 80M -- if the<br>
> instrumented clang is used to build a medium to large project<br>
> (such as clang itself), the profile data can easily use up hundreds of<br>
> gigabytes of local storage.<br>
<br>
</span>This argument is kind of confusing. It says that one profile is<br>
typically 80M, then claims that this uses 100s of GB of data. From<br>
these statements, I suppose that's only true if you run something like<br>
1000 profiling runs without merging the data in between. Is that what<br>
you're talking about, or did I miss something?<br></blockquote><div><br></div><div>Yes. For instance, first build clang with -fprofile-instr-generate=prof.data.%p, then use this instrumented clang to build another large project such as clang itself. The second build will produce tons of profile data.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
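To make this concrete, the current per-process workflow looks roughly like the following. The %p substitution in LLVM_PROFILE_FILE and the llvm-profdata merge step are real; the file names and the ninja build are just illustrative:

```shell
# Each clang process dumps its own raw profile, named by pid via %p:
export LLVM_PROFILE_FILE="prof.data.%p"
ninja                 # every compile job writes its own prof.data.<pid>
# Afterwards all the per-process raw files must be merged offline
# into a single indexed profile:
llvm-profdata merge -output=merged.profdata prof.data.*
```

This is exactly where the disk-space problem comes from: thousands of compile jobs times ~80M per raw profile.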
<span class=""><br>
> 2) pids can also be recycled. This means that some profile data may be<br>
> overwritten without anyone noticing.<br>
><br>
> The way to solve this problem is to allow profile data to be merged in<br>
> process.<br>
<br>
</span>I'm not convinced. Can you provide some more concrete examples of where<br>
the out of process merging model fails? This was a *very deliberate*<br>
design decision in how clang's profiling works, and most of the<br>
subsequent decisions have been based on this initial one. Changing it<br>
has far-reaching effects.</blockquote><div><br></div><div>I am not proposing changing the out-of-process merging -- it is still needed. What I meant is that, in a scenario where the instrumented binaries run many times (under their existing harness), there is no good/automatic way to make sure different processes' profile data won't have name conflicts.</div><div><br></div><div>Take clang's self-build (using the instrumented clang as the build compiler for profile bootstrapping) as an example. Ideally this should all be transparent -- i.e., set the instrumented compiler as the build compiler, run ninja or make, and things just work. But with the current default profile dumping mode, it can fail in many different ways:</div><div>1) Just run ninja/make -- all clang processes dump their profiles into the same file concurrently -- the result is a corrupted profile -- FAIL</div><div>2) run ninja with LLVM_PROFILE_FILE=....%p</div><div> 2.1) failure mode #1 --> a really slow build due to heavy IO, or running out of disk space</div><div> 2.2) failure mode #2 --> pid recycling leading to profile file name conflicts -- profiles get overwritten and we lose data</div><div><br></div><div>Even if 2) eventually succeeds, the user still has to merge thousands of raw profiles into an indexed profile.</div><div> </div><div>With the proposed online profile merging, you just use the instrumented clang, and one merged raw profile is automagically produced at the end. The raw-to-indexed merge is also much faster.</div><div><br></div><div>The online merge feature is a huge advantage when integrating the instrumented binary with existing build systems or load-testing harnesses -- it is almost seamless.</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<span class=""><br>
> I have a prototype implementation and plan to send it out for review<br>
> soon after some cleanups. By default, profile merging is off, and it can<br>
> be turned on with a user option or via an environment variable. The following<br>
> summarizes the issues involved in adding this feature:<br>
> 1. the target platform needs to have file locking support;<br>
> 2. there needs to be an efficient way to identify the profile data and associate it<br>
> with the binary, using a binary/profdata signature;<br>
> 3. currently, without merging, profile data from shared libraries<br>
> (including dlopen/dlclose ones) is concatenated into the primary<br>
> profile file. This can complicate matters, as the merger needs to<br>
> find the matching shared libs and also needs to avoid<br>
> unnecessary data movement/copying;<br>
> 4. value profile data is variable in length, even for the same binary.<br>
<br>
</span>If we actually want this, we should reconsider the design of having a<br>
raw vs processed profiling format. The raw profile format is<br>
specifically designed to be fast to write out and not to consider<br>
merging profiles at all. This feature would make it nearly as<br>
complicated as the processed format and lose all of the advantages of<br>
making them different.<br></blockquote><div><br></div><div>See above -- the raw profile dumping mechanism is all still kept; there won't be any change to that.</div><div><br></div><div>thanks,</div><div><br></div><div>David</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb"><div class="h5"><br>
> All the above issues are resolved, and a clang self-build with the instrumented<br>
> binary passes (with both -j1 and high parallelism).<br>
><br>
> If you have any concerns, please let me know.<br>
</div></div></blockquote></div><br></div></div>