<div dir="ltr">The LTO time could be explained by second-order effects: increased dcache/dTLB pressure from the larger memory footprint and poor locality.<div><br></div><div>David</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Mar 8, 2016 at 5:47 PM, Sean Silva via llvm-dev <span dir="ltr"><<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="h5">On Tue, Mar 8, 2016 at 2:25 PM, Mehdi Amini <span dir="ltr"><<a href="mailto:mehdi.amini@apple.com" target="_blank">mehdi.amini@apple.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div><div><br><div><blockquote type="cite"><div>On Mar 8, 2016, at 1:09 PM, Sean Silva via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>> wrote:</div><br><div><div dir="ltr" style="font-family:Helvetica;font-size:12px;font-style:normal;font-variant:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px"><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Mar 8, 2016 at 10:42 AM, Richard Smith via llvm-dev<span> </span><span dir="ltr"><<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>></span><span> </span>wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span>On Tue, Mar 8, 2016 at 8:13 AM, Rafael Espíndola<br><<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>> wrote:<br>> I have just benchmarked building trunk llvm and clang in 
Debug,<br>> Release and LTO modes (see the attached script for the cmake lines).<br>><br>> The compilers used were clang 3.5, 3.6, 3.7, 3.8 and trunk. In all<br>> cases I used the system libgcc and libstdc++.<br>><br>> For release builds there is a monotonic increase in each version. From<br>> 163 minutes with 3.5 to 212 minutes with trunk. For comparison, gcc<br>> 5.3.2 takes 205 minutes.<br>><br>> Debug and LTO show an improvement in 3.7, but have regressed again in 3.8.<br><br></span>I'm curious how these times divide across Clang and various parts of<br>LLVM; rerunning with -ftime-report and summing the numbers across all<br>compiles could be interesting.<br></blockquote><div><br></div><div>Based on the results I posted upthread about the relative time spent in the backend for debug vs release, we can estimate this.</div><div>To summarize:</div><div>10% of time spent in LLVM for Debug</div><div>33% of time spent in LLVM for Release</div><div>(I'll abbreviate "in LLVM" as just "backend"; this is "backend" from clang's perspective)</div><div><br></div><div>Let's look at the difference between 3.5 and trunk.</div><div><br></div><div>For debug, the user time jumps from 174m50.251s to 197m9.932s.</div><div>That's {10490.3, 11829.9} seconds, respectively.</div><div>For release, the corresponding numbers are:</div><div>{9826.71, 12714.3} seconds.<br></div><div><br></div><div>debug35 = 10490.251<br></div><div>debugTrunk = 11829.932<br></div><div><br></div><div>debugTrunk/debug35 == 1.12771<br></div><div>debugRatio = 1.12771</div><div><br></div><div>release35 = 9826.705<br></div><div>releaseTrunk = 12714.288<br></div><div><br></div><div>releaseTrunk/release35 == 1.29385<br></div><div>releaseRatio = 1.29385</div><div><br></div><div>For simplicity, let's use a simple linear model for the distribution of slowdown between the frontend and backend: a constant factor slowdown for the backend, and an independent constant factor slowdown for the frontend. 
This gives the following linear system:</div><div>debugRatio = .1 * backendRatio + (1 - .1) * frontendRatio</div><div>releaseRatio = .33 * backendRatio + (1 - .33) * frontendRatio</div><div><br></div><div>Solving this linear system, we find that under this simple model the expected slowdown factors are:</div><div>backendRatio = 1.77783<br></div><div>frontendRatio = 1.05547</div><div><br></div><div>Intuitively, backendRatio comes out larger in this comparison because we see the biggest slowdown during release (1.29 vs 1.12), and during release we are spending a larger fraction of time in the backend (33% vs 10%).</div><div><br></div><div>Applying this same model across Rafael's data, we find the following (numbers have been rounded for clarity):</div><div><br></div><div><div><font face="monospace, monospace">transition backendRatio frontendRatio</font></div><div><font face="monospace, monospace">3.5->3.6 1.08 1.03</font></div><div><font face="monospace, monospace">3.6->3.7 1.30 0.95</font></div><div><font face="monospace, monospace">3.7->3.8 1.34 1.07</font></div><div><font face="monospace, monospace">3.8->trunk 0.98 1.02</font><span style="font-family:monospace,monospace"> </span></div></div><div><div><br></div></div><div><div>Note that in Rafael's measurements LTO is pretty similar to Release from a CPU time (user time) standpoint. While the final LTO link takes a large amount of real time, it is single threaded. Based on the real time numbers the LTO link was only spending about 20 minutes single-threaded (i.e. about 20 minutes CPU time), which is pretty small compared to the 300-400 minutes of total CPU time. It would be interesting to see the numbers for -O0 or -O1 per-TU together with LTO.</div></div></div></div></div></div></blockquote><div><br></div><div><br></div></div></div></div>Just a note about LTO being sequential: Rafael mentioned he was "building trunk llvm and clang". 
By default I believe there are ~56 link targets that can be run in parallel (provided you have enough RAM to avoid swapping).</div></blockquote><div><br></div></div></div><div>D'oh! I was looking at the data wrong since I broke my Fundamental Rule of Looking At Data, namely: don't look at raw numbers in a table, since you are likely to misread things or form biases based on the order in which you look at the data points; *always* visualize. There is a significant difference between release and LTO. About 2x consistently.</div><div><br></div><div><img src="cid:ii_15358fe40a6fd5cd" alt="Inline image 3" style="margin-right:25px"><br></div><div><br></div><div>This is actually curious because during the release build, we were spending 33% of CPU time in the backend (as clang sees it; i.e. mid-level optimizer and codegen). This data is inconsistent with LTO simply being another run through the backend (which would be just +33% CPU time at worst). There seems to be something nonlinear happening.</div><div>To make it worse, the LTO build has approximately a full Release optimization running per-TU, so the actual LTO step should be seeing inlined/"cleaned up" IR which should be much smaller than what the per-TU optimizer is seeing, so naively it should take *even less* than "another 33% CPU time" chunk.</div><div>Yet we see a 1.5x-2x difference:</div><div><br></div><div><img src="cid:ii_153590d95038676b" alt="Inline image 4" style="margin-right:25px"><br></div><div><br></div><div>-- Sean Silva</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><span><font color="#888888"><div><br></div><span class="HOEnZb"><font color="#888888"><div>-- </div><div>Mehdi</div><div><br></div></font></span></font></span></div></blockquote></div><br></div></div>
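For reference, the 2x2 linear model in the quoted analysis can be solved mechanically. Here is a small sketch in Python; the 10%/33% backend fractions and the 3.5-vs-trunk user times are the ones quoted in the thread, while the helper name and parameter names are mine:

```python
# Sketch: solve the linear slowdown model from the thread.
#   debugRatio   = debug_be   * backendRatio + (1 - debug_be)   * frontendRatio
#   releaseRatio = release_be * backendRatio + (1 - release_be) * frontendRatio
# debug_be / release_be are the assumed fractions of compile time spent
# in the LLVM backend (10% for Debug builds, 33% for Release builds).

def solve_slowdown(debug_ratio, release_ratio, debug_be=0.10, release_be=0.33):
    # Eliminate frontendRatio by cross-scaling the two equations.
    backend = ((1 - debug_be) * release_ratio - (1 - release_be) * debug_ratio) / (
        (1 - debug_be) * release_be - (1 - release_be) * debug_be
    )
    # Back-substitute to recover the frontend slowdown factor.
    frontend = (debug_ratio - debug_be * backend) / (1 - debug_be)
    return backend, frontend

# 3.5 -> trunk, using the user times quoted above (in seconds).
backend, frontend = solve_slowdown(11829.932 / 10490.251, 12714.288 / 9826.705)
print(round(backend, 5), round(frontend, 5))  # 1.77783 1.05547
```

The same helper reproduces the per-transition table by plugging in each pair of adjacent release ratios.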
<br>_______________________________________________<br>
LLVM Developers mailing list<br>
<a href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a><br>
<a href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" rel="noreferrer" target="_blank">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br>
<br></blockquote></div><br></div>