<div dir="ltr">I just knocked ~400k off the size of the x86 scheduler tables by reducing from 5k+ entries to 2k+ entries per cpu.</div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature" data-smartmail="gmail_signature">~Craig</div></div>
<br><div class="gmail_quote">On Tue, Mar 20, 2018 at 6:34 PM, Andrew Trick <span dir="ltr"><<a href="mailto:atrick@apple.com" target="_blank">atrick@apple.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word;line-break:after-white-space"><br><div><span class=""><br><blockquote type="cite"><div>On Mar 17, 2018, at 4:04 PM, Craig Topper via cfe-dev <<a href="mailto:cfe-dev@lists.llvm.org" target="_blank">cfe-dev@lists.llvm.org</a>> wrote:</div><br class="m_-5113001645019584723Apple-interchange-newline"><div><div dir="ltr">I'm sure the x86 scheduler models are causing bloat. Every time a single instruction appears on a line by itself like this in a scheduler model:<div><br></div><div>def: InstRW<[SBWriteResGroup2], (instregex "ANDNPDrr")>;<br></div><div><br></div><div>It causes that instruction to be its own group in the generated output. And its replicated for each CPU. We should look into better using regular expressions or taking advantage of the fact that InstRW can take a list of instructions. That makes those instructions part of a single group and the tablegen backend will only split the group if two CPUs have different ports, latency, etc. for instructions within the group.</div></div><div class="gmail_extra"><br clear="all"><div><div class="m_-5113001645019584723gmail_signature" data-smartmail="gmail_signature">~Craig</div></div></div></div></blockquote><div><br></div></span><div>The tables themselves are compact. There’s actually a lot of complexity spent on compacting the resource and latency tables. But, yes, there are 5k+ entries per cpu, roughly 28 byte each. However, if you're looking at a debug build, the tables will be huge. The scheduling class names are much bigger than the data.</div><div><br></div><div>-Andy</div><div><div class="h5"><div><br></div><blockquote type="cite"><div><div class="gmail_extra">
<div class="gmail_quote">On Sat, Mar 17, 2018 at 6:26 AM, Greg Bedwell via cfe-dev <span dir="ltr"><<a href="mailto:cfe-dev@lists.llvm.org" target="_blank">cfe-dev@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div dir="auto">Thanks for raising this. This is something we've recently been looking at too at Sony, as over the course of PS4's lifetime so far we've seen our clang executable on Windows approximately double in size, which isn't ideal for things like distributed build systems. A graph of clang.exe size on our internal staging branch matches yours closely with it being more of a death by a thousand cuts rather than being down to a small number of sudden big-bang changes. </div><div dir="auto"><br></div><div dir="auto">I did spot one range of about 25 upstream commits in our data where the exe size increased by over 1MB. My prime suspect in that range was a new scheduling model being added to the X86 backend but I've not bisected further to be sure yet. This would be an interesting case for us as we don't really need to support any models other than Jaguar for our users but don't want to break the LLVM tests, nor introduce loads of private changes to our branch.</div><div dir="auto"><br></div><div dir="auto">I know our test/QA team have been doing some analysis using Bloaty McBloatFace to see exactly where the size is coming from and produced some really nice visualizations of that data. They've also been looking at how the MinSizeRelease config does on Windows. I think the size savings were decent but I'm not sure of performance numbers, if they have any yet.</div><div dir="auto"><br></div><div dir="auto">I'll ask around at what we have to share once back in the office. </div><div dir="auto"><br></div><div dir="auto">Thanks for sharing your data!</div><div dir="auto"><br></div><div dir="auto">-Greg</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br><div class="gmail_quote"><div><div class="m_-5113001645019584723h5"><div>On Sat, 17 Mar 2018 at 12:36, Dimitry Andric via cfe-dev <<a href="mailto:cfe-dev@lists.llvm.org" target="_blank">cfe-dev@lists.llvm.org</a>> wrote:<br></div></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="m_-5113001645019584723h5">Hi all,<br>
>>>>
>>>> I recently did a run where I built clang executables on FreeBSD 12-CURRENT [1], from trunk r250000 (2015-10-11) all through r327700 (2018-03-16), with increments of 100 revisions. This is mainly meant as an archive, for easily doing bisections, but there are also some interesting statistics.
>>>>
>>>> From r250000 through r327700:
>>>> * the total (stripped) executable size grew by approximately 43%
>>>> * the size of the text segment grew by approximately 41%
>>>> * the size of the data segment grew by approximately 61%
>>>> * the size of the bss segment grew by approximately 185%
>>>> * real build time (on a 32 core system) grew by approximately 60%
>>>> * user build time (on a 32 core system) grew by approximately 62%
>>>> * maximum resident set size (RSS) grew by approximately 32%
>>>>
>>>> Google spreadsheet with more numbers and some graphs:
>>>>
>>>> https://docs.google.com/spreadsheets/d/e/2PACX-1vSGq1U7j45JNC_bcG4HV3jKOV4WBUPbTSgMMFXd5SD0IEPTAFwWnlU2ysprmnHsNe5WONRCjg8F5mHK/pubhtml
>>>>
>>>> -Dimitry
>>>>
>>>> [1] These were built using the "ninja clang clang-headers" target, followed by "ninja install-clang install-clang-headers".