<div dir="ltr"><div><div><div><div><div>Thank you for that clarification. Sounds like we can't change the crc code then.<br><br></div>I realized I had been using GNU's gold linker. I switched to linking with lld(-4.0) and now linking uses less than 1/3rd the cpu. It seems that the default hashing (fast == xxHash) is faster than whatever gold was using. I'll just switch to that and call it a day.<br></div></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Apr 18, 2017 at 5:46 AM, Pavel Labath <span dir="ltr"><<a href="mailto:labath@google.com" target="_blank">labath@google.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">What we need is the ability to connect a stripped version of an SO to one with debug symbols present. Currently there are (at least) two ways to achieve that:<div><br><div>- build-id: both SOs have a build-id section with the same value. Normally, that's added by a linker in the final link, and subsequent strip steps do not remove it. Normally the build-id is some sort of a hash of the *initial* file contents, which is why you feel like you are trading debugger startup time for link time. However, that is not a requirement, as the exact checksumming algorithm does not matter here. A random byte sequence would do just fine, which is what "--build-id=uuid" does and it should have no impact on your link time. Be sure **not** to use this if you care about deterministic builds though.</div><div><br></div><div>- gnu_debuglink: here, the stripped SO contains a checksum of the original SO, which is added at strip time. This is done using a fixed algorithm, and this is important as the debugger needs to arrive at the same checksum as the strip tool. Also worth noting is that this mechanism embeds the path of the original SO into the stripped one, whereas the first one leaves the search task up to the debugger. This may be a plus or a minus, depending on your use case.</div></div><div><br></div><div>Hope that makes things a bit clearer. Cheers,</div><div>pl</div><div><br></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On 13 April 2017 at 18:31, Scott Smith <span dir="ltr"><<a href="mailto:scott.smith@purestorage.com" target="_blank">scott.smith@purestorage.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Interesting. That saves lldb startup time (after crc improvements/parallelization) by about 1.25 seconds wall clock / 10 seconds cpu time, but increases linking by about 2 seconds of cpu time (and an inconsistent amount of wall clock time). That's only a good tradeoff if you run the debugger a lot.<br><br></div>If all you need is a unique id, there are cheaper ways of going about it. The SSE crc instruction would be cheaper, or using CityHash/MurmurHash for other cpus. I thought it was specifically tied to that crc algorithm. 
On Thu, Apr 13, 2017 at 4:28 AM, Pavel Labath <labath@google.com> wrote:

Improving the checksumming speed is definitely a worthwhile contribution, but be aware that there is a pretty simple way to avoid computing the CRC altogether, and that is to make sure your binaries have a build ID. This is generally as simple as adding -Wl,--build-id to your compiler flags.

+1 to moving the checksumming code to llvm

pl

On 13 April 2017 at 07:20, Zachary Turner via lldb-dev <lldb-dev@lists.llvm.org> wrote:

I know this is outside of your initial goal, but it would be really great if JamCRC were updated in llvm to be parallel. I see that you're making use of TaskRunner for the parallelism, but that looks pretty generic, so perhaps it could be raised into llvm as well if it helps.

Not trying to throw extra work on you, but it seems like a really good general-purpose improvement, and it would be a shame if only lldb could benefit from it.

On Wed, Apr 12, 2017 at 8:35 PM Scott Smith via lldb-dev <lldb-dev@lists.llvm.org> wrote:

OK, I stripped out the zlib CRC algorithm and just left the parallelism plus calls to zlib's crc32_combine, but only if we are actually linking with zlib. I left those calls here (rather than folding them into JamCRC) because I'm taking advantage of TaskRunner to parallelize the work.

I moved the system include block after the llvm includes, both because I had to (to use the config #defines) and because it fits the published coding convention.

By itself, this reduces my test time from 55 to 47 seconds. (The original time is slower than before because I pulled the latest code; I guess there's another slowdown to fix.)
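(The patch itself isn't reproduced in this thread; as a rough sketch of the chunk-and-combine approach it describes — assuming zlib is linked in, and using plain std::thread where the patch uses lldb's TaskRunner; the 1 MiB chunk size is made up for illustration — it might look something like this:)

    // Sketch only: CRC separate regions in parallel, then merge the
    // partial results with zlib's crc32_combine().
    #include <zlib.h>
    #include <algorithm>
    #include <cstdint>
    #include <cstddef>
    #include <thread>
    #include <vector>

    uint32_t parallel_crc32(const uint8_t *data, size_t size) {
      const size_t kChunk = 1 << 20; // arbitrary chunk size for this sketch
      if (size == 0)
        return static_cast<uint32_t>(crc32(0L, Z_NULL, 0));
      const size_t nchunks = (size + kChunk - 1) / kChunk;
      std::vector<uLong> partial(nchunks);
      std::vector<std::thread> workers;
      for (size_t i = 0; i != nchunks; ++i)
        workers.emplace_back([&, i] {
          const size_t off = i * kChunk;
          const size_t len = std::min(kChunk, size - off);
          partial[i] = crc32(crc32(0L, Z_NULL, 0), data + off, (uInt)len);
        });
      for (std::thread &w : workers)
        w.join();
      // crc32_combine() takes the length of the *second* block so it can
      // shift the first CRC past it before xor-ing the two together.
      uLong crc = partial[0];
      for (size_t i = 1; i != nchunks; ++i) {
        const size_t len = std::min(kChunk, size - i * kChunk);
        crc = crc32_combine(crc, partial[i], (z_off_t)len);
      }
      return static_cast<uint32_t>(crc);
    }

(Spawning one thread per chunk is obviously naive; a thread pool such as TaskRunner would bound the parallelism.)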
On Wed, Apr 12, 2017 at 12:15 PM, Scott Smith <scott.smith@purestorage.com> wrote:

The algorithm included in ObjectFileELF.cpp performs a byte-at-a-time computation, which causes long pipeline stalls in modern processors. Unfortunately, the polynomial used is not the same one used by the SSE 4.2 instruction set, but there are two ways to make it faster:

1. Work on multiple bytes at a time, using multiple lookup tables (see http://create.stephan-brumme.com/crc32/#slicing-by-8-overview, and the sketch after this message).
2. Compute CRCs over separate regions in parallel, then combine the results (see http://stackoverflow.com/questions/23122312/crc-calculation-of-a-mostly-static-data-stream).

As it happens, zlib provides functions for both:
1. The zlib crc32 function uses the same polynomial as ObjectFileELF.cpp, and uses slicing-by-4 along with loop unrolling.
2. The zlib library provides crc32_combine.

I decided to just call out to the zlib library, since I see my version of lldb already links with zlib; however, the llvm CMakeLists.txt declares it optional.

I'm including my patch that assumes zlib is always linked in. Let me know if you prefer that:
1. I make the change conditional on having zlib (i.e. fall back to the old code if zlib is not present), or
2. I copy all the code from zlib and put it in ObjectFileELF.cpp. However, I'm going to guess that requires updating some documentation to include zlib's copyright notice.

This brings startup time on my machine / my binary from 50 seconds down to 32:

    time ~/llvm/build/bin/lldb -b -o 'b main' -o 'run' MY_PROGRAM
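(For reference, a sketch of what technique 1 looks like — slicing-by-4 over the same reflected 0xEDB88320 polynomial; zlib's real implementation adds loop unrolling on top of this. Assembling the 4-byte word explicitly keeps it endianness-safe:)

    // Sketch only: slicing-by-4 CRC-32 (reflected polynomial 0xEDB88320).
    // Processing four bytes per iteration through four tables breaks the
    // byte-at-a-time dependency chain that stalls the pipeline.
    #include <cstdint>
    #include <cstddef>

    static uint32_t table[4][256];

    static void init_tables() { // call once before crc32_slice4()
      for (uint32_t i = 0; i < 256; ++i) {
        uint32_t crc = i;
        for (int j = 0; j < 8; ++j)
          crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0u);
        table[0][i] = crc;
      }
      // Each further table advances the CRC by one extra byte of zeros.
      for (int t = 1; t < 4; ++t)
        for (uint32_t i = 0; i < 256; ++i)
          table[t][i] = (table[t - 1][i] >> 8) ^ table[0][table[t - 1][i] & 0xFF];
    }

    uint32_t crc32_slice4(const uint8_t *data, size_t len) {
      uint32_t crc = 0xFFFFFFFFu;
      while (len >= 4) {
        crc ^= (uint32_t)data[0] | ((uint32_t)data[1] << 8) |
               ((uint32_t)data[2] << 16) | ((uint32_t)data[3] << 24);
        crc = table[3][crc & 0xFF] ^ table[2][(crc >> 8) & 0xFF] ^
              table[1][(crc >> 16) & 0xFF] ^ table[0][crc >> 24];
        data += 4;
        len -= 4;
      }
      while (len--) // leftover tail, byte at a time
        crc = (crc >> 8) ^ table[0][(crc ^ *data++) & 0xFF];
      return ~crc;
    }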