> On Jun 21, 2018, at 7:58 AM, Zachary Turner <zturner@google.com> wrote:
>
> Performance I get. GDB is almost unusable for large programs because of how long it takes to load debug info.

Agreed. With our new DWARF5 tables, we will be even better. Or test on a Mac with dSYM files and you will get numbers similar to a DWARF5 scenario with .debug_names.

> Do you have specific numbers on memory usage? How much memory (absolute and %) is saved by loading debug info lazily on a relatively large project?

I ran the numbers many years back when LLDB was first trying to overtake GDB, and they were quite favorable at the time. You can easily run a few tests by doing the same thing in LLDB and in GDB, like:

- b main.cpp:123
- run
- bt
- frame variable

Then look at the memory numbers of each process.

On Thu, Jun 21, 2018 at 7:54 AM Greg Clayton <clayborg@gmail.com> wrote:

> On Jun 21, 2018, at 7:47 AM, Zachary Turner <zturner@google.com> wrote:
>
> Related question: Is the laziness done to save memory, startup time, or both?

Both. It allows us to fetch only what we need when we need it. Time to break at main.cpp:123 is much quicker. Using LLDB for symbolication is much quicker as well, since symbolication only needs to know about function definitions and function bounds. Many uses of LLDB are made better by partial parsing.

On Thu, Jun 21, 2018 at 7:36 AM Greg Clayton via Phabricator <reviews@reviews.llvm.org> wrote:

clayborg added a comment.

In https://reviews.llvm.org/D48393#1138989, @labath wrote:

> I am not sure this will actually solve the problems you are seeing. This may avoid corrupting the internal DenseMap data structures, but it will not make the algorithm that uses them actually correct.
> For example, the pattern in `ParseTypeFromDWARF` is:
>
> 1. check the "already parsed" map; if the DIE is already parsed, you're done.
> 2. if the map holds the magic "DIE_IS_BEING_PARSED" value for the DIE, abort (recursive DWARF references).
> 3. otherwise, insert the "DIE_IS_BEING_PARSED" value into the map.
> 4. do the parsing, which potentially involves recursive `ParseTypeFromDWARF` calls.
> 5. insert the parsed type into the map.
>
> What you do is make each of steps (1), (3), and (5) atomic individually. However, the algorithm is not correct unless the entire sequence is atomic. Otherwise, if two threads try to parse the same DIE (directly or indirectly), one of them could see the intermediate DIE_IS_BEING_PARSED value and incorrectly conclude that it has encountered recursive types.
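
To make sure we are talking about the same thing, here is that pattern as a simplified sketch. The names, the std::unordered_map standing in for the DenseMap, and the ParseActualType helper are all illustrative, not the actual SymbolFileDWARF code:

  // Simplified sketch of the current pattern (illustrative names; the real
  // code lives in SymbolFileDWARF and uses a DenseMap). A sentinel pointer
  // marks a DIE whose type is currently being parsed so that recursive
  // references back to it can be detected.
  #include <cstdint>
  #include <unordered_map>

  struct Type;              // stand-in for lldb_private::Type
  using DIERef = uint64_t;  // stand-in for a DIE identifier

  static Type *const DIE_IS_BEING_PARSED = reinterpret_cast<Type *>(1);

  static std::unordered_map<DIERef, Type *> g_die_to_type;

  Type *ParseActualType(DIERef die);  // does the real work; may recurse

  Type *ParseTypeFromDWARF(DIERef die) {
    auto pos = g_die_to_type.find(die);        // (1) already parsed?
    if (pos != g_die_to_type.end()) {
      if (pos->second == DIE_IS_BEING_PARSED)  // (2) recursion detected
        return nullptr;
      return pos->second;
    }
    g_die_to_type[die] = DIE_IS_BEING_PARSED;  // (3) mark as in progress
    Type *type = ParseActualType(die);         // (4) parse; may recurse
    g_die_to_type[die] = type;                 // (5) publish the result
    return type;
  }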

We need to make #1 atomic.
#2 would need to somehow know whether the type is already being parsed recursively by the current thread. If so, we do what we do now. If not, we need a way to wait on the completion of this type, so that the other parsing thread can finish it and put it into the map, at which point we grab the right value from the map.
So a step #6 would need to be added: after we put the type into the map, we notify any other threads that might be waiting. Roughly something like the sketch below.
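
This is only a sketch (same illustrative names as above, a plain std::mutex plus std::condition_variable, and it glosses over what should happen if two threads end up waiting on each other's mutually recursive types), but it shows the shape of steps #1 through #6:

  // Sketch of the scheme above: (1)+(3) become one atomic check-or-claim
  // under a mutex, (2) distinguishes "being parsed by this thread"
  // (recursion) from "being parsed by another thread" (wait), and a new
  // step (6) notifies waiters once the type has been published.
  #include <condition_variable>
  #include <cstdint>
  #include <mutex>
  #include <thread>
  #include <unordered_map>

  struct Type;              // stand-in for lldb_private::Type
  using DIERef = uint64_t;  // stand-in for a DIE identifier

  struct DIEEntry {
    Type *type = nullptr;    // valid once done is true
    std::thread::id parser;  // thread that claimed this DIE
    bool done = false;
  };

  static std::mutex g_map_mutex;
  static std::condition_variable g_map_cv;
  static std::unordered_map<DIERef, DIEEntry> g_die_to_type;

  Type *ParseActualType(DIERef die);  // may call back into ParseTypeFromDWARF

  Type *ParseTypeFromDWARF(DIERef die) {
    {
      std::unique_lock<std::mutex> lock(g_map_mutex);
      auto it = g_die_to_type.find(die);
      if (it == g_die_to_type.end()) {
        // (1)+(3): not parsed yet; claim the DIE for this thread.
        g_die_to_type[die] = {nullptr, std::this_thread::get_id(), false};
      } else if (!it->second.done) {
        if (it->second.parser == std::this_thread::get_id())
          return nullptr;  // (2) genuine recursion on this thread
        // (2') another thread is parsing this DIE: wait for it to finish.
        g_map_cv.wait(lock, [&] { return g_die_to_type[die].done; });
        return g_die_to_type[die].type;
      } else {
        return it->second.type;  // (1) already parsed
      }
    }  // drop the lock while parsing

    Type *type = ParseActualType(die);  // (4) parse; may recurse

    {
      std::lock_guard<std::mutex> lock(g_map_mutex);
      DIEEntry &entry = g_die_to_type[die];
      entry.type = type;  // (5) publish the result
      entry.done = true;
    }
    g_map_cv.notify_all();  // (6) notify waiters; each re-checks its own DIE
    return type;
  }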

> So, I think that locking at a higher level would be better. Doing that will certainly be tricky though...

https://reviews.llvm.org/D48393