[lldb-dev] Resolving dynamic type based on RTTI fails in case of type names inequality in DWARF and mangled symbols

Greg Clayton via lldb-dev lldb-dev at lists.llvm.org
Thu Dec 21 08:24:51 PST 2017


> On Dec 21, 2017, at 4:58 AM, Pavel Labath <labath at google.com> wrote:
> 
> On 21 December 2017 at 12:29, xgsa <xgsa at yandex.ru <mailto:xgsa at yandex.ru>> wrote:
>> 21.12.2017, 13:45, "Pavel Labath via lldb-dev" <lldb-dev at lists.llvm.org>:
>>> On 20 December 2017 at 18:40, Greg Clayton <clayborg at gmail.com> wrote:
>>>>> On Dec 20, 2017, at 3:33 AM, Pavel Labath <labath at google.com> wrote:
>>>>> 
>>>>> On 19 December 2017 at 17:39, Greg Clayton via lldb-dev
>>>>> <lldb-dev at lists.llvm.org> wrote:
>>>>>> The apple accelerator tables are only enabled for Darwin target, but there
>>>>>> is nothing to say we couldn't enable these for other targets in ELF files.
>>>>>> It would be a quick way to gauge the performance improvement that these
>>>>>> accelerator tables provide for linux.
>>>>> 
>>>>> I was actually experimenting with this last month. Unfortunately, I've
>>>>> learned that the situation is not as simple as flipping a switch in
>>>>> the compiler. In fact, there is no switch to flip as clang will
>>>>> already emit the apple tables if you pass -glldb. However, the
>>>>> resulting tables will be unusable due to the differences in how dwarf
>>>>> is linked on elf vs mach-o. In elf, we have the linker concatenate the
>>>>> debug info into the final executable/shared library, which it will
>>>>> also happily do for the .apple_*** sections.
>>>> 
>>>> That ruins the whole idea of the accelerator tables if they are concatenated...
>>> 
>>> I'm not sure I'm convinced by that. I mean, obviously it's better if
>>> you have just a single table to look up, but even if you have multiple
>>> tables, looking up into each one may be faster than indexing the full
>>> debug info yourself. Take liblldb for example. It has ~3000 compile
>>> units and nearly 2GB of debug info. I don't have any solid data on
>>> this (and it would certainly be interesting to make this experiment),
>>> but I expect that doing 3000 hash lookups (which are basically just
>>> array accesses) would be faster than indexing 2GB of dwarf (where you
>>> have to deal with variable-sized fields and uleb encodings...). And
>>> there is always the possibility to do the lookups in parallel or merge
>>> the individual tables inside the debugger.
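[Editor's note: a minimal sketch of the two costs being compared above. The hash is the DJB hash the Apple accelerator tables actually use; the ULEB128 decoder is simplified, with no overflow or bounds checking.]

```cpp
#include <cstdint>

// (1) An accelerator-table lookup: the Apple tables hash the name with
//     the DJB hash and use it to index a bucket array -- a handful of
//     array accesses, independent of how much debug info there is.
uint32_t djbHash(const char *s) {
  uint32_t h = 5381;
  while (*s)
    h = h * 33 + static_cast<uint8_t>(*s++);
  return h;
}

// (2) Raw DWARF indexing: attributes are encoded as variable-length
//     ULEB128 values, so they must be decoded sequentially, byte by
//     byte -- there is no way to jump straight to the entry you want.
uint64_t decodeULEB128(const uint8_t *&p) {
  uint64_t result = 0;
  unsigned shift = 0;
  uint8_t byte;
  do {
    byte = *p++;
    result |= uint64_t(byte & 0x7f) << shift;
    shift += 7;
  } while (byte & 0x80);
  return result;
}
```

Probing `djbHash(name) % bucket_count` costs the same whether the module has 3 or 3000 compile units; decoding the DWARF scales with its size, which is the asymmetry being argued here.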
>>> 
>>>>> The second, more subtle problem I see is that these tables are an
>>>>> all-or-nothing event. If we see an accelerator table, we assume it is
>>>>> an index of the entire module, but that's not likely to be the case,
>>>>> especially in the early days of this feature's uptake. You will have
>>>>> people feeding the linkers with output from different compilers, some
>>>>> of which will produce these tables, and some not. Then the users will
>>>>> be surprised that the debugger is ignoring some of their symbols.
>>>> 
>>>> I think it is best to auto-generate the tables directly from the DWARF after it has all been linked. Skip teaching the linker how to merge them; just teach it to generate them.
>>> 
>>> If the linker does the full generation, then how is that any better
>>> than doing the indexing in the debugger? Somebody still has to parse
>>> the entire dwarf, so it might as well be the debugger.
>> 
>> I suppose the difference is that the linker does it once, while the debugger has to do it on every startup, as the results are not saved anywhere (or are they?). So perhaps, instead of the compiler building accelerator tables for the debugger, the debugger should save its own indexes somewhere (e.g. in a cache file near the binary)? Or is there already such a mechanism and I just don't know about it?
> 
> Currently the indexes aren't saved, but that is exactly where I was
> going with this. We *could* save this index (we already cache
> downloaded remote object files in ~/.lldb/module_cache, we could just
> put this next to it) and reuse it for the subsequent debug sessions.

We could save this information to disk. I would suggest our Apple Accelerator table format! hahaha. Though in LLDB we keep modules loaded for the life of the debugger if they don't change, so this would only speed up the "debug, quit the IDE, re-debug the same executable" scenario.

We might need to introduce the notion of "compile, edit, debug" flows, where we concatenate the accelerator tables, and "archive" builds that will be archived on build servers, where we emit more complete info and merge the tables.

