[Lldb-commits] [lldb] Reapply [lldb][DWARF] Delay struct/class/union definition DIE searching when parsing declaration DIEs. (PR #92328)
Pavel Labath via lldb-commits
lldb-commits at lists.llvm.org
Mon May 20 02:51:41 PDT 2024
================
@@ -2306,6 +2345,11 @@ bool DWARFASTParserClang::CompleteTypeFromDWARF(const DWARFDIE &die,
if (!die)
return false;
+ ParsedDWARFTypeAttributes attrs(die);
----------------
labath wrote:
> > How exactly do we get here in that case?
>
> From [#90663 (comment)](https://github.com/llvm/llvm-project/pull/90663#issuecomment-2105194128), .debug_names somehow contains entries that are declarations. This causes `SymbolFileDWARF::FindDefinitionTypeForDWARFDeclContext` to return a type created from a declaration when searching for a definition.
I've re-read that. The debug_names situation is troubling, but the thing I don't understand now is why this is even specific to debug_names. After this patch, it should be fairly easy to trigger a situation where we're asked to complete a type, and we don't have the definition of that type. Why don't those cases lead to a crash? For example, what would happen if the name was simply not in the index?
If I ignored this entire discussion, I would say that this check makes perfect sense: since we're delaying the search for the definition, it can happen that we don't have the definition DIE at the point when we're asked to complete the type. So it seems perfectly reasonable to have this check _somewhere_. I just want to check whether this is the right place.
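For context, a minimal sketch of what such a guard could look like at the top of `CompleteTypeFromDWARF` (the `is_forward_declaration` field name comes from `ParsedDWARFTypeAttributes`; the exact bail-out behaviour here is an assumption, not necessarily the patch's final logic):

```cpp
// Sketch of the check being debated. The definition search was delayed, so
// the DIE we hold may still be only a declaration when completion is
// requested; in that case there is no definition to complete from.
ParsedDWARFTypeAttributes attrs(die);
if (attrs.is_forward_declaration)
  return false; // only a declaration DIE available; keep the type incomplete
```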
>
> A simple idea I have in mind is to change `GetForwardDeclCompilerTypeToDIE`'s value type to a pair `{DIERef, bool}`, where the bool indicates whether this is a definition or not. That way we know that info without extra attribute parsing. What do you think?
Given that this extra bool is going to cost us eight more bytes per type, this doesn't necessarily seem like a win. That approach would use more memory; the previous implementation would use more time. It's hard to say which one is better without some very precise measurements. Choosing blindly, I would stick with the current implementation, as it's easier to optimize CPU time than it is to reclaim that memory.

The real problem here is that `ParsedDWARFTypeAttributes` was optimized for the use case where one is going to use most of the attributes it has parsed (which is what happens in ParseTypeFromDWARF). Using it to essentially just check for the presence of DW_AT_declaration is wasteful, and you could write a much faster implementation if you were writing it for this use case specifically (though it's also not clear whether a brand-new implementation for this one use case is worthwhile).
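To make the memory argument concrete, here is a hedged sketch of why the bool costs a full eight bytes (the `DIERefStandIn` assumes the real `DIERef` packs into one 64-bit word, which is true today but worth re-checking):

```cpp
#include <cstdint>
#include <utility>

// Stand-in for lldb_private::DIERef, which packs the file index, section,
// and DIE offset into a single 64-bit word.
struct DIERefStandIn {
  uint64_t packed_bits;
};

// On typical 64-bit ABIs the bool cannot share that word: the pair's size
// is rounded up to the alignment of the 8-byte member, so each map entry
// grows from 8 to 16 bytes -- the "eight more bytes per type" above.
static_assert(sizeof(DIERefStandIn) == 8, "one 64-bit word");
static_assert(sizeof(std::pair<DIERefStandIn, bool>) == 16,
              "alignment pads the bool out to a full word");
```

And a sketch of the "much faster implementation" idea: if the only question is whether the DIE carries DW_AT_declaration, a single attribute lookup avoids decoding everything else `ParsedDWARFTypeAttributes` parses (the helper itself is hypothetical; `GetAttributeValueAsUnsigned` is the existing accessor on LLDB's DIE wrapper):

```cpp
#include "llvm/BinaryFormat/Dwarf.h"

// Hypothetical fast path: answer "is this DIE only a declaration?" by
// reading the one attribute we care about. DW_AT_declaration is typically
// encoded as DW_FORM_flag_present, so any nonzero value means "present".
static bool IsDeclarationOnly(const DWARFDIE &die) {
  return die.GetAttributeValueAsUnsigned(llvm::dwarf::DW_AT_declaration,
                                         /*fail_value=*/0) != 0;
}
```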
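Whether the saved attribute decoding outweighs maintaining a second, special-purpose code path is exactly the measurement question raised above; the sketches are only meant to pin down what each side of the trade-off would look like.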
https://github.com/llvm/llvm-project/pull/92328