[PATCH] D55281: debuginfo: Use symbol difference for CU length to simplify assembly reading/editing

David Blaikie via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon Dec 17 16:31:38 PST 2018


dblaikie added a comment.

In D55281#1333620 <https://reviews.llvm.org/D55281#1333620>, @probinson wrote:

> In D55281#1333552 <https://reviews.llvm.org/D55281#1333552>, @dblaikie wrote:
>
> > > @aprantl @probinson - you folks have any feelings about this? Looks like these flags are essentially unused (except for testing, which I think could equally be accomplished by tests that specify an NVPTX triple - at least on builds that include the NVPTX backend) & are a collection of features that are all only needed on NVPTX. I'd lean towards removing them all & gating this on NVPTX. Or on one flag rather than several fine-grained ones, if/when someone starts playing with a ptx assembler that can cope with these features.
> >
> > But yeah, no big deal I guess - leads to a bit of confusion about who these flags are meant for, under what conditions some of these features might be enabled but not others, etc.
>
>
> In a way this reminds me of the discussions around how to do "debugger tuning" and there were some very strong voices in favor of unpacking the tuning setting into separate flags.  I believe that the DwarfDebug ctor is the only place that directly checks tuning, or that was the goal on the LLVM side anyhow.


I'm not sure that's the case - though I don't mind that as a concept/design. The debugger tuning is the input to DwarfDebug, and then it gets handled one way or another (maybe split out into specific boolean flags describing narrower features to be enabled/disabled - HasAppleExtensionAttributes, UseAllLinkageNames, UseGNUTLSOpcode, UseDwarf2Bitfields).

> I guess there are already feature flags for some of these ptxas quirks; the regular assembler doesn't care which way the flag is set, and ptxas depends on it being a certain way.  I think as long as the regular assembler is agnostic about the setting,

Kind of - at least for the useSectionsAsReferences flag, it looks like it'd only break if the user had data in any debug section (because this feature assumes that LLVM's the only one generating data into those sections, so it can compute how far from the start of the section it needs to go in this object file), which I guess is probably not super supported anyway?
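Illustratively (label and section names here are hypothetical, not copied from the actual output), the difference is roughly the following - the sections-as-references mode substitutes the section symbol itself for a temporary label, which only resolves to the right offset if LLVM is the sole contributor to that section:

```asm
	.section	.debug_info
	// Default: reference the abbrev table via a temporary label that the
	// assembler/linker resolves wherever LLVM's contribution actually lands:
	.long	.Lsection_abbrev
	// Sections-as-references: reference the section symbol directly,
	// assuming LLVM's data starts at offset 0 of the section:
	.long	.debug_abbrev
```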

> then having a feature flag instead of a triple check is the pattern we have been following.

I'm not sure it is, though - lowering the triple (or debugger tuning - since that can't be accurately known from the triple or other existing parameters) to boolean flags in DwarfDebug is common enough, but having a cl::opt to set it through other means seems less common (we have some, but not all of the parameters configured via debugger tuning are exposed as their own cl::opt/-mllvm flag). And this cluster seems sufficiently narrow that I wouldn't expect other users to crop up wanting one or two of these things but not the rest. (& some of these NVPTX ones are configured only via the NVPTX target being specified, while a couple are accessible via -mllvm too)

UseInlineStrings: NVPTX, or -mllvm -dwarf-inlined-strings
UseLocSection: !NVPTX only
DwarfVersion: NVPTX (version 2), or the DwarfVersion module metadata
UseRangeSection: !NVPTX, or -mllvm -no-dwarf-ranges-section
UseSectionsAsReferences: NVPTX, or -mllvm -dwarf-sections-as-references

Limited use of DW_AT_frame_base (depending on whether the frame register is a physical register), and not using DW_OP_call_frame_cfa - no cl::opt/-mllvm flag or bit in DwarfDebug; NVPTX is tested for directly here.

As for debugger tuning:
UseGNUTLSOpcode: GDB or DwarfVersion < 3
UseDwarf2Bitfields: GDB or DwarfVersion < 4
HasAppleExtensionAttributes: LLDB
UseAllLinkageNames: !SCE or -mllvm -dwarf-linkage-names
DwarfCompileUnit::hasDwarfPubSections(): GDB and some CU-level properties set the default

(& the tuning itself defaults based on target (darwin -> LLDB, PS4 -> SCE, otherwise GDB) if unspecified)

So, we don't have a super clear story by any means. Having the -mllvm/cl::opt flags means functionality can be tested without the target being built (which offers some broader test coverage, especially for developers who might remove targets from their build to improve developer velocity, etc), but they also increase the physical and mental surface area of LLVM, & I'd generally be inclined to remove them if they're only there for testing that can be achieved through target-specific tests instead. (Eventually we'd probably want to get rid of most/all of the cl::opts anyway - library-fication and all that - but that's a long way off, & we have other, more functional flags (ones that aren't, at best, only there for testing, but are mostly unused) that'll take more work to plumb through debug info metadata, module flags, and MCOptions before we get there.)

> And that's the case here, right? Turning a hard-coded constant into an expression that the assembler will compute. So I would lean in favor of a feature flag, defaulted based on the triple.
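For reference, the change this patch makes amounts to roughly the following in the emitted assembly (label names illustrative; note the DWARF unit length excludes the length field itself, so the start label comes after it):

```asm
	.section	.debug_info
.Lcu_begin0:
	// Before: a hard-coded unit length, which has to be recomputed by
	// hand whenever anyone edits the assembly:
	//   .long	125
	// After: a symbol difference the assembler computes itself:
	.long	.Ldebug_info_end0-.Ldebug_info_start0
.Ldebug_info_start0:
	.short	4			// DWARF version
	// ... rest of the unit ...
.Ldebug_info_end0:
```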




Repository:
  rL LLVM

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D55281/new/

https://reviews.llvm.org/D55281





More information about the llvm-commits mailing list