[LLVMdev] [lldb-dev] [cfe-dev] What does "debugger tuning" mean?
Robinson, Paul
Paul_Robinson at playstation.sony.com
Thu Jun 4 16:35:39 PDT 2015
After far too long being distracted by other things, I'm getting back to
this.
To recap, today we have a handful of feature flags in the DwarfDebug
class which control various bits of DWARF; these feature flags can mostly
be set by command-line flags or front-ends, but otherwise tend to have
target-based defaults.
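As a concrete (and simplified) sketch, with illustrative member names
rather than the exact ones in DwarfDebug, today's arrangement amounts to
per-feature booleans whose defaults fall out of the target triple:

    // Illustrative sketch only; the real DwarfDebug members differ.
    #include "llvm/ADT/Triple.h"

    struct DwarfFeatures {
      bool UseDwarfAccelTables = false; // Apple accelerator tables
      bool UsePubSections = true;       // .debug_pubnames/.debug_pubtypes
    };

    // Defaults derived from the triple, conflating "what target is
    // this?" with "what debugger will read this?".
    DwarfFeatures defaultFeatures(const llvm::Triple &T) {
      DwarfFeatures F;
      if (T.isOSDarwin()) {         // really means "LLDB will read this"
        F.UseDwarfAccelTables = true;
        F.UsePubSections = false;
      }
      return F;
    }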
The proposal is to introduce a "debugger tuning" option, which will allow
us to fine-tune DWARF output based on what various debuggers prefer to
see, or conversely leave out things they don't care about. This replaces
the current tactic of assuming debugger features are tied to the target,
which is at best incomplete and at worst wrong. By making the expected
debugger explicit, we separate the concerns about target versus debugger,
and clarify the intent of decisions made based on one or the other factor.
The main debuggers of interest to the LLVM community are:
- GDB, the old warhorse
- LLDB, the new kid on the block
- SCE, for lack of a better name for Sony's debugger
(SCE = Sony Computer Entertainment; this debugger supports various
game consoles so we're unwilling to call it the PS4 debugger, and
it doesn't really have a name per se.)
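In code, the tuning choice might be spelled as a small enum, something
like this (names in the actual patch may differ):

    enum class DebuggerKind {
      Default, // no particular debugger in mind
      GDB,
      LLDB,
      SCE
    };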
Given a particular "debugger tuning," we can set the feature flags based
clearly and explicitly on the toolchain component that most cares about
them, i.e. the debugger. Code that actually emits DWARF one way or
another would still rely on the feature flags.
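Here's a hedged sketch of that "unpacking," reusing the DebuggerKind
enum above; which flags each tuning actually flips is per the patch, so
treat the specifics as illustrative:

    enum class DebuggerKind { Default, GDB, LLDB, SCE };

    struct DwarfFeatures {
      bool UseDwarfAccelTables = false; // Apple accelerator tables
      bool UsePubSections = true;       // pubnames/pubtypes
    };

    // Unpack the tuning into feature flags once, up front; the DWARF
    // emission code keeps testing only the flags, never the tuning.
    DwarfFeatures featuresForTuning(DebuggerKind Tuning) {
      DwarfFeatures F;
      // LLDB is the only consumer of the accelerator tables.
      F.UseDwarfAccelTables = (Tuning == DebuggerKind::LLDB);
      // LLDB and SCE don't read pubnames/pubtypes, so leave them out.
      if (Tuning == DebuggerKind::LLDB || Tuning == DebuggerKind::SCE)
        F.UsePubSections = false;
      return F;
    }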
While there is a patch up for review (http://reviews.llvm.org/D8506),
some of the questions raised there might be better discussed on the
list, which is what I'm doing now.
Q1. Will the debugger tuning override individual feature flags?
A1. No. Tuning for a debugger will "unpack" into defaults for the various
feature flags, but individual command-line options for those feature flags
will still have the last word.
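DwarfDebug already handles some of these as tri-state options, where
"Default" means "no explicit request on the command line," so the
precedence rule looks roughly like this (the option shown follows that
existing pattern; details are illustrative):

    #include "llvm/Support/CommandLine.h"
    using namespace llvm;

    enum DefaultOnOff { Default, Enable, Disable };

    static cl::opt<DefaultOnOff> DwarfAccelTables(
        "dwarf-accel-tables", cl::Hidden,
        cl::desc("Output prototype dwarf accelerator tables."),
        cl::values(clEnumVal(Default, "Default for platform"),
                   clEnumVal(Enable, "Enabled"),
                   clEnumVal(Disable, "Disabled"), clEnumValEnd),
        cl::init(Default));

    // The tuning supplies the default; an explicit option wins.
    bool useAccelTables(bool TuningDefault) {
      if (DwarfAccelTables == Default)
        return TuningDefault;
      return DwarfAccelTables == Enable;
    }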
Q2. You based the DWARF TLS opcode choice on debugger tuning, but that
choice works around a GDB bug rather than being a real "tuning" thing.
A2. True. Basing it on tuning seems to me less bad than leaving it based
on target, which is what we have now. I'm okay with reverting that bit;
I don't want "tuning" to get misconstrued as "it's just a way to work
around debugger bugs" because it's not.
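For the record, the choice in question is between the DWARF 3 standard
opcode and the GNU extension that older GDBs require; a sketch (how the
patch actually wires this through a feature flag may differ):

    #include "llvm/Support/Dwarf.h"
    #include <cstdint>

    enum class DebuggerKind { Default, GDB, LLDB, SCE };

    // DW_OP_form_tls_address is the standardized opcode; GDB
    // historically understood only the GNU extension, hence the
    // workaround.
    uint8_t tlsOpcodeFor(DebuggerKind Tuning) {
      return Tuning == DebuggerKind::GDB
                 ? llvm::dwarf::DW_OP_GNU_push_tls_address
                 : llvm::dwarf::DW_OP_form_tls_address;
    }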
Q3. If I want my target to have a particular debugger tuning by default,
how do I make that happen?
A3. To date, this has been done with explicit triple-based tests in the
DwarfDebug class constructor. If there's a smoother way to make this a
target-based parameter, which DwarfDebug can query, that works for me.
The debugger-to-use seems more closely tied to OS than to the target
architecture, and LLVM's existing Target-based stuff is primarily aligned
with architecture, making it a less natural fit. Suggestions welcome, as
I find it hard to keep straight all the various ways that target-specific
things get specified.
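For reference, the constructor's triple-based tests boil down to
something like this (simplified; it matches the "GDB everywhere except
Darwin and PS4" default described below):

    #include "llvm/ADT/Triple.h"
    using namespace llvm;

    enum class DebuggerKind { Default, GDB, LLDB, SCE };

    // Simplified version of the triple checks in the DwarfDebug
    // constructor.
    DebuggerKind tuningForTriple(const Triple &T) {
      if (T.isOSDarwin())
        return DebuggerKind::LLDB;
      if (T.getOS() == Triple::PS4)
        return DebuggerKind::SCE;
      return DebuggerKind::GDB;
    }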
Q4. Should LLVM default to "no special tuning" rather than a particular
debugger?
A4. That does feel like an attractive idea; however, it would be a
noticeable change in responsibility from what we have today (which is
basically GDB everywhere except Darwin and PS4, as a decision made by
LLVM). If LLVM doesn't set a default, then it would be up to each
front-end to explicitly set the default, and that is not a responsibility
they're really expecting to have.
I did experiment with "no special tuning" as the default; once I
realized that it meant "emit optional but standardized stuff," i.e. the
pubnames/pubtypes sections, only one test failed, and that test had been
expecting the triple to determine whether those sections appeared.
Of course that's only the tests; it says nothing about the actual user
experience. It's not like I tried running the GDB suite in a "no special
tuning" mode.
The whole "how should the tuning default work?" question seems like it
needs a broader discussion. There really seem to be three options.
- In the DwarfDebug constructor. This is where it is (in effect) now, as
a series of triple-based checks. My initial patch left it there, more
explicitly, but still as a series of triple-based checks; basically the
patch made it possible to override, but the defaulting worked the same way
as before.
- In some Target class. The problem with this (as noted above) is that
Target is well-aligned with architecture, and poorly aligned with OS,
while the choice of default debugger is basically an OS kind of thing.
I confess when it comes to Target stuff, I quickly get lost in the twisty
maze of little passages all different, so any suggestions here for (a) a
way to make this OS-based decision (b) in a way that DwarfDebug can query
would be very welcome; one hypothetical shape is sketched just after
this list.
- In each front-end/tool (not LLVM). Again as noted above, this is not a
decision that front-ends/tools are making today, and adding this to their
list of responsibilities might be inappropriate.
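To make the middle option less abstract, here is one purely hypothetical
shape (none of these names exist today): hang the tuning off
TargetOptions, default it from the triple's OS when the TargetMachine is
built, and let DwarfDebug read it through the AsmPrinter's TargetMachine:

    // Hypothetical sketch only; no such field exists today.
    enum class DebuggerKind { Default, GDB, LLDB, SCE };

    struct TargetOptions {
      // ...existing fields...
      DebuggerKind DebuggerTuning = DebuggerKind::Default;
    };

    // The DwarfDebug constructor could then do something like:
    //   DebuggerTuning = Asm->TM.Options.DebuggerTuning;
    // keeping the OS-based choice out of DwarfDebug itself.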
Opinions about the appropriate way to do the defaulting?
Thanks,
--paulr