<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jun 4, 2015 at 4:35 PM, Robinson, Paul <span dir="ltr"><<a href="mailto:Paul_Robinson@playstation.sony.com" target="_blank">Paul_Robinson@playstation.sony.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">After far too long being distracted by other things, I'm getting back to<br>
this.<br>
<br>
To recap, today we have a handful of feature flags in the DwarfDebug<br>
class which control various bits of DWARF; these feature flags can mostly<br>
be set by command-line flags or front-ends, but otherwise tend to have<br>
target-based defaults.<br>
<br>
The proposal is to introduce a "debugger tuning" option, which will allow<br>
us to fine-tune DWARF output based on what various debuggers prefer to<br>
see, or conversely leave out things they don't care about. This replaces<br>
the current tactic of assuming debugger features are tied to the target,<br>
which is at best incomplete and at worst wrong. By making the expected<br>
debugger explicit, we separate the concerns about target versus debugger,<br>
and clarify the intent of decisions made based on one or the other factor.<br>
<br>
The main debuggers of interest to the LLVM community are:<br>
- GDB, the old warhorse<br>
- LLDB, the new kid on the block<br>
- SCE, for lack of a better name for Sony's debugger<br>
(SCE = Sony Computer Entertainment; this debugger supports various<br>
game consoles so we're unwilling to call it the PS4 debugger, and<br>
it doesn't really have a name per se.)<br>
<br>
Given a particular "debugger tuning" we can set the feature flags based<br>
more obviously and clearly on the toolchain component that most cares<br>
about the features, i.e. the debugger. Code to actually emit DWARF in<br>
one way or another would still rely on the feature flags.<br>
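To make the unpacking concrete, here is a rough sketch of the idea. The enum values, struct fields, and per-debugger settings below are illustrative placeholders, not the actual patch or LLVM's real API:

```cpp
// Hypothetical tuning values and feature flags; names are illustrative
// only, not LLVM's actual API.
enum class DebuggerTuning { Default, GDB, LLDB, SCE };

struct DwarfFeatures {
  bool EmitPubSections = false; // .debug_pubnames / .debug_pubtypes
  bool UseLinkageNames = false; // linkage names on declarations
};

// Unpack a tuning choice into concrete feature-flag defaults.  The
// specific per-debugger settings here are made up for illustration;
// DWARF emission code would consult only the resulting flags.
DwarfFeatures featuresFor(DebuggerTuning T) {
  DwarfFeatures F;
  switch (T) {
  case DebuggerTuning::GDB:
    F.EmitPubSections = true;  // e.g. GDB consumes the pub sections
    F.UseLinkageNames = true;
    break;
  case DebuggerTuning::LLDB:
    F.EmitPubSections = false; // e.g. LLDB uses its own accelerator tables
    F.UseLinkageNames = true;
    break;
  case DebuggerTuning::SCE:
    F.EmitPubSections = false;
    F.UseLinkageNames = false;
    break;
  case DebuggerTuning::Default:
    F.EmitPubSections = true;  // "no special tuning": emit the optional
    F.UseLinkageNames = true;  // but standardized stuff
    break;
  }
  return F;
}
```

The point of the indirection is that emission code never asks "are we tuning for GDB?"; it asks "should I emit pub sections?", and the tuning only influences the defaults.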
<br>
While there is a patch up for review (<a href="http://reviews.llvm.org/D8506" target="_blank">http://reviews.llvm.org/D8506</a>),<br>
some of the questions raised there might be better discussed on the list,<br>
which is what I'm doing now.<br>
<br>
Q1. Will the debugger tuning override individual feature flags?<br>
<br>
A1. No. Tuning for a debugger will "unpack" into defaults for the various<br>
feature flags, but individual command-line options for those feature flags<br>
will still have the last word.<br>
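In other words, the precedence rule is roughly the following (a sketch only; the function name and types are made up for illustration):

```cpp
#include <optional>

// Hypothetical precedence logic: an explicit command-line setting, when
// present, always wins over the default unpacked from the debugger tuning.
bool resolveFlag(std::optional<bool> CommandLineValue, bool TuningDefault) {
  return CommandLineValue.value_or(TuningDefault);
}
```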
<br>
Q2. You based the DWARF TLS opcode choice on debugger tuning, but that is<br>
a GDB bug rather than a real "tuning" thing.<br>
<br>
A2. True. Basing it on tuning seems to me less bad than leaving it based<br>
on target, which is what we have now. I'm okay with reverting that bit;<br>
I don't want "tuning" to get misconstrued as "it's just a way to work<br>
around debugger bugs" because it's not.<br>
<br>
Q3. If I want my target to have a particular debugger tuning by default,<br>
how do I make that happen?<br>
<br>
A3. To date, this has been done with explicit triple-based tests in the<br>
DwarfDebug class constructor. If there's a smoother way to make this a<br>
target-based parameter, which DwarfDebug can query, that works for me.<br>
<br>
The debugger-to-use seems more closely tied to OS than to the target<br>
architecture, and LLVM's existing Target-based stuff is primarily aligned<br>
with architecture, making it a less natural fit. Suggestions welcome, as<br>
I find it hard to keep straight all the various ways that target-specific<br>
things get specified.<br>
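Whatever mechanism we land on, the decision itself is something like the following OS-keyed mapping, which mirrors today's effective behavior (GDB everywhere except Darwin and PS4). This is illustrative only; the OS names and returned strings are placeholders, not a real LLVM interface:

```cpp
#include <string>

// Illustrative sketch: choose a default debugger from the triple's OS
// component rather than its architecture.  Names are placeholders.
std::string defaultDebuggerForOS(const std::string &OS) {
  if (OS == "darwin" || OS == "macosx" || OS == "ios")
    return "lldb";
  if (OS == "ps4")
    return "sce";
  return "gdb"; // everywhere else, today's effective default
}
```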
<br>
Q4. Should LLVM default to "no special tuning" rather than a particular<br>
debugger?<br>
<br>
A4. That does feel like an attractive idea; however, it would be a<br>
noticeable change in responsibility from what we have today (which is<br>
basically GDB everywhere except Darwin and PS4, as a decision made by<br>
LLVM). If LLVM doesn't set a default, then it would be up to each<br>
front-end to explicitly set the default, and that is not a responsibility<br>
they're really expecting to have.<br>
<br>
I did experiment with "no special tuning" as the default, and once I<br>
realized that meant "emit optional but standardized stuff" (i.e., the<br>
pubnames/pubtypes sections), only one test failed; it had been expecting<br>
the triple to determine whether those sections appeared.<br>
<br>
Of course that's only the tests; it says nothing about the actual user<br>
experience. It's not like I tried running the GDB suite in a "no special<br>
tuning" mode.<br>
<br>
The whole "how should the tuning default work?" question seems like it<br>
needs a broader discussion. There really seem to be three options.<br>
<br>
- In the DwarfDebug constructor. This is where it is (in effect) now, as<br>
a series of triple-based checks. My initial patch left it there, more<br>
explicitly, but still as a series of triple-based checks; basically the<br>
patch made it possible to override, but the defaulting worked the same way<br>
as before.<br>
<br>
- In some Target class. The problem with this (as noted above) is that<br>
Target is well-aligned with architecture, and poorly aligned with OS,<br>
while the choice of default debugger is basically an OS kind of thing.<br>
I confess when it comes to Target stuff, I quickly get lost in the twisty<br>
maze of little passages, all different; so any suggestions for (a) a way<br>
to make this OS-based decision, (b) in a form that DwarfDebug can query,<br>
would be very welcome.<br>
<br>
- In each front-end/tool (not LLVM). Again as noted above, this is not a<br>
decision that front-ends/tools are making today, and adding this to their<br>
list of responsibilities might be inappropriate.<br>
<br>
Opinions about the appropriate way to do the defaulting?<br></blockquote><div><br>If it's already in DwarfDebug, I'd probably leave it there. (necessary to maintain backwards compatibility anyway)<br><br>- David</div></div></div></div>