[Lldb-commits] [PATCH] D32167: Add support for type units (.debug_types) to LLDB in a way that is compatible with DWARF 5
Greg Clayton via Phabricator via lldb-commits
lldb-commits at lists.llvm.org
Tue Feb 27 07:11:59 PST 2018
clayborg added a comment.
In https://reviews.llvm.org/D32167#1020032, @labath wrote:
> In https://reviews.llvm.org/D32167#1019702, @clayborg wrote:
>
> > I am afraid of the opposite: we test what we think we need to test, and our simple test cases don't adequately test the feature we are adding. I can certainly limit the testing to very simple test cases, with one test for a class and one for an enum, but that wouldn't have caught the issues I ran into with static variables in classes, which I fixed before submitting this. My point is that it is hard to figure out what a compiler or debugger might hose up without testing all possible cases. Please chime in with opinions, as I will go with the flow on this.
>
>
> I don't think anyone here is suggesting to test less. The question we are asking is whether running all 1000 or so tests doesn't just give a false sense of security (and make the testing process longer). A specially crafted test can trigger more corner cases and make it easier to debug future failures than a generic test. If the case of a static variable in a class is interesting, then maybe a test where you have two static variables from the same class defined in two different compilation units (and maybe referenced from a third one) is interesting as well. I'm pretty sure we don't have a test like that right now. Another interesting case which would not be tested in the "separate flavour" mode is the mixed-flags case: have part of your module compiled with type units, part without (and maybe a third part with type units and fission).
>
> Running the entire dotest test suite in -fdebug-types-section is certainly a good way to catch problems, but it's not the way to write regression tests. FWIW, the way I plan to test the new accelerator tables when I get to the lldb part is to run dotest with the new flags locally during development, use the failures to identify interesting test vectors, and then write regular tests to trigger these.
For the accelerator tables you could always just manually index the DWARF and compare the results to the new accelerator tables as a unit test. No further testing needed?
https://reviews.llvm.org/D32167