[Lldb-commits] [PATCH] D32167: Add support for type units (.debug_types) to LLDB in a way that is compatible with DWARF 5

Pavel Labath via lldb-commits lldb-commits at lists.llvm.org
Tue Feb 27 08:33:17 PST 2018


On 27 February 2018 at 07:11, Greg Clayton via Phabricator
<reviews at reviews.llvm.org> wrote:
> clayborg added a comment.
>
> In https://reviews.llvm.org/D32167#1020032, @labath wrote:
>
>> In https://reviews.llvm.org/D32167#1019702, @clayborg wrote:
>>
>> > I am afraid of the opposite: we test what we think we need to test, and our simple test cases don't adequately test the feature we are adding. I can certainly limit the testing to very simple test cases, with one test for a class and one for an enum, but that wouldn't have caught the issues I ran into with static variables in classes, which I fixed before submitting this. My point is that it is hard to figure out what a compiler or debugger might hose up without testing all possible cases. Please chime in with opinions, as I will go with the flow on this.
>>
>>
>> I don't think anyone here is suggesting that we test less. The question we are asking is whether running all 1000 or so tests doesn't just give a false sense of security (and make the testing process longer). A specially crafted test can trigger more corner cases and make it easier to debug future failures than a generic test. If the case of a static variable in a class is interesting, then maybe a test where you have two static variables from the same class defined in two different compilation units (and maybe referenced from a third one) is interesting as well. I'm pretty sure we don't have a test like that right now. Another interesting case, which would not be tested in the "separate flavour" mode, is the mixed-flags case: have part of your module compiled with type units, part without (and maybe a third part with type units and fission).
>>
>> Running the entire dotest test suite with -fdebug-types-section is certainly a good way to catch problems, but it's not the way to write regression tests. FWIW, the way I plan to test the new accelerator tables when I get to the lldb part is to run dotest with the new flags locally during development, use the failures to identify interesting test vectors, and then write regular tests to trigger them.
>
>
> For the accelerator tables, you could always just manually index the DWARF and compare the results to the new accelerator tables as a unit test. No further testing needed?

I intend to do something like that, but as a test for the generator side
of this. And there is still the question of where to get the inputs to
test (I'll probably try to do this comparison on the full LLVM
codebase, and then reduce any interesting cases I discover).
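
Roughly, the comparison I have in mind would look something like this
(the NameIndex type and the way the two indexes get built are made up
for the sake of illustration; the real thing would sit on top of
whatever the DWARF parsing code ends up providing):

  #include <cassert>
  #include <cstdint>
  #include <map>
  #include <set>
  #include <string>

  // Hypothetical: a name -> set-of-DIE-offsets mapping, built once by
  // walking every DIE by hand and once by reading the accelerator table.
  using NameIndex = std::map<std::string, std::set<uint64_t>>;

  void compareIndexes(const NameIndex &Manual, const NameIndex &Table) {
    // Every name found by the manual walk must be reachable through the
    // table, and both must point at the same DIEs.
    for (const auto &Entry : Manual) {
      auto It = Table.find(Entry.first);
      assert(It != Table.end() && "name missing from accelerator table");
      assert(It->second == Entry.second && "DIE offsets disagree");
    }
  }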

However, there will still be a nontrivial amount of code on the
consumer side (particularly once you start indexing type units :D, DWO
files, etc.) which should be tested separately. If the whole world
suddenly started using these tables, we could just say that the
existing end-to-end tests cover that, but they are not even going to
be the default mode for a long time, so some separate tests seem
appropriate (I don't know how many, as I haven't written that code
yet).
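
For reference, the two-statics-in-two-CUs case from the quoted
paragraph above could be reduced to a fixture along these lines (file
and symbol names are purely illustrative):

  // common.h -- one class with two static members.
  struct S {
    static int a;
    static int b;
  };

  // a.cpp -- first compilation unit defines one static member.
  #include "common.h"
  int S::a = 1;

  // b.cpp -- second compilation unit defines the other.
  #include "common.h"
  int S::b = 2;

  // main.cpp -- third compilation unit references both definitions.
  #include "common.h"
  int main() { return S::a + S::b; }

Building a.cpp with -fdebug-types-section and b.cpp without would also
give a minimal version of the mixed-flags scenario.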

