[lldb-dev] [RFC] Testsuite in lldb & possible future directions

Adrian Prantl via lldb-dev lldb-dev at lists.llvm.org
Wed Feb 7 09:57:10 PST 2018



> On Feb 6, 2018, at 9:29 AM, Davide Italiano via lldb-dev <lldb-dev at lists.llvm.org> wrote:
> 
> On Tue, Feb 6, 2018 at 8:18 AM, Pavel Labath <labath at google.com> wrote:
>> On 6 February 2018 at 15:41, Davide Italiano <dccitaliano at gmail.com> wrote:
>>> On Tue, Feb 6, 2018 at 7:09 AM, Pavel Labath <labath at google.com> wrote:
>>>> On 6 February 2018 at 04:11, Davide Italiano via lldb-dev
>>>> 
>>>> So, I guess my question is: are you guys looking into making sure that
>>>> others are also able to reproduce the 0-fail+0-xpass state? I would
>>>> love to run the mac test suite locally, as I tend to touch a lot of
>>>> stuff that impacts all targets, but as it stands now, I have very
>>>> little confidence that the tests I am running reflect in any way the
>>>> results you will get when you run the tests on your end.
>>>> 
>>>> I am ready to supply any test logs or information you need if you want
>>>> to try to tackle this.
>>>> 
>>> 
>>> Yes, I'm definitely interested in making the testsuite
>>> work reliably in any configuration.
>>> I was afraid there were a lot of latent issues; that's why I sent this
>>> mail in the first place.
>>> It's also the reason why I started thinking about `lldb-test` as a
>>> driver for testing: I found the testsuite to be a little
>>> inconsistent/brittle depending on the environment it's run in (which,
>>> FWIW, doesn't happen when you run lit/FileCheck or even the unit tests
>>> in lldb). I'm not currently claiming that switching to a different method
>>> would improve the situation, but it's worth a shot.
>>> 
>> 
>> Despite Zachary's claims, I do not believe this is caused by the test
>> driver (dotest). It's definitely not beautiful, but I haven't seen an
>> issue caused by it in a long time. The real issue is that the tests
>> are doing too much -- even the simplest one involves compiling a
>> fully working executable, which pulls in a lot of stuff from the
>> environment (runtime libraries, dynamic linker, ...) that we have no
>> control over. And of course it makes it impossible to test the debugging
>> functionality of any platform other than the one you currently have in
>> front of you.
>> 
>> In this sense, the current setup makes an excellent integration test
>> suite -- if you run the tests and they pass, you can be fairly
>> confident that the debugging on your system is setup correctly.
>> However, it makes a very bad regression test suite, as the tests will
>> be checking something different on each machine.
>> 
> 
> Yes, I wasn't complaining about "dotest" in general, but, as you say, about
> the fact that it pulls in lots of stuff we don't really have control over.
> Also, most of the time I've actually found that we've been sloppy about
> watching bots, or have XFAILed tests instead of fixing them, and that
> resulted in issues piling up. This is a more general problem, not
> necessarily tied to `dotest` as a driver.
> 
>> So I believe we need more lightweight tests, and lldb-test can provide
>> us with that. The main question for me (and that's something I don't
> 
> +1.
> 
>> really have an answer to) is how to make writing tests like that easy.
>> E.g. for these "foreign" language plugins, the only way to make a
>> self-contained regression test would be to check-in some dwarf which
>> mimics what the compiler in question would produce. But doing that is
>> extremely tedious as we don't have any tooling for that. Since debug
>> info is very central to what we do, having something like that would
>> go a long way towards improving the testing situation, and it would be
>> useful for C/C++ as well, as we generally need to make sure that we
>> work with a wide range of compiler versions, not just accept what ToT
>> clang happens to produce.
>> 
> 
> I think the plan here (and I'd love to spend some time on this once we
> have stability, which it seems we're slowly getting) is to enhance
> `yaml2*` to do the work for us.
> I do agree it is a major undertaking, but even spending a month on it will
> go a long way IMHO. I will try to come up with a plan after discussing
> with folks on my team (I'd also really love to get input from the DWARF
> people in llvm, e.g. Eric or David Blake).

The last time I looked into yaml2obj was to use it for testing llvm-dwarfdump, and back then I concluded that it needs a lot of work to be useful even for testing dwarfdump. In its current state it is both too low-level (e.g., you need to manually specify all Mach-O load commands, and you have to manually compute and specify the size of each debug info section) and too high-level (it can only auto-generate one exact version of the .debug_info headers) to be useful.
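To illustrate the first point, here is roughly what a minimal yaml2obj input for an ELF object carrying DWARF looks like today (the field values and hex blobs below are made up for illustration, not real DWARF):

```yaml
--- !ELF
FileHeader:
  Class:   ELFCLASS64
  Data:    ELFDATA2LSB
  Type:    ET_REL
  Machine: EM_X86_64
Sections:
  - Name:    .debug_info
    Type:    SHT_PROGBITS
    # Raw bytes, hand-encoded and hand-sized; yaml2obj does not
    # understand the DWARF inside them.
    Content: "2E000000040000000000000801..."
  - Name:    .debug_abbrev
    Type:    SHT_PROGBITS
    Content: "011101250E1305030E10171B0E..."
```

Writing a test this way means encoding every DIE, abbreviation, and section length by hand, which is exactly the tedium that makes checked-in DWARF unattractive.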

If we could make a tool whose input roughly looks like the output of dwarfdump, then this might be a viable option. Note that I'm not talking about syntax but about the abstraction level of the contents.
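To make the abstraction level concrete, the input I have in mind would sit roughly where dwarfdump's output sits -- something like this made-up sketch, where offsets, abbreviations, and section sizes are computed by the tool rather than by the test author:

```
0x0000000b: DW_TAG_compile_unit
              DW_AT_producer ("hand-written DWARF")
              DW_AT_name     ("a.c")
              DW_AT_language (DW_LANG_C99)

0x0000001e:   DW_TAG_subprogram
                DW_AT_name     ("main")
                DW_AT_type     (0x00000040 "int")
                DW_AT_external (true)
```

Again, this is about the level of the contents, not the concrete syntax.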

In summary, I think this is an interesting direction to explore, but we shouldn't underestimate the amount of work necessary to make this useful.

-- adrian

> 
>> 
>> PS: I saw your second email as well. I'm going to try out what you
>> propose, most likely tomorrow.
> 
> Thanks!
> 
> --
> Davide


