[lldb-dev] Test suite itself creating failures

Filipe Cabecinhas filcab+lldb-dev at gmail.com
Wed Sep 25 12:46:06 PDT 2013


This problem has come up several times in the past.
It's usually caused by tests that change settings but don't restore
them afterwards. Figuring out exactly what went wrong is hard and
time-consuming. A typical failure looks like this: test A relies on
option X having its default value, test B changes option X, and test A
is run twice (once for x86_64 and once for i386) with test B running
between those two runs, so the second run of A fails.

Every time I debugged one of those failures, the bug turned out to be
in the test suite itself (missing or incorrect cleanup). But, as Jim
said, these runs may also uncover genuine lldb bugs.

Since there's the possibility of uncovering actual lldb bugs, I would
prefer to keep the suite as it is (one debugger shared by all tests)
and fix the failing tests. Making it easier to reset lldb's options
when changing targets would be a good thing too, but some of the
test-suite problems come from tests that set formatting options and
never reset them to the defaults. Those can only be fixed on a
test-by-test basis.

  Filipe


On Wed, Sep 25, 2013 at 8:54 AM,  <jingham at apple.com> wrote:
> It should be fine to run multiple debuggers either serially or in parallel and have them not interfere with each other.  Most GUIs that use lldb will do this, so it is worthwhile to test that.  Multiple targets in the same debugger should also be relatively independent of one another, as that is another mode which is pretty useful.  The testsuite currently stresses those features of lldb, which seems to me a good thing.
>
> Note that the testsuite mostly shares a common debugger (lldb.DBG, made in dotest.py and copied over to self.dbg in setUp in lldbtest.py).  You could try not creating the singleton debugger in dotest.py, and allow setUp to make a new one.  If there's state in lldb that persists across debuggers and causes testsuite failures, that is a bug we should fix, not something we should work around in the testsuite.
>
> What kinds of things are the tests leaving around that are causing tests to fail?  It seems to me we should make it easier to clean up the state when a target goes away, rather than just cut the Gordian knot and make them all run independently.
>
> Jim
>
>
> On Sep 24, 2013, at 7:00 PM, Richard Mitton <richard at codersnotes.com> wrote:
>
>> Hi all,
>>
>> So I was looking into why TestInferiorAssert was (still) failing on my machine, and it turned out the root cause was in fact that tests are not run in isolation; dotest.py runs multiple tests using the same LLDB context for each one. So if a test doesn't clean up after itself properly, it can cause following tests to incorrectly fail.
>>
>> Is this really a good idea? Wouldn't it make more sense to make it so tests are always run individually, to guarantee consistent results?
>>
>> --
>> Richard Mitton
>> richard at codersnotes.com
>>
>> _______________________________________________
>> lldb-dev mailing list
>> lldb-dev at cs.uiuc.edu
>> http://lists.cs.uiuc.edu/mailman/listinfo/lldb-dev
>
