[lldb-dev] TestRecursiveInferior.py

Thirumurthi, Ashok ashok.thirumurthi at intel.com
Wed Sep 25 14:28:11 PDT 2013


Hi Greg,



I got around to looking at pr15415 today, and noticed that TestRecursiveInferior.py is a case where r132021<http://lists.cs.uiuc.edu/pipermail/lldb-commits/Week-of-Mon-20110523/002916.html> is an issue.  Specifically, this code handles the case where two consecutive frames have the same pc.  It correctly distinguishes recursion from an infinite unwind loop by testing for unique canonical frame addresses.  However, it then goes on to assume that the unwind should stop if GetFP() returns a null frame pointer, and recursive_inferior/Makefile can legitimately produce a null frame pointer by building with -fomit-frame-pointer.
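To make that concrete, here is a minimal sketch in Python rather than the actual C++ in UnwindLLDB; it assumes the SB API accessors SBFrame.GetPC(), GetFP(), and GetCFA(), and is only meant to illustrate the checks described above:

    import lldb

    def classify_frame_pair(frame, next_older_frame):
        # Illustrative only: mirrors the heuristic discussed above.
        if frame.GetPC() == next_older_frame.GetPC():
            # Same pc in two consecutive frames: compare canonical frame
            # addresses.  Genuine recursion pushes a new frame per call,
            # so the CFAs differ; identical CFAs mean the unwind loops.
            if frame.GetCFA() == next_older_frame.GetCFA():
                return "bad unwind (looping)"
            return "recursion"
        # The check in question: fp == 0 is treated as end-of-stack, but
        # code built with -fomit-frame-pointer (as recursive_inferior's
        # Makefile does) can legitimately have a null frame pointer.
        if next_older_frame.GetFP() == 0:
            return "r132021 stops the unwind here"
        return "keep unwinding"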



The attached patch fixes the test by removing the code block that I can't explain: the commit message for r132021 doesn't say why the logic in UnwindLLDB changed, the commit adds no test cases, and I see no local regressions with the patch applied.  Do you figure that a stronger test is required to distinguish a good unwind from a bad one?  If so, what would be required to create a test case for the existing code?



Note also that the overloads of StackUsesFrames() in trunk always return true.  Cheers,



-   Ashok



-----Original Message-----
From: lldb-dev-bounces at cs.uiuc.edu [mailto:lldb-dev-bounces at cs.uiuc.edu] On Behalf Of Filipe Cabecinhas
Sent: Wednesday, September 25, 2013 3:46 PM
To: jingham at apple.com
Cc: lldb-dev at cs.uiuc.edu
Subject: Re: [lldb-dev] Test suite itself creating failures



This problem has happened several times in the past.

It's usually related to tests that change preferences but don't change them back at the end.  Figuring out exactly what is going wrong is hard and time-consuming.  I've seen problems like: test A relies on option X, test B changes option X, and test A is run twice (once for x86_64 and once for i386) with test B running between those two passes.



When I debugged those, every failure turned out to be a bug in the test suite (missing or wrong cleanup).  But, like Jim said, these runs may also uncover genuine lldb bugs.



Since there's a possibility of uncovering actual lldb bugs, I would prefer to keep the suite as it is (one debugger shared by all tests) and fix the failing tests.  Making it easier to reset some of lldb's options when changing targets would be a good thing too, but some of the problems arise when a test sets formatting options and never resets them to the defaults; those can only be fixed on a test-by-test basis.
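As a sketch of what that test-by-test fix looks like, using the lldbtest helpers that existed at the time (runCmd and addTearDownHook); the class name, directory, and setting are illustrative, not taken from any real test:

    from lldbtest import TestBase  # 2013-era import style of the test suite

    class SettingCleanupTestCase(TestBase):

        mydir = "functionalities/example"  # illustrative test directory

        def test_with_modified_setting(self):
            # Register the cleanup before changing the setting, so the
            # default is restored even if the test fails, and before the
            # second architecture pass (x86_64, then i386) runs.
            self.addTearDownHook(
                lambda: self.runCmd("settings clear auto-confirm"))
            self.runCmd("settings set auto-confirm true")
            # ... assertions that rely on the modified setting ...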



  Filipe





On Wed, Sep 25, 2013 at 8:54 AM, <jingham at apple.com> wrote:

> It should be fine to run multiple debuggers either serially or in parallel and have them not interfere with each other.  Most GUIs that use lldb will do this, so it is worthwhile to test.  Multiple targets in the same debugger should also be relatively independent of one another, as that is another mode which is pretty useful.  The testsuite currently stresses those features of lldb, which seems to me a good thing.

>

> Note that the testsuite mostly shares a common debugger (lldb.DBG, made in dotest.py and copied over to self.dbg in the setUp of lldbtest.py).  You could try not creating the singleton debugger in dotest.py, and allow setUp to make a new one.  If there's state in lldb that persists across debuggers and causes testsuite failures, that is a bug we should fix, not something we should work around in the testsuite.
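Concretely, that experiment might look like the sketch below; SBDebugger.Create and SBDebugger.Destroy are real SB API calls, while the class and the rest of the wiring are only illustrative:

    import lldb
    from lldbtest import TestBase  # 2013-era import style

    class IsolatedDebuggerTestBase(TestBase):
        # Illustrative variant: a fresh debugger per test, instead of
        # copying the shared lldb.DBG singleton made in dotest.py.

        def setUp(self):
            TestBase.setUp(self)
            self.dbg = lldb.SBDebugger.Create()  # per-test instance

        def tearDown(self):
            # Destroy the per-test debugger so no state leaks across tests.
            lldb.SBDebugger.Destroy(self.dbg)
            TestBase.tearDown(self)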

>

> What kinds of things are the tests leaving around that are causing tests to fail?  It seems to me we should make it easier to clean up the state when a target goes away, rather than just cut the Gordian knot and make them all run independently.

>

> Jim

>

>

> On Sep 24, 2013, at 7:00 PM, Richard Mitton <richard at codersnotes.com> wrote:

>

>> Hi all,

>>

>> So I was looking into why TestInferiorAssert was (still) failing on my machine, and it turned out the root cause was in fact that tests are not run in isolation; dotest.py runs multiple tests using the same LLDB context for each one.  So if a test doesn't clean up after itself properly, it can cause subsequent tests to fail incorrectly.

>>

>> Is this really a good idea?  Wouldn't it make more sense to always run tests individually, to guarantee consistent results?

>>

>> --

>> Richard Mitton

>> richard at codersnotes.com




_______________________________________________

lldb-dev mailing list

lldb-dev at cs.uiuc.edu

http://lists.cs.uiuc.edu/mailman/listinfo/lldb-dev
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pr15415.patch
Type: application/octet-stream
Size: 1645 bytes
Desc: pr15415.patch
URL: <http://lists.llvm.org/pipermail/lldb-dev/attachments/20130925/18f6087a/attachment.obj>

