[lldb-dev] Python object lifetimes affect the reliability of tests
Todd Fiala via lldb-dev
lldb-dev at lists.llvm.org
Thu Oct 15 09:31:39 PDT 2015
On Thu, Oct 15, 2015 at 9:23 AM, Zachary Turner via lldb-dev <
lldb-dev at lists.llvm.org> wrote:
> That wouldn't work in this case because it causes a failure from one test
> to the next. So a single test suite has 5 tests, and the second one fails
> because the first one didn't clean up correctly. You couldn't call
> ScriptInterpreterPython::Clear here, because then you'd have to initialize
> it again, and while that might work, it seems risky and like something
> that's untested and that we'd recommend against.
>
> What about calling `gc.collect()` in the tearDown() method?
>
If it's a laziness thing, that seems like it might do it. I would think we
could stick that in the base test class and get it everywhere. Is that
something you can try, Adrian?
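A minimal sketch of what that could look like (`LLDBTestBase` is a hypothetical name standing in for whatever our real base test class is):

```python
import gc
import unittest


class LLDBTestBase(unittest.TestCase):
    """Hypothetical base test class; sketch only."""

    def tearDown(self):
        # Force a collection pass so lazily-destroyed Python wrappers
        # release their underlying C++ SB objects (and thus any module
        # handles) before the next test case rebuilds the executable.
        gc.collect()
        super(LLDBTestBase, self).tearDown()
```

Since every test class would inherit from the base class, each case would get the collection pass for free, without per-test cleanup code.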
>
> On Thu, Oct 15, 2015 at 9:10 AM Oleksiy Vyalov via lldb-dev <
> lldb-dev at lists.llvm.org> wrote:
>
>> I stumbled upon a similar problem when I was looking into why SBDebugger
>> wasn't unloaded upon the app's exit.
>> The problem was Python global objects like lldb.debugger and lldb.target
>> sitting around.
>> So my suggestion is to try calling ScriptInterpreterPython::Clear within
>> the test's tearDown call - e.g., expose the Clear method as part of
>> SBCommandInterpreter and call it via SBDebugger::GetCommandInterpreter.
>>
>> On Thu, Oct 15, 2015 at 8:50 AM, Adrian McCarthy via lldb-dev <
>> lldb-dev at lists.llvm.org> wrote:
>>
>>> I've tracked down a source of flakiness in tests on Windows to Python
>>> object lifetimes and the SB interface, and I'm wondering how best to handle
>>> it.
>>>
>>> Consider this portion of a test from TestTargetAPI:
>>>
>>> def find_functions(self, exe_name):
>>>     """Exercise SBTarget.FindFunctions() API."""
>>>     exe = os.path.join(os.getcwd(), exe_name)
>>>
>>>     # Create a target by the debugger.
>>>     target = self.dbg.CreateTarget(exe)
>>>     self.assertTrue(target, VALID_TARGET)
>>>     list = target.FindFunctions('c', lldb.eFunctionNameTypeAuto)
>>>     self.assertTrue(list.GetSize() == 1)
>>>
>>>     for sc in list:
>>>         self.assertTrue(sc.GetModule().GetFileSpec().GetFilename() == exe_name)
>>>         self.assertTrue(sc.GetSymbol().GetName() == 'c')
>>>
>>> The local variables go out of scope when the function exits, but the SB
>>> (C++) objects they represent aren't (always) immediately destroyed. At
>>> least some of these objects keep references to the executable module in the
>>> shared module list, so when the test framework cleans up and calls
>>> `SBDebugger::DeleteTarget`, the module isn't orphaned, so LLDB maintains an
>>> open handle to the executable.
>>>
>>> The result of the lingering handle is that, when the next test case in
>>> the test suite tries to re-build the executable, it fails because the file
>>> is not writable. (This is problematic on Windows because the file system
>>> works differently in this regard than Unix derivatives.) Every subsequent
>>> case in the test suite fails.
>>>
>>> I managed to make the test work reliably by rewriting it like this:
>>>
>>> def find_functions(self, exe_name):
>>>     """Exercise SBTarget.FindFunctions() API."""
>>>     exe = os.path.join(os.getcwd(), exe_name)
>>>
>>>     # Create a target by the debugger.
>>>     target = self.dbg.CreateTarget(exe)
>>>     self.assertTrue(target, VALID_TARGET)
>>>
>>>     try:
>>>         list = target.FindFunctions('c', lldb.eFunctionNameTypeAuto)
>>>         self.assertTrue(list.GetSize() == 1)
>>>
>>>         for sc in list:
>>>             try:
>>>                 self.assertTrue(sc.GetModule().GetFileSpec().GetFilename() == exe_name)
>>>                 self.assertTrue(sc.GetSymbol().GetName() == 'c')
>>>             finally:
>>>                 del sc
>>>     finally:
>>>         del list
>>>
>>> The finally blocks ensure that the corresponding C++ objects are
>>> destroyed, even if the function exits as a result of a Python exception
>>> (e.g., if one of the assertion expressions is false and the code throws an
>>> exception). Since the objects are destroyed, the reference counts are back
>>> to where they should be, and the orphaned module is closed when the target
>>> is deleted.
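
What makes the `del` trick work is CPython's reference counting: dropping the last reference destroys the object immediately, even while an exception is unwinding. A standalone sketch of the mechanism (`Tracker` is just a hypothetical stand-in for an SB wrapper; no lldb needed):

```python
class Tracker:
    """Stand-in for an SB wrapper: records when the object is destroyed."""
    destroyed = []

    def __init__(self, name):
        self.name = name

    def __del__(self):
        Tracker.destroyed.append(self.name)


def use_and_release():
    obj = Tracker("module-ref")
    try:
        assert obj.name == "module-ref"
    finally:
        # Dropping the last reference triggers immediate destruction
        # under CPython's reference counting, even if the assert raised.
        del obj


use_and_release()
print(Tracker.destroyed)  # ['module-ref'] -- destroyed before the caller resumes
```

Without the `del`, an assertion failure would keep the frame (and its locals) alive through the exception's traceback, which is exactly how the SB objects outlive the test.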
>>>
>>> But this is ugly and maintaining it would be error prone. Is there a
>>> better way to address this?
>>>
>>> In general, it seems bad that our tests aren't well-isolated. I
>>> sympathize with the concern that re-starting LLDB for each test case would
>>> slow down testing, but I'm also concerned that the state of LLDB for any
>>> given test case can depend on what happened in the earlier cases.
>>>
>>> Adrian.
>>>
>>> _______________________________________________
>>> lldb-dev mailing list
>>> lldb-dev at lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>>>
>>
>>
>> --
>> Oleksiy Vyalov | Software Engineer | ovyalov at google.com
>>
>
>
>
--
-Todd