[lldb-dev] unit testing C++ code in LLDB
jingham at apple.com
Mon Oct 6 12:03:53 PDT 2014
BTW, another way to do "gtest" sort of things in the lldb Python test suite is to make a Python module that you SWIG against the lldb_private APIs, one that is JUST for the internal test suite but pokes at the internal details of whatever objects you want to poke at. That way you can make APIs that poke at the internals of the system, but still have the convenience of running and analyzing the test results in the context of the larger Python test suite.
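A minimal sketch of the kind of test-only C++ surface such a SWIG module could wrap; lldb_private has no such header today, and every name below is invented for illustration:

// Hypothetical, for illustration only: a header that a test-only SWIG
// module could wrap, exposing internal state that the public SB API hides.
#include <string>

namespace lldb_private_test {

// Invented example of an internals-poking query such a module might expose:
// dump the last dematerialized expression result so a Python test can
// assert on its type, address, and byte contents.
struct MaterializerProbe {
  // Trivial stand-in body so the sketch is self-contained; real code would
  // reach into the Materializer's private state.
  std::string DumpLastDematerializedResult() const { return "(int) result = 42"; }
};

} // namespace lldb_private_test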
Jim
> On Oct 3, 2014, at 2:21 PM, Sean Callanan <scallanan at apple.com> wrote:
>
> Zach,
>
> that’s what I’m thinking. Then we can turn that checking on as part of specific expressions in the Python test suite.
> If there are simple, class-level tests I can run without any setup, though, I’ll try putting them into gtest.
>
> Sean
>
>> On Oct 3, 2014, at 12:34 PM, Zachary Turner <zturner at google.com> wrote:
>>
>> If that's the case then I'm leaning even more away from using gtest for this. gtest is just for producing a standalone executable that can be run in isolation to check that your classes behave the way you expect them to.
>>
>> How about just adding a setting to LLDB:
>>
>> setting set verify-expression-dematerialization true
>>
>> On Fri, Oct 3, 2014 at 11:33 AM, Sean Callanan <scallanan at apple.com> wrote:
>> The answer might be simply that what I’m thinking of isn’t so much a “unit test” as a fancier kind of assertion – one that requires a significant amount of extra computation.
>> Such an assertion might be enabled by a setting, and then run in situ whenever LLDB is in the right state.
>> E.g., when we happen to be dematerializing an expression result, run a bunch of extra tests to make sure the variable is in the state we expect it to be in.
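A minimal sketch of that pattern, assuming a hypothetical setting flag and hook point (neither is actual LLDB code): the check is a no-op unless the setting is on, so normal runs pay nothing.

#include <cassert>

// Hypothetical flag backing the proposed setting; not an actual LLDB
// settings object.
struct ExprSettings {
  bool verify_dematerialization = false;
};

// Called in situ at the point where an expression result is dematerialized.
// `result_is_consistent` stands in for the real, expensive checks
// (re-reading the variable's memory, comparing types and byte contents).
inline void MaybeVerifyDematerialization(const ExprSettings &settings,
                                         bool result_is_consistent) {
  if (!settings.verify_dematerialization)
    return;
  assert(result_is_consistent &&
         "dematerialized result does not match the materialized state");
}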
>>
>> Sean
>>
>>> On Oct 3, 2014, at 11:28 AM, Todd Fiala <tfiala at google.com> wrote:
>>>
>>> But we're not talking about only one or only the other.
>>>
>>> As much as possible, I'm going to use gtests only when I want to verify that a class does what I want, typically in isolation from everything else.
>>>
>>> If/when I need to deal with some real-world lldb class configuration doing something complex, I might be interested in the Python-setup, gtest-test-case side of things. I'm not entirely sure how we'd wire that all up, but that's something we can investigate.
>>>
>>> On Fri, Oct 3, 2014 at 11:26 AM, Todd Fiala <tfiala at google.com> wrote:
>>> > Why not just make it a python test?
>>>
>>> I think I see the usefulness of it. You really want to test a C++ class at a low level and make sure it's working right. But the state machine needed to feed it inputs and outputs is complex enough that it would take a lot of code to set that up right. And you want it to always reflect what lldb is doing, not some non-real-world static test environment where it can get out of sync with the real lldb code.
>>>
>>> -Todd
>>>
>>> On Fri, Oct 3, 2014 at 11:24 AM, Zachary Turner <zturner at google.com> wrote:
>>> I think it diminishes their usefulness if they're only available to people willing to run them a specific way. The python support on Windows isn't as rosy as it is on other platforms, and it's still very difficult to build LLDB with python support on Windows. I might be the only person doing it. I'm trying to improve it, but I don't see it being in the same place as it is on other platforms for a while.
>>>
>>> Even ignoring that though, I think if your test needs to do setup in python, it should just be a regular python test of the public API like everything else. Regardless, the functionality available to you from C++ is a superset of that available to you from python. You can even use the actual public API from C++, which is the same as what you'd be doing in python. If you actually need to piggyback off of lots of already-written python code, then I'm wondering why this particular test is better suited for a gtest. Why not just make it a python test?
>>>
>>> On Fri, Oct 3, 2014 at 11:01 AM, Sean Callanan <scallanan at apple.com> wrote:
>>> Zach,
>>>
>>> I can live with two entry points – one without the Python dependency, one accessible through Python. As you (and Greg, in the past) suggest, we can have a special public API for running unit tests – probably only in debug builds – and use that API from Python.
>>>
>>> I’m not sure that all internal unit tests should do their setup in C++. I think it makes the test more fragile – and wastes a lot of the machinery we already have – to write a bunch of process-control logic in C++ when what I actually want to test is something specific in an unrelated class. LLDB is pretty closely tied to Python – for the test cases I write for the expression parser, I think I’d be willing to mandate that Python be available rather than make setup more challenging.
>>>
>>> So that both use cases can coexist, we can just make sure that both the gtest runner and the SB API have the ability to run a subset of the unit tests; the gtest runner runs all those that don’t require external setup, and the SB API can select the tests that need to run with a specific initial setup.
>>>
>>> Is that something that gtest would support?
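gtest does support this: tests are selected by name with the --gtest_filter flag, or programmatically via ::testing::GTEST_FLAG(filter). Below is a sketch of a shared entry point (the wrapper function itself is hypothetical; the gtest calls inside it are real) that both the standalone runner and an SB hook could call with different filters.

#include "gtest/gtest.h"

// Hypothetical wrapper around the real gtest API.
int RunUnitTests(const char *filter) {
  int argc = 1;
  char arg0[] = "lldb-unittests";
  char *argv[] = {arg0, nullptr};
  ::testing::InitGoogleTest(&argc, argv);
  // e.g. "MaterializerTest.*" from the SB hook, or "*" from the gtest runner.
  ::testing::GTEST_FLAG(filter) = filter;
  return RUN_ALL_TESTS();
}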
>>>
>>> Sean
>>>
>>>> On Oct 3, 2014, at 10:37 AM, Zachary Turner <zturner at google.com> wrote:
>>>>
>>>> I don't think the unit tests should depend on the python tests. They should be self-contained. In other words, the unit tests must be useful to someone who is compiling without support for embedded python. I wouldn't want to have a unit test which is only useful if it's called from Python which has already done some initial setup. Still, if you want to avoid having another entry point for convenience, you could expose something from the public API that allows you to just say "run all the unittests". But there shouldn't be any setup in the python. All the setup necessary to run a given test should happen in C++.
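A minimal sketch of that "all setup in C++" shape, using only the public SB API plus gtest so it builds with no embedded Python; the test binary path is a placeholder:

#include "gtest/gtest.h"
#include "lldb/API/SBDebugger.h"
#include "lldb/API/SBTarget.h"

class DebuggerTest : public ::testing::Test {
protected:
  void SetUp() override {
    lldb::SBDebugger::Initialize();
    m_debugger = lldb::SBDebugger::Create();
  }
  void TearDown() override {
    lldb::SBDebugger::Destroy(m_debugger);
    lldb::SBDebugger::Terminate();
  }
  lldb::SBDebugger m_debugger;
};

TEST_F(DebuggerTest, CreatesTargetForTestBinary) {
  // "a.out" stands in for an inferior the test build would produce.
  lldb::SBTarget target = m_debugger.CreateTarget("a.out");
  EXPECT_TRUE(target.IsValid());
}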
>>>>
>>>> On Fri, Oct 3, 2014 at 10:23 AM, Sean Callanan <scallanan at apple.com> wrote:
>>>>
>>>> > On Oct 2, 2014, at 9:27 PM, Todd Fiala <tfiala at google.com> wrote:
>>>> >
>>>> > Hey Sean!
>>>> > …
>>>>
>>>> Thanks for the introduction! It looks like this is definitely in the direction of what I want.
>>>>
>>>> > If we want to do collaboration tests (integration tests, etc.), we're probably into the "should be in Python" category, but we might have a few low-level multi-class testing scenarios where we might want a different gtest/functional, gtest/integration or similar directory structure to handle those. It would be good to have a discussion around that if we find a valid use for it.
>>>>
>>>> One thing I would like to be able to do for the expression parser is unit test in the context of a stopped process.
>>>> I’m thinking of scenarios where I’d like to test e.g. the Materializer’s ability to read in variable data and make correct ValueObjects.
>>>>
>>>> One way to achieve this that comes to mind is to have a hook into the unit tests from the Python test suite, so we can set up the program’s state appropriately using our normal Python apparatus and then exercise exactly the functionality we want.
>>>>
>>>> Once we’ve got that kind of hook, we could just run all unit tests right from the Python test suite and avoid having another entry point.
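A hypothetical sketch of what such a hook could look like; nothing below exists in LLDB, and the entry-point name and the global are invented. The Python suite would stop a process at the interesting spot, then hand it to a filtered set of gtest cases.

#include "gtest/gtest.h"
#include "lldb/API/SBProcess.h"

// Stashed by the (invented) entry point so the gtest cases can see it.
static lldb::SBProcess g_provided_process;

// Entry point the Python suite would reach through the SWIG bindings;
// assumes InitGoogleTest has already been called in this process.
int RunProcessBackedUnitTests(lldb::SBProcess process) {
  g_provided_process = process;
  ::testing::GTEST_FLAG(filter) = "ProcessBacked*";
  return RUN_ALL_TESTS();
}

TEST(ProcessBackedMaterializerTest, SeesAStoppedProcess) {
  // The Python side owns the setup; here we only check what we were handed.
  ASSERT_TRUE(g_provided_process.IsValid());
  EXPECT_EQ(lldb::eStateStopped, g_provided_process.GetState());
}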
>>>>
>>>> If you want IDE-friendly output, you could have an IDE-level target that runs test/dotest.py but singles out the unit tests.
>>>>
>>>> What do you think?
>>>>
>>>> Sean
>>>>
>>>
>>> --
>>> Todd Fiala | Software Engineer | tfiala at google.com
>>>
>>> --
>>> Todd Fiala | Software Engineer | tfiala at google.com
>>>
>>
>
> _______________________________________________
> lldb-dev mailing list
> lldb-dev at cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/lldb-dev