[lldb-dev] Is anyone using the gtest Xcode project?

Jonathan Roelofs jonathan at codesourcery.com
Fri Mar 13 15:59:27 PDT 2015


+ddunbar

On 3/13/15 9:53 AM, jingham at apple.com wrote:
> Thanks for the reply!
>
> Yeah, the lldb tests are sort of expect-like, except that since they mostly use the Python APIs to drive lldb, they are really more like little mini-programs that drive the debugger to certain points, and then run some checks, then go a little further and run some more checks, etc.  Generally the checks call some Python API and test that the result is what you intended (e.g. get the name of frame 0, get the value of Foo, call an expression & check the return type...)
>
> Having the test harness framework in Python is a great fit for this, because each time you stop & want to do a check, you are calling the test harness "check that this test passed and fail if not" functionality.  I'm not entirely sure how you'd do that if you didn't have access to the test harness functionality inside the test files.  And in general, since the API that we are testing is all available to Python and we make extensive use of that in writing the validation checks, using a Python test harness just seems to me the obviously correct design.  I'm not convinced we would get any benefit from trying to Frankenstein this into some other test harness.
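
(For anyone reading along who hasn't seen one of these, here is a 
rough sketch of the shape Jim describes, written against the lldb 
Python API but with a made-up binary and made-up checks -- it assumes 
lldb's Python bindings are on sys.path:)

    import os
    import unittest
    import lldb  # lldb's Python bindings

    class MiniDebugSession(unittest.TestCase):
        def test_stop_at_main(self):
            dbg = lldb.SBDebugger.Create()
            dbg.SetAsync(False)
            # Stage 1: make a target; bail out here if that fails.
            target = dbg.CreateTarget('a.out')
            self.assertTrue(target.IsValid())
            # Stage 2: run to a breakpoint.
            target.BreakpointCreateByName('main')
            process = target.LaunchSimple(None, None, os.getcwd())
            self.assertEqual(process.GetState(), lldb.eStateStopped)
            # The actual checks: query the API, compare to expectations.
            frame = process.GetThreadAtIndex(0).GetFrameAtIndex(0)
            self.assertEqual(frame.GetFunctionName(), 'main')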

The great thing is that LIT is a Python library. The "Frankensteining" 
that it would remove is any lldb infrastructure that you've got in place 
for deciding things like: which tests to run, whether to run them in 
parallel, collecting & marking XFAILs, filtering out tests that are not 
supported on the particular platform, etc.
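
For instance, the platform-filtering piece is a few lines of Python in 
a lit.local.cfg. A minimal sketch (the feature name and suffix here 
are illustrative):

    # lit.local.cfg -- applies to every test under this directory.
    # Only pick up files with this extension as tests:
    config.suffixes = ['.test']
    # Skip the whole directory on platforms where it can't work:
    if 'windows' in config.available_features:
        config.unsupported = True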

I don't think it helps much in terms of actually performing these 
"expect"-like tests, but it doesn't get in the way there either. Sounds 
like LLDB already has infrastructure for doing that, and LIT would not 
eliminate that part. From what I've heard about lldb, LIT still sounds 
like a good fit here.

>
> Jim
>
>
>> On Mar 12, 2015, at 11:21 PM, Justin Bogner <mail at justinbogner.com> wrote:
>>
>> jingham at apple.com writes:
>>>> On Mar 12, 2015, at 5:06 PM, Zachary Turner <zturner at google.com> wrote:
>>>>
>>>> Wasn't really trying to get into an extended discussion about this,
>>>> but FWIW I definitely realize that lldb's tests are more complicated
>>>> than what lit currently supports.  But that's why I said "even if it
>>>> meant extending lit".  It was mostly just a general comment about
>>>> how it's nice if everyone is focused on making one thing better
>>>> instead of everyone having different things.
>>>
>>> Depending on how different the different things are.  Compiler tests
>>> tend to have input, output and some machine that converts the input to
>>> the output.  That is one very particular model of testing.  Debugger
>>> tests need to do: get to stage 1, if that succeeded, get to stage 2,
>>> if that succeeded, etc.  Plus there's generally substantial setup code
>>> to get somewhere interesting, so while you are there you generally try
>>> to test a bunch of similar things.  Plus, the tests often have points
>>> where there are several success cases, but each one requires a
>>> different "next action", stepping being the prime example of this.
>>> These are very different models and I don't see that trying to smush
>>> the two together would be a fruitful exercise.
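
(To make that last point concrete: the stepping case might look 
roughly like this in a test, sketched against the lldb Python API 
with made-up function names:)

    # Several acceptable stops, each demanding a different next action.
    thread = process.GetSelectedThread()
    for _ in range(100):            # step budget so a bad run can't hang
        thread.StepInto()
        name = thread.GetFrameAtIndex(0).GetFunctionName()
        if name == 'interesting_callee':
            break                   # success case 1: run the checks here
        elif name == 'main':
            continue                # success case 2: not there yet, step on
        else:
            raise AssertionError('unexpected stop in %r' % name)
    else:
        raise AssertionError('never reached interesting_callee')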

I think LIT does make the assumption that one "test file" has one "test 
result". But this is a place where we could extend LIT a bit. I don't 
think it would be very painful.
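
A custom test format is just a Python class that lit instantiates, so 
this is roughly where the extension would go. A hedged sketch -- lit's 
internals shift over time, and run_all_checks is a made-up stub 
standing in for "run the file's sub-checks":

    import lit.Test
    import lit.formats

    def run_all_checks(path):
        # Purely illustrative stand-in; returns {check_name: passed}.
        return {'demangle-basic': True, 'printf-%a': False}

    class MultiCheckTest(lit.formats.TestFormat):
        def execute(self, test, litConfig):
            # Today lit wants one result per test file, so the sub-check
            # outcomes get aggregated down to a single pass/fail; growing
            # this into per-check results is the extension discussed above.
            outcomes = run_all_checks(test.getSourcePath())
            if all(outcomes.values()):
                return lit.Test.PASS, ''
            failed = [n for n, ok in outcomes.items() if not ok]
            return lit.Test.FAIL, 'failed sub-checks: %s' % ', '.join(failed)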

For me, this would be very useful for a few of the big libc++abi tests, 
like the demangler one, as currently I have to #ifdef out a couple of 
the cases that can't possibly work on my platform. It would be much 
nicer if that particular test file emitted multiple test results, of 
which I could XFAIL the ones I know won't ever work. (For anyone who is 
curious, the one that comes to mind needs the C99 %a printf format, 
which my libc doesn't have. It's a bare-metal target, and binary size 
is really important.)

How much actual benefit is there in having lots of results per test 
case, rather than having them all &&'d together into one result?

Out of curiosity, does lldb's existing testsuite allow you to run an 
individual check within a test case that produces more than one test 
result?

>>
>> lit's pretty flexible. It's certainly well suited to the "input file,
>> shell script, output expectations" model, but I've seen it used in a
>> number of other contexts as well. jroelofs described another use in his
>> reply, and I've also personally used lit to run arbitrary python
>> programs and FileCheck their output. That said, I don't know a ton about
>> lldb's test infrastructure or needs.
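
(For reference, the "arbitrary python program + FileCheck" pattern 
Justin mentions can be as small as this -- assuming the suite defines 
a %python substitution:)

    # RUN: %python %s | FileCheck %s
    # A trivial program whose stdout FileCheck then verifies.
    print("frame #0: main")
    # CHECK: frame #0: main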
>>
>> I'd expect lldb tests that can't be focused into a unit test to be
>> somewhat "expect"-like (pardon the pun), where there's something to be
>> debugged and a kind of conversation of input and expected output. Is
>> this a reasonable assumption?
>>
>> I don't know of any current lit tests that work like this, but it would

Yeah, I don't know of any either. Maybe @ddunbar does?

>> be pretty simple for lit to act as a driver for a program that tested
>> things in an interactive way. I'd imagine the interactivity itself being
>> a separate utility, much like we delegate looking at output to FileCheck
>> or clang -verify in the llvm and clang test suites.
>>
>> Anyways, I don't necessarily know if it's a good fit, but if we can make
>> lit suit your needs I think it'd be a nice gain in terms of bots being
>> easy to set up to show errors consistently and for random folks who are
>> familiar with LLVM but not necessarily LLDB to be able to quickly figure
>> out and fix issues they might cause.
+1


Cheers,

Jon

-- 
Jon Roelofs
jonathan at codesourcery.com
CodeSourcery / Mentor Embedded


