[lldb-dev] Is anyone using the gtest Xcode project?

jingham at apple.com
Fri Mar 13 08:53:47 PDT 2015


Thanks for the reply!  

Yeah, the lldb tests are sort of expect-like, except that since they mostly use the Python APIs to drive lldb, they are really little mini-programs that drive the debugger to certain points, run some checks, then go a little further and run some more checks, and so on.  The checks generally call some Python API and test that the result is what you intended (e.g. get the name of frame 0, get the value of Foo, call an expression and check the return type...).
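To give a flavor, here's roughly the shape of one of those mini-programs using the SB API (just a sketch typed into this mail, not an actual test from the suite; the binary "a.out" and the variable "foo" are made up):

    import os
    import lldb

    debugger = lldb.SBDebugger.Create()
    debugger.SetAsync(False)

    # Drive the debugger to an interesting point...
    target = debugger.CreateTarget("a.out")
    bp = target.BreakpointCreateByName("main", "a.out")
    process = target.LaunchSimple(None, None, os.getcwd())

    # ...then call some Python APIs and look at the results.
    frame = process.GetSelectedThread().GetFrameAtIndex(0)
    print(frame.GetFunctionName())                            # name of frame 0
    print(frame.FindVariable("foo").GetValue())               # value of foo
    print(frame.EvaluateExpression("foo + 1").GetTypeName())  # type of an expression result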

Having the test harness framework in Python is a great fit for this, because each time you stop and want to do a check, you are calling the test harness's "check that this passed and fail if not" functionality.  I'm not entirely sure how you'd do that if you didn't have access to the test harness functionality inside the test files.  And in general, since the API we are testing is all available to Python and we make extensive use of that in writing the validation checks, using a Python test harness just seems to me the obviously correct design.  I'm not convinced we would get any benefit from trying to Frankenstein this into some other test harness.
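Sketching the same thing with the harness checks interleaved (plain unittest-style asserts here; the real suite layers its own helpers on top of this idea, and the expected values below are invented):

    import os
    import unittest
    import lldb

    class StepAndCheck(unittest.TestCase):
        def test_stop_at_main_and_step(self):
            debugger = lldb.SBDebugger.Create()
            debugger.SetAsync(False)

            target = debugger.CreateTarget("a.out")          # made-up binary
            self.assertTrue(target.IsValid(), "couldn't create the target")

            bp = target.BreakpointCreateByName("main", "a.out")
            self.assertTrue(bp.GetNumLocations() > 0, "breakpoint didn't resolve")

            # Stage 1: get to main, then check something.
            process = target.LaunchSimple(None, None, os.getcwd())
            self.assertTrue(process.IsValid(), "launch failed")
            thread = process.GetSelectedThread()
            self.assertEqual(thread.GetFrameAtIndex(0).GetFunctionName(), "main")

            # Stage 2: only worth doing if stage 1 succeeded; step and check again.
            thread.StepOver()
            frame = thread.GetFrameAtIndex(0)
            self.assertEqual(frame.FindVariable("foo").GetValueAsSigned(), 1)  # invented expectation

    if __name__ == "__main__":
        unittest.main()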

Jim


> On Mar 12, 2015, at 11:21 PM, Justin Bogner <mail at justinbogner.com> wrote:
> 
> jingham at apple.com writes:
>>> On Mar 12, 2015, at 5:06 PM, Zachary Turner <zturner at google.com> wrote:
>>> 
>>> Wasn't really trying to get into an extended discussion about this,
>>> but FWIW I definitely realize that lldb's tests are more complicated
>>> than what lit currently supports.  But that's why I said "even if it
>>> meant extending lit".  It was mostly just a general comment about
>>> how it's nice if everyone is focused on making one thing better
>>> instead of everyone having different things.
>> 
>> Depending on how different the different things are.  Compiler tests
>> tend to have input, output and some machine that converts the input to
>> the output.  That is one very particular model of testing.  Debugger
>> tests need to do: get to stage 1, if that succeeded, get to stage 2,
>> if that succeeded, etc.  Plus there's generally substantial setup code
>> to get somewhere interesting, so while you are there you generally try
>> to test a bunch of similar things.  Plus, the tests often have points
>> where there are several success cases, but each one requires a
>> different "next action", stepping being the prime example of this.
>> These are very different models and I don't see that trying to smush
>> the two together would be a fruitful exercise.
> 
> lit's pretty flexible. It's certainly well suited to the "input file,
> shell script, output expectations" model, but I've seen it used in a
> number of other contexts as well. jroelofs described another use in his
> reply, and I've also personally used lit to run arbitrary python
> programs and FileCheck their output. That said, I don't know a ton about
> lldb's test infrastructure or needs.
> 
> I'd expect lldb tests that can't be focused into a unit test to be
> somewhat "expect"-like (pardon the pun), where there's something to be
> debugged and a kind of conversation of input and expected output. Is
> this a reasonable assumption?
> 
> I don't know of any current lit tests that work like this, but it would
> be pretty simple for lit to act as a driver for a program that tested
> things in an interactive way. I'd imagine the interactivity itself being
> a separate utility, much like we delegate looking at output to FileCheck
> or clang -verify in the llvm and clang test suites.
> 
> Anyways, I don't necessarily know if it's a good fit, but if we can make
> lit suit your needs I think it'd be a nice gain in terms of bots being
> easy to set up to show errors consistently and for random folks who are
> familiar with LLVM but not necessarily LLDB to be able to quickly figure
> out and fix issues they might cause.




