[lldb-dev] Is anyone using the gtest Xcode project?
mail at justinbogner.com
Thu Mar 12 23:21:40 PDT 2015
jingham at apple.com writes:
>> On Mar 12, 2015, at 5:06 PM, Zachary Turner <zturner at google.com> wrote:
>> Wasn't really trying to get into an extended discussion about this,
>> but FWIW I definitely realize that lldb's tests are more complicated
>> than what lit currently supports. But that's why I said "even if it
>> meant extending lit". It was mostly just a general comment about
>> how it's nice if everyone is focused on making one thing better
>> instead of everyone having different things.
> Depending on how different the different things are. Compiler tests
> tend to have input, output and some machine that converts the input to
> the output. That is one very particular model of testing. Debugger
> tests need to do: get to stage 1, if that succeeded, get to stage 2,
> if that succeeded, etc. Plus there's generally substantial setup code
> to get somewhere interesting, so while you are there you generally try
> to test a bunch of similar things. Plus, the tests often have points
> where there are several success cases, but each one requires a
> different "next action", stepping being the prime example of this.
> These are very different models and I don't see that trying to smush
> the two together would be a fruitful exercise.
lit's pretty flexible. It's certainly well suited to the "input file,
shell script, output expectations" model, but I've seen it used in a
number of other contexts as well. jroelofs described another use in his
reply, and I've also personally used lit to run arbitrary python
programs and FileCheck their output. That said, I don't know a ton about
lldb's test infrastructure or needs.
I'd expect lldb tests that can't be focused into a unit test to be
somewhat "expect"-like (pardon the pun), where there's something to be
debugged and a kind of conversation of input and expected output. Is
this a reasonable assumption?
I don't know of any current lit tests that work like this, but it would
be pretty simple for lit to act as a driver for a program that tested
things in an interactive way. I'd imagine the interactivity itself being
a separate utility, much like we delegate looking at output to FileCheck
or clang -verify in the llvm and clang test suites.
Anyway, I don't know whether it's a good fit, but if we can make lit
suit your needs, I think it would be a real gain: bots would be easier
to set up and would report errors consistently, and folks who are
familiar with LLVM but not necessarily LLDB could quickly figure out
and fix issues they might cause.